[Libwebsockets] problems with big dynamic content
per at bothner.com
Thu Jun 21 22:49:32 CEST 2018
On 06/21/2018 01:18 PM, Andy Green wrote:
> On http/1.1, the browser shoves multiple sets of headers down the same connection asynchronously. Reading header sets (if none of them are POST...) at the server is relatively simple. However for the results, the ability for the client to understand where one transaction's data ended and the next began relies entirely on there being a content-length **and it being exactly accurate**.
> LWS_ILLEGAL_HTTP_CONTENT_LEN causes lws to not provide any content-length, defeating keep-alive on that connection and discarding any pending additional transactions; the browser will retry them on a new connection... it's a loss of efficiency.
> Chained h1 transactions are tested on travis (8 x pipelined fetches) and for file serving from lws it's solid.
> So you should check carefully your computed content-length is the precise payload amount actually sent.
The computed length is accurate. I'm sending data from a resources.c file
containing compiled-in resources, each stored as an initialized string and length.
I use the length to drive how much to send.
However, I send the data as chunks of maximum 2000 bytes, similar to
the minimal-http-server-dynamic.c code. That might confuse lws's logic.
Do you have a test that uses the same logic as minimal-http-server-dynamic.c,
*and* sets an explicit content length *and* sends the response data
in multiple 2k chunks because it is over 100k in length? Possibly a relevant factor
is that there are 14 different resources, which are all requested and then requested
again when a new terminal window is opened.
This wouldn't be such a big deal if it only happened with compiled-in resources
(which aren't the default anyway, and where I can specify LWS_ILLEGAL_HTTP_CONTENT_LEN),
but it also happens when serving from a zip file.
per at bothner.com http://per.bothner.com/