[Libwebsockets] Malloc and http headers
andy at warmcat.com
Thu Dec 24 03:33:58 CET 2015
On 12/24/2015 10:22 AM, Bruce Perens wrote:
> It's going to break your current API, but received headers should only
> exist for the duration of one callback, and transmit headers and data
> should only exist for the duration of one callback.
Sorry, I don't get your point... what's the problem with HTTP headers
existing in lws between the time they are received and the point the
connection leaves HTTP mode?
Returned / transmitted headers for the handshake don't have any special
lifetime; they are already transient.
But the received HTTP headers -- we're just talking about HTTP, not ws
-- have a lot of potentially interesting things in them like cookies.
They need to stick around while the whole of the headers is acquired
(which may be arbitrarily fragmented network-wise), and may even be
fragmented in terms of the same header being given multiple times.
If we upgrade to ws protocol though, those headers are not needed any
more (the user gets the opportunity to copy out anything he is
interested in before they are deleted), so they are held in storage
separate from the wsi already.
That is already what happens. I am just talking about making an
allocation pool for rx HTTP header storage and restricting how many
connections can do HTTP header processing simultaneously, delaying
additional connections to keep under that ceiling. ATM there is no
limit on the connections other than the available fds, which means the
peak malloc allocation for header storage could be scary / unpredictable.
> On Wed, Dec 23, 2015 at 4:54 PM, Andy Green <andy at warmcat.com> wrote:
> Hi -
> While wandering about the code in various ways the last few weeks,
> doing quite restful cleaning activities, I noticed we are basically
> willing to allocate header storage for any number of connects at
> the moment. We free() it when the connection upgrades to ws or
> closes of course but the peak allocation in a connection storm is
> not really controlled.
> At the moment if we get a connection, it's enough to make us
> allocate the struct lws (~256 bytes) and allocated_headers (~2300 bytes).
> Actually for mbed3 where there's only 256KB RAM in the system,
> that's not so good... even for larger systems it's better if under
> stress it doesn't just spew mallocs but makes the connections wait
> beyond a certain point until the guys using the headers completed,
> timed out or upgraded to ws.
> The default header content limit of 1024 could then be increased, if
> we strictly controlled how many of them could be around at a time.
> About mallocs in general, ignoring one-time small allocs and the
> extensions, we have these lws_malloc + lws_zalloc:
> - client-parser.c: allocates and keeps a client per-connection
> buffer for ping payload (PONG must later repeat the payload
> according to the standard). Specified to be < 128 bytes.
> - client.c: the client per-connection user buffer
> - client-handshake.c: the client struct lws
> - getifaddrs.c: allocates the connection peer name temporarily
> - hpack.c (http2): allocates dynamic header dictionary as needed
> - libwebsockets.c: user space allocation
> - output.c: per-connection truncated send storage
> - parsers.c: the http header storage freed at ws upgrade (struct
> allocated_headers = 1024 header content + 164 x 8-byte frags + ~100
> = ~2300 bytes); server per-connection ping payload buffer (<128)
> - server.c: the server per-connection rx_user_buffer; the struct
> lws for new connections
> - service.c: rx flow control cache (the connection had a buffer of
> rx data, but rx flow control was set during processing... we need to
> cache the remainder and return to the event loop)
> How about the following:
> 1) Make new connection accepts flow-controllable (modulate POLLIN)
> 2) Have the max connection header content size settable by info,
> default to 2048.
> 3) Preallocate a pool of struct allocated_headers in the context,
> how many is set by info, default to say 8. (default to 16KB reserved
> for HTTP headers in the context... can be as low as 1 x 1024 set in
> info or as big as you like... but it will be finite now).
> 4) Switch to using the pool and flow control accepts if it runs
> dry... timeouts should stop this from becoming a DoS
> 5) Put the PONG / Close buffer as an unsigned char pingbuf in
> the struct _lws_websocket_related (part of the wsi union active when
> in ws mode) and eliminate the related malloc management code.
> struct lws will bloat to ~384 but PONG / Close buffer is part of ws
> standard and the related malloc is gone. If PONG is in use, it will
> be used on every connection. And every connection may receive a
> Close at some point, which also needs this buffer. So might as well
> bite the bullet.
> This shouldn't affect the ABI except wrt the info struct; everything
> else is private / internal changes. (A bit late now, but it might
> have been smart to pad the info struct with a spare array at the end
> that we reduce as we add new members.)
> Lws is used in a lot of different usecases, from very small to very
> large. I think this makes things better for everyone, but if it
> sounds like trouble or could be done better, discussion is welcome.
> Libwebsockets mailing list
> Libwebsockets at ml.libwebsockets.org