[Libwebsockets] Using websockets as a pure buffer manipulation engine.

Alan Conway aconway at redhat.com
Fri Nov 18 14:19:06 CET 2016


This sounds promising. I have an immediate websocket task that
hopefully I can solve with libwebsockets in its current form; then,
armed with some knowledge of libwebsockets, I will return to the more
ambitious goal. Thanks for the info; I will be bothering you again :)

On Fri, 2016-11-18 at 10:10 +0800, Andy Green wrote:
> On Thu, 2016-11-17 at 09:50 -0500, Alan Conway wrote:
> > 
> > On Thu, 2016-11-17 at 22:04 +0800, Andy Green wrote:
> > > >
> > > > I'm working on qpid.apache.org/proton, an AMQP messaging
> > > > library. It currently has a single-threaded, event-based
> > > > "reactor" API using sockets & poll. It also has a low-level
> > > > "protocol driver" that deals strictly with byte buffers in
> > > > memory - no IO assumptions.
> > > 
> > > I understand the kind of abstraction you mean, but I think it's
> > > not enough for a practical system without also considering flow
> > > control (equivalent to POLLIN/POLLOUT).
> > 
> > Ohhhh yes indeedy. If I had a dime for every distributed system
> > screwup I've fixed (or caused) due to ignoring flow control, I'd
> > have a lot of dimes :)
> 
> Great.
> > 
> > Here's how it works for proton: two variable-sized but bounded
> > buffers for read and write (they are actually variable-sized
> > windows on a fixed buffer). Normal operation is something like:
> > 
> > - non-blocking or async read of at most read-buffer-size bytes
> > - parse & handle events, which puts stuff in the write-buffer
> > - non-blocking or async partial write of the write-buffer till
> >   it's empty
> > 
> > EPOLLIN is wanted while read-buffer-size > 0 (room to accept bytes)
> > EPOLLOUT is wanted while write-buffer-size > 0 (bytes pending to flush)
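> > 
> > In rough C terms (the names here are made up, not the actual
> > proton API):
> > 
> >     #include <poll.h>
> >     #include <stddef.h>
> > 
> >     /* Hypothetical bounded driver windows. */
> >     struct driver_buffers {
> >         size_t read_capacity;  /* free space in the read window */
> >         size_t write_pending;  /* bytes waiting in the write window */
> >     };
> > 
> >     /* The poll events this connection currently wants. */
> >     static short wanted_events(const struct driver_buffers *b)
> >     {
> >         short ev = 0;
> > 
> >         if (b->read_capacity > 0)
> >             ev |= POLLIN;      /* room to accept more bytes */
> >         if (b->write_pending > 0)
> >             ev |= POLLOUT;     /* bytes waiting to be flushed */
> >         return ev;
> >     }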
> 
> Right... what's implied by that are elective events lws can receive
> on the connection, triggered by changes in the window state.
> 
> This brings a nice additional feature: it's possible to know how
> much can be written with confidence.  There's an exported api to
> expose that, already existing for the http/2 tx window:
> lws_get_peer_write_allowance().  So the user code can tune what it
> tries to write to match what may be written.
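> 
> A minimal sketch of using that from the writeable callback (the
> function and buffer names are hypothetical; assuming the
> lws_fileofs_t return type, where a negative value means "no limit
> known"):
> 
>     #include <libwebsockets.h>
> 
>     /* Called from LWS_CALLBACK_SERVER_WRITEABLE: clamp the chunk
>      * we try to send to what the peer can accept right now. */
>     static int flush_one_chunk(struct lws *wsi, unsigned char *buf,
>                                size_t pending_len)
>     {
>         lws_fileofs_t allow = lws_get_peer_write_allowance(wsi);
>         size_t chunk = pending_len;
> 
>         if (allow >= 0 && (size_t)allow < chunk)
>             chunk = (size_t)allow;
> 
>         if (!chunk)
>             return 0;
> 
>         /* buf must have LWS_PRE bytes of headroom before payload */
>         return lws_write(wsi, buf + LWS_PRE, chunk,
>                          LWS_WRITE_BINARY);
>     }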
> 
> > 
> > If the write buffer fills up, the read-buffer is disabled till
> > there is some write space.
> 
> Lws offers immediate quench of rx activity if you ask for rx to be
> flow-controlled... this is done both by removing POLLIN (or
> disabling the rx event if using libuv etc), and by internally making
> any buffered RX wait.
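> 
> That quench is lws_rx_flow_control(); a sketch of tying it to the
> write-side window from the scheme above (write_pending and
> WRITE_LIMIT are hypothetical):
> 
>     /* Stop rx while the write window is congested; lws parks any
>      * already-buffered rx until rx is re-enabled. */
>     if (write_pending >= WRITE_LIMIT)
>         lws_rx_flow_control(wsi, 0);   /* drop POLLIN / rx events */
>     else
>         lws_rx_flow_control(wsi, 1);   /* resume rx */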
> 
> Basically, over and above "just buffers", you must:
> 
>  - track connection lifecycle through lws (so it can do timeouts,
> manage its own wsi object for the connection etc).  This means you
> must create a logical "fd" to represent the connection, which lws
> allows to be typedef'd to an opaque struct.  lws will pass it back
> on read / write / close so you can act on the correct connection
> context etc.
> 
>  - allow lws to close connections
> 
>  - create a serialized event system around Proton buffers that can
> call into lws on rx, or when space appears in the tx buffer.  The
> event enables are controlled by lws using POLLIN / POLLOUT on the
> lws side, but you can translate that into whatever on the proton
> side.
> 
> Basically you must recreate your own "fd" and the basic event loop
> actions on the proton side.
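> 
> As a sketch, the logical "fd" and the event enables could look like
> this (all names hypothetical):
> 
>     /* The "fd" lws tracks is an opaque pointer; lws hands it back
>      * on read / write / close so you can recover your context. */
>     struct proton_conn;
>     typedef struct proton_conn *lws_sockfd_type;
> 
>     /* What lws's POLLIN / POLLOUT enables translate to on the
>      * proton side; the proton loop fires its events off these. */
>     struct proton_conn_events {
>         int want_rx;    /* lws asked for "POLLIN" */
>         int want_tx;    /* lws asked for "POLLOUT" */
>     };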
> 
> > 
> > There's definitely room for improvement on the buffering scheme
> > but it does handle the flow control problem pretty well.
> > 
> > > 
> > > > 
> > > > The FD notion also isn't very helpful for fast native Windows
> > > > IOCP - we ended up having to fake a poll()-like loop, which is
> > > > a terribly inefficient way to use IOCP.
> > > 
> > > Dunno what iocp is, but --->
> > 
> > Windows IO Completion Ports: a multi-threaded, proactive IO
> > framework.  Most apps deal with it by making it look like a
> > single-threaded poller (e.g. libuv), which gives acceptable
> > single-threaded performance but does not scale as well on
> > multi-core machines as proper multi-threaded use.
> > 
> > > 
> > > > 
> > > > So I'd love to see a byte-buffer-based websocket "driver" that
> > > > could be used independently of the libwebsockets loop and has
> > > > no assumptions
> > > 
> > > The 'flow control' type concerns are not theoretical... 
> > 
> > +1000, preaching to the choir. Beware unbounded data structures;
> > there are no computers with unbounded memory ;)
> > 
> > > 
> > > Sure, it should not be that hard since the recv() and send()
> > > bits are already isolated.  But you will need a flow control
> > > method even to duplicate the mirror demo reliably.
> > 
> > Yep. What do you think of the scheme described above? There are
> > other ways to skin that cat. We know proton's buffer scheme has
> > some limitations (control of allocation, may force copies from
> > externally-owned buffers) but it has the advantage of being
> > simple. If you want to come up with something more sophisticated,
> > I'd enjoy that discussion.
> 
> There are a few things above the buffering that need sorting out
> before it will work.  So probably just wiring up what you've got
> with the buffering until it works would be the first step.
> 
> LWS has a concept of platform-specific code, but that doesn't fit
> here since you might want this on different platforms.  The closest
> place to put it is the concept of the selectable event library (eg,
> default poll() or libuv); stuff like enable / disable POLLIN/OUT
> goes through that, so you can cleanly map it to whatever is firing
> the events.
> 
> So far I think the steps would look like this:
> 
> 1) define a new cmake define like LWS_PROTON to allow you to force
> lws_sockfd_type to be a pointer to the opaque per-connection struct
> in "proton".  Add a context creation time server option flag to
> select proton.c, the same way there is one for libuv, libev etc.
> (steps 1, 3 and 7 are sketched after the list)
> 
> 2) defeat listen sockets etc that are handled out of scope now
> 
> 3) call lws_adopt_socket_vhost(vhost, sockfd) when there is a new
> connection at proton (it is already exported).
> 
> 4) clone libuv.c into proton.c or so and select it from LWS_PROTON,
> fix up the api names, visit any preprocessor use of LWS_USE_LIBUV
> looking for what needs doing.
> 
> 5) Have proton.c translate POLLIN / POLLOUT enable / disable to calls
> into your proton code (libuv.c does this kind of translation already)
> 
> 6) Provide an exported entry point in proton.c for asynchronous
> events to arrive at (these must be serialized by the caller...).
> Again libuv.c already has this kind of code for the necessary
> functions since that is also how libuv works.
> 
> 7) Look in output.c at lws_ssl_capable_read_no_ssl() for the recv()
> overload and lws_ssl_capable_write_no_ssl() for the send() overload;
> divert these to proton.
> 
> 8) Adapt close() usage in libwebsockets.c to go via proton.c, and
> call into proton there to get it done.
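> 
> A rough sketch of how steps 1), 3) and 7) hang together (in a real
> build the typedef and the adopt api come from lws's headers; the
> proton_* names are made up):
> 
>     #include <string.h>
>     #include <stddef.h>
> 
>     /* Step 1: under LWS_PROTON, lws_sockfd_type becomes an opaque
>      * pointer to the proton side's per-connection struct. */
>     struct proton_conn;
>     typedef struct proton_conn *lws_sockfd_type;
> 
>     /* Already exported by lws; declared here only to make the
>      * sketch self-contained. */
>     struct lws_vhost;
>     extern struct lws *
>     lws_adopt_socket_vhost(struct lws_vhost *vh, lws_sockfd_type fd);
> 
>     /* Hypothetical proton-side connection state. */
>     struct proton_conn {
>         struct lws   *wsi;      /* lws's object, set on adoption */
>         unsigned char rx[4096]; /* bytes proton has received */
>         size_t        rxlen;    /* how many are pending for lws */
>     };
> 
>     /* Step 3: hand each new proton connection to lws so it creates
>      * and manages a wsi for this logical "fd". */
>     static void proton_new_connection(struct lws_vhost *vh,
>                                       struct proton_conn *pc)
>     {
>         pc->wsi = lws_adopt_socket_vhost(vh, pc);
>     }
> 
>     /* Step 7: the recv() overload diverted to proton drains the rx
>      * window instead of reading a kernel socket. */
>     static int proton_recv(struct proton_conn *pc, void *buf,
>                            size_t len)
>     {
>         size_t n = len < pc->rxlen ? len : pc->rxlen;
> 
>         memcpy(buf, pc->rx, n);
>         memmove(pc->rx, pc->rx + n, pc->rxlen - n);
>         pc->rxlen -= n;
>         return (int)n;          /* 0 == nothing pending right now */
>     }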
> 
> If there is some heavy earthmoving inside lws that will make this
> better or easier, I can consider helping with that.
> 
> -Andy
> 