[Libwebsockets] performance client vs server
justinbeech at gmail.com
Mon Nov 7 03:12:42 CET 2016
Using a modified fraggle.c (deflate removed, the message size increased
to batches of 32k, and the generation of random data and the checksums
removed), I see that when the client runs at 100% CPU the server is only
running at 10% CPU. (fraggle.c is arranged so that when a client connects,
the server sends it a stream of messages.)
A quick profile shows that nearly all the client CPU time is spent in
lws_client_rx_sm, which appears to be a byte-by-byte state machine for
receiving data.
It isn't totally clear to me why the server is 10x faster at sending
data than the client is at reading it. If the server sends a 32k block
of zeros as a binary message, at some point there is a payload length
followed by a 32k payload, so does each byte really have to be processed
individually on one side but not the other?