[Libwebsockets] lws_write speed

Roger Light roger at atchoo.org
Wed May 18 17:46:43 CEST 2016


Hi Andy,

Would you consider a new set of APIs to help with the case where the
message data already exists on the heap, and we don't want to allocate
new memory and copy the data, or have lws take a copy of the data
either?

Something like:

lws_start_message() - generate and send the LWS_PRE data, with that
data stored in the wsi so we can call lws_start_message() again if
necessary.
lws_write_message_data() - equivalent to write(), sends data without
copying, and modifies data in place with XOR, returns the amount of
data consumed
lws_end_message() - anything that needs to be done to finalise the
message. I don't know if this is needed or not.
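
The flow I have in mind might look something like this sketch. To be
clear, none of these functions exist in lws today; the struct and stub
bodies are purely illustrative (the XOR stands in for the client-side
masking lws would really do, and LWS_PRE's value depends on the build):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LWS_PRE 16              /* illustrative; real value comes from lws */

/* illustrative stand-in for struct lws */
struct fake_wsi {
    unsigned char pre[LWS_PRE]; /* header scratch kept in the wsi */
    uint8_t mask[4];            /* ws client masking key */
    size_t sent;                /* payload bytes consumed so far */
};

/* lws_start_message(): build and send the LWS_PRE header, keeping the
 * state in the wsi so a fresh header can be emitted for each tranche. */
static int lws_start_message(struct fake_wsi *wsi, size_t len)
{
    (void)len;
    memset(wsi->pre, 0, sizeof wsi->pre); /* stand-in for real framing */
    wsi->sent = 0;
    return 0;
}

/* lws_write_message_data(): mask the caller's buffer in place (as ws
 * requires for client frames) and report how much was consumed.  A
 * real version would return however much write() actually took. */
static size_t lws_write_message_data(struct fake_wsi *wsi,
                                     unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= wsi->mask[(wsi->sent + i) & 3];
    wsi->sent += len;
    return len;
}
```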

Cheers,

Roger



On Wed, May 18, 2016 at 12:55 PM, Andy Green <andy at warmcat.com> wrote:
>
>
> On 05/18/2016 07:46 PM, Denis Osvald wrote:
>>
>> On 05/18/2016 01:38 PM, Andy Green wrote:
>>>
>>>
>>>
>>> On 05/18/2016 07:08 PM, Denis Osvald wrote:
>>>>
>>>> Hi all,
>>>>
>>>> On 05/16/2016 09:33 PM, Pierre Laplante wrote:
>>>>>
>>>>> In the example of the server, you are using malloc to copy the data
>>>>> before using lws_write as in:
>>>>>
>>>>> ...
>>>>> Is there another way to do this ?
>>>>
>>>>
>>>> Andy, have you looked at the writev() syscall, and whether it's
>>>> usable in this scenario?
>>>>
>>>> It accepts multiple buffers, so lws could maybe use the first buffer
>>>> for the framing/padding, so the user wouldn't need padding space in
>>>> their own buffer. Both would then be sent at once in one writev() call.
>>>>
>>>> Maybe it's worth exploring?
>>>
>>>
>>> I think it doesn't help with the basic problem, which is that the kernel
>>> must restrict how much memory it sets aside for buffering a specific
>>> networking connection.
>>>
>>> Actually writev() semantics seem no different from write()'s, in the
>>> sense that it informs you afterwards how much it actually took
>>>
>>> RETURN VALUE
>>>         On success, readv() and preadv() return the number of bytes read;
>>>         writev() and pwritev() return the number of bytes written.  On
>>>         error, -1 is returned, and errno is set appropriately.
>>>
>>> The basic problem is that the kernel may conclude it only wants to take
>>> a little of your data into kernelside buffers based on global memory
>>> situation and the conditions on the specific connection.
>>>
>>> If I understood it, writev() won't magically solve that; it's more
>>> about matching kernelside scatter-gather semantics when there is no
>>> special restriction on buffering from the driver (ie, storage).
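
For illustration, the writev() mechanics under discussion look roughly
like this; send_two_buffers() and the idea of a separate framing-header
buffer are hypothetical here, not lws code:

```c
#include <sys/uio.h>
#include <unistd.h>

/* writev() hands the kernel two buffers (say, a framing header plus
 * the payload) in one syscall, but its return value still reports how
 * many bytes the kernel actually accepted across both -- it can be
 * short, exactly like write(). */
static ssize_t send_two_buffers(int fd, void *hdr, size_t hdr_len,
                                void *payload, size_t payload_len)
{
    struct iovec iov[2] = {
        { .iov_base = hdr,     .iov_len = hdr_len     },
        { .iov_base = payload, .iov_len = payload_len },
    };

    /* Caller must check the return value: a partial write may stop
     * anywhere, including inside the first buffer. */
    return writev(fd, iov, 2);
}
```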
>>
>>
>> Well, you're right that it wouldn't resolve any of the performance or
>> buffering issues.
>>
>> The only thing I see writev() helping with is library users not having
>> to copy data into a new buffer with padding. This would help when the
>> data comes from another source. E.g. an app uses lws and some libfoo
>> which allocates and hands back void *foodata + foolen; the app then
>> has to malloc + memcpy, because libfoo doesn't let the app supply a
>> buffer in which it could have reserved the padding itself.
>
>
> Agreed, but only if *writev() actually gets the kernel to take the data*.
> When it doesn't, because it's regulated by the network stack and the
> dynamics of a specific connection, we will be back in the same boat.
>
> If I understood it, the basic situation is unrelated to our having the
> data in multiple scatter-gather buffers, which is what writev() is
> designed for. It is related to the kernel / network stack refusing to
> take on our glut of data on that connection: the amount it does take
> will be the same whether write() or writev() does the asking, since
> scatter-gather semantics don't change the actual problem of parking
> the connection data kernelside.
>
> lws_write() will destroy the incoming data anyway in the client write
> case, since it XORs it in place as required by ws.  If the data's in a
> linear buffer and only for this connection, you only need to arrange
> LWS_PRE padding before the very start of the buffer; each tranche will
> then trash a bit of already-sent data behind the 'cursor' in the buffer.
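
A minimal sketch of the buffer layout described above, with an
illustrative LWS_PRE value (the real one comes from libwebsockets.h)
and hypothetical helper names:

```c
#include <stdlib.h>
#include <string.h>

#define LWS_PRE 16             /* illustrative; real value comes from lws */

/* One linear allocation with LWS_PRE spare bytes in front, plus a
 * cursor advanced by however much lws_write() reported it consumed. */
struct send_buf {
    unsigned char *mem;        /* LWS_PRE + len bytes */
    size_t len;                /* payload length */
    size_t cursor;             /* how much has already been sent */
};

static int send_buf_init(struct send_buf *b, const void *data, size_t len)
{
    b->mem = malloc(LWS_PRE + len);
    if (!b->mem)
        return -1;
    memcpy(b->mem + LWS_PRE, data, len); /* payload after the padding */
    b->len = len;
    b->cursor = 0;
    return 0;
}

/* Pointer to hand to lws_write(): the unsent payload.  lws is free to
 * scribble framing into the LWS_PRE bytes immediately before it; after
 * the first tranche that "padding" is just already-sent data. */
static unsigned char *send_buf_next(struct send_buf *b, size_t *remaining)
{
    *remaining = b->len - b->cursor;
    return b->mem + LWS_PRE + b->cursor;
}

/* Advance the cursor by what lws_write() actually took. */
static void send_buf_consume(struct send_buf *b, size_t n)
{
    b->cursor += n;
}
```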
>
> -Andy
>
> _______________________________________________
> Libwebsockets mailing list
> Libwebsockets at ml.libwebsockets.org
> http://libwebsockets.org/mailman/listinfo/libwebsockets
