[Libwebsockets] Segfault

Andy Green andy.green at linaro.org
Thu Feb 7 18:50:23 CET 2013


On 08/02/13 00:21, the mail apparently from Jack Mitchell included:
>
>> Why do you have n contexts?  They should normally all be handled from
>> one... each context implies a different listening socket.  There
>> doesn't seem any reason to split things up like that.
>>
>> Isn't it that you want one context, ie, one listening socket, which
>> can serve information from any of the FPGAs?
>>
>> If I were you I would back up and look again at your kernelside
>> support before trying to get any more working all the way through,
>> it's showing signs of becoming a tangle / muddle at the architectural
>> level because the split between what the kernel and userland does is
>> unclear.
>>
>> What you want is one logical device per FPGA that follows file or
>> socket descriptor semantics.  Since you mention you have an ioctl(), I
>> guess you might already have a char device for each FPGA.  In that
>> case, you can add buffering and poll() support there.  Then, you can
>> service everyone in one poll loop using the extpoll scheme shown in
>> the test server (and all in one context). It seems to me if you don't
>> eliminate the confusion that's coming from the kernel side not being
>> right you won't get a good result.
>>
>> -Andy
>>
>
> Hi Andy,
>
> As per your advice I have spent the last week writing a (much improved)
> v2 kernel driver for my FPGAs. In doing so I coded a blocking device
> file which is also poll-enabled; as you mentioned, a lot of the locking
> complexity fell out of the design. I now have what seems initially to be
> a reliable working solution, and also a buffered input which takes the
> real-time constraints out of my userspace application.

Cool.

> The only parts which I am not completely happy with are that I haven't
> worked extpoll into my main poll loop yet (I'm still investigating
> whether this is necessary), and the technique by which I push data out.
> I have essentially a ring buffer with a set amount of space; when a
> client establishes a connection they receive the head position of the
> ring buffer, then keep and increment their own tail position as
> successful writes are made, always checking whether they've reached the
> head.

You can get away without extpoll if the poll() timeout is small enough 
for your application: you can do the libwebsockets service call, then 
check your stuff afterwards before looping.  However, if the websocket 
connections are all idle, that "cheapo" way of doing it can introduce a 
whole timeout period of latency before you check your FPGA driver device 
nodes for having anything.  If that doesn't matter, and you're OK with 
burning a little CPU spinning in this loop even when idle, then it's OK 
to stick with that... you're still fully single-threaded that way.
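
For reference, a minimal sketch of that "cheapo" loop might look like 
the below.  It's only an illustration, not your actual code: 
fpga_drain_to_ringbuffer() is a hypothetical stand-in for your own 
driver-side read / buffering, the 20ms timeout is arbitrary, and the 
context is assumed to have been created already.

#include <poll.h>
#include <libwebsockets.h>

/* hypothetical hook into your own driver read / buffering code */
extern void fpga_drain_to_ringbuffer(int fd);

void service_loop(struct libwebsocket_context *context, int fpga_fd)
{
	struct pollfd pfd = { .fd = fpga_fd, .events = POLLIN };

	while (1) {
		/* service any pending websocket activity, waiting at
		 * most 20ms so we come back around reasonably often */
		libwebsocket_service(context, 20);

		/* then check the FPGA char device without blocking */
		pfd.revents = 0;
		if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN))
			fpga_drain_to_ringbuffer(fpga_fd);
	}
}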

> This works well; however, I need to find a mechanism for dropping clients
> if they haven't accepted data by the time the ring buffer comes around
> again. This is relevant because the data is incremental, so they may
> become out of sync if the buffer goes all the way round and they pick it
> up at a later date.

I see... yes, dropping them and having the client maybe reconnect to 
start over might be good.  If your ring buffer is big enough compared to 
the incoming data rate, it should cover any reasonable network delay... 
if the client really goes away for 10s or whatever, then dropping sounds 
pretty OK.
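
Just to illustrate the arithmetic of that "lapped" check (again, not 
your actual structures): if you keep the head and each client's tail as 
free-running counters of slots written / consumed, rather than wrapped 
indices, detecting an overrun is a single subtraction.

#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 4096		/* assumed ring capacity, in slots */

struct client {
	uint64_t tail;		/* slots this client has consumed so far */
	/* ... rest of the per-connection state ... */
};

/*
 * head = total slots the producer has ever written.  If the producer
 * has written more than a whole ring since this client last caught up,
 * the data it still wanted has been overwritten and it is stale.
 */
static bool client_overrun(uint64_t head, const struct client *c)
{
	return head - c->tail > RING_SLOTS;
}

When that trips you can mark the connection to be closed, e.g. by 
returning nonzero from your protocol callback the next time it is 
serviced, so the library drops it for you.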

> Thank you for your advice and for giving me the final push to get round
> to properly rewriting my kernel module again!

You're welcome.

-Andy

-- 
Andy Green | TI Landing Team Leader
Linaro.org │ Open source software for ARM SoCs | Follow Linaro
http://facebook.com/pages/Linaro/155974581091106  - 
http://twitter.com/#!/linaroorg - http://linaro.org/linaro-blogo



More information about the Libwebsockets mailing list