[Libwebsockets] Segfault

Jack Mitchell ml at communistcode.co.uk
Thu Feb 7 17:21:11 CET 2013


> Why do you have n contexts?  They should normally all be handled from 
> one... each context implies a different listening socket.  There 
> doesn't seem any reason to split things up like that.
>
> Isn't it that you want one context, ie, one listening socket, which 
> can serve information from any of the FPGAs?
>
> If I were you I would back up and look again at your kernelside 
> support before trying to get any more working all the way through, 
> it's showing signs of becoming a tangle / muddle at the architectural 
> level because the split between what the kernel and userland does is 
> unclear.
>
> What you want is one logical device per FPGA that follows file or 
> socket descriptor semantics.  Since you mention you have an ioctl(), I 
> guess you might already have a char device for each FPGA.  In that 
> case, you can add buffering and poll() support there.  Then, you can 
> service everyone in one poll loop using the extpoll scheme shown in 
> the test server (and all in one context). It seems to me if you don't 
> eliminate the confusion that's coming from the kernel side not being 
> right you won't get a good result.
>
> -Andy
>

Hi Andy,

As per your advice, I have spent the last week writing a (much improved) 
v2 kernel driver for my FPGAs. In doing so I implemented a character 
device with blocking reads that is also poll-enabled, and, as you 
suggested, a lot of the locking complexity fell out of the design. I now 
have what initially seems to be a reliable, working solution, plus 
buffered input that takes the real-time constraints out of my userspace 
application.
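For context, the poll hook in the new driver looks roughly like this; 
this is only a sketch, and the fpga_* names are illustrative rather than 
my actual code:

    /* Sketch of the driver's poll hook -- names are illustrative only. */
    #include <linux/fs.h>
    #include <linux/poll.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(fpga_wait);

    /* set by the interrupt/DMA path when a new record lands in the buffer */
    static int fpga_data_ready;

    static unsigned int fpga_poll(struct file *file, poll_table *wait)
    {
            unsigned int mask = 0;

            /* register our wait queue with the caller's poll table */
            poll_wait(file, &fpga_wait, wait);

            if (fpga_data_ready)
                    mask |= POLLIN | POLLRDNORM;

            return mask;
    }

    /* in the IRQ / DMA completion handler:
     *         fpga_data_ready = 1;
     *         wake_up_interruptible(&fpga_wait);
     */

With that in place the userspace side can just sit in one poll loop over 
the FPGA device nodes and the websocket descriptors.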

The only parts I am not completely happy with are that I haven't worked 
the extpoll scheme into my main poll loop yet (I'm still investigating 
whether this is necessary), and the technique by which I push data out. 
Essentially I have a ring buffer with a fixed amount of space; when a 
client establishes a connection they receive the current head position 
of the ring buffer, then keep and increment their own tail position as 
successful writes are made, always checking whether they've reached the 
head.
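In case it helps to make the scheme concrete, the per-client tail logic 
is roughly along these lines (a sketch only; the struct and function 
names here are placeholders, not my actual code):

    /* Rough sketch of the shared-head / per-client-tail scheme. */
    #define RING_SLOTS 1024

    struct sample {
            unsigned char payload[64];      /* one FPGA record */
    };

    static struct sample ring[RING_SLOTS]; /* filled from the char device   */
    static unsigned int head;              /* next slot the producer writes */

    struct client {
            unsigned int tail;             /* this client's read position   */
    };

    /* New connection: start the client at the current head. */
    static void client_attach(struct client *c)
    {
            c->tail = head;
    }

    /* Drain what we can towards one client; push() stands in for the real
     * write path.  Returns the number of samples sent. */
    static int client_drain(struct client *c,
                            int (*push)(const struct sample *))
    {
            int sent = 0;

            while (c->tail != head) {
                    if (push(&ring[c->tail]) < 0)
                            break;                  /* try again later */
                    c->tail = (c->tail + 1) % RING_SLOTS;
                    sent++;
            }

            return sent;
    }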

This works well; however, I need a mechanism for dropping clients that 
haven't accepted data by the time the ring buffer wraps around again. 
This matters because the data is incremental, so they may end up out of 
sync if the buffer goes all the way round before they pick it up again.
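One approach I'm considering (untested so far) is to flag the overrun 
when the producer laps a client's tail and then let libwebsockets close 
the connection by returning non-zero from the protocol callback, roughly 
like this (the struct and field names are placeholders):

    #include <libwebsockets.h>

    struct per_session_data {
            unsigned int tail;      /* this client's read position        */
            int lapped;             /* set when the producer overtakes it */
    };

    static int callback_fpga(struct libwebsocket_context *context,
                             struct libwebsocket *wsi,
                             enum libwebsocket_callback_reasons reason,
                             void *user, void *in, size_t len)
    {
            struct per_session_data *pss = user;

            switch (reason) {
            case LWS_CALLBACK_SERVER_WRITEABLE:
                    if (pss->lapped) {
                            lwsl_notice("dropping lapped client\n");
                            return -1;  /* non-zero closes the connection */
                    }
                    /* ...otherwise send ring[pss->tail] and advance... */
                    break;

            default:
                    break;
            }

            return 0;
    }

If you can see a cleaner way of handling the overrun case I'd be glad to 
hear it.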

Thank you for your advice and for giving me the final push to get round 
to properly rewriting my kernel module again!

Best Regards,
Jack.

-- 

   Jack Mitchell (jack at embed.me.uk)
   Embedded Systems Engineer
   http://www.embed.me.uk
