[Libwebsockets] Synchronisation of objects when using uv timer_start in LWS_CALLBACK_PROTOCOL_INIT

Andy Green andy at warmcat.com
Wed Aug 3 01:37:27 CEST 2016



On August 2, 2016 1:56:58 PM GMT+08:00, Meir Yanovich <meiry242 at gmail.com> wrote:
>Hello
>Andy, what is your suggestion if I want to send not to all other
>connections but only to selected ones? Do I then need to save them in
>some static map?

It depends on what you're trying to do.

Asking for the callback for everyone using the protocol, and deciding in the individual connection callbacks whether there is actually anything to send, can still be efficient if there's generally a high probability that, by the time you request the callback, most connections have something or other to send.  Some applications are like that.

If it's more like you are not trying to update a globally 'shared world', but rather only a few connections share one subworld / 'room' that must be updated to see a consistent view of only each other's actions in the room, a very good, scalable technique is to use the per-session (pss) and per-vhost (pvh) structs, similar to how the dumb increment and mirror examples do.

 - add struct lws * members to the pvh struct to act as the heads of your lists.  Eg

    struct lws *rooms[MAX_ROOMS];

these are initialized to 0 / NULL, ie, the lists start empty.

 - add struct lws * members in the pss to form the continuation of each list.  There are many ways to do it but, eg,

     int my_room_idx; /* which room # in pvh rooms[] array we are in */
     struct lws *room_next, *room_prev;  /* prev and next wsi in same room */

 - hook ESTABLISHED and CLOSED to see every wsi be born and die; set the room membership and add / remove the wsi from the lists there (see the sketch below).
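
A minimal sketch of what the list maintenance can look like, assuming your pss type is my_pss_type (as in the loop below), choose_room() stands in for your own room-assignment policy, and pvh is however you reach your per-vhost struct (eg, via lws_protocol_vh_priv_get(lws_get_vhost(wsi), lws_get_protocol(wsi)) if you allocated it with lws_protocol_vh_priv_zalloc()):

     case LWS_CALLBACK_ESTABLISHED:
             pss->my_room_idx = choose_room(); /* your own policy */
             /* push this wsi on the head of its room's list */
             pss->room_prev = NULL;
             pss->room_next = pvh->rooms[pss->my_room_idx];
             if (pss->room_next)
                     ((my_pss_type *)lws_wsi_user(pss->room_next))->
                             room_prev = wsi;
             pvh->rooms[pss->my_room_idx] = wsi;
             break;

     case LWS_CALLBACK_CLOSED:
             /* unlink this wsi from its room's doubly-linked list */
             if (pss->room_prev)
                     ((my_pss_type *)lws_wsi_user(pss->room_prev))->
                             room_next = pss->room_next;
             else
                     pvh->rooms[pss->my_room_idx] = pss->room_next;
             if (pss->room_next)
                     ((my_pss_type *)lws_wsi_user(pss->room_next))->
                             room_prev = pss->room_prev;
             break;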

Then, to ask to write to 'all guys in the same room':

     struct lws *w = pvh->rooms[pss->my_room_idx];
     while (w) {
        lws_callback_on_writable(w);
        w = ((my_pss_type *)lws_wsi_user(w))->room_next;
     }

If there are never that many guys sharing a room, there's no need to maintain room_prev, since it is only there to speed up deletions.

This is very cheap and quick because it is unaffected by the total number of connections, just by the number of guys in a 'room'.  And you can have many lists like this, expressing relationships between wsi, without having to maintain or size additional storage other than the list head in the pvh and a "next guy in this list" pointer inside the pss.
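
On the sending side, the usual lws pattern applies: write one queued thing per WRITABLE callback and re-request the callback if that wsi still has more pending.  A sketch (buf, msg_len and more_queued() are placeholders for your own buffering; buf must reserve LWS_PRE bytes before the payload):

     case LWS_CALLBACK_SERVER_WRITEABLE: {
             int n = lws_write(wsi, &buf[LWS_PRE], msg_len,
                               LWS_WRITE_TEXT);

             if (n < (int)msg_len)
                     return -1; /* fatal: close this connection */
             if (more_queued(pss)) /* your own fifo / queue check */
                     lws_callback_on_writable(wsi);
             break;
     }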

I'm sure you have more questions.  How about you don't ask them, but actually work with the examples first and try this list stuff.  I feel like I have answered enough emails on this.

-Andy

>On Mon, Aug 1, 2016 at 8:38 AM, Andy Green <andy at warmcat.com> wrote:
>
>> On Mon, 2016-08-01 at 07:33 +0300, Meir Yanovich wrote:
>> > Thanks Andy for your very informative answer, it sure made some
>> > things very clear on how lws + networking works.
>> > So as I'm looking at the mirror example, I can see the server
>> > actually collects the client transmit part (Tx) into the session
>> > data structure (per_session_data__lws_mirror):
>> > 1. on LWS_CALLBACK_RECEIVE it fills the payload into the FIFO
>> > container in the order the messages were received
>> > 2. on LWS_CALLBACK_SERVER_WRITEABLE it iterates through the
>> > collected messages from the FIFO container and writes them back
>> > to the client.
>> >
>> > Question:
>> > In my example I keep each session data (each client which is
>> > connected), with its own wsi pointer as a member in the session
>> > data structure, in a hashmap so I could write back to them and
>> > update what the other clients did, which works fine.
>>
>> You can do that, but it seems a waste of time and effort when lws
>> provides you with per-wsi session data (you can even just allocate a
>> pointer in there to your own thing, and eliminate the hashmap).
>>
>> Likewise, if the primitive you need is actually "call back all guys
>> connected with this protocol", lws has an api exactly for that.
>>
>> > Now, what if I would like to respond to the connected client and
>> > to other clients (the ones that I keep in the hashmap) only from
>> > the uv_timer_start callback, and not right away?
>>
>> Yeah, just call lws_callback_on_writable_all_protocol(context,
>> protocol) from your timer cb.
>>
>> > do you think it will be a problem?
>> > Imagine that I take the code from the mirror example which is
>> > in LWS_CALLBACK_SERVER_WRITEABLE and move it to the
>> > static bool response_to_client_cp(void* key, void* value, void*
>> > context) callback in my example.
>>
>> Who knows what your code does, not me.
>>
>> The lws-related way that will work is when you realize you want to
>> write to all the connected clients,
>> call lws_callback_on_writable_all_protocol().
>>
>> At their own pace, when the connection can accept more, you will get
>> a WRITABLE callback and can send some more for that connection there.
>> If the connection could already accept more, that'll happen
>> "immediately", consistent with what else is waiting in the event
>> loop.  If he's got no more room, it'll happen when he does have space.
>>
>> If in the WRITABLE callback you see you have even more to send to
>> him, call lws_callback_on_writable() just for that wsi so you will
>> come back when he can take more.
>>
>> In that way data goes out according to the rate the remote client can
>> handle it, individually, even if there are large numbers of
>> connections active.
>>
>> > And in addition, will it iterate through the hashmap to write back
>> > to other connections?
>>
>> Unless there's some other reason you didn't mention, your hashmap is
>> not needed, lws already knows who else is connected on that protocol
>> and provides what you need to provoke mass writable callbacks on them
>> in one step.
>>
>> -Andy
>>
>> >
>> > Thanks
>> >
>> >
>> >
>> >
>> >
>> > On Mon, Aug 1, 2016 at 1:37 AM, Andy Green <andy at warmcat.com>
>wrote:
>> > >
>> > >
>> > > On August 1, 2016 1:24:49 AM GMT+08:00, Meir Yanovich
>> > > <meiry242 at gmail.com> wrote:
>> > > >Hello
>> > > >First of all, I can't post full source code here, so I will try
>> > > >to explain the problem as best as I can
>> > >
>> > > ...
>> > >
>> > > >I am receiving WS calls at a high rate; I capture them and
>> > > >process them.
>> > > >Now I have a timer which fires every 100ms; in this timer
>> > > >callback I handle data coming from the session_data.
>> > > >The problem is that the values I receive are wrong / missing.
>> > >
>> > > What does 'wrong' or 'missing' mean?  Either you received some rx
>> > > data or not.
>> > >
>> > > >Probably this is, I guess, because of the high rate of the WS
>> > > >calls I receive from the client.
>> > >
>> > > Hm
>> > >
>> > > >The question is, how do I handle the session_data to sync?
>> > > >Isn't an async server supposed to stack them until they are
>> > > >served?
>> > >
>> > > Not sure what you are really asking, but looking at your code, I
>> > > think it will be 'no'.  Network programming over the actual
>> > > internet doesn't look like that; in fact, in terms of in ->
>> > > process -> out, the two sides' behaviours may be completely
>> > > unrelated.  Rx may continuously pile in, no more tx may be
>> > > possible for some period... then what?  What if the period is
>> > > minutes?  Does your server have infinite resources to buffer
>> > > endless rx from many connections?
>> > >
>> > > >
>> > > >#include "game_handler.h"
>> > > >#include "simplog.h"
>> > > >
>> > > >
>> > > >extern int debug_level;
>> > > >int connection_num_as_id = 0;
>> > > >Hashmap *users_map_main = NULL;
>> > > >//Array *gems_array = NULL;
>> > > >uv_timer_t timeout_watcher;
>> > > >
>> > > >
>> > > >//respond on every 100ms loop tick
>> > > >static bool response_to_client_cp(void* key, void* value,
>> > > >                                  void* context)
>> > > >{
>> > > >    struct per_session_data__apigataway *pss =
>> > > >        (struct per_session_data__apigataway *)
>> > > >            hashmapGet(users_map_main, (char *)key);
>> > > >    // HERE the values are not as expected
>> > > >    int s = pss->status;
>> > > >
>> > > >    return true;
>> > > >}
>> > > >
>> > > >static void game_loop_cb(uv_timer_t* handle) {
>> > > >
>> > > >    ASSERT(handle != NULL);
>> > > >    ASSERT(1 == uv_is_active((uv_handle_t*)handle));
>> > > >
>> > > >    if (hashmapSize(users_map_main) > 0)
>> > > >    {
>> > > >        hashmapForEach(users_map_main, &response_to_client_cp,
>> > > >                       users_map_main);
>> > > >    }
>> > > >}
>> > > >
>> > > >int callback_wsapi(struct lws *wsi, enum lws_callback_reasons
>> > > >                   reason, void *user, void *in, size_t len)
>> > > >{
>> > > >    if (users_map_main == NULL)
>> > > >    {
>> > > >        users_map_main = hashmapCreate(10, str_hash_fn, str_eq);
>> > > >    }
>> > > >    //char *resp_json;
>> > > >    unsigned char response_to_client[LWS_PRE + 1024];
>> > > >    struct per_session_data__apigataway *pss =
>> > > >        (struct per_session_data__apigataway *)user;
>> > > >    unsigned char *p_response_to_clientout =
>> > > >        &response_to_client[LWS_PRE];
>> > > >    int n;
>> > > >
>> > > >    switch (reason) {
>> > > >    case LWS_CALLBACK_PROTOCOL_INIT:
>> > > >    {
>> > > >        uv_timer_init(lws_uv_getloop(lws_get_context(wsi), 0),
>> > > >                      &timeout_watcher);
>> > > >        //every 100ms
>> > > >        uv_timer_start(&timeout_watcher, game_loop_cb, 1000, 100);
>> > > >        break;
>> > > >    }
>> > > >    case LWS_CALLBACK_ESTABLISHED:
>> > > >    {
>> > > >        break;
>> > > >    }
>> > > >    case LWS_CALLBACK_SERVER_WRITEABLE:
>> > > >    {
>> > > >        struct per_session_data__apigataway *pss =
>> > > >            hashmapPut(users_map_main, pss->player_id, pss);
>> > > >        break;
>> > > >    }
>> > > >    default:
>> > > >        lwsl_notice("Invalid status \n");
>> > > >    }
>> > > >    break;
>> > > >    }
>> > > >    case LWS_CALLBACK_RECEIVE:
>> > > >    {
>> > > >        if (len < 1)
>> > > >        {
>> > > >            break;
>> > > >        }
>> > > >        pss->binary = lws_frame_is_binary(wsi);
>> > > >
>> > > >        memcpy(&pss->request_from_client_buf, in, len);
>> > > >        pss->recive_all_from_client = 1;
>> > > >        //Only invoke callback back to client when baby client
>> > > >        //is ready to eat
>> > > >        lws_callback_on_writable(wsi);
>> > >
>> > > What will happen if you receive multiple rx in between the
>> > > connection becoming writeable?
>> > >
>> > > If the client just spams ws frames how it likes, there is nothing
>> > > to guarantee when they arrive or how often the tcp connection
>> > > going back to the client has some more space.
>> > >
>> > > Tcp guarantees stuff will be presented as arriving in the right
>> > > order if possible.  But to live up to that guarantee, there are
>> > > various restrictions about exponential retry backoff and
>> > > disordered fragment buffering that can create unpredictable
>> > > latencies followed by "several packets arriving at once" as the
>> > > already-received disordered packets are replayed.  Tcp allows
>> > > fragments to be coalesced anywhere along the way.  So the rx size
>> > > you get at the callback has no guarantee to be related to the
>> > > packet size that was sent.  That is one reason why lws itself
>> > > uses bytewise state machines everywhere.
>> > >
>> > > There is nothing globally regulating that the server sees the
>> > > client becoming writeable in lockstep with the remote client
>> > > sending one packet.  Network programming is something else.
>> > >
>> > > Look at how the mirror example deals with it
>> > >
>> > >  - a fifo for rx data, to deal with multiple rx
>> > >
>> > >  - if the situation starts to get beyond our fifo, use of rx flow
>> > > control to signal to the remote peer using the tcp window that we
>> > > are not keeping up with what he's sending (from his perspective,
>> > > the socket connected to us becomes unwritable once we get to
>> > > filling our rx fifo and becomes writable again if and when we
>> > > catch up)
>> > >
>> > > -Andy
>> > >
>> > > >        break;
>> > > >    }
>> > > >
>> > > >    case LWS_CALLBACK_FILTER_PROTOCOL_CONNECTION:
>> > > >        break;
>> > > >
>> > > >    case LWS_CALLBACK_WS_PEER_INITIATED_CLOSE:
>> > > >    {
>> > > >        break;
>> > > >    }
>> > > >
>> > > >    default:
>> > > >        break;
>> > > >    }
>> > > >
>> > > >    return 0;
>> > > >}
>> > > >
>> > > >
>> > >
>> > >
>> >
>>



