[Libwebsockets] Synchronisation of objects when using uv timer_start in LWS_CALLBACK_PROTOCOL_INIT

Andy Green andy at warmcat.com
Mon Aug 1 00:37:12 CEST 2016



On August 1, 2016 1:24:49 AM GMT+08:00, Meir Yanovich <meiry242 at gmail.com> wrote:
>Hello
>first of all, I can't post the full source code here, so I will try to
>explain the problem as best as I can

...

>I am receiving WS calls at a high rate; I capture them and process them.
>Now I have a timer which fires every 100ms, and in this timer callback I
>handle the data coming from the session_data.
>The problem is that the values I receive are wrong or missing.

What does 'wrong' or 'missing' mean?  Either you received some rx data or not.

>Probably this is, I guess, because of the high rate of the WS calls I
>receive from the client.

Hm

>The question is, how do I handle the session_data so it stays in sync?
>In an async server, aren't they supposed to queue up until they are
>served?

Not sure what you are really asking, but looking at your code, I think the answer will be 'no'.  Network programming over the actual internet doesn't look like that; in fact, in terms of in -> process -> out, the two sides' behaviours may be completely unrelated.  Rx may continuously pile in while no more tx is possible for some period... then what?  What if the period is minutes?  Does your server have infinite resources to buffer endless rx from many connections?
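For scale, with made-up numbers: 10,000 connections each allowed to buffer just 1MB of unserved rx is already 10GB of RAM, and "as much as the client cares to send" is unbounded.  So a real server has to cap its per-connection buffering and push back on the sender when the cap is reached.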

>
>#include "game_handler.h"
>#include "simplog.h"
>
>
>extern int debug_level;
>connection_num_as_id = 0;
>Hashmap *users_map_main = NULL;
>//Array *gems_array = NULL;
>uv_timer_t timeout_watcher;
>
>
>//response to every 100ms   loop tick
>static bool response_to_client_cp(void* key, void* value, void*
>context)
>{
>struct per_session_data__apigataway *pss =
>(struct per_session_data__apigataway *)hashmapGet(users_map_main, (char
>*)key);
>        // HERE the values are not as expected
>int s = pss->status;
>
>return true;
>}
>
>static void game_loop_cb(uv_timer_t* handle) {
>
>ASSERT(handle != NULL);
>ASSERT(1 == uv_is_active((uv_handle_t*)handle));
>
>
>if (hashmapSize(users_map_main)>0)
>{
>hashmapForEach(users_map_main, &response_to_client_cp, users_map_main);
>}
>
>
>}
>
>int callback_wsapi(struct lws *wsi, enum lws_callback_reasons reason,
>void *user, void *in, size_t len)
>{
>if (users_map_main == NULL)
>{
>users_map_main = hashmapCreate(10, str_hash_fn, str_eq);
>}
>//char *resp_json;
>unsigned char response_to_client[LWS_PRE + 1024];
>struct per_session_data__apigataway *pss =
>(struct per_session_data__apigataway *)user;
>unsigned char *p_response_to_clientout = &response_to_client[LWS_PRE];
>int n;
>  switch (reason) {
>case LWS_CALLBACK_PROTOCOL_INIT:
>{
>
>uv_timer_init(lws_uv_getloop(lws_get_context(wsi),
>0),&timeout_watcher);
>//every 100ms
>uv_timer_start(&timeout_watcher, game_loop_cb, 1000, 100);
>break;
>}
>case LWS_CALLBACK_ESTABLISHED:
>{
>
>
>break;
>}
>case LWS_CALLBACK_SERVER_WRITEABLE:
>{
>
>       struct per_session_data__apigataway *pss  =
>hashmapPut(users_map_main, pss->player_id, pss);
>
>
>break;
>}
>default:
>lwsl_notice("Invalid status \n");
>
>}
>break;
>}
>case LWS_CALLBACK_RECEIVE:
>{
>if (len < 1)
>{
>break;
>}
>pss->binary = lws_frame_is_binary(wsi);
>
>memcpy(&pss->request_from_client_buf, in, len);
>>request_from_client_buf);
>pss->recive_all_from_client = 1;
>//Only invoke callback back to client when baby client is ready to eat
>lws_callback_on_writable(wsi);

What will happen if you receive multiple rx frames in between the connection becoming writeable?

If the client just spams ws frames as it likes, there is nothing to guarantee when they arrive or how often the TCP connection going back to the client has some more space.
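To make that concrete, here is a minimal sketch, not from your code: struct rx_frame, MAX_QUEUED and the q / head / tail / count members are invented for illustration.  The point is to queue each received frame rather than overwrite a single per-session buffer, since several LWS_CALLBACK_RECEIVE callbacks can arrive before one LWS_CALLBACK_SERVER_WRITEABLE:

    #define MAX_QUEUED 8
    #define MAX_FRAME  1024

    struct rx_frame {
            size_t len;
            unsigned char buf[MAX_FRAME];
    };

    /* hypothetical additions to the per-session struct */
    struct per_session_data__apigataway {
            struct rx_frame q[MAX_QUEUED];
            int head, tail, count;
            /* ... existing members ... */
    };

    /* in LWS_CALLBACK_RECEIVE: enqueue, don't overwrite */
    if (pss->count < MAX_QUEUED) {
            struct rx_frame *f = &pss->q[pss->tail];

            f->len = len > MAX_FRAME ? MAX_FRAME : len;
            memcpy(f->buf, in, f->len);
            pss->tail = (pss->tail + 1) % MAX_QUEUED;
            pss->count++;
    }
    lws_callback_on_writable(wsi);

    /* in LWS_CALLBACK_SERVER_WRITEABLE: dequeue ONE frame per
     * callback, and ask to be called again if more are pending */
    if (pss->count) {
            struct rx_frame *f = &pss->q[pss->head];

            /* ... use f->buf / f->len ... */
            pss->head = (pss->head + 1) % MAX_QUEUED;
            pss->count--;
            if (pss->count)
                    lws_callback_on_writable(wsi);
    }

Even this only postpones the problem: when the queue fills you must drop frames or throttle the sender, which is what the rx flow control described below is for.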

TCP guarantees stuff will be presented as arriving in the right order if possible.  But to live up to that guarantee, there are various mechanisms, like exponential retry backoff and buffering of out-of-order fragments, that can create unpredictable latencies followed by "several packets arriving at once" as the already-received disordered packets are replayed.  TCP also allows fragments to be coalesced anywhere along the way, so the rx size you get at the callback has no guaranteed relationship to the packet size that was sent.  That is one reason why lws itself uses bytewise state machines everywhere.
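To illustrate the bytewise idea in isolation (a standalone sketch, not lws code; all names here are invented): a parser that survives arbitrary fragmentation keeps its state in a struct across calls and consumes one byte at a time, so it does not care how the stream was chopped up:

    #include <stddef.h>

    struct line_parser {
            char   buf[256];
            size_t pos;
    };

    /* feed one rx chunk of any size; complete newline-delimited
     * messages are delivered as they finish, regardless of how the
     * bytes were split or merged in flight */
    static void parser_feed(struct line_parser *p, const char *in,
                            size_t len,
                            void (*on_msg)(const char *msg, size_t n))
    {
            size_t i;

            for (i = 0; i < len; i++) {
                    if (in[i] == '\n') {
                            on_msg(p->buf, p->pos);
                            p->pos = 0;
                            continue;
                    }
                    if (p->pos < sizeof(p->buf) - 1)
                            p->buf[p->pos++] = in[i];
            }
    }

Feeding it "ab", then "c\nde", then "f\n" delivers exactly the messages "abc" and "def", however the chunks were split.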

There is nothing globally regulating things so that the server sees its connection become writeable in lockstep with the remote client sending one packet.  Network programming is something else.

Look at how the mirror example deals with it:

 - a fifo for rx data, to deal with multiple rx

 - if the situation starts to get beyond our fifo, use of rx flow control to signal to the remote peer, via the TCP window, that we are not keeping up with what he's sending (from his perspective, the socket connected to us becomes unwritable once we get to filling our rx fifo, and becomes writable again if and when we catch up); a sketch of this pattern follows
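Schematically, that pattern looks something like the below.  This is only a sketch: lws_rx_flow_control(), lws_callback_on_writable() and lws_write() are real lws APIs, but the fifo helpers (fifo_put / fifo_get / fifo_nearly_full / fifo_empty) and the rxq member are stand-ins you would have to implement yourself.

    /* assumes in scope: unsigned char buf[LWS_PRE + 1024]; int n; */

    case LWS_CALLBACK_RECEIVE:
            fifo_put(&pss->rxq, in, len);
            if (fifo_nearly_full(&pss->rxq))
                    /* stop reading this socket; the peer's TCP window
                     * fills and, from his side, the connection stops
                     * being writable */
                    lws_rx_flow_control(wsi, 0);
            lws_callback_on_writable(wsi);
            break;

    case LWS_CALLBACK_SERVER_WRITEABLE:
            if (!fifo_empty(&pss->rxq)) {
                    n = fifo_get(&pss->rxq, &buf[LWS_PRE],
                                 sizeof(buf) - LWS_PRE);
                    lws_write(wsi, &buf[LWS_PRE], n, LWS_WRITE_TEXT);
                    /* more queued?  ask to be called back again */
                    if (!fifo_empty(&pss->rxq))
                            lws_callback_on_writable(wsi);
            }
            /* we drained some space, so let rx flow again */
            lws_rx_flow_control(wsi, 1);
            break;

The important property is that the server's memory use per connection is bounded by the fifo size, and the backpressure propagates to the client through TCP itself.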

-Andy

>        break;
>    }
>    case LWS_CALLBACK_FILTER_PROTOCOL_CONNECTION:
>        break;
>
>    case LWS_CALLBACK_WS_PEER_INITIATED_CLOSE:
>    {
>        break;
>    }
>    default:
>        break;
>    }
>
>    return 0;
>}