# `lws_sul` scheduler api

Since v3.2, lws no longer requires periodic checking for timeouts and
other events.  The internals were refactored so that future events are
scheduled on a single, unified, time-sorted linked-list, with
microsecond resolution.

This makes it very cheap to know when the next scheduled event is
coming and restrict the poll wait to match, or, for event libraries,
to set a timer to wake at the earliest event when returning to the
event loop.

Everything that was checked periodically was converted to use `lws_sul`
and schedule its own later event.  The end result is that when lws is idle,
it stays asleep in the poll wait until a network event or the next
scheduled `lws_sul` event happens, which is optimal for power.

# Side effect for older code

If your older code uses `lws_service_fd()`, it used to be necessary
to call this with a NULL pollfd periodically to indicate you wanted
to let the background checks happen.  `lws_sul` eliminates the whole
concept of periodic checking, and NULL is no longer a valid pollfd
value for this and related apis.

# Using `lws_sul` in user code

See `minimal-http-client-multi` for an example of using the `lws_sul`
scheduler from your own code; it uses it to spread out connection
attempts so they are staggered in time.
You must create an
`lws_sorted_usec_list_t` object somewhere, eg, in your own existing object.

```
static lws_sorted_usec_list_t sul_stagger;
```

Create your own callback for the event... the argument points to the sul object
used when the callback was scheduled.  You can use pointer arithmetic to translate
that to your own struct when the `lws_sorted_usec_list_t` is a member of that
struct.

```
static void
stagger_cb(lws_sorted_usec_list_t *sul)
{
...
}
```

When you want to schedule the callback, use `lws_sul_schedule()`... this will call
it 10ms in the future:

```
	lws_sul_schedule(context, 0, &sul_stagger, stagger_cb, 10 * LWS_US_PER_MS);
```

If you destroy your object and need to cancel the scheduled callback, use:

```
	lws_sul_schedule(context, 0, &sul_stagger, NULL, LWS_SET_TIMER_USEC_CANCEL);
```

# lws_sul2 and system suspend

In v4.1, alongside the existing `lws_sul` apis, there is a refactor and additional
functionality aimed at negotiating system suspend, while remaining completely
backwards-compatible with the v3.2+ `lws_sul` apis.

Devicewide suspend is basically the withdrawal of CPU availability for an unbounded
amount of time, so events scheduled by user code may miss their time
slot because the cpu was down and nothing is getting serviced.
Whether that is
actively desirable, OK, a big disaster, or a failure that will be corrected at other
layers at the cost of, eg, some additional latency, depends on the required device
behaviours, the function of the user code that was scheduled, and its meaning to
the system.

Before v4.1, lws offered the same scheduling service for everything, both internal
and arranged by user code; it had no way to know which events are critical for the
device to operate as intended, and so must force a wake from suspend, and for which
scheduled events 'failure [to get the event] is an option'.

For example, if locally-initiated periodic keepalive pings do not happen,
persistently dead (ie, no longer passing data) connections may remain unrenewed;
but eventually, when suspend ends for another reason, the locally-initiated PING
probes will resume, the dead connection will be discovered and, if the connectivity
allows, corrected.

If the device's function can tolerate the latency of there being no connectivity in
suspend under those conditions until it wakes for another reason, it's OK for these
kinds of timeouts to be suppressed during suspend and basically take the power saving
instead.  If for a particular device it's intolerable to ever have a silently dead
connection for more than a very short time compared to suspend durations, then these
kinds of timeouts must have the priority to wake the whole device from suspend, so
they continue to operate unimpeded.

That is just one example; lws offers generic scheduler services the user code can
exploit for any purpose, including mission-critical ones.  The changes give the user
code a way to tell lws whether a particular scheduled event is important enough to
the system operation to wake the system from devicewide suspend.