# Considerations around Event Loops

Much of the software we use is written around an **event loop**. Some examples:

 - Chrome / Chromium, transmission, tmux, ntp SNTP... [libevent](https://libevent.org/)
 - node.js / cjdns / Julia / cmake ... [libuv](https://archive.is/64pOt)
 - Gstreamer, Gnome / GTK apps ... [glib](https://people.gnome.org/~desrt/glib-docs/glib-The-Main-Event-Loop.html)
 - SystemD ... sdevent
 - OpenWRT ... uloop

Many applications roll their own event loop using poll() or epoll() or similar,
using the same techniques.
Another set of apps use message dispatchers that
take the same approach, but are for cases that don't need to support sockets.
Event libraries provide cross-platform abstractions for this functionality, and
automatically provide the best backend for their event waits on the platform.

libwebsockets networking operations require an event loop. It provides a default
one for the platform (based on poll() for Unix) if needed, but can also natively
use any of the event loop libraries listed above, including "foreign" loops
already created and managed by the application.

## What is an 'event loop'?

Event loops have the following characteristics:

 - they have a **single thread**, therefore they do not require locking
 - they are **not threadsafe**
 - they require **nonblocking IO**
 - they **sleep** while there are no events (aka the "event wait")
 - if one or more events are seen, they call back into user code to handle each
 in turn and then return to the wait (ie, "loop")

### They have a single thread

By doing everything in turn on a single thread, there can be no possibility of
conflicting access to resources from different threads... if the single thread
is in callback A, it cannot be in two places at the same time, also in
callback B accessing the same thing: it can never run any other code
concurrently, only sequentially, by design.

It means that all mutexes and other synchronization and locking can be
eliminated, along with the many kinds of bugs related to them.

### They are not threadsafe

Event loops mandate doing everything in a single thread.
You cannot call their
apis from other threads, since there is no protection against reentrancy.

Lws apis cannot be called safely from any thread other than the event loop one,
with the sole exception of `lws_cancel_service()`.

### They have nonblocking IO

With blocking IO, you have to create threads just to block them, to learn when
your IO could proceed. In an event loop, all descriptors are set to use
nonblocking mode; we only attempt to read or write when an event has informed
us that there is something to read, or that it is possible to write.

So sacrificial, blocking discrete IO threads are also eliminated; we just do
what we should do, sequentially, when we get the event indicating that we
should do it.

### They sleep while there are no events

An OS "wait" of some kind is used to sleep the event loop thread until there is
something to do. There's an explicit wait on file descriptors that have pending
read or write, and also an implicit wait for the next scheduled event. Even if
idle for descriptor events, the event loop will wake and handle scheduled
events at the right time.

In an idle system, the event loop stays in the wait and takes 0% CPU.

### If one or more events, they handle them and then return to sleep

As you can expect from "event loop", it is an infinite loop alternating between
sleeping in the event wait and sequentially servicing pending events, by
calling callbacks for each event on each object.

The callbacks handle the event and then "return to the event loop".
The state
of things in the loop itself is guaranteed to stay consistent while in a user
callback, until you return from the callback to the event loop, when socket
closes may be processed and lead to object destruction.

Event libraries like libevent operate the same way: once you start the event
loop, it sits in an infinite loop inside the library, calling back on events
until you "stop" or "break" the loop by calling apis.

## Why are event libraries popular?

Developers prefer an external library solution for the event loop because:

 - the quality is generally higher than self-rolled ones. Someone else is
 maintaining it, a fulltime team in some cases.
 - the event libraries are cross-platform; they will pick the most effective
 event wait for the platform without the developer having to know the details.
 For example most libs can conceal whether the platform is windows or unix,
 and use native waits like epoll() or WSA accordingly.
 - If your application uses an event library, it is possible to integrate very
 cleanly with other libraries like lws that can use the same event library.
 That is extremely messy or downright impossible to do with hand-rolled loops.

Compared to just throwing threads at the problem:

 - thread lifecycle has to be closely managed; threads must start and must be
 brought to an end in a controlled way.
Event loops may end and destroy
 objects they control at any time a callback returns to the event loop.

 - threads may do things sequentially or genuinely concurrently; this requires
 locking and careful management so that only deterministic and expected things
 happen to the user data.

 - threads do not scale well to, eg, serving tens of thousands of connections;
 web servers use event loops.

## Multiple codebases cooperating on one event loop

The ideal situation is that all your code operates via a single event loop
thread. For lws-only code, including lws_protocols callbacks, this is the
normal state of affairs.

When there is other code that also needs to handle events, say already existing
application code, or code handling a protocol not supported by lws, there are a
few options to allow them to work together; which is "best" depends on the
details of what you're trying to do and what the existing code looks like.
In descending order of desirability:

### 1) Use a common event library for both lws and application code

This is the best choice for Linux-class devices. If you write your application
to use, eg, a libevent loop, then you only need to configure lws to also use
your libevent loop for them to be able to interoperate perfectly. Lws will
operate as a guest on this "foreign loop", and can cleanly create and destroy
its context on the loop without disturbing the loop.

In addition, your application can merge and interoperate with any other
libevent-capable libraries the same way, and compared to hand-rolled loops, the
quality will be higher.

### 2) Use lws native wsi semantics in the other code too

Lws supports raw sockets and file fd abstractions inside the event loop.
So if
your other code fits into that model, one way is to express your connections as
"RAW" wsis and handle them using lws_protocols callback semantics.

This ties the application code to lws, but it has the advantage that the
resulting code is unaware of the underlying event loop implementation and will
work no matter what it is.

### 3) Make a custom lws event lib shim for your custom loop

Lws provides an ops struct abstraction in order to integrate with event
libraries; you can find it in ./includes/libwebsockets/lws-eventlib-exports.h.

Lws uses this interface to implement its own event library plugins, but you can
also use it to make your own customized event loop shim, in the case that there
is too much code written for your custom event loop for changing it to be
practical.

In other words this is a way to write a customized event lib "plugin" and tell
the lws_context to use it at creation time. See [minimal-http-server.c](https://libwebsockets.org/git/libwebsockets/tree/minimal-examples/http-server/minimal-http-server-eventlib-custom/minimal-http-server.c)

### 4) Cooperate at thread level

This is less desirable because it gives up on unifying the code to run from a
single thread; it means the codebases cannot call each other's apis directly.

In this scheme the existing threads do their own thing, lock a shared area of
memory and list what they want done from the lws thread context, before calling
`lws_cancel_service()` to break the lws event wait. Lws will then broadcast a
`LWS_CALLBACK_EVENT_WAIT_CANCELLED` protocol callback, the handler for which
can lock the shared area and perform the requested operations from the lws
thread context.

### 5) Glue the loops together to wait sequentially (don't do this)

If you have two or more chunks of code with their own waits, it may be tempting
to have them wait sequentially in an outer event loop.
(This is only possible
with the lws default loop and not the event library support; event libraries
have this loop inside their own `...run(loop)` apis.)

```
	while (1) {
		do_lws_wait(); /* interrupted at short intervals */
		do_app_wait(); /* interrupted at short intervals */
	}
```

This never works well; either:

 - the whole thing spins at 100% CPU when idle, or

 - the waits have timeouts where they sleep for short periods, but then the
 latency to service one set of events is increased by the idle timeout period
 of the wait for the other set of events

## Common Misunderstandings

### "Real Men Use Threads"

Sometimes you need threads or child processes. But typically, whatever you're
trying to do does not literally require threads. Threads are an architectural
choice that can go either way depending on the goal and the constraints.

Any thread you add should have a clear reason to specifically be a thread,
rather than being done on the event loop without a new thread or the
consequent locking (and bugs).

### But blocking IO is faster and simpler

No, blocking IO has a lot of costs to conceal the event wait by blocking.

For any IO that may wait, you must spawn an IO thread for it, purely to handle
the situation that you get blocked in read() or write() for an arbitrary amount
of time. It buys you a simple story in one place, that you will proceed on the
thread if read() or write() has completed, but costs threads and locking to get
to that.

Event loops dispense with the threads and locking, and still provide a simple
story: you will get called back when data arrives or you may send.

Event loops can scale much better; a busy server with 50,000 active connections
does not have to pay the overhead of 50,000 threads and their competition for
locking.

With blocked threads, the thread can do no useful work at all while it is stuck
waiting.
With event loops the thread can service other events until something
happens on the fd.

### Threads are inexpensive

In the cases where you really need threads, you must have them, or fork off
another process. But if you don't really need them, they bring with them a lot
of expense, some of which you may only notice when your code runs on
constrained targets:

 - threads have an OS-side footprint both as objects and in the scheduler

 - thread context switches are not slow on modern CPUs, but have side effects
 like cache flushing

 - threads are designed to be blocked for arbitrary amounts of time if you use
 blocking IO apis like write() or read(). Then how much concurrency is really
 happening? Since blocked threads just go away silently, it is hard to know
 when in fact your thread is almost always blocked and not doing useful work.

 - threads require their own stack, which on embedded targets typically suffers
 from a dedicated worst-case allocation where the headroom is usually idle

 - locking must be handled, and missed locking or lock order bugs found

### But... what about latency if only one thing happens at a time?

 - Typically, at CPU speeds, nothing is happening at any given time on most
 systems; the event loop is spending most of its time in the event wait,
 asleep at 0% cpu.

 - The POSIX sockets layer is disjoint from the actual network device driver.
 It means that once you hand off the packet to the networking stack, the POSIX
 api just returns and leaves the rest of the scheduling, retries etc to the
 networking stack and device; descriptor queuing is driven by interrupts in
 the driver part, completely unaffected by the event loop part.

 - Passing data around via POSIX apis between the user code and the networking
 stack tends to return almost immediately, since its onward path is managed
 later in another, usually interrupt, context.

 - So long as enough packets-worth of data are in the network stack ready to be
 handed to descriptors, actual throughput is completely insensitive to jitter
 or latency at the application event loop.

 - The network device itself is inherently serializing packets; it can only
 send one thing at a time. The networking stack locking also introduces hidden
 serialization by blocking multiple threads.

 - Many user systems are decoupled like the network stack and POSIX... the user
 event loop and its latencies do not affect backend processes occurring in
 interrupt, internal thread, or other process contexts.

## Conclusion

Event loops have been around for a very long time and are in wide use today due
to their advantages.
Working with them successfully requires understanding how to
use them and why they have the advantages and restrictions they do.

The best results come from all the participants joining the same loop directly.
Using a common event library in the participating codebases allows completely
different code to call each other's apis safely, without locking.