{"schema":"libjg2-1", "vpath":"/git/", "avatar":"/git/avatar/", "alang":"en-US,en;q\u003d0.5", "gen_ut":1727977626, "reponame":"libwebsockets", "desc":"libwebsockets lightweight C networking library", "owner": { "name": "Andy Green", "email": "andy@warmcat.com", "md5": "c50933ca2aa61e0fe2c43d46bb6b59cb" },"url":"https://libwebsockets.org/repo/libwebsockets", "f":3, "items": [ {"schema":"libjg2-1", "cid":"95a95be9b3bfa027b3ea94dbf4a28e81", "oid":{ "oid": "0336ff058e0c8423041bbdd22218e3bbe43a2e24", "alias": [ "refs/heads/main"]},"blobname": "READMEs/README.html-parser.md", "blob": "# lws LHP HTML5 and CSS parser and render pipeline\n\n## Please note this is working end-end, but some parts incomplete and generally pre-alpha... looking for interested parties to help\n\n![overview](../doc-assets/lhp-overview.png)\n\u003cfigcaption\u003e*LHP Stream-parses HTML and CSS into a DOM and then into DLOs (lws Display List Objects). Multiple, antialiased, proportional fonts, JPEG and PNGs are supported. A linewise rasterizer is provided well-suited to resource-constrained devices with SPI based displays.*\u003c/figcaption\u003e\n\n![example](../doc-assets/lhp-acep7.jpg)\n\u003cfigcaption\u003e*Page fetched from `https://libwebsockets.org/lhp-tests/t1.html` by an ESP32, and rendered by lws on a 600x448 ACEP 7-colour EPD with 24-bit composition. The warning symbol at the bottom right is a .png img in an absolutely positioned `\u003cdiv\u003e`. The yellow shapes at the top right are divs with css-styled rounded corners. The red div is partly transparent. Display only has a 7 colour palette. Server only sends CSS/HTML/JPEG/PNG, all parsing and rendering done on the ESP32.*\u003c/figcaption\u003e\n\n## Overview\n\nLws is able to parse **and render** a subset of CSS + HTML5 on very constrained\ndevices such as ESP32, which have a total of 200KB heap available after boot at\nbest. There are some technology advances in lws that allow much greater\ncapability that has previously been possible on those platforms.\n\nThe goal is that all system display content is expressed in HTML/CSS by user\ncode, which may also be dynamically generated, with CSS responsive layout\nsimplifying managing the same UI over different display dimensions.\n\nThere are restrictions - most generic html on the internet are too complex or\nwant more assets from different hosts than tiny devices can connect to - but\nthey are quite far beyond what you would expect from a 200KB heap limit. It\nis very possible to mix remote and local http content over h2 including large\nJPEG and PNG images and express all UI in html/css.\n\n### Features\n\n - Parses common HTML and CSS, somewhat beyond html5 and CSS2.1 (supports\n some CSS3)\n - Uses Secure Streams to bring in HTML, and references to JPEG and PNG `\u003cimg\u003e`,\n toplevel async renderer api takes an lws VFS file:// or https:// URL\n retrieved via SS. There's easy, customizable lws VFS support at SS for\n transparently referencing dynamically-generated or .text, or SD card-stored\n HTML, or other assets\n - No framebuffer... rendered strictly linewise. 
### Restrictions

 - Only quite basic HTML + CSS is implemented atm; old `style=` element
   attributes are not supported.
 - Requires correct HTML; it is not yet tolerant of missing end tags etc
 - CSS must be inline in the HTML atm
 - lws understands ETAGs, but there's no support to cache assets yet; they are
   fetched anew in realtime each time
 - There is no JS support. Information can still be collected from laid-out
   elements by passing a list of IDs to the html parser api at the start.
 - There's no CSS rotation, animation etc
 - There's no image scaling; images are presented 1:1
 - There is a DOM representation, but for optimal memory usage it is stream-
   parsed and destroyed element by element after using it to produce the DLO
   layout, so only DOM parent elements that are still open exist at any one
   time. This allows the parser to scale to complex html where most of it will
   be discarded.
 - CSS is parsed and kept for the whole HTML parse, since it isn't known which
   pieces will be needed until the html has been parsed. So giant CSS alone
   can overflow available memory on constrained targets

Heap costs during active decode (while rendering a line that includes the
image):

|Feature|Decoder cost in heap (600px width)|
|---|---|
|JPEG-Grayscale|6.5KB|
|JPEG-YUV 4:4:4|16.4KB|
|JPEG-YUV 4:4:2v|16.4KB|
|JPEG-YUV 4:4:2h|31KB|
|JPEG-YUV 4:4:0|31KB|
|PNG|36KB|

Connecting to an external tls source costs around 50KB. So for very constrained
targets like ESP32, the only practical way is a single h2 connection that
provides the assets as streams multiplexed inside a single tls tunnel.
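To make that concrete, here is a rough worked example combining the figures
above, assuming a 600px-wide display, one PNG composed over one JPEG YUV 4:4:4
on the same output line, and the single h2+tls connection:

```
#include <stdio.h>

int main(void)
{
	unsigned int line_rgb  = 600 * 3;	/* one 600px RGB line buffer    */
	unsigned int png_peak  = 36 * 1024;	/* PNG decoder while active     */
	unsigned int jpeg_peak = 16794;		/* JPEG YUV 4:4:4, ~16.4KB      */
	unsigned int tls_peak  = 50 * 1024;	/* the single h2+tls connection */

	printf("peak ~%uKB of the ~200KB heap\n",
	       (tls_peak + png_peak + jpeg_peak + 2 * line_rgb) / 1024);

	return 0;
}
```

Even this overlapped case leaves headroom inside a 200KB heap, whereas an h1
design needing a fresh tls session per asset would not.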
### JIT_TRUST

Integrates dynamic querying of a CA trust bundle into lws, with openssl and
mbedtls. It can support all of the typical 130+ Mozilla trusted CAs, using the
trust chain information from the server cert to identify the CA cert required,
and instantiating just that one to validate the server cert, if it trusts it.
The trust CTX is kept around in heap for a little while, for the case that
multiple connections need it.

No heap is needed for trusted certs that are not actively required. This means
lws can securely connect over tls to arbitrary servers, like a browser would,
without using up all the memory; without this it's not possible to support
arbitrary connections securely within the memory constraints.

### Display List Objects (DLO)

Lws supports a logical Display List for graphical primitives common in HTML +
CSS, including compressed antialiased fonts, JPEG, PNG and rounded rectangles.

This intermediate representation allows display surface layout without having
all the details to hand, and provides flexibility in how to render the logical
representation of the layout.

### Linewise rendering

There may not be enough heap to hold a framebuffer for even a midrange display
device; eg, an RGB buffer for the 600 x 448 display at the top of the page is
800KB. Even if there is, for display devices that hold a framebuffer on the
display, eg, SPI TFT, OLED, or Electrophoretic displays, the display data is
sent linewise to the display anyway (perhaps in two planes, but still
linewise).

In this case, there is no need for a framebuffer at the device, if the software
stack is rewritten to stream-parse all the page elements asynchronously, and,
each time enough is buffered, to process and compose the next line's worth of
pixels. Only one or two lines' worth of buffer is required then.

This is the lws approach: rewrite the asset decoders to operate completely
statefully, so they can collaborate to provide just the next line's data
Just-in-Time.

### PNG and JPEG stream parsers

Lws includes fully stream-parsed decoders, which can run dry for input or
output safely in any state, and pick up where they left off when more data or
space is next available.

These were rewritten from UPNG and Picojpeg to be wholly stateful. These DLOs
are bound to flow-controlled SS so the content can be provided to the composer
Just-In-Time. The rewrite requires that decode can exit at any byte boundary,
due to running out of input or needing to flush output, and resume with the
same state; this is a complete inversion of the original program flow, where
the decoder only returned once it had rendered the whole image into a fullsize
buffer, with decode state spread around stack or filescope variables.

PNG transparency is supported via its A channel and composed by modulating
alpha.
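The calling pattern that results looks roughly like the toy below: all decoder
state lives in a context struct, so the function can return at any byte
boundary for more input or output space and resume exactly where it stopped.
This is only an illustration of the inversion, not the lws decoder api:

```
#include <stdint.h>
#include <stdio.h>

typedef enum { DEC_NEED_INPUT, DEC_NEED_SPACE, DEC_DONE } dec_ret_t;

typedef struct {
	enum { ST_LEN0, ST_LEN1, ST_BODY } st;	/* phase we paused in     */
	uint16_t	remaining;		/* body bytes still owed  */
} dec_t;

/* consume what input there is, emit what output fits, never block */
static dec_ret_t
dec_process(dec_t *d, const uint8_t **in, size_t *inlen,
	    uint8_t **out, size_t *outlen)
{
	while (1)
		switch (d->st) {
		case ST_LEN0:
		case ST_LEN1:	/* 2-byte big-endian payload length */
			if (!*inlen)
				return DEC_NEED_INPUT;
			d->remaining = (uint16_t)((d->remaining << 8) | *(*in)++);
			(*inlen)--;
			d->st++;
			break;
		case ST_BODY:	/* the "decode" is just a copy here */
			if (!d->remaining)
				return DEC_DONE;
			if (!*inlen)
				return DEC_NEED_INPUT;
			if (!*outlen)
				return DEC_NEED_SPACE;
			*(*out)++ = *(*in)++;
			(*inlen)--; (*outlen)--; d->remaining--;
			break;
		}
}

int main(void)
{
	const uint8_t msg[] = { 0x00, 0x05, 'h', 'e', 'l', 'l', 'o' };
	const uint8_t *in = msg;
	size_t inlen = sizeof(msg);
	dec_t d = { ST_LEN0, 0 };
	dec_ret_t r;

	do {	/* drain through a tiny 2-byte "line", like a line buffer */
		uint8_t line[2], *out = line;
		size_t outlen = sizeof(line);

		r = dec_process(&d, &in, &inlen, &out, &outlen);
		fwrite(line, 1, sizeof(line) - outlen, stdout);
	} while (r == DEC_NEED_SPACE);

	return 0;
}
```

The same shape lets several such decoders interleave on one thread, each asked
for only enough output to finish the current display line.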
### Compressed, Anti-aliased fonts

Based on mcufont, these are 4-bit antialiased fonts produced from arbitrary
TTFs. They are compressed; a set of a dozen different font sizes from 10px thru
32px, plus bold sizes, only costs 100KB of storage. The user can choose their
own fonts and sizes; the encoder is included in lws.

The mcufont decompressor was rewritten to be, again, completely stateful;
glyphs present on the current line are statefully decoded to produce only that
line's worth of output, then paused until the next line. Only glyphs that
appear on the current line have instantiated decoders.

The anti-alias information is composed into the line buffer as alpha.

### Integration of PNG, JPEG and file:// VFS sources into Secure Streams

Secure Streams and lws VFS now work together via `file://` URLs; a SS can be
directed to a local VFS resource the same way as to an `https://` resource.
Resources from https:// and file:// can refer to each other in CSS or `<img>`
cleanly.

![example](../doc-assets/lhp-ss-unification.png)
<figcaption>*All local and remote resources are fetched using Secure Streams with
a VFS `file://` or `https://` URL. Delivery of enough data to render the next
line from multiple sources without excess buffering is handled by `lws_flow`.*</figcaption>

Dynamic content, such as dynamic HTML, can be registered in a DLO VFS
filesystem and referenced via SS, either as the toplevel html document or by
URLs inside the HTML.

`.jpg` and `.png` resources can be used in the html and are fetched using their
own SS; if coming from the same server over h2, these have very modest extra
memory needs, since they share the existing h2 connection and tls.

### H2 tx credit buffer management

All of the efforts to make JPEG and PNG stream-parsed are not useful if either
there is an h1 connection requiring a new TLS session that exhausts the heap,
or, even when multiplexed into the same h2 session, the whole JPEG or PNG is
dumped too quickly into the device, which cannot buffer it.

On constrained devices, the only mix that allows multiple streaming assets that
are decoded as they arrive is an h2 server with streaming modulated by h2 tx
credit. The demos stream css, html, JPEG and PNG from libwebsockets.org over
h2. In lws, `lws_flow` provides the link between maximum buffering targets and
the tx_credit flow control management.

The number of assets that can be handled simultaneously on an HTML page is
restricted by the irreducible heap cost of decoding them: about 36KB + an RGB
line buffer for PNGs, and either an 8-line (YUV 4:4:4) or 16-line (4:4:2 or
4:4:0) RGB buffer for JPEG.

However, PNG and JPEG decode occurs lazily, starting at the render line where
the object starts becoming visible, and all DLO objects are destroyed after the
last line where they are visible. The SS responsible for fetching and
regulating the bufferspace needed is started at layout-time, and the parser is
started too, but only up to the point that the header with the image dimensions
is decoded, not beyond it to where the large decoder allocation is required.

It means that only images which appear on the same line have decoders
instantiated in memory at the same time; images that don't share any common
horizontal lines do not exist in heap simultaneously; basically, multiple
vertically-stacked images cost little more than one.

The demo shows that even on ESP32, the images are cheap enough to allow a
fullsize background JPEG with a partially-transparent PNG composed over it.
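Conceptually, the receive side only re-opens the peer's tx credit window as its
decode buffering drains, so the server can never get more than the buffering
target ahead. A hypothetical sketch of that bookkeeping (not the `lws_flow`
api itself):

```
#include <stdio.h>
#include <stddef.h>

#define BUF_TARGET 4096	/* max bytes we will buffer for this asset */

typedef struct {
	size_t buffered;	/* undecoded rx bytes currently held */
	size_t credit;		/* tx credit the peer still holds    */
} flow_t;

/* decode consumed n buffered bytes: top the peer's credit back up */
static size_t
flow_refill(flow_t *f, size_t n)
{
	size_t grant = 0;

	f->buffered -= n;
	if (f->buffered + f->credit < BUF_TARGET)
		grant = BUF_TARGET - f->buffered - f->credit;
	f->credit += grant;

	return grant;	/* would go out as WINDOW_UPDATE on the h2 stream */
}

int main(void)
{
	/* stream starts with the whole window granted, nothing buffered */
	flow_t f = { 0, BUF_TARGET };

	f.buffered += 1200; f.credit -= 1200;	      /* 1200 bytes arrived  */
	printf("grant %zu\n", flow_refill(&f, 1200)); /* ...and were decoded */

	return 0;
}
```

Because `buffered + credit` never exceeds `BUF_TARGET`, the invariant holds
per stream, whatever the server's pacing.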
### lws_display Composition, palette and dithering

Internally, lws provides either an 8-bit Y (grayscale) or 32-bit RGBA
(truecolor) composition pipeline for all display elements, based on whether the
display device is monochrome or not. Alpha (opacity) is supported. This is
true regardless of the final bit depth of the display device, so even B&W
devices can approximate the same output.

A gamma of 2.2 is also applied before palettization, then Floyd-Steinberg
dithering, all with just a line buffer and no framebuffer needed at the device.
Assets like JPEG can be normal RGB ones, and the rendering adapts down to the
display palette and capabilities dynamically.

The `lws_display` support in lws has been extended to a variety of common EPD
controllers such as UC8171, supporting B&W, B&W plus a third colour (typically
red or yellow) and 4-level gray. The ILI9341 driver for the displays found on
WROVER KIT and the ESP32S Kaluga KIT has been enhanced to work with the new
display pipeline using native 565.

![overview](../doc-assets/lhp-400-300-red.jpg)
<figcaption>*HTML rendered on the device from file:// VFS-stored normal RGB JPEG and HTML/CSS, by ESP32 with BW-Red palette 400x300 EPD*</figcaption>

![overview](../doc-assets/lhp-rgb-example.png)
<figcaption>*Test html rendered to 24-bit RGB data directly*</figcaption>

![overview](../doc-assets/lhp-example-g4.jpg)
<figcaption>*Test html rendered to 300x240 4-gray palette EPD (from RGB JPEG also fetched from the server during render) using Y-only composition... notice the effectiveness of the error-diffused use of the palette*</figcaption>

![overview](../doc-assets/lhp-104-212-red.jpg)
<figcaption>*Test html rendered to 104 x 212 BW-Red palette EPD, red h1 text set by CSS `color:#f00`, on a lilygo ESP32-based EPD label board*</figcaption>

![overview](../doc-assets/lhp-epd-flex-104.jpg)
<figcaption>*Test html rendered to 104 x 212 BW flexible EPD; notice the font legibility, the effectiveness of the dither, and the presence of line breaks*</figcaption>

![overview](https://libwebsockets.org/wrover-boot.gif)
<figcaption>*ESP32 WROVER KIT running the example carousel on a 320x200 565 RGB SPI display. The 10s delay between tests is snipped for brevity, otherwise shown realtime. Moire is an artifact of the camera. As composition is linewise, the JPEG and other data from libwebsockets.org is arriving and being completely parsed / composed in the time taken to update the display. Interleaved SPI DMA is used to send a line to the display while the next is rendered.*</figcaption>
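As an illustration of the linewise approach, here is a minimal sketch of
Floyd-Steinberg dithering one grayscale line at a time down to B&W, carrying
only one line of diffused error instead of a framebuffer; illustrative only,
not the lws implementation:

```
#include <stdint.h>
#include <string.h>

#define W 600

static int16_t carry[W + 2];	/* error diffused down onto the next line */

/* quantize one 8-bit gray line to black/white in place, diffusing error */
static void
dither_line(uint8_t *line)
{
	int16_t next[W + 2];
	int right = 0;		/* error flowing into the pixel to our right */

	memset(next, 0, sizeof(next));
	for (int x = 0; x < W; x++) {
		int v   = line[x] + right + carry[x + 1],
		    out = v < 128 ? 0 : 255,
		    err = v - out;

		line[x]      = (uint8_t)out;
		right        = (err * 7) / 16;
		next[x]     += (err * 3) / 16;	/* below-left  */
		next[x + 1] += (err * 5) / 16;	/* below       */
		next[x + 2] += (err * 1) / 16;	/* below-right */
	}
	memcpy(carry, next, sizeof(carry));
}

int main(void)
{
	uint8_t line[W];

	for (int y = 0; y < 448; y++) {
		memset(line, 100, sizeof(line)); /* pretend composed gray line */
		dither_line(line);		 /* now 0/255: ship to display */
	}
	return 0;
}
```

The same scheme generalizes to spot-colour or gray-level palettes by snapping
`v` to the nearest palette entry instead of a 128 threshold.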
## Implications of stream-parsing HTML

To maximize scalability, HTML is parsed into an element stack, consisting of a
set of nested parent-child elements. As an element goes out of scope and the
parsing moves on to the next, its parents also go out of scope and are
destroyed... new parents are kept in the stack, again only while they have
children in scope. This keeps strict pressure against large instantaneous heap
allocations for HTML parsing, but it has some implications.

This "goldfish memory" "keyhole parsing" scheme is by itself inadequate when
the dimensions of future elements will affect the dimensions of the current
one, eg, a table where we don't find out until later how many rows it has, and
so how high it is. There's also a class of retrospective dimension
acquisition, eg, where a JPEG `img` is in a table, but we don't find out its
dimensions until we parse its header much later, long after the whole html
parser stack related to it has been destroyed, and possibly many other things
have been laid out after it.

## Top level API

```
int
lws_lhp_ss_browse(struct lws_context *cx, lws_display_render_state_t *rs,
		  const char *url, sul_cb_t render);
```

You basically give it an `https://` or `file://` URL, a structure for the
render state, and a callback for when the DLOs have been created and lines of
pixels are being emitted. The source fetching, parsing, layout, and finally
rendering proceed asynchronously on the event loop, without blocking beyond
the time taken to emit (by default) 4 lines.

In these examples, the renderer callback passes the lines of pixels to the
`lws_display` blit op.

See `./include/libwebsockets/lws-html.h` for more information.

Also see the `./minimal-examples-lowlevel/api-tests/api-test-lhp-dlo/main.c`
example; you can render to 24-bit RGB on stdout by giving it a URL, eg

```
$ ./bin/lws-api-test-lhp-dlo https://libwebsockets.org/lhp-tests/t1.html >/tmp/a.data
```

The raw RGB can be opened in GIMP.
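As a hedged sketch of driving the api above: the callback wiring follows the
api-test example, and how the render state is recovered inside the callback
and fed to the display is left as comments, since those details live in that
example; `start_browse` is a hypothetical entry point:

```
#include <libwebsockets.h>

static lws_display_render_state_t rs;	/* must persist across the render */

/* called from the event loop when a run of composed lines is ready */
static void
render_cb(lws_sorted_usec_list_t *sul)
{
	(void)sul;
	/*
	 * The api-test example recovers the render state from the sul and
	 * hands the fresh lines to the lws_display blit op; see
	 * ./minimal-examples-lowlevel/api-tests/api-test-lhp-dlo/main.c
	 */
}

static int
start_browse(struct lws_context *cx)
{
	return lws_lhp_ss_browse(cx, &rs,
			"https://libwebsockets.org/lhp-tests/t1.html",
			render_cb);
}
```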