# Secure Streams

Secure Streams is a networking api that strictly separates payload from any
metadata.
That includes the client endpoint address for the connection, the tls
trust chain and even the protocol used to connect to the endpoint.

The user api just receives and transmits payload, and receives advisory
connection state information.

The details about how the connections for different types of secure stream
should be made are held in a JSON "policy database" initially passed in to the
context creation, but able to be updated from a remote copy.

Both client and server networking can be handled using Secure Streams APIs.

![overview](/doc-assets/ss-operation-modes.png)

## Secure Streams CLIENT State lifecycle

![overview](/doc-assets/ss-state-flow.png)

Secure Streams are created using `lws_ss_create()`; after that they may acquire
underlying connections, and lose them, but the lifecycle of the Secure Stream
itself is not directly related to any underlying connection.

Once created, Secure Streams may attempt connections; these may fail, and once
the number of failures exceeds the count of attempts to conceal in the retry /
backoff policy, the stream reaches `LWSSSCS_ALL_RETRIES_FAILED`.
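The retry bookkeeping can be pictured with a small model. This is purely
illustrative: the struct and helper names below are invented, and the real
logic lives inside lws; the semantics (clamping to the last backoff entry,
concealing up to `conceal` failures, 65535 meaning retry forever, jitter as a
percentage of the backoff figure) follow the policy fields described later in
this document.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the policy's retry section */
struct retry_policy {
	const uint32_t	*backoff_ms;	/* "backoff": delays in ms */
	int		 backoff_count;
	uint16_t	 conceal;	/* "conceal": 65535 = never give up */
	uint8_t		 jitterpc;	/* "jitterpc": percent of jitter */
};

/* Delay for the n-th consecutive failure (0-based), clamping to the
 * last backoff entry when we run off the end of the array */
static uint32_t
retry_delay_ms(const struct retry_policy *rp, int failures)
{
	int i = failures < rp->backoff_count ? failures : rp->backoff_count - 1;

	return rp->backoff_ms[i];
}

/* Same, with jitterpc applied from a caller-supplied random pc 0..99 */
static uint32_t
retry_delay_jittered(const struct retry_policy *rp, int failures,
		     uint32_t rand100)
{
	uint32_t d = retry_delay_ms(rp, failures);

	return d + (d * rp->jitterpc * (rand100 % 100)) / (100 * 100);
}

/* Is this failure still concealed, or is it time to surface
 * LWSSSCS_ALL_RETRIES_FAILED to the user code? */
static int
retry_concealed(const struct retry_policy *rp, int failures)
{
	return rp->conceal == 65535 || failures <= rp->conceal;
}
```

With the example policy's `"conceal": 5`, the sixth consecutive failure is the
one that surfaces `LWSSSCS_ALL_RETRIES_FAILED`.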
The stream becomes
idle again until another explicit connection attempt is given.

Once connected, the user code can use `lws_ss_request_tx()` to ask for a slot
to write to the peer; when this is forthcoming, the tx handler can send a
message. If the underlying protocol gives indications of transaction success,
such as, eg, a 200 for http or an ACK from MQTT, the stream state is called
back with an `LWSSSCS_QOS_ACK_REMOTE` or `LWSSSCS_QOS_NACK_REMOTE`.

## SS Callback return handling

SS state(), rx() and tx() can indicate with their return code some common
situations that should be handled by the caller.

Constant|Scope|Meaning
---|---|---
LWSSSSRET_TX_DONT_SEND|tx|This opportunity to send something was passed on
LWSSSSRET_OK|state, rx, tx|No error, continue doing what we're doing
LWSSSSRET_DISCONNECT_ME|state, rx|Assertively disconnect from the peer
LWSSSSRET_DESTROY_ME|state, rx|Caller should now destroy the stream itself
LWSSSSRET_SS_HANDLE_DESTROYED|state|Something handled a request to destroy the stream

Destruction of the stream we're calling back on inside the callback is tricky;
it's preferable to return `LWSSSSRET_DESTROY_ME` if it is required, and let the
caller handle it. But in some cases, helpers called from the callbacks may
destroy the handle themselves; in that case the handler should return
`LWSSSSRET_SS_HANDLE_DESTROYED`, indicating that the handle is already
destroyed.

## Secure Streams SERVER State lifecycle

![overview](/doc-assets/ss-state-flow-server.png)

You can also run servers defined using Secure Streams; the main difference is
that the user code must assertively create a secure stream of the server type
in order to create the vhost and listening socket.
When this stream is
destroyed, the vhost is destroyed and the listen socket closed; otherwise it
does not perform any rx or tx, it just represents the server lifecycle.

When client connections randomly arrive at the listen socket, new Secure Stream
objects are created along with accept sockets to represent each client
connection. As they represent the incoming connection, their lifecycle is the
same as that of the underlying connection. There is no retry concept, since as
with, eg, http servers, the clients may typically not be routable for new
connections initiated by the server.

Since connections at socket level are already established, new connections are
immediately taken through CREATING, CONNECTING, CONNECTED states for
consistency.

Some underlying protocols like http are "transactional": the server receives
a logical request and must reply with a logical response. The additional
state `LWSSSCS_SERVER_TXN` provides a point where the user code can set
transaction metadata before or in place of sending any payload. It's also
possible to defer this until any rx related to the transaction was received,
but commonly with http requests, there is no rx / body. Configuring the
response there may look like

```
		/*
		 * We do want to ack the transaction...
		 */
		lws_ss_server_ack(m->ss, 0);
		/*
		 * ...
		 * it's going to be text/html...
		 */
		lws_ss_set_metadata(m->ss, "mime", "text/html", 9);
		/*
		 * ... it's going to be 128 bytes (and request tx)
		 */
		lws_ss_request_tx_len(m->ss, 128);
```

Otherwise the general api usage is very similar to client usage.

## Convention for rx and tx callback return

Function|Return|Meaning
---|---|---
tx|`LWSSSSRET_OK`|Send the amount of `buf` stored in `*len`
tx|`LWSSSSRET_TX_DONT_SEND`|Do not send anything
tx|`LWSSSSRET_DISCONNECT_ME`|Close the current connection
tx|`LWSSSSRET_DESTROY_ME`|Destroy the Secure Stream
rx|>=0|Accepted
rx|<0|Close the current connection

# JSON Policy Database

Example JSON policy... formatting is shown for clarity, but whitespace can be
omitted in the actual policy.

Ordering is not critical in itself, but forward references are not allowed;
things must be defined before they are allowed to be referenced later in the
JSON.

```
{
	"release": "01234567",
	"product": "myproduct",
	"schema-version": 1,
	"retry": [{
		"default": {
			"backoff": [1000, 2000, 3000, 5000, 10000],
			"conceal": 5,
			"jitterpc": 20
		}
	}],
	"certs": [{
		"isrg_root_x1": "MIIFazCCA1OgAw...AnX5iItreGCc="
	}, {
		"LEX3_isrg_root_x1": "MIIFjTCCA3WgAwIB...WEsikxqEt"
	}],
	"trust_stores": [{
		"le_via_isrg": ["isrg_root_x1", "LEX3_isrg_root_x1"]
	}],
	"s": [{
		"mintest": {
			"endpoint": "",
			"port": 4443,
			"protocol": "h1get",
			"aux": "index.html",
			"plugins": [],
			"tls": true,
			"opportunistic": true,
			"retry":
			"default",
			"tls_trust_store": "le_via_isrg"
		}
	}]
}
```

### `release`

Identifies the policy version

### `product`

Identifies the product the policy should apply to

### `schema-version`

The minimum version of the policy parser required to parse this policy

### `via-socks5`

Optional redirect for Secure Streams client traffic through a socks5
proxy given in the format `address:port`.

### `retry`

A list of backoff schemes referred to in the policy

### `backoff`

An array of ms delays for each retry in turn

### `conceal`

The number of retries to conceal from higher layers before giving errors. If
this is larger than the number of times in the backoff array, then the last
time is used for the extra delays. 65535 means never stop trying.

### `jitterpc`

Percentage of the delay times mentioned in the backoff array that may be
randomly added to the figure from the array. For example, with an array entry
of 1000ms and a jitterpc of 20%, actual delays will be chosen randomly from
1000ms through 1200ms. This is to stop retry storms triggered by a single
event like an outage becoming synchronized into a DoS.

### `certs`

Certificates needed for validation should be listed here, each with a name.
The format is base64 DER, which is the same as the part of PEM that is inside
the start and end lines.

### `trust_stores`

Chains of certificates given in the `certs` section may be named and described
inside the `trust_stores` section. Each entry in `trust_stores` is created as
a vhost + tls context with the given name.
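As a side note on the `certs` format, the base64-DER body can be recovered
from a PEM blob mechanically, by dropping the BEGIN/END lines and the
newlines. `pem_to_b64der()` below is an invented helper for illustration, not
an lws API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Illustrative helper (not an lws API): extract the base64 body of a PEM
 * blob, i.e. everything between the -----BEGIN...----- and -----END...-----
 * lines with the newlines removed, which is the form the "certs" section
 * expects.  Returns the length written to out, or -1 on error.
 */
static int
pem_to_b64der(const char *pem, char *out, size_t out_len)
{
	const char *p = strstr(pem, "-----BEGIN");
	size_t n = 0;

	if (!p || !(p = strchr(p, '\n')))
		return -1;
	p++;

	/* copy base64 payload lines until the END marker, skipping EOLs */
	while (*p && strncmp(p, "-----END", 8)) {
		if (*p != '\n' && *p != '\r') {
			if (n + 1 >= out_len)
				return -1;
			out[n++] = *p;
		}
		p++;
	}
	out[n] = '\0';

	return (int)n;
}
```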
Stream types can later be associated
with one of these to enforce validity checking of the remote server.

Entries should be named using "name" and the stack array defined using "stack".

### `auth`

Optional section describing a map of available authentication streamtypes to
auth token blob indexes.

```
...
 "auth": [{"name":"newauth","type":"sigv4", "blob":0}]
...
```

Streams can indicate they depend on a valid auth token from one of these
schemes by using the `"use_auth": "name"` member in the streamtype definition,
where name is, eg, "newauth" in the example above. If "use_auth" is not in the
streamtype definition, the default auth is lwa if "http_auth_header" is there.

### `auth[].name`

This is the name of the authentication scheme used by other streamtypes

### `auth[].type`

Indicates the auth type, eg, sigv4

### `auth[].streamtype`

This is the auth streamtype to be used to refresh the authentication token

### `auth[].blob`

This is the auth blob index the authentication token is stored into and
retrieved from in the system blobs; currently up to 4 blobs are supported.

### `s`

This is an array of policies for the supported stream type names.

### `server`

**SERVER ONLY**: if set to `true`, the policy describes a secure streams
server.

### `endpoint`

**CLIENT**: The DNS address the secure stream should connect to.

This may contain string symbols which will be replaced with the
corresponding streamtype metadata value at runtime.
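The effect of that runtime substitution can be sketched in a few lines.
`subst_one()` and the endpoint pattern `api-${region}.example.com` are
hypothetical, for illustration only; lws performs this internally:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy illustration (not an lws API): replace a single ${sym} occurrence
 * in an endpoint pattern with the current metadata value.
 */
static int
subst_one(const char *pattern, const char *sym, const char *val,
	  char *out, size_t out_len)
{
	char key[64];
	const char *p;

	snprintf(key, sizeof(key), "${%s}", sym);
	p = strstr(pattern, key);
	if (!p)
		return -1;	/* symbol not present in the pattern */

	if (snprintf(out, out_len, "%.*s%s%s", (int)(p - pattern), pattern,
		     val, p + strlen(key)) >= (int)out_len)
		return -1;	/* truncated */

	return 0;
}
```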
Eg, if the
streamtype lists a metadata name "region", it's then possible to
define the endpoint as, eg, `${region}`, and before
attempting the connection set the stream's metadata item
"region" to the desired value, eg, "uk".

If the endpoint string begins with `+`, then it's understood to
mean a connection to a Unix Domain Socket; for Linux, `+@` means
the following Unix Domain Socket is in the Linux Abstract
Namespace and doesn't have a filesystem footprint. This is only
supported on unix-type and windows platforms, and when lws was
configured with `-DLWS_UNIX_SOCK=1`

**SERVER**: If given, the network interface name or IP address the listen
socket should bind to.

**SERVER**: If it begins with '!', the rest of the endpoint name is the
vhost name of an existing vhost to bind to, instead of creating a new
one. This is useful when the vhost layout is already being managed by
lejp-conf JSON and it's more convenient to put the details in there.

### `port`

**CLIENT**: The port number as an integer on the endpoint to connect to

**SERVER**: The port number the server will listen on

### `protocol`

**CLIENT**: The wire protocol to connect to the endpoint with. Currently
supported wire protocols are

|Wire protocol|Description|
|---|---|
|h1|http/1|
|h2|http/2|
|ws|http/1 Websockets|
|mqtt|mqtt 3.1.1|
|raw||

Raw protocol is a bit different from the others in that there is no protocol
framing; whatever is received on the connection is passed to the user rx
callback, and whatever the tx callback provides is issued on to the connection.
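Since raw carries no framing, user code that needs message boundaries has to
impose its own. The toy length-prefix reframer below (not an lws API; the
1-byte-length wire format is invented for illustration) shows one way to
recover messages from arbitrarily fragmented rx:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Toy reframer for a raw bytestream: messages are a 1-byte payload length
 * followed by the payload, and rx may arrive fragmented at any byte.
 * Feed fragments in; complete messages are handed to the callback.
 */
struct reframer {
	uint8_t	buf[256];
	size_t	have;		/* bytes accumulated so far */
};

static void
reframe_rx(struct reframer *r, const uint8_t *frag, size_t len,
	   void (*deliver)(const uint8_t *msg, size_t len, void *arg),
	   void *arg)
{
	while (len--) {
		r->buf[r->have++] = *frag++;

		/* first accumulated byte is the expected payload length */
		if (r->have && r->have - 1 == r->buf[0]) {
			deliver(r->buf + 1, r->buf[0], arg);
			r->have = 0;
		}
	}
}
```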
Because tcp can be
arbitrarily fragmented by any intermediary, such streams have to be regarded
as an ordered bytestream that may be fragmented at any byte, without any
meaning in terms of message boundaries; for that reason SOM and EOM are
ignored with raw.

### `allow_redirects`

By default redirects are not followed; if you wish a streamtype to observe
them, eg, because that's how it responds to a POST, set
`"allow_redirects": true`

### `tls`

Set to `true` to enforce the stream travelling in a tls tunnel

### `client cert`

Set if the stream needs to authenticate itself using a tls client certificate.
Set to the certificate index counting from 0+. The certificates are managed
using lws_system blobs.

### `opportunistic`

Set to `true` if the connection may be left dropped except when in use

### `nailed_up`

Set to `true` to have lws retry if the connection carrying this stream should
ever drop.

### `retry`

The name of the policy described in the `retry` section to apply to this
connection for retry + backoff

### `timeout_ms`

Optional timeout associated with streams of this streamtype.

If user code applies the `lws_ss_start_timeout()` api on a stream with a
timeout of LWSSS_TIMEOUT_FROM_POLICY, the `timeout_ms` entry given in the
policy is applied.

### `perf`

If set to true, and lws was built with `LWS_WITH_CONMON`, causes this
streamtype to receive additional rx payload with the `LWSSS_FLAG_PERF_JSON`
flag set on it, that is JSON representing the onward connection performance
information.

These are based on the information captured in the struct defined in
libwebsockets/lws-conmon.h, represented in JSON

```
	{
	 "peer": "",
	 "dns_us": 1234,
	 "sockconn_us": 1234,
	 "tls_us": 1234,
	 "txn_resp_us": 1234,
	 "dns": ["",
	         "2001:41d0:2:ee93::1"]
	}
```

Streamtypes without "perf": true will never see the special rx payloads.
Notice that the `LWSSS_FLAG_PERF_JSON` payloads must be handled out of band
from the normal payloads, as they can appear inside normal payload messages.

### `tls_trust_store`

The name of the trust store described in the `trust_stores` section to apply
to validate the remote server cert.

If missing and tls is enabled on the streamtype, then validation is
attempted using the OS trust store; otherwise the connection fails.

### `use_auth`

Indicates that the streamtype should use the named auth type from the `auth`
array in the policy

### `aws_region`

Indicates which metadata should be used to set the aws region for certain
streamtypes

### `aws_service`

Indicates which metadata should be used to set the aws service for certain
streamtypes

### `direct_proto_str`

If set to `true`, the application can use `lws_ss_set_metadata()` to directly
set a protocol-related string, and `lws_ss_get_metadata()` to fetch one. Note
that currently HTTP headers are the only supported protocol strings. The
`name` parameter is the HTTP header name (**with ':'**, eg,
`"Content-Type:"`) and `value` is the header's value. The
`LWS_WITH_SS_DIRECT_PROTOCOL_STR` flag needs to be enabled at compile time
for this. Currently it only works in the non-proxy case.

### `server_cert`

**SERVER ONLY**: subject to change... the name of the x.509 cert that is the
server's tls certificate

### `server_key`

**SERVER ONLY**: subject to change...
the name of the x.509 key that is the
server's tls key

### `swake_validity`

Set to `true` if this streamtype is important enough to the functioning of the
device that its locally-initiated periodic connection validity checks, at the
interval described in the associated retry / backoff selection, are important
enough to wake the whole system from low power suspend so they happen on
schedule.

### `proxy_buflen`

Only used when the streamtype is proxied... sets the maximum size of the
payload buffering (in bytes) the proxy will hold for this type of stream. If
the endpoint dumps a lot of data without any flow control, this may need to
be correspondingly large. Default is 32KB.

### `proxy_buflen_rxflow_on_above`, `proxy_buflen_rxflow_off_below`

When `proxy_buflen` is set, you can also wire up the amount of buffered
data intended for the client held at the proxy, to the onward ss wsi
rx flow control state. If more than `proxy_buflen_rxflow_on_above`
bytes are buffered, rx flow control is set, stopping further rx. Once
the dsh is drained below `proxy_buflen_rxflow_off_below`, the rx flow
control is released and rx resumes.

### `client_buflen`

Only used when the streamtype is proxied... sets the maximum size of the
payload buffering (in bytes) the client will hold for this type of stream. If
the client sends a lot of data without any flow control, this may need to
be correspondingly large. Default is 32KB.

### `attr_priority`

A number between 0 (normal priority) and 6 (very high priority). 7 is also
possible, but requires CAP_NET_ADMIN on Linux and is reserved for network
administration packets.
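These attributes are applied by lws internally; the mechanism they map onto
for IP transports is the ToS byte on the onward connection's socket. A
minimal sketch of that mechanism (illustrative only, not lws code), here
setting the "low delay" flag that `attr_low_latency` below corresponds to:

```c
#include <assert.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Illustration of the underlying mechanism: mark a socket's IP ToS byte
 * with the "low delay" flag, as the attr_low_latency policy flag implies.
 * lws performs the equivalent internally when the policy asks for it.
 */
static int
apply_low_latency(int fd)
{
	int tos = IPTOS_LOWDELAY;

	return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
}
```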
Normally the default priority is fine, but under some
conditions, when transporting over IP packets, you may want to control the
IP packet ToS priority for the streamtype by using this.

### `attr_low_latency`

This is a flag indicating that the streamtype packets should be transported
in a way that results in lower latency where there is a choice. For IP
packets, this sets the ToS "low delay" flag on packets from this streamtype.

### `attr_high_throughput`

This is a flag indicating that this streamtype should be expected to produce
bulk content that requires high throughput. For IP packets,
this sets the ToS "high throughput" flag on packets from this streamtype.

### `attr_high_reliability`

This is a flag indicating that extra efforts should be made to deliver packets
from this streamtype where possible. For IP packets, this sets the ToS "high
reliability" flag on packets from this streamtype.

### `attr_low_cost`

This is a flag indicating that packets from this streamtype should be routed
as inexpensively as possible, by trading off latency and reliability where
there is a choice. For IP packets, this sets the ToS "low cost" flag on
packets from this streamtype.

### `metadata`

This allows declaring dynamic symbol names to be used by the streamtype,
along with an optional mapping to a protocol-specific entity such as a given
http header.
Eg:

```
		"metadata": [ { "myname": "" }, { "ctype": "content-type:" } ],
```

In this example "ctype" is associated with the http header "content-type",
while "myname" doesn't have any association to a header.

Symbol names may be used elsewhere in the policy for the streamtype for string
substitution, using syntax like `xxx${myname}yyy`; forward references are
valid, but the scope of the symbols is just the streamtype the metadata is
defined for.

Client code can set metadata by name using the `lws_ss_set_metadata()` api;
this should be done before a transaction. For metadata associated with a
protocol-specific entity, like http headers, if incoming responses contain the
mentioned header, the metadata symbol is set to that value at the client
before any rx proceeds.

Metadata continues to work the same for the client in the case it is proxying
its connectivity; metadata is passed in both directions serialized over the
proxy link.

## http transport

### `http_method`

HTTP method to use with http-related protocols, like GET or POST.
Not required for ws.

### `http_expect`

Optionally indicates that success for HTTP transactions using this
streamtype is different than the default 200 - 299.

Eg, you may choose to set this to 204 for Captive Portal Detect usage
if that's what you expect the server to reply with to indicate
success. In that case, anything other than 204 will be treated as a
connection failure.

### `http_fail_redirect`

Set to `true` if you want to fail the connection on meeting an
http redirect. This is needed to, eg, detect Captive Portals
correctly.
Normally, on https, you would want the default behaviour
of following the redirect.

### `http_url`

Url path to use with http-related protocols

The URL path can include metadata like this

"/mypath?whatever=${metadataname}"

${metadataname} will be replaced by the current value of the
same metadata name. The metadata names must be listed in the
"metadata": [ ] section.

### `http_resp_map`

If your server overloads the meaning of the http transport response code with
server-custom application codes, you can map these to discrete Secure Streams
state callbacks using a JSON map, eg

```
		"http_resp_map": [ { "530": 1530 }, { "531": 1531 } ],
```

It's not recommended to abuse the transport layer http response code by
mixing it with application state information like this, but if you're dealing
with a legacy serverside that takes this approach, it's possible to handle it
in SS this way while removing the dependency on http.

### `http_auth_header`

The name of the header that takes the auth token, with a trailing ':', eg

```
 "http_auth_header": "authorization:"
```

### `http_dsn_header`

The name of the header that takes the dsn token, with a trailing ':', eg

```
 "http_dsn_header": "x-dsn:"
```

### `http_fwv_header`

The name of the header that takes the firmware version token, with a trailing
':', eg

```
 "http_fwv_header": "x-fw-version:"
```

### `http_devtype_header`

The name of the header that takes the device type token, with a trailing ':',
eg

```
 "http_devtype_header": "x-device-type:"
```

### `http_auth_preamble`

An optional string that precedes the auth token, eg

```
 "http_auth_preamble": "bearer "
```

### `auth_hexify`

Convert the auth token to hex ('A' -> "41") before transporting.
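The transform itself is straightforward; a sketch of the idea (not the lws
implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Illustrative sketch of the auth_hexify transform: each byte of a
 * possibly-binary token becomes two printable hex digits, so 'A' (0x41)
 * becomes "41".  out must have room for 2 * len + 1 bytes.
 */
static void
auth_hexify(const unsigned char *tok, size_t len, char *out)
{
	static const char hex[] = "0123456789abcdef";
	size_t n;

	for (n = 0; n < len; n++) {
		out[2 * n]     = hex[tok[n] >> 4];
		out[2 * n + 1] = hex[tok[n] & 0xf];
	}
	out[2 * len] = '\0';
}
```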
Not necessary if the
auth token is already in a printable string format suitable for transport.
Needed if the auth token is a chunk of 8-bit binary.

### `nghttp2_quirk_end_stream`

Set this to `true` if the peer server has the quirk that it won't send a
response until we have sent an `END_STREAM`, even though we have sent headers
with `END_HEADERS`.

### `h2q_oflow_txcr`

Set this to `true` if the peer server has the quirk that it sends a maximum
initial tx credit of 0x7fffffff and then later increments it illegally.

### `http_multipart_ss_in`

Indicates that SS should parse any incoming multipart mime on this stream

### `http_multipart_name`

Indicates this stream goes out using multipart mime, and provides the name
part of the multipart header

### `http_multipart_filename`

Indicates this stream goes out using multipart mime, and provides the filename
part of the multipart header

### `http_multipart_content_type`

The `content-type` to mark up the multipart mime section with, if present

### `http_www_form_urlencoded`

Indicates the data is sent in `x-www-form-urlencoded` form

### `http_cookies`

This streamtype should store and bring out http cookies from the peer.

### `rideshare`

For special cases where one logically separate stream travels with another
when using this protocol.
Eg, a single multipart mime transaction carries content from two or more
streams.

## ws transport

### `ws_subprotocol`

**CLIENT**: Name of the ws subprotocol to request from the server

**SERVER**: Name of the subprotocol we will accept

### `ws_binary`

Use if the ws messages are binary

### `ws_prioritize_reads`

Set `true` if the event loop should prioritize keeping up with input at the
potential expense of output latency.

## MQTT transport

### `mqtt_topic`

Set the topic this streamtype uses for writes

### `mqtt_subscribe`

Set the topic this streamtype subscribes to

### `mqtt_qos`

Set the QoS level for this streamtype

### `mqtt_retain`

Set to true if this streamtype should use MQTT's "retain" feature.

### `mqtt_keep_alive`

16-bit number representing the MQTT keep alive for the stream.

This is applied at connection time... where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

### `mqtt_clean_start`

Set to true if the connection should use MQTT's "clean start" feature.

This is applied at connection time... where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

### `mqtt_will_topic`

Set the topic of the connection's will message, if any (there is none by
default).

This is applied at connection time... where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

### `mqtt_will_message`

Set the content of the connection's will message, if any (there is none by
default).

This is applied at connection time...
where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

### `mqtt_will_qos`

Set the QoS of the will message, if any (there is none by default).

This is applied at connection time... where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

### `mqtt_will_retain`

Set to true if the connection should use MQTT's "will retain" feature, if
there is a will message (there is none by default).

This is applied at connection time... where different streams may bind to the
same underlying MQTT connection, all the streams should have an identical
setting for this.

## Loading and using updated remote policy

If the default, hardcoded policy includes a streamtype `fetch_policy`, then
during startup, when lws_system reaches the POLICY state, lws will use a
Secure Stream of type `fetch_policy` to download and parse an updated policy,
and switch to using it.

The secure-streams-proxy minimal example shows how this is done; it fetches
its real policy at startup using the built-in one.

## Applying streamtype policy overlays

This is intended for modifying policies at runtime for testing, eg, to
force error paths to be taken. After the main policy is processed, you
may parse additional, usually smaller, policy fragments on top of it.

Where streamtype names in the new fragment already exist in the current
parsed policy, the settings in the fragment are applied over the parsed
policy, overriding settings.
There's a simple api to enable this by
giving it the override JSON in one string

```
int
lws_ss_policy_overlay(struct lws_context *context, const char *overlay);
```

but there are also other apis available that can statefully process
larger overlay fragments if needed.

An example overlay fragment looks like this

```
	{ "s": [{ "captive_portal_detect": {
		"endpoint": "",
		"http_url": "/",
		"port": 80
	}}]}
```

ie, the overlay fragment completely follows the structure of the main policy,
it just omits anything it doesn't override.

Currently ONLY streamtypes may be overridden.

You can see an example of this in use in the `minimal-secure-streams` example,
where the `--force-portal` and `--force-no-internet` options cause the captive
portal detect streamtype to be overridden to force the requested kind of
outcome.

## Captive Portal Detection

If the policy contains a streamtype `captive_portal_detect`, then the
type of transaction described there is automatically performed after
acquiring a DHCP address, to try to determine the captive portal
situation.

```
		"captive_portal_detect": {
			"endpoint": "",
			"port": 80,
			"protocol": "h1",
			"http_method": "GET",
			"http_url": "generate_204",
			"opportunistic": true,
			"http_expect": 204,
			"http_fail_redirect": true
		}
```

## Stream serialization and proxying

By default Secure Streams expects to make the outgoing connection described in
the policy in the same process / thread; this suits the case where all the
participating clients are in the same statically-linked image.

In this case the `lws_ss_` apis are fulfilled locally by secure-streams.c, and
policy.c for policy lookups.

However it also supports serialization, where the SS api can be streamed over
another
transport such as a Unix Domain Socket connection. This suits the case
where the clients are actually in different processes in, eg, Linux or
Android.

In those cases, you run a proxy process (minimal-secure-streams-proxy) that
listens on a Unix Domain Socket and is connected to by one or more other
processes that pass their SS API activity to the proxy for fulfilment (or
onward proxying).

Each Secure Stream that is created then in turn creates a private Unix Domain
Socket connection to the proxy for each stream.

In this case the proxy uses secure-streams.c and policy.c as before to fulfil
the inbound proxy streams, but uses secure-streams-serialize.c to serialize
and deserialize the proxied SS API activity. The proxy clients define
LWS_SS_USE_SSPC either very early in their sources before the includes, or on
the compiler commandline... this causes the lws_ss_ apis to be replaced at
preprocessor time with lws_sspc_ equivalents. These serialize the api action
and pass it to the proxy over a Unix Domain Socket for fulfilment; the results
and state changes etc are streamed over the Unix Domain Socket and presented
to the application exactly the same as if it was being fulfilled locally.

To demonstrate this, some minimal examples, eg, minimal-secure-streams and
minimal-secure-streams-avs, build themselves both ways, once with direct SS
API fulfilment and once with Unix Domain Socket proxying and -client appended
on the executable name.
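The preprocessor substitution pattern can be illustrated with stand-in names
(`toy_ss_describe` and friends are invented here; the real mapping is from
`lws_ss_` names to their `lws_sspc_` equivalents in the lws headers):

```c
#include <assert.h>
#include <string.h>

/*
 * Minimal illustration of the renaming mechanism, with invented names.
 * When the guard macro is defined, api names are redirected to the
 * serializing variants at preprocessor time; application code is unchanged.
 */
static const char *
toy_ss_describe(void)		/* direct-fulfilment variant */
{
	return "direct";
}

static const char *
toy_sspc_describe(void)		/* serializing, via-proxy variant */
{
	return "proxied";
}

#define TOY_SS_USE_SSPC		/* stand-in for defining LWS_SS_USE_SSPC */

#if defined(TOY_SS_USE_SSPC)
#define toy_ss_describe toy_sspc_describe
#endif

/* application code below this point now calls the proxied variant */
```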
To test the -client variants, run minimal-secure-streams-proxy
on the same machine.

## Complicated scenarios with secure streams proxy

As mentioned above, Secure Streams has two modes; by default the application
directly parses the policy and makes the outgoing connections itself.
However, when lws is configured at cmake with

```
-DLWS_WITH_SOCKS5=1 -DLWS_WITH_SECURE_STREAMS=1 -DLWS_WITH_SECURE_STREAMS_PROXY_API=1 -DLWS_WITH_MINIMAL_EXAMPLES=1
```

and `LWS_SS_USE_SSPC` is defined when building the application, applications
forward their network requests to a local or remote SS proxy for fulfilment...
and only the SS proxy has the system policy. By default, the SS proxy is on
the local machine and is connected to via a Unix Domain Socket, but tcp links
are also possible. (Note the proxied traffic is not encrypted by default.)

Using the configuration above, the example SS applications are built two ways,
once for direct connection fulfilment (eg, `./bin/lws-minimal-secure-streams`),
and once with `LWS_SS_USE_SSPC` also defined so it connects via an SS proxy
(eg, `./bin/lws-minimal-secure-streams-client`).

## Testing an example scenario with SS Proxy and socks5 proxy

```
 [ SS application ] --- tcp --- [ socks 5 proxy ] --- tcp --- [ SS proxy ] --- internet
```

In this scenario, everything is on localhost; the socks5 proxy listens on
:1337 and the SS proxy listens on :1234.
The SS application connects to the socks5
proxy to get to the SS proxy, which then goes out to the internet.

### 1 Start the SS proxy

Tell it to listen on the lo interface on port 1234

```
$ ./bin/lws-minimal-secure-streams-proxy -p 1234 -i lo
```

### 2 Start the SOCKS5 proxy

```
$ ssh -D 1337 -N -v localhost
```

The -v makes connections to the proxy visible in the terminal for testing.

### 3 Run the SS application

The application is told to make all connections via the socks5 proxy at
127.0.0.1:1337, and to fulfil its SS connections via an SS proxy, binding
connections to (the ipv4 lo interface, -i), connecting to

```
socks_proxy=127.0.0.1:1337 ./bin/lws-minimal-secure-streams-client -p 1234 -i -a
```

You can confirm this goes through the ssh socks5 proxy to get to the SS proxy
and fulfil the connection.

## Using static policies

If one of your targets is too constrained to make use of dynamic JSON
policies, but using SS and the policies is attractive for wider reasons, you
can use a static policy built into the firmware for the constrained target.

The secure-streams example "policy2c" (which runs on the build machine, not
the device) accepts a normal JSON policy on stdin, and emits a C code
representation that can be included directly in the firmware.

Using this technique it's possible to standardize on maintaining JSON policies
across a range of devices with different constraints, and use the C conversion
of the policy on devices that are too small.

The CMake option `LWS_WITH_SECURE_STREAMS_STATIC_POLICY_ONLY` should be
enabled to use this mode; it will not build the JSON parser (and the option
for LEJP can also be disabled if you're not otherwise using it, saving an
additional couple of KB).

Notice the policy2c example tool must be built with `LWS_ROLE_H1`,
`LWS_ROLE_H2`, `LWS_ROLE_WS` and `LWS_ROLE_MQTT` enabled, so it can handle any
kind of policy.

## HTTP and ws serving

All ws
servers start out as http servers... for that reason, ws serving is
handled as part of http serving; if you additionally give the `ws_subprotocol`
entry to the streamtype, the server will also accept upgrades to ws.

To help the user code understand if the upgrade occurred, there's a special
state `LWSSSCS_SERVER_UPGRADE`, so subsequent rx and tx can be understood to
have come from the upgraded protocol. To allow separation of rx and tx
handling between http and ws, there's an ss api `lws_ss_change_handlers()`
which allows dynamically setting the SS handlers.

Since the http and ws upgrade identity is encapsulated in one streamtype, the
user object for the server streamtype should contain related user data for
both the http and ws underlying protocol identity.
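A hypothetical shape for such a user object (the struct and member names here
are invented for illustration; real lws user code would also carry its
`struct lws_ss_handle *`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of a per-stream user object for a combined http + ws server
 * streamtype: since one streamtype covers both the http phase and the
 * post-upgrade ws phase, it carries state for each, plus a flag set when
 * LWSSSCS_SERVER_UPGRADE is observed.
 */
struct myss_srv {
	/* struct lws_ss_handle *ss; would live here in real lws user code */
	bool	upgraded_to_ws;		/* set on LWSSSCS_SERVER_UPGRADE */

	struct {			/* http-phase state */
		int	resp_code;
	} http;

	struct {			/* ws-phase state */
		size_t	msgs_rx;
	} ws;
};
```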