[Libwebsockets] Do we have a plan to support WebSocket over HTTP/2 client in libwebsockets?

zhang guojun guojunzhang1981 at gmail.com
Thu Sep 20 08:53:37 CEST 2018

Andy, you reply really quickly. Thanks a lot.
See my comments inline.


> On Sep 19, 2018, at 11:08 PM, Andy Green <andy at warmcat.com> wrote:
> On September 20, 2018 1:51:52 PM GMT+08:00, zhang guojun <guojunzhang1981 at gmail.com> wrote:
>> Thanks Andy, I learn a lot from your reply.
>> I agree with your answer on the case I said. In that case H2 may not
>> helpful.
>> Let me introduce another case: a specific client has multiple
>> applications trying to push data to the server at almost the same
>> time, but one application sends huge data, say 5 MB, while another,
>> latency-sensitive application sends a small request and expects an
>> immediate response from the server.
> This sounds more like a real-world problem.
> But if you open two separate h1 ws connections back to the server, there's no relationship between the connection sending a 5MB message and the time-sensitive smaller message on a different tcp connection, right?  Traffic from both will be interleaved and the small message will appear quickly.
> If the two logical ws messages are actually different logical ws subprotocols, you'd normally do that as two separate ws connections anyway.
[GZ]: Yeah, this is a real-world problem, and I agree two or more connections could solve it. But actually there are thousands and thousands of such clients, and each client has more than ten such applications; if I create multiple connections, that will put much more pressure on the server :-(.
>> 1) in a websocket tunnel, the second, latency-sensitive application
>> has to wait for the previous huge packet to finish, although we send
>> the packet in fragmented frames.
> If you try to do both jobs on one tcp socket, yes... once a ws message starts, then unless you fragmented the message, apart from control packets (CLOSE, PING, PONG) it must continue until the message is complete.  Ws itself has no mux concept.
>> 2) in an HTTP/2 tunnel, since http2 has a stream concept, multiple
>> applications can send requests or post data at the same time, so the
>> server can receive the second application's request in time and send
>> back the response.
>> So H2 should solve this scenario in my case, right?
> Yeah... it will be the same or worse (depending on what else the bundle is shared with) than doing it as two separate ws tcp connections.
> With multiple client http connection binding, it happens automatically just by opening the individual client connections without knowledge of each other... lws sees they're going to the same host, that one is already connected or connecting there, and binds them together automatically.
> https://libwebsockets.org/git/libwebsockets/tree/minimal-examples/http-client/minimal-http-client-multi
> When eventually client ws-over-h2 support appears it will likely work the same way.  So you should just open one ws connection each for big messages using one subprotocol (sending a smaller ws fragment on each WRITABLE) and another for urgent messages using another subprotocol.  That will solve your problem today on h1 and probably be compatible to automatically use client ws-over-h2 when it appears.
[GZ]: It’s a great feature, good to know. I clearly haven’t looked at libwebsockets for a long time.

[GZ]: I personally like websocket more than H2 in C/S mode. I thought ws-over-h2 would inherit the mux from H2, but it looks like it doesn’t. I’m not sure if you have any chance to do something about that before it becomes an RFC. :-)

> -Andy
>> Thanks
>> Guojun
>>> On Sep 19, 2018, at 12:29 AM, Andy Green <andy at warmcat.com> wrote:
>>> On 19/09/2018 15:11, zhang guojun wrote:
>>>> Andy, glad to receive your email.
>>>> Thank you for the very detailed reply.
>>>> Please see my comments inline in red.
>>>> Thanks
>>>> Guojun
>>>>> On Sep 18, 2018, at 3:39 PM, Andy Green <andy at warmcat.com
>> <mailto:andy at warmcat.com>> wrote:
>>>>> On 19/09/2018 00:40, zhang guojun wrote:
>>>>>> Hi Andy,
>>>>>> Thanks for the quick reply and very detailed explanation.
>>>>>> In my case, there is a management server managing various network
>> devices. Right now we are using websocket as the transport tunnel, but
>> as we know, we can see head-of-line blocking issues.
>>>>> I know what that is, but what do you mean by it?
>>>>> Actually you can't properly solve that without QUIC... but since
>> you talk about a non-h2 ws solution these are presumably individual tcp
>> connections... one can't block the other.
>>>>> TCP itself can block itself where there is packet loss due to its
>> requirement to deliver in-order... if that's what you mean h2 won't
>> solve it since it's also on top of TCP.  QUIC is designed on top of UDP
>> transport and to the extent its packets are formed from content from
>> one stream, loss + retry does not block packets containing content from
>> other streams.
>>>>> You should clarify exactly where the blocking you think makes
>> trouble is coming from, because h2 won't necessarily solve it.
>>>> [GZ]: in my case, multiple devices send messages to the websocket
>> server at almost the same time, but one of the devices has a real-time
>> application that needs higher priority and lower latency; if some of
>> the connections occupy
>>> Mmm... the network decides about the latency.  Network programming is
>> not like designing a system where everything is local, you cannot
>> always control latency.  You can do things to reduce it generally but
>> if your application cares so much about it and is using tcp /
>> websockets, it's likely got problems.
>>>> a longer time for responses, it will cause the higher-priority
>> connection to block. Since H2 has multiplexing, it should solve the
>> problem in my case.
>>> No, latency on two h2 streams being muxed is always going to be worse
>> than two separate h1 connections.  So if you already see this problem,
>> h2 won't fix it.
>>>> My understanding is that websocket doesn't have multiplexing; if I
>> understand wrong, please correct me.
>>>> I agree with you, QUIC solves TCP-level head-of-line blocking. It
>> should be the ultimate solution to all kinds of network blocking issues.
>>>>>> So I am considering websocket over h2; that way we can reuse
>> some of the existing code. We may lose a little bandwidth compared to
>> raw websocket, but that is still acceptable.
>>>>>> I went through the ws-over-h2 draft; the technology is flawless
>> and you have already implemented the server-side code. From your
>> perspective, why hasn't the draft become an RFC yet?
>>>>> It's simply going through the process, which takes time.  The
>> author is one of the Great and Good from Mozilla, and the proposal is
>> very reasonable and limited in scope.
>>>> [GZ]: that’s great, looking forward to seeing it become an RFC soon.
>>>>>> I’m also considering to use H2 directly.
>>>>> By default what flows on DATA is understood to be http.  What
>> happens without a content-length on both sides and unlimited DATA in
>> both directions depends on the implementation... this isn't in the h2
>> spec and the compliance tools don't test for it.  ws-over-h2 upgrades
>> the connection so it is logically no longer http protocol on the
>> stream.
>>>> [GZ]: Looks like I had some misunderstanding of ws-over-h2; I
>> thought the websocket frame was the payload of HTTP/2. Going through
>> the new protocol again, I get it: after the handshake, it is just
>> websocket on the stream. So
>>> Yes, the protocol on the stream is told to be "websocket" after the
>> ws-over-h2 upgrade.  So endpoints that understand that no longer think
>> of traffic on the stream in http terms.
>>>> my question is, does ws-over-h2 still inherit the multiplexing of
>> H2? Or do you think multiplexing is not necessary for ws-over-h2?
>>> It doesn't really 'inherit' anything, the only way for an h2 stream
>> to participate on the h2 "bundle" / "master" / "network connection" is
>> as a subordinate h2 stream.
>>> So it must follow the rules about h2 tx credit... it can't send
>> anything the remote stream endpoint hasn't already told it is willing
>> to receive. The ws-over-h2 stream WRITABLE callback is not directly
>> related to the "bundle" being WRITABLE, because it does not itself own
>> the network connection... it participates in a round-robin sharing of
>> the bundle / connection writability with all the streams, be they http
>> or ws protocol on them.
>>> In other words, there's a new way ws streams on h2 have to "wait
>> their turn".  Hence ws-over-h2 is always going to have same or worse
>> latency than ws-over-h1.
>>>>>> BTW, does libwebsockets support SERVER PUSH and POST?
>>>>> There's no lws api to use PUSH_PROMISE.  I proposed on httpbis that
>> ws-over-h2 support PUSH_PROMISE, because this would allow what was
>> originally a "GET index.html" to have sent the first data on a ws link
>> index.html will want to open before the client has finished receiving
>> index.html... instead of a RTT setting up the ws link much later it
>> could have delivered the first data before the browser realized it
>> wanted the ws link: instant ws data as soon as the JS opened the ws
>> without any network activity.  But it was told it complicated the spec
>> too much.
>>>>> For me I don't have a need to implement PUSH_PROMISE without that.
>> PUSH_PROMISE on http traffic has the internal contradiction the server
>> doesn't know what the client has in its private cache.  So if it starts
>> setting up and partially sending CSS, JS, IMG or whatever, after the
>> first time if the private cache policy is reasonable, that is just
>> complete waste and the client will ignore the streams every time since
>> it has them in private cache already.  So PUSH_PROMISE is kind of
>> useless AFAIK.
>>>> [GZ]: hope I understand right: it may be useless in B/S mode, but
>> it is still useful in C/S mode. In our case (C/S mode), the server
>> pushes commands to clients whenever it wants, and clients need to apply
>> each command and report the status back to the server. There is no
>> duplicate data pushed to the client. So as an end-to-end transport,
>> PUSH_PROMISE + POST provide a bi-directional transmit capability.
>>> That is not what h2 "server push" / PUSH_PROMISE does.  Section 8.2
>> is literally called "server push" in the RFC
>>> https://tools.ietf.org/html/rfc7540#section-8.2
>>> You seem to be wrongly thinking it's some kind of long poll mechanism
>> but it's completely unrelated.  It's the server trying to get ahead of
>> things by inferring that if it is sending you index.html, you will soon
>> want mysite.css, mysite.js and starting the streams for that
>> speculatively.
>>> You can open an XHR back to the server and do something like that
>> though.
>>> But if your basic problem is a tight latency requirement, and it's
>> not that your existing implementation is just poor, you need to go back
>> and study exactly where this latency is coming from.
>>> -Andy
>>>>> Lws h2 supports POST (including multipart / file upload), CGI
>> (translating headers from the h1-only CGI) and proxying h2 <-> h1,
>> including on unix domain sockets.
>>>>> -Andy
>>>>>> Thanks
>>>>>> Guojun
>>>>>>> On Sep 14, 2018, at 5:57 PM, Andy Green <andy at warmcat.com
>> <mailto:andy at warmcat.com>> wrote:
>>>>>>> On 15/09/2018 02:10, zhang guojun wrote:
>>>>>>>> Dear libwebsockets developers.
>>>>>>>> I’m glad to see libwebsockets supports websocket over http2 on
>> the server side; do we have any plan to support websocket over HTTP/2
>> client?
>>>>>>> Yeah, but not immediately.  H2 client is working (including multi
>> client stream coalescing into one h2 connection) for HTTP GET already,
>> just not the ws-over-h2.
>>>>>>> - I don't personally need it atm.  There's a really big benefit
>> implementing server side as lws already has, if your client is a
>> browser with it implemented.  Once the TLS + h2 is up for fetching html
>> / css / js, an additional ws connection can be established inside the
>> h2 connection, and start sending data, in just one RTT.  It only
>> benefits clients that are opening multiple streams on the server... if
>> the client just opens the one ws connection, it still has to do the TLS
>> and start the h2 connection from scratch, so there's no advantage for
>> that case.
>>>>>>> - It's not a small subproject... it interacts with both h1, h2
>> and ws roles / parsers and h2 needs all stream tx initiated from the h2
>> round-robin scheduler.
>>>>>>> - Aside from the Chrome browser developers who contacted me while
>> we mutually used each other's implementation for testing, IIRC you're
>> the first person to acknowledge the existence of even the server work.
>>>>>>> - Although the draft RFC is small and unlikely to change, it's
>> not official yet (it seems just a matter of time though)
>>>>>>> - Implementation status outside of Chrome (it's in Canary 67+ if
>> you enable special flags) is opaque.  I asked twice on httpbis and just
>> got ignored.  I guess they won't enable it by default until the RFC is
>> formally accepted.  If you know the server has it and your non-browser
>> client has it, you don't care about browser status though.
>>>>>>> Of course if someone wants to pay my consultancy rate for the
>> couple of weeks it would take I could get religion about it.
>>>>>>> -Andy
>>>>>>>> Thanks
>>>>>>>> Guojun
>>>>>>>> _______________________________________________
>>>>>>>> Libwebsockets mailing list
>>>>>>>> Libwebsockets at ml.libwebsockets.org
>> <mailto:Libwebsockets at ml.libwebsockets.org>
>>>>>>>> https://libwebsockets.org/mailman/listinfo/libwebsockets