[Libwebsockets] browser flow-control

Andy Green andy at warmcat.com
Sun Mar 12 21:18:54 CET 2017



On March 13, 2017 3:58:46 AM GMT+08:00, Per Bothner <per at bothner.com> wrote:
>On 03/12/2017 11:02 AM, Andy Green wrote:
>> There's something not quite right about "bandwidth does not slow
>> things down" and at the same time "the browser can't keep up".
>> Eventually, if it can't keep up, ACKs come too slowly to make space
>> in the TCP window and POLLOUT isn't signalled until some ACKs come.
>>
>> It sounds like there's a buffer in the browser that is large relative
>> to the messages, and that keeps the ACKs coming for a good while even
>> while you can't keep up.
>
>The problem isn't that messages get dropped, or that we're not handling
>them correctly.  The problem is that the browser (at least Chrome and
>Firefox) never seems to get around to handling keyboard and other user
>interface events.  For example, if I run the 'yes' command I can't
>ctrl-c to interrupt it.
>
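For illustration, this is roughly how that backpressure looks to a
sender at the WebSocket API level -- an untested TypeScript sketch, not
what the lws server actually does, and HIGH_WATER is an arbitrary value:

// Untested sketch: the sender only writes while the amount of data
// queued by send() but not yet handed to the network stays under a
// threshold, so a slow reader eventually stalls it -- the same idea as
// POLLOUT not being signalled until ACKs arrive.
const HIGH_WATER = 256 * 1024;   // bytes still queued on the socket

function sendThrottled(ws: WebSocket, chunks: string[]): void {
  const pump = (): void => {
    while (chunks.length > 0 && ws.bufferedAmount < HIGH_WATER) {
      ws.send(chunks.shift()!);
    }
    if (chunks.length > 0) {
      // Buffer is full because the peer isn't draining it; retry later.
      setTimeout(pump, 50);
    }
  };
  pump();
}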
>> Apologies if the following doesn't apply... but... I guess if
>> onMessage() did no work, you wouldn't have any problem.  And getting
>> data quickly is not necessarily a bad thing, within reason.
>>
>> If so, the problem is that your onMessage() handling is expensive and
>> the remote peer can spam it, triggering expensive work in the browser.
>>
>> How about decoupling accepting data at onMessage() from the expensive
>> work?  So, eg, if re-rendering or updating UI elements is the
>> expensive bit, decouple that from the logical handling of the incoming
>> data: eat / process the data cheaply in onMessage(), queue a
>> rate-limited "expensive bit" on a timer, and return from onMessage().
>
>I really do want to slow down the sender if the browser can't keep up.
>Yes, I can have onmessage just save the data in a queue and quickly
>return, but if the onmessage calls come too quickly, deferring the
>expensive display update just makes things worse, as far as I can see.
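For what it's worth, here is an untested TypeScript sketch of that
decoupling, extended with an application-level pause/resume message so
the sender can actually be slowed down when the queue grows.  The
"\x01pause"/"\x01resume" control messages, the thresholds and
applyToDisplay() are all made up for illustration; DomTerm's real
protocol and rendering path will differ:

// Untested sketch: cheap onmessage + rate-limited rendering + an
// application-level pause/resume so the sender can be slowed.
declare function applyToDisplay(data: string): void;  // the expensive UI update

const queue: string[] = [];
const HIGH_WATER = 500;   // queued messages before asking the sender to stop
const LOW_WATER = 50;     // queue length at which to resume the sender
let paused = false;
let renderScheduled = false;

function onMessage(ws: WebSocket, ev: MessageEvent): void {
  queue.push(ev.data as string);             // cheap: just enqueue
  if (!paused && queue.length > HIGH_WATER) {
    ws.send("\x01pause");                    // illustrative control message
    paused = true;
  }
  if (!renderScheduled) {
    renderScheduled = true;
    requestAnimationFrame(() => render(ws)); // defer the expensive bit
  }
}

function render(ws: WebSocket): void {
  renderScheduled = false;
  const deadline = performance.now() + 10;   // ~10ms of work per frame
  while (queue.length > 0 && performance.now() < deadline) {
    applyToDisplay(queue.shift()!);
  }
  if (paused && queue.length < LOW_WATER) {
    ws.send("\x01resume");                   // queue drained; sender may continue
    paused = false;
  }
  if (queue.length > 0) {                    // more left: come back next frame
    renderScheduled = true;
    requestAnimationFrame(() => render(ws));
  }
}

// Wiring it up:  ws.onmessage = ev => onMessage(ws, ev);

The point is that the main thread gets released between frames, so input
events have a chance to run, and the sender never gets more than
HIGH_WATER messages ahead of the display.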

Hum...

>>> I've implemented a builtin 'less'-like pager for ldomterm, and the
>>> plan is to integrate this into the flow-control: in auto-pager mode,
>>> automatic scroll suspends display update and enters paging mode,
>>> until the user scrolls forward to enable more output.
>>
>> Sounds like they're two separate problems... if no ^S is coming, the
>> browser should hopefully logically keep up even under endless
>> spamming, even if it does not try to update the UI fast enough to
>> render every bit of the spam.  I've used VNC from a tablet to a beefy
>> machine for all my work for the last few years; when VNC can't keep up
>> with spew in a Gnome terminal window it behaves well, showing me a
>> consistent snapshot of the spam (ie, no tearing in it) and then a
>> clean view of how it ended.  This is very acceptable for the user.
>
>This happens with DomTerm too, for example if I cat a very large file.
>And that is acceptable.
>
>But what if the file is infinite?  I.e. it's printing in an endless
>loop?  I could tolerate this too, as long as the user can type ctrl-C,
>and that is handled and gets sent to the looping process, and kills
>that process.  But the ctrl-C isn't getting through - I'm not sure why,
>but I suspect it's just a scheduler in the browser that isn't designed
>to handle these situations.
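If the receive-side work is chunked and yields back to the event loop
(as in the earlier sketch), keydown events should get dispatched between
frames even while output floods in, and the interrupt can bypass the
output queue entirely.  A minimal, untested sketch -- the "\x03" payload
and the key handling are illustrative, not how DomTerm actually encodes
keys:

// Untested sketch: forward ctrl-C out-of-band as soon as the keydown
// fires, ahead of any queued rendering work.  "\x03" (ETX) is just the
// raw byte a pty typically maps to SIGINT.
function installInterruptKey(ws: WebSocket): void {
  window.addEventListener("keydown", (ev: KeyboardEvent) => {
    if (ev.ctrlKey && ev.key === "c") {
      ws.send("\x03");       // send the interrupt immediately
      ev.preventDefault();   // don't let it fall through as "copy"
    }
  });
}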

This is true in both, eg, Firefox and Chrome?

-Andy


