[Libwebsockets] using unix or tcp sockets?
andy at warmcat.com
Wed Aug 30 01:50:45 CEST 2017
On 08/30/2017 02:22 AM, Per Bothner wrote:
> Here is a design that uses plain http:/ws: for local terminals,
> and ssh tunnels for remote terminals. Does this make sense?
Well, this is kind of unrelated to lws... here are some comments you can
do what you like with.
> ** LOCAL CONNECTIONS
> When server starts, it generates a random KEY, starts an http server on
> some available PORT, and writes the following to /tmp/domterm-$UID.html
> (only readable by user $UID):
> var domterm_port = PORT;
> var domterm_key = KEY;
> var domterm_pid = PID; /* of server */
> location =
I dunno if it's just my prejudice but executing data files doesn't feel
like a good way.
Anyway I understand there's a session key used in the URLs to stop them
being guessable... it's good if it's big.
It's probably a bad idea to have the whole location defined there. If
there is some way for an attacker to create these files in /tmp he could
direct your browser to "evilbox.kp".
> When 'domterm' (client) wants to create a new terminal,
> it checks if /tmp/domterm-$UID.html exists (and if
> the contained PID is alive) - if not it starts the server.
I guess the PIDs are children of this server or another instance of it.
PIDs are a small (by default 16-bit) global resource for the OS, so
it's possible for a nonprivileged user to synthesize a specific PID,
if the real one has terminated, just by starting processes that
immediately exit unless they were assigned the target PID.
So at least you'd need to check that the uid / group the process is
running under is what you expected, and if it's supposed to have a
parent / child relationship to something else, confirm that and its
uid / group too.
If you consider sshd, its method is to spawn an explicit parent for the
shell (or whatever was spawned) that monitors the child and lives
standalone for as long as the child does; it can receive signals about
the child lifecycle very quickly. Those spawned sshd sessions do not
die when the listening parent dies or is restarted but have their own
lifetime. That would be a good model.
> The client then opens a browser window on file:/tmp/domterm-$UID.html
> (Note this avoids secret information on the command-line,
> where it could be inspected by 'ps'.)
The browser may still leak the path via ps, or remember the URL, if
that matters.
If the browser just opens the file in /tmp, it has no way to confirm
first that the user:group on that file or the PID is good. It will
just open and run whatever it finds there, which could be an 0777 file
from another user containing whatever the attacker chose.
I think the user is going to have to go to http://localhost:7654 or
whatever, just like cups users go to http://localhost:631, and go from
there. The listen port can be specifically bound to the lo network
interface.
The user should authenticate, even if just with basic auth there. And
then he should be able to select sessions that belong to his auth.
That way multiple users could have sessions on one box without
restricting the sessions to one UID.
The session key is still useful but it would be private to the server
and some form of it used in the links the server generated.
The same interface can then get TLS wrapping and listen externally if
the user wants it.
And if it listens externally, optional TLS client certs add another
layer of security... an attacker can't even connect to the listening
socket without a valid client cert + passphrase, and there is auth on
top. TLS client certs operate to provide very similar assurance to
having your ssh public key registered at the server.
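For the client-cert layer, a configuration sketch using the lws
context-creation API; the file paths are placeholders, and
LWS_SERVER_OPTION_REQUIRE_VALID_OPENSSL_CLIENT_CERT is what makes the
vhost reject any connection lacking a cert validated against the given
CA:

```c
#include <string.h>

#include <libwebsockets.h>

int main(void)
{
	struct lws_context_creation_info info;
	struct lws_context *ctx;

	memset(&info, 0, sizeof(info));
	info.port = 7654;		/* example port */
	info.ssl_cert_filepath        = "/etc/domterm/server.crt";
	info.ssl_private_key_filepath = "/etc/domterm/server.key";
	/* CA used to validate client certs */
	info.ssl_ca_filepath          = "/etc/domterm/client-ca.crt";
	info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT |
		       LWS_SERVER_OPTION_REQUIRE_VALID_OPENSSL_CLIENT_CERT;

	ctx = lws_create_context(&info);
	if (!ctx)
		return 1;
	lws_context_destroy(ctx);

	return 0;
}
```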
> To handle other commands, such as connecting to an existing session,
> we need a slightly more complex protocol: The client can encode
> request information in the hash part #OPTIONS - for example:
> The redirection in /tmp/domterm-$UID.html needs to move the hash #OPTIONS
> to some other part of the URL it sends to the server, perhaps a
> query string ?OPTIONS.
You can hide however you do this in server-generated stuff if the
starting point is a webpage from the server on a well-known port.
> ** REMOTE CONNECTIONS
> $ domterm remote HOST COMMAND ....
> Create local server, if need be.
> Use ssh to connect to remote HOST and if necessary start server
> (check for HOST:/tmp/domterm-$UID.html). Get (remote) PORT and KEY.
> Create domain socket /tmp/domterm-HOST-UID.socket
> Set up ssh forwarding from /tmp/domterm-HOST-UID.socket to HOST:PORT
> Server creates local session as a proxy to /tmp/domterm-HOST-UID.socket.
> Client opens browser on /tmp/domterm-$UID.html#remote=HOST
> or something like that.
Ssh forwarding is a bit all-or-nothing by default and some people
disable it globally. Ie you think you are handing out ssh credentials
to access some restricted shell, but it also gets the user the ability
to forward access to any of your internal sensitive ports externally.
Anyone who has disabled it globally will likely reject enabling it just
for this app.
How about instead ssh just works like http does: you ssh in a terminal
to your server and get a menu of active sessions plus the ability to
start a new one. After selecting, your ssh client joins the chosen
session.
Both ssh and https have "convenience" features people use like ssh-agent
and "login keyrings" that eliminate the passphrase for the whole client
session. It probably makes sense to continue to have the server's own
auth credentials in addition on both.
Ie ssh is just another way to do what the http[s] server does and both
can be used locally or remote.