Kill or interrupt threads when client closes connection - jetty-9

I'm using embedded Jetty 9 for a proxy application (hosting Netflix Zuul). Unfortunately, some of the services behind the proxy have very long (as in one-hour) timeouts. Most clients don't wait around for an hour. However, since the Jetty server thread is blocked waiting for input from the remote server (the server that we are proxying), the thread has no way to act on this information.
Is there a way to interrupt or kill the Jetty server thread when the client that called it closes its connection? Basically, if the socket from the client is closed, I want to interrupt that thread. Alternatively, if I knew the thread, I could map it to the outbound socket and close that instead, thereby waking up the thread.
I thought that if I could get a list of sockets, I could query them to see whether they are still alive. But how do I drill down into the guts of the Jetty engine to get a list of sockets, let alone a mapping of socket to Jetty thread?
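One way to act on the second idea (mapping each request to its outbound socket so that socket can be closed when the client goes away) would be a small registry plus a Servlet 3.0 AsyncListener, which Jetty calls back on errors and timeouts. This is only a hedged sketch: it assumes the request has been switched into async mode and that the proxy code registers the upstream socket it opens; the class and method names below are illustrative, not part of Jetty or Zuul.

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.servlet.AsyncEvent;
    import javax.servlet.AsyncListener;
    import javax.servlet.ServletRequest;

    // Hypothetical sketch: map each servlet request to its outbound (upstream)
    // socket and close that socket when Jetty reports the client side has gone
    // away. Closing the upstream socket wakes up the proxy thread blocked in a read.
    public class UpstreamSocketRegistry {

        private final ConcurrentHashMap<ServletRequest, Socket> upstreamByRequest =
                new ConcurrentHashMap<>();

        // Called by the proxy code right after it opens the outbound connection.
        public void register(ServletRequest request, Socket upstream) {
            upstreamByRequest.put(request, upstream);
        }

        // AsyncListener to attach via request.startAsync().addListener(...).
        public AsyncListener listenerFor(ServletRequest request) {
            return new AsyncListener() {
                @Override public void onError(AsyncEvent event)   { closeUpstream(request); }
                @Override public void onTimeout(AsyncEvent event) { closeUpstream(request); }
                @Override public void onComplete(AsyncEvent event){ upstreamByRequest.remove(request); }
                @Override public void onStartAsync(AsyncEvent event) { }
            };
        }

        private void closeUpstream(ServletRequest request) {
            Socket upstream = upstreamByRequest.remove(request);
            if (upstream != null) {
                try {
                    upstream.close(); // unblocks the thread waiting on the remote server
                } catch (IOException ignored) {
                }
            }
        }
    }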

Related

What if the server didn't receive FD_CLOSE?

I have a high-performance client-server system programmed from scratch, and I am still improving it. The server uses overlapped I/O to handle connections, and it correctly handles disconnections and resource deallocation. On the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no more data to receive after its final send. This works well, and the server detects it as a graceful disconnection. Rarely, when the connection is very slow, I have observed that the server doesn't detect this. I suspect the shutdown partial closure doesn't reach the server. How can I handle this? It's important that the server not retain connections like this; if it does, the server cannot be stopped, and I don't want to force-close all such connections.
On the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no more data to receive after its final send.
It doesn't do that.
This works well.
It doesn't work at all. The shutdown command with SD_RECEIVE that you're using is completely pointless. A close, or a shutdown with SD_SEND or SD_BOTH, sends a FIN: shutdown with SD_RECEIVE does exactly nothing on the wire, and specifically it does not 'notify the server' of anything.
I feel that the shutdown partial closure doesn't reach the server.
It never reaches the server. Your code doesn't work the way you think it does. What reaches the server is the FIN, which in turn is the result of the close, not of the shutdown with SD_RECEIVE.
What you need here is a read timeout at the server end. Since you're using select() or whatever else is delivering the events to you, you will have to implement the timeout manually yourself.
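As a hedged illustration of that manual timeout (shown with Java NIO rather than Winsock overlapped I/O, purely to keep the sketch short), you can record the last time each connection delivered data and reap connections that have been silent for too long each time the select loop wakes up. The class name and the 60-second limit are arbitrary choices, not anything from the original system.

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a manual read timeout: track last activity per connection and
    // close anything that has been idle longer than the limit.
    public class IdleReaper {

        private static final long IDLE_LIMIT_MILLIS = 60_000;

        private final Map<SelectionKey, Long> lastActivity = new ConcurrentHashMap<>();

        // Call whenever a read event delivers data for a connection.
        public void touch(SelectionKey key) {
            lastActivity.put(key, System.currentTimeMillis());
        }

        // Call from the select loop, e.g. after selector.select(timeout) returns.
        public void reapIdle(Selector selector) {
            long now = System.currentTimeMillis();
            for (SelectionKey key : selector.keys()) {
                Long last = lastActivity.get(key);
                if (last != null && now - last > IDLE_LIMIT_MILLIS) {
                    try {
                        key.channel().close(); // force out the half-dead connection
                    } catch (IOException ignored) {
                    }
                    lastActivity.remove(key);
                }
            }
        }
    }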

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client likewise detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely sockjs, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the WebSocket spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send pings/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not acknowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs are.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is stopped every time that message is received and started every time the message is echoed; if the timer fires, the connection can be considered dropped.
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.
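A hedged sketch of the client-side half of that echo scheme (transport-agnostic Java; the send and drop callbacks stand in for whatever websocket client is actually in use): when the heartbeat message arrives, the watchdog is stopped and the echo is scheduled 5 seconds out; when the echo is sent, a 10-second watchdog is armed; if the watchdog fires, the connection is treated as dropped.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    // Client-side watchdog for the application-level heartbeat described above.
    public class HeartbeatWatchdog {

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final Runnable sendHeartbeat;        // e.g. send "{token:'whatever'}"
        private final Runnable onConnectionDropped;  // e.g. mark the connection as down
        private ScheduledFuture<?> watchdog;

        public HeartbeatWatchdog(Runnable sendHeartbeat, Runnable onConnectionDropped) {
            this.sendHeartbeat = sendHeartbeat;
            this.onConnectionDropped = onConnectionDropped;
        }

        // Call once after the websocket opens and again whenever the echo arrives.
        public synchronized void heartbeatReceived() {
            // Stop the watchdog: the other side answered in time.
            if (watchdog != null) {
                watchdog.cancel(false);
            }
            // Echo the token back after 5 seconds, then re-arm the watchdog.
            scheduler.schedule(this::echoAndArm, 5, TimeUnit.SECONDS);
        }

        private synchronized void echoAndArm() {
            sendHeartbeat.run();
            // If nothing comes back within 10 seconds, consider the connection dropped.
            watchdog = scheduler.schedule(onConnectionDropped, 10, TimeUnit.SECONDS);
        }
    }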

Is it possible to communicate with http requests between web and worker processes on Heroku?

I'm building an HTTP -> IRC proxy; it receives messages via an HTTP request and should then connect to an IRC server and post them to a channel (chat room).
This is all fairly straightforward; the one issue I have is that a connection to an IRC server is a persistent socket that should ideally be kept open for a reasonable period of time, unlike HTTP requests, where a socket is opened and closed for each request (not always true, I know). The implication of this is that a message bound for the same IRC server/room must always be sent via the same process (the one that holds a connection to the IRC server).
So I basically need to receive the HTTP request on my web processes, and then have them figure out which specific worker process has an open connection to the IRC server and route the message to that process.
I would prefer to avoid the complexity of a message queue within the IRC proxy app, as we already have one sitting in front of it that sends it the HTTP requests in the first place.
With that in mind, my ideal solution is to have a shared datastore between the web and worker processes, and to have the worker processes maintain a table of all the IRC servers they're connected to (a sketch of such a table follows after this question). When a web process receives an HTTP request, it could then look up the table to figure out whether there is already a worker with a connection to the required IRC server and forward the message to it, or, if there is no existing connection, it could effectively act as a load balancer and pick an appropriate worker to forward the message to so it can establish and hold a connection to the IRC server.
Now, to do this, my worker processes would need to be able to start an HTTP server and listen for requests from the web processes. On Heroku, I know only web processes are added to the public-facing "routing mesh", which is fine; what I would like to know is whether it is possible to send HTTP requests between a web and a worker process internally within Heroku's network (outside of the "routing mesh").
I will use a message queue if I must, but as I said, I'd like to avoid it.
Thanks!
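For what it's worth, the lookup table described in the question could look something like the following hedged sketch. It assumes Redis (via the Jedis client) as the shared datastore and stores an internal worker address per IRC server; the key names and addressing scheme are illustrative only, and it deliberately says nothing about how the web process would then reach the worker on Heroku.

    import redis.clients.jedis.Jedis;

    // Sketch of a shared routing table: workers record which IRC server they
    // hold a connection to, web processes look the entry up before forwarding.
    public class IrcRoutingTable {

        private static final String TABLE_KEY = "irc-connections";

        private final Jedis redis = new Jedis("localhost", 6379);

        // Worker side: record that this worker holds the connection to ircServer.
        public void register(String ircServer, String workerAddress) {
            redis.hset(TABLE_KEY, ircServer, workerAddress);
        }

        // Web side: find the worker already connected to ircServer, or null if none.
        public String lookup(String ircServer) {
            return redis.hget(TABLE_KEY, ircServer);
        }
    }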

boost pion comet-like httpserver

I'm trying to efficiently implement comet-like functionality using the HTTPServer class of boost::pion.
Basically, in my 'handleURI' function, I would like to postpone returning results to the client until the server is ready to respond (for instance, until another user has sent a message to the first user, to take a simple comet 'hello world' application as an example).
What should I do? Put the state on the stack and exit silently, without creating an HTTPResponseWriter?
Cheers!
Set up an ASIO timeout event for your connection so that you can reap the connection after 20 minutes or something reasonable like that. I don't know about Boost Pion, but in ASIO you'd want to register a read handler that catches when the connection closes, and a timeout handler to alert you when the connection has actually timed out. Enable TCP keep-alives on your socket to detect when the socket should be reaped in the event that it just vanishes (though TCP keep-alives aren't a guarantee, so don't rely exclusively on them; not all clients support them). As for the timer, check out the following timer example:
https://github.com/sean-/Boost.Examples/blob/master/asio/timer/timer.cc

Websocket onclose/onerror events do not fire if server crashes

I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc then the 'onclose' or 'onerror' events are not fired on the client-side. I would expect one of those events to be fired when the connection is broken for any reason.
If, however, the daemon is shut down cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shut down properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.
TCP is like that. The most recent WebSockets standard draft (v76) has a clean shutdown message mechanism. But without that (or if it doesn't have a chance to be sent), you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed or shut down, a clean shutdown message is sent to all connected clients.
You could also add a heartbeat mechanism (a la TCP keep-alive) to your application to detect when the other side goes away.
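A hedged sketch of that exit-trap idea, assuming a Java daemon using the javax.websocket API (the session registry is an assumption and would be wired into your endpoint's onOpen/onClose): a JVM shutdown hook walks the open sessions and sends a proper close frame, so browsers get their onclose event instead of waiting for TCP to notice the peer is gone. It handles a normal kill/shutdown, not a hard crash.

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    import javax.websocket.CloseReason;
    import javax.websocket.Session;

    // On shutdown, send a WebSocket close frame to every connected client.
    public class CleanShutdown {

        private static final Set<Session> openSessions = new CopyOnWriteArraySet<>();

        public static void register(Session session)   { openSessions.add(session); }
        public static void unregister(Session session) { openSessions.remove(session); }

        public static void installShutdownHook() {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                CloseReason reason = new CloseReason(
                        CloseReason.CloseCodes.GOING_AWAY, "server shutting down");
                for (Session session : openSessions) {
                    try {
                        session.close(reason); // triggers onclose on the browser side
                    } catch (Exception ignored) {
                    }
                }
            }));
        }
    }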
