I am reading the documentation of the Alexa Voice Service capabilities and came across the part on managing the HTTP/2 connection. I don't really understand how this down channel works behind the scenes. Is it using server push? Could server push be used to keep a long-lived connection? Or is it just using some trick to keep the connection alive for a very long time?
As stated in the documentation, the client needs to establish a down channel stream with the server.
Based on what I read in https://www.rfc-editor.org/rfc/rfc7540 and this state diagram:
once the client sends a HEADERS frame with the END_STREAM flag set, the stream becomes half-closed (local) from the client's point of view. So that is how the half-closed state for the device comes about, as stated in the image above. Correct me if I am wrong.
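If it helps to see the half-closed (local) state concretely, here is a minimal sketch using Node's http2 client; the host is a placeholder and the path is only illustrative of the down channel request described in the AVS docs:

```ts
// Minimal sketch of opening a down channel stream. The host and path below
// are placeholders, not necessarily the real AVS values.
import * as http2 from "node:http2";

const session = http2.connect("https://avs.example.com");

// Sending HEADERS with the END_STREAM flag set (endStream: true) tells the
// server "I have nothing more to send on this stream". From the client's
// point of view the stream is now half-closed (local): it can still RECEIVE
// frames, it just cannot send any more.
const downchannel = session.request(
  { ":method": "GET", ":path": "/v20160207/directives" },
  { endStream: true }
);

downchannel.on("response", (headers) => {
  console.log("down channel established, status:", headers[":status"]);
});

// The server can keep pushing directives on this half-closed stream for as
// long as it likes, because it has not sent END_STREAM from its side.
downchannel.on("data", (chunk) => {
  console.log("directive data:", chunk.toString());
});

downchannel.on("end", () => {
  console.log("server finally ended the stream");
});
```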
For managing the HTTP connection, this is what the documentation says.
Based on my understanding: the client sets a timeout of 60 minutes for the GET request. After the request is sent, the server will not send any response, and the connection will remain open for 60 minutes. But once a response is sent from the server, the connection should be closed. Isn't that supposed to happen? Or is it because, when the server sends a response through the down channel stream, it does not send an END_STREAM flag, so the stream is not closed?
But once a response is sent from the server, the connection should be closed.
HTTP/1.1 and HTTP/2 use persistent connections, which means that a single connection can be used not just for one request/response, but for several request/response cycles.
Only HTTP/1.0 closed the connection after the response; for HTTP/2 this is not the case, and the connection will remain open until either peer decides to explicitly close it.
The recommendations about idle timeouts are there exactly to prevent the client from explicitly closing the connection too early when it sees no network traffic, independently of requests or responses.
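To make that concrete, here is a rough sketch (Node's http2 client again, with a placeholder host and illustrative paths): the long-lived down channel stream and an ordinary request/response stream share one connection, and the client simply configures a generous idle timeout so it never gives up on a quiet connection.

```ts
// Rough sketch only: one HTTP/2 connection, several streams, generous idle timeout.
import * as http2 from "node:http2";

const session = http2.connect("https://avs.example.com"); // placeholder host

// The "60 minute timeout" recommendation expressed as code: do not treat a
// quiet connection as dead before roughly an hour of complete silence.
session.setTimeout(60 * 60 * 1000, () => {
  console.log("no traffic for 60 minutes, re-establishing the connection");
  session.close();
});

// The long-lived down channel (half-closed from our side).
const downchannel = session.request(
  { ":method": "GET", ":path": "/v20160207/directives" },
  { endStream: true }
);
downchannel.on("data", (chunk) => console.log("directive:", chunk.toString()));

// An ordinary request/response on the SAME connection. Its response ends this
// stream only -- the connection and the down channel stay open.
const event = session.request({ ":method": "POST", ":path": "/v20160207/events" });
event.end(JSON.stringify({ event: "example" })); // hypothetical payload
event.on("data", (chunk) => console.log("event response:", chunk.toString()));
event.on("end", () => console.log("event stream closed; connection still open"));
```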
Related
I have a WebSocket server. It accepts thousands of connections from clients, and it reads data from and writes data to those clients. It works normally for weeks, but something goes wrong occasionally, maybe once every two weeks. Within a very short time, new clients establish connections to the server and immediately send a protocol message. The server-side websocket.onOpen() is invoked, but the server fails to read the protocol data from the client. Later the client may close the connection, but on the server side the connections stay in the CLOSE_WAIT state and are never successfully closed. Via netstat I can see that the CLOSE_WAIT connections' receive buffer is not empty and stays at that value (it is never read). So I guess that the server's failure to read the data, combined with the client's FIN packet, leaves the connections stuck in CLOSE_WAIT.
So I want to know under what circumstances the WebSocket may fail to read data from the receive buffer.
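For what it's worth, the TCP state you are describing can be reproduced with a few lines. This is only an illustration of the mechanism, not your actual server code, and the port and options are made up:

```ts
// If the peer sends its data followed by a FIN and the application never reads
// the data and never closes its side, the kernel keeps the socket in CLOSE_WAIT
// with the unread bytes still in the receive buffer -- the picture netstat shows.
import * as net from "node:net";

const server = net.createServer({ allowHalfOpen: true }, (socket) => {
  // Deliberately never read from the socket and never call socket.end() or
  // socket.destroy(). With allowHalfOpen: true the FIN is not answered
  // automatically either, so the connection sits in CLOSE_WAIT until this
  // process closes it (or exits).
});

server.listen(9000);
```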
According to the Wikipedia article http://en.wikipedia.org/wiki/WebSocket, the server sends back this response to the client during the handshake:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat
Does this close the connection (as HTTP responses usually do), or is the connection kept open throughout the entire handshake so that WebSocket frames can be sent straight away (assuming the handshake succeeds)?
An HTTP socket going through the handshake process to be upgraded to the WebSocket protocol is not closed during that process. The same open socket goes through the whole process and then becomes the socket used for the WebSocket protocol. As soon as the upgrade is complete, that very socket is ready for messages to be sent per the WebSocket protocol.
It is this use of the exact same socket that enables a WebSocket connection to run on the same port as an HTTP request (no extra port is needed), because it literally starts out as an HTTP request (with some extra headers attached), and then, when those headers are recognized and both sides agree, the socket from that original HTTP request on the original web port (often port 80) is switched to use the WebSocket protocol. No additional connection on some new port is needed.
I actually find it a relatively elegant design because it makes for easy coexistence with a web server, which was an important design goal. The slight extra bit of connection overhead (the protocol upgrade negotiation) is generally not an issue, because WebSocket connections are by their very nature designed to be long-running sockets that you open once and use over an extended period of time, so a little extra overhead to open them doesn't generally matter.
If, for any reason, the upgrade is not completed (both sides don't agree on the upgrade to WebSocket), then the socket remains an HTTP socket and behaves as HTTP sockets normally do (likely getting closed right away, but subject to normal HTTP interactions).
You can see this answer for more details on the back and forth during an upgrade to WebSocket: SocketIO tries to connect using same port as the browser used to get web page
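Roughly, the mechanics look like this. The sketch below uses Node's built-in http server and is not how any particular WebSocket library implements it, but it shows that the 101 response is written on the very same socket that carried the HTTP request, and that the same socket is then kept for WebSocket framing:

```ts
import { createServer } from "node:http";
import { createHash } from "node:crypto";

// RFC 6455 magic GUID used to compute Sec-WebSocket-Accept.
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

const server = createServer((req, res) => {
  res.end("plain HTTP still works on this port");
});

server.on("upgrade", (req, socket) => {
  const key = req.headers["sec-websocket-key"];
  if (typeof key !== "string") {
    socket.destroy(); // not a valid upgrade request
    return;
  }
  const accept = createHash("sha1").update(key + WS_GUID).digest("base64");

  // Write the 101 response on the *same* TCP socket that carried the HTTP request...
  socket.write(
    "HTTP/1.1 101 Switching Protocols\r\n" +
      "Upgrade: websocket\r\n" +
      "Connection: Upgrade\r\n" +
      `Sec-WebSocket-Accept: ${accept}\r\n\r\n`
  );

  // ...and keep using that socket from here on. A real server would now speak
  // the WebSocket framing protocol on `socket`; nothing is closed or reopened.
});

server.listen(8080);
```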
I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client likewise detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the WebSocket spec, and so from the client side I have no way of knowing when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send pings/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer as pings/pongs are.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is reset every time that message is received and restarted every time the message is echoed; if the timer fires, the connection can be considered dropped.
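A minimal client-side sketch of that idea (slightly simplified: here the client sends the token on a fixed 5-second interval rather than echoing with a delay; the URL and message shape are made up):

```ts
const HEARTBEAT_INTERVAL_MS = 5_000;  // send the token every 5 seconds
const HEARTBEAT_TIMEOUT_MS = 10_000;  // declare the link dead after 10 silent seconds

const ws = new WebSocket("wss://example.com/socket"); // placeholder endpoint
let deadTimer: number | undefined;

function armDeadTimer(): void {
  if (deadTimer !== undefined) clearTimeout(deadTimer);
  deadTimer = window.setTimeout(() => {
    // No echo came back in time: treat the connection as dropped.
    console.warn("connection considered dropped (no heartbeat echo)");
    ws.close();
  }, HEARTBEAT_TIMEOUT_MS);
}

ws.addEventListener("open", () => {
  armDeadTimer();
  setInterval(() => ws.send(JSON.stringify({ token: "whatever" })), HEARTBEAT_INTERVAL_MS);
});

ws.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data);
  if (msg.token === "whatever") {
    // Heartbeat echo received: push the deadline out again.
    armDeadTimer();
  }
});
```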
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand we should employ an extra mechanism such as application-level pings.
I've not been able to find a clear answer as to whether or not CometD's long polling mechanism uses a persistent connection, or disconnects and then reconnects after a message is pushed to it.
The reason this is important to me is that I am currently using a long polling push client which disconnects and reconnects after every message (or batch of messages) is sent from the server, and the reconnect time introduces random latency which I am looking to get rid of. I am assuming it does this for compatibility's sake, as it makes every "push" just look like a really long request/response, which should work on any and every browser.
So, does CometD's long polling use a persistent, long-lived http connection? If the answer is yes, is it conditional? That is, are there cases/browsers where it falls back to a "request/response/reconnect" per message sent?
CometD long polling uses HTTP/1.1, and therefore persistent connections.
When CometD is used from a browser, the browser manages the connection pool and the HTTP protocol version, and CometD does not add any Connection header to close the connection after every message: it is all left to the browser, and my experience is that the long poll always stays on the same connection.
When the CometD Java client library is used, the same applies: Jetty's HTTP client manages the connection pool, defaults to HTTP/1.1 and keeps the connections open.
The main difference from browsers is that the Jetty HTTP client allows more than the few (usually 6) connections per domain that browsers do, so it is appropriate for load-testing simulations.
Check out the CometD performance report.
The updated CometD documentation can be found at http://docs.cometd.org.
It is wrong to say that "long polling by definition does not use a persistent connection but reconnects". HTTP/1.1 is perfectly capable of sending multiple long polls over the same connection, and CometD does exactly that.
I am not aware of cases where clients such as browsers fall back to open/request/response/close behaviour when using HTTP/1.1, unless this is explicitly requested by the application adding a Connection: close header to HTTP requests or responses (CometD does not do this).
With WebSocket, CometD opens only one connection, which is persistent, and all the messages are exchanged over that connection until the application decides to close it by disconnecting the CometD client.
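To illustrate the "multiple long polls over one persistent connection" point, here is a bare-bones browser-side sketch. This is not CometD's actual Bayeux protocol or API, just the shape of the loop, with a made-up /poll endpoint and message format:

```ts
// Each iteration issues a new long poll. The browser keeps the underlying TCP
// connection alive between requests, so nothing forces a disconnect/reconnect
// per message as long as neither side sends "Connection: close".
async function longPollLoop(): Promise<void> {
  while (true) {
    const response = await fetch("/poll", { method: "POST", body: "[]" });
    const messages = await response.json();
    for (const message of messages) {
      console.log("pushed from server:", message);
    }
    // Immediately issue the next long poll, reusing the same connection.
  }
}

longPollLoop().catch(console.error);
```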
I'm using the HTTPClient gem (http://github.com/nahi/httpclient) for Ruby to post data to IIS 6.1. Even though both support HTTP/1.1, it seems to be closing the socket after each request, rather than using persistent connections. I haven't added any flags to enable persistent connections (mainly because, having poked about in the source code, it appears that they should be enabled by default).
The reason I think the socket is being closed is that if I watch the requests in Wireshark, once each request is made I see FIN/ACK TCP packets sent from the client to the server, then the same sent back the other way.
Am I misreading that or does that mean the socket is being closed?
Wikipedia's article on TCP suggests that the FIN/ACK packets are a signal to terminate the connection. Check which of the client or server initiated the sending of the FIN packet - that's the party requesting that the connection be closed.
As you saw in the source, an HTTP 1.1 implementation should assume that connections are persistent by default.
Is the client specifying HTTP 1.1 in its request and is the server responding accordingly?
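One way to check this from the client side in general (shown here as a Node.js sketch rather than Ruby, with a placeholder host): make two requests over a keep-alive agent and look at whether the second request reuses the socket and which HTTP version the server answers with. If the server (or the client library) closes the socket after each response, the second request will report a fresh socket.

```ts
import * as http from "node:http";

// A keep-alive agent pools sockets for reuse across requests.
const agent = new http.Agent({ keepAlive: true });

function get(path: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = http.get({ host: "example.com", path, agent }, (res) => {
      console.log(path,
        "HTTP version:", res.httpVersion,
        "reused socket:", req.reusedSocket);
      res.resume();            // drain the body so the socket can return to the pool
      res.on("end", resolve);
    });
    req.on("error", reject);
  });
}

async function main(): Promise<void> {
  await get("/first");
  await get("/second");        // reusedSocket should be true if keep-alive is honored
}

main().catch(console.error);
```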