I'm using the HTTPClient gem (http://github.com/nahi/httpclient) for Ruby to post data to IIS 6.1. Even though both support HTTP 1.1, it seems to be closing the socket after each request made, rather than using persistent connections. I haven't added any flags to enable persistent connections (mainly because, having poked around the source code, it appears they should be enabled by default).
The reason I think the socket is being closed is that if I watch the requests in Wireshark, once each request is made I see FIN/ACK TCP packets sent from the client to the server, then the same sent back the other way.
Am I misreading that or does that mean the socket is being closed?
Wikipedia's article on TCP suggests that the FIN/ACK packets are a signal to terminate the connection. Check which of the client or server initiated the sending of the FIN packet - that's the party requesting that the connection be closed.
As you saw in the source, an HTTP 1.1 implementation should assume that connections are persistent by default.
Is the client specifying HTTP 1.1 in its request and is the server responding accordingly?
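For what it's worth, one quick sanity check is to reuse a single HTTPClient instance for all requests and inspect the traffic it generates; this is only a sketch, and the URL and payload are placeholders:

require 'httpclient'

# Reuse one client instance; the httpclient gem keeps idle sockets in a
# session pool and should pick them up again for subsequent requests.
client = HTTPClient.new

3.times do |i|
  # Placeholder URL and body: watch this exchange in Wireshark to see
  # whether the request line says HTTP/1.1 and whether a FIN follows it.
  res = client.post('http://iis-server.example.com/endpoint', "payload #{i}")
  puts "#{res.status} #{res.header['Connection'].inspect}"
end

If the server replies with a Connection: close header (or downgrades to HTTP/1.0), the close is the server's choice rather than the gem's.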
I am reading the documentation of Alexa Voice Service capabilities and came across the part on managing the HTTP/2 connection. I don't really understand how this down channel works behind the scenes. Is it using server push? Could server push be used to keep a long connection? Or is it just using some tricks to keep the connection alive for a very long time?
As stated in the documentation, the client needs to establish a down channel stream with the server.
Based on what I read in RFC 7540 (https://www.rfc-editor.org/rfc/rfc7540) and its stream state diagram: once the client sends a HEADERS frame with the END_STREAM flag set, the stream becomes half-closed (local) from the client's point of view. So this is how the half-closed state for the device comes about, as described in the documentation. Correct me if I am wrong.
For managing the HTTP connection, the documentation gives recommendations about client-side timeouts.
Based on my understanding: the client sets a timeout of 60 minutes for the GET request. After the request is sent, the server will not send any response, and the connection will remain open for 60 minutes. But once a response is sent from the server, the connection should be closed. Isn't that what is supposed to happen? Or is it that, because the server sends its response through the down channel stream without an END_STREAM flag, the stream is not closed?
But once a response is sent from the server, the connection should be closed.
HTTP/1.1 and HTTP/2 use persistent connections, which means that a single connection can be used not just for one request/response, but for several request/response cycles.
Only HTTP/1.0 closed the connection after each response by default, so for HTTP/2 this is not the case: the connection will remain open until either peer decides to explicitly close it.
The recommendations about idle timeouts exist exactly to prevent the client from explicitly closing the connection too early when it sees no network traffic, independently of requests or responses.
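In frame terms, the down channel exchange on the client's existing connection looks roughly like this (the exact paths and payloads are defined by the AVS documentation and are omitted here):

client -> HEADERS  (down channel stream, :method = GET, END_STREAM + END_HEADERS)
          ... the stream is now half-closed (local) from the client's point of view ...
server -> HEADERS  (same stream, :status = 200, END_HEADERS only; no END_STREAM)
server -> DATA     (same stream, a directive; sent whenever an event occurs)
server -> DATA     (same stream, another directive; stream and connection stay open)

Because the server never sets END_STREAM on the down channel stream, neither the stream nor the connection is closed by delivering a response.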
There's a fantastic answer which goes into detail as to how REST APIs work.
How do WebSockets work, in similar detail?
WebSockets are a standard for bi-directional communication between a client and a server. This communication channel uses a single TCP connection; it starts out as an ordinary HTTP request and, after the upgrade, carries WebSocket traffic rather than HTTP, with no separate server required.
To start this process, a handshake is performed between the client and server.
Here is the workflow:
1) The user makes an HTTP request to the server with an Upgrade header, indicating that the client wishes to establish a WebSocket connection (an example request is shown below).
2) If the server supports the WebSocket protocol, it will accept the upgrade and send a response back.
3) With the handshake finished, the WebSocket protocol is used from then on. All communication uses the same underlying TCP connection and port. The status code returned for the upgrade, 101, signifies Switching Protocols.
As part of HTML5 it should work with most modern browsers.
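For concreteness, the upgrade request from step 1 looks roughly like this (the Sec-WebSocket-Key is an example nonce chosen by the client):

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Version: 13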
There are two techniques for implementing Comet. One is HTTP streaming, which uses a single persistent TCP connection to send and receive multiple HTTP requests/responses between client and server. The second is HTTP long polling, in which a connection is kept open by the server and, as soon as an event occurs, the response is committed and the connection is closed; then a new long-polling connection is reopened immediately by the client to wait for new events.
I am using the Faye ruby gem and I noticed it uses Comet/Bayeux out of the box, but I cannot find out which type of Comet technique it uses. I just gather that Bayeux is a publish-subscribe protocol. I'm curious to know whether it suffers the same shortcomings as HTTP streaming and long polling. Does it allow full-duplex communication (communication in both directions that, unlike half-duplex, can happen simultaneously)?
Your definitions of HTTP streaming and long-polling are not correct.
In HTTP streaming, the client sends a request to the server, and the server replies with an "infinite" response that contains small chunks of data (messages), typically using the chunked transfer encoding.
This mechanism has been standardized as EventSource (a.k.a. Server-Sent Events).
It is a server-to-client only push of events.
For the client to send another message to the server, it has to open a new connection.
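As a rough illustration of HTTP streaming, here is a minimal sketch using only Ruby's standard library; the port and messages are arbitrary, and a real application would use a proper server or an SSE library rather than hand-written chunked encoding:

require 'socket'

server = TCPServer.new(8000)
loop do
  Thread.new(server.accept) do |conn|
    # Read and discard the request headers.
    while (line = conn.gets) && line != "\r\n"; end
    # Start an "infinite" chunked response; each event is one chunk.
    conn.write "HTTP/1.1 200 OK\r\n"
    conn.write "Content-Type: text/event-stream\r\n"
    conn.write "Transfer-Encoding: chunked\r\n\r\n"
    5.times do |i|
      event = "data: message #{i}\n\n"
      conn.write "#{event.bytesize.to_s(16)}\r\n#{event}\r\n"
      sleep 1
    end
    conn.write "0\r\n\r\n"   # final chunk ends the response
    conn.close
  end
end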
In HTTP long-polling, the client sends a request that is held by the server until an event (or a timeout) occurs, then the response is committed but the connection is not closed.
The connection is kept open and other requests may be sent on that connection, both normal or long-polling requests (one at a time, of course).
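By contrast, a long-polling client can be sketched like this; the /events URL and handle_event are hypothetical, and the httpclient gem from the first question is assumed:

require 'httpclient'

client = HTTPClient.new
client.receive_timeout = 70   # a bit longer than the server's hold time

loop do
  # The server holds this request until an event (or a timeout) occurs.
  res = client.get('http://example.com/events')
  handle_event(res.body) if res.status == 200   # handle_event is hypothetical
  # No reconnect is needed: the next iteration reuses the same
  # persistent HTTP/1.1 connection.
end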
The Bayeux protocol is an application protocol on top of a transport protocol such as HTTP or WebSocket.
HTTP is a full duplex protocol in the context of a single request/response exchange. Multiple HTTP exchanges are serialized (that is, executed one after the other). The HTTP request/response exchange is the unit of serialization.
WebSocket is a full duplex protocol in the context of WebSocket messages. WebSocket messages may be sent and received simultaneously. The WebSocket message is the unit of serialization.
Bayeux inherits the characteristics of the transport protocol it is carried on. The Bayeux protocol itself does not have any "duplexness" characteristics; you can think of it just as a way to format messages in a particular textual form.
Both CometD and Faye use Bayeux over both WebSocket and HTTP long-polling.
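To make the "textual form" point concrete, Bayeux messages are just JSON arrays; a handshake and a publish look roughly like this (the values are illustrative):

[{"channel":"/meta/handshake","version":"1.0","supportedConnectionTypes":["long-polling","websocket"]}]

[{"channel":"/chat/demo","clientId":"Un1q31d3nt1f13r","data":{"text":"hello"}}]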
According to the Wikipedia article: http://en.wikipedia.org/wiki/WebSocket,
The server sends back this response to the client during the handshake:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat
Does this close the connection (as HTTP responses usually do), or is it kept open beyond the handshake so that the client can start sending WebSocket frames straight away (assuming that the handshake succeeds)?
An HTTP socket going through the handshake process to be upgraded to the WebSocket protocol is not closed during that process. The same open socket goes through the whole process and then becomes the socket used for the WebSocket protocol. As soon as the upgrade is complete, that very socket is ready for messages to be sent per the WebSocket protocol.
It is this use of the exact same socket that enables a WebSocket connection to run on the same port as an HTTP request (no extra port is needed), because it literally starts out as an HTTP request (with some extra headers attached) and then, when those headers are recognized and both sides agree, the socket from that original HTTP request on the original web port (often port 80) is switched to use the WebSocket protocol. No additional connection on some new port is needed.
I actually find it a relatively elegant design because it makes for easy coexistence with a web server, which was an important design parameter. The slight extra connection overhead (protocol upgrade negotiation) is generally not an issue, because WebSocket connections are by nature long-running sockets that you open once and use over an extended period of time, so a little extra overhead to open them doesn't generally matter.
If, for any reason, the upgrade is not completed (both sides don't agree on the upgrade to WebSocket), then the socket remains an HTTP socket and behaves as HTTP sockets normally do (likely getting closed right away, but subject to normal HTTP interactions).
You can see this answer for more details on the back and forth during an upgrade to WebSocket: SocketIO tries to connect using same port as the browser used to get web page
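As a side note, the Sec-WebSocket-Accept value in the response quoted above is derived from the client's Sec-WebSocket-Key; here is a small Ruby sketch of the calculation defined in RFC 6455 (the key used is the one from the Wikipedia example that produces that accept value):

require 'digest/sha1'
require 'base64'

GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'   # fixed GUID from RFC 6455
key  = 'x3JJHMbDL1EzLkh9GBhXDw=='                # client's Sec-WebSocket-Key

# SHA-1 of key + GUID, then Base64; the client checks this value before
# treating the connection as a WebSocket.
accept = Base64.strict_encode64(Digest::SHA1.digest(key + GUID))
puts accept   # => HSmrc0sMlYUkAGmm5OPpG2HaGWk=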
I've not been able to find a clear answer as to whether or not CometD's long polling mechanism uses a persistent connection, or disconnects and then reconnects after a message is pushed to it.
The reason this is important to me is that I am currently using a long polling push client which disconnects and reconnects after every message (or batch of messages) is sent from the server, and the reconnect time introduces random latency which I am looking to get rid of. I am assuming it does this for compatibility's sake, as it makes every "push" just look like a really long request/response, which should work on any and every browser.
So, does CometD's long polling use a persistent, long-lived http connection? If the answer is yes, is it conditional? That is, are there cases/browsers where it falls back to a "request/response/reconnect" per message sent?
CometD long polling uses HTTP 1.1, and therefore persistent connections.
When CometD is used from a browser, the browser manages the connection pool and the HTTP protocol version, and CometD does not add any Connection header to close the connection after every message: it is all left to the browser, and my experience is that the long poll always stays on the same connection.
When the CometD Java client library is used, the same applies: Jetty's HTTP client manages the connection pool, defaults to HTTP 1.1 and keeps the connections open.
The main difference with browsers is that Jetty's HTTP client allows more than the few (usually 6) connections per domain that browsers do, so it is appropriate for load-testing simulations.
Check out the CometD performance report.
The updated CometD documentation can be found at http://docs.cometd.org.
It is wrong to say that "long polling by definition does not use a persistent connection but reconnects". HTTP 1.1 is perfectly capable of sending multiple long polls over the same connection, and CometD does exactly that.
I am not aware of cases where clients such as browsers fall back to open/request/response/close behaviour when using HTTP 1.1, unless this is explicitly requested by the application adding a Connection: close header to HTTP requests or responses (CometD does not do this).
With WebSocket, CometD opens only one persistent connection, and all messages are exchanged over that connection until the application decides to close it by disconnecting the CometD client.
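To tie this together, the long-polling cycle described above can be sketched with the httpclient gem; the /cometd URL is a placeholder, error and advice handling are omitted, and a real application should use a proper Bayeux client library:

require 'httpclient'
require 'json'

client  = HTTPClient.new
url     = 'http://example.com/cometd'
headers = { 'Content-Type' => 'application/json' }

# 1. Handshake: obtain a clientId.
handshake = [{ channel: '/meta/handshake', version: '1.0',
               supportedConnectionTypes: ['long-polling'] }]
reply     = JSON.parse(client.post(url, handshake.to_json, headers).body).first
client_id = reply['clientId']

# 2. Repeated /meta/connect long polls: each one is held by the server
#    until messages are available, and each reuses the same connection.
loop do
  connect  = [{ channel: '/meta/connect', clientId: client_id,
                connectionType: 'long-polling' }]
  messages = JSON.parse(client.post(url, connect.to_json, headers).body)
  messages.each { |m| puts m.inspect }   # deliver any queued events
end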