http2: PUSH_PROMISE reserved stream id validation

The spec says:
The identifier of a newly established stream MUST be numerically
greater than all streams that the initiating endpoint has opened or
reserved. This governs streams that are opened using a HEADERS frame
and streams that are reserved using PUSH_PROMISE. An endpoint that
receives an unexpected stream identifier MUST respond with a
connection error (Section 5.4.1) of type PROTOCOL_ERROR.
For a server that sends PUSH_PROMISE, it makes sense to me that a conforming server must send strictly increasing stream ids. But I don't understand how the client is supposed to detect a violation.
For example, on one connection, if the server sends:
PUSH_PROMISE promised stream 2
PUSH_PROMISE promised stream 4
because of concurrency the client might receive
PUSH_PROMISE promised stream 4
PUSH_PROMISE promised stream 2
the spec would have me think that the client should error on this, but the server did nothing wrong.
What am I missing here?

If the server wrote PUSH_PROMISE[stream=2] and then PUSH_PROMISE[stream=4], then those frames will be delivered in the same order (this is guaranteed by TCP).
It is the task of the client to read from the socket in an ordered way.
For an HTTP/2 implementation the requirement is even stricter: not only must it read from the socket in order, it must also parse the frames in order.
This is required because the PUSH_PROMISE frame carries an HPACK block, and in order to keep the server and client HPACK contexts in sync, the frames (or at least the HPACK blocks of those frames) must be processed in order, so stream=2 before stream=4.
After that, the client is free to process the 2 frames concurrently.
For implementations, this is actually quite simple to achieve, since a thread allocated to perform I/O reads typically does:
loop
    read bytes from socket
    if no bytes or socket closed -> break loop
    parse read bytes (with HPACK decoding) -> produce frame objects
    pass frame objects to upper software layer
end loop
Since the read and parse are sequential and no other thread reads from the same socket, the ordering guarantee is met.
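For illustration, a minimal Python sketch of that loop; the frame decoder and upper-layer handler are hypothetical stand-ins, not any particular library:

    def read_loop(sock, decoder, on_frame):
        # decoder: hypothetical stateful frame parser that buffers partial
        # frames and performs HPACK decoding in arrival order.
        # on_frame: callback into the upper software layer.
        while True:
            data = sock.recv(4096)             # read bytes from socket
            if not data:                       # socket closed
                break
            for frame in decoder.parse(data):  # parse in order; HPACK stays in sync
                on_frame(frame)                # hand off; ordering is preserved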

Why does http2 use prioritization over streams instead of requests?

The concepts of "stream, connection, message, and frame" constitute the main design of http2, and what confuses me is the idea of the stream.
At first, the stream seemed to me only a virtual description of the flow of frames. But then I found that HTTP/2 prioritization targets streams instead of messages/requests. Why is that? I think applications on both the client and server sides care more about, and directly control, the requests or messages, not the streams those messages reside in.
Please refer to "stream prioritization":
https://developers.google.com/web/fundamentals/performance/http2#design_and_technical_goals
A stream in HTTP/2 corresponds to all the frames which make up a request and its corresponding response, so it is the natural place to handle priority and flow control. The sentences "the response for this request should have high priority" and "the stream for this request and its response should have high priority" are equivalent.
There is a mention in the document you quote of a stream carrying "one or more messages", but I think that's just sloppy language in that document. If you look at section 8.1 of the spec it says "A client sends an HTTP request on a new stream" and "An HTTP request/response exchange fully consumes a single stream."
There can be other frames in that stream, such as PUSH_PROMISE, but those aren't actual requests and responses; the response data for a server push is sent on a new stream, which can then be given a different priority.

Why can't http2 streams be reused?

According to RFC7540:
An HTTP request/response exchange fully consumes a single stream. A request starts with the HEADERS frame that puts the stream into an "open" state. The request ends with a frame bearing END_STREAM, which causes the stream to become "half-closed (local)" for the client and "half-closed (remote)" for the server. A response starts with a HEADERS frame and ends with a frame bearing END_STREAM, which places the stream in the "closed" state.
Knowing that a stream cannot be reopened once it's closed, this means that if I want to implement a long-lived connection where the client sends a stream of requests to the server, I will have to use a new stream for each request. But there is a finite number of streams available, so in theory, I could run out of streams and have to restart the connection.
Why did the writers of the specification design a request/response exchange to completely consume a stream? Wouldn't it have been easy to make a stream like a single thread of exchanges, where you can have multiple exchanges done in serial in one stream?
The point of having many streams multiplexed in a single connection is to interleave them, so that if one cannot proceed, others can.
Reusing a stream for more than one request means just reusing its stream id. I don't see much benefit in reusing 4-byte integers -- on the contrary, the implementation would become quite a bit more complicated.
For example, the server can inform the client of the last stream that it processed when it is about to close a connection. If stream ids were reused, it would not be possible to report this reliably.
Also, imagine the case where the client sends requestA on stream5; this arrives on the server where its processing takes time; the client times out, sends a RST_STREAM for stream5 (to cancel requestA) and then requestB on stream5. While these are in-flight, the server finishes the processing of requestA and sends the response for requestA on stream5. Now the client reads a response, but it does not know if it is that of requestA or that of requestB.
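To make the race concrete, here is a sketch (names are hypothetical) of the client-side correlation table that maps stream ids to pending requests; with ids that are never reused, a response can always be matched to exactly one request:

    pending = {}  # stream id -> request awaiting its response

    def send_request(stream_id, request):
        pending[stream_id] = request

    def on_response(stream_id, response):
        # Unambiguous only because stream ids are unique per connection.
        pending.pop(stream_id).complete(response)

    # If stream 5 were reused, requestA and requestB would both have been
    # keyed by 5 at different times, and a late response for requestA
    # would be indistinguishable from a response for requestB.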
But there is a finite number of streams available, so in theory, I could run out of streams and have to restart the connection.
That is correct. At 1 ms per exchange, it will take about 12 days to consume the stream ids for a single connection ((2^31-1)/1000/3600/24/2=12.4 days) -- remember that stream ids are incremented by 2 (clients only send odd stream ids).
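Spelled out as a quick check:

    ids = (2**31 - 1) // 2      # odd stream ids available to a client
    seconds = ids / 1000        # one exchange per millisecond
    print(seconds / 3600 / 24)  # ~12.4 days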
While this is possible, I have never encountered this case in all the HTTP/2 deployments that I have seen -- typically the connection goes idle and gets closed well before consuming all stream ids.
The specification preferred simplicity and stable features over the ability to reuse stream ids.
Also, bear in mind that HTTP/2 was designed mostly with the web in mind, where browsers make a number of requests to download a web page and its resources, but then stay idle for a while.
The case where an HTTP/2 connection is bombarded with non-stop requests is definitely possible, but much rarer, and as such it has probably not been deemed important enough in the design -- using 8 bytes for stream ids seems overkill, a cost that would be paid for every request even though the 4-byte limit is practically never reached.

Does the WebSocket protocol manage the sending of large data in chunks?

Hi guys, I was just wondering if the WebSocket protocol already handles the sending of large data in chunks. At least knowing that it does will save me the time of doing it myself.
According to RFC 6455, the base framing allows payloads of up to 2^63 bytes, which means the practical limit actually depends on your client library implementation.
I was just wondering if the websocket protocol already handles the sending of large data in chunks...
Depends what you mean by that.
The WebSockets protocol is frame based (not stream based)
If what you're wondering about is "will a huge payload arrive in one piece?" - the answer is always "yes".
The WebSockets protocol is a frame / message based protocol - not a streaming protocol. This means that the protocol wraps and unwraps messages in a way that's designed to guarantee message ordering and integrity. A message will not get truncated in the middle (unlike TCP, which is a stream-based protocol where ordering is preserved, but not message boundaries).
The WebSockets protocol MAY use fragmented "packets"
According to the standard, the protocol may break large messages into smaller chunks. It doesn't have to.
There's a 32-bit compatibility concern that makes some clients / servers fragment messages into smaller fragments and later put them back together on the receiving end (before the onmessage callback is called).
Application layer "chunking" is required for multiplexing
Sending large payloads over a single WebSocket connection will cause a pipelining issue, where other messages will have to wait until the huge payload is sent, received and (if required) re-assembled.
In practice, this means that large payloads should be fragmented by the application layer. This "chunked" application layer approach will enable multiplexing the single WebSocket connection.
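As a rough sketch of such application-layer chunking (the message-id/index framing below is invented for illustration and not part of any standard):

    CHUNK_SIZE = 64 * 1024  # 64 KiB per application-level chunk

    def chunk(message_id, payload):
        # Split a large payload into small pieces that can be interleaved
        # with other messages on the same WebSocket connection.
        total = (len(payload) + CHUNK_SIZE - 1) // CHUNK_SIZE
        for i in range(total):
            yield (message_id, i, total,
                   payload[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE])

    buffers = {}  # message id -> list of received parts

    def on_chunk(message_id, index, total, data):
        # Reassemble on the receiving end; return the message once complete.
        parts = buffers.setdefault(message_id, [None] * total)
        parts[index] = data
        if all(p is not None for p in parts):
            del buffers[message_id]
            return b"".join(parts)
        return None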

http2: PUSH_PROMISE client-side stream state

The http2 spec says:
A receiver MUST treat the receipt of a PUSH_PROMISE on a stream that
is neither "open" nor "half-closed (local)" as a connection error
(Section 5.4.1) of type PROTOCOL_ERROR. However, an endpoint that has
sent RST_STREAM on the associated stream MUST handle PUSH_PROMISE
frames that might have been created before the RST_STREAM frame is
received and processed.
The spec also includes a stream lifecycle diagram (in Section 5.1).
My understanding is that in order for a client to receive a PUSH_PROMISE on a stream, the client must have all of these on that stream:
sent HEADERS frame (+ any CONTINUATIONs) to the server
not received END_STREAM flag from the server
not received RST_STREAM frame from the server
(Notably missing here is "not sent RST_STREAM frame to the server", which would lead to the stream being "closed"; the quote above says this is not grounds for a connection error.)
In any case where these criteria are not met, then the client must treat receiving a PUSH_PROMISE as a connection error.
Is this a correct understanding?
Your understanding is correct.
The HTTP/2 protocol associates PUSH_PROMISE streams with an existing stream, called the associated stream.
The associated stream must meet the conditions defined in the section of the specification quoted in the question; the bullet list in the question is another way of saying the same thing that the specification section says.
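For illustration, a sketch of the client-side check; the stream-state names mirror the specification's lifecycle diagram, and the helper functions are hypothetical:

    def on_push_promise(assoc_stream, conn):
        # PUSH_PROMISE is only valid while the associated stream is
        # "open" or "half-closed (local)".
        if assoc_stream.state in ("open", "half-closed (local)"):
            accept_promise(assoc_stream)
        elif assoc_stream.reset_sent:
            # We sent RST_STREAM; the promise was already in flight,
            # so discard it rather than erroring the connection.
            discard_promise(assoc_stream)
        else:
            conn.error("PROTOCOL_ERROR")  # connection error per the spec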

Why must http/2 stream ids be ascending?

RFC 7540, Section 5.1.1 (https://www.rfc-editor.org/rfc/rfc7540#section-5.1.1), specifies the following:
The identifier of a newly established stream MUST be numerically greater than all streams that the initiating endpoint has opened or reserved.
I searched a lot on Google, but still no one has explained why the stream ID must be in ascending order. I don't see any benefit to the protocol from this rule. From my point of view, out-of-order stream IDs should also work well if the server just considers the "stream ID" as an ID and uses it to distinguish HTTP/2 requests.
So could anyone help explain the exact reason for this requirement?
Thanks a lot!
Strictly ascending stream IDs are an easy way to make them unique (per connection), and they are super-easy to implement.
Choosing - like you say - "out of order" stream IDs is potentially more complicated, as it requires avoiding clashes, and it potentially consumes more resources, as you have to remember all the stream IDs that are in use.
I don't think there is any particular reason to require that stream IDs be ascending apart from simplicity. That said, ascending IDs are also what makes the GOAWAY mechanism work, since GOAWAY reports a single "last stream identifier":
6.8. GOAWAY
The GOAWAY frame (type=0x7) is used to initiate shutdown of a
connection or to signal serious error conditions. GOAWAY allows an
endpoint to gracefully stop accepting new streams while still
finishing processing of previously established streams. This enables
administrative actions, like server maintenance.
There is an inherent race condition between an endpoint starting new
streams and the remote sending a GOAWAY frame. To deal with this
case, the GOAWAY contains the stream identifier of the last peer-
initiated stream that was or might be processed on the sending
endpoint in this connection. For instance, if the server sends a
GOAWAY frame, the identified stream is the highest-numbered stream
initiated by the client.
Once sent, the sender will ignore frames sent on streams initiated by
the receiver if the stream has an identifier higher than the included
last stream identifier. Receivers of a GOAWAY frame MUST NOT open
additional streams on the connection, although a new connection can
be established for new streams.
If the receiver of the GOAWAY has sent data on streams with a higher
stream identifier than what is indicated in the GOAWAY frame, those
streams are not or will not be processed. The receiver of the GOAWAY
frame can treat the streams as though they had never been created at
all, thereby allowing those streams to be retried later on a new
connection.
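Both rules are cheap to enforce precisely because stream ids only grow: a receiver has to remember just one integer per peer. A rough sketch (names are hypothetical):

    last_peer_stream_id = 0

    def on_new_stream(stream_id, conn):
        # Ascending ids make per-connection uniqueness a single comparison.
        global last_peer_stream_id
        if stream_id <= last_peer_stream_id:
            conn.error("PROTOCOL_ERROR")
            return
        last_peer_stream_id = stream_id

    def retriable_after_goaway(requests, last_stream_id):
        # Streams above the GOAWAY's last-stream-id were never processed,
        # so those requests can safely be retried on a new connection.
        return [r for r in requests if r.stream_id > last_stream_id]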
