HTTP2 push after serving content - http2

Is it possible to push content to the client after the requested content has already been served?
This Wikipedia article explains the sequence of frames as follows:
Server receives HEADERS frame asking for index.html in stream 3...
Server sends a PUSH_PROMISE for styles.css and a PUSH_PROMISE for script.js, again in stream 3...
Server sends a HEADERS frame in stream 3 for responding to the request for index.html.
Server sends DATA frame(s) with the contents of index.html, still in stream 3.
Server sends HEADERS frame for the response to styles.css in stream 4
Server sends HEADERS frame for the response to script.js in stream 6.
Server sends DATA frames for the contents of styles.css and script.js, using their respective stream numbers.
I was wondering if, for example, I could keep stream 3 open and, after sending the DATA frame(s) for index.html, send PUSH_PROMISE frames.
Thanks for any responses :)

Is it possible to push content to the client after the requested content has already been served?
I believe the answer is 'no', based on section 6.6 (PUSH_PROMISE) of RFC 7540. Here's the relevant quote (emphasis mine):
PUSH_PROMISE frames MUST only be sent on a peer-initiated stream
that is in either the "open" or "half-closed (remote)" state. The
stream identifier of a PUSH_PROMISE frame indicates the stream it
is associated with. If the stream identifier field specifies the
value 0x0, a recipient MUST respond with a connection error
(Section 5.4.1) of type PROTOCOL_ERROR.
Back to your question:
I was wondering if, for example, I could keep stream 3 open and, after sending the DATA frame(s) for index.html, send PUSH_PROMISE frames.
Here's something that I believe you could do, along those lines: you could send all DATA frames for stream 3 but withhold the END_STREAM flag, thus keeping the stream open (which means that the client would still be waiting for content). Then send the PUSH_PROMISE, then send an empty (zero-length) DATA frame with END_STREAM set on stream 3. I can't think of a scenario where that would be useful, however.
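For illustration, here is a rough sketch of that frame sequence. The Http2FrameWriter interface and its methods are hypothetical, invented only to show the order of frames; they are not a real library API:

import java.util.Map;

class LatePushSketch {
    // Hypothetical frame-writer interface, used only to illustrate the frame order;
    // it does not correspond to any real library API.
    interface Http2FrameWriter {
        void writeHeaders(int streamId, Map<String, String> headers, boolean endStream);
        void writeData(int streamId, byte[] data, boolean endStream);
        void writePushPromise(int streamId, int promisedStreamId, Map<String, String> requestHeaders);
    }

    static void serveWithLatePush(Http2FrameWriter writer, byte[] indexHtml, byte[] styles) {
        // Respond on stream 3 but withhold END_STREAM, so the stream stays
        // "open" / "half-closed (remote)" and may still carry a PUSH_PROMISE.
        writer.writeHeaders(3, Map.of(":status", "200"), false);
        writer.writeData(3, indexHtml, false);

        // Still allowed: the server has not yet ended stream 3.
        writer.writePushPromise(3, 4, Map.of(":method", "GET", ":path", "/styles.css"));

        // Now close stream 3 with an empty (zero-length) DATA frame carrying END_STREAM.
        writer.writeData(3, new byte[0], true);

        // Fulfil the promise on the reserved stream 4.
        writer.writeHeaders(4, Map.of(":status", "200"), false);
        writer.writeData(4, styles, true);
    }
}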

Related

why does http2 use prioritization over streams instead of requests?

The concepts of "stream, connection, message, and frame" constitute the main design of http2, and what confuses me is the idea of the stream.
At first, the stream seemed to me to be only a virtual description of the flow of frames. But then I found that prioritization in http2 is aimed at streams instead of messages/requests. Why is that? I think that applications, on both the client and server sides, care more about and directly control requests or messages, not the streams those messages reside in.
Please refer to "stream prioritization":
https://developers.google.com/web/fundamentals/performance/http2#design_and_technical_goals
A stream in HTTP/2 corresponds to all the frames which make up a request and its corresponding response, so is the natural place to handle priority and flow control. The sentences "the response for this request should have high priority" and "the stream for this request and its response should have high priority" are equivalent.
There is a mention in the document you quote of a stream carrying "one or more messages", but I think that's just sloppy language in that document. If you look at section 8.1 of the spec it says "A client sends an HTTP request on a new stream" and "An HTTP request/response exchange fully consumes a single stream."
There can be other frames in that stream, such as PUSH_PROMISE, but those aren't actual requests and responses; the response data for a server push is sent on a new stream, which can then be given a different priority.
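As a small concrete illustration of why priority attaches to streams: a PRIORITY frame (RFC 7540, section 6.3) carries nothing but an exclusive flag, a 31-bit stream dependency and a weight, and it is addressed to a stream ID in the frame header. A minimal encoding sketch; the class and method names are mine, not from any library:

import java.nio.ByteBuffer;

class PriorityFrameSketch {
    // Encodes a PRIORITY frame (RFC 7540, section 6.3): the priority information
    // is addressed to a stream ID, not to an individual request or message.
    static byte[] encodePriority(int streamId, int dependsOnStreamId, boolean exclusive, int weight) {
        ByteBuffer buf = ByteBuffer.allocate(9 + 5);
        // Frame header: 24-bit length, 8-bit type (0x2 = PRIORITY), 8-bit flags, 31-bit stream ID.
        buf.put((byte) 0).put((byte) 0).put((byte) 5);   // payload length = 5
        buf.put((byte) 0x2);                              // type = PRIORITY
        buf.put((byte) 0);                                // no flags defined for PRIORITY
        buf.putInt(streamId & 0x7FFFFFFF);                // the stream this priority applies to
        // Payload: exclusive bit + 31-bit stream dependency, then weight - 1 in one byte.
        int dependency = dependsOnStreamId & 0x7FFFFFFF;
        if (exclusive) {
            dependency |= 0x80000000;
        }
        buf.putInt(dependency);
        buf.put((byte) (weight - 1));                     // weights are 1..256, sent as 0..255
        return buf.array();
    }
}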

Setting up a video stream with Spring Framework and Chrome

We're writing a Spring service that makes an HTTP endpoint available through which a video (or audio) file from an Amazon S3 store can be streamed. The basic idea is that you can type a URL into the Google Chrome address bar, and the service will fetch the file from S3 and stream it, in such a way that the user can start watching immediately without having to wait for the download to complete, and can click on a random spot in the video's progress bar and immediately start watching the video from that spot.
The way I understand this should work in theory, is that Chrome starts downloading the file. The service responds with HTTP 200 and includes an Accept-Ranges: bytes and a Content-Length: filesize header. The filesize is known, because we can query that as metadata from S3 without fetching the entire file. Including these headers causes the browser to cancel the download, and request the file again with a Range: bytes=0-whatever header (where whatever is some chunk size that Chrome decides). The service then responds with HTTP 206 (Partial content) and the requested byte range, which we can determine easily because S3 supports the same range protocol. Chrome then requests successive chunks from the service, until the stream ends.
On the Spring side, we're sending the data out in a ResponseEntity<InputStreamResource> (as per this SO answer).
However, we observe in practice that Chrome cancels its first request after a few hundred bytes, and then sends a second request with a Range: bytes=0- header, effectively asking for the entire file. The server responds with an HTTP 206. As a result, it has only downloaded a few hundred bytes of video, and the video obviously doesn't start playing.
Interestingly, in Firefox it all works properly. Unfortunately, our app needs to support Chrome. Are we missing some part of the protocol?
It turns out we had an off-by-one error in the Content-Range response header.
The syntax is Content-Range: bytes start-end/total. With a total of 10, if you want to get the entire range, you need to specify bytes 0-9/10, not 0-10/10, which was what we were doing.
Of course with the larger sizes of real files, and the actual ranges of chunks in the middle of such files, this error was a lot harder to notice than in the contrived example in the previous paragraph... ಠ_ಠ
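For reference, a simplified sketch of the corrected range handling in a Spring controller. The endpoint, the in-memory loadBytes stand-in and the fixed totalSize are illustrative only; the real service streams the requested byte range from S3:

import java.io.ByteArrayInputStream;
import java.util.List;
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpRange;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
class VideoStreamController {

    // Stand-ins for the S3 lookups; the real service fetches the byte range and size from S3.
    private byte[] loadBytes(long start, long end) { return new byte[(int) (end - start + 1)]; }
    private long totalSize() { return 10; }

    @GetMapping("/video")
    ResponseEntity<InputStreamResource> video(
            @RequestHeader(value = HttpHeaders.RANGE, required = false) String rangeHeader) {

        long total = totalSize();

        if (rangeHeader == null) {
            // First request: advertise range support and the full length; Chrome
            // will typically cancel this response and re-request with a Range header.
            HttpHeaders headers = new HttpHeaders();
            headers.set(HttpHeaders.ACCEPT_RANGES, "bytes");
            headers.setContentLength(total);
            return new ResponseEntity<>(
                    new InputStreamResource(new ByteArrayInputStream(loadBytes(0, total - 1))),
                    headers, HttpStatus.OK);
        }

        // Subsequent requests: honour the requested range; "bytes=0-" means "from 0 to the end".
        List<HttpRange> ranges = HttpRange.parseRanges(rangeHeader);
        HttpRange range = ranges.get(0);
        long start = range.getRangeStart(total);
        long end = range.getRangeEnd(total);          // inclusive index of the last byte

        HttpHeaders headers = new HttpHeaders();
        headers.set(HttpHeaders.ACCEPT_RANGES, "bytes");
        // The off-by-one trap: for total = 10 the full range is "bytes 0-9/10", not "0-10/10".
        headers.set(HttpHeaders.CONTENT_RANGE, "bytes " + start + "-" + end + "/" + total);
        headers.setContentLength(end - start + 1);
        return new ResponseEntity<>(
                new InputStreamResource(new ByteArrayInputStream(loadBytes(start, end))),
                headers, HttpStatus.PARTIAL_CONTENT);
    }
}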

http2: PUSH_PROMISE client-side stream state

The http2 spec says:
A receiver MUST treat the receipt of a PUSH_PROMISE on a stream that
is neither "open" nor "half-closed (local)" as a connection error
(Section 5.4.1) of type PROTOCOL_ERROR. However, an endpoint that has
sent RST_STREAM on the associated stream MUST handle PUSH_PROMISE
frames that might have been created before the RST_STREAM frame is
received and processed.
The spec also has this lifecycle diagram.
My understanding is that in order for a client to receive a PUSH_PROMISE on a stream, the client must have all of these on that stream:
sent HEADERS frame (+ any CONTINUATIONs) to the server
not received END_STREAM flag from the server
not received RST_STREAM frame from the server
(Notably missing here is "not sent RST_STREAM frame to the server", which would lead to the stream being "closed"; the quote above says this is not grounds for a connection error.)
In any case where these criteria are not met, then the client must treat receiving a PUSH_PROMISE as a connection error.
Is this a correct understanding?
Your understanding is correct.
The HTTP/2 protocol associates PUSH_PROMISE streams to an existing stream, called the associated stream.
The associated stream must meet the conditions defined in the section of the specification quoted in the question; the bullet list in the question is another way of saying the same thing that the specification section says.
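A sketch of that check from the client's point of view; the StreamState enum and the method are illustrative, not taken from any particular implementation:

class PushPromiseValidation {
    enum StreamState { IDLE, OPEN, HALF_CLOSED_LOCAL, HALF_CLOSED_REMOTE, CLOSED }

    // From the client's point of view, the associated stream must be one on which the
    // client has sent its request (OPEN) or finished sending it (HALF_CLOSED_LOCAL),
    // and from which it has received neither END_STREAM nor RST_STREAM.
    static boolean connectionErrorOnPushPromise(StreamState state, boolean clientSentRstStream) {
        if (state == StreamState.OPEN || state == StreamState.HALF_CLOSED_LOCAL) {
            return false; // legal PUSH_PROMISE
        }
        // A stream the client itself reset may still receive in-flight PUSH_PROMISE frames;
        // the spec says these must be handled, not treated as a connection error.
        if (clientSentRstStream) {
            return false;
        }
        return true; // connection error of type PROTOCOL_ERROR
    }
}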

http2: PUSH_PROMISE reserved stream id validation

The spec says:
The identifier of a newly established stream MUST be numerically
greater than all streams that the initiating endpoint has opened or
reserved. This governs streams that are opened using a HEADERS frame
and streams that are reserved using PUSH_PROMISE. An endpoint that
receives an unexpected stream identifier MUST respond with a
connection error (Section 5.4.1) of type PROTOCOL_ERROR.
For the case of the server that sends PUSH_PROMISE it makes sense to me that conforming servers must send strictly increasing stream ids. But I don't understand how the client is supposed to detect this situation.
For example, on one connection, if the server sends:
PUSH_PROMISE promised stream 2
PUSH_PROMISE promised stream 4
because of concurrency the client might receive
PUSH_PROMISE promised stream 4
PUSH_PROMISE promised stream 2
the spec would have me think that the client should error on this, but the server did nothing wrong.
What am I missing here?
If the server wrote PUSH_PROMISE[stream=2] and then PUSH_PROMISE[stream=4], then those frames will be delivered in the same order (this is guaranteed by TCP).
It is the task of the client to read from the socket in an ordered way.
For an HTTP/2 implementation the requirement is even stricter, in that not only does it have to read from the socket in an ordered way, it must also parse the frames in an ordered way.
This is required by the fact that a PUSH_PROMISE frame carries an HPACK block, and in order to keep the server and client HPACK contexts in sync, the frames (or at least the HPACK blocks of those frames) must be processed in order, so stream=2 before stream=4.
After that, the client is free to process the 2 frames concurrently.
For implementations, this is actually quite simple to achieve, since a thread allocated to perform I/O reads typically does:
loop
    read bytes from socket
    if no bytes or socket closed -> break loop
    parse read bytes (with HPACK decoding) -> produce frame objects
    pass frame objects to upper software layer
end loop
Since the read and parse are sequential and no other thread reads from the same socket, the ordering guarantee is met.
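A bare-bones version of that loop in Java, reading frames sequentially from a single socket. The 9-byte frame header layout is taken from RFC 7540, section 4.1; HPACK decoding and frame dispatch are left abstract behind a handler interface:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

class FrameReadLoop {
    interface FrameHandler {
        // Called strictly in the order frames appear on the wire,
        // which keeps the HPACK decoding context in sync with the sender.
        void onFrame(int type, int flags, int streamId, byte[] payload);
    }

    static void readLoop(InputStream socketIn, FrameHandler handler) throws IOException {
        DataInputStream in = new DataInputStream(socketIn);
        while (true) {
            byte[] header = new byte[9];
            try {
                in.readFully(header);                 // 9-byte frame header (RFC 7540, 4.1)
            } catch (EOFException e) {
                return;                               // socket closed -> break loop
            }
            int length = ((header[0] & 0xFF) << 16) | ((header[1] & 0xFF) << 8) | (header[2] & 0xFF);
            int type = header[3] & 0xFF;
            int flags = header[4] & 0xFF;
            int streamId = ((header[5] & 0x7F) << 24) | ((header[6] & 0xFF) << 16)
                    | ((header[7] & 0xFF) << 8) | (header[8] & 0xFF);
            byte[] payload = new byte[length];
            in.readFully(payload);                    // read the full payload before the next frame
            // Frames reach the handler in wire order; only after this point may
            // the upper layer process them concurrently.
            handler.onFrame(type, flags, streamId, payload);
        }
    }
}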

why must http/2 stream ids be ascending?

RFC 7540, section 5.1.1 (https://www.rfc-editor.org/rfc/rfc7540#section-5.1.1), specifies the following:
The identifier of a newly established stream MUST be numerically greater than all streams that the initiating endpoint has opened or reserved.
I searched a lot on Google, but still no one has explained why the stream ID must be in ascending order. I don't see any benefit to the protocol from this rule. From my point of view, out-of-order stream IDs should work just as well if the server simply treats the stream ID as an ID and uses it to distinguish HTTP2 requests.
So could anyone help explain the exact reason for this requirement?
Thanks a lot!
Strictly ascending stream IDs are an easy way to make them unique (per connection), and it's super-easy to implement.
Choosing - like you say - "out of order" stream IDs is potentially more complicated, as it requires avoiding clashes, and potentially consumes more resources, as you have to remember all the stream IDs that are in use.
I don't think there is any particular reason to specify that stream IDs must be ascending apart from simplicity. Note, though, that the GOAWAY mechanism leans on this ordering: the "last stream identifier" it carries can only tell the receiver which of its streams were or might have been processed because stream IDs grow monotonically. From RFC 7540:
6.8. GOAWAY
The GOAWAY frame (type=0x7) is used to initiate shutdown of a
connection or to signal serious error conditions. GOAWAY allows an
endpoint to gracefully stop accepting new streams while still
finishing processing of previously established streams. This enables
administrative actions, like server maintenance.
There is an inherent race condition between an endpoint starting new
streams and the remote sending a GOAWAY frame. To deal with this
case, the GOAWAY contains the stream identifier of the last peer-
initiated stream that was or might be processed on the sending
endpoint in this connection. For instance, if the server sends a
GOAWAY frame, the identified stream is the highest-numbered stream
initiated by the client.
Once sent, the sender will ignore frames sent on streams initiated by
the receiver if the stream has an identifier higher than the included
last stream identifier. Receivers of a GOAWAY frame MUST NOT open
additional streams on the connection, although a new connection can
be established for new streams.
If the receiver of the GOAWAY has sent data on streams with a higher
stream identifier than what is indicated in the GOAWAY frame, those
streams are not or will not be processed. The receiver of the GOAWAY
frame can treat the streams as though they had never been created at
all, thereby allowing those streams to be retried later on a new
connection.
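As an illustration, a receiver can enforce the "numerically greater" rule by remembering nothing more than the highest stream ID the peer has used so far, and the same ordering is what gives GOAWAY's last stream identifier its meaning. A minimal sketch, not tied to any particular implementation:

class StreamIdBookkeeping {
    private int highestPeerStreamId = 0;   // highest stream ID opened or reserved by the peer

    // RFC 7540, 5.1.1: a new stream ID must be numerically greater than all stream IDs
    // the peer has opened or reserved; otherwise it is a connection error (PROTOCOL_ERROR).
    boolean acceptNewPeerStream(int streamId) {
        if (streamId <= highestPeerStreamId) {
            return false;                  // reject: unexpected stream identifier
        }
        highestPeerStreamId = streamId;
        return true;
    }

    // RFC 7540, 6.8: on GOAWAY, streams this endpoint initiated with an ID higher than
    // lastStreamId were not (and will not be) processed and can be retried on a new connection.
    boolean wasOrMightBeProcessed(int ourStreamId, int lastStreamIdFromGoaway) {
        return ourStreamId <= lastStreamIdFromGoaway;
    }
}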
