I'm writing a proxy from HTTP/2 to HTTP/1 and vice-versa.
When I have an incoming HTTP/2 request, which defines :scheme, what header should I map that to for my proxied HTTP/1 request?
The closest thing I can find is https://www.rfc-editor.org/rfc/rfc7239#section-5.4
Mapping the HTTP/2 :scheme pseudo-header to the HTTP/1.1 X-Forwarded-Proto header would be correct.
You basically shouldn't map it.
For a start, HTTP/1.1 has no direct equivalent to the :scheme pseudo-header. The request target is typically a relative path (e.g. /path/page/) rather than an absolute URI (e.g. https://www.example.com/path/page/), and the Host header contains just the server name, not the scheme.
So the connection knows whether it is HTTP or HTTPS, and this is exposed to web servers and the like (e.g. in the REQUEST_SCHEME variable for Apache), but at the HTTP message level the scheme is not stated.
If you are acting as an intercepting proxy, taking requests in on one HTTP/2 connection and forwarding them on another connection, then you should open an HTTP or HTTPS connection for that second hop as you see fit, depending on what the downstream system supports.
As sbordet points out, if you want to make the downstream system aware of what the original scheme was, you can use the X-Forwarded-Proto header (superseded by Forwarded but still widely used) or the Forwarded header, but that's more for informational purposes than a direct mapping of what was in the original request. The scheme itself belongs to the request you actually make on each hop.
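If you do want to pass the original scheme along for information, a minimal sketch in Python (the function, dictionaries and parameter names here are hypothetical, purely for illustration) could look like this:

def add_scheme_info(pseudo_headers, downstream_headers, client_ip):
    # Copy the incoming :scheme into informational headers on the outgoing
    # HTTP/1.1 request; these do not change how the downstream hop is reached.
    scheme = pseudo_headers.get(":scheme", "http")
    authority = pseudo_headers.get(":authority", "")
    # Non-standard but widely understood:
    downstream_headers["X-Forwarded-Proto"] = scheme
    # Standardized alternative from RFC 7239:
    downstream_headers["Forwarded"] = "for=%s;host=%s;proto=%s" % (client_ip, authority, scheme)
    return downstream_headers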
According to RFC 7540, section 8.1.2:
While HTTP/1.x used the message start-line (see [RFC7230],
Section 3.1) to convey the target URI, the method of the request, and
the status code for the response, HTTP/2 uses special pseudo-header
fields beginning with ':' character (ASCII 0x3a) for this purpose.
And:
The ":scheme" pseudo-header field includes the scheme portion of
the target URI ([RFC3986], Section 3.1).
":scheme" is not restricted to "http" and "https" schemed URIs. A
proxy or gateway can translate requests for non-HTTP schemes,
enabling the use of HTTP to interact with non-HTTP services.
So, if you're proxying HTTP, it should be "http", and if you're proxying HTTPS, it should be "https".
Reading again, I can see that I may have had the sense of the question the wrong way around (I was thinking of an HTTP/1 client and an HTTP/2 server). But the above two quotes are still the relevant ones. You don't put :scheme in an HTTP/1 header - it forms part of the URI that you place in the message start line.
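To make that concrete, here is a rough sketch (hypothetical function and variable names, not from any particular library) of turning the pseudo-headers into an HTTP/1.1 start line and Host header:

def to_http1_request(pseudo, other_headers):
    # :scheme does not become a header; it only decides whether the next-hop
    # connection is plain TCP (http) or TLS (https).
    use_tls = pseudo[":scheme"] == "https"
    # :method and :path form the start line, :authority becomes Host.
    start_line = "%s %s HTTP/1.1" % (pseudo[":method"], pseudo[":path"])
    lines = [start_line, "Host: " + pseudo[":authority"]]
    lines += ["%s: %s" % (name, value) for name, value in other_headers.items()]
    return use_tls, "\r\n".join(lines) + "\r\n\r\n"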
Related
What is the purpose of the HTTP/2 pseudo-headers :authority & :method? I feel confused because :authority & :method seem to repeat the request URL (the host) and request method from HTTP/1.1.
Compared to the :path pseudo-header, as explained in https://developers.google.com/web/fundamentals/performance/http2#header_compression, I can see it can be reused across consecutive requests for other resources. So I suspect :authority & :method may be an optimization for that purpose too, but I can't figure out how exactly. For example, if :authority, :method and :path are all different from the original request URL and request method, shouldn't the browser issue a new request?
The HTTP/2 :method pseudo-header is equivalent to the HTTP/1.1 request method (the first token of the HTTP/1.1 request line).
The HTTP/2 :authority pseudo-header is a stricter, mandatory piece of information about the host authority (i.e. host name and host port).
In HTTP/1.1, the host authority is derived from multiple sources, and may even be absent, causing a number of confusing behaviors that depend on server implementations.
For example, the authority could be present in the HTTP/1.1 request target, when it is in absolute form.
If the HTTP/1.1 request target is in origin form, typically the authority is defined by the Host header.
However, an HTTP/1.1 request could be of this form:
GET / HTTP/1.1\r\n
Host: \r\n
\r\n
where the Host field is empty and hence there is no authority. Requests of this type are uncommon, but technically valid and different server implementations may behave differently.
Furthermore, the authority may be overridden by the Forwarded header and its obsolete predecessors, the X-Forwarded-* headers, but again with fuzzy rules.
The purpose of the HTTP/2 :authority pseudo-header is to clear up the confusion about the authority, so that there is a single source rather than multiple sources such as the HTTP/1.1 absolute form, the Host header, etc.
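As a rough illustration of the HTTP/1.1 ambiguity, a gateway that has to reconstruct a single :authority value might do something like the following (a sketch of the commonly applied precedence, not code from any particular server):

from urllib.parse import urlsplit

def derive_authority(request_target, host_header):
    # Absolute-form target (e.g. "http://example.com/path") carries the
    # authority itself and takes precedence over the Host header.
    if "://" in request_target:
        return urlsplit(request_target).netloc
    # Origin-form target (e.g. "/path"): fall back to the Host header,
    # which may legitimately be empty, as in the example above.
    return host_header or ""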
Regarding the optimizations, it is true that HTTP/2 may optimize how the :authority information is sent, but that is orthogonal to the purpose of the :authority pseudo-header.
The optimization works (very roughly) in this way: when a user agent (such as a browser) makes a request for a page, it is likely that it will make subsequent requests to the same authority for other, secondary, resources that are necessary to render the page (for example, CSS resources, JavaScript resources, images, etc.).
The HTTP/2 protocol indexes the :authority string, let's say at index 17 of the HPACK context, so the browser sends to the server the first request with the information :authority => (17, "veryverylongdomainname.com").
For the second request, the browser and the server now share this common information, so the browser can just send to the server :authority => 17, and the server will look up the authority from index 17, saving the send of the authority string bytes over the network.
This HPACK mechanism is valid for most headers, not only for pseudo-headers. For further information please see the HPACK specification.
Browsers must make a request for every resource. If they make requests to the same authority, the HPACK optimization may kick in and reduce the number of bytes sent over the network.
If browsers make requests for different authorities (imagine a web spider), then the authority is likely to be different in every request and the HPACK optimization will likely not kick in because the authority is always different.
In the worst case (authorities always different), HTTP/2 is as good as HTTP/1.1; but in the common case (web pages with many resources from the same authority), HTTP/2 is better than HTTP/1.1, as fewer bytes are sent over the network.
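If you want to see this effect for yourself, the following sketch uses the third-party hpack Python package (an assumption on my part; any HPACK implementation would do) to compare the encoded size of the same header sent twice:

from hpack import Encoder  # pip install hpack

enc = Encoder()
headers = [(":authority", "veryverylongdomainname.com")]

first = enc.encode(headers)   # literal header, inserted into the dynamic table
second = enc.encode(headers)  # just an index referencing the table entry

print(len(first), len(second))  # the second encoding is only a byte or two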
When an Authorization header is present in the inbound request, it's always a cache miss. My requirement is that ATS treats the Authorization header like any other header (it should not cause a cache miss and it should get forwarded to the upstream service). How can I achieve this?
This may sound insecure, but I have a specific use case for this. The cache is for internal use and its access is controlled by other means.
I tried this
As per the official documentation
By default, Traffic Server does not cache objects with the following
request headers:
Authorization
Cache-Control: no-store
Cache-Control: no-cache
To configure Traffic Server to ignore this request header:
Edit proxy.config.http.cache.ignore_client_no_cache in records.config.
CONFIG proxy.config.http.cache.ignore_client_no_cache INT 1
Run the command traffic_ctl config reload to apply the configuration changes.
but, no luck
If your origin returns a Cache-Control header with the public directive (for instance, "Cache-Control: max-age=60,public") or one including the s-maxage directive (for instance, "Cache-Control: s-maxage=60"), ATS should start caching the object. The relevant HTTP RFC:
https://www.rfc-editor.org/rfc/rfc2616#section-14.8
When a shared cache (see section 13.7) receives a request
containing an Authorization field, it MUST NOT return the
corresponding response as a reply to any other request, unless one
of the following specific exceptions holds:
1. If the response includes the "s-maxage" cache-control
directive, the cache MAY use that response
...
3. If the response includes the "public" cache-control directive,
it MAY be returned in reply to any subsequent request.
Similarly, you could also use the header_rewrite plugin to remove the Authorization header from the request, or to add public/s-maxage.
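For example, a header_rewrite rule along these lines should drop the header before the cache lookup (a sketch based on the plugin's documented condition/operator syntax; adjust the hook and file name to your setup):

# remove_auth.config - loaded via plugin.config or per-remap in remap.config
cond %{READ_REQUEST_HDR_HOOK} [AND]
rm-header Authorization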
Actually this https://docs.trafficserver.apache.org/en/latest/admin-guide/configuration/cache-basics.en.html#configuring-traffic-server-to-ignore-www-authenticate-headers did the trick for me.
The following instructions were applicable to the Authorization header as well, not just the WWW-Authenticate header. They need to update the documentation.
Configuring Traffic Server to Ignore WWW-Authenticate Headers
By default, Traffic Server does not cache objects that contain WWW-Authenticate response headers. The WWW-Authenticate header contains authentication parameters the client uses when preparing the authentication challenge response to an origin server.
When you configure Traffic Server to ignore origin server WWW-Authenticate headers, all objects with WWW-Authenticate headers are stored in the cache for future requests. However, the default behavior of not caching objects with WWW-Authenticate headers is appropriate in most cases. Only configure Traffic Server to ignore server WWW-Authenticate headers if you are knowledgeable about HTTP 1.1.
To configure Traffic Server to ignore server WWW-Authenticate headers:
Edit proxy.config.http.cache.ignore_authentication in records.config.
CONFIG proxy.config.http.cache.ignore_authentication INT 1
Run the command traffic_ctl config reload to apply the configuration changes.
I am hoping to get some clarification on the expected behavior of a SIP Proxy when proxying 401 responses from a downstream UAS.
Our SIP Proxy is configured to proxy requests downstream in a round-robin fashion. If the downstream UAS responds to an INVITE with a 401, I would expect the SIP Proxy to keep enough state to select this same UAS as the target when the originating upstream UAC sends the second INVITE containing authentication credentials.
Instead, what I'm seeing is that the SIP Proxy will proxy the 401 response, receive the ACK from the upstream UAC, and immediately destroy all state pertaining to this dialog. Then when the upstream UAC sends the second INVITE with authentication credentials the SIP Proxy will forward that request in round-robin fashion. If we get lucky then the SIP Proxy will select the same UAS for the second INVITE, but most of the time it will select some other downstream target.
I'm new to SIP and I've been reading RFC 3261 to try and understand what the correct behavior should be, but I'm not seeing an obvious answer.
I think what you are really asking is an understanding of how further requests within a dialog work. For that you need to understand the "Record-Route" / "Route" headers.
It really doesn't matter what the response code is; the next request in the dialog will go directly to the remote target URI unless a route set is provided (and there almost always is one).
From section 12 of RFC 3261:
The route set is the list of servers that need to be traversed to
send a request to the peer.
From section 16.6 Request Forwarding
4. Record-Route
If this proxy wishes to remain on the path of future requests
in a dialog created by this request (assuming the request
creates a dialog), it MUST insert a Record-Route header field
value into the copy before any existing Record-Route header
field values, even if a Route header field is already present.
From 20.34 Route
The Route header field is used to force routing for a request
through the listed set of proxies. Examples of the use of the
Route header field are in Section 16.12.1.
From 12.1.2 UAC Behavior
The route set MUST be set to the list of URIs in the Record-Route
header field from the response, taken in reverse order and preserving
all URI parameters. If no Record-Route header field is present in
the response, the route set MUST be set to the empty set. This route
set, even if empty, overrides any pre-existing route set for future
requests in this dialog.
From 16.12 Summary of Proxy Route Processing
In the absence of local policy to the contrary, the processing a
proxy performs on a request containing a Route header field can be
summarized in the following steps.
1. The proxy will inspect the Request-URI. If it indicates a
resource owned by this proxy, the proxy will replace it with
the results of running a location service. Otherwise, the
proxy will not change the Request-URI.
2. The proxy will inspect the URI in the topmost Route header
field value. If it indicates this proxy, the proxy removes it
from the Route header field (this route node has been
reached).
3. The proxy will forward the request to the resource indicated
by the URI in the topmost Route header field value or in the
Request-URI if no Route header field is present. The proxy
determines the address, port and transport to use when
forwarding the request by applying the procedures in [4] to
that URI.
See this example for how it works.
So basically the initial request should build up a route set that is then used to generate the Route header in the following requests.
So for your problem, it sounds like either the route set is not being built up and/or not being sent back in the response, or the UAC isn't using the remote target and route set to build the Request-URI and Route headers correctly for the next request.
There is also the difference between strict and loose routing, which may be in play here as well. I would assume you are using lr (loose routing), though.
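As a simplified illustration of the mechanism (hypothetical hostnames, dialog established by the 2xx, most headers omitted):

INVITE sip:bob@uas1.example.com SIP/2.0        (UAC -> proxy -> UAS)
Record-Route: <sip:proxy.example.com;lr>       (inserted by the proxy)

SIP/2.0 200 OK                                 (UAS -> proxy -> UAC)
Record-Route: <sip:proxy.example.com;lr>       (copied back unchanged)

BYE sip:bob@uas1.example.com SIP/2.0           (later in-dialog request)
Route: <sip:proxy.example.com;lr>              (built from the stored route set)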
Do I have to return a HTTP Strict Transport Security header for all resources (stylesheets, scripts, images) loaded with my documents? or is it enough to include them for the documents only?
The security hint should be applied per-domain, so just sending it with the documents should be enough to inform the browser to only fetch resources over HTTPS? Or have I misunderstood how it is supposed to work?
Anyone only accessing my site’s resources directly is not really an audience I want to cater for specifically anyway.
Turns out it should be enough to send the header for documents.
If a UA receives HTTP responses from a Known HSTS Host over a secure channel but the responses are missing the STS header field, the UA MUST continue to treat the host as a Known HSTS Host until the max-age value for the knowledge of that Known HSTS Host is reached.
https://www.rfc-editor.org/rfc/rfc6797#section-8.6
Hoping clients have implemented the RFC correctly.
Update: Here is the Apache configuration I used. I unset it for resources instead of setting it for documents specifically to make sure the header is used in redirects and other pages generated by Apache.
# Enable HSTS for all responses, but disable for common resources
Header always set Strict-Transport-Security "max-age=324000; includeSubDomains"
<FilesMatch "\.(css|gif|ico|jpeg|jpg|js|png|woff)$">
Header unset Strict-Transport-Security
</FilesMatch>
Shaves off 64 bytes from each resource’s response headers.
I'm loading my script on a domain and sending some data with POST and the use of Ext.Ajax.request() to that same domain.
Somehow the dev tools show me that there is a failed OPTIONS request.
Request URL : myurl-internal.com:8090/some/rest/api.php
Request Headers
Access-Control-Request-Headers : origin, x-requested-with, content-type
Access-Control-Request-Method : POST
Origin : http://myurl-internal.com:8090
It's both HTTP and not HTTPS. Same port, same host ... I don't know why it's doing this.
The server can't handle such stuff and so the request fails and the whole system stops working.
It's not really specific to Ext JS -- see these related threads across other frameworks. It's the server properly enforcing the CORS standard:
for HTTP request methods that can cause side-effects on user data (in
particular, for HTTP methods other than GET, or for POST usage with
certain MIME types), the specification mandates that browsers
“preflight” the request, soliciting supported methods from the server
with an HTTP OPTIONS request header, and then, upon “approval” from
the server, sending the actual request with the actual HTTP request
method.
If you're going to use CORS, you need to be able to either properly handle or ignore these requests on the server. Ext JS itself doesn't care about the OPTIONS requests -- you'll receive the responses as expected, but unless you do something with them they'll just be ignored (assuming the server actually allows whatever you're trying to do).
If you are NOT intending to use CORS (which sounds like you aren't purposefully going cross-domain) then you need to figure out why the server thinks the originating domain is different (I'm not sure about that). You could also bypass CORS altogether by using JsonP (via Ext's JsonP proxy).
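If you do need to answer the preflight on the server, a minimal sketch of an OPTIONS handler (using Python's standard http.server purely as an illustration; your real api.php backend will obviously differ) looks like this:

from http.server import BaseHTTPRequestHandler, HTTPServer

class PreflightHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Answer the CORS preflight so the browser will send the real POST.
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "http://myurl-internal.com:8090")
        self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8090), PreflightHandler).serve_forever()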
Use a relative URL instead of an absolute one; then you will get the expected result.
Use this before the request:
Ext.Ajax.useDefaultXhrHeader = false;