HSTS header for all resources, or documents only? - performance

Do I have to return an HTTP Strict-Transport-Security header for all resources (stylesheets, scripts, images) loaded with my documents, or is it enough to include it with the documents only?
The policy is applied per host, so just sending it with the documents should be enough to tell the browser to only fetch resources over HTTPS? Or have I misunderstood how it is supposed to work?
Anyone accessing my site's resources directly is not really an audience I want to cater to specifically anyway.

Turns out it should be enough to send the header for documents.
If a UA receives HTTP responses from a Known HSTS Host over a secure channel but the responses are missing the STS header field, the UA MUST continue to treat the host as a Known HSTS Host until the max-age value for the knowledge of that Known HSTS Host is reached.
https://www.rfc-editor.org/rfc/rfc6797#section-8.6
Hoping clients have implemented the RFC correctly.
Update: Here is the Apache configuration I used. I unset the header for resources instead of setting it for documents specifically, to make sure it is also sent on redirects and other responses generated by Apache.
# Enable HSTS for all responses, but disable for common resources
Header always set Strict-Transport-Security "max-age=324000; includeSubDomains"
<FilesMatch "\.(css|gif|ico|jpeg|jpg|js|png|woff)$">
Header always unset Strict-Transport-Security
</FilesMatch>
Shaves off 64 bytes from each resource’s response headers.
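To spot-check the result (hostname and paths are placeholders), compare the response headers of a document and a resource:
# document: should print the header
curl -sI https://example.com/ | grep -i strict-transport-security
# stylesheet: should print nothing
curl -sI https://example.com/main.css | grep -i strict-transport-security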

Related

What is the HTTP/1 equivalent of the HTTP/2 `:scheme` header?

I'm writing a proxy from HTTP/2 to HTTP/1 and vice-versa.
When I have an incoming HTTP/2 request, which defines :scheme, what header should I map that to for my proxied HTTP/1 request?
The closest thing I can find is https://www.rfc-editor.org/rfc/rfc7239#section-5.4
Mapping the HTTP/2 :scheme pseudo-header to the HTTP/1.1 X-Forwarded-Proto header would be correct.
You basically shouldn't map it.
For a start, HTTP/1.1 has no direct equivalent of the :scheme pseudo-header. The request line carried a relative path (e.g. /path/page/) rather than an absolute URI (e.g. https://www.example.com/path/page/), and the Host header contained just the server name, not the scheme.
So the connection knows whether it is HTTP or HTTPS, and this is exposed to web servers and the like (e.g. in the REQUEST_SCHEME variable for Apache), but at the HTTP level the request itself doesn't carry it.
If you are acting as an intercepting proxy, taking requests in on one HTTP/2 connection and forwarding them on another, then you should open an HTTP or HTTPS connection for that second hop as you see fit, depending on what the downstream system supports.
As sbordet points out, if you want to make the downstream system aware of the original scheme, you can use the X-Forwarded-Proto header (technically obsoleted by the standard Forwarded header, but still widely used) or the Forwarded header itself, but that is more for informational purposes than a direct mapping of what was in the original request. The scheme belongs to the current request.
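For illustration, either of these header fields (values made up) conveys the original scheme downstream:
X-Forwarded-Proto: https
Forwarded: for=192.0.2.1;proto=https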
According to RFC 7540, section 8.1.2:
While HTTP/1.x used the message start-line (see [RFC7230],
Section 3.1) to convey the target URI, the method of the request, and
the status code for the response, HTTP/2 uses special pseudo-header
fields beginning with ':' character (ASCII 0x3a) for this purpose.
And:
The ":scheme" pseudo-header field includes the scheme portion of
the target URI ([RFC3986], Section 3.1).
":scheme" is not restricted to "http" and "https" schemed URIs. A
proxy or gateway can translate requests for non-HTTP schemes,
enabling the use of HTTP to interact with non-HTTP services.
So, if you're proxying HTTP, it should be "http", and if you're proxying HTTPS, it should be "https".
Reading again, I can see that I may have had the sense of the question the wrong way around (I was thinking of an HTTP/1 client and an HTTP/2 server). But the above two quotes are still the relevant ones. You don't put :scheme in an HTTP/1 header field; it forms part of the URI that you place in the message start line.
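As a concrete sketch (host and path are placeholders), an HTTP/2 request carrying :scheme = https, :authority = www.example.com and :path = /path/page/ would typically be forwarded to an HTTP/1.1 backend as:
GET /path/page/ HTTP/1.1
Host: www.example.com
X-Forwarded-Proto: https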

When Authorization header is present in the request, it's always a Cache Miss

When the Authorization header is present in the inbound request, it's always a cache miss. My requirement is that ATS (Apache Traffic Server) should treat the Authorization header like any other header: it should not cause a cache miss, and it should be forwarded to the upstream service. How can I achieve this?
This may sound insecure, but I have a specific use case for it: the cache is for internal use and access to it is controlled by other means.
I tried this
As per the official documentation
By default, Traffic Server does not cache objects with the following
request headers:
Authorization
Cache-Control: no-store
Cache-Control: no-cache
To configure Traffic Server to ignore this request header:
Edit proxy.config.http.cache.ignore_client_no_cache in records.config:
CONFIG proxy.config.http.cache.ignore_client_no_cache INT 1
Run the command traffic_ctl config reload to apply the configuration changes.
but no luck.
If your origin returns a Cache-Control header with the public directive (for instance, Cache-Control: max-age=60,public) or with the s-maxage directive (for instance, Cache-Control: s-maxage=60), ATS should start caching the object. The relevant HTTP RFC:
https://www.rfc-editor.org/rfc/rfc2616#section-14.8
When a shared cache (see section 13.7) receives a request
containing an Authorization field, it MUST NOT return the
corresponding response as a reply to any other request, unless one
of the following specific exceptions holds:
1. If the response includes the "s-maxage" cache-control
directive, the cache MAY use that response
...
3. If the response includes the "public" cache-control directive,
it MAY be returned in reply to any subsequent request.
Similarly, you could also use the header_rewrite plugin to remove the Authorization header from the request, or to add public/s-maxage.
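For instance, a minimal header_rewrite sketch (untested; check the operator and hook names against the plugin's documentation) that strips the header on its way in:
# Runs when the client request headers are read, before the cache lookup
cond %{READ_REQUEST_HDR_HOOK}
rm-header Authorization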
Actually this https://docs.trafficserver.apache.org/en/latest/admin-guide/configuration/cache-basics.en.html#configuring-traffic-server-to-ignore-www-authenticate-headers did the trick for me.
The following instructions turned out to apply to the Authorization header as well, not just the WWW-Authenticate header. They need to update the documentation.
Configuring Traffic Server to Ignore WWW-Authenticate Headers
By default, Traffic Server does not cache objects that contain WWW-Authenticate response headers. The WWW-Authenticate header contains authentication parameters the client uses when preparing the authentication challenge response to an origin server.
When you configure Traffic Server to ignore origin server WWW-Authenticate headers, all objects with WWW-Authenticate headers are stored in the cache for future requests. However, the default behavior of not caching objects with WWW-Authenticate headers is appropriate in most cases. Only configure Traffic Server to ignore server WWW-Authenticate headers if you are knowledgeable about HTTP 1.1.
To configure Traffic Server to ignore server WWW-Authenticate headers:
Edit proxy.config.http.cache.ignore_authentication in records.config.
CONFIG proxy.config.http.cache.ignore_authentication INT 1
Run the command traffic_ctl config reload to apply the configuration changes.

Request Safari web client to disregard HSTS

I've taken over a site that previously used HSTS, but because of some iframes I need to embed, I need to disable it. I'm able to intelligently redirect from one protocol to the other, but Safari, in particular, doesn't want to disregard its HSTS cache.
In this question (Is it possible to ask your users to clear their HTTP Strict Transport Security (HSTS) for your site?) and on other sites, I've seen that I can request browsers to remove my site from their HSTS cache by sending the following header:
Strict-Transport-Security: max-age=0
However, Safari doesn't seem to care about that. On a coworker's computer, which has the site in its HSTS cache, receiving that header is not preventing it from automatically redirecting to https.
Anyone know a way to tell Safari to disregard HSTS?
It could be set on the base (apex) domain.
So if you are looking at www.example.com, then maybe the policy has been published from example.com with the includeSubDomains option, so that it affects all subdomains (including the www subdomain).
If so, the answer is similar: publish this header from the base domain, and make sure you visit the base domain (even if it just redirects to the main domain).
Strict-Transport-Security: max-age=0; includeSubDomains
Also check the HSTS preload lists for the base domain.
It would also be worth looking through the web config and any scripts or dynamic parts of the website (e.g. PHP, Java servlets, etc.) to make sure something is not still setting this header, for example only when you visit a certain page.
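Assuming the site runs Apache with mod_headers, the clearing header could be published from the base domain in the same style as the configuration in the first question above:
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"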

Firefox Fonts via CDN with NGINX - Access-Control-Allow-Origin Header

Firefox does not allow cross-domain fonts, it seems. In order to serve fonts from a domain other than the web page's, the font files need to be served with an Access-Control-Allow-Origin header. I am currently achieving this via NGINX like this:
location ~* \.(ttf|woff|eot|otf)$ {
add_header Access-Control-Allow-Origin *;
expires 8d;
}
This is working perfectly; however, I wanted to know what the proper value(s) for the header would be if I did not want to use *. Would it be the subdomain I'm using for the CDN? The domain for the site? How would I specify multiple values?
It needs to be the domain(s) from which you are requesting the resources.
Let's say you use the font on http://example.com; then add Access-Control-Allow-Origin: http://example.com. The header grammar nominally allows a space-separated list of origins, but in practice browsers only honour a single origin (or *), so multiple domains cause issues. In that case you can programmatically read the Origin header of the request, check it against some whitelist, and respond with the same value in the Access-Control-Allow-Origin header. IMO, the latter is the best practice.
Additional Note
The value of the Access-Control-Allow-Origin header needs to consist of the scheme (e.g. http), the domain (e.g. example.com) and the port (only if it is not the default port).
W3C Spec
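In NGINX specifically, that whitelist approach can be done with a map block (the origins below are placeholders); add_header sends nothing when the mapped value is empty:
# In the http block: echo the Origin back only if it is whitelisted
map $http_origin $cors_origin {
    default "";
    "https://example.com"     $http_origin;
    "https://cdn.example.com" $http_origin;
}
# In the server block
location ~* \.(ttf|woff|eot|otf)$ {
    add_header Access-Control-Allow-Origin $cors_origin;
    add_header Vary Origin;
    expires 8d;
}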

Caching with SSL certification

I read that if the request is authenticated or secure, it won't be cached. We previously worked on our caching and are now planning to purchase an SSL certificate.
If caching cannot be done over an SSL connection, does that mean our work on caching is useless?
Reference: http://www.mnot.net/cache_docs/
Your reference is wrong. Content sent over HTTPS will be cached by modern browsers, but it obviously cannot be cached by intermediate proxies. See http://arstechnica.com/business/2011/03/https-is-great-here-is-why-everyone-needs-to-use-it-so-ars-can-too/ or https://blog.httpwatch.com/2011/01/28/top-7-myths-about-https/ for example.
You can use the Cache-Control: public header to allow a representation served over HTTPS to be cached.
While the document you refer to says "If the request is authenticated or secure (i.e., HTTPS), it won’t be cached.", it's within a paragraph starting with "Generally speaking, these are the most common rules that are followed [...]".
The same document goes into more details after this:
Useful Cache-Control response headers include:
public — marks authenticated responses as cacheable; normally, if HTTP authentication is required, responses are automatically private.
(What applies to HTTP with authentication also applies to HTTPS.)
Obviously, documents that actually contain sensitive information aimed only at the authenticated user should not be served with this header, since they really shouldn't be cached. However, using this header for items that are suitable for caching (e.g. common images and scripts) should improve the performance of your website (as expected for caching over plain HTTP).
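For instance, a shared script or image served over HTTPS might carry a header like this (lifetime illustrative):
Cache-Control: public, max-age=86400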
What will never happen with HTTPS is the caching of resources by intermediate proxy servers (between the client and your web server, at least on the external side, if you have a load balancer or similar). Some CDNs will serve content over HTTPS (assuming it's suitable for your system to trust those CDNs). In general, these proxy servers wouldn't fall under the control of your cache design anyway.
