Request Safari web client to disregard HSTS - macOS

I've taken over a site that previously used HSTS, but because of some iframes I need to embed, I need to disable it. I'm able to intelligently redirect from one protocol to the other, but Safari, in particular, doesn't want to disregard its HSTS cache.
In this question (Is it possible to ask your users to clear their HTTP Strict Transport Security (HSTS) for your site?) and on other sites, I've seen that I can request browsers to remove my site from their HSTS cache by sending the following header:
Strict-Transport-Security: max-age=0
However, Safari doesn't seem to care about that. On a coworker's computer, which has the site in its HSTS cache, receiving that header is not preventing it from automatically redirecting to https.
Anyone know a way to tell Safari to disregard HSTS?

It could be set at the base domain.
So if you are looking at www.example.com, then maybe the policy has been published from example.com with the includeSubDomains option, so it affects all subdomains (including the www subdomain).
If so, the answer is similar: publish this header from the base domain and make sure you visit the base domain (even if it just redirects to the www site).
Strict-Transport-Security: max-age=0; includeSubDomains
Also check the preload lists for the base domain.
It would also be worth looking through the web server config and any scripts or dynamic parts of the website (e.g. PHP, Java servlets, etc.) to make sure nothing is still setting this header when you visit a certain page, for example.
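One quick way to confirm what each host is actually sending back is a small script along these lines (a rough sketch using Python's requests library; example.com stands in for the real domain):

# Sketch: print the Strict-Transport-Security header (if any) returned by the
# base domain and the www subdomain. Replace example.com with the real domain.
import requests

for url in ("https://example.com", "https://www.example.com"):
    r = requests.get(url, allow_redirects=False)
    print(url, "->", r.headers.get("Strict-Transport-Security", "no HSTS header"))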

Related

How to make clients request over HTTPS without HSTS preload?

If I request our website using HTTP http://example.com, the response is 301 Moved Permanently with the Location header set to https://example.com - which, of course, is vulnerable to a man-in-the-middle (MITM) attack.
Is there not a way to just respond to the browser with something along the lines of "make the same request again, but this time over HTTPS" instead of explicitly telling the browser the URL?
I was expecting to find this kind of solution on Troy Hunt's blog post but the only suggestion there is to use HSTS preload (i.e. register our site with Google), which we do not want to do.
HTTP Strict-Transport-Security (HSTS) allows you to send an HTTP header to say “next time you use this domain - make sure it’s over HTTPS, even if the user types http:// or uses a link beginning http://”.
In Apache it is set with the following config:
Header always set Strict-Transport-Security "max-age=60;"
This sends the message telling the browser to remember this header for 60 seconds. You should increase this as you confirm there are no issues. A setting of 63072000 (2 years) is often recommended.
So this is more secure than a redirect as it happens automatically without needing an insecure HTTP request to be sent which could be intercepted, read and even changed on an insecure network.
For example, let’s imagine you have logged on to your internet banking previously on your home WiFi, the browser has remembered the HSTS setting, and then you visit your local coffee shop. Here you try to connect to the free WiFi but actually connect to a hacker’s WiFi instead. If you go to your internet banking with an HTTP link, a bookmark or by typing the URL, then HSTS will kick in and you will go over HTTPS straight away, and the hacker cannot decrypt your traffic (within reason).
So. All is good. You can also add the includeSubDomains attribute:
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
Which adds extra security.
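If you are not on Apache, the same redirect-plus-header combination can be produced from application code instead. A rough sketch, assuming a Flask app purely for illustration, with the same short max-age as above:

# Sketch: redirect plain HTTP to HTTPS and add a short-lived HSTS header.
# Flask is only used for illustration; any server or framework can do this.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    if request.is_secure:
        response.headers["Strict-Transport-Security"] = "max-age=60"
    return response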
The one flaw with HSTS is it requires that initial connection to load this HTTP header and protect you in future. It also times out after the max-age time. That’s where preload comes in. You can submit your domain to the browsers and they will load this domain’s HSTS setting into the browser code and make this permanent so even that first connection is secure.
However, I really don’t like preload, to be honest. I just find the fact that it’s out of your control dangerous. If you discover some domain is not using HTTPS (e.g. http://blog.example.com or http://intranet.example.com or http://dev.example.com), then as soon as the preload comes into effect - BANG - you’ve forced yourself to upgrade these, and quickly, as they are inaccessible until then. Removing a domain from the browsers’ preload lists takes months at least, and few can live with that downtime. Of course you should test this, but that requires going to https://example.com (instead of https://www.example.com) and using includeSubDomains to fully replicate what preload will do, and not everyone does that. There are many, many examples of sites getting this wrong.
You’ve also got to ask what you are protecting against and what risks you are exposing yourself to. With an http:// link, a hacker intercepting the connection could get access to cookies (which the site can protect against by using the Secure attribute on cookies) and possibly intercept the traffic by keeping you on http:// instead of upgrading to https:// (which is mostly mitigated with HSTS and is increasingly flagged by the browser anyway). Remember that even on an attacker’s WiFi network the green padlock means the connection is secure (within reasonable limitations). So as long as you look for this (and your users do, which is more difficult, I admit) the risks are reasonably small. This is why the move to HTTPS everywhere and then HTTPS by default is so important. So for most sites I think HSTS without preload is sufficient, and it leaves the control with you, the site owner.

Set-Cookie (from AJAX) header not setting cookie in browser

I have a single page application that's using a web API. When a user logs in, I would want the server to set a cookie for further identification.
AJAX requests are obviously plain HTTP requests, just with a small identifying header. As far as I know, the browser should not differentiate between XMLHttpRequest and normal requests, especially since I'm using a relatively old version of Firefox.
App URL: http://sub.domain.com/app
API Request: http://sub.domain.com/service/method
The domain and subdomain are exactly the same. There's no attempt to change another domain's cookies.
The Set-Cookie header is recognized by the browser's dev tools, so the response itself looks fine. Even after digging all over SO and Google, I haven't found one logical explanation for why the cookie isn't being set.
I tried a bunch of different combinations of Set-Cookie attributes. I figured the most stable syntax is key=value; expires=date; domain=.domain.com, and that's what I'm currently using.
P.S.
I am using an actual domain and subdomain, NOT localhost.
Using a relatively old and stable version of Firefox.
I think your issue is quite well explained here:
How does a browser handle cookie with no path and no domain
For Set-Cookie without a Path attribute, RFC 6265 states that:
If the server omits the Path attribute, the user agent will use the "directory" of the request-uri's path component as the default value.
So from your server you need to set Path=/ as well, to make sure the cookie is accessible across the whole site.
Edit-1
Also make sure that your webpage and API both run on the same protocol, because if the cookie is marked Secure then it will not be sent over a plain HTTP URL.
The problem can occur due to two reasons:
The Set-Cookie header is returned from an HTTPS API request while the website itself is served over HTTP.
"Path" attribute is not set so it defaults to the API URI's path (as explained by Tarun Lalwani).
The syntax that ended up working was:
Set-Cookie: test=working; Domain=.domain.com; Path=/; Secure
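For reference, a server-side sketch that produces that header (Flask is assumed here purely for illustration; .domain.com and the route are placeholders):

# Sketch: set the cookie with explicit Domain, Path and Secure attributes.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/service/method")
def method():
    resp = jsonify(ok=True)
    resp.set_cookie("test", "working", domain=".domain.com", path="/", secure=True)
    return resp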

Caching with SSL certification

I read that if a request is authenticated or secure, it won't be cached. We previously worked on our caching and are now planning to purchase an SSL certificate.
If caching cannot be done over an SSL connection, does that mean our work on caching is useless?
Reference: http://www.mnot.net/cache_docs/
Your reference is wrong. Content sent over HTTPS will be cached by modern browsers, but it obviously cannot be cached by intermediate proxies. See http://arstechnica.com/business/2011/03/https-is-great-here-is-why-everyone-needs-to-use-it-so-ars-can-too/ or https://blog.httpwatch.com/2011/01/28/top-7-myths-about-https/ for example.
You can use the Cache-Control: public header to allow a representation served over HTTPS to be cached.
While the document you refer to says "If the request is authenticated or secure (i.e., HTTPS), it won’t be cached.", it's within a paragraph starting with "Generally speaking, these are the most common rules that are followed [...]".
The same document goes into more details after this:
Useful Cache-Control response headers include:
public — marks authenticated responses as cacheable; normally, if HTTP authentication is required, responses are automatically private.
(What applies to HTTP with authentication also applies to HTTPS.)
Obviously, documents that actually contain sensitive information only aimed for the authenticated user should not be served with this header, since they really shouldn't be cached. However, using this header for items that are suitable for caching (e.g. common images and scripts) should improve the performance of your website (as expected for caching over plain HTTP).
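For instance, here is a minimal sketch of marking a non-sensitive asset served over HTTPS as cacheable (Flask is assumed purely for illustration; the route, filename and max-age are placeholders):

# Sketch: allow browsers to cache a non-sensitive asset served over HTTPS.
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/assets/logo.png")
def logo():
    resp = send_file("logo.png")
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp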
What will never happen with HTTPS is the caching of resources by intermediate proxy servers (between the client and your web-server, at least the external part, if you have a load-balancer or similar). Some CDNs will serve content over HTTPS (assuming it's suitable for your system to trust these CDNs). In general, these proxy servers wouldn't fall under the control of your cache design anyway.

Use of JSON-P with Sensitive Information

I have a secured website that requires a user to authenticate, and would like to return sensitive data to the client from my API via JSON-P so that I can get around ajax cross-domain issues. I own both the client and server, so I am not concerned about the security from the client perspective (i.e. reading malicious js from the server).
I have been researching ways to secure the JSON-P to prevent Cross-Site Request Forgery, but haven't been able to clearly determine whether checking the Referer is a foolproof method for securing the data. As I understand it, the Referer header cannot be spoofed in this situation because the calls would be from javascript, and Headers cannot be changed. Is this a correct assumption?
I would like some clear-cut examples of why or why not checking the Referer would/wouldn't work to secure JSON-P.
Thanks!
EDIT:
Just to clarify - the JSON-P is secured via Spring Security, so it wouldn't only be secured by the Referer header. I am mostly concerned here about session hijacking...
JSONP URLs can be called using a normal curl command. The HTTP Referer header can easily be forged.
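For example, any HTTP client outside the browser can send whatever Referer it likes (a sketch using Python's requests library instead of curl; the endpoint URL and callback name are placeholders):

# Sketch: calling a JSONP endpoint with a forged Referer header.
import requests

r = requests.get(
    "https://api.example.com/data?callback=cb",            # placeholder endpoint
    headers={"Referer": "https://trusted.example.com/"},    # forged value
)
print(r.text)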
I would like some clear-cut examples of why or why not checking the Referer would/wouldn't work to secure JSON-P.
Referer is not guaranteed to be sent, so:
if you require it to be present and match a trusted site, you will be breaking the app for everyone whose browser or network setup doesn't send it;
if you permit it to be absent to get around that, you open yourself to attack not just for those users, but for everyone where the attacker can induce Referer not to be sent (most notably, from HTTPS pages);
also, to behave properly with proxies you would have to no-cache all your responses (or add Vary: Referer, but that won't work right in IE).
Referrer-checking is a weak and problematic method which sometimes sees use as a desperate last measure... it's not something you should build when you've got the choice. If you control both servers you can easily include a request token on one page that gets recognised by the script on the other.
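A rough sketch of that request-token idea, assuming a shared secret between the two servers (the names, token format and expiry window are illustrative only, not a drop-in implementation):

# Sketch: a signed, time-limited request token shared between the page server
# and the JSONP API server. SHARED_SECRET and the token format are placeholders.
import hashlib
import hmac
import time

SHARED_SECRET = b"shared-between-both-servers"

def make_token(user_id: str) -> str:
    ts = str(int(time.time()))
    sig = hmac.new(SHARED_SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{ts}:{sig}"

def check_token(token: str, max_age: int = 300) -> bool:
    try:
        user_id, ts, sig = token.split(":")
        age = time.time() - int(ts)
    except ValueError:
        return False
    expected = hmac.new(SHARED_SECRET, f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and age < max_age

The page server would drop make_token(user_id) into the JSONP script URL it renders, and the API server would call check_token() on the incoming query parameter before returning anything sensitive. An attacker's page cannot read the token out of your page (same-origin policy), so it cannot construct a valid JSONP URL.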

Is a change required only in the code of a web application to support HSTS?

If I want a client to always use an HTTPS connection, do I only need to include the headers in the code of the application, or do I also need to make a change on the server? Also, how is this different from simply redirecting a user to an HTTPS page every single time they attempt to use HTTP?
If you just have HTTP -> HTTPS redirects, a client might still try to POST sensitive data to you (or GET a URL that has sensitive data in it) - this would leave that data exposed publicly. If it knew your site used HSTS then it would not even try to hit it via HTTP, and so that exposure is eliminated. It's a pretty small win IMO - the bigger risk is the vast number of root CAs that everyone trusts blindly thanks to policies at Microsoft, Mozilla, Opera, and Google.
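As for the first part of the question: the header can come from either place, since the browser only sees the header and not where it was set. A directive in the web server config (like the Apache example further up this page) or a one-liner in application code both work. A minimal application-side sketch, assuming a Flask app purely for illustration:

# Sketch: emitting the HSTS header from application code rather than
# server config (Flask assumed for illustration).
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    response.headers["Strict-Transport-Security"] = "max-age=63072000; includeSubDomains"
    return response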
