I am working on a project to add an HSTS header to a web application. As prework for that, I have added a CSP header in report-only mode with the default-src https: directive. The intent is to assess the violations and decide whether adding the HSTS header is going to break any use case.
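For concreteness, this is roughly how the report-only policy is being set (a minimal sketch in Go; the /csp-report endpoint and the handler wiring are illustrative, not our actual setup):

```go
package main

import (
	"log"
	"net/http"
)

// withReportOnlyCSP adds a report-only policy: nothing is blocked, but the
// browser POSTs a JSON violation report for any non-HTTPS sub-resource.
func withReportOnlyCSP(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy-Report-Only",
			"default-src https:; report-uri /csp-report")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/csp-report", func(w http.ResponseWriter, r *http.Request) {
		// In practice, store the JSON body somewhere you can analyse later.
		log.Printf("CSP violation report from %s", r.RemoteAddr)
		w.WriteHeader(http.StatusNoContent)
	})
	mux.Handle("/", http.FileServer(http.Dir(".")))
	log.Fatal(http.ListenAndServe(":8080", withReportOnlyCSP(mux)))
}
```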
Questions:
Is this a worthwhile approach?
Which HSTS scenarios will we miss with this approach?
What are the recommended approaches, if they differ from the one I have described above?
I would have thought it was pretty obvious if you are using HTTPS only on your server. Do you have HTTPS set up? Do you have a redirect set up to force all HTTP requests to be redirected to HTTPS? These are questions that should be easy to answer by looking at your server config, and I don't think you need CSP to answer them. And if the answer to both is not "Yes", then why are you considering HSTS?
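For reference, the blanket redirect itself is usually trivial to add. A minimal sketch in Go (the port and the redirect status are illustrative, and most people would do this in their web server config instead):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Listen on plain HTTP and send every request to the HTTPS origin.
	// 301 is permanent; use 302/307 while you are still testing.
	redirect := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target := "https://" + r.Host + r.URL.RequestURI()
		http.Redirect(w, r, target, http.StatusMovedPermanently)
	})
	log.Fatal(http.ListenAndServe(":80", redirect))
}
```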
CSP support is not universal (though admittedly neither is HSTS), and with these things it's usually the older, less common browsers that break, since you'll presumably test the common ones anyway. So whether your approach will give you the confidence you need to proceed is debatable.
The one thing you should be aware of is the includeSubDomains flag. It can cause problems if it affects more servers than you intend it to, and CSP will not help here, as you will presumably only set up CSP on the servers you think will be affected. More info here: https://serverfault.com/questions/665234/problems-using-hsts-header-at-top-level-domain-with-includesubdomains.
Also be aware that, once HSTS is implemented, certificate errors can no longer be bypassed in the browser. Not that you should be doing that anyway, but it's another intentional effect of this header that not everyone knows about.
Note that the only way to resolve HSTS issues (e.g. if you discover you need HTTP after all), other than removing the header and waiting for the policy to expire, is to set the max-age back to zero and hope people visit your site over HTTPS to pick up this new policy and override the previous one. With Chrome it is possible to manually view and remove the policy, but that's not practical if you have any volume of visitors, and I'm not aware of any way to do this in the other browsers short of a full reinstall.
The best approach is to be fully aware of what HSTS is, and of the caveats above, then start with a low expiry and build it up slowly as long as you do not experience any issues.
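As a sketch of that ramp-up (Go middleware, with illustrative values): start with a max-age of a few minutes and only raise it once you are confident; setting it back to zero is the rollback described above.

```go
package middleware

import "net/http"

// HSTS adds Strict-Transport-Security to responses served over HTTPS.
func HSTS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Start small (here 5 minutes) and raise gradually, e.g. to a day,
		// a week, and eventually six months or more. max-age=0 is the
		// "undo" described above. Only add includeSubDomains once you are
		// sure every subdomain really serves HTTPS.
		w.Header().Set("Strict-Transport-Security", "max-age=300")
		next.ServeHTTP(w, r)
	})
}
```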
There is also the preload list, but I would stay well away from that until you've been running HSTS for at least six months with no issues.
I'm reading quite a bit about HTTP/2's server push and have also done some experimenting (at a beginner's level)...
Well, my question is: does it make sense to server-push WOFF2 web fonts (since not every browser uses them)? And is there a method to push the correct font (if it's not already in the cache)?
Zach points out how important it is to have a fast font-delivery solution, and CSS-Tricks (Chris Coyier) has a great method for doing it cache-aware...
Thank you!
david
Well, that's an interesting question alright. The answer is: no, you should not do this. But the reason is a little different than you might think...
For reasons that are a bit cryptic, fonts are always requested without credentials (basically cookies). For most browsers (Edge being the exception) this means the browser opens another connection for that request, and this is important because HTTP/2 pushes are tied to the connection. So if you push a resource on one connection and the browser goes to fetch that resource over another connection, it will not use the pushed copy (you do not push directly into the HTTP cache, as you might think).
This, and lots of other HTTP/2 Push trickiness and edge cases were discussed by Jake Archibald in his excellent HTTP/2 push is tougher than I thought article.
But it does beg the question of how you could decide which format to push even if this weren't an issue, or if you wanted to send different image formats, for example (which would be on the same connection). Other than looking at the User-Agent and guessing based on that, there is no way for you to know what the browser supports.
There is a new HTTP Client Hints header currently being proposed which aims to let the browser indicate device specifics. At the moment it is more concerned with image size and density, but in theory it could also cover the file formats that are supported.
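For what it's worth, if you still want to experiment, this is roughly what a conditional push looks like in Go. It's only a sketch: the font paths, certificate files and the crude User-Agent check are illustrative, and the credentialed-connection caveat above still applies.

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

func handler(w http.ResponseWriter, r *http.Request) {
	if pusher, ok := w.(http.Pusher); ok {
		// Crude, illustrative guess at WOFF2 support from the User-Agent;
		// there is no reliable negotiation for this, which is the problem.
		font := "/fonts/body.woff"
		if !strings.Contains(r.UserAgent(), "MSIE") {
			font = "/fonts/body.woff2"
		}
		// Even when Push succeeds, the browser may re-request the font on
		// a separate, uncredentialed connection and ignore the pushed copy.
		if err := pusher.Push(font, nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	// The stylesheet would reference the pushed font via @font-face.
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	// Push only works over HTTP/2, which in net/http means TLS.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```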
I am managing a shop that forces HTTPS on the register/login/account/checkout pages, but that's it, and I've been trying to convince people to force HTTPS on everything.
I know that it's recommended to use HTTPS everywhere, but I'm not sure why.
Are there any good reasons to keep part of the site on HTTP?
One good reason is that page performance has a massive impact on sales (there are lots of published studies), and SSL has a BIG impact on performance, particularly if it's not tuned right.
But running a mixed SSL and non-SSL site is full of pitfalls for the unwary...
Exactly which pages you put inside SSL has a big impact on security too, though. Suppose you send a login form over HTTP with a POST target that is HTTPS: a trivial analysis would suggest this is secure, but in fact an MITM could modify the login page to send the POST elsewhere, or inject some Ajax to fork a copy of the request off to a different location.
Further, with mixed HTTP and HTTPS you've got the problem of transferring sessions securely: the user fills their session-linked shopping basket outside the SSL site, then pays for it inside the SSL site. How do you prevent session fixation problems in the transition?
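(One common answer, for what it's worth, is to mint a brand-new session ID the moment the user lands on the HTTPS side and tie it to the same basket. A rough sketch in Go, where the in-memory basket store is purely illustrative:)

```go
package shop

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
	"sync"
)

// Purely illustrative in-memory basket store keyed by session ID.
var (
	mu      sync.Mutex
	baskets = map[string][]string{}
)

func newSessionID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// UpgradeSession runs on the HTTPS side when the user arrives from the
// plain-HTTP part of the site. The old ID (which may have been fixed or
// observed in transit) is retired, and a fresh Secure cookie is issued
// for the same basket.
func UpgradeSession(w http.ResponseWriter, r *http.Request) {
	old, err := r.Cookie("session")
	if err != nil {
		http.Error(w, "no session", http.StatusBadRequest)
		return
	}
	mu.Lock()
	basket := baskets[old.Value]
	delete(baskets, old.Value) // the old ID is now useless to an attacker
	id := newSessionID()
	baskets[id] = basket
	mu.Unlock()
	http.SetCookie(w, &http.Cookie{
		Name:     "session",
		Value:    id,
		Path:     "/",
		Secure:   true, // never sent over plain HTTP again
		HttpOnly: true,
	})
}
```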
Hence I'd only suggest running a mixed site if you've got really expert skills in HTTP - and since you're asking this question here, that rather implies you don't.
A compromise solution is to use SPDY. SPDY requires SSL but makes most sites (especially ones that have not been heavily performance-optimized) much faster. Currently it's not supported by MSIE and (last time I checked) is not enabled by default in Firefox, but it's likely to make up a large part of HTTP/2.0.
Using (good) CDNs over HTTPS also mitigates much of the performance impact of SSL.
There's really no need to use HTTPS on the whole website. Using HTTPS will cause the server to consume more resources, as it has to do extra work to encrypt and decrypt the connection, not to mention the extra handshake steps for negotiating algorithms etc.
If you have a heavy-traffic website, the performance hit can be quite big.
This will also mean slower response times than with plain HTTP.
You should really only use HTTPS on the parts of the site that actually need to be secure, such as whenever the user sends important information to your site, completes forms, logs in, or visits private parts of the site.
One other issue is if you use resources from non-secure URLs, such as images/scripts hosted elsewhere. If they are not available over HTTPS then your visitors will get a warning about an insecure connection.
You also need to realise that HTTPS data/pages will hardly ever get cached, which adds a further performance penalty.
I have a site that works very well when everything is in HTTPS (authentication, web services, etc.). If I mix HTTP and HTTPS it requires more coding (cross-domain problems).
I don't seem to see many websites that are entirely HTTPS, so I was wondering whether it's a bad idea to go about it this way?
Edit: Site is to be hosted on Azure cloud where Bandwidth and CPU usage could be an issue...
EDIT 10 years later: The correct answer is now to use HTTPS only.
You lose a lot of features with HTTPS (mainly related to performance):
Proxies cannot cache pages.
You cannot use a reverse proxy for performance improvement.
You cannot host multiple domains on the same IP address (at least not without SNI).
Obviously, the encryption consumes CPU.
Maybe that's not a problem for you, though; it really depends on the requirements.
HTTPS decreases server throughput, so it may be a bad idea if your hardware can't cope with it. You might find this post useful. This (academic) paper also discusses the overhead of HTTPS.
If you have HTTP requests coming from an HTTPS page you'll force the user to confirm the loading of insecure data. That's annoying on some websites I use.
This question and especially the answers are OBSOLETE. This question should be tagged: <meta name="robots" content="noindex"> so that it no longer appears in search results.
To make THIS answer relevant:
Google is now penalizing websites' search rankings when they fail to use TLS/HTTPS. You will ALSO be penalized in rankings for duplicate content, so be careful to serve a page EITHER as HTTP OR HTTPS, but NEVER BOTH (or use accurate canonical tags!).
Google is also aggressively flagging insecure connections, which has a negative impact on conversions by frightening off would-be users.
This is in pursuit of a TLS-only web/internet, which is a GOOD thing. TLS is not just about keeping your passwords secure — it's about keeping your entire world-facing environment secure and authentic.
The "performance penalty" myth is really just based on antiquated obsolete technology. This is a comparison that shows TLS being faster than HTTP (however it should be noted that page is also a comparison of encrypted HTTP/2 HTTPS vs Plaintext HTTP/1.1).
It is fairly easy and free to implement using LetsEncrypt if you don't already have a certificate in place.
If you DO have a certificate, then batten down the hatches and use HTTPS everywhere.
TL;DR, here in 2019 it is ideal to use TLS site-wide, and advisable to use HTTP/2 as well.
</soapbox>
If you're not seeing any side effects then you are probably okay for now, and might be happy not to create work where it isn't needed.
However, there is little reason to encrypt all your traffic; login credentials and other sensitive data certainly need it, but the rest may not. One of the main things you would be losing out on is downstream caching: your servers, the intermediate ISPs and users cannot cache HTTPS responses. This may not be completely relevant, as it sounds like you are only providing services. It completely depends on your setup, whether there is any opportunity for caching, and whether performance is an issue at all.
It is a good idea to use all-HTTPS - or at least provide knowledgeable users with the option for all-HTTPS.
If there are certain cases where HTTPS is completely useless and in those cases you find that performance is degraded, only then would you default to or permit non-HTTPS.
I hate running into pointlessly all-HTTPS sites that handle nothing that really requires encryption, mainly because they all seem to be 10x slower than every other site I visit. For example, most of the documentation pages on developer.mozilla.org force you to view them over HTTPS for no reason whatsoever, and they always take a long time to load.
I would like to set up an HTTP proxy on my work machine (no admin rights, Windows XP) that only allows access to a whitelist of URLs. What would be the easiest solution? I prefer open-source software if possible.
Squid seems to be the de facto proxy. This link describes how to set it up on a Windows box: http://www.ausgamers.com/features/read/2638752
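If installing Squid isn't practical without admin rights, a tiny user-space proxy is another route. A minimal sketch in Go (the whitelisted hostnames and port are illustrative, and it handles plain-HTTP proxying only, not CONNECT/HTTPS tunnelling):

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// Only requests to these hosts are forwarded; everything else gets a 403.
var allowed = map[string]bool{
	"example.com":     true,
	"www.example.com": true,
}

func proxy(w http.ResponseWriter, r *http.Request) {
	if !allowed[r.URL.Hostname()] {
		http.Error(w, "blocked by whitelist", http.StatusForbidden)
		return
	}
	r.RequestURI = "" // must be cleared before re-sending the request
	resp, err := http.DefaultTransport.RoundTrip(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	// Copy the upstream response back to the browser.
	for k, vv := range resp.Header {
		for _, v := range vv {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	// Point the browser's proxy settings at 127.0.0.1:3128.
	log.Fatal(http.ListenAndServe("127.0.0.1:3128", http.HandlerFunc(proxy)))
}
```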
Why not use the Content Advisor in IE? You can provide a list of approved sites, anything else is blocked. Or do you want pass-through functionality like a true proxy?
Content Advisor will ask for authorization every time a JavaScript function is called. At least that's my experience right now, and that's how I landed here, after hours of googling.
You are right; however, if the sites on the whitelist don't use JavaScript intensively, I would suggest trying that option first, because (and I'm an IT person) it's FAAAAAAAAR easier to set up Content Advisor than a proxy server. Google "noaccess.rat" and you'll come across articles that tell you how to set up IE using a whitelist approach.
Having said this, however, you should be fully aware that Content Advisor can easily be disabled, even without knowing the password. One of my users did it in no time. You can find this on Google as well.
Alex
One of my clients uses McAfee ScanAlert (i.e., HackerSafe). It basically hits the site with about 1,500 bad requests a day looking for security holes. Since it demonstrates malicious behavior, it is tempting to just block it after a couple of bad requests, but maybe I should let it exercise the UI. Is it a true test if I don't let it finish?
Isn't it a security flaw of the site to let hackers throw everything in their arsenal against the site?
Well, you should focus on closing holes, rather than trying to thwart scanners (which is a futile battle). Consider running such tests yourself.
It's good that you block bad requests after a couple of attempts, but you should let the scanner continue.
If you block it after 5 bad requests, you won't know whether the 6th request would have crashed your site.
EDIT:
I meant that some attacker might send only one request, but one similar to one of those 1,495 that you didn't test because you blocked the scanner, and that one request might crash your site.
Preventing security breaches requires different strategies for different attacks. For instance, it would not be unusual to block traffic from certain sources during a denial of service attack. If a user fails to provide proper credentials more than 3 times the IP address is blocked or the account is locked.
When ScanAlert issues hundreds of requests, which may include SQL injection (to name one attack), it certainly matches what the site code should consider "malicious behavior".
In fact, just putting UrlScan or eEye SecureIIS in place may deny many such requests, but is that a true test of the site code? It's the job of the site code to detect malicious users/requests and deny them. At what layer is the test valid?
ScanAlert presents in two different ways: the sheer number of malformed requests, and the variety of each individual request as a test. It seems like the two pieces of advice that emerge are as follows:
The site code should not try to detect malicious traffic from a particular source and block that traffic, because that is a futile effort.
If you do attempt such a futile effort, at least make an exception for requests from ScanAlert in order to test the lower layers (a rough sketch of such an exception follows below).
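A rough sketch of that second point in Go; the scanner address range here is a placeholder, so substitute the source addresses McAfee publishes:

```go
package blocking

import (
	"net"
	"net/http"
	"sync"
)

// Placeholder range; substitute the scanner's published source addresses.
var _, scannerNet, _ = net.ParseCIDR("203.0.113.0/24")

var (
	mu        sync.Mutex
	badCounts = map[string]int{}
)

const maxBadHits = 5

// RecordBadRequest is called whenever the application decides a request
// was malicious. Ordinary sources get blocked after a few strikes; the
// scanner is exempt so it can exercise every code path, while the lower
// layers (UrlScan etc.) still see all of its traffic.
func RecordBadRequest(r *http.Request) (block bool) {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	if ip == nil || scannerNet.Contains(ip) {
		return false
	}
	mu.Lock()
	defer mu.Unlock()
	badCounts[host]++
	return badCounts[host] > maxBadHits
}
```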
If it's not hurting the performance of the site, I think it's a good thing. If you had 1,000 clients of the same site all doing that, then yeah, block it.
But if the site was built for that client, I think it's fair enough that they do it.