I was reading this article about Host header attacks:
https://crashtest-security.com/invalid-host-header/
It describes attacks that abuse override headers such as X-Forwarded-Host, X-Host, and X-Forwarded-Server, and it lists several ways to prevent Host header attacks. I am wondering whether any of these solutions is available in Spring Boot / Spring Security.
Possible solutions:
1. Use relative URLs as much as possible.
2. Validate Host headers.
3. Whitelist trusted domains.
4. Implement domain mapping.
5. Reject override headers.
6. Avoid hosting internal-only websites under a virtual host.
How can any of these be implemented in Spring Boot?
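Many of these mitigations are typically enforced at the reverse proxy in front of the application rather than inside the application framework itself. Purely as an illustration of items 2, 3 and 5 (a minimal sketch, not a Spring Boot feature; the trusted domain and backend port are assumptions), an Nginx front end could do the validation like this:

# Catch-all server: any request whose Host header does not match a
# whitelisted name below lands here and the connection is closed.
server {
    listen 80 default_server;
    return 444;
}

# Whitelisted host: only requests with Host: www.example.com reach the app.
server {
    listen 80;
    server_name www.example.com;              # assumed trusted domain

    location / {
        # Reject override headers by clearing any client-supplied values.
        proxy_set_header X-Forwarded-Host "";
        proxy_set_header X-Host "";
        proxy_set_header X-Forwarded-Server "";
        # Forward the validated Host to the backend (e.g. a Spring Boot app).
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;     # assumed application port
    }
}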
Related
I am building a website on server A (with a registered domain name) that people use to create and run their own "apps".
These "apps" are actually Docker containers running on server B; inside each container lives a small web app which can be accessed directly at:
http://IP_ADDR_OF_SERVER_B:PORT
The PORT is a random high port number that maps to the Docker container.
I have an SSL certificate working on server A, so the site works fine when accessed at:
https://DOMAIN_NAME_OF_SERVER_A
The problem is that I embed the "apps" in an iframe using the plain HTTP address above, so my browser (Chrome) refuses to load them and reports this error:
Mixed Content: The page at 'https://DOMAIN_NAME_OF_SERVER_A/xxx' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR_OF_SERVER_B:PORT/xxx'. This request has been blocked; the content must be served over HTTPS.
So, how should I deal with this issue?
I am new to full-stack development, and I'd appreciate it a lot if you could share some knowledge on how to build a healthy HTTPS website while solving this problem in a proper way.
Supplementary explanation
OK, I think I only gave the outline of the question above, so here are more details.
I see that the clean and straightforward fix is to serve the iframe requests over HTTPS as well; then the browser will no longer complain.
However, since all the "apps" are created and removed dynamically, it seems I would need to prepare a separate certificate for each of them.
Will a self-signed certificate work without the browser blocking it or complaining? Or is there a way to serve all the "apps" with a single SSL certificate?
Software environment
Server A: runs a Node.js website listening on port 5000, served behind an Nginx proxy_pass:
server {
    listen 80;
    server_name DOMAIN_NAME_OF_SERVER_A;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}

server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_A;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;
    ssl_session_timeout 5m;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}
Server B: runs Node.js apps listening on different random high ports such as 50055, assigned dynamically when the "apps" are created. (These apps actually run in Docker containers, though I don't think that matters.) Nginx can be run here if needed.
Server A and Server B talk to each other over the public Internet.
Solution
As all the answers point out, especially the one from @eawenden, I need a reverse proxy to achieve my goal.
In addition, I did a few more things:
1. Assigned a domain name to Server B so it can use a Let's Encrypt certificate.
2. Proxied a predefined URL path to a specific port.
So I set up a reverse proxy server using Nginx on Server B, which proxies all requests like:
https://DOMAIN_NAME_OF_SERVER_B/PORT/xxx
to
http://127.0.0.1:PORT/xxx
PS: the Nginx reverse proxy config on Server B:
server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_B;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.key;
    ssl_session_timeout 5m;

    rewrite_log off;
    error_log /var/log/nginx/rewrite.error.log info;

    location ~ ^/(?<port>\d+)/ {
        rewrite ^/\d+?(/.*) $1 break;
        proxy_pass http://127.0.0.1:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
Thus everything seems to be working as expected!
Thanks again to all the answerers.
I had a mixed content issue on dynamic requests:
add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
Adding this header resolved my issue on the Nginx server.
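For reference, here is a minimal sketch of where such a header could go (the server name, certificate paths and upstream are taken from the question's config, so treat them as assumptions). upgrade-insecure-requests asks the browser to fetch http:// subresources over https:// instead, which only helps if the upgraded URL actually serves HTTPS:

server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_A;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;

    # Ask the browser to upgrade insecure subresource requests
    # (images, iframes, XHR) from http:// to https://.
    add_header Content-Security-Policy "upgrade-insecure-requests";

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}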
The best way to do it would be to have a reverse proxy (Nginx supports them) that provides access to the docker containers:
A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.
- Source
Assign a domain name or just use the IP address of the reverse proxy and create a trusted certificate (Let's Encrypt provides free certificates). Then you can connect to the reverse proxy over HTTPS with a trusted certificate and it will handle connecting to the correct Docker container.
Here's an example of this type of setup geared specifically towards Docker: https://github.com/jwilder/nginx-proxy
The error message is pretty much telling you the solution.
This request has been blocked; the content must be served over HTTPS.
If the main page is loaded over HTTPS, then all the other page content, including the iframes, should also be loaded over HTTPS.
The reason is that insecure (non-HTTPS) traffic can be tampered with in transit, potentially being altered to include malicious code that alters the secure content. (Consider for example a login page with a script being injected that steals the userid and password.)
== Update to reflect the "supplemental information" ==
As I said, everything on the page needs to be loaded via HTTPS. Yes, self-signed certificates will work, but with some caveats: first, you'll have to tell the browser to allow them, and second, they're really only suitable for use in a development situation. (You do not want to get users in the habit of clicking through a security warning.)
The answer from #eawenden provides a solution for making all of the content appear to come from a single server, thus providing a way to use a single certificate. Be warned, reverse proxy is a somewhat advanced topic and may be more difficult to set up in a production environment.
An alternative, if you control the servers for all of the iframes, may be to use a wildcard SSL certificate. This would be issued, for example, for *.mydomain.com, and would work for www.mydomain.com, subsite1.mydomain.com, subsite2.mydomain.com, etc., i.e. for any single-level subdomain of mydomain.com.
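As a rough sketch of how that could look with Nginx (the subdomain names, certificate paths and backend ports here are illustrative assumptions), two virtual hosts can share the one wildcard certificate:

server {
    listen 443 ssl;
    server_name subsite1.mydomain.com;
    # The same *.mydomain.com wildcard certificate is reused by both hosts.
    ssl_certificate /etc/nginx/ssl/wildcard.mydomain.com.cer;
    ssl_certificate_key /etc/nginx/ssl/wildcard.mydomain.com.key;

    location / {
        proxy_pass http://127.0.0.1:50055;   # first app's port (assumption)
    }
}

server {
    listen 443 ssl;
    server_name subsite2.mydomain.com;
    ssl_certificate /etc/nginx/ssl/wildcard.mydomain.com.cer;
    ssl_certificate_key /etc/nginx/ssl/wildcard.mydomain.com.key;

    location / {
        proxy_pass http://127.0.0.1:50056;   # second app's port (assumption)
    }
}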
Like others have said, you should serve all the content over HTTPS.
You could use an HTTP proxy to do this. This means that server A will handle the HTTPS connection and forward the request to server B over HTTP. Server B will then send the response back to server A, which will update the response headers to make it look like the response came from server A itself and forward the response to the user.
You would make each of your apps on server B available on a url on domain A, for instance https://www.domain-a.com/appOnB1 and https://www.domain-a.com/appOnB2. The proxy would then forward the requests to the right port on server B.
For Apache this would mean two extra lines in your configuration per app:
ProxyPass "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
ProxyPassReverse "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
The first line will make sure that Apache forwards this request to server B and the second line will make sure that Apache changes the address in the HTTP response headers to make it look like the response came from server A instead of server B.
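Since the front end in the question is Nginx rather than Apache, a rough equivalent of those two lines (a sketch only, using the answer's placeholder backend address, added inside the existing server block for DOMAIN_NAME_OF_SERVER_A) would be:

location /fooApp/ {
    # Forward requests under /fooApp/ to server B, stripping the prefix.
    proxy_pass http://IP_ADDR_OF_SERVER_B:PORT/;
    # Rewrite Location headers in redirects back to server A's path
    # (a rough analogue of ProxyPassReverse).
    proxy_redirect http://IP_ADDR_OF_SERVER_B:PORT/ /fooApp/;
}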
As you have a requirement to make this proxy dynamic, it might make more sense to set this proxy up inside your NodeJS app on server A, because that app probably already has knowledge about the different apps that live on server B. I'm no NodeJS expert, but a quick search turned up https://github.com/nodejitsu/node-http-proxy which looks like it would do the trick and seems like a well maintained project.
The general idea remains the same though: You make the apps on server B accessible through server A using a proxy, using server A's HTTPS set-up. To the user it will look like all the apps on server B are hosted on domain A.
After you set this up you can use https://DOMAIN_NAME_OF_SERVER_A/fooApp as the URL for your iframe to load the apps over HTTPS.
Warning: You should only do this if you can route this traffic internally (server A and B can reach each other on the same network), otherwise traffic could be intercepted on its way from server A to server B.
I have a problem: when I start Chrome and try to visit an HTTPS page, I see this error:
Error 1013 Ray ID: 2cb7c5989ac13dd8 • 2016-08-01 08:03:08 UTC
HTTP hostname and TLS SNI hostname mismatch
You've requested an IP address that is part of the CloudFlare network. The request's Host header does not match the request's TLS SNI Host header.
What is this? Where does CloudFlare come into it, and how can I remove this error from my browser?
SNI stands for Server Name Indication; essentially it allows a server to present more than one SSL certificate from a single IP address. Most modern browsers send the SNI hostname as part of the TLS handshake. CloudFlare's Free Universal SSL makes use of SNI.
Similarly, the Host header lets a server know which hostname it should serve. The Host header is what makes virtual hosting possible, serving more than one website per IP address.
So these two clearly need to match: the SNI hostname and the hostname in the Host header must be identical. When they don't, CloudFlare presents an Error 1013 HTTP hostname and TLS SNI hostname mismatch.
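To illustrate the SNI side (an illustrative sketch, not part of the CloudFlare setup; the hostnames and certificate paths are assumptions), a server can hold two certificates on the same IP address and port and pick one based on the SNI name the browser sends:

server {
    listen 443 ssl;
    server_name site-one.example.com;     # selected when the SNI name matches
    ssl_certificate /etc/nginx/ssl/site-one.example.com.cer;
    ssl_certificate_key /etc/nginx/ssl/site-one.example.com.key;
}

server {
    listen 443 ssl;
    server_name site-two.example.com;     # a different certificate, same IP and port
    ssl_certificate /etc/nginx/ssl/site-two.example.com.cer;
    ssl_certificate_key /etc/nginx/ssl/site-two.example.com.key;
}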
There may be a proxy in your local network that is stripping out or mangling the SNI information, thus causing the mismatch. This can happen as a result of a firewall or an intermediary web server.
There is a tool here to check if your SNI header and Host header match. Simply visit the linked site and check the "SNI information:" field states "cc.dcsec.uni-hannover.de".
If you require support for browsers which do not support SNI, CloudFlare's Pro, Business and Enterprise plans have legacy support for non-SNI browsers; but it would make sense to contact your network administrator to see why SNI requests are being malformed.
The tags on this question suggest you are using Firefox on Windows 10 which should support SNI; therefore you should reach out to your network team to see why they are malforming SNI requests.
Sentry needs a value for the location where it is installed: SENTRY_URL_PREFIX. The problem is that I want to log errors to one installation from two different LANs.
Let's say the server running Sentry has the IPs 192.168.1.1 and 10.0.0.1, and I want to log errors from 192.168.1.2 and from 10.0.0.2.
The connection between the Sentry server (machine) and the machines that need to do the logging is fine, but I need to "switch" the url-prefix setting in Sentry for it to work with one LAN or the other: if I set SENTRY_URL_PREFIX to http://10.0.0.1 it works and receives logs from that LAN, but all requests from the other LAN go wrong (a direct HTTP request for the frontend gets an HTTP 400 result, for instance), and of course the other way around.
Details:
I'm running sentry 8.1.2 in docker (https://hub.docker.com/_/sentry/)
Interestingly enough, I read this in the changelog
SENTRY_URL_PREFIX has been deprecated, and moved to system.url-prefix inside of config.yml or it can be configured at runtime.
Starting Sentry for the first time actually still seems to ask for the prefix, and changing the prefix does seem to affect which connections work, so to me it looks like this is the culprit. It could be that behind the scenes this is passed on to the above-mentioned system.url-prefix, so that that setting is the real problem, but I'm not sure.
Does anyone know how to run one server on two adresses?
The main issue is of course sending the errors; it's not a big deal to have the web interface only reachable from one IP.
While I'm not really sure how it is supposed to work, I could not get any logs to a server with a different system.url-prefix than the one I used in the call.
From twitter and the sentry group I gather that it does need the correct host for the interface, while it shouldn't really break stuff otherwise. Not sure how to unbreak it though.
The solution for me was to address the Sentry install from a single point. Because we need the separate NICs, we do this by putting a simple Nginx reverse proxy in front of the set-up and letting it set the Host header. I used the default https://hub.docker.com/_/nginx/ image and this config:
events {
    worker_connections 1024; ## Default: 1024
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://$internalip-sentry-knows;
            proxy_redirect http://$internalip-sentry-knows $externalip-we-use;
            proxy_set_header Host $internalip-sentry-knows;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
The port is exposed on both NICs, so this listens on both addresses we want to use and proxies requests straight through to Sentry.
Disclaimer: the interface speaks of SENTRY_URL_PREFIX, but that's deprecated, so I use system.url-prefix; for all practical purposes in this answer they are interchangeable. This could be a source of confusion for someone who doesn't know exactly what goes where. :)
References:
* twitter conversation with Matt from sentry
* Groups response from David from sentry
I am using
Apache
Ruby and Ruby on Rails 3
Mac OS X "Snow Leopard"
and I would like to use HTTPS on localhost for my domains and sub-domains.
I have already set everything up (correctly, I think):
I generated a wildcard certificate for my domains and sub-domains (example: *.sitename.com)
I have set up name-based virtual hosts in the httpd.conf file listening on ports :443 and :80
My browser accepts the certificates (though it warns me that they aren't trusted) and I can access pages over HTTPS.
From the official Apache guide I read that this is not possible using name-based virtual hosts, but I have also read of people who made it work somehow (how?! I don't understand...).
So, is it possible or not to use HTTPS on localhost for multiple domains and sub-domains? If so, what must I do or check to get it working?
UPDATE for @sarnold
typhoeus appears to use libcurl, and libcurl appears to support SNI -- is your version of libcurl new enough to support SNI? Does typhoeus know how to enable it? (Do clients of libcurl need to "enable" SNI themselves?)
I think so, because I can access all sub-domains over HTTPS and libcurl should be recent enough:
curl --version
curl 7.21.2 (x86_64-apple-darwin10.5.0) libcurl/7.21.2 OpenSSL/1.0.0c zlib/1.2.5 libidn/1.19
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: IDN IPv6 Largefile NTLM SSL libz
# Typhoeus request
Typhoeus::Request.get("https://<sub_domain_name>.<domain_name>.com/")
How can I check whether clients of libcurl need to "enable" SNI themselves?
The techniques for doing name-based virtual servers with SSL/TLS aren't great choices, but the Server Name Indication extension allows browsers to request a specific site by name, allowing different certificates to be used with different sites. Not all browsers support SNI yet.
Though one might ask what value there is in having multiple certificates if they are all served out of the same process with the same privileges, anything to improve the user's TLS experience has to be worth the hassle. :)
Let's say I have a domain, js.mydomain.com, and it points to some IP address, and some other domain, requests.mydomain.com, which points to a different IP address. Can a .js file downloaded from js.mydomain.com make Ajax requests to requests.mydomain.com?
How exactly do modern browsers enforce the same-domain policy?
The short answer to your question is no: for AJAX calls, you can only access the same hostname (and port / scheme) as your page was loaded from.
There are a couple of work-arounds: one is to create a URL in foo.example.com that acts as a reverse proxy for bar.example.com. The browser doesn't care where the request is actually fulfilled, as long as the hostname matches. If you already have a front-end Apache webserver, this won't be too difficult.
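As an illustration of that workaround (a sketch only, shown with Nginx rather than Apache; the hostnames are the ones from the question), the page's own origin can proxy a path to the other host so the browser only ever talks to one origin:

server {
    listen 80;
    server_name js.mydomain.com;                    # the origin the page is loaded from

    location /requests/ {
        # To the browser these responses come from js.mydomain.com,
        # so the same-origin policy is satisfied for AJAX calls to /requests/...
        proxy_set_header Host requests.mydomain.com;
        proxy_pass http://requests.mydomain.com/;   # the /requests/ prefix is stripped
    }
}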
Another alternative is AJAST, which works by inserting script tags into your document. I believe that this is how Google APIs work.
You'll find a good description of the same origin policy here: http://code.google.com/p/browsersec/wiki/Part2
This won't work because the host name is different. Two pages are considered to be from the same origin if they have the same host, protocol and port.
From Wikipedia on the same origin policy:
The term "origin" is defined using the
domain name, application layer
protocol, and (in most browsers) TCP
port of the HTML document running the
script. Two resources are considered
to be of the same origin if and only
if all these values are exactly the
same.