How to point HTTP to an HTTPS subdomain in NGINX? (Spring)

I am working on a Spring Boot app that serves content for three types of users. All three "live" in the same application. I want to configure NGINX to 1) redirect all HTTP to HTTPS and 2) redirect traffic as follows:
http to https://www.example.com
http://b2b.example.com to https://b2b.example.com/b2b (ideally without showing the "/b2b"; this is where all the B2B Spring Boot endpoints listen)
So far this is my NGINX conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name example.com www.example.com;

    ssl on;
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_session_cache shared:SSL:10m;

    access_log ...;
    error_log ...;

    location / {
        proxy_pass http://localhost:5050;
        proxy_set_header Host $host;
        # rewrite redirects from http to https, e.g. /home
        proxy_redirect http:// https://;
    }
}

server {
    listen 443;
    server_name b2b.example.com;

    ssl on;
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_session_cache shared:SSL:10m;

    access_log ...;
    error_log ...;

    location / {
        proxy_pass http://localhost:5050/b2b;
        proxy_set_header Host $host;
        # rewrite redirects from http to https, e.g. /home
        proxy_redirect http:// https://;
    }
}
On the Spring Boot side, all B2B endpoints listen on a path starting with "/B2B". So, for example, the login page for these users is .../B2B/login. Right now, if I go to b2b.example.com I get redirected to b2b.example.com/B2B/login. What I want is for the browser to show "b2b.example.com/login" while displaying the "/B2B/login" page; that is, all the B2B pages should omit the "/B2B" part of the URL.
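For reference, one way to get that behavior is to have nginx add the prefix on the way in and strip it from redirects on the way out. This is only a sketch against the setup described above (the upstream address is the one from the question; note the app listens under /B2B, uppercase, while the config above proxies to lowercase /b2b):

server {
    listen 443;
    server_name b2b.example.com;

    # ssl_certificate, ssl_certificate_key, logs, etc. as in the blocks above

    location / {
        # /login on this host becomes /B2B/login upstream
        proxy_pass http://localhost:5050/B2B/;
        proxy_set_header Host $host;
        # strip the /B2B prefix from Location headers on the way back,
        # so the browser never sees it
        proxy_redirect ~*^(?:https?://[^/]+)?/B2B/(.*)$ https://b2b.example.com/$1;
    }
}

This only rewrites Location headers; links rendered inside the app's HTML will still contain /B2B unless the app is configured not to emit them.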

Related

Redirect all HTTP traffic to HTTPS seems to be impossible

I am posting this question because previous attempts have proved futile.
I have a Rails server using nginx, and I am trying to redirect all HTTP traffic to HTTPS.
Here is my nginx.conf file:
upstream backend {
    server unix:PROJECT_PATH/tmp/thin1.sock;
    server unix:PROJECT_PATH/tmp/thin2.sock;
    server unix:PROJECT_PATH/tmp/thin3.sock;
    server unix:PROJECT_PATH/tmp/thin4.sock;
    server unix:PROJECT_PATH/tmp/thin5.sock;
    server unix:PROJECT_PATH/tmp/thin6.sock;
    server unix:PROJECT_PATH/tmp/thin7.sock;
    server unix:PROJECT_PATH/tmp/thin8.sock;
}
server {
    listen 80 default_server;
    listen 443 default_server ssl;
    server_name app_name;

    ssl_certificate path_to_certificate_file.crt;
    ssl_certificate_key path_to_certificatefile.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    root PATH_TO_PUBLIC_FOLDER;

    access_log path_to_project/log/access.log;
    error_log path_to_project/log/error.log;

    client_max_body_size 10m;
    large_client_header_buffers 4 16k;

    location /ping {
        echo "pong";
        return 200;
    }

    # Cache static content
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|swf|wav)$ {
        expires max;
        log_not_found off;
    }

    # Status, local only (accessed via ssh+wget)
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # double slash removal
    set $test_uri $host$request_uri;
    if ($test_uri != $host$uri$is_args$args) {
        rewrite ^/(.*)$ /$1 break;
    }

    location / {
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$server_name$request_uri;
        }
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_redirect off;
        # Inform the backend we are on SSL
        proxy_set_header X-Forwarded-Proto https;
        # force timeouts if one of the backends dies
        proxy_next_upstream error timeout invalid_header http_502 http_503;
        # Set headers
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }

    error_page 500 502 503 504 /500.html;
}
The current configuration causes:
400 Bad Request: The plain HTTP request was sent to HTTPS port
You may notice the /ping location. That's there because the servers sit behind a GCE load balancer that performs a health check, and this is THE ONLY request I do not want to redirect. Everything else should be redirected to HTTPS.
Previous attempts:
server {
    listen 80;
    server_name app_name;

    location /ping {
        echo "pong";
        return 200;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
with the HTTPS server block the same as in the current config (and listen 80 default_server commented out). This caused a "too many redirects" error.
I also tried simply redirecting ALL traffic to HTTPS, including the health check. GCE expects a 200 response but gets a 301 instead, marking the machine as unhealthy and rendering the application useless.
I also tried ssl on; in the HTTPS server config, with the same result (400).
I also tried toggling config.force_ssl = true in the Rails project, to no avail. Every other solution I try fails too.
Has anyone else stumbled on this?
It seems the problem was not the nginx config but the certificates.
Installing a valid certificate let me create an HTTPS backend and health check. Everything is working fine now.
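For reference, the usual way to keep a plain-HTTP health check while redirecting everything else is to split the two ports into separate server blocks and exempt the health-check location from the redirect. A minimal sketch (using the core return directive instead of the third-party echo module):

server {
    listen 80;
    server_name app_name;

    # the load balancer's health check must keep getting a 200 over plain HTTP
    location = /ping {
        return 200 "pong";
    }

    # everything else goes to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}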

"Naked domain" unexpectedly closed the connection on my computer

I am experiencing ERR_CONNECTION_CLOSED on all web browsers for the naked domain of my website on my computer. I don't see this issue on any device other than my computer, and the www version loads fine as well.
I have tried clearing the browser history for the last 24 hours and deleting the cache and cookies. It didn't make any difference.
This is my nginx configuration.
upstream app_server {
    server unix:/run/gunicorn.sock fail_timeout=0;
}

server {
    server_name mydomain.com www.mydomain.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /assets/ {
        root /home/djangoadmin/v/myappname;
    }

    location /media/ {
        root /home/djangoadmin/myapp/myappname;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https; # <-
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://app_server;
            break;
        }
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mydomain.com www.mydomain.com;
    return 404; # managed by Certbot
}
Is this a device issue or something related to my nginx configuration? How do I fix it?
Finally figured it out! The reason was that my computer's /etc/hosts had an entry for the naked domain pointing to 127.0.0.1. Removing it fixed the issue.
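For anyone checking for the same problem, the offending line looks something like this (the domain is a placeholder):

# /etc/hosts: sends the naked domain to the local machine,
# where nothing is serving the site
127.0.0.1   mydomain.com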

Rundeck reverse proxy behind Nginx

I have configured a reverse proxy for Rundeck behind Nginx. Below is the Rundeck.conf, which is placed in /etc/nginx/sites-enabled:
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

## server configuration
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.pilot1 pilot1;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/pilot1.ci1.peapod.com-access.log timing;
    ## error_log /var/log/nginx/pilot1.ci1.peapod.com-error.log;

    # rewrite ^/$ /rundeck/menu/home redirect;
    rewrite ^/rundeck/?(/rundeck)?$ /rundeck/menu/home redirect;

    chunked_transfer_encoding on;
    client_max_body_size 0;

    location ^~ /rundeck/ {
        proxy_pass http://localhost:4440;
        proxy_read_timeout 900;
    }
}
The reverse proxy works fine when I browse and log in to Rundeck. But when I click log out, the redirect to the login page exposes port 4440, as below:
LOGIN ---> pilot1/rundeck redirects to pilot1/rundeck/menu/home (works fine)
LOGOUT ---> pilot1:4440/rundeck/user/loggedout
I do not want the port to be exposed. How do I fix this issue?
Here is what I had to do. Rundeck builds absolute URLs, including the logout redirect, from its configured server URL, so both nginx and Rundeck need changes.
In the NGINX config, under an appropriate 'server' section, set up a location:
location /rundeck/ {
    proxy_pass http://localhost:4440;
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Rundeck config:
sed -i "/^grails.serverURL/c grails.serverURL = ${RUNDECK_URL}" /etc/rundeck/rundeck-config.properties
sed -i "/^framework.server.url/c framework.server.url = ${RUNDECK_URL}" /etc/rundeck/framework.properties
sed -i '/^RDECK_JVM="$RDECK_JVM/ s/"$/ -Dserver.web.context=\/rundeck"/' /etc/rundeck/profile
where RUNDECK_URL should point to your NGINX IP (or DNS name), e.g. http://my-nginx.com/rundeck
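With that RUNDECK_URL (the host is the placeholder from above), the edited files end up containing:

# /etc/rundeck/rundeck-config.properties
grails.serverURL = http://my-nginx.com/rundeck

# /etc/rundeck/framework.properties
framework.server.url = http://my-nginx.com/rundeck

Rundeck then builds absolute links, including the logout redirect, against that URL instead of the internal host and port 4440.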

Serving two sites from one server with Nginx

I have a Rails app up and running on my server, and now I'd like to add another one.
I want Nginx to check what the request is for and split traffic based on the domain name.
Both sites have their own nginx.conf symlinked into sites-enabled, but I get an error starting nginx: nginx: [emerg] duplicate listen options for 0.0.0.0:80 in /etc/nginx/sites-enabled/bubbles:6
They are both listening on port 80, but for different things.
Site #1
upstream blog_unicorn {
    server unix:/tmp/unicorn.blog.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name walrus.com www.walrus.com;
    root /home/deployer/apps/blog/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @blog_unicorn;

    location @blog_unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://blog_unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
Site #2:
upstream bubbles_unicorn {
    server unix:/tmp/unicorn.bubbles.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name bubbles.com www.bubbles.com;
    root /home/deployer/apps/bubbles/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @bubbles_unicorn;

    location @bubbles_unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://bubbles_unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
The documentation says:
The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair.
It's also obvious that there can be only one default server.
And it also says:
A listen directive can have several additional parameters specific to socket-related system calls. They can be specified in any listen directive, but only once for the given address:port pair.
So, you should remove default and deferred from one of the listen 80 directives. The same applies to the ipv6only=on directive as well.
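Applied to the two sites above, that means something like this (keeping walrus.com as the default):

# site 1 (walrus.com) keeps the socket-level options
listen 80 default deferred;

# site 2 (bubbles.com) listens plainly on the same port
listen 80;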
I just hit this same issue, but the duplicate default_server directive was not the only cause of this message.
You can only use the backlog parameter on one of the listen directives for a given port.
Example
site 1:
server {
    listen 80 default_server backlog=2048;
    server_name www.example.com;

    location / {
        proxy_pass http://www_server;
    }
}
site 2:
server {
    listen 80; ## DO NOT DUPLICATE 'default_server backlog=2048;' HERE
    server_name blogs.example.com;

    location / {
        proxy_pass http://blog_server;
    }
}
I was having the same issue. I fixed it by modifying my /etc/nginx/sites-available/example2.com file. I changed the server block to
server {
    listen 443 ssl; # modified: was listen 80;
    listen [::]:443; # modified: was listen [::]:80;
    . . .
}
And in /etc/nginx/sites-available/example1.com I commented out listen 80 and listen [::]:80 because the server block had already been configured for 443.

block direct access on port 8080

I have an app running on a server, behind nginx, using unicorn.
If I access http://server.com I get the app, up and running... but I can still access the app on port 8080, like http://server.com:8080, only this time without the assets (which are being served by nginx).
How do I block direct access to port 8080 on my production server?
The server runs Ubuntu 12.04.
nginx.conf
upstream unicorn {
    server 127.0.0.1:8080;
}

server {
    listen 80 default deferred;
    # server_name example.com;
    root /home/deploy/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
Make unicorn and nginx communicate over a Unix domain socket. For nginx:
upstream unicorn {
    server unix:/path/to/socket fail_timeout=0;
}
Then pass '-l /path/to/socket' to unicorn, or alter your unicorn config file:
listen '/path/to/socket'
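Alternatively (a variation on the answer above, not part of it), if you'd rather keep a TCP socket, binding unicorn to the loopback interface only also blocks direct outside access, assuming nothing else forwards the port. In the unicorn config file:

# accept connections from the local machine only
listen "127.0.0.1:8080"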
