Nginx hide forwarded port number [closed] - proxy

I'm trying to set up a simple static website, and I have an issue with nginx that's complicated by a number of things, most notably the fact that my ISP blocks all inbound port 80 traffic.
First, I got a web forward set up so that www.mysite.com will redirect to mysite.com:8000, and then I set up my router to forward port 8000 to my server running nginx. This gets around my ISP's block on port 80. I'm now attempting to have nginx on the server proxy the request on port 8000 to a virtual host on port 80, so that the site will show up as mysite.com after it loads rather than mysite.com:8000.
I've been trying to do this with nginx's proxy_pass directive, but no matter what I do the site always shows up as mysite.com:8000.
Here's what I have so far:
server {
    listen [::]:8000;
    server_name mysite.com;

    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
server {
    listen 127.0.0.1:80;
    server_name mysite.com;
    root /var/www/homepage;
    index index.html;

    # ... (non-relevant stuff) ...
}
Link to the actual site: http://www.bjacobel.com
I've also tried forwarding port 8000 at the router to port 80 and having nginx listen on port 80, but the URL still shows up with :8000 in it.
Thanks for your help!

The root of the problem is not your setup, but the initial web forward: it works by redirecting the requested URL (http://www.yoursite.com) to the new URL (http://yoursite.com:8000).
So the :8000 is already in the browser's address bar by the time the request reaches your setup, and you can't change it back to port 80, as your provider blocks that port.
You could use a frameset as a forwarder ("Web 0.5") or live with it.

Word of warning: hosting public web servers on a residential connection is normally against the ISP's terms of service.
The browser will always show :8000 because the HTTP connection needs to be initiated on port 8000 to reach your site. Just think of the security issues if you could "hide" part of the URL.
The only workaround is to host a proxy server or a framed website on a server that can be accessed on port 80. Also, there are redirection services that could redirect port 80 to 8000.
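For illustration, a minimal sketch of that workaround (my addition, untested): rent a small VPS that can listen on port 80 and have it proxy to the home connection. The hostname home.example.com is a placeholder for whatever DNS name or IP reaches your router on port 8000.

# nginx on the VPS, which is reachable on port 80
server {
    listen 80;
    server_name mysite.com;

    location / {
        # forward to the home server on the ISP-permitted port
        proxy_pass http://home.example.com:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}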

Related

WebSocket connection fails to establish when behind AWS ALB and nginx reverse proxy load balancer

Setup introduction: I have a Node.js app with 3 different services, namely admin, client and server. All 3 services run as individual docker containers. My setup consists of 2 EC2 instances behind an AWS Application Load Balancer, with each EC2 instance running 1 container each of the admin and client services, and the server service scaled to 2 containers using the docker-compose --scale option. I'm using containerised nginx as a reverse proxy and load balancer. I have a target group with both instances as registered targets.
Problem description: The admin service needs to communicate with the server service via WebSocket, and I'm using socket.io for that purpose. This scenario requires sticky sessions for the WebSocket connection to be established. I have enabled stickiness at the instance level with nginx ip_hash in the upstream block for the server service. At the ALB level I've enabled stickiness for the target group with the load-balancer-generated cookie type. When I access the admin service endpoint in Chrome and open the inspector, I can see that the WebSocket connection fails to establish, the exact errors being:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Failed to load resource: the server responded with a status of 400 ()
This is my nginx conf for the server service:
upstream webinar_server {
    hash $remote_addr consistent;
    server webinar-server_webinar_server_1:8000;
    server webinar-server_webinar_server_2:8000;
}

server {
    listen 80;
    server_name server.mydomain.com;

    location / {
        proxy_pass http://webinar_server/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
This is the nginx conf for the admin service:
server {
    listen 80;
    server_name admin.mydomain.com;

    location / {
        proxy_pass http://webinar_admin:5001;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
I've tried: I implemented a simpler setup to test out the stickiness of the infrastructure, and it worked as expected. I had 2 EC2 instances behind the AWS ALB, with each instance running 2 basic containerised nginx web servers, each serving a different HTML page. These web servers sit behind a containerised nginx reverse proxy load balancer, as in my original setup. In this case both the instance-level stickiness using the nginx hash function and the ALB-level target group stickiness worked as expected.
For the original setup I'm trying to implement, when I removed one of the instances from the target group (leaving only one registered target), the instance-level nginx stickiness worked fine, routing to the correct server container (since there are 2 server containers). But with target group level stickiness enabled, I get the error mentioned above.
As you can see here, the Socket.IO client doesn't handle cookies out of the box, and the ALB uses cookies to route requests back to the right server.
To fix this issue you need to add this code on the client side:
import { io } from "socket.io-client";
import { parse } from "cookie";

const socket = io("https://my-domain.com");

const COOKIE_NAME = "AWSALB";

socket.io.on("open", () => {
  socket.io.engine.transport.on("pollComplete", () => {
    // grab the ALB sticky-session cookie from the long-polling response
    const request = socket.io.engine.transport.pollXhr.xhr;
    const cookieHeader = request.getResponseHeader("set-cookie");
    if (!cookieHeader) {
      return;
    }
    cookieHeader.forEach(cookieString => {
      if (cookieString.includes(`${COOKIE_NAME}=`)) {
        const cookie = parse(cookieString);
        // send the cookie back on every subsequent request
        socket.io.opts.extraHeaders = {
          cookie: `${COOKIE_NAME}=${cookie[COOKIE_NAME]}`
        };
      }
    });
  });
});
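A side note beyond the original answer: if losing the HTTP long-polling fallback is acceptable, forcing the WebSocket-only transport sidesteps the problem entirely, since sticky sessions are only needed while requests go over long polling:

// assumption: dropping the long-polling fallback is acceptable
const socket = io("https://my-domain.com", { transports: ["websocket"] });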

HAProxy redirect based on the selected server

I'm trying to set up HAProxy to change the destination URL depending on the server that gets picked by the load balancer. I have 3 webservices, each one deployed on one of AWS, GCP and Azure:
The AWS is located at address1.com/service
The AZR is located at address2.com/bucketName/service
The GCP is located at address3.com/api/service
But in HAProxy I can't put a slash (/) in the server address to force a request made to example.com/helloWorld to go, for example, to address3.com/api/helloWorld, which is what I need. I want HAProxy to pick one of the servers for me using the configured balance method, and then have it call the correct webservice path depending on the server that was picked.
frontend example.com
    bind 0.0.0.0:80
    use_backend back_farm
    default_backend back_farm

backend back_farm
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server awsback address1.com/
    server azrback address2.com/bucketName/
    server gcpback address3.com/api/
I tried creating an ACL to check the selected server, but it doesn't seem to recognize the %s argument mentioned in the docs, and I haven't found a better replacement for it.
backend back_farm
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server awsback address1.com
    server azrback address2.com
    server gcpback address3.com
    acl is_azr %s -i azrback
    acl is_gcp %s -i gcpback
    http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/service,/api/service,)] if is_azr
    http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/service,/bucketName/service,)] if is_gcp
What am I doing wrong here, or what other way could I do this to force the address to change depending on the destination server?
Any ideas or guidance?
Even though this isn't 100% a reply to the question, because I had to switch from HAProxy to NGINX to achieve what I intended, I wanted to leave the solution I found here for others who might find it useful.
I was able to achieve this using the following configuration:
upstream backend.example.com {
    server aws.backend.example.com:8091;
    server azr.backend.example.com:8092;
    server gcp.backend.example.com:8093;
}

server {
    listen 80;
    listen [::]:80;
    server_name backend.example.com;

    location / {
        proxy_pass http://backend.example.com;
    }
}

server {
    listen 8091;
    listen [::]:8091;
    server_name aws.backend.example.com;

    location / {
        proxy_pass http://address1.com:80/;
    }
}

server {
    listen 8092;
    listen [::]:8092;
    server_name azr.backend.example.com;

    location / {
        proxy_pass http://address2.com:80/bucketName/;
    }
}

server {
    listen 8093;
    listen [::]:8093;
    server_name gcp.backend.example.com;

    location / {
        proxy_pass http://address3.com:80/api/;
    }
}
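For reference, a hedged sketch of one way this might be done in HAProxy itself (my addition, untested; it assumes HAProxy 1.6+, where http-request set-path and log-format expressions in use_backend are available): split the farm into one backend per provider and rewrite the path there, picking a backend pseudo-randomly in the frontend.

frontend example.com
    bind 0.0.0.0:80
    mode http
    # rand(3) yields 0, 1 or 2: a crude substitute for roundrobin across backends
    use_backend back_%[rand(3)]

backend back_0
    mode http
    server awsback address1.com:80

backend back_1
    mode http
    # the Azure endpoint expects the bucketName prefix
    http-request set-path /bucketName%[path]
    server azrback address2.com:80

backend back_2
    mode http
    # the GCP endpoint expects the api prefix
    http-request set-path /api%[path]
    server gcpback address3.com:80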

How to deal with mixed content in a website which should be secured as https?

I am building a website on server A (with a registered domain name), used for people to create and run their "apps".
These "apps" are actually docker containers running on server B, in the container, there lives a small web app which can be accessed directly like:
http://IP_ADDR_OF_SERVER_B:PORT
The PORT is a random high port number that maps to the docker container.
Now I can make SSL certificate working on server A, so that it works fine by accessing:
https://DOMAIN_NAME_OF_SERVER_A
The problem is, I embedded the "apps" in iframes accessed over plain "http" as above, so my browser (Chrome) refuses to load them and reports the error:
Mixed Content: The page at 'https://DOMAIN_NAME_OF_SERVER_A/xxx' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR_OF_SERVER_B:PORT/xxx'. This request has been blocked; the content must be served over HTTPS.
So, how should I deal with such issue?
I am a full stack green hand; I'd appreciate it a lot if you can share some knowledge on how to build a healthy https website while solving this problem in a proper way.
Supplementary explanation
OK, I think I only threw out the outline of the question; here are more details.
I see that it is straightforward to have the iframe requests served over https; then the browser won't complain anymore.
However, the trouble is that since all the "apps" are dynamically created and removed, it seems I'll need to prepare a certificate for each one of them.
Will a self-signed certificate work without being blocked or complained about by the browser? Or is there a way to serve all the "apps" with one SSL certificate?
Software environment
Server A: Running a node.js website listening on port 5000, served with an Nginx proxy_pass.
server {
    listen 80;
    server_name DOMAIN_NAME_OF_SERVER_A;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}

server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_A;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;
    ssl_session_timeout 5m;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}
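A side note beyond the original post: on modern nginx the ssl on; directive is deprecated; the current idiom is to flag the listening socket instead:

listen 443 ssl;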
Server B: Running node.js apps listening on different random high ports such as 50055, assigned dynamically when "apps" are created. (In fact these apps run in docker containers, though I don't think that matters.) Can run Nginx if needed.
Server A and Server B talk to each other over the public internet.
Solution
Just as all the answers say, especially the one from @eawenden, I need a reverse proxy to achieve my goal.
In addition, I did a few more things:
1. Assign a domain name to Server B for using a letsencrypt cert.
2. Proxy predefined url to specific port.
Therefore I setup a reverse proxy server using nginx on Server B, proxy all the requests like:
https://DOMAIN_NAME_OF_SERVER_B/PORT/xxx
to
https://127.0.0.1:PORT/xxx
P.S.: the nginx reverse proxy config on Server B:
server {
    listen 443;
    server_name DOMAIN_NAME_OF_SERVER_B;

    ssl on;
    ssl_certificate /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.cer;
    ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.key;
    ssl_session_timeout 5m;

    rewrite_log off;
    error_log /var/log/nginx/rewrite.error.log info;

    # capture the leading port number from the URL, e.g. /50055/xxx
    location ~ ^/(?<port>\d+)/ {
        # strip the port prefix before proxying to the app on that port
        rewrite ^/\d+?(/.*) $1 break;
        proxy_pass http://127.0.0.1:$port;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
Thus everything seems to be working as expected!
Thanks again to all the answerers.
I had a mixed content issue on dynamic requests; adding this header resolved it for me on the nginx server:
add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
The best way to do it would be to have a reverse proxy (Nginx supports them) that provides access to the docker containers:
A reverse proxy server is a type of proxy server that typically sits
behind the firewall in a private network and directs client requests
to the appropriate backend server. A reverse proxy provides an
additional level of abstraction and control to ensure the smooth flow
of network traffic between clients and servers.
- Source
Assign a domain name or just use the IP address of the reverse proxy and create a trusted certificate (Let's Encrypt provides free certificates). Then you can connect to the reverse proxy over HTTPS with a trusted certificate and it will handle connecting to the correct Docker container.
Here's an example of this type of setup geared specifically towards Docker: https://github.com/jwilder/nginx-proxy
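To give a flavour of that project, a sketch based on its README (the image name and VIRTUAL_HOST value are placeholders): the proxy container watches the Docker socket and routes requests by each container's VIRTUAL_HOST environment variable.

# run the proxy itself, listening on port 80
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# start an "app" container; the proxy picks it up and routes app1.example.com to it
docker run -d -e VIRTUAL_HOST=app1.example.com my-app-image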
The error message is pretty much telling you the solution.
This request has been blocked; the content must be served over HTTPS.
If the main page is loaded over HTTPS, then all the other page content, including the iframes, should also be loaded over HTTPS.
The reason is that insecure (non-HTTPS) traffic can be tampered with in transit, potentially being altered to include malicious code that alters the secure content. (Consider for example a login page with a script being injected that steals the userid and password.)
== Update to reflect the "supplemental information" ==
As I said, everything on the page needs to be loaded via HTTPS. Yes, self-signed certificates will work, but with some caveats: first, you'll have to tell the browser to allow them, and second, they're really only suitable for use in a development situation. (You do not want to get users in the habit of clicking through a security warning.)
The answer from #eawenden provides a solution for making all of the content appear to come from a single server, thus providing a way to use a single certificate. Be warned, reverse proxy is a somewhat advanced topic and may be more difficult to set up in a production environment.
An alternative, if you control the servers for all of the iframes, may be to use a wildcard SSL certificate. This would be issued, for example, for *.mydomain.com, and would work for www.mydomain.com, subsite1.mydomain.com, subsite2.mydomain.com, etc., that is, for everything directly under mydomain.com.
Like others have said, you should serve all the content over HTTPS.
You could use an HTTP proxy to do this. This means that server A handles the HTTPS connection and forwards the request to server B over plain HTTP. Server B sends the response back to server A, which updates the response headers to make it look like the response came from server A itself, and forwards the response to the user.
You would make each of your apps on server B available on a url on domain A, for instance https://www.domain-a.com/appOnB1 and https://www.domain-a.com/appOnB2. The proxy would then forward the requests to the right port on server B.
For Apache this would mean two extra lines in your configuration per app:
ProxyPass "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
ProxyPassReverse "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
The first line will make sure that Apache forwards this request to server B and the second line will make sure that Apache changes the address in the HTTP response headers to make it look like the response came from server A instead of server B.
As you have a requirement to make this proxy dynamic, it might make more sense to set this proxy up inside your NodeJS app on server A, because that app probably already has knowledge of the different apps that live on server B. I'm no NodeJS expert, but a quick search turned up https://github.com/nodejitsu/node-http-proxy, which looks like it would do the trick and seems like a well-maintained project; see the sketch below.
The general idea remains the same though: You make the apps on server B accessible through server A using a proxy, using server A's HTTPS set-up. To the user it will look like all the apps on server B are hosted on domain A.
After you set this up you can use https://DOMAIN_NAME_OF_SERVER_A/fooApp as the url for your iFrame to load the apps over HTTPS.
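To make that concrete, a minimal sketch with node-http-proxy (my addition, untested; the path prefixes and ports are hypothetical stand-ins for the dynamically assigned ones):

const http = require("http");
const httpProxy = require("http-proxy");

const proxy = httpProxy.createProxyServer({});

// hypothetical mapping from public path prefixes to ports on server B
const apps = { "/appOnB1": 50055, "/appOnB2": 50056 };

http.createServer((req, res) => {
  const prefix = Object.keys(apps).find((p) => req.url.startsWith(p));
  if (!prefix) {
    res.statusCode = 404;
    return res.end("unknown app");
  }
  // strip the prefix so the app on server B sees clean URLs
  req.url = req.url.slice(prefix.length) || "/";
  proxy.web(req, res, { target: "http://IP_ADDR_OF_SERVER_B:" + apps[prefix] });
}).listen(5050); // hypothetical port; nginx on server A would proxy /appOnB* here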
Warning: You should only do this if you can route this traffic internally (server A and B can reach each other on the same network), otherwise traffic could be intercepted on its way from server A to server B.

golang - how to route data from default port to another[4000] port [closed]

Currently my Go server is running on port 4000; to access the web application I need to type somedomainname:4000 in the browser.
I would like to type only somedomainname, and it should connect to the web server on port 4000.
There are several solutions for this:
Have your Go server listen directly on port 80. However, be careful with how you implement this. Do not run your service as root; use Linux capabilities instead (thanks to @JimB who reminded me of this in the comments). You can use setcap to grant a process the capability to bind to a privileged port:
> setcap 'cap_net_bind_service=+ep' /path/to/your/application
Use an HTTP reverse proxy like Nginx to forward all HTTP requests from port 80 to your Go application. Here's an example configuration file for Nginx:
upstream yourgoapplication {
server localhost:4000;
}
server {
listen 80;
server_name somedomainname;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://yourgoapplication;
}
}
When you do this, you can configure the Go application to listen on 127.0.0.1:4000 instead of 0.0.0.0:4000, so that it is reachable only through the nginx proxy on port 80.
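A minimal sketch of that loopback-only bind (my addition, not from the original answer):

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from the Go app")
    })
    // bind to the loopback interface only; nginx on port 80 proxies to us
    http.ListenAndServe("127.0.0.1:4000", nil)
}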
If and when you deploy your application in a Docker container, you can simply map container port 4000 to host port 80. See the manual for more information.

Load balance WebSocket connections to Tornado app using HAProxy?

I am working on a Tornado app that uses websocket handlers. I'm running multiple instances of the app using Supervisord, but I have trouble load balancing websocket connections.
I know nginx does not support dealing with websockets out of the box, but I followed the instructions here http://www.letseehere.com/reverse-proxy-web-sockets to use the nginx tcp_proxy module to reverse proxy websocket connections. However, this did not work, since the module can't route websocket URLs (e.g. ws://localhost:80/something), so it would not work with the URL routes I have defined in my Tornado app.
From my research around the web, it seems that HAProxy is the way to go to load balance my websocket connections. However, I'm having trouble finding any decent guidance to setup HAProxy to load balance websocket connections and also be able to handle websocket URL routes.
I would really appreciate some detailed directions on how to get this going. I am also fully open to other solutions as well.
It's not difficult to implement WebSocket in haproxy, though I admit it's not yet easy to find documentation on this (hopefully this response will serve as one example). If you're using haproxy 1.4 (which I suppose you are), then it works just like any other HTTP request without you having to do anything, as the HTTP Upgrade is recognized by haproxy.
If you want to direct the WebSocket traffic to a different farm than the rest of the HTTP traffic, you should use content switching rules, in short:
frontend pub-srv
    bind :80
    use_backend websocket if { hdr(Upgrade) -i WebSocket }
    default_backend http

backend websocket
    timeout server 600s
    server node1 1.1.1.1:8080 check
    server node2 2.2.2.2:8080 check

backend http
    timeout server 30s
    server www1 1.1.1.1:80 check
    server www2 2.2.2.2:80 check
If you're using 1.5-dev, you can even specify "timeout tunnel" to have a larger timeout for WS connections than for normal HTTP connections, which saves you from using overly long timeouts on the client side.
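For illustration, a hedged sketch of that (my addition, assuming a 1.5-dev defaults section; the timeout values are placeholders):

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    # applies instead of the client/server timeouts once a connection is upgraded
    timeout tunnel 10m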
You can also combine Upgrade: WebSocket with a specific URL:
frontend pub-srv
    bind :80
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_ws_url path /something1 /something2 /something3
    use_backend websocket if is_websocket is_ws_url
    default_backend http
Last, please don't use the stupid 24h idle timeouts we sometimes see; it makes absolutely no sense to keep an established session waiting 24 hours for a client these days. The web is much more mobile than in the 80s and connections are very ephemeral. You'd end up with many FIN_WAIT sockets for nothing. 10 minutes is already quite long for the current internet.
Hoping this helps!
WebSockets do not traverse proxies too well, since after the handshake they no longer follow normal HTTP behavior.
Try using the secured WebSocket protocol (wss://); this will use the proxy CONNECT mechanism, which hides the WebSocket protocol from the proxy.
I used https://launchpad.net/txloadbalancer to do loadbalancing with Tornado websocket handlers. It's simple and worked well (I think).
In the nginx http block (nginx v1.3+ only):
upstream chatservice {
    # multiple tornado instances
    server 127.0.0.1:6661;
    server 127.0.0.1:6662;
    server 127.0.0.1:6663;
    server 127.0.0.1:6664;
}

# forward the Upgrade handshake; close the connection if there is none
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
In the virtual host:
server {
    listen 80;
    server_name chat.domain.com;
    root /home/duchat/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # uses the map defined above
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_pass http://chatservice;
        internal;
    }
}
