WebSocket connection fails to establish when behind AWS ALB and nginx reverse proxy load balancer - socket.io

Setup introduction: I have a Node.js app with 3 different services, namely admin, client and server. All 3 services run as individual Docker containers. My setup consists of 2 EC2 instances behind an AWS Application Load Balancer, with each EC2 instance running one container each of the admin and client services, and the server service scaled to 2 containers using the docker-compose --scale option. I'm using a containerised nginx as a reverse proxy and load balancer. I have a target group with both instances as registered targets.
Problem description: The admin service needs to communicate with the server service via WebSocket, and I'm using socket.io for that purpose. This scenario requires sticky sessions to establish the WebSocket connection. I have enabled stickiness at the instance level with nginx IP-based hashing (hash $remote_addr consistent) in the upstream block for the server service. At the ALB level I've enabled sticky sessions for the target group with the load-balancer-generated cookie type. When I access the admin service's endpoint in Chrome and open the developer tools, I can see that the WebSocket connection fails to establish, the error being exactly:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Failed to load resource: the server responded with a status of 400 ()
This is my nginx conf for the server service:
upstream webinar_server {
    hash $remote_addr consistent;
    server webinar-server_webinar_server_1:8000;
    server webinar-server_webinar_server_2:8000;
}
server {
    listen 80;
    server_name server.mydomain.com;
    location / {
        proxy_pass http://webinar_server/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
This is the nginx conf for the admin service:
server {
    listen 80;
    server_name admin.mydomain.com;
    location / {
        proxy_pass http://webinar_admin:5001;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
I've tried: To test out the stickiness of the infrastructure, I implemented a simpler setup, which worked as expected. I had 2 EC2 instances behind an AWS ALB, each instance running 2 basic containerised nginx web servers, each serving a different HTML page. These web servers were behind a containerised nginx reverse proxy/load balancer as in my original setup. In this case both the instance-level stickiness using the nginx hash function and the ALB-level target group stickiness worked as expected.
For the original setup I'm trying to implement, when I removed one of the instances from the target group (leaving only one registered target), the instance-level nginx stickiness worked fine, routing to the correct server container (since there are 2 server containers). But the target-group-level stickiness returns the error mentioned above.

As you can see here, the Socket.IO client doesn't handle cookies out of the box, while the ALB uses cookies to route requests to the right server.
To fix this issue you need to put the following code on the client side:
import { io } from "socket.io-client";
import { parse } from "cookie";

const socket = io("https://my-domain.com");

// Name of the sticky-session cookie set by the AWS ALB
const COOKIE_NAME = "AWSALB";

socket.io.on("open", () => {
  socket.io.engine.transport.on("pollComplete", () => {
    const request = socket.io.engine.transport.pollXhr.xhr;
    const cookieHeader = request.getResponseHeader("set-cookie");
    if (!cookieHeader) {
      return;
    }
    // Find the ALB cookie in the polling response and replay it on all
    // subsequent requests so they keep hitting the same target
    cookieHeader.forEach((cookieString) => {
      if (cookieString.includes(`${COOKIE_NAME}=`)) {
        const cookie = parse(cookieString);
        socket.io.opts.extraHeaders = {
          cookie: `${COOKIE_NAME}=${cookie[COOKIE_NAME]}`
        };
      }
    });
  });
});
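Note that setting extraHeaders like this only works where the client can set request headers programmatically (for example a Node.js socket.io client); browsers won't let scripts set the Cookie header. As an assumption on my part (not part of the original answer), a browser client would instead rely on the browser's own cookie handling, roughly like this minimal sketch, provided the server's CORS configuration allows credentials:

import { io } from "socket.io-client";

// In a browser the ALB sticky cookie is stored and resent automatically,
// as long as credentials are enabled on the connection.
const socket = io("https://my-domain.com", {
  withCredentials: true
});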

Related

Missing Sec-WebSocket-Key in WebSocket connection handshake

Since yesterday we have had a problem with WebSocket connections. Chromium-based browsers and also Firefox don't add Sec-WebSocket-Key to the headers during connection. We use the standard new WebSocket() to connect to the server.
Missing Sec-WebSocket-Key header.
The funny thing is that when I open a new incognito window I can create a connection, but after some fetch() requests, if I try to make another connection it fails - missing Sec-WebSocket-Key header.
Observed sequence (from the screenshots): first WebSocket connection - success; fetch request for the app health status; failed WebSocket connection after the fetch request; second WebSocket connection - missing Sec-WebSocket-Key.
Nginx config for /ws:
location ^~ /ws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://backend;
}
Has anyone encountered a similar problem?
Checked in Google Chrome version 96.0.4664.93 and Firefox 95.0.1 on Windows, Linux and Mac.
macOS & Safari works.
The problem is probably in the HTTP/2 implementation in Firefox & Chrome (Safari works OK). We were digging for three days and finally realized that the issue disappeared after disabling HTTP/2.
This is the response from DigitalOcean technical support:
We had recently enabled support for WebSockets over HTTP/2 (RFC 8441). This adds support for browsers to reuse an existing HTTP/2 connection and will allow tunneling of WebSocket connections over an HTTP/2 stream. As part of an immediate fix we have disabled this functionality, which informs browsers to create WebSockets over HTTP/1.1 (RFC 6455). We believe the bug is actually within Chrome/Firefox but further testing is necessary to track down the issue. To our knowledge the LB wasn't the issue, as it was the browser that was not including the required Sec-WebSocket-Key header.

SignalR: Web-socket Handshake issue with Error 400 with Nginx

I have an ASP.NET MVC 5 application in which I have implemented SignalR with WebSockets, using PostgreSQL. All functionality was working fine. When I move my application to the staging server and enable WebSockets - we have implemented nginx for load balancing - SignalR throws an error for the WebSocket Secure (wss) connection every time: "Handshake Error with 400 Code."
Config I am using right now:
upstream servername_websocket {
    server abc:8000;
}
server {
    listen 8000 ssl;
    server_name ~\.servername\.com$;
    ssl_certificate /etc/ssl/certs/servername.com.chained.crt;
    ssl_certificate_key /etc/ssl/private/servername.com.key;
    location / {
        proxy_pass http://servername_websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Thanks
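Side note: the config above references $connection_upgrade, but the map that defines it is not shown (nginx refuses to start on an unknown variable, so it presumably lives elsewhere in the config). For reference, the usual definition, placed in the http block, is:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}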

Nginx hide forwarded port number [closed]

I'm trying to set up a simple static website, and I have an issue with nginx that's complicated by a number of things, most notably the fact that my ISP blocks all inbound port 80 traffic.
First, I got a web forward set up so that www.mysite.com will redirect to mysite.com:8000, and then I set up my router to forward port 8000 to my server running nginx. This gets around my ISP's block on port 80. I'm now attempting to have nginx on the server proxy the request on port 8000 to a virtual host on port 80, so that the site will show up as mysite.com after it loads rather than mysite.com:8000.
I've been trying to do this with nginx's proxy_pass directive, but no matter what I do the site always shows up as mysite.com:8000.
Here's what I have so far:
server {
    listen [::]:8000;
    server_name mysite.com;
    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
server {
    listen 127.0.0.1:80;
    server_name mysite.com;
    root /var/www/homepage;
    index index.html;
    ... (non-relevant stuff)
}
Link to the actual site: http://www.bjacobel.com
I've also tried to do this by forwarding port 8000 at the router to port 80, and having nginx listen on port 80, but the url with :8000 in it still shows up.
Thanks for your help!
The root of the problem is not with your setup, but with the first web forward - it works by redirecting the requested URL (http://www.yoursite.com) to the new URL (http://yoursite.com:8000).
So this is already in place when the request reaches your setup, and you can't change it back to port 80, as your provider blocks it.
You could use a frameset as a forwarder ("Web 0.5") or live with it.
Word of warning: hosting public web servers on a residential connection is normally against the ISP's Terms of Service.
The browser will always show :8000 because the HTTP connection needs to be initiated on port 8000 to access your site. Just think of the security issues if you could "hide" part of the URL.
The only workaround is to host a proxy server or a framed website on a server that can be accessed on port 80. Also, there are redirection services that could redirect port 80 to 8000.

Load balance WebSocket connections to Tornado app using HAProxy?

I am working on a Tornado app that uses websocket handlers. I'm running multiple instances of the app using Supervisord, but I have trouble load balancing websocket connections.
I know nginx does not support dealing with websockets out of the box, but I followed the instructions here http://www.letseehere.com/reverse-proxy-web-sockets to use the nginx tcp_proxy module to reverse proxy websocket connections. However, this did not work since the module can't route websocket urls (ex: ws://localhost:80/something). So it would not work with the URL routes I have defined in my Tornado app.
From my research around the web, it seems that HAProxy is the way to go to load balance my websocket connections. However, I'm having trouble finding any decent guidance to setup HAProxy to load balance websocket connections and also be able to handle websocket URL routes.
I would really appreciate some detailed directions on how to get this going. I am also fully open to other solutions as well.
It's not difficult to implement WebSocket in HAProxy, though I admit it's not yet easy to find documentation on this (hopefully this response will serve as one example). If you're using HAProxy 1.4 (which I suppose you are) then it works just like any other HTTP request without having to do anything, as the HTTP Upgrade is recognized by HAProxy.
If you want to direct the WebSocket traffic to a different farm than the rest of the HTTP traffic, then you should use content switching rules; in short:
frontend pub-srv
    bind :80
    use_backend websocket if { hdr(Upgrade) -i WebSocket }
    default_backend http

backend websocket
    timeout server 600s
    server node1 1.1.1.1:8080 check
    server node2 2.2.2.2:8080 check

backend http
    timeout server 30s
    server www1 1.1.1.1:80 check
    server www2 2.2.2.2:80 check
If you're using 1.5-dev, you can even specify "timeout tunnel" to have a larger timeout for WS connections than for normal HTTP connections, which saves you from using overly long timeouts on the client side.
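For example, a minimal sketch of that (the 1h value is only an illustration, not from the original answer):

backend websocket
    # Applies once the connection has been upgraded to a tunnel, so WS
    # connections can idle much longer than plain HTTP requests
    timeout tunnel 1h
    timeout server 30s
    server node1 1.1.1.1:8080 check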
You can also combine Upgrade: WebSocket with a specific URL:

frontend pub-srv
    bind :80
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_ws_url path /something1 /something2 /something3
    use_backend websocket if is_websocket is_ws_url
    default_backend http
Last, please don't use the stupid 24h idle timeouts we sometimes see; it makes absolutely no sense to keep waiting 24 hours for a client with an established session nowadays. The web is much more mobile than in the 80s and connections are very ephemeral. You'd end up with many FIN_WAIT sockets for nothing. 10 minutes is already quite long for the current internet.
Hoping this helps!
WebSockets do not traverse proxies too well, since after the handshake they no longer follow normal HTTP behavior.
Try using the secure WebSocket (wss://) protocol. This will use the proxy CONNECT mechanism, which hides the WebSocket traffic from the proxy.
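For completeness, a minimal sketch of what that looks like from the client side (the URL here is a placeholder):

// Using the secure wss:// scheme; a proxy that supports HTTPS will
// tunnel the connection via CONNECT instead of inspecting it.
var ws = new WebSocket("wss://example.com/something1");
ws.onopen = function () {
    ws.send("hello");
};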
I used https://launchpad.net/txloadbalancer to do load balancing with Tornado websocket handlers. It's simple and worked well (I think).
In the nginx http block (nginx v1.3+ only):
upstream chatservice {
    # multiple Tornado instances
    server 127.0.0.1:6661;
    server 127.0.0.1:6662;
    server 127.0.0.1:6663;
    server 127.0.0.1:6664;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
Virtual host:
server {
    listen 80;
    server_name chat.domain.com;
    root /home/duchat/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://chatservice;
        internal;
    }
}

Node.js - Good WebServer with WebSocket-proxying & SSL support?

I really love Node.js but it's really complicated when you want to run multiple WebSocket servers and make them all accessible over port 80.
I'm currently running nginx, but proxying incoming WebSocket connections to the different WebSocket servers depending on the URL is not possible because nginx does not support HTTP/1.1 proxying.
I've tried to implement a web server with this functionality on my own, but it is really complicated when it comes to header passing etc. Another thing is SSL support; it's not easy to support.
So, does anyone know a good solution for the things I mentioned?
Thanks for any help!
I had good results using node-http-proxy by nodejitsu. As stated in their readme, they seem to support WebSockets.
Example for WebSockets (taken from their GitHub readme):
var http = require('http'),
    httpProxy = require('http-proxy');

// Create an instance of node-http-proxy
var proxy = new httpProxy.HttpProxy();

var server = http.createServer(function (req, res) {
  // Proxy normal HTTP requests
  proxy.proxyRequest(req, res, {
    host: 'localhost',
    port: 8000
  });
});

server.on('upgrade', function (req, socket, head) {
  // Proxy WebSocket requests too
  proxy.proxyWebSocketRequest(req, socket, head, {
    host: 'localhost',
    port: 8000
  });
});
Its production usage should be no problem since it is used for nodejitsu.com. To run the proxy app as a daemon, consider using forever.
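The snippet above uses the old 0.x API of node-http-proxy. In current versions of http-proxy the equivalent would look roughly like the following sketch (my adaptation, not taken from their readme):

var http = require('http'),
    httpProxy = require('http-proxy');

// createProxyServer() replaced new HttpProxy() in http-proxy >= 1.0
var proxy = httpProxy.createProxyServer({ target: 'http://localhost:8000' });

var server = http.createServer(function (req, res) {
  // Proxy normal HTTP requests
  proxy.web(req, res);
});

server.on('upgrade', function (req, socket, head) {
  // Proxy WebSocket upgrade requests too
  proxy.ws(req, socket, head);
});

server.listen(80);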
Newer versions of nginx actually do support reverse proxying over HTTP/1.1. You probably want version 1.1.7 or greater.
Try something like this in your config:
location / {
    chunked_transfer_encoding off;
    proxy_http_version 1.1;
    proxy_pass http://localhost:9001;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host:9001; # probably need to change this
    proxy_set_header Connection "Upgrade";
    proxy_set_header Upgrade websocket;
}
Nice thing about this is that you can terminate SSL at nginx.
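If you do terminate SSL at nginx, a minimal sketch of the same idea behind a TLS listener might look like this (server name and certificate paths are placeholders, not from the original answer):

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:9001;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}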
