Socket.io Handshake fails on site but not on localhost

Socket.io handshake fails on my site but not on localhost.
I had to provide a custom handshake because socket.io was unable to find my query parameter.
Here is my declaration in socket.service.js:
var ioSocket = io('https://my-site.rhcloud.com//?EIO=2&transport=polling&t=1414328625757-0&token=' + Auth.getToken(), {
});
and how I catch it on the server side:
socketio.use(function(socket, next) {
    request.get('https://my-site.rhcloud.com/api/users/chan', {qs: {access_token: socket.handshake.query.token}}, function(err, me) {
        if (err || !me.body || me.body == 'Unauthorized') {
            if (!me) console.log('!me');
            if (err) console.log(err);
            next(err);
        } else {
            // performing operations
            next();
        }
    });
});
Here is the message I get:
WebSocket connection to 'wss://my-site.rhcloud.com/socket.io/?EIO=2&transport=websocket&t=…YwMX0.1F6ebfNxzoDPYffXGapGMzLFPJd-mfN0EexqZzXXo7A&sid=z0Jmrbgb7OS0nbqxAAAG' failed: Error during WebSocket handshake: Unexpected response code: 400
I'm really lost here, and I've dug around a lot on Google without any success.
Any help would be really appreciated!

After a lot of searching, I realized the problem came from OpenShift. You have to specify the port sockets are going to use. See this article: https://blog.openshift.com/paas-websockets/
So, I just had to write:
var ioSocket = io('http://my-site.rhcloud.com:8000', {
});
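If the auth token from the question still needs to travel with the handshake, it can go in the same call; a minimal sketch, assuming Auth.getToken() from the question and a socket.io 1.x client (where the query option accepts a string):
var ioSocket = io('http://my-site.rhcloud.com:8000', {
    // sent with the handshake, readable on the server as socket.handshake.query.token
    query: 'token=' + Auth.getToken()
});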

Nginx proxy solves the issue with the following config:
location / {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
Original solution here. It works great for me (Ubuntu 14.04 + Plesk 12.5).

I deployed my node/express server to Azure and got the following error:
websocket.js:100 WebSocket connection to 'ws://mywebsitename.azurewebsites.net/socket.io/?EIO=3&transport=websocket&sid=ssEHs-OWxI6mcChyAAAB' failed: Error during WebSocket handshake: Unexpected response code: 503
I fixed my issue by enabling WebSockets in the Azure portal.

Related

Socket.io in production returns 400 Bad Request error

I am using Socket.IO in my application. The React client uses socket.io-client 4.1.3, and the Node.js server uses socket.io 4.1.3.
In the development environment on my local machine, everything works fine.
The React app runs on http://localhost:3000, and connects to the server using:
import io from 'socket.io-client';
const socket = io('http://localhost:5000/');
The Node.js server is configured as below:
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const cors = require('cors');
const io = require('socket.io')(server, {
    cors: {
        origin: 'http://localhost:3000'
    },
    maxHttpBufferSize: '1e6'
});
app.set('io', io);
app.use(express.static('public'));
app.use(express.json({ limit: '7mb' }));
app.use(cors({ origin: 'http://localhost:3000' }));
server.listen(5000, () => console.log('Server started'));
In production, I am using Firebase to host the React app, in a subdirectory (e.g. https://www.example.com/app/).
In production, http://localhost:5000/ and http://localhost:3000 in the code above have also been changed to https://app.example.com and https://www.example.com/app respectively.
My server uses Ubuntu 20.04, Nginx, and Let's Encrypt, with a server block set up as follows:
server {
    server_name app.example.com;

    location / {
        proxy_pass http://localhost:5000/;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = app.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name app.example.com;
    return 404; # managed by Certbot
}
In Google Chrome, I was getting No 'Access-Control-Allow-Origin' header is present on the requested resource as an error. Changing the origin from https://www.example.com/app to * in the Node.js code fixed this.
However, now I am getting the following error in my browser:
POST https://app.example.com/socket.io/?EIO=4&transport=polling&t=NirW_WK&sid=PmhwTyHRXOV4jWOdAAAF 400 (Bad Request)
Why would this be?
Thanks
A few small changes to both the Node.js and Nginx configurations should resolve your problem:
Node.js
First off, I'd recommend that you change this:
cors: {
    origin: 'http://localhost:3000'
},
to this (as specified here):
cors: {
    origin: 'http://localhost:3000',
    methods: ["GET", "POST"]
},
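Also note that the origin value is compared against the browser's Origin header, which is scheme and host only (it never includes a path like /app), so in production it would look roughly like this (a sketch using the domains from the question):
cors: {
    // the Origin header sent by a page hosted at https://www.example.com/app/ is just https://www.example.com
    origin: 'https://www.example.com',
    methods: ["GET", "POST"]
},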
Nginx
Change this:
location / {
    proxy_pass http://localhost:5000/;
}
to this:
location / {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    proxy_pass http://localhost:5000/;
}
This post here can give more information on the CORS headers needed in Nginx reverse proxies.
Try adding a port variable that reads from an environment variable; in production, PORT should be set to https://app.example.com/:
const port = process.env.PORT || 3000
And use it everywhere that localhost:3000 was used in your backend code.
This should also help
const io = require("socket.io")(server, {
    cors: {
        origin: port,
        methods: ["GET", "POST"],
        allowedHeaders: ['Access-Control-Allow-Origin'],
        credentials: false
    }
})
I was facing the same issue with a backend on AWS Elastic Beanstalk, so we set up a Load Balancer to handle the multiple request calls and the error was fixed. So I think you need to check your cloud provider's load-balancing setup.

Spring get actual scheme from reverse proxy

I have a Spring Boot web application running on a WildFly server. I implemented Facebook OAuth login by generating a "Login With Facebook" button linked to the Facebook login endpoint:
https://www.facebook.com/v2.5/dialog/oauth?client_id=****&response_type=code&redirect_uri=http://example.com/login/facebook
I generate the value of redirect_uri using the following Java code:
public static String getFacebookRedirectUrl() {
    RequestAttributes attrs = RequestContextHolder.getRequestAttributes();
    if (attrs instanceof ServletRequestAttributes) {
        HttpServletRequest request = ((ServletRequestAttributes) attrs).getRequest();
        return request.getScheme() + "://"
                + request.getServerName() + ":"
                + request.getServerPort()
                + request.getContextPath()
                + "/login/facebook";
    } else {
        throw new RuntimeException("Cannot determine facebook oauth redirect url");
    }
}
My website is deployed internally to http://my-ip-address:8080 and has a reverse proxy (nginx) forwarding requests from https://example.com to http://my-ip-address:8080.
The problem is that the redirect_uri is always generated as http://example.com/login/facebook instead of https://example.com/login/facebook (i.e. not https).
Please suggest how to make request.getScheme() return https correctly when the user accesses the website via https. The following is the reverse proxy configuration in /etc/nginx/sites-enabled/mysite.com:
server {
    listen 80;
    listen 443;
    server_name example.com www.example.com;

    ssl on;
    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        proxy_pass http://my-ip-address:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
request.getScheme() will always be http because you are proxying via http, but you are passing the request scheme in a header, so use that.
Change return request.getScheme() + "://" to return request.getHeader("X-Forwarded-Proto") + "://".
(In Java, HttpServletRequest.getHeader() takes the header name as-is and matches it case-insensitively, so "X-Forwarded-Proto" is exactly the string to pass.)

Heroku Error during websocket handshake 503

I'm trying to connect to WebSockets on Heroku but it's saying Error during WebSocket handshake: Unexpected response code: 503. The error in Dev Tools is 'Service Unavailable'.
Server code
var wss = new WebSocketServer({server: app, port:5001});
Client code (I am replacing the port with 5001 as well):
var host = location.origin
    .replace(/^http/, 'ws')
    .replace('5000', '5001');
var ws = new WebSocket(host);
I did the same in development and managed to connect. Any help troubleshooting? Thanks.
Apparently, this was a stupid mistake on my side. What I did was follow the example here and everything was OK.
Basically, I omitted this part from my code:
// app.listen(config.port, function(){
//     console.log("App started on port " + config.port);
// });
and included this instead:
var server = http.createServer(app);
server.listen(config.port);
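Putting the pieces together for Heroku, where only the single port in process.env.PORT is exposed, the WebSocket server from the question would attach to that same HTTP server instead of opening port 5001; a sketch, assuming the ws module:
var http = require('http');
var WebSocketServer = require('ws').Server;

var server = http.createServer(app);
server.listen(process.env.PORT || config.port);

// attach to the existing HTTP server rather than a second port
var wss = new WebSocketServer({ server: server });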

How do I get socket.io running for a subdirectory

I've got a proxy running that only hits my node.js server for paths that begin with /mysubdir.
How do I get socket.io configured for this situation?
In my client code I tried:
var socket = io.connect('http://www.example.com/mysubdir');
but then I noticed that the underlying socket.io (or engine.io) HTTP requests are hitting
http://www.example.com/socket.io/?EIO=3&transport=polling&t=1410972713498-72
I want them to hit
http://www.example.com/mysubdir/socket.io.....
Is there something I have to configure on the client and the server?
In my server I had to
var io = require('socket.io')(httpServer, {path: '/mysubdir/socket.io'})
In my client I had to
<script src="http://www.example.com/mysubdir/socket.io/socket.io.js"></script>
and also
var socket = io.connect('http://www.example.com', {path: "/mysubdir/socket.io"});
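Worth noting: a path in the URL passed to io.connect() is treated as a Socket.IO namespace, not as the HTTP mount point, which is why the original io.connect('http://www.example.com/mysubdir') call still polled /socket.io/ at the root. Only the path option moves the HTTP endpoint; for example (with a hypothetical /admin namespace):
// '/admin' selects a namespace; the HTTP requests still go to /mysubdir/socket.io/
var admin = io.connect('http://www.example.com/admin', {path: "/mysubdir/socket.io"});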
In my case I am using nginx as a reverse proxy. I was getting 404 errors when polling. This was the solution for me.
The url to the node server is https://example.com/subdir/
In the app.js I instantiated the io server with
var io = require('socket.io')(http, {path: '/subdir/socket.io'});
In the html I used
socket = io.connect('https://example.com/subdir/', {
    path: "/subdir"
});
Cheers,
Luke.
Using nginx, this is a solution that doesn't require changing anything in the socket.io server app:
In the nginx conf:
location /mysubdir {
    rewrite ^/mysubdir/(.*) /socket.io/$1 break;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://127.0.1.1:3000;
}
In the server:
var io = require('socket.io')(3000)
In the client:
var socket = io.connect('https://example.com/', {
    path: "/mysubdir"
})
The answer by #Drew-LeSueur is correct for socket.io >= 1.0.
As I was using socket.io 0.9, I found the old way of doing it in the docs.
// in 0.9
var socket = io.connect('localhost:3000', {
    'resource': 'path/to/socket.io'
});

// in 1.0
var socket = io.connect('localhost:3000', {
    'path': '/path/to/socket.io'
});
Notice that a / appears as the first character in the new path option.

SignalR and failed websocket connection, but still works

I am currently getting this error below in Chrome console, but it still connects successfully with SignalR. Any reason why I am getting this error?
JS Hub Connection
scheduleHub = $.connection.scheduleHub;
scheduleHub.client.viewing = function (name, message) {
    app.showWarning(message, name, function () {
        app.refreshHash();
    });
};

if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
    $.connection.hub.qs = { "eventid": options.eventId };
    $.connection.hub.start()
        .done(function () {
            alert('Connected');
            //scheduleHub.server.viewing('wow', 'test');
        })
        .fail(function () { alert('Could not Connect!'); });
}
Chrome Console
WebSocket connection to 'ws://localhost:2222/signalr/connect?transport=webSockets&clientProtocol=1.4&eventid=23919&connectionToken=CV3wchrj88t6FdjgA%2BREdzEDIw0rhW6r2aUrb%2BI8qInsb3Y9BqQSOscPxfAZ2g0Dxl704usqdBBn%2BNSFKpjVNOtwASndOweD1kGWPCkWEbtJBMu%2B&connectionData=%5B%7B%22name%22%3A%22schedulehub%22%7D%5D&tid=5' failed: Error during WebSocket handshake: Unexpected response code: 500
WebSockets start by negotiating the connection over HTTP. During this HTTP handshake, the web server probably raised an exception; in any case, it returned HTTP status code 500. Without a successful HTTP response, Chrome is unable to continue negotiating the WebSocket connection.
Since SignalR works over multiple transports, not just WebSockets, once the WebSocket connection failed it will have automatically switched to another transport, like forever frame or long polling, which is why your connection still works.
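If you want to confirm which transport SignalR fell back to, the negotiated transport is exposed on the connection once it has started; a small sketch assuming the standard jQuery client:
$.connection.hub.start().done(function () {
    // e.g. "foreverFrame", "serverSentEvents" or "longPolling" when webSockets failed
    console.log('Connected using ' + $.connection.hub.transport.name);
});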
