Laravel Echo Server - Socket.io - Polling fails

The Laravel Echo Server is launched via:
laravel-echo-server start and is running fine:
L A R A V E L E C H O S E R V E R
version 1.3.1
Starting server...
✔ Running at localhost on port 6001
✔ Channels are ready.
✔ Listening for http events...
✔ Listening for redis events...
Server ready!
CHANNEL private-user:09222583-ef73-5640-bcc8-062b36c4f380.d3d0a4d2-0c95-5f5f-aea4-1dfe1fa36483
However, in the front-end, the polling seems to fail. The Network tab shows that requests keep trying to connect and fail:
The URLs look like this:
https://example.com:6001/socket.io/?EIO=3&transport=polling&t=MGEIXhf
The laravel-echo-server.json looks like this:
{
    "authHost": "https://example.com",
    "authEndpoint": "/broadcasting/auth",
    "clients": [],
    "database": "redis",
    "databaseConfig": {
        "redis": {
            "host": "127.0.0.1",
            "password": "56df4h5dfg4"
        }
    },
    "devMode": false,
    "host": null,
    "port": "6001",
    "protocol": "https",
    "socketio": {},
    "sslCertPath": "/etc/nginx/ssl/example_com/ssl-bundle.crt",
    "sslKeyPath": "/etc/nginx/ssl/example_com/example_com.key",
    "sslCertChainPath": "",
    "sslPassphrase": ""
}
The port 6001 seems to be open in the ufw:
Status: active

To                         Action      From
--                         ------      ----
...
6001                       ALLOW       Anywhere
...
6001 (v6)                  ALLOW       Anywhere (v6)
...

Verify that you used the same port number in your JS file as well:
import Echo from 'laravel-echo'

window.io = require('socket.io-client');

window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001'
});
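To rule out the client, you can also probe the polling endpoint directly with curl; a reachable laravel-echo-server answers with a socket.io handshake payload (a quick sketch using the same host and port as above):
curl -v "https://example.com:6001/socket.io/?EIO=3&transport=polling"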

I was having the exact same problem. On my local setup everything worked fine, but on the staging droplet ufw was blocking every incoming request: /var/log/ufw.log showed them all being blocked. I already had 6001 open in ufw, and I'd even given my IP access to everything in ufw, but requests were still blocked.
Although I really didn't want the security risk, I briefly disabled ufw and everything started working: connections were made instantly, no timeouts. I then quickly enabled ufw again, and everything continued working. I tested new connections in other browsers - all working.
So although it's really bad advice to try turning the firewall off and on again, it did work for me.
I hope someone else might chime in with a way to diagnose and fix such UFW issues in a secure way.

I got to the cause of the problem in my situation.
The first rule I'd set up with ufw was to allow ssh traffic.
I then installed fail2ban and part of that process saved the iptables using iptables-persistent.
When I later added a ufw rule to allow 6001 everything was working.
After a reboot, 6001 stopped being open even though it was still listed as open in ufw's status output. The reason was that the iptables rules had been restored from the copy taken before the 6001 rule was added.
So if you've got iptables-persistent running make sure you use
sudo dpkg-reconfigure iptables-persistent
after adding any new ufw rules.
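A quick way to catch this mismatch is to compare ufw's view with the rules the kernel actually has loaded, then re-save them (a sketch, assuming Ubuntu with the iptables-persistent package):
sudo ufw status | grep 6001                   # what ufw thinks is open
sudo iptables -S | grep 6001                  # what is actually loaded
sudo dpkg-reconfigure iptables-persistent     # re-save the live rules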

Related

Lets-encrypt Error: Failed HTTP-01 Pre-Flight / Dry Run

I've set up a redbird-based proxy following its README file examples.
So far I've configured a single domain, both for http and https, and it's working well (https still using a self-signed certificate).
But now I'm trying to configure it to use letsencrypt to automatically get valid ssl certificates, and I'm getting stuck on the following error:
{"level":30,"time":1578681102208,"pid":21320,"hostname":"nigul","name":"redbird","0":false,"1":"setChallenge called for 'exposito.bitifet.net'","msg":"Lets encrypt debugger","v":1}
[acme-v2] handled(?) rejection as errback:
Error: Error: Failed HTTP-01 Pre-Flight / Dry Run.
curl 'http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0'
Expected: 'test-cf55199519d859042f695e620cca8dbb-0.MgLl7GIS59DPtPMejuUcXfddzNt8YxfLVo5op670u8M'
Got: '<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
'
See https://git.coolaj86.com/coolaj86/acme-v2.js/issues/4
at /home/joanmi/SERVICES/redbird_domains/node_modules/acme-v2/index.js:49:10
at process._tickCallback (internal/process/next_tick.js:68:7)
As far as I understand, this is telling me that Lets Encrypt is trying to access to the url http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0 using the following command:
curl 'http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0'
...and that it is getting what seems to be a 404 HTML error page, and I have no clue where that could come from.
In fact, executing that curl command, or just pasting that url into my browser (you can try it: I left the server running), I get the given Expected string. So from my point of view my configuration seems correct, but for some reason Let's Encrypt's servers are reaching another server (either because of wrong routing or DNS).
But on the other hand, I suppose it's more probable that I've done something wrong in my configuration.
Here is my whole script (ports 80 and 443 are redirected to 1080 and 1443, respectively, through iptables, because the script is run by a non-privileged user):
const Redbird = require("redbird");

const proxy = Redbird({
    port: 1080,
    xfwd: false, // Disable the X-Forwarded-For header
    letsencrypt: {
        path: __dirname + '/certs',
        port: 9999
        // LetsEncrypt minimal web server port for handling challenges.
        // Routed 80->9999, no need to open 9999 in firewall. Default 3000
        // if not defined.
    },
    ssl: {
        http2: true,
        port: 1443, // SSL port used to serve registered https routes with LetsEncrypt certificate.
    }
});

proxy.register('exposito.bitifet.net:9999', 'http://localhost:8001', {
    ssl: {
        letsencrypt: {
            email: 'xxxxxx@gmail.com', // Domain owner/admin email
            production: false,
            // WARNING: Only use this flag when the proxy is verified to
            // work correctly to avoid being banned!
        }
    }
});
proxy.register("exposito.bitifet.net", "http://localhost:8001");
Any clue will be welcome.
Thanks.
SOLVED!!
Many issues were involved at the same time (not helped by my lack of experience with both redbird and letsencrypt).
1. The magic 404/Not found page: I guess it came from a lighttpd server that seems to have been preinstalled on my VPS.
Port 80 was redirected via iptables, but I suppose that in one configuration tweak or another I could have sent incoming requests to localhost's port 80 (which is not redirected).
2. My redbird misunderstanding: looking at the examples in its README file, I thought redbird was a kind of "multi-reverse-proxy", in the sense that you could redirect http and https requests with a single redbird instance.
But I finally realized that the (maybe not so well named) port option, which is in fact an http port, serves only to configure a built-in unconditional http->https redirector (which I had already read about, but thought was optional).
3. The actual underlying issue: if your DNS has DNSSEC activated, you should define a CAA record in it pointing to letsencrypt.org.
For the moment I disabled DNSSEC instead, because my provider's control panel doesn't allow me to create such a record.
I discovered this while trying to get the certificates through certbot (sudo apt-get install certbot), which, I must say, had I known about it before, would have saved me from bothering with redbird's letsencrypt integration.
It is much more verbose (while redbird is more like a black box when errors arise) and pointed out that I needed the CAA record.
Here are the notes I took about it (in case anyone is interested):
Free SSL Certificates with Certbot
Install certbot:
sudo apt-get install certbot
Create:
sudo certbot certonly --manual --preferred-challenges http -d <domain>
Renew:
sudo certbot renew
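To automate renewal, a crontab entry along these lines is common (a sketch; Debian/Ubuntu certbot packages usually also install a systemd timer or cron job that does this for you):
0 3 * * * certbot renew --quiet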
Caveats:
DNSSEC
If your DNS server has DNSSEC enabled, you will need to add a CAA record pointing to letsencrypt.org (see the sample record below).
...and your DNS provider may not allow you to create it (at least I couldn't with CDMON; I haven't complained yet).
4. production = false is for other kinds of testing: I read that if you set it to true while testing, you may be banned by letsencrypt if you perform too many requests.
Setting it to false you can test redirections, but you will still see errors regarding letsencrypt even though you can navigate without a valid certificate (I think some kind of self-signed certificate is provided to allow testing). So don't expect a valid one.
5. ssl port is used for redirection: not a (big) issue, but if you specify an ssl port other than 443, the built-in redirector will unconditionally redirect you to that port.
Running redbird as root and using the standard ports (80 and 443) works fine. But if you, like me, want to use alternative ports in order to run redbird as a non-privileged user, you will get redirected to that alternative port instead of 443 (even having it redirected through iptables).
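For reference, the CAA record mentioned in the DNSSEC caveat would look something like this in BIND-style zone syntax (an illustrative sketch; the TTL is arbitrary), and you can check it with dig:
exposito.bitifet.net.  3600  IN  CAA  0 issue "letsencrypt.org"
dig CAA exposito.bitifet.net +short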
Here is my (almost*) final redbird script:
const Redbird = require("redbird");

const proxy = Redbird({
    port: 1080,
    xfwd: false, // Disable the X-Forwarded-For header
    ssl: {
        port: 1443,
    },
    letsencrypt: {
        path: __dirname + '/certs',
        port: 9999,
        // LetsEncrypt minimal web server port for handling challenges.
        // Routed 80->9999, no need to open 9999 in firewall. Default 3000
        // if not defined.
    },
});

proxy.register('exposito.bitifet.net', 'http://exposito.bitifet.net:8001', {
    ssl: {
        http2: true,
        letsencrypt: {
            email: 'xxxxxx@gmail.com', // Domain owner/admin email
            production: true,
            // WARNING: Only use this flag when the proxy is verified to
            // work correctly to avoid being banned!
        },
    }
});
(*) I still need to fix the explicit-port redirection issue (5), because I don't want to run redbird as root. But I know it is possible to allow users to listen on given ports, and I would probably rather try to patch redbird to allow specifying the listen and redirection ports separately.
EDIT: It is already implemented (and documented) via the (optional) redirectPort option in the ssl section. I just added redirectPort: 443 and job done!!
EDIT 2: For the sake of completeness, there was still another issue I struggled with.
To get things working I had configured the redirection to the http port instead of the https one.
That is: incoming https requests got redirected to my application's http port.
It seems weird, but it works. At least if you don't need any exclusively-https feature such as push notifications (which I plan to use in the future).
But it implies opening an http server, at least on localhost. That isn't a major issue now (this is only a playground server), but I plan to use redbird at work to proxy multiple domains to different servers, and that would have forced us to open http at least in our DMZ vlan (which is an additional risk that is better to avoid...).
When I tried redirecting to https I got the DEPTH_ZERO_SELF_SIGNED_CERT error.
Ok: this is telling me that redbird (or node) does not trust my original (self-signed) certificate. I know there is an option to tell node to accept those certificates, but maybe that is not the way to go...
So I configured my application to use the same certificate that redbird is obtaining through letsencrypt.
But then I got this other error:
UNABLE_TO_VERIFY_LEAF_SIGNATURE
Researching a bit, I found this StackOverflow answer that explains how to get all root and intermediate certificates trusted by Mozilla and make node trust them.
So, at the end, what I did was:
Installed node_extra_ca_certs_mozilla_bundle package:
npm install --save node_extra_ca_certs_mozilla_bundle
Prepended NODE_EXTRA_CA_CERTS=node_modules/node_extra_ca_certs_mozilla_bundle/ca_bundle/ca_intermediate_root_bundle.pem to the start command in the package.json's scripts section.
Updated my redbird script to point again to the https (protocol and) port:
proxy.register('exposito.bitifet.net', 'https://localhost:4301', {...]);
Here is my final redbird configuration:
const Redbird = require("redbird");

const proxy = Redbird({
    port: 1080,
    xfwd: false, // Disable the X-Forwarded-For header
    ssl: {
        port: 1443,
        redirectPort: 443
        // key: "/etc/bitifet/exposito/ssl/private.key",
        // cert: "/etc/bitifet/exposito/ssl/public.cert",
    },
    letsencrypt: {
        path: __dirname + '/certs',
        port: 9999,
        // LetsEncrypt minimal web server port for handling challenges.
        // Routed 80->9999, no need to open 9999 in firewall. Default 3000
        // if not defined.
    },
});

proxy.register('exposito.bitifet.net', 'https://localhost:4301', {
    ssl: {
        http2: true,
        letsencrypt: {
            email: 'xxxxxx@gmail.com', // Domain owner/admin email
            production: true,
            // WARNING: Only use this flag when the proxy is verified to
            // work correctly to avoid being banned!
        },
    }
});
And here my package.json file contents:
{
    "name": "redbird_domains",
    "version": "0.0.1",
    "description": "Local Domains Handling",
    "main": "index.js",
    "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "NODE_EXTRA_CA_CERTS=node_modules/node_extra_ca_certs_mozilla_bundle/ca_bundle/ca_intermediate_root_bundle.pem node ./index.js"
    },
    "author": "Joanmi",
    "license": "GPL-3.0",
    "dependencies": {
        "node_extra_ca_certs_mozilla_bundle": "^1.0.4",
        "redbird": "^0.10.0"
    }
}

Docker xDebug not connecting to VSCode

I'm trying to set up php debugging with VSCode and xDebug, but xDebug can't connect to the host. Thus, VSCode doesn't hit any breakpoints either.
When I start the debug listener in VSCode, run a Bash shell in the php-fpm container and try to connect to the host, it fails:
$ docker-compose exec php-fpm bash
root@178ba0224b37:/application# nc -zv 172.20.0.1 9001
172.20.0.1: inverse host lookup failed: Unknown host
(UNKNOWN) [172.20.0.1] 9001 (?) : Connection refused
I'm confused about the IP addresses, because in the Docker settings the Virtual Switch subnet is set to 10.0.75.0, and the network adapter vEthernet (DockerNAT) uses the IP 10.0.75.1. How do the containers get the IP range 172.20.0.x?
From my desktop I am unable to request the webpage using 172.20.0.1.
It works fine with 10.0.75.1, which shows the phpinfo() as expected, but the breakpoint is not triggered.
phpinfo() shows xDebug is configured and the settings match what I have in the php-ini-overrides.ini config.
I've disabled the firewall, tried different IP's, and checked the port and various xDebug, php, docker-compose, and VSCode debug settings.
I've been searching far and wide, but I guess I'm still missing something. My guess is that it has to do with the network connection, but I don't know what else I can change to fix this issue.
Setup
Host is Windows 10 with docker-compose and VSCode.
I got the docker debug-test directory from https://phpdocker.io/generator
Basically it uses two docker containers: nginx:alpine and phpdocker/php-fpm
My VSCode workspace looks like this:
(The readme files come from the phpdocker.io generator and contain some basic Docker info)
index.php contents:
<?php
phpinfo(); // <-- VSCode breakpoint here
echo 'hello there';
?>
The IP addresses for the containers:
/debug-test-php-fpm - 172.20.0.3
/debug-test-webserver - 172.20.0.2
$_SERVER['REMOTE_ADDR']: 172.20.0.1 <- the host?
Configs and logs
launch.json contents:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "pathMappings": {
                "/application/public": "${workspaceRoot}/public"
            },
            "log": true,
            "port": 9001,
            "xdebugSettings": {
                "max_data": 65535,
                "show_hidden": 1,
                "max_children": 100,
                "max_depth": 5
            }
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9001
        }
    ]
}
docker-compose.yml contents:
###############################################################################
# Generated on phpdocker.io #
###############################################################################
version: "3.1"
services:
webserver:
image: nginx:alpine
container_name: debug-test-webserver
working_dir: /application
volumes:
- .:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8000:80"
php-fpm:
build: phpdocker/php-fpm
container_name: debug-test-php-fpm
working_dir: /application
volumes:
- .:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
php-ini-overrides.ini contents:
upload_max_filesize = 100M
post_max_size = 108M
; added for debugging with Docker and VSCode
xdebug.remote_enable=1
xdebug.remote_autostart=1
; xdebug.remote_host=172.20.0.1  ; using remote_connect_back instead, which should work for any IP
xdebug.remote_connect_back=1
xdebug.remote_port=9001
xdebug.profiler_enable=0
xdebug.var_display_max_depth = 5
xdebug.var_display_max_children = 256
xdebug.var_display_max_data = 1024
xdebug.remote_log = /application/xdebug.log
xdebug.idekey = VSCODE
xdebug.log contents after one visit to the page:
Log opened at 2019-01-30 12:37:39
I: Checking remote connect back address.
I: Checking header 'HTTP_X_FORWARDED_FOR'.
I: Checking header 'REMOTE_ADDR'.
I: Remote address found, connecting to 172.20.0.1:9001.
W: Creating socket for '172.20.0.1:9001', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2019-01-30 12:37:39
Log opened at 2019-01-30 12:37:39
I: Checking remote connect back address.
I: Checking header 'HTTP_X_FORWARDED_FOR'.
I: Checking header 'REMOTE_ADDR'.
I: Remote address found, connecting to 172.20.0.1:9001.
W: Creating socket for '172.20.0.1:9001', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2019-01-30 12:37:39
This is not a paste error; it actually logs the request twice for some reason.
Debug console in VSCode after starting the debug listener:
<- launchResponse
Response {
seq: 0,
type: 'response',
request_seq: 2,
command: 'launch',
success: true }
Any thoughts? I'm lost..
Perhaps it has to do with the DockerNAT setup?
Sorry for the long post. I'm still new to Docker, I hope this has all the info needed.
Edit: solved
See my answer below.
After some coding I stumbled upon the solution.
The IP address in the php debug settings was incorrect. Since my system has VPN connections, multiple ethernet adapters, multiple virtual switches, and multiple virtual machines, it's a bit tricky to find out what's used where.
I discovered the IP by accident when I ran netstat on the php container during a request:
$ docker-compose ps --services
php
app
$ docker-compose exec php sh
/var/www/html # netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 08674b3fd785:58060      192.168.137.12:http     TIME_WAIT
tcp        0      0 08674b3fd785:58062      192.168.137.12:http     TIME_WAIT
[...]
udp        0      0 08874b3cd785:35298      192.168.65.1:domain     ESTABLISHED
I tried the 192.168.65.1 IP first, but that didn't work.
192.168.137.12 is the IP of a Hyper-V virtual machine that the php script connects to. Apparently the php container can reach that, so maybe it could also reach the Windows adapter that's bound to that virtual switch, in other words: 192.168.137.1.
Adding this to the xDebug settings solved the problem:
xdebug.remote_host=192.168.137.1
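Putting it together, the relevant php-ini-overrides.ini lines would look something like this. Note that xdebug.remote_connect_back should be off, because in Xdebug 2 it takes precedence over xdebug.remote_host whenever a remote address is found (a sketch based on the adapter IP above):
; Xdebug 2: connect to the IDE on the Hyper-V host adapter
xdebug.remote_enable=1
xdebug.remote_autostart=1
xdebug.remote_connect_back=0
xdebug.remote_host=192.168.137.1
xdebug.remote_port=9001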

Using dnsmasq with Consul and the confusion around recursors

I'm using dnsmasq but I'm a little confused as to what gets set where. Everything works as expected, but I wasn't sure if any of my config parameters are redundant or would cause issues down the road.
1 - Do I need to set the recursors option in Consul's config?
2 - Do I still need both nameservers entry in /etc/resolv.conf?
3 - Do I need dnsmasq on all Consul clients or just the servers?
# /etc/dnsmasq.d/dnsmasq.conf
server=/consul/127.0.0.1#8600
My Consul config looks like this:
{
    "server": false,
    "client_addr": "0.0.0.0",
    "bind_addr": "0.0.0.0",
    "datacenter": "us-east-1",
    "advertise_addr": "172.16.11.144",
    "data_dir": "/var/consul",
    "encrypt": "XXXXXXXXXXXXX",
    "retry_join_ec2": {
        "tag_key": "SOMEKEY",
        "tag_value": "SOMEVALUE"
    },
    "log_level": "INFO",
    "recursors": ["172.31.33.2"],
    "enable_syslog": true
}
My /etc/resolv.conf looks like this:
nameserver 127.0.0.1
nameserver 172.31.33.2
1) Read the documentation: https://www.consul.io/docs/agent/options.html#recursors. Having a recursor set up is great if you have external services registered in Consul; otherwise it's probably moot. You likely don't want ALL of your DNS traffic to hit Consul directly, just the Consul-specific DNS traffic.
2 & 3:
It's up to you. Some people run dnsmasq on every machine; some people centralize dnsmasq on their internal DNS servers. Both are valid configurations. If you run it on every single machine, then you probably just need one nameserver entry, pointed at localhost (see the sketch below). If you run it centralized (i.e. just on your internal DNS servers), then you point every machine at your internal DNS servers.
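For the run-it-on-every-machine layout, a minimal sketch (reusing the 172.31.33.2 upstream resolver from the question; the filename is hypothetical):
# /etc/dnsmasq.d/10-consul.conf
server=/consul/127.0.0.1#8600    # .consul queries go to the local Consul agent
server=172.31.33.2               # everything else goes to the normal upstream

# /etc/resolv.conf
nameserver 127.0.0.1
With that in place, dig web.service.consul and ordinary hostnames both resolve through 127.0.0.1.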

Crossbar SSL/TLS configuration with intermediate and cross-signed certificates

Using the latest version of Crossbar (0.13, installed from apt-get on Ubuntu 14.04) I am having trouble making connections using SSL and intermediate certificates.
If I set up the server without a ca_certificates property in the tls key, then the server runs fine and connections can be made using Google Chrome via the wss protocol. However, trying to make a connection using Thruway fails with the following error:
Could not connect: Unable to complete SSL/TLS handshake: stream_socket_enable_crypto(): SSL operation failed with code 1. OpenSSL Error messages: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
Having spoken with the Thruway team, this seems to be a certificate issue: on our live site we use an intermediate and cross-signed certificate from Gandi, which is needed for some browsers and therefore by some OpenSSL implementations.
It seems that whilst browsers are happy to make a TLS connection with just a key and cert, Thruway requires a chain. However, the configuration below, using the two certificates provided by Gandi, does not work for either Chrome or Thruway. Chrome shows the error:
failed: WebSocket opening handshake was canceled
when using the .crossbar/config.json file below. So, is this a problem with my config, with my certificates, or with some other part of the OpenSSL stack?
(The file below has been altered to remove any potentially sensitive information, so it may look like it wouldn't work for other reasons. If the connection works, the underlying auth and other components work fine, so please keep answers/comments focused on the TLS implementation. The comments are not valid JSON but are included so readers can see the certificate files in use.)
{
    "version": 2,
    "controller": {},
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "test",
                    "roles": [
                        {
                            "name": "web",
                            "authorizer": "test.utils.permissions",
                            "disclose": {
                                "caller": true,
                                "publisher": true
                            }
                        },
                        {
                            "name": "no",
                            "permissions": []
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "websocket",
                    "endpoint": {
                        "type": "tcp",
                        "port": 9001,
                        "interface": "127.0.0.1"
                    },
                    "auth": {
                        "wampcra": {
                            "type": "static",
                            "users": {
                                "authenticator": {
                                    "secret": "authenticator-REDACTED",
                                    "role": "authenticator"
                                }
                            }
                        }
                    }
                },
                {
                    "type": "web",
                    "endpoint": {
                        "type": "tcp",
                        "port": 8089,
                        "tls": {
                            "key": "../ssl/key.pem",
                            "certificate": "../ssl/cert.pem",
                            "ca_certificates": [
                                "../ssl/gandi.pem", // https://www.gandi.net/static/CAs/GandiProSSLCA2.pem
                                "../ssl/gandi-cross-signed.pem" // https://wiki.gandi.net/en/ssl/intermediate#comodo_cross-signed_certificate
                            ],
                            "dhparam": "../ssl/dhparam.pem"
                        }
                    },
                    "paths": {
                        "/": {
                            "type": "static",
                            "directory": "../web"
                        },
                        "ws": {
                            "type": "websocket",
                            "url": "wss://OUR-DOMAIN.com:8089/ws",
                            "auth": {
                                "wampcra": {
                                    "type": "dynamic",
                                    "authenticator": "test.utils.authenticate"
                                }
                            }
                        }
                    }
                }
            ]
        },
        {
            "type": "guest",
            "executable": "/usr/bin/env",
            "arguments": [
                "php",
                "../test.php",
                "ws://127.0.0.1:9001",
                "test",
                "authenticator",
                "authenticator-REDACTED"
            ]
        }
    ]
}
There are other questions which address issues similar to this:
This one deals with the fact that any TLS error terminates a WSS connection with no useful error.
This one deals specifically with the handshake cancellation, but in their case it was an improperly configured library used in compilation, which isn't relevant here as Crossbar was installed from apt-get.
This is not an issue with Crossbar. This appears to be a problem with the WAMP client - Thruway. Davidwdan is the owner of the Thruway Github repo and he says:
"Thruway's Ratchet transport provider does not directly support SSL. You'll need to put some sort of proxy in front of it."
You can find more information regarding what Davidwdan and others have to say about this right here https://github.com/voryx/Thruway/issues/163.
Now to get to the solution. Mind you, the following is only for Apache users. If you are running on Nginx the idea is pretty much the same.
A couple things to note before we get started.
Follow Crossbar's tutorial for the install! Don't try to do it yourself! There is more to setting up Crossbar than meets the eye. The fine folks over at Crossbar have laid out detailed instructions just for you: https://crossbar.io/docs/Installation/.
For this example, I have Crossbar and Apache running on the same machine. Although this is not a requirement and does not matter!
The first thing you want to do is create a new virtual host. I chose port 4043 for this virtual host, but you can choose whatever you would like. This virtual host is going to be for every WAMP library that does NOT have an issue connecting via wss:// (with an SSL). Here is a full list of WAMP clients: http://wamp-proto.org/implementations/.
Make sure the ProxyPass and ProxyPassReverse directives have the IP address pointing to the machine that the CROSSBAR router runs on. In my case, since Apache and Crossbar are running on the same machine, I just use 127.0.0.1. Also make sure the port used in the ProxyPass and ProxyPassReverse directives is exactly the same as the port you defined in your .crossbar/config.json!
You will also need an SSL certificate set up on this virtual host, which you can see I have added below the Proxy directives.
Listen 4043
<VirtualHost *:4043>
    ServerName example.org
    ProxyRequests off
    SSLProxyEngine on
    ProxyPass /ws/ ws://127.0.0.1:8000/
    ProxyPassReverse /ws/ ws://127.0.0.1:8000/

    ## Custom fragment
    SSLEngine on
    SSLCertificateFile /path/to/server_cert.pem
    SSLCertificateKeyFile /path/to/server_key.pem
    SSLCertificateChainFile /path/to/server_ca.pem
</VirtualHost>
Next, make sure your Crossbar router is NOT set up with SSL! This is super important: Thruway, or any other library that is NOT able to connect via SSL, WON'T be able to use the router if you have it configured to use SSL! Below is a working Crossbar config.json file that you would be able to use.
{
    "version": 2,
    "controller": {},
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "production_realm",
                    "roles": [
                        {
                            "name": "production_role",
                            "permissions": [
                                {
                                    "uri": "",
                                    "match": "prefix",
                                    "allow": {
                                        "call": true,
                                        "register": true,
                                        "publish": true,
                                        "subscribe": true
                                    }
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "websocket",
                    "endpoint": {
                        "type": "tcp",
                        "port": 8000
                    },
                    "options": {
                        "allowed_origins": ["http://*", "https://*"]
                    },
                    "auth": {
                        "ticket": {
                            "type": "static",
                            "principals": {
                                "production_user": {
                                    "ticket": "tSjlwueuireladgte",
                                    "role": "production_role"
                                }
                            }
                        }
                    }
                }
            ]
        }
    ]
}
Notice how the port number defined above matches the port number defined in the virtual host.
./crossbar/config.json:
"endpoint": {
"type": "tcp",
"port": 8000
},
virtual host:
ProxyPass /ws/ ws://127.0.0.1:8000/
ProxyPassReverse /ws/ ws://127.0.0.1:8000/
Also, if you read other tutorials, some people will tell you to make sure you use the ProxyPreserveHost directive in your virtual host file. DON'T LISTEN TO THEM! This will produce lots of unexpected results. When this directive is enabled, this option will pass the Host: line from the incoming request to the proxied host, instead of the hostname specified in the ProxyPass line! Even Apache says to stay away from this directive https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost. If you do have it enabled you will receive an error similar to below:
failing WebSocket opening handshake ('missing port in HTTP Host header
'example.org' and server runs on non-standard port 8000 (wss =
False)')
Last but not least, make sure all of the following Apache libraries are installed and enabled. On recent Apache installations all of the following libraries come installed by default and just need to be enabled:
$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod proxy_balancer
$ sudo a2enmod lbmethod_byrequests
$ sudo a2enmod proxy_wstunnel
Make sure you open up whichever port your virtual host file is listening on and whichever port your crossbar router is listening on. In my case:
$ sudo ufw allow 4043
$ sudo ufw allow 8000
And finally restart Apache so all your changes can take effect.
$ sudo service apache2 restart
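At this point a browser client should connect through the Apache proxy rather than to Crossbar directly, for example with Autobahn|JS (the URL here is a hypothetical one built from the ServerName and port in the virtual host above):
// wss:// goes to Apache on 4043, which forwards ws:// to Crossbar on 8000
var connection = new autobahn.Connection({
    url: "wss://example.org:4043/ws/",
    realm: "production_realm"
});
connection.open();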
Last but not least I want to give a quick explanation of why all of this has to be done:
When you have an SSL certificate setup on your server the browser will throw an error when trying to connect to any WAMP router without using wss://.
Normally the solution to this would be to configure your WAMP router to use the SSL certificate that is already set up on your server.
The only issue with this is that Thruway.php (the only good php client I know that works with WAMP) does not play well with wss://. Even the creators of Thruway.php on GitHub say it doesn’t work.
The solution to this issues is to use a reverse proxy.
First you need to set up your WAMP router and make sure it is not using an SSL certificate.
Next you need to setup a reverse proxy so wss:// requests get converted to ws://. This will allow your browser to connect to the WAMP router without complaining.
Since the WAMP router is not set up to use an SSL, Thruway.php will work fine as well!
And well.... That's all folks! I know I needed to give a detailed answer to this question because it took me 5 days to figure all of this out!
@Tay-Bae's answer was already very useful, but it wasn't working for me: the client was getting a 200 OK response. All I needed to do was forward WSS traffic to my internal WS client which does not support WSS (Thruway).
After looking in the forums, I stumbled upon this answer: https://serverfault.com/a/846936.
They add a rewrite part which seems to be required to re-route the request. I thought ProxyPassReverse should do it, but it doesn't. So here's my working config:
Listen 4043
<VirtualHost *:4043>
    ServerName mydomain.net
    ProxyRequests off
    SSLProxyEngine on
    ProxyPass /ws/ ws://127.0.0.1:8080/
    ProxyPassReverse /ws/ ws://127.0.0.1:8080/

    ## Custom fragment
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/mydomain.net/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.net/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/mydomain.net/chain.pem

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
        RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]
        RewriteRule .* ws://localhost:8080%{REQUEST_URI} [P]
    </IfModule>

    LogLevel debug
    ErrorLog ${APACHE_LOG_DIR}/error_thruway.log
    CustomLog ${APACHE_LOG_DIR}/access_thruway.log combined
</VirtualHost>

Vagrant running DSTK refuses connections at port 8080

I'm very new to the land of virtual machines and vagrant, so please forgive my general ignorance about all of this.
The other day, I downloaded the DSTK vagrant box and followed the instructions on the documentation page to get it set up, so I could run many, many thousands of requests through it on my local machine rather than bombarding the public server (and also, it should be faster; and also, DSTK is down at the moment, so it was the only option).
After many hours of waiting for vagrant to add and init the box on my tired old hard drive, it worked! I used curl to make a few requests, got the expected responses, and patted myself on the back. I closed my terminal and put away my computer and drank beer.
... And then, the next morning, this happened:
$ curl -d "Joe Biden" "localhost:8080/text2people"
curl: (7) Failed to connect to localhost port 8080: Connection refused
I can't work out why. I tried vagrant suspend; vagrant up. Same thing. vagrant halt; vagrant up. Same thing. When I've halted vagrant and run 'vagrant up' again, this appears in the readout, which makes me think it ought to be working.
==> default: Forwarding ports...
default: 80 => 8080 (adapter 1)
default: 22 => 2222 (adapter 1)
I can run vagrant ssh and get in, look around, and I see that all the files are in the right place.
I suppose I could remove the box and re-add it, but it really did take hours to add and init, and now I'm running up against a deadline where I need it to work. I imagine there is a very simple thing of which I am not aware that is causing my problem. Google has thus far been unhelpful, probably because of how unfamiliar I am with vagrant, generally.
I have just encountered the same problem; in my case the solution was to ensure that the following line is not commented out in the Vagrantfile:
config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
So it's not a problem with vagrant or virtualbox at all.
It turns out I just didn't stop to think about whether curl could be the culprit, and when I got around to this again, fixing it was as simple as curl "127.0.0.1:8080/...". Apparently curl (or at least the native Mac OS X implementation) doesn't like localhost; presumably localhost resolves to the IPv6 address ::1 first, while the Vagrant port forward only listens on IPv4 127.0.0.1.
I did try to Google about it for a few minutes, though, and I came across my own un-answered question, which was sort of a bummer. So, in case anybody else ever has this problem, here's your fix.
curl "localhost:8080/text2people" -d "Joe Biden"
curl: (7) Failed to connect to localhost port 8080: Connection refused
curl "127.0.0.1:8080/text2people" -d "Joe Biden"
[
{
"title": "",
"gender": "m",
"start_index": 0,
"first_name": "Joe",
"end_index": 9,
"surnames": "Biden",
"ethnicity": {
"percentage_american_indian_or_alaska_native": 0.0,
"rank": 114852,
"percentage_two_or_more": 0.0,
"percentage_of_total": 5.0e-05,
"percentage_hispanic": 0.0,
"percentage_white": 96.45,
"percentage_black": 0.0,
"percentage_asian_or_pacific_islander": 0.0
},
"matched_string": "Joe Biden"
}
]
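If the IPv6 explanation above is right, forcing curl onto IPv4 should make the localhost form work too (a sketch; -4 is curl's --ipv4 switch):
curl -4 "localhost:8080/text2people" -d "Joe Biden"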
