Secure websockets with Cro - websocket

Briefly: I created a service on an internet-facing server using Cro and a websocket. It was very simple to build from the Cro examples, and there is no problem sending and receiving data from an HTML page when the page is served as localhost. But when the page is served over https, the websocket connection cannot be established.
How is the wss protocol used with Cro?
Update: After installing Cro and running cro stub with the secure option, the generated service.p6 has some more code not explicit in the documentation.
More detail:
I have a Docker container running on the internet server. Cro is set to listen on port 35145, so the docker command is docker run --rm -t -p 35145:35145 myApp
The service file contains
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Cro::HTTP::Router;
use Cro::HTTP::Router::WebSocket;

my $host = %*ENV<RAKU_WEB_REPL_HOST> // '0.0.0.0';
my $port = %*ENV<RAKU_WEB_REPL_PORT> // 35145;

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    :$host,
    :$port,
    application => routes(),
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
$http.start;

react {
    whenever signal(SIGINT) {
        say "Shutting down...";
        $http.stop;
        done;
    }
}
sub routes() {
    route {
        get -> 'raku' {
            web-socket :json, -> $incoming {
                supply whenever $incoming -> $message {
                    my $json = await $message.body;
                    if $json<code> {
                        my ($stdout, $stderr);  # declare both variables
                        # process code
                        emit({ :$stdout, :$stderr })
                    }
                }
            }
        }
    }
}
In the HTML I have a textarea container with the id raku-code. The JS script has the following (I set websocketHost and websocketPort elsewhere in the script) in a handler that fires after the DOM is ready:
const connect = function() {
    // Return a promise, which will wait for the socket to open
    return new Promise((resolve, reject) => {
        // This calculates the link to the websocket.
        const socketProtocol = (window.location.protocol === 'https:' ? 'wss:' : 'ws:');
        const socketUrl = `${socketProtocol}//${websocketHost}:${websocketPort}/raku`;
        socket = new WebSocket(socketUrl);  // `socket` is declared in an outer scope
        // This will fire once the socket opens
        socket.onopen = (e) => {
            // Send a little test data, which we can use on the server if we want
            socket.send(JSON.stringify({ "loaded": true }));
            // Resolve the promise - we are connected
            resolve();
        };
        // This will fire when the server sends the user a message
        socket.onmessage = (data) => {
            let parsedData = JSON.parse(data.data);
            const resOut = document.getElementById('raku-ws-stdout');
            const resErr = document.getElementById('raku-ws-stderr');
            resOut.textContent = parsedData.stdout;
            resErr.textContent = parsedData.stderr;
        };
    });  // closing braces added; the original snippet ended inside the Promise
};
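For reference, a minimal sketch of the markup the script assumes; the ids are the ones used in the question and the script, while the element types are a guess:

<textarea id="raku-code"></textarea>
<pre id="raku-ws-stdout"></pre>
<pre id="raku-ws-stderr"></pre>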
When an HTML file with this JS script is served locally, I can send data to the Cro app running on the internet server; the Cro app (running in a docker image) processes the data and returns it, and the result is placed in the right HTML container. Using Firefox and the developer tools, I can see that the ws connection is created.
But when I serve the same file via Apache, which forces access via https, Firefox issues an error that the 'wss' connection cannot be created. In addition, if I force a 'ws' connection in the JS script, Firefox prevents the creation of a non-secure connection.
a) How do I change the Cro coding to allow for wss? From the Cro documentation it seems I need to add a Cro::TLS listener, but it isn't clear where to instantiate the listener.
b) If this is to be in a docker file, would I need to include the secret encryption keys in the image, which is not something I would like to do?
c) Is there a way to put the Cro app behind the Apache server so that the websocket is decrypted/encrypted by Apache?

How do I change the Cro coding to allow for wss? From the Cro documentation it seems I need to add a Cro::TLS listener, but it isn't clear where to instantiate the listener.
Just pass the needed arguments to Cro::HTTP::Server; it will set up the listener for you.
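A minimal sketch, assuming the key and certificate live at placeholder paths (Cro passes the contents of the ssl hash on to its TLS listener):

my %ssl = private-key-file => '/tls/server.key',    # placeholder path
          certificate-file => '/tls/server.crt';    # placeholder path

my Cro::Service $http = Cro::HTTP::Server.new(
    :$host,
    :$port,
    :%ssl,    # presence of the ssl arguments makes the server serve https/wss
    application => routes()
);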
If this is to be in a docker file, would I need to include the secret encryption keys in the image, which is not something I would like to do?
No. You can keep them in a volume, or bind-mount them from the host machine.
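For example, with a bind mount (paths are illustrative), the keys stay on the host and never enter the image:

docker run --rm -t -p 35145:35145 \
    -v /etc/myapp/tls:/tls:ro \
    myApp

The service then reads /tls/server.key and /tls/server.crt at startup.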
Is there a way to put the Cro app behind the Apache server so that the websocket is decrypted/encrypted by Apache?
Yes, same as with any other app. Use mod_proxy, mod_proxy_wstunnel and a ProxyPass directive. Other frontends such as nginx, haproxy, or envoy will also do the job.
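A sketch of the Apache side, assuming the Cro app listens on localhost:35145 and the websocket route is /raku as in the question:

# Inside the https virtual host; requires mod_proxy and mod_proxy_wstunnel
ProxyPass "/raku" "ws://localhost:35145/raku"

The browser then connects to wss://your.host/raku and Apache handles the TLS.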

Though this is not a pure Cro solution, you can run your Cro app on a plain (non-SSL) http/websocket port on localhost, and then have an Nginx server (configured to serve https/ssl traffic) handle the incoming public https/ssl requests and pass them on as plain http traffic to your app, using the Nginx reverse-proxy mechanism (this is also often referred to as SSL termination). That way you remove the necessity of handling https/ssl on the Cro side.
The only hurdle here might be whether the websocket protocol is handled well by the Nginx proxy. I've never tried that, but you should probably be fine according to the Nginx docs - https://www.nginx.com/blog/websocket-nginx/
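Following that article, a sketch of the relevant Nginx configuration (the location path and upstream port are taken from the question; the Upgrade/Connection headers are what make websocket proxying work):

location /raku {
    proxy_pass http://localhost:35145;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}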

Related

Blazor & Custom Certificates

I'm investigating the idea of using Blazor WASM to build a retail application that would run on an office Intranet. The application would be installed on a given machine, to be accessed via browser from any of several machines on the LAN.
The biggest stumbling block I'm running into is the question of how to go about securing the channel.
The app itself would run as a Windows Service, listening on port 443 on one of the workstations, e.g. https://reception/. But how do we tell Blazor to use a self-signed TLS cert for that hostname?
If there's a better way to go about this, I'm all ears. I can't use Let's Encrypt certs, because neither the application nor its hostname will be exposed to the public Internet.
There is a glut of information on working with Blazor to build such an app, but most if not all demos run on localhost. That works fine for dev, but not for production (in a self-hosting scenario, anyway). There doesn't seem to be much discussion at all of this aspect of things.
How can we use a custom certificate for browser requests from the client to a Blazor WASM app?
Any ideas?
I was able to get this working using some slightly modified sample code from the official documentation:
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.Server.Kestrel.Https;

builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.ListenAnyIP(443, listenOptions =>
    {
        listenOptions.UseHttps(httpsOptions =>
        {
            // Load the self-signed certificate "test" from the CurrentUser/My store
            var testCert = CertificateLoader.LoadFromStoreCert(
                "test", "My", StoreLocation.CurrentUser,
                allowInvalid: true);
            var certs = new Dictionary<string, X509Certificate2>(
                StringComparer.OrdinalIgnoreCase)
            {
                ["test"] = testCert
            };
            // Choose a certificate by SNI host name, falling back to the test cert
            httpsOptions.ServerCertificateSelector = (connectionContext, name) =>
            {
                if (name is not null && certs.TryGetValue(name, out var cert))
                {
                    return cert;
                }
                return testCert;
            };
        });
    });
});
The easiest way to handle SSL is to use IIS, which will act as a proxy for your Blazor app.
IIS gives you easy access to well-documented SSL settings.
https://learn.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-6.0#standalone-deployment

TypeORM create connection through proxy address

So I wanted to make a small project with a DB, backend and mobile frontend. I have a MariaDB database on a Raspberry Pi, set up to accept connections from anywhere. I created my backend server with TypeORM and hosted it on Heroku. The problem is that my Heroku server has a dynamic IP and I want to keep the whitelist of IPs small, so I added QuotaGuard to my Heroku app. The only way to set up that proxy connection (from the QuotaGuard documentation) is through socksjs (once again from the QuotaGuard documentation), which creates a SocksConnection object. I know that if I use mysql.createConnection() there is a stream option that allows me to pass in that object, but I don't see one in the createConnection function from TypeORM. I have a variable called sockConn and have verified on the QuotaGuard dashboard that the connection is made, but I don't know how to add it as an option to TypeORM's createConnection function.
Here is the index.ts file from my project:
import "reflect-metadata";
import {createConnection, getConnectionManager} from "typeorm";
import express from "express";
import {Request, Response} from "express";
import {Routes} from "./routes";
import { DB_CONNECTION } from "./database";
import { MysqlConnectionOptions } from "typeorm/driver/mysql/MysqlConnectionOptions";
import { config } from 'dotenv';
import { parse } from 'url';
var SocksConnection = require('socksjs');
config().parsed
const setupConnection = () => {
DB_CONNECTION.username = process.env.DB_USER;
DB_CONNECTION.password = process.env.DB_PASS;
DB_CONNECTION.host = process.env.DB_HOST;
DB_CONNECTION.port = parseInt(process.env.DB_PORT);
DB_CONNECTION.debug = true;
// ---- this section from quotaguard documentation ---- //
var proxy = parse(process.env.QUOTAGUARDSTATIC_URL),
auth = proxy.auth,
username = auth.split(':')[0],
pass = auth.split(':')[1];
var sock_options = {
host: proxy.hostname,
port: 1080,
user: username,
pass: pass
};
var sockConn = new SocksConnection({host: DB_CONNECTION.host, port: DB_CONNECTION.port}, sock_options);
// ---- this section above from quotaguard documentation ---- //
}
setupConnection();
createConnection(DB_CONNECTION as MysqlConnectionOptions).then(async connection => {
// create express app
const app = express();
app.use(express.json());
// register express routes from defined application routes
Routes.forEach(route => {
(app as any)[route.method](route.route, (req: Request, res: Response, next: Function) => {
const result = (new (route.controller as any)())[route.action](req, res, next);
if (result instanceof Promise) {
result.then(result => result !== null && result !== undefined ? res.send(result) : undefined);
} else if (result !== null && result !== undefined) {
res.json(result);
}
});
});
// setup express app here
// start express server
app.listen(3000);
console.log("Express server has started on port 3000.
Open http://localhost:3000 to see results");
}).catch(error => console.log(error));
There might also just be a better package, but as I've never worked with this type of thing, I only went off of what the documentation had on it.
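(For what it's worth: TypeORM's connection options include an extra object that is passed through to the underlying mysql driver, and that driver does accept a stream option, so in principle the socket could be forwarded. An untested sketch, assuming setupConnection is changed to return the sockConn it creates:)

// Untested sketch: `extra` is forwarded verbatim to mysql.createConnection(),
// which accepts a `stream` option for a pre-established connection.
const sockConn = setupConnection();    // assumes setupConnection() returns sockConn
createConnection({
    ...(DB_CONNECTION as MysqlConnectionOptions),
    extra: { stream: sockConn }
});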
So I reached out to QuotaGuard and they gave me an answer that works. The answer is below:
However we usually recommend that you use our QGTunnel software for database connections.
The QGTunnel software is a wrapper program that presents a socket to your application from the localhost. Then you connect to that socket as if it were your database. Below are some setup instructions for the QGTunnel.
1. Download QGTunnel into the root of your project.
2. Log in to our dashboard and set up the tunnel.
Using the Heroku CLI you can log into our dashboard with the following command:
heroku addons:open quotaguardstatic
Or if you prefer, you can login from the Heroku dashboard by clicking on QuotaGuard Static on the resources tab of your application.
Once you are logged into our dashboard, in the top right menu, go to Setup. On the left, click Tunnel, then Create Tunnel.
Remote Destination: tcp://hostname.for.your.server.com:3306
Local Port: 3306
Transparent: true
Encrypted: false
This setup assumes that the remote database server is located at hostname.for.your.server.com and is listening on port 3306. This is usually the default port.
The Local Port is the port number that QGTunnel will listen on. In this example we set it to 3306, but if you have another process using 3306, you may have to change it (ie: 3307).
Transparent mode allows QGTunnel to override the DNS for hostname.for.your.server.com to 127.0.0.1, which redirects traffic to the QGTunnel software. This means you can connect to either hostname.for.your.server.com or 127.0.0.1 to connect through the tunnel.
Encrypted mode can be used to encrypt data end-to-end, but if your protocol is already encrypted then you don't need to spend time setting it up.
3. Change your code to connect through the tunnel.
With transparent mode and matching Local and Remote ports you should not need to change your code. You can also connect to 127.0.0.1:3306.
Without transparent mode, you will want to connect to 127.0.0.1:3306.
4. Change your startup code.
Change the code that starts up your application. In heroku this is done with a Procfile. Basically you just need to prepend your startup code with "bin/qgtunnel".
So for a Procfile that was previously:
web: your-application your arguments
you would now want:
web: bin/qgtunnel your-application your arguments
If you do not have a Procfile, then heroku is using a default setup in place of the Procfile based on the framework or language you are using. You can usually find this information on the Overview tab of the application in Heroku's dashboard. It is usually under the heading "Dyno information".
5. Commit and push your code.
Be sure that the file bin/qgtunnel is added to your repository.
If you are using transparent mode, be sure that vendor/nss_wrapper/libnss_wrapper.so is also added to your repository.
6. If you have problems, enable the environment variable QGTUNNEL_DEBUG=true and then restart your application while watching the logs. Send me any information in the logs. Please redact any sensitive information, including your QuotaGuard connection URL.
VERY IMPORTANT
7. After you get everything working, I suggest you download your QGTunnel configuration from our dashboard as a .qgtunnel file and put it in the root of your project. This keeps your project from relying on our website during startup.
This did work for me, and I was able to make a connection to my database.

MITM proxy using FiddlerCore

We want to implement a MITM proxy.
It should receive https requests from the client, decrypt them, and return pre-recorded responses.
This means the proxy is not connected to the remote server directly.
I know that FiddlerCore supports MITM, but how can I use it in this scenario?
Thanks
https://groups.google.com/forum/#!topic/httpfiddler/E0JZrRRGhVg
This is a pretty straightforward task. If you look at the demo project included in FiddlerCore, you can get most of the way there.
Fiddler.FiddlerApplication.BeforeRequest += delegate(Fiddler.Session oS)
{
    // Reply with a fake tunnel to CONNECT requests so the HTTPS traffic can be decrypted
    if (oS.HTTPMethodIs("CONNECT")) { oS.oFlags["X-ReplyWithTunnel"] = "Fake for HTTPS Tunnel"; return; }
    if (oS.uriContains("replaceme.txt"))
    {
        // Build the response locally and never contact the remote server
        oS.utilCreateResponseAndBypassServer();
        oS.responseBodyBytes = SessionIWantToReturn.responseBodyBytes;
        oS.oResponse.headers = (HTTPResponseHeaders) SessionIWantToReturn.oResponse.headers.Clone();
    }
};

When self-hosting what exactly causes AddressAccessDeniedException : HTTP could not register URL

I am writing a BDD test for a component that will start up phantomjs, hit a specific route on my site, and do processing on the result. Because the component is fundamentally about automating a phantom instance, there is no way to easily stub out the http requests.
So I want to stand up a self-hosted endpoint that will stub out the data I'm after. Because this is a unit test, I think it's really important for it to run in isolation, so I do something like this:
async Task can_render_html_for_slide_async() {
    var config = new HttpSelfHostConfiguration("http://localhost:54331");
    config.Routes.MapHttpRoute("Controller", "{controller}", new {});
    using (var server = new HttpSelfHostServer(config)) {
        await server.OpenAsync();
        var client = new HttpClient();
        var resp = await client.GetStringAsync(config.BaseAddress + "/Stub");
        Console.WriteLine(resp);
    }
}

public class StubController : ApiController
{
    public string Get() {
        return "Booyah";
    }
}
Which gets me
AddressAccessDeniedException : HTTP could not register URL http://+:54331/
I understand that netsh or Admin mode is required for this, but I don't understand why. Node.js, for example, runs perfectly fine on Windows but has no such requirement.
Also, using OWIN directly needs no netsh-ing. So... what's going on?
I wrote an article about it on CodeProject; it was done to make it possible for multiple applications to share the same port.
You can have both IIS and Apache (or OWIN in your case) listening on port 80. The routing to the right application is done based on the path of the URL.
IIS and Apache would both use this driver (http.sys). But you need permission to "reserve" a path.
Administrators are always authorized. For other users, use netsh or my GUI tool HttpSysManager to set the ACL.
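For example, to reserve the URL from the question for a specific account (the account name is a placeholder):

netsh http add urlacl url=http://+:54331/ user=MYDOMAIN\myuser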
Any method that requires giving permission via netsh uses the Windows kernel driver (http.sys) to provide http access.
If a library opens a socket itself and handles the http communication that way, no netsh is needed.
So to answer your question: some methods use the kernel driver, and some handle the protocol themselves.

Faye does not publish when using a browser on another computer in the network

I have a Faye implementation in my Rails application. The publish method works correctly when both browsers are on the same computer. When I access the application from a browser on another computer, it only works from client to server and does not publish to the other clients. Also, the publish event does not push to the client when there are changes in the browser on the server.
Controller publish code:
def publish(channel, data)
  message = {
    :channel => channel,
    :data => data,
    :ext => { :faye_token => FAYE_OUTGOING_AUTH_TOKEN }
  }
  uri = URI.parse('http://localhost:9292/faye')
  Net::HTTP.post_form(uri, :message => message.to_json)
end
Command to run faye:
rackup faye.ru -s thin -E production -d
Example:
A: Server,
B: Client1,
C: Client2
A, B and C are different computers on the same network, and all are subscribed to the same channel.
If I input data on B, A will see the data but C will not see the data until I refresh the page (which is getting the data from the db).
If I input data on A, it does not get published to the other clients.
If I input data on C, to a channel that only C and B are subscribed to, only C gets to see the data, and it is not published to B.
If A, B, and C were different browsers on the same computer, all the above cases would work.
I have run this in development mode, and have tried WEBrick, Unicorn, and Thin.
Any help would be appreciated.
Thanks.
To resolve the issue I replaced all instances of "localhost" with the address of the server on which Faye is running. This includes subscribing clients to channels as well.
Hope it helps,
Cheers!
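For example, in the controller's publish method above, the URI becomes (the LAN address is a placeholder):

uri = URI.parse('http://192.168.1.10:9292/faye')  # Faye server's network address instead of localhost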
Hey Babak: I am also facing the same kind of problem. I am using Node.js + Express + Faye; so should I add ip_addr:port to each subscribe client?
var client = new Faye.Client('/faye', {
    endpoints: {
        websocket: 'ws://chat-yummyfoods.rhcloud.com'  // was the malformed 'http:ws.chat-yummyfoods.rhcloud.com'
    },
    timeout: 20
});
client.disable('websocket');
console.log("client:" + client);

var subscription = client.subscribe('/channel', function(message) {
    console.log("Message:" + message.text);
    $('#messages').append('<p>' + message.text + '</p>');
});

subscription.then(function() {
    console.log('subscribe is active');
    alert('subscribe is active');
});
