SailsJS server and HTTPS

Sails.js integrates a Node.js HTTP server and a socket.io server. How can I change that HTTP server to an HTTPS server? Similarly, can I add SSL to encrypt socket messages as well? If yes, what should I do? Is there any module I can add to do either or both of them?

To add HTTPS to Sails.js you have to create a self-signed SSL certificate (or buy one) and configure config/local.js:
module.exports = {
  http: {
    serverOptions: {
      key: require('fs').readFileSync(__dirname + '/../ssl/server.key'),
      cert: require('fs').readFileSync(__dirname + '/../ssl/server.crt')
    }
  },
  ssl: {
    key: require('fs').readFileSync(__dirname + '/../ssl/server.key'),
    cert: require('fs').readFileSync(__dirname + '/../ssl/server.crt')
  },
  port: process.env.PORT || 443
};
I created an ssl folder at the Sails root folder with all the certificate files.
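As a quick smoke test that the certificate is actually being served, a small Node script like the sketch below can hit the app over HTTPS. The port matches the config above; verification is disabled only because the certificate is self-signed, so treat this strictly as a local check.
// check-https.js - minimal local smoke test for the config above (sketch, not part of Sails)
var https = require('https');

https.get({
  host: 'localhost',
  port: process.env.PORT || 443,
  path: '/',
  // Only for a local test against a self-signed certificate;
  // never disable verification in production code.
  rejectUnauthorized: false
}, function (res) {
  console.log('HTTPS response status:', res.statusCode);
}).on('error', function (err) {
  console.error('HTTPS request failed:', err.message);
});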

Related

Traefik acme timeouts

I'm trying to get Traefik working properly in AKS. Overall it works fine, however I cannot get the ACME certs to work. Below is my traefik.toml configuration, in which I can't find anything odd.
The three domains mentioned are dummies in this use case but actually exist and respond as well.
# traefik.toml
logLevel = "info"
defaultEntryPoints = ["http","https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    compress = true
  [entryPoints.https]
    address = ":443"
    compress = true
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/tls.crt"
        KeyFile = "/ssl/tls.key"
  [entryPoints.traefik]
    address = ":8080"

[ping]
  entryPoint = "http"

[kubernetes]

[traefikLog]
  format = "json"

[acme]
  KeyType = "RSA4096"
  email = "pimjansen#domain.com"
  storage = "/acme/acme.json"
  entryPoint = "https"
  onHostRule = true
  acmeLogging = true
  [acme.httpChallenge]
    entryPoint = "http"
  [[acme.domains]]
    main = "traefik.domain.com"
  [[acme.domains]]
    main = "elasticsearch.domain.com"
  [[acme.domains]]
    main = "kibana.domain.com"

[api]
  entryPoint = "traefik"
  dashboard = true
The actual error I am receiving is this:
{"level":"error","msg":"Unable to obtain ACME certificate for domains \"traefik.hardstyletop40.com\" : unable to generate a certificate for the domains [traefik.domain.com]: acme: Error -\u003e One or more domains had a problem:\n[traefik.domain.com] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching http://traefik.hardstyletop40.com/.well-known/acme-challenge/mYkyJzIM-6Y2UIknhXpCkUUTZWjzsAeMuqx7eDCZloY: Error getting validation data, url: \n","time":"2019-09-11T14:47:13Z"}
With details about the challenge:
"challenges": [
{
"type": "http-01",
"status": "invalid",
"error": {
"type": "urn:ietf:params:acme:error:connection",
"detail": "Fetching http://traefik.domain.com/.well-known/acme-challenge/mYkyJzIM-6Y2UIknhXpCkUUTZWjzsAeMuqx7eDCZloY: Error getting validation data",
"status": 400
},
"url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/293838266/LPH2sA",
"token": "mYkyJzIM-6Y2UIknhXpCkUUTZWjzsAeMuqx7eDCZloY",
"validationRecord": [
{
"url": "http://traefik.domain.com/.well-known/acme-challenge/mYkyJzIM-6Y2UIknhXpCkUUTZWjzsAeMuqx7eDCZloY",
"hostname": "traefik.hardstyletop40.com",
"port": "80",
"addressesResolved": [
"13.79.159.165"
],
"addressUsed": "13.79.159.165"
}
]
},
Thanks in advance
Letsencrypt works by putting a file in the .well-known directory on your specified webserver.
You're saying the domains are dummies, so you might be testing them locally? In any case, if the autogenerated file isn't found on the webserver, it can't be verified that the certificate is requested by the "owning" domain.
How the flow works, heavily simplified:
1. letsencrypt -> generate file name: abc133......
2. letsencrypt -> find webroot of provided domain in webserver config
3. letsencrypt -> copy file to .well-known in webroot of given domain
4. letsencrypt -> send a web request with filename and domain name to letsencrypt.org
5. letsencrypt.org -> try to request the file from the given domain, looked up via DNS
6. letsencrypt.org -> successfully request the file and verify it, output certificate
7. letsencrypt -> read certificate and copy to certificates folder, make a few symlinks
8. letsencrypt -> modify webserver configs if needed
Now if you're working with dummy domains and not on the live server, the above process will fail at step 3, which will cause step 5 to fail, which results in the error you are getting.
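To make steps 3 and 5 concrete: the validation boils down to the ACME server fetching a token over plain HTTP on port 80 of the domain. A hand-rolled responder would look roughly like the sketch below; Traefik serves this path itself on its http entrypoint, and the token and key authorization values here are placeholders.
// Minimal http-01 challenge responder, for illustration only.
// Traefik handles this internally; the values below are placeholders.
var http = require('http');

// token -> key authorization pairs, as issued by the ACME server
var challenges = {
  'exampleToken': 'exampleToken.exampleAccountThumbprint'
};

http.createServer(function (req, res) {
  var prefix = '/.well-known/acme-challenge/';
  if (req.url.indexOf(prefix) === 0) {
    var token = req.url.slice(prefix.length);
    if (challenges[token]) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      return res.end(challenges[token]);
    }
  }
  res.writeHead(404);
  res.end('not found');
}).listen(80); // port 80 must be reachable from the internet for the challenge to pass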
An alternative is to verify via a DNS record, if you can't run the command on the webserver to generate the certificate:
sudo certbot -d your.dummy.com --manual --preferred-challenges dns certonly
This will give you a code that you need to put in a TXT record on your DNS server.
When you have done that, you confirm in the letsencrypt client that you've set the record and continue.
In short, if you cannot run the command on the webserver to generate the certificates, or cannot modify the DNS records, you cannot obtain a certificate via letsencrypt.
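Before confirming in the certbot prompt, it can help to check that the TXT record has propagated. A quick check from Node might look like this sketch; the _acme-challenge prefix is where certbot expects the record, and your.dummy.com is a placeholder.
// Check that the DNS-01 TXT record is visible (sketch; the domain is a placeholder)
var dns = require('dns');

dns.resolveTxt('_acme-challenge.your.dummy.com', function (err, records) {
  if (err) {
    return console.error('TXT lookup failed:', err.code);
  }
  // Each record comes back as an array of string chunks; join them for display
  records.forEach(function (chunks) {
    console.log('TXT:', chunks.join(''));
  });
});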

Let's encrypt with Auth Basic + HTTPS only

I would like to create a frontend for Traefik's dashboard, so here is what I did:
[file]

[frontends]
  [frontends.traefik]
    entrypoints = ["https"]
    backend = "traefik"
    basicAuth = [
      "...:...",
    ]
    [frontends.traefik.routes.route]
      rule = "Host:t.foo.bar"

[backends]
  [backends.traefik]
    [backends.traefik.servers.server]
      url = "http://127.0.0.1:8080"
But the certificate is not valid. I guess it's because I force HTTPS and have basic auth in front.
What should I do?
I guess I would need to create another frontend on the same domain that matches /.well-known and does not have basic auth on it?

Service Fabric https endpoint with kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint, following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server, where I get:
Browser : HTTP ERROR 504
VM event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
+
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes:
NAME                     PROTOCOL   PORT    USED BY
AppPortProbe             TCP        44338   AppPortLBRule
FabricGatewayProbe       TCP        19000   LBRule
FabricHttpGatewayProbe   TCP        19080   LBHttpRule
SFReverseProxyProbe      TCP        19081   LBSFReverseProxyRule

Load balancing rules:
NAME                   LOAD BALANCING RULE                BACKEND POOL                HEALTH PROBE
AppPortLBRule          AppPortLBRule (TCP/44338)          LoadBalancerBEAddressPool   AppPortProbe
LBHttpRule             LBHttpRule (TCP/19080)             LoadBalancerBEAddressPool   FabricHttpGatewayProbe
LBRule                 LBRule (TCP/19000)                 LoadBalancerBEAddressPool   FabricGatewayProbe
LBSFReverseProxyRule   LBSFReverseProxyRule (TCP/19081)   LoadBalancerBEAddressPool   SFReverseProxyProbe
I have a cluster certificate and a reverse proxy certificate, and auth to the API is through Azure AD. In the ARM template I have:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
Not sure what else could be relevant; any ideas/suggestions are really welcome.
Edit: code for GetCertificate():
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
Digging into your code, I've realized that there is nothing wrong with it except one thing. As you use Kestrel, you don't need to set up anything extra in the AppManifest, as those settings are for the Http.Sys implementation. You don't even need an endpoint in the ServiceManifest (although it's recommended), as all those things are about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from the fact that it turns out to be the recommended way, allowing you to accept both IPv4 and IPv6 connections, it also does a 'correct' endpoint registration in SF. See, when you use IPAddress.Any, SF will set up an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously won't work. 0.0.0.0 doesn't correspond to any particular IP; it's just the way to say 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM IP address that can be resolved from within the vnet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.

Setting up ejabberd via websockets

I have an ejabberd server up and running.
I can test it via web clients and it works fine using BOSH connections.
I would like to connect to it via WebSockets now, and I am not sure what I am missing to make it work; I just know it doesn't.
Here is an extract from my ejabberd.yml:
hosts:
  - "localhost"
  - "somedomain.com"
  - "im.somedomain.com"

listen:
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      "/websocket": ejabberd_http_ws
      "/pub/archive": mod_http_fileserver
    web_admin: true
    http_bind: true
    ## register: true
    ## captcha: true
    tls: true
    certfile: "/etc/ejabberd/ejabberd.pem"
Now I tried to open a web socket via javascript as follows :
var ws = new WebSocket("ws://somedomain:5280/websocket/");
I get ERR_CONNECTION_TIMED_OUT in return. I have nothing in ejabberd's logs when I try to open a websocket. I do have logs of the BOSH connections.
I am not sure if I am testing appropriately, nor if my server is set up correctly.
Any suggestion is most welcome.
A connection timeout error is thrown by the server when the client does not send a pong response, so make sure you are sending the pong response. If you are using Strophe.js, check the handler documentation: http://strophe.im/strophejs/doc/1.2.14/files/strophe-js.html#Strophe.Connection.addHandler
// Use a Strophe connection over the WebSocket endpoint
var connection = new Strophe.Connection("ws://somedomain:5280/websocket/");

// Adding a ping handler using the Strophe connection
connection.addHandler(pingHandler, "urn:xmpp:ping", "iq", "get");

// Ping handler callback: reply to each ping with a pong result
function pingHandler(ping) {
    var pingId = ping.getAttribute("id");
    var from = ping.getAttribute("from");
    var to = ping.getAttribute("to");
    var pong = $iq({
        type: "result",
        "to": from,
        id: pingId,
        "from": to
    });
    connection.send(pong);
    return true;
}
Also, make sure you add this configuration to your ejabberd.yml:
websocket_ping_interval: 50
websocket_timeout: 60
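Independently of Strophe, it can also help to confirm that the listener itself is reachable from the browser before debugging XMPP-level issues. A rough sketch, assuming the same host and port as above; a timeout here usually points at a firewall or port problem rather than the handler configuration.
// Raw reachability check for the ejabberd WebSocket listener (sketch)
// RFC 7395 XMPP-over-WebSocket uses the "xmpp" subprotocol
var ws = new WebSocket("ws://somedomain:5280/websocket/", "xmpp");

ws.onopen = function () {
  console.log("WebSocket opened: port 5280 and the /websocket handler are reachable");
};
ws.onerror = function (event) {
  console.log("WebSocket error: check firewall rules for port 5280 and the request_handlers path", event);
};
ws.onclose = function (event) {
  console.log("WebSocket closed with code", event.code);
};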

Javascript get request from https server to localhost:port with self signed SSL

I have two servers configured and running on my Debian server: one main server and one Elasticsearch (search engine) server.
The main server is running on an https node server with an NGINX proxy and a purchased SSL certificate. The Elasticsearch server is running on an http server. I've added a new NGINX proxy server to redirect https://localhost:9999 to http://localhost:9200 with a self-signed SSL certificate. There's also configured authentication on the Elasticsearch server with a username and a password.
Everything seems to be properly configured, since I get a successful response from the server when I curl https://localhost:9999 from the server's terminal with the -k option to bypass verification of the self-signed certificate; without it, it does not work.
I cannot do a cross-domain request from my https main server to my http localhost server, therefore I need to configure https on my localhost server.
Without the -k option:
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
With the -k option:
{
  "name" : "server-name",
  "cluster_name" : "name",
  "cluster_uuid" : "uuid",
  "version" : {
    "number" : "x.x.x",
    "build_hash" : "abc123",
    "build_date" : "Timestamp",
    "build_snapshot" : false,
    "lucene_version" : "x.x.x"
  },
  "tagline" : "You Know, for Search"
}
Which is a successful Elasticsearch server response.
So the full curl request looks something like curl -k https://localhost:9999/ --user username:password.
So, the actual question:
I would like to be able to do a simple jQuery AJAX request towards this server. I'm trying with the following request $.get('https://username:password#localhost:9999/') but I'm getting ERR_CONNECTION_REFUSED.
My guess is that the AJAX request does not bypass the self-signed certificate verification and therefore refuses to connect.
Is there any simple way to solve this with request headers or something like that? Or do I need to purchase a CA certificate to make this work with AJAX?
You are right, the problem is the self-signed certificate. If you try the same request over plain http, it will work.
Here is a workaround to make Elasticsearch work with https:
You need to implement your own HttpConnector:
var HttpConnector = require('elasticsearch/src/lib/connectors/http');
var inherits = require('util').inherits;
var qs = require('querystring');
var fs = require('fs');

function CustomHttpConnector(host, config) {
  HttpConnector.call(this, host, config);
}
inherits(CustomHttpConnector, HttpConnector);

// This function is copied and modified from elasticsearch-js/src/lib/connectors/http.js
CustomHttpConnector.prototype.makeReqParams = function (params) {
  params = params || {};
  var host = this.host;
  var reqParams = {
    method: params.method || 'GET',
    protocol: host.protocol + ':',
    auth: host.auth,
    hostname: host.host,
    port: host.port,
    path: (host.path || '') + (params.path || ''),
    headers: host.getHeaders(params.headers),
    agent: this.agent,
    rejectUnauthorized: true,
    ca: fs.readFileSync('publicCertificate.crt', 'utf8')
  };
  if (!reqParams.path) {
    reqParams.path = '/';
  }
  var query = host.getQuery(params.query);
  if (query) {
    reqParams.path = reqParams.path + '?' + qs.stringify(query);
  }
  return reqParams;
};

module.exports = CustomHttpConnector;
Then register it like so:
var elasticsearch = require('elasticsearch');
var CustomHttpConnector = require('./customHttpConnector');

var Elasticsearch = function () {
  this.client = new elasticsearch.Client({
    host: {
      host: 'my.server.com',
      port: '443',
      protocol: 'https',
      auth: 'user:passwd'
    },
    keepAlive: true,
    apiVersion: "1.3",
    connectionClass: CustomHttpConnector
  });
};
https://gist.github.com/fractalf/d08de3b59c32197ccd65
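With that in place, a quick call through the client should go over HTTPS using the pinned certificate. A small usage sketch, assuming the Elasticsearch wrapper defined above:
// Usage sketch for the wrapper above (names are taken from the answer's example)
var es = new Elasticsearch();

es.client.ping({ requestTimeout: 3000 }, function (error) {
  if (error) {
    console.error('Elasticsearch is unreachable over HTTPS:', error.message);
  } else {
    console.log('Elasticsearch responded over HTTPS using the pinned certificate');
  }
});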
If you want to make simple AJAX calls without using the ES client, the only thing you can do is prompt the user to visit the page and accept the certificate themselves when the request is denied.
Also see: https://stackoverflow.com/a/4566055/5758328
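A rough sketch of that fallback from the browser side, assuming the proxied endpoint from the question; the user has to open the URL once and accept the certificate warning manually:
// Sketch: detect the blocked request and point the user at the endpoint
// so they can accept the self-signed certificate manually.
var esUrl = 'https://localhost:9999/';

$.get(esUrl)
  .done(function (data) {
    console.log('Elasticsearch reachable:', data);
  })
  .fail(function () {
    // With an untrusted certificate the browser blocks the request before any
    // response arrives, so there is no useful status code to inspect here.
    alert('Could not reach the search server. Please open ' + esUrl +
      ' in a new tab, accept the certificate warning, then retry.');
  });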
