Let's Encrypt Error: Failed HTTP-01 Pre-Flight / Dry Run

I've set up a redbird-based proxy following the examples in its README file.
So far I've configured a single domain, both for HTTP and HTTPS, and it's working well (HTTPS still using a self-signed certificate).
But now I'm trying to configure it to use letsencrypt to automatically get valid SSL certificates, and I'm stuck on the following error:
{"level":30,"time":1578681102208,"pid":21320,"hostname":"nigul","name":"redbird","0":false,"1":"setChallenge called for 'exposito.bitifet.net'","msg":"Lets encrypt debugger","v":1}
[acme-v2] handled(?) rejection as errback:
Error: Error: Failed HTTP-01 Pre-Flight / Dry Run.
curl 'http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0'
Expected: 'test-cf55199519d859042f695e620cca8dbb-0.MgLl7GIS59DPtPMejuUcXfddzNt8YxfLVo5op670u8M'
Got: '<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
'
See https://git.coolaj86.com/coolaj86/acme-v2.js/issues/4
at /home/joanmi/SERVICES/redbird_domains/node_modules/acme-v2/index.js:49:10
at process._tickCallback (internal/process/next_tick.js:68:7)
As far as I understand, this is telling me that Let's Encrypt is trying to access the URL http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0 using the following command:
curl 'http://exposito.bitifet.net/.well-known/acme-challenge/test-cf55199519d859042f695e620cca8dbb-0'
...and that it is getting what seems to be a 404 HTML error page, which I have no clue where it could come from.
And, in fact, when I execute that curl command or just paste that URL in my browser (you can try it: I left the server running), I get the given Expected string. So, from my point of view, it seems as if my configuration were correct but, for some reason, Let's Encrypt's servers were reaching another server (either because of wrong routing or DNS).
But, on the other hand, I suppose it's more probable that I've done something wrong in my configuration.
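A check along these lines (assuming dig is available) could at least rule a stale public DNS record in or out, by comparing what a public resolver returns with the server's actual address:

dig +short exposito.bitifet.net @8.8.8.8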
Here I paste my whole script (ports 80 and 443 are redirected to 1080 and 1443, respectively, through iptables, because the script is run by a non-privileged user):
const Redbird = require("redbird");

const proxy = Redbird({
  port: 1080,
  xfwd: false, // Disable the X-Forwarded-For header
  letsencrypt: {
    path: __dirname + '/certs',
    port: 9999
    // LetsEncrypt minimal web server port for handling challenges.
    // Routed 80->9999, no need to open 9999 in firewall. Default 3000
    // if not defined.
  },
  ssl: {
    http2: true,
    port: 1443, // SSL port used to serve registered https routes with LetsEncrypt certificate.
  }
});

proxy.register('exposito.bitifet.net:9999', 'http://localhost:8001', {
  ssl: {
    letsencrypt: {
      email: 'xxxxxx@gmail.com', // Domain owner/admin email
      production: false,
      // WARNING: Only use this flag when the proxy is verified to
      // work correctly to avoid being banned!
    }
  }
});
proxy.register("exposito.bitifet.net", "http://localhost:8001");
Any clue will be welcome.
Thanks.

SOLVED!!
Many issues were involved at the same time (on top of my lack of experience with both redbird and letsencrypt):
The magic 404/Not found page: I guess it came from a lighttpd server that seems to have been preinstalled on my VPS.
Port 80 was redirected via iptables, but I suppose that in one or another configuration tweak I could have ended up sending incoming requests to localhost's port 80 (which is not redirected).
My redbird misunderstanding: Looking at the examples in its README file, I thought redbird was a kind of "multi reverse proxy", in the sense that you could handle both HTTP and HTTPS requests with a single redbird instance.
But I finally realized that the (maybe not so well named) port option is, in fact, an HTTP port that serves only to configure a built-in, unconditional http->https redirector (which I had already read about, but thought was optional).
The actual underlying issue: If your DNS has DNSSEC activated, you should define a CAA record in it pointing to letsencrypt.org.
For the moment I disabled DNSSEC instead, because my provider's control panel doesn't allow me to create such a record.
I discovered this while trying to get the certificates through certbot (sudo apt-get install certbot), which, I must say, if I had known about it before, I wouldn't have bothered trying redbird's letsencrypt integration.
It is much more verbose (while redbird is more like a black box when errors arise), and it pointed out that I needed the CAA record.
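Such a CAA record looks like this in a DNS zone (a sketch; the exact syntax depends on your provider's panel), and a reasonably recent dig can query it:

exposito.bitifet.net.  IN  CAA  0 issue "letsencrypt.org"

dig CAA exposito.bitifet.net +short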
Here are the notes I took about it (in case anyone is interested):
Free SSL Certificates with Certbot
Install certbot:
sudo apt-get install certbot
Create:
sudo certbot certonly --manual --preferred-challenges http -d <domain>
Renew:
sudo certbot renew
Caveats:
DNSSEC
If your DNS server has DNSSEC enabled, you will need to add a CAA
record pointing to letsencrypt.org.
...and your DNS provider may not allow you to create it (at least I
couldn't with CDMON; I also haven't complained to them yet).
production = false is for other kinds of testing: I read that if you set it to true while testing, you may be banned from Let's Encrypt if you perform too many requests.
Setting it to false, you can test redirections, but you will still see errors regarding letsencrypt, even though you can navigate without a secure certificate (I think some kind of self-signed certificate is provided to allow testing). So don't expect a valid one.
The ssl port is used for redirection: Not a (big) issue, but if you specify an ssl port other than 443, the built-in redirector will unconditionally redirect you to that port.
Running redbird as root and using the standard ports (80 and 443) works fine. But if you, like me, want to use alternative ports in order to run redbird as a non-privileged user, you will get redirected to that alternative port instead of 443 (even having it redirected through iptables).
Here is my (almost*) final redbird script:
const Redbird = require("redbird");

const proxy = Redbird({
  port: 1080,
  xfwd: false, // Disable the X-Forwarded-For header
  ssl: {
    port: 1443,
  },
  letsencrypt: {
    path: __dirname + '/certs',
    port: 9999,
    // LetsEncrypt minimal web server port for handling challenges.
    // Routed 80->9999, no need to open 9999 in firewall. Default 3000
    // if not defined.
  },
});
proxy.register('exposito.bitifet.net', 'http://exposito.bitifet.net:8001', {
  ssl: {
    http2: true,
    letsencrypt: {
      email: 'xxxxxx@gmail.com', // Domain owner/admin email
      production: true,
      // WARNING: Only use this flag when the proxy is verified to
      // work correctly to avoid being banned!
    },
  }
});
(*) I still need to fix the explicit-port redirection issue described above, because I don't want to run redbird as root. But I know it is possible to allow users to listen on given ports, and I would probably rather try to patch redbird to allow specifying the listen and redirection ports separately.
EDIT: It is already implemented (and documented) via the optional redirectPort option in the ssl section. I just added redirectPort: 443 and job done!!
EDIT 2: For the sake of completeness, there was still another issue I struggled with.
To get things working, I initially configured the redirection to the HTTP port instead of the HTTPS one.
That is: incoming HTTPS requests got redirected to my application's HTTP port.
It seems weird but it works, at least if you don't need any exclusively-HTTPS feature such as push notifications (which I plan to use in the future).
But it implies opening an HTTP server, at least on localhost. That isn't a major issue now (this is only a playground server), but I plan to use redbird at work to proxy multiple domains to different servers, and that would have forced us to open HTTP at least in our DMZ VLAN (an additional risk that is better avoided...).
When I tried redirecting to https, I got the DEPTH_ZERO_SELF_SIGNED_CERT error.
OK: this is telling me that redbird (or Node) does not trust my original (self-signed) certificate. I know there is an option to tell Node to accept those certificates, but maybe that is not the way to go...
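(The option alluded to is, I believe, Node's NODE_TLS_REJECT_UNAUTHORIZED environment variable; it disables certificate validation for every TLS connection the process makes, which is exactly why it is not the way to go:)

// DON'T do this: accepts ANY certificate, valid or not,
// for every TLS connection made by this Node process.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';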
So I configured my application to use the same certificate that redbird is obtaining through letsencrypt.
But then I got this other error:
UNABLE_TO_VERIFY_LEAF_SIGNATURE
Researching a bit, I found this StackOverflow answer, which explains how to get all the root and intermediate certificates trusted by Mozilla and make Node trust them.
So, in the end, what I did was:
Installed node_extra_ca_certs_mozilla_bundle package:
npm install --save node_extra_ca_certs_mozilla_bundle
Prepended NODE_EXTRA_CA_CERTS=node_modules/node_extra_ca_certs_mozilla_bundle/ca_bundle/ca_intermediate_root_bundle.pem to the start command in the package.json's scripts section.
Updated my redbird script to point again to the https (protocol and) port:
proxy.register('exposito.bitifet.net', 'https://localhost:4301', {...});
Here is my final redbird configuration:
const Redbird = require("redbird");

const proxy = Redbird({
  port: 1080,
  xfwd: false, // Disable the X-Forwarded-For header
  ssl: {
    port: 1443,
    redirectPort: 443
    // key: "/etc/bitifet/exposito/ssl/private.key",
    // cert: "/etc/bitifet/exposito/ssl/public.cert",
  },
  letsencrypt: {
    path: __dirname + '/certs',
    port: 9999,
    // LetsEncrypt minimal web server port for handling challenges.
    // Routed 80->9999, no need to open 9999 in firewall. Default 3000
    // if not defined.
  },
});
proxy.register('exposito.bitifet.net', 'https://localhost:4301', {
  ssl: {
    http2: true,
    letsencrypt: {
      email: 'xxxxxx@gmail.com', // Domain owner/admin email
      production: true,
      // WARNING: Only use this flag when the proxy is verified to
      // work correctly to avoid being banned!
    },
  }
});
And here are my package.json file contents:
{
  "name": "redbird_domains",
  "version": "0.0.1",
  "description": "Local Domains Handling",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "NODE_EXTRA_CA_CERTS=node_modules/node_extra_ca_certs_mozilla_bundle/ca_bundle/ca_intermediate_root_bundle.pem node ./index.js"
  },
  "author": "Joanmi",
  "license": "GPL-3.0",
  "dependencies": {
    "node_extra_ca_certs_mozilla_bundle": "^1.0.4",
    "redbird": "^0.10.0"
  }
}

Related

caddy - setting https to localhost on mac

I am using caddy v2.3.0 on a Mac. If I run caddy run, I get the following.
My Caddyfile:
{
    local_certs
}

demoCart.dev:443 {
    reverse_proxy http://localhost:3000
}
If I run caddy validate, it says "Valid configuration", but when I try to access it in the browser at https://democart.dev, it doesn't work.
I may be wrong, but this looks like a DNS issue rather than a Caddy issue.
Have you set up your DNS to point democart.dev to your machine? Caddy will only be able to serve it if the name points to your machine's IP address in the first place.
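If you only need the name to resolve on your own machine, a hosts-file entry is the usual way to do that (a sketch; on macOS it lives in /etc/hosts):

127.0.0.1 democart.dev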
Try
localhost:443 {
    reverse_proxy http://localhost:3000
}
and see if that works.

how do you enable ssl using laravel 8 sail

I just created a new Laravel 8 project, following the instructions in their docs. Using Laravel Sail, I have the site running locally on my machine just fine using sail up. I have set up an entry in /etc/hosts so the URL I go to is http://local.dev.domain.com (substituting domain.com for the actual domain name I own, and pointing it to localhost in the /etc/hosts file)... all works great.
However, the site needs to use Facebook Login, and Facebook requires https URLs on referrers. I've tried everything I could find online about setting up SSL certs with Docker, but setting up nginx with manually created certs (via mkcert) or trying to use letsencrypt all fails for various reasons (conflicts in ports, letsencrypt wanting the domain to be a real one (and failing on the acme challenge if I do create that subdomain), etc.). I've copied the certs to /etc/ssl/certs in the Docker image and run update-ca-certificates, tried setting the application port to 443 in my .env file, as well as opening both ports 80 and 443 in the docker-compose.yml file... but it all ends with the browser rejecting the request to https://local.dev.domain.com.
I've spent hours trying to get this to work but it doesn't seem like anyone has used the Laravel Sail docker image with SSL.
Any pointers?
[Edit for more info]
As pointed out in the comments, you need to set an alias to just use sail ..., but I've already done that.
I also tried without the bash alias, using vendor/bin/sail share, to no avail.
Problem
In your case you need a real domain, which you have. A self-signed certificate would not work, as Facebook would not acknowledge it as trusted. To get a free SSL certificate for that domain you can use Let's Encrypt; the easiest way to obtain it is using certbot. The problem is that you need to install that certificate on your web server, and Laravel Sail uses a built-in web server that unfortunately does not support SSL. You need to put a web server like nginx in front of the app and install the certificate there.
I'm currently working on a fork that enables what you need; however, it's not finished.
Workaround
For now you can use the built-in tunnel provided by Expose: https://beyondco.de/docs/expose/server/ssl
This is enabled by sail share.
It might be easier to use ngrok instead, which is essentially the same but commercial. Then all you have to do is download, register, and run ngrok http --region=eu 9000, and it will create an https link for you for development.
I solved this problem by using Caddy as a reverse proxy to the Laravel Sail container. Caddy has a feature called automatic HTTPS which can generate local certificates on the fly.
1 - Add Caddy as a service to your docker-compose.yml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - './docker/Caddyfile:/etc/caddy/Caddyfile'
      - sailcaddy:/data
      - sailcaddy:/config
    networks:
      - sail

  # Remove "ports" from the laravel.test service

volumes:
  sailcaddy:
    driver: local
2 - Create a simple Caddyfile and configure it as a reverse proxy
{
    on_demand_tls {
        ask http://laravel.test/caddy-check
    }
    local_certs
}

:443 {
    tls internal {
        on_demand
    }

    reverse_proxy laravel.test {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port {server_port}
        header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
3 - Set up an endpoint for Caddy to authorise which domains it generates certificates for
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class CaddyController extends Controller
{
    public function check(Request $request)
    {
        $authorizedDomains = [
            'laravel.test',
            'www.laravel.test',
            // Add subdomains here
        ];

        if (in_array($request->query('domain'), $authorizedDomains)) {
            return response('Domain Authorized');
        }

        // Abort if there's no 200 response returned above
        abort(503);
    }
}
See this gist for the full code changes involved. This blog post explains how to trust the Caddy root certificates.
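For completeness, the check endpoint above also needs a matching route; something like this (an assumption on my part; the linked gist should have the exact version):

// routes/web.php
use App\Http\Controllers\CaddyController;

Route::get('/caddy-check', [CaddyController::class, 'check']);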
For make "sail share" work you have to set alias and run "composer require laravel/sail --dev" on your project. This will install the latest version of sail, version 0.0.6 includes "share" command
There is actually an easier way. I did the following:
- Changed the laravel.test port to something else, like 8085. Do it from .env so you will avoid issues: add an APP_PORT env var.
- Then (this step was done by our sysadmin), since Laravel Sail is actually installing Apache on the system, you can manually set a reverse proxy for both ports 80 and 443 to port 8085, and that should do the trick (see the sketch below).
- Of course, you will have to install certbot on that Apache instance.
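A sketch of what that Apache reverse proxy could look like (hypothetical domain and certbot paths; requires mod_ssl, mod_proxy and mod_proxy_http enabled):

<VirtualHost *:443>
    ServerName local.dev.domain.com
    SSLEngine on
    # Paths as produced by certbot; adjust to your domain.
    SSLCertificateFile /etc/letsencrypt/live/local.dev.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/local.dev.domain.com/privkey.pem
    # Forward everything to the Sail container's published port.
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8085/
    ProxyPassReverse / http://127.0.0.1:8085/
</VirtualHost>

A plain port-80 vhost can either proxy the same way or simply redirect to https.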

How to make Laravel Valet work nicely with BrowserSync?

Has anyone made Laravel Valet (secure) work nicely with BrowserSync while using Laravel Mix? I am doing something like this, but it keeps pointing me to https://shadow-api.test:3000, whereas I simply want to omit the port.
mix.browserSync({
  proxy: 'shadow-api.test',
  host: 'shadow-api.test',
  open: 'external',
  https: {
    key: "/Users/aligajani/.config/valet/Certificates/shadow-api.test.key",
    cert: "/Users/aligajani/.config/valet/Certificates/shadow-api.test.crt"
  }
});
For your information, I am using the latest versions of everything, on a fresh install, and I intend to build a SPA (with token auth). I haven't had similar issues with BrowserSync before, simply because I wasn't using Valet.
Better late than never.. this works for me:
.browserSync({
  proxy: 'https://mass-importer.faaren.test',
  host: 'mass-importer.faaren.test',
  open: 'external',
  https: {
    key: "/Users/fabianhagen/.config/valet/Certificates/mass-importer.faaren.test.key",
    cert: "/Users/fabianhagen/.config/valet/Certificates/mass-importer.faaren.test.crt"
  }
});
I had to prefix the proxy domain with https://. It still opens under port 3000 (or another one if that port is already in use), but BrowserSync is working.

Cannot connect to ghost server on ec2 instance

I am following the basic Ghost server installation on an EC2 instance. So far I can run the Ghost server via npm start, and I can see that it is up and running:
Ghost is running...
Listening on 127.0.0.1:2368
Url configured as: http://54.187.25.187/
Ctrl+C to shut down
Here is the Ghost config, config.js:
// ### Development **(default)**
development: {
  // The url to use when providing links to the site, E.g. in RSS and email.
  url: 'http://54.187.25.187/',

  database: {
    client: 'sqlite3',
    connection: {
      filename: path.join(__dirname, '/content/data/ghost-dev.db')
    },
    debug: false
  },

  server: {
    // Host to be passed to node's `net.Server#listen()`
    host: '127.0.0.1',
    // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
    port: '2368'
  }
}
In the end, I cannot access anything when I type http://54.187.25.187:2368 in the browser. I would really appreciate guidelines on how to set up Ghost properly.
EDIT: The problem is solved already; it was an EC2 security group issue where the ports remained closed after I had set them to open.
For Amazon EC2 we have found you need to change the host to 0.0.0.0:
http://www.howtoinstallghost.com/how-to-setup-an-amazon-ec2-instance-to-host-ghost-for-free-self-install/
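That is, in config.js the server section becomes something like this (a minimal sketch):

server: {
  // 0.0.0.0 listens on all interfaces, so requests reaching the EC2
  // public IP can get through ('127.0.0.1' only accepts local connections).
  host: '0.0.0.0',
  port: '2368'
}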

How can I configure yeoman (gruntjs) to run as HTTPS?

I have recently been working with Yeoman (http://yeoman.io/) and would now like to set up my local environment to handle HTTPS requests, so that it can handle callbacks from OAuth providers.
Under a non-Yeoman/grunt setup, I was able to get Node.js configured to handle HTTPS, following a similar path as directed in this question (How to create an HTTPS server in Node.js?).
Looking at the gruntjs repo on GitHub, it appears this has been added as a feature (https://github.com/gruntjs/grunt-contrib-connect/pull/15), but I am still unclear as to where to set the appropriate options.
grunt.initConfig({
  connect: {
    server: {
      options: {
        protocol: 'https',
        port: 8443,
        key: grunt.file.read('server.key').toString(),
        cert: grunt.file.read('server.crt').toString(),
        ca: grunt.file.read('ca.crt').toString(),
        passphrase: 'grunt'
      }
    }
  }
});
See this commit.
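In case it helps, the server.key/server.crt pair referenced by that config can be generated with something like this (a sketch; the passphrase matches the config's passphrase option, and for a self-signed setup the certificate itself can be reused as ca.crt):

# Generate a passphrase-protected key plus a self-signed certificate.
openssl req -x509 -newkey rsa:2048 -days 365 \
    -keyout server.key -out server.crt \
    -passout pass:grunt -subj "/CN=localhost"
# Self-signed, so the cert can double as the CA bundle.
cp server.crt ca.crt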
