Install wildcard Let's Encrypt SSL certificate on Laravel Sail - laravel

I created a SaaS app using Laravel 8 with the first-party package Laravel Sail (Docker) and the Tenancy for Laravel package for the SaaS.
I need to install a wildcard Let's Encrypt SSL certificate on the main app so that all tenant apps will be served over HTTPS.
I tried to install the certbot image like this:
certbot:
    image: certbot/certbot:latest
The image installed, but I do not know what to do after that.
I also tried without Docker, following the Certbot instructions.
Everything installed and succeeded, but the website doesn't open and all requests time out.
Update:
This is the ports section in my docker-compose.yml file:
ports:
    - '443:443'
I ran docker ps and all services are up and running.
I also ran sudo ufw status to check the firewall rules.

TL;DR: Laravel Sail is not meant for production. Use a different Docker configuration; if you need an example you can find one here: https://github.com/thomasmoors/laravel-docker
Also, wildcard certificates are not achievable with HTTP-01 challenges; you need a DNS-01 challenge, which you complete by adding a TXT record to your DNS configuration.
Wildcard certificates from Let's Encrypt are only possible with a DNS-01 challenge. This, however, requires you to add a TXT record to your DNS zone, so wildcards are a no-go unless you have an API to change your DNS records. It might be worth a look at this: https://stackexchange.github.io/dnscontrol/
However, I do not know whether your DNS provider supports this.
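For reference, a manual DNS-01 request for a wildcard looks roughly like this (a sketch; mywebsite.com is a placeholder, and --manual means you have to paste the TXT record by hand on every renewal):
certbot certonly --manual --preferred-challenges dns -d "*.mywebsite.com" -d mywebsite.com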
For regular (non-wildcard) certificates:
By default Laravel Sail serves the app with the built-in php artisan serve web server, which has no support for SSL certificates, so you need to put a reverse proxy like nginx in front of it. Because of this I believe Sail is neither production ready nor intended to be. I have made an example of a non-Sail docker-compose config for Laravel: https://github.com/thomasmoors/laravel-docker
Certbot works by placing a file on your web server, which is then retrieved for the challenge. However, it looks like your current configuration does not share a volume between your web server and Certbot. You also need to allow Certbot to modify your nginx config.
The default location for your code is /var/www/html, so you should enable Certbot to write to that directory by adding a volume to the Certbot service as well (see the docker-compose snippet after the nginx example below). For reference, an nginx server block with the Certbot-managed SSL lines looks like this:
upstream sentry_docker {
    server 192.168.1.94:9005;
}
server {
    server_name example.dev;
    location / {
        proxy_pass http://sentry_docker;
        proxy_set_header Host $host;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = example.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name example.dev;
    listen 80;
    return 404; # managed by Certbot
}
certbot:
    image: certbot/certbot:latest
    volumes:
        - .:/var/www/html
        - ./data/nginx:/etc/nginx/conf.d
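With the shared volume in place, a minimal sketch of actually requesting the certificate from the Certbot service could look like this (assumptions: the nginx service serves /var/www/html over plain HTTP for example.dev, and you also persist /etc/letsencrypt with a volume so the certificate survives the container):
docker-compose run --rm certbot certonly --webroot -w /var/www/html -d example.dev --email you@example.dev --agree-tos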

There is not enough information to help you, but I can suggest checking out this package https://github.com/Daanra/laravel-lets-encrypt and double-checking your configuration.
If the website doesn't show up, the problem might be related to the network (a firewall) or to the application itself not running and binding to port 80 for HTTP requests and 443 for HTTPS.
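A few quick checks for the timeout symptom (a sketch, assuming an Ubuntu host with ufw; yourdomain.example is a placeholder):
sudo ss -tlnp | grep -E ':80|:443'                # is anything listening on ports 80/443?
sudo ufw allow 80/tcp && sudo ufw allow 443/tcp   # open the firewall if it is blocking them
curl -vk https://yourdomain.example               # inspect the TLS handshake from the server itself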

Related

How do you enable SSL using Laravel 8 Sail

I just created a new Laravel 8 project, following the instructions in their docs. Using Laravel Sail I have the site running locally on my machine just fine using sail up. I have set up an entry in /etc/hosts so the url I go to is http://local.dev.domain.com (substituting domain.com for the actual domain name I own, and pointing to localhost in the /etc/hosts file)...all works great.
However, the site needs to use Facebook Login, and Facebook requires HTTPS URLs on referrers. I've tried everything I could find online about setting up SSL certs with Docker, but setting up nginx with manually created certs (via mkcert) or trying to use Let's Encrypt all fails for various reasons (conflicts in ports, Let's Encrypt wanting the domain to be a real one and failing on the ACME challenge if I do create that subdomain, etc.). I've copied the certs to /etc/ssl/certs in the Docker image and run update-ca-certificates, tried setting the application port to 443 in my .env file, and opened both ports 80 and 443 in the docker-compose.yml file... but it all ends with the browser rejecting the request to https://local.dev.domain.com
I've spent hours trying to get this to work but it doesn't seem like anyone has used the Laravel Sail docker image with SSL.
Any pointers?
[Edit for more info]
As pointed out in the comments, you need to set an alias to just use sail ..., but I've already done that.
I also tried without the bash alias, using vendor/bin/sail share, to no avail.
Problem
In your case you need a real domain, which you have. A self-signed certificate would not work, as Facebook would not acknowledge it as trusted. To get a free SSL certificate for that domain you can use Let's Encrypt; the easiest way to obtain the certificate is with certbot. The problem is that you need to install that certificate on your web server. Laravel Sail uses the built-in web server, which unfortunately does not support SSL. You need to put a web server like nginx in front of the app and install the certificate there.
I'm currently working on a fork that enables what you need; however, it's not finished.
Workaround
For now you can use the built-in tunnel provided by Expose: https://beyondco.de/docs/expose/server/ssl
This is enabled by sail share.
It might be easier to use ngrok instead, which is essentially the same but commercial. Then all you have to do is download it, register, and run ngrok http --region=eu 9000, and it will create an HTTPS link for you for development.
I solved this problem by using Caddy as a reverse proxy to the Laravel Sail container. Caddy has a feature called automatic HTTPS which can generate local certificates on the fly.
1 - Add Caddy as a service to your docker-compose.yml
services:
    caddy:
        image: caddy:latest
        restart: unless-stopped
        ports:
            - '80:80'
            - '443:443'
        volumes:
            - './docker/Caddyfile:/etc/caddy/Caddyfile'
            - sailcaddy:/data
            - sailcaddy:/config
        networks:
            - sail
    # Remove "ports" from the laravel.test service
volumes:
    sailcaddy:
        driver: local
2 - Create a simple Caddyfile and configure it as a reverse proxy
{
    on_demand_tls {
        ask http://laravel.test/caddy-check
    }
    local_certs
}

:443 {
    tls internal {
        on_demand
    }
    reverse_proxy laravel.test {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port {server_port}
        header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
3 - Set up an endpoint for Caddy to authorise which domains it generates certificates for
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class CaddyController extends Controller
{
    public function check(Request $request)
    {
        $authorizedDomains = [
            'laravel.test',
            'www.laravel.test',
            // Add subdomains here
        ];

        if (in_array($request->query('domain'), $authorizedDomains)) {
            return response('Domain Authorized');
        }

        // Abort if there's no 200 response returned above
        abort(503);
    }
}
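The check endpoint also needs a route that matches the ask URL configured in the Caddyfile. A sketch, assuming it goes in routes/web.php:
use App\Http\Controllers\CaddyController;

// Must match the "ask" path under on_demand_tls in the Caddyfile
Route::get('/caddy-check', [CaddyController::class, 'check']);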
See this gist for the full code changes involved. This blog post explains how to trust the Caddy root certificates.
For make "sail share" work you have to set alias and run "composer require laravel/sail --dev" on your project. This will install the latest version of sail, version 0.0.6 includes "share" command
There is actually an easier way. I did the following:
changed the laravel.test port to something else, like 8085
do it from .env so you will avoid issues: add an APP_PORT env var
then (this step was done by our sysadmin), since the server is also running Apache, you can manually set a reverse proxy for both ports 80 and 443 to port 8085 and that should do the trick (see the sketch below).
Of course you will have to install certbot on that Apache instance.
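A rough sketch of that Apache setup on Debian/Ubuntu (an assumption about the distro; the vhost itself still needs ProxyPass/ProxyPassReverse directives pointing at http://127.0.0.1:8085):
sudo a2enmod proxy proxy_http ssl           # enable the reverse proxy and SSL modules
sudo systemctl reload apache2
sudo apt install python3-certbot-apache     # Certbot plugin that edits the Apache vhost for you
sudo certbot --apache -d example.com        # obtain the certificate and configure HTTPS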

Single Laravel install, multiple domains, SSL encryption

I'm building an app in Laravel that has a single codebase serving multiple domain names. A new domain can be added in the CMS, and all that should have to be done for that new domain to work is have its DNS records pointed to the server. The CMS itself then displays the appropriate pages for that domain, based on request()->getHost().
The app is being managed with Laravel Forge.
My question is regarding nginx and Let's Encrypt: I would like all new domains added in this way to be secured via SSL. Would every new domain need to be added to Forge manually, or is there some way to allow a wildcard domain in the certificate? (And if so, is that a security risk?)
Will nginx require any specific configuration to work with wildcard domains?
My aim is to avoid additional configuration and have it automatic, with the domain name simply being added to the backend.
Thanks!
Please follow these steps. Hope it will work for you.
1 - First, clone the Certbot repo from GitHub
cd /opt
git clone https://github.com/certbot/certbot.git
2 - Now enter the newly created directory and run the certificate bot
cd certbot
./letsencrypt-auto certonly --manual --preferred-challenges=dns --email mymail@gmail.com --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d "*.mywebsite.com"
3 - Now Certbot will ask you to create a DNS record to check that you really have rights to this domain.
------------------------------------------------------------------
Please deploy a DNS TXT record under the name
_acme-challenge.mywebsite.com with the following value:
5GFgEqWd7AQrvHteRtfT5V-XXXXXXXXXXXXXX
Before continuing, verify the record is deployed.
------------------------------------------------------------------
Press Enter to Continue
4 - After adding this DNS TXT record to your domain, wait a few moments for it to propagate, then press Enter to continue.
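Before pressing Enter you can check that the record has propagated, for example with dig (a quick sanity check; the record name is the one Certbot printed above):
dig +short TXT _acme-challenge.mywebsite.com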
5 - Your certificate is ready!
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mywebsite.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mywebsite.com/privkey.pem
Your cert will expire on 2018-08-22. To obtain a new or tweaked
version of this certificate in the future, simply run certbot-auto
again. To non-interactively renew *all* of your certificates, run
"certbot-auto renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
6 - Now we will copy fullchain.pem and privkey.pem to our nginx folder (commands sketched after the config below) and reference them in our nginx server configuration. For example:
server {
    listen 443 ssl;
    server_name test.mywebsite.com;
    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_trusted_certificate /etc/nginx/ssl/fullchain.pem;
    ...
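The copy step could look like this (a sketch; symlinking instead of copying, or pointing nginx directly at /etc/letsencrypt/live/..., means renewed certificates are picked up without repeating it):
sudo mkdir -p /etc/nginx/ssl
sudo cp /etc/letsencrypt/live/mywebsite.com/fullchain.pem /etc/nginx/ssl/
sudo cp /etc/letsencrypt/live/mywebsite.com/privkey.pem /etc/nginx/ssl/
sudo nginx -t && sudo systemctl reload nginx    # reload so the new certificate is served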
Hope it will be helpful.

Laravel Forge - access app by naked IP instead of domain name

I have a Laravel app in production (using Laravel Forge and a DigitalOcean droplet).
I'm able to access the app via www.domain.com, but if I try with the server's IP I get a 404 (nginx).
How can I manage to access the app with the IP address?
Thanks a lot for your help
EDIT:
Here is my Nginx config on Laravel Forge:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.com;
    root /home/forge/domain.com/public;
    ...
}
This occurs because nginx falls back to the configuration block marked default_server when no matching server_name can be found. You can remove the default_server tag from the default site (/etc/nginx/sites-enabled/default) and move it to the config for the site you want to serve by default:
server {
    listen 80 default_server;
    server_name example.net www.example.net;
    ...
}
Your server block with default_server added:
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2;
    server_name domain.com;
    root /home/forge/domain.com/public;
    ...
}
Be sure to edit the default config to remove the default_server tag before restarting nginx; it is not allowed to have two config blocks with default_server on the same port. The config can be verified using nginx -t.
More information can be found in the nginx documentation.
Why would you want to access your server from the naked IP?
Nginx returns a 404 since it can't find the requested domain on your server.
If you look at your folder structure, each project folder corresponds to a site domain; nginx routes the request to the right folder based on the domain name.
You could make a default project that shows something like phpinfo() when the server is requested through the IP.

SSL for local domain

I have a project in Laravel. It was downloaded from a server to make some modifications. I have an SSL certificate for a local domain (syworkx.local) for nginx. Can somebody help me with installing and setting up the certificate?
You can easily add an existing certificate to your nginx config file as shown below (inside the relevant server block):
server {
    listen 443 ssl;
    server_name syworkx.local;
    ssl_certificate /etc/nginx/ssl/syworkx_local.crt;
    ssl_certificate_key /etc/nginx/ssl/syworkx_local.key;
    ...
}

How can I use "let's encrypt" without stopping nginx?

I am adding HTTPS support to our servers. How can I add Let's Encrypt support without stopping nginx?
Add this block to your server configuration (depending on your setup you can use a path other than /var/www/html):
location ~ /.well-known {
    root /var/www/html;
    allow all;
}
Reload nginx, run certbot as follows:
certbot certonly -a webroot --webroot-path=/var/www/html -d yourdomain.example
Apply generated certificate to your server configuration
ssl_certificate /etc/letsencrypt/live/yourdomain.example/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.example/privkey.pem;
Make sure server setup is configured to run on port 443 with ssl:
listen 443 ssl;
Reload nginx again. Between reloads, you can make sure the configuration doesn't have syntax errors by running nginx -t.
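For those reload steps, a typical sequence on a systemd host looks like this (an assumption; use nginx -s reload or service nginx reload on other setups):
sudo nginx -t                   # check the configuration for syntax errors
sudo systemctl reload nginx     # reload without dropping connections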
Against all the other answers: you can run certbot in nginx mode.
Just read the docs for it.
All you have to do is install an additional nginx plugin and follow the Certbot docs.
That plugin will even hot-reload the cached certificates in nginx's memory as soon as they get updated.
https://certbot.eff.org/instructions
or go to the nginx docs instead: https://www.nginx.com/blog/using-free-ssltls-certificates-from-lets-encrypt-with-nginx/
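On Debian/Ubuntu that boils down to something like this (a sketch; package names differ on other distros):
sudo apt install python3-certbot-nginx       # the nginx plugin for Certbot
sudo certbot --nginx -d yourdomain.example   # obtains the certificate and updates the nginx config in place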
You can use Docker for that (the certbot/certbot image on Docker Hub).
For example:
Create certbot.sh
To do that, run in the CLI:
touch certbot.sh && chmod +x ./certbot.sh
Put this in the file:
#!/usr/bin/env bash
docker run --rm -v /etc/letsencrypt:/etc/letsencrypt -v /var/lib/letsencrypt:/var/lib/letsencrypt certbot/certbot "$@"
and run it like this:
./certbot.sh certonly --webroot -w /var/www/example -d example.com -d www.example.com -w /var/www/thing -d thing.is -d m.thing.is
OR
./certbot.sh renew
And you can add a call to this script in crontab to handle renewal:
0 0 1 * * /<PATH_TO_FILE>/certbot.sh renew
