Why can't I write certificate.crt with acme.sh? - https

root@vultr:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-07-28 02:16:44 UTC; 23min ago
Docs: man:nginx(8)
Main PID: 12999 (nginx)
Tasks: 2 (limit: 1148)
Memory: 8.2M
CGroup: /system.slice/nginx.service
├─12999 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─13000 nginx: worker process
Jul 28 02:16:44 vultr.guest systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 28 02:16:44 vultr.guest systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jul 28 02:16:44 vultr.guest systemd[1]: Started A high performance web server and a reverse proxy server.
Nginx is in good status. I want to create and write certificate.crt with acme.sh:
sudo su -l -s /bin/bash acme
curl https://get.acme.sh | sh
export CF_Key="xxxx"
export CF_Email="yyyy@yahoo.com"
CF_Key is my Cloudflare Global API key, and CF_Email is the email I registered with to log in to Cloudflare.
acme@vultr:~$ acme.sh --issue --dns dns_cf -d domain.com --debug 2
The output is too long to post here, so I uploaded it to termbin.com; here is the link:
https://termbin.com/taxl
Open the page to see the whole output and check what caused the error. There are two main issues:
1. My nginx server is in good status, but acme.sh can't detect it.
2. How can I set the config file?
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_KEY_ID
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_HMAC_KEY
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EMAIL
To write the key into a specified directory:
acme.sh --install-cert -d domain.com \
--key-file /usr/local/etc/certfiles/private.key \
--fullchain-file /usr/local/etc/certfiles/certificate.crt
It encounters a problem:
[Tue Jul 27 01:12:15 UTC 2021] Installing key to:/usr/local/etc/certfiles/private.key
cat: /home/acme/.acme.sh/domain.com/domain.com.key: No such file or directory
To check the files in /usr/local/etc/certfiles/:
ls /usr/local/etc/certfiles/
private.key
There is no certificate.crt in /usr/local/etc/certfiles/. How can I fix this?

Since acme.sh v3.0.0, acme.sh uses ZeroSSL as the default CA, and you must register an account first (one-time) before you can issue new certs.
Here is how ZeroSSL compares with Let's Encrypt.
With ZeroSSL as CA
You must register at ZeroSSL before issuing a certificate. To register, run the command below (assuming yyyy@yahoo.com is the email with which you want to register):
acme.sh --register-account -m yyyy@yahoo.com
Now you can issue a new certificate (assuming you have set CF_Key & CF_Email, or CF_Token & CF_Account_ID):
acme.sh --issue --dns dns_cf -d domain.com
Without ZeroSSL as CA
If you don't want to use ZeroSSL and, say, want to use Let's Encrypt instead, then you can provide the --server option when issuing a certificate:
acme.sh --issue --dns dns_cf -d domain.com --server letsencrypt
Here are more options for the CA server.
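Once --issue and --install-cert succeed, you can sanity-check the written files with openssl. A minimal sketch, using a throwaway self-signed key/cert pair as a stand-in for the real acme.sh output (the certfiles directory and domain.com are placeholders mirroring the question):

```shell
# Generate a stand-in key/cert pair (in real life these are written by acme.sh).
mkdir -p certfiles
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=domain.com" \
  -keyout certfiles/private.key -out certfiles/certificate.crt -days 1

# Confirm the certificate parses and shows the expected subject and expiry.
openssl x509 -noout -subject -enddate -in certfiles/certificate.crt

# Confirm the private key matches the certificate: the two digests must be equal.
openssl x509 -noout -modulus -in certfiles/certificate.crt | openssl md5
openssl rsa  -noout -modulus -in certfiles/private.key     | openssl md5
```

If the two modulus digests differ, the installed key and certificate do not belong together, which would explain nginx refusing the pair.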

Related

HashiCorp Vault service fails to start

I have set up HashiCorp Vault (v1.5.4) on Ubuntu 18.04. My backend is Consul (a single node running on the same server as Vault); the consul service is up.
My vault service fails to start:
systemctl list-units --type=service | grep "vault"
vault.service loaded failed failed vault service
journalctl -xe -u vault
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Unit vault.service has finished shutting down.
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Start request repeated too quickly.
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Failed with result 'exit-code'.
Oct 03 00:21:33 ubuntu2 systemd[1]: Failed to start vault service.
-- Subject: Unit vault.service has failed
vault config.json
"api_addr": "http://<my-ip>:8200",
storage "consul" {
address = "127.0.0.1:8500"
path = "vault"
},
Service config
StandardOutput=/opt/vault/logs/output.log
StandardError=/opt/vault/logs/error.log
cat /opt/vault/logs/error.log
cat: /opt/vault/logs/error.log: No such file or directory
cat /opt/vault/logs/output.log
cat: /opt/vault/logs/output.log: No such file or directory
sudo tail -f /opt/vault/logs/error.log
tail: cannot open '/opt/vault/logs/error.log' for reading: No such file or
directory
:/opt/vault/logs$ ls -al
total 8
drwxrwxr-x 2 vault vault 4096 Oct 2 13:38 .
drwxrwxr-x 5 vault vault 4096 Oct 2 13:38 ..
After much debugging, the issue turned out to be a silly goof-up mixing .hcl and .json (they are so similar, but different) while cutting and pasting between examples: the storage block (as posted) needs to be in JSON format. The problem is of course compounded when the error message says nothing and there is nothing in the logs.
"storage": {
"consul": {
"address": "127.0.0.1:8500",
"path" : "vault"
}
},
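For reference, here is a minimal all-JSON config sketch (the listener address, api_addr, and storage path are placeholders; adjust them for your host). The here-doc writes the file and then validates it as JSON, which is exactly the check that catches an HCL-in-JSON mix-up before Vault ever reads it:

```shell
# Write a minimal all-JSON Vault config; every block uses JSON syntax, not HCL.
# Addresses and paths below are placeholders for illustration.
cat > config.json <<'EOF'
{
  "api_addr": "http://127.0.0.1:8200",
  "disable_mlock": true,
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": true
    }
  },
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  }
}
EOF

# Validate: an HCL-style block pasted into a .json file fails right here,
# with a line number, instead of failing silently at service start.
python3 -m json.tool config.json > /dev/null && echo "config.json is valid JSON"
```

Running the file through a JSON parser on every edit would have surfaced this bug immediately.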
There were a couple of other issues to sort out to get it going: setting disable_mlock: true, and opening the firewall for port 8200: sudo ufw allow 8200/tcp.
Finally got it done (or rather, started).
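On the logging side: a bare path like StandardOutput=/opt/vault/logs/output.log is not a valid value for systemd's StandardOutput=, which is likely why those log files were never created. Valid targets are keywords such as journal, or file:/path (systemd >= 236; append:/path needs >= 240, newer than Ubuntu 18.04's systemd 237). A sketch of the relevant [Service] lines, written to a local file here purely for illustration:

```shell
# Write the [Service] logging lines as they would appear in vault.service.
# Plain "StandardOutput=/some/path" is rejected; the "file:" prefix is required.
cat > vault-logging.conf <<'EOF'
[Service]
StandardOutput=file:/opt/vault/logs/output.log
StandardError=file:/opt/vault/logs/error.log
EOF
grep 'Standard' vault-logging.conf
```

With an invalid value, systemd falls back to the journal, so journalctl -u vault is the place the startup error actually lands.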

How to reach a linked service in docker-compose?

According to https://docs.docker.com/compose/compose-file/#links, if I specify the name of another service under links in docker-compose, I should be able to reach that service at a hostname identical to the service name.
To test this, I tried the following docker-compose.yml:
version: '3'
services:
tor:
build: ./tor
use_tor:
build: ./use_tor
links:
- tor
where the tor and use_tor directories contain Dockerfiles:
.
├── docker-compose.yml
├── tor
│   └── Dockerfile
└── use_tor
└── Dockerfile
which are, for tor:
FROM alpine:latest
EXPOSE 9050
RUN apk --update add tor
CMD ["tor"]
and for use_tor:
FROM alpine:latest
CMD ["nc", "-z", "tor", "9050"]
However, if I do docker-compose build followed by docker-compose up, I see from the logs that the use_tor service exits with status code 1:
Starting scrapercompose_tor_1
Recreating scrapercompose_use_tor_1
Attaching to scrapercompose_tor_1, scrapercompose_use_tor_1
tor_1 | May 02 15:36:34.123 [notice] Tor v0.2.8.12 running on Linux with Libevent 2.0.22-stable, OpenSSL LibreSSL 2.4.4 and Zlib 1.2.8.
tor_1 | May 02 15:36:34.123 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
tor_1 | May 02 15:36:34.123 [notice] Configuration file "/etc/tor/torrc" not present, using reasonable defaults.
tor_1 | May 02 15:36:34.129 [notice] Opening Socks listener on 127.0.0.1:9050
tor_1 | May 02 15:36:34.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
tor_1 | May 02 15:36:34.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
tor_1 | May 02 15:36:34.000 [warn] You are running Tor as root. You don't need to, and you probably shouldn't.
tor_1 | May 02 15:36:34.000 [notice] We were built to run on a 64-bit CPU, with OpenSSL 1.0.1 or later, but with a version of OpenSSL that apparently lacks accelerated support for the NIST P-224 and P-256 groups. Building openssl with such support (using the enable-ec_nistp_64_gcc_128 option when configuring it) would make ECDH much faster.
tor_1 | May 02 15:36:34.000 [notice] Bootstrapped 0%: Starting
scrapercompose_use_tor_1 exited with code 1
tor_1 | May 02 15:36:35.000 [notice] Bootstrapped 80%: Connecting to the Tor network
tor_1 | May 02 15:36:36.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
tor_1 | May 02 15:36:36.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
tor_1 | May 02 15:36:36.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
tor_1 | May 02 15:36:36.000 [notice] Bootstrapped 100%: Done
Apparently the command nc -z tor 9050 doesn't return the expected status code 0 on the use_tor container. However, it would seem to me that this should work. For example, if I modify the tor service to map port 9050 on the container to the host as follows,
services:
tor:
build: ./tor
ports:
- "9050:9050"
Then in my ordinary terminal, I do see that nc -z localhost 9050 yields an exit code of 0:
kurt@kurt-ThinkPad:~$ nc -z localhost 9050
kurt@kurt-ThinkPad:~$ echo $?
0
In short, I would expect the hostname tor to behave like localhost on the host after the port mapping, but this appears not to be the case. Why is this not working?
This question made me pause for a while. I cloned this example but was not able to get it working at first. According to the Docker docs:
The EXPOSE instruction informs Docker that the container listens on
the specified network ports at runtime. EXPOSE does not make the ports
of the container accessible to the host. To do that, you must use
either the -p flag to publish a range of ports or the -P flag to
publish all of the exposed ports. You can expose one port number and
publish it externally under another number.
So I think that may be because the tor service is listening on 127.0.0.1 instead of 0.0.0.0 (for the difference between them, you can look here):
tor_1 | May 02 15:36:34.129 [notice] Opening Socks listener on
127.0.0.1:9050
It is accessible through the terminal only because of the ports argument in docker-compose.yml, which does the same as the -p flag.
All in all, if the tor service listens on 0.0.0.0, it should work as expected.
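One way to do that, assuming the image keeps using the default torrc location: provide a /etc/tor/torrc that binds the SOCKS listener to all interfaces. The here-doc below writes such a torrc locally for illustration; in the tor image you would COPY it to /etc/tor/torrc (or pass --SocksPort 0.0.0.0:9050 on tor's command line):

```shell
# Minimal torrc making the SOCKS listener reachable from other containers:
# 0.0.0.0 binds all interfaces instead of tor's loopback-only default.
cat > torrc <<'EOF'
SocksPort 0.0.0.0:9050
EOF
cat torrc
```

With this in place, nc -z tor 9050 from the use_tor container should exit 0, since the listener is no longer bound only to the tor container's loopback interface.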

Docker Toolbox on Windows: docker run hello-world gets x509: certificate signed by unknown authority

I tried many of the examples, but none work for me.
My Docker version:
C:\>docker version
Client:
Version: 1.12.2
API version: 1.24
Go version: go1.6.3
Git commit: bb80604
Built: Tue Oct 11 17:00:50 2016
OS/Arch: windows/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 23:26:11 2016
OS/Arch: linux/amd64
I did copy the certs (*.pem) to /etc/docker/certs.d, but it had no effect.
docker@default:~$ l /etc/docker/certs.d/
total 24
drwxr-xr-x 2 root root 4096 Nov 30 17:59 ./
drwxr-xr-x 3 root root 4096 Nov 30 17:16 ../
-rwxr-xr-x 1 root root 1679 Nov 30 17:59 ca-key.pem
-rwxr-xr-x 1 root root 1038 Nov 30 17:59 ca.pem
-rwxr-xr-x 1 root root 1078 Nov 30 17:59 cert.pem
-rwxr-xr-x 1 root root 1675 Nov 30 17:59 key.pem
The certs are the ones that are generated when creating the VM.
I'd appreciate your help on this; I've spent a day trying to solve it.
The message is generated when running docker run hello-world.
The log is from docker.log, located in /var/lib/boot2docker/:
time="2016-11-30T18:25:14.233037149Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3 )"
time="2016-11-30T18:25:14.233712555Z" level=error msg="Handler for POST /v1.24/containers/create returned error: No such image: hello-world:latest"
time="2016-11-30T18:25:14.244589790Z" level=debug msg="Calling GET /v1.24/info"
time="2016-11-30T18:25:14.244626594Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3)"
time="2016-11-30T18:25:14.249913910Z" level=debug msg="Calling POST /v1.24/images/create?fromImage=hello-world&tag=latest"
time="2016-11-30T18:25:14.249943955Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3)"
time="2016-11-30T18:25:14.250041478Z" level=debug msg="Trying to pull hello-world from https://registry-1.docker.io v2"
time="2016-11-30T18:25:14.327535482Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
time="2016-11-30T18:25:14.327561850Z" level=error msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
time="2016-11-30T18:25:14.327574917Z" level=debug msg="Trying to pull hello-world from https://index.docker.io v1"
time="2016-11-30T18:25:14.327587833Z" level=debug msg="hostDir: /etc/docker/certs.d/docker.io"
time="2016-11-30T18:25:14.327858818Z" level=debug msg="[registry] Calling GET https://index.docker.io/v1/repositories/library/hello-world/images"
time="2016-11-30T18:25:14.501831878Z" level=error msg="Not continuing with pull after error: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority"
You may be behind a proxy. Try this:
sudo vi /var/lib/boot2docker/profile
At the end of the profile file, add the following:
# replace with your office's proxy environment
export "HTTP_PROXY=http://PROXY:PORT"
export "HTTPS_PROXY=http://PROXY:PORT"
# you can add more no_proxy with your environment.
export "NO_PROXY=192.168.99.*,*.local,169.254/16,*.example.com,192.168.59.*"
Then restart boot2docker.
The above steps worked for me. I am on Windows.
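The edit above can also be scripted so it is safe to re-run. A sketch, using a local ./profile file as a stand-in for /var/lib/boot2docker/profile (the PROXY:PORT values are placeholders to replace with your real proxy):

```shell
# Stand-in for /var/lib/boot2docker/profile so the sketch is safe to run anywhere.
PROFILE=./profile
touch "$PROFILE"

# Append the proxy exports only once, so re-running the script stays idempotent.
if ! grep -q 'HTTP_PROXY' "$PROFILE"; then
  cat >> "$PROFILE" <<'EOF'
# replace with your office's proxy environment
export "HTTP_PROXY=http://PROXY:PORT"
export "HTTPS_PROXY=http://PROXY:PORT"
export "NO_PROXY=192.168.99.*,*.local,169.254/16,192.168.59.*"
EOF
fi
grep 'PROXY' "$PROFILE"
```

After editing the real file, restart the VM (for example, docker-machine restart default) so the Docker daemon picks up the new environment.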
It turns out we are behind a proxy; however, those settings would not work with our Zscaler proxy system. Zscaler injects its own certificates, and even adding those certificates to Docker's setup would not work. Zscaler does have an SSL-bypass setting that exempts a given URL from this SSL treatment.
For Docker, you must bypass the URLs .docker.io and .cloudfront.net.

DNS name and IP address do not resolve the same

I have checked them at http://www.ipchecking.com/, and it says they are the same. But when I visit each of them, they are different:
ec2-54-206-38-225.ap-southeast-2.compute.amazonaws.com - 404 error Problem accessing /. Reason: Not Found
54.206.38.225 - returns apache default page
ec2-54-206-38-225.ap-southeast-2.compute.amazonaws.com/jenkins - jenkins launchs
54.206.38.225/jenkins - not found
My understanding was that the hostname should resolve to the IP address, and thus they should both take me to the same place?
What you are probably seeing is due to name-based virtual hosts.
When your browser makes an HTTP request, it includes a header that says what host it is looking for. This allows a server to have more than 1 site hosted on a single IP address and port.
This can also allow a load balancer to redirect your traffic to different machines on its network for handling.
You can find more information at
https://en.wikipedia.org/wiki/Virtual_hosting
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
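To make the idea concrete, here is a sketch of two name-based virtual hosts sharing one IP address (the hostnames and document roots are placeholders, not taken from the question); the here-doc writes the config locally purely for illustration:

```shell
# Two name-based vhosts on the same address: Apache serves the block whose
# ServerName matches the request's Host header. The FIRST block doubles as
# the fallback for requests that match neither name -- e.g. a bare IP --
# which is exactly why the IP and the FQDN can return different sites.
cat > vhosts.conf <<'EOF'
<VirtualHost *:80>
    ServerName default.example.com
    DocumentRoot /var/www/default
</VirtualHost>

<VirtualHost *:80>
    ServerName app.example.com
    DocumentRoot /var/www/app
</VirtualHost>
EOF
grep ServerName vhosts.conf
```

You can also test a specific vhost from the command line without DNS by forcing the Host header: curl -H "Host: app.example.com" http://IP/.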
Check this:
# curl -I -s http://ec2-54-206-38-225.ap-southeast-2.compute.amazonaws.com | head -3
HTTP/1.1 404 Not Found
Date: Tue, 05 Jan 2016 06:15:49 GMT
Server: Jetty(winstone-2.9)
# curl -I -s http://54.206.38.225 | head -3
HTTP/1.1 200 OK
Date: Tue, 05 Jan 2016 06:16:00 GMT
Server: Apache/2.4.7 (Ubuntu)
# curl -I -s http://ec2-54-206-38-225.ap-southeast-2.compute.amazonaws.com/jenkins | head -3
HTTP/1.1 302 Found
Date: Tue, 05 Jan 2016 06:16:18 GMT
Server: Jetty(winstone-2.9)
# curl -I -s http://54.206.38.225/jenkins | head -3
HTTP/1.1 404 Not Found
Date: Tue, 05 Jan 2016 06:16:28 GMT
Server: Apache/2.4.7 (Ubuntu)
From the above commands (look closely at the HTTP response codes):
When the FQDN is used, the HTTP request is answered by Jetty.
When the IP address is used, the HTTP request is answered by Apache.
Jetty is aware of the /jenkins path.
Apache is not aware of the /jenkins path.
This implies that you have Jetty acting as a reverse proxy/load balancer, so the connection looks like this:
USER --> Jetty --> Apache
Now you need to figure out how Jetty is configured to redirect or deny requests. This link might be helpful.
However, I have usually seen an application server fronted by a web server acting as the reverse proxy/load balancer, so you might instead find that your setup looks like this:
USER --> Apache --> Jetty
If that is the case, then figure out how Apache is configured to redirect or deny requests.

Vagrant Shell Provision LAMP Stack on Ubuntu 15.04

I'm trying to create a LAMP stack on Ubuntu 15.04 using the default packages providing PHP 5.6.x, Apache 2.4, and MySQL 5.x for a CakePHP 2.x project, and I'm having issues configuring Apache: it doesn't seem to start up properly, though it is installed. I can't hit the test index.html I added at the bottom of the provisioning script, or the vhost pointed at the public directory /var/www/app/webroot.
All the components appear to be installed properly, as I can check all their versions, but have I misconfigured or missed configuration on Apache? I really don't want to install XAMPP again, having used Laravel Homestead for the last year; a Vagrant box is the best way to go.
I created a gist with my Vagrantfile, Lamp.rb, and provision.sh script, with their paths at the top. Can anyone boot this up and see what I've done wrong?
Vagrant Up Error In Terminal
==> default: Job for apache2.service failed. See "systemctl status
apache2.service" and "journalctl -xe" for details.
When I try to restart Apache, I get the same error you see when first running vagrant up:
root#test:/# sudo service apache2 restart
Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.
Error log from /var/log/apache2/error.log
I couldn't get access to error.log; I was getting -bash: cd: var/log/apache2: Permission denied. So I had to use sudo su, which worked, but I don't want to do this each time, so if anyone knows what needs to be done to grant my user permission, I'd appreciate it. The posts I've found on this don't really explain what needs to be done, only that sudo su will work. From there I was able to read the error log using nano.
[Sat Jan 02 19:03:54.589161 2016] [mpm_event:notice] [pid 3529:tid 140703238530944] AH00489: Apache/2.4.10 (Ubuntu) co$
[Sat Jan 02 19:03:54.589263 2016] [core:notice] [pid 3529:tid 140703238530944] AH00094: Command line: '/usr/sbin/apach$
[Sat Jan 02 19:03:58.874664 2016] [mpm_event:notice] [pid 3529:tid 140703238530944] AH00491: caught SIGTERM, shutting $
[Sat Jan 02 19:03:59.950199 2016] [mpm_prefork:notice] [pid 4803] AH00163: Apache/2.4.10 (Ubuntu) configured -- resumi$
[Sat Jan 02 19:03:59.950314 2016] [core:notice] [pid 4803] AH00094: Command line: '/usr/sbin/apache2'
[Sat Jan 02 19:04:01.359328 2016] [mpm_prefork:notice] [pid 4803] AH00169: caught SIGTERM, shutting down
[Sat Jan 02 19:04:02.467409 2016] [mpm_prefork:notice] [pid 4906] AH00163: Apache/2.4.10 (Ubuntu) configured -- resumi$
[Sat Jan 02 19:04:02.467483 2016] [core:notice] [pid 4906] AH00094: Command line: '/usr/sbin/apache2'
[Sat Jan 02 19:05:16.040251 2016] [mpm_prefork:notice] [pid 4906] AH00169: caught SIGTERM, shutting down
The problem is that some configuration files have been deleted; you have to reinstall them.
REINSTALL APACHE2:
To replace configuration files that have been deleted, without purging the package, you can do:
sudo apt-get -o DPkg::Options::="--force-confmiss" --reinstall install apache2
To fully remove the apache2 config files, you should
sudo apt-get purge apache2
which will then let you reinstall it in the usual way with
sudo apt-get install apache2
A purge is required to remove all the config files: if you delete the config files but only remove the package, the deletion is remembered and the missing config files are not reinstalled by default.
Then REINSTALL PHP5:
apt-get purge libapache2-mod-php5 php5 && \
apt-get install libapache2-mod-php5 php5