I'm trying to create a LAMP stack on Ubuntu 15.04 using the default packages (PHP 5.6.x, Apache 2.4, MySQL 5.x) for a CakePHP 2.x project, and I'm having trouble configuring Apache: it doesn't seem to start up properly, even though it is installed. I can't hit the test index.html I added at the bottom of the provisioning script, or the vhost pointed at the public directory /var/www/app/webroot.
All the components appear to be installed properly, since I can check all their versions, but I've misconfigured or missed some configuration on Apache. I really don't want to install XAMPP again after using Laravel Homestead for the last year; a Vagrant box is the best way to go.
I created a Gist with my Vagrantfile, Lamp.rb, and provision.sh script, with their paths at the top. Can anyone boot this up and see what I've done wrong?
Vagrant Up Error In Terminal
==> default: Job for apache2.service failed. See "systemctl status
apache2.service" and "journalctl -xe" for details.
When I try to restart Apache, I get the same error seen when first running vagrant up:
root@test:/# sudo service apache2 restart
Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.
Error Log From /var/log/apache2/error.log
I couldn't get access to error.log at first and was getting -bash: cd: /var/log/apache2: Permission denied, so I had to use sudo su, which worked. I don't want to do this each time, though, so if anyone knows what needs to be done to give my user permission I'd appreciate it. The posts I've found on this don't really explain how to do it correctly, only that sudo su will work. From there I was able to read the error log using nano.
[Sat Jan 02 19:03:54.589161 2016] [mpm_event:notice] [pid 3529:tid 140703238530944] AH00489: Apache/2.4.10 (Ubuntu) configured -- resuming normal operations
[Sat Jan 02 19:03:54.589263 2016] [core:notice] [pid 3529:tid 140703238530944] AH00094: Command line: '/usr/sbin/apache2'
[Sat Jan 02 19:03:58.874664 2016] [mpm_event:notice] [pid 3529:tid 140703238530944] AH00491: caught SIGTERM, shutting down
[Sat Jan 02 19:03:59.950199 2016] [mpm_prefork:notice] [pid 4803] AH00163: Apache/2.4.10 (Ubuntu) configured -- resuming normal operations
[Sat Jan 02 19:03:59.950314 2016] [core:notice] [pid 4803] AH00094: Command line: '/usr/sbin/apache2'
[Sat Jan 02 19:04:01.359328 2016] [mpm_prefork:notice] [pid 4803] AH00169: caught SIGTERM, shutting down
[Sat Jan 02 19:04:02.467409 2016] [mpm_prefork:notice] [pid 4906] AH00163: Apache/2.4.10 (Ubuntu) configured -- resuming normal operations
[Sat Jan 02 19:04:02.467483 2016] [core:notice] [pid 4906] AH00094: Command line: '/usr/sbin/apache2'
[Sat Jan 02 19:05:16.040251 2016] [mpm_prefork:notice] [pid 4906] AH00169: caught SIGTERM, shutting down
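For what it's worth, every line in that excerpt is a [notice], which suggests the actual startup failure is not being recorded in error.log at all (for a bad vhost it usually surfaces via apachectl configtest or journalctl instead). A quick sketch that confirms the excerpt holds no error-level entries, using a couple of the sample lines above:

```shell
# Write the sample log lines to a temp file.
cat > /tmp/apache_sample.log <<'EOF'
[Sat Jan 02 19:03:54.589161 2016] [mpm_event:notice] [pid 3529:tid 140703238530944] AH00489: Apache/2.4.10 (Ubuntu) configured -- resuming normal operations
[Sat Jan 02 19:04:01.359328 2016] [mpm_prefork:notice] [pid 4803] AH00169: caught SIGTERM, shutting down
EOF

# Apache logs severity as "[<module>:<level>]"; look for ":error]" entries.
# None exist here, so the next step would be: sudo apachectl configtest
if grep -q ':error\]' /tmp/apache_sample.log; then
  echo "error entries found"
else
  echo "no error-level entries"
fi
```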
The problem is that some configuration files were deleted; you have to reinstall.
REINSTALL APACHE2:
To replace configuration files that have been deleted, without purging the package, you can do:
sudo apt-get -o DPkg::Options::="--force-confmiss" --reinstall install apache2
To fully remove the apache2 config files, you should
sudo apt-get purge apache2
which will then let you reinstall it in the usual way with
sudo apt-get install apache2
Purge is required to remove all the config files: if you delete the config files but only remove the package, this is remembered, and the missing config files are not reinstalled by default.
Then REINSTALL PHP5:
apt-get purge libapache2-mod-php5 php5 && \
apt-get install libapache2-mod-php5 php5
root@vultr:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-07-28 02:16:44 UTC; 23min ago
Docs: man:nginx(8)
Main PID: 12999 (nginx)
Tasks: 2 (limit: 1148)
Memory: 8.2M
CGroup: /system.slice/nginx.service
├─12999 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─13000 nginx: worker process
Jul 28 02:16:44 vultr.guest systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 28 02:16:44 vultr.guest systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jul 28 02:16:44 vultr.guest systemd[1]: Started A high performance web server and a reverse proxy server.
nginx is in good status.
I want to create and write certificate.crt with acme.sh:
sudo su -l -s /bin/bash acme
curl https://get.acme.sh | sh
export CF_Key="xxxx"
export CF_Email="yyyy@yahoo.com"
CF_Key is my Cloudflare global API key; CF_Email is the email registered to log in to Cloudflare.
acme@vultr:~$ acme.sh --issue --dns dns_cf -d domain.com --debug 2
The output is too long to post here, so I uploaded it to termbin.com; the link is below:
https://termbin.com/taxl
Please open that page to see the whole output and check what caused the error. There are two main issues:
1. My nginx server is in good status, but acme.sh can't detect it.
2. How can I set the config file?
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_KEY_ID
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_HMAC_KEY
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EMAIL
To write the key into a specified directory:
acme.sh --install-cert -d domain.com \
--key-file /usr/local/etc/certfiles/private.key \
--fullchain-file /usr/local/etc/certfiles/certificate.crt
It encounters a problem:
[Tue Jul 27 01:12:15 UTC 2021] Installing key to:/usr/local/etc/certfiles/private.key
cat: /home/acme/.acme.sh/domain.com/domain.com.key: No such file or directory
To check the files in /usr/local/etc/certfiles/:
ls /usr/local/etc/certfiles/
private.key
There is no certificate.crt in /usr/local/etc/certfiles/.
How can I fix this?
From acme.sh v3.0.0, acme.sh uses ZeroSSL as the default CA; you must register an account first (one time) before you can issue new certs.
Here is how ZeroSSL compares with LetsEncrypt.
With ZeroSSL as CA
You must register at ZeroSSL before issuing a certificate. To register, run the command below (assuming yyyy@yahoo.com is the email with which you want to register):
acme.sh --register-account -m yyyy@yahoo.com
Now you can issue a new certificate (assuming you have set CF_Key & CF_Email or CF_Token & CF_Account_ID)
acme.sh --issue --dns dns_cf -d domain.com
Without ZeroSSL as CA
If you don't want to use ZeroSSL and would rather use LetsEncrypt, you can pass the server option when issuing a certificate:
acme.sh --issue --dns dns_cf -d domain.com --server letsencrypt
Here are more options for the CA server.
I have set up HashiCorp Vault (v1.5.4) on Ubuntu 18.04. My backend is Consul (a single node running on the same server as Vault), and the consul service is up.
My vault service fails to start:
systemctl list-units --type=service | grep "vault"
vault.service loaded failed failed vault service
journalctl -xe -u vault
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Unit vault.service has finished shutting down.
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Start request repeated too quickly.
Oct 03 00:21:33 ubuntu2 systemd[1]: vault.service: Failed with result 'exit-code'.
Oct 03 00:21:33 ubuntu2 systemd[1]: Failed to start vault service.
-- Subject: Unit vault.service has failed
vault config.json
"api_addr": "http://<my-ip>:8200",
storage "consul" {
address = "127.0.0.1:8500"
path = "vault"
},
Service config
StandardOutput=/opt/vault/logs/output.log
StandardError=/opt/vault/logs/error.log
cat /opt/vault/logs/error.log
cat: /opt/vault/logs/error.log: No such file or directory
cat /opt/vault/logs/output.log
cat: /opt/vault/logs/output.log: No such file or directory
sudo tail -f /opt/vault/logs/error.log
tail: cannot open '/opt/vault/logs/error.log' for reading: No such file or
directory
:/opt/vault/logs$ ls -al
total 8
drwxrwxr-x 2 vault vault 4096 Oct 2 13:38 .
drwxrwxr-x 5 vault vault 4096 Oct 2 13:38 ..
After much debugging, the issue was a silly goof-up mixing .hcl and .json (they are so similar, but different) while cutting and pasting between examples: the storage stanza (as posted) needs to be in JSON format. The problem is of course compounded when the error message says nothing and there is nothing in the logs.
"storage": {
"consul": {
"address": "127.0.0.1:8500",
"path" : "vault"
}
},
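Since the whole file is JSON, a quick sanity check before restarting Vault is to run the config through a JSON parser; anything HCL-flavored (unquoted keys, `=` instead of `:`) fails immediately. A minimal sketch, assuming python3 is available and using 127.0.0.1 as a placeholder for the real api_addr:

```shell
# Write a config in the corrected all-JSON form (addresses are placeholders).
cat > /tmp/vault-config.json <<'EOF'
{
  "api_addr": "http://127.0.0.1:8200",
  "disable_mlock": true,
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  }
}
EOF

# A JSON parser catches the HCL-in-JSON mixups that Vault reports so poorly.
if python3 -m json.tool /tmp/vault-config.json > /dev/null 2>&1; then
  echo "config parses as JSON"
else
  echo "config is NOT valid JSON"
fi
```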
There were a couple of other issues to sort out to get it going: disable_mlock: true, and opening the firewall for 8200 with sudo ufw allow 8200/tcp.
Finally got it done (rather, started).
I have recently updated my vagrant version to 2.2.9. When running the command, vagrant up I am now getting this error:
homestead: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
homestead: Job for mariadb.service failed because the control process exited with error code.
homestead: See "systemctl status mariadb.service" and "journalctl -xe" for details.
I'm not sure what is causing this issue; I've updated VirtualBox, Vagrant, and the Homestead package many times in the past without issue.
My machine is on macOS Catalina 10.15.5.
I have tried uninstalling & re-installing, I've also tried installing an older version of vagrant. Everything results in the same error above. I'm not sure what to do next - any suggestions are greatly appreciated!
EDIT
Thank you, @Aminul!
Here is the output I get:
Status: "MariaDB server is down"
Jun 20 19:17:53 homestead mysqld[42962]: 2020-06-20 19:17:53 0 [Note] InnoDB: Starting shutdown...
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Plugin 'InnoDB' init function returned error.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [Note] Plugin 'FEEDBACK' is disabled.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Aborting
Jun 20 19:17:54 homestead systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:17:54 homestead systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jun 20 19:17:54 homestead systemd[1]: Failed to start MariaDB 10.4.13 database server.
Running: mysql --version returns:
mysql Ver 15.1 Distrib 10.4.13-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
So clearly, it's saying that MariaDB is not started. I can research how to start it. I'm more curious, though: is this something that's happened to Homestead, or is it a result of something else? Normally I can just vagrant up and everything is good to go. I worry that if I mess with things, I'm setting myself up for failure down the road.
EDIT 2
When running this:
vagrant#homestead:~$ systemctl start mysqld.service
This is what I am prompted with:
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'mariadb.service'.
Authenticating as: vagrant,,, (vagrant)
Password:
I'm not sure what the credentials are to keep testing.
ADDITIONAL SOLUTION
Thank you, Raphy963!
I didn't want to answer my own question, and I was able to find another work-around that hopefully will help someone else.
The application I am working on is not yet in production, so I was able to change my database from MySQL to PostgreSQL.
I removed/uninstalled all instances of virtualbox, vagrant & homestead. I also removed the "VirtualBox VMs" directory.
I re-installed everything, starting with VirtualBox, Vagrant & then laravel/homestead. I am now running the latest versions of everything; using the Laravel documentation for instructions.
After everything was installed, running vagrant up did not create errors, however I was still not able to connect to MySQL.
I updated my Homestead.yaml file to the following:
---
ip: "10.10.10.10"
memory: 2048
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
- ~/.ssh/id_rsa
folders:
- map: /Users/<username>/Sites
to: /home/vagrant/sites
sites:
- map: blog.test
to: /home/vagrant/sites/blog/public
databases:
- blog
- homestead
features:
- mariadb: false
- ohmyzsh: false
- webdriver: false
I updated my hosts file to this:
10.10.10.10 blog.test
Finally, I was able to connect using TablePlus.
My .env file in my Laravel application looks like this:
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=blog
DB_USERNAME=homestead
DB_PASSWORD=secret
I am now able to connect using TablePlus and from my application.
Hope this helps someone!!
I was having the same issue and spent way too much time trying to fix it. I tried the new release of Homestead from their GitHub repo (https://github.com/laravel/homestead), which claims to fix this exact issue, but it didn't work.
After investigating on my own, I realized the scripts Vagrant uses for Homestead (this repo: https://github.com/laravel/settler) had been updated to "10.0.0-beta". I did the following to put it back to "9.5.1":
vagrant box remove laravel/homestead
vagrant box add laravel/homestead --box-version 9.5.1
Afterwards, I remade my instance using vagrant destroy and vagrant up, and MariaDB was up and running once more.
While this might not be the best solution, at least I got it to work which is good enough for me.
Hope it helped!
You will need to investigate the cause.
Log in to your instance by running vagrant ssh, then run systemctl status mariadb.service to check the error log.
See what the error is, and reply here if you don't understand it.
I've tried many of the examples, but none work for me.
My Docker version:
C:\>docker version
Client:
Version: 1.12.2
API version: 1.24
Go version: go1.6.3
Git commit: bb80604
Built: Tue Oct 11 17:00:50 2016
OS/Arch: windows/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 23:26:11 2016
OS/Arch: linux/amd64
I did copy the certs (*.pem) to the /etc/docker/certs.d location but no effect.
docker#default:~$ l /etc/docker/certs.d/
total 24
drwxr-xr-x 2 root root 4096 Nov 30 17:59 ./
drwxr-xr-x 3 root root 4096 Nov 30 17:16 ../
-rwxr-xr-x 1 root root 1679 Nov 30 17:59 ca-key.pem
-rwxr-xr-x 1 root root 1038 Nov 30 17:59 ca.pem
-rwxr-xr-x 1 root root 1078 Nov 30 17:59 cert.pem
-rwxr-xr-x 1 root root 1675 Nov 30 17:59 key.pem
The certs are the ones generated when creating the VM.
I'd appreciate your help on this; I've spent a day trying to solve it.
Message is generated when running docker run hello-world
Log is from docker.log located in /var/lib/boot2docker/
time="2016-11-30T18:25:14.233037149Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3 )"
time="2016-11-30T18:25:14.233712555Z" level=error msg="Handler for POST /v1.24/containers/create returned error: No such image: hello-world:latest"
time="2016-11-30T18:25:14.244589790Z" level=debug msg="Calling GET /v1.24/info"
time="2016-11-30T18:25:14.244626594Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3)"
time="2016-11-30T18:25:14.249913910Z" level=debug msg="Calling POST /v1.24/images/create?fromImage=hello-world&tag=latest"
time="2016-11-30T18:25:14.249943955Z" level=debug msg="Client and server don't have the same version (client: 1.12.2, server: 1.12.3)"
time="2016-11-30T18:25:14.250041478Z" level=debug msg="Trying to pull hello-world from https://registry-1.docker.io v2"
time="2016-11-30T18:25:14.327535482Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
time="2016-11-30T18:25:14.327561850Z" level=error msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
time="2016-11-30T18:25:14.327574917Z" level=debug msg="Trying to pull hello-world from https://index.docker.io v1"
time="2016-11-30T18:25:14.327587833Z" level=debug msg="hostDir: /etc/docker/certs.d/docker.io"
time="2016-11-30T18:25:14.327858818Z" level=debug msg="[registry] Calling GET https://index.docker.io/v1/repositories/library/hello-world/images"
time="2016-11-30T18:25:14.501831878Z" level=error msg="Not continuing with pull after error: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority"
You may be behind a proxy. Try this:
sudo vi /var/lib/boot2docker/profile
At the end of the profile file, add the following:
# replace with your office's proxy environment
export "HTTP_PROXY=http://PROXY:PORT"
export "HTTPS_PROXY=http://PROXY:PORT"
# you can add more no_proxy with your environment.
export "NO_PROXY=192.168.99.*,*.local,169.254/16,*.example.com,192.168.59.*"
Then restart boot2docker.
The above steps worked for me. I am on Windows.
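The NO_PROXY list above is a set of comma-separated glob patterns. As a hypothetical illustration of how most clients match a host against such a list (simple shell globbing, not Docker's exact implementation):

```shell
# Hypothetical helper: does a host match any pattern in NO_PROXY?
NO_PROXY="192.168.99.*,*.local,169.254/16,*.example.com,192.168.59.*"

bypasses_proxy() {
  local host=$1 pat
  local IFS=','
  for pat in $NO_PROXY; do
    # shell glob match, e.g. 192.168.99.* matches 192.168.99.100
    case $host in ($pat) return 0 ;; esac
  done
  return 1
}

bypasses_proxy 192.168.99.100 && echo "192.168.99.100: direct"
bypasses_proxy registry-1.docker.io || echo "registry-1.docker.io: via proxy"
```

Hosts that fall through every pattern (like registry-1.docker.io here) go through the proxy, which is why pulls fail when the proxy's certificate is not trusted.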
Turns out we are behind a proxy; however, those settings would not work with our proxy system, Zscaler. Zscaler interjects its own certificates, and even adding those certificates to Docker's setup would not work. Zscaler does have an SSL-bypass setting that exempts a given URL from this SSL treatment.
For Docker you must use the URLs .docker.io and .cloudfront.net.
I simply run the following command:
docker run -d -p 80:80 --name webserver nginx
and after pulling the image, it returns this error:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint webserver
(ac5719bc0e95ead1a4ec6b6ae437c4c0b8a9600ee69ecf72e73f8d2d12020f97):
Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error
(Failure EADDRINUSE).
Here is my Docker version info:
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: linux/amd64
How to fix this?
You didn't provide information such as the Docker version, system, or running Docker processes, so I assume the most likely situation.
The output contains Failure EADDRINUSE, which means port 80 is already used by something else. You can use lsof -i TCP:80 to check which process is listening on that port. If nothing is running on the port, it might be an issue with Docker itself, for example the known issue of ports not being released immediately.
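Besides lsof, a dependency-free probe can be sketched in plain bash (/dev/tcp is a bashism, so this assumes bash, and it only detects listeners reachable on localhost):

```shell
# Probe a TCP port on localhost; returns 0 if something accepts a connection.
port_in_use() {
  # bash's /dev/tcp pseudo-device attempts a TCP connect when opened;
  # the subshell closes the fd again on exit.
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 80; then
  echo "port 80 busy - find the owner with: lsof -i TCP:80"
else
  echo "port 80 free - the conflict may be stale Docker state"
fi
```

If the port turns out to be busy, stop the owning process (often a local Apache or nginx) or map the container to another port, e.g. -p 8080:80.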