Error starting Laravel Homestead after updating to 2.2.9 - laravel

I recently updated my Vagrant version to 2.2.9. When running vagrant up, I now get this error:
homestead: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
homestead: Job for mariadb.service failed because the control process exited with error code.
homestead: See "systemctl status mariadb.service" and "journalctl -xe" for details.
I'm not sure what is causing this issue; I've updated VirtualBox, Vagrant, and the Homestead package many times in the past without issue.
My machine is running macOS Catalina 10.15.5.
I have tried uninstalling and re-installing, and I've also tried installing an older version of Vagrant. Everything results in the same error above. I'm not sure what to do next - any suggestions are greatly appreciated!
EDIT
Thank you, @Aminul!
Here is the output I get:
Status: "MariaDB server is down"
Jun 20 19:17:53 homestead mysqld[42962]: 2020-06-20 19:17:53 0 [Note] InnoDB: Starting shutdown...
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Plugin 'InnoDB' init function returned error.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [Note] Plugin 'FEEDBACK' is disabled.
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Jun 20 19:17:54 homestead mysqld[42962]: 2020-06-20 19:17:54 0 [ERROR] Aborting
Jun 20 19:17:54 homestead systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:17:54 homestead systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jun 20 19:17:54 homestead systemd[1]: Failed to start MariaDB 10.4.13 database server.
Running: mysql --version returns:
mysql Ver 15.1 Distrib 10.4.13-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
So clearly, it's saying that MariaDB is not started. I can research how to start that. I'm more curious, though -- is this something that's happened to Homestead? Or is this a result of something else? Normally, I can just vagrant up and everything is good to go. I worry that if I mess with things I'm setting myself up for failure down the road.
EDIT 2
When running this:
vagrant@homestead:~$ systemctl start mysqld.service
This is what I am prompted with:
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'mariadb.service'.
Authenticating as: vagrant,,, (vagrant)
Password:
I'm not sure what the credentials are to keep testing.
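As an aside: in a standard Homestead/Vagrant box the vagrant user normally has passwordless sudo, so prefixing the command avoids this prompt entirely. A minimal sketch, assuming that default is in place:
sudo systemctl start mariadb.service
sudo systemctl status mariadb.service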
ADDITIONAL SOLUTION
Thank you, @Raphy963!
I didn't want to answer my own question, but I was able to find another work-around that will hopefully help someone else.
The application I am working on is not yet in production, so I was able to change my database from MySQL to PostgreSQL.
I removed/uninstalled all instances of VirtualBox, Vagrant, and Homestead. I also removed the "VirtualBox VMs" directory.
I re-installed everything, starting with VirtualBox, then Vagrant, then laravel/homestead. I am now running the latest versions of everything, following the Laravel documentation for instructions.
After everything was installed, running vagrant up did not produce errors; however, I was still not able to connect to MySQL.
I updated my Homestead.yaml file to the following:
---
ip: "10.10.10.10"
memory: 2048
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: /Users/<username>/Sites
      to: /home/vagrant/sites
sites:
    - map: blog.test
      to: /home/vagrant/sites/blog/public
databases:
    - blog
    - homestead
features:
    - mariadb: false
    - ohmyzsh: false
    - webdriver: false
I updated my hosts file to this:
10.10.10.10 blog.test
Finally, using TablePlus I was able to connect with the credentials below.
My .env file in my Laravel application looks like this:
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=blog
DB_USERNAME=homestead
DB_PASSWORD=secret
I am now able to connect using TablePlus and from my application.
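For anyone double-checking the same setup, a quick way to verify the PostgreSQL side from inside the box is a sketch like this (assuming Homestead's default homestead / secret database credentials):
vagrant ssh
psql -h 127.0.0.1 -U homestead blog    # password: secret, per the Homestead defaults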
Hope this helps someone!!

I was having the same issue and spent way too much time trying to fix it. I tried using the new release of Homestead from their GitHub repo (https://github.com/laravel/homestead), which claims to fix this exact issue, but it didn't work.
After investigating on my own, I realized the box Homestead uses under Vagrant (this repo: https://github.com/laravel/settler) had been updated to "10.0.0-beta". I did the following to put it back to "9.5.1":
vagrant box remove laravel/homestead
vagrant box add laravel/homestead --box-version 9.5.1
Afterwards, I recreated my instance using vagrant destroy and vagrant up, and MariaDB was up and running once more.
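If I remember correctly, the box version can also be pinned in Homestead.yaml so a later vagrant up does not silently pull the newer box again; treat this as an assumption worth verifying against the Homestead docs:
# Homestead.yaml
version: "9.5.1"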
While this might not be the best solution, at least I got it to work which is good enough for me.
Hope it helped!

You will need to investigate the cause.
Log in to your instance by running vagrant ssh and run systemctl status mariadb.service to check the error log.
Check what the error is and reply here if you don't understand it.
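For example, the whole check looks roughly like this, run from your Homestead directory on the host:
vagrant ssh
sudo systemctl status mariadb.service
sudo journalctl -u mariadb.service --no-pager | tail -n 50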

Related

Redis does not start on boot on homestead vagrant server

I know that we can SSH directly into the VM and enable the Redis service.
But I think there must be a way to enable Redis using Homestead.yaml.
I tried to search the docs but I couldn't find anything.
EDIT
I'm posting my Homestead.yaml file:
ip: "192.168.10.10"
memory: 1048
cpus: 2
provider: virtualbox
authorize: C:\Users\stack\.ssh\id_rsa.pub
keys:
- C:\Users\stack\.ssh\id_rsa
folders:
- map: W:\sites\project
to: /home/vagrant/project
sites:
- map: project.test
to: /home/vagrant/project/public
databases:
- homestead
features:
- mariadb: false
- ohmyzsh: false
- webdriver: false
I have installed Predis, so the connection with Redis is not an issue.
Every time I boot my VM, I manually have to go and start Redis by typing this command: systemctl start redis-server.
This is why I was wondering whether there is a way to enable the Redis server from inside Homestead.yaml, so I don't have to do it manually.
If you want Homestead's Redis server to start automatically whenever Homestead is up:
Log into Homestead via SSH and run:
sudo systemctl enable redis-server
You should only need to run this once.
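If you prefer to do it from the host in one step, something like this should work (the --now flag also starts the service immediately):
vagrant ssh -c "sudo systemctl enable --now redis-server"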
There was a bug with this version of Homestead (10.0.1):
Redis does not start on boot.
However, this has been fixed if you check out the issue I have linked.
Still an issue?
Here is a quick fix while waiting for the Homestead box update:
sudo service redis-server start
Redis is already included/installed, and thus enabled, in Homestead. See the list of software included with Homestead.
To test, type redis-cli and then ping in your command line (inside the Vagrant box).
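That check looks roughly like this (PONG is the expected reply):
vagrant@homestead:~$ redis-cli
127.0.0.1:6379> ping
PONG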
For Predis, just run composer require predis/predis.
Try adding a services section to your Homestead.yaml after the features section, though I don't know if the order matters:
services:
    - enabled:
        - "redis-server"
Then vagrant reload --provision

Mongo DB: Failed to connect on Laravel Homestead

I need to create a new Laravel project and I need to use MongoDB as the database server. Following the Homestead documentation, I added this to my Homestead.yaml file:
mongodb: true
From what I see in the logs, the Mongo database is created:
homestead-7: Running: script: Creating Mongo Database: homestead
But after this I received this message:
homestead-7: Running: script: Creating Mongo Database: homestead
homestead-7: MongoDB shell version v3.6.3
homestead-7: connecting to: mongodb://127.0.0.1:27017/homestead
homestead-7: 2019-06-03T10:01:52.103+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
homestead-7: 2019-06-03T10:01:52.104+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
homestead-7: connect@src/mongo/shell/mongo.js:251:13
homestead-7: @(connect):1:6
homestead-7: exception: connect failed
The SSH command responded with a non-zero exit status.
From what I found on the internet, it could be that the Mongo service is not started. I restarted the box, without provisioning this time, but got the same result. Command:
vagrant@homestead:~$ mongo
Also, I found some solutions that involve changing some files on an Ubuntu OS, but in my case that will not work because the box will start as a fresh instance.
Any idea how to fix this? Thanks in advance!
Laravel version: 5.8.
Homestead: 8.4.0
MongoDB shell: v3.6.3
LATER EDIT
After the VM had started, I executed this command:
sudo apt-get install mongodb
After installation I can execute the "mongo" command:
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-use
Strange, so MongoDB actually isn't installed?! Even though I added the flag. Now I need to figure out how to install it every time the VM is started.
I managed to fix my problem after hours of searching, so I will post the fix.
Because I didn't find anything that could help me, I started checking the Homestead scripts to understand how Mongo is installed, and in homestead.rb I found this block:
# Install MongoDB If Necessary
if settings.has_key?('mongodb') && settings['mongodb']
    config.vm.provision 'shell' do |s|
        s.name = 'Installing MongoDb'
        s.path = script_dir + '/install-mongo.sh'
    end
end
So I searched where "install-mongo.sh" is called and I found this condition:
if [ -f /home/vagrant/.mongo ]
then
    echo "MongoDB already installed."
    exit 0
fi
So MongoDB is not installed every time; it is only installed if the "/home/vagrant/.mongo" file doesn't exist. At this point I realized that maybe Mongo had failed to install but this file was still written.
So the solution was to destroy the Vagrant box and recreate it from scratch:
vagrant destroy
vagrant up --provision
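A lighter-weight alternative I haven't tried myself, assuming the stale /home/vagrant/.mongo marker is the only thing making install-mongo.sh exit early, would be to delete the marker and re-provision instead of destroying the box:
vagrant ssh -c "rm -f /home/vagrant/.mongo"
vagrant reload --provision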
In Homestead.yaml, under features:, add - mongodb: true
and run vagrant reload --provision. That is the same as what @NicuVlad has suggested, but a little bit easier.
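For reference, a sketch of how that entry sits in the features block (indentation matters in YAML; the other flags are just the defaults shown elsewhere in this thread):
features:
    - mariadb: false
    - ohmyzsh: false
    - webdriver: false
    - mongodb: true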

Unable to setup cloudera manager web on port 7180 - cluster installation

I am using a local Ubuntu machine with the hostname below and trying to set up the Cloudera Hadoop Distribution (CDH 5).
chaithu@localhost:~$ hostname
localhost
chaithu@localhost:~$ hostname -f
localhost
chaithu@localhost:~$ ssh chaithu@localhost
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.8.0-36-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
449 packages can be updated.
232 updates are security updates.
Last login: Mon Dec 18 22:44:30 2017 from 127.0.0.1
It failed with "Failed to detect root privileges" and the error below:
/tmp/scm_prepare_node.qkAAjdTz
using SSH_CLIENT to get the SCM hostname: 127.0.0.1 35708 22
opening logging file descriptor
Starting installation script...
Acquiring installation lock...
BEGIN flock 4
END (0)
Detecting root privileges...
effective UID is 1000
BEGIN which pbrun
END (1)
BEGIN sudo -S id
[sudo] password for chaithu:
END (1)
need root privileges but sudo requires password, exiting
closing logging file descriptor
Screenshot of where I am stuck in the CDH installation.
It looks like you are missing sudo, or passwordless sudo, for the user you are using for the installation.
Configure sudo for the user that is used for the setup.
Make sure passwordless sudo is configured for that user.
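For example, a minimal passwordless sudo entry for the installation user (assuming that user is chaithu, as in the output above; create the file with visudo so a syntax error can't lock you out):
# created with: sudo visudo -f /etc/sudoers.d/chaithu
chaithu ALL=(ALL) NOPASSWD:ALL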

NFS Vagrant on Fedora 22

I'm trying to run Vagrant using libvirt as my provider. Using rsync is unbearable since I'm working with a huge shared directory, but Vagrant does succeed when the NFS setting is commented out and the standard rsync config is set.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step after running vagrant up:
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
When Vagrant is running, it echoes this to the terminal:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service
dillon#localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it, so I won't post all of it here, but there are some things in bold red.
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up, I checked with netstat -tulpn whether any process was binding to port 98 and did not see anything. In fact, while Vagrant was hanging I ran netstat -tulpn again to see what was binding to port 98 and still didn't see anything (checked for both the current user and root).
UPDATE: I haven't gotten any responses.
I wasn't able to figure out the current issue I'm having. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, but the issue seems to lie within NFS, not the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
It looks like when using libvirt the user is given control over NFS and rpcbind, and Vagrant doesn't even try to touch those things as I had assumed it did. Running these commands solved my issue:
service rpcbind start
service nfs stop
service nfs start
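On a systemd-based Fedora these map to the following (the service wrapper just redirects to systemctl, as the output above shows):
sudo systemctl start rpcbind
sudo systemctl restart nfs-server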
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
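An equivalent way to express the same dependency fix, without shadowing the whole unit file, is a drop-in snippet like this sketch (run systemctl daemon-reload and restart nfs-server afterwards):
# /etc/systemd/system/nfs-server.service.d/rpcbind.conf
[Unit]
Requires=rpcbind.service
After=rpcbind.service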
On CentOS 7, all I needed to do was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
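For what it's worth, a sketch of opening the relevant services instead of stopping firewalld entirely, assuming the stock nfs, mountd, and rpc-bind service definitions ship with your firewalld (they do on recent Fedora releases):
sudo firewall-cmd --permanent --add-service=nfs --add-service=mountd --add-service=rpc-bind
sudo firewall-cmd --reload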

Salty Vagrant Master hostname: salt not found

I am trying to load my Vagrant box with Salt, asking it to install Apache.
I am using salty-vagrant in masterless mode.
The Vagrant box gets loaded, but it gets stuck in the console with the following message:
[default] Running provisioner: salt...
Checking if salt-minion is installed
salt-minion found
Checking if salt-call is installed
salt-call found
Salt did not need installing or configuring.
Calling state.highstate... (this may take a while)
When I check the Vagrant Salt log, I find the following:
[salt.utils ][ERROR ] This master address: 'salt' was previously resolvable but now fails to resolve! The previously resolved ip addr will continue to be used
[salt.minion ][WARNING ] Master hostname: salt not found. Retrying in 30 seconds
Has anyone faced this issue before?
You need to make sure you are passing a minion config with the following option set:
file_client: local
Read all the steps in the Masterless Quick Start: https://github.com/saltstack/salty-vagrant#masterless-quick-start
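As a sketch of how that fits together with salty-vagrant (the salt/minion path is illustrative; point it at wherever your minion config file lives):
# Vagrantfile (salty-vagrant provisioner, masterless)
config.vm.provision :salt do |salt|
  salt.minion_config = "salt/minion"   # a file containing: file_client: local
  salt.run_highstate = true
end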
