I am using a Vagrant VM and I set up an environment using the following commands:
mkdir Vagrant
cd Vagrant
vagrant init ubuntu/trusty64
When I run the vagrant up command it displays an error like:
Error while connecting to libvirt: Error making a connection to libvirt URI qemu:///system?no_verify=1&keyfile=/home/shashi/.ssh/id_rsa:
Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
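The missing socket usually means Vagrant is trying to use the libvirt provider while the libvirt daemon isn't running. A first check, assuming a systemd-based host with the vagrant-libvirt plugin installed:
$ sudo systemctl status libvirtd
$ sudo systemctl enable --now libvirtd
Alternatively, if you meant to use VirtualBox rather than libvirt, force the provider explicitly:
$ vagrant up --provider=virtualbox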
After running vagrant up I get the following error message.
Vagrant requires administrator access to create SMB shares and
may request access to complete setup of configured shares.
==> homestead: Setting hostname...
==> homestead: Mounting SMB shared folders...
homestead: C:/Code => /home/*****/code
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=3.02,credentials=/etc/smb_creds_vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727,uid=1000,gid=1000,mfsymlinks,_netdev,nofail //169.254.x.x/vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727 /home/*****/code
The error output from the last command was:
mount error(2): No such file or directory
I am able to SSH into the Hyper-V instance, and when I run the mount command there it returns the same error. If I look at the properties of the C:/Code folder I can see the network path is \\PCNAME\vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727, which matches the mount command except that PCNAME has been replaced by an IP. I can ping that IP from within the instance and it seems to work fine.
Homestead file:
folders:
    - map: C:/Code
      to: /home/vagrant/code
      type: smb
      smb_username: vagrant
      smb_password: vagrant
The vagrant user has full permissions to the local code folder.
I am running Windows 11, Vagrant 2.3.1, and Hyper-V 10. The External Switch is set up via my Wi-Fi adapter - could that cause an issue?
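One thing worth ruling out: mount error(2) from mount.cifs often just means the CIFS helper tools are missing inside the guest. A quick check from within the instance (a sketch; the share name and masked IP are copied from the error output above, and the credential options are simplified):
$ sudo apt-get update && sudo apt-get install -y cifs-utils
$ sudo mount -t cifs -o vers=3.02,username=vagrant,password=vagrant //169.254.x.x/vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727 /home/vagrant/code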
I am trying to start a minikube cluster on macOS but I always get "Permission denied":
(base) MacBook-Pro-de-..:desktop ..$ minikube start
-bash: /usr/local/bin/minikube: Permission denied
What should I do?
Execute the following commands to add the right permissions to the files:
$ chmod ugo+rwx ~/.kube/config
$ sudo chown -R $USER ~/.kube
$ chmod +x /usr/local/bin/minikube
Configure proxy:
export no_proxy=$no_proxy,$(minikube ip)
export NO_PROXY=$NO_PROXY,$(minikube ip)
Then run the minikube command taking the proxy into consideration (the IPs below are just examples):
$ minikube start --alsologtostderr --kubernetes-version v1.13.1 --docker-env HTTP_PROXY=http://10.0.2.2:1087 --docker-env HTTPS_PROXY=http://10.0.2.2:1087 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
$ minikube start --alsologtostderr --kubernetes-version v1.13.2 --docker-env HTTP_PROXY=http://10.0.2.2:3128 --docker-env HTTPS_PROXY=http://10.0.2.2:3128 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
In this case the proxy configuration is:
HTTP_PROXY=http://127.0.0.1:3128
Please remember to add your minikube IP to NO_PROXY.
You can find similar problems here: file-permission, kubeconfig.
Now I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
E0301 15:19:14.198136 48335 start.go:268] Error setting up kubeconfig: Error reading file "/../.kube/config": open /../.kube/config: not a directory
E0301 15:19:15.128758 48335 util.go:151] Error uploading error message: Error sending error report to https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA, got response code 400
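The odd kubeconfig path /../.kube/config suggests minikube expanded an empty or unset HOME (or KUBECONFIG) variable. A hedged first check, assuming that is what happened:
$ echo "$HOME" "$KUBECONFIG"
$ mkdir -p "$HOME/.kube"
$ export KUBECONFIG=$HOME/.kube/config
$ minikube start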
vagrant up is not working properly after restarting the machine. Before the restart it was working fine. It hangs after "default: Mounting NFS shared folders" and throws an error like "mount.nfs: Connection timed out".
I have checked the exports file and reset it to a blank state.
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp,rw,actimeo=2 192.168.200.1:/Users/USERNAME/vagrant/ol7/vagabond7 /var/nfs//Users/USERNAME/vagrant/ol7/vagabond7
Stdout from the command:
Stderr from the command:
mount.nfs: Connection timed out
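Since the guest cannot reach 192.168.200.1, the first suspect is the NFS server on the host, which a reboot can leave stopped or firewalled. A couple of checks worth trying (a sketch assuming a macOS host, which the /Users path suggests):
$ sudo nfsd restart                # on the host: restart the NFS daemon
$ showmount -e 192.168.200.1       # from the guest: does the host export anything?
If showmount also times out, check the host firewall before touching /etc/exports again.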
I need to create a new Laravel project and I need to use MongoDB as the database server. Following the Homestead documentation, I added this to my Homestead.yaml file:
mongodb: true
From what I see in the logs the mongo database is created:
homestead-7: Running: script: Creating Mongo Database: homestead
But after this I received this message:
homestead-7: Running: script: Creating Mongo Database: homestead
homestead-7: MongoDB shell version v3.6.3
homestead-7: connecting to: mongodb://127.0.0.1:27017/homestead
homestead-7: 2019-06-03T10:01:52.103+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
homestead-7: 2019-06-03T10:01:52.104+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
homestead-7: connect#src/mongo/shell/mongo.js:251:13
homestead-7: #(connect):1:6
homestead-7: exception: connect failed
The SSH command responded with a non-zero exit status.
From what I found on the internet, it could be that the mongo service is not started. I restarted the box, without provisioning this time, but with the same result. Command:
vagrant@homestead:~$ mongo
Also, I found some solutions that involve changing some files on an Ubuntu OS, but in my case that will not work because the box starts as a fresh instance.
Any idea how to fix this? Thanks in advance!
Laravel version: 5.8.
Homestead: 8.4.0
MongoDB shell: v3.6.3
LATER EDIT
After the VM has started I executed this command:
sudo apt-get install mongodb
After installation I can execute the "mongo" command:
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-use
Strange, so MongoDB actually isn't installed?! Even though I added the flag. Now I need to figure out how to add it every time the VM is started.
I managed to fix my problem after hours of searching so I will post the fix.
Because I didn't find anything that could help me, I started checking the Homestead scripts in order to understand how Mongo is installed, and in homestead.rb I found this:
# Install MongoDB If Necessary
if settings.has_key?('mongodb') && settings['mongodb']
  config.vm.provision 'shell' do |s|
    s.name = 'Installing MongoDb'
    s.path = script_dir + '/install-mongo.sh'
  end
end
So I searched for where "install-mongo.sh" is called and I found this condition:
if [ -f /home/vagrant/.mongo ]
then
    echo "MongoDB already installed."
    exit 0
fi
So MongoDB is not installed every time; the script is skipped if the "/home/vagrant/.mongo" file exists. At this point I realized that Mongo had probably failed to install but the marker file was written anyway.
So the solution was to destroy the Vagrant box and recreate it from scratch:
vagrant destroy
vagrant up --provision
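If destroying the box is too drastic, a lighter alternative follows from the same marker-file logic (a sketch, untested): delete /home/vagrant/.mongo so install-mongo.sh runs again, then re-provision:
vagrant ssh -c "rm -f /home/vagrant/.mongo"
vagrant reload --provision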
In Homestead.yaml, under features:, add:
features:
    - mongodb: true
and run vagrant reload --provision, which is the same as what @NicuVlad has suggested but a little bit easier.
1) Context
I am running a build pipeline using GitLab's VirtualBox runner (GitLab version 10.6.3). When I manually create a base image (e.g. my-base-vm), the build runs perfectly on the 1-n clones that GitLab CI creates.
2) Observed error
However, when I provision the base image using Vagrant (version 2.2.2), the GitLab CI build output for my job shows the following:
Running with gitlab-runner 11.2.0 (35e8515d)
on myproject-build-machine 1c8ab769
Using VirtualBox version 5.2.18_Ubuntur123745 executor...
Creating new VM...
ERROR: Preparation failed: ssh: handshake failed: read tcp 127.0.0.1:35542->127.0.0.1:34963: read: connection reset by peer
Will be retried in 3s ...
Using VirtualBox version 5.2.18_Ubuntur123745 executor...
Creating new VM...
ERROR: Job failed: execution took longer than 1h0m0s seconds
The image is based on the base image ubuntu/bionic64.
3) Configuration
The runner (a clone of my-base-vm) seems to have the right NAT rules, though (output of VBoxManage showvminfo my-base-vm-runner-1c8ab769-concurrent-0):
NIC 1 Rule(0): name = guestssh, protocol = tcp, host ip = 127.0.0.1, host port = 32805, guest ip = , guest port = 22
NIC 1 Rule(1): name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2222, guest ip = , guest port = 22
The GitLab config.toml is configured with the correct username and password (vagrant:vagrant), and the Vagrantfile provisions the machine to accept username and password as a means of authentication (excerpt from the Vagrantfile):
config.vm.provision "shell", inline: <<-SHELL
  sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/g' /etc/ssh/sshd_config
  sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  service ssh restart
SHELL
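A way to test the runner's SSH assumptions independently of GitLab: try a password login from the host against the forwarded guestssh port shown in the showvminfo output above (32805 here; your clone may report a different port):
$ ssh -o PreferredAuthentications=password -p 32805 vagrant@127.0.0.1
If this handshake is also reset, the problem lies in the clone's sshd configuration or boot sequence rather than in gitlab-runner itself.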