How to resolve a Vagrant NFS mount issue on macOS?

Running vagrant up does not work properly after restarting the machine; before the restart it was working fine. It hangs after "default: Mounting NFS shared folders" and then throws an error like "mount.nfs: Connection timed out".
I have checked the /etc/exports file and restored it to a blank state.
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp,rw,actimeo=2 192.168.200.1:/Users/USERNAME/vagrant/ol7/vagabond7 /var/nfs//Users/USERNAME/vagrant/ol7/vagabond7
Stdout from the command:
Stderr from the command:
mount.nfs: Connection timed out
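Since the mount times out rather than being refused, it is usually worth confirming on the macOS host that nfsd is actually running and serving the export. A minimal set of checks (a suggestion, not from the original post; assumes the stock nfsd that ships with macOS):
# verify /etc/exports parses cleanly
sudo nfsd checkexports
# restart nfsd so it re-reads /etc/exports
sudo nfsd restart
# confirm the export Vagrant wrote is actually being served
showmount -e localhost
If nothing is exported, run vagrant reload so Vagrant rewrites /etc/exports; if the export is there but the guest still times out, check that the macOS firewall is not blocking nfsd.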

Related

Homestead with HyperV unable to create SMB folders - mount error(2): No such file or directory

After running vagrant up I get the following error message.
Vagrant requires administrator access to create SMB shares and
may request access to complete setup of configured shares.
==> homestead: Setting hostname...
==> homestead: Mounting SMB shared folders...
homestead: C:/Code => /home/*****/code
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=3.02,credentials=/etc/smb_creds_vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727,uid=1000,gid=1000,mfsymlinks,_netdev,nofail //169.254.x.x/vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727 /home/*****/code
The error output from the last command was:
mount error(2): No such file or directory
I am able to SSH into the Hyper-V instance, and when I run the mount command manually it returns the same error. If I look at the properties of the C:/Code folder I can see the network path is \\PCNAME\vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727, the same as in the mount command except that PCNAME is now an IP. I can ping that IP from within the instance and it seems to work fine.
Homestead file:
folders:
    - map: C:/Code
      to: /home/vagrant/code
      type: smb
      smb_username: vagrant
      smb_password: vagrant
The vagrant user has full permissions to the local code folder.
I am running Windows 11, Vagrant 2.3.1, and Hyper-V 10. The external switch is set up via my Wi-Fi adapter - could that cause an issue?
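Before suspecting the switch, it may be worth verifying from inside the guest that the SMB mount helpers exist at all, because mount error(2) is also what you get when the mount.cifs helper or CIFS support is missing. A hedged check, assuming a Debian/Ubuntu-based Homestead box:
# inside the guest
sudo apt-get update && sudo apt-get install -y cifs-utils
# then retry the exact mount command Vagrant printed above
If cifs-utils is already present, the problem is more likely on the host or network side.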

Error while starting Vagrant Virtual Machine

I am using a Vagrant VM and I set up an environment using the following commands:
mkdir Vagrant
cd Vagrant
vagrant init ubuntu/trusty64
When I run vagrant up it displays an error like:
Error while connecting to libvirt: Error making a connection to libvirt URI qemu:///system?no_verify=1&keyfile=/home/shashi/.ssh/id_rsa:
Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
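The missing /var/run/libvirt/libvirt-sock almost always means the libvirt daemon itself is not running, so vagrant-libvirt has nothing to connect to. A hedged sketch of the usual fix (package, service, and group names can vary by distribution):
# install and start the libvirt daemon
sudo apt-get install -y libvirt-daemon-system   # or the equivalent package for your distro
sudo systemctl enable --now libvirtd
# let your user talk to libvirt without sudo, then log out and back in
sudo usermod -aG libvirt $USER
After that, vagrant up should be able to open qemu:///system.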

Fedora 24 Vagrant issue. mount.nfs access denied by server

I started using Fedora 24 last year for my study/work computer. This is the first time I've run into an issue I cannot figure out within a reasonable amount of time.
We need to use Vagrant for a project, and I'm trying to get it running on my computer. The vagrant up command fails when mounting the NFS shared folders. Here's the output of the command:
Bringing machine 'default' up with 'libvirt' provider...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Creating shared folders metadata...
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
[sudo] password for feilz:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/etc/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Wed 2017-02-15 15:17:58 EET; 19h ago
Main PID: 16889 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
CGroup: /system.slice/nfs-server.service
Feb 15 15:17:58 feilz systemd[1]: Starting NFS server and services...
Feb 15 15:17:58 feilz systemd[1]: Started NFS server and services.
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o 'vers=4' 192.168.121.1:'/home/feilz/env/debian64' /vagrant
Stdout from the command:
Stderr from the command:
stdin: is not a tty
mount.nfs: access denied by server while mounting 192.168.121.1:/home/feilz/env/debian64
My Vagrantfile looks like this (I skipped the commented-out lines):
Vagrant.configure(2) do |config|
  config.vm.box = "debian/jessie64"
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
  end
end
I can run the vagrant ssh command to log in and run the command
sudo mount -o 'vers=4' 192.168.121.1:'/home/feilz/env/debian64' /vagrant
inside the VM to try again. Then the output becomes
mount.nfs: access denied by server while mounting 192.168.121.1:/home/feilz/env/debian64
I've gone through loads of webpages. I fixed missing Ruby gems (nokogiri and libffi). I tried modifying the /etc/exports file, but it doesn't work, and the file gets reset after I run vagrant halt / vagrant up.
I have installed the vagrant plugin vagrant-libvirt
What haven't I tried yet, that would allow me to use the NFS file sharing for Vagrant?
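One thing that often bites Fedora + vagrant-libvirt setups and is worth ruling out: firewalld on the host blocking the NFS, rpc-bind, and mountd services on the libvirt bridge. A hedged sketch (the zone name FedoraWorkstation is an assumption; check your active zone first):
# find the zone the libvirt/virbr interface belongs to
sudo firewall-cmd --get-active-zones
# allow the NFS-related services in that zone (replace FedoraWorkstation if yours differs)
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-service=nfs
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-service=rpc-bind
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-service=mountd
sudo firewall-cmd --reload
Then run vagrant halt followed by vagrant up so the export and mount are retried.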

Homestead error: The SSH command responded with a non-zero exit status

I get an error when I run homestead up --provision on my Homestead machine:
...[success logs here]...
==> default: Running provisioner: shell...
default: Running: /var/folders/9j/bsvhbzdn2dx8hjwgnl7nrj7w0000gn/T/vagrant-
shell20170127-4343-1dyyzgz.sh
==> default: mysql:
==> default: [Warning] Using a password on the command line interface
can be insecure.
==> default: Please use --connect-expired-password option or invoke
mysql in interactive mode.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Although I get this error, everything works fine except for one thing:
the databases that I have defined in ~/.homestead/Homestead.yaml are not created in MySQL. So I guess this error is the cause.
Any help would be appreciated.
I believe this is actually due to MySQL having an expiration time on passwords rather than having them last forever. It seems to happen for me when I'm running my old Laravel Homestead 5 box instead of newer ones for Homestead 7+.
Note that the following solution also fixes ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
From the host machine:
vagrant up
# ignore "SSH command responded with a non-zero exit status"
vagrant ssh
Now within the client machine:
# log into mysql (for Homestead your password is likely "secret")
mysql -h localhost -u homestead -p
SET PASSWORD = PASSWORD('secret');
-- A) set password to never expire:
ALTER USER 'root'@'localhost' PASSWORD EXPIRE NEVER;
-- or B) to change password as well:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password' PASSWORD EXPIRE NEVER;
-- quit mysql
quit
# exit back to host shell
exit
Now from the host machine:
vagrant up --provision
And it should work.
However, if you now see ==> default: createdb: database creation failed: ERROR: database "homestead" already exists then run this line within the client machine:
mysql -h localhost -u homestead -p -e "DROP DATABASE homestead"
Then run vagrant up --provision from the host machine and be on your way.
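If you prefer to script the reset instead of logging in interactively, the same SET PASSWORD step can be driven from the host in one line (a sketch assuming the default Homestead credentials homestead/secret; the --connect-expired-password flag is the one the warning above refers to):
vagrant ssh -c "mysql -h localhost -u homestead -psecret --connect-expired-password -e \"SET PASSWORD = PASSWORD('secret');\""
Then run vagrant up --provision as above.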

Using rsync on windows with vagrant running a CoreOS VM

I am using a Windows 8.1 Pro PC running Vagrant and Cygwin's rsync.
I am configuring the synced folder as follows:
config.vm.synced_folder "../sharedFolder", "/vagrant_data", type: "rsync"
And when I execute vagrant up I get the following error:
C:\dev\vagrantBoxes\coreOS>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'yungsang/coreos' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: core
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Rsyncing folder: /c/dev/vagrantBoxes/sharedFolder/ => /vagrant_data
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.
Host path: /c/dev/vagrantBoxes/sharedFolder/
Guest path: /vagrant_data
Command: rsync --verbose --archive --delete -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path sudo rsync -e ssh -p 2222 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/aaron.axisa/.vagrant.d/insecure_private_key' --exclude .vagrant/ /c/dev/vagrantBoxes/sharedFolder/ core@127.0.0.1:/vagrant_data
Error: Warning: Permanently added '[127.0.0.1]:2222' (RSA) to the list of known hosts.
rsync: change_dir "/c/dev/vagrantBoxes/sharedFolder" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at /usr/src/ports/rsync/rsync-3.0.9-1/src/rsync-3.0.9/main.c(1052) [sender=3.0.9]
I assume it is an issue with how it is converting the directory path to /c/dev rather than C:\dev.
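A quick way to test that hypothesis from a Cygwin terminal (not part of the original post) is to compare what Cygwin thinks the path is with what Vagrant passed to rsync:
# the canonical Cygwin form of the host path
cygpath -u 'C:\dev\vagrantBoxes\sharedFolder'
# the path rsync was actually given; this fails unless /c is symlinked to /cygdrive/c
ls /c/dev/vagrantBoxes/sharedFolder
The answers below offer a few ways to close that gap.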
As I commented in the GitHub issue, the following line in your Vagrantfile will most likely fix your problem:
ENV["VAGRANT_DETECTED_OS"] = ENV["VAGRANT_DETECTED_OS"].to_s + " cygwin"
Since this is local to your Vagrantfile, the Vagrant source files can be kept untouched.
Workaround: just run ln -s /cygdrive/c/ /c in a Cygwin terminal.
There is a hacky way to fix this (it worked for me, anyway) where you have to change
hostpath = Vagrant::Util::Platform.cygwin_path(hostpath)
to
hostpath = "/cygdrive" + Vagrant::Util::Platform.cygwin_path(hostpath)
on line 43 in C:\HashiCorp\Vagrant\embedded\gems\gems\vagrant-[VERSION]\plugins\synced_folders\rsync\helper.rb
It's different for 1.5.x; you can read about it in this thread: https://github.com/mitchellh/vagrant/issues/3230
I will, however, be the first to admit that editing the core is far from ideal.
From my testing, if you are using Cygwin, use the solution by @osroot25.
If you are using cwRsync and do not have Cygwin, there is no workaround within Vagrant except editing the source code as @Andrew Myers details. Tested using Vagrant v1.6.5.
My workaround is to bypass Vagrant altogether and use cwRsync directly. This works for me because I am syncing a folder that hardly ever changes. I might change it quite a few times in one day (so I have to remember step 2 below each time), but then I go weeks (or months) without any changes.
Remember that to use cwRsync you have to edit and use the cwrsync.cmd script. Attempting to access the rsync.exe command directly, or by adding it to your path, will fail.
Step 1: I added the following line to the end of cwrsync.cmd (in the installed folder):
rsync -re "ssh -p 2222" /cygdrive/b/VCS/packages/ vagrant@localhost:packages --exclude ".git/"
Step 2: I keep a separate cmd window open in which I run cwrsync.cmd using its full path. Then, if I need to sync changes onto the VM, I activate that window, press up-arrow and Enter, and the update is instant!
The ENV trick to force Cygwin detection from @osroot25 doesn't work with cwRsync: once you force Cygwin detection, the vagrant ssh command requires Cygwin's cygpath command, which you won't have, so you cannot ssh into the VM (well, you can if you use the ssh command directly with all the right options).
