I am trying to provision my Vagrant box with Salt, asking it to install Apache.
I am using salty-vagrant in masterless mode.
The Vagrant box boots, but the console gets stuck with the following message:
[default] Running provisioner: salt...
Checking if salt-minion is installed
salt-minion found
Checking if salt-call is installed
salt-call found
Salt did not need installing or configuring.
Calling state.highstate... (this may take a while)
When I check the Vagrant Salt log, I find the following:
[salt.utils ][ERROR ] This master address: 'salt' was previously resolvable but now fails to resolve! The previously resolved ip addr will continue to be used
[salt.minion ][WARNING ] Master hostname: salt not found. Retrying in 30 seconds
Has anyone faced this issue before?
You need to make sure you are passing a minion config with the following option set:
file_client: local
Read all the steps in the Masterless Quick Start: https://github.com/saltstack/salty-vagrant#masterless-quick-start
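For reference, a minimal masterless setup usually looks something like the sketch below; the option names (minion_config, run_highstate) come from the salty-vagrant/Vagrant salt provisioner, but the paths are illustrative, so adjust them to your project layout:
config.vm.synced_folder "salt/roots/", "/srv/salt/"
config.vm.provision :salt do |salt|
  salt.minion_config = "salt/minion"   # this file must contain: file_client: local
  salt.run_highstate = true
end
Without file_client: local in the minion config, salt-call still tries to reach a master named "salt", which matches the failed hostname lookup in the log above.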
Related
I need to create a new Laravel project and I need to use MongoDB as the database server. Following the Homestead documentation, I added this to my Homestead.yaml file:
mongodb: true
From what I see in the logs the mongo database is created:
homestead-7: Running: script: Creating Mongo Database: homestead
But after this I received this message:
homestead-7: Running: script: Creating Mongo Database: homestead
homestead-7: MongoDB shell version v3.6.3
homestead-7: connecting to: mongodb://127.0.0.1:27017/homestead
homestead-7: 2019-06-03T10:01:52.103+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
homestead-7: 2019-06-03T10:01:52.104+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
homestead-7: connect#src/mongo/shell/mongo.js:251:13
homestead-7: #(connect):1:6
homestead-7: exception: connect failed
The SSH command responded with a non-zero exit status.
From what I found on the internet, the cause may be that the mongo service is not started. I restarted the box (without provisioning this time) but got the same result. Command:
vagrant@homestead:~$ mongo
Also, I found some solutions that involve changing some files on an Ubuntu OS, but in my case that will not work because the box starts as a fresh instance.
Any idea how to fix this? Thanks in advance!
Laravel version: 5.8.
Homestead: 8.4.0
MongoDB shell: v3.6.3
LATER EDIT
After the VM has started I executed this command:
sudo apt-get install mongodb
After installation I can execute the "mongo" command:
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-use
Strange, so MongoDB actually isn't installed?! Even though I added the flag. Now I need to figure out how to install it every time the VM is started.
I managed to fix my problem after hours of searching, so I will post the fix.
Because I didn't find anything that could help me, I started to check the Homestead scripts in order to understand how Mongo is installed, and in homestead.rb I found this block:
# Install MongoDB If Necessary
if settings.has_key?('mongodb') && settings['mongodb']
  config.vm.provision 'shell' do |s|
    s.name = 'Installing MongoDb'
    s.path = script_dir + '/install-mongo.sh'
  end
end
So I searched where "install-mongo.sh" is called and I found this condition:
if [ -f /home/vagrant/.mongo ]
then
    echo "MongoDB already installed."
    exit 0
fi
So MongoDB is not installed every time; it is installed only if the /home/vagrant/.mongo file doesn't exist. At this point I realized that maybe the Mongo installation had failed, but the marker file was written anyway.
So the solution was to destroy the Vagrant box and recreate it from scratch:
vagrant destroy
vagrant up --provision
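As an aside, given the marker-file check shown above, it might also be enough to delete the marker and re-provision instead of destroying the whole box; I have not verified this, so treat it as a sketch:
vagrant ssh -c "rm -f /home/vagrant/.mongo"
vagrant reload --provision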
In Homestead.yaml, under features:, add - mongodb: true and run vagrant reload --provision. That is the same as what @NicuVlad has suggested, but a little bit easier.
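A sketch of the relevant Homestead.yaml section, assuming a Homestead version that nests optional services under a features list (older versions use a top-level mongodb: true as in the question):
features:
    - mongodb: true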
I tried to set up a new Laravel project this afternoon and I must have done something to my Homestead/Vagrant configuration that ruined it. I think the command I used was vagrant reload {id}.
Now when I try to start my machine, I get the following error:
Bringing machine 'homestead-7' up with 'vmware_fusion' provider...
==> homestead-7: Checking if box 'laravel/homestead' is up to date...
==> homestead-7: Verifying vmnet devices are healthy...
==> homestead-7: Preparing network adapters...
Vagrant found a port collision for the specified port and virtual machine.
While this port was marked to be auto-corrected, the ports in the
auto-correction range are all also used.
VM: homestead-7
Forwarded port: 80 => 8000
When I run vagrant global-status, I get this:
id       name         provider       state        directory
------------------------------------------------------------------------
410757f  homestead-7  vmware_fusion  not running  /Users/Me/Homestead
I can't run vagrant reload 410757f as I get the same error above, and I can't provision the machine because it needs to be running.
I'm confused as to what's happening here. There is a networking collision, but I don't have any other Vagrant boxes. I currently have 4 other Windows VMs, but I made sure I shut down each machine.
I've even tried destroying and recreating the homestead box (no luck). Any ideas?
Edit: To expand on this, I tried looking for the process using sudo lsof -i :8000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
vmnet-nat 3943 root 42u IPv4 0x70d4a03b6f2dddd 0t0 TCP *:irdmi (LISTEN)
I killed that using sudo kill -9 3943 and ran sudo lsof -i :8000 again, which gave me nothing. Running homestead up again then gave me the same error.
I seemed to solve this by removing and reinstalling VMware.
Based on my readings online, the issue seemed to be with a cached setting relating to a previous VM on Fusion. Rather than hunt down what that might have been, I thought it was easier to just delete everything and reinstall.
I followed the instructions here and then downloaded and reinstalled it from the VMware website.
I hope this helps someone in the future!
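For background on the "auto-correction range" mentioned in the original error: when a forwarded port collides, Vagrant picks a replacement from config.vm.usable_port_range, and the error only becomes fatal once every port in that range is taken. Widening the range in the Vagrantfile is one way to sidestep that (an illustrative sketch, not the fix used above):
Vagrant.configure("2") do |config|
  # Give Vagrant a wider range of host ports to auto-correct collisions into
  config.vm.usable_port_range = 8000..8999
end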
While working with Chef, Kitchen, Vagrant and VirtualBox today, I encountered a bizarre issue when attempting to use the Bento boxes hosted by HashiCorp (https://atlas.hashicorp.com/bento/) to do some Chef cookbook development/testing.
While spinning up a new cookbook, I wanted to test some newer versions of CentOS 7.2 and Ubuntu 16.04 which are not currently live in our environment. I turned to HashiCorp's Bento boxes to pull them down into my .kitchen.yml config.
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  customize:
    memory: 1024
platforms:
  - name: ubuntu-16.04
suites:
  - name: default
    run_list:
      - recipe[sandbox::default]
    attributes:
I used chef generate cookbook to create a new cookbook and, as you can see above, was using a very vanilla Kitchen config to get things started.
When running kitchen create, I kept encountering the following SSH timeout error when provisioning the VM using Vagrant and VirtualBox.
ERROR:
Timed out while waiting for the machine to boot.
This means that Vagrant was unable to communicate with the guest machine
within the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
When observing the VirtualBox VM console, I noted the following (see the screenshot below):
A start job is running for Raise network interfaces (2 min 39s / 5min 3s)
Observing that Vagrant would time out before the start job completed, I attempted to resolve this by increasing the boot_timeout from the default of 300 seconds to 600 seconds in my .kitchen.yml.
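The change looked roughly like this; the kitchen-vagrant driver exposes a boot_timeout setting that it passes through to Vagrant's config.vm.boot_timeout (support may depend on the driver version):
driver:
  name: vagrant
  boot_timeout: 600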
However, further testing proved that this did not resolve the issue: even though the VM would successfully initialize after 5 min 3 s, Kitchen/Vagrant was still unable to SSH to the guest and the Vagrant SSH timeout persisted.
Ultimately, to resolve this issue I upgraded ChefDK, Vagrant, and VirtualBox to the latest versions available.
Experienced the issue with...
VirtualBox 5.0.30 r112061
Vagrant 1.8.6
Chef Development Kit 0.19.6
Resolved the issue by upgrading to...
VirtualBox 5.1.10 r112026
Vagrant 1.9.0
Chef Development Kit Version: 1.0.3
Following the version upgrades, the Vagrant SSH Timeouts disappeared completely and the box was created successfully within a few seconds.
VirtualBox VM console (screenshot)
I am following https://laravel.com/docs/5.3/homestead and ran:
bash init.sh
cp: overwrite '/c/Users/myuser/.homestead/Homestead.yaml'? y
cp: overwrite '/c/Users/myuser/.homestead/after.sh'?
cp: overwrite '/c/Users/myuser/.homestead/aliases'?
Homestead initialized!
I don't know if these need to be overwritten?
Also, I configured Homestead.yaml:
folders:
    - map: ~/Homestead
      to: /home/vagrant/Code
It shows the error: The host path of the shared folder is missing: ~/Homestead
My Homestead installation is at:
$ pwd
/Homestead
I am sure some steps are missing. Can someone help?
Installation method - Per Project Installation
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'laravel/homestead' is up to date...
==> default: Running provisioner: shell...
SSH authentication failed! This is typically caused by the public/private
keypair for the SSH user not being properly set on the guest VM. Please
verify that the guest VM is setup with the proper public key, and that
the private key path for Vagrant is setup properly as well.
It is a fresh installation!
vagrant destroy && vagrant up
default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Authentication failure. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Yes, you should overwrite these files so as to have a clean installation of Homestead (just in case). Now, the reason you are getting this error is that your YAML configuration is wrong. Assuming you have your Laravel code in a Code folder inside your Documents folder, your YAML should look like this (NB: this is Windows-specific!):
folders:
    - map: "C:/Users/Username/Documents/Code"
      to: "/home/vagrant/Code"
You need to map the folder containing your local code to a folder in your virtual machine. This will then be set up using shared folders. This way, any code changes in your Code folder will be mirrored to your VM.
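After updating Homestead.yaml, re-provision the box so the new folder mapping is actually applied:
vagrant reload --provision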
Personally, I prefer the Per Project Installation because I can have multiple projects running on different VMs mapped to different domains, at the same time. Check it out here: https://laravel.com/docs/5.3/homestead#per-project-installation
I'm attempting to get started with Kubernetes and am doing a Vagrant/VirtualBox install as per http://kubernetes.io/docs/getting-started-guides/binary_release/#download-kubernetes-and-automatically-set-up-a-default-cluster
My commands are:
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
I get the following errors at the terminal:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
master: Warning: Authentication failure. Retrying...
<snip>
master: Warning: Authentication failure. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
The script then exits without completing the Kubernetes "master" setup and without setting up any nodes.
I am able to vagrant ssh master, but need to manually enter the default "vagrant" password.
I am running OS X 10.11.6 (15G31), with recent versions of VirtualBox (5.0.26 r108824) and Vagrant (1.8.5).
These Kubernetes "getting started" instructions appear to download the latest Kubernetes version (1.3.4).
Because I had older VirtualBox and Vagrant versions installed, I made sure to completely uninstall and reinstall both, as per:
https://www.virtualbox.org/manual/ch02.html#idm871
https://www.vagrantup.com/docs/installation/uninstallation.html
I assume there is something going wrong with ssh somewhere, though given that these are the published "getting started" instructions and I am using fresh installs of all components, I am surprised that this is not working right out of the box.
Take a look at https://github.com/kubernetes/minikube. It's an official Kubernetes project intended to simplify this exact use case. I've been using it for a few weeks and it works great.
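As a rough sketch of what getting started with Minikube looked like at the time (flag names have since changed, e.g. --vm-driver is now --driver, so treat the exact invocation as illustrative):
minikube start --vm-driver=virtualbox
kubectl get nodes    # should list a single "minikube" node once the cluster is up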
The easiest way to run Kubernetes on OS X, I think, is by using Kube-Solo or Kube-Cluster.
Please check this repo:
https://github.com/TheNewNormal/kube-cluster-osx
Note: for me it only works well with the CoreOS stable release.