I'm trying to add "redis-server --daemonize yes" to my Vagrantfile using a trigger, but it's failing with this message:
The executable 'redis-server' Vagrant is trying to run was not found in the PATH variable.
Before trying this method, I would just run the command after SSHing into the box, and it always worked as-is.
Here is the code in my Vagrantfile:
config.trigger.after :up do |trigger|
  trigger.info = "Starting Redis"
  trigger.name = "Redis Server"
  trigger.run = { inline: "redis-server --daemonize yes" }
end
Does anyone have any recommendations? If I have to put it into the PATH, what would I put?
Thanks
Vagrant Trigger's run option runs an inline or remote script on the host.
Per your comments, you want the command to run on the guest/VM, so you need to use the run_remote option instead: a collection of settings used to run an inline or remote script on the guest.
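A minimal sketch of the question's trigger rewritten with run_remote (the Redis flags are taken from the question; since the command runs on the guest, redis-server is found on the guest's PATH):

```ruby
# Vagrantfile (fragment) - start Redis on the guest after "vagrant up"
config.trigger.after :up do |trigger|
  trigger.info = "Starting Redis"
  trigger.name = "Redis Server"
  # run_remote executes on the guest, where redis-server is installed
  trigger.run_remote = { inline: "redis-server --daemonize yes" }
end
```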
Related
I am developing an Angular 4 app using Vagrant. I have installed the vagrant-fsnotify plugin in order to be notified of file system changes and trigger a hot build. The problem I have is: how do I run vagrant fsnotify automatically when the VM boots?
Maybe vagrant-trigger can help you run this command every time you boot your VM. An example would look like:
Vagrant.configure("2") do |config|
  # Your existing Vagrant configuration
  ...

  # start fsnotify on host after the guest starts
  config.trigger.after :up do
    run "vagrant fsnotify"
  end
end
The correct form of the trigger statement is:
# start fsnotify on host after the guest starts
config.trigger.after :up do |trigger|
  trigger.run = { inline: "bash -c 'vagrant fsnotify > output.log 2>&1 &'" }
end
While vagrant up is executing, any call to vagrant status will report that the machine is 'running', even if the provisioning is not yet complete.
Is there a simple command for asking whether the vagrant up call is done and the machine is fully-provisioned?
You could have your provision script write to a networked file and query that. Or you could run vagrant ssh -c /check/for/something if there is a file or service to check against. Your provision script could also ping out to a listener you set up.
You could also use the Vagrant log or debug output to check when provisioning is done.
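As a concrete sketch of the sentinel-file idea: have the provision script create a marker file as its very last step, then poll for it from the host over vagrant ssh. The polling helper below is generic; the marker path and timeout are assumptions, not anything Vagrant provides.

```shell
#!/bin/sh
# wait_for TIMEOUT_SECONDS CMD... : run CMD once per second until it
# succeeds (exit 0), or give up after TIMEOUT_SECONDS attempts (exit 1).
wait_for() {
  timeout=$1
  shift
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + 1))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 1
  done
  return 0
}

# Usage sketch: make the provision script end with
#   touch /var/tmp/provisioned
# and then, on the host:
#   wait_for 3600 vagrant ssh -c 'test -f /var/tmp/provisioned'
```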
I want Vagrant to open the site specific to the box after starting. How can I make this happen?
The Vagrantfile is a Ruby script, so you can call any command from the file, but it will run the command immediately, on every invocation.
If you want to run it only after the box has started, you can use the vagrant-trigger plugin and do something like:
Vagrant.configure(2) do |config|
  .....

  config.trigger.after :up do
    system("open", "http://stackoverflow.com/")
  end
end
On Linux (Kubuntu 18 LTS) the following worked great:
config.trigger.after [:up] do |trigger|
  trigger.name = "Open default browser"
  if Vagrant::Util::Platform.linux?
    trigger.run = { inline: "bash -c 'xdg-open http://#{configure['BOX_IP']}'" }
  end
end
It is also possible to check for other host platforms like this:
# Check for Windows
Vagrant::Util::Platform.windows?
# Check for MacOS
Vagrant::Util::Platform.darwin?
With Vagrant 2.2.5 the triggers work out of the box. No additional plugin is required.
Our vagrant box takes ~1 hour to provision, so when vagrant up is run for the first time, at the very end of the provisioning process I would like to package the box into an image in a local folder, so it can be used as the base box the next time it needs to be rebuilt. I'm using the vagrant-triggers plugin to place the code right at the end of the :up process.
Relevant (shortened) Vagrantfile:
pre_built_box_file_name = 'image.vagrant'
pre_built_box_path = 'file://' + File.join(Dir.pwd, pre_built_box_file_name)
pre_built_box_exists = File.file?(pre_built_box_path)

Vagrant.configure(2) do |config|
  config.vm.box = 'ubuntu/trusty64'
  config.vm.box_url = pre_built_box_path if pre_built_box_exists

  config.trigger.after :up do
    if not pre_built_box_exists
      system("echo 'Building gett vagrant image for re-use...'; vagrant halt; vagrant package --output #{pre_built_box_file_name}; vagrant up;")
    end
  end
end
The problem is that vagrant locks the machine while the current (vagrant up) process is running:
An action 'halt' was attempted on the machine 'gett',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.
I understand the dangers of two processes provisioning or modifying the machine at one given time, but this is a special case where I'm certain the provisioning has completed.
How can I manually "unlock" vagrant machine during provisioning so I can run vagrant halt; vagrant package; vagrant up; from within config.trigger.after :up?
Or is there at least a way to start vagrant up without locking the machine?
vagrant
This issue was fixed in GH #3664 (2015). If it is still happening, it's probably related to plugins (such as AWS), so try without plugins.
vagrant-aws
If you're using AWS, then follow this bug/feature report: #428 - Unable to ssh into instance during provisioning, which is currently pending.
However there is a pull request which fixes the issue:
Allow status and ssh to run without a lock #457
So apply the fix manually, or wait until it's fixed in the next release.
If you get this error for machines which are no longer valid, try running the vagrant global-status --prune command.
Definitely a bit more of a hack than a solution, but I'd rather a hack than nothing.
I ran into this issue and nothing suggested here was working for me. Even though this is 6 years old, it's what came up on Google (along with precious little else), so I thought I'd share what solved it for me in case anyone else lands here.
My Setup
I'm using vagrant with ansible-local provisioner on a local virtualbox VM, which provisions remote AWS EC2 instances. (i.e. the ansible-local runs on the virtualbox instance, vagrant provisions the virtualbox instance, ansible handles the cloud). This setup is largely because my host OS is Windows and it's a little easier to take Microsoft out of the equation on this one.
My Mistake
I ran an Ansible shell task with a command that doesn't terminate without user input (and did not run it with & to put it in the background).
My Frustration
Even in the Linux subsystem, trying ps aux | grep ruby or ps aux | grep vagrant was unhelpful because the PID would change every time. There's probably a reason for this, likely to do with how the subsystem works, but I don't know what it is.
My Solution
Just kill the AWS EC2 instances manually, in the console or in the CLI, whichever you prefer. The terminal where you were running vagrant provision or vagrant up should then finally complete and spit out the summary output, even if you ctrl+C'd out of the command.
Hoping this helps someone!
Vagrant's chef-client provisioning fails until I RDP into the VM and log in as the 'vagrant' user for the first time.
The debug output says:
INFO interface: Machine: error-exit ["Berkshelf::VagrantWrapperError", "VagrantPlugins::CommunicatorWinRM::Errors::WinRMBadExitStatus: The following WinRM command responded with a non-zero exit status.\nVagrant assumes that this means the command failed!\n\ncmd.exe /c install.bat 11.16.4\n\nStdout from the command:\n\nDownloading Chef 11.16.4 for Windows...\r\nInstalling Chef 11.16.4\r\n\n\nStderr from the command:\n\n"]
Ideally, I could "vagrant destroy" and "vagrant up" back to back without any other steps necessary.
How can I work around this?
I am using:
Vagrant 1.6.5
Chef 11.16.4
Windows 8 (kensykora/windows_81)
Windows 2012 (kensykora/windows_2012_r2_standard)
You could set up the vagrant user to auto-login:
https://technet.microsoft.com/en-us/magazine/ee872306.aspx
Then you can package the modified box instance and use the new box from there on:
http://docs.vagrantup.com/v2/cli/package.html
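The auto-logon setup in that article comes down to three values under the Winlogon registry key, which you could set from a guest provisioning step. A sketch, assuming the box uses the usual vagrant/vagrant credentials (adjust for your box):

```shell
:: Run inside the Windows guest (e.g. from a provisioning script).
:: Enables automatic logon for the "vagrant" user at boot.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d vagrant /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d vagrant /f
```

After confirming chef-client provisions cleanly, vagrant package the box as described above.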