Start a Vagrant VM and a Node.js script from one shell script (bash)

I installed StackEdit in a Vagrant VM. I would like to start Vagrant and StackEdit with one click.
I created a bash script:
#!/bin/bash
vagrant up
#ssh -p 2222 -i /d/stackedit/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -t '/home/vagrant/Code/start_server.sh'
start "C:\Program Files\Mozilla Firefox\firefox.exe" http://stackedit.app:5000
and start_server.sh in the VM:
#!/bin/bash
# start the Node.js server only if no node process is already running
if [ "$(ps -e | grep node | wc -l)" = "0" ] ; then
    (export PORT=5000 && node Code/Project/public/stackedit/server.js) &
fi
sleep 5
exit 0
If I run start_server.sh via ssh manually, everything works, but when I call it over ssh from the start script (the now-commented line), the server doesn't run.
I tried copying the script call into /etc/rc.local, but the result is the same.
I also tried adding @reboot /home/vagrant/Code/start_server.sh to crontab -e, but without success.
Can anyone help me?
My system is Windows 10. I use Git Bash.

You should put everything in your Vagrantfile.
# Run provisioning
You can run your script from your Vagrantfile using a shell provisioner:
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh"
end
Check the options you have: by default the script runs as root, so you can change that if you want to run your script as the vagrant user.
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh", privileged: false
end
You can also make sure your script runs every time you boot the VM (by default it runs only once, or when you specifically call vagrant provision):
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh", run: "always"
end
# Opening the website after the system is running
The Vagrantfile is a Ruby script, so you can call any command from the file, but it would run immediately and on every vagrant invocation.
If you want to run something after the box has started, you can instead use a Vagrant trigger and do something like:
Vagrant.configure(2) do |config|
  # ...
  config.trigger.after :up do |trigger|
    # 'open' launches the URL on macOS; adapt the command to your host OS
    trigger.run = { inline: 'open "http://stackedit.app:5000"' }
  end
end
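Since the host in the question is Windows with Git Bash, the macOS open command would need a Windows-friendly replacement in the trigger's inline string. As an assumption, reusing the start command that the question's own script already relies on:
start http://stackedit.app:5000   # assumed to open the default browser on the Windows host, as in the question's script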

Related

Is it possible to restart a machine when provisioning a machine using Vagrant and pickup where the script left off?

I was reading a bash tutorial that said to restart the machine; there was no option to restart the relevant service directly, it was a matter of restarting the machine, and there were more commands after that which still needed to run during provisioning.
So is there any way to restart a box in the middle of provisioning and then pick up where you left off?
As far as I know, you can't have a single script/set of commands that would carry on where it left off if it attempts to restart the OS, such as:
config.vm.provision "shell", inline: <<-SHELL
echo $(date) > ~/rebootexample
reboot
echo $(date) >> ~/rebootexample
SHELL
In this example the second echo call would not be carried out.
You could split the script/commands up and use a plugin such as vagrant-reload.
An example snippet of a Vagrantfile to highlight its possible use:
# execute code before reload
config.vm.provision "shell", inline: <<-SHELL
  echo $(date) > ~/rebootexample
SHELL

# trigger reload
config.vm.provision :reload

# execute code after reload
config.vm.provision "shell", inline: <<-SHELL
  echo $(date) >> ~/rebootexample
SHELL
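If you go this route, the vagrant-reload plugin would typically be installed on the host first (a one-time step):
vagrant plugin install vagrant-reload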
I've never done this, but if I had to, I would split the script into two pieces: one that runs before the restart and ends with the restart command, and another that runs afterwards.
The first one would also create a lock file.
An overall wrapper script would run the first script if the lock file doesn't exist, or the second one if it does, and that wrapper would be set up to run at startup; a rough sketch follows.
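A minimal sketch of that wrapper, assuming hypothetical part1.sh / part2.sh scripts and a lock file path chosen purely for illustration:
#!/bin/bash
# wrapper.sh - run part 1 before the reboot, part 2 after it (sketch)
LOCK=/var/lock/provision-part1.done    # hypothetical lock file

if [ ! -f "$LOCK" ]; then
    touch "$LOCK"                      # mark part 1 as started before it reboots the box
    /path/to/part1.sh                  # hypothetical pre-reboot script; ends with a reboot
else
    /path/to/part2.sh                  # hypothetical post-reboot script
fi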
One trick you can employ is to send a restart signal and save the rest of the provisioning work as a script to be run on boot:
config.vm.provision "shell", inline: <<-SHELL
echo "Do your thing... DONE"
cat <<-RCLOCAL | sed -s 's_^ __' > /etc/rc.local
#!/bin/bash
echo "This will be run once on next boot and then it's destroyed and never run again"
rm /etc/rc.local
RCLOCAL
chmod o+x /etc/rc.local
shutdown -r now #restart
SHELL
This was tested to work on Debian 9, so you may need to enable services or find another way to get your code bootstrapped to run on the next boot if you're running something else.
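On systemd-based distros, one hedged way to sanity-check whether /etc/rc.local will actually be picked up on the next boot is to look at the rc-local compatibility unit and the file's permissions from inside the VM:
systemctl status rc-local    # compatibility unit that runs /etc/rc.local on systemd distros
ls -l /etc/rc.local          # the file must exist and carry an execute bit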
Unfortunately you can't simply do:
config.vm.provision "shell", inline: "shutdown -r now"
config.vm.provision "shell", inline: "echo 'hello world'"
which results in:
The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.
Vagrant has a reboot option for provisioning; however, the reboot guest capability is currently not supported for Linux.
You can check out my plugin here: https://github.com/secret104278/vagrant_reboot_linux/tree/master . I've implemented the reboot function for Linux.
This can be done like so:
config.vm.provision 'shell', path: 'part1.sh'
config.vm.provision 'shell', reboot: true
config.vm.provision 'shell', path: 'part2.sh'
https://developer.hashicorp.com/vagrant/docs/provisioning/shell#reboot

vagrant provision commands via ssh

I want to be able to provision a VM by executing commands via SSH. I don't want to upload a shell script and execute it, because that will not work in my case: my VM is a virtual appliance that has SSH support but no bash. Is this possible?
Thanks
Indeed you would not be able to use the shell provisioner with a script file.
A possibility is to use an inline script:
$script = <<SCRIPT
echo I am provisioning...
apt-get install -y apache2
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end
This should not upload any file, but should execute the script directly over SSH.
You can also run vagrant ssh -c COMMAND to execute a command over SSH once the machine is up. It is not ideal, but you could collect all your provisioning commands into a script on the host and execute that script from the host once the VM has booted.
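A minimal sketch of that host-side approach (the commands inside the quotes are placeholders, not anything from the question):
#!/bin/bash
# provision-over-ssh.sh - run provisioning commands over SSH after the VM is up (sketch)
set -e
vagrant up
vagrant ssh -c "echo 'step 1: configure something'"   # placeholder command
vagrant ssh -c "echo 'step 2: start a service'"       # placeholder command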

Is it possible to send multiple commands to vagrant ssh via a shell script?

I'm on a Windows host using Git Bash to run the .sh files.
There are 4 components to my current project. To start it up on localhost, I have to:
1. webdriver-manager start, since I'm the QA and need that running anyway
2. vagrant up in the project's parent folder, then close out that window (or just start the VM myself via the VirtualBox UI)
3. vagrant ssh, then cd /vagrant, cd "component's folder", docker-compose up (x 4, once per component)
4. grunt serve
Right now, I have a .sh file each for 1, 2, and 4, but I cannot find how to pass multiple commands along to vagrant ssh, especially since docker-compose up needs to keep running.
Is there a way to pass along those cds and the docker-compose commands?
I found the ssh documentation from Vagrant, which mentions needing to do fancy things to get it running background processes, but I have no idea what it's doing or how to implement that in a .sh file since the wording is so wishy-washy.
Also, I'm new to shell scripts in general, so if there's a smarter way to go about solving this, I'd appreciate it too. These scripts aren't necessary; I just don't want to have to type everything repeatedly every day when I'm running my tests locally.
In your Vagrantfile, have something like this:
$script = <<SCRIPT
echo "running script in the VM"
cd /vagrant
cd "component's folder"
docker-compose up
cd "component's folder 2"
docker-compose up
# and add all other commands you would run from the VM
SCRIPT

Vagrant.configure(2) do |config|
  # ...
  config.vm.provision "shell", inline: $script
  # ...
end
Note: this will run the commands as root inside your VM. If you want to run them as your vagrant user, just do:
config.vm.provision "shell", inline: $script, privileged: false
If the commands need to be invoked on vagrant up, you can provide a provisioning script available on the host machine:
config.vm.provision "shell", path: '/vagrant/scripts/provision.sh'
Vagrant will then upload this script into the guest and execute it (using a URL instead of a path would also work).
Alternatively, you may use the inline shell syntax:
config.vm.provision "shell", inline: "echo Hello, World"
Or, to run a script that is already inside the VM, try:
config.vm.provision "shell", inline: %Q(/usr/bin/env VAR=1 bash /vagrant/script.sh)
To run one-off commands in the VM, you may use the vagrant ssh command, for example:
vagrant ssh -c "cd /vagrant && echo Hello, World"

Can I get vagrant to execute a series of commands where I get shell access and the webserver launches?

Every time I launch Vagrant for one of our projects, I go through the following incantation:
vagrant up
vagrant ssh
sudo su deploy
supervisorctl stop local
workon odoo-8.0
/home/deploy/odoo/build/8.0/openerp-server -c /home/deploy/odoo/local/odoo_serverrc
This runs the server in a way that lets me see the terminal output. Is there a way I could package this all up so I can just do, say, vagrant dev or some such?
You can use the shell provisioner.
In your Vagrantfile, you can do things like this:
$script = <<SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end
You can replace
echo I am provisioning...
date > /etc/vagrant_provisioned_at
with your own commands.
On the first vagrant up that creates the environment, provisioning is run. If the environment was already created and the up is just resuming a machine or booting it up, the provisioners won't run unless the --provision flag is explicitly provided.
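For reference, the two usual ways to re-run provisioners on an already-created machine are:
vagrant up --provision    # boot/resume the machine and force the provisioners to run
vagrant provision         # run the provisioners against an already-running machine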
There are many more good ways to provision; I would also recommend using Ansible. Here is the doc you can read:
https://docs.vagrantup.com/v2/provisioning/basic_usage.html
First, create a shell script with your commands in it:
#!/bin/bash
vagrant up
vagrant ssh
sudo su deploy
supervisorctl stop local
workon odoo-8.0
/home/deploy/odoo/build/8.0/openerp-server -c /home/deploy/odoo/local/odoo_serverrc
Put it somewhere in your guest with Ansible. Next, copy the /home/vagrant/.bashrc file into your Ansible files/ folder. Add the line
bash /path/to/shellfile.sh
to the .bashrc and make sure Ansible copies it into your guest.
After that, the shell script should be executed every time you log into the guest.

Automatically chdir to vagrant directory upon "vagrant ssh"

So, I've got a bunch of Vagrant VMs running some flavor of Linux (CentOS, Ubuntu, whatever). I would like to automatically ensure that a vagrant ssh will also cd /vagrant so that no one has to remember to do that whenever they log in.
I've figured out (duh!) that echo "\n\ncd /vagrant" >> /home/vagrant/.bashrc will do the trick. What I don't know is how to ensure that this only happens if the cd command isn't already there. I'm not a shell expert, so I'm completely confused here. :)
You can do this by using the config.ssh.extra_args setting in your Vagrantfile:
config.ssh.extra_args = ["-t", "cd /vagrant; bash --login"]
Then anytime you run vagrant ssh you will be in the /vagrant directory.
I put
echo "cd /vagrant_projects/my-project" >> /home/vagrant/.bashrc
in my provision.sh, and it works like a charm.
cd is a Bash shell built-in; as long as a shell is installed, it should be there.
Also, be aware that ~/.bash_profile is for interactive login shells; if you add cd /vagrant to ~vagrant/.bashrc, it may NOT work.
That is because distros like Ubuntu do NOT have ~/.bash_profile by default and instead use ~/.bashrc and ~/.profile.
If someone creates a ~/.bash_profile for the vagrant user on Ubuntu, ~vagrant/.bashrc will not be read by login shells.
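For context, the stock Ubuntu ~/.profile is what hands control to ~/.bashrc for login shells; the relevant part looks roughly like this (paraphrased from the default file, not taken from the question):
# if running bash, also source ~/.bashrc
if [ -n "$BASH_VERSION" ]; then
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
Once a ~/.bash_profile exists, bash reads it instead of ~/.profile for login shells, so this hand-off never happens.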
You need to add cd /vagrant to your .bashrc in the vm. The best way to do this is in your provisioner script.
If you don't have a provisioner script, make one by adding this line to your Vagrantfile before end:
config.vm.provision "shell", path: "scripts/vagrant/provisioner.sh", privileged: false
Path is relative to the project root where the Vagrantfile is, and privileged depends on your project and what else is in your provisioner script, which might need to be privileged. I use privileged: false and call sudo explicitly when necessary.
And in the provisioner script:
if ! grep -q "cd /vagrant" ~/.bashrc ; then
  echo "cd /vagrant" >> ~/.bashrc
fi
This will add cd /vagrant to .bashrc, but only if it isn't there already. This is useful if you reprovision, as it will prevent your .bashrc from getting cluttered.
Some answers mention a conflict with .bash_profile. If the above code doesn't work, you can try the same line with .bash_profile or .profile instead of .bashrc. However, I've been using Vagrant with Ubuntu guests, and my Laravel/Homestead box based on Ubuntu has a .bash_profile and a .profile, but having cd /vagrant in .bashrc did work for me when using vagrant ssh, without changing or deleting the other files.
You can add cd /vagrant to your .bashrc and it will run the command when you ssh. The .bashrc you want is in /home/vagrant (the user you log in as when you vagrant ssh). You can just stick the new line at the bottom of the file.
You can also do it this way:
vagrant ssh -c "cd /vagrant && bash"
And you could include it in a script to launch it (like ./vagrant-ssh).
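A minimal sketch of such a wrapper (the ./vagrant-ssh name comes from the sentence above; the rest is just illustration):
#!/bin/bash
# vagrant-ssh: open an interactive shell in the VM, already inside /vagrant
vagrant ssh -c "cd /vagrant && bash"
Make it executable once with chmod +x vagrant-ssh and then run ./vagrant-ssh.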
Maybe this can help. Edit the Vagrantfile, replacing vagrant with your username where needed:
config.vm.provision "shell" do |s|
  s.inline = <<-SHELL
    # Change directory automatically on ssh login
    if ! grep -qF "cd /home/vagrant/ansible" /home/vagrant/.bashrc ;
    then echo "cd /home/vagrant/ansible" >> /home/vagrant/.bashrc ; fi
    chown vagrant. /home/vagrant/.bashrc
  SHELL
end
Ideally we just want to alter the vagrant ssh behaviour.
In my case, I wanted something that didn't affect any other processes in the environment, so we can do something like this in the Vagrantfile:
VAGRANT_COMMAND = ARGV[0]

if VAGRANT_COMMAND == "ssh"
  config.ssh.extra_args = ["-t", "cd /vagrant; bash --login"]
end
You can use Ansible to assert that your .bashrc file contains cd /vagrant.
If you are not already using the Ansible provisioner for your VM, add the following lines to your Vagrantfile:
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "provisioning/playbook.yml"
end
And in your playbook, add the following task/play:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: chdir to vagrant directory
      ansible.builtin.lineinfile:
        path: /home/vagrant/.bashrc
        line: cd /vagrant
According to this Q&A, I would recommend modifying .bashrc instead of .profile or .bash_profile.
