How to organize users in vagrant shell provision?

So I'm setting up a vagrant environment for our small team of 4 developers. I'm using an Ubuntu/Precise32 box and I created a shell script for provisioning with lots of apt-get and cp calls. Something like this:
#!/bin/bash
#filename: provision.sh
sudo apt-get update
apt-get install debconf-utils -y > /dev/null
debconf-set-selections <<< "mysql-server mysql-server/root_password password myPassword"
debconf-set-selections <<< "mysql-server mysql-server/root_password_again password myPassword"
sudo apt-get install -y vim apache2 mysql-server-5.5 mysql-client git sqlite python-pip phpmyadmin
sudo pip install virtualenv
sudo pip install Django==1.4
sudo a2enmod rewrite
sudo service apache2 restart
echo "Copying hosts ..."
sudo cp /vagrant/hosts /etc/
echo "Copying .gitconfig ..."
sudo cp /vagrant/.gitconfig /home/vagrant/
echo "Copying .bashrc ..."
sudo cp /vagrant/.bashrc /home/vagrant/
echo "Copying .bash_aliases ..."
sudo cp /vagrant/.bash_aliases /home/vagrant/
sudo ln -fs /usr/share/phpmyadmin /var/www
if [ ! -d "/vagrant/projects" ]; then
echo "Creating folder /vagrant/projects"
mkdir /vagrant/projects
fi
cd /vagrant/projects
#git clone myServer:/git/project.git
#can't clone because the user is vagrant; it tries ssh vagrant@myServer and asks for a password
Now I would like to clone some git repositories (from our own servers) if they don't already exist. But during provisioning the active user is vagrant, and I don't want to create a vagrant user on our git server or on any of the other servers we use.
Each developer on the team already has an ssh account on the other servers. So should I just create all the users in all the vagrant boxes? And if so, how can they ssh into the other servers without a password?
I don't want the developers (myself included) to do user management on their own vagrant box (stuff like adduser, ssh-copy-id, etc.). I want to provision everything, like cloning git repositories and maybe rsync'ing, but I want to be able to set up the right user for the different vagrant boxes.
I want to be able to do this from shell provision:
If Vagrant box 1 => create user developer1 that already has passwordless ssh access to our servers
If Vagrant box 2 => create user developer2 that already has passwordless ssh access to our servers
If Vagrant box 3 => create user developer3 that already has passwordless ssh access to our servers
If Vagrant box 4 => create user developer4 that already has passwordless ssh access to our servers
Thank you!

I don't know the answer, but hopefully I can point you toward a possible solution.
I'm guessing the /vagrant share points to the host in your setup, in which case you could store that information in the project folder on each developer's machine and then read it during provisioning.
Alternatively, try using Socket.gethostname in the Vagrantfile - in Ruby it returns the host computer's name, so you could use it to sniff which developer's machine the Vagrantfile is running on.
i.e.
require 'socket'

if Socket.gethostname == 'Developer1PC'
  # provision for developer1 here
end
if Socket.gethostname == 'Developer2PC'
  # provision for developer2 here
end
if Socket.gethostname == 'Developer3PC'
  # provision for developer3 here
end
if Socket.gethostname == 'Developer4PC'
  # provision for developer4 here
end
You'll have to excuse any ruby errors, I'm not a ruby dev, but I've just had to do something along similar lines in Vagrant.
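Building on that idea, here is one hedged sketch of a provision.sh. Note the assumptions, which are not part of the original setup: the Vagrantfile passes the developer name via the provisioner's args option (e.g. config.vm.provision "shell", path: "provision.sh", args: ["developer1"]), and each developer's private key has been copied under keys/ in the shared folder.

```shell
#!/bin/bash
# provision.sh -- hypothetical sketch, not the asker's original script.
# Assumes the Vagrantfile passes the developer name as an argument and
# that each developer's private key lives at /vagrant/keys/<name>/id_rsa
DEV_USER="${1:-developer1}"
KEY_SRC="/vagrant/keys/$DEV_USER/id_rsa"

setup_user() {
  # Create the per-developer account if it does not exist yet
  id "$DEV_USER" >/dev/null 2>&1 || useradd -m -s /bin/bash "$DEV_USER"
  # Install the key so the guest user can reach the git server without a password
  install -d -m 700 -o "$DEV_USER" -g "$DEV_USER" "/home/$DEV_USER/.ssh"
  install -m 600 -o "$DEV_USER" -g "$DEV_USER" "$KEY_SRC" "/home/$DEV_USER/.ssh/id_rsa"
}

clone_projects() {
  # Clone as the developer, not as the vagrant user
  sudo -u "$DEV_USER" git clone myServer:/git/project.git /vagrant/projects/project
}

# Only act when running as root inside the box and the key is present
if [ "$(id -u)" -eq 0 ] && [ -f "$KEY_SRC" ]; then
  setup_user
  clone_projects
fi
```

For a fully non-interactive clone you would also need the git server's host key in the user's known_hosts (e.g. ssh-keyscan it during provisioning).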

Related

Prevent .bash_profile from executing when connecting via SSH

I have several servers running Ubuntu 18.04.3 LTS. Although it's considered bad practice to auto login, I understand the risks.
I've done the following to auto-login the user:
sudo mkdir /etc/systemd/system/getty@tty1.service.d
sudo nano /etc/systemd/system/getty@tty1.service.d/override.conf
Then I add the following to the file:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin my_user %I $TERM
Type=idle
Then, I edit the following file for the user to be able to automatically start a program:
sudo nano /home/my_user/.bash_profile
# Add this to the file:
cd /home/my_user/my_program
sudo ./program
This works great on the console when the server starts, however, when I SSH into the server, the same program is started and I don't want that.
The simplest solution is to SSH with a different user but is there a way to prevent the program from running when I SSH in using the same user?
The easy approach is to check the environment for variables ssh sets; there are several.
# only run my_program on login if not connecting via ssh
if [ -z "$SSH_CLIENT" ]; then
cd /home/my_user/my_program && sudo ./program
fi
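A slightly more defensive variant is sketched below (an assumption on my part, not from the answer above): it also checks SSH_TTY and SSH_CONNECTION, two other variables OpenSSH exports, in case one of them is missing in a particular session type.

```shell
# Sketch: treat the login as an SSH session if any of the standard
# OpenSSH environment variables is set
is_ssh_session() {
  [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ] || [ -n "$SSH_CONNECTION" ]
}

# Only start the program on a local (console) login
if ! is_ssh_session; then
  cd /home/my_user/my_program 2>/dev/null && sudo ./program
fi
```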

Vagrant - Provisioning script not changing directory

New to vagrant, please help!
Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "laravel/homestead"
config.vm.provision "shell", path: "vm-setup/provision.sh"
end
vm-setup/provision.sh
# Update apt-get
apt-get -y update
# Install tree
apt-get install tree
# Create .bash_aliases
sudo echo 'alias cls="clear"' >> ~/.bash_aliases
sudo chsh -s $(which zsh) vagrant
cd /vagrant
provision.sh file runs fine. When I run "vagrant provision" it updates apt-get, installs tree and even changes the shell to ZSH.
But the sudo echo 'alias cls="clear"' >> ~/.bash_aliases and cd /vagrant lines do not work, and I'm not sure why. When I vagrant ssh into the machine, I am taken to the home directory (/home/vagrant). I would like to start in the /vagrant folder.
Vagrant's shell provisioner by default runs with privileged = true:
privileged (boolean) - Specifies whether to execute the shell script
as a privileged user or not (sudo). By default this is "true".
When you perform vagrant ssh you login to a VM as vagrant user.
That's why:
1.
# Create .bash_aliases
sudo echo 'alias cls="clear"' >> ~/.bash_aliases
It writes to root's ~/.bash_aliases and it is really there:
root@vagrant:~# id
uid=0(root) gid=0(root) groups=0(root)
root@vagrant:~# cat .bash_aliases
alias cls="clear"
Solution: write to vagrant's home folder:
# Create .bash_aliases
echo 'alias cls="clear"' >> /home/vagrant/.bash_aliases
chown vagrant:vagrant /home/vagrant/.bash_aliases
2.
cd /vagrant
This only means the directory was changed inside the provisioning script; nothing carries over to your later ssh session.
Solution: add this statement to vagrant's .bash_aliases as well:
echo 'cd /vagrant' >> /home/vagrant/.bash_aliases
Your final vm-setup/provision.sh is:
# Update apt-get
apt-get -y update
# Install tree
apt-get install -y tree
# Create .bash_aliases
echo 'alias cls="clear"' >> /home/vagrant/.bash_aliases
echo 'cd /vagrant' >> /home/vagrant/.bash_aliases
chown vagrant:vagrant /home/vagrant/.bash_aliases
chsh -s $(which zsh) vagrant
Even though that's not the case here, just for the sake of completeness:
I had struggled many times while trying to use Vagrant as a testing tool for setup scripts, and just now I realized the underlying reason:
Using this Vagrantfile statement:
config.vm.provision "shell", path: "myScript.sh"
the contents of myScript.sh are inlined and fed to the virtual machine's standard input. This is good in the sense that you don't need access to the actual script from inside the virtual machine (typically through the /vagrant path).
...but it comes with the drawback that relative paths won't work properly.
Of course: We can adjust it to absolute path based on /vagrant. But this requires to modify the script we are trying to test.
So in this case (and, in my opinion, in any case where we are not going to disable the /vagrant share), it is a better solution to use the inline: option with the machine-internal path:
config.vm.provision "shell", inline: "/vagrant/myScript.sh"
...This will inline this statement instead of the contents of the file, and relative paths (or even script-path-based ones such as $(dirname "${0}")/relative/path) will work properly.
Additionally, if the setup script you are going to test is intended to be executed by non-privileged users (for example, if it sets up some user configuration we expect to work just after a vagrant ssh, as the vagrant user), it is also a good idea to add the privileged: false option, pointed out by @Nickolay:
config.vm.provision "shell", inline: "/vagrant/myScript.sh", privileged: false

boot2docker startup script to mount local shared folder with host

I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager under the image properties->shared folders I've added the folder I've want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I need that every time I use docker, I'd like it to just run on boot, or already be there. So I thought there might be some startup script I could add to, but I can't seem to find where that would be.
Thanks
EDIT: It's yelling at me about this being a duplicate of Boot2Docker on Mac - Accessing Local Files which is a different question. I wanted to mount a folder that wasn't one of the defaults such as /User on OSX or /c/Users on windows. And I'm specifically asking for startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it is run by the init script /opt/bootscripts.sh.
And bootscripts.sh will also send the output to /var/log/bootlocal.log - see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
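So for the share in the question, a bootlocal.sh could be sketched like this (assuming the VirtualBox shared folder is named c/shared, as set up in the question; remember to chmod +x the file):

```shell
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh -- hypothetical sketch; runs at boot as root
SHARE_NAME="c/shared"
MOUNT_POINT="/c/shared"

mkdir -p "$MOUNT_POINT" 2>/dev/null
# Only attempt the mount when the vboxsf filesystem is actually available
if grep -q vboxsf /proc/filesystems 2>/dev/null; then
  mount -t vboxsf "$SHARE_NAME" "$MOUNT_POINT"
fi
```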
One use case for me:
I usually put the shared directory at /c/Users/larry/shared, then I add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
So each time, I can access ~/shared in boot2docker the same as on the host.
see FAQ.md (provided by @KCD)
If using boot2docker (Windows) you should do following:
First create shared folder for boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
#Then make this folder automount
docker-machine ssh
vi /var/lib/boot2docker/profile
Add following at the end of profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder either using docker run or docker-compose.
Eg:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If the changes in the profile file are lost after a VM or Windows restart, please do the following:
1) Edit the file C:\Program Files\Docker Toolbox\start.sh and comment out the following line:
#line number 44 (or somewhere around that)
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
#change the line above to:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. Here are a few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With docker 1.3 you do not need to manually mount anymore. Volumes should work properly as long as the source on the host vm is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I can't make it work following Larry Cai's instructions. I figured I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh" and add the following:
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
then your mount command:
./boot2docker.exe ssh 'sudo mount -t vboxsf c/shared /c/shared'
I also add the command to start my container here:
docker start KDP

Can I get vagrant to execute a series of commands where I get shell access and the webserver launches?

Every time I launch vagrant for one of our projects, I go through the following incantation:
vagrant up
vagrant ssh
sudo su deploy
supervisorctl stop local
workon odoo-8.0
/home/deploy/odoo/build/8.0/openerp-server -c /home/deploy/odoo/local/odoo_serverrc
This runs the server in a way that lets me see the terminal output. Is there a way I could package this all up so I can just run, say, vagrant dev or some such?
You can use the shell provisioner.
In your vagrantfile, you can do things like this:
$script = <<SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: $script
end
You can replace
echo I am provisioning...
date > /etc/vagrant_provisioned_at
with your own commands.
Provisioning runs on the first vagrant up that creates the environment. If the environment was already created and the up is just resuming or booting the machine, the provisioners won't run unless the --provision flag is explicitly provided.
There are many more good ways to provision; I would also recommend Ansible. Here is the doc you can read:
https://docs.vagrantup.com/v2/provisioning/basic_usage.html
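To get something close to a single "vagrant dev" command, a host-side wrapper script is another option. A hedged sketch follows: the guest user, virtualenv, and paths come from the question, while the script name dev.sh and the bash -lc wrapping (a login shell, so workon is available) are assumptions.

```shell
#!/bin/bash
# dev.sh -- hypothetical host-side wrapper approximating "vagrant dev":
# boots the box, then runs the server in the foreground over ssh so the
# terminal output stays visible
GUEST_CMD='supervisorctl stop local && workon odoo-8.0 && /home/deploy/odoo/build/8.0/openerp-server -c /home/deploy/odoo/local/odoo_serverrc'

if command -v vagrant >/dev/null 2>&1; then
  vagrant up
  # -c runs a single command inside the guest
  vagrant ssh -c "sudo -u deploy bash -lc '$GUEST_CMD'"
fi
```

Run it as ./dev.sh from the project directory containing the Vagrantfile.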
First, create a shell script with the guest-side commands in it (vagrant up and vagrant ssh run on the host, so they don't belong in a script that runs inside the guest):
#!/bin/bash
sudo -u deploy bash -lc '
supervisorctl stop local
workon odoo-8.0
/home/deploy/odoo/build/8.0/openerp-server -c /home/deploy/odoo/local/odoo_serverrc
'
Put it somewhere in your guest with Ansible. Next, copy the /home/vagrant/.bashrc file into your Ansible files/ folder. Add the line
bash /path/to/shellfile.sh
to the .bashrc and make sure ansible copies it into your guest.
After that, the shell script should be executed every time you log into the guest.

How do you automatically go to a folder whenever you use vagrant with "vagrant ssh" with Laravel Homestead?

I'm asking so I don't have to cd every time I use Vagrant. Thanks.
You can add cd dir-name to your .bashrc file inside the VM. Once you ssh into your vagrant machine, it will run automatically and change the directory.
On Ubuntu, the .bashrc file is located in the home directory (/home/vagrant).
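A minimal sketch of that edit (the helper name append_cd_vagrant is made up): it appends the cd line only once, so running it repeatedly, e.g. from a provisioner, stays idempotent.

```shell
# Hypothetical helper: add "cd /vagrant" to a .bashrc exactly once
append_cd_vagrant() {
  rc="$1"
  grep -qx 'cd /vagrant' "$rc" 2>/dev/null || echo 'cd /vagrant' >> "$rc"
}

# On the guest you would run:
#   append_cd_vagrant /home/vagrant/.bashrc
```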
Alternatively, you can connect to your vagrant box through the standard ssh command. This lets you specify the directory name at connect time and gives you more freedom.
For example
ssh -p 2222 vagrant@localhost -t "cd dir-name ; /bin/bash"
You can see the vagrant ssh config with the command below, so you can check your port, user, etc.:
vagrant ssh-config
