I have a script that creates 15+ docker containers (I don't maintain the script) that I would like to use with Vagrant.
These are created with a shell script run from a Vagrantfile.
On the first 'vagrant up', all the containers are there and everything works as expected.
If I do a vagrant halt or reboot my laptop, the Docker containers do not start again, even though their restart policy is set to always.
Is this a bad use case for vagrant? Is there a way around this behavior?
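For illustration, the kind of setup being described might look roughly like this in the Vagrantfile (the box name and script name are placeholders, not the actual values). Worth noting: a shell provisioner only runs on the first vagrant up unless it is marked run: "always", which is one way to bring the containers back after a halt or reboot.

# Sketch only -- box and script names are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.provision "docker"   # installs the Docker engine in the guest
  config.vm.provision "shell", path: "create-containers.sh", run: "always"
end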
Environment: Windows Host running Vagrant and VirtualBox with Ubuntu as the guest OS
Background: Doing basic provisioning inside Vagrantfile
When switching between various sites, I have found that I have to first run vagrant halt, cd in the Windows CLI to the base folder of the site I next want to work on (where the Vagrantfile is stored), and then vagrant up again in order for the synced folder in the guest OS to be mapped correctly to the code folder on the Windows host. This shutting down and restarting takes more time than I would like.
I tried simply running vagrant provision after cd'ing in the Windows CLI to the folder of the site I want to work on, but this did not re-map the synced folder. I also tried vagrant suspend, cd'ed to another site folder, and then ran vagrant resume --provision, but this did not re-map the synced folder either. Is there any way to re-map the synced folder without going through the vagrant halt, cd another_site, vagrant up sequence?
1 Vagrantfile = 1 VirtualBox VM
So basically, you're in folder1/ with its own Vagrantfile, and you have started and are managing that VM. You now want to "switch" to another VM (not just another folder), so you need to shut down the current VM, go to folder2, and start that VM.
What you want is a single VM with multiple project folders (sites) linked within it. Some projects, like Homestead, let you manage this easily. Otherwise you can manage it yourself: you need a single Vagrantfile (so there is only one VM) and then all your projects within that project structure.
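For example, a single Vagrantfile can sync several site folders into that one VM (a minimal sketch; the box name and paths are assumptions). Switching sites then just means working in a different synced folder, with no halt/up cycle:

# Sketch only -- box and paths are placeholders
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.synced_folder "../site1", "/var/www/site1"
  config.vm.synced_folder "../site2", "/var/www/site2"
end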
I initially configured my docker setup for Docker for Windows. Everything worked great. I'm using docker-compose to define 3 containers, each of which have a volume being mapped from my ./src (path on host) to /src/ (path on container).
I recently found out that the production server might have Windows 10 Home, which doesn't support Docker for Windows. So, my thinking is that I should revert to docker toolbox to be prepared for that scenario.
So I uninstalled Docker for Windows and installed Docker Toolbox. I can build my images with docker-compose build just fine, but now when I run docker-compose up -d, 2 of my containers immediately crash because the /src/ directory never gets mounted.
I can verify that the volumes are not getting mounted by running docker exec -it ng01 bash and seeing that the volume directory exists but is empty. Two of my co-workers can reproduce this issue on their Windows machines with Docker Toolbox.
Does anyone know why this is happening, or how to get around it? I've been looking at a bunch of similar SO posts, but the various solutions have gotten me nowhere. I would appreciate some guidance.
Here is my docker-compose file.
I have my source code in src/.
I have my Dockerfiles in docker/.
Here is hotloader.Dockerfile.
Here is web.Dockerfile. I don't think they are the issue, but I might as well share them anyway.
Thank you in advance!
Docker Toolbox for Windows works by setting up a VirtualBox VM named default. Running any docker command forwards that command to the VM (Windows Machine → Virtual Machine → Docker).
To mount local Windows folders as Docker volumes, those folders first need to be shared and mounted on the VM that is running Docker.
By default, C:\Users is shared, so mounting volumes from that location will work without any configuration.
So you can either move your project to this already-shared location (C:\Users), or you can follow the steps in this document: https://headsigned.com/posts/mounting-docker-volumes-with-docker-toolbox-for-windows/
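The host-side step from that document boils down to sharing the project folder with the default VM, roughly like this (the share name and path below are placeholder assumptions):

# Run on the Windows host while the "default" VM is powered off;
# "myproject" and C:\projects\myproject are placeholders
VBoxManage sharedfolder add default --name "myproject" --hostpath "C:\projects\myproject"
docker-machine start default

The share then still has to be mounted with vboxsf inside the VM before volume paths pointing at it will work; the linked post walks through that part.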
Hope this helps! :)
Using the Docker client, is there a way to share a folder in Windows with a Docker container without having to first share the folder via the VirtualBox VM?
I have understood the need for the double slash from this and this.
I ran the following command from the Docker client for Windows:
docker run -it -v //F/devfolder:/development/windev <imagename> <cmdname>
but when I did an ls on /development/windev, it turned out to be empty.
I did not have any problem when I tried mounting the c/Users/username folder via the following command:
docker run -it -v //c/Users/username/desktop:/development/windev <image> <command>
and the windev folder listed the contents as I would expect.
I tried sharing F/devfolder via the VirtualBox GUI and gave it full access, but the contents of the folder are still not listed.
[I am not using boot2docker but docker-machine]
Is it not possible to share any folder other than c/Users/? If it is possible, is there anything else I need to do to ensure that I can see the contents of the mounted folder?
Not only do you have to share it in VirtualBox, but you also have to instruct your boot2docker TinyCore session that the folder should be visible (once you have done a docker-machine ssh yourMachine):
mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
I know that you are using docker-machine, and not boot2docker, yet docker-machine is still using a boot2docker.iso VM image based on TinyCore, so this command still applies.
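Put together with the paths from the question, the sequence might look roughly like this (F_devfolder stands for whatever name you gave the share in the VirtualBox GUI, so it is an assumption here):

# Inside the VM: create a mount point and mount the VirtualBox share there
docker-machine ssh yourMachine
sudo mkdir -p /f/devfolder
sudo mount -t vboxsf -o uid=1000,gid=50 F_devfolder /f/devfolder
exit

# Back on the host: //f/devfolder now resolves to the mounted share
docker run -it -v //f/devfolder:/development/windev <imagename> <cmdname>

Note that the mount does not survive a restart of the VM, so it has to be repeated (or scripted) after each docker-machine restart.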
I'm fairly new to Puppet and am using Puppet 3.x. Right now I have a puppet master running locally, which I connect to using vagrant ssh.
At the moment, if I change a manifest file on that puppet master, I log out of SSH and call vagrant destroy, followed by vagrant up.
This takes a good 10 minutes. Is there a faster/better way of doing this? I'm looking at librarian-puppet but am not sure whether or how to use it in this situation.
Re-Run Provisioners
If you've already provisioned the machine at least once, you can make changes to your manifests and then run:
vagrant provision
on the host. This will re-run any provisioners without requiring a new Vagrant instance to be built and provisioned for the first time. In most cases, unless you have a pathological set of manifests, this will be significantly faster.
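For reference, assuming the manifests are applied through Vagrant's Puppet provisioner (rather than an agent/master run inside the box), a block like the following is what vagrant provision re-runs, so editing the manifests it points at and re-provisioning is usually enough (the box name and paths here are assumptions):

# Sketch only -- box name and paths are placeholders
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"   # edit these, then `vagrant provision`
    puppet.manifest_file  = "site.pp"
    puppet.module_path    = "puppet/modules"
  end
end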
I built my config with https://puphpet.com and have successfully deployed my virtual machine using vagrant up. Now I'm trying to customize it a little bit and see in the docs that there are two folders to run scripts in the VM: puphpet/files/exec-always and puphpet/files/exec-once.
I wrote a couple of test Bash scripts and dropped them there, but the only way I could "apply" them was to destroy the VM completely and reprovision it from scratch. This takes an enormous amount of time.
What's the proper way to test and debug these scripts?
You should use $ vagrant provision!