Vagrant & Docker: the container never left the "stopped" state - vagrant

I'm following the Vagrant guide to using Docker, but I get this error when running vagrant up:
Jons-MacBook-Pro:vagrant jonhaven$ vagrant up --provider=docker
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
default: Docker host VM is already ready.
==> default: Vagrant has noticed that the synced folder definitions have changed.
==> default: With Docker, these synced folder changes won't take effect until you
==> default: destroy the container and recreate it.
==> default: Starting container...
==> default: Waiting for container to enter "running" state...
The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
  d.remains_running = false
end
And this is my Vagrantfile (same as in the video):
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "paintedfox/postgresql"
  end
end
Has anyone seen this before? I'm on OS X 10.9.4, and Vagrant otherwise works fine, just not with Docker.
Edit:
I followed the advice given and verified that I could run my docker image via docker. My working docker command is this:
docker run -p 8888:8888 -d haven/play /opt/activator/activator ui -Dhttp.address=0.0.0.0
However, I can't get this to launch via vagrant no matter what combination of create_args or cmd options I use in Vagrant. To be clear, the issue is not the ports but that the docker container will not stay running.
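For reference, a working docker run like the one above would normally map onto the Docker provider's cmd and ports options. This is only a sketch, untested against that image; the image name, port, and activator path are taken from the command above:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "haven/play"
    # equivalent of docker run's -p 8888:8888
    d.ports = ["8888:8888"]
    # the container command, split into argv form
    d.cmd   = ["/opt/activator/activator", "ui", "-Dhttp.address=0.0.0.0"]
  end
end
```

If the container still exits immediately with this, the command itself is failing inside the container, which matches the answer below.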

A Docker container stops as soon as its main process exits, so the image's command has to stay in the foreground.
It looks like the paintedfox/postgresql CMD is ["/sbin/my_init"]
I assume this is a non-daemonizing command meant to keep the container running, which would mean it is exiting with an error. I would try to debug by running the container manually:
docker run -i -t paintedfox/postgresql /bin/bash
and then try to run the command:
/sbin/my_init
and see if it exits with an error. If you are running Docker inside a Vagrant host VM, you will first have to SSH into it with
vagrant ssh

Related

Vagrant Waiting For Domain To Get An IP Address

First, apologies: I'm a newbie.
I've created a very basic Vagrantfile by running vagrant init. I only made a few changes:
config.vm.box = "generic/fedora28"
config.vm.box_version = "1.8.32"
config.vm.provider "libvirt" do |lv|
  lv.memory = "4096"
end
(There are also a few items in my config.vm.provision section).
After running vagrant up, the process gets stuck at
==> default: Waiting for domain to get an IP address...
I'm running this off a Fedora 27 box, which uses version 2.0.2 of the Vagrant package (even though current is 2.1.5).
I've tried adding this line:
config.vm.network "private_network", ip: "192.168.100.101"
but it had no effect.
Can anyone help?
I have a Vagrantfile that spins up 4 VMs of the generic/ubuntu2004 image on libvirt/KVM to make a k3s cluster that's accessible on the LAN. (multipass + k3s is only accessible via localhost because it doesn't allow easy bridging.)
I ran both of these commands > 50 times
sudo vagrant destroy --force --parallel
sudo vagrant up
On roughly the 51st run, I noticed vagrant up got stuck on "Waiting for domain to get an IP address..."
What fixed it for me was sudo reboot. You know, the classic "have you tried unplugging it and plugging it back in?"
Something else to try (I rebooted before trying it):
https://bugzilla.redhat.com/show_bug.cgi?id=1283989
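If a reboot doesn't clear it, this hang usually means the libvirt management network never handed the guest a DHCP lease, since that is the address Vagrant polls for. A hedged sketch of pinning that network explicitly, using the vagrant-libvirt provider's management_network_* options with the provider's documented defaults spelled out (check virsh net-list --all and virsh net-dhcp-leases on the host to see what the network is actually doing):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/fedora28"
  config.vm.provider "libvirt" do |lv|
    lv.memory = "4096"
    # Name and subnet of the network Vagrant polls for the guest's IP.
    # These values are the vagrant-libvirt defaults, made explicit.
    lv.management_network_name    = "vagrant-libvirt"
    lv.management_network_address = "192.168.121.0/24"
  end
end
```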

Totally remove a vagrant box(?) from my host

After vagrant halt and vagrant destroy, and after cd && cd .vagrant/boxes && rm -rf ..., I still keep seeing a box.
I've also pruned the Vagrant cache with vagrant global-status --prune.
Here the output of vagrant status:
Current machine states:
my-vagrant-api not created (virtualbox)
What do I need to do to totally remove a Vagrant box from my host?
There is nothing you have to do; it is already deleted.
It is only showing the Vagrant machine configuration that you defined in your Vagrantfile. See the output:
my-vagrant-api not created (virtualbox)
It is saying that a Vagrant machine named my-vagrant-api is defined in the Vagrantfile but not created.
If it were created, it would look like this:
vagrant status
Current machine states:
server1 poweroff (virtualbox)
server2 poweroff (virtualbox)
server3 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
If you don't want this configuration, you can simply delete that Vagrantfile.
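To make the distinction concrete, here is a minimal sketch of a Vagrantfile that would produce the "not created" status above; the machine name matches the output, while the box name is only a hypothetical example:

```ruby
Vagrant.configure("2") do |config|
  # Defining a machine only adds an entry to `vagrant status`;
  # nothing exists on disk until you run `vagrant up`.
  config.vm.define "my-vagrant-api" do |api|
    api.vm.box = "ubuntu/trusty64"   # hypothetical box name
  end
end
```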

How to run scripts automatically after doing vagrant ssh?

I am new to Vagrant but comfortable with Docker.
In Vagrant, I am aware that
config.vm.provision :shell, path: "bootstrap.sh", run: 'always'
in the Vagrantfile will provision the Vagrant box during vagrant up; with this, the box's interactive console appears only after the intended provisioning is done.
But I need to configure it the other way around: first control goes into the Vagrant box console, and then the intended script starts up. My requirement is to run a script automatically after vagrant up, not to run a bootstrap script.
In analogy with Docker, my question can be seen as
what is the Vagrant equivalent for CMD in Dockerfile ?
You can look at Vagrant triggers: they let you run a dedicated script/command after each specific vagrant command (up, destroy, ...).
For example
Vagrant.configure("2") do |config|
  # Your existing Vagrant configuration
  ...
  # start apache on the guest after the guest starts
  config.trigger.after :up do |trigger|
    trigger.run_remote = {inline: "service apache2 start"}
  end
end

How do I know which vagrantfile is in use?

This seems to be a common problem:
Let's say I have a project with a Vagrantfile. I executed "vagrant up" and the Vagrant machine is up and running. So far so good.
A few days later I synced the same project codebase to another location on the same machine and tried to execute vagrant up. This time it failed, saying Vagrant can't forward port 8080, because another Vagrant instance already occupies that port.
Now, if I want to know where this Vagrant box was spun up from, how can I find out?
This command will tell you the state of all Vagrant environments:
$ vagrant global-status
If you want to stop any machine, you can use:
$ vagrant halt id
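As an aside, the port collision that started this can be avoided entirely: Vagrant's forwarded-port definition accepts an auto_correct flag that substitutes a free host port when the requested one is already taken. A minimal sketch:

```ruby
Vagrant.configure("2") do |config|
  # If host port 8080 is already taken by another environment,
  # auto_correct lets Vagrant pick a free port during `vagrant up`.
  config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct: true
end
```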

Access Docker features when using Vagrant

I've been using Vagrant for some time and am now exploring Docker. I've always loved Vagrant for its simplicity. Right now, I'm trying to use the Docker provider within Vagrant. I'm using the following Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "fgrehm/vagrant-ubuntu:precise"
  end
end
My understanding is that I can just run vagrant up. I can then run the Docker container using vagrant docker-run -- <command>.
So far so good. What makes Docker so awesome is the fact that you can mess around and commit changes. What I do not understand is how to incorporate this into my workflow when using the Docker provider for Vagrant. E.g. how do I run docker commit to commit the state of a container? I'd expect some kind of vagrant docker-commit, but this does not exist?
Edit: In hindsight, I think this is not the way you should be using Vagrant/Docker. Though it is claimed that both tools complement each other, my opinion is that they do not (yet) play really well together. Right now, we're using Dockerfiles to build our images. Additionally, we made a set of bash scripts to launch a container.
Try setting "has_ssh" to true, and ask Vagrant to use port 22 instead of 2222:
ENV["VAGRANT_DEFAULT_PROVIDER"] = "docker"
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d, o|
    d.image = "fgrehm/vagrant-ubuntu:precise"
    d.has_ssh = true
    o.ssh.port = 22
  end
end
Then use vagrant up; vagrant ssh to access it.
Another option, if the Docker client on your host machine is the same version as the Docker server started inside the boot2docker VM: in that case you can set
export DOCKER_HOST=tcp://<boot2docker-ip>:4243 (or 2375, depending on the Docker version)
and then access all Docker features by running the local client:
docker ps -a
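With DOCKER_HOST set this way, any docker subcommand runs against the containers Vagrant created, which also answers the original commit question. A sketch of the workflow, where the container ID and image tag are placeholders:

```shell
docker ps -a                                  # find the container ID Vagrant created
docker commit <container-id> myrepo/snapshot  # <container-id> and the tag are placeholders
docker images                                 # the committed image shows up here
```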
