eval "$(docker-machine env default)" - bash

I have issues with launching docker with docker-compose.
When I run docker-compose -f dev.yml build I get the following error:
Building postgres
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
However, if I run docker-machine ls, the machine is clearly up:
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.1
I fixed the error by running eval "$(docker-machine env default)" after which docker-compose -f dev.yml build completes successfully.
My question is: why did this work, what actually happens, and how do I undo it?
Also, is this a safe way to fix this? Right now this is just my laptop, but these containers are supposed to hit company servers in the near future.
I am not super fluent with bash, but I have always been told not to run eval, and especially not to run eval with double quotes.

When you run docker commands, the CLI connects to the Docker daemon's API, and it's the API that actually does the work. You can manage remote Docker hosts from your local CLI by changing the API connection details, which Docker stores in environment variables on the client where the CLI runs.
With Docker Machine, your Docker engine is running in a VM, which is effectively a remote machine, so your local CLI needs to be configured to connect to it. Docker Machine knows the connection details for the engines it manages, so running docker-machine env default prints out the details for the default machine. The output is something like this:
$ docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://172.16.62.130:2376"
export DOCKER_CERT_PATH="/Users/elton/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Using eval executes each of those export commands, instead of just writing them to the console, so it's a quick way of setting up your environment variables.
You can undo it and reset the local environment with docker-machine env --unset, which gives you the output for unsetting the environment (so the CLI will try to connect to the local Docker Engine).
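As a concrete example, applying and later undoing the configuration in the current shell looks like this (a minimal sketch based on the commands above):
# Point the local docker CLI at the "default" machine
eval "$(docker-machine env default)"
# Print the unset commands, then apply them to go back to the local engine
docker-machine env --unset
eval "$(docker-machine env --unset)"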

This is indeed the expected way to use Docker on a machine that does not natively support Docker, e.g. on Windows or Mac OS X.
The Docker documentation includes this step in its description for using Docker Machine here: https://docs.docker.com/machine/get-started/
What this step does (I suggest you also try this yourself):
Run docker-machine env default.
Take the output of that command and execute it in the current shell session.
If you run docker-machine env default yourself, you will see that it simply suggests to set some environment variables, which allow the Docker commands to find the VM running the Docker daemon. Without these variables set, Docker simply does not know how to communicate with the Docker daemon.
In a server environment (Linux), you will not need Docker Machine, since the Linux kernel natively supports running containers. You only need Docker Machine (a small VM running a Linux kernel) on operating systems that don't natively support running containers.
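For instance, a quick before/after check of the environment (a sketch, assuming docker and docker-machine are on your PATH):
# Without the variables set, the CLI cannot reach a daemon
docker info                           # fails with a "cannot connect" error
# Set the variables for the default machine and retry
eval "$(docker-machine env default)"
docker info                           # now reports the engine running inside the VM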

Related

Testcontainers with Podman in Java tests

Is it possible to use Testcontainers with Podman in Java tests?
As of March 2022, the Testcontainers library doesn't detect an installed Podman as a valid Docker environment.
Can Podman be a Docker replacement on both MacOS with Apple silicon (local development environment) and Linux x86_64 (CI/CD environment)?
It is possible to use Podman with Testcontainers in Java projects that use Gradle, on Linux and macOS (both x86_64 and Apple silicon).
Prerequisites
Podman Machine and Remote Client are installed on MacOS - https://podman.io/getting-started/installation#macos
Podman is installed on Linux - https://podman.io/getting-started/installation#linux-distributions
Enable the Podman service
The Testcontainers library communicates with Podman through a socket file.
Linux
Start Podman service for a regular user (rootless) and make it listen to a socket:
systemctl --user enable --now podman.socket
Check the Podman service status:
systemctl --user status podman.socket
Check the socket file exists:
ls -la /run/user/$UID/podman/podman.sock
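Optionally, you can verify that the socket answers Docker-compatible API requests (a sketch; assumes curl is installed and relies on Podman's Docker-compatibility endpoint):
# "OK" in the response means the Docker-compatible API behind the socket is reachable
curl --unix-socket /run/user/$UID/podman/podman.sock http://localhost/_ping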
MacOS
Podman socket file /run/user/1000/podman/podman.sock can be found inside the Podman-managed Linux VM. A local socket on MacOS can be forwarded to a remote socket on Podman-managed VM using SSH tunneling.
The port of the Podman-managed VM can be found with the command podman system connection list --format=json.
Install jq to parse JSON:
brew install jq
Create a shell alias to forward the local socket /tmp/podman.sock to the remote socket /run/user/1000/podman/podman.sock:
echo "alias podman-sock=\"rm -f /tmp/podman.sock && ssh -i ~/.ssh/podman-machine-default -p \$(podman system connection list --format=json | jq '.[0].URI' | sed -E 's|.+://.+#.+:([[:digit:]]+)/.+|\1|') -L'/tmp/podman.sock:/run/user/1000/podman/podman.sock' -N core#localhost\"" >> ~/.zprofile
source ~/.zprofile
Open an SSH tunnel:
podman-sock
Make sure the SSH tunnel is open before executing tests using Testcontainers.
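With the tunnel open, the same kind of check works against the forwarded socket (again a sketch using curl):
# The forwarded local socket should answer Docker-compatible API pings with "OK"
curl --unix-socket /tmp/podman.sock http://localhost/_ping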
Configure Gradle build script
build.gradle
// Imports to place at the top of build.gradle (assumed available in the Gradle distribution)
import org.gradle.nativeplatform.platform.OperatingSystem
import org.gradle.nativeplatform.platform.internal.DefaultNativePlatform

test {
    OperatingSystem os = DefaultNativePlatform.currentOperatingSystem
    if (os.isLinux()) {
        // Rootless Podman socket lives under the user's runtime directory on Linux
        def uid = ["id", "-u"].execute().text.trim()
        environment "DOCKER_HOST", "unix:///run/user/$uid/podman/podman.sock"
    } else if (os.isMacOsX()) {
        // On macOS, use the local socket forwarded from the Podman VM over SSH
        environment "DOCKER_HOST", "unix:///tmp/podman.sock"
    }
    // Ryuk does not work with Podman, so disable it (containers are cleaned up manually)
    environment "TESTCONTAINERS_RYUK_DISABLED", "true"
}
Set DOCKER_HOST environment variable to Podman socket file depending on the operating system.
Disable Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED.
Moby Ryuk helps you to remove containers/networks/volumes/images by a given filter after a specified delay.
Ryuk is a technology for Docker and doesn't support Podman. See testcontainers/moby-ryuk#23
The Testcontainers library uses Ryuk to remove containers. Instead of relying on Ryuk to implicitly remove containers, we will explicitly remove containers with a JVM shutdown hook:
Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
Pass the environment variables
As an alternative to configuring Testcontainers in a Gradle build script, you can pass the environment variables to Gradle.
Linux
DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
MacOS
DOCKER_HOST="unix:///tmp/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
Full example
See the full example https://github.com/evgeniy-khist/podman-testcontainers
For Linux, it definitely works, even though the official Testcontainers documentation is not really clear about it.
# Enable socket
systemctl --user enable podman.socket --now
# Export env var expected by Testcontainers
export DOCKER_HOST=unix:///run/user/${UID}/podman/podman.sock
export TESTCONTAINERS_RYUK_DISABLED=true
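With those variables exported, run the Gradle build or test task as usual, for example:
# Testcontainers picks up DOCKER_HOST from the environment and talks to the Podman socket
./gradlew test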
Sources:
https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/
https://github.com/testcontainers/testcontainers-java/issues/2088#issuecomment-893404306
I was able to build on Evgeniy's excellent answer, since Podman has improved in the time since the original answer. On macOS, these steps were sufficient for me and made Testcontainers happy:
Edit ~/.testcontainers.properties and add the following line
ryuk.container.privileged=true
Then run the following
brew install podman
podman machine init
sudo /opt/homebrew/Cellar/podman/4.0.3/bin/podman-mac-helper install
podman machine set --rootful
podman machine start
If you don't want to run rootful podman, ryuk needs to be disabled:
export TESTCONTAINERS_RYUK_DISABLED="true"
Running without Ryuk basically works, but lingering containers can sometimes cause problems and name collisions in automated tests. Evgeniy's suggestion of a shutdown hook would resolve this, but it would need code changes.
An add-on to @hollycummins's answer. You can get it working without --rootful by setting the following environment variables (or their Testcontainers property counterparts):
DOCKER_HOST=unix:///Users/steve/.local/share/containers/podman/machine/podman-machine-default/podman.sock
TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/user/501/podman/podman.sock
TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true
This will mount the Podman socket of the Linux VM into the Ryuk container. 501 is the UID of the user core in the Linux VM. See podman machine ssh.
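If you want to double-check that UID yourself, you can run id over podman machine ssh (a sketch, assuming the default machine name podman-machine-default):
# Print the UID of the default SSH user ("core") inside the Podman-managed VM
podman machine ssh podman-machine-default id -u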
If you are running the Testcontainers build inside a Docker container, you can alternatively start the service like this:
podman system service -t 0 unix:///tmp/podman.sock &
OR
podman system service -t 0 tcp:127.0.0.1:19999 &
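If you use the TCP variant, point Testcontainers at it via DOCKER_HOST (a sketch; the port matches the command above):
# Tell Testcontainers to use the TCP endpoint started above
export DOCKER_HOST=tcp://127.0.0.1:19999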

Docker daemon does not work in Windows

I am trying to run Docker in Bash on Ubuntu on Windows. But every time I get this message:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?". If I run it in PowerShell, it works. Can somebody help?
Connecting to the Docker daemon requires some privileges that you don't have when starting the Bash terminal.
You can however use the Docker command terminal, which will allow you to interact with the Docker daemon.
Found the solution on this post: https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/
Connect Docker on WSL to Docker on Windows
Running docker against an engine on a different machine is actually quite easy, as Docker can expose a TCP endpoint which the CLI can attach to.
This TCP endpoint is turned off by default; to activate it, right-click the Docker icon in your taskbar and choose Settings, and tick the box next to “Expose daemon on tcp://localhost:2375 without TLS”.
With that done, all we need to do is instruct the CLI under Bash to connect to the engine running under Windows instead of to the non-existing engine running under Bash, like this:
$ docker -H tcp://0.0.0.0:2375 images
REPOSITORY TAG IMAGE ID CREATED SIZE
There are two ways to make this permanent – either add an alias for the above command, or better yet, export an environment variable which instructs Docker where to find the host engine:
$ echo "export DOCKER_HOST='tcp://0.0.0.0:2375'" >> ~/.bashrc
$ source ~/.bashrc
Now, running docker commands from Bash works just like they’re supposed to.
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using latest Oracle VirtualBox, with Windows 7 as a host OS.
I am trying to enable non-TLS access to Docker remote API, so I could use Postman REST client running on Windows and hit docker API running on docker-machine in the VirtualBox. I found that if Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide more detailed, step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine very insecure.
The current solution disables most docker-machine functionality and all docker CLI functionality. docker-machine ssh remains operational, forcing one to SSH into the docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts Bash shell and is the place where all following commands will be run. Run docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile This starts the "vi" editor in elevated-privileges mode, required for editing the "profile" file where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here's a super-basic crash course on it. When a file is open in vi, vi is not in editing mode. Press "i" to start edit mode. Now you can make changes. After you have made all the changes, hit Esc and then ZZ to save the changes and exit vi. If you need to exit vi without saving changes, after Esc type :q! and hit Enter. ":" turns on vi's command mode, and "q!" means exit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to be DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no (see the excerpt after these steps). Save the changes as described above.
exit to leave SSH session.
docker-machine restart
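For reference, after the edit the relevant lines in /var/lib/boot2docker/profile should look roughly like this (a sketch; the rest of the file stays as it was):
# /var/lib/boot2docker/profile (excerpt)
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no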
After the docker machine has restarted, you should be able to hit a Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
This is the end of steps required to achieve the main goal.
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed the environment variables to point the Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands after that.
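In other words (a minimal sketch):
# Run a docker command inside the machine instead of from the Windows host
docker-machine ssh default docker images
# Or open an interactive session and work from there
docker-machine ssh default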
I encountered the same problem and, thanks to @VladH, made it work without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default

Using docker on Mac. Is it possible to start docker daemon using docker-machine and pass in arguments?

I have a private repo that I need to pass in via the "--insecure-registry myprivateregistry:5000" argument, which works fine in my Linux environment via this command:
docker -d --insecure-registry myprivateregistry:5000
I'm not sure how to pass this in when I'm starting my Mac client, however. I use docker-machine to start and stop my default instance, but I don't see how to pass in that option. Please help.
I figured it out. I need to add that as an extra argument in this file, after I do docker-machine ssh default (into the machine running my daemon):
/var/lib/boot2docker/profile
EXTRA_ARGS='--insecure-registry myprivateregistry:5000'
Restart the Docker daemon via docker-machine restart default, and now I can connect!
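Scripted end to end, the fix looks roughly like this (a sketch; if your profile already defines EXTRA_ARGS, edit that line instead of appending a second one):
# On the Mac host: open a shell inside the boot2docker VM
docker-machine ssh default
# Inside the VM: add the flag to the daemon profile (root is required)
echo "EXTRA_ARGS='--insecure-registry myprivateregistry:5000'" | sudo tee -a /var/lib/boot2docker/profile
exit
# Back on the host: restart the machine so the daemon picks up the flag
docker-machine restart default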
This article was useful: Docker daemon config file on boot2docker / docker-machine / Docker Toolbox

How do I get Docker to run on a Windows system behind a corporate firewall?

I'm trying to get a working Docker installation following this tutorial:
http://docs.docker.io/en/latest/installation/windows/
So far, I got the VM running with a manually downloaded repository (I followed the GitHub link and downloaded it as a ZIP file, because "git clone" didn't work behind my corporate proxy, even after setting up the proxy with "git config --global http.proxy ..."; it kept asking me for authentication (407), although I entered my user name and password).
Now I am in the state in which I should use "docker run busybox echo hello world" (Section "Running Docker").
When I do this, I am first told that Docker is not installed (as shown at the bottom of the tutorial), and then, after I install it with apt-get install docker, I get "Segmentation Fault or critical error encountered. Dumping core and aborting."
What can I do now? Is this because I didn't use git clone, or is something wrong with the Docker installation? I read somewhere that apt-get install docker doesn't install the Docker I want, but some GNOME tool. Can I maybe adjust my apt request to get the right tool?
Windows Boot2Docker behind corporate proxy
(Context: March 2015, Windows 7, behind corporate proxy)
TLDR; see GitHub project VonC/b2d:
Clone it and:
configure ..\env.bat following the env.bat.template,
add the alias you want in the 'profile' file,
execute senv.bat then b2d.bat.
You then are in a properly customized boot2docker environment with:
an ssh session able to access internet behind corporate proxy when you type docker search/pull.
Dockerfiles able to access internet behind corporate proxy when they do an apt-get update/install and you type a docker build.
Installation and first steps
If you are an admin on your workstation, you can run the boot2docker installer on your Windows machine.
It currently comes with:
Boot2Docker 1.5.0 (Docker v1.5.0, Linux v3.18.5)
Boot2Docker Management Tool v1.5.0
VirtualBox v4.3.20-r96997
msysGit v1.9.5-preview20141217
Then, once installed:
add c:\path\to\Boot2Docker For Windows\ in your %PATH%
(one time): boot2docker init
boot2docker start
boot2docker ssh
type exit to exit the ssh session, and boot2docker ssh to go back in: the history of commands you just typed is preserved.
if you want to close the VM, boot2docker stop
You actually can see the VM start or stop if you open the Virtual Box GUI, and type in a DOS cmd session boot2docker start or stop.
Hosts & Proxy: Windows => Boot2Docker => Docker Containers
The main point to understand is that you will need to manage 2 HOSTS:
your Windows workstation is the host to the Linux Tiny Core run by VirtualBox in order for you to define and run containers
(%HOME%\.boot2docker\boot2docker.iso =>
%USERPROFILE%\VirtualBox VMs\boot2docker-vm\boot2docker-vm.vmdk),
Your boot2docker Linux Tiny Core is host to your containers that you will run.
In terms of proxy, that means:
Your Windows Host must have set its HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variable (you probably have them already, and they can be used for instance by the Virtual Box to detect new versions of Virtual Box)
Your Tiny Core Host must have set http_proxy, https_proxy and no_proxy (note the case, lowercase in the Linux environment) for:
the docker service to be able to query/load images (for example: docker search nginx).
If not set, the next docker pull will get you a dial tcp: lookup index.docker.io: no such host.
This is set in a new file /var/lib/boot2docker/profile: note that it is profile, not .profile (see the sketch after this list).
the docker account (to be set in /home/docker/.ashrc), if you need to execute any other command (other than docker) which requires internet access
any Dockerfile that you would create (or the next RUN apt-get update will get you, for example, Could not resolve 'http.debian.net').
That means you must add the lines ENV http_proxy http://... first, before any RUN command requiring internet access.
A good no_proxy to set is:
.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
(with '.company' the domain name of your company, for the internal sites)
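To make that concrete, the proxy part of /var/lib/boot2docker/profile ends up looking roughly like this (a sketch; replace the placeholders with your own credentials and domain):
# /var/lib/boot2docker/profile (proxy excerpt, sourced by the docker service)
export http_proxy=http://<user>:<password>@proxy.company:80
export https_proxy=http://<user>:<password>@proxy.company:80
export no_proxy=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103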
Data persistence? Use folder sharing
The other point to understand is that boot2docker uses Tiny Core, a... tiny Linux distribution (the .iso file is only 26 MB).
And Tiny Core offers no persistence (except for a few technical folders): if you modify your ~/.ashrc with all your preferred settings and alias... the next boot2docker stop / boot2docker start will restore a pristine Linux environment, with your modification gone.
You need to make sure VirtualBox has the Oracle_VM_VirtualBox_Extension_Pack downloaded and added (Virtual Box / File / Settings / Extensions / add the Oracle_VM_VirtualBox_Extension_Pack-4.x.yy-zzzzz.vbox-extpack file).
As documented in boot2docker, you will have access (from your Tiny Core ssh session) to /c/Users/<yourLogin> (i.e. %USERPROFILE% is shared by VirtualBox).
Port redirection? For container and for VirtualBox VM
The final point to understand is that no port is exported by default:
your container ports are not visible from your Tiny Core host (you must use -p 80:80, for example, in order to expose port 80 of the container as port 80 of the Linux session)
your Tiny Core ports are not exported from your VirtualBox VM by default: even if your container is visible from within Tiny Core, your Windows browser won't see it: http://127.0.0.1 won't work ("The connection was reset").
For the first point, docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4 won't work without a -p 80:80 in it.
For the second point, define an alias doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*, and then:
- if the VirtualBox 'boot2docker-vm' is not yet started, use vbm modifyvm
- if the VirtualBox 'boot2docker-vm' is already started, use vbm controlvm
Typically, if I realize, during a boot2docker session, that the port 80 is not accessible from Windows:
vbm controlvm "boot2docker-vm" natpf1 "tcp-port80,tcp,,80,,80";
vbm controlvm "boot2docker-vm" natpf1 "udp-port80,udp,,80,,80";
Then, and only then, I can access http://127.0.0.1
Persistent settings: copied to docker service and docker account
In order to use boot2docker easily:
create on Windows a folder %USERPROFILE%\prog\b2d
add a .profile in it (directly in Windows, in %USERPROFILE%\prog\b2d), with your settings and aliases.
For example (I modified the original /home/docker/.ashrc):
# ~/.ashrc: Executed by SHells.
#
. /etc/init.d/tc-functions
if [ -n "$DISPLAY" ]
then
`which editor >/dev/null` && EDITOR=editor || EDITOR=vi
else
EDITOR=vi
fi
export EDITOR
# Alias definitions.
#
alias df='df -h'
alias du='du -h'
alias ls='ls -p'
alias ll='ls -l'
alias la='ls -la'
alias d='dmenu_run &'
alias ce='cd /etc/sysconfig/tcedir'
export HTTP_PROXY=http://<user>:<pwd>@proxy.company:80
export HTTPS_PROXY=http://<user>:<pwd>@proxy.company:80
export NO_PROXY=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
export http_proxy=http://<user>:<password>@proxy.company:80
export https_proxy=http://<user>:<password>@proxy.company:80
export no_proxy=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
alias l='ls -alrt'
alias h=history
alias cdd='cd /c/Users/<user>/prog/b2d'
ln -fs /c/Users/<user>/prog/b2d /home/docker
(192.168.59.103 is usually the ip returned by boot2docker ip)
Putting everything together to start a boot2docker session: b2d.bat
create and add a b2d.bat script in your %PATH% which will:
start boot2docker
copy the right profile, both for the docker service (which is restarted) and for the /home/docker user account.
initiate an interactive ssh session
That is:
doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*
boot2docker start
boot2docker ssh sudo cp -f /c/Users/<user>/prog/b2d/.profile /var/lib/boot2docker/profile
boot2docker ssh sudo /etc/init.d/docker restart
boot2docker ssh cp -f /c/Users/<user>/prog/b2d/.profile .ashrc
boot2docker ssh
In order to enter a new boot2docker session, with your settings defined exactly as you want, simply type:
b2d
And you are good to go:
End result:
a docker search xxx will work (it will access the internet)
any docker build will work (it will access the internet if the ENV http_proxy directives are there)
any Windows file from %USERPROFILE%\prog\b2d can be modified right from ~/b2d.
Or you actually can write and modify those same files (like some Dockerfile) right from your Windows session, using your favorite editor (instead of vi)
And all this, behind a corporate firewall.
Bonus: http only
Tuan adds in the comments:
Maybe my company's proxy doesn't allow https. Here's my workaround:
boot2docker ssh,
kill the docker process and
set the proxy export http_proxy=http://proxy.com, then
start docker with docker -d --insecure-registry docker.io
