How to set a proxy in Docker Toolbox on Windows?

I have just installed Docker Toolbox in a Windows environment (Windows 7 Pro) and I get a network timeout due to the enterprise proxy. How can I set the proxy in Docker Toolbox?
Thanks for your help.

I encountered the same problem. Here is my solution.
Env:
Win7, Docker Toolbox 17.03, cmder terminal, behind an enterprise proxy.
Solution:
In C:\Program Files\Docker Toolbox, find the start.sh file and add the following two proxy settings:
export http_proxy="http://hostname:port/"
export https_proxy="http://hostname:port/"
At least, it works for me.
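For reference, a sketch of how the top of start.sh might look after this edit (hostname and port are placeholders; the no_proxy line is an optional extra, not part of the original tip, so that local addresses and the usual Toolbox VM IP bypass the proxy):
export http_proxy="http://hostname:port/"
export https_proxy="http://hostname:port/"
export no_proxy="localhost,127.0.0.1,192.168.99.100"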

I had a similar problem on Windows 7, but it was resolved by these steps:
Step 1. Create a batch script C:\Program Files\Docker Toolbox\kitematic_proxy.cmd with the configuration below:
set proxy=YOUR_PROXY
SET HTTP_PROXY=%proxy%
SET HTTPS_PROXY=%proxy%
for /f %%i in ('docker-machine.exe ip default') do set DOCKER_HOST=%%i
SET NO_PROXY=%DOCKER_HOST%
set DOCKER_HOST=tcp://%DOCKER_HOST%:2376
cd Kitematic
Kitematic.exe
Step 2. Open the Oracle VM from the Start menu and go to its command prompt by clicking Show (make sure your Oracle VM is up and running).
Enter:
sudo vi /var/lib/boot2docker/profile
and add these lines:
export HTTP_PROXY=http://your.proxy.name:8080
export HTTPS_PROXY=http://your.proxy.name:8080
Use your own proxy address and port.
This link helped me a lot:
https://github.com/docker/kitematic/wiki/Common-Proxy-Issues-&-Fixes
Note:
Don't forget to add the 192.168.99.100 IP to your proxy settings' exception list (use inetcpl.cpl).
Don't forget to add HTTP_PROXY and HTTPS_PROXY to your user variables (Advanced settings -> Environment Variables).
Don't forget to restart your PC.

Installing Docker on Windows 7 (Docker 18.09.0) behind an enterprise proxy was quite complicated for me. Here are the steps I followed:
Set the HTTP_PROXY and HTTPS_PROXY variables in your Windows environment (e.g. HTTP_PROXY=http://your_proxy:port).
Install Docker Toolbox with the installer, or run in PowerShell as admin: choco install docker-toolbox (Warning! Don't use Docker for Windows, as it targets Windows 10.)
Ensure you don't have any VM left over from previous attempts (docker-machine ls should be empty; if not, run docker-machine rm default).
Run in PowerShell as a regular user: docker-machine --native-ssh create -d virtualbox --engine-env HTTP_PROXY=$HTTP_PROXY --engine-env HTTPS_PROXY=$HTTPS_PROXY default
Run C:\Program Files\Docker Toolbox\start.sh
Now run docker pull busybox. This should work.
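To double-check that the --engine-env values actually reached the daemon, something like this should work from the Toolbox shell (a sketch; it assumes the machine is named default):
docker-machine ssh default "cat /var/lib/boot2docker/profile"
docker info | grep -i proxy
The profile should contain the HTTP_PROXY/HTTPS_PROXY exports, and docker info should list them as the proxies the daemon sees.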

I had an issue with my Windows 7 Docker Toolbox installation:
$ docker --version
Docker version 18.09.3, build 774a1f4eee
$ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
When I tried
docker run hello-world
I received
Unable to find image 'hello-world:latest' locally
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'C:\Program Files\Docker Toolbox\docker.exe run --help'.
According to https://docs.docker.com/toolbox/faqs/troubleshoot/ I've registered my enterprise proxy in /var/lib/boot2docker/profile inside the docker machine:
Use ssh to log in to the virtual machine. This example logs in to the default machine.
$ docker-machine ssh default
docker@default:~$ sudo vi /var/lib/boot2docker/profile
Then I added my enterprise proxy at the end of the profile:
export "HTTP_PROXY=http://host:port"
export "HTTPS_PROXY=http://host:port"
After that I continued with the instructions:
Add a NO_PROXY setting to the end of the file similar to the example below.
export "NO_PROXY=192.168.*.*"
Restart Docker.
After you modify the profile on your VM, restart Docker and log out of the machine.
docker@default:~$ sudo /etc/init.d/docker restart
docker@default:~$ exit
After that, the docker run hello-world command works well:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Configuration lost after PC restart
As @rsb2097 mentioned, after every PC reboot Docker Machine loses the settings in /var/lib/boot2docker/profile. I face the same problem too and I don't know how to avoid it, but I made a script to make writing these settings simpler.
I think this happens because I shut down the PC without stopping the Docker machine (VirtualBox says that there are active connections on shutdown), and I suppose that damages the profile.
I tried docker-machine stop but it doesn't help.
As a result I wrote an AddDockerMachineProxy.cmd script that writes the proxy settings using plink.exe from PuTTY (https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
Usage
Restart the PC.
Run the Docker Quickstart Terminal; I get the following output:
Starting "default"...
(default) Check network to re-create if needed...
(default) Windows might ask for the permission to configure a dhcp server.
Sometimes, such confirmation window is minimized in the taskbar.
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses.
You may need to re-run the `docker-machine env` command.
Regenerate TLS machine certs?
Warning: this is irreversible. (y/n): Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Run AddDockerMachineProxy.cmd script (plink.exe must be in %PATH%):
@echo off
echo Was "Docker Quickstart Terminal" run after the reboot to init the machine?
echo If not this script fails.
pause
set "exePlink=plink.exe"
set "connectionString=-pw tcuser docker@192.168.99.100"
echo Profile BEFORE:
call "%exePlink%" %connectionString% cat /var/lib/boot2docker/profile
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
echo APPENDING PROXY
call "%exePlink%" %connectionString% sudo bash -c "'echo export \"HTTP_PROXY=http://host:port\">> /var/lib/boot2docker/profile'"
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
call "%exePlink%" %connectionString% sudo bash -c "'echo export \"HTTPS_PROXY=http://host:port\">> /var/lib/boot2docker/profile'"
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
call "%exePlink%" %connectionString% sudo bash -c "'echo export \"NO_PROXY=192.168.*.*\">> /var/lib/boot2docker/profile'"
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
echo Profile AFTER:
call "%exePlink%" %connectionString% cat /var/lib/boot2docker/profile
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
echo Restart docker service:
call "%exePlink%" %connectionString% sudo /etc/init.d/docker restart
if errorlevel 1 ( echo ERROR: plink failed !!! & goto BadExit )
echo Testing connection
call docker image pull hello-world || ( echo ERROR: docker image pull is failed !!! & goto BadExit )
echo Done!
exit /b 0
:BadExit
echo ERROR !!!
exit /b 1
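If you would rather avoid the plink.exe dependency, a roughly equivalent approach (a sketch with host:port as placeholders, not the author's script) is to run the same steps through docker-machine ssh from the Docker Quickstart Terminal:
docker-machine ssh default "sudo sh -c 'echo \"export HTTP_PROXY=http://host:port\" >> /var/lib/boot2docker/profile'"
docker-machine ssh default "sudo sh -c 'echo \"export HTTPS_PROXY=http://host:port\" >> /var/lib/boot2docker/profile'"
docker-machine ssh default "sudo sh -c 'echo \"export NO_PROXY=192.168.*.*\" >> /var/lib/boot2docker/profile'"
docker-machine ssh default "sudo /etc/init.d/docker restart"
docker pull hello-world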

Ah! Actually, with Docker Toolbox the Windows part is just a very thin layer over the created virtual machine, so my method is to configure the virtual machine itself to make everything work.
0) Set global environment variables on the Windows host machine:
HTTP_PROXY = "http://login:password@yourproxy:8080"
HTTPS_PROXY = "http://login:password@yourproxy:8080"
Note the capital letters! (You can also set FTP_PROXY and NO_PROXY.)
1) Run the Docker Quickstart Terminal; it will create a virtual machine named default under your VirtualBox. It will also display the address of your newly created VM, like:
docker is configured to use the default machine with IP 192.168.99.104
2) SSH to this address (e.g. with PuTTY). Login: docker, password: tcuser.
3) Run:
echo '
{
"proxies":
{
"default":
{
"httpProxy": "http://login:password#yourproxy:8080",
"httpsProxy": "http://login:password#yourproxy:8080"
}
}
}' > /home/docker/.docker/config.json
This will force the Docker client (on the VM!) to run containers with the correct proxy environment variables inside.
4) So now you can use the Docker client inside the VM. To make the Windows Docker client (as well as docker-compose) also set the correct env variables inside running containers, put the same config.json as in step 3 on the Windows host machine, in the C:\Users\<yourhomedir>\.docker directory.
Now check the environment inside a running container:
docker run -ti ubuntu env
HTTPS_PROXY=http://login:password@yourproxy:8080
https_proxy=http://login:password@yourproxy:8080
HTTP_PROXY=http://login:password@yourproxy:8080
http_proxy=http://login:password@yourproxy:8080
Note that both the uppercase and lowercase variables are set properly!
The final check that everything is OK:
docker run -ti ubuntu apt-get update
5) One issue you may face is that your proxy's address belongs to a network that Docker also uses when creating its own networks, which will break the route to your proxy right after you run docker network create. So make sure the proxy address is not something like 172.18.x.x. If it is, force Docker to use another address space for created networks by adding another config on the VM (a verification sketch follows after step 6):
sudo -i
echo '
{
"default-address-pools": [
{"base":"172.80.0.0/16","size":24}
]
}' > /etc/docker/daemon.json
Then restart dockerd: /etc/init.d/docker restart
6) Do not restart your virtual machine; pause it when needed.
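To verify that the address-pool change from step 5 took effect, a quick check is to create a throwaway network and inspect its subnet, which should now fall inside 172.80.0.0/16 (a sketch; probe-net is just a placeholder name):
docker network create probe-net
docker network inspect probe-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
docker network rm probe-net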

Related

Testcontainers with Podman in Java tests

Is it possible to use Testcontainers with Podman in Java tests?
As of March 2022, the Testcontainers library doesn't detect an installed Podman as a valid Docker environment.
Can Podman be a Docker replacement on both MacOS with Apple silicon (local development environment) and Linux x86_64 (CI/CD environment)?
It is possible to use Podman with Testcontainers in Java projects that use Gradle, on Linux and MacOS (both x86_64 and Apple silicon).
Prerequisites
Podman Machine and Remote Client are installed on MacOS - https://podman.io/getting-started/installation#macos
Podman is installed on Linux - https://podman.io/getting-started/installation#linux-distributions
Enable the Podman service
The Testcontainers library communicates with Podman using a socket file.
Linux
Start the Podman service for a regular user (rootless) and make it listen on a socket:
systemctl --user enable --now podman.socket
Check the Podman service status:
systemctl --user status podman.socket
Check that the socket file exists:
ls -la /run/user/$UID/podman/podman.sock
MacOS
The Podman socket file /run/user/1000/podman/podman.sock can be found inside the Podman-managed Linux VM. A local socket on MacOS can be forwarded to the remote socket on the Podman-managed VM using SSH tunneling.
The port of the Podman-managed VM can be found with the command podman system connection list --format=json.
Install jq to parse JSON:
brew install jq
Create a shell alias to forward the local socket /tmp/podman.sock to the remote socket /run/user/1000/podman/podman.sock:
echo "alias podman-sock=\"rm -f /tmp/podman.sock && ssh -i ~/.ssh/podman-machine-default -p \$(podman system connection list --format=json | jq '.[0].URI' | sed -E 's|.+://.+#.+:([[:digit:]]+)/.+|\1|') -L'/tmp/podman.sock:/run/user/1000/podman/podman.sock' -N core#localhost\"" >> ~/.zprofile
source ~/.zprofile
Open an SSH tunnel:
podman-sock
Make sure the SSH tunnel is open before executing tests using Testcontainers.
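To confirm the socket is actually reachable before running any tests, a quick probe of Podman's Docker-compatible API can help (a sketch; use /run/user/$UID/podman/podman.sock on Linux and /tmp/podman.sock on MacOS). It should print OK:
curl --unix-socket /tmp/podman.sock http://localhost/_ping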
Configure Gradle build script
build.gradle
// imports needed at the top of build.gradle
import org.gradle.nativeplatform.platform.OperatingSystem
import org.gradle.nativeplatform.platform.internal.DefaultNativePlatform

test {
    OperatingSystem os = DefaultNativePlatform.currentOperatingSystem;
    if (os.isLinux()) {
        def uid = ["id", "-u"].execute().text.trim()
        environment "DOCKER_HOST", "unix:///run/user/$uid/podman/podman.sock"
    } else if (os.isMacOsX()) {
        environment "DOCKER_HOST", "unix:///tmp/podman.sock"
    }
    environment "TESTCONTAINERS_RYUK_DISABLED", "true"
}
Set the DOCKER_HOST environment variable to the Podman socket file depending on the operating system.
Disable Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED.
Moby Ryuk helps you to remove containers/networks/volumes/images by given filter after specified delay.
Ryuk is a technology for Docker and doesn't support Podman. See testcontainers/moby-ryuk#23
The Testcontainers library uses Ryuk to remove containers. Instead of relying on Ryuk to implicitly remove containers, we will explicitly remove containers with a JVM shutdown hook:
Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
Pass the environment variables
As an alternative to configuring Testcontainers in a Gradle build script, you can pass the environment variables to Gradle.
Linux
DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
MacOS
DOCKER_HOST="unix:///tmp/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
Full example
See the full example https://github.com/evgeniy-khist/podman-testcontainers
For Linux, it definitely works, even though the official Testcontainers documentation is not really clear about it.
# Enable socket
systemctl --user enable podman.socket --now
# Export env var expected by Testcontainers
export DOCKER_HOST=unix:///run/user/${UID}/podman/podman.sock
export TESTCONTAINERS_RYUK_DISABLED=true
Sources:
https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/
https://github.com/testcontainers/testcontainers-java/issues/2088#issuecomment-893404306
I was able to build on Evgeniy's excellent answer, since Podman has improved in the time since the original answer. On Mac OS, these steps were sufficient for me and made Testcontainers happy:
Edit ~/.testcontainers.properties and add the following line:
ryuk.container.privileged=true
Then run the following:
brew install podman
podman machine init
sudo /opt/homebrew/Cellar/podman/4.0.3/bin/podman-mac-helper install
podman machine set --rootful
podman machine start
If you don't want to run rootful Podman, Ryuk needs to be disabled:
export TESTCONTAINERS_RYUK_DISABLED="true"
Running without Ryuk basically works, but lingering containers can sometimes cause problems and name collisions in automated tests. Evgeniy's suggestion of a shutdown hook would resolve this, but it would need code changes.
An add-on to @hollycummins' answer. You can get it working without --rootful by setting the following environment variables (or their Testcontainers properties counterparts):
DOCKER_HOST=unix:///Users/steve/.local/share/containers/podman/machine/podman-machine-default/podman.sock
TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/user/501/podman/podman.sock
TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true
This will mount the Podman socket of the Linux VM into the Ryuk container. 501 is the UID of the user core in the Linux VM; see podman machine ssh.
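If the exact socket path or UID differs on your machine, these commands should help you find the right values (a sketch; podman-machine-default is the default machine name):
podman machine ssh podman-machine-default id -u
podman system connection list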
Alternatively, if you are running the Testcontainers build inside a Docker container, you can start the service like this:
podman system service -t 0 unix:///tmp/podman.sock &
OR
podman system service -t 0 tcp:127.0.0.1:19999 &
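Whichever endpoint you start, point Testcontainers (or any Docker client) at it through DOCKER_HOST, matching the commands above:
export DOCKER_HOST=unix:///tmp/podman.sock
or, for the TCP variant:
export DOCKER_HOST=tcp://127.0.0.1:19999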

Docker daemon is not working in Windows

I try to run Docker in Bash on Ubuntu on Windows, but every time I get this message:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
If I run it in PowerShell, it works. Can somebody help?
Connecting to the Docker daemon requires some privileges that you don't have when starting the Bash terminal.
You can, however, use the Docker command terminal, which will allow you to interact with the Docker daemon.
I found the solution in this post: https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/
Connect Docker on WSL to Docker on Windows
Running docker against an engine on a different machine is actually quite easy, as Docker can expose a TCP endpoint which the CLI can attach to.
This TCP endpoint is turned off by default; to activate it, right-click the Docker icon in your taskbar and choose Settings, and tick the box next to “Expose daemon on tcp://localhost:2375 without TLS”.
With that done, all we need to do is instruct the CLI under Bash to connect to the engine running under Windows instead of to the non-existing engine running under Bash, like this:
$ docker -H tcp://0.0.0.0:2375 images
REPOSITORY TAG IMAGE ID CREATED SIZE
There are two ways to make this permanent – either add an alias for the above command, or better yet, export an environment variable which instructs Docker where to find the host engine:
$ echo "export DOCKER_HOST='tcp://0.0.0.0:2375'" >> ~/.bashrc
$ source ~/.bashrc
Now, running docker commands from Bash works just like they’re supposed to.
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using latest Oracle VirtualBox, with Windows 7 as a host OS.
I am trying to enable non-TLS access to Docker remote API, so I could use Postman REST client running on Windows and hit docker API running on docker-machine in the VirtualBox. I found that if Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide more detailed, step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine very insecure.
The current solution disables most of the docker-machine and all docker CLI functionality. docker-machine ssh remains operational, forcing one to SSH into the Docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to a non-TLS port. (The Docker machine name is assumed to be "default"; if your machine has a different name, you will need to specify it in the commands below.)
Start the "Docker Quickstart Terminal". It starts a Bash shell, and it is where all the following commands will be run. Run the docker-machine ip command and note the IP address of the Docker host machine. Then do:
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
This starts the vi editor with the elevated privileges required for editing the profile file, where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course. When the file is open, vi is not in editing mode; press "i" to start edit mode, and you can then make changes. After you have made all the changes, hit Esc and then ZZ to save the changes and exit vi. If you need to exit vi without saving changes, after Esc type :q! and hit Enter. ":" turns on vi's command mode, and the "q!" command means exit without saving. Detailed vi command documentation is available online.)
Using vi, change DOCKER_HOST to be DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save the changes as described above.
exit to leave the SSH session.
docker-machine restart
After the Docker machine has restarted, you should be able to hit a Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
This is the end of steps required to achieve the main goal.
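If you prefer to check from the command line rather than a browser, the same API can be probed with curl (a sketch; dockerMachineIp is the address noted earlier from docker-machine ip):
curl http://dockerMachineIp:2375/version
curl "http://dockerMachineIp:2375/containers/json?all=1"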
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands from there.
I encountered the same problem and, thanks to @VladH, got it working without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the Maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default

eval "$(docker-machine env default)"

I have issues with launching Docker with docker-compose.
When I run docker-compose -f dev.yml build I get the following error:
Building postgres
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
However, if I run docker-machine ls the machine is clearly up:
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.1
I fixed the error by running eval "$(docker-machine env default)" after which docker-compose -f dev.yml build completes successfully.
My question is: why did this work, what actually happened, and how do I undo it?
Also, is this a safe way to fix this? Right now this is just my laptop, but these containers are supposed to hit company servers in the near future.
I am not super fluent with Bash, but I have always been told not to run eval, and especially not to run eval on a double-quoted string.
When you run docker commands, the CLI connects to the Docker daemon's API, and it's the API that actually does the work. You can manage remote Docker hosts from your local CLI by changing the API connection details, which Docker stores in environment variables on the client where the CLI runs.
With Docker Machine, your Docker engine is running in a VM, which is effectively a remote machine, so your local CLI needs to be configured to connect to it. Docker Machine knows the connection details for the engines it manages, so running docker-machine env default prints out the details for the default machine. The output is something like this:
$ docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://172.16.62.130:2376"
export DOCKER_CERT_PATH="/Users/elton/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Using eval executes each of those export commands, instead of just writing them to the console, so it's a quick way of setting up your environment variables.
You can undo it and reset the local environment with docker-machine env --unset, which gives you the output for unsetting the environment (so the CLI will try to connect to the local Docker Engine).
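The unset output is applied the same way as before, by passing it through eval:
eval "$(docker-machine env --unset)"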
This is indeed the expected way to use Docker on a machine that does not natively support Docker, e.g. on Windows or Mac OS X.
The Docker documentation includes this step in its description for using Docker Machine here: https://docs.docker.com/machine/get-started/
What this step does (I suggest you also try this yourself):
Run docker-machine env default.
Take the output of that command and execute it in the current shell session.
If you run docker-machine env default yourself, you will see that it simply suggests to set some environment variables, which allow the Docker commands to find the VM running the Docker daemon. Without these variables set, Docker simply does not know how to communicate with the Docker daemon.
In a server environment (Linux), you will not need Docker Machine, since the Linux kernel natively supports running containers. You only need Docker Machine (a small VM running a Linux kernel) on operating systems that don't natively support running containers.

How do I get Docker to run on a Windows system behind a corporate firewall?

I'm trying to get a working Docker installation following this tutorial:
http://docs.docker.io/en/latest/installation/windows/
So far, I got the VM running with a manually downloaded repository (I followed the GitHub link and downloaded it as a ZIP file, because "git clone" didn't work behind my corporate proxy, even after setting up the proxy with "git config --global http.proxy ..."; it kept asking me for authentication (HTTP 407), although I entered my user name and password).
Now I am at the point where I should use "docker run busybox echo hello world" (section "Running Docker").
When I do this, I first get told that Docker is not installed (as shown at the bottom of the tutorial), and then, after I got it with apt-get install docker, I get "Segmentation Fault or critical error encountered. Dumping core and aborting."
What can I do now? Is this because I didn't use git clone or is something wrong with the Docker installation? I read somewhere, that apt-get install docker doesn't install the Docker I want, but some GNOME tool. Can I maybe specify my apt-request to get the right tool?
Windows Boot2Docker behind corporate proxy
(Context: March 2015, Windows 7, behind corporate proxy)
TLDR; see GitHub project VonC/b2d:
Clone it and:
configure ..\env.bat following the env.bat.template,
add the alias you want in the 'profile' file,
execute senv.bat then b2d.bat.
You are then in a properly customized boot2docker environment, with:
an SSH session able to access the internet behind the corporate proxy when you type docker search/pull,
Dockerfiles able to access the internet behind the corporate proxy when they do an apt-get update/install and you type docker build.
Installation and first steps
If you are an admin on your workstation, you can run the boot2docker installer on Windows.
It currently comes with:
Boot2Docker 1.5.0 (Docker v1.5.0, Linux v3.18.5)
Boot2Docker Management Tool v1.5.0
VirtualBox v4.3.20-r96997
msysGit v1.9.5-preview20141217
Then, once installed:
add c:\path\to\Boot2Docker For Windows\ in your %PATH%
(one time): boot2docker init
boot2docker start
boot2docker ssh
type exit to exit the ssh session, and boot2docker ssh to go back in: the history of commands you just typed is preserved.
if you want to close the VM, boot2docker stop
You can actually see the VM start or stop if you open the VirtualBox GUI and type boot2docker start or boot2docker stop in a DOS cmd session.
Hosts & Proxy: Windows => Boot2Docker => Docker Containers
The main point to understand is that you will need to manage two hosts:
your Windows workstation is the host to the Linux Tiny Core run by VirtualBox, in order for you to define and run containers
(%HOME%\.boot2docker\boot2docker.iso =>
%USERPROFILE%\VirtualBox VMs\boot2docker-vm\boot2docker-vm.vmdk),
your boot2docker Linux Tiny Core is the host to the containers that you will run.
In terms of proxy, that means:
Your Windows host must have its HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables set (you probably have them already, and they can be used, for instance, by VirtualBox to detect new versions of VirtualBox).
Your Tiny Core Host must have set http_proxy, https_proxy and no_proxy (note the case, lowercase in the Linux environment) for:
the docker service to be able to query/load images (for example: docker search nginx).
If not set, the next docker pull will get you a dial tcp: lookup index.docker.io: no such host.
This is set in a new file /var/lib/boot2docker/profile: it is profile, not .profile.
the docker account (to be set in /home/docker/.ashrc), if you need to execute any other command (other than docker) which requires internet access
any Dockerfile that you create (or the next RUN apt-get update will get you, for example, a Could not resolve 'http.debian.net').
That means you must add the lines ENV http_proxy http://... first, before any RUN command requiring internet access.
A good no_proxy to set is:
.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
(with '.company' the domain name of your company, for the internal sites)
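Once both the Windows side and the Tiny Core side are configured, a quick way to confirm the settings from Windows is (a sketch):
boot2docker ssh "env | grep -i _proxy"
boot2docker ssh "docker search nginx"
The first command should list the http_proxy/https_proxy/no_proxy exports, and the second should return results instead of the "no such host" error mentioned above.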
Data persistence? Use folder sharing
The other point to understand is that boot2docker uses Tiny Core, a... tiny Linux distribution (the .iso file is only 26 MB).
And Tiny Core offers no persistence (except for a few technical folders): if you modify your ~/.ashrc with all your preferred settings and alias... the next boot2docker stop / boot2docker start will restore a pristine Linux environment, with your modification gone.
You need to make sure VirtualBox has the Oracle_VM_VirtualBox_Extension_Pack downloaded and added (in VirtualBox: File / Settings / Extensions / add the Oracle_VM_VirtualBox_Extension_Pack-4.x.yy-zzzzz.vbox-extpack file).
As documented in boot2docker, you will have access (from your Tiny Core ssh session) to /c/Users/<yourLogin> (i.e. %USERPROFILE% is shared by VirtualBox).
Port redirection? For container and for VirtualBox VM
The final point to understand is that no port is exported by default:
your container ports are not visible from your Tiny Core host (you must use -p 80:80, for example, in order to expose port 80 of the container to port 80 of the Linux session)
your Tiny Core ports are not exported from your VirtualBox VM by default: even if your container is visible from within Tiny Core, your Windows browser won't see it: http://127.0.0.1 won't work ("The connection was reset").
For the first point, docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4 won't work without a -p 80:80 in it.
For the second point, define an alias doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*, and then:
- if the VirtualBox VM 'boot2docker-vm' is not yet started, use vbm modifyvm
- if the VirtualBox VM 'boot2docker-vm' is already started, use vbm controlvm
Typically, if I realize, during a boot2docker session, that the port 80 is not accessible from Windows:
vbm controlvm "boot2docker-vm" natpf1 "tcp-port80,tcp,,80,,80";
vbm controlvm "boot2docker-vm" natpf1 "udp-port80,udp,,80,,80";
Then, and only then, I can access http://127.0.0.1
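If the 'boot2docker-vm' VM is not started yet, the modifyvm counterpart would look roughly like this (a sketch; note that modifyvm takes the rule via the --natpf1 option):
vbm modifyvm "boot2docker-vm" --natpf1 "tcp-port80,tcp,,80,,80";
vbm modifyvm "boot2docker-vm" --natpf1 "udp-port80,udp,,80,,80";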
Persistent settings: copied to docker service and docker account
In order to use boot2docker easily:
create on Windows a folder %USERPROFILE%\prog\b2d
add a .profile in it (directly in Windows, in %USERPROFILE%\prog\b2d), with your settings and aliases.
For example (I modified the original /home/docker/.ashrc):
# ~/.ashrc: Executed by SHells.
#
. /etc/init.d/tc-functions
if [ -n "$DISPLAY" ]
then
`which editor >/dev/null` && EDITOR=editor || EDITOR=vi
else
EDITOR=vi
fi
export EDITOR
# Alias definitions.
#
alias df='df -h'
alias du='du -h'
alias ls='ls -p'
alias ll='ls -l'
alias la='ls -la'
alias d='dmenu_run &'
alias ce='cd /etc/sysconfig/tcedir'
export HTTP_PROXY=http://<user>:<pwd>@proxy.company:80
export HTTPS_PROXY=http://<user>:<pwd>@proxy.company:80
export NO_PROXY=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
export http_proxy=http://<user>:<password>@proxy.company:80
export https_proxy=http://<user>:<password>@proxy.company:80
export no_proxy=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
alias l='ls -alrt'
alias h=history
alias cdd='cd /c/Users/<user>/prog/b2d'
ln -fs /c/Users/<user>/prog/b2d /home/docker
(192.168.59.103 is usually the ip returned by boot2docker ip)
Putting everything together to start a boot2docker session: b2d.bat
create and add a b2d.bat script in your %PATH% which will:
start boot2docker
copy the right profile, both for the docker service (which is restarted) and for the /home/docker user account.
initiate an interactive ssh session
That is:
doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*
boot2docker start
boot2docker ssh sudo cp -f /c/Users/<user>/prog/b2d/.profile /var/lib/boot2docker/profile
boot2docker ssh sudo /etc/init.d/docker restart
boot2docker ssh cp -f /c/Users/<user>/prog/b2d/.profile .ashrc
boot2docker ssh
In order to enter a new boot2docker session, with your settings defined exactly as you want, simply type:
b2d
And you are good to go:
End result:
a docker search xxx will work (it will access internet)
any docker build will work (it will access internet if the ENV http_proxy directives are there)
any Windows file from %USERPROFILE%\prog\b2d can be modified right from ~/b2d.
Or you actually can write and modify those same files (like some Dockerfile) right from your Windows session, using your favorite editor (instead of vi)
And all this, behind a corporate firewall.
Bonus: http only
Tuan adds in the comments:
Maybe my company's proxy doesn't allow https. Here's my workaround:
boot2docker ssh,
kill the docker process and
set the proxy: export http_proxy=http://proxy.com, then
start docker with docker -d --insecure-registry docker.io
