To be able to run Linux containers on a Windows Server 2016 host we followed this tutorial. The issue we're having is that we can't seem to enable the experimental features. The docs say:
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to enabled.
File C:\ProgramData\docker\config\config.json:
{
"experimental": "enabled",
"debug": true
}
After restarting the Docker service (Restart-Service docker) and running docker info we still see the flag Experimental: false:
Operating System: Windows Server 2016 Standard Version 1607 (OS Build 14393.3686)
OSType: windows
Architecture: x86_64
Docker Root Dir: C:\ProgramData\docker
Experimental: false
How is it possible to enable the Docker experimental features on a Windows Server 2016?
Even when I try to set the environment variable and restart PowerShell and the Docker service, it doesn't register in docker info:
[Environment]::SetEnvironmentVariable("DOCKER_CLI_EXPERIMENTAL", "enabled", "Machine")
After logging in with docker login, the file C:\Users\bob\.docker\config.json is created. Even when adding the key there, it's still not registered after a service restart:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "xxxxx"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.5 (windows)"
},
"experimental": "enabled",
"debug": true
}
You put your configuration in a file named config.json, but according to the docs the correct file name is daemon.json.
The full path to the config file must be: C:\ProgramData\docker\config\daemon.json
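Note that the daemon (which is what docker info reports on) expects a boolean here, not the "enabled" string the CLI config uses. So, as far as I can tell, C:\ProgramData\docker\config\daemon.json should contain something like:
{
  "experimental": true,
  "debug": true
}
Then restart the service with Restart-Service docker and check docker info again.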
Of all the answers above, only adding the --experimental parameter to the service's "Path to executable" (Services -> docker) worked for me. You can change it using regedit.exe:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\docker
Set the ImagePath value to "C:\Program Files\docker\dockerd.exe" --experimental --run-service. If you have any other start parameters, keep them.
You can also do it using sc (in PowerShell, sc.exe).
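For reference, a sketch of the registry change from an elevated PowerShell prompt (the install path below is the default one; append any other parameters your ImagePath already had):
# Point the service at dockerd.exe with the --experimental flag added
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\docker" `
    -Name ImagePath `
    -Value '"C:\Program Files\docker\dockerd.exe" --experimental --run-service'
# Restart the service so the new command line takes effect
Restart-Service docker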
I followed the Canonical tutorial for Linux containers on Windows and got stuck trying to pull the correct Ubuntu (Linux, not Windows) image, which is when I found your question about setting experimental. If you ask dockerd.exe for the parameters it accepts (dockerd.exe --help), one of the options is --experimental.
Setting --experimental on the dockerd invocation worked for me.
If you can configure the invocation of your daemon with --experimental (rather than inside a configuration file), this could solve your problem.
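As a quick sanity check (a sketch, assuming the default install path), you can stop the service and run the daemon in the foreground with the flag, then run docker info from another window:
# Elevated PowerShell prompt; Ctrl+C stops the foreground daemon when you're done
Stop-Service docker
& "C:\Program Files\docker\dockerd.exe" --experimental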
Try setting the environment variable with:
[Environment]::SetEnvironmentVariable("DOCKER_CLI_EXPERIMENTAL", "enabled")
This worked on my Linux cluster, where specifying User or Machine seemed to result in the variable being ignored.
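If it's only the CLI experimental commands you're after, setting the variable for the current session is enough to test; note that it only flips the Client line in docker version, while the Experimental flag in docker info comes from the daemon:
# Current PowerShell session only - no machine-wide variable needed for a quick test
$env:DOCKER_CLI_EXPERIMENTAL = "enabled"
docker version   # the Client section should now report Experimental: true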
I found out that it simply won't run on Windows Server 2016 as stated here:
Docker Desktop has changed its way to leverage WSL2 for running Linux containers on Windows 10. The plan for Docker EE is unclear as Docker Inc. has sold it to Mirantis. https://github.com/docker/for-win/issues/6470#issuecomment-633883063
So if you plan to run both Linux and Windows containers in production, you may want to look for other options, such as Kubernetes.
It turns out that Linux Containers on Windows (LCOW) is a preview feature of both Windows Server, version 1709 and Docker EE. It won't work on Windows Server 2016, whose version is older than 1709.
Related
I am on a corporate Windows laptop and I want to start experimenting with Docker. Being a corporate machine, everything needs to go through the corporate proxy.
I installed Debian on WSL and then Docker Desktop, which installed its components on the Debian WSL VM. My first priority, however, was to test Docker on WSL directly and not through Docker Desktop. So I set out to read the Docker docs and download the docker/getting-started image through the Debian terminal. That, however, failed because it didn't use the network proxy.
The Docker Desktop docs state that setting the proxy settings in Docker Desktop will propagate them to Docker itself. Indeed, I set the proxy settings in Docker Desktop, and I was then able to properly download my image from inside Debian.
Since I want to have full control of Docker through the Debian terminal and not Docker Desktop, I want to understand how the proxy settings propagate to Docker inside WSL. I imagined that Docker Desktop altered some configuration file inside Debian, but grepping the whole system for the proxy IP turned up nothing. So my question is: in what way does Docker Desktop let Docker know which proxy to use?
As far as I know (and I'm not 100% sure, as I haven't worked with Docker in a while):
When you start the Docker service in WSL, this triggers the /etc/init.d/docker script. When you set the company proxy manually in Docker Desktop, what happens during the reload is:
Stopping the Docker service
Updating the configuration script at /etc/init.d/docker
Starting the service again, and with it the new script
To make sure this is valid, you can check the contents of the /etc/init.d/docker script.
As an alternative to editing that script manually, you can export the proxy configuration in WSL and check whether it works without adding the proxy configuration to Docker Desktop; a sketch follows below.
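As a rough sketch of that alternative (the proxy address below is a placeholder; on typical Debian installs the init script sources /etc/default/docker, but verify that on your setup):
# Make the daemon environment aware of the corporate proxy (placeholder address)
echo 'export http_proxy="http://proxy.corp.example:8080"'  | sudo tee -a /etc/default/docker
echo 'export https_proxy="http://proxy.corp.example:8080"' | sudo tee -a /etc/default/docker
# Restart the service so /etc/init.d/docker picks up the new environment
sudo service docker restart
# Test: the pull should now go through the proxy
docker pull docker/getting-started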
With the license change for Docker Desktop on Windows, I'm looking for an alternative. Podman + WSL2 seems to do the trick for me. Except for Testcontainers in my Quarkus tests.
I'm able to run my tests within WSL2 by starting podman system service in WSL2 (podman system service -t 0 tcp:localhost:8880) and setting the DOCKER_HOST env var (DOCKER_HOST=tcp://localhost:8880).
Now this works, but it isn't really what I need, since at my company we develop in VSCode, IntelliJ and Eclipse. I'd like to be able to run the tests from within those IDEs. Is there any way to pass the Podman URI (from WSL) to my IDE on Windows while running Quarkus tests?
If anyone would know any other docker desktop alternatives that work with TestContainers, that would be awesome as well. I have tried Rancher Desktop, but it gets stuck and the tests eventually time out.
You have to install the podman-remote package on your Windows host machine, then configure it to use tcp://WSL2_IP:8880 (see the Podman documentation), and finally make an alias for the program docker -> podman.exe (a sketch of the alias follows below).
Now you are able to run docker commands as usual: docker ps, docker run, etc. But it does not mean that all tools will work out of the box; you have to tune them.
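For the alias step, a minimal PowerShell sketch (assuming podman.exe from the podman-remote package is already on your PATH and configured for the WSL2 endpoint):
# Make "docker" resolve to the Podman remote client in the current session;
# put this line in $PROFILE to make it permanent
Set-Alias -Name docker -Value podman.exe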
For example, for Testcontainers you have to set these environment variables on the host machine:
PowerShell
[System.Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_CHECKS_DISABLE", "True", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_RYUK_DISABLED", "True", [System.EnvironmentVariableTarget]::User)
P.S. All these variables used to be set for you by Docker, but from now on you have to do it yourself.
We ran the testcontainers-java tests using various solutions for Docker.
I don't know if running in WSL changes a lot compared to the Windows only setup.
In general, Testcontainers doesn't rely on CLI commands only and works best with compatible Docker environments. Based on the findings in that experiment, you can try minikube.
Take IntelliJ, for example: you can set the DOCKER_HOST env var via "Run/Debug Configurations" and it works perfectly.
TL/DR: Is it possible to use Docker on Windows, with Linux containers, and with TLS enabled?
Observation 1:
When I use Docker on Windows 10 (Docker Desktop 2.2.0.3, and engine 19.03.5) I can happily use Linux containers.
Observation 2:
Using the same environment as observation 1 above, if I want to expose the docker daemon on TCP with TLS, I can use openssl to set up the CA, and all the certs I need - again, no problem. Just to clarify, this is all happening on localhost - only the one host PC is involved.
My Docker Engine config file (Docker Desktop > Settings > Docker Engine) ends up looking like this:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"tlsverify": true,
"tlscacert": "C:/dockercerts/ca.pem",
"tlscert": "C:/dockercerts/server-cert.pem",
"tlskey": "C:/dockercerts/server-key.pem",
"hosts": [
"tcp://0.0.0.0:2376",
"npipe://"
]
}
And, the following docker version command works like a charm for me:
docker --tlsverify ^
--tlscacert=C:/dockercerts/ca.pem ^
--tlscert=C:/dockercerts/cert.pem ^
--tlskey=C:/dockercerts/key.pem ^
-H=localhost:2376 version
Observation 3:
But to make the docker version command in observation 2 work, I have to switch Docker Desktop from "Linux Containers" to "Windows Containers".
(I currently have no Windows containers.)
If I try to switch Docker Desktop to use Linux containers, then Docker Desktop crashes on start-up (or on restart). I even had to re-install the whole thing a couple of times - I could not get to the "reset to factory options" button.
Background:
I was trying to use the Docker API (the REST services) over HTTPS rather than HTTP - so that's what prompted all of this - in case that helps.
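For reference, with the daemon exposed as above, the kind of REST call I was after looks roughly like this (curl.exe ships with recent Windows 10 builds; /version is a standard Engine API endpoint, and the cert paths match the ones above):
curl.exe ^
  --cacert C:/dockercerts/ca.pem ^
  --cert C:/dockercerts/cert.pem ^
  --key C:/dockercerts/key.pem ^
  https://localhost:2376/version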
Likely Conclusion...?:
It's not possible to mix these specific things on Windows - and I should use a Linux host for my Linux containers.
However, I'd be delighted to see a set-up where I can run that docker version command on Windows, using my certificates, and Linux containers - all at the same time.
Failing that, if anyone has any insight into why it's not possible ("...windows pipes...?") or something like that, I would be very interested.
(I do see a fairly large number of Docker and TLS questions on SO - but nothing specific to this scenario.)
UPDATE:
Here is the specific error I get:
Docker.Core.Backend.BackendException:
Failed to start
at Docker.Core.Pipe.NamedPipeClient.<TrySendAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Docker.Core.Pipe.NamedPipeClient.Send(String action, Object[] parameters)
at Docker.Actions.<RestartDaemon>b__37_0()
at Docker.ApiServices.TaskQueuing.TaskQueue.<>c__DisplayClass18_0.<.ctor>b__1()
Docker.Core.DockerException:
Failed to start
at Docker.Backend.ContainerEngine.Linux.DoStart(Settings settings, String daemonOptions, Credential credential)
at Docker.Backend.ContainerEngine.Linux.Restart(Settings settings, String daemonOptions, Credential credential)
at Docker.Backend.BackendNamedPipeServer.<Run>b__8_3(Object[] args)
at Docker.Core.Pipe.NamedPipeServer.<>c__DisplayClass9_0.<Register>b__0(Object[] parameters)
at Docker.Core.Pipe.NamedPipeServer.RunAction(String action, Object[] parameters)
Researching the following...
Failed to start at Docker.Core.Pipe.NamedPipeClient.<TrySendAsync>d__5.MoveNext()
... has not led to any insights, beyond some "me too" comments - mostly related to version updates.
Issue Ticket
Unable to run Docker for Windows using TLS with Linux Containers
I use Windows 7 and can't install Docker for Windows, so I use Docker Toolbox.
Docker Toolbox is not supported by Microsoft Visual Studio Code for Remote Container Development.
But I need to use this functionality with my docker toolbox.
There is an issue on GitHub that has not been solved yet: https://github.com/microsoft/vscode-remote-release/issues/95
Docker Toolbox was a product based on docker-machine and VirtualBox that uses a local VM. That VM shares your whole user profile by default, so you can share any folder in your profile with a container in the VM using a path like /c/Users/<profile_name>/folder/a/b.
Warning: Be careful to avoid sharing all your user profile with an image you don't trust
Steps to enable VSCode remote containers when using docker machine
You need to start your docker-machine (tested with VSCode 1.40.2+); a sketch of the commands follows after these steps.
In your .devcontainer.json you can override the workspace mount volume setting (more info here):
"workspaceMount":
"src=//c/Users/yourusername/git/reponame,dst=/workspaces/reponame,type=bind,consistency=delegated"
VSCode looks for the default workspace inside the container in /workspaces, under the same name as the original, and opens it automatically, but you can override this in .devcontainer.json if you need to, or open it manually.
Important: your repository should always be inside your Windows user profile (%userprofile%). This is a requirement of the default Docker Toolbox shares.
Note: the problem with Docker Toolbox is that Visual Studio Code doesn't support the docker-machine path style for mounting volumes by default, but this workaround can help you.
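A sketch of the docker-machine commands mentioned in step 1 ("default" is the usual Toolbox machine name; adjust if yours differs):
# Start the Toolbox VM and point the Docker CLI at it for this PowerShell session
docker-machine start default
docker-machine env --shell powershell default | Invoke-Expression
# Handy if you need the VM address (e.g. for DOCKER_HOST) elsewhere
docker-machine ip default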
Updated 2020/05/13
Tested with 1.44: it still works, but you can't use an environment variable to configure mount paths yet, so each developer has to customize the local path of the repo after cloning the repository.
Updated 2020/10/29
Microsoft added information about how to use VSCode remote containers with Docker Machine here. The Microsoft docs leave you to guess what kind of path you should use, because they don't assume that the docker-machine environment is a local VM. That's where you may find this answer useful.
I wanted to start the Docker daemon listening on a TCP address like this: docker daemon -H tcp://0.0.0.0:2375, but the terminal suggested that I use dockerd instead, which is apparently not a program that comes with the Docker client for Mac. Is there a way I can either
A - get some form of dockerd on my Mac machine, or
B - get around the use of dockerd by some other method?
Install the socat command: brew install socat
Choose a port (8099 in this example)
Run: socat -d -d TCP-L:8099,fork UNIX:/var/run/docker.sock
and then use tcp://localhost:8099 as the API URL.
This works for me; hope it helps.
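A quick way to check it (sketch; the port matches the example above):
# The Engine API now answers over plain HTTP on the forwarded port
curl http://localhost:8099/version
# Or point the docker CLI itself at it
export DOCKER_HOST=tcp://localhost:8099
docker ps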
Finally I found the dockerd-like configuration for Docker on Mac:
Click the Docker icon in the menu bar, then Preferences, then Advanced.
get around the use of dockerd by some other method. (2016)
Note that in 2022, you can go without dockerd/Docker Desktop entirely.
See Batuhan Apaydin's article "A modern toolkit to start working with container images on macOS that meets your needs without requiring a Docker Daemon or even Docker Desktop".
It uses lima+nerdctl
The nerdctl tool is designed as a drop-in replacement for the Docker client.
And Lima launches Linux virtual machines with automatic file sharing, port forwarding, and containerd.
The name Lima comes from the first two letters of LInux MAchines.
The design of Lima is similar to WSL2, but Lima focuses on macOS as the primary target host.
Lima uses QEMU, a generic and open source machine emulator and virtualizer, as the hypervisor under the hood to provide the virtualization.
Lima can also work with other container engines such as Podman and even for non-container applications.
By default, when Lima launches a VM, it runs buildkitd and containerd in a rootless way and also downloads the necessary client tooling around them, such as buildctl and nerdctl.
Everything is set up for us, so all that's left is building, pulling, and running containers.
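A minimal sketch of what getting started looks like (assuming Homebrew; "default" is the instance Lima creates if you don't name one):
# Install Lima and start the default VM (containerd and nerdctl come with it)
brew install lima
limactl start default
# Use nerdctl through the VM much like the docker CLI
lima nerdctl run --rm -it alpine sh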
For buildkit, Batuhan proposes developer-guy/buildkit-machine
buildkit-machine allows you to make buildkitd daemon accessible in your macOS environment.
To do so, it uses lima, which is a Linux subsystem for macOS, under the hood.
Lima spins up a VM that runs the buildkitd daemon in a rootless way, which means that the socket file of the buildkitd daemon is now accessible at /run/user/<USERID>/buildkit/buildkitd.
So: no more Docker Desktop / dockerd, and containers run in rootless mode!
For more, see Bret Fisher's video "Free Docker Desktop Alternatives: DevOps and Docker Live Show (Ep 156)" (Jan. 2022)
I have found a workaround for this in the official forum
https://forums.docker.com/t/using-pycharm-docker-plugin-with-docker-beta/8617/9
$ socat TCP-LISTEN:2376,reuseaddr,fork UNIX-CLIENT:/var/run/docker.sock
That workaround opens port 2376 to the world... and as TLS isn't enabled, this is a bad idea: anyone on the same network can hijack your docker daemon.
It is not supported to run dockerd on Mac. From this issue:
I think on Darwin it should never suggest to run dockerd. The daemon runs in a Linux virtual machine, so you do not need to (and cannot) run it manually.
If you want to do any specific configuration on Mac, you have probably already installed Docker Desktop. Docker Desktop supports configuration through its user interface.