Docker Desktop for Windows with Linux Containers and TLS

TL;DR: Is it possible to use Docker on Windows, with Linux containers, and with TLS enabled?
Observation 1:
When I use Docker on Windows 10 (Docker Desktop 2.2.0.3, and engine 19.03.5) I can happily use Linux containers.
Observation 2:
Using the same environment as observation 1 above, if I want to expose the docker daemon on TCP with TLS, I can use openssl to set up the CA, and all the certs I need - again, no problem. Just to clarify, this is all happening on localhost - only the one host PC is involved.
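For anyone retracing this, the cert setup followed the usual "Protect the Docker daemon socket" recipe from the Docker docs; a condensed sketch (CN/SAN pinned to localhost since only one host is involved, file names matching the config below):
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=localhost" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:localhost,IP:127.0.0.1 > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
openssl genrsa -out key.pem 4096
openssl req -subj "/CN=client" -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf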
My Docker Engine config file (Docker Desktop > Settings > Docker Engine) ends up looking like this:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"tlsverify": true,
"tlscacert": "C:/dockercerts/ca.pem",
"tlscert": "C:/dockercerts/server-cert.pem",
"tlskey": "C:/dockercerts/server-key.pem",
"hosts": [
"tcp://0.0.0.0:2376",
"npipe://"
]
}
And, the following docker version command works like a charm for me:
docker --tlsverify ^
--tlscacert=C:/dockercerts/ca.pem ^
--tlscert=C:/dockercerts/cert.pem ^
--tlskey=C:/dockercerts/key.pem ^
-H=localhost:2376 version
Observation 3:
But to make the docker version command in observation 2 work, I have to switch Docker Desktop from "Linux Containers" to "Windows Containers".
(I currently have no Windows containers.)
If I try to switch Docker Desktop to use Linux containers, then Docker Desktop crashes on start-up (or on restart). I even had to re-install the whole thing a couple of times - I could not get to the "reset to factory options" button.
Background:
I was trying to use the Docker API (the REST services) over HTTPS rather than HTTP - so that's what prompted all of this - in case that helps.
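For context, the kind of call I'm after looks like this (a sketch; /v1.40 matches engine 19.03, though the unversioned /version path also works):
curl --cacert C:/dockercerts/ca.pem ^
--cert C:/dockercerts/cert.pem ^
--key C:/dockercerts/key.pem ^
https://localhost:2376/v1.40/version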
Likely Conclusion...?:
It's not possible to mix these specific things on Windows - and I should use a Linux host for my Linux containers.
However, I'd be delighted to see a set-up where I can run that docker version command on Windows, using my certificates, and Linux containers - all at the same time.
Failing that, if anyone has any insight into why it's not possible ("...windows pipes...?") or something like that, I would be very interested.
(I do see a fairly large number of Docker and TLS questions on SO - but nothing specific to this scenario.)
UPDATE:
Here is the specific error I get:
Docker.Core.Backend.BackendException:
Failed to start
at Docker.Core.Pipe.NamedPipeClient.<TrySendAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Docker.Core.Pipe.NamedPipeClient.Send(String action, Object[] parameters)
at Docker.Actions.<RestartDaemon>b__37_0()
at Docker.ApiServices.TaskQueuing.TaskQueue.<>c__DisplayClass18_0.<.ctor>b__1()
Docker.Core.DockerException:
Failed to start
at Docker.Backend.ContainerEngine.Linux.DoStart(Settings settings, String daemonOptions, Credential credential)
at Docker.Backend.ContainerEngine.Linux.Restart(Settings settings, String daemonOptions, Credential credential)
at Docker.Backend.BackendNamedPipeServer.<Run>b__8_3(Object[] args)
at Docker.Core.Pipe.NamedPipeServer.<>c__DisplayClass9_0.<Register>b__0(Object[] parameters)
at Docker.Core.Pipe.NamedPipeServer.RunAction(String action, Object[] parameters)
Researching the following...
Failed to start at Docker.Core.Pipe.NamedPipeClient.<TrySendAsync>d__5.MoveNext()
... has not led to any insights beyond some "me too" comments, mostly related to version updates.
Issue Ticket
Unable to run Docker for Windows using TLS with Linux Containers

Related

Can I develop with VS Code in containers on a remote host running Windows/WSL2?

Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container-based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation, and I can start the Docker daemon without logging directly into Windows by creating a scheduled task to start the daemon automatically, as described here: https://stackoverflow.com/a/59467740/10692741
However, when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via a SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host and am unable to connect to Docker as well.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the Docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
Also, I want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine yet cannot open a remote connection in VS Code for some reason.
Ok, I was able to get this working using the port/socket forwarding technique. For sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
On the workstation, I started the OpenSSH Authentication Agent service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to ssh from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding; the basic form of the tunnel command is shown below. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ... mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
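For reference, the basic tunnel command from that doc looks like this (user and remote-host are placeholders; it forwards local port 23750 to the remote Docker socket):
ssh -NL localhost:23750:/var/run/docker.sock user@remote-host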
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
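In the extension versions I was using, that setting lives under the docker.host ID (an assumption worth verifying against your extension version, since the setting has moved over time), so the User settings.json entry looks roughly like this:
{
"docker.host": "tcp://localhost:23750"
}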
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker Desktop daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates any instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem; one commonly cited trick is sketched below.
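(Purely as an assumption on my part, not necessarily the exact workaround I had in mind: a scheduled task that keeps a long-lived process running in the distro appears to prevent the shutdown, e.g.
wsl.exe -d Ubuntu -e sleep infinity
where Ubuntu is whatever your distro is named.)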

Enable docker experimental features on Windows Server

To be able to run Linux containers on a Windows 2016 host we followed this tutorial. The issue we're having is that we can't seem to enable the experimental features. In the docs it says:
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to enabled.
File C:\ProgramData\docker\config\config.json:
{
"experimental": "enabled",
"debug": true
}
After restarting the Docker service (Restart-Service docker) and running docker info we still see the flag Experimental: false:
Operating System: Windows Server 2016 Standard Version 1607 (OS Build 14393.3686)
OSType: windows
Architecture: x86_64
Docker Root Dir: C:\ProgramData\docker
Experimental: false
How is it possible to enable the Docker experimental features on Windows Server 2016?
Even when I set the environment variable and restart PowerShell and the Docker service, it doesn't register in docker info:
[Environment]::SetEnvironmentVariable("DOCKER_CLI_EXPERIMENTAL", "enabled", "Machine")
After logging in to Docker with docker login, the file C:\Users\bob\.docker\config.json is created. Even when adding the key there, it's still not registered after a service restart:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "xxxxx"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.5 (windows)"
},
"experimental": "enabled",
"debug": true
}
You put your configuration in a file named config.json, but according to the docs the correct file name is daemon.json.
The full path to the config file must be: C:\ProgramData\docker\config\daemon.json
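A minimal sketch of that file (note that the daemon expects a boolean here, unlike the "enabled" string used by the DOCKER_CLI_EXPERIMENTAL environment variable):
{
"experimental": true,
"debug": true
}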
From all of the answers above, only adding the --experimental parameter under Services -> Docker -> Path to executable worked. You can change it using regedit.exe:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\docker
Set the ImagePath value to "C:\Program Files\docker\dockerd.exe" --experimental --run-service. If you have any other startup parameters, you should keep them.
You can also do it using sc (in PowerShell, sc.exe); a sketch is below.
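For example, from PowerShell (note the mandatory space after binPath=, and adjust the path and extra flags to match your install):
sc.exe config docker binPath= '"C:\Program Files\docker\dockerd.exe" --experimental --run-service'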
I followed the Canonical tutorial for Linux containers on Windows, and got stuck trying to pull the correct Ubuntu (Linux, not Windows) image (then found your question about setting experimental). If you ask dockerd.exe for the parameters it will accept (dockerd.exe --help), one of the options is --experimental.
Setting --experimental on the dockerd invocation worked for me.
If you can configure the invocation of your daemon with --experimental (rather than inside a configuration file), this could solve your problem.
Try setting the environment variable with:
[Environment]::SetEnvironmentVariable("DOCKER_CLI_EXPERIMENTAL", "enabled")
This worked on my Linux cluster, where specifying User or Machine seemed to result in the variable being ignored.
I found out that it simply won't run on Windows Server 2016 as stated here:
Docker Desktop has changed its way to leverage WSL2 for running Linux containers on Windows 10. The plan for Docker EE is unclear as Docker Inc. has sold it to Mirantis. https://github.com/docker/for-win/issues/6470#issuecomment-633883063
So if you plan to run both Linux and Windows containers in production, you may want to look for other options, such as Kubernetes.
It turns out that Linux Containers on Windows (LCOW) is a preview feature of both Windows Server version 1709 and Docker EE. It won't work on Windows Server 2016, whose version (1607) is older than 1709.

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using the latest Oracle VirtualBox, with Windows 7 as the host OS.
I am trying to enable non-TLS access to the Docker remote API, so I can use the Postman REST client running on Windows to hit the Docker API running on the docker-machine in VirtualBox. I found that if the Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick, exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and how it can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive a docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep two things in mind:
This should not be done in production, as it makes the Docker machine very insecure.
The current solution disables most docker-machine functionality and all docker CLI functionality. docker-machine ssh remains operational, forcing one to SSH into the docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts Bash shell and is the place where all following commands will be run. Run docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
This starts the "vi" editor in elevated-privileges mode, required for editing the "profile" file, where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course. When the file is open, vi is not in editing mode; press "i" to start edit mode. Now you can make changes. After you have made all the changes, hit Esc and then type ZZ to save the changes and exit vi. If you need to exit vi without saving, after Esc type :q! and hit Enter. ":" turns on vi's command mode, and "q!" means exit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save the changes as described above; the relevant lines should then read as shown below.
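# /var/lib/boot2docker/profile (relevant lines after editing)
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no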
exit to leave the SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
This is the end of steps required to achieve the main goal.
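As a concrete check from the Windows host (a sketch; substitute the IP you noted from docker-machine ip):
curl http://192.168.99.101:2375/containers/json?all=1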
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands from there.
I encountered the same problem and, thanks to @VladH, got it working without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default

Docker - Replacement for `dockerd` on Mac

I wanted to start the docker daemon with an open TCP address like this: docker daemon -H tcp://0.0.0.0:2375, but the terminal suggested that I use dockerd instead, which is apparently not a program that comes with the Docker client for Mac. Is there a way I can either
A - get some form of dockerd on my Mac machine, or
B - get around the use of dockerd by some other method?
Install the socat command: brew install socat
Choose a port (in the example, 8099)
Run: socat -d -d TCP-L:8099,fork UNIX:/var/run/docker.sock
and then use tcp://localhost:8099 as the API URL
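To verify, point a client at that address (a sketch; the unversioned /version path also works for a raw HTTP check):
export DOCKER_HOST=tcp://localhost:8099
docker version
curl http://localhost:8099/version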
works for me, hope this helps
Finally, I found where Docker for Mac keeps its dockerd-like config.
Click the Docker icon in the menu bar, then Preferences > Advanced.
get around the use of dockerd by some other method. (2016)
Note that in 2022, you can go without dockerd/Docker Desktop entirely.
See Batuhan Apaydin's article "A modern toolkit to start working with container images on macOS that meets your needs without requiring a Docker Daemon or even Docker Desktop".
It uses lima+nerdctl
The nerdctl tool is designed as a drop-in replacement for the Docker client
And Lima is a hypervisor that launches Linux virtual machines with automatic file sharing, port forwarding, and containerd.
The name of lima comes from an abbreviation of the first two capital letters of LInux MAchines.
The design of Lima is similar to WSL2, but Lima focuses on macOS as the primary target host.
Lima uses QEMU, which is a generic and open source machine emulator and virtualizer, as a hypervisor under the hood to achieve the virtualization thing.
Lima can also work with other container engines such as Podman and even for non-container applications.
By default, when lima launches a VM, it runs buildkitd and containerd in a rootless way and also downloads necessary client tooling around them such as buildctl, nerdctl.
Everything will be set up for us. So, all that’s left is building, pulling, and running containers
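A minimal sketch of getting started with that toolkit (assuming Homebrew, and the default Lima template, which bundles containerd and nerdctl):
brew install lima
limactl start          # first run creates the default VM from a template
lima nerdctl run --rm -it alpine sh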
For buildkit, Batuhan proposes developer-guy/buildkit-machine
buildkit-machine allows you to make buildkitd daemon accessible in your macOS environment.
To do so, it uses lima, which is a Linux subsystem for macOS, under the hood.
lima spins up a VM that runs the buildkitd daemon in a rootless way, which means that the sock file of the buildkitd daemon is now accessible from /run/user/<USERID>/buildkit/buildkitd.
So: no more Docker Desktop / dockerd, and you can use containers in rootless mode!
For more, see Bret Fisher's video "Free Docker Desktop Alternatives: DevOps and Docker Live Show (Ep 156)" (Jan. 2022)
I have found a workaround for this in the official forum
https://forums.docker.com/t/using-pycharm-docker-plugin-with-docker-beta/8617/9
$ socat TCP-LISTEN:2376,reuseaddr,fork UNIX-CLIENT:/var/run/docker.sock
That workaround opens port 2376 to the world... as TLS isn't enabled, this is a bad idea, since anyone on the same network can hijack your Docker daemon.
It is not supported to run dockerd on Mac. From this issue:
I think on Darwin it should never suggest to run dockerd. The daemon runs in a Linux virtual machine, so you do not need to (and cannot) run it manually.
If you want to do any specific configuration on Mac, you might have already installed Docker Desktop. Docker Desktop supports configuration through its user interface (screenshot omitted here).

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
Their "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
As Docker runs on Windows, I was wondering if it was possible to create a kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean docker, etcd, and kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and a boot2docker VM to run anything Docker-related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try on a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run them containerized.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.
