Can I develop with VS Code in containers on a remote host running Windows/WSL2? - windows

Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container-based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation, and I can start the Docker daemon without logging directly into Windows by creating a scheduled task to start the daemon automatically, as described here: https://stackoverflow.com/a/59467740/10692741
However, when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via an SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host, but was still unable to connect to Docker.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the Docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
I also want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine yet cannot open a remote connection in VS Code for some reason.

Ok, I was able to get this working using the port/socket forwarding technique. For the sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
On the workstation, I started the OpenSSH Authentication Agent service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to ssh from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ..." mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
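For illustration, a sketch of that tunnel command (user names and hostnames are placeholders, and the ProxyCommand part reflects my jumphost setup, which you may not need):

ssh -o "ProxyCommand=ssh -W %h:%p user@windows-host" -NL localhost:23750:/var/run/docker.sock user@wsl-ubuntu

The -N suppresses a remote command and -L forwards local port 23750 to the Docker socket on the remote.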
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
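For reference, a minimal settings.json sketch of that change (docker.host is the key that worked for me at the time; newer releases of the Docker extension may expose this setting differently, so treat the key name as an assumption):

{
    "docker.host": "tcp://localhost:23750"
}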
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates any instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.
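As a rough sketch of that workaround (an assumption on my part that parking a long-running process in the distro keeps it alive; "Ubuntu" is a placeholder for your distro name), you could schedule something like this at startup:

wsl.exe -d Ubuntu -e sh -c "sleep infinity"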

Related

How are Docker Desktop proxy settings on Windows propagated to Docker?

I am on a corporate Windows laptop and I want to start experimenting with Docker. Being a corporate machine, everything needs to go through the corporate proxy.
I installed Debian on WSL and then Docker Desktop, which installed its components on the Debian WSL VM. My first priority, however, was to test Docker on WSL directly and not through Docker Desktop. So I set out to read the Docker docs and download the docker/getting-started image through the Debian terminal. That, however, failed because it didn't use the network proxy.
The Docker Desktop docs state that setting the proxy settings in Docker Desktop will propagate them to Docker itself. Indeed, I set the proxy settings in Docker Desktop, and I was then able to properly download my image from inside Debian.
Since I want to have full control of Docker through the Debian terminal and not Docker Desktop, I want to understand how the proxy settings propagate to Docker inside WSL. I imagined that Docker Desktop altered some configuration file inside Debian, but a grep across the whole system for the proxy IP turned up nothing. So my question is: in what way does Docker Desktop let Docker know which proxy to use?
As far as I know (and I'm not 100% sure, as I have not worked with Docker in a while):
When you start the Docker service in WSL, it runs the /etc/init.d/docker script. When you set the company proxy manually in Docker Desktop, the reload consists of:
Stopping the Docker service
Updating the configuration script at /etc/init.d/docker
Starting the service again, now with the new script
To confirm this, check the contents of the /etc/init.d/docker script.
As an alternative to having the script edited for you, you can export the proxy configuration in WSL yourself and check whether it works without adding the proxy configuration to Docker Desktop; a sketch of that follows.
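A minimal sketch of that manual route, assuming the stock Debian packaging where /etc/init.d/docker sources /etc/default/docker (the proxy address and port are placeholders):

# hand the proxy to the daemon directly, bypassing Docker Desktop
echo 'export http_proxy="http://proxy.corp.example:8080"' | sudo tee -a /etc/default/docker
echo 'export https_proxy="http://proxy.corp.example:8080"' | sudo tee -a /etc/default/docker
# restart so the init script picks up the new environment
sudo service docker restart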

Launching a Singularity Container Remotely using Visual Studio Code

I am aware that you can launch docker containers remotely in VSCode. Is it possible to do the same with singularity containers?
Update: the solution to this was published in the same issue (https://github.com/microsoft/vscode-remote-release/issues/3066#issuecomment-1019500216) as before by user oschulz:
As promised, here are some instructions on how to use Singularity with VS-Code Remote SSH via SSH RemoteCommand. The procedure described below makes VS-Code run its remote server component inside a Singularity container instance (other runtimes like Shifter work too).
Acknowledgement: Credit for a lot of this goes to @gipert, who refined my original approach (using a custom SSH script) when support for RemoteCommand became available in VS-Code recently.
Step 1
Use VS-Code >= v1.64 (which includes support for the SSH RemoteCommand setting), and install the pre-release version of the Remote SSH extension.
Important: In the VS-Code settings, set "remote.SSH.enableRemoteCommand": true.
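That is, in your settings.json:

{
    "remote.SSH.enableRemoteCommand": true
}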
Step 2
In your "$HOME/.ssh/config", add something like
Host myimage1~*
RemoteCommand singularity shell /path/to/image1.sif
RequestTTY yes
Host myimage2~*
RemoteCommand singularity shell /path/to/image2.sif
RequestTTY yes
Host somehost myimage1~somehost myimage2~somehost
HostName some.host.somewhere
User your_username_
Host otherhost myimage1~otherhost myimage2~otherhost
HostName some.otherhost.somewhere
User your_username_
Test whether this works using ssh myimage1~somehost. This should drop you into an SSH session inside of an instance of the "/path/to/image1.sif" container image on some.host.somewhere.
Connecting to the remote host with VS-Code: F1 > "Connect to Host" > "myimage1~somehost" should now get you a remote VS-Code session running in the container image as well. The same goes for "myimage2~somehost", "myimage1~otherhost" and "myimage2~otherhost".
Step 3
However, since VS-Code reuses the remote server instance, this is not sufficient to run multiple container images on the same host at the same time. To get separate (per-container) VS-Code server instances on the same host, add something like this to your VS-Code preferences:
"remote.SSH.serverInstallPath": {
"myimage1~somehost": "~/.vscode-container/myimage1",
"myimage1~otherhost": "~/.vscode-container/myimage1",
"myimage2~somehost": "~/.vscode-container/myimage2",
"myimage2~otherhost": "~/.vscode-container/myimage2"
}
Request to the VS-Code dev team
Could "remote.SSH.serverInstallPath" be controlled via an environment variable? This would allow us to eliminate all these cumbersome "remote.SSH.serverInstallPath" preferences. The environment variable could be set by a container startup script on the remote side (like the one below) automatically, depending on the selected container image.
Other Container runtimes
To use a different container runtime than Singularity (e.g. Shifter, Charliecloud, etc.), simply replace singularity shell /path/to/image1.sif with the appropriate command for your runtime.
On some systems (e.g. with Shifter at NERSC) you may also need to override $XDG_RUNTIME_DIR, since its default location may not be writable from within a container instance. In such cases, it's best to use a custom container run-script like
#!/bin/sh
# point XDG_RUNTIME_DIR at a per-user directory that is writable from the container
export XDG_RUNTIME_DIR="${TMPDIR:-/tmp}/`whoami`/run"
mkdir -p "$XDG_RUNTIME_DIR"
# hand off to the container runtime (Shifter here), image name passed as the first argument
exec shifter --image="$1"
So in your SSH config, use
RemoteCommand /my/homedir/.local/bin/run_container image_name
I maintain a little container start-script called cenv that handles $XDG_RUNTIME_DIR (and quite a bit more, including some default bind-mounts) automatically for both Singularity and Shifter (contributions welcome).
Tips and tricks
If things don't work, try "Kill server on remote" from VS-Code and reconnect.
You can also try starting over from scratch with brute force: Close the VS-Code remote connection. Then, from an external terminal, kill the remote VS-Code server instance:
$ ssh somehost
$ kill -9 -1
(Will kill all processes you own on the remote host.)
Remove the ~/.vscode-server directory.
Old:
I believe this is still not supported. Refer to this issue: https://github.com/microsoft/vscode-remote-release/issues/3066, and there are also some ideas for potential workarounds in the same link.

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using the latest Oracle VirtualBox, with Windows 7 as the host OS.
I am trying to enable non-TLS access to Docker remote API, so I could use Postman REST client running on Windows and hit docker API running on docker-machine in the VirtualBox. I found that if Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive a docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide more detailed, step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine very insecure.
This solution disables most docker-machine functionality and all docker CLI functionality from the host; docker-machine ssh remains operational, forcing you to SSH into the docker machine to run docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to a non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts Bash shell and is the place where all following commands will be run. Run docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile

This starts the "vi" editor with the elevated privileges required to edit the "profile" file, where the Docker host settings live. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course: when the file opens, vi is not in editing mode; press "i" to start edit mode and make your changes. When you're done, hit Esc and then type ZZ to save and exit. To exit without saving, hit Esc, type :q! and press Enter; ":" turns on vi's command mode, and "q!" means exit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save the changes as described above.
Run exit to leave the SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
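For example, with curl from the Toolbox terminal (192.168.99.100 stands in for whatever docker-machine ip reported):

$ curl http://192.168.99.100:2375/containers/json?all=1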
This is the end of steps required to achieve the main goal.
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands inside that session.
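For example (assuming the machine is named "default"):

$ docker-machine ssh default
docker ps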
I encountered the same problem and, thanks to @VladH, got it working without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the Maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
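One way to set these persistently on Windows is from a cmd prompt with setx (values as listed above; adjust the IP to your own machine's):

setx DOCKER_HOST tcp://192.168.99.100:2376
setx DOCKER_TLS_VERIFY 0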

Easiest way to connect with PuTTY to an existing docker container

Often I come across this situation:
I have an existing docker container, running a certain service, usually set up from a Dockerfile from Github, etc., usually based on Ubuntu
I am able to run commands inside this container (with docker exec or by setting an entrypoint), including sh
Interactive commands like vi, nano, aptitude or mc don't work because of the buggy terminal in Docker Toolbox, with errors ranging from broken arrow keys to garbled characters to outright crashes.
Now the question:
Can I run anything inside my container to connect to a machine with a proper terminal? For example I could SSH into the docker host, so maybe I can run something there that the container can connect to?
I tried mosh, but it seems the mosh client does not run a shell by itself, but instead tries to forward to sshd, which the container doesn't have.
Docker is used to create lightweight containers that run a service with as few resources as possible. That said, Docker does not limit what code, apps, or utilities you can run. If you want to connect to the container the way you would to any other Linux server, via SSH, you need to make sure the container contains and is running an SSH server such as openssh-server, and that you expose the port (normally port 22) when you execute the docker run command; a sketch follows.
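A minimal sketch for experimenting (the image tag, published port, user name, and password are all illustrative, and installing sshd at container start like this is throwaway-quality, not production practice):

$ docker run -d -p 2222:22 --name sshd-test ubuntu:20.04 sh -c "apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd && useradd -m -s /bin/bash dev && echo dev:changeme | chpasswd && /usr/sbin/sshd -D"

PuTTY (or any SSH client) can then connect to the Docker host's IP on port 2222 as user "dev".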

What are the best practices for using Docker for front-end development on OS X, and how to pass ENV through from host to container

I am looking for best practices for front-end development on OSX with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is, I need two-way syncing of files from the host system to the virtual container and vice versa via a mounted (synced) folder, and IO performance should be close to native. Therefore I'm not considering shared-folder filesystems like vboxsf and vmhgfs. I also need some build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried unison, and what performance does Docker achieve with it?
Lastly, I have a specific task: I want to pass ENV from host to container so I can run app.js via nodejs, if that is possible. In other words, I have to add an ENV variable for the path to nodejs (within the virtual container) to my ~/.bash_profile. Is there any chance of passing NODE_PATH through from host to container at all?
Thanks.
Not sure if "best practice" is asking for opinions (which is against SO policy), note that this also heavily depends on your tools chain.
I'm not a fan of boot2docker as it works today (although it may improve, and it may be the best approach in the long term, since it is the official approach maintained by the Docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. Since I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCoreLinux, which is another Linux distribution I'd have to learn, while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a Node developer, start from the node image, npm install gulp/browserify/whatever you need as a base image for your projects, and docker build -f dev.Dockerfile -t my_dev_container . (a sketch follows the last step).
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
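A sketch of that Dockerfile and run sequence (the node tag, global packages, and the NODE_PATH value are illustrative; NODE_PATH is included because the question asked about passing it through):

# dev.Dockerfile: base image with the dev tooling preinstalled
FROM node
RUN npm install -g gulp browserify
WORKDIR /src/app

$ docker build -f dev.Dockerfile -t my_dev_container .
$ docker run -it -v /mnt/from/host/:/src/app/ -e NODE_PATH=/src/app/node_modules my_dev_container bash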
You are now in a terminal with a fully isolated environment that can be put under source control and replicated between project members, and that has full live-reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll need an IDE that can use the remote server, or you'll have to run your IDE within the dev container (cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
