What are the best practices for using Docker for front-end development on OS X, and how to pass ENV from host to container - macos

I am looking for best practices for front-end development on OSX with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way file syncing between the host system and the virtual container via a mounted (synced) folder, and I/O performance should be close to native. Therefore I'm not considering shared-folder filesystems like vboxsf and vmhgfs. I also need build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried Unison, and what performance does Docker give with it?
Finally, I have a special task: I want to run app.js via Node.js using an ENV passed from host to container, if that is possible. In other words, I have to add an ENV variable for the path to Node.js (inside the virtual container) to my ~/.bash_profile. Is there any way to pass NODE_PATH through from host to container at all?
Thanks.

Not sure if asking for "best practice" counts as asking for opinions (which is against SO policy); note that this also heavily depends on your toolchain.
I'm not a fan of boot2docker as it works today (although it may improve, and it may be the best approach in the long term, as it is the official approach maintained by the Docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCoreLinux, which is yet another Linux distribution I'd have to learn while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev environment is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
Mount your working directory into your docker host as /mnt/from/host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a node developer, start from the node image, npm install gulp/browserify/whatever you need as a base image for your projects, and docker build -f dev.Dockerfile -t my_dev_container . (see the sketch after this list)
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container (use -e to pass any environment variables you need into the container)
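As a concrete illustration of the last two steps, here is a minimal sketch for a node project (the base image, installed packages, and paths are assumptions, not a prescribed setup):

# dev.Dockerfile (sketch)
FROM node:lts
RUN npm install -g gulp
WORKDIR /src/app
CMD ["bash"]

# build the dev image, then run it with the synced folder mounted,
# passing any environment variables your tooling needs with -e:
docker build -f dev.Dockerfile -t my_dev_container .
docker run -it -v /mnt/from/host/:/src/app/ -e NODE_ENV=development my_dev_container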
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll need an IDE that can use the remote server, or you'll have to run your IDE within the dev container (Cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.

Related

Run Quarkus tests with TestContainers using WSL2 + Podman

With the license change for Docker Desktop on Windows, I'm looking for an alternative. Podman + WSL2 seems to do the trick for me. Except for Testcontainers in my Quarkus tests.
I'm able to run my tests within WSL2 by starting podman system service in WSL2 (podman system service -t 0 tcp:localhost:8880) and setting the DOCKER_HOST env var (DOCKER_HOST=tcp://localhost:8880).
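Concretely, the WSL2 side of that setup looks like this (the same commands as above, just spelled out):

# inside WSL2: expose the Podman API over TCP, with no timeout
# (run this in its own terminal, or background it)
podman system service -t 0 tcp:localhost:8880
# in the shell that runs the tests: point Docker-compatible tooling at it
export DOCKER_HOST=tcp://localhost:8880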
Now this works, but isn't really what I need, since at my company we develop in VSCode, IntelliJ and Eclipse. I'd like to be able to run the tests from within those IDEs. Is there any way to pass the Podman URI (from WSL) to my IDE in Windows while running Quarkus tests?
If anyone would know any other docker desktop alternatives that work with TestContainers, that would be awesome as well. I have tried Rancher Desktop, but it gets stuck and the tests eventually time out.
You have to install the podman-remote package on your Windows host machine, then configure it to use tcp://WSL2_IP:8880 (see the podman documentation), and finally make an alias docker -> podman.exe.
Now you are able to run docker commands as usual: docker ps, docker run, etc. But that does not mean all tools will work out of the box; you have to tune them.
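A quick sanity check from the Windows side (a sketch; it assumes curl is available, that WSL2_IP is reachable from the host, and it relies on the Docker-compatible /_ping endpoint that podman's system service exposes):

curl http://WSL2_IP:8880/_ping     # should return OK if the remote Podman API is reachable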
For example, for Testcontainers you have to set these env variables on the host machine:
PowerShell
[System.Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_CHECKS_DISABLE", "True", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_RYUK_DISABLED", "True", [System.EnvironmentVariableTarget]::User)
P.S. All of those variables used to be set for you by Docker, but from now on you have to set them yourself.
We ran the testcontainers-java tests using various solutions for Docker.
I don't know if running in WSL changes a lot compared to the Windows only setup.
In general, Testcontainers doesn't rely on CLI commands only and works best with compatible Docker environments. Based on the findings in that experiment, you can try minikube.
Take IntelliJ for example: you can set the DOCKER_HOST env var in "Run/Debug Configurations" and it works perfectly.

Can I develop with VS Code in containers on a remote host running Windows/WSL2?

Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation and am able to start the docker daemon without logging directly into Windows by creating a Task to start the daemon automatically as described here: https://stackoverflow.com/a/59467740/10692741
However when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via an SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host but am unable to connect to Docker that way as well.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the docker containers in the VS Code extension. I am able to start them, stop them, and enter into them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
Also, I want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine but cannot open a remote connection in VS Code for some reason.
OK, I was able to get this working using the port/socket forwarding technique. For the sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
On the workstation, I started the OpenSSH Authentication Agent service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to ssh from Powershell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ... mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
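For reference, the plain form of that tunnel (without my jumphost specifics; user and host are placeholders) is roughly:

# forward a local TCP port to the Docker socket inside WSL on the remote
ssh -NL localhost:23750:/var/run/docker.sock user@remote-docker-host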
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker Desktop daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates any instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.

Docker windows version container compatibility

There is one question that doesn't show up in the forums but which deserves to be discussed, IMHO:
Why isn't it possible to pull or build Windows docker images (e.g. nanoserver 2019) on an older host system? On the official site it is documented that they are not compatible to run, yes:
Version compatibility
But, as I said, that is "to run". I don't need to run the newer Windows container image on the older host system; I just want to pull and build it, to distribute it to a compatible system later on.
So, is there a way to handle this issue, which shouldn't be one in the first place?
You missed one important thing:
Even when you just run docker build, it uses a container: the build happens inside a container, not directly on your host machine. This is the process when you run docker build:
Docker creates a temporary build container from the base image you specified in the Dockerfile with FROM.
It runs all instructions of the Dockerfile in that temporary build container.
It saves the temporary build container as an image.
So, since you have seen Microsoft's Version compatibility page for containers, you can now also see why build has the same requirement: it also creates a container (this temporary container is just removed after the build).
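If you want to see that temporary container with your own eyes, here is a small sketch (a Linux example purely for illustration; it assumes the legacy builder, since BuildKit manages intermediate state differently):

# Dockerfile (minimal sketch)
FROM alpine
RUN echo "this runs inside a temporary build container"

# keep the intermediate containers instead of removing them after the build
DOCKER_BUILDKIT=0 docker build --rm=false -t demo .
docker ps -a    # the exited intermediate build container(s) show up here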
UPDATE:
The whole story is:
YES, on Linux there is no problem for an old host OS to build/run a newer OS image/container, because host & container share the same kernel and the rootfs is provided by the container itself.
BUT, you are talking about Windows, and from the official Windows documentation we can see the following:
Windows Server 2016 and Windows 10 Anniversary Update (both version 14393) were the first Windows releases that could build and run Windows Server containers. Containers built using these versions can run on newer releases such as Windows Server version 1709, but there are a few things you need to know before you start.
As we've been improving the Windows container features, we've had to make some changes that can affect compatibility. Older containers will run the same on newer hosts with Hyper-V isolation, and will use the same (older) kernel version. However, if you want to run a container based on a newer Windows build, it can only run on the newer host build.
The above is the reason why an old Windows OS cannot run a new Windows container.
Furthermore, what I want to say is that docker build fails for the same reason as docker run:
docker run $theImageName needs to start a container based on the image $theImageName, and as Microsoft said, a newer OS container has to use newer kernel features, so the new container cannot run on the old Windows host. Remember, container & host share the same kernel.
And docker build -t xxx . finds the Dockerfile with FROM $baseImageName in it, then starts a container based on the image $baseImageName; this is a temporary container. All instructions in the Dockerfile are executed in this temp container, not on the docker host. Finally, this temp build container is deleted, so you never see it.
So, as you see, both docker run & docker build start a container that needs to use the newer Windows host features and cannot use the old Windows kernel. This is a Microsoft limitation: if you already understand the limit for docker run on Windows, the reason is the same for docker build on Windows.

Docker - Replacement for `dockerd` on Mac

I wanted to start the docker daemon with an open TCP address like this: docker daemon -H tcp://0.0.0.0:2375, but the terminal suggested that I use dockerd instead, which is apparently not a program that comes with the Docker Client for mac. Is there a way I can either
A - get some form of dockerd on my mac machine.
B - get around the use of dockerd by some other method.
?
Install socat command: brew install socat
Choose a port: (in the example 8099)
Run: socat -d -d TCP-L:8099,fork UNIX:/var/run/docker.sock
and then use tcp://localhost:8099 as API URL
works for me, hope this helps
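A quick way to exercise the forwarded endpoint, for example from the docker CLI itself (assuming the socat command above is still running):

export DOCKER_HOST=tcp://localhost:8099
docker ps    # the client now talks to the daemon through the forwarded socket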
Finally, I found the dockerd-like config for Docker on Mac:
Click the Docker icon in the menu bar, then Preferences, then Advanced.
get around the use of dockerd by some other method. (2016)
Note that in 2022, you can go without dockerd/Docker Desktop entirely.
See Batuhan Apaydin's article "A modern toolkit to start working with container images on macOS that meets your needs without requiring a Docker Daemon or even Docker Desktop".
It uses lima+nerdctl
The nerdctl tool is designed as a drop-in replacement for the Docker client
And Lima launches Linux virtual machines with automatic file sharing, port forwarding, and containerd.
The name lima is an abbreviation of the first letters of LInux MAchines.
The design of Lima is similar to WSL2, but Lima focuses on macOS as the primary target host.
Lima uses QEMU, which is a generic and open source machine emulator and virtualizer, as a hypervisor under the hood to achieve the virtualization thing.
Lima can also work with other container engines such as Podman and even for non-container applications.
By default, when lima launches a VM, it runs buildkitd and containerd in a rootless way and also downloads necessary client tooling around them such as buildctl, nerdctl.
Everything will be set up for us. So, all that's left is building, pulling, and running containers.
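As a rough sketch of what that looks like in practice (assuming a Homebrew install and the default Lima template; exact commands may differ slightly between versions):

brew install lima
limactl start                # create and boot the default Linux VM
lima nerdctl run -it --rm alpine echo "hello from containerd"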
For buildkit, Batuhan proposes developer-guy/buildkit-machine
buildkit-machine allows you to make buildkitd daemon accessible in your macOS environment.
To do so, it uses lima, which is a Linux subsystem for macOS, under the hood.
lima spins up a VM that runs the buildkitd daemon in a rootless way, which means that the sock file of the buildkitd daemon is now accessible at /run/user/<USERID>/buildkit/buildkitd.
So: no more Docker Desktop / dockerd, and containers run in rootless mode!
For more, see Bret Fisher's video "Free Docker Desktop Alternatives: DevOps and Docker Live Show (Ep 156)" (Jan. 2022)
I have found a workaround for this in the official forum
https://forums.docker.com/t/using-pycharm-docker-plugin-with-docker-beta/8617/9
$ socat TCP-LISTEN:2376,reuseaddr,fork UNIX-CLIENT:/var/run/docker.sock
That workaround opens port 2376 to the world. As TLS isn't enabled, this is a bad idea: anyone on the same network can hijack your docker daemon.
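If you only need local access (for example for an IDE on the same machine), a slightly safer variant of the same idea is to bind the listener to the loopback interface only (a sketch; still unauthenticated, so treat it as local-only convenience):

socat TCP-LISTEN:2376,bind=127.0.0.1,reuseaddr,fork UNIX-CLIENT:/var/run/docker.sock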
It is not supported to run dockerd on Mac. From this issue:
I think on Darwin it should never suggest to run dockerd. The daemon runs in a Linux virtual machine, so you do not need to (and cannot) run it manually.
If you want to do any specific configuration on Mac, you have probably already installed Docker Desktop. Docker Desktop supports configuration through its user interface (Preferences).

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
As Docker runs on Windows, I was wondering if it is possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a vagrant instance. Does that mean docker, etcd, & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it?)
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything docker related.
There is no (not yet) "docker for Windows".
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized Kubernetes.
If you're able to run docker on windows, it would probably work. I haven't tried it on windows personally.

Resources