Unable to use fabric8 with Docker for Windows

Seems like there might be something I don't understand.
I configured my pom.xml with <dockerHost>tcp://localhost:2375</dockerHost>.
Everything is OK for build/deploy.
But it seems the fabric8 plugin (v3.5.31) is not configurable with Docker for Windows: https://store.docker.com/editions/community/docker-ce-desktop-windows
So it's impossible to use fabric8:start.
I tried the DOCKER_HOST env var; no go.
Am I missing something? Do I really need to install with gofabric8? (which does not work anyway)
The Docker engine is all there; I am unable to find any documentation on this, sadly. There should be a way to configure it to use the parameter!
Thanks.

On Windows, the dockerHost is not tcp://localhost:2375. You need to find the IP of the VM where Docker is running, which you can get by running docker-machine env <machine-name>. That gives you the correct DOCKER_HOST to use.
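For example, assuming a machine named default created by Docker Toolbox (substitute your own machine name), the output looks roughly like this (the exact form depends on your shell):
docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="C:\Users\you\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
Since the machine speaks TLS on port 2376, the plugin also needs the certificate path, so the pom.xml configuration would look something like this (the IP and paths are illustrative; take the real values from the env output):
<dockerHost>tcp://192.168.99.100:2376</dockerHost>
<certPath>C:/Users/you/.docker/machine/machines/default</certPath>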
Alternatively, the plugin has a Windows feature that lets you omit dockerHost entirely: it locates an existing docker-machine (or creates a new one) and uses the Docker server running on that machine. Take a look at this pull request for more info.
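If you want to try that route, a minimal sketch of the <machine> configuration it added to docker-maven-plugin (check the plugin docs for the exact options supported by your version) looks like:
<configuration>
  <machine>
    <name>default</name>
    <autoCreate>true</autoCreate>
  </machine>
</configuration>
With autoCreate set, the plugin creates the named docker-machine if it doesn't exist yet.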

Related

How are Docker Desktop proxy settings on Windows propagated to Docker?

I am on a corporate Windows laptop and I want to start experimenting with Docker. Being a corporate machine, everything needs to go through the corporate proxy.
I installed Debian on WSL and then Docker Desktop, which installed its components in the Debian WSL VM. My first priority, however, was to test Docker on WSL directly and not through Docker Desktop. So I set out to read the Docker docs and tried to download the docker/getting-started image from the Debian terminal. That, however, failed because the pull didn't go through the network proxy.
The Docker Desktop docs state that setting the proxy in Docker Desktop will propagate the proxy settings to Docker itself. Indeed, I set the proxy in Docker Desktop, and I was then able to properly download my image from inside Debian.
Since I want to have full control of Docker through the Debian terminal and not Docker Desktop, I want to understand how the proxy settings propagate to Docker inside WSL. I imagined that Docker Desktop altered some configuration file inside Debian, but a grep across the whole system for the proxy IP turned up nothing. So my question is: in what way does Docker Desktop let Docker know which proxy to use?
As far as I know (and I'm not 100% sure, as I haven't worked with Docker in a while):
When you start the Docker service in WSL, this triggers the /etc/init.d/docker script. When you set the company proxy manually in Docker Desktop, what happens during the reload is:
Stopping the Docker service
Updating the configuration script at /etc/init.d/docker
Starting the service again, now with the new script
To verify this, you can check the contents of the /etc/init.d/docker script.
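For instance, a quick check from the Debian terminal (assuming the script lives at the usual path):
grep -i proxy /etc/init.d/docker
If Docker Desktop wrote the settings there, your proxy address should show up in the output.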
As an alternative to having the script edited for you, you can export the proxy configuration in WSL yourself and check whether it works without adding the proxy settings to Docker Desktop.
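A minimal sketch of that alternative, using a placeholder proxy address; put these in your shell profile (e.g. ~/.profile on Debian) so they survive new sessions:
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
export NO_PROXY=localhost,127.0.0.1
Note that the Docker client and the daemon read proxy settings separately, so depending on your setup the daemon may still need its own configuration.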

Run Quarkus tests with TestContainers using WSL2 + Podman

With the license change for Docker Desktop on Windows, I'm looking for an alternative. Podman + WSL2 seems to do the trick for me. Except for Testcontainers in my Quarkus tests.
I'm able to run my tests within WSL2 by starting podman system service in WSL2 (podman system service -t 0 tcp:localhost:8880) and setting the DOCKER_HOST env var (DOCKER_HOST=tcp://localhost:8880).
Now this works, but it isn't really what I need, since at my company we develop in VSCode, IntelliJ, and Eclipse. I'd like to be able to run the tests from within those IDEs. Is there any way to pass the Podman URI (from WSL) to my IDE on Windows while running Quarkus tests?
If anyone knows any other Docker Desktop alternatives that work with Testcontainers, that would be awesome as well. I have tried Rancher Desktop, but it gets stuck and the tests eventually time out.
You have to install the podman-remote package on your Windows host machine, configure it to use tcp://WSL2_IP:8880 (see the Podman documentation), and finally make an alias so that docker points to podman.exe.
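For the alias step, a minimal PowerShell sketch (assuming podman.exe is on your PATH; Set-Alias only lasts for the current session unless you add it to your PowerShell profile):
Set-Alias -Name docker -Value podman.exe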
Now you are able to run docker commands as usual: docker ps, docker run, etc. But that does not mean all tools will work out of the box; you have to tune them.
For example, for Testcontainers you have to set these env variables on the host machine:
PowerShell
[System.Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_CHECKS_DISABLE", "True", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_RYUK_DISABLED", "True", [System.EnvironmentVariableTarget]::User)
P.S. Docker Desktop used to set all these variables for you; from now on you have to do it yourself.
We ran the testcontainers-java tests using various solutions for Docker.
I don't know if running in WSL changes a lot compared to the Windows-only setup.
In general, Testcontainers doesn't rely on CLI commands alone and works best with fully compatible Docker environments. Based on the findings in that experiment, you can try minikube.
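If you try minikube, the rough idea is the same as with Podman: start a cluster and point DOCKER_HOST at the Docker daemon it exposes. A sketch (minikube docker-env prints the exact export statements for your setup):
minikube start
minikube docker-env
Depending on how the daemon is exposed, Testcontainers may still need overrides like the TESTCONTAINERS_* variables shown above.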
Take IntelliJ, for example: you can set the DOCKER_HOST env var under "Run/Debug Configurations" and it works perfectly.

Docker stuck on "Waiting for SSH to be available..."

I'm using Docker on Windows with Hyper-V to create containers. I've added a docker-machine named vmachine to my Docker configuration. The first time the machine is created, it gets an IP (although I cannot get nginx to access it: ERR_CONNECTION_REFUSED) and finishes booting up.
When I turn the machine off and then try to boot it again, I get stuck on this message:
Waiting for SSH to be available...
It doesn't evolve from there. The machine is booted; however, when I run docker-machine ip vmachine I get an IPv6 address like fe80::215:5dff:fe21:10b instead of an IPv4 one.
What am I doing wrong?
The problem here is that by default Docker uses the DockerNAT network switch. You should create a new external network switch instead. This issue is covered here and here. You can create a machine attached to an external switch using the command below:
docker-machine create -d hyperv --hyperv-virtual-switch external-switch tempbox1
or you can create the switch itself through the Hyper-V Manager UI.
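If you prefer PowerShell over the Hyper-V Manager UI, a sketch (run as Administrator; "Ethernet" is a placeholder for whatever Get-NetAdapter lists as your physical adapter):
New-VMSwitch -Name "external-switch" -NetAdapterName "Ethernet" -AllowManagementOS $true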
Be sure to reboot the device after creating the external switch.
I had a similar issue and none of the solutions worked. It turns out that, according to this answer, Docker launches SSH with Unix-specific elements. This is said to have been fixed in the releases that followed, but I still encountered the 'Waiting for SSH' issue. I resolved it by simply using Git Bash to run all Docker-related SSH commands.
Use the --native-ssh switch,
for example docker-machine --native-ssh .... Get more details here.
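For instance, to boot the machine from this question using the Go-native SSH client (vmachine being the machine name used above; a sketch, not tested on your setup):
docker-machine --native-ssh start vmachine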
docker-machine.exe -debug create --driver hyperv --hyperv-virtual-switch "External Virtual Switch" --hyperv-cpu-count "1" --hyperv-memory "1024" --hyperv-disk-size "20000" mydockervm
Make sure you have an additional virtual switch configured with the external network driver selected, and uninstall VirtualBox.
Use the -debug switch to see the exact error; for me it was failing to allocate memory.
Here's what solved it for me.
It turns out Windows 10, starting with version 1709, has a built-in SSH client at C:\Windows\System32\OpenSSH. Here's an article discussing it.
It looks like Docker is using that SSH implementation, and it's not compatible. I didn't look for a proper way to remove the built-in SSH implementation in Windows 10 and simply renamed the folder. That was enough to fix it for me.
After doing what is mentioned in the suggestions above, if you are running Docker on a Windows machine, try to log in using the CLI. This worked for me.
If you are using Command Prompt, Docker will get stuck at Waiting for SSH to be available..., so switch to Git Bash as @Dave Howson said and it will work.
If you're using Oracle VM, you must first ensure that your new cloud VM is running.

Is it possible to run kubernetes as a docker container?

I'm very new to Kubernetes and trying to conceptualize it, as well as set it up locally, in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
Since Docker runs on Windows, I was wondering whether it's possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... but part of me thinks it might be possible to:
Start Docker, boot the 'default' machine.
Create a Kubernetes container, configured to communicate with the existing Docker 'default' machine.
Use Kubernetes to manage the existing Docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, & Kubernetes run together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
On Windows, you need docker-machine and a boot2docker VM to run anything Docker-related.
There is no (not yet) native "Docker for Windows".
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu) instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized deployment.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.

What are the best practices for using Docker for front-end development on OS X, and how to pass ENV from host to container

I am looking for best practices for front-end development on OS X with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way syncing of files between the host system and the virtual container via a mounted (synced) folder, and IO performance should be close to native. Therefore I'm not considering shared-folder filesystems like vboxsf and vmhgfs. I also need build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried Unison, and what performance does Docker provide with it?
Finally, I have a specific task: I want to run app.js via Node.js with ENV passed through from host to container, if that is possible. In other words, I have to add an ENV variable for the path to Node.js (within the virtual container) to my ~/.bash_profile. Is there any way to pass NODE_PATH through from host to container at all?
Thanks.
Not sure if "best practice" is asking for opinions (which is against SO policy); note that this also heavily depends on your toolchain.
I'm not a fan of boot2docker as it works today (although it may improve, and it may be the best approach in the long term, since it is the official approach maintained by the Docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine, which does pretty much the same thing in a more generic way, allowing you to manage Docker daemons locally, on the LAN, or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. Since I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCore Linux, which is another Linux distribution I'd have to learn, while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target setup for your dev environment is as follows:
Have SSH access with mounting rights to a docker host (either in a VM or on the LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
Mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements (a sketch follows this list); if you're a node developer, start from the node image, npm install gulp/browserify/... whatever you need as a base image for your projects, & docker build -f dev.Dockerfile -t my_dev_container .
docker run -it -v /mnt/from/host/:/src/app/ -e NODE_PATH=$NODE_PATH my_dev_container
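As a concrete starting point, here is a minimal sketch of such a dev.Dockerfile (the tool list is an example; swap in whatever your build actually needs):
# dev.Dockerfile: throwaway dev environment, not a production image
FROM node
RUN npm install -g gulp browserify
WORKDIR /src/app
CMD ["bash"]
Build it with the docker build command above, then run it with the docker run line to get a shell with your host sources mounted at /src/app.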
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll need an IDE that can use the remote server, or you'll have to run your IDE within the dev container (Cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
