I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
Since Docker runs on Windows, I was wondering if it was possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... but part of me thinks it might be possible to:
Start Docker and boot the 'default' machine.
Create a Kubernetes container, configured to communicate with the existing Docker 'default' machine.
Use Kubernetes to manage the existing Docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes run together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
On Windows, you need docker-machine and a boot2docker VM to run anything Docker related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try on a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All the k8s components can be brought up with hyperkube, which helps you run a containerized cluster.
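For reference, the old Docker-based getting-started guide brings up the other pieces with similar commands. This is only a sketch; image tags and flags vary between Kubernetes releases, and the etcd flags below are the ones that era of the guide used:
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
The first command runs etcd as a container on the same host; the second runs the service proxy from the same hyperkube image, alongside the kubelet command shown above.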
If you're able to run Docker on Windows, it would probably work, though I haven't tried it on Windows personally.
With the license change for Docker Desktop on Windows, I'm looking for an alternative. Podman + WSL2 seems to do the trick for me, except for Testcontainers in my Quarkus tests.
I'm able to run my tests within WSL2 by starting podman system service in WSL2 (podman system service -t 0 tcp:localhost:8880) and setting the DOCKER_HOST env var (DOCKER_HOST=tcp://localhost:8880).
Now this works, but it isn't really what I need, since at my company we develop in VS Code, IntelliJ, and Eclipse. I'd like to be able to run the tests from within those IDEs. Is there any way to pass the podman URI (from WSL) to my IDE in Windows while running Quarkus tests?
If anyone knows of any other Docker Desktop alternatives that work with Testcontainers, that would be awesome as well. I have tried Rancher Desktop, but it gets stuck and the tests eventually time out.
You have to install the podman-remote package on your Windows host machine, then configure it to use tcp://WSL2_IP:8880 (see the podman documentation), and finally make an alias so that docker resolves to podman.exe.
Now you are able to run docker commands as usual (docker ps, docker run, etc.). But that does not mean all tools will work out of the box; you have to tune them.
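As a rough illustration (assuming podman.exe from the podman-remote install is already on your PATH, and using the same WSL2 address as above), the alias and remote connection can live in your PowerShell profile:
PowerShell
Set-Alias -Name docker -Value podman
[System.Environment]::SetEnvironmentVariable("CONTAINER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
Set-Alias only lasts for the session unless it is in your $PROFILE; CONTAINER_HOST is the variable the remote podman client reads.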
For example, for Testcontainers you have to set environment variables on the host machine:
PowerShell
[System.Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_CHECKS_DISABLE", "True", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_RYUK_DISABLED", "True", [System.EnvironmentVariableTarget]::User)
P.S. All of these variables used to be set for you by Docker Desktop; from now on you have to set them yourself.
We ran the testcontainers-java tests using various solutions for Docker.
I don't know if running in WSL changes a lot compared to the Windows only setup.
In general, Testcontainers doesn't rely only on CLI commands and works best with compatible Docker environments. Based on the findings in that experiment, you can try minikube.
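If you try the minikube route, a minimal sketch (assuming minikube is already installed on the Windows host; the driver flag depends on your machine, and older minikube versions call it --vm-driver):
minikube start --driver=hyperv
minikube docker-env
The second command prints the DOCKER_HOST and TLS variables you would set so that Docker clients (and Testcontainers) talk to the Docker daemon inside the minikube VM.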
Take IntelliJ for example: you can set the DOCKER_HOST env var via "Run/Debug Configurations" and it works perfectly.
I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes enabled. How can I run these Windows containers in Kubernetes?
This documentation specifies how to create a Windows node on Kubernetes, but it is very confusing, as I am on a Windows machine and I see Linux-based commands in there (with no mention of what OS you need to run them on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?
When I try to create a Pod with Windows containers, it fails with the following error message: "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"
Welcome to Stack Overflow, Srinath.
To my knowledge you can't run Windows containers on a local version of Kubernetes at this moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster simply runs inside a Linux VM (with its own Docker runtime for Linux containers only) on the Hyper-V hypervisor.
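You can confirm that yourself; as a quick check (assuming kubectl is pointed at the Docker Desktop context):
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.operatingSystem}'
The node reports linux as its operating system, which is why an image built for windows cannot be scheduled on it.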
The other solution is to use a managed version of Kubernetes with Windows nodes from any of the popular cloud providers. A relatively easy one to start with is Azure (if you don't have an Azure subscription, create a free trial account, valid for 12 months).
I would suggest you use the older way to run Kubernetes on Azure, a service called Azure Container Service (ACS), for one reason: I have verified that it works well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, AKS):
Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.
az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
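Once that finishes, a rough follow-up to point kubectl at the new cluster (exact subcommands can differ between Azure CLI versions):
az acs kubernetes get-credentials --resource-group=azEvalRG --name=k8s-based-win
kubectl get nodes -o wide
The Windows agent node should show up next to the Linux master, and that is where Pods built from Windows images can run.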
I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
I've set up a DigitalOcean droplet with Docker.
I ran docker pull dhimmel/hetionet and it worked
Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant #Path("/") annotation, as described in this SO post's comment, buried in the Docker container, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in step 3 (docker run dhimmel/hetionet) is expected. It looks like the Docker container successfully launched, downloaded the Hetionet database, and launched the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j still launches.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development commands map ports. This makes the Neo4j server running inside your Docker container available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, you would replace http://localhost with the IP address of your droplet.
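As a quick sanity check once the container is up (replace localhost with your droplet's IP if you are on DigitalOcean):
curl http://localhost:7474/
Neo4j answers on its HTTP port with a small JSON document once the server has finished starting.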
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server -- it just lets you explore the image.
Does this clear things up?
I am looking for best practices for front-end development on OS X with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way syncing of files between the host system and the virtual container via a mounted (synced) folder, and I/O performance should be close to native. Therefore I'm not considering shared-folder filesystems like vboxsf and vmhgfs. I also need some build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried unison, and what performance does Docker provide with it?
Finally, I have a special task: I want to run app.js via node.js using a host-to-container ENV variable, if possible. In other words, I have to add an ENV variable for the path to node.js (within the virtual container) to my ~/.bash_profile. Is there any way to pass NODE_PATH through from the host to the container at all?
Thanks.
Not sure if asking for "best practice" is asking for opinions (which is against SO policy); note that this also heavily depends on your tool chain.
I'm not a fan of boot2docker as it works today (although it may improve, and it may be the best approach in the long term, as it is the official approach maintained by the Docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
For me: I'm on Windows, but I face the same (or even more) difficulties as OS X devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, last time I tried boot2docker it ran Tiny Core Linux, which is another Linux distribution I'd have to learn while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev environment is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a node developer, start from the node image, npm install gulp/browserify/whatever you need as a base image for your projects, and docker build -f dev.Dockerfile -t my_dev_container . (a sketch follows after this list)
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
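As a sketch, such a dev.Dockerfile for a node project might look like this (the base image and tools are just examples; pick whatever your project needs):
FROM node
RUN npm install -g gulp browserify
WORKDIR /src/app
CMD ["bash"]
Build it with the docker build command above, then run it with the volume mount from the last step, which drops you into a shell inside the container.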
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll have to have an IDE that can use the remote server, or you have to run your IDE within the dev container (Cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
I installed boot2docker as explained on the docker website. Here are some command runs to show that I have things installed correctly:
$$:~ kv$ boot2docker start
Waiting for VM and Docker daemon to start...
...................ooo
Started.
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/kvantum/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
$$:~ kv$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.04 b39b81afc8ca 11 days ago 188.3 MB
hello-world latest e45a5af57b00 3 weeks ago 910 B
After this, I ran the following command:
docker run -t -i ubuntu:14.04 /bin/bash
Inside the container, I installed zeromq, and started a zeromq server on port 5555 using tcp.
My questions are following:
If I exit out of the container, will it save all the work I do inside it?
I have no idea how to connect to the server running on port 5555. I read something about exposing a port, but I am not sure how to go about doing that. I did an ifconfig inside the container, and tried to connect to the server from the host like this:
$$:~ kv$ ./zmq_client tcp://container_ip:5555
This did not work. Can someone please list the steps I need to take in order to connect to the server running within the container?
For completeness' sake, I am providing the list of my environment variables:
TERM_PROGRAM=Apple_Terminal
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/km/5kbpdx4s7cg4rmyc6d5q9l9r0000gq/T/
DOCKER_HOST=tcp://192.168.109.103:2376
Apple_PubSub_Socket_Render=/tmp/launch-1tWMHJ/Render
TERM_PROGRAM_VERSION=326
OLDPWD=/Users
TERM_SESSION_ID=262CBC8B-0A74-4B70-9F28-D9FA51FF713C
USER=kv
SSH_AUTH_SOCK=/tmp/launch-ZTWNGL/Listeners
__CF_USER_TEXT_ENCODING=0x1F7:0:0
DOCKER_TLS_VERIFY=1
__CHECKFIX1436934=1
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
PWD=/Users/kv
DOCKER_CERT_PATH=/Users/kv/.boot2docker/certs/boot2docker-vm
HOME=/Users/kv
SHLVL=1
LOGNAME=kv
LC_CTYPE=UTF-8
DISPLAY=/tmp/launch-rco9zt/org.macosforge.xquartz:0
_=/usr/bin/env
One last question I have is about performance. Within Mac OS X, I have a Docker container running (which runs Ubuntu). If I run an application, like a zeromq-based server, inside the container, won't it be slower compared to running it on Mac OS X directly? Please explain the benefits of using Docker in such a scenario.
You should really do some more reading and research before turning to SO, then ask about anything you can't figure out. But:
No. If the container is "exited" you can restart it and your files will still be there, but once it is removed your files are gone. You can use docker commit to save them to an image, but the best bet is to use a Dockerfile.
docker run -p 5000:8000 image will expose port 8000 in the container as port 5000 on the host.
Yes, it will be slower due to the boot2docker VM. It would not be slower if you were running on a Linux host. The advantage is that zeromq is now running in an isolated container with all its dependencies.
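Tying those together for your zeromq case, a minimal sketch (assuming the server binds to 5555 inside the container; note the published port lives on the boot2docker VM, not on the Mac's localhost):
docker run -t -i -p 5555:5555 ubuntu:14.04 /bin/bash
./zmq_client tcp://$(boot2docker ip):5555
The first command publishes container port 5555 on the VM; after starting your zeromq server inside the container, the second command (run from OS X) connects to it via the VM's IP, which boot2docker ip prints. To keep the zeromq installation, commit the container to an image with docker commit <container-id> my-zmq-image (names here are just placeholders) or, better, write a Dockerfile.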