Does containerd have a --volumes-from parameter like docker?

In the past we used docker's --volumes-from to share the same storage volume from an existing container, such as making use of one emptyDir. Now how does containerd solve this scenario?
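For reference, the docker feature being asked about works like this minimal sketch (container names are illustrative):
# Create a container owning an anonymous volume, then mount all its volumes into a second container
docker run -d --name data-holder -v /shared busybox sleep 3600
docker run --rm --volumes-from data-holder busybox ls /shared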

Related

kubernetes forcing docker option for container

Is it possible to make sure that my containers are running with a specific docker option?
I need to run my container with the --device option. I cannot use a device plugin because I am running a Windows container, and device manager does not seem to be implemented for Windows.
Thank you for your help
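For context, the flag in question passes a host device through to a container; a hedged sketch (device paths and image names are illustrative, and the Windows form takes a device interface class GUID rather than a path):
# Linux container: pass a host device through by path
docker run --device=/dev/snd my-image
# Windows container: devices are selected by interface class GUID
docker run --device="class/86e0d1e0-8089-11d0-9ce4-08003e301f73" my-windows-image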

Running Elasticsearch in Azure Container Instances

I'm trying to run ElasticSearch in Azure Container Instances. I've created the container like this using the Azure CLI:
az container create --image elasticsearch:7.4.2 --name $containerGroupName -g $resourceGroup --ip-address public --dns-name-label <mydns> --memory 8 --cpu 2 --ports 9200
The container ends up in a waiting state. When I check the logs in the Azure portal, I see the following error:
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Apparently, this has something to do with virtual memory: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
But how do I change it inside the container? I cannot connect to it since the container is not running.
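For context, on a plain Docker host this setting is changed on the host itself, not inside the container, which is exactly what ACI doesn't expose (command from the linked Elastic page):
# Run on the Docker host, not in the container
sysctl -w vm.max_map_count=262144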
Very late to the party, but hopefully this will help someone in the future. Elasticsearch, if deployed stand-alone, requires the environment variable discovery.type to be set. Since Azure does not currently allow environment variable names containing a . (dot), you'll have to create a custom Docker image and host it, e.g. with Azure Container Registry or Docker Hub, ...
Content can be as little as this:
FROM elasticsearch:7.8.1
ENV discovery.type=single-node
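Hosting such an image would follow the usual build-and-push flow; a sketch assuming an Azure Container Registry named myregistry (a placeholder):
# Build the custom image and push it to a registry ACI can pull from
docker build -t myregistry.azurecr.io/elasticsearch-single-node:7.8.1 .
az acr login --name myregistry
docker push myregistry.azurecr.io/elasticsearch-single-node:7.8.1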
I realize that ACI is not the best option to run ElasticSearch. I only need it temporarily for a proof-of-concept while I'm waiting for a stable environment of ElasticSearch elsewhere.
Eventually I got it running by picking an older image of ElasticSearch:
az container create --image elasticsearch:5.6.14-alpine --name $containerGroupName -g $resourceGroup --ip-address public --dns-name-label <mydns> --memory 4 --cpu 2 --ports 9200
The solution to this is simply adding this line to the bound elasticsearch.yml config file:
discovery.type: single-node
For this to persist even when you restart the container, the config directory of the elasticsearch container needs to be bound to an Azure Fileshare directory, where you make your permanent changes.
Create an Azure file share and create an elasticsearch folder with another config folder inside it: https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-file-share
Deploy an elasticsearch container with the fileshare volume mounted, and bind /mnt/elasticsearch/config to a folder you've created on the new fileshare (template tags: "mountPath": "/mnt/elasticsearch/config", "shareName": "elasticsearch/config"): https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
While the container hasn't yet errored out and shut itself down because of the max virtual memory error, copy the entire /usr/share/elasticsearch/config folder and its files to the fileshare folder. As soon as you get terminal access, run cp -R /usr/share/elasticsearch/config /mnt/elasticsearch/config
On your file share folder, you should now have a config directory with files and other folders created by the elasticsearch startup process. One of these files is elasticsearch.yml. Open it, add the line discovery.type: single-node and save it.
Finally, change the mount and binding location so elasticsearch starts up and reads the now-modified configuration. Bind /usr/share/elasticsearch/config to the fileshare /elasticsearch/config folder (template tags: "mountPath": "/usr/share/elasticsearch/config", "shareName": "elasticsearch/config"). More info here: https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#docker-configuration-methods
Start the container with the new mounting locations and now the max virtual memory error should be gone.
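For reference, the same mount can also be expressed directly with az CLI file-share flags instead of a template; a hedged sketch (storage account name, key variable, and share layout are placeholders that must match the share created above):
az container create --image elasticsearch:7.8.1 --name $containerGroupName -g $resourceGroup --ip-address public --memory 8 --cpu 2 --ports 9200 --azure-file-volume-account-name mystorageaccount --azure-file-volume-account-key $storageAccountKey --azure-file-volume-share-name elasticsearch --azure-file-volume-mount-path /usr/share/elasticsearch/config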
Please, only use this trick for proofs-of-concept or very low volume logs. Using network storage like Azure Fileshare in general is not suitable for Elasticsearch and can even crash it.

How to build a cassandra cluster with docker on a windows machine?

I want to build a cassandra cluster with docker. The documentation already tells you how to do this, so that is not the problem I have.
However, I am currently using Docker on Windows 10, and it obviously cannot execute the nested command substitution in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag, which results in an empty seed list for the container.
How can I nest a command like this on Windows or, if this is not possible, find a workaround?
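For illustration, the same substitution can be written natively on Windows; a hedged PowerShell sketch (names taken from the question):
# Capture the seed container's IP in a variable first, then pass it along
$seedIp = docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra
docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$seedIp" cassandra:tag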
I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work in v1 as well as v2 of docker-compose. By doing it this way you just let docker do the linking from the get-go and tell the Cassandra nodes about other seeds with the environment variable the container already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. This way other containers don't rely on all of the cassandra containers being up to start/work, which would defeat the purpose of a distributed database.
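For a rough idea, a compose file of that shape might look like the sketch below (v2 syntax; image tag and service names are illustrative, and this is not Jason Giedymin's exact file):
version: '2'
services:
  cassandra-seed:
    image: cassandra:3.11
  cassandra-node:
    image: cassandra:3.11
    depends_on:
      - cassandra-seed
    environment:
      CASSANDRA_SEEDS: cassandra-seed
    # wait so the seed node is fully up before this node tries to join
    command: bash -c 'sleep 30 && exec /docker-entrypoint.sh cassandra -f'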
I still don't know how to nest Windows cmd commands into each other, so I would still be thankful for some tips.

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
Their "getting started" documentation in github says you have to run Linux to use kubernetes.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to:
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a vagrant instance. Does that mean docker, etcd, & kubernetes run together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and a boot2docker VM to run anything docker-related.
There is no "Docker for Windows" (yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized cluster.
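For orientation, the local-Docker guide of that era brought up the other pieces with commands along these lines (a hedged sketch; image versions and flags mirror that old guide, not current Kubernetes):
# etcd, the key-value store the master needs
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# the service proxy, in addition to the kubelet command shown above
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2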
If you're able to run docker on Windows, it would probably work. I haven't tried it on Windows personally.

What are the best practices for using Docker for front-end development on OS X, and how to pass ENV through from host to container

I am looking for best practices for front-end development on OS X with docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way syncing of files between the host system and the virtual container via a mounted (synced) folder, and IO performance should be close to native. Therefore I don't consider shared-folder filesystems like vboxsf and vmhgfs. It is also necessary to have some build tools (gulp etc.) with a working watcher within the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried unison, and what performance does docker provide with it?
Finally, I have a specific task: I want to run app.js via nodejs, passing ENV from host to container if possible. In other words, I have to add an ENV variable for the PATH to nodejs (within the virtual container) to my ~/.bash_profile. Is there any chance to pass NODE_PATH through from host to container at all?
Thanks.
Not sure if "best practice" is asking for opinions (which is against SO policy), note that this also heavily depends on your tools chain.
I'm not a fan of boot2docker as it works to date (although it may improve and it may be the best approach in the long term as it is the official approach maintained by the docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCoreLinux, which is another Linux distribution I'd have to learn while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev environment is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a node developer, start from the node image, npm install gulp/browserify/.. whatever you need as a base image for your projects, & docker build -f dev.Dockerfile -t my_dev_container .
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
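Tying the last two steps together, and since the question asked about ENV passthrough, a hedged sketch (-e is docker's standard way to hand an environment variable into the container; the NODE_PATH value is illustrative):
# Build the dev image, then run it with the working directory mounted and an env var passed through
docker build -f dev.Dockerfile -t my_dev_container .
docker run -it -v /mnt/from/host/:/src/app/ -e NODE_PATH=/usr/local/lib/node_modules my_dev_container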
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll have to have an IDE that can use the remote server. Or you have to run your IDE within the dev container (cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
