Avoid containers shutting down when the machine is rebooted - macOS

I'm on macOS, and I would like to know whether it is possible to persist a container between OS reboots. I'm currently using my machine to host my code, installing platforms and languages like Node.js and Go directly on it. I would like to create my environment inside a container, and also keep my code inside it, without losing the container if my machine reboots. Is this possible? I couldn't find anything related.

Your container is not deleted when your system reboots (it is only stopped), unless you started it with --rm, which removes the container on stop.
Your container will restart automatically after a reboot if you start it with docker run -dit --restart always my_image.
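For example, you can also add or check the policy on an existing container (a small sketch; "devbox" and my_image are placeholder names):
docker run -dit --name devbox --restart always my_image    # start with a restart policy
docker update --restart always devbox                      # or add the policy to an existing container
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' devbox    # should print "always"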
As per " also leave my codes inside it" this question is concern there is two solution to avoid loss of data or code and any other configuration.
You lose data because
It is possible to store data within the writable layer of a container,
but there are some downsides:
The data doesn’t persist when that container is no longer running, and
it can be difficult to get the data out of the container if another
process needs it.
https://docs.docker.com/storage/
So here is the solution.
Docker offers three different ways to mount data into a container from
the Docker host: volumes, bind mounts, or tmpfs mounts. When in
doubt, volumes are almost always the right choice. Keep reading for
more information about each mechanism for mounting data into
containers.
https://docs.docker.com/storage/#good-use-cases-for-tmpfs-mounts
Here is how you can persist the Node.js code and the Go code with bind mounts:
docker run -v /nodejs-data-host:/nodejs-container -v /go-data-host:/godata-container -dit your_image
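Per the docs quoted above, named volumes are usually preferable to bind mounts; a minimal sketch of the same idea (volume names are illustrative):
docker volume create nodejs-data                     # Docker-managed storage that survives container removal
docker run -dit -v nodejs-data:/nodejs-container your_image
docker volume inspect nodejs-data                    # shows where Docker keeps the data on the host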
As for the packages and runtimes (Node.js and Go), they persist even if your container is killed or stopped, because they are stored in the Docker image.

Related

Docker - passing new content to production

I'm new to Docker and still searching for a safe way to update production code without losing any valuable data.
So far the way we update our production machine is like this:
docker build the new code
docker push the image
docker pull the image (on the preferred machine)
docker stack rm && docker stack deploy
I've read countless guides about backups, but still can't understand what exactly you lose, if anything, when you don't back up and something goes wrong. So I have some questions:
When you docker stack rm, does that delete the container? And if yes, do I lose something by doing that (e.g. volumes)?
Should I back up the container and its volumes (which I still don't understand how to do), or just the image? Or is it enough to create a new tag when I docker build my new code?
Thank you
When you docker rm a container, you delete the container filesystem, but you don't affect any volumes that might have been attached to that container. If you docker run a new container that mounts the same volumes, it will see their content.
You'd never back up an entire container. You do need to back up the contents of volumes.
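A common pattern for backing up a volume (a sketch; "myvol" and the archive name are placeholders) is to mount it into a throwaway container and tar its contents to the host:
docker run --rm -v myvol:/data -v "$PWD":/backup alpine tar czf /backup/myvol-backup.tgz -C /data .
# restoring works the same way in reverse:
docker run --rm -v myvol:/data -v "$PWD":/backup alpine tar xzf /backup/myvol-backup.tgz -C /data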
A good practice is to design your application to not store anything in local files at all: store absolutely everything in a database or other "remote" storage. The actual storage doesn't have to be in Docker. Then you can back up the database the same way you would any other database, and freely delete and create as many copies of the container as you need (possibly by adjusting replica counts in Swarm or Kubernetes).
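As a sketch of that layout (service names, images, and the registry are illustrative), the stateless app is freely replaceable while the database keeps its state in a named volume, which docker stack rm does not delete:
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: registry.example.com/myapp:v2    # stateless; safe to replace on every deploy
    deploy:
      replicas: 3
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data     # named volume survives stack rm / stack deploy
volumes:
  dbdata:
EOF
docker stack deploy -c docker-compose.yml mystack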

docker ubuntu container filesystem

I pulled a standard docker ubuntu image and ran it like this:
docker run -i -t ubuntu bash -l
When I do an ls inside the container I see a proper filesystem and I can create files, etc. How is this different from a VM? Also, how big a file can I create on this container filesystem? And is there a way to create a file inside the container filesystem that persists in the host filesystem after the container is stopped or killed?
How is this different from a VM?
A VM will lock and allocate resources (disk, CPU, memory) for its full stack, even if it does nothing.
A container isolates resources from the host (disk, CPU, memory), but won't actually consume them unless it does something. You can launch many containers; if they are doing nothing, they won't use memory, CPU, or disk.
Regarding the disk: containers launched from the same image share the same filesystem layers and, through a copy-on-write (COW) mechanism and UnionFS, add a writable layer when you write inside the container.
That layer is lost when the container exits and is removed.
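You can watch that layer with docker diff (a small sketch; "demo" is a placeholder name):
docker run --name demo ubuntu bash -c 'echo hello > /tmp/f'
docker diff demo    # lists A /tmp/f: an addition in the container's writable layer
docker rm demo      # deleting the container discards the layer, and /tmp/f with it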
To persist data written in a container, see "Manage data in a container".
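For the last part of the question, a bind mount is the simplest way to make a file outlive the container (the host path is illustrative):
docker run -it -v "$HOME/hostdata":/data ubuntu bash -l
# inside the container: echo hello > /data/file.txt
# after the container stops or is removed, the file is still at $HOME/hostdata/file.txt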
For more, read the insightful article from Jessie Frazelle "Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs"

Can I use LXD image to create Docker container and/or vice versa?

A container system making use of LXC containers.
The above statement is true for both LXD and Docker. In that case, can we use an LXD image to create a Docker container and/or vice versa?
They are fundamentally different.
With an LXD container image you get a full OS experience: all the applications and processes that are part of the distro run inside it, and only the kernel is shared with the host. With a Docker image you get a single-process application.
So you can have Docker running inside an LXD container, but not the other way around.
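As a sketch of that direction (container name and image version are illustrative), LXD can host Docker if nesting is enabled on the container:
lxc launch ubuntu:16.04 docker-host -c security.nesting=true    # allow nested containers
lxc exec docker-host -- apt-get update
lxc exec docker-host -- apt-get install -y docker.io
lxc exec docker-host -- docker run hello-world                  # Docker running inside an LXD container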
The confusion might arise from the fact that Docker used the liblxc library in the past, which the LXD daemon uses to control containers. If I understand correctly, Docker now uses another library called libcontainer to provide isolation.

High availability issue with rethinkdb cluster in kubernetes

I'm setting up a RethinkDB cluster inside Kubernetes, but it doesn't work as expected for the high-availability requirement: when a pod goes down, Kubernetes creates another pod running another container from the same image, the old mounted data (which is already persisted on the host disk) is lost to it, and the new pod joins the cluster as a brand-new instance. I'm running k8s on CoreOS v773.1.0 stable.
Please correct me if I'm wrong, but this way it seems impossible to set up a database cluster inside k8s.
Update: As documented here http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, with RestartPolicy: Always the container is restarted if it exits with a failure. Does "restart" mean bringing up the same container, or creating another one? Or maybe, because I stopped the pod via kubectl stop po, it doesn't restart the same container?
That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the containers on it are rescheduled to run on another machine, and that machine has no state from the old container. Even when it is the same machine, the container is created as a new one instead of restarting the exited container (with the data inside it).
To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this: https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md This GitHub issue also has a lot of information: https://github.com/kubernetes/kubernetes/issues/6893
And indeed, that's the way to achieve HA in my opinion: containers are all stateless and don't hold anything inside them. Any configuration they need should be stored outside, using something like Consul or etcd. Separating things this way makes it easier to restart a container.
Try using PetSets http://kubernetes.io/docs/user-guide/petset/
That allows you to name your (pet) pods. If a pod is killed, then it will come back with the same name.
A summary of the PetSet feature is as follows.
Stable hostname
Stable domain name
Multiple pets of a similar type will be named with an "-n" suffix (rethink-0, rethink-1, ... rethink-n, for example)
Persistent volumes
Now apps can cluster/peer together
When a pet pod dies, a new one will be started and will assume all the same "state" (including disk) of the previous one.
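A minimal manifest sketch under these assumptions (names, image tag, ports, and storage size are all illustrative; PetSet was the alpha API that later became StatefulSet):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rethink
  labels:
    app: rethinkdb
spec:
  clusterIP: None              # headless service that gives each pet its stable DNS name
  selector:
    app: rethinkdb
  ports:
  - port: 28015
    name: driver
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: rethink
spec:
  serviceName: rethink
  replicas: 3
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      labels:
        app: rethinkdb
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb:2.3
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:        # each pet gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
Each pet is named rethink-0, rethink-1, rethink-2, and its claim (data-rethink-0, ...) is reattached to the replacement pod if the original dies.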

Docker container running Mesos cluster and running other docker containers on cluster (using Marathon)

I'm just starting off with Mesos, Docker and Marathon but I can't find anywhere where this specific question is answered.
I want to set up a Mesos cluster running on Docker - there are a couple of internet resources to do this, but then I want to run Docker containers on top of Mesos itself. This would then mean Docker containers running inside other Docker containers.
Is there a problem with this? It doesn't intuitively seem right somehow but would seem like it would be really handy to do so. Ideally I want to run Mesos cluster (with Marathon, Chronos etc.) and then run Hadoop within Docker containers on top of that. Is this possible or a standard way of doing things? Any other suggestions as to what good practice is would be appreciated.
Thanks
You should be able to run it, taking care of some issues when running the Mesos (with Docker) containers, like running them in privileged mode. Take a look at jpetazzo/dind to see how you can install and run Docker in Docker. Then you can set up Mesos in that container, ending up with one container that has both Mesos and Docker installed.
There are some references over the Internet similar to what you want to do. Check this article and this project that I think you will find very interesting.
There are definitely people running Mesos in docker containers, but you'll need to use privileged mode and set up some volumes if you want mesos to access the outer docker binary (see this thread).
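A sketch of that setup (image and container names are illustrative): share the host's Docker socket and binary with the agent container so tasks launch as sibling containers on the host rather than nested ones:
# Deliberately not named "mesos-*"; see the MESOS-2016 caveat below.
docker run -d --privileged --name agent1 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  your_mesos_slave_image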
Current biggest caveat: don't name your mesos-slave containers "mesos-*" or MESOS-2016 will bite you. See epic MESOS-2115 for other remaining issues related to running mesos-slave in Docker containers.
