Nomad job for existing containers - nomad

I'm fairly new to Nomad. From a Nomad job, I specified a Docker image. From what I understand, Nomad will download the image, create its own container, and maintain that container. Is there a way for Nomad to maintain a container that's already running? (i.e. a container I had before I set up Nomad)
Thanks!

I highly doubt it. The nomad docker driver handles starting the container and maintaining the container lifecycle. I doubt there is a mechanism to reference a container that already exists, although you could likely write a driver (or extend the current docker driver) to add that functionality: https://www.nomadproject.io/docs/drivers/docker.html

Moving a running Docker container into a Nomad cluster would go somewhat against the idea of Docker containers, because a container should always be ephemeral. You can read more about this in the Docker guidelines.
They say:
Create ephemeral containers
The image defined by your Dockerfile should generate containers that are as ephemeral as possible. By “ephemeral”, we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
Also, technically it's only possible with a hacky workaround, because Nomad has no import functionality. If you are willing to stop the container, you can try to create a Nomad task with the same image and then copy the Docker container's files into the root of your Nomad task folder, but this will not work as easily as recreating the container with a volume mount to your exported working directory.
You can prepare your own Docker image and push it into a repository of a container registry like Docker Hub (Tutorial) or a third-party registry like the GitLab container registry (Tutorial), and provide it to Nomad.
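If the goal is to keep the current state of the existing container, one option (just a sketch, with placeholder names) is to snapshot its filesystem into an image with docker commit, push that image to your registry, and reference it from the Nomad job. Note this captures the container's writable layer but not the contents of any volumes.
# Snapshot the running container's filesystem into a new image (names are placeholders).
docker commit my-old-container registry.example.com/myteam/myapp:imported
# Log in and push so Nomad clients can pull the image.
docker login registry.example.com
docker push registry.example.com/myteam/myapp:imported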
If you use a private registry with credentials, you can provide the registry credentials directly in the config stanza of any docker task:
task "server" {
driver = "docker"
config {
image = "<private-registry-url>/<image-name>:<tag>"
auth {
username = "<private-registry-user>"
password = "<private-registry-password>"
}
}
}
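Assuming the task above lives in a job file (the file name here is just a placeholder), it can be checked and submitted with the standard Nomad CLI:
# Dry-run the job to see what Nomad would schedule, then submit it.
nomad job plan myapp.nomad
nomad job run myapp.nomad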

Related

How can I snapshot and restore firecracker vm running containers using firecracker-containerd?

I am running containers in a Firecracker micro-VM using firecracker-containerd. I was able to create a Firecracker VM and run a container inside it. However, I haven't found any APIs in firecracker-containerd to snapshot and restore a VM running a container that I can use directly in my Go code. Are there any APIs for that? I have seen APIs for snapshotting and restoring VMs in the firecracker-microvm/firecracker repo, where requests are sent to the API server for pausing/snapshotting/resuming/restoring VMs.
https://github.com/firecracker-microvm/firecracker/blob/main/docs/snapshotting/snapshot-support.md
Is that the way to go for snapshotting a Firecracker VM running containers?

Is it possible to identify the image from which a docker container was built?

The context to my question is as follows:
I have a few docker images that I have built, and from those, I have got a few containers running, on various networks.
When I use docker network inspect <network_name>, it returns JSON data containing a "Containers" structure.
When I used docker run ... to create the containers, I forgot to use the --name option, so the container ID is just a long random string. As such, I can't work out what that container is.
Given this context, is it possible to identify the image from which a docker container was built?
Yes, you can use docker inspect, but do it on the container and not on the network:
docker inspect --format='{{.Config.Image}}' $INSTANCE_ID
where $INSTANCE_ID is the container ID
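If you want to map every container attached to the network to its image in one pass, something like this should work (the network name is a placeholder):
# Print each container ID on the network together with the image it was created from.
for id in $(docker network inspect my_network --format '{{range $id, $c := .Containers}}{{$id}} {{end}}'); do
  echo "$id -> $(docker inspect --format '{{.Config.Image}}' "$id")"
done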

Docker - passing new content to production

I'm new to Docker and still searching for a safe way to update production code without losing any valuable data.
So far the way we update our production machine is like this:
docker build the new code
docker push the image
docker pull the image (on the preferred machine)
docker stack rm && docker stack deploy
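In shell terms, that flow is roughly the following (registry, image tag, and stack name are placeholders):
docker build -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2
# On the production machine:
docker pull registry.example.com/myapp:v2
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack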
I've read countless guides about backups, but I still can't understand whether you lose something, and what that is, if you don't back up and something goes wrong. So I have some questions:
When you docker stack rm the stack, do you delete the containers? And if so, do I lose something by doing that (e.g. volumes)?
Should I back up the container and its volumes (which I still don't understand how to do), or just the image? Or am I safe if I just create a new tag when I docker build my new code?
Thank you
When you docker rm a container, you delete the container filesystem, but you don't affect any volumes that might have been attached to that container. If you docker run a new container that mounts the same volumes, it will see their content.
You'd never back up an entire container. You do need to back up the contents of volumes.
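A common way to back up a volume's contents (the volume name and paths below are placeholders) is to mount it read-only into a throwaway container and tar it out to the host:
# Archive the contents of the named volume "mydata" into the current directory.
docker run --rm -v mydata:/data:ro -v "$(pwd)":/backup busybox tar cf /backup/mydata-backup.tar -C /data .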
A good practice is to design your application to not store anything in local files at all: store absolutely everything in a database or other "remote" storage. The actual storage doesn't have to be in Docker. Then you can back up the database the same way you would any other database, and freely delete and create as many copies of the container as you need (possibly by adjusting replica counts in Swarm or Kubernetes).

Avoid containers shutting down if the machine is rebooted

I'm on OS X, and I would like to know if it is possible to persist a container between OS reboots. I'm currently using my machine to host my code and to install platforms or languages like Node.js and Golang. I would like to create my environment inside a container, and also leave my code inside it, but without losing the container if my machine reboots. Is it possible? I didn't find anything related.
Your container is never removed when your system reboots, unless you started it with --rm, which removes it on stop.
Your container will restart automatically if you start it with docker run -dit --restart always my_container.
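If the container already exists, you don't need to recreate it to get this behaviour; the restart policy can be changed in place (the container name is a placeholder):
# Change the restart policy of an existing container so it comes back after a daemon/host restart.
docker update --restart unless-stopped my_container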
As per " also leave my codes inside it" this question is concern there is two solution to avoid loss of data or code and any other configuration.
You lose data because
It is possible to store data within the writable layer of a container, but there are some downsides: the data doesn't persist when that container is no longer running, and it can be difficult to get the data out of the container if another process needs it.
https://docs.docker.com/storage/
So here is the solution.
Docker offers three different ways to mount data into a container from the Docker host: volumes, bind mounts, or tmpfs mounts. When in doubt, volumes are almost always the right choice. Keep reading for more information about each mechanism for mounting data into containers.
https://docs.docker.com/storage/#good-use-cases-for-tmpfs-mounts
Here is how you can persist the Node.js code and Go code:
docker run -v /nodejs-data-host:/nodejs-container -v /go-data-host:/godata-container -dit your_image
As far as packages and runtimes (Node.js and Go) are concerned, they persist even if your container is killed or stopped, because they are stored in the Docker image.
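A named volume is an alternative to the host-path bind mounts above if you'd rather let Docker manage where the data lives (the volume and image names are placeholders):
# Create a named volume and mount it where the code lives inside the container.
docker volume create code-data
docker run -dit -v code-data:/workspace your_image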

Change instance type of a cluster-registered EC2 instance

I have an Amazon EC2 instance which is registered to a cluster of Amazon ECS.
And I want to change this instance's type from c4.large to c4.8xlarge.
I'm able to change its type from c4.large to c4.8xlarge in the AWS console. But after the change, I found
[ERROR] Could not register module="api client" err="ClientException: Container instance type changes are not supported. Container instance XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX was previously registered as c4.large.
being printed in /var/log/ecs/ecs-agent.log.20XX-XX-XX-XX file.
Is it possible to change the EC2 instance type and re-register it to a cluster?
I think deregistering it first, then registering it again, should work. But I'm afraid this may cause something irreversible in my AWS working environment, so I haven't tried this method yet.
To solve this connection problem between the agent and cluster, just delete the file /var/lib/ecs/data/ecs_agent_data.json and restart docker and ECS.
After that, a new container instance will be created in your cluster with the new size.
sudo rm /var/lib/ecs/data/ecs_agent_data.json
sudo service docker restart
sudo start ecs
Then you can go to the ECS cluster console and deregister the old container instance
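Deregistering the old container instance can also be done from the CLI rather than the console; a sketch, with placeholder cluster name and instance ARN:
# Find the old container instance, then deregister it (--force even if tasks are still placed on it).
aws ecs list-container-instances --cluster my-cluster
aws ecs deregister-container-instance --cluster my-cluster --container-instance <old-instance-arn> --force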
UPDATE:
According to @florins and @MBear in the comments below, AWS updated the data file on ECS instances, so the commands become:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
sudo start ecs
As of March 2021 / AMI image ami-0db98e57137013b2d, /var/lib/ecs/data/ecs_agent_data.json mentioned in the last useful answer does not exist. For me, the commands to execute on the changed instance were:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
After that, it was possible to deploy containers to the instance, without fresh registration (AWS automatically registered a second ECS container instance of the new type). I did have a leftover container instance with the resources of the old instance type to remove.
You can't do this. Per their docs:
The type of EC2 instance that you choose for your container instances determines the resources available in your cluster. Amazon EC2 provides different instance types, each with different CPU, memory, storage, and networking capacity that you can use to run your tasks. For more information, see Amazon EC2 Instances.
This means that when you launch a container instance, the agent gathers a bunch of metadata about the instance (CPU units, memory, and so on) and registers it with the cluster. If you change the instance type, much of that metadata no longer matches what was registered. The agent is aware of this and reports it as an error.
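You can see what the agent originally registered, assuming the AWS CLI is available (cluster name and instance ARN are placeholders):
# Shows registeredResources (CPU, MEMORY, PORTS) recorded at registration time.
aws ecs describe-container-instances --cluster my-cluster --container-instances <instance-arn>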
You should spin up a new instance of the new type, register it to the cluster, and let the task run on it. If it's a service, just terminate the old instance and let the service run against the new one.
I can't think of any real reason why terminating your old instance would cause something irreversible, unless it is misconfigured or fragile due to user-specific settings; by default this would not cause anything destructive.
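When launching the replacement instance, the usual way to make it join the cluster is the ECS agent configuration in the instance user data (the cluster name is a placeholder):
#!/bin/bash
# Point the ECS agent on the new instance at your cluster before it starts.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config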
As an alternative approach, if the EC2 instance does not store anything valuable, a new instance can be started using the old instance as a template. This takes over all existing settings and can be done with a few clicks in minutes.
Select the EC2 instance and then "Actions -> Images and templates -> Launch more like this". Just change the instance type.
When the instance is running, go to the ECS cluster, open the "ECS instances" tab, and activate the newly created instance.
Shut down the old instance.
Update your task definition, perhaps allocating more CPU and memory, and update the service to use the new task revision.
