Container restarting again and again and unable to stop / remove / kill - macOS

I have a problem. When I check the list of running containers with:
docker ps
it shows me a running container with an id and a name. I killed it with docker kill jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
After a few seconds it started again automatically with a new container id.
I tried another command to remove it: docker container rm jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
It was removed and then started again with another container id after a few seconds.
I really can't figure out what's going on...
I also tried stopping the container first and then removing it. When I checked with docker ps afterwards it showed no containers in the list, and a few seconds later a container was running again with some other id... which surprised me.

The container is managed by swarm mode. Swarm mode compares the current state with the target state of the service and creates a new container to correct any difference. Try:
docker service ls
docker service rm jenkins-master
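If you want to keep the service definition but stop its tasks, scaling the service down also works. A minimal sketch, assuming the service is named jenkins-master as in the question:
# list the services the swarm is managing
docker service ls
# scale the service down to zero replicas instead of removing it
docker service scale jenkins-master=0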

Related

Is it possible to get bash access in a NOT running container?

I am able to run a Flask API container successfully, but during execution the app fails and stops the container for some reason.
I checked the container logs and noticed a missing-file error. Now I want to debug which file is missing by getting a /bin/bash shell in the stopped container, but this throws an error saying the container is not running:
docker exec -it CONTAINER /bin/bash
Is there any workaround to access bash in a STOPPED container?
No, you cannot.
However, it might be useful to either check the logs or specify bash as an entry point when doing a docker run
Checking logs: https://docs.docker.com/config/containers/logging/
docker logs <CONTAINER_NAME>
Shell Entry point: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/bash <IMAGE_NAME>
If your container does not have /bin/bash, try
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/sh <IMAGE_NAME>
You can try to use the docker commit command.
From the docs:
It can be useful to commit a container’s file changes or settings into
a new image. This allows you to debug a container by running an
interactive shell, or to export a working dataset to another server.
Resource with an example:
We can transform a container into a Docker image using the commit
command. All we need to know is the name or the identifier of the
stopped container. (You can get a list of all stopped containers with
docker ps -a).
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
0dfd54557799 ubuntu "/bin/bash" 25 seconds ago Exited (1) 4 seconds ago peaceful_feynman
Having the identifier 0dfd54557799 of the stopped container, we can
create a new Docker image. The resulting image will have the same
state as the previously stopped container. At this point, we use
docker run and overwrite the original entrypoint to get a way into the
container.
# Commit the stopped container into a new image
docker commit 0dfd54557799 debug/ubuntu
# now we have a new image
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
debug/ubuntu latest cc9db32dcc2d 2 seconds ago 64.3MB
# create a new container from the "broken" image
docker run -it --rm --entrypoint sh debug/ubuntu
# inside of the container we can inspect - for example, the file system
$ ls /app
App.dll
App.pdb
App.deps.json
# CTRL+D to exit the container
# the container was started with --rm, so only the debug image needs to be removed
docker image rm debug/ubuntu
You can't, because a stopped container is as dead as a powered-off virtual machine. You can check its logs using the docker logs command.
docker container ls -aq
docker logs <name_of_your_dead_container>
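If the container produced a lot of output, you can limit how much of the log is printed; a small sketch, assuming you only care about the last lines before the crash:
# show only the last 50 lines of the container's log
docker logs --tail 50 <name_of_your_dead_container>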
From the man page for docker-run:
--entrypoint=""
Overwrite the default ENTRYPOINT of the image
So use something like:
docker run --entrypoint=/usr/bin/sleep ...... 1000
Note that --entrypoint only takes the binary itself; the argument to sleep has to go after the image name. This will start the container and wait 1000 seconds, allowing you to connect and debug.
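Once the container is idling in the sleep, you can open a shell inside it; a minimal sketch, where the image and container names are placeholders:
# start the container detached, with a long sleep as the entrypoint
docker run -d --name debug-sleep --entrypoint /usr/bin/sleep <IMAGE_NAME> 1000
# open an interactive shell in the now-running container
docker exec -it debug-sleep /bin/sh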

How to troubleshoot a Dockerfile when the application crashes/fails in the container?

If the application fails in the Docker container, you are not able to troubleshoot what happened. Please propose a solution to that.
docker ps -a
This will list all the containers, including those that have already exited (for whatever reason).
Then you can copy the id of the container you are interested in and run:
docker logs <id of container that has failed>
Another interesting command is:
docker inspect <id of container that has failed>
It returns a big JSON document - you can check some sections there, like memory settings and "State" (whether the process was OOM-killed, and so forth).
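If you only need the exit state, a format filter keeps the output short. A small sketch, with the container id as a placeholder:
# print just the exit code and whether the container was OOM-killed
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' <id of container that has failed>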

How do I stop my Docker Windows container from the command line?

I have a Docker Windows container that I want to stop from the command line. Seems like an easy thing to do, but the commands
docker stop my-docker-machine
and
docker kill my-docker-machine
produce the error
Error response from daemon: No such container: my-docker-machine
I've searched the following without success:
Windows Containers on Windows Server
Docker Documentation
How do I stop a docker container so it will rerun with the same command?
How do I stop my Docker Windows container from the command line?
I decided to try using the stop command with the container id instead of the name:
docker stop ab44b99065ce
Works much better!
Edit: You might want a list of container ids first. As pointed out by Varun Babu Pozhath, you can use
docker ps to list all running containers, or
docker ps --all to list all containers.
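To stop a container by name instead, you can first list the ids alongside the names; a small sketch, where the format string is just one way to show them:
# list running containers with only their ids and names
docker ps --format "{{.ID}}  {{.Names}}"
# stop a container by id or by name
docker stop <CONTAINER_ID_OR_NAME>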

In Docker for Windows, How do I prevent containers from automatically starting on the daemon start?

Every time my Docker for Windows daemon boots up, it spins up 7 different containers. I can go through and docker kill <id> each container. I can spin up additional containers, do other stuff, etc., and all is fine... until I reboot. Once I reboot the Docker daemon, they all appear again, the exact same 7 containers.
Where can I go to flush them from being candidates to reboot automatically?
Maybe those containers have a restart policy which makes Docker run them every time it sees them absent?
You can confirm it with a docker inspect.
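A quick way to check is to read the restart policy straight out of the inspect output; a minimal sketch, with the container name as a placeholder:
# print the restart policy of a container (e.g. "no", "always", "unless-stopped")
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <CONTAINER_NAME>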
If you see them running, try, before killing them, to docker update them:
docker update --restart=no container1 container2 ...
Then reboot and see if those containers are still running.

Running a Docker image with cron

I am using an image from Docker Hub and it uses cron to perform some actions at some interval. I have registered and pushed it, as described in the documentation, as a worker process (not a web one). It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped and I had to run the command again.
Questions:
Is this a correct way to run a Docker container (as a worker process) in Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the Docker container in Heroku?
Thanks!
If you want to have this run in the background, you should use the -d flag to run the container detached, rather than -t.
To check the logs, use docker logs [container name or id]. You can find the container's name and id using docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically add the --restart always flag when you run it. Alternatively, use --restart on-failure to only restart when it exited with a nonzero exit code.
The way you set environment variables seems fine.
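Putting those pieces together, a detached run with a restart policy might look like this; a sketch only, reusing the image and variable names from the question, with the container name as a placeholder:
# run detached, restarting automatically if the worker exits with an error
docker run -d --restart on-failure -e E_VAR1=VAL1 --name cron-worker registry.heroku.com/image_name/worker
# later, check why it stopped or restarted
docker logs cron-worker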
