I have a shell script (.sh) that stops and removes my running containers, and it usually works well. But about one time in ten, the docker rm command seems to execute before docker stop has finished, so it raises an error that the container is still running. What can I do to prevent that error?
docker stop nostalgic_shirley
docker rm nostalgic_shirley
Output
nostalgic_shirley
nostalgic_shirley
Error response from daemon: You cannot remove a running container xxxx. Stop the container before attempting removal or force remove
If I then run docker rm nostalgic_shirley again, it works fine.
I should not use a --force option, as the shutdown should be graceful.
If you really need the container to be stopped before deleting it:
docker stop nostalgic_shirley
while [ -n "$(docker ps | grep nostalgic_shirley)" ]; do sleep 2; done  # poll until the container no longer shows as running
docker rm nostalgic_shirley
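An alternative to polling, if you prefer, is docker wait, which blocks until the container exits:
docker stop nostalgic_shirley
docker wait nostalgic_shirley   # blocks until the container has exited, then prints its exit code
docker rm nostalgic_shirley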
If not, simply delete it with the force option (you can even skip stopping it first):
docker rm -f nostalgic_shirley
Do you by any chance have another process or user logged in to the container on the occasions when it does not stop?
Since, as you mentioned, this does not happen frequently: have you checked the state of the environment (CPU utilization, memory, etc.) when it does happen?
Have you tried:
a) stopping the container with an overridden -t value?
Use the "-t" flag of the "docker stop" command:
-t, --time=10
Seconds to wait for stop before killing it
b) Use "-f" flag of "docker rm" command
-f, --force[=false]
Force the removal of a running container (uses SIGKILL)
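For example, to allow up to 30 seconds (an arbitrary value) for a graceful shutdown before the daemon sends SIGKILL:
docker stop -t 30 nostalgic_shirley
docker rm nostalgic_shirley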
Related
When I run or start a Docker container, it will not stay running.
Docker start will just return the name of whatever container I gave it, but won't actually do anything. Docker run (e.g. $ docker run -p 8080:80 --name hello -d hello-world) will create the container, but it exits immediately.
If I run docker ps after one of these, it will show nothing listed as currently running.
If I run docker ps -a, it will show all of my containers and show the one that I just attempted to run having exited a few seconds ago.
Is this common, and how do I get my containers to stay running? I am trying to learn how to use Docker and it has been one of the worst experiences. Thank you for any help or suggestions.
Docker containers are generally used to run applications/processes in an isolated environment.
When you run the hello-world image, it creates a container whose only purpose is to print a greeting to standard output. That is the only process that ran, and once it was done the container's work was finished. That is why you see nothing when you run docker ps.
In order to keep a container running, you need to have a process inside that container that will run (for example: a server, database, application etc.)
Try creating a container from the mysql image, and then check the running containers.
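A minimal sketch of that, using the official mysql image (which requires MYSQL_ROOT_PASSWORD to be set):
docker run -d --name mysql-demo -e MYSQL_ROOT_PASSWORD=secret mysql
docker ps   # mysql-demo stays listed, because the mysqld server process keeps running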
In your command, you specify the -d flag (aka --detach), which means "Run container in background and print container ID" (from the Docker docs). See more discussion about this here: Docker container will automatically stop after "docker run -d"
docker run -p 8080:80 --name hello -d hello-world
If you run it without the -d flag, it will run in the foreground and send output to your terminal:
docker run -p 8080:80 --name hello hello-world
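Either way, you can retrieve whatever the container printed with docker logs, even after it has exited:
docker logs hello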
You don't see it running in docker ps because that container just executes the hello-world script and exits. If the container starts a long-running process, then you'll be able to find it in docker ps. To verify this, you can try running the nginx demo containers (e.g. nginx-hello), which serve up 'hello world'/demo pages.
To find out what went wrong with your container, use the docker logs <container-name> command.
Then you can work out what happened to it.
Is this common and how do I get my containers to stay running?
What happens when you start a Docker container?
By default, it executes the command/entrypoint specified in the image's Dockerfile.
Generally that command or the entrypoint is a script or a program located in the image.
When that script/program exits, the container exits too. That's all.
To keep a container alive, the script/program has to stay running.
You start a container from a "hello" image; the "hello" container says "hello" and exits.
That may be a script as simple as:
#!/bin/sh
echo "hello"
So that is expected to finish and exit the container.
Run a database or a web server and you will see a different behavior: the script/program keeps running until you stop it, so the container also stays running until you stop it.
To experiment, you can run a container with an endless command. Note that --entrypoint must come before the image name; also, the hello-world image ships no tail binary, so a small image that has one, such as alpine, is used here:
docker run -p 8080:80 --name hello -d --entrypoint tail alpine -f /dev/null
You will see that the container stays running.
A docker container exits when its main process finishes. The hello-world main process just prints some text and exits, so the container exits too.
You can run the command directly to see its output:
docker run hello-world
If you want a running container, maybe you can try a nginx demo:
docker run --name nginx-demo -p 8080:80 -d nginx
Then you can visit http://localhost:8080 in your web browser.
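Or check it from the command line (assuming curl is available on the host):
curl http://localhost:8080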
I'm running Docker Toolbox on VirtualBox on Windows 10.
I'm having an annoying issue: if I docker exec -it mycontainer sh into a container to inspect things, the shell will abruptly exit back to the host shell at random while I'm typing commands. Some experimenting reveals that pressing two letters at the same time (as is common when touch typing) is what causes the exit.
The container will still be running.
Any ideas what this is?
More details
Here's a minimal docker image I'm running inside. Essentially, I'm trying to deploy kubernetes clusters to AWS via kops, but because I'm on Windows, I have to use a container to run the kops commands.
FROM alpine:3.5
#install aws-cli
RUN apk add --no-cache \
    bind-tools \
    python \
    python-dev \
    py-pip \
    curl
RUN pip install awscli
#install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#install kops
RUN curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
RUN chmod +x kops-linux-amd64
RUN mv kops-linux-amd64 /usr/local/bin/kops
I build this image:
docker build -t mykube .
I run this in the working directory of the project I'm trying to deploy:
docker run -dit -v "${PWD}":/app mykube
I exec into the shell:
docker exec -it $containerid sh
Inside the shell, I start running AWS commands as per the kops setup guide.
Here's some example output:
##output of previous dig command
;; Query time: 343 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Wed Feb 14 21:32:16 UTC 2018
;; MSG SIZE rcvd: 188
##me entering a command
/ # aws s3 mb s3://clus
##shell exits abruptly to host shell while I'm writing
DavidJ#DavidJ-PC001 MINGW64 ~/git-workspace/webpack-react-express (master)
##container is still running
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37a341cfde83 mykube "/bin/sh" 5 minutes ago Up 3 minutes gifted_bhaskara
##nothing in docker logs
$ docker logs --details 37a341cfde83
A more useful update
Adding the -D flag gives an important clue:
$ docker -D exec -it 04eef8107e91 sh -x
DEBU[0000] Error resize: Error response from daemon: no such exec
/ #
/ #
/ #
/ #
/ # sdfsdfjskfdDEBU[0006] [hijack] End of stdin
DEBU[0006] [hijack] End of stdout
Also, I've ascertained that what specifically is causing the issue is pressing two letters at the same time (which is quite common when I'm touch typing).
There appears to be a GitHub issue for this here, though that one is for Docker for Windows, not Docker Toolbox.
This issue appears to be a bug with Docker on Windows. See the GitHub issue here.
As a workaround, prefix your docker exec command with winpty, which comes with Git Bash.
e.g.
winpty docker exec -it mycontainer sh
Check the USER you are logged in as when doing a docker exec -it yourContainer sh.
Its .bashrc, .bash_profile or .profile might include a command that would explain why the session abruptly quits.
Check also the logs associated with that container (docker logs --details yourContainer) to see whether the closed session wrote anything to stderr.
Reasons I can think of for a process to be killed in your container include:
Pid 1 exiting in the container. This would cause the container to go into a stopped state, but a restart policy could have restarted it. See your docker container inspect output to check whether this is happening (a sketch of such a check follows this list). This is the most common cause I've seen.
Out of memory on the OS, where the kernel would then kill processes. View your system logs and dmesg to see if this is happening.
Exceeding the container memory limit, where docker would kill the container, possibly restarting it depending on your policy. You would again view docker container inspect but the status will have different details.
Process being killed on the host, potentially by a security tool.
Perhaps a selinux or apparmor policy being violated.
Networking issues. Never encountered it myself, but since docker is a client / server design, there's a potential for a network disconnect to drop the exec session.
The server itself is failing, and you'd see various logs in syslog / dmesg indicating problems it can't recover from.
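To check the restart- and memory-related causes above, you can inspect the container's state (a sketch; mycontainer is a placeholder name, and the format fields shown are standard docker container inspect output):
docker container inspect -f 'Status={{.State.Status}} ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} Restarts={{.RestartCount}}' mycontainer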
If I connect to a docker container
$> docker exec -it my_container zsh
and inside it I want to kill something I started with Ctrl+C, I've noticed that it takes forever to complete. I've googled around and it seems that Ctrl+C works a bit differently in containers than you would expect. My question: how can I fix Ctrl+C inside a container?
The problem is that Ctrl-C sends a signal to the top-level process inside the container, but that process doesn't necessarily react as you would expect. The top-level process has ID 1 inside the container, which means that it doesn't get the default signal handlers that processes usually have. If the top-level process is a shell, then it can receive the signal through its own handler, but doesn't forward it to the command that is executed within the shell. Details are explained here. In both cases, the docker container acts as if it simply ignores Ctrl-C.
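You can observe the PID 1 behavior directly (a sketch; alpine and sleep are just convenient examples):
docker run -d --name sig-demo alpine sleep 1000
docker kill --signal SIGINT sig-demo   # ignored: sleep runs as PID 1 with no SIGINT handler installed
docker ps                              # sig-demo is still running
docker rm -f sig-demo                  # clean up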
Starting with docker 0.6.5, you can add -t to the docker run command, which will attach a pseudo-TTY. Then you can type Control-C to detach from the container without terminating it.
If you use -t and -i then Control-C will terminate the container. When using -i with -t then you have to use Control-P Control-Q to detach without terminating.
Test 1:
$ ID=$(sudo docker run -t -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-P Control-Q
$ sudo docker ps
The container is still listed.
Test 2:
$ ID=$(sudo docker run -t -i -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-C
$ sudo docker ps
The container is not there (it has been terminated). If you type Control-P Control-Q instead of Control-C in the second example, the container will still be running.
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch Ctrl-C. This bash example might help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
#!/bin/bash

# trap ctrl-c and call ctrl_c()
trap ctrl_c INT

function ctrl_c() {
    echo "** Trapped CTRL-C"
}

for i in `seq 1 5`; do
    sleep 1
    echo -n "."
done
Use Ctrl+\ instead of Ctrl+C:
it sends SIGQUIT, which kills the process instead of politely asking it to shut down (read more here).
In some cases, when I used Ctrl-C to terminate a process inside a container, the container itself terminated.
Additionally, I've seen cases where processes running inside containers leave zombie processes.
I have found that when starting a container with the --init switch, both of these problems are addressed. This appears to make my containers operate in a more "normal, expected UNIX-like manner".
Note: --init is different from -i, which is short for --interactive
If you want more information on what the --init switch does, please read up on it on the Docker web pages that include information on docker run. The information on that web page says "Run an init inside the container that forwards signals and reaps processes".
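A minimal demonstration of the difference (alpine and sleep are just convenient examples):
docker run -it --rm alpine sleep 1000          # Ctrl+C is ignored: sleep runs as PID 1
docker run -it --init --rm alpine sleep 1000   # Ctrl+C works: tini runs as PID 1 and forwards SIGINT to sleep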
I had a similar problem when I was trying to run mdbook (the Rust executable) in a docker container. mdbook starts a simple web server, and I wanted to stop it via Ctrl+C, which did not work.
$ docker run -ti --rm -p 4321:4321 my-docker-image mdbook serve --hostname 0.0.0.0 --port 4321
2019-08-16 14:00:11 [INFO] (mdbook::book): Book building has started
2019-08-16 14:00:11 [INFO] (mdbook::book): Running the html backend
2019-08-16 14:00:11 [INFO] (mdbook::cmd::serve): Serving on: http://0.0.0.0:4321
2019-08-16 14:00:11 [INFO] (ws): Listening for new connections on 0.0.0.0:3001.
2019-08-16 14:00:11 [INFO] (mdbook::cmd::watch): Listening for changes...
^C^C
Inspired by @NID's answer, I encapsulated the mdbook executable in a universal bash script, docker-entrypoint.sh, which did the trick (without the need to explicitly catch the INT signal).
$ docker run -ti --rm -p 4321:4321 my-docker-image docker-entrypoint.sh mdbook serve --hostname 0.0.0.0 --port 4321
2019-08-16 14:00:11 [INFO] (mdbook::book): Book building has started
2019-08-16 14:00:11 [INFO] (mdbook::book): Running the html backend
2019-08-16 14:00:11 [INFO] (mdbook::cmd::serve): Serving on: http://0.0.0.0:4321
2019-08-16 14:00:11 [INFO] (ws): Listening for new connections on 0.0.0.0:3001.
2019-08-16 14:00:11 [INFO] (mdbook::cmd::watch): Listening for changes...
^C $
The content of the docker-entrypoint.sh is very simple:
#!/bin/bash
# Run the given command as a child of this shell; because it is not PID 1,
# it keeps its default signal handlers and Ctrl+C reaches it as usual.
"$@"
I tried the --init solution by @Remy Orange and it worked for me. After some searching, including i) How to use the --init parameter in docker run, ii) What is the advantage of Tini? and iii) init, I wrote up the detailed solution below.
Install tini on Ubuntu:
$ sudo apt update && sudo apt install tini
Or, if tini is not available in your distribution or is too old, see the tini repository for a Dockerfile that adds it.
Run your Docker container with --init:
docker run -ti --init --rm YOUR_DOCKER_IMAGE bash
Then you are inside your docker container and can run processes or experiments. E.g., run some Python code; you can then press Ctrl+C to cancel it, just as you can in a regular terminal outside the docker container.
In my case, pressing Ctrl+C (i.e., ^C) cancels the running Python process, which stops with a KeyboardInterrupt as expected.
When your docker terminal is not responding to Ctrl+C/Ctrl+D/Ctrl+\, try these steps:
1. Open another terminal session and list the running containers:
docker container ls
or
docker container list
2. Locate the container ID in the list above and stop the container:
docker stop <containerId>
3. The next time you launch the container, use the -it flag so that it responds to the Ctrl+C event:
docker run -it <image>
Now you can stop it with Ctrl+C.
If you use Docker Compose, you can add the init parameter to forward signals to the container:
version: "2.4"
services:
web:
image: alpine:latest
init: true
To make it work, you need the -ti options in your docker exec command.
Wasted about 2 hours.
New commands -- (Working fine)
sudo docker stop
sudo docker rm
sudo docker run -t
Old commands -- (Not working anymore)
sudo docker stop
sudo docker rm
sudo docker run
Ctrl + C
sudo docker start
Hope that helps someone.
For me, Ctrl+C works only after running the container with docker run -it <image>.
For anyone who still has this issue: Ctrl+D worked for me. Neither Ctrl+C nor Ctrl+Z did.
I am trying to run a container and modify certain files in it. I am trying to do this using a script. If I use:
docker run -i -t <container> <image>, it is giving me
STDERR: cannot enable tty mode on non tty input
If I use:
docker run -d <container> <image> bash, the container is not starting.
Is there any way to do this?
Thanks
Run the docker image in the background using:
docker run -d <image>:<version>
Check running docker containers using:
docker ps
If there is only one container running, you can use the command below to attach to it and use bash to browse files/directories inside the container:
docker exec -it $(docker ps -q) bash
You can then modify/edit any file you want and restart the container.
To stop a running container:
docker stop $(docker ps -q)
To run a stopped container:
docker start -ia $(docker ps -lq)
So to start off, the -i -t flags are for an interactive TTY session for interacting with the container. If you are invoking this from a script, it's likely that this won't work as you expect.
This is not really the way containers are meant to be used. If it is a permanent change, you should be rebuilding the image and using that for the container.
However, if you want to make changes to files that are reflected in the container, you could consider using volumes to mount directories from the host into the container. This would look something like:
docker run -v /some/host/dir:/some/container/dir -d <image>
At this point anything you change within /some/host/dir will be within the container at /some/container/dir. You can then make your changes with a script on the host, without having to invoke the docker cli.
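A concrete sketch of that (the paths, names, and alpine image are placeholders):
mkdir -p /tmp/shared
docker run -d --name demo -v /tmp/shared:/data alpine tail -f /dev/null
echo "hello" > /tmp/shared/greeting.txt
docker exec demo cat /data/greeting.txt   # prints: hello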
I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take note of your container id by executing the following command:
$ docker ps -a
e) save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) verify that you can see your new image with curl installed.
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it regains control and can commit/push the result.
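A minimal sketch of that approach (image and container names are placeholders):
#!/bin/bash
docker run --name job my-image /bin/sh -c "sleep 10"   # foreground: this line blocks until the command exits
docker commit job my-image:snapshot                    # snapshot the stopped container as a new image tag
docker rm job                                          # remove the container so the name can be reused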
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (I run this in the Travis CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, now I can run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or similar, if that helps:
#!/bin/bash
# Commit every running container; docker ps -q lists just the container IDs
for i in $(docker ps -q); do docker commit -m "commit new change" "$i"; done