I am able to run the Flask API container successfully, but during app execution it fails and stops the container for some reason.
I checked the container logs and noticed a file-missing error. Now I want to debug which file is missing by accessing /bin/bash of the stopped container, but it throws an error saying the container is not running.
docker exec -it CONTAINER /bin/bash
Is there any workaround to access bash in the STOPPED container?
No, you cannot.
However, it might be useful to either check the logs or specify bash as the entrypoint when doing a docker run.
Checking logs: https://docs.docker.com/config/containers/logging/
docker logs <CONTAINER_NAME>
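If the container has already exited, the last lines are usually the interesting ones; --tail and --timestamps are standard docker logs options:
docker logs --tail 100 --timestamps <CONTAINER_NAME>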
Shell Entry point: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/bash <IMAGE_NAME>
If your container does not have /bin/bash, try
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/sh <IMAGE_NAME>
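Once inside the shell, you can look for the file the error mentions; this is only a sketch, where /app and <missing_file_name> are assumed placeholders:
ls -l /app                                        # check whether the file reported as missing is in the image
find / -name "<missing_file_name>" 2>/dev/null    # or search the whole filesystem for it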
You can try to use the docker commit command.
From the docs:
It can be useful to commit a container’s file changes or settings into
a new image. This allows you to debug a container by running an
interactive shell, or to export a working dataset to another server.
Resource with an example:
We can transform a container into a Docker image using the commit
command. All we need to know is the name or the identifier of the
stopped container. (You can get a list of all stopped containers with
docker ps -a).
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
0dfd54557799 ubuntu "/bin/bash" 25 seconds ago Exited (1) 4 seconds ago peaceful_feynman
Having the identifier 0dfd54557799 of the stopped container, we can
create a new Docker image. The resulting image will have the same
state as the previously stopped container. At this point, we use
docker run and overwrite the original entrypoint to get a way into the
container.
# Commit the stopped image
docker commit 0dfd54557799 debug/ubuntu
# now we have a new image
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
debug/ubuntu <none> cc9db32dcc2d 2 seconds ago 64.3MB
# create a new container from the "broken" image
docker run -it --rm --entrypoint sh debug/ubuntu
# inside of the container we can inspect - for example, the file system
$ ls /app
App.dll
App.pdb
App.deps.json
# CTRL+D to exit the container
# the container was removed automatically because of --rm; now delete the debug image
docker image rm debug/ubuntu
You can't, because this container is as dead as a powered-off virtual machine. You can check its logs using the docker logs command.
docker container ls -aq
docker logs <name_of_your_dead_container>
From the man pages from docker-run:
--entrypoint=""
Overwrite the default ENTRYPOINT of the image
So use something like:
docker run --entrypoint=/usr/bin/sleep <IMAGE_NAME> 1000
This will start the container and wait 1000 seconds, allowing you to connect and debug.
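While the container is sleeping you can then open a shell in it with docker exec, for example:
docker exec -it <CONTAINER_NAME> /bin/sh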
Related
I am trying to run the frdmrobotics/playground Docker image as in this tutorial.
I also want to connect the GUI via VcXsrv.
I tried to pull the image but only got errors.
I also have no idea how to connect VcXsrv from inside the image.
The problem with pulling was solved by updating Docker for Windows: starting it and letting it search for an update.
After a restart I could pull the image with docker pull frdmrobotics/playground.
To start a container from the image,
docker run -it frdmrobotics/playground
works, but the tutorial says to
cd to the desired development folder, e.g.
cd c:/rosProgramming
and then use:
docker run -dt --name robot_env --restart unless-stopped -v %cd%:/root/workspace frdmrobotics/playground
That starts the container, which can then be connected to using:
docker exec -it robot_env bash
To connect VcXsrv, I found that I had to set a couple of environment variables inside the container:
export DISPLAY=192.168.105.1:0.0
export LIBGL_ALWAYS_INDIRECT=1
The IP address needed in that command I got via ipconfig; it is the one marked with "(WSL)".
After that, when I started a program it opened in my VcXsrv instance.
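A quick way to verify that the X forwarding works is to start a small X client from inside the container; this is only a sketch and assumes x11-apps can be installed in the image:
apt-get update && apt-get install -y x11-apps
xeyes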
After that you can use:
cd /root/code/ros1
to navigate to the ros1 environment.
Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using the volume for logs, that is with
-v mg_log:/var/log/memgraph, then the mg_log folder can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
If you run Memgraph without using the volume for logs, then you need to enter the Docker container. In order to do that, first find the container ID by running docker ps. Then copy the container ID and run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, run docker exec -it 83d76fe4df5a bash. Next, you need to find the folder where the logs are located, by running cd /var/log/memgraph. To read the logs, run cat <memgraph_date>.log; that is, if you have the log file memgraph_2022-03-02.log inside the log folder, run cat memgraph_2022-03-02.log.
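Condensed, the second option looks like this (the container ID and log file name are just the examples from above):
docker ps                          # find the container ID
docker exec -it 83d76fe4df5a bash  # open a shell inside the container
cd /var/log/memgraph               # go to the log directory
cat memgraph_2022-03-02.log        # read the log file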
Hopefully, when you read the logs, you'll be able to fix your problem.
I have a VM environment that I created in the Microsoft Azure cloud and installed Docker on it. I can run a Docker image without specifying any shell like sh or bash and it works. When I run:
docker run -it hello-world --->> it works
docker run -it hello-world sh ---->>> it doesn't work.
Actually I am working on a networking tool, Kathara, where I have to start a virtual lab with many PCs and routers, and I have to specify the terminal for them when I want to open any PC or router.
This is the actual error I get when I start the container:
"critical - 400 client error: bad request ("oci runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown")"
docker run -it hello-world runs the container's default command: ./hello. That works, because that's what the container is designed to do.
docker run -it hello-world /bin/bash tries to run /bin/bash inside the container. That doesn't work, because that's not what the container is designed to do. That command does not exist within the container.
If you want to run /bin/bash, choose a container that has /bin/bash.
This is even suggested in the output of docker run -it hello-world:
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
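If you are unsure what an image runs by default, docker inspect can show its entrypoint and command; a small sketch using the standard --format template:
docker image inspect hello-world --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
docker image inspect ubuntu --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'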
I'm curious if there's a way to see how much disk space a running Windows container is using in addition to the layers that are part of the container's image. Basically, how much the container "grew" since it was created.
In Linux (or Linux containers running in Hyper-V), this would be docker ps -s; however, that command isn't implemented for Windows containers. I also tried docker system df -v, but that isn't implemented either. Perhaps there's a hacky way by looking at a certain directory on disk or something?
I checked on Windows 10 1809 running non-Hyper-V (process isolation) containers; I'm pretty sure it's the same for Windows Server containers.
The data seems to be kept in:
C:\ProgramData\Docker\windowsfilter\{ContainerId}
There's a direct reference to the folder in docker inspect {Id} under GraphDriver\Data\dir.
The folder contains file sandbox.vhdx which appears to be the "writable layer" of each container.
I wasn't able to open it and view the filesystem, but if I write some data inside the container I can force the file to grow:
docker exec <Id> powershell get-childitem c:\ -recurse `> c:\windows\temp\test.txt
The layer persists when the container is stopped/restarted, and the folder is removed when the container is rmed.
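To check how much a container has grown, one option is to read the layer path from docker inspect and look at the size of sandbox.vhdx; this is only a sketch, assuming the GraphDriver layout described above:
docker inspect <ContainerId> --format "{{ .GraphDriver.Data.dir }}"
Get-Item "C:\ProgramData\Docker\windowsfilter\<ContainerId>\sandbox.vhdx" | Select-Object Name, Length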
While researching I saw an open PR in moby to improve cleanup of this folder.
I'm using Docker for Windows (Docker Desktop 2.0.0.3) and docker ps -s is actually implemented there:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
81acb264aa0f httpd "httpd-foreground" 6 minutes ago Up 6 minutes 80/tcp httpd 2B (virtual 132MB)
Docker for Windows runs in a MobyLinuxVM. You can access the VM and the Docker directories:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
root@8b58d2fbe186:/# docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
root@8b58d2fbe186:/# chroot /host
Now you can access the Docker folders in /var/lib/docker as on Linux and check the sizes.
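From there a rough per-container view is possible by checking directory sizes; a sketch, and the exact layout depends on the storage driver in use:
du -sh /var/lib/docker/containers/*
du -sh /var/lib/docker/overlay2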
I have a problem. When I check the list of running containers with the command:
docker ps
it shows me a running container with its ID and name. I killed it with docker kill jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7,
but after a few seconds it was started again automatically with a new container ID.
I tried another command to remove it: docker container rm jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
It was removed and then started again with another container ID after a few seconds.
I am really confused about what's going on...
I stopped the container first and then removed it; when I checked with docker ps afterwards, no container was listed, but after a few seconds a container was running again with some other ID. That surprised me.
The container is managed by swarm mode. Swarm mode will see the difference between the current state and target state and create a new container to correct the difference. Try:
docker service ls
docker service rm jenkins-master
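If you only want to stop it temporarily instead of removing the service, scaling it to zero replicas is another option (assuming it is a replicated service):
docker service scale jenkins-master=0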