Docker run container slow - performance

I run the following command several times and get very similar results.
➜ time docker run --rm -it alpine:3.7 true
docker run --rm -it alpine:3.7 true 0.03s user 0.03s system 5% cpu 2.263 total
docker run takes 2 or 3 seconds to execute one simple command in a very small image. I think it's a performance issue, maybe a CPU/memory/IO bottleneck.
How can I figure out what is causing this?

Related

How to solve exit code 139 when running the image cloudera/quickstart in Docker with WSL2 Ubuntu?

I'm using WSL2 with the Ubuntu 20.04 distribution, and I was trying to create a container in Docker with the following command:
docker run --hostname=quickstart.cloudera --privileged=true -it -v $PWD:/src --publish-all=true -p 8888:8888 -p 8080:8080 -p 7180:7180 cloudera/quickstart /usr/bin/docker-quickstart
When I ran this command, a download of about 4.4 GB started (I think that is because it was the first time I ran this container). When the download was over, I used docker ps -a to check the containers, and the status for the container is Exited (139) 6 minutes ago. When I check my image list:
REPOSITORY TAG IMAGE ID CREATED SIZE
uracilo/hadoop latest 902e5bb989ad 8 months ago 727MB
cloudera/quickstart latest 4239cd2958c6 4 years ago 6.34GB
I think the image was created successfully, but when I try to run the first command, I keep getting Exited (139) in the status and I can't use the container.
Apparently exit code 139 refers to some problem with the system or the hardware, maybe the RAM, but I'm not sure, and I don't know if this problem is because I'm using WSL or because my 8 GB of RAM is not enough to run the image.
Is there any way to run this image successfully?
You need to create a file named .wslconfig under the %userprofile% folder on Windows and copy the following lines into that file:
[wsl2]
kernelCommandLine = vsyscall=emulate
Then just restart your Docker engine.
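For example, from PowerShell on the Windows side (a minimal sketch; the file contents are the two lines above):
notepad "$env:USERPROFILE\.wslconfig"
# after saving the file, shut down WSL so the kernel option is applied
wsl --shutdown
# then start Docker Desktop again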
I fixed this by changing the Docker engine from the WSL2 back-end to Hyper-V.
https://community.cloudera.com/t5/Support-Questions/docker-exited-with-139-when-running-cloudera-quickstart/td-p/298586

Is it possible to get bash access in a NOT running container?

I am able to run a Flask API container successfully, but during app execution it fails and stops the container for some reason.
I checked the container logs and noticed a missing-file error. Now I want to debug which file is missing by accessing /bin/bash of the stopped container, but it throws an error saying the container is not running.
docker exec -it CONTAINER /bin/bash
Is there any workaround to access bash in the STOPPED container?
No, you cannot.
However, it might be useful to either check the logs or specify bash as an entrypoint when doing a docker run.
Checking logs: https://docs.docker.com/config/containers/logging/
docker logs <CONTAINER_NAME>
Shell Entry point: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
docker run --name <CONTAINER_NAME> --entrypoint /bin/bash <IMAGE_NAME>
If your container does not have /bin/bash, try
docker run --name <CONTAINER_NAME> --entrypoint /bin/sh <IMAGE_NAME>
You can try to use the docker commit command.
From the docs:
It can be useful to commit a container’s file changes or settings into
a new image. This allows you to debug a container by running an
interactive shell, or to export a working dataset to another server.
Resource with an example:
We can transform a container into a Docker image using the commit
command. All we need to know is the name or the identifier of the
stopped container. (You can get a list of all stopped containers with
docker ps -a).
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
0dfd54557799 ubuntu "/bin/bash" 25 seconds ago Exited (1) 4 seconds ago peaceful_feynman
Having the identifier 0dfd54557799 of the stopped container, we can
create a new Docker image. The resulting image will have the same
state as the previously stopped container. At this point, we use
docker run and overwrite the original entrypoint to get a way into the
container.
# Commit the stopped container into a new image
docker commit 0dfd54557799 debug/ubuntu
# now we have a new image
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
debug/ubuntu <none> cc9db32dcc2d 2 seconds ago 64.3MB
# create a new container from the "broken" image
docker run -it --rm --entrypoint sh debug/ubuntu
# inside of the container we can inspect - for example, the file system
$ ls /app
App.dll
App.pdb
App.deps.json
# CTRL+D to exit the container
# delete the debug image (the container was removed automatically because of --rm)
docker image rm debug/ubuntu
You can't, because a stopped container is dead in the same way a powered-down virtual machine is dead. You can check its logs using the docker logs command.
docker container ls -aq
docker logs <name_of_your_dead_container>
From the man page for docker-run:
--entrypoint=""
Overwrite the default ENTRYPOINT of the image
So use something like:
docker run --entrypoint=/usr/bin/sleep <IMAGE_NAME> 1000
This will start the container and wait 1000 seconds, allowing you to connect and debug.
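A minimal sketch of that flow, assuming a hypothetical container name debug-sleep (any image with a shell will do):
docker run -d --name debug-sleep --entrypoint /usr/bin/sleep <IMAGE_NAME> 1000
docker exec -it debug-sleep /bin/sh
# inspect the filesystem, exit with CTRL+D, then remove the container
docker rm -f debug-sleep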

Is there a way to see container disk usage on Docker for Windows?

I'm curious if there's a way to see how much disk space a running Windows container is using in addition to the layers that are part of the container's image. Basically, how much the container "grew" since it was created.
In Linux (or Linux containers running in a Hyper-V VM), this would be docker ps -s; however, that command isn't implemented for Windows containers. I also tried docker system df -v, but that's not implemented either. Perhaps there's a hacky way, like looking at a certain directory on disk or something?
I checked on Windows 10 1809 running non-Hyper-V (process isolation) containers; I'm pretty sure it's the same for Windows Server containers.
The data seems to be kept in:
C:\ProgramData\Docker\windowsfilter\{ContainerId}
There's a direct reference to the folder in docker inspect {Id} under GraphDriver\Data\dir.
The folder contains the file sandbox.vhdx, which appears to be the "writable layer" of each container.
I wasn't able to open it and view the filesystem, but if I write some data inside the container I can force the file to grow:
docker exec <Id> powershell get-childitem c:\ -recurse `> c:\windows\temp\test.txt
The layer persists when the container is stopped/restarted, and the folder is removed when the container is rmed.
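A rough sketch for locating and measuring that writable layer from PowerShell, based on the GraphDriver path mentioned above (the container ID is a placeholder, the template path may differ between Docker versions, and you may need an elevated shell):
docker inspect --format "{{ .GraphDriver.Data.dir }}" <ContainerId>
# sum the size of the files in that folder (mostly sandbox.vhdx)
(Get-ChildItem -Recurse -File "C:\ProgramData\Docker\windowsfilter\<ContainerId>" | Measure-Object -Property Length -Sum).Sum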
While researching I saw an open PR in moby to improve cleanup of this folder.
I'm using Docker for Windows (Docker Desktop 2.0.0.3) and docker ps -s is actually implemented.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
81acb264aa0f httpd "httpd-foreground" 6 minutes ago Up 6 minutes 80/tcp httpd 2B (virtual 132MB)
Docker for Windows runs on a MobyLinuxVM. You can access the VM and the Docker directories:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
root@8b58d2fbe186:/# docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
root@8b58d2fbe186:/# chroot /host
Now you can access the docker folders in /var/lib/docker as on linux and check the sizes.
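For example, from inside the chroot you could get a rough per-directory breakdown with du (a sketch; paths can vary between Docker versions):
du -sh /var/lib/docker/*
du -sh /var/lib/docker/containers/*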

Maven inside docker container horribly slow

I'm trying to set up a Docker container to build my Java project with Maven.
I have created my Dockerfile FROM maven:3.2-jdk-7 and built the image.
When I execute it with:
docker run -it --rm --name my-maven-project \
  -v "$PWD":/usr/src/app \
  -v "$HOME"/.m2:/root/.m2 \
  -w /usr/src/app \
  -v "$HOME"/.ssh:/root/.ssh \
  test mvn clean package -Dmaven.test.skip=true
It takes about 20 minutes to complete, but if I run the same mvn command on my host it takes 2 minutes.
I have tried giving more memory to the container by using
-m 4gb
But it didn't change anything; looking at docker stats, the container barely used more than 2 GB.
I'm running all this from OSX
Is there anything I need to do to have Maven finish in a decent time? I'm very surprised it takes THAT long when on the host it takes 2 minutes.
This is what docker stats says after Maven has been building for 10 minutes:
CPU: 201.13%
Mem usage / limit : 2.508GiB
MEM % : 62.69%
NET I/O: 3.01kB / 861B
BLOCK I/O: 57.7MB / 2.23MB
PIDS: 38
- EDIT -
It turns out Docker for Mac does not play well with mounted volumes.
In order to avoid having to git clone the project inside the container, I preferred using -v "$PWD":/usr/src/app.
To test, I git cloned the app directly from within the container, and now the build takes a normal amount of time (4 minutes).
Note that the git clone took 6 minutes!!! instead (1 minute on the host), so in total from git clone to final build it still takes 10 minutes, which is ridiculous.
So yeah, OSX and Docker is a big no-no when using mounted volumes...
I ran into this same issue using the same docker run syntax as you (docker run -v src:dest). A Maven build that took ~30 seconds on my OSX host was taking ~4 minutes in my container. I didn't solve it entirely, but switching to explicitly use a bind mount took my builds from around 4 minutes down to about 1.5 minutes. This still isn't an acceptable increase in build time for my use case, but it may help someone else. Try switching your docker run command to this:
docker run --name=my-maven-project -it \
--mount type=bind,source="$(pwd)",destination=/usr/src/app,consistency=delegated <docker image name>
NOTE: The consistency option at the very end is only valid on OSX, and has two other values, either of which may be more appropriate for your situation. I tried all three out of curiosity; build times were comparable between the delegated and cached options, while the consistent option was nearly as slow as the way I was doing it before (unsurprisingly). Here's the documentation:
https://docs.docker.com/storage/bind-mounts/
So, unfortunately, despite bind mounts being "very performant," they're still apparently at least twice as slow as a native filesystem when it comes to maven builds, at least on OSX. With luck that will improve as time goes on.
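Applied to the command from the question, a sketch that binds the project directory, the local Maven repository, and the SSH keys with delegated consistency might look like this (the image name test and the paths are taken from the question; untested):
docker run -it --rm --name my-maven-project \
  --mount type=bind,source="$PWD",destination=/usr/src/app,consistency=delegated \
  --mount type=bind,source="$HOME/.m2",destination=/root/.m2,consistency=delegated \
  --mount type=bind,source="$HOME/.ssh",destination=/root/.ssh,consistency=delegated \
  -w /usr/src/app \
  test mvn clean package -Dmaven.test.skip=true
Note that, unlike -v, --mount requires the source paths to already exist on the host.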

How do I run the Hetionet v1.0 docker container?

I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
1. I've set up a DigitalOcean droplet with Docker.
2. I ran docker pull dhimmel/hetionet and it worked.
3. Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant @Path("/") annotation buried in the Docker container, as described in this SO post's comment, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in 3 is expected. It looks like a Docker container successfully launched, downloaded the Hetionet database, and launched the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j is still launching.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development command map ports. This will make it so the Neo4j server running inside your Docker container is available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, you would replace http://localhost with the IP address of your droplet.
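For a quick check that the server is reachable once the container is up (run on the droplet; assumes the default port mapping above):
curl -I http://localhost:7474/
# from your own machine, replace localhost with the droplet's IP address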
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server -- it just lets you explore the image.
Does this clear things up?

Resources