I am writing a healthcheck routine for a Docker container. By design, it should check how much CPU and memory the container is using and return an "unhealthy" exit code 1 if either exceeds its limit.
Is there a way to check container CPU and memory usage from INSIDE the container by running a .sh script?
All of these metrics are available in the cgroup filesystem inside the container. Read more here: https://docs.docker.com/config/containers/runmetrics
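For example, a minimal memory-only sketch of such a healthcheck, assuming cgroup v1 paths (on cgroup v2 the equivalents are /sys/fs/cgroup/memory.current and /sys/fs/cgroup/memory.max) and a hypothetical 1 GiB threshold:

#!/bin/sh
# Fail the healthcheck when the container's memory usage exceeds a threshold.
# Paths assume cgroup v1; adjust for cgroup v2 as noted above.
LIMIT_BYTES=1073741824   # hypothetical threshold: 1 GiB
USAGE=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
if [ "$USAGE" -gt "$LIMIT_BYTES" ]; then
  echo "unhealthy: memory usage $USAGE exceeds $LIMIT_BYTES"
  exit 1
fi
exit 0

CPU can be checked in a similar way by sampling the cpuacct (v1) or cpu.stat (v2) counters twice and comparing the delta against a budget.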
The Docker client talks to the Docker socket (/var/run/docker.sock), which is not available inside the container by default. A workaround is to mount the host's /var/run/docker.sock into the container with the following option when you start it:
-v /var/run/docker.sock:/var/run/docker.sock
For example,
docker run -it -v /var/run/docker.sock:/var/run/docker.sock $MY_IMAGE_NAME
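With the socket mounted, the container can ask the Docker Engine API for its own stats. A sketch, assuming curl is installed in the image and that the container keeps its default hostname (the short container ID); note that exposing the socket effectively grants root on the host:

# One-shot stats for this container over the mounted socket.
curl --silent --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$(hostname)/stats?stream=false"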
Related
I can run a docker image with docker run -m 2048 <imagename> which runs a Windows container with 2048MB of ram.
However, when I try what looks like the equivalent in a docker compose file, mem_reservation: 2048M, I get an error:
Windows does not support MemoryReservation
How can I assign my docker container 2048MB of RAM when running it from docker compose, the equivalent of docker run -m ?
The application I am testing requires the machine to report 2GB of RAM as a minimum, and without any memory settings in the yml file the container only reports 1GB RAM to my application.
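For reference, a minimal compose sketch, assuming the version 2.x compose file format, where mem_limit is the counterpart of docker run -m (in the version 3 format the limit moves under deploy.resources.limits instead); the service and image names are placeholders:

version: "2.4"
services:
  app:
    image: myimage:latest
    mem_limit: 2048m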
I will run a docker container with the command
docker run -ti --rm -p 8080:80 -v $(pwd)/my/path/to/config myimage:latest
but the plan is to write a function in a script that starts this image when the VM boots, via systemd.
Do I have to set up a service in /lib/systemd/system ?
From the Docker documentation on starting containers automatically (https://docs.docker.com/config/containers/start-containers-automatically/), you can use a restart policy:
To configure the restart policy for a container, use the --restart flag when using the docker run command. The value of the --restart flag can be any of the following:
no: Do not automatically restart the container. (the default)
on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in the restart policy details.)
unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
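For example, to run the image from the earlier question so that the daemon brings it back up unless it is explicitly stopped (note that --rm cannot be combined with a restart policy):

docker run -d --restart unless-stopped -p 8080:80 myimage:latest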
If these do not meet your requirements, there is information further down that page on using a process manager such as upstart, systemd, or supervisor.
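If you do end up driving the container from systemd, as the question suggests, a minimal unit sketch could look like the following; the file name, container name, and ports are placeholders, and docker run is deliberately not given -d so systemd can supervise the process:

# /etc/systemd/system/myapp-container.service (hypothetical name)
[Unit]
Description=My application container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 8080:80 myimage:latest
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target

Your own units belong in /etc/systemd/system rather than /lib/systemd/system (which is for units shipped by packages); enable the service with systemctl enable --now myapp-container.service.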
In your Dockerfile, add this as the last line:
ENTRYPOINT service ssh restart && bash
I am using a Docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a Docker image. The image is pushed to my private registry, pulled by my production EC2 instances, and then run. So essentially I need to run Docker within a Docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the container, so I can't start Docker that way. I'm not sure what I should be doing to start Docker, so I'm a bit stuck here; any help is appreciated.
A bit more information:
The host machine is running Fedora 20 (it will eventually be Amazon Linux on an EC2 instance).
The Docker container is running CentOS 7.0.
The host is running Docker version 1.2.0, build fa7b24f/1.2.0.
The container is running docker-0.11.1-22.el7.centos.x86_64.
How about not running 'docker inside docker' at all, and instead running Docker on your host but controlling it from within your container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up Docker containers; take a look at that image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
UPDATE (July 2016)
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple Docker versions), use the following to create additional loopback devices:
#!/bin/bash
# Create the loopback device nodes /dev/loop0 .. /dev/loop6 (block devices,
# major number 7) so the daemon's loopback mounts have devices to use.
for i in {0..6}
do
    mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run Docker inside a Docker container using dind. Try this image from Jérôme Petazzoni, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind
How to run #Testcontainers-based test cases inside a Docker container?
I have a simple Spring Boot app with component-level integration tests that interact with containers using Testcontainers. The test cases run fine from outside a container (on my local machine).
We run everything in containers, and the build runs on a Docker Jenkins image.
The Dockerfile builds the jar and then the image, but #Testcontainers is not able to find a Docker installation.
Below is my Dockerfile.
FROM maven:3.6-jdk-11-openj9
VOLUME ["/var/run/docker.sock"]
RUN apt-get update
RUN apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I am getting the error below:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using the Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but you should enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated, and therefore not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting the DOCKER_HOST environment variable.
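For example (a sketch; the address is a placeholder for wherever your daemon listens):

# Tell Testcontainers (and the docker CLI) where the daemon is,
# instead of the default unix:///var/run/docker.sock.
export DOCKER_HOST=tcp://127.0.0.1:2376
mvn --batch-mode clean package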
How Docker exposes its control API:
By default via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the command that launches the daemon (where that command lives is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one option: -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
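Run directly, that might look like the following sketch (on systemd-managed hosts the flags normally live in the service unit or in daemon.json instead):

# Listen on both the default Unix socket and localhost TCP.
dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2376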
How to make Docker available inside your container ("Docker in Docker"): If you enabled network access, no additional configuration is needed beyond pointing Testcontainers at it as mentioned above. If you want to use the default Unix socket, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the docker.sock mounted inside the container will still be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will only work if the user inside the container can connect to that socket: that is, the process runs as root, or happens to have exactly the same group id inside the container as the docker group has on your host.
I do not yet know a good general solution to this, so for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id; a way to line the ids up at run time is sketched below.
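A sketch of that last option, assuming a GNU stat on the host and the image placeholder from above: add the group that owns the host's socket to the container user's supplementary groups.

# Grant the container user the gid that owns the host's docker.sock.
docker run \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  your-image-id-here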
Is there a way to log into the host VM's shell, similar to how we can easily get a bash shell inside a running container?
docker exec -it <container> bash
I accidentally broke a crucial file in one container, so that it could no longer start. Unfortunately, that container stored its data inside itself. The only solutions I saw were about navigating to the host Docker daemon's files. However, I'm running the Docker VM on Windows, and I cannot access the files inside the VM (MobyLinuxVM).
I'm using Docker for Windows, version 1.12.3-beta30.1 (8711)
Hack your way in:
Run a container with full root access to MobyLinuxVM and no seccomp profile (so that you can mount stuff):
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
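Once inside, the VM's filesystem is mounted at /host; assuming the VM image provides a shell at /bin/sh, you can then switch into it:

# Inside the privileged container: work directly in the Moby VM's root filesystem.
chroot /host /bin/sh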
https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/6