I'm trying to restrict the CPUs available to a container using Docker's --cpuset-cpus option, but I'm not getting the expected result. For example, the following command should print just 1:
docker run -it --cpuset-cpus=0 ubuntu:latest grep processor /proc/cpuinfo | wc -l
But I get 4 as the result (4 is the number of CPUs on my host). The same happens with any image:
docker run -it --cpuset-cpus=0 centos grep processor /proc/cpuinfo | wc -l
docker run -it --cpuset-cpus=0 alpine grep processor /proc/cpuinfo | wc -l
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:45 2017
 OS/Arch:      linux/amd64
Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:24 2017
 OS/Arch:      linux/amd64
 Experimental: false
Am I wrong in my understanding of the --cpuset-cpus option? If so, what is the exact parameter I need to pass to get the behavior I'm expecting? (grep processor /proc/cpuinfo | wc -l should output 1)
Do you mean you want to limit CPU usage by percentage or by count? For example, 50% of a CPU, or 2 CPUs?
$ docker run -it --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash
This means processes in the container can be executed on cpu 0, cpu 1 and cpu 2.
The --cpu-quota flag limits the container's CPU usage. The default 0 value allows the container to take 100% of a CPU resource (1 CPU).
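For example (a sketch; the numbers are illustrative): with the default CFS period of 100ms, a quota of 50ms caps the container at 50% of one CPU:
docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu:latest /bin/bash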
It seems there is an open issue about this: https://github.com/moby/moby/issues/20770
Try something similar to
docker run --rm --cpuset-cpus=0,1 ubuntu sh -c "cat /sys/fs/cgroup/cpuset/cpuset.cpus"
and check whether that works.
Hope it helps.
As far as I know, the kernel's cpusets don't affect the proc file in any way: they restrict where processes may be scheduled, not what /proc/cpuinfo reports.
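One quick way to see the restriction take effect anyway (nproc respects the CPU affinity mask, unlike /proc/cpuinfo):
docker run --rm --cpuset-cpus=0 ubuntu:latest nproc
This should print 1 even though grep processor /proc/cpuinfo still reports every host CPU.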
Related
I am running Spark workers using docker, replicated with a docker-compose setup:
version: '2'
services:
  spark-worker:
    image: bitnami/spark:latest
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://1.1.1.1:7077
    deploy:
      mode: replicated
      replicas: 4
When I run docker-compose exec spark-worker ls, for example, it usually runs on only the first replica. Is there a way to broadcast such commands to all of the replicas?
docker-compose version 1.29.2, build 5becea4c
Docker version 20.10.7, build f0df350
There's no built-in facility for this, but you could construct something fairly easily. For example:
docker ps -q --filter label=com.docker.compose.service=spark-worker |
xargs -ICID docker exec CID ls
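Note that docker-compose sets the com.docker.compose.service label on every container it creates, so this filter picks up every replica of the service regardless of the containers' generated names.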
I would propose a simpler command than what @larsks proposed, one that leverages the docker-compose command itself.
SERVICE_NAME=spark-worker
for id in $(docker-compose ps -q "$SERVICE_NAME"); do
  docker exec -t "$id" echo hello
done
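If you want this as a reusable script, a minimal sketch (the broadcast.sh name and its arguments are hypothetical):
#!/usr/bin/env bash
# broadcast.sh: run an arbitrary command on every replica of a service.
# Usage: ./broadcast.sh spark-worker ls /opt
SERVICE_NAME=$1; shift
for id in $(docker-compose ps -q "$SERVICE_NAME"); do
  docker exec -t "$id" "$@"
done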
When I use docker or docker-compose with volumes, I often have permission issues because the container user is not known on the host:
mkdir i-can-to-what-i-want
rmdir i-can-to-what-i-want
docker run -v$(pwd):/home -w/home ubuntu touch you-shall-not-delete-it
$ ls -al you-shall-not-delete-it
-rw-r--r-- 2 root root 0 2020-08-08 00:11 you-shall-not-delete-it
One solution is to always do this:
UID=$(id -u) GID=$(id -g) docker-compose up
Or
UID=$(id -u) GID=$(id -g) docker run ...
But... it is cumbersome...
Any other method?
--user will do the job, unless this is the exact cumbersome solution that you are trying to avoid:
who
neo tty7 2020-08-08 04:46 (:0)
docker run --user $UID:$GID -v$(pwd):/home -w/home ubuntu touch you-shall-delete-it
ls -la
total 12
drwxr-xr-x 3 neo neo 4096 Aug 8 02:12 .
drwxr-xr-x 34 neo neo 4096 Aug 8 02:03 ..
drwxr-xr-x 2 neo neo 4096 Aug 8 02:03 i-can-to-what-i-want
-rw-r--r-- 1 neo neo 0 Aug 8 02:12 you-shall-delete-it
In fact, you are not using a volume here:
docker run -v$(pwd):/home
You are using a bind mount.
When you use a bind mount, a resource on the host machine is mounted into the container.
Relying on the host machine's filesystem has advantages (speed and a dynamic data source) but also limitations (file ownership and portability).
How I see things:
1) When you use docker-compose in development and you need to bind-mount source code that changes constantly, the bind mount is unavoidable, but you can simplify things by setting the user/group of the container directly in the compose file.
version: '3.5'
services:
  app:
    user: "${UID}:${GID}"
    ...
Note that ${UID} and ${GID} here are shell variables.
${UID} is defined in bash, but ${GID} is not; you could export it if required, or else use the user id for both: user: "${UID}:${UID}".
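A minimal sketch of that export, assuming bash (where UID is predefined but not exported, and GID is not set at all):
export UID          # UID is already set by bash; export makes it visible to docker-compose
export GID=$(id -g) # GID is not set by bash, so define and export it
docker-compose up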
2) When you use docker or docker-compose in a context where you don't need to provide files/folders from the host at container creation time, but can instead add them at image build time, favor a volume (named volume) over a bind mount.
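For example, a minimal sketch with a named volume (the names appdata and myimage are placeholders):
docker volume create appdata
docker run -v appdata:/data myimage
Files written to /data then live in a Docker-managed volume, so host-side file ownership issues never come into play.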
I've found some interesting weirdness when trying to mount a project folder into a Docker container on Windows.
I created a .sh script that mounts the project folder to run our developer environment image. I want one script that every dev can run, regardless of their machine. All it does is run docker with the current project folder.
#!/usr/bin/env bash
docker run -it --rm -v D:\my\project\folder:/wkDir $IMAGE_TAG yarn dev
Runs okay. Now the plan is to call this script from npm, so I'd like this to work relative to the current folder. Let's try another version.
docker run -it --rm -v $PWD:/wkDir $IMAGE_TAG yarn dev
Fails with:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from
daemon: Mount denied:
The source path "D:/my/project/folder;C"
doesn't exist and is not known to Docker.
Wat. What's ;C and where did it come from?
So I do echo $PWD which gives me /d/my/project/folder.
Interesting, so $PWD resolves to the correct path in Linux format, and it seems like docker is trying to translate that to the correct Windows path, except there's this ;C that appears out of nowhere. And the \ have become /...
What exactly is going on here?
I get the same result in VSCode's terminal git bash and powershell.
Update: I noticed that running the .sh in VSCode's powershell terminal opens a separate cmd.exe console window, which seems to run the script in git bash. So this might be a git bash issue.
So with some extra digging I found these threads, related to git-bash mucking up docker mounts:
https://forums.docker.com/t/weird-error-under-git-bash-msys-solved/9210
https://github.com/moby/moby/issues/24029#issuecomment-250412919
When I look up MinGW's documentation on the path conversion git-bash uses, I find this table of syntax:
http://www.mingw.org/wiki/Posix_path_conversion
One of the rules outputs in the format x;x;C:\MinGW\msys\1.0\x. Note the ;C in it. If git-bash is trying to be clever and stuffing up the syntax, outputting a path in this format would explain it.
The solution is to escape the path conversion by prefixing the path with /. So the working docker command to run docker from git-bash with the present working directory is:
docker run -it --rm -v /${PWD}:/wkDir $IMAGE_TAG yarn dev
Mounting the current directory into a Docker container in Windows 10 from Git Bash (MinGW) may fail due to a POSIX path conversion. Any path starting with / is converted to a valid Windows path.
touch test.txt
docker run --rm -v $(pwd):/data busybox ls -la /data/test.txt
# ls: C:/Git/data/test.txt: No such file or directory
Escape the POSIX paths by prefixing with /
To skip the path conversion, all POSIX paths have to be prefixed with the extra leading slash (/), including /$(pwd).
touch test.txt
docker run --rm -v /$(pwd):/data busybox ls -la //data/test.txt
# -rwxr-xr-x 1 root root 0 Jun 22 23:45 //data/test.txt
In Git Bash the path //data/test.txt is not converted and in Linux shells // (leading double slash) is ignored and treated the same way as /.
Disable the path conversion
Disable the POSIX path conversion in Git Bash (MinGW) using MSYS_NO_PATHCONV environment variable.
The path conversion can be disabled at the command level:
touch test.txt
MSYS_NO_PATHCONV=1 docker run --rm -v $(pwd):/data busybox ls -la /data/test.txt
# -rwxr-xr-x 1 root root 0 Jun 22 23:45 /data/test.txt
The path conversion can be disabled at the shell (or system) level:
export MSYS_NO_PATHCONV=1
touch test.txt
docker run --rm -v $(pwd):/data busybox ls -la /data/test.txt
# -rwxr-xr-x 1 root root 0 Jun 22 23:45 /data/test.txt
For me the solution was simply to include a trailing slash / at the end of any paths.
E.g. instead of
/opt/apache-atlas-2.0.0/bin/atlas_start.py
...use
/opt/apache-atlas-2.0.0/bin/atlas_start.py/
I had the same issue in Git Bash but not in Command Prompt.
You can instead run:
docker run -it --rm -v "/${PWD}/D:\my\project\folder":/wkDir $IMAGE_TAG yarn dev
Can you try the command below (from cmd.exe, where %cd% expands to the current directory)?
docker run -it --rm -v %cd%:/wkDir $IMAGE_TAG yarn dev
I've actually had the same issue. If you are using Git Bash, this command works (using nginx as an example):
docker container run --name container-name -v `pwd -W`/html:/usr/share/nginx/html -p 8000:80 -d nginx
Of course you can specify the port and directory as you desire.
The command below worked straight away for me; just don't use a dynamic variable.
docker run --rm -u root -p 8080:8080 -v jenkins-data/:/var/jenkins_home -v /var/run/docker.sock/:/var/run/docker.sock -v /Users/<YOUR USER NAME>/:/home jenkinsci/blueocean
Using the ulimit command, I set the core file size:
ulimit -c unlimited
Then I compiled my C source with gcc's -g option, which produced a.out.
After running
./a.out
there is a runtime error:
(core dumped)
but no core file was generated (e.g. core.294340).
How do I generate the core file?
First make sure the container will write the cores to an existing location in the container filesystem. The core generation settings are set on the host, not in the container. Example:
echo '/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
will generate cores in the folder /cores.
In your Dockerfile, create that folder:
RUN mkdir /cores
You also need to raise the core size limit; the ulimit shell command will not work, because it only affects the current shell. Use the docker run option --ulimit, which sets a soft and hard limit. After building the Docker image, run the container with something like:
docker run --ulimit core=-1 --mount source=coredumps_volume,target=/cores ab3ca583c907 ./a.out
where coredumps_volume is a volume you already created, in which the core will persist after the container is terminated. E.g.
docker volume create coredumps_volume
If you want to generate a core dump of an existing process, say using gcore, you need to start the container with --cap-add=SYS_PTRACE to allow a debugger running as root inside the container to attach to the process. (For core dumps on signals, see the other answer)
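A sketch of that flow, assuming gdb (which provides gcore) is installed in the image; the container name myapp, the image myimage and the PID 1234 are placeholders:
docker run --cap-add=SYS_PTRACE --mount source=coredumps_volume,target=/cores --name myapp -d myimage
docker exec -it myapp gcore -o /cores/snapshot 1234
This writes /cores/snapshot.1234 for the process with PID 1234 inside the container.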
I keep forgetting how to do this exactly and keep stumbling upon this question, which provides marginal help.
All in all it is very simple:
Run the container with extra params --ulimit core=-1 --privileged to allow coredumps:
docker run -it --rm \
--name something \
--ulimit core=-1 --privileged \
--security-opt seccomp=unconfined \
--entrypoint '/bin/bash' \
$IMAGE
Then, once in the container, set the coredump location and start your failing script:
sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t
myfailingscript.a
Enjoy your stacktrace
cd /tmp; gdb -c `ls -t /tmp | grep core | head -1`
Well, let's resurrect an ancient thread.
If you're running Docker on Linux, then all of this is controlled by /proc/sys/kernel/core_pattern on your bare metal. That is, if you cat that file on bare metal and inside the container, they'll be the same. Note also that the file is tricky to update: you have to use the tee method from some of the other posts.
echo core | sudo tee /proc/sys/kernel/core_pattern
If you change it on bare metal, it gets changed in your container. That also means the behavior depends on where you're running your containers.
My containers don't run apport, but my bare metal did, so I wasn't getting cores. I did the above (I had already solved the ulimit -c thing), and suddenly I got core files in the current directory.
The key to this is understanding that it's your environment, your bare metal, that controls the contents of that file.
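A quick sanity check of this (the choice of image is arbitrary):
cat /proc/sys/kernel/core_pattern
docker run --rm ubuntu cat /proc/sys/kernel/core_pattern
Both commands print the same pattern, because the setting is kernel-wide.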
I am in a running docker container with node, and for some reason the timezones (the time on the host machine vs. inside the docker container) never line up:
root@foobar:~# node -e "console.log(new Date())"
>> Tue May 17 2016 15:12:43 GMT+0200 (CEST)
root@foobar:~# docker exec 9179105c0ff9 node -e "console.log(new Date())"
>> Tue May 17 2016 13:13:01 GMT+0000 (Europe)
root@foobar:~# cat /etc/timezone
>> Europe/Vienna
root@foobar:~# docker exec 9179105c0ff9 cat /etc/timezone
>> Europe/Vienna
So what I already did in my docker-start shell script is the following:
docker run \
...
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
-e "TZ=Europe/Vienna" \
...
... but still, as you can see in the first code block, the time is wrong! Any ideas on this? What am I missing?
(fyi: I am running a meteor app deployed via mupx)
UPDATE:
After running date on the host and inside the container, there is again a difference of 2 hours. So for some reason the docker container just does not "apply" my timezone, and it seems the problem is not JS/node related, since date is just a simple unix command... what am I missing here?!
>> Tue May 17 2016 15:12:43 GMT+0200 (CEST)
and
>> Tue May 17 2016 13:13:01 GMT+0000 (Europe)
are approximately the same time (around an 18-second difference, because you didn't run the commands at the same time).
Take a closer look, it's around 3pm GMT+0200 and around 1pm GMT+0000.
This is just a difference in output format, but the time is the same.
If you execute .getTime() on the value of new Date(), you will probably get the same values.
This is probably due to differences in the default output format between node.js versions.
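A quick check along those lines, reusing the container ID from the question:
node -e "console.log(Date.now())"
docker exec 9179105c0ff9 node -e "console.log(Date.now())"
Run back to back, the two epoch timestamps should differ only by the delay between the two commands.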
I do it this way:
args: ["-v /etc/timezone:/etc/timezone:ro","-v /etc/localtime:/usr/share/zoneinfo/Europe/Prague:ro", '-e TZ=Europe/Prague']
You also need timedatectl installed.
I was surprised to see the same problem when using new Date().toString() in a container. It always returns the string for GMT+0000, and not for the server's time zone. There might be container-specific settings to change the time zone, but since I know my time zone (New York), I just used toLocaleString:
new Date().toLocaleString("en-US", {timeZone: "America/New_York"})