I have a machine with two NVIDIA 1060 GPUs. When I run an off-screen rendering program inside a Windows container, it works, but when I run multiple containers, they all use the same GPU (the first one).
# the way I start GPU supported container on windows
docker run -ti --isolation process --device class/5B45201D-F2F2-4F3B-85BB-30FF1F953599 "image-id" powershell
Under a Linux container with nvidia-docker, I can pass the option --gpus '"device=1"' to docker run. This assigns only the second GPU to the created container.
# the way I start GPU supported container on linux
nvidia-docker run -u root --net=host --gpus '"device=1"' -ti "image-id" bash
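On Linux I can even script one container per GPU (a sketch, assuming GPU indices 0 and 1):
# sketch: one container per GPU on linux
for i in 0 1; do
  nvidia-docker run -u root --net=host --gpus "\"device=$i\"" -d "image-id" bash
done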
I wonder if there is an equivalent of --gpus '"device=1"' when running a Windows container: a way to specify a GPU ID at docker run time so that each container can use its own GPU.
Environment:
Host OS: Windows 11 21H2
Docker Image: mcr.microsoft.com/windows/server:ltsc2022
GPU: Nvidia 1060 x 2
Thanks!
Please note I am trying to start a Windows-based container, not a Linux one. XD
Related
I'm using WSL2 with the Ubuntu 20.04 distribution, and I was trying to create a container in Docker with the following command:
docker run --hostname=quickstart.cloudera --privileged=true -it -v $PWD:/src --publish-all=true -p 8888:8888 -p 8080:8080 -p 7180:7180 cloudera/quickstart /usr/bin/docker-quickstart
When I run this command, a download of about 4.4 GB starts (I think that is because it was the first time I ran this container). When the download was over, I used the command docker ps -a to check the containers, and the status for the container is Exited (139) 6 minutes ago. When I check my image list:
REPOSITORY            TAG      IMAGE ID       CREATED        SIZE
uracilo/hadoop        latest   902e5bb989ad   8 months ago   727MB
cloudera/quickstart   latest   4239cd2958c6   4 years ago    6.34GB
I think that the image was created successfully, but whenever I try to run the first command, I keep getting Exited (139) in the status and I can't use the container.
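For completeness, the exit code can be read directly with the standard Docker CLI:
docker logs <container-id>
docker inspect --format '{{.State.ExitCode}}' <container-id>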
Apparently exit code 139 refers to some problem with the system or the hardware (139 = 128 + 11, i.e. the process was killed by SIGSEGV), maybe the RAM, but I'm not sure, and I don't know whether the problem is that I'm using WSL or that my 8 GB of RAM is not enough to run the image.
Is there any way to run this image successfully?
You need to create a file named .wslconfig under the %USERPROFILE% folder on your Windows host and copy the following lines into that file:
[wsl2]
kernelCommandLine = vsyscall=emulate
Then just restart your Docker engine.
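Most likely this helps because the old CentOS 6 userland inside cloudera/quickstart still relies on the legacy vsyscall interface, which recent kernels disable by default, so its processes die with SIGSEGV and the container exits with 139. To make sure the new kernel command line is picked up, shut WSL down completely before restarting Docker:
wsl --shutdown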
I fixed this by changing the Docker engine from WSL2 back-end to Hyper-V
https://community.cloudera.com/t5/Support-Questions/docker-exited-with-139-when-running-cloudera-quickstart/td-p/298586
Is it possible to make sure that my containers are running with a specific docker option?
I need to run my container with the --device option. I cannot use a device plugin because I am running a Windows container, and the device plugin mechanism does not seem to be implemented for Windows.
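What I effectively want to guarantee is that a check like the following passes for every container (plain docker inspect; the Devices list is part of its standard output):
docker inspect --format '{{ json .HostConfig.Devices }}' <container-id>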
Thank you for your help
I'm curious if there's a way to see how much disk space a running Windows container is using in addition to the layers that are part of the container's image. Basically, how much the container "grew" since it was created.
In Linux (or Linux containers running in Hyper-V), this would be docker ps -s; however, that command isn't implemented for Windows containers. I also tried docker system df -v, but that's not implemented either. Perhaps there's a hacky way, like looking at a certain directory on disk?
I checked on Windows 10 1809 running non-Hyper-V (process isolation) containers; I'm pretty sure it's the same for Windows Server containers.
The data seems to be kept in:
C:\ProgramData\Docker\windowsfilter\{ContainerId}
There's a direct reference to the folder in docker inspect {Id} under GraphDriver\Data\dir.
The folder contains a file named sandbox.vhdx, which appears to be the "writable layer" of each container.
I wasn't able to open it and view the filesystem, but if I write some data inside the container I can force the file to grow (the backtick escapes the redirect from the host PowerShell, so the output file is written inside the container):
docker exec <Id> powershell get-childitem c:\ -recurse `> c:\windows\temp\test.txt
The layer persists when the container is stopped/restarted, and the folder is removed when the container is removed with docker rm.
While researching I saw an open PR in moby to improve cleanup of this folder.
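Putting this together, a rough way to see how much a given container has grown (a sketch in PowerShell; assumes process isolation and admin rights to read the folder):
# resolve the writable-layer folder, then read the sandbox.vhdx size in MB
$dir = docker inspect --format '{{ index .GraphDriver.Data "dir" }}' <Id>
(Get-Item "$dir\sandbox.vhdx").Length / 1MB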
I'm using Docker for Windows (Docker Desktop 2.0.0.3), and docker ps -s is actually implemented:
CONTAINER ID   IMAGE   COMMAND              CREATED         STATUS         PORTS    NAMES   SIZE
81acb264aa0f   httpd   "httpd-foreground"   6 minutes ago   Up 6 minutes   80/tcp   httpd   2B (virtual 132MB)
Docker for Windows runs on a MobyLinuxVM. You can access the VM and the Docker directories:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
root@8b58d2fbe186:/# docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
root@8b58d2fbe186:/# chroot /host
Now you can access the Docker folders in /var/lib/docker as on Linux and check the sizes.
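For example (default paths on a stock Linux Docker host; the storage-driver directory may differ by Docker version):
# per-container metadata (logs, config) and the overlay2 writable layers
du -sh /var/lib/docker/containers/*
du -sh /var/lib/docker/overlay2/*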
I'm using Docker 1.13.1 for Mac. The Docker client allows you to change the amount of memory provided to Docker using a simple slider interface.
How can I set this value via Docker's command line utility?
For added clarity, this is not per-container memory; this is the value of "Total Memory" that's returned when you run docker info.
Thank you
With Docker (at least version 18.03.1) the settings for the VM are maintained in a special file located at:
/Users/<username>/Library/Group\ Containers/group.com.docker/settings.json
If you close Docker, you can edit it directly from the command line using sed. For example, the command below will replace the 2048 MB (2 GB) limit with a 10240 MB (10 GB) limit and create a backup of the original settings at settings.json.bak:
sed -i .bak 's/2048/10240/g' /Users/`id -un`/Library/Group\ Containers/group.com.docker/settings.json
When docker restarts, it will now have 10 GB.
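Once Docker is back up, you can confirm the new limit with standard docker info (MemTotal is reported in bytes):
docker info --format '{{.MemTotal}}'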
On a Mac, Docker actually runs as a Hyperkit virtual machine. The docker command line utility just interfaces with the docker daemon process running inside that virtual machine.
If you run ps auxwww | grep hyperkit on your Mac, you'll see the hyperkit process running with the amount of memory passed as an argument. This is controlled by the Docker Mac client, and I imagine the saved value is stored in a .plist file somewhere.
In order to modify that on the command line, you'd need to find where the Docker Mac client stores the data, modify it, and restart the hyperkit process.
I have been experimenting with Docker. The aim is to install Docker on Windows Server 2016,
for which I am following instructions from https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server
Everything looks fine until the installation of Docker. After that, when I try to test Docker, it gives the error below.
failed to register layer: re-exec error: exit status 1: output: ProcessBaseLayer C:\ProgramData\docker\windowsfilter\0c9effd422805350acb1f051eb171399678aff003192c41be274acc4762b786c: The system cannot find the path specified.
Here I am just trying to pull "hello-world" from Docker Hub.
My ultimate aim is to run IIS in a Docker container and deploy the application that runs on IIS.
Has anyone faced such an issue, or can anyone offer suggestions?
It's a VM with:
OS: Win2016
HDD: 50GB
RAM: 4GB
You can't pull the hello-world image from Docker Hub because it's a Linux image. I used to see this error while playing with Docker containers on Windows when trying to do the same thing.
If you want to get a different image, like IIS, just run the following commands:
First:
Install-PackageProvider ContainerImage -Force
Then
Install-ContainerImage -Name WindowsServerCore
After that, restart the Docker service with
Restart-Service docker
And you'll have the Windows Server Core image, the base for IIS, on your machine.
I've tried to get Windows images the way we do on Linux, but it always throws the error you have. I guess that almost all images on the hub now are Linux-only, so if you want particular things, you must build them yourself or use the existing Windows images on Docker Hub (about 9 or 10 images, as I remember).
This is probably because there's no hello-world image for Windows. You can try running docker run windowsservercore cmd /C echo hello world instead.
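Alternatively, assuming your host build matches the image version, the official Windows Server Core base image can be pulled from Docker Hub directly:
docker pull microsoft/windowsservercore
docker run microsoft/windowsservercore cmd /C echo hello world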