I wanted to pull the Docker images for Windows Server Core/Nano Server, docker pull microsoft/xxx, to my local computer. The pull starts but quickly ends with the error unknown blob.
Same result for both of these:
PS C:\>docker pull microsoft/nanoserver
PS C:\>docker pull microsoft/windowsservercore
When trying to use images from Microsoft's Docker repositories (microsoft/xxx), you must ensure that you are running Docker with Windows containers, not Linux containers.
https://learn.microsoft.com/sv-se/virtualization/windowscontainers/quick-start/quick-start-windows-10
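On Docker Desktop you can switch from the tray icon ("Switch to Windows containers...") or from PowerShell. A minimal sketch, assuming Docker Desktop is installed in its default location:
PS C:\>& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
PS C:\>docker version --format '{{.Server.Os}}'
The second command should report windows once the switch has taken effect.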
I am working in an air-gapped environment running Fedora CoreOS, which comes packaged with Podman. I have several container images that I need to transport into the air-gapped environment. To do this, I have followed these steps:
I acquired the images on a machine with internet access. Some of the images were pulled into Podman from my local Docker daemon using podman pull docker-daemon:docker.io/my-example-image:latest, while some were pulled directly from the online repositories using podman pull.
I saved the images to a tar file using (for example) podman save docker.io/my-example-image:latest -o my-example-image.tar
I transported the tar files to the air-gapped environment on physical media and loaded them using podman load -i my-example-image.tar
When I check the images using podman images, they all appear in the image list. However, if I try to run a container from one of these images using sudo podman run docker.io/my-example-image, I get a long error message:
Trying to pull docker.io/my-example-image
Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on [::1}:53: read udp [::1]:50762 ->
[::1]:53: read: connection refused
Error: unable to pull docker.io/my-example-image: Error initializing source docker://my-example-image:latest:
error pinging docker registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup
registry-1.docker.io on [::1]:53: read udp [::1]:50762 -> [::1]:53: read: connection refused
I get a similar message for images that were acquired from other repositories like quay.io.
It seems to me that the error is caused by the machine's inability to establish a connection with a registry, which makes sense considering that the environment is air-gapped. But I am not sure why Podman is even trying to pull these images when they already exist in the environment, as confirmed by podman images.
I have tried using various ways of referencing the image within the podman run command including
sudo podman run docker.io/my-example-image:latest
sudo podman run my-example-image
sudo podman run my-example-image:latest
I have tried searching for a solution to this problem to no avail and would very much appreciate any guidance on this.
Each user has their own container storage.
The user root uses the directory /var/lib/containers/
Normal users use the directory ~/.local/share/containers/
The command
podman load -i my-example-image.tar
will use the directory ~/.local/share/containers/
The command
sudo podman run docker.io/my-example-image
will use the directory /var/lib/containers
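So the image was loaded into your rootless storage, but sudo podman run looks in root's storage and, not finding it there, falls back to pulling from a registry. A minimal sketch of two consistent options, using the image name from the question:
Either do both steps as root:
sudo podman load -i my-example-image.tar
sudo podman run docker.io/my-example-image:latest
or do both steps rootless:
podman load -i my-example-image.tar
podman run docker.io/my-example-image:latest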
If you would like to share read-only container storage between users, check out the setting additionalimagestores in the file storage.conf:
[storage.options]
additionalimagestores = [ "/var/lib/mycontainers",]
Reference:
https://www.redhat.com/sysadmin/image-stores-podman
I want to include support for Xdebug in a PHP Docker container; however, as part of this, I need to specify the IP of the Windows machine running the Docker container via XDEBUG_CONFIG=remote_host=${HOST_IP}. Currently HOST_IP is manually specified in a .env file, but I'd like to automate this to reduce the setup steps for other users.
My problem is that I can't seem to find a way to easily determine the IP of the host machine. It also needs to work on both Windows and Linux Docker hosts, as not all users use Windows as their desktop environment. I also can't use ${HOSTNAME}, as this fails to resolve in DNS.
Does anyone have any suggestions on how to achieve this?
EDIT2: Updating this answer for the newer versions of Docker: From 18.03 onward, Docker For Windows and other Docker platforms have been updated to include a cross-platform hostname for their Docker host, host.docker.internal - which is bloody helpful.
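For the Xdebug setup in the question, that means the host IP no longer needs to be discovered at all. A minimal sketch (the image name is just a placeholder, and on Linux engines this hostname may need extra configuration depending on the Docker version):
docker run -e XDEBUG_CONFIG="remote_host=host.docker.internal" my-php-image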
You might try a formatted docker info:
https://docs.docker.com/engine/reference/commandline/info/#format-the-output
docker info --format '{{json .}}'
docker info --format '{{json .path.to.ip}}'
E.g. in a (single-host) Docker Swarm you can get the host IP by:
docker info --format '{{json .Swarm.NodeAddr}}'
Via command substitution, stored in a variable:
docker_host_ip=$(docker info --format '{{json .Swarm.NodeAddr}}')
I could not try it on Windows or on Docker without Swarm, but docker info should work across platforms.
Update (according to comments below):
Not exactly (syntactically) beautiful, but you can use --format '{{index path arrayIndex "Key"}}' with docker network inspect to access the first element of an array (index 0) and then the map inside this array via its key ("Gateway"):
docker network inspect docker_dockernet --format '{{index .IPAM.Config 0 "Gateway"}}'
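To feed this back into the .env setup from the question, something along these lines could work; this is a sketch that assumes the default bridge network (the network name and the .env handling are assumptions):
HOST_IP=$(docker network inspect bridge --format '{{index .IPAM.Config 0 "Gateway"}}')
echo "HOST_IP=${HOST_IP}" >> .env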
I have been experimenting with Docker. The aim is to:
Install Docker on Windows Server 2016
for which I am following instructions from https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server
Everything looks fine until the installation of Docker, but after that, when I try to test it, it gives the error below.
failed to register layer: re-exec error: exit status 1: output: ProcessBaseLayer C:\ProgramData\docker\windowsfilter\0c9effd422805350acb1f051eb171399678aff003192c41be274acc4762b786c: The system cannot find the path specified.
Here I am just trying to pull "hello-world" from Docker Hub.
My ultimate aim is to run IIS in a Docker container and deploy the application running on IIS.
Has anyone faced such an issue, or can anyone come up with any suggestions?
It's a VM with:
OS: Windows Server 2016
HDD: 50 GB
RAM: 4 GB
You can't pull the hello-world image from Docker Hub because it's a Linux image. I used to see this while playing with Docker containers on Windows when trying to do the same thing.
If you want to get a different image, like IIS, run the following commands:
First:
Install-PackageProvider ContainerImage -Force
Then
Install-ContainerImage -Name WindowsServerCore
After that, restart your docker container with
Restart-Service docker
And you'll have the images of IIS on your machine.
I've tried to get Windows images the same way we do on Linux, but it always throws an error like yours. I guess that the images we have on the hub now are for Linux only, so if you want something in particular, you must build it yourself or use the existing Windows images on Docker Hub (about 9 or 10 images, as I remember).
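Since the question's ultimate aim is IIS: once the windowsservercore image from the steps above shows up in docker images, a rough sketch would be to start a container from it and enable the Web-Server role inside it (the container name is a placeholder, and this is only one way to get IIS going):
docker run -it --name iis-test windowsservercore powershell
and then, inside the container:
Install-WindowsFeature Web-Server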
This is probably because there's no hello-world image for Windows. You can try running docker run windowsservercore cmd /C hello world.
After installing Docker v1.12.0-rc2 on a Windows 10 Pro machine and setting the HTTP and HTTPS proxy variables, I get the following error:
docker run hello-world
Unable to find image 'hello-world:latest' locally
Pulling repository docker.io/library/hello-world
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG200B3911602237, not index.docker.io.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
The hello-world image is a Linux image; you can't use it with Windows containers.
At the moment there are only about 9 or 10 images for Windows containers; you can use them by following this article: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/deployment/deployment.
After installing v1.12.0-rc3-beta18 (build 5226), pulling containers works behind a proxy. Thanks #huy-tran
I have to pull a Docker image from Docker Hub and start running multiple peers as containers.
Right now, I manually open a terminal and execute my docker run command on the downloaded image, but I am planning to automate this process: if I or a user want 2 peers to run, I should be able to provide IP address and port information to the docker run command and start these peers in different terminals without manual steps.
After executing these commands, I should be able to store these IP addresses and port numbers in a JSON file for further transactions.
Could you please help me? Thanks!
Got a quick solution for the above problem. Below is the command I applied: docker run -d IMAGE_NAME /bin/bash. The above command runs the container as a background process. Also, I am getting the network details by running docker inspect <Container Id>.
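A minimal sketch of how this could be automated with a shell script; the image name, port numbering scheme, and output file are all assumptions to adjust for the actual peers:
#!/usr/bin/env bash
# Start N peer containers from the same image and record their IP/port in a JSON file.
IMAGE="my-peer-image"   # placeholder image name
COUNT="${1:-2}"         # number of peers, default 2
OUT="peers.json"

echo "[" > "$OUT"
for i in $(seq 1 "$COUNT"); do
  port=$((7050 + i))    # assumed host port scheme
  cid=$(docker run -d -p "${port}:7051" --name "peer${i}" "$IMAGE" /bin/bash -c 'sleep infinity')
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$cid")
  sep=$([ "$i" -lt "$COUNT" ] && echo "," || echo "")
  echo "  {\"name\": \"peer${i}\", \"ip\": \"${ip}\", \"port\": ${port}}${sep}" >> "$OUT"
done
echo "]" >> "$OUT"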