How to move a containerd image to a different namespace? - Windows

I'm currently setting up a Kubernetes cluster with Windows nodes. I accidentally created a local image in the default namespace.
As shown by ctr image ls, my image is in the default namespace:
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/myimage:latest application/vnd.docker.distribution.manifest.v2+json sha256:XXX 6.3 GiB windows/amd64 -
Therefore, Kubernetes cannot find the image while creating the pod (ErrImageNeverPull; imagePullPolicy is set to Never). The reason is that the image isn't in the right namespace, k8s.io:
The command ctr --namespace k8s.io image ls shows the base Kubernetes images:
REF TYPE DIGEST SIZE PLATFORMS LABELS
mcr.microsoft.com/oss/kubernetes/pause:3.6 application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
mcr.microsoft.com/oss/kubernetes/pause@sha256:DIGEST application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
...
The most straightforward approach I tried was exporting the image, deleting it, and importing it into the other namespace (as mentioned in a GitHub comment in the cri-tools project):
ctr --namespace k8s.io image import --base-name foo/myimage container_import.tar
It works, but I wonder if there is a shorter (less time-consuming) way than re-importing the image, maybe by running a simple command or editing a text file.
EDIT: To clarify my question: I have one node with an image stored in the "default" namespace. I want to have the same image stored in the "k8s.io" namespace on the same node.
What else can I do, instead of running the following two (slow) commands?
ctr -n default image export my-image.tar my-image
ctr -n k8s.io image import my-image.tar
I assume there is a faster way of moving the image to another namespace, since it should just be a matter of editing some metadata.
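One small speed-up worth trying is to pipe the export straight into the import, which skips writing the intermediate tar to disk. This is only a sketch: it assumes your ctr build accepts - for stdout/stdin (recent containerd releases do); if yours doesn't, fall back to the two tar commands above.
# Stream the image from the default namespace into k8s.io in one step,
# with no intermediate my-image.tar on disk
ctr -n default image export - my-image | ctr -n k8s.io image import -
As far as I understand, the underlying blobs are deduplicated by digest in containerd's content store, so the import mostly re-registers metadata; the pipe just avoids the extra disk round-trip.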

As P Ekambaram suggested, the podman save and podman load commands let you share images across multiple servers and systems when they aren't available locally or remotely.
You can use Podman to manage images and containers.
The podman save command saves an image to an archive, making it available to be loaded on another server.
For instance, to save a group of images on a host named servera:
[servera]$ podman save --output images.tar \
docker.io/library/redis \
docker.io/library/mysql \
registry.access.redhat.com/ubi8/ubi \
registry.access.redhat.com/ubi8/ubi:8.5-226.1645809065 \
quay.io/centos7/mysql-80-centos7 docker.io/library/nginx
Once complete, you can take the file images.tar to serverb and load it with podman load:
[serverb]$ podman load --input images.tar
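To confirm the transfer worked, you can list the images on serverb (a quick sanity check, not part of the original answer):
[serverb]$ podman images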
Podman 4.0 includes the new podman image scp command, a useful command that helps you manage and transfer container images.
With Podman's podman image scp, you can transfer images between local and remote machines without requiring an image registry.
Podman takes advantage of its SSH support to copy images between machines, and it also allows for local transfer. Registryless image transfer is useful in a couple of key scenarios:
Doing a local transfer between users on one system
Sharing images over the network
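A rough sketch of both scenarios; the connection name servera and the user names here are hypothetical (remote connections are registered beforehand with podman system connection add):
# Copy an image to a remote machine over SSH, no registry involved
podman image scp docker.io/library/redis servera::
# Copy an image between two users on the same system
podman image scp root@localhost::docker.io/library/redis myuser@localhost::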

Related

How to run Docker images without connecting to the Internet?

I have installed Docker on a system which has no connection to the Internet, so to run an image with Docker, I had to download a simple image from this link on another system. Then I put this image on my offline system in this path: C:\Users\Public\Documents\Hyper-V\Virtual hard disks
But when I run docker run hello-world in cmd, I see this message:
Unable to find image 'hello-world:latest' locally
and Docker tries to download the hello-world image from the Internet, but it has no connection to the Internet, so it fails. Now I want to know: where should I put my images so they are visible to Docker?
You can do it the easy way, without messing around with folders, by exporting the Docker image from any other machine with access to the Internet:
Pull the image on a machine with Internet access:
$ docker pull hello-world
Save that image to a .tar file:
$ docker save --output hello-world.tar {your image name or ID}
Copy that file to any machine, then load the .tar file into Docker:
$ docker load --input hello-world.tar
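One caveat worth knowing: if you pass an image ID to docker save, the image comes back untagged (<none>:<none>) after docker load, so save by name:tag instead. A minimal example:
# Saving by name:tag preserves the repository and tag on load
$ docker save --output hello-world.tar hello-world:latest
$ docker load --input hello-world.tar
$ docker images hello-world    # shows hello-world:latest, not <none>:<none>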
Check out:
https://docs.docker.com/engine/reference/commandline/image_save/
https://docs.docker.com/engine/reference/commandline/load/#examples
You are trying to start a container using the Dockerfile. You need to first build the image from the Dockerfile. You can do this via
docker build -t <image-name> <path>
You will require an Internet connection while building the image.
You can check the images on your system using
docker images
Once you have built the Docker image, you can start a container without an Internet connection using
docker run <image-name>
Also you can export the same image using docker save and docker load functionalities.
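Putting the pieces together, here is a sketch of the whole offline workflow (the image and file names are just examples):
# On the machine WITH Internet access: build once, then export
docker build -t my-app .
docker save --output my-app.tar my-app
# On the offline machine: load and run, no Internet needed
docker load --input my-app.tar
docker run my-app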
Docker runs in a client-server architecture, almost like git: it can pull resources from the server online with the client on your machine.
The command docker pull hello-world requires a connection to that server, as part of how Docker itself works.

Docker image history at import from tar

I want to be able to deliver my changes to the QA department using Docker. My workflow is:
Make changes
Build image
Save it as a tar
Import it on another computer locally
Restart compose with my image (containing my changes)
I can't push it to a registry due to a sticky management process and must deliver it as a tar. The image history is lost during save and import, and when we try to bring up compose with the newly imported image, it raises an error:
docker: Error response from daemon: No command specified.
My image inherits from a local image, which in its turn inherits from the ubuntu:16.04 image. In both images I added this line after the FROM directive:
ENTRYPOINT bash
But I had no luck and faced the same error.
How to save/import image and be able to run containers using the image?
Instead of import, use load:
docker load -i <exported.tar>
This will load all the layers. When you do
docker import exported.tar image:tag
the whole filesystem is loaded as a single layer of the image, and metadata such as CMD and ENTRYPOINT is dropped (which is why you get "No command specified"). If you need to add a CMD to it, you can use
docker import -c 'CMD ["/bin/bash"]' exported.tar image:tag
So I would suggest using load instead of import.
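For completeness, a sketch of the save/load round-trip that keeps history and metadata intact (the image name is just an example):
# On the build machine: save by name:tag
docker save -o my-app.tar my-app:latest
# On the QA machine: load restores all layers plus CMD/ENTRYPOINT
docker load -i my-app.tar
docker history my-app:latest    # history survives load, unlike import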

Does Docker store all its files as a "memory image", as part of the image, not as disk files?

I was trying to add some files inside a Docker container, e.g. with touch. I found that after I shut down this container and bring it up again, all my files are lost. Also, I'm using the ubuntu image; after shutting down and restarting the same image, all the software I installed with apt-get is gone! It's just like running a new image. So how can I save any files that I create?
My question is: does Docker store all its filesystems, like /tmp, as memory filesystems, so nothing is actually saved to disk?
Thanks.
This is normal behaviour for Docker. You have to define a volume to save your data; those volumes will exist even after you shut down your container.
For example with a simple apache webserver:
$ docker run -dit --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This will mount your current directory to /usr/local/apache2/htdocs in the container, so those files will be available there.
Another approach is to use named volumes; those are not linked to a directory on your disk. Please refer to the docs:
Docker Manage - Data
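A minimal sketch of the named-volume approach (the volume and container names are just examples):
# Create a named volume; Docker manages where it lives on disk
docker volume create mydata
docker run -dit --name box -v mydata:/data ubuntu
docker exec box touch /data/kept-file
docker rm -f box
# The data outlives the container:
docker run --rm -v mydata:/data ubuntu ls /data    # prints kept-file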
When you start a container using the docker run command, e.g. docker run ubuntu, Docker starts a new container based on the image you specified. Any changes you made in a previous container will not be available, as this is a new instance spawned from the base image.
There are multiple ways to persist your data/changes to your container:
Use Volumes.
Data volumes are designed to persist data, independent of the container’s lifecycle. You could attach a data volume or mount a host directory as a volume.
Use Docker commit to create a new image with your changes and start future containers based on that image.
docker commit <container-id> new_image_name
docker run new_image_name
Use docker ps -a to list all the containers, including the ones that have exited. Find the ID of the container that you were working on and start it using docker start <id>.
docker ps -a #find the id
docker start 1aef34df8ddf #start the container in background
References
Docker Volumes
Docker Commit

I can't find my Docker image after building it

I'm new to Docker, so please allow me to describe the steps that I did. I'm using Docker (not Docker Toolbox) on OS X. I built the image from a Dockerfile using the following command:
sudo docker build -t myImage .
Docker confirmed that building was successful.
Successfully built 7240e.....
However, I can't find the image anywhere. I looked at this question, but the answer is for Docker toolbox, and I don't have a folder /Users/<username>/.docker as suggested by the accepted answer.
You can see your Docker images with the command below:
docker images
And to list all the containers in Docker, running or stopped:
docker ps -a
Local builds (in my case using BuildKit) will create and cache the image layers but simply leave them in the cache, rather than telling the Docker daemon they're an actual image. To do that you need to use the --load flag.
$ docker buildx build -t myImage .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
Doesn't show anything, but...
$ docker buildx build -t myImage --load .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myImage latest 538021e3d342 18 minutes ago 190MB
And there it is!
There actually is a warning about this in the output of the build command, but it's above all the build-step logs, so it scrolls off your terminal without being easily seen.
To get the list of images:
docker image ls
or
docker images
In addition to the correct responses above that discuss how to access your container or container image, here is how the image is written to disk.
Docker uses a copy-on-write filesystem (https://en.wikipedia.org/wiki/Copy-on-write) and stores each Docker image as a series of read-only layers, kept in a list. The link below does a good job of explaining how the image layers are actually stored on disk:
https://docs.docker.com/storage/storagedriver/
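To see those read-only layers for yourself, you can inspect any local image (a quick illustration, not part of the original answer):
# Print the digests of the read-only layers that make up the image
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu:18.04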
As already said, the docker images command will show you all the images you have locally, with output something like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
codestandars 1.0 a22daacf6761 8 minutes ago 622MB
bulletinboard 1.0 b73e8e68edc0 2 hours ago 681MB
ubuntu 18.04 cf0f3ca922e0 4 days ago 64.2MB
Now you can run
docker run -it followed by the IMAGE ID, or the REPOSITORY:TAG, of the image you want to run.
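For example, using the listing above (the IDs and tags come from that sample output):
# By REPOSITORY:TAG
docker run -it ubuntu:18.04
# or by IMAGE ID
docker run -it cf0f3ca922e0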
The command to list the Docker images is:
docker images
By default, docker images shows all top-level images, their repository and tags, and their size.
An image will be listed more than once if it has multiple repository names or tags.

How to specify the Docker image path on the command line without editing a configuration setting?

I have my Docker container images in different directories, and I would like to specify the directory path in the docker run command. There is a method to change this path by editing the '-g' option in the configuration file, but it requires restarting the Docker daemon. Is there any way to specify the Docker image path in the docker run command itself?
Docker must have knowledge of not just your image's physical location, but its complete layer tree, because a Docker image is made up of layers, where each layer is built by one Dockerfile command.
Hence, you should let Docker register/know all the images from the directory where the images are present. Moreover, if you have physically copied these images from another machine, they will not work unless they are registered/tagged within the Docker engine.
The short answer to your question is NO, it is not possible.
The Docker engine itself should manage the images. You could do everything the Docker engine does by changing all the configuration files it maintains internally, because they are all plain text, but it is definitely not worth your time, and you are better off letting Docker manage the images itself.
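For reference, the configuration-file route the question alludes to looks like this on a modern engine, where data-root in daemon.json replaces the old -g/--graph flag. Note that it moves the whole image store and still requires a daemon restart, so it does not give you per-run image paths either (the path below is just an example, assuming a systemd-based Linux host):
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "data-root": "/mnt/big-disk/docker"
}
EOF
sudo systemctl restart docker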
