Docker image history is lost at import from tar

I want to be able to deliver my changes to the QA department using Docker. My workflow is:
Make changes
Build image
Save it as a tar
Import it on another computer locally
Restart compose with my image (containing my changes)
I can't push it to a registry due to a strict management process, so I must deliver it as a tar. The image history is lost during save and import, and when we try to bring compose up with the newly imported image it raises an error:
docker: Error response from daemon: No command specified.
My image is based on a local image, which in its turn is based on the ubuntu:16.04 image. In both images I added this line after the FROM directive:
ENTRYPOINT bash
But I had no luck and faced the same error.
How do I save and import the image so that I can run containers from it?

Instead of import, use load:
docker load -i <exported.tar>
This will load all layers. When you do
docker import exported.tar image:tag
This loads the whole filesystem as a single layer of the image, discarding the history and metadata such as CMD and ENTRYPOINT. If you need to add a CMD to it, you can use
docker import -c 'CMD ["/bin/bash"]' exported.tar image:tag
So I would suggest using load instead of import.
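Applied to the workflow in the question, the round trip would look like this (a minimal sketch; the image name my-app:qa and the tar file name are placeholders):
docker build -t my-app:qa .
docker save -o my-app-qa.tar my-app:qa
Copy the tar to the QA machine, then:
docker load -i my-app-qa.tar
docker-compose up -d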

Related

How to move a containerd image to a different namespace?

I'm currently setting up a Kubernetes cluster with Windows nodes. I accidentally created a local image in the default namespace.
As shown by ctr image ls, my image is in the default namespace:
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/myimage:latest application/vnd.docker.distribution.manifest.v2+json sha256:XXX 6.3 GiB windows/amd64 -
Therefore, Kubernetes cannot find the image while creating the pod (ErrImageNeverPull; imagePullPolicy is set to Never). The reason is that the image isn't in the right namespace, k8s.io:
The command ctr --namespace k8s.io image ls shows only the base Kubernetes images:
REF TYPE DIGEST SIZE PLATFORMS LABELS
mcr.microsoft.com/oss/kubernetes/pause:3.6 application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
mcr.microsoft.com/oss/kubernetes/pause@sha256:DIGEST application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
...
The most straightforward approach I tried was exporting the image, deleting it, and importing it into the other namespace (as mentioned in a GitHub comment in the cri-tools project):
ctr --namespace k8s.io image import --base-name foo/myimage container_import.tar
It works, but I wonder if there is a shorter (less time-consuming) way than re-importing the image.
(Maybe by running a simple command or changing a text file.)
EDIT: To clarify my question: I have one node with a container stored in namespace "default". I want to have the same container stored in namespace "k8s.io" on the same node.
What else can I do, instead of running the following two (slow) commands?
ctr -n default image export my-image.tar my-image
ctr -n k8s.io image import my-image.tar
I assume there is a faster way of changing the namespace, since it should just be a matter of editing some metadata.
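One small speed-up is to pipe the export straight into the import and skip the intermediate tar file; this assumes your ctr build accepts - for stdout/stdin, which is worth verifying on your version:
ctr -n default image export - my-image | ctr -n k8s.io image import -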
As @P Ekambaram suggested, the podman save and podman load commands let you share images across multiple servers and systems when they aren't available locally or remotely.
You can use Podman to manage images and containers.
The podman save command saves an image to an archive, making it available to be loaded on another server.
For instance, to save a group of images on a host named servera:
[servera]$ podman save --output images.tar \
docker.io/library/redis \
docker.io/library/mysql \
registry.access.redhat.com/ubi8/ubi \
registry.access.redhat.com/ubi8/ubi:8.5-226.1645809065 \
quay.io/centos7/mysql-80-centos7 \
docker.io/library/nginx
Once complete, you can take the file images.tar to serverb and load it with podman load:
[serverb]$ podman load --input images.tar
The newly released Podman 4.0 includes the new podman image scp command, a useful addition for managing and transferring container images.
With Podman's podman image scp, you can transfer images between local and remote machines without requiring an image registry.
Podman takes advantage of its SSH support to copy images between machines, and it also allows for local transfer. Registryless image transfer is useful in a couple of key scenarios:
Doing a local transfer between users on one system
Sharing images over the network
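For example, copying a local image to another machine over SSH might look like this (a sketch; the user, host, and image names are placeholders, and the user@host:: connection syntax assumes Podman 4.0):
podman image scp docker.io/library/redis:latest user@serverb::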

How to download and import a dump directly into your Oracle Docker image?

I am quite new to Docker and still learning and reading through the docs. I have an Oracle base image which I would like to use as a parent image to build my own image, and then push it to a custom Docker registry/repository.
The base image already provides a full setup of the Oracle DB. But as next steps, I would like to:
download a dump file (e.g. from a dump URL) directly into the Docker image (without downloading it to a local workspace)
run some SQL scripts
lastly, import the dump using Data Pump (impdp)
I tried to follow https://github.com/mpern/oracle-docker, but there you always need to store the dump file locally and mount it as a volume.
Is it possible to use curl to download the dump and store it directly in the Oracle container's workspace, and then import it from there?
You can run an interactive bash session inside your container to check whether curl is installed; if it is not, install it. From that interactive session you can then download your dump file.
The ports you require will also need to be published; if the container connects outside of Docker and the host machine, you can use docker run with the -p parameter.
An example is below:
docker run -p 80:80 -it <your-image> /bin/bash
More information on the docker run command and Dockerfiles:
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/builder/
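Putting the pieces together, a rough sketch of the flow inside a running Oracle container (the container name oracle-db, the URL, the credentials, and the DATA_PUMP_DIR path are all assumptions; adjust them to your image):
docker exec -it oracle-db bash
Inside the container, download the dump into the directory backing the DATA_PUMP_DIR directory object, so impdp can find it:
curl -fSL -o /opt/oracle/admin/ORCL/dpdump/my.dmp https://example.com/dumps/my.dmp
Run your SQL script:
sqlplus system/oracle@//localhost:1521/ORCL @/tmp/prepare.sql
Import the dump with Data Pump:
impdp system/oracle@//localhost:1521/ORCL directory=DATA_PUMP_DIR dumpfile=my.dmp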

How to run a local Docker image

I have an image that I want to run on my local machine. I took this image from my friend, not from Docker Hub or any repository. The shared file is in ".img" format.
I am able to import this image into Docker but unable to run it.
What I did:
Compressed the image file from ".img" format to ".tar.gz" format so that the Docker image can be imported. I used the 7-Zip tool for the conversion.
From my local machine I imported the Docker image using this new file (.tar.gz).
Tried to run this imported image, but it fails.
Commands Executed:
PS C:\Users\C61464> docker import .\Desktop\regchange.tar.gz
sha256:a0008215897dd1a7db205c191edd0892f484d230d8925fd09e79d8878afa2743
PS C:\Users\C61464> docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
<none>       <none>   7fdbbdcc59c4   2 minutes ago   1.05GB
PS C:\Users\C61464> docker tag 7fdbbdcc59c4 bwise:version1.0
PS C:\Users\C61464> docker images
REPOSITORY   TAG          IMAGE ID       CREATED         SIZE
bwise        version1.0   7fdbbdcc59c4   3 minutes ago   1.05GB
PS C:\Users\C61464> docker run -p 8888:80 bwise:version1.0
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: No command specified.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I searched a lot for this error and found that to run the image we need to specify the command that was used while creating the image (in the Dockerfile), but I am not sure, as I am new to Docker. Am I doing something wrong, or do I need the Dockerfile to run this image?
Perhaps the Docker image you have had no CMD or ENTRYPOINT defined when it was built, so the Docker daemon doesn't know what to do with the image.
Try doing
docker run -it -p 8888:80 bwise:version1.0 sh
(if it's a *nix based image). That should start an interactive shell.
You can do:
docker run -p 8888:80 bwise:version1.0 {command_you_want_to_run}
on the image when starting it.
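If you would rather bake a default command into the image itself, docker import can apply Dockerfile instructions at import time via --change (a sketch reusing the tar file from the question; the new tag is arbitrary):
docker import --change 'CMD ["/bin/bash"]' .\Desktop\regchange.tar.gz bwise:version1.1
docker run -p 8888:80 bwise:version1.1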
The Docker image may be broken.
Look inside it. See suggestions on how to do that in "How to see docker image contents".
Run this command to inspect your image:
docker inspect [docker-image-name]
In the inspect output you will see the base image and other info about that image.
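For instance, to check directly whether any default command is set (a quick sketch using docker inspect's Go-template --format option):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' bwise:version1.0
If both print empty, the image has no default command, which matches the "No command specified" error.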

Docker Compose on Mac - Image Location

I use docker-compose for local development on my Mac. I have multiple images being built with docker-compose. My Docker and docker-compose setup is very standard. Now I want to share my locally built image file with someone. Where are these local files stored?
Searching a bit gave me answers like:
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
But then how can I extract one image from this and share? I tried running the tty that is present with it, but to no avail.
Docker version: 18.03, Docker for Mac
Docker Compose version: 2
If you have a Docker Hub account (which is free), you can use the docker push command to push the image to the registry and docker pull to pull it on the other machine.
Another solution is to use the save + import commands, i.e. docker save and docker import.
docker@default:~$ docker save --help
Usage: docker save [OPTIONS] IMAGE [IMAGE...]
Save one or more images to a tar archive (streamed to STDOUT by default)
Options:
-o, --output string Write to a file, instead of STDOUT
docker@default:~$
After that you have a TAR file on your file system (check the -o value); transfer the file to the other machine and execute docker import. (Note that docker save pairs most naturally with docker load, which preserves layers and metadata; docker import creates a single-layer image without them, as discussed in the first answer above.)
docker@default:~$ docker import --help
Usage: docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
Import the contents from a tarball to create a filesystem image
Options:
-c, --change list Apply Dockerfile instruction to the created image
-m, --message string Set commit message for imported image
docker@default:~$
Apparently, the solution with docker-compose is to use the docker save command. We do not need to know the location of the images, as @fly2matrix mentioned. We can use the docker save command to save the image to a TAR file.
docker save --output image-name.tar image-name:tag
Then this image can be shared and loaded by other users through:
docker load --input image-name.tar
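For a compose-built image, first find the generated image name, which by default is <project>_<service> (a sketch; myproject_web is a hypothetical name, check docker images for yours):
docker images
docker save --output web.tar myproject_web:latest
Then, on the receiving machine:
docker load --input web.tar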

How to run Docker images without connecting to the Internet?

I have installed Docker on a system which has no connection to the Internet, so to run an image with Docker I had to download a simple image from this link on another system. Then I put this image on my offline system in this path: C:\Users\Public\Documents\Hyper-V\Virtual hard disks
But when I run docker run hello-world in cmd I see this message:
Unable to find image 'hello-world:latest' locally
and it tries to download the hello-world image from the Internet, but it has no connection to the Internet, so it failed. Now I want to know where I should put my images so they are visible to Docker.
You can do it the easy way without messing around with folders, by exporting the Docker image from any other machine with access to the Internet:
Pull the image on a machine with Internet access:
$ docker pull hello-world
Save that image to a .tar file:
$ docker save --output hello-world.tar {your image name or ID}
Copy that file to any machine.
Load the .tar file into Docker:
$ docker load --input hello-world.tar
Check out:
https://docs.docker.com/engine/reference/commandline/image_save/
https://docs.docker.com/engine/reference/commandline/load/#examples
You are trying to start a container using the Dockerfile. You need to first build the image from the Dockerfile. You can do this via
docker build -t <image-name> <path>
You will require an Internet connection while building the image.
You can check the images on your system using
docker images
Once you build the Docker image, you can start the container without an Internet connection using
docker run <image-name>
You can also export the same image using the docker save and docker load functionality.
Docker runs in a client-server architecture, almost like Git. It can pull resources from the server online, with the client on your machine.
The command docker pull hello-world requires a connection to the server, as part of how Docker itself works.
