What's the difference between a layer and an image in Docker?

I know that an image consists of many layers.
For example, if you run "docker history [image]", you get a sequence of IDs, and the ID at the top is the same as the image ID; the rest are layer IDs.
In that case, do those remaining layer IDs correspond to other images? If so, can I view a layer as an image?

Layers are what compose the file system for both Docker images and Docker containers.
It is thanks to layers that when you pull an image, you often don't have to download its entire filesystem: if you already have another image that shares some layers with the one you're pulling, only the missing layers are actually downloaded.
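A quick way to see layer sharing locally is to build one image on top of another; the child reuses every layer of the parent. (The tags demo/base and demo/child below are made up for illustration.)
docker pull busybox
printf 'FROM busybox\nRUN echo hello > /hello.txt\n' > Dockerfile.base
docker build -t demo/base -f Dockerfile.base .
printf 'FROM demo/base\nRUN echo world > /world.txt\n' > Dockerfile.child
docker build -t demo/child -f Dockerfile.child .    # reuses all of demo/base's layers
docker history demo/child                           # the lower rows are demo/base's layers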
Do the remaining layer IDs correspond to other images?
Yes. They are just like images, but without any tag to identify them.
Can I view a layer as an image?
Yes.
Example:
docker pull busybox
docker history busybox
IMAGE          CREATED        CREATED BY                                       SIZE       COMMENT
d7057cb02084   39 hours ago   /bin/sh -c #(nop) CMD ["sh"]                     0 B
cfa753dfea5e   39 hours ago   /bin/sh -c #(nop) ADD file:6cccb5f0a3b3947116    1.096 MB
Now create a new container from layer cfa753dfea5e as if it were an image:
docker run -it cfa753dfea5e sh -c "ls /"
bin dev etc home proc root sys tmp usr var

Layers and images are not strictly synonymous.
https://windsock.io/explaining-docker-image-ids/
When you pull an image from Docker Hub, its intermediate "layers" show <missing> as their image IDs.
When you commit changes or build images locally, those layers do have image IDs on your machine. But once you push the image to Docker Hub, other users pulling it will see an image ID only for the leaf (final) image.
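You can see this with any image pulled from a registry; with current Docker versions the intermediate rows of docker history show <missing> in place of an ID:
docker pull nginx
docker history nginx    # only the top (leaf) row has an ID; pulled layers show <missing>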

From the Docker documentation:
A Docker image is a read-only template. For example, an image could contain an Ubuntu operating system with Apache and your web application installed. Images are used to create Docker containers. Docker provides a simple way to build new images or update existing images, or you can download Docker images that other people have already created. Docker images are the build component of Docker.
Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
One of the reasons Docker is so lightweight is because of these layers. When you change a Docker image—for example, update an application to a new version— a new layer gets built. Thus, rather than replacing the whole image or entirely rebuilding, as you may do with a virtual machine, only that layer is added or updated. Now you don’t need to distribute a whole new image, just the update, making distributing Docker images faster and simpler.
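If you want to see those stacked layers directly, an image's RootFS lists the layer digests that the union file system combines:
docker image inspect --format '{{json .RootFS.Layers}}' busybox
# prints a JSON array of sha256 hashes, one per layer, bottom-most first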
The way I like to look at these things is like backup types. We can create a full backup and, afterwards, incremental backups. The full backup is not changed (some systems fold each incremental backup into the full one to shorten restore time, but we can ignore that case here); only the changes are backed up separately. So we have different layers of backups, just as we have different layers of images.
EDIT:
View the following links for more information:
Docker image vs container
Finding the layers and layer sizes for each Docker image

Related

How can I push docker compose based images for Gitlab Registry Using CI/CD

I've linked Postgres DB and API images using docker-compose, and it works locally. Now I have to push both images to the GitLab registry using CI/CD.
The GitLab docs cover how to build and push Docker images during CI quite well.
It is important to note/use the CI_REGISTRY_IMAGE variable. If the image name you try to push differs from CI_REGISTRY_IMAGE, the push will fail. If you want multiple images, you either need to distinguish them in the tag ($CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG) or create a separate repo and manage the images there.
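For example, the script section of a build job could look roughly like this (a sketch: the api/ and db/ paths are assumptions about your layout, while the CI_* variables are GitLab's predefined ones):
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:api-$CI_COMMIT_REF_SLUG" ./api
docker build -t "$CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG" ./db
docker push "$CI_REGISTRY_IMAGE:api-$CI_COMMIT_REF_SLUG"
docker push "$CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG"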

How can I update an Image in Google Artifact Registry?

When viewing Images in Google Cloud Platform's Artifact Registry, there is an "Updated" time column. However, whenever I build the same image and push it again, it creates a new image.
As part of a Cloud Build process, I am pulling this Ruby-based image, updating gems, then pushing it back to the Artifact Registry for use in later build steps (DB migration, unit tests). My hope is that upon updating the Ruby gems, nothing would happen in most cases, resulting in an identical Docker Image. In such a case, I'd expect no new layers to be pushed. However, every time I build, there is always a new layer pushed, and therefore a new Artifact.
Thus, the problem may be with how Cloud Build's gcr.io/cloud-builders/gsutil works rather than with Artifact Registry itself. Here are my relevant build steps in case it matters:
- id: update_gems
  name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/{my repo}/{my image}:deploy',
          '-f', 'docker/bundled.Dockerfile', '.' ]
- id: update_image
  name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'us-central1-docker.pkg.dev/$PROJECT_ID/{my repo}/{my image}:deploy' ]
The first step refers to "bundled.Dockerfile" which has these contents:
FROM us-central1-docker.pkg.dev/{same project as above}/{my repo}/{my image}:deploy
WORKDIR /workspace
RUN bundle update
RUN bundle install
Is there a way to accomplish what I'm currently doing (i.e., update a deploy-time image used to run rspec tests and rake db:migrate) without making new images every time we build? I assume those images are taking up space and I'm getting billed for them. I also assume there's a way to "update" an existing image in Artifact Registry, since there is an "Updated" column.
You are not looking at container "images". You are looking at "layers" of an image. The combination of layers results in a container image. These can also be artifacts for Cloud Build, etc.
You cannot directly modify a layer in Artifact Registry. Any change you make to the image build alters one or more layers, which results in one or more new layers being created. Building an image usually does not change all of its layers, so your new image is most likely a combination of old and new layers. Layers are cached in Artifact Registry for future images/builds.
More than one container image can use the same layers. If Google allowed you to modify individual layers, you would break/corrupt the resulting containers.
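If the goal is to stop pushing fresh layers when nothing meaningful changed, one option (a sketch, not verified against this exact build) is to pull the previous image and let Docker reuse its cached layers via --cache-from. Note the trade-off: with an unchanged parent image and an unchanged RUN line, bundle update will be served from cache rather than re-run.
docker pull us-central1-docker.pkg.dev/$PROJECT_ID/{my repo}/{my image}:deploy || true
docker build \
  --cache-from us-central1-docker.pkg.dev/$PROJECT_ID/{my repo}/{my image}:deploy \
  -t us-central1-docker.pkg.dev/$PROJECT_ID/{my repo}/{my image}:deploy \
  -f docker/bundled.Dockerfile .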

Docker runs old container instead of new one that I'm pulling (AWS)

I recently deployed a Springboot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything, used and unused containers, images, tags, etc. and pushing everything again. Docker system prune, docker rm, docker rmi, using a different account too... It still runs that old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by "old container"? Did you make changes in version control that never made it into the container? Or have you tried docker restart <container id>?
There's a lot to unpack here, so if you can provide more information - that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these just run the Docker image you provide them, so if your image isn't correctly updated, they won't update either.
The same situation occurs with docker run: if you keep pointing at the same image (or the image is conceptually unchanged), you won't see a change.
So I doubt the problem is within Docker or AWS.
Most likely:
- You're not rebuilding the Spring application binaries with each change
- Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name points to the correct image:tag combination. When you update the image, give it a unique tag, and then reference that tagged image from Docker/AWS, as sketched below.
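A common pattern is to derive the tag from the commit so every build is unambiguous (the registry host below is a placeholder):
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com   # placeholder account/region
TAG=$(git rev-parse --short HEAD)
docker build -t "$REGISTRY/myapp:$TAG" .
docker push "$REGISTRY/myapp:$TAG"
# then point the ECS task definition / run command at myapp:$TAG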
After you build an image, you can verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less

Where can I create images from? Locally at all? (Docker remote API)

This may or may not be an easy question, but where can I pull images from to create a new Docker image via the API?
Documentation
My (unsuccessful) attempts have been at building an image from something local: using docker images to get a list of images, then trying their image ID or repository with the fromImage query param, like so:
curl --data '' 'host:port/images/create?fromImage=test/hello-world&tag=webTesting'
I consistently get the following error:
{"errorDetail":{"message":"Error: image test/hello-world not found"},"error":"Error: image test/hello-world not found"}
In running docker images, we can very clearly see the following:
REPOSITORY          TAG       IMAGE ID       CREATED      VIRTUAL SIZE
test/hello-world    latest    6d9bd5e6da4e   2 days ago   556.9 MB
In all combinations of using the repository/tag/id the error still displays. I understand that we can create images from urls with fromSrc, and there are alternative create image routes by uploading .tar files, but is it possible in the case to create an image from one that already exists locally? I've had success in compiling images from ubuntu or centos, but I'm looking basically to replicate something local with new tags/repository.
I do see in the documentation that fromImage parameter may only be used when pulling an image -- does this mean we can only import images that are hosted on Dockerhub?
As you noted, the Docker remote API documentation clearly states that a pull operation must be triggered for this image reference.
This does not require that you use DockerHub, but it means that the image must be located on a registry and not simply in your daemon's local cache. If you were running a Docker registry instance on your host (which is easily done with the public registry image in DockerHub) on port 5000, then you could use "localhost:5000/test/hello-world" as the fromImage, and it will pull it from your locally hosted registry (after you push it locally of course).
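Putting that together, a minimal sketch (host:port is your daemon's remote API address from the original curl; registry:2 is the official registry image):
docker run -d -p 5000:5000 --name registry registry:2
docker tag test/hello-world localhost:5000/test/hello-world:webTesting
docker push localhost:5000/test/hello-world:webTesting
curl --data '' 'host:port/images/create?fromImage=localhost:5000/test/hello-world&tag=webTesting'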

Delete docker image from remote repo

I have the following
Docker registry: http://myPrivateRegistry:5000
Repository: myRepo
Image: myImage
I pushed this image to the remote repo with the following:
docker push myPrivateRegistry:5000/myRepo/myImage
How do I delete this image from the remote repo, not just locally?
docker rmi myPrivateRegistry:5000/myRepo/myImage untags the image but does not remove it from the remote repo.
After some time googling, I found that you can use curl to delete images, e.g.:
curl -X DELETE registry-url/v1/repositories/repository-name/
As far as I can see, this is still being debated in issue 422
While deletes are part of the API, they cannot be safely implemented on top of an eventually consistent backend (read: s3).
The main blocker comes from the complexity of reference counting on top of an eventually consistent storage system.
We need to consider whether it is worth facing that complexity or to adopt a hybrid storage model, where references are stored consistently.
As long as the registry supports varied backends, using an eventually consistent VFS model, safe deletions are not really possible without more infrastructure.
Issue 210 does mention
Soft delete have been implemented as part of the API, and more specialized issues have been opened for garbage collection.
https://github.com/docker/distribution/issues/422#issuecomment-114963170
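For reference, on a v2 registry (docker/distribution) the flow that eventually shipped looks roughly like this (a sketch; the registry must be started with REGISTRY_STORAGE_DELETE_ENABLED=true, and <digest> / <registry container> are placeholders):
# find the manifest digest in the Docker-Content-Digest response header
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/latest
# delete the manifest by digest (this is the soft delete)
curl -X DELETE http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/sha256:<digest>
# reclaim disk space with the garbage collector inside the registry container
docker exec <registry container> bin/registry garbage-collect /etc/docker/registry/config.yml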
