Delete docker image from remote repo

I have the following:
docker registry: http://myPrivateRegistry:5000
repository: myRepo
image: myImage
I pushed this image to the remote repo with the following:
docker push myPrivateRegistry:5000/myRepo/myImage
How do I delete this image from the remote repo, not just locally?
docker rmi myPrivateRegistry:5000/myRepo/myImage untags the image but does not remove it from the remote repo.

After some time googling, I found that you can use a curl command to delete images (this targets the old v1 registry API), e.g.:
curl -X DELETE registry-url/v1/repositories/repository-name/

As far as I can see, this is still being debated in issue 422:
While deletes are part of the API, they cannot be safely implemented on top of an eventually consistent backend (read: s3).
The main blocker comes from the complexity of reference counting on top of an eventually consistent storage system.
We need to consider whether it is worth facing that complexity or to adopt a hybrid storage model, where references are stored consistently.
As long as the registry supports varied backends, using an eventually consistent VFS model, safe deletions are not really possible without more infrastructure.
Issue 210 does mention:
Soft deletes have been implemented as part of the API, and more specialized issues have been opened for garbage collection.
https://github.com/docker/distribution/issues/422#issuecomment-114963170
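For what it's worth, on a v2 registry that soft delete looks roughly like this (a sketch, not a guaranteed recipe; it assumes the registry container is named "registry" and was started with REGISTRY_STORAGE_DELETE_ENABLED=true):
# look up the manifest digest for the tag
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/latest
# note the Docker-Content-Digest header from the response, then delete by digest
curl -X DELETE http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/sha256:<digest>
# blobs are only reclaimed once garbage collection runs
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml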

Related

How to prevent GitLab CI/CD from deleting the whole build

I'm currently having a frustrating issue.
I have a setup of GitLab CI on a VPS server, which is working completely fine, I have my pipelines running without a problem.
The issue comes after having to redo a pipeline. Each time, GitLab deletes the whole folder where the build is and builds it again to deploy it. My problem is that I have an "uploads" folder that stores all user-uploaded content, and each time I redo a pipeline everything in this folder gets deleted. I obviously need this content, because it's the purpose of the app.
I have tried the GitLab CI cache - no luck. I have also tried making a new folder that isn't in the repository; it deletes that too.
Running my first job looks like so:
[Screenshot of the job log]
As you can see, there are a lot of lines that say "Removing ...".
To persist a folder of local files across CI pipeline runs, the best approach is to use Docker data persistence: you can delete everything from the last build while keeping local files inside your application between builds, and you maintain the ability to start from scratch every time a new pipeline starts. There are two options:
Bind-mount volumes
Volumes managed by Docker
GitLab's CI/CD Documentation provides a short briefing on how to persist storage between jobs when using Docker to build your applications.
I'd also like to point out that if you're using GitLab Runner through SSH, they explicitly state they do not support caching between builds when using this executor. Even when using the standard Shell executor, they highly discourage saving data to the Builds folder, so it can be argued that the best-practice approach is to use a bind-mount volume on your host and isolate the application from the user-uploaded data.
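A rough sketch of both options when running the application container (the paths, image name, and volume name here are hypothetical):
# Bind-mount volume: the host directory outlives any container or build
docker run -v /srv/myapp/uploads:/app/uploads myapp:latest
# Docker-managed volume: same idea, but Docker owns the storage location
docker volume create myapp-uploads
docker run -v myapp-uploads:/app/uploads myapp:latest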

Kubernetes deployment - specify multiple options for image pull as a fallback?

We have had image pull issues at one time or another with all of our possible docker registries including Artifactory, AWS ECR, and GitLab. Even DockerHub occasionally has issues.
Is there a way in a Kubernetes deployment to specify that a pod can get an image from multiple different repositories so it can fall back if one is down?
If not, what other solutions are there to maintain stability? I've seen things like Harbor and Trow, but it seems like a heavy-handed solution to a simple problem.
Is there a way in a Kubernetes deployment to specify that a pod can get an image from multiple different repositories so it can fall back if one is down?
Not really, not natively 😔. You could probably trick a K8s node into pulling images from different image registries (one at a time) if you place them behind something like a TCP load balancer that directs traffic to multiple registries. But this might take a lot of testing and work.
If not, what other solutions are there to maintain stability? I've seen things like Harbor and Trow, but it seems like a heavy-handed solution to a simple problem.
I'd say Harbor, Quay, or Trow is the way to go if you want something more redundant.
Kubernetes has the ability to set imagePullPolicy, and you can set it to Never, for example, if you'd like to pre-pull all your critical images on all the K8s nodes. You can tie this to some automation to pre-pull your images across your clusters and nodes.
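A minimal sketch of that automation (the node names, registry, and image are hypothetical; it assumes SSH access to the nodes and Docker as the container runtime):
for node in node1 node2 node3; do
  ssh "$node" docker pull registry.example.com/myapp:1.2.3
done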
I've actually opened a K8s feature request to see 👀 if this idea gains traction.
Update:
If you're using containerd or CRI-O (and even Docker has registry mirrors), you have the ability to configure mirror registries:
containerd.toml example
...
[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.registry.mirrors."local.insecure-registry.io"]
      endpoint = ["http://localhost:32000"]
    [plugins.cri.registry.mirrors."gcr.io"]
      endpoint = ["https://gcr.io"]
  [plugins.cri.registry.configs]
    [plugins.cri.registry.configs.auths]
      [plugins.cri.registry.configs.auths."https://gcr.io"]
        auth = "xxxxx...."
...
cri-o.conf example
...
# registries is used to specify a comma separated list of registries to be used
# when pulling an unqualified image (e.g. fedora:rawhide).
registries = [
  "registry.example.xyz",
  "registry.fedoraproject.org"
]
...
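And for plain Docker, the equivalent is the registry-mirrors setting in the daemon config; a sketch (the mirror URL is hypothetical, and you should merge this into any existing daemon.json rather than overwriting it):
# merge with any existing daemon.json settings first
cat >/etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://mirror.example.com"] }
EOF
systemctl restart docker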
✌️

Hyperledger Composer REST: old models in endpoints

I deleted all the containers and started the network and the REST server from scratch, but the endpoints do not match what is written in the model (a very old version of the network is displayed).
This is definitely not a browser cache problem. I do not understand what's wrong.
Where does the REST server get these old models from? What do I need to update or delete?
Solved. I just needed to delete the old images with docker rmi $(docker images -aq) and start the network again.

Undeploying Business Network

Using Hyperledger Composer 0.19.1, I can't find a way to undeploy my business network. I don't necessarily want to upgrade to a newer version each time, but rather replace the deployed one with a fix in the JS code, for instance. Is there any replacement for the undeploy command that existed before?
There is no replacement for the old undeploy command, and in fact it was not really an undeploy - merely hiding the old network.
Be aware that every time you upgrade a network it creates a new Docker image and container, so you may want to tidy these up periodically. (You could also delete the BNA from the peer servers, but these are very small in comparison to the Docker images.)
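A rough tidy-up sketch (this assumes the usual Fabric chaincode naming, where the generated containers and images are prefixed with dev-; verify with docker ps -a and docker images before deleting anything):
# remove chaincode containers left over from old versions
docker rm -f $(docker ps -aq --filter "name=dev-")
# remove the matching chaincode images
docker rmi $(docker images -q --filter "reference=dev-*")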
It might not help your situation, but if you are rapidly developing and iterating you could try this in the online Playground or local Playground with the Web profile - this is fast and does not create any new images/containers.

Where can I create images from? Locally at all? (Docker remote API)

This may or may not be an easy question, but where can I pull images from to create a new Docker image via the API?
Documentation
My (unsuccessful) attempts have been trying to build an image from something local. Using docker images to get a list of images, then trying to use their Image ID or Repository, has not worked for me while using the fromImage query param, like so:
curl --data '' 'host:port/images/create?fromImage=test/hello-world&tag=webTesting'
I consistently get the following error:
{"errorDetail":{"message":"Error: image test/hello-world not found"},"error":"Error: image test/hello-world not found"}
In running docker images, we can very clearly see the following:
REPOSITORY          TAG      IMAGE ID       CREATED      VIRTUAL SIZE
test/hello-world    latest   6d9bd5e6da4e   2 days ago   556.9 MB
In all combinations of using the repository/tag/ID, the error still displays. I understand that we can create images from URLs with fromSrc, and there are alternative create-image routes that upload .tar files, but is it possible in this case to create an image from one that already exists locally? I've had success creating images from ubuntu or centos, but I'm basically looking to replicate something local with new tags/repository.
I do see in the documentation that fromImage parameter may only be used when pulling an image -- does this mean we can only import images that are hosted on Dockerhub?
As you noted, the Docker remote API documentation clearly states that a pull operation must be triggered for this image reference.
This does not require that you use DockerHub, but it means that the image must be located on a registry and not simply in your daemon's local cache. If you were running a Docker registry instance on your host (easily done with the public registry image on DockerHub) on port 5000, then you could use "localhost:5000/test/hello-world" as the fromImage, and it would pull from your locally hosted registry (after you push it there, of course).
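A quick sketch of that round trip (it assumes the registry:2 image from DockerHub and that your daemon's remote API is reachable at host:port):
# run a local registry and publish the local image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag test/hello-world localhost:5000/test/hello-world
docker push localhost:5000/test/hello-world
# now the create endpoint can pull it by reference
curl --data '' 'host:port/images/create?fromImage=localhost:5000/test/hello-world&tag=webTesting'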
