Where can I create images from? Locally at all? (Docker remote API)

May/may not be an easy question here, but where can I pull images from to create a new docker image via the API?
My (unsuccessful) attempts have been to build an image from something local. Using docker images to list local images and then passing their Image ID or Repository in the fromImage query param has not worked for me, like so:
curl --data '' 'host:port/images/create?fromImage=test/hello-world&tag=webTesting'
I consistently get the following error:
{"errorDetail":{"message":"Error: image test/hello-world not found"},"error":"Error: image test/hello-world not found"}
In running docker images, we can very clearly see the following:
REPOSITORY          TAG      IMAGE ID       CREATED      VIRTUAL SIZE
test/hello-world    latest   6d9bd5e6da4e   2 days ago   556.9 MB
In all combinations of using the repository/tag/ID, the error still displays. I understand that we can create images from URLs with fromSrc, and that there are alternative create-image routes that upload .tar files, but is it possible in this case to create an image from one that already exists locally? I've had success creating images from ubuntu or centos, but I'm basically looking to replicate something local under a new repository/tag.
I do see in the documentation that the fromImage parameter may only be used when pulling an image -- does this mean we can only import images that are hosted on Docker Hub?

As you noted, the Docker remote API documentation clearly states that a pull operation must be triggered for this image reference.
This does not require Docker Hub, but it does mean the image must live in a registry and not simply in your daemon's local cache. If you run a Docker registry instance on your host (easily done with the public registry image from Docker Hub) on port 5000, then you can use "localhost:5000/test/hello-world" as the fromImage, and the daemon will pull it from your locally hosted registry (after you push it there, of course).
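The suggestion above can be sketched end to end as follows. This is a hedged sketch: host:port stands for wherever your remote API is listening, and the registry:2 image name assumes the current official registry image.

```shell
# Run a local registry on port 5000 using the official registry image.
docker run -d -p 5000:5000 --name local-registry registry:2

# Re-tag the existing local image so its name points at that registry,
# then push it there.
REGISTRY=localhost:5000
IMAGE=$REGISTRY/test/hello-world
docker tag test/hello-world "$IMAGE"
docker push "$IMAGE"

# Now the remote API's pull can find it. Note the quoted URL so the
# shell does not treat '&' as a background operator.
curl -X POST "http://host:port/images/create?fromImage=$IMAGE&tag=webTesting"
```

Quoting the URL matters independently of the registry question: unquoted, the shell backgrounds the curl command at the '&' and the tag parameter never reaches the daemon.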

Related

How can I push docker compose based images for Gitlab Registry Using CI/CD

I've linked a Postgres DB image and an API image using docker-compose; it works locally. Now I need to push both images to the GitLab registry using CI/CD.
The GitLab docs cover how to build and push Docker images during CI quite well.
It is important to note/use the CI_REGISTRY_IMAGE variable. If the image name you try to push differs from CI_REGISTRY_IMAGE, the push will fail. If you want multiple images, you either need to distinguish them in the tag ($CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG) or create a separate repo and manage the images there.
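The two-images-in-one-repo approach can be sketched as a CI script like the one below. The first three variables are normally predefined by GitLab CI; the values here are illustrative placeholders so the snippet is self-contained, and the ./db and ./api build contexts are assumptions about the project layout.

```shell
# Placeholders standing in for GitLab's predefined CI variables.
CI_REGISTRY=registry.gitlab.com
CI_REGISTRY_IMAGE=registry.gitlab.com/group/project
CI_COMMIT_REF_SLUG=main

# One repository, two images distinguished by a tag prefix.
DB_IMAGE=$CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG
API_IMAGE=$CI_REGISTRY_IMAGE:api-$CI_COMMIT_REF_SLUG

docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$DB_IMAGE" ./db
docker build -t "$API_IMAGE" ./api
docker push "$DB_IMAGE"
docker push "$API_IMAGE"
```

Both tags stay under CI_REGISTRY_IMAGE, so the pushes are accepted by the project's registry.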

Windows Docker Image keeps hanging up randomly on Azure (web-app)

I won't be able to provide the docker file so I'll try to provide as much context as I can to the issue. I keep running into issues with Azure and Windows based Docker containers randomly. The app will run fine for weeks with no issues and then suddenly bug out (using the same exact image) and go into an endless cycle of "Waiting for container to be start." followed by "Container failed to reach the container's http endpoint".
I have been able to resolve the issue in the past by re-creating the service (again using the same exact image), but it seems this time it's not working.
Various Tests:
The same exact docker image runs locally no problem
As I mentioned, re-creating the service before did the trick (using the same exact image)
Below are the exact steps I have in place:
Build a Windows-based image using a Docker Compose file. I specify in the compose file to map ports 2000:2000.
Push the Docker Image to a private repository on docker hub
Create a web app service in Azure using the image
Any thoughts on why this randomly happens? My only next idea is to re-create the docker image as a Linux based image.
Does your app need to access more than a single port? Please note that as of right now, we only allow a single port. More information on that here.
Lastly, please see if turning off the container check, mentioned here, helps to resolve the matter.
Let us know the outcome of these two steps and we can assist you further if needed.
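Both suggestions map to App Service app settings, which can be applied with the Azure CLI. A sketch, assuming placeholder resource names; WEBSITES_PORT and CONTAINER_AVAILABILITY_CHECK_MODE are the documented settings for the single-port routing and the container availability check respectively:

```shell
# Placeholder resource names; adjust to your subscription.
RG=my-resource-group
APP=my-windows-webapp

# Point App Service at the single port the container listens on,
# and disable the availability check mentioned above.
az webapp config appsettings set \
  --resource-group "$RG" --name "$APP" \
  --settings WEBSITES_PORT=2000 CONTAINER_AVAILABILITY_CHECK_MODE=Off
```

After changing app settings the web app restarts, so test during a maintenance window.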

Docker runs old container instead of new one that I'm pulling (AWS)

I recently deployed a Springboot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything, used and unused containers, images, tags, etc. and pushing everything again. Docker system prune, docker rm, docker rmi, using a different account too... It still runs that old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by "old container"? Are there changes from version control that were never rebuilt into the container? Or have you tried docker restart <container_id>?
There's a lot to unpack here, so if you can provide more information - that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these are just based on the docker image you provide them. So if your image isn't correctly updated, they won't update.
But the same situation will occur with docker run also. If you keep pointing to the same image (or the image is conceptually unchanged) then you won't see a change.
So I doubt the problem is within Docker or AWS.
Most Likely
You're not rebuilding the spring application binaries with each change
Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name is pointing to the correct image:tag combination. If you update the image, it should be given a unique tag and then that tagged image referenced by Docker/AWS
After you build an image, create a container from it (docker create <image>) and verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less
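Putting the unique-tag advice into commands, a sketch of one deploy cycle; the repository name myaccount/spring-app, the container name app, and port 8080 are all placeholders:

```shell
# Tag each build with the current git commit (falling back to 'dev'
# outside a repo) so the server can never silently reuse ':latest'.
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
IMAGE=myaccount/spring-app:$TAG   # placeholder repository name

docker build --no-cache -t "$IMAGE" .
docker push "$IMAGE"

# On the AWS host, pull that exact tag and replace the container:
docker pull "$IMAGE"
docker stop app 2>/dev/null; docker rm app 2>/dev/null
docker run -d --name app -p 8080:8080 "$IMAGE"
```

Because every build gets a distinct tag, "it still runs the old version" becomes immediately diagnosable: either the new tag was never pushed, or the server is still running a container from the old tag.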

Can't Query Pushed Images on Remote Registry

First, I push an image to a remote, private repository (not registry.hub.docker.com)
I can pull that image back
I can search for it and see it in a list of pushed images
But
When I log into the remote server, I cannot see those pushed images in the regular CLI (i.e., sudo docker images)
If I'm logged into the remote server, how do I query those pushed images?
I do it in two ways.
First is the registry API: https://docs.docker.com/reference/api/registry_api/
(Sample call to list images: https://yourregistry.com/v1/search)
Second - if you have control of the private registry - you can install a web UI that makes things even easier: https://registry.hub.docker.com/u/atcol/docker-registry-ui/
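The v1 search endpoint above can be queried directly with curl from the server. The hostname is the same placeholder used in the answer; the last two calls are an assumption that only applies if your registry speaks the newer v2 protocol, which exposes a catalog and per-image tag lists instead of search:

```shell
REGISTRY=https://yourregistry.com   # placeholder from the answer above

# v1 search (older registries):
SEARCH_URL="$REGISTRY/v1/search?q=hello-world"
curl -s "$SEARCH_URL"

# v2 registries expose a catalog and tag lists instead:
curl -s "$REGISTRY/v2/_catalog"
curl -s "$REGISTRY/v2/test/hello-world/tags/list"
```

Either way, the point stands: pushed images live in the registry's own storage, not in the server's docker images cache, so they must be queried through the registry API rather than the local CLI.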

Blobstore Images Disappearing on Google App Engine Development Server

I'm using App Engine's high performance image serving on my site, and I'm able to get everything working properly on both my local machine and in production i.e. I can upload an image and successfully display the images using get_serving_url on the blob key. However, these images don't seem to persist on my development server, i.e. after I come back from a computer restart, the images no longer show up. The development server spits out:
images_service_pb.ImagesServiceError.BAD_IMAGE_DATA
which I'm guessing is actually because the underlying blobs are no longer there (although this is just a hunch). The rest of my datastore is still intact though, as I'm using the launch setting "--datastore_path" to ensure my data persists. Is there a separate flag I need to be using to persist the blobs as well? Or is there a separate problem here that I'm missing?
You must use --blobstore_path=DIR:
--blobstore_path=DIR    Path to directory to use for storing Blobstore file stub data.
You can see all options by typing dev_appserver.py --help on the command line.
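Combining this with the --datastore_path flag you already use, one launch command persists both stores across restarts; the directory paths below are illustrative:

```shell
# Keep datastore and blobstore stub data outside the default temp
# location so both survive a computer restart.
DATA_DIR=$HOME/myapp-devserver-data
mkdir -p "$DATA_DIR"

dev_appserver.py \
  --datastore_path="$DATA_DIR/datastore" \
  --blobstore_path="$DATA_DIR/blobstore" \
  ./app.yaml
```

The BAD_IMAGE_DATA error then goes away because get_serving_url can find the underlying blobs again after a restart.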
