I deleted all the containers and started the network and the REST server from scratch, but the endpoints do not match what is defined in the model (a very old version of the network is displayed).
This is definitely not a browser cache problem. I do not understand what's wrong.
Where does the REST server get these old models from? What do I need to update or delete?
Solved. I just needed to delete the old images with docker rmi $(docker images -aq) and start the network again.
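For reference, the full reset sequence as a sketch; it assumes every container and image on the host belongs to the Composer/Fabric setup and can safely be discarded:

    # Remove all containers, then all images, so the REST server cannot be
    # recreated from a stale image that has the old model baked in.
    docker rm -f $(docker ps -aq)
    docker rmi $(docker images -aq)
    # Afterwards, restart the Fabric network and redeploy the business network.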
I won't be able to provide the Dockerfile, so I'll try to provide as much context as I can. I keep randomly running into issues with Azure and Windows-based Docker containers. The app will run fine for weeks with no issues and then suddenly bug out (using the same exact image) and go into an endless cycle of "Waiting for container to be start." followed by "Container failed to reach the container's http endpoint".
I have been able to resolve the issue in the past by re-creating the service (again using the same exact image), but it seems this time that's not working.
Various Tests:
The same exact docker image runs locally no problem
As I mentioned, re-creating the service before did the trick (using the same exact image)
Below are the exact steps I have in place:
Build a Windows-based image using a Docker Compose file; in the compose file I map ports 2000:2000 (a sketch follows this list)
Push the Docker image to a private repository on Docker Hub
Create a web app service in Azure using the image
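A minimal sketch of those steps; the service name, image name, and Docker Hub account are hypothetical, since the actual files can't be shared:

    # docker-compose.yml (sketch; Windows-based image)
    version: "3.4"
    services:
      web:
        image: mydockerhubuser/myapp:latest    # private Docker Hub repo
        build: .
        ports:
          - "2000:2000"                        # host:container

Then the build and push:

    docker-compose build
    docker push mydockerhubuser/myapp:latest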
Any thoughts on why this happens at random? My only remaining idea is to re-create the Docker image as a Linux-based image.
Does your app need to access more than a single port? Please note that as of right now, we only allow a single port. More information on that here.
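If the container listens on port 2000, it can also help to tell App Service explicitly which port to probe. A sketch using the Azure CLI; WEBSITES_PORT is the app setting used for custom containers, and the resource names here are hypothetical:

    az webapp config appsettings set \
      --resource-group myResourceGroup \
      --name myWebApp \
      --settings WEBSITES_PORT=2000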
Lastly, please see if turning off the container check, mentioned here, helps to resolve the matter.
Let us know the outcome of these two steps and we can assist you further if needed.
I recently deployed a Spring Boot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything (used and unused containers, images, tags, etc.) and pushing everything again: docker system prune, docker rm, docker rmi, even using a different account. It still runs the old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by an old container? Did you make changes in version control that were never applied to the container? Or have you simply tried docker restart <container_id>?
There's a lot to unpack here, so if you can provide more information, that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these are just based on the Docker image you provide them. So if your image isn't correctly updated, they won't update.
But the same situation will occur with docker run too: if you keep pointing to the same image (or the image is effectively unchanged), then you won't see a change.
So I doubt the problem is within Docker or AWS.
Most Likely
You're not rebuilding the Spring application binaries with each change
Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name points to the correct image:tag combination. If you update the image, it should be given a unique tag, and that tagged image should then be referenced by Docker/AWS (see the sketch below).
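A sketch of that flow; the registry, image name, and tag are hypothetical:

    # Rebuild the Spring Boot binary first, then bake it into a freshly tagged image
    ./mvnw clean package
    docker build -t myregistry/myapp:1.0.7 .    # unique tag per release
    docker push myregistry/myapp:1.0.7
    # Finally, update the ECS/EKS task (or docker run command) to reference :1.0.7, not :latest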
After you build an image, you can verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less
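A concrete end-to-end example of that check (the jar path inside the image is hypothetical):

    docker create --name verify myregistry/myapp:1.0.7   # docker export needs a container, not an image
    docker export verify | tar -xf - app/app.jar         # extract just the jar from the filesystem
    docker rm verify
    java -jar app/app.jar                                # run the extracted jar locally to confirm it is the new build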
I created a Compute Engine VM instance in Google Cloud Platform. Then I installed Go using the standard procedure, downloading it from https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz. Everything worked properly and I was able to run Go applications. However, after shutting the instance down and reopening it, it says Go is not installed. The message is the following.
-bash: go: command not found
How can I save the instance setup?
Creating, Deleting, and Deprecating Custom Images
You can create custom images of boot disks and use these images to create new instances. This is ideal for situations where you have created and modified a persistent boot disk to a certain state and need to save that state to create new instances.
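A sketch with the gcloud CLI; the disk, zone, and image names are hypothetical, the boot disk usually shares the instance's name, and the instance should be stopped before imaging:

    # Save the configured boot disk as a reusable custom image
    gcloud compute images create go-dev-image \
        --source-disk my-instance \
        --source-disk-zone us-central1-a

    # Create new instances from that image, with Go already installed
    gcloud compute instances create my-new-instance \
        --image go-dev-image \
        --zone us-central1-a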
I think you should also consider using Docker containers.
Pushing and pulling images
Container-optimized VM images
Using Hyperledger Composer 0.19.1, I can't find a way to undeploy my business network. I don't necessarily want to upgrade to a newer version each time, but rather to replace the deployed one, for instance with a fix in the JS code. Is there any replacement for the undeploy command that existed before?
There is no replacement for the old undeploy command, and in fact it was not really an undeploy anyway, merely a way of hiding the old network.
Be aware that every time you upgrade a network it creates a new Docker image and container, so you may want to tidy these up periodically; a cleanup sketch follows below. (You could also delete the BNA from the peer servers, but these are very small in comparison to the Docker images.)
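A hedged cleanup sketch; it assumes the chaincode containers and images follow the usual dev-<peer>-<network>-<version> naming, so check the filter against your environment before running it:

    docker rm -f $(docker ps -aq --filter "name=dev-")   # remove chaincode containers
    docker rmi $(docker images -q "dev-*")               # remove the per-upgrade chaincode images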
It might not help your situation, but if you are rapidly developing and iterating, you could try this in the online Playground, or in a local Playground with the Web profile; this is fast and does not create any new images/containers.
I have the following
Docker registry: http://myPrivateRegistry:5000
Repository: myRepo
Image: myImage
I pushed this image to the remote repo with the following:
docker push myPrivateRegistry:5000/myRepo/myImage
How do I delete this image from the 'remote repo', not just locally?
docker rmi myPrivateRegistry:5000/myRepo/myImage untags the image but does not remove it from the remote repo.
After some time googling, I found that you can use a curl command to delete images, e.g.:
curl -X DELETE registry-url/v1/repositories/repository-name/
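Note that the URL above uses the old v1 API. For a registry:2 server, a sketch of the equivalent flow is below; it assumes the registry was started with deletes enabled (REGISTRY_STORAGE_DELETE_ENABLED=true), and you delete by manifest digest rather than by tag:

    # 1. Look up the manifest digest for the tag
    curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
      http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/latest \
      | grep -i Docker-Content-Digest

    # 2. Delete the manifest by its digest (this frees the tag; run the
    #    registry's garbage collector afterwards to reclaim blob storage)
    curl -X DELETE \
      http://myPrivateRegistry:5000/v2/myRepo/myImage/manifests/sha256:<digest>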
As far as I can see, this is still being debated in issue 422
While deletes are part of the API, they cannot be safely implemented on top of an eventually consistent backend (read: s3).
The main blocker comes from the complexity of reference counting on top of an eventually consistent storage system.
We need to consider whether it is worth facing that complexity or to adopt a hybrid storage model, where references are stored consistently.
As long as the registry supports varied backends, using an eventually consistent VFS model, safe deletions are not really possible without more infrastructure.
Issue 210 does mention:
Soft deletes have been implemented as part of the API, and more specialized issues have been opened for garbage collection.
https://github.com/docker/distribution/issues/422#issuecomment-114963170