I have an application running in a Docker container. Is it possible to deploy this container to Cloud Foundry without making any changes to the application or the container itself?
To answer your specific question about whether you will need to make changes to your Docker image or not, here's the relevant info.
Currently there is no support for mounting volumes or linking containers, but projects to support these use cases are actively in flight, so if your docker run workflow normally involves those features, you will have to wait.
There is only support for v2 Docker registries, so if your image repository is in a Docker registry with an older API, it won't work.
There is no support for private repositories (that is, repositories that require a username and password to access the image in the registry). You can, however, provide your own custom registry and make it only accessible to your CF backend, and then push your image as a public repo to that custom registry.
(Info filtered from official CF docs site and Diego design notes)
As described in Cloud Foundry's documentation, you should first enable the diego_docker feature flag with the following command:
cf enable-feature-flag diego_docker
Then use cf push to push your Docker image. Versions 6.13.0 and later of the CF CLI include native support for pushing a Docker image as a CF app via the -o or --docker-image flag. For example, running:
cf push lattice-app -o cloudfoundry/lattice-app
will push the image located at cloudfoundry/lattice-app. You can also read here for more information about Docker Support in CF + Diego.
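If your image lives in a custom registry (the workaround for the private-repository limitation mentioned above), the same flag accepts a full registry path. A rough sketch, with a hypothetical registry hostname, image name, and app name:

docker tag my-app my-registry.example.com/my-app:latest
docker push my-registry.example.com/my-app:latest
cf push my-cf-app -o my-registry.example.com/my-app:latest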
Related
I've linked Postgres DB and API images using docker-compose, and it's working locally. Now I have to push both images to the GitLab registry using CI/CD.
The GitLab docs cover how to build and push Docker images during CI quite well.
It is important to note/use the CI_REGISTRY_IMAGE variable. If the image name you try to push differs from CI_REGISTRY_IMAGE, the push will fail. If you want to have multiple images, you either need to distinguish them in the tag (e.g. $CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG) or create a separate repo and manage the images there.
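For example, the script section of a build job could tag and push both images along these lines (the Dockerfile names are assumptions; the CI_* variables are provided by GitLab):

docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG" -f Dockerfile.db .
docker build -t "$CI_REGISTRY_IMAGE:api-$CI_COMMIT_REF_SLUG" -f Dockerfile.api .
docker push "$CI_REGISTRY_IMAGE:db-$CI_COMMIT_REF_SLUG"
docker push "$CI_REGISTRY_IMAGE:api-$CI_COMMIT_REF_SLUG"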
I have SCDF running in OpenShift. The batch application I want to register in SCDF is a Docker image that uses the latest tag. The image also has a webhook configured with the corresponding Git repo, so the Docker image is always up to date.
But once I register the application, subsequent changes to my application are not picked up by SCDF, even though the Docker image is rebuilt (via the webhook) whenever code is committed. How do I configure SCDF to pick up the latest or newly pushed version? Right now the only option is to register a new application for the changes to take effect.
I tried using the FORCE option on the app registration page, but it seems to work only if the app is not already in use.
Is there any configuration I could add to deployment.yaml to get the latest version? Thanks.
Because of this I can't restart a failed job with a fixed version of the code, as the restarted job always points to the older version.
You need to set the image pull policy for the task as part of the deployer property when launching the task.
For more info, you can refer to the documentation here
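For example, when launching the task from the SCDF shell on a Kubernetes-based platform such as OpenShift, the pull policy can be passed as a deployment property along these lines (the task name is a placeholder, and the exact property prefix depends on your deployer/platform setup):

task launch my-batch-task --properties "deployer.my-batch-task.kubernetes.imagePullPolicy=Always"

With the pull policy set to Always, the platform re-pulls the image on each launch instead of reusing a cached one.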
I recently deployed a Spring Boot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything (used and unused containers, images, tags, etc.) and pushing everything again: docker system prune, docker rm, docker rmi, even using a different account... It still runs that old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by old container? Are there changes from version control that didn't make it into the container? Or have you just tried docker restart <container id>?
There's a lot to unpack here, so if you can provide more information, that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these are just based on the Docker image you provide them. So if your image isn't correctly updated, they won't update.
But the same situation occurs with plain docker run: if you keep pointing to the same image (or the image content hasn't actually changed), nothing will change.
So I doubt the problem is within Docker or AWS.
Most likely:
You're not rebuilding the Spring application binaries with each change
Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name is pointing to the correct image:tag combination. If you update the image, it should be given a unique tag and then that tagged image referenced by Docker/AWS
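A sketch of that flow (the registry path, version tag, and Maven wrapper build command are placeholders for your own setup):

./mvnw clean package
docker build -t my-registry.example.com/my-app:1.0.1 .
docker push my-registry.example.com/my-app:1.0.1

Then point your docker run command or ECS/EKS definition at my-app:1.0.1 rather than a reused :latest tag.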
After you build an image, you can create a container from it (docker create <image>) and verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less
After my app is successfully pushed via cf push, I usually need to SSH into the container manually and execute a couple of PHP scripts to clear and warm up my cache, and potentially run some DB schema updates, etc.
Today I found out about Cloud Foundry tasks, which seem to offer a neat way to do exactly this kind of thing, and I wanted to test whether I can integrate them into my build & deploy script.
So I used cf login, connected successfully to the right org and space, the app was pushed and is running, and I tried this command:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
(I tried it with a couple of path variations like app/bin/console, etc.)
and this was the output:
Creating task for app MYAPP in org MYORG / space MYSPACE as me#myemail...
Unexpected Response
Response Code: 404
FAILED
Using CF CLI: 6.32.0
cf logs ArcticTenTestBackend --recent does not output anything (this might be because I have enabled an ELK instance for logging; when I wanted to service-connect to ELK to look up the logs, I found that the service-connector cf plugin is gone, for which I will open a new ticket).
Created new Issue for that: https://github.com/cloudfoundry/cli/issues/1242
This is not a CF CLI issue. Swisscom Application Cloud does not yet support Cloud Foundry tasks, which explains the 404 you are currently receiving. We will expose this feature of Cloud Foundry in an upcoming release of Swisscom Application Cloud.
In the meantime, maybe you can find a way to execute your one-off tasks (cache warming, DB migrations) at application startup.
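One interim approach (a sketch, not an official recommendation): Cloud Foundry runs a .profile script from the application root before the app starts, so the one-off commands from the question could go there, for example:

# .profile in the app root, executed before the application starts
bin/console cache:clear --env=prod
bin/console cache:warmup --env=prod
# further one-off commands (e.g. schema updates) as needed

Note that this runs on every instance start, so the commands should be safe to repeat.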
As mentioned by @Mathis Kretz, Swisscom has gotten around to enabling cf run-task since this question was posted. They sent out e-mails on 22 November 2018 to announce the feature.
As described in the linked documentation, you use the following commands to manage tasks:
cf tasks [APP_NAME]
cf run-task [APP_NAME] [COMMAND]
cf terminate-task [APP_NAME] [TASK_ID]
To run docker build for my Java application I need sensitive information (a password to access the Nexus Maven repository).
What is the best way to make it available to the docker build process?
I thought about adding my ~/.m2/settings.xml to the container, but it lies outside of the current directory/build context, and ADD is not able to access it.
UPDATE: in my current setup I need the credentials to run the build and create the image, not when running the container later based on the created image
You probably want to look into mounting a volume from the host into the container.
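A sketch of that idea: run the Maven build in a container with the host's ~/.m2 (including settings.xml) mounted, then build the image from the resulting artifact, so the credentials are available at build time but never end up in the image. The Maven image tag and paths are illustrative:

docker run --rm -v "$PWD":/usr/src/app -v "$HOME/.m2":/root/.m2 -w /usr/src/app maven:3-jdk-8 mvn package
docker build -t my-app .

The Dockerfile then only needs to COPY the built artifact from target/, so the build context never has to contain the settings.xml.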