How to clear obsolete docker images created via Jenkins CI/CD? - spring-boot

I have created a CI/CD Jenkins pipeline which does the following tasks:
Run a Gradle build. The Gradle build does the next set of tasks:
Build several Spring Boot microservices
Create a Docker image for each of the microservices
Push the images to a private Docker registry
Execute Helm templates to create/refresh the k8s deployments in a k8s cluster.
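For context, the stages above might be scripted roughly as follows; the registry host, service name, and chart path are assumptions for illustration, not from the question:

```shell
#!/bin/sh
# Hypothetical sketch of the pipeline stages described above; the registry
# host, service name, and chart path are assumptions.
set -e
REGISTRY="registry.example.com"
SERVICE="${1:-order-service}"
TAG="${BUILD_NUMBER:-dev}"
IMAGE="$REGISTRY/$SERVICE:$TAG"

build_and_deploy() {
    ./gradlew build                           # Gradle build of the service
    docker build -t "$IMAGE" "$SERVICE"/      # one image per microservice
    docker push "$IMAGE"                      # push to the private registry
    helm upgrade --install "$SERVICE" "charts/$SERVICE" \
        --set image.tag="$TAG"                # refresh the k8s deployment
}
```

Every push produces a new `$IMAGE` tag in the registry, which is exactly why the images accumulate.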
While the entire process works well, I am looking for a solution for a couple of scenarios.
Since it is CI/CD, the build is triggered for every code push, so Docker images keep accumulating in the private registry and will eventually eat up all the disk space. How do I conditionally clear old Docker images?
If a script is developed to use the Docker REST APIs to clear the images, how do I conditionally skip deleting certain images (e.g. images related to tagged Jenkins builds)?
Are there any recommendations or standards for this task?
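For the registry-API route in particular, a sketch might look like this. The registry URL, repository name, and the release-* keep pattern are assumptions, and the curl calls target the Docker Registry HTTP API v2:

```shell
#!/bin/sh
# Sketch of conditional cleanup against the Docker Registry HTTP API v2.
# REG, REPO, and the keep pattern are assumptions for illustration.
REG="https://registry.example.com"
REPO="myteam/order-service"

# Decide whether a tag may be deleted; tags matching the keep pattern
# (e.g. tagged release builds) are skipped.
deletable() {
    tag="$1"; keep_pattern="$2"
    case "$tag" in
        $keep_pattern) return 1 ;;  # protected, skip
        *)             return 0 ;;  # safe to delete
    esac
}

delete_tag() {
    # Resolve the tag to its content digest, then delete the manifest.
    digest=$(curl -fsSI \
        -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        "$REG/v2/$REPO/manifests/$1" \
        | awk 'tolower($1)=="docker-content-digest:" {print $2}' | tr -d '\r')
    curl -fsS -X DELETE "$REG/v2/$REPO/manifests/$digest"
}
```

Note that the registry must be running with deletion enabled for the DELETE call to be accepted, and the registry's garbage collector has to run afterwards to actually reclaim disk space.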

One way of addressing this issue is to have a separate Jenkins job dedicated to deleting older images.
That job would probably trigger on some appropriate schedule, say every night, once a week and so on, depending on how quickly you're worried you'll run out of space.
As to how you would delete the images, take a look at the docker image prune command with the --filter option, as explained in this answer. That will allow you to delete only images older than, for example, 7 days.
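As a concrete sketch of that prune-with-filter approach (the helper name and the day-based window are assumptions; `--filter until=` takes a duration such as `168h`):

```shell
#!/bin/sh
# Build the prune command for images older than N days; with DRY_RUN=1 it
# prints the command instead of running it, so it can be inspected safely.
prune_older_than() {
    days="$1"
    hours=$((days * 24))                 # --filter until= takes e.g. 168h
    set -- docker image prune -a --force --filter "until=${hours}h"
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}
```

For example, `DRY_RUN=1 prune_older_than 7` prints the command that would remove all unused images older than a week.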
Hope that helps!

I think the steps below should be the way forward.
Find all the containers:
docker ps -a -f "your condition"
Then stop and remove each container you found with the commands below:
docker stop "container name"
docker rm "container name"
Find all dangling images:
docker images -f "dangling=true"
Remove those images:
docker rmi "image name"
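The steps above can be sketched as one script; the `DRY_RUN=1` guard (an addition for illustration) echoes each docker command instead of executing it:

```shell
#!/bin/sh
# Stop/remove a list of containers, then delete dangling images.
# DRY_RUN=1 prints each docker command instead of running it.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

cleanup() {
    # container IDs would normally come from:
    #   docker ps -aq --filter "your condition"
    for id in "$@"; do
        run docker stop "$id"
        run docker rm "$id"
    done
    # remove all dangling (untagged) images in one step instead of
    # running docker rmi per image
    run docker image prune -f
}
```

Something like `cleanup $(docker ps -aq --filter "status=exited")` would then apply it to all exited containers.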

Related

CI-CD binary dependency not in image GitLab

My team uses Taskfiles to manage tasks in their code (like building/publishing containers to ECS). Essentially, to make it easy to set up a local environment, all the steps needed are in a Taskfile. Most of the steps used in the CI/CD are just rewritten Taskfile tasks. This can be difficult to maintain as it is essentially code duplicated in the same place. I prefer not to use a shell runner and to use a Docker image for builds.
Is there any way I can use Taskfiles in any container?

Is it possible to copy objects from a runner on one GitLab CI server to another runner on another GitLab CI server?

I am trying to copy container images from a runner on one server to a runner on another server. Is that possible? I am new to CI/CD and do not know where to begin.
If you want to build an image on one runner and then transfer it to another, you cannot do that directly.
You have to upload it to a shared registry.
You might want to explain what you want to achieve in a little more depth: are you looking to share the runner image itself, or an image that lives inside the job?
For the latter there might be a workaround: if you save the image to a file with docker save, you can then use the GitLab CI artifacts: statement to keep the file for later use in another job. But this is probably not what you want.
(Using artifacts is generally the way to preserve files from one job for another one running later.)
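A minimal sketch of that docker save / artifacts workaround, assuming a hypothetical image name and tar path; `DRY_RUN=1` prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of moving an image between jobs via a saved tar file; the image
# name and tar path are assumptions. DRY_RUN=1 prints the commands.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

export_image() { run docker save -o "$2" "$1"; }  # job 1, before artifacts
import_image() { run docker load -i "$1"; }       # job 2, after artifacts

# Job 1's .gitlab-ci.yml entry would then keep the tar file via:
#   artifacts:
#     paths:
#       - image.tar
```

Be aware that saved images can be large, so registry push/pull is usually the better fit for this.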

Multiple Docker files for dev / production

I have a Laravel application with a Dockerfile and a docker-compose.yml which I use for local development. It currently does some volume sharing so that code updates are reflected immediately. The docker-compose file also spins up containers for MySQL, Redis, etc.
However, in preparation for deploying my container to production (ECS), I wonder how best to structure my Dockerfile.
Essentially, on production, there are several other steps I would need to do that would not be done locally in the Dockerfile:
install dependencies
modify permissions
download a production env file
My first solution was to have a build script which essentially takes the codebase, copies it to an empty sub-folder, runs the above three commands in that folder, and then runs docker build. This way, the Dockerfile doesn't need to change between dev and production and I can include the extra steps before the build process.
However, the drawback is that the first three commands don't get included in the Docker image layering. So even if my dependencies haven't changed in the last 100 builds, they'll still all be downloaded from scratch each time, which is fairly time-consuming.
Another option would be to have multiple Dockerfiles, but that doesn't seem very DRY.
Is there a preferred or standardized approach for handling this sort of situation?
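For reference, the build-script approach described in the question might look roughly like this; the paths, the composer/chmod commands, and the image tag are assumptions, and `DRY_RUN=1` skips the steps that need composer and docker:

```shell
#!/bin/sh
# Sketch of the build-script approach described above; paths and the
# image tag are assumptions. DRY_RUN=1 skips composer/docker steps.
build_image() {
    src="$1"; ctx="$2"
    rm -rf "$ctx" && mkdir -p "$ctx"
    cp -R "$src"/. "$ctx"/                        # copy codebase to empty folder
    if [ "${DRY_RUN:-0}" != "1" ]; then
        (cd "$ctx" && composer install --no-dev)  # install dependencies
        chmod -R ug+rwX "$ctx"/storage            # modify permissions
        # download a production env file (URL left as a placeholder)
        # curl -fsSL "$ENV_URL" -o "$ctx"/.env
        docker build -t myapp:prod "$ctx"
    fi
}
```

As the question notes, the drawback remains: the dependency install happens outside the Dockerfile, so it is not cached as an image layer.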

Using MongoDB (in a container?) in Visual Studio Team Services pipelines

I have a node.js server that communicates with a MongoDB database. As part of the continuous-integration process I'd like to spin up a MongoDB database and run my tests against the server + DB.
With bitbucket pipelines I can spin up a container that has both node.js and MongoDB. I then run my tests against this setup.
What would be the best way to achieve this with Visual Studio Team Services? Some options that come to mind:
1) Hosted pipelines seem easiest but they don't have MongoDB on them. I could use Tool Installers, but there's no mention of a MongoDB installer, and in fact I don't see any tool installer in my list of available tasks. Also, it is mentioned that there is no admin access to the hosted pipeline machines and I believe MongoDB requires admin access. Lastly, downloading and installing Mongo takes quite a bit of time.
2) Set up my own private pipeline - i.e. a VM with Node + Mongo, and install the pipeline agent on it. Do I have to spin up a dedicated Azure instance for this? Will this instance be torn down and set up again on each test run, or will it remain up between test runs (meaning I have to take extra care to clean it up)?
3) Magically use a container in the pipeline through an option that I haven't yet discovered...?
I'd really like to use a container to run my tests because then I can use the same container locally during the development process, rather than having to maintain multiple environments. Can this be done?
So as it turns out, VSTS now has Docker support in its pipeline (when I wrote my question it was in beta and I didn't find it for whatever reason). It can be found at https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.docker.
This command allows you to spin up a container of your choice and run a single command on it. If this command is to be synchronously run as part of the pipeline, then Run in Background needs to be unchecked (this will be the case for regular build commands, I guess). I ended up pushing a build script into my git repository and running it on a container.
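The container-based test run might be sketched as below; the MongoDB image, port, and the `wait_for` helper are assumptions, not part of the VSTS task itself:

```shell
#!/bin/sh
# Sketch of running tests against MongoDB in a container; the image name,
# port, and this retry helper are assumptions. wait_for retries a command
# until it succeeds or the attempt limit is reached.
wait_for() {
    tries="$1"; shift
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then return 1; fi
        sleep 1
    done
}

# docker run -d --name ci-mongo -p 27017:27017 mongo
# wait_for 30 nc -z localhost 27017   # wait until the port answers
# npm test                            # run the node.js tests against it
# docker rm -f ci-mongo
```

The retry loop matters because MongoDB takes a few seconds to accept connections after the container starts.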
And regarding my question in (2) above - machines in private pipelines aren't cleaned up between pipeline runs.

How best to use Docker in continuous delivery?

What is the best way to use Docker in a Continuous Delivery pipeline?
Should the build artefact be a Docker image rather than a Jar/War? If so, how would that work - I'm struggling to work out how to seamlessly use Docker in development (on a laptop) and then have the CI server use the same base image to build the artefact.
Well, there are of course multiple best practices and many approaches on how to do this. One approach that I have found successful is the following:
Separate the deployable code (jars/wars etc.) from the Docker containers in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code on are built in a separate step, a docker-build so to say. Here you can e.g. build the Docker images for your database, application server, Redis cache or similar. When a `Dockerfile` or similar has changed in your VCS, then Jenkins or whatever you use can trigger the build of the Docker images. These images should be tagged and pushed to some registry (be it Docker Hub or some local registry).
The deployable code (jars/wars etc) should be built as usual using Jenkins and commit-hooks. In one of my projects we actually ran Jenkins in a Docker container as described here.
All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or as data volume containers.
The test servers or whatever steps that are part of your pipeline should be set up according to a spec that is known by your build server. We used a descriptor that connected your newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle).
With this approach, we were also able to run our local processes (databases etc) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between local dev environment and the production environment was the operating system. The dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.
Yes, you should shift from jar/war files to images as your build artefacts. The process @wassgren describes should work well; however, I found the following to fit our use case better, especially for development:
1- Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
2- During development, use fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
3- When you build the project for deployment it uses the Dockerfile, and somewhere it does a COPY of the code/build artefacts to the right place. That image then gets tagged appropriately and deployed.
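The deployment Dockerfile in step 3 might look like the following sketch; the jar path and start command are assumptions on top of the myjava base image from step 1:

```dockerfile
# Hypothetical deployment Dockerfile for step 3; the jar path and start
# command are assumptions, built on the myjava base image from step 1.
FROM myjava
COPY build/libs/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because only the COPY layer changes between builds, rebuilding and pushing the tagged image stays cheap.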
Simple steps to follow:
1) Install Jenkins in a container.
2) Install the framework tool in the same container (I used SBT).
3) Create a project in Jenkins with the necessary plugins to integrate data from GitLab and build all the jars into a compressed archive (say build.tgz).
4) This build.tgz can be moved anywhere and triggered, but it should satisfy all its environment dependencies (for example, say it requires MySQL).
5) Now we create a base environment image (in our case, with MySQL installed).
6) With every build triggered, we trigger a Dockerfile on the server which combines build.tgz and the environment image.
7) Now we have our build.tgz with its environment satisfied. This image should be pushed into the registry; this is our final image. It is portable and can be deployed anywhere.
8) This final image can be run as a container with the necessary mount points to get outputs, and a script (script.sh) can be triggered by setting the entrypoint command in the Dockerfile.
9) This script.sh will be inside the image (the base image) and will be configured to do things according to our purpose.
10) Before bringing this container up, you need to stop the previously running container.
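Steps 6-10 above might be sketched as follows; the registry host, image name, and mount point are assumptions, and `DRY_RUN=1` prints each docker command instead of running it:

```shell
#!/bin/sh
# Sketch of steps 6-10; registry, image name, and mount point are
# assumptions. DRY_RUN=1 prints each docker command instead of running it.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

deploy() {
    build="$1"                                   # e.g. a Jenkins build number
    image="registry.example.com/myapp:$build"
    run docker build -t "$image" .               # combines build.tgz + env image
    run docker push "$image"                     # final image into the registry
    run docker stop myapp || true                # stop the previous container
    run docker rm myapp || true
    run docker run -d --name myapp -v /srv/out:/out "$image"
}
```

A call such as `deploy 42` would then replace the running container with the freshly built image.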
Important notes:
An image is created every time you build and is stored in the registry, so it can be reused. This image can be pushed to the main production server and triggered there.
This helps maintain a clean environment every time because we always start from the base image.
You can also create a stable CD pipeline with Bamboo and Docker. Bamboo Docker integrations come in both a build agent form and as a bundled task within the application. You might find this article helpful: http://blogs.atlassian.com/2015/06/docker-containers-bamboo-winning-continuous-delivery/
You can use the task to build a Docker image that you can use in another environment or deploy your application to a container.
Good luck!
