How best to use Docker in continuous delivery? - continuous-integration

What is the best way to use Docker in a Continuous Delivery pipeline?
Should the build artefact be a Docker image rather than a jar/war? If so, how would that work? I'm struggling to work out how to use Docker seamlessly in development (on a laptop) and then have the CI server use the same base image to build the artefact.

Well, there are of course multiple best practices and many approaches to this. One approach that I have found successful is the following:
Separate the deployable code (jars/wars etc.) from the Docker containers by keeping them in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code onto are built in a separate step, a Docker build so to speak. Here you can e.g. build the Docker images for your database, application server, Redis cache or similar. When a `Dockerfile` or similar has changed in your VCS, Jenkins or whatever you use can trigger the build of the Docker images. These images should be tagged and pushed to some registry (be it Docker Hub or a local registry).
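A minimal sketch of that separate image-build step, assuming a Jenkins job that fires when the Dockerfile repo changes (the registry address, image name and tag here are placeholders):
# Build the application-server image from its own repo, then tag and push it to the registry.
docker build -t registry.example.com/myproject/appserver:1.4.2 appserver/
docker push registry.example.com/myproject/appserver:1.4.2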
The deployable code (jars/wars etc.) should be built as usual using Jenkins and commit hooks. In one of my projects we actually ran Jenkins itself in a Docker container, as described here.
All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or data volume containers.
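As an illustration of the data volume container pattern (the image, container names and path are made up for the example):
# A container that exists only to hold the webapps directory as a data volume.
docker create -v /usr/local/tomcat/webapps --name app-artifacts tomcat:8 /bin/true
# The actual application server mounts that volume instead of baking the war files into its image.
docker run -d --volumes-from app-artifacts --name appserver tomcat:8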
The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known by your build server. We used a descriptor that connected the newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle).
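For reference, a Fig descriptor for such a setup might look roughly like this (service names, images and tags are illustrative, not taken from the project described above):
web:
  image: registry.example.com/myproject/appserver:1.4.2
  volumes_from:
    - app-artifacts
  links:
    - db
  ports:
    - "8080:8080"
db:
  image: registry.example.com/myproject/postgres:1.4.2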
With this approach, we were also able to run our local processes (databases etc) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between local dev environment and the production environment was the operating system. The dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.

Yes, you should shift from jar/war files to images as your build artefacts. The process #wassgren describes should work well; however, I found the following to fit our use case better, especially for development:
1- Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
2- During development, use fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
3- When you build the project for deployment, it uses the Dockerfile, and somewhere in there it does a COPY of the code/build artefacts to the right place (see the sketch below). That image then gets tagged appropriately and deployed.
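A minimal sketch of such a deployment Dockerfile, assuming the myjava base image from step 1 and a jar produced by the build (the jar path and port are assumptions):
# Start from the shared base image and copy in the build artefact.
FROM myjava
COPY target/myapp.jar /opt/myapp/myapp.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]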

Simple steps to follow:
1) Install Jenkins in a container.
2) Install the build tool in the same container (I used SBT).
3) Create a project in Jenkins with the necessary plugins to pull the code from GitLab and package all the jars into a compressed archive (say build.tgz).
4) This build.tgz can be moved anywhere and triggered, but it must satisfy all of its environment dependencies (for example, it may require MySQL).
5) Now we create a base environment image (in our case, with MySQL installed).
6) With every triggered build, we run a Dockerfile on the server which combines build.tgz with the environment image (see the sketch after the notes below).
7) Now we have our build.tgz with its environment satisfied. This image should be pushed into the registry. This is our final image; it is portable and can be deployed anywhere.
8) This final image can be run as a container with the necessary mount points to collect outputs, and a script (script.sh) can be triggered via the ENTRYPOINT instruction in the Dockerfile.
9) This script.sh lives inside the image (the base image) and is configured to do whatever our purpose requires.
10) Before bringing this container up, you need to stop the previously running container.
Important notes:
An image is created every time you build and is stored in the registry, so it can be reused. This image can be pushed to the main production server and triggered there. This helps to maintain a clean environment every time, because we always start from the base image.
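A rough sketch of the Dockerfile from step 6, combining build.tgz with the environment image (the base image name and paths are placeholders for whatever your setup uses):
# Start from the pre-built environment image from step 5 (MySQL already installed).
FROM registry.example.com/myproject/mysql-base:latest
# ADD unpacks the local tarball into /opt/app inside the image.
ADD build.tgz /opt/app/
# script.sh already lives in the base image (step 9) and drives the container.
ENTRYPOINT ["/opt/script.sh"]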

You can also create a stable CD pipeline with Bamboo and Docker. Bamboo Docker integrations come in both a build agent form and as a bundled task within the application. You might find this article helpful: http://blogs.atlassian.com/2015/06/docker-containers-bamboo-winning-continuous-delivery/
You can use the task to build a Docker image that you can use in another environment or deploy your application to a container.
Good luck!

Related

CI-CD binary dependency not in image GitLab

My team uses Taskfiles to manage tasks in their code (like building/publishing containers to ECS). Essentially, to make it easy to set up a local environment, all the steps needed are in a Taskfile. Most of the steps used in the CI/CD are just rewritten versions of those tasks. This can be difficult to maintain as it is essentially duplicated code. I prefer not to use a shell runner and to use a Docker image for builds.
Is there any way I can use Taskfiles in any container?
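For context, the kind of Taskfile described above might look something like this (task names, image name and commands are illustrative, not from the original post):
version: '3'
tasks:
  build-image:
    cmds:
      - docker build -t myapp:dev .
  publish-image:
    deps: [build-image]
    cmds:
      - docker push myapp:dev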

Why Use Spring Boot with Docker?

I'm quite new to Docker, and I'm wondering: when using Spring Boot, we can easily build, ship and deploy the application with the Maven or Gradle plugin, and we can easily add load balancing. So what is the main reason to use Docker in this case? Is containerization really needed everywhere? Thanks for the reply!
Containers help you get your software to run reliably when it is moved from one computing environment to another. A Docker image carries an entire runtime environment: your application and all its dependencies, libraries and other binaries, and the configuration files needed for its execution.
It also simplifies your deployment process, reducing a whole lot of mess to a single artifact.
Once you are done with your code, you can simply build the image and push it to Docker Hub. All you need to do on other systems is pull the image and run a container; it will take care of all the dependencies and everything.
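As a minimal sketch of that workflow for a Spring Boot jar (the base image, jar name and repository are assumptions):
# Dockerfile: package the already-built Spring Boot jar on top of a JRE base image.
FROM eclipse-temurin:17-jre
COPY target/myapp-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
Then build and push once, and pull and run on any other machine:
docker build -t myuser/myapp:1.0 .
docker push myuser/myapp:1.0
docker run -d -p 8080:8080 myuser/myapp:1.0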

How to clear obsolete docker images created via Jenkins CI/CD?

I have created a CI/CD Jenkins pipeline which does the following tasks:
Run a Gradle build. The Gradle build does the next set of tasks:
Build several Spring Boot microservices
Create a Docker image for each microservice
Push the images to a private Docker registry
Execute Helm templates to create/refresh the k8s deployments in a k8s cluster.
While the entire process works well, I am looking for a solution for a couple of scenarios.
Since it is CI/CD, the build is triggered on every code push, so Docker images keep piling up in the private registry and will eventually eat up all the disk space. How do I conditionally clear out the Docker images?
If a script is developed to use the Docker REST APIs to clear the images, how do I conditionally skip deleting certain images (e.g. images related to tagged Jenkins builds)?
Are there any recommendations or standards for this task?
One way of addressing this issue is to have a separate Jenkins job dedicated to deleting older images.
That job would probably trigger on some appropriate schedule, say every night, once a week and so on, depending on how quickly you're worried you'll run out of space.
As to how you would delete the images, take a look at the docker image prune command with the --filter option, as explained in this answer. That will allow you to delete only images older than, for example, 7 days.
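For example, a nightly cleanup job on the Docker host could run something along these lines (the 7-day cutoff is only an illustration):
# Remove unused images created more than 7 days (168 hours) ago.
docker image prune -a --force --filter "until=168h"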
Hope that helps!
I think the steps below should be the way to go forward.
Find all the containers:
docker ps -a -f "your condition"
Then stop and remove all the containers you found with the commands below:
docker stop "container name"
docker rm "container name"
Find all dangling images:
docker images -f "dangling=true"
Remove those images:
docker rmi "image name"
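If you want to script the dangling-image cleanup in one step, something like this should work:
# Delete every dangling image in one go; xargs -r skips the rmi call when the list is empty.
docker images -q -f "dangling=true" | xargs -r docker rmi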

Use docker compose with compiling

I want to deploy a Maven application with a Docker container and, if possible, also test it with Docker, but I have some problems.
Because I am using Java, I need to compile my application before using it.
During compilation the unit tests also run, and they need a database connection.
For testing I used a database container, started by hand, running on localhost:5432.
If I start docker-compose now, this causes an error because the container can't reach localhost:5432 any more. If I write postgres:5432 in my application.properties, it does not compile because of the unknown host postgres.
How do I handle this? Is there a way to start one container with Maven and one with Postgres at build time?
As you can see, I am new to docker-compose and don't have a workflow yet.
Thanks for your help.
You should use your existing desktop-oriented build process to build and test the application and only use Docker to build the final deployment artifact. If you are hard-coding the database location in your source code, there is lurking trouble there of exactly the sort you describe (what will you do if you have separate staging and production databases hosted by your cloud provider?) and you should make that configurable.
During the docker build phase there’s no way to guarantee that any particular network environment, external services, or DNS names will be present, so you can’t do things like run integration tests that depend on an external database. Fortunately, that’s a problem the software engineering community spent a long time addressing in the decades before Docker existed. While many Docker setups are very enthusiastic about mounting application source code directly into containers, that’s much less useful for compiled languages and not really appropriate for controlled production deployments.
In short: run Maven the same way you did before you had Docker, and then just have your Dockerfile COPY the resulting (fully-tested) .jar file into the image.
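A minimal sketch of that split, assuming a Spring Boot app (since application.properties is mentioned) and making the database location configurable as suggested above; the image names, jar path and property are assumptions:
# Dockerfile: only packages the already-built, already-tested jar.
FROM eclipse-temurin:17-jre
COPY target/myapp.jar /app/myapp.jar
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]
And a docker-compose.yml for running it locally, where the app receives the database host through an environment variable instead of a hard-coded localhost:
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/postgres
    depends_on:
      - db
    ports:
      - "8080:8080"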

Multiple Docker files for dev / production

I have a Laravel application with a Dockerfile and a docker-compose.yml which I use for local development. It currently does some volume sharing so that code updates are reflected immediately. The docker-compose setup also spins up containers for MySQL, Redis, etc.
However, in preparation for deploying my container to production (ECS), I wonder how best to structure my Dockerfile.
Essentially, on production, there are several other steps I would need to do that would not be done locally in the Dockerfile:
install dependencies
modify permissions
download a production env file
My first solution was to have a build script which essentially takes the codebase, copies it to an empty sub-folder, runs the above three commands in that folder, and then runs docker build. This way the Dockerfile doesn't need to change between dev and production, and I can include the extra steps before the build process.
However, the drawback is that the first three commands don't get included in the Docker image layering. So even if my dependencies haven't changed in the last 100 builds, they will still all be downloaded from scratch each time, which is fairly time-consuming.
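For reference, the build script described above might look roughly like this; the folder name, the Composer and chmod commands, and the env-file URL variable are assumptions based on it being a Laravel app:
#!/bin/sh
set -e
# Copy the codebase into an empty sub-folder so the working tree stays untouched.
rm -rf .build && mkdir .build
git archive HEAD | tar -x -C .build
cd .build
# The three production-only steps that currently happen outside the Dockerfile:
composer install --no-dev --optimize-autoloader   # install dependencies
chmod -R 775 storage bootstrap/cache              # modify permissions
curl -fsSL "$PROD_ENV_URL" -o .env                # download a production env file
# Finally, build the image from the prepared folder.
docker build -t myapp:production .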
Another option would be to have multiple Dockerfiles, but that doesn't seem very DRY.
Is there a preferred or standardized approach for handling this sort of situation?
