CI/CD binary dependency not in image (GitLab)

My team uses Taskfiles to manage tasks in their code (like building and publishing containers to ECS). Essentially, to make it easy to set up a local environment, all the steps needed are in a Taskfile. Most of the steps used in CI/CD are just rewritten Taskfile tasks. This can be difficult to maintain, as it is essentially duplicated code in the same place. I prefer not to use a shell runner and to use a Docker image for builds.
Is there any way I can use Taskfiles in any container?
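One common approach (a sketch, not from the question itself) is to download the Task binary in `before_script`, so any image can run the same Taskfile tasks developers use locally. The image, task name, and install path here are assumptions:

```yaml
# .gitlab-ci.yml (sketch) — assumes the image has curl and a writable /usr/local/bin
build:
  image: docker:24
  before_script:
    # Install the Task binary at runtime; pin a version in real use
    - sh -c "$(curl -sSL https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
  script:
    - task build   # the same task developers run locally
```

Alternatively, bake the Task binary into a small custom build image once, so every job doesn't pay the download cost.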

Related

Should I use a Docker container as a GitLab Runner?

Newbie to GitLab CI/CD here.
I'd like to use a Linux-based container as the Runner. The reason being I only have access to a Windows VM, and I'd rather not have to write PowerShell scripts to build/test/deploy. Also, the build uses Spring Boot + Maven, and I'm doubtful that the build scripts provided by Spring will run on Windows.
To further complicate the problem, the build done by Maven + Spring spins up a container to execute the build. In other words, my build would run a container in a container, which seems feasible based on this blog.
Any feedback is much appreciated.
Edit based on @JoSSte's feedback:
Are there any better approaches to set up a runner besides needing a container inside a container? For instance, if WSL enables running bash scripts, can the Windows VM act as the build server?
And to satisfy @JoSSte's request to make this question less opinion-based: are there any best practices for approaching a problem like this?
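As a sketch of the usual setup (not from the question itself): register a runner with the Docker executor, and enable privileged mode if the builds need Docker-in-Docker. The image name and paths here are assumptions:

```toml
# /etc/gitlab-runner/config.toml (sketch)
[[runners]]
  name = "linux-docker-runner"
  executor = "docker"
  [runners.docker]
    image = "maven:3.9-eclipse-temurin-17"  # default build image; pick what suits
    privileged = true                        # needed for docker:dind service containers
    volumes = ["/cache"]
```

With this, jobs run inside Linux containers even though the runner daemon itself could be installed on the Windows VM (or in WSL).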

How to migrate GitHub Actions YAML to Bitbucket Pipelines?

Both GitHub Actions and Bitbucket Pipelines seem to fill similar functions at a surface level. Is it trivial to migrate the YAML for Actions into a Pipeline, or do they operate fundamentally differently?
For example: running something simple like SuperLinter (used on GitHub Actions) on Bitbucket Pipelines.
I've searched for examples or explanations of the migration process, but with little success so far; perhaps they're just not compatible, or maybe I'm missing something. This is my first time using Bitbucket over GitHub. Any resources and tips are welcome.
They are absolutely unrelated CI systems, and there is no straightforward migration path from one to the other.
Both systems base their definitions on YAML, just like GitLab CI, but the only thing that carries over is your YAML knowledge (syntax and anchors).
As CI systems, both will start some kind of agent to run a list of instructions (a script), so you can probably reuse most of the ideas in your scripts. But the execution environment is very different, so be ready to write plenty of tweaks, as Benjamin commented.
E.g., about SuperLinter: just forget about it. Instead, Bitbucket Pipelines has a concept of pipes, which serve a similar purpose but are implemented in a rather different way.
Another key difference: GHA runs on VMs, and you configure whatever you need with "setup actions". Bitbucket Pipelines runs on Docker containers that should already include most of the runtime and tooling you need up front, as "setup pipes" cannot exist. So you will end up installing tooling on every run (via apt, yum, apk, wget...) so as to avoid maintaining and keeping updated a crazy number of images with tooling and language runtimes: https://stackoverflow.com/a/72959639/11715259
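To make the structural difference concrete, here is a rough Bitbucket Pipelines sketch (the image, commands, and pipe name are illustrative, not a SuperLinter equivalent):

```yaml
# bitbucket-pipelines.yml (sketch)
image: node:20                # the container your script steps run in
pipelines:
  default:
    - step:
        name: Lint
        script:
          - npm ci
          - npx eslint .      # tooling installed/run per build, no "setup action"
          - pipe: atlassian/git-secrets-scan:0.5.1  # a pipe: a prebuilt, reusable step
```

The `pipe:` entry is the closest analogue to a GitHub Action, but pipes are Docker images with a fixed contract, not composable setup steps.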

Jenkins + Docker Compose + Integration Tests

I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In a previous project there were cases where one build overwrote the test data of another, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Add references in the Docker Compose file (with creation of DBs in Docker)
Add a step in Jenkins that will run the integration tests
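The plan above might look something like this; the service names, images, project path, and connection string are assumptions:

```yaml
# docker-compose.test.yml (sketch) — one throwaway environment per build
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test
  integration-tests:
    build: ./MyProject.IntegrationTests   # hypothetical test project with its own Dockerfile
    depends_on:
      - db
    environment:
      ConnectionStrings__Default: "Host=db;Username=postgres;Password=test"
```

Jenkins could then run `docker compose -f docker-compose.test.yml up --abort-on-container-exit`, so concurrent builds never share a database.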
I don't have much experience with containerization, so I cannot predict what problems may appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done simpler?
I suppose the Visual Studio test runner won't be able to get results from the Docker containers. Am I right?
It looks like developing the tests will be more difficult, because the tests will run inside Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project, no; in a big project with multiple microservices and many devs, sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere. Or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no Visual Studio expert, though.
The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).

Why Use Spring Boot with Docker?

I'm quite new to Docker, and I'm wondering: with Spring Boot, we can easily build, ship, and deploy the application with the Maven or Gradle plugin, and we can easily add load balancing. So what is the main reason to use Docker in this case? Is containerization really needed everywhere? Thanks for the reply!
Containers help you get software to run reliably when moved from one computing environment to another. A Docker image contains an entire runtime environment: your application and all its dependencies, libraries, other binaries, and the configuration files needed for its execution.
It also simplifies your deployment process, reducing a whole lot of mess to just one artifact.
Once you are done with your code, you can simply build the image and push it to Docker Hub. All you need to do on other systems is pull the image and run a container. It will take care of all the dependencies and everything.
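As a minimal sketch of what "one artifact" means here (the jar name and base image are assumptions), packaging a Spring Boot fat jar can be as small as:

```dockerfile
# Dockerfile (sketch) — assumes `mvn package` produced target/app.jar
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Then `docker build -t myapp .` and `docker run -p 8080:8080 myapp` give the same runtime on a laptop, a CI agent, or a server.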

How best to use Docker in continuous delivery?

What is the best way to use Docker in a Continuous Delivery pipeline?
Should the build artefact be a Docker image rather than a jar/war? If so, how would that work? I'm struggling to work out how to seamlessly use Docker in development (on a laptop) and then have the CI server use the same base image to build the artefact.
Well, there are of course multiple best practices and many approaches on how to do this. One approach that I have found successful is the following:
Separate the deployable code (jars/wars etc.) from the Docker containers in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code on are built in a separate step, a docker-build so to say. Here you can e.g. build the Docker images for your database, application server, Redis cache, or similar. When a `Dockerfile` or similar has changed in your VCS, then Jenkins or whatever you use can trigger the build of the Docker images. These images should be tagged and pushed to some registry (be it Docker Hub or some local registry).
The deployable code (jars/wars etc) should be built as usual using Jenkins and commit-hooks. In one of my projects we actually ran Jenkins in a Docker container as described here.
All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or as data volume containers.
The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known by your build server. We used a descriptor that connected your newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry, and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle).
With this approach, we were also able to run our local processes (databases etc) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between local dev environment and the production environment was the operating system. The dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.
Yes, you should shift from jar/war files to images as your build artefacts. The process @wassgren describes should work well; however, I found the following to fit our use case better, especially for development:
1- Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
2- During development, use Fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
3- When you build the project for deployment, it uses the Dockerfile, which somewhere does a COPY of the code/build artefacts to the right place. That image then gets tagged appropriately and deployed.
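Step 2 might look like this in Fig's config format (the paths and jar name are assumptions; Fig was later superseded by Docker Compose, but the idea is the same):

```yaml
# fig.yml (sketch) — development setup mounting local code into the shared base image
web:
  image: myjava              # the base image from step 1
  volumes:
    - ./build/libs:/app      # mount locally built artefacts instead of baking them in
  command: java -jar /app/app.jar
```

For step 3, the deployment Dockerfile would start `FROM myjava` and `COPY` the artefact in, so dev and prod share the same base layers.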
Simple steps to follow:
1) Install Jenkins in a container.
2) Install the framework tool in the same container (I used SBT).
3) Create a project in Jenkins with the necessary plugins to integrate data from GitLab and build all jars into a compressed format (say build.tgz).
4) This build.tgz can be moved anywhere and be triggered, but it should satisfy all its environment dependencies (for example, say it requires MySQL).
5) Now we create a base environment image (in our case, with MySQL installed).
6) With every build triggered, we trigger a Dockerfile on the server which combines build.tgz and the environment image.
7) Now we have our build.tgz along with its environment satisfied. This image should be pushed to the registry. This is our final image. It is portable and can be deployed anywhere.
8) This final image can be run in a container with the necessary mount points to get outputs, and a script (script.sh) can be triggered by mentioning the entrypoint command in the Dockerfile.
9) This script.sh will be inside the image (base image) and will be configured to do things according to our purpose.
10) Before bringing this container up, you need to stop the previously running container.
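The Dockerfile in step 6 might look roughly like this (the base image name and paths are hypothetical, inferred from the steps above):

```dockerfile
# Dockerfile (sketch) — combines the build artefact with the base environment image
FROM my-mysql-base:latest            # step 5's environment image (hypothetical name)
COPY build.tgz /opt/app/
RUN tar -xzf /opt/app/build.tgz -C /opt/app
ENTRYPOINT ["/opt/app/script.sh"]    # step 8's entrypoint script, shipped in the base image
```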
Important notes:
An image is created every time you build and is stored in the registry.
Thus it can be reused. This image can be pushed to the main production server and triggered there.
This helps maintain a clean environment every time, because we are using the base image.
You can also create a stable CD pipeline with Bamboo and Docker. Bamboo Docker integrations come in both a build agent form and as a bundled task within the application. You might find this article helpful: http://blogs.atlassian.com/2015/06/docker-containers-bamboo-winning-continuous-delivery/
You can use the task to build a Docker image that you can use in another environment or deploy your application to a container.
Good luck!
