Should I use a Docker container as a GitLab Runner?

Newbie to GitLab CI/CD here.
I'd like to use a Linux-based container as the Runner, the reason being that I only have access to a Windows VM and I'd rather not write PowerShell scripts to build/test/deploy. Also, the build uses Spring Boot + Maven, and I'm doubtful that the build scripts provided by Spring will run on Windows.
To complicate things further, the Maven + Spring build itself spins up a container to execute the build. In other words, my build would run a container inside a container, which seems feasible based on this blog.
Any feedback is much appreciated.
Edit based on @JoSSte's feedback:
Are there any better approaches to set up a runner besides needing a container inside a container? For instance, since WSL enables running bash scripts, could the Windows VM act as the build server?
And to satisfy @JoSSte's request to make this question less opinion-based: are there any best practices for approaching a problem like this?
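For reference, the runner setup I have in mind looks roughly like this (a sketch, assuming the registration-token flow; the flags and the Maven image tag are examples, and newer GitLab versions use authentication tokens instead):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token "$RUNNER_TOKEN" \
  --executor docker \
  --docker-image maven:3-eclipse-temurin-17 \
  --docker-privileged

My understanding is that --docker-privileged is only needed if the build really does start containers of its own (docker-in-docker), and can be left off otherwise.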

Related

CI-CD binary dependency not in image GitLab

My team uses Taskfiles to manage tasks in their code (like building/publishing containers to ECS). Essentially, to make it easy to set up a local environment, all the steps needed are in a Taskfile. Most of the steps used in the CI/CD are just rewritten Taskfile tasks. This can be difficult to maintain, as it is essentially the same code duplicated in two places. I prefer not to use a shell runner and to use a Docker image for builds.
Is there any way I can use Taskfiles in any container?
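One option is to install the task binary into whatever job image the pipeline uses, so CI calls the exact same Taskfile targets as local dev. A minimal sketch, assuming the image can run the taskfile.dev install script (image tags and task names are placeholders):

build-and-publish:
  image: docker:27
  services:
    - docker:27-dind
  before_script:
    # install the task CLI into this job's container at runtime
    - wget -qO- https://taskfile.dev/install.sh | sh -s -- -d -b /usr/local/bin
  script:
    # the same Taskfile targets used locally now drive CI
    - task build
    - task publish

Baking task into a custom image instead of installing it per job would avoid the repeated download.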

Jenkins + Docker Compose + Integration Tests

I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run the integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were sometimes cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Add references in the Docker Compose file (with creation of the DBs in Docker)
Add a step in Jenkins that runs the integration tests
I don't have much experience with containerization, so I cannot predict what problems might appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done simpler?
I suppose that the Visual Studio test runner won't be able to get results from the Docker containers. Am I right?
It looks like developing the tests will be more difficult, because they will run inside Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project - no, in a big project with multiple microservices and many devs - sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere. Or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no VS Code expert though.
The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
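To make the plan concrete, the ad-hoc environment usually boils down to a single Compose file; a sketch, assuming a Postgres-backed service and a test project with its own Dockerfile (all names, images, and connection strings are placeholders):

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test   # throwaway DB, recreated for every build
  integration-tests:
    build:
      context: .
      dockerfile: Tests/IntegrationTests/Dockerfile
    depends_on:
      - db
    environment:
      ConnectionStrings__Default: "Host=db;Username=postgres;Password=test"

The Jenkins step is then a single shell command; each build gets its own fresh DB, and the test exit code propagates to the build status:

docker compose up --build --abort-on-container-exit --exit-code-from integration-tests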

Why Use Spring Boot with Docker?

I'm quite new to Docker, and I'm wondering: with Spring Boot we can already easily build, ship, and deploy the application with the Maven or Gradle plugin, and we can easily add load balancing. So what is the main reason to use Docker in this case? Is containerization really needed everywhere? Thanks for the reply!
Containers help you get software to run reliably when moved from one computing environment to another. A Docker image contains an entire runtime environment: your application and all of its dependencies, libraries, other binaries, and the configuration files needed for its execution.
It also simplifies your deployment process, reducing a whole lot of mess to just one artifact.
Once you are done with your code, you can simply build the image and push it to Docker Hub. All you need to do on other systems is pull the image and run a container; it will take care of all the dependencies and everything.
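For a Spring Boot + Maven project that whole loop is only a few commands; a sketch, assuming Spring Boot 2.3+ so the Maven plugin can build an OCI image directly (the image name is a placeholder):

# build an OCI image straight from the build, no Dockerfile required
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp:1.0.0
docker push myorg/myapp:1.0.0

# on any machine with Docker: pull and run, dependencies included
docker pull myorg/myapp:1.0.0
docker run --rm -p 8080:8080 myorg/myapp:1.0.0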

Is it possible to download and configure jenkins with a script?

I want to set up continuous integration with one or more scripts, first locally and then on a server.
For that I need Jenkins. I installed Jenkins in a Docker container, but would it be possible to configure it with a script so that the configuration can be reused on any computer that runs it? When I talk about configuration, I'm talking about Jenkins jobs and plugins.
You can use a configuration management tool like Chef or Ansible to install and configure it in an automated way. If using Chef, you can use the community cookbook. If you are only looking to create jobs in an automated way, check this thread. In a similar way you can write a Groovy script to install plugins as well.
Also take a look at this article.
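Another option these days is to bake the configuration into the image itself; a sketch using the official jenkins/jenkins image's jenkins-plugin-cli together with the Configuration as Code plugin (the plugin list and file paths are examples):

FROM jenkins/jenkins:lts
# preinstall plugins at image build time
RUN jenkins-plugin-cli --plugins "git workflow-aggregator configuration-as-code"
# jobs and global settings described in YAML, applied at boot
COPY casc.yaml /usr/share/jenkins/ref/casc.yaml
ENV CASC_JENKINS_CONFIG=/usr/share/jenkins/ref/casc.yaml

Anyone who builds and runs this image gets the same plugins and configuration, which is the "works on any computer" property you're after.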

Trouble deploying static site using Gitlab CI

I'm currently developing a project consisting of an Angular SPA frontend repo and a Node.js backend repo.
I've been looking into ways to deploy my applications to RHEL/CentOS using GitLab CI after compiling/minifying my project.
The problem is, I can't figure out how to use e.g. the YUI Compressor for shrinking within the gitlab-ci.yml file.
I also have trouble using e.g. SSH to deploy my files to the public folder on my webserver, or triggering pm2 to reload the application.
I'd love to implement basic unit testing in this approach too, but I still can't get the hang of how that's done either.
I'd be glad to hear any suggestions from you that could expand my knowledge.
Thanks!
Assuming you're using yuicompressor as a jar, how about writing this in .gitlab-ci.yml:
build:
  script:
    - yuicompressor.sh
Make sure you have a shell script in your PATH, with the chmod +x bit set, that does this:
#!/bin/sh
# forward any job arguments straight to the jar
java -jar /path/to/your/yuicompressor-x.y.z.jar "$@"
That file must be on your runner VM and called yuicompressor.sh. It doesn't seem good to me to hard-code paths to resources on your runners into .gitlab-ci.yml.
Note you might need different args to the java app.
I put all the executable tools (mostly scripts) that my runners need into a folder /glrunner/tools and add /glrunner/tools to the PATH of my runner when I start it.
If you're having trouble because you're using Dockerized runners, get everything working OUTSIDE Docker with a shell runner on a Linux VM, and move to containers later. Jumping straight to containers is the number one rookie mistake people make.
Once you're using containers and you have a Dockerfile to bring up your tooling, perhaps you won't need static "tool/script" folders like the one I suggested, but it's a good way to get started: learn GitLab CI Runners first, then learn Docker.
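On the SSH/pm2 part of the question: the common pattern is to load a deploy key from a CI variable and then script the copy and reload; a sketch, assuming an SSH_PRIVATE_KEY variable and placeholder host, paths, and app name:

deploy:
  stage: deploy
  before_script:
    # load the deploy key from a protected CI variable
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan example.com >> ~/.ssh/known_hosts
  script:
    # copy the minified build to the web root and reload the app
    - scp -r dist/* deploy@example.com:/var/www/public/
    - ssh deploy@example.com "pm2 reload my-app"

Unit testing fits the same mold: a separate test job whose script runs your test command (e.g. npm test) in an image that has Node installed.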
