Newbie to GitLab CI/CD here.
I'd like to use a Linux base container as the Runner, since I only have access to a Windows VM and I'd rather not write PowerShell scripts to build/test/deploy. Also, the build uses Spring Boot + Maven, and I'm doubtful that the build scripts provided by Spring will run on Windows.
To complicate matters, the Maven + Spring build itself spins up a container to execute the build. In other words, my build would run a container inside a container (Docker-in-Docker), which seems feasible based on this blog.
Any feedback is much appreciated.
Edit based on @JoSSte's feedback:
Are there any better approaches to set up a runner besides running a container inside a container? For instance, if WSL enables running bash scripts, can the Windows VM act as the build server?
And to satisfy @JoSSte's request to make this question less opinion-based: are there any best practices for approaching a problem like this?
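For reference, once a runner with the Docker executor is registered, the pipeline itself can run entirely inside Linux containers even though the host is a Windows VM. A minimal `.gitlab-ci.yml` sketch, where the image tag and Maven commands are illustrative assumptions:

```yaml
# Sketch only: assumes a runner registered with the Docker executor.
stages:
  - build
  - test

build:
  stage: build
  image: maven:3-eclipse-temurin-17   # any Maven-equipped Linux image works
  script:
    - mvn -B package -DskipTests

test:
  stage: test
  image: maven:3-eclipse-temurin-17
  script:
    - mvn -B test
```

The `image:` keyword is what makes each job run in a Linux container, so no PowerShell is involved in the job scripts themselves.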
I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were cases where one build overwrote test data from another build, and I would like to avoid that.
The plan is the following:
Add dockerfile for each test project
Add references in the docker compose file (with creation of DBs on docker)
Add step in the Jenkins that will run integration tests
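The plan above might be sketched as a `docker-compose.yml` like the following; the image names, service names, paths, and connection string are all assumptions:

```yaml
# Sketch only: service names, images, and paths are illustrative.
services:
  testdb:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"
  integration-tests:
    build:
      context: .
      dockerfile: Tests/Dockerfile   # hypothetical per-test-project Dockerfile
    depends_on:
      - testdb
    environment:
      ConnectionStrings__Default: "Server=testdb;User=sa;Password=Your_password123"
```

The Jenkins step could then run `docker compose up --build --abort-on-container-exit --exit-code-from integration-tests`, so the build fails when the test container exits non-zero.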
I don't have much experience with containerization, so I cannot predict what problems may appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done simpler?
I suppose that the Visual Studio test runner won't be able to get results from the Docker containers. Am I right?
It looks like development of the tests will be more difficult, because the tests will run inside Docker. Am I right?
Thanks for all your suggestions.
That depends very much on the details. In a small project: no. In a big project with multiple microservices and many devs: sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere, or run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no VS Code expert though
The goal of containers is to make it simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
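For the "test DB running somewhere" option, a disposable database per build is what prevents two builds from overwriting each other's test data. A command sketch, assuming Jenkins (which provides `$BUILD_NUMBER`) and a SQL Server image; names and the port are illustrative:

```sh
# Start a throwaway SQL Server for this build, run the tests, tear it down.
docker run -d --name testdb-$BUILD_NUMBER \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=Your_password123 \
  -p 14330:1433 mcr.microsoft.com/mssql/server:2019-latest
dotnet test
docker rm -f testdb-$BUILD_NUMBER
```

Because the container name includes the build number, concurrent builds each get their own database.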
I have a Golang web app and I need to deploy it. I am trying to figure out the best practice for running a Golang app in production. The way I do it now is very simple: I just upload the built binary to production, without having the source code on prod at all.
However, I found an issue: my code reads a config/<local/prod>.yml config file from the source tree. If I just upload the binary without the source code, the app can't run because the config is missing. So I wonder what the best practice is here.
I thought about a couple solutions:
Upload the source code along with the binary, or build from source on the server.
Only upload the binary and the config file.
Move the YAML config to environment variables. But I think with this solution the code will be less structured, because if you have lots of config options, environment variables become hard to manage.
Thanks in advance.
Good practice for deployment is to have a reproducible build process that runs in a clean room (e.g. a Docker image) and produces artifacts (binaries, configs, assets) to deploy, and ideally also runs some tests that prove nothing was broken since the last time.
It is a good idea to package the service - the binary and all the files it needs (configs, auxiliary files such as systemd, nginx or logrotate configs, etc.) - into some sort of package, be it a package native to your target environment's Linux distribution (DPKG, RPM), a virtual machine image, a Docker image, etc. That way you (or someone else tasked with deployment) won't forget any files. Once you have a package, you can easily verify and deploy it using the native tools for that packaging format (apt, yum, docker...) in the production environment.
For configuration and other files, I recommend making the software read them from well-known locations, or at least having an option to pass paths as command-line arguments. If you deploy to Linux, I recommend following the Linux FHS (Filesystem Hierarchy Standard) (tl;dr: configuration goes to /etc/yourapp/, binaries to /usr/bin/).
It is not recommended to build the software from source in the production environment, as building requires tools that are normally unnecessary there (e.g. go, git, dependencies, etc.). Installing and running these means more maintenance and can introduce security and performance risks. Generally you want to keep your production environment as minimal as possible while still being able to run the application.
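The "pass paths as command-line arguments" suggestion is only a few lines in Go; the flag name and the default path under /etc are just examples of the convention:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// parseConfigPath reads an optional -config flag from args, defaulting to
// the conventional FHS location under /etc.
func parseConfigPath(args []string) string {
	fs := flag.NewFlagSet("yourapp", flag.ContinueOnError)
	path := fs.String("config", "/etc/yourapp/config.yml", "path to the config file")
	fs.Parse(args)
	return *path
}

func main() {
	fmt.Println("loading config from", parseConfigPath(os.Args[1:]))
}
```

In production the binary runs with no arguments and picks up /etc/yourapp/config.yml; in development you run it with `-config config/local.yml`.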
I think the most common deployment strategy for an app is trying to comply with the 12-factor-app methodology.
So, in this case, if your YAML file is the configuration file, it would be better to put the configuration in environment variables (ENV vars). That way, when you deploy your app in a container, it is easier to configure the running instance from ENV vars than by copying a config file into the container.
However, when writing system software, it is better to comply with the file system hierarchy defined by the OS you are using. If you are using a Unix-like system, you can read about the hierarchy by typing man hier in the terminal. Usually, I install the compiled binary into /usr/local/bin and put the configuration inside /usr/local/etc.
For deployment to production, I create a simple Makefile that does the building and installation. If the deployment environment is a bare-metal server or a VM, I commonly use Ansible to do the deployment with ansible-playbook. The playbook fetches the source code from the repository, then builds, compiles, and installs the software by running the make command.
If the app will be deployed in containers, I suggest creating the image with a multi-stage build, so the source code and the tools needed to build the binary do not end up in the production environment and the image stays smaller. But, as I mentioned before, it is common practice to read the app configuration from ENV vars instead of a config file. If the app has a lot of things to configure, the file can be copied into the image at build time.
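A multi-stage Dockerfile for a Go app might look like this sketch; the Go version, paths, and final base image are assumptions:

```dockerfile
# Build stage: has the Go toolchain and the source code.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: only the static binary; no compiler, no source.
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the final stage is shipped, so the production image contains the binary and nothing else.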
While we wait for the proposal cmd/go: support embedding static assets (files) in binaries to be implemented (see the current proposal), you can use one of the static-asset embedding tools listed in that proposal.
The idea is to include your static file in your executable.
That way, you can distribute your program without being dependent on your sources.
I have a question.
How can I create scripts to:
Get my code from a repository (GitHub, GitLab...)
Build
Publish
Test
Run in IIS
This script should run on Windows or Linux, and consider that I have an empty VM.
The application is a .NET Core Web API.
I searched the web but did not find a template for getting code from a repository.
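As a starting point, the get/build/test/publish steps are the same handful of dotnet CLI commands on both Windows and Linux; the repository URL below is a placeholder, and hosting the published output in IIS (Windows-only) still needs separate configuration:

```sh
# Sketch: assumes git and the .NET SDK are already installed on the empty VM.
git clone https://example.com/your/repo.git app
cd app
dotnet restore
dotnet build --configuration Release
dotnet test
dotnet publish --configuration Release --output ./publish
```

The `./publish` folder is what you would then point IIS (via the ASP.NET Core Hosting Bundle) or a Linux reverse proxy at.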
This is doable with scripts as @Scott said, but you should also consider existing solutions, because there are some great free ones out there, like TeamCity with Octopus Deploy integration. Here is what you need to consider if you decide to write the scripts yourself:
The VM you have is empty, so the runtimes need to be installed and checked for compatibility with the code you are trying to deploy.
The scripts for some parts of the deployment will need to run under a user with sufficient privileges.
You will need to handle the web server configuration in the scripts as well.
And those are only a few of the things on the list for that path. Having said that, there is the path of containers, which handles most of this through code and can be deployed to all of the environments you mentioned; you only need to make sure a container service is available on the VMs you want to deploy to. It is much easier to manage since, as I mentioned, everything is in code and easily changed, unlike scripts.
I have a Laravel application with a Dockerfile and a docker-compose.yml which I use for local development. It currently does some volume sharing so that code updates are reflected immediately. The docker-compose.yml also spins up containers for MySQL, Redis, etc.
However, in preparation for deploying my container to production (ECS), I wonder how best to structure my Dockerfile.
Essentially, for production there are several extra steps I would need to do in the Dockerfile that are not done locally:
install dependencies
modify permissions
download a production env file
My first solution was to have a build script which takes the codebase, copies it to an empty sub-folder, runs the above three commands in that folder, and then runs docker build. This way the Dockerfile doesn't need to change between dev and production, and I can include the extra steps before the build process.
However, the drawback is that the first three commands aren't part of Docker's image layering. So even if my dependencies haven't changed in the last 100 builds, they'll still be downloaded from scratch each time, which is fairly time-consuming.
Another option would be to have multiple Dockerfiles, but that doesn't seem very DRY.
Is there a preferred or standardized approach for handling this sort of situation?
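One common pattern is a single multi-stage Dockerfile with separate dev and prod targets, selected at build time with `docker build --target`. A sketch, where the stage names, base image, and commands are illustrative assumptions:

```dockerfile
# Shared base stage used by both targets.
FROM php:8.2-fpm AS base
WORKDIR /var/www/html

# Dev target: stays minimal and relies on docker-compose volume mounts.
FROM base AS dev

# Prod target: bakes in dependencies first so Docker caches that layer
# between builds, then the code and permissions.
FROM base AS prod
COPY composer.json composer.lock ./
RUN curl -sS https://getcomposer.org/installer | \
    php -- --install-dir=/usr/local/bin --filename=composer \
 && composer install --no-dev --no-scripts
COPY . .
RUN chown -R www-data:www-data storage bootstrap/cache
```

Locally you build with `--target dev` (docker-compose supports this via `build.target`), and the CI pipeline builds with `--target prod`. Because `composer.json`/`composer.lock` are copied before the rest of the code, the dependency layer is reused whenever they haven't changed; the production env file can still be injected at deploy time rather than baked into the image.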