How to stick with a Gitlab CI Runner within a pipeline? - continuous-integration

In our GitLab project group, we are using multiple shared runners for CI.
However, some of the jobs have dependencies, such that the previous job must have been executed on the same runner.
Here is an example:
Job 1 builds a Docker image
Job 2 runs the Docker image to check it, so it needs the image from Job 1
Job 3 pushes the Docker image to a container registry, so it needs the image from Job 1
Now, with multiple shared runners it may happen that Job 1 is executed on runner 1 while Jobs 2 and 3 run on a different runner. Jobs 2 and 3 then fail because the Docker image is not available locally on that runner.
On the other hand, we need multiple runners due to the amount of computation in our project. So it would be great if, once a runner is picked for a job, the remaining jobs of the pipeline stayed on the same runner.
Any ideas on how to solve this?

GitLab scheduling doesn't permit this easily. The balancing between runners works as follows:
When a job is created on the GitLab instance, it is registered in a pending state.
Runners check every 3 seconds by default (configurable with check_interval) whether there are jobs in the pending queue. If there are, and if the runner is the right runner for the job (for example, if the job's tags match the runner's tags), the runner picks up N jobs from the queue, where N is limited by the runner's maximum number of concurrent jobs (the concurrent option).
So it isn't GitLab itself that schedules which runner runs which job: GitLab just puts jobs into a queue, and runners frequently check the queue and pick jobs up. That's great for scalability, but not for your use case.
From my point of view, you have two options:
First, put a specific tag on only one runner and use that tag on the jobs that need to run on the same host.
Second, and more flexible: store the resulting Docker image in your GitLab project's container registry at the end of the build job and pull it from any job that needs the image (Job 2 and Job 3); see https://docs.gitlab.com/ee/user/packages/container_registry/#build-and-push-by-using-gitlab-cicd. A sketch of both options follows below.
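A minimal .gitlab-ci.yml sketch combining both ideas, assuming a Docker-capable runner and using only GitLab's predefined CI_REGISTRY* variables; the runner tag (docker-build) and the test entrypoint (run-tests.sh) are hypothetical names to adapt to your project:

stages:
  - build
  - test
  - push

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build-image:
  stage: build
  tags: [docker-build]                # option 1: pin the job to runners carrying this tag
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG          # option 2: publish the image so any runner can fetch it

test-image:
  stage: test
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG
    - docker run --rm $IMAGE_TAG ./run-tests.sh    # hypothetical test entrypoint inside the image

push-image:
  stage: push
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG
    - docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest

With this setup Jobs 2 and 3 no longer care which runner built the image, since they pull it from the registry instead of relying on the runner's local Docker cache.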

Related

How to prevent race condition between Jenkins jobs from the same repository when running in parallel

In my application I'm running several integration tests in parallel, spread among different Maven profiles.
Before each integration-test step, I'm using fabric8 to start a Couchbase container, and I use the reserve-network-port goal (of the build-helper-maven-plugin) to allocate available ports for the Couchbase container.
The issue is that multiple Jenkins jobs (different branches of my BB repository) are running in parallel on the same Jenkins machine.
From time to time, the integration-test initialization fails (for one or more profiles) with an "address is already bound" error, meaning that one of the TCP ports reserved by the reserve-network-port goal has already been allocated by another job.
Is there a way to avoid this race condition? (e.g. some kind of resource locking while the reserve-network-port goal is executed? Or another solution?)

gitlab-runner - accept jobs only at certain times?

Is there a way to configure a gitlab-runner (13.6.0) to accept jobs only between certain times of the day?
For example - I would like to kick off a 'deployment' pipeline with test, build and deploy stages at any time of the day, and the test and build stages can start immediately, but I would like the final deploy stage to happen only between, say, midnight and 2am.
Thanks
The GitLab documentation describes how to use cron-based pipeline schedules to trigger nightly pipelines. Additionally, there is a $CI_PIPELINE_SOURCE predefined variable that can be used to limit which jobs run in a pipeline.
Using these two features it should be possible to run the same pipeline in two different ways: the "normal" runs execute only the test/build jobs, while the "nightly" runs triggered by the schedule execute only the deploy job, which checks the $CI_PIPELINE_SOURCE value. A sketch follows below.
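A minimal .gitlab-ci.yml sketch of that split, assuming a pipeline schedule created in the project's CI/CD settings for the midnight window; the script names are hypothetical:

stages:
  - test
  - build
  - deploy

test-job:
  stage: test
  script: ./run-tests.sh                         # hypothetical test script
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'    # skip in nightly scheduled runs

build-job:
  stage: build
  script: ./build.sh                             # hypothetical build script
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'

deploy-job:
  stage: deploy
  script: ./deploy.sh                            # hypothetical deploy script
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'    # runs only in the scheduled (nightly) pipeline

Note that this moves the deploy into a separate, scheduled pipeline rather than delaying a stage inside the original pipeline.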
Let me know if this option fits your environment.

Cluster deployment with gitlab ci shell runner

I'm trying to migrate our CI and CD processes from Jenkins to GitLab CI. How should I set up GitLab to build our application on a cluster?
In general, I expect GitLab to clone the repository to all nodes in the cluster, execute my Bash deployment script, and run some tests if needed. From my point of view, I should start runners on all cluster nodes and start the build with all necessary tasks. Is this possible in GitLab? I can start only one runner for one build. Maybe there are different approaches for this task?
For example, I have a cluster with two nodes, A and B. I need to clone the repository to both nodes and start the build script on each of them. I have registered one gitlab-ci-multi-runner on each node, but the build is executed on only one of the nodes.
What you are describing can be achieved by setting up each runner with a different tag and defining multiple build jobs in GitLab (YAML anchors help with this; see the sketch below), but it's not the recommended way. The recommended way would be to use the GitLab runner to run the tests and then a separate job to run Ansible/Chef/Salt/Puppet or whichever deployment tool you prefer.
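For completeness, a minimal sketch of the tag approach, assuming the two runners are registered with the (hypothetical) tags node-a and node-b and that a deploy.sh script lives in the repository; the YAML anchor avoids duplicating the script between the two jobs:

.deploy-template: &deploy-template
  stage: deploy
  script:
    - ./deploy.sh              # hypothetical deployment script

deploy-node-a:
  <<: *deploy-template
  tags: [node-a]               # picked up only by the runner registered on node A

deploy-node-b:
  <<: *deploy-template
  tags: [node-b]               # picked up only by the runner registered on node B

Both jobs sit in the same stage, so with shell runners on both nodes the repository is cloned and the script is run on each node in parallel.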

Set build number of downstream jobs from master job in Jenkins

I have 2 Jenkins slaves with 1 master (3 machines). For example Slave1 and Slave2. I have two jobs and used labels to bind the jobs to the slaves. For example Job1 is bound to Slave1 and Job2 is bound to Slave2. Both are free style jobs. I created a free style job which only invokes Job1 and Job2 so they run on the slaves at the same time. I'd like for the two jobs to always build with the same build number or inherit the build number from the upstream job. Is there a way I could send the build number from the main job to the the two downstream jobs? I'd like to prevent Job1 and Job2's build numbers from getting out of sync which would happen if one is run by itself.
There is a method in the Jenkins Java API: Job::updateNextBuildNumber(int). So you can try the following: from a system Groovy script (which can be run via the Groovy Plugin), locate the child job objects, set the next build number on them via the method above, and then trigger them.
You may still run into problems, however. For example, if one of those jobs is triggered manually, you may not be able to set a number on it (build numbers have to increase).

Build schedule in Jenkins

I am working on a POC currently using Jenkins as CI server. I have setup jobs based on certain lifecycle stages such as test suite and QA. I have configured these jobs to become scheduled builds based on a cron expression.
I would like to know how to find out what the next scheduled build will be in Jenkins, based on the jobs I have created. I know the last successful build and the last failed build, but I don't know the next proposed build. Any clues? Or is there a view plugin for this? Sorry if this is a strange request, but I need to find out.
Also, I need to find out whether there is an issue when more than one job is running concurrently, and what will happen. I would have thought this is not an issue. I do not have any slaves set up; I only have the master.
Jenkins version: 1.441
I found the answer to the first question:
https://wiki.jenkins-ci.org/display/JENKINS/Next+Executions
So can you help me with the second question, please? Is there any issue with more than one job building concurrently?
Thanks,
Shane.
For the next execution date, take a look at the Next Executions plugin (linked above).
For your second question:
The number of builds you can run concurrently is configurable in the Jenkins server settings (the executors parameter on the /configure page).
If the number of executors is reached, each newly triggered job is added to Jenkins's build queue and runs once a running job finishes.
