Set build number of downstream jobs from master job in Jenkins - continuous-integration

I have 2 Jenkins slaves with 1 master (3 machines), say Slave1 and Slave2. I have two jobs and used labels to bind the jobs to the slaves: Job1 is bound to Slave1 and Job2 is bound to Slave2. Both are freestyle jobs. I created a freestyle job which only invokes Job1 and Job2, so they run on the slaves at the same time. I'd like the two jobs to always build with the same build number, or to inherit the build number from the upstream job. Is there a way I could send the build number from the main job to the two downstream jobs? I'd like to prevent Job1 and Job2's build numbers from getting out of sync, which would happen if one is run by itself.

There is a method in the Jenkins Java API: Job::updateNextBuildNumber(int). So you can try the following: from a system Groovy script (which can be run via the Groovy Plugin), locate the child job objects, set the build number on them via the method above, then trigger them.
You may still run into problems, however. For example, if one of those jobs is triggered manually, you may not be able to set a number on it (build numbers have to increase).
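As a rough sketch, a system Groovy build step in the parent job could look like the following. The child job names Job1 and Job2 are assumptions; substitute your own. Note this only runs inside Jenkins (the `build` variable is bound by the Groovy Plugin's system Groovy build step):

```groovy
// System Groovy build step (Groovy Plugin): runs inside the Jenkins master JVM.
// 'build' is bound to the current (parent) build in system Groovy build steps.
import jenkins.model.Jenkins
import hudson.model.Job

def parentNumber = build.number
['Job1', 'Job2'].each { name ->                       // assumed child job names
    def job = Jenkins.instance.getItemByFullName(name, Job.class)
    if (job != null) {
        // Only takes effect if the number is higher than the job's last build number.
        job.updateNextBuildNumber(parentNumber)
        job.save()
    }
}
```

After this step, trigger Job1 and Job2 (e.g. via the parameterized trigger or a downstream build step); their next builds should pick up the parent's number, provided it is higher than their current last build numbers.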

Related

How to stick with a Gitlab CI Runner within a pipeline?

In our Gitlab project group, we are using multiple shared runners for CI.
However, some of the jobs have dependencies, such that the previous job must have been executed on the same runner.
Here is an Example:
Job 1 builds a docker container
Job 2 checks the docker for execution, so it needs the docker image from job 1
Job 3 pushes the docker image to a container hub, so it needs the docker image from job 1
Now, with multiple shared runners, it may happen that Job 1 is executed on runner 1 while Jobs 2 and 3 run on a different runner. This throws an error in Jobs 2 and 3, as the docker image is not available locally on that runner.
On the other hand, we need multiple runners due to the amount of computation in our project. So it would be great if once a runner is picked in a specific job, it keeps the same runner for the ongoing jobs.
Any ideas to solve this?
Gitlab scheduling doesn't permit this easily. The balancing between runners works as follows:
When a job is created on the Gitlab instance, it is registered in a pending state.
Runners poll every 3 seconds (the default, configurable with check_interval) for jobs in the pending queue. If there are, and the runner is the right runner for the job (for example, if the job tags are compliant with the runner), the runner launches N jobs from the queue, with N limited by the maximum number of concurrent jobs per runner (the concurrent option).
So it isn't Gitlab itself that schedules which runner runs which job. Gitlab just puts jobs in a queue; runners frequently check the queue and pick up jobs. That's great for scalability, but not great for your use case.
In my point of view, you have 2 options:
First, put a specific tag on only one runner and use it on the jobs that need to run on the same host.
Second, more flexible: store the resulting docker image in your Gitlab project registry at the end of the build, and pull it from any job that needs the image (Job 2 and Job 3). See https://docs.gitlab.com/ee/user/packages/container_registry/#build-and-push-by-using-gitlab-cicd
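A minimal .gitlab-ci.yml sketch of the second option, assuming docker is available on the runners and using GitLab's predefined CI_REGISTRY* variables (the script commands like ./run-tests.sh are placeholders):

```yaml
# Sketch: build once, push to the project registry, pull wherever needed.
build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" ./run-tests.sh
```

Because every job pulls the image by its commit tag from the registry, it no longer matters which runner picks up which job.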

Multiple instances of a partitioned spring batch job

I have a Spring batch partitioned job. The job is always started with a unique set of parameters so always a new job.
My remoting fabric is JMS with request/response queues configured for communication between the masters and slaves.
One instance of this partitioned job processes files in a given folder. Master step gets the file names from the folder and submits the file names to the slaves; each slave instance processes one of the files.
Job works fine.
Recently, I started to execute multiple instances (completely separate JVMs) of this job to process files from multiple folders. So I essentially have multiple master steps running but the same set of slaves.
Randomly, I sometimes notice the following behavior: the slaves finish their work, but the master keeps spinning, thinking the slaves are still doing something. The step status shows successful in the job repository, but at the job level the status is STARTING with an exit code of UNKNOWN.
All masters share the set of request/response queues; one queue for requests and one for responses.
Is this a supported configuration? Can you have multiple master steps sharing the same set of queues running concurrently? Because of the behavior above I'm thinking the responses back from the workers are going to the incorrect master.

gitlab-runner - accept jobs only at certain times?

Is there a way to configure a gitlab-runner (13.6.0) to accept jobs only between certain times of the day?
For example - I would like to kick off a 'deployment' pipeline with test, build and deploy stages at any time of the day, and the test and build stages can start immediately, but I would like the final deploy stage to happen only between, say, midnight and 2am.
Thanks
The GitLab documentation describes how to use cron to trigger nightly pipelines. Additionally, there is a $CI_PIPELINE_SOURCE predefined environment variable that can be used to limit which jobs run in a pipeline.
Using these 2 features, it should be possible to run the same pipeline in 2 different ways. The "normal" runs will only include the test/build jobs. The "nightly" runs triggered by cron will only include the deploy job, which checks the $CI_PIPELINE_SOURCE value.
Let me know if this option fits your environment.
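A sketch of what this could look like in .gitlab-ci.yml, assuming a pipeline schedule is configured in the GitLab UI with a cron expression inside your deploy window (e.g. 0 0 * * *), and with placeholder script commands:

```yaml
# Normal pushes run test and build; only scheduled pipelines run deploy.
test:
  stage: test
  script: ./run-tests.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'

build:
  stage: build
  script: ./build.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'

deploy:
  stage: deploy
  script: ./deploy.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

$CI_PIPELINE_SOURCE is set to "schedule" for pipelines started by a schedule, so the deploy job only ever runs in the nightly window defined by the schedule's cron expression.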

Yarn capacity-scheduler Parallelize

Does the capacity scheduler in YARN run applications in parallel on the same queue for the same user?
For example: if we have 2 Hive CLIs in 2 terminals with the same user, and the same query is started in both, do they execute on the default queue in parallel or sequentially?
Currently, the UI shows 1 running, and 1 in pending state:
Is there a way to run it in parallel?
The YARN capacity scheduler runs jobs in FIFO order within a single queue. For example, if both Hive CLIs submit to the default queue, whichever secures resources first gets into the RUNNING state and the other waits (but only if the queue does not have enough resources for both).
If you want parallel execution:
1) Run the other job in a different queue. You can specify the queue name while launching a job on YARN.
2) Define resources in a manner such that both jobs can get the resources they need.
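For option 1, a sketch of capacity-scheduler.xml that adds a second queue alongside default and splits the capacity between them (the queue name adhoc and the 50/50 split are assumptions; tune them to your cluster):

```xml
<!-- Sketch: two queues under root, each with half the cluster capacity. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,adhoc</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
  <value>50</value>
</property>
```

Then, in one of the Hive CLIs, pick the second queue before running the query: `set mapreduce.job.queuename=adhoc;` for the MapReduce engine, or `set tez.queue.name=adhoc;` for Tez. The two queries can then hold resources in different queues concurrently.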

Build schedule in Jenkins

I am working on a POC currently using Jenkins as CI server. I have setup jobs based on certain lifecycle stages such as test suite and QA. I have configured these jobs to become scheduled builds based on a cron expression.
I have a request to find out what the next scheduled build will be in Jenkins, based on the jobs I have created. I know the last successful build and the last failed build, but I don't know the next proposed build. Any clues? Or is there a view plugin for this? Sorry if this is a strange request, but I need to find out.
Also, I need to discover whether there is an issue when more than one job is running concurrently. My understanding is that this is not an issue. I do not have any slaves set up; I only have the master.
Jenkins version: 1.441
I found the first issue!
https://wiki.jenkins-ci.org/display/JENKINS/Next+Executions
So can you help me with the second question, please? Is there any issue with more than one job building concurrently?
Thanks,
Shane.
For the next execution date, take a look at the Next Executions plugin (the wiki link in your update).
For your second question:
The number of builds you can run concurrently is configurable in the Jenkins server parameters (http://<your-jenkins-host>/configure, the "# of executors" param).
If the number of executors is reached, each newly triggered job is added to Jenkins's build queue and runs once a running job finishes.
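If you prefer to inspect or change the executor count without clicking through the UI, a sketch for the Jenkins script console (Manage Jenkins → Script Console; the value 4 is just an example):

```groovy
// Jenkins script console sketch: read and set the master's executor count.
import jenkins.model.Jenkins

println Jenkins.instance.numExecutors   // current number of executors
Jenkins.instance.setNumExecutors(4)     // allow four concurrent builds
Jenkins.instance.save()                 // persist the change to disk
```

With N executors, up to N builds run at once on the master; anything beyond that waits in the build queue.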