I am working on a POC that currently uses Jenkins as the CI server. I have set up jobs based on certain lifecycle stages, such as test suite and QA, and configured these jobs as scheduled builds based on a cron expression.
I have a request to find out what the next scheduled build will be in Jenkins, based on the jobs I have created. I know the last successful build and the last failed build, but I don't know the next proposed build. Any clues? Or is there a view plugin for this? Sorry if this is a strange request, but I need to find out.
Also, I need to find out whether there is an issue when more than one job runs concurrently, and what happens in that case. My understanding is that this should not be an issue. I do not have any slaves set up; I only have the master.
Jenkins version: 1.441
I found the answer to the first question myself:
https://wiki.jenkins-ci.org/display/JENKINS/Next+Executions
So can you help me with the second question, please? Is there any issue with more than one job building concurrently?
Thanks,
Shane.
For the next execution date, take a look at the Next Executions plugin here.
For your second question:
The number of builds you can run concurrently is configurable in the Jenkins server parameters (http://<your-jenkins-host>/configure, the executors parameter).
If the number of executors is reached, each newly triggered job is added to Jenkins's build queue and runs as soon as one of the running jobs finishes.
In our GitLab project group, we are using multiple shared runners for CI.
However, some of the jobs have dependencies, such that the previous job must have been executed on the same runner.
Here is an example:
Job 1 builds a Docker image
Job 2 runs checks against the image, so it needs the Docker image from job 1
Job 3 pushes the Docker image to a container hub, so it also needs the Docker image from job 1
Now, with multiple shared runners it may happen that job 1 is executed on runner 1 and jobs 2 and 3 on a different runner. This throws an error in jobs 2 and 3, as the Docker image is not available locally on that runner.
On the other hand, we need multiple runners because of the amount of computation in our project. So it would be great if, once a runner is picked for a specific job, the same runner were kept for the subsequent jobs.
Any ideas to solve this?
GitLab scheduling doesn't permit this easily. The balancing between runners works as follows:
When a job is created on the GitLab instance, it is registered in the queue in a pending state
Runners check every 3 seconds (3s by default, configurable with check_interval) whether there are jobs in the pending queue. If there are, and the runner is eligible for the job (for example, the job tags match the runner), the runner launches N jobs from the queue, where N is limited by the runner's maximum number of concurrent jobs (the concurrent option)
So it isn't GitLab itself that schedules which runner runs which job. GitLab just puts the jobs to run into a queue; runners frequently check the queue and pick up jobs. This is great for scalability, but not great for your use case.
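For reference, a minimal sketch of where those knobs live in a runner's config.toml, assuming a single registered runner using the shell executor (the name, URL, token, and values below are placeholders):

# /etc/gitlab-runner/config.toml
concurrent = 4        # max number of jobs this runner process runs at the same time
check_interval = 3    # how often, in seconds, the runner polls GitLab for pending jobs

[[runners]]
  name = "builder-1"                      # placeholder name
  url = "https://gitlab.example.com/"     # placeholder GitLab instance URL
  token = "RUNNER_TOKEN"                  # placeholder runner token
  executor = "shell"
  limit = 2                               # cap on concurrent jobs for this runner entry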
From my point of view, you have 2 options:
First, put a specific tag on only one runner and use that tag on the jobs that need to run on the same host.
Second, and more flexible, store the resulting Docker image in your GitLab project's container registry at the end of the build and pull it from any job that needs the image (job 2 and job 3); a sketch follows below. See https://docs.gitlab.com/ee/user/packages/container_registry/#build-and-push-by-using-gitlab-cicd
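A minimal .gitlab-ci.yml sketch of the second option, assuming the shared runners allow the Docker-in-Docker service; the job names and the test command are illustrative, while $CI_REGISTRY*, $CI_REGISTRY_IMAGE, and $CI_COMMIT_SHORT_SHA are variables predefined by GitLab:

stages:
  - build
  - test

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in to the project's container registry with the predefined CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build and push a commit-tagged image so any runner can pull it later
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test-image:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # the image comes from the registry, so this job no longer cares which runner it lands on
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" ./run-tests.sh   # placeholder test command

Job 3 would do the same pull and then retag and push the image to the external hub.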
Is there a way to configure a gitlab-runner (13.6.0) to accept jobs only between certain times of the day?
For example, I would like to kick off a 'deployment' pipeline with test, build, and deploy stages at any time of the day; the test and build stages can start immediately, but I would like the final deploy stage to happen only between, say, midnight and 2am.
Thanks
The GitLab documentation describes how to use cron-based pipeline schedules to trigger nightly pipelines. Additionally, there is a predefined $CI_PIPELINE_SOURCE environment variable that can be used to limit which jobs run in a pipeline.
Using these 2 features, it should be possible to run the same pipeline in 2 different ways. The "normal" runs will only execute the test/build jobs, while the "nightly" runs triggered by the schedule will only execute the deploy job, which checks the $CI_PIPELINE_SOURCE value.
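A minimal .gitlab-ci.yml sketch of that split, assuming a pipeline schedule has been created under CI/CD > Schedules; the script commands are placeholders:

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script: ./run-tests.sh          # placeholder
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'   # normal pushes / merge requests only

build:
  stage: build
  script: ./build.sh              # placeholder
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'

deploy:
  stage: deploy
  script: ./deploy.sh             # placeholder
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # only in the scheduled (nightly) pipeline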
Let me know if this option fits your environment.
So I currently have 2 YAML pipelines: one starts the server, and once the server is up and running I start the other pipeline, which runs the tests in one job and, once that's completed, starts a job that shuts down the server from the first pipeline.
I'm kind of new to YAML and wondering if there is a way to run all of this in a single pipeline...
The problem I came across is that if I run the server in the first job, I do not know how to make the second job kick off once the server is running. The first job never reaches a succeeded or failed state, because it is still in progress; the server has to keep running for the tests to run.
I tried adding a variable that I set to true after the server is running, but it still never moves on to the next job.
I looked into templates too, but those are not very clear to me, so any suggestion, documentation, or tutorial on how to achieve putting this in one pipeline would be very helpful...
I already googled a bunch and will keep googling but figured someone here might have an answer already.
Each agent can run only one job at a time. To run multiple jobs in parallel you must configure multiple agents. You also need sufficient parallel jobs.
You can specify the conditions under which each job runs. By default, a job runs if it does not depend on any other job, or if all of the jobs that it depends on have completed and succeeded. You can customize this behavior by forcing a job to run even if a previous job fails or by specifying a custom condition.
Since you have already added a variable that you set to true after the server is running, try enabling a custom condition: set that job to run only if the variable is xxx.
For more details, please check the official docs here:
Specify jobs in your pipeline
Specify conditions
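A minimal azure-pipelines.yml sketch of that idea, where the first job publishes an output variable and the second job's custom condition checks it (the job names, step name, variable name, and scripts are all illustrative); keep in mind that the dependent job only evaluates its condition after the job it depends on has finished:

jobs:
  - job: StartServer
    steps:
      - script: |
          ./start-server.sh                 # placeholder start command
          echo "##vso[task.setvariable variable=serverReady;isOutput=true]true"
        name: startStep

  - job: RunTests
    dependsOn: StartServer
    # custom condition: run only if the previous job set the output variable to true
    condition: eq(dependencies.StartServer.outputs['startStep.serverReady'], 'true')
    steps:
      - script: ./run-tests.sh              # placeholder test command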
I have a Spring scheduler task configured with either fixedDelay or cron, and multiple instances of this app running on multiple JVMs.
The default behavior is that all the instances execute the scheduled task.
Is there a way we can control this behavior so that only one instance executes the scheduled task and the others don't?
Please let me know if you know any approaches.
Thank you
We had a similar problem. We fixed it like this:
Removed all @Scheduled beans from our Spring Boot services.
Created an AWS Lambda function scheduled with the desired schedule.
The Lambda function hits our top-level domain with a scheduling request.
The load balancer forwards this request to one of the service instances.
This way we are sure that the scheduled task is executed only once across the cluster of our services.
I faced a similar problem where the same scheduled batch job was running on two servers when it was intended to run on one node at a time. Later on, I found a solution: don't launch the job if it is already running on another server.
// check Spring Batch's job repository (via JobExplorer) for a running execution of this job
Job someJob = ...  // the job to launch
Set<JobExecution> runningExecutions = jobExplorer.findRunningJobExecutions("someJobName");
if (runningExecutions == null || runningExecutions.isEmpty()) {
    // no execution is registered as running in the shared job repository, so it is safe to start
    jobLauncher.run(someJob, jobParametersBuilder.toJobParameters());
}
So before launching the job, check whether it is already executing on another node.
Please note that this approach only works with a database-backed job repository.
We had the same problem: our three instances were running the same job and doing the task three times every day. We solved it by making use of Spring Batch. Spring Batch allows only one job instance for a given set of identifying job parameters, so if you start the job with an identifier such as the date, it refuses to start a duplicate job with the same identifier. In our case we used the date, like '2020-1-1' (since the job runs only once a day). All three instances try to start the job with id '2020-1-1', but Spring rejects the two duplicates, stating that job '2020-1-1' is already running.
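A rough Java sketch of that approach (the class, method, job wiring, and parameter name are illustrative): the date becomes an identifying job parameter, so a second instance launching the same job on the same day is rejected by the shared job repository.

import java.time.LocalDate;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecutionException;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;

public class DailyJobLauncher {

    // launch the job with today's date as the identifying parameter, e.g. "2020-01-01"
    public void launchDailyJob(JobLauncher jobLauncher, Job dailyJob) throws JobExecutionException {
        JobParameters params = new JobParametersBuilder()
                .addString("runDate", LocalDate.now().toString())
                .toJobParameters();
        try {
            jobLauncher.run(dailyJob, params);
        } catch (JobInstanceAlreadyCompleteException | JobExecutionAlreadyRunningException e) {
            // another instance already completed or is currently running today's job; skip
        }
    }
}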
If my understanding of your question is correct, that you want to run this scheduled job on a single instance, then I think you should look at ShedLock.
ShedLock makes sure that your scheduled tasks are executed at most once at the same time. If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread). Please note that if one task is already being executed on one node, execution on other nodes does not wait; it is simply skipped.
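A minimal sketch of ShedLock with Spring's scheduler, assuming all instances share one database and ShedLock's shedlock table exists there; the task name, cron expression, and lock durations are illustrative:

import javax.sql.DataSource;
import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT10M")
class SchedulingConfig {

    // every instance points at the same database, so they all share the same locks
    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(dataSource);
    }
}

@Component
class NightlyTask {

    @Scheduled(cron = "0 0 2 * * *")   // illustrative schedule
    @SchedulerLock(name = "nightlyTask", lockAtMostFor = "PT30M", lockAtLeastFor = "PT5M")
    public void run() {
        // only the instance that acquires the "nightlyTask" lock executes this body;
        // the other instances skip this run instead of waiting
    }
}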
I am using a Jenkins configuration where the same job is executed in different locations: one in farm1 and another in an overseas farm2.
The Jenkins master server is located in farm1.
I am seeing the job on farm2 take much longer to finish, sometimes twice the elapsed time.
Do you have an idea what could be the reason for that?
Is there continuous master-slave communication during the build that could cause such a delay?
The job is a Maven JUnit test + UI Selenium test using a VNC server on the slave.
Thanks in advance,
Roy
I assume your server farms have identical hardware specs?
Network differences while checking out code, downloading dependencies, etc. can add up; the workspaces of the master and the slave are on different servers.
If you are archiving artifacts, they are usually archived back on the master, even when the job is run on the slave.
Install the Timestamper plugin, enable it, and then review the logs of both the master and the slave runs to see where there is a big time difference (you can configure Timestamper to show time as an increment from the start of the job, which would be helpful here).