Jenkins pipeline usage without chaining the jobs - jenkins-pipeline

I want jobs chained from start to end, but the same jobs also need to run individually. A scheduled build should run the jobs chained; a manual run should not cascade into the downstream jobs.
I don't want to have to click Configure, remove the downstream trigger, and re-add it every time I run a job manually.
What is the best solution for this?
Thanks in advance.

This can be done, although I recommend putting the functions outlined below in a Jenkins Shared Library. Doing it inline will require at least four sandbox security approvals from your Jenkins administrator, one of which warns that it is a security vulnerability, so assess the impact for your environment and your risk profile.
#!groovy
List jobparameters = [
    booleanParam(name: 'CHECKBOX', defaultValue: true, description: 'Tick a checkbox'),
    string(name: 'STRING', defaultValue: 'stringhere', description: 'Enter a string')
]
properties([
    pipelineTriggers([cron('''TZ=Australia/Victoria
H 1 * * *''')]),
    buildDiscarder(logRotator(numToKeepStr: '20')),
    parameters(jobparameters),
])
stage('Stage') {
    node {
        // do something always
        echo(params.STRING)
    }
}
if (hasAutomatedCauses()) {
    stage('folder1/reponame/branch') {
        // do something conditionally; pass the current parameter *values* downstream
        // (the build step expects name/value pairs, not parameter definitions)
        build(
            job: "folder1/reponame/branch",
            parameters: [
                booleanParam(name: 'CHECKBOX', value: params.CHECKBOX),
                string(name: 'STRING', value: params.STRING)
            ],
            propagate: true
        )
    }
} else {
    stage('folder1/reponame/branch') {
        node {
            echo("Not running downstream job")
        }
    }
}

/**
 * Checks if the build causes contain automated causes.
 * Returns true if an automated cause is found.
 *
 * @return boolean
 */
boolean hasAutomatedCauses() {
    List automatedCauses = ['UpstreamCause', 'TimerTriggerCause']
    List intersection = automatedCauses.intersect(getCauses())
    // if no automated cause is found, the intersection is empty and we return false
    return !intersection.isEmpty()
}

/**
 * Retrieves the list of causes that triggered this build.
 *
 * @return list of cause class names (e.g. 'UserIdCause', 'TimerTriggerCause')
 */
List getCauses() {
    return currentBuild.rawBuild.getCauses().collect { it.getClass().getCanonicalName().tokenize('.').last() }
}
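If you move the helper into a Shared Library as suggested, a minimal global-variable step might look like the sketch below (the library layout and step name are illustrative assumptions, not from the original answer):

```groovy
// vars/hasAutomatedCauses.groovy in a shared library (illustrative layout)
boolean call() {
    List automatedCauses = ['UpstreamCause', 'TimerTriggerCause']
    // collect the short class names of all causes that triggered this build
    List causes = currentBuild.rawBuild.getCauses().collect {
        it.getClass().getCanonicalName().tokenize('.').last()
    }
    return !automatedCauses.intersect(causes).isEmpty()
}
```

A pipeline that loads the library (e.g. `@Library('my-shared-lib') _`) can then call `hasAutomatedCauses()` directly; if the library is configured as a trusted global library it typically runs outside the Groovy sandbox, so the per-script method approvals are not needed.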

Related

How do I run multiple jobs with a given IJobConsumer within a single service instance?

I want to be able to execute multiple jobs concurrently on a Job Consumer. At the moment if I run one service instance and try to execute 2 jobs concurrently, 1 job waits for the other to complete (i.e. waits for the single job slot to become available).
However if I run 2 instances by using dotnet run twice to create 2 separate processes I am able to get the desired behavior where both jobs run at the same time.
Is it possible to run 2 (or more) jobs at the same time for a given consumer inside a single process? My application requires the ability to run several jobs concurrently but I don't have the ability to deploy many instances of my application.
Checking the application log I see this line which I feel may have something to do with it:
[04:13:43 DBG] Concurrent Job Limit: 1
I tried changing the SagaPartitionCount to something other than 1 on instance.ConfigureJobServiceEndpoints to no avail. I can't seem to get the Concurrent Job Limit to change.
My configuration looks like this:
services.AddMassTransit(x =>
{
    x.AddDelayedMessageScheduler();
    x.SetKebabCaseEndpointNameFormatter();

    // registering the job consumer
    x.AddConsumer<DeploymentConsumer>(typeof(DeploymentConsumerDefinition));
    x.AddSagaRepository<JobSaga>()
        .EntityFrameworkRepository(r =>
        {
            r.ExistingDbContext<JobServiceSagaDbContext>();
            r.LockStatementProvider = new SqlServerLockStatementProvider();
        });
    // add other saga repositories here for JobTypeSaga and JobAttemptSaga as well

    x.UsingRabbitMq((context, cfg) =>
    {
        var rmq = configuration.GetSection("RabbitMq").Get<RabbitMq>();
        cfg.Host(rmq.Host, rmq.Port, rmq.VirtualHost, h =>
        {
            h.Username(rmq.Username);
            h.Password(rmq.Password);
        });
        cfg.UseDelayedMessageScheduler();

        var options = new ServiceInstanceOptions()
            .SetEndpointNameFormatter(context.GetService<IEndpointNameFormatter>() ?? KebabCaseEndpointNameFormatter.Instance);
        cfg.ServiceInstance(options, instance =>
        {
            instance.ConfigureJobServiceEndpoints(js =>
            {
                js.SagaPartitionCount = 1;
                js.FinalizeCompleted = true;
                js.ConfigureSagaRepositories(context);
            });
            instance.ConfigureEndpoints(context);
        });
    });
});
Where DeploymentConsumerDefinition looks like
public class DeploymentConsumerDefinition : ConsumerDefinition<DeploymentConsumer>
{
    protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<DeploymentConsumer> consumerConfigurator)
    {
        consumerConfigurator.Options<JobOptions<DeploymentConsumer>>(options =>
        {
            options.SetJobTimeout(TimeSpan.FromMinutes(20));
            options.SetConcurrentJobLimit(10);
            options.SetRetry(r =>
            {
                r.Ignore<InvalidOperationException>();
                r.Interval(5, TimeSpan.FromSeconds(10));
            });
        });
    }
}
Your definition should specify the job consumer message type, not the job consumer type:
public class DeploymentConsumerDefinition : ConsumerDefinition<DeploymentConsumer>
{
    protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<DeploymentConsumer> consumerConfigurator)
    {
        // MESSAGE TYPE, NOT CONSUMER TYPE
        consumerConfigurator.Options<JobOptions<DeploymentCommand>>(options =>
        {
            options.SetJobTimeout(TimeSpan.FromMinutes(20));
            options.SetConcurrentJobLimit(10);
            options.SetRetry(r =>
            {
                r.Ignore<InvalidOperationException>();
                r.Interval(5, TimeSpan.FromSeconds(10));
            });
        });
    }
}

adding jenkins pipeline triggers on agent node

I am trying to add a trigger to my pipeline file:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
            triggers {
                cron('H 09 * * 1-5')
            }
        }
    }
This code gives the error:
WorkflowScript: 22: Invalid config option "triggers" for agent type "node". Valid config options are [label, customWorkspace] @ line 22, column 11.
triggers {
Then I tried putting it outside the agent block, assuming it wouldn't work, just to test:
pipeline {
    agent {
        node {
            label 'Deploymentserver'
        }
    }
    triggers {
        cron('H 09 * * 1-5')
    }
It doesn't give any errors, but it doesn't trigger my pipeline either.
It seems the triggers option is not supported inside the node agent.
This is a declarative pipeline integrated with Bitbucket. How can I get this to work?
Your second attempt is the correct syntax.
As you can see in the documentation, the correct location for the triggers directive is at the same level as the agent directive:
pipeline {
    agent {
        label 'Deploymentserver'
    }
    triggers {
        cron('H 09 * * 1-5')
    }
    stages {
        ...
    }
    ...
}
Therefore the configuration is not the issue, and it should work as expected.
One thing that might be causing your problem: after adding the trigger configuration, you must run the pipeline at least once (manually or automatically) for the configuration to take effect.
You can go into the job configuration in the Jenkins UI and verify that the cron trigger settings appear there; if so, your pipeline trigger is configured properly.

Parameterized Scheduler in scripted pipeline

I switched from a declarative pipeline to a scripted pipeline. Everything works fine except the Parameterized Scheduler plugin. With one trigger it works and the pipeline is scheduled, but if I add another trigger, only the second one works. It may be a syntax problem, but everything I have tried doesn't work. Any ideas?
properties([
    parameters([
        booleanParam(defaultValue: true, description: 'test', name: 'test')
    ]),
    pipelineTriggers([
        parameterizedCron('15 20 * * * test=true'),
        parameterizedCron('05 20 * * * test=false')
    ])
]) //properties
According to the official documentation, your syntax is wrong: you are missing the %. You can also use a single multiline parameterizedCron.
pipeline {
    agent any
    parameters {
        string(name: 'PLANET', defaultValue: 'Earth', description: 'Which planet are we on?')
        string(name: 'GREETING', defaultValue: 'Hello', description: 'How shall we greet?')
    }
    triggers {
        cron('* * * * *')
        parameterizedCron('''
# leave spaces where you want them around the parameters. They'll be trimmed.
# we let the build run with the default name
*/2 * * * * %GREETING=Hola;PLANET=Pluto
*/3 * * * * %PLANET=Mars
''')
    }
    stages {
        stage('Example') {
            steps {
                echo "${GREETING} ${PLANET}"
                script { currentBuild.description = "${GREETING} ${PLANET}" }
            }
        }
    }
}
So in your case it should be
properties([
    parameters([
        booleanParam(defaultValue: true, description: 'test', name: 'test')
    ]),
    pipelineTriggers([
        parameterizedCron('''
15 20 * * * %test=true
05 20 * * * %test=false''')
    ])
]) //properties
Also please note that there is an open issue which indicates that, for the trigger to register in a scripted pipeline, the job needs to be run manually at least twice.

Jenkins scripted pipeline

Write a simple Jenkins scripted pipeline.
It should have two parameters (one checkbox, one textbox).
Include two stages in the pipeline; the first stage should be called based on whether the checkbox is checked or not.
A more targeted question would, I think, provide more benefit. However, to directly answer your request:
#!groovy
properties([
    buildDiscarder(logRotator(numToKeepStr: '20')),
    parameters([
        booleanParam(name: 'CHECKBOX', defaultValue: true, description: 'Tick a checkbox'),
        string(name: 'STRING', defaultValue: 'stringhere', description: 'Enter a string'),
    ])
])
node {
    try {
        if (params.CHECKBOX) {
            stage('Stage 1') {
                // do something conditionally
                echo("${params.CHECKBOX}")
            }
        }
        stage('Stage 2') {
            // do something else always
            echo(params.STRING)
        }
    }
    catch (err) {
        // catch an error and do something else
        throw err
    }
    finally {
        // finish with final mandatory tasks regardless of success/failure
        deleteDir()
    }
}
This starts off with Jenkins pipeline parameter syntax (https://jenkins.io/doc/book/pipeline/syntax/#parameters) and uses some basic pipeline steps (https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/) such as echo and sh, interspersed with standard Groovy for the conditional logic.
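As a small illustration of mixing a shell step with Groovy conditionals (the shell command itself is just a placeholder):

```groovy
node {
    stage('Conditional shell') {
        if (params.CHECKBOX) {
            // sh runs a shell command on the agent's workspace
            sh 'echo "checkbox was ticked"'
        } else {
            echo 'checkbox was not ticked'
        }
    }
}
```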

Jenkins declarative pipeline - User input parameters

I've looked for some example of user input parameters using Jenkins declarative pipeline, however all the examples are using the scripted pipelines. Here is a sample of code I'm trying to get working:
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                input id: 'test', message: 'Hello', parameters: [string(defaultValue: '', description: '', name: 'myparam')]
                sh "echo ${env}"
            }
        }
    }
}
I can't seem to work out how I can access the myparam variable, it would be great if someone could help me out.
Thanks
When using input, it is very important to use agent none on the global pipeline level, and assign agents to individual stages. Put the input procedures in a separate stage that also uses agent none. If you allocate an agent node for the input stage, that agent executor will remain reserved by this build until a user continues or aborts the build process.
This example should help with using the Input:
def approvalMap // collect data from the approval step

pipeline {
    agent none
    stages {
        stage('Stage 1') {
            agent none
            steps {
                timeout(60) { // timeout waiting for input after 60 minutes
                    script {
                        // capture the approval details in approvalMap
                        approvalMap = input(
                            id: 'test',
                            message: 'Hello',
                            ok: 'Proceed?',
                            parameters: [
                                choice(
                                    choices: 'apple\npear\norange',
                                    description: 'Select a fruit for this build',
                                    name: 'FRUIT'
                                ),
                                string(
                                    defaultValue: '',
                                    description: '',
                                    name: 'myparam'
                                )
                            ],
                            submitter: 'user1,user2,group1',
                            submitterParameter: 'APPROVER'
                        )
                    }
                }
            }
        }
        stage('Stage 2') {
            agent any
            steps {
                // print the details gathered from the approval
                echo "This build was approved by: ${approvalMap['APPROVER']}"
                echo "This build is brought to you today by the fruit: ${approvalMap['FRUIT']}"
                echo "This is myparam: ${approvalMap['myparam']}"
            }
        }
    }
}
When the input step returns, if it only has a single parameter it returns that value directly. If there are multiple parameters, it returns a map (hash, dictionary) of the values. To capture this return value we have to drop into a script block.
It is good practice to wrap your input code in a timeout step so that builds don't remain in an unresolved state for an extended time.
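For example, with a single parameter the input step returns the chosen value itself rather than a map (a minimal sketch; the parameter names are illustrative):

```groovy
timeout(60) {
    script {
        // with exactly one parameter, input returns the selected value directly
        def fruit = input(
            message: 'Pick a fruit',
            parameters: [choice(choices: 'apple\npear\norange', name: 'FRUIT')]
        )
        echo "You picked ${fruit}"
    }
}
```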
