How to run a job based on the failure condition of another job in AutoSys - job-scheduling

I have two jobs, Job_A and Job_B.
I want Job_B to run based on the failure condition of Job_A.
Please suggest how I should set up the JIL for this condition.

In Job_B's definition, include the line:
condition: failure(Job_A)

You can achieve this by using the following attribute in Job_B's JIL file (job definition):
condition: f(Job_A)
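For context, a minimal Job_B definition might look like the sketch below; the command, machine, and owner values are placeholders I made up, and only the condition line is the part the answers above describe.

/* hypothetical Job_B definition -- only the condition attribute matters here */
insert_job: Job_B
job_type: c
command: /opt/scripts/recover_from_job_a.sh
machine: prod_batch_host
owner: batchuser
condition: f(Job_A)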

Related

Check that gitlab branch has changes before running jobs

In order to stop the current default GitLab behaviour of starting pipelines on branch creation, I am trying to add a check to each job so that only merge requests that have changes trigger jobs.
This is what I got so far:
rules:
- if: '[$CI_PIPELINE_SOURCE == "merge_request_event"] && [! git diff-index --quiet HEAD --]'
I am not quite familiar with bash which is surely the problem because I am currently encountering a 'yaml invalid' error :d
PS: Is there maybe a better way to do this instead of adding the check to each task?
I don't know if it is useful here, but GitLab CI provides the only job keyword, which you can combine with changes and a list of file paths; this way you can execute jobs only when there are changes to the code you are interested in.
Example
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    refs:
      - branches
    changes:
      - Dockerfile
      - docker/scripts/*
      - dockerfiles/**/*
      - more_scripts/*.{rb,py,sh}
      - "**/*.json"
DOC: https://docs.gitlab.com/ee/ci/yaml/#onlychanges--exceptchanges
I am not quite familiar with bash which is surely the problem because
I am currently encountering a 'yaml invalid' error :d
The issue seems to be with
[! git diff-index --quiet HEAD --]
You cannot use bash syntax in GitLab rules, but you can in the script section, as the name implies.
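As a small illustration (a sketch I am adding, not part of the original answer; the job name and rule are assumptions), the same shell command from the question is perfectly legal once it lives under script:

check for changes:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    # shell is valid here (unlike in rules);
    # git diff-index exits 0 when the working tree matches HEAD
    - git diff-index --quiet HEAD -- || echo "working tree differs from HEAD"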
In order to stop the current default Gitlab behaviour of starting
pipelines on branch creation
If this is your goal I would recommend the following rules
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
      when: never
    - if: $CI_PIPELINE_SOURCE == "push"
Let's break down the aforementioned rules
The following rule will be true for merge requests:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
The predefined variable CI_COMMIT_BEFORE_SHA is populated with the value 0000000000000000000000000000000000000000 when a pipeline runs for a newly created branch or for a merge request.
Therefore, the following rule would stop the execution of pipelines both for newly created branches and for merge requests.
- if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
  when: never
BUT merge requests are already accepted by the previous rule, taking into account how GitLab evaluates them.
Quoting from https://docs.gitlab.com/ee/ci/jobs/job_control.html#specify-when-jobs-run-with-rules
Rules are evaluated in order until the first match. When a match is
found, the job is either included or excluded from the pipeline
Finally, the following rule will be true for a new pushed commit
- if: $CI_PIPELINE_SOURCE == "push"
PS: Is there maybe a better way to do this instead of adding the check
to each task?
The aforementioned rules don't have to be added to each job; instead they are configured once per pipeline. Take a look at
https://docs.gitlab.com/ee/ci/yaml/workflow.html
Basically, add the workflow rules statement at the start of your .gitlab-ci.yml and you are good to go.
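To make that concrete, a minimal sketch of a .gitlab-ci.yml with the workflow block at the top might look like the following; the build job and its script are placeholders, not something from the original answer:

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
      when: never
    - if: $CI_PIPELINE_SOURCE == "push"

build:
  stage: build
  script:
    # placeholder command; every job in the file inherits the workflow rules above
    - echo "building..."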

How to pass variable values (VM names) from a text file to a Jenkins job so that it performs the task on each VM

I have a file consisting of 10 VM names. I want to pass each VM name as a parameter to a Jenkins job, so that the job performs the specified task on each machine. Can someone suggest how this can be done, and how it can be done using a pipeline script?
Example
File.txt contains the values below:
VM1
VM2
VM3
..
vm10
and I want to pass the values to the Jenkins job called "Setupenvironment".
Please suggest.
You can use the file parameter to pass a file to your job. For a declarative pipeline it looks like the following:
pipeline {
    parameters {
        file(name: 'FILE', description: 'Some file to upload')
    }
    // ... stages that use the uploaded file ...
}
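Going one step further toward the asker's scenario, here is a rough scripted-pipeline sketch I am adding (File.txt and the job name Setupenvironment come from the question; the parameter name VM_NAME and everything else are assumptions): read the file, then trigger the downstream job once per VM name.

// Sketch only, assuming File.txt sits in the workspace and Setupenvironment
// accepts a string parameter named VM_NAME (a made-up name for illustration)
node {
    def vmNames = readFile('File.txt').readLines().findAll { it.trim() }
    for (vm in vmNames) {
        // trigger the downstream job once per VM name
        build job: 'Setupenvironment',
              parameters: [string(name: 'VM_NAME', value: vm.trim())]
    }
}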

How to access an environment variable that I defined from the last successful build

I am trying to access an environment variable that I defined in job 'A' from another job 'B'.
Job 'A' defines it by -
env.upload_loaction = "loc"
In job B I am trying to access the last successful build of Job A and get that variable -
def item = Jenkins.instance.getItem("deploy-artifact-pipeline")
def dev_deployed_build=item.getLastSuccessfulBuild()
def envVars= dev_deployed_build.getEnvVars()
echo envVars['upload_loaction'] // prints null
echo envVars['BUILD_NUMBER'] // prints 21
My custom variable is not recognized but generic ones like build_number is available.
When I trigger Job A as a downstream job, then I can access it using -
def jLz = build (job: 'deploy-artifact-pipeline')
echo jLz.buildVariables.PROCESSOR_UPLOAD_LOCATION // prints loc
Can someone help me with this? Or is there a better way to store and access that variable from a previous build ?
The EnvInject plugin and Pipeline jobs don't go hand in hand. If some environment variable needs to be stored, I usually prefer that job to be a freestyle job, where you can save your env variable at the end of the build and access it for a specific build with "injectedEnvVars/api/json".
An alternative approach I have used is to create an artifact file with the required information; in the other Jenkins pipeline I download the artifact file and read the information from it.
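As a rough sketch of that artifact approach (the file name, the use of the Copy Artifact plugin, and the exact steps are my assumptions, not the original answer's code; the job name deploy-artifact-pipeline comes from the question):

// In job A (deploy-artifact-pipeline): persist the value as an artifact
writeFile file: 'upload_location.txt', text: 'loc'
archiveArtifacts artifacts: 'upload_location.txt'

// In job B: copy the artifact from A's last successful build and read it
// (requires the Copy Artifact plugin -- an assumption here)
copyArtifacts projectName: 'deploy-artifact-pipeline', selector: lastSuccessful()
def uploadLocation = readFile('upload_location.txt').trim()
echo "upload location from job A: ${uploadLocation}"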

Bamboo: Access script variable in subsequent maven task

I have seen many posts where people ask about accessing Bamboo variables in a script, but this is not about that.
I am defining a variable in a Shell Script task, as below, and then I would like to access that variable in the subsequent Maven task.
#!/bin/sh
currentBuildNumber=${bamboo.buildNumber}
toSubtract=1
newVersion=$(( currentBuildNumber - toSubtract ))
echo "Value of newVersion: ${newVersion}"
This one works perfectly fine. However, I have a subsequent Maven 3 task where, when I try to access this variable by typing ${newVersion}, I get the error below:
error 07-Jun-2019 14:12:20 Exception in thread "main" java.lang.StackOverflowError
simple 07-Jun-2019 14:12:21 Failing task since return code of [mvn --batch-mode -Djava.io.tmpdir=/tmp versions:set -DnewVersion=1.0.${newVersion}] was 1 while expected 0
Basically, I would like to automate the version number of the built jar files just by using ${bamboo.buildNumber} and subtracting some number so that I won't have to enter the new version number every time I run a build.
Appreciate your help... thanks,
EDIT: I posted the same question on Atlassian forum too... I will update this post when I get an answer there... https://community.atlassian.com/t5/Bamboo-questions/Bamboo-Access-script-variable-in-subsequent-maven-task/qaq-p/1104334
Generally, the best solution I have found is to output the result to a file and use the Inject Variables task to read the variable into the build.
For example, in some builds I need a SUFFIX variable, so in a bash script I end up doing
SUFFIX=suffix=-beta-my-feature
echo $SUFFIX >> .suffix.cfg
Then I can use the Inject Variables Task to read that file
Make sure it is a Result variable, and you should be able to get to it using ${bamboo.NAMESPACE.name}; for the suffix one it would be ${bamboo.VERSION.suffix}.
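Putting the pieces together for the version-number case in the question, here is a sketch I am adding (the namespace VERSION, the variable name newVersion, and the file name version.cfg are placeholders; the Inject task settings are described in comments rather than shown):

#!/bin/sh
# Script task: compute the version and write it as key=value for injection
newVersion=$(( ${bamboo.buildNumber} - 1 ))
echo "newVersion=${newVersion}" > version.cfg

# Inject Bamboo variables task (configured in the UI, not in this script):
#   File: version.cfg, Namespace: VERSION, Scope: Result
# Maven 3 task goal (references the injected variable, not the shell one):
#   versions:set -DnewVersion=1.0.${bamboo.VERSION.newVersion}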

How can I run a specific 'thread group' in a JMeter 'test plan' from the command line?

How can I run a specific thread group in a Test Plan from the command line? I have a Test Plan (project file) that contains two "thread groups": one for crawling a site and another for calling specific urls with parameters. From the command line I execute with Maven, like so:
mvn.bat -Dnamescsv=src/test/resources/RandomLastNames.csv
-Ddomainhost=stgweb.domain.com -Dcrawlerthreads=2 -Dcrawlerloopcount=10 -Dsearchthreads=5 -Dsearchloopcount=5 -Dresultscsv=JmeterResults.csv clean test verify
I want to pass an argument to run only one of the two "thread groups" in that project file. Can you do that with JMeter? I don't want to use an IF controller unless I have to, because it feels like a "hack". I know that SoapUI lets you do this with the '-s' option.
I asked this question on the JMeter forum also.
In our tests we use the while controller. It doesn't look like a hack to me and works well. You can turn thread groups on and off easily with the JMeter properties. Note you can't change its status when the test is already running though.
Add a While controller with the condition ${__P(threadActive)}
Set the JMeter property at JMeter load time ( -JthreadActive=true )
Run the test
Please note ${__P(threadActive)} equates to ${__P(threadActive)} == true; anything other than true will result in that thread group not running.
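As a sketch of how this could look for the two thread groups in the question (the property names crawlerActive and searchActive are made up for illustration, and each thread group is assumed to be wrapped in its own While controller; the question itself runs JMeter via Maven, this shows the plain JMeter CLI equivalent):

# non-GUI run: enable only the search thread group, skip the crawler one
# (per the answer, a While condition that is not "true" skips that group)
jmeter -n -t TestPlan.jmx -JcrawlerActive=false -JsearchActive=true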

Resources