I want to share a variable across rundeck job steps.
Initialized a job option "target_files"
Set the variable on STEP 1.
RD_OPTION_TARGET_FILES=$(some bash command)
echo $RD_OPTION_TARGET_FILES
The value is printed here.
Read the variable from STEP 2.
echo $RD_OPTION_TARGET_FILES
STEP 2 doesn't recognize the variable set in STEP 1.
What's a good way of doing this on rundeck other than using environment variables?
The detailed procedure for Rundeck 2.9+:
1) Set the values - three methods:
1.a) use a "global variable" workflow step type
e.g. fill in: Group:="export", Name:="varOne", Value:="hello"
1.b) add to the workflow a "global log filter" (the Data Capture Plugin cited by 'Amos' here), which takes a regular expression that is evaluated against each job step's log output. For instance, with a job step command like:
echo "CaptureThis:varTwo=world"
and a global log filter pattern like:
"CaptureThis:(.*?)=(.*)"
('Name Data' field not needed unless you supply a single capturing group in the pattern)
1.c) use a workflow Data Step to define multiple variables explicitly. Example contents:
varThree=foo
varFour=bar
2) get the values back:
you must use the ${ctx.name} syntax in command strings and args, and @ctx.name@ within INLINE scripts. In our example, an inline script line like:
echo "values : @export.varOne@, @data.varTwo@, @stub.varThree@, @stub.varFour@"
you'll echo the four values.
The context is implicitly 'data' for method 1.b and 'stub' for method 1.c.
Note that a Data Step is quite limited! It only lets you use the @stub.name@ notation within inline scripts. Value substitution is not performed in remote script files, and notations like ${stub.name} are not available in job step command strings or arguments.
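For comparison, values from methods 1.a and 1.b can also be retrieved in a job step command string with the brace syntax (a short sketch reusing the variables defined above):
echo "values : ${export.varOne}, ${data.varTwo}"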
Since Rundeck 2.9 there is a Data Capture Plugin that allows passing data between job steps.
The plugin is included in the Rundeck application by default.
The Data Capture plugin matches a regular expression in a step's log output and passes the captured values on to later steps.
For details, see Data Capture/Data Passing between steps (published 03 Aug 2017).
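As a minimal sketch of the plugin in use (the key=value line format below is illustrative; the actual pattern is whatever regex you configure in the log filter):
# STEP 1 command: emit a line for the log filter's regex to capture
echo "RUNDECK:DATA:myvar=hello"
# STEP 2 command string: reference the captured value
echo "captured: ${data.myvar}"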
For job inline scripts, there are essentially no options other than (1) exporting the value to the environment, or (2) writing the value to a temporary file in step 1 and reading it back in step 2, as sketched below.
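A minimal sketch of option (2), assuming both steps run on the same node and that /tmp/rd_shared is an agreed-upon path (both names are illustrative):
# STEP 1 inline script: compute the value and persist it
TARGET_FILES=$(ls /var/data)   # stand-in for "some bash command"
printf '%s\n' "$TARGET_FILES" > /tmp/rd_shared
# STEP 2 inline script: read it back in a fresh shell session
TARGET_FILES=$(cat /tmp/rd_shared)
echo "$TARGET_FILES"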
If you are using the "Scriptfile or URL" method, you may be able to execute the step 2 script within script 1 as a workaround, like:
Script1
#!/bin/bash
. ./script2
In the above case, script2 executes in the same shell session as script1, so the variables and their values are preserved.
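For example, script1 could set the shared variable and then run script2 in-process (a sketch; the variable name and the command are illustrative):
#!/bin/bash
# script1 (STEP 1): set the shared variable, then source the step-2 script
TARGET_FILES=$(ls /var/data)   # stand-in for "some bash command"
. ./script2                    # script2 can now read and reuse $TARGET_FILES
Here script2 simply uses the variable, e.g. echo "$TARGET_FILES".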
EDIT
Earlier there were no such options, but plugins are now available, so check Amos's answer.
Related
I'm trying to only run a Semaphore CI job when it's being run by the automated scheduler.
For this, the docs provide the environment variable SEMAPHORE_WORKFLOW_TRIGGERED_BY_SCHEDULE.
I'm trying to use this in the when clause. e.g.
dependencies: [System Tests]
run:
when: "SEMAPHORE_WORKFLOW_TRIGGERED_BY_SCHEDULE = true"
This does not work; I get the error
Unprocessable YAML file.
Error: "Invalid 'when' condition on path '#/blocks/5/run/when':
Syntax error on line 1. - Invalid expression on the left of '=' operator."
Referring to the env variable with a $ doesn't work either; apparently $ is an invalid character in the when clause.
How can I only run the CI job when it's scheduled?
when doesn't support environment variables. It only supports the notation specified here: https://docs.semaphoreci.com/reference/conditions-reference/.
Let me get your use case straight. Do you want to run specific blocks based on whether the pipeline is executed from the scheduler or not? If that's the case, when will not help you, unfortunately.
However, I think I can provide a workaround using an IF statement. You could put all the commands from the job inside an IF statement that checks SEMAPHORE_WORKFLOW_TRIGGERED_BY_SCHEDULE. This way, if the variable matches, you run the commands; if not, you skip them all and the job ends successfully right there, effectively "skipping" the job with almost no execution time. Something like this (the variable must be referenced with $ and the comparison quoted):
if [ "$SEMAPHORE_WORKFLOW_TRIGGERED_BY_SCHEDULE" = "true" ]; then run_commands; else echo "skipping commands"; fi
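Spelled out over several lines (Semaphore sets the variable to the literal string "true" for scheduler-triggered workflows; run_tests.sh is a hypothetical stand-in for the job's real commands):
if [ "$SEMAPHORE_WORKFLOW_TRIGGERED_BY_SCHEDULE" = "true" ]; then
  ./run_tests.sh   # the job's real commands go here
else
  echo "Not a scheduled run; skipping commands."
fi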
TL;DR
Trying to pass a computation of the form $(($LSB_JOBINDEX-1)) to a cluster call, but getting an error
$((2-1)): syntax error: operand expected (error token is "$((2-1))")
How do I escape it correctly, or what alternative command can I use so that this works?
Detailed:
For automations in my workflow, I am currently trying to write a script that automatically issues bsub commands in a predefined order.
Some of these commands are array jobs that are each supposed to work on one file.
If done without the cluster calls, it would look something like this:
samplearray=(sample0.fasta sample1.fasta) # array of input files
for s in "${samplearray[@]}"; do
  echo "$s" # some command on $s
done
For the cluster call I want to use an array job; the command for this looks like this:
bsub -J test[1-2] 'samplearray=(sample0.fastq sample1.fastq)' echo '${samplearray[$(($LSB_JOBINDEX-1))]}'
which launches two jobs with LSB_JOBINDEX set to 1 or 2 respectively, which is why I need to subtract 1 for correct zero-based indexing of the array.
The problem now is the $((...)) part: what is being executed on the node is ${samplearray[$\(\($LSB_JOBINDEX-1\)\)]}, which does not trigger the arithmetic but instead throws an error:
$((2-1)): syntax error: operand expected (error token is "$((2-1))")
What am I doing wrong here? I have tried other ways of escaping and quoting, but this was the closest I got to a correct solution.
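One approach that usually sidesteps the escaping problem is to hand bsub the entire command as one single-quoted string, so that the array definition and the arithmetic are evaluated by the shell on the execution host rather than at submit time (a sketch, not verified against this cluster):
bsub -J 'test[1-2]' 'samplearray=(sample0.fastq sample1.fastq); echo "${samplearray[$((LSB_JOBINDEX-1))]}"'
Inside the single quotes nothing is expanded at submit time; the execution host's shell sees the literal command, builds the array, and performs the arithmetic with its own LSB_JOBINDEX.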
Is it possible to concatenate a string to call a Bamboo variable?
Using a script task in Bamboo, I want to generalize the following:
python my.py moon ${bamboo.mynamespace.moon}
to
SET planet=MOON
python my.py %planet% ${bamboo.mynamespace.%planet%}
But doing it like the second example above results in my python script receiving
${bamboo.mynamespace.%planet%}
as a string and not the value of
${bamboo.mynamespace.moon}
I know... moon is not a planet
I don't think it's going to be possible the way you're using it, because once you use ${bamboo.variableName}, Bamboo tries to resolve the variable and substitute its value. Since there's no variable named %planet%, Bamboo can't reference it.
But I think you could reorganise your solution a bit and make use of environment variables (all Bamboo variables are passed to the process as environment variables). So, e.g., if a Bamboo variable's name is variable.name, you can reference it via ${bamboo_variable_name} (bamboo prefix + all dots replaced with underscores).
Then I can imagine you could get the variable that interests you via print(os.environ['bamboo_mynamespace_' + planet]), where planet holds e.g. 'moon' (more info on env variables in Python here).
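If the script task runs with bash, indirect expansion gives the same effect without touching the Python script (a sketch; it assumes a Unix agent and that the Bamboo variable mynamespace.moon arrives as the environment variable bamboo_mynamespace_moon):
planet=moon
varname="bamboo_mynamespace_${planet}"   # assumes this variable exists in the environment
python my.py "$planet" "${!varname}"     # ${!varname} is bash indirect expansion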
Is it possible to build an array from a parameterized jenkins build?
I have tried the https://wiki.jenkins-ci.org/display/JENKINS/Extended+Choice+Parameter+plugin, which allows me to create a single title with multiple options within it. So I built an extended choice called services with 5 services listed as check boxes.
However, when I try to loop over what I thought would be an array, ${services[@]}, I just get the single value of comma-separated values. I tried setting IFS=',' and that does not work.
Any ideas?
This just doesn't work with check boxes. If you use a text field and specify each value there, it will loop as if it were a true array.
You can first create an array from the Jenkins multiple-choice variable:
IFS=',' read -r -a services <<< "$services"
for service in "${services[@]}"; do
echo "$service"
done
Hi, I have written a shell script that accepts a Jenkins job name and triggers that build via the command line. On my Jenkins server I have different jobs which differ in the count and names of their parameters.
build_name=Jenkins_job_name # (the job name is the only input)
java -jar jenkins-cli.jar -s http://mydomain:8080/ build $build_name -s -p PARA1=$paravalue
This will work only if I give the exact names and count of the parameters the job expects.
I want to generalize this build-triggering script so that it fetches the parameter names for a specific job, accepts values for them, and then triggers the build.
NB: When I try the above script with a different parameter name ("KEY1" instead of "PARA1"), I get a message like:
'KEY1' is not a valid parameter. Did you mean "PARA1"?
You can use get-job $build_name on the CLI to get the job configuration XML.
The parameters are under:
<parameterDefinitions>
<name>PARA1</name>
There will be multiple instances of <name>...</name>
Alternatively, you can make a call to
http://mydomain:8080/job/$build_name/api/json
The parameters will be in format:
"parameterDefinitions":[{..., "name":"PARA1", ...}, ...]
There will be multiple instances of {..., "name":"...", ...},
Parse the output to get all parameters defined for the job. The JSON format is easier to handle from a shell, since it is all on a single line; a regex can detect the start of parameterDefinitions and capture the subsequent name values before the closing brackets.
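A sketch of that approach using jq instead of a hand-rolled regex (it assumes curl and jq are installed; the JSON path follows the parameterDefinitions structure shown above):
build_name=$1
# Fetch the job's parameter names from the JSON API
params=$(curl -s "http://mydomain:8080/job/$build_name/api/json" |
  jq -r '.property[]? | .parameterDefinitions[]? | .name')
# Prompt for a value for each parameter and build the -p argument list
args=()
for p in $params; do
  read -r -p "Value for $p: " value
  args+=(-p "$p=$value")
done
java -jar jenkins-cli.jar -s http://mydomain:8080/ build "$build_name" -s "${args[@]}"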