Build array from jenkins parameters - bash

Is it possible to build an array from a parameterized jenkins build?
I have tried the https://wiki.jenkins-ci.org/display/JENKINS/Extended+Choice+Parameter+plugin which allows me to create a single title with multiple options within it. So I built an extended choice parameter called services with 5 services listed as check boxes.
However, when I try to loop over what I thought would be an array, ${services[@]}, I just get a single string of comma-separated values. I tried setting IFS=',' and that does not work.
Any ideas?

This just doesn't work with check boxes. If you use a text field and specify each variable there, it will loop as if it were a true array.

You can create an array first from Jenkins multiple choice variable:
IFS=',' read -r -a services <<< "$services"
for service in "${services[@]}"; do
    echo "$service"
done
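A quick way to sanity-check the splitting outside Jenkins (the value below is a hypothetical stand-in for what the checkbox parameter passes in):

services="db,web,cache"
IFS=',' read -r -a services <<< "$services"
printf '%s\n' "${services[@]}"

This prints each service on its own line, confirming the comma-separated string was split into a real array.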

Related

Dynamic variable names based on value

Probably a noob question, but I'm having a particular challenge where I want to dynamically generate variable names based on a value. E.g. if I require 3 variables, the variable names should be incremented dynamically: var_0, var_1, var_2 respectively.
The solution I have right now is barebones. I'm just using read -p to get user input and save it to a variable. So for 5 hosts, I've just duplicated this 5 times to get the job done, but it's not a clean solution. I was looking around and reading through declare and eval but haven't gotten anywhere.
I'm thinking of a solution where I input the number of hosts first, and the script dynamically asks for user input based on that number and saves it to dynamically incremented variables.
Any help is appreciated, cheers
Use arrays. However, you don't have to ask the user for the number of hosts. Let them enter as many hosts as they like and finish by entering an empty line:
echo "Please enter hosts, one per line."
echo "Press enter on an empty line to finish."
hosts=()
while read -r -p "host ${#hosts[@]}: " host && [ -n "$host" ]; do
hosts+=("$host")
done
echo "Finished"
Alternatively, let the user enter all hosts on the same line separated by whitespace:
read -p "Please enter hosts, separated by whitespace: " -a hosts
To access the individual hosts use ${hosts[0]}, ${hosts[1]}, ..., ${hosts[${#hosts[@]}-1]}.
To list them all at once use ${hosts[@]}.
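A typical next step is to loop over whatever was collected, e.g. (a sketch; the echo stands in for real per-host work such as an ssh call):

for host in "${hosts[@]}"; do
    echo "connecting to $host"
done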
Use an array instead!
But read can actually create dynamic vars, like this:
for i in {1..3}; do
read -p 'hello: ' var_$i
done
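The awkward part of dynamic names is reading them back; bash's indirect expansion ${!name} handles that (a sketch building on the loop above):

for i in {1..3}; do
    name="var_$i"
    echo "var_$i = ${!name}"
done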

BASH : Problem regarding hash table/dictionary and bad substitution

Context:
I started writing this script to easily change connections on my Raspberry Pi Zero (Raspbian Lite as the OS). I always needed to edit the wpa_supplicant config file and decided to do something about it, as it is a really portable PC.
How it works:
The core of the program is creating profiles, in the form of dictionaries, to store the name and password, and applying a profile when needed. The profiles are added to the script code itself: every time a new profile is created, these two lines are generated with the corresponding profile name. For example:
declare -A profile1
profile1=( ["name"]="name" ["pass"]="pass")
Problem:
To apply a profile I run "./script --use profile1" at the terminal prompt, so my goal is for the script to get the details of the desired profile.
When I write that as:
echo "${$2[name]}" it outputs a "bad substitution" error.
Things I tried and checked:
Shebang is #!/bin/bash
I tried substituting $2 into a string and executing it, but I didn't get anywhere.
Things to consider:
Here is the link to the script so you can test it yourself; some things are a bit more complex than indicated in the post, but I have simplified it here.
https://github.com/gugeldot23/wpa_scrip
You need nameref variables if you want to address the profile array name by reference:
declare -n profile # nameref variable profile
profile="$2" # Fills-in the nameref from argument 2
# Address the nameref instead of echo "${$2[name]}"
echo "${profile[name]}"
See: gnu.org Bash Manual / Bash Builtins / declare:
-n
Give each name the nameref attribute, making it a name reference to another variable. That other variable is defined by the value of name. All references, assignments, and attribute modifications to name, except for those using or changing the -n attribute itself, are performed on the variable referenced by name’s value. The nameref attribute cannot be applied to array variables.
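Putting it together, a minimal self-contained sketch of the --use dispatch (the profile contents below are hypothetical placeholders):

#!/bin/bash
# Hypothetical profiles, generated by the script as in the question
declare -A profile1=( [name]="HomeWifi" [pass]="secret1" )
declare -A profile2=( [name]="PhoneHotspot" [pass]="secret2" )

if [ "$1" = "--use" ]; then
    declare -n profile="$2"   # nameref: $2 holds the array's name, e.g. profile1
    echo "SSID: ${profile[name]}"
    echo "PSK:  ${profile[pass]}"
fi

Running ./script --use profile1 would then print the first profile's details.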

Creating a list of variables and calling them later - bash

I want to create a list of variables, so that later I can prompt users and, if their input matches a variable in that list, use that variable within a command. For example:
var1=a
var2=b
...
read input
(user chooses var1) command $var1, rest of the command
The biggest problem is that this list will be huge. What would be the best solution? Thanks!
You are looking for the associative array feature in bash 4 or later.
declare -A foo
foo[us]="United States"
foo[uk]="Great Britain"
read -p "Region? " region
echo "${foo[$region]}"
If the value of $region is not a defined key, then ${foo[$region]} will be treated like any unset variable.
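If you need to tell a missing key apart from an empty value, bash 4.3+ supports the -v test on array elements (a sketch reusing foo from above):

read -p "Region? " region
if [[ -v foo[$region] ]]; then
    echo "${foo[$region]}"
else
    echo "unknown region: $region" >&2
fi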

Rundeck sharing variables across job steps

I want to share a variable across rundeck job steps.
Initialized a job option "target_files"
Set the variable on STEP 1.
RD_OPTION_TARGET_FILES=$(some bash command)
echo $RD_OPTION_TARGET_FILES
The value is printed here.
Read the variable from STEP 2.
echo $RD_OPTION_TARGET_FILES
STEP 2 doesn't recognize the variable set in STEP 1.
What's a good way of doing this on rundeck other than using environment variables?
The detailed procedure for Rundeck 2.9+:
1) set the values - three methods:
1.a) use a "global variable" workflow step type
e.g. fill in: Group:="export", Name:="varOne", Value:="hello"
1.b) add a "global log filter" to the workflow (the Data Capture Plugin cited by 'Amos' here), which takes a regular expression that is evaluated against job step log output. For instance, with a job step command like:
echo "CaptureThis:varTwo=world"
and a global log filter pattern like:
"CaptureThis:(.*?)=(.*)"
(the 'Name Data' field is not needed unless you supply a single capturing group in the pattern)
1.c) use a workflow Data Step to define multiple variables explicitly. Example contents:
varThree=foo
varFour=bar
2) get the values back:
you must use the syntax ${ctx.name} in command strings and args, and #ctx.name# within inline scripts. In our example, with a job step command or inline script line like:
echo "values : #export.varOne#, #data.varTwo#, #stub.varThree#, #stub.varFour#"
you'll echo the four values.
The context is implicitly 'data' for method 1.b and 'stub' for method 1.c.
Note that a data step is quite limited! It only lets you use #stub.name# notations within inline scripts. Value substitution is not performed in remote files, and notations like ${stub.name} are not available in job step command strings or arguments.
After Rundeck 2.9, there is a Data Capture Plugin that allows passing data between job steps.
The plugin is contained in the Rundeck application by default.
The Data Capture plugin matches a regular expression in a step's log output and passes the values to later steps.
For details, see Data Capture/Data Passing between steps (published 03 Aug 2017).
There is almost no way to do this in job inline scripts other than (1) exporting the value to the environment, or (2) writing the value to a third file in step 1 and having step 2 read it from there.
If you are using the "Scriptfile or URL" method, you may be able to execute the step 2 script from within script 1 as a workaround, like:
Script1
#!/bin/bash
. ./script2
In the above case script2 will execute in the same session as script1 so the variables and values are preserved.
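For completeness, a hypothetical script2 to pair with the snippet above; because it is sourced, any variable it sets is visible in Script1 afterwards:

#!/bin/bash
# script2 (hypothetical): sourced by Script1, so this assignment persists
TARGET_FILES="/tmp/step1_output.txt"

After the . ./script2 line, Script1 can simply echo "$TARGET_FILES".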
EDIT
Earlier there were no such options, but plugins have since become available. Hence, check Amos's answer.

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just reference that file from every script. The issue is that I am using awk to grab the paths at the beginning of each file, and it's turning out to be a lot of repetition.
My question is: is there a way to store key-value pairs in a file so that the shell can natively look up a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map inside one specific file would result in parameters being passed everywhere. Is there some little quirk I do not know of that performs a map function on an external file?
You can make a file of env var assignments and source that file as needed, i.e.
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming bash shell), i.e.
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d $path2 ]] ; then
cd $path2
else
echo "no pa4h2=$path2 found, can't continue" 1>&1
exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash, there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would
declare -A map
while read -r key value; do
map[$key]=$value
done < filename
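Once loaded, lookups are direct; assuming filename contains a line like path1 /x/y/z (hypothetical contents), then:

echo "${map[path1]}"   # prints /x/y/z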
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."
