How to interpret a string as a command to execute with parameters in Perl - bash

Trying to find an equivalent solution in Perl for the below. Suppose I have the following POSIX shell script:
#!/bin/sh
MY_CMD='WDT=$1; shift ; printf "%-${WDT}.${WDT}s\n" "$@"'
eval $MY_CMD
And suppose that the above is saved as my_script.sh
And if I execute it like ./my_script.sh 4 Hello to thise nice girls then the output will be:
Hell
to
this
nice
girl
How can I do the same using Perl? The main problem is how to take Perl's input parameters and pass them to my command saved in a variable, so that those parameters get evaluated according to the ones from the input (in the shell, eval does this properly).
Should it be that I have asked a horribly stupid question, please excuse me, as I just started learning Perl .o)
EDIT: I think I need to clarify the question a bit more... I NEED to have the command in a variable. I have simplified the script example here to make the problem easy to understand. All the solutions so far show how that same task can be done in Perl, but please don't focus on the command itself; it is just an example:
Other examples would be:
MY_CMD='export AWKNUMF="%.2f"; exe 93 "$1" "$2" "$3" $(shift 3; echo "$@") | sort -k1,1 | exe 93 ":" 1 2'
and so on...
The idea is that the MY_CMD variable would be populated with some command retrieved from a repository which expects some parameters, and I want those parameters to be provided in the input to the Perl script.
The SYNOPSIS would be
./perl_script.pl my_command_name [param1 [param2 ... [paramN]]]
The point here is that you should not focus on the content of the MY_CMD variable. It is just a shell command (or commands) which takes parameters along.
The Perl equivalent would be something like:
system($my_cmd, "@ARGV"); but this of course does not work as expected.

No eval needed in Perl (I doubt it's needed in the shell, either).
#!/usr/bin/perl
use warnings;
use strict;
my $width = shift;
printf "%-$width.${width}s\n", $_ for #ARGV;

There are two elements to your request.
Firstly - Perl uses @ARGV to hold command line parameters. You can access individual elements either with shift or with $ARGV[1].
Secondly - eval works in Perl too, but you'll probably find there's a better way of doing what you're trying to do.
So to take your example:
#!/usr/bin/perl
use strict;
use warnings;
my ( $WDT, @words ) = @ARGV;
foreach my $word ( @words ) {
    printf( "%-${WDT}.${WDT}s\n", $word );
}

How can I do the same using Perl?
Perl too has the eval() function. It can take a string and evaluate it in the context it was called from.
The main problem is how to take Perl's input parameters and pass them to my command saved in a variable, so that those parameters get evaluated according to the ones from the input (in the shell, eval does this properly).
You have to create the variables before the eval'ed Perl code can access them. Depending on the implementation, that would also bypass or conflict directly with use strict.
I'm aware of two ways to create the variables: using eval, or by manipulating the symbol table directly. Consider:
my $value_of_a = 10;
my $value_of_b = 20;
# alt1 : create vars using `eval`
eval '$a = '.$value_of_a;
eval '$b = '.$value_of_b;
eval q{ print "$a + $b = ", $a+$b, "\n"; };
# alt2 : create the vars by manipulating the symbol table directly
$var_name = "aa";
${"::$var_name"} = $value_of_a;
$var_name = "bb";
${"::$var_name"} = $value_of_b;
eval q{ print "$aa + $bb = ", $aa+$bb, "\n"; };
To avoid calling eval every time the code has to run, one can also wrap the code in a nameless function and eval it once; the resulting subroutine can then be called at any time. Consider:
# the input data:
my $var_name_a = 'a';
my $var_name_b = 'b';
my $value_of_a = 10;
my $value_of_b = 20;
my $cmd = 'print "$a + $b = ", $a+$b, "\n";';
# the preparations:
eval '$'.$var_name_a.' = '.$value_of_a;
eval '$'.$var_name_b.' = '.$value_of_b;
my $sub = eval 'sub { '. $cmd .' }';
# the execution:
$sub->();
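Since MY_CMD in the question is really shell code rather than Perl code, a third route is worth sketching (assuming /bin/sh is available): skip Perl's eval entirely and let the shell fill in the positional parameters by passing @ARGV through system.
#!/usr/bin/perl
use strict;
use warnings;
# sh -c CMD NAME ARGS... runs CMD with $0 = NAME and $1, $2, ... = ARGS,
# so the shell itself expands the parameters, just as eval "$MY_CMD" would
my $my_cmd = 'WDT=$1; shift; printf "%-${WDT}.${WDT}s\n" "$@"';
system('/bin/sh', '-c', $my_cmd, 'my_cmd', @ARGV) == 0
    or die "my_cmd failed: $?";
Called as ./perl_script.pl 4 Hello to thise nice girls, this prints the same five truncated words as the original my_script.sh.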

Related

How to iterate over multiple variables and echo them using Shell Script?

Consider the below variables, which are dynamic and might change each time. Sometimes there might even be 5 variables, but the length of all the variables will be the same every time.
var1='a b c d e... upto z'
var2='1 2 3 4 5... upto 26'
var3='I II III IV V... upto XXVI'
I am looking for a generalized approach to iterate the variables in a for loop, and my desired output should be like below.
a,1,I
b,2,II
c,3,III
d,4,IV
e,5,V
.
.
goes on upto
z,26,XXVI
If I use nested loops, then I get all possible combinations which is not the expected outcome.
Also, I know how to make this work for 2 variables using a for loop and shift, via the link below:
https://unix.stackexchange.com/questions/390283/how-to-iterate-two-variables-in-a-sh-script
With paste
paste -d , <(tr ' ' '\n' <<<"$var1") <(tr ' ' '\n' <<<"$var2") <(tr ' ' '\n' <<<"$var3")
a,1,I
b,2,II
c,3,III
d,4,IV
e...z,5...26,V...XXVI
But clearly having to add other parameter substitutions for more varN's is not scalable.
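One way to generalize it is to fold the variables into the result two at a time; here is a sketch (zip_strings is a made-up helper name):
zip_strings() {
    local result v
    result=$(tr ' ' '\n' <<<"$1")
    shift
    for v in "$@"; do
        result=$(paste -d , <(printf '%s\n' "$result") <(tr ' ' '\n' <<<"$v"))
    done
    printf '%s\n' "$result"
}
zip_strings "$var1" "$var2" "$var3" # pass as many variables as you like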
You need to "zip" two variables at a time.
var1='a b c d e...z'
var2='1 2 3 4 5...26'
var3='I II III IV V...XXVI'
zip_var1_var2 () {
    set $var1
    for v2 in $var2; do
        echo "$1,$v2"
        shift
    done
}
zip_var12_var3 () {
    set $(zip_var1_var2)
    for v3 in $var3; do
        echo "$1,$v3"
        shift
    done
}
for x in $(zip_var12_var3); do
    echo "$x"
done
If you are willing to use eval and are sure it is safe to do so, you can write a single function like
zip () {
    if [ $# -eq 1 ]; then
        eval echo \$$1
        return
    fi
    a1=$1
    shift
    x=$*
    set $(eval echo \$$a1)
    for v in $(zip $x); do
        echo "$1,$v"
        shift
    done
}
zip var1 var2 var3 # Note the arguments are the *names* of the variables to zip
If you can use arrays, then (for example, in bash)
var1=(a b c d e)
var2=(1 2 3 4 5)
var3=(I II III IV V)
for i in "${!var1[@]}"; do
    printf '%s,%s,%s\n' "${var1[i]}" "${var2[i]}" "${var3[i]}"
done
Use this Perl one-liner:
perl -le '@in = map { [split] } @ARGV; for $i ( 0..$#{ $in[0] } ) { print join ",", map { $in[$_][$i] } 0..$#in; }' "$var1" "$var2" "$var3"
Prints:
a,1,I
b,2,II
c,3,III
d,4,IV
e,5,V
z,26,XXVI
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
The input variables must be quoted with double quotes "like so", to keep the blank-separated words from being treated as separate arguments.
@ARGV is an array of the command line arguments, here $var1, $var2, $var3.
@in is an array of 3 elements, each element being a reference to an array obtained as a result of splitting the corresponding element of @ARGV on whitespace. Note that split splits the string on whitespace by default, but you can specify a different delimiter; it accepts regexes.
The subsequent for loop prints the elements of @in, separated by commas.
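Spelled out as a stand-alone script, the same logic reads like this (a sketch; the file name zip_cols.pl is made up):
#!/usr/bin/perl
use strict;
use warnings;
# one array reference per argument, each holding the whitespace-split words
my @in = map { [split] } @ARGV;
# walk the columns of the first list; print one comma-joined row per column
for my $i ( 0 .. $#{ $in[0] } ) {
    print join( ",", map { $in[$_][$i] } 0 .. $#in ), "\n";
}
Run it as ./zip_cols.pl "$var1" "$var2" "$var3".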
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlvar: Perl predefined variables
The following is (almost) a copy of this answer with a few tweaks that make it fit this question.
The Original Question
First let’s assign a few variables to play with, 26 tokens in each of them:
var1="$(echo {a..z})"
var2="$(echo {1..26})"
var3="$(echo I II III IV \
V{,I,II,III} IX \
X{,I,II,III} XIV \
XV{,I,II,III} XIX \
XX{,I,II,III} XXIV \
XXV XXVI)"
var4="$(echo {A..Z})"
var5="$(echo {010101..262626..10101})"
Now we want a “magic” function that zips an arbitrary number of variables, ideally in pure Bash:
zip_vars var1 # a trivial test
zip_vars var{1..2} # a slightly less trivial test
zip_vars var{1..3} # the original question
zip_vars var{1..4} # more vars, because we can
zip_vars var{1..5} # more vars, because why not
What could zip_vars look like? Here’s one in pure Bash, without any external commands:
zip_vars() {
    local var
    for var in "$@"; do
        local -a "array_${var}"
        local -n array_ref="array_${var}"
        array_ref=(${!var})
        local -ar "array_${var}"
    done
    local -n array_ref="array_${1}"
    local -ir size="${#array_ref[@]}"
    local -i i
    local output
    for ((i = 0; i < size; ++i)); do
        output=
        for var in "$@"; do
            local -n array_ref="array_${var}"
            output+=",${array_ref[i]}"
        done
        printf '%s\n' "${output:1}"
    done
}
How it works:
It splits all variables (passed by reference, i.e. by variable name) into arrays. For each variable varX it creates a local array array_varX.
It would actually be way easier if the input variables were already Bash arrays to start with (see below), but … we stick with the original question initially.
It determines the size of the first array and then blindly expects all arrays to be of that size.
For each index i from 0 to size - 1 it concatenates the ith elements of all arrays, separated by ,.
Arrays Make Things Easier
If you use Bash arrays from the very start, the script will be shorter and look simpler and there won’t be any string-to-array conversions.
zip_arrays() {
    local -n array_ref="$1"
    local -ir size="${#array_ref[@]}"
    local -i i
    local output
    for ((i = 0; i < size; ++i)); do
        output=
        for arr in "$@"; do
            local -n array_ref="$arr"
            output+=",${array_ref[i]}"
        done
        printf '%s\n' "${output:1}"
    done
}
arr1=({a..z})
arr2=({1..26})
arr3=( I II III IV
       V{,I,II,III} IX
       X{,I,II,III} XIV
       XV{,I,II,III} XIX
       XX{,I,II,III} XXIV
       XXV
       XXVI)
arr4=({A..Z})
arr5=({010101..262626..10101})
zip_arrays arr1 # a trivial test
zip_arrays arr{1..2} # a slightly less trivial test
zip_arrays arr{1..3} # (almost) the original question
zip_arrays arr{1..4} # more arrays, because we can
zip_arrays arr{1..5} # more arrays, because why not

Is there a way to loop variables from another file into my bash script?

Sorry to be a pain, but I'm not sure how I can loop values from an outside file into my bash script as variables. I have three variable names in my bash script:
$TAGBEGIN
$TAGEND
$MYCODE
In a separate varSrc.txt file, I have several variables:
# a - Some marker
tagBegin_a='/<!-- Begin A -->/'
tagEnd_a='/<!-- End A -->/'
code_a=' [ some code to replace in between tags ] '
# b - Some marker
tagBegin_b='/<!-- Begin B -->/'
tagEnd_b='/<!-- End B -->/'
code_b=' [ some code to replace in between tags ] '
# c - Some marker
...
I need my bash script to be able to loop through each "# marker"* section and perform a function:
source varSrc.txt
$TAGBEGIN
$TAGEND
$MYCODE
...
sed '
'"$TAGEND"' R '"$MYCODE"'
'"$TAGBEGIN"','"$TAGEND"' d
' -i $TARGETDIR
Note: sed code logic (not quoting mess) courtesy of Glenn J.
I need some kind of looping logic like:
for (var i = 0; i <= markers in varSrc.txt ; i++) {
// set bash vars equal to varSrc values
$TAGBEGIN= $tagBegin_i
$TAGEND= $tagEnd_i
$MYCODE= $code_i
// run the 'sed' replace command
sed '
'"$TAGEND"' R '"$MYCODE"'
'"$TAGBEGIN"','"$TAGEND"' d
' -i $TARGETDIR
}
Is this something that can be feasibly done in a bash script and is this a good approach? Any suggestions, pointers or guidance is very, very appreciated!
*(which I don't think is a real marker I can use)
[Answering the question as amended]
There's no need to use, iterate over, or think about markers at all. Leave them out.
source varSrc.txt
for beginVar in "${!tagBegin_@}"; do   # Iterate over defined begin variable names
  endVar=tagEnd_${beginVar#tagBegin_}  # Generate the name of the end variable
  codeVar=code_${beginVar#tagBegin_}   # Generate the name of the code variable
  begin=${!beginVar}                   # Look up the contents of the begin variable
  end=${!endVar}                       # Look up the contents of the end variable
  code=${!codeVar}                     # Look up the contents of the code variable
  sed -e "$end R $code" -e "$begin,$end d" -i "$file"
done
[Answers original, pre-amended question]
source only works if your input file is valid bash syntax; it isn't. Thus, you'll need to parse it yourself, something like the following:
begin= end= code=
while IFS= read -r; do
  case $REPLY in
    '#'*)
      # we saw a marker; process all vars seen so far
      [[ $begin && $end && $code ]] || continue # do nothing if we have no vars seen yet
      sed -e "$end R $code" -e "$begin,$end d" -i "$file"
      ;;
    '$TAGBEGIN='*) begin=${REPLY#'$TAGBEGIN='} ;;
    '$TAGEND='*) end=${REPLY#'$TAGEND='} ;;
    '$MYCODE='*) code=${REPLY#'$MYCODE='} ;;
  esac
done <varSrc.txt
What you can do is export your variables in your second file and then execute the script within your current environment (with a dot before the script name). To get the variable names/markers, you can parse the file and search for a $ or #.
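A rough sketch of that idea (assuming varSrc.txt contains plain NAME=value assignments):
source varSrc.txt
# recover the assigned names from the file, then expand them indirectly
for name in $(grep -o '^[A-Za-z_][A-Za-z_0-9]*=' varSrc.txt | tr -d '='); do
    echo "$name -> ${!name}"
done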

Set a shell array from an array in Perl script

I have the following Perl script:
sub {
    my $sequence = "SEQUENCE1";
    my $sequence2 = "SEQUENCE2";
    my @Array = ($sequence, $sequence2);
    return \@Array;
}
I want to retrieve the values of the array via a bash script
#!/bin/bash
seq=$(perl vectTEST.pl)
# retrieve the column 1 of the Array
echo $seq[0]
My approach doesn't work.
You can't return an array. The concept makes no sense since print produces a stream of bytes, not variables.
One solution is to output a text representation of the array and have the shell parse it.
For example,
$ IFS=$'\n' array=( $(
    perl -e'
        my @array = ("a b", "c d", "e f");
        print "$_\n" for @array;
    '
) )
$ echo ${#array[@]}
3
$ echo "${array[1]}"
c d
This particular implementation assumes your array can't contain newlines.
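If the elements may contain newlines, a variant of the same idea can use NUL as the separator instead (a sketch, assuming bash 4.4+ for mapfile -d ''):
$ mapfile -t -d '' array < <(
    perl -e'
        my @array = ("a b", "c d", "e\nf"); # note the embedded newline
        print "$_\0" for @array;            # NUL-terminate each element
    '
)
$ echo ${#array[@]}
3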
The other alternative is to print out shell code that recreates the array and eval that code in the shell.
For example,
$ eval "array=( $(
perl -e'
use String::ShellQuote qw( shell_quote );
my #array = ("a b", "c d", "e f");
print join " ", map shell_quote($_), #array;
'
) )"
$ echo ${#array[#]}
3
$ echo "${array[1]}"
c d
This is a robust solution.
You can do this, but you need to change vectTEST.pl -- currently you have an anonymous sub that you're not assigning to anything. Change the perl script to:
$vect = sub {
    my $sequence = "SEQUENCE1";
    my $sequence2 = "SEQUENCE2";
    my @Array = ();
    push(@Array, $sequence, $sequence2);
    return \@Array;
};
1;
Then, you can do this in bash:
mapfile -t seq < <(perl -E 'do "vectTEST.pl"; say join "\n", @{$vect->()}')
for idx in "${!seq[@]}"; do echo "$idx ${seq[idx]}"; done
0 SEQUENCE1
1 SEQUENCE2
Did you test your Perl script? In order to have that Perl script give you something to put into your shell script, you need to make sure your Perl script works:
$ test.pl
No output at all.
First issue: you put the whole Perl script in a sub. Subroutines in Perl don't execute unless you call them, and you can't even call this one since it doesn't have a name. Let's get rid of the subroutine:
my $sequence="SEQUENCE1";
my $sequence2="SEQUENCE2";
my @Array = ($sequence, $sequence2);
print \@Array . "\n";
Okay, now let's try the program:
$ test.pl
ARRAY(0x7f8bab8303e0)
You're printing out an array reference because of that \ in front of @Array. Let's print out the array itself:
my $sequence="SEQUENCE1";
my $sequence2="SEQUENCE2";
my @Array = ($sequence, $sequence2);
print @Array, "\n";
That will now print @Array:
$ test.pl
SEQUENCE1SEQUENCE2
Not quite. There are no spaces between the elements. Let's set the $, variable, which is the output field separator, to a single space:
my $sequence="SEQUENCE1";
my $sequence2="SEQUENCE2";
my @Array = ($sequence, $sequence2);
$, = ' ';
print @Array, "\n";
Now:
$ test.pl
SEQUENCE1 SEQUENCE2
Now, we have a working Perl program that outputs what we need to put into your shell array:
seq=($(test.pl))
echo ${seq[*]}
SEQUENCE1 SEQUENCE2
When you have an issue, you need to break it down into pieces. Your first issue is that your Perl script wasn't working. Once that was fixed, you could now use it to initialize your array in your Bash shell.

How to parse a string into variables?

I know how to parse a string into variables in the manner of this SO question, e.g.
ABCDE-123456
becomes:
var1=ABCDE
var2=123456
via, say, cut. I can do that in one script, no problem.
But I have a few dozen scripts which parse strings/arguments all in the same fashion (same arguments & variables, i.e. same parsing strategy).
And sometimes I need to make a change or add a variable to the parsing mechanism.
Of course, I could go through every one of my dozens of scripts and change the parsing manually (even if just copy & paste), but that would be tedious and more error-prone to bugs/mistakes.
Is there a modular way to do parse strings/arguments as such?
I thought of writing a script which parses the string/args into variables and then exports them, but the export command does not work from child to parent (only vice versa).
Something like this might work:
parse_it () {
    SEP=${SEP--}
    string=$1
    names=${@:2}
    IFS="$SEP" read $names <<< "$string"
}
$ parse_it ABCDE-123456 var1 var2
$ echo "$var1"
ABCDE
$ echo "$var2"
123456
$ SEP=: parse_it "foo:bar:baz" id1 id2 id3
$ echo $id2
bar
The first argument is the string to parse, the remaining arguments are names of variables that get passed to read as the variables to set. (Not quoting $names here is intentional, as we will let the shell split the string into multiple words, one per variable. Valid variable names consist of only _, letters, and numbers, so there are no worries about undesired word splitting or pathname generation by not quoting $names). The function assumes the string uses a single separator of "-", which can be overridden via the environment.
For more complex parsing, you may want to use a custom regular expression (bash 4 or later required for the -g flag to declare):
parse_it () {
    reg_ex=$1
    string=$2
    shift 2
    [[ $string =~ $reg_ex ]] || return
    i=1
    for name; do
        declare -g "$name=${BASH_REMATCH[i++]}"
    done
}
$ parse_it '(.*)-(.*):(.*)' "abc-123:xyz" id1 id2 id3
$ echo "$id2"
123
I think what you really want is to write your function in one script and include it in all of your other scripts.
You can include other shell scripts by the source or . command.
For example, you can define your parse function in parseString.sh
function parseString {
...
}
And then in any of your other script, do
source parseString.sh
# now we can call parseString function
parseString abcde-12345

Create associative array in bash 3

After thoroughly searching for a way to create an associative array in bash, I found that declare -A array will do the trick. But the problem is, it is only for bash version 4 and the bash version the server has in our system is 3.2.16.
How can I achieve some sort of associative array-like hack in bash 3? The values will be passed to a script like
ARG=array[key];
./script.sh ${ARG}
EDIT: I know that I can do this in awk, or other tools but strict bash is needed for the scenario I am trying to solve.
Bash 3 has no associative arrays, so you're going to have to use some other language feature(s) for your purpose. Note that even under bash 4, the code you wrote doesn't do what you claim it does: ./script.sh ${ARG} does not pass the associative array to the child script, because ${ARG} expands to nothing when ARG is an associative array. You cannot pass an associative array to a child process, you need to encode it anyway.
You need to define some argument passing protocol between the parent script and the child script. A common one is to pass arguments in the form key=value. This assumes that the character = does not appear in keys.
You also need to figure out how to represent the associative array in the parent script and in the child script. They need not use the same representation.
A common method to represent an associative array is to use separate variables for each element, with a common naming prefix. This requires that the key name only consists of ASCII letters (of either case), digits and underscores. For example, instead of ${myarray[key]}, write ${myarray__key}. If the key is determined at run time, you need a round of expansion first: instead of ${myarray[$key]}, write
n=myarray__${key}; echo ${!n}
For an assignment, use printf -v. Note the %s format to printf to use the specified value. Do not write printf -v "myarray__${key}" "$value" since that would treat $value as a format and perform printf % expansion on it.
printf -v "myarray__${key}" %s "$value"
If you need to pass an associative array represented like this to a child process with the key=value argument representation, you can use ${!myarray__*} to enumerate over all the variables whose name begins with myarray__.
args=()
for k in ${!myarray__*}; do
    n=${k#myarray__}     # strip the prefix so the child receives plain key=value
    args+=("$n=${!k}")
done
In the child process, to convert arguments of the form key=value to separate variables with a prefix:
for x; do
    if [[ $x != *=* ]]; then echo 1>&2 "KEY=VALUE expected, but got $x"; exit 120; fi
    printf -v "myarray__${x%%=*}" %s "${x#*=}"
done
By the way, are you sure that this is what you need? Instead of calling a bash script from another bash script, you might want to run the child script in a subshell instead. That way it would inherit from all the variables of the parent.
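For example (a sketch, with child.sh standing in for your real child script):
myarray__cow=moo # not exported
( . ./child.sh ) # sourced inside ( ... ): child.sh sees myarray__cow,
                 # but cannot modify the parent's variables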
Here is another post/explanation on associative arrays in bash 3 and older using parameter expansion:
https://stackoverflow.com/a/4444841
Gilles' method has a nice if statement to catch delimiter issues, sanitize oddball input, etc. Use that.
If you are somewhat familiar with parameter expansion:
http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
To use in your scenario [ as stated: sending to script ]:
Script 1:
sending_array.sh
# A pretend Python dictionary with bash 3
ARRAY=( "cow:moo"
        "dinosaur:roar"
        "bird:chirp"
        "bash:rock" )
bash ./receive_arr.sh "${ARRAY[@]}"
Script 2: receive_arr.sh
argAry1=("$@")
function process_arr () {
    declare -a hash=("${!1}")
    for animal in "${hash[@]}"; do
        echo "Key: ${animal%%:*}"
        echo "Value: ${animal#*:}"
    done
}
process_arr argAry1[@]
exit 0
Method 2, sourcing the second script:
Script 1:
sending_array.sh
source ./receive_arr.sh
# A pretend Python dictionary with bash 3
ARRAY=( "cow:moo"
        "dinosaur:roar"
        "bird:chirp"
        "bash:rock" )
process_arr ARRAY[@]
Script 2: receive_arr.sh
function process_arr () {
    declare -a hash=("${!1}")
    for animal in "${hash[@]}"; do
        echo "Key: ${animal%%:*}"
        echo "Value: ${animal#*:}"
    done
}
References:
Passing arrays as parameters in bash
If you don't want to handle a lot of variables, or keys are simply invalid variable identifiers, and your array is guaranteed to have less than 256 items, you can abuse function return values. This solution does not require any subshell as the value is readily available as a variable, nor any iteration so that performance screams. Also it's very readable, almost like the Bash 4 version.
Here's the most basic version:
hash_index() {
    case $1 in
        'foo') return 0;;
        'bar') return 1;;
        'baz') return 2;;
    esac
}
hash_vals=("foo_val"
           "bar_val"
           "baz_val");
hash_index "foo"
echo ${hash_vals[$?]}
More details and variants in this answer
You can write the key-value pairs to a file and then grep by key. If you use a pattern like
key=value
then you can egrep for ^key= which makes this pretty safe.
To "overwrite" a value, just append the new value at the end of the file and use tail -1 to get just the last result of egrep
Alternatively, you can put this information into a normal array, using key=value as the value of each element, and then iterate over the array to find the value.
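A minimal sketch of the file-backed approach described above (the put/get helpers and the store file name are made up for illustration):
store=/tmp/assoc.$$ # hypothetical backing file
put() { printf '%s=%s\n' "$1" "$2" >>"$store"; }
get() { egrep "^$1=" "$store" | tail -1 | cut -d= -f2-; }
put cow moo
put cow MOO # "overwrite" by appending
get cow     # prints: MOO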
This turns out to be ridiculously easy. I had to convert a bash 4 script that used a bunch of associative arrays to bash 3. These two helper functions did it all:
array_exp() {
    exp=${@//[/__}
    eval "${exp//]}"
}
array_clear() {
    unset $(array_exp "echo \${!$1__*}")
}
I'm flabbergasted that this actually works, but that's the beauty of bash.
E.g.
((all[ping_lo] += counts[ping_lo]))
becomes
array_exp '((all[ping_lo] += counts[ping_lo]))'
Or this print statement:
printf "%3d" ${counts[ping_lo]} >> $return
becomes
array_exp 'printf "%3d" ${counts[ping_lo]}' >> $return
The only syntax that changes is clearing. This:
counts=()
becomes
array_clear counts
and you're set. You could easily tell array_exp to recognize expressions like "=()" and handle them by rewriting them as array_clear expressions, but I prefer the simplicity of the above two functions.
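Such an extension might look like this (a sketch, building on the two helpers above):
array_exp() {
    # divert "name=()" to array_clear, pass everything else on as before
    if [[ $1 == *'=()'* ]]; then
        array_clear "${1%%=*}"
        return
    fi
    exp=${@//[/__}
    eval "${exp//]}"
}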
