Passing several values between two shell scripts - shell

I have two shell scripts, a master script and a function script; the master calls the function script and needs to pass values to it.
Please note that the function script is interactive: it waits for the user's answers and acts according to each answer.
So to pass one value I can write the following:
echo "string" | ./function-script
The problem is that I have to pass several values. Any advice?

Can the "function-script" operate on positional parameters? If so, you'd call it like:
./function-script arg1 "argument 2" arg3
And then "function-script" would use "$1", "$2" and "$3" as required.
If "function-script" only takes input on stdin, do something like this:
printf "%s\n" arg1 "argument 2" arg3 | ./function-script
And "function-script" would do:
IFS= read -r arg1
IFS= read -r arg2
IFS= read -r arg3
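For example, a minimal sketch of the two scripts (the script names and answer values here are illustrative):
master:
#!/bin/bash
printf '%s\n' "yes" "42" "output.txt" | ./function-script
function-script:
#!/bin/bash
IFS= read -r answer1    # receives "yes"
IFS= read -r answer2    # receives "42"
IFS= read -r answer3    # receives "output.txt"
echo "running with: $answer1 $answer2 $answer3"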

Simple solution:
Don't try to pass multiple variables.
Just export all the variables within the master script using export a=1 syntax.
Then call the child script from the master like a regular script.
All the variables will be available in the child script.
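A minimal sketch of that approach (the variable and script names are illustrative):
master:
#!/bin/bash
export env_name="dev01"
export role="tester"
./function-script    # the exported variables travel in the child's environment
function-script:
#!/bin/bash
echo "deploying role $role to $env_name"    # reads the inherited variables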

Use command line arguments.
./function-script "string" "another string"
If you pre-empt standard input by piping data into the function script, you make interactive operation of the function script hard.
You could instead export the variables as environment variables, but just as global variables in regular programming are not a good idea because their use is hidden, so too with environment variables.

Related

What does the shellenv command do? [duplicate]

After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and which would be its typical uses.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using braces fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefixes and suffixes, but you can't have arbitrary bash syntax, and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
$ echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Always put double quotes around variable and command substitutions, unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
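For instance, a minimal sketch of that case (the command string is illustrative):
cmd='ls /tmp | wc -l'    # the string contains the | operator
eval "$cmd"              # eval re-parses the string, so the pipe takes effect
Without eval, running $cmd would word-split the string and pass | and wc as literal arguments to ls.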
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (the backslash is needed; otherwise the shell tries to expand ${$n} itself, which is a bad substitution)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
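A well-known real-world instance of this pattern is ssh-agent, which prints shell commands on standard output precisely so that the caller can eval them:
eval "$(ssh-agent -s)"    # sets SSH_AUTH_SOCK and SSH_AGENT_PID in the current shell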
The eval statement tells the shell to take eval's arguments as a command and run it through the command line. It is useful in situations like the following:
If your script stores a command in a variable and you later want to run that command, you need eval:
a="ls | more"
$a
Output:
ls: cannot access '|': No such file or directory
ls: cannot access 'more': No such file or directory
The above command didn't work: after word splitting, ls received | and more as file-name arguments and tried to list files with those names, but no such files exist:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
Update: Some people say one should -never- use eval. I disagree. I think the risk arises when corrupt input can be passed to eval. However there are many common situations where that is not a risk, and therefore it is worth knowing how to use eval in any case. This stackoverflow answer explains the risks of eval and alternatives to eval. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
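As one such alternative, the ls | more example above can be written as a function instead of a string, which needs no eval and no re-parsing of data as code:
a() { ls | more; }
a    # runs the pipeline directly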
The bash eval statement allows you to execute lines of code calculated or acquired by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language, and the bash interpreter's parsing order can be nuanced and make eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code, in scope of your script, the program will crash on the eval statement as it tries to execute garbage.
When testing you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
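For instance, applying that tip to the indirection example from earlier in this thread:
n=1
set -- one two three
echo echo \${$n}    # prints the code that eval would run: echo ${1}
eval echo \${$n}    # prints: one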
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval statement has no purpose and can be eliminated. eval is pointless in the first line because there is no dynamic aspect to the code: it already parses into the final line of bash code, so it behaves identically to a normal statement in the script. The second eval is pointless too: although there is a parsing step converting $a to its literal string equivalent, there is no indirection (e.g. no referencing via the string value of an actual bash variable name), so it behaves identically to the same line of code without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on the parsing-order and peculiarities of bash. The eval line would roughly get parsed internally in the following order (note the following statements are pseudocode, not real code, just to attempt to show how the statement would get broken down into steps internally to arrive at the final result).
eval eval \$$keyA=\$$valA # the interpreter substitutes $keyA and $valA and removes the backslashes
eval eval $keyB=$valB # what the interpreter hands over: the first eval is asked to run `eval $keyB=$valB`
eval that=amazing # the first eval substitutes $keyB and $valB; the second eval is asked to run `that=amazing`
that=amazing # the final assignment that actually executes
If the assumed parsing order isn't enough to explain what eval is doing, the fourth example may describe the parsing in more detail to help clarify what is going on.
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do
    eval varval=\$$varname
    if [ -z "$varval" ]; then
        read -p "$varname? " $varname
    fi
done
In the first iteration:
varname="a" (the for-loop expanded $myvarname_a to produce this value)
Bash expands the argument to eval, so eval sees literally this at runtime:
varval=$a
The following pseudocode attempts to illustrate how bash interprets the above line of real code, to arrive at the final value executed by eval (the following lines are descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # The interpreter resolves this substitution before eval runs
2. .................. "a" # $varname holds "a", set by the for-loop from $myvarname_a
3. eval "varval=$a" # This requires one more parsing step
4. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
I deliberately never learned how to use eval, because most people recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but we can use cut to get rid of all the junk, wrap it in a command substitution, and execute the string with eval:
eval "$(cut -d ' ' -f 7- /etc/cron.d/repeatme)"
The cut command prints fields 7 onward of the file, delimited by spaces (fields 1-5 are the schedule and field 6 is the user to run as). eval then executes that command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating beforehand.
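In that spirit, you can preview exactly what will be evaluated before running it:
cut -d ' ' -f 7- /etc/cron.d/repeatme    # prints: program arg1 arg2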
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion when the eval command line itself is parsed, leaving them to be expanded during eval's own parsing pass (inside the command-substitution subshell).
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via eval, you can pass by reference. The callee passes back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller is allowed to specify the name(s) of the result variable(s) - see the example below. Error results can be passed back in standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash

isint()
{
    re='^[-]?[0-9]+$'
    [[ $1 =~ $re ]]
}

# args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
    if isint ${2} && isint ${3} ; then
        echo "$1=$((${2}+${3}));errno=0"
        return 0
    else
        echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
        return 1
    fi
}

var=1
echo "[1] var=$var"

eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
    echo "errstr=$errstr"
    echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"

eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
    echo "errstr=$errstr"
    echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought', but that's beyond the scope of this post.
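As an aside, if you'd rather avoid eval, bash 4.3 and later support namerefs (declare -n), which give the same pass-by-reference effect; a minimal sketch (iadd2 is a hypothetical variant of iadd above, reusing its isint helper):
iadd2()
{
    declare -n result=$1    # result is now an alias for the caller's variable
    isint "$2" && isint "$3" || return 1
    result=$(( $2 + $3 ))
}
var=1
iadd2 var "$var" 1 && echo "var=$var"    # prints var=2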
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that files a and tty do not exist. I understood this to mean that tty is not being interpreted before execution of grep, but instead that bash passed tty as a parameter to grep, which interpreted it as a file name. (What actually happens: inside the pipeline, the tty in the command substitution inherits the pipe as its standard input, so it prints "not a tty"; grep then takes not as its pattern and a and tty as file operands.)
There is also a situation of nested redirection, which one might expect to be handled by matched parentheses specifying a child process; but bash is primarily a word separator, creating parameters to be sent to a program, so parentheses are not matched first, but interpreted as seen.
I got specific with grep, and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing output from a command as a file, so that i/o piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner. Instead, bash is an old 1,2,3 program, which has been appended with features. For purposes of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to first match parentheses, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to go through your script line by line (pressing Enter for each step). You can use eval to execute every line by trapping DEBUG (the DEBUG trap fires before each simple command):
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
    echo " > Count: $#"
    echo " > Pars : $*"
    echo " > par1 : $1"
    echo " > par2 : $2"
    if [[ $# = 1 && $1 = "par1 par2" ]]; then
        echo " > PASS"
    else
        echo " > FAIL"
        return 1
    fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval will fix that.
It was adapted from another answer on How can I reference a file for variables using Bash?

Appending command line arguments to a Bash array

I am trying to write a Bash script that appends a string to a Bash array, where the string contains the path to a Python script together with the arguments passed into the Bash script, enclosed in double quotes.
If I call the script using ./script.sh -o "a b", I would like a CMD_COUNT of 1, but I am getting 2 instead.
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript.py \"${#}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
How can I ensure that the appended string is /path/to/myscript.py "-o" "a b"?
EDIT: The full script is actually like this:
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript2.py")
COMMANDS+=("/path/to/myscript.py \"${#}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
for i in ${!COMMANDS[*]}
do
echo "${0} - command: ${COMMANDS[${i}]}"
${COMMANDS[${i}]}
done
It's a bad idea, but if it's what you really want, printf %q can be used to generate a string that, when parsed by the shell, will result in a given list of arguments. (The exact escaping might not be identical to what you'd write by hand, but the effect of evaluating it -- using eval -- will be).
#!/bin/bash
declare -a COMMANDS=( )
printf -v command '%q ' "/path/to/myscript" "$@"
COMMANDS+=( "$command" )
CMD_COUNT=${#COMMANDS[@]}
echo "$CMD_COUNT"
...but, as I said, this is all a bad idea.
Best-practice ways to encapsulate code as data in bash involve using functions, or arrays with one element per argument.
eval results in code that's prone to security bugs.
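A hedged sketch of the arrays-with-one-element-per-argument approach, rewriting the script from the question (myscript2.py and myscript.py are the question's placeholders):
#!/bin/bash
# one array per command, one element per argument
cmd1=( /path/to/myscript2.py )
cmd2=( /path/to/myscript.py "$@" )    # "$@" keeps each script argument intact
"${cmd1[@]}"    # runs the first command
"${cmd2[@]}"    # runs the second command with the original arguments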

Filtering GNU split with a custom shell function

Is there a way to use GNU split --filter with a custom shell function, like
my_func () {
    echo $1
}
split -d -l 10 INPUT_FILE chunk_ --filter="my_func $FILE$"
which I would expect to output
chunk_00
chunk_01
...
Of course, the echo in the custom function is just to illustrate my question here; in my concrete case, the custom function creates a script that uses the chunks from split as input.
It seems that GNU split only accepts standard shell commands within --filter.
Any smart way around this?
You can do this by exporting the function to the environment, which is available to the sub-shell run by split. For example with bash:
ex.sh
#!/bin/bash
my_func() {
    echo "$1"
}
export -f my_func
seq inf | split -d --filter='my_func $FILE' /dev/stdin chunk_
If you run it like this:
bash ex.sh | head
The output is:
chunk_00
chunk_01
chunk_02
chunk_03
chunk_04
chunk_05
chunk_06
chunk_07
chunk_08
chunk_09
More details in this answer on UL.
Note that split uses whatever the SHELL variable is set to as the sub-shell to run the --filter command. If you are running a different shell, you may need to add export SHELL=/bin/bash before running split.
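If, as in the question, the function should actually process each chunk, remember that the filter receives the chunk's data on its standard input. A hypothetical sketch (chunks.list is an invented name):
my_func() {
    cat > "$1"                  # consume the chunk data, writing it to the name split chose
    echo "$1" >> chunks.list    # record the chunk name for later processing
}
export -f my_func
export SHELL=/bin/bash
split -d -l 10 INPUT_FILE chunk_ --filter='my_func $FILE'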

shell programming - how to properly pass variables within a function

What's wrong with my bash script? I'm trying to pass positional parameters within a function. My last test, Test 4, works, but it's basically the command that I would run on the command line, with no variable substitution.
I would like to call my function. Can someone tell me whether the construction of my first 3 tests is valid, and how I can correct them? Thanks!
To execute: ./myscript.sh dev01 tester
#!/bin/bash
set +x
if [[ $# != 2 ]]; then
echo "Usage: ./script.sh <ENV> <COMPONENT>"
exit 1
fi
# Setup VARS
CREDS="-x foobar -a ec2.local_ipv4"
ENVIRONMENT="$1"
ROLES="$2"
function deploy {
knife ssh "$CREDS" "chef_environment:"$ENVIRONMENT" AND roles:*"$ROLES"*" "uname"
}
echo "Test 1"
deploy
echo "Test 2"
DEPLOY=$(knife ssh "$CREDS" "chef_environment:"${ENVIRONMENT}" AND roles:*"${ROLES}"*" "uname")
$DEPLOY
echo "Test 3"
knife ssh "$CREDS" "chef_environment:"$ENVIRONMENT" AND roles:*"$ROLES"*" "uname"
echo "Test 4"
knife ssh -x foobar -a ec2.local_ipv4 "chef_environment:dev01 AND roles:*tester*" "uname"
Again, Test 4 works only.
Your problem is unrelated to using a function; it has to do with how you're storing arguments in a variable and using that variable later:
If you want to store multiple arguments in a (non-array) variable, you cannot reference that variable double-quoted, because the value is then passed as a single argument to the target utility.
An immediate fix would be to use $CREDS unquoted, but that makes the value subject to potentially unwanted shell expansions, so the robust way to pass multiple arguments is to use an array:
# Store args. individually as array elements
CREDS=( '-x' 'foobar' '-a' 'ec2.local_ipv4' )
# ...
# "${CREDS[#]}" passes the elements of the array safely as *individual*
# arguments.
knife ssh "${CREDS[#]}" "chef_environment:$ENVIRONMENT AND roles:*$ROLES*" "uname"
Also note how I've embedded the $ENVIRONMENT and $ROLES variable references directly in the double-quoted string, which also makes the command more robust.
Finally, it's better not to use all-uppercase shell-variable names in order to avoid conflicts with environment variables and special shell variables.

put awk or grep output to command line arguments in bash

I'm pretty new to shell programming and I'm trying to write a shell script that assigns grep or awk pattern-filtering output to a command-line parameter in the bash shell.
a.sh
source ./b.sh
and called a function like: a parameter1 parameter2 (where a is the function name)
b.sh
function a{
$2=grep -ai "some string" a.txt(parameter 1)
echo "$2"
}
That's what I want to do, but the shell won't let me do it.
Is this even possible?
In bash, a function cannot set its caller's positional parameters in a way that the caller can read back. If you want to 'return' a string from a function, you must write it to stdout, like so:
function myfunc()
{
    echo "test"
}
VAR=$(myfunc)
When the above code is run, VAR will contain the string 'test'.
For reference questions, look at the man pages; for example, man bash, man grep, etc. For internal shell commands like function, there's a bash built-in with similar functionality called help, for example help function.
To set positional parameters, you can use the built-in set. For example, set -- "a b" "c d" sets $1 to a b and $2 to c d.
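For example:
set -- "a b" "c d"
echo "$1"    # a b
echo "$2"    # c d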
For a pragmatic introduction to bash programming see the Bash wiki. It's simply the best Bash resource out there.
You can't assign to positional parameters, but you can do something like this:
function myf {
    # do something with $1, $2, etc.
}
FOO=$(awk command)
BAR=$(other command)
myf "$FOO" "$BAR" # the function will use $FOO and $BAR as the $1 and $2 positional parameters (quoted so each value stays a single argument)
So you can pass the content of those commands to the function myf through the use of variables (FOO and BAR) in this case.
You could even do it without dummy variables, calling myf with the command substitutions directly, but the way I wrote it improves readability.
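That direct form would look like this, quoting each substitution so its output stays a single argument (awk command and other command are the placeholders from above):
myf "$(awk command)" "$(other command)"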
Before you try function, try a script first.
#!/bin/sh
arg1=${1?'Missing argument'}
grep -ai "some string" $arg1
Then put this script in your ~/bin folder (make sure you have updated your PATH variable to include ~/bin).
Then just execute the script.
If you really need a function, then do
#!/bin/sh
b() {
    grep -ai "some string" "$1"
}
b filename
