Passing a parameter from one script to another - Shell Scripting - shell

I have a main_script.sh which calls sub_script.sh.
There is a variable in sub_script.sh which I would like to access in the main script.
I tried "export" and "env" with the variable, but when I try to echo it in main_script.sh I get no value.
For example:
sub_script.sh
export a=hello
echo $a
main_script.sh
$PGMHOME/sub_script.sh > output_file
echo $a
FYI: sub_script.sh executes properly, because I get the value of 'a' in output_file.
But when I echo the value of a in main_script.sh, I get nothing.
P.S.: I know I can assign the variable directly in main_script.sh, but this is just an example; a lot of processing is done in sub_script.sh.

Environments (export-ed variables) are passed only "downwards" (from parent to child process), not upwards.
This means that if you want to run the sub-script from the main-script as a process, the sub-script must write the names-and-values somewhere so that the parent process (the main script) can read and process them.
There are many ways to do this, including simply printing them to standard output and having the parent script eval the result:
eval $(./sub_script)
There are numerous pitfalls to this. For instance, the sub-script could print rm -rf $HOME and the main script would execute it (of course the sub-script could simply run that command directly, but it's even easier to accidentally print something bad than to accidentally do something bad, so this serves as an illustration). Note that the sub-script must quote things carefully:
#! /bin/sh
# sub-script
echo a=value for a
When eval'd, this fails: value for a gets split on word boundaries, so the result is an attempt to run the command for with argument a, with a=value set in its environment. The sub-script must use something more like:
echo a=\'value for a\'
so that the main script's eval $(./sub_script) sees a quoted assignment.
If the sub-script needs to send output to standard output, it will need to write its variable settings elsewhere (perhaps to a temporary file, perhaps to a file descriptor set up in the main script). Note that if the output is sent to a file—this includes stdout, really—the main script can read the file carefully (rather than using a simple eval).
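For example, here is a minimal sketch of that approach (the temp file, the use of file descriptor 3, and mktemp are illustrative choices, not something the original scripts require):
sub_script (sketch):
#! /bin/sh
# ordinary output goes to stdout, variable settings go to fd 3
echo "this is the sub-script's ordinary output"
echo "a='value for a'" >&3
main script (sketch):
#! /bin/sh
# collect the sub-script's variable settings via a temporary file
tmpfile=$(mktemp) || exit 1
./sub_script > output_file 3> "$tmpfile"
. "$tmpfile"   # or read and parse the file line by line instead of sourcing it
rm -f "$tmpfile"
echo "$a"      # prints: value for a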
Another alternative (usable only in some, not all, cases) is to source the sub-script from the main script. This allows the sub-script to access everything from the main script directly. This is usually the simplest method, and therefore often the best. To source a sub-script you can use the . command:
#! /bin/sh
# main script
# code here
. ./sub_script # run commands from sub_script
# more code here

Parameters are passed to a script as $1, $2, and so on. The two scripts can even call each other: main_script.sh calls sub_script.sh with a parameter, and sub_script.sh, when started without one, calls main_script.sh.
main_script.sh
#!/bin/sh
echo "main_script"
./sub_script.sh "hello world!"
sub_script.sh
#!/bin/sh
if [ "${1}" = "" ]; then
echo "calling main_script"
./main_script.sh
else
echo "sub_script called with parameter ${1}"
fi
./sub_script.sh
calling main_script
main_script
sub_script called with parameter hello world!

Related

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script but have it include another script before it executes. The problem is, the included script would be generated and the executed script would be unmodifiable. One solution I came up with was to reverse the include: have the include script act as a wrapper, call set to set the arguments for the executed script, and then dot/source it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift
set -- "$@"
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
However, the potential problem I face is that callers of the target script, or even the target script itself, may expect $0 to be set to the path of its location on the file system. But because this wrapper approach overrides $0, the value of $0 may be unexpected and could produce undefined behaviour.
Is there another way to perform what is in effect, an LD_PRELOAD but in the scripted form, through bash without interfering with its runtime parameters?
I have looked at --init-file or --rcfile, but these only seem to be included for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh

Concatenate command string in a shell script

I am maintaining an existing shell script which assigns a command to a variable inside the script, like:
MY_COMMAND="/bin/command -dosomething"
and then, later on down the line, passes an "argument" to $MY_COMMAND by doing this:
MY_ARGUMENT="fubar"
$MY_COMMAND $MY_ARGUMENT
The idea being that $MY_COMMAND is supposed to execute with $MY_ARGUMENT appended.
Now, I am not an expert in shell scripts, but from what I can tell, $MY_COMMAND does not execute with $MY_ARGUMENT as an argument. However, if I do:
MY_ARGUMENT="itworks"
MY_COMMAND="/bin/command -dosomething $MY_ARGUMENT"
It works just fine.
Is it valid syntax to call $MY_COMMAND $MY_ARGUMENT so it executes a shell command inside a shell script with MY_ARGUMENT as the argument?
With Bash you could use arrays:
MY_COMMAND=("/bin/command" "-dosomething") ## Quoting is not strictly necessary here; just a demo.
MY_ARGUMENTS=("fubar") ## You can add more.
"${MY_COMMAND[#]}" "${MY_ARGUMENTS[#]}" ## Execute.
It works just the way you expect it to, but fubar is going to be the second argument ($2), not $1.
So if you echo arguments in your /bin/command you will get something like this:
echo "$1" # prints '-dosomething'
echo "$2" # prints 'fubar'

Bash function fails when it's part of a pipe

This is an attempt to simplify a problem I'm having.
I define a function which sets a variable and this works in this scenario:
$ function myfunc { res="ABC" ; }
$ res="XYZ"
$ myfunc
$ echo $res
ABC
So res has been changed by the call to myfunc. But:
$ res="XYZ"
$ myfunc | echo
$ echo $res
XYZ
So when myfunc is part of a pipe the value doesn't change.
How can I make myfunc work the way I desire even when a pipe is involved?
(In the real script "myfunc" does something more elaborate of course and the other side of the pipe has a zenity progress dialogue rather than a pointless echo)
Thanks
This isn't possible on Unix. To understand this better, you need to know what a variable is. Bash keeps two internal tables with all defined variables. One is for variables local to the current shell; you create those with a plain assignment like name=value. These are local to the process; they are not inherited when a new process is created.
To export a variable to new child processes, you must export it with export name. That tells bash "I want children to see the value of this variable". It's a security feature.
When you invoke a function in bash, it's executed within the context of the current shell, so it can access and modify all the variables.
But a pipe is a list of processes connected with I/O pipes. That means your function is executed in a subshell, and only the output of that subshell is visible to echo.
Even exporting in myfunc wouldn't work, because export only affects processes started by the shell where you did the export, and echo was started by the same shell as myfunc:
bash
+-- myfunc
+-- echo
that is, echo is not a child of myfunc.
Workarounds:
Write the variable into a file (see the sketch below).
Use a more complex output format, like XML, or several lines where the first line of output is always the variable and the real output follows on the remaining lines.
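Here is a minimal sketch of the first workaround; the scratch file path is chosen just for this example:
function myfunc { res="ABC"; printf '%s\n' "$res" > /tmp/myfunc.res ; }
res="XYZ"
myfunc | echo                # myfunc still runs in a subshell because of the pipe...
res=$(cat /tmp/myfunc.res)   # ...but the file survives, so the parent shell can read the value back
echo "$res"                  # prints ABC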
As @Aaron said, the problem is caused by the function running in a subshell. But there is a way to avoid this in bash, using process substitution instead of a pipe:
myfunc > >(echo)
This does much the same thing a pipe would, but myfunc runs in the main shell rather than a subprocess. Note that this is a bash-only feature, and you must use #!/bin/bash as your shebang for this to work.

Pass all variables from one shell script to another?

Let's say I have a shell/bash script named test.sh with:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh
My test2.sh looks like this:
#!/bin/bash
echo ${TESTVARIABLE}
This does not work. I do not want to pass all variables as parameters, since IMHO this is overkill.
Is there a different way?
You have basically two options:
Make the variable an environment variable (export TESTVARIABLE) before executing the 2nd script.
Source the 2nd script, i.e. . test2.sh and it will run in the same shell. This would let you share more complex variables like arrays easily, but also means that the other script could modify variables in the source shell.
UPDATE:
To use export to set an environment variable, you can either use an existing variable:
A=10
# ...
export A
This ought to work in both bash and sh. bash also allows it to be combined like so:
export A=10
This also works in my sh (which happens to be bash, you can use echo $SHELL to check). But I don't believe that that's guaranteed to work in all sh, so best to play it safe and separate them.
Any variable you export in this way will be visible in scripts you execute, for example:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
The fact that these are both shell scripts is also just incidental. Environment variables can be passed to any process you execute, for example if we used python instead it might look like:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.py
b.py:
#!/usr/bin/python
import os
print 'The message is:', os.environ['MESSAGE']
Sourcing:
Instead we could source like this:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
This more or less "imports" the contents of b.sh directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This implicitly shares all the variables you have, as well as allows the other script to add/delete/modify variables in the shell. Of course, in this model both your scripts should be the same language (sh or bash). To give an example how we could pass messages back and forth:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
echo "[A] The message is: $MESSAGE"
b.sh:
#!/bin/sh
echo "[B] The message is: $MESSAGE"
MESSAGE="goodbye"
Then:
$ ./a.sh
[B] The message is: hello
[A] The message is: goodbye
This works equally well in bash. It also makes it easy to share more complex data which you could not express as an environment variable (at least without some heavy lifting on your part), like arrays or associative arrays.
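For instance, here is a sketch of sharing a bash array by sourcing (the array name and values are made up for this example); an exported environment variable could not carry this:
a.sh:
#!/bin/bash
FRUITS=("apple" "blood orange" "cherry")
. ./b.sh
b.sh:
#!/bin/bash
echo "We have ${#FRUITS[@]} fruits; the second one is: ${FRUITS[1]}"
Then:
$ ./a.sh
We have 3 fruits; the second one is: blood orange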
Fatal Error gave a straightforward possibility: source your second script! If you're worried that this second script may alter some of your precious variables, you can always source it in a subshell:
( . ./test2.sh )
The parentheses will make the source happen in a subshell, so that the parent shell will not see the modifications test2.sh could perform.
There's another possibility that should definitely be referenced here: use set -a.
From the POSIX set reference:
-a: When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.21, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset.
From the Bash Manual:
-a: Mark variables and functions which are modified or created for export to the environment of subsequent commands.
So in your case:
set -a
TESTVARIABLE=hellohelloheloo
# ...
# Here put all the variables that will be marked for export
# and that will be available from within test2 (and all other commands).
# If test2 modifies the variables, the modifications will never be
# seen in the present script!
set +a
./test2.sh
# Here, even if test2 modifies TESTVARIABLE, you'll still have
# TESTVARIABLE=hellohelloheloo
Observe that the specs only specify that with set -a the variable is marked for export. That is:
set -a
a=b
set +a
a=c
bash -c 'echo "$a"'
will echo c and not an empty line nor b (that is, set +a doesn't unmark for export, nor does it “save” the value of the assignment only for the exported environment). This is, of course, the most natural behavior.
Conclusion: using set -a/set +a can be less tedious than exporting manually all the variables. It is superior to sourcing the second script, as it will work for any command, not only the ones written in the same shell language.
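For instance, a small sketch (the Python one-liner stands in for any command not written in shell):
set -a
TESTVARIABLE=hellohelloheloo
set +a
python -c 'import os; print(os.environ["TESTVARIABLE"])'   # prints hellohelloheloo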
There's actually an easier way than exporting and unsetting or sourcing again (at least in bash, as long as you're ok with passing the environment variables manually):
let a.sh be
#!/bin/bash
secret="winkle my tinkle"
echo Yo, lemme tell you \"$secret\", b.sh!
Message=$secret ./b.sh
and b.sh be
#!/bin/bash
echo I heard \"$Message\", yo
Observed output is
[rob@Archie test]$ ./a.sh
Yo, lemme tell you "winkle my tinkle", b.sh!
I heard "winkle my tinkle", yo
The magic lies in the last line of a.sh, where Message, for only the duration of the invocation of ./b.sh, is set to the value of secret from a.sh.
Basically, it's a little like named parameters/arguments. More than that, though, it even works for variables like $DISPLAY, which controls which X Server an application starts in.
Remember, the length of the list of environment variables is not infinite. On my system with a relatively vanilla kernel, xargs --show-limits tells me the maximum size of the arguments buffer is 2094486 bytes. Theoretically, you're using shell scripts wrong if your data is any larger than that (pipes, anyone?)
In Bash, if you export the variable within a subshell, using parentheses as shown below, you avoid leaking the exported variable:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
(
export TESTVARIABLE
source ./test2.sh
)
The advantage here is that after you run the script from the command line, you won't see a $TESTVARIABLE leaked into your environment:
$ ./test.sh
hellohelloheloo
$ echo $TESTVARIABLE
#empty! no leak
$
Adding to Fatal Error's answer, there is one more way to pass variables to another shell script.
The solutions suggested above have some drawbacks:
Using export: it causes the variable to exist outside its own scope, which is not good design practice.
Using source: it may cause name collisions or accidental overwriting of a variable already defined in the script that sources the other file.
There is another simple solution available to us.
Considering the example you posted:
test.sh
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh "$TESTVARIABLE"
test2.sh
#!/bin/bash
echo $1
output
hellohelloheloo
Also, it is important to note that the double quotes are necessary if we pass multi-word strings.
Taking one more example:
master.sh
#!/bin/bash
echo in master.sh
var1="hello world"
sh slave1.sh $var1
sh slave2.sh "$var1"
echo back to master
slave1.sh
#!/bin/bash
echo in slave1.sh
echo value :$1
slave2.sh
#!/bin/bash
echo in slave2.sh
echo value : $1
output
in master.sh
in slave1.sh
value :"hello
in slave2.sh
value :"hello world"
It happens because the unquoted $var1 in the call to slave1.sh is split on whitespace, so slave1.sh receives only hello as $1, while the quoted "$var1" is passed to slave2.sh as a single argument.
Another option is using eval. This is only suitable if the strings are trusted. The first script can echo the variable assignments:
echo "VAR=myvalue"
Then:
eval $(./first.sh) ./second.sh
This approach is of particular interest when the second script you want to set environment variables for is not in bash and you also don't want to export the variables, perhaps because they are sensitive and you don't want them to persist.
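A sketch of that situation (file names are hypothetical; second.py stands in for a program not written in bash):
first.sh:
#!/bin/sh
echo "SECRET_TOKEN='t0ps3cret'"
second.py:
#!/usr/bin/python
import os
print(os.environ['SECRET_TOKEN'])
Then:
$ eval "$(./first.sh)" ./second.py
t0ps3cret
The assignment produced by first.sh applies only to that single invocation of second.py; SECRET_TOKEN is neither exported nor left behind in the calling shell.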
Another way, which is a little bit easier for me, is to use named pipes. Named pipes provide a way to synchronize and send messages between different processes.
A.bash:
#!/bin/bash
msg="The Message"
echo $msg > A.pipe
B.bash:
#!/bin/bash
msg=`cat ./A.pipe`
echo "message from A : $msg"
Usage:
$ mkfifo A.pipe #You have to create it once
$ ./A.bash & ./B.bash # you have to run your scripts at the same time
B.bash will wait for the message, and as soon as A.bash sends it, B.bash will continue its work.

Is there any mechanism in shell scripts like the "include guard" in C++?

Let's see an example: in my main.sh, I'd like to source a.sh and b.sh. a.sh, however, might itself source b.sh, which would cause the code in b.sh to be executed twice. Is there any mechanism like the "include guard" in C++?
If you're sourcing scripts, you are usually using them to define functions and/or variables.
That means you can test whether the script has been sourced before by testing for (one of) the functions or variables it defines.
For example (in b.sh):
if [ -z "$B_SH_INCLUDED" ]
then
B_SH_INCLUDED=yes
...rest of original contents of b.sh
fi
There is no other way to do it that I know of. In particular, you can't do early exits or returns because that will affect the shell sourcing the file. You don't have to use a name that is solely for the file; you could use a name that the file always has defined.
In bash, an early return does not affect the sourcing file, it returns to it as if the current file were a function. I prefer this method because it avoids wrapping the entire content in if...fi.
if [ -n "$_for_example" ]; then return; fi
_for_example=`date`
TL;DR:
Bash has a source guard mechanism which lets you decide what to do if executed or sourced.
Longer version:
Over the years of working with Bash sourcing, I have found that a different approach works excellently, which I will discuss below.
The problem for me was similar to that of the original poster:
sourcing other scripts led to double script execution
additionally, scripts are less testable with unit test frameworks like BATS
The main idea of my solution is to write scripts in a way that they can safely be sourced multiple times. A major part of that is extracting functionality into functions (as opposed to having one large script, which is not very testable).
So only functions and global variables are defined at the top level; other scripts can be sourced at will.
As an example, consider the following three bash scripts:
main.sh
#!/usr/bin/env bash
source script2.sh
source script3.sh
GLOBAL_VAR=value
function_1() {
echo "do something"
function_2 "completely different"
}
run_main() {
echo "starting..."
function_1
}
# Enter: the source guard
# make the script only run when executed, not when sourced
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
run_main "$@"
fi
script2.sh
#!/usr/bin/env bash
source script3.sh
ALSO_A_GLOBAL_VAR=value2
function_2() {
echo "do something ${1}"
}
# this file can be sourced or be executed if called directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
function_2 "$#"
fi
script3.sh
#!/usr/bin/env bash
export SUPER_USEFUL_VAR=/tmp/path
function_3() {
echo "hello again"
}
# no source guard here: this script only defines a variable and a function; nothing is executed because no top-level code calls the function.
Note that script3.sh is sourced twice. But since only functions and variables are (re-)defined, no functional code is executed during the sourcing.
Execution starts by running main.sh, as one would expect.
There might be a drawback when it comes to dependency cycles (in general a bad idea): I have no idea how Bash reacts if files source (directly or indirectly) each other.
Personally, I usually use
set -o nounset # same as set -u
on most of my scripts, therefore in the guard I turn it off and back on.
#!/usr/bin/env bash
set +u
if [ -n "$PRINTF_SCRIPT_USAGE_SH" ] ; then
set -u
return
else
set -u
readonly PRINTF_SCRIPT_USAGE_SH=1
fi
If you prefer not to use nounset, you can do this:
[[ -n "$PRINTF_SCRIPT_USAGE_SH" ]] && return || readonly PRINTF_SCRIPT_USAGE_SH=1
