Executing generated commands in bash

I want to run a series of bash commands generated by a Python script. The commands are of the form export foo="bar" and alias foo=bar. They must modify the environment of the current process.
This works great:
$(./generate_commands.py)
until an export command contains a space, e.g. export x="a b". This generates an error, and only "a is exported (quote included).
Currently I'm working around this by writing the output of generate_commands.py to a temporary file and sourcing that, but is there a more elegant solution?

./generate_commands | bash
This will pipe the output of the script as input to bash
Edit:
To allow for variables to be visible in the current shell, you need to source the output:
source <(./generate_commands)
or
. <(./generate_commands)
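For example, a minimal sketch (the generator name and its output here are assumptions for illustration):
$ cat generate_commands.py
#!/usr/bin/env python3
print('export x="a b"')   # a value containing a space
print('alias ll="ls -l"')
$ source <(./generate_commands.py)
$ echo "$x"
a b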

I think the OP's problem is
cmd="export x=\"a b\""
${cmd}
does not work, but
export x="a b"
works. My way around this is
export x="a"
echo $x
x+=" b"
echo $x
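If the command has to live in a string, eval is another way around it: eval re-parses the string, so the embedded quotes are honored. A minimal sketch (with the usual caveat that eval must only see trusted input):
cmd='export x="a b"'
eval "$cmd"    # the string is re-parsed, so "a b" stays one word
echo "$x"      # prints: a b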


How to set a global environment variable in a bash script?
If I do stuff like
#!/bin/bash
FOO=bar
...or
#!/bin/bash
export FOO=bar
...the vars seem to stay in the local context, whereas I'd like to keep using them after the script has finished executing.
Run your script with .
. myscript.sh
This will run the script in the current shell environment.
export governs which variables will be available to new processes, so if you say
FOO=1
export BAR=2
./runScript.sh
then $BAR will be available in the environment of runScript.sh, but $FOO will not.
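To see the difference, runScript.sh could be as simple as this (hypothetical contents):
#!/bin/bash
echo "BAR is: $BAR"   # prints: BAR is: 2
echo "FOO is: $FOO"   # prints: FOO is:   (empty: FOO was not exported)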
When you run a shell script, it's done in a sub-shell so it cannot affect the parent shell's environment. You want to source the script by doing:
. ./setfoo.sh
This executes it in the context of the current shell, not as a sub shell.
From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename.
If filename does not contain a slash, file names in PATH are used to find the directory containing filename. The file searched for in PATH need not be executable. When bash is not in POSIX mode, the current directory is searched if no file is found in PATH.
If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched.
If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read.
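As a small illustration of the positional-parameters behavior described above (file name and contents assumed):
$ cat greet.sh
echo "hello, $1"
$ . ./greet.sh world
hello, world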
source myscript.sh also works.
Description of the source command:
source is a Unix command that evaluates the file following the command,
as a list of commands, executed in the current context
#!/bin/bash
export FOO=bar
or
#!/bin/bash
FOO=bar
export FOO
From the POSIX description of export:
The shell shall give the export attribute to the variables corresponding to the specified names, which shall cause them to be in the environment of subsequently executed commands. If the name of a variable is followed by =word, then the value of that variable shall be set to word.
A common design is to have your script output a result, and require the cooperation of the caller. Then you can say, for example,
eval "$(yourscript)"
or perhaps less dangerously
cd "$(yourscript)"
This extends to tools in other languages besides shell script.
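A sketch of that pattern (the script name and variable are illustrative): the script only prints assignments, and the caller opts in by evaluating them:
yourscript:
#!/bin/sh
printf 'export PROJECT_ROOT=%s\n' /srv/project
caller:
eval "$(./yourscript)"
echo "$PROJECT_ROOT"   # prints: /srv/project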
In your shell script, write the variables to another file as shown below, and source that file in your ~/.bashrc or ~/.zshrc:
echo "export FOO=bar" >> environment.sh
In your ~/.bashrc or ~/.zshrc, source it like below:
source Path-to-file/environment.sh
You can then access it globally.
FOO=bar
export FOO

The eval command does not work inside a loop [duplicate]

If I have a simple bash script set_token.sh:
#!/bin/bash
output='export AWS_ACCESS_KEY_ID="111"
export AWS_SECRET_ACCESS_KEY="222"
export AWS_SESSION_TOKEN="333"'
echo "$output" | while read line; do eval $line; done
Executing set_token.sh did not successfully set the 3 environment variables. However, if I run eval on each line separately, it works:
$ eval 'export AWS_ACCESS_KEY_ID="111"'
$ eval 'export AWS_SECRET_ACCESS_KEY="222"'
$ eval 'export AWS_SESSION_TOKEN="333"'
Why is that so?
You can achieve the desired result without a loop and without eval.
source <(echo "$output")
The <() construct is a process substitution. It executes the command found inside, creates a FIFO (special first-in, first-out file), and is then transformed into an actual file path (pointing to the FIFO) which source can read from.
Of course, you could also store the actual assignments in a file rather than putting them in the output variable.
source config_file
The source command (or its more standard form .) reads commands from a file and executes them in the current shell, without launching a separate process or subshell, so variable assignments in sourced files work. Useful for config files, but of course you must be sure no one can put arbitrary commands in those files as that would be a security risk.
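Here config_file would contain only plain assignments, for example (contents assumed, reusing the variables from the question):
export AWS_ACCESS_KEY_ID="111"
export AWS_SECRET_ACCESS_KEY="222"
export AWS_SESSION_TOKEN="333"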
IMPORTANT
If you want to put declarations in a script (set_token.sh in your case), this script must be sourced (i.e. executed with source or .), not executed with bash or by calling it directly (if it is executable). Any method other than source or . will launch a child process, and there is no way for a child process to assign variables that will be visible to the parent process afterwards. Sourcing does not create a separate process, which is why the assignments work. The export keyword makes assignments visible to child processes only; it cannot make them visible to the parent.
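Assuming set_token.sh is reduced to the three plain export lines, the difference is easy to demonstrate (a sketch):
$ bash set_token.sh; echo "${AWS_SESSION_TOKEN:-unset}"
unset
$ . ./set_token.sh; echo "${AWS_SESSION_TOKEN:-unset}"
333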
Not sure why you want to use eval in this case. Why not set the variables more directly like this:
export AWS_ACCESS_KEY_ID="111"
export AWS_SECRET_ACCESS_KEY="222"
export AWS_SESSION_TOKEN="333"
Your loop is running in a subshell (because of the pipe in echo "$output" | ...), and that's why your variables are not visible outside it. It's not that eval isn't working! Don't worry - this happens to a lot of people.
If you are insistent on using the loop and eval, you could use process substitution < <(command):
while read line; do eval $line; done < <(printf "%s\n" "$output")
printf is better than echo
see also:
Shell variables set inside while loop not visible outside of it
What is more portable? echo -e or using printf?
Fred's helpful answer contains a viable solution and good pointers (and the problem with the original approach is explained in Bash FAQ #24 - "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates?").
That said, in your specific scenario - assuming you're willing to accept the risk of using eval - you can apply it directly to your multi-line string:
#!/bin/bash
output='export AWS_ACCESS_KEY_ID="111"
export AWS_SECRET_ACCESS_KEY="222"
export AWS_SESSION_TOKEN="333"'
# This defines all 3 AWS_* environment variables.
eval "$output"
To reiterate Fred's point: For the environment variables to take effect in the current shell, you must source the script (using builtin . or its (effective) alias source):
. ./set_token.sh

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script but have it include another script before it executes. The problem is, the included script would be generated and the executed script would be unmodifiable. One solution I came up with, was to actually reverse the include, by having the include script as a wrapper, calling set to set the arguments for the executed script and then dotting/sourcing it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift
set -- "$#"
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
However, the potential problem I face is that callers of the target script, or even the target script itself, may expect $0 to be set to the path of its location on the file system. But because this wrapper approach overrides $0, the value of $0 may be unexpected and could produce undefined behaviour.
Is there another way to perform what is in effect, an LD_PRELOAD but in the scripted form, through bash without interfering with its runtime parameters?
I have looked at --init-file or --rcfile, but these only seem to be included for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
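With the same two files as above, this should print something like the following (a sketch of expected output, not verified here). Unlike the --rcfile trick, where $0 was bash inside the include script, $0 now already points at the target script in both places:
include_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh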

Defining common variables across multiple scripts?

I have a number of Bash and Perl scripts which are unrelated in functionality, but are related in that they work within the same project.
The fact that they work in the same project means that I commonly specify the same directories, the same project specific commands, the same keywords at the top of every script.
Currently, this has not bitten me, but I understand that it would be easier to have all of these values in one place; then, if something changes, I can update a value once and have the various scripts pick up the change.
The question is: what is the best way to declare these values? A single Perl script that is 'required' in each script would require fewer changes to the Perl scripts, but doesn't provide a solution for the Bash scripts. A configuration file using a "key=value" format would perhaps be more universal, but requires each script to parse the configuration and has the potential to introduce issues. Is there a better alternative? Using environment variables? Or a Bash-specific way that Perl can easily execute and interpret?
When you run a shell script, it's done in a sub-shell so it cannot affect the parent shell's environment. So when you declare a variable as key=value its scope is limited to the sub-shell context. You want to source the script by doing:
. ./myscript.sh
This executes it in the context of the current shell, not as a sub shell.
From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename.
If filename does not contain a slash, file names in PATH are used to find the directory containing filename.
Also you can use the export command to create a global environment variable. export governs which variables will be available to new processes, so if you say
FOO=1
export BAR=2
./myscript2.sh
then $BAR will be available in the environment of myscript2.sh, but $FOO will not.
Define environment variables:
user level: in your ~/.profile or ~/.bash_profile or ~/.bash_login or ~/.bashrc
system level: in /etc/profile or /etc/bash.bashrc or /etc/environment
For example, add two lines for each variable:
FOO=myvalue
export FOO
To read this variable in a bash script:
#!/bin/bash
echo $FOO
and in a Perl script:
#!/usr/bin/perl
print $ENV{'FOO'};
You could also source another file, so you do not create extra environment variables, which can lead to unexpected behaviour.
source_of_truth.sh:
FOO="bar"
script1.sh
#!/usr/bin/env bash
source source_of_truth.sh
echo ${FOO}
# ... doing something
script2.sh
#!/usr/bin/env bash
source source_of_truth.sh
echo ${FOO}
# ... doing something else

Pass all variables from one shell script to another?

Let's say I have a shell/bash script named test.sh with:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh
My test2.sh looks like this:
#!/bin/bash
echo ${TESTVARIABLE}
This does not work. I do not want to pass all variables as parameters, since IMHO this is overkill.
Is there a different way?
You have basically two options:
Make the variable an environment variable (export TESTVARIABLE) before executing the 2nd script.
Source the 2nd script, i.e. . test2.sh and it will run in the same shell. This would let you share more complex variables like arrays easily, but also means that the other script could modify variables in the source shell.
UPDATE:
To use export to set an environment variable, you can either use an existing variable:
A=10
# ...
export A
This ought to work in both bash and sh. bash also allows it to be combined like so:
export A=10
This also works in my sh (which happens to be bash, you can use echo $SHELL to check). But I don't believe that that's guaranteed to work in all sh, so best to play it safe and separate them.
Any variable you export in this way will be visible in scripts you execute, for example:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
The fact that these are both shell scripts is also just incidental. Environment variables can be passed to any process you execute, for example if we used python instead it might look like:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.py
b.py:
#!/usr/bin/env python3
import os
print('The message is:', os.environ['MESSAGE'])
Sourcing:
Instead we could source like this:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
This more or less "imports" the contents of b.sh directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This implicitly shares all the variables you have, as well as allows the other script to add/delete/modify variables in the shell. Of course, in this model both your scripts should be the same language (sh or bash). To give an example how we could pass messages back and forth:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
echo "[A] The message is: $MESSAGE"
b.sh:
#!/bin/sh
echo "[B] The message is: $MESSAGE"
MESSAGE="goodbye"
Then:
$ ./a.sh
[B] The message is: hello
[A] The message is: goodbye
This works equally well in bash. It also makes it easy to share more complex data which you could not express as an environment variable (at least without some heavy lifting on your part), like arrays or associative arrays.
Fatal Error gave a straightforward possibility: source your second script! If you're worried that this second script may alter some of your precious variables, you can always source it in a subshell:
( . ./test2.sh )
The parentheses will make the source happen in a subshell, so that the parent shell will not see the modifications test2.sh could perform.
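A quick sketch of the isolation this gives (variable name from the question; test2.sh contents assumed):
TESTVARIABLE=hello
( . ./test2.sh )       # test2.sh may reassign TESTVARIABLE in the subshell...
echo "$TESTVARIABLE"   # ...but this still prints: hello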
There's another possibility that should definitely be referenced here: use set -a.
From the POSIX set reference:
-a: When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.21, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset.
From the Bash Manual:
-a: Mark variables and functions which are modified or created for export to the environment of subsequent commands.
So in your case:
set -a
TESTVARIABLE=hellohelloheloo
# ...
# Here put all the variables that will be marked for export
# and that will be available from within test2 (and all other commands).
# If test2 modifies the variables, the modifications will never be
# seen in the present script!
set +a
./test2.sh
# Here, even if test2 modifies TESTVARIABLE, you'll still have
# TESTVARIABLE=hellohelloheloo
Observe that the specs only specify that with set -a the variable is marked for export. That is:
set -a
a=b
set +a
a=c
bash -c 'echo "$a"'
will echo c and not an empty line nor b (that is, set +a doesn't unmark for export, nor does it “save” the value of the assignment only for the exported environment). This is, of course, the most natural behavior.
Conclusion: using set -a/set +a can be less tedious than exporting manually all the variables. It is superior to sourcing the second script, as it will work for any command, not only the ones written in the same shell language.
There's actually an easier way than exporting and unsetting or sourcing again (at least in bash, as long as you're ok with passing the environment variables manually):
let a.sh be
#!/bin/bash
secret="winkle my tinkle"
echo Yo, lemme tell you \"$secret\", b.sh!
Message=$secret ./b.sh
and b.sh be
#!/bin/bash
echo I heard \"$Message\", yo
Observed output is
[rob@Archie test]$ ./a.sh
Yo, lemme tell you "winkle my tinkle", b.sh!
I heard "winkle my tinkle", yo
The magic lies in the last line of a.sh, where Message, for only the duration of the invocation of ./b.sh, is set to the value of secret from a.sh.
Basically, it's a little like named parameters/arguments. More than that, though, it even works for variables like $DISPLAY, which controls which X Server an application starts in.
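For example, the same one-shot prefix can point a graphical program at another X server for a single invocation (display number illustrative):
DISPLAY=:1 xterm   # xterm sees DISPLAY=:1; the shell's own DISPLAY is untouched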
Remember, the length of the list of environment variables is not infinite. On my system with a relatively vanilla kernel, xargs --show-limits tells me the maximum size of the arguments buffer is 2094486 bytes. Theoretically, you're using shell scripts wrong if your data is any larger than that (pipes, anyone?)
In Bash if you export the variable within a subshell, using parentheses as shown, you avoid leaking the exported variables:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
(
export TESTVARIABLE
source ./test2.sh
)
The advantage here is that after you run the script from the command line, you won't see a $TESTVARIABLE leaked into your environment:
$ ./test.sh
hellohelloheloo
$ echo $TESTVARIABLE
#empty! no leak
$
Adding to Fatal Error's answer, there is one more way to pass variables to another shell script.
The solutions suggested above have some drawbacks:
Using export: it causes the variable to be present outside its scope, which is not a good design practice.
Using source: it may cause name collisions or accidental overwriting of a predefined variable in some other shell script file which has sourced another file.
There is another simple solution available to us.
Considering the example posted by you,
test.sh
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh "$TESTVARIABLE"
test2.sh
#!/bin/bash
echo $1
output
hellohelloheloo
Also, it is important to note that double quotes ("") are necessary if we pass multi-word strings.
Taking one more example
master.sh
#!/bin/bash
echo in master.sh
var1="hello world"
sh slave1.sh $var1
sh slave2.sh "$var1"
echo back to master
slave1.sh
#!/bin/bash
echo in slave1.sh
echo value :$1
slave2.sh
#!/bin/bash
echo in slave2.sh
echo value : $1
output
in master.sh
in slave1.sh
value :hello
in slave2.sh
value : hello world
back to master
This happens because, without the double quotes, the shell word-splits $var1 before passing it, so slave1.sh receives hello and world as two separate arguments and $1 contains only the first word.
Another option is using eval. This is only suitable if the strings are trusted. The first script can echo the variable assignments:
echo "VAR=myvalue"
Then:
eval $(./first.sh) ./second.sh
This approach is of particular interest when the second script you want to set environment variables for is not in bash and you also don't want to export the variables, perhaps because they are sensitive and you don't want them to persist.
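A sketch of that combination, reusing the names from above (first.sh contents assumed):
first.sh:
#!/bin/sh
echo "VAR=myvalue"
Then, in the calling shell:
eval $(./first.sh) ./second.sh   # VAR is placed in second.sh's environment only
echo "${VAR-unset}"              # prints: unset -- VAR did not persist here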
Another way, which is a little bit easier for me, is to use named pipes. Named pipes provide a way to synchronize and send messages between different processes.
A.bash:
#!/bin/bash
msg="The Message"
echo $msg > A.pipe
B.bash:
#!/bin/bash
msg=$(cat ./A.pipe)
echo "message from A : $msg"
Usage:
$ mkfifo A.pipe #You have to create it once
$ ./A.bash & ./B.bash # you have to run your scripts at the same time
B.bash will wait for the message, and as soon as A.bash sends it, B.bash will continue its work.
