After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and which would be its typical uses.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using braces fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefix and suffixes, but you can't have arbitrary bash syntax and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
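For comparison, a command substitution whose contents actually name an existing command behaves as expected (a small, made-up illustration):
$ n=1
$ echo $(echo $n) ## the inner echo runs in the subshell; its output, 1, replaces $(…)
1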
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Always put double quotes around variable and command substitutions, unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits the value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
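For instance, here is a minimal sketch of building a command that contains a redirection operator at runtime (the file name and command are just illustrative):
logfile="run.log"
cmd="ls -l > \"\$logfile\"" # the > is only a character in a string at this point
eval "$cmd" # eval re-parses the string, so the redirection to run.log actually happens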
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (the backslash is needed; otherwise the shell would try to expand ${$n} itself before eval runs, which fails as a bad substitution)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
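As a small illustration of the dynamic variable creation mentioned above (the variable names are arbitrary):
for i in 1 2 3; do
    eval "counter_$i=$i" # creates counter_1, counter_2 and counter_3
done
echo "$counter_2" # prints 2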
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
The eval statement tells the shell to take eval's arguments as commands and run them through the command line. It is useful in a situation like the one below:
If in your script you define a command in a variable and later on you want to use that command, then you should use eval:
a="ls | more"
$a
Output:
ls: cannot access '|': No such file or directory
ls: cannot access 'more': No such file or directory
The above command didn't work: after word splitting, ls tried to list files named | and more, but those files are not there. Running the same string through eval re-parses it, so the pipe works:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
Update: Some people say one should never use eval. I disagree. I think the risk arises when untrusted input can be passed to eval. However, there are many common situations where that is not a risk, and therefore it is worth knowing how to use eval in any case. This Stack Overflow answer explains the risks of eval and alternatives to eval. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
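To make the risk concrete, here is a hypothetical illustration of what can happen when untrusted input reaches eval (do not actually run it):
userinput='$(rm -rf ~)' # attacker-controlled value
eval "x=$userinput" # eval would run the embedded command substitution while performing the assignment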
The bash eval statement allows you to execute lines of code that are calculated or acquired by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
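A rough sketch of that idea, assuming a file named other_script.sh exists (in practice you would normally just source it):
while IFS= read -r line; do
    eval "$line" # execute each line of the file in the current shell
done < other_script.sh
# note: this only handles scripts whose statements each fit on a single line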
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language, and the bash interpreter's parsing order can be nuanced and make eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code, in scope of your script, the program will crash on the eval statement as it tries to execute garbage.
When testing you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
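For instance (a tiny illustration; the command and the output path are made up):
$ cmd='echo $HOME'
$ echo "$cmd" # preview: this is the code eval would run
echo $HOME
$ eval "$cmd" # looks legitimate, so execute it
/home/someuser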
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval statement has no purpose and can be eliminated. eval is pointless in the first line because there is no dynamic aspect to the code: it already parses into the final line of bash code, so it behaves identically to a normal statement in the bash script. The 2nd eval is pointless too, because, although there is a parsing step converting $a to its literal string equivalent, there is no indirection (e.g. no referencing via the string value of an actual bash variable), so it would behave identically to the same line of code without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on the parsing-order and peculiarities of bash. The eval line would roughly get parsed internally in the following order (note the following statements are pseudocode, not real code, just to attempt to show how the statement would get broken down into steps internally to arrive at the final result).
eval eval \$$keyA=\$$valA # substitution of $keyA and $valA by interpreter
eval eval \$keyB=\$valB # convert '$' + name-strings to real vars by eval
eval $keyB=$valB # substitution of $keyB and $valB by interpreter
eval that=amazing # execute string literal 'that=amazing' by eval
If the assumed parsing order doesn't sufficiently explain what eval is doing, the next example may describe the parsing in more detail to help clarify what is going on.
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do
eval varval=\$$varname
if [ -z "$varval" ]; then
read -p "$varname? " $varname
fi
done
In the first iteration:
varname="myvarname_a"
Bash parses the argument to eval, and eval sees literally this at runtime:
eval varval=\$$myvarname_a
The following pseudocode attempts to illustrate how bash interprets the above line of real code, to arrive at the final value executed by eval. (the following lines descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # This substitution resolved in eval statement
2. .................. "$myvarname_a" # $myvarname_a previously resolved by for-loop
3. .................. "a" # ... to this value
4. eval "varval=$a" # This requires one more parsing step
5. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
I originally intentionally never learned how to use eval, because most people recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but we can use cut to get rid of the junk, wrap it in a command substitution, and execute the string with eval:
eval $( cut -d ' ' -f 7- /etc/cron.d/repeatme)
The cut command prints out fields 7 and onward of the file (the command and its arguments), delimited by spaces. eval then executes that command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating before hand.
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion when the eval command line is parsed, leaving them to be expanded later, when eval re-parses its argument inside the command substitution.
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via "eval", you can pass by reference. The callee can pass back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller is allowed to specify the name(s) of the result variable(s) - see the example below. Error results can be passed back in standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash
isint()
{
re='^[-]?[0-9]+$'
[[ $1 =~ $re ]]
}
#args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
if isint ${2} && isint ${3} ; then
echo "$1=$((${2}+${3}));errno=0"
return 0
else
echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
return 1
fi
}
var=1
echo "[1] var=$var"
eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"
eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought', but that's beyond the scope of this post.
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that files a and tty do not exist. I understood this to mean that tty is not being interpreted before execution of grep, but instead that bash passed tty as a parameter to grep, which interpreted it as a file name.
There is also the situation of nested redirection, which ought to be handled by matched parentheses specifying a child process; but bash is primarily a word separator, creating parameters to be sent to a program, so parentheses are not matched first, but interpreted as seen.
I got specific with grep, and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing output from a command as a file, so that i/o piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner. Instead, bash is an old 1,2,3 program, which has been appended with features. For purposes of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to first match parentheses, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to go through your script line by line (pressing Enter for each step). You could use eval to execute every line by trapping DEBUG (a pseudo-signal that bash triggers before each command):
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
echo " > Count: $#"
echo " > Pars : $*"
echo " > par1 : $1"
echo " > par2 : $2"
if [[ $# = 1 && $1 = "par1 par2" ]]; then
echo " > PASS"
else
echo " > FAIL"
return 1
fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval will fix that.
It was adapted from another answer on How can I reference a file for variables using Bash?
I have a bash script a.sh that looks like this:
#!/bin/bash
echo $#
echo $1
and a script b.sh that looks like this:
#!/bin/bash
source ./a.sh
If I call ./a.sh I'm correctly getting 0 and an empty line as output. When calling ./a.sh blabla I'm getting 1 and blabla as output.
However when I call ./b.sh blabla I'm also getting 1 and blabla as output, even though no argument was passed to a.sh from within b.sh.
This seems to be related to the use of source (which I have to use since in my real use case, a.sh exports some variables). How can I avoid arguments from b.sh being propagated to a.sh? I thought about using eval $(a.sh) but this makes my echo statements in a.sh fail. I thought of using shift to consume the arguments from b.sh before calling a.sh but I don't necessarily know how many arguments there are.
The root of the problem is an anomaly in how the source command works. From the bash man page, in the "Shell Builtin Commands" section:
. filename [arguments]
source filename [arguments]
[...] If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged.
...which means you can override the main script's arguments by supplying different arguments to the sourced script, but you can't just not pass arguments to it.
Fortunately, there's a workaround; just source the script in a context where there are no arguments:
#!/bin/bash
wrapperfunction() {
source ./a.sh
}
wrapperfunction
Since no arguments are supplied to wrapperfunction, inside it the arg list is empty. Since a.sh's commands are run in that context, the arg list is empty there as well. And variables assigned inside a.sh are available outside the function (unless they're declared as local or something similar).
(Note: I tested this in bash, zsh, dash, and ksh93, and it works in all of them -- well, except that dash doesn't have the source command, so you have to use . instead.)
Update: I realized you can write a generic wrapper function that allows you to specify a filename as an argument:
sourceWithoutArgs() {
local fileToSource="$1"
shift
source "$fileToSource"
}
sourceWithoutArgs ./a.sh
The shift command removes the filename from the function's arg list, so it's empty when the file actually gets sourced. Well, unless you passed additional arguments to the function, in which case those will be in the arg list and will get passed on... so you can actually use this function to replace both the without-args and the with-args usage of source.
(This works in bash and zsh. If you want to use it in ksh, you have to remove local; and to use it in dash, replace source with .)
You can even keep passing normal arguments using
source() {
local f="${1}"; shift;
builtin source "${f}" "${@}"
}
It is also possible to check from the sourced file what arguments have actually been given
# in x.bash, a file meant to be sourced
# fix `source` arguments
__ARGV=( "${#}" )
__shopts=$( shopt -p ) # save shopt
shopt -u extdebug
shopt -s extdebug # create BASH_ARGV
# no args have been given to `source x.bash`
if [[ ${BASH_ARGV[0]} == "${BASH_SOURCE[0]}" ]]; then
__ARGV=() # clear `${__ARGV[@]}`
fi
eval "${__shopts}" # restore shopt
unset __shopts
# Actual args are in ${__ARGV[@]}
My understanding of shell is very minimal, I'm working on a small task and need some help.
I've been given a python script that can parse command line arguments. One of these arguments is called "-targetdir". When -targetdir is unspecified, it defaults to a /tmp/{USER} folder on the user's machine. I need to direct -targetdir to a specific filepath.
I effectively want to do something like this in my script:
set ${-targetdir, "filepath"}
So that the python script doesn't set a default value. Would anyone know how to do this? I also am not sure if I'm giving sufficient information, so please let me know if I'm being ambiguous.
I strongly suggest modifying the Python script to explicitly specify the desired default rather than engaging in this kind of hackery.
That said, some approaches:
Option A: Function Wrapper
Assuming that your Python script is called foobar, you can write a wrapper function like the following:
foobar() {
local arg found=0
for arg; do
[[ $arg = -targetdir ]] && { found=1; break; }
done
if (( found )); then
# call the real foobar command without any changes to its argument list
command foobar "$#"
else
# call the real foobar, with ''-targetdir filepath'' added to its argument list
command foobar -targetdir "filepath" "$#"
fi
}
If put in a user's .bashrc, any invocation of foobar from the user's interactive shell (assuming they're using bash) will be replaced with the above wrapper. Note that this doesn't impact other shells; export -f foobar will cause other instances of bash to honor the wrapper, but that isn't guaranteed to extend to instances of sh, as used by system() invocations, Python's Popen(..., shell=True), and other places in the system.
Option B: Shell Wrapper
Assume you rename the original foobar script to foobar.real. Then you can make foobar a wrapper, like the following:
#!/usr/bin/env bash
found=0
for arg; do
[[ $arg = -targetdir ]] && { found=1; break; }
done
if (( found )); then
exec foobar.real "$#"
else
exec foobar.real -targetdir "filepath" "$#"
fi
Using exec replaces the wrapper process with foobar.real, so the wrapper does not remain in memory while the real program runs.
I am trying to check if all the non POSIX commands that my script depends on are present before my script proceeds with its main job. This will help me to ensure that my script does not generate errors later due to missing commands.
I want to keep the list of all such non POSIX commands in a variable called DEPS so that as the script evolves and depends on more commands, I can edit this variable.
I want the script to support commands with spaces in them, e.g. my program.
This is my script.
#!/bin/sh
DEPS='ssh scp "my program" sftp'
for i in $DEPS
do
echo "Checking $i ..."
if ! command -v "$i"
then
echo "Error: $i not found"
else
echo "Success: $i found"
fi
echo
done
However, this doesn't work, because "my program" is split into two words while the for loop iterates: "my and program" as you can see in the output below.
# sh foo.sh
Checking ssh ...
/usr/bin/ssh
Success: ssh found
Checking scp ...
/usr/bin/scp
Success: scp found
Checking "my ...
Error: "my not found
Checking program" ...
Error: program" not found
Checking sftp ...
/usr/bin/sftp
Success: sftp found
The output I expected is:
# sh foo.sh
Checking ssh ...
/usr/bin/ssh
Success: ssh found
Checking scp ...
/usr/bin/scp
Success: scp found
Checking my program ...
Error: my program not found
Checking sftp ...
/usr/bin/sftp
Success: sftp found
How can I solve this problem while keeping the script POSIX compliant?
I'll repeat the answer I gave to your previous question: use a while loop with a here document rather than a for loop. You can embed newlines in a string, which is all you need to separate command names in a string if those command names might contain whitespace. (If your command names contain newlines, strongly consider renaming them.)
For maximum POSIX compatibility, use printf, since the POSIX specification of echo is remarkably lax due to differences in how echo was implemented in various shells prior to the definition of the standard.
deps="ssh
scp
my program
sftp"
while read -r cmd; do
printf "Checking $cmd ...\n"
if ! command -v "$cmd"; then
printf "Error: $i not found\n"
else
printf "Success: $cmd found\n"
fi
printf "\n"
done <<EOF
$deps
EOF
This happens because the steps after parameter expansion are string-splitting and glob-expansion -- not syntax-level parsing (such as handling quoting). To go all the way back to the beginning of the parsing process, you need to use eval.
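A quick demonstration that the quote characters stored in the variable are treated as literal data rather than as quoting (output shown for illustration):
$ deps='ssh "my program"'
$ for i in $deps; do echo "[$i]"; done
[ssh]
["my]
[program"]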
Frankly, the best approaches are to either:
Target a shell that supports arrays (ksh, bash, zsh, etc) rather than trying to support POSIX
Don't try to retrieve the value from a variable.
...there's a reason proper array support is ubiquitous in modern shells; writing unambiguously correct code, particularly when handling untrusted data, is much harder without it.
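For comparison, a sketch of the array-based approach in bash (not POSIX sh), which needs no eval at all:
deps=(ssh scp "my program" sftp)
for cmd in "${deps[@]}"; do
    command -v "$cmd" >/dev/null || echo "Error: $cmd not found"
done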
That said, you have the option of using "$@" to store your contents, which can be set, albeit dangerously, using eval:
deps='goodbye "cruel world"'
eval "set -- $deps"
for program; do
echo "processing $program"
done
If you do this inside of a function, you'll override only the function's argument list, leaving the global list unmodified.
Alternately, eval "yourfunction $deps" will have the same effect, setting the argument list within the function to the results of running all the usual parsing and expansion phases on the contents of $deps.
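For example, a sketch using a hypothetical check_deps function:
check_deps() {
    for cmd; do # iterates over the function's own "$@"
        echo "Checking $cmd ..."
        command -v "$cmd" || echo "Error: $cmd not found"
    done
}
deps='ssh scp "my program" sftp'
eval "check_deps $deps" # the quotes inside $deps now act as argument boundaries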
Because the script is under your control, you can use eval with reasonable safety, so @Charles Duffy's answer is a simple and good solution. Use it. :)
Also, consider using autoconf to generate the usual configure script, which does a good job of what you need - e.g. checking commands and much more... At least, check some configure scripts for ideas on how to solve common problems...
If you want to play with your own implementation:
divide the dependencies into two groups
core_deps - unix tools that are commonly needed for the script itself, like sed, cat, cp and such. Those programs don't contain spaces in their names, nor in their $PATH locations.
runtime_deps - programs that are needed for your application, but not for the script itself.
do the checks in two steps (or more, for example if you need to check e.g. libraries)
never use a for loop for space-delimited elements unless you are getting them as function arguments - then you can use "$@"
A starting script could be something like the following:
_check_core_deps() {
for _cmd
do
_cpath=$(command -v "$_cmd")
case "$_cpath" in
/*) continue;;
*) echo "Missing install dependency [$_cmd] - can't continue" ; exit 1 ;;
esac
done
return 0
}
core_deps="grep sed hooloovoo cp" #list of "core" commands - they doesn't contains spaces
_check_core_deps $core_deps || exit 1
The above will blow up on the non-existent "hooloovoo" command. :)
Now you can safely continue, all core commands needed for the install script are available. In the next step, you can check other strange dependencies.
Some ideas:
# function that returns your dependencies as lines from a HEREDOC
# (e.g. they can contain any character except "\n")
# you can decorate the dependencies with comments...
# because we have sed (checked in the 1st step, we can use it)
# if you want, you can add "fields" too, for some extended functionality with a specified delimiter
list_deps() {
_sptab=$(printf " \t") # the $' \t' is approved by POSIX for the next version only
#the "sed" removes comments and empty lines
#the UUOC (useless use of cat) is intentional here
#for example if you want to add "tr" before the "sed"
#of course, you can remove it...
cat - <<DEPS |sed "s/[$_sptab]*#.*//;/^[$_sptab]*$/d"
########## DEPENDENCIES ############
#some comment
ssh
scp
sftp
#comment
#bla bla
my program #some comment
/Applications/Some Long And Spaced OSX Application.app
DEPS
########## END of DEPENDENCIES #####
}
_check_deps() {
#in the "while" loop you can use IFS=: or such and adding anouter variable to read
#for getting more fields for some extended functionality
list_deps | while read -r line
do
#do any checks with the line
#implement additional functionalities as functions
#etc...
#remember - you're in a subshell here
printf "command:%s\n" "$line"
done
}
_check_deps
One more thing :), (or two)
if you're in doubt about the content of some variables, don't use echo. POSIX doesn't define how echo should act when the argument contains escaped characters (e.g. echo "some\nwed"). Use:
printf '%s' "$variable"
never use uppercase-only variable names like "DEPS"... by convention they're reserved for environment variables...
Here is a problem:
In my bash scripts I want to source several file with some checks, so I have:
if [ -r foo ] ; then
source foo
else
logger -t $0 -p crit "unable to source foo"
exit 1
fi
if [ -r bar ] ; then
source bar
else
logger -t $0 -p crit "unable to source bar"
exit 1
fi
# ... etc ...
Naively I tried to create a function that do:
function safe_source() {
if [ -r $1 ] ; then
source $1
else
logger -t $0 -p crit "unable to source $1"
exit 1
fi
}
safe_source foo
safe_source bar
# ... etc ...
But there is a snag there.
If one of the files foo, bar, etc. has a global such as --
declare GLOBAL_VAR=42
-- it will effectively become:
function safe_source() {
# ...
declare GLOBAL_VAR=42
# ...
}
thus a global variable becomes local.
The question:
An alias in bash seems too weak for this, so must I unroll the above function, and repeat myself, or is there a more elegant approach?
... and yes, I agree that Python, Perl, Ruby would make my life easier, but when working with legacy system, one doesn't always have the privilege of choosing the best tool.
It's a bit of a late answer, but declare now supports a -g option, which makes a variable global when used inside a function. The same works in a sourced file.
If you need such a global variable, use:
declare -g DATA="Hello World, meow!"
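A minimal sketch of the behaviour, reusing the GLOBAL_VAR from the question:
f() { declare -g GLOBAL_VAR=42; } # -g assigns at the global scope even inside a function
f
echo "$GLOBAL_VAR" # prints 42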
Yes, Bash's 'eval' command can make this work. 'eval' isn't very elegant, and it sometimes can be difficult to understand and debug code that uses it. I usually try to avoid it, but Bash often leaves you with no other choice (like the situation that prompted your question). You'll have to weigh the pros and cons of using 'eval' for yourself.
Some background on 'eval'
If you're not familiar with 'eval', it's a Bash built-in command that expects you to pass it a string as its parameter. 'eval' dynamically interprets and executes your string as a command in its own right, in the current shell context and scope. Here's a basic example of a common use (dynamic variable assignment):
$> a_var_name="color"
$> eval ${a_var_name}="blue"
$> echo -e "The color is ${color}."
The color is blue.
See the Advanced Bash Scripting Guide for more info and examples: http://tldp.org/LDP/abs/html/internal.html#EVALREF
Solving your 'source' problem
To make 'eval' handle your sourcing issue, you'd start by rewriting your function, 'safe_source()'. Instead of actually executing the command, 'safe_source()' should just PRINT the command as a string on STDOUT:
function safe_source() { echo eval " \
if [ -r $1 ] ; then \
source $1 ; \
else \
logger -t $0 -p crit \"unable to source $1\" ; \
exit 1 ; \
fi \
"; }
Also, you'll need to change your function invocations, slightly, to actually execute the 'eval' command:
`safe_source foo`
`safe_source bar`
(Those are backticks/backquotes, BTW.)
How it works
In short:
We converted the function into a command-string emitter.
Our new function emits an 'eval' command invocation string.
Our new backticks call the new function in a subshell context, returning the 'eval' command string output by the function back up to the main script.
The main script executes the 'eval' command string, captured by the backticks, in the main script context.
The 'eval' command string re-parses and executes the 'eval' command string in the main script context, running the whole if-then-else block, including (if the file exists) executing the 'source' command.
It's kind of complex. Like I said, 'eval' is not exactly elegant. In particular, there are a couple of special things you should notice about the changes we made:
The entire IF-THEN-ELSE block has become one whole double-quoted string, with backslashes at the end of each line "hiding" the newlines.
Some of the shell special characters (like '"') have been backslash-escaped, while others ('$') have been left unescaped.
'echo eval' has been prepended to the whole command string.
Extra semicolons have been appended to all of the lines where a command gets executed to terminate them, a role that the (now-hidden) newlines originally performed.
The function invocation has been wrapped in backticks.
Most of these changes are motivated by the fact that 'eval' won't handle newlines. It can only deal with multiple commands if we combine them into a single line delimited by semicolons instead. The new function's line breaks are purely a formatting convenience for the human eye.
If any of this is unclear, run your script with Bash's '-x' (debug execution) flag turned on, and that should give you a better picture of exactly what's happening. For instance, in the function context, the function actually produces the 'eval' command string by executing this command:
echo eval ' if [ -r <INCL_FILE> ] ; then source <INCL_FILE> ; else logger -t <SCRIPT_NAME> -p crit "unable to source <INCL_FILE>" ; exit 1 ; fi '
Then, in the main context, the main script executes this:
eval if '[' -r <INCL_FILE> ']' ';' then source <INCL_FILE> ';' else logger -t <SCRIPT_NAME> -p crit '"unable' to source '<INCL_FILE>"' ';' exit 1 ';' fi
Finally, again in the main context, the eval command executes these two commands if the file exists:
'[' -r <INCL_FILE> ']'
source <INCL_FILE>
Good luck.
declare inside a function makes the variable local to that function. export affects the environment of child processes, not the current or parent environments.
You can set the values of your variables inside the functions and do the declare -r, declare -i or declare -ri after the fact.
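A small sketch of that approach:
init_config() {
    GLOBAL_VAR=42 # a plain assignment inside a function (no declare/local) stays global
}
init_config
declare -r GLOBAL_VAR # now mark it read-only, after the fact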