I've been working on our intro scripting assignment, and am having issues calling functions within the script. I am in the second portion of the assignment, and I am just testing to make sure what I have is (hopefully) going to work. I have gathered some directories, and I ask a yes or no question. When I get a 'y', I call a little function I wrote, and when I get an 'n' I call another function; both are simple echoes. What is the issue?
part_two(){
    answer=""
    for value in "$@"; do
        echo "$value"
        while [ "$answer" != "y" -a "$answer" != "n" ]
        do
            echo -n "Would you like to save the results to a file? (y/n): "
            read answer
        done
        if [ "$answer" = "n" ]
        then
            part_six
        elif [ "$answer" = "y" ]
        then
            part_five
        fi
    done
}
part_two "$@"
part_five(){
    echo -n "working yes";
}
part_six(){
    echo -n "working no";
}
Any help would be greatly appreciated, as always.
Much like in C, a function must be defined before it is used. In your code snippet you are calling part_two (which in turn calls part_five and part_six) before declaring those two functions.
Have you tried moving their definitions to the start of the script?
EDIT:
In most cases, the best way to deal with this in Bash is to simply define all functions at the start of the script before executing any actual commands. The order of the definitions does not really matter - the shell only looks up a function when it's about to use it - so generally there are no dependency issues etc. that you may have to think about.
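For example, here is a minimal sketch (the function names are made up for illustration) - the definitions can appear in either order, because nothing is looked up until the final call actually runs:
#!/bin/bash
# greet calls shout, which is defined *below* it - that's fine, because
# the lookup only happens when greet is executed, not when it's defined.
greet() {
    shout "hello"
}
shout() {
    echo "$1!"
}
# Actual commands start here, after all definitions:
greet        # prints "hello!"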
EDIT 2:
There are cases where you may not be able to just define a function at the start of the script. A common case is when you use conditional constructs to dynamically select or modify the declaration of a function, e.g.:
if [[ "$1" = 0 ]]; then
function show() {
echo Zero
}
else
function show() {
echo Not-zero
}
fi
In these cases you have to make sure that each function call happens after that function (and any others that it calls) is declared.
EDIT 3:
In Bash a function declaration is actually the function foo() { ... } block where you define its implementation - and yes, the function keyword is not strictly necessary. There are no function prototypes as in C - they would not make sense anyway, because shell scripts are generally parsed as they are executed. Newer Bash versions do read the whole script at once, but they mostly check for syntax errors, not for logical errors such as this one.
BTW the official term is "function declaration", but even the Bash info page uses "declaration" and "definition" interchangeably.
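A quick way to see this parse-as-you-go behavior for yourself (a minimal sketch; foo is a made-up name):
#!/bin/bash
foo                  # fails: "foo: command not found" - the definition hasn't run yet
foo() { echo "hi"; }
foo                  # prints "hi" - by now the definition has been executed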
Related
So I have an array like:
al_ap_version=('ap_version' '[[ $data -ne $version ]]')
And the condition gets evaluated inside a loop like:
for alert in alert_list; do
    data=$(tail -1 somefile)
    condition=$(eval echo \${$alert[1]})
    if eval "$condition" ; then
        echo SomeAlert
    fi
done
Whilst this generally works with many scenarios, if $data returns something like "-/-" or "4.2.9", I get errors as it doesn't seem to like complex strings in the variable.
Obviously I can't enclose the variable in single quotes, as it then won't expand, so I'm after any ideas for expanding the $data variable (or indeed the $version variable, which suffers the same possible fate) in a way that the evaluation can handle.
Ignoring the fact that eval is probably super dangerous to use here (unless the data in somefile is controlled by you and only you), there are a few issues to fix in your example code.
In your for loop, alert_list needs to be $alert_list.
Also, as pointed out by @choroba, you should be using != instead of -ne, since your input isn't always an integer.
Finally, while debugging, you can add set -x to the top of your script, or add -x to the end of your shebang line to enable verbose output (helps to determine how bash is expanding your variables).
This works for me:
#!/bin/bash -x
data=2.2
version=1
al_ap_version=('ap_version' '[[ $data != $version ]]')
alert_list='al_ap_version'
for alert in $alert_list; do
    condition=$(eval echo \${$alert[1]})
    if eval "$condition"; then
        echo "alert"
    fi
done
You could try a more functional approach, even though bash is only just barely capable of such things. On the whole, it is usually a lot easier to pack an action to be executed into a bash function and refer to it with the name of the function, than to try to maintain the action as a string to be evaluated.
But first, the use of an array of names of arrays is awkward. Let's get rid of it.
It's not clear to me what the point of element 0, ap_version, in the array al_ap_version is, but I suppose it has something to do with error messages. If the order of alert processing isn't important, you could replace the list of names of arrays with a single associative array:
declare -A alert_list
alert_list[ap_version]=... # see below
alert_list[os_dsk]=...
and then process them with:
for alert_name in "${!alert_list[@]}"; do
    alert=${alert_list[$alert_name]}
    ...
done
Having done that, we can get rid of the eval, with its consequent ugly necessity for juggling quotes, by creating a bash function for each alert:
check_ap_version() {
    (($version != $1))
}
Edit: It seems that $1 is not necessarily numeric, so it would be better to use a non-numeric comparison, although exact version match might not be what you're after either. So perhaps it would be better to use:
check_ap_version() {
    [[ $version != $1 ]]
}
Note the convention that the first argument of the function is the data value.
Now we can insert the name of the function into the alert array, and call it indirectly in the loop:
declare -A alert_list
alert_list[ap_version]=check_ap_version
alert_list[os_dsk]=check_os_dsk
check_alerts() {
    local alert_name alert
    local data=$(tail -1 somefile)
    for alert_name in "${!alert_list[@]}"; do
        alert=${alert_list[$alert_name]}
        if $alert "$data"; then
            signal_alert $alert_name
        fi
    done
}
If you're prepared to be more disciplined about the function names, you can avoid the associative array, and thereby process the alerts in order. Suppose, for example, that every function has the name check_<alert_name>. Then the above could be:
alert_list=(ap_version os_dsk)
check_alerts() {
    local alert_name
    local data=$(tail -1 somefile)
    for alert_name in "${alert_list[@]}"; do
        if check_$alert_name "$data"; then
            signal_alert $alert_name
        fi
    done
}
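A hedged usage sketch to tie this together (version, the check_* bodies, signal_alert, and the contents of somefile are stand-ins here, not taken from the original question):
version=1.2
check_ap_version() { [[ $version != $1 ]]; }    # alert when versions differ
check_os_dsk()     { [[ $1 == *full* ]]; }      # hypothetical disk check
signal_alert()     { echo "ALERT: $1"; }        # hypothetical reporter
alert_list=(ap_version os_dsk)
check_alerts    # reads the last line of somefile and signals any alerts that trip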
I have a rather complex series of commands in bash that ends up returning a meaningful exit code. Various places later in the script need to branch conditionally on whether the command set succeeded or not.
Currently I am storing the exit code and testing it numerically, something like this:
long_running_command | grep -q trigger_word
status=$?
if [ $status -eq 0 ]; then
: stuff
else
: more code
if [ $status -eq 0 ]; then
: stuff
else
For some reason it feels like this should be simpler. We have a simple exit code stored, and now we are repeatedly typing out numerical test operations to run on it. For example, I can cheat and use the string output instead of the return code, which is simpler to test for:
status=$(long_running_command | grep trigger_word)
if [ $status ]; then
: stuff
else
: more code
if [ $status ]; then
: stuff
else
On the surface this looks more straightforward, but I realize it's dirty.
If the other logic weren't so complex and I were only running this once, I realize I could embed the command in place of the test operator, but this is not ideal when you need to reuse the results in other locations without re-running the test:
if long_running_command | grep -q trigger_word; then
: stuff
else
The only thing I've found so far is assigning the code as part of command substitution:
status=$(long_running_command | grep -q trigger_word; echo $?)
if [ $status -eq 0 ]; then
: stuff
else
Even this is not technically a one-shot assignment (although some may argue the readability is better), and the necessary numerical test syntax still seems cumbersome to me. Maybe I'm just being OCD.
Am I missing a more elegant way to assign an exit code to a variable then branch on it later?
The simple solution:
output=$(complex_command)
status=$?
if (( status == 0 )); then
    : stuff with "$output"
fi
: more code
if (( status == 0 )); then
    : stuff with "$output"
fi
Or more eleganter-ish
do_complex_command () {
    # side effects: global variables
    # store the output in $g_output and the status in $g_status
    g_output=$(
        command -args | commands | grep -q trigger_word
    )
    g_status=$?
}
complex_command_succeeded () {
    test $g_status -eq 0
}
complex_command_output () {
    echo "$g_output"
}
do_complex_command
if complex_command_succeeded; then
    : stuff with "$(complex_command_output)"
fi
: more code
if complex_command_succeeded; then
    : stuff with "$(complex_command_output)"
fi
Or
do_complex_command () {
    # side effects: global variables
    # store the output in $g_output and the status in $g_status
    g_output=$(
        command -args | commands
    )
    g_status=$?
}
complex_command_output () {
    echo "$g_output"
}
complex_command_contains_keyword () {
    complex_command_output | grep -q "$1"
}
if complex_command_contains_keyword "trigger_word"; then
    : stuff with "$(complex_command_output)"
fi
If you don't need to store the specific exit status, just whether the command succeeded or failed (e.g. whether grep found a match), I'd use a fake-boolean variable to store the result:
if long_running_command | grep trigger_word; then
    found_trigger=true
else
    found_trigger=false
fi
# ...later...
if ! $found_trigger; then
    # stuff to do if the trigger word WASN'T found
fi
#...
if $found_trigger; then
    # stuff to do if the trigger WAS found
fi
Notes:
The shell doesn't really have boolean (true/false) variables. What's actually happening here is that "true" and "false" are stored as strings in the found_trigger variable; when if $found_trigger; then executes, it runs the value of $found_trigger as a command, and it just happens that the true command always succeeds and the false command always fails, thus causing "the right thing" to happen. In if ! $found_trigger; then, the "!" toggles the success/failure status, effectively acting as a boolean "not".
if long_running_command | grep trigger_word; then is equivalent to running the command, then using if [ $? -ne 0 ]; then to check its exit status. I find it a little cleaner, but you have to get used to thinking of if as checking the success/failure of a command, not just testing boolean conditions. If "active" if commands aren't intuitive to you, use a separate test instead.
As Charles Duffy pointed out in a comment, this trick executes data as a command, and if you don't have full control over that data... you don't have control over what your script is going to do. So never set a fake-boolean variable to anything other than the fixed strings "true" and "false", and be sure to set the variable before using it. If you have any nontrivial execution flow in the script, set all fake-boolean variables to sane default values (i.e. "true" or "false") before the execution flow gets complicated.
Failure to follow these rules can lead to security holes large enough to drive a freight train through.
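To make the risk in that last note concrete, here is a harmless stand-in for what untrusted data in a fake-boolean would do (echo pwned substitutes for something genuinely nasty):
found_trigger="echo pwned"    # imagine this value came from untrusted input
if $found_trigger; then       # runs "echo pwned" as a command - which succeeds
    echo "the 'found' branch ran, and arbitrary code executed first"
fi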
Why don't you set flags for the stuff that needs to happen later?
cheeseballs=false
nachos=false
guppies=false
command
case $? in
    42) cheeseballs=true ;;
    17 | 31) cheeseballs=true; nachos=true; guppies=true;;
    66) guppies=true; echo "Bingo!";;
esac
$cheeseballs && java -crash -burn
$nachos && python ./tex.py --mex
if $guppies; then
    aquarium --light=blue --door=hidden --decor=squid
else
    echo SRY
fi
As pointed out by @CharlesDuffy in the comments, storing an actual command in a variable is slightly dubious, and vaguely triggers Bash FAQ #50 warnings; the code reads (slightly & IMHO) more naturally like this, but you have to be really careful that you have total control over the variables at all times. If you have the slightest doubt, perhaps just use string values and compare against the expected value at each junction.
[ "$cheeseballs" = "true" ] && java -crash -burn
etc etc; or you could refactor to some other implementation structure for the booleans (an associative array of options would make sense, but isn't portable to POSIX sh; a PATH-like string is flexible, but perhaps too unstructured).
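For illustration, a hedged sketch of the associative-array variant mentioned above (bash 4+ only, so not portable to POSIX sh; the names reuse the example above):
declare -A flags=([cheeseballs]=false [nachos]=false [guppies]=false)
command
case $? in
    42) flags[cheeseballs]=true ;;
esac
# Compare strings instead of executing the value - no code-execution risk:
[ "${flags[cheeseballs]}" = "true" ] && java -crash -burn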
Based on the OP's clarification that it's only about success v. failure (as opposed to the specific exit codes):
long_running_command | grep -q trigger_word || failed=1
if ((!failed)); then
: stuff
else
: more code
if ((!failed)); then
: stuff
else
Sets the success-indicator variable only on failure (via ||, i.e., if a non-zero exit code is returned).
Relies on the fact that variables that aren't defined evaluate to false in an arithmetic conditional (( ... )).
Care must be taken that the variable ($failed, in this example) hasn't accidentally been initialized elsewhere.
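A quick sketch of the unset-variable behavior being relied on here:
unset failed
if ((!failed)); then echo "unset counts as 0, so this is the success path"; fi
failed=1
if ((!failed)); then :; else echo "now we take the failure path"; fi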
(On a side note, as @nos has already mentioned in a comment, you need to be careful with commands involving a pipeline; from man bash (emphasis mine):
The return status of a pipeline is the exit status of the last command,
unless the pipefail option is enabled. If pipefail is enabled, the
pipeline's return status is the value of the last (rightmost) command
to exit with a non-zero status, or zero if all commands exit successfully.
To set pipefail (which is OFF by default), use set -o pipefail; to turn it back off, use set +o pipefail.)
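A minimal demonstration of the difference:
false | true; echo $?    # 0 - status of the last command in the pipeline
set -o pipefail
false | true; echo $?    # 1 - the rightmost non-zero status wins
set +o pipefail          # back to the default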
If you don't care about the exact error code, you could do:
if long_running_command | grep -q trigger_word; then
    success=1
    : success
else
    success=0
    : failure
fi
if ((success)); then
    : success
else
    : failure
fi
Using 0 for false and 1 for true is my preferred way of storing booleans in scripts. if ((flag)) mimics C nicely.
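A small sketch of that C-like flag style:
flag=1
((flag)) && echo "on"     # non-zero is true in an arithmetic context
flag=0
((flag)) || echo "off"    # zero is false, just as in C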
If you do care about the exit code, then you could do:
if long_running_command | grep -q trigger_word; then
    status=0
    : success
else
    status=$?
    : failure
fi
if ((status == 0)); then
    : success
else
    : failure
fi
I prefer an explicit test against 0 rather than using !, which doesn't read right.
(And yes, $? does yield the correct value here.)
Hmm, the problem is a bit vague - if possible, I suggest considering refactoring/simplifying, e.g.:
function check_your_codes {
    # ... run all 'checks' and store the results in an array
}
###
function process_results {
    # do your 'stuff' based on array values
}
###
create_My_array
check_your_codes
process_results
Also, unless you really need to save the exit code, there is no need to store_and_test - just test_and_do, i.e. use a case statement as suggested above, or something like:
run_some_commands_and_return_EXIT_CODE_FROM_THE_LAST_ONE
if [[ $? -eq 0 ]] ; then do_stuff ; else do_other_stuff ; fi
:)
Dale
Possible Duplicate:
Ternary operator (?:) in Bash
If this were AS3 or Java, I would do the following:
fileName = dirName + "/" + (useDefault ? defaultName : customName) + ".txt";
But in shell, that seems needlessly complicated, requiring several lines of code, as well as quite a bit of repeated code.
if [ $useDefault ]; then
    fileName="$dirName/$defaultName.txt"
else
    fileName="$dirName/$customName.txt"
fi
You could compress that all into one line, but that sacrifices clarity immensely.
Is there any better way of writing an inline if with variable assignment in shell?
Just write:
fileName=${customName:-$defaultName}.txt
It's not quite the same as what you have, since it does not check useDefault. Instead, it just checks if customName is set. Instead of setting useDefault when you want to use the default, you simply unset customName.
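A quick sketch of how the :- default expansion behaves:
defaultName=default
customName=custom
echo "${customName:-$defaultName}.txt"    # custom.txt - customName is set
unset customName
echo "${customName:-$defaultName}.txt"    # default.txt - falls back
# Note: :- also falls back when customName is set but empty; use
# ${customName-$defaultName} if only *unset* should trigger the default.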
There is no ?: conditional operator in the shell, but you could make the code a little less redundant like this:
if [ $useDefault ]; then
    tmpname="$defaultName"
else
    tmpname="$customName"
fi
fileName="$dirName/$tmpname.txt"
Or you could write your own shell function that acts like the ?: operator:
cond() {
    if [ "$1" ] ; then
        echo "$2"
    else
        echo "$3"
    fi
}
fileName="$dirname/$(cond "$useDefault" "$defaultName" "$customName").txt"
though that's probably overkill (and it evaluates all three arguments).
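To see the "evaluates all three arguments" caveat in action (the stderr echoes are only there to mark which substitutions ran):
cond "" "$(echo ran-then >&2; echo a)" "$(echo ran-else >&2; echo b)"
# stderr shows both "ran-then" and "ran-else": unlike a real ?:,
# both branches were expanded before cond even started.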
Thanks to Gordon Davisson for pointing out in comments that quotes nest within $(...).
I need to pass a function as a parameter in Bash. For example, the following code:
function x() {
    echo "Hello world"
}
function around() {
    echo "before"
    eval $1
    echo "after"
}
around x
Should output:
before
Hello world
after
I know eval is not correct in that context but that's just an example :)
Any idea?
If you don't need anything fancy like delaying the evaluation of the function name or its arguments, you don't need eval:
function x() { echo "Hello world"; }
function around() { echo before; $1; echo after; }
around x
does what you want. You can even pass the function and its arguments this way:
function x() { echo "x(): Passed $1 and $2"; }
function around() { echo before; "$@"; echo after; }
around x 1st 2nd
prints
before
x(): Passed 1st and 2nd
after
I don't think anyone quite answered the question. The author didn't ask whether he could echo strings in order; rather, he wants to know whether he can simulate function-pointer behavior.
There are a couple of answers that are much like what I'd do, and I want to expand it with another example.
From the author:
function x() {
    echo "Hello world"
}
function around() {
    echo "before"
    ($1)          # <------ Only change
    echo "after"
}
around x
To expand this, we will have function x echo "Hello world:$1" to show when the function execution really occurs. We will pass a string that is the name of the function "x":
function x() {
    echo "Hello world:$1"
}
function around() {
    echo "before"
    ($1 HERE)     # <------ Only change
    echo "after"
}
around x
To describe this, the string "x" is passed to the function around() which echos "before", calls the function x (via the variable $1, the first parameter passed to around) passing the argument "HERE", finally echos after.
As another aside, this is the methodology to use variables as function names. The variables actually hold the string that is the name of the function and ($variable arg1 arg2 ...) calls the function passing the arguments. See below:
function x(){
    echo $3 $1 $2    # <== just rearrange the order of passed params
}
Z="x" # or just Z=x
($Z 10 20 30)
gives: 30 10 20, where we executed the function named "x" stored in variable Z and passed parameters 10 20 and 30.
Above, we reference functions by assigning their names to variables, so we can use a variable in place of actually knowing the function name. This is similar to a classic function-pointer situation in C, where you generalize program flow but pre-select the function calls you will make based on command-line arguments.
In bash these are not function pointers, but variables that refer to names of functions that you later use.
There's no need to use eval:
function x() {
    echo "Hello world"
}
function around() {
    echo "before"
    var=$($1)
    echo "after $var"
}
around x
You can't pass anything to a function other than strings. Process substitutions can sort of fake it; Bash tends to hold the FIFO open until the command it's expanded for completes.
Here's a quick silly one
foldl() {
    echo $(($(</dev/stdin)$2))
} < <(tr '\n' "$1" <$3)
# Sum 20 random ints from 0-999
foldl + 0 <(while ((n=RANDOM%999,x++<20)); do echo $n; done)
Functions can be exported, but this isn't as interesting as it first appears. I find it's mainly useful for making debugging functions accessible to scripts or other programs that run scripts.
(
    id() {
        "$@"
    }
    export -f id
    exec bash -c 'echowrap() { echo "$1"; }; id echowrap hi'
)
id still only gets a string that happens to be the name of a function (automatically imported from a serialization in the environment) and its args.
Pumbaa80's comment to another answer is also good (eval $(declare -F "$1")), but it's mainly useful for arrays, not functions, since functions are always global. If you were to run this within a function, all it would do is redefine it, so there would be no effect. It can't be used to create closures or partial functions or "function instances" dependent on whatever happens to be bound in the current scope. At best this can be used to store a function definition in a string which gets redefined elsewhere - but those functions can also only be hardcoded, unless of course eval is used.
Basically Bash can't be used like this.
A better approach is to use local variables in your functions. The problem then becomes how to get the result to the caller. One mechanism is to use command substitution:
function myfunc()
{
    local myresult='some value'
    echo "$myresult"
}
result=$(myfunc) # or result=`myfunc`
echo $result
Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.
You should have something along the lines of:
function around()
{
    echo 'before';
    echo `$1`;
    echo 'after';
}
You can then call around x
eval is likely the only way to accomplish it. The only real downside is the security aspect, as you need to make sure that nothing malicious gets passed in and that only the functions you want to be called will be called (along with checking that it doesn't contain nasty characters like ';' as well).
So if you're the one calling the code, then eval is likely the only way to do it. Note that there are other forms of eval that would likely work too involving subcommands ($() and ``), but they're not safer and are more expensive.
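One way to hedge against that risk is to check that the string actually names a declared function before running it; declare -F succeeds only for existing functions. A sketch (call_if_function is a made-up helper name):
call_if_function() {
    if declare -F "$1" >/dev/null; then
        "$@"    # runs only names that are declared functions
    else
        echo "not a function: $1" >&2
        return 1
    fi
}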
I don't have much practical programming experience.
I wrote a function in shell script to take parameters based on whose values an external program is invoked with other parameters. (the program name is passed as a parameter to the function, $1 in this case).
Somehow I find this code shitty and would like to reduce the lines and make it more efficient. I feel like there are too many conditions here and there might be an easier way to do this. Any ideas? There has to be a better way to do this. Thanks.
Code is here
You can build the argument list as an array, with separate conditionals for each variable part of the list (rather than nested conditionals as you have). This is very similar to Roland Illig's answer, except that using an array rather than a text variable avoids possible problems with funny characters (like spaces) in arguments (see BashFAQ #050) -- it's probably not needed here, but I prefer to do it the safe way to avoid surprises later. Also, note that I've assumed the arguments must be in a certain order -- if -disp can go before -refresh, then the "all versions" parts can all be put together at the beginning.
# Build the argument list, depending on a variety of conditions
arg_list=("-res" "$2" "$3")    # all versions start this way
if [ "$7" = "allresolutions" ]; then
    # allresolutions include the refresh rate parameter to be passed
    arg_list+=("-refresh" "$4")
fi
arg_list+=("-disp" "$5")       # all versions need the disp value
if [ "$6" = "nooptions" ]; then
    : # nothing to add
elif [ "$6" = "vcaa" ]; then
    arg_list+=("-vcaa" "5")
elif [ "$6" = "fsaa" ]; then
    arg_list+=("-fsaa" "4")
elif [ "$6" = "vcfsaa" ]; then
    arg_list+=("-vcaa" "5" "-fsaa" "4")
fi
if [ "$1" != "gears" ]; then
    # ctree has time-to-run passed as parameter to account for suitable time for
    # logging fps score. If bubble fps capture strategy is different from ctree,
    # then this if statement has to be modified accordingly
    arg_list+=("-sec" "$8")
fi
# Now actually run it
./"$1" "${arg_list[@]}" &>> ~/fps.log
Instead of the repeated inner "if … elif … fi" you can do this part of the argument processing once and save it in a variable (here: options).
case $6 in
    nooptions) options="";;
    vcaa)      options="-vcaa 5";;
    fsaa)      options="-fsaa 4";;
    vcfsaa)    options="-vcaa 5 -fsaa 4";;
esac
...
./$1 -res $2 $3 -refresh $4 -disp $5 $options -sec $8
You could use getopts (see example) if you need to process such a large number of arguments.
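For instance, a minimal getopts sketch (the option letters and variable names are made up for illustration, not taken from the original script):
#!/bin/bash
refresh="" disp="" sec=""    # defaults, overridden by options
while getopts "r:d:s:" opt; do
    case $opt in
        r) refresh=$OPTARG ;;
        d) disp=$OPTARG ;;
        s) sec=$OPTARG ;;
        *) echo "usage: $0 [-r refresh] [-d disp] [-s sec] prog ..." >&2
           exit 1 ;;
    esac
done
shift $((OPTIND - 1))    # $1, $2, ... are now the remaining positional arguments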