Ubuntu bash function works when sourced, but not when run with the bash command

I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh works when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $? I get 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh, looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
    echo "Hello"
}
hello
hello
hello
When I run this script, without having done anything else, I type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it with $ bash addseq2.sh 1 2 3, I don't see any result. I can see after checking, as you suggested, with $ echo $? that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh, I can call it just by typing $ addseq2 1 2 3, and it returns 6 as expected.
I don't understand why the two functions are behaving differently.

When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
    echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
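You can see both effects from an interactive shell (assuming hello.sh is in the current directory and you haven't sourced it in this shell):
$ bash hello.sh
Hello
Hello
Hello
$ hello
bash: hello: command not found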
In the case of addseq2.sh, the commands are:
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
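You can confirm the difference in an interactive session (a sketch, using the question's own example):
$ bash addseq2.sh 1 2 3
$ echo $?
0
$ source addseq2.sh
$ addseq2 1 2 3
6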
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$#"
at the end of addseq2.sh.
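The complete script then looks like this, and bash addseq2.sh 1 2 3 prints 6:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
addseq2 "$@"   # pass the script's arguments on to the function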

When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention 0 indicates success. A better way to test in a script is something like this:
if ./addseq2.sh
then
    echo 'script worked'
else
    # Redirect error message to stderr
    echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function but you are not calling it, so I appended a call to your code:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
addseq2 1 2 3 4   # <<<<<<<
and I got:
10
By the way, an alternative to:
let sum=sum+$element
is POSIX arithmetic expansion, which works in any POSIX shell:
sum=$((sum + element))

Related

Cron stops the script if the file is not found

I have the following simple script:
#!/bin/sh
a() {
    echo 1
}
a

b() {
    for file in "${DOWNLOADS}"123_*; do
        mv "${file}" "${DOWNLOADS}321"
    done
}
b

c() {
    echo 2
}
c
It is executable, and if I call it from the terminal it works exactly right: a, b, c. But if I try to execute it via cron and there is no "123_{something}" file in the "${DOWNLOADS}" directory, then only function a is executed, plus the beginning of the for loop. Function c is not called because the script stops.
crontab -l
=>
10 20 * * * zsh /user/file
Debugging showed the following:
10 20 * * * zsh /user/file >> ~/tmp/cron.txt 2>&1
=>
+/user/file:47> a
+a:1> echo 1
1
+/user/file:67> b
file:12: no matches found: /Users/ivan/Downloads/123_*
As can be seen the execution of the script stopped immediately after the file was not found.
I don't understand why the execution of this script via cron stops if the file is not found, and how this can be avoided; can anyone explain this?
Or maybe it's just the limitations of my environment?
I don't understand why the execution of this script via cron stops if the file is not found, and how this can be avoided; can anyone explain this?
Using globs that don't match anything is an error:
% print nope*
zsh: no matches found: nope*
You can fix this by setting the null_glob option:
% setopt null_glob
% print nope*
[no output, as nothing was found]
You can set this for a single pattern by adding (N) to the end:
% print nope*(N)
So in your example, you end up with something like:
b() {
    for file in ${DOWNLOADS}123_*(N); do
        mv $file ${DOWNLOADS}321
    done
}
NOTE: this only applies to zsh; in your script you have #!/bin/sh at the top, but you're running it with zsh. It's best to change that to #!/usr/bin/env zsh if you plan on using zsh.
In standard /bin/sh, the behaviour is different: a pattern that doesn't match anything is left as-is, unexpanded:
% sh
$ echo nope*
nope*
In that case, you need to check inside the loop whether $file actually matched anything, and continue if it didn't, as in the sketch below. But you're running zsh, so it's not really applicable: just be aware that /bin/sh and zsh behave quite differently with the default settings (you can also get the sh behaviour in zsh if you want with setopt no_nomatch).
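A common sh-compatible idiom (a sketch, reusing the question's b function) is to test that the glob expanded to an existing file:
b() {
    for file in "${DOWNLOADS}"123_*; do
        [ -e "$file" ] || continue   # glob didn't match; skip the literal pattern
        mv "${file}" "${DOWNLOADS}321"
    done
}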

Is exception handling possible in a Unix shell script that calls other scripts internally

I have a scenario where I need to fetch some data by triggering another external bash script from my shell script. If the external script fails, my shell script should handle it and go through a fallback approach. But I am facing an issue with that external script: it returns exit 1 in failure cases, which causes my script to exit as well, so the fallback approach never executes. Can anyone guide me on how to handle the exit from the external script and run my fallback approach?
Not sure if this works in sh, but it works in bash.
I made a try / except tool out of this, but it will work here too I believe.
#! /bin/bash
try() {
    # direct stderr to /dev/null
    exec 2> /dev/null
    # main block
    input_function="$1"
    # fallback code
    catch_function="$3"
    # open a subshell
    (
        # tell it to exit upon encountering an error
        set -e
        # main block
        "$@"
    )
    # if the exit code of the subshell is > 0, run the fallback code
    if [ "$?" != 0 ]; then
        $catch_function
    else
        # success, it ran with no errors; nothing to do
        :
    fi
    # restore stderr to the terminal
    exec 2> /dev/tty
}
An example of using this would be:
try [function 1] except [function 2]
Function 1 would be main block of code, and 2 would be fallback function/block of code.
Your first function could be:
run() {
    /path/to/external/script
}
And your second can be whatever you want to fall back on.
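Putting it together (a sketch; fallback is a hypothetical name for your fallback function):
run() {
    /path/to/external/script
}

fallback() {
    # hypothetical fallback: fetch the data some other way
    echo "external script failed, running fallback"
}

try run except fallback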
Hope this helps.

Capturing the exit code of sourced shell script without invoking subshell

I have two shell scripts:
test.sh
function func() {
    echo $1
    exit 1
}
run.sh
source ./test.sh
func "Hello"
exitCode=$?
echo "Output: ${exitCode}"
Result:
Hello
The current problem I'm facing is that when the function func calls exit 1, my run.sh script breaks and nothing gets executed after it. So, is there any way I can capture the exit code without breaking run.sh? I know there is a way to invoke a subshell using ( func "Hello" ), but I want to do it without invoking a subshell, because I am using flock. I looked for a reference example but couldn't find anything close to it.
Two ideas that are "pushing the boundary". Use with care, as future changes to the sourced script(s) might break the logic. Recommended only if there is a way to monitor that the script is working OK - e.g. when executing interactively, etc.
I would not use this kind of solutions in any production/critical system.
Option 1: Alias 'exit' to 'return' when sourcing the file.
Assuming that ALL 'exit' statements in test.sh are to be replaced with 'return', and assuming that you are willing to take the risk of future changes to test.sh, consider defining an alias before sourcing:
alias exit=return
source test.sh
unalias exit
func "foo"
Option 2: automatically update the function that is using 'exit' to use return.
source test.sh
body=$(type func | tail -n +2 | sed -e 's/exit/return/g')
eval "$body"

Bash script does not quit on first "exit" call when calling the problematic function using $(func)

Sorry I cannot give a clear title for what's happening, but here is the simplified problem code.
#!/bin/bash

# get the absolute path of .conf directory
get_conf_dir() {
    local path=$(some_command) || { echo "please install some_command first."; exit 100; }
    echo "$path"
}

# process the configuration
read_conf() {
    local conf_path="$(get_conf_dir)/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found"; exit 200; }
    # more code ...
}

read_conf
So basically what I am trying to do here is read a simple configuration file in a bash script, and I am having some trouble with error handling.
The some_command is a command which comes from a 3rd party library (i.e. greadlink from coreutils), required to obtain the path.
When running the code above, I expect it to output "please install some_command first." because that's where the FIRST error occurs, but it actually always prints "conf file not found".
I am very confused by this behavior. I think Bash probably intends to handle things this way, but I don't know why. And most importantly, how do I fix it?
Any idea would be greatly appreciated.
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf? Which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should, in general, be sent to standard error and not standard output anyway. So they should be echo "..." >&2. That way they won't be caught by the command substitution at all.)
The reason you don't is that the exit 100 block never happens.
You can see this with set -x at the top of your script also. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might but you won't see that.
What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Get the output you expected in the first place?
Right. local and export and declare and typeset are statements on their own. They have their own return values. They ignore (and replace) the return value of the commands that execute in their contexts.
The solution to your problem is to split the local path and path=$(some_command) statements.
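That is, something like this (also sending the error message to stderr, as noted above):
get_conf_dir() {
    local path
    path=$(some_command) || { echo "please install some_command first." >&2; exit 100; }
    echo "$path"
}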
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far, your exit 100 won't exit the main script, since it will only exit the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script then you either need to notice and re-exit with it (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code) or drop the get_conf_dir function itself and just do that inline in read_conf.
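A sketch of the re-exit approach:
read_conf() {
    local conf_path
    conf_path=$(get_conf_dir) || exit $?   # re-exit with get_conf_dir's exit code
    conf_path="$conf_path/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found" >&2; exit 200; }
    # more code ...
}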

Bash conditional execution

I took this code from a script I found online :
[ $# = 0 ] && usage
If there are no parameters at the command line, then call the usage method (which prints the help info).
The thing I don't understand is why the script exits after calling usage. Shouldn't it simply continue to the other code?
There are multiple ways this can happen:
The usage method has an exit command in it
The usage method has a return 1 command (or other non-zero value) and the script is invoked with the -e flag, for example via a #!/bin/sh -e shebang
The usage method has an operation that fails and the script is invoked with the -e flag
Maybe there are more ways that I don't recall now.
Personally, I always use exit 1 as the last command in a usage method, so the behavior seems all natural to me.
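For example (a sketch):
usage() {
    echo "Usage: $0 <args>..." >&2
    exit 1
}

[ $# = 0 ] && usage
# ... the rest of the script is only reached when arguments were given ...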
It would carry on unless 'usage' executes an 'exit' command.
