I am trying to do something similar to "try...catch" in bash, for which I read in a post that a good option to somehow replicate try/catch is to use the || and && operators.
In my case, I have a piece of code that is supposed to zip some files, but the zip is actually empty, so it throws an error:
zip -v OUTPUT.zip file1 file2 >> $LOGFILE 2>&1
Then in the LOGFILE, I see:
zip error: Nothing to do! (OUTPUT.zip)
zip warning: OUTPUT.zip not found or empty
...(continued error message)
So I do this instead, to "catch the error":
{ zip -v OUTPUT.zip file1 file2 >> $LOGFILE 2>&1 } || { printf "Error while zipping!" >> $LOGFILE && exit 1 }
...which partially works. It does exit the script, but doesn't run the printf command (or at least I can't see its output in the LOGFILE).
Also, although it reaches the "exit 1" command (on line 115), I get this message (line 120 is my last line):
line 120: syntax error: unexpected end of file
What is wrong with my command? Why doesn't it do the print, and why the "unexpected end of file" message? Sorry if this question is more related to general bash programming than to the use of || and &&, but I didn't know how else to categorize it.
Thanks a lot!
Within braces, you must terminate all commands with a semi-colon, as per the following transcript (you can see on the last line that bash is waiting for more commands to be typed in):
pax> { false; } || echo x;
x
pax> { false } || echo x;
+++> _
You also don't need to "brace up" a simple command, so the correct thing in your case would be:
zip -v OUTPUT.zip file1 file2 >> $LOGFILE 2>&1 || { echo "Error while zipping!" >> $LOGFILE ; exit 1; }
I've also used echo rather than printf, on the assumption you'll want a newline character at the end of the file, and made the exit unconditional on the success or otherwise of printf/echo.
I want to create a utility function for bash to remove duplicate lines. I am using this function:
function remove_empty_lines() {
    if ! command -v awk &> /dev/null
    then
        echo '[x] ERR: "awk" command not found'
        return
    fi
    if [[ -z "$1" ]]
    then
        echo "usage: remove_empty_lines <file-name> [--replace]"
        echo
        echo "Arguments:"
        echo -e "\t--replace\t (Optional) If not passed, the result will be redirected to stdout"
        return
    fi
    if [[ ! -f "$1" ]]
    then
        echo "[x] ERR: \"$1\" file not found"
        return
    fi
    echo $0
    local CMD="awk '!seen[$0]++' $1"
    if [[ "$2" = '--reload' ]]
    then
        CMD+=" > $1"
    fi
    echo $CMD
}
If I run the main awk command directly, it works. But when I execute the same $CMD in the function, I get this output:
$ remove_empty_lines app.js
/bin/bash
awk '!seen[/bin/bash]++' app.js
The original code is broken in several ways:
When used with --reload, it would truncate the output file's contents before awk could ever read those contents (see How can I use a file in a command and redirect output to the same file without truncating it?)
It didn't ever actually run the command, and for the reasons described in BashFAQ #50, storing a shell command in a string is inherently buggy (one can work around some of those issues with eval; BashFAQ #48 describes why doing so introduces security bugs).
It wrote error messages (and other "diagnostic content") to stdout instead of stderr; this means that if your function's output was redirected to a file, you could never see its errors -- they'd end up jumbled into the output.
Error cases were handled with a return even in cases where $? would be zero; this means that return itself would return a zero/successful/truthy status, not revealing to the caller that any error had taken place.
Presumably the reason you were storing your output in CMD was to be able to perform a redirection conditionally, but that can be done in other ways. Below, we always create a file descriptor out_fd, but point it either to stdout (when called without --reload) or to a temporary file (when called with --reload); if, and only if, awk succeeds, we then move the temporary file over the output file, replacing it in a single atomic operation.
remove_empty_lines() {
    local out_fd rc=0 tempfile=
    command -v awk &>/dev/null || { echo '[x] ERR: "awk" command not found' >&2; return 1; }
    if [[ -z "$1" ]]; then
        printf '%b\n' >&2 \
            'usage: remove_empty_lines <file-name> [--replace]' \
            '' \
            'Arguments:' \
            '\t--replace\t(Optional) If not passed, the result will be redirected to stdout'
        return 1
    fi
    [[ -f "$1" ]] || { echo "[x] ERR: \"$1\" file not found" >&2; return 1; }
    if [ "$2" = --reload ]; then
        tempfile=$(mktemp -t "$1.XXXXXX") || return
        exec {out_fd}>"$tempfile" || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    else
        exec {out_fd}>&1
    fi
    awk '!seen[$0]++' <"$1" >&$out_fd || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    exec {out_fd}>&-    # close our file descriptor
    if [[ $tempfile ]]; then
        mv -- "$tempfile" "$1" || return
    fi
}
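Usage would then look like this (a sketch; app.js is just an example name, and note that the function tests for --reload even though its usage text says --replace):
remove_empty_lines app.js             # deduplicated lines written to stdout
remove_empty_lines app.js --reload    # app.js atomically replaced with the deduplicated version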
First off, the output from your function call is not an error, but rather the output of the two echo commands (echo $0 and echo $CMD).
And as Charles Duffy has pointed out, at no point does the function actually run the $CMD.
As for the inclusion of /bin/bash in your function's echo output ... the main problem is the reference to $0; by definition $0 is the name of the running process, which in the case of a function is the shell under which the function is being called. Consider the following when run from a bash command prompt:
$ echo $0
-bash
As you can see from your output, this generates /bin/bash in your environment. See this and this for more details.
On a related note, the reference to $0 within double quotes causes the $0 to be evaluated, so this:
local CMD="awk '!seen[$0]++' $1"
becomes
local CMD="awk '!seen[/bin/bash]++' app.js"
I'm thinking what you want is something like:
echo $1 # the name of the file to be processed
local CMD="awk '!seen[\$0]++' $1" # escape the '$' in '$0'
becomes
local CMD="awk '!seen[$0]++' app.js"
That should fix the issues shown in your function's output; as for the other issues, you're getting a good bit of feedback in the various comments.
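You can see the quoting difference at an interactive bash prompt (an illustrative transcript; $0 is -bash in an interactive login shell, as shown above):
$ echo "awk '!seen[$0]++'"     # $0 expands: the double quotes govern, not the inner single quotes
awk '!seen[-bash]++'
$ echo "awk '!seen[\$0]++'"    # the backslash keeps $0 literal
awk '!seen[$0]++'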
In bash scripting, you can check for errors by doing something like this:
if some_command; then
    printf 'some_command succeeded\n'
else
    printf 'some_command failed\n'
fi
If you have a large script, constantly having to do this becomes tedious and verbose. It would be nice to write a function so that all you have to do is pass the command, and it prints whether the command succeeded or failed, exiting if it failed.
I thought about doing something like this:
function cmdTest() {
    if $($@); then
        echo OK
    else
        echo FAIL
        exit
    fi
}
So then it would be called by doing something like cmdTest ls -l, but when I do this I get:
ls -l: Command not found.
What is going on? How can I fix this?
Thank you.
Your cmdTest function is limited to simple commands (no pipes, no &&/||, no redirections) and their arguments. It's only a little more typing to use the far more robust
{ ... ;} && echo OK || { echo Fail; exit 1; }
where ... is to be replaced by any command line you might want to execute. For simple commands, the braces and trailing semi-colon are unnecessary, but in general you will need them to ensure that your command line is treated as a single command for purposes of using &&.
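One place the grouping visibly matters is the failure branch itself (a sketch, with cmd standing in for any command):
cmd && echo OK || echo Fail; exit 1          # wrong: exit 1 runs unconditionally
cmd && echo OK || { echo Fail; exit 1; }     # right: exit 1 runs only on failure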
If you tried to run something like
cmdTest grep "foo" file.txt || echo "foo not found"
your function would only run grep "foo" file.txt, and if your function failed, then echo "foo not found" would run. An attempt to pass the entire x || y to your function like
cmdTest 'grep "foo" file.txt || echo "foo not found"'
would fail even worse: the entire command string would be treated as a command name, which is not what you intended. Likewise, you cannot pass pipelines
cmdTest grep "foo" file.txt | grep "bar"
because the pipeline consists of two commands: cmdTest grep "foo" file.txt and grep "bar".
Shell (whether bash, zsh, POSIX sh, or nearly any other kind) is simply not a language where it is useful to pass arbitrary command lines as arguments to another function for execution. The only workaround, the eval command, is fragile at best and a security risk at worst; I cannot recommend against it strongly enough. (And no, I will not provide an example of how to use eval, even in a limited capacity.)
Use this function like this:
function cmdTest() { if "$@"; then echo "OK"; else return $?; fi; }
OR to print Fail also:
function cmdTest() { if "$@"; then echo "OK"; else ret=$?; echo "Fail"; return $ret; fi; }
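An illustrative session with the second version (the error text comes from ls itself and may differ on your system):
$ cmdTest ls /nonexistent
ls: cannot access '/nonexistent': No such file or directory
Fail
$ echo $?
2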
First, hello to the Stack Overflow community. I have learnt a lot thanks to very clear questions and professional replies. I never needed to ask a question before, because there was always someone who had asked the same one already.
But not today. I haven't found any solution to my problem, and I beg for your help.
I need to process the output of a function line by line in order to update a log online. I am working with bash.
The following block works pretty well:
convertor some parameters | while read line
do
    if [ "${line:0:14}" != "[informations]" ]
    then
        update_online_log "${line}"
    fi
done
But convertor may exit with different statuses, and I need to know what the exit status was. The code below doesn't work, as it gives me the exit status of the last executed command (update_online_log).
convertor some parameters | while read line
do
    if [ "${line:0:14}" != "[informations]" ]
    then
        update_online_log "${line}"
    fi
done
exit_status=$?
The code below should work (I haven't tried it yet):
convertor some parameters > out.txt
exit_status=$?
while read line
do
    if [ "${line:0:14}" != "[informations]" ]
    then
        update_online_log "${line}"
    fi
done < out.txt
rm out.txt
But if I use this, the online log will only be updated once the conversion has finished. Conversion may be a very long process, and I want to keep users updated while it is in progress.
Thank you in advance for your help.
The PIPESTATUS array may be helpful for you: it saves the exit statuses of each component of the previous pipeline:
$ (echo a; exit 42) | (cat; echo b; exit 21) | (cat; echo c; exit 3) | { cat; echo hello; }
a
b
c
hello
$ echo "${PIPESTATUS[*]}"
42 21 3 0
That array is pretty fragile (it is overwritten as soon as the next command runs), so if you want to do anything with it, immediately save it to another array:
$ (echo a; exit 42) | ... as above
$ ps=( "${PIPESTATUS[@]}" )
$ for i in "${!ps[@]}"; do echo "$i ${ps[$i]}"; done
0 42
1 21
2 3
3 0
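Applied to the pipeline from the question, a minimal sketch:
convertor some parameters | while read line
do
    update_online_log "${line}"
done
exit_status=${PIPESTATUS[0]}    # convertor's own exit status, not the loop's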
Add set -o pipefail at the top of the script. From the documentation:
the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
That means it's zero if all commands succeed and non-zero if any command fails.
Just check the exit status after the done, that should work. Test it with
convertor some parameters | false | while read line
...
No line should be processed and the exit code should be 1.
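Putting both pieces together for the question's code, a sketch:
set -o pipefail
convertor some parameters | while read line
do
    if [ "${line:0:14}" != "[informations]" ]
    then
        update_online_log "${line}"
    fi
done
exit_status=$?    # non-zero if convertor (or any other stage) failed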
To show that your approach was close to working, you can use a named pipe (created with the POSIX mkfifo utility) instead of a regular file.
mkfifo out.txt
while read line; do
    if [[ $line != \[informations\]* ]]; then
        update_online_log "$line"
    fi
done < out.txt &
convertor some parameters > out.txt
rm out.txt
This code creates the named pipe, then runs the loop that consumes it in the background. The loop will block waiting for data from convertor. Once convertor exits (and you can get its exit code at that time), out.txt will be closed, the while-loop process will exit, and you can remove the named pipe.
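If you also want the script to wait for the background loop to drain the pipe, a small extension of the above (same assumptions) would be:
convertor some parameters > out.txt
exit_status=$?    # convertor's exit code, captured right here
wait              # let the backgrounded while loop finish
rm out.txt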
I'd like to achieve this test with a cleaner, less ugly one-liner:
#!/bin/bash
test -d "$1" || (echo "Argument 1: '$1' is not a directory" 1>&2 ; exit 1) || exit 1
# ... script continues if $1 is directory...
Basically I am after something which does not duplicate the exit, and preferably does not spawn a sub-shell (and as a result should also look less ugly), but still fits in one line.
Without a subshell and without duplicating exit:
test -d "$1" || { echo "Argument 1: '$1' is not a directory" 1>&2 ; exit 1; }
You may also want to refer to Grouping Commands:
{}
{ list; }
Placing a list of commands between curly braces causes the list to be executed in the current shell context. No subshell is created. The semicolon (or newline) following list is required.
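The distinction matters for exit in particular; a quick sketch of why the subshell version needs its second exit:
(exit 1)        # exit only leaves the subshell; the script carries on
echo "still running, status was $?"
{ exit 1; }     # exit leaves the current shell; nothing after this runs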
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
    cmnd=$1
    $($cmnd)
    if [ $? != 0 ]; then
        printf "Error when executing command: '$command'"
        exit $ERROR_CODE
    fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code

    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}

command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is good practice to store the return code in a variable (and store it as soon as possible), so that you can use it later, as I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code
pass the command with quotes surrounding it, as in safeRunCommand "$command". If you don't, cmnd will get only the value ls and not ls -l. This is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending on how complex your command argument is.
eval is used to evaluate the string, so that a command containing pipes works correctly
Note: Remember that some commands return 1 even when there isn't any error. grep, for example, returns 0 if it found something, and 1 otherwise.
I have tested this with KornShell and Bash, and it worked fine. Let me know if you face issues running it.
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
It should be $cmnd instead of $($cmnd). It works fine with that change on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd="$1"; $cmnd with "$@". And do not run your script as command="some cmd"; safeRunCommand $command. Run it as safeRunCommand some cmd.
Also, when you have to debug your Bash scripts, execute them with the '-x' flag [bash -x s.sh].
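With the "$@" version above, the call site passes the command and its arguments as separate words rather than as one string (hypothetical commands):
safeRunCommand ls -l /tmp
safeRunCommand grep -q pattern file.txt    # careful: status 1 here only means no match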
There are several things wrong with your script.
Functions (subroutines) should be declared before you attempt to call them. You probably want to return rather than exit from your subroutine, to allow the calling block to test the success or failure of a particular command. That aside, you never set ERROR_CODE, so it is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh

command="/bin/date -u"    #...Example Only

safeRunCommand() {
    cmnd="$@"    #...ensure whitespace is passed and preserved
    $cmnd
    ERROR_CODE=$?    #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE}    #...consider 'return' here
    fi
}

safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { if [[ "$1" == "-h" ]]; then shift; eval $* > /dev/null 2>&1; ec=$?; echo $ec; else eval $*; ec=$?; fi; }
This gives you the option to suppress the output of the command whose exit code you want. When the output is suppressed with -h, the exit code is also printed by the function, so you can capture it with command substitution.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
A solution for your code using this function:
#!/bin/bash
rc="$(ec -h 'ls -l | grep p')"    # $(...) runs ec in a subshell, so capture the echoed code rather than $ec
if [[ "$rc" != "0" ]]; then
    echo "Error when executing command: 'grep p' [$rc]"
    exit "$rc"
fi
You should also note that the exit code you see will be that of the grep command, since it is the last command executed in the pipeline, not that of ls.
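If you instead need the status of an earlier stage, bash's PIPESTATUS array (shown earlier on this page) records the exit code of every stage of the most recent pipeline:
ls -l | grep p
echo "${PIPESTATUS[0]} ${PIPESTATUS[1]}"    # ls's status, then grep's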