Bash: ( commands ) | command

I stumbled upon Zenity, a command-line based GUI tool, today. I noticed that there was some syntax of the form ( commands ) | command. Could anyone shed some light on what this is and where I can read more about it?
I found the script below in the docs:
(
    echo "10" ; sleep 1
    echo "# Updating mail logs" ; sleep 1
    echo "50" ; sleep 1
    echo "This line will just be ignored" ; sleep 1
    echo "100" ; sleep 1
) |
zenity --progress \
    --title="Update System Logs" \
    --text="Scanning mail logs..." \
    --percentage=0

The parentheses delimit a subshell, which means the commands inside the parens are run in a separate process and interpreted by a separate instance of the bash interpreter. In this case, it appears they are using a subshell just to group together all the echo and sleep commands so that they can then pipe the combined output of the entire group of commands through zenity. That makes sense, given that the goal in this example is to simulate a progress bar.
You can read more about subshells here: http://tldp.org/LDP/abs/html/subshells.html

The parentheses create a subshell, with all the implications that it has for the current shell.
The subshell cannot change the environment of the parent shell; sometimes you want a subshell precisely so that you can, say, quickly cd to a different directory without affecting the working directory for the rest of the script.
The subshell has a single standard input and a single standard output stream; this is frequently a reason to start one.
The parent shell waits while the subshell completes its commands (unless you run it in the background).
If it helps, think of ( foo; bar ) as a quick way to say sh -c 'foo; bar'.
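For example, a quick demonstration at the prompt of the first point above (the directory names are illustrative):
$ pwd
/home/user
$ ( cd /tmp && pwd )    # the cd happens only inside the subshell
/tmp
$ pwd                   # the parent's working directory is unchanged
/home/user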
A related piece of syntax is the brace group { ...; }, which runs a compound command in the current shell, not a subshell.
test -f file.rc || { echo "$0: file.rc not found -- aborting" >&2; exit 127; }
The exit in particular causes the current shell to exit with a failure exit code, while a subshell which exits does not directly affect the rest of the parent shell script.
(Weirdly, POSIX requires a statement terminator before the closing brace, but not before the closing parenthesis.)
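To see the difference in exit behaviour concretely (a quick demonstration; the status values are illustrative):
$ ( exit 3 ); echo "parent still running, status $?"
parent still running, status 3
$ bash -c '{ exit 3; }; echo "never printed"'; echo $?
3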


`grep` cause bash script stop [duplicate]

I'm studying the content of this preinst file, which is executed before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
if [ -d /usr/share/MyApplicationName ]; then
echo "MyApplicationName is just installed"
return 1
fi
rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an "error", doesn't it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (the bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run the do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
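A minimal sketch of that approach (the on_error name and the message are illustrative, not from the FAQ):
#!/bin/bash
# Report failures via an ERR trap instead of aborting with set -e.
on_error() {
    echo "$0: command failed with status $1" >&2
}
trap 'on_error $?' ERR
false                # triggers the trap; the script keeps running
echo "still running"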
set -e stops the execution of a script if a command or pipeline has an error, which is the opposite of the default shell behaviour of ignoring errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline consisting of a single simple command, a list or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
With pipefail enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining a trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a similar option, -E/errtrace, which causes the ERR trap to be inherited by shell functions and subshells, e.g.:
trap onerr ERR
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
    : nothing
else
    other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
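One possible shape of that temporary-file approach (a sketch; mktemp is not strictly POSIX, but is widely available):
# Run sed on its own so that its exit status is visible;
# a pipeline would mask it behind the status of grep.
tmp=$(mktemp) || exit 1
if sed -e 's/o/me/' stuff >"$tmp"; then
    grep ^ "$tmp"    # still scream if there was no output
else
    echo "$0: sed failed" >&2
fi
rm -f "$tmp"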
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and returns an error code, so your shell exits (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error, whether that's a thrown exception in Java, a segmentation fault in C, or a syntax error in Python, immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line; you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
(Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425.)
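To see concretely how easy it is to miss an error without set -e (a quick demonstration):
$ bash -c 'false; echo done'; echo $?
done
0
$ bash -c 'set -e; false; echo done'; echo $?
1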
This may help you.
Script 1: without set -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error at decho, and the program continues to the next line.
Script 2: with set -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi" and then exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement, e.g.:
set -e
false
echo never executed
set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
# going forward, report the subshell or command exit value on errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of echo "hi" (the last command) being reported, and hi is printed.
cat a.sh
#! /bin/bash
# going forward, report the subshell or command exit value on errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the error from cat b.txt being reported instead, and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.

Quit from pipe in bash

For the following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the while statement to quit the whole pipeline immediately? I know I could save the pid of tail in a temporary file, read that file in the while, and kill tail. Is there a simpler way?
Let me enlarge my question: if this tail|while is used in a script file, is it possible to fulfil the following requirements simultaneously?
a. If Ctrl-C is pressed or the main shell process is signalled, the main shell and the various subshells and background processes it spawned all quit.
b. I can quit the tail|while on a specific trigger only, while the other subprocesses keep running.
c. Preferably without using a temporary file or named pipe.
You're correct. The while loop is executing in a subshell because its input is redirected, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signalling the script itself, it would be wise to enable monitoring and run the pipeline in the background, to ensure that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
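(A quick demonstration of why: test with a single non-empty argument is always true, whatever that argument is, while an empty argument list tests false.)
$ [ 0 ]; echo $?
0
$ [ 1 ]; echo $?
0
$ [ ]; echo $?
1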

grep in bash script + Jenkins

I have a grep command that works in a bash script:
if grep 'stackoverflow' outFile.txt; then
    exit 1
fi
This works fine when run on my host. When I call it from a Jenkins build step, however, it exits 0 every time, never seeing 'stackoverflow'. What is going wrong?
Add the following line as the first line in your "Execute Shell" command
#!/bin/sh
The grep command exits with a non-zero code when it does not find a match, and that causes Jenkins to mark the job as failed. See below.
In the help section of "Execute Shell"
Runs a shell script (defaults to sh, but this is configurable) for building the project. The script will be run with the workspace as the current directory. Type in the contents of your shell script. If your shell script has no header line like #!/bin/sh, then the shell configured system-wide will be used, but you can also use the header line to write the script in another language (like #!/bin/perl) or control the options that the shell uses.
By default, the shell will be invoked with the "-ex" option. So all of the commands are printed before being executed, and the build is considered a failure if any of the commands exits with a non-zero exit code. Again, add the #!/bin/... line to change this behavior.
As a best practice, try not to put a long shell script in here. Instead, consider adding the shell script in SCM and simply call that shell script from Jenkins (via bash -ex myscript.sh or something like that), so that you can track changes in your shell script.
I am a bit confused by the answers to this question: sorry, but the answers here are incorrect for this question. The question is good/interesting, because a plain grep in a script does cause the script to exit with failure if the grep is not successful (which can be unexpected), whereas a grep inside an if will not cause an exit with failure.
For the example shown in the question, exit 1 will be executed IF the grep command runs successfully (the file exists) AND the string is found in the file (grep returns exit code 0 to if).
@Gonen's comment to add ls -l outFile.txt should have been followed up on, to see what the real reason for the failure was.
TL;DR: if catches the exit code of commands inside the if clause:
A grep command that 'fails' (no match or error) inside an if statement in Jenkins will not cause the Jenkins script to stop, whereas a grep command that fails outside an if will cause Jenkins to stop and exit with failure.
The exit/return code handling is different for commands inside an if statement in shell: if catches the return code, and whether the tested command succeeded or failed, the script continues (running the then or else actions as appropriate).
From man bash:
if list; then list; [ elif list; then list; ] ... [ else list; ] fi
The if list is executed. If its exit status is zero, the then list is executed. Otherwise, each elif list is executed in turn, and if its exit status is zero, the corresponding then list is executed and the command completes. Otherwise, the else list is executed, if present. The exit status is the exit status of the last command executed, or zero if no condition tested true.
To illustrate, try this (same result in bash or sh):
$ if grep foo bar ; then echo got it; fi; echo $?
grep: bar: No such file or directory
0
$ touch bar
$ if grep foo bar ; then echo got it; fi; echo $?
0
$ echo foo >bar
$ if grep foo bar ; then echo got it; fi; echo $?
foo
got it
0
$ if grep foo bar ; then echo gotit; grep gah mah; fi; echo $?
foo
gotit
grep: mah: No such file or directory
2
I think you have an error in your script. You must add fi at the end of the if block:
if grep 'stackoverflow' outFile.txt; then
    exit 1
fi
If the two were exactly the same it should work. Is your current directory or user different in the two environments? You might not be able to read the file.

Close pipe even if subprocesses of the first command are still running in the background

Suppose I have test.sh as below. The intent is to run some background task(s) from this script that continuously update some file. If a background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr from the commands in the while loop. Adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, like for example 1 which is standard output, you can do it with:
exec 1<&-
This is valid for POSIX shells.
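Applied to the question's test.sh, the background loop could close its inherited standard output instead of redirecting it to /dev/null (a sketch; it assumes nothing in the loop ever writes to stdout or stderr):
while true; do
    echo "something" >> somewhere
    sleep 1
done >&- 2>&- &    # close stdout/stderr so the pipe ends when test.sh exits
echo $! > pidfile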
When you put the while loop in an explicit subshell and run the subshell in the background it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done)&

Run shell command from child shell

I have a Unix shell script, test.sh. Within the script I would like to invoke another shell and then execute the rest of the commands in the script from the child shell, and exit.
To make it clear:
test.sh
#! /bin/bash
/bin/bash    # create child shell
<shell-command1>
<shell-command2>
......
<shell-commandN>
exit 0
My intention is to run shell-command1 through shell-commandN from the child shell. Kindly tell me how to do this.
You can set this up with a group, like:
#!/bin/bash
(
    Command1
    Command2
    etc..
)
subshell() {
    echo "a function also groups commands, but runs in the current shell"
}
subshell
( and ) create a subshell in which a group of commands runs; a plain function, by contrast, groups commands but runs them in the current shell rather than a subshell. Subshell grouping with ( and ) is POSIX-compatible.
Update: if I understand your comment correctly, you want to use the -c option of bash, like:
/bin/bash -c "Command1 && Command2...." &
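For example, with echo standing in for the real commands (illustrative only):
$ /bin/bash -c 'echo command1 && echo command2'
command1
command2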
From http://tldp.org/LDP/abs/html/subshells.html, here is an example:
#!/bin/bash
# subshell-test.sh
(
# Inside parentheses, and therefore a subshell . . .
while [ 1 ] # Endless loop.
do
    echo "Subshell running . . ."
done
)
# Script will run forever,
#+ or at least until terminated by a Ctl-C.
exit $? # End of script (but will never get here).
