How to suppress function output in Bash? - bash

I wrote a little helper to start an Android emulator which looks like this:
# START ANDROID EMULATION
android-sim () {
    if [[ -z "$1" ]]; then
        echo "No Android device supplied as argument!"
        return 1
    fi
    if [[ -d "$ANDROID_HOME" ]]; then
        cd "$ANDROID_HOME/emulator" &>/dev/null
        ./emulator "@$1" &>/dev/null &
        cd - &>/dev/null
    else
        echo "The variable ANDROID_HOME is not set to a correct directory!"
    fi
}
I tried to suppress all the generated output with &>/dev/null. For the emulator itself I also tried to put the execution in the background.
But when I run the function I still get the following output:
# before execution
[09:11:48] sandrowinkler:~ $ android-sim pixel-29
[1] 23217
# after execution, I closed the emulator via GUI
[09:11:55] sandrowinkler:~ $
[1]+ Done ./emulator "@$1" &> /dev/null
(wd: /usr/local/share/android-sdk/emulator)
(wd now: ~)
According to this superuser post and this unix post I used the right way to silence the output. So what am I doing wrong here?
PS: I would also appreciate tips to improve my Bash code.

If monitoring of the background job is not needed, a double fork may be a solution:
( ./emulator "@$1" &>/dev/null & )
When the parent exits before the child, the child process is reparented to process 1 and is no longer monitored by the current shell.
Also, looking at the script, cd - could be avoided by changing directory in a subshell:
(
    cd "$ANDROID_HOME/emulator"
    ./emulator "@$1" &
) &>/dev/null
because the current shell is not affected by a directory change made inside the subshell.
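Putting both suggestions together, a minimal sketch of the whole function might look like this (same logic as the original, with the directory change and the background job contained in the subshell):
android-sim () {
    if [[ -z "$1" ]]; then
        echo "No Android device supplied as argument!"
        return 1
    fi
    if [[ -d "$ANDROID_HOME" ]]; then
        # The subshell keeps the cd and the background job out of the
        # interactive shell, so no "[1] 23217" / "Done" messages appear.
        ( cd "$ANDROID_HOME/emulator" && ./emulator "@$1" & ) &>/dev/null
    else
        echo "The variable ANDROID_HOME is not set to a correct directory!"
    fi
}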

You've done it correctly. That output is not from your script, but from the shell, displaying information about the background process (the PID, the Done message and the working directory when it ends).
If you redirect the output externally, like this:
android-sim foo > log.txt
and open log.txt after execution, you'll see what your program actually output (and those job-control messages won't be there, even though they appear in the console).

Related

Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output will be directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
    echo "Silent mode."
    # Non-working line.
    out_var = "to screen"
else
    echo $1
    # Non-working line.
    out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two variables, the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
    exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
    exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
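For example, a minimal sketch of forcing a message to the terminal device even when standard output has been redirected elsewhere:
# Goes to the terminal even if the script was run as ./script.sh > log.txt
echo "visible on the terminal" > /dev/tty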
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
# Redirect standard error to standard output, and then send standard output to /dev/null. Order matters here.
cmd 2>&1 > /dev/null
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (no spaces around the =), and use it like this: command1 > $out_var (note the '$' you were missing).
I implemented it like this:
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
    OUT='/dev/tty'
else
    OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
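For example, a minimal getopts sketch, assuming a -s option selects silent mode (command1 and command2 stand in for your real commands):
#!/bin/bash
OUT=/dev/tty                      # default: show output on the terminal
while getopts "s" opt; do
    case $opt in
        s) OUT=/dev/null ;;       # -s given: discard output
    esac
done
shift $((OPTIND - 1))             # remaining (non-option) arguments are now in "$@"

command1 > "$OUT" 2>&1
command2 > "$OUT" 2>&1
echo "End."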
And you can have multiple debug or verbose levels like this:
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -ge 1 ]; then
    VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -ge 2 ]; then
    VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2

How can I prevent bash from reporting an error when attempting to call a non-existing script?

I am writing a simple script in bash to check whether or not a bunch of dependencies are installed on the current system. My script attempts to run a sample script with the -h flag, greps the output for a keyword I would expect to be returned by the sample script, and thereby knows whether or not the sample script is installed on the system.
I then pass this through a conditional statement that basically says sample scripts = OK or sample scripts = FAIL. However, in the case in which the sample script isn't installed on the system, bash throws the warning -bash: sample_script: command not found. How can I prevent this from displaying? I tried using the 1>&2 error redirection, but the warning still appears on the screen (I want the OK/FAIL output text to be displayed on the user's screen upon running my script).
Thanks for any suggestions!
If you just want to suppress errors (stderr) and let the "OK" or "FAIL" you are echoing (stdout) pass through, you would do:
./yourscript.sh 2> /dev/null
Although the better approach would be to test whether sample_script is executable before trying to execute it. For instance:
if [ -x "$script" ]; then
    # do whatever generates FAIL or OK
fi
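A fuller sketch of that idea, looking the dependency up on PATH with command -v (sample_script stands in for whatever you are checking):
if command -v sample_script >/dev/null 2>&1; then
    echo "sample_script = OK"
else
    echo "sample_script = FAIL"
fi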
#devnull dixit
command -h 2>/dev/null
I use this function to be independent of which, whence, type -p and whatnot:
pathto () {
    DIRLIST=$(echo "$PATH" | tr : ' ')
    for e in "$@"; do
        for d in $DIRLIST; do
            test -f "$d/$e" -a -x "$d/$e" && echo "$d/$e"
        done
    done
}
pathto script will echo the full path if it can be found (and is executable). Returning 0 or 1 instead left as an exercise :-)
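A minimal sketch of that exercise, returning 0 when at least one executable match was found and 1 otherwise:
pathto () {
    found=1
    DIRLIST=$(echo "$PATH" | tr : ' ')
    for e in "$@"; do
        for d in $DIRLIST; do
            if test -f "$d/$e" -a -x "$d/$e"; then
                echo "$d/$e"
                found=0            # remember that we found something
            fi
        done
    done
    return $found
}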
For bash:
if ! type -P sample_script &> /dev/null; then
    echo Error: sample_script is not installed. Come back later. >&2
    exit 1
fi
sample_script "$@"

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it behind a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware of the fact that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it at the end of a command and I fear complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
    echo "exit code: $?"
    echo "PPID: $PPID"
}
Then source that file from your shell startup files. Although since that would be run from within your interactive shell, you may want to use $$ rather than $PPID.
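A slightly more defensive sketch captures the status immediately, so that later commands inside the function cannot overwrite it (the desktop notification line is only an illustration and commented out):
my_notify() {
    local rc=$?                    # grab the previous command's exit code first
    echo "exit code: $rc"
    echo "PID of this shell: $$"
    # notify-send "Job finished" "exit code: $rc"   # hypothetical notification
    return $rc                     # keep the status available after my_notify
}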
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
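If my_notify has to stay a separate script, a minimal sketch of the environment-variable approach is to copy $? on the same command line:
sleep 100; RC=$? my_notify        # the assignment is expanded before my_notify starts
Inside my_notify, read "${RC:-unknown}" instead of $?.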
One method to implement this could be to use a heredoc (EOF tag) and a master script which creates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
    rm -rf my_notify
fi
if [ -f my_temp ] ; then
    rm -rf my_temp
fi
retval=`ls non_existent_file &> /dev/null ; echo $?`
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
# The heredoc delimiter is left unquoted so $retval and $ppid are expanded
# while the file is being generated.
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID = $ppid"
EOF
sh my_notify
You can refine this script for your purpose.

Close pipe even if subprocesses of first command is still running in background

Suppose I have test.sh as below. The intent is to run some background task(s) by this script, that continuously updates some file. If the background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e. g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr from the commands in the while loop. Adjust the code if you are. :-))
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, like for example 1 which is standard output, you can do it with:
exec 1<&-
This is valid for POSIX shells.
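Applied to the background loop in test.sh, a minimal sketch closes that job's copies of stdout and stderr so it no longer holds the pipe's write end open (the loop only appends to the file, so nothing in it needs those descriptors):
while true; do
    echo "something" >> somewhere
    sleep 1
done >&- 2>&- &
echo $! > pidfile
Redirecting to /dev/null as in the answer above achieves the same effect and is more forgiving if something in the loop does try to write.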
When you put the while loop in an explicit subshell and run the subshell in the background it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &

How do I make sure my bash script isn't already running?

I have a bash script I want to run every 5 minutes from cron... but there's a chance the previous run of the script isn't done yet... in this case, I want the new run to just exit. I don't want to rely on just a lock file in /tmp... I want to make sure the process is actually running before I honor the lock file (or whatever)...
Here is what I have stolen from the internet so far... how do I smarten it up a bit? Or is there a completely different way that's better?
if [ -f /tmp/mylockFile ] ; then
    echo 'Script is still running'
else
    echo 1 > /tmp/mylockFile
    /* Do some stuff */
    rm -f /tmp/mylockFile
fi
# Use a lockfile containing the pid of the running process
# If script crashes and leaves lockfile around, it will have a different pid so
# will not prevent script running again.
#
lf=/tmp/pidLockFile
# create empty lock file if none exists
cat /dev/null >> $lf
read lastPID < $lf
# if lastPID is not null and a process with that pid exists , exit
[ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
echo not running
# save my pid in the lock file
echo $$ > $lf
# sleep just to make testing easier
sleep 5
There is at least one race condition in this script. Don't use it for a life support system, lol. But it should work fine for your example, because your environment doesn't start two scripts simultaneously. There are lots of ways to use more atomic locks, but they generally depend on having a particular thing optionally installed, or work differently on NFS, etc...
You might want to have a look at the man page for the flock command, if you're lucky enough to get it on your distribution.
NAME
flock - Manage locks from shell scripts
SYNOPSIS
flock [-sxon] [-w timeout] lockfile [-c] command...
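A common pattern with flock (assuming the util-linux flock is available; the lock path is just an example) is to take the lock on a file descriptor at the top of the script, so it is released automatically when the script exits:
#!/bin/bash
exec 200>/tmp/myscript.lock            # open the lock file on fd 200
flock -n 200 || { echo "Script is still running"; exit 1; }

# ... do some stuff ...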
Never use a lock file; always use a lock directory.
In your specific case, it's not so important because the start of the script is scheduled in 5min intervals. But if you ever reuse this code for a webserver cgi-script you are toast.
if mkdir /tmp/my_lock_dir 2>/dev/null
then
    echo "running now the script"
    sleep 10
    rmdir /tmp/my_lock_dir
fi
This has a problem if you have a stale lock, means the lock is there but no associated process. Your cron will never run.
Why use a directory? Because mkdir is an atomic operation. Only one process at a time can create a directory, all other processes get an error. This even works across shared filesystems and probably even between different OS types.
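To cope with the stale-lock caveat mentioned above, one sketch is to store the PID inside the lock directory and clear the lock when its owner is gone (checked with kill -0; there is still a tiny race window between removing a stale lock and re-creating it):
lockdir=/tmp/my_lock_dir
if ! mkdir "$lockdir" 2>/dev/null; then
    oldpid=$(cat "$lockdir/pid" 2>/dev/null)
    if [ -n "$oldpid" ] && kill -0 "$oldpid" 2>/dev/null; then
        exit 0                               # previous run is genuinely still active
    fi
    rm -rf "$lockdir"                        # stale lock: owner is gone
    mkdir "$lockdir" 2>/dev/null || exit 0   # someone else may have grabbed it meanwhile
fi
echo $$ > "$lockdir/pid"
trap 'rm -rf "$lockdir"' EXIT
# ... run the real work here ...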
Store your pid in mylockFile. When you need to check, look up ps for the process with the pid you read from file. If it exists, your script is running.
If you want to check the process's existence, just look at the output of
ps aux | grep your_script_name
If it's there, it's not dead...
As pointed out in the comments and other answers, using the PID stored in the lockfile is much safer and is the standard approach most apps take. I just do this because it's convenient and I almost never see the corner cases (e.g. editing the file when the cron executes) in practice.
If you use a lockfile, you should make sure that the lockfile is always removed. You can do this with 'trap':
if ( set -o noclobber; echo "locked" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    echo "Locking succeeded" >&2
    rm -f "$lockfile"
else
    echo "Lock failed - exit" >&2
    exit 1
fi
The noclobber option makes the creation of lockfile atomic, like using a directory.
As a one-liner and if you do not want to use a lockfile (e.g. b/c/ of a read only filesystem, etc)
test "$(pidof -x $(basename $0))" != $$ && exit
It checks that the full list of PID that bear the name of your script is equal to the current PID. The "-x" also checks for the name of shell scripts.
Bash makes it even shorter and faster:
[[ "$(pidof -x $(basename $0))" != $$ ]] && exit
In some cases, you might want to be able to distinguish between who is running the script and allow some concurrency but not all. In that case, you can use per-user, per-tty or cron-specific locks.
You can use environment variables such as $USER or the output of a program such as tty to create the filename. For cron, you can set a variable in the crontab file and test for it in your script.
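For example, a minimal sketch of deriving the lock name from the user or the terminal (the crontab variable name RUN_FROM_CRON is just an illustration):
# One instance per user:
lockfile="/tmp/$(basename "$0").${USER}.lock"

# One instance per terminal session:
tty_id=$(tty | tr -d '/')                    # e.g. devpts3
lockfile="/tmp/$(basename "$0").${tty_id}.lock"

# Cron-specific: put RUN_FROM_CRON=1 in the crontab, then in the script:
[ -n "$RUN_FROM_CRON" ] && lockfile="/tmp/$(basename "$0").cron.lock"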
You can use this one:
pgrep -f "/bin/\w*sh .*scriptname" | grep -vq $$ && exit
I was trying to solve this problem today and I came up with the below:
COMMAND_LINE="$0 $*"
JOBS=$(SUBSHELL_PID=$BASHPID; ps axo pid,command | grep "${COMMAND_LINE}" | grep -v $$ | grep -v ${SUBSHELL_PID} | grep -v grep)
if [[ -z "${JOBS}" ]]
then
    :   # not already running
else
    :   # already running
fi
This relies on $BASHPID which contains the PID inside a subshell ($$ in the subshell is the parent pid). However, this relies on Bash v4 and I needed to run this on OSX which has Bash v3.2.48. I ultimately came up with another solution and it is cleaner:
JOBS=$(sh -c "ps axo pid,command | grep \"${COMMAND_LINE}\" | grep -v grep | grep -v $$")
You can always just:
if ps -e -o cmd | grep scriptname > /dev/null; then
    exit
fi
But I like the lockfile myself, so I wouldn't do this without the lock file as well.
Since a socket solution has not yet been mentioned it is worth pointing out that sockets can be used as effective mutexes. Socket creation is an atomic operation, like mkdir is as Gunstick pointed out, so a socket is suitable to use as a lock or mutex.
Tim Kay's Perl script 'Solo' is a very small and effective script to make sure only one copy of a script can be run at any one time. It was designed specifically for use with cron jobs, although it works perfectly for other tasks as well and I've used it for non-cron jobs very effectively.
Solo has one advantage over the other techniques mentioned so far in that the check is done outside of the script you only want to run one copy of. If the script is already running then a second instance of that script will never even be started. This is as opposed to isolating a block of code inside the script which is protected by a lock. EDIT: If flock is used in a cron job, rather than from inside a script, then you can also use that to prevent a second instance of the script from starting - see example below.
Here's an example of how you might use it with cron:
*/5 * * * * solo -port=3801 /path/to/script.sh args args args
# "/path/to/script.sh args args args" is only called if no other instance of
# "/path/to/script.sh" is running, or more accurately if the socket on port 3801
# is not open. Distinct port numbers can be used for different programs so that
# if script_1.sh is running it does not prevent script_2.sh from starting, I've
# used the port range 3801 to 3810 without conflicts. For Linux non-root users
# the valid port range is 1024 to 65535 (0 to 1023 are reserved for root).
* * * * * solo -port=3802 /path/to/script_1.sh
* * * * * solo -port=3803 /path/to/script_2.sh
# Flock can also be used in cron jobs with a distinct lock path for different
# programs, in the example below script_3.sh will only be started if the one
# started a minute earlier has already finished.
* * * * * flock -n /tmp/path.to.lock -c /path/to/script_3.sh
Links:
Solo web page: http://timkay.com/solo/
Solo script: http://timkay.com/solo/solo
Hope this helps.
You can use this.
I'll just shamelessly copy-paste the solution here, as it is an answer for both questions (I would argue that it's actually a better fit for this question).
Usage
include sh_lock_functions.sh
init using sh_lock_init
lock using sh_acquire_lock
check lock using sh_check_lock
unlock using sh_remove_lock
Script File
sh_lock_functions.sh
#!/bin/bash

function sh_lock_init {
    sh_lock_scriptName=$(basename $0)
    sh_lock_dir="/tmp/${sh_lock_scriptName}.lock" #lock directory
    sh_lock_file="${sh_lock_dir}/lockPid.txt" #lock file
}

function sh_acquire_lock {
    if mkdir $sh_lock_dir 2>/dev/null; then #check for lock
        echo "$sh_lock_scriptName lock acquired successfully.">&2
        touch $sh_lock_file
        echo $$ > $sh_lock_file # set current pid in lockFile
        return 0
    else
        touch $sh_lock_file
        read sh_lock_lastPID < $sh_lock_file
        if [ ! -z "$sh_lock_lastPID" -a -d /proc/$sh_lock_lastPID ]; then # if lastPID is not null and a process with that pid exists
            echo "$sh_lock_scriptName is already running.">&2
            return 1
        else
            echo "$sh_lock_scriptName stopped during execution, reacquiring lock.">&2
            echo $$ > $sh_lock_file # set current pid in lockFile
            return 2
        fi
    fi
    return 0
}

function sh_check_lock {
    [[ ! -f $sh_lock_file ]] && echo "$sh_lock_scriptName lock file removed.">&2 && return 1
    read sh_lock_lastPID < $sh_lock_file
    [[ $sh_lock_lastPID -ne $$ ]] && echo "$sh_lock_scriptName lock file pid has changed.">&2 && return 2
    echo "$sh_lock_scriptName lock still in place.">&2
    return 0
}

function sh_remove_lock {
    rm -r $sh_lock_dir
}
Usage example
sh_lock_usage_example.sh
#!/bin/bash
. /path/to/sh_lock_functions.sh # load sh lock functions
sh_lock_init || exit $?
sh_acquire_lock
lockStatus=$?
[[ $lockStatus -eq 1 ]] && exit $lockStatus
[[ $lockStatus -eq 2 ]] && echo "lock is set, do some resume from crash procedures";
#monitoring example
cnt=0
while sh_check_lock # loop while lock is in place
do
    echo "$sh_lock_scriptName running (pid $$)"
    sleep 1
    let cnt++
    [[ $cnt -gt 5 ]] && break
done
#remove lock when process finished
sh_remove_lock || exit $?
exit 0
Features
Uses a combination of a file, a directory and a process id for locking, to make sure that the process is not already running
You can detect if the script stopped before lock removal (eg. process kill, shutdown, error etc.)
You can check the lock file, and use it to trigger a process shutdown when the lock is missing
Verbose, outputs error messages for easier debug
