Check that process is running [duplicate] - bash

This question already has answers here:
How to check if a process id (PID) exists
(11 answers)
Closed 6 years ago.
I have the following bash code which checks that a process is running:
is_running() {
ps `cat "$pid_file"` > /dev/null 2>&1
}
The problem is that is_running always evaluates to true.
$pid_file contains a process ID that isn't listed when I run ps.
In that case, I would like is_running to return false.
How can I modify it for that purpose?

You are missing the -p option:
ps -p PID
In bash, I would simply do:
is_running() { ps -p $(<"$1") &>/dev/null ;}
and pass the filename as an argument to the function:
is_running /pid/file
You should also use the modern command substitution syntax $() instead of the arcane backticks `...`.
bash additionally supports a shorthand (&>) for redirecting both stdout and stderr at once.
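Putting the pieces together, a minimal runnable sketch (the /tmp/app.pid path is just an illustration, and the pid 999999 is assumed to be unused):

```shell
#!/bin/bash
# is_running: succeed only if the PID stored in the given file is alive.
is_running() { ps -p "$(<"$1")" &>/dev/null ;}

echo $$ > /tmp/app.pid           # our own PID: definitely running
is_running /tmp/app.pid && echo "running"

echo 999999 > /tmp/app.pid       # almost certainly no such process
is_running /tmp/app.pid || echo "not running"
```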

`shopt -s inherit_errexit` has no effect in declare command [duplicate]

This question already has answers here:
Exit code of variable assignment to command substitution in Bash
(5 answers)
Closed 1 year ago.
#!/bin/bash
set -e
set -o pipefail
shopt -s inherit_errexit
declare _x=`command-with-error`
echo "_x=${_x}"
Running the script shows:
bash xx.sh
xx.sh: line 6: command-with-error: command not found
_x=
Apparently line 6 did not exit the shell. What option should I use to make the script exit when the command substitution on the declare line fails?
The successful exit status of declare is overriding the unsuccessful exit status of command-with-error.
Break it into two separate commands:
declare _x
_x=$(command-with-error)
...as you can see running correctly (that is, exiting without writing anything to stdout) at https://ideone.com/TGyFCZ

Bash - if any process is killed, exit without passing to the next steps [duplicate]

This question already has answers here:
How to exit if a command failed? [duplicate]
(9 answers)
Closed 6 years ago.
I have a bash script like this:
./process1
./process2
./process3
./process4
./process5
Let's say I run this script and process2 is killed for some reason. I want to exit immediately, without going on to process3. How can I manage this?
Thanks,
Just exit on a non-zero exit code:
./process1 || exit
and so on ...
Another way in bash is to use the -e flag:
#!/bin/bash
set -e
-e Exit immediately if a command exits with a non-zero status.
You can also chain the commands with &&:
./process1 && ./process2 && ./process3 && ./process4 && ./process5

bash get an exit code from a subshell and pipe [duplicate]

This question already has answers here:
Get exit code from subshell through the pipes
(3 answers)
Closed 8 years ago.
I have the following bash script java_debug to log all java executions (standard and error console):
#! /bin/bash
echo param1: $1
echo param2: $2
(java HelloWorld "$@" 2>&1 ) | tee /tmp/log.txt
I run it:
$ java_debug v1 "v2 with space"
param1: v1
param2: v2 with space
Error: Could not find or load main class HelloWorld
$ echo $?
0
in this example, java cannot find the HelloWorld class, and so it shows an error.
However, the error is lost in $? (we get 0 instead of 1) because of the subshell and/or the pipe.
I need java_debug to return the same exit code as the java execution.
How to fix this script?
note: I could use the script command instead of 2>&1 | tee, but unfortunately the implementation of the script command varies across systems (its parameters are not the same on Red Hat as on OS X).
note: I am aware that bash is a horrible language and should not be used; but I have no choice in this case.
found answer here: Get exit code from subshell through the pipes
in my case:
#! /bin/bash
echo param1: $1
echo param2: $2
(java HelloWorld "$@" 2>&1 ) | tee /tmp/log.txt
exit ${PIPESTATUS[0]}
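For context, PIPESTATUS is a bash array holding the exit status of each command in the most recent pipeline, so element 0 here is the subshell running java. A quick illustration:

```shell
#!/bin/bash
# false exits 1, true exits 0; PIPESTATUS records both, in pipeline order.
false | true
echo "statuses: ${PIPESTATUS[@]}"   # statuses: 1 0
```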

pass 0 argument (executable filename by default) to called programs [duplicate]

This question already has answers here:
How to change argv0 in bash so command shows up with different name in ps?
(8 answers)
Closed 8 years ago.
By default bash passes the executable filename as the first (0th, to be precise) argument when invoking programs.
Is there any special form for calling programs that can be used to pass argument 0?
This is useful for programs that behave differently depending on the name they were invoked with.
I think the only way to set argument 0 is to change the name of the executable. For example:
$ echo 'echo $0' > foo.sh
$ ln foo.sh bar.sh
$ sh foo.sh
foo.sh
$ sh bar.sh
bar.sh
Some shells have a non-POSIX extension to the exec command that allow you to specify an alternate value:
$ exec -a specialshell bash
$ echo $0
specialshell
I'm not aware of a similar technique for changing the name of a child process like this, other than to run in a subshell
$ ( exec -a subshell-bash bash )
Update: three seconds later, I find the argv0 command at http://cr.yp.to/ucspi-tcp/argv0.html.
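For completeness, here is the subshell technique from above as a runnable one-liner (myfakename is an arbitrary example name):

```shell
#!/bin/bash
# exec -a in a subshell replaces argv[0] for the child only;
# bash -c reports its argv[0] as $0 when no extra arguments follow the command string.
( exec -a myfakename bash -c 'echo "argv0 is $0"' )   # prints: argv0 is myfakename
```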

Automatic exit from Bash shell script on error [duplicate]

This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 3 years ago.
I've been writing some shell script and I would find it useful if there was the ability to halt the execution of said shell script if any of the commands failed. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, it certainly shouldn't run ./configure afterwards.
Now I'm well aware that I could add an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e, -u, -x and -o pipefail options like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, -x prints commands before execution, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash)
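A quick illustration of what -o pipefail changes, using false | true as a stand-in pipeline:

```shell
#!/bin/bash
# Without pipefail, a pipeline's status is that of its last command (0 here);
# with pipefail, it is the status of the rightmost failing command.
false | true
echo "default: $?"        # default: 0
set -o pipefail
false | true
echo "with pipefail: $?"  # with pipefail: 1
```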
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when a command that is not part of a test (such as an if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
Chapter 33. Options
Here is how to do it:
#!/bin/sh
abort()
{
echo >&2 '
***************
*** ABORTED ***
***************
'
echo "An error occurred. Exiting..." >&2
exit 1
}
trap 'abort' 0
set -e
# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0
echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
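For example, a sketch with illustrative placeholder functions (the echo bodies stand in for the real configure/make steps):

```shell
#!/bin/bash
# Each logical stage gets its own function; && stops the chain at the first failure.
configure_step() { echo "configuring"; }   # placeholder for ./configure --some-flags
build_step()     { echo "building"; }      # placeholder for make
install_step()   { echo "installing"; }    # placeholder for make install

configure_step && build_step && install_step
```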
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit the error traps. The bash shell provides one such option for that using set:
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Adam Rosenfield's answer recommendation to use set -e is right in certain cases but it has its own potential pitfalls. See GreyCat's BashFAQ - 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits
if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !".
which means, set -e does not work under the following simple cases (detailed explanations can be found on the wiki)
Using the arithmetic operator let or $((..)) (bash 4.1 onwards) to increment a variable value, as in:
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
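A common workaround is the plain assignment form, whose exit status here is always zero:

```shell
#!/usr/bin/env bash
set -e
i=0
i=$((i+1))      # unlike let i++ / ((i++)), this never trips errexit
echo "i is $i"  # i is 1
```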
If the offending command is not the last command executed via && or ||. For example, the below trap wouldn't fire when it's expected to:
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When used incorrectly in an if statement: the exit code of the if statement is the exit code of the last executed command. In the example below the last executed command was echo, which wouldn't fire the trap, even though the test -d failed:
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
Failures inside command substitutions are ignored, unless inherit_errexit is set (bash 4.4 onwards):
#!/usr/bin/env bash
set -e
foo=$(expr 1-1; true)
echo survived
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here the function call to f will not exit, as local has swept away the error code that was set previously.
set -e
f() { local var=$(somecommand that fails); }
g() { local var; var=$(somecommand that fails); }
When used in a pipeline, and the offending command is not the last command in the pipe. For example, the below command would still go through. One option is to enable pipefail, which makes the pipeline return the exit status of the rightmost command that failed:
set -e
somecommand that fails | cat -
echo survived
The ideal recommendation is not to use set -e at all, and to implement your own version of error checking instead. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.
