Assign results of a command to a variable while checking results of said command - bash

I would like to combine the following loop:
while ps -p PID_OF_JAVA_PROCESS; do
    sleep 1
done
Into the following loop:
if pgrep -f java.*name_of_file > /dev/null; then
    echo "Shutting down java process!"
    pkill -f java.*name_of_file
else
    echo "Not currently running!"
fi
By assigning the result of pgrep (the PID of this java process) to a variable, something along the lines of the following:
if pgrep -f java.*name_of_file > /dev/null; then
    echo "Our java process is currently running!"
    pkill -f java.*name_of_file
    echo "Please wait while our process shuts down!"
    while ps -p $(pgrep -f java.*name_of_file); do
        sleep 1
    done
else
    echo "Not currently running!"
fi
I would like to combine the above while keeping the results of each command quiet (except echo, of course).

if pids=$(pgrep -f 'java.*name_of_file' 2>/dev/null); then
    echo "Our java process is currently running!"
    kill $pids > /dev/null 2>&1
    echo "Please wait while our process shuts down!"
    while ps -p $(pgrep -f 'java.*name_of_file' 2>/dev/null) > /dev/null 2>&1; do
        sleep 1
    done
else
    echo "Not currently running!"
fi
> /dev/null redirects stdout to /dev/null
2> /dev/null redirects stderr to /dev/null
> /dev/null 2>&1 redirects stdout to /dev/null and then stderr to stdout (which now points to /dev/null), thus silencing the command entirely
Assuming your two scripts above run correctly, this slightly modified version should be what you want :)
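Applied to the commands from the question, a quick illustration (12345 stands in for a real PID, and the pattern is quoted so the shell cannot glob-expand it):
pgrep -f 'java.*name_of_file' > /dev/null    # PID list hidden; exit status still usable
pkill -f 'java.*name_of_file' 2> /dev/null   # any pkill warnings hidden
ps -p 12345 > /dev/null 2>&1                 # fully silent
echo "exit status of the last command: $?"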

Related

How can I send multiple commands' output to a single shell pipeline?

I have multiple pipelines, which look like:
tee -a $logfilename.txt | jq string2object.jq >> $logfilename.json
or
tee -a $logfilename.txt | jq array2object.jq >> $logfilename.json
For each pipeline, I want to apply it to multiple commands.
Each set of commands looks something like:
echo "start filelist:"
printf '%s\n' "$PWD"/*
or
echo "start wget:"
wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
and the output from those commands should all go through the pipe.
What I've tried in the past is putting the pipeline on each command separately:
echo "start filelist:" | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
printf '%s\n' "$PWD"/* | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
but in that case the JSON script can only see one line at a time, so it doesn't work correctly.
The Portable Approach
The following is portable to POSIX sh:
#!/bin/sh
die() { rm -rf -- "$tempdir"; [ "$#" -gt 0 ] && echo "$*" >&2; exit 1; }
logfilename="whatever"
tempdir=$(mktemp -d "${TMPDIR:-/tmp}"/fifodir.XXXXXX) || exit
mkfifo "$tempdir/fifo" || die "mkfifo failed"
tee -a "$logfilename" <"$tempdir/fifo" \
| jq -sRf json_log_s2o.jq \
>>"$logfilename.json" & fifo_pid=$!
exec 3>"$tempdir/fifo" || die "could not open fifo for write"
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close the write end of the FIFO
wait "$fifo_pid" # and wait for the process to exit
rm -rf "$tempdir" # delete the temporary directory with the FIFO
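As a side note, wait "$fifo_pid" returns the exit status of the backgrounded pipeline (that of jq, its last command), so a failed conversion can be caught there; a minimal sketch of the script's last three lines with that check added:
exec 3>&-                     # close the write end of the FIFO
if ! wait "$fifo_pid"; then   # wait reports jq's exit status
    echo "jq failed while converting the log to JSON" >&2
fi
rm -rf -- "$tempdir"          # delete the temporary directory with the FIFO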
Avoiding FIFO Management (Using Bash)
With bash, one can avoid needing to manage the FIFO by using a process substitution:
#!/bin/bash
logfilename="whatever"
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json")
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close the write end so tee and jq see EOF
Waiting For Exit (Using Linux-y Tools)
However, the thing this doesn't let you do (without bash 4.4) is detect when jq failed, or wait for jq to finish writing before your script exits. If you want to ensure that jq finishes before your script exits, then you might consider using flock, like so:
writelogs() {
    exec 4>"${1}.json"
    flock -x 4
    tee -a "$1" | jq -sRf json_log_s2o.jq >&4
}
exec 3> >(writelogs "$logfilename")
and later:
exec 3>&-
flock -s "$logfilename.json" -c :
Because the jq process inside the writelogs function holds a lock on the output file, the final flock -s command isn't able to also grab a lock on the output file until jq exits.
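To see that blocking behaviour on its own, a purely illustrative sketch (demo.lock is a made-up file name):
# hold an exclusive lock on demo.lock for 5 seconds in the background
( exec 9>demo.lock; flock -x 9; sleep 5 ) &
# the shared lock below only succeeds once the exclusive one is released
flock -s demo.lock -c 'echo "exclusive lock released, shared lock acquired"'
wait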
An Aside: Avoiding All The >&3 Redirections
In either shell, the below is just as valid:
{
    echo "start filelist:"
    printf '%s\n' "$PWD"/*
    echo "start wget:"
    wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
    wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} >&3
It's also possible, but not advisable, to pipe a code block into a pipeline, thus replacing the FIFO use or process substitution altogether:
{
    echo "start filelist:"
    printf '%s\n' "$PWD"/*
    echo "start wget:"
    wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
    wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} | tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"${logfilename}.json"
...why not advisable? Because there's no guarantee in POSIX sh as to which components of a pipeline, if any, run in the same shell interpreter as the rest of your script; and if the block above isn't run in the same interpreter as the rest of the script, then variables set inside it are thrown away (and, without extensions such as pipefail, exit status as well). See BashFAQ #24 for more information.
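A tiny illustration of that pitfall, separate from the logging scripts above:
count=0
{ echo one; echo two; } | while read -r line; do
    count=$((count + 1))
done
echo "$count"   # prints 0 in bash: the while loop ran in a subshell, so its count was lost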
Waiting For Exit On Bash 4.4
With bash 4.4, process substitutions export their PIDs in $!, and these can be waited for. Thus, you get an alternate way to wait for the FIFO to exit:
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json"); log_pid=$!
...and then, later on:
wait "$log_pid"
as an alternative to the flock approach given earlier. Obviously, do this only if you have bash 4.4 available.
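If you want to guard for that at runtime, a small sketch of a version check (illustrative only):
# rely on $! after a process substitution only on bash >= 4.4
if (( BASH_VERSINFO[0] > 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] >= 4) )); then
    echo "process substitution PIDs can be waited for"
else
    echo "fall back to the flock or FIFO approach" >&2
fi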

Redirecting output in shell script from subprocess with simulated ctrl-c

I am working on a shell script that sets some parameters and calls a python script. Sometimes the python script hangs, and when I press ctrl-c it generates some error output, which I am trying to write to a file. When I execute the shell script and press ctrl-c, I get the output in the redirected file, but if I try to simulate the ctrl-c by sleeping for some time and killing the process, the output is not redirected to the file. I have used some examples from SO to do the sleep-and-terminate, which works, but the output file doesn't contain the error that I get from the manual ctrl-c action.
I cannot change the python script that this script is executing, so I have to implement this in the calling script.
[ -z "$1" ] && echo "No environment argument supplied" && exit 1
. env_params.txt
. run_params.txt
echo "========================================================== RUN STARTED AT $B ==========================================================" >> $OUTFILE
echo " " >> $OUTFILE
export RUN_COMMAND="$PYTHON_SCRIPT_LOC/pyscript.py PARAM1=VALUE1 PARAM2=VALU2 PAram3=value3"
echo "Run command from test.sh $RUN_COMMAND" >> $OUTFILE
echo " " >> $OUTFILE
echo " " >> $OUTFILE
echo "========================================================== Running Python script ==========================================================" >> $OUTFILE
echo "Before python command"
###############################################
( python $RUN_COMMAND >> $OUTFILE 2>&1 ) & pid=$!
SLEEP_TIME=1m
echo "before sleep - sleeping for $SLEEP_TIME $pid"
( sleep $SLEEP_TIME && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
    echo "after sleep - sleeping for $SLEEP_TIME $pid"
    echo "your_command finished"
    pkill -HUP -P $watcher
    wait $watcher
else
    echo "after sleep - sleeping for $SLEEP_TIME $pid"
    echo "your_command interrupted"
fi
### also tried this - did not work either
### python $RUN_COMMAND >> $OUTFILE 2>&1 &; (pythonPID=$! ; sleep 1m ;kill -s 2 $pythonPID)
.
.
What changes do I need to make such that the output is written to the $OUTFILE when the process is killed in the script itself, rather than pressing ctrl-c on the terminal?
Probably you do not want to use SIGHUP (hangup detected on the controlling terminal, or death of the controlling process), but rather
SIGINT (signal 2, interrupt from keyboard), which is what Ctrl-C sends.
For more info, read man 7 signal.
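A minimal sketch of that change, reusing the variables from the question's script; the subshell wrapper is dropped so $pid refers to the python process itself:
python $RUN_COMMAND >> $OUTFILE 2>&1 & pid=$!
# send SIGINT (what Ctrl-C delivers) instead of SIGHUP after the timeout
( sleep $SLEEP_TIME && kill -INT $pid ) 2>/dev/null & watcher=$!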

Shell script: How to restart a process (with pipe) if it dies

I currently use the technique described in How do I write a bash script to restart a process if it dies? by lhunath in order to restart a dead process.
until myserver; do
    echo "Server 'myserver' crashed with exit code $?. Respawning.." >&2
    sleep 1
done
But rather than just invoking the process myserver, I would like to invoke such a thing:
myserver 2>&1 | /usr/bin/logger -p local0.info &
How to use the first technique with a process with pipe?
The until loop itself can be piped into logger:
until myserver 2>&1; do
    echo "..."
    sleep 1
done | /usr/bin/logger -p local0.info &
since myserver inherits its standard output and error from the loop (whose standard output is the pipe into logger).
You can use the PIPESTATUS variable to get the exit code from a specific command in a pipeline:
while :; do
    myserver 2>&1 | /usr/bin/logger -p local0.info
    rc=${PIPESTATUS[0]}
    if [[ $rc != 0 ]]
    then echo "Server 'myserver' crashed with exit code $rc. Respawning.." >&2
         sleep 1
    else break
    fi
done
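Note that PIPESTATUS is overwritten by every subsequent command, including the [[ ]] test itself, which is why the loop above copies it into rc first; a quick illustration:
false | true
echo "${PIPESTATUS[@]}"   # prints: 1 0
echo "${PIPESTATUS[@]}"   # prints: 0 - already replaced by the previous echo's own status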

Self-daemonizing bash script

I want to make a script self-daemonizing, i.e., no need to invoke nohup $SCRIPT &>/dev/null & manually at the shell prompt.
My plan is to create a section of code like the following:
#!/bin/bash
SCRIPTNAME="$0"
...
# Preps are done above
if [[ "$1" != "--daemonize" ]]; then
nohup "$SCRIPTNAME" --daemonize "${PARAMS[#]}" &>/dev/null &
exit $?
fi
# Rest of the code are the actual procedures of the daemon
Is this wise? Do you have better alternatives?
Here are things I see.
if [[ $1 != "--daemonize" ]]; then
Shouldn't that be == --daemonize?
nohup $SCRIPTNAME --daemonize "${PARAMS[@]}" &>/dev/null &
Instead of calling your script again, you could just summon a subshell that's placed in the background:
(
    Code that runs in daemon mode.
) </dev/null >/dev/null 2>&1 &
disown
Or
function daemon_mode {
    Code that runs in daemon mode.
}
daemon_mode </dev/null >/dev/null 2>&1 &
disown
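Putting the second form together, a minimal self-contained sketch (the heartbeat loop and the log path /tmp/mydaemon.log are made up for illustration):
#!/bin/bash
daemon_mode() {
    # stand-in daemon body: write a heartbeat line every 10 seconds
    while :; do
        date +"%F %T still alive" >> /tmp/mydaemon.log
        sleep 10
    done
}
daemon_mode </dev/null >/dev/null 2>&1 &
disown
echo "daemon started with PID $!"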

stop bash script from outputting in terminal

I believe I have everything set up correctly for my if/else statement; however, it keeps outputting content to my shell terminal as if I had run the command myself. Is there any way I can suppress this so I can run these commands without them populating my terminal with text from the results?
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running." &
    echo $!
else
    echo "Process is not running... Starting..."
    python likebot.py &
    echo $!
fi
Here is what the output looks like a few minutes after running my bash script
[~]# sh check.sh
Process is not running... Starting...
12359
[~]# Your account has been rated. Sleeping on kranze for 1 minute(s). Liked 0 photo(s)...
Your account has been rated. Sleeping on kranze for 2 minute(s). Liked 0 photo(s)...
If you want to redirect output from within the shell script, you use exec:
exec 1>/dev/null 2>&1
This will redirect everything from now on. If you want to output to a log:
exec 1>/tmp/logfile 2>&1
To append a log:
exec 1>>/tmp/logfile 2>&1
To backup your handles so you can restore them:
exec 3>&1 4>&2
exec 1>/dev/null 2>&1
# Do some stuff
# Restore descriptors
exec 1>&3 2>&4
# Close the descriptors.
exec 3>&- 4>&-
If there is a particular section of a script you want to silence:
#!/bin/bash
echo Hey, check me out, I can make noise!
{
    echo Thats not fair, I am being silenced!
    mv -v /tmp/a /tmp/b
    echo Me too.
} 1>/dev/null 2>&1
If you want to redirect the "normal (stdout)" output use >/dev/null if you also want to redirect the error output as well use 2>&1 >/dev/null
eg
$ command 2>&1 >/dev/null
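Order matters here; a quick check against a path assumed not to exist:
ls /nonexistent 2>&1 > /dev/null   # error still appears: stderr was pointed at the terminal (the old stdout) before stdout moved
ls /nonexistent > /dev/null 2>&1   # fully silent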
I think you have to redirect STDOUT (and maybe STDERR) of the python interpreter:
...
echo "Process is not running... Starting..."
python likebot.py >/dev/null 2>&1 &
...
For further details, please have a look at Bash IO-Redirection.
Hope that helped a bit.
*Jost
You have two options:
You can redirect standard output to a log file using > /path/to/file
You can redirect standard output to /dev/null to get rid of it completely using > /dev/null
If you want error output redirected as well use &>
See here
Also, not relevant to this particular example, but some bash commands support a 'quiet' or 'silent' flag.
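For example, grep's -q flag would silence the grep in the question's script while still setting the exit status:
if ps cax | grep -q python; then    # -q: quiet, print nothing, only set exit status
    echo "Process is running."
else
    echo "Process is not running... Starting..."
fi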
Append >> /path/to/outputfile/outputfile.txt to the end of every echo statement
echo "Process is running." >> /path/to/outputfile/outputfile.txt
Alternatively, send the output to the file when you run the script from the shell
[~]# sh check.sh >> /path/to/outputfile/outputfile.txt
