Simple bash script to count running processes by name - bash

I'm working on a small bash script which counts how many instances of a script with a certain name are running.
ps -ef | grep -v grep | grep scrape_data.php | wc -l
is the code I use; via ssh it outputs the number of times scrape_data.php is running. Currently the output is 3, for example, so this works fine.
Now I'm trying to make a little script which does something when the count is smaller than 1.
#!/bin/sh
if [ ps -ef | grep -v grep | grep scrape_data.php | wc -l ] -lt 1; then
exit 0
#HERE PUT CODE TO START NEW PROCESS
else
exit 0
fi
The script above is what I have so far, but it does not work. I'm getting this error:
[root@s1 crons]# ./check_data.sh
./check_data.sh: line 4: [: missing `]'
wc: invalid option -- e
What am I doing wrong in the if statement?

Your test syntax is not correct: the -lt comparison must go inside the test bracket, and the pipeline needs command substitution:
if [ $(ps -ef | grep -v grep | grep scrape_data.php | wc -l) -lt 1 ]; then
echo launch
else
echo no launch
exit 0
fi
or you can test the return value of pgrep:
pgrep scrape_data.php &> /dev/null
if [ $? -eq 0 ]; then
echo no launch
fi
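Note that [ $? ] alone would always be true, since it only tests that the string "$?" is non-empty; the explicit -eq 0 comparison above is what actually checks pgrep's status. A shorter sketch using pgrep's exit status directly (assuming the script runs as an argument to php, so -f is needed to match the full command line):
if ! pgrep -f scrape_data.php > /dev/null; then
    echo launch
    # put code to start a new process here
fi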

If you're using Bash, then drop [ and -lt and use (( for arithmetic comparisons.
ps provides the -C switch, which accepts the process name to look for.
The grep -v tricks are just hacks.
#!/usr/bin/env bash
proc="scrape_data.php"
limit=1
numproc="$(ps hf -opid,cmd -C "$proc" | awk '$2 !~ /^[|\\]/ { ++n } END { print n }')"
if (( numproc < limit ))
then
# code when less than 'limit' processes run
printf "running processes: '%d' less than limit: '%d'.\n" "$numproc" "$limit"
else
# code when 'limit' or more processes run
printf "running processes: '%d' not less than limit: '%d'.\n" "$numproc" "$limit"
fi

Counting the lines is not needed. Just check the return value of grep:
if ! ps -ef | grep -q '[s]crape_data.php' ; then
...
fi
The [s] trick avoids the grep -v grep.
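Putting that together with the original goal, a minimal sketch (the path to the PHP script is hypothetical):
#!/bin/sh
if ! ps -ef | grep -q '[s]crape_data.php'; then
    # not running: start a new instance in the background
    php /path/to/scrape_data.php > /dev/null 2>&1 &
fi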

While the top-voted answer does in fact work, here is a solution that I used for my own scraper.
<?php
/**
* Go_Get.php
* -----------------------------------------
* @author Thomas Kroll
* @copyright Creative Commons share alike.
*
* @synopsis:
* This is the main script that calls the grabber.php
* script that actually handles the scraping of
* the RSI website for potential members
*
* @usage: php go_get.php
**/
ini_set('max_execution_time', 300); //300 seconds = 5 minutes
// script execution timing
$start = microtime(true);
// how many scrapers to run
$iter = 100;
/**
* workload.txt -- next record to start with
* workload-end.txt -- where to stop at/after
**/
$s=(float)file_get_contents('./workload.txt');
$e=(float)file_get_contents('./workload-end.txt');
// if $s >= $e exit script otherwise continue
echo ($s>=$e)?exit("Work is done...exiting".PHP_EOL):("Work is not yet done...continuing".PHP_EOL);
echo ("Starting Grabbers: ".PHP_EOL);
$j=0; //gotta start somewhere LOL
while($j<$iter)
{
$j++;
echo ($j %20!= 0?$j." ":$j.PHP_EOL);
// start actual scraping script--output to null
// each 'grabber' goes and gets 36 iterations (0-9/a-z)
exec('bash -c "exec nohup setsid php grabber.php '.$s.' > /dev/null 2>&1 &"');
// increment the workload counter by 36 characters
$s+=36;
}
echo PHP_EOL;
$end = microtime(true);
$total = $end - $start;
print "Script Execution Time: ".$total.PHP_EOL;
file_put_contents('./workload.txt',$s);
// don't exit script just yet...
echo "Waiting for processes to stop...";
// get number of php scrapers running
exec ("pgrep 'php'",$pids);
echo "Current number of processes:".PHP_EOL;
// loop while num of pids is greater than 10
// if less than 10, go ahead and respawn self
// and then exit.
while(count($pids)>10)
{
sleep(2);
unset($pids);
$pids=array();
exec("pgrep 'php'",$pids);
echo (count($pids) %15 !=0 ?count($pids)." ":count($pids).PHP_EOL);
}
//execute self before exiting
exec('bash -c "exec nohup setsid php go_get.php >/dev/null 2>&1 &"');
exit();
?>
Now while this seems like a bit of overkill, I was already using PHP to scrape the data (like your php script in the OP), so why not use PHP as the control script?
Basically, you would call the script like this:
php go_get.php
and then just wait for the first iteration of the script to finish. After that, it runs in the background, which you can see if you use your pid counting from the command line, or a similar tool like htop.
It's not glamorous, but it works. :)

Related

How to detect a non-rolling log file and pattern match in a shell script which is using tail, while, read, and?

I am monitoring a log file, and if PATTERN doesn't appear in it within THRESHOLD seconds, the script should print "Error"; otherwise, it should print "Clear". The script is working fine, but only if the log is rolling.
I've tried using 'timeout', but it didn't work.
log_file=/tmp/app.log
threshold=120
tail -Fn0 ${log_file} | \
while read line ; do
    echo "${line}" | awk '/PATTERN/ { system("touch pattern.tmp") }'
    # code to calculate how long ago pattern.tmp was touched; the result is assigned to diff
    if [ ${diff} -gt ${threshold} ]; then
        echo "Error"
    else
        echo "Clear"
    fi
done
It works as expected only when some line is printed to app.log.
If the application hangs for any reason and the log stops rolling, the script produces no output.
Is there a way to detect this 'no output' condition of tail and run some command at that time?
It looks like the problem you're having is that the timing calculations inside your while loop never get a chance to run when read is blocking on input. In that case, you can pipe the tail output into a while true loop, inside of which you can do if read -t $timeout:
log_file=/tmp/app.log
threshold=120
timeout=10
tail -Fn0 "$log_file" | while true; do
if read -t $timeout line; then
echo "${line}" | awk '/PATTERN/ { system("touch pattern.tmp") }'
fi
# code to calculate how long ago pattern.tmp touched and same is assigned to diff
if [ ${diff} -gt ${threshold} ]; then
echo "Error"
else
echo "Clear"
fi
done
As Ed Morton pointed out, all caps variable names are not a good idea in bash scripts, so I used lowercase variable names.
How about something simple like:
sleep "$threshold"
grep -q 'PATTERN' "$log_file" && { echo "Clear"; exit; }
echo "Error"
If that's not all you need then edit your question to clarify your requirements. Don't use all upper case for non-exported shell variable names, btw - google it.
To build further on your idea, it might be beneficial to run the awk part in the background and use a continuous loop to do the checking.
#!/usr/bin/env bash
log_file="log.txt"
# threshold in seconds
threshold=10
# run the following process in the background
stdbuf -oL tail -Fn0 "$log_file" \
    | awk '/PATTERN/ { system("touch pattern.tmp") }' &
while true; do
    match=$(find . -type f -iname "pattern.tmp" -newermt "-${threshold} seconds")
    if [[ -z "${match}" ]]; then
        echo "Error"
    else
        echo "Clear"
    fi
    sleep 1  # avoid a tight busy loop
done
This looks to me like a watchdog timer. I've implemented something like this by forcing a background process to update my log, so I don't have to worry about read -t. Here's a working example:
#!/usr/bin/env bash
threshold=10
grain=2
errorstate=0
while sleep "$grain"; do
date '+[%F %T] watchdog timer' >> log
done &
trap "kill -HUP $!" 0 HUP INT QUIT TRAP ABRT TERM
printf -v lastseen '%(%s)T'
tail -F log | while read line; do
printf -v now '%(%s)T'
if (( now - lastseen > threshold )); then
echo "ERROR"
errorstate=1
else
if (( errorstate )); then
echo "Recovered, yay"
errorstate=0
fi
fi
if [[ $line =~ .*PATTERN.* ]]; then
lastseen=$now
fi
done
Run this in one window, wait $threshold seconds for it to trigger, then in another window echo PATTERN >> log to see the recovery.
While this can be made as granular as you like (I've set it to 2 seconds in the example), it does pollute your log file.
Oh, and note that printf '%(%s)T' format requires bash version 4 or above.
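For reference, a quick sketch of that printf feature:
# %(fmt)T formats a time value; the argument -1 means "now" (bash >= 4.2)
printf -v now '%(%s)T' -1
echo "epoch seconds: $now"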

How to exit from a method in shell script

I am new to shell scripting and stuck with a problem. In my shell function, if I see any validation issue, the rest of the program should not execute and the user should be shown a message. The validation itself works, but when I use exit, it only comes out of the validation loop, not out of the full method.
config_wuigm_parameters () {
echo "Starting to config parameters for WUIGM....." | tee -a $log
prepare_wuigm_conf_file
echo "Configing WUIGM parameters....." | tee -a $log
local parafile=`dirname $0`/wuigm.conf
local pname=""
local pvalue=""
create_preference_template
cat ${parafile} |while read -r line;do
pname=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 1`
if [ -n "$pname" ] ; then
lsearch=`echo $line | grep "[<|>|\"]" `
if [ -n "$lsearch" ] ; then
echo validtion=$lsearch
echo "< or > character present , Replace < with < and > with >"
exit 1;
else
pvalue=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 2- `
echo "<entry key=\"$pname\" value=\"$pvalue\"/>" >> $prefs
echo "Configured : ${pname} = ${pvalue} " | tee -a $log
fi
fi
done
echo $validtion
echo "</map>" >> $prefs
# Copy the file to the original location
cp -f $prefs /root/.java/.userPrefs/com/ericsson/pgm/xwx
# removing the local temp file
rm -f $prefs
reboot_server
}
Any help would be great
It is because the construction
cat file | while read ...
starts a new (sub)shell.
In the following you can see the difference:
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
exit 1
done
echo "Still here after the exit"
}
echoline $@
and compare with this
echoline() {
while read -r line
do
echo ==$line==
exit 1
done < "$1"
echo "This is not printed after the exit"
}
echoline $@
Using return doesn't help either (because of the subshell). The
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
return 1
done
echo "Still here"
}
echoline $@
will still print "Still here".
So, if you want to exit the script, use:
while read ...
do
...
done < input  # this does not start a new subshell
If you want to exit just the method (return from it), you must check the exit status of the previous command, like:
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
exit 1
done || return 1
echo "In case of exit (or return), this is not printed"
}
echoline $@
echo "After the function call"
Instead of ||, you can use
[ $? != 0 ] && return 1
just after the while.
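In context, that looks like this (same function, sketched with the explicit status check):
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        exit 1
    done
    [ $? != 0 ] && return 1
    echo "Not printed when the subshell exited non-zero"
}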
You use the return instruction to exit a function with a value.
return [n]
Causes a function to exit with the return value specified by n. If n is omitted, the return status is that of the last command executed in the function body. If used outside a function, but during execution of a script by the . (source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. If used outside a function and not during execution of a script by ., the return status is false. Any command associated with the RETURN trap is executed before execution resumes after the function or script.
If you want to exit a loop, use the break instruction instead:
break [n]
Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless n is not greater than or equal to 1.
The exit instruction exits the current shell instead, so the current program as a whole. If you use sub-shells, code written between parenthesis, then only that sub-shell exits.
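A small sketch showing the difference between the three:
demo() {
    for i in 1 2 3; do
        break               # leaves only the loop
    done
    echo "after the loop"   # still printed
    ( exit 1 )              # exits only this subshell
    echo "after the subshell, status $?"
    return 2                # leaves the function
    echo "never printed"
}
demo; echo "demo returned $?"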

Shell while loop not stopping

I'm writing a routine that will identify if a process stops running and will do something once the targeted process is gone.
I came up with this code (as a test for my future code):
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
echo $?
done;
echo DONE
My problem is that for some reason, the loop never stops and echoes 1 after I delete the file "aaa".
0
0 >>> I delete the file at that point (in another terminal)
1
1
1
1
.
.
.
I would expect the output to be "DONE" as soon as I delete the file...
What's the problem?
SOLUTION:
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
done;
echo DONE
The value of $? changes very easily. In the current version of your code, this line:
echo $?
prints the status of the previous command (grep) -- but then it sets $? to 0, the status of the echo command.
Save the value of $? in another variable, one that won't be clobbered next time you execute a command:
#!/bin/bash
value="aaa"
ls | grep $value
status=$?
while [ $status = 0 ];
do
sleep 5
ls | grep $value
status=$?
echo $status
done;
echo DONE
If the ls | grep aaa is intended to check whether a file named aaa exists, this:
while [ -f aaa ] ; ...
is a cleaner way to do it.
$? is the exit status of the last command executed; in your loop that is the echo $? itself, not the grep.
You can rewrite that loop in a much simpler way, like this:
while [ -f aaa ]; do
sleep 5;
echo "sleeping...";
done
You ought not duplicate the command to be tested. You can always write:
while cmd; do ...; done
instead of
cmd
while [ $? = 0 ]; do ...; cmd; done
In your case, you mention in a comment that the command you are testing is parsing the output of ps. Although there are very good arguments that you ought not do that, and that the followon processing should be done by the parent of the command for which you are waiting, we'll ignore that issue at the moment. You can simply write:
while ps -ef | grep -v "grep mysqldump" |
grep mysqldump > /dev/null; do sleep 1200; done
Note that I changed the order of your pipe, since grep -v will return true if it matches anything. In this case, I think it is not necessary, but I believe it is more readable. I've also discarded the output to clean things up a bit.
Presumably your objective is to wait until any filename containing the string $value is present in the local directory, not necessarily a single specific filename.
try:
#!/bin/bash
value="aaa"
while ! ls *$value*; do
sleep 5
done
echo DONE
Your original code failed because $? is filled with the return code of the echo command upon every iteration following the first.
BTW, if you intend to use ps instead of ls in the future, you will pick up your own grep unless you are clever. Use ps -ef | grep [m]ysqlplus.

Run bash commands in parallel, track results and count

I was wondering how, if possible, I can create a simple job management in BASH to process several commands in parallel. That is, I have a big list of commands to run, and I'd like to have two of them running at any given time.
I know quite a bit about bash, so here are the requirements that make it tricky:
The commands have variable running time so I can't just spawn 2, wait, and then continue with the next two. As soon as one command is done a next command must be run.
The controlling process needs to know the exit code of each command so that it can keep a total of how many failed
I'm thinking somehow I can use trap but I don't see an easy way to get the exit value of a child inside the handler.
So, any ideas on how this can be done?
Well, here is some proof of concept code that should probably work, but it breaks bash: invalid command lines generated, hanging, and sometimes a core dump.
# need monitor mode for trap CHLD to work
set -m
# store the PIDs of the children being watched
declare -a child_pids
function child_done
{
echo "Child $1 result = $2"
}
function check_pid
{
# check if running
kill -s 0 $1
if [ $? == 0 ]; then
child_pids=("${child_pids[#]}" "$1")
else
wait $1
ret=$?
child_done $1 $ret
fi
}
# check by copying pids, clearing list and then checking each, check_pid
# will add back to the list if it is still running
function check_done
{
to_check=("${child_pids[#]}")
child_pids=()
for ((i=0;$i<${#to_check};i++)); do
check_pid ${to_check[$i]}
done
}
function run_command
{
"$#" &
pid=$!
# check this pid now (this will add to the child_pids list if still running)
check_pid $pid
}
# run check on all pids anytime some child exits
trap 'check_done' CHLD
# test
for ((tl=0;tl<10;tl++)); do
run_command bash -c "echo FAIL; sleep 1; exit 1;"
run_command bash -c "echo OKAY;"
done
# wait for all children to be done
wait
Note that this isn't what I ultimately want, but would be groundwork to getting what I want.
Followup: I've implemented a system to do this in Python. So anybody using Python for scripting can have the above functionality. Refer to shelljob
GNU Parallel is awesomesauce:
$ parallel -j2 < commands.txt
$ echo $?
It will set the exit status to the number of commands that failed. If you have more than 253 commands, check out --joblog. If you don't know all the commands up front, check out --bg.
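A quick usage sketch (the file name and commands here are hypothetical):
cat > commands.txt <<'EOF'
sleep 2; echo one
false
sleep 1; echo three
EOF
parallel -j2 < commands.txt
echo "failed jobs: $?"   # exit status = number of commands that failed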
Can I persuade you to use make? This has the advantage that you can tell it how many commands to run in parallel (modify the -j number)
echo -e ".PHONY: c1 c2 c3 c4\nall: c1 c2 c3 c4\nc1:\n\tsleep 2; echo c1\nc2:\n\tsleep 2; echo c2\nc3:\n\tsleep 2; echo c3\nc4:\n\tsleep 2; echo c4" | make -f - -j2
Stick it in a Makefile and it will be much more readable
.PHONY: c1 c2 c3 c4
all: c1 c2 c3 c4
c1:
sleep 2; echo c1
c2:
sleep 2; echo c2
c3:
sleep 2; echo c3
c4:
sleep 2; echo c4
Beware, those are not spaces at the beginning of the lines, they're a TAB, so a cut and paste won't work here.
Put an "#" infront of each command if you don't the command echoed. e.g.:
#sleep 2; echo c1
This would stop on the first command that failed. If you need a count of the failures you'd need to engineer that in the makefile somehow. Perhaps something like
command || echo F >> failed
Then check the length of failed.
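A sketch of that idea, assuming every recipe in the Makefile is wrapped as above; the -k flag keeps make going past failing targets:
rm -f failed
make -j2 -k all || true
if [ -s failed ]; then
    echo "$(wc -l < failed) command(s) failed"
fi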
The problem you have is that you cannot wait for one of multiple background processes to complete. If you observe job status (using jobs) then finished background jobs are removed from the job list. You need another mechanism to determine whether a background job has finished.
The following example starts two background processes (sleeps). It then loops using ps to see if they are still running. If not, it uses wait to gather the exit code and starts a new background process.
#!/bin/bash
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
while true; do
running1=`ps -p $pid1 --no-headers | wc -l`
if [ $running1 == 0 ]
then
wait $pid1
echo process 1 finished with exit code $?
sleep 3 &
pid1=$!
else
echo process 1 running
fi
running2=`ps -p $pid2 --no-headers | wc -l`
if [ $running2 == 0 ]
then
wait $pid2
echo process 2 finished with exit code $?
sleep 6 &
pid2=$!
else
echo process 2 running
fi
sleep 1
done
Edit: Using SIGCHLD (without polling):
#!/bin/bash
set -bm
trap 'ChildFinished' SIGCHLD
function ChildFinished() {
running1=`ps -p $pid1 --no-headers | wc -l`
if [ $running1 == 0 ]
then
wait $pid1
echo process 1 finished with exit code $?
sleep 3 &
pid1=$!
else
echo process 1 running
fi
running2=`ps -p $pid2 --no-headers | wc -l`
if [ $running2 == 0 ]
then
wait $pid2
echo process 2 finished with exit code $?
sleep 6 &
pid2=$!
else
echo process 2 running
fi
sleep 1
}
sleep 3 &
pid1=$!
sleep 6 &
pid2=$!
sleep 1000d
I think the following example answers some of your questions; I am looking into the rest of the question.
(cat list1 list2 list3 | sort | uniq > list123) &
(cat list4 list5 list6 | sort | uniq > list456) &
from:
Running parallel processes in subshells
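Extending that example to also collect each subshell's exit status (a sketch):
(cat list1 list2 list3 | sort | uniq > list123) &
pid1=$!
(cat list4 list5 list6 | sort | uniq > list456) &
pid2=$!
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "job 1 exited with $status1, job 2 with $status2"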
There is another package for debian systems named xjobs.
You might want to check it out:
http://packages.debian.org/wheezy/xjobs
If you cannot install parallel for some reason, this will work in plain shell or bash:
# String to detect failure in subprocess
FAIL_STR=failed_cmd
result=$(
(false || echo ${FAIL_STR}1) &
(true || echo ${FAIL_STR}2) &
(false || echo ${FAIL_STR}3)
)
wait
if [[ ${result} == *"$FAIL_STR"* ]]; then
failure=`echo ${result} | grep -E -o "$FAIL_STR[^[:space:]]+"`
echo The following commands failed:
echo "${failure}"
echo See above output of these commands for details.
exit 1
fi
Where true & false are placeholders for your commands. You can also echo $? along with the FAIL_STR to get the command status.
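For instance, a variant that records each failing command's exit status (cmd1 and cmd2 are placeholders):
result=$(
    (cmd1 || echo "${FAIL_STR}1:$?") &
    (cmd2 || echo "${FAIL_STR}2:$?") &
    wait
)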
Yet another bash-only example for your interest. Of course, prefer the use of GNU parallel, which offers many more features out of the box.
This solution involves creating temp-file output for collecting job status.
We use /tmp/${$}_ as the temporary file prefix; $$ is the PID of the parent process and is the same for the whole script execution.
First, the loop for starting the parallel jobs in batches. The batch size is set using max_parrallel_connection. try_connect_DB() is a slow bash function defined in the same file. Here we collect stdout + stderr (2>&1) for failure diagnostics.
nb_project=$(echo "$projects" | wc -w)
i=0
parrallel_connection=0
max_parrallel_connection=10
for p in $projects
do
i=$((i+1))
parrallel_connection=$((parrallel_connection+1))
try_connect_DB $p "$USERNAME" "$pass" > /tmp/${$}_${p}.out 2>&1 &
if [[ $parrallel_connection -ge $max_parrallel_connection ]]
then
echo -n " ... ($i/$nb_project)"
wait
parrallel_connection=0
fi
done
if [[ $nb_project -gt $max_parrallel_connection ]]
then
# final new line
echo
fi
# wait for all remaining jobs
wait
After all jobs have finished, review the results:
SQL_connection_failed is our error convention, output by try_connect_DB(); you may filter for job success or failure in whatever way best suits your needs.
Here we decided to output only the failed results, in order to reduce the amount of output on large job runs, especially if most of them, or all, passed successfully.
# displaying result that failed
file_with_failure=$(grep -l SQL_connection_failed /tmp/${$}_*.out)
if [[ -n $file_with_failure ]]
then
nb_failed=$(wc -l <<< "$file_with_failure")
# we will collect DB name from our output file naming convention, for post treatment
db_names=""
echo "=========== failed connections : $nb_failed/$nb_project"
for failure in $file_with_failure
do
echo "============ $failure"
cat $failure
db_names+=" $(basename $failure | sed -e 's/^[0-9]\+_\([^.]\+\)\.out/\1/')"
done
echo "$db_names"
ret=1
else
echo "all tests passed"
ret=0
fi
# temporary file cleanup; could be kept in case of error, adapt to suit your needs.
rm /tmp/${$}_*.out
exit $ret

Bash array with loop

I wrote a Bash script which tries to find a process and restart it if it has stopped.
This is the script.
#!/bin/bash
process=thin
path=/home/deepak/abc/
initiate=thin start -d
process_id=`ps -ef | pgrep $process | wc -m`
if [ "$process_id" -gt "0" ]; then
echo "The process process is running!!"
else
cd $path
$initiate
echo "Oops the process has stopped"
fi
This worked fine, and I thought of using arrays so that I can form a loop and use this script to check multiple processes. So I modified my script like this:
#!/bin/bash
process[1]=thin
path[1]=/home/deepak/abc/
initiate[1]=thin start -d
process_id=`ps -ef | pgrep $process[1] | wc -m`
if [ "$process_id" -gt "0" ]; then
echo "Hurray the process ${process[1]} is running!!"
else
cd ${path[1]}
${initiate[1]}
echo "Oops the process has stopped"
echo "Continue your coffee, the process has been stated again! ;)"
fi
I get this error if i run this script.
DontWorry.sh: 2: process[1]=thin: not found
DontWorry.sh: 3: path[1]=/home/deepak/abc/: not found
DontWorry.sh: 4: initiate[1]=thin start -d: not found
I googled for a solution; most of them insisted on using "#!/bin/bash" instead of "#!/bin/sh". I tried both but nothing worked. What am I missing?
Perhaps something like this:
#!/usr/bin/perl -w
use strict;
my @processes = ({process=>'thin',
path=>'/home/deepak/abc/',
initiate=>'thin start -d'
},
# more records go here...
);
for my $p (@processes) {
my $cmd = 'ps -ef | pgrep ' . $p->{process} . ' | wc -m';
if (`$cmd` > 0) {
print "The process process is running!!\n";
} else {
exec('cd ' . $p->{path} . '; ' .
$p->{initiate} . '; ' .
'echo Oops the process has stopped');
}
}
Deepak Prasanna, you may want to rethink the way you are monitoring the process.
lhunath gives reasons for not using ps to monitor/restart processes, and also
a simple script wrapper to achieve the goal in a cleaner manner.
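In that spirit, a minimal wrapper sketch (the name keepalive.sh is hypothetical; it assumes the command stays in the foreground rather than daemonizing):
#!/usr/bin/env bash
# keepalive.sh: restart a command whenever it exits
# usage: ./keepalive.sh <command> [args...]
while true; do
    "$@"
    echo "$(date): '$*' exited with status $?; restarting" >&2
    sleep 1
done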
I was not actually aware you could set arrays like that. I've always used:
pax> initiate=("thin start -d" "xx")
pax> echo ${initiate[0]}
thin start -d
pax> echo ${initiate[1]}
xx
You may need quotes around the strings. In my bash (4.0.33),
initiate[1]=thin start -d
is being interpreted as "set initiate[1]=thin then run start -d" because you can:
fspec=/etc/passwd ls -al ${fspec}
to set an environment variable for a single command. What version of bash are you running (use bash --version)?
Update:
Deepak, I've gotten that script working under the same release of bash as yours. See the following transcript:
pax> bash --version
GNU bash, version 3.2.48(21)-release (i686-pc-cygwin)
Copyright (C) 2007 Free Software Foundation, Inc.
pax> cat qq.sh
#!/bin/bash
process=(sleep)
path=(/)
initiate=("sleep 3600")
process_id=`ps -ef | pgrep ${process[0]} | wc -m`
if [ "$process_id" -gt "0" ]; then
echo "Hurray the process ${process[0]} is running!!"
else
cd ${path[0]}
${initiate[0]} &
echo "Oops the process has stopped"
echo "Continue your coffee, the process has been stated again! ;)"
fi
pax> ./qq.sh
Oops the process has stopped
Continue your coffee, the process has been stated again! ;)
pax> ./qq.sh
Hurray the process sleep is running!!
pax> ps -ef
UID PID PPID TTY STIME COMMAND
pax 112 1 con 10:16:24 /usr/bin/bash
pax 4568 1 con 10:23:07 /usr/bin/sleep
pax 5924 112 con 10:23:18 /usr/bin/ps
Can you try the modified script in your own environment and see how it goes?
