jobs list and finding the process ID using Linux in Red Hat - bash

I have written this script, but when I run jobs -l and ps | grep I get no results. Here is my script:
#!/bin/bash
trap 'echo -e "kill Command given \n";exit 2'SIGINT SIGTERM
count=1
echo "start of the program"
while [ $count -le 10 ]
do
echo "Loop #${count}"
sleep 10
count=$[ count + 1 ]
done
echo "end of program"

This syntax is invalid:
trap 'echo -e "kill Command given \n";exit 2'SIGINT SIGTERM
because there is no space between the closing quote and SIGINT, so SIGINT ends up appended to the command string instead of being treated as a signal name. Use this instead:
trap 'echo -e "kill Command given \n"; exit 2' INT TERM
You're probably doing something else wrong, too, since this works fine for me:
# Start 10 sleep processes in the background.
for x in {1..10}; do
sleep 60 &
done
$ pgrep -c sleep
10
I'm getting the results I'm expecting, which is a count of the number of sleep processes currently running. If you're expecting something else, please update your question and provide some examples of your expected output.
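For reference, one way to verify the fix and locate the running script from the shell (a rough sketch; the file name loop.sh and the PIDs shown are just illustrative):
$ ./loop.sh &        # start the corrected script in the background
[1] 12345
$ jobs -l            # the job table now lists the job together with its PID
[1]+ 12345 Running   ./loop.sh &
$ pgrep -f loop.sh   # or search for it by name from any shell
12345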

I use this (where $1 is the pattern to search for):
ps | awk /$1/'{print $4; exit}'
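For context, a rough usage sketch, assuming that line sits in a small wrapper script so that $1 is the pattern passed on the command line (the shell expands $1 into the awk pattern before awk runs):
#!/bin/bash
# findproc.sh - print the 4th field of the first ps line matching $1
# note: with the default four-column ps output (PID TTY TIME CMD), $4 is the
# command name; use pgrep "$1" instead if you want the PID itself
ps | awk /$1/'{print $4; exit}'
Invoked, for example, as: ./findproc.sh sleep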

How to kill a process group with kill in bash?

I have a script which is much more complicated but I managed to produce a short script that exhibits the same problem.
I create a process and make it a session leader and then send SIGINT to it. The kill builtin doesn't fail but the process doesn't get killed either (i.e. the default behaviour for SIGINT is to kill). I tried with kill -INT -pid (which should be equivalent to what I do currently) and the /bin/kill command but the behaviour is the same.
The script is as follows:
#!/bin/bash
# Run in a new session so that I don't have to kill the shell
setsid bash -c "sleep 50" &
procs=$(ps --ppid $$ -o pid,pgid,command | grep 'sleep' | head -1)
if [[ -z "$procs" ]]; then
echo "Couldn't find process group"
exit 1
fi
PID=$(echo $procs | cut -d ' ' -f 1)
pgid=$(echo $procs | cut -d ' ' -f 2)
if ! kill -n SIGINT $pgid; then
echo "kill failed"
fi
echo "done"
ps -P $pgid
My expectation is that the last ps command shouldn't report anything (as kill didn't report failure and hence the process should have died) but it does.
I am looking for an explanation of the above noted behaviour and how I can kill a process group (i.e. both the bash and the sleep it starts -- the setsid line above) running in a separate session.
I think you'll find that sleep ignores SIGINT. Take a look at the signals of your sleep command and see. On my Linux box I find:
SigIgn: 0000000000000006
The second bit from the right is set (6 = 4 + 2), and that bit corresponds to:
--> 2 = SIGINT
Try sending a HUP, and you'll find it does kill the sleep.
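On Linux you can confirm and work around this directly; a minimal sketch using the PID and pgid captured in the script above:
# show the ignored-signal bitmask of the backgrounded sleep (bit 2 = SIGINT)
grep SigIgn /proc/"$PID"/status
# signal the whole process group (note the leading dash before the pgid)
# with a signal that is not in the ignore mask, e.g. TERM or HUP
kill -TERM -- -"$pgid"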

applescript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
echo "killing blocking cmd..."
kill -KILL $cpid
# non zero status to inform launch script of problem...
exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), together with the if statement, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
OK, the following will do the trick. It basically kills the job using its jobid. Since there is only one, it's the current job %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo "Taking too long. Killed..."
exit 1
fi
exit 0
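For what it's worth, a usage sketch of the script above (assuming it is saved as writer.sh and the FIFO is /tmp/apipe): with no reader attached the background echo blocks, kill %% succeeds, and the script exits 1; with a reader attached the write completes and it exits 0:
mkfifo /tmp/apipe                     # create the pipe once
./writer.sh "hello world" /tmp/apipe  # no reader: prints "Taking too long. Killed..."
echo $?                               # 1
cat /tmp/apipe &                      # attach a reader, then try again
./writer.sh "hello world" /tmp/apipe
echo $?                               # 0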

Output of background process to shell variable

I want to get the output of a command/script into a variable, but the process is triggered to run in the background. I tried the code below; a few servers ran it correctly and I got the response, but on a few others i_res comes back empty.
I am running it in the background because the command can hang, and I don't want to hang the parent script.
Hope I will get a response soon.
#!/bin/ksh
x_cmd="ls -l"
i_res=$(eval $x_cmd 2>&1 &)
k_pid=$(pgrep -P $$ | head -1)
sleep 5
c_errm="$(kill -0 $k_pid 2>&1 )"; c_prs=$?
if [ $c_prs -eq 0 ]; then
c_errm=$(kill -9 $k_pid)
fi
wait $k_pid
echo "Result : $i_res"
Try something like this:
#!/bin/ksh
pid=$$ # parent process
(sleep 5 && kill $pid) & # this will sleep and wake up after 5 seconds
# and kill off the parent.
termpid=$! # remember the timebomb pid
# put the command that can hang here
result=$( ls -l )
# if we got here in less than 5 five seconds:
kill $termpid # kill off the timebomb
echo "$result" # disply result
exit 0
Add whatever messages you need to the code. On average this will complete much faster than always having a sleep statement. You can see what it does by changing the command from ls -l to sleep 6.
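If GNU coreutils timeout is available on those servers, the same guard can be written more compactly; a rough sketch (the 5-second limit mirrors the sleep 5 above):
#!/bin/ksh
x_cmd="ls -l"
# cap the command at 5 seconds; timeout exits with 124 if it had to kill it
i_res=$(timeout 5 $x_cmd 2>&1)
rc=$?
if [ $rc -eq 124 ]; then
    echo "Command timed out"
else
    echo "Result : $i_res"
fi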

Monitoring the lifetime of a process

I have a python script called hdsr_writer.py. I can launch this script in shell by calling
"python hdsr_writer.py 1234"
where 1234 is a parameter.
I made a shell script to increment the number and execute the python script with that number every second:
for param in $(seq 1 100000)
do
    python hdsr_writer.py $param &
    sleep 1
done
Usually the python script finishes its task within 0.5 seconds. However, there are times when it gets stuck and stays in the system for longer than 30 seconds. I don't want that, so I would like to monitor the lifetime of each python process that gets started. If one has been alive for longer than 2 seconds it should be killed and re-executed, at most 2 times.
Note: I would like to do this in the shell script, not the python script, because I cannot change the python script.
Update: more explanation about my question
Please note that launching a new python process and monitoring the python processes are independent jobs. The launching job doesn't care how many python processes are running or how "old" they are; it just calls "python hdsr_writer.py $param &" every second after incrementing param. The monitoring job, on the other hand, periodically checks the lifetime of all hdsr_writer python processes. If one has been in memory for more than 2 seconds, it kills it and re-runs it at most 2 times.
Not so short answer
#!/bin/bash
param=1
while [[ $param -lt 100000 ]]; do
echo "param=$param"
chances=3
while [[ $chances -gt 0 ]]; do
python tst.py $param &
sleep 2
if [[ "$(jobs | grep 'Running')" == "" ]]; then
chances=0
else
kill -9 $(jobs -l | awk '{print $2}')
chances=$(($chances-1))
if [[ $chances -gt 0 ]]; then
echo "one more chance for parameter $param"
fi
fi
done
param=$(($param+1))
done
UPD
This is another answer, as requested by the OP.
It is still 2 scripts in one, but they can be split into two files.
Please pay attention that $( ... ) & is used to run the sub-shells in the background.
#!/bin/bash
# Script launcher
pscript='rand.py'
for param in {1..10}
do
# start background sub-shell, where python with $param is started
echo $(
left=3
error_on_exit=1
# go if any chances left and previous run exits not with code 0
while [[ ( ( $left -gt 0 ) && ( $error_on_exit -ne 0 ) ) ]]; do
left=$(($left-1))
echo "param=$param; chances left $left "
# run python and grab python exit code (=0 if ok)
python $pscript $param
error_on_exit=$?
done
) &
done
# Script controller
# just kills python processes older than 2 seconds
# exits after no python left
# $(...) & can be removed if this code goes to separate script
$(while [[ $(ps | grep -v 'grep' | grep -c python ) != "0" ]]
do
sleep 0.5
killall -9 -q --older-than 2s python
done) &
Use a combination of the sleep and nohup commands. After the sleep time, use kill to finish the execution of the python script. You can check whether the process is still running with the ps command.
#!/usr/bin/ksh
for param in {1..100000}
do
    nohup python hdsr_writer.py $param &
    pid=$!
    sleep 2
    if ps -p $pid > /dev/null
    then
        kill -9 $pid
    fi
done
Re-answer:
I'd use two scripts, the first one (script1.ksh):
#!/usr/bin/ksh
for param in {1..1000000}
do
    nohup ./script2.ksh $param &
done
And the second (script2.ksh):
#!/usr/bin/ksh
for i in {1..3}
do
    python hdsr_writer.py $1 &
    pid=$!
    sleep 2
    if ps -p $pid > /dev/null
    then
        kill -9 $pid
    else
        echo "Finished $1" >> log.txt
        exit 0
    fi
done
The first script will launch all your processes one after the other. The second one will check its own python process.

How to stop a running script which calls an infinite loop

I'm writing a bash script that starts a program whose run time is unknown. The script also starts a while loop that uses Linux commands or perf to record something every second.
./my_app &
i=1
while true;
do
perf stat -a -A -e writeback:writeback_dirty_page sleep $i >> out
done
How can I stop the while loop when my_app has finished? Thank you.
Make your while loop conditional on the process id of the app existing:
./my_app &
app_pid=$!
i=1
while ps -p $app_pid >/dev/null 2>&1
do
perf stat -a -A -e writeback:writeback_dirty_page sleep $i >> out
done
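Another option is to run the monitoring loop itself in the background, wait for my_app to finish, and then kill the loop; a rough sketch (the in-flight perf run just finishes its current second):
./my_app &
app_pid=$!
while true
do
    perf stat -a -A -e writeback:writeback_dirty_page sleep 1 >> out
done &
mon_pid=$!
wait $app_pid   # blocks until my_app exits
kill $mon_pid   # then stop the monitoring subshell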
Get the pid using
echo $!
and then
kill it.
Alternatively, you can send a kill signal from my_app to the process that spawned my_app.
Here is a real example:
test.sh
#!/bin/bash
./my_app.sh $$ &
while [ 1 ]
do
echo running....
sleep 2
done
my_app.sh
#!/bin/bash
sleep 10
kill -9 $1
