Redirecting output in shell script from subprocess with simulated ctrl-c - bash

I am working on a shell script that sets some parameters and calls a Python script. Sometimes the Python script hangs, and when I press Ctrl-C it generates some error output, which I am trying to write to a file. When I execute the shell script and press Ctrl-C, I get the output in the redirected file, but if I simulate the Ctrl-C by sleeping for some time and killing the process, the output is not redirected to the file. I have used some examples from SO for the sleep-and-terminate part, which works, but the output file doesn't contain the error that I get when I press Ctrl-C manually.
I cannot change the python script that this script is executing, so I have to implement this in the calling script.
[ -z "$1" ] && echo "No environment argument supplied" && exit 1
. env_params.txt
. run_params.txt
echo "========================================================== RUN STARTED AT $B ==========================================================" >> $OUTFILE
echo " " >> $OUTFILE
export RUN_COMMAND="$PYTHON_SCRIPT_LOC/pyscript.py PARAM1=VALUE1 PARAM2=VALU2 PAram3=value3"
echo "Run command from test.sh $RUN_COMMAND" >> $OUTFILE
echo " " >> $OUTFILE
echo " " >> $OUTFILE
echo "========================================================== Running Python script ==========================================================" >> $OUTFILE
echo "Before python command"
###############################################
( python $RUN_COMMAND >> $OUTFILE 2>&1 ) & pid=$!
SLEEP_TIME=1m
echo "before sleep - sleeping for $SLEEP_TIME $pid"
( sleep $SLEEP_TIME && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
echo "after sleep - sleeping for $SLEEP_TIME $pid"
echo "your_command finished"
pkill -HUP -P $watcher
wait $watcher
else
echo "after sleep - sleeping for $SLEEP_TIME $pid"
echo "your_command interrupted"
fi
### also tried this - did not work either
### python $RUN_COMMAND >> $OUTFILE 2>&1 &; (pythonPID=$! ; sleep 1m ;kill -s 2 $pythonPID)
.
.
What changes do I need to make such that the output is written to the $OUTFILE when the process is killed in the script itself, rather than pressing ctrl-c on the terminal?

Probably you do not want to use SIGHUP (hangup detected on controlling terminal or death of controlling process), but rather
SIGINT (2, Term: interrupt from keyboard).
For more info read man 7 signal.
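Applied to the watcher in the question, that amounts to swapping the signal in the kill line. Below is a minimal sketch reusing the variable names from the question, with one hedge: when job control is off (the default inside a script), bash starts background jobs with SIGINT ignored, so the simulated Ctrl-C may never reach python; turning job control on with set -m is one way around that.
#!/bin/bash
# set -m turns job control on so the background python keeps the default
# SIGINT handling instead of ignoring it.
set -m

python $RUN_COMMAND >> "$OUTFILE" 2>&1 & pid=$!

SLEEP_TIME=1m
# Watcher: after SLEEP_TIME, deliver SIGINT (the signal Ctrl-C sends),
# so the python script prints the same traceback it does interactively.
( sleep "$SLEEP_TIME" && kill -INT "$pid" ) 2>/dev/null & watcher=$!

if wait "$pid" 2>/dev/null; then
    echo "python script finished before the timeout"
    pkill -P "$watcher" 2>/dev/null    # stop the watcher's sleep
    wait "$watcher" 2>/dev/null
else
    echo "python script interrupted"
fi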

Related

How to keep the internal shell script running while the main shell script ends?

I am trying to create a script which checks for a running process and starts it if it is not running.
Here is test.sh
#!/bin/bash
if pgrep infiloop > /dev/null ;
then
echo "Process is running."
else
exec /u/team/infiloop.sh > /u/team/infiloopOutput.txt
echo "Process was not running."
fi
And infiloop.sh
#!/bin/sh
while true
do
echo "helllo"
sleep 2
done
Now when I run the first script, it starts the infinite-loop script, but after it starts it doesn't allow me to run another command.
Output:
[user@host ~]$ ./checkforRunningJob.sh
^C
I have to press Ctrl+C or Ctrl+Z, and once I do that my infinite-loop script also stops.
Could you please check.
Thanks.
You can put the process in the background with &:
#!/bin/bash
if pgrep infiloop > /dev/null ;
then
echo "Process is running."
else
exec /u/team/infiloop.sh > /u/team/infiloopOutput.txt &
echo "Process was not running, started process $!"
fi
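If the started loop also needs to survive the terminal that launched the wrapper (not just the wrapper script itself), a variation of the same idea is to start it with nohup. A sketch, using the paths from the question:
#!/bin/bash
if pgrep infiloop > /dev/null
then
    echo "Process is running."
else
    # nohup detaches the loop from the terminal (it ignores SIGHUP),
    # and & puts it in the background so this script can continue and exit;
    # exec is not needed because the loop already runs as its own process.
    nohup /u/team/infiloop.sh > /u/team/infiloopOutput.txt 2>&1 &
    echo "Process was not running, started process $!"
fi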

bash: suppress kill message in while loop

I have a progress bar that prints dots as it waits for an external program to finish executing. When it does finish, I get an ugly kill message which I want to suppress.
#!/bin/bash
program < input.file.1 > output.1 &
sim='running simulation'
echo -ne $sim >&2
while kill -0 $!; do
echo -n . >&2
sleep 1
done
Expected: running simulation.........
Actual: running simulation........./run_with_dots.1: line 8: kill: (11872) - No such process
Redirect stderr:
while kill -0 $! 2> /dev/null; do
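Put back into the script from the question, the silenced loop might look like this sketch (program, input.file.1 and output.1 are the names used above):
#!/bin/bash
program < input.file.1 > output.1 &
sim='running simulation'
echo -ne "$sim" >&2
# kill -0 only tests whether the PID still exists; once the background
# job finishes, its "No such process" complaint goes to /dev/null.
while kill -0 $! 2> /dev/null; do
    echo -n . >&2
    sleep 1
done
echo >&2    # finish the line of dots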

How to run multiple shell script which does not give prompt at the end

I want to run three processes, each of which stops at a message saying the service is started and never returns to the prompt. I want to automate this procedure. I tried adding "&" at the end, but the output still pops up in the terminal, and I have to stop one process with Ctrl+C before the next script can run. I also tried "sh +x script1.sh & sh +x script2.sh". Please help with this.
You need to define a general script that launches the three processes in the background and waits for the user to press Control+C. Then you add a trap to the general script to launch a shutdown hook.
I think the solution may look like this:
#!/bin/bash
end_processes() {
    echo "Shutdown hook"
    if [ -n "$PID1" ]; then
        echo "Killing PID 1 = $PID1"
        kill -9 "$PID1"
    fi
    if [ -n "$PID2" ]; then
        echo "Killing PID 2 = $PID2"
        kill -9 "$PID2"
    fi
    if [ -n "$PID3" ]; then
        echo "Killing PID 3 = $PID3"
        kill -9 "$PID3"
    fi
}
# Main code: Add trap
trap end_processes EXIT
# Main code: Launch scripts
./script1.sh &
PID1=$!
./script2.sh &
PID2=$!
./script3.sh &
PID3=$!
# Main code: wait for user to press Control+C
while [ 1 ]; do
sleep 1s
done
Notice that:
I have added some echo messages just to test.
The trap executes the function when the script receives EXIT. You can restrict this by trapping only a specific signal (e.g. SIGINT); see the sketch after the snippet below.
The trap function currently kills the processes with -9. If you wish, you can send other kill signals.
$! retrieves the PID of the most recent background command.
You can modify the wait loop (the last while command) to first sleep for the approximate time the processes need to finish and then poll at a smaller interval:
APROX_TIME=30s
POLL_TIME=2s
sleep $APROX_TIME
while [ 1 ]; do
sleep $POLL_TIME
done
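As mentioned in the notes above, the trap can be limited to specific signals. A minimal sketch (same script names as the answer) that runs the shutdown hook only on Ctrl+C or a plain kill, and waits on the background jobs instead of sleeping forever:
#!/bin/bash
end_processes() {
    echo "Shutdown hook"
    [ -n "$PID1" ] && kill "$PID1" 2>/dev/null
    [ -n "$PID2" ] && kill "$PID2" 2>/dev/null
    [ -n "$PID3" ] && kill "$PID3" 2>/dev/null
}

# Run the hook only on Ctrl+C (SIGINT) or a plain kill (SIGTERM),
# not on every normal exit, then leave with a non-zero status.
trap 'end_processes; exit 1' INT TERM

./script1.sh & PID1=$!
./script2.sh & PID2=$!
./script3.sh & PID3=$!

# wait returns as soon as a trapped signal arrives, so no sleep loop is needed.
wait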

Preventing "Terminated" message when killing subprocess

I am looking for a tool like Linux's "timeout" command that will allow timeout values of less than 1 second. I am using the solution here, but I am running into one problem.
The problem is that sometimes it prints this message, and I am trying to figure out why.
./tools/utimeout.sh: line 17: 12369 Terminated $(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null)
I tried modifying the solution so that the subprocess would be killed before the script exits, but this didn't help.
#!/bin/bash
TIMEOUT=400000
#execute command in background
"$#" &
#get process ID
PROC=$!
#sleep for the timeout then kill the command
(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) &
CPROC=$!
wait $PROC &> /dev/null
kill $CPROC &> /dev/null
if [ $? -eq 1 ]; then
# echo "Process timed out."
exit 1
else
# echo "Process completed successfully."
exit 0
fi
Since I want to capture stderr from this script, as a workaround I am just removing the message from the error logs with sed. Since this is a hack, I was hoping to find a better solution.
To avoid the message you can disown the process
(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) &
disown %
and then you cannot wait on it, or you can put it in a subshell:
( (./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) & )
but of course then you cannot kill it, as $! won't be right.
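Applied to the script in the question, the disown variant might look like this sketch (same variable names and the same ./tools/usleep helper as above):
#!/bin/bash
TIMEOUT=400000

# Execute the command in the background.
"$@" &
PROC=$!

# Watchdog: sleep for the timeout, then kill the command.
(./tools/usleep $TIMEOUT ; kill $PROC &> /dev/null) &
CPROC=$!
disown %    # drop the watchdog from the job table so bash never prints "Terminated"

wait $PROC &> /dev/null
kill $CPROC &> /dev/null
if [ $? -eq 1 ]; then
    exit 1    # kill failed: the watchdog already fired, i.e. the command timed out
else
    exit 0    # the command finished before the watchdog
fi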
My Fedora 21 timeout takes floating-point durations:
$ time timeout .1 sleep 1
real 0m0.103s
user 0m0.002s
sys 0m0.002s
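Given that, if a timeout accepting fractional seconds is available, the whole helper script could shrink to something like this sketch (124 is the exit status GNU timeout uses when the limit is hit):
#!/bin/bash
# 0.4 seconds, the same limit as the 400000 microseconds above.
timeout 0.4 "$@"
status=$?
if [ $status -eq 124 ]; then
    # Process timed out.
    exit 1
fi
exit $status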

Bash, CTRL+C in eval not interrupting the main script

In my bash script, I'm running an external command that's stored in $cmd variable. (It could be anything, even some simple bash oneliner.)
If ctrl+C is pressed while running the script, I want it to kill the currently running $cmd but it should still continue running the main script. However, I would like to preserve the option to kill the main script with ctrl+C when the main script is running.
#!/bin/bash
cmd='read -p "Ooook?" something; echo $something; sleep 4 '
while true; do
echo "running cmd.."
eval "$cmd" # ctrl-C now should terminate the eval and print "done cmd"
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Any idea how to do it some nice bash way?
Changes applied based on answers:
#! /bin/bash
cmd='read -p "Ooook1?" something; read -p "Oook2?" ; echo $something; sleep 4 '
while true; do
echo "running cmd.."
trap "echo Interrupted" INT
eval "($cmd)" # ctrl-C now should terminate the eval and print "done cmd"
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Now, pressing Ctrl-C during the "Ooook1?" read breaks out of the eval only after that read completes (it interrupts just before "Oook2"). However, it interrupts "sleep 4" instantly.
In both cases it does the right thing - it just interrupts the eval subshell - so we're almost there; only that weird read behaviour remains.
If you can afford having the eval part run in a subshell, "all" you need to do is trap SIGINT.
#! /bin/bash
cmd='read -p "Ooook1?" something; read -p "Oook2?" ; echo $something; sleep 4 '
while true; do
echo "running cmd.."
trap "echo Interrupted" INT
eval "($cmd)" # ctrl-C now should terminate the eval and print "done cmd"
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Don't know if that will fit your specific need though.
$ ./t.sh
running cmd..
Ooook1?^CInterrupted
done cmd
^C
$ ./t.sh
running cmd..
Ooook1?qsdqs^CInterrupted
done cmd
^C
$ ./t.sh
running cmd..
Ooook1?qsd
Oook2?^CInterrupted
done cmd
^C
$
GNU bash, version 4.1.9(2)-release (x86_64-pc-linux-gnu)
You can determine whether the sleep command exited abnormally by examining the last exit status (echo $?). A status of 130 (128 + 2, i.e. 128 plus SIGINT) indicates the command was killed by Ctrl-C.
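For example, a sketch of checking the status right after the eval (130 for a process killed by SIGINT is standard bash behaviour):
eval "($cmd)"
status=$?                      # capture immediately after the eval
if [ "$status" -eq 130 ]; then
    # 130 = 128 + 2 (SIGINT): the subshell was interrupted by Ctrl-C
    echo "cmd was interrupted"
else
    echo "cmd finished with status $status"
fi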
No, read is not an external command; it is a bash builtin executed in the same process as the other instructions, so on Ctrl-C the whole process would be killed.
P.S.
Yes, you can execute the command in a subshell (here, a separate bash process). Something like this:
#!/bin/bash
cmd='trap - INT; echo $$; read -p "Ooook?" something; echo $something; sleep 4 '
echo $$
while true; do
echo "$cmd" > tmpfile
echo "running cmd.."
trap "" INT
bash tmpfile
rm tmpfile
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
