Restarting a shell script with a signal - bash

I have a script that runs and outputs to my panel. What I'm trying to do is restart the script from another script by sending a signal to it.
Script 1 (panel_script):
#!/bin/sh
trap "exec panel_script" SIGTRAP
while true; do
echo "status"
sleep 10
done
Script 2:
#!/bin/sh
pkill -x -SIGTRAP panel_script

Use trap "exec $0" EXIT and pkill -f panel_script.
You did not say you would ever want to stop the script again, so restarting it on any exit should be fine.
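A note on timing, in case the restart seems delayed: the shell only runs a trap handler after the current foreground command finishes, so with the loop above a signal sent mid-sleep only takes effect when that sleep 10 returns. A minimal sketch of a workaround (same script as in the question; the signal name is written without the SIG prefix for plain sh) is to run the sleep in the background and wait for it, since wait is interruptible by trapped signals:
#!/bin/sh
# panel_script - restart on SIGTRAP without waiting out the current sleep
trap 'exec "$0"' TRAP
while true; do
echo "status"
sleep 10 &   # sleep in the background...
wait $!      # ...so a trapped signal interrupts the wait at once
done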


How to keep the internal shell script running while the main shell script ends?

I am trying to create a script which checks for a running process and starts it if it is not running.
Here is test.sh
#!/bin/bash
if pgrep infiloop > /dev/null ;
then
echo "Process is running."
else
exec /u/team/infiloop.sh > /u/team/infiloopOutput.txt
echo "Process was not running."
fi
And infiloop.sh
#!/bin/sh
while true
do
echo "helllo"
sleep 2
done
Now when I run the first script, it starts infiloop.sh, but after that it doesn't give me the prompt back to run another command.
Output:
[user@host ~]$ ./checkforRunningJob.sh
^C
I have to press Ctrl+C or Ctrl+Z, and once I do that my infinite script also stops.
Could you please check?
Thanks.
You can put the process in the background with & (exec replaces the current shell with infiloop.sh, which is why your script never returned to the prompt; backgrounded, the loop runs while the script carries on):
#!/bin/bash
if pgrep infiloop > /dev/null ;
then
echo "Process is running."
else
/u/team/infiloop.sh > /u/team/infiloopOutput.txt &
echo "Process was not running, started process $!"
fi
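If the started loop should also survive the terminal that launched it being closed, a hedged variant (same paths as in the question) is to start it with nohup; the exec is unnecessary once the command is backgrounded:
#!/bin/bash
if pgrep infiloop > /dev/null
then
echo "Process is running."
else
# nohup detaches the loop from the terminal so it keeps running after logout
nohup /u/team/infiloop.sh > /u/team/infiloopOutput.txt 2>&1 &
echo "Process was not running, started process $!"
fi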

killing background processes when script exits [duplicate]

This question already has answers here:
How do I kill background processes / jobs when my shell script exits?
I have scriptA which executes another script that starts up in the background. Now I need to make sure that when I kill scriptA (Ctrl+C), the background processes are also killed.
#!/bin/bash
echo "This script is about to run another script."
sh ../processes/processb/bin/startserver.sh &
FOO_PID=$!
echo "This script has just run another script." $FOO_PID
The script executes fine, but once I press Ctrl+C and run 'ps' on the FOO_PID value, that process still exists. What am I doing wrong?
UPDATE:
So I tried the code below, but scriptC's process is still not getting killed. I think Ctrl+C only terminates scriptA (the parent), and therefore the trap command does not get executed?
#!/bin/bash
echo "This script is about to run another script."
../common/samples/bin/scriptC.sh &
mypid=$!
kill -0 "$mypid" && echo "My process is still alive."
echo "This script has just run another script." $mypid
trap "kill $mypid && kill $$" INT
Add a trap for SIGINT:
trap "kill $FOO_PID && kill $$" INT
or for any sort of exiting, handle the pseudo signal EXIT:
trap "kill $FOO_PID && kill $$" EXIT

Bash script: execute commands from a file, but on cancel jump to the next one

I'm trying to make a script to execute a set of commands from a file.
The file, for example, has a set of 3 commands - perl script-a, perl script-b, perl script-c - each command on a new line, and I made this script:
#!/bin/bash
for command in `cat file.txt`
do
echo $command
perl $command
done
The problem is that some scripts get stuck or take too long to finish, and I want to see their outputs. Is it possible, when I send Ctrl+C to the currently executing command, to make the bash script jump to the next command in the txt file instead of cancelling the whole bash script?
Thank you
You can use trap 'continue' SIGINT to ignore Ctrl+c:
#!/bin/bash
# ignore & continue on Ctrl+c (SIGINT)
trap 'continue' SIGINT
while read command
do
echo "$command"
perl "$command"
done < file.txt
# Restore the default Ctrl+c behaviour
trap - SIGINT
Also you don't need to call cat to read a file's contents.
#!/bin/bash
for scr in $(cat file.txt)
do
echo $scr
# Only if you have a few lines in your file.txt,
# Then, execute the perl command in the background
# Save the output.
# From your question it seems each of these scripts are independent
perl "$scr" &> "${scr}_perl_execution.out" &
done
You can check each of the outputs to see if the command is doing what you expect. If not, you can use kill to terminate each of the commands.
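For example, assuming one of the file's entries is script-b as in the question, a stuck run could be stopped by matching its command line (or by its saved pid):
# terminate one of the background runs by matching its command line
pkill -f "perl script-b"
# or, if you kept its pid ($!) when starting it:
# kill "$pid"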

Bash, CTRL+C in eval not interrupting the main script

In my bash script, I'm running an external command that's stored in the $cmd variable. (It could be anything, even some simple bash one-liner.)
If ctrl+C is pressed while running the script, I want it to kill the currently running $cmd but it should still continue running the main script. However, I would like to preserve the option to kill the main script with ctrl+C when the main script is running.
#!/bin/bash
cmd='read -p "Ooook?" something; echo $something; sleep 4 '
while true; do
echo "running cmd.."
eval "$cmd" # ctrl-C now should terminate the eval and print "done cmd"
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Any idea how to do this in some nice bash way?
Changes applied based on answers:
#! /bin/bash
cmd='read -p "Ooook1?" something; read -p "Oook2?" ; echo $something; sleep 4 '
while true; do
echo "running cmd.."
trap "echo Interrupted" INT
eval "($cmd)" # ctrl-C now should terminate the eval and print "done cmd"
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Now, pressing Ctrl+C during the "Ooook1?" read will break out of the eval only after that read is done (it interrupts just before "Oook2?"). However, it interrupts "sleep 4" instantly.
In both cases it does the right thing - it only interrupts the eval subshell, so we're almost there - except for that weird read behaviour.
If you can afford having the eval part run in a subshell, "all" you need to do is trap SIGINT.
#! /bin/bash
cmd='read -p "Ooook1?" something; read -p "Oook2?" ; echo $something; sleep 4 '
while true; do
echo "running cmd.."
trap "echo Interrupted" INT
eval "($cmd)" # ctrl-C now should terminate the eval and print "done cmd"
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done
Don't know if that will fit your specific need though.
$ ./t.sh
running cmd..
Ooook1?^CInterrupted
done cmd
^C
$ ./t.sh
running cmd..
Ooook1?qsdqs^CInterrupted
done cmd
^C
$ ./t.sh
running cmd..
Ooook1?qsd
Oook2?^CInterrupted
done cmd
^C
$
GNU bash, version 4.1.9(2)-release (x86_64-pc-linux-gnu)
You can determine whether the command exited abnormally by examining the last exit status with echo $?. A non-zero status probably indicates Ctrl-C.
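For example (a sketch around the loop from the question; 130 is the usual 128+SIGINT value, but testing for any non-zero status is safer):
eval "($cmd)"
status=$?
if [ "$status" -ne 0 ]; then
echo "cmd was interrupted or failed (exit status $status)"
fi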
No, read is not an external command; it is an internal bash builtin executed in the same process as the other instructions, so on Ctrl-C the whole process would be killed.
P.S.
Yes, you can execute the command in a subshell. Something like this:
#!/bin/bash
cmd='trap - INT; echo $$; read -p "Ooook?" something; echo $something; sleep 4 '
echo $$
while true; do
echo "$cmd" > tmpfile
echo "running cmd.."
trap "" INT
bash tmpfile
rm tmpfile
trap - INT
echo "done cmd"
sleep 5 # ctrl-C now should terminate the main script
done

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. ($! expands to the pid of the most recently started background job.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
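A self-contained sketch of the pattern (sleep 30 stands in for the real background job):
#!/bin/bash
sleep 30 &                # some long-running background job
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null   # the Terminated message would be reported here; stderr is discarded
echo "no Terminated message was printed"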
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
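A minimal sketch of that idea - disown removes the job from the shell's job table, so the shell no longer reports its termination:
sleep 30 &       # stand-in for the real background process
pid=$!
disown "$pid"    # the shell stops tracking this job...
kill "$pid"      # ...so killing it prints no Terminated message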
The "Terminated" message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here with the parentheses), you do not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code.
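One hedged way to get that pid back into the current shell is a command substitution (the subshell it creates already has job control off, so the job never enters the current shell's job table); the background command's stdout has to be redirected away from the pipe, or the substitution would sit waiting for it to finish:
pid=$( sleep 30 >/dev/null 2>&1 & echo $! )   # sleep 30 stands in for the real command
kill "$pid"   # the current shell never tracked it, so no job-status message is printed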
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
Had success with adding 'jobs 2>&1 >/dev/null' to the script, not certain if it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
For example:
{ kill -9 $PID; } 2>/dev/null
