How do I write a bash script to restart a process if it exits gracefully? - bash

So I read this How do I write a bash script to restart a process if it dies? and quickly discovered that this was not the dilemma I'm facing.
I copied the below into /etc/init/ and it appears to be working.
description "Forever my process"
start on started mountall
stop on shutdown
respawn
respawn limit 5 5
script
export HOME="/root"
exec my_process >> /var/log/my-process.log 2>&1
end script

A simple way:
while sleep 1; do
echo "success"
done
Seems to work fine for me.
Replace sleep 1 with the command to start your process.
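For instance, using the my_process name from the question (a minimal sketch), the loop keeps restarting the process for as long as it exits with status 0, i.e. gracefully, and stops looping once it exits with an error:
while my_process; do
    echo "my_process exited gracefully, restarting"
done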
Edit: this is an answer to the question in the title; I'm not sure what /etc/init or the code you posted has to do with that question.

Given that this works, I'm posting this as an answer. If there is a better way to do it, please add.
description "Forever my process"
start on started mountall
stop on shutdown
respawn
# respawn limit 5 5 - use this if you want to limit it for any reason
script
export HOME="/root"
exec my_process >> /var/log/my-process.log 2>&1
end script
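Assuming the stanza above is saved as /etc/init/my-process.conf (the filename is my assumption; Upstart takes the job name from it), the job can then be managed with the standard Upstart commands, and the commented-out respawn limit 5 5 would stop respawning if the job died more than 5 times within 5 seconds:
sudo start my-process
sudo status my-process
sudo stop my-process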

Related

How to run Bash Script on startup and keep monitoring the results on the terminal

Due to some issues I won't elaborate on here (so as not to waste time), I made a bash script that pings Google every 10 minutes; if there is a response it keeps the loop running, and if not, the PC restarts. After a lot of hurdles I managed to write the script and make it start on bootup. However, the issue is that I want to see the results in a terminal, meaning I want to keep monitoring it, but the terminal does not open on bootup. It does open if I run the script manually as ./net.sh.
The script is running on startup, that much I know because I use another script to open an application and it works flawlessly.
My system information
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial
The contents of my net.sh bash script are
#! /bin/bash
xfce4-terminal &
sleep 30
while true
do
ping -c1 google.com
if [ $? -eq 0 ]; then
echo "Ping Successful. The Device will Continue Operating"
sleep 600
else
systemctl reboot
fi
done
I have put the scripts in /usr/bin and added them to /etc/rc.local so they start at boot.
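For reference, an /etc/rc.local entry like this would typically just launch the script in the background before the file's final exit 0 (a sketch, assuming the /usr/bin path above):
/usr/bin/net.sh &
exit 0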
So I did some further research and, with help from Reddit, realized that the reason I couldn't get it to show in a terminal was that the script was starting on bootup, whereas I needed it to start after user login. So I added the script to Startup Applications (which can be found by searching the start menu, if that's what it's called). It was still giving issues, so I divided the script into two parts.
I put net.sh in Startup Applications and had it launch my main script, which I named net_loop.sh.
This is how the net.sh script looks
#! /bin/bash
sleep 20
xfce4-terminal -e /usr/bin/net_loop.sh
And the net_loop.sh
#! /bin/bash
while true
do
ping -c1 google.com
if [ $? -eq 0 ]; then
echo "Ping Successful. The Device will Continue Operating"
sleep 600
else
systemctl reboot
fi
done
The result: the output of the net_loop.sh script now appears in another terminal.
Note: I used help from this thread
If a minute-level interval is usable, why not use cron to start your script?
$> crontab -e
or
$> sudo crontab -e
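If you go the cron route, the loop and the sleep 600 would come out of the script, since cron itself supplies the 10-minute interval. A sketch, assuming a single-check script at a hypothetical /usr/bin/net_check.sh:
# crontab entry: run the check every 10 minutes
*/10 * * * * /usr/bin/net_check.sh
with /usr/bin/net_check.sh reduced to a single check:
#!/bin/bash
# one ping attempt; cron provides the interval
if ! ping -c1 google.com > /dev/null 2>&1; then
    systemctl reboot
fi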

Why does this nested bash command with subshells hang? [duplicate]

I have a script (let's call it parent.sh) that makes 2 calls to a second script (child.sh) that runs a java process. The child.sh scripts are run in the background by placing an & at the end of the line in parent.sh. However, when I run parent.sh, I need to press Ctrl+C to return to the terminal screen. What is the reason for this? Is it something to do with the fact that the child.sh processes are running under the parent.sh process, so parent.sh doesn't die until the children do?
parent.sh
#!/bin/bash
child.sh param1a param2a &
child.sh param1b param2b &
exit 0
child.sh
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#email.com
As you can see, I don't want to run the java process in the background because I want to send a mail out when the process dies. Doing it as above works fine from a functional standpoint, but I would like to know how I can get it to return to the terminal after executing parent.sh.
What I ended up doing was to change parent.sh to the following
#!/bin/bash
child.sh param1a param2a > startup.log &
child.sh param1b param2b > startup2.log &
exit 0
I would not have come to this solution without your suggestions and root cause analysis of the issue. Thanks!
And apologies for my inaccurate comment. (There was no input, I answered from memory and I remembered incorrectly.)
The following link from the Linux Documentation Project suggests adding a wait after your mail command in child.sh:
http://tldp.org/LDP/abs/html/x9644.html
Summary of the above document
Within a script, running a command in the background with an ampersand (&)
may cause the script to hang until ENTER is hit. This seems to occur with
commands that write to stdout. It can be a major annoyance.
....
....
As Walter Brameld IV explains it:
As far as I can tell, such scripts don't actually hang. It just
seems that they do because the background command writes text to
the console after the prompt. The user gets the impression that
the prompt was never displayed. Here's the sequence of events:
Script launches background command.
Script exits.
Shell displays the prompt.
Background command continues running and writing text to the
console.
Background command finishes.
User doesn't see a prompt at the bottom of the output, thinks script
is hanging.
If you change child.sh to look like the following you shouldn't experience this annoyance:
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#gmail.com
wait
Or as @SebastianStigler states in a comment on your question above:
Add a > /dev/null at the end of the line with mail. mail will otherwise try to start its interactive mode.
This will cause the mail command to write to /dev/null rather than stdout which should also stop this annoyance.
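In other words, the mail line in child.sh would become something like this (the same command as above, just with the suggested redirect):
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user@email.com > /dev/null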
Hope this helps
The process was still linked to the controlling terminal because STDOUT needs somewhere to go. You solved that problem by redirecting to a file ( > startup.log ).
If you're not interested in the output, discard STDOUT completely ( >/dev/null ).
If you're not interested in errors, either, discard both ( &>/dev/null ).
If you want the processes to keep running even after you log out of your terminal, use nohup — that effectively disconnects them from what you are doing and leaves them to quietly run in the background until you reboot your machine (or otherwise kill them).
nohup child.sh param1a param2a &>/dev/null &

How to capture a process Id and also add a trigger when that process finishes in a bash script?

I am trying to make a bash script to start a jar file and do it in the background. For that reason I'm using nohup. Right now I can capture the pid of the java process but I also need to be able to execute a command when the process finishes.
This is how I started
nohup java -jar jarfile.jar & echo $! > conf/pid
I also know from this answer that using ; will make a command execute after the first one finishes.
nohup java -jar jarfile.jar; echo "done"
echo "done" is just an example. My problem now is that I don't know how to combine them both. If I run echo $! first then echo "done" executes immediately. While if echo "done" goes first then echo $! will capture the PID of echo "done" instead of the one of the jarfile.
I know that I could achieve the desire functionality by polling until I don't see the PID running anymore. But I would like to avoid that as much as possible.
You can use the bash builtin wait once you start the process with nohup:
nohup java -jar jarfile.jar &
pid=$! # Getting the process id of the last command executed
wait $pid # Waits until the process mentioned by the pid is complete
echo "Done, execute the new command"
I don't think you're going to get around "polling until you don't see the pid running anymore." wait is a bash builtin; it's what you want and I'm certain that's exactly what it does behind the scenes. But since Inian beat me to it, here's a friendly function for you anyway (in case you want to get a few things running in parallel).
alert_when_finished () {
declare cmd="${@}";
${cmd} &
declare pid="${!}";
while [[ -d "/proc/${pid}/" ]]; do :; done; #equivalent to wait
echo "[${pid}] Finished running: ${cmd}";
}
Running a command like this will give the desired effect and suppress unneeded job output:
( alert_when_finished 'sleep 5' & )

Stop script when gnome session ends

In Start Script when Gnome Starts Up it was asked how to automatically start a script on gnome login. But how to automatically stop a long running script on logout, that was started on login? In my case there are two processes when I login twice. Interestingly the process started first does not reside under gnome-session anymore.
I would wrap the binary that gets executed in a simple bash script that saves the pid of the started process in a temporary file. If this file already exists, the script skips starting the application. Since the file is saved in the /tmp directory, everything gets deleted once you restart your computer.
#!/bin/bash
binary="git-cola"
temp_file="/tmp/my_${binary}_instance.pid"
if [[ -f ${temp_file} ]]
then
echo "PID exists"
else
exec ${binary} &
echo $! > ${temp_file}
fi
With a little more effort you can check whether the pid of the process is still running and restart it on login again (for example if the process crashed or the other user closed it).
I actually don't use Gnome, so I can't tell you whether there is a more elegant way to kill the process, like a logout hook. But once you have the pid of the process saved, you can kill it with kill -9 PID (see man kill for gentler ways to end the process).
This might not be the solution for stopping the process, but it does prevent it from starting twice.
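For the stopping side, a minimal sketch of a companion logout script that reads the saved pid back and ends the process (assuming the same temp_file path as above; how you hook it into logout is up to your desktop environment):
#!/bin/bash
temp_file="/tmp/my_git-cola_instance.pid"
# only act if the pid file exists and the process is still alive
if [[ -f ${temp_file} ]] && kill -0 "$(cat ${temp_file})" 2>/dev/null
then
    kill "$(cat ${temp_file})"  # see man kill for other signals
    rm -f "${temp_file}"
fi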

