How to run a program in a Bash script for as long as another program runs in parallel? - bash

I have two programs server and client. server terminates after an unknown duration. I want to run client in parallel to server (both from the same Bash script) and terminate client automatically a few seconds after the server has terminated (on its own).
How can I achieve this?
I can run multiple programs in parallel from a bash script and timeout a command in Bash without unnecessary delay, but I don't know the execution duration of server beforehand, so I can't simply define a timeout for client. The script should continue running afterwards, so exiting the script to kill the child processes is not an option.
Edits
This question only addresses waiting for both processes to terminate naturally, not how to kill the client process once the server process has terminated.
@tripleee pointed to this question on Unix SE in the comments, which works especially well if the order of termination is irrelevant.
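The gist of that approach, as a minimal sketch (assuming bash 4.3+ for wait -n; server and client stand in for the real commands):
server & client &
wait -n                       # returns as soon as either job exits
sleep 5                       # grace period requested in the question
kill $(jobs -p) 2>/dev/null   # stop whatever is still running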

#!/bin/bash

execProgram(){
  case $1 in
    server)
      sleep 5 &   # <-- change "sleep 5" to your server command;
                  #     keep the "&" so it runs in the background
      SERVER_PID=$!
      echo "server started with pid $SERVER_PID"
      ;;
    client)
      sleep 18 &  # <-- change "sleep 18" to your client command;
                  #     keep the "&" so it runs in the background
      CLIENT_PID=$!
      echo "client started with pid $CLIENT_PID"
      ;;
  esac
}

waitForServer(){
  echo "waiting for server"
  wait $SERVER_PID
  echo "server prog is done"
}

terminateClient(){
  echo "killing client pid $CLIENT_PID after 5 seconds"
  sleep 5
  kill -15 $CLIENT_PID >/dev/null 2>&1
  wait $CLIENT_PID >/dev/null 2>&1
  echo "client terminated"
}

execProgram server && execProgram client
waitForServer && terminateClient

With GNU Parallel you can do:
server() {
  sleep 3
  echo exit
}
client() {
  # "forever" repeats its command; any long-running client loop works here
  forever echo client running
}
export -f server client

# Keep client running for 5000 ms after the first job succeeds,
# then send TERM, wait 1000 ms, then send KILL.
parallel -u --termseq 0,5000,TERM,1000,KILL,1 --halt now,success=1 ::: server client

Related

Keep Track of laravel websocket with monit centos

I'm trying to monitor laravel-websocket with monit instead of supervisord because of the extra options it provides.
So in my /home/rabter/laravelwebsocket.sh:
#!/bin/bash
case $1 in
  start)
    echo $$ > /var/run/laravelwebsocket.pid;
    exec 2>&1 php /home/rabter/core/artisan websockets:serve 1>/tmp/laravelwebsocket.out
    ;;
  stop)
    kill `cat /var/run/laravelwebsocket.pid` ;;
  *)
    echo "usage: laravelwebsocket.sh {start|stop}" ;;
esac
exit 0
And in /etc/monit.d I made a file named cwp.laravelwebsocket containing:
check process laravelwebsocket with pidfile /var/run/laravelwebsocket.pid
  start program "/bin/bash -c /home/rabter/laravelwebsocket.sh start"
  stop program "/bin/bash -c /home/rabter/laravelwebsocket.sh stop"
  if failed port 6001 then restart
  if 4 restarts within 8 cycles then timeout
Unfortunately, when I run monit, everything gets monitored except laravel websocket; it never starts, and in the monit status table I see:
Process - laravelwebsocket Execution failed | Does not exist
How can I make monit monitor and start laravel-websocket on startup, and restart it on failures, errors, or crashes?
I have looked into Monitor a Laravel Queue Worker with Monit, but no luck!
Your bash script writes its own PID into the pid file, not the PID of the php process. Additionally, the php process should be sent to the background when using monit, because monit is a monitoring tool rather than a supervisor.
#!/usr/bin/env bash
case $1 in
  start)
    # Background the php process and record *its* PID, not the script's.
    php /home/rabter/core/artisan websockets:serve >/tmp/laravelwebsocket.out 2>&1 &
    echo $! > /var/run/laravelwebsocket.pid
    ;;
  stop)
    kill $(cat /var/run/laravelwebsocket.pid) ;;
  *)
    echo "usage: $(basename $0) {start|stop}" ;;
esac
exit 0
Then make that file executable with chmod +x FILEPATH.
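For example, assuming the path used above:
chmod +x /home/rabter/laravelwebsocket.sh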
This should now work:
check process laravelwebsocket with pidfile /var/run/laravelwebsocket.pid
  start program "/home/rabter/laravelwebsocket.sh start"
  stop program "/home/rabter/laravelwebsocket.sh stop"
  if failed port 6001 then restart
  if 4 restarts within 8 cycles then timeout
Do you use monit as the init system for a container? If so, please let me know; a few more details apply in that case.

Run / Close Programs over and over again

Is there a way I can write a simple script to run a program, close that program about 5 seconds later, and then repeat?
I just want to be able to run a program that I wrote over and over again, but to do so I'd have to close it about 5 seconds after running it.
Thanks!
If your command is non-interactive (requires no user interaction):
Launch your program in the background with control operator &, which gives you access to its PID (process ID) via $!, by which you can kill the running program instance after sleeping for 5 seconds:
#!/bin/bash
# Start an infinite loop.
# Use ^C to abort.
while :; do
  # Launch the program in the background.
  /path/to/your/program &
  # Wait 5 seconds, then kill the program (if still alive).
  sleep 5 && { kill $! && wait $!; } 2>/dev/null
done
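If GNU coreutils is available, the timeout utility can do the killing for you; a minimal sketch (the program path is a placeholder):
#!/bin/bash
while :; do
  # Run each instance for at most 5 seconds, then send it SIGTERM.
  timeout 5 /path/to/your/program
done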
If your command is interactive:
More work is needed if your command must run in the foreground to allow user interaction: in that case it is the command that kills the program after 5 seconds that must run in the background:
#!/bin/bash
# Turn on job control, so we can bring a background job back to the
# foreground with `fg`.
set -m
# Start an infinite loop.
# CAVEAT: The only way to exit this loop is to kill the current shell.
#         Setting up an INT (^C) trap doesn't help.
while :; do
  # Launch program in background *initially*, so we can reliably
  # determine its PID.
  # Note: The command line sent to the background is invariably printed
  #       to stderr. I don't know how to suppress it (the usual tricks
  #       involving subshells and group commands do not work).
  /path/to/your/program &
  pid=$!  # Save the PID of the background job.
  # Launch the kill-after-5-seconds command in the background.
  # Note: A status message is invariably printed to stderr when the
  #       command is killed. I don't know how to suppress it (the usual
  #       tricks involving subshells and group commands do not work).
  { (sleep 5 && kill $pid &) } 2>/dev/null
  # Bring the program back to the foreground, where you can interact with
  # it. Execution blocks until the program terminates - whether by itself
  # or by the background kill command.
  fg
done
Check out the watch command. It lets you run a program repeatedly and monitor the output. You might have to get a little fancy if you need to kill that program manually after 5 seconds.
https://linux.die.net/man/1/watch
A simple example:
watch -n 5 foo.sh
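If foo.sh does not exit on its own, one option is to combine watch with the timeout utility from GNU coreutils, assuming it is installed (foo.sh remains a placeholder):
# Re-run every 6 seconds; each run is cut off after at most 5 seconds.
watch -n 6 timeout 5 ./foo.sh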
To literally answer your question:
Run 10 times with sleep 5:
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do
  # your script
  sleep 5
  let COUNTER=COUNTER+1
done
Run continuously:
#!/bin/bash
while [ 1 ]; do
  # your script
  sleep 5
done
If the program takes no input, you can simply do
#!/bin/bash
while [ 1 ]
do
  ./exec_name
  if [ $? == 0 ]
  then
    sleep 5
  fi
done

WAIT for "1 of many process" to finish

Is there any built-in feature in bash to wait for 1 out of many processes to finish, and then kill the remaining processes?
pids=""
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
# store PID of process
pids+=" $!"
done
if [ "one of them finished" ]; then
kill_rest_of_them;
fi
I'm looking for "one of them finished" command. Is there any?
bash 4.3 added a -n flag to the built-in wait command, which causes the script to wait for the next child to complete. The -p option to jobs also means you don't need to store the list of pids, as long as there aren't any background jobs that you don't want to wait on.
# Run five concurrent processes
for i in {1..5}; do
  ( longprocess ) &
done
wait -n
kill $(jobs -p)
Note that if there is another background job other than the 5 long processes that completes first, wait -n will exit when it completes. That would also mean you would still want to save the list of process ids to kill, rather than killing whatever jobs -p returns.
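A variant that stores the PIDs explicitly, so unrelated background jobs are left alone (a sketch; longprocess is a placeholder):
pids=()
for i in {1..5}; do
  ( longprocess ) &
  pids+=($!)
done
wait -n                        # returns when any child exits
kill "${pids[@]}" 2>/dev/null  # stop the rest; errors for the finished one are ignored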
It's actually fairly easy:
#!/bin/bash
set -o monitor
killAll()
{
  # code to kill all child processes goes here
  : # placeholder command so the otherwise-empty function parses
}
# call function to kill all children on SIGCHLD from the first one
trap killAll SIGCHLD
# start your child processes here
# now wait for them to finish
wait
You just have to be really careful in your script to use only bash built-in commands. You can't start any utilities that run as a separate process after you issue the trap command - any child process exiting will send SIGCHLD - and you can't tell where it came from.
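As a minimal sketch, assuming the children are ordinary background jobs of this shell, killAll could be:
killAll()
{
  # jobs -p lists the PIDs of this shell's background jobs
  kill $(jobs -p) 2>/dev/null
}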

find pid of ftp transfer in bash script

I would like to get the PID of an ftp transfer inside a bash script, running on Solaris.
This is my script:
#!/usr/bin/bash
...
ftp -inv $FTPDEST <<EOF
user $USER $PASS
put $file
EOF
I would like to get the pid of the ftp command so that I can check afterwards whether it is hung, and kill it.
I had a server crash because there were about 200 ftp processes open after the connection was cut; for some reason the ftp processes remained open.
thank you
Mario
This is what you seem to describe, but it may not really be what you need. And it is a kind of hack...
#!/usr/bin/bash
trap 'exit 0' SIGUSR1 # this is the normal successful exit point

# Run ftp in a background subshell; if it completes before the 10-second
# timeout, it signals the parent, which exits with success via the trap.
(ftp -inv $FTPDEST <<-EOF 2>>logfile
user $USER $PASS
put $file
EOF
kill -s SIGUSR1 $$ ) &  # $$ still expands to the parent script's pid
                        # inside the subshell, so this signals the trap above

childpid=$!    # get the pid of the child running in background
sleep 10       # let it run 10 seconds
kill $childpid # kill off the ftp command; we hope the trap fired first
wait
exit 1 # error exit: ftp got hung up
The parent waits 10 seconds while the ftp child runs; completing in under 10 seconds means a successful exit.
Success means the child sends a SIGUSR1 signal to the parent, which then exits via the trap.
If the child takes too long, the parent kills off the slow ftp child and exits with an error.
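If a timeout utility is available (GNU coreutils ships one; stock Solaris may not), the same effect needs much less plumbing; a sketch:
# GNU timeout exits with status 124 if ftp had to be killed
timeout 10 ftp -inv $FTPDEST <<EOF
user $USER $PASS
put $file
EOF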

Kill process in bash that runs more than specified time?

I have a shutdown script for Oracle in /etc/init.d dir
on "stop" command it does:
su oracle -c "lsnrctl stop >/dev/null"
su oracle -c "sqlplus sys/passwd as sysdba #/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
..
The problem is when lsnrctl or sqlplus are unresponsive - in that case the "stop" script never ends and the server can't shut down. The only way out is to kill -9 it.
I'd like to rewrite the script so that a command is terminated if it has not finished after 5 minutes (for example).
How can I achieve this? Could you give me an example?
I'm on Linux RHEL 5.1 with bash.
If able to use 3rd-party tools, I'd leverage one of the 3rd-party, pre-written helpers you can call from your script (doalarm and timeout are both mentioned by the BashFAQ entry on the subject).
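For example, with a timeout helper on the PATH (a sketch; option syntax differs between implementations):
timeout 300 su oracle -c "lsnrctl stop >/dev/null"
timeout 300 su oracle -c "sqlplus sys/passwd as sysdba @/usr/local/PLATEX/scripts/orastop.sql >/dev/null"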
If writing such a thing myself without using such tools, I'd probably do something like the following:
function try_proper_shutdown() {
  su oracle -c "lsnrctl stop >/dev/null"
  su oracle -c "sqlplus sys/passwd as sysdba @/usr/local/PLATEX/scripts/orastop.sql >/dev/null"
}
function resort_to_harsh_shutdown() {
  for progname in ora_this ora_that ; do
    killall -9 $progname
  done
  # also need to do a bunch of cleanup with ipcs/ipcrm here
}
# here's where we start the proper shutdown approach in the background
try_proper_shutdown &
child_pid=$!
# rather than keeping a counter, we check against the actual clock each cycle.
# this prevents the script from running too long if it gets delayed somewhere
# other than sleep (or if the sleep commands don't actually sleep only the
# requested time -- they don't guarantee that they will).
end_time=$(( $(date '+%s') + (60 * 5) ))
while (( $(date '+%s') < end_time )); do
  # kill -0 just probes whether the process is still alive; if it is gone,
  # the graceful shutdown finished and we can exit successfully.
  if ! kill -0 $child_pid 2>/dev/null; then
    exit 0
  fi
  sleep 1
done
# okay, we timed out; stop the background process that's trying to shut down
# nicely (note that alone, this won't necessarily kill its children, just the
# subshell we forked off) and then make things happen.
kill $child_pid
resort_to_harsh_shutdown
Wow, that's a complex solution. Here's something easier: track the PID and kill it later.
mycommand &   # "mycommand" is the command you want to run; "&" backgrounds it
PID=$!        # PID of the last backgrounded command
# Sleep for 120 seconds, then shut down properly; if that fails, kill the
# process manually. This line can be backgrounded too.
sleep 120 && doProperShutdown || kill $PID
