How to track a process forked by a bash script? - bash

Not a pure bash question; it requires combined knowledge of bash and the kubernetes CLI to answer fully. I want to do some port forwarding with kubernetes alongside another job (like Telepresence), and my bash script does something like this:
# Kill all kubectl port-forwards that might remain from a previous launch.
kill $(pidof kubectl)
# Run kubectl port-forward to tunnel port 2828 to my pod on k8s.
kubectl port-forward deployment/my 2828:2828 -n myns &
# Wait for the proxy to establish.
sleep 10
This script has downsides:
- it kills all kubectl processes, not only mine
- it does not kill kubectl at the end of the script (could it handle Ctrl+C and "join" the process gracefully instead of killing it?)
- it may sleep longer than needed (can I detect when the kubectl tunnel is established so the script can continue immediately?)
- kubectl errors are not handled (the script should exit if one occurs)
How could I solve these drawbacks?

You should track the pid of the created kubectl process and possibly store it in a "lock" file.
Something like: pid=$! and echo $! > lockfile
Then at the beginning of the script you can check that lockfile and kill the process:
pid=$(<lockfile)
kill "$pid"
kubectl port-forward deployment/my 2828:2828 -n myns &
pid=$!
echo "$pid" > lockfile
This way you can also check whether the process is already running or has stopped:
pid=$(<lockfile)
if ps -p "$pid" >/dev/null 2>&1
then
    echo "Already running, no need to restart"
    exit 0
fi
kubectl port-forward deployment/my 2828:2828 -n myns &
pid=$!
echo "$pid" > lockfile
This will however not work if kubectl forks and stops the parent process.
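To address the other points as well (wait for readiness instead of a fixed sleep, clean up on exit or Ctrl+C, and fail fast on kubectl errors), here is a minimal sketch; the 30-second timeout and the /dev/tcp port probe are my assumptions, not part of the original script:

#!/usr/bin/env bash

# Kill a leftover port-forward from a previous run, if any.
if [ -f lockfile ]; then
    kill "$(<lockfile)" 2>/dev/null
    rm -f lockfile
fi

kubectl port-forward deployment/my 2828:2828 -n myns &
pid=$!
echo "$pid" > lockfile

# Tear the tunnel down when the script exits, including on Ctrl+C.
trap 'kill "$pid" 2>/dev/null; rm -f lockfile' EXIT

# Poll until the local port accepts connections instead of sleeping blindly.
for _ in $(seq 1 30); do
    # Fail fast if kubectl already died (bad deployment name, no kubeconfig, ...).
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "kubectl port-forward failed" >&2
        exit 1
    fi
    # bash-only: opening /dev/tcp succeeds once something listens on the port.
    if (exec 3<>/dev/tcp/127.0.0.1/2828) 2>/dev/null; then
        break
    fi
    sleep 1
done

# ... do the actual work here; the EXIT trap cleans up afterwards.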

Related

Terminate ros2 nodes in bash

I currently have a bash script which launches my ros2 nodes. This works perfectly fine. I now tried to start the script as a background task and write the output into a file. When doing so, I am unable to terminate the nodes at a later point. Previously I terminated them by hitting Ctrl + C in the terminal and all nodes stopped. I tried to do the same by saving the PID when starting the script and then killing it afterwards, however the nodes keep running.
Is there any possibility to stop all nodes started by the script? Stopping all ros2 nodes is not possible because I launch multiple in parallel.
Start
"./${SCRIPTFILE}" > $LOGFILE &
echo $! > $PIDFILE
Stop
kill -TERM $(cat $PIDFILE) 2> /dev/null
When you use kill -TERM $(cat $PIDFILE) 2> /dev/null it sends a SIGTERM, which is the more cautious way to ask for a shutdown without endangering the integrity of open DBs or files, but it can be blocked or handled otherwise. If you want to kill a process regardless of its state then use:
kill -9 $(cat $PIDFILE) 2> /dev/null
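Note that killing the script's PID alone does not stop child processes that survive their parent, which is likely why the nodes keep running. A sketch of one common workaround, assuming the nodes are started as children of the script (the setsid utility from util-linux is my assumption, not part of the question):
Start
# setsid puts the script in a new process group whose id equals its pid.
setsid "./${SCRIPTFILE}" > "$LOGFILE" 2>&1 &
echo $! > "$PIDFILE"
Stop
# A negative pid signals the whole process group, so the child ros2 nodes
# receive the SIGTERM as well.
kill -TERM -- -"$(cat "$PIDFILE")" 2> /dev/null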

How do I kill background processes / jobs started by a bash script after it finishes executing?

So I want to start a docker image, then a Django back-end and finally an Angular front-end, let them run as long as I need to do tests/develop, and then kill them when I'm done. To do this I first tried starting them all in a script and having them run in the background, with a second script doing kill %n for both processes. This doesn't work because the background processes are in another context, so the second script cannot reference them.
Then I tried this:
#!/bin/bash
# Exit Angular, Django and kill docker_img
function clean_up()
{
    echo "Exiting..."
    kill %2
    kill %1
    docker stop docker_img
    reset
    exit
}
# Trigger cleanup on CTRL + C
trap clean_up SIGINT
# Start docker database
docker start docker_img
# Start django backend
cd ~/Projects/DjangoBackend
source venv/bin/activate
python src/manage.py runserver &
sleep 3
echo 'Done starting django, starting angular'
sleep 1
# Start angular front end
cd ~/Projects/AngularFront
npm start &
However, after npm start & runs, the trap stops working, so it effectively becomes useless. I'm guessing it could be because once my script is done running the trap is no longer active, but I don't know how to fix this. What can I do?
If you are looking to kill a process in unix/linux, one way of doing it is to record its PID in a file using the ps -ef command, and then use kill -9 to kill the process.
Example:
$ ps -ef | grep <process_name> | grep -v grep | awk '{print $2}' > pid.txt
$ kill -9 `cat pid.txt`
ps -ef lists all running processes; using grep with the process name you get the PID of that particular process (the extra grep -v grep filters out the grep command itself)
awk is used to extract only the PID from the pipeline above
kill -9 will forcefully kill the process
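As an aside, pgrep/pkill (where available) do the matching and the killing in one step and avoid matching the grep command itself:
$ pkill -9 -f <process_name>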
The answer turned out to be pretty easy: all I had to do was add wait to the end of the script, which makes the script wait until the processes are done executing. Since two of the processes are servers, they don't stop unless prompted, so the script just waits until SIGINT is received, at which point it runs the clean_up function and exits gracefully.
Additionally, one could use the same trap but with the EXIT trigger instead of SIGINT to clean up when the script exits on its own due to the processes closing.
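Putting the self-answer together, a condensed sketch: the script from the question unchanged, with wait appended so the shell stays alive until SIGINT triggers clean_up:

#!/bin/bash
# Exit Angular, Django and kill docker_img
function clean_up()
{
    echo "Exiting..."
    kill %2
    kill %1
    docker stop docker_img
    reset
    exit
}
# Trigger cleanup on CTRL + C
trap clean_up SIGINT

docker start docker_img
# Start django backend
cd ~/Projects/DjangoBackend
source venv/bin/activate
python src/manage.py runserver &
# Start angular front end
cd ~/Projects/AngularFront
npm start &

# Keep the script alive; wait returns when a trapped signal arrives,
# at which point clean_up kills both jobs and stops the container.
wait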

Keep Track of laravel websocket with monit centos

I'm trying to monitor laravel-websocket with monit instead of supervisord because of the extra options monit provides.
So in my /home/rabter/laravelwebsocket.sh I have:
#!/bin/bash
case $1 in
    start)
        echo $$ > /var/run/laravelwebsocket.pid;
        exec 2>&1 php /home/rabter/core/artisan websockets:serve 1>/tmp/laravelwebsocket.out
        ;;
    stop)
        kill `cat /var/run/laravelwebsocket.pid` ;;
    *)
        echo "usage: laravelwebsocket.sh {start|stop}" ;;
esac
exit 0
And in /etc/monit.d I made a file named cwp.laravelwebsocket containing:
check process laravelwebsocket with pidfile /var/run/laravelwebsocket.pid
start program "/bin/bash -c /home/rabter/laravelwebsocket.sh start"
stop program "/bin/bash -c /home/rabter/laravelwebsocket.sh stop"
if failed port 6001 then restart
if 4 restarts within 8 cycles then timeout
Unfortunately, when I run monit everything gets monitored except laravel websocket: it never starts, and in the monit status table I see
Process - laravelwebsocket Execution failed | Does not exist
How can I make monit monitor laravel-websocket, starting it on startup and restarting it on failures, errors, or crashes?
I have looked into Monitor a Laravel Queue Worker with Monit
but no luck!
Your bash script inserts its own pid into your pid file. Additionally, the php process should be sent to the background when using monit, because monit is a monitoring tool rather than a supervisor.
#!/usr/bin/env bash
case $1 in
    start)
        php /home/rabter/core/artisan websockets:serve >/tmp/laravelwebsocket.out 2>&1 &
        echo $! > /var/run/laravelwebsocket.pid
        ;;
    stop)
        kill $(cat /var/run/laravelwebsocket.pid) ;;
    *)
        echo "usage: $(basename $0) {start|stop}" ;;
esac
exit 0
Then make that file executable with chmod +x FILEPATH.
This should now work:
check process laravelwebsocket with pidfile /var/run/laravelwebsocket.pid
start program "/home/rabter/laravelwebsocket.sh start"
stop program "/home/rabter/laravelwebsocket.sh stop"
if failed port 6001 then restart
if 4 restarts within 8 cycles then timeout
Do you use monit as init-system for a container? If so, please let me know. Then a few more details apply.

Why use nginx with "daemon off" in background with docker?

It all started with this article about setting up nginx and certbot in docker. At the end of the manual, the author set up automatic certificate renewal for nginx with this command:
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
I'm not the only one who didn't understand this part, so there was a question on SO: Why do sleep & wait in bash?
The answer explained that the original command was not perfect, and here is the corrected version:
/bin/sh -c 'nginx -g \"daemon off;\" & trap exit TERM; while :; do sleep 6h & wait $${!}; nginx -s reload; done'
But in this command I see nginx -g \"daemon off;\" &
Why do we first put nginx in the foreground and then stuff it into the background? What are the implications, and why not just launch nginx in the background in the first place?
Another question: as I understand it, the while loop stays in the foreground for docker, unlike in the original command. But if nginx is in the background, does it mean that if it dies, docker does not care? As long as the foreground while loop is still running, there is no problem.
And the last question: why in these commands do we sometimes see $${!} and sometimes ${!}? Example of ${!} from the same SO question:
docker run --name test --rm --entrypoint="/bin/sh" nginx -c 'nginx -g "daemon off;" & trap exit TERM; while :; do sleep 20 & wait ${!}; echo running; done'
I know it's character escaping, but I can't figure out the rules for this case.
But in this command I see nginx -g \"daemon off;\" & Why do we first put nginx in the foreground and then stuff it into the background? What are the implications, and why not just launch nginx in the background in the first place?
The reason was mainly to highlight the differences and there are no implications. The command is equivalent to:
"/bin/sh -c 'nginx; trap exit TERM; while :; do sleep 6h & wait $${!}; nginx -s reload; done'
Another question: as I understand it, the while loop stays in the foreground for docker, unlike in the original command. But if nginx is in the background, does it mean that if it dies, docker does not care? As long as the foreground while loop is still running, there is no problem.
The command basically creates three processes: the shell process (/bin/sh), sleep 6H and the nginx server. A fourth process (nginx -s reload) is forked every 6 hours.
Docker always monitors the process with PID 1, which in this case is the shell (/bin/sh). If the shell dies, the container exits. If the nginx server, which is a child of the shell process, dies, docker indeed doesn't care.
The "corrected" version doesn't address these issues. It has the same problems as the original one. The answer to the SO question only highlights that the sleep and wait is not needed unless you want to handle signals in a timely manner. It means that:
"/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx ..."
does exactly the same thing as:
"/bin/sh -c 'while :; do sleep 6h; nginx ..."
In conclusion, a proper implementation would have nginx as the main process (PID 1) and another process running in background waking up every 6h to signal the server to reload the configuration. Neither the original, nor the corrected command implement all this properly.
To fix the aforementioned problems the command should be like this:
'while :; do sleep 6h; nginx -s reload; done & exec nginx -g "daemon off;"'
The exec system call replaces the content of the shell process with the nginx server making nginx the main process in foreground.
All the signals are now propagated correctly to the server (see also Controlling nginx).
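To see the effect of exec, a quick check you can run yourself (the container name nginx-test is my own choice, not from the article):
docker run -d --name nginx-test nginx /bin/sh -c 'while :; do sleep 6h; nginx -s reload; done & exec nginx -g "daemon off;"'
# Because of exec, PID 1 inside the container is nginx itself, not sh:
docker exec nginx-test cat /proc/1/comm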
Note: This solution still has a flaw. The shell process (the while loop) is not monitored. If for any reason this process exits, the only thing docker does is send an alert.
Hope this sheds some light.
Answer to my last question regarding ${!} and $${!}:
Apparently, if we write a command in the docker-compose file with a single dollar sign (${!}), docker-compose itself expands it, i.e. to the pid of the last background command of the shell that launched docker-compose. So the entrypoint in the container will look like this:
/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait someUnknownLastPID; done;'
With $${!} the dollar sign escapes docker-compose's processing, and the entrypoint in the container will be something like:
/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait ${!}; done;'
Source: https://stackoverflow.com/a/40621373/11931043

How do I write a watchdog daemon in bash?

I want a way to write a daemon in a shell script, which runs another application in a loop, restarting it if it dies.
- When run using ./myscript.sh from an SSH session, it shall launch a new instance of the daemon, except if the daemon is already running.
- When the SSH session ends, the daemon shall persist.
- There shall be a parameter (./myscript -stop) that kills any existing daemon.
(Notes on edit - The original question specified that nohup and similar tools may not be used. This artificial requirement was an "XY question", and the accepted answer in fact uses all the tools the OP claimed were not possible to use.)
Based on clarifications in comments, what you actually want is a daemon process that keeps a child running, relaunching it whenever it exits. You want a way to type "./myscript.sh" in an ssh session and have the daemon started.
#!/usr/bin/env bash
PIDFILE=~/.mydaemon.pid

if [ x"$1" = x-daemon ]; then
    if test -f "$PIDFILE"; then exit; fi
    echo $$ > "$PIDFILE"
    # On SIGTERM: remove the pid file, stop the child, and exit explicitly
    # (without the kill/exit the loop would just relaunch the app).
    trap 'rm -f "$PIDFILE"; kill "$child" 2>/dev/null; exit' EXIT SIGTERM
    while true; do
        # launch your app here
        /usr/bin/server-or-whatever &
        child=$!
        wait # needed for the trap to work
    done
elif [ x"$1" = x-stop ]; then
    kill "$(cat "$PIDFILE")"
else
    nohup "$0" -daemon & # background it so the ssh session is free again
fi
Run the script: it will launch the daemon process for you with nohup. The daemon process is a loop that watches for the child to exit, and relaunches it when it does.
To control the daemon, there's a -stop argument the script can take that will kill the daemon. Look at examples in your system's init scripts for more complete examples with better error checking.
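For example:
$ ./myscript.sh          # launch the daemon via nohup; it survives the ssh session
$ ./myscript.sh -stop    # signal the daemon to clean up and exit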
The pid of the most recently "backgrounded" process is stored in $!
$ cat &
[1] 7057
$ echo $!
7057
I am unaware of a fork command in bash. Are you sure bash is the right tool for this job?
