Handling dependent processes in Bash - macos

I have a long-running command (sidekiq, if you must know) that depends on another long-running process (redis-server, as you may have guessed from the previous parenthetical).
I'd like to write a Bash (well, okay, Zsh actually) alias to start redis-server in the background, then run sidekiq and, when I use ctrl-C to interrupt sidekiq, to kill the background Redis job. If it's relevant, I'm on a Mac and only need to support OS X.
So what I'm looking for is something like:
redis-server & ; sidekiq ; kill $!
Unfortunately, my interrupt of the sidekiq command also prevents the kill from occurring. Is there any way to do this?
Bonus points if this can be a one-liner alias and not a function. Double bonus points if I don't have to write to any files in advance (like turning on the daemonize flag in /usr/local/etc/redis.conf).

Maybe this:
#!/bin/zsh
redis-server &              # start Redis in the background
redispid=$!                 # remember its PID
trap 'kill $redispid' INT   # on ctrl-C, stop Redis before the script exits
sidekiq
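For the bonus points, roughly the same logic fits in a one-liner alias. A sketch (the alias name is made up, and exact ctrl-C behavior around process groups can vary, so treat it as a starting point rather than a guarantee):
alias devstack='(redis-server & trap "kill $!" INT; sidekiq; kill $! 2>/dev/null)'
The parentheses run everything in a subshell so the INT trap doesn't leak into your interactive shell, and the final kill also cleans up Redis when sidekiq exits normally.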


Terminating multiple background processes in bash?

I'm trying to dump trade data off Binance for multiple symbol pairs, e.g. doge/btc, ada/btc, etc.
I can background them, thus:
wscat -c wss://stream.binance.com:9443/ws/dogebtc#trade > doge.txt &
wscat -c wss://stream.binance.com:9443/ws/adabtc#trade > ada.txt &
But how to terminate them all?
Is there some smart way, like terminating the parent process?
I think the right answer depends a lot on the way your current system is implemented / used.
At the most basic scripting level, you could simply run kill against all wscat processes; but that may be too generic depending on the details.
Slightly better: in a Bash script, directly after creating each of these processes, you have access to its PID as $!. You could stash those PIDs in a variable or file and later use them to kill each individual process, as in the sketch below.
If you're aiming for something slicker than that, you'd likely want to look into things like the SIGCHLD signal, becoming a subreaper (prctl PR_SET_CHILD_SUBREAPER), or running as PID 1 in a PID namespace (unshare --pid ...).
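A minimal sketch of the PID-stashing approach (stream URLs taken from the question):
#!/bin/bash
pids=()   # one entry per background wscat
wscat -c wss://stream.binance.com:9443/ws/dogebtc#trade > doge.txt & pids+=($!)
wscat -c wss://stream.binance.com:9443/ws/adabtc#trade > ada.txt & pids+=($!)
trap 'kill "${pids[@]}" 2>/dev/null' INT TERM   # tear them all down together
wait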

Terminal - Close all terminal windows/processes

I have a couple of CLI-based scripts that run for some time.
I'd like another script to 'restart' those other scripts.
I've checked SO for answers, but the scenarios were not applicable enough to mine, as I'm trying to end Terminal processes using Terminal.
Process:
Two CLI-based scripts are running (node, python, etc.).
A third script runs and decides whether or not to restart the other two.
It can't quit Terminal, but has to end the current processes.
The third script then runs an executable that restarts everything.
Currently none of the terminal windows are named, and from reading the other posts, I can see that it may be helpful to do so.
I can mostly set this up, I just could not find a command that would end all other terminal processes and close them.
There are a couple of ways to do this. The most common is having a pidfile. This file contains the process ID (pid) of the job you want to kill later on. A simple way to create the pidfile is:
$ node server &
$ echo $! > /tmp/node.pidfile
$! contains the pid of the process that was most recently backgrounded.
Then later on, you kill it like so:
$ kill $(cat /tmp/node.pidfile)
You would do similar for the python script.
The other less robust way is to do a killall for each process and assume you are not running similar node or python jobs.
Refer to
What is a .pid file and what does it contain? if you're not familiar with this.
The question's headline is quite general, and so is my reply:
killall bash
or generically
killall processName
e.g. killall chrome

Bash – How should I idle until I get a signal?

I have a script for launchd to run that starts a server, then tells it to exit gracefully when launchd kills it off (which should be at shutdown). My question: what is the appropriate, idiomatic way to tell the script to idle until it gets the signal? Should I just use a while-true-sleep-1 loop, or is there a better way to do this?
#!/bin/bash
cd "`dirname "$0"`"
trap "./serverctl stop" TERM
./serverctl start
# wait to receive TERM signal.
You can simply use "sleep infinity". If you want to perform more actions on shutdown and don't want to create a function for that, an alternative could be:
#!/bin/bash
sleep infinity & PID=$!
trap "kill $PID" INT TERM
echo starting
# commands to start your services go here
wait
# commands to shutdown your services go here
echo exited
Another alternative to "sleep infinity" (busybox, for example, doesn't seem to support it) could be "tail -fn0 $0", which follows the script file itself and so blocks indefinitely.
A plain wait would be significantly less resource-intensive than a spin lock, even with a sleep in it.
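A minimal sketch of that wait-based idea, assuming serverctl start can run in the foreground rather than daemonizing:
#!/bin/bash
cd "$(dirname "$0")"
trap './serverctl stop' TERM   # launchd sends TERM at shutdown
./serverctl start &            # keep the server as a child of this script
wait $!                        # blocks until the child exits or a signal arrives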
Why do you want to keep your script running at all? If you don't do anything after the signal arrives, I see no reason to keep it alive.
When you get TERM at shutdown, your serverctl script and the server executable (if there is one) also get TERM at the same time.
To do this by design, you would install your serverctl script as an rc script and let init start and stop it; elsewhere I have described how to set up a server process that was not originally designed to work as a server.

Are forked processes (bash) subject to server timeout disconnection?

If I am working on a remote server (ssh) and I fork a process using bash & operator, will that process be killed if I am booted off the server due to server time-out? I'm pretty sure the answer is yes, but would love to know if there are any juicy details.
It might depend, but generally when you log out from your "connection program" (e.g. ssh in your case, though it could be rlogin or telnet as well), the shell and its children (I think?) receive a SIGHUP (hangup) signal, which makes them terminate when you log out. There are two common ways to avoid this: running the program you want to keep alive through nohup, or inside screen. If the server has some other time limit on running processes, you will have to look into that separately.
bash will send a HUP signal to all background jobs. You can stop this from happening by starting the job with nohup (which has a man page). If it's too late for nohup, you can use disown to stop the shell from sending a HUP to a job. disown is a builtin, so help disown will tell you everything you need to know.
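A quick sketch of both approaches (the job name is a placeholder):
# hangup-proof from the start:
nohup some-long-job > out.log 2>&1 &
# for a job already running in the background as job %1:
disown -h %1
disown -h keeps the job in the shell's job table but marks it so no HUP is sent at logout; plain disown removes it from the table entirely.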

bash restart sub-process using trap SIGCHLD?

I've seen monitoring programs either in scripts that check process status using 'ps' or 'service status(on Linux)' periodically, or in C/C++ that forks and wait on the process...
I wonder if it is possible to use bash with trap and restart the sub-process when SIGCLD received?
I tested a basic script on RedHat Linux with the following idea (and it certainly didn't work...):
#!/bin/bash
set -o monitor # can someone explain this? discussions on the Internet say this is needed
trap startProcess SIGCHLD

startProcess() {
    /path/to/another/bash/script.sh &   # the one to restart
    while [ 1 ]
    do
        sleep 60
    done
}

startProcess
The bash script being started just sleeps for a few seconds and exits, for now.
Several issues observed:
When the shell starts in the foreground, SIGCHLD is handled only once. Does trap reset the signal handling, like signal()?
The script and its child seem to be immune to SIGINT, which means they cannot be stopped by ^C.
Since it could not be stopped, I closed the terminal. The script seems to have been HUPed, and many zombie children were left behind.
When run in the background, the script caused the terminal to die.
... anyway, this does not work at all. I have to say I know too little about this topic.
Can someone suggest or give some working examples?
Are there scripts for such use?
How about using wait in bash, then?
Thanks
I can try to answer some of your questions, but not all, based on what I know.
The line set -o monitor (or equivalently, set -m) turns on job control, which is only on by default for interactive shells. This seems to be required for SIGCHLD to be sent. However, job control is more of an interactive feature and not really meant to be used in shell scripts (see also this question).
Also keep in mind this is probably not what you intended to do, because once you enable job control, SIGCHLD will be sent for every external command that exits (e.g. every time you run ls or grep or anything, a SIGCHLD will fire when that command completes and your trap will run).
I suspect the reason the SIGCHLD trap only appears to run once is that your trap handler contains a foreground infinite loop, so your script gets stuck inside the trap handler. There doesn't seem to be a point to that loop anyway, so you could simply remove it.
The script's "immunity" to SIGINT seems to be an effect of enabling job control (the monitor part). My hunch is that with job control turned on, the sub-instance of bash that runs your script no longer terminates itself in response to a SIGINT, but instead passes the SIGINT through to its foreground child process. In your script, ^C i.e. SIGINT then simply acts like a continue statement in other programming languages: SIGINT just kills the currently running sleep 60, whereupon the while loop immediately starts a new sleep 60.
When I tried running your script and then killing it (from another terminal), all I ended up with were two stray sleep processes. Backgrounding the script also kills my shell for me, although the behavior is not terribly consistent (sometimes it happens immediately, other times not at all). It seems typing any key other than Enter causes an EOF to get sent somehow. Even after the terminal exits, the script continues to run in the background. I have no idea what is going on here.
Being more specific about what you want to accomplish would help. If you just want a command to run continuously for the lifetime of your script, you could run an infinite loop in the background, like
while true; do
    some-command
    echo some-command finished
    echo restarting some-command ...
done &
Note the & after the done.
For other tasks, wait is probably a better idea than using job control in a shell script. Again, it would depend on what exactly you are trying to do.
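For instance, if the parent needs the child's PID or exit status, a wait-based restart loop (a sketch; the script path is taken from the question) avoids job control and SIGCHLD traps entirely:
#!/bin/bash
while true; do
    /path/to/another/bash/script.sh &   # the child to babysit
    wait $!                             # block until it exits
    echo "child exited with status $?, restarting..."
done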
