php-resque: How to pause and stop a worker from a PHP script

PHP-Resque workers can be started from a script with something like
passthru("nohup php " . __RESQUE_BIN__ . " >> " . __RESQUE_LOG__ . " 2>&1 &");
But how do I pause them, or stop them, from a PHP script?

Check the README; you can send signals to worker processes to do what you ask:
QUIT - Wait for child to finish processing then exit
TERM / INT - Immediately kill child then exit
USR1 - Immediately kill child but don't exit
USR2 - Pause worker, no new jobs will be processed
CONT - Resume worker.
You will need the PID of the worker; you can send the signal with posix_kill.
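For example, a minimal sketch reusing the question's __RESQUE_BIN__ / __RESQUE_LOG__ constants (posix_kill() needs the posix extension and the SIG* constants come from pcntl; the trailing "& echo $!" is a shell trick that prints the background worker's PID so we can capture it):
<?php
// Start the worker and capture its PID; "$!" is not interpolated by PHP,
// so the shell sees it and echoes the PID of the backgrounded command.
$pid = (int) shell_exec(
    "nohup php " . __RESQUE_BIN__ . " >> " . __RESQUE_LOG__ . " 2>&1 & echo $!"
);

posix_kill($pid, SIGUSR2); // pause: no new jobs will be processed
posix_kill($pid, SIGCONT); // resume the worker
posix_kill($pid, SIGQUIT); // graceful stop: finish the current job, then exit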

Related

Why doesn't a bash script wait for its child processes to finish before exiting the parent script on receiving SIGTERM?

exit_gracefully() {
    echo "start.sh got SIGTERM"
    echo "Sending TERM to child_process_1_pid: ${child_process_1_pid}"
    echo "Sending TERM to child_process_2_pid: ${child_process_2_pid}"
    echo "Sending TERM to child_process_3_pid: ${child_process_3_pid}"
    kill -TERM ${child_process_1_pid} ${child_process_2_pid} ${child_process_3_pid}
}
trap exit_gracefully TERM
consul watch -http-addr=${hostIP}:8500 -type=key -key=${consul_kv_key} /child_process_1.sh 2>&1 &
child_process_1_pid=$!
/child_process_2.sh &
child_process_2_pid=$!
/child_process_3.sh &
child_process_3_pid=$!
/healthcheck.sh &
/configure.sh
# sleep 36500d &
# wait $!
wait ${child_process_1_pid} ${child_process_2_pid} ${child_process_3_pid}
echo 'start.sh exiting'
start.sh is the parent script. When SIGTERM is trapped, it is forwarded to three of its child processes. With the `sleep 36500d &` / `wait $!` lines commented out, start.sh does not wait for child_process_1.sh, child_process_2.sh and child_process_3.sh to receive SIGTERM, handle it and exit before it exits itself; instead, start.sh exits immediately on receiving SIGTERM, before the child processes can handle it. But if I keep `sleep 36500d &` and `wait $!` uncommented, the parent process waits for child processes 1, 2 and 3 to receive and handle SIGTERM and exit before exiting itself.
Why does this difference exist even though I wait on the three child PIDs in either case? Why should I need sleep when I am already waiting on three PIDs?
Receiving a signal causes any wait command in progress to return immediately; the whole purpose of a signal is to interrupt a process in whatever it is currently doing.
All the effects you see are simply the result of the current wait returning, the handler running, and the script continuing from where the wait exited. Without the sleep, the interrupted wait is the last command, so the script falls through to the final echo and exits before the children have handled TERM. With the sleep uncommented, the interrupted command is `wait $!` on the sleep; after the handler runs, execution continues to the next wait on the three child PIDs, and that second wait does not return until they have exited.
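You can see this in a self-contained demo (the timings are arbitrary; the subshell signals the script itself via $$ after one second):
#!/bin/bash
# Demo: a trapped signal makes an in-progress `wait` return early.
trap 'echo "handler: got TERM"' TERM

sleep 5 &
child=$!
( sleep 1; kill -TERM $$ ) &    # send ourselves SIGTERM after one second

wait "$child"
echo "first wait returned: $?"  # 143 (128+15): interrupted, child still running
wait "$child"
echo "second wait returned: $?" # 0: the child has now actually exited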

Wait for one of many processes to finish

Is there any built-in feature in bash to wait for one out of many processes to finish, and then kill the remaining processes?
pids=""
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
# store PID of process
pids+=" $!"
done
if [ "one of them finished" ]; then
kill_rest_of_them;
fi
I'm looking for "one of them finished" command. Is there any?
bash 4.3 added a -n flag to the built-in wait command, which causes the script to wait for the next child to complete. The -p option to jobs also means you don't need to store the list of pids, as long as there aren't any background jobs that you don't want to wait on.
# Run five concurrent processes
for i in {1..5}; do
    ( longprocess ) &
done
wait -n
kill $(jobs -p)
Note that if there is another background job other than the 5 long processes that completes first, wait -n will exit when it completes. That would also mean you would still want to save the list of process ids to kill, rather than killing whatever jobs -p returns.
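A sketch of that safer variant, saving the PIDs up front (longprocess is the question's placeholder command):
pids=()
for i in {1..5}; do
    ( longprocess ) &
    pids+=("$!")                # remember each PID explicitly
done

wait -n                         # returns as soon as the next job completes
kill "${pids[@]}" 2>/dev/null   # kill the saved set; already-dead PIDs just error quietly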
It's actually fairly easy:
#!/bin/bash
set -o monitor

killAll()
{
    # code to kill all child processes
    :                           # placeholder; see the sketch below
}

# call function to kill all children on SIGCHLD from the first one
trap killAll SIGCHLD

# start your child processes here

# now wait for them to finish
wait
You just have to be really careful in your script to use only bash built-in commands. You can't start any utility that runs as a separate process after you issue the trap command: any child process exiting will send SIGCHLD, and you can't tell which child it came from.
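Put together for the question's five jobs, a sketch might look like this (the handler un-traps SIGCHLD first so the kills it performs don't re-trigger it):
#!/bin/bash
set -o monitor

killAll()
{
    trap - SIGCHLD              # don't re-enter for the children we kill below
    kill $(jobs -p) 2>/dev/null
}
trap killAll SIGCHLD

for i in {1..5}; do
    ( longprocess ) &
done
wait                            # returns once SIGCHLD interrupts it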

Getting results of parallel executions in bash

I have a bash script in which I invoke other scripts to run in parallel. With the wait command I can wait until all parallel processes have finished, but I want to know whether all of the processes that ran in parallel in the background exited successfully (with return code 0).
My code looks like:
# call multiple processes to execute in the background
process-1 &
process-2 &
process-3 &
wait
# after the parallel execution finishes, I want to know whether all of them succeeded and returned '0'
You can use wait -n which returns the exit code of the next job that terminates. Call it once for each background process.
process-1 &
process-2 &
process-3 &
wait -n && wait -n && wait -n
wait -n seems like the correct solution, but since it is not available in bash 4.2.37, you can try this trick:
#!/bin/bash
(
    process-1 || echo $! failed &
    process-2 || echo $! failed &
    process-3 || echo $! failed &
    wait
) | grep -q failed
if [ $? -eq 0 ]; then
    echo at least one process failed
else
    echo all processes finished successfully
fi
Just make sure the string "failed" is not printed by the processes themselves on an actual success. Also, you could run the processes with stdout and stderr redirected to /dev/null, as in process-1 &>/dev/null.
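A sketch with that caveat applied, so a stray "failed" in a process's own output cannot cause a false positive:
#!/bin/bash
(
    process-1 &>/dev/null || echo $! failed &
    process-2 &>/dev/null || echo $! failed &
    process-3 &>/dev/null || echo $! failed &
    wait
) | grep -q failed && echo "at least one process failed"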
I've written a tool that simplifies the solutions a bit: https://github.com/wagoodman/bashful
You provide a file describing what you want to run...
# awesome.yaml
tasks:
    - name: My awesome tasks
      parallel-tasks:
        - cmd: ./some-script-1.sh
        - cmd: ./some-script-2.sh
        - cmd: ./some-script-3.sh
        - cmd: ./some-script-4.sh
...and run it like so:
bashful run awesome.yaml
Then it will run your tasks in parallel with a vertical progress bar showing the status of each task. Failures are indicated in red and the program exits with 1 if there were any errors found (exit occurs after the parallel block completes).

Sending SIGTERM to all processes

I have a bash script called run.sh that launches multiple processes:
#!/bin/bash
proc1 &
proc2 &
proc3 &
final # this runs until SIGTERM
When I execute run.sh and send a SIGTERM to it, SIGTERM does not seem to be delivered to final, nor to proc1, proc2 and proc3. Note that in this use case run.sh runs inside a Docker container, and running docker stop is how I am trying to send the SIGTERM.
What would be the easiest way for the bash script to send a SIGTERM to all of the processes it started? The only way I can think of is to start final with & too and then run a while loop in run.sh?
EDIT - I've tried this, but it doesn't seem to work:
In run.sh
#!/bin/bash
_term() {
    echo "Caught SIGTERM signal!"
}
trap _term SIGTERM
echo "hi"
sleep 100000 &
wait $!
When running docker stop, I never see Caught SIGTERM signal!
You said you run that script in a Docker container. Could you give us more details on how you start the container and how run.sh is invoked?
When docker stop is invoked, or a direct SIGTERM is received by the container, the contained process with PID 1 will receive it. When your run.sh creates child processes that run in the background, it also has to forward signals to them.
Therefore it is not a good approach to create background child processes in a bash script with &. Using a supervisor would be good practice, as it handles signals properly and forwards them to its child processes without any further scripting needed.
In addition, supervisord should not be started as a shell child process itself. That would happen if you specified it as your container command in your Dockerfile:
CMD /usr/bin/supervisord
Instead it should look like:
CMD ["/usr/bin/supervisord"]
That way the supervisor becomes the root process with PID 1, receives all signals properly, and forwards them to its child processes.
Use jobs -p to get the process ids of any background jobs, then pass them to kill.
trap 'kill $(jobs -p)' TERM
proc1 &
proc2 &
proc3 &
wait
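For the questioner's run.sh this works if final is backgrounded too, as the question itself suggested; a sketch that also waits inside the handler so the children can finish their own cleanup before the script exits:
#!/bin/bash
_term() {
    kill $(jobs -p) 2>/dev/null
    wait                        # reap the children after signalling them
}
trap _term TERM

proc1 &
proc2 &
proc3 &
final &                         # background final too, so the script can wait on everything
wait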
Correct; I would collect them all in an array and then send a signal to each of them when finished, using something like awk '{ system("kill -15 " $1) }' on the list of PIDs.

How do you stop two concurrent processes?

In my web development workflow, I have two processes:
watching my folder for changes
previewing my site in the browser
I want to be able to run them and then later stop them both at the same time. I've seen everyone suggesting using the ampersand operator:
process_1 & process_2
But pressing Ctrl + C only stops the second one. I have to kill the first one manually. What am I missing in this approach?
You can have the foreground script explicitly kill the subprocesses in response to SIGINT:
#!/bin/sh
trap 'kill $pid1 $pid2' 2
cmd1 &
pid1=$!
cmd2 &
pid2=$!
wait
There is a race condition in this example: if you send SIGINT to the parent before pid1 is assigned, kill will emit a warning message and neither child will be terminated; if you send SIGINT before pid2 is assigned, only the process running cmd1 will be signalled. In either case the parent will continue running, and a second SIGINT can be sent. Some versions of kill let you avoid this race by signalling the entire process group with kill -- -$$, but not all versions of kill support that usage. (Note that if either child process does not terminate in response to the signal, the parent will not exit but will keep waiting.)
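When the script runs in its own process group, kill 0 (which signals every member of that group, the script included) avoids tracking PIDs altogether; a sketch:
#!/bin/sh
# `kill 0` sends SIGTERM (kill's default) to the whole process group.
# Background children of a non-interactive shell ignore SIGINT, which is
# why the trap forwards a different signal instead of relying on Ctrl+C.
trap 'kill 0' INT
cmd1 &
cmd2 &
wait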
How about writing two scripts, one containing
./process_1 &
./process_2 &
and a second containing
killall process_1
killall process_2
Start both processes by running the first script, and end them by running the second script.
