When writing a makefile, how can I kill a subprocess on exit?

Makefile:
default:
	command1 &
	command2 &
When I run make and hit Control-C, command1 and command2 keep running. How can I make sure both are killed once make is killed? command1 and command2 should run in parallel: command1 watches source files and compiles them, and command2 is a web server.

You can use make's own parallelism option, make -j2, with a makefile like this:
default: task1 task2
task1:
	command1
task2:
	command2
When you interrupt it with Ctrl-C, make itself takes care of interrupting both command1 and command2.
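If you would rather not have to remember the -j2 flag, one variant (a sketch of mine, not part of the original answer) is to have the default target re-invoke make through the standard $(MAKE) variable:

.PHONY: default task1 task2
default:
	$(MAKE) -j2 task1 task2   # re-run make in parallel mode
task1:
	command1
task2:
	command2

Using $(MAKE) rather than a literal make is the conventional way to recurse, and it keeps make's job handling working as expected.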

The following code works. I took it from https://veithen.github.io/2014/11/16/sigterm-propagation.html, which explains the signal-propagation trick it uses.
Makefile:
.PHONY: default
default:
	bash build.sh
build.sh:
#!/bin/bash
# Forward TERM/INT to both background commands.
trap 'kill -TERM $PID1 $PID2' TERM INT
webpack &                    # Command 1: watches and compiles sources
PID1=$!
http-server -p 8000 -c-1 &   # Command 2: web server
PID2=$!
wait $PID2                   # returns early if a trapped signal arrives
trap - TERM INT
wait $PID2                   # reap the child and collect its real status
EXIT_STATUS=$?
Note: $! expands to the PID of the most recent background job, so the snippet as originally posted only tracked Command 2; capturing each PID separately lets the trap kill both. The double wait is the trick from the linked post: the first wait is interrupted by the trapped signal, and the second one (after clearing the trap) blocks until the child has actually exited so its exit status can be collected.

Related

How to pipe background processes in a shell script

I have a shell script that starts a few background processes (using &), which are automatically killed when the user presses Ctrl+C (using trap). This works well:
#!/bin/sh
trap "exit" INT TERM ERR
trap "kill 0" EXIT
command1 &
command2 &
command3 &
wait
Now I would like to filter the output of command3 with grep -v "127.0.0.1" to exclude all the lines containing 127.0.0.1, like this:
#!/bin/sh
trap "exit" INT TERM ERR
trap "kill 0" EXIT
command1 &
command2 &
command3 | grep -v "127.0.0.1" &
wait
The problem is that Ctrl+C no longer kills command3.
Is there a way to capture the command3 | grep pipeline so that it can still be killed at the end of the process?
Thanks
I will answer my own question. The problem was that the trap was too limited; I changed it to kill all jobs properly.
#!/bin/sh
killjobs() {
    # Send TERM to each background job; if that fails, fall back to
    # KILL after 10 seconds (backgrounded, so the loop is not held up).
    for job in $(jobs -p); do
        kill -s TERM "$job" > /dev/null 2>&1 || (sleep 10 && kill -9 "$job" > /dev/null 2>&1 &)
    done
}
trap killjobs EXIT
command1 &
command2 &
command3 | grep -v "127.0.0.1" &
wait
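One portability note (my addition, not from the original answer): not every /bin/sh runs the EXIT trap when the script is killed by a signal, so it can help to also convert INT and TERM into a normal exit; exiting then fires the EXIT trap, which is the same trick the first version of the script used:

trap 'exit 1' INT TERM   # turn signals into a normal exit...
trap killjobs EXIT       # ...which triggers the cleanup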

&& in bash script using GNU "parallel"

I would like to run my bash script my_script.sh using GNU parallel:
parallel < my_script.sh
where my_script.sh is:
command1 && command2
command3
command4
Will command1 and command2 run in parallel or sequentially?
In case I am not clear:
command1
command2
command3
command4
or
command1 -> command2
command3
command4
?
Thank you!
&& lets you do something based on whether the previous command completed successfully.
It will run like this:
command1 -> command2 (only if the exit status of "command1" is zero)
command3
command4
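You can see this behaviour with a few trivial stand-in commands (the echo lines are placeholders): when parallel is given no command of its own, each input line becomes one job, and the && runs inside that job's shell:

printf '%s\n' 'echo a && echo b' 'echo c' 'echo d' | parallel

Here 'echo a && echo b' is a single job (b only prints if a succeeded), while 'echo c' and 'echo d' are separate jobs that may run concurrently with it.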

series of commands using nohup [duplicate]

I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see that method 3 might work, but it forces me to create a temp file. If I can only choose between methods 1 and 2, which is the right way?
nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you log out before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
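A practical aside (my addition, not part of the original answer): nohup appends any output still attached to the terminal to nohup.out, so for commands that run for days it may help to redirect explicitly and background the whole shell; run.log is a placeholder name:

nohup sh -c 'command1 && command2 && command3' > run.log 2>&1 &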

How to avoid a shell script exiting on failure for particular commands

The following script runs with the -e option, so it will exit if any of the commands in it fail:
#!/bin/sh -e
command1 #script should fail if command1 fails
command2 #script should NOT fail if command2 fails
command3 #script should fail if command3 fails
How can I make the script not fail if command2 fails?
command1
command2 || true
command3
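Why this works: || true turns the line into a compound command whose exit status is 0, so set -e never sees the failure. A quick demonstration, using false as a stand-in for a failing command:

#!/bin/sh -e
false || true      # the line as a whole succeeds; the script continues
echo "still running"
false              # unguarded failure: set -e aborts the script here
echo "never reached"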
You could turn off the setting as required:
#!/bin/sh
set -e
command1 #script should fail if command1 fails
set +e
command2 #script should NOT fail if command2 fails
set -e
command3 #script should fail if command3 fails
If you are sourcing a script that calls exit (which would otherwise terminate your current shell), you can contain the exit by sourcing it inside a subshell:
(. script.sh)
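To illustrate the difference (script.sh here stands for any script that calls, say, exit 1):

# . script.sh           # sourced directly: the exit terminates this shell too
(. script.sh)           # sourced in a subshell: only the subshell exits
echo "parent shell still here; subshell status: $?"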

Executing a command (only) when prior jobs are finished in Bash

I am trying to ensure that a command runs serially after parallel commands have terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in the background at the same time, but command3 runs right away without waiting for them. I know this is the expected behaviour in Bash, but is there a way to launch command3 only after command1 and command2 have terminated?
It is probably possible to do something like this (the loop has to keep waiting while either flag file is missing, and sleep takes seconds):
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1; done
command3
...but if a more elegant solution is available I will take it. The join needs to be passive, as these commands are destined to run in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3
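If command3 should run only when both background commands succeed, a variant (a sketch of mine, not from the original answer) is to wait on each PID individually and check the statuses; $! captures the PID of the most recent background job:

command1 & pid1=$!
command2 & pid2=$!
wait "$pid1" && wait "$pid2" && command3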
