Force execution in makefile

I am trying to modify a Makefile and I want to add, at the end, a command that will be run even if the rest of the make fails. I would also like to know whether it is possible to run this command even if the user hits Ctrl+C.
I searched a bit on the web but couldn't find an answer to my question.
Thank you so much in advance.

It is basically not possible to define something in a makefile that will always be run as the very last thing.
The best I can suggest is that you create a wrapper script that people invoke instead of make, and have that wrapper run the command you want after make exits. The script can even trap SIGINT (^C).
If you really must do it with make, the only way is to create a "wrapper makefile" that essentially invokes a sub-make on the real makefile to do the work, then does more things when that sub-make finishes. There is no way to catch SIGINT in this situation. It is also complicated because there is no such thing as "perfect forwarding" in makefiles, so getting the wrapper makefile to behave identically to the real one invoked directly is tricky or even impossible.
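As a rough illustration of the wrapper-script approach, here is a minimal sketch. The script and command names are made up; the EXIT trap fires on normal completion, on failure, and (because INT is converted into a normal exit) after Ctrl+C.

#!/bin/bash
# Hypothetical wrapper (e.g. ./build.sh) that people invoke instead of make.
final_command() {
    echo "always-run step"    # stand-in for the command that must always run
}
trap final_command EXIT       # fires on success, on failure, and on Ctrl+C
trap 'exit 130' INT           # turn Ctrl+C into a normal exit so the EXIT trap runs
make "$@"                     # forward all arguments to the real make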

If you want to run a command every time make is called you can call the shell function:
DUMMY := $(shell myCommand &)
Put it at the beginning of the Makefile so that it is one of the first things make does. Stopping make with Ctrl+C will normally not stop the child process, unless you are quick enough to hit Ctrl+C before make has executed the command.

Related

How to exit the entire call stack of shell scripts if a child script fails?

I have a set of shell scripts, around 20-30, that are used to perform one big task as a whole. The wrapper script mainly calls the high-level task scripts, but internally those scripts call other scripts, and the flow goes on in a nested manner.
I want to know if there is a way to exit the entire call stack if some critical script fails. Normally I run exit 125 and then catch that in the caller script, and so on, but that feels a little complicated. Is there a special exit that will abort the entire call stack? I don't want to use the kill command to abort the wrapper script process.
You could have your main wrapper script start every sub-script in its own process group, using e.g. chpst -P.
Then the sub-scripts, as well as their children, could kill their own process group by sending it a KILL signal, and this would not affect the main wrapper script.
I think this would be a bad idea, though, and what you're currently doing is the better way (because it makes the code easier to follow).
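If you do want to try it, here is a rough sketch, assuming runit's chpst is available; the script names and the failing step are placeholders.

# In the main wrapper: launch each high-level script in its own process group.
chpst -P ./high_level_task.sh

# In any sub-script, at a critical failure:
if ! critical_step; then
    echo "critical step failed, aborting this branch" >&2
    # PID 0 means "every process in my own process group"; the wrapper sits
    # in a different group thanks to chpst -P, so it is not affected.
    kill -KILL 0
fi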

BASH: Recursive design, linear implementation

The idea
Say I have a few scripts. For example:
script1
script2
script3
I want each script to:
Do something
Run next script
Wait
Cleanup
The wait is simply to wait for the next script to complete.
The problem
A recursive solution is rather straightforward. The problem is that each script then needs to check whether there is a next script. This is OK, but a minor mistake in one script turns it into debugging hell, especially if there are many scripts.
For this reason I was thinking of doing it in a linear way, with a main script (script1) keeping control of everything. The main issue is the wait part.
How do I make script1 pause script2 until script3 has completed, so that script2 can then clean up?
The easiest approach would be to simply split each worker script into two parts: the real work and the cleanup. Then your master script can run each of the work scripts in sequence, followed by each of the cleanup scripts.
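A minimal sketch of that split, with made-up script names, where each workN.sh does the real work and the matching cleanupN.sh finalizes it:

#!/bin/bash
set -e            # stop the sequence as soon as one work step fails

./work1.sh
./work2.sh
./work3.sh

# Cleanups run in reverse order, mirroring the nesting of the recursive design.
./cleanup3.sh
./cleanup2.sh
./cleanup1.sh

Whether the cleanups should still run when a work step fails is a policy choice; an EXIT trap in the master script could handle that case.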
Another way to go about this would be to use a "build system" like SCons, which may work well if you can define the inputs and outputs of each script as filenames and let SCons schedule the work and support the "clean" command. This will be a bit of a steep learning curve, but for serious systems where debugging may be needed often, it may be more beneficial.

How can I tell what -j option was provided to make

In Racket's build system, we have a build step that invokes a program that can run several parallel tasks at once. Since this is invoked from make, it would be nice to respect the -j option that make was originally invoked with.
However, as far as I can tell, there's no way to get the value of the -j option from inside the Makefile, or even as an environment variable in the programs that make invokes.
Is there a way to get this value, or the command line that make was invoked with, or something similar that would have the relevant information? It would be ok to have this only work in GNU make.
In make 4.2.1 they finally got MAKEFLAGS right. That is, you can have a target in your Makefile:
opts:
	@echo $(MAKEFLAGS)
and making it will correctly report the value of the -j parameter:
$ make -j10 opts
-j10 --jobserver-auth=3,4
(In make 4.1 it is still broken.) Needless to say, instead of echo you can invoke a script that does proper parsing of MAKEFLAGS.
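For instance, a small script along these lines could be invoked from the recipe instead of echo. This is only a sketch: it assumes GNU make 4.2+ so that MAKEFLAGS (which make exports to its children) carries the -jN flag as shown above, and it defaults to 1 when no -j was given.

#!/bin/bash
# Pull the -j value out of MAKEFLAGS, exported by make to its child processes.
jobs=$(printf '%s\n' "$MAKEFLAGS" | grep -o -- '-j[0-9]\+' | head -n 1)
jobs=${jobs#-j}               # strip the leading "-j"
echo "make was invoked with -j${jobs:-1}"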
Note: this answer concerns make version 3.82 and earlier. For a better answer as of version 4.2, see the answer by Dima Pasechnik.
You cannot tell what -j option was provided to make. Information about the number of jobs is not accessible in any regular way from make or its sub-processes, according to the following quote:
The top make and all its sub-make processes use a pipe to communicate with
each other to ensure that no more than N jobs are started across all makes.
(taken from the file called NEWS in the make 3.82 source code tree)
The top make process acts as a job server, handing out tokens to the sub-make processes via the pipe. It seems to be your goal to do your own parallel processing and still honor the indicated maximum number of simultaneous jobs as provided to make. In order to achieve that, you would somehow have to insert yourself into the communication via that pipe. However, this is an unnamed pipe and as far as I can see, there is no way for your own process to join the job-server mechanism.
By the way, the "preprocessed version of the flags" that you mention contains the expression --jobserver-fds=3,4, which is used to communicate information about the endpoints of the pipe between the make processes. This exposes a little bit of what is going on under the hood...

How to bundle bash completion with a program and have it work in the current shell?

I sweated over the question above. The answer I'm going to supply took me a while to piece together, but it still seems hopelessly primitive and hacky compared to what one could do were completion to be redesigned to be less staticky. I'm almost afraid to ask if there's some good reason that completion logic seems to be completely divorced from the program it's completing for.
I wrote a command line library (can be seen in scala trunk) which lets you flip a switch to have a "--bash" option. If you run
./program --bash
It calculates the completion file, writes it out to a tempfile, and echoes
. /path/to/temp/file
to the console. The result is that you can use backticks like so:
`./program --bash`
and you will have completion for "program" in the current shell since it will source the tempfile.
For a concrete example: check out scala trunk and run test/partest.
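A stripped-down sketch of the same trick, with made-up names: the program writes a completion definition to a temp file and prints a source command, so evaluating its output with backticks installs the completion in the caller's shell.

#!/bin/bash
# Hypothetical handler for "myprog --bash".
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
_myprog() {
    # Offer a fixed set of subcommands; a real program would generate these.
    COMPREPLY=( $(compgen -W "build test clean" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _myprog myprog
EOF
echo ". $tmp"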

Pitfalls of using shell scripts to wrap a program?

Say I have a program that needs an environment set up. It is in Perl, and I want to modify the environment (to search for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX I pay a heavy price and a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around — and UNIX processes are very cheap, by design — then use the exec keyword, which instead of forking a new process, simply exec's a new executable in place of the current one. So, where you might have had
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
perl myprog.pl
You'd just say
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
