Can GNU make rules have processes as prerequisites, and if so, how?

At some step of my software build automation, which I am implementing with GNU make Makefiles, I run into a case where a target's prerequisites are not only source files: as a different kind of prerequisite, I would like the target to depend on another piece of software being started, and hence existing as an operating-system process.
Such a program could be a background process, but also a foreground process such as a web browser running an HTML5 application, which might play a role in the build by, for instance, interacting with files it is fed through the build process.
I would hence like to write a rule somewhat like this:
.PHONY: firefoxprocess

Html5DataResultFile: HTML5DataSourceFile firefoxprocess
	cp HTML5DataSourceFile folder/checked/by/html5app/
	waitforHtml5DataResultFile

firefoxprocess:
	/usr/bin/firefox file://url/to/html5app &
As you can see, I have taken up the idea that .PHONY targets are non-file targets, and hence might allow for requiring a process to be started.
Yet I am unsure whether that is right. The documentation of GNU make is excellent and quite large, and I am not sure I have understood it completely. To the best of my knowledge, it does not really cover the use of processes in rules, which motivates this question.
My feeling has been that pidfiles are a sort of link between processes and files, but they come with several problems (race conditions, uniqueness, etc.).

Sometimes a Makefile dependency tree includes elements that aren't naturally or necessarily time-dependent files. There are two answers:
create a file to represent the step, or
just do the work "in line" as part of the step.
The second option is usually easiest. For instance, if a target file is to be created in a directory that might not exist yet, you don't want to make the directory name itself a dependency, because that would cause the file to be out of date whenever the directory changed. Instead, I do:
d/foo:
	@test -d d || mkdir -p d
	...
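GNU make also has a built-in construct for exactly this situation: an order-only prerequisite (written after a `|`) is built when missing, but its timestamp is never compared against the target's. A sketch of the same directory rule in that style:

```make
# 'd' is an order-only prerequisite: it is created if absent, but a newer
# timestamp on the directory never marks d/foo as out of date.
d/foo: | d
	...

d:
	mkdir -p d
```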
In your case, you could do something similar; you just need a way to test for a running instance of Firefox, and a way to start one. Something like this might do:

Html5DataResultFile: HTML5DataSourceFile
	pgrep firefox || { /usr/bin/firefox & sleep 5; }
	cp HTML5DataSourceFile folder/checked/by/html5app/
	waitforHtml5DataResultFile
The sleep call just gives FF time to initialize, because it might not be ready to do anything the instant the command returns.
The problem with option #1 in your case is that it's undependable and a little circular. Firefox won't reliably remove the pidfile if the process dies. If it does successfully remove the file when it exits, and re-creates it when it restarts, you have a new problem: the timestamp on the file spuriously defines any dependencies as out of date, when in fact the restarted process hasn't invalidated them.
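Applied to the rule from the question, the same idea lets you keep the .PHONY target: since a phony prerequisite is always considered out of date, attach it as an order-only prerequisite (after the `|`) so the browser gets started without marking the result file perpetually stale. A sketch, reusing the asker's placeholder paths and commands:

```make
.PHONY: firefoxprocess
firefoxprocess:
	pgrep firefox || { /usr/bin/firefox file://url/to/html5app & sleep 5; }

# order-only: run firefoxprocess first, but never let it alone
# force a rebuild of Html5DataResultFile
Html5DataResultFile: HTML5DataSourceFile | firefoxprocess
	cp HTML5DataSourceFile folder/checked/by/html5app/
	waitforHtml5DataResultFile
```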

Related

Gnu Make: When invoking parallel make, if pre-requisites are supplied during the build, will make try to remake those?

This is an order of operations question.
Suppose I declare a list of requirements:
required:=$(patsubst %.foo,%.bar,$(shell find * -name '*.foo'))
And a rule to make those requirements:
$(required):
	./foo.py $@
Finally, I invoke the work with:
make foo -j 10
Suppose further that the job takes days and days (up to a week on this slow desktop computer).
In order to speed things up, I'd like to generate a list of commands and do some of the work on a much faster laptop. I can't do all of the work on the laptop because, for whatever reason, it can't stay up for hours on end without discharging and suspending (if I had to guess, probably due to thermal throttling):
make -n foo > outstanding_jobs
cat outstanding_jobs | sort -r | sponge outstanding_jobs
scp slow_box:outstanding_jobs fast_laptop:outstanding_jobs
ssh fast_laptop
head -n 200 outstanding_jobs | parallel -j 12
scp *.bar slow_box:.
The question is:
If I put *.bar in the directory where the original make job was run, will make still try to do that job on the slow box?
OR do I have to halt the job on the slow box and re-invoke make to "get credit" in the make recipe for the new work that I've synced over onto the slow box?
Before it starts building anything, make constructs a dependency graph to guide it, based on an analysis of the requested goal(s), the applicable build rules, and, to some extent, the files already present. It then walks the graph, starting from the goal nodes, to determine which are out of date with respect to their prerequisites and update them.
Although it does not necessarily evaluate the whole graph before running any recipes, once it decides that a given target needs to be updated, make is committed to updating it. In particular, once make decides that some direct or indirect prerequisite of T is out of date, it is committed to (re)building T, too, regardless of any subsequent action on T by another process.
So, ...
If I put *.bar in the directory where the original make job was run,
will make still try to do that job on the slow box?
Adding files to the build directory after make starts building things will not necessarily affect which targets the ongoing make run will attempt to build, nor which recipes it uses to build them. The nearer a target is to a root of the dependency graph, the less likely that the approach described will affect whether make performs a rebuild, especially if you're running a parallel make.
It's possible that you would see some time savings, but you must also consider the possibility that you end up with an inconsistent build.
OR do I have to halt the job on the slow box and re-invoke make to "get credit" in the make recipe for the new work that I've synced over onto the slow box?
If the possibility of an inconsistent build can be discounted, then that is probably a viable option. A new make run will take the then-existing files into account. Depending on the defined rules and the applicable timestamps, it is still possible that some targets would be rebuilt that did not really need to be, but unless the makefile engages in unusual shenanigans, chances are good that at least most of the built files imported from the helper machine will be accepted and used without rebuilding.
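The committed-once-scheduled behaviour is easy to observe with a toy build (file names are hypothetical; the heredoc uses GNU make's .RECIPEPREFIX, available since make 3.82, to avoid literal tabs):

```shell
# Demonstration: once make has decided to rebuild 'out', a copy of 'out'
# dropped into the directory mid-run is simply overwritten.
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
out: dep
> sleep 2
> echo built-by-make > out
.PHONY: dep
dep:
> @:
EOF
make out &            # make schedules the rebuild of 'out'
sleep 1
echo imported > out   # simulate syncing in a prebuilt file mid-run
wait
cat out               # prints: built-by-make
```

Because `dep` is phony, make always considers `out` out of date, so the file written in mid-run is clobbered by the recipe that was already committed.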

Makefile: store warning count into variable without using temp file

I would like to improve an existing Makefile, so it prints out the number of warnings and/or errors that were encountered during the build process.
My basic idea is that there must be a way to pipe the output to grep and have the number of occurrences of a certain string (e.g. "Warning:") in either the stderr or stdout stream stored into a variable that can then simply be echoed at the end of the make command.
Requirements / Challenges:
Current console output and exit code must remain exactly the same
That also means not changing even control characters. Developers using the Makefile must not notice any difference to what the output was prior to my change (except for a nice, additional warning-count output at the end of the make process). Any approaches with tee I have tried so far were unsuccessful, as the color coding of stderr messages in the console is lost, turning everything black and white.
Must be system-independent
The project is currently being built by Win/OSX/Linux devs and thus needs to work with standard tools available out of the box in most *nix / Cygwin shells. Introducing another dependency such as unbuffer is not an option.
It must be stable and free of side effects (see also 5.)
If the make process is interrupted (e.g. by the user pressing Ctrl+C, or for any other reason), there should be no side effects, such as an orphaned log file of the output left behind on disk.
(Bonus) It should be efficient
The amount of output may reach 1 MB and more, so just piping to a file and grepping it would incur a small performance hit plus additional disk I/O (thus unnecessarily slowing down the build). I'm simply wondering whether this can be done without a temp file, as I understand pipes as a sort of "stream" that just needs to be analysed as it flows through.
(Bonus) Make it locale-independent w/o changing the current output
Depending on the current locale, the string to grep for and count is localized differently, e.g. "Warning:" (en_US.utf8) or "Warnung:" (de_DE.utf8). Surely I could switch the locale to en_US in the Makefile, but that would change the console output for users (hence breaking requirement 1), so I'd like to know whether there's any (efficient) approach you can think of for this.
At the end of the day, I'd be able to live with a solid solution that just fulfills requirements 1 and 2.
If 3 to 5 cannot be done, then I'd have to convince the project maintainers to make some changes to .gitignore, let the build process take slightly more time and resources, and/or fix the make output to English only, but I assume they will agree it would be worth it.
Current solution
The best I have so far is:
script -eqc "make" make.log && WARNING_COUNT=$(grep -i --count "Warning:" make.log) && rm make.log || rm make.log
That fulfills requirements 1 and 2, and almost 3: still, if the machine has a power outage while running the command, make.log will remain as an unwanted artifact. Also, the repetition of rm make.log looks ugly.
So I'm open to alternative approaches and improvements from anybody. Thanks in advance.
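A pipe-based sketch of the counting itself, for what it's worth: it deliberately sacrifices requirement 1 (stdout and stderr are merged and coloring is lost, which is exactly the objection above), and `fakemake` is a hypothetical stand-in for the real make invocation:

```shell
# Count "Warning:" occurrences in the build's combined output without a
# temp file, while still passing the output through (here via stderr).
fakemake() {                       # hypothetical stand-in for `make`
    echo "compiling foo.c..."
    echo "Warning: unused variable" >&2
    echo "Warning: shadowed name" >&2
}

WARNING_COUNT=$(fakemake 2>&1 | tee /dev/stderr | grep -ci 'warning:')
echo "Warnings: $WARNING_COUNT"    # prints: Warnings: 2
```

The count is extracted from the stream as it flows through the pipe, so nothing is written to disk; the price is that the two streams can no longer be told apart.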

How to rebuild when the recipe has changed

I apologize if this question has already been asked. It's not easy to search for.
make was designed on the assumption that the Makefile is somewhat god-like: all-knowing about the future of your project, and never in need of any modification besides adding new source files. Which is obviously not true.
I used to make all my targets in a Makefile depend on the Makefile itself. So that if I change anything in the Makefile, the whole project is rebuilt.
This has two main limitations :
It rebuilds too often. Adding a linker option or a new source file rebuilds everything.
It won't rebuild if I pass a variable on the command line, like make CFLAGS=-O3.
I see a few ways of doing it correctly, but none of them seems satisfactory at first glance.
Make every target depend on a file that contains the content of the recipe.
Generate the whole rule with its recipe into a file destined to be included from the Makefile.
Conditionally add a dependency to the targets to force them being rebuilt whenever necessary.
Use the eval function to generate the rules.
But all these solutions need an uncommon way of writing the recipes. Either putting the whole rule as a string in a variable, or wrap the recipes in a function that would do some magic.
What I'm looking for is a solution to write the rules in a way as straightforward as possible. With as little additional junk as possible. How do people usually do this?
I have projects that compile for multiple platforms. When building a single project which had previously been compiled for a different architecture, one can force a rebuild manually. However when compiling all projects for OpenWRT, manual cleanup is unmanageable.
My solution was to create a marker identifying the platform. If missing, everything will recompile.
ARCH ?= $(shell uname -m)
CROSS ?= $(shell uname -s).$(ARCH)

# marker for the last built architecture
BUILT_MARKER := out/$(CROSS).built

$(BUILT_MARKER):
	@-rm -f out/*.built
	@touch $(BUILT_MARKER)

build: $(BUILT_MARKER)
	# TODO: add your build commands here
If your flags are too long, you may reduce them to a checksum.
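That checksum variant might look like this (all names are illustrative; `cksum` is POSIX, and the heredoc uses GNU make's .RECIPEPREFIX to avoid literal tabs):

```shell
# A stamp file named after a checksum of the current flags: changing the
# flags changes the stamp name, which forces a rebuild.
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
CFLAGS ?= -O0
FLAGS_SUM := $(shell printf '%s' "$(CFLAGS)" | cksum | cut -d' ' -f1)
FLAGS_MARKER := flags.$(FLAGS_SUM).stamp

app.log: $(FLAGS_MARKER)
> echo "built with $(CFLAGS)" > app.log

$(FLAGS_MARKER):
> rm -f flags.*.stamp
> touch $@
EOF
make app.log CFLAGS=-O0
make app.log CFLAGS=-O3   # different flags -> new stamp -> rebuild
cat app.log               # prints: built with -O3
```

The second invocation computes a different checksum, finds its stamp missing, removes the stale stamps, and rebuilds everything that depends on the marker.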
"make has been designed with the assumption that the Makefile is kinda god-like. It is all-knowing about the future of your project and will never need any modification beside adding new source files."
I disagree. make was designed at a time when having your source tree in a hierarchical file system was about all you needed to know about software configuration management, and it took this idea to its logical consequence, namely that all that is, is a file (with a timestamp). So, having linker options, locator tables, compiler flags and everything else but the kitchen sink in a file, and putting the dependencies thereof also in a file, will yield a consistent, complete and error-free build environment as far as make is concerned.
This means that passing data to a process (which is nothing other than saying that this process depends on that data) has to be done via a file; command-line arguments as make variables are an abuse of make's capabilities and lead to erroneous results. make clean is the technical remedy for a systemic misbehaviour. It wouldn't be necessary had the software engineer designed the make process properly and correctly.
The problem is that a clean build process is hard to design and maintain. BUT: in a modern software process, transient/volatile build parameters such as make all CFLAGS=-O3 never have a place anyway, as they wreck all good foundations of config management.
The only thing that can be criticised about make may be that it isn't the be-all-end-all solution to software building. I question whether a program with that ambition would have reached even one percent of make's popularity.
TL;DR
place your compiler/linker/locator options into separate files (at a central, prominent, easy-to-maintain and easy-to-understand, logical location), decide on the level of control through the granularity of information (e.g. compiler flags in one file, linker flags in another), write down the true dependencies for all files, and voilà: you will have exactly the necessary amount of compilation and a correct build.

Why do we describe build procedures with Makefiles instead of shell scripts?

Remark This is a variation on the question “What is the purpose
of linking object files separately in a
Makefile?” by user4076675 taking
a slightly different point of view. See also the corresponding META
discussion.
Let us consider the classical case of a C project. The gcc compiler
is able to compile and link programs in one step. We can then easily
describe the build routine with a shell script:
case $1 in
    build) gcc -o test *.c;;
    clean) rm -f test;;
esac
# This script is intentionally very brittle, to keep
# the example simple.
However, it appears to be idiomatic to describe the build procedure
with a Makefile, involving extra steps to compile each compilation
unit to an object file and ultimately linking these files. The
corresponding GNU Makefile would be:
.PHONY: all
SOURCES=$(wildcard *.c)
OBJECTS=$(SOURCES:.c=.o)

%.o: %.c
	gcc -c -o $@ $<

all: default

default: $(OBJECTS)
	gcc -o test $^

clean:
	rm -rf *.o
This second solution is arguably more involved than the simple shell
script we wrote before. It also has a drawback, as it clutters the
source directory with object files. So, why do we describe build
procedures with Makefiles instead of shell scripts? In view of the
previous example, it seems to be a useless complication.
In the simple case where we compile and link three moderately sized
files, either approach is likely to be equally satisfying. I will
therefore consider the general case, but many benefits of using
Makefiles only matter on larger projects. Once we have learned the
best tool for mastering complicated cases, we want to use it in
simple cases as well.
Let me highlight the benefits of using make instead of a simple
shell script for compilation jobs. But first, I would like to make an
innocuous observation.
The procedural paradigm of shell scripts is wrong for compilation-like jobs
Writing a Makefile is similar to writing a shell script with a slight
change of perspective. In a shell script, we describe a procedural
solution to a problem: we can start by describing the whole procedure in
very abstract terms using undefined functions, and we refine this
description until we reach the most elementary level of description,
where a procedure is just a plain shell command. In a Makefile, we do
not introduce any similar abstraction; instead, we focus on the files we
want to produce and on how we can produce them. This works well because
in UNIX, everything is a file, and therefore each treatment is
accomplished by a program which reads its input data from input
files, does some computation, and writes the results to some output
files.
If we want to compute something complicated, we have to use a lot of
input files which are treated by programs whose outputs are used as
inputs to other programs, and so on until we have produced our final
files containing our result. If we translate the plan to prepare our
final file into a bunch of procedures in a shell script, then the
current state of the processing is made implicit: the plan executor
knows “where it is at” because it is executing a given procedure,
which implicitly guarantees that such and such computations were
already done, that is, that such and such intermediary files were
already prepared. Now, which data describes “where the plan executor
is at”?
Innocuous observation The data which describes “where the plan
executor is at” is precisely the set of intermediary files which
were already prepared, and this is exactly the data which is made
explicit when we write Makefiles.
This innocuous observation is actually the conceptual difference
between shell scripts and Makefiles which explains all the advantages
of Makefiles over shell scripts in compilation jobs and similar jobs.
Of course, to fully appreciate these advantages, we have to write
correct Makefiles, which might be hard for beginners.
Make makes it easy to continue an interrupted task where it left off
When we describe a compilation job with a Makefile, we can easily
interrupt it and resume it later. This is a consequence of the
innocuous observation. A similar effect can only be achieved with
considerable efforts in a shell script, while it is just built in
make.
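A toy illustration of that resumability (the targets are hypothetical; the heredoc uses GNU make's .RECIPEPREFIX instead of literal tabs):

```shell
# After a partial run, a second invocation only builds what is missing.
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
all: a.out b.out
a.out:
> echo A > a.out
b.out:
> echo B > b.out
EOF
make a.out >/dev/null        # pretend the build was interrupted here
resumed=$(make all)          # resumes: only b.out is built
echo "$resumed"              # prints: echo B > b.out
```

The intermediary file a.out records that its step is already done, so make skips it; a shell script would have to track that state explicitly.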
Make makes it easy to work with several builds of a project
You observed that Makefiles will clutter the source tree with object
files. But Makefiles can actually be parametrised to store these
object files in a dedicated directory. I work with BSD Owl
macros for bsdmake and use
MAKEOBJDIR='/usr/home/michael/obj${.CURDIR:S#^/usr/home/michael##}'
so that all object files end under ~/obj and do not pollute my
sources. See this
answer
for more details.
Advanced Makefiles allow us to have simultaneously several directories
containing several builds of a project with distinct compilation
options. For instance, with distinct features enabled, or debug
versions, etc. This is also a consequence of the innocuous observation
that Makefiles are actually articulated around the set of intermediary
files. This technique is illustrated in the testsuite of BSD Owl.
Make makes it easy to parallelise builds
We can easily build a program in parallel since this is a standard
function of many versions of make. This is also a consequence of the
innocuous observation: because “where the plan executor is at” is an
explicit data in a Makefile, it is possible for make to reason about
it. Achieving a similar effect in a shell script would require a
great effort.
The parallel mode of any version of make will only work correctly if
the dependencies are correctly specified. This might be quite
complicated to achieve, but bsdmake has a feature which
literally annihilates the problem. It is called the
META mode. It
uses a first, non-parallel pass of a compilation job to compute
actual dependencies by monitoring file access, and uses this
information in later parallel builds.
Makefiles are easily extensible
Because of the special perspective — that is, as another consequence
of the innocuous observation — used to write Makefiles, we can
easily extend them by hooking into all aspects of our build system.
For instance, if we decide that all our database I/O boilerplate code
should be written by an automatic tool, we just have to write in the
Makefile which files the automatic tool should use as inputs to write
the boilerplate code. Nothing less, nothing more. And we can add this
description pretty much wherever we like; make will pick it up
anyway. Doing such an extension in a shell-script build would be
harder than necessary.
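A sketch of the kind of hook just described; every file and tool name here is purely illustrative:

```make
# Declare what the (hypothetical) generator reads, and make slots the
# boilerplate-generation step into the build wherever its output is needed.
db_boilerplate.c: schema.sql gen_boilerplate.py
	python gen_boilerplate.py schema.sql > $@
```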
This extensibility ease is a great incentive for Makefile code reuse.

How can I tell what -j option was provided to make

In Racket's build system, we have a build step that invokes a program that can run several parallel tasks at once. Since this is invoked from make, it would be nice to respect the -j option that make was originally invoked with.
However, as far as I can tell, there's no way to get the value of the -j option from inside the Makefile, or even as an environment variable in the programs that make invokes.
Is there a way to get this value, or the command line that make was invoked with, or something similar that would have the relevant information? It would be ok to have this only work in GNU make.
In make 4.2.1 they finally got MAKEFLAGS right. That is, you can have in your Makefile a target
opts:
	@echo $(MAKEFLAGS)
and making it will tell you the value of the -j parameter correctly:
$ make -j10 opts
-j10 --jobserver-auth=3,4
(In make 4.1 it is still broken.) Needless to say, instead of echo you can invoke a script that does proper parsing of MAKEFLAGS.
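Such parsing might be sketched as follows; `parse_jobs` is a hypothetical helper, and in a real recipe you would pass it "$(MAKEFLAGS)":

```shell
# parse_jobs: extract N from a -jN token in a MAKEFLAGS-style string,
# defaulting to 1 when no such token is present.
parse_jobs() {
    jobs=1
    for tok in $1; do
        case "$tok" in
            -j[0-9]*) jobs="${tok#-j}" ;;
        esac
    done
    echo "$jobs"
}

parse_jobs "-j10 --jobserver-auth=3,4"   # prints: 10
parse_jobs "--no-print-directory"        # prints: 1
```

Note that a bare `-j` (unlimited jobs) and the bundled single-letter flags that may lead MAKEFLAGS are not handled here; this is only the happy path for `-jN`.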
Note: this answer concerns make version 3.82 and earlier. For a better answer as of version 4.2, see the answer by Dima Pasechnik.
You can not tell what -j option was provided to make. Information about the number of jobs is not accessible in the regular way from make or its sub-processes, according to the following quote:
The top make and all its sub-make processes use a pipe to communicate with
each other to ensure that no more than N jobs are started across all makes.
(taken from the file called NEWS in the make 3.82 source code tree)
The top make process acts as a job server, handing out tokens to the sub-make processes via the pipe. It seems to be your goal to do your own parallel processing and still honor the indicated maximum number of simultaneous jobs as provided to make. In order to achieve that, you would somehow have to insert yourself into the communication via that pipe. However, this is an unnamed pipe and as far as I can see, there is no way for your own process to join the job-server mechanism.
By the way, the "preprocessed version of the flags" that you mention contain the expression --jobserver-fds=3,4 which is used to communicate information about the endpoints of the pipe between the make processes. This exposes a little bit of what is going on under the hood...
