Include generated makefile without warning message

For a project of mine I am automatically generating makefiles and including them, like this:
all:
	@echo 'SUCCESS is $(SUCCESS)'

clean:
	rm depend.mk

depend.mk:
	@echo 'Creating $@'
	@echo 'SUCCESS := 1' > $@

.PHONY: all clean

include depend.mk
This works, but the include line generates a warning message:
$ make
Makefile:13: depend.mk: No such file or directory
Creating depend.mk
SUCCESS is 1
I would like to silence that first warning line saying that depend.mk doesn't exist. I know it doesn't exist, since I have a rule written to generate it, so the warning is unnecessary (unless of course there isn't a rule for it). I do NOT want make to ignore the error when the included file doesn't exist and there is no rule for it, so prefixing include with a - to ignore the error will not work for me. I'd like something similar to bash's convention of redirecting stderr to /dev/null, like some_cmd 2>/dev/null, but for includes in make.
The sample above is a very simplified example of this case. In my actual project there are a lot of automatically generated makefiles (via clang's automatic dependency generation) being included, meaning a fresh run of make will flood my screen with these warning messages.
Is anything like this possible, or am I just going to have to deal with the annoying warning messages?

I've encountered and (re-re-re-re-)solved this problem a number of times myself. Really, the problem is in the thinking surrounding when the dependency files are generated and used.
This link has the detailed description of the "resolution": http://make.mad-scientist.net/papers/advanced-auto-dependency-generation/
Basically it comes down to the fact that dependency files are really only necessary for rebuilding, not for the initial build of your library/executable. As a result, you don't need a rule for generating dependency files up front (which is in fact less efficient); instead, you should generate them during the object-file step, as intermediate files marked precious (so they're created and tracked as side-effect files that are never cleaned up automatically). Subsequent builds will then have the files available, which is exactly what you were trying to achieve overall. You can then use -include on the dependency files, with the foreknowledge that your object-file build step will fail if the dependency-file generation fails, giving an immediate error, as you've mentioned is preferred, rather than an obscure and indirect one much later.
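In GNU Make terms, the pattern from that article boils down to something like the following sketch. SRCS, CC, CFLAGS, and the .deps directory name are illustrative here, and the -MT/-MMD/-MP/-MF flags assume a gcc/clang-style compiler:

DEPDIR := .deps
DEPFLAGS = -MT $@ -MMD -MP -MF $(DEPDIR)/$*.d

# Dependency files are written as a side effect of compiling, so no
# separate up-front dependency-generation rule is needed.
%.o: %.c $(DEPDIR)/%.d | $(DEPDIR)
	$(CC) $(DEPFLAGS) $(CFLAGS) -c -o $@ $<

$(DEPDIR): ; @mkdir -p $@

DEPFILES := $(SRCS:%.c=$(DEPDIR)/%.d)

# Empty rule plus .PRECIOUS: a missing .d file is not an error on a
# fresh build, and make never deletes the .d files as intermediates.
$(DEPFILES):
.PRECIOUS: $(DEPDIR)/%.d

-include $(DEPFILES)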
I've actually done a couple rather large build systems implementing this method, and it does work quite well, including ones that used non-GNU toolchains. To an outside user it appears identical, but internally it performs more efficiently and isn't hiding potentially important errors.

I tried many (many!) things to see if I could prevent or redirect the error message. No luck.
But when I tried -include (include with a leading dash), it didn't give an error, and make with clean, all, depend.mk, and the default target all worked properly and as expected.
Is there a particular reason you didn't want to use the -include variant? It seems to do exactly what you're looking for, and it doesn't alter how the Makefile works in any way; it just doesn't show the error during the first pass through the Makefile.
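For reference, the only change to the sample makefile above would be its last line:

-include depend.mk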

Related

How to get make to fail if generation of a required include passes but the file is not actually created

I have a makefile that is erroneously generating a required makefile include file. The included file does not initially exist, but there is a rule to make it. The rule runs successfully, but because of a bug, the required include file is not created where expected (and thus make is unable to include it). However, instead of make failing (due to the fact that the file still can't be included), make completes successfully.
The following is my makefile1.mak file.
include myfile.mak

default:
	@echo hi

myfile.mak:
	@echo hello
When I execute 'make -f makefile1.mak', I get:
makefile1.mak:1: myfile.mak: No such file or directory
hello
hi
Of course, I finally figured out that my code to generate myfile.mak was not generating it correctly, but the actual makefiles that I'm using are hundreds of lines long, so we didn't notice that the include wasn't happening for quite a while (it was a very subtle build issue that was introduced).
So, my question is - is there any way to get make to fail on the above example?
Add a line to the rule:
myfile.mak:
	do various things to build myfile.mak
	test -f $@

Including an automatically generated makefile as a trick to enforce execution of a recipe

I'm slowly losing my mind here. First, let me describe what it is I'm trying to do. We have a compiler that spews out weirdly formatted dependency files. To get these makefiles into a format GNU Make can understand, they need to be processed by a Perl script first. Technically, the Perl script doesn't convert the input dependency files it gets passed; instead it creates a new, properly formatted dependency file for each input dependency file.
Now, in order for GNU Make to know which translation units need recompiling and which don't, it obviously must have seen those dependency files before trying to make the translation unit targets, so we have the following line in our master makefile:
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
where $(PROCESSED_EXISTING_DEPENDENCY_FILES) is a list of all converted dependency files. My idea was to (ab-)use an automatically generated makefile whose recipe not only builds that makefile but also triggers the creation of all dependency files mentioned in the $(PROCESSED_EXISTING_DEPENDENCY_FILES) list and include that makefile just before including the converted dependency files. To ensure that the conversion takes place, the parent process of our Make process will delete the automatically created makefile first (we have a Perl wrapper process controlling GNU Make). The relevant part in the master makefile would look like this:
# Phony target that creates processed dependency files.
CONVERTED_EXISTING_DEPENDENCY_FILES:
	<recipe here>

$(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK): CONVERTED_EXISTING_DEPENDENCY_FILES
	$(info $(TARGET_BUILD_MESSAGE_PREFIX) Building $(notdir $@) ...)
	$(file >$@,# Automatically generated makefile that gets included before including the existing, converted dependency files.)
	$(file >>$@,$(DOLLAR)(info Including pre-converted-dependency-files-inclusion hook file ...))
	$(file >>$@,)

include $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK)
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
We're already using the same basic principle in several other cases, and so far this has worked perfectly fine, but for some reason when I try this, GNU Make gets lost in an infinite loop where it will continuously re-evaluate the master makefile, include all other makefiles, and then go back to re-evaluating the master makefile again.
The $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK) does get created, and if there are any dependency files to be converted, they are processed, too, but I'm still at a loss as to what causes this infinite loop in Make. We are using GNU Make 4.2.1 for Windows on a Windows 10 (64 bit) system.
I recommend you rework your model completely to avoid any recipes that know how to build included files, and instead follow the model for auto-dependency generation described in this post (based on how automake handles dependency generation).
Then add the postprocessing step directly into the same recipe that generates the dependency files, rather than having a separate rule that does it. I don't think it's necessary to have two separate rules, because you really don't want the intermediate step here: you just want to generate the make prerequisite definitions... similar to how we normally don't have separate rules for preprocessing, compiling, and assembling object files: one rule does all of that even though there are multiple steps involved.
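A hypothetical sketch of that combined recipe, so no included file has a build rule of its own, which is the model the linked post recommends. WEIRD_CC, convert_deps.pl, the .rawdep suffix, and SRCS are all placeholder names, assuming the compiler emits its oddly formatted dependencies alongside the object file:

# One rule compiles and postprocesses; if the conversion fails, the
# object-file rule fails immediately.
%.o: %.c
	$(WEIRD_CC) $(CFLAGS) -c -o $@ $<     # also writes $*.rawdep
	perl convert_deps.pl $*.rawdep > $*.d # convert to GNU Make syntax
	rm -f $*.rawdep

-include $(SRCS:.c=.d)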

Generate include files for the Makefile by the same Makefile

In my program, I have a somewhat complicated build process. Currently, in one directory I use include in Makefile.am with a file that does not exist but has to be built on its own. The reason is that this include file is quite long. Further, in the real program it is not just one file but several, and the generation process for these files can change from time to time.
The Makefile.am looks something like this
noinst_LIBRARIES = libtest.a
nodist_libtest_a_SOURCES = file.c
CLEANFILES = file.c Make_file.mk

$(builddir)/Make_file.mk: $(srcdir)/Perl/generate_mk_files.pl
	perl $(srcdir)/Perl/generate_mk_files.pl file

include $(builddir)/Make_file.mk
Once created, Make_file.mk looks something like this:
$(builddir)/file.c: $(srcdir)/file.template $(srcdir)/Perl/generate_c.pl
	perl $(srcdir)/Perl/generate_c.pl $(srcdir)/file.template
Automake works, and the final build process does as well. The output of make is something like this (I have shortened it somewhat):
Makefile:721: Make_file.mk: Datei oder Verzeichnis nicht gefunden (file not found)
perl ../../../../src/components/test/Perl/generate_mk_files.pl test
perl ../../../../src/components/test/Perl/generate_c.pl ../../../../src/components/test/file.template
Therefore, make first complains that the include file is not found, then creates it, and then also follows the rules of the included file.
Although I am happy that it works, I wonder why. First, I thought that make loads the Makefile only once. During this step, Make_file.mk does not exist. Therefore it seems the Makefile is loaded more than once.
Further, the manual of Automake for include states:
Note that these fragments are read and interpreted by automake, not by
make.
Which is not what I see, since the included fragment does not exist during the execution of Automake.
My questions basically are:
Why does it work?
Is this the correct way to do this, or should I use another approach, e.g. starting new instances of make within the Makefile?
I don't really know Automake, but from the GNU make manual:
If an included makefile cannot be found in any of these directories {standard include directories}, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the include continues.
Once it has finished reading makefiles, make will try to remake any that are out of date or don't exist. See How Makefiles Are Remade. Only after it has tried to find a way to remake a makefile and failed, will make diagnose the missing makefile as a fatal error.
If you want make to simply ignore a makefile which does not exist or cannot be remade, with no error message, use the -include directive instead of include, like this:
-include filenames…
This acts like include in every way except that there is no error (not even a warning) if any of the filenames (or any prerequisites of any of the filenames) do not exist or cannot be remade.
So basically, make cannot execute the recipe for remaking the included file before it has finished parsing the main Makefile. So it raises a warning, continues reading the Makefile, finds the rule for remaking the included file, remakes it, and then restarts itself (this is explained in detail in the How Makefiles Are Remade section).
Going back to the manual, it states that there are two forms for automake's include mechanism:
include $(srcdir)/file
and
include $(top_srcdir)/file
neither of which matches your include. So I'd imagine the include is actually run by the underlying make, whatever that might be (e.g. GNU Make, though of course other make programs have this functionality as well).
Now for the questions:
Why does it work?
As explained in another answer, GNU Make will attempt to remake a missing included makefile before failing.
Is this the correct way to do this, or should I use another approach, e.g. starting new instances of make within the Makefile?
Generating makefiles is one of the tasks autotools do, either through autoconf or automake. Going through multiple stages of "making makefiles" seems prone to error (and hard to maintain). Recursive make has similar problems.
The reason is that this include file is quite long.
automake include statements will happily paste together a large makefile out of smaller components.
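For example, with hypothetical fragment names:

# In Makefile.am; automake pastes these fragments in at automake time:
include $(srcdir)/sources.mk
include $(top_srcdir)/common-rules.mk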
Further, in the real program it is not just one file but several, and the generation process for these files can change from time to time.
It's hard to recommend an autotools approach without knowing how the "changes" are determined. Since it seems you're also using libtool, adding/removing sources to libraries (or entire libraries) can be handled there with conditionals, variables, etc.

Avoid regenerating files that won't change

I have a Makefile with several rules of this form
protolist.c: $(PROTOCOLS) Makefile src/genmodtable.sh
	$(SHELL) $(srcdir)/src/genmodtable.sh \
	    $@ $(filter-out %Makefile %genmodtable.sh, $^)
As the name implies, protolist.c winds up containing a list of all the "protocols" defined by the .c files in $(PROTOCOLS). The contents of this file do formally depend on everything in $(PROTOCOLS), the Makefile, and the generator script, but it's very rare for the file to actually change when one of those .c files is edited. Therefore, genmodtable.sh is coded to not change the timestamp of protolist.c if it's not going to make any change to its contents. This causes Make to skip rebuilding protolist.o and its dependencies when it's not really necessary.
That all works fine; the problem is that, because protolist.c now appears to be out of date with respect to its dependencies, Make thinks it has to try to regenerate protolist.c on every run. This isn't a performance issue -- the script is very fast -- but it is confusing behavior. I dimly recall an idiom, involving timestamp files, that could be used to stop Make from doing this, but I have not been able to reconstruct it or find it described anywhere. Does anyone know what it is?
(Also if anyone can suggest how to get rid of that silly $(filter-out ...) construct, that would be helpful, as that is the only GNUmakeism in this Makefile.)
This appears similar to an issue with Fortran programming and make, involving the files generated when compiling a module. (Not relevant, other than that is where I picked up how to do this.)
What you want is to have make compare the timestamp of protolist.o to the timestamp of protolist.c, which remains 'old', and to make the decision to run the recipe for protolist.c depend on the timestamp of, well, a timestamp file, which gets updated each time the recipe is run.
In order to make this work, you have to link the two together with an empty rule.
protolist.o: protolist.c
[...]

protolist.c: protolist.c.time ;

protolist.c.time: $(PROTOCOLS) Makefile src/genmodtable.sh
	$(SHELL) $(srcdir)/src/genmodtable.sh \
	    protolist.c $(filter-out %Makefile %genmodtable.sh, $^)
	touch protolist.c.time
In my own makefiles, I have to declare the timestamp files as prerequisites of the special target .PRECIOUS, to prevent make from deleting them, but I'm using pattern rules; I'm not 100% sure, but I think this isn't necessary when using explicit rules, like here.
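For completeness, that declaration is a single line (the pattern shown is illustrative):

.PRECIOUS: %.time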
To avoid the $(filter-out ...) construct, can you not simply replace it with $(PROTOCOLS)?
(Although, personally, I would stick to Paul's First Rule of Makefiles: Don't hassle with writing portable makefiles, use a portable make instead.)

Makefile: need to do a target before including another makefile

Part of my Makefile:
CPUDEPS=./mydeps.cpu
(...)
deps: $(CPUDEPS)
$(CPUDEPS): $(CCFILES)
#echo [DEPS] CPU
$(CMDECHO)makedepend -Y -s'# CPU sources dependencies generated with "make deps"' \
-w4096 -f- -- $(CFLAGS) -- $^ 2> /dev/null > $(CPUDEPS)
(...)
sinclude $(CPUDEPS)
Problem 1: includes are done during the first phase of processing, targets during the second phase; so, if ./mydeps.cpu doesn't exist and I run "make deps", I first get the error
Makefile:335: ./mydeps.cpu: No such file or directory
I hide the error using sinclude instead of include, but the problem is still there: the old file is included, not the just-generated one. I have to run it twice to include the updated file. This is because make does two-phase processing; is there any way to tell make to complete the target deps before parsing the includes?
Problem 2: even if the file ./mydeps.cpu doesn't exist and make deps actually creates it, I always get "make: Nothing to be done for 'deps'". This doesn't happen with other targets. I don't understand why, or how to avoid it.
Problem 1 is non-existent: before building a target, make automatically rebuilds makefiles (with implicit rules if no explicit rule is provided). So having a rule for the makefile ensures that it will always be up to date, and there is no need to run deps twice. Additionally, since CPUDEPS is a makefile, it will be updated automatically before any other rule is run, so dependencies will always be updated if necessary and make deps is not needed. You can probably notice this yourself by observing the [DEPS] line being echoed if any of the CCFILES becomes more recent than the dependency file.
For Problem 2, adding anything to the recipe ensures that make doesn't complain about having nothing to do. If there is nothing else to run, you can use something like @echo OK to give feedback to the user, or a simple @true if you prefer totally silent makes.
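Applied to the deps target from the question, that would be something like:

deps: $(CPUDEPS)
	@echo OK   # or @true for a totally silent make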
What you are trying to achieve is useless: you can use the dependencies file that was created during the previous build. That's enough.
The main reasoning behind that rule is:
if you haven't changed any of your files, then the dependencies file is up-to-date, and there's nothing to build.
if you have changed anything, even very deep in your #include chain, in an existing file that was used by the previous build, then the dependencies file has already caught it. You'll rebuild what is needed.
if you change something in a new file (you add that file!), then it was not used by the previous build and is not listed in the dependencies. But to actually use it, you have to modify at least one of the files that was used before, and you're back in the previous case.
The solution is to create the dependencies file during the normal process of the compilation, and to optionally include it (with sinclude) if it is present.
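A minimal sketch of that approach, assuming a gcc/clang-style compiler that supports -MMD/-MF (CCFILES as in the question):

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MF $*.d -c -o $@ $<   # writes $*.d as a side effect

# Optional include: silently skipped on a fresh build, present afterwards.
sinclude $(CCFILES:.c=.d)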
