Can anyone tell me the differences between "make clean" and "make clobber"? I searched but didn't find a useful answer.
Neither of these terms has a firmly established meaning. The Makefile author creates targets and gives them human-readable names; conventionally, clean removes the intermediate products of a previous build, while clobber is usually a more aggressive clean that also removes final outputs and generated files, returning the tree to a pristine state. The latter is less commonly seen or necessary.
In each case, for each Makefile, you should read any accompanying build documentation, or examine the Makefile and understand what it does, or, perhaps as a last resort, ask the author.
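For illustration, here is one common (but by no means standardized) shape for the two targets, exercised in a throwaway directory. The target names follow convention, but the file lists are invented for the sketch, and cp/touch stand in for a real build:

```shell
# Sketch of the conventional split: "clean" removes intermediate
# products, while "clobber" also removes the final outputs.
dir=$(mktemp -d)
cd "$dir"
printf 'clean:\n\trm -f *.o\n\nclobber: clean\n\trm -f myprog\n\n.PHONY: clean clobber\n' > Makefile
touch a.o b.o myprog                  # simulate leftovers from a previous build
make clean >/dev/null
after_clean=$([ ! -f a.o ] && [ -f myprog ] && echo ok || echo fail)
make clobber >/dev/null
after_clobber=$([ ! -f myprog ] && echo ok || echo fail)
echo "clean: $after_clean, clobber: $after_clobber"
```

Note that clobber depends on clean here, so it removes everything clean does plus the final binary; that layering is a common pattern, not a rule.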
Related
If I have a Makefile, I can type make, change some source code, type make again and be sure that it won't needlessly rebuild the entire project; it'll just build the part that changed. Great.
How about if I want to avoid that first call to make, because building everything in the Makefile, which may include many distinct, relatively unrelated binaries, is needlessly expensive or even impossible? Suppose I just typed make clean and changed some source code. I can build the specific thing I changed by saying make that_specific_thing. Is there a way to say "build that specific thing, but also anything that depends on it"?
To make that more concrete: let's say I changed myLibrary.cpp, and now I want a way to say "build everything that uses myLibrary.cpp", for example some unittest.cpp that I've never heard of or seen in the Makefile before. I don't want to say make my_library my_unittest, because that requires me, the human, to figure out what should be built.
Is there a way to do this automatically? If yes, does it require structuring my Makefile in an atypical way?
It sounds as if your default target is all; I'll assume that.
make clean # you know what this does
make -t # this doesn't actually run any recipes, it just touches all targets
make -W that_specific_thing # "what if" mode: run as if that file were new
EDIT: If you also want to rebuild the prerequisites of that_specific_thing, you must add one more step:
make clean # whether you still need this depends on what you're trying to do
make that_specific_thing
make -t
make -W that_specific_thing
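To see the edited sequence work end to end, here is a disposable example project (file names and recipes are invented; cp and cat stand in for compilation and linking). After the sequence, app has genuinely been rebuilt from the changed lib.o, while the unrelated other was only touched, never built:

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'all: app other\n\napp: lib.o\n\tcat lib.o > app\n\nlib.o: lib.c\n\tcp lib.c lib.o\n\nother: other.c\n\tcp other.c other\n\nclean:\n\trm -f app other lib.o\n' > Makefile
echo 'library code' > lib.c
echo 'other code' > other.c
make clean >/dev/null
make lib.o >/dev/null            # really build the thing that changed
sleep 1                          # keep timestamps unambiguous
make -t >/dev/null               # mark everything else up to date without building it
make -W lib.o >/dev/null         # "what if lib.o were new": rebuild only its dependents
app_content=$(cat app)           # app was genuinely rebuilt from lib.o
other_built=$([ -s other ] && echo yes || echo no)  # other is an empty touched file
echo "app: $app_content; other built: $other_built"
```

The price of this trick is that the targets make -t touched are bogus empty files, which is exactly the "as if" nature of the answer.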
I have had several people tell me at this point that eval is evil in makefiles. I originally took their word for it, but now I'm starting to question it. Take the following makefile:
%.o:
	$(eval targ=$*)
	echo making $(targ)

%.p:
	echo making $*
I understand that if you then did make "a;blah;".o, then it would run blah (Which could be an rm -rf \, or worse). However, if you ran make "a;blah;".p you would get the same result without the eval. Furthermore, if you have permissions to run make, you would also have permissions to run blah directly as well, and wouldn't need to run make at all. So now I'm wondering, is eval really an added security risk in makefiles, and if so, what should be avoided?
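To see the point concretely, here is a disposable demonstration of the eval-free %.p case (directory and file names invented; a harmless ls plays the role of blah). A file named INJECTED shows up in the output only if the smuggled command actually ran:

```shell
dir=$(mktemp -d)
cd "$dir"
# The eval-free %.p rule from the question
printf '%%.p:\n\techo making $*\n' > Makefile
touch INJECTED                        # a file name that will appear only if "ls" runs
# A crafted target name smuggles a second shell command into the recipe:
out=$(make 'a;ls;.p')
ran=$(printf '%s\n' "$out" | grep -c '^INJECTED$')
echo "smuggled command ran: $ran times"
```

The stem expands to a;ls; inside the recipe, so the shell sees two commands, exactly as the question describes, with no eval anywhere in sight.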
Why is eval evil?
Because it grants the whole power of the language to input that you don't actually want to have that power.
Often it is used as "poor man's metaprogramming": construct a piece of code, then run it. It typically looks like eval("do stuff with " + thing), where thing is only known at runtime because it is supplied from outside.
However, if you don't make sure that thing belongs to some tiny subset of language you need in that particular case (like, is a string representation of one valid name), your code would grant permissions to stuff you didn't intend to. For example, if thing is "apples; steal all oranges" then oranges would be stolen.
If you do make sure that thing belongs to the subset of the language you actually need, then two problems arise:
You are reimplementing language features (parsing source), which is not DRY and is often a sign of misusing the language.
If you resort to this, it means simpler means were not suitable: your use case is somewhat complicated, and that makes validating your input harder.
Thus it's really easy to break security with eval, and taking enough precautions to make it safe is hard; that's why, if you see an eval, you should suspect a possible security flaw. That's just a heuristic, not a law.
eval is a very powerful tool, as powerful as the whole language, and it's far too easy to shoot yourself in the foot with it.
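The same failure mode can be shown in shell, which has its own eval. This is a sketch: the log function and the input string are invented stand-ins for "some command" and "data supplied from outside":

```shell
# "thing" stands in for input that arrives at runtime from outside.
thing='apples; echo STOLEN'
log() { echo "log: $*"; }

# Poor man's metaprogramming: build a command string, then eval it.
evil=$(eval "log counting $thing")    # the payload runs as code
# Without eval, the same input is passed along as inert data.
safe=$(log counting "$thing")

evil_ran=$(printf '%s\n' "$evil" | grep -c '^STOLEN$' || true)
safe_ran=$(printf '%s\n' "$safe" | grep -c '^STOLEN$' || true)
echo "payload executed with eval: $evil_ran, without eval: $safe_ran"
```

With eval, the semicolon in thing becomes a statement separator and the oranges get stolen; without it, the whole string is just an argument.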
Why this particular use of eval is not good?
Imagine a task that requires running some steps that depend on a file, where the task can be done with various files (say, a user supplies a VirtualBox image of a machine that is to be brought up and integrated into the existing network infrastructure).
Imagine a lazy administrator who automated this task: all the commands are written in a makefile, because it fits better than a shell script (some steps depend on others and sometimes don't need to be re-done).
The administrator made sure that all the commands are correct and granted sudoers permission to run make with that specific makefile. Now, if the makefile contains a rule like yours, then by using a properly crafted name for your VirtualBox image you could pwn the system, or something like that.
Of course, I had to stretch far to make this particular case a problem, but it's a potential problem anyway.
Makefiles usually offer simple contracts: you name the target and some very specific stuff - written in makefile - gets done. Using eval the way you've used it offers a different contract: the same stuff as above but you also can supply commands in some complicated way and they would get executed too.
You could try patching the contract by making sure that $* would not cause any trouble. Describing what that means exactly could be an interesting exercise in language if you want to keep as much flexibility in target names as possible.
Otherwise, you should be aware of the extended contract and avoid solutions like this in cases where that extension would cause problems. If you intend your solution to be reusable by as many people as possible, you should make its contract cause as few problems as possible, too.
GNU Make under MinGW is known to be very slow under certain conditions due to how it executes implicit rules and how Windows exposes file information (per "MinGW “make” starts very slowly").
That previous question and all other resources on the issue that I've found on the internet suggest working around the problem by disabling implicit rules entirely with the -r flag. But is there another way?
I have a "portable" Makefile that relies on them, and I'd like to keep it from taking around a minute to start up each time, without having to get the Makefile's owner to alter it just for me.
You should use make -d to see everything make is doing and try to see where the time is going. One common reason for lengthy make times is match-anything rules, which are used (among other things) to determine whether or not a makefile needs to be rebuilt. Most of the match-anything rules can be removed; they're rarely needed anymore.
You can add this to your makefile and see if it helps:
%:: %,v
%:: RCS/%,v
%:: RCS/%
%:: s.%
%:: SCCS/s.%
And, if you don't need to auto-create your makefile you can add:
Makefile: ;
(also put any included makefiles there that you don't need to auto-create).
ETA
It seems your real question can be summed up as, "why does make take so much longer to start on Windows than on Linux, and what can I do to fix that without changing makefiles?"
The answer is, nothing. Make does exactly the same amount of work on both Windows and Linux: there are no extra rules or procedures happening on Windows that could be removed. The problem is that Windows NTFS is slower than typical Linux filesystems for these lookups. I know of no system setting, etc. that will fix this problem. Your only choice is to get make to do less work so that it's faster, and the only way to do that is by removing built-in rules you don't need.
If the problem is you really don't want to edit the actual makefiles, that's simple enough to solve: just write the rules above into a small separate makefile, maybe something like speedup.mk, then set the environment variable MAKEFILES=speedup.mk before invoking make. Make will parse that makefile as well without you having to change any makefiles.
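As a sketch, the whole arrangement looks like this (the file name speedup.mk is from the answer; the Makefile contents are invented, and canceling works because redefining a built-in pattern rule with no recipe removes it):

```shell
dir=$(mktemp -d)
cd "$dir"
# speedup.mk cancels the built-in RCS/SCCS match-anything rules and
# short-circuits makefile remaking, without touching the real Makefile.
printf '%%:: %%,v\n%%:: RCS/%%,v\n%%:: RCS/%%\n%%:: s.%%\n%%:: SCCS/s.%%\nMakefile: ;\n' > speedup.mk
printf 'all:\n\t@echo built\n' > Makefile
# MAKEFILES is read from the environment, so no makefile edits are needed.
out=$(MAKEFILES=speedup.mk make)
echo "$out"
```

The project's own Makefile is untouched; make simply parses speedup.mk first on every invocation.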
I'm aware of the idea of using recursive makefiles. Will the targets built by a sub-makefile such as the following be updated solely on changes to that sub-makefile itself?
e.g.:
# parent makefile. no changes here.
subsystem:
	cd subdir && $(MAKE)
If the makefile within subdir was changed such that the following does not hold (e.g. only a gcc flag was changed), then will the object files be updated?
The recompilation must be done if the source file, or any of the
header files named as dependencies, is more recent than the object
file, or if the object file does not exist.
The only reason that, as written, make even runs that rule at all is that the target subsystem and the directory subdir do not match, so no file named subsystem exists.
If a subsystem file or directory were ever to be created in that directory that rule would cease to function.
If .PHONY: subsystem were added, that problem would be fixed, and the rule would always run when listed on the command line (i.e. make subsystem). (As indicated in the comments, .PHONY is a GNU Make extension. The section following the linked section discusses a portable alternative, though it is worth noting that they are not completely identical: .PHONY has some extra benefits and some extra limitations.)
In neither of those cases is the subsystem target paying any attention to modification dates of anything (as it lists no prerequisites).
To have a target depend on changes to a makefile you need to list the makefile(s) as prerequisites like anything else (i.e. subsystem: subdir/Makefile). Listing it as .PHONY is likely more correct and more what you want.
No, nothing in make itself tracks non-prerequisites. So flag changes/etc. do not trigger rebuilds. There are ways to make that work for make however (they involve storing the used flags in files that themselves are prerequisites of the targets that use those flags, etc.). There are questions and answers on SO about doing that (I don't have them ready offhand though).
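A minimal sketch of that flag-file technique (all names are invented; cp stands in for the compiler, and build.log just counts how many times the recipe really ran). The trick is a force-run target whose recipe rewrites the flag file only when the flags actually differ, so its timestamp changes only on a real flag change:

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'CFLAGS ?= -O2\n\ncompile_flags: force\n\t@echo $(CFLAGS) | cmp -s - $@ 2>/dev/null || echo $(CFLAGS) > $@\n\nfoo.o: foo.c compile_flags\n\t@echo rebuilt >> build.log\n\t@cp foo.c foo.o\n\nforce: ;\n.PHONY: force\n' > Makefile
echo 'int x;' > foo.c
make foo.o >/dev/null            # first build
make foo.o >/dev/null            # same flags: compile_flags untouched, no rebuild
sleep 1                          # keep timestamps unambiguous
CFLAGS=-O0 make foo.o >/dev/null # changed flags: compile_flags rewritten, rebuild
rebuilds=$(wc -l < build.log | tr -d ' ')
echo "real rebuilds: $rebuilds"
```

foo.o is rebuilt exactly twice: once initially and once after the flag change, but not for the no-change run in between.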
Other tools do handle flag changes automatically however. I believe Electric Cloud's tools do this. I believe CMake does as well. There might also be others.
Recursive sub-makes are invoked whether or not anything changed. This is exactly one of the objections pointed out by Peter Miller in his Recursive Make Considered Harmful paper from almost 20 years ago.
With that said, a makefile is just like any other dependency and can be added to a production rule to trigger that rule if the makefile is altered.
You can include the makefile as a dependency, the same as any other file:
mytarget.o: mytarget.c Makefile
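A disposable demonstration that listing the makefile as a prerequisite behaves as claimed (names invented; cp stands in for the compiler, and a log file counts real builds):

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'mytarget.o: mytarget.c Makefile\n\t@cp mytarget.c mytarget.o\n\t@echo built >> log\n' > Makefile
echo 'int x;' > mytarget.c
make mytarget.o >/dev/null       # builds once
make mytarget.o >/dev/null       # nothing changed: no rebuild
sleep 1
touch Makefile                   # simulate editing the makefile (e.g. a flag change)
make mytarget.o >/dev/null       # Makefile is now newer than mytarget.o: rebuild
builds=$(wc -l < log | tr -d ' ')
echo "builds: $builds"
```

The middle invocation does nothing; only the initial build and the one triggered by touching the Makefile run the recipe.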
We have unit tests in our project, and they run very slowly. The main reason for this, as far as I can tell, is that each subdir runs serially. There is no reason for this, and I'd like to modify things so that each subdirectory is processed in parallel.
I found this question but it seems that the accepted answer is for how to specify this in your makefile, and not the makefile.am. I tried just adding the solution to my Makefile.am and it didn't seem to make a difference. Is this the correct way to do it at a Makefile.am level? If so, any advice for what I could be doing wrong? If not, please show me the path of truth :-)
In answer to my question, things from Makefile.am are translated fairly directly to the Makefile, so the changes in the original question can be made in Makefile.am. The only part I'm not 100% confident on is whether or not SUBDIRS (as it has special meaning) can get mangled in the autotools process. At any rate, processing the SUBDIRS in parallel is perhaps not typically the answer.
The way I solved this was to use a separate target for the directories I wanted processed in parallel, and I bet that this is typically the correct answer. There may well be some way to get the SUBDIRS to be processed this way, but using a separate target was pretty easy to get working for me, and at least for what I was trying to do a separate target was more appropriate.
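Here is a sketch of that separate-target approach in plain make terms (target and directory names are invented; in a Makefile.am the analogous rule would be written by hand in the same way, on the assumption that automake copies it through to the generated Makefile):

```shell
dir=$(mktemp -d)
cd "$dir"
mkdir sub1 sub2
printf 'check:\n\t@echo checked-%s\n' sub1 > sub1/Makefile
printf 'check:\n\t@echo checked-%s\n' sub2 > sub2/Makefile
# A separate check-parallel target fans out one check-<dir> job per
# subdirectory, so "make -j" can run the directories concurrently.
printf 'SUBDIRS = sub1 sub2\n\ncheck-parallel: $(SUBDIRS:%%=check-%%)\n\ncheck-%%:\n\t@$(MAKE) -s --no-print-directory -C $* check\n\n.PHONY: check-parallel\n' > Makefile
out=$(make -s -j2 check-parallel | sort | tr '\n' ' ')
echo "$out"
```

Because the check-sub1 and check-sub2 prerequisites are independent, make -j is free to run the two sub-makes at the same time, which is exactly the serial bottleneck being removed.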