Yes, that is the question: how can I prevent make (or rather the compiler) from recompiling already existing libraries (.o files)?
I call the makefile simply via make [-parameters], and if possible I would rather not rewrite the makefile itself (the reason being that I have to work with a lot of them).
Edit: Whoops, sorry, yes, I meant the object-files.
The problem is, I have a directory with several hundred .mk files for several hundred sub-programs/libraries/whatever. I call the .mk files in a loop:
foreach i (`ls *.mk`)
	make CFLAGS='[some parameters]' -f $i -B
end
Now when I do it like this, every time a makefile gets called it recompiles not only the program but also, for each .mk file, the needed libraries and objects, which slows down the whole process a lot. So I don't want it to recompile these.
These makefiles were not written by me, and I really don't want to edit several hundred of them, so I'm asking whether there is a parameter that specifically prohibits rebuilding/overwriting the existing files.
I have a conditional makefile (well, actually I am dealing with the arch file that gets pulled in when invoking make) that is quite involved, and I would like to preprocess it to get rid of all the 'ifeq'/'ifneq' parts that only hurt readability, so I can see better what is actually being done. I tried doing
make -n -d
which shows me the full calls to the compiler, but that is also a pain since I then need to separate all the flags manually. I just want to get a clean makefile with my separate FLAGS, DFLAGS, LIBS assignments, etc.
(My apologies if this has been said anywhere, but I am unable to find it).
Thanks!
GNU Make under MinGW is known to be very slow under certain conditions due to how it executes implicit rules and how Windows exposes file information (per "MinGW “make” starts very slowly").
That previous question and all other resources on the issue that I've found on the internet suggest working around the problem by disabling implicit rules entirely with the -r flag. But is there another way?
I have a "portable" Makefile that relies on them, and I'd like to make it so that it does not take around a minute to start it up each time, rather than having to get the Makefile owner to alter it just for me.
You should use make -d to see all the things make is doing and try to see where the time is going. One common reason for lengthy make times is match-anything rules, which are used to determine whether or not a makefile needs to be rebuilt. Most of the match-anything rules CAN be removed; they're rarely needed anymore.
You can add this to your makefile and see if it helps:
# Cancel make's built-in match-anything rules that look for
# RCS/SCCS-controlled versions of every target:
%:: %,v
%:: RCS/%,v
%:: RCS/%
%:: s.%
%:: SCCS/s.%
And, if you don't need to auto-create your makefile you can add:
Makefile: ;
(also put any included makefiles there that you don't need to auto-create).
ETA
It seems your real question can be summed up as, "why does make take so much longer to start on Windows than on Linux, and what can I do to fix that without changing makefiles?"
The answer is, nothing. Make does exactly the same amount of work on both Windows and Linux: there are no extra rules or procedures happening on Windows that could be removed. The problem is that Windows NTFS is slower than typical Linux filesystems for these lookups. I know of no system setting, etc. that will fix this problem. Your only choice is to get make to do less work so that it's faster, and the only way to do that is by removing built-in rules you don't need.
If the problem is that you really don't want to edit the actual makefiles, that's simple enough to solve: just write the rules above into a small separate makefile, maybe something like speedup.mk, then set the environment variable MAKEFILES=speedup.mk before invoking make. Make will parse that makefile as well without you having to change any makefiles.
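For illustration, a speedup.mk along these lines (the name is just the example from above) could simply collect the rules already shown:

# speedup.mk -- read in via the MAKEFILES environment variable.
# Cancel the built-in RCS/SCCS match-anything rules ...
%:: %,v
%:: RCS/%,v
%:: RCS/%
%:: s.%
%:: SCCS/s.%
# ... and don't try to remake the makefile itself.
Makefile: ;

Then something like MAKEFILES=speedup.mk make (or set MAKEFILES=speedup.mk first in a Windows cmd shell) applies these rules to every makefile you invoke, without editing any of them.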
So I have a script, myscript.py, that produces a few output files: out/a.pickle, out/b.pickle, and out/c.pickle.
And I have a Makefile that has the rule:
out/a.pickle: data/data.csv
	myscript.py
Now, if I update the script, firstly, make out/a.pickle says there's nothing to be done, even though the script has been modified. Isn't make supposed to check whether things have been updated and then run them? Do I need to add myscript.py as a dependency of out/a.pickle, or something?
Secondly, is there a way to handle the fact that the script has multiple output files? Do I need to create a rule for each?
Make does not examine time stamps on executables. Otherwise, you would have to recompile the universe if gcc or echo or the shell is upgraded, and it's a slippery slope anyway; what if libraries or the kernel also changed in a way which requires you to recompile? You need human intervention at some point anyhow. So the designers of make simply drew the line at explicit dependencies.
(GNU Make has a lot of other built-in implicit dependencies, which are convenient. I vaguely believe that the original make didn't have any built-in dependencies at all. Anybody able to confirm?)
You can declare all the outputs in one rule:
out/a.pickle out/b.pickle out/c.pickle: myscript.py data/data.csv
	./$^   # $^ is all the prerequisites, so this runs ./myscript.py data/data.csv
(Notice how the script is included in the dependencies now. You might want to change that after the script is considered stable. Then you'll need to change the action as well.)
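A caveat worth knowing: a multi-target rule like this is treated as separate rules that happen to share a recipe, so make can run the script once for each output it decides to rebuild. A minimal sketch, assuming GNU Make 4.3 or newer (which added grouped targets written with &:), that declares one run produces all three files:

# Grouped targets (&:), GNU Make 4.3+: a single run of the recipe
# is understood to produce all three outputs, so the script runs once.
out/a.pickle out/b.pickle out/c.pickle &: myscript.py data/data.csv
	./$^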
We have unit tests in our project, and they run very slowly. The main reason for this, as far as I can tell, is that each subdirectory runs serially. There is no reason for this, and I'd like to modify things so each subdirectory is processed in parallel.
I found this question, but it seems the accepted answer explains how to specify this in the Makefile itself, not in Makefile.am. I tried just adding that solution to my Makefile.am and it didn't seem to make a difference. Is this the correct way to do it at the Makefile.am level? If so, any advice on what I could be doing wrong? If not, please show me the path of truth :-)
In answer to my own question: things from Makefile.am are translated fairly directly into the Makefile, so the changes in the original question can be made in Makefile.am. The only part I'm not 100% confident about is whether SUBDIRS (as it has special meaning) can get mangled in the autotools process. At any rate, processing the SUBDIRS in parallel is perhaps not typically the answer.
The way I solved this was to use a separate target for the directories I wanted processed in parallel, and I'd bet that this is typically the correct answer. There may well be some way to get the SUBDIRS to be processed this way, but using a separate target was pretty easy to get working for me, and at least for what I was trying to do a separate target was more appropriate.
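As a rough sketch of that approach in Makefile.am (the directory and target names here are invented), a phony driver target lets make -jN fan out across the test directories:

# Hypothetical Makefile.am fragment: "make -j4 check-parallel" recurses
# into each listed directory concurrently instead of one after another.
PARALLEL_TEST_DIRS = tests/unit tests/integration tests/regression

.PHONY: check-parallel $(PARALLEL_TEST_DIRS)

check-parallel: $(PARALLEL_TEST_DIRS)

$(PARALLEL_TEST_DIRS):
	$(MAKE) -C $@ check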
I'm a Java developer learning C++. I'm using Eclipse as my IDE and MinGW as my toolset. Is it considered a best practice to list every single object in a makefile? Also, is it just as acceptable to use wildcards to include all the files?
The use of wildcards is common, and accepted, but not really good practice.
If extra source files get into your source directories, they could wind up causing conflicts or -- worse -- riding silently in your libraries as useless baggage (introns?). Also, if a needed source file goes missing, your linker will complain about a missing {function|typename|whatever} and it might not be obvious what file has been lost (not really a problem if you have good source control, but still annoying). Finally, if your build system is expected to produce different targets using different subsets of the source files, wildcards will require you to either divide your source directories Venn-diagram-style, or resort to filename conventions that do the same thing (gah!).
Maintaining explicit lists of object files in a makefile really isn't that hard to do, and it keeps things simple.
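As a rough illustration of the difference (the file and program names here are made up):

# Wildcard style: picks up every .c file in src/, wanted or not.
# SRCS := $(wildcard src/*.c)

# Explicit list: only the sources you actually intend to build.
SRCS := src/main.c src/parser.c src/util.c
OBJS := $(SRCS:.c=.o)

myprog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^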