Run several 'make' commands at the same time? - makefile

(make noob here)
I did the following:
Configured a C++ project with CMake.
In one terminal tab, ran make to start building the whole project.
Got bored of waiting for the whole thing to build, figured I could
just make the subfolder I'm working on at the moment.
Without stopping the ongoing build in the first tab, opened a second tab and ran make from said subfolder.
Things looked pretty normal for a short while, then suddenly the second tab started displaying build output related to the whole project, not only to the subfolder. I figured what I tried didn't work as I expected, so I Ctrl-C'd the second tab.
That's when the weirdest thing happened: in the first tab the build output was mixed with lines coming from the specific folder I wanted to build. And the build went way past 100%, up to 128%!
My question is: what exactly does 'make' do when launched more than once at the same time?
Am I correct to think that the multiple make commands were somehow "merged" into the same process?

This is more a question about the makefiles that CMake creates and how they work. It's these makefiles which do things like track the percent complete, etc., not make itself. It's quite possible that by starting a second build in the same directory, you've messed up whatever facilities the CMake makefiles use to track progress.
The short answer is that no, it's not possible for one invocation of make to somehow "take over" or merge with another invocation of make. As far as each make process is concerned, it knows nothing about the other. However, since both are operating on the same filesystem, if one make writes files in a way that confuses the other, you can see strange behaviors.
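To make the "know nothing about each other" point concrete, here is a minimal sketch with hypothetical file names (nothing here is specific to your project): if two make processes are started in the same build directory at about the same time, each one independently checks timestamps, each decides the object is out of date, and each runs the compiler, so the same files get written twice and the output of the two runs interleaves.

# Hypothetical two-target makefile; recipe lines are indented with a tab.
app: app.o
	cc app.o -o app

app.o: app.c
	cc -c app.c -o app.o

# Terminal 1:  make
# Terminal 2:  make    (started while terminal 1 is still compiling)
# Each make independently decides app.o is out of date, so both may run the
# 'cc -c' recipe, writing app.o at the same time and interleaving output.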
The cmake-generated makefiles are very complex; I've never actually tried to understand completely how they work. I've always thought it a shame that no one has tried to implement a CMake GNU Makefile generator in addition to the Unix Makefiles generator, that took full advantage of GNU make features. I'm sure the results would be easier to read and probably faster. But it seems unlikely this will ever happen; CMake users who care more about speed than portability are probably just switching to use Ninja as a generator.
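For anyone who wants to try that, switching generators is a small change on the command line (assuming Ninja is installed; the -S/-B options need CMake 3.13 or newer):

cmake -S . -B build -G Ninja
cmake --build build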

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is that, while we are putting our first program on our Blue Pill, the book advises forcing a rebuild of the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already made? My guess is that it is just to learn how to make an unmade program... but I also wonder whether rebuilding before flashing to the device is best practice. The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more. Lack of trust that the makefile specifies all possible dependencies, perhaps. If the makefile were hand-built without automatically generated dependencies, that is entirely possible. Also, it is easier simply to advise a rebuild than to explain all the situations where it might or might not be necessary, and any such explanation would not be exhaustive.
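For reference, a common way to get automatically generated dependencies with GCC or Clang is the -MMD/-MP pattern; a minimal sketch with hypothetical file names:

# The compiler writes a .d file next to each .o listing the headers it
# actually included, and make re-reads those files on the next run.
SRCS := main.c kbd.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

prog: $(OBJS)
	$(CC) $(OBJS) -o prog

%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

-include $(DEPS)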
From the author's point of view, it eliminates a number of possible build consistency errors that are beyond his control so it ensures you don't end up thinking the book is wrong, when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch: resource files used for code generation by custom tools, for example.
For large projects developed over a long time, some seldom-modified modules may well have been compiled with an older version of the toolchain; a clean build ensures everything is compiled and linked with the current tools.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot determine which objects depend on such a macro, so when building a different variant (switching from a "debug" to a "release" build, for example) it is a good idea to rebuild everything to ensure each module is consistent and compatible.
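A minimal sketch of why, with hypothetical file names: an object file's timestamp says nothing about which flags produced it, so running make and then make DEBUG=1 rebuilds nothing unless the source itself has changed in between.

CFLAGS := -O2
ifeq ($(DEBUG),1)
CFLAGS := -O0 -g -DDEBUG
endif

# foo.o does not become "out of date" just because the DEBUG setting on the
# command line changed; make only compares foo.o's timestamp with foo.c's.
foo.o: foo.c
	$(CC) $(CFLAGS) -c foo.c -o foo.o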
I would suggest that during a build/debug cycle you use incremental builds for development speed, as intended, and perform a full rebuild for release or when changing some build configuration (such as enabling or disabling floating-point hardware, or switching to a different processor variant).
If during debug you get results that seem to make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you may have made (perhaps that code has not even been executed), sometimes it may be down to a build inconsistency (for a variety of reasons usually with complex builds) and in such cases it is wise to at least eliminate build consistency as a cause by performing a rebuild all.
Certainly if you are releasing code to a third party, such as a customer, or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
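For reference, a full rebuild like the one in the question (make clobber followed by make) relies on clean/clobber targets that are typically along these lines; this is only a hypothetical sketch, and the book's actual makefile may well remove different files:

clean:
	rm -f *.o *.d

clobber: clean
	rm -f firmware.elf firmware.bin

.PHONY: clean clobber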
Rebuilding the complete software is good practice because it regenerates all dependencies and symbol files with paths that match your local machine.
If you need to debug the application with a debugger, you will need the symbol file and the paths where your source code is located. If you flash the application directly without rebuilding, you may not be able to debug certain parts, because you don't know at which paths the application was compiled and you may be missing the symbol-related information.
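If you want to check which source directory the debug information in an existing binary actually records, one option (assuming GNU binutils and an ELF output, hypothetically named firmware.elf here) is:

readelf --debug-dump=info firmware.elf | grep -m1 DW_AT_comp_dir

If that path does not exist on your machine, source-level debugging will be awkward until you rebuild locally.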

Trick gmake build / copying files to preserve timestamps and avoid a complete rebuild

Short background info:
For some rather convoluted reasons I am trying to trick my project's build system. We are working with Code Composer Studio from Texas Instruments on Windows, and the gmake implementation that was included with it. We use the standard Debug build option pretty much unmodified, as far as I am aware (my understanding of make and of our implementation of it is unfortunately limited). When developing in Code Composer the build works as one would expect: only the files that have changed, and whatever depends on them, are rebuilt. Sometimes, however, I need to regenerate all code from a code generator that we use, which triggers a complete rebuild.
To trick this complete rebuild I wrote a Python script that moves all the original files out of the project, waits for the code generator, then compares the files and replaces any new-but-identical files with the old ones. Thus the creation timestamp and the last-modified timestamp should be preserved, and this does seem to be the case when examining those properties. However, a complete rebuild of the project is always triggered.
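For comparison, a shell sketch of the same timestamp-restoring idea (the directory names are hypothetical): after the generator runs, copy the old modification time back onto any file whose content is unchanged.

for f in generated/*.c; do
    old="backup/$(basename "$f")"
    if cmp -s "$f" "$old"; then
        touch -r "$old" "$f"   # restore the old modification time
    fi
done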
Core of the Problem:
Having already built the project in Code Composer, building it again concludes that nothing has changed and that there is nothing to do, as one would expect.
I move a file out of the folder and back in again, preserving both the creation timestamp and the last-modified timestamp as far as I can see (in Windows Explorer -> Properties).
When now rebuilding the project it will rebuild the said file and its dependencies, despite it being identical to before and having the same timestamps.
What is gmake looking at? Does it detect that the folder has been changed? Is it looking at some other hidden timestamp? Is it a Windows problem? Shouldn't it be possible to do this, inserting a file with an old timestamp?
What am I missing? Anyone have a clue?
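One way to see what gmake itself is comparing is to run the build once with its debug output enabled (from the generated Debug directory where the makefile lives) and look for the lines explaining why a target is considered out of date; the exact wording varies between make versions:

gmake -d > make-debug.txt 2>&1

Then search make-debug.txt for lines such as "Prerequisite 'xyz.cpp' is newer than target 'xyz.obj'" or "Must remake target 'xyz.obj'".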
Having examined this I concluded that the problem does not lie with gmake or the makefile but with Code Composer Studio's/Eclipse makefile generation.
We use the built-in makefile generation in Code Composer Studio (which is an Eclipse derivative). It seems to keep track of whether files have been removed from or moved out of the project's workspace. If they have, it removes the dependencies of those files during makefile generation, triggering a rebuild.
Examples:
Case 1, works as I expect:
Rebuilding an unchanged project in Code Composer Studio. Nothing is recompiled since nothing is changed.
Case 2, works as I expect:
Changing something in a single file and then rebuilding results in recompilation of only that file, as one expects.
Case 3, does not work as I expect:
Having an unchanged project, moving one source code file (xyz.cpp) out of the project, not changing it in any way, and then moving it back results in a recompilation of xyz.obj.
Code Composer/Eclipse seems to keep track of the file structure. Despite the file being put back into the project exactly the way it was, Code Composer/Eclipse deletes xyz.obj during the makefile generation, triggering a rebuild.
On the other hand: closing Code Composer, moving the file back and forth, reopening Code Composer and regenerating the makefile/rebuilding works as one expects. Nothing is rebuilt and no .obj files are removed.
Since this didn't have anything to do with gmake or the makefile, I guess this question was invalid/irrelevant to begin with. I will collect my thoughts and formulate a new question framed strictly as a Code Composer/Eclipse matter.
If anyone has found themselves in the same situation or has relevant information, please share!

I stopped the make command midway; will the previous build be affected?

I built a very big project, which had a number of sub-projects, using the make command. It took me 3 hours. Then, by mistake (without cleaning the previous build), I re-executed the make command for a few minutes and then stopped it.
Have I ruined my previous build? How does make actually work behind the scenes? Is building the object files done in an atomic and safe manner?
Note: I cannot really run any of my binary files to see if they are broken since that's another lengthy process. I just want to know if I am fine or I have to re-run the make and let it finish.
If you want to publish this binary as a production version of your commercial product, then I would not rely on it; always be 100% sure that you are using a successfully built version of a fully saved and committed code base.
On the other hand, if you need this for debugging purposes, then you could use it! Why? Because make overwrites the output binary only once it finishes compiling all the object files, and only if it detects changes that require a relink of the binary:
After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.
From GNU make: How Make Works
So if you haven't changed your code base, the linker will not relink the binary, leaving it as it was created by the successful build.
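For context, the edit target that the quoted passage refers to is the manual's running example, which looks roughly like this (abridged; the real example lists a few more object files):

edit: main.o kbd.o command.o display.o
	cc -o edit main.o kbd.o command.o display.o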

Can GNU make create broken binaries when building in parallel?

I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could cause broken binaries when building in parallel but still link correctly? If that is the case, what is the common cause and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
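A minimal sketch of the kind of omission that matters, with hypothetical names: if the link rule does not list the library it needs as a prerequisite, a serial build may still work by luck of ordering, but -j can start the link before the library exists.

# A top-level rule like this lets a serial build appear to work even when
# prog's prerequisite list is incomplete, because the prerequisites of 'all'
# are built left to right; with -j both can be started at once.
all: libfoo.a prog

# Correctly declared: prog lists the library, so a parallel make must finish
# libfoo.a before running the link.
prog: main.o libfoo.a
	$(CC) -o prog main.o -L. -lfoo

libfoo.a: foo.o bar.o
	$(AR) rcs libfoo.a foo.o bar.o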
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: that is, the compiler or linker will almost always fail if things are built in the wrong order. It's hard for me to imagine how the build would succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
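In shell terms, those experiments look roughly like this (the -j values are arbitrary):

make                      # non-parallel build on the existing tree
make -j2                  # low parallelism
make -j8                  # higher parallelism
make clean && make -j8    # parallel build starting from a clean workspace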

make: Adding function for "no rule" targets

I have a Makefile which uses a lot of static files, for example:
/mystaticpath/somepath/official_gpl_package-0.1.2.tar.gz
I use this Makefile on several computers and I don't need all packages on all computers. (That's the whole point of make)
Anyway, what I would like to do is put everything in /mystaticpath on a centralized server and download the packages on demand.
In other words, whenever make encounters a missing source file (a "no rule" error), it should run a script and then try again afterwards. The script would need the name of the missing file as a parameter and would download the file from the centralized server, so from make's point of view the script is a universal creator of everything that might be needed in /mystaticpath.
Does anybody know whether that is possible with make?
Your design makes the little hairs on the back of my neck stand up, but give this a try:
/mystaticpath/%:
	retrieve_script $@
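Note that the recipe line must be indented with a tab, and $@ expands to the full path of the missing target, which is what the script receives as its parameter. If the "script" is really just a download, a possible expansion of the same idea (the server URL is hypothetical) would be:

# Pattern rule: any missing file under /mystaticpath is fetched on demand.
# $@ is the full target path, $* is the part matched by %.
/mystaticpath/%:
	mkdir -p $(dir $@)
	wget -O $@ http://yourserver.example/mystaticpath/$*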
