How to find the names of the dependencies that cause make to rebuild - makefile

I am using GNU Make to build my embedded C project. Whenever I build with the make command it rebuilds the entire project, even though nothing has changed.
Is there any way I can find out which dependency files are causing make to rebuild?

make -d shows some of the internal workings of make. It produces a lot of output, but it usually lets you pinpoint why the rebuilds happen.
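For example, filtering the full debug output for a few key phrases usually shows exactly which prerequisite triggered each rebuild (the grep pattern is just a suggestion, not part of make itself):

make -d 2>&1 | grep -E "Prerequisite|newer|Must remake"
# Typical lines of interest:
#   Prerequisite 'config.h' is newer than target 'main.o'.
#   Must remake target 'main.o'.

If everything rebuilds with no changes at all, look in that output for a target that is always considered out of date, e.g. a file that a rule claims to produce but never creates, or a .PHONY target used as a prerequisite of real files.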

Related

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is, while we are putting our first program on our Blue Pill, the book advises to force rebuild the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already built? My guess is that it is just to learn how to build a program from scratch... but I also wonder whether rebuilding before flashing to the device is best practice? The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more. Perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-built, without automatically generated dependencies, that is entirely possible. Also, it is easier to simply advise a rebuild than to explain all the situations where it might or might not be necessary; such a list would never be exhaustive.
From the author's point of view, it eliminates a number of possible build consistency errors that are beyond his control so it ensures you don't end up thinking the book is wrong, when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch: resource files used for code generation by custom tools, for example.
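For context, here is a minimal sketch of the usual auto-dependency pattern with GNU make and GCC/Clang (the file names are hypothetical). Note that it only tracks #include'd headers; it knows nothing about the inputs of custom code generators:

# -MMD writes a .d file listing the headers each .c file actually includes;
# -MP adds phony targets so a deleted header does not break the build.
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

-include $(OBJS:.o=.d)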
For large projects developed over a long time, some seldom-modified modules could well have been compiled with an older version of the toolchain; a clean build ensures everything is compiled and linked with the current tools.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot determine which targets depend on such a macro, so when building a different variant (switching from a "debug" to a "release" build, for example) it is a good idea to rebuild everything to ensure every module is consistent and compatible.
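As an illustration, in the hypothetical fragment below nothing records the value of DEBUG, so switching between "make DEBUG=1" and plain "make" rebuilds nothing, even though the flags changed:

# main.o depends only on main.c; the macro value is invisible to make.
CFLAGS += $(if $(DEBUG),-g -O0 -DDEBUG,-O2 -DNDEBUG)

main.o: main.c
	$(CC) $(CFLAGS) -c -o $@ $<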
I would suggest that during a build/debug cycle you use incremental builds for development speed, as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debugging you get results that seem to make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you made (perhaps that code has not even been executed), it can sometimes be down to a build inconsistency, which can arise for a variety of reasons, usually in complex builds. In such cases it is wise to at least eliminate build inconsistency as a cause by performing a full rebuild.
Certainly if you are releasing code to a third party, such as a customer, or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all the dependency and symbol-file paths to match your local machine.
If you need to debug the application with a debugger, you will need the symbol file and the paths to where your source code lives. If you flash the application without rebuilding, you may not be able to debug properly, because you don't know from which paths the application was compiled and you might be missing the symbol information.

Avoiding build twice when using a shared project together with build generated code

I have a Visual Studio solution with multiple projects. One generates code files as part of a pre-build step (gRPC classes via Grpc.Tools). There is also a shared project that extends the partial classes built as part of that pre-build.
However, sometimes for one reason or another - like compiling the client half of this (client uses the shared project to extend its own classes), compilation will error because the shared project can't find the generated classes yet. Presumably they don't exist. It's fixed easily by compiling the project twice.
Is there something I can do in this scenario? Is it possible to somehow move validating/compiling the shared project "further down" the compilation pipeline? Or even just set that particular project to try and compile twice if there's an error? Or is this the kind of thing that realistically I should just live with given what I'm doing - I haven't found any other references to this problem. It's not that big of an issue and it wouldn't happen very often, but I'd like to handle it reasonably if I can.
Edit
If I wasn't clear, this is a shared project, as in a .shproj, a project that is not compiled separately. The project that references it includes it and builds it all together as one.
If project B depends on project A, then project A must be built before project B. Visual Studio is smart enough to figure out the build order this way. Incidentally, this is also one of the reasons (among many) why circular dependencies simply cannot work.
I suspect that your projects are currently not linked via a dependency, as this issue wouldn't occur if there were such a link. Perhaps your second project is accessing the first project's files via the file system? That's just a guess though.
You can use this "A before B which depends on A" behavior of the build process to your advantage. Have project B (i.e. the project you need to go second) add project A (i.e. the project you need to go first) as a dependency. This forces VS to build them in the appropriate order.
Some remarks:
I am unsure whether VS is able to omit dependencies that you add but never actually depend on (i.e. you never reference their content). I can't find any confirmation on this point (but absence of proof is not proof of absence!). Even if that is the case, it could easily be worked around by having a dummy class in B which actually references and uses something from project A.
Keep in mind that during a regular build, VS does not rebuild projects that have not changed since the last build. If this is an issue for you (unsure if it is; you didn't give enough context), always rebuild or clean to ensure that a new build is triggered.
However, sometimes for one reason or another - like compiling the client half of this (client uses the shared project to extend its own classes), compilation will error because the shared project can't find the generated classes yet. Presumably they don't exist. It's fixed easily by compiling the project twice.
That it only happens sometimes and can be fixed by "trying again" points at one thing: you have a race condition. But a race condition during compilation is not something I have heard of or encountered before.
I have a few possible culprits, but in the end, race conditions are notoriously hard to debug:
- Maybe the compiler that deals with the shared project returns before it is finished (which should be impossible), or
- Maybe something causes the main project to compile before the shared project's files are ready.
- Maybe a third-party tool, like a virus scanner or automatic backup tool, interferes?
- Maybe the shared project's compiled files are hosted on a network drive, and there is sometimes just the slightest delay between "compiled" and "visible to all other computers on the network"?
Usually the proper mechanisms for dependent compilation should deal with such issues. That suggests that what you have there is probably not the most stable setup.

I stopped the make command mid-way, will previous build be affected?

I built a very big project, which had a number of sub-projects, using the make command. It took me 3 hours. Then by mistake (without cleaning the previous build) I re-executed the make command for a few minutes and then stopped it.
Have I ruined my previous build? How does make actually work behind the scenes? Are the object files built in an atomic and safe manner?
Note: I cannot really run any of my binary files to see if they are broken since that's another lengthy process. I just want to know if I am fine or I have to re-run the make and let it finish.
If you want to publish this binary as a production version of your commercial product, then I would not rely on it; always be 100% sure that you are using a successfully built version of a fully saved and committed code base.
On the other hand, if you need this for debugging purposes, then you could use it. Why? Because make overwrites the output binary only once it finishes compiling all the object files, and only if it detects changes that require a relink of the binary:
After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.
From GNU make: How Make Works
So if you haven't changed your code base, the linker will not relink the binary, leaving it as it was created by the successful build.
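For reference, here is (abridged) the rule from the GNU make manual's introductory example that the quoted passage describes:

# 'edit' is relinked only if it is missing or older than one of the .o files.
edit : main.o kbd.o command.o display.o
	cc -o edit main.o kbd.o command.o display.o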

Can GNU make create broken binaries when building in parallel?

I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce broken binaries when building in parallel, yet still link correctly? If that is the case, what is the common cause and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
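As an illustration (the target names are hypothetical), the makefile below builds fine serially but can fail or misbehave under -j, because nothing tells make that prog must wait for the library:

all: libfoo.a prog

libfoo.a: foo.o
	$(AR) rcs $@ $^

prog: prog.o
	$(CC) -o $@ $< -L. -lfoo

# Fix: declare the dependency explicitly so make serializes the two:
# prog: libfoo.a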
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: the compiler or linker will almost always fail if things are built in the wrong order. It's hard for me to imagine how the build would succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "don't run as expected" means, so it's hard to say for sure.
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
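A quick test matrix along those lines might look like this (assuming your makefile has a "clean" target):

make clean && make -j8    # parallel, from a clean workspace
make clean && make -j2    # a different -j level
make clean && make        # serial, from a clean workspace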

Xcode dependencies across different build directories?

I am trying to set up Xcode for a project which contains multiple executables and static libraries. I have created multiple targets and set up the linking and dependencies, and initially everything works great. The catch...
This is an existing project which already has Visual Studio and Makefile builds. Those builds put the libraries in a lib/Debug directory and the executables in bin/Debug. So in Xcode I changed the Build Products Path to "lib" and "bin" respectively (so we can use one set of documentation for all of the platforms). This puts the compiled targets in the right place, but completely breaks both the linking (Library not found) and the dependencies.
I can fix the linking by adding $(SRCROOT)/lib/Debug to the Library Search Paths for each executable (but it feels like Xcode should be able to figure this out on its own, which makes me think I'm doing something wrong).
But I can't figure out how to get the dependencies working again. If I change a library source file, the library will rebuild but not the dependent executables. If I force a build of the executable, Xcode returns success without doing anything; it thinks the target is up to date. If I clean the target and then rebuild, it works.
So, any ideas here? Is Xcode being fundamentally stupid in this regard, or is it me (I'm leaning toward the latter)?
Update: I've posted a sample project to demonstrate the issue at http://share.industriousone.com/XcodeDepsIssue.zip. Build it once, then modify MyStaticLib.c and build it again. The executable will not relink (and it should). Many thanks for any help on this one.
starkos, thanks for publishing your conclusion. It validated my experience as well. This situation really screwed me, so it was nice to know I wasn't just missing something.
I did however discover a workaround that avoids creating multiple projects or keeping the library and its dependent in the same directory. It is a hack, but it does work here.
I know it's a bit late but better than never.
For the dependency library, add a "Copy Files Build Phase", with Absolute Path as the destination, and the path text field should be the directory where the DEPENDENT target lives. Then click on Products, find the dependency library (will end with .a), and drag it into the "Copy Files Build Phase." If you now build, this will put the library into its own directory like before and THEN also copy it into the dependent's target directory.
For the dependent, you can now remove the dependency's output directory from the Library Search Paths. This will cause it to find the library copy. If you do this, the dependent will indeed be relinked each time the dependency .a is relinked.
The negatives are, of course, the extra time for the copy, and the necessity to specify (in the Copy phase) the target directory for each dependent of your library. Beats the hell out of the alternatives though....
Xcode doesn't automatically set up dependencies based on use of build products; you have to set up explicit target dependencies yourself.
Project > Edit Target Settings, General tab, + button, add any targets that are prerequisites to building the selected target. That should get you going again.
I've researched this some more and the answer is no, Xcode 3.x doesn't track dependencies between targets that live in different directories. You can work around it by giving each library its own project, and adding each of those to a master project. Or you can keep all of your targets in one directory. Pick your poison.
Here is my solution for this weird behavior in Xcode 4.3.1. You have to add a build pre-action in the scheme:
# Scheme pre-action: delete the previous binary so the target is always relinked.
rm -f ${BUILT_PRODUCTS_DIR}/${EXECUTABLE_PATH}
and choose which build settings to use for this script. Each time before a build, the target executable will be removed and rebuilt completely. It worked for me, and I hope it helps you.
NOTE: I tried putting this script in a project build phase, but the result was negative: the debugger could not attach to the process to start debugging.
Good luck!
OK, it would help to have the text of the Linking... build line that's failing. But a couple of things:
You shouldn't be linking to anything in $(SRCROOT). That's your project sources. The two places to find things to link are $(SYMROOT) (the Build Products directory) or $(DSTROOT) (the Installed Products directory).
One thing you could do is have a common build directory, then use the 'xcodebuild install' action to install the products in the installation directory. The other is to use a Copy Files build phase to copy them after building, so you can link against them in $(SYMROOT) but still have them where your Windows compatriots expect them.
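As a sketch, the install-action approach might look like this from the command line (the SYMROOT/DSTROOT/INSTALL_PATH values here are assumptions for illustration, not taken from the question):

# Build into a shared products directory, then install the products
# into the lib/Debug layout that the other platforms' builds expect:
xcodebuild -configuration Debug install \
    SYMROOT="$PWD/build" DSTROOT="$PWD" INSTALL_PATH="/lib/Debug"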
There is probably a way to set up the per-target build product directories correctly, but I'd really have to see the project itself to figure it out.
