Can GNU make create broken binaries when building in parallel? - makefile

I am working on a project where we have just added parallelism to our build system, using GNU Make.
We build both the libraries and the programs in parallel.
First we build all the libs needed by the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce broken binaries when building in parallel, even though the link succeeds? If so, what is the common cause and how can one avoid it?

Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means your makefile has not declared all the prerequisites it needs, so make doesn't realize it must finish building target Y before it can start target X.
However, it's extremely unlikely that these kinds of errors would let the build succeed: the compiler or linker will almost always fail if things are built in the wrong order. It's hard to imagine the build succeeding except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
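To make the failure mode concrete, here is a minimal sketch of a makefile with an undeclared prerequisite (the target and file names are hypothetical, not taken from your project; recipe lines must be tab-indented):

all: libfoo.a prog

libfoo.a: foo.o
        ar rcs libfoo.a foo.o

prog: main.o
        cc -o prog main.o -L. -lfoo

Serially this happens to work because libfoo.a is listed first under all, but with -j make sees no dependency between prog and libfoo.a, so it may run the link before, or while, the library is being written. Adding libfoo.a to the prerequisites of prog (prog: main.o libfoo.a) restores correct ordering.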
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
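A hedged sketch of those experiments as shell commands (adjust the -j levels and targets to your own tree):

make clean && make -j8     # clean parallel build: does it fail the same way every time?
make clean && make -j2     # different level of parallelism: does the behaviour change?
make clean && make -j1     # serial build from scratch: does it succeed?
make -j8                   # incremental parallel build: does it still fail?

If the failure only appears at higher -j levels or only on incremental builds, that points toward a missing prerequisite rather than a toolchain problem.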

Related

Fastest way to rebuild the Linux kernel using rpmbuild

I just built the linux kernel for CentOS using the instructions that can be found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now I have made my changes and I would like to rebuild the kernel and test it. How do I do that, but:
1. Without having to recompile everything. The build process should reuse whatever object files were generated by the first build and don't need to be regenerated.
2. Without having to build the other packages that are built along with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.

When should I check "Parallelize Build" for an Xcode scheme?

I see that this option is unchecked in my current scheme and that a few places around the web recommend against it in certain cases. Can someone give a more thorough method of determining when this option can be checked for a scheme?
We don't know the internals of the "Parallelize Build" setting, but we can deduce why it might not be beneficial sometimes.
First it's good to understand what "Parallelize Build" does. Source:
This option allows Xcode to speed up total build time by building targets that do not depend on each other at the same time. This is a time-saver on projects with many smaller dependencies that can easily be run in parallel.
When you have many targets that depend on one another, this option can cause problems.
For example, imagine that one target is a framework that your application target depends on. If you make modifications to the framework target, then there are cases where you MUST build the framework target BEFORE the application target. Parallelizing these won't work, because for the application target and framework target to work nicely together they must be "in sync": we can't build the application target without compiling the changes in the framework target first.
The above is a simple example, one that Xcode might handle nicely already, but some projects get very complex, and without feeding proper information about your target dependencies to Xcode, it might not be able to correctly parallelize your targets.
In summary, the setting is likely beneficial and can reduce build times if you enable it and don't see any problems with code being out of sync across targets. Otherwise, turn it off. As with all performance settings, make sure to test and measure whether you actually see build-speed improvements.

Parallel Building in Recursive Autotools Project

I have a large project which uses a recursive autotools structure. Most of the build time is spent on a single directory within this, so I want to make that directory build in parallel. I've found documentation related to make's -j option to enable parallel building, but the question is, where should I specify -j in my Makefile.am for the directory I am building?
I understand that it's better to use a non-recursive structure for parallel building, but that's too big a job for now, and I'm hoping there's still a way to make this one directory build in parallel.
It is not your task as a maintainer to specify the level of parallelism of the build, because it depends on the machine you are building on. Often passing the number of CPUs to -j is a good idea, but not always. What is supposed to happen is that a user just runs make with the appropriate -j flag. If you also happen to be that user and you are tired of passing -j explicitly all the time, then
export MAKEFLAGS=-j2
in your shell profile (e.g. .bashrc) and have make always pick up this option.
Assuming a linux distribution, and further assuming you would like the builds to be adaptive, you might try:
export MAKEFLAGS=-j`nproc`
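If during development you only want that one expensive directory rebuilt in parallel, a hedged sketch (the directory name src is a placeholder, and it assumes an already-configured build tree) is to invoke make in it directly:

make -C src -j"$(nproc)"

-C changes into the directory before reading its makefile, and -j applies only to that invocation, so the rest of the tree is untouched.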

Make, install, executing a program

I have been a CS student for a while, and it seems like I (and many of my friends) never understood what's happening behind the scenes when it comes to make, install, etc.
Correct me if I'm wrong, but is make a way to compile a set of files?
What is meant by "installing a program to a computer", like on Windows? When I code in different languages such as Java or Perl, we don't install what we wrote; we compile it (or, for an interpreted language, just run it). So why do programs such as Skype need to be "installed"?
Can anyone clarify this? I feel like this is something I need to know as a programmer.
Make is a build system
Make is a build system, which is simply a way to script the steps needed to compile a program. Make specifically can be used with anything, but it is usually used to compile C or C++ programs. It simplifies and standardizes the way programmers script the preparation of their program, so that it can be built and installed with ease.
Why a build system
You see, if your program is a simple one-source-file program, then using make might be overkill, as compiling the simplest C program is as simple as
gcc simpleprogram.c -o simpleprogram.out
However, as the size of the software grows, its complexity grows, and the complexity of how it needs to be built grows. For example, you may want to determine which version of each library is installed on the computer you are compiling on, you may want to run some tests after compiling your program to determine whether it is working correctly, or you may want to automatically download some dependencies your program has.
Most software eventually needs a mixture of these tasks. So, instead of reinventing the wheel, projects use a build system that allows scripting them. If you are familiar with Java (which you mentioned), a build system comparable to make, but used in the Java world, is Apache Ant.
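As a hedged sketch of what that scripting looks like in practice (the file names below are made up, and recipe lines must be tab-indented), a makefile for a two-file C program might be:

CC = gcc
CFLAGS = -Wall -O2

simpleprogram.out: main.o util.o
        $(CC) $(CFLAGS) -o simpleprogram.out main.o util.o

main.o: main.c util.h
        $(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
        $(CC) $(CFLAGS) -c util.c

clean:
        rm -f simpleprogram.out main.o util.o

After editing only util.c, running make recompiles util.o and relinks, leaving main.o alone; that incremental rebuild is a big part of what a build system gives you.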
Why install
Well, let's assume that you used the "make" command but not "make install". The "make" command is usually used just to prepare the program for compilation and then compile it. However, once your program is compiled, all you have is an executable in the directory in which you compiled it. The program, its documentation, and its configuration files haven't been put in the appropriate directories needed for all users to use it. That's what "make install" is for. It takes all the files associated with the program you just compiled and puts them in the appropriate directories, so that the program becomes available to everyone and each component ends up where your operating system expects it.
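A hedged sketch of what such an install target can look like (PREFIX, the program name, and the man page are placeholders; real projects differ):

PREFIX ?= /usr/local

install: simpleprogram.out
        install -d $(PREFIX)/bin $(PREFIX)/share/man/man1
        install -m 755 simpleprogram.out $(PREFIX)/bin/simpleprogram
        install -m 644 simpleprogram.1 $(PREFIX)/share/man/man1/simpleprogram.1

After "make install", simpleprogram is on every user's PATH via /usr/local/bin instead of sitting only in the build directory.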
make is a bit of software that reduces the amount of code that needs to be recompiled: it compares the modification times of the source code with those of the target. If the code has changed, a compile is done to reconstruct the target; otherwise that step can be skipped.
Installing software is placing the executables/configuration files into the right places, perhaps constructing some files along the way, e.g. usernames in your Skype example.

Stop XCode from embedding build time in executables

XCode4 is putting the build time in the executables it creates. When I build the same code twice, the binaries differ by a few bytes belonging to a Unix timestamp.
Is there a way to prevent this from happening?
(I'm running expensive tests and benchmarks after each build and caching results based on a hash of the executables, but the ever-changing executables break my cache and pollute the benchmark results with duplicates.)
I've worked around this by switching to building the project myself the "old skool" way, with Makefiles & gcc.
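For reference, a hedged sketch of that workaround (a single hypothetical source file, and it assumes the code does not use the __DATE__/__TIME__ macros, which would reintroduce timestamps; recipe lines must be tab-indented):

benchmark_tool: main.c
        gcc -O2 -o benchmark_tool main.c

Compiling the same sources with the same flags this way normally produces byte-for-byte identical binaries, so hashing the output works as a cache key.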
