Parallel Building in Recursive Autotools Project - makefile

I have a large project which uses a recursive autotools structure. Most of the build time is spent on a single directory within this, so I want to make that directory build in parallel. I've found documentation related to make's -j option to enable parallel building, but the question is, where should I specify -j in my Makefile.am for the directory I am building?
I understand that it's better to use a non-recursive structure for parallel building, but that's too big a job for now, and I'm hoping there's still a way to make this one directory build in parallel.

It is not your task as a maintainer to specify the level of parallelism of the build, because it depends on the machine you are building on. Often passing the number of CPUs to -j is a good idea, but not always. What is supposed to happen is that a user just runs make with the appropriate -j flag. If you also happen to be that user and you are tired of passing -j explicitly all the time, then
export MAKEFLAGS=-j2
from your shell profile (e.g. .bashrc) and have make always consider this option.

Assuming a Linux distribution, and further assuming you would like the builds to be adaptive, you might try:
export MAKEFLAGS=-j$(nproc)
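Since -j applies to whichever make invocation you run, you can also ask for parallelism only when building the slow directory, without touching Makefile.am at all. A sketch, using a throwaway stand-in project (the directory name src/big and the Makefiles are hypothetical):

```shell
set -e
proj=$(mktemp -d); cd "$proj"
mkdir -p src/big
printf 'all:\n\t@echo building big in parallel\n' > src/big/Makefile
printf 'all:\n\t@echo building the rest serially\n' > Makefile

# Parallelize only the slow directory, then finish the rest serially:
make -C src/big -j"$(nproc)"
make
```

In a real autotools tree the same two commands work from the top of the build directory, because each subdirectory gets its own generated Makefile.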

Run several 'make' commands at the same time?

(make noob here)
I did the following:
Configured a C++ project with CMake.
In one terminal tab, ran make to start building the whole project.
Got bored of waiting for the whole thing to build, figured I could just make the subfolder I'm working on at the moment.
Without stopping the ongoing build in the first tab, opened a second tab and ran make from said subfolder.
Things looked pretty normal for a short while, then suddenly the second tab started displaying build output related to the whole project, not only to the subfolder. I figured what I tried didn't work as I expected, so I Ctrl-C'd the second tab.
That's when the weirdest thing happened: in the first tab the build output was mixed with lines coming from the specific folder I wanted to build. And the build went way past 100%, up to 128%!
My question is: what exactly does 'make' do when launched more than once at the same time?
Am I correct to think that the multiple make commands were somehow "merged" into the same process?
This is more a question about the makefiles that CMake creates and how they work. It's these makefiles which do things like track the percent complete, etc., not make itself. It's quite possible that by starting a second build in the same directory, you've messed up whatever facilities the CMake makefiles use to track progress.
The short answer is that no, it's not possible that one invocation of make will somehow "take over" or merge with another invocation of make. As far as the make programs are concerned, they know nothing about each other. However, since both are operating on the same filesystem, if one make writes files in a way that confuses the other, you could see strange behaviors.
The cmake-generated makefiles are very complex; I've never actually tried to understand completely how they work. I've always thought it a shame that no one has tried to implement a CMake GNU Makefile generator in addition to the Unix Makefiles generator, that took full advantage of GNU make features. I'm sure the results would be easier to read and probably faster. But it seems unlikely this will ever happen; CMake users who care more about speed than portability are probably just switching to use Ninja as a generator.

Can GNU make create broken binaries when building in parallel?

I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce broken binaries when building in parallel, yet still link correctly? If so, what is the common cause and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: that is, the compiler or linker will almost always fail if things are built in the wrong order. It's hard for me to imagine how the build would succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
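The failure mode can be reproduced with a minimal sketch (file names hypothetical): here app's recipe reads lib.o but never declares it as a prerequisite, so the serial build only works because lib.o happens to be listed first; under -j2 both recipes start at once and app's recipe runs before lib.o exists.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# app's recipe uses lib.o but does not list it as a prerequisite
printf 'all: lib.o app\n\nlib.o:\n\tsleep 1; echo lib > lib.o\n\napp:\n\tcat lib.o > app\n' > Makefile

make            # serial: lib.o is built first by luck, so this succeeds
rm -f lib.o app
make -j2 || echo "parallel build failed: the missing prerequisite is exposed"
```

Adding `app: lib.o` fixes both the race and the makefile's real dependency graph.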

Make, install, executing a program

I have been a CS student for a while, and it seems like I (and many of my friends) never understood what's happening behind the scenes when it comes to make, install, etc.
Correct me if I'm wrong, but is make a way to compile a set of files?
What does it mean to "install a program to a computer", like on Windows? When I am coding in languages such as Java or Perl, we don't install what we wrote; we compile it (or, for an interpreted language, skip that) and just run it. So why do programs such as Skype need to be "installed"?
Can anyone clarify this? I feel like this is something I need to know as a programmer.
Make is a build system
Make is a build system, which is simply a way to script the steps needed to compile a program. Make specifically can be used with anything, but it is usually used to compile C or C++ programs. It simplifies and standardizes the way programmers script the preparation of their programs, so that they can be built and installed with ease.
Why a build system
You see, if your program is a simple one-source-file program, then using make might be overkill, as compiling the simplest C program is as simple as
gcc simpleprogram.c -o simpleprogram.out
However, as the size of the software grows, its complexity grows, and the complexity of how it needs to be built grows. For example, you may want to determine which version of each library is installed on the machine you are compiling on, you may want to run some tests after compiling your program to verify it is working correctly, or you may want to automatically download some of your program's dependencies.
Most software eventually needs a mixture of these tasks. So, instead of reinventing the wheel, it uses a build system which allows scripting them. If you are familiar with Java (which you mentioned), a comparable build system in the Java world is Apache Ant.
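To connect this to the gcc line above, here is a minimal Makefile sketch for that same one-file program. A rule names a target, its prerequisite, and the recipe to rebuild it; `make -n` does a dry run, printing the command make would execute, so no compiler is needed to try this:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
printf 'int main(void) { return 0; }\n' > simpleprogram.c
# One rule: target, prerequisite, and the recipe that rebuilds it
printf 'simpleprogram.out: simpleprogram.c\n\tgcc simpleprogram.c -o simpleprogram.out\n' > Makefile
make -n   # dry run: prints the gcc command make would run
```

A plain `make` would actually run that gcc command, and would skip it on later runs unless simpleprogram.c changed.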
Why install
Well, let's assume that you used the "make" command but not "make install". The "make" command is usually used just to prepare the program for compilation and then compile it. However, once your program is compiled, all you have is an executable in the directory in which you compiled it. The program, its documentation, and its configuration files haven't been put in the appropriate directories needed for all users to use it. That's what "make install" is for: it takes all the files associated with the program you just compiled and puts them in the appropriate directories, so that the program becomes available to everyone, and so that each component is in the expected directory according to your operating system.
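The split can be sketched with a toy Makefile (names and paths are invented for illustration). `make` produces the executable in the build directory; `make install` copies it to its system location. Here DESTDIR redirects the install into a staging directory so the sketch never touches the real system, which is the same mechanism packagers use:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
printf 'hello:\n\techo "echo hi" > hello\n\ninstall: hello\n\tmkdir -p $(DESTDIR)/usr/local/bin\n\tcp hello $(DESTDIR)/usr/local/bin/\n' > Makefile

make                               # "compile": the executable appears here, in the build dir
make install DESTDIR="$dir/stage"  # copy it into the (staged) system directory
ls stage/usr/local/bin/hello
```

Without DESTDIR, `make install` would copy straight into /usr/local/bin, which is why it typically needs root.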
make is a bit of software that reduces the amount of code that needs to be recompiled - it compares modification times of the source code with those of the target. If the code has changed, a compile is done to reconstruct the target; otherwise you can skip that step.
Installing software is placing the executables/configuration files into the right places - perhaps constructing some files along the way, e.g. usernames in your Skype example.

Manage the build of multiple libraries in one place

Suppose that I have multiple libraries that I can build with CMake or with configure scripts. Is there a tool that can help me with building these libraries, so that I can easily manage rebuilding them with few modifications, like changing compiler flags?
I would like to run a sort of automated process and see feedback about each build, plus have some freedom in the build options.
Is there a tool like this, besides a conveniently created bash script?
Make seems like the best tool to use here, but a bash script would also work. You could use a makefile that calls the other makefiles with -f (or switches to their directories with -C). Also, you could handle the flags and such within a single makefile with judicious use of variables, targets, and recipes. Realize you can set make variables (and therefore flags) from the command line. That's about the most I can help without knowing more specifics of your situation. Good luck!
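A sketch of that driver-makefile idea, with two stand-in libraries (the names libfoo/libbar and their Makefiles are invented): one phony target per subdirectory, and the flags handed down to each sub-make, so a single command line changes them for every library.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# Two stand-in libraries, each with its own Makefile
for lib in libfoo libbar; do
  mkdir "$lib"
  printf 'all:\n\t@echo "building %s with $(CFLAGS)"\n' "$lib" > "$lib/Makefile"
done

# Top-level Makefile: one phony target per subdirectory, flags passed down
printf 'SUBDIRS = libfoo libbar\nCFLAGS ?= -O2\n\nall: $(SUBDIRS)\n\n$(SUBDIRS):\n\t$(MAKE) -C $@ CFLAGS="$(CFLAGS)"\n\n.PHONY: all $(SUBDIRS)\n' > Makefile

make CFLAGS="-O0 -g"   # one command line changes the flags for every library
```

Because the subdirectory targets are independent, `make -j2` would even build the libraries in parallel.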

What are the other uses of the "make" command?

A sysadmin teacher told me one day that I should learn to use "make" because I could use it for a lot of other things than just triggering compilations.
I never got the chance to talk longer about it. Do you have any good examples?
As a bonus, isn't this tool deprecated, and what are the modern alternatives (for compilation and other purposes)?
One excellent thing make can be used for besides compilation is LaTeX. If you're doing any serious work with LaTeX, you'll find make very handy because of the need to re-interpret .tex files several times when using BibTex or tables of contents.
Make is definitely not deprecated. Although there are different ways of doing the same thing (batch files on Windows, shell scripts on Linux) make works the best, IMHO.
Make can be used to execute any commands you want to execute. It is best used for activities that require dependency checking, but there is no reason you couldn't use make to check your e-mail, reboot your servers, make backups, or anything else.
Ant, NAnt, and msbuild are supposedly the modern alternatives, but plain-old-make is still used extensively in environments that don't use Java or .NET.
isn't this tool deprecated
What?! No, not even slightly. I'm on Linux so I realise I'm not an average person, but I use it almost daily. I'm sure there are thousands of Linux devs who do use it daily.
I remember seeing an article on Slashdot a few years ago describing a technique for optimising Linux boot sequence by using make.
edit:
Here's an article from IBM explaining the principle.
Make performs a topological sort, which is to say that given a bunch of things, and a set of requirements that one thing be before another thing, it finds a way to order all of the things so that all of the requirements are met. Building things (programs, documents, distribution tarballs, etc.) is one common use for topological sorting, but there are others. You can create a Makefile with one entry for every server in your data center, including dependencies between servers (NFS, NIS, DNS, etc.) and make can tell you what order in which to turn on your computers after a power outage, or what order to turn them off in before a power outage. You can use it to figure out what order in which to start services on a single server. You can use it to figure out what order to put your clothes on in the morning. Any problem where you need to find an order of a bunch of things or tasks that satisfies a bunch of specific requirements of the form A goes before B is a potential candidate for being solved with make.
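The ordering idea can be sketched with a toy dependency graph (the service names dns/nfs/web are invented): each target lists what must come before it, and make emits the recipes in a valid order.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# dns must start before nfs; web needs both
printf '.PHONY: all dns nfs web\n\nall: web\n\ndns:\n\t@echo start dns\n\nnfs: dns\n\t@echo start nfs\n\nweb: nfs dns\n\t@echo start web\n' > Makefile
make   # prints: start dns, start nfs, start web
```

Nothing here compiles anything; make is used purely as a topological sorter, exactly as described above.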
The most random use I've ever seen is make being used in place of bash for init scripts on BCCD. It actually worked decently, once you got over the wtf moment....
Think of make as shell scripts with added oomph.
Well, I'm sure that the UNIX tool "make" is still being used a lot, even if it's waning in the .NET world. And while more people may be using MSBuild, Ant, NAnt, and other tools these days, they are essentially just "make" with a different file syntax. The basic concept is the same.
Make tools are handy for anything where there's an input file which is processed into an output file. Write your reports in MS Word but distribute them as PDFs? Use make to generate the PDFs.
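A sketch of that input-to-output pattern: one pattern rule regenerates any .pdf whose .md source is newer. The converter here is cp, a stand-in so the example runs anywhere; in real use you would put something like pandoc in its place.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
printf 'draft\n' > report.md
# %.pdf is rebuilt from %.md only when the source changed;
# replace "cp" with your real converter, e.g. pandoc $< -o $@
printf 'all: $(patsubst %%.md,%%.pdf,$(wildcard *.md))\n\n%%.pdf: %%.md\n\tcp $< $@\n' > Makefile

make   # converts report.md -> report.pdf
make   # second run: nothing to do, the output is up to date
```

Drop more .md files into the directory and the wildcard picks them up automatically; make only regenerates the stale ones.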
Configuration file rebuilds can also be triggered through crontab, if needed.
I have examples for postfix maps, and for squid external tables.
Example for /etc/postfix/Makefile:
POSTMAP=/usr/sbin/postmap
POSTFIX=/usr/sbin/postfix
HASHES=transport access virtual canonical relocated annoying_senders
BTREES=clients_welcome
HASHES_DB=${HASHES:=.db}
BTREES_DB=${BTREES:=.db}

all: ${BTREES_DB} ${HASHES_DB} aliases.db
	echo \= Done

${HASHES_DB}: %.db: %
	echo . Rebuilding $< hash...
	${POSTMAP} $<

${BTREES_DB}: %.db: %
	echo . Rebuilding $< btree...
	${POSTMAP} $<

aliases.db: aliases
	echo . Rebuilding aliases...
	/usr/bin/newaliases

etc.