I have a makefile with all: clean $(binary_output_path)/$(target_executable), which cleans the output directory first and then builds the executable. The problem is that when I use -j10, it sometimes starts building and cleaning at the same time, so the build fails, obviously.
How can I overcome this and have the targets executed in order, but still on multiple cores?
You could add an order-only prerequisite:
$(binary_output_path)/$(target_executable): | clean
However, using clean like that is not a good idea at all. Just have:
all: $(binary_output_path)/$(target_executable)
$(binary_output_path)/$(target_executable): prerequisites
	write your recipe here that builds this target
You should focus on writing makefiles in such a way that cleaning is not needed. If you must clean before building your executable, find out why that is, and write the commands that build the executable so that they work regardless of whether you have "cleaned" or not.
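As a minimal sketch of that layout (assuming binary_output_path and target_executable are defined elsewhere in your makefile; the object files are made up):

.PHONY: all clean

all: $(binary_output_path)/$(target_executable)

# Hypothetical prerequisites; list your real object files here.
$(binary_output_path)/$(target_executable): main.o util.o
	mkdir -p $(binary_output_path)
	$(CC) -o $@ $^

clean:
	rm -rf $(binary_output_path)

With this, make -j10 builds the executable in parallel, and you run make clean explicitly only when you actually want to clean.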
I am keeping a Wi-Fi driver alive by patching compilation errors for new kernel versions. I can build it against a source tree, so I do not have to boot the kernel I want to fix it for.
Unfortunately, for this I have to fully compile the entire kernel. I know how to build a small version using make localmodconfig, but that still takes very long.
Recently, I learned about the prepare target. This lets me "compile" the module, so I learn about compilation problems. However, it fails in the linking phase, which prevents using make prepare in a Git bisect run. I also had the impression that it requires cleaning the source tree from time to time due to spurious problems.
The question is: what is the fastest way to prepare a source tree so that I can compile a Wi-Fi module against it?
The target you are looking for is modules_prepare. From the doc:
An alternative is to use the "make" target "modules_prepare." This will make sure the kernel contains the information required. The target exists solely as a simple way to prepare a kernel source tree for building external modules.
NOTE: "modules_prepare" will not build Module.symvers even if CONFIG_MODVERSIONS is set; therefore, a full kernel build needs to be executed to make module versioning work.
If you run make -j modules_prepare (the -j is important so that everything is executed in parallel), it should run pretty fast.
So what you need is basically something like this:
# Prepare kernel source
cd '/path/to/kernel/source'
make localmodconfig
make -j modules_prepare
# Build your module against it
cd '/path/to/your/module/source'
make -j -C '/path/to/kernel/source' M="$(pwd)" modules
# Clean things up
make -j -C '/path/to/kernel/source' M="$(pwd)" clean
cd '/path/to/kernel/source'
make distclean
The last clean-up step is needed if you are in a bisect run, before proceeding to the next bisection step; otherwise you may leave behind stale object files that can make later builds fail.
I have a makefile that has a ton of subsystems, and I am able to build it with the -j flag so that it goes much faster and builds the different recipes in parallel.
This seems to be working fine for now, but I am not sure whether I am missing some needed dependencies and am just "getting lucky" with the order in which these are being built.
Is there a way to randomize the order in which recipes are run, while still following all the dependencies that I have defined?
You can control the number of jobs Make is allowed to run asynchronously with the -j command-line option. This way you can "randomize" which recipes are executed simultaneously and catch some issues in your makefiles.
I'll duplicate the answer from https://stackoverflow.com/a/72722756/5610270 here:
Next release of GNU make will have --shuffle mode. It will allow
you to execute prerequisites in random order to shake out missing
dependencies by running $ make --shuffle.
The feature was recently added in
https://savannah.gnu.org/bugs/index.php?62100 and so far is available
only in GNU make's git tree.
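To see the kind of problem this can catch, here is a made-up makefile with a missing dependency (gen.c and main.c are hypothetical sources). A serial build happens to work only because gen.o is listed before app under all; make -j, or make --shuffle, can run the link first and fail:

all: gen.o app

gen.o: gen.c
	$(CC) -c -o $@ $<

main.o: main.c
	$(CC) -c -o $@ $<

# BUG: the link uses gen.o but does not declare it as a prerequisite,
# so nothing forces gen.o to be built before this recipe runs.
app: main.o
	$(CC) -o $@ main.o gen.o

The fix, of course, is to add gen.o to app's prerequisite list so the ordering no longer depends on luck.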
I have a makefile that is a 3rdParty dependency builder, so it's actually just going to various other directories and running cmake/make with various flags to ensure all 15-20 dependencies of my project compile the way I need.
Building in parallel would really help here (the build takes about 2 hours serially), but I need 'make -jN' not to run the top-level makefile in parallel; instead it should run serially (the various 3rdParty libs have internal dependencies to meet) and pass the flag on to the inner makefiles.
Is there a way to get this behavior?
Use the .NOTPARALLEL pseudo target; from the docs:
`.NOTPARALLEL'
If `.NOTPARALLEL' is mentioned as a target, then this invocation of
`make' will be run serially, even if the `-j' option is given.
Any recursively invoked `make' command will still be run in
parallel (unless its makefile contains this target). Any
prerequisites on this target are ignored.
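A minimal sketch of such a top-level builder makefile (the directory and library names are made up): .NOTPARALLEL keeps the top level serial, while the recursive $(MAKE) invocations still receive the -jN setting through MAKEFLAGS.

.NOTPARALLEL:

# Hypothetical third-party libraries, listed in dependency order.
SUBLIBS := zlib libpng freetype

.PHONY: all $(SUBLIBS)
all: $(SUBLIBS)

$(SUBLIBS):
	$(MAKE) -C 3rdParty/$@

Running make -j8 here builds zlib, libpng, and freetype one after the other, but each inner make gets to use up to 8 jobs.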
I have a configure script that writes a Makefile (from Makefile.in). The clean target currently removes everything created from within the makefile, but it doesn't delete the makefile itself. (I'm not using Autotools as you can probably tell)
My question therefore: Should the makefile also remove itself, requiring the developer to run ./configure again?
On the one hand, I want the clean target to properly clean up the source tree. But, on the other hand, I'd like to be able to type make clean test to check that everything's working as it should before committing; Running the configure script again seems weird somehow.
This is a stylistic question, rather than a technical question. The best place to go for answers is the automake manual, which will tell you:
`make clean'
Erase from the build tree the files built by `make all'.
`make distclean'
Additionally erase anything `./configure' created.
So, no, make clean should not delete Makefile; make distclean should delete Makefile, since it's created by configure, not by make all.
One of the best things about autotools is that they are consistent and standard. It's best to not irritate your users by flouting those standards.
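In makefile terms, the split the manual describes looks roughly like this (the file names are illustrative, not from your project):

.PHONY: clean distclean

# Remove what 'make all' built.
clean:
	rm -f *.o myprog

# Additionally remove what ./configure generated.
distclean: clean
	rm -f Makefile config.status config.log

That way make clean test still works, and make distclean gives you the pristine tree when you really want it.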
I'd probably have a separate target for that, so clean would leave you able to build again, but distclean or realclean or allclean or something would force a reconfigure. You could see which autotools clean target (if any) has similar behaviour.
The purpose of the clean target is usually to remove interim files so you can start your compile from scratch. See more here. For instance, a common makefile target is "clean", which generally performs actions that clean up after the compiler: removing object files and the resulting executable.
I have an autotools project. In one of its directories, I would like to run a script after the make process is done. In other words, I'd like to have something like a "phony" target that is executed last. Alternatively, I could use a dedicated m4 macro (if only I knew which one...).
Any ideas?
Thanks
I'm assuming that by "autotools", you're using Automake as well as Autoconf. I can see two ways of doing this.
You can make a -hook rule in your Makefile.am. However, this can only be done for certain default targets: install-data, install-exec, uninstall, dist and distcheck. So, to make a rule that will be run immediately after install-exec, call it install-exec-hook. Then just run the script in the recipe for that rule.
Based on the wording of your question, though, it seems that you want to run the script after building. If that's the case, you can customize the all target with an all-local target and then run the script in the recipe for this target. Note that, according to the Automake documentation,
With the '-local' targets, there is no particular guarantee of
execution order; typically, they are run early, but with parallel make,
there is no way to be sure of that.
However, since the all target is phony, it shouldn't run until everything is built. Nevertheless, if you can run the script after installation instead, I would recommend doing it that way, since the execution order is guaranteed.
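As a sketch, a Makefile.am fragment using both approaches might look like this (the program and script names are made up):

bin_PROGRAMS = myprog
myprog_SOURCES = main.c

# Runs right after 'make install-exec' finishes; the ordering is guaranteed.
install-exec-hook:
	$(srcdir)/post-install.sh

# Runs as part of 'make all'; with parallel make there is no guarantee
# it runs after everything else (see the quote above).
all-local:
	$(srcdir)/post-build.sh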