OK, so I'm getting into kernel module development, and the guides all pretty much use the same basic makefile, which contains this line:
make -C /lib/modules/`uname -r`/build M=$(PWD) modules
So my questions are:
Why is a makefile calling make? That seems recursive.
What is the M for? I can't find a -M flag for make in any of the man pages.
Recursive use of make is a common technique for introducing modularity into your build process. For example, in your particular case, you could support a new kernel version by putting the relevant component in a folder whose name matches the uname -r output for that kernel, without having to change the master makefile at all. As another example, making one component modular makes it much easier to reuse in another project without large changes to the new project's master makefile.
Just like it can be helpful to separate your code into files, modules, and classes (the latter for languages other than C, obviously), it can be helpful to separate your build process into separate modules. It's just a form of organization to make managing your projects easier. You might group related functionality into separate libraries, or plugins, and build them separately. Different individuals or teams could work on the separate components without all of them needing write access to the master makefile. You might want to build your components separately so that you can test them separately.
It's not impossible to do all of these things without recursive use of make, of course, but it's one common way of organizing things. Even if you don't use make recursively, you're still going to end up with a bunch of different component "makefiles" on a large project - they'll just be imported or included into the master makefile, rather than standing alone by themselves and being run via separate invocations of make.
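For instance, the master makefile might do nothing but dispatch to per-component makefiles. A minimal sketch (the directory names here are made up):

SUBDIRS = libfoo app

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

Each of libfoo and app then has its own makefile, maintained independently of the master one.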
Creating and maintaining a single makefile for a very large project is not a trivial matter. However, as the article Recursive Make Considered Harmful describes, recursive use of make is not without its own problems, either.
As for your M, that's just overriding a variable at the command line. Somewhere in the makefile(s) the variable M will be used, and if you specify its value at the command line in this way, then the value you specify will override any other assignments to that variable that may occur in the makefile(s).
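You can see the override behavior with a toy makefile (purely illustrative):

# Makefile
M = default

show:
	@echo "M is $(M)"

Running make show prints "M is default", while make show M=/tmp/mymodule prints "M is /tmp/mymodule": the command-line assignment wins over the assignment in the makefile. In your case, M tells the kernel's build system the directory of your external module source, so it knows where to find your code and put the resulting module objects.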
I'm struggling to create a Makefile.am which, after being processed, contains a conditional. I've found several other questions which seem to be struggling to do the same thing. This made me wonder whether this actually is a sensible goal at all, or whether there is some other way I should be doing the check.
My goal is to have a Makefile which passes different options to tools based on the environment of the system on which it runs. Crucially, I don't want to have to rerun the (lengthy) configure between the different calls to make.
Is this a sensible goal?
If not, what is the canonical way of accomplishing the same effect? One way that comes to mind is adding additional make targets, for example make check and make check-quick, along the lines of the sketch below.
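Perhaps something like this in the Makefile.am (a rough sketch; TESTOPTS and the --quick flag are placeholders I'm imagining, not anything Automake provides):

check-quick:
	$(MAKE) $(AM_MAKEFLAGS) check TESTOPTS=--quick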
For work purposes I need to link against an object file generated by another program and found in its folder, but I did not find any information about this kind of linkage. I think that if I hardcode the paths and put name-of-obj.o in front of the package_LDADD variable it should work, but I don't want to do it that way.
If the object is not found I want configure to fail and tell the user that name-of-obj.o is missing.
I tried using AC_LIBOBJ([name-of-obj.o]), but this will try to find a name-of-obj.c in the root directory and compile it.
Any tip or solution around this issue?
Thank you!
I need to link against an object file generated by another program and found in its folder
What you describe is a very unusual requirement, not among those that the Autotools are designed to handle cleanly or easily. In particular, Autoconf has no mechanisms specifically applicable to searching for bare object files, as opposed to libraries, and Automake has no particular automation around including such objects when it links. Nevertheless, these tools do have enough general purpose functionality to do what you want; it just won't be as tidy as you might like.
I think that if I hardcode the paths and put name-of-obj.o in front of the package_LDADD variable it should work, but I don't want to do it that way.
I take it that it is the "hardcode the paths" part that you want to avoid. Adding an item to an appropriate LDADD variable is not negotiable; it is the right way to get your object included in the link.
If the object is not found I want configure to fail and tell the user that name-of-obj.o is missing.
Well, then, the key thing appears to be to get configure to perform a search for your object file. Autoconf does not have a built-in mechanism to perform such a search, but it's just a macro-based shell-script generator, so you can write such a search in shell script + Autoconf, maybe something like this:
AC_MSG_CHECKING([for name-of-obj.o])
OTHER_LOCATION=
for my_dir in \
    /some/location/other_program/src \
    /another/location/other_program.12345/src \
    $srcdir/../relative/location/other_program/src; do
  AS_IF([test -r "${my_dir}/name-of-obj.o"], [
    # optionally, perform any desired test to check that the object is usable
    # ... perhaps one using AC_LINK_IFELSE ...
    # if it passes, then
    OTHER_LOCATION=${my_dir}
    break
  ])
done
# Check whether the object was in fact discovered, and act appropriately
AS_IF([test "x${OTHER_LOCATION}" = x], [
  # Not found
  AC_MSG_RESULT([not found])
  AC_MSG_ERROR([Cannot configure without name-of-obj.o])
], [
  AC_MSG_RESULT([${OTHER_LOCATION}/name-of-obj.o])
  AC_SUBST([OTHER_LOCATION])
])
That's functional, but of course you could embellish, such as by providing for the package builder to specify a location to use via a command-line argument (AC_ARG_WITH(...)). And if you want to do this for multiple objects, then you would probably want to wrap up at least some of that into a custom macro.
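For instance, the command-line argument might look something like this (the option name --with-other-location is invented for illustration):

AC_ARG_WITH([other-location],
    [AS_HELP_STRING([--with-other-location=DIR],
        [directory containing name-of-obj.o])],
    [OTHER_LOCATION=$withval],
    [OTHER_LOCATION=])

with the search loop above then running only when ${OTHER_LOCATION} is still empty.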
The Automake side is much less involved. To get the object linked, you just need to add it to the appropriate LDADD variable, using the output variable created by the above, such as:
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o
Note that if you're building just one program target, you can use the general LDADD instead of foo_LDADD, but note that by default these are alternatives, not complements.
With that said, this is a bad idea overall. If you want to link something that is not part of your project, then you should get it from an installed library. That can be a local, custom-built library, of course, so long as it is a library, not a bare object file, and it is installed. It can be a static library if you don't want to rely on or distribute a separate shared library.
On the other hand, if your project is part of a larger build, then the best approach is probably to integrate it into that build, maybe as a subproject. It would still be best to link a library instead of a bare object file, but in a subproject context it might make sense to use a library that is not installed on the build system. In conjunction with a command-line argument that tells configure where to find the wanted library, this could make the needed Autoconf code much cleaner and clearer.
Make(1) has built-in rules, such that for simple tasks you don't need a makefile at all. I can type make prog and if the current directory has a prog.c, make will do something useful.
I have a number of rules like this (e.g., how to make .pdf from .html) that apply in many projects. If I have a makefile in a directory, I can simply include my rules from a file. Is there a way to tell make to use this file always? Like a dot file that make would always include before doing anything else.
Make's rules are truly built-in, not read from a file. This has advantages (the entirety of make is one executable, and you can copy it and install it anywhere and get identical behavior) and disadvantages (you can't modify the default rules without modifying the source code and recompiling; if you want to do that, though, it's easy: see the default.c file in the sources).
You can, however, specify an extra makefile (or makefiles) to be parsed before the usual ones by way of an environment variable. Create a makefile containing your extra rules, then (in your ~/.bashrc or wherever) set the MAKEFILES environment variable to the name of that file (or those files), and don't forget to export it.
Now every make invocation will load these rules as well.
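For example (the paths and the conversion tool here are just placeholders for whatever you actually use):

# In ~/.bashrc (or equivalent):
export MAKEFILES=$HOME/lib/make/common.mk

# In $HOME/lib/make/common.mk, e.g. the .html-to-.pdf rule:
%.pdf: %.html
	wkhtmltopdf $< $@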
You may discover, though, that this isn't quite what you'd hoped, because it could cause other makefiles to fail or act in bizarre ways (for example if you download open source packages and want to build them locally, etc.) If you do this just remember you did it, so in a few months if you run into issues you'll remember to try undoing it and see if it helps :-)
Background
I have a (large) project A and a (large) project B, such that A depends on B.
I would like to have two separate makefiles -- one for project A and one for project B -- for performance and maintainability.
Based on the comments to an earlier question, I have decided to entirely rewrite B's makefile such that A's makefile can include it. This will avoid the evils of recursive make: allow parallelism, not remake unnecessarily, improve performance, etc.
Current solution
I can find the directory of the currently executing makefile by putting this at the top (before any other includes):
TOP := $(dir $(lastword $(MAKEFILE_LIST)))
I am writing each target as
$(TOP)/some-target: $(TOP)/some-src
and changing any necessary shell commands, e.g. find dir becomes find $(TOP)/dir.
While this solves the problems, it has a couple of disadvantages:
Targets and rules are longer and a little less readable. (This is likely unavoidable. Modularity has a price).
Using gcc -M to auto-generate dependencies requires post-processing to add $(TOP) everywhere, roughly as below.
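(For illustration, the post-processing I have in mind is approximately this; the sed expression is a sketch:)

$(TOP)/%.d: $(TOP)/%.c
	$(CC) -MM $< | sed 's|^\(.*\)\.o:|$(TOP)/\1.o:|' > $@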
Is this the usual way to write makefiles that can be included by others?
If by "usual" you mean, "most common", then the answer is "no". The most common thing people do, is to improvise some changes to the includee so the names do not clash with the includer.
What you did, however, is "good design".
In fact, I take your design even further.
I compute a stack of directories: if the inclusion is recursive, you need to keep the current directories on a stack as you parse the makefile tree. $D is the current directory (shorter for people to type than $(TOP)/),
and I prepend everything in the includee with $D/, so you have variables:
$D/FOOBAR :=
and phony targets:
$D/phony:
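A minimal sketch of that machinery (sp and dirstack_ are naming conventions of mine, not anything built in to make); each included Rules.mk pushes the parent directory on entry and pops it on exit:

# Top of each included Rules.mk: push the parent directory.
sp := $(sp).x
dirstack_$(sp) := $(D)
D := $(patsubst %/,%,$(dir $(lastword $(MAKEFILE_LIST))))

# Namespaced variables and targets, as described above:
$D/FOOBAR := some value

.PHONY: $D/phony
$D/phony:
	@echo building in $(@D)

# Bottom of each Rules.mk: pop.
D := $(dirstack_$(sp))
sp := $(basename $(sp))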
I am looking for suggestions to properly handle separate debug and release build subdirectories, in a recursive makefile system that uses the $(SUBDIRS) target as documented in the GNU make manual to apply make targets to (source code) subdirectories.
Specifically, I'm interested in possible strategies for implementing targets like 'all', 'clean', 'realclean', etc.; targets that either assume one of the trees or are supposed to work on both trees are the ones causing problems.
Our current makefiles use a COMPILETYPE variable that gets set to Debug (default) or Release (the 'release' target), which properly does the builds, but cleaning up and make all only work on the default Debug tree. Passing down the COMPILETYPE variable gets clumsy, because whether and how to do this depends on the value of the actual target.
One option is to have specific targets in the subdirectories for each build type. So if you do a "make all" at the top level, it looks at COMPILETYPE and invokes "make all-debug" or "make all-release" as appropriate.
Alternatively, you could set a COMPILETYPE environment variable at the top level, and have each sub-Makefile deal with it.
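A sketch of the first option (the COMPILETYPE values and SUBDIRS are taken from your setup; the target names are assumptions):

COMPILETYPE ?= Debug

.PHONY: all all-Debug all-Release
all: all-$(COMPILETYPE)

all-Debug all-Release:
	for d in $(SUBDIRS); do $(MAKE) -C $$d $@ || exit 1; done

Here $@ expands to whichever target was invoked, so the same rule forwards either variant to every subdirectory.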
The real solution is to not do a recursive make, but to include makefiles in subdirectories in the top level file. This will let you easily build in a different directory than the source lives in, so you can have build_debug and build_release directories. It also allows parallel make to work (make -j). See Recursive Make Considered Harmful for a full explanation.
If you are disciplined in your Makefiles about the use of your $(COMPILETYPE) variable to reference the appropriate build directory in all your rules, from rules that generate object files, to rules for clean/dist/etc, you should be fine.
In one project I've worked on, we had a $(BUILD) variable that was set to (the equivalent of) build-$(COMPILETYPE), which made rules a little easier, since all the rules could just refer to $(BUILD); e.g., clean would rm -rf $(BUILD).
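In rule terms, that convention looks roughly like this (a sketch):

COMPILETYPE ?= Debug
BUILD := build-$(COMPILETYPE)

# Objects go into the build-type-specific directory.
$(BUILD)/%.o: %.c | $(BUILD)
	$(CC) $(CFLAGS) -c -o $@ $<

$(BUILD):
	mkdir -p $@

clean:
	rm -rf $(BUILD)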
As long as you are using $(MAKE) to invoke sub-makes (and using GNU make), you can automatically export the COMPILETYPE variable to all sub-makes without doing anything special at each invocation. For more information, see the relevant section of the GNU make manual.
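That is, one directive in the top-level makefile is enough:

# Pass COMPILETYPE through the environment to every $(MAKE) invocation.
export COMPILETYPE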
Some other options:
Force a re-build when compiler flags change, by adding a dependency for all objects on a meta-file that tracks the last used set of compiler flags. See, for example, how Git manages object files, and the sketch after this list.
If you are using autoconf/automake, you can easily use a separate out-of-place build directory for each build type, e.g., cd /scratch/build/$COMPILETYPE && $srcdir/configure --mode=$COMPILETYPE && make, which would take the build type out of the Makefiles and into configure (where you'd have to add some support for specifying your desired build flags based on the value of --mode in your configure.ac).
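A sketch of the flag-tracking trick from the first bullet, reusing the $(BUILD) convention from earlier and extending the object rule with an extra prerequisite (the .cflags file name is made up):

TRACK_CFLAGS = $(BUILD)/.cflags

# Rewrite the tracking file only when the flags actually differ, so its
# timestamp moves only on a real change.
$(TRACK_CFLAGS): FORCE
	@echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@

# Every object depends on the tracking file, so changed flags rebuild all.
$(BUILD)/%.o: %.c $(TRACK_CFLAGS)
	$(CC) $(CFLAGS) -c -o $@ $<

.PHONY: FORCE
FORCE: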
If you give some more concrete examples of your actual rules, maybe you will get some more concrete suggestions.