I work on a project which is often built and run on several operating systems and in multiple configurations. I use two compilers, icc and gcc, and multiple sets of arguments for those compilers. That can give me many build variants of one project.
What I would like to do is:
compile the project using the icc or gcc compiler with one set of arguments
test the performance of the application before and after the new build
compare the obtained results
compile the project with another set of arguments and repeat the previous steps
Does anyone have an idea how to do this nicely using a makefile?
You just need to cascade your make-targets according to your needs:
E.g.:
# Assume $(CONFIGURATION_FILES) is a list of files, all named *.cfg;
# each one stores the set of arguments for one step.

# The main target
all: $(CONFIGURATION_FILES)

# Procedure for each configuration file (a static pattern rule, so the
# steps below apply to every listed .cfg file)
$(CONFIGURATION_FILES): %.cfg: compile_icc compile_gcc test compare

.PHONY: compile_icc compile_gcc test compare
compile_icc:
	# do whatever is necessary
compile_gcc:
	# do whatever is necessary
test:
	# do whatever is necessary
compare:
	# do whatever is necessary
However, for this kind of job I would rather use a build-automation tool. I only know Maven, but for Makefile-based builds other tools may fit better; take a look at the different options, e.g. here:
https://en.wikipedia.org/wiki/List_of_build_automation_software
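For illustration, here is a minimal sketch of one per-configuration step, under these assumptions: each .cfg file holds the flags for one variant, and benchmark.sh and compare_results.sh are hypothetical helper scripts:
results-%.txt: myprog.c %.cfg
	$(CC) $$(cat $*.cfg) -o myprog-$* myprog.c
	./benchmark.sh ./myprog-$* > $@

compare: $(patsubst %.cfg,results-%.txt,$(CONFIGURATION_FILES))
	./compare_results.sh $^
Run it once per compiler, e.g. make compare CC=icc and then make compare CC=gcc, and compare the results across runs.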
I have a series (dozens) of projects that consist of large amounts of content in git repositories. Each repository has a git submodule of a common toolkit. The toolkit contains libraries and scripts needed to process the content repositories and build a publishable result. All the repositories are pushed to a host that runs CI and publishes the results. The idea is to keep the repeated code to an absolute minimum, have mostly content in the repositories, and rely on the toolkit to put it all together the same way for every project.
Each project has a top level Makefile that typically only has a couple lines, for example:
STAGE = stage
include toolkit/Makefile
The STAGE variable has some info about what stage this particular project is in, which determines which formats get built. Pretty much everything else is handled by the 600-line Makefile in the toolkit. Building some of the output formats can require a long chain of dependencies: processing a source might trigger a target rule, but to get to the target there might be 8–10 intermediate dependencies where various files get generated before the final target can be made.
I've run across a couple situations where I want to completely replace (not just extend) a specific target rule in just one project. The target gets triggered in the middle of a chain of dependencies but I want to do something completely different for that one step.
I've tried just replacing the target in the top level Makefile:
STAGE = stage
%-target.fmt:
	commands
include toolkit/Makefile
This is specifically documented not to be supported, but tantalizingly it works some of the time. I've tried changing the order of the custom target and the include, but that doesn't seem to significantly affect this. In case it matters: yes, the use of patterns in targets is important.
Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the ‘include’ directive to include one in the other, and add more targets or variable definitions. However, it is invalid for two makefiles to give different recipes for the same target.
Interestingly, if I put custom function definitions in the top-level Makefile below the include, I can override the functions from the toolkit, such that $(call do_thing) will use my override:
STAGE = stage
include toolkit/Makefile
define do_thing
commands
endef
However, the same does not seem to be true for targets. I am aware of the double-colon syntax, but I do not want to just extend an existing target with more dependencies; I want to replace the target entirely, with a different way of generating the same file.
I've thought about using recursive calls to make, as suggested in the documentation, but then the environment, including the helper functions that are extensively set up in the toolkit Makefile, would not be available to any targets in the top-level Makefile. That would be a show stopper.
Is there any way to make make make an exception for me? Can it be coerced into overriding targets? I'm using exclusively recent versions of GNU Make and am not too concerned about portability. Failing that, is there another conceptual way to accomplish the same ends?
¹ My brain hasn't had enough coffee today. In trying to open Stack Overflow to ask this question I typed makeoverflow.com into my browser and was confused why auto-completion wasn't kicking in.
Updated answer:
If your recipe has dependencies, these cannot be overridden by default. Then $(eval) might save you, like this:
In toolkit have a macro definition with your generic rule:
ifndef TARGET_FMT_COMMANDS
define TARGET_FMT_COMMANDS
	command1   # note: these commands must be prefixed with a TAB character
	command2
endef
endif

define RULE_TEMPLATE
%-target.fmt: $(1)
	$$(call TARGET_FMT_COMMANDS)
endef

# define the default dependencies; notice the ?= assignment
TARGET_DEPS ?= list dependencies here

# instantiate the template for the default case
$(eval $(call RULE_TEMPLATE,$(TARGET_DEPS)))
Then, in the calling code, just define TARGET_FMT_COMMANDS and TARGET_DEPS before including the toolkit, and this should do the trick.
(please forgive the names of the defines/variables, they are only an example)
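For example, a hypothetical top-level Makefile would then look like this (my-replacement-command and the dependency names are placeholders); both overrides come before the include, matching the ifndef/?= guards:
define TARGET_FMT_COMMANDS
	my-replacement-command -o $@
endef
TARGET_DEPS = other.dep another.dep
include toolkit/Makefile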
Initial answer:
Well, I'd write this in the toolkit:
define TARGET_FMT_COMMANDS
	command1   # note: these commands must be prefixed with a TAB character
	command2
endef

%-target.fmt:
	$(call TARGET_FMT_COMMANDS)
Then you could simply redefine TARGET_FMT_COMMANDS after include toolkit/Makefile.
The trick is to systematically have the TAB character precede the commands inside the definition; if not, you get weird errors.
You can also give parameters to the definition, just as usual.
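For instance (alternate-command is a placeholder), a top-level Makefile could do:
include toolkit/Makefile

define TARGET_FMT_COMMANDS
	alternate-command -o $@
endef
This works because the rule's recipe is $(call TARGET_FMT_COMMANDS), which is only expanded when the recipe runs, by which time the redefinition is in effect.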
I ran into the same issue. Since overriding a target does not override its prerequisites, I ended up working around the problem by overriding the prerequisites' rules as well, forcing some of them to have empty recipes.
in toolkit/Makefile:
test: depend depend1 depend2
	@echo test toolkit
...
in Makefile:
include toolkit/Makefile
depend1 depend2: ;
test: depend
	@echo test
Notice how depend1 and depend2 now have empty recipes, so the test target's command is overridden and its dependencies are effectively neutralized as well.
I have a #define ONB in a C file which (with several #ifndef...#endif blocks) changes many aspects of a program's behavior. Now I want to change the project Makefile (or even better, Makefile.am) so that if ONB is defined, and some other options are set accordingly, it runs some special commands.
I searched the web, but all I found was checking for environment variables... So is there a way to do this? Or must I change the C code to check environment variables instead? (I prefer not to change the code, because it is a really big project and I do not know everything about it.)
Questions: my level is insufficient to ask in comments, so I will have to ask here:
How and when is the define added to the target in the first place?
Do you essentially want a way to query the binaries after compilation, to determine whether a particular define was used?
It would be helpful if you could give a concrete example, i.e. what are the special commands you want run, and what are the .c .h files involved?
Possible solution: Depending on what you need, you could use LLVM tools to generate and examine the AST of your code to see if a define is used. But this seems a little like over-engineering.
Possible solution: You could also use #include to pull in .c or header files and have a conditional #error generated, or compile (to a .o); if the compile fails, you know whether the macro is defined. But this has its own issues, depending on how things are set up in your makefile.
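For illustration, a minimal Makefile sketch of that test-compile idea, assuming the define is visible through a header (config.h, run-special-command, and the special target are placeholders):
# "yes" if compiling a probe that #errors without ONB succeeds, else "no"
ONB_DEFINED := $(shell printf '#include "config.h"\n#ifndef ONB\n#error no ONB\n#endif\n' | $(CC) $(CFLAGS) -x c -fsyntax-only - 2>/dev/null && echo yes || echo no)

ifeq ($(ONB_DEFINED),yes)
special:
	run-special-command
endif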
I am having a design problem when using GNU Make.
My problem is the following:
I have 2 executables to compile.
These binaries need to be compiled for each compiler I list in a variable, let us call it COMPILERS.
After compiling the binaries, I need to run all binaries (all of them) several times and generate, for each of them, the times in a text file.
I must put all these files together, and generate a plot out of all that data.
So, for example, if I have 3 compilers to test, I would have 6 binaries, 6 * n_of_times_to_run_benchmark benchmark runs, and a final output with all that data in a single plot file.
The problem with this is that my usual way to approach binary compilation is to use the CXX variable, CXXFLAGS, etc. But then the CXX variable would have to change within the same make session, which looks inconsistent to me. An example invocation would be:
make plot COMPILERS="clang++ g++"
What I did was to compile the binaries separately, invoking make once per compiler and making use of the CXX variable.
From a script, I create a folder build-clang++ and compile, create another folder build-g++ and compile, then run all the benchmarks per folder, for each pair of executables from the same compiler. But for this I need an external script, and that is what I want to avoid, so that I can later port to Windows more easily, without duplicating scripts or installing more dependencies.
What is the best way to handle this:
Use another Makefile that calls this Makefile with different configurations, and use it as my "script" for generating the plot? This way the Makefile looks much more traditional to me, with its separate flags, etc.
Just create all targets directly inside the same make session? (see the sketch below)
To me the script solution looks cleaner, because a Makefile is usually written in a way where the compiler is a fixed variable that does not change in the whole session.
Thank you.
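For illustration, option 2 can be done in a single GNU Make session by instantiating one set of rules per compiler with $(foreach)/$(eval). A minimal sketch, assuming sources bench1.cpp and bench2.cpp and hypothetical helper scripts run_benchmarks.sh and plot.py:
COMPILERS ?= clang++ g++
BINARIES  := bench1 bench2

# One copy of these rules is instantiated per compiler in $(COMPILERS).
define PER_COMPILER
build-$(1)/%: %.cpp
	mkdir -p build-$(1)
	$(1) $$(CXXFLAGS) -o $$@ $$<

times-$(1).txt: $(addprefix build-$(1)/,$(BINARIES))
	./run_benchmarks.sh $$^ > $$@
endef
$(foreach cxx,$(COMPILERS),$(eval $(call PER_COMPILER,$(cxx))))

plot: $(foreach cxx,$(COMPILERS),times-$(cxx).txt)
	./plot.py $^ > plot.png
It would be invoked as in the question: make plot COMPILERS="clang++ g++".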
I'm struggling with the last pieces of logic to make our Ada builder work as expected with variant_dir. The problem is caused by the fact that the inflexible tools gnatbind and gnatlink don't allow the binder files to be placed in a directory other than the current one. This leaves me with two options:
Let gnatbind write the binder files to the top directory and then let gnatlink pick them up from there. This may however cause race conditions if we want to allow simultaneous builds for different architectures and compiler versions, which we do.
Modify the calls to gnatbind and gnatlink to temporarily go down to the build directory, in our case build/$ARCH/src-path. I successfully fixed the gnatbind step, as this is explicitly called using an env.Execute from within the Ada builder. To try to fix the linking step I've modified the Program env using
env["LINKCOM"] = SCons.Action.Action(ada_linkcom)
where ada_linkcom is defined as
def ada_linkcom(source, target, env):
....
return ret
where ret is a string describing what should be done in the shell. I need this to be a function because it contains somewhat complicated logic to convert paths from being relative to the top level to just containing their basenames.
This however fails with an error in scons-2.3.1/SCons/Executor.py on line 347 in function do_execute. Isn't env["LINKCOM"] allowed to be a function with ada_linkcom's signature?
No, it's not. You seem to think that 'env["LINKCOM"]' is what actually calls/executes the final build command, and that's not quite correct. Instead, environment variables like LINKCOM get expanded by the Executor/Builder for each specified Action, and are then executed.
You can have Python functions as Actions, and also use a so-called "generator" to create your Action strings on-the-fly. But you have to assign this Action to a Builder, and can't set it as an environment variable directly.
Please also have a look at the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ), especially section 18.4 "Builders That Execute Python Functions". Our basic guide for writing Builders and Tools might also prove to be helpful: http://www.scons.org/wiki/ToolsForFools
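For illustration, a minimal SConstruct sketch of the generator approach under the question's constraints; the builder name AdaProgram and the gnatlink command line here are assumptions, not a ready-made Ada tool:
import os

def ada_linkcom_generator(source, target, env, for_signature):
    # Build the link command from basenames only, as the question requires.
    srcs = " ".join(os.path.basename(str(s)) for s in source)
    return "gnatlink -o %s %s" % (target[0], srcs)

env = Environment()
env.Append(BUILDERS={"AdaProgram": Builder(generator=ada_linkcom_generator)})
# prog = env.AdaProgram("myprog", ["src/main.ali"])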
I am looking for suggestions on how to properly handle separate debug and release build subdirectories, in a recursive makefile system that uses the $(SUBDIRS) target as documented in the GNU make manual to apply make targets to (source code) subdirectories.
Specifically, targets like 'all', 'clean', 'realclean', etc. that either assume one of the trees or should work on both trees are causing problems, and I'm interested in possible strategies to implement them.
Our current makefiles use a COMPILETYPE variable that gets set to Debug (the default) or Release (the 'release' target), which properly does the builds, but cleaning up and 'make all' only work on the default Debug tree. Passing down the COMPILETYPE variable gets clumsy, because whether and how to do this depends on the value of the actual target.
One option is to have specific targets in the subdirectories for each build type. So if you do a "make all" at the top level, it looks at COMPILETYPE and invokes "make all-debug" or "make all-release" as appropriate.
Alternatively, you could set a COMPILETYPE environment variable at the top level, and have each sub-Makefile deal with it.
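A minimal sketch of the first option (SUBDIRS is illustrative, and each sub-Makefile is assumed to define all-Debug/all-Release and clean-Debug/clean-Release targets):
.PHONY: all clean
all clean:
	for d in $(SUBDIRS); do $(MAKE) -C $$d $@-$(COMPILETYPE); done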
The real solution is to not do a recursive make, but to include makefiles in subdirectories in the top level file. This will let you easily build in a different directory than the source lives in, so you can have build_debug and build_release directories. It also allows parallel make to work (make -j). See Recursive Make Considered Harmful for a full explanation.
If you are disciplined in your Makefiles about the use of your $(COMPILETYPE) variable to reference the appropriate build directory in all your rules, from rules that generate object files, to rules for clean/dist/etc, you should be fine.
In one project I've worked on, we had a $(BUILD) variable that was set to (the equivalent of) build-$(COMPILETYPE), which made rules a little easier, since all the rules could just refer to $(BUILD); e.g., clean would rm -rf $(BUILD).
As long as you are using $(MAKE) to invoke sub-makes (and using GNU make), a COMPILETYPE set on the command line is automatically passed down to all sub-makes without doing anything special; a value set inside the Makefile can be passed down with the export directive. For more information, see the relevant section of the GNU make manual.
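A minimal sketch combining these points (SUBDIRS is illustrative); command-line settings like make COMPILETYPE=Release propagate on their own, and the export covers the case where the variable is set inside the Makefile:
COMPILETYPE ?= Debug
export COMPILETYPE
BUILD = build-$(COMPILETYPE)

.PHONY: all clean
all:
	for d in $(SUBDIRS); do $(MAKE) -C $$d all; done
clean:
	for d in $(SUBDIRS); do $(MAKE) -C $$d clean; done
	rm -rf $(BUILD)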
Some other options:
Force a re-build when compiler flags change, by adding a dependency for all objects on a meta-file that tracks the last-used set of compiler flags (see the sketch after this list). See, for example, how Git manages object files.
If you are using autoconf/automake, you can easily use a separate out-of-place build directory for your different build types, e.g., cd /scratch/build/$COMPILETYPE && $srcdir/configure --mode=$COMPILETYPE && make, which would take the build type out of the Makefiles and into configure (where you'd have to add some support for specifying your desired build flags based on the value of --mode in your configure.ac).
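A minimal sketch of the flags-tracking idea from the first bullet (the stamp name .cflags is illustrative); the stamp is rewritten only when the flags actually differ, so objects rebuild exactly when the flags change:
FORCE:
# Compare the current $(CFLAGS) against the stored copy; rewrite on change.
.cflags: FORCE
	@echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@

%.o: %.c .cflags
	$(CC) $(CFLAGS) -c -o $@ $<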
If you give some more concrete examples of your actual rules, maybe you will get some more concrete suggestions.