We are as good as done converting our large GNU Make build system to SCons.
Everything works nicely, except that we have 7 seconds of startup time spent in Environment.Clone(). We are using Clone() because of its nice functional style of not modifying existing global state.
Has anybody come up with a way to avoid copying the 99 percent of the Environment data that never gets changed in the Clone anyway?
Would removing non-used keys in its dictionary improve Clone() time?
Based on your comments, you could consider dividing your build into several smaller builds, each controlled by its own separate SConstruct build script, while maintaining the current SConstruct for the complete build. I could imagine that a developer changing the C++ code would only need to build the complete project once to get the non-C++ dependencies. Then, while working on the C++ feature, he/she would only need to build the C++ part. Likewise for Ada and Fortran.
For example, assuming you have something similar to the following dir structure:
root_dir
|
+--- src_ada
|
+--- src_cpp
|
+--- src_fortran
You could have a root level SConstruct for each type of build, as follows:
SConstruct # performs a complete build
SConstruct.ada # builds just the ada
SConstruct.cpp # builds just the cpp
SConstruct.fortran # builds just the fortran
If these SConstruct scripts are correctly created, then the subsidiary SConscript build scripts probably wouldn't need to be modified.
You could even take this one step further by creating a build wrapper script that could take the following command line arguments:
build [complete (default arg) | ada | cpp | fortran]
And internally, the script would call scons with the appropriate SConstruct, which could be one of the following:
scons -f SConstruct
scons -f SConstruct.ada
scons -f SConstruct.cpp
scons -f SConstruct.fortran
Notice that the options -f file, --file=file, --makefile=file, and --sconstruct=file all do the same thing: they let you specify which SConstruct to use for the build, as explained in the SCons man page.
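If it helps, such a wrapper could be as small as the following sketch (this assumes a POSIX shell and the SConstruct file names listed above; adapt as needed):

#!/bin/sh
# build: dispatch to the SConstruct that matches the requested build flavour
case "${1:-complete}" in
    complete) exec scons -f SConstruct ;;
    ada)      exec scons -f SConstruct.ada ;;
    cpp)      exec scons -f SConstruct.cpp ;;
    fortran)  exec scons -f SConstruct.fortran ;;
    *)        echo "usage: build [complete|ada|cpp|fortran]" >&2; exit 1 ;;
esac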
Another option would be to have several different Environment objects that are passed to the different subsidiary SConscript build scripts. You could have one with general build settings applicable to all subsidiary SConscript scripts, which would not need to be cloned, and then different Environment objects for the different programming languages, passed selectively to the subsidiary SConscript scripts that need them. These more specific Environment objects would probably need to be cloned.
Separating the Environment objects like this should speed up the cloning, since only the information needed by a particular subsidiary SConscript script would be cloned.
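As a rough sketch of what that could look like in the top-level SConstruct (the construction variables and directory names here are only illustrative, not taken from your build):

# SConstruct (sketch)
common_env  = Environment()                    # general settings shared by everything; never cloned
cpp_env     = Environment(CXXFLAGS=['-O2'])    # C++-specific settings
ada_env     = Environment()                    # Ada-specific settings
fortran_env = Environment(F95FLAGS=['-O2'])    # Fortran-specific settings

SConscript('src_cpp/SConscript',     exports=['common_env', 'cpp_env'])
SConscript('src_ada/SConscript',     exports=['common_env', 'ada_env'])
SConscript('src_fortran/SConscript', exports=['common_env', 'fortran_env'])

Each subsidiary SConscript would then Import() the two environments and clone only the small, language-specific one, e.g. env = cpp_env.Clone().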
Related
I'm slowly losing my mind here. First, let me describe what it is I'm trying to do. We have a compiler that spews out weirdly formatted dependency files. To get these dependency files into a format GNU Make can understand, they need to be processed by a Perl script first. Technically, the Perl script doesn't convert the input dependency files it gets passed; instead it creates a new, properly formatted dependency file for each input dependency file.
Now, in order for GNU Make to know which translation units need recompiling and which don't, it obviously must have seen those dependency files before trying to make the translation unit targets, so we have the following line in our master makefile:
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
where $(PROCESSED_EXISTING_DEPENDENCY_FILES) is a list of all converted dependency files. My idea was to (ab)use an automatically generated makefile whose recipe not only builds that makefile itself but also triggers the creation of all dependency files mentioned in the $(PROCESSED_EXISTING_DEPENDENCY_FILES) list, and to include that makefile just before including the converted dependency files. To ensure that the conversion takes place, the parent process of our Make process deletes the automatically created makefile first (we have a Perl wrapper process controlling GNU Make). The relevant part of the master makefile would look like this:
# Phony target that creates processed dependency files.
CONVERTED_EXISTING_DEPENDENCY_FILES :
    <recipe here>

$(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK) : CONVERTED_EXISTING_DEPENDENCY_FILES
    $(info $(TARGET_BUILD_MESSAGE_PREFIX) Building $(notdir $@) ...)
    $(file >$@,# Automatically generated makefile that gets included before including the existing, converted dependency files.)
    $(file >>$@,$(DOLLAR)(info Including pre-converted-dependency-files-inclusion hook file ...))
    $(file >>$@,)
include $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK)
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
We're already using the same basic principle in several other cases, and so far this has worked perfectly fine, but for some reason when I try this, GNU Make gets lost in an infinite loop where it continuously re-evaluates the master makefile, includes all other makefiles, and then goes back to re-evaluating the master makefile again.
The $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK) does get created, and if there are any dependency files to be converted, they are processed, too, but I'm still at a loss as to what causes this infinite loop in Make. We are using GNU Make 4.2.1 for Windows on a Windows 10 (64 bit) system.
I recommend you rework your model completely to avoid any recipes that know how to build included files, and instead follow the model for auto-dependency generation described in this post (based on how automake handles dependency generation).
Then add the postprocessing step directly into the same recipe that generates the dependency files, rather than having a separate rule that does it. I don't think it's necessary to have two separate rules, because you really don't want the intermediate step here: you just want to generate the make prerequisite definitions. It's similar to how we normally wouldn't have separate rules for preprocessing, compiling, and assembling object files: one rule does all of that, even though there are multiple steps involved.
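As a rough illustration of that combined rule adapted to your situation (the .rawdep extension, convert_deps.pl, and the assumption that the compiler drops its odd dependency file next to the object are all placeholders for whatever your toolchain really does):

SRCS := $(wildcard *.c)
DEPS := $(SRCS:.c=.d)

%.o: %.c
    $(CC) $(CFLAGS) -c -o $@ $<             # compiler also emits its weirdly formatted $*.rawdep
    perl convert_deps.pl $*.rawdep > $*.d   # post-process it in the same recipe

-include $(DEPS)                            # missing .d files are silently ignored

With -include and no rule that knows how to build the .d files, make never restarts itself to regenerate them, which is exactly the re-evaluation loop you want to avoid.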
I have a tool, let's say mytool, that does some pre-processing on the source files. Basically, it instruments some functions (based on an input list file) in the source files.
It is invoked like this (let's say we have two source files, input1.c and input2.c):
./mytool input1.c input2.c --
('--' leaves some arguments at their defaults)
Now I wish to hook this tool into any build process, i.e. a makefile, such that the tool gets all the source files from the makefile and runs on all of them. So if, say, there were 3 C files, 1.c, 2.c and 3.c, then we would want to do
./mytool 1.c 2.c 3.c --
and then proceed with the usual build process, i.e. 'make' in the simplest case.
How can I achieve this? Is this possible with some sort of variable overriding?
The simplest thing to do, assuming you don't mind the tool running once per .c file (as opposed to once for all .c files) would be to replace the default %: %.c and %.o: %.c built-in make rules (assuming those are in use) and add your tool to the body of those rules.
This only runs the tool for files that need to be re-built from source (as per @Beta's comment on the OP).
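Something along these lines, for instance (the compile commands and flags are placeholders for whatever your build actually uses):

# Override the built-in rules so each source file is instrumented right before it is compiled.
%.o: %.c
    ./mytool $< --
    $(CC) $(CFLAGS) -c -o $@ $<

%: %.c
    ./mytool $< --
    $(CC) $(CFLAGS) -o $@ $<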
I have a situation where I am building something like this:
target1: dep1 list2

dep1:
    perl script that generates a .h file

list2:
    compiles a list of c files
    including one that needs the .h file from dep1 to exist
I build target1 with a parallel build option (-J 6) and I am trying to understand the clearmake Makefile syntax that would say "finish dep1, and then do all of the list2 compiling in parallel".
Whenever I build, it fails to compile items in list2 because the header file generation from dep1 hasn't happened yet, since everything runs in parallel (dep1 takes about 20 seconds).
How would you avoid that?
One way to introduce sequential build steps in a clearmake build is to split the build in two, a bit like what is explained in the section "Coordinating reference times of several builds":
An alternative approach is to run all the builds within the same clearaudit session.
For example, you can write a shell script, multi_make, that includes several invocations of clearmake or clearaudit (along with other commands).
Running the script as follows ensures that all the builds are subsessions that share the same reference time:
clearaudit -c multi_make
While the clearaudit part might not be relevant to your case, the main idea persists: pilot the build process through multiple sequential clearmake steps, instead of trying to encode those steps in the Makefile (with the .NOTPARALLEL: directive mentioned in "clearmake Makefiles and BOS Files").
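Applied to your example, multi_make could be as small as the following sketch (the target names are taken from your Makefile; everything else is assumed):

#!/bin/sh
# multi_make: run the two phases as separate, strictly sequential clearmake steps
clearmake dep1           # first pass: generate the .h file
clearmake -J 6 target1   # second pass: everything else, fully parallel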
Here is my problem: I have been using Java for many years and enjoy having many directories separating different areas of the code. For my current project I am writing Fortran code, which should compile under Windows and Unix/Linux. For Windows, I am using Eclipse/Photran with MinGW/gfortran tools to set up Makefiles.
Here is the desired project structure (a deeply nested, Java-like tree would be even nicer):
dir1/src/*.f95
dir1/make/Makefile_lib1.any
dir1/make/Makefile_lib1.win
dir1/make/Makefile_lib1.unix
dir2/src/*.f
dir2/make/Makefile_lib2.any
dir2/make/Makefile_lib2.win
dir2/make/Makefile_lib2.unix
...
dir_main/src/*.f or *.f95
dir_main/make/Makefile_main.any
dir_main/make/Makefile_main.win
dir_main/make/Makefile_main.unix
I would like to call make -f Makefile_main.unix, which would set up any Unix-specific variables and then include Makefile_main.any, Makefile_lib1.any, ... (and similarly when making on Windows).
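For example, I imagine Makefile_main.unix being little more than this sketch (the variable names and ROOT_DIR are just placeholders for whatever actually differs per platform and for how the tree is located from my service directory):

# Makefile_main.unix -- set Unix-specific values, then pull in the shared rules
FC       := gfortran
SRC_EXT  := f95
ROOT_DIR := ..     # wherever root_dir sits relative to the invocation directory

include $(ROOT_DIR)/dir_main/make/Makefile_main.any
include $(ROOT_DIR)/dir1/make/Makefile_lib1.any
include $(ROOT_DIR)/dir2/make/Makefile_lib2.any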
I got to the stage where I can see all source files in a given directory, e.g.
SRCS := $(wildcard $(SRC_DIR)/*.$(SRC_EXT))
Now I am struggling with how to set up all the dependencies, since in Fortran 95 each source file generates both a *.o and a *.mod file.
Is there a way to switch between directories when compiling, so that the targets/dependencies do not have the directory path in their names? Note that I am calling make from some other service directory where the Eclipse project lives. Any suggestions on how to proceed?
I really do not want to do the usual Fortran style of having just one directory with all the mess together with the code.
There are two major strategies you can take.
You can place a makefile in each subdirectory and have it support targets like all, clean, etc., then use recursive make invocations from the top-level makefile to make the same target (e.g. all) in every subdirectory.
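A minimal sketch of that recursive variant, assuming each subdirectory gets a plain Makefile supporting the same targets (directory names taken from the question):

SUBDIRS := dir1 dir2 dir_main

all clean:
    for d in $(SUBDIRS); do $(MAKE) -C $$d $@ || exit 1; done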
Alternatively, you can handle it all in one make invocation, without recursing, but then you'll have to work with relative paths containing subdirectory names. Personally I don't see a problem with it, and I've maintained a system of makefiles based on this approach.
Here is what you can do in your case, assuming that SRC is set correctly to the list of relative paths to every source you need to compile.
# This replaces the SRC_EXT suffix with .o in each filename
OBJ = $(SRC:%.$(SRC_EXT)=%.o)
$(BINARY_NAME): $(OBJ)
    ...link command...

%.o: %.$(SRC_EXT)
    ...compile command...
We have an ActionScript (Flex) project that we build using GNU make. We would like to add an M4 preprocessing step to the build process (e.g., so that we can create an ASSERT() macro that includes file and line numbers).
We are having remarkable difficulty.
Our current strategy is:
Create a directory "src/build" (assuming source code is in src/ and subdirectories).
Within src/build, create a Makefile.
Run make inside src/build.
The desired behavior is that make would then use the rules we write to run the *.as files from src/ and its subdirectories through m4, creating new *.as files under src/build. For example:
src/bar.as -> m4 -> src/build/bar.as
src/a/foo.as -> m4 -> src/build/a/foo.as
The obvious make rule would be:
%.as : ../%.as
    echo "m4 --args < $< > $@"
This works for bar.as but not a/foo.as, apparently because make is being "smart" about splitting and re-packing directories. make -d reveals:
Trying implicit prerequisite `a/../foo.as'.
Looking for a rule with intermediate file `a/../foo.as'.
but we want the prerequisite to be "../a/foo.as". This (what we don't want) is apparently documented behavior (http://www.gnu.org/software/make/manual/make.html#Pattern-Match).
Any suggestions? Is it possible to write a pattern rule that does what we want?
We've tried VPATH also and it does not work because the generated .as files are erroneously satisfying the dependency (because . is searched before the contents of VPATH).
Any help would be greatly appreciated.
One option is to use a different extension for files that haven't been preprocessed. Then you can have them in the same directory without conflict.
As Anon also said, your source code is no longer Flex - it is 'to be preprocessed Flex'. So, use an extension such as '.eas' (for Extended ActionScript) for the source code, and create a 'compiler' script that converts '.eas' into '.as' files, which can then be processed as before.
You may prefer to have the Extended ActionScript compiler do the whole compilation job - taking the '.eas' direct to the compiled form.
The main thing to be wary of is ensuring that '.eas' files are considered before the derived '.as' files. Otherwise, your changes in the '.eas' files will not be picked up, leading to hair-tearing and other undesirable behaviours (head banging, as in 'banging head against wall', for example) as you try to debug code that hasn't changed even though the source has.
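With that naming scheme in place, the pattern rule becomes trivial, since the source and the generated file no longer collide (the m4 arguments are placeholders, as in the question):

%.as: %.eas
    m4 --args < $< > $@

With an explicit rule like this, make rebuilds the .as file whenever the corresponding .eas file is newer, which addresses the staleness worry above.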