I have a situation where I am building something like this:
target1: dep1 list2

dep1:
	<recipe: perl script that generates a .h file>

list2:
	<recipe: compiles a list of C files, including one that needs the .h file from dep1 to exist>
I build target1 with a parallel build option (-J 6) and I am trying to understand the clearmake Makefile syntax that would say "finish dep1 and then do all of list2 compiling in parallel".
Whenever I build, items in list2 fail to compile because the header file from dep1 hasn't been generated yet: everything runs in parallel, and dep1 alone takes about 20 seconds.
How would you avoid that?
One way to introduce sequential build steps in a clearmake build is to split the build in two, a bit like what is explained in the section "Coordinating reference times of several builds":
An alternative approach is to run all the builds within the same clearaudit session.
For example, you can write a shell script, multi_make, that includes several invocations of clearmake or clearaudit (along with other commands).
Running the script as follows ensures that all the builds are subsessions that share the same reference time:
clearaudit -c multi_make
While the clearaudit part might not be relevant to your case, the main idea persists: drive the build process through multiple sequential clearmake steps, instead of trying to encode those steps in the Makefile (with the .NOTPARALLEL: directive mentioned in "clearmake Makefiles and BOS Files").
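For example, a minimal sketch of such a multi_make wrapper for the Makefile in the question (the script layout is an illustrative assumption, not verified against your environment):

#!/bin/sh
# multi_make: serialize the two phases; parallelism only within a phase.
clearmake dep1         # phase 1: generate the .h file, runs by itself
clearmake -J 6 list2   # phase 2: compile list2 with up to 6 parallel jobs

Each clearmake invocation finishes before the next one starts, so the header is guaranteed to exist by the time the list2 compiles begin.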
Related
Just out of curiosity, what is the order of executing targets in a makefile with
${OBJ_DIR}/%.o: ${SRC_DIR}/%.cpp
I noticed it is not lexicographic (the order in which ls lists files). Is it just random?
They are built in the order in which make walks the prerequisite graph.
In the simple case where you don't have parallel jobs (no -j option), then if you have a target like:
prog: foo.o bar.o baz.o
make will first try to build foo.o, then bar.o, then baz.o, then finally prog.
If you do enable parallel jobs, then make will still try to start builds in the same order but because some builds finish faster than others, you may get different targets building at the same time.
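A tiny makefile you can run to watch this happen (all names here are made up for the demonstration):

prog: foo.o bar.o baz.o
	@echo linking $@

%.o:
	@echo start $@; sleep 1; echo done $@; touch $@

With plain make the start/done messages come out strictly in the order foo.o, bar.o, baz.o; with make -j3 all three start messages appear before any done message, and the completion order can vary from run to run.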
I'm slowly losing my mind here. First, let me describe what it is I'm trying to do. We have a compiler that spews out weirdly formatted dependency files. To get these dependency files into a format GNU Make can understand, they need to be processed by a Perl script first. Technically, the Perl script doesn't convert the input dependency files it gets passed; instead, it creates a new, properly formatted dependency file for each input dependency file.
Now, in order for GNU Make to know which translation units need recompiling and which don't, it obviously must have seen those dependency files before trying to make the translation unit targets, so we have the following line in our master makefile:
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
where $(PROCESSED_EXISTING_DEPENDENCY_FILES) is a list of all converted dependency files. My idea was to (ab-)use an automatically generated makefile whose recipe not only builds that makefile but also triggers the creation of all dependency files mentioned in the $(PROCESSED_EXISTING_DEPENDENCY_FILES) list and include that makefile just before including the converted dependency files. To ensure that the conversion takes place, the parent process of our Make process will delete the automatically created makefile first (we have a Perl wrapper process controlling GNU Make). The relevant part in the master makefile would look like this:
# Phony target that creates the processed dependency files.
CONVERTED_EXISTING_DEPENDENCY_FILES :
	<recipe here>

$(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK) : CONVERTED_EXISTING_DEPENDENCY_FILES
	$(info $(TARGET_BUILD_MESSAGE_PREFIX) Building $(notdir $@) ...)
	$(file >$@,# Automatically generated makefile that gets included before including the existing, converted dependency files.)
	$(file >>$@,$(DOLLAR)(info Including pre-converted-dependency-files-inclusion hook file ...))
	$(file >>$@,)
include $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK)
include $(PROCESSED_EXISTING_DEPENDENCY_FILES)
We're already using the same basic principle in several other cases, and so far this has worked perfectly fine, but for some reason when I try this, GNU Make gets lost in an infinite loop where it continuously re-evaluates the master makefile, includes all other makefiles, and then goes back to re-evaluating the master makefile again.
The $(PRE_CONVERTED_DEPENDENCY_FILE_INCLUSION_HOOK) does get created, and if there are any dependency files to be converted, they are processed, too, but I'm still at a loss as to what causes this infinite loop in Make. We are using GNU Make 4.2.1 for Windows on a Windows 10 (64 bit) system.
I recommend you rework your model completely to avoid any recipes that know how to build included files, and instead follow the model for auto-dependency generation described in this post (based on how automake handles dependency generation).
Then fold the postprocessing step directly into the recipe that generates the dependency files, rather than having a separate rule for it. I don't think two separate rules are necessary, because you really don't want the intermediate file here: you just want to generate the make prerequisite definitions. It's similar to how we normally don't have separate rules for preprocessing, compiling, and assembling object files: one rule does all of that, even though multiple steps are involved.
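A minimal sketch of that model with the Perl step folded in (DEPDIR, convert_deps.pl, and the assumption that the compiler drops its oddly formatted dependency file as $(DEPDIR)/$*.raw are all illustrative, not your actual setup):

DEPDIR := .deps
SRCS   := $(wildcard *.c)

%.o: %.c $(DEPDIR)/%.d | $(DEPDIR)
	$(CC) $(CFLAGS) -c -o $@ $<    # assumed to also emit $(DEPDIR)/$*.raw
	perl convert_deps.pl $(DEPDIR)/$*.raw > $(DEPDIR)/$*.d

$(DEPDIR): ; @mkdir -p $@

DEPFILES := $(SRCS:%.c=$(DEPDIR)/%.d)
$(DEPFILES):    # empty rule: a missing .d file is not an error

include $(wildcard $(DEPFILES))

Because the .d files are declared as targets with an empty recipe and included via $(wildcard ...), make never has a rule that "knows how to build" an included file, which is exactly what triggers the remake-and-restart loop.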
Hi, I have a build script called "buildMyJava" that builds a bunch of Java source code. Assuming the source files are in different directories such as "folder1" and "folder2", the output goes to a folder called "classes". How do I create a makefile so it KNOWS to build only when the *.java source files in those two directories have changed, or when the classes output is missing?
I have something like the following, but it ALWAYS builds; the dependencies are not working.
all: task

task: folder1/*.java folder2/*.java classes/
	buildMyJava
First of all, the build script produces the .java files, thus the .java files should be targets, not prerequisites. So you should have something like this:
folder1/%.java folder2/%.java:
	buildMyJava
The only problem with this is that if you do a make -j2, buildMyJava will run multiple times (once for folder1 and once for folder2). In fact, this is a limitation of makefiles -- you cannot have multiple targets invoke the same recipe only once. There is a good discussion of this here: http://www.cmcrossroads.com/article/rules-multiple-outputs-gnu-make
Notice, though, that a 'pattern' target counts as a single target -- which means that if you can get one pattern to match all targets, you can invoke the recipe only once. A small caveat: the % symbol cannot represent /'s, so you cannot do folder%.java, as that would not match folder1/file1.java. If you can get your script to output to only one directory at a time, though, you may be able to do the following:
folder1/%.java:
	buildMyJava folder1

folder2/%.java:
	buildMyJava folder2
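If the goal is simply "rerun buildMyJava only when some .java file changed", a stamp file is another common workaround for the one-recipe-many-outputs limitation discussed in the article linked above. A sketch, assuming GNU make (the stamp name is made up):

JAVA_SRCS := $(wildcard folder1/*.java folder2/*.java)

all: classes.stamp

classes.stamp: $(JAVA_SRCS)
	buildMyJava
	touch $@

The stamp's timestamp records the last successful run, so make reruns buildMyJava only when a source file is newer, and exactly once even under -j.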
John
We are as good as done converting our large GNU Make build system to SCons.
Everything works nicely except that we have 7 seconds of startup time spent in Environment.Clone(). We are using Clone() because of its nice functional style of not modifying existing global state.
Has anybody come up with a way to avoid copying the 99 percent of the Environment data that never gets changed in the Clone anyway?
Would removing unused keys from its dictionary improve Clone() time?
Based on your comments, you could consider dividing your build into several smaller builds, each controlled by its own separate SConstruct build script, while keeping the current SConstruct for the complete build. I could imagine that a developer changing the C++ code would only need to build the complete project once to get the non-C++ dependencies. Then, while working on the C++ feature, he/she would only need to build the C++. Likewise for Ada and Fortran.
For example, assuming you have something similar to the following dir structure:
root_dir
|
+--- src_ada
|
+--- src_cpp
|
+--- src_fortran
You could have a root level SConstruct for each type of build, as follows:
SConstruct # performs a complete build
SConstruct.ada # builds just the ada
SConstruct.cpp # builds just the cpp
SConstruct.fortran # builds just the fortran
If these SConstruct scripts are correctly created, then the subsidiary SConscript build scripts probably wouldn't need to be modified.
You could even take this one step further by creating a build wrapper script that could take the following command line arguments:
build [complete (default arg) | ada | cpp | fortran]
And internally, the script would call scons with the appropriate SConstruct, which could be one of the following:
scons -f SConstruct
scons -f SConstruct.ada
scons -f SConstruct.cpp
scons -f SConstruct.fortran
Notice that the options -f file, --file=file, --makefile=file, and --sconstruct=file all do the same thing: they let you specify which SConstruct to use for the build, as explained in the SCons man pages.
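A minimal sketch of such a wrapper as a plain shell script (the file names are the ones listed above; error handling is kept to a bare minimum):

#!/bin/sh
# build: dispatch to the SConstruct matching the requested build type.
case "${1:-complete}" in
  complete) scons -f SConstruct ;;
  ada)      scons -f SConstruct.ada ;;
  cpp)      scons -f SConstruct.cpp ;;
  fortran)  scons -f SConstruct.fortran ;;
  *)        echo "usage: build [complete|ada|cpp|fortran]" >&2; exit 1 ;;
esac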
Another option would be to have several different Environment objects that would be passed to the different subsidiary SConscript build scripts. You could have one that has general build settings applicable to all subsidiary SConscript scripts that would not need to be cloned, and then different Environment objects for the different programming languages, that would selectively be passed to subsidiary SConscript scripts, depending on the needs therein. These more specific Environment objects would probably need to be cloned.
Separating the Environment objects like this should speed-up the cloning, as only the needed information for a particular subsidiary SConscript script would be cloned.
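A minimal SConstruct sketch of that separation (the variable names, flags, and SConscript paths are illustrative assumptions):

# Shared settings: passed to every SConscript, never cloned.
common_env = Environment()

# Small, language-specific environments; only these get cloned,
# and only by the scripts that actually need them.
cpp_env     = Environment(CXXFLAGS=['-O2'])
fortran_env = Environment(FORTRANFLAGS=['-O2'])

SConscript('src_cpp/SConscript',
           exports={'common_env': common_env, 'env': cpp_env})
SConscript('src_fortran/SConscript',
           exports={'common_env': common_env, 'env': fortran_env})

Cloning a nearly empty language-specific Environment should be much cheaper than cloning one that carries the settings for every language at once.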
I have a somewhat complicated Makefile which runs perl scripts and other tools and generates some 1000 files. I would like to edit/modify some of those generated files after all files are generated. So I thought I can simply add a new rule to do so like this:
(phony new rule): $(LIST_OF_FILES_TO_EDIT)
	file_modifier ...
However, the point here is that some of the generated files I'd like to edit ($LIST_OF_FILES_TO_EDIT) are used in the same make process to generate a long list of other files. So I have to wait to make sure those files are no longer needed by the make process before I can go ahead and edit them, but I don't know how to do that. Not to mention that it is really hard to find out which files are generated with the help of $LIST_OF_FILES_TO_EDIT.
If it were possible to state in the Makefile that this rule should run only as the last rule, my problem would be solved, but as far as I know this is not possible. Does anyone have an idea?
Some points:
The list of files to edit ($LIST_OF_FILES_TO_EDIT) is determined dynamically (it is not known before make runs).
I am not sure I have picked a good title for this question. :)
1) If you're going to modify the files like that, it might behoove you to give the targets different names, like foo_unmodified and foo_modified, so that Make's dependency handling will take care of this (see the sketch after this list).
2) If your phony new rule is the one you invoke on the command line ("make phonyNewRule"), then Make will build whatever else it's going to build before executing the file_modifier command. If you want to build targets not on that list, you could do it this way:
(phony new rule): $(LIST_OF_FILES_TO_EDIT) $(OTHER_TARGETS)
	file_modifier ...
3) If your dependencies are set up correctly, you can find out which targets depend on $(LIST_OF_FILES_TO_EDIT), but it's not very tidy. You could just touch one of the files, run make, see which targets it built, and repeat for all the files. You could save a little time by using Make arguments: "make -n -W foo1 -W foo2 -W foo3 ... -W foo99 all". This will print the commands Make would run -- I don't know of any way to get it to tell you which targets it would rebuild.
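A minimal sketch of the renaming idea from point 1 (generate_foo.pl stands in for whatever currently produces the file and is purely illustrative):

# The generator writes the pristine file; the edited version is a separate
# target, so make's dependency graph orders the modification correctly.
foo_unmodified:
	perl generate_foo.pl > $@

foo_modified: foo_unmodified
	file_modifier $< > $@

Anything that consumes the pristine content depends on foo_unmodified, anything that wants the edited content depends on foo_modified, and make schedules the edit only after generation is done, even under -j.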