How to avoid running Virtual Makefile targets more than once? - makefile

I want to depend on a virtual target that only runs once.
Makefile with what I tried so far:
a: b

b: c d
    touch b

c:
    # time-consuming task that only needs to run once

d:
    # time-consuming task that only needs to run once
Is there a way to stop the dependency chain when b already exists? I'm okay with making a manual clean to get rid of b to trigger a re-run of c and d. I want to be able to run a many times without triggering the long running tasks if b exists.
I have a lot of tasks like c and d, so I want to avoid touching a file per separate task and I don't want the file system to be cluttered with unnecessary files.

You can write your Makefile as:
a: b

b c:
    # time-consuming task that only needs to run once
    touch b
make c runs your task unconditionally.
make b executes the task only when the file b does not exist.
make a depends on b, so the task is likewise executed only when b does not exist.
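Since the asker has many tasks like c and d, the same trick scales by putting all of them behind the one stamp file. A runnable sketch (the echo lines are placeholders for the real time-consuming recipes):

```shell
# Sketch: a single stamp file `b` gates every expensive task, so
# nothing re-runs while `b` exists.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'a: b\n\t@echo building a\nb:\n\t@echo running task c\n\t@echo running task d\n\t@touch b\n' > Makefile
first=$(make -s a)     # tasks run once, b is created
second=$(make -s a)    # b exists: the tasks are skipped
printf 'first run:\n%s\nsecond run:\n%s\n' "$first" "$second"
```

On the second run only a's own recipe fires; the expensive tasks stay silent until b is removed by hand.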

Related

How to force make to restart and reload generated makefiles?

The task at hand is the following: an external tool called in a recipe produces a makefile that should be included by make immediately. Then another rule, using the included data, generates further include files. Make should restart again, and only then process the remaining rules. Consider the following example:
$(info ------- Restart $(MAKE_RESTARTS))
all :
include a
include b
a : p
    touch a
b : a
    touch b
Works like this:
touch p
make
------- Restart
touch a
touch b
------- Restart 1
make: Nothing to be done for 'all'.
My problem is that the rule for b needs the data included from a, but b is executed BEFORE the updated version of a is included.
Make should be restarted before executing b. How can this be achieved? I'd like to see this:
touch p
make
------- Restart
touch a
------- Restart 1
touch b
------- Restart 2
make: Nothing to be done for 'all'.
It's easy to detect whether a was included, and the rule for b can be hidden when it was not. This works for a clean build, but not when a already exists on disk from a previous build and the rule is triggered because p was updated.
Only make knows whether the rule a : p is up to date; it's not possible to check that with conditional expressions.
Is there a solution for this?
Update: based on the advice from @MadScientist, I got it working this way:
$(info ------- Restart $(MAKE_RESTARTS))
all :
include b
include a
a : p
    @echo rule A
    touch a
    $(eval upd=1)
b : a
    @echo rule B
    $(if $(upd),@echo b skipped,touch b)
And the output:
touch p
make
------- Restart
rule A
touch a
rule B
b skipped
------- Restart 1
rule B
touch b
------- Restart 2
Perfect! Thanks guys, Merry Xmas everybody.
One solution is to invoke a sub-make to build b. That sub-make will include the newer a and so will be correct. I believe something like this will work (untested):
$(info ------- Restart $(MAKE_RESTARTS))
all :
include a
include b
a : p
    touch a
ifeq ($(filter real_b,$(MAKECMDGOALS)),)
b : a
    $(MAKE) real_b
endif
.PHONY: real_b
real_b:
    touch b
Another solution would be to ensure that b is not updated when a is updated. Maybe something like this (again, not tested):
$(info ------- Restart $(MAKE_RESTARTS))
all :
include a
include b
BUILT_A =
a : p
    touch a
    $(eval BUILT_A = true)
b : a
    $(if $(BUILT_A),,touch b)
(a rare legitimate use of eval in a recipe!) In this version, if a is built then b will not be touched. After make re-execs itself, it will include a and b, see that a is up to date but that b is out of date (because we skipped its build step the first time), and rebuild b; this time b will be updated because a was updated in the previous pass, and make will re-exec itself once more.
There's more than one way to do it (as there usually is with Make).
I'd do it this way: put the include b statement in a.
$(info ------- Restart $(MAKE_RESTARTS))
all :
-include a
a : p
    touch a
    @echo -include b >> a
b : a
    touch b

Make should not rebuild deep dependencies

I have a build procedure roughly described by the following Makefile example:
a: b
    @echo "Build a, just using b. Don't care about c."
    touch a
b: c
    @echo "Constructing b from c is cheap..."
    touch b
    @echo "Once accomplished, I no longer need c."
c:
    @echo "Constructing c is very expensive..."
    @echo "Work work work..."
    touch c
clean:
    $(RM) a b c
example: clean
    make a
    $(RM) c
    make a
The point is: I need c to build b, but once I have b, I never need c again. When I do make example, make builds c, b, and a (as expected), deletes c, and then, in the last make a invocation, just remakes c (and does NOT remake b and a, even though I'd have thought they were stale now). But since my goal is a and b hasn't changed, I don't want to remake c. Forget about it! Who cares! a should be considered up to date.
Another peculiar thing, is that when I
make a
rm c
make a
(rather than make example), in the second invocation make rebuilds everything (while in make example the second invocation just rebuilds c).
How do I prevent make from building c when its goal is a, all of a's immediate prerequisites exist, and a is fresher than they are (a isn't stale compared to b), even though the prerequisites of the prerequisites do not exist?
Edit: I think what I may want is to treat every file as old (e.g. with --old-file) unless that file doesn't exist.
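The --old-file idea can be tried directly: with GNU make, -o c (aka --old-file=c) marks c as very old and exempt from remaking. A sketch against the question's Makefile:

```shell
# Sketch: after deleting c, `make -o c a` treats c as ancient,
# so neither c nor anything depending on it is rebuilt.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'a: b\n\ttouch a\nb: c\n\ttouch b\nc:\n\ttouch c\n' > Makefile
make -s a            # builds c, b, a
rm c
out=$(make -o c a)   # c treated as old: nothing is rebuilt
echo "$out"
```

The downside is that the caller must remember to pass -o c on every invocation; the .INTERMEDIATE answer below bakes the intent into the Makefile instead.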
It looks like you want make to treat the file c as an intermediate file, a file that does not have any importance to you other than as an intermediate result when generating another file or other files. This concept is explained in section 10.4 Chains of Implicit Rules of the manual. Since your example does not use any implicit rules, you can manually mark your file c as .INTERMEDIATE.
This makefile marks c as an intermediate file.
a: b
    @echo "Build a, just using b. Don't care about c."
    touch a
b: c
    @echo "Constructing b from c is cheap..."
    touch b
    @echo "Once accomplished, I no longer need c."
c: d
    @echo "Constructing c is very expensive..."
    @echo "Work work work..."
    touch c
.INTERMEDIATE: c
.PRECIOUS: c
I added a file d, based on your comment, although it is not needed for this example to work.
Before invoking make, the file d has to exist; it is the starting point of the chain. When invoking make, the following happens:
$ touch d
$ make
Constructing c is very expensive...
Work work work...
touch c
Constructing b from c is cheap...
touch b
Once accomplished, I no longer need c.
Build a, just using b. Don't care about c.
touch a
Now deleting c will not have any impact on the build:
$ rm c
$ make
make: `a' is up to date.
Other than that, the update behavior based on dependencies is "the same as usual".
The .PRECIOUS target is optional. It is a built-in that instructs make not to delete the intermediate file named c. You can see for yourself what happens if you remove that line.
b might be built from c, but you don't want to tell Make that b depends on c: if b merely exists, that's good enough. So you might write b's recipe as
b:
    $(MAKE) c
    @echo "Constructing b from c is cheap..."
    touch b
    @echo "Once accomplished, I no longer need c."
or if c is only used in making b, you could just fold the commands for making c into the recipe for b and not expose the existence of c to Make at all.
Maybe there are more elegant ways of expressing this without invoking sub-makes. And if c has prerequisites that would cause it to be rebuilt when they are updated, I guess they would need to be listed as prerequisites of b as well.
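Folding c into b, as suggested, can be sketched like this (placeholder touch commands stand in for the real expensive work):

```shell
# Sketch: c's commands are folded into b's recipe, so make never
# sees c at all; deleting c afterwards cannot make anything stale.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'a: b\n\ttouch a\nb:\n\ttouch c\n\ttouch b\n\trm -f c\n' > Makefile
make -s a        # builds b (creating and discarding c), then a
out=$(make a)    # no c anywhere, yet nothing is stale
echo "$out"
```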

Gnu make - Force "Remake" of prerequisite

Makefile-snippet:
x:
    @echo reached $@
a: x
b: a x
make b echoes only once on the command line. How do I get x executed every time it is referenced (twice in this example)? In other words, how do I make make forget that x has already been done as a prerequisite of a, so that it runs again as a prerequisite of b?
I found Execute make prerequisite every time, but maybe there are other approaches.
Thanks in advance.
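One approach along the lines of the linked question (a sketch, assuming GNU make, and not taken from the thread): run x through a sub-make, so each reference gets a fresh invocation in which x has not yet been updated.

```shell
# Sketch: $(MAKE) x runs in a child make, so the parent's
# "x already made" bookkeeping does not suppress the second run.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'x:\n\t@echo reached x\na:\n\t@$(MAKE) -s --no-print-directory x\nb: a\n\t@$(MAKE) -s --no-print-directory x\n' > Makefile
out=$(make -s b)
echo "$out"
```

Here "reached x" is printed twice for make b, once per reference, at the cost of a process spawn per sub-make.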

windows clearmake parallel build with dependency

I have a situation where I am building something like this:
target1: dep1 list2
dep1:
    perl script that generates a .h file
list2:
    compiles a list of c files
    including one that needs the .h file from dep1 to exist
I build target1 with a parallel build option (-J 6) and I am trying to work out the clearmake Makefile syntax that says "finish dep1, then do all of the list2 compiling in parallel".
Whenever I build, items in list2 fail to compile because the header-file generation from dep1 hasn't happened yet: everything runs in parallel, and dep1 takes about 20 seconds.
How would you avoid that?
One way to introduce sequential build steps with clearmake is to split the build in two, a bit like what is explained in the section "Coordinating reference times of several builds":
An alternative approach is to run all the builds within the same clearaudit session.
For example, you can write a shell script, multi_make, that includes several invocations of clearmake or clearaudit (along with other commands).
Running the script as follows ensures that all the builds are subsessions that share the same reference time:
clearaudit -c multi_make
While the clearaudit part might not be relevant to your case, the main idea persists: pilot the build process through multiple sequential clearmake steps, instead of trying to encode those steps in the Makefile (with the .NOTPARALLEL: directive mentioned in "clearmake Makefiles and BOS Files").
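For this question, the multi_make driver might be as simple as the following sketch (hypothetical: the target names and -J 6 are taken from the question, and clearmake is not available here to verify):

```
# multi_make -- run the two build steps sequentially;
# each step can still parallelize internally.
clearmake -J 6 dep1     # generate the .h file first
clearmake -J 6 list2    # then compile the C files in parallel
```

Run it as clearaudit -c multi_make if the shared reference time matters for your setup.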

Preventing two make instances from executing the same target (parallel make)

I am trying to get parallel make working on our build server. I keep running into a problem where two instances of make, building two different targets, say A and B, nearly simultaneously try to make a target required by both, say C.
As both instances build C at the same time, C's build fails for one of them: building C requires moving some files around, and one instance ends up moving or deleting a file the other has already created.
Is there a common construct I can use to prevent re-entry into a makefile if the target is already being made?
Update:
OK, let me put it this way:
My application requires A.lo and B.lo to be present. These A.lo and B.lo are libraries which also link against C.lo.
So the rules look like:
app.d : A.lo B.lo (other .lo s)
    (do linking)
In some other directory, say A (which will house A.lo):
A.lo : C.lo (other .o s and .lo s)
    (do linking)
In some other directory, say B (which will house B.lo):
B.lo : C.lo (other .o s and .lo s)
    (do linking)
So, in effect, while making app.d, make forks off two parallel makes for targets A.lo and B.lo.
Upon entering directories A and B, make forks off another two processes for target C.lo independently, and at times both of them are linking C.lo at the same time, which causes one to fail with weird errors like "file not recognized" (since the file may still be being written by the other linker instance).
How should I go about solving this? It is not possible to create A.lo and B.lo without linking C.lo into them.
This may sound a little, well, obvious, but the simplest solution is just to arrange for C to be built explicitly before either A or B. That way when the recursive makes for A and B run, C will already be up-to-date, and neither will bother trying to build it.
If your toplevel makefile looks like this:
all: buildA buildB
buildA:
    $(MAKE) -C A
buildB:
    $(MAKE) -C B
You can extend it like this:
all: buildA buildB
buildA: buildC
    $(MAKE) -C A
buildB: buildC
    $(MAKE) -C B
buildC:
    $(MAKE) -C C
You didn't give a lot of detail about your specific build, so this probably won't just drop in and work, but hopefully you get the idea.
I solve this problem using the "mkdir" technique:
SHELL = /bin/bash
all: targetC targetD
targetC: targetA
    ............
targetD: targetB
    ............
targetA targetB: source
    -mkdir source.lock && ( $(command) < source ; rm -r source.lock )
    $(SHELL) -c "while [ -d source.lock ] ; do sleep 0.1 ; done"
I would be happy to see a more elegant solution, though.
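One arguably more elegant variant, assuming util-linux flock(1) is available: take a blocking lock instead of polling on a directory. Note that, unlike the mkdir version, the second waiter still runs $(command) after the first finishes, so keep the mkdir scheme (or add a staleness re-check inside the lock) if the command must run exactly once.

```make
targetA targetB: source
	flock source.lock -c '$(command) < source'
```

flock blocks until the lock file is free, which replaces the sleep loop and avoids the leftover-lock-directory problem if a build is interrupted.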
