I want to do something like this in a Makefile (with more tasks):
.PHONY: sep tasks task1 task2 task3
tasks: task1 sep task2 sep task3
task1:
	@echo task 1
task2:
	@echo task 2
task3:
	@echo task 3
sep:
	@echo
The result of the tasks target is:
task 1
task 2
task 3
The sep target is executed only the first time. That is perfectly normal, but I want it to execute each time. Is there a way to force the repetition?
A workaround is to use several sep targets, like this:
tasks: task1 sep1 task2 sep2 task3
and with a grouped-target definition, all the sep targets can be defined at once:
sep1 sep2 &:
	@echo
The result with that is correct:
task 1
task 2
task 3
I want to know whether it is possible to get the same behavior with only one sep target, by forcing the repetition of this prerequisite.
It is not possible. There is no way to get make to build the same target more than one time per invocation of make.
Anyway, relying on this ordering is wrong: if you ever wanted to use parallel jobs (make -j) to allow your build to run faster, then this would not do what you wanted it to do.
The best thing to do is simply put the "sep" into the recipes for the targets:
task1:
	@echo
	@echo task 1
etc.
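For instance, the whole example could be rewritten with the separator inlined in each recipe, using a canned recipe so the echo is still defined only once (a sketch; here sep is a variable expanded inside the recipes, not a target):

```make
define sep
	@echo
endef

.PHONY: tasks task1 task2 task3
tasks: task1 task2 task3
task1:
	@echo task 1
	$(sep)
task2:
	@echo task 2
	$(sep)
task3:
	@echo task 3
```

Because each task carries its own separator, the output stays grouped per task even under parallel builds.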
Related
write:
	@echo `date`
file_1: write file_2
	@echo "file_1 begin"
	$(shell sleep 3)
	@echo "file_1 end"
file_3: write file_4
	@echo "file_3 begin"
	$(shell sleep 3)
	@echo "file_3 end"
all: file_1 file_3
.PHONY: write
When I run make, it outputs:
Sat Oct 9 15:22:45 CST 2021
file_1 begin
file_3 begin
file_1 end
file_3 end
The write target runs only once.
My ultimate goal is to measure the execution time of file_1 and file_3 by recording the start time in the write target. While testing whether that timestamp would be overwritten, I found the problem above. If there is a better way to measure the time, let me know.
First, it's virtually always an error to use the $(shell ...) function inside a recipe. A recipe is already running in the shell, so it's not needed, and it has unexpected behaviors. Why not just use sleep 3 here instead?
There is no way to make a single target get built more than one time in a single invocation of make. That would be very bad: consider if you have multiple programs all depending on one .o file for example: would you want to recompile that .o file many times?
Anyway, this won't really do what you want:
file_1: write file_2
This doesn't measure the execution time of file_1 because the start date is written, then file_2 is built, then file_1 is built. So you're including the execution time of file_2 as well.
And, of course, if you ever enable parallel builds all bets are off.
If you want to measure something, put the start/stop operations inside the recipe, not using prerequisites. You can hide it in a variable to make it less onerous to type.
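One way to follow that advice is to capture the start time at the top of the recipe and print the elapsed time at the bottom, all in a single shell invocation (a sketch; sleep 3 stands in for the real work):

```make
file_1: file_2
	@start=$$(date +%s); \
	echo "file_1 begin"; \
	sleep 3; \
	echo "file_1 end"; \
	echo "file_1 took $$(( $$(date +%s) - start )) seconds"
```

Because everything runs in one shell, the measurement covers exactly this recipe and nothing else, and it remains correct under parallel builds.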
I'd like to ignore the dependency check in a makefile.
For example, please look at this code:
test:: test1 test2
	@echo "test"
test1:: back3
	@echo "test1"
test2:: back3
	@echo "test2"
back3::
	@echo "back3"
Results of "make test"
back3
test1
test2
test
But I want to get the result below:
back3
test1
back3 <---- I want to run back3 again.
test2
test
How can I do this?
You could use Make recursively:
test: test1 test2
	@echo "test"
test1:
	$(MAKE) back3
	@echo "test1"
test2:
	$(MAKE) back3
	@echo "test2"
back3:
	@echo "back3"
or use a "canned recipe":
define run_back3
	@echo "back3"
endef
test: test1 test2
	@echo "test"
test1:
	$(run_back3)
	@echo "test1"
test2:
	$(run_back3)
	@echo "test2"
You write "I'd like to ignore the dependency check in a makefile", by which you seem to mean that you want all the prerequisites of each target to be rebuilt prior to building that target, specifically for that target, such that prerequisites may be built more than once.
make simply does not work like that. On each run, make builds each target at most once. Prerequisite lists help it determine which targets need to be built, in which order, but designating a prerequisite should not be viewed as calling a subroutine.
If indeed you want something that works like calling a subroutine, then it needs to be expressed in the rule's recipe, not its prerequisite list. Your other answer presents two alternatives for that. Of those, the recursive make example is more general; the one based on defining a custom function is specific to GNU make.
In GNU make, the prerequisites of rules are apparently expanded immediately. Consider:
AAA = task1
all: ${AAA} ; @echo finalizing
task1: ; @echo task1
task2: ; @echo task2
AAA += task2
This prints only task1, but not task2, since at the moment the all rule was parsed, the variable ${AAA} held only task1.
Is there any workaround for that? Or some way to make the make expand the variables lazily, like it does inside the commands?
P.S. In my case the ${AAA} variable is set/changed inside included files. Most are included at the top. A few are included at the bottom, and those are the ones affected. I can't simply move all the include files to the top, since that messes up the default rule. And I can't easily add the extra prerequisites to the rules, since there are multiple rules relying on multiple variables (that would lead to lots of copy-paste). The only ideas I have come up with so far were (A) to extract the final value of the variable and call make recursively with its full value (based on the suggestion here) or (B) to move the variable from the prerequisites into the command and loop over it, calling make recursively for every prerequisite. I'd love to have a more elegant alternative.
Yes, this is clearly spelled out in the GNU make manual, here for example.
An alternative you can consider is secondary expansion which will let you defer the expansion of the variable until run-time.
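A minimal sketch of the secondary-expansion approach: with .SECONDEXPANSION: in effect, an escaped $$(AAA) in the prerequisite list is expanded in a second phase, after the entire makefile has been read, so it sees the final value of the variable:

```make
.SECONDEXPANSION:
AAA = task1
all: $$(AAA) ; @echo finalizing
task1: ; @echo task1
task2: ; @echo task2
AAA += task2
```

Here make prints task1, task2, and then finalizing, because the prerequisite list of all is not expanded until both assignments to AAA have been processed.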
However you don't need to do that. You are forgetting that you can add more prerequisites to a rule whenever you want. All you have to do is write your makefiles like this:
AAA = task1
all: ; @echo finalizing
task1: ; @echo task1
task2: ; @echo task2
AAA += task2
all: ${AAA}
As long as you can arrange for a final section to always be placed at the end you can add the prerequisites there and be sure that the variable assignments have all completed. This is also a lot more portable, if you care about that.
Also, you can force the default target to be whatever you want just by mentioning it first, you don't need to give it a recipe. So you could also write this if you wanted:
all:
AAA = task1
task1: ; @echo task1
task2: ; @echo task2
AAA += task2
all: ${AAA} ; @echo finalizing
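In GNU make you can also set the default goal explicitly with the special variable .DEFAULT_GOAL, instead of relying on which target is mentioned first (a sketch of the same example):

```make
.DEFAULT_GOAL := all
AAA = task1
task1: ; @echo task1
task2: ; @echo task2
AAA += task2
all: ${AAA} ; @echo finalizing
```

Note that this is GNU-make-specific, so it trades away the portability advantage of the mention-it-first trick.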
I have a single makefile that runs a group of tests involving various files and writes the results to log files. Is it possible to write the makefile so that it runs the same set of shell commands after all other targets in the file have been processed, regardless of any errors that occurred?
I know I could use a shell script that calls the makefile and then the shell commands, but I'm wondering if it's possible to do this from within the makefile itself, without using some phony target like make print (for example).
You could try something like this:
.PHONY: test1 test2 test3 finale
all: finale
finale: test1 test2 test3
test1:
	-exit 1
test2:
	-exit 2
test3:
	-exit 3
finale:
	echo "And the winner is..."
You make your script, the "finale" target, dependent on the other targets, and you use the "-" prefix to ignore the non-zero return codes from the tests.
I have a compile job where linking takes a lot of IO work. We have around a dozen cores, so we run make -j13, but when it comes to linking the 6 targets, I'd like those to be done in a round-robin way. I thought about making one depend on the next, but I think this would break the individual targets. Any ideas how to solve this small issue?
make itself doesn't provide a mechanism to request "N of these, but no more than M of those at a time".
You might try using the sem command from the GNU parallel package in the recipe of your linker rules. Its documentation has an example of ensuring only one instance of a tool runs at once. In your example, you would allow make to start up to 13 sems at a time, but only one of those at a time will run the linker, while the others block.
The downside is that you could get into a situation where 5 of your make's 13 job slots are tied up with instances of sem that are all waiting for a linker process to finish. Depending on the structure of your build, that might mean some wasted CPU time. Still beats 6 linkers thrashing the disk at once, though :-)
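A sketch of what such a linker rule could look like (assuming GNU parallel's sem is installed; the target and file names here are made up). --id names the shared semaphore, and --fg keeps the linker in the foreground so make sees its exit status:

```make
# Only one link step runs at a time, even under make -j13;
# other recipes keep running in parallel.
prog1: prog1.o common.o
	sem --id linklock --fg $(CC) -o $@ $^
```

The same sem --id linklock --fg prefix would go on each of the six link rules, so they queue up behind one another.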
You should specify that your six targets cannot be built in parallel. Add a line like this to your makefile:
.NOTPARALLEL: target1 target2 target3 target4 target5 target6
For more information, see https://www.gnu.org/software/make/manual/html_node/Parallel-Disable.html.
I've stumbled upon a hacky solution:
For each recipe it runs, Make does two things: it expands variables/functions in the recipe, and then runs the shell commands.
Since the first step can read/write the global variables, it seems to be done synchronously.
So if you run all your shell commands during the first step (using $(shell )), no other recipe will be able to start while they're running.
E.g. consider this makefile:
all: a b
a:
	sleep 1
b:
	sleep 1
time make -j2 reports 1 second.
But if you rewrite it to this:
# A string of all single-letter Make flags, without spaces.
override single_letter_makeflags = $(filter-out -%,$(firstword $(MAKEFLAGS)))
ifneq ($(findstring n,$(single_letter_makeflags)),)
# See below.
override safe_shell = $(info Would run shell command: $1)
else ifeq ($(filter --trace,$(MAKEFLAGS)),)
# Same as `$(shell ...)`, but triggers an error on failure.
override safe_shell = $(shell $1)$(if $(filter-out 0,$(.SHELLSTATUS)),$(error Unable to execute `$1`, exit code $(.SHELLSTATUS)))
else
# Same function, but with logging.
override safe_shell = $(info Shell command: $1)$(shell $1)$(if $(filter-out 0,$(.SHELLSTATUS)),$(error Unable to execute `$1`, exit code $(.SHELLSTATUS)))
endif
# Same as `safe_shell`, but discards the output and expands to nothing.
override safe_shell_exec = $(call,$(call safe_shell,$1))
all: a b
a:
	$(call safe_shell_exec,sleep 1)
	@true
b:
	$(call safe_shell_exec,sleep 1)
	@true
time make -j2 now reports 2 seconds.
Here, @true does nothing, and suppresses the "Nothing to be done for ..." output.
There are some problems with this approach though. One is that all output is discarded unless redirected to file or stderr...
It won't break the individual targets: you can create any number of ordinary (:) rules for a target, as long as only one of them has an actual recipe for building it. This appears to be a good use case for that.
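A sketch of that pattern (the names are made up): extra prerequisites are added in a separate rule, only one rule carries the recipe, and $^ in the recipe sees the prerequisites from all the rules combined:

```make
# Prerequisites only, no recipe:
prog: extra.o
# The single rule with the recipe; $^ expands to "extra.o main.o":
prog: main.o
	$(CC) -o $@ $^
```

This lets you impose an ordering or add dependencies without duplicating, or accidentally overriding, the link command.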