`make` doesn't notice modifications in a Rust module - how to better integrate Rust into the build? - makefile

In a simple embedded project, I have two files, main.rs and module.rs. To build the project, I use something similar to this:
all: main.o
	$(CC) main.o $(LDFLAGS)

%.o: %.rs
	$(RUSTC) $(RUSTFLAGS) -o ${@} ${<}
If only module.rs is changed, make all won't recompile my Rust code. How can I fix this?
I'm posting a suboptimal self-answer as a first step, but would love to see better ways.

The best way to use Make is to encode every single dependency into the Makefile. That's what gives Make the power to know what to rebuild in order to reach a goal state.
To do this for a C project, you'll often use something like the GCC command-line option -M. This brings the compiler into the mix, as it's the best tool to parse C source code and understand the dependencies between the files.
There is actually an equivalent switch for rustc, the Rust compiler: --emit=dep-info. When you run this on a source file, it will output a file with the extension .d, which contains an almost-Makefile-compatible list of dependencies. If you had a main.rs that referenced the module foo.rs, it would output something like:
main.d: main.rs foo.rs
With a bit of sed tweaking you can get this to play nicely. You can then include this in your Makefile:
main.o:
	rustc -o $@ $<

main.d: main.rs
	rustc --emit=dep-info $<
	# Add the object file as a target of the generated rule
	gsed 's/:/ $(@:.d=.o):/' -i $@

-include main.d
Here, I've specified main in a few parts, but I believe that you can easily modify them into pattern rules.
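For example, a pattern-rule version of the same idea might look roughly like this (an untested sketch, assuming GNU make and GNU sed; it keeps the same sed rewrite so each generated .d file also names the corresponding .o as a target):
%.o: %.rs
	$(RUSTC) $(RUSTFLAGS) -o $@ $<

%.d: %.rs
	$(RUSTC) --emit=dep-info $<
	# Rewrite "foo.d: ..." into "foo.d foo.o: ..." so both share the prerequisites
	sed 's/:/ $(@:.d=.o):/' -i $@

-include main.d   # or $(wildcard *.d) to pull in every generated dependency file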

The pragmatic solution is to just use Cargo, the Rust build tool and package manager. Let it deal with dependencies (both local modules and other crates).
libbar.dylib: target/debug/libbar.dylib
	cp $< $@

.PHONY: target/debug/libbar.dylib
target/debug/libbar.dylib:
	cargo build --verbose
Here, I've marked the rule as PHONY, which says "always run this rule". I've added --verbose to have Cargo print out what it is doing so you can verify when things are rebuilt.
I'd recommend dropping the cp step if you can and just using the nested path directly, but the copy might be needed if other things rely on the current location.
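For instance, a version without the copy might look something like this (an untested sketch; main, main.o, and the -lbar link line are made-up placeholders for whatever actually consumes the library):
main: main.o target/debug/libbar.dylib
	$(CC) main.o -Ltarget/debug -lbar $(LDFLAGS) -o $@

.PHONY: target/debug/libbar.dylib
target/debug/libbar.dylib:
	cargo build --verbose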

The pattern
%.o: %.rs
is familiar from building C projects, but that's not the only way a target can be written. Specific to the setup above, this would fix the situation:
main.o: main.rs module.rs
	$(RUSTC) $(RUSTFLAGS) -o main.o main.rs
A noteworthy difference from the original code is that the names of the inputs are not really what matters for the command. We can generalize this as follows:
main.o: $(wildcard *.rs)
	$(RUSTC) $(RUSTFLAGS) -o ${@} ${@:.o=.rs}
This is a start, but it still has some downsides I couldn't get rid of:
The main.o: part is hardcoded. If there are multiple top-level modules to compile, there would be code duplication (a partial workaround is sketched after this list)
All Rust files will be considered for all top-level modules, due to the wildcard. In other words, changing any Rust file will require a full recompilation.
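As a partial workaround for the first downside, a static pattern rule over an explicit list of top-level modules avoids the hardcoding, though the wildcard problem remains. A sketch, assuming GNU make and a hypothetical PROGS list:
PROGS := main other
OBJS  := $(PROGS:=.o)

all: $(OBJS)

# Every top-level object still depends on every Rust source (coarse, but
# the per-program rules are no longer written out by hand):
$(OBJS): %.o: $(wildcard *.rs)
	$(RUSTC) $(RUSTFLAGS) -o $@ $*.rs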

Related

GNU Makefile Multiple rules in multiple targets

I am doing a nasm project, and I need to run the ej generator programs with the corresponding ex .asm file as a parameter. I tried searching the GNU manual for a way to pick the parameters one by one. My workaround has been writing the ex1_ and ex2_ targets, but I want to fold those into the $(ex) rule so I don't have to replicate the same code multiple times. Is there any way?
Thank you in advance
The code:
ej = ej1_gen ej2_gen
ex = ex1 ex2
# -----------------------------------------------
all: $(ej) $(ex)
exs: ex1_ ex2_
# -----------------------------------------------
$(ex): exs
	nasm -g -o $@.o -f elf32 $@.asm
	$(CC) $(FLAGS) -m32 -o $@ $@.o alfalib.o
ex1_:
	./ej1_gen ex1.asm
ex2_:
	./ej2_gen ex2.asm
As I read the question, you have programs or scripts ej1_gen and ej2_gen in the project, serving to generate the wanted assembly sources. They each take the name of the output file as a command-line argument. Parts of this answer would need to be adjusted if that's a misinterpretation.
Rules to describe how to build the assembly files should designate the resulting assembly file(s) as the target. Also, supposing that the code-generator programs are part of the project, they should be designated as prerequisites, since changing those could cause them to produce different outputs. Any configuration files or similar that they read to inform their results should also be named as prerequisites (not shown). That leads to rules something like this:
ex1.asm: ej1_gen
	./ej1_gen $@
ex2.asm: ej2_gen
	./ej2_gen $@
It sounds like you may be asking for a way to express that via just one rule covering both, but I would not do so in this case. I don't think you get any clearer than the above, even if there are more than two assembly files to generate. It might be different if the same code generator program were being used, with different options, to generate all the assembly files, or perhaps if the generator name could be derived more directly from the target name.
With those rules in place, you can write a generic suffix rule or pattern rule to assemble the resulting files. Since you tag [gnu], I'll assume that a pattern rule is acceptable:
%.o: %.asm
	nasm -g -o $@ -f elf32 $<
And you can take a similar approach to expressing a link rule:
%: %.o alfalib.o
	$(CC) $(FLAGS) -m32 -o $@ $^
With that, you should be able to get rid of the ej variable and the exs target, too, leaving
all: $(ex)
as the only other rule (and it should still appear first in the file, as it does now).
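Putting those pieces together, the whole makefile might end up looking something like this (an untested sketch that keeps your ex variable and alfalib.o):
ex = ex1 ex2

all: $(ex)

ex1.asm: ej1_gen
	./ej1_gen $@
ex2.asm: ej2_gen
	./ej2_gen $@

%.o: %.asm
	nasm -g -o $@ -f elf32 $<

%: %.o alfalib.o
	$(CC) $(FLAGS) -m32 -o $@ $^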

Nothing to be done for 'all'

I'm trying to run this simple makefile but get the error 'Nothing to be done for 'all''
FILES = file1.c file2.c file3.c

all: test

test:
	for file in $(FILES); \
	do \
		echo $$file; \
	done
The target test has no prerequisites, so (presumably because a file or directory named test already exists) make sees no reason to rebuild it, and the target all inherits that. It has a recipe, but it should list FILES as its prerequisites. What you're doing is ingredients-first, but test is the target; working backwards from targets is what make is best at. You may benefit from the article "Auto-Dependency Generation", which takes the opposite approach (you appear to think like I do.)
test: $(FILES)
Then you could do something like the following:
$(FILES:.c=.o): %.o: %.c
	$(CC) -c -o $@ $<
The first part is the set of possible targets (the list of objects corresponding to the list of sources), and the second is the pattern each of them matches (each otherwise nameless object takes the name of its corresponding source). Later on, the target, e.g. test, can be the name of your executable, taking these objects as both prerequisites and the objects to link statically. For my purposes I typically use shared libraries, but this is irrelevant to the question at hand.
Edit: untested, will revise if issues ensue
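Put together, a corrected makefile might look something like the following (a sketch that assumes test is meant to be an executable linked from those objects):
FILES = file1.c file2.c file3.c
OBJS  = $(FILES:.c=.o)

all: test

test: $(OBJS)
	$(CC) -o $@ $^

$(OBJS): %.o: %.c
	$(CC) -c -o $@ $<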

How to force a certain group of targets to always be run sequentially?

Is there a way how to ask gmake to never run two targets from a set in parallel?
I don't want to use .NOTPARALLEL, because it forces the whole Makefile to be run sequentially, not just the required part.
I could also add dependencies so that one depends on another, but then (apart from being ugly) I'd need to build all of them in order to build the last one, which isn't necessary.
The reason why I need this is that (only a) part of my Makefile invokes ghc --make, which takes care of its dependencies itself. And it's not possible to run it in parallel on two different targets, because if the two targets share some dependency, they can rewrite each other's .o file. (But ghc is fine with being called sequentially.)
Update: To give a specific example. Let's say I need to compile two programs in my Makefile:
prog1 depends on prog1.hs and mylib.hs;
prog2 depends on prog2.hs and mylib.hs.
Now if I invoke ghc --make prog1.hs, it checks its dependencies, compiles both prog1.hs and mylib.hs into their respective object and interface files, and links prog1. The same happens when I call ghc --make prog2.hs. So if the two commands get to run in parallel, one will overwrite the mylib.o of the other, causing it to fail badly.
However, I need that neither prog1 depends on prog2 nor vice versa, because they should be compilable separately. (In reality they're very large with a lot of modules and requiring to compile them all slows development considerably.)
Hmmm, could do with a bit more information, so this is just a stab in the dark.
Make doesn't really support this, but you can sequential-ise two targets in a couple of ways. First off, a real use for recursive make:
targ1: ; recipe1...
targ2: ; recipe2...
both-targets:
	${MAKE} targ1
	${MAKE} targ2
So here you can just make -j both-targets and all is fine. Fragile though, because make -j targ1 targ2 still runs in parallel. You can use dependencies instead:
targ1: ; recipe1...
targ2: | targ1 ; recipe2...
Now make -j targ1 targ2 does what you want. Disadvantage? make targ2 will always try to build targ1 first (sequentially). This may (or may not) be a show-stopper for you.
EDIT
Another unsatisfactory strategy is to explicitly look at $MAKECMDGOALS, which lists the targets you specified on the command-line. Still a fragile solution as it is broken when someone uses dependencies inside the Makefile to get things built (a not unreasonable action).
Let's say your makefile contains two independent targets targ1 and targ2. Basically they remain independent until someone specifies on the command-line that they must both be built. In this particular case you break this independence. Consider this snippet:
$(and $(filter targ1,${MAKECMDGOALS}),$(filter targ2,${MAKECMDGOALS}),$(eval targ1: | targ2))
Urk! What's going on here?
Make evaluates the $(and)
It first has to expand $(filter targ1,${MAKECMDGOALS})
Iff targ1 was specified, it goes on to expand $(filter targ2,${MAKECMDGOALS})
Iff targ2 was also specified, it goes on to expand the $(eval), forcing the serialization of targ1 and targ2.
Note that the $(eval) expands to nothing (all its work was done as a side-effect), so that the original $(and) always expands to nothing at all, causing no syntax error.
Ugh!
[Now that I've typed that out, the considerably simpler prog2: | $(filter prog1,${MAKECMDGOALS}) occurs to me. Oh well.]
YMMV and all that.
I'm not familiar with ghc, but the correct solution would be to get the two runs of ghc to use different build folders, then they can happily run in parallel.
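A sketch of that idea, assuming a GHC recent enough that --make accepts -outputdir (which redirects the generated .o and .hi files, so the two builds no longer stomp on each other):
prog1: prog1.hs mylib.hs
	ghc --make -outputdir build/prog1 -o $@ prog1.hs

prog2: prog2.hs mylib.hs
	ghc --make -outputdir build/prog2 -o $@ prog2.hs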
Since I got stuck at the same problem, here is another pointer in the direction that make does not provide the functionality you describe:
From the GNU Make Manual:
It is important to be careful when using parallel execution (the -j switch; see Parallel Execution) and archives. If multiple ar commands run at the same time on the same archive file, they will not know about each other and can corrupt the file.
Possibly a future version of make will provide a mechanism to circumvent this problem by serializing all recipes that operate on the same archive file. But for the time being, you must either write your makefiles to avoid this problem in some other way, or not use -j.
What you are attempting, and what I was attempting (using make to insert data in a SQLite3 database) suffers from the exact same problem.
I needed to separate the compilation from the other steps (cleaning, building directories and linking), as I wanted to run only the compilation in parallel across cores with the -j flag.
I managed to solve this with different makefiles including and calling each other. Only the "compile" makefile runs in parallel on all cores; the rest of the process is synchronous.
I divided my makefile in 3 separate scripts:
settings.mk: contains all the variables and flag definitions
makefile: has all the targets except the compilation one (it has the .NOTPARALLEL directive). It calls compile.mk with the -j flag
compile.mk: contains only the compile operation (without .NOTPARALLEL)
In settings.mk I have:
CC = g++
DB = gdb
RM = rm
MD = mkdir
CP = cp
MAKE = mingw32-make
BUILD = Debug
DEBUG = true
[... all other variables and flags needed, directories etc ...]
In makefile I have the link and compilation targets like these:
include .makefiles/settings.mk
[... OTHER TARGETS (clean, directories etc)]
compilation:
	@echo Compilation
	@$(MAKE) -f .makefiles/compile.mk --silent -j 8 -Oline

#Link
$(TARGET): compilation
	@echo -e Linking $(TARGET)
	@$(CC) $(LNKFLAGS) -o $(TARGETDIR)/$(TARGET) $(OBJECTS) $(LIBDIRS) $(LIB)
#Non-File Targets
.PHONY: all prebuild release rebuild clean resources directories run debug
.NOTPARALLEL: all
# include dependency files (*.d) if available
-include $(DEPENDS)
And this is my compile.mk:
include .makefiles/settings.mk
#Default
all: $(OBJECTS)

#Compile
$(BUILDDIR)/%.$(OBJEXT): $(SRCDIR)/%.$(SRCEXT)
	@echo -e Compiling: $<
	@$(MD) -p $(dir $@)
	@$(CC) $(COMFLAGS) $(INCDIRS) -c $< -o $@

#Non-File Targets
.PHONY: all

# include dependency files (*.d) if available
-include $(DEPENDS)
So far, it's working.
Note that I'm calling compile.mk with the -j flag AND -Oline so that parallel processing doesn't mess up the output.
Any output coloring should be set in the main makefile script, since the -O flag interferes with escape color codes.
I hope it can help.
I had a similar problem, so I ended up solving it on the command line, like so:
make target1; make target2
to force it to do the targets sequentially.

makefile conditionals

Note: using MinGW's make (should be GNU make)
I have a couple of -include statements in my makefile to import dependencies which were generated using g++ -MM. However, I would like to only do this when necessary. I have several different build targets, and I don't want all of their respective dependency files to be included, since this takes a while (suppose I'm running make clean: no need to include them in this case).
Here's the format of my makefile.
DEPS_debug = $(patsubst %.cpp,build_debug/%.d,$(SRC))
OBJ_debug = $(patsubst %.cpp,build_debug/%.o,$(SRC))

all: program_debug
	-include $(DEPS_debug)   # make: include: Command not found

program_debug: $(OBJ_debug)
	$(CC) $(CFLAGS) $(OBJ_debug) -o $@
If you really don't want to include those files needlessly, you have a couple of options:
You can put in a conditional as Diego Sevilla suggests (but I would recommend using MAKECMDGOALS so that you can write a more flexible version, specific to targets, e.g. you'll include foo.d if and only if you're making foo.o).
You can use make recursively (heresy!), invoking $(MAKE) for each target object, using a makefile that includes that target's dependencies.
But actually including the file takes negligible time, it's the rebuilding of the file (automatic for any included file that's out of date) that takes time.
If needless rebuilding is what you want to avoid, you can use a very clever trick. When must foo.d be rebuilt? Only when something about foo has changed. But in that case foo.o must also be rebuilt. So don't have a separate rule for foo.d, just rebuild it as a side effect of making foo.o. That way you can include all dependency files and not waste time rebuilding them if they aren't needed.
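A sketch of that trick with GCC-style compilers (the -MMD -MP flags write foo.d as a side effect of compiling foo.o, so no separate .d rule is needed; paths follow the build_debug layout from the question):
build_debug/%.o: %.cpp
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Include whatever .d files already exist; since no rule builds them on
# their own, make never spends time regenerating them separately.
-include $(wildcard build_debug/*.d)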
EDIT:
I'm astounded that merely including these files can add 2-3 seconds to make clean. My last paragraph is off the mark, so let me expand on the first two options.
If all is the only target for which these files should be included, and you make all from the command line (and not e.g. make all tests tarball install kitchenSink), then this will do it:
ifeq ($(MAKECMDGOALS),all)
-include $(DEPS_debug)
endif
Note that this will not include foo.d if you make foo.o. You can write a more sophisticated conditional, something like
$(foreach targ,$(MAKECMDGOALS),$(eval $(call include_deps,$(targ)))...
but that's pretty advanced, so let's get a simple version working first.
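For the record, that sophisticated version might be fleshed out roughly like this (an untested sketch; include_deps is a hypothetical helper that maps an object goal to its dependency file):
define include_deps
ifneq ($(filter %.o,$(1)),)
-include $(patsubst %.o,build_debug/%.d,$(notdir $(1)))
endif
endef

$(foreach targ,$(MAKECMDGOALS),$(eval $(call include_deps,$(targ))))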
If you'd rather avoid the conditional and use recursive Make, the simplest way is to split the makefile in two:
makefile:

all:
	$(MAKE) -f makefile.all
clean:
	rm whatever
...other rules

makefile.all:

DEPS_debug = $(patsubst %.cpp,build_debug/%.d,$(SRC))
OBJ_debug = $(patsubst %.cpp,build_debug/%.o,$(SRC))

-include $(DEPS_debug)

all: program_debug

program_debug: $(OBJ_debug)
	$(CC) $(CFLAGS) $(OBJ_debug) -o $@
Indenting a line by a TAB makes make think it's a command to be passed to the shell (as you found out). It doesn't work that way.
The - in front of include suppresses errors that might result from DEPS_debug not existing (e.g. when running clean or release without having had a dependency-file-generating call first). Since DEPS_debug is not a dependency of those rules (clean / release), your dependency files do not get generated when you call them, and everything is fine. I don't really see the problem you're having - you don't have to make the include conditional.
Perhaps you'd like to change your approach, though. Instead of having a separate *.d target, with a separate -M preprocessor pass, you might want to try something like -MMD -MP, which generates the dependency files inline during code generation, in your standard *.c -> *.o pass.
(I know this sounds completely wrong at first, but when you think about it, it makes sense. Makefile logic is a bit backwards that way, unless you're familiar with functional programming.)
include directives are independent of the rules, as they are instructions to make itself, not compilation commands. You can, however, use makefile conditionals based on special makefile variables such as MAKECMDGOALS, which holds the list of goals given on the command line:
ifeq ($(MAKECMDGOALS),all)
-include whatever
endif
This way the dependency files are included only when all is explicitly named as a goal (note that MAKECMDGOALS is empty when you run make with no arguments, even though all is the default goal). You can change the condition to check for whichever goals should pull in other sub-makefiles.

How do you implement a Makefile that remembers the last build target?

Let's say you have a Makefile with two pseudo-targets, 'all' and 'debug'. The 'debug' target is meant to build the same project as 'all', except with some different compile switches (like -ggdb, for example). Since the targets use different compile switches, you obviously need to rebuild the entire project if you switch between the two. But GNU make doesn't naturally recognize this.
So if you type make all you'll get
Building ...
...
Then if you type make debug, you get
make: Nothing to be done for `debug'.
So my question is: how do you implement a clean solution in the Makefile to notice that the last build used a different pseudo-target, or different compile switches, than the one you want currently? If they are different, the Makefile would rebuild everything.
Put the build products into different directory trees (whilst keeping one copy of the source of course). That way you are always just a short compile from an up-to-date build, be it debug or release (or even others). No possibility of confusion either.
EDIT
Sketch of the above.
src := 1.c 2.c 3.c
bare-objs := ${src:%.c=%.o}
release-objs := ${bare-objs:%=Release/%}
debug-objs := ${bare-objs:%=Debug/%}

Release/prog: ${release-objs}
Debug/prog: ${debug-objs}

${release-objs}: Release/%.o: %.c # You gotta lurve static pattern rules
	gcc -c $< -o $@
${debug-objs}: Debug/%.o: %.c
	gcc -c $< -o $@

Release/prog Debug/prog:
	gcc $^ -o $@

.PHONY: all
all: Release/prog ; echo $@ Success

.PHONY: debug
debug: Debug/prog ; echo $@ Success
(Disclaimer: not tested, nor even run through make.)
There you go. It's even -j safe, so you can do make -j5 all debug. There is a lot of obvious boilerplate just crying out for tidying up.
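One way that tidying might go (an untested sketch using a template expanded once per variant; the -O2 and -ggdb flags are just illustrative):
define variant-rules
$(1)/prog: $${src:%.c=$(1)/%.o}
$(1)/%.o: %.c
	gcc $(2) -c $$< -o $$@
endef

$(eval $(call variant-rules,Release,-O2))
$(eval $(call variant-rules,Debug,-ggdb))

Release/prog Debug/prog:
	gcc $^ -o $@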
Keeping variant sets of object files (as in bobbogo's solution) is probably the best way, but if for some reason you don't want to do that, you can use empty files as markers, to indicate which way you last built the executable:
%-marker:
	@rm -f $(OBJECTS) *-marker
	@touch $@

debug: GCCFLAGS += -ggdb
debug: SOMEOTHERFLAG = WHATEVER

all debug: % : %-marker
	@echo making $@
	@$(MAKE) -S GCCFLAGS='$(GCCFLAGS)' SOMEOTHERFLAG='$(SOMEOTHERFLAG)' main
There are other variants on this idea; you could have a small file containing the flag settings, which the makefile would build and include. That would be clever, but not really any cleaner than this.
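A rough sketch of that file-of-flags variant (untested; .lastflags is a hypothetical name, and the idea is simply that the file changes exactly when the flags do, so anything depending on it is rebuilt then):
FLAGS_FILE := .lastflags
CURRENT_FLAGS := $(strip $(GCCFLAGS) $(SOMEOTHERFLAG))

# Rewrite the file only when the recorded flags differ from the current ones.
ifneq ($(strip $(shell cat $(FLAGS_FILE) 2>/dev/null)),$(CURRENT_FLAGS))
$(shell echo '$(CURRENT_FLAGS)' > $(FLAGS_FILE))
endif

# Anything whose output depends on the flags should depend on the file.
$(OBJECTS): $(FLAGS_FILE)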
The only clean solution is to incorporate the difference into the target names.
E.g. you can define a variable $(DEBUG) and consistently use it in all targets that depend on the compile step.
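A minimal sketch of that idea (the names here are hypothetical; the point is simply that the two configurations never collide because DEBUG ends up in the target paths):
DEBUG ?= 0
ifeq ($(DEBUG),1)
CFLAGS += -ggdb
endif
BUILDDIR := build-$(DEBUG)

$(BUILDDIR)/prog: $(BUILDDIR)/main.o
	$(CC) -o $@ $^

$(BUILDDIR)/main.o: main.c
	mkdir -p $(BUILDDIR)
	$(CC) $(CFLAGS) -c -o $@ $<
Invoked as make DEBUG=1 or as plain make, the two builds then live side by side, and switching between them never forces a full rebuild of the other.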
