Imagine the following folder structure:
project
  src
    code.c
    makefile
  bin
How can I compile code.c to code.o and put it directly inside bin? I know I could compile it to code.o under src and then do "mv code.o ../bin", but that would yield an error if there were compile errors, right? Even if it works that way, is there a better way to do it?
Thanks.
Whether the process "yields an error" depends on what you mean: if there are compiler errors, you'll know about them either way.
That said, there are several ways to do it with make. The best in this case are probably:
You could put the Makefile in bin. Make is good at using files there to make files here, but not the other way around.
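For example, a minimal sketch of that first option, assuming the makefile is moved into bin and the sources stay in ../src (the file names are taken from the question):
# bin/makefile (sketch): objects are built here, sources are found via VPATH
VPATH = ../src
code.o: code.c code.h
	$(CC) $(CFLAGS) -c $< -o $@
Running make inside bin then leaves code.o exactly where you want it.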
You could specify the target path in the makefile target:
$(MAIN_DIR)/bin/%.o: %.c
$(COMPILE)...
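Fleshed out a little, and assuming the makefile stays in src with MAIN_DIR pointing at the project root (both assumptions, not something specified above), that could look like:
MAIN_DIR := ..
.PHONY: all
all: $(MAIN_DIR)/bin/code.o
# pattern rule: compile src/%.c straight into ../bin/%.o
$(MAIN_DIR)/bin/%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@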
A little late, but in case it is helpful.
This is how I get the path of the directory one level up from where the makefile is:
$(subst $(notdir $(CURDIR)),,$(CURDIR))
If your project looks like this:
~/myProject/
  src/
    Makefile
    # all the .c and .cpp files
  bin/
    # where you want to put the binaries
$(CURDIR) will output ~/myProject/src
$(subst $(notdir $(CURDIR)),,$(CURDIR)) will output ~/myProject
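As a sketch of how that could be used from src/Makefile (TOP and BIN are illustrative names, and $(subst ...) assumes the directory name does not occur elsewhere in the path):
# TOP keeps its trailing slash, e.g. ~/myProject/
TOP := $(subst $(notdir $(CURDIR)),,$(CURDIR))
BIN := $(TOP)bin
$(BIN)/%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@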
You could try moving the file, but only when the compilation was successful, using &&:
code.o: code.c code.h
g++ -c code.c && mv code.o ../
mv code.o ../ will only be executed if g++ returns 0, which is when the compilation was successful. This may not be a suitable solution for you if you have a very complicated makefile, but I thought I'd share what I know.
You can still use the move approach and survive compiler errors:
cc -c code.c && mv code.o ../bin
This won't run the "mv" part if the "cc" part fails.
I'm working with autotools for the first time, for a tool that's written in perl (SQLTeX), so only installation is required, no compilation.
The toplevel contains a simple Makefile.am:
AUTOMAKE_OPTIONS = foreign
SUBDIRS = src man doc
EXTRA_DIST = README.md
.PHONY: all-am
all-am:
#echo "Done!"
If I create Makefile.am files in the sub-directories too, nothing seems to happen there, so I just stick to Makefile. A snippet from src/Makefile (EDIT: this file has now been renamed to Makefile.am):
SQLTeX: SQLTeX.pl
cat $^ | sed -e 's#{PERLDIR}#$(PL)#;s#{SYSCONFDIR}#$(sysconfdir)#' > $@
@chmod +x $@
The symbol PL is set as expected (it is defined in the same makefile), but sysconfdir is empty, although it is defined in the top-level Makefile generated by ./configure.
What am I doing wrong?
Thanks in advance!
What am I doing wrong?
Although the Autotools support, with some caveats, recursing into directories where you provide pre-built makefiles, you cannot expect those pre-built makefiles to be able to rely on autotools-provided variables such as the standard directory variables bindir and sysconfdir. Thus, although it is allowed to rely on hand-written makefiles in subdirectories, this is probably a false trail for you.
I recommend going back to this:
If I create Makefile.am files in the sub-directories too, nothing seems to happen there
and working out what's wrong. The Autotools definitely support generating recursive build systems, and one Makefile.am per directory is part of the usual approach to that. If it didn't work for you then my first guess would be that you forgot to list the extra makefiles in your AC_CONFIG_FILES list.
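For what it's worth, here is a hedged sketch of what src/Makefile.am might look like once src/Makefile is listed in AC_CONFIG_FILES; the {PERLDIR}/{SYSCONFDIR} placeholders and the PL variable come from the question, everything else is an assumption:
# configure.ac needs something like:
#   AC_CONFIG_FILES([Makefile src/Makefile man/Makefile doc/Makefile])
bin_SCRIPTS = SQLTeX
EXTRA_DIST = SQLTeX.pl
CLEANFILES = SQLTeX
# PL is assumed to be set here or substituted by configure via AC_SUBST
SQLTeX: SQLTeX.pl
	sed -e 's|{PERLDIR}|$(PL)|' -e 's|{SYSCONFDIR}|$(sysconfdir)|' $< > $@
	chmod +x $@
Because automake generates src/Makefile from this, $(sysconfdir) is then defined where the rule runs.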
As an alternative, just because you have multiple directories does not mean that you need to use recursive make. It is quite possible to build such a project with the support of a single makefile, and the Autotools can help with such a makefile.
I have a project directory structure of:
Root
  Source
    Common
      MyFolder
        (my 3 source files and header)
When I build my project it generates 3 to 4 shared libraries. Lib1 is compiled using C++98 and the others using C++11. The flags are added in CMakeLists.txt, which is at the root.
I need my 3 source files to be compiled for Lib1 and for the other libs as well. But what happens is that the compiler first compiles my source files for a C++11 lib and then tries to reuse the same .o files for Lib1. So a .o file that was generated with C++11 throws an exception when the same file is used in the C++98-compiled library.
So how do I write this in CMakeLists.txt such that the compiler, rather than trying to reuse the same .o file, compiles the source files again for Lib1 (the C++98-compiled library)?
Is there any flag I can specify so that it won't take the precompiled .o file and will compile it again?
The flags are not being overridden for the different shared libraries here; rather, the same object file is being used by the makefile with different flags.
This is sort of counter to how makefiles and cmake usually work.
Most users consider it really important that make performs an incremental build.
The usual way with makefiles is to do make clean which is supposed to remove any binaries and object files that were created.
However, sometimes I write cmake scripts that use globbing over the source directory to assemble the project. (That means it says "just grab all *.cpp files in the /src folder and make an executable from them".) A makefile cannot check what files are in a directory, so the make build will be broken after I add a new file, and make clean won't fix it -- the whole makefile will need to be regenerated by cmake.
Usually what I do is, I write a simple bash script, named rebuild.sh or something,
#!/bin/bash
rm -rf build
mkdir build
cd build
cmake ..
make -j3
./tests
And I put that in the root of my repository, and add /build to my .gitignore. I call that when I want to do a full rebuild -- it nukes the build directory, so it's foolproof. When I want an incremental rebuild, I just type make again in the /build directory.
The rebuild.sh script can also serve a double purpose if you use travis-ci for continuous integration.
Most build systems assume the compiled objects remain the same within the same pass. To avoid shooting yourself in the foot, I would suggest telling the build system that they are actually different objects, while still compiled from the same source files.
I'm not familiar with cmake, but this is how you would do it with make:
For example, you have an a.cpp which you want to compile twice with different compiler options:
#include <stdio.h>
int main(int argc, char* argv[]) {
printf ("Hello %d\n", TOKEN);
return 0;
}
And the Makefile would look like:
SRC := $(wildcard *.cpp)
OBJ_1 := $(patsubst %.cpp,%_1.o,$(SRC))
OBJ_2 := $(patsubst %.cpp,%_2.o,$(SRC))
all: pass1 pass2
pass1: $(OBJ_1)
gcc -o $@ $(OBJ_1) -lstdc++
pass2: $(OBJ_2)
gcc -o $@ $(OBJ_2) -lstdc++
%_1.o: %.cpp
gcc -DTOKEN=1 -c $< -o $@
%_2.o: %.cpp
gcc -DTOKEN=2 -c $< -o $@
clean:
rm -f $(OBJ_1) $(OBJ_2)
What I do here is generate two different lists of objects from the same source files; you can do the same for the dependency files (-MMD -MP flags).
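A hedged sketch of that dependency idea, extending the two pattern rules above (GNU make and gcc assumed):
DEP_1 := $(OBJ_1:.o=.d)
DEP_2 := $(OBJ_2:.o=.d)
# -MMD writes a .d file named after each object, -MP adds phony header targets
%_1.o: %.cpp
	gcc -DTOKEN=1 -MMD -MP -c $< -o $@
%_2.o: %.cpp
	gcc -DTOKEN=2 -MMD -MP -c $< -o $@
-include $(DEP_1) $(DEP_2)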
I have a C project that consists of a fairly large number of source files, and to make some sense of them, I have put them into subdirectories (with subdirectories). The whole project results in only one executable file, however.
In order to build this project, then, I am using recursive Makefiles, where the Makefile in each non-toplevel directory links all the object files produced in that directory into a concatenated lib.o file (using ld -r, that is). I do have a Makefile system that can build this and works rather fine for what it is, but it cannot support parallel make, which I would like to fix.
The problem is that I cannot figure out a proper way to both force make to descend into each directory's subdirectories, but also have the local lib.o target depend on that without being forced to rebuild even when nothing has changed.
This is how it works, somewhat abbreviated (leaving out CFLAGS and whatnot):
default: build
SUBOBJECTS = $(patsubst %,%/lib.o,$(SUBDIRS))
.PHONY: $(SUBDIRS)
$(SUBDIRS):
@$(MAKE) -C $@
build: $(SUBDIRS) lib.o
lib.o: $(OBJECTS) $(SUBOBJECTS)
$(LD) $(LDFLAGS) -r -o $@ $^
This is from a Makefile.common which all other Makefiles include. Every other Makefile would also define their own SUBDIRS and OBJECTS. It might look like this, for instance:
SUBDIRS = dir1 dir2
OBJECTS = object1.o object2.o
include ../Makefile.common # Or ../../Makefile.common, &c.
As you can see from this, the main target is really the build target, which depends on the subdirectories and lib.o. If I invoke parallel make on this, it won't know that lib.o cannot be built until make has already run recursively on the subdirectories and will sometimes attempt that, causing errors. However, if I make lib.o depend on the subdirectories, then lib.o will always be unnecessarily rebuilt on each invocation, in each directory.
Is there a way to solve this? I've racked my brain over this for quite a while now without being able to find a way out. I'm only using GNU make, so don't worry too much about being POSIX-compatible.
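For comparison, a minimal sketch of the non-recursive alternative (all file names are illustrative): a single makefile that lists every object lets make -j parallelize across directories and only relinks lib.o when an object actually changed.
OBJECTS := object1.o object2.o \
           dir1/object3.o dir1/object4.o \
           dir2/object5.o
lib.o: $(OBJECTS)
	$(LD) $(LDFLAGS) -r -o $@ $^
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@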
When I call protoc like this
protoc --cpp_out=. path/to/test.proto
the files
path/to/test.pb.cc and
path/to/test.pb.h
are generated, which is what I want. But since the .cc needs the .h, the .h is included like this:
#include "path/to/test.pb.h"
which is not what I want. The background is that my build tool (scons) calls protoc from the project's root and not from the directory which includes the source files. I found no obvious option in the manpage or the help text.
So my next idea was to consider this "correct" and adjust my build system, but the two files are siblings in the directory tree, so when one includes the other no path should be needed -- even compiling by hand fails.
Can someone help me with that?
Doing a find-and-replace on the generated files is most likely easier than reorganizing your build system (use the sed command on Linux/Unix).
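As a sketch, wrapped in a make rule so it runs from the project root (GNU sed's -i is assumed, and the path is the one from the question):
# the .pb.h is generated alongside the .pb.cc by the same protoc call
path/to/test.pb.cc: path/to/test.proto
	protoc --cpp_out=. $<
	sed -i 's|#include "path/to/test.pb.h"|#include "test.pb.h"|' $@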
What I ended up doing for my project is as follows:
Create a pb/ directory at the same level as your include/ and src/ directories.
Put your .proto files in there, and create a makefile. Write the following in it:
CXX = g++
CXXFLAGS = -O3
PROTOBF = $(shell find ./ -name '*.proto')
SOURCES = $(subst proto,pb.cc,$(PROTOBF))
OBJECTS = $(subst proto,pb.o,$(PROTOBF))
default: $(OBJECTS)
@echo -n
$(SOURCES): %.pb.cc : %.proto
protoc --cpp_out=. $<
$(OBJECTS): %.pb.o : %.pb.cc
$(CXX) $(CXXFLAGS) -c $< -o $@
This will essentially generate and build the protobuf files when invoked.
In your main makefile, simply add the following include path: -Ipb/.
And when including a protocol buffer header, use #include <whatever.pb.h>.
Add the object files generated in pb/ to your linking step. Myself I used:
PB_OBJS = $(shell find pb/ -name '*.pb.o')
And gave that to the linker along with the normal object files in obj/.
Then, you can probably call the pb/ makefile from the main makefile if you want to automate it. The important point is that protoc be called from the pb/ directory or the include will be messed up.
Sorry for the ugly makefiles. At least it works, and I hope this helps you...
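If you do want the main makefile to drive the pb/ makefile, one hedged way is a phony forwarding target:
.PHONY: pb
pb:
	$(MAKE) -C pb
Have your top-level all target depend on pb; make will then always recurse into pb/, but the sub-makefile only regenerates and recompiles what actually changed.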
I want to create a Makefile (in a parent dir) to call several other Makefiles (in sub dirs) such that I can build several binaries (one per project sub dir) by invoking just the one parent Makefile.
My research has been hampered by finding loads of stuff on recursive Makefiles, but I think that is for when you are trying to build several directories' Makefiles into a single binary?
Maybe what I want to do is better handled by a shell script, perhaps invoking make in each sub-directory in turn, but I thought a Makefile might be a more elegant solution?
Any pointers gratefully received.
PS: using Linux and the GNU tool chain.
The for loop solution given in another answer shouldn't be used as-is. In that method, if one of your sub-makes fails, the build will not fail (as it should) but will continue on with the other directories. Not only that, but the final result of the build will be whatever the exit code of the last subdirectory make was, so if that succeeded the build succeeds even if some other subdirectory failed. Not good!!
You could fix it by doing something like this:
all:
@for dir in $(SUBDIRS); \
do \
$(MAKE) -C $${dir} $@ || exit $$?; \
done
However now you have the opposite problem: if you run "make -k" (continue even if there are errors) then this won't be obeyed in this situation. It'll still exit on failure.
An additional issue with both of the above methods is that they serialize the building of all subdirectories, so if you enable parallel builds (with make's -j option) that will only happen within a single subdirectory, instead of across all subdirectories.
Eregrith and sinsedrix have solutions that are closer to what you want, although FYI you should never, ever use "make" when invoking make recursively. As in johfel's example, you should ALWAYS use $(MAKE).
Something like this is what you want:
SUBDIRS = subdir1 subdir2 subdir3 ...
all: $(addprefix all.,$(SUBDIRS))
all.%:
@ $(MAKE) -C '$*' '$(basename $@)'
.PHONY: $(addprefix all.,$(SUBDIRS))
And of course you can add more stanzas like this for other targets such as "install" or whatever. There are even more fancy ways to handle building subdirectories with any generic target, but this requires a bit more detail.
If you want to support parallel builds you may need to declare dependencies at this level to avoid parallel builds of directories which depend on each other. For example in the above if you cannot build subdir3 until after both subdir1 and subdir2 are finished (but it's OK for subdir1 and subdir2 to build in parallel) then you can add something like this to your makefile:
all.subdir3 : all.subdir1 all.subdir2
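As for the "even more fancy" generic handling mentioned above, here is one hedged sketch (GNU make assumed; the directory and target names are placeholders) that forwards any listed target to every subdirectory:
SUBDIRS := subdir1 subdir2 subdir3
TARGETS := all clean install
SUBDIR_TARGETS := $(foreach t,$(TARGETS),$(addprefix $(t).,$(SUBDIRS)))
.PHONY: $(TARGETS) $(SUBDIR_TARGETS)
# each top-level target depends on its per-directory variants
$(TARGETS): %: $(addprefix %.,$(SUBDIRS))
# "all.subdir1" runs "make -C subdir1 all", and so on
$(SUBDIR_TARGETS):
	@$(MAKE) -C '$(subst .,,$(suffix $@))' '$(basename $@)'
Inter-directory ordering rules like the all.subdir3 example above still work unchanged with this scheme.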
You can call targets in subdirectory makefiles via
all:
$(MAKE) -C subdirectory1 $@
$(MAKE) -C subdirectory2 $@
...
or better
SUBDIRS=subd1 subd2 subd3
all:
@for dir in $(SUBDIRS); \
do \
$(MAKE) -C $${dir} $@; \
done
You should indeed use cmake to generate the Makefile automatically from a given CMakeLists.txt configuration file.
Here's a random link to get you started. Here you can find a simple sample project, including multiple subdirectories, executables, and a shared library.
Each makefile can have several targets; this is still true with recursive makefiles. Usually it's written:
all: target1 target2 target3
target1 :
make -C subdir
Then run make all.