How is automatic dependency generation with GCC and GNU Make useful?

I found many ways online to use the -M-type flags for GCC (or G++) to automatically generate dependencies in your Makefile. All the approaches seem similar; I implemented this one.
All the arguments in favor that I could find boil down to: it saves you from having to manage dependencies manually.
I don't see why.
Consider the following files:
main.c
#include "foo.h"
int main() { foo(); return 0; }
foo.h
void foo();
foo.c
#include "foo.h"
void foo() { ... }
I would say that main.c depends on foo. However, when I run make main.o, foo is not built. The dependency file main.d contains the following, which explains why foo has not been built:
main.o: main.c foo.h
foo.h:
Now if I were to create an executable (e.g. app: ; $(CC) -o app main.c, with or without auto-dependency generation flags), I would still have to manually specify that it depends on foo.o.
So my question is: how does the auto-dependency generation save me any work if I still have to specify the dependency on foo.o?

No, main.c does not depend on foo. To be more exact, main.c does not depend on anything but the text editor. It is main.o that depends on main.c and foo.h, as those files are required to compile main.o. Your final binary should depend on main.o and foo.o so that they are linked together, but that has to be stated explicitly: neither make nor the linker will figure out which files you want to build together.
What auto-dependency generation gives you is the knowledge that when foo.h changes, main.o needs to be recompiled (since it is included from main.c), even though main.c itself did not change.

"I would say that main.c depends on foo."
Not quite; main.o depends on foo.h, and app depends on foo.o.
Automatic dependency generation can take care of that first dependency; the compiler finds #include "foo.h" in main.c and takes note of it.
The second dependency you must take care of. Neither the compiler nor Make can deduce it. (Bear in mind that not every header file has a corresponding source file with a matching name.)
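To make the division of labour concrete, here is a minimal sketch of a Makefile that combines both kinds of dependency; -MMD/-MP is just one common choice of auto-dependency flags, the file names are taken from the question, and recipe lines must start with a tab:
CC     = gcc
# -MMD writes a .d file of header dependencies next to each .o; -MP adds
# dummy targets so make does not complain if a header is later deleted.
CFLAGS = -MMD -MP
OBJS   = main.o foo.o
DEPS   = $(OBJS:.o=.d)

# The link-time dependency on foo.o still has to be written by hand.
app: $(OBJS)
	$(CC) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# Pull in the generated header dependencies (main.d, foo.d), if they exist yet.
-include $(DEPS)
The generated .d files only answer the question "which headers does each object depend on?"; the app: $(OBJS) line is exactly the part that make can never infer for you.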

Related

LD: does --export-dynamic imply --whole-archive for a static lib if any of its symbols are referenced?

My main executable links with a static library whose symbols need to be available to dynamic libraries loaded through dlopen(). I understand that I need to use the -Wl,--export-dynamic,--whole-archive flags to make it work. However, there are many libraries specified on the link command, some of them possibly unused, and I'm having difficulty applying --whole-archive selectively to the needed library through cmake within the current build infrastructure. What I'm seeing is that if only -Wl,--export-dynamic is used and the executable calls a function in the static library of interest, then the whole library gets included, to the same effect as specifying --whole-archive for it, which is exactly what I need. Can I rely on this behavior to implicitly impose --whole-archive on libs whose symbols are referenced by the executable?
What I'm seeing is that if only -Wl,--export-dynamic is used and the executable calls a function in the static library of interest, then the whole library gets included, to the same effect as specifying --whole-archive for it, which is exactly what I need.
This isn't supposed to happen, and it is very likely that you are mis-interpreting what you see.
Example:
// foo.c
int foo() { return 42; }
// bar.c
int bar() { return 24; }
// main.c
int main() { return foo() - 42; }
gcc -w -c foo.c bar.c main.c
ar ruv libfoobar.a foo.o bar.o
gcc -Wl,--export-dynamic main.o -L. -lfoobar
nm a.out | egrep ' (foo|bar)'
000000000000113c T foo
As you can see, the whole libfoobar.a was not included in the executable. Contrast with:
gcc -Wl,--export-dynamic main.o -L. -Wl,--whole-archive -lfoobar -Wl,--no-whole-archive
nm a.out | egrep ' (foo|bar)'
0000000000001147 T bar
000000000000113c T foo
Update:
if I add a function foo1() to foo.c it is pulled in, but that happens regardless of whether --export-dynamic is supplied.
That is expected: the linker doesn't "split" a single .o file -- you get all or nothing.
You can change this behavior by using -ffunction-sections (and -fdata-sections for good measure) at compile time and -Wl,--gc-sections at link time.
The cost is increased .o size and longer link time. The benefit is a smaller executable.
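As a sketch built on the same files as above (with an unreferenced foo1() added to foo.c, as in the update), per-function sections plus section garbage collection look like this:
gcc -w -ffunction-sections -fdata-sections -c foo.c bar.c main.c
ar ruv libfoobar.a foo.o bar.o
gcc -Wl,--gc-sections main.o -L. -lfoobar
nm a.out | egrep ' (foo1|foo|bar)'   # only foo should remain: foo1's section is discarded even though foo.o was pulled in
Note that symbols placed in the dynamic symbol table (for example via --export-dynamic) may act as additional garbage-collection roots, so in that configuration fewer sections may be discarded.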

GCC - how to tell linker not to skip unused sections

My problem is following:
I am trying to write an embedded application, which must have its own linker script supplied (using the arm-none-eabi-gcc compiler/linker).
The embedded bootloader loads the binary and starts execution at address 0x8000, which is why I need a dedicated linker script that lets me place the desired startup function at that address. The script is as follows:
MEMORY
{
ram : ORIGIN = 0x8000, LENGTH = 0x1000
}
SECTIONS
{
.start : { *(.start) } > ram
.text : { *(.text*) } > ram
.bss : { *(.bss*) } > ram
}
Given this, what I want now is a function that is placed in the .start section, so that it sits at the very beginning, at 0x8000. For this, my library uses the following function:
__attribute__((section(".start"))) void notmain() {
main();
}
This seems to work fine, but then I link this library (containing notmain) with the project, which defines the main() function. During the link process I can see that the .start section no longer exists and the notmain symbol is missing entirely. When I move the notmain function out of the library (into the project), it all works fine.
My understanding is that the linker sees that the .start section is not referenced anywhere in my application, which makes it skip the section entirely. I already tried adding attributes to notmain, such as __attribute__((used)) and __attribute__((externally_visible)), but that did not help either (notmain is still missing from the final binary).
The CMake code is as follows:
** Project **
project(AutomaticsControlExample)
enable_language(ASM)
set(CMAKE_CXX_STANDARD 14)
set(SOURCES main.cpp PID.hpp)
set(DEPENDENCIES RPIRuntime PiOS)
add_executable(${PROJECT_NAME} ${SOURCES})
target_link_libraries(${PROJECT_NAME} ${DEPENDENCIES})
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
COMMAND ${CMAKE_OBJDUMP} -D ${PROJECT_NAME}
COMMAND ${CMAKE_OBJDUMP} -D ${PROJECT_NAME} > ${PROJECT_NAME}.list
COMMAND ${CMAKE_OBJCOPY} ${PROJECT_NAME} -O binary ${PROJECT_NAME}.bin
COMMAND ${CMAKE_OBJCOPY} ${PROJECT_NAME} -O ihex ${PROJECT_NAME}.hex)
** Library **
project(RPIRuntime)
enable_language(ASM)
set(CMAKE_CXX_STANDARD 14)
set(LINKER_SCRIPT memmap)
set(LINKER_FLAGS "-T ${CMAKE_CURRENT_SOURCE_DIR}/${LINKER_SCRIPT}")
set(SOURCES
notmain.cpp
assert.cpp)
add_library(${PROJECT_NAME} STATIC ${SOURCES})
target_link_libraries(${PROJECT_NAME} ${LINKER_FLAGS})
My question is: is there any way to prevent the linker from omitting the .start section?
As you know, a static library is an ar archive of object files.
Suppose libfoobar.a contains just foo.o and bar.o. A linkage:
g++ -o prog a.o foo.o bar.o # A
is not the same as the linkage:
g++ -o prog a.o -lfoobar # B
The linker unconditionally consumes every object file in the linkage sequence,
so in case A, it links a.o, foo.o, bar.o in prog.
The linker does not unconditionally consume every object file that is a member of
a static library in the linkage sequence. A static library is a way of offering to
the linker a bunch of object files from which to pick the ones it needs.
Suppose that a.o calls function foo, which is defined in foo.o, and that
a.o references nothing defined in bar.o.
In that case, the linker unconditionally links a.o into prog, after which
prog contains an undefined reference to foo, for which the linker needs a
definition. Next it reaches libfoobar.a and inspects the archive (by its index,
normally) to see if any member of the archive defines foo. It finds that foo.o does
so. So it extracts foo.o from the archive and links it. It needs no definitions
for any symbols defined in bar.o, so bar.o is not added to the linkage. The
linkage B is exactly the same as:
g++ -o prog a.o foo.o
Suppose on the other hand that a.o calls bar, which is defined in bar.o,
and references nothing defined in foo.o. In that case, the linkage B is
exactly the same as:
g++ -o prog a.o bar.o
So an object file that you insert into a static library for linkage with
your executable will never be linked, by default, unless it provides a definition
for at least one symbol that is referenced, but not defined, in an object file
that has already been linked.
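A quick way to watch this selection happen is the following sketch (a.c, foo.c and bar.c are hypothetical files matching the names above):
gcc -c a.c foo.c bar.c           # a.c calls foo(); nothing calls bar()
ar rcs libfoobar.a foo.o bar.o
gcc -o prog a.o -L. -lfoobar
nm prog | egrep ' T (foo|bar)'   # foo is there; bar.o never left the archive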
Your function notmain is not referenced in the only object file, main.o, that
you are explicitly linking into your program. Therefore, when main.o is linked into your program,
the program contains no undefined reference to notmain: the linker requires no definition
of notmain - it has never heard of notmain - and will not link any object file
from within a static library to obtain a definition of notmain. This has nothing
to do with linkage sections.
When linking an ordinary program with static libraries, as a matter of course
you do it like:
g++ -o prog main.o x.o ... -ly -lz ....
where one of the *.o files - say main.o - is the object file that defines the main function. You never
put main.o in one of the static libraries. That's because, in an ordinary program,
main is not called in any of the other object files you are explicitly linking,
so if main.o was in one of your libraries, the linkage:
g++ -o prog x.o ... -ly -lz ...
would have no need to find a definition of main at any of -ly -lz ..., and no definition
of main would be linked.
The case is just the same with your notmain. If you want it linked, you can do one of the following:
1. Add -Wl,--undefined=notmain to your linkage options (replacing notmain with the mangled name of notmain, for C++). This makes the linker assume it has an undefined reference to notmain even though it hasn't seen one.
2. Add the command EXTERN(notmain) to your linker script (again with mangling for C++). This is equivalent to 1.
3. Explicitly link an object file that defines notmain; don't put it in a static library.
Option 3 is effectively what you did when you discovered that:
When I move the notmain function out of the library (into the project), it all works fine.
For option 3, however, you don't need to compile notmain.cpp in your project and in every
other project that needs notmain.o. You can build it independently, install it
in /usr/local/lib and explicitly add /usr/local/lib/notmain.o to the
linkage of your project. That would be following the example of GCC itself, which explicitly
links the crt*.o startup files of an ordinary program just by appending their
absolute names to the linkage, e.g.
/usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/crti.o
...
/usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/crtn.o
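For completeness, options 1 and 2 might look something like the sketch below; the library and output names are taken from the question's CMake files, and the mangled name assumes notmain is declared as void notmain() in C++ (confirm the real name with nm):
# Option 1: force an undefined reference to notmain on the link command line
arm-none-eabi-g++ -T memmap -Wl,--undefined=_Z7notmainv main.o -L. -lRPIRuntime -o AutomaticsControlExample

# Option 2: equivalently, inside the linker script (memmap)
EXTERN(_Z7notmainv)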

Problem understanding the make utility and the header .h files usage

I am trying to learn the make command, but I am having a little trouble with the way headers are being used.
I have prg1.c:
#include <stdio.h>
main()
{
printf("Hello World\n");
print();
}
and prg2.c:
#include <stdio.h>
print()
{
printf("Hello World from prg2\n");
}
and here is the makefile:
objects = prg1.o prg2.o
exe : $(objects)
cc -o exe $(objects)
prg1.o : prg1.c
cc -c prg1.c
prg2.o : prg2.c
cc -c prg2.c
This works perfectly. But if I don't include stdio.h in both files and still have to compile them using make, how am I supposed to write the makefile?
If you don't include <stdio.h>, then you can do one of two things:
supply a correct declaration for printf yourself:
int printf(const char *fmt, ...);
There is almost never any reason to do such a thing.
If your compiler is GCC, use the -include compiler option to force the inclusion of "stdio.h":
prg1.o: prg1.c
gcc -c -include stdio.h prg1.c
This is completely hokey; don't do it.
Note that the make utility has nothing to do with ensuring that the correct header material is included in C translation units. make is a utility which runs shell commands in response to some files not existing or having modification stamps older than other files.

MakeFile Example

main.c: simple 'driver' program to call the 'sayHello()' function in the hello module. Note that since main.c does not call any standard I/O
library functions, it should not have #include <stdio.h>
hello.h: provides the prototype for the sayHello() function; don't forget
the include guard
hello.c: implements the sayHello() function. This is the only file that has
#include <stdio.h>
Here is my Makefile: (w/o the 'pack' part)
hello: hello.o main.o
gcc main.o hello.o -o hello
main.o: main.c hello.h
gcc -c main.c -o main.o
hello.o: hello.c hello.h
gcc -c hello.c -o hello.o
test: hello
./hello
clean:
rm -f *.o hello
My hello.c file is:
#include<stdio.h>
#include "main.c"
int main()
{
sayHello();
return 0;
}
My hello.h file is:
void sayHello(void);
My main.c file is:
#include "hello.h"
void sayHello(void)
{
puts("Hello,World!");
return;
}
I did a test with this and it displayed "Hello, World!". But when I ran it again just in case, there were errors. Any ideas what could have happened?
hello.c and hello.h form a small library. hello.h exposes the sayHello() function to the world, and that function is implemented in hello.c. That means hello.c must have the following includes:
#include "hello.h"
and
#include <stdio.h>
main.c should only have:
#include "hello.h"
I think "guard" should be a function prototype in hello.h:
void sayHello(void);
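Putting that together, a corrected layout looks like the sketch below (the guard macro name HELLO_H is just a conventional choice):
/* hello.h */
#ifndef HELLO_H
#define HELLO_H
void sayHello(void);
#endif

/* hello.c - the only file that includes <stdio.h> */
#include <stdio.h>
#include "hello.h"
void sayHello(void)
{
    puts("Hello, World!");
}

/* main.c - only needs hello.h */
#include "hello.h"
int main(void)
{
    sayHello();
    return 0;
}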
You seem to be asking several questions without realizing it.
Let's look at them one by one. Your assignment is to build a project that separates the program which calls a function from the implementation of that function, in distinct pieces of source code. That's why you have the restriction on #include and the specification of the include file. You're also asked to write a Makefile that compiles the various source files into a single program and provides basic facilities like compressing the source code into a zip file or removing the object files. The assignment is meant to acquaint you with modular, automated compilation and with separating functions into distinct pieces.
If you want to learn about programming the best thing you can do is invest some effort into looking at simple make files and compilation. I could give you the answer but won't until you've tried for a while. You'll learn more from failed attempts than peeking at the answers.
In short, you first have to create the source code: figure out how to separate the sayHello function and the main function into two different source files, and export the function declaration through the include file. The second problem is the design of the makefile, which your assignment pretty much specifies; all you have to do is learn the makefile language and rewrite the human-worded specification in make's format. You'll benefit from searching for "makefile tutorial" and reading the first handful of results. ... I'm assuming you want to learn all of this and not just get the answers for no work. Although makefiles can be tricky, the good news is that at this level they're pretty trivial.
PS Try looking here: http://mrbook.org/tutorials/make/

how to gcc compile with #define in multiple files

I have a project with multiple files. I want to compile it using gcc from the command line.
The directory looks like this:
lib/
Comp/ contains .cpp files
Decomp/ contains .cpp files
Globals.cpp
include/
Comp/ contains .h files
Decomp/ contains .h files
Globals.h
Some of these .h files are not paired with .cpp files.
To compile this I use something like this:
g++ lib/Comp/* lib/Decomp/* lib/Globals.cpp -std=c++0x -o TEST
The problem is, I have to add some #defines for each .h file, and I have to do it through the command line. How do I do this?
Also, if I had to compile each file on its own and then link them, what would be the appropriate order for doing this?
The dirtiest, ugliest way is to use something like:
g++ -Iinclude lib/Comp/*.cpp lib/Decomp/*.cpp lib/Globals.cpp -o test
Your .cpp files should #include <Comp/foo.h> or whatever
The correct way to manage this is to use a makefile to build each object file and then link them together:
Makefile
Create a a file called Makefile and put the following in it:
CXX=g++
CPPFLAGS=-Iinclude -DFOO -DBAR=1 -DSOME_STRING=\"My Name\"
CXXFLAGS=-O2 -g
SOURCES=lib/Comp/file1.cpp \
lib/Comp/file2.cpp \
lib/Comp/file3.cpp \
lib/Decomp/file1.cpp \
lib/Decomp/file2.cpp \
...
OBJ=$(SOURCES:%.cpp=%.o)
default: test
test: $(OBJ)
<tab> $(CXX) -o $@ $(OBJ)
%.o: %.cpp
<tab> $(CXX) $(CPPFLAGS) $(CXXFLAGS) -o $@ -c $<
NOTES
Replace file1.cpp etc. with the actual filenames in your project. DO NOT include headers in SOURCES only your .cpp or .cc files
If you are using sub-paths like #include <Comp/foo.h> or #include "Comp/foo.h" in your source files then you only need to use -Iinclude in CPPFLAGS but if you are doing something like "foo.h" and foo.h is actually in include/Comp/ then add -Iinclude/Comp and -Iinclude/Decomp to the CPPFLAGS
Where it says <tab>, make sure you use the TAB key to insert a real tab character (don't type the word '<tab>')
Before using this Makefile blindly, know that it will NOT work as-is; you have to correct the entries. It is offered as a starting point... Read up on writing Makefiles; http://frank.mtsu.edu/~csdept/FacilitiesAndResources/make.htm has a good introduction.
Defines can be provided on the compiler command line using -DVAR=VALUE (on Windows, presumably /DVAR=VALUE). Note that you can not provide different defines for different headers as in:
compiler -DX=one first.h -DX=two second.h third.cc -o third.o
In such a case, my compiler spews a warning and uses the last value of X for all the source code.
Anyway, in general you should not list header files on the compilation line; prefer to include them from the implementation files (.cc/.cpp or whatever) that need them.
Be careful too: if you're changing defines to modify class definitions, inline function implementations, etc., you can end up with technically and/or practically undefined behaviour.
In terms of how best to decide which objects to create and link: you probably want one object per .cc/.cpp file. Compile each source file to its own object, then link all the objects together, including the one built from the file containing main().
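Concretely, separate compilation along those lines might look like this sketch (the file names are the placeholders from the Makefile above; the same -I and -D flags go to every translation unit, and the order of the object files at the link step does not matter):
g++ -Iinclude -DFOO -DBAR=1 -c lib/Comp/file1.cpp   -o lib/Comp/file1.o
g++ -Iinclude -DFOO -DBAR=1 -c lib/Decomp/file1.cpp -o lib/Decomp/file1.o
g++ -Iinclude -DFOO -DBAR=1 -c lib/Globals.cpp      -o lib/Globals.o
g++ lib/Comp/file1.o lib/Decomp/file1.o lib/Globals.o -o TEST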
