Makefile library prerequisite - gcc

I have a makefile which I am using to cross-compile for an embedded ARM platform with gcc. Specifically, I am using arm-none-eabi-gcc, but the same applies to avr-gcc, msp430-gcc, etc. Typically, when using make+gcc (and not cross-compiling), I list libs as prerequisites as follows:
programA.elf: programA.o foo.o -lm ...etc
programB.elf: programB.o bar.o -lftdi ...etc
%.elf:
	gcc $(LDFLAGS) -o $@ $^
Make handles this "-l" syntax very nicely, and it's very convenient if you are building multiple programs/targets and want a generic rule for linking. The problem I have run into during cross-compiling is that arm-none-eabi-gcc obviously has a different libm.a than my system gcc's libm.so (for example), but Make doesn't know what's going on here and keeps trying to use the x86 libm instead of the ARM one. I can get things to work by adding the line:
.LIBPATTERNS = /usr/lib/arm-none-eabi/newlib/lib%.a
but it seems kind of clunky, and it forces anyone wanting to compile the project to know more about the toolchain's install locations than is normally expected.
My question is: "Is there a better convention for listing a binary's lib dependencies that won't break when cross-compiling?"

This can be done, but a general solution is complex. I have Makefiles which build arm, x86, and c67 executables from a single set of sources. The page you reference alludes to the key: VPATH. I suggest a separate subdirectory for each architecture. The following is not working code, but it gives the idea:
all: arm/pgma x86/pgma
vpath %.c $(CURDIR)
arm x86:
	mkdir -p $@
arm/pgma: arm/main.o arm/sub.o | arm
x86/pgma: x86/main.o x86/sub.o more.o | x86
arm/%: CC=arm-none-eabi-gcc
arm/%: CFLAGS += -march=armv7-a -mtune=cortex-a8
x86/%: CC=gcc
arm/%: VPATH = /usr/lib/arm-none-eabi/newlib
# Notice, VPATH not needed for x86 since it is the native host
This entire concept can be extended to build dependency files in each subdirectory, as well as debug and release variants. I have not tried this with the -lfoo syntax, but it should work. E.g.,
arm/pgma: arm/main.o arm/sub.o -lmylib | arm
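If the hard-coded .LIBPATTERNS path from the question still bothers you, another option is to derive the library directory from the cross compiler itself, so nobody has to know the install location. This is only a sketch (the variable names are mine, and it assumes the toolchain's gcc supports -print-file-name):

```make
CROSS  := arm-none-eabi-
CC     := $(CROSS)gcc
# Ask the cross gcc where its libm.a lives, and search that directory
# for every -lfoo prerequisite (falling back to the usual lib%.a).
LIBDIR := $(dir $(shell $(CC) -print-file-name=libm.a))
.LIBPATTERNS := $(LIBDIR)lib%.a lib%.a
```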

Related

How to create and link a static library for an ARM project using arm-none-eabi-gcc?

I want to create a static library libmylib.a from mylib.c/.h and link it to a project to use this library in bootloader code using the arm-none-eabi-gcc cross compiler in ubuntu 20.04 LTS.
I have an electronic engineering background, so I'm kind of new in this compiler and linker stuff.
What I know:
I've been searching about this, and found out that '.a' are just packed '.o' files, and that's it. You can do it using ar in linux. I don't know how to manage the dependencies for this '.a' file, for example, or how to link it to the project.
What I want to know:
I really want to understand how it works, to compile and generate the bin, elf or hex files using these static libraries for arm using the arm-none-eabi-gcc cross compiler (found some for linux), but I don't know how to search for this properly, how to learn it in a linear way. If you guys could help me on this I would be really grateful.
First you create your library objects. Let us say that you have a foo function written in foo.c, then you do:
arm-none-eabi-gcc -c foo.c
The -c option tells the compiler to stop after assembling and not go on to linking.
Then you need to create the .a file
arm-none-eabi-ar -rc libfoo.a foo.o
this command creates a static library called libfoo.a
At the end you compile your main with:
arm-none-eabi-gcc main.c -L. -lfoo -o main
Note that with the -l flag we don't write the "lib" prefix or the ".a" suffix; those are added automagically. The -L. flag tells gcc to look in the current folder for library files. Also note that -lfoo comes after main.c: the linker resolves symbols left to right, so a static library must follow the objects that use it.

Automatic dependency detection not working in GFortran

In the GCC Wiki it is stated that support for auto-detection of dependencies has been available since version 4.6:
Support the generation of Makefile dependencies via the -M... flags of GCC; you may need to specify additionally the -cpp option. The dependencies take modules, Fortran's include, and CPP's #include into account. Note: Using -M for the module path is no longer supported, use -J instead.
In my program I have two Fortran files: module_1.f08 and main.f08, where main uses module_1. I am using the following command to try to auto-detect dependencies of main:
gfortran -cpp -M main.f08
If module_1 has been already compiled, the command above returns a list of dependencies as expected, though if module_1 has not been compiled yet, I get an error message instead saying that module_1.mod does not exist.
The way I'm seeing this is that every time a new module is added to the code, it has to be compiled separately before running make all (or we might run make all before using the module in any other file, then use the module and compile again) or else any dependency of it might be compiled before the module itself and a compilation error will be returned.
Another thing is that dependency files have to be created gradually and one-by-one as the project grows, and if .mod files and dependency files got deleted at some point (with a make clean command for example), there will be no way to generate dependency files all at once using the auto-detection feature.
Is there a way to get around these limitations? Is there a way for auto-detection to work even if .mod files do not exist yet?
To start with, you need to add snippets to your Makefile to actually use the dependency generation feature. Additionally, you can use the -MD option to generate dependency files automatically for each target, so you don't need a special target to regenerate your dependencies. For an example project like yours above with a main.f90 that uses a module defined in mod1.f90 a simple Makefile using dependencies could look like:
FC := gfortran
FFLAGS := -O2 -g
LIBS := # Needed libs like -lopenblas
SRCS := mod1.f90 main.f90
OBJS := ${SRCS:f90=o}
DEPS := ${OBJS:o=d}
myprog: $(OBJS)
	$(FC) $(FFLAGS) -o $@ $^ $(LIBS)

.PHONY: clean
clean:
	-rm -f $(OBJS) *.mod

-include $(DEPS)

%.o: %.f90
	$(FC) $(FFLAGS) -cpp -MD -c -o $@ $<
When you run make you'll see that it generates files main.d and mod1.d containing the dependencies for the corresponding source file.
A (minor?) problem here is that your SRCS variable containing your list of source files must be listed in an order that allows the files to be compiled from left to right before you have any .d files. So the dependency stuff as it's done here doesn't help with ordering a build before the .d files have been generated. (Thus I'd recommend distributing the .d files as part of the source package.)
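One workaround for that first-build ordering problem (my suggestion, not part of the answer above) is to seed the one inter-file dependency by hand, so the very first build compiles mod1.f90 before main.f90 even when no .d files exist yet:

```make
# Hand-written seed dependency; the generated .d files take over afterwards.
main.o: mod1.o
```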

using external libraries with msys - minGW

I want to write and compile C++ code that requires the FLTK 1.3.2 GUI libraries.
I would like to use minGW with MSYS.
I have installed minGW and MSYS properly and have been able to build FLTK with ./configure && make. Everything worked up to this point.
Now I am testing the hello program, and can get the compiler to locate the header files, but it returns errors - which I believe are a result of the compiler not finding the location of the FLTK library. I have looked over the minGW site and it seems the difficulty of getting MSYS to direct the compiler to the correct location is not uncommon.
I have worked with C++ minGW for about a year but am completely new to MSYS.
Here is my command:
c++ Hello.cxx -Lc:/fltk-1.3.2/test -Ic:/fltk-1.3.2 -o Hello.exe
(I am not sure if my syntax is correct so any comments are appreciated)
Here is what I get from the compiler:
C:\Users\CROCKE~1\AppData\Local\Temp\ccbpaWGj.o:hello.cxx(.text+0x3c): undefined reference to 'Fl_Window::Fl_Window(int, int, char const*)'
... more similar comments...
collect2: ld returned exit status
It seems the compiler can't find the function definitions which I believe are in c:/fltk-1.3.2/test.
Again, I am a newbie so any help is greatly appreciated.
Thanks.
Your compile command is not good... You only tell ld where to search for additional libraries with the -L parameter, but you do not specify any library you actually want to use. For that you use the -l flag.
So the command should be something like: g++ Hello.cxx -Lc:/fltk-1.3.2/test -Ic:/fltk-1.3.2 -o Hello.exe -lfltk_images -lfltk -lwsock32 -lgdi32 -luuid -lole32
My recommendation - use the provided fltk-config script to obtain the flags.
Here is a MinGW makefile I "stole" from here: http://www.fltk.org/articles.php?L1286 .
# Makefile for building simple FLTK programs
# using MinGW on the windows platform.
# I recommend setting C:\MinGW\bin AND C:\MinGW\msys\1.0\bin
# in the environment %PATH% variable on the development machine.
MINGW=C:/MinGW
MSYS=${MINGW}/msys/1.0
FLTK_CONFIG=${MSYS}/local/bin/fltk-config
INCLUDE=-I${MSYS}/local/include
LIBS=-L${MSYS}/local/lib
CC=${MINGW}/bin/g++.exe
RM=${MSYS}/bin/rm
LS=${MSYS}/bin/ls
EXE=dynamic_buttons_scroll.exe
SRC=$(shell ${LS} *.cxx)
OBJS=$(SRC:.cxx=.o)
CFLAGS=${INCLUDE} `${FLTK_CONFIG} --cxxflags`
LINK=${LIBS} `${FLTK_CONFIG} --ldflags`
all: ${OBJS}
	${CC} ${OBJS} ${LINK} -o ${EXE}

%.o: %.cxx
	${CC} ${INCLUDE} ${CFLAGS} -c $*.cxx -o $*.o

clean:
	- ${RM} ${EXE}
	- ${RM} ${OBJS}

tidy: all
	- ${RM} ${OBJS}

rebuild: clean all

# Remember, all indentations must be tabs... not spaces.

Cross-Compiling for an embedded ARM-based Linux system

I am trying to compile some C code for an embedded (custom) ARM-based Linux system. I set up an Ubuntu VM with a cross-compiler named arm-linux-gnueabi-gcc-4.4 because it looked like what I needed. Now when I compile my code with this gcc, it produces a binary like this:
$ file test1
test1: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked
(uses shared libs), for GNU/Linux 2.6.31,
BuildID[sha1]=0x51b8d560584735be87adbfb60008d33b11fe5f07, not stripped
When I try to run this binary on the embedded Linux, I get
$ ./test1
-sh: ./test1: not found
Permissions are sufficient. I can only imagine that something's wrong with the binary format, so I looked at some working binary as reference:
$ file referenceBinary
referenceBinary: ELF 32-bit LSB executable, ARM, version 1, dynamically linked
(uses shared libs), stripped
I see that there are some differences, but I do not have the knowledge to derive what exactly I need to fix and how I can fix that. Can someone explain which difference is critical?
Another thing I looked at are the dependencies:
$ ldd test1
libc.so.6 => not found (0x00000000)
/lib/ld-linux.so.3 => /lib/ld-linux.so.3 (0x00000000)
(Interestingly, this works on the target system although it cannot execute the binary.) The embedded system only has a libc.so.0 available. I guess I need to tell the compiler the libc version I want to link against, but as I understand it, gcc just links against the version it comes with, is this correct? What can I do about it?
Edit: Here's the Makefile I use:
CC=/usr/bin/arm-linux-gnueabi-gcc-4.4
STRIP=/usr/bin/arm-linux-gnueabi-strip
CFLAGS=-I/usr/arm-linux-gnueabi/include
LDFLAGS=-nostdlib
LDLIBS=../libc.so.0
SRCS=test1.c
OBJS=$(subst .c,.o,$(SRCS))
all: test1

test1: $(OBJS)
	$(CC) $(LDFLAGS) -o main $(OBJS) $(LDLIBS)
	$(STRIP) main

depend: .depend

.depend: $(SRCS)
	rm -f ./.depend
	$(CC) $(CFLAGS) -MM $^>>./.depend;

clean:
	rm -f $(OBJS)

include .depend
What you should probably do is to install libc6 on the embedded system. Read this thread about a similar problem. The solution in post #5 was to install:
libc6_2.3.6.ds1-13etch9_arm.deb
linux-kernel-headers_2.6.18-7_arm.deb
libc6-dev_2.3.6.ds1-13etch9_arm.deb
Your other option is to get the libc from the embedded system onto your VM and then pass it to the gcc linker and use the -static option.
This solution was also mentioned in the above thread. Read more about static linking here.
Other things to try:
In this thread they suggest removing the -mabi=apcs-gnu flag from your makefile if you're using one.
This article suggests feeding gcc the -nostdlib flag if you're compiling from the command line.
Or you could switch to using the arm-none-eabi-gcc compiler. References on this can be found here and here.
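For the static-linking route, a minimal sketch in the question's Makefile terms (an assumption on my part; it presumes the cross toolchain ships static libraries):

```make
CC     := /usr/bin/arm-linux-gnueabi-gcc-4.4
CFLAGS := -I/usr/arm-linux-gnueabi/include
STRIP  := /usr/bin/arm-linux-gnueabi-strip

# -static folds the toolchain's libc into the binary, so the device's
# libc.so.0 version no longer matters (at the cost of a larger executable).
test1: test1.c
	$(CC) $(CFLAGS) -static -o $@ $<
	$(STRIP) $@
```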

GCC is not linking library to a non-default path

I have boost C++ libraries already installed on my Fedora10 machine but I want to use a newer version that I keep at some location in my home folder. I want g++ to use include and library files from my home folder location instead of default (/usr/include and /usr/lib64).
For that matter, I also have declared CPLUS_INCLUDE_PATH and LIBRARY_PATH environment variables in my ~/.bashrc file as explained here.
Now when I run,
g++ -o hello.so -fPIC hello.cpp -shared -lboost_python
The preprocessor uses include files from my home folder location, overriding the default location (as it should, because CPLUS_INCLUDE_PATH has a higher precedence in the search path). But the linker does not seem to follow the same precedence rule. It always uses libboost_python.so from the default location /usr/lib64 instead of first searching LIBRARY_PATH. It only links to the libboost_python.so library in my home folder when I explicitly specify it with the -L switch. This is really inconvenient.
The -L switch is the standard way of telling the compiler where to find the libraries. Write a makefile that builds your compiler/linker switches - you'll find it's worth investing your time. You can do something like:
MY_LIBPATH += -L$(BOOST_LIB_PATH)
MY_INCPATH += -I$(BOOST_INC_PATH)
hello.so: hello.cpp
	g++ -o $@ -fPIC $(MY_INCPATH) $(MY_LIBPATH) hello.cpp -shared -lboost_python
And then you can control this via environment (of course there could be many variations on how to structure the makefile.)