Using the Makefile provided by the pigpio library, I built the libpigpio.so shared object with:
# from line 119 in the Makefile
make libpigpio.so
The shared object is created fine. The Makefile first builds the pigpio.o object, then the command.o object, and links them together into a shared object. So far so good!
I wrote a very small main function that calls gpioInitialise and gpioGetPWMfrequency.
It doesn't really matter which functions; what's important is that they are declared in pigpio.h and implemented in pigpio.c, meaning the shared object should contain them.
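Roughly, drive.c looks like this (a minimal sketch of what I described; the GPIO number is arbitrary):
#include <stdio.h>
#include <pigpio.h>

int main(void)
{
    if (gpioInitialise() < 0)   /* negative return value means initialisation failed */
    {
        fprintf(stderr, "gpioInitialise failed\n");
        return 1;
    }
    printf("PWM frequency on GPIO 18: %d\n", gpioGetPWMfrequency(18));
    gpioTerminate();            /* release pigpio resources */
    return 0;
}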
The compile command for my code is:
gcc -Wall -pthread -fpic -L. -lpigpio -o drive drive.c
Still, I get undefined reference errors for both of those functions.
It makes no sense! If the linker couldn't find the shared object, it would reject the command. I also tried -l:libpigpio.so and still have the same problem.
I am compiling directly on the Rpi A+ (not using a cross compiler). So it should work!
What am I missing here?
It is a link-order issue. Please try the following command.
gcc drive.c -Wall -pthread -fpic -o drive -L. -lpigpio
You can read "Why does the order in which libraries are linked sometimes cause errors in GCC?" for more details.
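Equivalently, you can keep your original flags and just make sure the library comes after the source file that uses it, since the linker resolves libraries left to right:
gcc -Wall -pthread -fpic -o drive drive.c -L. -lpigpio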
I am trying to write a Makefile that takes several previously built static libraries and links them into an executable. One of the libraries contains the main routine.
I get the error:
/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
make: *** [dockSIM_gcc_release] Error 1
I tried linking only the library that contains the main routine, but the error stays the same and appears immediately after invoking make.
The Makefile:
SHELL = /bin/sh
RM=/bin/rm -f
CXX=g++
PROGNAME=dockSIM_gcc_release
DEFINES=-DDOCKSIM_VERBOSE=FALSE -DNDEBUG -DPRINT_LOG_MSG=0 -DPRINT_DEBUG_MSG=0
LDFLAGS = -fopenmp -g -O3 -std=c++11 -mavx -mstackrealign -fstrict-aliasing
LIBS= -lnagc_mkl -lm -L../externalCode -lpardiso500-GNU481-X86-64 -lacml
FILENAMES = commandInterpreter_lib.a
OBJNAMES =
all: $(PROGNAME)
$(PROGNAME): $(FILENAMES)
	$(CXX) $(LDFLAGS) $(DEFINES) -o $(PROGNAME) $(FILENAMES)
clean:
	$(RM) *.mo *.ho *.o $(PROGNAME) core *~
test:
	echo $(FILENAMES)
showlibs:
	echo $(LIBS)
The flags are compatible with those that were used to compile the code.
g++ 4.9.3 is used.
Signature of the main routine:
int main(int argc, char* argv[])
Thanks for your help and kind regards.
I can only guess what's wrong.
A static library is more than just a convenient bundle of object files that shortens the command line: the linker only links in the object files it thinks are needed. An object file is needed if it defines some symbol that the linker is currently trying to resolve; if an object defines no symbol the linker needs, the linker ignores it and doesn't link it.
The normal way to build a program is to have the main program listed as object files on the command line: the linker always links every object file. This gives the linker a set of symbols which are defined (by the object files) and undefined (things the object files use but that aren't defined by them). Then the linker will go through the libraries on the link line and add in object files that resolve undefined symbols. These object files in turn may have other undefined symbols that the linker will need to resolve later, etc.
All I can guess is that by not having any object files on your link line, the linker doesn't see the object file in the library containing main as needed and so it doesn't link it.
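You can check both halves of that guess directly (a sketch, assuming GNU binutils): nm confirms that main really is defined inside the archive, and --trace shows which members the linker actually pulls in.
nm -A commandInterpreter_lib.a | grep ' T main'
g++ -Wl,--trace -o dockSIM_gcc_release commandInterpreter_lib.a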
I don't know why building with debug vs. non-debug makes a difference.
I didn't understand your comment about why you need to do things this way: even if the person who knew about this left, someone will need to learn about it to maintain the software.
In any event you have a few options.
One simple one is to use the ar program to extract the object file containing main and link it directly: in addition to adding objects to libraries, ar can also extract them. See the man page for ar.
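For example (the member name main.o is a guess; use ar t first to find the member that actually defines main):
ar t commandInterpreter_lib.a                 # list the members of the archive
ar x commandInterpreter_lib.a main.o          # extract the member containing main (name assumed)
g++ -o dockSIM_gcc_release main.o commandInterpreter_lib.a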
Another would be to look at the documentation for your compiler and linker and find flags that will force it to include the entire library, not just the unresolved symbols in the library. For the GCC/binutils linker, for example, you can pass -Wl,--whole-archive before the libraries you want to be fully included on the command line, then -Wl,--no-whole-archive after them to turn off that feature.
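With the Makefile from the question, that link rule would look something like this (sketch):
$(PROGNAME): $(FILENAMES)
	$(CXX) $(LDFLAGS) $(DEFINES) -o $(PROGNAME) -Wl,--whole-archive $(FILENAMES) -Wl,--no-whole-archive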
I'm trying to compile AODV for ARM Linux. I use a SabreLite board with kernel version 3.0.35_4.1.0. It's worth mentioning that I'm using OpenEmbedded to create the Linux distribution for my board.
The AODV source code (http://sourceforge.net/projects/aodvuu/) has a README file which gives some indications on how to install it on ARM, as also described here: http://w3.antd.nist.gov/wctg/aodv_kernel/kaodv_arm.html
I was able to update the Makefile so it can be used with post-2.6 kernel versions (as stated above, I have kernel 3.0.35_4.1.0).
So, basically, what I am trying to do is create a kernel module (say, file.ko) and then load it on the ARM board (with the insmod file.ko command).
To do that, I am using a cross compiler whose relevant environment variables are shown below:
echo $CC :
arm-oe-linux-gnueabi-gcc -march=armv7-a -mthumb-interwork -mfloat-abi=hard -mfpu=neon -mtune=cortex-a9 --sysroot=/usr/local/oecore-x86_64/sysroots/cortexa9hf-vfp-neon-oe-linux-gnueabi
echo $ARCH :
arm
echo $CFLAGS :
-O2 -pipe -g -feliminate-unused-debug-types
echo $LD :
arm-oe-linux-gnueabi-ld --sysroot=/usr/local/oecore-x86_64/sysroots/cortexa9hf-vfp-neon-oe-linux-gnueabi
echo $LDFLAGS :
-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -Wl,--as-needed
When I launch the make command, I get the following errors:
LD [M] /home/scof/script_emulation/AODV/aodv-uu/lnx/kaodv.o
arm-oe-linux-gnueabi-ld: unrecognized option '-Wl,-O1'
arm-oe-linux-gnueabi-ld: use the --help option for usage information
It states that there is something wrong with the linker invocation. This linker comes from the cross-compilation toolchain, and I normally shouldn't touch it.
Anyway, to get the above error fixed, I tried clearing LDFLAGS like this:
export LDFLAGS=''
After this, the make command works and I get the module kaodv.ko. But when I insert it on my ARM board to check it, it does not work; it actually freezes my terminal.
So my question is: do I have to specify LDFLAGS when compiling? Can clearing LDFLAGS have an impact on the generated kernel module?
Actually, I'm trying to understand where the problem might be, and the only thing that comes to mind is that maybe I should not change LDFLAGS manually. But if I don't change LDFLAGS, I get the unrecognized option error.
My related second question is: what are the possible values of LDFLAGS for ARM compilation?
Thanks !!
echo $LDFLAGS : -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -Wl,--as-needed
There are two common methods of invoking the linker in a GCC-based toolchain: one is to invoke it directly, the other is to use GCC as a front end that invokes the linker for you. In the latter case, options intended for the linker are prefixed with -Wl, so that GCC knows to pass them through rather than interpret them itself.
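For example, these two invocations pass the same option to the linker (illustrative only; a direct ld invocation also needs the startup files and libraries that the gcc driver normally adds for you):
gcc -Wl,--as-needed -o prog main.o      # via the gcc front end: the -Wl, prefix is stripped off
ld --as-needed -o prog main.o ...       # invoking ld directly: the option is given bare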
In your case, the error message from ld itself,
arm-oe-linux-gnueabi-ld: unrecognized option '-Wl,-O1'
indicates that your build system is passing LDFLAGS directly to the linker, and not by way of GCC.
Therefore, you should remove the -Wl, prefix and your LDFLAGS would instead be
-O1 --hash-style=gnu --as-needed --as-needed
(the duplication of the last argument is probably pointless but benign)
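Set in your environment, that would be for example (dropping the duplicate --as-needed, as noted above):
export LDFLAGS='-O1 --hash-style=gnu --as-needed'
make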
-O1 is an option that tells the linker to optimize. I believe it is relatively new, and your linker may be slightly out of date. Try removing -Wl,-O1; it should still work.
I know the link order in gcc is important for symbols to be correctly resolved, but now I am seeing a weird speed issue in the resulting executable. I am linking objects and archives as
g++ -m32 a.o b.o ar1.a ar2.a -lm -lpthread -lcrypt -lz -pthread -o afast.out
vs
g++ -m32 a.o ar1.a b.o ar2.a -lm -lpthread -lcrypt -lz -pthread -o aslow.out
The second version runs 2x slower. b.o is actually in the ar1.a archive, but ar2.a has references to it, so the linker complains and I had to add b.o explicitly. At first I was putting b.o all the way at the end of the link line to get the dependency order right, but then I figured out that it also works at the beginning, and is even faster there.
Has anyone experienced this? Is object-file link order treated differently from archive order? How can there be any speed impact?
I get similar results with gcc 3.4.6 and gcc 4.1.2.
There could be significant differences in execution speed depending on how the object code is laid out in memory. In general, you want hot functions to be close together, so they are not mixed up with cold functions, and so your Icache and TLB are not polluted by cold functions. It is however very unlikely that you are affected by this.
Most likely, you have some symbols that are resolved one way in the "fast" executable, and another way in the "slow" executable. The order of archive libraries and object files on command line matters, and you can end up pulling some object from ar1.a in the "fast" link, whereas you'll pull an equivalent object from ar2.a in the "slow" link. Perhaps there is some un-optimized code in ar2.a ?
Running nm -A ar1.a ar2.a and checking whether there are any symbols that occur in both would be the first step. You can then ask the linker to produce a link map (with -Wl,-Map,map.out) and check where these symbols are actually coming from in the two links.
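Concretely, something along these lines (a sketch of those two checks):
# function symbols defined in both archives
nm -A ar1.a ar2.a | grep ' T ' | awk '{print $3}' | sort | uniq -d
# produce a link map for each variant and compare where the duplicated symbols come from
g++ -m32 a.o b.o ar1.a ar2.a -lm -lpthread -lcrypt -lz -pthread -Wl,-Map,fast.map -o afast.out
g++ -m32 a.o ar1.a b.o ar2.a -lm -lpthread -lcrypt -lz -pthread -Wl,-Map,slow.map -o aslow.out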
In my homework I must use this command to compile my program:
gcc -o mtm_rentals -std=c99 -Wall -pedantic-errors -Werror -DNDEBUG mtm_ex2.c rentals.c list.c -L -lmtm
What I can change in that line are the files I write after -DNDEBUG. When I do this, gcc says that there are undefined references to specific functions. Those functions are declared in an .h file and are implemented in a given file called libmtm.a.
I concluded that it doesn't recognize libmtm.a, but our homework task says that the -lmtm flag (which is not declared anywhere) is supposed to link libmtm.a into the program.
What am I missing here? Am I supposed to implement the -lmtm flag somehow?
Thank you!
You are missing a . (single dot) after the -L.
-lmtm will link against a libmtm library; this is correct. It's not an -lmtm flag, it's a -l flag concatenated with mtm, the library you want to link against. This library is searched for in some predefined paths (like /usr/lib/) and additionally in the paths given by -L. Assuming libmtm lives in your current directory, you need to add that directory to -L, which is done by writing -L. (with a dot).
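So the full command becomes:
gcc -o mtm_rentals -std=c99 -Wall -pedantic-errors -Werror -DNDEBUG mtm_ex2.c rentals.c list.c -L. -lmtm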
I'm using gcc 4.3.4 and ld 2.20.51 in Cygwin under Windows 7. Here's a simplified version of my problem:
foo.o contains function foo_bar() which calls bar() in bar.o
bar.o contains function bar()
main.c calls functions in foo.o, but foo_bar() is not in the call chain
If I try to compile main.c and link it to foo.o, I get an undefined reference to _foo_bar error from ld. As you can see from my Makefile excerpt below, I've tried using flags for putting each function in its own section and having the linker discard unused sections.
COMPILE_CYGWIN = gcc -iquote$(INCDIR)
COMPILE = $(COMPILE_CYGWIN) -g -MMD -MP -Wall -ffunction-sections -Wl,-gc-sections $(DEFINE)
main_OBJECTS = main.o foo.o
main.exe : $(main_OBJECTS)
	$(COMPILE) -o main.exe $(main_OBJECTS)
The function foo_bar() is a short function that provides a connection between two networking layers in a protocol stack. Some programs don't need it, so they won't link in the other object files related to the upper layer of the stack. It's a small function, and it seems inappropriate to put it into its own .o file.
I don't understand why ld throws the error -- nothing is calling foo_bar(), so there's no need to include bar() in the final executable. A coworker has just told me that ld is not a "smart linker", so maybe what I'm trying to do isn't possible?
Unless the linker is from Cyberdyne Systems it has no way to know exactly which functions will actually be called. It only knows which ones are referenced. Even Skynet's linker can't predict what run-time decisions will be made or what will happen if you load a module dynamically at run-time and it starts calling various global functions [1].
So, if you link in module m and it references function f, you will need to link with whatever module has f.
[1] This problem is related to the Halting Problem and has been proven undecidable.
I hit a similar issue and found this page:
http://lists.gnu.org/archive/html/bug-gnu-utils/2004-09/msg00098.html
Highlight:
The GNU linker still works at .o file granularity.
GCC pulls in foo.o and then finds that bar() is undefined.
You'd better put foo_bar() into another .o file.
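In other words, split along these lines (a sketch using the names from the question):
# foo.c      - the functions main.c actually calls
# foo_bar.c  - just foo_bar(), the only caller of bar()
# bar.c      - bar() and the rest of the upper layer
main_OBJECTS  = main.o foo.o                      # programs that don't need the upper layer
stack_OBJECTS = main.o foo.o foo_bar.o bar.o      # programs that link the whole stack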