I build my own static library with gcc and ar like this.
g++ \
... list of library sources ... \
... a lot of -L -l -I -D options etc... \
-c \
&& ar crf ./lib/libpackager.a *.o
Then I use this library in my app. Currently I build the app like this.
g++ \
myApp.cpp \
... same -L -l -I options as in library ... \
-L. -lpackager \
-o myApp
It works, but it seems a little odd to me that I need to duplicate all the -l and -L options when building the app. Is it possible to include all these library dependencies inside the library itself? My goal is to build the app like this.
g++ myApp.cpp -L. -lpackager -o myApp
Transferring comments into an answer.
Specifying the -l and -L options when compiling to object files is irrelevant. Some versions of GCC warn about arguments that won't be used because they are link-time arguments, and no linking is performed when you include the -c flag.
The ar command doesn't know what to do with the C compiler's -l and -L arguments (it might have its own uses for the flags; one version of ar accepts but ignores -l).
So, you have to specify the dependencies when you link with the static library. That is the way life has been since the early 70s — that aspect hasn't changed yet.
Shared libraries can be built with the dependency information, but not static libraries.
As I understand it, I need to build a shared library and link it in a static way, right?
No. You either need to build and link a shared library as a shared library, or you need to accept that using a static library means you will need to specify other libraries on the command line when you use this library. There are systems to help manage such information; pkg-config is one such. AFAIK, you cannot link a shared library in a 'static way'.
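For illustration, pkg-config lets you record the extra flags once in a .pc file so consumers don't have to repeat them; the install prefix and the -lssl -lcrypto dependencies below are placeholders, not taken from your build:
# libpackager.pc (hypothetical sketch)
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: packager
Description: static packaging library
Version: 1.0
Cflags: -I${includedir}
Libs: -L${libdir} -lpackager
Libs.private: -lssl -lcrypto
The app link line then becomes (note --static, which pulls in Libs.private when linking against the static library):
g++ myApp.cpp $(pkg-config --cflags --libs --static packager) -o myApp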
Related
In my project I have makefiles which build Solaris kernel modules; they use gcc to compile the files but use ld to link all the .o files together into a kernel module. I am trying to include some coverage options like gcov (-fprofile-arcs) or tcov (-xprofile=tcov) in my build, hence I want to replace ld with gcc during linking as well.
But as soon as I replace ld with gcc, the builds start failing with a lot of "undefined symbol" errors, and even if I use some compile flags and get rid of these errors, the kernel module will not load into my Solaris kernel at all.
For example:
$ /usr/ccs/bin/ld -r -dy -Nstrmod/rpcmod -Nfs/nfs \
-Nmisc/rpcsec -Nmisc/klmmod -Nfs/zfs \
-o debug64/nfssrv \
debug64/nfs_server.o debug64/nfs_srv.o debug64/nfs3_srv.o \
debug64/nfs_acl_srv.o debug64/nfs_auth.o obj64/nfs41_srv.o \
obj64/ctl_ds_srv.o obj64/dserv_server.o
ld works fine, but with gcc I get the following errors:
/opt/gcc-4.4.4/bin/gcc -m64 -z muldefs \
-Lmod/rpcmod -Lfs/nfs -Lmisc/rpcsec \
-Lmisc/klmmod -Lfs/zfs \
-o obj64/nfssrv \
obj64/nfs_server.o obj64/nfs_srv.o obj64/nfs3_srv.o \
obj64/nfs_acl_srv.o obj64/nfs_auth.o obj64/nfs41_srv.o \
obj64/ctl_ds_srv.o obj64/dserv_server.o
Undefined first referenced
symbol in file
hz obj64/nfs_server.o
p0 obj64/nfs_server.o
nfs_range_set obj64/nfs41_srv.o
getf obj64/nfs_server.o
log2 obj64/nfs4_state.o
main /usr/lib/amd64/crt1.o
stoi obj64/ctl_ds_srv.o
dmu_object_alloc obj64/dserv_server.o
nvpair_name obj64/nfs4_srv.o
__dtrace_probe_nfss41__i__destroy_encap_session obj64/nfs41_srv.o
__dtrace_probe_nfssrv__i__dscp_freeing_device_entries obj64/ctl_ds_srv.o
mod_install obj64/nfs_server.o
xdr_faststatfs obj64/nfs_server.o
xdr_WRITE3res obj64/nfs_server.o
svc_pool_control obj64/nfs_server.o
Warning: the -L option only specifies a path where the linker searches for libraries; to specify a library you want to link with, you (also) have to use the -l option.
So a priori you have to add the options -lrpcmod -lnfs -lrpcsec -lklmmod -lzfs.
More details in GCC Linking Options
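As a generic sketch (the path and library name here are placeholders), the two options work together like this:
gcc main.o -L/opt/mylibs -lfoo -o prog    # searches /opt/mylibs (plus the default paths) for libfoo.so or libfoo.a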
By default, the GNU linker called through the gcc compiler driver will try to create a standard executable. Consequently, if you don't tell it otherwise, ld will use its default linker script, the C startup code and it will look for a main() routine and everything else that makes a valid executable.
I'm not too familiar with Solaris, but I would bet this will not be suitable for building kernel modules. I would expect kernel modules to at least require options like -ffreestanding and -nostdlib, and most likely a non-default linker script that's probably very different from the default one used for applications.
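If you still want gcc to drive a relocatable, module-style link, the usual trick is to suppress the startup files and pass the original ld options through -Wl,. This is only an untested sketch of the idea, reusing the options and object files from the ld command above:
/opt/gcc-4.4.4/bin/gcc -m64 -nostdlib -Wl,-r -Wl,-dy \
    -Wl,-Nstrmod/rpcmod -Wl,-Nfs/nfs -Wl,-Nmisc/rpcsec \
    -Wl,-Nmisc/klmmod -Wl,-Nfs/zfs \
    -o debug64/nfssrv \
    debug64/nfs_server.o debug64/nfs_srv.o debug64/nfs3_srv.o \
    debug64/nfs_acl_srv.o debug64/nfs_auth.o obj64/nfs41_srv.o \
    obj64/ctl_ds_srv.o obj64/dserv_server.o
With -nostdlib the driver stops adding crt1.o and the default libraries, which is where the undefined main and the other startup-related symbols come from.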
Even if you manage to link your kernel modules this way, I seriously doubt you will be finished. The gcov instrumentation routines most likely do not expect to live within a kernel driver but expect a proper C execution environment (e.g. it will at least expect to fopen() a file to fwrite() its findings). A kernel driver, however, does not have this comfort. You'll probably find yourself confronted with the problem to get the gcov data somehow out of your kernel modules.
Not saying this is not doable, but it certainly will be a lot of work.
When I compile with -fsanitize=address, GCC/Clang implicitly make use of an ASAN dynamic library which provides runtime support for ASAN. If your built library is dynamically loaded by another application, it is necessary to set LD_PRELOAD to include this dynamic library so that it gets loaded at application start-up time.
It is often not obvious which copy of libasan.so GCC/Clang expects to use, because there may be multiple copies of ASAN on your system (if you have multiple compilers installed.) Is there a reliable way to determine the location of the shared library you need to load?
You can use the -print-file-name flag:
GCC_ASAN_PRELOAD=$(gcc -print-file-name=libasan.so)
CLANG_ASAN_PRELOAD=$(clang -print-file-name=libclang_rt.asan-x86_64.so)
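You can then preload whichever one matches the compiler that built your code, for example (your_app is just a placeholder):
LD_PRELOAD="$GCC_ASAN_PRELOAD" ./your_app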
You could also extract the libasan path via ldd on a binary built with the same flags:
$ echo 'void foo() {}' | gcc -x c -fPIC -shared -fsanitize=address -
$ ldd a.out | grep libasan.so | awk '{print $3}'
/usr/lib/x86_64-linux-gnu/libasan.so.4
I want to know when I should use the ld linker instead of gcc. I just wrote a simple hello world in C++ which, of course, includes the iostream library. If I want to make a binary with g++ I just use:
g++ -o hello hello.cpp
and I've got my binary.
Later I tried to use the ld linker directly. To get an object file I use:
g++ -c hello.cpp. OK, that was easy, but the link command was horribly long:
ld -o hello.out hello.o \
-L /usr/lib/gcc/x86_64-linux-gnu/4.8.4/ \
/usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtbegin.o \
/usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtend.o \
/usr/lib/x86_64-linux-gnu/crti.o \
/usr/lib/x86_64-linux-gnu/crtn.o \
/usr/lib/x86_64-linux-gnu/crt1.o \
-dynamic-linker /lib64/ld-linux-x86-64.so.2 -lstdc++ -lc
I know for a fact that gcc uses ld under the hood.
Is using gcc better in all cases, or just in most cases? Please tell me something about the cases where the ld linker has an advantage.
As you mentioned, gcc merely acts as a front-end to ld at link time; it passes all the linker directives (options, default/system libraries, etc..), and makes sure everything fits together nicely by taking care of all these toolchain-specific details for you.
I believe it's best to consider the GNU toolchain as a whole, tightly integrated environment (as anyone with an experience of building toolchains for some exotic embedded platforms with, say, dietlibc integration will probably agree).
Unless you have some very specific platform integration requirements, or have reasons not to use gcc, I can hardly think of any advantage of invoking ld directly for linking. Any extra linker-specific option you may require could easily be specified with the -Wl, prefix on the gcc command line (if not already available as a plain gcc option).
It is mostly a matter of taste: you would use ld directly when the command-lines are simpler than using gcc. That would be when you are just using the linker to manipulate a small number of shared objects, e.g., to create a shared library with few dependencies.
Because you can pass options to ld via the -Wl option, often people will recommend just using gcc to manage the command-line.
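For instance, a trivial shared object with no dependencies can be produced either way, and extra linker options can still go through the driver (the names below are placeholders):
ld -shared -o libfoo.so foo.o                       # driving the linker directly
gcc -shared -o libfoo.so foo.o                      # letting the compiler driver call ld
gcc -shared -o libfoo.so foo.o -Wl,-Map,libfoo.map  # passing an ld-only option through -Wl,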
I tried to use the gcc command to link against a static library, but it didn't work.
If you want to use the -l flag to link your application like so:
gcc t.c -L. -lt1 -o t
Then your .a archive needs to have the filename libt1.a, not just t1.a.
When using -lsome_name to link in a library, the linker will look for a file named libsome_name.so or libsome_name.a.
If you do not want to rename your .a archive, you can also just do
gcc t.c t1.a -o t
Also, in the future please don't post an image of your code or commands; just copy-paste it as text into your post.
Libraries in POSIX environments (like Linux and OSX) are usually named following the pattern lib<name of library>.a. When you link with the library you either use the -l option with only <name of library>, and the linker automatically adds the lib prefix and .a suffix, or you don't use the -l option and instead give the whole file name verbatim.
Since you don't use the standard naming scheme for the libraries, you can't use the -l option and instead have to explicitly use the library file, similar to
$ gcc ... t1.a
If you want to use the -l option you have to name your library libt1.a and only use t1 when linking:
$ gcc ... -L. -lt1
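Putting it together, assuming the archive is built from an object file t1.o (a guess based on the names in the question):
$ ar rcs libt1.a t1.o
$ gcc t.c -L. -lt1 -o t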
I've got a binary "CeeloPartyServer" that needs to find libFoundation.so at runtime, on a FreeBSD machine. They're both in the same directory. I compile (on another platform, using a cross compiler) CeeloPartyServer using linker flag -rpath=$ORIGIN.
> readelf -d CeeloPartyServer |grep -i rpath
0x0000000f (RPATH) Library rpath: [$ORIGIN]
> ls
CeeloPartyServer Contents Foundation.framework libFoundation.so
> ./CeeloPartyServer
/libexec/ld-elf.so.1: Shared object "libFoundation.so" not found, required by "CeeloPartyServer"
Why isn't it finding the library when I try to run it?
My exact linker line is: -lm -lmysql -rpath=$ORIGIN.
I am pretty sure I don't have to escape $ or anything like that since my readelf analysis does in fact show that library rpath is set to $ORIGIN. What am I missing?
I'm assuming you are using gcc and binutils.
If you do
readelf -d CeeloPartyServer | grep ORIGIN
You should get back the RPATH line you found above, but you should also see some entries about flags. The following is from a library that I built.
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/../lib]
0x000000000000001e (FLAGS) ORIGIN
0x000000006ffffffb (FLAGS_1) Flags: ORIGIN
If you aren't seeing some sort of FLAGS entries, you probably haven't told the linker to mark the object as requiring origin processing. With binutils ld, you do this by passing the -z origin flag.
I'm guessing you are using gcc to drive the link though, so in that case you will need to pass flag through the compiler by adding -Wl,-z,origin to your gcc link line.
Depending on how many layers this flag passes through before the linker sees it, you may need to use $$ORIGIN or even \$$ORIGIN. You will know that you have it right when readelf shows an RPATH header that looks like $ORIGIN/../lib or similar. The extra $ and the backslash are just to prevent the $ from being processed by other tools in the chain.
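For example, in a Makefile (where make itself consumes one level of $), a sketch might look like:
LDFLAGS += -Wl,-z,origin -Wl,-rpath,'$$ORIGIN'
Make turns $$ORIGIN into $ORIGIN, and the single quotes stop the shell from expanding it, so the literal string $ORIGIN reaches the linker.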
Use \$ORIGIN if you are using chrpath, and \$\$ORIGIN if you are providing it directly in LDFLAGS.
Use ldd CeeloPartyServer to check whether the dependency .so is recorded with a leading ./ or not (i.e. libFoundation.so vs ./libFoundation.so).
In the common case it should be libFoundation.so, without the ./ prefix.
If the ./ prefix is there for some uncommon reason, make sure the CWD is the same folder as libFoundation.so, because $ORIGIN will not be applied in that case.
=======
For example:
g++ --shared -Wl,--rpath="\$ORIGIN" ./libFoundation.so -o lib2.so
would produce a library lib2.so whose recorded dependency is ./libFoundation.so, while
g++ --shared -Wl,--rpath="\$ORIGIN" libFoundation.so -o lib2.so
would record libFoundation.so instead.