In our build system we have recently integrated the ASan tool by adding -fsanitize=address to CFLAGS and also to the link step that creates our shared library (.so) files.
Note: we are using the GCC 6.3 compiler.
We are able to build our code successfully, but at run time it fails with the following error:
==52215==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.
Here is my gcc command:
/local/common/pkgs/gcc/v6.3.0/bin/gcc -m32 -shared -o /local/testing/build/new_tool/asan_build/syn/verilog/libspd.so -Wl,-rpath=\$ORIGIN/lib -Wl,-rpath=\$ORIGIN/../lib -W1,-rpath=/local/common/gcc/v6.3.0/lib -fsanitize=address -L/local/testing/build/new_tool/asan_build/modules /local/testing/build/new_tool/asan_build/modules/silvpi.o /local/testing/build/new_tool/asan_build/modules/sypsv.o /local/testing/build/new_tool/asan_build/modules/cdnsv_tfs.o /local/testing/build/new_tool/asan_build/modules/libcore.o /local/testing/build/new_tool/asan_build/modules/vpi_user.o /local/testing/build/new_tool/asan_build/modules/libdenbase.a /local/testing/build/new_tool/asan_build/modules/libbdd.a -L/local/testing/build/new_tool/asan_build/syn/lib -L/local/testing/build/new_tool/asan_build/modules -L/home/local/outer/Linux/lib /local/testing/build/new_tool/asan_build/modules/vhpimodelfunc.o /local/testing/build/new_tool/asan_build/modules/vipcommonlib.a -lm -lc -ldenbase -lbdd -ldenbase -lviputil -llocalCommonMT_sh
I am able to build the library libspd.so successfully, but when we try to run it, it fails with the error above.
I can see the dependency list of libspd.so:
ldd /local/testing/build/new_tool/asan_build/syn/verilog/libspd.so
linux-gate.so.1 => (0x00279000)
libasan.so.3 => /local/pkgs/gcc/v6.3.0/lib/libasan.so.3 (0xf7175000)
libm.so.6 => /lib/libm.so.6 (0x0014e000)
libc.so.6 => /lib/libc.so.6 (0xf6f83000)
libcdsCommonMT_sh.so => /local/testing/build/new_tool/asan_build/verilog/../lib/liblocalCommonMT_sh.so (0x00178000)
libdl.so.2 => /lib/libdl.so.2 (0x00197000)
We are trying to run our application with 'xrun', which runs a simulation on top of my build that was built with ASan.
As the error says, "you should either link runtime to your application", so I tried adding the full ASan library path to LD_LIBRARY_PATH, but I am still facing the same issue.
I am not sure what is going wrong here. How can I resolve this issue? Any ideas? Thanks and regards!
You have several ways to work around this:
build the main executable with -fsanitize=address
get rid of /etc/ld.so.preload on your test machine
disable the check (needs a recent GCC) with export ASAN_OPTIONS=verify_asan_link_order=0; but you have to be sure that libraries from /etc/ld.so.preload do not intercept symbols important for ASan (e.g. malloc, free), otherwise things will start breaking
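If rebuilding the main executable is not an option, the error message's own suggestion of preloading the runtime is usually the quickest route. A minimal sketch, assuming the libasan.so.3 path from the ldd output above (adjust it to your GCC install):

# Preload the same ASan runtime your library was linked against, then run the simulator:
LD_PRELOAD=/local/pkgs/gcc/v6.3.0/lib/libasan.so.3 xrun <your usual arguments>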
Related
My OS is Kali, running GLIBC_2.32. I need to build a CGO application for a Debian 10 system, which is running GLIBC_2.28.
If I run go build with dynamic linking, the binary can't be run on the Debian system; it shows a GLIBC mismatch:
version `GLIBC_2.29` not found
version `GLIBCXX_3.4.29` not found
version `GLIBC_2.32` not found
So I tried static linking with CGO_LDFLAGS='-static' go build. A GUI library uses OpenGL and it shows this error:
# github.com/go-gl/gl/v3.2-core/gl
/usr/bin/ld: cannot find -lGL
After searching for a while I found that libGL is tied to the GPU driver and can't be statically linked.
Then I tried linking libGL.so dynamically and statically linking other libraries by:
CGO_LDFLAGS='-L/usr/lib/x86_64-linux-gnu -Bdynamic -lGL -static' go build
But I get the same error: "cannot find -lGL".
I don't want to use Docker; it's too heavy. And I don't think upgrading from Debian 10 to 11 solves the problem, since there may be other clients running different OSes in the future. What's the best solution?
Then I tried linking libGL.so dynamically and statically linking other libraries by:
CGO_LDFLAGS='-L/usr/lib/x86_64-linux-gnu -Bdynamic -lGL -static' go build
The -static flag tells the linker to perform a completely static link. It doesn't matter whether you put it before or after -lGL; the meaning is the same.
To link some libraries statically and others dynamically, see this answer; a rough sketch of the pattern follows below.
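This is only an illustration, not a fix for your case: -lsomelib is a placeholder for a library you actually want linked statically, while -lGL stays dynamic. The -Wl, prefix is what forwards -Bstatic/-Bdynamic to the linker instead of letting the gcc driver consume them:

CGO_LDFLAGS='-L/usr/lib/x86_64-linux-gnu -Wl,-Bstatic -lsomelib -Wl,-Bdynamic -lGL' go build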
That said, what you are trying to do is impossible: if you have any dynamic libraries in your link, then libc.so.6 must also be dynamically linked.
I don't want to use Docker; it's too heavy.
Too bad. You'll have to use Docker, set up a chroot environment, or build yourself a "Linux to older Linux" cross-compiler. Docker is likely the easiest to implement.
As mentioned, using -fsanitize=address during compilation or .so creation will automatically link the libasan.so library, right?
I am facing this issue:
==13640==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.
xrun: *E,ELBERR: Error during elaboration (status 1), exiting.
I found the same issue, and a fix for it, here: https://github.com/google/sanitizers/issues/796
First I tried passing the -fsanitize=address -static-libasan flags to my gcc compiler and linker to create the .so files. The resulting library file 'libsynsv.so' does not show the 'asan' library as a dependency in the ldd libsynsv.so output.
/folder/san/client/src/main/cvip/asan/Release/verilog/../lib/libviputil.so: undefined symbol: __asan_option_detect_stack_use_after_return.
Is there any issue with my GCC command? Why was my library not linked against ASan even though I compiled with -fsanitize=address?
Is there any issue with my GCC command?
Yes: to work properly, AddressSanitizer must intercept every call to malloc. Thus you cannot instrument a shared library with -fsanitize=address and load that library into a main executable that is itself not instrumented.
The resulting library file 'libsynsv.so' does not show the 'asan' library as a dependency in the ldd libsynsv.so output.
As #yugr said in comments, -static-libasan is ignored when linking a shared library.
Why was my library not linked against ASan even though I compiled with -fsanitize=address?
Because linking the ASan runtime into a shared library is not sufficient to make AddressSanitizer work.
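If you want to check what your shared library actually ended up with, a couple of standard diagnostics help (libsynsv.so is the library from the question; paths will differ on your machine):

# Does the library record libasan as a run-time dependency?
ldd libsynsv.so | grep asan

# Are the ASan symbols left undefined, to be resolved from the runtime at load time?
nm -D --undefined-only libsynsv.so | grep __asan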
I have installed another version of GLIBC and want to compile Golang code against this new GLIBC.
I have tried the following command for dynamic compilation:
go build --ldflags '-linkmode external -L /path/to/another_glibc/'
But when I run ldd on the resulting Go executable, it still shows that it is linked against the default glibc.
Output:
linux-vdso.so.1 => (0x00007fff29da7000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f128a93c000)
/lib64/ld-linux-x86-64.so.2 (0x00007f128ad06000)
Expected Output:
linux-vdso.so.1 => (0x00007fff45fa7000)
libc.so.6 => /another_glibc/lib/libc.so.6 (0x00007f5cd2067000)
/another_glibc/ld-2.29.so => /lib64/ld-linux-x86-64.so.2 (0x00007f5cd2420000)
What is missing here?
This is not an answer to the question, just a warning:
If you, like me, came here because you were compiling to deploy on another machine and got "version `GLIBC_2.32' not found" (or similar), but you were not intentionally using CGo, stop here.
Go on Linux dynamically links a few C libraries to get faster and smaller builds, but it can substitute pure-Go implementations for them, for example when cross-compiling.
You can export CGO_ENABLED=0 to disable cgo and get rid of those dependencies, as shown below.
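For example (the binary name below is whatever your build produces):

# Build without cgo; the result no longer depends on the system glibc:
CGO_ENABLED=0 go build ./...

# Verify: for a pure-Go binary this should report "not a dynamic executable":
ldd ./yourbinary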
Before running go build, set CGO_LDFLAGS:
Dynamic:
export CGO_LDFLAGS="-Xlinker -rpath=/path/to/another_glibc/lib"
Static:
export CGO_LDFLAGS="-Xlinker -rpath=/path/to/another_glibc/lib -static"
CGO_LDFLAGS lets you pass GCC-style linker flags to the Go build.
bitbyter's answer is not correct for the dynamic case because it requires that the system dynamic linker is compatible with the non-system glibc, which is unlikely. You can set the dynamic linker like this:
export CGO_LDFLAGS="-Xlinker -rpath=/path/to/another_glibc/lib64"
CGO_LDFLAGS="$CGO_LDFLAGS -Xlinker --dynamic-linker="/path/to/another_glibc/lib64/ld-linux-x86-64.so.2"
The dynamic linker name is architecture-specific, so you will have to look up the correct name for your target.
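To confirm that both settings took effect, you can inspect the resulting binary (./myapp is a placeholder for whatever go build produced):

# The requested program interpreter should be the alternate ld.so:
readelf -l ./myapp | grep interpreter

# The recorded run-time search path should point at the alternate glibc:
readelf -d ./myapp | grep -E 'RPATH|RUNPATH'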
I am building a shared object on Ubuntu 16.04 which uses libgomp. My goal is to make this final object as portable as possible, by static linking anything not normally in a base distribution (using docker ubuntu or alpine images as a reference baseline). I've been able to do this with my other dependencies pretty easily, but I'm hung up on libgomp.
I can link just fine with the -fopenmp option, and get a dynamic link:
# ldd *.so
linux-vdso.so.1 => (0x00007fff01df4000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9ba59db000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f9ba57b9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9ba55a3000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9ba5386000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9ba4fbc000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9ba6516000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9ba4db8000)
But if I naively add -static before -fopenmp I get:
relocation R_X86_64_32 against `__TMC_END__' can not be used when making a shared object; recompile with -fPIC
Fair enough; with my other dependencies I've just built from source to enable PIC and any other options I needed. When I try to do the same with libgomp, though, I'm not having any luck. I checked out gcc 5.5 from http://gcc.gnu.org/svn/gcc, and tried building from the gcc/libgomp folder. There is a configure script already generated, but running it returns:
./config.status: line 1486: ./../../config-ml.in: No such file or directory
OK, apparently this has something to do with multilib support, which I don't believe I need. Running ./configure --help shows that there is an --enable-multilib option with no obvious default, but setting --enable-multilib=no or --disable-multilib still returns the same error. I've also tried running autoreconf -fiv to regenerate the configure script, but I get this error:
configure.ac:5: error: Please use exactly Autoconf 2.64 instead of 2.69.
If I explicitly install and use autoreconf2.64, I get this one:
configure.ac:65: error: Autoconf version 2.65 or higher is required
What am I missing?
What I was missing was the fact that libgomp is not buildable separately from the rest of gcc. It was just a matter of going up a level and running the whole build with -fPIC enabled:
export CFLAGS="-O3 -fPIC"
export CXXFLAGS="-O3 -fPIC"
./configure --disable-multilib --enable-languages=c,c++
make
make install
That gave me a copy of libgomp.a in /usr/local/lib64, ready for linking into my shared object.
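For completeness, one way to pull that static archive into the shared object is to name libgomp.a explicitly instead of relying on -fopenmp's implicit -lgomp at link time. This is a sketch only, and the source/library names are placeholders:

# Compile with OpenMP and PIC, then link the static libgomp by full path:
gcc -c -O3 -fPIC -fopenmp mycode.c -o mycode.o
gcc -shared -o libmything.so mycode.o /usr/local/lib64/libgomp.a -lpthread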
Follow-up:
While this worked, at least in a test environment, after the comments above from Jim Cownie we decided to simply disable OpenMP support in our library for now.
I have to compile a program on a current Ubuntu (12.04). This program should then run on a cluster using CentOS with an older kernel (2.6.18). I cannot compile on the cluster directly, unfortunately. If I just compile and copy the program without any changes, I get the error message "kernel too old".
The way I understand it, the reason for this is not so much the kernel version but the version of libc that was used for compilation. So I tried to compile my program dynamically linking the libc from the cluster and statically linking everything else.
Research
There are already a lot of questions about this on SO but none of the answers really worked for me. So here is my research on that topic:
This question explains the reason for the Kernel too old message
This question is similar but more specialized and has no answers
Linking statically as proposed here didn't work because the libc is too old on the cluster. One answer also mentions to build using the old libc, but doesn't explain how to do this.
One way is to compile in a VM running an old OS. This worked but is complicated. I also read that you should not link libc statically.
Apparently it is possible to compile for a different libc version with the option -rpath but this did not work for me (see below)
Current state
I copied the following files from the cluster into the directory /path/to/copied/libs
libc-2.5.so
libgcc_s.so.1
libstdc++.so.6
and am compiling with the options -nodefaultlibs -Xlinker -rpath=/path/to/copied/libs -Wl,-Bstatic,-lrt,-lboost_system,-lboost_filesystem -Wl,-Bdynamic,-lc,-lstdc++,-lgcc_s
The output of ldd on the compiled binary is
mybin: /path/to/copied/libs/libc.so.6: version `GLIBC_2.14' not found (required by mybin)
mybin: /path/to/copied/libs/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by mybin)
linux-vdso.so.1 => (0x00007ffff36bb000)
libc.so.6 => /path/to/copied/libs/libc.so.6 (0x00007fbe3789a000)
libstdc++.so.6 => /path/to/copied/libs/libstdc++.so.6 (0x00007fbe37599000)
libgcc_s.so.1 => /path/to/copied/libs/libgcc_s.so.1 (0x00007fbe3738b000)
/lib64/ld-linux-x86-64.so.2 (0x00007fbe37bf3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbe37071000)
I'm somewhat confused by the error, because it uses the correct path (i.e. the libc from the cluster) but still complains about a missing glibc version. Running ldd on the cluster returns "not a dynamic executable", and running the binary results in the same two errors mentioned above. It also looks like there are other libraries involved (linux-vdso.so.1, ld-linux-x86-64.so.2 and libm.so.6). Should I use the older versions of those as well?
So now I have two main questions:
Is this even the correct approach here?
If yes: how do I link the old libc correctly?
See this answer.
Is this even the correct approach here?
No: you can't use mismatched versions of glibc the way your link command does. You used crt0.o and ld-linux.so from the new (system-installed) libc, but libc.so.6 from the old libc copied from the cluster. That is just not going to work.
Also, -rpath sets the DT_RPATH tag but doesn't tell the link-time linker to look there for libraries; you want -L for that (see the sketch below).
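To make the distinction concrete, here is a minimal sketch of a link line using both (object and binary names are placeholders; this does not make the overall mixed-glibc approach work):

# -L is searched by the linker at link time; -rpath is only recorded for the dynamic loader at run time.
g++ main.o -o mybin -L/path/to/copied/libs -Wl,-rpath=/path/to/copied/libs -lstdc++ -lgcc_s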