Recently I tried using link-time optimization (LTO) but didn't get very far. On the first attempt to link an exe I get a load of
{path}/bin/ld: <artificial>:(.text.startup+0x136): undefined reference to `some_function`
errors.
I can't see anything much special about the functions. We do take their addresses, and also refer to them via macros.
This is on RHEL 7.6 with a home-rolled GCC 5.3 and binutils 2.34 (unfortunately I don't know how they were configured).
For a non-LTO build I see that one of the functions is in a read-only section (according to nm). I see the same symbol in a .a file, and from that I can find the .o file.
Going back to the LTO version, with objdump -D I see
.gnu.lto_{missing function}.7c974f7d7bc920e2
And that's about as far as I can get. My only idea is that this is some sort of ODR violation that doesn't show up otherwise.
EDIT:
I've made some progress. Some, if not all, of the symbols are in .rodata arrays of pointers to functions.
These are generated in multiple files using some nasty C macros, something like this:
// file1.c
#include "param1_def.h"
#include "pfn_table.c"
// file2.c
#include "param2_def.h"
#include "pfn_table.c"
and
// pfn_table.c
function_type const MAKE_NAME(NAME, _functions)[] =
{
MAKE_NAME(NAME, _write_file),
MAKE_NAME(NAME, _read_file),
// etc
};
Where NAME is a macro defined in the paramX_def.h headers (and is different each time) and MAKE_NAME is a macro that pastes together the final names.
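For reference, this is the classic two-level token-pasting idiom; a minimal sketch of how MAKE_NAME might be defined (the actual definitions in our code differ, and these names are purely illustrative):

// name_paste.h -- hypothetical reconstruction
#define PASTE(a, b) a##b
#define MAKE_NAME(name, suffix) PASTE(name, suffix)

// param1_def.h would then contain something like:
#define NAME fmt1

// so MAKE_NAME(NAME, _functions) expands to fmt1_functions, and the
// table entries expand to fmt1_write_file, fmt1_read_file, etc.

The indirection through PASTE is what forces the preprocessor to expand NAME before pasting.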
Related
I have a program that statically links glib and dynamically links a shared library that in turn also statically links the same glib. When I run the program I get a segfault. After debugging in gdb I found a global static variable defined in glib that is being set, and it had different values in one call trace than in a later call trace. I then noticed that the variable addresses were different as well. So it seems like there are two copies of the global static variable? Shouldn't the executable override the symbol from the shared library so there is only one global static variable in the executable during dynamic linking?
The other part of the story is that there is another executable that does the same as above, which seems to behave okay i.e., no segfault (haven't debugged to see if the different code paths load the same static variable). So perhaps this behavior is not deterministic.
The following issue is happening with gcc (8.3.1) on Linux (CentOS 7).
executableA (segfault)                 executableB (no segfault)
    |         \                            |         \
    | (static) \ (shared)                  | (static) \ (shared)
    |           \                          |           \
libglib-2.0.a   libA.so                libglib-2.0.a   libA.so
                  |                                      |
                  | (static)                             | (static)
                  |                                      |
             libglib-2.0.a                          libglib-2.0.a
So it seems like there are two copies of the global static variable?
Yes, that is expected.
Shouldn't the executable override the symbol from shared library so there is only one global static variable in the executable during dynamic linking?
A static variable by definition has internal linkage -- it is not accessible from any other compilation unit, and is not exported from the shared library(ies).
You would have to make this variable (and any other similar variables) non-static and exported as a dynamic symbol from both the executable and the shared library. Only then will the dynamic loader bind all references to this variable to a single instance.
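A minimal sketch of the difference (seq_counter is a made-up name, not an actual glib symbol):

/* today, inside the static library: internal linkage, so every
   module that links libglib-2.0.a (the executable and libA.so)
   carries its own private copy */
static int seq_counter;

/* what sharing would require: external linkage, with the symbol
   exported (dynamic, default visibility) from both the executable
   and libA.so, so the dynamic loader binds every reference to a
   single definition */
int seq_counter;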
Note that linking separate copies of libglib-2.0.a into shared libraries without controlling symbol visibility is asking for trouble. Whatever you hoped to achieve by doing that, you are not achieving.
there is another executable that does the same as above, which seems to behave okay
Ah, programming by coincidence. The mine you stepped on didn't explode, so it should be ok to continue doing that.
This may sound like a very noob question.
I'm trying to implement a UDP-based protocol in the Linux kernel. I was following the UDP-Lite protocol implementation as a reference.
Step 1
I created a new_protocol.c in net/ipv4/
This file has a function
void __init protocol_init(void) { /* code here */ }
I also used
#include "udp_impl.h"
in this file as I was using some functions from the UDP protocol
Step 2
I modified the file net/ipv4/udp_impl.h to include net/new_protocol.h
Step 3
I created the file include/net/new_protocol.h where I defined the function
void protocol_init(void);
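For reference, a minimal sketch of that header with the usual include guard (assuming it only carries this one declaration):

/* include/net/new_protocol.h */
#ifndef _NET_NEW_PROTOCOL_H
#define _NET_NEW_PROTOCOL_H

void protocol_init(void);

#endif /* _NET_NEW_PROTOCOL_H */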
Step 4
Finally, I called the function in net/ipv4/af_inet.c. I also added an include for net/new_protocol.h in this file.
Now when I try to build the kernel, I get an error saying
undefined reference to `protocol_init()'
What am I missing here? Is my way of including header files incorrect? Do I need to add something to the Makefile to pick up the new net/ipv4/new_protocol.c?
Do I need to add something to the Makefile to pick up the new net/ipv4/new_protocol.c?
Yes, you do. The kernel build system doesn't autodetect source files; all of them must be listed explicitly in the appropriate Makefile. In your case you need to modify net/ipv4/Makefile.
The Makefiles used by the kernel build system are described in Documentation/kbuild/makefiles.txt.
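For example, assuming the source file is net/ipv4/new_protocol.c and you want it built unconditionally, a one-line addition along these lines should do:

obj-y += new_protocol.o

If the protocol is meant to be configurable instead, the usual pattern is obj-$(CONFIG_NEW_PROTOCOL) += new_protocol.o with a matching entry in net/ipv4/Kconfig.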
I just needed to add new_protocol.o to the Makefile in net/ipv4/.
I have a question regarding an article about JNI at http://java.sun.com/developer/onlineTraining/Programming/JDCBook/jniexamp.html.
gcc -o libnativelib.so -shared -Wl,-soname,libnative.so \
    -I/export/home/jdk1.2/include \
    -I/export/home/jdk1.2/include/linux nativelib.c \
    -static -lc
I guess I am still a little confused with the function of '-o libnativelib.so' and '-Wl,-soname,libnative.so'.
'-o libnativelib.so' specifies the name of gcc's output file, libnativelib.so. From what I understand, it is the library name to load from the Java side, as shown in the article:
static {
System.loadLibrary("nativelib");
}
So what's the use of '-Wl,-soname,libnative.so'?
I found the following info in the ld option manual:
-soname=name
When creating an ELF shared object, set the internal DT_SONAME field to the specified name. When an executable is linked with a shared object which has a DT_SONAME field, then when the executable is run the dynamic linker will attempt to load the shared object specified by the DT_SONAME field rather than using the file name given to the linker.
So what does it mean? When the final executable is run, the dynamic linker will attempt to load ?? rather than ?? under the name of ??
This is useful for a system where one library can be present under several names, for example: libz.so, libz.so.1, libz.so.1.2.3. All these names are symlinks to one file, and the DT_SONAME inside it points to "libz.so.1". When you link your code against libz.so, a dependency on "libz.so.1" is recorded in the executable file. When your file is executed on another system which contains, say, libz.so.1.2.5, it will still work, because it looks for libz.so.1. But if the destination system has a much newer version, like libz.so.2.3.4, it will fail, because libz.so.2, but not libz.so.1, will be present.
The DT_SONAME field is used only by the linker. When you use System.loadLibrary(), you specify the file name yourself, and the value of this option is not used. If you want, you can implement a similar versioning scheme for your libnative, to ensure that your Java code always loads a compatible version.
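For example, a sketch of such a versioning scheme (the version numbers are made up, and the include paths are taken from the article's command):

gcc -shared -Wl,-soname,libnative.so.1 -o libnative.so.1.0 \
    -I/export/home/jdk1.2/include \
    -I/export/home/jdk1.2/include/linux nativelib.c
ln -s libnative.so.1.0 libnative.so.1    # the name recorded in DT_SONAME
ln -s libnative.so.1 libnative.so        # the name used at link time
readelf -d libnative.so.1.0 | grep SONAME    # verify the embedded soname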
From GCC-HOWTO:
Each library has a soname. When the linker finds one of these in a library it is searching, it embeds the soname into the binary instead of the actual filename it is looking at. At runtime, the dynamic loader will then search for a file with the name of the soname, not the library filename. Thus a library called libfoo.so could have a soname libbar.so, and all programs linked to it would look for libbar.so instead when they started.
In your case, the soname libnative.so is different from the file name libnativelib.so.
You'll have to symlink libnative.so to libnativelib.so to allow the dynamic loader to find the shared lib.
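Using the names from the article, something like:

ln -s libnativelib.so libnative.so

Anything that was linked against the library and recorded its soname will then resolve libnative.so at run time, while System.loadLibrary("nativelib") keeps opening libnativelib.so by file name, as described above.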
I have to instrument GCC for some purposes. The goal is to be able to track which GCC functions are called during a particular compilation. Unfortunately I'm not really familiar with the architecture of GCC, so I need a little help. I tried the following steps:
1) Hacking gcc/Makefile.in and adding the "-finstrument-functions" flag to T_CFLAGS.
2) I have an already implemented and tested version of the start_test and end_test functions. They are called from gcc/main.c, before and after the toplev_main() call. The containing file is linked into gcc (the object is added to OBJS-common and the dependency is defined later in gcc/Makefile.in).
3) Downloading prerequisites with contrib/download_prerequisites.
4) Executing the configuration from a clean build directory (on the same level with the source dir): ./../gcc-4.6.2/configure --prefix="/opt/gcc-4.6.2/" --enable-languages="c,c++"
5) Starting the build with "make all"
This way I ran out of memory, although I had 28 GB.
Next I tried removing the T_CFLAGS setting from the Makefile and passing -finstrument-functions to the make command: make CFLAGS="-finstrument-functions". The build succeeded this way, but when I tried to compile something it produced empty output files. (Theoretically end_test should have written its results to a given file.)
What am I doing wrong?
Thanks in advance!
Unless you specifically exclude it from being instrumented, main itself is subject to instrumentation, so placing calls to your start_test and end_test inside main is not how you want to do it. The 'correct' way to ensure that the file is opened and closed at the right times is to define a 'constructor' and 'destructor', and GCC automatically generates calls to them before and after main:
#include <stdio.h>
#include <stdlib.h>

void start_test (void)
__attribute__ ((no_instrument_function, constructor));
void end_test (void)
__attribute__ ((no_instrument_function, destructor));

/* FILE to write profiling information. */
static FILE *profiler_out;

void start_test (void)
{
  profiler_out = fopen ("profiler.out", "w");
  if (profiler_out == NULL)
    exit (-1);
}

void end_test (void)
{
  fclose (profiler_out);
}
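For the tracing itself, -finstrument-functions generates calls to the two hooks below around every instrumented function. A minimal sketch that logs into the same profiler_out file (the output format is just an example; the raw addresses can be mapped back to function names afterwards with addr2line or nm):

void __cyg_profile_func_enter (void *this_fn, void *call_site)
__attribute__ ((no_instrument_function));
void __cyg_profile_func_exit (void *this_fn, void *call_site)
__attribute__ ((no_instrument_function));

void __cyg_profile_func_enter (void *this_fn, void *call_site)
{
  if (profiler_out != NULL)
    fprintf (profiler_out, "enter %p (called from %p)\n", this_fn, call_site);
}

void __cyg_profile_func_exit (void *this_fn, void *call_site)
{
  if (profiler_out != NULL)
    fprintf (profiler_out, "exit  %p (called from %p)\n", this_fn, call_site);
}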
Footnotes:
Read more about the constructor, destructor and no_instrument_function attributes in the GCC manual's documentation of function attributes.
Read this excellent guide to instrumentation, on the IBM website.
Hey, I just downloaded openvrml from MacPorts
(port install openvrml)
Now I have a sample program (pretty_print.cpp from openvrml at SourceForge) that begins like this:
# ifdef HAVE_CONFIG_H
# include <config.h>
# endif
# include <openvrml/vrml97_grammar.h>
# include <openvrml/browser.h>
# include <fstream>
...
Then in Xcode I added the following path, and checked "recursive", for both the Header Search Path and the Library Search Path:
/opt/local/var/macports/software
And all '***.h file not found' errors disappeared, but now I have the following two:
complex.h 943 '__pow_helper' is not a member of std
c++locale.h 71 'vsnprintf' is not a member of std
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/complex: In function 'std::complex<_Tp> std::pow(const std::complex<_Tp>&, int)':
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/complex:943: error: '__pow_helper' is not a member of 'std'
Both errors come from system files.
I wonder what is causing these errors...
Can anyone advise me on how to use the openvrml samples on Macs?
thanks in advance.
I've had a similar problem. I set the "recursive" flag for the '/opt/local/include' path, and it pulled in some strange C++ headers from Boost compatibility includes.
In general, you do not want the "recursive" flag on your include paths.
Try unchecking "recursive" on your paths.
If you put "recursive" on a path containing Boost headers, you'll pick up some random Boost headers, which are likely designed for a different environment and/or a different compiler, instead of the standard C++ headers; for example, you'll include a TR1 header instead of the standard one. This is likely the cause of your problem (it happened to me too).
Just locate the directory that contains the headers you need and put only that in the header search path, instead of being lazy and using the "recursive" flag; there are a lot of header files that share a name and differ only in location.