I have the offset addresses of all symbols in a shared library (obtained with libelf, run on the library's own .so file). Now, at runtime, I need to calculate the absolute addresses of all those symbols, and for that I need the base address where the shared library is loaded, so I can do the calculation:
symbol_address = base_address + symbol_offset
How can a shared library get its own base address? On Windows I would use the parameter passed to DllMain; is there some equivalent on Linux?
On Linux, dladdr() on any symbol from libfoo.so will give you
void *dli_fbase; /* Load address of that object */
More info is in the dladdr(3) man page.
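A minimal sketch of that approach, assuming the code below is compiled into the shared library itself with g++ (which defines _GNU_SOURCE, needed for dladdr) and linked with -ldl; the symbol name is just a placeholder:
#include <dlfcn.h>     // dladdr(); link with -ldl
#include <cstdio>
static void any_symbol_in_this_lib() {}   // any function defined in this .so
void print_my_base() {
    Dl_info info;
    // dladdr() fills dli_fbase with the load address of the object that
    // contains the given address.
    if (dladdr((void *)&any_symbol_in_this_lib, &info) != 0) {
        std::printf("this library is loaded at %p\n", info.dli_fbase);
        // symbol_address = (char *)info.dli_fbase + symbol_offset;
    }
}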
Alternatively, dl_iterate_phdr can give you the load address of every ELF image loaded into the current process.
Both are GLIBC extensions. If you are not using GLIBC, do tell what you are using, so a more appropriate answer can be given.
This is an old question, but still relevant.
I found the example code from the Ubuntu man page below to be very useful. It will print all your shared libraries and their segments.
http://manpages.ubuntu.com/manpages/bionic/man3/dl_iterate_phdr.3.html
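A condensed sketch along the lines of that man page example (the callback runs once per loaded ELF object, and dlpi_addr is its load/base address):
#include <link.h>      // dl_iterate_phdr (GNU extension)
#include <cstdio>
// Called once per loaded ELF object: the main program, libc, your .so, etc.
static int callback(struct dl_phdr_info *info, size_t size, void *data) {
    std::printf("%s loaded at %p\n",
                info->dlpi_name[0] ? info->dlpi_name : "(main program)",
                (void *)info->dlpi_addr);
    return 0;   // return non-zero to stop the iteration early
}
void list_loaded_objects() {
    dl_iterate_phdr(callback, nullptr);
}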
After some research I managed to find a way to discover the load address of a library from the handle returned by dlopen(). It can be done with the following macro:
#define LIBRARY_ADDRESS_BY_HANDLE(dlhandle) ((NULL == dlhandle) ? NULL : (void*)*(size_t const*)(dlhandle))
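A hedged usage sketch (this relies on an undocumented glibc detail: the dlopen() handle points to glibc's internal struct link_map, whose first field l_addr is the load address, so it may break on other libc implementations; the library name is a placeholder):
#include <dlfcn.h>
#include <cstddef>
#include <cstdio>
#define LIBRARY_ADDRESS_BY_HANDLE(dlhandle) ((NULL == dlhandle) ? NULL : (void*)*(size_t const*)(dlhandle))
int main() {
    void *handle = dlopen("libfoo.so", RTLD_NOW);   // hypothetical library name
    if (handle != NULL) {
        std::printf("libfoo.so is loaded at %p\n", LIBRARY_ADDRESS_BY_HANDLE(handle));
        dlclose(handle);
    }
    return 0;
}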
Sorry if the title is not very clear. I am using MinGW with GCC 6.3.0 to build an x86 32-bit DLL on Windows (so far). I'll spare you the details of why I need hacky offsets between its sections accessible from code, so please don't ask whether it's useful or not.
So, if I can get the following testcase to work, I'm good. Here's my problem:
In a C++ file, I want to access a linker symbol as an absolute numeric value, not relocated, directly. Remember that I am building a 32-bit DLL which requires a .reloc section for relocations, but in this case I do NOT want relocation, in fact a relocation would screw it up completely.
Here's an example: retrieve the offset of, say, __imp__MessageBoxW#16 relative to __IAT_start__. In case you don't know what they are: __imp__MessageBoxW#16 is the relocated pointer to the actual function at runtime, and __IAT_start__ is a linker symbol in the default script file. Here's where it is defined:
.idata BLOCK(__section_alignment__) :
{
/* This cannot currently be handled with grouped sections.
See pe.em:sort_sections. */
KEEP (SORT(*)(.idata$2))
KEEP (SORT(*)(.idata$3))
/* These zeroes mark the end of the import list. */
LONG (0); LONG (0); LONG (0); LONG (0); LONG (0);
KEEP (SORT(*)(.idata$4))
__IAT_start__ = .;
KEEP (SORT(*)(.idata$5))
__IAT_end__ = .;
KEEP (SORT(*)(.idata$6))
KEEP (SORT(*)(.idata$7))
}
So far, no problem. Because GAS doesn't allow me to "subtract" two externally defined symbols (both symbols are defined by the linker), I have to define the difference in the linker script, so at the end of the linker script I have this:
test_symbol = ABSOLUTE("__imp__MessageBoxW#16" - __IAT_start__);
Then in C++ I use this little inline asm to retrieve this relative difference which is supposed to be a fixed value once linked:
asm("movl $test_symbol, %0":"=r"(var));
Now var should contain that fixed number right? Wrong!
Because test_symbol is an "undefined" symbol as far as the assembler is concerned, it gets relocated. Or at least I don't know the real reason, but I tried many things to force it to be an "absolute constant value symbol" instead of a "relocated symbol", to no avail. Even editing the linker script with things like LD_FEATURE("SANE_EXPR") and others doesn't work at all.
Its value is correct only if the DLL does not get relocated.
You see, either GNU LD or the assembler adds an entry in the .reloc section for that movl instruction, which is WRONG!
Is there a way to force it to treat an external/undefined symbol as a fixed CONSTANT and apply no relocation to it whatsoever? Basically, omit it from the .reloc section.
I am going crazy with this; please tell me there's something easy I overlooked. I've searched for hours!
In other words, is there a way to use a linker symbol from within inline asm/C++ without having it relocated at all? No entry in the .reloc section or anything; basically the same as a constant like $1234, so that if the DLL gets loaded at another base address, the constant stays the same every time.
UPDATE: I forgot about this question but decided to post an update, since it seems this is likely not possible, as nobody even commented. For anyone else in the same boat, I presume this is a limitation of the COFF object format itself: external symbols are implicitly relocated, and there doesn't seem to be a way around that.
I didn't "fix" it the way I wanted, I did it in a very hacky way though. If anyone is interested, here's my ugly "hack":
First, I put a special "custom" instruction in the inline assembly where I reference the external symbol from C++. This "custom" instruction consists of a placeholder instruction that grabs the symbol (a normal x86 instruction with a dummy constant, e.g. 1234) plus a way to identify it. Then I let GCC generate the assembly files (.S files), parse that assembly with a simple script, and when I find the "custom" instruction I insert a label for the linker (making it .global) and at the same time add a directive to a custom "on-the-fly" generated linker script that gets included at the end of my main linker script.
This places data in a temporary section in the resulting DLL with absolute offsets to the custom instruction that I need, but without relocation.
Next, I parse the binary DLL itself, in particular the temporary section added by this hack. I take the offsets from there, convert them to file offsets, and modify the DLL's .text section directly where those offsets point (remember those placeholder instructions? their 1234 immediate constants get replaced with the respective non-relocated value from the linker). Then I strip the temporary section from the DLL, and it's done. Of course, all of this is done automatically by a helper program and script.
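To give a rough idea of the first step, the placeholder looked something like the hedged sketch below (not my exact code; the label name and the 1234 marker are made up for illustration):
static unsigned int get_iat_offset() {
    unsigned int var;
    // "iat_offset_patch_site" is a made-up marker label; the script finds it
    // in the generated assembly, marks it .global, and records its address in
    // the temporary section.  The post-link tool then overwrites the 1234
    // placeholder immediate with the real, non-relocated linker constant.
    asm volatile("iat_offset_patch_site: movl $1234, %0" : "=r"(var));
    return var;
}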
It's an insane hack, but it works, and it's fully automatic now that I have it going. If my assumption is correct that COFF doesn't support non-relocated external symbols, then it's really the only way to use linker constants from C++ without them being relocated, which would be a disaster.
gdb provides a command "print localx" which prints the value stored in the localx variable. So it basically must be using the symbol table to find the mapping (localx -> addressx on the stack). I am unable to understand how this mapping can be created.
What I tried
I studied the intermediate temporary files of gcc using the -save-temps option, and observed that a local variable local1 was mapped to a symbol name "LASF8". However, the objdump utility did not show this symbol name.
Context:
I am working on a project which requires building a Pin tool to print the accesses of local variables. Given a function, I would like to say that this address corresponds to this variable name. This requires reading the symbol table to map an address to a symbol table entry. GDB does the exact reverse mapping, hence I would like to understand how it does it.
The symbol table is contained in the debugging information. This debugging information is emitted by gcc -g. gdb reads the debugging information to get symbolic information, among other things.
Typically the debugging information is in DWARF format. See http://www.dwarfstd.org/ for the specification.
You can also see DWARF more directly using readelf. For example readelf -wi will show the main (".debug_info") debugging information for an ELF file.
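For example, with a hedged little test case like the one below, compiling with -g and running readelf -wi on the result shows a DW_TAG_variable entry for local1, carrying a DW_AT_name and a DW_AT_location expression (typically something like DW_OP_fbreg plus an offset from the frame base):
// example.cpp -- build with: g++ -g example.cpp
int main() {
    int local1 = 42;   // shows up in .debug_info as a DW_TAG_variable "local1"
    return local1;
}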
Note that doing the mapping in reverse -- that is, assigning a name to every stack slot -- is not entirely easy. First, not every stack slot will have a name. This is because the compiler may spill temporaries to the stack. Second, many locals will have DWARF location expressions to represent their location. This means you'll need to write an expression evaluator (not hard but also not trivial); you could conceivably (unlikely in practice but possible in theory) run into expressions which cannot be evaluated without a real stack frame; and finally the names will therefore generally only be valid at a given PC.
I believe there's a feature request in gdb bugzilla to add this feature to gdb.
I have recently written a simple test program to call a few private functions in a DLL that ships with Windows. Since private functions are not exported, their addresses cannot be found with GetProcAddress(), and I am looking for a way to achieve the same result without it.
With IDA Pro, I took note of the offset of an exported function, DllRegisterServer in this case. Since this function is exported, I can query its address with GetProcAddress. By knowing its offset from the beginning of the .text section of the PE executable, I can then dynamically query the address of the .text section after loading the DLL with LoadLibrary.
Again, still using IDA Pro, I took note of the offsets of each of the private functions of interest. Once I have the address of the .text section, I only need to add the offset for the known private functions and I now have the correct address to call those functions. There is only one problem, however: since the offset has been hardcoded, this will only work with this exact version of the DLL.
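To illustrate, my current hack looks roughly like the sketch below (the DLL name, RVAs, and function signature are placeholders; the real values came from IDA Pro and only match one exact build; here the exported DllRegisterServer is used as the reference point, which is equivalent to using the start of .text):
#include <windows.h>
#include <cstdint>
// Hypothetical signature of the private function being called.
typedef long (__stdcall *PrivateFn)(void);
int main() {
    HMODULE mod = LoadLibraryW(L"target.dll");               // hypothetical DLL
    if (mod == NULL) return 1;
    // Offsets noted in IDA Pro -- hypothetical, valid for one exact build only.
    const std::uintptr_t rvaDllRegisterServer = 0x1A2B0;
    const std::uintptr_t rvaPrivateFunction   = 0x1F400;
    std::uintptr_t exported =
        (std::uintptr_t)GetProcAddress(mod, "DllRegisterServer");
    std::uintptr_t base = exported - rvaDllRegisterServer;   // module load base
    PrivateFn fn = (PrivateFn)(base + rvaPrivateFunction);
    fn();   // only safe if the signature and calling convention really match
    return 0;
}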
I know there are many tools that can read PE files and give the addresses of private functions. I assume debug symbols are used to obtain the names of those private functions. What I'm looking for is a simple library that can read a PE file and its debug symbols and allow me to query the address of a private function by name.
Does anybody know of such a library that can help find the addresses of private functions by name, making use of debug symbols when available?
gperftools documentation says that libprofiler should be linked into a target program:
$ gcc myprogram.c -lprofiler
(without changing a code of the program).
And then program should be run with a specific environment variable:
CPUPROFILE=/tmp/profiler_output ./a.out
The question is: how does libprofiler have a chance to start and stop the profiler when it is merely loaded, but its functions are never called?
There is no constructor function in that library (proof).
All occasions of "CPUPROFILE" in library code do not refer to any place where profiler is started.
I am out of ideas; where should I look next?
As per the documentation on the linked webpage, under Linking the library, the -lprofiler step is the same as linking against the shared object file (or preloading it with the LD_PRELOAD option).
The shared object file isn't the same as just the header file. The header file contains function declarations which are looked up when compiling a program, so the names of the functions resolve, but they are just names, not implementations. The shared object file (.so) contains the implementations of the functions. For more information, see the following StackOverflow answer.
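As a rough illustration of that split (the two declarations below are the public gperftools profiler entry points; normally you would get them from <gperftools/profiler.h> rather than writing them yourself):
// What the header gives you: declarations only, so names resolve at compile time.
extern "C" int  ProfilerStart(const char* fname);
extern "C" void ProfilerStop(void);
// The definitions (the actual machine code) live in libprofiler.so, which is
// why the program must be linked with -lprofiler or run with LD_PRELOAD.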
The source file /trunk/src/profiler.cc has a CpuProfiler constructor on Line 182 that checks whether profiling should be enabled based on the CPUPROFILE environment variable (Line 187 and Line 230).
It then calls the Start function on Line 237. As per the comments in this file, the destructor calls the Stop function on Line 273.
To answer your question, I believe Line 132, CpuProfiler CpuProfiler::instance_;, is the line where the CpuProfiler is instantiated.
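In other words, the trick is the usual C++ static-initialization idiom. A hedged, simplified sketch (not the actual gperftools source) of what happens when the library is loaded:
#include <cstdio>
#include <cstdlib>
class CpuProfilerSketch {
 public:
  CpuProfilerSketch() {
    // Runs when the library is loaded, before main(), because a static
    // instance of this class is defined below.
    const char* path = std::getenv("CPUPROFILE");
    if (path != nullptr) {
      std::printf("would start profiling into %s\n", path);  // i.e. Start()
    }
  }
  ~CpuProfilerSketch() {
    // Runs at unload/exit; this is where Stop() and the profile flush happen.
  }
};
static CpuProfilerSketch instance_;   // analogous to CpuProfiler::instance_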
This lack of clarity in the gperftools documentation is a known issue; see here.
I think the profiler gets initialized with the REGISTER_MODULE_INITIALIZER macro seen at the bottom of profile-handler.cc (as well as in heap-checker.cc, heap-profiler.cc, etc.). This macro, from src/base/googleinit.h, defines a dummy static object whose constructor is called when the library is loaded. That dummy constructor then calls ProfileHandlerRegisterThread(), which uses a pthread_once variable to initialize the singleton object (ProfileHandler::instance_).
REGISTER_MODULE_INITIALIZER simulates the module_init()/module_exit() functions seen in Linux loadable kernel modules.
(my answer is based on the 2.0 version of gperftools)
I have a static library (*.lib) created using MSVC on Windows. The size of the library is, say, 70 KB. Then I have an application which links this library, but the size of the final executable (*.exe) is 29 KB, less than the library. What I want to know is:
Since the library is statically linked, I was thinking it should add directly to the executable size, so the final exe size should be more than that. Does the Windows exe format also do some compression of the binary data?
How is it on Linux systems? That is, how do the sizes of libraries on Linux (*.a/*.la files) relate to the size of a Linux executable?
-AD
A static library on both Windows and Unix is a collection of .obj/.o files. The linker looks at each of these object files and determines whether it is needed for the program to link. If it isn't needed, then the object file won't get included in the final executable. This can lead to executables that are smaller than the library (see the small sketch after this answer).
EDIT: As MSalters points out, on Windows the VC++ compiler now supports generating object files that enable function-level linking, e.g., see here. In fact, edit-and-continue requires this, since the edit-and-continue needs to be able to replace the smallest possible part of the executable.
There is also additional bookkeeping information in the .lib file that is not needed in the final executable; this information helps the linker find the code to actually link. In addition, debug information may be stored in the .lib file but not in the .exe file (I don't recall where debug info is stored for objects in a lib file; it might be somewhere else).
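Going back to the per-object-file selection described above, a small sketch (with hypothetical file names): if the archive contains two object files and the program only references one of them, the other is simply not pulled in.
// used.cpp -> used.o (archived into mylib.lib / libmylib.a)
int used_fn() { return 1; }
// unused.cpp -> unused.o (also in the archive, but never referenced)
int unused_fn() { return 2; }
// main.cpp -- only used_fn is referenced, so the linker pulls used.o out of
// the archive and leaves unused.o (and its code) out of the final executable.
int used_fn();
int main() { return used_fn(); }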
The static library probably contains several functions which are never used. When the linker links the library with the main executable, it sees that certain functions are never used (and that their addresses are never taken and stored in function pointers), so it just throws away that code. It can also do this recursively: if function A() is never called, and A() calls B(), but B() is never otherwise called, it can remove the code for both A() and B(). On Linux, the same thing happens.
A static library has to contain every symbol defined in its source code, because it might get linked into an executable which needs just that specific symbol. But once it is linked into an executable, we know exactly which symbols end up being used and which ones don't, so the linker can trivially remove unused code, trimming the file size by a lot. Similarly, any duplicate symbols (anything that's defined in both the static library and the executable it's linked into) get merged into a single instance.
Disclaimer: It's been a long time since I dealt with static linking, so take my answer with a grain of salt.
You wrote: "I was thinking it should add directly to the executable size and the final exe size should be more than that."
Naive linkers work exactly this way - back when I was doing hobby development for CP/M systems (a LONG time ago), this was a real problem.
Modern linkers are smarter, however - they only link in the functions referenced by the original code, or as required.
In addition to the current answers, the linker is allowed to remove function definitions if they have identical object code; this is intended to help reduce the bloating effects of templated code.
#All: Thanks for the pointers.
#Greg Hewgill - Your answer was a good pointer. Thanks.
The answer I found was as follows:
1.) During library building, if the option "Keep Program Debug Database" in MSVC (or something similar) is ON, the library will carry this debug info, bloating its size.
But when I statically link that library and create an executable, the linker strips all that debug info from the library before generating the exe, hence the exe size is less than that of the library.
2.) When I disabled the option "Keep Program Debug Database", I got a library whose size was smaller than the final executable, which is what I thought was normal in most situations.
-AD