GNU binutils: move compiled functions to their own sections

GCC has a useful option, -ffunction-sections, that places each function into its own section at compile time. It's useful for optimizing away unused functions when linking the binary (--gc-sections).
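For reference, the usual recipe with these flags looks like this (a sketch; file names are placeholders, and -fdata-sections is the analogous option for data):

gcc -ffunction-sections -fdata-sections -c foo.c -o foo.o
gcc -Wl,--gc-sections foo.o -o foo

Each function then lands in its own .text.<name> section, which is what lets --gc-sections discard the unreferenced ones.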
I have a static library whose source I don't have. The library has hundreds of functions, but they are all placed inside a single .text section. Code size is critical for my application (in fact, it's an embedded ARM application), and the GNU linker cannot optimize the unused functions away because they are all in a single section.
How can I move the functions from a compiled object file to their own sections?


How does the gcc linker get the size of a function?

While studying the ELF format, I found that the object file has a symbol corresponding to each function, and the corresponding symbol table entry has an st_size value, which holds the size of the function.
The problem is that the executable file is created successfully even when I change the st_size of a specific function in the object file and link it. The following is the test code I used.
// In main.c
void myprintf(const char *str);

int main(void)
{
    myprintf("TEST");
}

// In log.c
#include <stdio.h>

void myprintf(const char *str)
{
    printf("%s", str);  /* avoid passing str as a format string */
}
In the code above, I changed the st_size value of the myprintf function in the log.o file and then linked log.o and main.o. By default, the st_size value was 0x13; I tested changing it to 0x00, and then to 0x40. In both cases the myprintf function in the resulting a.out still works fine. How does the linker determine the size of each function?
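For reference, the recorded size can be checked with readelf; before my edit it showed something like this (illustrative output; 0x13 = 19 bytes):

readelf -s log.o | grep myprintf
     9: 0000000000000000    19 FUNC    GLOBAL DEFAULT    1 myprintf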
Well, first I'd like to begin with an old saying: humanity is more likely to find a theory of everything and unify quantum mechanics with general relativity than to understand the optimizations and decision tree of a linker.
Back to business: I've played with this on my machine and came to the conclusion that the only reasonable explanation is that the linker doesn't truly need the size of a function in order to combine raw machine instructions from different compilation units into a single executable. Let's discuss why.
Say you have two compilation units, each containing three consecutive functions. Why would one need to know the size of each function? Isn't the fixed, resolved virtual address that the linker grants to each function enough for relocation? The true answer is: it is sufficient to have nothing but the offset of a function within the object file to link different compilation units into one executable.
However, with that said, in a format such as ELF there is no ready-made offset for a function's machine code within a compilation unit; it has to be calculated from the offset of the section within the ELF file and the sizes of the symbol entries in the section pointed to by the symbol table. Which simply means: if, as I said earlier, you had two compilation units with three functions each and you corrupted a size entry in the symbol table, then as the linker attempted to resolve the compilation units into a single executable it would simply corrupt it, and your executable would quickly cause segfaults. I've tried this at home, and these are the results I received:
When corrupting a symbol table size entry in a compilation unit with one function, nothing happens, as the entire text section's size is exactly that function's size, so the linker has no problem resolving it.
When doing the same for compilation units with three functions, it corrupts my executable, as the linker starts copying corrupted offsets of text from one compilation unit into the final executable.
Generally speaking, if you used an executable format that gave the linker an immediate offset for each function within the object file, with no calculation from symbol size and section offset needed, you'd probably get the same (working) result even with more than one function per compilation unit, unless the linker performs some sanity test. In my opinion, the only other reason a linker would need the size is to clean a section of redundant functions or variables that nothing else references (link-time optimization), which then requires recalculating relocation offsets for the other referenced functions in that compilation unit, or recalculating relative jumps within it.
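For reference, the symbol table entry whose size field we've been corrupting has this layout (from the standard <elf.h>):

typedef struct {
    Elf64_Word    st_name;   /* symbol name (string table index) */
    unsigned char st_info;   /* type and binding */
    unsigned char st_other;  /* visibility */
    Elf64_Section st_shndx;  /* index of the section the symbol lives in */
    Elf64_Addr    st_value;  /* in a relocatable file, the offset within that section */
    Elf64_Xword   st_size;   /* size in bytes, e.g. your 0x13 */
} Elf64_Sym;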
Hope this answers your question; I'd be more than glad to help if you'd like a deeper demonstration.

Compiling and linking NASM and 64-bit C code together into a bootloader [duplicate]

I made a very simple one-stage bootloader that does two main things: it switches from 16-bit real mode to 64-bit long mode, and it reads the next few sectors from the hard disk, which hold the basic kernel.
For the basic kernel, I am trying to write code in C instead of assembly, and I have some questions regarding that:
How should I compile and link the nasm file and the C file?
When compiling the files, should I compile for 16-bit or 64-bit, given that I am switching from 16 to 64 bits?
How would I add more files from either C or assembly to the project?
I rewrote the question to make my goal clearer, so if source code is needed, tell me and I'll add it.
Code: https://github.com/LatKid/BasicBootloaderNASMC
Since I am also linking a NASM file with the C file, it spits out an error for the NASM object file: relocation R_X86_64_16 against `.text' can not be used when making a shared object; recompile with -fPIC
One of your issues is probably inside that NASM assembler file (which you don't show in the initial version of your question). It should contain only position-independent code (PIC), so it must not produce an object file with an R_X86_64_16 relocation. (In your edited question, mov sp, main is obviously not PIC; you should use the instruction-pointer-relative data access of x86-64. Also, you cannot define main both in your NASM file and in a C file, and you cannot mix 16-bit and 64-bit code when linking.)
Study ELF, then the x86-64 ABI to understand what kind of relocations are permitted in a PIC file (and what constraints an assembler file should follow to produce a PIC object file).
Use objdump(1) & readelf(1) to inspect object files (and shared objects and executables).
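For example, to list the relocations your NASM object actually contains (the object name is a placeholder):

readelf -r boot.o

Any R_X86_64_16 entry in that listing points at an instruction you need to rewrite in a PIC-compatible way.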
Once your NASM code produces a PIC object file, link with gcc, and use gcc -v to understand what happens under the hood (you'll see that extra libraries and object files, including the crt0 ones, -lgcc, and -lc, are used).
Perhaps you need to understand compilation and linking better. Read Levine's book Linkers and Loaders, Drepper's paper How To Write Shared Libraries, and, about compilation, the Dragon Book.
You might want to link with gcc but use your own linker script. See also this answer to a very related question (probably with motivations similar to yours); the references there are highly relevant for you.
PS. Your question lacks motivation and context (it has no MCVE but needs one) and might be an XY problem. I guess you are on Linux. I strongly recommend publishing your actual full code, even buggy (perhaps on GitHub or GitLab or elsewhere), as free software to get potential help. I also strongly recommend using an existing bootloader (probably GRUB) and focusing your efforts on your OS code (which should be published as free software, to get some feedback).

Where is the _start symbol likely to be defined?

I have some startup assembly for RISC-V which begins the .text section at .globl _start.
I know what this is, as a disassembly shows me the address, but I cannot see where it is defined. It's not in the linker script, and a grep through the build directories shows it appearing in various binary files, but I cannot find a definition.
I am guessing this appears in a file somewhere as a function of the architecture, but can anyone tell me where? (This is all being built with the RISC-V GNU cross-compilers on Linux.)
Unless you control it yourself, there is usually (at least in the GNU tools world) a file called crt0.s, or perhaps some other name. There should be one per architecture, since it is written in assembly language. It is the default bootstrap: it zeros .bss, copies .data as needed, and so on.
I don't remember whether it is part of the C library (glibc, newlib, etc.) or added later by the folks who build a toolchain targeting some specific platform.
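Either way, you can ask the toolchain where its crt0.o lives and confirm that it defines _start; a sketch assuming a riscv64-unknown-elf toolchain (the path shown is illustrative):

riscv64-unknown-elf-gcc -print-file-name=crt0.o
/opt/riscv/riscv64-unknown-elf/lib/crt0.o
riscv64-unknown-elf-nm /opt/riscv/riscv64-unknown-elf/lib/crt0.o | grep _start
0000000000000000 T _start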
It is certainly not required, but it is not uncommon for _start to be the label at the beginning of the binary; it is certainly supposed to be the entry point. So if you have an operating system/loader that consumes a binary format with symbols present (ELF, etc.), it can load the binary and, instead of branching to the first address, branch to the entry point.
So _start is merely defined as being at the start of the .text section, and the address of the .text section is defined in the linker script.
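For completeness, a minimal linker script tying the two together might look like this (a sketch; the load address is illustrative and platform-dependent):

ENTRY(_start)          /* mark _start, defined in crt0, as the entry point */
SECTIONS
{
  . = 0x80000000;      /* a common RISC-V RAM base; yours may differ */
  .text : { *(.text*) }
  .data : { *(.data*) }
  .bss  : { *(.bss*)  }
}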

How does gcc's link-time optimisation (-flto flag) work?

I understand the idea more or less: when separate modules are compiled down to assembly code, functions calling each other have to strictly respect the calling convention, which kills the opportunity for many optimisations across separately compiled modules.
For instance, if I have function A which calls function B which calls function C, all three in their own separate source files, it becomes possible to allocate registers across the functions so that no register saving on the stack is necessary at all during those calls. With traditional compile-assemble-link builds this is not possible, as the caller-saved and callee-saved registers are imposed by the calling convention.
Another optimisation is to inline functions which are called only once. Previously this was possible only if the function was local, but thanks to link-time optimisation it's now possible even if the function is in another source file, as in the sketch below.
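A minimal sketch of that situation (hypothetical files): with -flto, square() can be inlined into main() even though it lives in another translation unit.

// a.c
int square(int x) { return x * x; }

// main.c
extern int square(int);

int main(void)
{
    return square(7);
}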
Now, if I compile with both -flto and -S flags, I see that instead of normal assembly instructions, gcc generates an encoded representation of the program, such as this:
.section .gnu.lto_.inline.c3c5e6ef8ec983c,"dr0"
.ascii "x\234mQ;N\303#\20}\273\353\17\370C\234\20\242`\"!Q\20\11Ah\322&\25\242\314\231|\4\32\220\220(,$.#\205D\343\3P Z.\341Tn\231\35\274\31L\342\342\355\314\274\371<\317\30\354\376\356\365\357\333\7\262"
.ascii "1\240G\325\273\202\7\216\232\204\36\205"
.ascii "8\242\370\240|\222"
.ascii "8\374\21\205ty\352\"*r\340!:!n\357n%]\224\345\10|\304\23\342\274z\346"
.ascii "8\35\23\370\7\4\1\366s\362\203j\271]\27bb{\316\353\27\343\310\4\371\374\237*n#\220\342rA\31"
.ascii "7\365\263\327\231\26\364\10"
.ascii "2\\-\311\277\255^w\220}|\340\233\306\352\263\362Qo+e+\314\354\277\246\354\252\277\20\364\224%T\233'eR\301{\32\340\372\313\362\263\242\331\314\340\24\6\21s\210\243!\371\347\325\333&m\210\305\203\355\277*\326\236\34\300-\213\327\306\2Td\317\27\231\26tl,\301\26\21cd\27\335#\262L\223"
.ascii "8\353\30\351\264{I\26\316\11\14"
.ascii "9\326h\254\220B}6a\247\13\353\27M\274\231"
.ascii "0\23M\332\272\272%d[\274\36Q\200\37\321\1&\35"
Since the data is in its own particular section, the linker sees it and does the code generation. If the module had been written in assembly, or compiled without the -flto flag, the linker would see machine code in the .text section instead, so there is no confusion possible for the linker.
The problem is: how can the linker generate code? Normally only gcc can generate code; the linker's role is just to patch a few offsets and adapt the binary format. To generate code, the linker would need to contain a second copy of the entire gcc backend (the half of the compiler which generates assembly code from the intermediate representation), as well as the entire assembler (since no assembly code was produced). How is such a thing possible, especially considering that binutils is a completely separate entity from gcc, developed by different teams?
GCC's -flto emits a serialized form of GCC's internal representation, as you discovered.
Then, at link time, the linker reinvokes GCC and passes it the objects that need final compilation. GCC reads the internal representation and does the work.
I think the actual work is done in collect2, which is part of GCC that is used when invoking the linker (I'm a little fuzzy on the details). There is also a "linker plugin" system that enables this to work a little better (like letting the linker decide how to split the compilation). This is implemented at least by the binutils ld and by gold; but as far as I recall this is just an optimization and isn't needed to get the basic -flto feature to work. You can see a bit more information on the original LTO project page; and maybe links from there would explain more.
There is more overlap between the GCC and binutils teams than you might think. The two projects share some code and have a long history of working together. Some people work on both projects.
From https://gcc.gnu.org/wiki/LinkTimeOptimization:
Despite the "link time" name, LTO does not need to use any special
linker features. The basic mechanism needed is the detection of GIMPLE
sections inside object files. This is currently implemented in
collect2 [which is called by gcc; -ps]. Therefore, LTO will work on any linker already supported by
GCC.
I take this to mean that you must link by calling the compiler driver gcc; simply linking with the system's vanilla linker wouldn't optimize the whole program, as you already concluded.
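Concretely, both steps go through the driver (a sketch; file names are placeholders):

gcc -O2 -flto -c a.c main.c        # the objects now carry GIMPLE, not just machine code
gcc -O2 -flto a.o main.o -o app    # gcc runs the LTO backend, then calls the real linker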
Update:
https://gcc.gnu.org/onlinedocs/gccint/Collect2.html says
The program collect2 is installed as ld in the directory where the passes of the compiler are installed. When collect2 needs to find the real ld it tries the following file names: [...]
(The page goes on to detail how collect2 looks for configuration-dependent executables and ones with well-known names like real-ld, finally even plain ld; but it will not call itself recursively.)

Flash size of a bare metal ARM program

How can I determine the flash size of bare-metal ARM code? If I have the ELF file, is it possible to know how much flash will be required to store the program? For example, given the ELF file that is supposed to go into an ARM-based MCU, how can I determine how much of the MCU's flash the code will consume?
The ELF headers should contain the information you need. You can use either objdump (with -h) or readelf to read them. Those tools should be included with your toolchain.
Basically, you're looking to add up the sizes of all the loadable sections, such as .text and .data. Look for the LOAD flag in the output from objdump, for example.
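For example (illustrative output for a Cortex-M style part; sizes and addresses will differ on your MCU):

arm-none-eabi-objdump -h app.elf

Idx Name   Size      VMA       LMA       File off  Algn
  0 .text  00003000  08000000  08000000  00010000  2**2
                     CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .data  00000100  20000000  08003000  00014000  2**2
                     CONTENTS, ALLOC, LOAD, DATA

Note how .data has its VMA in RAM but its LMA in flash: the initializers are stored in flash and copied out at startup, so they count against the flash budget too.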
You can ignore non-loadable sections such as .comment, .debug and .bss. Some of those are there for the benefit of the debugger, for example, and some are just placeholders for memory that will be used by the program at run time, but contains no pre-existing data.
When I say "add up the size", that's not strictly true; the linker will have already allocated each section to a specific address in flash (I'm assuming your program will run directly from ROM), so you need to find the end-address of the last section to determine how much is left.
