How do I know the flash size of bare-metal ARM code? If I have the ELF file, is it possible to know how much flash will be required to store the program? For example, if I have an ELF file that is supposed to go into an ARM-based MCU, how can I determine how much of the MCU's flash will be consumed by the code?
The ELF headers should contain the information you need. You can use either the objdump (with -h) or readelf tool to read these. Those tools should be included with your toolchain.
Basically, you're looking to add up the sizes of all the loadable sections, such as .text and .data. Look for the LOAD flag in the output from objdump, for example.
You can ignore non-loadable sections such as .comment, .debug and .bss. Some of those are there for the benefit of the debugger, for example, and some are just placeholders for memory that will be used by the program at run time, but contains no pre-existing data.
When I say "add up the sizes", that's not strictly true; the linker will already have allocated each section to a specific address in flash (I'm assuming your program will run directly from ROM), so you need to find the end address of the last loaded section to determine how much flash is used and hence how much is left.
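For example, with the GNU Arm Embedded toolchain (the tool prefix and the firmware.elf name are assumptions; the numbers are purely illustrative):

    $ arm-none-eabi-size firmware.elf
       text    data     bss     dec     hex filename
      12344     112    2048   14504    38a8 firmware.elf
    $ arm-none-eabi-objdump -h firmware.elf   # per-section sizes, VMA/LMA and the LOAD flag

Flash usage is roughly text + data, since the initial values of .data are stored in flash and copied to RAM at startup, while .bss occupies RAM only.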
I'm trying to build bootable code for an ARM M7-based embedded system that is able to execute in place at two different locations in the QSPI, so that if one version gets corrupted, the backup version of the image can be executed in a different place.
Compiling with -fpic seems to produce a relocatable code image that is (nearly) able to execute in both places fine. However, the problem is that the data/bss the code refers to is also getting offset by the same amount - that is, the compiler is assuming that the .data and .bss segments live immediately after the .text segment, which isn't true for XIP embedded systems (where the RAM is separate).
As a result, if the original binary was linked to run at 0x60000000 (and using a fixed RAM area at 0x20000000) but is then executed in place at 0x60100000 instead, the RAM addresses will be shifted by 0x100000 as well (i.e. to 0x20100000), which isn't what I want at all.
Clearly, what I'd like to do is to modify gcc's behaviour so that references to the code (executing in place in two different places in the QSPI) are position-independent, while references to the .data/bss segments (in a fixed position in RAM) are position-dependent (as per normal).
Is this something that gcc can be tweaked to achieve (e.g. by some obscure linker attribute flag)? Or is this just out of its reach? Thanks!
ELF executables have a fixed load address (0x8048000 for 32-bit x86 Linux binaries, and 0x400000 for 64-bit x86_64 binaries).
I read the SO answers (e.g., this one) about the historical reasons for those specific addresses. What I still don't understand is why use a fixed load address and not a randomized one (within some given range)?
why use a fixed load address and not a randomized one
Traditionally, that's how executables worked. If you want a randomized load address, build a PIE binary (which is really a special case of a shared library that has startup code in it): compile with -fPIE and link with -pie.
Building with -fPIE introduces runtime overhead, in some cases as bad as 10% performance degradation, which may not be tolerable if you have a large cluster or you need every last bit of performance.
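As a hedged sketch of building the same source both ways (file names hypothetical; note that many modern distributions build PIE by default, so the first command may need -no-pie there):

    $ gcc -o fixed main.c                    # traditional ET_EXEC binary, fixed load address
    $ gcc -fPIE -pie -o randomized main.c    # ET_DYN PIE, base address chosen by ASLR
    $ readelf -h fixed | grep Type           # Type: EXEC (Executable file)
    $ readelf -h randomized | grep Type      # Type: DYN (Shared object file)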
Not sure if I understood your question correctly, but assuming I did: this is sort of a "legacy"/historical issue. ELF is the file format used by Unix-derived operating systems such as Linux and the BSDs.
The ELF format simply states that there must be some resolved, absolute virtual address that the code is loaded at and begins running from.
That's simply how the file format is, and for historical reasons it can't be changed. You couldn't just "throw" the executable at any memory address and have it run successfully. Back in the 90s, when the ELF format was introduced, problems such as calling functions through virtual tables were raised, and it was decided that the ELF format would have absolute addresses within it.
Also, think about it; take a look at the ELF format: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
How would you design an OS executable loader that could take an executable, load it to ANY desired virtual address, and have the code run successfully without actually having to change the binary itself? To do something like that, you'd either need to vastly change the output compilers generate or the format itself, which again isn't possible.
As time passed, the need for position-independent execution (PIE/PIC) arose, and shared objects were introduced to allow that, together with ASLR (Address Space Layout Randomization). This means the code can be loaded at any memory address and still execute. It is implemented by making sure that all calls within the code itself are relative to the address of the currently executing instruction, and by having the OS loader change some data within the binary at load time, where the data changed is not executable instructions (R+E) but actual data (RW, e.g. the .data segment). Calls also go through "jump tables" (for example the PLT/GOT), which are fixed up at load time. Shared objects therefore allow complete randomization of the addresses the code is loaded at; if you want to run some more "secure" code, you have to compile it as a shared object and dynamically link it, resolving addresses at load time or run time.
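A hedged way to poke at those pieces yourself (file and library names hypothetical; output omitted):

    $ gcc -shared -fPIC -o libfoo.so foo.c
    $ objdump -d -j .plt libfoo.so    # PLT stubs that jump through the GOT
    $ readelf -r libfoo.so            # relocations the dynamic loader fixes up at load time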
(hope I've cleared some things up :) )
Before asking my question, I would like to cover a few technical details I want to make sure I've got right:
A Position Independent Executable (PIE) is a program that would be able to execute regardless of which memory address it is loaded into, right?
ASLR (Address Space Layout Randomization) pretty much states that in order to keep addresses static, we would randomize them in some manner,
I've read that, specifically on Linux and Unix-based systems, implementing ASLR is possible regardless of whether our code is a PIE. If it is a PIE, all jumps, calls and offsets are relative, hence we have no problem.
If it's not, the code somehow gets modified and addresses are edited, regardless of whether the code is an executable or a shared object.
Now this leads me to ask a few questions:
If ASLR is possible to implement for code that isn't PIE and is an executable, NOT a shared/relocatable object (I KNOW HOW RELOCATION WORKS WITHIN RELOCATABLE OBJECTS!), how is it done? The ELF format holds no section stating where functions are located within the code sections, so the kernel loader would have nothing to modify, right? ASLR should be a kernel feature, so how on earth could it handle, for example, an executable containing these instructions:
pseudo code:

    inc_eax:
        add eax, 5
        ret

    main:
        mov eax, 5
        mov ebx, 6
        call ABSOLUTE_ADDRESS{inc_eax}
How would the kernel executable loader know how to change the addresses if they aren't stored in some relocatable table within the ELF file and aren't relative, in order to load the executable into some random address?
Let's say I'm wrong, and in order to implement ASLR you must have a PIE executable, where all segments are relative. How would one compile C++ OOP code and make it work, for example, if I have some instance of a class using a pointer to a virtual table within its struct? That virtual table should hold absolute addresses, hence I wouldn't be able to compile a pure PIE for C++ programs that have usage of run time virtual tables, and again ASLR isn't possible... I doubt that virtual tables would contain relative addresses, and that there would be a different virtual table for each call of some virtual function...
My last and least significant question is regarding ELF and PIE: is there some special way to detect an ELF executable is PIE? I'm familiar with the ELF format, so I doubt there is a way, but I might be wrong. Anyway, if there isn't one, how does the kernel loader know whether our executable is a PIE so that it can use ASLR on it?
I've got this all messed up in my head and I'd love it if someone could help me here.
Your question appears to be a mish-mash of confusion and misunderstanding.
A Position Independent Executable (PIE) is a program that would be able to execute regardless of which memory address it is loaded into, right?
Almost. A PIE binary usually cannot be loaded into memory at an arbitrary address, as its PT_LOAD segments will have some alignment requirement (e.g. 0x400, or 0x10000). But it can be loaded, and will run correctly, if loaded at an address satisfying those alignment requirements.
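You can see the alignment requirement in the program headers, for example (output abbreviated; exact values depend on the toolchain):

    $ readelf -l ./a.out | grep LOAD
      LOAD  0x000000 0x0000000000000000 0x0000000000000000 0x0006d0 0x0006d0 R E 0x1000
    # the last field is the segment alignment; the loader may pick any base
    # address that satisfies it.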
ASLR (Address Space Layout Randomization) pretty much states that in order to keep addresses static we would randomize them in some manner,
I can't parse the above statement in any meaningful way.
ASLR is a technique for randomizing various parts of address space, in order to make "known address" attacks more difficult.
Note that ASLR predates PIE binaries, and does not in any way require PIE. When ASLR was introduced, it randomized placement of stack, heap, and shared libraries. The placement of (non-PIE) main executable could not be randomized.
ASLR has been considered a success, and was therefore extended to also support a PIE main binary, which is really a specially crafted shared library (and has the ET_DYN file type).
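You can observe this kind of randomization even without PIE; for instance, running ldd twice shows the shared libraries mapped at different addresses on each run when ASLR is enabled (addresses below are illustrative):

    $ ldd /bin/true | grep libc
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2c1d4000)
    $ ldd /bin/true | grep libc
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f53e0b12000)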
call ABSOLUTE_ADDRESS{inc_eax}
how would the kernel executable loader know how to change the addresses if they aren't stored in some relocatable table
Simple: on x86, there is no instruction to directly call an absolute address -- direct calls are all IP-relative.
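You can see this in any disassembly; the direct near call is opcode e8 followed by a signed displacement measured from the next instruction (addresses below are arbitrary):

    8048400:  e8 0b 00 00 00    call   8048410 <inc_eax>
    # e8 = call rel32; the encoded operand is (target - address of next instruction),
    # so the same bytes work no matter where the code ends up in memory.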
2 ... I wouldn't be able to compile a pure PIE for C++ programs that have usage of run time virtual tables, and again ASLR isn't possible..
PIE binary requires relocation, just like a shared library. Virtual tables in PIE binaries work exactly the same way they work in shared libraries: ld-linux.so.2 updates the GOT (global offset table) before transferring control to the PIE binary.
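As a quick, hedged check (binary and source names hypothetical, output abbreviated): build any small C++ program with a virtual function as a PIE and list its dynamic relocations; the R_*_RELATIVE entries are the absolute-address fixups, vtable slots among them, that the loader applies once the random base address is known:

    $ g++ -fPIE -pie -o demo demo.cpp
    $ readelf -r demo | grep RELATIVE | head -3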
3 ... is there some special way to detect an ELF executable is PIE
Simple: a PIE binary has ELF file type set to ET_DYN (a non-PIE binary will have type ET_EXEC). If you run file a.out on a PIE executable, you'll see that it's a "shared library".
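For example (output abbreviated; the exact wording varies between binutils and file versions):

    $ readelf -h ./a.out | grep Type
      Type:    DYN (Shared object file)
    $ file ./a.out
    ./a.out: ELF 64-bit LSB shared object, x86-64, ... dynamically linked ...
    # a non-PIE binary shows "EXEC (Executable file)" and "executable" instead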
I've written a program using Keil C for a MegaWin 8051 MPC82G516A. When I check the file size of the generated Intel hex file, it is 8 KB (I see the code in the binary code window), but when I go to program the device using MegaWin's tool, it increases the code size to around 29 KB!? Can anyone provide a reason why it might be doing this?
Also, something else that is strange is that it seems to be writing the code at the top of the processor memory and not at the start. There are about 4 bytes at the start of the code, but the rest of it is at the end of the memory.
Please help
Cameron.
You write that the file size of the Intel hex file with your code is about 8k. Part of your program is written to the bottom of the address space.
Another part is written to the top of the address space.
The Intel hex file contains not only the program code but also the addresses where that code should be written.
You can check for yourself whether your file contains code for both the bottom and the top of the address space.
Some information about intel hex format: http://www.keil.com/support/man/docs/oh166/oh166_ih_record.htm
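For illustration, here is a typical data record broken into its fields (this is the standard example record; the data bytes themselves are arbitrary):

    :10010000214601360121470136007EFE09D2190140
    # :       start of record
    # 10      byte count (0x10 = 16 data bytes)
    # 0100    load address of this record (0x0100)
    # 00      record type (00 = data, 01 = end of file, 04 = extended linear address)
    # 2146... the 16 data bytes
    # 40      checksum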
If this is the case, you can check the .m51 file, which is generated by the linker during the build process.
This file contains information about the modules included in your program and the addresses they are linked to.
Perhaps there is some linker setting in your project that tells the linker to place the code that way.
I was misreading the file size in the editor. Also, Keil's free version starts at position 2000Kb; it's one of the limitations of the evaluation version.
I am working with Firefox on a research project. Firefox makes use of a lot of JIT'ed code at run time.
I instrumented Firefox using a custom PIN tool to find out the locations (addresses) of some things I was looking for. The issue is that those locations are in JIT'ed code. I want to know what is actually happening in that code.
To do this I dumped the corresponding memory region and used objdump to disassemble the dump.
I used objdump -D -b binary -mi386 file.dump to see the instructions that would have been executed. To my surprise, the only section listed is the .data section (a very big one).
Either I am disassembling it incorrectly or something else is wrong with my understanding. I expected to see more sections like .text, where the actual executable instructions should be present, and the .data section should not be executable.
Am I correct in my understanding here?
Also, could someone please advise me on how to properly work out what is happening in the JIT'ed code?
Machine
Linux 3.13.0-24-generic #47-Ubuntu SMP x86_64
or something else is wrong with my understanding
Yes: something else is wrong with your understanding.
Sections (such as .text and .data) only make sense at static link time (the static linker groups .text from multiple .o files together into a single .text in the final executable). They are not useful, and in fact could be completely stripped, at execution time. On ELF systems, all that you need at runtime are segments (PT_LOAD segments in particular), which you can see with readelf -l binary.
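For example (illustrative output, run against the ELF binary on disk rather than the memory dump):

    $ readelf -l ./binary | grep LOAD
      LOAD  0x000000 0x0000000000400000 0x0000000000400000 0x0a1c80 0x0a1c80 R E 0x1000
      LOAD  0x0a2000 0x00000000006a2000 0x00000000006a2000 0x005310 0x008a50 RW  0x1000
    # only the PT_LOAD segments matter at run time; the section headers could be
    # stripped entirely and the program would still load and run.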
Sections in an ELF file are "parts of the file". When you dump memory, it doesn't even make sense to talk about sections.
The .data that you see in objdump output is not really there either, it's just an artifact that objdump manufactures.