Why is gcc placing my global data at the wrong address?

I am trying to compile an app for a custom OS I'm writing for the ARM Cortex-M0+. The app is written in C. I have a global variable, int newInt = 4;, defined at the very top of my code. The rest of the app just calls a print function to print the value of that variable. However, it kept crashing. To investigate, I instead printed the address of newInt: it was well outside the chip's valid memory map, which is why the app crashed.
My linker script is simple:
SECTIONS
{
    . = 0x20001580;
    .text :
    {
        _text = .;
        *(.text)
        _etext = .;
    }
    .data :
    {
        _data = .;
        KEEP(*(.data))
        _edata = .;
    }
    .bss :
    {
        _bss = .;
        *(.bss)
        _ebss = .;
    }
}
Now, the .text segment is placed correctly, starting at 0x20001580. However, the address of my global variable, which SHOULD be somewhere around that value (0x20001580 or so, plus the code size, which is around 40 bytes), is actually 0x18060, which as far as I can tell is a totally random address. So whenever I try to access newInt's value, the code reads an out-of-range memory address and fails.
Shouldn't newInt be placed in the .data segment? If so, why would the .data segment be at such an odd location, given my linker script?

This might be relevant to other people later:
The problem lay in my linking process. To make it work, I had to compile my app with the -c flag to get a relocatable object file, and THEN link against gcc's output in a separate step. When I tried to do compilation and linking in one step, something went awry.
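For reference, the working two-step sequence looks roughly like this; the target flags and the -T script name are placeholders for whatever the project actually uses:

```sh
# Step 1: compile only (-c) to get a relocatable object file.
arm-none-eabi-gcc -mcpu=cortex-m0plus -mthumb -ffreestanding -c app.c -o app.o

# Step 2: link separately, so the custom linker script is applied as expected.
arm-none-eabi-ld -T custom_os.ld app.o -o app.elf
```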

Related

Is there a way to force a variable to be placed at top of .bss section?

I am using GCC on a Cortex M0 from NXP.
I have a non-initialized buffer which needs to be placed on a 512-byte boundary due to DMA access restrictions:
DMA_CH_DESCRIPTOR_T __attribute__ ((aligned (512))) Chip_DMA_Table[MAX_DMA_CHANNEL];
This will end up in the .bss section, but of course, due to alignment, some space will be lost before it. I know that .bss starts (in my MCU) at 0x10000000, which is already 512-byte aligned.
So the big question is: how can I force my buffer to be the first symbol in .bss?
I already tried this, but it doesn't work:
.bss : ALIGN(4)
{
    _bss = .;
    PROVIDE(__start_bss_RAM = .);
    PROVIDE(__start_bss_SRAM = .);
    drv_dma.o (.bss)
    *(.bss*)
    *(COMMON)
    . = ALIGN(4);
    _ebss = .;
    PROVIDE(__end_bss_RAM = .);
    PROVIDE(__end_bss_SRAM = .);
    PROVIDE(end = .);
} > SRAM AT> SRAM
Note: I can see several potential workarounds:
defining my own .bss_top section, for example, and modifying my startup script to treat it as a separate .bss and initialize it.
defining a separate section BEFORE the actual .bss and initializing my buffer from code somewhere with memset(...)
But I thought it was worth asking; maybe there is a simple linker trick for this one.
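A variant of the first idea above is to give the buffer its own input section and place that pattern first inside .bss. The section name .bss.dma_table is made up for illustration; the buffer would be tagged in C with __attribute__ ((section(".bss.dma_table"), aligned(512))). Note also that a non-initialized global may land in COMMON rather than .bss, in which case a drv_dma.o (.bss) pattern never matches it:

```
.bss : ALIGN(4)
{
    _bss = .;
    *(.bss.dma_table)   /* dedicated section placed first, so it starts at 0x10000000 */
    *(.bss*)
    *(COMMON)
    . = ALIGN(4);
    _ebss = .;
} > SRAM AT> SRAM
```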
Thank you,

How to debug code copied from FLASH to RAM in Atmel Studio?

I have a custom boot loader program. The loader must be able to program any part of the flash, including upgrading itself. To accomplish this, the loader startup code copies the whole program from flash into RAM and then jumps to main().
The program works fine, but I can't get the debugger to set breakpoints. The specific error message that I get is "Unable to set requested breakpoint on target". Reading the state of variables and single stepping DOES seem to work.
How can I get the debugger to work in this setup?
Development Environment: Atmel Studio 7
Processor: ATSAME70 (This is an ARM Cortex M7)
Compiler: GCC
Tool: Atmel-ICE
Interface: SWD (Serial Wire Debug)
The relevant portions of the linker script look like this...
/* Memory Spaces Definitions */
MEMORY
{
    rom (rx)  : ORIGIN = 0x00400000, LENGTH = 0x00004000 /* rom, 16K */
    ram (rwx) : ORIGIN = 0x20400000, LENGTH = 0x00060000 /* ram, 384K */
}
SECTIONS
{
    .reset : {
        . = ALIGN(4);
        KEEP("Device_Startup/startup.o"(.text))
        KEEP(*(.reset_code))
    } > rom
    PROVIDE(_exit = .);
    .text :
    {
        . = ALIGN(4);
        _rom_to_ram_copy_vma_start = .;
        KEEP(*(.vectors .vectors.*))
        *(.text .text.* .gnu.linkonce.t.*)
        ...
        ...
        _rom_to_ram_copy_vma_end = .;
    } > ram AT > rom
    _rom_to_ram_copy_lma_start = LOADADDR(.text);
}
The program copies everything between _rom_to_ram_copy_vma_start and _rom_to_ram_copy_vma_end into RAM and then jumps to main in RAM.
Given that I used "ram AT > rom" in the linker script, one would think the debugger should know that the code is in RAM and should have no problem setting the breakpoint there.

ARM bare-metal program compilation - controlling flash writes

I'm trying to compile some C code to run on an ARMv6 simulator, with FLASH memory starting at 0x0 and RAM starting at 0x800000. Right now, I can pass binary files off to the simulator just fine...
However, I want the generated instructions to include no writes to flash memory and to operate only within RAM (after the copy into RAM). Is this possible?
I am using the GNU toolchain to compile.
This is my current linker script:
MEMORY
{
    rom (rx)  : ORIGIN = 0x00000000, LENGTH = 0x00800000
    ram (!rx) : ORIGIN = 0x40000000, LENGTH = 0x00800000
    h         : ORIGIN = 0x40000000, LENGTH = 0x00400000
}
SECTIONS
{
    .text : { *(.text*) } > rom
    .bss  : { *(.bss*) } > ram
    .heap : { *(.heap*) } > h
}
end = ORIGIN(h) + LENGTH(h);
_stacktop = ORIGIN(ram) + LENGTH(ram);
Your build linker script (normally a .ld file) determines the locations of your device's memory and how the linker sections map onto it. Your link map should not place writable sections in read-only memory; that will fail.
[Added after linker script added to question]
Your linker script seems unusual in lacking a .data section:
.data : { *(.data) } > ram
Without that, it is not clear what the linker will do with initialised static data.
Also, your question states that the RAM starts at 0x800000, but the linker script clearly locates it at 0x40000000. Perhaps this misunderstanding of your memory map is leading you to believe, erroneously, that writes to the ROM region are occurring?
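For completeness, a .data section for a ROM image would normally also carry a load address in rom, plus symbols the startup code can use to copy the initial values into RAM. A rough sketch (the symbol names _sdata, _edata, and _data_lma are made up here):

```
.data :
{
    _sdata = .;
    *(.data*)
    _edata = .;
} > ram AT > rom
_data_lma = LOADADDR(.data);   /* startup copies from _data_lma to [_sdata .. _edata) */
```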

undefined reference to _GLOBAL_OFFSET_TABLE_ (only when generating binaries)

This is the problem:
When I link my C code with ld, generating elf32-i386 as the output format (set via OUTPUT_FORMAT() in the ld script), I don't get any errors. But if I put "binary" in that OUTPUT_FORMAT(), or try to output a file with a .bin extension, I get a mixture of errors like:
kernel.o: In function `k_main':
kernel.c:(.text+0xe): undefined reference to `_GLOBAL_OFFSET_TABLE_'
kernelutils.o: In function `k_clear_screen':
kernelutils.c:(.text+0xc): undefined reference to `_GLOBAL_OFFSET_TABLE_'
kernelutils.o: In function `k_clear_screen_front':
kernelutils.c:(.text+0x56): undefined reference to `_GLOBAL_OFFSET_TABLE_'
kernelutils.o: In function `k_printf':
kernelutils.c:(.text+0xa0): undefined reference to `_GLOBAL_OFFSET_TABLE_'
kernelutils.o: In function `k_sleep_3sec':
kernelutils.c:(.text+0x152): undefined reference to `_GLOBAL_OFFSET_TABLE_'
kernelmalloc.o:kernelmalloc.c:(.text+0xc): more undefined references to `_GLOBAL_OFFSET_TABLE_' follow
This doesn't happen only with specific source files: everything that links with ld (or with gcc, since it calls ld) dies in the attempt to get a binary with a .bin extension.
When showing the symbols of one of the object files (kernel.o in the output above), I see that the symbol _GLOBAL_OFFSET_TABLE_ isn't defined, and, the scariest part, all the functions that produced errors in the output above show up as undefined; this is the nm output:
cristian#mymethodman:~/Desktop/kernel/0.0.3/Archivos$ nm kernel.o
U _GLOBAL_OFFSET_TABLE_
U k_clear_screen
U k_clear_screen_front
00000000 T k_main
U k_malloc
U k_printf
U k_sleep_3sec
00000000 T __x86.get_pc_thunk.bx
How can I solve this? I will leave the linker scripts below, to show it isn't a problem with the .ld file, in both the "to get ELF" and "to get binary" versions. Thanks in advance!
Ld scripts:
To get binary:
ENTRY(loader)
OUTPUT_FORMAT(binary)
SECTIONS {
    /* The kernel will live at 3GB + 1MB in the virtual
       address space, which will be mapped to 1MB in the
       physical address space. */
    . = 0xC0100000;
    .text : AT(ADDR(.text) - 0xC0000000) {
        *(.text)
        *(.rodata*)
    }
    .data ALIGN (0x1000) : AT(ADDR(.data) - 0xC0000000) {
        *(.data)
    }
    .bss : AT(ADDR(.bss) - 0xC0000000) {
        _sbss = .;
        *(COMMON)
        *(.bss)
        _ebss = .;
    }
}
To get ELF:
ENTRY(loader)
OUTPUT_FORMAT(elf32-i386)
SECTIONS {
    /* The kernel will live at 3GB + 1MB in the virtual
       address space, which will be mapped to 1MB in the
       physical address space. */
    . = 0xC0100000;
    .text : AT(ADDR(.text) - 0xC0000000) {
        *(.text)
        *(.rodata*)
    }
    .data ALIGN (0x1000) : AT(ADDR(.data) - 0xC0000000) {
        *(.data)
    }
    .bss : AT(ADDR(.bss) - 0xC0000000) {
        _sbss = .;
        *(COMMON)
        *(.bss)
        _ebss = .;
    }
}
As you can see, only the OUTPUT_FORMAT() line differs between the two.
Your toolchain probably defaults to generating position-independent executables (PIE). Try compiling with gcc -fno-pie.
If you want to keep PIE for security reasons, you'll need a more complicated linker script and something that performs the initial relocation (such as a dynamic linker, but simpler constructions are possible as well).
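A commonly used pattern that sidesteps both issues is to compile with -fno-pie, link an ELF using the script, and only then flatten it with objcopy, rather than using OUTPUT_FORMAT(binary). The script name linker.ld below is a placeholder, and the object names are taken from the question:

```sh
gcc -m32 -ffreestanding -fno-pie -c kernel.c -o kernel.o
ld -m elf_i386 -T linker.ld kernel.o kernelutils.o kernelmalloc.o -o kernel.elf
objcopy -O binary kernel.elf kernel.bin
```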

Why do TI StarterWare examples not clear the BSS segment correctly when compiled using CodeSourcery gcc?

I ran into severe trouble with a BeagleBone running a TI AM3359 ARM. I'm using CodeSourcery to compile the code. I tried to compile one of the examples, called enet_lwip, which uses lightweight IP (lwIP) to provide an HTTP server.
The application crashes at a certain point. By debugging I have found that this piece of code is responsible for it:
unsigned int lwIPInit(LWIP_IF *lwipIf)
{
    struct ip_addr ip_addr;
    struct ip_addr net_mask;
    struct ip_addr gw_addr;
    unsigned int *ipAddrPtr;
    static unsigned int lwipInitFlag = 0;
    unsigned int ifNum;
    unsigned int temp;

    /* do lwip library init only once */
    if(0 == lwipInitFlag)
    {
        lwip_init();
    }
A very odd thing happens here: one would expect lwipInitFlag to be initialized to 0, and hence the function to call lwip_init();
Well, this does not happen even the very first time the lwIPInit function gets called. The reason is that the variable lwipInitFlag is not set to 0.
I would like to know why this is. If such an initialization appears in the code, the compiler should generate a sequence to zero the variable. But probably because it is preceded by the static modifier, it leaves it 'as is'. Why?
lwipInitFlag is in the .bss linker section, which points to DDR memory. How can I make sure that such static initializations actually take effect?
For the moment I'll hack the lwIP code to see if this works, but it is a warning for me that there might be other statically declared variables somewhere in the libraries which do not get initialized.
Any hints on how to resolve this?
Adding more information to this: after your fruitful hints I think I'm even more confused about how it should work. So: it is true that I do not call/link crt*.o. On the other hand, the TI StarterWare platform contains initialization asm source which DOES do the BSS cleanup. It does it between the addresses _bss_start and _bss_end.
When looking into linker script, everything looks pretty ordinary:
SECTIONS
{
    . = 0x80000000;
    . = ALIGN(4);
    .startcode :
    {
        *init.o (.text)
    }
    . = ALIGN(4);
    .text :
    {
        *(.text)
    }
    . = ALIGN(4);
    .data :
    {
        *(.data)
    }
    . = ALIGN(4);
    _bss_start = .;
    .bss :
    {
        *(.bss)
    }
    . = ALIGN(4);
    _bss_end = .;
    _stack = 0x87FFFFF8;
}
So _bss_start is the address just before the BSS block and _bss_end is at the end of the block. The trouble lies in the map that CodeSourcery generates.
When looking at the end of BSS in generated map file, I can see this:
COMMON 0x80088f0c 0x200 ../binary/armv7a/gcc/am335x/system_config/Debug/libsystem_config.a(interrupt.o)
0x80088f0c fnRAMVectors
0x8008910c . = ALIGN (0x4)
0x8008910c _bss_end = .
0x87fffff8 _stack = 0x87fffff8
LOAD ../binary/armv7a/gcc/am335x/drivers/Debug/libdrivers.a
LOAD ../binary/armv7a/gcc/utils/Debug/libutils.a
LOAD ../binary/armv7a/gcc/am335x/beaglebone/platform/Debug/libplatform.a
LOAD ../binary/armv7a/gcc/am335x/system_config/Debug/libsystem_config.a
LOAD /opt/CodeSourcery/arm-none-eabi/lib//libc.a
LOAD /opt/CodeSourcery/lib/gcc/arm-none-eabi/4.5.2//libgcc.a
LOAD ../binary/armv7a/gcc/am335x/drivers/Debug/libdrivers.a
LOAD ../binary/armv7a/gcc/utils/Debug/libutils.a
LOAD ../binary/armv7a/gcc/am335x/beaglebone/platform/Debug/libplatform.a
LOAD ../binary/armv7a/gcc/am335x/system_config/Debug/libsystem_config.a
LOAD /opt/CodeSourcery/arm-none-eabi/lib//libc.a
LOAD /opt/CodeSourcery/lib/gcc/arm-none-eabi/4.5.2//libgcc.a
OUTPUT(Debug/bbdidt.out elf32-littlearm)
.bss.pageTable 0x8008c000 0x4000
.bss.pageTable
0x8008c000 0x4000 Debug/enetLwip.o
.bss.ram 0x80090000 0x4
.bss.ram 0x80090000 0x4 Debug/lwiplib.o
There is clearly 'something': another BSS section after _bss_end, containing a lot of data that is expected to be zeroed but is not, because zeroing finishes at the address given by _bss_end.
The probable reason it is laid out like this is that pageTable is statically declared and required to sit on a 16 KiB boundary:
static volatile unsigned int pageTable[4*1024] __attribute__((aligned(16*1024)));
So, since there is a gap between the last linker-declared BSS segment and pageTable, the linker places _bss_end in the middle of the BSS segment.
Now the question is: how do I tell the linker (I'm using arm-none-eabi-ld for this) that _bss_end should really be at the end of BSS and not somewhere in the middle?
Many thanks
The fact that no statics are initialised makes me wonder: how have you come by your startup code? This code is required to perform the initialisations.
See http://doc.ironwoodlabs.com/arm-arm-none-eabi/html/getting-started/sec-cs3-startup.html - section 5.2.3 which says:
The C startup function is declared as follows:
void __cs3_start_c (void) __attribute__ ((noreturn));
This function performs the following steps:
Initialize all .data-like sections by copying their contents. For example, ROM-profile linker scripts use this mechanism to initialize writable data in RAM from the read-only data program image.
... etc
It sounds like you might be lacking that code.
Thanks for all these comments. It was almost detective work for me. In the end I changed the linker script to something like this:
SECTIONS
{
    . = 0x80000000;
    . = ALIGN(4);
    .startcode :
    {
        *init.o (.text)
    }
    . = ALIGN(4);
    .text :
    {
        *(.text)
    }
    . = ALIGN(4);
    .data :
    {
        *(.data)
    }
    . = ALIGN(4);
    _bss_start = .;
    .bss :
    {
        *(.bss)
        *(COMMON)
        *(.bss.*)
    }
    . = ALIGN(4);
    _bss_end = .;
    _stack = 0x87FFFFF8;
}
So I basically forced the linker to include in the BSS segment all the input sections named COMMON or starting with .bss.
This apparently resolves the issue: the linker now generates a correct map, placing _bss_end really at the last address of the BSS section.
So my software now runs correctly and gets the PHY running. I still cannot acquire a DHCP lease, but I guess this is a problem of an uninitialised .data segment. lwIP in some places uses static assignments such as
static u32_t xid = 0xABCD0000;
which goes into the .data segment, but apparently the segment does not get initialised, and hence I cannot get any DHCP answer... but that is another story.
thanks to all
