Can't debug after using SysTick_Config - debugging

I - an embedded beginner - am fighting my way through the black magic world of embedded programming. So far I have won quite a few fights, but this new bug seems to be a hard one.
First, my embedded setup:
Olimex STM32-P207 (STM32F207)
Olimex ARM-USB-OCD-H JTAG
OpenOCD
Eclipse (with CDT and GDB hardware debugging)
Codesourcery Toolchain
Startup file and linker script (adapted memory map for the STM32F207) for RIDE (which uses GCC)
STM32F2xx_StdPeriph_Lib_V1.1.0
Using the many tutorials and Q&As out there I was able to set up the makefile, linker and startup code and got some simple examples running using STM's standard library (classic blinky, using buttons and interrupts etc.). However, once I started playing around with the SysTick interrupts, things got messy.
If I add the SysTick_Config() call to my code (even with an empty SysTick_Handler), ...
int main(void)
{
    /*!< At this stage the microcontroller clock setting is already configured,
    [...]
    */
    if (SysTick_Config(SystemCoreClock / 120000))
    {
        // Catch config error
        while (1);
    }
    [...]
... then my debugger starts at the (inline) function NVIC_SetPriority() and once I hit "run" I end up in the HardFault_Handler().
This only happens when using the debugger. Otherwise the code runs normally (judging from the blinking LEDs).
I already read a lot and tried a lot (modifying compiler options, linker, startup, trying other code with SysTick_Config() calls), but nothing solved this problem.
One thing could be a hint:
Execution starts in both cases (with and without the SysTick_Config call) at 0x00000184. Without the SysTick_Config call this points at the beginning of main(); with SysTick_Config it points at NVIC_SetPriority().
Does anybody have a clue what's going on? Any hint about where I could continue my search for a solution?
I don't know what further information would be helpful to solve this riddle. Please let me know and I will be happy to provide the missing pieces.
Thanks a lot!
/edit1: Added results of arm-none-eabi-readelf, -objdump and -size.
/edit2: I removed the code info to make space for the actual code. With this new simplified version debugging starts at
08000184: stmdaeq r0, {r4, r6, r8, r9, r10, r11, sp}
readelf:
[ 2] .text PROGBITS 08000184 008184 002dcc 00 AX 0 0 4
...
2: 08000184 0 SECTION LOCAL DEFAULT 2
...
46: 08000184 0 NOTYPE LOCAL DEFAULT 2 $d
main.c
/* Includes ------------------------------------------------------------------*/
#include "stm32f2xx.h"
/* Private typedef -----------------------------------------------------------*/
/* Private define ------------------------------------------------------------*/
/* Private macro -------------------------------------------------------------*/
/* Private variables ---------------------------------------------------------*/
uint16_t counter = 0;
/* Private function prototypes -----------------------------------------------*/
/* Private functions ---------------------------------------------------------*/
void assert_failed(uint8_t* file, uint32_t line);
void Delay(__IO uint32_t nCount);
void SysTick_Handler(void);
/**
 * @brief  Main program
 * @param  None
 * @retval None
 */
int main(void)
{
    if (SysTick_Config(SystemCoreClock / 12000))
    {
        // Catch config error
        while (1);
    }

    /*
     * Configure the LEDs
     */
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOF, ENABLE); // LEDs

    GPIO_InitTypeDef GPIO_InitStructure;
    GPIO_InitStructure.GPIO_Pin = GPIO_Pin_6 | GPIO_Pin_7 | GPIO_Pin_8 | GPIO_Pin_9;
    GPIO_InitStructure.GPIO_Mode = GPIO_Mode_OUT;
    GPIO_InitStructure.GPIO_OType = GPIO_OType_PP;
    GPIO_InitStructure.GPIO_Speed = GPIO_Speed_2MHz;
    GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_NOPULL;
    GPIO_Init(GPIOF, &GPIO_InitStructure);

    GPIOF->BSRRL = GPIO_Pin_6;

    while (1)
    {
        if (GPIO_ReadInputDataBit(GPIOF, GPIO_Pin_9) == SET)
        {
            GPIOF->BSRRH = GPIO_Pin_9;
        }
        else
        {
            GPIOF->BSRRL = GPIO_Pin_9;
        }
        Delay(500000);
    }

    return 0;
}

#ifdef USE_FULL_ASSERT
/**
 * @brief  Reports the name of the source file and the source line number
 *         where the assert_param error has occurred.
 * @param  file: pointer to the source file name
 * @param  line: assert_param error line source number
 * @retval None
 */
void assert_failed(uint8_t* file, uint32_t line)
{
    /* User can add his own implementation to report the file name and line number,
       ex: printf("Wrong parameters value: file %s on line %d\r\n", file, line) */

    /* Infinite loop */
    while (1)
    {
    }
}
#endif

/**
 * @brief  Delay Function.
 * @param  nCount: specifies the Delay time length.
 * @retval None
 */
void Delay(__IO uint32_t nCount)
{
    while (nCount > 0)
    {
        nCount--;
    }
}

/**
 * @brief  This function handles Hard Fault exception.
 * @param  None
 * @retval None
 */
void HardFault_Handler(void)
{
    /* Go to infinite loop when Hard Fault exception occurs */
    while (1)
    {
    }
}

/**
 * @brief  This function handles SysTick Handler.
 * @param  None
 * @retval None
 */
void SysTick_Handler(void)
{
    if (counter > 10000)
    {
        if (GPIO_ReadInputDataBit(GPIOF, GPIO_Pin_8) == SET)
        {
            GPIOF->BSRRH = GPIO_Pin_8;
        }
        else
        {
            GPIOF->BSRRL = GPIO_Pin_8;
        }
        counter = 0;
    }
    else
    {
        counter++;
    }
}
/edit3:
Solution:
Because the solution is buried in a comment, I put it up here:
My linker file was missing ENTRY(your_function_of_choice); (e.g. Reset_Handler). Adding this made my debugger work again (it now starts at the right point).
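For reference, the relevant directive in a GNU ld script is just this (a sketch, not my actual linker file; the symbol only has to match the reset routine defined in your startup code):

/* GNU ld linker script (excerpt) */
ENTRY(Reset_Handler)   /* tells the linker (and hence the ELF entry point the debugger uses) where execution starts */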
Thanks everybody!

I have a strong feeling that the entry point isn't specified correctly in the compiler options or linker script...
Whatever code happens to come first at link time in the .text section ends up at the entry point.
The entry point, however, should point to a special "init" function that sets up the stack, initializes the .bss section (and maybe some other sections as well), initializes any hardware that's necessary for basic operation (e.g. the interrupt controller and maybe a system timer) and any remaining portions of the runtime and standard libraries before actually transferring control to main().
I'm not familiar with the tools and your hardware, so I can't say exactly what that special "init" function is, but that's pretty much the problem. It's not being pointed to by the entry point of the compiled program. NVIC_SetPriority() doesn't make any sense there.
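To make that concrete, here is a minimal sketch of what such an init routine typically looks like on a Cortex-M part (not your actual startup file; I'm assuming the usual linker-provided symbols _sidata, _sdata, _edata, _sbss and _ebss):

#include <stdint.h>

/* Symbols assumed to be provided by the linker script */
extern uint32_t _sidata;          /* start of .data initializers in flash */
extern uint32_t _sdata, _edata;   /* .data section in RAM */
extern uint32_t _sbss, _ebss;     /* .bss section in RAM */

extern int main(void);

void Reset_Handler(void)
{
    uint32_t *src = &_sidata;
    uint32_t *dst;

    /* Copy initialized data from flash to RAM */
    for (dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* Zero the .bss section */
    for (dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    /* Clock/system setup (e.g. SystemInit()) would go here, then: */
    main();

    /* main() should never return */
    for (;;)
        ;
}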

Related

Shared Memory's atomicAdd with int and float have different SASS

I encountered a performance issue where shared memory atomicAdd on float is much more expensive than on int, found after profiling with nv-nsight-cu-cli.
After checking the generated SASS, I found that the SASS generated for shared memory atomicAdd on float and on int is not similar at all.
Here is an example in minimal CUDA code:
$ cat test.cu
__global__ void testAtomicInt() {
    __shared__ int SM_INT;
    SM_INT = 0;
    __syncthreads();
    atomicAdd(&(SM_INT), ((int)1));
}

__global__ void testAtomicFloat() {
    __shared__ float SM_FLOAT;
    SM_FLOAT = 0.0;
    __syncthreads();
    atomicAdd(&(SM_FLOAT), ((float)1.1));
}
$ nvcc -arch=sm_86 -c test.cu
$ cuobjdump -sass test.o
Fatbin elf code:
================
arch = sm_86
code version = [1,7]
producer = <unknown>
host = linux
compile_size = 64bit
code for sm_86
Function : _Z15testAtomicFloatv
.headerflags #"EF_CUDA_SM86 EF_CUDA_PTX_SM(EF_CUDA_SM86)"
/*0000*/ MOV R1, c[0x0][0x28] ; /* 0x00000a0000017a02 */
/* 0x000fc40000000f00 */
/*0010*/ STS [RZ], RZ ; /* 0x000000ffff007388 */
/* 0x000fe80000000800 */
/*0020*/ BAR.SYNC 0x0 ; /* 0x0000000000007b1d */
/* 0x000fec0000000000 */
/*0030*/ LDS R2, [RZ] ; /* 0x00000000ff027984 */
/* 0x000e240000000800 */
/*0040*/ FADD R3, R2, 1.1000000238418579102 ; /* 0x3f8ccccd02037421 */
/* 0x001fcc0000000000 */
/*0050*/ ATOMS.CAST.SPIN R3, [RZ], R2, R3 ; /* 0x00000002ff03738d */
/* 0x000e240001800003 */
/*0060*/ ISETP.EQ.U32.AND P0, PT, R3, 0x1, PT ; /* 0x000000010300780c */
/* 0x001fda0003f02070 */
/*0070*/ #!P0 BRA 0x30 ; /* 0xffffffb000008947 */
/* 0x000fea000383ffff */
/*0080*/ EXIT ; /* 0x000000000000794d */
/* 0x000fea0003800000 */
/*0090*/ BRA 0x90; /* 0xfffffff000007947 */
/* 0x000fc0000383ffff */
/*00a0*/ NOP; /* 0x0000000000007918 */
..........
Function : _Z13testAtomicIntv
.headerflags #"EF_CUDA_SM86 EF_CUDA_PTX_SM(EF_CUDA_SM86)"
/*0000*/ MOV R1, c[0x0][0x28] ; /* 0x00000a0000017a02 */
/* 0x000fc40000000f00 */
/*0010*/ STS [RZ], RZ ; /* 0x000000ffff007388 */
/* 0x000fe80000000800 */
/*0020*/ BAR.SYNC 0x0 ; /* 0x0000000000007b1d */
/* 0x000fec0000000000 */
/*0030*/ ATOMS.POPC.INC.32 RZ, [URZ] ; /* 0x00000000ffff7f8c */
/* 0x000fe2000d00003f */
/*0040*/ EXIT ; /* 0x000000000000794d */
/* 0x000fea0003800000 */
/*0050*/ BRA 0x50; /* 0xfffffff000007947 */
/* 0x000fc0000383ffff */
/*0060*/ NOP; /* 0x0000000000007918 */
..........
Fatbin ptx code:
================
arch = sm_86
code version = [7,5]
producer = <unknown>
host = linux
compile_size = 64bit
compressed
From the generated SASS code above, we can clearly see that shared memory atomicAdd on int generates a single lightweight ATOMS.POPC.INC.32 RZ, [URZ], while on float it generates a whole bunch of SASS around a heavyweight ATOMS.CAST.SPIN R3, [RZ], R2, R3.
The CUDA Binary Utilities documentation doesn't tell me the meaning of CAST or SPIN. However, I would guess it means an exclusive spin lock on a shared memory address. (Correct me if my guess is wrong.)
In my real code, none of the SASS generated for atomicAdd on int shows up as a hotspot. However, this ATOMS.CAST.SPIN is significantly hotter than the other SASS code generated for atomicAdd on float.
In addition, I tested with the compiler flags -arch=sm_86, -arch=sm_80 and -arch=sm_75. Under those compute capabilities, the generated SASS code for atomicAdd on float is very similar. Another fact is that, unsurprisingly, atomicAdd on double generates SASS much like that for float.
This observation left me with more confusion than concrete questions, but let me start with some simple ones from my profiling experience, in the hope of a good discussion.
What exactly does ATOMS.CAST.SPIN do? The only SASS documentation I am aware of is the CUDA Binary Utilities.
Why does atomicAdd on float generate more SASS code and do more work than on int? I know this is a general question and hard to answer. Maybe ATOMS.POPC.INC simply doesn't apply to the float or double data types?
Is atomicAdd on float more prone to shared memory load/store conflicts, and thus to more stall time, than atomicAdd on int? The former clearly has more instructions to execute and divergent branches. I have the following code snippet in my project where the number of calls to the two functions is the same; however, atomicAdd on float becomes a runtime bottleneck while atomicAdd on int does not.
atomicAdd(&(SM_INT), ((int)1)); // no hotspot
atomicAdd(&(SM_FLOAT), ((float)1.1)); // a hotspot
I probably won't be able to provide an answer addressing every possible question. CUDA SASS is really not documented to the level needed to explain these things.
What exactly does ATOMS.CAST.SPIN do? The only SASS documentation I am aware of is the CUDA Binary Utilities.
ATOMS.CAST.SPIN
^^^^^ ^^^^
  |     |
  |     compare and swap
  |
shared atomic
The programming guide gives an indication of how one can implement an "arbitrary" atomic operation, using atomic CAS (Compare And Swap). You should first familiarize yourself with how atomic CAS works.
Regarding the "arbitrary atomic" example, the thing to note is that it can evidently be used to provide atomic operations for e.g. datatypes that are not supported by a "native" atomic instruction, such as atomic add. Another thing to note is that it is essentially a loop around an atomic CAS instruction, with the loop checking to see if the operation was "successful" or not. If it was "unsuccessful", the loop continues. If it was "successful", the loop exits.
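For illustration, this is roughly what that looping method looks like when written by hand (a sketch of an atomicAdd for float built only from atomicCAS, modeled on the programming guide's "arbitrary atomic" example; the toolchain's generated loop is analogous, not identical):

__device__ float atomicAddViaCAS(float *address, float val)
{
    int *address_as_int = (int *)address;
    int old = *address_as_int, assumed;
    do {
        assumed = old;
        // Try to swap in (assumed + val); the CAS only succeeds if no other
        // thread modified *address in the meantime.
        old = atomicCAS(address_as_int, assumed,
                        __float_as_int(val + __int_as_float(assumed)));
    } while (assumed != old);   // "unsuccessful" -> loop and try again
    return __int_as_float(old);
}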
This is effectively what we see depicted in SASS code in your float example:
/*0030*/ LDS R2, [RZ] ; // get the current value in the location
FADD R3, R2, 1.1000000238418579102 ; // perform ordinary floating-point add
ATOMS.CAST.SPIN R3, [RZ], R2, R3 ; // attempt to atomically replace the result in the location
ISETP.EQ.U32.AND P0, PT, R3, 0x1, PT ; // check if replacement was successful
#!P0 BRA 0x30 // if not, loop and try again
These are essentially the steps that are outlined in the "arbitrary atomic" example in the programming guide. Based on this I would conclude the following:
the architecture you compiled for does not actually have a "native" atomic operation of the type you are requesting
the atomic operation you are requesting can be done using the looping method
the compiler tool chain (typically ptxas, but could also be the JIT system), as a convenience feature, is automatically implementing this looping method for you, rather than throwing a compile error
Why does atomicAdd on float generate more SASS code and do more work than on int?
Evidently, the architecture you are compiling for does not have a "native" implementation of atomic add for float, and so the compiler tool chain has chosen to implement this looping method for you. Since the loop effectively involves the possibility of success/failure which will determine whether this loop continues, and success/failure depends on other threads behavior (contention to perform the atomic), the looping method may do considerably more "work" than a native single instruction will.
Is atomicAdd on float more prone to shared memory load/store conflicts, and thus to more stall time, than atomicAdd on int?
Yes, I personally would conclude that the native atomic method is more efficient, and the looping method may be less efficient, which could be expressed in a variety of ways in the profiler, such as warp stalls.
EDIT:
It's possible for things to be implemented/available in one GPU architecture but not another. This is certainly applicable to atomics, and you can see examples of this if you read the previously linked section on atomics in the programming guide. I don't know of any architectures today that are "newer" than cc8.0 or cc8.6 (Ampere) but it is certainly possible that the behavior of a future (or any other) GPU could be different here.
This loop-around-atomicCAS method is distinct from a previous methodology (lock/update/unlock, which also involves a loop for lock negotiation) the compiler toolchain used on Kepler and prior architectures to provide atomics on shared memory when no formal SASS instructions existed to do so.

Trap memory accesses inside a standard executable built with MinGW

So my problem sounds like this.
I have some platform dependent code (embedded system) which writes to some MMIO locations that are hardcoded at specific addresses.
I compile this code with some management code inside a standard executable (mainly for testing) but also for simulation (because it takes longer to find basic bugs inside the actual HW platform).
To get around the hardcoded pointers, I just redefine them as variables inside a memory pool, and this works really well.
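Roughly like this (a hypothetical sketch with invented names and addresses, not my actual code):

#include <stdint.h>

#ifdef SIMULATION
/* Plain RAM backing the "registers" when building the test executable */
static uint32_t sim_regs[64];
#define STATUS_REG  (*(volatile uint32_t *)&sim_regs[0])
#define CTRL_REG    (*(volatile uint32_t *)&sim_regs[1])
#else
/* Hardcoded MMIO addresses on the real hardware */
#define STATUS_REG  (*(volatile uint32_t *)0x40001000u)
#define CTRL_REG    (*(volatile uint32_t *)0x40001004u)
#endif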
The problem is that there is specific hardware behavior on some of the MMIO locations (w1c for example) which makes "correct" testing hard to impossible.
These are the solutions I thought of:
1 - Somehow redefine the accesses to those registers and try to insert some immediate function to simulate the dynamic behavior. This is not really usable since there are various ways to write to the MMIO locations (pointers and stuff).
2 - Somehow leave the addresses hardcoded and trap the illegal access through a seg fault, find the location that triggered, extract exactly where the access was made, handle and return. I am not really sure how this would work (and even if it's possible).
3 - Use some sort of emulation. This will surely work, but it will void the whole purpose of running fast and native on a standard computer.
4 - Virtualization ?? Probably will take a lot of time to implement. Not really sure if the gain is justifiable.
Does anyone have any idea if this can be accomplished without going too deep? Maybe there is a way to manipulate the compiler to define a memory area for which every access generates a callback? I'm not really an expert in x86/gcc stuff.
Edit: It seems that it's not really possible to do this in a platform-independent way, and since it will be Windows only, I will use the available API (which seems to work as expected). I found this question here:
Is set single step trap available on win 7?
I will put the whole "simulated" register file inside a number of pages, guard them, and trigger a callback from which I will extract all the necessary info, do my stuff and then continue execution.
Thanks all for responding.
I think #2 is the best approach. I routinely use approach #4, but I use it to test code that is running in the kernel, so I need a layer below the kernel to trap and emulate the accesses. Since you have already put your code into a user-mode application, #2 should be simpler.
The answers to this question may provide help in implementing #2. How to write a signal handler to catch SIGSEGV?
What you really want to do, though, is to emulate the memory access and then have the segv handler return to the instruction after the access. This sample code works on Linux. I'm not sure if the behavior it is taking advantage of is undefined, though.
#include <stdint.h>
#include <stdio.h>
#include <signal.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static void segv_handler(int, siginfo_t *, void *);

int main()
{
    struct sigaction action = { 0, };
    action.sa_sigaction = segv_handler;
    action.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &action, NULL);

    // force sigsegv
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %d\n", a);
    return 0;
}

static void segv_handler(int, siginfo_t *info, void *ucontext_arg)
{
    ucontext_t *ucontext = static_cast<ucontext_t *>(ucontext_arg);
    ucontext->uc_mcontext.gregs[REG_RAX] = 1234;
    ucontext->uc_mcontext.gregs[REG_RIP] += 2;
}
The code to read the register is written in assembly to ensure that both the destination register and the length of the instruction are known.
This is what the Windows version of prl's answer could look like:
#include <stdint.h>
#include <stdio.h>
#include <windows.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *);

int main()
{
    SetUnhandledExceptionFilter(segv_handler);

    // force sigsegv
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %d\n", a);
    return 0;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *ep)
{
    // only handle read access violation of REG_ADDR
    if (ep->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION ||
        ep->ExceptionRecord->ExceptionInformation[0] != 0 ||
        ep->ExceptionRecord->ExceptionInformation[1] != (ULONG_PTR)REG_ADDR)
        return EXCEPTION_CONTINUE_SEARCH;

    ep->ContextRecord->Rax = 1234;
    ep->ContextRecord->Rip += 2;

    return EXCEPTION_CONTINUE_EXECUTION;
}
So, the solution (code snippet) is as follows:
First of all, I have a variable:
__attribute__ ((aligned (4096))) int g_test;
Second, inside my main function, I do the following:
AddVectoredExceptionHandler(1, VectoredHandler);
DWORD old;
VirtualProtect(&g_test, 4096, PAGE_READWRITE | PAGE_GUARD, &old);
The handler looks like this:
LONG WINAPI VectoredHandler(struct _EXCEPTION_POINTERS *ExceptionInfo)
{
    static DWORD last_addr;

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_GUARD_PAGE_VIOLATION) {
        last_addr = ExceptionInfo->ExceptionRecord->ExceptionInformation[1];
        ExceptionInfo->ContextRecord->EFlags |= 0x100; /* Single step to trigger the next one */
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_SINGLE_STEP) {
        DWORD old;
        VirtualProtect((PVOID)(last_addr & ~PAGE_MASK), 4096, PAGE_READWRITE | PAGE_GUARD, &old);
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    return EXCEPTION_CONTINUE_SEARCH;
}
This is only a basic skeleton for the functionality. Basically, I guard the page on which the variable resides, and I keep some linked lists holding pointers to the callback functions and the addresses in question. I check that the fault-generating address is in my list and then trigger the callback.
On the first guard hit, the page protection is disabled by the system, but I can call my PRE_WRITE callback, where I save the variable's state. Because a single step is requested through EFlags, it is followed immediately by a single-step exception (which means the variable has just been written), and there I can trigger the WRITE callback. All the data required for the operation is contained in the ExceptionInformation array.
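The lookup itself could be sketched roughly like this (names invented; I actually use linked lists, a fixed array just keeps the sketch short - only reg_cbk_t and its values correspond to the hook example further down):

#include <stddef.h>
#include <stdint.h>

typedef enum { REG_READ, REG_PRE_WRITE, REG_WRITE } reg_cbk_t;
typedef void (*reg_hook_t)(reg_cbk_t type);

struct reg_monitor {
    uintptr_t  addr;   /* guarded "register" address */
    reg_hook_t hook;   /* callback to invoke for accesses to it */
};

static struct reg_monitor g_monitors[16];
static size_t g_monitor_count;

/* Called from the exception handler with ExceptionInformation[1] */
static reg_hook_t find_hook(uintptr_t fault_addr)
{
    for (size_t i = 0; i < g_monitor_count; i++) {
        if (g_monitors[i].addr == fault_addr)
            return g_monitors[i].hook;
    }
    return NULL; /* not one of the simulated registers */
}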
When someone tries to write to that variable:
*(int *)&g_test = 1;
A PRE_WRITE followed by a WRITE will be triggered.
When I do:
int x = *(int *)&g_test;
A READ will be issued.
In this way I can manipulate the data flow in a way that does not require modifications to the original source code.
Note: This is intended to be used as part of a test framework and any penalty hit is deemed acceptable.
For example, a W1C (write-1-to-clear) operation can be implemented like this:
void MYREG_hook(reg_cbk_t type)
{
    /** We need to save the pre-write state.
     * This is safe since we are assured to be called with
     * both PRE_WRITE and WRITE in the correct order
     */
    static int pre;

    switch (type) {
    case REG_READ:      /* Called pre-read */
        break;
    case REG_PRE_WRITE: /* Called pre-write */
        pre = g_test;
        break;
    case REG_WRITE:     /* Called after write */
        g_test = pre & ~g_test; /* W1C */
        break;
    default:
        break;
    }
}
This was also possible with seg-faults on illegal addresses, but I had to take one for each R/W and keep track of a "virtual register file", so the penalty hit was bigger. This way I can guard only specific areas of memory, or none, depending on the registered monitors.

Why is this loop not executed?

I compile with GCC 5.3 (2016q1) for an STM32 microcontroller.
Right at the beginning of main I placed a small routine to fill the stack with a pattern. Later I search for the highest address that still holds this pattern to find out about stack usage; you surely know the technique. Here is my routine:
uint32_t* Stack_ptr = 0;
uint32_t Stack_bot;
uint32_t n = 0;

asm volatile ("str sp, [%0]" :: "r" (&Stack_ptr));

Stack_bot = (uint32_t)(&_estack - &_Min_Stack_Size);

//*
n = 0;
while ((uint32_t)(Stack_ptr) > Stack_bot)
{
    Stack_ptr--;
    n++;
    *Stack_ptr = 0xAA55A55A;
} // */
After that I initialize hardware, also a UART and print out values of Stack_ptr, Stack_bot and n and then stack contents.
The results are 0x20007FD8 0x20007C00 0
Stack_bot is the expected value, because I have 0x400 bytes of stack in 32k of RAM starting at 0x20000000. But I would expect Stack_ptr to be 0x20008000 and n somewhat under 0x400 after the loop has finished. Also, the stack contents show no entries of 0xAA55A55A. This means the loop is not executed.
I could only manage to get it executed by creating a small function that holds the above routine and disabling optimization for that function.
Does anybody know why that is? The strangest thing about it is that I could swear it worked a few days ago; I saw a lot of 0xAA55A55A in the stack dump.
Thanks a lot
Martin
The problem is probably with the inline assembler statement: it only passes &Stack_ptr as an input operand and has no "memory" clobber, so the optimizer is free to assume Stack_ptr is still 0 and to drop the loop entirely. In my code I use this:
// defined by linker script, pointing to end of static allocation
extern unsigned _heap_start;

void fill_heap(unsigned fill = 0x55555555) {
    unsigned *dst = &_heap_start;
    register unsigned *msp_reg;
    __asm__("mrs %0, msp\n" : "=r" (msp_reg));
    while (dst < msp_reg) {
        *dst++ = fill;
    }
}
It will fill the memory between _heap_start and the current SP.
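If you later want to measure how much of that area was never touched, a matching sketch (assuming the same fill value; not part of the original answer) could look like this:

// Counts how many words above _heap_start still hold the fill pattern,
// i.e. were never overwritten by the stack growing down towards the heap.
unsigned untouched_words(unsigned fill = 0x55555555) {
    unsigned *p = &_heap_start;
    register unsigned *msp_reg;
    __asm__("mrs %0, msp\n" : "=r" (msp_reg));
    unsigned n = 0;
    while (p < msp_reg && *p == fill) {
        ++p;
        ++n;
    }
    return n;
}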

Need help understanding stack frame layout

While implementing a stack walker for a debugger I am working on, I reached the point of extracting the arguments of a function call and displaying them. To keep it simple I started with the cdecl convention in pure 32-bit (both debugger and debuggee) and a function that takes 3 parameters. However, I cannot understand why the arguments in the stack trace are out of order compared to what cdecl defines (right-to-left, nothing in registers), despite trying to figure it out for a few days now.
Here is a representation of the function call I am trying to stack trace:
void Function(unsigned long long a, const void * b, unsigned int c) {
    printf("a=0x%llX, b=%p, c=0x%X\n", a, b, c);
    _asm { int 3 }; /* Because I don't have stepping or dynamic breakpoints implemented yet */
}

int main(int argc, char* argv[]) {
    Function(2, (void*)0x7A3FE8, 0x2004);
    return 0;
}
This is what the function (unsurprisingly) printed to the console:
a=0x2, b=0x7a3fe8, c=0x2004
This is the stack trace generated at the breakpoint (the debugger catches the breakpoint and there I try to walk the stack):
0x3EF5E0: 0x10004286 /* previous pc */
0x3EF5DC: 0x3EF60C /* previous fp */
0x3EF5D8: 0x7A3FE8 /* arg b --> Wait... why is b _above_ c here? */
0x3EF5D4: 0x2004 /* arg c */
0x3EF5D0: 0x0 /* arg a, upper 32 bit */
0x3EF5CC: 0x2 /* arg a, lower 32 bit */
The code that's responsible for dumping the stack frames (implemented using the DIA SDK, though, I don't think that is relevant to my problem) looks like this:
ULONGLONG stackframe_top = 0;
m_frame->get_base(&stackframe_top); /* IDiaStackFrame */

/* dump 30 * 4 bytes */
for (DWORD i = 0; i < 30; i++)
{
    ULONGLONG address = stackframe_top - (i * 4);
    DWORD value;
    SIZE_T read_bytes;
    if (ReadProcessMemory(m_process, reinterpret_cast<LPVOID>(address), &value, sizeof(value), &read_bytes) == TRUE)
    {
        debugprintf(L"0x%llX: 0x%X\n", address, value); /* wrapper around OutputDebugString */
    }
}
I am compiling the test program without any optimization in vs2015 update 3.
I have validated that I am indeed compiling it as cdecl by looking in the pdb with the dia2dump sample application.
I do not understand what is causing the stack to look like this, it doesn't match anything I learned, nor does it match the documentation provided by Microsoft.
I also checked google a whole lot (including osdev wiki pages, msdn blog posts, and so on), and checked my (by now probably outdated) books on 32-bit x86 assembly programming (that were released before 64-bit CPUs existed).
Thank you very much in advance for any explanations or links!
I had somehow misunderstood where the arguments to a function call end up in memory compared to the base of the stack frame, as pointed out by Raymond. This is the fixed code snippet:
ULONGLONG stackframe_top = 0;
m_frame->get_base(&stackframe_top); /* IDiaStackFrame */

/* dump 30 * 4 bytes */
for (DWORD i = 0; i < 30; i++)
{
    ULONGLONG address = stackframe_top + (i * 4); /* <-- Read before the stack frame */
    DWORD value;
    SIZE_T read_bytes;
    if (ReadProcessMemory(m_process, reinterpret_cast<LPVOID>(address), &value, sizeof(value), &read_bytes) == TRUE)
    {
        debugprintf(L"0x%llX: 0x%X\n", address, value); /* wrapper around OutputDebugString */
    }
}

STM32F1 RTC_EnterConfigMode not always setting Config Mode

Preface: This is all being evaluated when attached to the target using an ST-Link and in debug mode in IAR Embedded Workbench IDE.
The Real Time Clock in the STM32F1 is supported in the Standard Peripheral Libraries provided by STM. I'm trying to set the RTC to 107301722, or "Sat, 26 May 2013 22:02:02 GMT", using RTC_SetCounter().
void RTC_SetCounter(uint32_t CounterValue) /* From Std Periph Lib */
{
    RTC_EnterConfigMode();
    /* Set RTC COUNTER MSB word */
    RTC->CNTH = CounterValue >> 16;
    /* Set RTC COUNTER LSB word */
    RTC->CNTL = (CounterValue & RTC_LSB_MASK);
    RTC_ExitConfigMode();
}
Note that it calls RTC_EnterConfigMode(), which is a requirement for modifying RTC register values: "To write in the RTC_PRL, RTC_CNT, RTC_ALR registers, the peripheral must enter Configuration Mode. This is done by setting the CNF bit in the RTC_CRL register."
void RTC_EnterConfigMode(void) /* From Std Periph Lib */
{
    /* Set the CNF flag to enter in the Configuration Mode */
    RTC->CRL |= RTC_CRL_CNF;
}
This is the code for entering config mode. Simple enough. And here's the disassembly (no optimizations are enabled). The 0x10 is the bit mask of the CNF flag.
// RTC->CRL |= RTC_CRL_CNF;
RTC_EnterConfigMode:
0x8053ed6: 0x4829 LDR.N R0, ??DataTable13_1 ; RTC_CRL
0x8053ed8: 0x8800 LDRH R0, [R0]
0x8053eda: 0xf050 0x0010 ORRS.W R0, R0, #16 ; 0x10
0x8053ede: 0x4927 LDR.N R1, ??DataTable13_1 ; RTC_CRL
0x8053ee0: 0x8008 STRH R0, [R1]
//}
0x8053ee2: 0x4770 BX LR
What I've found is that if I break anywhere from the call to RTC_SetCounter() up to the disassembly at line 0x8053ee0, Config Mode gets enabled, but if I move the breakpoint to the disassembly at line 0x8053ee2 or later, Config Mode does not get set, and therefore the RTC does not get set.
I have not tried anything in the realm of trying to analyze what happens in a non-debug setting simply because part of what I'm working toward is a unit test involving setting the time. The unit test will require debugger attachment.
Is this strictly a debugger problem? Are there any rational reasons to explain this behavior that could lead to a workable solution?
It turns out that I had overlooked a very important function that is provided to allow pending RTC register operations to finish: RTC_WaitForLastTask().
/**
 * @brief  Waits until last write operation on RTC registers has finished.
 * @note   This function must be called before any write to RTC registers.
 * @param  None
 * @retval None
 */
void RTC_WaitForLastTask(void)
{
    /* Loop until RTOFF flag is set */
    while ((RTC->CRL & RTC_FLAG_RTOFF) == (uint16_t)RESET)
    {
    }
}
If I had paid more attention to the other register flags that were set in RTC_CRL, I might have noticed that RTOFF was an issue.
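For reference, the calling sequence I ended up with looks roughly like this (a sketch, not my exact code), following the pattern used in ST's RTC examples:

#include "stm32f10x_rtc.h"   /* Std Periph Lib RTC driver */

void rtc_set_time(uint32_t counter_value)
{
    /* Per the note above: wait for any pending write before touching RTC registers */
    RTC_WaitForLastTask();

    /* Enters config mode, writes CNTH/CNTL, exits config mode */
    RTC_SetCounter(counter_value);

    /* Wait until RTOFF is set again, i.e. the write has actually completed */
    RTC_WaitForLastTask();
}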
