I am trying to debug an ARM flash program on a target MCU using GDB.
I set up the GDB server on the target system (Cortex-M7) with JLinkGDBServer, and I have an ELF ready for debugging.
The first time around, it is fine to debug with the following:
> arm-none-eabi-gdb flash_program.elf
(gdb)> target remote localhost:2331 # connect to gdb server on target
(gdb)> load # since it is a flash program, jlink will flash the program
# target is reset to elf entry point
(gdb)> .... (debugging begins)
However, when debugging has progressed for a while and I want to start from the entry point again, the only way I have figured out is to flash again:
(gdb)> Ctrl+D # disconnect from the gdb server
> arm-none-eabi-gdb flash_program.elf
(gdb)> target remote localhost:2331
(gdb)> load
(gdb)> .... (debugging from start again)
This seems a bit redundant, and it also erases and programs the same flash area again and again; I am afraid I will end up wearing out the flash through my debugging.
The program has already been burned into the medium; I simply want the target to reset itself and run from the entry point again. I have tried things like monitor reset and run, but neither makes the M7 target start from the beginning.
Is there any other gdb command that I can try?
I used an STM32F103C8T6 for providing an answer, but you will just have to replace its vector table base address (0x20000000 in my case) with the one your Cortex-M7 uses: I loaded the initial value for the stack pointer from 0x20000000, and the initial value for the program counter from 0x20000000+4.
The program to be debugged, stm32f103c8t6.elf, was already flashed and contained the debug symbols.
arm-none-eabi-gdb
target remote localhost:2331
0x20000480 in ?? ()
(gdb) monitor halt
(gdb) monitor reset 0
Resets core & peripherals via SYSRESETREQ & VECTRESET bit.
(gdb) monitor reset 1
Resets the core only, not peripherals.
(gdb) monitor reset 2
Resets core & peripherals using RESET pin.
(gdb) symbol-file stm32f103c8t6.elf
Reading symbols from stm32f103c8t6.elf...
(gdb) set $sp = *(unsigned int *)0x20000000
(gdb) set $pc = *(unsigned int *)0x20000004
(gdb) stepi
0x200003c2 121 {
(gdb)
0x200003c4 121 {
(gdb) stepi
122 SystemInit(); /* CMSIS System Initialization */
(gdb)
SystemInit () at /opt/arm/ARM.CMSIS.5.6.0//Device/ARM/ARMCM3/Source/system_ARMCM3.c:61
61 {
(gdb)
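If you need to restart this way often, the whole sequence can be wrapped in a user-defined GDB command. A minimal sketch, assuming the same vector table base address as in the session above (replace 0x20000000 with yours):
define restart-target
    monitor reset 0
    set $sp = *(unsigned int *)0x20000000
    set $pc = *(unsigned int *)0x20000004
end
Afterwards, typing restart-target at the (gdb) prompt puts you back at the entry point without touching the flash.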
Depending on the type of reset strategy you want to use, you may have to make it explicit in the monitor reset command:
As explained in the Segger documentation and this great article, you can use strategy number 0, 1 or 2:
# Normal
monitor reset
monitor reset 0
# Core
monitor reset 1
# ResetPin
monitor reset 2
My understanding is that being able to use strategy #2 depends on how your RESET pin is wired, i.e. whether or not it is pulled down on your board.
Disclaimer: I am a software person, and all interpretation errors related to hardware-related questions are mine...
The gdb load command will flash the image, provided that you have not set up the link address specially.
You have two options:
Set up the link address / adjust the linker script so that the program lives entirely in RAM, or
Keep the address unchanged, but after each code change and compile, use load only once (so the flash gets programmed), and from then on use the symbol-file command to load only the symbols (see the sketch below).
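A sketch of that second workflow, reusing the file name from the original question:
> arm-none-eabi-gdb flash_program.elf
(gdb)> target remote localhost:2331
(gdb)> load                          # once, right after a rebuild: flash is programmed
(gdb)> .... (debugging)
(gdb)> monitor reset                 # any later restart: no reflashing
(gdb)> symbol-file flash_program.elf # reload symbols only, flash is untouched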
Is there a way to see the code that is being called by a syscall instruction with lldb, or otherwise, on the Mac?
I am trying to understand what goes on under the hood when a "write" syscall is made. I have compiled a simple .c program with gcc -g:
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    syscall(SYS_write, 1, "hello, world!\n", 14);
    return 0;
}
lldb does not step into the syscall instruction even when I use:
s -a false
Is there any way?
No. If you were able to step into a kernel trap, the kernel would be stopped and the debugger would stop running as well. You can debug the kernel from a second system -- if you look for the Kernel Debug Kit on Apple's developer portal download site, there are instructions for how to do two-machine kernel debugging. The instructions are most likely aimed at people doing kernel extension (kext) development, but they'll get you in the right ballpark.
Your best bet (short of two machines) is to run macOS in a VM and then attach a kernel debugger over serial. You'll need to start the VM kernel with boot-args (debug=0x44 or a bit mask of your choice) and connect lldb from the host machine. There are plentiful resources on how to do that on the web; one of the most direct and comprehensive is Scott Knight's: https://knight.sc/debugging/2018/08/15/macos-kernel-debugging.html
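A rough sketch of the moving parts (the boot-args value comes from above; the KDK path and guest IP are placeholders you must fill in for your macOS version):
# inside the guest VM, then reboot
sudo nvram boot-args="debug=0x44"
# on the host, using the kernel from the matching Kernel Debug Kit
lldb /Library/Developer/KDKs/<version>.kdk/System/Library/Kernels/kernel
(lldb) kdp-remote <guest-ip>
The write-up linked above walks through the details, including the serial transport where KDP over UDP is not available.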
You can also figure it out from the code: all syscalls funnel to hndl_unix_scall64, which checks the syscall/mach-trap indicator (0x2000000 or 0x1000000), directs to unix_syscall64 (for the former), and then dispatches to the actual syscall from the table. In a backtrace it would look like:
frame #8: 0xffffff801e4ed8c3 kernel`read_nocancel + 115
frame #9: 0xffffff801e5b62bb kernel`unix_syscall64 + 619
frame #10: 0xffffff801df5c466 kernel`hndl_unix_scall64 + 22
Source: *OS Internals, Volume II, Chapter 4 (http://NewOSXBook.com)
I am developing firmware on various STM32L4 Nucleo boards with the Atollic TrueStudio IDE (basically Eclipse). Until now I was using printf through UART, thanks to the Virtual COM port.
I want to migrate to printf using STM32 ITM.
More precisely I work on Nucleo-L4A6ZG. Debug is through a gdb server.
On Atollic I modified my debug configuration to enable SWV with a core clock of 80 MHz. I've also modified my startup script as described in the STM32L4 reference manual, as follows. I'm not sure this is necessary, since TrueStudio/Eclipse allows setting up SWV from the GUI, but it seems easier this way:
# Set character encoding
set host-charset CP1252
set target-charset CP1252
# Reset to known state
monitor reset
# Load the program executable
load
# Reset the chip to get to a known state. Remove "monitor reset" command
# if the code is not located at default address and does not run by reset.
monitor reset
# Enable Debug connection in low power modes (DBGMCU->CR) + TPIU for SWV
set *0xE0042004 = (*0xE0042004) | 0x67
# Write 0xC5ACCE55 to the ITM Lock Access Register to unlock the write access to the ITM registers
set *0xE0000FB0 =0xC5ACCE55
# Write 0x00010005 to the ITM Trace Control Register to enable the ITM with Synchronous enabled and an ATB ID different from 0x00
set *0xE0000E80= 0x00010005
# Write 0x1 to the ITM Trace Enable Register to enable the Stimulus Port 0
set *0xE0000E00= (*0xE0000E00) | 0x1
# Write 1 to the ITM Trace Privilege Register to unmask stimulus ports 7:0
set *0xE0000E40= (*0xE0000E40) | 0x1
# Set a breakpoint at main().
tbreak main
# Run to the breakpoint.
continue
I've modified my _write function as follows:
static inline unsigned long ITM_SendChar (unsigned long ch)
{
  if (((ITM->TCR & ITM_TCR_ITMENA_Msk) != 0UL) &&  /* ITM enabled */
      ((ITM->TER & 1UL) != 0UL))                   /* ITM Port #0 enabled */
  {
    while (ITM->PORT[0U].u32 == 0UL)
    {
      __asm("nop");
    }
    ITM->PORT[0U].u8 = (uint8_t)ch;
  }
  return (ch);
}
int _write(int file, char *ptr, int len)
{
  //return usart_write(platform_get_console(), (u8 *)ptr, len);
  int i = 0;
  for (i = 0; i < len; i++)
    ITM_SendChar(*ptr++);
  return len;
}
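One thing to keep in mind with a printf routed through _write: newlib buffers stdout by default, so _write may not be called until the buffer fills or the stream is flushed. While testing, it can help to disable buffering at the top of main() (standard C, not specific to this code):
setvbuf(stdout, NULL, _IONBF, 0);  /* pass every character straight to _write */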
Debugging step by step, I see that I reach the line ITM->PORT[0U].u8 = (uint8_t)ch;.
Finally I start the trace in the SWV console in the IDE, but I get no output.
Any idea what I am missing? What about the core clock of the SWV? I'm not sure what it corresponds to.
I faced a similar situation on my Nucleo-F103RB. What got this working was selecting the "Trace Asynchronous" debug option in CubeMX rather than "Serial Wire". The asynchronous trace debug option dedicates the PB3 pin as the SWO pin.
Then setup the debug configuration as follows:
Project debug configuration to enable Serial Wire Viewer (SWV)
Also, I had to define the _write function inside the main.c file itself; changing the definition in syscalls.c wouldn't work.
And finally when debugging the project, under the "Serial Wire Viewer settings" only enable (check) port 0 on ITM Stimulus Ports, like so:
Serial Wire Viewer settings in the Debug perspective
One thing I noted: when I had enabled the prescaler for timestamps and some trace events, the trace output would drop quite a few trace logs.
Anyone else finding this - the Nucleo-32 line of Nucleo development boards inexplicably DOES NOT have the SWO pin routed to the MCU. The SWO pin is necessary for all the SWV features, so they will not work, by design. The higher pin-count Nucleo boards seem to have it routed.
See for yourself:
https://www.st.com/resource/en/user_manual/dm00231744-stm32-nucleo32-boards-mb1180-stmicroelectronics.pdf (Nucleo-32)
https://www.st.com/resource/en/user_manual/dm00105823-stm32-nucleo-64-boards-mb1136-stmicroelectronics.pdf (Nucleo-64)
Small form-factor Nucleo-32 boards generally do not support SWO/SWV, but there is one exception: the Nucleo-STM32G431KB. As of September 2021 it is probably the only small form-factor, quite powerful Nucleo-32 board supporting ST-LINK V3 and SWO. See the MB1430 schematic.
Although my response is only loosely related to the original question about the Nucleo-L4A6ZG (a large form-factor, 144-pin board), it may help someone finding this thread.
Just to mention that ITM printf works perfectly in the Keil IDE. Nothing particular to set up: just implement the ITM_SendChar function as shown in my first post and open the debug printf window.
I recently set up my system for kernel debugging using qemu+gdb. At present, I can set breakpoints at, for example, __do_page_fault() and trace the call via gdb (with the win command). Now I want to do the following task: take a simple C program with a "hello world" printf statement, and trace the call sequence starting from userspace down to the write() system call (or whatever in kernel space is invoked during the execution of that particular userspace program). I want to learn how a userspace program traps into a system call, specifically with respect to the Linux kernel.
Now my doubt is where to set the breakpoint. We have the kernel code as well as the C code of the program. How should I go about this? Please give an explanation with an example.
Thank you!
The easiest way, in my opinion, is to separate this into two pieces:
1. Place a breakpoint in the guest kernel using the host gdb.
2. Place a breakpoint in the user code before the trap instruction, using the in-guest target gdb; when it is hit, print the stack using the target (in-qemu) gdb. You will get the user-space stack trace.
3. Continue execution in the guest gdb.
4. The in-kernel breakpoint (set at stage 1) will be hit in the host gdb. Print the kernel stack trace.
P.S.
If your kernel continuously hits the breakpoint (the write syscall, for example, is definitely used widely), you can use a conditional breakpoint so that it only triggers when certain parameters are passed.
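For instance, in the host gdb attached to the guest kernel, something like this (a sketch only; the symbol and parameter names depend on your kernel version — on recent kernels the write path goes through ksys_write(fd, buf, count)):
(gdb) break ksys_write if fd == 1 && count == 14
(gdb) continue
Here the condition assumes a 14-byte message written to stdout, so the breakpoint stays quiet for every other write happening in the system.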
I have an application compiled using GCC for an STM32F407 ARM processor. The linker places it in flash, but it is executed from RAM. A small bootstrap program copies the application from flash to RAM and then branches to the application's ResetHandler.
memcpy(appRamStart, appFlashStart, appRamSize);
// run the application
__asm volatile (
"ldr r1, =_app_ram_start\n\t" // load a pointer to the application's vectors
"add r1, #4\n\t" // increment vector pointer to the second entry (ResetHandler pointer)
"ldr r2, [r1, #0x0]\n\t" // load the ResetHandler address via the vector pointer
// bit[0] must be 1 for THUMB instructions otherwise a bus error will occur.
"bx r2" // jump to the ResetHandler - does not return from here
);
This all works OK, except that when I try to debug the application from RAM (using GDB from Eclipse) the disassembly is incorrect. The curious thing is that the debugger gets the source code right, and will accept and halt on breakpoints that I have set. I can single-step the source code lines. However, when I single-step the assembly instructions, they make no sense at all, and the listing contains numerous undefined instructions. I'm assuming it is some kind of alignment problem, but it all looks correct to me. Any suggestions?
It is possible that GDB relies on the symbol table to determine the instruction set mode, which can be Thumb(2) or ARM. When you move the code to RAM, it probably can't find this information and falls back to ARM mode.
You can use set arm force-mode thumb in gdb to force Thumb-mode disassembly.
As a side note, if you get illegal instructions when debugging an ARM binary, this is generally the problem, unless the output is complete nonsense such as an attempt to disassemble data sections.
I personally find it strange that tools don't try a heuristic approach when disassembling ARM binaries. In the auto case it shouldn't be hard to try both modes and count the errors to decide, as a last resort, which mode to use.
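For reference, the relevant settings in a GDB session (both are documented GDB ARM-target options; fallback-mode applies only when the symbol table gives no hint, while force-mode overrides it entirely):
(gdb) set arm fallback-mode thumb
(gdb) set arm force-mode thumb
(gdb) x/8i $pc
After forcing Thumb mode, the disassembly at the current PC should line up with the source again.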
I'm learning about the Linux kernel, but I don't understand how the switch from user mode to kernel mode works in Linux. How does it work? Could you give me some advice, or point me to some links or books about this?
The only way a user-space application can explicitly initiate a switch to kernel mode during normal operation is by making a system call such as open, read, write, etc.
Whenever a user application calls one of these system call APIs with appropriate parameters, a software interrupt/exception (SWI) is triggered.
As a result of this SWI, control of the code execution jumps from the user application to a predefined location in the Interrupt Vector Table [IVT] provided by the OS.
This IVT contains an address for the SWI exception handler routine, which performs all the necessary steps required to switch the user application to kernel mode and start executing kernel instructions on behalf of the user process.
To switch from user mode to kernel mode you need to perform a system call.
If you just want to see what is going on under the hood, TLDP is your new friend: go and see the code (it is well documented; no additional knowledge is needed to understand the assembly code).
You are interested in:
movl $len,%edx # third argument: message length
movl $msg,%ecx # second argument: pointer to message to write
movl $1,%ebx # first argument: file handle (stdout)
movl $4,%eax # system call number (sys_write)
int $0x80 # call kernel
As you can see, a system call is just a wrapper around assembly code that raises an interrupt (0x80); as a result, the handler for this system call will be called.
Let's cheat a bit and use the C preprocessor here to build an executable (foo.S is the file where you put the code from the link above):
gcc -o foo -nostdlib foo.S
Run it via strace to ensure that we'll get what we write:
$ strace -t ./foo
09:38:28 execve("./foo", ["./foo"], 0x7ffeb5b771d8 /* 57 vars */) = 0
09:38:28 stat(NULL, Hello, world!
NULL) = 14
09:38:28 write(0, NULL, 14)
I just read through this, and it's a pretty good resource. It explains user mode and kernel mode, why changes happen, how expensive they are, and gives some interesting related reading.
https://blog.codinghorror.com/understanding-user-and-kernel-mode
Here's a short excerpt:
Kernel Mode
In Kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC.
User Mode
In User mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer will execute in user mode.