Any way to set a breakpoint in GDB without stopping the target?

I'm using GDB (through an IDE) to debug an arm cortex microcontroller, and I've encountered an issue where the micro is halted briefly (for 10-20ms) whenever a breakpoint is set or cleared (not hit, but set, as in the code has not yet reached the breakpoint). This long of a pause can cause significant problems when driving an electric motor, for example.
The IDE has a debug console which shows that the GDB client is sending a SIGINT to the GDB server whenever I add or remove a breakpoint. I know that in the command-line client you have to use ctrl+c to interrupt the process to issue any command, but for modern microcontrollers (ARM cortex-m, etc.) it is not necessary to interrupt the processor to insert breakpoints, read memory, and in some cases to trace program execution. I am wondering if this is something that is being imposed by the GDB interface artificially.
Is there any way to create a new breakpoint without halting the target?
I have tried using "async" mode in GDB, but it informs me that I must halt the program to insert breakpoints. I have also verified that breakpoints can be set with the underlying debug server (OpenOCD) without halting, so in this case GDB is incorrect.
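For reference, GDB's non-stop mode is the closest built-in mechanism, and breakpoints can also be driven through OpenOCD directly with the monitor command. Whether either path avoids the halt depends on the debug stub; the port, address, and length below are placeholders:

```
# GDB side: enable non-stop mode before connecting (stub support varies)
set non-stop on
target extended-remote :3333

# Or bypass GDB's breakpoint machinery and ask OpenOCD directly
# via its 'bp' command (bp address length ['hw']):
monitor bp 0x08001234 2 hw
```

Note that a breakpoint planted via the monitor command is invisible to GDB's own breakpoint list, so GDB will not report it as a numbered breakpoint when it is hit.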
Any input is appreciated, and thanks in advance.

Related

Breakpoints not being hit in remote linux kernel debugging using gdb

I am trying to remotely debug a linux kernel running on an arm cortex-a9 target using a JTAG probe and GDB.
I can connect to the kernel and halt it with gdb. I am able to set breakpoints in the kernel code and gdb confirms their placement as well, but the problem is that once I start the execution and issue a continue command, the breakpoints are never hit and the kernel continues to run....
Please help me out in this regard.
Thanks.
As pointed out in this thread, you should set the breakpoints as hardware breakpoints, namely using the hbreak command. Only then will the breakpoints be hit.
For anyone reading this, the debugger will not break with software breakpoints by default, see the relevant doc:
"If the architecture that you are using supports the kernel option CONFIG_STRICT_KERNEL_RWX, you should consider turning it off. This option will prevent the use of software breakpoints because it marks certain regions of the kernel’s memory space as read-only. If kgdb supports it for the architecture you are using, you can use hardware breakpoints if you desire to run with the CONFIG_STRICT_KERNEL_RWX option turned on, else you need to turn off this option."
at https://www.kernel.org/doc/html/v4.14/dev-tools/kgdb.html
Disable CONFIG_STRICT_KERNEL_RWX and recompile; software breakpoints should then work (they started working normally here after this change).
In some cases, KASLR (Kernel Address Space Layout Randomization) can be the culprit.
Even though you set up hbreak, the actual code location can differ from the address seen in the .elf file when KASLR is in use, so either pass --append "nokaslr" to the kernel boot arguments, or configure the kernel with RANDOMIZE_BASE=n. This applies to arm64 and x86_64 (and possibly other architectures too).
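Putting the two fixes together, a kgdb session might look like this sketch (device paths, baud rate, and the symbol name are placeholders):

```
# Target kernel command line: disable KASLR so addresses match vmlinux
nokaslr kgdboc=ttyS0,115200 kgdbwait

# Host side
(gdb) file vmlinux
(gdb) target remote /dev/ttyUSB0
(gdb) hbreak do_sys_open
(gdb) continue
```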

Should Atollic Studio be pausing on interrupts while debugging?

I'm using Atollic TrueSTUDIO for ARM 5.0.0 Lite for debugging an STM32F3 application via the SWD debug interface. The application receives data via interrupts from a USART.
When I "step over" a relatively long function, the application doesn't pause, i.e. the program does not reach the line after the call. When I then manually pause the application, I find it to be at the entry of the USART ISR, so I concluded that the execution was paused, even though Atollic's debugger didn't recognize it.
The bigger problem is that the same happens when I simply resume: I can't run my application with the debugger attached, as every byte on the USART pauses it.
Is my analysis of the situation correct? Is this the expected behavior, and is there a way to work around it? Non-Atollic specific answers are also very welcome!
To be honest, I couldn't form a clear picture in my mind of what's really going on, but here's a possibility: you're not clearing the proper flags using the USART_ClearITPendingBit() function call from the standard peripheral library, or its equivalent in terms of direct register access. If you don't clear the proper bits, as soon as you return from the ISR, the hardware executes it again, so it looks like you're in an infinite loop inside the ISR.

Assembly code breakpoint does not work as expected

I am developing a STM32F2 device using GCC 4.7.4 and a Lauterbach Combiprobe JTAG debugger. In my code, I have the following statement to always break at a certain spot for testing purposes:
asm volatile ("BKPT #0");
This is the only breakpoint. When I run the program, I can see that my program reaches the breakpoint, but I cannot step beyond this breakpoint using my JTAG debugger. Instead, I have to move the program counter (PC) past this instruction to get the program to execute.
This was working in the past, but I am at a loss to figure out why the behavior is different now. Any clues or hints would be appreciated.
There are many broken JTAG debuggers. Perhaps you installed an update for the JTAG adapter?
If you are using GDB as the debugger, you might check whether you can add a macro like set PC=PC+4 to a button or key, but whether this is possible depends on your IDE.
There is no general rule for what happens with the program counter when you have a breakpoint instruction in your application code. Some CPUs stop at the address containing the breakpoint instruction; others stop after the breakpoint instruction.
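On a Cortex-M part in Thumb state, BKPT is a 2-byte instruction, so a GDB convenience command for hopping over it might look like this (the command name is made up):

```
# In .gdbinit: skip past a BKPT instruction the core is stopped on
define skip_bkpt
  set $pc = $pc + 2
end
```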
Since you use the tag "lauterbach" I assume you are using a TRACE32 debugger from Lauterbach.
If you think the debugger's behavior was different in the past, I think you should contact Lauterbach support.
For now you can work around the issue with the following TRACE32 command:
Break.Set T:0x1000 /Program /CMD "Register.Set PP r(PP)+2"
(where 0x1000 stands for the address where your BKPT instruction is located.)

How do debuggers/exceptions work on a compiled program?

A debugger makes perfect sense when you're talking about an interpreted program, because instructions always pass through the interpreter for verification before execution. But how does a debugger for a compiled application work? If the instructions are already laid out in memory and run, how can I be notified that a 'breakpoint' has been reached, or that an 'exception' has occurred?
With the help of hardware and/or the operating system.
Most modern CPUs have several debug registers that can be set to trigger a CPU exception when a certain address is reached. They often also support address watchpoints, which trigger exceptions when the application reads from or writes to a specified address or address range, and single-stepping, which causes a process to execute a single instruction and throw an exception. These exceptions can be caught by a debugger attached to the program (see below).
Alternatively, some debuggers create breakpoints by temporarily replacing the instruction at the breakpoint with an interrupt or trap instruction (thereby also causing the program to raise a CPU exception). Once the breakpoint is hit, the debugger replaces it with the original instruction and single-steps the CPU past that instruction so that the program behaves normally.
As far as exceptions go, that depends on the system you're working on. On UNIX systems, debuggers generally use the ptrace() system call to attach to a process and get a first shot at handling its signals.
TL;DR - low-level magic.

How does a debugger work?

I keep wondering how a debugger works? Particularly one that can be 'attached' to an already running executable. I understand that the compiler translates code to machine language, but then how does the debugger 'know' what it is being attached to?
The details of how a debugger works will depend on what you are debugging, and what the OS is. For native debugging on Windows you can find some details on MSDN: Win32 Debugging API.
The user tells the debugger which process to attach to, either by name or by process ID. If it is a name then the debugger will look up the process ID, and initiate the debug session via a system call; under Windows this would be DebugActiveProcess.
Once attached, the debugger will enter an event loop much like for any UI, but instead of events coming from the windowing system, the OS will generate events based on what happens in the process being debugged – for example an exception occurring. See WaitForDebugEvent.
The debugger is able to read and write the target process' virtual memory, and even adjust its register values through APIs provided by the OS. See the list of debugging functions for Windows.
The debugger is able to use information from symbol files to translate from addresses to variable names and locations in the source code. The symbol file information is a separate set of APIs and isn't a core part of the OS as such. On Windows this is through the Debug Interface Access SDK.
If you are debugging a managed environment (.NET, Java, etc.) the process will typically look similar, but the details are different, as the virtual machine environment provides the debug API rather than the underlying OS.
As I understand it:
For software breakpoints on x86, the debugger replaces the first byte of the instruction with CC (int3). This is done with WriteProcessMemory on Windows. When the CPU gets to that instruction, and executes the int3, this causes the CPU to generate a debug exception. The OS receives this interrupt, realizes the process is being debugged, and notifies the debugger process that the breakpoint was hit.
After the breakpoint is hit and the process is stopped, the debugger looks in its list of breakpoints, and replaces the CC with the byte that was there originally. The debugger sets TF, the Trap Flag in EFLAGS (by modifying the CONTEXT), and continues the process. The Trap Flag causes the CPU to automatically generate a single-step exception (INT 1) on the next instruction.
When the process being debugged stops the next time, the debugger again replaces the first byte of the breakpoint instruction with CC, and the process continues.
I'm not sure if this is exactly how it's implemented by all debuggers, but I've written a Win32 program that manages to debug itself using this mechanism. Completely useless, but educational.
In Linux, debugging a process begins with the ptrace(2) system call. This article has a great tutorial on how to use ptrace to implement some simple debugging constructs.
If you're on a Windows OS, a great resource for this would be "Debugging Applications for Microsoft .NET and Microsoft Windows" by John Robbins:
http://www.amazon.com/dp/0735615365
(or even the older edition: "Debugging Applications")
The book has a chapter on how a debugger works that includes code for a couple of simple (but working) debuggers.
Since I'm not familiar with details of Unix/Linux debugging, this stuff may not apply at all to other OS's. But I'd guess that as an introduction to a very complex subject the concepts - if not the details and APIs - should 'port' to most any OS.
I think there are two main questions to answer here:
1. How does the debugger know that an exception occurred?
When an exception occurs in a process that’s being debugged, the debugger gets notified by the OS before any user exception handlers defined in the target process are given a chance to respond to the exception. If the debugger chooses not to handle this (first-chance) exception notification, the exception dispatching sequence proceeds further and the target thread is then given a chance to handle the exception if it wants to do so. If the SEH exception is not handled by the target process, the debugger is then sent another debug event, called a second-chance notification, to inform it that an unhandled exception occurred in the target process. (source)
2. How does the debugger stop on a breakpoint?
The simplified answer is: when you put a breakpoint into the program, the debugger replaces your code at that point with an int3 instruction, which is a software interrupt. In effect, the program is suspended and the debugger is called.
Another valuable source for understanding debugging is the Intel CPU manual (Intel® 64 and IA-32 Architectures Software Developer’s Manual). Volume 3A, Chapter 16 introduces the hardware support for debugging, such as special exceptions and hardware debugging registers. The following is from that chapter:
T (trap) flag, TSS — Generates a debug exception (#DB) when an attempt is made to switch to a task with the T flag set in its TSS.
I am not sure whether Windows or Linux uses this flag, but it is very interesting to read that chapter.
Hope this helps someone.
My understanding is that when you compile an application or DLL file, whatever it compiles to contains symbols representing the functions and the variables.
When you have a debug build, these symbols are far more detailed than in a release build, thus allowing the debugger to give you more information. When you attach the debugger to a process, it looks at which functions are currently being accessed and resolves all the available debugging symbols from there (since it knows what the internals of the compiled file look like, it can ascertain what might be in memory, including the contents of ints, floats, strings, etc.). Like the first poster said, this information and how these symbols work greatly depends on the environment and the language.
