Assume a debugger (a common x86 ring-3 debugger such as OllyDbg, IDA, or GDB) sets a
software breakpoint at virtual address 0x1234.
This is accomplished by replacing whatever opcode byte is at 0x1234 with 0xCC.
Now let's assume the debuggee process executes this 0xCC instruction, which raises a
software exception, and the debugger catches it.
The debugger inspects memory contents and registers, does whatever it needs to do, and
now wants to resume the debuggee process.
That much I know; from here on it is my assumption.
The debugger restores the debuggee's original opcode (the one that was replaced with
0xCC) in order to resume execution.
The debugger sets EIP in the debuggee's CONTEXT to point at the restored instruction.
The debugger handles the exception, and the debuggee resumes from the breakpoint.
But the debugger wants the breakpoint to remain.
How can the debugger manage this?
To answer the original question directly, from the GDB internals manual:
When the user says to continue, GDB will restore the original
instruction, single-step, re-insert the trap, and continue on.
In short, in plain words:
Entering and leaving debug state is atomic on both x86 and ARM; the processor enters and exits debug state just as it executes any other instruction in the architecture.
The GDB documentation explains how this works and how it can be used.
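To make that concrete, here is a minimal sketch of the restore / single-step / re-insert cycle for a Linux x86-64 debugger built on ptrace. It is illustrative only: error handling is omitted, and child (an already-attached, stopped tracee), bp_addr and orig_word (the text word saved when the breakpoint was planted) are assumed to exist.

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <stdint.h>

/* Sketch only: resume a tracee that is stopped on a 0xCC breakpoint. */
void resume_from_breakpoint(pid_t child, uintptr_t bp_addr, long orig_word)
{
    struct user_regs_struct regs;
    int status;

    /* 1. Put the original instruction bytes back. */
    ptrace(PTRACE_POKETEXT, child, (void *)bp_addr, (void *)orig_word);

    /* 2. Rewind RIP to the breakpoint address: the INT3 has already
     *    executed, so RIP points one byte past it. */
    ptrace(PTRACE_GETREGS, child, NULL, &regs);
    regs.rip = bp_addr;
    ptrace(PTRACE_SETREGS, child, NULL, &regs);

    /* 3. Execute just that one (restored) instruction. */
    ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
    waitpid(child, &status, 0);

    /* 4. Re-arm the breakpoint: overwrite the first byte with 0xCC again
     *    (low byte of the word, since x86 is little-endian). */
    ptrace(PTRACE_POKETEXT, child, (void *)bp_addr,
           (void *)((orig_word & ~0xFFL) | 0xCC));

    /* 5. Let the tracee run freely. */
    ptrace(PTRACE_CONT, child, NULL, NULL);
}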
Here are some highlights from the ARM and x86 documentation:
In ARM:
SW (Software) breakpoints are implemented by temporarily replacing the
instruction opcode at the breakpoint location with a special
"breakpoint" instruction immediately prior to stepping or executing
your code. When the core executes the breakpoint instruction, it will
be forced into debug state. SW breakpoints can only be placed in RAM
because they rely on modifying target memory.
A HW (Hardware) breakpoint is set by programming a watchpoint unit to monitor the core
busses for an instruction fetch from a specific memory location. HW
breakpoints can be set on any location in RAM or ROM. When debugging
code where instructions are copied (Scatterloading), modified or the
processor MMU remaps areas of memory, HW breakpoints should be used.
In these scenarios SW breakpoints are unreliable as they may be either
lost or overwritten.
In x86:
The way software breakpoints work is fairly simple. Speaking about x86
specifically, to set a software breakpoint, the debugger simply writes
an int 3 instruction (opcode 0xCC) over the first byte of the target
instruction. This causes an interrupt 3 to be fired whenever execution
is transferred to the address you set a breakpoint on. When this
happens, the debugger “breaks in” and swaps the 0xCC opcode byte with
the original first byte of the instruction when you set the
breakpoint, so that you can continue execution without hitting the
same breakpoint immediately. There is actually a bit more magic
involved that allows you to continue execution from a breakpoint and
not hit it immediately, but keep the breakpoint active for future use;
I’ll discuss this in a future posting.
Hardware breakpoints are, as you might imagine given the name, set
with special hardware support. In particular, for x86, this involves a
special set of perhaps little-known registers known as the “Dr”
registers (for debug register). These registers allow you to set up to
four (for x86, this is highly platform specific) addresses that, when
either read, read/written, or executed, will cause the processor to
throw a special exception that causes execution to stop and control to
be transferred to the debugger.
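As an illustration of those debug registers, here is a minimal sketch (Windows, error handling omitted; hThread and bp_addr are assumptions) of how a debugger could program an execution breakpoint into slot 0, i.e. DR0/DR7, of a suspended target thread:

#include <windows.h>

/* Sketch only: hardware execution breakpoint at bp_addr, debug-register slot 0. */
void set_hw_exec_breakpoint(HANDLE hThread, DWORD_PTR bp_addr)
{
    CONTEXT ctx = {0};
    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    GetThreadContext(hThread, &ctx);

    ctx.Dr0 = bp_addr;   /* address to break on */
    ctx.Dr7 |= 1;        /* L0: locally enable slot 0. R/W0 and LEN0
                            (bits 16-19) must be 00 for an execution
                            breakpoint; they are assumed to be clear here. */
    SetThreadContext(hThread, &ctx);
}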
Related
As far as I know, SW breakpoints work as follows:
The instruction the BP is set on is replaced by an int/trap instruction, and the trap is then handled in a trap handler. On continue, the trap is replaced by the original instruction, that instruction is executed in single-step mode, and once the PC points to the next instruction the original instruction is replaced by the int/trap instruction again.
HW breakpoints work as follows, according to my understanding:
The address of the instruction the BP is set on is written into a HW-BP register. If the instruction is hit, i.e. the PC matches the address in the HW-BP register, the CPU raises an interrupt which is also handled by a trap handler. But if the program then returns to the original instruction, the HW BP is still active, and one is caught in an infinite loop.
How is that problem treated?
Is the HW BP disabled before continuing, and is the original instruction also executed in single-step mode? Or is the original instruction executed before the trap handler is entered, so that the trap handler returns to the instruction after the original one? Or is there another mechanism?
In case of the Intel 64 and IA-32 ("x64/x86") architectures, this is the task of the Resume Flag (RF), bit 16 in EFLAGS. (Other processor architectures that support hardware breakpoints probably have a similar mechanism.)
See section 18.3.1.1 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B:
Because the debug exception for an instruction breakpoint is generated before the instruction is executed, if the instruction breakpoint is not removed by the exception handler, the processor will detect the instruction breakpoint again when the instruction is restarted and generate another debug exception. To prevent looping on an instruction breakpoint, the Intel 64 and IA-32 architectures provide the RF flag (resume flag) in the EFLAGS register (see Section 2.3, “System Flags and Fields in the EFLAGS Register,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A). When the RF flag is set, the processor ignores instruction breakpoints.
[...]
The RF Flag is cleared at the start of the instruction after the check for code breakpoint, CS limit violation and FP exceptions.
[...]
If the RF flag in the EFLAGS image is set when the processor returns from the exception handler, it is copied into the RF flag in the EFLAGS register by IRETD/IRETQ or a task switch that causes the return. The processor then ignores instruction breakpoints for the duration of the next instruction. (Note that the POPF, POPFD, and IRET instructions do not transfer the RF image into the EFLAGS register.) Setting the RF flag does not prevent other types of debug-exception conditions (such as, I/O or data breakpoints) from being detected, nor does it prevent non-debug exceptions from being generated.
(Emphasis mine.)
So, the debugger will set RF before returning from the exception handler so that instruction breakpoints are "muted" for one instruction, after which the flag is automatically cleared by the processor.
Note that this is not a concern in the case of data breakpoints because these will fire after the instruction that triggered the read/write operation.
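For example, on Windows a debugger could set RF in the faulting thread's saved context just before continuing. This is only a sketch; the thread handle and the surrounding continue logic are assumptions:

#include <windows.h>

#define EFLAGS_RF 0x10000   /* Resume Flag, bit 16 of EFLAGS */

/* Sketch only: mute instruction (DRx) breakpoints for exactly one
 * instruction before resuming the thread that hit one. */
void resume_past_instruction_breakpoint(HANDLE hThread)
{
    CONTEXT ctx = {0};
    ctx.ContextFlags = CONTEXT_CONTROL;   /* includes EFlags */
    GetThreadContext(hThread, &ctx);
    ctx.EFlags |= EFLAGS_RF;
    SetThreadContext(hThread, &ctx);
    /* ...then ContinueDebugEvent(pid, tid, DBG_CONTINUE); */
}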
Recommendation: I find the slides of "Intermediate x86 Part 4" by Xeno Kovah to be helpful in understanding these things. He talks about various topics there but starts with debugging. This information in particular can be found on slides 12-13:
[Image: slides 12-13 from "Intermediate x86 Part 4"; credit: Xeno Kovah, CC BY-SA 3.0]
The windbg command tct executes a program until it reaches a call instruction or a ret instruction. I am wondering how the debugger implements this functionality under the hood.
I could imagine that the debugger scans the instructions from the current instruction onward for the next call or ret and sets breakpoints on the instructions it finds. However, I think this is unlikely, because it would also have to take jmp instructions into account, so there could be an arbitrary number of possible call or ret instructions where such a breakpoint would have to be set.
On the other hand, I wonder if the x86/x64 CPU provides a functionality that raises an exception to be caught by the debugger whenever the CPU is about to process a call or ret instruction. Yet, I have not heard of such a functionality.
I'd guess that it single-steps repeatedly, until the next instruction is a call or ret, instead of trying to figure out where to set a breakpoint. (Which in the general case could be as hard as solving the Halting Problem.)
It's possible it could optimize that by scanning forward over "straight line" code and setting a breakpoint on the next jmp/jcc/loop or other control-transfer instruction (e.g. xabort), and also catching signals/exceptions that could transfer control to an SEH handler.
I'm also not aware of any HW support for breaking on a certain type of instruction or opcode: the x86 debug registers DR0..7 allow hardware breakpoints at code addresses without rewriting machine code to int3, and also hardware watchpoints (to trap data load/store to a specific address or range of addresses). But not filtering by opcode.
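A rough sketch of that single-stepping approach (Linux x86-64 with ptrace) is below. It is deliberately simplified: it ignores instruction prefixes, far calls and several CALL/RET encodings, so the opcode test is only approximate.

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <stdint.h>

/* Sketch only: keep single-stepping the tracee until the next
 * instruction to execute looks like a CALL or RET. */
void step_to_call_or_ret(pid_t child)
{
    int status;
    for (;;) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);

        long word = ptrace(PTRACE_PEEKTEXT, child, (void *)regs.rip, NULL);
        uint8_t op    = (uint8_t)word;          /* first opcode byte (little-endian) */
        uint8_t modrm = (uint8_t)(word >> 8);

        if (op == 0xE8 ||                              /* CALL rel32           */
            op == 0xC3 || op == 0xC2 ||                /* RET / RET imm16      */
            (op == 0xFF && ((modrm >> 3) & 7) == 2))   /* CALL r/m (reg = 2)   */
            break;

        ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
        waitpid(child, &status, 0);
    }
}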
In the Linux kernel, when a breakpoint I register with register_wide_hw_breakpoint is triggered, the callback handler runs endlessly until the breakpoint is unregistered.
Background: To test a driver for some hardware we are making, I am writing a second kernel module that emulates the hardware interface. My intent is to set a watchpoint on a memory location that in the hardware would be a control register, so that writing to this 'register' can trigger an operation by the emulator driver.
See here for a complete sample.
I set the breakpoint as follows:
hw_breakpoint_init(&attr);                          /* fill attr with hardware-breakpoint defaults */
attr.bp_addr = kallsyms_lookup_name("test_value");  /* address of the variable to watch */
attr.bp_len = HW_BREAKPOINT_LEN_4;                  /* watch 4 bytes */
attr.bp_type = HW_BREAKPOINT_W;                     /* trigger on writes */
test_hbp = register_wide_hw_breakpoint(&attr, test_hbp_handler, NULL);
but when test_value is written to, the callback (test_hbp_handler) is triggered continually without control ever returning to the code that was about to write to test_value.
1) What should I be doing differently for this to work as expected (return execution to code that triggered breakpoint)?
2) How do I capture the value that was being written to the memory location?
In case this matters:
$ uname -a
Linux socfpga-cyclone5 3.10.37-ltsi-rt37-05714-ge4ee387 #1 SMP PREEMPT RT Mon Jan 5 17:51:35 UTC 2015 armv7l GNU/Linux
This is by design. When an ARM hardware watchpoint is hit, it generates a Data Abort exception. On ARM, Data Abort exceptions trigger before the instruction that triggers them finishes.[1] This means that, in the exception handler, registers and memory locations affected by the instruction still hold their old values (or, in some cases, undefined values). As such, when the handler finishes, it must retry the aborted instruction so that the interrupted program runs as intended.[2] If the watchpoint is still set when the handler returns, the instruction will trigger it again. This causes the loop you're seeing.
To get around this, userspace debuggers like GDB single-step over any instruction that hits a watchpoint with that watchpoint disabled before resuming execution. The underlying kernel API, however, just exposes the hardware watchpoint behavior directly. Since you're using the kernel API, it's up to your event handler to ensure that the watchpoint doesn't trigger on the retried instruction.
[The ARM watchpoint code in the kernel actually does support automatic single-step, but only under very specific conditions. Namely, it requires (1) that no event handler is registered to the watchpoint, (2) that the watchpoint is in userspace, and (3) that the watchpoint is not associated with a particular CPU. Since your use case violates at least (1) and (2), you have to find another solution.][3]
Unfortunately, on ARM, there's no foolproof way to keep the watchpoint enabled without causing a loop. The breakpoint mode that GDB uses to single-step programs, "instruction mismatch," generates UNPREDICTABLE behaviour when used in kernel mode.[4] The best you can do is disable the watchpoint in your handler and then set a standard breakpoint to re-enable it on an instruction that you know will execute soon after.
For your MMIO emulation driver, watchpoints are probably not the answer. In addition to the issues just mentioned, most ARM cores have very few watchpoint registers, so the solution would not scale. I'm afraid I'm not familiar enough with ARM's memory model to suggest an alternative approach. However, Linux's existing code for emulating memory-mapped IO for virtual machines might be a good place to start.
[1] There are two types of Data Abort exceptions, synchronous and asynchronous, and it's left to the implementation to decide which one a watchpoint generates. I'm describing the behavior of synchronous exceptions in this answer, because that's what would cause the problem you're having.
[2] ARMv7-A/R Architecture Reference Manual, B1.9.8, "Data Abort exception."
[3] Linux Kernel v4.6, arch/arm/kernel/hw_breakpoint.c, lines 634-661.
[4] ARMv7-A/R Architecture Reference Manual, C3.3.3, "UNPREDICTABLE cases when Monitor debug-mode is selected."
How does a debugger set breakpoints if the image is in read-only memory? I know there are hardware breakpoints, but in the debugger I use (OllyDbg) those have to be set specially using a different dialog than normal breakpoints.
Explanation:
Here is a routine in a debugger that is comparing itself to a copy of itself. EDX points to the running image, EBX points to the known-good copy of the image. The breakpoint at 4010CE is only reached if there is a mismatch. The character being compared is in the AL register. As you can see, the debugger shows EB F6 at 10CE, but this is false: 10CE actually contains CC, as you can see by looking at the AL register. This is because the debugger has secretly inserted the CC to implement the breakpoint.
The debugger first has to change the memory protection of the page it wants to write to. This can be done with VirtualProtectEx. After that it is able to write with WriteProcessMemory and then set the protection back to the original value.
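A minimal sketch of that sequence with the Win32 API (error handling omitted; hProcess and addr are assumed to come from the debugger's own bookkeeping):

#include <windows.h>

/* Sketch only: plant an INT3 (0xCC) in a page of the target process
 * that is currently mapped without write permission. */
BYTE set_sw_breakpoint(HANDLE hProcess, LPVOID addr)
{
    BYTE orig, int3 = 0xCC;
    DWORD oldProt;
    SIZE_T n;

    ReadProcessMemory(hProcess, addr, &orig, 1, &n);         /* save original byte */
    VirtualProtectEx(hProcess, addr, 1, PAGE_EXECUTE_READWRITE, &oldProt);
    WriteProcessMemory(hProcess, addr, &int3, 1, &n);        /* patch in 0xCC */
    FlushInstructionCache(hProcess, addr, 1);
    VirtualProtectEx(hProcess, addr, 1, oldProt, &oldProt);  /* restore old protection */

    return orig;   /* keep this byte so the breakpoint can be removed later */
}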
Let me preface this with a disclaimer that I'm not familiar with your particular toolset.
If you haven't enabled hardware breakpoints, the only remaining breakpoint type is a software breakpoint. These are only hit (on x86 because that's what I'm most familiar with) when you replace the first byte of an instruction with a trap instruction, and will only be routed through the breakpoint mechanism of your OS to your debugger if the correct trap instruction for your OS is used and the debugger has already registered itself with the OS as a debugger for this process. In order to cause the software breakpoint to happen at the correct moment, the trap instruction must be written into your code segment over the first byte of your correct instruction.
The two answers that got here first explain the two scenarios which could get you here (at least, the only two I can think of):
The kernel always has write access everywhere, except for hardware-protected pages (ie on some sort of ROM), which your process' memory is almost certainly not. It has the ability to write the breakpoint instruction regardless of the permissions exposed to the user process being debugged.
The debugger must use some syscall to change the access rights on the memory of the target process before inserting the breakpoint.
Personally, I'm guessing the first thing is happening. The segment permissions are only in place to protect your target process from itself, not from a debugger process or from the kernel. Debugging mechanisms in operating systems pretty regularly violate "normal" permissions to allow the debugger to do whatever it wants to the target process. This, of course, is why some operating systems require you to enter a password before you're allowed to use the debugger in certain scenarios.
However, you can test if it's the second one by attempting to write to the code segment from inside the target process after a breakpoint has been set. If the write succeeds, you know the permissions have been lowered by the OS (to allow the process to be debugged). It would be pretty awkward for the OS to require a debugger to jump through this hoop since it can already insert arbitrary code into the writeable parts of memory and then force a jump to it by generating a stack frame overflow.
The debugger takes advantage of the WriteProcessMemory() function to alter the instruction in place. It keeps a copy of the original byte. When the BP is hit, it restores the old byte value and moves EIP back so that it points at the restored instruction, which can then execute for real.
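As a sketch of the EIP adjustment alone (Windows x86; restoring the saved byte is not shown, and the thread handle is an assumption):

#include <windows.h>

/* Sketch only: after an EXCEPTION_BREAKPOINT, move EIP back over the
 * one-byte 0xCC so the restored instruction executes on resume. */
void rewind_over_int3(HANDLE hThread)
{
    CONTEXT ctx = {0};
    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);
    ctx.Eip -= 1;              /* INT3 is a single byte */
    SetThreadContext(hThread, &ctx);
}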
Here is what I found:
Trap Flag (T) – This flag is used for on-chip debugging. Setting trap
flag puts the microprocessor into single step mode for debugging. In
single stepping, the microprocessor executes an instruction and enters
the single-step ISR.
If trap flag is set (1), the CPU automatically generates an internal
interrupt after each instruction, allowing a program to be inspected
as it executes instruction by instruction.
If trap flag is reset (0), no function is performed.
https://en.wikipedia.org/wiki/Trap_flag
Now I am coding on emu8086. As explained above, TF must be set for the debugger to work.
Should I always set TF myself, or is it set automatically?
If I somehow set TF to 0, will that affect every debugger on the system, or will only emu8086 be unable to debug?
I've never used emu8086 but by looking at some screenshot of it and judging by its name it's probably an emulator - this means it is not running the code natively.
Each instruction is changing the state of a virtual 8086 CPU (represented as a data structure in memory) and not the state of your real CPU.
With this emulation, emu8086 doesn't need to rely on the TF flag to single-step your program, it just needs to stop after one step of emulation and wait for you to hit another button.
This is also why you can find a thing such as "Step back".
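Conceptually, the emulator's main loop can simply pause between instructions, which is also what makes "Step back" possible by snapshotting the virtual CPU. The following is a hypothetical sketch with made-up names (not emu8086's actual code), just to show the idea:

#include <stdint.h>

/* Hypothetical sketch: with emulation, single-stepping needs no TF,
 * because the "CPU" is just a data structure updated in software. */
struct vcpu {
    uint16_t ax, bx, cx, dx, ip, flags;
    uint8_t  mem[1 << 20];              /* emulated memory */
};

void execute_one_instruction(struct vcpu *cpu);  /* decode + apply one instruction (not shown) */
void wait_for_step_button(void);                 /* block until the user clicks "single step" */

void run(struct vcpu *cpu, int single_step)
{
    for (;;) {
        execute_one_instruction(cpu);
        if (single_step)
            wait_for_step_button();   /* this is also where a "step back"
                                         feature could snapshot or restore
                                         the whole vcpu structure */
    }
}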
If you were wondering what would happen if a debugged program (and not an emulated one) sets the TF flag then the answer is that it depends on the debugger.
The correct behaviour is the one where the debuggee receives the exceptions but this is hard to handle correctly (since the debugger itself uses the TF flag).
Some debuggers just don't care and swallow the exception (i.e. they don't forward it to the program under debug), assuming that a well-written program doesn't need to use the TF flag.
Unfortunately, malware routinely uses a set of anti-debug techniques, including setting TF and then checking it / waiting for the resulting exception, to detect the presence of a debugger.
A truly transparent debugger has to handle the RFLAGS register carefully.
When debugging with breakpoints the TF is not set while the program is executing, so there is nothing to worry about.
However, when single-stepping, TF is set during the next instruction; this is problematic during a pushfd/q, and the debugger must explicitly handle that case to avoid detection (see the sketch at the end of this answer).
If the debuggee sets TF, the debugger must pass the debug exception to the program. Under current OSes TF won't last more than an instruction, because the OS will catch the exception,
transform it into a signal and dispatch it to the program while clearing TF. So the debugger can simply do a check before stepping into a popfd/q instruction.
Where TF doesn't get cleared by the OS, the debugger must effectively emulate RFLAGS with a copy.
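To make the pushfd/q case above concrete, here is a hedged sketch (Win32, x86 only; the handles and the exact point where it would be called are assumptions) of scrubbing TF out of the EFLAGS image the debuggee has just pushed:

#include <windows.h>

#define EFLAGS_TF 0x100   /* Trap Flag, bit 8 of EFLAGS */

/* Sketch only: called right after single-stepping over a PUSHFD.
 * The dword the debuggee just pushed would otherwise show TF set
 * (because the debugger was single-stepping), so clear it in place. */
void scrub_tf_from_pushed_eflags(HANDLE hProcess, HANDLE hThread)
{
    CONTEXT ctx = {0};
    DWORD pushed;
    SIZE_T n;

    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);   /* ESP now points at the pushed EFLAGS */

    ReadProcessMemory(hProcess, (LPCVOID)(ULONG_PTR)ctx.Esp, &pushed, sizeof(pushed), &n);
    pushed &= ~(DWORD)EFLAGS_TF;
    WriteProcessMemory(hProcess, (LPVOID)(ULONG_PTR)ctx.Esp, &pushed, sizeof(pushed), &n);
}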
The debugger sets TF according to what it needs to do. The code being debugged should not modify TF.