Assembly - Why doesn't this CALL instruction work? - debugging

I don't understand why the CALL instruction in this code doesn't work:
#include <stdio.h>
void main() {
    __asm {
        jmp L1
    L2:
        mov eax, 8
        ret
    L1:
        call L2
    }
}
If I debug the code step by step, the line 'call L2' is not processed, and the program skips directly to the end. What is wrong? I'm working in Visual Studio 2015, using 32-bit Intel registers.

The problem
You've stumbled on the difference between step over (F10) and step into (F11).
When you use (the default) step over, call appears to be ignored.
You need to step into the code and then the debugger will behave as you'd expect.
Step over
The way step over works is that the debugger sets a breakpoint on the next instruction, halts there, and then moves the breakpoint on to the instruction after that.
Step over knows about (conditional) jumps and accounts for that, but disregards (steps over) call statements; it interprets a call as a jump to another subroutine and 'assumes' you want to stay within the current context.
These automatic breakpoints are ephemeral, unlike manual breakpoints which persist until you cancel them.
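As a minimal sketch of one such step-over cycle (assuming hypothetical helpers like get_ip, instruction_length, set_breakpoint and continue_until_break; this is not a real debugger API):
#include <stdint.h>

struct debuggee;                                   /* opaque handle to the target (hypothetical) */
uintptr_t get_ip(struct debuggee *d);
size_t    instruction_length(struct debuggee *d, uintptr_t ip);
void      set_breakpoint(struct debuggee *d, uintptr_t addr);
void      remove_breakpoint(struct debuggee *d, uintptr_t addr);
void      continue_until_break(struct debuggee *d);

/* One step-over cycle: plant an ephemeral breakpoint on the fall-through
 * address and let the target run. If the current instruction is a call, the
 * whole subroutine executes before the breakpoint is hit. A real debugger
 * also special-cases jumps, as described above. */
void step_over_one(struct debuggee *d)
{
    uintptr_t ip   = get_ip(d);
    uintptr_t next = ip + instruction_length(d, ip);

    set_breakpoint(d, next);
    continue_until_break(d);
    remove_breakpoint(d, next);   /* ephemeral: discarded after the halt */
}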
Step into
Step into does the same, but also sets a breakpoint at every call destination; in effect leading you deep into the woods traversing every subroutine.
Step out
If you've stepped too deep 'into' a subroutine, Visual Studio allows you to step out using Shift+F11; this will take you back to the instruction after the originating call.
Some other debuggers name this feature "run until return".
Debugging high level code
When the debugger is handling higher-level source code (e.g. C), it keeps a list of target addresses for every line of source code and plans its breakpoints per source line.
Other than the fact that every line of high-level code translates to zero or more assembly instructions, it works the same as stepping through raw assembly code.
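As a minimal sketch, such a line-to-address table could look like the following; the entries and helper are made up for illustration, and real debuggers build this from PDB or DWARF line information:
#include <stddef.h>
#include <stdint.h>

struct line_entry { int line; uintptr_t addr; };   /* one row of the line table */

/* hypothetical table contents; addresses are invented for illustration */
static const struct line_entry line_table[] = {
    { 10, 0x00401000 },   /* n = 5;      */
    { 11, 0x00401007 },   /* if (n == 5) */
    { 12, 0x0040100C },   /* foo();      */
};

/* Where should the debugger plant a breakpoint for a given source line? */
uintptr_t addr_for_line(int line)
{
    for (size_t i = 0; i < sizeof line_table / sizeof line_table[0]; i++)
        if (line_table[i].line == line)
            return line_table[i].addr;
    return 0;   /* no code was generated for that line */
}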

Related

How does the tct command work under the hood?

The WinDbg command tct executes a program until it reaches a call or ret instruction. I am wondering how the debugger implements this functionality under the hood.
I could imagine that the debugger scans forward from the current instruction for the next call or ret and sets breakpoints on the instructions it finds. However, I think this is unlikely, because it would also have to take jmp instructions into account, so there could be an arbitrary number of call or ret instructions on which such breakpoints would have to be set.
On the other hand, I wonder whether the x86/x64 CPU provides functionality that raises an exception, to be caught by the debugger, whenever the CPU is about to execute a call or ret instruction. Yet I have not heard of such a functionality.
I'd guess that it single-steps repeatedly, until the next instruction is a call or ret, instead of trying to figure out where to set a breakpoint. (Which in the general case could be as hard as solving the Halting Problem.)
It's possible it could optimize that by scanning forward over "straight line" code and setting a breakpoint on the next jmp/jcc/loop or other control-transfer instruction (e.g. xabort), and also catching signals/exceptions that could transfer control to an SEH handler.
I'm also not aware of any HW support for breaking on a certain type of instruction or opcode: the x86 debug registers DR0..DR7 allow hardware breakpoints at code addresses without rewriting machine code to int3, and also hardware watchpoints (to trap data loads/stores to a specific address or range of addresses), but not filtering by opcode.
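As a rough sketch of the single-stepping approach on Linux/x86-64 with ptrace (WinDbg itself uses the Windows debug API, and the opcode test below is deliberately simplified, ignoring prefixes and the FF /2 indirect-call forms):
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

/* crude check of the first opcode byte: call rel32, ret, ret imm16 only */
static int is_call_or_ret(uint8_t op)
{
    return op == 0xE8 || op == 0xC3 || op == 0xC2;
}

/* Single-step the traced child until the *next* instruction is a call or ret. */
void run_until_call_or_ret(pid_t child)
{
    int status;
    for (;;) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        long word = ptrace(PTRACE_PEEKTEXT, child, (void *)regs.rip, NULL);
        if (is_call_or_ret((uint8_t)word))           /* low byte = byte at RIP */
            break;
        ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
        waitpid(child, &status, 0);
    }
}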

Setting a random address breakpoint in gdb

I am learning about the mechanics of breakpoints, and I read that 'in x86 there exists an instruction called int3 for the debugger to interrupt the CPU; the CPU will then interrupt the running program with a signal'.
For example:
8048e20: 55 push %ebp
8048e21: 89 e5 mov %esp,%ebp
When the user input
b *0x8048e21
The first byte of the instruction will be replaced by int3 (opcode 0xcc), and it becomes this:
8048e20: 55 push %ebp
8048e21: cc e5 mov %esp,%ebp
And it will stop at the right place.
Then comes the question:
What would happen if I set the breakpoint not at the beginning of an instruction? I.e., if I input:
b *0x8048e22
will the debugger still replace the e5 with cc? So I wrote a simple example and ran it with gdb.
As you can see above, I set two breakpoints, and the second one is in the middle of an instruction. I input r and stop at the first breakpoint, then input c and run to the end.
So it seems that gdb ignores the second breakpoint (for if it had really replaced the byte with an int3, the program would have gone totally wrong).
Question: what happens to the second breakpoint? More specifically, how does gdb deal with it (or is what I learned wrong)?
Edit: #dbrank already gave a great example of altering the data field of an instruction; I will try to make it more comprehensive with a similar example (this one seems to alter the register field).
(Any reference about mechanism of breakpoint is appreciated!)
Inserting a breakpoint in the middle of an instruction will alter the instruction.
See this example of a program where inserting a breakpoint overwrites the original value assigned to a variable (42, i.e. 0x2a) with the breakpoint opcode (0xcc, i.e. 204).
You can find more about how breakpoints work here.
You can also look into GDB sources (breakpoint.c & infrun.c mostly).
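A minimal C illustration of that effect; the exact instruction encoding and the byte offset of the immediate depend on the compiler, so treat the comments as assumptions rather than guaranteed addresses:
#include <stdio.h>

int main(void)
{
    /* The constant 42 is typically emitted as an immediate byte 0x2a inside the
     * mov instruction, e.g. movl $0x2a,-0x4(%rbp). A breakpoint set on that
     * byte's address patches it to 0xcc, so the stored value becomes 204. */
    int x = 42;
    printf("%d\n", x);   /* prints 42; with the immediate patched, it would print 204 */
    return 0;
}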

Debugger implementation - Step over issue

I am currently writing a debugger for a script virtual machine.
The compiler for the scripts generates debug information, such as function entry points, variable scopes, names, instruction to line mappings, etc.
However, I have run into an issue with step-over.
Right now, I have the following:
1. Look up the current IP
2. Get the source line from that
3. Get the next (valid) source line
4. Get the IP where the next valid source line starts
5. Set a temporary breakpoint at that instruction
Or: if the next source line no longer belongs to the same function, set the temporary breakpoint at the next valid source line after the return address.
So far this works well. However, I seem to be having problems with jumps.
For example, take the following code:
n = 5;       // Line A
if (n == 5)  // Line B
{
    foo();   // Line C
}
else
{
    bar();   // Line D
    --n;
}
Given this code, if I'm on line B and choose to step-over, the IP determined for the breakpoint will be on line C. If, however, the conditional jump evaluates to false, it should be placed on line D. Because of this, the step-over wouldn't halt at the expected location (or rather, it wouldn't halt at all).
There seems to be little information on debugger implementation of this specific issue out there. However, I found this. While this is for a native debugger on Windows, the theory still holds true.
It seems, though, that the author has not considered this issue either; in the section "Implementing Step-Over" he says:
1. The UI-thread calls CDebuggerCore::ResumeDebugging with EResumeFlag set to StepOver.
This tells the debugger thread (which runs the debugger loop) to put an IBP on the next line.
2. The debugger-thread locates next executable line and address (0x41141e), it places an IBP on that location.
3. It calls then ContinueDebugEvent, which tells the OS to continue running debuggee.
4. The BP is now hit; it passes through EXCEPTION_BREAKPOINT and reaches EXCEPTION_SINGLE_STEP. Both these steps are the same, including instruction reversal, EIP reduction, etc.
5. It again calls HaltDebugging, which in turn, awaits user input.
Again:
The debugger-thread locates next executable line and address (0x41141e), it places an IBP on that location.
This statement does not seem to hold true in cases where jumps are involved, though.
Has anyone encountered this problem before? If so, do you have any tips on how to tackle this?
Since this thread comes up first on Google when searching for "debugger implement step over", I'll share my experience regarding the x86 architecture.
You start by implementing step into: this is basically single-stepping through the instructions and checking whether the line corresponding to the current EIP changes. (You use either the DIA SDK or read the DWARF debug data to find out the current line for an EIP.)
In the case of step over: before single-stepping to the next instruction, you'll need to check whether the current instruction is a CALL instruction. If it is, put a temporary breakpoint on the instruction following it and continue execution until execution stops (then remove it). In this case you have effectively stepped over function calls at the assembly level, and so in the source too.
No need to manage stack frames (unless you need to deal with single-line recursive functions). This approach can be applied to other architectures as well.
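A minimal sketch of that loop, using hypothetical helpers (current_line, is_call_at, insn_length, single_step and run_to_temp_breakpoint are placeholders, not a real debugger API):
#include <stdbool.h>
#include <stdint.h>

uintptr_t get_eip(void);
int       current_line(uintptr_t eip);             /* via the DIA SDK or DWARF line info  */
bool      is_call_at(uintptr_t eip);               /* decode: is this a CALL instruction? */
size_t    insn_length(uintptr_t eip);
void      single_step(void);
void      run_to_temp_breakpoint(uintptr_t addr);  /* set, continue, wait for hit, remove */

/* Step over: keep going until the source line for EIP changes, but never
 * descend into calls; run them to completion behind a temporary breakpoint. */
void step_over(void)
{
    int start_line = current_line(get_eip());
    while (current_line(get_eip()) == start_line) {
        uintptr_t eip = get_eip();
        if (is_call_at(eip))
            run_to_temp_breakpoint(eip + insn_length(eip));
        else
            single_step();   /* step into would take this branch unconditionally */
    }
}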
OK, so since this seems to be a bit of black magic: in this particular case the most sensible thing was to count the instructions up to where the next line starts (or to the end of the instruction stream + 1), and then run that many instructions before halting again.
The only gotcha was that I had to keep track of the stack frame in case a CALL is executed; when stepping over, the instructions inside the call should run without being counted.

How can I debug into an unmanaged BCL (InternalCall) method?

I want to debug into the implementation of a [MethodImpl(MethodImplOptions.InternalCall)] BCL method, which is presumably implemented in C++. (In this particular case, I'm looking at System.String.nativeCompareOrdinal.) This is mainly because I'm nosy and want to know how it's implemented.
However, the Visual Studio debugger is refusing to step into that method. I can set a breakpoint on this call:
"Hello".Equals("hello", StringComparison.OrdinalIgnoreCase);
then bring up Debug > Windows > Disassembly, step into the Equals call, and step until it gets to the call x86 instruction. But when I try to use "Step Into" on that call (which I know from Reflector is the nativeCompareOrdinal call), it doesn't step to the first instruction inside nativeCompareOrdinal like I want -- it steps over instead, and goes straight to the next x86 instruction in Equals.
I'm building as x86, since mixed-mode debugging isn't supported for x64 apps. I've unchecked "Just My Code" in Tools > Options > Debugging, and I have "Enable unmanaged code debugging" checked in project properties > Debug tab, but it still steps over the call. I also tried starting the process and then attaching the debugger, and explicitly attaching both the managed and native debuggers, but it still won't step into that InternalCall method.
How can I get the Visual Studio debugger to step into an unmanaged method?
Yes, it is tricky. The offset you see for the CALL instruction is bogus. Plus it won't let you navigate to an unmanaged code address when the current focus is on a managed function.
Start by enabling unmanaged code debugging and setting a breakpoint on the call. Run the code and when the break point hits use Debug + Windows + Disassembly:
"Hello".Equals("hello", StringComparison.OrdinalIgnoreCase);
00000025 call 6E53D5D0
0000002a nop
The debugger tries to display the absolute address but gets it wrong because it uses the bogus incremental address instead of the real instruction address. So first recover the true relative value: 0x6E53D5D0 - 0x2A = 0x6E53D5A6.
Next you need to find the real code address. Debug + Windows + Registers and look at the value of the EIP register. 0x009A0095 in my case. Add 5 to get to the nop, then add the relative offset: 0x9A0095 + 5 + 0x6E53D5A6 = 0x6EEDD640. The real address of the function.
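The same arithmetic as a tiny C sanity check, assuming the usual 5-byte E8 rel32 call encoding where the target is the address of the next instruction plus the signed displacement (the addresses are the ones from this session):
#include <stdio.h>

int main(void)
{
    unsigned int rel32  = 0x6E53D5D0 - 0x2A;   /* undo the bogus displayed base      */
    unsigned int next   = 0x009A0095 + 5;      /* EIP of the call + 5-byte call size */
    unsigned int target = next + rel32;

    printf("0x%08X\n", target);                /* prints 0x6EEDD640                  */
    return 0;
}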
Debug + Windows + Call Stack and double-click an unmanaged stack frame. Now you can enter the calculated address in the Disassembly window's Address box, prefixed with 0x.
6EEDD640 push ebp
6EEDD641 mov ebp,esp
6EEDD643 push edi
6EEDD644 push esi
6EEDD645 push ebx
6EEDD646 sub esp,18h
etc...
Bingo, you know you're good if you see the stack-frame setup code. Set a breakpoint on it and press F5.
Of course, you'll be stepping through machine code since there's no source code available. You'll get much better insight into what this code is doing by looking at the SSCLI20 source code. There's no guarantee that it will match the actual code in your current version of the CLR, but my experience is that these low-level code chunks that have been around since 1.0 are highly conserved. The implementation is in clr\src\classlibnative\nls, not sure which source file. It won't be named "nativeCompareOrdinal"; that's just an internal name used by ecall.cpp.

implementing step over, dwarf

I'm working on a source-level debugger. The debug info is available in ELF format. How could 'step over' be implemented?
The problem is at 'Point1'; there, I can wait for the next source line (reading it from the .debug_line table).
Thanks
if (a == 1)
    x = 1;       // Point1
else if (a == 2)
    x = 1;
z = 1;
I'm not sure I understand the question entirely, but I can tell you how GDB implements its step command.
Once control has entered a particular compilation unit, GDB reads that CU's debugging information; in particular, it reads the CU's portion of the .debug_line section and builds a table that maps instruction addresses to source code positions.
When the step begins, GDB looks up the source location for the current PC. Then it steps by machine instruction, looking up the source location of the new PC each time, until the source location changes. When the source location changes, the step is complete.
It also computes the frame ID (the base address of the stack frame and the start address of the function) after each step, and checks whether it has changed. If it has, that means we've stepped into or returned from a recursive call, and the step is complete.
To see why it's necessary to check the frame ID as well as the source location, consider stepping through a call to the following function:
int fact(int n) { if (n > 0) { return n * fact(n - 1); } else return 1; }
Since this function is defined entirely on the same source line, stepping by instruction until the source line changes would step you through all the recursive calls without stopping. However, when we enter a new call to fact, the stack frame base address will have changed, indicating that we should stop. This gives us the following behavior:
fact (n=10) at recurse.c:4
(gdb) step
fact (n=9) at recurse.c:4
(gdb) step
fact (n=8) at recurse.c:4
GDB's next command combines this general behavior with appropriate logic for recognizing function calls and letting them return to completion. As before, one must use frame IDs in deciding when calls have truly returned to the original frame; and there are other complications.
It's worth thinking a bit about how to treat inlined instances of functions (which DWARF does describe). But that's a bit much for this question.
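A minimal sketch of that step loop with the frame-ID check, using hypothetical helper names (GDB's real logic lives mostly in infrun.c and is considerably more involved):
#include <stdbool.h>
#include <stdint.h>

typedef struct { uintptr_t frame_base; uintptr_t func_start; } frame_id;

uintptr_t get_pc(void);
int       line_for_pc(uintptr_t pc);    /* from the table built out of .debug_line */
frame_id  current_frame_id(void);
void      single_step_insn(void);

static bool same_frame(frame_id a, frame_id b)
{
    return a.frame_base == b.frame_base && a.func_start == b.func_start;
}

/* Source-level step: stop when the source line changes, or when the frame ID
 * changes (meaning we stepped into or returned from a call, e.g. a recursive
 * call to fact()). */
void source_step(void)
{
    int      start_line  = line_for_pc(get_pc());
    frame_id start_frame = current_frame_id();

    do {
        single_step_insn();
    } while (line_for_pc(get_pc()) == start_line &&
             same_frame(current_frame_id(), start_frame));
}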
Not to discourage experimentation, but if I were beginning a debugger project, I would want to look at Apple's work-in-progress debugger, lldb, which is open source.
