I'm working on a source-level debugger. The debug info is available in ELF
format. How could 'step over' be implemented?
The problem is at 'Point1' in the snippet below. In general I can just wait for the
next source line (reading it from the .debug_line table), but after 'Point1' execution jumps past the lines that follow it, so the sequentially next line is never reached.
Thanks
if (a == 1)
    x = 1;   // Point1
else if (a == 2)
    x = 1;
z = 1;
I'm not sure I understand the question entirely, but I can tell you how GDB implements its step command.
Once control has entered a particular compilation unit, GDB reads that CU's debugging information; in particular, it reads the CU's portion of the .debug_line section and builds a table that maps instruction addresses to source code positions.
When the step begins, GDB looks up the source location for the current PC. Then it steps by machine instruction, looking up the source location of the new PC each time, until the source location changes. When the source location changes, the step is complete.
It also computes the frame ID—the base address of the stack frame, and the start address of the function—after each step, and checks if that has changed. If it has, that means that we've stepped into or returned from a recursive call, and the step is complete.
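In C, that loop might look something like the following sketch. The helpers here (current_pc, line_for_pc, frame_id, single_step_insn) are hypothetical stand-ins for the line-table lookup and target-control primitives, not GDB's actual internals:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical debugger primitives -- not GDB's real API. */
typedef struct { uint64_t frame_base, func_start; } FrameId;
extern uint64_t current_pc(void);           /* read the target's PC           */
extern int      line_for_pc(uint64_t pc);   /* lookup in the .debug_line map  */
extern FrameId  frame_id(void);             /* frame base + function start    */
extern void     single_step_insn(void);     /* execute one machine insn       */

static bool same_frame(FrameId a, FrameId b)
{
    return a.frame_base == b.frame_base && a.func_start == b.func_start;
}

/* Step one source line: single-step instructions until either the
   source line or the frame ID changes. */
void step_source_line(void)
{
    int     line0  = line_for_pc(current_pc());
    FrameId frame0 = frame_id();
    do {
        single_step_insn();
    } while (line_for_pc(current_pc()) == line0 &&
             same_frame(frame_id(), frame0));
}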
To see why it's necessary to check the frame ID as well as the source location, consider stepping through a call to the following function:
int fact(int n) { if (n > 0) { return n * fact(n - 1); } else return 1; }
Since this function is defined entirely on the same source line, stepping by instruction until the source line changes would step you through all the recursive calls without stopping. However, when we enter a new call to fact, the stack frame base address will have changed, indicating that we should stop. This gives us the following behavior:
fact (n=10) at recurse.c:4
(gdb) step
fact (n=9) at recurse.c:4
(gdb) step
fact (n=8) at recurse.c:4
GDB's next command combines this general behavior with appropriate logic for recognizing function calls and letting them return to completion. As before, one must use frame IDs in deciding when calls have truly returned to the original frame; and there are other complications.
It's worth thinking a bit about how to treat inlined instances of functions (which DWARF does describe). But that's a bit much for this question.
Not to discourage experimentation, but if I were beginning a debugger project, I would want to look at Apple's work-in-progress debugger, lldb, which is open source.
I'll be using Visual Studio for C/C++ as a framework because that's the debugger I'm most interested in.
When I set a breakpoint in the code, it becomes "immediately" active even if the code is already running. As far as I understand, this is done by Read/WriteProcessMemory. What I don't understand is how you get the exact memory address that you write the int 3 instruction into.
For static breakpoints it's easy, because the compiler can essentially treat the breakpoint as part of the source code and insert the int 3 instruction as a natural part of code gen. But with dynamic breakpoints I don't see how you can map a breakpoint on an arbitrary line of source code to the correct address in the executable as it is running. How do you calculate it?
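For reference, once a line has been mapped to an address (via the PDB's line table, which is the same kind of mapping DWARF keeps in .debug_line), planting the breakpoint itself is a small amount of code. A minimal sketch using the documented Win32 calls, with error handling omitted:

#include <windows.h>

/* Sketch: plant an int 3 (0xCC) at 'addr' in a running debuggee.
   'addr' must come from the debug info's line-to-address table.   */
BYTE set_breakpoint(HANDLE hProcess, LPVOID addr)
{
    BYTE orig, int3 = 0xCC;
    SIZE_T n;
    ReadProcessMemory(hProcess, addr, &orig, 1, &n);   /* save original byte      */
    WriteProcessMemory(hProcess, addr, &int3, 1, &n);  /* overwrite with the trap */
    FlushInstructionCache(hProcess, addr, 1);          /* discard stale code      */
    return orig;  /* needed to restore the instruction when the BP hits */
}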
I'm messing around with running old DOS programs in an emulator, and I've gotten to the point where I'd like to trace the program's stack. However, I'm running into a problem, specifically how to detect near calls and far calls. Some context:
A near call pushes only the IP onto the stack, and is expected to be paired with a ret which pops only the IP to return to.
A far call pushes both the CS and IP onto the stack, and is expected to be paired with a retf which pops both the CS and IP to return to.
There is no way to tell whether a return address came from a near call or a far call, except by knowing which kind of call instruction pushed it, or which kind of return will pop it.
Luckily, for the period this program was developed in, BP-based stack frames were very common, so walking the stack doesn't seem to be a problem: I just follow the BP-chain. Unfortunately, getting the CS and/or IP is difficult, because there doesn't seem to be any way for me to determine whether a call is a near call or a far call by looking at the stack alone.
I have metadata about functions available, so I can tell whether a function is a near or far call if I already know the actual CS and IP, but I can't figure out the IP and CS unless I already know if it's a far call or near call.
I'm having a little success by just guessing and seeing if my guess results in a valid function lookup, but I think this method will produce a lot of false positives.
So my question is this: How did debuggers of the DOS era deal with this problem and produce stack traces? Is there some algorithm for this I'm missing, or did they just encode debug information in the stack? (If this is the case, then I'll have to come up with something else.)
Just a guess, I've never actually used 16-bit x86 development tools (modern or back in the day):
You know the CS:IP value of the current function (or one that triggered a fault or whatever from an exception frame).
You might have metadata that tells you whether this is a "far" function that's called with a far call or not. Or you could attempt decoding until you get to a retn or retf, and use that to decide whether the return address is a near IP or a far CS:IP.
(Assuming this is a normal function that returns with some kind of ret. Or if it ends with a jmp tail-call to another function, then the return address probably matches that, but that's another level of assumptions. And figuring out that a near jmp is the end of a function, instead of just a jump within a large function, is an ambiguous problem without any symbol metadata.)
But anyway, apply the same thing to the parent function: after one level of successful backtracing, you now have the CS:IP of the instruction after the call in your parent function, and the SS:BP value of the BP linked list.
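Putting that together, here's a rough sketch of the walk in C; mem_read16 and is_far_function are hypothetical emulator helpers (the latter being the metadata lookup mentioned in the question):

#include <stdint.h>
#include <stdio.h>

extern uint16_t mem_read16(uint16_t seg, uint16_t off);   /* emulator memory    */
extern int      is_far_function(uint16_t cs, uint16_t ip); /* metadata lookup   */

/* Walk BP-linked frames: [BP] = caller's saved BP, [BP+2] = return IP,
   and, only for a far function, [BP+4] = return CS. Assumes the
   outermost frame's saved BP is 0.                                    */
void walk_stack(uint16_t ss, uint16_t bp, uint16_t cs, uint16_t ip)
{
    while (bp != 0) {
        printf("%04X:%04X\n", (unsigned)cs, (unsigned)ip);
        uint16_t caller_bp = mem_read16(ss, bp);
        uint16_t ret_ip    = mem_read16(ss, bp + 2);
        if (is_far_function(cs, ip))          /* a far call also pushed CS */
            cs = mem_read16(ss, bp + 4);
        ip = ret_ip;
        bp = caller_bp;
    }
}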
And BTW, yes there's a very good reason for legacy BP stack frames being widely used: [SP] isn't a valid 16-bit addressing mode, and only [BP] as a base implies SS as a segment, so yes, using BP for access to the stack was the only good option for random access (not just push/pop for temporaries). No reason not to save/restore it first (before any other registers or reserving stack space) to make a conventional stack-frame.
I don't understand why the CALL instruction in this code doesn't work:
#include <stdio.h>

void main() {
    __asm {
        jmp L1
    L2:
        mov eax, 8
        ret
    L1:
        call L2
    }
}
If I debug the code step by step, the line 'call L2' is not processed, and the program skips directly to the end. What is wrong? I'm working in Visual Studio 2015, targeting 32-bit x86.
The problem
You've stumbled on the difference between step over (F10) and step into (F11).
When you use (the default) step over, call appears to be ignored.
You need to step into the code and then the debugger will behave as you'd expect.
Step over
The way this works with step over is that the debugger sets a breakpoint on the next instruction, halts there and moves the breakpoint to the next instruction again.
Step over knows about (conditional) jumps and accounts for that, but disregards (steps over) call statements; it interprets a call as a jump to another subroutine and 'assumes' you want to stay within the current context.
These automatic breakpoints are ephemeral, unlike manual breakpoints which persist until you cancel them.
Step into
Step into does the same, but also sets a breakpoint at every call destination; in effect leading you deep into the woods traversing every subroutine.
Step out
If you've stepped too deep 'into' a subroutine, Visual Studio allows you to step out using Shift+F11; this will take you back to the next instruction after the originating call.
Some other debuggers name this feature "run until return".
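Implementation-wise, step out is the simplest of the three: read the return address out of the current frame and run to a one-shot breakpoint there. A rough sketch for 32-bit x86 with conventional EBP frames, where [EBP+4] holds the return address (real debuggers fall back on unwind information when frame pointers are omitted); all the helpers are hypothetical:

#include <stdint.h>

extern uint32_t read_ebp(void);              /* debuggee's frame pointer */
extern uint32_t read_mem32(uint32_t addr);   /* read debuggee memory     */
extern void     set_temp_bp(uint32_t addr);  /* one-shot breakpoint      */
extern void     remove_temp_bp(uint32_t addr);
extern void     continue_until_bp(void);

void step_out(void)
{
    uint32_t ret = read_mem32(read_ebp() + 4);  /* saved return address */
    set_temp_bp(ret);
    continue_until_bp();  /* debuggee resumes, halts just after the call */
    remove_temp_bp(ret);
}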
Debugging high level code
When the debugger is handling higher language source code (e.g. C) it keeps a list of target addresses for every line of source code. It will plan its breakpoints per line of source code.
Other than the fact that every line of high-level code translates to zero or more lines of assembly, it works the same as stepping through raw assembly code.
I am currently writing a debugger for a script virtual machine.
The compiler for the scripts generates debug information, such as function entry points, variable scopes, names, instruction to line mappings, etc.
However, I have run into an issue with step-over.
Right now, I have the following:
1. Look up the current IP
2. Get the source line from that
3. Get the next (valid) source line
4. Get the IP where the next valid source line starts
5. Set a temporary breakpoint at that instruction
or: if the next source line no longer belongs to the same function, set the temporary breakpoint at the next valid source line after the return address.
So far this works well. However, I seem to be having problems with jumps.
For example, take the following code:
n = 5;       // Line A
if (n == 5)  // Line B
{
    foo();   // Line C
}
else
{
    bar();   // Line D
    --n;
}
Given this code, if I'm on line B and choose to step-over, the IP determined for the breakpoint will be on line C. If, however, the conditional jump evaluates to false, it should be placed on line D. Because of this, the step-over wouldn't halt at the expected location (or rather, it wouldn't halt at all).
There seems to be little information on debugger implementation of this specific issue out there. However, I found this. While this is for a native debugger on Windows, the theory still holds true.
It seems, though, that the author has not considered this issue either; in the section "Implementing Step-Over" he says:
1. The UI thread calls CDebuggerCore::ResumeDebugging with EResumeFlag set to StepOver.
This tells the debugger thread (running the debugger loop) to put an IBP on the next line.
2. The debugger thread locates the next executable line and its address (0x41141e), and places an IBP at that location.
3. It then calls ContinueDebugEvent, which tells the OS to continue running the debuggee.
4. The BP is now hit; it passes through EXCEPTION_BREAKPOINT and reaches EXCEPTION_SINGLE_STEP. Both these steps are the same as before, including instruction reversal, EIP reduction, etc.
5. It again calls HaltDebugging, which, in turn, awaits user input.
Again:
The debugger thread locates the next executable line and its address (0x41141e), and places an IBP at that location.
This statement does not seem to hold true in cases where jumps are involved, though.
Has anyone encountered this problem before? If so, do you have any tips on how to tackle this?
Since this thread comes up first in Google when searching for "debugger implement step over", I'll share my experiences regarding the x86 architecture.
You start by implementing step into: this is basically single-stepping the instructions and checking whether the line corresponding to the current EIP changes. (You use either the DIA SDK or read the DWARF debug data to find out the current line for an EIP.)
In the case of step over: before single-stepping to the next instruction, you'll need to check whether the current instruction is a CALL instruction. If it is, put a temporary breakpoint on the instruction following it and continue execution until that breakpoint is hit (then remove it). This way you effectively step over function calls at the assembly level, and thus at the source level too.
No need to manage stack frames (unless you need to deal with single-line recursive functions). This approach can be applied to other architectures as well.
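As a rough sketch (the target-control helpers are hypothetical, and the instruction decoding is assumed to come from a disassembler):

#include <stdint.h>

extern uint32_t current_eip(void);
extern int      is_call_insn(uint32_t addr);   /* from a disassembler */
extern uint32_t insn_length(uint32_t addr);
extern void     set_temp_bp(uint32_t addr);
extern void     remove_temp_bp(uint32_t addr);
extern void     continue_until_bp(void);
extern void     single_step(void);

/* One step-over "micro step": step a non-CALL instruction, or run a
   CALL to completion via a temporary breakpoint placed after it. The
   outer loop repeats this until the source line for EIP changes.     */
void step_over_insn(void)
{
    uint32_t eip = current_eip();
    if (is_call_insn(eip)) {
        uint32_t after = eip + insn_length(eip);
        set_temp_bp(after);
        continue_until_bp();   /* the callee runs without stopping */
        remove_temp_bp(after);
    } else {
        single_step();
    }
}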
OK, so since this seems to be a bit of black magic: in this particular case, the most sensible thing was to count the instructions up to where the next line starts (or to one past the end of the instruction stream), and then run that many instructions before halting again.
The only gotcha was that I had to keep track of the stack frame in case a CALL is executed; for step-over, the instructions inside the call should run without being counted.
I've heard the Google talk (http://www.youtube.com/watch?v=_gZK0tW8EhQ) by Ron Garret and read the paper (http://www.flownet.com/gat/jpl-lisp.html), but I'm not understanding how it worked to "correct" supposedly running code with a REPL. Was the DS-1's Lisp code running in some sort of virtual machine? Or was it "live" in the REPL's actual world? Or was the Lisp code an executable that got replaced? What exactly happened/happens when you dynamically change running Lisp code through a REPL?
Whereas most programs are built and distributed as an executable that contains only the necessary components to run the program, Lisp can be distributed as an image that contains not just the components for the specific program, but also much or all of the Lisp runtime and development environment.
The REPL is the quintessential mechanism for providing interactive access to a running Lisp environment. The two key components of the REPL, Read and Eval, expose much of the Lisp runtime system. For example, many Lisp systems today implement Eval by taking the form produced by the Reader, compiling it to machine code, and then executing the result. This is in contrast to interpreting the form. Some systems, especially in the past, contained both an interpreter, which starts executing quickly and is suitable for interactive access, and a compiler, which produces better code. But modern compilers are fast enough that the compilation phase isn't noticeable, and such systems simply forgo the interpreter.
Of course, you can do very similar things today. A simple example is running SSH to your Linux box that's hosting PHP. Your PHP server is up and running and live, serving pages and requests. But you login through SSH, go over and fix a PHP file, and as soon as you save that file, all of your users see the new result in real time -- the system updated itself on the fly.
The fact that PHP is running on a Linux runtime vs. Lisp running on a Lisp runtime is a detail. The effect is the same. The fact that PHP isn't compiled is also a detail. For example, you can do the same thing on a Java server: modify a JSP, save it, and the JSP is converted into a Servlet as Java source code, then compiled on the fly by the Java runtime, then loaded into the executing container, replacing the old code.
Lisp's capability to do this is very nice, and it was very interesting back in the day. Today it's less so, as there are different systems providing similar capabilities.
Addenda:
No, Lisp is not a virtual machine; there's no need for it to be that complicated.
The key to the concept is dynamic dispatch. With dynamic dispatch there is some lookup involved before a function is invoked.
In a static language like C, locations of things are pretty much set in stone once the linker and loader have finished processing the executable in preparation to start executing.
So, in C if you have something simple like:
int add(int i) {
    return i + 1;
}

void main() {
    add(1);
}
After compiling, linking, and loading the program, the address of the add function will be set in stone, and thus anything referring to that function will know exactly where to find it.
So, in assembly language: (note this is a contrived assembly language)
add:    pop r1      ; pop R1 from the stack, loading the i parameter
        add r1, 1   ; add 1 to the parameter
        push r1     ; push result of function call
        rts         ; return from subroutine

main:   push 1      ; push parameter to function
        call add    ; call function
        pop r1      ; gather (and ignore) the result
So, you can see here that add is fixed in place.
In something like Lisp, functions are referred to indirectly. In C-like terms:
int add(int i) {
    return i + 1;
}

int (*add_ptr)(int) = &add;

void main() {
    (*add_ptr)(1);
}
In assembly you get:
add:     pop r1          ; pop R1 from the stack, loading the i parameter
         add r1, 1       ; add 1 to the parameter
         push r1         ; push result of function call
         rts             ; return from subroutine

add_ptr: dw add          ; put the address of the add routine in add_ptr

main:    push 1          ; push parameter to function
         mov r1, add_ptr ; put the contents of add_ptr into R1
         call (r1)       ; call function indirectly through R1
         pop r1          ; gather (and ignore) the result
Now, you can see here that rather than calling add directly, it is called indirectly through add_ptr. A Lisp runtime has the capability of compiling new code, and when that happens, add_ptr is overwritten to point to the newly compiled code. You can see how the code in main never has to change; it will call whatever function add_ptr is pointing to.
Since almost all of the functions in Lisp are referenced indirectly through their symbols, a lot can change "behind the back" of a running system, and the system will continue to run.
When a function is recompiled, the old function code (assuming no other references) becomes eligible for garbage collection and will typically, eventually, go away.
You can also see that when the system is garbage collected, any code that the runtime moves (such as the code for the add function) gets its new location updated in add_ptr, so the system continues operating even after code has been relocated by the garbage collector.
So the key to it all is to have your functions invoked through some lookup mechanism. Having this gives you a lot of flexibility.
Note that you can also do this in a running C system. For example, you can put code in a dynamic library, load the library, and execute the code; and if you want, you can build a new dynamic library, close the old one, open the new one, and invoke the new code, all in a "running" system. The dynamic library interface provides the lookup mechanism that isolates the code.
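A sketch of that last idea using the POSIX dl* API; it assumes a hypothetical libadd.so exporting the add function from above. Rebuild the library while the program runs, call reload() again, and every later call through add_ptr lands in the new code:

#include <dlfcn.h>
#include <stdio.h>

static void *handle;
static int (*add_ptr)(int);

/* (Re)load the library and re-resolve the symbol; calls made through
   add_ptr afterwards use whatever code the new library contains.     */
int reload(const char *path)
{
    if (handle) dlclose(handle);        /* drop the old code */
    handle = dlopen(path, RTLD_NOW);
    if (!handle) return -1;
    add_ptr = (int (*)(int))dlsym(handle, "add");
    return add_ptr ? 0 : -1;
}

int main(void)
{
    if (reload("./libadd.so") != 0) return 1;
    printf("%d\n", add_ptr(1));  /* 2, assuming the original add; changes when swapped */
    return 0;
}

(Compile with -ldl where the dl* functions still live in a separate library.)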