Set breakpoint on variable value change - debugging

I'm just wondering whether it is possible to set a breakpoint on a change of a variable's value (in any programming language or tool)?
For example, I want to say: "Stop anywhere, when the value of variable 'a' changes".
I know that it is possible to set a conditional breakpoint and stop execution when a variable has some specific value, but I haven't heard about observing variable changes.
If it is not possible, why not?

In my experience you can achieve this with a "memory breakpoint" or "memory watchpoint". For example, GDB does it like this: Can I set a breakpoint on 'memory access' in GDB?
As far as I've seen with write watchpoints, the break actually triggers when a is written to, regardless of whether the new value is equal to the old value. So if by "changed" you really mean "changed", then there are fewer examples out there. Possibly even none, I'm not sure, although I don't suppose it would be technically difficult to implement change-only watchpoints, assuming you were already implementing write watchpoints.
For some languages it makes a difference what kind of variable a is. For example, in C or C++ variables can be "lifted" into registers when optimization is enabled, in which case hardware memory watchpoints on the address of the variable will not necessarily catch every change.
There's also a limitation with variables on the stack, that if your function exits but the watchpoint is still set, then it could catch access to the same address, now in use for a different variable in a different function. If your function is called again later (or recursively), it's not necessarily starting from the same stack position, and if not then your watchpoint would fail to catch access to the "same" variable at a different location.
"Stop when a particular condition is true at a particular line of code" is in my experience called a "conditional breakpoint". It generally uses a different mechanism --
the debugger will most likely put a breakpoint instruction at that line of code. Each time it triggers the debugger will check the condition and continue execution if it's false.
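To make the two mechanisms concrete, here is roughly what they look like in GDB (the variable name and source location are made up for illustration):

(gdb) watch a                     # write watchpoint on 'a'
(gdb) awatch a                    # break on any read or write of 'a'
(gdb) rwatch a                    # break only when 'a' is read
(gdb) break main.c:42 if a == 0   # conditional breakpoint at a specific line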

Some processors support hardware breakpoints which will break when an address is read or written. For example, if I have a 4 byte variable at address 0x10005060, then I can set a hardware breakpoint like this (using windbg): ba w4 0x10005060. The processor will break if any of the 4 bytes are written. The following command instructs the processor to break when any of those 4 bytes are read or written: ba r4 0x10005060.

Related

How to set GDB read watchpoint in registers?

I'm trying to set a read watch point in registers.
By reading the user manual, I understood how to set a watchpoint on registers:
watch [-l|-location] expr [thread thread-id] [mask maskvalue]
Set a watchpoint for an expression. GDB will break when the expression expr is written into by the program and its value changes.
For example: watch $r1. It means GDB will break when the register r1 is written into by the program and its value changes. It works!
But when I use rwatch $r1, GDB's reply is "Expression cannot be implemented with read/access watchpoint."
So I want to know how I can set a read watchpoint on a register, or is there any way to break when a register is read by the program?
Many thanks!
So I want to know how can I set a read watch point in registers
I don't believe GDB provides any facility to do so.
When you set a watchpoint on a memory location, GDB can implement it very efficiently on platforms that provide special debug registers (e.g. x86_64). The program then runs at full speed until the location is accessed.
On platforms that don't provide debug registers, GDB has to single-step the program and compare the value at the location with a value it previously recorded. This is very slow (but sometimes useful), and can only work for write watchpoints, for obvious reasons.
Since registers don't have a "location", when you set a watchpoint on a register, GDB performs the same "single step and compare with previous value" dance. And a read watchpoint can't work at all.
What you ask for could be implemented in GDB: single step and examine every instruction -- does it access the named register or not?
But the need for this is very rare, and the implementation complexity is quite high, so I don't think anybody would be willing to actually implement this.
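For what it's worth, the "single step and compare with previous value" loop is simple to sketch in terms of Linux ptrace (the child pid and the watched address are assumed to come from elsewhere, and error handling is omitted):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>

void software_watchpoint(pid_t child, unsigned long watch_addr)
{
    /* record the initial value of the watched word */
    long old = ptrace(PTRACE_PEEKDATA, child, (void *)watch_addr, NULL);
    for (;;) {
        if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
            break;                              /* child gone or not traced */
        int status;
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;
        long now = ptrace(PTRACE_PEEKDATA, child, (void *)watch_addr, NULL);
        if (now != old) {                       /* the value actually changed */
            printf("watched word changed: %#lx -> %#lx\n",
                   (unsigned long)old, (unsigned long)now);
            old = now;
            /* a real debugger would report a watchpoint stop to the user here */
        }
    }
}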

SPARC single-stepping mode

Is there a SPARC equivalent to x86's single step mode? What I want is to stop execution after every instruction and move control flow to a trap handler or something similar.
I thought of using the ta instruction in the delay slot, but this would not work when the previous instruction is a branching instruction with the annul bit set.
SPARC lacks a single-step bit in PSR, so it's harder to single step. But I've used a trick to help get closer. Set TPC to the address of the instruction you want to single step, and set TNPC to an address someplace else where you've placed a trap instruction. When you execute the retry instruction to get back to the process context, it will single step the one instruction you want, then it will next execute the trap instruction, which will bring you right back to the kernel, where you can do whatever you want. (N.b. this is for sparc64, not sure about sparc32.) This is a nice trick because you don't have to modify existing instructions in the user's address space. This was important to me since I was single stepping instructions in the kernel.
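In C terms the trick looks roughly like this (a hedged sketch using Linux-sparc64-style names; the pt_regs layout shown here is a simplified stand-in, and 'trampoline' is a hypothetical address holding a single ta trap instruction):

/* simplified stand-in for the sparc64 trap frame */
struct pt_regs { unsigned long tstate, tpc, tnpc; };

void arm_one_step(struct pt_regs *regs,
                  unsigned long insn_addr,
                  unsigned long trampoline)
{
    regs->tpc  = insn_addr;   /* the single instruction we want to execute */
    regs->tnpc = trampoline;  /* next PC: a trap that bounces us straight
                                 back into the kernel after that instruction */
    /* the subsequent "retry" back to the saved context then runs exactly
       one instruction before trapping again */
}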
Another idea I had, but never tried, was to simply set TNPC to an illegal address. Then after the instruction at TPC was executed, you'd get an automatic trap back into the kernel. And since the trap handling code knows that the process is being single stepped, there would be no confusion over a "real" illegal address trap.

Why doesn't gcc handle volatile register?

I'm working on a timing loop for the AVR platform where I'm counting down a single byte inside an ISR. Since this task is a primary function of my program, I'd like to permanently reserve a processor register so that the ISR doesn't have to hit a memory barrier when its usual code path is decrement, compare to zero, and reti.
The avr-libc docs show how to bind a variable to a register, and I got that working without a problem. However, since this variable is shared between the main program (for starting the timer countdown) and the ISR (for actually counting and signaling completion), it should also be volatile to ensure that the compiler doesn't do anything too clever in optimizing it.
In this context (reserving a register across an entire monolithic build), the combination volatile register makes sense to me semantically, as "permanently store this variable in register rX, but don't optimize away checks because the register might be modified externally". GCC doesn't like this, however, and emits a warning that it might go ahead and optimize away the variable access anyway.
The bug history of this combination in GCC suggests that the compiler team is simply unwilling to consider the type of scenario I'm describing and thinks it's pointless to provide for it. Am I missing some fundamental reason why the volatile register approach is in itself a Bad Idea, or is this a case that makes semantic sense but that the compiler team just isn't interested in handling?
The semantics of volatile are not exactly as you describe ("don't optimize away checks because the register might be modified externally") but are actually more narrow: try to think of it as "don't cache the variable's value from RAM in a register".
Seen this way, it does not make any sense to declare a register as volatile because the register itself cannot be 'cached' and therefore cannot possibly be inconsistent with the variable's 'actual' value.
The fact that read accesses to volatile variables are usually not optimized away is merely a side effect of the above semantics, but it's not guaranteed.
I think GCC should assume by default that a value in a register is 'like volatile' but I have not verified that it actually does so.
Edit:
I just did a small test and found:
avr-gcc 4.6.2 does not treat global register variables like volatiles with respect to read accesses, and
the Naggy extension for Atmel Studio detects an error in my code: "global register variables are not supported".
Assuming that global register variables are actually considered "unsupported" I am not surprised that gcc treats them just like local variables, with the known implications.
My test code looks like this:
#include <stdint.h>

uint8_t var;
volatile uint8_t volVar;
register uint8_t regVar asm("r13");

#define NOP asm volatile ("nop\r\n":::)

int main(void)
{
    var = 1;            // <-- kept
    if ( var == 0 ) {
        NOP;            // <-- optimized away, var is not volatile
    }
    volVar = 1;         // <-- kept
    if ( volVar == 0 ) {
        NOP;            // <-- kept, volVar *is* volatile
    }
    regVar = 1;         // <-- optimized away, regVar is treated like a local variable
    if ( regVar == 0 ) {
        NOP;            // <-- optimized away consequently
    }
    for(;;){}
}
The reason you would use the volatile keyword on AVR variables is to, as you said, avoid the compiler optimizing access to the variable. The question now is, how does this happen though?
A variable has two places it can reside. Either in the general purpose register file or in some location in RAM. Consider the case where the variable resides in RAM. To access the latest value of the variable, the compiler loads the variable from RAM, using some form of the ld instruction, say lds r16, 0x000f. In this case, the variable was stored in RAM location 0x000f and the program made a copy of this variable in r16. Now, here is where things get interesting if interrupts are enabled. Say that after loading the variable, the following occurs inc r16, then an interrupt triggers and its corresponding ISR is run. Within the ISR, the variable is also used. There is a problem, however. The variable exists in two different versions, one in RAM and one in r16. Ideally, the compiler should use the version in r16, but this one is not guaranteed to exist, so it loads it from RAM instead, and now, the code does not operate as needed. Enter then the volatile keyword. The variable is still stored in RAM, however, the compiler must ensure that the variable is updated in RAM before anything else happens, thus the following assembly may be generated:
cli
lds r16, 0x000f
inc r16
sei
sts 0x000f, r16
First, interrupts are disabled. Then the variable is loaded into r16. The variable is incremented, interrupts are enabled, and then the variable is stored. It may appear confusing for the global interrupt flag to be enabled before the variable is stored back to RAM, but from the instruction set manual:
The instruction following SEI will be executed before any pending interrupts.
This means that the sts instruction will be executed before any interrupts trigger again, and that the interrupts are disabled for the minimum amount of time possible.
Consider now the case where the variable is bound to a register. Any operations done on the variable are done directly on the register. These operations, unlike operations done to a variable in RAM, can be considered atomic, as there is no read -> modify -> write cycle to speak of. If an interrupt triggers after the variable is updated, it will get the new value of the variable, since it will read the variable from the register it was bound to.
Also, since the variable is bound to a register, any test instructions will utilize the register itself and will not be optimized away on the grounds the compiler may have a "hunch" it is a static value, given that registers by their very nature are volatile.
Now, from experience, when using interrupts in AVR, I have sometimes noticed that the global volatile variables never hit RAM. The compiler kept them in registers all the time, bypassing the read -> modify -> write cycle altogether. This was due, however, to compiler optimizations, and it should not be relied on. Different compilers are free to generate different assembly for the same piece of code. You can generate a disassembly of your final file or any particular object files using the avr-objdump utility.
Cheers.
Reserving a register for one variable for a complete compilation unit is probably too restrictive for a compiler's code generator. That is, every C routine would have to NOT use that register.
How do you guarantee that other called routines do NOT use that register once your code goes out of scope? Even stuff like serial i/o routines would have to NOT use that reserved register. Compilers do NOT recompile their run-time libraries based on a data definition in a user program.
Is your application really so time sensitive that the extra delay of bringing memory up from L2 or L3 can be detected? If so, then your ISR might be running so frequently that the required memory location is always available (i.e. it doesn't get evicted back down through the cache) and thus does NOT hit a memory barrier (I assume by memory barrier you are referring to how memory in a CPU really operates, through caching, etc.). But for this to really be true the processor would have to have a fairly large L1 cache and the ISR would have to run at a very high frequency.
Finally, sometimes an application's requirements make it necessary to code it in ASM in which case you can do exactly what you are requesting!

How does a debugger set breakpoints if the image is in read-only memory?

How does a debugger set breakpoints if the image is in read-only memory? I know there are hardware breakpoints, but in the debugger I use (OllyDbg) those have to be set specially using a different dialog than normal breakpoints.
Explanation:
Here is a routine in a debugger that is comparing itself to a copy of itself. EDX points to the running image, EBX points to the known good copy of the image. The breakpoint on 4010CE is only reached if there is a mismatch. The character being compared is in the AL register. As you can see, the debugger shows EB F6 at 10CE, but this is false: 10CE actually contains CC, as you can see by looking at the AL register. This is because the debugger has secretly inserted the CC to implement the breakpoint.
The debugger first has to change the memory protection of the page it wants to write to. This can be done with VirtualProtectEx. After that it is able to write with WriteProcessMemory and then set the protection back to the original value.
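In code, that sequence looks roughly like this (a simplified sketch; the API calls are the ones named above, while the helper name and error handling are illustrative):

#include <windows.h>

BOOL set_software_breakpoint(HANDLE hProcess, LPVOID addr, BYTE *savedByte)
{
    DWORD oldProtect;
    BYTE int3 = 0xCC;           /* x86 breakpoint instruction */
    SIZE_T n;

    /* make the (normally execute/read-only) page writable */
    if (!VirtualProtectEx(hProcess, addr, 1, PAGE_EXECUTE_READWRITE, &oldProtect))
        return FALSE;
    /* remember the original byte so it can be restored when the bp is hit */
    if (!ReadProcessMemory(hProcess, addr, savedByte, 1, &n))
        return FALSE;
    if (!WriteProcessMemory(hProcess, addr, &int3, 1, &n))
        return FALSE;
    FlushInstructionCache(hProcess, addr, 1);
    /* put the original protection back */
    VirtualProtectEx(hProcess, addr, 1, oldProtect, &oldProtect);
    return TRUE;
}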
Let me preface this with a disclaimer that I'm not familiar with your particular toolset.
If you haven't enabled hardware breakpoints, the only remaining breakpoint type is a software breakpoint. These are only hit (on x86 because that's what I'm most familiar with) when you replace the first byte of an instruction with a trap instruction, and will only be routed through the breakpoint mechanism of your OS to your debugger if the correct trap instruction for your OS is used and the debugger has already registered itself with the OS as a debugger for this process. In order to cause the software breakpoint to happen at the correct moment, the trap instruction must be written into your code segment over the first byte of your correct instruction.
The two answers that got here first explain the two scenarios which could get you here (at least, the only two I can think of):
The kernel always has write access everywhere, except for hardware-protected pages (i.e. on some sort of ROM), which your process' memory is almost certainly not. It has the ability to write the breakpoint instruction regardless of the permissions exposed to the user process being debugged.
The debugger must use some syscall to change the access rights on the memory of the target process before inserting the breakpoint.
Personally, I'm guessing the first thing is happening. The segment permissions are only in place to protect your target process from itself, not from a debugger process or from the kernel. Debugging mechanisms in operating systems pretty regularly violate "normal" permissions to allow the debugger to do whatever it wants to the target process. This, of course, is why some operating systems require you to enter a password before you're allowed to use the debugger in certain scenarios.
However, you can test if it's the second one by attempting to write to the code segment from inside the target process after a breakpoint has been set. If the write succeeds, you know the permissions have been lowered by the OS (to allow the process to be debugged). It would be pretty awkward for the OS to require a debugger to jump through this hoop since it can already insert arbitrary code into the writeable parts of memory and then force a jump to it by generating a stack frame overflow.
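If you want to try that test on Windows, a hedged sketch could look like this (MSVC structured exception handling; probe_target is just a stand-in for "some code address in the target process"):

#include <windows.h>
#include <stdio.h>

static void probe_target(void) { }   /* stand-in for "some code in this process" */

int main(void)
{
    unsigned char *code = (unsigned char *)(void *)probe_target;
    __try {
        unsigned char save = *code;
        *code = save;                 /* write the byte back unchanged */
        puts("write succeeded: the code page's protection has been lowered");
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        puts("write faulted: the code page is still read/execute only");
    }
    return 0;
}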
The debugger takes advantage of the WriteProcessMemory() function to alter the instruction in place. It'll keep a copy of the original byte. When the bp is hit, it will restore the old byte value and set EIP back to that instruction so the real instruction can execute.
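A hedged sketch of that restore step, assuming a 32-bit target (hence ctx.Eip; on x64 it would be ctx.Rip) and that page protection has already been handled as in the earlier sketch:

#include <windows.h>

void handle_breakpoint_hit(HANDLE hProcess, HANDLE hThread, LPVOID addr, BYTE savedByte)
{
    SIZE_T n;
    CONTEXT ctx;

    /* restore the original byte that the 0xCC replaced */
    WriteProcessMemory(hProcess, addr, &savedByte, 1, &n);
    FlushInstructionCache(hProcess, addr, 1);

    /* rewind EIP so it points at the restored instruction again */
    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);
    ctx.Eip = (DWORD)(DWORD_PTR)addr;
    SetThreadContext(hThread, &ctx);
}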

What happens if TF(trap flag) is set to 0 in 8086 microprocessors?

Here is what I found:
Trap Flag (T) – This flag is used for on-chip debugging. Setting the trap flag puts the microprocessor into single-step mode for debugging. In single stepping, the microprocessor executes an instruction and enters the single-step ISR.
If the trap flag is set (1), the CPU automatically generates an internal interrupt after each instruction, allowing a program to be inspected as it executes, instruction by instruction.
If the trap flag is reset (0), no function is performed.
https://en.wikipedia.org/wiki/Trap_flag
Now I am coding on emu8086. As explained, TF must be set in order for the debugger to work.
Should I always set TF myself, or is it set automatically?
If I somehow set TF to 0, will debuggers across the whole computer system stop working, or will just emu8086 fail to debug?
I've never used emu8086, but by looking at some screenshots of it and judging by its name, it's probably an emulator - this means it is not running the code natively.
Each instruction changes the state of a virtual 8086 CPU (represented as a data structure in memory) and not the state of your real CPU.
With this emulation, emu8086 doesn't need to rely on the TF flag to single-step your program, it just needs to stop after one step of emulation and wait for you to hit another button.
This is also why you can find a thing such as "Step back".
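A toy sketch of that idea (all names are hypothetical, not emu8086 internals):

#include <stdio.h>
#include <string.h>

/* a toy model of the emulated CPU; the real state is of course much richer */
struct cpu_state {
    unsigned int ip;
    unsigned char mem[65536];
};

/* emulate exactly one instruction; here every opcode is treated as a
   one-byte no-op just to keep the sketch self-contained */
static void execute_one(struct cpu_state *cpu) {
    cpu->ip += 1;
}

/* "single step" for an emulator: run one instruction, then hand control
   back to the UI -- no TF flag and no real CPU trap involved */
static void emulator_step(struct cpu_state *cpu, struct cpu_state *history) {
    *history = *cpu;     /* snapshotting the old state is what makes "Step back" possible */
    execute_one(cpu);
    printf("stopped after one instruction, ip=%u\n", cpu->ip);
}

int main(void) {
    struct cpu_state cpu, prev;
    memset(&cpu, 0, sizeof cpu);
    emulator_step(&cpu, &prev);   /* one press of the "single step" button */
    return 0;
}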
If you were wondering what would happen if a debugged program (and not an emulated one) sets the TF flag then the answer is that it depends on the debugger.
The correct behaviour is the one where the debuggee receives the exceptions but this is hard to handle correctly (since the debugger itself uses the TF flag).
Some debuggers just don't care and swallow the exception (i.e. they don't forward it to the program under debug), assuming that a well-written program doesn't need to use the TF flag.
Unfortunately, malware routinely uses a set of anti-debug techniques, including setting TF and checking it back / waiting for exceptions, to detect the presence of a debugger.
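As an illustration, the classic TF anti-debug check looks roughly like this on Linux/x86-64 with GCC inline assembly (a hedged sketch; the exact outcome depends on the OS and debugger, as discussed next):

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t got_trap = 0;
static void on_trap(int sig) { (void)sig; got_trap = 1; }

int main(void) {
    signal(SIGTRAP, on_trap);
    /* set TF (bit 8 of RFLAGS); the single-step trap taken after the next
       instruction is delivered to us as SIGTRAP if no debugger eats it */
    __asm__ volatile(
        "pushfq\n\t"
        "orq $0x100, (%%rsp)\n\t"
        "popfq\n\t"
        "nop\n\t"
        ::: "memory", "cc");
    if (got_trap)
        puts("got SIGTRAP: no debugger swallowed the single-step trap");
    else
        puts("no SIGTRAP: a debugger probably intercepted it");
    return 0;
}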
A truly transparent debugger has to handle the RFLAGS register carefully.
When debugging with breakpoints the TF is not set while the program is executing, so there is nothing to worry about.
However, when single stepping, the TF is set during the next instruction; this is problematic during a pushfd/q, and the debugger must explicitly handle that case to avoid detection.
If the debuggee sets the TF, the debugger must pass the debug exception to the program - under current OSes the TF won't last more than an instruction, because the OS will catch the exception,
transform it into a signal and dispatch it to the program while clearing the TF. So the debugger can simply do a check before stepping into a popfd/q instruction.
Where the TF doesn't get cleared by the OS the debugger must effectively emulate RFLAGS with a copy.
The debugger sets TF according to what it needs to do. The code being debugged should not modify TF.

Resources