x86 assembler pushf causes program to exit - visual-studio

I think my real problem is that I don't completely understand the stack frame mechanism, so I am looking to understand why the following code causes program execution to resume at the end of the application.
This code is called from a C function that is several call levels deep, and the pushf causes program execution to revert back several levels through the stack and completely exit the program.
Since my workaround works as expected, I would like to know why using the pushf instruction appears to be (I assume) corrupting the stack.
In the routines I usually set up and clean up the stack with:
sub rsp, 28h
...
add rsp, 28h
However, I noticed that this is only necessary when the assembly code calls a C function.
So I tried removing this from both routines, but it made no difference. SaveFlagsCmb is an assembly function but could easily be a macro.
The code implements the emulated 6809 CPU's Rora instruction (Rotate Right register A).
PUBLIC Rora_I_A ; Op 46 - Rotate Right through Carry A reg
Rora_I_A PROC
sub rsp, 28h
; Restore Flags
mov cx, word ptr [x86flags]
push cx
popf
; Rotate Right the byte and save the FLAGS
rcr byte ptr [q_s+AREG], 1
; rcr only affects Carry. Save the Carry first in dx then
; add 0 to result to trigger Zero and Sign/Neg flags
pushf ; this causes jump to end of program ????
pop dx ; this line never reached
and dx, CF ; Save only Carry Flag
add [q_s+AREG], 0 ; trigger NZ flags
mov rcx, NF+ZF+CF ; Flag Mask NZ
Call SaveFlagsCmb ; NZ from the add and CF saved in dx
add rsp, 28h
ret
Rora_I_A ENDP
However, if I use this code instead, it works as expected:
PUBLIC Rora_I_A ; Op 46 - Rotate Right through Carry A reg
Rora_I_A PROC
; sub rsp, 28h ; works with or without this!!!
; Restore Flags
mov ah, byte ptr [x86flags+LSB]
sahf
; Rotate Right the byte and save the FLAGS
rcr byte ptr [q_s+AREG], 1
; rcr only affects Carry. Save the Carry first in dx then
; add 0 to result to trigger Zero and Sign/Neg flags
lahf
mov dl, ah
and dx, CF ; Save only Carry Flag
add [q_s+AREG], 0 ; trigger NZ flags
mov rcx, NF+ZF+CF ; Flag Mask NZ
Call SaveFlagsCmb ; NZ from the add and CF saved in dx
; add rsp, 28h ; works with or without this!!!
ret
Rora_I_A ENDP

Your reported behaviour doesn't really make sense. This answer mostly just provides some background, not a real answer, plus a suggestion not to use pushf/popf in the first place, for performance reasons.
Make sure your debugging tools work properly and aren't being fooled by something into falsely showing a "jump" to somewhere. (And a jump to where, exactly?)
There's little reason to mess around with 16-bit operand size, but that's probably not your problem.
In Visual Studio / MASM, apparently (according to the OP's comment) pushf assembles as pushfw, 66 9C, which pushes 2 bytes. Presumably popf also assembles as popfw, only popping 2 bytes into FLAGS instead of the normal 8 bytes into RFLAGS. Other assemblers are different (see footnote 1).
So your code should work. Unless you're accidentally setting some other bit in FLAGS that breaks execution? There are bits in EFLAGS/RFLAGS other than condition codes, including the single-step TF Trap Flag: debug exception after every instruction.
We know you're in 64-bit mode, not 32-bit compat mode, otherwise rsp wouldn't be a valid register. And running 64-bit machine code in 32-bit mode wouldn't explain your observations either.
I'm not sure how that would explain pushf being a jump to anywhere. pushf itself can't fault or jump, and if popf set TF then the instruction after popf would have caused a debug exception.
Are you sure you're assembling 64-bit machine code and running it in 64-bit mode? The only thing that would be different if a CPU decoded your code in 32-bit mode should be the REX prefix on sub rsp, 28h, and the RIP-relative addressing mode on [x86flags] decoding as absolute (which would presumably fault). So I don't think that could explain what you're seeing.
Are you sure you're single-stepping by instructions (not source lines or C statements) with a debugger to test this?
Use a debugger to look at the machine code as you single-step. This seems really weird.
Anyway, it seems like a very low-performance idea to use pushf / popf at all, and also to be using 16-bit operand-size, creating false dependencies.
e.g. you can set x86 CF with movzx ecx, word ptr [x86flags] / bt ecx, CF.
You can capture the output CF with setc cl.
Also, if you're going to do multiple things to the byte from the guest memory, load it into an x86 register. A memory-destination RCR and a memory-destination ADD are unnecessarily slow vs. load / rcr / ... / test reg,reg / store.
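Putting those suggestions together, here's a minimal sketch of the same rotate, using the question's q_s/AREG/x86flags names and assuming (as the question's comments describe) that SaveFlagsCmb combines the current NZ flags with the carry passed in dx; stack reservation is omitted, as in the question's second version:
movzx ecx, word ptr [x86flags]
bt ecx, 0 ; CF is bit 0 of FLAGS: copy the saved guest carry into x86 CF
mov al, byte ptr [q_s+AREG] ; load the guest A register once
rcr al, 1 ; rotate through the restored carry
setc dl ; capture the bit that was rotated out (the new carry)
movzx edx, dl
test al, al ; set ZF/SF from the result, no memory RMW needed
mov byte ptr [q_s+AREG], al ; store the result (mov doesn't disturb flags)
mov rcx, NF+ZF+CF ; flag mask, as in the question
call SaveFlagsCmb ; NZ from the test, carry in dx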
LAHF/SAHF may be useful, but you can also do without them in many cases. popf is quite slow (https://agner.org/optimize/) and it forces a round trip through memory. However, there is one condition code outside the low 8 bits of x86 FLAGS: OF (signed overflow). asm-source compatibility with the 8080 is still hurting x86 in 2019 :(
You can restore OF from a 0/1 integer with add al, 127: if AL was originally 1, it will overflow to 0x80; otherwise it won't. You can then restore the rest of the condition codes with SAHF. You can extract OF with seto al. Or you can just use pushf/popf.
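For example (a sketch, assuming the 0/1 OF value is in AL and a SAHF-format flags byte is in AH):
add al, 127 ; AL=1 overflows to 0x80 and sets OF; AL=0 gives 127, OF clear
sahf ; restore SF/ZF/AF/PF/CF from AH
; ... later, to save the condition codes again:
seto al ; extract OF as a 0/1 byte
lahf ; and the low-8 condition codes into AH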
; sub rsp, 28h ; works with or without this!!!
Yes of course. You have a leaf function that doesn't use any stack space.
You only need to reserve another 40 bytes (align the stack + 32 bytes of shadow space) if you were going to make another function call from this function.
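In other words, the usual non-leaf pattern looks something like this (a sketch; SomeCFunction is a placeholder):
NonLeaf PROC
sub rsp, 28h ; 32 bytes of shadow space + 8 to re-align RSP to 16
; ...
call SomeCFunction ; the callee may spill rcx/rdx/r8/r9 into our shadow space
add rsp, 28h
ret
NonLeaf ENDP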
Footnote 1: pushf/popf in other assemblers:
In NASM, pushf/popf default to the same width as other push/pop instructions: 8 bytes in 64-bit mode. You get the normal encoding without an operand-size prefix. (https://www.felixcloutier.com/x86/pushf:pushfd:pushfq)
Like for integer registers, both 16 and 64-bit operand-size for pushf/popf are encodeable in 64-bit mode, but 32-bit operand size isn't.
In NASM, your code would be broken because push cx / popf would push 2 bytes and pop 8, popping 6 bytes of your return address into RFLAGS.
But apparently MASM isn't like that. It's probably a good idea to use explicit operand-size mnemonics anyway, like pushfw and popfw, if you use them at all, to make sure you get the 66 9C encoding, not just 9C pushfq.
Or better, use pushfq and pop rcx like a normal person: only write to 8 or 16-bit partial registers when you need to, and keep the stack qword-aligned. (16-byte alignment before call, 8-byte alignment always.)
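For example, a sketch of saving and restoring the full RFLAGS without any operand-size surprises:
pushfq ; push all 8 bytes of RFLAGS
pop rcx ; RFLAGS value in rcx; the stack stays qword-aligned
; ... inspect or modify rcx ...
push rcx
popfq ; restore the full 8 bytes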

I believe this is a bug in Visual Studio. I'm using 2022, so it's an issue that's been around for a while.
I don't know exactly what triggers it; however, stepping over one specific pushf in my code had the same symptoms, albeit with the code actually working.
Putting a breakpoint on the line after the pushf did break, and allowed further debugging of my app. Adding a push ax / pop ax before the pushf also seemed to fix the issue. So it must be a Visual Studio issue.
At this point I think MASM and debugging in Visual Studio have pretty much been abandoned. Any suggestions for alternatives for developing DLLs on Windows would be appreciated!

Related

LOOP is done only one time when single stepping in Turbo Debugger

The code should output 'ccb', but it outputs only 'c'. The LOOP is done only one time; I have traced it in TD, but why is the LOOP done only one time?
I think I must decrement STRING_LENGTH, so I wrote
DEC STRING_LENGTH
but it did not work, so I wrote it like this:
MOV SP,STRING_LENGTH
DEC SP
MOV STRING_LENGTH,SP
I know what you are thinking right now, that this is so incorrect; you are right. :)
I can use C++, but I want to do it only in assembly.
DOSSEG
.MODEL SMALL
.STACK 200H
.DATA
STRING DB 'cScbd$'
STRING_LENGTH EQU $-STRING
STRING1 DB STRING_LENGTH DUP (?) , '$'
.CODE
MOV AX,@DATA
MOV DS,AX
XOR SI,SI
XOR DI,DI
MOV CX,STRING_LENGTH
S:
MOV BL,STRING[DI]
AND STRING[DI],01111100B
CMP STRING[DI],01100000B
JNE L1
MOV AL,BL
MOV STRING1[SI],AL
ADD SI,2
L1:
ADD DI,2
LOOP S
MOV DL,STRING1
MOV AH,9
INT 21H
MOV AH,4CH
INT 21H
END
In Turbo Debugger (TD.EXE), the F8 "step" will execute the loop instruction completely, until cx becomes zero (you can even create an infinite loop by updating cx back to some value, preventing it from reaching the 1 -> 0 step).
To truly single-step the loop instruction, use the F7 "trace" - that will cause cx to go from 6 to 5, and the code pointer will follow the jump back to the start of the loop.
Some other issues with your code:
MOV SP,STRING_LENGTH
DEC SP
MOV STRING_LENGTH,SP
sp is not a general-purpose register; don't use it for calculations like this. Whenever an instruction uses the stack implicitly (push, pop, call, ret, ...), values are written and read in the memory area addressed by the ss:sp register pair, so by manipulating the sp value you are modifying the current "stack".
Also, in 16-bit x86 real mode, all interrupts (keyboard, timer, ...) store the current flags register and code address on the stack when they occur, before giving control to the interrupt handler, which usually pushes additional values onto the stack. So whatever is in memory at addresses below the current ss:sp is not safe in 16-bit x86 real mode; the memory content there keeps "randomly" changing as interrupts execute (TD.EXE itself uses part of this stack memory after every single step).
For arithmetic use other registers, not sp. Once you know enough about the "stack", you will understand what kind of sp manipulation is common and why (like sub sp,40 at the beginning of a function that needs additional "local" memory space), and how to restore the stack back to its expected state.
One more thing about that:
MOV SP,STRING_LENGTH
DEC SP
MOV STRING_LENGTH,SP
STRING_LENGTH is defined by EQU, which makes it a compile-time constant, and only a compile-time constant. It's not a "variable" (no memory is allocated), contrary to things like someLabel dw 1345, which causes the assembler to emit two bytes with values 0100_0001B, 0000_0101B (read as a 16-bit word in little-endian order, that's the value 1345 encoded), where the first byte's address gets the symbolic name someLabel, which can be used in further instructions, like dec word ptr [someLabel] to decrement that value in memory from 1345 to 1344 at runtime.
But EQU is different: it assigns the symbol STRING_LENGTH a final value, like 14.
So your code can be read as:
mov sp,14 ; almost makes sense (practically destroys the stack setup)
dec sp ; still valid
mov 14,sp ; doesn't make any sense; a constant can't be the destination of MOV
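If you want a length you can decrement at runtime, put it in memory instead, or just keep it in a general register; a sketch (STR_LEN_VAR is a hypothetical name, declared in .DATA):
STR_LEN_VAR DW STRING_LENGTH ; a word in memory, initialized from the EQU constant
; ...
DEC STR_LEN_VAR ; legal: decrements the value in memory at runtime
MOV CX,STR_LEN_VAR
; or simply fold the adjustment at assembly time:
MOV CX,STRING_LENGTH - 1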

Direction Flag DF in the Windows 64-bit calling convention?

According to the docs I can find on calling Windows functions, the following applies:
The Microsoft x64 calling convention[12][13] is followed on Windows and pre-boot UEFI (for long mode on x86-64). It uses registers RCX, RDX, R8, R9 for the first four integer or pointer arguments (in that order), and additional arguments are pushed onto the stack (right to left). Integer return values (similar to x86) are returned in RAX if 64 bits or less.
In the Microsoft x64 calling convention, it's the caller's responsibility to allocate 32 bytes of "shadow space" on the stack right before calling the function (regardless of the actual number of parameters used), and to pop the stack after the call. The shadow space is used to spill RCX, RDX, R8, and R9,[14] but must be made available to all functions, even those with fewer than four parameters.
The registers RAX, RCX, RDX, R8, R9, R10, R11 are considered volatile (caller-saved).[15]
The registers RBX, RBP, RDI, RSI, RSP, R12, R13, R14, and R15 are considered nonvolatile (callee-saved).[15]
So, I have been happily calling kernel32 until a call to GetEnvironmentVariableA failed under certain circumstances. I finally traced it back to the fact that the direction flag DF was set and I needed to clear it.
I have not, up till now, been able to find any mention of this, and wondered whether it is prudent to always clear it before a call.
Or maybe that would cause other problems. Is anyone aware of the conventions for calling in this instance?
Windows assumes that the direction flag is cleared. Although the article talks only about the C run-time, this is true for the whole of Windows (I think because Windows code itself is primarily written in C/C++). So when your program begins to execute, you can assume that DF is 0. Usually you do not need to change this flag. However, if you temporarily change it (set it to 1) in some internal routine, you must clear it with cld before calling any Windows API or any external module (because they assume that DF is 0).
All Windows interrupt handlers clear DF to 0 at the very beginning of execution, so it is safe to temporarily set DF to 1 in your own internal code; the main thing is to reset it back to 0 before any external call.
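So the safe pattern is something like this (a sketch; the operand setup is omitted):
std ; DF=1 for a backwards rep movsb, purely internal use
rep movsb ; rsi/rdi/rcx set up elsewhere
cld ; restore DF=0 before leaving your own code
call GetEnvironmentVariableA ; Windows APIs assume DF is already clear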

Compiler generated unexpected `IN AL, DX` (opcode `EC`) while setting up call stack

I was looking at some compiler output, and when a function is called it usually starts setting up the call stack like so:
PUSH EBP
MOV EBP, ESP
PUSH EDI
PUSH ESI
PUSH EBX
So we save the base pointer of the calling routine on the stack, move our own base pointer up, and then store the contents of a few registers on the stack. These are then restored to their original values at the end of the routine, like so:
LEA ESP, [EBP-0Ch]
POP EBX
POP ESI
POP EDI
POP EBP
RET
So far, so good. However, I noticed that in one routine the code that sets up the call stack looks a little different. In fact, it looks like this:
IN AL, DX
PUSH EDI
PUSH ESI
PUSH EBX
This is quite confusing for a number of reasons. For one thing, the end-of-method code is identical to that quoted above for the other method, and in particular seems to expect a saved copy of EBP to be available on the stack.
For another, if I understand correctly, the instruction IN AL, DX reads into the AL register, which is the low byte of the EAX register, and as it so happens the very next command here is
XOR EAX, EAX
as the program wants to zero a few things it allocated on the stack.
Question: I'm wondering exactly what's going on here that I don't understand. The machine code being translated as IN AL, DX is the single byte EC, whereas the pair of instructions
PUSH EBP
MOV EBP, ESP
would correspond to the three bytes 55 8B EC. Is the disassembler misreading this somehow? Or is something relying on a side effect I don't understand?
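For reference, here are the real encodings, and what a disassembler that starts the decode two bytes too late would see:
55 push ebp
8B EC mov ebp, esp
; starting the decode at the last byte instead:
EC in al, dx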
If anyone's curious, this machine code was generated by the CLR's JIT compiler, and I'm viewing it with the Visual Studio debugger. Here's a minimal reproduction in C#:
class C {
    string s = "";
    public void f(string s) {
        this.s = s;
    }
}
However, note that this seems to be non-deterministic; sometimes I seem to get the IN AL, DX version, while other times there's a PUSH EBP followed by a MOV EBP, ESP.
EDIT: I'm starting to strongly suspect a disassembler bug -- I just got another situation where it shows IN AL, DX (opcode EC) and the two preceding bytes in memory are 55 8B. So perhaps the disassembler is simply confused about the entry point of the method. (Though I'd still like some insight as to why that's happening!)
Sounds like you are using VS2015. Your conclusion is correct: its debugging engine has a lot of bugs. Yes, wrong address. That's not the only problem; it does not restore breakpoints properly, and you are apt to see the INT3 instruction still in the code. And it can't correctly refresh the disassembly when the jitter has re-generated the code and replaced stub calls. You can't trust anything you see.
I recommend you use Tools > Options > Debugging > General and tick the "Use Managed Compatibility Mode" checkbox. That forces the debugger to use an older debugging engine, VS2010 vintage. It is much more stable.
You'll lose some features with this engine, like return value inspection and 64-bit Edit+Continue. They won't be missed when you do this kind of debugging. You will however see fake code addresses, as was always common before, so all CALL addresses are wrong and you can't easily identify calls into the CLR. Flipping the engine back and forth is a workaround of sorts, but of course a big annoyance.
This hasn't been worked on either; I saw no improvements in the Updates. But they no doubt had a big bug list to work through; VS2015 shipped before it was done. Hopefully VS2017 is better, we'll find out soon.
As Hans answered, it's a bug in Visual Studio.
To confirm the same, I disassembled a binary using IDA 6.5 and Visual Studio 2019: Visual Studio 2019 missed 2 bytes (0x55 0x8B) when determining the start of main.
Note: 'Use managed compatibility mode' mentioned by Hans didn't fix the issue in VS2019.

Structured Exception Handler catches near-zero EIP trap differently on nearly identical machines?

I have a rather complex, but extremely well-tested assembly language x86-32 application running on variety of x86-32 and x86-64 boxes. This is a runtime system for a language compiler, so it supports the execution of another compiled binary program, the "object code".
It uses Windows SEH to catch various kinds of traps: division by zero, illegal access, ... and prints a register dump, using the context information provided by Windows, which shows the state of the machine at the time of the trap. (It does lots of other stuff irrelevant to the question, such as printing a function backtrace or recovering from the division by zero as appropriate.) This allows the writer of the "object code" to get some idea of what went wrong with his program.
It behaves differently on two Windows 7-64 systems that are more or less identical, on what I think is an illegal memory access. The specific problem is that the "object code" (not the well-tested runtime system) somewhere stupidly loads 0x82 into EIP; that is a nonexistent page in the address space AFAIK. I expect a Windows trap through the SEH, and expect to see a register dump with EIP=00000082 etc.
On one system, I get exactly that register dump. I could show it here, but it doesn't add anything to my question. So, it is clear the SEH in my runtime system can catch this, and display the situation. This machine does not have any MS development tools on it.
On the other ("mystery") system, with the same exact binaries for runtime system and object code, all I get is the command prompt. No further output. FWIW, this machine has MS Visual Studio 2010 on it. The mystery machine is used heavily for other purposes, and shows no other funny behaviors in normal use.
I assume the behavior difference is caused by a Windows configuration setting somewhere, or something that Visual Studio controls. It isn't the DEP configuration in the system menu; both machines are configured (vanilla) as "DEP for standard system processes". And my runtime system executable has "No (/NXCOMPAT:NO)" configured.
Both machines are i7 but different chips, 4 cores, lots of memory, different motherboards. I don't think this is relevant; surely both of these CPUs take traps the same way.
The runtime system includes the following line on startup:
SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX); // stop Windows pop-up on crashes
This was recently added to prevent the "mystery" system from showing a pop-up window, "xxx.exe has stopped working" when the crash occurs. The pop-up box behaviour doesn't happen on the first system, so all this did was push the problem into a different corner on the "mystery" machine.
Any clue where I look to configure/control this?
I provide here the SEH code I am using. It has been edited to remove a considerable amount of sanity-checking code that I claim has no effect on the apparent state seen in this code.
The top level of the runtime system generates a set of worker threads (using CreateThread) and points them to execute ASMGrabGranuleAndGo; each thread sets up its own SEH, and branches off to a work-stealing scheduler, RunReadyGranule. To the best of my knowledge, the SEH is not changed after that; at least, the runtime system and the "object code" do not do this, but I have no idea what the underlying (e.g., standard "C") libraries might do.
Further down I provide the trap handler, TopLevelEHFilter.
Yes, it's possible the register-printing machinery itself blows up, causing a second exception. I'll try to check into this again soon, but IIRC my last attempt to catch this in the debugger on the mystery machine did not pass control to the debugger, just got me the pop-up window.
public ASMGrabGranuleAndGo
ASSUME FS:NOTHING ; cancel any assumptions made for this register
ASMGrabGranuleAndGo:
;Purpose: Entry for threads as workers in PARLANSE runtime system.
; Each thread initializes as necessary, just once,
; It then goes and hunts for work in the GranulesQ
; and start executing a granule whenever one becomes available
; install top level exception handler
; Install handler for hardware exceptions
cmp gCompilerBreakpointSet, 0
jne HardwareEHinstall_end ; if set, do not install handler
push offset TopLevelEHFilter ; push new exception handler on Windows thread stack
mov eax, [TIB_SEH] ; expected to be empty
test eax, eax
BREAKPOINTIF jne
push eax ; save link to old exception handler
mov fs:[TIB_SEH], esp ; tell Windows that our exception handler is active for this thread
HardwareEHinstall_end:
;Initialize FPU to "empty"... all integer grains are configured like this
finit
fldcw RTSFPUStandardMode
lock sub gUnreadyProcessorCount, 1 ; signal that this thread has completed its initialization
@@: push 0 ; sleep for 0 ticks
call MySleep ; give up CPU (lets other threads run if we don't have enuf CPUs)
lea esp, [esp+4] ; pop arguments
mov eax, gUnreadyProcessorCount ; spin until all other threads have completed initialization
test eax, eax
jne @b
mov gThreadIsAlive[ecx], TRUE ; signal to scheduler that this thread now officially exists
jmp RunReadyGranule
ASMGrabGranuleAndGo_end:
;-------------------------------------------------------------------------------
TopLevelEHFilter: ; catch Windows Structured Exception Handling "trap"
; Invocation:
; call TopLevelEHFilter(&ReportRecord,&RegistrationRecord,&ContextRecord,&DispatcherRecord)
; The arguments are passed in the stack at an offset of 8 (<--NUMBER FROM MS DOCUMENT)
; ESP here "in the stack" being used by the code that caused the exception
; May be either grain stack or Windows thread stack
extern exit :proc
extern syscall @RTSC_PrintExceptionName@4:near ; FASTCALL
push ebp ; act as if this is a function entry
mov ebp, esp ; note: Context block is at offset ContextOffset[ebp]
IF_USING_WINDOWS_THREAD_STACK_GOTO unknown_exception, esp ; don't care what it is, we're dead
; *** otherwise, we must be using PARLANSE function grain stack space
; Compiler has ensured there's enough room, if the problem is a floating point trap
; If the problem is illegal memory reference, etc,
; there is no guarantee there is enough room, unless the application is compiled
; with -G ("large stacks to handle exception traps")
; check what kind of exception
mov eax, ExceptionRecordOffset[ebp]
mov eax, ExceptionRecord.ExceptionCode[eax]
cmp eax, _EXCEPTION_INTEGER_DIVIDE_BY_ZERO
je div_by_zero_exception
cmp eax, _EXCEPTION_FLOAT_DIVIDE_BY_ZERO
je float_div_by_zero_exception
jmp near ptr unknown_exception
float_div_by_zero_exception:
mov ebx, ContextOffset[ebp] ; ebx = context record
mov Context.FltStatusWord[ebx], CLEAR_FLOAT_EXCEPTIONS ; clear any floating point exceptions
mov Context.FltTagWord[ebx], -1 ; Marks all registers as empty
div_by_zero_exception: ; since RTS itself doesn't do division (that traps),
; if we get *here*, then we must be running a granule and EBX for granule points to GCB
mov ebx, ContextOffset[ebp] ; ebx = context record
mov ebx, Context.Rebx[ebx] ; grain EBX has to be set for AR Allocation routines
ALLOCATE_2TOK_BYTES 5 ; 5*4=20 bytes needed for the exception structure
mov ExceptionBufferT.cArgs[eax], 0
mov ExceptionBufferT.pException[eax], offset RTSDivideByZeroException ; copy ptr to exception
mov ebx, ContextOffset[ebp] ; ebx = context record
mov edx, Context.Reip[ebx]
mov Context.Redi[ebx], eax ; load exception into thread's edi
GET_GRANULE_TO ecx
; This is Windows SEH (Structured Exception Handler... see use of Context block below!
mov eax, edx
LOOKUP_EH_FROM_TABLE ; protected by DelayAbort
TRUST_JMP_INDIRECT_OK eax
mov Context.Reip[ebx], eax
mov eax, ExceptionContinueExecution ; signal to Windows: "return to caller" (we've revised the PC to go to Exception handler)
leave
ret
TopLevelEHFilter_end:
unknown_exception:
<print registers, etc. here>
"DEP for standard system processes" won't help you; it's internally known as "OptIn". What you need is the IMAGE_DLLCHARACTERISTICS_NX_COMPAT flag set in the PE header of your .exe file. Or call the SetProcessDEPPolicy function in kernel32.dll The SetProcessMitigationPolicy would be good also... but it isn't available until Windows 8.
There's some nice explanation on Ed Maurer's blog, which explains both how .NET uses DEP (which you won't care about) and the system rules (which you do).
BIOS settings can also affect whether hardware NX is available.

Visual Studio only breaks on second line of assembly?

The short description:
Setting a breakpoint on the first line of my .CODE segment in an assembly program will not halt execution of the program.
The question:
What about Visual Studio's debugger would allow it to fail to create a breakpoint at the first line of a program written in assembly? Is this some oddity of the debugger, a case of breaking on a multi-byte instruction, or am I just doing something silly?
The details:
I have the following assembly program compiling and running in Visual Studio:
; Tell MASM to use the Intel 80386 instruction set.
.386
; Flat memory model, and Win 32 calling convention
.MODEL FLAT, STDCALL
; Treat labels as case-sensitive (required for windows.inc)
OPTION CaseMap:None
include windows.inc
include masm32.inc
include user32.inc
include kernel32.inc
include macros.asm
includelib masm32.lib
includelib user32.lib
includelib kernel32.lib
.DATA
BadText db "Error...", 0
GoodText db "Excellent!", 0
.CODE
main PROC
;int 3 ; <-- If uncommented, this will not break.
mov ecx, 6 ; <-- Breakpoint here will not hit.
xor eax, eax ; <-- Breakpoint here will.
_label: add eax, ecx
dec ecx
jnz _label
cmp eax, 21
jz _good
_bad: invoke StdOut, addr BadText
jmp _quit
_good: invoke StdOut, addr GoodText
_quit: invoke ExitProcess, 0
main ENDP
END main
If I try to set a breakpoint on the first line of the main function, mov ecx, 6, it is ignored, and the program executes without stopping. A breakpoint will only be hit if I set it on the line after that, xor eax, eax, or any subsequent line.
I have even tried inserting a software breakpoint, int 3, as the first line of the function, and it is also ignored.
The first thing I notice that is odd: viewing the disassembly after hitting one of my breakpoints gives me the following:
01370FFF add byte ptr [ecx+6],bh
--- [Path]\main.asm
xor eax, eax
00841005 xor eax,eax --- <-- Breakpoint is hit here
_label: add eax, ecx
00841007 add eax,ecx
dec ecx
00841009 dec ecx
jnz _label
0084100A jne _label (841007h)
cmp eax, 21
0084100C cmp eax,15h
What's interesting here is that the xor is, in Visual Studio's eyes, the first operation in my program. Absent is the line mov ecx, 6. Directly above where it thinks my source begins is the line that actually sets ecx to 6. So the actual start of my program has been mangled according to the disassembly.
If I make the first line of my program int 3, the line that appears above where my code is in the disassembly is:
00F80FFF add ah,cl
As suggested in one of the answers, I turned off ASLR, and it looks like the disassembly is a little more stable:
.CODE
main PROC
;mov ecx, 6
xor eax, eax
00401000 xor eax,eax --- <-- Breakpoint is present here, but not hit.
_label: add eax, ecx
00401002 add eax,ecx --- <-- Breakpoint here is hit.
dec ecx
00401004 dec ecx
The complete program is visible in the disassembly, but the problem still persists. Despite my program starting at the expected address, and the first breakpoint being shown in the disassembly, it is still skipped. Placing an int 3 as the first line still results in the following line:
00400FFF add ah,cl
and does not stop execution, and re-mangles the view of my program in the disassembly again. The next line of my program is then at location 00401001, which I suppose makes sense because int 3 is a one-byte instruction, but why would it have disappeared from the disassembly?
Even starting the program using the 'Step Into (F11)' command does not allow me to break on the first line. In fact, with no breakpoint, starting the program with F11 does not halt execution at all.
I'm not really sure what else I can try to solve the problem, beyond what I have detailed here. This is stretching beyond my current understanding of assembly and debuggers.
01370FFF add byte ptr [ecx+6],bh
At least I can explain away one mystery. Note the address, 0x1370fff. The CODE segment never starts at an address like that; segments begin at an address that's a multiple of 0x1000, which makes the last 3 hex digits of the start address always 0. The debugger got confuzzled and started disassembling the code at the wrong address, off by one. The actual start address is 0x1371000. The disassembly starts off poorly because there's a 0 at 0x1370fff; that begins a multi-byte ADD instruction. So it displays garbage for a while until it catches up with real machine code instructions by accident.
You need to help it along and give it a command to start disassembling at the proper address. In VS that's the Address box: type "0x1371000".
Another notable quirk is the strange value of the start address. A process normally starts at address 0x400000. You have a feature called ASLR turned on: Address Space Layout Randomization. It is an anti-virus feature that makes programs start at an unpredictable start address. It's a nice feature, but it doesn't exactly help with debugging programs. It isn't clear how you built this code, but you need the /DYNAMICBASE:NO linker option to turn it off.
Another important quirk of debuggers you need to keep in mind here is the way they set breakpoints. They do so by patching the code, replacing the start byte of an instruction with an int 3 instruction. When the breakpoint hits, it quickly replaces the byte with the original machine code instruction byte. So you never see this. This goes wrong if you pick the wrong address to set the breakpoint, like in the middle of a multi-byte instruction. It now no longer breaks the code, the altered byte messes up the original instruction. You can easily fall into this trap when you started with a bad disassembly.
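To illustrate with the first instruction of this program (the encodings are real; the mis-set breakpoint is hypothetical):
B9 06 00 00 00 mov ecx, 6 ; the original instruction
CC 06 00 00 00 int 3 ; breakpoint patched at the correct address
B9 CC 00 00 00 mov ecx, 0CCh ; patched one byte too late: the CC byte is now
; immediate data, so the breakpoint can never fire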
Well, do this the Right Way. Start debugging with the debugger's STEP command instead.
I have discovered what the root of the problem is, but I haven't a clue why it is so.
After creating another MASM project, I noticed that the new one would break on the first line of the program, and the disassembly did not appear to be mangled or altered. So, I compared its properties to my original project (for the Debug configuration). The only difference I found was that my original project had Incremental Linking disabled. Specifically, it added /INCREMENTAL:NO to the linker command line.
Removing this option from the command line (thereby enabling Incremental Linking) resulted in the program behaving as expected during debugging; my code shown in the disassembly window remained unaltered, I could hit a breakpoint on the first line of the main procedure, and an int 3 instruction would also execute properly as the first line.
If you press F11 (Step Into) instead of Start Debugging, the debugger will stop on the first line.
It is possible there is some messed up breakpoint setting. Delete any *.suo files in your project directory to reset all breakpoints.
Note that your project will have secret headers and stuff in it if it has a main function. To set a breakpoint at the real entry point use: Debug + New Breakpoint + Break at Function -> wWinMainCRTStartup for a Windows program, or mainCRTStartup or wmainCRTStartup for a console program.
