DOS DEBUG trace command doesn't work as I would expect

I have ASM code which prints abc using a loop. Here is my code:
;abc.com
.model small
.code
org 100h
start:
    mov ah, 02h     ; DOS function 02h: display the character in DL
    mov dl, 'a'     ; first character to print
    mov cx, 3h      ; loop count: three characters
ulang:
    int 21h         ; print DL
    inc dl          ; advance to the next character ('b', then 'c')
    loop ulang      ; decrement CX; repeat while CX != 0
    int 20h         ; terminate the program
end start
The COM program runs normally. The result of debug abc.com followed by -t shows a NOP after INT 21.
The question is: why is it NOP after INT 21 instead of INC DL? AFAIK it should be INC DL, then LOOP xxxx, three times over, then INT 20.
When I press -t continuously it goes somewhere I don't know until it crashes; it never reaches INT 20h.
It's different with debug abc.com followed by -u, which shows INC DL and LOOP 0107, indicating the loop.
FYI:
Win 7 Ultimate SP 1 32 Bit
GUI Turbo ASM x86 3.0
Celeron Dual Core n2840

The Trace command in debug is the equivalent of the STEP INTO feature of modern-day debuggers. The int instruction (like call) executes a series of instructions and then returns to the caller. Trace will step into a software interrupt handler or a function and execute each instruction one at a time. The MSDN documentation for debug says this about Trace:
Executes one instruction and displays the contents of all registers, the status of all flags, and the decoded form of the instruction executed.
In your case you hit int 21h and jumped to the software interrupt handler's code at CS:IP 00A7:107C. If you traced through all the interrupt handler code you'd eventually reach CS:IP 1400:0109, where the INC DL instruction is.
In order to execute a function or interrupt without stepping through each instruction associated with it, you can use the proceed command. Proceed is akin to the STEP OVER feature of modern-day debuggers. The code of an interrupt handler or a function/subroutine will execute and then break on the instruction after the INT or CALL instruction.
The documentation says this about PROCEED:
When the p command transfers control from Debug to the program being tested, that program runs without interruption until the loop, repeated string instruction, software interrupt, or subroutine at the specified address is completed, or until the specified number of machine instructions have been executed. Control then returns to Debug.
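With the program above, a session might go like this (a sketch: addresses will vary, register dumps are omitted, and the notes on the right are mine, not Debug output):

C:\>debug abc.com
-t 3        steps MOV AH,02 / MOV DL,61 / MOV CX,0003 one at a time
-p          runs the entire INT 21h handler, prints the 'a', stops at INC DL
-t          INC DL now traces as expected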

Related

How to run stand alone codes like this in assembly language

I have a Windows 10 computer and Visual Studio 2017 for my C/C++ development.
I am currently reading "Code Optimization: Effective Memory Usage" and the author suggests running the following code to capture accurate run times of various code fragments.
XOR EAX, EAX     ; CPUID is executed to ensure all preceding instructions
CPUID            ; have left the pipeline, hence not influencing measurement results
RDTSC            ; returns the lower dword of the current time stamp counter in EAX
MOV [clock], EAX ; the obtained result is saved to the clock variable

; ... the profiled code runs here ...

XOR EAX, EAX     ; CPUID is executed again to ensure all preceding
CPUID            ; instructions have left the pipeline
RDTSC            ; read the new value of the time stamp counter
SUB EAX, [clock] ; the difference is the time taken to execute the code fragment
I can understand what the above assembly code is doing. But where exactly do I type this in Windows and observe the results? Should I save it as a .asm file in Visual Studio and run it there? Should I create a C file and include these assembly instructions within it? Or should I stay away from Visual Studio completely and use a standalone assembler/debugger?
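For what it's worth, one common route is the "create a C file" option, using MSVC's __asm blocks. A minimal sketch, assuming an x86 (not x64) project; start and elapsed are names I made up to stand in for the book's [clock]:

#include <stdio.h>

int main(void)
{
    unsigned int start, elapsed;    /* stand-ins for the book's [clock] */

    __asm {
        xor  eax, eax
        cpuid               // flush the pipeline before the first read
        rdtsc               // low dword of the time stamp counter -> EAX
        mov  start, eax
    }

    /* ... the code you want to profile goes here ... */

    __asm {
        xor  eax, eax
        cpuid               // serialize again after the profiled code
        rdtsc               // read the counter a second time
        sub  eax, start     // elapsed cycles (low dword only)
        mov  elapsed, eax
    }

    printf("elapsed: %u cycles\n", elapsed);
    return 0;
}

Note that __asm blocks are only available in 32-bit MSVC builds, so the project must target x86; for x64 you would use the __rdtsc() and __cpuid() intrinsics from intrin.h instead.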

8086 Program print your name as array of hex values of ascii using loop

Here is my code, but when I use my debugger I get an error once I reach the int 21h instruction, which says:
Unhandled exception at 0x00007FF6E9B01034 in MP2_KyleRuiter.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.
Program:
ExitProcess PROTO
.data
string DB 4Bh, 79h, 6Ch, 65h, 20h, 52h, 75h, 69h, 74h, 65h, 72h, 00h ; My Name
COUNT = ($ - string) ; string length calculation
.code
main PROC
    mov rcx, COUNT          ; loop counter
    mov rsi, offset string
L1:
    mov dl, [rsi]           ; gets character from the array
    mov ah, 2               ; displays character
    inc rsi                 ; points to next character
    loop L1                 ; decrements rcx until 0
    mov rax, 4C00h
    int 21h                 ; displays
    RET
main ENDP
END
int 21h & co. are 16-bit MS-DOS services, while the rest of the code you wrote is 64-bit x86 assembly. On 64-bit Windows you are invoking god-knows-what interrupt handler, which results in a crash.
If you want to print stuff when running under 64-bit Windows you have to call the relevant system APIs (GetStdHandle to get a handle to the console, WriteFile to write the data). In 32-bit code MASM makes this relatively simple through the INVOKE directive; 64-bit ml64 has no INVOKE, so you write the calls out by hand, as sketched below.
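For reference, a minimal 64-bit MASM sketch of that approach (my names and layout, not the only way; the x64 calling convention is followed by hand, with 32 bytes of shadow space and WriteFile's fifth argument passed on the stack):

extern GetStdHandle: proc
extern WriteFile: proc
extern ExitProcess: proc

.data
msg     db "Kyle Ruiter", 13, 10    ; same characters as the hex array
MSGLEN  = $ - msg
written dq 0

.code
main proc
    sub  rsp, 40                    ; shadow space (32) + 8 keeps RSP 16-aligned
    mov  ecx, -11                   ; STD_OUTPUT_HANDLE
    call GetStdHandle
    mov  rcx, rax                   ; handle
    lea  rdx, msg                   ; buffer
    mov  r8d, MSGLEN                ; bytes to write
    lea  r9, written                ; lpNumberOfBytesWritten
    mov  qword ptr [rsp+32], 0      ; lpOverlapped = NULL (5th argument)
    call WriteFile
    xor  ecx, ecx
    call ExitProcess
main endp
end

Assemble and link along the lines of: ml64 name.asm /link /subsystem:console /entry:main kernel32.lib.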
You can't use DOS interrupts, like int 21h, in a 64-bit Windows executable. Modern Windows isn't a DOS-based system, so it doesn't use that interface anymore.
If you want to write a DOS executable, you'll need to use 16-bit instructions, and run it in an emulator (like DOSBox).
If you want to write a 64-bit Windows executable, you'll need to use Windows library calls.
Pick one.
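If you pick the DOS route, a minimal 16-bit sketch of the same loop might look like this (my labels; note the int 21h now sits inside the loop so the characters actually print, and the trailing 00h byte is dropped because function 02h would print it too):

.model tiny
.code
org 100h
start:
    jmp begin           ; hop over the data; a .COM starts executing at 100h
msg db "Kyle Ruiter"    ; same bytes as the hex array in the question
MSGLEN = $ - msg
begin:
    mov si, offset msg
    mov cx, MSGLEN      ; loop counter
next:
    mov dl, [si]        ; fetch one character
    mov ah, 02h         ; DOS: display the character in DL
    int 21h
    inc si
    loop next
    int 20h             ; terminate the .COM program
end start

Build it as a .COM with a 16-bit toolchain (e.g. MASM plus a 16-bit linker with /TINY, or TASM/TLINK /t) and run it under DOSBox.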
int 21h with AH set to 4Ch says to terminate with a return code. It looks like your debugger does not know how to step over/into a terminate. That makes some sense, I suppose.
Belay my last. I stand corrected.
You might find this helpful, though:
Why does using "int 21h" on Assembly x86 MASM cause my program to crash?

Structured Exception Handler catches near-zero EIP trap differently on nearly identical machines?

I have a rather complex, but extremely well-tested assembly language x86-32 application running on variety of x86-32 and x86-64 boxes. This is a runtime system for a language compiler, so it supports the execution of another compiled binary program, the "object code".
It uses Windows SEH to catch various kinds of traps: division by zero, illegal access, ... and prints a register dump using the context information provided by Windows, that shows the state of the machine at the time of the trap. (It does lots of other stuff irrelevant to the question, such as printing a function backtrace or recovering from the division by zero as appropriate). This allows the writer of the "object code" to get some idea what went wrong with his program.
It behaves differently on two Windows 7-64 systems that are more or less identical, on what I think is an illegal memory access. The specific problem is that the "object code" (not the well-tested runtime system) somewhere stupidly loads 0x82 into EIP; that is a nonexistent page in the address space AFAIK. I expect a Windows trap through the SEH, and expect a register dump with EIP=00000082 etc.
On one system, I get exactly that register dump. I could show it here, but it doesn't add anything to my question. So, it is clear the SEH in my runtime system can catch this, and display the situation. This machine does not have any MS development tools on it.
On the other ("mystery") system, with the same exact binaries for runtime system and object code, all I get is the command prompt. No further output. FWIW, this machine has MS Visual Studio 2010 on it. The mystery machine is used heavily for other purposes, and shows no other funny behaviors in normal use.
I assume the behavior difference is caused by a Windows configuration somewhere, or something that Visual Studio controls. It isn't the DEP configuration in the system menu; they are both configured (vanilla) as "DEP for standard system processes". And my runtime system executable has "No (/NXCOMPAT:NO)" configured.
Both machines are i7 but different chips, 4 cores, lots of memory, different motherboards. I don't think this is relevant; surely both of these CPUs take traps the same way.
The runtime system includes the following line on startup:
SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX); // stop Windows pop-up on crashes
This was recently added to prevent the "mystery" system from showing a pop-up window, "xxx.exe has stopped working" when the crash occurs. The pop-up box behaviour doesn't happen on the first system, so all this did was push the problem into a different corner on the "mystery" machine.
Any clue where I look to configure/control this?
I provide here the SEH code I am using. It has been edited to remove a considerable amount of sanity-checking code that I claim has no effect on the apparent state seen in this code.
The top level of the runtime system generates a set of worker threads (using CreateThread) and points them at ASMGrabGranuleAndGo; each thread sets up its own SEH and branches off to a work-stealing scheduler, RunReadyGranule. To the best of my knowledge, the SEH is not changed after that; at least, the runtime system and the "object code" do not do this, but I have no idea what the underlying (e.g., standard "C") libraries might do.
Further down I provide the trap handler, TopLevelEHFilter. Yes, it's possible the register-printing machinery itself blows up, causing a second exception. I'll try to check into this again soon, but IIRC my last attempt to catch this in the debugger on the mystery machine did not pass control to the debugger, just got me the pop-up window.
public ASMGrabGranuleAndGo
ASSUME FS:NOTHING ; cancel any assumptions made for this register
ASMGrabGranuleAndGo:
;Purpose: Entry for threads as workers in PARLANSE runtime system.
; Each thread initializes as necessary, just once,
; It then goes and hunts for work in the GranulesQ
; and start executing a granule whenever one becomes available
; install top level exception handler
; Install handler for hardware exceptions
cmp gCompilerBreakpointSet, 0
jne HardwareEHinstall_end ; if set, do not install handler
push offset TopLevelEHFilter ; push new exception handler on Windows thread stack
mov eax, [TIB_SEH] ; expected to be empty
test eax, eax
BREAKPOINTIF jne
push eax ; save link to old exception handler
mov fs:[TIB_SEH], esp ; tell Windows that our exception handler is active for this thread
HardwareEHinstall_end:
;Initialize FPU to "empty"... all integer grains are configured like this
finit
fldcw RTSFPUStandardMode
lock sub gUnreadyProcessorCount, 1 ; signal that this thread has completed its initialization
##: push 0 ; sleep for 0 ticks
call MySleep ; give up CPU (lets other threads run if we don't have enuf CPUs)
lea esp, [esp+4] ; pop arguments
mov eax, gUnreadyProcessorCount ; spin until all other threads have completed initialization
test eax, eax
jne #b
mov gThreadIsAlive[ecx], TRUE ; signal to scheduler that this thread now officially exists
jmp RunReadyGranule
ASMGrabGranuleAndGo_end:
;-------------------------------------------------------------------------------
TopLevelEHFilter: ; catch Windows Structured Exception Handling "trap"
; Invocation:
; call TopLevelEHFilter(&ReportRecord,&RegistrationRecord,&ContextRecord,&DispatcherRecord)
; The arguments are passed in the stack at an offset of 8 (<--NUMBER FROM MS DOCUMENT)
; ESP here "in the stack" being used by the code that caused the exception
; May be either grain stack or Windows thread stack
extern exit :proc
extern syscall #RTSC_PrintExceptionName#4:near ; FASTCALL
push ebp ; act as if this is a function entry
mov ebp, esp ; note: Context block is at offset ContextOffset[ebp]
IF_USING_WINDOWS_THREAD_STACK_GOTO unknown_exception, esp ; don't care what it is, we're dead
; *** otherwise, we must be using PARLANSE function grain stack space
; Compiler has ensured there's enough room, if the problem is a floating point trap
; If the problem is illegal memory reference, etc,
; there is no guarantee there is enough room, unless the application is compiled
; with -G ("large stacks to handle exception traps")
; check what kind of exception
mov eax, ExceptionRecordOffset[ebp]
mov eax, ExceptionRecord.ExceptionCode[eax]
cmp eax, _EXCEPTION_INTEGER_DIVIDE_BY_ZERO
je div_by_zero_exception
cmp eax, _EXCEPTION_FLOAT_DIVIDE_BY_ZERO
je float_div_by_zero_exception
jmp near ptr unknown_exception
float_div_by_zero_exception:
mov ebx, ContextOffset[ebp] ; ebx = context record
mov Context.FltStatusWord[ebx], CLEAR_FLOAT_EXCEPTIONS ; clear any floating point exceptions
mov Context.FltTagWord[ebx], -1 ; Marks all registers as empty
div_by_zero_exception: ; since RTS itself doesn't do division (that traps),
; if we get *here*, then we must be running a granule and EBX for granule points to GCB
mov ebx, ContextOffset[ebp] ; ebx = context record
mov ebx, Context.Rebx[ebx] ; grain EBX has to be set for AR Allocation routines
ALLOCATE_2TOK_BYTES 5 ; 5*4=20 bytes needed for the exception structure
mov ExceptionBufferT.cArgs[eax], 0
mov ExceptionBufferT.pException[eax], offset RTSDivideByZeroException ; copy ptr to exception
mov ebx, ContextOffset[ebp] ; ebx = context record
mov edx, Context.Reip[ebx]
mov Context.Redi[ebx], eax ; load exception into thread's edi
GET_GRANULE_TO ecx
; This is Windows SEH (Structured Exception Handling)... see use of the Context block below!
mov eax, edx
LOOKUP_EH_FROM_TABLE ; protected by DelayAbort
TRUST_JMP_INDIRECT_OK eax
mov Context.Reip[ebx], eax
mov eax, ExceptionContinueExecution ; signal to Windows: "return to caller" (we've revised the PC to go to Exception handler)
leave
ret
TopLevelEHFilter_end:
unknown_exception:
<print registers, etc. here>
"DEP for standard system processes" won't help you; it's internally known as "OptIn". What you need is the IMAGE_DLLCHARACTERISTICS_NX_COMPAT flag set in the PE header of your .exe file. Or call the SetProcessDEPPolicy function in kernel32.dll The SetProcessMitigationPolicy would be good also... but it isn't available until Windows 8.
There's some nice explanation on Ed Maurer's blog, which explains both how .NET uses DEP (which you won't care about) but also the system rules (which you do).
BIOS settings can also affect whether hardware NX is available.
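For reference, the PE flag is a linker switch, and the runtime call is a one-liner; a hypothetical MASM sketch, assuming kernel32 is linked and this runs at startup before the worker threads are created:

; link with /NXCOMPAT to set IMAGE_DLLCHARACTERISTICS_NX_COMPAT, or:
extern _SetProcessDEPPolicy@4:proc
    push 1                          ; PROCESS_DEP_ENABLE
    call _SetProcessDEPPolicy@4     ; permanent for the life of the process;
                                    ; needs XP SP3 / Vista SP1 or later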

CPU idle loop without Sleep

I am learning WIN32 ASM right now and I was wondering if there is something like an "idle" infinite loop that doesn't consume any resources at all. Basically I need a running process to experiment with that goes like this:
loop:
; alternative to sleep...
jmp loop
Is there something that may idle the process?
You can't have it both ways. You can either consume the CPU or you can let it do something else. You can conserve power and avoid depriving other cores of resources with rep; nop; (also known as pause), as Vlad Lazarenko suggested. But if you loop without yielding the core to another process, at least that virtual core cannot do anything else.
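For what it's worth, the usual spin-wait idiom looks like this (a sketch; flag is a hypothetical variable some other thread eventually sets nonzero):

.data
flag dd 0               ; hypothetical: another thread stores nonzero here

.code
spin:
    pause               ; encodes as rep nop; hints to the CPU this is a spin-wait
    cmp flag, 0
    je spin             ; the core stays busy, just politely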
Note: you should never use empty loops to make the application idle. It will load your processor to 100%.
There are several ways to make the application idle when there is no GUI activity in a Win32 environment. The main one (documented in MSDN) is to use the GetMessage function in the main loop to extract messages from the message queue.
When the message queue is empty, this function will idle, consuming very little processor time, waiting for a message to arrive in the message queue.
Below is an example, using FASM macro library:
msg_loop:
    invoke GetMessage, msg, NULL, 0, 0
    cmp eax, 1              ; GetMessage returns 0 on WM_QUIT, -1 on error
    jb end_loop             ; 0: WM_QUIT, leave the loop
    jne msg_loop            ; -1 (above 1, unsigned) is an error: ignore it, keep waiting
    invoke TranslateMessage, msg
    invoke DispatchMessage, msg
    jmp msg_loop
end_loop:
Another approach is used when you want to catch the moment the application goes to the idle state and do some low-priority, one-time processing (for example, enabling/disabling toolbar buttons according to the state of the application).
In this case, a combination of PeekMessage and WaitMessage has to be used. PeekMessage returns immediately, even when the message queue is empty, so you can detect this situation and run some idle tasks; then you call WaitMessage in order to idle the process while waiting for incoming messages.
Here is a simplified example from my code (using FreshLib macros):
; Main message loop
Run:
    invoke PeekMessageA, msg, 0, 0, 0, PM_REMOVE
    test eax, eax
    jz .empty                   ; queue is empty: do the idle work
    cmp [msg.message], WM_QUIT
    je .terminate
    invoke TranslateMessage, msg
    invoke DispatchMessageA, msg
    jmp Run
.empty:
    call OnIdle                 ; one-time idle processing
    invoke WaitMessage          ; sleep until a message arrives
    jmp Run
.terminate:
    FinalizeAll
    stdcall TerminateAll, 0
What do you mean by "consume resources"?
Do you just want an instruction that does nothing? If so, nop will do that; there is even a variant with a rep prefix, rep nop, which the CPU treats as the pause hint. However, the CPU will still be busy doing work: executing the "no operation" instruction.
If you want an instruction that causes the CPU itself to stop, then you are sorta-kinda out of luck: although there are ways to do that, you cannot do it from userspace.
With ring 0 access (like a kernel driver), you could use the x86 HLT opcode, but you need some system-programming skill to understand how to use it. Using HLT this way requires interrupts to be enabled (not masked) and the guarantee of an interrupt occurring (e.g. the system timer), because the return from the interrupt handler executes the next instruction after the HLT.
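In ring 0 the classic idle loop is just this (a sketch):

idle:
    sti                 ; interrupts must be enabled, or hlt never wakes
    hlt                 ; stop the core until the next interrupt (e.g. the timer)
    jmp idle            ; the handler returned; go back to sleep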
Without ring 0 access you'll never find an x86 opcode that enters an "idle" mode; you can only find instructions which consume less power (no memory access, no cache access, no FPU access, low ALU usage...).
Some architectures do support an intrinsic idle state, AKA halt. On x86 it is hlt, opcode 0xF4, and it can only be executed in privileged mode. See also:
CPU Switches from User mode to Kernel Mode : What exactly does it do? How does it makes this transition?
How to completely suspend the processor?
Here is a Linux userspace example I found:
.section .rodata
greeting:
    .string "Hello World\n"

.text
.globl _start
_start:
    mov $12, %edx           /* write(1, "Hello World\n", 12) */
    mov $greeting, %ecx
    mov $1, %ebx
    mov $4, %eax            /* write is syscall 4 */
    int $0x80
    xorl %ebx, %ebx         /* set exit status and exit */
    mov $0xfc, %eax         /* 0xfc = 252 = exit_group */
    int $0x80
    hlt                     /* just in case... */

Visual Studio only breaks on second line of assembly?

The short description:
Setting a breakpoint on the first line of my .CODE segment in an assembly program will not halt execution of the program.
The question:
What about Visual Studio's debugger would allow it to fail to create a breakpoint at the first line of a program written in assembly? Is this some oddity of the debugger, a case of breaking on a multi-byte instruction, or am I just doing something silly?
The details:
I have the following assembly program compiling and running in Visual Studio:
; Tell MASM to use the Intel 80386 instruction set.
.386
; Flat memory model, and Win 32 calling convention
.MODEL FLAT, STDCALL
; Treat labels as case-sensitive (required for windows.inc)
OPTION CaseMap:None
include windows.inc
include masm32.inc
include user32.inc
include kernel32.inc
include macros.asm
includelib masm32.lib
includelib user32.lib
includelib kernel32.lib
.DATA
BadText db "Error...", 0
GoodText db "Excellent!", 0
.CODE
main PROC
;int 3 ; <-- If uncommented, this will not break.
mov ecx, 6 ; <-- Breakpoint here will not hit.
xor eax, eax ; <-- Breakpoint here will.
_label: add eax, ecx
dec ecx
jnz _label
cmp eax, 21
jz _good
_bad: invoke StdOut, addr BadText
jmp _quit
_good: invoke StdOut, addr GoodText
_quit: invoke ExitProcess, 0
main ENDP
END main
If I try to set a breakpoint on the first line of the main function, mov ecx, 6, it is ignored, and the program executes without stopping. A breakpoint will only be hit if I set it on the line after that, xor eax, eax, or any subsequent line.
I have even tried inserting a software breakpoint, int 3, as the first line of the function, and it is also ignored.
The first thing I notice that is odd: viewing the disassembly after hitting one of my breakpoints gives me the following:
01370FFF add byte ptr [ecx+6],bh
--- [Path]\main.asm
xor eax, eax
00841005 xor eax,eax --- <-- Breakpoint is hit here
_label: add eax, ecx
00841007 add eax,ecx
dec ecx
00841009 dec ecx
jnz _label
0084100A jne _label (841007h)
cmp eax, 21
0084100C cmp eax,15h
What's interesting here is that the xor is, in Visual Studio's eyes, the first operation in my program. Absent is the line mov ecx, 6. Directly above where it thinks my source begins is the line that actually sets ecx to 6. So the actual start of my program has been mangled according to the disassembly.
If I make the first line of my program int 3, the line that appears above where my code is in the disassembly is:
00F80FFF add ah,cl
As suggested in one of the answers, I turned off ASLR, and it looks like the disassembly is a little more stable:
.CODE
main PROC
;mov ecx, 6
xor eax, eax
00401000 xor eax,eax --- <-- Breakpoint is present here, but not hit.
_label: add eax, ecx
00401002 add eax,ecx --- <-- Breakpoint here is hit.
dec ecx
00401004 dec ecx
The complete program is visible in the disassembly, but the problem still persists. Despite my program starting at the expected address, and the first breakpoint being shown in the disassembly, it is still skipped. Placing an int 3 as the first line still results in the following line:
00400FFF add ah,cl
and does not stop execution, and re-mangles the view of my program in the disassembly again. The next line of my program is then at location 00401001, which I suppose makes sense because int 3 is a one-byte instruction, but why would it have disappeared in the disassembly?
Even starting the program using the 'Step Into (F11)' command does not allow me to break on the first line. In fact, with no breakpoint, starting the program with F11 does not halt execution at all.
I'm not really sure what else I can try to solve the problem, beyond what I have detailed here. This is stretching beyond my current understanding of assembly and debuggers.
01370FFF add byte ptr [ecx+6],bh
At least I can explain away one mystery. Note the address, 0x1370fff. The CODE segment never starts at an address like that; segments begin at an address that's a multiple of 0x1000, which makes the last 3 hex digits of the start address always 0. The debugger got confuzzled and started disassembling the code at the wrong address, off by one. The actual start address is 0x1371000. The disassembly starts off poorly because there's a stray 0 byte at 0x1370fff, which begins a multi-byte ADD instruction that swallows the first bytes of your real code. So it displays garbage for a while until it catches up with real machine code instructions by accident.
You need to help it along and give it a command to start disassembling at the proper address. In VS that's the Address box, type "0x1371000".
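To see concretely why mov ecx, 6 vanished: that instruction assembles to the bytes B9 06 00 00 00, and a stray zero byte sits just before the entry point. Decoding one byte early gives:

00 B9 06 00 00 00      add byte ptr [ecx+6], bh    ; the stray 0 swallows the MOV

Decoding from the true entry point, 0x1371000, gives the real code:

B9 06 00 00 00         mov ecx, 6
33 C0                  xor eax, eax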
Another notable quirk is the strange value of the start address. A process normally starts at address 0x400000. You have a feature called ASLR turned on: Address Space Layout Randomization. It is an exploit-mitigation feature that makes programs start at an unpredictable address. Nice feature, but it doesn't exactly help debugging programs. It isn't clear how you built this code, but you need the /DYNAMICBASE:NO linker option to turn it off.
Another important quirk of debuggers you need to keep in mind here is the way they set breakpoints. They do so by patching the code, replacing the first byte of an instruction with an int 3 instruction (byte 0xCC). When the breakpoint hits, the debugger quickly restores the original machine code byte, so you never see this. This goes wrong if you pick the wrong address to set the breakpoint, like the middle of a multi-byte instruction. The patch then no longer breaks into the debugger; the altered byte just corrupts the original instruction. You can easily fall into this trap when you start from a bad disassembly.
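A hypothetical illustration of that failure mode: suppose the debugger, misled by the off-by-one disassembly, writes its 0xCC (int 3) breakpoint byte one byte into the five-byte MOV instead of at its first byte:

B9 06 00 00 00         mov ecx, 6      ; original instruction
B9 CC 00 00 00         mov ecx, 0CCh   ; after the mispatch: no int 3 ever
                                       ; executes, and ECX gets the wrong value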
Well, do this the Right Way. Start debugging with the debugger's STEP command instead.
I have discovered what the root of the problem is, but I haven't a clue why it is so.
After creating another MASM project, I noticed that the new one would break on the first line of the program, and the disassembly did not appear to be mangled or altered. So, I compared its properties to my original project (for the Debug configuration). The only difference I found was that my original project had Incremental Linking disabled. Specifically, it added /INCREMENTAL:NO to the linker command line.
Removing this option from the command line (thereby enabling Incremental Linking) resulted in the program behaving as expected during debugging; my code shown in the disassembly window remained unaltered, I could hit a breakpoint on the first line of the main procedure, and an int 3 instruction would also execute properly as the first line.
If you press F11 (Step Into) instead of Start Debugging, the debugger will stop on the first line.
It is possible there is some messed up breakpoint setting. Delete any *.suo files in your project directory to reset all breakpoints.
Note that your project will have hidden headers and startup code in it if it has a main function. To set a breakpoint at the real entry point use: Debug + New Breakpoint + Break at Function -> wWinMainCRTStartup for a Windows program, or mainCRTStartup or wmainCRTStartup for a console program.
