getch() in TASM

So, I'm writing a program in Borland C with TASM in DOS. The program should switch so-called "tasks" (or processes) until completion, and the switching function should be triggered by a keypress (getch). This getch should be written in TASM and inserted into the C++ code. So, is there a getch() analog in TASM?

It depends on the environment the program is supposed to run in.
If it is intended to run in DOS, you can use BIOS interrupt 16h to retrieve keystrokes from the keyboard buffer: function 00h (in AH) waits for a key and returns its scan code in AH and its ASCII code in AL.
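A minimal sketch of a getch() analog built on that service, written here as a Borland C wrapper using int86() from dos.h (in pure TASM, the core is just mov ah,0 / int 16h):

#include <dos.h>

/* Wait for a keystroke via BIOS int 16h, function 00h. */
int bios_getch(void)
{
    union REGS r;
    r.h.ah = 0x00;          /* function 00h: wait for a keystroke */
    int86(0x16, &r, &r);    /* call BIOS keyboard services        */
    return r.h.al;          /* ASCII code in AL (0 for extended   */
}                           /* keys; scan code is in AH)          */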
You can also install your program as the handler for hardware interrupts from the keyboard. This is done using function 25h (specified in the AH register) of DOS interrupt 21h: AL specifies the interrupt vector to hook (the keyboard IRQ is vector 09h), and DS:DX (segment:offset) holds the address of the handler.
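As a sketch of that mechanism in Borland C (setvect() and getvect() from dos.h wrap exactly these int 21h functions 25h and 35h; the handler body is just an illustration):

#include <dos.h>
#include <conio.h>

static void interrupt (*old_isr)(void);     /* original INT 09h handler */
static volatile unsigned key_events = 0;

static void interrupt new_isr(void)
{
    key_events++;       /* note that a keyboard event occurred        */
    old_isr();          /* chain to the old handler so the BIOS still */
}                       /* processes the keystroke and acks the PIC   */

int main(void)
{
    old_isr = getvect(0x09);    /* int 21h, AH=35h: fetch current vector */
    setvect(0x09, new_isr);     /* int 21h, AH=25h: install our handler  */

    while (!kbhit())
        ;                       /* ...the program's task switching...    */

    setvect(0x09, old_isr);     /* always restore the old vector!        */
    return 0;
}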
As for Windows, I am not as sure, but perhaps this will answer your question: https://msdn.microsoft.com/en-us/library/system.windows.forms.control.keypress%28v=vs.110%29.aspx

Related

What is the difference between the following two eBPF program types BPF_PROG_TYPE_SYSCALL and BPF_PROG_TYPE_KPROBE?

So I am assuming that BPF_PROG_TYPE_SYSCALL programs are triggered whenever a particular syscall is executed inside the kernel. Can't BPF_PROG_TYPE_KPROBE eBPF programs be used for that purpose? Kprobes can hook into any kernel function, and syscalls are kernel functions too.
So what is the difference between the two types of programs, and when should each be used?
You would think that, but BPF_PROG_TYPE_SYSCALL is actually a program type that can execute syscalls itself (https://lwn.net/Articles/854228/). It was introduced as an attempt to let one BPF program load another, so the first program can be signed with a certificate. As of this writing, it hasn't caught on very widely.
Indeed, if you want to trigger on syscall execution, kprobes are the way to go.
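For the kprobe route, a minimal libbpf-style program that fires on entry to openat might look like this sketch (the __x64_sys_openat wrapper name is x86-64-specific):

// kprobe_openat.bpf.c - build with clang -target bpf
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("kprobe/__x64_sys_openat")
int trace_openat(void *ctx)
{
    /* output appears in /sys/kernel/debug/tracing/trace_pipe */
    bpf_printk("openat() invoked");
    return 0;
}

char LICENSE[] SEC("license") = "GPL";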

.call alternative in kernel mode

I see that .call is not supported in kernel mode in WinDbg, but I seem to remember that GDB does allow call during kernel-mode debugging.
Can anyone:
Suggest how I could call an arbitrary function in WinDbg during kernel-mode debugging?
Explain why .call is only supported in user mode?
Raymond Chen describes how .call is implemented here:
Stupid debugger tricks: Calling functions and methods
Back in the old days, if you wanted to call a function from inside the debugger, you had to do it by hand: save the registers, push the parameters onto the stack (or into registers if the function uses fastcall or thiscall), push the address of the ntdll!DbgBreakPoint function, move the instruction pointer to the start of the function you want to call, then hit "g" to resume execution. The function runs and then returns to ntdll!DbgBreakPoint, where the debugger regains control and you can look at the results. Then restore the registers (including the original instruction pointer) and resume debugging. (That paragraph was just a quick recap; I'm assuming you already knew that.)
In kernel mode you would need a different return address, since ntdll!DbgBreakPoint is a user-mode address. Since he mentions using g, you would want to set a breakpoint on your chosen return address.
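A rough kernel-mode equivalent of that dance might look like the sketch below (MyDriver!SomeFunction and the scratch address are made-up placeholders; on x64 the first four parameters go in rcx, rdx, r8, and r9 rather than on the stack):

kd> r                              ; $$ record the register state by hand
kd> r rsp=@rsp-8                   ; $$ make room for a fake return address
kd> eq @rsp <scratch-addr>         ; $$ plant a return address you control
kd> bp <scratch-addr>              ; $$ break when the callee returns
kd> r rip=MyDriver!SomeFunction    ; $$ point rip at the function to call
kd> g                              ; $$ run; at the break, rax holds the result

Afterwards, restore the recorded register values with r and resume, exactly as in the user-mode recipe.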

Is there a way to remove "getKey"'s input lag?

I've recently decided to try TI-BASIC programming, and while I was playing with getKey, I noticed that it had a ~1 s input lag after the first input. Is this built into the calculator, or can it be changed?
I recognize that "Quick Key" code above ;) (I'm the original author and very glad to see it spread around!).
Anyway, here is my low-level knowledge of the subject:
The operating system uses what is known as an interrupt to handle reading the keyboard, link port, USB port, and the run indicator, among other things. The interrupt is just software code, nothing hardware-implemented, so it is hardwired into the OS, not the calculator hardware.
The gist of the code TI uses is that once it reads that a key press occurred, it resets a counter to 50 and decrements it so long as the user holds down the key. Once the counter reaches zero, it tells getKey to recognize it as a new keypress and then resets the counter to 10. This causes the initial delay to be longer than subsequent delays.
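In pseudo-C, the described logic amounts to something like this (names are made up; this is not TI's actual code):

static int counter = 0;

/* Called once per keyboard-scan tick; returns 1 whenever getKey
   should report the key as a fresh keypress. */
int key_repeat_tick(int key_is_down)
{
    if (!key_is_down) {
        counter = 0;            /* released: next press starts over */
        return 0;
    }
    if (counter == 0) {
        counter = 50;           /* fresh press: report immediately, */
        return 1;               /* then wait the long initial delay */
    }
    if (--counter == 0) {
        counter = 10;           /* held: report again and switch to */
        return 1;               /* the shorter repeat delay         */
    }
    return 0;
}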
The TI-OS allows third-party "hooks" to jump in and modify the getKey process, and I used such a hook in another, more complicated program (Speedy Keys). However, this hook is never called during BASIC program execution except at a Pause or Menu( command, where it isn't too helpful.
Instead, what we can do is set up a parser hook that modifies the getKey counters. Alternatively, you can use the Quick Key code below, or you can use hybrid BASIC, which requires you to download a third-party app. A few of these apps (BatLib [by me], Celtic 3, DoorsCS7, and xLIB) offer a very fast getKey alternative as well as many other powerful functions.
The following is the code for setting up the parser hook. It works very well in my tests! See the notes below:
#include "ti83plus.inc" ; ~~This column is the stuff for manually
_EnableParserHook = 5026h ; creating the code on calc. ~~
.db $BB,$6D ;AsmPrgm
.org $9D95 ;
ld hl,hookcode ;21A89D
ld de,appbackupscreen ;117298
ld bc,hookend-hookcode ;010A00
ldir ;EDB0
ld hl,appbackupscreen ;217298
ld a,l ;7D
bcall(_EnableParserHook);EF2650
ret ;C9
hookcode: ;
.db 83h ;83
push af ;F5
ld a,1 ;3E01
ld (8442h),a ;324284
pop af ;F1
cp a ;BF
ret ;C9
hookend: ;
Notes: other apps or programs may use parser hooks. Using this program will disable those hooks, and you will need to reinstall them afterwards; this is pretty easy.
Finally, if you are entering this on your calculator by hand, type in the hex codes from the second column.
You will need to run the program once, either on the homescreen or at the start of your main program. After this, all getKeys will have no delay.
I figured this out myself too when I was experimenting with my TI-84 during the summer. This lag cannot be changed from within TI-BASIC itself. I think this is partly because the TI-84 is built around a Zilog Z80 microprocessor, a design dating back to 1976.
This is unfortunately just an inefficiency of the calculator. TI-BASIC is a fairly high-level language, meant to be easy to use, and is thus not very efficient or fast, especially with respect to input and output, i.e. printing messages and reading input.
Quick Key
:AsmPrgm3A3F84EF8C47EFBF4AC9
This is a getKey routine that makes all keys repeat, not just the arrows, and with no delay between repeats. The key codes are different, so you might need to experiment.

int instruction from user space

I was under the impression that the int instruction on x86 is not privileged, so I thought we should be able to execute it from a user-space application. But that does not seem to be the case.
I am trying to execute int from a user application on Windows. I know it may not be right to do so, but I wanted to have some fun. Windows, however, is killing my application.
I think the issue is due to the condition CPL <= IOPL. Does anyone know how to get around it?
Generally, the old dispatcher mechanism for user-mode code to transition to kernel mode in order to invoke kernel services was implemented via int 2Eh (now replaced by sysenter). Also, int 3 is still reserved for breakpoints to this very day.
Basically, the kernel sets up traps for certain interrupts (I don't remember whether for all of them) and, depending on the trap code, performs some service for the user-mode invoker; if that is not possible, your application gets killed because it attempted a privileged operation.
The details depend on the exact interrupt you were trying to invoke. The functions DbgBreakPoint (ntdll.dll) and DebugBreak (kernel32.dll), for example, do nothing other than invoke int 3 (or actually the specific opcode int3).
Edit 1: On newer Windows versions (XP SP2 and newer, IIRC), sysenter replaces int 2Eh, as I wrote in my answer. One possible reason your application gets terminated - although you should be able to catch this via exception handling - is that you don't pass the parameters it expects on the stack. Basically, the user-mode part of the native API places the parameters for the system service you call onto the stack, then places the number of the service (an index into the system service dispatcher table - SSDT, sometimes SDT) into a specific register, and then invokes sysenter on newer systems or int 2Eh on older ones.
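As a hedged illustration of that calling convention (32-bit MSVC inline assembly on old 32-bit Windows; the service index is made up, since real SSDT indices differ between builds, so expect an error status back rather than anything useful):

#include <stdio.h>

int main(void)
{
    long status;
    void *args[4] = { 0 };      /* parameter block the service would read */
    void **argptr = args;

    __asm {
        mov eax, 0x19           ; EAX = service number (SSDT index), made up
        mov edx, argptr         ; EDX = pointer to the parameters
        int 0x2E                ; legacy system-service trap
        mov status, eax         ; NTSTATUS result comes back in EAX
    }
    printf("service returned 0x%08lx\n", status);
    return 0;
}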
The minimum ring level for a given interrupt vector (which decides whether a given int is privileged) comes from the descriptor privilege level (DPL) of that vector's gate descriptor in the interrupt descriptor table (IDT).
In Windows, the majority of interrupt vectors are privileged. This prevents user mode from merely invoking the double-fault handler to immediately bugcheck the OS.
There are some non-privileged interrupts in Windows. Specifically:
int 1 (both the CD 01 encoding and the debug trap that fires after a single instruction when TF is set in EFLAGS)
int 3 (both encodings: CC and CD 03)
int 2Eh (the legacy Windows system-call entry)
All other interrupts are privileged, and executing them raises a general-protection fault instead, which Windows delivers to the process as an exception.
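A small experiment along these lines (32-bit MSVC sketch; which exception code Windows reports for the privileged vector is my assumption, likely an access violation):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    __try {
        __asm int 3             /* vector 3 has DPL 3: allowed from ring 3 */
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        printf("int 3 -> 0x%08lx\n", GetExceptionCode());
    }

    __try {
        __asm int 8             /* double-fault vector has DPL 0: #GP */
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        printf("int 8 -> 0x%08lx\n", GetExceptionCode());
    }
    return 0;
}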
INT is a 'privilege controlled' instruction. It has to be this way for the kernel to protect itself from user mode. INT goes through the exact same trap vectors that hardware interrupts and processor exceptions go through, so if user mode could arbitrarily trigger these exceptions, the interrupt dispatching code would get confused.
If you want to trigger an interrupt on a particular vector that Windows hasn't already set up, you have to modify the IDT entry for that vector with a debugger or a kernel driver. PatchGuard won't let you do this from a driver on x64 versions of Windows.

What happens if TF (trap flag) is set to 0 in 8086 microprocessors?

Here I searched that:
Trap Flag (T) – This flag is used for on-chip debugging. Setting the trap flag puts the microprocessor into single-step mode for debugging: the microprocessor executes an instruction and then enters the single-step ISR.
If the trap flag is set (1), the CPU automatically generates an internal interrupt after each instruction, allowing a program to be inspected as it executes, instruction by instruction.
If the trap flag is reset (0), no such function is performed.
https://en.wikipedia.org/wiki/Trap_flag
Now I am coding in emu8086. As explained above, TF must be set for the debugger to work.
Should I always set TF myself, or is it set automatically?
If I somehow set TF to 0, will debuggers across the whole computer stop working, or will just emu8086 fail to debug?
I've never used emu8086, but judging by its name and some screenshots of it, it's probably an emulator, meaning it does not run your code natively.
Each instruction changes the state of a virtual 8086 CPU (represented as a data structure in memory), not the state of your real CPU.
With this kind of emulation, emu8086 doesn't need the TF flag to single-step your program; it just stops after one step of emulation and waits for you to press another button.
This is also why it can offer features such as "Step back".
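Conceptually, single-stepping in an emulator can be as simple as this toy sketch (everything here is made up, not emu8086's actual code):

#include <stdio.h>

typedef struct { unsigned short ip, ax; } VCpu;   /* virtual CPU state */

static void execute_one_instruction(VCpu *cpu)
{
    cpu->ax += 1;       /* stand-in for fetching/decoding one opcode */
    cpu->ip += 1;
}

int main(void)
{
    VCpu cpu = { 0, 0 };
    int step;
    for (step = 0; step < 3; step++) {
        execute_one_instruction(&cpu);    /* one press of "Step" */
        printf("ip=%04x ax=%04x\n", cpu.ip, cpu.ax);
        getchar();                        /* wait for the user   */
    }
    return 0;
}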
If you were wondering what would happen if a debugged (rather than emulated) program set the TF flag, the answer is that it depends on the debugger.
The correct behaviour is for the debuggee to receive the exceptions, but this is hard to handle correctly (since the debugger itself uses the TF flag).
Some debuggers just don't care and swallow the exception (i.e. they don't forward it to the program under debug), assuming that a well-written program doesn't need to use the TF flag.
Unfortunately, malware routinely uses a set of anti-debug techniques, including setting TF and checking it back or waiting for the resulting exceptions, to detect the presence of a debugger.
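The classic form of that check looks roughly like this (32-bit MSVC sketch):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    int got_exception = 0;
    __try {
        __asm {
            pushfd
            or dword ptr [esp], 0x100   ; set TF in the saved EFLAGS
            popfd                       ; TF takes effect...
            nop                         ; ...the single-step trap fires here
        }
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        got_exception = 1;              /* we received our own trap */
    }
    printf(got_exception ? "no debugger interfered\n"
                         : "a debugger swallowed the trap\n");
    return 0;
}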
A truly transparent debugger has to handle the RFLAGS register carefully.
When debugging with breakpoints, TF is not set while the program is executing, so there is nothing to worry about.
However, when single-stepping, TF is set during the next instruction. This is problematic during a pushfd/q, and the debugger must explicitly handle that case to avoid detection.
If the debuggee sets TF, the debugger must pass the debug exception on to the program. Under current OSes, TF won't last more than an instruction, because the OS catches the exception, transforms it into a signal, and dispatches it to the program while clearing TF. So the debugger can simply do a check before stepping into a popfd/q instruction.
Where TF does not get cleared by the OS, the debugger must effectively emulate RFLAGS with a copy.
The debugger sets TF according to what it needs to do. The code being debugged should not modify TF.
