x86: Entering graphics mode on Macs (macOS)

I am trying to enter graphics mode in assembly on my Mac, mostly for learning purposes. I have seen how to do it on BIOS-based systems (DOS and the like):
mov ax, 13h
int 10h
However, since I am on a Mac, I cannot use 'int' calls; instead I use 'syscall'. So next I looked through Apple's system call list in hopes of finding something, but I didn't come across anything that seemed helpful. Lastly, I tried writing what I thought the Mac equivalent would look like, but without a system call number to use I was stuck. So...
mov rax, 0x200000(number) ; The number would be the system call
syscall ; int equivalent
I don't know what the number there would be. This may not even be possible, and if that is the case please say so; otherwise, if anyone has any idea whether I'm headed in the right direction or completely the wrong one, help is appreciated.
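For reference, the offset the code above is reaching for is 0x2000000: on 64-bit macOS the BSD system call numbers are offset by that constant, with arguments passed in rdi, rsi, rdx. A minimal NASM sketch of a write/exit pair (standard BSD call numbers; the label names are just for illustration):
global _main
section .data
msg:    db "hello", 10          ; the text plus a newline
msglen  equ $ - msg
section .text
_main:
    mov rax, 0x2000004          ; 0x2000000 + 4 = BSD write(2)
    mov rdi, 1                  ; fd 1 = stdout
    lea rsi, [rel msg]          ; buffer to write
    mov rdx, msglen             ; byte count
    syscall
    mov rax, 0x2000001          ; 0x2000000 + 1 = BSD exit(2)
    xor rdi, rdi                ; exit status 0
    syscall
Note, though, that nothing in the BSD syscall table switches video modes: there is no BIOS under macOS, so graphics means calling into frameworks (Cocoa, Metal, OpenGL) rather than making a syscall.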

Related

Using LOW() and HIGH() with a variable instead of a value

I know this is how we use LOW() and HIGH() operators:
MOV P3,#LOW(-10)
But what if -10 is a variable or an input on one of the ports? This is what I need:
MOV P3,#LOW(P0)
Which does not work; the Edsim51 simulator says a label is expected. But I don't see how a label helps here. Maybe it means a function label? I tried that, but as far as I know we cannot return a value from a function, so I still don't know how to use a function label here.
MOV P3,#LOW(func)
func:
RET P0
Which is incorrect.
This is the only thing I could find on the internet, just a discussion about this issue: https://community.arm.com/developer/tools-software/tools/f/keil-forum/22073/how-to-use-low-or-high-in-a51
In the official documentation, I can't even find the LOW() and HIGH() operators anywhere, which is weird to me: https://www.keil.com/support/man/docs/is51/is51_instructions.htm
You don't.
It doesn't make sense, how do you take the low byte or the high byte of something that is only one byte wide to begin with?
You're not finding them in the official instruction documentation because they aren't instructions; the CPU never sees them. They're assemble-time operators. MOV P3, #LOW(-10) is just another way of writing MOV P3, #246 that makes the programmer's intent clearer.
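To make the assemble-time nature concrete, here is a small sketch in A51-style syntax where the operators actually have something to split, namely a 16-bit constant (the name VALUE is just for illustration):
VALUE   EQU 1234h               ; a 16-bit assemble-time constant
        MOV DPL, #LOW(VALUE)    ; assembles as MOV DPL, #34h
        MOV DPH, #HIGH(VALUE)   ; assembles as MOV DPH, #12h
        MOV P3, #LOW(-10)       ; -10 as a byte is 0F6h, i.e. MOV P3, #246
And since P0 is already a single byte at run time, copying its value needs no operator at all: a plain MOV P3, P0 does it.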

Compiler generated unexpected `IN AL, DX` (opcode `EC`) while setting up call stack

I was looking at some compiler output, and when a function is called it usually starts setting up the call stack like so:
PUSH EBP
MOV EBP, ESP
PUSH EDI
PUSH ESI
PUSH EBX
So we save the calling routine's base pointer on the stack, set up our own base pointer, and then store the contents of a few registers on the stack. These are then restored to their original values at the end of the routine, like so:
LEA ESP, [EBP-0Ch] ; point ESP back at the three saved registers (3 * 4 bytes below EBP)
POP EBX
POP ESI
POP EDI
POP EBP
RET
So far, so good. However, I noticed that in one routine the code that sets up the call stack looks a little different. In fact, it looks like this:
IN AL, DX
PUSH EDI
PUSH ESI
PUSH EBX
This is quite confusing for a number of reasons. For one thing, the end-of-method code is identical to that quoted above for the other method, and in particular seems to expect a saved copy of EBP to be available on the stack.
For another, if I understand correctly, the instruction IN AL, DX reads into the AL register, which is the low byte of the EAX register, and as it happens the very next instruction here is
XOR EAX, EAX
as the program wants to zero a few things it allocated on the stack.
Question: I'm wondering exactly what's going on here that I don't understand. The machine code being translated as IN AL, DX is the single byte EC, whereas the pair of instructions
PUSH EBP
MOV EBP, ESP
would correspond to the three bytes 55 8B EC. Is the disassembler misreading this somehow? Or is something relying on a side effect I don't understand?
If anyone's curious, this machine code was generated by the CLR's JIT compiler, and I'm viewing it with the Visual Studio debugger. Here's a minimal reproduction in C#:
class C {
string s = "";
public void f(string s) {
this.s = s;
}
}
However, note that this seems to be non-deterministic; sometimes I seem to get the IN AL, DX version, while other times there's a PUSH EBP followed by a MOV EBP, ESP.
EDIT: I'm starting to strongly suspect a disassembler bug -- I just got another case where it shows IN AL, DX (opcode EC) and the two preceding bytes in memory are 55 8B. So perhaps the disassembler is simply confused about the entry point of the method. (Though I'd still like some insight as to why that's happening!)
Sounds like you are using VS2015. Your conclusion is correct: its debugging engine has a lot of bugs. Yes, wrong address. And that is not the only problem; it does not restore breakpoints properly, so you are apt to see the INT3 instruction still in the code, and it can't correctly refresh the disassembly when the jitter has regenerated the code and replaced stub calls. You can't trust anything you see.
I recommend you use Tools > Options > Debugging > General and tick the "Use Managed Compatibility Mode" checkbox. That forces the debugger to use an older debugging engine, VS2010 vintage. It is much more stable.
You'll lose some features with this engine, like return-value inspection and 64-bit Edit+Continue; they won't be missed when you do this kind of debugging. You will, however, see fake code addresses, as was always common before, so all CALL addresses are wrong and you can't easily identify calls into the CLR. Flipping the engine back and forth is a workaround of sorts, but of course a big annoyance.
This has not been worked on either; I saw no improvements in the updates. But they no doubt had a big bug list to work through, and VS2015 shipped before it was done. Hopefully VS2017 is better; we'll find out soon.
As Hans answered, it's a bug in Visual Studio.
To confirm, I disassembled the same binary with both IDA 6.5 and Visual Studio 2019. Visual Studio 2019 missed the first two bytes (0x55 0x8B) when deciding where main starts.
Note: 'Use managed compatibility mode' mentioned by Hans didn't fix the issue in VS2019.
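To see the misalignment concretely, here is a standard prologue decoded from two different starting offsets (the encodings come from the x86 manual; the exact JIT output may differ):
; Decoding from the true entry point:
; 55        PUSH EBP
; 8B EC     MOV  EBP, ESP
; 57        PUSH EDI
; 56        PUSH ESI
; 53        PUSH EBX
;
; Decoding the same byte stream starting two bytes late:
; EC        IN   AL, DX     ; the stray ModRM byte of MOV EBP, ESP
; 57        PUSH EDI
; 56        PUSH ESI
; 53        PUSH EBX
Skipping 55 8B leaves the lone EC, which happens to be a valid one-byte instruction, so the disassembly resynchronizes immediately afterwards and everything else looks normal.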

Hex subtraction in Windows ASM shellcode returns wrong values

I'm working on some shellcode and got this weird result on my Windows VM. The idea is to start from zero and carve out some value; in this case, I need the result to equal 0x50505353.
(gdb) print /x 0 - 0x38383536 - 0x77777777
$1 = 0x50505353
In Python, GDB, OSX, Linux, and everywhere else, this operation works as expected. On the Windows VM, when I perform the PUSH in OllyDbg or Immunity, the result is 0x52535051.
So the question is: why? What am I missing here?
This is the operation done in ASM:
AND EAX,65656565
AND EAX,1A1A1A1A
SUB EAX,38383536
SUB EAX,77777777
PUSH EAX
The first couple of ANDs turn EAX into 0x00000000; then I can subtract from that. Just to be clear, I cannot use XOR in this test.
This is because you are not subtracting 0x38383536; you are subtracting its byte-swapped form, 0x36353838, and 0 - 0x36353838 - 0x77777777 = 0x52535051, exactly the result you stated.
I just found the problem. When I did the calculations outside the debugger, I forgot the small detail that, in order to use the values in the shellcode, I have to write them in reversed (little-endian) byte order.
Mental note: always remember to check the byte order you are working in.
Thanks to all.
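For anyone who hits the same thing later: the CPU is not at fault, and neither is the assembler. x86 stores immediates little-endian, so writing the instruction normally already handles the byte order; the reversal only bites when you emit raw bytes by hand. A sketch (NASM syntax):
SUB EAX, 38383536h           ; written as an instruction, the assembler encodes
                             ; the immediate for you: 2D 36 35 38 38
db 2Dh, 38h, 38h, 35h, 36h   ; the same bytes hand-emitted in "display order"
                             ; decode as SUB EAX, 36353838h -- the bug above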
To be clear, are you saying the error happens on the gdb command line as well as in your Windows VM program? If gdb is giving rubbish, then something is wrong with the VM.
Do you have gdb or another debugger working on the Windows VM? I'm a little confused about the runtime environment where you are seeing the error.
If you have gdb where the error is occurring, then here is how to debug it.
If it is only the program that misbehaves, use stepi and info regs to see what is going on. The value of $eax before that last subtraction is likely the one of interest, but check each instruction. If $eax has the correct value at the push, look around $sp in memory for the value. If it is still correct there, follow the return from the function, look for the corresponding pop or stack read, and check the value at that point. Perhaps the stack is getting corrupted before you actually print/use the value?
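A minimal version of that workflow at the gdb prompt (standard commands, nothing shellcode-specific):
(gdb) stepi                  # execute one instruction
(gdb) info registers eax     # check EAX after each AND/SUB
(gdb) x/1xw $esp             # after the PUSH, inspect the word at ESP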

Debugging a SHA-256 implementation

I've been working on a SHA-256 implementation in MASM32 and have some source done. However, I can't get it working correctly: I have looked at it, rewritten bits of it, and copied some of the source into inline Delphi ASM, where it runs perfectly, yet my original ASM source has problems. Given that I'm not incredibly experienced with it, would it be possible for someone to look at the source and tell me if they see something I'm missing? I already did a Delphi implementation and have it working perfectly, so I know it's not the algorithm itself at fault but the ASM code.
I was planning on optimizing after I simply got it working. But bear in mind that I am still learning (self-taught), so if you see something on the stupid side in this source, I'd like to learn from that too. My main concern, though, is getting it working, since I'm not seeing where the error is.
(Removed the ASM code for space concerns, since I know the problem now)
Edit: I figured out what the problem was, which leads to the next logical question, since I don't know the answer: why did this code cause a problem?
Changing the following at the end of the SHA256Loop macro:
ADD h, ECX
ADD h, EBX ; h := t1 + t2;
To this:
ADD ECX, EBX ; h := t1 + t2;
MOV h, ECX
Fixed it. Why couldn't I do two ADD instructions to memory and get the same result as an ADD to a register followed by a MOV to memory?
Your first example with two ADD instructions depends on the previous contents of h. The second example is independent of the previous contents of h. If the value of h is not guaranteed to be zero, then those two examples will behave differently.
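A worked example makes the difference visible. Suppose h still holds 5 from earlier in the round, ECX (t1) is 2, and EBX (t2) is 3:
ADD h, ECX      ; version 1: h = 5 + 2 = 7
ADD h, EBX      ;            h = 7 + 3 = 10, but t1 + t2 is only 5
ADD ECX, EBX    ; version 2: ECX = 2 + 3 = 5
MOV h, ECX      ;            h = 5; the old contents are discarded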

How to change screen background in assembler

This is for homework: how do I clear the screen and change the foreground and background colors in assembler (NASM on Windows)?
EDIT: It turns out the answer is something like
mov bh, 71h
int 10h
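BH on its own does nothing until a full BIOS call carries it. Assuming a real-mode environment where INT 10h is actually reachable (it is not from a 32-bit Windows program), the usual clear-screen idiom is the scroll-window service:
mov ax, 0600h   ; AH=06h scroll window up, AL=00h blanks the whole window
mov bh, 71h     ; fill attribute: grey background (7), blue foreground (1)
xor cx, cx      ; CH,CL = upper-left corner (row 0, column 0)
mov dx, 184Fh   ; DH,DL = lower-right corner (row 24, column 79)
int 10h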
Check out FillConsoleOutputCharacter and SetConsoleTextAttribute.
You'll probably need some operating system services to get that kind of functionality. Since that's a requirement, how would you do it from another language? Once you figure that out, you can just make the same calls from your assembly language program. Something like:
call OSServiceClearScreen
where OSServiceClearScreen is the name of the system call or library function that performs the operation you want. Then just link your assembly program with the right libraries and it should all "just work".
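As a sketch of what that looks like against the Win32 console API from 32-bit NASM (stdcall convention; the @-decorated names assume a MinGW-style linker, and the attribute value is just an example):
extern _GetStdHandle@4
extern _SetConsoleTextAttribute@8
section .text
global _start
_start:
    push dword -11                  ; STD_OUTPUT_HANDLE
    call _GetStdHandle@4            ; console handle returned in EAX
    push dword 0x71                 ; attribute for subsequent writes
    push eax                        ; the console handle
    call _SetConsoleTextAttribute@8
    ; ... write text, clear the screen with FillConsoleOutputCharacter, exit ...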
