Debugging a SHA-256 implementation

I've been working on a SHA-256 implementation in MASM32 and have some source done. However, I can't get it working correctly. I've reviewed it, rewritten parts of it, and copied some of the source into Delphi's inline assembler, where it runs perfectly, yet my original ASM source has problems. Given that I'm not especially experienced with it, would someone be willing to look at the source and tell me if they see something I'm missing? I already have a working Delphi implementation, so I know the algorithm itself isn't at fault, only the ASM code.
I was planning on optimizing after I simply got it working. Bear in mind that I am still learning (self-taught), so if you see something in this source that looks foolish, I'd like to learn from that too. But my main concern is getting it working, since I'm not seeing where the error(s) are.
(Removed the ASM code for space concerns, since I know the problem now)
Edit: I figured out what the problem was. Which leads into the next logical question since I don't know: Why did this code cause a problem?
Changing the following at the end of the SHA256Loop macro:
ADD h, ECX
ADD h, EBX ; h := t1 + t2;
To this:
ADD ECX, EBX ; h := t1 + t2;
MOV h, ECX
Fixed it. Why couldn't I do two ADD instructions to memory and get the same result as an ADD to a register followed by a MOV to memory?

Your first example, with two ADD instructions, depends on the previous contents of h: it computes h := h + t1 + t2. The second example replaces h entirely, computing h := t1 + t2, independent of h's previous contents. Unless h happens to be zero beforehand, the two sequences give different results.
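A minimal sketch in Python (not the original MASM code) of the difference between the two instruction sequences, using arbitrary example values for t1 (ECX), t2 (EBX), and the stale contents of h:

```python
# Model the two instruction sequences with 32-bit wraparound arithmetic.
MASK = 0xFFFFFFFF
h_old, t1, t2 = 0xDEADBEEF, 0x11111111, 0x22222222

# Buggy version: ADD h, ECX / ADD h, EBX accumulates into h,
# computing h := h + t1 + t2 -- the old h leaks into the result.
h_buggy = (h_old + t1 + t2) & MASK

# Fixed version: ADD ECX, EBX / MOV h, ECX replaces h,
# computing h := t1 + t2 as the SHA-256 round requires.
h_fixed = (t1 + t2) & MASK

print(hex(h_buggy))  # 0x11e0f222 -- polluted by the stale h
print(hex(h_fixed))  # 0x33333333
```

The two only agree when the previous value of h happens to be zero, which in a SHA-256 round it essentially never is.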


LOW() HIGH() to use variable instead of value

I know this is how we use LOW() and HIGH() operators:
MOV P3,#LOW(-10)
But what if -10 is a variable or an input on one of the ports? This is what I need:
MOV P3,#LOW(P0)
This does not work; the Edsim51 simulator says that a label is expected. I don't see how I can use a label here. Maybe it means a function label, and I tried that, but as far as I know we cannot return a value from a function, so I still don't know how to use a function label here:
MOV P3,#LOW(func)
func:
RET P0
Which is incorrect.
This discussion is the only thing I could find on the internet about the issue: https://community.arm.com/developer/tools-software/tools/f/keil-forum/22073/how-to-use-low-or-high-in-a51
In the official documentation I can't find the LOW() and HIGH() operators anywhere, which seems odd to me: https://www.keil.com/support/man/docs/is51/is51_instructions.htm
You don't.
It doesn't make sense: how would you take the low byte or the high byte of something that is only one byte wide to begin with?
You're not finding them in the official instruction documentation because they aren't instructions; the CPU never sees them. They are assembler operators evaluated at assembly time, which is why they only work on constants. MOV P3,#LOW(-10) is just another way of writing MOV P3,#246 that makes the programmer's intent clearer.
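As a quick sanity check (in Python rather than A51 assembler syntax), this is the arithmetic the assembler performs at assembly time:

```python
# LOW() takes bits 0-7 and HIGH() takes bits 8-15 of a 16-bit constant.
def low(value):
    return value & 0xFF

def high(value):
    return (value >> 8) & 0xFF

# -10 as a 16-bit two's-complement constant is 0xFFF6.
print(low(-10))    # 246 (0xF6) -- the byte MOV P3,#LOW(-10) actually loads
print(high(-10))   # 255 (0xFF)
```

For a value that only becomes known at run time (like port P0), there is nothing for the assembler to evaluate, and a port register is already a single byte, so LOW()/HIGH() would be meaningless for it anyway.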

Hex subtraction in Windows ASM shellcode returns wrong values

I'm working on some shellcoding and got this weird result on my Windows VM. The idea is to start from zero and carve out a value; in this case, I need the value to equal 0x50505353.
(gdb) print /x 0 - 0x38383536 - 0x77777777
$1 = 0x50505353
In Python, GDB, on OS X, Linux, and everywhere else, this operation works as expected. On the Windows VM, when I perform the PUSH in OllyDbg or Immunity, the result is 0x52535051.
So the question is: why? What am I missing here?
This is the operation done in ASM:
AND EAX,65656565
AND EAX,1A1A1A1A
SUB EAX,38383536
SUB EAX,77777777
PUSH EAX
The first pair of ANDs turns EAX into 0x00000000 (0x65656565 and 0x1A1A1A1A share no set bits), so I can then subtract from zero. Just to be clear, I cannot use XOR in this test.
This is because you are not subtracting 0x38383536; you are subtracting 0x36353838, which gives exactly the result you observed.
I just found the problem. When I did the calculations outside the debugger, I forgot the small detail that, to use the values in the shellcode, I have to write their bytes in reversed (little-endian) order.
Mental note: always check the byte order you are working in.
Thanks to all.
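The byte-order effect can be reproduced outside any debugger. A small Python sketch, using the values from the question, shows that subtracting the byte-swapped constant yields exactly the bad result seen in OllyDbg:

```python
import struct

MASK = 0xFFFFFFFF

def bswap32(x):
    # Reverse the byte order of a 32-bit value, as happens when the
    # bytes of an immediate are written in the wrong endianness.
    return struct.unpack('<I', struct.pack('>I', x))[0]

# Intended calculation:
good = (0 - 0x38383536 - 0x77777777) & MASK
print(hex(good))   # 0x50505353

# What actually ran: the first immediate's bytes were reversed.
# (0x77777777 is a palindrome, so swapping it changes nothing.)
bad = (0 - bswap32(0x38383536) - 0x77777777) & MASK
print(hex(bad))    # 0x52535051
```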
To be clear, are you saying the error happens on the gdb command line as well as in your Windows VM program? If gdb is giving rubbish, then something is wrong with the VM.
Do you have gdb or another debugger working on the Windows VM? I'm a little confused about the runtime environment where you are seeing the error.
If you have gdb where the error is occurring, then here is how to debug it.
If only the program shows the error, use stepi and info registers to see what is going on. The value of $eax before that last subtraction is likely the one of interest, but check each instruction. If $eax has the correct value at the push, look around $sp in memory for the value. If it is still correct there, follow the return from the function, find the corresponding pop or stack read, and check the value at that point. Perhaps the stack is getting corrupted before you actually print/use the value?

x86 Entering Graphics Mode on Macs

I am trying to enter graphics mode in assembly on my Mac, mostly for learning purposes. I have seen how to do it on BIOS/DOS-based systems, like...
mov ax, 13h
int 10h
However, since I am on a Mac, I cannot use 'int' calls; I use 'syscall' instead. So I looked through Apple's system calls in hopes of finding something, but I didn't come across anything that seemed helpful. Lastly, I tried what I guessed the Mac equivalent would look like, though I was unsure which system call to use. So...
mov rax, 0x2000000 + (number) ; 0x2000000 marks the BSD syscall class on macOS; (number) would be the system call number
syscall ; int equivalent
I don't know what the number there would be. This may not even be possible, and if that is the case please say so; otherwise, if anyone has any idea whether I'm headed in the right direction or completely the wrong one, help is appreciated.

(8051) Check if a single bit is set

I'm writing a program for an 8051 microcontroller. In the first part of the program I do some calculations and, based on the result, I either light the LED or not (using CLR P1.7, where P1.7 is the port pin the LED is attached to).
In the next part of the program I want to retrieve that bit, perhaps store it somewhere, and use it in a conditional jump instruction like JB. How can I do that?
Also, I've seen the instruction MOV C, P1.7 in a code sample. What's the C here?
The C here is the 8051's carry flag - called that because it can be used to hold the "carry" when doing addition operations on multiple bytes.
It can also be used as a single-bit register, so (as here) when you want to move bits around, you can load it with a port bit (such as P1.7) and then store it somewhere else, for example:
MOV C, P1.7
MOV <bit-address>, C
Then later you can branch on it using:
JB <bit-address>, <label>
Some of the special function registers are also bit-addressable; I believe it's the ones at addresses ending in 0 or 8, and internal RAM from 20h to 2Fh is bit-addressable as well. I don't have a reference in front of me, but you can do something like SETB 20h.1. That way, if you need the carry for something else, you don't have to push it and use up space on your stack.
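Not 8051 code, but a Python model of what the MOV C, bit / MOV bit, C / JB sequence does with the bits. P1 here is just a stand-in byte for the port register, with a made-up value:

```python
# Hypothetical port value: bit 7 (P1.7) happens to be set.
P1 = 0b1000_0001

# MOV C, P1.7 -- copy bit 7 of P1 into the carry "register".
carry = (P1 >> 7) & 1

# MOV 20h.1, C -- store the carry into a bit-addressable RAM bit.
saved_bit = carry

# JB 20h.1, label -- branch if the saved bit is set.
if saved_bit:
    print("branch taken")   # this path runs for the value above
else:
    print("fall through")
```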

grdb not working variables

I know this is probably a silly question, but I just can't figure it out. I'm debugging this:
xor eax,eax
mov ah,[var1]
mov al,[var2]
call addition
stop: jmp stop
var1: db 5
var2: db 6
addition:
add ah,al
ret
The numbers I find at addresses var1 and var2 are 0x0E and 0x07. I know it's not segmented, but that shouldn't cause behavior like this, because the addition call itself works just fine. Could you please explain where my mistake is?
I see the problem now, though I don't know how to fix it yet. For some reason the instruction pointer starts at 0x100 and all the segment registers at 0x1628. Instructions are addressed through [cs:ip] (a segment register plus the instruction pointer, at any rate). The offset to var1 is 0x10 (probably because it's the 0x10th byte from the beginning of the code). I tried to examine the memory, and what I got was:
1628:100 8 bytes
1628:108 8 bytes
1628:110 <- not what I expected (assume another 8 bytes)
1628:118 ...
Whatever tricks are going on in memory, [cs:var1] points somewhere other than it does in my code, probably where a .data label would normally be addressed through ds. I don't know what is supposed to be at 1628:10.
OK, I found out what caused the problem and cost me a whole day. The behavior described above is actually correct, and the code is fully functional. What I didn't know is that the grdb debugger, like DOS, loads the program starting at offset 0x100. The solution is to insert the directive ORG 0x100 on the first line, and that's the whole fix. The code appeared to run because the instruction pointer starts at the right address and simply advances instruction by instruction, but the assembler doesn't know at what effective address the program will be loaded, so all offsets stay relative to the first line of code. That means all the variables (if you aren't using a data-section label) are addressed as if the program started at 0x0, which of course doesn't match DOS, and grdb apparently emulates some DOS features. I hope this saves someone time if they hit the same problem.
At least now I know one reason to use a .data section. :)
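A small Python model (not grdb itself) of why the variables appear 0x100 bytes off: the assembler computes offsets from 0 unless told otherwise, while a DOS-style loader places a .COM image at offset 0x100 in its segment.

```python
LOAD_BASE = 0x100   # where grdb/DOS actually places the code
ASM_BASE  = 0x000   # what the assembler assumes without an ORG directive

# var1 is the 0x10th byte from the start of the assembled code.
var1_source_offset = 0x10

# Without ORG 0x100, the instruction encodes offset 0x010,
# but at run time the byte actually sits at offset 0x110.
encoded_addr = ASM_BASE + var1_source_offset    # 0x010 -- what [var1] reads
runtime_addr = LOAD_BASE + var1_source_offset   # 0x110 -- where var1 really is
print(hex(encoded_addr), hex(runtime_addr))

# With ORG 0x100 the assembler's offsets match the loader's layout:
encoded_with_org = 0x100 + var1_source_offset
assert encoded_with_org == runtime_addr
```

Jumps and calls still work without ORG because they are relative to the instruction pointer, which starts out correct; only absolute data references like [var1] break.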
Assuming this is x86 assembly, var1 and var2 should reside in the .data section.
Explanation: I'm not going to explain exactly how the executable file is structured (that is platform-specific anyway), but here's a general idea of why what you're doing is not working.
Assembly code is divided into sections because each section corresponds directly (or almost directly) to a specific part of the binary/executable file. Global variables are normally defined in the .data section, since that is the part of the file where global data resides.
Defining a global variable (or globally accessed memory) inside the code section can lead to unexpected behavior, and some x86 assemblers may even raise an error for it.
