Reverse Engineering - AND 0FF - debugging

I'm doing some reverse engineering on a simple crackme app, and I'm debugging it with OllyDbg.
I'm stuck on the behavior of the AND instruction with the operand 0x0FF. I mean, it's equivalent in C++ to
if(... = true).
So what's confusing me is this:
Before the instruction:
ECX = CCCCCC01
ZF = 1
AND ECX, 0FF
After the instruction:
ECX = 00000001
ZF = 0
I think ZF should be set. I don't understand why the result in ECX is 1 and ZF isn't set.
AND: 1 AND 1 = 1 (both bits set); otherwise 0.
Can someone explain that to me?
Thanks for the help.

It's a bit-wise AND, so in binary you have
1100 1100 1100 1100 1100 1100 0000 0001
AND 0000 0000 0000 0000 0000 0000 1111 1111
----------------------------------------
0000 0000 0000 0000 0000 0000 0000 0001
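On x86, AND sets ZF only when the result is zero; here the result is 1, so ZF is cleared. A quick Python sketch of the same computation:

```python
# Mirror the OllyDbg session: AND ECX, 0FF keeps only the low 8 bits.
ecx = 0xCCCCCC01
ecx &= 0xFF

# x86 sets ZF only when the result of the operation is zero.
zf = 1 if ecx == 0 else 0

print(hex(ecx))  # 0x1
print(zf)        # 0
```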

Related

Why does the arithmetic expression result get truncated?

If I run $((0x100 - 0 & 0xff)), I get 0.
However, $((0x100 - 0)) gives me 256.
Why does the result of the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it looks at the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise-AND those together, you'll get
0010 (binary), or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, it does the subtraction before the bitwise AND (I expected it the other way around). Shell arithmetic follows C operator precedence, where binary - binds more tightly than &:
$((0x100 - 1 & 0xff)) gives 255, or 0xff, because 0x100 - 1 = 0xff
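Python shares the same C-style precedence (subtraction before bitwise AND), so the same expressions can be checked there:

```python
# Subtraction binds more tightly than &, so these parse as (0x100 - 0) & 0xff, etc.
print(0x100 - 0 & 0xff)   # 0   -- no bits in common between 0x100 and 0xff
print(0x100 - 1 & 0xff)   # 255 -- 0x100 - 1 = 0xff, and 0xff & 0xff = 0xff
print(0x06 & 0x03)        # 2   -- 0110 & 0011 = 0010
```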

Why does the CRC of "1" yield the generator polynomial itself?

While testing a CRC implementation, I noticed that the CRC of 0x01 usually (?) seems to be the polynomial itself. When I try to do the binary long division manually, however, I keep losing the leading "1" of the polynomial. For example, with a message of 0x01 and the polynomial 0x1021, I would get
      1 0000 0000 0000 (zero-padded value)
(XOR) 1 0000 0010 0001
      ----------------
      0 0000 0010 0001 = 0x0021
But any sample implementation (I'm dealing with XMODEM-CRC here) results in 0x1021 for the given input.
Looking at https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks, I can see how the XOR step of the upper bit leaving the shift register with the generator polynomial will cause this result. What I don't get is why this step is performed in that manner at all, seeing as it clearly alters the result of a true polynomial division?
I just read http://www.ross.net/crc/download/crc_v3.txt and noticed that in section 9, there is mention of an implicitly prepended 1 to enforce the desired polynomial width.
In my example case, this means that the actual polynomial used as divisor would not be 0x1021, but 0x11021. This results in the leading "1" being dropped, and the remainder being the "intended" 16-bit polynomial:
      1 0000 0000 0000 0000 (zero-padded value)
(XOR) 1 0001 0000 0010 0001
      --------------------
      0 0001 0000 0010 0001 = 0x1021
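A bit-at-a-time XMODEM CRC (truncated polynomial 0x1021, initial value 0x0000) shows the same effect: whenever the top bit shifted out of the register is 1, the register is XORed with the 16-bit truncated polynomial, which is exactly the implicit leading "1" at work. A minimal sketch:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC-16/XMODEM: polynomial 0x1021, init 0x0000, no reflection, no final XOR.
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                # Top bit set: shifting it out is the implicit x^16 term,
                # so XOR in the 16-bit truncated polynomial.
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_xmodem(b"\x01")))  # 0x1021
```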

Why does x3103 have the value x1482 at the end

Edit: The original question is this
Suppose the following LC-3 program is loaded into memory starting at
location x30FF:
x30FF 1110 0010 0000 0001
x3100 0110 0100 0100 0010
x3101 1111 0000 0010 0101
x3102 0001 0100 0100 0001
x3103 0001 0100 1000 0010
If the program is executed, what is the value in R2 at the end of
execution?
x30FF 1110 0010 0000 0001 ; R1 <- PC' + 1 ; R1 <- x3101
x3100 0110 0100 0100 0010 ; R2 <- mem[R1 + 2] ; R2 <- mem[x3103] = x1482
x3101 1111 0000 0010 0101 ; TRAP x25 = HALT
x3102 0001 0100 0100 0001 ; x1441
x3103 0001 0100 1000 0010 ; x1482
The question is what is the content of R2 at the end of the program
In this problem I understand everything until x3100
However, I don't understand what mem[R1+2] means, or how x3102 corresponds to x1441 and x3103 to x1482.
As far as I can tell, nothing is loaded into R2 at any point.
Where do x1441 and x1482 come from?
Can somebody explain how R2 ends up with x1482 in it?
Going over the machine language you posted:
The first instruction, LEA R1, 1, simply stores PC + 1 into R1. Since the PC will be x3100 at the time that instruction executes, x3101 is stored into R1.
The second instruction, LDR R2, R1, 2, takes R1's value, adds 2, loads from memory at the resulting address, and stores the value in R2. R1's value is x3101, and x3101 + 2 is x3103, so whatever is at address x3103 gets stored in R2. Since you posted that x3103 contains x1482, that is what gets stored in R2.
The phrasing mem[R1+2] means to load from memory at the address computed by taking R1's value and adding 2 to it.
From your edit, yeah the x1441 and x1482 appear to just be data.
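The two instructions that matter can be traced in a few lines of Python, using the memory layout from the question (the TRAP is modeled only as the point where execution stops):

```python
# Memory image from the question (addresses -> 16-bit words).
mem = {
    0x30FF: 0b1110001000000001,  # LEA R1, #1
    0x3100: 0b0110010001000010,  # LDR R2, R1, #2
    0x3101: 0xF025,              # TRAP x25 (HALT)
    0x3102: 0x1441,              # data
    0x3103: 0x1482,              # data
}

pc = 0x30FF
pc += 1           # the PC is incremented during fetch, so it holds x3100 here
r1 = pc + 1       # LEA R1, #1  -> R1 = x3101
pc += 1
r2 = mem[r1 + 2]  # LDR R2, R1, #2 -> R2 = mem[x3103] = x1482
# next instruction is TRAP x25, which halts

print(hex(r1))  # 0x3101
print(hex(r2))  # 0x1482
```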

decoding HID data

I am using an rs232 HID reader.
Its manual says that its output is
CCDDDDDDDDDDXX
where CC is reserved for HID
DDDDDDDDDD is the transponder (the card) data
XX is a checksum
The checksum is well explained and irrelevant here. About DDDDDDDDDD, it only says that valid values are 0000000000 to 1FFFFFFFFF, with no indication of how that converts to what is printed on the front face of the card.
I have 3 sample cards, sadly in a short range (edit: plus an extra one). Here I show them:
read from RS-232  shown on card
00000602031C27 00398
00000602031F2A 00399
0000060203202B 00400
00000601B535F1 55962 **new
Also, I have a DB with 1000 cards loaded (what is printed on the front), so I need the decode path from what I read on RS-232 to what is printed on the front.
Some values from the DB (I have seen the cards, but I have no physical access to them now):
55503
60237
00833
Thanks a lot to everyone.
Googling for the string "CCDDDDDDDDDDXX" turns up http://www.rfideas.com/downloads/SerialAppNote8.pdf, which seems to describe how to decode the numbers. I can't guarantee that it is accurate.
Decoding the Standard 26-bit Format
Message sent by the reader:
C C D D D D D D D D D D X X
---------------------------
0 0 0 0 0 6 0 2 0 3 1 C 2 7
0 0 0 0 0 6 0 2 0 3 1 F 2 A
0 0 0 0 0 6 0 2 0 3 2 0 2 B
0 0 0 0 0 6 0 1 B 5 3 5 F 1
Stripping off the checksum, X, and reducing the data to binary gives:
C C D D D D D D D D D D
cccc cccc zzzz zzzz zzzz zspf ffff fffn nnnn nnnn nnnn nnnp
-----------------------------------------------------------
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1100
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1111
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0010 0000
0000 0000 0000 0000 0000 0110 0000 0001 1011 0101 0011 0101
All the Card Data Characters to the left of the 7th can be ignored.
c = HID Specific Code.
z = leading zeros
s = start sentinel (it is always a 1)
p = parity odd and even (12 bits each).
f = Facility Code 8 bits
n = Card Number 16 bits
From this we can see that
00000602031C27 → n = 0b0000000110001110 = 398
00000602031F2A → n = 0b0000000110001111 = 399
0000060203202B → n = 0b0000000110010000 = 400
00000601B535F1 → n = 0b1101101010011010 = 55962
So, for your example, we would probably get:
55503
(f, n) = 0b0000_0001__1101_1000_1100_1111
odd parity of first 12 bits = 0
even parity of last 12 bits = 0
result = 00000403b19e56
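The bit layout above translates directly into a small decode helper. This is a sketch under the assumptions stated in the answer: the card number occupies the 16 bits just above the trailing parity bit, and the facility code the 8 bits above that (`decode` is a hypothetical name, not from the app note):

```python
def decode(message: str):
    # Drop the two-character checksum, then parse the rest as one integer.
    data = int(message[:-2], 16)
    card_number = (data >> 1) & 0xFFFF  # 16 bits above the trailing parity bit
    facility = (data >> 17) & 0xFF      # 8 facility-code bits
    return facility, card_number

print(decode("00000602031C27"))  # (1, 398)
print(decode("0000060203202B"))  # (1, 400)
```

Applied to the fourth sample, `decode("00000601B535F1")` yields card number 55962, matching what is printed on that card.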

Are these endian transformations correct?

I am struggling to figure this out. I am trying to represent a 32-bit variable in both big and little endian. For the sake of argument, let's say we try the number 666.
Big Endian: 0010 1001 1010 0000 0000 0000 0000
Little Endian: 0000 0000 0000 0000 0010 1001 1010
Is this correct, or is my thinking wrong here?
666 (decimal) as 32-bit binary is represented as:
[0000 0000] [0000 0000] [0000 0010] [1001 1010] (big endian, most significant byte first)
[1001 1010] [0000 0010] [0000 0000] [0000 0000] (little endian, least significant byte first)
Ref.
(I have used square brackets to group 4-bit nibbles into bytes)
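A quick way to check byte orders is Python's struct module, which can pack the same 32-bit value both ways:

```python
import struct

value = 666  # 0x0000029A

big = struct.pack(">I", value)     # big endian: most significant byte first
little = struct.pack("<I", value)  # little endian: least significant byte first

print(big.hex())     # 0000029a
print(little.hex())  # 9a020000
```

Note that endianness reorders whole bytes, not individual bits, which is why the groups above move around as 8-bit units.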
