I am using an rs232 HID reader.
Its manual says that its output is
CCDDDDDDDDDDXX
where CC is reserved for HID
DDDDDDDDDD is the transponder (the card) data
XX is a checksum
The checksum is well explained and irrelevant here. About DDDDDDDDDD it only says that valid values are 0000000000 to 1FFFFFFFFF, with no indication of how they convert to what is printed on the front face of the card.
I have 3 sample cards, sadly in a narrow range (edit: plus an extra one). Here I show them:
read from rs232   shown on card
00000602031C27 00398
00000602031F2A 00399
0000060203202B 00400
00000601B535F1 55962 **new
Also I have a DB with 1000 cards loaded (what is printed on the front), so I need the decode path from what I read on rs232 to what is printed on the front.
Some values from the DB (I have seen the cards, but I have no physical access to them now):
55503
60237
00833
Thanks a lot to everyone.
Googling for the string "CCDDDDDDDDDDXX" returns http://www.rfideas.com/downloads/SerialAppNote8.pdf, which seems to describe how to decode the numbers. I can't guarantee that it is accurate.
Decoding the Standard 26-bit Format
Message sent by the reader:
C C D D D D D D D D D D X X
---------------------------
0 0 0 0 0 6 0 2 0 3 1 C 2 7
0 0 0 0 0 6 0 2 0 3 1 F 2 A
0 0 0 0 0 6 0 2 0 3 2 0 2 B
0 0 0 0 0 6 0 1 B 5 3 5 F 1
Stripping off the checksum, XX, and converting the data to binary gives:
C    C    D    D    D    D    D    D    D    D    D    D
cccc cccc zzzz zzzz zzzz zspf ffff fffn nnnn nnnn nnnn nnnp
-----------------------------------------------------------
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1100
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1111
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0010 0000
0000 0000 0000 0000 0000 0110 0000 0001 1011 0101 0011 0101
All the Card Data Characters to the left of the 7th (counting from the right) can be ignored.
c = HID Specific Code.
z = leading zeros
s = start sentinel (it is always a 1)
p = parity bits: the leading one is even parity over the first 12 data bits, the trailing one is odd parity over the last 12
f = Facility Code, 8 bits
n = Card Number, 16 bits
From this we can see that
00000602031C27 → n = 0b0000000110001110 = 398
00000602031F2A → n = 0b0000000110001111 = 399
0000060203202B → n = 0b0000000110010000 = 400
00000601B535F1 → n = 0b1101101010011010 = 55962
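A minimal Python sketch of that decode path, following the bit layout above (decode_hid is just an illustrative name; the two checksum characters are simply dropped):

def decode_hid(reader_hex):
    # Drop the 2-character checksum XX and take CC + DDDDDDDDDD as one integer.
    value = int(reader_hex[:12], 16)
    card_number   = (value >> 1) & 0xFFFF    # the 16 n bits, skipping the trailing parity bit
    facility_code = (value >> 17) & 0xFF     # the 8 f bits above them
    return facility_code, card_number

for sample in ("00000602031C27", "00000602031F2A",
               "0000060203202B", "00000601B535F1"):
    print(sample, decode_hid(sample))
# -> facility codes 1, 1, 1, 0 and card numbers 398, 399, 400, 55962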
So, for your example value 55503, assuming the same facility code of 1, we would probably get:
55503
(f, n) = 0b0000_0001__1101_1000_1100_1111
even parity of first 12 bits = 0
odd parity of last 12 bits = 0
result = 00000403b19e56
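Going the other way, here is a hedged sketch that rebuilds the full reader string from a printed number. It assumes facility code 1 (as above), the parity convention seen in the samples (even parity over the first 12 data bits, odd over the last 12), and that the checksum is the low byte of the sum of the six data bytes; that checksum rule matches the three samples but should be checked against the manual:

def encode_hid(card_number, facility_code=1):
    data24  = (facility_code << 16) | card_number        # 8 f bits + 16 n bits
    first12 = data24 >> 12
    last12  = data24 & 0xFFF
    p_even  = bin(first12).count("1") & 1                # even parity over the first 12 bits
    p_odd   = (bin(last12).count("1") & 1) ^ 1           # odd parity over the last 12 bits
    value   = (1 << 26) | (p_even << 25) | (data24 << 1) | p_odd   # start sentinel + parity bits
    body    = "%012X" % value                            # CC + DDDDDDDDDD
    checksum = sum(int(body[i:i + 2], 16) for i in range(0, 12, 2)) & 0xFF   # assumed rule
    return body + "%02X" % checksum

print(encode_hid(398))    # 00000602031C27 (matches the first sample)
print(encode_hid(55503))  # 00000403B19E56 (the value worked out above)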
If I run $((0x100 - 0 & 0xff)), I get 0.
However, $((0x100 - 0)) gives me 256.
Why does the result from the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it looks at the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you'll get
0010 (binary) or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, it does the subtraction before it does the bitwise AND operation (I expected it to be the other way around):
$((0x100 - 1 & 0xff)) gives 255, or 0xff, because 0x100 - 1 = 0xff
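The precedence rule (arithmetic before bitwise AND) comes from C, and Python follows the same relative precedence, so you can sanity-check the shell results there; explicit parentheses make the intent obvious either way:

print((0x100 - 0) & 0xff)   # 0   -> same as $((0x100 - 0 & 0xff))
print(0x100 - 0)            # 256 -> same as $((0x100 - 0))
print((0x100 - 1) & 0xff)   # 255 -> the subtraction happens first, then 0xff & 0xff = 0xff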
This is the documentation for the Windows .lnk shortcut format:
https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-shllink/16cb4ca1-9339-4d0c-a68d-bf1d6cc0f943
The spec describes the ShellLinkHeader structure, and I am comparing it against the bytes of a sample .lnk file.
Looking at HeaderSize, the bytes are 4c 00 00 00 and it's supposed to mean 76 decimal. This is a little-endian integer, no surprise here.
Next is the LinkCLSID with the bytes 01 14 02 00 00 00 00 00 c0 00 00 00 00 00 00 46, representing the value "00021401-0000-0000-C000-000000000046". This answer seems to explain why the byte order changes: the last 8 bytes are a byte array, while the earlier fields are little-endian numbers.
My question is about the LinkFlags part.
The LinkFlags part is documented as a diagram of 32 single-bit flags, lettered A, B, C, and so on.
And the bytes in my file are 9b 00 08 00, or in binary:
9    b    0    0    0    8    0    0
1001 1011 0000 0000 0000 1000 0000 0000
^
By comparing different files I found out that the bit marked with ^ is bit 6/G in the documentation (marked in red).
How to interpret this? The bytes are in the same order as in the documentation but each byte has its bits reversed?
The issue here springs from the fact that the list of bits shown in these specs is not meant to have a number fitted underneath it at all. It is meant to fit a list of bits underneath it, and that list runs from the lowest bit to the highest bit, which is the complete inverse of how we read numbers from left to right.
The list clearly shows bits numbered from 0 to 31, though, meaning this is indeed one 32-bit value, and not four bytes. Specifically, this means the original read bytes need to be interpreted as a single 32-bit integer before doing anything else. Like all the other values, it needs to be read as a little-endian number, with its bytes reversed.
So your 9b 00 08 00 becomes 0008009b, or, in binary, 0000 0000 0000 1000 0000 0000 1001 1011.
But, as I said, that list in the specs shows the bits from lowest to highest. So to fit them under that, reverse the binary version:
0           1            2           3
0123 4567 8901 2345 6789 0123 4567 8901
ABCD EFGH IJKL MNOP QRST UVWX YZ#_ ____
---------------------------------------
1101 1001 0000 0000 0001 0000 0000 0000
       ^
So bit 6, indicated in the specs as 'G', is 0.
This whole thing makes a lot more sense if you invert the specs, though, and list the bits logically, from highest to lowest:
 3           2            1            0
1098 7654 3210 9876 5432 1098 7654 3210
____ _#ZY XWVU TSRQ PONM LKJI HGFE DCBA
---------------------------------------
0000 0000 0000 1000 0000 0000 1001 1011
^
0    0    0    8    0    0    9    b
This makes the alphabetic references look a lot less intuitive, but it does perfectly fit the numeric versions underneath. The bit matches your findings (third bit on what you have as value '9'), and you can also clearly see that the highest 5 bits are unused.
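In code the whole thing boils down to one little-endian read plus bit tests. A small Python sketch of that reading, with the flag letters mapped to bit indices as in the listing above (A = bit 0, G = bit 6):

import struct

raw = bytes.fromhex("9b000800")          # the four LinkFlags bytes as they appear in the file
(flags,) = struct.unpack("<I", raw)      # one little-endian 32-bit integer
print(hex(flags))                        # 0x8009b, i.e. 0008009b

def flag(bit_index):
    return (flags >> bit_index) & 1

print(flag(0), flag(1), flag(6), flag(19))   # A=1, B=1, G=0, and bit 19 is the bit set by the 08 byte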
I know that you can bitmask by ANDing a value with 0. However, how can I mask out certain nibbles while keeping others? In other words, if I have 0x000f0b7c and I want to mask everything but the b (so my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Suppose you have two binary digits a and b, you can build the following "truth table":
a | b | a AND b
---+---+---------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
   f    0    b    7    c   (hex)
1111 0000 1011 0111 1100   (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11) you can mask it this way:
   f    0    b    7    c
1111 0000 1011 0111 1100   (value)
0000 0000 1111 0000 0000   (mask)
The value 0000 0000 1111 0000 0000 can be converted to hex and has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
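In Python (or C) that is a single AND with the mask; if you also want the nibble as a small number on its own, shift it down afterwards:

value = 0x000f0b7c
mask  = 0x00000f00           # 1s over the nibble to keep, 0s everywhere else

kept   = value & mask        # 0xb00 - every other nibble is cleared
nibble = kept >> 8           # 0xb   - optional: move it down to bits 0..3

print(hex(kept), hex(nibble))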
I'm doing some reverse engineering on a simple crackme app, and I'm debugging it with OllyDbg.
I'm stuck on the behavior of the AND instruction with the operand 0x0FF. I thought it was equivalent to something like if(... == true) in C++.
So what's confusing me is this. Before the instruction:
ECX = CCCCCC01
ZF  = 1

AND ECX, 0FF

After the instruction:
ECX = 00000001
ZF  = 0
I thought ZF should be set. I don't know why the result in the ECX register is 1 and why ZF isn't set.
My understanding of AND is: 1 AND 1 = 1 (same operands), otherwise 0.
Can someone explain this to me? Thanks for the help.
It's a bit-wise AND, so in binary you have
1100 1100 1100 1100 1100 1100 0000 0001
AND 0000 0000 0000 0000 0000 0000 1111 1111
----------------------------------------
0000 0000 0000 0000 0000 0000 0000 0001

Only the lowest bit of ECX survives the mask, so ECX ends up as 1. And because that result is not zero, the CPU clears ZF: the zero flag is set only when an instruction's result is exactly zero.
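You can mirror what AND ECX, 0FF does, including the zero-flag rule, outside the debugger; here is a small Python sketch:

ecx = 0xCCCCCC01

ecx &= 0xFF                   # AND ECX, 0FF: keep only the low 8 bits
zf  = 1 if ecx == 0 else 0    # ZF is set only when the result is zero

print(hex(ecx), zf)           # 0x1 0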
I compressed the text "TestingTesting" and the hex result was: 0B 49 2D 2E C9 CC 4B 0F 81 50 00. I can't figure out the length and distance codes. The binary below is reversed because the RFC says to read the bits from right to left (thanks Matthew Slattery for the help). Here is what was parsed so far:
1 BFINAL (last block)
01 BTYPE (static)
1000 0100 132-48= 84 T
1001 0101 149-48= 101 e
1010 0011 163-48= 115 s
1010 0100 164-48= 116 t
1001 1001 153-48= 105 i
1001 1110 158-48= 110 n
1001 0111 151-48= 103 g
These are the remaining bits that I don't know how to parse:
1000 0100 0000 1000 0101 0000 0000 0
The final 10 bits (the end-of-block value is 0x100) are the only part I can parse. I think the length and distance values should be 7 (binary 0111), since the length of "Testing" is 7 letters and it gets copied 7 characters after the current position, but I can't figure out how it's representing this in the remaining bits. What am I doing wrong?
The distance code is 5, but a distance code of 5 is followed by one "extra bit" to indicate an actual distance of either 7 or 8. (See the second table in paragraph 3.2.5 of the RFC.)
The complete decoding of the data is:
1 BFINAL
01 BTYPE=static
10000100 'T'
10010101 'e'
10100011 's'
10100100 't'
10011001 'i'
10011110 'n'
10010111 'g'
10000100 another 'T'
0000100 literal/length code 260 = length 6
00101 distance code 5
0 extra bit => the distance is 7
0000000 literal/length code 256 = end of block
0       padding to the next byte boundary
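You can confirm that the byte sequence really is a raw DEFLATE stream for that text using Python's zlib; the second argument of -15 selects a raw stream with no zlib header or trailer:

import zlib

raw = bytes.fromhex("0B492D2EC9CC4B0F815000")
print(zlib.decompress(raw, -15))   # b'TestingTesting'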