Calculating NTFS $MFT start cluster from bytes in boot sector - windows

I'm wondering how Windows interprets the 8 bytes that define the starting cluster for the $MFT. Converting the 8 bytes to decimal using a calculator doesn't work; luckily WinHex has a calculator which displays the correct value, though I don't know how it's calculated:
http://imgur.com/4UEZvNy
The picture above shows WinHex's data interpreter displaying the correct value. So my question is: how does '00 00 0C 00 00 00 00 00' equate to '786432'?
I saw another user had a similar question about the number of bytes per sector, and the answer there was that it comes down to a byte being base 256. But that doesn't really help me figure it out. Any help would be really appreciated.
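For what it's worth, the value falls out once the 8 bytes are read as a little-endian integer (least significant byte first), which is how NTFS stores multi-byte fields in the boot sector. A minimal Python sketch of that interpretation, using the bytes from the question (the variable name is just illustrative):

# The 8 bytes from the boot sector, exactly as shown in WinHex.
raw = bytes.fromhex("00 00 0C 00 00 00 00 00")

# NTFS stores multi-byte integers little-endian: the first byte is the least
# significant, so the number is 0x00000000000C0000 when read back-to-front.
mft_start_cluster = int.from_bytes(raw, byteorder="little")

print(mft_start_cluster)        # 786432
print(hex(mft_start_cluster))   # 0xc0000 -> 0x0C * 65536 = 12 * 65536 = 786432

That is the same "base 256" idea from the other answer: each byte is one base-256 digit, and the digits are stored lowest first.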

Related

How to detect tabular data in raw bytes

There are a number of incoming data blocks containing raw bytes with unknown structure, and the block size is known in advance. There is a need to detect whether a block contains some tabular data structure or not; in other words, whether the raw bytes are laid out as tabular data. Is there any known approach to detect this? Maybe applying an autocorrelation function helps, or something else?
Simple example:
0000:|0A 0B 01 02 03|0A 0B 04 05 06|0A 0B 07 08 09|0A
0010: 0B 0A 0B 0C|0A 0B 0D 0E 0F|0A 0B 10 11 12|0A 0B
0020: 13 14 15|0A 0B 16 17 18|0A 0B 19 1A 1B|0A 0B 1C
...
In this example, the incoming binary data block contains 5-byte records starting with an obvious marker, 0x0b0a. If the marker is known in advance, records can easily be extracted from the block. However, markers are not known, and records, if any, can have different formats and sizes.
I'm trying to solve the problem in general, so it doesn't matter whether a block matches any particular data format, e.g. CSV or a custom format. What I have found so far is a one-page article describing the idea. The idea works in general, but not for all data (at least it doesn't work for data with variable record length). So now I'm wondering whether there are implemented approaches, though I don't need a particular implementation; the question is more about existing solutions and sharing experience, if any.
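The autocorrelation idea mentioned in the question can at least be sketched for the fixed-record-length case: compare the block with a shifted copy of itself and look for lags at which an unusually large fraction of bytes match, which is exactly what repeating markers produce. A rough Python sketch (the function name, the max_lag cut-off and the 2x-mean threshold are just illustrative choices, not an established algorithm):

def byte_match_profile(data: bytes, max_lag: int) -> dict:
    """For each lag, the fraction of positions where data[i] == data[i + lag],
    compared over a fixed window so every lag gets the same number of samples."""
    window = len(data) - max_lag
    return {
        lag: sum(data[i] == data[i + lag] for i in range(window)) / window
        for lag in range(1, max_lag + 1)
    }

# The example block from the question: 5-byte records, each starting with 0A 0B.
block = bytes.fromhex(
    "0A 0B 01 02 03 0A 0B 04 05 06 0A 0B 07 08 09 0A"
    "0B 0A 0B 0C 0A 0B 0D 0E 0F 0A 0B 10 11 12 0A 0B"
    "13 14 15 0A 0B 16 17 18 0A 0B 19 1A 1B 0A 0B 1C"
)

profile = byte_match_profile(block, max_lag=16)
mean = sum(profile.values()) / len(profile)
peaks = [lag for lag, score in profile.items() if score > 2 * mean]
print(peaks)   # [5, 10, 15]: strong peaks at the record length and its multiples

A strong isolated peak (plus its multiples) suggests a fixed record length, with the smallest strong lag being the candidate record size; a flat profile suggests the block is not simple fixed-width tabular data. As the question already notes, this kind of approach breaks down for variable-length records.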

Storage size of NFC Tag

I have an NXP NTAG216 and I want to know the storage size of the tag. From the specification I read that the size of the data area is in Block 3, Byte 2, which is 7F in my case; that is 127 in decimal, and times 8 bytes it is 1016 bytes. However, the NXP website (NXP NTAG 213/215/216) states that the tag only has 924 bytes.
[0] 04 : 91 : 52 : 4F
[1] 9A : 9A : 40 : 80
[2] C0 : 48 : 00 : 00
[3] E1 : 10 : 7F : 00
It is similar with an NXP NTAG215, which has 3E (62 in decimal); times 8 bytes that is 496 bytes, where the website says 540 bytes.
[0] 04 : 34 : DB : 63
[1] 6A : 83 : 5C : 81
[2] 34 : 48 : 00 : 00
[3] E1 : 10 : 3E : 00
Can someone explain to me how this number is calculated?
If you read the datasheet for these cards, https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf, it says:
NTAG216 EEPROM:
924 bytes, organized in 231 pages of 4 byte per page.
26 bytes reserved for manufacturer and configuration data
37 bits used for the read-only locking mechanism
4 bytes available as capability container
888 bytes user programmable read/write memory
From the spec for Type 2 cards, http://apps4android.org/nfc-specifications/NFCForum-TS-Type-2-Tag_1.1.pdf:
The Capability Container Byte 2 indicates the memory size of the data area of the Type 2 Tag Platform. The value of byte 2 multiplied by 8 is equal to the data area size measured in bytes
Note: your question said to multiply by 8 bits (1 byte), which is wrong; I'm sure this was just a typo and you meant 8 bytes.
So some of the headline 924 bytes is actually reserved for other uses and would never be included in the size listed in the Capability Container; that leaves 888 bytes of user-usable memory.
The datasheet says the value in the Capability Container should be 6Dh, which is 109 * 8 = 872 bytes.
And all my NTAG216 tags have a value of 6Dh.
I'm not sure why this Capability Container value is less than the actual usable memory size. The wording of the spec is unclear, but it might refer to the "usable data area" for an NDEF message and not include the mandatory headers (TLV, etc.) that make up a valid NDEF message and its Records.
So the NTAG215 example value of 3Eh is exactly what the datasheet says it should be, and it is smaller than the total memory size because, as outlined above, some memory pages are reserved for non-NDEF data, etc.
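To make the arithmetic concrete, here is a small Python sketch (variable and function names are just illustrative) that takes the CC page from your two dumps and works out the advertised data area size, comparing it with the datasheet values quoted above:

# Capability Container page (block 3) from the two dumps above.
cc_ntag216 = bytes.fromhex("E1 10 7F 00")   # your NTAG216
cc_ntag215 = bytes.fromhex("E1 10 3E 00")   # your NTAG215

# Per the Type 2 Tag spec: data area size in bytes = CC byte 2 * 8.
def data_area_size(cc: bytes) -> int:
    return cc[2] * 8

# CC byte 2 values the datasheet quotes: 6Dh for NTAG216, 3Eh for NTAG215.
expected_byte2 = {"NTAG216": 0x6D, "NTAG215": 0x3E}

for name, cc in (("NTAG216", cc_ntag216), ("NTAG215", cc_ntag215)):
    size = data_area_size(cc)
    ok = "matches" if cc[2] == expected_byte2[name] else "does NOT match"
    print(f"{name}: CC byte 2 = {cc[2]:#04x} -> {size} bytes ({ok} the datasheet)")

# NTAG216: CC byte 2 = 0x7f -> 1016 bytes (does NOT match the datasheet)
# NTAG215: CC byte 2 = 0x3e -> 496 bytes (matches the datasheet)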
The next question is why your NTAG216 example does not have the correct value of 6Dh.
The spec for Type 2 cards and the datasheet for the NTAG21x cards say:
The Capability Container CC (page 3) is programmed during the IC production according to the NFC Forum Type 2 Tag specification (see Ref. 2). These bytes may be bit-wise modified by a WRITE or COMPATIBILITY_WRITE command. The parameter bytes of the WRITE command and the current contents of the CC bytes are bit-wise OR’ed. The result is the new CC byte contents. This process is irreversible and once a bit is set to logic 1, it cannot be changed back to logic 0.
The 4 bytes of the Capability Container have to be writeable because you can only write in blocks of 4 bytes, and Byte 3 of the Capability Container indicates the read and write access capability of the data area and CC area of the Type 2 Tag Platform; it can therefore be legitimately changed to make the card read-only.
So it is possible that a bad write to the Capability Container has increased the value of Byte 2 (the size value) to an invalid value on this particular card.
The other possibility is that there are a lot of fake NTAG21x cards around; maybe this is a fake card that actually has more memory than a genuine NXP card.
Either use the originality signature method outlined in the datasheet to verify that it is genuine, or use the TagInfo smartphone app from NXP, which will also verify that.

Websocket stumper

I am writing an embedded WebSocket server and working directly from the relevant RFC.
My server responds properly to the upgrade request from the browser, and the browser, in its example JavaScript, proceeds to send a short message through the newly established socket. So it's all working fine.
The message is short (complete frame is only 21 bytes) and contains all relevant fields which my server happily decodes.
The stumper is in bits 9 to 15, which are supposed to contain the length of the payload.
Here is a hex dump of the message captured in Wireshark:
81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1
So as you can see, the first byte contains FIN (1 bit), RSV1 (1 bit), RSV2 (1 bit), RSV3 (1 bit) and the 4 bits of the opcode. So far so good.
The second byte, 8f, is the stumper: it contains the MASK bit and the payload length. The MASK bit is set to 1, which is fine, but the remaining 7 bits come out as 71 (0x47), even though the entire frame is only 21 bytes long and the payload is only 15 bytes.
So what am I doing wrong?
I can decode the message by applying the XOR mask to the payload, but the length is the problem: it governs the decoding loop, which runs for 71 iterations instead of the 15 that it should.
I am grateful for any clue as to what I am doing wrong.
Thanks
The problem was that my structure did not take into account the endianness of the AMD64 processor!!! Sometimes the answer is right there ;-)
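For anyone who hits the same wall: reading the two header bytes individually and masking out the bits avoids any dependence on how the compiler lays out bit-fields. A minimal Python sketch of how the fields of the captured frame fall out (illustrative only, not the embedded C code in question):

# The 21-byte frame captured in Wireshark.
frame = bytes.fromhex("81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1")

b0, b1 = frame[0], frame[1]
fin     = (b0 >> 7) & 0x01      # 1
opcode  =  b0       & 0x0F      # 0x1 = text frame
masked  = (b1 >> 7) & 0x01      # 1: client-to-server frames are always masked
paylen  =  b1       & 0x7F      # 0x8f & 0x7f = 0x0f = 15, as expected

mask_key = frame[2:6]           # 11 ab d5 0b
payload  = frame[6:6 + paylen]

# RFC 6455 masking: XOR each payload byte with the key, cycling every 4 bytes.
unmasked = bytes(c ^ mask_key[i % 4] for i, c in enumerate(payload))

print(fin, hex(opcode), masked, paylen)   # 1 0x1 1 15
print(unmasked.decode("utf-8"))           # the browser's short test message

Read this way, the payload length comes out as 15, which is consistent with the bit-field/endianness explanation above.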

Understanding disassembler: See how many bytes are used for add

I disassembled a program (with objdump -d a.out) and now I would like to understand what the different parts of a line like
400586: 48 83 c4 08 add $0x8,%rsp
stand for. More specifically, I would like to know how you can see how many bytes are used for adding two registers. My idea was that the 0x8 in add $0x8,%rsp, which is 8 in decimal, gives me 2 * 4, so 2 bytes for adding 2 registers. Is that correct?
PS: compiler is gcc, OS is suse linux
In the second column you see 48 83 c4 08. Every two-digit hex number stands for one byte, so the instruction is four bytes long. The last 08 corresponds to $0x8; the other three bytes are the machine code for "add an 8-bit constant to RSP" (for pedantic editors: Intel writes its registers in upper case). It's quite difficult to deconstruct machine code by hand, but your assumption is completely wrong.
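If you want to check instruction lengths programmatically rather than by eye, a disassembler library will report them directly. A small sketch using the third-party capstone Python bindings (an assumption on my part; objdump already shows the same bytes):

# pip install capstone   (third-party disassembly library)
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

code = bytes.fromhex("48 83 c4 08")       # the bytes from the objdump line
md = Cs(CS_ARCH_X86, CS_MODE_64)

for insn in md.disasm(code, 0x400586):
    print(f"{insn.address:x}: {insn.bytes.hex(' ')}  "
          f"{insn.mnemonic} {insn.op_str}  ({insn.size} bytes)")

# 400586: 48 83 c4 08  add rsp, 8  (4 bytes)

In this particular encoding, 48 is the REX.W prefix (64-bit operand size), 83 is the opcode for add with an 8-bit immediate, c4 is the ModRM byte selecting RSP, and 08 is the immediate, so the operand value 0x8 has nothing to do with how many bytes the instruction occupies.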

What does the beginning of process memory mean

I am trying to learn more about how to read process memory. So I opened the "entire memory" of the Firefox process in WinHex and saw the following hex values starting at offset 0x10000.
00 00 00 00 00 00 00 00 EC 6B 3F 80 0C 6D 00 01 EE FF EE FF 01 00 00 00
My question is
Is it possible for a human to interpret this without further knowledge? Are these pointers or values? Is there anything that is common to different programs created with different compilers with regard to process memory, apart from things like endianness? Why does it start with lots of zeroes; isn't that a very odd way to start using space?
Obviously, you can't do anything "without further knowledge". But we already know a whole lot from the fact that it's Windows. For starters, we know that the executable gets its own view of memory, and in that virtual view the executable is loaded at its preferred starting address (as stated in the PE header of the EXE).
The start at 0x00010000 is a compatibility thing with MS-DOS (yes, that 16-bit OS) - the first 64 KB are reserved and are never valid addresses. The pages up to 0x00400000 (4 MB) are reserved for the OS, and in general differ between OS versions.
A common data structure in that range is the Process Environment Block (PEB). With the WinDbg tool and the Microsoft Symbol Server, you can figure out whether the Process Environment Block is indeed located at offset 0x10000, and what its contents mean.
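If WinDbg isn't handy, the same check can be scripted. Here is a rough sketch, my own addition rather than part of the answer above, that asks the kernel for the current process's PEB base address via NtQueryInformationProcess; to inspect another process such as Firefox you would first need a handle from OpenProcess with query rights:

# Windows only. Asks the kernel for the current process's PEB base address via
# NtQueryInformationProcess (information class 0 = ProcessBasicInformation).
import ctypes
from ctypes import wintypes

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("Reserved1",       ctypes.c_void_p),
        ("PebBaseAddress",  ctypes.c_void_p),
        ("Reserved2",       ctypes.c_void_p * 2),
        ("UniqueProcessId", ctypes.c_void_p),
        ("Reserved3",       ctypes.c_void_p),
    ]

ntdll = ctypes.WinDLL("ntdll")
pbi = PROCESS_BASIC_INFORMATION()
ret_len = wintypes.ULONG(0)
current_process = ctypes.c_void_p(-1)   # pseudo-handle meaning "this process"

status = ntdll.NtQueryInformationProcess(
    current_process,            # process handle
    0,                          # ProcessBasicInformation
    ctypes.byref(pbi),
    ctypes.sizeof(pbi),
    ctypes.byref(ret_len),
)

print(f"NTSTATUS: {status:#x}")
print(f"PEB base address: {(pbi.PebBaseAddress or 0):#x}")

Note that on modern Windows versions the PEB is usually relocated by ASLR, so it will not necessarily sit at a fixed low address like 0x10000.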
