Storage size of NFC Tag

I have an NXP NTAG216 and I want to know the storage size of the tag. From the specification I read that the size of the data area is given in Block 3, Byte 2, which is 7F in my case: 127 in decimal, and times 8 bytes that gives 1016 bytes. However, the NXP website states that the tag only has 924 bytes (NXP NTAG 213/215/216).
[0] 04 : 91 : 52 : 4F
[1] 9A : 9A : 40 : 80
[2] C0 : 48 : 00 : 00
[3] E1 : 10 : 7F : 00
It is similar with an NXP NTAG215, which has 3E: 62 in decimal, and 62 times 8 bytes is 496 bytes, whereas the website says 540 bytes.
[0] 04 : 34 : DB : 63
[1] 6A : 83 : 5C : 81
[2] 34 : 48 : 00 : 00
[3] E1 : 10 : 3E : 00
Can someone explain to me how this number is calculated?

If you read the datasheet for these cards, https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf, it says:
NTAG216 EEPROM:
924 bytes, organized in 231 pages of 4 bytes per page.
26 bytes reserved for manufacturer and configuration data
37 bits used for the read-only locking mechanism
4 bytes available as capability container
888 bytes user programmable read/write memory
From the NFC Forum spec for Type 2 tags, http://apps4android.org/nfc-specifications/NFCForum-TS-Type-2-Tag_1.1.pdf:
The Capability Container Byte 2 indicates the memory size of the data area of the Type 2 Tag Platform. The value of byte 2 multiplied by 8 is equal to the data area size measured in bytes.
Note: your question said to multiply by 8 bits (1 byte), which is wrong; I'm sure this was just a typo and you meant 8 bytes.
So some of the headline 924 bytes is actually reserved for other uses and would never be included in the size listed in the Capability Container; that leaves 888 bytes of user-usable memory.
The datasheet says the Capability Container value should be 6Dh, which is 109 * 8 = 872 bytes, and all my NTAG216 tags do have a value of 6Dh.
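As a quick sanity check, here is the arithmetic from the spec as a minimal Python sketch (the values are taken from the dumps above and the datasheet):

# Data area size per the Type 2 Tag spec: CC byte 2 multiplied by 8.
for label, cc_byte2 in [("NTAG216 (datasheet)", 0x6D),
                        ("NTAG216 (this card)", 0x7F),
                        ("NTAG215", 0x3E)]:
    print(f"{label}: {cc_byte2:#04x} -> {cc_byte2 * 8} bytes")
# NTAG216 (datasheet): 0x6d -> 872 bytes
# NTAG216 (this card): 0x7f -> 1016 bytes
# NTAG215: 0x3e -> 496 bytes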
I'm not sure why this Capability Container value is less than the actual usable memory size. It might be because the wording of the spec is unclear: "data area" might mean the usable data area of an NDEF message, excluding the mandatory headers (TLV, etc.) that make up a valid NDEF message and its records.
So the NTAG215 example value of 3Eh is exactly what the datasheet says it should be, and it is smaller than the total memory size because, as outlined above, some memory pages are reserved for non-NDEF data, etc.
The next question is why your NTAG216 example does not have the correct value of 6Dh.
The spec for Type 2 tags and the datasheet for the NTAG21x cards say:
The Capability Container CC (page 3) is programmed during the IC production according to the NFC Forum Type 2 Tag specification (see Ref. 2). These bytes may be bit-wise modified by a WRITE or COMPATIBILITY_WRITE command. The parameter bytes of the WRITE command and the current contents of the CC bytes are bit-wise OR’ed. The result is the new CC byte contents. This process is irreversible and once a bit is set to logic 1, it cannot be changed back to logic 0.
The 4 bytes of the Capability Container have to be writeable because you can only write in blocks of 4 bytes, and Byte 3 of the Capability Container indicates the read and write access capability of the data area and the CC area of the Type 2 Tag Platform; it can therefore be legitimately changed to make the card read-only.
So it is possible that a bad write to the Capability Container has increased the value of Byte 2 (the size value) to an invalid value on this particular card.
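To illustrate (the write value here is purely hypothetical), the OR semantics mean a stray write can only ever set bits, and a write that happens to set bits 1 and 4 would turn the correct 6Dh into exactly the 7Fh observed above:

# CC bytes are bit-wise OR'ed with the written value; bits can only be set.
cc_byte2 = 0x6D        # correct NTAG216 value (109 * 8 = 872 bytes)
stray_write = 0x12     # hypothetical bad write parameter (sets bits 1 and 4)
cc_byte2 |= stray_write
print(hex(cc_byte2))   # 0x7f -- the invalid value seen on this card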
The other possibility is a fake: there are a lot of fake NTAG21x cards around, and maybe this is a fake card that actually has more memory than a genuine NXP card.
Either use the originality signature method outlined in the datasheet to verify that it is genuine, or use NXP's TagInfo smartphone app, which will also verify this.

Websocket stumper

I am writing an embedded WebSocket server and working directly from the relevant RFC.
My server responds properly to the upgrade request from the browser, and the browser, in its example JavaScript, proceeds to send a short message through the newly established socket. So it's all working fine.
The message is short (the complete frame is only 21 bytes) and contains all the relevant fields, which my server happily decodes.
The stumper is in bits 9 to 15, which are supposed to contain the length of the payload.
Here is a hex dump of the message captured in Wireshark:
81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1
So, as you can see, the first byte contains FIN (1 bit), RSV1 (1 bit), RSV2 (1 bit), RSV3 (1 bit) and the 4 bits of the opcode. So far so good.
8f is the stumper: it contains the MASK bit and the payload length. The MASK bit is set to 1, which is fine, but the remaining 7 bits come out as 71 (0x47), when the entire frame is only 21 bytes long and the payload is only 15 bytes long.
So what am I doing wrong?
I can decode the message by applying the XOR mask to the payload, but the length is the problem, as it governs the decoding loop, which runs for 71 iterations instead of the 15 that it should.
I am grateful for any clue as to what I am doing wrong. Thanks.
The problem was that my structure did not take into account the endianness of the AMD64 processor! Sometimes the answer is right there ;-)
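For anyone hitting the same issue, here is a minimal Python sketch that decodes the captured frame byte by byte, which sidesteps struct bit-field ordering entirely:

# Decode the captured WebSocket frame from the question.
frame = bytes.fromhex("818f11abd50b5ccea67870ccb02b65c4f57874c5b1")

fin    = frame[0] >> 7          # 1: final fragment
opcode = frame[0] & 0x0F        # 1: text frame
masked = frame[1] >> 7          # 1: client-to-server frames are always masked
length = frame[1] & 0x7F        # 15, not 71: 0x8F & 0x7F == 0x0F

key = frame[2:6]                # 4-byte masking key
payload = bytes(b ^ key[i % 4] for i, b in enumerate(frame[6:6 + length]))
print(length, payload.decode()) # 15 Message to send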

How to communicate with FeliCa memory/smart cards?

I have a FeliCa card. The first question is: what actually is this card? Is it a smart card or a simple memory card? Is it a kind of Java Card, so that I can load .cap files onto it, or does it have fixed proprietary contents so that I can't load any applet? Is it GlobalPlatform compliant?
I read here that:
Sony’s proprietary FeliCa is a smartcard technology that is similar to ISO/IEC 14443. FeliCa has a file system similar to that defined in ISO/IEC 7816-4. The file system and commands for access to the file system are standardized in JIS X 6319-4 [28]. In addition, the FeliCa system has proprietary cryptography and security features.
After that I tried to send some APDU commands to it. The first step was to make some configuration changes to the reader, because my reader is configured to read ISO/IEC 14443 Type A and Type B cards and not FeliCa cards.
As both FeliCa and ISO/IEC 14443 cards use a 13.56 MHz carrier frequency, I think the difference between these types is in the protocol layer only. Am I right? If so, what is the name of the FeliCa cards' transmission protocol? (For ISO/IEC 14443 cards, we have the T=1 and T=CL protocols.)
After configuring the reader, I tried to send commands to card:
Connect successful.
Send: 00 A4 04 00 00
Recv: 6A 81
Time used: 31.000 ms
Send: 00 C0 00 00 00
Recv: 6A 81
Time used: 28.000 ms
Send: 00 CA 00 00 00
Recv: 6A 81
Time used: 35.000 ms
As you can see above, I receive only the 0x6A81 status word.
I also searched a lot of ACS reader datasheets, some NXP application notes and, of course, the JIS X 6319-4 standard for a list of commands for this type of card, but I found nothing applicable.
So, the questions are:
What actually is FeliCa? (Smart? Memory?)
What is the difference between FeliCa cards and ISO/IEC 14443 cards? Is it related to NFC?
How can I communicate with this card and transfer data?
Update:
My card's ATR is: 3b 8f 80 01 80 4f 0c a0 00 00 03 06 11 00 3b 00 00 00 00 42
What actually is FeliCa? (Smart? Memory?)
It's more like a memory card than a smart card with regard to functionality. Reading data in blocks is typical for a memory card, and the card has very limited functionality besides basic authentication based on symmetric cryptography.
You could argue that it is a smart card in the sense that the implementation seems to carry a multi-purpose CPU (see Annex B).
It seems impossible, however, to change the behavior of the card the way you would with e.g. a GlobalPlatform Java Card, so I'd classify it as a memory card with a proprietary protocol.
What is the difference between FeliCa cards and ISO/IEC 14443 cards? Is it related to NFC?
It uses a proprietary communication protocol which includes both the data-link layer (which you are asking about here) and the command/response layer.
How to communicate with this card and transfer data?
The fact that you are sending APDUs instead of FeliCa's proprietary command/response pairs indicates that you are using a translation layer. This translation layer is likely to be in the reader or the reader driver. The API of this translation layer is likely to be specified in the PC/SC 2.01 specifications (section 3.2.2.1, Storage Card Functionality Support, using CLA byte 0xFF).
You probably need the reader's user manual as well, if just to figure out in which location to store the required keys.
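If the reader does expose that translation layer, a minimal sketch with the pyscard Python library might look like the following (the block address and read length are placeholders; whether FF B0 works at all for FeliCa depends entirely on the reader):

# Read one block through the PC/SC 2.01 storage-card translation layer.
from smartcard.System import readers

reader = readers()[0]            # first attached PC/SC reader
conn = reader.createConnection()
conn.connect()

# PC/SC 2.01 Read Binary: CLA=FF INS=B0 P1-P2=block address Le=length
READ_BINARY = [0xFF, 0xB0, 0x00, 0x00, 0x10]
data, sw1, sw2 = conn.transmit(READ_BINARY)
print(data, "%02X %02X" % (sw1, sw2))  # 90 00 on success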

Calculating NTFS $MFT start cluster from bytes in boot sector

I'm wondering how Windows interprets the 8 bytes that define the starting cluster of the $MFT. Converting the 8 bytes to decimal using a calculator doesn't work; luckily WinHex has a calculator which displays the correct value, though I don't know how it's calculated:
http://imgur.com/4UEZvNy
The picture above shows the WinHex data interpreter displaying the correct amount. So my question is: how does '00 00 0C 00 00 00 00 00' equate to '786432'?
I saw another user had a similar question about the number of bytes per sector, which someone answered by saying it comes down to a byte being base 256. But that doesn't really help me figure it out. Any help would be really appreciated.
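For what it's worth, the "base 256" comment was pointing at byte order: the field is stored little-endian, i.e. the least significant byte comes first, so only the third byte (0x0C) contributes here: 0x0C * 256 * 256 = 786432. A quick Python check of that arithmetic:

# Interpret the 8 bytes as a little-endian unsigned integer.
raw = bytes.fromhex("00 00 0C 00 00 00 00 00")
print(int.from_bytes(raw, "little"))  # 786432 == 0x0C0000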

Understanding disassembler: See how many bytes are used for add

I disassembled a program (with objdump -d a.out) and now I would like to understand what the different sections in a line like
400586: 48 83 c4 08 add $0x8,%rsp
stand for. More specifically, I would like to know how you can see how many bytes are used for adding two registers. My idea was that the 0x8 in add $0x8,%rsp, which is 8 in decimal, gives me 2 * 4, so 2 bytes for adding 2 registers. Is that correct?
PS: compiler is gcc, OS is suse linux
In the second column you see 48 83 c4 08. Every two-digit hex number stands for one byte, so the instruction is four bytes long. The last 08 corresponds to $0x8; the other three bytes are the machine code for "add an 8-bit constant to RSP" (for pedantic editors: Intel writes its registers in upper case). It's quite difficult to deconstruct machine code by hand, but your assumption is completely wrong.
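If you want to check this programmatically rather than by hand, a small sketch using the Capstone disassembler (assuming its Python bindings are installed) reports the instruction size directly:

# Disassemble the four bytes and print the instruction length.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(bytes.fromhex("4883c408"), 0x400586):
    print(hex(insn.address), insn.size, insn.mnemonic, insn.op_str)
# 0x400586 4 add rsp, 8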

Error correction code for lost bit

Suppose we have to receive data from a sender as a chunk of bits (say 8 bits), but the transfer is unreliable and results in bit loss (not bit flips). That means any bit in the chunk can go missing, so the receiver would receive only 7 bits.
I have studied some error-correction coding, such as Hamming codes, but those are designed to recover flipped bits, not lost bits as in this situation.
If you are happy with a low-entropy code and you can detect an end-of-message, you can simply send each bit twice.
This code can recover from any number of deletions, as long as they happen in different runs:
On receiving, find all odd-sized runs and extend them by one bit. If you end up with fewer than the expected count of bits, multiple deletions hit the same run and you are unable to recover.
If you want to guarantee recovery up to a fixed error rate, use bit stuffing. (A worked sketch of the duplicate-and-repair scheme follows the examples below.)
Example:
0110
encoded as:
00 11 11 00
a double error occurs:
0x 11 x1 00
received:
011100
'0' and '111' are odd-sized runs. Fix:
00111100
we have 8 bits, and have recovered from a double error.
decode:
0110
Example 2:
0101
encoded as
00110011
transmitted as
0xxx0011
received as
00011
corrected as
000011
decoded as
001
which is shorter than expected. A transmission error has occurred.
Example 3 (bit stuffing after a run of 3 bits):
0000 1111
stuffed as
00010 11101
sent as
0000001100 1111110011
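Putting the duplicate-and-repair scheme from the first two examples into a minimal Python sketch (the function names are mine):

from itertools import groupby

def encode(bits):
    # Send each bit twice, so every run in the encoded stream is even-sized.
    return "".join(b * 2 for b in bits)

def repair_and_decode(received, expected_len):
    # A single deletion inside a run leaves that run odd-sized; extending
    # every odd-sized run by one bit undoes such a deletion.
    runs = ((b, len(list(g))) for b, g in groupby(received))
    repaired = "".join(b * (n + n % 2) for b, n in runs)
    if len(repaired) != 2 * expected_len:
        return None  # multiple deletions hit the same run: unrecoverable
    return repaired[::2]  # keep one bit of each pair

assert encode("0110") == "00111100"
assert repair_and_decode("011100", 4) == "0110"  # Example 1: double error fixed
assert repair_and_decode("00011", 4) is None     # Example 2: error detected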
