I have an NXP NTAG216 and I want to know the storage size of the tag. From the specification I read that the size of the data area is in Block 3, Byte 2, which is 7F in my case; that is 127 in decimal, and times 8 bytes gives 1016 bytes. However, the NXP website states that the tag only has 924 bytes (NXP NTAG 213/215/216).
[0] 04 : 91 : 52 : 4F
[1] 9A : 9A : 40 : 80
[2] C0 : 48 : 00 : 00
[3] E1 : 10 : 7F : 00
It is similar with an NXP NTAG215, which has 3E, i.e. 62 in decimal; 62 times 8 bytes is 496 bytes, whereas the website says 540 bytes.
[0] 04 : 34 : DB : 63
[1] 6A : 83 : 5C : 81
[2] 34 : 48 : 00 : 00
[3] E1 : 10 : 3E : 00
Can someone explain to me how this number is calculated?
If you read the datasheet for these cards, https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf, it says:
NTAG216 EEPROM:
924 bytes, organized in 231 pages of 4 byte per page.
26 bytes reserved for manufacturer and configuration data
37 bits used for the read-only locking mechanism
4 bytes available as capability container
888 bytes user programmable read/write memory
From the spec for Type 2 cards, http://apps4android.org/nfc-specifications/NFCForum-TS-Type-2-Tag_1.1.pdf:
The Capability Container Byte 2 indicates the memory size of the data area of the Type 2 Tag Platform. The value of byte 2 multiplied by 8 is equal to the data area size measured in bytes
Note: your question originally said to multiply by 8 bits (1 byte), which is wrong; I'm sure this was just a typo and you meant 8 bytes.
So some of the headline 924 bytes are actually reserved for other uses and would never be included in the size listed in the Capability Container; that leaves 888 bytes of user-usable memory.
The datasheet says the value of this byte should be 6Dh, which is 109 * 8 = 872 bytes.
And all of my NTAG216 tags have a value of 6Dh.
I'm not sure why the Capability Container value is less than the actual usable memory size. It might be because the wording of the spec is unclear, but it may be that the "usable data area" refers to the NDEF message payload and does not include the mandatory headers (TLV, etc.) that make up a valid NDEF message and its records.
So the NTAG215 example value of 3Eh is exactly what the datasheet says it should be, and it is smaller than the total memory size because, as outlined above, some memory pages are reserved for non-NDEF data, etc.
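To make the calculation concrete, here is a minimal sketch (not NXP's API; the page-3 bytes are taken from your dumps and the datasheet) of deriving the data-area size from Capability Container byte 2:

#include <stdint.h>
#include <stdio.h>

/* Type 2 Tag spec: data area size in bytes = CC byte 2 * 8.
 * The CC is the 4 bytes of page 3 (E1 10 xx yy). */
static unsigned data_area_bytes(const uint8_t cc_page3[4])
{
    return (unsigned)cc_page3[2] * 8;
}

int main(void)
{
    const uint8_t ntag215_cc[4] = { 0xE1, 0x10, 0x3E, 0x00 };  /* from the question */
    const uint8_t ntag216_cc[4] = { 0xE1, 0x10, 0x6D, 0x00 };  /* datasheet value   */

    printf("NTAG215: %u bytes\n", data_area_bytes(ntag215_cc));  /* 62  * 8 = 496 */
    printf("NTAG216: %u bytes\n", data_area_bytes(ntag216_cc));  /* 109 * 8 = 872 */
    return 0;
}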
The next question is why your NTAG216 example does not have the expected value of 6Dh.
The datasheet for the NTAG21x cards, in line with the spec for Type 2 cards, says:
The Capability Container CC (page 3) is programmed during the IC production according to the NFC Forum Type 2 Tag specification (see Ref. 2). These bytes may be bit-wise modified by a WRITE or COMPATIBILITY_WRITE command. The parameter bytes of the WRITE command and the current contents of the CC bytes are bit-wise OR’ed. The result is the new CC byte contents. This process is irreversible and once a bit is set to logic 1, it cannot be changed back to logic 0.
The 4 bytes of the Capability Container have to be writable because you can only write in blocks of 4 bytes, and Byte 3 of the Capability Container indicates the read and write access capability of the data area and CC area of the Type 2 Tag Platform; it can therefore be legitimately changed to make the card read-only.
So it is possible that a bad write to the Capability Container has increased the value of Byte 2 (the size value) to an invalid value on this particular card.
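Purely as an illustration of how that could happen: the expected value 6Dh is 0110 1101 in binary; if a stray write ever OR'ed in, say, 12h (0001 0010), the result would be 0111 1111 = 7Fh, exactly the value your tag reports, and because the OR is irreversible those extra bits can never be cleared again.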
The other possibility is that there are a lot of fake NTAG21x cards around; maybe this is a fake card that actually has more memory than a genuine NXP card.
Either use the originality signature method outlined in the datasheet to verify it is genuine, or use the TagInfo smartphone app from NXP, which will also verify that it is genuine.
I am writing an embedded WebSocket server and working directly from the relevant RFC.
My server responds properly to the upgrade request from the browser, and the browser, in its example JavaScript, proceeds to send a short message through the newly established socket, so that part is all working fine.
The message is short (complete frame is only 21 bytes) and contains all relevant fields which my server happily decodes.
The stumper is bits 9 to 15, which are supposed to contain the length of the payload.
Here is a hex dump of the message captured in Wireshark:
81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1
As you can see, the first byte contains FIN (1 bit), RSV1 (1 bit), RSV2 (1 bit), RSV3 (1 bit) and the 4 bits of the opcode. So far so good.
8f is the stumper: it contains the MASK bit and the payload length. The MASK bit is set to 1, which is fine, but the remaining 7 bits come out as 71 (0x47), even though the entire frame is only 21 bytes long and the payload is only 15 bytes.
So what am I doing wrong?
I can decode the message by applying the XOR mask to the payload, but the length is the problem, since it governs the decoding loop, which runs for 71 iterations instead of the 15 it should.
I am grateful for any clue as to what I am doing wrong.
Thanks
The problem was that my structure did not take into account the endianness of the AMD64 processor!!! Sometimes the answer is right there ;-)
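For anyone hitting the same thing, here is a minimal sketch (my own, not from the RFC) of decoding the first two header bytes with explicit shifts and masks, which avoids depending on how the compiler lays out bit-fields on a given architecture:

#include <stdint.h>

/* Decode the first two header bytes of a WebSocket frame (RFC 6455) with
 * explicit shifts and masks, so the result does not depend on the host's
 * bit-field layout or byte order. */
typedef struct {
    uint8_t fin;         /* 1 bit  */
    uint8_t opcode;      /* 4 bits */
    uint8_t masked;      /* 1 bit  */
    uint8_t payload_len; /* 7 bits; 126/127 signal an extended length field */
} ws_basic_header;

static ws_basic_header ws_parse_basic_header(const uint8_t buf[2])
{
    ws_basic_header h;
    h.fin         = (uint8_t)((buf[0] >> 7) & 0x01);
    h.opcode      = (uint8_t)( buf[0]       & 0x0F);
    h.masked      = (uint8_t)((buf[1] >> 7) & 0x01);
    h.payload_len = (uint8_t)( buf[1]       & 0x7F);
    return h;
}

Applied to the captured bytes 81 8f, this yields fin = 1, opcode = 1 (text frame), masked = 1 and payload_len = 0x0F = 15, which matches the 15-byte payload.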
I have the number 317 saved in 2 bytes (00000001 00111101) and it should be transferred via SPI (serial) to a slave device.
The device expects two bytes, B11 and B12, in a certain order:
"The highest bits of the data words are send first and the lowest bits
are send last, that is, byte B11 is the highest byte and byte B12 is
the lowest byte."
My question is, what exactly do they mean? Am I supposed to reverse the bits within each byte (10000000 10111100), or to swap the bytes AND reverse the bits (10111100 10000000)?
Neither of those: you never reverse any bits. At most you swap the byte order:
00000001 00111101
->
00111101 00000001
This is known as endianness (which should help you find related questions)
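As a minimal sketch (spi_send_byte is a hypothetical stand-in for whatever byte-transfer call your SPI driver provides), sending the high byte first can be done with shifts, which behaves the same regardless of the host's endianness:

#include <stdint.h>

/* Hypothetical driver hook -- substitute your SPI library's byte-transfer call. */
extern void spi_send_byte(uint8_t b);

/* Send a 16-bit value high byte (B11) first, low byte (B12) last. */
void spi_send_u16_msb_first(uint16_t value)
{
    spi_send_byte((uint8_t)(value >> 8));    /* B11: 0x01 for 317 */
    spi_send_byte((uint8_t)(value & 0xFF));  /* B12: 0x3D for 317 */
}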
I'm wondering how Windows interprets the 8 bytes that define the starting cluster for the $MFT. Converting the 8 bytes to decimal using a calculator doesn't work; luckily WinHex has a calculator which displays the correct value, though I don't know how it's calculated:
http://imgur.com/4UEZvNy
The picture above shows the WinHex data interpreter displaying the correct amount. So my question is, how does '00 00 0C 00 00 00 00 00' equate to 786432?
I saw another user had a similar question about the number of bytes per sector, which someone answered by saying it comes down to a byte being base 256. But that doesn't really help me figure it out. Any help would be really appreciated.
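To illustrate that base-256 idea, here is a minimal sketch, assuming the field is stored least significant byte first (little-endian), as NTFS on-disk fields are:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The 8 bytes as they appear on disk, lowest-weight byte first. */
    const uint8_t raw[8] = { 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x00 };

    uint64_t value = 0;
    for (int i = 7; i >= 0; i--)
        value = (value << 8) | raw[i];   /* byte i is weighted 256^i */

    /* 0x0C sits at offset 2, so value = 12 * 256^2 = 12 * 65536 = 786432 */
    printf("%llu\n", (unsigned long long)value);
    return 0;
}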
How does ECC (error correction codes) for burst error correction work (disk drive style)?
It is either a curse or a blessing, but my brain often tries to solve technical problems in my dreams, and sometimes it succeeds. Last night, my brain demanded to understand how to design an ECC algorithm (a software program, but possibly FPGA circuitry eventually) to implement the kind of ECC appropriate for disk drives. The kind of ECC appropriate for these devices appears to be burst error correction.
As I understand it, the reason disk drives have errors is imperfections on the disk surface (specks or scratches). When the head is reading data bits and passes over a narrow scratch, the circuitry generates a random mix of correct and erroneous bit values over a "burst" of perhaps 1 to 64-ish bits. Therefore, as I understand it, the goal of disk drive ECC is to be able to correct all erroneous bits in any one random burst of errors.
BTW, I don't naturally "think in math", so please don't point me to math papers! I already spent a couple of hours trying to read through Wikipedia about Reed-Solomon and various other schemes, but the math in those articles is utterly incomprehensible to me (unless I spend a few weeks studying them... if I'm lucky). Besides, from the text, I don't think any of those schemes apply to disk drives (but maybe CDs / DVDs).
Anyway, I'll describe what my brain dreamed up in my sleep, and ask anyone to explain how this kind of ECC actually should be done and how much better the conventional approaches are. I'm sure my scheme must be less efficient than a technique devised by someone who knows what they are doing, and maybe even designed while they were awake! Before I woke up, I was trying to figure out how to handle two bursts per track, but woke up defeated. So I also ask how to achieve that.
My mental image was a 4096-byte sector, which I mentally broke up into 512 chunks of 64 bits each (since I'm used to thinking in 64-bit chunks, and because I'm guessing 64-bit bursts of errors are sufficient for practical purposes). In my application each data stream will definitely be 4096 to 8192 bytes.
My approach is to compute ten 64-bit ECC codes from the 4096 bytes of data. So the ECC my scheme would write after the last of the 4096 bytes of data would be ten 64-bit codes == 80 bytes, which is just short of 2% overhead. I'll call those ten 64-bit ECC codes "code0" to "code9", each of which starts out cleared to zero before each sector is processed. And every 64-bit (8-byte) sequence of data I will call a "chunk" for lack of a better term. (A short code sketch of this computation follows the list below.)
code9 = XOR chunks 000 to 511 == 000 to 1FF : every chunk
code8 = XOR chunks 256 to 511 == 100 to 1FF : every chunk # with bit #8 == 1
code7 = XOR chunks 128 to 255 == 080 to 0FF : every chunk # with bit #7 == 1
and chunks 384 to 511 == 180 to 1FF
code6 = XOR chunks 064 to 127 == 040 to 07F : every chunk # with bit #6 == 1
and chunks 192 to 255 == 0C0 to 0FF
and chunks 320 to 383 == 140 to 17F
and chunks 448 to 511 == 1C0 to 1FF
code5 = XOR chunks 032 to 063 == 020 to 03F : every chunk # with bit #5 == 1
and chunks 096 to 127 == 060 to 07F
and chunks 160 to 191 == 0A0 to 0BF
and chunks 224 to 255 == 0E0 to 0FF
and chunks 288 to 319 == 120 to 13F
and chunks 352 to 383 == 160 to 17F
and chunks 416 to 447 == 1A0 to 1BF
and chunks 480 to 511 == 1E0 to 1FF
code4 = XOR chunks 016 to 031 == 010 to 01F : every chunk # with bit #4 == 1
and chunks 048 to 063 == 030 to 03F
and chunks 080 to 095 == 050 to 05F
and so forth
code3 = XOR chunks 008 to 015 == 008 to 00F : every chunk # with bit #3 == 1
and chunks 024 to 031 == 018 to 01F
and chunks 040 to 047 == 028 to 02F
and so forth
code2 = XOR chunks 004 to 007 == 004 to 007 : every chunk # with bit #2 == 1
and chunks 012 to 015 == 00C to 00F
and chunks 020 to 023 == 014 to 017
and so forth
code1 = XOR chunks 002 to 003 == 002 to 003 : every chunk # with bit #1 == 1
and chunks 006 to 007 == 006 to 007
and chunks 010 to 011 == 00A to 00B
and so forth
code0 = XOR chunks 001 to 001 == 001 to 001 : every chunk # with bit #0 == 1
and chunks 003 to 003 == 003 to 003
and chunks 005 to 005 == 005 to 005
and so forth
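For concreteness, here is a minimal sketch in C of the encoding step as I read the description above (my own sketch, not a known standard routine): code9 is the parity of all 512 chunks, and code b (for b = 0..8) is the parity of every chunk whose index has bit b set.

#include <stdint.h>
#include <string.h>

#define CHUNKS_PER_SECTOR 512  /* 4096-byte sector / 8 bytes per chunk */

/* code[0..8] cover the chunks whose index has the corresponding bit set;
 * code[9] is the XOR (parity) of every chunk in the sector. */
void ecc_encode(const uint64_t data[CHUNKS_PER_SECTOR], uint64_t code[10])
{
    memset(code, 0, 10 * sizeof(uint64_t));
    for (uint32_t i = 0; i < CHUNKS_PER_SECTOR; i++) {
        code[9] ^= data[i];                 /* every chunk */
        for (uint32_t b = 0; b < 9; b++)
            if (i & (1u << b))
                code[b] ^= data[i];         /* chunks with address bit b set */
    }
}

In effect, each of the 64 bit lanes gets its own Hamming-style set of address-bit parities plus an overall parity bit.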
Okay, I should explain the purpose of this approach. The ECC produced by the algorithm must somehow encode the following information:
#1: What is the correct state of every bit (all 4KB == 32Kb)?
#2: Where in the 4KB == 32Kb stream did the error burst occur?
Now I'll try to explain why my dream (?nightmare?) believed these ten 64-bit ECC codes can detect any one burst of error bits up to 64-bits long anywhere in the 4KB == 32Kb stream.
Let's start slow and consider a simple example. Let's assume that when the disk drive read back a sector, bit #0 in one of the 512 "chunks" was wrong == inverted.
Does the ECC code9 tell us anything? Well, code9 is an XOR of every 64-bit chunk in the sector. Therefore, bit #0 of code9 is the parity of bit #0 of every 64-bit chunk of data written to the sector. Therefore, when we read the sector back, an error in bit #0 of ANY one 64-bit chunk of data will generate an error we can detect with the 64-bit code9 alone (no need for code8, code7... code0). If bit #0 of any 64-bit chunk of data is incorrect, then bit #0 of code9 in the ECC read back from disk will not agree with bit #0 of code9 we compute from the read-back data!
Nice! We detected an error in bit #0 of some 64-bit chunk with only code9. However, we have no idea which of the 512 chunks of data contains an error in its bit #0.
That's what the other eight ECC codes are for (in a manner of speaking).
The other eight ECC codes let us "narrow down" where this error is.
So we ask ourselves, what can code8 tell us? Well, that's totally obvious! code8 only considers chunks 256-511 (last half of the sector), so if the bit #0 error is anywhere in chunks 000-255 (first half of sector), code8 will not detect any error. But wait! If we know the error in bit #0 is NOT in chunks 256-511, then it MUST BE somewhere in chunks 000-255 (first half of sector). So now we know the error is somewhere in chunk 000-255, and not in chunk 256-511. Excellent!
Now we ask ourselves, what can code7 tell us? Well, of the region we are interested in (chunks 000-255), code7 only checks chunks 128-255. So if bit #0 of the code7 ECC we read back from disk differs from the code7 ECC we compute from the read data, we know that bit #0 error is somewhere in chunk 128-255. Sweet! Again we cut the possible location of the error to half the range.
Now what can code6 tell us? How this works is becoming obvious. As before, code6 only detects errors in half of the region we know the error is in. Of the region we narrowed the error down to (chunks 128-255), code6 only checks the second half (chunks 192-255). So when we find no error in bit #0 of code6, we know the bit #0 error is not in chunks 192-255, and thus must be somewhere in chunk 128-191.
When we find an error in bit #0 of code5, we know the error must be somewhere in chunks 160-191.
When we find an error in bit #0 of code4, we know the error must be somewhere in chunks 176-191.
When we find an error in bit #0 of code3, we know the error must be somewhere in chunks 184-191.
When we find NO error in bit #0 of code2, we know the error must be somewhere in chunks 184-187.
When we find NO error in bit #0 of code1, we know the error must be somewhere in chunks 184-185.
When we find an error in bit #0 of code0, we know the error must be in chunk 185.
!!!!! DONE !!!!!
We now know exactly where the error is in our 4096-byte sector --- at bit #0 of 64-bit chunk #185.
And chunk 185 == 0x0B9 == 0 1011 1001
Hmmm. Very interesting! Each zero bit in the chunk # of the error is a code# where we did not find an error, and each one bit in the chunk # of the error is a code# where we did find an error. Which means, we automagically get the chunk # that contains the error in the process of checking the code chunks. When a bit in the readback ECC matches the same bit in the ECC we computed from the read data, we generate a 0, otherwise we generate a 1 (readback-ECC XOR computed-ECC). How simple is that? !!!
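Here is a minimal sketch of that correction step, under the same assumptions as the encode sketch above (my reading of the scheme, handling a single burst of up to 64 bits in the data area; errors in the stored ECC bytes themselves are not considered):

#include <stdint.h>

#define CHUNKS_PER_SECTOR 512  /* 4096 bytes / 8 bytes per chunk */

/* From the encode sketch above. */
void ecc_encode(const uint64_t data[CHUNKS_PER_SECTOR], uint64_t code[10]);

/* Recompute the ten codes from the read-back data, XOR them against the
 * stored codes, and use the per-bit-lane syndrome of code0..code8 as the
 * index of the chunk whose bit needs flipping. Because each of the 64 bit
 * lanes sees at most one flipped bit for a burst of up to 64 bits, each
 * lane can be corrected independently. */
void ecc_correct(uint64_t data[CHUNKS_PER_SECTOR], const uint64_t stored_code[10])
{
    uint64_t computed[10];
    ecc_encode(data, computed);

    /* code9 is the overall parity: a set bit marks a bit lane with an error. */
    uint64_t bad_lanes = stored_code[9] ^ computed[9];

    for (int bit = 0; bit < 64; bit++) {
        if (!(bad_lanes & (1ULL << bit)))
            continue;                               /* this bit lane is clean */

        uint32_t chunk = 0;
        for (uint32_t b = 0; b < 9; b++)            /* syndrome -> chunk index */
            if ((stored_code[b] ^ computed[b]) & (1ULL << bit))
                chunk |= (1u << b);

        data[chunk] ^= (1ULL << bit);               /* flip the bad bit back */
    }
}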
With a little extra thought, we see that each bit # in the data-chunks and ECC-chunks is independent. In other words, bit #0 in the ECC chunks is checking bit #0 of every 64-bit chunk of data, while bit #1 in the ECC chunks is checking bit #1 of every 64-bit chunk of data, and so forth. Adjacent bits in the ECC values and data chunks are utterly independent from each other. No bits closer than 64-bits from each other interact in any way. This scheme treats a sector like 64 separate sectors, each only 1/64 as large.
Aha! This must be why this technique can detect and correct any burst of error bits up to 64-bits long --- because each bit is utterly independent (in the ECC chunks) from any bit closer than 64-bits away.
That's it. In my dream state at least, that explains how this works.
What the above doesn't explain is the following (my questions):
#1: Is this how burst ECC is performed on disk drives, etc?
#2: If this technique is not conventional, is it new?
#3: Is this technique easier for software than conventional techniques?
#4: Do conventional routines have more or less overhead (about 2% for 64-bit bursts in a 4K packet)?
#5: How can this be extended to correct two bursts?
PS: My application for ECC is not related to disk drives, but has similar characteristics and requirements. At least initially, my application would do everything in software: compute ECC in software, append to the data-stream (packet), send the data-stream (packet), receive the data-stream (packet), and perform the above process to detect and fix any errors in the data-stream (packet). I suppose this might also turn into circuitry in an FPGA someday, but... not today.