I have a FeliCa card. The first question is: what actually is this card? Is it a smart card or a simple memory card? Is it a kind of Java Card into which I can load .cap files, or does it have fixed proprietary contents so that I can't load any applet? Is it GlobalPlatform compliant?
I read here that:
Sony's proprietary FeliCa is a smartcard technology that is similar to ISO/IEC 14443. FeliCa has a file system similar to that defined in ISO/IEC 7816-4. The file system and commands for access to the file system are standardized in JIS X 6319-4 [28]. In addition, the FeliCa system has proprietary cryptography and security features.
After that I tried to send some APDU commands to it. The first step was to make some configuration changes on the reader, because it was configured to read ISO/IEC 14443 Type A and Type B cards, not FeliCa cards.
As both FeliCa and ISO/IEC 14443 cards use a 13.56 MHz carrier frequency, I think the difference between these types is in the protocol layer only. Am I right? If so, what is the name of the FeliCa cards' transmission protocol? (For ISO/IEC 14443 cards, we have the T=1 and T=CL protocols.)
After configuring the reader, I tried to send commands to card:
Connect successful.
Send: 00 A4 04 00 00
Recv: 6A 81
Time used: 31.000 ms
Send: 00 C0 00 00 00
Recv: 6A 81
Time used: 28.000 ms
Send: 00 CA 00 00 00
Recv: 6A 81
Time used: 35.000 ms
As you can see above, I receive only the status word 0x6A81 (function not supported).
I also searched a lot of ACS reader datasheets, some NXP application notes and, of course, the JIS X 6319-4 standard for a list of commands for this card type, but I found nothing applicable.
So, the questions are:
What actually is Felica? (Smart? Memory?)
What is the difference between FeliCa cards and ISO/IEC 14443 cards? Is it related to NFC?
How to communicate with this card and transfer data?
Update:
My card's ATR is: 3B 8F 80 01 80 4F 0C A0 00 00 03 06 11 00 3B 00 00 00 00 42
What actually is Felica? (Smart? Memory?)
It's more like a memory card than a smart card with regard to functionality. Reading data in blocks is typical for a memory card, and the card has very limited functionality besides basic authentication based on symmetric cryptography.
You could argue that it is a smart card in the sense that the implementation seems to carry a multi-purpose CPU (see Annex B).
However, it seems impossible to change the behavior of the card the way you would with, e.g., a GlobalPlatform Java Card. So I'd classify it as a memory card with a proprietary protocol.
What is the difference between FeliCa cards and ISO/IEC 14443 cards? Is it related to NFC?
It uses a proprietary communication protocol which includes both the data-link layer (which you are asking about here) and the command/response layer.
How to communicate with this card and transfer data?
The fact that you are sending APDUs instead of FeliCa's proprietary command/response pairs indicates that you are using a translation layer. This translation layer is likely to be in the reader or the reader driver. Its API is likely specified in the PC/SC 2.01 specifications (section 3.2.2.1, Storage Card Functionality Support, using CLA byte 0xFF).
You probably need the reader's user manual as well, if only to figure out in which location to store the required keys.
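To make the translation-layer idea concrete, here is a minimal sketch of how the PC/SC 2.01 Part 3 storage-card pseudo-APDUs (CLA 0xFF) are built as plain bytes. Whether your reader accepts them, and which key structure and key slots to use, are reader-specific assumptions you would have to confirm against the reader's manual:

```python
# Sketch of PC/SC 2.01 Part 3 storage-card pseudo-APDUs (CLA = 0xFF).
# The exact parameters (key structure byte, key slots) are reader
# dependent; treat these as illustrative, not definitive.

def load_keys(key_number: int, key: bytes) -> list[int]:
    # LOAD KEYS: FF 82 <key structure> <key number> <Lc> <key bytes>
    # 0x00 key structure = plain transmission into volatile memory (assumption)
    return [0xFF, 0x82, 0x00, key_number, len(key)] + list(key)

def read_binary(block: int, length: int) -> list[int]:
    # READ BINARY: FF B0 00 <block number> <Le = expected length>
    return [0xFF, 0xB0, 0x00, block, length]

def update_binary(block: int, data: bytes) -> list[int]:
    # UPDATE BINARY: FF D6 00 <block number> <Lc> <data>
    return [0xFF, 0xD6, 0x00, block, len(data)] + list(data)

apdu = read_binary(0x04, 16)
print(' '.join(f'{b:02X}' for b in apdu))   # FF B0 00 04 10
```

You would then transmit these byte lists through whatever PC/SC binding you use (e.g. `SCardTransmit` or pyscard's `connection.transmit`); the translation layer in the reader maps them onto FeliCa's native commands, if it supports FeliCa at all.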
Related
There are a number of incoming data blocks containing raw bytes with unknown structure; the block size is known in advance. I need to detect whether a block contains some tabular data structure or not. In other words, I need to know whether the raw bytes are arranged as tabular data. Is there any known approach to detect this? Maybe applying an autocorrelation function helps, or something else?
Simple example:
0000:|0A 0B 01 02 03|0A 0B 04 05 06|0A 0B 07 08 09|0A
0010: 0B 0A 0B 0C|0A 0B 0D 0E 0F|0A 0B 10 11 12|0A 0B
0020: 13 14 15|0A 0B 16 17 18|0A 0B 19 1A 1B|0A 0B 1C
...
In this example, the incoming binary data block contains 5-byte records starting with the obvious marker bytes 0A 0B. If the marker is known in advance, records can easily be extracted from the data block. However, the markers are not known, and the records, if any, can have different formats and sizes.
I'm trying to solve the problem in general, so it doesn't matter whether a block matches any particular data format, e.g. CSV or a custom format. What I have already found is a one-page article describing the idea. The idea works in general, but not for all data (at least it doesn't work for data with variable record length). So now I'm wondering if there are implemented approaches; I don't need a particular implementation, though. The question is more about existing solutions and sharing experience, if any.
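To illustrate the autocorrelation idea from the question: for fixed-size records with a repeating marker, the fraction of bytes that equal the byte a fixed lag later peaks when that lag is the record size. This is only a sketch for the fixed-length case (the function names and the synthetic data are mine, not from any existing tool), and as noted it will not handle variable-length records:

```python
def match_ratio(data: bytes, lag: int) -> float:
    # Fraction of positions where a byte equals the byte `lag` positions later.
    n = len(data) - lag
    return sum(data[i] == data[i + lag] for i in range(n)) / n

def likely_record_size(data: bytes, max_lag: int = 32) -> int:
    # Candidate record size = smallest lag with the highest self-match ratio.
    best, best_score = 1, -1.0
    for lag in range(1, max_lag + 1):
        score = match_ratio(data, lag)
        if score > best_score:      # strict '>' keeps the smallest winning lag
            best, best_score = lag, score
    return best

# Synthetic block like the example: 5-byte records, each starting with 0A 0B.
data = b''.join(bytes([0x0A, 0x0B, i, i + 1, i + 2]) for i in range(0, 60, 3))
print(likely_record_size(data))   # → 5
```

A low maximum ratio across all lags would suggest the block has no periodic (tabular) structure at all, which is the actual yes/no question being asked.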
I have an NXP NTAG216 and I want to know the storage size of the tag. From the specification I read that the size of the data area is given in block 3, byte 2, which is 7F in my case, i.e. 127 decimal; times 8 bytes that is 1016 bytes. The NXP website, however, states that the tag has only 924 bytes (NXP NTAG 213/215/216).
[0] 04 : 91 : 52 : 4F
[1] 9A : 9A : 40 : 80
[2] C0 : 48 : 00 : 00
[3] E1 : 10 : 7F : 00
Similar with an NXP NTAG215, which has 3E (62 decimal); 62 times 8 bytes is 496 bytes, whereas the website says 540 bytes.
[0] 04 : 34 : DB : 63
[1] 6A : 83 : 5C : 81
[2] 34 : 48 : 00 : 00
[3] E1 : 10 : 3E : 00
Can someone explain to me how this number is calculated?
If you read the datasheet for the cards, https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf, it says:
NTAG216 EEPROM:
924 bytes, organized in 231 pages of 4 bytes per page.
26 bytes reserved for manufacturer and configuration data
37 bits used for the read-only locking mechanism
4 bytes available as capability container
888 bytes user programmable read/write memory
From the spec for Type 2 cards http://apps4android.org/nfc-specifications/NFCForum-TS-Type-2-Tag_1.1.pdf
The Capability Container Byte 2 indicates the memory size of the data area of the Type 2 Tag Platform. The value of byte 2 multiplied by 8 is equal to the data area size measured in bytes
Note: your question said to multiply by 8 bits (1 byte), which is wrong; I'm sure this was just a typo and you meant 8 bytes.
So some of the headline 924 bytes is actually reserved for other uses and would never be included in the size listed in the Capability Container; that leaves 888 bytes of user-usable memory.
The datasheet says the value in the Capability Container should be 6Dh, which is 109 × 8 = 872 bytes.
And all my NTAG216 cards have a value of 6Dh.
I'm not sure why this Capability Container value is less than the actual usable memory size. It might be because the wording of the spec is unclear: it might mean the "usable data area" for an NDEF message, not including the mandatory headers (TLV, etc.) that make up a valid NDEF message and its records.
So the NTAG215 example value of 3Eh is exactly what the datasheet says it should be, and it is smaller than the total memory size as outlined above because some memory pages are reserved for non-NDEF data, etc.
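The arithmetic above can be sketched directly from the CC bytes (page 3 of the tag), using the Type 2 Tag rule that byte 2 × 8 gives the data area size:

```python
def data_area_size(cc: bytes) -> int:
    # NFC Forum Type 2 Tag: CC byte 2, multiplied by 8, is the
    # data area size in bytes.
    return cc[2] * 8

print(data_area_size(bytes.fromhex('E1103E00')))  # NTAG215: 0x3E * 8 = 496
print(data_area_size(bytes.fromhex('E1106D00')))  # NTAG216 per datasheet: 0x6D * 8 = 872
print(data_area_size(bytes.fromhex('E1107F00')))  # the questioner's card: 0x7F * 8 = 1016
```

The third value, 1016, exceeds even the chip's full 924-byte EEPROM, which is what makes the 7F reading suspicious in the first place.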
The next question is why your NTAG216 example does not have the correct value of 6Dh.
The spec for Type 2 cards and the datasheet for the NTAG21x cards say:
The Capability Container CC (page 3) is programmed during the IC production according to the NFC Forum Type 2 Tag specification (see Ref. 2). These bytes may be bit-wise modified by a WRITE or COMPATIBILITY_WRITE command. The parameter bytes of the WRITE command and the current contents of the CC bytes are bit-wise OR’ed. The result is the new CC byte contents. This process is irreversible and once a bit is set to logic 1, it cannot be changed back to logic 0.
The 4 bytes of the Capability Container have to be writable because you can only write in blocks of 4 bytes, and byte 3 of the Capability Container indicates the read and write access capability of the data area and CC area of the Type 2 Tag Platform; it can therefore be legitimately changed to make the card read-only.
So it is possible that a bad write to the Capability Container has increased the value of byte 2 (the size value) to an invalid value on this particular card.
The other possibility is that there are a lot of fake NTAG21x cards around; maybe this is a fake card with actually more memory than a genuine NXP card.
Either use the originality signature method outlined in the datasheet to verify it is genuine, or use the TagInfo smartphone app from NXP, which will also verify it is genuine.
I am writing an embedded WebSocket server and working directly from the relevant RFC.
My server responds properly to the upgrade request from the browser, and the browser, in its example JavaScript, proceeds to send a short message through the newly established socket. So it's all working fine.
The message is short (complete frame is only 21 bytes) and contains all relevant fields which my server happily decodes.
The stumper is in bits 9 to 15 which are supposed to contain the length of the payload.
Here is a hex dump of the message captured in Wireshark:
81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1
So as you can see, the first byte contains FIN (1 bit), RSV1 (1 bit), RSV2 (1 bit), RSV3 (1 bit) and the 4 bits of the opcode. So far so good.
8F is the stumper: it contains the MASK bit and the payload length. The MASK bit is set to 1, which is fine, but the remaining 7 bits give me a value of 71 (0x47), while the entire frame is only 21 bytes long and the payload only 15 bytes.
So what am I doing wrong?
I can decode the message by applying the XOR mask to the payload, but the length is the problem, as it governs the decoding loop, which then runs 71 iterations instead of the 15 it should.
I am grateful for any clue as to what I am doing wrong. Thanks.
Problem was that my structure did not take into account the endianness (bit-field ordering) on the AMD64 processor! Sometimes the answer is right there ;-)
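For reference, parsing the captured frame with plain byte arithmetic, instead of a C bit-field struct, avoids the ordering trap entirely. Doing so on the exact bytes from the dump gives the expected 7-bit length of 15 (0x8F & 0x7F = 0x0F), and unmasking recovers the text:

```python
# The 21-byte frame captured in Wireshark.
frame = bytes.fromhex('818f11abd50b5ccea67870ccb02b65c4f57874c5b1')

fin     = frame[0] >> 7               # 1: final fragment
opcode  = frame[0] & 0x0F             # 1: text frame
masked  = frame[1] >> 7               # 1: client-to-server frames are masked
length  = frame[1] & 0x7F             # 0x8F & 0x7F = 0x0F = 15 bytes
mask    = frame[2:6]                  # 4-byte masking key
payload = bytes(b ^ mask[i % 4] for i, b in enumerate(frame[6:6 + length]))
print(length, payload.decode())       # 15 Message to send
```

The misread value 71 (0x47) is exactly the top 7 bits of 0x8F, which is what a bit-field laid out in the wrong order hands back.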
I'm wondering how Windows interprets the 8 bytes that define the starting cluster for the $MFT. Converting the 8 bytes to decimal using a calculator doesn't work; luckily WinHex has a data interpreter which displays the correct value, though I don't know how it's calculated:
http://imgur.com/4UEZvNy
The picture above shows the WinHex data interpreter displaying the correct amount. So my question is: how does '00 00 0C 00 00 00 00 00' equate to 786432?
I saw another user had a similar question about the number of bytes per sector, which someone answered by saying it came down to a byte being base 256. But that doesn't really help me figure it out. Any help would be really appreciated.
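What WinHex's data interpreter does here is read the 8 bytes as a little-endian unsigned integer: the first byte is the least significant, so byte index i contributes its value times 256^i. Only the third byte (0x0C, at index 2) is non-zero here, contributing 12 × 65536 = 786432. A quick sketch:

```python
raw = bytes.fromhex('00000C0000000000')    # the 8 bytes as seen on disk

# Little-endian: byte i is multiplied by 256**i.
value = sum(b * 256 ** i for i, b in enumerate(raw))
print(value)                               # 786432  (= 0x0C * 65536)

# The one-liner equivalent:
print(int.from_bytes(raw, 'little'))       # 786432
```

This is also why a calculator gives the wrong answer: typing the hex digits in as written treats the bytes big-endian, i.e. most significant first.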
I am trying to learn more about how to read process memory. So I opened the "entire memory" of the Firefox process in WinHex and saw the following hex values starting at offset 10000.
00 00 00 00 00 00 00 00 EC 6B 3F 80 0C 6D 00 01 EE FF EE FF 01 00 00 00
My question is
Is it possible for a human to interpret this without further knowledge? Are these pointers or values? Apart from things like endianness, is there anything in process memory that is common across programs created with different compilers? And why does it start with lots of zeroes; isn't that a very odd way to start using space?
Obviously, you can't do anything "without further knowledge". But we already know a whole lot from the fact that it's Windows. For starters, we know that the executable gets its own view of memory, and in that virtual view the executable is loaded at its preferred starting address (as stated in the PE header of the EXE).
The start at 0x00010000 is a compatibility thing with MS-DOS (yes, that 16-bit OS): the first 64 KB are reserved and are never valid addresses. The pages up to 0x00400000 (4 MB) are reserved for the OS and in general differ between OS versions.
A common data structure in that range is the Process Environment Block (PEB). With the WinDbg tool and the Microsoft Symbol Server, you can figure out whether the Process Environment Block is indeed located at offset 0x10000, and what its contents mean.