I am writing an embedded WebSocket server and working directly from the relevant RFC.
My server responds properly to the upgrade request from the browser, and the browser, in its example JavaScript, proceeds to send a short message through the newly established socket, so it's all working fine.
The message is short (the complete frame is only 21 bytes) and contains all the relevant fields, which my server happily decodes.
The stumper is in bits 9 to 15, which are supposed to contain the length of the payload.
Here is a hex dump of the message captured in Wireshark:
81 8f 11 ab d5 0b 5c ce a6 78 70 cc b0 2b 65 c4 f5 78 74 c5 b1
So, as you can see, the first byte contains FIN (1 bit), RSV1 (1 bit), RSV2 (1 bit), RSV3 (1 bit) and the 4 bits of the opcode. So far so good.
8f is the stumper: it contains the MASK bit and the payload length. The MASK bit is set to 1, which is fine, but the remaining 7 bits come out as 71 (0x47), even though the entire frame is only 21 bytes long and the payload is only 15 bytes.
So what am I doing wrong?
I can decode the message by applying the XOR mask to the payload, but the length is the problem, as it governs the decoding loop, which runs for 71 iterations instead of the 15 it should.
I am grateful for any clue as to what I am doing wrong.
Thanks
The problem was that my structure did not take into account the endianness of the AMD64 processor!!! Sometimes the answer is right there ;-)
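For anyone hitting the same wall: assuming the server is written in C, here is a minimal sketch (the variable names are mine) that extracts the header fields with explicit shifts and masks instead of an overlaid bit-field struct, which sidesteps the endianness/bit-ordering issue entirely:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* First bytes of the captured frame: 81 8f 11 ab d5 0b ... */
    const uint8_t frame[] = { 0x81, 0x8f, 0x11, 0xab, 0xd5, 0x0b };

    unsigned fin    = (frame[0] >> 7) & 0x01u;  /* 1               */
    unsigned opcode =  frame[0]       & 0x0fu;  /* 1 = text frame  */
    unsigned mask   = (frame[1] >> 7) & 0x01u;  /* 1 = masked      */
    unsigned paylen =  frame[1]       & 0x7fu;  /* 0x0f = 15 bytes */

    printf("FIN=%u opcode=%u MASK=%u len=%u\n", fin, opcode, mask, paylen);
    return 0;
}

With the 0x8f byte from the dump above, paylen comes out as 15, which matches the frame.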
There are a number of incoming data blocks containing raw bytes with unknown structure, and the block size is known in advance. I need to detect whether a block contains some tabular data structure or not; in other words, I need to know whether the raw bytes are formed as tabular data. Is there any known approach to detect this? Maybe applying an autocorrelation function would help, or something else?
Simple example:
0000:|0A 0B 01 02 03|0A 0B 04 05 06|0A 0B 07 08 09|0A
0010: 0B 0A 0B 0C|0A 0B 0D 0E 0F|0A 0B 10 11 12|0A 0B
0020: 13 14 15|0A 0B 16 17 18|0A 0B 19 1A 1B|0A 0B 1C
...
In this example, the incoming binary data block contains 5-byte records, each starting with the obvious marker 0x0b0a. If the marker is known in advance, the records can easily be extracted from the data block. However, the markers are not known, and the records, if any, can have different formats and sizes.
I'm trying to solve the problem in general, so it doesn't matter whether a block matches any particular data format, e.g. CSV or a custom format. What I have found so far is a one-page article describing the idea. The idea works in general, but not for all data (at least it does not work for data with a variable record length). So now I'm wondering whether there are already implemented approaches; I don't need a particular implementation, though. The question is more about existing solutions and sharing experience, if any.
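If it helps to experiment with the autocorrelation idea, here is a minimal sketch (the lag limit and names are mine, and a real detector would need a threshold on the match count): count byte matches of the block against itself at each candidate lag and take the lag with the most matches as the candidate record size. For the example block above it reports 5.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Count positions where the block matches itself shifted by `lag` bytes. */
static size_t matches_at_lag(const uint8_t *data, size_t len, size_t lag) {
    size_t hits = 0;
    for (size_t i = 0; i + lag < len; i++)
        if (data[i] == data[i + lag])
            hits++;
    return hits;
}

int main(void) {
    /* The example block: 5-byte records, each starting with 0A 0B. */
    const uint8_t block[] = {
        0x0A,0x0B,0x01,0x02,0x03, 0x0A,0x0B,0x04,0x05,0x06,
        0x0A,0x0B,0x07,0x08,0x09, 0x0A,0x0B,0x0A,0x0B,0x0C,
        0x0A,0x0B,0x0D,0x0E,0x0F, 0x0A,0x0B,0x10,0x11,0x12,
        0x0A,0x0B,0x13,0x14,0x15, 0x0A,0x0B,0x16,0x17,0x18,
        0x0A,0x0B,0x19,0x1A,0x1B, 0x0A,0x0B,0x1C
    };
    size_t best_lag = 0, best_hits = 0;

    for (size_t lag = 1; lag <= sizeof(block) / 2; lag++) {
        size_t hits = matches_at_lag(block, sizeof(block), lag);
        if (hits > best_hits) { best_hits = hits; best_lag = lag; }
    }
    printf("best lag (candidate record size): %zu\n", best_lag);
    return 0;
}

As you note, a fixed period like this will not show up for variable-length records.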
I have an NXP NTAG216 and I want to know the storage size of the tag. From the specification I read that the size of the data area is in Block 3, Byte 2, which is 7F in my case; that is 127 in decimal, and times 8 bytes it gives 1016 bytes. However, the NXP website states that the tag only has 924 bytes (NXP NTAG 213/215/216).
[0] 04 : 91 : 52 : 4F
[1] 9A : 9A : 40 : 80
[2] C0 : 48 : 00 : 00
[3] E1 : 10 : 7F : 00
Similarly, an NXP NTAG215 has 3E, which is 62 in decimal; times 8 bytes that gives 496 bytes, whereas the website says 540 bytes.
[0] 04 : 34 : DB : 63
[1] 6A : 83 : 5C : 81
[2] 34 : 48 : 00 : 00
[3] E1 : 10 : 3E : 00
Can someone explain to me how this number is calculated?
If you read the datasheet for the cards, https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf, it says:
NTAG216 EEPROM:
924 bytes, organized in 231 pages of 4 byte per page.
26 bytes reserved for manufacturer and configuration data
37 bits used for the read-only locking mechanism
4 bytes available as capability container
888 bytes user programmable read/write memory
From the spec for Type 2 cards, http://apps4android.org/nfc-specifications/NFCForum-TS-Type-2-Tag_1.1.pdf:
The Capability Container Byte 2 indicates the memory size of the data area of the Type 2 Tag Platform. The value of byte 2 multiplied by 8 is equal to the data area size measured in bytes.
Note: your question said to multiply by 8 bits (1 byte), which is wrong; I'm sure this was just a typo and you meant 8 bytes.
So some of the headline 924 bytes are actually reserved for other uses and would never be included in the size listed in the Capability Container; that leaves 888 bytes of user-usable memory.
The datasheet says the value in the Capability Container should be 6Dh, which is 109 * 8 = 872 bytes.
And all my NTAG216 tags do have the value 6Dh.
I'm not sure why this Capability Container value is less than the actual usable memory size; the wording of the spec is unclear, but it might refer to the usable data area of an NDEF message and so might not include the mandatory headers (TLVs, etc.) that make up a valid NDEF message and its Records.
So the NTAG215 example value of 3Eh is exactly what the datasheet says it should be, and it is smaller than the total memory size because, as outlined above, some memory pages are reserved for non-NDEF data, etc.
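Just to make the arithmetic concrete, here is a tiny sketch (nothing tag-specific, just Capability Container byte 2 times 8) for the three values that appear above:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* CC byte 2 values discussed here: datasheet NTAG216, NTAG215, and the odd NTAG216. */
    const uint8_t cc2[] = { 0x6D, 0x3E, 0x7F };

    for (int i = 0; i < 3; i++)
        printf("CC byte 2 = 0x%02X -> data area = %u bytes\n",
               (unsigned)cc2[i], (unsigned)(cc2[i] * 8));
    return 0;
}

which prints 872, 496 and 1016 bytes respectively.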
The next question is why your NTAG216 example does not have the correct value of 6Dh.
The spec for Type 2 cards and the datasheet for the NTAG21x cards say:
The Capability Container CC (page 3) is programmed during the IC production according to the NFC Forum Type 2 Tag specification (see Ref. 2). These bytes may be bit-wise modified by a WRITE or COMPATIBILITY_WRITE command. The parameter bytes of the WRITE command and the current contents of the CC bytes are bit-wise OR’ed. The result is the new CC byte contents. This process is irreversible and once a bit is set to logic 1, it cannot be changed back to logic 0.
The 4 bytes of the Capability Container have to be writable because you can only write in blocks of 4 bytes, and Byte 3 of the Capability Container indicates the read and write access capability of the data area and CC area of the Type 2 Tag Platform, so it can legitimately be changed to make the card read-only.
So it is possible that a bad write to the Capability Container has increased the value of Byte 2 (the size value) to an invalid value on this particular card.
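To illustrate how that could produce exactly the value you see (the 0x12 here is just one hypothetical stray value; any write with those bits set would do), remember that writes to the CC are OR'ed in and can only ever set bits:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t cc2 = 0x6D;        /* expected NTAG216 CC byte 2 (0110 1101)   */
    uint8_t bad_write = 0x12;  /* hypothetical stray write   (0001 0010)   */

    cc2 |= bad_write;          /* WRITE to the CC is bit-wise OR'ed in     */
    printf("CC byte 2 after the bad write: 0x%02X\n", (unsigned)cc2);  /* 0x7F */
    return 0;
}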
The other possibility is that there are a lot of fake NTAG21x cards around; maybe this is a fake card that actually has more memory than a genuine NXP card.
Either use the originality signature method outlined in the datasheet to verify it is genuine, or use the TagInfo smartphone app from NXP, which will also verify that it is genuine.
I'm wondering how Windows interprets the 8 bytes that define the starting cluster for the $MFT. Converting the 8 bytes to decimal using a calculator doesn't work; luckily, WinHex has a calculator which displays the correct value, though I don't know how it is calculated:
http://imgur.com/4UEZvNy
The picture above shows the WinHex data interpreter displaying the correct amount. So my question is: how does '00 00 0C 00 00 00 00 00' equate to 786432?
I saw another user with a similar question about the number of bytes per sector, which someone answered by saying it comes down to a byte being base 256. But that doesn't really help me figure it out. Any help would be really appreciated.
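The bytes are stored little-endian: the least significant byte comes first, so byte i contributes its value times 256^i. A minimal sketch (generic C, not WinHex-specific) that reassembles them:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The 8 bytes exactly as they appear on disk. */
    const uint8_t raw[8] = { 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x00 };
    uint64_t cluster = 0;

    /* Assemble little-endian: start from the most significant byte (raw[7]). */
    for (int i = 7; i >= 0; i--)
        cluster = (cluster << 8) | raw[i];

    printf("%llu\n", (unsigned long long)cluster);  /* prints 786432 */
    return 0;
}

Only raw[2] = 0x0C is non-zero, and 0x0C * 256^2 = 12 * 65536 = 786432, which is the value WinHex shows.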
I disassembled a program (with objdump -d a.out) and now I would like to understand what the different sections in a line like
400586: 48 83 c4 08 add $0x8,%rsp
stand for. More specifically, I would like to know how you can see how many bytes are used for adding two registers. My idea was that the 0x8 in add $0x8,%rsp, which is 8 in decimal, gives me 2 * 4, so 2 bytes for adding 2 registers. Is that correct?
PS: the compiler is gcc, the OS is SUSE Linux.
In the second column you see 48 83 c4 08. Every two-digit hex number stands for one byte, so the instruction is four bytes long. The last 08 corresponds to $0x8; the other three bytes are the machine code for "add an 8-bit constant to RSP" (for pedantic editors: Intel writes its registers in upper case). It is quite difficult to deconstruct machine code by hand, but your assumption is completely wrong.
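For completeness, here is my own byte-by-byte annotation of that encoding (based on the Intel instruction-set reference, not anything in your listing); note that the register does not take bytes of its own, it is encoded inside the ModRM byte:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The four bytes objdump printed for "add $0x8,%rsp". */
    const uint8_t insn[] = {
        0x48,  /* REX.W prefix: 64-bit operand size (RSP rather than ESP)           */
        0x83,  /* opcode: immediate-group instruction with r/m64 and an imm8        */
        0xc4,  /* ModRM: mod=11 (register direct), reg=000 selects ADD, rm=100 = RSP */
        0x08   /* imm8: the constant $0x8                                           */
    };
    printf("instruction length: %zu bytes\n", sizeof insn);  /* 4 */
    return 0;
}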
Suppose we have to receive data from a sender as a chunk of bits (say, 8 bits).
But the transfer is unreliable, which results in bit loss (not bit flips).
That means any bit in the chunk can be absent, and the receiver would receive only 7 bits.
I have studied some error-correction coding, such as Hamming codes, but those are designed to recover flipped bits, not lost bits as in this situation.
If you are happy with a low entropy code and you can detect an end-of-message, you can simply send each bit twice.
This code can recover from any number of deletions that happen in different runs:
On receiving, find all odd-sized runs and extend them by one bit. If you end up with fewer than the expected number of bits, you were unable to recover from multiple deletions.
If you want to ensure a fixed error rate that can be recovered from, use bit stuffing (see Example 3 and the sketch after it).
Example:
0110
encoded as:
00 11 11 00
a double error occurs:
0x 11 x1 00
received:
011100
'0' and '111' are odd-sized runs. Fix:
00111100
we have 8 bits, and have recovered from a double error.
decode:
0110
Example 2:
0101
encoded as
00110011
transmitted as
0xxx0011
received as
00011
corrected as
000011
decoded as
001
which is shorter than expected. A transmission error has occurred.
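Here is a small sketch of the scheme described above (the '0'/'1' strings are just for illustration; function names are mine): double each bit on encode, and on decode round every odd-length run up to the next even length before taking one bit per pair.

#include <stdio.h>
#include <stddef.h>

/* Encode: send every bit twice. "0110" -> "00111100". */
static void encode(const char *bits, char *out) {
    size_t j = 0;
    for (size_t i = 0; bits[i]; i++) {
        out[j++] = bits[i];
        out[j++] = bits[i];
    }
    out[j] = '\0';
}

/* Decode: extend every odd-length run by one bit, then take one bit per pair.
   Returns 0 on success, -1 if fewer than `expected` bits are recovered. */
static int decode(const char *recv, size_t expected, char *out) {
    size_t n = 0, i = 0;
    while (recv[i]) {
        char c = recv[i];
        size_t run = 0;
        while (recv[i] == c) { run++; i++; }   /* measure the run       */
        if (run % 2) run++;                    /* repair: make it even  */
        for (size_t k = 0; k < run / 2; k++)   /* each pair is one bit  */
            out[n++] = c;
    }
    out[n] = '\0';
    return (n < expected) ? -1 : 0;
}

int main(void) {
    char enc[64], dec[64];

    encode("0110", enc);
    printf("encoded: %s\n", enc);              /* 00111100 */

    /* Example 1: two deletions in different runs -> recoverable. */
    if (decode("011100", 4, dec) == 0)
        printf("decoded: %s\n", dec);          /* 0110 */

    /* Example 2: three deletions in one run -> detected as an error. */
    if (decode("00011", 4, dec) != 0)
        printf("unrecoverable: only got %s\n", dec);  /* 001 */
    return 0;
}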
Example 3 (bit stuffing after a run of 3 bits):
0000 1111
stuffed as
00010 11101
sent as
0000001100 1111110011
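And a matching sketch of the stuffing step used in Example 3 (again just a sketch; only the stuffing side is shown, the doubled transmission is the same encode as above): after every run of 3 identical bits, insert the opposite bit.

#include <stdio.h>
#include <stddef.h>

/* Insert the opposite bit after every run of 3 identical bits,
   so no run in the stuffed stream is longer than 3.
   "00001111" -> "0001011101". */
static void stuff(const char *bits, char *out) {
    size_t j = 0, run = 0;
    char prev = 0;
    for (size_t i = 0; bits[i]; i++) {
        if (bits[i] == prev) run++; else { prev = bits[i]; run = 1; }
        out[j++] = bits[i];
        if (run == 3) {                        /* stuff the opposite bit */
            out[j++] = (prev == '0') ? '1' : '0';
            prev = out[j - 1];
            run = 1;
        }
    }
    out[j] = '\0';
}

int main(void) {
    char stuffed[64];
    stuff("00001111", stuffed);
    printf("stuffed: %s\n", stuffed);          /* 0001011101 */
    return 0;
}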