ACR122 - Card Emulation - nfc

How can I get the NFC contactless reader ACR122U to behave as a tag (card emulation mode)?
The product brochure claims that the device can do card emulation, but the SDK does not seem to provide an example of or documentation for this feature.
Does anybody know how to do this?
Is there additional software required?
Please note that my target platform is MS Windows.
Thanks in advance

For "Card Emulation" or in other words, "Configure as target and wait for initiators", please refer to here: http://code.google.com/p/nfcip-java/source/browse/trunk/nfcip-java/doc/ACR122_PN53x.txt
** Command to PN532 **
0xd4 0x8c        TgInitAsTarget instruction code
0x00             Acceptable modes
                 (0x00 = allow all, 0x01 = only allow to be
                 initialized as passive, 0x02 = allow DEP only)

6 bytes (MIFARE):
0x08 0x00        SENS_RES
0x12 0x34 0x56   NFCID1
0x40             SEL_RES

18 bytes (FeliCa):
0x01 0xfe 0xa2 0xa3 0xa4 0xa5 0xa6 0xa7   NFCID2
0xc0 0xc1 0xc2 0xc3 0xc4 0xc5 0xc6 0xc7   ?
0xff 0xff        System parameters?

0xaa 0x99 0x88 0x77 0x66 0x55 0x44 0x33 0x22 0x11   NFCID3
0x00             ?
0x00             ?
This is the response when an initiator activates this target:
** Response from PN532 **
0xd5 0x8d        TgInitAsTarget response code
0x04             Mode
                 (0x04 = DEP, 106 kbps)
Let me know if it works!

Also, you can try to send the following APDU (in hex) to put the reader in "Card emulation" mode:
FF 00 00 00 27 D4 8C 00 08 00 12 34 56 40 01 FE A2 A3 A4 A5 A6 A7 C0 C1 C2 C3 C4 C5 C6 C7 FF FF AA 99 88 77 66 55 44 33 22 11 00 00
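If you want to try that from code, here is a minimal sketch using Python with the pyscard library (my assumption; any PC/SC wrapper will do) that transmits the pseudo-APDU above through the ACR122U and prints the PN532's answer. Note that on some stacks connect() only succeeds while a card or an external RF field is present, in which case you would need the reader's escape/direct channel instead:

from smartcard.System import readers
from smartcard.util import toHexString

# FF 00 00 00 27 wraps the 39-byte TgInitAsTarget payload shown above.
APDU = [0xFF, 0x00, 0x00, 0x00, 0x27,
        0xD4, 0x8C, 0x00,                    # TgInitAsTarget, accept all modes
        0x08, 0x00, 0x12, 0x34, 0x56, 0x40,  # MIFARE: SENS_RES, NFCID1, SEL_RES
        0x01, 0xFE, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7,
        0xC0, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
        0xFF, 0xFF,                          # FeliCa params
        0xAA, 0x99, 0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11,  # NFCID3
        0x00, 0x00]                          # no general/historical bytes

conn = readers()[0].createConnection()  # assumes the ACR122U is the first reader
conn.connect()
data, sw1, sw2 = conn.transmit(APDU)    # blocks until an initiator activates us
print(toHexString(data), "SW=%02X%02X" % (sw1, sw2))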

For getting the ACR122 (or rather the PN532 NFC controller chip inside it) into card emulation mode, you would do roughly the following:
ReadRegister:
> FF000000 08 D406 6305 630D 6338
< D507 xx yy zz 9000
Update the register values:
xx = xx | 0x04; // CIU_TxAuto |= InitialRFOn
yy = yy & 0xEF; // CIU_ManualRCV &= ~ParityDisable
zz = zz & 0xF7; // CIU_Status2 &= ~MFCrypto1On
WriteRegister:
> FF000000 11 D408 6302 80 6303 80 6305 xx 630D yy 6338 zz
< D509 9000
SetParameters:
> FF000000 03 D412 30
< D513 9000
TgInitAsTarget:
> FF000000 27 D48C 05 0400 123456 20 000000000000000000000000000000000000 00000000000000000000 00 00
< D58D xx ... 9000
Where xx should be equal to 0x08.
Communicate using a sequence of TgGetData and TgSetData commands:
> FF000000 02 D486
< D587 xx <C-APDU> 9000
Where xx is the status code (should be 0x00 for success) and C-APDU is the command sent from the reader.
> FF000000 yy D48E <R-APDU>
< D58F xx 9000
Where yy is 2 + the length of the R-APDU (response) and xx is the status code (should be 0x00 for success).
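Putting the whole sequence together, here is a hedged sketch in Python with pyscard (the wrapper names are my own; the register masks and command codes are exactly the ones above, and the R-APDU is a placeholder), given a connection like the one from the earlier sketch:

def pn532(conn, payload):
    # Wrap a PN532 payload in the ACR122U pseudo-APDU FF 00 00 00 Lc.
    data, sw1, sw2 = conn.transmit([0xFF, 0x00, 0x00, 0x00, len(payload)] + payload)
    assert (sw1, sw2) == (0x90, 0x00), "reader error %02X%02X" % (sw1, sw2)
    return data

def emulate(conn):
    # ReadRegister CIU_TxAuto, CIU_ManualRCV, CIU_Status2
    xx, yy, zz = pn532(conn, [0xD4, 0x06, 0x63, 0x05, 0x63, 0x0D, 0x63, 0x38])[2:5]
    xx |= 0x04  # CIU_TxAuto |= InitialRFOn
    yy &= 0xEF  # CIU_ManualRCV &= ~ParityDisable
    zz &= 0xF7  # CIU_Status2 &= ~MFCrypto1On
    pn532(conn, [0xD4, 0x08, 0x63, 0x02, 0x80, 0x63, 0x03, 0x80,   # WriteRegister
                 0x63, 0x05, xx, 0x63, 0x0D, yy, 0x63, 0x38, zz])
    pn532(conn, [0xD4, 0x12, 0x30])                                # SetParameters
    pn532(conn, [0xD4, 0x8C, 0x05, 0x04, 0x00, 0x12, 0x34, 0x56,   # TgInitAsTarget
                 0x20] + [0x00] * 30)
    while True:
        res = pn532(conn, [0xD4, 0x86])        # TgGetData
        status, capdu = res[2], res[3:]        # D5 87 status <C-APDU>
        if status != 0x00:
            break
        rapdu = [0x6D, 0x00]                   # placeholder R-APDU (hypothetical)
        pn532(conn, [0xD4, 0x8E] + rapdu)      # TgSetData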

You can use libnfc; it has example code for this.
I still never got this working properly on Windows, unfortunately. You will probably have to compile libnfc for specific drivers.
Also, the ACR122U seems to be pretty poorly supported by many libraries; apparently it's not really designed for this use, and there are particular issues with card emulation too (such as the timeout). We really all need to stop buying the ACR122U. I just bought what was popular and easy to get hold of, but I regret it now.
To future browsers/searchers coming across this: please check the compatibility section on the libnfc site and buy something that they recommend!

Related

Problem decoding h264 over RTP TCP stream

I'm trying to receive an RTP stream encoding h264 over TCP from my intercom, a Hikvision DS-KH8350-WTE1. By reverse engineering I was able to replicate how Hikvision's original software (Hik-Connect on iPhone and iVMS-4200 on macOS) connects and negotiates streaming. Now I'm getting the very same stream as the original apps, verified through Wireshark. Now I need to "make sense" of the stream. I know it's RTP because I inspected how iVMS-4200 uses it with /usr/bin/sample on macOS, which yields:
! : 2 CStreamConvert::InputData(void*, int) (in libOpenNetStream.dylib) + 52 [0x11ff7c7a6]
+ ! : 2 SYSTRANS_InputData (in libSystemTransform.dylib) + 153 [0x114f917f2]
+ ! : 1 CRTPDemux::ProcessH264(unsigned char*, unsigned int, unsigned int, unsigned int) (in libSystemTransform.dylib) + 302 [0x114fa2c04]
+ ! : | 1 CRTPDemux::AddAVCStartCode() (in libSystemTransform.dylib) + 47 [0x114fa40f1]
+ ! : 1 CRTPDemux::ProcessH264(unsigned char*, unsigned int, unsigned int, unsigned int) (in libSystemTransform.dylib) + 476 [0x114fa2cb2]
+ ! : 1 CRTPDemux::ProcessVideoFrame(unsigned char*, unsigned int, unsigned int) (in libSystemTransform.dylib) + 1339 [0x114fa29b3]
+ ! : 1 CMPEG2PSPack::InputData(unsigned char*, unsigned int, FRAME_INFO*) (in libSystemTransform.dylib) + 228 [0x114f961d6]
+ ! : 1 CMPEG2PSPack::PackH264Frame(unsigned char*, unsigned int, FRAME_INFO*) (in libSystemTransform.dylib) + 238 [0x114f972fe]
+ ! : 1 CMPEG2PSPack::FindAVCStartCode(unsigned char*, unsigned int) (in libSystemTransform.dylib) + 23 [0x114f97561]
I can catch that with lldb and see the arriving packet data, which makes sense given the format I'm describing.
The packet signatures look like the following:
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x57 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x94 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c
0x85 0xb8 0x00 0x00 0x00 0x00 0x01 0x65 0xb8 0x00 0x00 0x0a 0x35 ...
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x58 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x95 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c 0x05 0x15 0xac ...
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x59 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x96 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c 0x05 0x5d 0x00 ...
By reverse engineering the original software I was able to figure out that 0x7c85 indicates a key frame. In the genuine software's processing, the 0x7c85 bytes get replaced by the h264 key-frame NALU start code 00 00 00 01 65; that's the h264 Annex B format.
The 0x7c05 packets always follow and carry the remaining payload of the key frame. No NALU is added during their handling (the 0x7c05 is stripped away and the rest of the bytes are copied). None of the bytes preceding 0x7cXX make it into an mp4 recording (that makes sense, as it's the RTP protocol, although I'm not sure whether it's entirely standard RTP or whether there's something custom from Hikvision).
If you pay close attention, there are two separate sequence bytes in the header which always match, so I'm sure no packet loss is occurring.
I also observed non-key frames arriving as 0x7c81 and converted to the 00 00 00 01 61 NALU, but I want to focus solely on the single key frame for now, mostly because a movie recorded with the original software always begins with the 00 00 00 01 65 key frame (which obviously makes sense).
To get a working mp4 I decided to copy-paste the mp4 header of a genuine iVMS-4200 recording (in the sense that it's every byte preceding the first frame NALU 00 00 00 01 65 in the mp4 file). I know the resolution will match the actual camera footage. With the strategy of waiting for a keyframe, replacing 0x7c85 with the 00 00 00 01 65 NALU and appending the remaining bytes, or only appending the bytes in the 0x7c05 case, I do seem to get something that could eventually work (see the sketch below).
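For illustration, here is a minimal sketch (my own Python, not the asker's code) of the strategy just described; it assumes `packets` is an iterable of payloads like the dumps above and naively searches for the 0x7cXX markers instead of computing the real RTP header length:

def rebuild_keyframe(packets):
    # Annex B start code + NAL header 0x65 (IDR slice), as substituted
    # by the genuine software for the 0x7c85 marker.
    IDR_START = b"\x00\x00\x00\x01\x65"
    out = bytearray()
    for p in packets:
        i = p.find(b"\x7c\x85")            # key-frame marker
        if i >= 0:
            out += IDR_START + p[i + 2:]   # replace marker with NALU start
            continue
        i = p.find(b"\x7c\x05")            # continuation of the key frame
        if i >= 0:
            out += p[i + 2:]               # strip marker, copy the rest
    return bytes(out)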
When I attempt to ffplay the custom-crafted mp4 result I do get something (with a little stretch of imagination, that's actually the camera's fisheye image forming), but clearly there is a problem.
It seems that around the 3rd or 4th 0x7c05 packet (the failing packet differs on every run) the copied bytes make the h264 stream incorrect. Just by eye-inspecting the bytes I don't see anything unusual.
This is the failing packet, around offset 750 decimal (I know it's around this place because I keep stripping bytes away to see whether the same number of frames arrives before it breaks).
Moreover, I dumped those bytes from the original software using lldb, taking my own Python implementation out of the equation, and I ran into the very same problem with the original packets.
The mp4 header I use should work (since it does for original recordings, even if I manipulate the number of frames and leave just the first keyframe).
Correct me if I'm wrong, but the phase of converting this to MPEG2-PS (which iVMS-4200 does and I don't) should be entirely optional and should not be related to my problem.
Update:
I went down the path of setting up a recording and only then dumping the original iVMS-4200 packets. I edited the recorded movie to contain only the keyframe of interest, and it works. I found differences, but I cannot yet explain why they are there:
Somehow 00 00 01 E0 13 FA 88 00 02 FF FF is inserted in the genuine recording (that's the 4th packet), but I have no idea how this byte string was generated or what its purpose is.
When I fixed the first difference, the next one appeared.
The pattern is striking. But what is 00 00 01 E0 13 FA 88 00 02 FF FF, actually? And why is it inserted after 18 03 25 10 and 2F F4 80 FC?
The 00 00 01 E0 signature would suggest those are Packetized Elementary Stream (PES) headers.
Going for the mp4 container wasn't a good choice after all. It turns out the RTP essentially yields a raw h264 stream. To inspect its structure I converted the genuine mp4 recording to .264 like this:
ffmpeg -i recording.mp4 -codec copy recording.264
It's essentially an SPS (00 00 00 01 67) and a PPS (00 00 00 01 68) followed by the frame-data NALUs that I got in the stream.
Raw h264 turned out to be a much simpler structure to aim at, and I don't have to deal with those Packetized Elementary Stream (PES) headers anymore. That yields a correct image. In my case I just took the SPS & PPS settings that the original recordings use from recording.264. That could definitely be resolved dynamically somehow, but I didn't bother.
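To make the final step concrete, here is a short sketch (my own, with assumed variable names): sps and pps are the parameter-set NALUs copied out of recording.264, and frames would come from something like rebuild_keyframe() above:

def write_raw_h264(path, sps, pps, frames):
    # Annex B elementary stream: SPS, then PPS, then the frame NALUs in order.
    with open(path, "wb") as f:
        f.write(sps)        # 00 00 00 01 67 ... (SPS)
        f.write(pps)        # 00 00 00 01 68 ... (PPS)
        for frame in frames:
            f.write(frame)  # 00 00 00 01 65 ... (IDR) and friends

# e.g. write_raw_h264("out.264", sps, pps, [keyframe]) and then: ffplay out.264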

How to explain embedded message binary wire format of protocol buffer?

I'm trying to understand the protocol buffer encoding method. When translating a message to binary (or hexadecimal) format, I can't understand how the embedded message is encoded.
I guess maybe it's related to a memory address, but I can't find the exact relationship.
Here is what I've done.
Step 1: I defined two messages in a test.proto file:
syntax = "proto3";
package proto_test;
message Education {
string college = 1;
}
message Person {
int32 age = 1;
string name = 2;
Education edu = 3;
}
Step 2: Then I generated some Go code:
protoc --go_out=. test.proto
Step 3: Then I checked the encoded format of the message:
p := proto_test.Person{
    Age:  666,
    Name: "Tom",
    Edu: &proto_test.Education{
        College: "SOMEWHERE",
    },
}
var b []byte
out, err := p.XXX_Marshal(b, true)
if err != nil {
    log.Fatalln("fail to marshal with error: ", err)
}
fmt.Printf("hexadecimal format:% x \n", out)
fmt.Printf("binary format:% b \n", out)
which outputs,
hexadecimal format:08 9a 05 12 03 54 6f 6d 1a fd 96 d1 08 0a 09 53 4f 4d 45 57 48 45 52 45
binary format:[ 1000 10011010 101 10010 11 1010100 1101111 1101101 11010 11111101 10010110 11010001 1000 1010 1001 1010011 1001111 1001101 1000101 1010111 1001000 1000101 1010010 1000101]
What I understand is:
08 - int32 wire type with tag number 1
9a 05 - varint for 666
12 - string wire type with tag number 2
03 - length-delimited value, 3 bytes long
54 6f 6d - ASCII for "Tom"
1a - embedded-message wire type with tag number 3
fd 96 d1 08 - ? (here is what I don't understand)
0a - string wire type with tag number 1
09 - length-delimited value, 9 bytes long
53 4f 4d 45 57 48 45 52 45 - ASCII for "SOMEWHERE"
What does fd 96 d1 08 stand for?
It seems that d1 08 is always there, but fd 96 sometimes changes; I don't know why. Thanks for answering :)
Update:
I debugged the marshal process and reported a bug here.
At that location you would expect the number of bytes in the embedded message.
I have repeated your experiment in Python.
msg = Person()
msg.age = 666
msg.name = "Tom"
msg.edu.college = "SOMEWHERE"
I got a different result, the one I would expect: a varint stating the size of the embedded message.
0x08
0x9A, 0x05
0x12
0x03
0x54 0x6F 0x6D
0x1A
0x0B <- equals 11 decimal (the size of the embedded message)
0x0A
0x09
0x53 0x4F 0x4D 0x45 0x57 0x48 0x45 0x52 0x45
Next I deserialized your bytes:
msg2 = Person()
data = bytearray(b'\x08\x9a\x05\x12\x03\x54\x6f\x6d\x1a\xfd\x96\xd1\x08\x0a\x09\x53\x4f\x4d\x45\x57\x48\x45\x52\x45')
msg2.ParseFromString(data)
print(msg2)
The result of this is perfect:
age: 666
name: "Tom"
edu {
college: "SOMEWHERE"
}
The conclusion I come to is that there are different ways of encoding in protobuf. I do not know what is done in this case, but I do know the example of a negative 32-bit varint: a positive int32 is encoded as a varint of up to five bytes, while a negative value is sign-extended to 64 bits and encoded in ten bytes.
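For reference, here is a tiny pure-Python varint decoder (my own sketch, independent of the protobuf library) that confirms both readings above: 9a 05 decodes to 666, and a well-formed Person would prefix the 11-byte Education submessage with the single length byte 0x0b:

def decode_varint(buf, pos=0):
    # Protobuf varints: 7 payload bits per byte, least-significant group
    # first, MSB set on every byte except the last.
    result = shift = 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not b & 0x80:
            return result, pos
        shift += 7

print(decode_varint(bytes([0x9A, 0x05])))  # (666, 2)
print(decode_varint(bytes([0x0B])))        # (11, 1)
# The Education submessage is 0a 09 "SOMEWHERE" = 2 + 9 = 11 bytes.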

parse the following ibeacon packet

I am trying to parse this iBeacon packet, received by scanning through an HCI socket:
b'\x01\x03\x00\x18\xbe\x99m\xf3\x14\x1e\x02\x01\x1a\x1a\xffL\x00\x02\x15e\xec\xe2\x90\xc7\xdbM\xd0\xb8\x1aV\xa6-b 2\x00\x00\x00\x02\xc5\xcc'
hex format 01 03 00 18 be 99 6d f3 14 1e 02 01 1a 1a ff 4c 00 02 15 65 ec e2 90 c7 db 4d d0 b8 1a 56 a6 2d 62 20 32 00 00 00 02 c5 cc
The parameters after applying the parser are:
'UUID': '65ece290c7db4dd0b81a56a62d622032', 'MAJOR': '0000', 'MINOR': '0002', 'TX': -59, 'RSSI': -60
I am not sure if the RSSI portion of this parsing is right.
Referring to this https://stackoverflow.com/a/19040616/10355673, the last byte of the beacon advertising packet is the TX power value.
So how do we get the RSSI value? Here, I have taken the RSSI to be cc and TX to be c5. Is this correct?
There are flags headers before the manufacturer advertisement sequence shown below, but you really don't care about the flags. Here are the bytes you care about:
ff # manufacturer adv type
4c 00 # apple Bluetooth company code
02 15 # iBeacon type code
65 ec e2 90 c7 db 4d d0 b8 1a 56 a6 2d 62 20 32 # proximity uuid
00 00 # major
00 02 # minor
c5 # measured power (tx power)
cc # crc
Proximity UUID: 65ece290-c7db-4dd0-b81a-56a62d622032,
Major: 0,
Minor: 2,
Measured Power: -59 dBm
The RSSI is not part of the transmitted packet; it is a measurement taken by the receiver based on the strength of the signal, and it will typically be slightly different for each packet received. You get this value from an API on a mobile device or embedded system, which fetches it from the Bluetooth chip.
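To tie the offsets together, here is a small parsing sketch (my own Python, hard-coded to the layout shown in this answer rather than a general AD-structure walker):

import struct

pkt = bytes.fromhex(
    "01030018be996df3141e02011a1aff4c000215"
    "65ece290c7db4dd0b81a56a62d622032"
    "00000002c5cc"
)
i = pkt.find(b"\xff\x4c\x00\x02\x15")       # manufacturer AD type, Apple, iBeacon
uuid = pkt[i + 5 : i + 21].hex()            # 16-byte proximity UUID
major, minor = struct.unpack(">HH", pkt[i + 21 : i + 25])
(tx_power,) = struct.unpack("b", pkt[i + 25 : i + 26])  # signed: 0xc5 -> -59 dBm
print(uuid, major, minor, tx_power)         # 65ece290... 0 2 -59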

Re-attempting BASIC 6502 N-byte integer addition?

I initially asked for help and wrote a BASIC program in the 6502 PET emulator which added two n-byte integers. However, the feedback I received was that it was simply adding two 16-bit integers, not n-byte integers.
Can anyone help me understand this feedback by looking at my code, and point me in the right direction to write a program that adds two n-byte integers?
Thank you for the collaboration!
Documentation:
Adds two n-byte integers using absolute indexed addressing. The byte length of the integers is at $0600 (up to 256), the addends begin at memory locations $0700 and $0800, and the answer is at $0900.
Machine Code:
18 a2 00 ac 00 06 bd 00
07 7d 00 08 9d 00 09 e8
00 88 00 d0
Op Codes, Documentation, Variables:
A1 = $0600
B1 = $0700
B2 = $0800
Z1 = $0900
[START] = $0500
CLC 18 // clear carry
LDX #$00 A2 00 // load X with 0
LDY A1 AC 00 06 // load length into Y
loop: LDA B1, x BD 00 07 // load first operand byte
ADC B2, x 7D 00 08 // add second operand byte
STA Z1, x 9D 00 09 // store result
INX E8 00 // go to next byte
DEY 88 00 // count how many are left
BNE loop D0 // do more if needed
It looked to me like your code does what you claim -- adds two N byte operands in little-endian byte order. I vaguely remembered the various addressing modes of the 6502 from my misspent youth and the code seems fine. X is used to index the current byte from the two numbers, Y is a counter for the length of the operands in bytes and you loop over those bytes, stored at addresses 0x0700 and 0x0800 and write the result at address 0x0900.
Rather than get the Commodore 64 out of the attic and try it out I used an online virtual 6502 simulator. On this site we can set the memory address and load the byte values in. They even link to a page to assemble opcodes too. So setting the memory locations 0x0600 to "04" and both 0x0700 and 0x0800 to "04 03 02 01" we should see this code add these two 32 bit values (0x01020304 + 0x01020304 == 0x02040608).
Stepping through the code by clicking on the PC register, setting it to 0x0500 and then single-stepping, we see there is a bug in your machine code. After INX, which assembles to E8, we hit a spurious 0x00 value (BRK) which terminates the program. The corrected code below runs to completion, and the expected value is seen by reading the memory at 0x0900.
0000 CLC 18
0001 LDX #$00 A2 00
0003 LDY $0600 AC 00 06
0006 LOOP: LDA $0700,X BD 00 07
0009 ADC $0800,X 7D 00 08
000C STA $0900,X 9D 00 09
000F INX E8
0010 DEY 88
0011 BNE LOOP D0 F3
Memory dump:
:0900 08 06 04 02 00 00 00 00
:0908 00 00 00 00 00 00 00 00
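As a quick cross-check, the same byte-wise little-endian addition with carry can be sketched in a few lines of Python (my own verification, mirroring the simulator memory above):

def add_n_bytes(a, b):
    out, carry = [], 0             # CLC
    for x, y in zip(a, b):         # X indexes the bytes, Y counts them down
        s = x + y + carry          # ADC: add with carry
        out.append(s & 0xFF)       # STA: store the low byte
        carry = s >> 8             # carry into the next byte
    return out

print(add_n_bytes([0x04, 0x03, 0x02, 0x01], [0x04, 0x03, 0x02, 0x01]))
# [8, 6, 4, 2] -> matches the dump at $0900: 08 06 04 02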

Hashing a 64-bit value into a 32-bit MAC address

I'm looking for suggestions on how to convert a 64-bit die revision field into a 32-bit MAC address that I can use for a wireless application while avoiding collisions.
The die information is
struct {
    uint32_t lot;
    uint16_t X_coordinate;
    uint16_t Y_coordinate;
}
I don't know the range of the coordinates, but based on a few samples, I think the coordinates are limited to < 256. That effectively reduces the space by 2 bytes. But the lot number is fully populated.
I'm going to try this (pseudocode to make it readable; I'm leaving the casts out):
MAC = X_coordinate | Y_coordinate << 8 | lot << 16;
and throw away the top 16 bits of the lot and the top 8 bits of each coordinate. I feel, though, that maybe I should XOR the top 16 bits of the lot in somewhere, but I have no real-world experience with this. (A sketch of that variant follows the sample data below.)
Here is a sample of die revision information (little-endian byte dump):
lot/wafer ID   X coordinate   Y coordinate
C3 1B B0 46    20 00          22 00
CB 8B 94 46    14 00          32 00
CB 8B 94 46    27 00          1E 00
B9 F7 80 6F    20 00          08 00
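Here is a minimal sketch (my own, untested against real collision data) of the proposed packing plus an XOR fold of the otherwise-discarded top lot bits:

def die_to_mac32(lot, x, y):
    # Pack the low byte of each coordinate and the low 16 bits of the lot.
    mac = (x & 0xFF) | ((y & 0xFF) << 8) | ((lot & 0xFFFF) << 16)
    # Fold the top 16 lot bits back in instead of throwing them away.
    mac ^= (lot >> 16) & 0xFFFF
    return mac & 0xFFFFFFFF

# First sample row, little endian: lot = 0x46B01BC3, X = 0x0020, Y = 0x0022
print(hex(die_to_mac32(0x46B01BC3, 0x0020, 0x0022)))  # 0x1bc36490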
