I'm about to implement the LLCP / SNEP protocol based on a PN532 NFC chip from NXP (purely for learning purposes), and I'm currently studying the LLCP specification of the NFC Forum.
I'm pretty familiar with the MAC layer of NFC as specified in ISO 18092, but I have some problems understanding how the "asynchronous balanced mode (ABM)" of LLCP works.
To my understanding, ABM gives both the Initiator and the Target the possibility to send data at any time (on top of the underlying master/slave approach). Especially for the Target, I don't really understand how this is supposed to work.
For example, I have my PN532 acting as Initiator, which pushes an NDEF message via SNEP to an NFC-enabled smartphone. Let's say the LLCP connection stays open and the Target decides to send another NDEF message back to the Initiator at a later point in time.
How can the Target start this transmission when the Initiator sends no request for it?
I'm not sure, but this is maybe linked to the "Symmetry Procedure" as specified in section 5.8 of LLCP 1.0.
My assumption is that once the Initiator has received the last acknowledgement for the previously sent NDEF message or information block/frame, it keeps sending SYMM LLC PDUs just before the LTO (link timeout) would expire. This gives the Target the chance to send a new information block/frame of its own, for example. This continues until the LLCP link gets deactivated.
Can anybody please tell me if my assumption is correct (and if not, how it actually works)?
PS: Sorry for my English - it isn't my native language.
Yes, your assumption is correct. When idle, the initiator will regularly send SYMM frames to:
check whether the target is still responding (i.e. the link is still up)
give the target a chance to send out a pending data frame (see the sketch below)
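As a rough sketch of what the initiator's idle loop looks like on top of the PN532 (the two-byte SYMM PDU layout is straight from the LLCP spec; the pn532_data_exchange and handle_llc_pdu helpers are hypothetical placeholders for your own driver code):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical placeholders for your PN532 driver / LLC handling: */
extern size_t pn532_data_exchange(const uint8_t *tx, size_t txlen,
                                  uint8_t *rx, size_t rxlen);
extern void handle_llc_pdu(const uint8_t *pdu, size_t len);

/* An LLC PDU header is DSAP(6 bits) | PTYPE(4 bits) | SSAP(6 bits).
 * A SYMM PDU has DSAP = 0, PTYPE = 0b0000 and SSAP = 0, i.e. two zero bytes. */
static const uint8_t SYMM_PDU[2] = { 0x00, 0x00 };

void llcp_initiator_idle_loop(void)
{
    uint8_t rx[256];
    for (;;) {
        /* Nothing to send: exchange a SYMM before the LTO expires,
         * which hands the "turn" over to the target. */
        size_t n = pn532_data_exchange(SYMM_PDU, sizeof SYMM_PDU, rx, sizeof rx);
        if (n >= 2 && (rx[0] != 0x00 || rx[1] != 0x00)) {
            /* The target used its turn to send a real PDU (e.g. an I PDU). */
            handle_llc_pdu(rx, n);
        }
        /* A SYMM reply means the target had nothing to send either. */
    }
}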
First, I want to apologize. I am a complete noob in this area and many of my thoughts are probably misguided.
I need to verify that a user of my app is in a specific place in order to be authorized to perform an action. I want to use NFC for this purpose. The user has to hold his smartphone against an NFC tag in order to be authorized to perform the action. Easy, but I need it to be reasonably hacker-proof. That means the NFC tag must be impossible to clone without physically damaging the plastics around the NFC chip. It also means the NFC chip must not contain only static data: it must run an app that can receive some data (a cryptographic challenge) and sign it using a secure built-in private key (which must be unreadable through the NFC interface).
When the user wants to perform the action, he asks the server for the challenge, lets the chip sign it, and sends the signed challenge back to the server, which verifies the signature using the known public key. This should be achievable using an NFC JavaCard. But do these NFC JavaCards actually exist? I wasn't able to find a company that could produce such NFC tags for me. When I try to explain my requirements to an NFC tag producer, he looks like he has never heard of NFC JavaCards. I have tried about 10 producers without luck.
Can a commonly available chip meet my requirements? I mean a chip from the MIFARE family. I suspect that MIFARE DESFire might be able to meet my requirements, but I am not sure.
Feel free to respond with an advertisement, because a relevant advertisement is exactly what I am looking for :)
Let me collect some useful facts:
NFC is a very broad term; just finding it on both sides does not ensure interoperability.
Any ISO 14443 compliant smart card (one of the NFC flavours) with crypto functionality should be usable. Note that a card with a native OS may be a viable alternative to a JavaCard, since the functionality to sign a random number is pretty standard.
Any smartphone sporting an NFC chip can address such a card in principle. Unfortunately this depends strongly on the OS of the smartphone; for Android the relevant class to use is IsoDep, which gives you the APDU interface (see the sketch after this list). After the "card enters field" event is triggered, the app receives a handle via which further communication can take place.
Real smart cards can't be cloned, since you are not able to dump them; in particular, keys can't be read.
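To illustrate the challenge/response exchange mentioned above, here is a hedged sketch of building such an APDU in C. Only the INS byte 0x88 (INTERNAL AUTHENTICATE) is standard ISO 7816-4; the CLA/P1/P2 values and the 8-byte challenge length are assumptions and depend entirely on the card OS or applet:

#include <stdint.h>
#include <string.h>

/* Build an INTERNAL AUTHENTICATE APDU carrying the server's challenge.
 * CLA/P1/P2 and the challenge length are illustrative assumptions. */
size_t build_auth_apdu(const uint8_t challenge[8], uint8_t out[14])
{
    out[0] = 0x00;               /* CLA */
    out[1] = 0x88;               /* INS: INTERNAL AUTHENTICATE (ISO 7816-4) */
    out[2] = 0x00;               /* P1: algorithm reference (card specific) */
    out[3] = 0x00;               /* P2: key reference (card specific) */
    out[4] = 0x08;               /* Lc: length of the challenge */
    memcpy(&out[5], challenge, 8);
    out[13] = 0x00;              /* Le: ask for the signed response */
    return 14;
}

On Android you would send these bytes with IsoDep.transceive() and forward the card's response to the server for verification.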
Now some things to consider:
Your approach looks unusual, which might become a problem. (You would have a portable card somehow fixed to a wall just to prove the location, so you know where somebody is, but not who? While I don't consider cloning to be an issue, you somehow must ensure destruction in case of a theft attempt, which may collide with the distance topic below.)
I don't see where the server comes into play. If it is not involved in the authorized action itself, merely providing a random number is not a sufficient reason to involve it.
Asymmetric key operations have a comparatively high power consumption, and this power has to be supplied via the RF field. This severely limits the distance between card and phone and may even require direct touch. While a power supply of its own would solve the issue in principle, that is not what ISO 14443 was designed for.
Yes, NFC JavaCards do exist.
https://github.com/OpenJavaCard/openjavacard-ndef is a project that makes these JavaCards output standard NDEF messages (though note issue 4: their example uses the wrong APDU, but that is easily changed).
The project also lists a number of cards it is fully working on and tested with:
ACS ACOSJ - fully working
NXP JCOP J3D040/J3D081/J2E145 etc. - fully working
Both ACS and Cardlogic do cards (just google the model numbers), e.g.:
https://www.acs.com.hk/en/products/405/acosj-java-card-combi/
https://www.smartcardfocus.com/shop/ilp/id~707/j3a081-80k/p/index.shtml
The answer I was looking for is not a chip that runs custom code. Although this might be possible, it is definitely not the best way to achieve the goal.
I was looking for a solution that enables strong authentication using NFC data. There might be multiple chips that offer this, but probably the most widely available one is the NTAG 424 DNA TT. It works like this:
The chip has a memory area that is not readable through NFC. The private key is stored there.
The chip has a read counter. It increments every time the data are read through NFC.
The chip can generate an AES-128 signature of the string UID (chip serial number) + counter using the private key in the inaccessible part of the memory.
The chip can dynamically inject the data above into a URL that is stored in the readable memory.
So the solution will look like this (I am waiting for delivery of the NFC tags right now, so I don't know for sure yet):
Read the tag UID (serial number) and the actual counter value (should be 0 on an unused tag)
Generate the key-pair
Load private key to the chip
Load some data (a URL, e.g. https://my.app/) to the chip
Store UID, public-key, last-counter on the server
Configure the chip to inject UID, counter, signature to the URL stored on the chip
When a client reads the data, it should contain the required variables, e.g. https://my.app/?counter=1&uid=ff:ff:ff:ff&signature=xyz. Then the server (see the sketch after this list):
Fetch the stored info (public-key, last-counter) using uid as the primary key
Verify the signature
Verify that the counter is > last-counter
Store counter as the new last-counter
Successfully authorized
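A minimal sketch of that server-side check in C (the db_lookup / verify_signature / db_store_counter helpers are hypothetical placeholders; how the signature is actually verified depends on how the chip is configured):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical storage and crypto helpers: */
extern bool db_lookup(const char *uid, uint8_t key[16], uint32_t *last_counter);
extern bool verify_signature(const uint8_t key[16], const char *uid,
                             uint32_t counter, const uint8_t *sig, size_t sig_len);
extern void db_store_counter(const char *uid, uint32_t counter);

bool authorize(const char *uid, uint32_t counter,
               const uint8_t *signature, size_t sig_len)
{
    uint8_t key[16];
    uint32_t last_counter;

    if (!db_lookup(uid, key, &last_counter))               /* unknown tag */
        return false;
    if (!verify_signature(key, uid, counter, signature, sig_len))
        return false;                                      /* bad signature */
    if (counter <= last_counter)                           /* replayed or stale read */
        return false;
    db_store_counter(uid, counter);                        /* remember new counter */
    return true;                                           /* successfully authorized */
}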
Is anyone able to hack this without reading the hidden memory of the chip which would require physical tampering with the chip?
I have an HID device that is somewhat unfortunately designed (the Griffin Powermate), in that as you turn it, the input value for the "Rotation Axis" HID element doesn't change unless the speed of rotation changes dramatically or the direction changes. It sends many HID reports (the angular resolution appears to be about 4 degrees, in that I get ~90 reports per revolution - not great, but whatever...), but they all report the same value (generally -1 or 1 for CCW and CW respectively; if you turn faster, it will report -2 and 2, and so on, but you have to turn much faster). As a result of this unfortunate behavior, I'm finding this thing largely useless.
It occurred to me that I might be able to write a background userspace app that seizes the physical device and presents another, virtual device with some minor additions, so as to cause an input value change for every report (like a wrap-around accumulator, which the HID spec has support for -- God only knows why Griffin didn't do this themselves).
But I'm not seeing how one would go about creating the kernel side object for the virtual device from userspace, and I'm starting to think it might not be possible. I saw this question, and its indications are not good, but it's low on details.
Alternately, if there's a way for me to spoof reports on the existing device, I suppose that would do it as well, since I could set it back to zero immediately after it reports -1 or 1.
Any ideas?
First of all, you can simulate input events via Quartz Event Services but this might not suffice for your purposes, as that's mainly designed for simulating keyboard and mouse events.
Second, the HID driver family of the IOKit framework contains a user client on the (global) IOHIDResource service, called IOHIDResourceDeviceUserClient. It appears that this can spawn IOHIDUserDevice instances on command from user space. In particular, the userspace IOKitLib contains an IOHIDUserDeviceCreate function which appears to be intended for exactly this. The HID family source code even comes with a little demo of this which creates a virtual keyboard of sorts. Unfortunately, although I can get it to build, it fails on the IOHIDUserDeviceCreate call. (I can see in IORegistryExplorer that the IOHIDResourceDeviceUserClient instance is never created.) I haven't investigated this further due to lack of time, but it seems worth pursuing if you need its functionality.
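For reference, this is roughly the shape of the call as I understand it from the demo (a sketch only - the header location and the "ReportDescriptor" property key are taken on faith from the HID family sample code, and as noted, the create call fails for me):

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/hid/IOHIDUserDevice.h>  /* may require the HID family sources */

/* Create a virtual HID device from a raw report descriptor.
 * desc/desc_len are the HID report descriptor bytes you want to expose. */
static IOHIDUserDeviceRef create_virtual_device(const uint8_t *desc, CFIndex desc_len)
{
    CFDataRef descriptor = CFDataCreate(kCFAllocatorDefault, desc, desc_len);
    CFMutableDictionaryRef props = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(props, CFSTR("ReportDescriptor"), descriptor);

    /* This is the call that fails for me: no IOHIDResourceDeviceUserClient appears. */
    IOHIDUserDeviceRef dev = IOHIDUserDeviceCreate(kCFAllocatorDefault, props);

    CFRelease(props);
    CFRelease(descriptor);
    return dev;  /* NULL on failure */
}

Once created, reports would presumably be injected with IOHIDUserDeviceHandleReport(dev, report, reportLength).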
I have quite a bewildering problem.
I'm using a big C++ library for handling some proprietary protocol over UDP on Windows XP/7. It listens on one port throughout the run of the program, and waits for connections from distant peers.
Most of the time, this works well. However, due to some problems I'd experienced, I decided to add a simple debug print directly after the call to WSARecvFrom (the Win32 function the library uses to receive datagrams from my socket of interest and to tell which IP and port they came from).
Strangely enough, in some cases, I've discovered packets are dropped at the OS level (i.e. I see them in Wireshark, they have the right dst-port, all checksums are correct - but they never appear in the debug prints I've implanted into the code).
Now, I'm fully aware of the fact (which people tend to mention a bit too often) that "UDP doesn't guarantee delivery" - but this is not relevant here, as the packets are received by the machine - I see them in Wireshark.
Also, I'm familiar with OS buffers and their potential to fill up, but here comes the weird part...
I've done some research trying to find out which packets are dropped exactly. What I've discovered, is that all dropped packets share two things in common (though some, but definitely not most, of the packets that aren't dropped share these as well):
They are small. Many of the packets in the protocol are large, close to MTU - but all packets that are dropped are under 100 bytes (gross).
They are always one of two: a SYN-equivalent (i.e. the first packet a peer sends to us in order to initiate communications) or a FIN-equivalent (i.e. a packet a peer sends when it is no longer interested in talking to us).
Can either one of these two qualities affect the OS buffers, and cause packets to be randomly (or even more interesting - selectively) dropped?
Any light shed on this strange issue would be very appreciated.
Many thanks.
EDIT (24/10/12):
I think I may have missed an important detail. It seems that the packets dropped before arrival share something else in common: They (and I'm starting to believe, only they) are sent to the server by "new" peers, i.e. peers that it hasn't tried to contact before.
For example, if a syn-equivalent packet arrives from a peer* we've never seen before, it will not be seen by WSARecvFrom. However, if we have sent a syn-equivalent packet to that peer ourselves (even if it didn't reply at the time), and now it sends us a syn-equivalent, we will see it.
(*) I'm not sure whether this is a peer we haven't seen (i.e. ip:port) or just a port we haven't seen before.
Does this help?
Is this some kind of WinSock option I've never heard of? (as I stated above, the code is not mine, so it may be using socket options I'm not aware of)
Thanks again!
The OS has a fixed-size buffer for data that has arrived at your socket but hasn't yet been read by you. When this buffer is exhausted, it will start to discard data. Debug logging may exacerbate this by slowing the rate at which you pull data from the socket, increasing the chances of overflow.
If this is the problem, you could at least reduce the instances of it by requesting a larger recv buffer.
You can check the size of your socket's recv buffer using
int recvBufSize;
int optLen = sizeof(recvBufSize);
int err = getsockopt(socket, SOL_SOCKET, SO_RCVBUF,
                     (char*)&recvBufSize, &optLen);
and you can set it to a larger size using
int recvBufSize = /* usage specific size */;
int err = setsockopt(socket, SOL_SOCKET, SO_RCVBUF,
(const char*)&recvBufSize, sizeof(recvBufSize));
If you still see data being received by the OS but not delivered to your socket client, you could think about different approaches to logging, e.g.:
Log to a RAM buffer and only print it occasionally, at whatever size you profile to be most efficient (see the sketch after this list)
Log from a low priority thread, either accepting that the memory requirements for this will be unpredictable or adding code to discard data from the log's buffer when it gets full
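As an illustration of the first option, a tiny sketch of a RAM ring logger (sizes and names are arbitrary; the ring silently overwrites the oldest entries if it wraps before a flush):

#include <stdio.h>
#include <string.h>

#define LOG_SLOTS 1024          /* power of two, so the unsigned indices wrap cleanly */
#define LOG_LINE  128

static char log_ring[LOG_SLOTS][LOG_LINE];
static unsigned log_head;       /* next slot to write */
static unsigned log_tail;       /* next slot to flush */

/* Cheap: call this right after WSARecvFrom; it only copies into RAM. */
static void log_fast(const char *line)
{
    strncpy(log_ring[log_head % LOG_SLOTS], line, LOG_LINE - 1);
    log_ring[log_head % LOG_SLOTS][LOG_LINE - 1] = '\0';
    log_head++;
}

/* Expensive: call this occasionally, outside the hot receive path. */
static void log_flush(FILE *out)
{
    for (; log_tail != log_head; log_tail++)
        fprintf(out, "%s\n", log_ring[log_tail % LOG_SLOTS]);
}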
I had a very similar issue. After confirming that the receive buffer wasn't causing drops, I learned that it was because I had the receive timeout set too low, at 1 ms. Setting the socket to non-blocking and not setting a receive timeout fixed the issue for me.
Turn off the Windows Firewall.
Does that fix it? If so, you can likely turn the Firewall back on and just add a rule for your program.
That's my most logical guess based on what you said here in your update:
It seems that the packets dropped before arrival share something else in common: They (and I'm starting to believe, only they) are sent to the server by "new" peers, i.e. peers that it hasn't tried to contact before.
I faced the same kind of problem on Red Hat Linux as well.
It turned out to be a routing issue.
The root cause analysis is as follows:
It is true that the UDP packet reaches the destination machine (it can be seen in Wireshark).
However, no route back to the source is found, so no reply can be seen in Wireshark.
On some OSes you can see the request packet in Wireshark, but the OS does not actually deliver the packet to the socket (you can see the socket in netstat -nap).
In such cases, always check with ping (ping <dest ip> -I <source ip>).
I need to get a list of interfaces on my local machine, along with their IP addresses, MACs and a set of QoS measurements (delay, jitter, error rate, loss rate, bandwidth).
I'm writing a kernel module to read this information from the local network devices. So far I've extracted everything mentioned above except for jitter and bandwidth.
I'm using linux kernel 2.6.35
It depends what you mean by bandwidth. In most cases, what you get from the PHY is something better called the bitrate. I guess you rather need some kind of information on the available bandwidth at a higher layer, which you can't get without doing active or passive measurements, e.g. sending ICMP echo-like probe packets and investigating the replies. You should also make clear which two points in the network (both the actual endpoints and the communication layer) you want to measure the available bandwidth between.
As for jitter, you also need to do some kind of measurement, basically in the same way as above.
I know this is an old post, but you could at least get jitter by inspecting the RTCP packets, if they're available. They come in on the RTP port + 1 and accompany any RTP stream, as far as I've seen. A lot of information can be gotten from RTCP, but for your purposes just the basic source description would do it.
EDIT: Just check out this link for the details of the protocol; you can get the jitter pretty easily from an RTCP packet.
Depending on what you're using the RTP stream for, there are a lot of other resources too, like the VoIP Metrics Report Block in the Extended Report (https://www.rfc-editor.org/rfc/rfc3611#page-25).
EDIT:
As per Artem's request here is a basic flow of how you might do it:
An RTP stream is started on, say, port 16400 (the drivers/mechanisms needed for this are most likely already in place).
Tell the kernel to start listening on port 16401 (one above your RTP stream's port) as well; this is where the RTCP packets will start coming in.
As the RTCP packets come in, send them wherever you want to handle them (e.g. to userspace if you want to parse them there).
Parse the packets for the desired data. I'm not aware of a particular library for this, but it's pretty easy to just point a struct at the data (in C) and dereference, watching out for endianness (see the sketch below).
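For step 4, a minimal sketch of pulling the interarrival jitter out of the first report block (field layout per RFC 3550; the packet is assumed to start at an RTCP SR or RR header):

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>  /* ntohl */

/* One report block of an RTCP Sender/Receiver Report (RFC 3550, section 6.4). */
struct rtcp_report_block {
    uint32_t ssrc;         /* source this block reports on */
    uint32_t lost;         /* fraction lost (8 bits) + cumulative lost (24 bits) */
    uint32_t highest_seq;  /* extended highest sequence number received */
    uint32_t jitter;       /* interarrival jitter, in RTP timestamp units */
    uint32_t lsr;          /* last SR timestamp */
    uint32_t dlsr;         /* delay since last SR */
};

/* pkt points at the start of an RTCP SR (PT = 200) or RR (PT = 201) packet. */
static uint32_t first_jitter(const uint8_t *pkt, size_t len)
{
    uint8_t pt = pkt[1];
    size_t off = (pt == 200) ? 28 : 8;  /* an SR carries a 20-byte sender info block */
    if ((pkt[0] & 0x1f) == 0 || len < off + sizeof(struct rtcp_report_block))
        return 0;                       /* no report block present */
    const struct rtcp_report_block *rb =
        (const struct rtcp_report_block *)(pkt + off);
    return ntohl(rb->jitter);           /* watch out for the endianness */
}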
I'm writing a linux kernel module that emulates a block device.
There are various calls that can be used to tell the kernel the block size, so that it aligns and sizes every request toward the driver accordingly. This is well documented in the "Linux Device Drivers 3" book.
The book describes two methods of implementing a block device: using a "request" function, or using a "make_request" function.
It is not clear whether the queue limit calls apply when using the minimalistic "make_request" approach (which is also the more efficient one if the underlying device really has no benefit from sequential over random I/O, which is my case).
I would really like to get the kernel to talk to me using 4K block sizes, but I see smaller bios hitting my make_request function.
My question is: should the blk_queue_limit_* calls affect the bio size when using make_request?
Thank you in advance.
I think I've found enough evidence in the kernel code that if you use make_request, you'll get correctly sized and aligned bios.
The answer is:
You must call blk_queue_make_request first, because it sets queue limits to defaults. After this, set queue limits as you'd like.
It seems that every part of the kernel that submits bios is expected to check them for validity; it's up to the submitter to do these checks. I've found only incomplete validation in submit_bio and generic_make_request. But as long as nobody plays tricks, it's fine.
Since it's only a policy to submit correct bios - it's up to the submitter to take care, and nobody in the middle enforces it - I think I have to implement explicit checks and fail the invalid bios. Since it's a policy, it's fine to fail on violation, and since it's not enforced by the kernel, doing explicit checks is a good idea.
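A sketch of what that looks like against the 2.6.3x-era API this question is about (sbd_queue and sbd_make_request are hypothetical names; here the policy is 4 KiB-aligned, 4 KiB-multiple bios, and everything else is failed):

#include <linux/bio.h>
#include <linux/blkdev.h>

static struct request_queue *sbd_queue;

static int sbd_make_request(struct request_queue *q, struct bio *bio)
{
    /* bi_sector counts 512-byte sectors, so "multiple of 8" means 4 KiB aligned. */
    if ((bio->bi_sector & 7) || (bio->bi_size & 0xfff)) {
        bio_endio(bio, -EIO);   /* policy violation: fail the bio explicitly */
        return 0;
    }
    /* ... transfer the data described by the bio ... */
    bio_endio(bio, 0);
    return 0;
}

static void sbd_setup_queue(void)
{
    sbd_queue = blk_alloc_queue(GFP_KERNEL);
    /* blk_queue_make_request resets the limits to defaults, so call it first... */
    blk_queue_make_request(sbd_queue, sbd_make_request);
    /* ...and only then ask for 4 KiB blocks. */
    blk_queue_logical_block_size(sbd_queue, 4096);
    blk_queue_physical_block_size(sbd_queue, 4096);
}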
If you want to read a bit more on the story, see http://tlfabian.blogspot.com/2012/01/linux-block-device-drivers-queue-and.html.