Smart card readers for MIFARE Classic cards - nfc

While reading papers on how to attack MIFARE Classic cards to figure out how secure they are, I have noticed that researchers often talk about "genuine readers", and I couldn't understand very well what is meant by this term. Don't all readers deal with the cards in the same way? That is, couldn't a reader at the place to be attacked be replaced by any reader bought on the market?
Or is there some kind of key or per-reader configuration that gives every place that uses a card reader a unique identity, one that can't be replicated by any new card reader bought on the market?

Usually "genuine reader" (or rather "genuine reader system") refers to the reader (including its control software) as it is in use in a live application (access control gate, vending machine, etc.) Thus, this reader (again reader + control software) knows how to access the cards (including the knowledge of the keys necessary to access any protect information on the card).
This term is used to differentiate it from a reader that can (technically) communicate with the same card technology but does not (necessarily) have information on how to access the card (or at least does not have knowledge of any necessary secret keys).
For instance, with MIFARE Classic, an attacker can compute the keys from any genuine card using any reader suitable for the key-breaking attack, even if that reader does not (yet) have knowledge of the actual keys of the card. The only requirement is to have a genuine card (i.e., a card that works in the system under attack).
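For illustration, here is a rough Python sketch of that card-only key-recovery flow (a nested-authentication style attack). Everything on the reader object, and crypto1_recover_key(), is a hypothetical placeholder rather than a real library API; real tools such as mfoc implement this logic in C.

```python
# Conceptual sketch of a card-only "nested authentication" key recovery
# against MIFARE Classic. reader.authenticate(), collect_nested_nonces()
# and crypto1_recover_key() are HYPOTHETICAL placeholders.

DEFAULT_KEYS = [bytes.fromhex(k) for k in (
    "FFFFFFFFFFFF",  # transport key
    "A0A1A2A3A4A5",  # common MAD key
    "D3F7D3F7D3F7",  # common NDEF key
)]

def recover_card_keys(reader, num_sectors=16):
    """Recover per-sector keys from a genuine card using any reader."""
    known = {}
    # Step 1: find at least one sector still protected by a well-known key.
    for sector in range(num_sectors):
        for key in DEFAULT_KEYS:
            if reader.authenticate(sector, key):
                known[sector] = key
                break
    if not known:
        raise RuntimeError("need one known key (or a darkside-style attack)")
    # Step 2: from an authenticated state, request authentication to the
    # remaining sectors; the card's weak PRNG and the Crypto1 cipher leak
    # enough nonce material to compute the unknown keys offline.
    base = next(iter(known))
    for sector in range(num_sectors):
        if sector in known:
            continue
        nonces = reader.collect_nested_nonces(base, known[base], sector)
        known[sector] = crypto1_recover_key(nonces)
    return known
```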

Related

Moleskine Pen+ compatible with notebooks of Neo Smart Lab

I'm very new to using smartpens and I'm not sure which forum is the best place to post questions about them...
I have a Moleskine Smart Writing Set but would like to make certain notes on single sheets (like these) or use other notebooks than the Paper Tablet from Moleskine.
I thought that this would be no problem, since the Pen+ uses the same Ncode technology as the smartpens from NeoLab do. But this turned out to be wrong... My Moleskine Notes app doesn't recognise the paper from NeoLab.
Do you have an idea how I can use my Pen+ with single paper sheets and/or other notebooks which use the Ncode technology?
Pen+ from Moleskine and Neo Notes work perfectly together (I've been using that combination for 2 years). With it you can use all coded papers from NeoLab, including the printed ones.
Regarding the first part of your question, printing and using your own pages depends primarily on your printer's ability to print "fine enough" so the pen can read the Ncode paper pattern. That said, I believe a laser printer matching these NeoLab-recommended criteria should work:
Use a color laser printer for printing Ncode PDF.
Install PCL/PS drivers to print exquisite Ncode. (Ask the printer manufacturer about PCL/PS driver.)
Use plain format for grayscale laser printers.
Regarding the second half of your question, concerning off-the-shelf compatibility, I had a similar question and had difficulty finding the answer until I started looking at the Question/Answer pages of these pen and paper products (e.g., Pen Q/A).
Finally, my sense from researching this "too much" is that the biggest snag is for people to realize that the communication protocols between pen and paper are different. In other words, the app you choose (i.e., Neo Notes or the Moleskine app) is the big decision. Thus, I believe choosing how to interface with the pen is the biggest choice to make.
TOTAL DISCLOSURE: I have yet to purchase either of these products and my above statements are somewhat speculative and not COMPLETELY verified.
My experience is that, at least for the Moleskine Ellipse Pen+, the Moleskine Notes app was required, and this unfortunately limited my selection of notebooks to Moleskine-branded ones. Neo Notes would not recognize and register the pen.
However, my experience with Neo Studio is that it recognizes both NeoLab Ncode and Moleskine-branded Ncode notebooks.

Computer programming

I have a question concerning computer programming. Let's say I have only one computer with no OS running, and I would like to start to "develop" an OS. Basically, all I have to do it with is a blank sheet and a pen, and a couple of electronic devices. How do I put my instructions into that computer?
Today we use interpreters or compilers that "turn" a programming language into what they call "machine code". So my question could be phrased as: how do you generate machine code from nothing?
Thank you for your replies; a link to learn how to do that would be most welcome.
The first computers were programmed by writing the "machine code" directly, just punching ones and zeros into cards (well, in fact they punched octal digits).
This was done that way until somebody thought it would be a nice idea to have an assembler, which translated mnemonic instructions into those ones and zeros.
After that, another person thought it would be a very nice idea to have a programming language, which would translate "top level" instructions into machine code.
And after that, or probably at the same time, some "internal procedures" were created to ease the programming: to open a file or to close a file, the only thing you have to do is call an internal subroutine in the machine instead of programming all the open-file and close-file subroutines yourself. The seed for operating systems was planted.
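As a toy illustration of that idea, here is a minimal Python sketch (all names and numbers invented) of a fixed table of resident routines that user code calls into instead of re-implementing them:

```python
# Sketch of the "internal procedures" idea: user programs call into a
# fixed table of resident routines. Names and call numbers are invented
# for illustration only.

def sys_open(name):                      # resident open-file routine
    return f"handle:{name}"

def sys_close(handle):                   # resident close-file routine
    return None

SYSCALL_TABLE = {0: sys_open, 1: sys_close}   # fixed entry points

def syscall(number, *args):
    return SYSCALL_TABLE[number](*args)

h = syscall(0, "data.txt")   # user code just "calls into the machine"
syscall(1, h)
```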
The cross-compiling approach mentioned here is the way an operating system for a new computer is created nowadays: you use a working computer as a "lever" to create an operating system for the new one.
It depends on how far back you want to go. On the earliest machines, "programming" was moving wires from one essentially analog ALU to another.
The women doing the programming at this point were called computers and used pencil and paper.
Later, you would use pencil and paper and the datasheet/documentation for the instruction set, and basically assemble by hand. There were no compilers, or even the concept of a programming language, at this point; that still had to evolve. You wrote down the ones and zeros in whatever form you preferred (binary or octal).
One way to enter code at this point was with switches. Certainly computers predated them, but look for a picture of the front panel of a PDP-8 or the Altair, etc. You set the switches for the data value and the address, and you manually strobe a write. You load the bootstrap in this way and/or the whole program, then set the start address and switch to run mode.
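To get a feel for the procedure, here is a toy Python simulation of front-panel entry; the machine and the octal values are invented for illustration only:

```python
# Toy simulation of front-panel program entry (PDP-8/Altair style):
# set the address switches, set the data switches, strobe DEPOSIT,
# repeat; then set the start address and flip to RUN.

memory = [0] * 256

def deposit(address_switches, data_switches):
    memory[address_switches] = data_switches      # manual write strobe

# Hand-entered program, one word at a time (values are arbitrary here).
deposit(0o10, 0o301)   # e.g. a "load" instruction
deposit(0o11, 0o042)   # operand address
deposit(0o12, 0o777)   # e.g. "halt"

start = 0o10           # set the start address switches, switch to RUN
print([oct(w) for w in memory[0o10:0o13]])
```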
Over time, card and tape readers were developed: you loaded the bootstrap in by hand (switches), then you could use a reader to load larger programs more easily. Cards could be punched on a typewriter-like device, literally a keyboard, but instead of striking through a ribbon onto paper, it cut slots in a card.
OSes and programming languages started to evolve at this point. Until you bootstrapped your compiler, you had to write the first compiler for a new language in some other language (no different than today). So the first assembler had to be written in machine code; then from assembler you could create some other language, and so on.
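To make that concrete, here is a minimal Python sketch of what the first assemblers automated; the three-bit instruction set is invented, and "hand assembly" means doing this table lookup yourself with pencil, paper, and the CPU datasheet:

```python
# Minimal table-driven assembler for an invented machine: mnemonic ->
# 3-bit opcode in the high bits, 9-bit octal address in the low bits.

OPCODES = {"LDA": 0b001, "ADD": 0b010, "STA": 0b011, "HLT": 0b111}

def assemble(source):
    words = []
    for line in source.strip().splitlines():
        parts = line.split()
        operand = int(parts[1], 8) if len(parts) > 1 else 0
        words.append((OPCODES[parts[0]] << 9) | (operand & 0o777))
    return words

program = """
LDA 100
ADD 101
STA 102
HLT
"""
print([oct(w) for w in assemble(program)])   # the ones and zeros, in octal
```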
If you wanted to repeat something like this today, you would have to build a computer with some sort of manual input. You certainly could, but you would have to design it that way (you would need to debounce the switches, for instance). You could, for example, take a processor with an external flash, be it parallel or serial, mux the lines out to the switches (with one switch controlling the mux), and enter your program as address/data/write by hand; or, for fun, you could use a SPI flash and serially load the program into the flash. It is much better to just use one of the PDP or Altair online simulators to get a feel for the experience.
There is no magic here; there is no chicken-and-egg problem at all. Humans had to do it by hand before the computer could do it: a smaller/simpler program had to generate more complicated programs, and so on. This long, slow evolution is well documented all over the internet and in books in libraries everywhere.
Computers are based on a physical processor designed to accept instructions (e.g., in assembly code) that only allow primitive operations like shift, move, copy, and add. This processor decided how it spoke, e.g., how big the words were (8-bit) and other specs (speed, standards, etc.). Using some type of storage (punch cards, disk), we could store the instructions and execute huge streams of them.
If instructions were repeated over and over, you could jump to an address and execute what was at that location, and so create loops and other constructs (branches, context switches, recursion).
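As a toy illustration, this Python fetch-decode-execute sketch (with an invented encoding) shows how jumping back to an address yields a loop:

```python
# Tiny fetch-decode-execute loop: "jump to an address" gives you a loop.
# The instruction set below is invented for illustration only.

memory = [
    ("SET", 0),      # acc = 0
    ("ADD", 1),      # acc += 1
    ("JLT", 5, 1),   # if acc < 5, jump back to address 1 -> a loop
    ("HLT",),
]

acc, pc = 0, 0
while True:
    op = memory[pc]
    if op[0] == "SET":   acc = op[1]; pc += 1
    elif op[0] == "ADD": acc += op[1]; pc += 1
    elif op[0] == "JLT": pc = op[2] if acc < op[1] else pc + 1
    elif op[0] == "HLT": break
print(acc)  # prints 5
```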
Since you would have peripherals, you would have some way to interface with them (draw, print dots), and you could create routines that build on that to produce letters, fonts, boxes, and lines. Then you could run a subroutine to print the letter 'a' on screen.
An OS is basically a high-level implementation of all those lower-level instructions. It is really a collection of all the instructions to interface with the different areas (I/O, computation, etc.). Unix is a perfect example of different folks working on different areas and plugging them all into a single OS.

Windows Phone 8 device as a Proximity Access Card

Can the NFC hardware within Lumia 920 emulate a 125 kHz Proximity Access Card?
It looks like the NFC hardware implements a standard that is a superset of the standard that access cards use. But I don't have enough knowledge of those radio standards to understand whether a phone can work only as a receiver of such signals or also as a transmitter.
I would also appreciate a link to a good overview article that explains those standards in simpler terms than the official specifications.
Currently, card emulation is not supported by the Windows Phone proximity API.
Sources:
NFC, card emulation
NFC Developer Comparison
It is impossible from both the physics and the standards perspective. NFC is based on ISO 14443 and uses a 13.56 MHz carrier, while proximity access cards operate in the low-frequency 125 kHz band (125-134 kHz). Both use near-field magnetic induction, but the carrier frequency, modulation, and protocol are completely different, and the phone's NFC antenna and front end are tuned for 13.56 MHz only; they cannot generate or receive a 125 kHz signal.
Conclusion: it is physically impossible in this exact case. It would work only if the access system itself used 13.56 MHz solutions for proximity (which is possible).
Take a look here:
This white paper details the features in Windows Phone 8.1 that you can use to create a UICC-based NFC card-emulation app.
http://www.microsoft.com/en-us/download/details.aspx?id=43681

Image Processing on a micro-controller

I'm interested in starting a hobbyist project where I do some image processing by interfacing HW and SW. I am quite a newbie at this. I know how to do some basic image processing in Matlab using the existing image-processing commands.
I personally enjoy working with HW and wanted to use a combination of HW/SW to be able to do this. I've read articles about people doing this with FPGAs, or with basic FPGAs/micro-controllers.
Here is my question: can someone recommend languages I should consider that will help me with the interfacing on a PC? I imagine the SW part would essentially be a GUI and a placeholder for all the processing that is done on the HW. Also, in terms of selecting the HW and realistically considering what I could do on it, could I get a few recommendations on that too?
Any recommendations will be appreciated!
EDIT: I read a few other posts saying the requirements are directly related to knowing what kind of image processing one is doing. Well, initially I want to do fingerprint recognition, so filtering and locating unique markers in the image, etc.
It all depends on what you are familiar with, how you plan on doing the interface between FPGA and PC, and generally the scale of what you want to do. Examples could be:
A fast system could, for instance, consist of a Xilinx SP605 board, using the PCI Express interface to quickly transfer image data between PC and FPGA. For this, you'd need to write a device driver (in C) and a user-space application (I've done this in C++/Qt).
A more realistic hobbyist system could be a Xilinx SP601 board, using Ethernet to transfer data. You'd then just have to write a simple protocol (possibly using raw sockets (no TCP/UDP) to make the FPGA-side Ethernet simpler), which can be done in basically any language offering network access (there's a Xilinx reference design for the SP605 demonstrating this).
The simplest and cheapest solution would be an FPGA board with a serial connection. You probably wouldn't be able to do any "serious" image processing with this, but it should be enough for very simple proof-of-concept stuff, although the smaller FPGA devices used on these boards typically do not have much on-board memory available.
But again, it all depends on what you actually want to do.
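As a hedged illustration of the Ethernet option above, here is a minimal Python sketch of the PC side. The board address, port, and packet framing are made up; a real design would match whatever the FPGA-side logic expects (plain UDP is shown, though a raw-socket frame format can make the FPGA side even simpler):

```python
# Minimal PC-side sender for a simple Ethernet image-transfer protocol
# to an FPGA board. Address, port and framing are hypothetical.

import socket
import struct

FPGA_ADDR = ("192.168.1.10", 5000)   # hypothetical board address
CHUNK = 1024                          # payload bytes per packet

def send_image(pixels: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, off in enumerate(range(0, len(pixels), CHUNK)):
        chunk = pixels[off:off + CHUNK]
        # 4-byte sequence-number header so the FPGA can detect drops
        sock.sendto(struct.pack(">I", seq) + chunk, FPGA_ADDR)
    sock.close()

send_image(bytes(640 * 480))          # one dummy 8-bit grayscale frame
```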

Can the Diffie-Hellman protocol be used as a base for digital signatures?

I am implementing a custom crypto library using the Diffie-Hellman protocol (yes, I know about RSA/SSL and the like; I am using it for specific purposes), and so far it has turned out better than I originally expected: using GMP, it's very fast.
My question is whether, besides the obvious key-exchange part, this protocol can be used for digital signatures as well.
I have looked at quite a few resources online, but so far my search has been fruitless.
Is this at all possible?
Any (serious) ideas are welcome.
Update:
Thanks for the comments. And for the more curious people:
My DH implementation is meant, among other things, to distribute encrypted "resources" to client-side applications. Both are, for the most part, my own code.
Every client has a DH key pair, and I use it along with my server's public key to generate the shared keys. In turn, I use them for HMACs and symmetric encryption.
DH keys are built anywhere from 128 up to 512 bits, using safe primes as moduli.
I realize that "pure" DH alone can't be used for signatures; I was hoping for something close to it (or as simple).
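For illustration, here is a minimal Python sketch of the kind of setup described above: a textbook DH exchange followed by HMAC-key derivation. The prime is a toy value (not even a safe prime) and far too small for real use; real deployments use large safe-prime groups such as those in RFC 3526:

```python
# Textbook Diffie-Hellman plus HMAC-key derivation. TOY PARAMETERS:
# the 64-bit modulus is for illustration only and offers no security.

import hashlib, hmac, secrets

P = 0xFFFFFFFFFFFFFFC5   # largest prime below 2**64; toy value
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

client_priv, client_pub = keypair()
server_priv, server_pub = keypair()

# Both sides compute the same shared secret...
shared = pow(server_pub, client_priv, P)
assert shared == pow(client_pub, server_priv, P)

# ...and derive an HMAC key from it by hashing.
mac_key = hashlib.sha256(shared.to_bytes(8, "big")).digest()
tag = hmac.new(mac_key, b"resource payload", hashlib.sha256).hexdigest()
print(tag)
```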
It would appear this is feasible: http://www.quadibloc.com/crypto/pk050302.htm.
I would question why you are doing this, though. The first rule of implementing crypto is: don't implement crypto. There are plenty of libraries that already exist, and you would probably be better off leveraging those; crypto code is notoriously hard to get right even if you understand the science behind it.
DSA is the standard way to make digital signatures based on the discrete logarithm problem.
And to answer a potential future question: ephemeral-static Diffie-Hellman is the standard way to implement asymmetric encryption (to send messages where you know and trust the recipient's public key, for example through a certificate, but the recipient does not know your key).
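For example, here is a minimal DSA sign/verify sketch using the Python "cryptography" package (assuming you are willing to lean on an existing library, per the answer above):

```python
# Minimal DSA sign/verify example with the "cryptography" package
# (pip install cryptography); DSA is the standard discrete-log
# signature scheme mentioned above.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

private_key = dsa.generate_private_key(key_size=2048)
public_key = private_key.public_key()

message = b"resource manifest v1"
signature = private_key.sign(message, hashes.SHA256())

try:
    public_key.verify(signature, message, hashes.SHA256())
    print("signature OK")
except InvalidSignature:
    print("signature BAD")
```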
