Accuracy of barcode vs QR code?

I want to develop a supermarket application for checking and billing.
Should I use barcodes or qrcodes? Which will give better accuracy?

The biggest difference here is that a linear barcode (e.g. Code 3 of 9, UPC, EAN, etc.) and a 2-dimensional symbology (e.g. QRCode, DataMatrix, etc.) store data in very different ways. A linear barcode can be read with a simple laser scanner, while most 2-D symbologies require an imager in order to be read. In general, imagers can also read linear barcodes, but are also more expensive than laser scanners.
You will want to consider whether your customers may already have linear scanners only, or whether they would be willing to pay the premium for an imager in order to get the benefit of the extra data that can be encoded in the 2-D symbologies.

Both will be accurate; the question is how much data you need to store. A QR code has much more capacity than something like a Code 39 (3 of 9) barcode.


Which is a better input for an autoencoder, one with correlated features or one with uncorrelated features?

I am trying to visualise my data in 2D in order to detect fraud (outliers); all my features are likely to take bigger values in case of fraud. But I was careful not to include redundant features.
For example, the features:
Activity (a score that is higher for active users who use the service every day) and Money-earned both tend to take higher values in case of fraud, but one can't be deduced from the other.
I figured that choosing features this way would translate to bigger coordinates in the 2D representation and would make fraudulent points distant/stand out from the rest of my data.
I also feel like having correlated features would make it easier for the autoencoder to reconstruct the data. But I have read many times that having correlated features isn't efficient in machine learning.
Should I make an effort to make my features less correlated? For example, replacing the Activity score (higher for active users) with the time between two uses (lower for active users)?
Or maybe this isn't important for the autoencoder?
You are right in your understanding that "having correlated features would make it easier for the autoencoder to reconstruct the data".
Conversely, if all your features were i.i.d. Gaussian (i.e. completely uncorrelated), data compression would be very difficult for an autoencoder, since there would be no low-dimensional representation of the data to learn.
Please refer to this Stanford UFLDL Tutorial link for details.
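To make the intuition concrete, here is a minimal sketch (illustrative only, assuming numpy and scikit-learn; a linear autoencoder trained with squared error recovers the same subspace as PCA, so PCA reconstruction error is used here as a stand-in for a small autoencoder):

# Compare reconstruction error on correlated vs. uncorrelated data.
# PCA stands in for a linear autoencoder with a 2-unit bottleneck.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, d, k = 5000, 10, 2                     # samples, features, bottleneck size

# Correlated data: 10 features driven by 2 latent factors plus small noise.
latent = rng.normal(size=(n, k))
mixing = rng.normal(size=(k, d))
correlated = latent @ mixing + 0.1 * rng.normal(size=(n, d))

# Uncorrelated data: i.i.d. Gaussian features, no low-dimensional structure.
uncorrelated = rng.normal(size=(n, d))

for name, X in [("correlated", correlated), ("uncorrelated", uncorrelated)]:
    pca = PCA(n_components=k).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))
    print(name, "reconstruction MSE:", round(float(np.mean((X - X_hat) ** 2)), 3))

On the correlated data the reconstruction error stays small because two components capture most of the variance; on the i.i.d. data they cannot.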

Artificial neural network image transformation

I have pairs of images (input-output), but I don't know the transformation for going from A (input) to B (output). I want to record image A and get image B. Physically I can change the setup to get either A or B, but I want to do it in software.
If I understood correctly, a trained Artificial Neural Network is able to do that: given an input, it can produce the corresponding output. Is that right?
Is there any software/ANN that, just by "training" it with a number of input-output pairs, will be able to provide the correct output when the input is a new (but similar to the others) image?
Thanks
If you have a relevant amount of image pairs (input/output pairs) and you don't know the transformation between input and output, you could train an ANN on that training set to imitate the unknown transformation. You will only be able to train your ANN well if you have a sufficient number of training image pairs, and it could be practically impossible when the unknown transformation is complicated.
For example, if the transformation simply increases the intensity values of the pixels in the input image by a given amount, an ANN will learn to imitate that behavior very quickly. But if the unknown transformation is some complicated convolution, or a few convolutions in series, or something more complicated still, it will be very hard, near impossible, to train an ANN to imitate it. So a more complex transformation will need a bigger training set and a more complex ANN design.
There are plenty of free opensource ANN libraries implemented in many languages. You could start for example with that tutorial: http://www.codeproject.com/Articles/13091/Artificial-Neural-Networks-made-easy-with-the-FANN
What you are asking is possible in principle -- in theory, an ANN with sufficiently many hidden units can learn an arbitrary function to map inputs to outputs. However, as the comments and other answers have mentioned, there may be many technical issues with your particular problem that could make it impractical. I would classify these problems as (a) mapping complexity, (b) model complexity, (c) scaling complexity, and (d) implementation complexity. They are all somewhat related, but hopefully this is a useful way to break things down.
Mapping complexity
As mentioned by Springfield762, there are many possible functions that map from one image to another image. If the relationship between your input images and your output images is relatively simple -- like increasing the intensity of each pixel by a constant amount -- then an ANN would be able to learn this mapping without much difficulty. There are probably many more transformations that would be similarly easy to learn, such as skewing, flipping, rotating, or translating an image -- basically any affine transformation would be easy to learn. Other, nonlinear transformations could also be feasible, such as squaring the intensity of each pixel.
As a general rule, the more complicated the relationship between your input and output images, the more difficult it will be to get a model to learn this mapping for you.
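As a toy illustration of the easy end of this spectrum (illustrative only: scikit-learn, tiny synthetic 8x8 "images", and a made-up constant-intensity-shift transformation), a small MLP can learn the mapping from example pairs alone:

# Learn the "add 0.2 to every pixel" mapping from input/output pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n, side = 2000, 8                                  # 2000 tiny 8x8 "images"
X = rng.uniform(0.0, 0.8, size=(n, side * side))   # flattened input images
Y = X + 0.2                                        # the "unknown" transformation

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X[:1500], Y[:1500])

# Evaluate on held-out images the network never saw during training.
pred = model.predict(X[1500:])
print("mean absolute error on new images:", float(np.mean(np.abs(pred - Y[1500:]))))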
Model complexity
The more complex the mapping from inputs to outputs, the more complex your ANN model will need to be to capture that mapping. Models with many hidden layers have been shown in the past 10 years to perform quite well on tasks that people had previously thought impossible, but often these state-of-the-art models have millions or even billions of parameters and take weeks to train on GPU hardware. A simple model can capture many simple mappings, but if you have a complex input-output map to learn, you'll need a large, complex model.
Scaling complexity
Yves mentioned in the comments that it can be difficult to scale models up to typical image sizes. If your images are relatively small (currently the state of the art is to model images on the order of 100x100 pixels), then you can probably just throw a bunch of raw pixel data at an ANN model and see what happens. But if you're using 6000x4000 images from your shiny Nikon DSLR, it's going to be quite difficult to process those in a reasonable amount of time. You'd be better off compressing your image data somehow (PCA is a common technique) and then trying to learn the mapping in the compressed space.
In addition, larger images will have a larger space of possible mappings between them, so you'll need more of your larger images as training data than you would if you had small images.
Springfield762 also mentioned this: If the mapping between your input and output images is simple, then you'll only need a few examples to learn the mapping successfully. But if you have a complicated mapping, then you'll need much more training data to have a chance at learning the mapping properly.
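Here is a rough sketch of the compress-then-learn idea mentioned above (illustrative only: made-up low-rank data, scikit-learn's PCA as the compressor, and Ridge regression standing in for whatever model learns the mapping between codes):

# Learn the input->output mapping in PCA-compressed space instead of raw pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n, pixels, k = 500, 64 * 64, 50               # pretend these are 64x64 images
mixing = rng.normal(size=(k, pixels))
A = rng.normal(size=(n, k)) @ mixing          # low-rank stand-in for input images
B = np.roll(A, 5, axis=1) * 0.9               # a made-up "unknown" transformation

pca_in = PCA(n_components=k).fit(A)
pca_out = PCA(n_components=k).fit(B)
mapping = Ridge(alpha=1.0).fit(pca_in.transform(A), pca_out.transform(B))

# For a new input image: encode, map in code space, decode back to pixels.
new_A = rng.normal(size=(1, k)) @ mixing
predicted_B = pca_out.inverse_transform(mapping.predict(pca_in.transform(new_A)))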
Implementation complexity
It's unlikely that a tool already exists that would let you just throw image data into an ANN model and have a mapping appear. Most likely you'll need, at a minimum, to implement some code that will pre-process your image data. In addition, if you have lots of large images you'll probably need to write code to handle loading data from disk, etc. (There are a lot of "big data" tools for things like this, but they all require some amount of work to get set up.)
There are many, many open source ANN toolkits out there nowadays. FANN (already mentioned) is a popular one in C++ with bindings in other languages. Caffe is quite popular, and is also implemented in C++ with bindings. There seem to be many toolkits that use Python and Theano or some other GPU acceleration library -- Keras, Lasagne, Hebel, Pylearn2, neon, and Theanets (I wrote this one). Many people use Torch, written in Lua. Matlab has at least one neural network toolbox. I'm less familiar with other ecosystems, but Java seems to have Deeplearning4j, C# has Accord, and even R has darch.
But with any of these neural network toolkits, you're going to have to write some code to load the data, process it into the appropriate input format, construct (or load) a network model, train the model, etc.
The problem you're trying to solve is a canonical classification problem that neural networks can help you solve. You treat the B images as a set of labels that you match to A, and once trained, the neural network will be able to match the B images to new input based on where the network locates new input in a high-dimensional vector space. I assume you'd use some combination of convolutional networks to create your features, and softmax for multinomial classification on the output layer. More here: http://deeplearning4j.org/convolutionalnets.html
Since this was written there has been a lot of work in the realm of cGANs (conditional generative adversarial networks); please refer to:
https://arxiv.org/pdf/1611.07004.pdf

How to detect how similar a speech recording is to another speech recording?

I would like to build a program to detect how close a user's audio recording is to another recording in order to correct the user's pronunciation. For example:
I record myself saying "Good morning"
I let a foreign student record "Good morning"
Compare his recording to mine to see if his pronunciation was good enough.
I've seen this in some language learning tools (I believe Rosetta Stone does this), but how is it done? Note we're only dealing with speech (and not, say, music). What are some algorithms or libraries I should look into?
A lot of people seem to be suggesting some sort of edit distance, which IMO is a totally wrong approach for determining the similarity of two speech patterns, especially for patterns as short as OP is implying. The specific algorithms used by speech-recognition in fact are nearly the opposite of what you would like to use here. The problem in speech recognition is resolving many similar pronunciations to the same representation. The problem here is to take a number of slightly different pronunciations and get some kind of meaningful distance between them.
I've done quite a bit of this stuff for large scale data science, and while I can't comment on exactly how proprietary programs do it, I can comment on how it's done in academia and provide a solution that is straightforward and will give you the power and flexibility that you want for this approach.
Firstly: I'll assume that what you have is a chunk of audio without any filtering done on it, just as it would be acquired from a microphone. The first step is to eliminate background noise. There are a number of different methods for this, but I'm going to assume that what you want is something that will work well without being incredibly difficult to implement.
Filter the audio using scipy's filtering module here. There are a lot of frequencies that microphones pick up that are simply not useful for categorizing speech. I would suggest either a Bessel or a Butterworth filter to ensure that your waveform is preserved through filtering. The fundamental frequencies for everyday speech are generally between 800 and 2000 Hz (reference), so a reasonable cutoff would be something like 300 to 4000 Hz, just to make sure you don't lose anything.
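A minimal sketch of that filtering step (illustrative only, assuming scipy; "recording.wav" is a placeholder file name):

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("recording.wav")        # placeholder file name
audio = audio.astype(np.float64)
if audio.ndim > 1:                                 # mix stereo down to mono
    audio = audio.mean(axis=1)

# 4th-order Butterworth band-pass over roughly the 300-4000 Hz speech band.
sos = butter(4, [300, 4000], btype="bandpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio)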
Look for the least active portion of the speech and assume it is a reasonable representation of the background noise. At this point you're going to want to run a series of Fourier transforms along your data (or generate a spectrogram) and find the part of your speech recording that has the lowest average frequency response. Once you have that snapshot, you should subtract it from all other points in your audio sample.
At this point you should have an audio file that is mostly just your user's speech, ready to be compared to another file that has gone through the same process. Now we want to actually clip the sound and compare this clip to some master clip.
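A rough sketch of the noise-floor idea (illustrative only, again with scipy and a placeholder file name; the quietest spectrogram frame stands in for the background-noise snapshot):

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Assumes a mono recording that has already been band-passed as above.
rate, audio = wavfile.read("recording_filtered.wav")    # placeholder file name
freqs, times, spec = spectrogram(audio.astype(np.float64), fs=rate, nperseg=1024)

# Treat the time frame with the lowest average energy as the noise floor,
# then subtract it from every frame, clamping at zero.
quietest = spec[:, spec.mean(axis=0).argmin()]
denoised = np.clip(spec - quietest[:, None], 0.0, None)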
Secondly: You're going to want to come up with a distance metric between two speech patterns. There are a number of ways to do this, but I'm going to assume we have the output of part one and some master file that has been through similar processing.
Generate a spectrogram of the audio file in question (example). The output from this is ultimately going to be an image that can be represented as a 2-d array of frequency response values. A spectrogram is essentially a Fourier transform over time, where the colour corresponds to intensity.
Use OpenCV (it has Python bindings, example) to run blob detection on your spectrogram. Effectively this is going to look for the big colorful blob in the middle of your spectrogram and give you its limits. What this should do is return a significantly sparser version of the original 2-d array that solely represents the speech in question (with the assumption that your audio file will have some trailing silence at the front and back ends of the recording).
Normalize the two blobs to account for differences in speaking speed. Everyone talks at a different speed, so your blobs will probably have different sizes along the x-axis (time); without normalization, your comparison will end up penalizing differences in speaking speed that you don't actually care about. This step isn't needed if you also want to check that the student speaks at the same speed as the master copy, but I would suggest it. Basically, you want to stretch out the shorter version by multiplying its time axis by a constant that is just the ratio of the lengths of your two blobs.
You should also normalize the two blobs based on maximum and minimum intensity to account for people who talk at different volumes. Again, this is up to your discretion, but to fix this you should find similar ratios for the total span of intensities, as well as the two recordings' maximum intensities, and make sure that these values match up between your 2-d arrays.
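A sketch of both normalization steps (illustrative only; scipy's zoom is one simple way to stretch the time axis, and the target shape is an arbitrary choice):

import numpy as np
from scipy.ndimage import zoom

def normalize_blob(spec, target_shape=(128, 128)):
    """Resample a spectrogram blob to a fixed size (removing speaking-speed
    differences along the time axis) and rescale intensities to [0, 1]."""
    factors = (target_shape[0] / spec.shape[0], target_shape[1] / spec.shape[1])
    resized = zoom(spec, factors)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-12)

# With the blobs extracted from the two spectrograms:
# norm_user = normalize_blob(user_blob)
# norm_master = normalize_blob(master_blob)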
Third: Now that you have 2-d arrays representing your two speech events, which should in theory contain all of their useful information, it's time to directly compare them. Luckily, comparing two matrices is a well-solved problem and there are a number of ways to move forward.
Personally I would suggest using a metric like Cosine Similarity to determine the difference between your two blobs, but that's not the only solution and while it'll give you a quick validation, you can do better.
You could try subtracting one matrix from the other and get an evaluation of how much difference there is between them, which would probably be more accurate than simple cosine distance.
It might be overkill, but you could assume that there are certain regions of speech that matter more or less for evaluating difference between blobs (it might not matter if someone uses a long i instead of a short i, but a g instead of a k could be a different word entirely). For something like that you'd want to develop a mask for the difference array in the previous step and multiply all your values by that.
Whichever method you choose, you can now simply set some difference threshold and make sure that the difference between the two blobs is below your desired threshold. If it is, the captured speech is similar enough to be correct. Otherwise have them try again.
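For reference, a minimal sketch of the comparison step (illustrative only; the 0.9 threshold is completely arbitrary and would need tuning):

import numpy as np

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mean_abs_difference(a, b):
    return float(np.mean(np.abs(a - b)))

# With the normalized blobs from the previous step, e.g.:
# good_enough = cosine_similarity(norm_user, norm_master) > 0.9   # arbitrary threshold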
I hope that's helpful, and again, I can't assure you that this is the exact algorithm that a company uses since that information is hugely proprietary and not open for the public, but I can assure you that methods similar to these are used in the very best papers in academia and that these methods will get you a great balance of accuracy and ease of implementation. Let me know if you have any questions, and good luck with your future data science exploits!
The musicg api (https://code.google.com/p/musicg/) has an audio fingerprint generator and scorer, along with source code to show how it's done.
I think it looks for the most similar point in each track, then scores based on how far it can match.
It might look something like
import com.musicg.wave.Wave;
import com.musicg.fingerprint.FingerprintSimilarity;

// Fingerprint both recordings and score how similar they are.
FingerprintSimilarity similarity =
        new Wave("voice1.wav").getFingerprintSimilarity(new Wave("voice2.wav"));
float score = similarity.getScore();
Idea:
The way biotechnologists align two sequences is as follows: each sequence is represented as a string over a fixed alphabet (e.g. A/C/G/T for DNA; the specific letters are irrelevant for us), where each letter (here, an entry) represents a particular residue. The quality of an alignment (its score) is calculated from the similarity of each pair of corresponding entries, and from the number and length of the gaps that need to be inserted to produce that alignment.
The same algorithm (http://en.wikipedia.org/wiki/Needleman-Wunsch_algorithm) can be used for pronunciation, with substitution costs derived from substitution frequencies in a set of alternate pronunciations. You can then calculate alignment scores to measure the similarity between two pronunciations in a way that is sensitive to the differences between phonemes. Measures of similarity that can be used here are Levenshtein distance, phoneme error rate, and word error rate.
Algorithms
The minimum number of insertions, deletions and substitutions required for transformation of one sequence into another is the Levenshtein distance. More info at http://php.net/manual/en/function.levenshtein.php
Phoneme error rate (PER) is the Levenshtein distance between a predicted pronunciation and the reference pronunciation, divided by the number of phonemes in the reference pronunciation.
Word error rate (WER) is the proportion of predicted pronunciations with at least one phoneme error to the total number of pronunciations.
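A minimal sketch of these measures over phoneme sequences (illustrative only; the phoneme strings below are written by hand rather than produced by a real recognizer):

def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

reference = ["G", "UH", "D", "M", "AO", "R", "N", "IH", "NG"]   # "good morning"
predicted = ["G", "UH", "T", "M", "AO", "N", "IH", "NG"]

distance = levenshtein(predicted, reference)
per = distance / len(reference)                  # phoneme error rate
print("edit distance:", distance, "PER:", round(per, 2))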
Source: Did an Internship on this at UW-Madison
A carefully configured Levenshtein distance should do the trick.
I know this question is out of date, but...
To solve a similar problem I used the Google Speech Recognition API to check WHAT was said, and visually compared scaled waveforms of volume changes to detect differences in rhythm.
Code & video of the result.
You can use musicg (https://code.google.com/p/musicg/) as roy zhang suggested. On Android, just include the musicg jar file in your project and use it. A tested example:
import com.musicg.wave.Wave;
import com.musicg.fingerprint.FingerprintSimilarity;
//somewhere in your code add
String file1 = Environment.getExternalStorageDirectory().getAbsolutePath();
file1 += "/test.wav";
String file2 = Environment.getExternalStorageDirectory().getAbsolutePath();
file2 += "/test2.wav"; // compare against a second recording, not the same file
Wave w1 = new Wave(file1);
Wave w2 = new Wave(file2);
FingerprintSimilarity fps = w1.getFingerprintSimilarity(w2);
float score = fps.getScore();
float sim = fps.getSimilarity();
Log.d("score", score+"");
Log.d("similarities", sim+"");
Good luck
If this is only to check the pronunciation (of course, possibly with a different accent), you can do this:
Step 1: Using some speech-to-text tool (say, Dragon Dictation), get the recognized text.
Step 2: Compare the string or the words formed with the string that was actually meant to be pronounced.
Step 3: If you find any discrepancy between the strings, it means the word was not pronounced correctly, and you can suggest the correct pronunciation.
You have to look into speech recognition algorithms. I understand that you don't need to translate speech to text (that is what speech recognition does); however, in your case many of the algorithms would be the same.
Probably HMMs (hidden Markov models) would be helpful here.
Also look here: http://htk.eng.cam.ac.uk/

True Random Number Generator using atmospheric noise

I have to build a One Time Pad system and, for that, I have to build my own TRNG. I want to know how to record atmospheric noise and use it to generate random numbers. So far I've tried recording a .wav file and reading it in Java, but the values don't seem very... random. Any suggestions? I know about Random.org, but I can't really use their generators; I have to build my own. What I want is some insight into how the folks at Random.org have built their number generator, with atmospheric noise as the source of 'randomness'.
Non Real-time solution
What you can do is record the ambient audio in the room beforehand and save it to a temporary WAV file. The WAV format is based on the RIFF specification, so strip the WAV header, which is 44 bytes long, then read the audio bytes and do the proper conversions depending on whether you want to generate WORDs, DWORDs, or BYTEs; that is up to you. You should then have some raw values to work with, which you can use as your source of randomness.
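A rough sketch of that approach in Python (illustrative only; the file name is made up, and the 44-byte header assumption only holds for canonical PCM WAV files with no extra chunks):

import struct

with open("room_noise.wav", "rb") as f:
    raw = f.read()[44:]                      # skip the canonical 44-byte RIFF/WAV header

# Interpret the raw audio bytes as unsigned 32-bit values (DWORDs).
usable = len(raw) - (len(raw) % 4)
dwords = struct.unpack("<" + "I" * (usable // 4), raw[:usable])

print(dwords[:5])    # raw values; as the question notes, expect to post-process these further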
Real-time solution
Since I do not know whether you want to program this in Java or some other language, and I do not know the intended platform, I cannot recommend a specific real-time audio processing library.
For C# you can use NAudio to record the audio in real time and receive the audio bytes. You can then convert the audio bytes into a DWORD, QWORD, WORD, etc., and you should have some raw values to work with. Remember to stop recording and to release unmanaged resources once you have finished generating random numbers.
Good Resources On The WAV File Specification
Link to the specification (Easy to understand)
The answer is unknown, and probably intentionally so. Although it's hard to be sure, the site seems to be a combination of charity and for-profit work. Each radio source only produces a few Kbps of random data. From how he describes it in the various links, I don't see evidence of a CSRNG, but it doesn't really matter: for OTP purposes, if it's not truly random, it's a glorified stream cipher. (I think that's what Bruce and others have always said.)
I find it hard to recall when a good CSRNG was broken. I'd recommend you use something like ISAAC or a properly implemented block/stream cipher. Perfect Paper Passwords does this. Use a Fortuna construction with the internals of Fortuna using the above ciphers/algorithms to produce the majority of the random data. The Fortuna system can regularly have data injected into it by a TRNG. The very best TRNG on a budget is random.org plus locally generated stuff. The best cheap, hardware solution is a VIA Artigo board with VIA Padlock (TRNG + acceleration for SHA-1, SHA256, AES, & RSA) for $300. They have libraries to help you use things, too. (There's even a pseudo-TRNG that uses processor timing under network load.)
Remember, the crypto is usually the strongest link in the chain. System security exists on many levels: processor, firmware, peripheral firmware (esp DMA), kernel mode code, OS, trusted middleware or OS functions, application. Security as a whole includes users, policy, physical security, EMSEC, etc. Anyone worrying way too much about RNG's is usually wasting effort. Just use an accepted solution or something I mentioned above. Then, focus on the rest. Especially, how people and systems interact. Configuration, patching, choice of OS, policies. Most problems happen there.
I recall an article on random.org that I can't seem to find now. All I remember is that they used the LSB of the noise they were measuring; the MSBs will certainly not be random. They then generated a string of 1s and 0s based on the LSBs. Don't do something silly like a simple binary conversion of the samples, that won't work. You may have to sample the noise in a way that makes the distribution of the LSB more uniform.
The trick they used to ensure an even distribution was not to use this string of 1s and 0s as the random numbers directly. Instead they parsed the string two bits at a time: every time the bits matched (i.e. 00 or 11) they appended a 1 to their random string, and every time the bits flipped (i.e. 01 or 10) they appended a 0.
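A small sketch of that extraction step as described (illustrative only; the sample values are made up):

def extract_bits(samples):
    # Keep only the least significant bit of each sample, then walk the LSB
    # stream two bits at a time: matching pairs (00/11) -> 1, flipped pairs
    # (01/10) -> 0.
    lsbs = [s & 1 for s in samples]
    return [1 if a == b else 0 for a, b in zip(lsbs[0::2], lsbs[1::2])]

noise_samples = [1023, 1026, 980, 1001, 997, 1010, 1008, 1011]   # made-up ADC readings
print(extract_bits(noise_samples))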
If you make your own TRNG, make sure you verify it!
It is hardly possible to get truly random numbers out of software alone. Even the static in your WAV file is likely to be influenced by periodic EMI generated by your computer and is therefore not purely random.
Can you use special hardware, or are you forced to stick to pure software? And why won't pseudo-random numbers satisfy your needs? They will do fine for a relatively small number of random samples, and since you want to use the random numbers in an OTP, I guess you won't be using them at a large scale.
Can you provide a little more detail?
The atmospheric noise approach to generating random numbers is complex because the atmosphere is filled with non-random signals, all of which pollute the entropy you seek. There is an easier way.
Chances are good your CPU already contains a true random number generator, assuming you have an Intel Ivy Bridge-based Core/Xeon processor, which became available in April, 2012. (The new Haswell architecture also has this feature).
Intel's random generator exploits the random effects of thermal noise inside an unstable digital circuit. Thermal noise is just random atomic vibrations, which is pretty much the same underlying physical phenomenon that Random.org uses when it samples atmospheric noise. The sampled random bits go through a sophisticated conditioning and testing process to eliminate pollution from non-random signals. I highly recommend this excellent article on IEEE Spectrum which describes the process in detail.
Intel added a new x86 instruction called RDRAND that allows programs to directly retrieve these random numbers. Although Java does not yet support direct access to RDRAND, it's possible using JNI. This is the approach I took with the drnglib project. For example:
DigitalRandom random = new DigitalRandom();
System.out.println(random.nextInt());
The nextInt() method is implemented as a JNI native call that invokes RDRAND. The performance is pretty good considering the quality of randomness. Using eight threads, I've generated ~760 MB/sec of random data.
True random number generators (TRNGs) usually draw on natural sources such as seismic signals, non-stationary bio-signals, etc. The two issues faced by these generators are:
1) The data points are non-uniformly distributed.
2) It takes a very long time to generate a large sequence of numbers (especially when the requirement is in the millions).
However, their most important advantage is their unpredictability. To overcome these issues while retaining that advantage, it is better to use the output of the TRNG to seed a pseudo-random number generator. For this, you may try using the amplitude values of atmospheric noise at random time points to seed a PRNG (see the sketch below).
This will help you get a large number of uniformly distributed values. And because the seed is unpredictable, the output of the PRNG becomes unpredictable too.
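A minimal sketch of that fusion idea (illustrative only; the noise bytes are placeholders, and Python's Mersenne Twister is used only to show the seeding, it is not suitable for cryptographic use such as an OTP):

import hashlib
import random

# Placeholder bytes standing in for atmospheric-noise samples taken at random times.
noise_samples = bytes([0x12, 0x9F, 0x03, 0xC4, 0x88, 0x5A])
seed = int.from_bytes(hashlib.sha256(noise_samples).digest(), "big")

prng = random.Random(seed)          # unpredictable seed, uniformly distributed output
print([prng.random() for _ in range(5)])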

How does PDF417 barcode decoding recover from damaged labels?

I recently learned about PDF417 barcodes and I was astonished that I can still read the barcode after I ripped it in half and scanned only a fragment of the original label.
How can the barcode decoding be that robust? Which (types of) algorithms are used during encoding and decoding?
EDIT: I understand the general philosophy of introducing redundancy to create robustness, but I'm interested in more details, i.e. how this is done with PDF417.
The PDF417 format allows for varying levels of duplication/redundancy in its content. The level of redundancy used will affect how much of the barcode can be obscured or removed while still leaving the contents readable.
PDF417 does not use anything. It's a specification of encoding of data.
I think there is a confusion between the barcode format and the data it conveys.
The various barcode formats (PDF417, Aztec, DataMatrix) specify a way to encode data, be it numerical, alphabetic or binary... the exact content though is left unspecified.
From what I have seen, Reed-Solomon is often the algorithm used for redundancy. The exact level of redundancy is up to you with this algorithm and there are libraries at least in Java and C from what I've been dealing with.
Now, it is up to you to specify what the exact content of your barcode should be, including the algorithm used for redundancy and the parameters used by this algorithm. And of course you'll need to work hand in hand with those who are going to decode it :)
Note: QR seems slightly different, with explicit zones for redundancy data.
I don't know PDF417, but I know that QR codes use Reed-Solomon correction. It is an oversampling technique. To get the concept: suppose you have a polynomial of degree 6. Technically, you need seven points to describe this polynomial uniquely, so you can perfectly transmit the information about the whole polynomial with just seven points. However, if one of these seven is corrupted, you lose the information entirely. To work around this issue, you extract a larger number of points from the polynomial and transmit those. As long as at least seven of them arrive intact, they are enough to reconstruct your original information.
In other words, you trade space for robustness, by introducing more and more redundancy. Nothing new here.
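A toy version of that idea (an illustration of the oversampling concept with numpy polynomial fitting, not actual Reed-Solomon arithmetic, which works over finite fields):

import numpy as np

coeffs = np.array([2.0, -1.0, 0.5, 3.0, 1.5, -2.0, 1.0])   # a degree-6 polynomial
xs = np.linspace(-1.0, 1.0, 12)
ys = np.polyval(coeffs, xs)                                 # transmit 12 sample points

survivors = [0, 2, 3, 6, 8, 9, 11]                          # pretend 5 points were lost
recovered = np.polyfit(xs[survivors], ys[survivors], deg=6) # any 7 points are enough

print(np.max(np.abs(recovered - coeffs)))                   # tiny: polynomial recovered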
I do not think the trade-off between space and robustness is any different here than anywhere else. Think RAID, let's say RAID 5: you can yank a disk out of the array and the data is still available. The price? An extra disk. Or, in terms of the barcode, the extra space the label occupies.
