I have to display barcodes on a mobile screen within 72x28 pixels (an area of roughly 1.5cm x 0.5cm), and then scan those barcodes using a smartphone. I don't have to encode much information, only enough that it can be efficiently decoded in this scenario. What is the best possible barcode encoding to use? Given that the vertical dimension is very small, I think 1D barcodes would be better, but I am not able to pick an encoding out of all the available options.
The smallest QR code, Version 1, is 21x21 modules, so 21x21 pixels at one pixel per module. Strictly, QR codes are supposed to have a 4-module quiet zone on all sides, which would technically make it at least 29x29. In practice, however, leaving one row of the quiet zone off will probably be just fine, letting you fit into 29x28.
Version 1 can encode up to 41 digits in numeric mode, with the lowest EC level, L.
For 5 digits, a simple Code 128 1D barcode is perhaps an even better choice.
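If you do go the QR route, here is a minimal sizing sketch using the Python qrcode library (an assumption on my part; any generator that exposes version, box size and border will do):

```python
import qrcode

# Version 1 = 21x21 modules; box_size=1 renders one pixel per module,
# border=4 adds the spec's 4-module quiet zone on each side.
qr = qrcode.QRCode(version=1,
                   error_correction=qrcode.constants.ERROR_CORRECT_L,
                   box_size=1, border=4)
qr.add_data('12345')
qr.make(fit=False)                # force it to stay at Version 1
qr.make_image().save('qr.png')    # (21 + 2*4) x (21 + 2*4) = 29x29 px
```

Trimming a single border row, as suggested above, gets that down to 29x28.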
I am working on a project I've wanted to do for quite a while: an all-round Huffman compressor that works, not just in theory, on various types of files. I am writing it in Python:
text - which is, for obvious reasons, the easiest one to implement; already done, works wonderfully.
images - this is where I am struggling. I don't know how to approach images or how to read them in a way that would actually help me compress them.
I've tried reading them pixel by pixel, but somehow, it actually enlarges the picture instead of compressing it.
What I've tried:
Reading the image pixel by pixel using PIL's Image, collecting all the pixels in a list, creating a frequency table (one entry per distinct pixel value) and then encoding it. The problem, in my opinion, is that by treating each whole pixel as a symbol I get way too many symbols, which leads to many overly long Huffman codes (over 8 bits).
I think I may be able to solve this by reading a larger group of pixels at a time, or something along those lines, because then I'd have a smaller code table and therefore shorter Huffman codes. If I leave it as is, the code table can in theory grow to 256^3 entries (since each pixel is (0-255, 0-255, 0-255)).
Is there any way to read more than one pixel at a time, or is there a better way to approach images when all I need is to compress them?
Thank you all for reading so far, and a special thank you for anyone who tries to lend a hand.
Edit: if Huffman is a really bad compression algorithm for images, are there any better ones you can think of? The project I'm working on can use different algorithms for different file types if necessary.
Encoding whole pixels like this often results in far too many unique symbols, each of which is used only a few times, especially if the image is a photograph or contains many coloured gradients. A simple way to fix this is to split the image into its R, G and B colour planes and encode those either separately or concatenated; either way, the actual elements being encoded are in the range 0..255 and not multi-dimensional.
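For instance, with PIL (a sketch, assuming an RGB input image):

```python
from collections import Counter
from PIL import Image

img = Image.open('photo.png').convert('RGB')
for name, plane in zip('RGB', img.split()):
    freq = Counter(plane.getdata())   # symbols are single bytes, 0..255
    print(name, len(freq), 'distinct symbols')  # at most 256 per plane
```

With at most 256 symbols per plane, no Huffman code can get pathologically long the way codes over a 3-tuple alphabet can.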
But as you suspect, exploiting just 0th-order entropy is not great for many images, especially photographs. As examples of what existing formats do: PNG uses filters to take advantage of spatial correlation (great for smooth gradients). JPEG uses quantized discrete cosine transforms, (usually) a colour space transformation to YCbCr (to decorrelate the channels and to crush chroma more mercilessly than luma), and (usually) chroma subsampling. JPEG 2000 uses wavelets and a colour space transformation in both its lossy and lossless forms (though different wavelets and a different colour space transformation), and also supports subsampling, though dropping a wavelet scale achieves a similar effect.
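To make the PNG-filter idea concrete, here is a minimal sketch of the 'Sub' filter (a left-neighbour delta) on one 8-bit plane, assuming the plane is a 2-D NumPy array:

```python
import numpy as np

def sub_filter(plane):
    # PNG's 'Sub' filter: each byte minus its left neighbour, mod 256.
    # Smooth horizontal gradients turn into runs of small values,
    # which a 0th-order entropy coder like Huffman handles far better.
    out = plane.astype(np.int16)
    out[:, 1:] -= plane[:, :-1].astype(np.int16)
    return (out % 256).astype(np.uint8)
```

The transform is exactly reversible (add the left neighbour back, mod 256), so it costs nothing in fidelity and can be applied per colour plane before your existing Huffman stage.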
What is the smallest 1D barcode size I can print that will be readable by a scanner? Is 6mm x 6mm even possible?
I am planning to encode 21 characters. Will this fit into the barcode? If not will 2D do?
Unfortunately, this depends on both the printer and the scanner. The printer's resolution determines how crisp the bars are, so in theory a 1200 dpi printer will produce a more readable code than a 600 dpi one. Of course, that also depends on the software generating the barcode.
The scanner is limited by its field of "vision": the closer the scanner can get to the substrate, the smaller your barcode can be. While a 6mm x 6mm target area is possible, you aren't going to get there with off-the-shelf products. With lenses (like a microscope) and a high-fidelity printer (like a typesetter) you might reach 6mm x 6mm, but it's going to be expensive.
The smallest I could get my software to render 21 characters is about 45mm, using an HP inkjet and a Symbol handheld scanner. However, I was using Code 128B. If you are encoding only digits, Code 128C would cut about 30% off the width.
http://brian-p-anderson.github.io/JS-Barcodes/
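To see why subset C helps: each 128C symbol packs two digits. A back-of-the-envelope module counter (a sketch, assuming the standard Code 128 layout of 11 modules per symbol plus a 13-module stop pattern, and ignoring quiet zones and odd-digit-count subset switches):

```python
def code128_modules(n_chars, numeric_pairs=False):
    # start (11) + data symbols (11 each) + checksum (11) + stop (13);
    # in subset C each data symbol encodes two digits (ceil for odd counts).
    n_symbols = -(-n_chars // 2) if numeric_pairs else n_chars
    return 11 * (n_symbols + 2) + 13

print(code128_modules(21))        # 128B: 266 modules
print(code128_modules(21, True))  # 128C: 156 modules
```

The practical saving is smaller than this idealized count suggests, since quiet zones add fixed width on both sides and an odd digit count forces a subset switch for the last digit.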
I have a barcode image. I have to make it smaller.
Can that damage the barcode?
1) Proportional scaling
2) Non-proportional scaling (only the height changes)
The barcodes are of type UPC-A / EAN-13 ("vertical lines"). Sorry, I'm not an expert in barcodes; I thought the type would not be important. Scaling is moderate, and the image does not lose relevant data.
A regular barcode (= vertical stripes) is recognized by the relative widths of the lines, so the height only matters for robustness against diagonal scanning. If the codes are scanned with a hand scanner, I'd just scale the height (or crop the image). In any case, the different widths of the lines should still be clearly distinguishable. There may be compliance rules specifying minimum proportions for a given barcode standard.
For regular linear product barcodes, the simple answer is yes, you can scale it (both cases are safe).
However, if you scale too far and the bars end up too close together, you will start to get a high level of read errors.
You'll need to test it with an appropriate barcode reader to make sure you haven't scaled too much.
When scaling a barcode, there are several things you must keep in mind.
1) You get the absolute sharpest edges in a barcode if each module (the narrowest bar) is a whole number of pixels wide.
2) If the module width is not a whole number of pixels, render the barcode with each module at the truncated whole-number width and use bilinear interpolation to scale up. This gives you at most one pixel of gradient at the edges.
3) Be careful when buying a barcode library; choose one that includes built-in scaling that preserves the barcode, such as this one or this one. Barcodes have demands that ordinary image processing does not, such as pixel-perfection. Using a general-purpose editor like GIMP might damage the barcode.
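As a sketch of point 1) with PIL (an assumption; any toolkit with nearest-neighbour resampling works), assuming the source image was rendered at one pixel per module:

```python
from PIL import Image

def scale_barcode(img, target_width):
    # Upscale by a whole-number factor so every module stays an
    # integer number of pixels wide; NEAREST keeps bar edges sharp
    # instead of smearing them into grey gradients.
    factor = max(1, target_width // img.width)
    return img.resize((img.width * factor, img.height * factor),
                      Image.NEAREST)
```

Scaling only the height is even safer, since the relative bar widths are untouched.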
I want to implement a barcode for one of my mobile project requirements. The amount of data to be stored is very small (<25 alphanumeric characters). I want to know whether it's wiser to implement a 1D barcode or a 2D barcode (a QR code in particular) for this project. I would be really glad if someone could educate me on the following aspects from a 1D vs. 2D perspective:
scanning speed
size (the minimum display size needed for the mobile camera to recognize it -- this is more crucial)
accuracy
Consider this from a typical processing and SDK perspective (preferably zxing).
I'd go with a QR code, particularly if you're planning on using a phone camera. QR codes have features (finder patterns) that make things like perspective correction easier and more reliable. They also have error correction that can eliminate false positives and correct varying amounts of bit-detection errors. If you look at the zxing test suite, you'll find a number of false-positive 1D cases, since many 1D symbologies don't even have a checksum.
Speed is probably not an issue in either case if you know what you're trying to scan. The biggest computational cost in zxing is going through all possible codes when you don't know what you're looking for. If you know the code type, the difference is unlikely to be significant.
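zxing is a Java library, but the same cost-saving idea carries over to Python scanners. As a rough analogue (an assumption, not the asker's stack), pyzbar lets you restrict the symbologies searched:

```python
from PIL import Image
from pyzbar.pyzbar import decode, ZBarSymbol

img = Image.open('frame.png')

# Restricting the symbologies skips the "try every decoder"
# pass that dominates decode time when the code type is unknown.
for result in decode(img, symbols=[ZBarSymbol.QRCODE]):
    print(result.data)
```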
The only thing size affects is the number of pixels that have to be captured. In other words, a small code can be read if you hold the camera close to it, and a large code can be read from further away. All of this is subject to lighting conditions, camera focus (or lack thereof), and camera brightness adjustment, but I can't see how any of these would favour 1D over 2D.
I am currently trying to further compress a very simple image. The image uses two sets of colors as well as one character per "pixel". Each color may be one of 16 options, so I have already packed both colors into one byte per pixel. I have already implemented MTF and BWT encoding to assist RLE. I am positive I can get some more compression out of it, but I am not sure which algorithm to use. I have tried Huffman, but because the image tends to be small already and RLE compresses most of it (due to the low entropy), Huffman half the time increases the size by adding its decoding table to the file. Please note this will also run on a slower system, so any really heavy algorithms may not work either.
First off, it sounds like you should compress the background and character color images separately. Second, you say that "the colors don't change too often from pixel to pixel". Are some colors "closer" to each other than others? That is, when the color changes from color x, is it more likely to change to a small subset of the remaining colors? If so, you can remap the colors so that likely successors are numerically adjacent, and take differences before coding. Then runs of the same color become runs of zeros, and changes to the "next" color become ones.
Once you have a good representation as a series of bytes with lots of runs and a skewed distribution of byte values, e.g. lots of zeros and ones, apply zlib or gzip to take advantage of the redundancy and the skew.
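A minimal end-to-end sketch of that pipeline, using NumPy and synthetic data in place of your real color plane:

```python
import zlib
import numpy as np

# Synthetic 4-bit color plane with long runs (values 0..15).
rng = np.random.default_rng(0)
colors = np.repeat(rng.integers(0, 16, size=64), 32).astype(np.uint8)

# Delta-code so runs of identical colors become runs of zeros
# (uint8 subtraction wraps mod 256, so this stays reversible),
# then let zlib soak up the redundancy and the skew toward zero.
deltas = np.diff(colors, prepend=colors[:1])
print(len(zlib.compress(colors.tobytes(), 9)),
      len(zlib.compress(deltas.tobytes(), 9)))
```

Unlike Huffman, zlib carries no explicit code table in the output, so it avoids the overhead you saw on small inputs, and its decoder is light enough for slow systems.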