I'm trying to decode Aztec codes from images using the ZXing library.
The images look more or less like this:
https://imgur.com/a/5ExPy6q
So far my results are quite random.
I've tried a few image-processing operations with ImageMagick, such as:
convert -brightness-contrast 50x20 in.png out.png
convert -colorspace Gray in.png out.png
There was some improvement, but most of the codes still fail to decode.
What specific image-preprocessing steps should I apply for barcodes like these?
You can try -lat (local adaptive threshold) in ImageMagick. For example:
convert barcode.png -colorspace gray -negate -lat 20x20+10% -negate result.png
You can improve that a little by adding a -morphology open step:
convert barcode.png -colorspace gray -negate -lat 20x20+10% -negate -morphology open diamond:1 result2.png
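To see what -lat is doing, here is a rough pure-Python sketch of local-area thresholding. The window radius and offset are illustrative, and ImageMagick's exact offset convention may differ; the point is that each pixel is compared against the mean of its surrounding window rather than one global cutoff, which is what makes it robust to the uneven lighting in barcode photos.

```python
# Sketch of local-area thresholding: each pixel is binarized against the
# mean of its local window minus an offset (sign conventions vary by tool).
def lat(img, radius=1, offset=0.10):
    """Binarize img (2-D list of floats in 0..1) against local mean - offset."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # gather the local window, clamped at the borders
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > mean - offset else 0
    return out

# A dark square on an unevenly lit background: a single global threshold
# struggles here, but the local comparison still separates the square.
img = [[0.9, 0.9, 0.9, 0.5],
       [0.9, 0.1, 0.1, 0.5],
       [0.9, 0.1, 0.1, 0.4],
       [0.8, 0.8, 0.5, 0.4]]
binary = lat(img, radius=1, offset=0.10)
```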
Using ImageMagick, I'm trying to resize a JPEG's dimensions and reduce the file size.
The issue is that I don't want to worsen the image quality.
I've tried the following commands:
convert -resize 170x80 -resample 100x100 image1.jpg image2.jpg
=> A resized picture but with bad quality.
convert -resize 170x80 -quality JPEG image1.jpg image2.jpg
=> A resized image and with good quality, but the same file size.
convert -density 600 -resize 170x80 image1.jpg image2.jpg
=> A resized image but very bad quality.
I don't know what option I should use.
The quality parameter takes a numeric value. From the -quality documentation:
For the JPEG and MPEG image formats, quality is 1 (lowest image quality and highest compression) to 100 (best quality but least effective compression). The default is to use the estimated quality of your input image if it can be determined, otherwise 92.
You can use a quality lower than the default 92 to reduce the file size, e.g. 70:
convert -resize 170x80 -quality 70 image1.jpg image2.jpg
I've managed to solve this issue using convert and mogrify:
convert -flatten -colorspace RGB myImage.jpg myImage.jpg
&&
mogrify -quality 85 -geometry 170x80 myImage.jpg
There is a comic-book PDF file that has a lot of white space at the bottom.
The content fills only about half of each page.
How can I crop all pages in the PDF file?
I have tried ImageMagick, but the quality is poor:
convert -verbose -density 300 -interlace none -quality 100 input.pdf output.pdf
In ImageMagick, try using a larger density and then resize by the inverse amount in percent. Here, I use density = 4*72 = 288 and then resize by 25% (1/4).
convert -density 288 image.pdf -resize 25% -fuzz 15% -trim +repage result.pdf
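The arithmetic behind those numbers can be sketched as a tiny helper (the function name is mine, not an ImageMagick API): render at a multiple of the target density, then resize by the reciprocal so edges come out antialiased instead of jagged.

```python
# The supersample-then-downscale trick: render the PDF at k times the
# target density, then resize by 1/k so the rasterized edges are smooth.
def supersample_args(target_dpi=72, factor=4):
    """Return (render density, resize percentage) for a convert command."""
    density = target_dpi * factor      # e.g. 72 * 4 = 288
    resize_pct = 100 / factor          # e.g. 25 (%)
    return density, resize_pct

density, pct = supersample_args(72, 4)
```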
I have an image of a person and I want to compress it to make it less than 4KB. I need to compress it and still have the face of the person recognizable even if the image will shrink.
Here is Theresa May at 142kB:
Here she is resized to 72x72, converted to greyscale, and reduced to 2kB with ImageMagick at the command line:
convert original.jpg -resize 72x72 -colorspace gray -define jpeg:extent=2kb result.jpg
I can still recognise her.
Here is some other guy reduced to 1kB and I can still recognise him too:
ImageMagick is installed on most Linux distros and is available for macOS and Windows. Bindings are available for Python, PHP, Ruby, JavaScript, Perl, etc.
If you had further knowledge about your images, or your recognition algorithm, you may be able to do better. For example, if you knew that the centre of the image was more important than the edges, you could slightly blur, or reduce contrast in relatively unimportant areas and use the available space for more details in the important areas.
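As I understand it, -define jpeg:extent works by re-encoding at adjusted quality settings until the file fits the byte budget. Here is a sketch of that search idea, with a stand-in size model in place of a real JPEG encoder (the model and all names are hypothetical):

```python
# The idea behind -define jpeg:extent=2kb: search the JPEG quality setting
# for the highest value whose encoded size still fits the byte budget.
def encode_size(quality):
    # Stand-in for a real encoder: a monotonic quality -> bytes model.
    return 500 + 40 * quality

def best_quality(budget, lo=1, hi=100):
    """Highest quality whose encoded size fits within budget, or None."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode_size(mid) <= budget:
            best = mid          # fits: remember it, try higher quality
            lo = mid + 1
        else:
            hi = mid - 1        # too big: try lower quality
    return best

q = best_quality(2048)  # 2kB budget
```

With a real encoder the size is not perfectly monotonic in quality, so ImageMagick's actual search may differ in detail, but the shape of the trade-off is the same.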
Mark Setchell has the right idea, but I might suggest one potential minor improvement: remove any metadata, including profiles, EXIF data, etc. You can do that either by adding -strip:
convert input.jpg -strip -resize 72x72 -colorspace gray -define jpeg:extent=2kb result.jpg
or by using -thumbnail rather than -resize; -thumbnail strips the metadata automatically:
convert input.jpg -thumbnail 72x72 -colorspace gray -define jpeg:extent=2kb result.jpg
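For a feel of what -strip removes, here is a simplified pure-Python sketch that walks a JPEG's marker segments and drops the metadata ones (APP1..APP15 carry EXIF/XMP/IPTC, COM carries comments). Real JPEGs have more structure; in particular the scan-data handling here is deliberately simplified.

```python
import struct

# Sketch of what -strip does to a JPEG: copy the marker segments the
# decoder needs (JFIF header, quantization tables, scan data) and drop
# the metadata segments.
def strip_jpeg(data):
    out = bytearray(data[:2])           # SOI marker (FF D8)
    i = 2
    while i < len(data):
        marker = data[i:i+2]
        if marker == b'\xff\xd9':       # EOI: end of image
            out += marker
            break
        if marker[1] == 0xda:           # SOS: copy the rest verbatim (simplified)
            out += data[i:]
            break
        length = struct.unpack('>H', data[i+2:i+4])[0]  # includes the 2 length bytes
        segment = data[i:i+2+length]
        # drop APP1..APP15 (FF E1..FF EF) and COM (FF FE)
        if not (0xe1 <= marker[1] <= 0xef or marker[1] == 0xfe):
            out += segment
        i += 2 + length
    return bytes(out)

# A fabricated minimal stream: SOI, an APP1 "Exif" segment (dropped),
# a quantization-table-like segment (kept), then SOS and scan data.
data = (b'\xff\xd8'
        + b'\xff\xe1' + struct.pack('>H', 6) + b'Exif'
        + b'\xff\xdb' + struct.pack('>H', 4) + b'QT'
        + b'\xff\xda' + b'...scan...')
stripped = strip_jpeg(data)
```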
I am completely new to ImageMagick and I need to convert different types of images to the Lab colorspace.
I am currently using this command:
magick convert "input4.tif" -flatten +profile tiff:37724 -colorspace Lab -auto-orient -intent Absolute -compress LZW "output4.tif"
The problem is that this command does not seem to work for ECI-RGB and CMYK images.
If I convert a CMYK image to LAB, the image looks completely oversaturated in magenta and cyan.
If I convert an ECI-RGB image to LAB, it is a bit darker than the original.
Help would be greatly appreciated,
Thanks
I am sorry for the image sizes.
I have these two cropped images in a dataset.
I am going to feed them into a machine-learning algorithm.
Before doing that, I want to extract binarized digits and feed the binary images into the algorithm instead of feeding the crops directly. Can you please explain how I can achieve this kind of clean binarization?
I have tried Otsu and other thresholding methods, but they were unable to give clear digits.
I had some success, though I don't know how it will fare with the rest of your images, using a 2-colour quantisation, conversion to greyscale and normalisation.
I just did it at the command line with ImageMagick, as follows:
convert input.png +dither -colors 3 -colors 2 -colorspace gray -normalize -scale 250x result.png
So, it loads an image and disables dithering, so that the subsequent quantisation only results in 2 actual colours rather than dithered mixtures. I then quantise down to 3 colours - still in RGB colourspace - and then further down to 2 colours. Then, I convert those 2 colours to greyscale and normalise them so the darker one becomes black and the lighter one becomes white.
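The same quantise-then-normalise idea can be sketched in pure Python on a flat list of grey values: a miniature 1-D 2-means clustering standing in for "-colors 2", with the normalise step mapping the darker cluster to black and the lighter one to white. The pixel values are illustrative.

```python
# Sketch of "+dither -colors 2 -colorspace gray -normalize":
# cluster grey values into two groups, then map dark -> 0 and light -> 255.
def binarize(pixels, iterations=10):
    lo, hi = min(pixels), max(pixels)   # initial cluster centres
    for _ in range(iterations):
        dark = [p for p in pixels if abs(p - lo) <= abs(p - hi)]
        light = [p for p in pixels if abs(p - lo) > abs(p - hi)]
        if dark:
            lo = sum(dark) / len(dark)      # recentre the dark cluster
        if light:
            hi = sum(light) / len(light)    # recentre the light cluster
    # normalise: darker cluster becomes black, lighter becomes white
    return [0 if abs(p - lo) <= abs(p - hi) else 255 for p in pixels]

# Grey digit strokes (~55-70) on a noisy light background (~190-220)
pixels = [210, 200, 60, 55, 190, 220, 70, 205, 65, 215]
binary = binarize(pixels)
```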
An alternative approach to what Mark Setchell suggested in ImageMagick carries over to OpenCV rather straightforwardly. OpenCV has adaptive thresholding (see https://docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html) and connected-components processing (see https://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gac2718a64ade63475425558aa669a943a and https://www.pyimagesearch.com/2016/10/31/detecting-multiple-bright-spots-in-an-image-with-python-and-opencv/). In ImageMagick the steps would be:
1) convert to grayscale
2) stretch image to full dynamic range
3) apply local (adaptive) thresholding
4) optionally use connected components labelling to remove regions smaller than some total number of pixels (area).
convert 2.png \
-colorspace gray \
-auto-level \
-lat 20x20+10% \
2_lat.gif
convert 19.png \
-colorspace gray \
-auto-level \
-negate \
-lat 20x20+5% \
19_lat.gif
Do optional connected components processing here:
convert 2_lat.gif \
-define connected-components:area-threshold=40 \
-define connected-components:mean-color=true \
-connected-components 4 \
2_lat_ccl.gif
convert 19_lat.gif \
-define connected-components:area-threshold=20 \
-define connected-components:mean-color=true \
-connected-components 4 \
19_lat_ccl.gif
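The area-threshold step can be sketched in pure Python: label each 4-connected white region with a flood fill and erase any region below the area threshold, mirroring what -define connected-components:area-threshold=N does. The threshold and the tiny test image are illustrative.

```python
# Sketch of connected-components cleanup: label 4-connected foreground
# regions and erase those smaller than min_area (speckle noise).
def remove_small_regions(img, min_area):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                # flood-fill one 4-connected component
                stack, region = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) < min_area:
                    for ry, rx in region:   # erase the speckle
                        img[ry][rx] = 0
    return img

# One 4-pixel blob (kept) and one lone speckle at (1, 3) (removed)
img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0]]
cleaned = remove_small_regions(img, min_area=2)
```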
To smooth the jagged edges further, you would likely need a raster-to-vector tool such as potrace.