When working with JPEG image properties (resolution, sampling, etc.), and you then export the final product, are you ALWAYS double dipping into 'jpegification'?
From my understanding when you load a JPEG image into an image manipulation tool (GIMP, Photoshop, ImageMagick, etc.) it goes like so:
Import JPEG
Decode JPEG into easier workable format (Bitmap)
Manipulate the pixels
Export back into JPEG (redoing JPEG quantization; even if you copy the original JPEG parameters, it's a double dip)
Am I correct in this?
Thanks!
Any areas of the image that have changed would have to be quantized again anyway.
In theory, an application could keep the quantized DCT values lying around and reuse them on export. However:
That would require roughly three times as much memory: the quantized values need 16 bits to store, on top of the 8 bits for each decoded pixel value.
If you changed the sampling or quantization tables, the quantized values would have to be recalculated.
There would be very few cases where it would make sense to hang on to the quantized DCT values.
I think it depends on what you do after reading the image, but you can check for yourself, for any particular operation, whether it has re-quantised the data by using this feature of ImageMagick:
identify -format "%#\n" image.jpg
bb1f099c2e597fdd2e7ab3d273e52ffde7229b9061154c970d23b171df3aca89
which calculates the checksum (or signature, as ImageMagick calls it) of the pixel data, disregarding the header information.
So, if I create a file of random noise, like this
convert -size 1000x1000 xc:gray +noise gaussian image.jpg
and get the checksum of the data, like this
identify -format "%#\n" image.jpg
84474ba583dbc224d9c1f3e9d27517e11448fcdc167d8d6a1a9340472d40a714
I can then use jhead to change the comment in the header, like this
jhead -cl "Comment" image.jpg
Modified: image.jpg
and yet the checksum remains unchanged so I would say jhead has NOT re-quantised the data.
I guess my point is that your statement that images are ALWAYS re-quantised is not 100% accurate; it depends on what you actually do to the image. Further, this gives you a way to readily check for yourself whether any processing step has actually caused re-quantisation. HTH!
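To turn that into a repeatable check, here is a minimal sketch (the filename and the jhead step are just placeholders for whatever operation you want to test) that compares the pixel signature before and after:
# Sketch only: detect whether an operation re-quantised the pixel data
# by comparing ImageMagick pixel signatures taken before and after it.
before=$(identify -format "%#" image.jpg)
jhead -cl "Comment" image.jpg        # the operation under test (placeholder)
after=$(identify -format "%#" image.jpg)
if [ "$before" = "$after" ]; then
    echo "Pixel data unchanged: no re-quantisation"
else
    echo "Pixel data changed: the operation re-encoded (re-quantised) the image"
fi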
I am seeing the following image in a paper:
However, when I download the associated dataset with the paper, the images are like this:
How can I make the almost-black images in the dataset look like the one in the paper?
link to dataset: http://www.cs.bu.edu/~betke/research/HRMF2/
link to paper: http://people.bu.edu/breslav/084.pdf
The contents of this dataset are saved as 16-bit PNG data, but an ordinary display has only an 8-bit dynamic range, so we cannot display them properly without windowing. Please try ImageJ; it can map the 16-bit data into visible 8-bit data.
https://imagej.nih.gov/ij/index.html
It displays the image as follows.
As the other answers rightly say, the images are in 16-bit PNG format. You can convert one to a conventionally scaled, viewable JPEG with ImageMagick which is installed on most Linux distros and is available for macOS and Windows.
So, in Terminal:
magick 183.png -auto-level 183.jpg
If you would like to convert all 800 images to JPEGs in one go, you can use ImageMagick's mogrify like this:
magick mogrify -format JPEG -auto-level *png
Note that if your ImageMagick is the older v6 (as opposed to the v7 commands I used) the two commands become:
convert 183.png -auto-level 183.jpg
mogrify -format JPEG -auto-level *png
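You can also confirm the bit depth and the actual pixel range with ImageMagick's identify; the images most likely look black in an ordinary viewer because the pixel values occupy only the low end of the 16-bit range. A quick check, using a filename from the dataset:
# Report bit depth and the actual pixel value range of the PNG. Values sitting
# in the low end of the 16-bit range look nearly black once a viewer keeps only
# the top 8 bits.
identify -format "depth: %z bits, min: %[min], max: %[max]\n" 183.png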
The images in this dataset have a high dynamic range of 16 bits per pixel. The image viewer you are using maps the 16-bit pixels to 8-bit ones in order to display them (most displays can only effectively handle 8 or 10 bits of brightness). Most image viewers will just truncate the least significant 8 bits, which in your case produces a nearly black image. To get better results, use ImageJ (the Fiji distribution is the easiest way to get started), which displays this:
I'm trying to batch convert PDFs to PNGs. Previously, this was always done manually through GIMP by importing a PDF and then converting it to PNG.
With the script that I wrote, this should all be done automatically. But for some reason, the image quality I get from using
convert \
-density 300 \
-adaptive-resize 2048 \
-define png:compression-level=9 \
"File1"
"File2"
doesn't have the same "quality" as doing it via GIMP. See the image below for the difference in image quality.
In GIMP, I don't change much to the image. When I import the PDF, I change the resolution to 2048 pixels. When I convert and export it to PNG, I use all the default values GIMP offers, nothing fancy.
Changing the density to a higher or lower value doesn't do anything to the image. Also changing adaptive-resizing to normal resizing doesn't do much.
In the example image, both pictures are 2048 pixels wide. As you can see, the lower image has much thicker/blurrier lines.
Example image comparison:
So, I have found a way around my problem.
Increasing the PPI kind of helped but still not as much as I would have liked it to.
Eventually I added this:
-channel A -fx "p*(p>0.2?22:0)"
It's just a simple piece of code I found somewhere around here. It checks the alpha level of each pixel: if it's below a certain threshold, the pixel is made fully transparent; if it's over the threshold, it is boosted to maximum visibility. Combined with the high PPI, I don't get any "half pixels" anymore.
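Putting the pieces together, the final command might look something like this; the 600 density and the filenames are assumptions, and the alpha expression is the one above:
# Sketch only (density and filenames assumed): render the PDF at a high density,
# snap the alpha channel with the -fx threshold, then resize down.
convert \
  -density 600 \
  "input.pdf" \
  -channel A -fx "p*(p>0.2?22:0)" +channel \
  -adaptive-resize 2048 \
  -define png:compression-level=9 \
  "output.png"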
I used Microsoft Paint to create a 15248 x 6552 solid color picture. I saved it as both .png and .jpg and was expecting the .jpg to be smaller than .png, but it was not.
The .jpg file is 1.49 MB, while the .png is 391 KB. Shouldn't JPEG, being a lossy compression, technically be smaller in size?
I read somewhere that .png is better for solid colors etc., so I downloaded a picture from the web (not a solid color) and used Paint to save it in both formats. This time the JPEG was smaller than the PNG. Is it solely due to the gradient of colors? If so, why?
Even if the picture is a solid color, shouldn't JPEG encoding be able to compress it even better?
It's to be expected that PNG performs better than JPEG in this scenario.
As pointed out in the other answer, PNG does per-line pixel prediction, followed by ZLIB compression. If the image has a single colour, the prediction will produce a constant zero value for all the pixels except the start of each row, so the compression will be very effective. I'd bet that if the image were rotated (6552 x 15248 instead of 15248 x 6552) the compression would be even a little better.
JPEG compression, instead, divides the image into blocks of 8x8 pixels and, for each one, attempts to quantize the low-frequency components finely and the high-frequency components coarsely. This works nicely for "natural" (photographic or rendered) images, but not so nicely for images with few colours (or a single one!).
See some comparisons here.
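A quick way to reproduce the comparison yourself, assuming ImageMagick is installed (the colour value here is just an arbitrary stand-in for whatever was used in Paint):
# Sketch: create the same solid-colour image in both formats and compare sizes.
convert -size 15248x6552 xc:"#4080c0" solid.png
convert -size 15248x6552 xc:"#4080c0" solid.jpg
ls -lh solid.png solid.jpg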
Not necessarily.
PNG is a prediction-based algorithm, which means it tries to predict the value of each pixel from previously coded pixels. I bet the prediction is really accurate for a solid-colour image, hence the very good results.
JPEG accepts different "quality levels" which determine the size of your compressed file. The size differences between your experiment and the web version are likely due to that (unless you're downloading a different image, of course!).
Note that JPEG may introduce some image artifacts because it is a lossy algorithm, while PNG will recover the exact input image for you.
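To see how much the quality level matters, a small sweep like this (the input filename is assumed) makes the size differences obvious:
# Sketch (assumed input file): re-encode at several JPEG quality levels
# and list the resulting file sizes.
for q in 10 50 75 95; do
  convert photo.png -quality $q photo_q$q.jpg
done
ls -lh photo_q*.jpg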
I've found that, for the same picture, if you save it as PNG first and then as JPG, the PNG will be smaller; if you save it as JPG first, the JPG will be smaller than the PNG saved afterwards.
I tried to change some pixel values of a grayscale image and save it using imwrite in MATLAB.
There is no problem with saving.
The problem is that when I read it back, some pixel values have changed; they are not exactly the same values I assigned to the pixels before saving.
I'm trying to hash images, so a 1-unit difference will affect the hash values.
As mentioned by mmgp, JPG can be lossy. That means that some of the information in your image will be lost in favor of storage efficiency.
The rationale behind JPG is somewhat like that behind MP3 -- changes in hues etc. that the human eye is not particularly well-adapted to distinguish will be simplified or removed altogether, thus decreasing the amount of information in the image. The information in a JPG represents a similar-looking, but in fact very different image. This is probably what you're experiencing.
In MATLAB, have a look at the output of help imwrite. You can pass a parameter called 'Quality' to the JPEG writer: a number between 0 and 100, where 100 means (near-)lossless compression.
Although the JPEG standard does allow for (near-)lossless compression, it is not often used in practice (at least, in my field). More popular lossless image formats are PNG, JPEG2000 and TIFF. Read more about it here.
All of these are also available in MATLAB's imwrite function.
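As a quick sanity check outside MATLAB, you can also round-trip an image through both formats with ImageMagick's compare and count how many pixels change (a sketch; the filenames are assumptions):
# Sketch (assumed filenames): round-trip through JPEG and PNG and count the
# pixels that differ from the original (AE = absolute error pixel count).
convert original.png roundtrip.jpg
convert original.png roundtrip.png
compare -metric AE original.png roundtrip.jpg null:   # usually non-zero: lossy
compare -metric AE original.png roundtrip.png null:   # 0: PNG round-trips exactly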
I need to reduce the file size of a color scan.
Up to now I think the following steps should be made:
selective blur (or similar) to reduce noise
scale to ~120dpi
reduce colors
Up to now we use convert (imagemagick) and net-ppm tools.
The scans are invoices, not photos.
Any hints appreciated.
Update
example:
http://www.thomas-guettler.de/tbz/example.png (11 MB, original)
http://www.thomas-guettler.de/tbz/example_0800_pnmdepth009.png (pnmscale, pnmdepth; 110 KB)
http://www.thomas-guettler.de/tbz/example_1000_pnmdepth006.png (pnmscale, pnmdepth; 116 KB)
Bounty
The smallest, still well-readable reduced version of example.png, produced by a reproducible solution, gets the bounty. The solution needs to use open-source software only.
The file format is not important, as long as you can convert it to PNG again. Processing time is not important. I can optimize later.
Update
I got very good results for black-and-white output (thank you). Reducing the colors to about 16 or 32 would also be interesting.
This is a rather open-ended question, since there's still room to trade image quality against image size... after all, making it black and white and compressing it with CCITT T.6 (fax-style) bilevel compression is going to beat the pants off most, if not all, colour-capable compression algorithms.
If you're willing to go black and white (not grayscale), do that! It makes documents very small.
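For example, something along these lines (the threshold value is a guess you would tune per scan) produces a Group 4 (CCITT T.6) compressed TIFF:
# Sketch (threshold assumed): binarise the scan and store it with Group 4
# (CCITT T.6) fax compression in a TIFF container.
convert example.png -colorspace Gray -threshold 60% -compress Group4 example_g4.tif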
Otherwise I recommend a series of minor image transformations and Adaptive Prediction Trees (see here). The APT software package is open source or public domain and very easy to compile and use. Its advantages are that it performs well on a wide variety of image types, especially text, and it lets you trade image size against image quality without losing readability. (I found myself squishing an example_1000-sized color version down to 48KB at the threshold of readability, and to 64KB with obvious artifacts but easy readability.)
I combined APT with imagemagick tweakery:
convert example.png -resize 50% -selective-blur 0x4+10% -brightness-contrast -5x30 -resize 80% example.ppm
./capt example.ppm example.apt 20 # The 20 means quality in the range [0,100]
And to reverse the process
./dapt example.apt out_example.ppm
convert out_example.ppm out_example.png
To explain the imagemagick settings:
-resize 50%: Makes the image half the size to speed up processing. It also hides some print and scan artifacts.
-selective-blur 0x4+10%: Sharpening actually creates more noise. What you actually want is a selective blur (like in Photoshop) which blurs when there's no "edge".
-brightness-contrast -5x30: Here we increase the contrast a good bit to clip the bad coloration caused by the page outline (leading to less compressible data). We also darken slightly to make the blacks blacker.
-resize 80%: Finally, we resize to a little bigger than your example_1000 image size. (Close enough.) This also reduces the number of obvious artifacts, since they're somewhat hidden when the pixels are merged together.
At this point you're going to have a fine looking image in this example -- nice, smooth colors and crisp text. Then we compress. The quality value of 20 is a pretty low setting and it's not as spiffy looking anymore, but the document is very legible. Even at a quality value of 0 it's still mostly legible.
Again, using APT isn't necessarily going to give the best results for this image, but it won't turn photographic-like content such as gradients into an entirely unrecognizable mess, so you should be better covered across more, or unexpected, types of documents.
Results:
88kb
76kb
64kb
48kb
Processed image before compression
If you truly don't care about the number of colors, we may as well go to black-and-white and use a bilevel coder. I ended up using the DJVU format because it compares well to JBIG2 and has open source encoders. In this case I used the didjvu encoder because it achieved the best results. (On Ubuntu you can apt-get install didjvu, perhaps on other distributions as well.)
The magic I ended up with looks like this to encode:
convert example.png -resize 50% -selective-blur 0x4+10% -normalize -brightness-contrast -20x100 -dither none -type bilevel example_djvu.pgm
didjvu encode -o example.djvu example_djvu.pgm --lossless
Note that this is actually a superior colour blur to 0x2+10% at full resolution -- this will end up making the image about as nice as imaginable before it's converted to a bilevel image.
Decoding works as follows:
convert example.djvu out_example.png
Even with the larger resolution (which is much easier to read), the size weighs in at 24KB. When reduced to the same size, it's still 24KB! Lastly, with only a 75% reduction of the original image and a 0x5+10% blur, it weighs in at 32KB.
See here for the visual results: http://img29.imageshack.us/img29/687/exampledjvu.png
If you already have it doing the right thing with the ImageMagick utility "convert", then it might be a good idea to look at the ImageMagick libraries first.
A quick look at my Ubuntu package lists shows bindings for Perl, Python, Ruby, C++ and Java.