Most non-serious cameras (phone cameras and webcams) provide lossy JPEG images as output.
While the artifacts may not be noticeable to the human eye, the data loss could be critical for image processing algorithms.
If I am correct, what is the general approach you take when analyzing input images?
(Please note: using an industry-standard camera may not be an option for hobbyist programmers.)
JPG is an entire family of implementations; there are actually four methods. The most common is the baseline method, based on the Discrete Cosine Transform. This simply divides the image into 8x8 blocks and calculates the DCT of each block, which results in a list of coefficients. To store these coefficients efficiently, they are divided by a quantization matrix and rounded, so that the higher frequencies usually become zero. This is the only lossy step in the process, and it is done so the coefficients can be stored far more compactly than before.
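As an illustration (not a full codec), here is a rough sketch of that lossy step, assuming NumPy and SciPy are available; the quantization table is the standard luminance table from Annex K of the spec:

```python
# Minimal sketch of the lossy step in baseline JPEG: 8x8 block DCT followed by
# quantization against the standard luminance table (Annex K of the JPEG spec).
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize_block(block):
    """block: 8x8 array of luma samples in 0..255."""
    coeffs = dctn(block - 128.0, norm="ortho")         # forward DCT
    return np.round(coeffs / Q)                        # the only lossy step

def dequantize_block(quantized):
    """Approximate reconstruction of the original 8x8 block."""
    return idctn(quantized * Q, norm="ortho") + 128.0
```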
So, your question is not answered very easily. It also depends on the size of the input: if you have a sufficiently large image (say 3000x2000) stored at relatively high quality, you will have no trouble with artefacts. A small image with a high compression rate might cause trouble.
Remember though that an image taken with a camera contains a lot of noise, which in itself is probably far more troubling than the jpg compression.
In my work I usually converted all images to the PGM format, which is a raw format. This ensures that when I process an image in a pipeline fashion, the intermediate steps do not suffer from JPG compression.
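For what it's worth, a minimal sketch of that conversion with Pillow (the filenames are just examples):

```python
# Decode the camera JPEG once, then keep every intermediate result in a
# lossless format (PGM here) so the pipeline never re-introduces JPEG artefacts.
from PIL import Image

img = Image.open("input.jpg").convert("L")  # "L" = 8-bit grayscale
img.save("stage0.pgm")                      # Pillow writes binary PGM for mode "L"
```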
Keep in mind that operations such as rotation, scaling, and re-saving as JPG cause additional data loss with each iteration.
I am working on a project I have wanted to do for quite a while: an all-round Huffman compressor which will work, not just in theory, on various types of files. I am writing it in Python:
text - which is, for obvious reasons, the easiest one to implement; already done, works wonderfully.
images - this is where I am struggling. I don't know how to approach images or how to read them in a simple way that would actually help me compress them.
I've tried reading them pixel by pixel, but somehow, it actually enlarges the picture instead of compressing it.
What I've tried:
Reading the image pixel by pixel using Image (PIL), getting all the pixels into a list, creating a frequency table (for each pixel value) and then encoding it. The problem, imo, is that I am reading each pixel and building a frequency table from that. That way, I get far too many symbols, which leads to overly long Huffman codes (over 8 bits).
I think I may be able to solve this problem by reading a larger set of pixels at a time or something of that sort, because then I'd have a smaller code table and therefore shorter Huffman codes. If I leave it as it is, I can, in theory, get a 256^3-sized code table (since each pixel is (0-255, 0-255, 0-255)).
Is there any way to read a larger number of pixels at a time (>1 pixel), or is there a better way to approach images when all I need is to compress them?
Thank you all for reading so far, and a special thank you for anyone who tries to lend a hand.
Edited: if Huffman is a really bad compression algorithm for images, are there any better ones you can think of? The project I'm working on can use different algorithms for different file types if necessary.
Encoding whole pixels like this often results in far too many unique symbols, each of which is used only a few times, especially if the image is a photograph or contains many coloured gradients. A simple way to fix this is to split the image into its R, G and B colour planes and encode those either separately or concatenated; either way, the actual elements being encoded are in the range 0..255 and not multi-dimensional.
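A rough sketch of the plane-splitting idea, assuming Pillow and an RGB input; `photo.png` is just an example filename, and the frequency table is what you would feed into your existing Huffman builder:

```python
# Split an RGB image into its colour planes and build a byte-level frequency
# table per plane, so the Huffman alphabet never exceeds 256 symbols.
from collections import Counter
from PIL import Image

img = Image.open("photo.png").convert("RGB")    # example filename
r, g, b = img.split()

for name, plane in (("R", r), ("G", g), ("B", b)):
    data = plane.tobytes()                      # raw 8-bit samples for this plane
    freq = Counter(data)                        # at most 256 distinct symbols
    print(name, len(freq), "distinct symbols")  # feed `freq` into your Huffman builder
```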
But as you suspect, exploiting just 0th-order entropy is not so great for many images, especially photographs. As an example of what some existing formats do: PNG uses filters to take some advantage of spatial correlation (great for smooth gradients). JPG uses quantized discrete cosine transforms, (usually) a colour space transformation to YCbCr (to decorrelate the channels, and to crush chroma more mercilessly than luma) and (usually) chroma subsampling. JPEG 2000 uses wavelets and a colour space transformation in both its lossy and lossless forms (though with different wavelets and a different colour space transformation), and also supports subsampling, though dropping a wavelet scale achieves a similar effect.
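If you want to experiment with the filtering idea before reaching for DCTs or wavelets, here is a hypothetical pre-filter in the spirit of PNG's Sub filter (a sketch of the concept, not PNG's actual implementation): on smooth gradients most differences cluster near zero, which a plain Huffman coder rewards.

```python
# Replace each sample with its difference from the left neighbour (mod 256),
# and the exact inverse to undo it. Works per colour plane as a 2D uint8 array.
import numpy as np

def sub_filter(plane):
    """plane: 2D uint8 array (one colour channel)."""
    out = plane.astype(np.int16)
    out[:, 1:] -= plane[:, :-1].astype(np.int16)
    return (out % 256).astype(np.uint8)

def sub_unfilter(filtered):
    out = filtered.astype(np.int16)
    for x in range(1, filtered.shape[1]):
        out[:, x] = (out[:, x] + out[:, x - 1]) % 256
    return out.astype(np.uint8)
```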
As we all know, the JPEG decoding process consists of the following steps:
VLD - Variable length decoding,
ZZ - Zigzag scan,
DQ - Dequantization,
IDCT - Inverse discrete cosine transform,
Color conversion (YUV to RGB) and reorder.
My question is: for JPEG images with different characteristics, which of the above decoding steps will take more time?
For example:
For decoding this type of image with noise, which of the above five processes will take relatively more time?
Another example:
For two versions of the same image with different quality, which of the above five processes will take more time when decoding the higher-quality one?
JPEG decoding time tends to scale linearly with image size. The major factor that affects the decoding time is whether the file uses sequential or progressive scans. In sequential, each component is processed once. In progressive, it is processed at least twice and possibly as many as 500 times (but that would be absurd) for each component.
For your specific questions:
VLD - Variable length decoding,
It depends upon whether you do this once (sequential) or multiple times (progressive)
ZZ - Zigzag scan
Easy to do. Array indices.
DQ - Dequantization
Depends upon how many times you do it. Once for sequential. It could be done multiple times for progressive (but does not need to be unless you want continuous updates on the screen).
IDCT - Inverse discrete cosine transform,
This depends a lot on the algorithm used, whether it is done using scaled integers or floating point, and whether it is done multiple times (as may or may not be the case with a progressive JPEG).
Color conversion (YUV to RGB) and reorder.
You only have to do this once. However, if there is chroma subsampling, it gets more complicated.
In other words, the decoding time is the same no matter what the image is. However, the decoding time depends upon how that image is encoded.
I qualify that by saying that a smaller file tends to decode faster than a larger one simply because of the time to read off the disk. The more random the data is the larger the file tends to be. It often takes more time to read and display a large BMP file than a JPEG of the same image because of the file size.
We're building an online video editing service. One of the features allows users to export a short segment of their video as an animated GIF. Imgur has a file size limit of 2 MB per uploaded animated GIF.
GIF file size depends on the number of frames, the color depth, and the image content itself: a solid flat color results in a very lightweight GIF, while some random-color TV-noise animation would be quite heavy.
First, I export each video frame as a PNG at the final GIF frame size (fixed, 384x216).
Then, to maximize GIF quality, I make several GIF render attempts with slightly different parameters - varying the number of frames and the number of colors in the GIF palette. The render that has the best quality while staying under the file size limit gets uploaded to Imgur.
Each render takes time and CPU resources — this I am looking to optimize.
Question: what would be a smart way to estimate the best render settings from the actual images, to get as close as possible to the file size limit, and at least reduce the number of render attempts to 2-3?
The GIF image format uses LZW compression, infamous because Unisys, the owner of the algorithm patent, aggressively pursued royalty payments just as the image format got popular. That turned out well in the end; we have PNG to thank for it.
The amount by which LZW can compress the image is extremely non-deterministic and greatly depends on the image content. At best, you can provide the user with a heuristic that estimates the final image file size, displaying, say, a success prediction with a colored bar. You can color it pretty quickly by converting just the first frame; that won't take long on a 384x216 image, it runs in human time, a fraction of a second.
And then extrapolate the effective compression rate of that first image to the subsequent frames. Which ought to encode only small differences from the original frame so ought to have comparable compression rates.
You can't truly know whether the result exceeds the site's size limit until you've encoded the entire sequence, so be sure to emphasize in your UI design that your prediction is just an estimate, so your user isn't going to be disappointed too much. And of course provide him with tools to get the size lowered, something like a nearest-neighbor interpolation that makes the pixels in the image bigger. Focusing on making the later frames smaller can pay off handsomely as well; GIF encoders don't normally do this well by themselves. YMMV.
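A minimal sketch of the "encode one frame, extrapolate" heuristic with Pillow; `frames` is assumed to be a list of PIL images already at 384x216, and the 0.6 per-frame factor is a made-up starting point you would tune against real encodes:

```python
# Encode only the first frame as a GIF, then extrapolate the total size,
# assuming later frames store mostly small differences from the first.
import io

def estimate_gif_size(frames, colors=256):
    first = frames[0].quantize(colors=colors)   # reduce to a GIF-style palette
    buf = io.BytesIO()
    first.save(buf, format="GIF")
    per_frame = buf.tell()                      # bytes for the first frame
    # 0.6 is a placeholder cost factor for subsequent frames; calibrate it.
    return per_frame + per_frame * 0.6 * (len(frames) - 1)
```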
There's no simple answer to this. Single-frame GIF size mainly depends on image entropy after quantization, and you could try using stddev as an estimator using e.g. ImageMagick:
identify -format "%[fx:standard_deviation]" imagename.png
You can very probably get better results by running a smoothing kernel on the image in order to eliminate some high-frequency noise that's unlikely to be informational, and very likely to mess up compression performance. This goes much better with JPEG than with GIF, anyway.
Then, in general, you want to run a great many samples in order to come up with something of this kind (let's say you have a single compression parameter Q):
STDDEV SIZE W/Q=1 SIZE W/Q=2 SIZE W/Q=3 ...
value1 v1,1 v1,2 v1,3
After running several dozen tests (but you need do this only once, not at runtime), you will get both an estimate of the compressed size as a function of stddev and Q, and a measurement of its error. You'll then see that an image with stddev 0.45 that compresses to 108 Kb when Q=1 will compress to 91 Kb plus or minus 5 when Q=2, and 88 Kb plus or minus 3 when Q=3, and so on.
At that point, given an unknown image, you get its stddev and its compressed size at Q=1, and you can interpolate the probable size when Q equals, say, 4, without actually running the encoding.
While your service is active, you can store statistical data (i.e., after you really do the encoding, you store the actual results) to further improve the estimation; after all, you'd only be storing some numbers, not any potentially sensitive or personal information that might be in the video, and acquiring and storing those numbers would come nearly for free.
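A sketch of the lookup-and-interpolate step; the `calibration` table is hypothetical (the stddev 0.45 row uses the numbers from the example above, the other points are placeholders you would replace with your own measurements):

```python
# calibration maps a Q value to (stddev, size_kb) points collected offline;
# for a new image we interpolate on stddev for the requested Q.
import numpy as np

calibration = {
    1: [(0.20, 62), (0.45, 108), (0.70, 155)],   # placeholder points except 0.45
    2: [(0.20, 55), (0.45, 91),  (0.70, 131)],
    3: [(0.20, 51), (0.45, 88),  (0.70, 120)],
}

def predict_size_kb(stddev, q):
    pts = sorted(calibration[q])
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return float(np.interp(stddev, xs, ys))      # linear interpolation on stddev
```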
Backgrounds
It might be worthwhile to recognize images with a fixed background; in that case you can run some adaptations to make all the frames identical in those areas, and have the GIF animation algorithm not store that information. When and if you get such a video (e.g. a talking head), this could lead to huge savings. It would, however, throw the parameter estimation completely off, unless you could also estimate the actual extent of the background area. In that case, let that area be B and the frame area be A; the compressed size for five frames would be A+(A-B)*(5-1) instead of A*5, and you could apply this correction factor to the estimate.
Compression optimization
Then there are optimization techniques that slightly modify the image and adapt it for a better compression, but we'd stray from the topic at hand. I had several algorithms that worked very well with paletted PNG, which is similar to GIF in many regards, but I'd need to check out whether and which of them may be freely used.
Some thoughts: the LZW algorithm works on the pixel stream line by line. So whenever a sequence of N pixels is "less than X%" different (perceptually or arithmetically) from an already-encountered sequence, rewrite the sequence:
018298765676523456789876543456787654
987678656755234292837683929836567273
here the 656765234 sequence in the first row is almost matched by the 656755234 sequence in the second row. By changing the mismatched 5 to 6, the LZW algorithm is likely to pick up the whole sequence and store it with one symbol instead of three (6567,5,5234) or more.
Also, LZW works with bits, not bytes. This means, very roughly speaking, that the more the 0's and 1's are balanced, the worse the compression will be. The more unpredictable their sequence, the worse the results.
So if we can find a way of making the distribution more **a**symmetrical, we win.
And we can do it, and we can do it losslessly (the same works with PNG). We choose the most common colour in the image, once we have quantized it. Let that color be color index 0. That's 00000000, eight fat zeroes. Now we choose the most common colour that follows that one, or the second most common colour; and we give it index 1, that is, 00000001. Another seven zeroes and a single one. The next colours will be indexed 2, 4, 8, 16, 32, 64 and 128; each of these has only a single bit 1, all others are zeroes.
Since colors will be very likely distributed following a power law, it's reasonable to assume that around 20% of the pixels will be painted with the first nine most common colours; and that 20% of the data stream can be made to be at least 87.5% zeroes. Most of them will be consecutive zeroes, which is something that LZW will appreciate no end.
Best of all, this intervention is completely lossless; the reindexed pixels will still be the same colour, it's only the palette that is shifted accordingly. I developed such a codec for PNG some years ago, and in my use case (PNG street maps) it yielded very good results, about a 20% gain in compression. With more varied palettes and with the LZW algorithm the results will probably not be as good, but the processing is fast and not too difficult to implement.
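A sketch of that reindexing for a paletted (mode "P") Pillow image; this is an illustration of the idea, not the codec I mentioned:

```python
# Sort palette entries by how often they are used, then map the most frequent
# colours onto indices with few 1-bits (0, 1, 2, 4, 8, ...). Pixels keep their
# colours; only the index values and the palette order change.
from collections import Counter
from PIL import Image

def reindex(img):                        # img: PIL Image in mode "P"
    counts = Counter(img.tobytes())      # frequency of each palette index
    by_freq = [i for i, _ in counts.most_common()]
    by_freq += [i for i in range(256) if i not in counts]

    # target indices ordered by number of set bits: 0, 1, 2, 4, 8, 16, ...
    sparse = sorted(range(256), key=lambda v: (bin(v).count("1"), v))
    mapping = dict(zip(by_freq, sparse))

    pal = img.getpalette() or []
    pal = pal + [0] * (768 - len(pal))   # flat [r0, g0, b0, r1, ...]
    new_pal = [0] * 768
    for old, new in mapping.items():
        new_pal[new * 3:new * 3 + 3] = pal[old * 3:old * 3 + 3]

    remapped = img.point(lambda v: mapping[v])
    remapped.putpalette(new_pal)
    return remapped
```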
For example, I have a 1024*768 JPEG image. I want to estimate the size of this image after it is scaled down to 800*600 or 640*480. Is there any algorithm to calculate the size without actually generating the scaled image?
I took a look at the resize dialog in Photoshop. The size it shows is basically (width pixels * height pixels * bits/pixel), which differs hugely from the actual file size.
I have a mobile image browser application which allows the user to send an image through email, with options to scale down the image. We provide checkboxes for the user to choose a down-scale resolution along with the estimated size. For large images (> 10 MB), we offer three down-scale sizes to choose from. If we generate a cached image for each option, it may hurt memory, so we are trying to find a solution that avoids that memory consumption.
I have successfully estimated the scaled size based on the DQT - the quality factor.
I conducted some experiments and found that if we use the same quality factor as in the original JPEG image, the scaled image will have a size roughly equal to (scale factor * scale factor) times the original image size. The quality factor can be estimated from the DQT defined in every JPEG image, by comparing it against the standard quantization tables shown in Annex K of the JPEG spec.
Although other factors like color subsampling, different compression algorithms and the image content itself contribute to the error, the estimation is pretty accurate.
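As a tiny helper, the estimate described above boils down to this (a sketch that ignores the error sources just mentioned):

```python
# With the same quality factor, the scaled file is roughly (scale factor)^2,
# i.e. the area ratio, times the original file size.
def estimate_scaled_size(original_bytes, orig_w, orig_h, new_w, new_h):
    area_ratio = (new_w * new_h) / (orig_w * orig_h)
    return int(original_bytes * area_ratio)

# e.g. a 10 MB 1024x768 JPEG scaled to 640x480:
# estimate_scaled_size(10 * 1024 * 1024, 1024, 768, 640, 480) -> ~3.9 MB
```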
P.S. Examining JPEGsnoop and its source code helped me a lot :-)
Cheers!
Like everyone else said, the best algorithm to determine what sort of JPEG compression you'll get is the JPEG compression algorithm.
However, you could also calculate the Shannon entropy of your image, in order to try and understand how much information is actually present. This might give you some clues as to the theoretical limits of your compression, but is probably not the best solution for your problem.
This concept will help you measure the difference in information between an all-white image and that of a crowd, which is related to its compressibility.
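If you want to try this, here is a quick way to compute the Shannon entropy (in bits per pixel) of a grayscale version of an image with NumPy and Pillow; an all-white frame comes out near 0, a noisy crowd shot much higher:

```python
# Shannon entropy of the 8-bit grayscale histogram, in bits per pixel.
import numpy as np
from PIL import Image

def shannon_entropy(path):
    pixels = np.asarray(Image.open(path).convert("L")).ravel()
    counts = np.bincount(pixels, minlength=256)
    p = counts[counts > 0] / pixels.size
    return float(-(p * np.log2(p)).sum())
```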
-Brian J. Stinar-
Why estimate what you can measure?
In essence, it's impossible to provide any meaningful estimate due to the fact that different types of images (in terms of their content) will compress very differently using the JPEG algorithm. (A 1024x768 pure white image will be vastly smaller than a photograph of a crowd scene for example.)
As such, if you're after an accurate figure it would make sense to simply carry out the re-size.
Alternatively, you could just provide a range such as "40KB to 90KB", based on an "average" set of images.
I think what you want is something weird and difficult to do. Depending on the JPG compression level, some images are heavier than others in terms of size.
My hunch for JPEG images: given two images at the same resolution, compressed at the same quality ratio, the image occupying less memory will compress more (in general) when its resolution is reduced.
Why? From experience: many times, when working with a set of images, I have seen that if a thumbnail occupies significantly more memory than most others, reducing its resolution causes almost no change in its size (memory). On the other hand, reducing the resolution of one of the average-sized thumbnails reduces the size significantly (all parameters like original/final resolution and JPEG quality being the same in both cases).
Roughly speaking: the higher the entropy, the less the impact on image size of changing resolution (at the same JPEG quality).
If you can verify this with experiments, maybe you can use it as a quick method to estimate the size. If my language is confusing, I can explain with some mathematical notation / a pseudo-formula.
An 800*600 image file should be roughly (800*600)/(1024*768) times as large as the 1024*768 image file it was scaled down from. But this is really a rough estimate, because the compressibility of original and scaled versions of the image might be different.
Before I attempt to answer your question, I'd like to join the ranks of people that think it's simpler to measure rather than estimate. But it's still an interesting question, so here's my answer:
Look at the block DCT coefficients of the input JPEG image. Perhaps you can find some sort of relationship between the number of higher frequency components and the file size after shrinking the image.
My hunch: all other things (e.g. quantization tables) being equal, the more high-frequency components you have in your original image, the bigger the difference in file size between the original and the shrunk image will be.
I think that by shrinking the image, you will reduce some of the higher frequency components during interpolation, increasing the possibility that they will be quantized to zero during the lossy quantization step.
If you go down this path, you're in luck: I've been playing with JPEG block DCT coefficients and put some code up to extract them.
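This isn't that coefficient extractor; as a rough stand-in, you could recompute 8x8 block DCTs from the decoded pixels and measure how much of the energy sits in the higher-frequency coefficients (the cutoff of 4 here is an arbitrary choice):

```python
# Fraction of block-DCT energy in higher-frequency coefficients, recomputed
# from the decoded pixels (not read from the JPEG bitstream itself).
import numpy as np
from PIL import Image
from scipy.fft import dctn

def high_freq_ratio(path, cutoff=4):
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    h, w = (img.shape[0] // 8) * 8, (img.shape[1] // 8) * 8
    total = high = 0.0
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            c = dctn(img[y:y + 8, x:x + 8] - 128.0, norm="ortho")
            energy = c ** 2
            total += energy.sum()
            # coefficients with row index >= cutoff, plus cols >= cutoff in the top rows
            high += energy[cutoff:, :].sum() + energy[:cutoff, cutoff:].sum()
    return high / total if total else 0.0
```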
I'm curious about whether there are approaches or algorithms one might use to downscale an image based on the amount of detail or entropy in the image such that the new size is determined to be a resolution at which most of the detail of the original image would be preserved.
For example, if one takes an out-of-focus or shaky image with a camera, there would be less detail or high frequency content than if the camera had taken the image in focus or from a fixed position relative to the scene being depicted. The size of the lower entropy image could be reduced significantly and still maintain most of the detail if one were to scale this image back up to the original size. However, in the case of the more detailed image, one wouldn't be able to reduce the image size as much without losing significant detail.
I certainly understand that many lossy image formats including JPEG do something similar in the sense that the amount of data needed to store an image of a given resolution is proportional to the entropy of the image data, but I'm curious, mostly for my own interest, if there might be a computationally efficient approach for scaling resolution to image content.
It's possible, and one could argue that most lossy image compression schemes from JPEG-style DCT stuff to fractal compression are essentially doing this in their own particular ways.
Note that such methods almost always operate on small image chunks rather than the big picture, so as to maximise compression in lower detail regions rather than being constrained to apply the same settings everywhere. The latter would likely make for poor compression and/or high loss on most "real" images, which commonly contain a mixture of detail levels, though there are exceptions like your out-of-focus example.
You would need to define what constitutes "most of the detail of the original image", since perfect recovery would only be possible for fairly contrived images. And you would also need to specify the exact form of rescaling to be used each way, since that would have a rather dramatic impact on the quality of the recovery. Eg, simple pixel repetition would better preserve hard edges but ruin smooth gradients, while linear interpolation should reproduce gradients better but could wreak havoc with edges.
A simplistic, off-the-cuff approach might be to calculate a 2D power spectrum and choose a scaling (perhaps different vertically and horizontally) that preserves the frequencies that contain "most" of the content. Basically this would be equivalent to choosing a low-pass filter that keeps "most" of the detail. Whether such an approach counts as "computationally efficient" may be a moot point...
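An off-the-cuff sketch of that power-spectrum approach (the 95% energy threshold is an arbitrary choice, and this uses a single radial cut-off rather than separate vertical and horizontal ones):

```python
# Suggest a downscale factor: the smallest normalised frequency radius that
# still contains `keep` of the spectrum's energy (excluding the DC term).
import numpy as np
from PIL import Image

def detail_preserving_scale(path, keep=0.95):
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = img.shape[0] / 2, img.shape[1] / 2
    yy, xx = np.indices(img.shape)
    # radial frequency, normalised so 1.0 is the Nyquist limit along each axis
    r = np.hypot((yy - cy) / cy, (xx - cx) / cx)
    spectrum[int(cy), int(cx)] = 0.0              # ignore the DC term
    order = np.argsort(r, axis=None)              # sort pixels by radius
    cum = np.cumsum(spectrum.ravel()[order])
    idx = np.searchsorted(cum, keep * cum[-1])
    return min(1.0, r.ravel()[order][idx])        # suggested scale factor
```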