If I have a gif89a which has multiple image blocks that are identical (and small, say 40x40 or 1600 pixels in size), should these continue to increase the final size of the gif file (assuming a sane encoder)?
I'm trying to understand how the LZW compression works. According to the W3C spec, I thought the entire data stream itself (consisting of multiple image blocks) would be compressed, and thus repeating the same image frame multiple times would incur very little overhead (just the size of the symbol for the repeated image block). This does not seem to be the case, and I've tested with several encoders (Gimp, Photoshop).
Is this to be expected with all encoders, or are these two just doing it poorly?
With gimp, my test gif was 23k in size when it had 240 identical image blocks, and 58k in size with 500 image blocks, which seems less impressive than my intuition is telling me (my intuition's pretty dumb, so I won't be shocked if/when someone tells me it's incredibly wrong).
[edit]
I need to expand on what I'm getting at, I think, to receive a proper answer. I want to handcraft a gif image (and possibly write an encoder if I'm up to it) that takes advantage of some quirks to compress better than it otherwise would.
I would like to include multiple sub-images in the gif that are used repeatedly in a tiling fashion. If the image is large (in this case, 1700x2200), gif can't compress the tiles well because it doesn't see them as tiles, it rasters from the top left to the bottom right, and at most a 30 pixel horizontal slice of any given tile will be given a symbol and compressed, and not the 30x35 tile itself.
The tiles themselves are just the alphabet and some punctuation in this case, from a scan of a magazine. Of course in the original scan, each "a" is slightly different than every other, which doesn't help for compression, and there's plenty of noise in the scan too, and that can't help.
As each tile will be repeated somewhere in the image anywhere from dozens to hundreds of times, and each is 30 or 40 times as large as any given slice of a tile, it looks like there are some gains to be had (supposing the gif file format can be bent towards my goals).
I've hand-created another gif in gimp, that uses 25 sub-images repeatedly (about 700 times, but I lost count). It is 90k in size unzipped, but zipping it drops it back down to 11k. This is true even though each sub-image has a different top/left coordinate (but that's only what, 4 bytes up in the header of the sub-image).
In comparison, a visually identical image with a single frame is 75k. This image gains nothing from being zipped.
There are other problems I've yet to figure out with the file (it's gif89a, and treats this as an animation even though I've set each frame to be 0ms in length, so you can't see it all immediately). I can't even begin to think how you might construct an encoder to do this... it would have to select the best-looking (or at least one of the better-looking) versions of any glyph, and then figure out the best x,y to overlay it even though it doesn't always line up very well.
Its primary use (I believe) would be for magazines scanned in as cbr/cbz ebooks.
I'm also going to embed my hand-crafted gif; it's easier to see what I'm getting at than to read my writing as I stumble over the explanation:
LZW (and GIF) compression is one-dimensional. An image is treated as a stream of symbols where any area-to-area (blocks in your terminology) symmetry is not used. An animated GIF image is just a series of images that are compressed independently and can be applied to the "main" image with various merging options. Animated GIF was more like a hack than a standard and it wasn't well thought out for efficiency in image size.
There is a good explanation for why you see smaller files after ZIP'ing your GIF with repeated blocks. ZIP files use several techniques, including a "repeated block" type of compression (DEFLATE's sliding window) that does well when identical runs of LZW data are small (<32K) or separated by small distances.
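A rough illustration of that effect with Python's zlib (which implements DEFLATE, the same compression ZIP uses), assuming the per-frame LZW data is byte-identical; the 1600-byte stand-in frame is invented:
import os, zlib

frame = os.urandom(1600)                    # stand-in for one 40x40 frame's LZW output
one   = zlib.compress(frame, 9)
many  = zlib.compress(frame * 240, 9)       # 240 identical copies back to back

print(len(frame), "->", len(one))           # random data: essentially no gain
print(len(frame) * 240, "->", len(many))    # repeats fall inside the 32K window: huge gain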
GIF-generating software can't overcome the basic limitation of how GIF images are compressed without writing a new standard. A slightly better approach is used by PNG which uses simple 2-dimensional filters to take advantage of horizontal and vertical symmetries and then compresses the result with FLATE compression. It sounds like what you're looking for is a more fractal or video approach which can have the concept of a set of compressed primitives that can be repeated at different positions in the final image. GIF and PNG cannot accomplish this.
GIF compression is stream-based. That means to maximize compression, you need to maximize the repeatability of the stream. Rather than square tiles, I'd use narrow strips to minimize the amount of data that passes before it starts repeating, then keep the repeats within the same stream.
The LZW code size is capped at 12 bits, which means the compression table fills up relatively quickly. A typical encoder will output a clear code when this happens so that the compression can start over, giving good adaptability to fresh content. If you do your own custom encoder you can skip the clear code and keep reusing the existing table for higher compression results.
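A rough sketch of that table-management logic (not a complete GIF encoder; the function and names here are my own, and `indices` is assumed to be a sequence of palette indices that fit the declared code size):
def lzw_codes(indices, min_code_size=8, use_clear=True):
    CLEAR = 1 << min_code_size          # clear code
    EOI   = CLEAR + 1                   # end-of-information code
    MAX_TABLE = 1 << 12                 # GIF caps codes at 12 bits

    def fresh_table():
        return {bytes([i]): i for i in range(CLEAR)}

    table, next_code = fresh_table(), EOI + 1
    codes = [CLEAR]
    w = b""
    for k in indices:
        wk = w + bytes([k])
        if wk in table:
            w = wk
            continue
        codes.append(table[w])
        if next_code < MAX_TABLE:
            table[wk] = next_code
            next_code += 1
        elif use_clear:
            codes.append(CLEAR)          # typical encoder: reset and re-adapt
            table, next_code = fresh_table(), EOI + 1
        # else: table is full and we keep reusing it (the custom-encoder option above)
        w = bytes([k])
    if w:
        codes.append(table[w])
    codes.append(EOI)
    return codes

print(len(lzw_codes(bytes(range(256)) * 40)))            # toy input, just to exercise it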
The GIF spec does not specify the behavior when a delay time of 0 is given, so you're at the mercy of the decoder implementation. For consistent results you should use a delay of 1 and accept that the entire image won't show up immediately.
Related
I am working on a project I've wanted to do for quite a while. I want to make an all-round huffman compressor, which will work, not just in theory, on various types of files, and I am writing it in python:
text - which is, for obvious reasons, the easiest one to implement; already done, and it works wonderfully.
images - this is where I am struggling. I don't know how to approach images and how to read them in a simple way that it'd actually help me compress them easily.
I've tried reading them pixel by pixel, but somehow, it actually enlarges the picture instead of compressing it.
What I've tried:
Reading the image pixel by pixel using Image (PIL), getting all the pixels in a list, creating a freq table (for each pixel) and then encoding it. Problem is, imo, that I am reading each pixel and trying to make a freq table out of that. That way, I get way too many symbols, which leads to too many lengthy huffman codes (over 8 bits).
I think I may be able to solve this problem by reading a larger set of pixels or something of that sort, because then I'd have a smaller code table and therefore shorter huffman codes. If I leave it like that, I can, in theory, get a 256^3-sized code table (since each pixel is (0-255, 0-255, 0-255)).
Is there any way to read a larger number of pixels at a time (>1 pixel), or is there a better way to approach images when all that's needed is to compress?
Thank you all for reading so far, and a special thank you for anyone who tries to lend a hand.
edited: If huffman is a really bad compression algorithm for images, are there any better ones you can think of? The project I'm working on can take different algorithms for different file types if it is necessary.
Encoding whole pixels like this often results in far too many unique symbols, each of which is used only a few times. Especially if the image is a photograph or if it contains many coloured gradients. A simple way to fix this is splitting the image into its R, G and B colour planes and encoding those either separately or concatenated; either way, the actual elements being encoded are in the range 0..255 and not multi-dimensional.
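A minimal sketch of that per-plane approach with PIL and the standard library (the filename is a placeholder, and the Huffman routine only computes code lengths to estimate the payload size):
import heapq
from collections import Counter
from PIL import Image

def huffman_code_lengths(freqs):
    # Classic heap-based Huffman; returns {symbol: code length in bits}.
    heap = [(f, [s]) for s, f in freqs.items()]
    lengths = {s: 0 for s in freqs}
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, a = heapq.heappop(heap)
        fb, b = heapq.heappop(heap)
        for s in a + b:
            lengths[s] += 1
        heapq.heappush(heap, (fa + fb, a + b))
    return lengths

img = Image.open("input.png").convert("RGB")
for name, band in zip("RGB", img.split()):       # three planes, symbols 0..255
    freqs = Counter(band.tobytes())
    lengths = huffman_code_lengths(freqs)
    bits = sum(freqs[s] * lengths[s] for s in freqs)
    print(name, len(freqs), "symbols,", bits // 8, "bytes (before headers)")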
But as you suspect, exploiting just 0th-order entropy is not so great for many images, especially photographs. As examples of what some existing formats do: PNG uses filters to take some advantage of spatial correlation (great for smooth gradients). JPEG uses quantized discrete cosine transforms, (usually) a colour space transformation to YCbCr (to decorrelate the channels, and to crush chroma more mercilessly than luma), and (usually) chroma subsampling. JPEG 2000 uses wavelets and a colour space transformation in both its lossy and lossless forms (though different wavelets, and a different colour space transformation), and also supports subsampling, though dropping a wavelet scale achieves a similar effect.
Ok, so I tried to use the imagemagick command:
"convert picA.png -interlace line picB.png"
to make an interlaced version of my .png images. Most of the time, the resulting image is larger than the original one, which is kinda normal. However, on certain images, the resulting image size is smaller.
So I just wonder why does that happen? I really don't want my new image to lose any quality because of the command.
Also, is there any compatibility problem with interlaced .png image?
EDIT: I guess my problem is that the original image was not compressed as best as it could be.
The following only applies to the cases where the pixel size is >= 8 bits. I didn't investigate for other cases but I expect similar outcomes.
A content-identical interlaced PNG image file will almost always be larger because of the additional filter-type bytes required for the scanlines of each pass. This is what I explained in detail on this web page, based on the PNG RFC (RFC 2083).
In short, this is because the sum of the per-pass scanline counts below (one filter-type byte per scanline) is almost always greater than the image height (which is the number of filter-type bytes for a non-interlaced image):
nb_pass1_lines = CEIL(height/8)
nb_pass2_lines = (width>4?CEIL(height/8):0)
nb_pass3_lines = CEIL((height-4)/8)
nb_pass4_lines = (width>2?CEIL(height/4):0)
nb_pass5_lines = CEIL((height-2)/4)
nb_pass6_lines = (width>1?CEIL(height/2):0)
nb_pass7_lines = FLOOR(height/2)
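A small sketch that evaluates these per-pass counts and compares their sum with the image height (assuming one filter-type byte per scanline of each non-empty pass, as above):
from math import ceil

def interlaced_filter_bytes(width, height):
    # One filter-type byte per scanline of each non-empty Adam7 pass.
    passes = [
        ceil(height / 8),
        ceil(height / 8) if width > 4 else 0,
        ceil(max(height - 4, 0) / 8),
        ceil(height / 4) if width > 2 else 0,
        ceil(max(height - 2, 0) / 4),
        ceil(height / 2) if width > 1 else 0,
        height // 2,
    ]
    return sum(passes)

for w, h in [(1, 1), (8, 8), (1700, 2200)]:
    print((w, h), interlaced_filter_bytes(w, h), "interlaced vs", h, "non-interlaced")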
Though, theoretically, the data entropy/complexity can accidentally be lowered enough by the Adam7 interlacing that, with the help of filtering, the additional space usually needed for filter types with interlacing is compensated for by the deflate compression used in the PNG format. This would be a particular case to be proven, as the entropy/complexity is more likely to increase with interlacing because the image data is made less consistent by the interlacing deconstruction.
I used the word "accidentally" because reducing the data entropy/complexity is not the purpose of the Adam7 interlacing. Its purpose is to allow the progressive loading and display of the image through a passes mechanism. While, reducing the entropy/complexity is the purpose of the filtering for PNG.
I used the word "usually" because, as shown in the explanation web page, for example, a 1 pixel image will be described through the same length of uncompressed data whether interlaced or not. So, in this case, no additional space should be needed.
When it comes to the PNG file size, a lower size for interlaced can be due to:
Different non-pixel-encoding-related content embedded in the file, such as a palette (in the case of color type != 3) and non-critical chunks such as chromaticities, gamma, number of significant bits, default background color, histogram, transparency, physical pixel dimensions, time, text, and compressed text. Note that some of this non-pixel-encoding-related content can lead to a different display of the image depending on the software used and the situation.
Different pixel-encoding-related content (which can change the image quality), such as bit depth, color type (and thus whether a palette is used, with color type = 3), image size, and so on.
Different compression-related content, such as better filtering choices, accidentally lower data entropy/complexity due to interlacing as explained above (a theoretical particular case), or a higher compression level (as you mentioned).
If I had to check whether 2 PNG image files are equivalent pixel wise, I would use the following command in a bash prompt:
diff <( convert non-interlaced.png rgba:- ) <( convert interlaced.png rgba:- )
It should return no difference.
For the compatibility question, if the PNG encoder and PNG decoder implement the mandatory aspects of the PNG RFC, I see no reason for the interlacing to lead to a compatibility issue.
Edit 2018 11 13:
Some experiments based on automatically evolved distributed genetic algorithms with a niche mechanism (hosted on https://en.oga.jod.li ) are explained here:
https://jod.li/2018/11/13/can-an-interlaced-png-image-be-smaller-than-the-equivalent-non-interlaced-image/
Those experiments show that it is possible for equivalent PNG images to have a smaller size interlaced than non-interlaced. The best images for this are tall: they are one pixel wide and have pixel content that appears random. The shape is not the only important factor, though, as random cases with the same shape lead to different size differences.
So, yes, some PNG images can be identical pixel wise and for non-pixel related content but have a smaller size interlaced than non-interlaced.
So I just wonder why does that happen?
From section Interlacing and pass extraction of the PNG spec.
Scanlines that do not completely fill an integral number of bytes are padded as defined in 7.2: Scanlines.
NOTE If the reference image contains fewer than five columns or fewer than five rows, some passes will be empty.
I would assume the behavior you're experiencing is the result of the Adam7 method requiring additional padding.
We're building an online video editing service. One of the features allows users to export a short segment from their video as an animated gif. Imgur has a file size limit of 2MB per uploaded animated gif.
Gif file size depends on the number of frames, color depth and the image contents itself: a solid flat color results in a very lightweight gif, while a random-color, TV-noise animation would be quite heavy.
First I export each video frame as a PNG of the final GIF frame size (fixed, 384x216).
Then, to maximize gif quality I undertake several gif render attempts with slightly different parameters - varying number of frames and number of colors in the gif palette. The render that has the best quality while staying under the file size limit gets uploaded to Imgur.
Each render takes time and CPU resources — this I am looking to optimize.
Question: what could be a smart way to estimate the best render settings depending on the actual images, to fit as close as possible to the filesize limit, and at least minimize the number of render attempts to 2–3?
The GIF image format uses LZW compression, infamous because the owner of the algorithm patent, Unisys, aggressively pursued royalty payments just as the image format got popular. It turned out well in the end; we have PNG to thank for that.
The amount by which LZW can compress the image is extremely non-deterministic and greatly depends on the image content. At best, you can provide the user with a heuristic that estimates the final image file size, displaying, say, a success prediction with a colored bar. You can color it pretty quickly by converting just the first frame. That won't take long on a 384x216 image; it runs in human time, a fraction of a second.
Then extrapolate the effective compression rate of that first image to the subsequent frames, which ought to encode only small differences from the original frame and so ought to have comparable compression rates.
You can't truly know whether it exceeds the site's size limit until you've encoded the entire sequence. So be sure to emphasize in your UI design that your prediction is just an estimate, so your user isn't going to be disappointed too much. And of course provide them with the tools to get the size lowered, something like a nearest-neighbor interpolation that makes the pixels in the image bigger. Focusing on making the later frames smaller can pay off handsomely as well; GIF encoders don't normally do this well by themselves. YMMV.
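A rough sketch of that first-frame extrapolation with Pillow (my own illustration; `frames` is assumed to be a list of RGB PIL images, and the 0.6 delta factor is an invented starting point to tune against real exports):
import io
from PIL import Image

def estimate_gif_size(frames, colors=256, delta_factor=0.6):
    # Encode only the first frame, then extrapolate to the rest.
    buf = io.BytesIO()
    frames[0].quantize(colors=colors).save(buf, "GIF")
    first = buf.tell()
    return first + int(first * delta_factor) * (len(frames) - 1)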
There's no simple answer to this. Single-frame GIF size mainly depends on image entropy after quantization, and you could try using stddev as an estimator using e.g. ImageMagick:
identify -format "%[fx:standard_deviation]" imagename.png
You can very probably get better results by running a smoothing kernel on the image in order to eliminate some high-frequency noise that's unlikely to be informational, and very likely to mess up compression performance. This goes much better with JPEG than with GIF, anyway.
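For instance, a quick smoothing pass with Pillow (the filename and the radius of 1 are placeholders):
from PIL import Image, ImageFilter

frame = Image.open("frame.png")
smoothed = frame.filter(ImageFilter.GaussianBlur(radius=1))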
Then, in general, you want to run a great many samples in order to come up with something of the kind (let's say you have a single compression parameter Q)
STDDEV    SIZE W/Q=1    SIZE W/Q=2    SIZE W/Q=3    ...
value1    v1,1          v1,2          v1,3
After running several dozen tests (but you need do this only once, not "at runtime"), you will get both an estimate of, say, the compressed size as a function of stddev and Q, and a measurement of its error. You'll then see that an image with stddev 0.45 that compresses to 108 Kb when Q=1 will compress to 91 Kb plus or minus 5 when Q=2, and 88 Kb plus or minus 3 when Q=3, and so on.
At that point you take an unknown image, get its stddev and its compressed size at Q=1, and you can interpolate the probable size when Q equals, say, 4, without actually running the encoding.
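A small sketch of that lookup-and-interpolate step (the sample table values below are placeholders, not real measurements):
samples = {
    # stddev: {Q: size in Kb} -- fill in from your own offline test runs
    0.40: {1: 120, 2: 101, 3: 97},
    0.45: {1: 108, 2: 91, 3: 88},
    0.50: {1: 131, 2: 110, 3: 104},
}

def estimate_size(stddev, size_at_q1, q):
    # Pick the sample row with the nearest stddev and scale by its Q=1 -> Q ratio.
    row = samples[min(samples, key=lambda s: abs(s - stddev))]
    return size_at_q1 * row[q] / row[1]

print(estimate_size(0.44, 112, 3))   # roughly 112 * 88 / 108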
While your service is active, you can store statistical data (i.e., after you really do the encoding, you store the actual results) to further improve estimation; after all you'd only store some numbers, not any potentially sensitive or personal information that might be in the video. And acquiring and storing those numbers would come nearly for free.
Backgrounds
It might be worthwhile to recognize images with a fixed background; in that case you can run some adaptations to make all the frames identical in some areas, and have the GIF animation algorithm not store that information. When and if you get such a video (e.g. a talking head), this could lead to huge savings (but it would throw the parameter estimation completely off, unless you could also estimate the actual extent of the background area. In that case, let that area be B and the frame area be A; the compressed "image" size for five frames would be A+(A-B)*(5-1) instead of A*5, and you could apply this correction factor to the estimate).
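A worked version of that correction, as a quick sketch (the half-background assumption is just for illustration):
def corrected_area(frame_area, background_area, n_frames):
    # First frame carries the full area; later frames only the non-background part.
    return frame_area + (frame_area - background_area) * (n_frames - 1)

A = 384 * 216          # frame area
B = A // 2             # assume half of each frame is static background
print(A * 5, corrected_area(A, B, 5))   # naive vs corrected area for 5 frames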
Compression optimization
Then there are optimization techniques that slightly modify the image and adapt it for a better compression, but we'd stray from the topic at hand. I had several algorithms that worked very well with paletted PNG, which is similar to GIF in many regards, but I'd need to check out whether and which of them may be freely used.
Some thoughts: the LZW algorithm processes the image as a stream, line by line. So whenever a sequence of N pixels is "less than X%" different (perceptually or arithmetically) from an already encountered sequence, rewrite the sequence:
018298765676523456789876543456787654
987678656755234292837683929836567273
here the 656765234 sequence in the first row is almost matched by the 656755234 sequence in the second row. By changing the mismatched 5 to 6, the LZW algorithm is likely to pick up the whole sequence and store it with one symbol instead of three (6567,5,5234) or more.
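A toy sketch of that rewriting idea on a flat list of palette indices (the window length and mismatch tolerance are arbitrary choices, and note that the rewrite is deliberately lossy):
def unify_near_matches(pixels, n=9, max_mismatch_frac=0.12):
    pixels = list(pixels)
    seen = {}                                   # exact windows already emitted
    for i in range(0, len(pixels) - n + 1, n):
        window = tuple(pixels[i:i + n])
        best = None
        for candidate in seen:
            mism = sum(a != b for a, b in zip(window, candidate))
            if mism <= n * max_mismatch_frac and (best is None or mism < best[0]):
                best = (mism, candidate)
        if best and best[0] > 0:
            pixels[i:i + n] = best[1]           # overwrite with the earlier, similar run
        else:
            seen[window] = i
    return pixels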
Also, LZW works with bits, not bytes. This means, very roughly speaking, that the more the 0's and 1's are balanced, the worse the compression will be. The more unpredictable their sequence, the worse the results.
So if we can find out a way of making the distribution more **a**symmetrical, we win.
And we can do it, and we can do it losslessly (the same works with PNG). We choose the most common colour in the image, once we have quantized it. Let that color be color index 0. That's 00000000, eight fat zeroes. Now we choose the most common colour that follows that one, or the second most common colour; and we give it index 1, that is, 00000001. Another seven zeroes and a single one. The next colours will be indexed 2, 4, 8, 16, 32, 64 and 128; each of these has only a single bit 1, all others are zeroes.
Since colors will be very likely distributed following a power law, it's reasonable to assume that around 20% of the pixels will be painted with the first nine most common colours; and that 20% of the data stream can be made to be at least 87.5% zeroes. Most of them will be consecutive zeroes, which is something that LZW will appreciate no end.
Best of all, this intervention is completely lossless; the reindexed pixels will still be the same colour, it's only the palette that will be shifted accordingly. I developed such a codec for PNG some years ago, and in my use case scenario (PNG street maps) it yielded very good results, ~20% gain in compression. With more varied palettes and with LZW algorithm the results will be probably not so good, but the processing is fast and not too difficult to implement.
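A rough sketch of that reindexing with Pillow on an already-quantized "P"-mode image (the function name and details are my own; it only remaps indices and shuffles the palette accordingly, so the visible colours are unchanged):
from collections import Counter
from PIL import Image

PREFERRED = [0, 1, 2, 4, 8, 16, 32, 64, 128]             # indices with at most one 1-bit

def reindex_palette(img):
    data = list(img.tobytes())                            # current palette indices, one byte per pixel
    by_freq = [idx for idx, _ in Counter(data).most_common()]
    remaining = [i for i in range(256) if i not in PREFERRED]
    mapping = dict(zip(by_freq, PREFERRED + remaining))   # most common colour -> index 0, etc.

    old_pal = (img.getpalette() or []) + [0] * 768
    new_pal = old_pal[:768]
    for old_idx, new_idx in mapping.items():
        new_pal[new_idx * 3:new_idx * 3 + 3] = old_pal[old_idx * 3:old_idx * 3 + 3]

    out = Image.frombytes("P", img.size, bytes(mapping[p] for p in data))
    out.putpalette(new_pal)
    return out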
I'm looking to do some javascript powered animation via image clipping. Here's an example of what I'm talking about: http://www.def-logic.com/_dhtml/freejack/hero1.gif
I know PNG uses a kind of prediction in its compression; what would be the best way to lay out an image like the one above so that I get the most out of the compression? I'm especially interested in cases where the images are very similar, more so than the one above, so there is a lot of potential for compression due to redundancy.
For example, is there specific size of tile that would work well?
For example, is there specific size of tile that would work well?
Not really. PNG prediction is strictly local (it uses the 3 neighbouring pixels), and the prediction ("filter") strategy can be chosen on a per-line basis.
That kind of redundancy is not very detectable in PNG compression (not in JPG or practically any other, actually).
If you have the freedom to select the distribution of tiles (few or many per row), you can try varying that; it can have some small influence (an image with many short lines instead of a few long lines gives the encoder more opportunities to select different filters), but, again, I'd bet that the difference will be very small.
I would like to know which image format inside PDFs renders fastest. I tested mupdf code and found that image decoding takes up an important part of the rendering time. So I would like to know if there are image formats that would not impact CPU load very much.
I don't think this is really a question of what is best simply within PDFs, however:
As a general rule, I have always found that pre-rendering the image to the actual size you wish to present on screen is the best way to get both size and rendering speed to what you want them to be. Simply dragging an image into a document doesn't bring the pixel count (and thus size) down, as most document types simply put a display-size tag around the full image. This forces the display program to resize the image in real time for display. The less the display program has to resize the image in real time, the faster it will display.
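For example, a minimal Pillow sketch of that pre-sizing step (the filenames and the 600x400 target are placeholders):
from PIL import Image

img = Image.open("scan_full.png")
img.thumbnail((600, 400), Image.LANCZOS)     # resize in place, keeping aspect ratio
img.save("scan_for_pdf.png", optimize=True)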
As for file types:
Bitmaps are generally considered the fastest to display, as they (for the most part) just copy the color of each pixel onto the corresponding screen pixel. They are also generally the largest files. Depending on your images, if they aren't noisy (i.e. they have a lot of solid runs of the same color) then they can be RLE encoded (a tiny sketch of the idea follows this rundown of file types). I have seen many RLE-encoded images that are indeed even smaller than JPEG images, but it is very situational.
JPEGs tend to be the smallest for transfer and also generally display decently quickly. In my opinion they are also the lowest-quality images (look closely: if you start with a perfectly clean image, JPEG compression will add noise to it unless you use lossless compression).
PNGs tend to be my favorite. They are losslessly compressed, can be fairly small if you use flattened PNGs (i.e. NOT Adobe Fireworks PNGs), and produce crisp images that render fairly quickly.
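As a reference point for the RLE remark above, a minimal run-length encoding sketch (just the idea, not the actual BMP RLE8 format):
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p and runs[-1][0] < 255:
            runs[-1][0] += 1                 # extend the current run, capped at 255
        else:
            runs.append([1, p])              # start a new [count, value] run
    return runs

print(rle_encode([7, 7, 7, 7, 3, 3, 9]))     # [[4, 7], [2, 3], [1, 9]]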
So to sum up: I would probably recommend flattened PNGs that have been pre-sized and saved to the size you wish to display on screen.