I have a good understanding of pros and cons of different image formats for web use.
However, I'm trying to decide what format to use for a desktop application.
I have a potentially large number of high-resolution images (with no transparency) to deploy. I'm mainly weighing JPG vs. PNG, but am open to other formats.
My understanding:
JPG is more compressed, which means smaller file size, but probably lower image quality. Because they are more compressed, they take more time to decompress.
PNG files are larger, but maintain image quality. Because they are less compressed, they decompress faster.
Both occupy the same amount of RAM once loaded and decompressed.
Seems that PNG is a better option, given that HD space (i.e. application size) is not an issue, because it will decompress and appear on-screen faster, and maintain higher image quality.
Are my assumptions generally correct? Are there any nuances I'm overlooking? Any other image file formats worth considering?
Your assumptions are roughly correct.
Because [JPG] are more compressed, they take more time to decompress.
Not exactly. JPG supports different levels of compression, and the time to decompress depends mostly on the algorithm itself, which is slightly more complex than PNG's. However, decompression speed is rarely an issue, and in any case it varies wildly between decoder implementations.
Seems that PNG is a better option, given that HD space (i.e. application size) is not an issue, because it will decompress and appear on-screen faster, and maintain higher image quality.
Maybe. PNG is definitely better if your program is going to read-modify-write the images; JPG is not advisable in that scenario, unless you use lossless JPEG. If, instead, the images are read-only, the difference matters less. Note that for high-resolution photographic images the compression ratios can differ a great deal; and even if you are not worried about HD space, bigger files can be slower to read because of I/O performance.
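To see why lossy formats are a poor fit for read-modify-write workflows, here is a minimal sketch (assuming Pillow is installed; "photo.png" is just a placeholder file name) that re-encodes the same image repeatedly and measures how far the pixels drift. A PNG round-trip reproduces the pixels exactly, while JPEG accumulates error with every generation.

```python
# Sketch: generation loss from repeated JPEG re-encoding vs. a lossless PNG round-trip.
# Assumes Pillow is installed; "photo.png" is a placeholder for any RGB source image.
from io import BytesIO
from PIL import Image, ImageChops

original = Image.open("photo.png").convert("RGB")

# Re-encode as JPEG ten times in a row, simulating read-modify-write cycles.
current = original
for _ in range(10):
    buf = BytesIO()
    current.save(buf, "JPEG", quality=85)
    buf.seek(0)
    current = Image.open(buf).convert("RGB")
jpeg_drift = max(band[1] for band in ImageChops.difference(original, current).getextrema())

# A single lossless PNG round-trip for comparison: the difference is always zero.
buf = BytesIO()
original.save(buf, "PNG")
buf.seek(0)
png_drift = max(band[1] for band in ImageChops.difference(original, Image.open(buf).convert("RGB")).getextrema())

print(f"max channel error after 10 JPEG generations: {jpeg_drift}, after a PNG round-trip: {png_drift}")
```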
I would go with JPEG.
The file size is small compared to other formats, and you can compress in high-quality mode so that it is very hard to notice any JPEG artifacts. Regarding decompression: since most of the decompression work is math, and the CPU runs much faster than memory or disk, you may be surprised to hear that in many cases decompressing a JPEG is faster than reading a (much larger) PNG from disk and displaying it.
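That trade-off is easy to measure for yourself. Below is a rough sketch (Pillow again; the file names are placeholders), and the results will vary a lot with the disk, the decoder build and the image content.

```python
# Rough benchmark: time to read and fully decode the same picture stored as JPEG and as PNG.
# File names are placeholders; run it on your own images and hardware to get meaningful numbers.
import time
from PIL import Image

def load_time(path, repeats=20):
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        with Image.open(path) as img:
            img.load()                      # force a full decode, not just header parsing
        best = min(best, time.perf_counter() - start)
    return best

for path in ("photo.jpg", "photo.png"):
    print(f"{path}: {load_time(path) * 1000:.1f} ms")
```

Note that after the first iteration the file usually sits in the OS cache, so this mostly measures decode cost; to include real disk I/O you would have to flush the cache between runs or test from a cold start.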
I have this image (a photo taken by me on an SGS 9 Plus), an uncompressed JPG. Its dimensions are 4032 x 3024 and its size is around 3 MB. I compressed it with TinyJPG Compressor and got it down to 1.3 MB. For PNG images I used Online-Convert, and there the WebP versions came out much smaller, even smaller than TinyPNG's output. I expected something similar here, especially since I read an article, "JPG to WebP – Comparing Compression Sizes", where WebP is much smaller than compressed JPG.
But when I convert my JPG to WebP in various online image conversion tools, I get files of 1.5-2 MB, i.e. bigger than my compressed JPG. Am I missing something? Shouldn't WebP be much smaller than a compressed JPG? Thanks in advance for every answer.
These are lossy codecs, so their file size mostly depends on the quality setting used. Comparing file sizes from various tools says nothing unless you ensure the images are of the same quality; otherwise they're simply incomparable.
There are a couple of possibilities:
JPEG may compress better than WebP. WebP has problems with blurring out details, low-resolution color, and using less than the full 8 bits of the color space. At the higher end of the quality range, a well-optimized JPEG can be similar to or better than WebP.
However, most of the file size difference between modern lossy codecs comes down to quality. The typical difference between JPEG and WebP at the same quality is 15%-25%, but the file sizes produced by a single codec can easily differ by 10× between a low-quality and a high-quality version of the same image. So most of the time, when you see a huge difference in file sizes, it's because the tools chose different quality settings (and/or recompression lost fine detail in the image, which also greatly affects file size). Even a visual difference too small for the human eye to notice can cause a noticeable difference in file size.
My experience is that lossy WebP is superior below quality 70 (in libjpeg terms) and JPEG is often better than WebP at quality 90 and above. In between these qualities it doesn't seem to matter much.
I believe WebP quality values are inflated by about 7 points, i.e. to match JPEG quality 85 one needs to use WebP quality 92 (when using the cwebp tool). I haven't measured this carefully; it's based on rather ad hoc experiments and some butteraugli runs.
Lossy WebP has difficulty compressing complex textures densely, such as the leaves of trees, whereas JPEG's difficulties are with thin lines against flat backgrounds, like a telephone line hanging against the sky, or computer graphics.
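If you want to compare the two codecs on your own photos, the key is to control the quality setting yourself rather than trusting whatever an online converter picked. Here's a minimal sketch using Pillow (recent builds can write both formats; the source file name and the quality values are just example assumptions, and remember the two encoders' quality scales don't line up exactly, as noted above):

```python
# Encode the same source image as JPEG and WebP at several quality settings
# and print the resulting file sizes. "photo.png" is a placeholder source image.
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")

for quality in (60, 75, 85, 95):
    for fmt, ext in (("JPEG", "jpg"), ("WEBP", "webp")):
        out = f"photo_q{quality}.{ext}"
        img.save(out, fmt, quality=quality)
        print(f"{fmt:5} q={quality:2}: {os.path.getsize(out) / 1024:.0f} KiB")
```

File size alone still isn't the whole story: for a fair comparison you'd also want a perceptual metric (butteraugli, SSIM or similar) at each setting.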
What are the state-of-art algorithms when it comes to compressing digital images (say for instance color photos, maybe 800x480 pixels)?
Some of the formats that are frequently discussed as possible JPEG successors are:
JPEG XR (aka HD Photo, Windows Media Photo). According to a study by the Graphics and Media Lab at Moscow State University (MSU), image quality is comparable to JPEG 2000 and significantly better than JPEG, and compression efficiency is comparable to JPEG 2000.
WebP is already being tested in the wild, mainly on Google properties, where the format is served exclusively to Chrome users (if you connect with a different browser, you get PNG or JPG images instead). It's very web-oriented.
HEVC-MSP. In a study by Mozilla Corporation (October 2013), HEVC-MSP performed best in most tests, and in the tests where it was not best it came in second to the original JPEG format (but the study only looked at compression efficiency, not at other metrics and properties that matter: feature sets, run-time performance, licensing...).
JPEG 2000. The most computationally intensive to encode/decode. Compared with the regular JPEG format, it offers advantages such as support for higher bit depths, more advanced compression, and a lossless compression option. It is the standard reference point for the others, but adoption has been slow.
Anyway, JPEG encoders haven't really reached their full compression potential even after 20+ years. Even within the constraints of strong compatibility requirements, there are projects (e.g. Mozilla's mozjpeg or Google's Guetzli) that can produce smaller JPG files without sacrificing quality.
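You can see a small slice of that unused potential without switching encoders at all: asking a stock libjpeg-based encoder to optimize its Huffman tables yields a file that decodes to identical pixels but is a few percent smaller. A quick sketch with Pillow (the source file name is a placeholder; mozjpeg and Guetzli typically squeeze out more still):

```python
# Same pixels, same quality setting: the only difference is whether the encoder
# optimizes its Huffman tables. "photo.png" is a placeholder source image.
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")
img.save("default.jpg", "JPEG", quality=85)                  # stock Huffman tables
img.save("optimized.jpg", "JPEG", quality=85, optimize=True) # optimized Huffman tables

for name in ("default.jpg", "optimized.jpg"):
    print(f"{name:14} {os.path.getsize(name) / 1024:.0f} KiB")
```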
It would depend on what you need to do with the encoded images of course. For webpages and small sizes, lossy compression systems may be suitable, but for satellite images, medical images etc. lossless compression may be required.
None of the formats mentioned above satisfy both situations. Not all of the above formats support every pixel format either, so they cannot be compared like for like.
I've been doing my own research into lossless compression for high-bit-depth images, and what I've found so far is that a Huffman coder with suitable reversible pre-compression filtering beats JPEG 2000 and JPEG XR in terms of file size by 56% on average (i.e. it makes files less than half the size) on cinematic real-world footage, and it's faster. It also beats FFV1 in the limited tests I've conducted, producing files under half the size even after FFV1 has truncated the source pixel depths from 16 bits to 10 bits. Really quite surprising.
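To give a flavour of the "reversible pre-filter plus entropy coder" idea (a toy illustration on synthetic data, not the codec described above; zlib stands in for the Huffman coder): a simple horizontal delta predictor applied before a generic compressor usually shrinks smooth, high-bit-depth images considerably, while remaining perfectly reversible.

```python
# Illustration of reversible pre-filtering: a horizontal delta predictor applied before
# a generic compressor (zlib here) shrinks smooth 16-bit data and is exactly invertible.
import zlib
import numpy as np

# Placeholder data: a smooth 16-bit gradient with a little noise, standing in for real footage.
rng = np.random.default_rng(0)
h, w = 1080, 1920
image = (np.linspace(0, 60000, w, dtype=np.uint16)[None, :].repeat(h, axis=0)
         + rng.integers(0, 64, (h, w), dtype=np.uint16))

# Reversible filter: keep the first column, store left-neighbour differences elsewhere.
filtered = image.copy()
filtered[:, 1:] = image[:, 1:] - image[:, :-1]   # uint16 wrap-around keeps this reversible

raw_size      = len(zlib.compress(image.tobytes(), 6))
filtered_size = len(zlib.compress(filtered.tobytes(), 6))
print(f"raw: {raw_size / 1e6:.2f} MB  filtered: {filtered_size / 1e6:.2f} MB")

# Perfect reconstruction: a cumulative sum (mod 2^16) undoes the delta filter.
restored = np.cumsum(filtered.astype(np.uint64), axis=1).astype(np.uint16)
assert np.array_equal(restored, image)
```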
For lossless compression ratios, FLIF is ranked number one for me, but encoding times are astronomical; I've never produced a file smaller than the corresponding FLIF file. So good things come to those who wait. FLIF uses machine learning to achieve its compression ratios. Applying a lossy pre-compression filter to images before FLIF compression (something the encoder enables) creates visually lossless images that compete with the best lossy encoders, but with the advantage that repeatedly re-encoding the output files won't further reduce quality (since the encoder itself is lossless).
One thing that is obvious to me: nothing is really state of the art at the moment. Most formats use old technology, designed at a time when memory and processing power were at a premium. As far as lossless compression goes, FLIF is a big jump forward, but it's an area of research that is wide open. Most research seems to go into lossy compression systems.
As a general rule of thumb, when is it appropriate to make a GIF interlaced, a PNG interlaced, or a JPEG progressive?
Especially when publishing the image on the web.
JPEG: YES — use progressive scan. It makes files smaller (each pass gets its own Huffman table), and partial rendering looks quite good.
GIF: NO — it's unlikely to make the file smaller, partial rendering is poor, and it's pointless for animated GIFs. It's best not to use GIF at all (yes, even for animations).
PNG: NO — it hurts compression (as data from each pass is statistically quite different). If the image is large, use high-quality JPEG or lossy PNG if possible, as these may load quicker than a pixelated preview of a large lossless PNG.
ImageOptim will automatically change progressive/interlaced formats when it makes files smaller.
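A quick way to sanity-check the "progressive JPEG is smaller" claim on your own assets is to re-save the same pixels both ways and compare (a sketch using Pillow; the file name and quality value are placeholders, and as the disclaimers below note, tiny thumbnails can go the other way):

```python
# Re-save the same pixels as baseline and as progressive JPEG and compare file sizes.
# "photo.jpg" is a placeholder; quality 85 is an arbitrary example setting.
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
img.save("baseline.jpg", "JPEG", quality=85, optimize=True)
img.save("progressive.jpg", "JPEG", quality=85, optimize=True, progressive=True)

for name in ("baseline.jpg", "progressive.jpg"):
    print(f"{name:16} {os.path.getsize(name) / 1024:.1f} KiB")
```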
Disclaimers for nitpickers:
In the case of small and medium-sized images, the progressive preview is not going to be visible long enough for the user to appreciate it. Some browsers don't even bother rendering anything until the whole file is downloaded, so it's better to focus on saving bandwidth and getting the whole page loaded ASAP.
Non-progressive JPEG is a bit more efficient when the files are tiny (small thumbnails), but then the savings are tiny, too.
iOS Safari has a higher maximum allowed image size for baseline JPEG than progressive, but the right solution there is to serve images at sizes reasonable for mobile in the first place.
My general rule of thumb: don't ever use interlacing. Interlaced formats typically occupy more space, have (slightly) more complexity and less support in decoders, and the alleged advantages for the user experience are debatable at best. Here are some arguments, about PNG specifically and in general:
Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...
Interlaced images are slightly less efficient, but they show up after a shorter delay on the client side when transported over the network. IMHO they should be used when the expected download time for the image is long enough to be perceived by the user (say, above one second). The difference in file size is really quite small, so it's better to err on the side of using interlacing too much rather than too little.
On common broadband connections as of 2012, I'd just use it for every image over 100 kB.
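The one-second rule translates into a rough size threshold for whatever connection you are targeting; a back-of-the-envelope sketch (the bandwidth figures are illustrative assumptions only, and real-world throughput is lower once latency and contention are taken into account):

```python
# Back-of-the-envelope: above what file size does the download take longer than ~1 second?
# The bandwidth figures are illustrative assumptions, not measurements.
THRESHOLD_S = 1.0

for label, mbit_per_s in (("~2 Mbit/s (slow mobile)", 2), ("~8 Mbit/s (basic DSL)", 8), ("~50 Mbit/s (cable)", 50)):
    size_kb = THRESHOLD_S * mbit_per_s * 1_000_000 / 8 / 1024
    print(f"{label:24} interlacing starts paying off above ~{size_kb:.0f} kB")
```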
These points may be useful:
Interlacing (more generally, progressive display) is a method of displaying images on a monitor.
When to use it? Your decision should be based on these factors:
• Non-interlaced images are smaller than interlaced images.
• Interlaced images cause less apparent flickering while loading than non-interlaced ones.
• Interlaced images are easier to view while they are still loading.
Interlacing lets you see the picture before all the data has been transmitted (making images appear faster and better-looking) and gives you the "feeling" that the download is quicker.
TIP: Interlacing is not recommended for small images, but it is a must if the viewer is on a slow connection.
This is just a copy from Yahoo Answers that I thought might help with understanding.
The original answer can be found at: https://answers.yahoo.com/question/index?qid=20090211121956AAz7Xz8
Just to throw my two penn'orth into the argument: interlacing was introduced years ago, when internet speeds were slow, the idea being that the image would present itself in a gradually more defined manner, giving an overall look and feel of the image without having to wait for the entire thing to load.
Interlacing, today, is basically unnecessary and should be used based on the overall size of the image being transferred.
Progressive scans on JPEG images do provide a more refined image while also attempting to reduce the overall file size (i.e. it is an actual compression mode, rather than just a streaming order for the bits making up the image).
PNGs use a more complex algorithm than GIF.
There is an interesting related post on webmasters
https://webmasters.stackexchange.com/questions/574/progressive-jpeg-why-do-many-web-sites-avoid-rendering-jpegs-that-way-pros
Ultimately, it depends on how the images are going to be used.
The post suggests that genuine support for progressive images is limited, and that they may sometimes cause issues with plugins that don't support the progressive format.
Hope that helps.
The platform I am using is a Java applet, and I want to know whether I should use JPG, GIF or PNG:
JPG tends to slow down the game,
GIF flickers too much,
and I don't hold much hope for PNG's speed.
One large image is generally a bad idea for performance and memory consumption.
You want to split it in smaller chunks and load/unload them when not in use.
In Java2D I have always used PNG.
In other libraries it may not be important since they convert the image format to something more GPU friendly.
On the other hand, BMP is the fastest thing to load (you don't need to process, i.e. decompress, the image), but it will increase your overall game size.
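The tiling idea is language-agnostic. Here is a minimal sketch of the splitting step (in Python with Pillow for brevity; the file name and tile size are placeholder choices, and the same crop-and-save loop translates directly to Java2D via BufferedImage.getSubimage):

```python
# Split one large image into fixed-size tiles so the game can load/unload them on demand.
# "world.png" and the 512-pixel tile size are placeholder choices.
from PIL import Image

TILE = 512
img = Image.open("world.png")
width, height = img.size

for top in range(0, height, TILE):
    for left in range(0, width, TILE):
        box = (left, top, min(left + TILE, width), min(top + TILE, height))
        img.crop(box).save(f"tile_{top // TILE}_{left // TILE}.png")
```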
I am working on an iPad application which has hundreds of photo-quality images. I would have naturally assumed to store these images as JPEGs so as to optimize the app file size. However, Apple's guidelines state:
Use the PNG format for images. The PNG format provides lossless image content, meaning that saving image data to a PNG format and then reading it back results in the exact same pixel values. PNG also has an optimized storage format designed for faster reading of the image data. It is the preferred image format for iOS.
However, if I store the same images as JPEGs at 100% quality, the size of them drops to about half that of the PNG lossless versions.
Is there really that much of a performance hit to use JPEG instead of PNG? If I am viewing these images in a carousel or gallery style, do I really need to worry about the performance and use PNGs instead?
Thanks!
Regarding quality: PNG is good for application-style graphics, but JPEG is preferred for photos. Choose the lowest JPEG quality that still looks good enough for your images.
Regarding speed: size also matters. I have no iPad to test with, but the smaller file size to read from flash or the network might very well outweigh any additional decompression cost. The only way to find out is to measure on your actual device.
There is a performance consideration, but although PNG is preferred for quality, for your application I'd suggest JPEG is preferable.
Pure performance isn't the only factor of interest or concern: an iPad has only a finite amount of storage available, and filling it up with image data that most users are not going to need or want seems worse than spending a bit more computational power in most cases.
One other thing to consider: in a gallery, you are strongly recommended to generate thumbnails, which give you the best of both worlds: a smaller, more accessible image for general browsing and the full original image for 'best'.
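Generating those thumbnails ahead of time is only a few lines per image; a minimal sketch (Pillow; the directory names, bounding box and quality value are placeholder choices):

```python
# Generate small gallery thumbnails alongside the full-size originals.
# The directory names, bounding box and quality value are placeholder choices.
from pathlib import Path
from PIL import Image

SRC = Path("photos")
DST = Path("thumbs")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    with Image.open(path) as img:
        img.thumbnail((256, 256))          # shrinks in place, preserving aspect ratio
        img.save(DST / path.name, "JPEG", quality=80)
```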
If in doubt, benchmark with both and see how big the difference is in your application - and if the difference is something you can live with versus the space saving, go with JPEG.