Is PackBits compression another name for RLE? (image)

I am confused about PackBits compression. I used FastStone Image Converter to convert a JPG image to TIFF with PackBits compression. I then ran ImageMagick's identify tool to look at the compression algorithm used, and it stated that it was RLE. So does this mean that PackBits and RLE refer to the same compression algorithm?

Yes. PackBits is simply byte-oriented run-length encoding with a particular scheme for marking literal vs. repeated bytes.
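That byte-marking scheme is small enough to sketch. The following is an illustrative encoder/decoder pair based on the TIFF 6.0 description (a header byte n in 0–127 means "copy the next n + 1 bytes literally", 129–255 means "repeat the next byte 257 − n times", and 128 is a no-op); it is a sketch, not a production codec:

```python
def packbits_decode(data: bytes) -> bytes:
    """Decode a PackBits (byte-oriented RLE) stream."""
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i]
        i += 1
        if n < 128:            # literal run: copy the next n + 1 bytes
            out += data[i:i + n + 1]
            i += n + 1
        elif n > 128:          # replicate run: repeat next byte 257 - n times
            out += bytes([data[i]]) * (257 - n)
            i += 1
        # n == 128 is a no-op per the TIFF 6.0 spec
    return bytes(out)

def packbits_encode(data: bytes) -> bytes:
    """A simple (not maximally tight) PackBits encoder."""
    out = bytearray()
    i = 0
    while i < len(data):
        # measure the repeated run starting at i (header caps runs at 128)
        run = 1
        while run < 128 and i + run < len(data) and data[i + run] == data[i]:
            run += 1
        if run > 1:            # emit a replicate run
            out.append(257 - run)
            out.append(data[i])
            i += run
        else:                  # gather a literal run of non-repeating bytes
            j = i
            while (j < len(data) and j - i < 128 and
                   not (j + 1 < len(data) and data[j] == data[j + 1])):
                j += 1
            out.append(j - i - 1)   # header for a literal of (j - i) bytes
            out += data[i:j]
            i = j
    return bytes(out)
```

Decoding an encoded buffer returns the original bytes exactly, which is why identify can legitimately report the result simply as "RLE".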

Related

BMP image file format

According to this site
BMP (Bitmap) is an uncompressed raster graphics image format
1) So does it mean that BMP doesn't use any compression at all when storing an image?
2) If it does use compression, shouldn't it be called lossy? But it's called lossless, why is that?
Also when it is said,
Lossless means that the image is made smaller, but at no detriment to
the quality
3) If the image is made smaller, then how can it remain the same? Making it smaller means that it has to use some compression, right?
Edit:
4) JPEG is also a bitmap format, so why is it not lossless?
First, BMP does not allow image compression at all: pixel values are written as-is, with no compression or size-reduction transformation. It's uncompressed, so it's not lossy; it's lossless.

It is actually possible to compress images (and also audio) in a lossless manner: mathematical operations are performed on the data that remove redundant data, decreasing the overall size. Since these operations are invertible, the original data (image, audio, etc.) can be recovered exactly.

Technically, a bitmap is a two-dimensional array of pixel values, but "bitmap" is widely understood to mean the uncompressed .bmp image format.

Compression has two variants: lossy compression, where you drop some portion of your data that can't be recovered, hence "lossy"; and lossless compression, where every transformation is invertible, so the original data can be recovered in full.

A full treatment of the subject inevitably has to deal with information theory and Shannon's results on coding theory. A simple place to start is with run-length encoding and the Lempel-Ziv algorithms for lossless compression, and JPEG compression using the discrete cosine transform (or JPEG 2000's wavelets) for lossy compression.
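As a concrete illustration of invertibility, here is a lossless round trip using Python's zlib module (DEFLATE, i.e. LZ77 plus Huffman coding) as a stand-in for the lossless algorithms mentioned above:

```python
import zlib

# A highly redundant "image row": 1000 identical 8-bit pixel values.
row = bytes([200]) * 1000

packed = zlib.compress(row)              # remove redundancy (DEFLATE)
assert zlib.decompress(packed) == row    # invertible: nothing was lost

print(len(row), "->", len(packed))       # large reduction for redundant data
```

A lossy codec such as JPEG would instead discard some pixel information permanently in exchange for a much smaller file.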

Which image file format compresses the most when using a lossless data compression archive file format (like .zip)?

I understand that graphic images do not compress well when using a lossless compression archive file format like .zip. Is there an image file type that losslessly compresses better (smaller) than the others?
Lossless image compression algorithms implemented in image file formats use the same methods as general-purpose compression software, plus some specific methods based on image models. These methods tend to remove data redundancy and to provide variable-length coding that exploits data statistics to reduce coding cost.
Hence if a compressed image can be significantly recompressed by, say, zip, it is probably not a very efficient file format in terms of compression. So to answer your question, the image file format that can be the most efficiently compressed by zip is the format with the least internal compression. And the final result will be worse than using a good lossless image compression method and skipping the zip recompression.
There are good lossless image compression methods available. The compression ratio is of course worse than the one provided by lossy compression, but it can be decent, depending on your needs. Among standard methods, you can use PNG or lossless JPEG 2000. And there are very good non-standard methods, such as WebP, FLIF, or BPG. But with none of them will you see any significant gain from using zip on the resulting images.
The file format by itself does not determine the compression ratio of an image; it usually tells us the data format and the compression method used.
The image content itself affects the compression: a uniform, monotone image will compress better than a noisy one.
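A quick way to see this content dependence is to feed a uniform buffer and a random one through a general-purpose lossless compressor (zlib here, purely as an illustration):

```python
import os
import zlib

flat = bytes(64 * 64)       # uniform 64x64 8-bit "image": all zeros
noise = os.urandom(64 * 64)  # random pixel values: no redundancy to exploit

print(len(zlib.compress(flat)))   # tiny: the redundancy compresses away
print(len(zlib.compress(noise)))  # about 4096 or more: noise is incompressible
```

The uniform buffer shrinks to a few dozen bytes, while the random one stays at (or slightly above) its original size, regardless of what container format you would wrap it in.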

Image format vs image compression algorithm vs codec

I'm still confused with concepts of image format, image compression algorithm or method and codec and relationships between them.
In my understanding, a format is something an image is saved in, so it could contain information about which compression algorithm or method (are these two synonyms?) to use. Or does a specific format always use the same algorithm? Also, these algorithms can then use multiple codecs, but I don't see a difference between the job done by a compression algorithm and a codec.
Am I right in my assumptions? Can you elaborate definition and relationships of these concepts?
An image format is the specification of how the image data is stored on disk.
Since storage sizes for images can be quite large, images are often stored using a compression algorithm which can reduce the storage space needed to store a representation of the image.
A codec is an encoder/decoder pair. So a codec is a compression algorithm, and the reverse de-compression algorithm too.
One place to start learning more is the documentation for the NetPBM format and library. This is one of the simplest image formats because it does not use compression internally.
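To illustrate just how simple NetPBM is, a plain-text PGM ("P2") grayscale file can be written by hand; the dimensions and pixel values below are arbitrary examples:

```python
# Write a 4x2 grayscale image as plain PGM (NetPBM "P2") - no compression,
# just a magic number, dimensions, the maximum value, and the raw samples.
width, height = 4, 2
pixels = [0, 64, 128, 255,
          255, 128, 64, 0]

lines = ["P2",                          # magic number: plain (ASCII) PGM
         f"{width} {height}",           # image dimensions
         "255",                         # maximum gray value
         " ".join(map(str, pixels))]    # the pixel samples themselves

with open("tiny.pgm", "w") as f:
    f.write("\n".join(lines) + "\n")
```

Because nothing is compressed, the file is human-readable and trivially parseable, which is exactly why NetPBM is a good first format to study.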
The following are examples of formats - PNG, GIF, TIFF, JPEG, BMP, TGA, PCX.
The following are examples of compression algorithms - LZW (Lempel Ziv Welch), RLE (Run Length Encoding), DEFLATE.
For the most part, each format uses the same compression every time, e.g. PNG uses DEFLATE, whereas TGA and PCX always use RLE. Some formats, however, can accommodate different types of compression, e.g. TIFF can accommodate the LZW, JPEG, PackBits, and CCITT compression types.
A codec is more than a compression algorithm, it understands all aspects of the format... where to find the height and width, palettes, padding, compression, byte-ordering, transparency, meta-data and so on.

How to interpret the TIFF image spec 6.0 PackBits compression

The following is from TIFF 6.0 Specification Section 9: PackBits Compression
That is the essence of the algorithm. Here are some additional rules:
Pack each row separately. Do not compress across row boundaries.
The number of uncompressed bytes per row is defined to be (ImageWidth + 7)
/ 8. If the uncompressed bitmap is required to have an even number of bytes per
row, decompress into word-aligned buffers.
If a run is larger than 128 bytes, encode the remainder of the run as one or more
additional replicate runs
The first and the third items are easy to understand, but I am confused about the second one, specifically this: "The number of uncompressed bytes per row is defined to be (ImageWidth + 7) / 8." Isn't that only true for a 1-bit bilevel image? To my knowledge, PackBits is a byte-oriented compression algorithm, so it could be used for any type of TIFF.
Could someone who knows about TIFF and PackBits give me some hints?
The TIFF document from this site: http://www.fileformat.info/format/tiff/corion-packbits.htm
has the following at the top:
Abstract
This document describes a simple compression scheme for bilevel
scanned and paint type files.
Motivation
The TIFF specification defines a number of compression schemes.
Compression type 1 is really no compression, other than basic
pixel packing. Compression type 2, based on CCITT 1D
compression, is powerful, but not trivial to implement.
Compression type 5 is typically very effective for most bilevel
images, as well as many deeper images such as palette color and
grayscale images, but is also not trivial to implement. PackBits
is a simple but often effective alternative
So it is clear that the additional rules are written with respect to bilevel images. For some reason, the abstract and motivation above are missing from the PDF version of the TIFF 6.0 specification.
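The formula in the second rule is just integer ceiling division: in a 1-bit bilevel image each byte packs 8 pixels, so a row of ImageWidth pixels occupies ceil(ImageWidth / 8) bytes, which the spec writes as (ImageWidth + 7) / 8. A quick check:

```python
# Bytes per row for a 1-bit image: each byte holds 8 pixels, and any
# partial final byte still occupies a whole byte on disk.
for image_width in (1, 8, 9, 16, 17):
    bytes_per_row = (image_width + 7) // 8   # integer ceiling division
    print(image_width, "pixels ->", bytes_per_row, "byte(s)")
```

For deeper images (grayscale, palette, RGB) the row length is computed from the actual bits per pixel instead; the rule as quoted applies only to the bilevel case the original PackBits write-up targeted.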

TIFF LZW compression 10 times larger than the original JPEG compression

I converted a few JPEG-compressed files to TIFF with LZW compression, but to my surprise their sizes were larger than the original JPEG ones. I checked the format of the converted files using ImageMagick's identify tool and confirmed that they were using LZW compression.
What could be the reason behind this? Does anyone have an idea?
I also tried it using FastStone Image Converter, still with the same result.
That is entirely expected and normal. JPEG is a lossy format that achieves significant compression by accepting some degradation of the image. TIFF LZW compression is lossless, and so by definition cannot change the image at all for the purposes of compression, which greatly limits how far it can go. Furthermore, LZW is not a particularly good compression scheme, even among lossless ones. There are better choices: the most common is PNG, and there are better ones still depending on the type of image, such as JBIG for bilevel images or JPEG 2000 lossless for natural images.
