Get width and height from jpeg without 0xFF 0xC0 - image

I'm trying to get the file dimensions (width and height) from a jpeg (in this case an Instagram picture)
https://scontent.cdninstagram.com/t51.2885-15/s640x640/sh0.08/e35/11264864_1701024620182742_1335691074_n.jpg
As I understand it, the width and height are stored after the 0xFF 0xC0 marker; however, I cannot find this marker in this picture. Has it been stripped, or is there an alternative marker I should check for?

The JPEG Start-Of-Frame (SOF) marker has 4 possible values:
FFC0 (baseline) - This is the usual mode chosen for photos and encodes fully specified DCT blocks in groupings depending on the color/subsample options chosen
FFC1 (extended) - This is similar to baseline, but has more than 8 bits per color stimulus
FFC2 (progressive) - This mode is often found on web pages to allow the image to load progressively as the data is received. Each "scan" of the image progressively defines more coefficients of the DCT blocks until they're fully defined. This effectively provides more and more detail as more scans are decoded
FFC3 (lossless) - This mode uses a simple Huffman encoding to losslessly encode the image. The only place I've seen this used is on 16-bit grayscale DICOM medical images
If you're scanning through the bytes looking for an FFCx pattern, be aware that you may encounter one in the embedded thumbnail image (inside an FFE1 Exif marker). To properly find the SOFx of the main image, you'll need to walk the chain of JPEG markers. The 2-byte length (big-endian) follows the 2-byte marker. One last pitfall to avoid is that some JPEG encoders stick extra FF values in between the valid markers. If you encounter a FFFF where a marker should be, just increment the pointer by 1 byte and try again until you hit a valid marker.
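If you want to do the walk yourself, here is a minimal Python sketch of the idea (illustrative only, not hardened against truncated files): it skips each marker segment by its declared length, so it never descends into an APPn/Exif thumbnail, tolerates extra FF fill bytes, and reads the height and width from the first real SOFn it reaches.
import struct

def jpeg_dimensions(path):
    # Walk the JPEG marker chain and return (width, height) from the main
    # image's SOFn segment, skipping APPn segments (and any thumbnail inside them).
    with open(path, 'rb') as f:
        data = f.read()
    if data[:2] != b'\xff\xd8':
        raise ValueError('not a JPEG (missing SOI marker)')
    pos = 2
    while pos < len(data) - 1:
        if data[pos] != 0xFF:
            raise ValueError('expected a marker at offset %d' % pos)
        while data[pos + 1] == 0xFF:      # skip extra FF fill bytes between markers
            pos += 1
        marker = data[pos + 1]
        pos += 2
        # standalone markers (SOI, EOI, TEM, RSTn) carry no length/payload
        if marker in (0xD8, 0xD9, 0x01) or 0xD0 <= marker <= 0xD7:
            continue
        (seg_len,) = struct.unpack('>H', data[pos:pos + 2])   # includes the 2 length bytes
        # SOF0..SOF15, excluding DHT (C4), JPG (C8) and DAC (CC)
        if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
            precision, height, width = struct.unpack('>BHH', data[pos + 2:pos + 7])
            return width, height
        if marker == 0xDA:                # SOS: entropy-coded data follows, stop looking
            break
        pos += seg_len
    raise ValueError('no SOF marker found')
Run on a local copy of the image above, it should agree with what identify reports below (640x640).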

You can get it pretty simply at the command line with ImageMagick, which is installed on most Linux distros and is available for OSX and Windows.
identify -ping https://scontent.cdninstagram.com/t51.2885-15/s640x640/sh0.08/e35/11264864_1701024620182742_1335691074_n.jpg
Output
https://scontent.cdninstagram.com/t51....1074_n.jpg JPEG 640x640 640x640+0+0 8-bit sRGB 162KB 0.000u 0:00.000

Related

How can an Interlaced .png file's size be smaller than the original file?

Ok, so I tried to use the imagemagick command:
"convert picA.png -interlace line picB.png"
to make an interlaced version of my .png images. Most of the time, the resulting image is larger than the original one, which is kind of normal. However, for certain images, the resulting image is smaller.
So I just wonder why does that happen? I really don't want my new image to lose any quality because of the command.
Also, is there any compatibility problem with interlaced .png images?
EDIT: I guess my problem is that the original image was not compressed as well as it could have been.
The following only applies to cases where the pixel size is >= 8 bits. I didn't investigate the other cases, but I expect similar outcomes.
A content-identical interlaced PNG image file will almost always be larger because of the additional filter-type bytes required for the scanlines of each pass. This is what I explained in detail on this web page, based on the PNG specification (RFC 2083).
In short, this is because the sum of the per-pass scanline counts below (one filter-type byte per scanline in each pass; see the sketch after the list) is almost always greater than the image height, which is the number of filter-type bytes for a non-interlaced image:
nb_pass1_lines = CEIL(height/8)
nb_pass2_lines = (width>4?CEIL(height/8):0)
nb_pass3_lines = CEIL((height-4)/8)
nb_pass4_lines = (width>2?CEIL(height/4):0)
nb_pass5_lines = CEIL((height-2)/4)
nb_pass6_lines = (width>1?CEIL(height/2):0)
nb_pass7_lines = FLOOR(height/2)
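As a cross-check, here is a small Python sketch that derives the same counts directly from the Adam7 pass geometry (pass origins and steps) rather than from the closed-form expressions above, and compares the interlaced filter-byte total with the image height:
import math

# Adam7 pass geometry: (x_start, x_step, y_start, y_step) for passes 1..7
ADAM7 = [
    (0, 8, 0, 8),
    (4, 8, 0, 8),
    (0, 4, 4, 8),
    (2, 4, 0, 4),
    (0, 2, 2, 4),
    (1, 2, 0, 2),
    (0, 1, 1, 2),
]

def interlaced_filter_bytes(width, height):
    # One filter-type byte per scanline of every non-empty pass.
    total = 0
    for x0, dx, y0, dy in ADAM7:
        cols = max(0, math.ceil((width - x0) / dx))
        rows = max(0, math.ceil((height - y0) / dy))
        if cols > 0:                      # an empty pass contributes no scanlines
            total += rows
    return total

for w, h in [(1, 1), (640, 640), (750, 500)]:
    print('%dx%d: %d filter bytes interlaced vs %d non-interlaced'
          % (w, h, interlaced_filter_bytes(w, h), h))
For a 640x640 image this gives 1200 filter-type bytes against 640 for the non-interlaced encoding, while a 1x1 image gives 1 in both cases, matching the "usually" remark below.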
Though, theoretically, the data entropy/complexity can accidentally be lowered enough by the Adam7 interlacing that, with the help of filtering, the extra space usually needed for the filter-type bytes is compensated for by the deflate compression used in the PNG format. This would be a particular case to be proven, as the entropy/complexity is more likely to increase with interlacing, because the image data is made less consistent by the interlacing deconstruction.
I used the word "accidentally" because reducing the data entropy/complexity is not the purpose of Adam7 interlacing; its purpose is to allow progressive loading and display of the image through the pass mechanism. Reducing the entropy/complexity is, however, the purpose of PNG filtering.
I used the word "usually" because, as shown on the explanation web page, a 1-pixel image, for example, is described by the same amount of uncompressed data whether interlaced or not, so in that case no additional space is needed.
When it comes to PNG file size, a smaller size for the interlaced file can be due to:
Different non-pixel-encoding content embedded in the file, such as a palette (in the case of color type != 3) and non-critical chunks such as chromaticities, gamma, number of significant bits, default background color, histogram, transparency, physical pixel dimensions, time, text, and compressed text. Note that some of this non-pixel-encoding content can lead to a different display of the image depending on the software used and the situation.
Different pixel-encoding content (which can change the image quality), such as bit depth, color type (and thus whether a palette is used, with color type = 3), image size, etc.
Different compression-related content, such as better filtering choices, accidentally lower data entropy/complexity due to interlacing as explained above (a theoretical particular case), or a higher compression level (as you mentioned).
If I had to check whether two PNG image files are equivalent pixel-wise, I would use the following command at a bash prompt:
diff <( convert non-interlaced.png rgba:- ) <( convert interlaced.png rgba:- )
It should return no difference.
For the compatibility question, if the PNG encoder and PNG decoder implement the mandatory aspects of the PNG RFC, I see no reason for the interlacing to lead to a compatibility issue.
Edit 2018 11 13:
Some experiments based on auto-evolved distributed genetic algorithms with a niche mechanism (hosted at https://en.oga.jod.li) are explained here:
https://jod.li/2018/11/13/can-an-interlaced-png-image-be-smaller-than-the-equivalent-non-interlaced-image/
Those experiments show that it is possible for equivalent PNG images to be smaller interlaced than non-interlaced. The best images for this are tall, one pixel wide, and have pixel content that appears random. Shape is not the only important factor, though, as random cases with the same shape lead to different size differences.
So, yes, some PNG images can be identical pixel-wise and in their non-pixel content, yet be smaller interlaced than non-interlaced.
So I just wonder why does that happen?
From section Interlacing and pass extraction of the PNG spec.
Scanlines that do not completely fill an integral number of bytes are padded as defined in 7.2: Scanlines.
NOTE If the reference image contains fewer than five columns or fewer than five rows, some passes will be empty.
I would assume the behavior you're experiencing is the result of the Adam7 method requiring additional padding.

Difference in entropy values for the same image

I am computing the entropy of an RGB image after histogram equalization on the Y plane, as follows:
% i is the original RGB image
y1 = rgb2ycbcr(i);
y = y1(:,:,1); cb = y1(:,:,2); cr = y1(:,:,3);
he1 = histeq(y);                 % equalize only the luma (Y) plane
r1 = cat(3, he1, cb, cr);
r1 = ycbcr2rgb(r1);
g1 = rgb2gray(r1);
e1 = entropy(g1);
Now I followed the procedure:
imwrite(r1,'temp1.jpg');
i2=imread('temp1.jpg');
g2=rgb2gray(i2);
e2=entropy(g2);
But now e1 and e2 are different. Why is that so?
You're writing the image r1 to disk using the JPEG compression standard. JPEG is lossy, which means that what is written to disk is not the same as what was originally stored in memory. Though the images look perceptually the same, if you compared the colour values of corresponding pixels, the majority of them would be slightly different. These slight differences are why the JPEG standard achieves high compression ratios and thus smaller file sizes.
If you want to ensure that what you write to file is the same as what you read back in, use a lossless format such as PNG. Change the destination filename so that you're writing PNG, not JPEG:
imwrite(r1,'temp1.png'); %// Change
i2=imread('temp1.png'); %// Change
g2=rgb2gray(i2);
e2=entropy(g2);

Why does tinypng make pics brighter?

I like to compress PNG images via the tinypng service. It saves up to 97% of the PNG file size. But sometimes the resulting picture looks brighter than the original, which is bad. The question is: why does my image become brighter? And how do I avoid this effect?
On tinypng website they write:
Because the number of colors is reduced, 24-bit PNG files can be converted to much smaller 8-bit indexed color images. All unnecessary metadata is stripped too.
Because tinypng uses lossy compression, it can alter image quality, including brightness. If you want no effect on image quality, you should look at lossless optimizers, which only strip unnecessary metadata and won't affect the pixels. You could try:
https://kraken.io/web-interface/
http://www.punypng.com
The recompressed image is brighter because tinypng removes ancillary chunks. I verified that fact by sending it a PNG containing a "gAMA 1.0" chunk.
If the input image has a gAMA chunk, tinypng removes it and the image is displayed as though it were sRGB (gamma=1/2.2).
If the input image has no colorspace chunks (gAMA, sRGB, cHRM, or iCCP), or if it has those but they contain a colorspace that is exactly sRGB or close to sRGB, removing them is pretty safe and won't change the image brightness.
You can avoid the effect by using another application that doesn't remove ancillary chunks, or you can convert your image to the sRGB colorspace before sending it to tinypng.
Or, you could use a PNG editor to restore the gAMA chunk. There are many PNG editors available. Personally, I'd use pngsplit to extract the gAMA chunk from the original and to separate the chunks in the tiny PNG, then "cat" the chunks from the compressed file together with the old gAMA chunk (put it right after the IHDR chunk) to form a new compressed file with the right gAMA.
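If you want to verify which chunks survived for a particular file, a short Python sketch like the one below will list the colorspace-related chunks; the two filenames at the end are placeholders for your original and the tinypng output.
import struct

def png_chunks(path):
    # Yield (chunk_type, data) for every chunk in a PNG file.
    with open(path, 'rb') as f:
        if f.read(8) != b'\x89PNG\r\n\x1a\n':
            raise ValueError('not a PNG file')
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack('>I4s', header)
            data = f.read(length)
            f.read(4)                                  # skip the CRC
            yield ctype.decode('ascii'), data
            if ctype == b'IEND':
                break

def report_colorspace_chunks(path):
    for ctype, data in png_chunks(path):
        if ctype == 'gAMA':
            print(path, 'gAMA =', struct.unpack('>I', data)[0] / 100000.0)
        elif ctype in ('sRGB', 'cHRM', 'iCCP'):
            print(path, 'has', ctype)

report_colorspace_chunks('original.png')       # placeholder filenames
report_colorspace_chunks('tinified.png')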

How to interpret TIFF image spec 6.0 PackBits compression

The following is from TIFF 6.0 Specification Section 9: PackBits Compression
That is the essence of the algorithm. Here are some additional rules:
Pack each row separately. Do not compress across row boundaries.
The number of uncompressed bytes per row is defined to be (ImageWidth + 7) / 8. If the uncompressed bitmap is required to have an even number of bytes per row, decompress into word-aligned buffers.
If a run is larger than 128 bytes, encode the remainder of the run as one or more additional replicate runs.
The first and third items are easy to understand, but I am confused about the second one, specifically this: "The number of uncompressed bytes per row is defined to be (ImageWidth + 7) / 8." Isn't that only true for a 1-bit bi-level image? To my knowledge, PackBits is a byte-oriented compression algorithm, so it could be used for any type of TIFF.
Could someone who knows about tiff and packbits give me some hints?
The TIFF document from this site: http://www.fileformat.info/format/tiff/corion-packbits.htm
has the following at the top:
Abstract
This document describes a simple compression scheme for bilevel scanned and paint type files.
Motivation
The TIFF specification defines a number of compression schemes. Compression type 1 is really no compression, other than basic pixel packing. Compression type 2, based on CCITT 1D compression, is powerful, but not trivial to implement. Compression type 5 is typically very effective for most bilevel images, as well as many deeper images such as palette color and grayscale images, but is also not trivial to implement. PackBits is a simple but often effective alternative.
So it is clear that the additional rules are with respect to bilevel images. For some reason, the above abstract and motivation are missing from the PDF version of TIFF 6.0.
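For what it's worth, the algorithm itself is indeed byte-oriented regardless of what the rows contain. Here is a minimal Python decoder sketch for one compressed row, following the spec's description of literal and replicate runs; the test vector at the end is just an illustration and should decode to 24 bytes (3x AA, 80 00 2A, 4x AA, 80 00 2A 22, 10x AA).
def packbits_decode(data):
    # Decode one PackBits-compressed row (byte-oriented, as described in the spec).
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i] - 256 if data[i] > 127 else data[i]   # interpret as a signed byte
        i += 1
        if n == -128:                     # -128 is a no-op
            continue
        if n >= 0:                        # literal run: copy the next n + 1 bytes
            out += data[i:i + n + 1]
            i += n + 1
        else:                             # replicate run: repeat the next byte 1 - n times
            out += bytes([data[i]]) * (1 - n)
            i += 1
    return bytes(out)

encoded = bytes.fromhex('FE AA 02 80 00 2A FD AA 03 80 00 2A 22 F7 AA')
print(packbits_decode(encoded).hex())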

Custom Image Format: How to Target Compression Algorithms

I've done a bit of fiddling around with PNGs over the last couple of days and I am upset with my findings. I'm concluding that the majority of my results come down to compression, so this weekend I'm going to dive into advanced compression articles. I wanted to share my findings so far, to see if anyone has advice on achieving my goal and can maybe point me in the right direction.
I am currently working on a project where I need to obtain the smallest possible file size within a window of less than 15 seconds.
The majority of the images I am working with are PNG-8bpp with a full 256 color palette. Most of these images I could represent accurately with 5bpp (32 colors).
Indexed PNG, however, only supports 1, 2, 4, and 8bpp. So my idea was to strip the PNG format down to the minimal information I need and write an encoder/decoder that supports IDAT sections with 3, 5, 6, or 7bpp.
Test 1:
Original File: 61.5KB, 750 * 500, 8bpp Palette, 256 colors, No tRNS
After Optimizations (Reduction to 4bpp, Stripping Ancillary Chunks, & PNGOUT): 49.2KB, 4bpp, 16 Colors
Human Interpretation: I can see 6 distinguishable colors.
Since I only need six colors to represent the image, I decided to encode the IDAT using 3bpp, giving me a max palette of 8 colors. First I uncompressed the IDAT, which resulted in a new file size of 368KB. After repacking the IDAT at 3bpp, my new uncompressed file size was 274KB. I was off to what seemed to be a good start... Next I applied deflate to my new IDAT section. Result... 59KB.
10KB larger than using 4bpp.
Test 2:
Original File: 102KB, 1000 * 750, 8bpp, 256 Colors, tRNS 1 fully transparent color
After Optimization: 79KB, 8bpp, 193 colors, tRNS 1 full transparent color
Human Interpretation: I need about 24 colors to represent this picture.
24 colors can be represented in 5bpp (max 32 colors). Using the same technique as above, I achieved a much smaller uncompressed size, but again I lost out after compression. Final compressed size... 84KB. Then I tried 6 and 7bpp... same result: poorer compression than at 8bpp.
Just to be sure, I saved all the uncompressed images and tried several other compression algorithms... LZMA, BZIP2, PAQ8... same result: a smaller compressed size at 8bpp than at 5, 6, or 7bpp, AND a smaller size at 4bpp than at 3bpp.
Why is this occurring? Can I tweak/modify a compression algorithm to target a PNG-like format that uses 5, 6, or 7bpp and beats 8bpp compression? Is it worth the time... and yes, saving another 10KB would be worth it.
What you're seeing is that by using odd pixel sizes, your effective compression decreases because of the way PNG compression works. The advantage of PNG compression over just using straight FLATE/ZIP compression is the filtering. PNG compression tries to exploit horizontal and vertical symmetry with a small assortment of pre-processing filters. These filters work on byte boundaries and are effective with pixel sizes of 4/8/16/24/32/48/64 bits. When you move to an odd size pixel (3/5/6/7 bits) you are defeating the filtering because identically colored pixels won't "cancel each other out" horizontally when filtered on 8-bit boundaries.
Even if the filtering weren't an issue, the way that FLATE compression works, reducing the pixel size from 8 to 7 or 6 bits won't have much effect either because it also assumes a symbol size of 8-bits.
In conclusion...the only benefit you can achieve by using odd sizes of pixels is that the uncompressed data will be smaller. By breaking the pixels' byte boundary symmetry, you defeat much of the benefit of PNG compression.
GIF compression supports all pixel sizes from 1 to 8 bits. It defines the symbol size as the pixel size and doesn't use any pre-filtering. An 8-bit GIF image, if compressed as 7-bit pixels, wouldn't compress any worse, but it also wouldn't benefit, because the compression depends more on the repetition of the pixels than on the symbol size.
The DEFLATE compression used by PNG has two main techniques:
finds repeating byte sequences and encodes them as backreferences
encodes bytes using Huffman coding
By changing the pixel length from 8 bits, you fall out of sync with byte boundaries, and DEFLATE won't be able to encode repeating pixel runs as repeated byte sequences.
And thanks to Huffman coding, it doesn't matter that 8-bit pixels have unused bits, because bytes are encoded with variable-width codes, with the shortest codes assigned to the most frequently occurring values.
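Both effects are easy to reproduce with a synthetic experiment. The Python sketch below (the image data is made up, not taken from the question) builds an indexed image out of horizontal runs, packs it at 8, 4 and 3 bits per pixel with no filtering, and deflates each stream. The 3bpp stream is the smallest before compression, but on run-structured data like this it comes out much larger after compression, because identical pixels no longer line up on byte boundaries and DEFLATE finds far fewer back-references.
import random, zlib

random.seed(42)
WIDTH, HEIGHT, COLORS = 750, 500, 6

# Synthetic indexed image: rows built from horizontal runs of random length,
# the kind of repeated-pixel structure a simple graphic tends to have.
pixels = []
for _ in range(HEIGHT):
    row = []
    while len(row) < WIDTH:
        row += [random.randrange(COLORS)] * random.randint(3, 20)
    pixels.append(row[:WIDTH])

def pack(rows, bits):
    # Pack palette indices at the given bit depth, one byte-padded scanline
    # at a time (like PNG, but with no filtering).
    out = bytearray()
    for row in rows:
        acc = nbits = 0
        for p in row:
            acc = (acc << bits) | p
            nbits += bits
            while nbits >= 8:
                out.append((acc >> (nbits - 8)) & 0xFF)
                nbits -= 8
                acc &= (1 << nbits) - 1
        if nbits:                         # pad the last byte of the scanline
            out.append((acc << (8 - nbits)) & 0xFF)
    return bytes(out)

for bits in (8, 4, 3):
    raw = pack(pixels, bits)
    comp = zlib.compress(raw, 9)
    print('%d bpp: raw %6d bytes -> deflate %6d bytes' % (bits, len(raw), len(comp)))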
