Batch Optimize PNG for Google PageSpeed - bash

From: http://gtmetrix.com/reports/hosting.site.dev.nexwrx.com/OUsrZOCY
_include/img/menu-mobile.png could save 1.0KiB (82% reduction)
Things I have tried:
pngcrush _include/img/menu-mobile.png menu-mobile.png
Best pngcrush method = 6 (ws 12 fm 5 zl 9 zs 0) = 215
for menu-mobile.png
(1.38% critical chunk reduction)
(0.24% filesize reduction)
when I try optipng -o7 _include/img/menu-mobile.png
_include/img/menu-mobile.png is already optimized
pngquant --quality=75-80 _include/img/logo
pngquant: mempool.c:40: mempool_create: Assertion `!((uintptr_t)(*mptr + (*mptr)->used) & 15UL)' failed.
Aborted
pngquant just seems to fail on everything (Ubuntu 14.04, pngquant 2.01).
Any idea how I can get the 82% reduction that Google states on a .png?

Google removed the iTXt and tEXt chunks, saving around 1050 bytes, and reduced the pixels from 32 bits/pixel RGBA to 4 bits/pixel indexed, saving a few more bytes:
$ pngcheck -v menu-mobile.png
File: menu-mobile.png (1265 bytes)
chunk IHDR at offset 0x0000c, length 13
16 x 32 image, 32-bit RGB+alpha, non-interlaced
chunk iTXt at offset 0x00025, length 1001, keyword: XML:com.adobe.xmp
uncompressed, no language tag
no translated keyword, 980 bytes of UTF-8 text
chunk tEXt at offset 0x0041a, length 25, keyword: Software
chunk IDAT at offset 0x0043f, length 158
zlib: deflated, 4K window, maximum compression
chunk IEND at offset 0x004e9, length 0
No errors detected in menu-mobile.png (5 chunks, 38.2% compression).
$ pngcheck -v menu-mobile-opt.png
File: menu-mobile_opt.png (216 bytes)
chunk IHDR at offset 0x0000c, length 13
16 x 32 image, 4-bit palette, non-interlaced
chunk PLTE at offset 0x00025, length 36: 12 palette entries
chunk tRNS at offset 0x00055, length 11: 11 transparency entries
chunk IDAT at offset 0x0006c, length 88
zlib: deflated, 512-byte window, default compression
chunk IEND at offset 0x000d0, length 0
No errors detected in menu-mobile_opt.png (5 chunks, 15.6% compression).
Pngcrush can do a little better by reducing the pixels to 16-bits/pixel Gray-alpha:
$ pngcrush -s -reduce -rem text menu-mobile.png menu-mobile-pc.png
$ pngcheck -v menu-mobile-pc.png
File: menu-mobile-pc.png (175 bytes)
chunk IHDR at offset 0x0000c, length 13
16 x 32 image, 16-bit grayscale+alpha, non-interlaced
chunk IDAT at offset 0x00025, length 118
zlib: deflated, 2K window, maximum compression
chunk IEND at offset 0x000a7, length 0
No errors detected in menu-mobile-pc.png (3 chunks, 82.9% compression).
In this case, the IDAT chunk that contains the compressed pixel data is 30 bytes larger than Google's result, but that is offset by the fact that the gray+alpha color type doesn't require the PLTE (36 bytes of data plus 12 bytes of chunk overhead) and tRNS (11 bytes of data plus 12 bytes of overhead) chunks. For images with larger dimensions, this tradeoff would probably come out differently.
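To batch-apply this to a whole folder (as the question title asks), here is a minimal Python sketch that simply runs the same pngcrush command over every .png in a directory; the directory path and the "-pc" output suffix are illustrative, and it assumes pngcrush is installed and on the PATH:
import pathlib
import subprocess

SRC_DIR = pathlib.Path('_include/img')    # directory from the question (adjust as needed)

for png in SRC_DIR.glob('*.png'):
    out = png.with_name(png.stem + '-pc.png')
    # Same flags as above: silent, lossless color-type reduction, strip text chunks.
    subprocess.run(['pngcrush', '-s', '-reduce', '-rem', 'text', str(png), str(out)],
                   check=True)
    print(png, '->', out, out.stat().st_size, 'bytes')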

Related

Debayering bayer encoded Raw Images

I have an image that I need to write a debayering routine for, but I can't figure out how the data is packed.
The information I have about the image:
original bpp: 64;
PNG bpp: 8;
columns: 242;
rows: 3944;
data size: 7635584 bytes.
PNG https://drive.google.com/file/d/1fr8Tg3OvhavsgYTwjJnUG3vz-kZcRpi9/view?usp=sharing
SRC data: https://drive.google.com/file/d/1O_3tfeln76faqgewAknYKJKCbDq8UjEz/view?usp=sharing
I was told it should be BGGR, but it doesn't look like any ordinary Bayer BGGR image to me. The image also came with a .txt file containing this text:
Camera resolution: 1280x944
Camera type: LVDS
Could the image be compressed somehow?
I'm completely lost here, I would appreciate any help.
Bayer pattern of the image in 8bpp representation
Looks like there are 4 images, and the pixels are stored in some kind of "packed 12" format.
Please note that "reverse engineering" the format is challenging, and the solution probably has a few mistakes.
The 4 images are stored in steps of 4 rows:
aaaaaaaaaaaaa
bbbbbbbbbbbbb
ccccccccccccc
ddddddddddddd
aaaaaaaaaaaaa
bbbbbbbbbbbbb
ccccccccccccc
ddddddddddddd
...
aaa... marks the first image.
bbb... marks the second image.
ccc... marks the third image.
ddd... marks the fourth image.
There are about 168 rows at the top that we have to ignore.
Getting 1280 pixels out of 1936 bytes in each row:
Each row has 16 bytes we have to ignore.
Out of 1936 bytes, only 1920 bytes are relevant (assume we have to remove 8 bytes from each side).
The 1920 bytes represent 1280 pixels.
Every 2 pixels are stored in 3 bytes (every pixel is 12 bits).
The two 12-bit elements in each 3-byte group are packed as follows:
8 MSB bits   8 MSB bits   4 LSB and 4 LSB bits
########     ########     #### ####
It's hard to tell how the LSB bits are divided between the two pixels (the LSB bits are mainly "noise").
After unpacking the pixels and extracting one image out of the 4, the format looks like a GRBG Bayer pattern (by changing the size of the margins we may get BGGR).
MATLAB code sample for extracting one image:
f = fopen('test.img', 'r'); % Open file (as binary file) for reading
T = fread(f, [1936, 168], 'uint8')'; % Read (and ignore) the first 168 rows
I = fread(f, [1936, 944*4], 'uint8')'; % Read 944*4 rows
fclose(f);
% Convert from packed 12 to uint16 (also skip rows in steps of 4, and ignore 8 bytes from each side):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A = uint16(I(1:4:end, 8+1:3:end-8)); % MSB of even pixels (convert to uint16)
B = uint16(I(1:4:end, 8+2:3:end-8)); % MSB of odd pixels (convert to uint16)
C = uint16(I(1:4:end, 8+3:3:end-8)); % 4 bits are LSB of even pixels and 4 bits are LSB of odd pixels
I1 = A*16 + bitshift(C, -4); % Add the 4 LSB bits to the even pixels (may be wrong)
I2 = B*16 + bitand(C, 15); % Add the other 4 LSB bits to the odd pixels (may be wrong)
I = zeros(size(I1, 1), size(I1, 2)*2, 'uint16'); % Allocate a 944x1280 (rows x columns) uint16 matrix.
I(:, 1:2:end) = I1; % Copy even pixels
I(:, 2:2:end) = I2; % Copy odd pixels
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
J = demosaic(I*16, 'grbg'); % Apply demosaic (multiply by 16, because MATLAB assumes the 12 bits are in the upper bits).
figure;imshow(lin2rgb(J));impixelinfo % Show the output image (lin2rgb applies gamma correction).
Result (converted to 8 bit):
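For reference, here is a rough NumPy port of the unpacking step; it is a sketch under the same assumptions as the MATLAB code above (168 ignored rows, 8 ignored bytes per side, the guessed 4+4 LSB split), and test.img is just the example file name used there:
import numpy as np

raw = np.fromfile('test.img', dtype=np.uint8)          # whole file as bytes
rows = raw.reshape(-1, 1936)[168:168 + 944*4]          # drop the first 168 rows
rows = rows[0::4, 8:-8]                                # first of the 4 interleaved images, trim 8 bytes per side

a = rows[:, 0::3].astype(np.uint16)                    # 8 MSB of even pixels
b = rows[:, 1::3].astype(np.uint16)                    # 8 MSB of odd pixels
c = rows[:, 2::3].astype(np.uint16)                    # packed 4+4 LSB bits

img = np.zeros((rows.shape[0], a.shape[1]*2), np.uint16)
img[:, 0::2] = (a << 4) | (c >> 4)                     # even pixels (LSB split is a guess)
img[:, 1::2] = (b << 4) | (c & 0x0F)                   # odd pixels
# img is now one 944x1280 12-bit Bayer frame; demosaic as GRBG (per the guess above).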

How is this animated .png possible?

While learning how to edit Firefox themes, I came across this animated .png (not apng) that I do not understand. I assumed it was a gif at first just given the pixel art style.
.png in question:
(reuploaded to a site of mine)
The .png is part of this theme:
https://addons.mozilla.org/en-US/firefox/addon/black-rain-remasterd-dark-mode/
and this is the url for the image in question when I inspect the theme:
moz-extension://0c38fbf3-159e-4a75-8215-b2e881443fc9/images/0.png
I tried opening it in Photoshop and Visual Studio Code; it opens like any other still .png would, not like, say, a .gif that would show me the frames.
It is a valid, animated PNG.
You can check it with pngcheck as follows:
pngcheck -v 0.png
File: 0.png (991263 bytes)
chunk IHDR at offset 0x0000c, length 13
640 x 65 image, 8-bit palette, interlaced
chunk PLTE at offset 0x00025, length 768: 256 palette entries
chunk tRNS at offset 0x00331, length 256: 256 transparency entries
chunk pHYs at offset 0x0043d, length 9: 3779x3779 pixels/meter (96 dpi)
chunk acTL at offset 0x00452, length 8
unknown private, ancillary, unsafe-to-copy chunk
chunk fcTL at offset 0x00466, length 26
unknown private, ancillary, unsafe-to-copy chunk
chunk IDAT at offset 0x0048c, length 21371
zlib: deflated, 32K window, maximum compression
rows per pass: 9, 9, 8, 17, 16, 33, 32
chunk fcTL at offset 0x05813, length 26
unknown private, ancillary, unsafe-to-copy chunk
chunk fdAT at offset 0x05839, length 21371
unknown private, ancillary, unsafe-to-copy chunk
...
...
...
chunk fcTL at offset 0xecc09, length 26
unknown private, ancillary, unsafe-to-copy chunk
chunk fdAT at offset 0xecc2f, length 21468
unknown private, ancillary, unsafe-to-copy chunk
chunk IEND at offset 0xf2017, length 0
No errors detected in 0.png (100 chunks, -2282.8% compression).
Or check with:
magick identify -verbose 0.png
You will see it contains both fcTL "Frame Control Chunk" and acTL "Animation Control Chunk". See here.
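If you want to check for this programmatically, here is a minimal Python sketch that walks the PNG chunk list and reports whether an acTL (Animation Control) chunk is present; the file name is just the example from above:
import struct

def is_apng(path):
    with open(path, 'rb') as f:
        if f.read(8) != b'\x89PNG\r\n\x1a\n':
            return False                      # not a PNG at all
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False                  # end of file, no acTL found
            length, ctype = struct.unpack('>I4s', header)
            if ctype == b'acTL':
                return True                   # animation control chunk -> APNG
            if ctype == b'IDAT':
                return False                  # acTL must appear before IDAT
            f.seek(length + 4, 1)             # skip chunk data + CRC

print(is_apng('0.png'))                       # True for the theme image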

Tag Size and Cache Bits Exercise

I am studying for my computer architecture exam that is due tomorrow and am stuck on a practice exercise regarding tag size and the total number of cache bits. Here is the question:
Question 8:
This question deals with main and cache memory only.
Address size: 32 bits
Block size: 128 items
Item size: 8 bits
Cache Layout: 6 way set associative
Cache Size: 192 KB (data only)
Write Policy: Write Back
Answer: The tag size is 17 bits. The total number of cache bits is 1602048.
I know that this is a fairly straightforward exercise, but I seem to be lacking the proper formulas. I also know that the address structure for an N-way set associative cache is |TAG 25 bits|SET 2 bits|OFFSET 5 bits|, and that tag size = address size - set bits - offset bits (- item size, if any), which gives the answer of a 17-bit tag.
However, how do I calculate the total number of cache bits please?
cache size in bytes: 192*1024 = 196608
number of blocks: 196608 / 128 = 1536
number of sets: 1536 / 6 = 256
set number bits: log2(256) = 8
offset number bits: log2(128) = 7
tag size: 32-(8+7) = 17
metadata: valid+dirty = 2 bits
total tag + metadata: (17+2)*1536 = 29184 bits
total data: 1536*128*8 = 1572864 bits
total size: 29184 + 1572864 = 1,602,048
There could also be bits used for the replacement policy, but we can assume it's random to make the answer work.
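If it helps, the same calculation as a short Python sketch (the 2 metadata bits for valid + dirty are the assumption stated above):
import math

address_bits = 32
block_bytes  = 128                     # 128 items x 8 bits/item
ways         = 6
cache_bytes  = 192 * 1024              # data only

blocks   = cache_bytes // block_bytes              # 1536
sets     = blocks // ways                          # 256
set_bits = int(math.log2(sets))                    # 8
off_bits = int(math.log2(block_bytes))             # 7
tag_bits = address_bits - set_bits - off_bits      # 17

meta_bits  = 2                                     # valid + dirty (write-back)
total_bits = blocks * (block_bytes * 8 + tag_bits + meta_bits)
print(tag_bits, total_bits)                        # 17 1602048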

Why are kilo-, mega- and giga-bytes named after "bytes", if they all have 10 or more bits when bytes have 8 bits?

I get why we have the number 1024 instead of 1000 for the prefix "kilo" in computing (computers use base 2, so 2^10, blah blah blah). So I get the kilo part, but why is it called a kilo-"byte"? To make a kilo-"byte", we need to use bit strings with 10 digits, from 0000000000 to 1111111111. That is not 8 digits, so shouldn't it be called something else?
I.e. a kilobyte is not 1024 groupings of 8-bit binary digits, it is 1024 groups of 10-bit binary digits, and a megabyte has even more than 10 binary digits, not 8. If asked how many bits are in 1 kilobyte, people calculate it as 1*1024*8. But that's wrong! It should be 1*1024*10.
You are confusing the size of a byte with the size of the value needed to address those bytes.
On most systems a byte is 8 bits, which means 1000 bytes is exactly 1000*8 bits, and 2000 bytes is exactly 2000*8 bits (i.e. exactly double, which makes sense).
To address or index those bytes you need 10 bits in the first example (2^10 = 1024 ≥ 1000) and 11 bits in the second (2^11 = 2048 ≥ 2000). It wouldn't make much sense if the size of a byte changed whenever a data structure contained more bytes.
As for the 1000 (kilobyte) vs 1024 (kibibyte):
1 kB (kilobyte) = 10^3 = 1000
1 KiB (kibibyte) = 2^10 = 1024
A kilobyte used to be generally accepted as being 1024 bytes. However, at some point hard disk manufacturers started to count 1 kB as 1000 bytes (kilo being 1000, which is actually correct):
1 GB = 1000^3 = 1000000000
1 GiB = 1024^3 = 1073741824
Windows still used 1 kB = 1024 bytes to show hard disk sizes, i.e. it showed 954 MB for 1 GB of hard disk space. I remember a lot of customers complaining about that when checking, for example, the size of their 250 GB drive, which only showed up as 233 GB in Windows.
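A quick way to reproduce those figures (a sketch; the 250 GB drive is the example from above):
GB, GiB = 10**9, 2**30
MiB = 2**20

print(1 * GB / MiB)        # ~953.7 -> the "954 MB" Windows shows for 1 GB
print(250 * GB / GiB)      # ~232.8 -> the "233 GB" Windows shows for a 250 GB drive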

Direct Table & Lookup Table

How do I measure the memory size of an image in the direct-coding 24-bit RGB color model and in a 24-bit, 256-entry look-up table representation? For example: given an image with a resolution of 800*600, how much space is required to save the image using direct coding and using a look-up table?
For a regular 24-bit RGB representation you most probably just have to multiply the number of pixels by the number of bytes per pixel. 24 bits = 3 bytes, so the size is 800 * 600 * 3 bytes = 1440000 bytes ≈ 1.37 MiB. In some cases the rows of an image may be aligned on some boundary in memory, usually 4 or 8 or 32 bytes. But since 800 is divisible by 32, this does not change anything: still 1.37 MiB.
Now, for a look-up table, you have 1 byte per pixel, since you only have to address one entry in the table. This yields 800 * 600 * 1 = 480000 bytes ≈ 0.46 MiB. Plus the table itself: 256 colors, 24 bits (3 bytes) each, i.e. 256 * 3 = 768 bytes, which is negligible compared to the size of the image.
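The same arithmetic as a short Python check (800x600 is the example resolution from the question):
width, height = 800, 600

direct = width * height * 3              # 24-bit RGB: 3 bytes per pixel
lut    = width * height * 1 + 256 * 3    # 1 byte per pixel + 256-entry table, 3 bytes each

print(direct, direct / 2**20)            # 1440000 bytes, ~1.37 MiB
print(lut, lut / 2**20)                  # 480768 bytes, ~0.46 MiB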
