What does intrinsic value represent in dev tools when inspecting an image?

When inspecting an image in Chrome Dev Tools, I notice that hovering over the HTML displays one set of dimensions followed by another set in parentheses.
For example: 298 x 274 pixels (intrinsic: 860 x 731 pixels)
What is the meaning of the dimensions in parentheses?

860 x 731 pixels is the image's intrinsic (actual) size, and 298 x 274 pixels is its rendered size after CSS has been applied to it.

Related

Relation between PPI and Resolution

The Redmi Note 8 Pro has a screen height of 6.53 inches and 395 ppi. The total pixels along the height would be about 2580, but the maximum resolution is 2340 x 1080. What about the extra 240 pixels (2580 - 2340)?
I did research on this but couldn't find an answer.
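One likely explanation (an assumption here, since the spec sheet isn't quoted): 6.53 inches is the screen *diagonal*, not the height, and ppi is conventionally measured along the diagonal. A quick sanity check:

```python
import math

# Assumption: 6.53" is the diagonal, and ppi is measured along it
# (spec sheets usually quote the diagonal, not the height).
diagonal_px = math.hypot(2340, 1080)  # pixels along the diagonal
print(round(diagonal_px / 6.53))      # 395
```

That reproduces the quoted 395 ppi exactly, so there are no "extra" pixels; 2580 was just the diagonal pixel count mistaken for the height.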

How does memory usage in browsers work for images - can I do one large sprite?

I currently display 115 (!) different sponsor icons at the bottom of many web pages on my website. They're lazy-loaded, but even so, that's quite a lot.
At present, these icons are loaded separately, and are sized 75x50 (or x2 or x3, depending on the screen of the device).
I'm toying with the idea of making them all into one sprite, rather than 115 separate files. That would mean, instead of lots of tiny little files, I'd have one large PNG or WEBP file instead. The way I'm considering doing it would mean the smallest file would be 8,625 pixels across; and the x3 version would be 25,875 pixels across, which seems like a really very large image (albeit only 225 px high).
Will an image of this pixel size cause a browser to choke?
Is a sprite the right way to achieve a faster-loading page here, or is there something else I should be considering?
115 icons at 75 pixels wide would indeed produce a very wide 8,625-pixel image that is only 50 px high...
But you don't have to use a low-height (50 px), very wide (8,625 px) strip.
You can instead lay the icons out in a more sensibly proportioned grid, say 12 rows of 10 icons each (120 cells, enough for 115):
10 x 75 = 750 px, plus 9 gaps of 5 px between columns: about 795 px wide
12 x 50 = 600 px, plus 11 gaps of 5 px between rows: about 655 px tall
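The grid arithmetic above can be sketched as follows (the 10-column layout and the 5 px gap are this answer's assumptions, not requirements):

```python
import math

def sprite_size(n_icons, icon_w, icon_h, cols, gap=5):
    rows = math.ceil(n_icons / cols)           # 115 icons / 10 cols -> 12 rows
    width = cols * icon_w + (cols - 1) * gap   # gaps only *between* columns
    height = rows * icon_h + (rows - 1) * gap  # gaps only *between* rows
    return width, height

print(sprite_size(115, 75, 50, cols=10))  # (795, 655)
```

A roughly square sprite like this stays well inside every browser's texture limits, unlike a 25,875-pixel-wide strip.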

Automatically decide the final quality for adaptive image compression

I have seen a few websites like compressjpeg, kraken, tinyjpg and several others that decide the optimum quality when compressing. For images saved at quality 99 they will sometimes compress to quality 94 and sometimes down to 70.
I tried to study their pattern and found that all of them use ImageMagick; most probably they have some tables that read the RGB pattern of an image and decide what the optimum compression level should be.
I want the quality to be chosen dynamically per image, instead of the fixed ImageMagick command I am using currently:
convert input.jpg -quality 70 output.jpg
Here are a few images with their mean channel values, original size and quality, and the size and quality the compressor chose (sizes in KB, quality on JPEG's 1-100 scale):
Name   R        G        B        Overall  Size (KB)  Width  Height  Compressed size (KB)  Chosen quality  Original quality
7.jpg  95.0354  120.168  158.313  124.506  266        1920   1200    159.8                 70              91
2.jpg  155.466  126.892  121.507  134.622  59         720    378     55.3                  92              94
3.jpg  230.791  230.596  230.532  230.64   28.5       720    378     10.3                  69              94
1.jpg  74.8786  99.9428  101.71   92.1772  33.5       650    400     32.8                  64              69
4.jpg  235.647  52.3033  50.1626  112.704  384        400    250     25.3                  95              99
9.jpg  194.461  180.839  183.859  186.386  12.71      300    188     12.9                  75              75
6.jpg  170.337  169.707  153.873  164.639  6.69       184    274     6.9                   74              74
5.jpg  154.196  130.809  111.683  132.229  8.5        259    194     8.5                   74              74
8.jpg  162.161  184.608  194.416  180.395  6.04       126    83      5.9                   89              89
Any guidance will be useful.
I was going to put this as a comment, but I decided to put it as an answer so folks can add/remove arguments and conclusions.
I don't believe there is an optimum quality setting. I think it depends on the purpose of the image and the content of the image - maybe other things too.
If the image has lots of smooth gradients, you will need a higher quality setting than if the image has loads of (high frequency) details many of which can be lost without perceptible loss of quality.
If the purpose of the image is as a web preview, it can have a far lower quality setting than if the purpose of the image is to pass a piece of fine art landscape/portrait photography to a printer or a customer who has paid £1,000 for it (I'm looking at you Venture UK).
One thing you can do is set the maximum file size you wish to achieve, but that disregards all the above:
convert -size 2048x2048 xc:gray +noise random -define jpeg:extent=100KB out.jpg
I guess I am saying "it depends".
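If "it depends" still needs automating, one practical compromise is a binary search over the quality setting against a per-image budget: a target file size here, though a perceptual metric such as SSIM can be plugged in the same way. A minimal sketch; `pick_quality` and the fake encoder are illustrative names, and `encode` stands in for a real JPEG encoder:

```python
def pick_quality(encode, max_bytes, lo=40, hi=95):
    """Binary-search the highest quality whose encoded size fits max_bytes.

    encode(q) -> bytes is a stand-in for a real JPEG encoder; the search
    assumes output size grows with quality, which holds for JPEG in practice.
    """
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if len(encode(mid)) <= max_bytes:
            best = mid    # fits the budget: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1  # too big: try a lower quality
    return best

# Fake encoder for demonstration: output size grows linearly with quality.
print(pick_quality(lambda q: b"x" * (q * 1000), 70_000))  # 70
```

At roughly seven encodes per image this is cheap enough to run in a batch pipeline, and it naturally gives flat images like 3.jpg a lower quality than detailed ones.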
You can try jpeg-archive. This utility provides dynamic compression using various metrics such as SSIM, MS-SSIM and SmallFry. The command to try is:
jpeg-recompress --accurate -m smallfry --quality high image.jpg compressed.jpg
Note that this method keeps chroma subsampling on by default, and it should be left on for good size reduction.
IMO, Guetzli is currently not suitable for production, especially for a large number of images.
The answer is Google's Guetzli.
See explanations here.

How to Calculate Zebra Font 0 text width?

Is there a way to calculate the total width of given text in Zebra Font 0? Consider the following ZPL command:
^XA^FO100,150^A030,30^FDSample Text^FS^XZ
Here both the character height and width are 30 dots. I want to calculate the actual width of this text in mm. Please note that the printer DPI is 300.
Font 0 is a variable-width font (not monospaced like some of the others), so the width of the text will depend on the text itself.
One option would be to switch to a built-in monospaced font like font C, where each character is always 10 dots wide and the intercharacter gap is 2 dots wide (see the Zebra Programming Guide, page 1212 table 32 and page 1216 table 35). If your printer is 300 DPI, then it's 12 dpmm (dots per millimeter), and you can just do the math from there based on how many characters you have (and how many gaps between them):
"Sample Text" length = 11 characters
Intercharacter gaps = 11 - 1 = 10 intercharacter gaps
(11 characters * 10 character width) + (10 intercharacter gaps * 2 gap width) = 130 dots
130 dots / 12 dpmm = 10.8 mm
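That arithmetic generalizes to any string; the 10-dot character width and 2-dot gap are font C's values from the tables cited above, and the function name is just illustrative:

```python
def font_c_width_mm(text, char_dots=10, gap_dots=2, dpmm=12):
    """Printed width of monospaced Zebra font C text on a 12 dpmm (300 DPI) printer."""
    n = len(text)
    dots = n * char_dots + (n - 1) * gap_dots  # gaps only *between* characters
    return dots / dpmm

print(round(font_c_width_mm("Sample Text"), 1))  # 10.8
```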
However, if you really want to use font 0, and if you know what text you want to measure, then you can try drawing a box around it using ^GB to get a rough approximation of the width.
Here's an example using your sample text, which seems to indicate that it's about 112 dots wide. At your density (12 dots per millimeter), that's a little over 9 millimeters.

Calculating the size of monochrome binary image

I created a monochrome bitmap image and stored it in secondary memory. The dimensions of the image are 484 x 114. In a monochrome bitmap each pixel is represented by 1 bit, so the size of the image should be about 6.7 KB. But when I check the size by right-clicking the file in the OS, it is 7.18 KB. I need an explanation of why the size is different and not exactly what I calculated.
Because of the overhead of headers, for example: your bitmap doesn't only store the bits representing your image but also (meta)information such as width, height, bits per plane etc. The actual bitmap data is just a bunch of bytes; without this metainformation your image might as well be 114 x 484 instead of 484 x 114. Take a look at, for example, the BMP file format.
Also, file managers tend to round file sizes to particular allocation block sizes (like 4 KiB). Unless you state the exact file size in bytes, the OS, and the file type, all we can do is guess.
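Assuming the file is a standard Windows BMP (14-byte file header, 40-byte info header, two 4-byte palette entries for black and white, and each pixel row padded to a multiple of 4 bytes), the overhead can be worked out exactly:

```python
def mono_bmp_size(width, height):
    header = 14 + 40 + 2 * 4           # file header + info header + 2-color palette
    stride = ((width + 31) // 32) * 4  # 1-bit rows, padded to a multiple of 4 bytes
    return header + stride * height

size = mono_bmp_size(484, 114)
print(size, round(size / 1024, 2))  # 7358 bytes, i.e. ~7.19 KiB
```

The row padding alone (64 bytes per row instead of the raw 60.5) accounts for most of the gap between the naive 6.7 KB estimate and the ~7.18 KB the OS reports.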
