Calculating pixel length of an image

May I know what the ways are to calculate the length of 1 pixel in centimeters? The images that I have are 640x480. I would like to compare 2 pixels at different places in the image and find the difference in distance between them. Thus I would need to find out the length of a pixel in centimeters.
Thank you.

A pixel is a relative unit of measure; it does not have an absolute size.
Edit. With regard to your edit: again, you can only calculate the distance between two pixels in an image in pixels, not in centimeters. As a simple example, think of video projectors: if you project, say, a 3×3px image onto a wall, the distance between the leftmost and the rightmost pixels could be anything from a few millimeters to several meters. If you moved the projector closer to the wall or farther away from it, the pixel size would change, and whatever distance you had calculated earlier would become wrong.
Same goes for computer monitors and other devices (as Johannes Rössel has explained in his answer). There, the pixel size in centimeters depends on factors such as the physical resolution of the screen, the resolution of the graphical interface, and the zooming factor at which the image is displayed.
A pixel does not have a fixed physical size, by definition. It is simply the smallest addressable unit of picture, however large or small.

This is fully dependent on the screen resolution and screen size:
pixel width = width of monitor viewable area / number of horizontal pixels
pixel height = height of monitor viewable area / number of vertical pixels
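As a minimal C sketch of these two formulas, assuming a hypothetical 1680x1050 monitor with a 47.0 cm x 29.6 cm viewable area:

#include <stdio.h>

int main(void) {
    /* hypothetical monitor: 47.0 cm x 29.6 cm viewable area, 1680x1050 */
    double viewable_width_cm  = 47.0;
    double viewable_height_cm = 29.6;
    int horizontal_px = 1680;
    int vertical_px   = 1050;

    printf("pixel width  = %f cm\n", viewable_width_cm  / horizontal_px);  /* ~0.028 cm */
    printf("pixel height = %f cm\n", viewable_height_cm / vertical_px);    /* ~0.028 cm */
    return 0;
}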

Actually, the answer depends on where exactly your real-world units come from.
It comes down to dpi (dots per inch), which is the number of image pixels along a length of 2.54 cm (one inch). That's the resolution of an image or of a target device (printer, screen, &c.).
Image files usually have a resolution embedded within them which specifies their real-world size. It doesn't alter their pixel dimensions; it just says how large they are if printed or how large a “100 %” view on a display would be.
Then there is the resolution of your screen, as others have mentioned, as well as the specified resolution your graphical interface uses (usually 96 dpi, sometimes 120)—and then it's all a matter of whether programs actually honor that setting ...
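As a minimal C sketch of the dpi arithmetic above (the 300 dpi embedded resolution and the 640 px width are assumed example values):

#include <stdio.h>

int main(void) {
    double dpi = 300.0;          /* assumed embedded image resolution */
    double width_px = 640.0;     /* image width in pixels */

    printf("printed width: %f cm\n", width_px / dpi * 2.54);  /* ~5.42 cm */
    printf("pixels per cm: %f\n", dpi / 2.54);                /* ~118.1 */
    return 0;
}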

The OS will assume some dpi (usually 96 dpi on Windows); however, the screen's real dpi will depend on the physical size of the display and the resolution.
E.g. a 15" monitor should have a 12" wide viewable area, so depending on the horizontal resolution you will get a different horizontal dpi; assuming a 1152-pixel screen width, you genuinely get 96 dpi (1152 px / 12 in = 96 dpi).
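The same arithmetic as a C sketch, assuming the 12" viewable width from above and a few common horizontal resolutions:

#include <stdio.h>

int main(void) {
    double width_in = 12.0;                       /* viewable width of a 15" monitor */
    int resolutions[] = { 800, 1024, 1152, 1280 };

    for (int i = 0; i < 4; i++)
        /* 800 -> 67 dpi, 1024 -> 85 dpi, 1152 -> 96 dpi, 1280 -> 107 dpi */
        printf("%d px wide: %.0f dpi\n", resolutions[i], resolutions[i] / width_in);
    return 0;
}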

Related

Is it necessary to create different screen size and density XMLs for an app? What's the best approach for this?

Just a straightforward question. I'm trying to make the best possible choice here and there is too much information for a "semi-beginner" like me.
Well, at this point, I'm trying with screen size values for my layout (activity_main.xml (normal, large, small)) and with different densities (xhdpi, xxhdpi, mdpi) and, if I can say so myself, it is a mess. Do I have to create every possible option to support all screen sizes and densities? Or am I doing something really wrong here? What is the best approach for this?
My layouts are now activity_main(normal_land_xxhdpi) and I have serious doubts about it.
I'm using the latest version of Android Studio, of course. My app is a single activity with buttons, TextViews and other views. It does not have any fragments or intents whatsoever, and for that reason I think this should be an easy task, but it isn't for me.
Hope you guys can help. I don't think I need to put any code here, but if needed, I can add it.
If you want to make a responsive UI for every device you need to learn about some things first:
-Difference between PX, DP:
https://developer.android.com/training/multiscreen/screendensities
Here you can understand that dp is a density-independent unit which Android uses to calculate how many pixels something, let's say a line, should span, so that proportions are kept between screens of different sizes and densities.
-Resolution, Density and Ratio:
The resolution is how many pixels a screen has across its height and width. These pixels can be smaller or bigger; for instance, if a screen A of 10 x 10 px has pixels half the size of those of a screen B that is also 10 x 10 px, then A is half the physical size of B even though both are 10 x 10 px.
That is why density exists: it is how many pixels your screen has for every inch, so you can measure the quality of a screen, where more pixels per inch (ppi) is better.
Ratio tells you how the pixel counts of height and width relate. For example, the ratio of a 1000 x 2000 px screen is 1:2, and a full HD screen of 1920 x 1080 is 16:9 (16 pixels of width for every 9 pixels of height). A 1:1 ratio is a square screen.
-Standard device resolutions
You can find the most common measurements on...
https://material.io/resources/devices/
When making a UI, you use the dp measurements. You will realize that even when the resolutions of two screens are different, the dp dimensions can be the same because the screens have different densities.
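As a minimal C sketch of the conversion Android performs (the 160 dpi mdpi baseline is the documented reference density; the 48 dp value is just an example):

#include <stdio.h>

int main(void) {
    double dp = 48.0;                        /* e.g. a recommended touch-target size */
    int densities[] = { 160, 320, 480 };     /* mdpi, xhdpi, xxhdpi buckets */

    for (int i = 0; i < 3; i++)
        /* px = dp * (density / 160): 48 px, 96 px, 144 px */
        printf("%d dpi: %.0f px\n", densities[i], dp * densities[i] / 160.0);
    return 0;
}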
Now, the right way is to go with ConstraintLayout, using dp measures to place your views on screen; with correct constraints, the content will adapt to other screen sizes.
Anyway, you will need to make additional XML for some cases:
-Different orientation
-Different ratio
-Different DP resolution (not px)
For every activity, you need to provide a portrait and a landscape design. If another device has a different ratio, you may need to adjust the height or width because the proportions of the screens aren't the same. Finally, even if the ratio is the same, the dp resolution could be different: maybe you designed an activity for 640x360dp and another device has 853x480dp, which means you will have more vertical space.
You can read more here:
https://developer.android.com/training/multiscreen/screensizes
And learn how to use constraintLayout correctly:
https://developer.android.com/training/constraint-layout?hl=es-419
Note:
It may seem like a lot of work for every activity, but you make the first design once and then just copy it to other XML files with the appropriate qualifiers, changing the dp values to adjust the views as you want (instead of starting from scratch), which is much faster.
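For instance, a hypothetical set of qualified layouts for one activity could look like this (the file names are illustrative; layout-land and layout-sw600dp are standard qualifiers):

res/layout/activity_main.xml           (default, portrait)
res/layout-land/activity_main.xml      (landscape variant)
res/layout-sw600dp/activity_main.xml   (screens at least 600 dp wide, e.g. tablets)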

Icon resolutions: Pixels vs DPI

When I try to do some research on making icons for Windows and what size/resolution images I should leave in my .ico files before saving, there's too much weird information.
Some say put 16x16, 24x24, 32x32, 48x48 ... and so on in 96 DPI.
This is what irks me, and I feel it doesn't make any sense.
Isn't 1 pixel = 1 pixel?
Why do they insist on mixing DPI into this?
What is always true is that 1 pixel = 1 pixel. What does change is how big that pixel is on various displays that have different screen densities. That is what DPI describes: the number of dots (pixels) per inch. But using DPI in the context of image size only makes sense in combination with inches (or centimeters). For instance, from "create an image 10x10 inches at 300 DPI" you can calculate that the image has to be 3000x3000 pixels in size.
As far as Windows is concerned, what does count is the font scaling setting, which can be set from 100% to 200%.
So when you are creating your icons, make sure that you have at least the 1x and 2x dimensions. If the icon has to be 16x16 px at normal resolution, that means you would also create a 32x32 px icon. Other commonly used scalings are 125% and 150%, so it would be a good idea to provide icons for those sizes too.
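A tiny C sketch of that scaling arithmetic, assuming a 16x16 base icon:

#include <stdio.h>

int main(void) {
    int base = 16;                           /* icon size at 100% (96 dpi) */
    int percents[] = { 100, 125, 150, 200 };

    for (int i = 0; i < 4; i++) {
        int px = base * percents[i] / 100;   /* 16, 20, 24, 32 */
        printf("%d%%: %dx%d px\n", percents[i], px, px);
    }
    return 0;
}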
You can freely ignore statements like "Make the icon x pixels wide and x pixels high in x DPI" because those people have no clue what they are talking about.

Point to pixel conversion

On Mac OS X, I need to convert a point measurement to pixel measurement.
The formula which I know is
pixel = point * resolution (in terms of dpi) / 72
I have a few measurements which I want to convert to pixels, although the reverse conversion would also be needed.
How to do this in Cocoa or Quartz?
Does it depend on the axis? That is, would 5 pixels along the Y axis be the same as 5 pixels along the X axis in terms of points? Or is it safe to assume that the resolution is the same for both the X and Y axes?
Please note that I do not want to make any assumption about resolution.
You probably don't want to convert anything to pixels. OS X now works in points; so for example when you draw a rectangle you are giving its dimensions in points, not pixels.
An OS X Quartz point is related to, but not the same as, a (computer) printing point. The two used to be the same, 72 points = 1"; however, WYSIWYG has become "some scale of" WYSIWYG, and 72 points (note: points, not pixels) on screen is no longer a physical inch, as screen pixel densities have increased. 72 points is still an "abstract" inch, though.
In OS X you draw in points, and the OS takes care of mapping those points to the physical pixels on the screen. Roughly, screens up to a certain density are treated as 72 ppi (pixels/inch), i.e. 1 pixel/point, and higher-density screens as 144 ppi, i.e. 2 pixels/point; for example, these are the ppi assigned to standard and Retina screenshots.
If you really, really need to know what a point translates to in pixels you can find out, but this changes depending on what screen a window is on.
For details of all of this you can start with Points Don’t Correspond to Pixels and then read the rest of the High Resolution Guidelines for OS X that reference is part of. How to find point to backing store mapping, if you really, really need to know, is covered.
HTH
There is an opportunity for confusion when your user specifies a length in 'points': do they mean typography points of size 1/72", or lengths in Mac UI points, which vary with the display resolution?
In Mac OS, "points" are pixels, unless you are in high-resolution mode, in which case a point is 2x2 pixels. The "Points Don't Correspond to Pixels" page says "on a high-resolution display, there are four onscreen pixels for each point," indicating a 4:1 correspondence in hi-res and a 1:1 correspondence in standard res. It also notes:
Note: The term points has its origin in the print industry, which defines 72 points as equal to 1 inch in physical space. When used in reference to high resolution in OS X, points in user space do not have any relation to measurements in the physical world.
To convert a typographer's point size to something physically the same size on a Mac screen in Mac points, your formula is exactly correct. You might rename 'pixel' to Mac points:
MacPoints = (TypesettersPoints/72)*ResolutionInDotsPerInch
Best to stick with points.
First you would need to know where the point is coming from. Views, Windows and Screens all have their own coordinate systems.
You would need to do several things to translate this to the pixel grid of a given screen.
First you need to convert your point to the screen coordinates.
Then to coordinates of pixel grid of the screen.
You will also need to find out the current display properties to know whether it's a Retina display or not (it makes a big difference).
All of the methods are in NSView, NSWindow, NSScreen.
All of the functions are part of Quartz Display Services. You will need the ones for CGDisplay; you might need the ones for CGWindow.
You will also need to have your app observe notifications for display configuration changes and figure out the hard part, when a point is in a coordinate space that overlaps two screens.
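As a rough starting point, here is a C sketch against the Quartz Display Services calls mentioned above; it only handles the main display and ignores the multi-screen complications:

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

int main(void) {
    CGDirectDisplayID display = CGMainDisplayID();
    CGDisplayModeRef mode = CGDisplayCopyDisplayMode(display);

    /* width in points vs. width in pixels gives the backing scale
       factor: 1.0 on standard displays, 2.0 on Retina displays */
    size_t width_pt = CGDisplayModeGetWidth(mode);
    size_t width_px = CGDisplayModeGetPixelWidth(mode);
    double scale = (double)width_px / (double)width_pt;

    double points = 5.0;
    printf("scale %.1f: %.1f points = %.1f pixels\n", scale, points, points * scale);

    CGDisplayModeRelease(mode);
    return 0;
}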
I leave it to you to do the rest and decide if you really need this.

What pixel width should I make a background cover image?

After reading posts on this site and others, I am uncertain what pixel width I should make a background cover image. For Retina devices it is recommended to create a background image 2x the size of the one for a non-Retina device.
I want to create a background image that will cover the entire screen, but this would make the Retina-device image 2560 x 1760 and the non-Retina one 1920 x 1200 if accommodating large monitors.
Is this too big?
I can't keep the JPEG file size down to 276KB, and that is with the most compression I can apply without destroying the image quality.
Well, your question is very interesting. To address your comment about what pixel width you should design your background image for: the size of a laptop or desktop screen can vary, so you'll never be able to make a single "universal" image for all backgrounds. Generally, the diagonal display size in inches goes hand in hand with the number of pixels. My screen is quite large, roughly the biggest size commonly bought by the public: a 1600 x 900 pixel, 17.3" diagonal display. I wouldn't recommend designing images for any display larger than that, because those displays are specialist hardware, more for people who do things like graphic design themselves. Visit http://nodejs.org/logos/ and scroll down to the computer display section to see the list of pixel sizes that nodejs designs its logos for. These are very mainstream and popular display sizes, and I would recommend you use them too. Hope this answer helped.
Edit: nodejs builds for sizes up to 2560 x 1440. If file size doesn't matter, you should easily be able to build for sizes this large. Hope this helped.

What scaling factor to use for mapping the Font size on a high resolution monitor?

We have a requirement where our application needs to support high resolution monitors. Currently, when the application comes up in High res monitor, the text that it displays is too small. We use Arial 12 point font by default.
Now to make the text visible, I need to change the font size proportionally. I am finding it tough to come up with a formula which would give me the target font size given the monitor resolution.
Here is my understanding of the problem.
1) On Windows, by default 96 pixels correspond to 1 logical inch. This means that when the monitor resolution increases, the screen size in logical inches also increases.
2) 1 point of font size is 1/72 of a logical inch. Combined with the fact that there are 96 pixels per logical inch, it turns out that there are 96/72 pixels per point of font size.
This means that for a 12 point font, the number of pixels it will occupy is 12 * 96 / 72 = 16 pixels.
Now I need to know the scaling factor by which I need to increase this number of pixels so that the resultant font is properly visible. If I know the scaled pixel count, I can get the font size simply by dividing it by (96/72).
What is the suggested scaling factor which would ensure properly scaled Fonts on all monitor resolutions?
Also, please correct if my understanding is wrong.
There's an example on the MSDN page for the LOGFONT structure. Your understanding is correct: you need to scale the point size by the vertical logical dpi (LOGPIXELSY) / 72.
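/* a negative lfHeight asks for character height rather than cell height */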
lfHeight = -PointSize * GetDeviceCaps(hDC, LOGPIXELSY) / 72;
If you set the resolution in Windows to match that of the physical monitor, no adjustment should be needed. Any well-written program will do the multiplication and division necessary to scale the font properly, and in the newest versions of Windows the OS will lie about the resolution and scale the fonts automatically.
If you wish to handle this outside of the Windows settings, simply multiply your font size by your actual DPI and divide by 96.
Edit: Beginning with Windows Vista, Windows will not report your actual configured DPI unless you write a DPI-aware program. Microsoft has some guidance on the subject. You might find that the default scaling that Microsoft provides for non-DPI-aware programs is good enough for your purposes.
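As a sketch of what that looks like in Win32 C (SetProcessDPIAware, GetDeviceCaps and MulDiv are real Win32 calls; the 12 pt size is an example):

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* opt in to DPI awareness (Vista+) so the system reports the
       real DPI instead of scaling behind the program's back */
    SetProcessDPIAware();

    HDC hdc = GetDC(NULL);
    int dpi = GetDeviceCaps(hdc, LOGPIXELSY);   /* e.g. 96, 120, 144 */
    int lfHeight = -MulDiv(12, dpi, 72);        /* 12 pt -> device pixels */

    printf("dpi %d: lfHeight = %d\n", dpi, lfHeight);
    ReleaseDC(NULL, hdc);
    return 0;
}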
