I just want to know whether the pixel is a unit that doesn't change, and whether we can convert from pixels to, let's say, centimeters?
This is similar to this question, which asks about points instead of centimeters. There are 72 points per inch and 2.54 centimeters per inch, so just substitute 2.54 for 72 in the answer to that question. I'll quote and correct my answer here:
There are 2.54 centimeters per inch; if it is sufficient to assume 96 pixels per inch, the formula is rather simple:
centimeters = pixels * 2.54 / 96
On Microsoft Windows there is a way to get the configured pixels per inch of your display: GetDeviceCaps. Microsoft has a guide called "Developing DPI-Aware Applications"; look for the section "Creating DPI-Aware Fonts".
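For example, a minimal C# sketch of that conversion (assuming .NET's System.Drawing is available; Graphics.DpiX only reports the DPI Windows is configured for, which may not match the physical screen):
using System.Drawing;

static class PixelConverter
{
    // Convert a horizontal pixel count to centimeters using the DPI that
    // Windows reports (96 by default); this is only as accurate as that setting.
    public static double PixelsToCentimeters(double pixels)
    {
        using (var bmp = new Bitmap(1, 1))
        using (var g = Graphics.FromImage(bmp))
        {
            return pixels * 2.54 / g.DpiX;
        }
    }
}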
Converting pixels to centimeters depends on the DPI (dots per inch) of the medium displaying the image, e.g. a monitor, laser printer, etc.
http://wiki.answers.com/Q/How_do_you_convert_pixels_into_centimeters
I'm going to go out on a limb and just guess that you want to be able to display things to the user on their monitor, scaled to be very close to their real-life size.
If that is the case, I would recommend either displaying your items next to real-life items (credit cards, dollar bills, pop cans, etc.) or, even better, allowing the user to hold something up to the screen like a credit card, dollar bill, or ruler. You could then have them adjust a slider or something similar until it matches the width or height of that object.
By holding a credit card, something with a relatively well-known height and width, up to the screen, you can easily determine the ratio of pixels to inches and use that to your heart's content.
Wikipedia says:
Most credit cards are issued by local banks or credit unions, and are the shape and size specified by the ISO/IEC 7810 standard as ID-1 (85.60 × 53.98 mm)
Using MS Paint, a credit card of mine is exactly 212 pixels tall; that's 212 pixels / 53.98 mm ≈ 3.93 pixels per mm. Multiply by 10 and that's about 39.3 pixels per cm.
You could easily do that programmatically via JavaScript, C#, Flash, whatever you want.
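As a rough sketch of the calibration math in C# (the names here are hypothetical; the measured pixel count would come from whatever slider or on-screen overlay you let the user adjust):
static class CardCalibration
{
    // Given how many pixels the user says span the long edge of an ID-1 card
    // (85.60 mm per ISO/IEC 7810), derive pixels per centimeter for this display.
    public static double PixelsPerCm(double measuredPixels)
    {
        const double cardWidthMm = 85.60;
        return measuredPixels / (cardWidthMm / 10.0);
    }
}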
You can convert from pixels to centimeters, but it's not a consistent conversion. It will depend on the size and resolution of the display device in question. The definition of a pixel will not change, but the size of a pixel will vary on different display devices.
No; different media and monitors have different pixel densities.
For instance, a desktop monitor may have 75 pixels per inch, whereas a print may be output at 300.
Here is a list of displays by pixel density
In Adobe Illustrator CS3, the figure I get is 1 cm = 28.347 pixels (which is 72 ÷ 2.54, i.e. Illustrator assumes 72 pixels per inch). Note I am using an iMac 7 that has a resolution of 102 pixels per inch (40 ppcm), according to the list http://en.wikipedia.org/wiki/List_of_displays_by_pixel_density provided by rebo.
I created an Adobe Illustrator CS3 document using JavaScript to test the value of 1 cm = 28.347 pixels, and it matches perfectly.
I know this question is very old but I was trying to find an answer to it and decided to share my findings.
Regards
Pixel size depends on your screen resolution (DPI).
First, you should get the DPI (dots per inch) of your screen.
For example, say your screen DPI is 96:
1 cm = 37.795276 pixels at 96 DPI.
37.795276 / 96 = 0.3937008, which is the number of pixels per centimeter for each unit of DPI (it is simply one centimeter expressed in inches).
Now you can adapt it to your screen by getting the current DPI of the screen and multiplying it by 0.3937008; that gives you the number of pixels per centimeter at your screen's DPI (resolution).
Let's set up a scenario.
I want to write a method which takes centimeters and returns pixels based on the screen DPI:
// 1 cm = 0.3937008 inches, so pixels = 0.3937008 * dpi * centimeters
public float CentimeterToPixel(int valueCM, float dpi)
{
    return 0.3937008F * dpi * valueCM;
}
If you want to make it more accurate, you have to use DpiX and DpiY separately.
For example, in C# WinForms you can create a Graphics object (from System.Drawing / System.Drawing.Drawing2D), read its DpiX and DpiY values, and size your drawing area based on them for a more accurate calculation (in some cases the horizontal resolution differs from the vertical one).
See the code below.
using System;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

namespace MyApp
{
    static class MyAppClass
    {
        // A 1x1 bitmap used only to obtain a Graphics object that reports
        // the current screen's DPI.
        private static Bitmap bmp = new Bitmap(1, 1);
        private static Graphics graphic = Graphics.FromImage(bmp);

        public static float CentimeterToPixelWidth(int valueCM)
        {
            return 0.3937008F * graphic.DpiX * valueCM;
        }

        public static float CentimeterToPixelHeight(int valueCM)
        {
            return 0.3937008F * graphic.DpiY * valueCM;
        }
    }
}
Hope it helps you. Heydar.
The size of a pixel changes depending on the display device.
The following "found" code uses API calls to determine pixel density: Get screen DPI in .NET
("Found" as in I googled it but haven't tried it)
As far as I understand it, a PIXEL is a:
Picture Element
thus its physical size depends on two things:
(a) Resolution
(b) Physical screen size
Thus if you divide the screen size by the resolution, this should give you the centimeters per pixel.
I am using Perl's Image::Imlib2 package to generate thumbnails from larger images.
I've done such tasks before with several ImageMagick interfaces (PHP, Ruby, Python) and it was relatively easy. I have no prior experience with Imlib2 and it is a long time since I wrote something in Perl, so I am sorry if this seems naive!
This is what I've tried so far. It is simple, and assumes that scaling an image will keep the aspect ratio, and the generated thumbnail will be an exact miniature copy of the original image.
use strict;
use warnings;
use Image::Imlib2;
my $dir = 'imgs/*';
my @files = glob($dir);

foreach my $img (@files) {
    my $image = Image::Imlib2->load($img);
    my $cropped_image = $image->create_scaled_image(50, 50);
    $cropped_image->save($img);
}
Original image
Generated image
My first look at the image tells me that something is wrong. It may be my ignorance of cropping, resizing, and scaling, but the generated image displays wrongly on small screens.
I've read What's the difference between cropping and resizing?, and honestly didn't understand anything. Also this one Image scaling.
Could someone explain the differences between those three ideas, and if possible give examples (preferably with Perl) to achieve better results? Or at least describe what I should consider when I want to create thumbnails?
The code you use isn't preserving the aspect ratio. From Image::Imlib2::create_scaled_image:
If x or y are 0, then retain the aspect ratio given in the other.
So change the line
my $cropped_image = $image->create_scaled_image(50, 50);
to
my $scaled_image = $image->create_scaled_image(50, 0);
and the new image will be 50 pixels wide, with its height computed so as to keep the original aspect ratio.
Since this is not cropping I've changed the variable name as well.
As for the other questions, below is a basic discussion from the comments. Please search for tutorials on image processing. Also, the documentation of major libraries often has short and good explanations.
This is aggregated from comments deemed helpful. Also see Borodin's short and clear answer.
Imagine that you want to draw a picture (of some nice photograph) yourself in the following way. You draw a grid of, say, 120 (horizontally) by 60 (vertically) boxes. So 120 × 60 = 7200 boxes. These are your "pixels," and each one you may fill with only one color. If the photo you are re-drawing is "mostly" blue at some spot, you color that pixel blue. Etc. It is not easy to end up with a faithful redrawing -- the denser the pixels the better.
Now imagine that you want to draw another copy of this, just smaller. If you make it 20x20 that will be completely different, since it's a square. The best chance of getting it to "look the same" is to pick a 2-to-1 ratio (like 120x60), so say 40x20. That's "aspect ratio." But there is still a problem, since now you have to decide all over again what color to pick for each box, so as to represent what is "mostly" on the photo at that spot. There are algorithms for that ("sampling," see your second link). That's what is involved in "resizing." The "quality" of the obtained drawing clearly must be much worse.
So "resizing" isn't all that simple. But, for us users, we mostly need to roughly know what is involved, and to find out how to use these features in a library. So read documentation. Some uses are very simple, while sometimes you'll have to decide which "algorithm" to let it use, or some such. Again, what I do is read manuals carefully.
The basic version of "cropping" is simple -- you just cut off a part of the picture. Say, remove the first and last 20 columns and the bottom and top 10 rows, and from the initial 120x60 you get a picture of 80x40. This is normally done when outer parts of an image have just white areas (or, worse, black!). So you want to "cut out" the picture itself from the whole image. Many graphics tools can do that on their own, by analyzing the image and figuring out those areas. Or, we select and hit a button.
I'm still not certain that you understand the difference between these terms
Your original image is 752 × 500 pixels
Resizing is a vague term that just means making a picture a different size somehow
Scaling is to change the size of an image proportionally. Scaling your picture down by a factor of ten would result in an image 75 × 50 (it should be 75.2 but we can't have 0.2 of a pixel). Scaling it up would make it bigger
You have scaled your picture to 50 × 50 pixels, which is a vertical scale of 10 (500 ÷ 50) but a horizontal scale of 15 (752 ÷ 50), so it appears squashed horizontally (or stretched vertically)
Cropping is to reduce an image by removing parts of it. To crop your image to 50 × 50 you would choose a 50 × 50 rectangle out of the whole picture and remove the rest. It would be a piece about the size of your monkey's nose, but you can pick any region you wish
zdim has shown how create_scaled_image can preserve the aspect ratio; here you would call
$image->create_scaled_image(0, 50)
so that the height, or y-dimension, is reduced to 50, while the width, or x-dimension, is scaled by the same factor. That will result in a thumbnail of 75 × 50 as above
I hope that helps
As I said in my comment, there is an Image::Magick Perl module if you would prefer to be back on familiar ground.
Resizing and scaling are the same thing; you just change the size of the image. You can make it smaller or bigger.
Depending on the interface, you have to give either the new dimensions or a scaling factor for the operation. A factor less than or greater than 1.0 would make the image smaller or bigger. Smaller images are created by subsampling and bigger images by interpolation.
Cropping is very simple. You select a rectangular region of an image and that's your new image. It's like using scissors.
In your code example the image is named cropped_image although it is created through scaling, or resizing.
The output image is an image of size 50 x 50 pixels. That's what you did here:
my $cropped_image = $image->create_scaled_image(50, 50);
So no matter how your image looks before, you stuff it into 50 x 50 pixels. In this case you are not only reducing the resolution but also changing the aspect ratio.
The image is not being displayed improperly; it's displayed perfectly fine.
When creating vector graphics for PDFs, I use one of the "create" functions for PDF rendering, for instance cairo_pdf_surface_create_for_stream. The signature of that function is:
cairo_surface_t * cairo_pdf_surface_create_for_stream (cairo_write_func_t write_func,
void *closure,
double width_in_points,
double height_in_points);
Now, I can set the size of the surface in points, but the size of one point is seemingly hardcoded. In the description it says:
width_in_points: width of the surface, in points (1 point == 1/72.0 inch)
height_in_points: height of the surface, in points (1 point == 1/72.0 inch)
As you can see, 1pt = 1/72" (72 dpi). But how do I change that setting?
I could factor something into the size when using a different resolution and compensate that way, but this seems to me like the worst practice ever.
A point is a standard typographical unit of measure. Whether or not you're talking about Cairo, a point is simply 1/72". It's not some setting you change, just like the fact that you don't change the number of inches in a foot.
The whole reason for using a physical measurement (points) instead of a screen-dependent one (pixels) is resolution-independence. This is a Good Thing.
What are you hoping to accomplish by changing the DPI?
If by "change the dpi" you want to draw at a different scale than 1/72" you can use cairo_scale(). If you are referring to the dpi of fallback images (regions that are rasterized becasue they can not be drawn natively by pdf) use cairo_surface_set_fallback_resolution().
May I know the ways to calculate the length of 1 pixel in centimeters? The images that I have are 640x480. I would like to compare 2 pixels at different places in the image and find the difference in distance. Thus I would need to find out the length of a pixel in centimeters.
Thank you.
A pixel is a relative unit of measure, it does not have an absolute size.
Edit. With regard to your edit: again, you can only calculate the distance between two pixels in an image in pixels, not in centimeters. As a simple example, think video projectors: if you project, say, a 3×3px image onto a wall, the distance between the leftmost and the rightmost pixels could be anything from a few millimeters to several meters. If you moved the projector closer to the wall or farther away from it, the pixel size would change, and whatever distance you had calculated earlier would become wrong.
Same goes for computer monitors and other devices (as Johannes Rössel has explained in his answer). There, the pixel size in centimeters depends on factors such as the physical resolution of the screen, the resolution of the graphical interface, and the zooming factor at which the image is displayed.
A pixel does not have a fixed physical size, by definition. It is simply the smallest addressable unit of picture, however large or small.
This is fully dependent on the screen resolution and screen size:
pixel width = width of monitor viewable area / number of horizontal pixels
pixel height = height of monitor viewable area / number of vertical pixels
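A minimal C# sketch of those two formulas (the viewable width and height in centimeters are values you would have to supply yourself; they cannot be queried reliably from software):
static class PixelSize
{
    // Physical size of one pixel, given the monitor's viewable area in
    // centimeters and its native resolution in pixels.
    public static (double widthCm, double heightCm) Compute(
        double viewableWidthCm, double viewableHeightCm,
        int horizontalPixels, int verticalPixels)
    {
        return (viewableWidthCm / horizontalPixels,
                viewableHeightCm / verticalPixels);
    }
}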
Actually, the answer depends on where exactly your real-world units are.
It comes down to dpi (dots per inch) which is the number of image pixels along a length of 2.54 cm. That's the resolution of an image or a target device (printer, screen, &c.).
Image files usually have a resolution embedded within them which specifies their real-world size. It doesn't alter their pixel dimensions, it just says how large they are if printed or how large a “100 %” view on a display would be.
Then there is the resolution of your screen, as others have mentioned, as well as the specified resolution your graphical interface uses (usually 96 dpi, sometimes 120)—and then it's all a matter of whether programs actually honor that setting ...
The OS will assume some DPI (usually 96 on Windows); however, the screen's real DPI will depend on the physical size of the display and the resolution.
E.g. a 15" monitor should have a width of about 12", so depending on the horizontal resolution you will get a different horizontal DPI; assuming a screen width of 1152 pixels you will genuinely get 96 DPI (1152 ÷ 12 = 96).
We have a requirement where our application needs to support high resolution monitors. Currently, when the application comes up in High res monitor, the text that it displays is too small. We use Arial 12 point font by default.
Now to make the text visible, I need to change the font size proportionally. I am finding it tough to come up with a formula which would give me the target font size given the monitor resolution.
Here is my understanding of the problem.
1) On Windows, by default 96 pixels correspond to 1 logical inch. This means that when the monitor resolution increases, the screen size in logical inches also increases.
2) A 1-point font is 1/72 of a logical inch. Combined with the fact that there are 96 pixels per logical inch, it turns out that there are 96/72 pixels per point of font size.
This means that a 12-point font will occupy 12 × 96/72 = 16 pixels.
Now I need to know the scaling factor by which I need to increase this number of pixels so that the resulting font is properly visible. If I know the scaled pixel count, I can get the font size simply by dividing it by 96/72.
What is the suggested scaling factor which would ensure properly scaled Fonts on all monitor resolutions?
Also, please correct if my understanding is wrong.
There's an example on the MSDN page for the LOGFONT structure. Your understanding is correct; you need to scale the point size by the vertical DPI (LOGPIXELSY) divided by 72.
lfHeight = -PointSize * GetDeviceCaps(hDC, LOGPIXELSY) / 72;
If you set the resolution in Windows to match that of the physical monitor, no adjustment should be needed. Any well written program will do the multiplication and division necessary to scale the font properly, and in the newest versions of Windows the OS will lie about the resolution and scale the fonts automatically.
If you wish to handle this outside of the Windows settings, simply multiply your font size by your actual DPI and divide by 96.
Edit: Beginning with Windows Vista, Windows will not report your actual configured DPI unless you write a DPI-aware program. Microsoft has some guidance on the subject. You might find that the default scaling that Microsoft provides for non-DPI-aware programs is good enough for your purposes.
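A minimal C# sketch of that rule (the actual DPI would come from something like Graphics.DpiY or GetDeviceCaps, and on Vista and later it is only truthful in a DPI-aware process, as noted above):
static float ScaleFontSize(float pointSize, float actualDpi)
{
    // Scale a point size that was designed for the default 96 DPI
    // to the DPI the system actually reports.
    return pointSize * actualDpi / 96f;
}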
or "How I learned to stop worrying and learned to love measurement systems"
I wanted a central spot that I can refer to later to give me a quick low-down on various units of measurement used in programming. SO seemed the best place to put it, and while I could go ahead and answer the question myself, y'all are a much smarter bunch than I, so I might as well let you do it.
Please pick one unit that you're familiar with, use "#name" in the first line to give it as the heading (making it easy to find), and define it within your answer. Please do not duplicate - add comments or edit existing answers rather than adding a new answer. Similar units are still separate - so please don't define em and en in the same answer. If a unit is exactly the same as another unit, add a line for "aliases" below the heading.
If it's a particularly obscure measurement type, please link to a second reference so people don't downvote you because they've never heard of it.
Point
Pica
Twips
Pixel
Em
En
CPI
DPI
I'm seeing a lot of downvoting - I suppose people believe this doesn't add value to StackOverflow's community. Please consider commenting below if you feel this doesn't add to the community, or if you think this is a bad question. I'm interested in improving it if you have any suggestions.
The great thing about standards is there are so many to choose from!
-Adam
I recommend amending the above answers with the following descriptions.
PICA
Pica Typographic unit of measurement in the Anglo-American point system. One pica is 1/6 inch (about 4.23 mm) and equals 12 pica points. The Didot equivalent of a pica is a cicero. A standard unit of measure in newspapers. There are 6 picas in one inch, 12 points in one pica.
PICA POINT
Pica Point 1/12 of a pica
POINT
996 points are equivalent to 35 centimeters, or one point is equal to .01383 inches. This means about 72.3 points to the inch. We in electronic printing use 72 points per inch
1 point (Truchet) = 0.188 mm (obsolete today)
1 point (Didot) = 0.376 mm = 1/72 of a French royal inch (27.07 mm)
1 point (ATA) = 0.3514598 mm = 0.013837 inch
1 point (TeX) = 0.3514598035 mm = 1/72.27 inch
1 point (Postscript) = 0.3527777778 mm = 1/72 inch
1 point (l’Imprimerie nationale, IN) = 0.4 mm
EM
An old printing term for a square-shaped blank space that’s as wide as the type is high; in other words, a 10-point em space will be 10 points wide.
EN
Half an em space; a 10-point en space will be 5 points wide.
DPI
The number of dots per inch a printer prints. The higher the dpi, the finer the resolution of the output.
PIXEL
The smallest dot you can draw on a computer screen
CPI
Counts per inch (for mouse properties), or the number of horizontal characters that will fit in one inch (for printer properties).
PITCH Alias CPI
Pitch describes the width of a character. Pitch equals the number of characters that can fit side-by-side in 1 inch; for example, 10 pitch equals 10 characters-per-inch or 10 CPI. Pitch is a term generally used with non-proportional (fixed-width) fonts.
TWIPS
A twip (derived from TWentieth of an Imperial Point) is a typographical measurement, defined as 1/20 of a typographical point. One twip is 1/1440 inch or 17.639 µm when derived from the PostScript point at 72 to the inch, and 1/1445.4 inch or 17.573 µm based on the printer's point at 72.27 to the inch
Additional Units:
LPI
The number of vertical lines of text that will fit in one inch
PPI
Thickness of paper, expressed in thousandths of an inch, or pages per inch.
Sometimes also the number of horizontal pixels printed or displayed per inch.
FONT SIZE
Font size or Type size is the baseline distance for which the font was designed. A font should normally be identified and selected by this size, because the intended baseline distance is much more relevant for practical layout work than the actual dimensions of certain characters.
FONT HEIGHT
Font height is the height in mm of letters such as k or H. Typically, the font height is around 72% of the font size, but this is of course at the discretion of the font designer.
X-HEIGHT
x-height indicates the size of lower-case letters, excluding ascenders and descenders (taken from the height of the lower-case x).
H-HEIGHT
h-height or cap height refers to the height of a capital letter above the baseline for a particular typeface. It specifically refers to the height of capital letters that are flat—such as H or I—as opposed to round letters such as O.
Pixel
One of the little colored squares on your screen.
Pica
A typographical measure of 12 points, sometimes (incorrectly) called an em (in fact, an em is actually a horizontal distance equal to the point size of the type).
Twips
'Twentieth of an Imperial Point'. A measure used for marking up positions of widgets in Visual BASIC user interfaces. It was used this way so that positions could be specified precisely using integers. One Twip = 1/20 point = 1/1440 inch.
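A small illustrative conversion in C# (a hypothetical helper, not part of any particular API):
static double TwipsToPixels(double twips, double dpi)
{
    // 1 twip = 1/20 point = 1/1440 inch, so pixels = twips * dpi / 1440.
    return twips * dpi / 1440.0;
}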
DPI
Dots per inch. A number used to measure the resolution of something in space, i.e. with respect to the real physical size it occupies.
Adds complexity and headache, since the standard/default DPI of a computer screen varies with the operating system. Macintosh screens generally assume 72 DPI, while Windows favors 96. If you don't compensate for this when displaying images (and text), you will get unexpected variations.
It is always amusing when people start talking of "the DPI of this image" for digital images such as PNG or JPEG. To me, they only have absolute pixels in them, unconnected to any physical size. If you want to print the image on a (for instance) 300 DPI printer, then you need to adapt and scale to get it correct, but the image itself only has pixels.
PostScript Point
1/72 of an inch.