Are pixels ever not square on a monitor? - windows

I'm working on making my application DPI-aware using this MSDN guide, where the technique for scaling uses the X and Y logical pixels-per-inch values from a device context.
int _dpiX = 96, _dpiY = 96;
HDC hdc = GetDC(NULL);
if (hdc)
{
_dpiX = GetDeviceCaps(hdc, LOGPIXELSX);
_dpiY = GetDeviceCaps(hdc, LOGPIXELSY);
ReleaseDC(NULL, hdc);
}
Then you can scale X and Y coordinates using
int ScaleX(int x) { return MulDiv(x, _dpiX, 96); }
int ScaleY(int y) { return MulDiv(y, _dpiY, 96); }
Is there ever a situation where GetDeviceCaps(hdc, LOGPIXELSX) and GetDeviceCaps(hdc, LOGPIXELSY) would return different numbers for a monitor? The only device I'm really concerned about is a monitor, so do I need separate ScaleX(int x) and ScaleY(int y) functions, or could I use just one Scale(int px) function? Would there be a downside to doing this?
Thanks in advance for the help.

It is theoretically possible, but I don't know of any recent monitor that uses non-square pixels. There are so many advantages to square pixels, and so much existing software assumes square pixels, that it seems unlikely for a mainstream monitor to come out with a non-square pixel mode.
In many cases, if you did have a monitor with non-square pixels, you probably could apply a transform to make it appear as though it has square pixels (e.g., by setting the mapping mode).
That said, it is common for printers to have non-square device units. Many of them have a much higher resolution in one dimension than in the other. Some drivers make this resolution available to the caller; others will make it appear as though the device has square pixels. If you ever want to re-use your code for printing, I'd advise you not to conflate your horizontal and vertical scaling.
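To make that concrete, here is a rough sketch (not from the MSDN guide) that runs the same GetDeviceCaps query against a printer DC; the printer name is just a placeholder, and whether the driver actually reports different horizontal and vertical values depends on the device:
#include <windows.h>
#include <cstdio>

// Query a printer's device resolution on both axes. Some drivers report
// e.g. 600 x 1200 dpi here, which is exactly the non-square case above.
void PrintPrinterDpi(const wchar_t* printerName)
{
    HDC hdcPrinter = CreateDCW(L"WINSPOOL", printerName, NULL, NULL);
    if (!hdcPrinter)
        return;

    int dpiX = GetDeviceCaps(hdcPrinter, LOGPIXELSX);
    int dpiY = GetDeviceCaps(hdcPrinter, LOGPIXELSY);
    wprintf(L"%ls: %d x %d dpi\n", printerName, dpiX, dpiY);

    DeleteDC(hdcPrinter);
}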

Hardware pixels of LCD panels are always square. On a CRT you can have rectangular pixels, for example when using a 320x200 or 320x400 resolution on a 4:3 monitor (these resolutions were actually used). On an LCD you can get rectangular pixels by running a non-native resolution on the monitor - a widescreen resolution on a 5:4 monitor and vice versa.

Related

What is the expected renderscript rsSample() performance?

rsSample() seems intended for sampling from mipmapped textures across multiple levels of detail to avoid aliasing. The fisheye example would be a good use case.
The implementation simply samples 8 pixels from the underlying mipmap and does a linear blend of them.
It seems I can only get 15 Mpixels/second on a Google Pixel with this simple kernel:
uchar4 __attribute__((kernel)) rescaletest(uint32_t x, uint32_t y) {
float2 location = {x,y};
return convert_uchar4(rsSample(gInput8888, gWrapLinearMipLinear, location/2000.f, 1.5f)*255.f);
}
Considering all composited graphics presumably use mipmaps, and compositing even one texture at 60 fps needs 120 Mpixels/sec, what am I doing wrong?

Structured Light - How to do when the projector's resolution is lower than patterns?

I'm trying to build a structured light environment to do 3D scanning.
As far as I know, if I choose to use gray code to reconstruct a 3D model, I have to project specific patterns encoded in powers of 2 (2^x, x = 0 ~ 10).
That is to say, the patterns must be at least 1024 x 1024 in resolution.
What if my DLP projector only supports resolutions up to 800 x 480? It projects a Moire pattern when the gray code pattern resolution becomes too high (I tried). What should I do?
My friends suggest that I create 1024 x 1024 patterns and "crop" them to 800 x 480,
but I thought the gray code should follow a specific sequence and pattern; my friends' suggestion would create several images that are not symmetric.
Does anyone have the same experience like me?
----------2015.8.4 Update Question----------
I was thinking: if my projector can't perfectly project high resolution patterns, can I just have it project patterns at lower resolution, for instance from 2^0 to 2^6?
Or does gray code strictly demand patterns from 2^0 to 2^10, and is otherwise unusable?
You cannot directly scale the pattern down to your resolution, because that would distort it and make it useless.
Instead you can:
1. Crop it to your resolution. But you need to handle that in the scanning part too, because you would not have the full pattern available.
2. Use the nearest usable power-of-2 resolution, like 512x256, and create the pattern for it. The rest of the space is unused (wasting pixels).
3. Use bullet #2 + scale up to fit your resolution better. Create the 512x256 pattern and linearly scale it to fit 800x480 as much as you can:
800/512 = 1.5625
480/256 = 1.8750
Use the smaller scale (512x256 * 1.5625 -> 800x400), so scale the pattern by 1.5625 and use that as the pattern image. Scale by nearest neighbor to avoid subpixel grayscale colors, which are harder to detect (see the sketch after the notes below). This wastes fewer pixels but can lower the precision of the 3D scan!
This is how I generate my pattern in C++ and VCL:
// [generate pattern xs*ys power of 2 resolution]
// clear buffer
bmp->Canvas->Brush->Color=clBlack;
bmp->Canvas->FillRect(TRect(0,0,xs,ys));
int x,y,a,da;
for (da=0;1<<da<xs;da++);                   // number of bits per x resolution
for (a=0,y=0;y<ys;y++,a=(y*da)/ys)          // each horizontal band of rows uses bit index a
 for (x=0;x<xs;x++)
  if (int((x>>a)&1)==0) pyx[ys-1-y][x]=0x00FFFFFF; // white stripe where bit a of x is 0
bmp->SaveToFile("3D_scann_pattern0.bmp");
bmp is a VCL bitmap
xs,ys is the resolution of the bitmap
pyx[ys][xs] is direct 32-bit pixel access to the bitmap
This is encoded slightly differently than your pattern!
[Notes]
If you need precision use bullet #2
If you need to cover a bigger area use bullet #3
You can also scale the y axis differently than the x axis, as this is just 1D encoding
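Here is a minimal sketch of the nearest-neighbor upscale from bullet #3, written as plain C++ over a raw pixel buffer instead of VCL (the buffer layout and function name are assumptions for illustration):
// Scale a pattern (e.g. 512x256) up to 800x400 using nearest neighbor,
// so every output pixel is a pure copy of some source pixel (no gray blends).
#include <vector>
#include <cstdint>

std::vector<uint32_t> UpscaleNearest(const std::vector<uint32_t>& src,
                                     int srcW, int srcH,
                                     int dstW, int dstH)
{
    std::vector<uint32_t> dst(static_cast<size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; y++)
    {
        int sy = y * srcH / dstH;       // nearest source row
        for (int x = 0; x < dstW; x++)
        {
            int sx = x * srcW / dstW;   // nearest source column
            dst[static_cast<size_t>(y) * dstW + x] = src[static_cast<size_t>(sy) * srcW + sx];
        }
    }
    return dst;
}

// Usage: UpscaleNearest(pattern, 512, 256, 800, 400); the remaining
// 800x80 band of the projector image stays black (unused).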

How to set DPI in cairographics?

When creating vector graphics for PDFs, I use one of the "create" functions for PDF rendering, for instance cairo_pdf_surface_create_for_stream. The signature of that function is:
cairo_surface_t * cairo_pdf_surface_create_for_stream (cairo_write_func_t write_func,
void *closure,
double width_in_points,
double height_in_points);
Now, I can set the size of the surface in points, but the size of one point is seemingly hardcoded. In the description it says:
width_in_points: width of the surface, in points (1 point == 1/72.0 inch)
height_in_points: height of the surface, in points (1 point == 1/72.0 inch)
As you can see, 1pt = 1/72" (72 dpi). But how do I change that setting?
I could factor something into the size when using a different resolution and compensate that way, but this seems to me like the worst practice ever.
A point is a standard typographical unit of measure. Whether or not you're talking about Cairo, a point is simply 1/72". It's not some setting you change, just like the fact that you don't change the number of inches in a foot.
The whole reason for using a physical measurement (points) instead of a screen-dependent one (pixels) is resolution-independence. This is a Good Thing.
What are you hoping to accomplish by changing the DPI?
If by "change the dpi" you want to draw at a different scale than 1/72" you can use cairo_scale(). If you are referring to the dpi of fallback images (regions that are rasterized becasue they can not be drawn natively by pdf) use cairo_surface_set_fallback_resolution().

How WebGL works?

I'm looking for a deep understanding of how WebGL works. I want to gain knowledge at a level that most people care less about, because it isn't necessarily useful to the average WebGL programmer. For instance, what role does each part (browser, graphics driver, etc.) of the total rendering system play in getting an image on the screen?
Does each browser have to create a JavaScript/HTML engine/environment in order to run WebGL in the browser? Why is Chrome ahead of everyone else in terms of being WebGL compatible?
So, what are some good resources to get started? The Khronos specification is kind of lacking (from what I saw browsing it for a few minutes) for what I want. I mostly want to know how this is accomplished/implemented in browsers and what else needs to change on your system to make it possible.
Hopefully this little write-up is helpful to you. It overviews a big chunk of what I've learned about WebGL and 3D in general. BTW, if I've gotten anything wrong, somebody please correct me -- because I'm still learning, too!
Architecture
The browser is just that, a Web browser. All it does is expose the WebGL API (via JavaScript), which the programmer does everything else with.
As near as I can tell, the WebGL API is essentially just a set of (browser-supplied) JavaScript functions which wrap around the OpenGL ES specification. So if you know OpenGL ES, you can adopt WebGL pretty quickly. Don't confuse this with pure OpenGL, though. The "ES" is important.
The WebGL spec was intentionally left very low-level, leaving a lot to
be re-implemented from one application to the next. It is up to the
community to write frameworks for automation, and up to the developer
to choose which framework to use (if any). It's not entirely difficult
to roll your own, but it does mean a lot of overhead spent on
reinventing the wheel. (FWIW, I've been working on my own WebGL
framework called Jax for a while
now.)
The graphics driver supplies the implementation of OpenGL ES that actually runs your code. At this point, it's running on the machine hardware, below even the C code. While this is what makes WebGL possible in the first place, it's also a double edged sword because bugs in the OpenGL ES driver (which I've noted quite a number of already) will show up in your Web application, and you won't necessarily know it unless you can count on your user base to file coherent bug reports including OS, video hardware and driver versions. Here's what the debug process for such issues ends up looking like.
On Windows, there's an extra layer which exists between the WebGL API and the hardware: ANGLE, or "Almost Native Graphics Layer Engine". Because the OpenGL ES drivers on Windows generally suck, ANGLE receives those calls and translates them into DirectX 9 calls instead.
Drawing in 3D
Now that you know how the pieces fit together, let's look at a lower-level explanation of how everything comes together to produce a 3D image.
JavaScript
First, the JavaScript code gets a 3D context from an HTML5 canvas element. Then it registers a set of shaders, which are written in GLSL ([Open] GL Shading Language) and essentially resemble C code.
The rest of the process is very modular. You need to get vertex data and any other information you intend to use (such as vertex colors, texture coordinates, and so forth) down to the graphics pipeline using uniforms and attributes which are defined in the shader, but the exact layout and naming of this information is very much up to the developer.
JavaScript sets up the initial data structures and sends them to the WebGL API, which sends them to either ANGLE or OpenGL ES, which ultimately sends it off to the graphics hardware.
Vertex Shaders
Once the information is available to the shader, the shader must transform the information in 2 phases to produce 3D objects. The first phase is the vertex shader, which sets up the mesh coordinates. (This stage runs entirely on the video card, below all of the APIs discussed above.) Most usually, the process performed on the vertex shader looks something like this:
gl_Position = PROJECTION_MATRIX * VIEW_MATRIX * MODEL_MATRIX * VERTEX_POSITION
where VERTEX_POSITION is a 4D vector (x, y, z, and w which is usually set to 1); VIEW_MATRIX is a 4x4 matrix representing the camera's view into the world; MODEL_MATRIX is a 4x4 matrix which transforms object-space coordinates (that is, coords local to the object before rotation or translation have been applied) into world-space coordinates; and PROJECTION_MATRIX which represents the camera's lens.
Most often, the VIEW_MATRIX and MODEL_MATRIX are precomputed and
called MODELVIEW_MATRIX. Occasionally, all 3 are precomputed into
MODELVIEW_PROJECTION_MATRIX or just MVP. These are generally meant
as optimizations, though I'd like to find time to do some benchmarks. It's
possible that precomputing is actually slower in JavaScript if it's
done every frame, because JavaScript itself isn't all that fast. In
this case, the hardware acceleration afforded by doing the math on the
GPU might well be faster than doing it on the CPU in JavaScript. We can
of course hope that future JS implementations will resolve this potential
gotcha by simply being faster.
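As a language-agnostic sketch of the transform order above (a toy column-major 4x4 type in C++; in a real WebGL app this multiplication happens in the GLSL vertex shader or in a JavaScript math library):
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;   // column-major, like OpenGL

// Standard 4x4 matrix times 4D vector, column-major storage.
Vec4 Multiply(const Mat4& m, const Vec4& v)
{
    Vec4 r{};
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

// gl_Position = PROJECTION * VIEW * MODEL * vertex, applied right to left.
Vec4 TransformVertex(const Mat4& projection, const Mat4& view,
                     const Mat4& model, const Vec4& vertex)
{
    return Multiply(projection, Multiply(view, Multiply(model, vertex)));
}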
Clip Coordinates
When all of these have been applied, the gl_Position variable will have a set of XYZ coordinates ranging within [-1, 1], and a W component. These are called clip coordinates.
It's worth noting that clip coordinates is the only thing the vertex shader really
needs to produce. You can completely skip the matrix transformations
performed above, as long as you produce a clip coordinate result. (I have even
experimented with swapping out matrices for quaternions; it worked
just fine but I scrapped the project because I didn't get the
performance improvements I'd hoped for.)
After you supply clip coordinates to gl_Position, WebGL divides the result by gl_Position.w, producing what's called normalized device coordinates.
From there, projecting a pixel onto the screen is a simple matter of multiplying by 1/2 the screen dimensions and then adding 1/2 the screen dimensions.[1] Here are some examples of clip coordinates translated into 2D coordinates on an 800x600 display:
clip = [0, 0]
x = (0 * 800/2) + 800/2 = 400
y = (0 * 600/2) + 600/2 = 300
clip = [0.5, 0.5]
x = (0.5 * 800/2) + 800/2 = 200 + 400 = 600
y = (0.5 * 600/2) + 600/2 = 150 + 300 = 450
clip = [-0.5, -0.25]
x = (-0.5 * 800/2) + 800/2 = -200 + 400 = 200
y = (-0.25 * 600/2) + 600/2 = -75 + 300 = 225
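The same mapping as a tiny C++ sketch (assuming the viewport covers the whole 800x600 canvas and using the same Y-up convention as the examples above):
// Map normalized device coordinates (clip coords already divided by w)
// to window pixels, as in the worked examples above.
struct Pixel { float x, y; };

Pixel NdcToWindow(float ndcX, float ndcY, float viewportW, float viewportH)
{
    Pixel p;
    p.x = ndcX * viewportW / 2.0f + viewportW / 2.0f;
    p.y = ndcY * viewportH / 2.0f + viewportH / 2.0f;
    return p;
}

// NdcToWindow(0.5f, 0.5f, 800, 600)    -> (600, 450)
// NdcToWindow(-0.5f, -0.25f, 800, 600) -> (200, 225)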
Pixel Shaders
Once it's been determined where a pixel should be drawn, the pixel is handed off to the pixel shader, which chooses the actual color the pixel will be. This can be done in a myriad of ways, ranging from simply hard-coding a specific color to texture lookups to more advanced normal and parallax mapping (which are essentially ways of "cheating" texture lookups to produce different effects).
Depth and the Depth Buffer
Now, so far we've ignored the Z component of the clip coordinates. Here's how that works out. When we multiplied by the projection matrix, the third clip component resulted in some number. If that number is greater than 1.0 or less than -1.0, then the number is beyond the view range of the projection matrix, corresponding to the matrix zFar and zNear values, respectively.
So if it's not in the range [-1, 1] then it's clipped entirely. If it is in that range, then the Z value is scaled to 0 to 1[2] and is compared to the depth buffer[3]. The depth buffer has the same dimensions as the screen, so if an 800x600 viewport is used, the depth buffer is 800 pixels wide and 600 pixels high. We already have the pixel's X and Y coordinates, so they are plugged into the depth buffer to get the currently stored Z value. If the stored Z value is greater than the new Z value, then the new Z value is closer than whatever was previously drawn, and replaces it[4]. At this point it's safe to light up the pixel in question (or in the case of WebGL, draw the pixel to the canvas), and store the Z value as the new depth value.
If the new Z value is greater than the stored depth value, then it is deemed to be "behind" whatever has already been drawn, and the pixel is discarded.
[1]The actual conversion uses the gl.viewport settings to convert from normalized device coordinates to pixels.
[2]It's actually scaled to the gl.depthRange settings. They default to 0 and 1.
[3]Assuming you have a depth buffer and you've turned on depth testing with gl.enable(gl.DEPTH_TEST).
[4]You can set how Z values are compared with gl.depthFunc.
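Conceptually, the depth test described above boils down to something like this sketch (the default gl.LESS comparison; not how the hardware literally implements it):
// Conceptual depth test for one fragment at window position (x, y).
// depthBuffer is assumed to be a width x height array initialized to 1.0f.
bool DepthTestAndWrite(float* depthBuffer, int width, int x, int y, float fragmentZ)
{
    float storedZ = depthBuffer[y * width + x];
    if (fragmentZ < storedZ)                    // gl.LESS: new fragment is closer
    {
        depthBuffer[y * width + x] = fragmentZ; // keep it and update the buffer
        return true;                            // caller writes the color
    }
    return false;                               // behind something already drawn; discard
}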
I would read these articles
http://webglfundamentals.org/webgl/lessons/webgl-how-it-works.html
Assuming those articles are helpful, the rest of the picture is that WebGL runs in a browser. It renders to a canvas tag. You can think of a canvas tag like an img tag, except you use the WebGL API to generate an image instead of downloading one.
Like other HTML5 tags, the canvas tag can be styled with CSS, placed under or over other parts of the page, composited (blended) with other parts of the page, and transformed, rotated, and scaled by CSS along with other parts of the page. That's a big difference from OpenGL or OpenGL ES.

Pixel to Centimeter?

I just want to know if the pixel unit is something that doesn't change, and if we can convert from pixels to, let's say, centimeters?
This is similar to this question, which asks about points instead of centimeters. There are 72 points per inch and 2.54 centimeters per inch, so just substitute 2.54 for 72 in the answer to that question. I'll quote and correct my answer here:
There are 2.54 centimeters per inch; if it is sufficient to assume 96 pixels per inch, the formula is rather simple:
centimeters = pixels * 2.54 / 96
On Microsoft Windows there is a way to get the configured pixels per inch of your display, called GetDeviceCaps. Microsoft has a guide called "Developing DPI-Aware Applications"; look for the section "Creating DPI-Aware Fonts".
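A small sketch of that conversion using the DPI Windows reports (falling back to the 96 assumption if no device context is available; the reported value may still differ from the monitor's true physical pitch):
#include <windows.h>

// Convert a horizontal pixel count to centimeters using the DPI that
// Windows reports for the primary display.
double PixelsToCentimeters(int pixels)
{
    int dpiX = 96;
    HDC hdc = GetDC(NULL);
    if (hdc)
    {
        dpiX = GetDeviceCaps(hdc, LOGPIXELSX);
        ReleaseDC(NULL, hdc);
    }
    return pixels * 2.54 / dpiX;
}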
Converting pixels to centimeters depends on the DPI (dots per inch) of the media displaying the image, i.e. monitor, laser printer, etc.
http://wiki.answers.com/Q/How_do_you_convert_pixels_into_centimeters
I'm going to go out on a limb and just guess that you want to be able to display things to the user on their monitor, scaled to be very close to its real life size.
If this is the case, I would recommend either displaying your items next to real life items (credit cards, dollar bills, pop cans, etc.) or, even better, allowing the user to hold something up to the screen like a credit card, dollar bill, or ruler. You could then have them adjust a slider or something similar to match the width or height of that object.
By holding a credit card, something with a relatively known height and width, up to the screen, you can easily determine the ratio of pixels to inches and use that to your hearts content.
Wiki says
Most credit cards are issued by local banks or credit unions, and are the shape and size specified by the ISO/IEC 7810 standard as ID-1 (85.60 × 53.98 mm)
Using mspaint, a credit card of mine is exactly 212 pixels tall; that's 212 pixels / 53.98 mm = about 3.93 pixels per mm. Multiply by 10 and that's about 39.3 pixels per cm.
You could EASILY do that programmatically via JavaScript, C#, Flash, whatever you want.
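A sketch of that calibration step in C++; the measured pixel width would come from whatever slider or drag interaction you give the user, so the inputs and names here are hypothetical:
// ISO/IEC 7810 ID-1 card width in millimeters.
const double kCardWidthMm = 85.60;

// Given how many screen pixels the user says the card spans,
// return the display's effective pixels-per-centimeter.
double PixelsPerCentimeter(double measuredCardWidthPx)
{
    double pixelsPerMm = measuredCardWidthPx / kCardWidthMm;
    return pixelsPerMm * 10.0;   // 10 mm per cm
}

// Example: a card measured at 340 px wide gives 340 / 85.60, about
// 3.97 px per mm, i.e. roughly 39.7 px per cm.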
You can convert from pixels to centimeters, but it's not a consistent conversion. It will depend on the size and resolution of the display device in question. The definition of a pixel will not change, but the size of a pixel will vary on different display devices.
No, different mediums & monitors have different pixel density.
For instance a desktop monitor may have 75 pixels per inch whereas a print may be outputted at 300.
Here is a list of displays by pixel density
In Adobe Illustrator CS3 :(, the figure I get is 1 cm = 28.347 pixels. Note I am using an iMac 7 that has a resolution of 102 pixels per inch (40 ppcm), according to the list at http://en.wikipedia.org/wiki/List_of_displays_by_pixel_density provided by rebo.
I created an Adobe Illustrator CS3 document using JavaScript to test the value of 1 cm = 28.347 pixels, and it matches perfectly.
I know this question is very old but I was trying to find an answer to it and decided to share my findings.
Regards
Pixel size depends on your screen resolution.
Well, first you should get the dpi (dots per inch) of your screen.
For example, say your screen dpi is 96:
1 cm = 37.795276 pixels at 96 dpi.
37.795276F / 96F = 0.3937007F, which is the number of pixels per centimeter at 1 dpi.
Now you can adapt it to your screen by getting the current dpi of the screen and multiplying it by 0.3937007F; then you have one centimeter at your desired dpi (screen resolution).
Let's set a scenario:
I want to make a method which takes cm and returns pixels based on the screen dpi:
public float CentimeterToPixel(int valueCM, float dpi)
{
    return 0.3937007F * dpi * valueCM;
}
If you want to make it more accurate, you should use dpiX & dpiY separately.
For example, in C# WinForms you can create a Graphics object from System.Drawing and System.Drawing.Drawing2D, get its DpiX & DpiY values, and design your area based on them for a more accurate calculation (in some cases the horizontal resolution differs from the vertical).
See the code below.
using System;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

namespace MyApp
{
    static class MyAppClass
    {
        // A 1x1 bitmap whose resolution defaults to the current screen resolution.
        private static Bitmap bmp = new Bitmap(1, 1);
        private static Graphics graphic = Graphics.FromImage(bmp);

        public static float CentimeterToPixelWidth(int valueCM)
        {
            return 0.3937007F * graphic.DpiX * valueCM;
        }

        public static float CentimeterToPixelHeight(int valueCM)
        {
            return 0.3937007F * graphic.DpiY * valueCM;
        }
    }
}
Wish it helps you, Heydar.
The size of pixels change depending on the display device.
The following "found" code uses api calls to determine pixel density Get screen DPI in .NET
("Found" as in I googled it but haven't tried it)
As far as I understand it, a PIXEL is a Picture Element, so its size depends on two things:
(a) Resolution
(b) Physical screen size
Thus if you divide the screen size by the resolution, this should give you cm per pixel.
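As a sketch with made-up numbers (a 24-inch 16:9 panel is roughly 53 cm wide):
// Centimeters per pixel = physical screen width / horizontal resolution.
double CentimetersPerPixel(double screenWidthCm, int horizontalResolutionPx)
{
    return screenWidthCm / horizontalResolutionPx;
}

// CentimetersPerPixel(53.0, 1920) is about 0.0276 cm per pixel (roughly 36 px/cm).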
