I have a simple triangle with these vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrices:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine, and I see the triangle. But if I try to use an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.0F);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried googling this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0F, (float)swapChainExtent.width, (float)swapChainExtent.height, 0.1F, 1000.0F);
First, I don't know what Z range this overload of glm::ortho produces. Maybe your vertices don't fit in that range. There is an overload of this function that lets you provide a Z/depth range explicitly. Try providing a range that covers your Z/depth values, or try moving the vertices further away from or closer to the camera, or simply provide a generous range like -1000 to +1000.
And another problem: how big is your swapchain? If it is, for example, 800 x 600 pixels, then you specify a rendering area in the [0, 0] to [800, 600] range (in pixels). But you provide vertices that lie in an area smaller than a single pixel, in the [-0.1, -0.1] to [0.1, 0.1] range (still in pixels). It's no surprise you don't see anything: your whole triangle is smaller than a single pixel.
These two problems together are probably why you don't see anything. When you change only the depth, you still see nothing because the triangle is too small. When you change only the size of the triangle (without changing depth), the object is view-frustum culled. Change the size of your triangle and then try changing the depth values of its vertices.
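To make both fixes concrete, here is a minimal sketch (my example values, assuming the same UBO and swapchain setup as above) that gives the orthographic projection an explicit depth range and uses pixel-sized vertex coordinates:
// Pixel-space orthographic projection with an explicit depth range; the
// generous -1000..1000 range comfortably covers vertices at z = 0.
ubo.proj = glm::ortho(0.0f, (float)swapChainExtent.width,
                      (float)swapChainExtent.height, 0.0f,
                      -1000.0f, 1000.0f);
ubo.proj[1][1] *= -1; // flip Y for Vulkan's clip-space convention

// With this projection, vertex positions are measured in pixels, so a
// visible triangle needs pixel-sized coordinates, for example:
// { { 400.0f, 100.0f } },
// { { 500.0f, 300.0f } },
// { { 300.0f, 300.0f } }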
I am trying to position my text model mesh on screen. Using the code below, it draws the mesh with its left side at the center of the screen. But I would like to position it at the left edge of the screen, and this is where I get stuck. If I un-comment the Matrix.translateM line, I would expect the mesh to now sit at the left of the screen, but it seems that the position is being scaled (!?)
A few scenarios I have tried:
a.) Matrix.scaleM only (no Matrix.translateM): the left of the mesh is positioned at 0.0f (center of screen) and the scale is correct.
b.) Matrix.translateM only (no Matrix.scaleM): the left of the mesh is positioned at -1.77f (the left edge of the screen) correctly, but the scale is incorrect.
c.) Matrix.translateM then Matrix.scaleM, or Matrix.scaleM then Matrix.translateM: the scale is correct, but the position is incorrect. The position seems to be scaled, ending up much closer to the center than to the left edge of the screen.
I am using OpenGL ES 2.0 in Android Studio, programming in Java.
Screen bounds (as set up by Matrix.orthoM):
left: -1.77, right: 1.77 (center is 0.0), bottom: -1.0, top: 1.0 (center is 0.0)
Mesh height is 1.0f, so with no Matrix.scaleM the mesh takes up the entire screen height.
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f; // 64px height to projection matrix
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale); // these two lines
//Matrix.translateM(modelMatrix, 0, -ratio, 0.0f, 0.0f); // these two lines
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mMVPMatrix, 0, modelMatrix, 0);
Thanks, Ed Halferty and Matic Oblak, you are both correct. As Matic suggested, I have now put Matrix.translateM first and Matrix.scaleM second. I have also ensured that mMVPMatrix is indeed model-view-projection, not projection-view-model.
Also, with Matrix.translateM moving the model mesh to -1.0f, it sits at the left edge of the screen, which is better than -1.77f in any case.
Correct position + scale, thanks!
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f; // 64 px height relative to the 1080 px projection height
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, -1.0f, 0.0f, 0.0f); // translate first...
Matrix.scaleM(modelMatrix, 0, scale, scale, scale);   // ...then scale
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, modelMatrix, 0, mMVPMatrix, 0); // result = modelMatrix * ortho
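For reference, the working order corresponds to model = T × S with column vectors: the vertices are scaled first, then the unscaled translation is applied. A GLM sketch of the same composition (a hypothetical helper, not from the original post):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// model = T * S: the scale applies to the vertices, not to the
// translation distance, so moving by -1.0f really moves one full unit.
glm::mat4 MakeModelMatrix(float scale)
{
    glm::mat4 t = glm::translate(glm::mat4(1.0f), glm::vec3(-1.0f, 0.0f, 0.0f));
    glm::mat4 s = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
    return t * s;
}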
I am trying to create a VR application using the iPhone's motion manager object. By VR I mean an app that shows places on the camera view.
I can successfully visualize the iPhone's orientation using yaw, pitch, and roll with a Z -> X -> Y rotation order.
So far, I can rotate the device and it rotates correctly in the Windows app that I created for monitoring. This part works, and I will show the code for it later. But it is not what I want to do.
What I want to do is actually the opposite. I don't want the device model to move; I want the world around it to rotate. So if the user points the device's camera to the east, he should see the "east sign", and if he changes direction to the north, he should see the "north sign" on the screen. And it should be accurate, unlike other applications out there that simply drop the roll movement.
The problem is that when I move the device from lying on the table to portrait mode, rotating it right and left results in incorrect rotation. And if I put it in landscape mode, then top-to-bottom rotation is incorrect. In other words, the rotations are based on the world axes, and they only work correctly when the device is lying flat on the ground, presumably because that is the reference frame.
What I want to ask here is how to convert these angles so that I see the result I expect. There should be a way, based on trigonometry.
These are the functions I use to calculate the rotation matrix:
private static Matrix4 CreateRotationMatrix(char axis, float radians, bool rightHanded = true)
{
    float c = (float)Math.Cos(radians);
    float s = (float)Math.Sin(radians) * (rightHanded ? 1 : -1);
    switch (axis)
    {
        case 'X':
            return new Matrix4(
                new Vector4(1.0f, 0.0f, 0.0f, 0.0f),
                new Vector4(0.0f, c, -s, 0.0f),
                new Vector4(0.0f, s, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Y':
            return new Matrix4(
                new Vector4(c, 0.0f, s, 0.0f),
                new Vector4(0.0f, 1.0f, 0.0f, 0.0f),
                new Vector4(-s, 0.0f, c, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        case 'Z':
            return new Matrix4(
                new Vector4(c, -s, 0.0f, 0.0f),
                new Vector4(s, c, 0.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 1.0f, 0.0f),
                new Vector4(0.0f, 0.0f, 0.0f, 1.0f));
        default:
            return Matrix4.Identity;
    }
}

public static Matrix4 MatrixFromEulerAngles(
    Vector3 euler,
    string order,
    bool isRightHanded = true,
    bool isIntrinsic = true)
{
    if (order.Length != 3) throw new ArgumentOutOfRangeException("order", "String must have exactly 3 characters");
    // X = Pitch
    // Y = Yaw
    // Z = Roll
    return isIntrinsic
        ? CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
        : CreateRotationMatrix(order[0], GetEulerAngle(order[0], euler), isRightHanded)
          * CreateRotationMatrix(order[1], GetEulerAngle(order[1], euler), isRightHanded)
          * CreateRotationMatrix(order[2], GetEulerAngle(order[2], euler), isRightHanded);
}

private static float GetEulerAngle(char angle, Vector3 euler)
{
    switch (angle)
    {
        case 'X':
            return euler.X;
        case 'Y':
            return euler.Y;
        case 'Z':
            return euler.Z;
        default:
            return 0f;
    }
}
And this is how I apply the matrix to OpenGL:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", true, true);
GL.LoadMatrix(ref projectionMatrix);
OK, inverting was part of the answer; it helped a lot. Thanks @dari and @Spektre for the suggestion.
But the complete answer was: changing the rotation direction (right-handed -> left-handed), changing the rotation order (YXZ -> ZXY, in other words, intrinsic to extrinsic), and inverting the resulting matrix.
Before asking, I had tried each of these three alone, but never thought of using them all together.
So:
Matrix4 projectionMatrix = Helper.MatrixFromEulerAngles(new Vector3(pitch, yaw, roll), "YXZ", false, false);
projectionMatrix.Invert();
GL.LoadMatrix(ref projectionMatrix);
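The underlying idea: the sensor angles describe the device's orientation in the world, while the view matrix must rotate the world the opposite way, hence the inversion. A rough GLM equivalent of the same idea (my helper; the exact order and handedness depend on your sensor conventions, as found above):
#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/euler_angles.hpp>

// Build a view matrix that rotates the world opposite to the device's
// orientation. yaw/pitch/roll are sensor angles in radians.
glm::mat4 MakeViewMatrix(float yaw, float pitch, float roll)
{
    // Device orientation: rotate about Y (yaw), then X (pitch), then Z (roll).
    glm::mat4 deviceOrientation = glm::eulerAngleYXZ(yaw, pitch, roll);

    // For a pure rotation matrix, the inverse is simply the transpose.
    return glm::transpose(deviceOrientation);
}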
I'm trying to figure out the world-space to screen-space transform. As I understand it, in D3D11 the function XMVector3Project should handle this. However, when I use it like this:
XMVECTOR eye = XMVectorSet(10000, 0.0f, 1.5f, 0.0f);
XMVECTOR at = XMVectorSet(10000, 0.0f, 0.0f, 0.0f);
XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
auto viewMatrix = XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up));
XMVECTOR vec = XMVector3Project(XMVectorSet(0.0, 0.0, 0.0, 1.0f), 0, 0, 480, 800, 0, 1, XMMatrixIdentity(), viewMatrix, XMMatrixIdentity());
it returns the point (240, 480). I don't understand how that's possible, because even with no projection matrix, when the view matrix looks at a point far away at x = 10000, the point (0, 0, 0) shouldn't show up on screen at all.
That's just my understanding, probably wrong, so I would like to know: is this the intended behaviour?
I think the problem here is your use of XMMatrixTranspose. DirectXMath (aka XNAMath version 3, aka xboxmath) functions are all written assuming you have row-major matrices, either left-handed or right-handed. By applying XMMatrixTranspose to the look-at matrix, you are making it column-major. While this is commonly done as a last step before setting a matrix into a constant buffer for consumption by HLSL (see the MSDN DirectXMath Programmer's Guide and the MSDN HLSL docs for details), the resulting matrix doesn't make sense passed directly to XMVector3Project.
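For illustration, here is a sketch of the corrected call: drop the transpose so the view matrix stays row-major, as DirectXMath expects (values copied from the question).
using namespace DirectX;

XMVECTOR eye = XMVectorSet(10000.0f, 0.0f, 1.5f, 0.0f);
XMVECTOR at  = XMVectorSet(10000.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR up  = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

// Row-major view matrix; no XMMatrixTranspose here.
XMMATRIX viewMatrix = XMMatrixLookAtRH(eye, at, up);

// Project the world origin into a 480 x 800 viewport.
XMVECTOR screenPoint = XMVector3Project(
    XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), // world-space point
    0.0f, 0.0f, 480.0f, 800.0f,          // viewport x, y, width, height
    0.0f, 1.0f,                          // viewport min/max depth
    XMMatrixIdentity(),                  // projection
    viewMatrix,
    XMMatrixIdentity());                 // world
With the transpose removed, the origin projects far outside the viewport, which matches your intuition that a camera looking at x = 10000 should not show the point (0, 0, 0).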
BTW, I'm assuming your use of XMVectorSet here is just for testing, but the efficient way to code a constant XMVECTOR is using XMVECTORF32.
static const XMVECTORF32 eye = { 10000, 0.0f, 1.5f, 0.0f };
static const XMVECTORF32 at = { 10000, 0.0f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
I am experimenting with different matrices, studying their effect on a textured quad. So far I have implemented scaling, rotation, and translation matrices fairly easily, by using the following method on my position vectors:
for (int a = 0; a < noOfVertices; a++)
{
    myVectorPositions[a] = SlimDX.Vector3.TransformCoordinate(myVectorPositions[a], myPerspectiveMatrix);
}
However, what I want to do is be able to position my vectors using world-space coordinates, not object-space ones.
At the moment my position vectors are declared as follows:
myVectorPositions[0] = new Vector3(-0.1f, 0.1f, 0.5f);
myVectorPositions[1] = new Vector3(0.1f, 0.1f, 0.5f);
myVectorPositions[2] = new Vector3(-0.1f, -0.1f, 0.5f);
myVectorPositions[3] = new Vector3(0.1f, -0.1f, 0.5f);
On the other hand (and as part of learning about matrices), I have read that I need to apply a matrix to get to screen coordinates. I've been looking through the SlimDX API docs and can't seem to pin down which one I should be using.
In any case, hopefully the above makes sense and what I am trying to achieve is clear. I'm aiming for a simple 1024 x 768 window as my application area, and I want to position my textured quad at (10, 10). How do I go about this? I'm most confused right now.
I am not familiar with SlimDX, but in native Direct3D, if you want to draw a quad in screen coordinates, you should define the vertex format as pre-transformed; that is, you specify the screen coordinates directly instead of using the D3D transform engine to transform your vertices. The vertex format definition is below:
#define SCREEN_SPACE_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
and you can define your vertices like this:
struct ScreenVertex
{
    float x, y, z, rhw; // pre-transformed position (D3DFVF_XYZRHW)
    DWORD color;        // diffuse color (D3DFVF_DIFFUSE)
};

ScreenVertex Vertices[] =
{
    // Triangle 1
    { 150.0f, 150.0f, 0.0f, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 350.0f, 150.0f, 0.0f, 1.0f, 0xff00ff00, },
    { 350.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
    // Triangle 2
    { 150.0f, 150.0f, 0.0f, 1.0f, 0xffff0000, },
    { 350.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
    { 150.0f, 350.0f, 0.0f, 1.0f, 0xff00ffff, },
};
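To render these, set the FVF and draw the vertices directly; a sketch, assuming device is your IDirect3DDevice9:
// No vertex transform runs for D3DFVF_XYZRHW vertices; the positions are
// taken as already being in screen space.
device->SetFVF(SCREEN_SPACE_FVF);
device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 2, Vertices, sizeof(ScreenVertex));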
By default, screen space in 3D systems goes from -1 to 1 (where (-1, -1) is the bottom-left corner and (1, 1) the top-right).
To position vertices in pixel units, you need to convert those pixel values into this space. So, for example, pixel (10, 30) on a 1024 x 768 screen is:
position.x = 10.0f * (1.0f / 1024.0f); // maps to [0, 1]
position.x *= 2.0f; // maps to [0, 2]
position.x -= 1.0f; // maps to [-1, 1]
Now for y you do:
position.y = 30.0f * (1.0f / 768.0f); // maps to [0, 1]
position.y = 1.0f - position.y; // inverts y
position.y *= 2.0f; // maps to [0, 2]
position.y -= 1.0f; // maps to [-1, 1]
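Wrapped up as a small helper (a sketch; the function name and signature are mine):
// Convert a pixel coordinate (origin top-left) to normalized device
// coordinates (origin center, y up, both axes in [-1, 1]).
void PixelToNdc(float px, float py, float screenW, float screenH,
                float& ndcX, float& ndcY)
{
    ndcX = (px / screenW) * 2.0f - 1.0f;
    ndcY = (1.0f - py / screenH) * 2.0f - 1.0f;
}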
Also, if you want to apply transforms to your quads, it is better to send the transformation to the shader (and do the vector transformation in the vertex shader) rather than doing the multiplications on the vertices, since then you will not need to update your vertex buffer every time.
This is OpenGL on iPhone 4.
I'm drawing a scene using lighting and materials. Here is a snippet of my code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, -1, 1);
CGFloat ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f };
CGFloat diffuseLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
CGFloat direction[] = { 0.0f, 0.0f, -20.0f, 0 };
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glShadeModel(GL_FLAT);
glEnable(GL_LIGHTING);
glDisable(GL_COLOR_MATERIAL);
float blankColor[4] = {0,0,0,1};
float whiteColor[4] = {1,1,1,1};
float blueColor[4] = {0,0,1,1};
glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor);
glEnable(GL_CULL_FACE);
glVertexPointer(3, GL_FLOAT, 0, verts.pdata);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals.pdata);
glEnableClientState(GL_NORMAL_ARRAY);
glDrawArrays (GL_TRIANGLES, 0, verts.size/3);
The problem is that instead of seeing a BLUE diffuse color I see white. It fades as I rotate the model to its side, but I can't understand why it's not using my blue color.
BTW, if I change glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) to glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor), then I do see the blue color. If I call glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) and then glMaterialfv(GL_BACK, GL_DIFFUSE, blueColor), I see white again. So it looks like GL_FRONT_AND_BACK shows it, but the rest of the combinations show white. Can anyone explain this to me?
This is because of the winding order: which faces count as front-facing depends on whether their vertices appear clockwise or counter-clockwise on screen.
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
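In code, this combination is just a few calls; a minimal sketch (the glFrontFace choice must match how your triangles are actually wound):
glFrontFace(GL_CCW);    // counter-clockwise winding is front-facing (the default)
glCullFace(GL_BACK);    // cull back-facing primitives
glEnable(GL_CULL_FACE); // turn face culling on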
OpenGL uses your primitive's window-space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC (device coordinate) space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension can improve rendering performance.
(quoted from the OpenGL FAQ, section 10.090)