The position of light in OpenGL - opengl-es

I am trying to draw a rotating cube and add a spot light at a fixed position in front of it. But because I set the wrong value on the z-axis, the light didn't show up. After I tried different light positions, the cube finally displayed as I wished, but I still don't know why this value works.
Here is the wrong code, although I think the value is reasonable:
Matrix.setIdentityM(mLightModelMatrix, 0);
Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, 0.5f);
Matrix.multiplyMV(mLightPosInWorldSpace, 0, mLightModelMatrix, 0, mLightPosInModelSpace, 0);
Matrix.multiplyMV(mLightPosInEyeSpace, 0, mViewMatrix, 0, mLightPosInWorldSpace, 0);
GLES20.glUniform3f(muLightPosHandler,
        mLightPosInEyeSpace[0],
        mLightPosInEyeSpace[1],
        mLightPosInEyeSpace[2]);
Here is the right code, but I don't know why it works:
Matrix.setIdentityM(mLightModelMatrix, 0);
Matrix.translateM(mLightModelMatrix, 0, 0.0f, 0.0f, 2.8f);
Matrix.multiplyMV(mLightPosInWorldSpace, 0, mLightModelMatrix, 0, mLightPosInModelSpace, 0);
Matrix.multiplyMV(mLightPosInEyeSpace, 0, mViewMatrix, 0, mLightPosInWorldSpace, 0);
GLES20.glUniform3f(muLightPosHandler,
        mLightPosInEyeSpace[0],
        mLightPosInEyeSpace[1],
        mLightPosInEyeSpace[2]);
The only difference between these two snippets is the z value of the light position. You can get all the source code from here.
The reason I think 0.0f, 0.0f, 0.5f is a reasonable position is that the center of the cube's front face is at 0.0f, 0.0f, 0.5f before transformation, so it should give the cube the strongest light.

It's because you have a spot light, which is defined by a light cone. At any distance closer than 2.8 the cone isn't wide enough to light the entire cube, as demonstrated by this picture:
To get the whole face of the cube lit, you either need to move the light further away (as you have found) or widen the light cone.
With only two triangles per face you won't get much lighting in the first case, as none of the vertices are lit; it's the light at the vertices that determines the light across the face. If you break the model down into more faces you'll see the effect.
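To make the cone argument concrete, here is a rough sanity check (a sketch only, not from the original code: it assumes a unit cube centered at the origin, so the front-face corners sit at (±0.5, ±0.5, 0.5), a spot direction pointing down the negative z-axis, and a hypothetical 30-degree cutoff half-angle):
// Assumed values: unit cube centered at the origin (front-face corner at 0.5, 0.5, 0.5),
// spot direction (0, 0, -1), hypothetical cutoff half-angle of 30 degrees.
float[] corner = {0.5f, 0.5f, 0.5f};
float[][] lightPositions = {{0.0f, 0.0f, 0.5f}, {0.0f, 0.0f, 2.8f}};
float cutoffDegrees = 30.0f;
for (float[] light : lightPositions) {
    float dx = corner[0] - light[0];
    float dy = corner[1] - light[1];
    float dz = corner[2] - light[2];
    float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    double angle = Math.toDegrees(Math.acos(-dz / len)); // angle off the (0, 0, -1) spot axis
    System.out.printf("light z=%.1f -> corner is %.0f deg off the spot axis (%s the cone)%n",
            light[2], angle, angle <= cutoffDegrees ? "inside" : "outside");
}
// With the light at z = 0.5 the corner sits ~90 deg off the axis, outside any reasonable cone;
// at z = 2.8 it is only ~17 deg off, so the whole front face falls inside the cone.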

Related

glm::ortho in Vulkan

I have a simple triangle with these vertices:
{ { 0.0f, -0.1f } },
{ { 0.1f, 0.1f } },
{ { -0.1f, 0.1f } }
Matrix:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float)swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
This code works fine; I see the triangle. But if I try to use an orthographic projection:
ubo.model = glm::mat4(1.0f);
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::ortho(0.0F, swapChainExtent.width, swapChainExtent.height, 0.0F);
ubo.proj[1][1] *= -1;
I do not see anything. :(
I tried googling this problem and found no solution. What's my mistake?
Update:
Solved:
rasterizer.frontFace = VK_FRONT_FACE_CLOCKWISE;
...
ubo.proj = glm::ortho(0.0F, swapChainExtent.width, swapChainExtent.height, 0.0F, 0.1F, 1000.0F);
First, I don't know what Z range this overload of glm::ortho produces. Maybe your vertices don't fit in that range. There is an overload of this function that lets you provide a Z/depth range explicitly. Try providing a range that covers your Z/depth values, or try moving the vertices further away from or closer to the camera, or simply provide a generous range like -1000 to +1000.
And there is another problem. How big is your swapchain? If it is, for example, 800 x 600 pixels, then you are specifying a rendering area from [0, 0] to [800, 600] (in pixels). But you provide vertices that lie in an area smaller than a single pixel, in the [-0.1, -0.1] to [0.1, 0.1] range (still in pixels). It's no surprise you don't see anything: your whole triangle is smaller than a single pixel.
These two problems together are probably why you see nothing. When you fix only the depth, you still see nothing because the triangle is too small; when you change only the size of the triangle (without changing depth), the object is view-frustum culled. Change the size of your triangle and then try adjusting the depth values of the vertices.
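To put numbers on both points (a sketch only, written with the Android Matrix API used elsewhere on this page rather than glm, and assuming the 800 x 600 example swapchain from above): the projection needs a depth range that contains the geometry, and once the projection is in pixel units the vertices have to be in pixel units too.
float width = 800.0f, height = 600.0f;               // assumed swapchain size
float[] proj = new float[16];
// Pixel-space ortho projection with an explicit near/far range that contains the geometry
// (the accepted fix above uses 0.1 .. 1000 as well):
Matrix.orthoM(proj, 0, 0.0f, width, height, 0.0f, 0.1f, 1000.0f);
// With a pixel-space projection, the triangle must be specified in pixels as well.
// The original +/-0.1 coordinates span less than one pixel; a ~100 px triangle is visible:
float[] triangle = {
        400.0f, 250.0f,   // top
        450.0f, 350.0f,   // bottom right
        350.0f, 350.0f,   // bottom left
};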

OpenGL Matrix scale then Translate is still scaling my position

I am trying to position my text model mesh on screen. Using the code below, it draws the mesh as the code suggests, with the left of the mesh at the center of the screen. But I would like to position it at the left edge of the screen, and this is where I get stuck. If I uncomment the Matrix.translateM line, I would expect the position to now be at the left of the screen, but it seems that the position is being scaled (!?)
A few scenarios I have tried:
a.) Matrix.scaleM only (no Matrix.translateM) = the left of the mesh is positioned at 0.0f (the center of the screen), and the scale is correct.
b.) Matrix.translateM only (no Matrix.scaleM) = the left of the mesh is positioned at -1.77f (the left of the screen) correctly, but the scale is incorrect.
c.) Matrix.translateM then Matrix.scaleM, or Matrix.scaleM then Matrix.translateM = the scale is correct, but the position is incorrect. It seems the position is scaled and ends up much closer to the center than to the left of the screen.
I am using OpenGL ES 2.0 in Android Studio programming in Java.
Screen bounds (as set up by Matrix.orthoM):
left: -1.77, right: 1.77 (center is 0.0), top: -1.0, bottom: 1.0 (center is 0.0)
The mesh height is 1.0f, so without Matrix.scaleM the mesh takes up the entire screen height.
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f; // 64px height to projection matrix
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale); // these two lines
//Matrix.translateM(modelMatrix, 0, -ratio, 0.0f, 0.0f); // these two lines
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mMVPMatrix, 0, modelMatrix, 0);
Thanks, Ed Halferty and Matic Oblak, you are both correct. As Matic suggested, I have now put Matrix.translateM first and Matrix.scaleM second. I have also ensured that mMVPMatrix is indeed model-view-projection, and not projection-view-model.
Also, with Matrix.translateM now moving the model mesh to -1.0f, it is at the left edge of the screen, which is better than -1.77f in any case.
Correct position + scale, thanks!
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f;
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, -1.0f, 0.0f, 0.0f);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale);
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, modelMatrix, 0, mMVPMatrix, 0);
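For what it's worth, the reason the order matters: Matrix.translateM and Matrix.scaleM both post-multiply the matrix in place, so calling translateM first and scaleM second builds modelMatrix = T * S. A vertex is then scaled about the mesh's own origin first and moved by an unscaled translation afterwards. A minimal check (values assumed, not from the question):
float[] m = new float[16];
Matrix.setIdentityM(m, 0);
Matrix.translateM(m, 0, -1.0f, 0.0f, 0.0f);  // m = T
Matrix.scaleM(m, 0, 0.5f, 0.5f, 0.5f);       // m = T * S
float[] v = {1.0f, 0.0f, 0.0f, 1.0f};        // a point one unit right of the mesh origin
float[] out = new float[4];
Matrix.multiplyMV(out, 0, m, 0, v, 0);
// out is (-0.5, 0, 0, 1): the point is scaled about the origin, then shifted by an unscaled -1.
// With the two calls swapped (m = S * T), the same point ends up at (0, 0, 0): the translation
// itself gets scaled by 0.5, which is the "position being scaled" effect from the question.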

How to scale and rotate textures in OpenGL ES?

I am using OpenGL ES for my iPhone game. To scale and rotate my object I do this:
glScalef( scaleX , scaleY ,1);
glRotatef(rotationZ, 0.0f, 0.0f, 1.0f);
I am using an ortho projection set up with glOrthof(-1, 1, -1, 1, -1, 1). My problem is that when I rotate objects, the image gets skewed. I understand why that happens: I am scaling with respect to the screen size, so rotating changes the image's proportions.
What can I do to prevent it from getting skewed?
glViewport(0,0, (GLint)screenWidth, (GLint)screenHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1,1,-1,1,-1,1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(positionX, positionY,0.0f);
glScalef(scaleX , scaleY ,1);
glRotatef(rotationZ, 0.0f, 0.0f, 1.0f);
Use an ortho projection that matches the aspect ratio of your screen rather than just passing a bunch of ones. Unless you have a square screen, your left/right range shouldn't be the same as your top/bottom range, or you will see skew.
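As a sketch of that (written with the Android GLES10 bindings for consistency with the rest of this page; the iPhone C calls are identical apart from the class prefix, and the screen size variables are the ones from the question's glViewport call):
float aspect = (float) screenWidth / (float) screenHeight;
GLES10.glMatrixMode(GLES10.GL_PROJECTION);
GLES10.glLoadIdentity();
// One unit now covers the same number of pixels horizontally and vertically,
// so rotating no longer skews the object:
GLES10.glOrthof(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
GLES10.glMatrixMode(GLES10.GL_MODELVIEW);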

glMaterialfv not working for me

This is OpenGL on iPhone 4.
I'm drawing a scene using lighting and materials. Here is a snippet of my code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, -1, 1);
CGFloat ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f };
CGFloat diffuseLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
CGFloat direction[] = { 0.0f, 0.0f, -20.0f, 0 };
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glShadeModel(GL_FLAT);
glEnable(GL_LIGHTING);
glDisable(GL_COLOR_MATERIAL);
float blankColor[4] = {0,0,0,1};
float whiteColor[4] = {1,1,1,1};
float blueColor[4] = {0,0,1,1};
glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor);
glEnable(GL_CULL_FACE);
glVertexPointer(3, GL_FLOAT, 0, verts.pdata);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals.pdata);
glEnableClientState(GL_NORMAL_ARRAY);
glDrawArrays (GL_TRIANGLES, 0, verts.size/3);
The problem is that instead of seeing the BLUE diffuse color, I see white. It fades out if I rotate the model to its side, but I can't understand why it's not using my blue color.
BTW, if I change glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) to glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor) then I do see the blue color. If I call glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor); and then glMaterialfv(GL_BACK, GL_DIFFUSE, blueColor); I see white again. So it looks like GL_FRONT_AND_BACK shows it, but the other combinations show white. Can anyone explain this to me?
This is because of the clockwise winding order.
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension …
from here
You can also read more here.
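To tie the quoted FAQ back to code, the culling state it describes boils down to three calls (shown here in Android GLES10 syntax for consistency with the rest of this page; the iPhone C calls are the same without the prefix):
GLES10.glEnable(GLES10.GL_CULL_FACE);  // face culling must be explicitly enabled
GLES10.glCullFace(GLES10.GL_BACK);     // discard back-facing primitives
GLES10.glFrontFace(GLES10.GL_CCW);     // counter-clockwise winding counts as front-facing (the default)
// If a model is wound clockwise instead, either pass GL_CW here or, as the question
// already found, set the material with GL_FRONT_AND_BACK.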

How to draw a texture as a 2D background in OpenGL ES 2.0?

I'm just getting started with OpenGL ES 2.0; what I'd like to do is create some simple 2D output. Given a resolution of 480x800, how can I draw a background texture?
[My development environment is Java / Android, so examples directly relating to that would be best, but other languages would be fine.]
Even though you're on Android, I created an iPhone sample application that does this for frames of video coming in. You can download the code for this sample from here. I have a writeup about this application, which does color-based object tracking using live video, that you can read here.
In this application, I draw two triangles to generate a rectangle, then texture that using the following coordinates:
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
To pass through the video frame as a texture, I use a simple program with the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
and the following fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main()
{
gl_FragColor = texture2D(videoFrame, textureCoordinate);
}
Drawing is a simple matter of using the right program:
glUseProgram(directDisplayProgram);
setting the texture uniform:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
setting the attributes:
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
and then drawing the triangles:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
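Since the question asks about Android/Java, here is a rough GLES20 translation of the same draw path (a sketch only: the program, texture id, uniform and attribute locations are assumed to have been created during setup, the vertex data is wrapped in direct FloatBuffers as the Java bindings require, and the texture coordinates may need flipping depending on your image source):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

// ... inside the renderer, once per frame:
float[] squareVertices = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
};
float[] textureVertices = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
};
FloatBuffer vertexBuffer = ByteBuffer.allocateDirect(squareVertices.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuffer.put(squareVertices).position(0);
FloatBuffer texCoordBuffer = ByteBuffer.allocateDirect(textureVertices.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
texCoordBuffer.put(textureVertices).position(0);

GLES20.glUseProgram(program);                                     // program: assumed from setup
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, backgroundTextureId);  // assumed texture id
GLES20.glUniform1i(textureUniformLocation, 0);                    // assumed uniform location

GLES20.glVertexAttribPointer(positionAttrib, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer);
GLES20.glEnableVertexAttribArray(positionAttrib);
GLES20.glVertexAttribPointer(texCoordAttrib, 2, GLES20.GL_FLOAT, false, 0, texCoordBuffer);
GLES20.glEnableVertexAttribArray(texCoordAttrib);

GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);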
You don't really draw a background; instead you draw a rectangle (or, more correctly, two triangles forming a rectangle) and apply a texture to it. This isn't different at all from drawing any other object on screen.
There are plenty of places showing how this is done; maybe there's even an Android example project showing this.
The tricky part is getting something to display in front of or behind something else. For this to work, you need to set up a depth buffer and enable depth testing (glEnable(GL_DEPTH_TEST)). Your vertices also need a Z coordinate, and you have to tell OpenGL (via your vertex attribute pointer) that each vertex is made up of three values, not two.
If you don't do that, objects will be rendered in the order their glDrawElements() calls are issued (meaning whichever you draw last will end up obscuring the rest).
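A minimal version of that setup, in GLES20 terms (the config-chooser line is an assumed Android-specific way to get a depth buffer; nothing here comes from the question's code):
// When creating the surface, ask for a depth buffer (Android GLSurfaceView example):
// glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);   // RGBA8888 + 16-bit depth
// Then, each frame, enable the test and clear both color and depth before drawing:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);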
My advice is to not have a background image or do anything fancy until you get the hang of it. OpenGL ES 2.0 has kind of a steep learning curve, and tutorials on ES 1.x don't really help with getting 3D to work because they can use helper functions like gluPerspective, which 2.0 just doesn't have. Start with creating a triangle on a background of nothing. Next, make it a square. Then, if you want to go fancy already, add a texture. Play with positions. See what happens when you change the Z value of your vertices. (Hint: Not a lot, if you don't have depth testing enabled. And even then, if you don't have perspective projection, objects won't get smaller the farther they are away, so it will still seem as if nothing happened)
After a few days, it stops being so damn frustrating, and you finally "get it", mostly.
