SDL OpenGL Code Appears To Have No Depth - windows

I am running this code on a PC (compiled with Code::Blocks 10.05).
When I used to write basic OpenGL code with GLUT (the GL Utility Toolkit), everything worked fine.
Now that I'm running OpenGL through the SDL framework, when I change the z-axis (the third parameter) of the translation function that positions a geometric primitive (a quad), the 3D space appears to have no depth: the quad either covers the complete screen or disappears entirely once the depth passes a certain point.
Am I missing anything?
#include "SDL/SDL.h"
#include <string>
#include "SDL/SDL_image.h"
#include "SDL/SDL_opengl.h"
#include <GL/gl.h>
// Declare Constants
const int FRAMES_PER_SECOND = 60;
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
const int SCREEN_BPP = 32;
// Create Event Listener
SDL_Event event;
// Declare Variable Used To Control How Deep Into The Screen The Quad Is
GLfloat zpos(0);
// Loop Switches
bool gamestarted(false);
bool exited(false);
// Prototype For My GL Draw Function
void drawstuff();
// Code Begins
void init_GL() {
    glShadeModel(GL_SMOOTH);                            // Enable Smooth Shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);               // Black Background
    glClearDepth(1.0f);                                 // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);                            // Enables Depth Testing
    glDepthFunc(GL_LEQUAL);                             // The Type Of Depth Testing To Do
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);  // Really Nice Perspective Calculations
    glViewport(0, 0, 640, 513);                         // Viewport Change
    glOrtho(0, 640, 0, 513, -1.0, 1.0);                 // Puts Stuff Into View
}

bool init() {
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_SetVideoMode(640, 513, 32, SDL_OPENGL);
    return true;
}

void drawstuff() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, zpos);
    glColor3f(0.5f, 0.5f, 0.5f);
    glBegin(GL_QUADS);
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
    glEnd();
}

int main(int argc, char* args[]) {
    init();
    init_GL();
    while (exited == false) {
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                exited = true;
            }
            if (event.type == SDL_MOUSEBUTTONDOWN) {
                zpos -= .1;
            }
        }
        glClear(GL_COLOR_BUFFER_BIT);
        drawstuff();
        SDL_GL_SwapBuffers();
    }
    SDL_Quit();
    return 0;
}

When you say depth, do you refer to a perspective effect? You need to use a perspective projection matrix (see gluPerspective) if you want things farther away to appear smaller.
You're currently using orthographic projection (glOrtho), which does not have any perspective effect.
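For illustration, here is a minimal sketch of what a perspective setup could look like in this program, replacing the glOrtho-based init_GL(); the 45-degree field of view and the 0.1/100 clip distances are arbitrary example values, not taken from the question:

/* Sketch only: use a perspective projection so that translating the quad
 * along -Z visibly changes its apparent size. */
#include <GL/glu.h>   /* gluPerspective */

void init_GL() {
    glViewport(0, 0, 640, 513);

    glMatrixMode(GL_PROJECTION);          /* set up the projection matrix ... */
    glLoadIdentity();
    gluPerspective(45.0, 640.0 / 513.0, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);           /* ... then switch back for drawing */
    glLoadIdentity();

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
}

With a projection like this the camera looks down -Z, so the quad only becomes visible once zpos lies between roughly -0.1 and -100, and it then shrinks as it is translated further away.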

I don't know the root cause of your problem, but I do see an issue in your code.
After rendering one frame, you clear only the color buffer in main() and forget to clear the depth buffer. Use glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); in your main function and see the effect.
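In code, the suggested loop would look roughly like this (note that drawstuff() above already clears both bits at the start of the frame, so one of the two clears becomes redundant and can be dropped):

while (exited == false) {
    while (SDL_PollEvent(&event)) {
        /* ... handle SDL_QUIT and SDL_MOUSEBUTTONDOWN as before ... */
    }
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear color AND depth */
    drawstuff();
    SDL_GL_SwapBuffers();
}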

I think I know the answer.
In orthographic projection (after projection and the perspective divide):
Zb = -2/(f-n) - (f+n)/((f-n)·Z)
where Zb is the z value written to the depth buffer and Z is the z value of the vertex you supply.
In your case, glOrtho(0, 640, 0, 513, -1.0, 1.0) gives f = 1.0 and n = -1.0, so Zb is always -2/(f-n) = -1, which makes the depth of all your primitives the same.
See Appendix C.2.5 of the Red Book: it gives the matrix for orthographic projection, followed by the perspective divide.
Another thing to keep in mind with perspective projection is that zNear must not be set to zero, otherwise all primitives end up with the same depth-buffer value.

Related

Open GL ES 2.0 draw a square

I'm trying to follow the Android Developers "Displaying graphics with OpenGL ES" tutorial.
In the first part I had to draw a triangle, and that worked fine. Now I should draw a square, using a Square class defined by following the tutorial.
When I try to draw it (using a draw() method that I mostly copied from the Triangle class), the screen shows only a triangle that is half of the defined square (I have, of course, commented out the triangle.draw() line).
Here's my code; does anyone have a hint on how to resolve this?
Square.java:
public class Square {
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
private final int mProgram;
private final String vertexShaderCode =
"attribute vec4 vPosition;" +
"void main() {" +
" gl_Position = vPosition;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
// Set color with red, green, blue and alpha (opacity) values
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices
public Square() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
int vertexShader = MyGLRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyGLRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
private int mPositionHandle;
private int mColorHandle;
private final int vertexCount = squareCoords.length / COORDS_PER_VERTEX;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex
public void draw() {
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// Draw the square
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
}
}
See Triangle primitives
You have 4 vertex coordinates, which have the following arrangement:
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
0     3
x     x
|     |
|     |
x-----x
1     2
This pattern matches the primitive type GL_TRIANGLE_FAN, but not the primitive type GL_TRIANGLES.
Change the primitive type to solve your issue:
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, vertexCount);
Note for the primitive type GL_TRIANGLES you would need 6 vertex coordinates, e.g. with the following arrangement:
0        3     5
x        x-----x
| \       \    |
|  \       \   |
x-----x     \  x
1     2        4
The primitive type GL_TRIANGLES matches your index buffer:
private short drawOrder[] = { 0, 1, 2, 0, 2, 3 };
If you want to draw the primitives from an index list, then you have to use glDrawElements:
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

OpenGL ES 2.0 GLSL Barrel Distortion Shader not working

I took the code from the OpenGL ES 2.0 tutorial at github.com/learnopengles/Learn-OpenGLES-Tutorials
and customized the class "LessonOneRenderer" to test a barrel distortion shader like the one described on this site:
www.geeks3d.com/20140213/glsl-shader-library-fish-eye-and-dome-and-barrel-distortion-post-processing-filters/2/
This is the "untouched" result, without calling the distort() function:
http://fs1.directupload.net/images/150125/hw746rcn.png
and the result when calling the function:
http://fs2.directupload.net/images/150125/f84arxoj.png
/**
* This class implements our custom renderer. Note that the GL10 parameter passed in is unused for OpenGL ES 2.0
* renderers -- the static class GLES20 is used instead.
*/
public class LessonOneRenderer implements GLSurfaceView.Renderer
{
/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];
/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];
/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];
/** Store our model data in a float buffer. */
private final FloatBuffer mTriangle1Vertices;
/** This will be used to pass in the transformation matrix. */
private int mMVPMatrixHandle;
/** This will be used to pass in model position information. */
private int mPositionHandle;
/** This will be used to pass in model color information. */
private int mColorHandle;
/** How many bytes per float. */
private final int mBytesPerFloat = 4;
/** How many elements per vertex. */
private final int mStrideBytes = 7 * mBytesPerFloat;
/** Offset of the position data. */
private final int mPositionOffset = 0;
/** Size of the position data in elements. */
private final int mPositionDataSize = 3;
/** Offset of the color data. */
private final int mColorOffset = 3;
/** Size of the color data in elements. */
private final int mColorDataSize = 4;
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
// X Y Z
static float squareCoords[] = { -0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
static final int vertexStride = COORDS_PER_VERTEX * 4;
static final int vertexCount = 4;
private ByteBuffer dlb;
/**
* Initialize the model data.
*/
public LessonOneRenderer()
{
ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4); // (# of coordinate values * 4 bytes per float)
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // (# of coordinate values * 2 bytes per short)
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// Define points for equilateral triangles.
// This triangle is red, green, and blue.
final float[] triangle1VerticesData = {
// X, Y, Z,
// R, G, B, A
-0.5f, 0.5f, 0.0f, // top left
1.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.0f,// bottom left
0.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, 0.0f, // bottom right
0.0f, 1.0f, 0.0f, 1.0f,
0.5f, 0.5f, 0.0f, // top right
0.0f, 1.0f, 0.0f, 1.0f
};
// Initialize the buffers.
mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangle1Vertices.put(triangle1VerticesData).position(0);
// initialize byte buffer for the draw list
dlb = ByteBuffer.allocateDirect(drawOrder.length * 2); // (# of coordinate values * 2 bytes per short)
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
}
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
// Set the background clear color to gray.
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
// Position the eye behind the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = 1.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "varying vec4 a_Pos; \n"
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+" "
+"vec4 Distort(vec4 p){ \n"
+" float BarrelPower = 0.4; \n"
+" vec2 v = p.xy / p.w; \n"
+" float radius = length(v); \n"
+" if (radius > 0.0){ \n"
+" float theta = atan(v.y,v.x); \n"
+" radius = pow(radius, BarrelPower);\n"
+" v.x = radius * cos(theta); \n"
+" v.y = radius * sin(theta); \n"
+" p.xy = v.xy * p.w; \n"
+" }"
+" \n"
+" return p; \n"
+" } \n"
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
+ " vec4 P = u_MVPMatrix * a_Position;" // It will be interpolated across the triangle.
+ " gl_Position = Distort(P); \n" // gl_Position is a special variable used to store the final position.
+ " \n" // Multiply the vertex by the matrix to get the final point in
+ "} \n"; // normalized screen coordinates.
final String fragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
+ "varying vec4 a_Pos; \n" // precision in the fragment shader.
+ "varying vec4 v_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ vec4 c = vec4(1.0); \n"
+ " gl_FragColor = v_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
// Load in the vertex shader.
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
if (vertexShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(vertexShaderHandle, vertexShader);
// Compile the shader.
GLES20.glCompileShader(vertexShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(vertexShaderHandle);
vertexShaderHandle = 0;
}
}
if (vertexShaderHandle == 0)
{
throw new RuntimeException("Error creating vertex shader.");
}
// Load in the fragment shader shader.
int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
if (fragmentShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);
// Compile the shader.
GLES20.glCompileShader(fragmentShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(fragmentShaderHandle);
fragmentShaderHandle = 0;
}
}
if (fragmentShaderHandle == 0)
{
throw new RuntimeException("Error creating fragment shader.");
}
// Create a program object and store the handle to it.
int programHandle = GLES20.glCreateProgram();
if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);
// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);
// Bind attributes
GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
GLES20.glBindAttribLocation(programHandle, 1, "a_Color");
// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);
// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}
if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}
// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorHandle = GLES20.glGetAttribLocation(programHandle, "a_Color");
// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height)
{
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 glUnused)
{
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// Do a complete rotation every 10 seconds.
long time = SystemClock.uptimeMillis() % 10000L;
float angleInDegrees = (360.0f / 10000.0f) * ((int) time);
// Draw the triangle facing straight on.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle1Vertices);
//Draw Square
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, -0.7f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
this.drawSquare(mTriangle1Vertices);
//Draw Square
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.7f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
this.drawSquare(mTriangle1Vertices);
// Draw one translated a bit down and rotated to be flat on the ground.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.translateM(mModelMatrix, 0, 0.0f, -1.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, 90.0f, 1.0f, 0.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle2Vertices);
// Draw one translated a bit to the right and rotated to be facing to the left.
//Matrix.setIdentityM(mModelMatrix, 0);
//Matrix.translateM(mModelMatrix, 0, 1.0f, 0.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, 90.0f, 0.0f, 1.0f, 0.0f);
//Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
//drawTriangle(mTriangle3Vertices);
// test.draw();
}
/**
* Draws a triangle from the given vertex data.
*
* @param aTriangleBuffer The buffer containing the vertex data.
*/
private void drawTriangle(final FloatBuffer aTriangleBuffer)
{
// Pass in the position information
aTriangleBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the color information
aTriangleBuffer.position(mColorOffset);
GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
}
private void drawSquare(final FloatBuffer aTriangleBuffer)
{
// Pass in the position information
aTriangleBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the color information
aTriangleBuffer.position(mColorOffset);
GLES20.glVertexAttribPointer(mColorHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false,
vertexStride, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);
}
}
Does anybody have an idea what I'm missing to distort the squares like in the example?
You apply the distortion effect in the vertex shader, so all you are doing is moving the 4 corners of those squares. You can't achieve the desired effect that way.
There are different options. You could theoretically use a higher tessellation for your squares: create a 2D grid of vertices, which can then be moved individually according to the distortion. If the grid is fine enough, the piecewise-linear nature of that approximation is no longer visible.
However, such effects are typically done differently. One usually does not tune the geometric models for specific effects. Distortions of that kind are usually applied in screen space, as a post-processing effect (and the link you posted does exactly this). The idea is that you render the whole scene into a texture and then draw a single textured rectangle filling the whole screen as the final pass. In that pass you apply the distortion to the texture coordinates in the fragment shader, just as in the original example.
All of this can be done in OpenGL ES, too. The keywords to look for are RTT (render to texture) and FBOs (framebuffer objects). In OpenGL ES 2.0, framebuffer objects are part of the core API (on ES 1.x they were exposed through the OES_framebuffer_object extension).
However, using that stuff is definitely a bit more advanced than your typical "lesson 1" of some tutorial, so you might want to read some other lessons first before trying it... ;)
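To make the post-processing idea concrete, here is a rough fragment-shader sketch, written as a string constant in the same spirit as the shaders embedded elsewhere in this thread. The names u_SceneTexture and v_TexCoord are invented for the example; the scene is assumed to have already been rendered into an FBO-backed texture that is sampled while drawing one full-screen quad:

/* Sketch only: screen-space barrel distortion as a post-processing pass.
 * u_SceneTexture holds the scene rendered into an FBO; v_TexCoord are the
 * texture coordinates of a full-screen quad in the range [0,1]. */
static const char *barrelFragmentShader =
    "precision mediump float;                               \n"
    "uniform sampler2D u_SceneTexture;                      \n"
    "varying vec2 v_TexCoord;                               \n"
    "void main() {                                          \n"
    "    float BarrelPower = 0.4;                           \n"
    "    vec2 v = v_TexCoord * 2.0 - 1.0;   // to [-1,1]    \n"
    "    float radius = length(v);                          \n"
    "    if (radius > 0.0) {                                \n"
    "        float theta = atan(v.y, v.x);                  \n"
    "        radius = pow(radius, BarrelPower);             \n"
    "        v = radius * vec2(cos(theta), sin(theta));     \n"
    "    }                                                  \n"
    "    vec2 uv = (v + 1.0) * 0.5;         // back to [0,1]\n"
    "    gl_FragColor = texture2D(u_SceneTexture, uv);      \n"
    "}                                                      \n";

Because the distortion is applied per fragment to the texture lookup, it bends the whole image rather than just the four corner vertices, which avoids the problem described above.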

CPU usage is high when using opengl control class?

Suppose I have used the OpenGL control class as follows (no need to read the code in detail; I have only made slight changes so that it can be used in more than one OpenGL window):
OpenGLControl.cpp
#include "stdafx.h"
#include "OpenGLControl.h"
COpenGLControl::COpenGLControl(void)
{
m_fPosX = 0.0f; // X position of model in camera view
m_fPosY = 0.0f; // Y position of model in camera view
m_fZoom = 10.0f; // Zoom on model in camera view
m_fRotX = 0.0f; // Rotation on model in camera view
m_fRotY = 0.0f; // Rotation on model in camera view
m_bIsMaximized = false;
}
COpenGLControl::~COpenGLControl(void)
{
}
BEGIN_MESSAGE_MAP(COpenGLControl, CWnd)
ON_WM_PAINT()
ON_WM_SIZE()
ON_WM_CREATE()
ON_WM_TIMER()
ON_WM_MOUSEMOVE()
END_MESSAGE_MAP()
void COpenGLControl::OnPaint()
{
//CPaintDC dc(this); // device context for painting
ValidateRect(NULL);
}
void COpenGLControl::OnSize(UINT nType, int cx, int cy)
{
wglMakeCurrent(hdc, hrc);
CWnd::OnSize(nType, cx, cy);
if (0 >= cx || 0 >= cy || nType == SIZE_MINIMIZED) return;
// Map the OpenGL coordinates.
glViewport(0, 0, cx, cy);
// Projection view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Set our current view perspective
gluPerspective(35.0f, (float)cx / (float)cy, 0.01f, 2000.0f);
// Model view
glMatrixMode(GL_MODELVIEW);
wglMakeCurrent(NULL, NULL);
}
int COpenGLControl::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
if (CWnd::OnCreate(lpCreateStruct) == -1) return -1;
oglInitialize();
return 0;
}
void COpenGLControl::OnDraw(CDC *pDC)
{
wglMakeCurrent(hdc,hrc);
// If the current view is perspective...
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -m_fZoom);
glTranslatef(m_fPosX, m_fPosY, 0.0f);
glRotatef(m_fRotX, 1.0f, 0.0f, 0.0f);
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnTimer(UINT nIDEvent)
{
wglMakeCurrent(hdc,hrc);
switch (nIDEvent)
{
case 1:
{
// Clear color and depth buffer bits
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw OpenGL scene
oglDrawScene();
// Swap buffers
SwapBuffers(hdc);
break;
}
default:
break;
}
CWnd::OnTimer(nIDEvent);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnMouseMove(UINT nFlags, CPoint point)
{
wglMakeCurrent(hdc,hrc);
int diffX = (int)(point.x - m_fLastX);
int diffY = (int)(point.y - m_fLastY);
m_fLastX = (float)point.x;
m_fLastY = (float)point.y;
// Left mouse button
if (nFlags & MK_LBUTTON)
{
m_fRotX += (float)0.5f * diffY;
if ((m_fRotX > 360.0f) || (m_fRotX < -360.0f))
{
m_fRotX = 0.0f;
}
m_fRotY += (float)0.5f * diffX;
if ((m_fRotY > 360.0f) || (m_fRotY < -360.0f))
{
m_fRotY = 0.0f;
}
}
// Right mouse button
else if (nFlags & MK_RBUTTON)
{
m_fZoom -= (float)0.1f * diffY;
}
// Middle mouse button
else if (nFlags & MK_MBUTTON)
{
m_fPosX += (float)0.05f * diffX;
m_fPosY -= (float)0.05f * diffY;
}
OnDraw(NULL);
CWnd::OnMouseMove(nFlags, point);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglCreate(CRect rect, CWnd *parent,CString windowName)
{
CString className = AfxRegisterWndClass(CS_HREDRAW | CS_VREDRAW | CS_OWNDC, NULL, (HBRUSH)GetStockObject(BLACK_BRUSH), NULL);
CreateEx(0, className,windowName, WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN, rect, parent, 0);
// Set initial variables' values
m_oldWindow = rect;
m_originalRect = rect;
hWnd = parent;
}
void COpenGLControl::oglInitialize(void)
{
// Initial Setup:
//
static PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
32, // bit depth
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
24, // z-buffer depth
8,0,PFD_MAIN_PLANE, 0, 0, 0, 0,
};
// Get device context only once.
hdc = GetDC()->m_hDC;
// Pixel format.
m_nPixelFormat = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, m_nPixelFormat, &pfd);
// Create the OpenGL Rendering Context.
hrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hrc);
// Basic Setup:
//
// Set color to use when clearing the background.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepth(1.0f);
// Turn on backface culling
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
// Turn on depth testing
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Send draw request
OnDraw(NULL);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglDrawScene(void)
{
wglMakeCurrent(hdc, hrc);
// Wireframe Mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
// Front Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
// Back Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
// Top Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
// Bottom Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
// Right Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
// Left Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glEnd();
wglMakeCurrent(NULL, NULL);
}
MyOpenGLTestDlg.h
COpenGLControl m_oglWindow;
COpenGLControl m_oglWindow2;
MyOpenGLTestDlg.cpp
// TODO: Add extra initialization here
CRect rect;
// Get size and position of the picture control
GetDlgItem(ID_OPENGL)->GetWindowRect(rect);
// Convert screen coordinates to client coordinates
ScreenToClient(rect);
// Create OpenGL Control window
CString s1("OPEN_GL");
m_oglWindow.oglCreate(rect, this,s1);
// Setup the OpenGL Window's timer to render
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 1, 0);
CRect rect2;
GetDlgItem(ID_OPENGL2)->GetWindowRect(rect2);
ScreenToClient(rect2);
CString s2("OPEN_GL2");
m_oglWindow2.oglCreate(rect2, this,s2);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 1, 0);
The problem is that when I create only one OpenGL window, the system shows:
physical memory: 48%
CPU usage: 54%
and when I create two windows, it shows:
physical memory: 48%
CPU usage: 95%
What concerns me is that this is only for such simple geometry!
What will the usage be like for two OpenGL windows showing textures?
Is there any way to reduce the usage?
By the way, why is the usage as high as mentioned?
CPU usage doesn't actually indicate anything about the complexity of your application. If you draw in a tight loop, one frame after the other with no delay or VSYNC enabled you can achieve 100% CPU utilization. What this tells you is you are not GPU bound. The same way if your GPU usage (yes, you can measure this with vendor-specific APIs) is >95% then you are not CPU bound.
In short, you should expect to see very high CPU usage if the GPU is not doing anything particularly complicated :) You can always increase the sleep/timer interval to reduce CPU utilization. Remember that CPU utilization is measured as the time spent doing work versus the total time the OS gave a thread/process. Time spent working is inversely related to time spent waiting (for I/O, sleep, etc.). If you increase the time spent waiting, it will reduce the time spent working, and therefore your reported utilization.
You can also reduce CPU usage just by enabling VSYNC, since that will block the calling thread until the next VBLANK interval comes around (often every 16.666 ms).
It should also be noted that a 1 ms interval on your OpenGL timer is excessively short. I cannot think of many applications that need to draw 1000 times a second :) Try something slightly below your target refresh rate (e.g. for a 60 Hz monitor, try a 10-15 ms timer interval).
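As a sketch of the VSYNC suggestion on Windows, something like the following could be called once in oglInitialize() while the rendering context is current. This relies on the WGL_EXT_swap_control extension; the function pointer has to be fetched at runtime and may be unavailable on some drivers, so it is checked here:

#include <windows.h>
#include <GL/gl.h>

/* Sketch only: turn on vsync via WGL_EXT_swap_control if the driver exposes it. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void EnableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT != NULL)
        wglSwapIntervalEXT(1);   /* 1 = SwapBuffers waits for one vblank */
}

Combined with a longer timer interval (for example SetTimer(1, 15, 0) instead of SetTimer(1, 1, 0)), this keeps the thread waiting instead of spinning.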
This needs more investigation, but you may have problems with your main loop.
This is probably not a problem with OpenGL but with how the WinAPI is used. When you add textures, models, shaders... your CPU usage should stay similar.
You use SetTimer(1, 1, 0); that means a 1 millisecond delay, as I understand it? Can you change it to 33 milliseconds (about 30 FPS)?
That way you will not kill the message pump in your MFC app. Note that this timer is very imprecise.
Link: Basic MFC + OpenGL message loop (http://archive.gamedev.net/archive/reference/articles/article2204.html), using OnIdle().
Here is a great tutorial about MFC + OpenGL + threading: songho.
https://gamedev.stackexchange.com/questions/8623/a-good-way-to-build-a-game-loop-in-opengl - a discussion regarding the main loop in GLUT.

Drawing QWidget over QGLNode in QGLView

I use Qt3D and draw a complex 3D scene using QGLView.
I need to draw some widgets on a QGLNode inside this scene, but I don't know how.
Here is a minimal example (based on the "nesting" demo from Qt3D) of what I'm trying to do:
Header:
class CubeView : public QGLView
{
Q_OBJECT
Q_PROPERTY(qreal cubeAngle READ cubeAngle WRITE setCubeAngle)
public:
CubeView(QWidget *parent = 0);
~CubeView();
qreal cubeAngle() const { return cangle; }
void setCubeAngle(qreal angle) { cangle = angle; update(); }
protected:
void initializeGL(QGLPainter *painter);
void paintGL(QGLPainter *painter);
private:
QGLSceneNode *scene;
QGLSceneNode *cube;
QGLFramebufferObject *fbo;
QGLFramebufferObjectSurface fboSurface;
QGLCamera *innerCamera;
qreal cangle;
void drawCube(QGLPainter* painter);
void drawFrameBuffer();
};
Initializing:
void CubeView::initializeGL(QGLPainter *)
{
fbo = new QGLFramebufferObject(512, 512);
fboSurface.setFramebufferObject(fbo);
}
Paint code:
void CubeView::paintGL(QGLPainter *painter)
{
painter->modelViewMatrix().push();
painter->projectionMatrix().push();
#ifdef DRAW_GLNODE
painter->pushSurface(&fboSurface);
#endif
painter->setCamera(innerCamera);
painter->modelViewMatrix().rotate(cangle, 0.0f, 1.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
#ifdef DRAW_GLNODE
cube->draw(painter);
#else
QPushButton button;
button.resize(fbo->width(), fbo->height());
button.render(fbo); // filling QGLFrameBuffer
#endif
drawFrameBuffer(); // debug QPixmap with QGLFrameBuffer data
#ifdef DRAW_GLNODE
painter->popSurface();
#endif
painter->projectionMatrix().pop();
painter->modelViewMatrix().pop();
drawCube(painter);
}
Drawing cube:
void CubeView::drawCube(QGLPainter *painter)
{
painter->modelViewMatrix().push();
painter->modelViewMatrix().rotate(cangle, 1.0f, 1.0f, 1.0f);
painter->setStandardEffect(QGL::LitDecalTexture2D);
glBindTexture(GL_TEXTURE_2D, fbo->texture());
glEnable(GL_TEXTURE_2D);
cube->draw(painter);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
painter->modelViewMatrix().pop();
}
When "#define DRAW_GLNODE" is active it shows rotating cube as a texture inside another cube (you can see frame buffer data in the left window).
http://i.stack.imgur.com/Ok4NT.png
When I switch to "#define 0" it shows black screen in a QGLView, but QGLFrameBuffer contains data as you can see.
http://i.stack.imgur.com/5R6Wz.png
What am I doing wrong?

Only half a triangle shows when rotating in OpenGL ES 2.0

I'm trying to rotate a triangle around the Y axis. When I rotate it about the Z axis, everything is fine. But when I try rotating about the Y axis, all I get is half a triangle rotating about the Y axis. I'm using PowerVR's OpenGL ES 2.0 SDK. My Init and Draw functions are below.
int Init(ESContext* esContext)
{
UserData* userData = (UserData *)esContext->userData;
const char *vShaderStr =
"attribute vec4 vPosition; \n"
"uniform mat4 MVPMatrix;"
"void main() \n"
"{ \n"
" gl_Position = MVPMatrix * vPosition;\n"
"} \n";
const char *fShaderStr =
"precision mediump float; \n"
"void main() \n"
"{ \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
"} \n";
GLuint vertexShader;
GLuint fragmentShader;
GLuint programObject;
GLint linked;
GLfloat ratio = 320.0f/240.0f;
vertexShader = LoadShader(GL_VERTEX_SHADER, vShaderStr);
fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);
programObject = glCreateProgram();
if (programObject == 0)
return 0;
glAttachShader(programObject, vertexShader);
glAttachShader(programObject, fragmentShader);
glBindAttribLocation(programObject, 0, "vPosition");
glLinkProgram(programObject);
glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &linked);
if (!linked)
{
GLint infoLen = 0;
glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1)
{
char* infoLog = (char *)malloc(sizeof(char) * infoLen);
glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
free(infoLog);
}
glDeleteProgram(programObject);
return FALSE;
}
userData->programObject = programObject;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glViewport(0, 0, esContext->width, esContext->height);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(userData->programObject);
userData->angle = 0.0f;
userData->start = time(NULL);
userData->ProjMatrix = PVRTMat4::Perspective(ratio*2.0f, 2.0f, 3.0f, 7.0f, PVRTMat4::eClipspace::OGL, false, false);
userData->ViewMatrix = PVRTMat4::LookAtLH(PVRTVec3(0.0f, 0.0f, -3.0f), PVRTVec3(0.0f, 0.0f, 0.0f), PVRTVec3(0.0f, 1.0f, 0.0f));
return TRUE;
}
void Draw(ESContext *esContext)
{
GLfloat vVertices[] = {0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f};
GLint MVPHandle;
double timelapse;
PVRTMat4 MVPMatrix = PVRTMat4::Identity();
UserData* userData = (UserData *)esContext->userData;
timelapse = difftime(time(NULL), userData->start) * 1000;
if(timelapse > 16.0f) //Maintain approx 60FPS
{
if (userData->angle > 360.0f)
{
userData->angle = 0.0f;
}
else
{
userData->angle += 0.1f;
}
}
userData->ModelMatrix = PVRTMat4::RotationY(userData->angle);
MVPMatrix = userData->ViewMatrix * userData->ModelMatrix;
MVPMatrix = userData->ProjMatrix * MVPMatrix;
MVPHandle = glGetUniformLocation(userData->programObject, "MVPMatrix");
glUniformMatrix4fv(MVPHandle, 1, FALSE, MVPMatrix.ptr());
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3);
eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
}
PVRTMat4::Perspective(ratio*2.0f, 2.0f, 3.0f, 7.0f, PVRTMat4::eClipspace::OGL, false, false); puts the near clipping plane at 3.0f units away from the camera (via the third argument).
PVRTMat4::LookAtLH(PVRTVec3(0.0f, 0.0f, -3.0f), PVRTVec3(0.0f, 0.0f, 0.0f), PVRTVec3(0.0f, 1.0f, 0.0f)); places the camera at (0, 0, -3), looking back at (0, 0, 0).
You generate a model matrix directly using PVRTMat4::RotationY(userData->angle); so that matrix does no translation. The triangle you're drawing remains positioned on (0, 0, 0) as per its geometry.
So what's happening is that the parts of the triangle that get closer to the camera than 3 units are being clipped by the near clipping plane. The purpose of the near clipping plane is effectively to place the lens of the camera relative to where the image will be perceived. Or it's like specifying the distance from user to screen, if you prefer.
You therefore want either to bring the near clip plane closer to the camera or to move the camera further away from the triangle.
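In terms of the calls already used in the question, either fix might look roughly like this (the values 0.5f and -6.0f are just illustrative):

// Option 1: pull the near clipping plane in closer than the triangle can swing
// (third argument of the question's Perspective call).
userData->ProjMatrix = PVRTMat4::Perspective(ratio * 2.0f, 2.0f, 0.5f, 7.0f,
                                             PVRTMat4::eClipspace::OGL, false, false);

// Option 2: keep the projection and move the camera further from the origin instead
// (the triangle then stays between the near plane at 3 and the far plane at 7).
userData->ViewMatrix = PVRTMat4::LookAtLH(PVRTVec3(0.0f, 0.0f, -6.0f),
                                          PVRTVec3(0.0f, 0.0f, 0.0f),
                                          PVRTVec3(0.0f, 1.0f, 0.0f));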
