How to use a fragment shader to draw a sphere illusion in OpenGL ES?

I am using this simple function to draw a quad in 3D space that faces the camera. Now I want to use a fragment shader to draw the illusion of a sphere inside it. The problem is that I'm new to OpenGL ES, so I don't know how.
void draw_sphere(view_t view) {
    set_gl_options(COURSE);
    glPushMatrix();
    {
        glTranslatef(view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9);
#ifdef __APPLE__
#undef glEnableClientState
#undef glDisableClientState
#undef glVertexPointer
#undef glTexCoordPointer
#undef glDrawArrays
        static const GLfloat vertices[] =
        {
            0, 0, 0,
            1, 0, 0,
            1, 1, 0,
            0, 1, 0,
            0, 0, 0,
            1, 1, 0
        };
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
        glDisableClientState(GL_VERTEX_ARRAY);
#else
#endif
    }
    glPopMatrix();
}
More exactly, I want to achieve this:

There might be quite a few things you need to do to achieve this... The sphere drawn in the last image you posted is the result of lighting, specular shine, and color. In general you need a shader that can process all of that and that works for any shape.
This specific case (and some others that can be described mathematically) can be drawn with a single quad, without even needing to push normals to the program. What you need to do is construct the normal in the fragment shader: if you have the vectors sphereCenter and fragmentPosition and a float sphereRadius, then sphereNormal is a vector such that
sphereNormal = (fragmentPosition - sphereCenter) / sphereRadius; // all inputs have .z == 0.0 here
sphereNormal.z = -sqrt(1.0 - dot(sphereNormal.xy, sphereNormal.xy)); // only valid if length(sphereNormal.xy) < 1.0, i.e. the fragment lies inside the sphere's silhouette; on a unit sphere x*x + y*y + z*z = 1, so z = -sqrt(1 - x*x - y*y)
and the real position on the sphere surface:
spherePosition = sphereCenter + sphereNormal*sphereRadius;
Now all you need to do is add your lighting. Static or not, it is most common to use an ambient factor, linear and quadratic distance attenuation factors, and a shine (specular) factor:
color = ambient*materialColor; // apply ambient
vec3 fragmentToLight = lightPosition - spherePosition;
float lightDistance = length(fragmentToLight);
fragmentToLight = normalize(fragmentToLight); // can also just divide by lightDistance
float dotFactor = dot(sphereNormal, fragmentToLight); // the dot factor accounts for the angle between the light and the surface normal
if(dotFactor > .0) {
    color += (materialColor*dotFactor)/(1.0 + lightDistance*linearFactor + lightDistance*lightDistance*squareFactor); // apply the dot factor and the distance factors (in many cases the distance factors are 0)
}
vec3 shineVector = (sphereNormal*(2.0*dotFactor)) - fragmentToLight; // the light direction mirrored through the normal, i.e. a reflection vector
float shineFactor = dot(shineVector, normalize(cameraPosition - spherePosition)); // how strongly the light is reflected towards the viewer
if(shineFactor > .0) {
    color += materialColor*(shineFactor*shineFactor * shine); // or some power other than 2
}
This pattern for computing lighting in a fragment shader is one of very many. If you don't like it or you can't make it work, I suggest you find another one on the web; otherwise I hope you will understand it and be able to play around with it.
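To show how the pieces above fit together, here is a minimal sketch of a complete OpenGL ES 2.0 fragment shader. It assumes the sphere center and the interpolated fragment position lie in the same z plane, and every uniform and varying name (v_fragmentPosition, u_sphereCenter, and so on) is made up for the example, so they must be matched to your own vertex shader and client code:
precision mediump float;
// All names below are illustrative; replace them with whatever your vertex shader and client code provide.
varying vec3 v_fragmentPosition;   // interpolated position on the quad, same z plane as the sphere center
uniform vec3  u_sphereCenter;
uniform float u_sphereRadius;
uniform vec3  u_lightPosition;
uniform vec3  u_cameraPosition;
uniform vec3  u_materialColor;
uniform float u_ambient;           // e.g. 0.2
uniform float u_linearFactor;      // linear distance attenuation, often 0.0
uniform float u_squareFactor;      // quadratic distance attenuation, often 0.0
uniform float u_shine;             // specular strength
void main() {
    // Reconstruct the sphere normal from the fragment's position on the quad.
    vec3 sphereNormal = (v_fragmentPosition - u_sphereCenter) / u_sphereRadius;
    float d2 = dot(sphereNormal.xy, sphereNormal.xy);
    if (d2 > 1.0) {
        discard; // outside the sphere's silhouette
    }
    sphereNormal.z = -sqrt(1.0 - d2);
    // Real position on the sphere surface.
    vec3 spherePosition = u_sphereCenter + sphereNormal * u_sphereRadius;
    // Ambient term.
    vec3 color = u_ambient * u_materialColor;
    // Diffuse term with distance attenuation.
    vec3 fragmentToLight = u_lightPosition - spherePosition;
    float lightDistance = length(fragmentToLight);
    fragmentToLight /= lightDistance;
    float dotFactor = dot(sphereNormal, fragmentToLight);
    if (dotFactor > 0.0) {
        float attenuation = 1.0 + lightDistance * u_linearFactor
                                + lightDistance * lightDistance * u_squareFactor;
        color += (u_materialColor * dotFactor) / attenuation;
        // Specular term: the light direction mirrored through the normal.
        vec3 shineVector = sphereNormal * (2.0 * dotFactor) - fragmentToLight;
        float shineFactor = dot(shineVector, normalize(u_cameraPosition - spherePosition));
        if (shineFactor > 0.0) {
            color += u_materialColor * (shineFactor * shineFactor * u_shine);
        }
    }
    gl_FragColor = vec4(color, 1.0);
}
The discard keeps the corners of the quad empty so only the disc of the sphere is visible; alternatively you could output a transparent color there.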

Related

OpenGL ES: Handle a large amount of matrix data to improve performance

I am using instancing in my OpenGL app, and since only one draw call is made I have to calculate a larger matrix that consists of smaller matrices; that larger matrix is sent to the shader, where gl_InstanceID can distinguish between successive matrices.
It is uploaded with the following call
GLES30.glUniformMatrix4fv(mMVPMatrixHandleBall, nBalls, false, mMVPMatrixMajor, 0);
and in the shader the multiplication is done by
gl_Position = u_MVPMatrix[gl_InstanceID] * a_Position;
simple!
On the client-side the larger matrix is created by the following code:
private void setLargeMVPmatrix() {
    int cnt = 0;
    for (Iterator<Ball> shapeIterator = arrayListBalls.iterator(); shapeIterator.hasNext(); ) {
        Ball ball = shapeIterator.next();
        mModelMatrix = ball.getmModelMatrix();
        // multiply
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
        // copy the matrix data into a larger vector, i.e. we get one big matrix
        // that contains several smaller matrices
        for (int i = 0; i < CreateGLContext.MATRIX_SIZE; i++) {
            mMVPMatrixMajor[i + CreateGLContext.MATRIX_SIZE * cnt] = mMVPMatrix[i];
        }
        cnt++;
    }
}
If I have moving objects on the screen, like 100 balls bouncing around, I have to continuously translate their positions each frame, which in turn means I have to call this method every frame. The consequence is that it becomes a real performance bottleneck. I know this because just commenting out the method gives a real performance boost, but then of course the balls no longer move.
So my question: is there a solution to this problem? If I use instancing, I have to send one large matrix as described above.
Edit:
I've even tried the following, which I thought could at least partially solve my problem. In the draw method:
int cnt = 0;
for (Iterator<Ball> it = arrayListBalls.iterator(); it.hasNext();) {
    Ball ball = it.next();
    mModelMatrix = ball.getmModelMatrix();
    //multipl.
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
    GLES30.glUniformMatrix4fv((mMVPMatrixHandleBall + cnt), 1, false, mMVPMatrix, 0);
    cnt++;
}
Thanks in advance!!!
If the data that change are positions and rotations, then that's what you should send to the shader.
Doing most of the matrix work on the CPU is slow, unless the needed operations are tiny, like computing the new view and projection matrices, which are the same for all objects and cheap to pass as uniforms.
Every frame I would refill a buffer, perhaps with the help of glMapBufferRange or glBufferSubData, with the new positions and rotations.
Then, in the shader, build the needed matrices and do the matrix multiplication there.
If the initial positions and rotations are needed to build the new matrices, you must also pass them in another buffer, although that one only needs to be updated for the first frame.
With the proper attribute order you read these positions and rotations in the shader. gl_InstanceID is then not needed for the gl_Position calculation, though it may be needed for some other per-object property.
If you need help building matrices inside the shader, look at glRotate and glTranslate in the OpenGL 2.1 docs, where you can find the matrix definitions; a sketch of this approach follows below.
Also note that passing one big matrix for all objects through a uniform may exceed the limit on the total size of uniform data.
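To illustrate what "build the matrices in the shader" can look like, here is a minimal GLSL ES 3.0 vertex-shader sketch. The attribute and uniform names (i_Translation, i_Angle, u_ViewMatrix, u_ProjectionMatrix) are hypothetical, the per-instance attributes are assumed to come from a buffer configured with glVertexAttribDivisor(..., 1), and only a rotation about the y axis is shown for brevity:
#version 300 es
in vec4 a_Position;      // per-vertex position
in vec3 i_Translation;   // per-instance ball position, updated every frame
in float i_Angle;        // per-instance rotation angle in radians (about the y axis here)
uniform mat4 u_ViewMatrix;
uniform mat4 u_ProjectionMatrix;
void main() {
    // Rotation about the y axis, following the matrix definition of glRotate.
    float c = cos(i_Angle);
    float s = sin(i_Angle);
    mat4 rotation = mat4(  c, 0.0,  -s, 0.0,   // column 0
                         0.0, 1.0, 0.0, 0.0,   // column 1
                           s, 0.0,   c, 0.0,   // column 2
                         0.0, 0.0, 0.0, 1.0);  // column 3
    // Translation, following the matrix definition of glTranslate (GLSL matrices are column-major).
    mat4 translation = mat4(1.0);
    translation[3] = vec4(i_Translation, 1.0);
    mat4 model = translation * rotation;
    gl_Position = u_ProjectionMatrix * u_ViewMatrix * model * a_Position;
}
With a layout like this the CPU only writes a few floats per ball each frame instead of a full 4x4 matrix, and the two expensive matrix multiplications move to the GPU.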

Seeing through triangles in GLKit

I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project I'm rendering 4 triangles in the shape of a pyramid, with some sliders to adjust the height of the apex of the pyramid and to rotate the modalViewMatrix about the y axis. I am trying to find the reason why, after rotating this object counter-clockwise to the point where some triangles appear in front of others, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the farthest triangles.
I assumed that the reason was a lack of a depth render buffer, but even after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function where the drawing is done. The only other code is in viewDidLoad and in the global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    [self.baseEffect prepareToDraw];
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, pos);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    const GLvoid *off1 = NULL + offsetof(SceneVertex, position);
    glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
                          3,                       // number of coordinates for attribute
                          GL_FLOAT,                // data is floating point
                          GL_FALSE,                // no fixed point scaling
                          sizeof(SceneVertex),     // total num bytes stored per vertex
                          off1);

    glEnableVertexAttribArray(GLKVertexAttribNormal);
    const GLvoid *off2 = NULL + offsetof(SceneVertex, normal);
    glVertexAttribPointer(GLKVertexAttribNormal,   // Identifies the attribute to use
                          3,                       // number of coordinates for attribute
                          GL_FLOAT,                // data is floating point
                          GL_FALSE,                // no fixed point scaling
                          sizeof(SceneVertex),     // total num bytes stored per vertex
                          off2);

    GLenum error = glGetError();
    if (GL_NO_ERROR != error)
    {
        NSLog(@"GL Error: 0x%x", error);
    }

    int sizeOfTries = sizeof(triangles);
    int sizeOfSceneVertex = sizeof(SceneVertex);
    int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
    glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer; you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.

OpenGL ortho, perspective and frustum projections

I am trying to understand OpenGL projections on a single point. I am using QGLWidget for the rendering context and QMatrix4x4 for the projection matrix. Here are the vertex shader and the draw function:
attribute vec4 vPosition;
uniform mat4 projection;
uniform mat4 modelView;
void main()
{
    gl_Position = projection * vPosition;
}
void OpenGLView::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(programObject);
    glViewport(0, 0, width(), height());

    qreal aspect = (qreal)800 / ((qreal)600);
    const qreal zNear = 3.0f, zFar = 7.0f, fov = 45.0f;

    QMatrix4x4 projection;
    projection.setToIdentity();
    projection.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.frustum(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.perspective(fov, aspect, zNear, zFar);

    position.setToIdentity();
    position.translate(0.0f, 0.0f, -5.0f);
    position.rotate(0, 0, 0, 0);

    QMatrix4x4 mvpMatrix = projection * position;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            tempMat[r][c] = mvpMatrix.constData()[r * 4 + c];
    glUniformMatrix4fv(projection, 1, GL_FALSE, (float*)&tempMat[0][0]);

    // Draw point at 0,0
    GLfloat f_RefPoint[2];
    glUniform4f(color, 1, 0, 1, 1);
    glPointSize(15);
    f_RefPoint[0] = 0;
    f_RefPoint[1] = 0;
    glEnableVertexAttribArray(vertexLoc);
    glVertexAttribPointer(vertexLoc, 2, GL_FLOAT, 0, 0, f_RefPoint);
    glDrawArrays(GL_POINTS, 0, 1);
}
Observations:
1) projection.ortho: the point is rendered in the window, and translating the point along the z axis has no effect.
2) projection.frustum: the point is drawn in the window only when it is translated far enough, e.g. translate(0.0f, 0.0f, -20.0f).
3) projection.perspective: the point is never rendered on the screen.
Could someone help me understand this behaviour?
The ortho behaviour is expected: an orthographic projection does not scale with distance, so translating the point along the z axis has no visible effect. I suggest you search for some images or videos that show the differences between the projection types.
With frustum, I don't know how you would notice a single point's translation along z, but if you drew a square it would become smaller as you translate it further away (with ortho it would stay the same size). There is an issue here, though: you pass -20.0f for zNear, while this value should be positive. The values passed to this method are in most cases derived from a field of view and an aspect ratio. In any case, you will not be able to see anything closer than zNear or further than zFar.
perspective is the same as frustum but takes its parameters directly as a field of view and an aspect ratio. The reason you do not see anything is that your zNear is at 3.0f and the point is 0.0f away. By translating the point you will be able to see it; try translating it by anything from 3.0f to 7.0f (3.0f is your zNear and 7.0f is your zFar). Alternatives are increasing zFar or translating the scene further back. Mostly, in your case, I suggest adding a "look at" system for the view matrix, as it gives you easy-to-use tools to manipulate your "camera": in most cases you set a point you are looking from, a point you are looking at, and an up vector.

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0) and the box is rendered correctly. See this image below:
When the position of the box (not the camera) changes, it is rendered correctly. For example, the next image shows the box at (1, 0, 0) and the camera still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should move in the opposite direction on screen, but it looks twisted instead. Here is an example when the camera is at (1, 0, -2) and looking at (1, 0, 100). The box is still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix;     // A matrix to store the rotation information
D3DXMATRIX scalingMatrix;      // A matrix to store the scaling information
D3DXMATRIX translationMatrix;  // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;

// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());

// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));

D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
                   &cameraPosition,
                   &lookAtPosition,
                   &upDirection);

RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);

// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
                           (FLOAT)(D3DXToRadian(45)), // Vertical field of view (fovy)
                           width / height,            // Aspect ratio
                           1.0f,                      // Near view-plane
                           100.0f);                   // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are setting finalMatrix with a row-major matrix, but HLSL expects a column-major matrix by default. The solution is to use D3DXMatrixTranspose before updating the constant buffer, or to declare the matrix row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}

How to rotate an object but leave the lighting fixed? (OpenGL)

I have a cube which I want to rotate. I also have a light source, GL_LIGHT0. I want to rotate the cube and leave the light source fixed in its location, but the light source is rotating together with my cube. I am using OpenGL ES 1.1.
Here's a snippet of my code to make my question more clear.
GLfloat glfarr[] = {...};  // cube points
GLubyte glubFaces[] = {...};
Vertex3D normals[] = {...};  // normals to surfaces
const GLfloat light0Position[] = {0.0, 0.0, 3.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
glEnable(GL_LIGHT0);
for(i = 0; i < 8000; ++i)
{
    if (g_bDemoDone) break;
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -12);
    glRotatef(rot, 0.0, 1.0, 1.0);
    rot += 0.8;
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, normals);
    glVertexPointer(3, GL_FLOAT, 0, glfarr);
    glDrawElements(GL_TRIANGLES, 3*12, GL_UNSIGNED_BYTE, glubFaces);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    eglSwapBuffers(eglDisplay, eglSurface);
}
Thanks.
Fixed in relation to what? The light position is transformed by the current MODELVIEW matrix when you do glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
If you want it to move with the cube, you'll have to move glLightfv(GL_LIGHT0, GL_POSITION, light0Position); to after the translation and rotation calls.
The problem seems to be that you're rotating the modelview matrix, not the cube itself. Essentially, you're moving the camera.
In order to rotate just the cube, you'll need to rotate the vertices that make up the cube. Generally that's done using a library (GLUT or some such) or simple trig. You'll be operating on the vertex data stored in the array, before the glDrawElements call. You may or may not have to (or want to) modify the normals or texture coordinates; it depends on your effects and how it ends up looking.
