Basic OpenGL lighting not working on my Mac

I've been working with OpenGL for a few months. We have started coding basic shaders in class, and we have a Qt+OpenGL project to do. It's a simple one: an interactive 3D scene viewer. We've been given starter code that loads .obj files, places them in the scene, and lets us rotate the view with the mouse.
I have a really weird problem here that my teacher hasn't been able to explain. We're using GL_LIGHT0 with its default parameters:
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
This is the only initialization. The light's direction and position should be the same as the eye's, and that is how it works when I run the program on Ubuntu: I load the model of a cup, and as I rotate, the light rotates with my eye and I can see all of its faces lit.
However, under Mac OS X 10.6.8, it's not working. The light stays fixed, and all the faces pointing in other directions are black.
I've tried other lights; I've tried loading the identity into MODELVIEW and then specifying the position and direction myself; I've tried re-downloading the code, recompiling, and running it again, and I still get the same error.
The program needs some .dylib files that I link using symbolic links (I don't think this is the problem, though). The problem is not in the code, since it works on other systems.
Is it possible that the Mac implementation is different? How can I test this? Thanks!
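One way to check is to read the light state back with glGetLightfv on both machines and compare. A minimal sketch, assuming a current GL context (e.g. called from paintGL) and a printf available via <cstdio>:
// Read back GL_LIGHT0's state (returned in eye coordinates) so the
// values can be compared between the Ubuntu and Mac builds.
// The spec defaults are position (0, 0, 1, 0) and spot direction (0, 0, -1).
GLfloat pos[4], dir[3];
glGetLightfv(GL_LIGHT0, GL_POSITION, pos);
glGetLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, dir);
printf("GL_LIGHT0 position: %.2f %.2f %.2f %.2f\n", pos[0], pos[1], pos[2], pos[3]);
printf("GL_LIGHT0 spot direction: %.2f %.2f %.2f\n", dir[0], dir[1], dir[2]);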
Edit: Here is a larger piece of code:
void GLWidget::initializeGL() {
    glClearColor(0.8f, 0.8f, 1.0f, 1.0f);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    resetCamera(); // sets VRP, OBS...
}
void GLWidget::paintGL(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setProjection();
    setModelview();
    drawAxes();
    pscene.render();
}
void drawAxes() {
    float L = DRAW_AXES_LENGTH;
    glDisable(GL_LIGHTING);
    glBegin(GL_LINES);
    glColor3f(1, 0, 0); glVertex3f(0, 0, 0); glVertex3f(L, 0, 0); // X
    glColor3f(0, 1, 0); glVertex3f(0, 0, 0); glVertex3f(0, L, 0); // Y
    glColor3f(0, 0, 1); glVertex3f(0, 0, 0); glVertex3f(0, 0, L); // Z
    glEnd();
    glEnable(GL_LIGHTING);
}
These are the functions related to lighting. As I said, LIGHT0 is set up once inside initializeGL. I also tried setting its position in paintGL, but that didn't work either.
Edit 2:
I found a solution. Inside drawAxes, just before returning, add:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
GLfloat pos[] = {0.0f, 0.0f, 1.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_POSITION, pos);
glPopMatrix();
Why is this? Does disabling and re-enabling GL_LIGHTING somehow make the light stop working?

Related

Does the Windows Composition API support 2.5D projected rotation?

I have started to use the Windows Composition API in UWP applications to animate elements of the UI.
Visual elements expose RotationAngleInDegrees and RotationAngle properties as well as a RotationAxis property.
When I animate a rectangular object's RotationAngleInDegrees value around the Y axis, the rectangle rotates as I would expect, but in a 2D application window it does not appear to be displayed with a 2.5D projection.
Is there a way to get the 2.5D projection effect on rotations with the Composition API?
It depends on the effect that you want. There is a fluent design app sample on GitHub, and here is the link. You will be able to download the demo from the Store, and you can get some ideas from the depth samples. For example, "flip to reveal" shows a way to rotate an image card, and you can find the source code here. For more details, please check the sample and the demo.
In general, the animation rotates around the X axis:
rectanglevisual.RotationAxis = new System.Numerics.Vector3(1f, 0f, 0f);
And then use rotate animation to rotate based on RotationAngleInDegrees.
It is also possible to do this directly on the XAML platform by using a PlaneProjection on an image control.
As the sample that @BarryWang pointed me to demonstrates, it is necessary to apply a TransformMatrix to the page (or a parent container) before executing the animation to get the 2.5D effect with rotation or other spatial transformation animations in the Composition API.
private void UpdatePerspective()
{
    Visual visual = ElementCompositionPreview.GetElementVisual(MainPanel);
    // Get the size of the area we are enabling perspective for
    Vector2 sizeList = new Vector2((float)MainPanel.ActualWidth, (float)MainPanel.ActualHeight);
    // Setup the perspective transform.
    Matrix4x4 perspective = new Matrix4x4(
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, -1.0f / sizeList.X,
        0.0f, 0.0f, 0.0f, 1.0f);
    // Set the parent transform to apply perspective to all children
    visual.TransformMatrix =
        Matrix4x4.CreateTranslation(-sizeList.X / 2, -sizeList.Y / 2, 0f) * // Translate to origin
        perspective *                                                       // Apply perspective at origin
        Matrix4x4.CreateTranslation(sizeList.X / 2, sizeList.Y / 2, 0f);    // Translate back to original position
}

My triangle doesn't render when I use OpenGL Core Profile 3.2

I have a Cocoa (OS X) project that is currently very simple; I'm just trying to grasp the general concepts behind using OpenGL. I was able to get a triangle to display in my view, but when I went to write my vertex and fragment shaders, I realized I was running the legacy OpenGL profile. So I switched to the OpenGL 3.2 Core Profile by setting the properties in the pixel format of the view in question before generating the context, but now the triangle doesn't render, even without my vertex or fragment shaders.
I have a controller class for the view that's instantiated in the nib. On -awakeFromNib it sets up the pixel format and the context:
NSOpenGLPixelFormatAttribute attr[] =
{
    NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
    0
};
NSOpenGLPixelFormat *glPixForm = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
[self.mainView setPixelFormat:glPixForm];
self.glContext = [self.mainView openGLContext];
Then I generate the VAO:
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
Then the VBO:
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
g_vertex_buffer_data, the actual data for that buffer, is defined as follows:
static const GLfloat g_vertex_buffer_data[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
Here's the code for actually drawing:
[_glContext setView:self.mainView];
[_glContext makeCurrentContext];
glViewport(0, 0, [self.mainView frame].size.width, [self.mainView frame].size.height);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer);
glVertexAttribPointer(
    0,        // attribute 0, bound to "position" below
    3,        // three floats per vertex
    GL_FLOAT,
    GL_FALSE, // not normalized
    0,        // tightly packed
    (void*)0
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
glFlush();
This code draws the triangle fine if I comment out the NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, line in the NSOpenGLPixelFormatAttribute array, but as soon as I enable the OpenGL 3.2 Core Profile, it just displays black. Can anyone tell me what I'm doing wrong here?
EDIT: This issue happens whether or not I use my vertex and fragment shaders, but here they are in case they're helpful:
Vertex shader:
#version 150
in vec3 position;
void main() {
    gl_Position = vec4(position, 1.0); // write all four components; assigning only .xyz leaves w undefined
}
Fragment shader:
#version 150
out vec3 color;
void main() {
    color = vec3(1, 0, 0);
}
And right before linking the program, I make this call to bind the attribute location:
glBindAttribLocation(programID, 0, "position");
EDIT 2:
I don't know if this helps at all, but I just stepped through my program calling glGetError(), and it looks like everything is fine until I actually call glDrawArrays(), which returns GL_INVALID_OPERATION. I'm trying to figure out why this could be occurring, but still having no luck.
I figured this out, and it's sadly a very stupid mistake on my part.
The issue is that you do need a vertex shader and a fragment shader when using the 3.2 Core Profile; you can't render without them. The reason it wasn't working with my shaders was... wait for it... after linking my shader program, I forgot to store the program ID in the ivar in my class, so later when I called glUseProgram() I was just calling it with a zero parameter.
I guess one of the main sources of confusion was that I expected the 3.2 Core Profile to work without any vertex or fragment shaders.
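For reference, the pattern that avoids this mistake is to keep the linked program object around and hand that same ID to glUseProgram later. A minimal sketch (compileShader and the _programID ivar are hypothetical stand-ins for whatever your class actually uses):
GLuint program = glCreateProgram();
// compileShader() is a hypothetical helper wrapping glCreateShader /
// glShaderSource / glCompileShader.
glAttachShader(program, compileShader(GL_VERTEX_SHADER, vertexSource));
glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fragmentSource));
glBindAttribLocation(program, 0, "position"); // must happen before linking
glLinkProgram(program);
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked); // check the info log if this fails
_programID = program; // store it: with program 0 bound, glDrawArrays raises GL_INVALID_OPERATION in core profile

// later, when drawing:
glUseProgram(_programID);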

OpenGL (ES) Model Within Translucent Model

I want "Face In a Crystal Ball" effect where I have a model (the face) doing things inside of a translucent model (the crystal ball). I feel like I'm taking crazy pills because I just can't get this inner face to show up partially occluded by the ball. My goal is to vary the alpha of the ball (and/or face) to make the face appear and disappear.
Below is the relevant bits code. As you'll see, I'm not using shaders, just good old GL/GLES1. If anyone can tell me what I'm doing wrong, I'll be VERY appreciative.
The setup code...
//-- CONFIGURATION ---------------
// Create the depth buffer object
glGenRenderbuffersOES(1, &depth_renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depth_renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
                         GL_DEPTH_COMPONENT16_OES,
                         width,
                         height);
// Create the framebuffer object
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES,
                             color_renderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES,
                             depth_renderbuffer);
// Bind the color buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color_renderbuffer);
glViewport(0, 0, width, height);

//-- LIGHTING ----------------------
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

//-- PROJECTION ---------------------
glMatrixMode(GL_PROJECTION);
viewport_size = vec2((float)width, (float)height);

// Orthographic projection
float max_x, max_y;
if (width > height) {
    max_y = 1;
    max_x = (float)width / (float)height;
} else {
    max_x = 1;
    max_y = (float)height / (float)width;
}
const float MAX_X = max_x;
const float MAX_Y = max_y;
const float Z_0 = 0;
const float MAX_Z = 1;
glOrthof(-MAX_X, MAX_X, -MAX_Y, MAX_Y, Z_0 - MAX_Z, Z_0 + MAX_Z);
world_size = vec3(2 * MAX_X, 2 * MAX_Y, 2 * MAX_Z);

// Depth and blending
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); // Disappears if false
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // doesn't do it
glBlendFunc(GL_ONE, GL_ONE); // better
Here is the rendering call
glClearColor(world->background_color.x,
             world->background_color.y,
             world->background_color.z,
             world->background_color.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int s = 0; s < surfaces.size(); s++) {
    Surface* surface = surfaces[s]; // index into the 'surfaces' vector
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, surface->getMatAmbient().Pointer());
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, surface->getMatDiffuse().Pointer());
    glMatrixMode(GL_MODELVIEW);

    // If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
    glPushMatrix();
    glLoadIdentity();
    vec4 light_position = vec4(world->light->position, 1);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position.Pointer());
    glPopMatrix();

    glPushMatrix();
    glMultMatrixf(surface->transform.Pointer());
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, surface->index_buffer);
    glBindBuffer(GL_ARRAY_BUFFER, surface->vertex_buffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, VERTEX_STRIDE, 0);
    glNormalPointer(GL_FLOAT, VERTEX_STRIDE, (GLvoid*)VERTEX_NORMAL_OFFSET);
    glDrawElements(GL_TRIANGLES, surface->indices.size(), GL_UNSIGNED_SHORT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glPopMatrix();
}
It sounds like you may be suffering from a simple case of the concept of a depth buffer not really applying to your scene. A depth buffer stores one depth for every pixel on screen, which in a scene with fully opaque objects would be the depth of the nearest object at that pixel.
The problem is that when you want to add partially transparent objects to the scene, you end up in a position where several objects contribute to the colour of an individual pixel. But you can still store the depth of only one of them.
So what's probably happening in your case is that you're drawing the crystal ball first, and that's putting the depths of the various crystal ball pixels into the depth buffer. You're then attempting to draw the face and OpenGL is seeing that it's further away than the values already in the buffer, so skipping those pixels.
So the quick-fix solution is just to reorder your scene geometry by hand so that the face is always drawn before the crystal ball, since it is always on the inside.
In an ideal solution, you'd draw all opaque geometry in one step (traditionally in something close to front-to-back order, though that's not as important on the PowerVR) to establish opaque depth values, then all transparent geometry back to front so that it is composited in the correct order.
In OpenGL you really want the order of certain things to be relatively fixed, so that you can push the relevant values over to the GPU and not incur communication costs. People still tend to divide geometry into opaque and transparent, and draw the opaque geometry first; often they'll then just disable z-buffer writes when they draw the transparent geometry, making an effort to draw it in something like back-to-front order without investing too much time in the problem.
If you're happy to use purely additive blending then clearly any order drawing for the transparencies is correct once the depth buffer has the opaque stuff set up.
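To make that concrete, here is a minimal sketch of the usual two-pass ordering (drawOpaqueSurfaces and drawTransparentSurfacesBackToFront are hypothetical helpers standing in for your surface loop):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Pass 1: opaque geometry writes depth as normal.
glDisable(GL_BLEND);
glDepthMask(GL_TRUE);
drawOpaqueSurfaces(); // e.g. the face
// Pass 2: transparent geometry tests depth but doesn't write it,
// drawn roughly back to front so blending composites correctly.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTransparentSurfacesBackToFront(); // e.g. the crystal ball around it
glDepthMask(GL_TRUE); // restore so the next frame's depth clear works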
What order are you rendering the objects in? If you draw the ball before the face, the entire face will be rejected because it is behind the ball in the z-buffer. If you want to do correct transparency, you have to render objects from back to front.
And regarding your inline question:
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
When you call glLightfv with a position, the position is transformed by whatever is currently on the modelview matrix stack. So you have to make the call at the right point, depending on which frame of reference you want the coordinates defined in (relative to view coordinates, world coordinates, or object coordinates).
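In code, the same glLightfv call pins the light to a different frame depending on what is on the modelview stack when it executes. A short sketch (setCameraTransform is a hypothetical stand-in for your view setup):
GLfloat light_pos[] = { 0.0f, 2.0f, 0.0f, 1.0f };
glMatrixMode(GL_MODELVIEW);
// (a) Eye coordinates: identity modelview, so the light follows the camera.
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
// (b) World coordinates: call after the camera transform is applied,
// so the light stays fixed in the scene while the camera moves.
glLoadIdentity();
setCameraTransform(); // hypothetical view setup
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
// (c) Object coordinates: call after an object's model transform as well,
// and the light will move together with that object.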

glMaterialfv not working for me

This is OpenGL on iPhone 4.
I'm drawing a scene using light and materials. Here is a snippet of my code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, -1, 1);
CGFloat ambientLight[] = { 0.5f, 0.5f, 0.5f, 1.0f };
CGFloat diffuseLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
CGFloat direction[] = { 0.0f, 0.0f, -20.0f, 0 };
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glShadeModel(GL_FLAT);
glEnable(GL_LIGHTING);
glDisable(GL_COLOR_MATERIAL);
float blankColor[4] = {0,0,0,1};
float whiteColor[4] = {1,1,1,1};
float blueColor[4] = {0,0,1,1};
glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor);
glEnable(GL_CULL_FACE);
glVertexPointer(3, GL_FLOAT, 0, verts.pdata);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals.pdata);
glEnableClientState(GL_NORMAL_ARRAY);
glDrawArrays (GL_TRIANGLES, 0, verts.size/3);
The problem is that instead of seeing the BLUE diffuse color, I see white. It fades out as I rotate the model to its side, but I can't understand why it's not using my blue color.
BTW, if I change glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) to glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor), then I do see the blue color. If I call glMaterialfv(GL_FRONT, GL_DIFFUSE, blueColor) and then glMaterialfv(GL_BACK, GL_DIFFUSE, blueColor), I see white again. So it looks like GL_FRONT_AND_BACK shows it, but the other combinations show white. Can anyone explain this to me?
This is because of your winding order (clockwise vs. counter-clockwise).
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension…
from here
You can also read more here.
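Putting that back in terms of the question: if the model's triangles are wound clockwise, the faces toward the camera are back faces, so a GL_FRONT material never applies to them. A sketch of the two usual fixes, under the assumption that the model really is wound clockwise:
// Fix 1: declare clockwise winding as front-facing so GL_FRONT
// materials apply to the visible faces (the default is GL_CCW).
glFrontFace(GL_CW);
// Fix 2 (what you observed working): set the material for both faces.
// Note that OpenGL ES 1.x only accepts GL_FRONT_AND_BACK for glMaterialfv;
// GL_FRONT or GL_BACK alone raises GL_INVALID_ENUM and leaves the
// default white diffuse in place, which would explain the white you saw.
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, blueColor);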

How to draw a texture as a 2D background in OpenGL ES 2.0?

I'm just getting started with OpenGL ES 2.0, what I'd like to do is create some simple 2D output. Given a resolution of 480x800, how can I draw a background texture?
[My development environment is Java / Android, so examples directly relating to that would be best, but other languages would be fine.]
Even though you're on Android, I created an iPhone sample application that does this for frames of video coming in. You can download the code for this sample from here. I have a writeup about this application, which does color-based object tracking using live video, that you can read here.
In this application, I draw two triangles to generate a rectangle, then texture that using the following coordinates:
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

static const GLfloat textureVertices[] = {
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    0.0f, 0.0f,
};
To pass through the video frame as a texture, I use a simple program with the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
and the following fragment shader:
varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame;

void main()
{
    gl_FragColor = texture2D(videoFrame, textureCoordinate);
}
Drawing is a simple matter of using the right program:
glUseProgram(directDisplayProgram);
setting the texture uniform:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
setting the attributes:
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
and then drawing the triangles:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
You don't really draw a background; instead, you draw a rectangle (or, even more correctly, two triangles forming a rectangle) and set a texture on it. This isn't different at all from drawing any other object on screen.
There are plenty of places showing how this is done; maybe there's even an Android example project showing it.
The tricky part is getting something to display in front of or behind something else. For this to work, you need to set up a depth buffer and enable depth testing (glEnable(GL_DEPTH_TEST)). Your vertices also need a Z coordinate, and you need to tell glVertexAttribPointer that your vertices are made up of three values, not two.
If you don't do that, objects will be rendered in the order in which their glDrawElements() calls are made (meaning whatever you draw last will obscure the rest).
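A minimal sketch of that depth setup (positionAttrib is a hypothetical attribute location; the same calls exist in Java through android.opengl.GLES20):
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Vertices now carry x, y and z: the background quad sits deep in the
// scene (z = 0.9), so anything drawn with a smaller z passes the depth
// test in front of it regardless of draw order.
static const GLfloat backgroundQuad[] = {
    -1.0f, -1.0f, 0.9f,
     1.0f, -1.0f, 0.9f,
    -1.0f,  1.0f, 0.9f,
     1.0f,  1.0f, 0.9f,
};
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, backgroundQuad); // 3 values per vertex, not 2
glEnableVertexAttribArray(positionAttrib);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);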
My advice is not to have a background image or do anything fancy until you get the hang of it. OpenGL ES 2.0 has kind of a steep learning curve, and tutorials on ES 1.x don't really help with getting 3D to work, because they can use helper functions like gluPerspective, which 2.0 just doesn't have. Start by creating a triangle on a background of nothing. Next, make it a square. Then, if you want to go fancy already, add a texture. Play with positions. See what happens when you change the Z value of your vertices. (Hint: not a lot if you don't have depth testing enabled. And even then, without a perspective projection, objects won't get smaller the farther away they are, so it will still seem as if nothing happened.)
After a few days, it stops being so damn frustrating, and you finally "get it", mostly.
