I'm creating an iPhone app with cocos2d and I'm trying to make use of the following OpenGL ES 1.1 code. However, I'm not good with OpenGL, and my app uses OpenGL ES 2.0, so I need to convert it.
So I was wondering: how difficult would it be to convert the following code from ES 1.1 to ES 2.0? Is there a resource that lists which calls need replacing, etc.?
-(void) draw
{
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);

    glColor4ub(_color.r, _color.g, _color.b, _opacity);
    glLineWidth(1.0f);
    glEnable(GL_LINE_SMOOTH);

    if (_opacity != 255)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // non-GL code here

    if (_opacity != 255)
        glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);

    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
}
It won't be that easy if you're not comfortable with OpenGL.
OpenGL ES 2.0 doesn't have a fixed-function pipeline anymore. This means you have to manage vertex transformations, lighting, texturing and the like yourself, using GLSL vertex and fragment shaders. You also have to keep track of the transformation matrices yourself; glMatrixMode, glPushMatrix, glTranslate, ... are gone.
There are also no built-in vertex attributes anymore (like glVertex, glColor, ...). These functions, along with the corresponding array functions (like glVertexPointer, glColorPointer, ...) and gl(En/Dis)ableClientState, have been removed, too. Instead you use the generic vertex attribute functions (glVertexAttrib, glVertexAttribPointer and gl(En/Dis)ableVertexAttribArray, which behave similarly) together with a corresponding vertex shader that gives these attributes their meaning.
I suggest you look into a good OpenGL ES 2.0 tutorial or book, as porting from 1.1 to 2.0 is really a major change, at least if you have never heard of shaders. A rough sketch of what your draw method turns into is below.
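To give a feel for the scale of the change, here is a minimal sketch of flat-colored line drawing in ES 2.0. The shader sources and the u_mvp, u_color, a_position, program, mvp, vertexCount and _vertices names are illustrative assumptions, not cocos2d API (cocos2d 2.x ships its own shader cache you would normally use); note also that GL_LINE_SMOOTH simply doesn't exist in ES 2.0.

// Shaders replace glColor4ub and the matrix stack (illustrative names).
static const char* kVS =
    "attribute vec4 a_position;            \n"
    "uniform mat4 u_mvp;                   \n"
    "void main() {                         \n"
    "    gl_Position = u_mvp * a_position; \n"
    "}                                     \n";
static const char* kFS =
    "precision mediump float;              \n"
    "uniform vec4 u_color;                 \n"
    "void main() {                         \n"
    "    gl_FragColor = u_color;           \n"
    "}                                     \n";

// At draw time (shader compilation/linking not shown; program, mvp,
// _vertices and vertexCount are assumed to be set up elsewhere):
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"), 1, GL_FALSE, mvp);
glUniform4f(glGetUniformLocation(program, "u_color"),
            _color.r / 255.0f, _color.g / 255.0f,
            _color.b / 255.0f, _opacity / 255.0f);

GLint pos = glGetAttribLocation(program, "a_position");
glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, _vertices);
glLineWidth(1.0f);                            // still present in ES 2.0
glDrawArrays(GL_LINE_STRIP, 0, vertexCount);
glDisableVertexAttribArray(pos);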
Related
I am updating an existing app for Mac OS that is based on the older fixed-pipeline OpenGL. (OpenGL 2, I guess it is.) It uses an NSOpenGLView and an ortho projection to create animated kaleidoscopes, with either still images or input from a connected video camera.
It was written before HD cameras were available (or at least readily so). It expects YCBCR_422 video frames from QuickTime (k422YpCbCr8CodecType) and passes them to OpenGL as GL_YCBCR_422_APPLE to map them to a texture.
My company decided it was time to update the app to support the new crop of HD cameras, and I am not sure how to proceed.
I have a decent amount of OpenGL experience, but my knowledge is spotty.
I'm using the delegate method captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: to receive frames from the camera via a QTCaptureDecompressedVideoOutput.
I have a Logitech C615 for testing, and the buffers I'm getting are reported as being in 32-bit RGBA (GL_RGBA8) format. I believe the byte order is actually ARGB.
However, according to the docs on glTexImage2D, the only supported input pixel formats are GL_COLOR_INDEX, GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, GL_RGB, GL_BGR, GL_RGBA, GL_BGRA, GL_LUMINANCE, or GL_LUMINANCE_ALPHA.
I would like to add a fragment shader that swizzles my texture data into RGBA format when I map the texture onto my output mesh.
Since writing this app I've learned the basics of shaders from writing iOS apps for OpenGL ES 2, which does not support fixed pipeline OpenGL at all.
I really don't want to rewrite the app to be fully shader based if I can help it. I'd like to implement an optional fragment shader that I can use to swizzle my pixel data for video sources when I need it, but continue to use the fixed pipeline for managing my projection matrix and model view matrix.
How do you go about adding a shader to the (otherwise fixed) rendering pipeline?
I don't like to ask this, but what exactly is your problem now? It is not that difficult to attach shaders to the fixed-function pipeline; all you need to do is reimplement the tiny bit of functionality that your vertex and fragment shaders replace. You can use built-in GLSL variables like gl_ModelViewMatrix to pick up values that have been set up by your legacy OpenGL code; you can find some of them here: http://www.hamed3d.org/Res/GLSL_Reference/0321552628_AppI.pdf
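As a concrete illustration: in desktop OpenGL 2.x a program object may contain only a fragment shader, in which case the fixed-function vertex stage keeps running untouched. A minimal sketch of such a swizzling shader follows; the u_frame uniform name is an assumption, and the .gbar swizzle is a guess at how ARGB bytes land in an RGBA texture, so verify it against your actual buffers.

// GLSL 1.20, fragment-only program: fixed-function vertex
// processing, matrices and texture state all stay in effect.
static const char* kSwizzleFS =
    "uniform sampler2D u_frame;                           \n"
    "void main() {                                        \n"
    "    vec4 t = texture2D(u_frame, gl_TexCoord[0].st);  \n"
    "    gl_FragColor = t.gbar;  // ARGB -> RGBA (verify) \n"
    "}                                                    \n";

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &kSwizzleFS, NULL);
glCompileShader(fs);                 // check the info log in real code

GLuint prog = glCreateProgram();
glAttachShader(prog, fs);            // deliberately no vertex shader
glLinkProgram(prog);

// Per frame: enable the swizzle only for video sources.
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "u_frame"), 0);  // texture unit 0
// ... draw the textured kaleidoscope mesh exactly as before ...
glUseProgram(0);                     // back to the fully fixed pipeline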
In OpenGL ES 1.x, one could call glTranslate first and then glRotate to change where the center of rotation is located (i.e., rotate around a given point). As far as I understand, in OpenGL ES 2.0 matrix computations are done on the CPU side. I am using IwGeom (from the Marmalade SDK), a typical (probably) matrix package. From the documentation:
Matrices in IwGeom are effectively in 4x3 format, comprising a 3x3 rotation and a 3-component vector translation.
I find it hard to obtain the same effect with this method: the translation is always applied after the rotation. Moreover, in Marmalade one also sets the model matrix:
IwGxSetModelMatrix( &modelMatrix );
And, apparently, rotation and translation are also applied in a fixed order: a) rotation, b) translation.
How to obtain the OpenGL ES 1.x effect?
Marmalade's IwGx wraps OpenGL, and it is more similar to GLES 1.0 than GLES 2.0 in that it does not require shaders.
glTranslate and glRotate modify the current (model-view) matrix. You can replace them with something like:
CIwFMat viewMat1 = IwGxGetModelMatrix();   // save the current model matrix

CIwFMat rot;
rot.SetIdentity();
rot.SetRotZ(.....);                        // or another rotation setter

CIwFMat viewMat2 = viewMat1;
viewMat2.PostMult(rot);                    // or viewMat2.PreMult(rot);

IwGxSetModelMatrix(&viewMat2);             // note: takes a pointer
// Draw something
IwGxSetModelMatrix(&viewMat1);             // restore the saved matrix
If you use GLES 2.0, the matrix can be computed in the vertex shader as well. That might be faster than doing it on the CPU, although a CPU with NEON instructions has similar performance on an iPhone 4S. To get the ES 1.x translate-then-rotate effect, compose the matrices around the pivot yourself, as sketched below.
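A sketch of rotating around an arbitrary point, conceptually M = T(pivot) * R * T(-pivot): move the pivot to the origin, rotate, move back. The CIwFMat::SetTrans call and the PostMult composition order are assumptions based on the IwGeom docs; verify them against your SDK version, since conventions differ between matrix packages.

CIwFVec3 pivot(10.0f, 20.0f, 0.0f);        // example pivot point (assumed)

CIwFMat toOrigin;
toOrigin.SetIdentity();
toOrigin.SetTrans(-pivot);                 // move pivot to the origin

CIwFMat rot;
rot.SetIdentity();
rot.SetRotZ(angle);                        // rotate about Z at the origin

CIwFMat back;
back.SetIdentity();
back.SetTrans(pivot);                      // move back to the pivot

CIwFMat m = toOrigin;
m.PostMult(rot);
m.PostMult(back);
IwGxSetModelMatrix(&m);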
I have a question regarding glPushMatrix, the matrix transformations, and OpenGL ES. The GLSL guide says that under OpenGL ES the matrices have to be computed:
However, when developing applications in modern versions of OpenGL and OpenGL ES or in WebGL, the model matrix has to be computed.
and
In some versions of OpenGL (ES), a built-in uniform variable gl_ModelViewMatrix is available in the vertex shader
As I understand it, gl_ModelViewMatrix is not available in all OpenGL ES specifications. So are functions like glMatrixMode, glRotate, ..., still valid there? Can I use them to calculate the model matrix? If not, how do I handle those transformation matrices?
First: you shouldn't use the matrix manipulation functions in regular OpenGL either. In old versions they're just too inflexible and redundant, and in newer versions they've been removed entirely.
Second: the source you're mentioning is a Wikibook, which means it's not an authoritative source. In the case of this Wikibook, it's been written to accommodate all versions of GLSL, and some of them, mainly the ones for OpenGL-2.1, have those variables.
You deal with those matrices by calculating them yourself (no, this is not slower; OpenGL's matrix functions were never GPU-accelerated) and passing them to OpenGL either via glLoadMatrix/glMultMatrix (old versions of OpenGL) or via a shader uniform.
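For the shader-uniform route, a minimal sketch; the T * Rz composition and the u_modelview name are illustrative assumptions. Call it while the program is bound with glUseProgram.

#include <math.h>

// Build modelview = T(tx,ty,tz) * Rz(angle) on the CPU, stored
// column-major as OpenGL expects, then hand it to the shader.
void setModelView(GLuint program, float tx, float ty, float tz, float angle)
{
    const float c = cosf(angle), s = sinf(angle);
    const float m[16] = {
         c,  s,  0, 0,   // column 0: rotated X axis
        -s,  c,  0, 0,   // column 1: rotated Y axis
         0,  0,  1, 0,   // column 2: Z axis
        tx, ty, tz, 1,   // column 3: translation
    };
    GLint loc = glGetUniformLocation(program, "u_modelview"); // assumed name
    glUniformMatrix4fv(loc, 1, GL_FALSE, m);
}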
If you're planning on doing this in Android, then take a look at this.
http://developer.android.com/reference/android/opengl/Matrix.html
It has functions to set up view, frustum, and transformation matrices, as well as some matrix operations.
Is there a way in OpenGL ES to do flat shading without repeating each vertex for every triangle?
In regular OpenGL this is done with glShadeModel, but in ES I write the shaders myself, so it's not that simple.
GLSL 1.30 introduces the keyword flat, which seems to enable this, but unfortunately ES 2.0 doesn't have it yet.
Yet another way to do this uses the dFdx/dFdy functions, which, alas, are also missing from core ES 2.0 (they exist only via the OES_standard_derivatives extension).
No, flat shading is not a feature of OpenGL ES 2.0, sorry. The usual workaround is exactly what you hoped to avoid: duplicating vertices so each triangle carries its own per-face attributes, as sketched below.
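Since ES 2.0 lacks the flat qualifier and the derivative functions in core, the common approach is the vertex duplication the question hoped to avoid. A sketch of the preprocessing step, with an illustrative Vec3 type that is not from any particular engine:

#include <stddef.h>

struct Vec3 { float x, y, z; };

// Cross product of two edges gives the (unnormalized) face normal.
static Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    return n;
}

// Expand an indexed mesh so every triangle owns three vertices, each
// carrying the same face normal; the interpolated normal is then
// constant across the triangle, which is exactly flat shading.
void expandForFlatShading(const Vec3* verts, const unsigned short* idx,
                          size_t triCount, Vec3* outPos, Vec3* outNrm)
{
    for (size_t t = 0; t < triCount; ++t) {
        Vec3 a = verts[idx[3*t]], b = verts[idx[3*t+1]], c = verts[idx[3*t+2]];
        Vec3 n = faceNormal(a, b, c);   // normalize here or in the shader
        outPos[3*t] = a; outPos[3*t+1] = b; outPos[3*t+2] = c;
        outNrm[3*t] = n; outNrm[3*t+1] = n; outNrm[3*t+2] = n;
    }
}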
What's the purpose of glNormal3f in OpenGL ES? If there is no immediate mode available in OpenGL ES, what would I use glNormal3f for? Example code would be nice.
I think it's there for the same reason there is a glColor function: if the normal is the same for all vertices of your geometry, you can specify it with glNormal before calling glDrawArrays/glDrawElements.
The only reason I can think of is that it is there to support efficiently expressing surfaces where many vertices share the same normal. With the arrays-based approach, you'd have to create an array with the same value repeated for each vertex, which wastes memory.
I find it curious that the manual page (for OpenGL ES 1.1) doesn't even mention it. I found one iPhone programming tutorial (PDF) that even claimed glNormal() was no longer there.
The OpenGL ES 1.1 specification does mention it, but yes, that's an error in the iPhone programming tutorial.
You are not supposed to use these functions anymore; stick to the glXXXXArray() calls. I suspect they are just leftovers kept to make porting from OpenGL to OpenGL ES easier.
In OpenGL ES, glNormal3f() is not a deprecated function, because it is useful for rendering flat 2D shapes in the {X,Y} plane. Instead of using:
glEnableClientState( GL_NORMAL_ARRAY );
glNormalPointer( GL_FLOAT, 0, vbuf + normal_offset );
it is simpler, and requires less VBO memory, to call:
glNormal3f( 0, 0, 1 );
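Putting both snippets in context, a minimal ES 1.1 sketch of drawing a flat shape in the {X,Y} plane with one shared normal; the quad vertex data is an illustrative assumption, and any lighting setup is omitted.

// One shared normal for a flat 2D shape (illustrative data).
static const GLfloat quad[] = {
    -1.0f, -1.0f,    1.0f, -1.0f,
    -1.0f,  1.0f,    1.0f,  1.0f,
};

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);

glNormal3f(0.0f, 0.0f, 1.0f);     // applies to every vertex drawn;
                                  // no GL_NORMAL_ARRAY needed
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glDisableClientState(GL_VERTEX_ARRAY);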