How to draw a colored rectangle in OpenGL ES? - opengl-es

Is this easy to do? I don't want to use texture images. I want to create a rectangle, probably of two polygons, and then set a color on this. A friend who claims to know OpenGL a little bit said that I must always use triangles for everything and that I must use textures for everything when I want it colored. Can't imagine that is true.

You can set per-vertex colors (which can all be the same) and draw the rectangle as two triangles, e.g. a four-vertex triangle strip; OpenGL ES has no GL_QUADS, so your friend is right that everything ends up as triangles, but wrong that color requires textures. The tricky part about OpenGL ES is that it doesn't support immediate mode, so you have a much steeper initial learning curve compared to desktop OpenGL.
This question covers the differences between OpenGL and ES:
OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?

With OpenGL ES 2.0, you do have to use a shader, which (among other things) normally sets the color. As long as you want one solid color for the whole thing, you can do it in the vertex shader.
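For the solid-color case that means something like the following pair of shaders. This is a minimal sketch; the names a_position, u_mvp, and u_color are placeholders that you bind yourself with glGetAttribLocation/glGetUniformLocation:

```glsl
// Vertex shader: positions only, no texture coordinates needed.
attribute vec4 a_position;
uniform mat4 u_mvp;
void main() {
    gl_Position = u_mvp * a_position;
}
```

```glsl
// Fragment shader: every fragment of the rectangle gets u_color.
precision mediump float;
uniform vec4 u_color;
void main() {
    gl_FragColor = u_color;
}
```

Set u_color once with glUniform4f and draw the rectangle's four vertices as a GL_TRIANGLE_STRIP; no texture is involved anywhere.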

Related

Surface Normals OpenGL

So, I am still working on an OpenGL ES 2.0 terrain rendering program.
I am getting weird drawing at the tops of ridges, which I am guessing is due to surface normals not being applied.
So, I have calculated normals.
I know that in other versions of OpenGL you can just activate the normal array and it will be used for culling.
To use normals in OpenGL ES, can I just activate the normals, or do I have to use a lighting algorithm within the shader?
Thanks
OpenGL doesn't use normals for culling. It culls based on whether the projected triangle has its vertices arranged clockwise or anticlockwise. The specific decision is based on (i) which way around you said was considered to be front-facing via glFrontFace; (ii) which of front-facing and/or back-facing triangles you asked to be culled via glCullFace; and (iii) whether culling is enabled at all via glEnable/glDisable.
Culling is identical in both ES 1.x and 2.x. It's a fixed hardware feature. It's external to the programmable pipeline (and, indeed, would be hard to reproduce within the ES 2.x programmable pipeline because there's no shader with per-triangle oversight).
If you don't have culling enabled then you are more likely to see depth-buffer fighting at ridges as the face with its back to the camera and the face with its front to the camera have very similar depths close to the ridge and limited precision can make them impossible to distinguish correctly.
Lighting in ES 1.x is calculated from the normals. Per-vertex lighting can produce weird problems at hard ridges because normals at vertices are usually the average of those at the faces that join at that vertex, so e.g. a fixed mesh shaped like \/\/\/\/\ ends up with exactly the same normal at every vertex. But if you're not using 1.x then that won't be what's happening.
To implement lighting in ES 2.x you need to do so within your shader. As a result of that, and of normals not being used for any other purpose, there is no formal way to specify normals as anything special. They're just another vertex attribute and you can do with them as you wish.

Multi-pass shaders in OpenGL ES 2.0

First, do subroutines require GLSL 4.0+? Does that make them unavailable in the GLSL version of OpenGL ES 2.0?
I don't quite understand what multi-pass shaders are.
Here is my picture:
Draw a group of something (e.g. sprites) to an FBO using some shader.
Treat the FBO as a big texture on a screen-sized quad and use another shader which, for example, turns the texture's colors to grayscale.
Draw the FBO-textured quad to the screen with the grayscale colors.
Or is this called something else?
So multi-pass = using one shader's output as another shader's input? So we render one object twice or more? How does one shader's output get to another shader's input?
For example
glUseProgram(shader_prog_1);//Just plain sprite draw
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, /*some texture_id*/);
//Setting input for shader_prog_1
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//Disabling arrays, buffers
glUseProgram(shader_prog_2);//Uses same vertex, but different fragment shader program
//Setting input for shader_prog_2
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can anyone provide a simple example of this?
In general, the term "multi-pass rendering" refers to rendering the same object multiple times with different shaders, and accumulating the results in the framebuffer. The accumulation is generally done via blending, not with shaders. That is, the second shader doesn't take the output of the first. They each perform part of the computation, and the blend stage combines them into the final value.
Nowadays, this is primarily done for lighting in forward-rendering scenarios. You render each object once for each light, passing different lighting parameters and possibly using different shaders each time you render a light. The blend mode used to accumulate the results is additive, since light reflectance is an additive property.
Do subroutines require GLSL 4.0+? Does that make them unavailable in the GLSL version of OpenGL ES 2.0?
This is a completely different question from the entire rest of your post, but the answer is yes and no.
No, in the sense that ARB_shader_subroutine is an OpenGL extension, and it therefore could be implemented by any OpenGL implementation. Yes, in the practical sense that any hardware that actually could implement shader_subroutine could also implement the rest of GL 4.x and therefore would already be advertising 4.x functionality.
In practice, you won't find shader_subroutine supported by non-4.x OpenGL implementations.
It is unavailable in GLSL ES 2.0 because it's GLSL ES. Do not confuse desktop OpenGL with OpenGL ES. They are two different things, with different GLSL versions and different featuresets. They don't even share extensions (except for a very few recent ones).

Seamless Cube Maps on OpenGL ES 2.0 using iOS?

Is there an equivalent for
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
in OpenGL ES 2.0 when implementing cubemap samplers? I'm developing a test app on the iPad (cubemapping a sphere) and I'm getting seams between each face of the cubemap.
Or, if there is no magic glEnable for ES 2.0, what is the best way to get rid of the seams?
OpenGL ES does not have the equivalent of desktop GL's ARB_seamless_cube_map functionality.
And no, glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S/T, GL_CLAMP_TO_EDGE) does not count. Seamless cubemapping means that texels from different faces can be blended together. Clamping to the edge means exactly that: clamping to the edge of a face. What you've done is make the seam less noticeable; it's still there.

Returning values from an OpenGL ES 2.0 shader

Is it possible to get any values out of an OpenGL ES 2.0 shader? I'd like to use the GPU to do some processing (not 3D). The only thing I could think of is to render to the canvas and then use readPixels to get the colors (preferably in a large 2D array).
Yes, that's called GPGPU. The only way is to draw to a framebuffer or a texture and read the result back; there are tutorials that explain the technique (just stick to the GLSL version).

converting 2D mouse coordinates to 3D space in OpenGL ES

I want to convert mouse's current X and Y coordinates into the 3D space I have drawn in the viewport. I need to do this on the OpenGL ES platform. I found following possible solutions implemented in OpenGL, but none fits what I am looking for.
I found NeHe's tutorial on doing exactly this, but in traditional OpenGL way. It uses gluUnProject.
http://nehe.gamedev.net/data/articles/article.asp?article=13
Although gluUnProject is not available in OpenGL ES, its implementation seems simple enough to port back. But before calling it, we need to call glReadPixels with GL_DEPTH_COMPONENT and that is not possible in OpenGL ES. (The reason I found in this thread: http://www.khronos.org/message_boards/viewtopic.php?f=4&t=771)
What I want to do is similar to picking, except that I don't want to select the object but I want exact coordinates so that I can recognize particular portion of the object that is currently under mouse cursor. I went through the Picking tutorials in this answer.
https://stackoverflow.com/posts/2211312/revisions
But they need glRenderMode, which I believe is absent in OpenGL ES.
If you know how to solve this problem in OpenGL ES, please let me know.
Thanks.
I think the general solution is to figure out where in world space the clicked coordinate falls, treating the screen as a plane in the world (the camera's near plane). Then you cast a ray from the camera through that point into your scene.
This requires "world-space" code to figure out which object(s) the ray intersects with; the solutions you mention as being unsuitable for OpenGL ES seem to be image-based, i.e. depend on the pixels generated when rendering the scene.
With OpenGL ES 2.0 you could use a FBO and render the depth values to a texture. Obviously, this wouldn't be exactly cheap (just a way around the restriction of glReadPixels)...
Further, since - as I understand it - you want to pick certain parts of your object you might want to do some sort of color-picking where each selectable portion of the object has an unique color (note that the Lighthouse 3D tutorial only shows the general idea behind color-picking, your implementation would probably be different). You could optimize a little by performing a ray/bounding-box intersection beforehand and only rendering the relevant candidates to the texture used for picking.