One section of the OpenGL ES 3.0 spec is not completely clear to me.
https://www.khronos.org/registry/Ope...s_spec_3.0.pdf, page 185:
If an OpenGL ES Shading Language 1.00 fragment shader writes to
gl_FragColor or gl_FragData, DrawBuffers specifies the draw buffer, if
any, into which the single fragment color defined by gl_FragColor or
gl_FragData[0] is written. If an OpenGL ES Shading Language 3.00
fragment shader writes a user-defined varying out variable,
DrawBuffers specifies a set of draw buffers into which each of the
multiple output colors defined by these variables are separately
written.
I understand this the following way:
1) If I use OpenGL ES 3.0 and write shaders using GLSL 1.0, then the only way I can write to 2 buffers at once (COLOR0 and COLOR1) is to manually specify what gets written to gl_FragData[0] and gl_FragData[1] in my fragment shader. If I then want to get back to writing only to COLOR0, I must switch glPrograms to one that only writes to gl_FragData[0] (or gl_FragColor).
2) If on the other hand I use OpenGL ES 3.0 and write my shaders using GLSL 3.0, then I can write a single fragment shader with output defined to be a single varying out variable, and dynamically switch on and off writing to COLOR1 with calls to DrawBuffers() and with no need to swap glPrograms.
Is the above correct?
Is the above correct?
No. In ESSL 1.0 shaders you can only write to a single color buffer: gl_FragColor, or its alias gl_FragData[0]. There is no such thing as gl_FragData[1] in ESSL 1.0.
and dynamically switch on and off writing to COLOR1 with calls to DrawBuffers() and with no need to swap glPrograms.
Yes, this is how it works in ESSL 3.x.
However, in most cases it's far more efficient just to swap programs. You execute the shader program millions of times (once per fragment), so having one program containing all of the code for all color targets and just masking out output writes is horribly inefficient. Don't do it. You want your shader programs to be as close to optimal as possible - that's where your GPU runtime goes ...
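For reference, the ESSL 3.00 mechanism the question describes looks roughly like this (a sketch only; the output and varying names are illustrative):

#version 300 es
precision mediump float;

// Two user-defined outputs, one per color attachment (names are illustrative).
layout(location = 0) out vec4 fragColor0;   // routed by the DrawBuffers entry for attachment 0
layout(location = 1) out vec4 fragColor1;   // routed by the DrawBuffers entry for attachment 1

in vec4 v_color;   // assumed varying from the vertex shader

void main() {
    fragColor0 = v_color;
    fragColor1 = v_color.bgra;   // arbitrary second output, for illustration
}

On the API side the second target can then be switched on and off without touching the program:

// Write to both attachments of the bound FBO:
const GLenum both[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, both);
// ... draw ...

// Discard the second output for subsequent draws, same program:
const GLenum firstOnly[2] = { GL_COLOR_ATTACHMENT0, GL_NONE };
glDrawBuffers(2, firstOnly);
// ... draw ...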
Related
I'm doing a project using OpenCL and thought it could run on a Mali 400 GPU. But I recently found that the Mali 400 GPU only supports the OpenGL ES 2.0 standard.
I still have to use this GPU, so is there any way to make a shader act nearly the same as an OpenCL kernel or CUDA kernel?
There are some features I need but am not sure GLSL will support:
For example, I allocated a global memory buffer for the GPU, and I want to read and write that memory in the shader. How should I pass the variable from the host to the vertex shader, and can I expect the data to be both 'in and out' like this?
layout (location = 0) inout vec3 a_Data;
I want to fetch a_Data as 64 float values. Is there an easy way to declare it, like vec64 or float[64], or do I have to use multiple vec4s to assemble it?
So is there any way to let a vertex shader act nearly the same as OpenCL kernel or CUDA kernel?
No.
In ES 2.0, vertex shaders have no mechanism to write to memory. At all. All of their output variables go to the rasterizer to generate fragments to be processed by the fragment shader.
The only way to do GPGPU processing on such hardware is to use the fragment shader to write data. That means you have to manipulate your data to look like a rendering operation. Your vertex positions need to set up your fragment shader to be able to write values to the appropriate places. And your fragment shader needs to be able to acquire whatever information it needs based on what the VS provides (which is at a per-vertex granularity and interpolated for each fragment).
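A rough sketch of what that looks like on ES 2.0, rendering one result per texel into a texture (width, height, computeLikeProgram and drawFullScreenQuad are assumed to exist elsewhere; this is illustrative, not a complete implementation):

GLuint outTex, fbo;

glGenTextures(1, &outTex);
glBindTexture(GL_TEXTURE_2D, outTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* one output element per texel */

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, outTex, 0);

/* Each fragment computes one result: the fragment shader reads its inputs from
   textures bound as samplers and writes a single RGBA8 value per texel. */
glViewport(0, 0, width, height);
glUseProgram(computeLikeProgram);   /* an ordinary ES 2.0 program, assumed */
drawFullScreenQuad();               /* two triangles covering the viewport, assumed */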
As OpenGL ES does not support shared "uniform blocks", I was wondering if there is a way I can store matrices that can be referenced by a number of different shaders. A simple example would be a worldToViewport or worldToEye matrix that would not change for an entire frame and that all shaders would reference. I saw one post where a vertex is transformed with 3 or 4 dot calls against 4 "column vectors", but I'm wondering if there is a way to assign the buffer data to a "mat4" in the shader.
Ah yes, the need for this comes from WebGL, which at the moment seems to support only OpenGL ES 2.0.
I wonder if it supports indexed attribute buffers, as I assume they don't need to be any particular size relative to the size of the position vertex array.
Then, if one can use a hard-coded or calculated index into the attribute buffer (in the shader), and if one can bind more than one attribute buffer at a time and access all buffers "bound to the shader" simultaneously in a shader ...
If all of that is true, I can see how it might work. I need a good language/architecture reference on shaders, as I am somewhat new to shader programming; I'm trying to design a wall without knowing the shapes of the bricks :)
Vertex attributes are per-vertex, so there is no way to share vertex attributes amongst multiple vertices.
OpenGL ES 2.0 upwards has CPU-side uniforms, which must be uploaded individually from the CPU at draw time. Uniforms belong to the program object, so for uniforms that are constant for a frame you only have to modify each program once; the cost isn't necessarily proportional to draw count.
OpenGL ES 3.0 onwards has Uniform Buffer Objects (UBOs) which allow you to load uniforms from a buffer in memory.
I'm not sure what you mean by "doesn't support shared uniform blocks", as that's pretty much what a UBO is, although it won't work on older hardware which only supports OpenGL ES 2.x.
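As a sketch of the UBO approach on ES 3.0 (the block name, member names, binding point and the program/frameMatrices handles are just examples): in each shader that needs the per-frame matrices, declare

layout(std140) uniform FrameData {
    mat4 worldToViewport;
    mat4 worldToEye;
};

and on the API side fill one buffer per frame and point every program at the same binding:

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 2 * 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   /* binding point 0, chosen arbitrarily */

/* Once per program, route its FrameData block to binding point 0. */
GLuint idx = glGetUniformBlockIndex(program, "FrameData");
glUniformBlockBinding(program, idx, 0);

/* Once per frame, upload the matrices; every program sharing the binding sees them.
   frameMatrices: client-side data matching the std140 layout, assumed. */
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 2 * 16 * sizeof(float), frameMatrices);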
I need to create a texture for an OpenGL ES 2.0 application with the following specs:
Each pixel has two components (let's call them r and g in the fragment shader).
Each pixel component is a 16 bit float.
That means every pixel in the texture has 4 bytes (2 bytes / 16 bit for each component).
The fragment shader should be able to sample the texture as 2 float16 components.
All formats must be supported on OpenGL ES 2.0 and be as efficient as possible.
How would the appropriate glTexImage2D call look?
Regards
Neither floating point textures nor floating point render targets are supported in OpenGL ES 2.x. The short answer is therefore "you can't do what you are trying to do", at least not natively.
You can emulate higher precision by packing pairs of values into a RGBA8 texture or render target, e.g. the pair of RG values is one value, and BA is the other, but you'll have to pack/unpack the component 8-bit unorms yourself in shader code. This is quite a common solution in deferred rendering G-buffers for example, but can be relatively expensive on some of the lower-end mobile GPU parts (given it's basically just overhead, rather than useful rendering).
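A rough sketch of that packing in ESSL 1.00 (the function and sampler names are made up; you also ideally want highp where available, since mediump does not have enough mantissa bits to keep ~16 bits of precision):

// Store one value in two 8-bit unorm channels; v is assumed to be in [0, 1].
vec2 pack16(float v) {
    float scaled = v * 255.0;
    return vec2(floor(scaled) / 255.0, fract(scaled));   // coarse 8 bits, fine 8 bits
}

float unpack16(vec2 p) {
    return p.x + p.y / 255.0;
}

// When writing the packed target:
//   gl_FragColor = vec4(pack16(r), pack16(g));
// When reading it back:
//   vec4 t = texture2D(u_packedTex, uv);   // u_packedTex is illustrative
//   float r = unpack16(t.rg);
//   float g = unpack16(t.ba);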
I'm using a ParticleSystem with PointSprites (inspired by the Cocos2D source). But I wonder how to rebuild the functionality for OpenGL ES 2.0:
glEnable(GL_POINT_SPRITE_OES);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glPointSizePointerOES(GL_FLOAT,sizeof(PointSprite),(GLvoid*) (sizeof(GL_FLOAT)*2));
glDisableClientState(GL_POINT_SIZE_ARRAY_OES);
glDisable(GL_POINT_SPRITE_OES);
These generate BAD_ACCESS errors when using an OpenGL ES 2.0 context.
Should I simply go with 2 TRIANGLES per PointSprite? But that's probably not very efficient (overhead for the extra vertices).
EDIT:
So, my new problem with the suggested solution from:
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
is how to pass many different sizes in one batch call. I thought of using an attribute instead of a uniform, but then I would always need to pass a point size to my shaders - even if I'm not drawing GL_POINTS. So, maybe a second shader (a shader only for GL_POINTS)?! I'm not sure about the overhead of switching shaders every frame in the draw routine (because when the particle system is used, I naturally also want to render regular GL_TRIANGLES without a point size)... Any ideas on this?
Doing what I already described in my comment here is what you need: https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
As for which approach to take, you can either use different shaders for different types of drawables in your application, or add another boolean uniform to your shader and enable or disable setting gl_PointSize through your shader code. It's usually up to you. What you need to keep in mind is that changing the shader program is one of the most costly operations, so drawing objects of the same type in a batch will be better in that case. I'm not really sure whether an if statement in your shader code will have a huge performance impact.
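A possible sketch of the single-program variant with a boolean uniform (all names here are illustrative):

// ESSL 1.00 vertex shader: per-vertex size for point sprites, gated by a uniform.
attribute vec4 a_position;
attribute float a_pointSize;     // only fed with real data when drawing GL_POINTS
uniform mat4 u_mvp;
uniform bool u_isPointSprite;    // toggled from the application

void main() {
    gl_Position = u_mvp * a_position;
    gl_PointSize = u_isPointSprite ? a_pointSize : 1.0;
}

When you are not drawing GL_POINTS you do not have to supply per-vertex sizes at all; disable the array and give the attribute a constant value instead:

glDisableVertexAttribArray(pointSizeLoc);   /* pointSizeLoc from glGetAttribLocation, assumed */
glVertexAttrib1f(pointSizeLoc, 1.0f);       /* constant value used for every vertex */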
When setting up attribute locations for an OpenGL shader program, you are faced with two options:
glBindAttribLocation() before linking to explicitly define an attribute location.
or
glGetAttribLocation() after linking to obtain an automatically assigned attribute location.
What is the utility of using one over the other?
And which one, if any, is preferred in practice?
I know one good reason to prefer explicit location definition.
Consider that you hold your geometry data in Vertex Array Objects. For a given object, you create a VAO in such way that the indices correspond to, for example:
index 0: positions,
index 1: normals,
index 2: texcoords
Now consider that you want to draw one object with two different shaders. One shader requires position and normal data as input, the other - positions and texture coords.
If you compile those shaders, you will notice that the first shader will expect the positions at attribute index 0 and normals at 1. The other would expect positions at 0 but texture coords at 1.
Quoting https://www.opengl.org/wiki/Vertex_Shader:
Automatic assignment
If neither of the prior two methods assign an input to an attribute index, then the index is automatically assigned by OpenGL when the program is linked. The index assigned is completely arbitrary and may be different for different programs that are linked, even if they use the exact same vertex shader code.
This means that you wouldn't be able to use your VAO with both shaders. Instead of having one VAO per, say, object, you'd need - in the worst case - a separate VAO per object per shader.
Forcing the shaders to use your own attribute numbering convention via glBindAttribLocation can solve this problem easily - all you need to do is keep a consistent relation between attributes and their established IDs, and force the shaders to use that convention when linking.
(That's not really a big issue if you don't use separate VAOs, but still might make your code clearer.)
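For example, applying the convention above to every program might look like this (the shader handles and attribute names are illustrative):

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

/* Must be called before glLinkProgram to take effect. */
glBindAttribLocation(program, 0, "a_position");
glBindAttribLocation(program, 1, "a_normal");
glBindAttribLocation(program, 2, "a_texcoord");

glLinkProgram(program);
/* Any VAO built with positions at 0, normals at 1 and texcoords at 2 now works
   with this program; binding a name a shader does not use is harmless. */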
BTW:
When setting up attribute locations for an OpenGL shader program, you are faced with two options
There's a third option in OpenGL/GLSL 3.3: Specify the location directly in shader code. It looks like this:
layout(location=0) in vec4 position;
But this is not present in the GLSL ES shading language.
Another answer here is that glGetAttribLocation returns data to the caller, which means that it implicitly requires a pipeline flush. If you call it right after you compile your program, you're essentially forcing asynchronous compilation to occur synchronously.
The third option, ie layout(location=0) in vec4 position; in the shader code, is now available in OpenGL ES 3.0/GLSL 300 es. Only for vertex shader input variables though.
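For example, a GLSL ES 3.00 vertex shader can declare (attribute names are illustrative):

#version 300 es
layout(location = 0) in vec4 a_position;
layout(location = 1) in vec3 a_normal;
layout(location = 2) in vec2 a_texcoord;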