Efficiently sending values to GLSL shader - opengl-es

I am trying to write a particle system for OpenGL ES 2.0. Each particle is made up of 4 vertices, forming the little square on which a transparent texture is drawn.
The problem is that each particle has its own properties (color, position, size), which are constant across the 4 vertices of that particle. The only thing that varies per vertex is which corner of the square it is.
If I am to send the properties of the particle via uniform variables, I must do:
for (each particle) {   // do this maaaany times
    glUniform*(...);
    glDrawArrays(...);  // only draws 4 vertices
}
This is clearly inefficient, since each glDrawArrays call draws only 4 vertices.
If I send these properties via attribute variables, I must fill in the same information 4 times per particle, once for each vertex, in the attribute buffer:
struct particle buf[4*n];
for (int i = 0; i < n; ++i) {
    struct particle p;
    p = ...; // update particle i
    // duplicate the same data for all 4 corner vertices
    buf[4*i+0] = buf[4*i+1] = buf[4*i+2] = buf[4*i+3] = p;
}
glBufferData(..., buf, ...);
// then draw everything once afterwards...
This is memory-inefficient and seems very ugly to me. So what is the solution to this problem? What is the right way to pass parameters that change only every few vertices to the shader?

Use point sprites. The introduction of the point sprite extension specification is very explicit about how to solve your problem.
You can also combine the use of point sprites with another extension, point_size_array.
...
As Christian Rau has commented, point_size_array is no longer useful with the programmable pipeline: set the maximum point size as usual, then discard fragments based on their distance from the point center, derived from the texture coordinates generated by OpenGL. The particle size can be sent via an additional attribute; a sketch of this follows.
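A minimal GLSL ES sketch of that idea (all attribute and uniform names are illustrative, not mandated by the extension): each particle becomes a single point-sprite vertex, its size is written to gl_PointSize, and the fragment shader keys off the gl_PointCoord coordinates that OpenGL generates across the sprite:

// Vertex shader: one vertex per particle instead of four
attribute vec3 a_center;   // particle position
attribute float a_size;    // particle size in pixels
attribute vec4 a_color;    // particle color
uniform mat4 u_mvp;
varying vec4 v_color;

void main() {
    v_color = a_color;
    gl_Position = u_mvp * vec4(a_center, 1.0);
    gl_PointSize = a_size;
}

// Fragment shader: gl_PointCoord runs from 0 to 1 across the sprite
precision mediump float;
uniform sampler2D u_texture;
varying vec4 v_color;

void main() {
    // optional: shrink the drawn area by discarding fragments far from the center
    if (length(gl_PointCoord - vec2(0.5)) > 0.5)
        discard;
    gl_FragColor = v_color * texture2D(u_texture, gl_PointCoord);
}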

GL ES doesn't really have a good solution to this. Desktop OpenGL allows for instancing and various other tricks, but ES just doesn't have those.

You can use a Uniform Buffer Object. Note that this feature requires OpenGL ES 3.0 (roughly D3D10-class hardware), so it is not available in ES 2.0.
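A sketch of what the ES 3.0 shader side could look like, with all per-particle data in one uniform block indexed by a per-vertex particle index (the block, array, and attribute names are made up for illustration):

#version 300 es
uniform ParticleBlock {
    vec4 u_posSize[256];   // xyz = particle center, w = point size
    vec4 u_color[256];
};
in float a_particleIndex;  // same value for every vertex of a particle
out vec4 v_color;

void main() {
    int i = int(a_particleIndex);
    v_color = u_color[i];
    gl_Position = vec4(u_posSize[i].xyz, 1.0);
    gl_PointSize = u_posSize[i].w;
}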

Send the information via a texture. Texture sampling in vertex shaders is optional in OpenGL ES 2.0 (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is allowed to be 0), but on hardware that supports it, that would be optimal.
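A sketch of that approach, assuming a 1×N texture holding one texel of data per particle and a per-vertex particle index (names are illustrative); an ES 2.0 vertex shader must sample with an explicit LOD via texture2DLod:

attribute vec2 a_corner;          // which corner of the quad: (0,0) .. (1,1)
attribute float a_particleIndex;  // same value for all 4 vertices of a particle
uniform sampler2D u_particleTex;  // 1 x N texture, one texel per particle
uniform float u_particleCount;
uniform mat4 u_mvp;
varying vec4 v_color;

void main() {
    // look up this particle's color in the data texture
    float u = (a_particleIndex + 0.5) / u_particleCount;
    v_color = texture2DLod(u_particleTex, vec2(u, 0.5), 0.0);
    gl_Position = u_mvp * vec4(a_corner, 0.0, 1.0); // placeholder positioning
}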

Related

SCNProgram Vertex Shader – retrieve node clip space coordinate

I'm fairly new to shader development and currently working on an SCNProgram to replace the rendering of a plane geometry.
Within the program's vertex shader I'd like to access the position (basically the anchor position) of the node/mesh as a clip-space coordinate. Is there an easy way to accomplish that, maybe through the supplied node buffer?
I got kinda close with:
xCoordinate = scn_node.modelViewProjectionTransform[3].x / povZPosition
yCoordinate = scn_node.modelViewProjectionTransform[3].y / povZPosition
The POV z-position is injected from outside through a custom buffer.
This breaks, though, when the POV is facing the scene at an angle.
I figured that I could probably just calculate the node position myself via:
renderer.projectPoint(markerNode.presentation.worldPosition)
and then pass that into my shader via "program.handleBinding(ofBufferNamed: …" on every frame. I hope there is a better way, though.
From digging through Google, the Unity equivalent would probably be: https://docs.unity3d.com/Packages/com.unity.shadergraph#6.9/manual/Screen-Position-Node.html
I would be really thankful for any hints. Attached is a little visualization.
If I'm reading you correctly, it sounds like you actually want the NDC position of the center of the node. This differs subtly from the clip-space position, but both are computable in the vertex shader as:
float4 clipSpaceNodeCenter = scn_node.modelViewProjectionTransform[3];
float2 ndcNodeCenter = clipSpaceNodeCenter.xy / clipSpaceNodeCenter.w;

GLES fragment shader, get 0-1 range when using TextureRegion - libGDX

I have a fragment shader in which I use v_texCoords as a base for some effects. This works fine if I use a single Texture, as v_texCoords always ranges from 0 to 1, so the center point is always (0.5, 0.5), for example. If I am drawing from part of a TextureRegion, though, my shader messes up because v_texCoords no longer ranges from 0 to 1. Are there any methods or variables I can use to get a consistent 0-1 range in my fragment shader? I want to avoid setting uniforms, as this would mean flushing the batch for every draw.
Thanks!
Nothing like this exists at the shader level - TextureRegions are entirely a libGDX construct that doesn't exist at all at the OpenGL ES API level.
Honestly, for what you are trying to do, I'd simply suggest not overloading the texture coordinate for two orthogonal purposes: just add a separate vertex attribute which provides the 0-to-1 number, as sketched below.
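A minimal sketch of that suggestion (a_localCoord and v_localCoord are made-up names; you would fill the attribute with corner values from (0,0) to (1,1) when building each quad's vertices):

// Vertex shader
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec2 a_localCoord;   // always spans 0..1 across the quad
uniform mat4 u_projTrans;
varying vec2 v_texCoords;
varying vec2 v_localCoord;

void main() {
    v_texCoords = a_texCoord0;   // atlas coordinates, used only for sampling
    v_localCoord = a_localCoord; // region-local 0..1 coordinates, used for effects
    gl_Position = u_projTrans * a_position;
}

// Fragment shader excerpt: (0.5, 0.5) is now always the center
// vec2 fromCenter = v_localCoord - vec2(0.5);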

Approach to write a fragment shader for each triangle in a mesh

I have a mesh that consists of several triangles (on the order of 100). I would like to define a different fragment shader for each of them, so as to be able to show a different kind of reflection behaviour for each triangle.
How should I approach this problem? Should I start by defining a GLSL program and trying to distinguish between different triangles? This answer suggests that this is not the right approach: glDrawElements and flat shading. Even Using a different vertex and fragment shader for each object in webgl seems not the right approach, since I do not want to have multiple objects, just one with different materials (fragment shaders) on it.
My suggestion would be to create a super shader which can handle all the different scenarios you desire.
In order to set this up you'll need attributes that dictate which part of the shader to use.
e.g. declare a float attribute in the vertex shader (GLSL ES does not allow bool attributes, and attributes are not visible in the fragment shader), forward it as a varying, and branch on it in the fragment shader:

// vertex shader
attribute float a_shadingMode; // 0.0 = flat, 1.0 = phong
varying float v_shadingMode;
// in main(): v_shadingMode = a_shadingMode;

// fragment shader, inside main():
if (v_shadingMode < 0.5) {
    // perform flat shading
} else {
    // perform phong shading
}
Then set up your buffers so that the vertices of each triangle carry the shading attribute that should be applied to them.

How to draw point sprites of different sizes in OpenGL?

I'm making a small OpenGL Mac app that uses point sprites. I'm using a vertex array to draw them, and I want to use a similar "array" function to give them all different sizes.
In OpenGL ES, there is a client state called GL_POINT_SIZE_ARRAY_OES, and a corresponding function glPointSizePointerOES() which do exactly what I want, but I can't seem to find an equivalent in standard OpenGL.
Does OpenGL support this in any way?
To expand a little on Fen's answer, the fixed function OpenGL pipeline can't do exactly what you want. It can do 'perspective' points which get smaller as the Z distance increases, but that's all.
For arbitrary point size at each vertex you need a custom vertex shader to set the size for each. Pass the point sizes either as an attribute array (re-use surface normals or tex coords, or use your own attribute index) or in a texture map, say a 1D texture with width equal to the size of the points array. The shader code example referred to by Fen uses the texture map technique.
OpenGL does not support this Apple extension, but you can do it another way.
For the fixed pipeline (OpenGL 1.4 and above), you need to set up point parameters:
// a, b, c coefficients of the distance attenuation 1/(a + b*d + c*d^2),
// where d is the eye-space distance to the point
float attenuation[3] = {0.0f, 1.0f, 0.0f};
glPointParameterfvEXT(GL_POINT_DISTANCE_ATTENUATION, attenuation);
glPointParameterfEXT(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterfEXT(GL_POINT_SIZE_MAX, 128.0f);
glEnable(GL_POINT_SPRITE);
OpenGL will then calculate the point size for you, attenuated by distance.
Shaders
Here is some info for rendering using shaders:
http://en.wikibooks.org/wiki/OpenGL_Programming/Scientific_OpenGL_Tutorial_01
If by "Does OpenGL support this", you mean "Can I do something like that in OpenGL", absolutely.
Use shaders. Pass a one-dimensional generic vertex attribute that represents your point size, and in your vertex shader write that size to the gl_PointSize output. It's really quite simple.
If you meant, "Does fixed-function OpenGL support this," no.
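A minimal sketch of such a vertex shader (the attribute and uniform names are illustrative):

attribute vec3 a_position;
attribute float a_pointSize;  // one size per point, from the attribute array
uniform mat4 u_mvp;

void main() {
    gl_Position = u_mvp * vec4(a_position, 1.0);
    gl_PointSize = a_pointSize;
}

On desktop OpenGL, also remember to glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) so the shader-written size takes effect.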

How do I use a vertex shader to multiply my vertex data by a uniform?

This is a question that came from an earlier problem I had. Basically, I was trying to implement orthographic scaling in my shader by modifying the scale components of my projection matrix, but it wasn't possible. What I had to actually do was scale the verts before "sending them in" to my shader via a draw. That works like a charm...
But, of course, the issue is that in software now I'm responsible for scaling all my verts before handing them off to the shader. This makes me wonder if it would be possible to have a vertex shader do this. I imagine it is, but I can't figure it out.
What I'm doing is just going through all 4 of my verts (held in float vertices[8]) and doing *= scale;. To be slightly more accurate, I multiply the X and Y components separately by scaleX and scaleY.
How can I do this same thing in a vertex shader?
Replace gl_Vertex with (gl_Vertex * scale) everywhere in your vertex shader. Or, if you're using a user-defined input for your coordinate, put * scale on that.
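A minimal sketch with a user-defined input (a_position and u_scale are assumed names, with u_scale set once per draw via glUniform2f):

attribute vec2 a_position;
uniform vec2 u_scale;   // (scaleX, scaleY)

void main() {
    // scale X and Y in the shader instead of rewriting the vertex array
    gl_Position = vec4(a_position * u_scale, 0.0, 1.0);
}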
