I just tried to do this
C++
struct PointLight
{
glm::vec4 position;
glm::vec4 colour;
};
std::vector<PointLight> lights_array;
GLSL 320 ES:
layout (std140) struct PointLight
{
vec4 position;
vec4 colour;
};
layout (std140) buffer Lights
{
int count;
PointLight data [];
} b_lights;
The compile error surprised me:
error C7600: no value specified for layout qualifier 'std140'
I can't find a straight answer, but I get the impression that I can't specify std140 on a struct definition. Is that so? Or how should I spell it?
If I can't, then how can I guarantee that the lights_array I send to glBufferData has the correct layout for the shader's b_lights.data array?
In other words, why is std140 required for the buffer but not for the struct?
Layout qualifiers like std140 apply to interface blocks, not to structs. The layout you put on the block determines how the block lays out its members, recursively, through their entire contents, including any struct used inside it.
So you don't need (and aren't allowed) to apply an interface block layout to a random struct; just drop layout (std140) from the PointLight definition and keep it on the Lights buffer block.
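For the C++ side, here is a minimal sketch (not from the original post) of how the matching upload could look, assuming the Lights block is bound at binding point 0. Under std140 the int count sits at offset 0, but the PointLight array has to start on a 16-byte boundary, so the header is padded to 16 bytes; each PointLight (two vec4s) then occupies exactly the 32-byte std140 array stride, so the glm structs can be copied as-is:
#include <cstring>
#include <vector>
#include <glm/glm.hpp>
// plus your GL loader or <GLES3/gl32.h>

struct PointLight
{
    glm::vec4 position;
    glm::vec4 colour;
};

void uploadLights(GLuint ssbo, const std::vector<PointLight>& lights_array)
{
    // std140: "int count" at offset 0, padded so data[] starts at offset 16.
    struct Header { int count; int pad[3]; };
    Header header{ static_cast<int>(lights_array.size()), {0, 0, 0} };

    std::vector<char> blob(sizeof(Header) + lights_array.size() * sizeof(PointLight));
    std::memcpy(blob.data(), &header, sizeof(Header));
    if (!lights_array.empty())
        std::memcpy(blob.data() + sizeof(Header), lights_array.data(),
                    lights_array.size() * sizeof(PointLight));

    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, blob.size(), blob.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // assumes binding = 0
}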
I'm setting a render target that has two outputs, one of type rgba and the other of type r8. I would like to know, inside the Filament material definition file, how do I specify the outputs for each target? In OpenGL it would look like:
out vec4 accum;
out float reveal;
void material(inout MaterialInputs material) {
prepareMaterial(material);
// prepare material inputs
accum = material.baseColor;
reveal = material.baseColor.a;
}
I can't get an SSBO working using Qt3D. I'm also unable to find a single example showing how it is supposed to be done.
Here are the main parts of my code:
Buffer init:
QByteArray ssboData;
ssboData.resize(1000);
ssboData.fill(0);
mySSBOBuffer = new Qt3DRender::QBuffer(this);
mySSBOBuffer->setUsage(Qt3DRender::QBuffer::DynamicRead);
mySSBOBuffer->setAccessType(Qt3DRender::QBuffer::AccessType::ReadWrite);
mySSBOBuffer->setData(ssboData);
QByteArray atomicCounterData;
atomicCounterData.resize(4);
atomicCounterData.fill(0);
myAtomicCounterBuffer = new Qt3DRender::QBuffer(this);
myAtomicCounterBuffer->setUsage(Qt3DRender::QBuffer::DynamicRead);
myAtomicCounterBuffer->setAccessType(Qt3DRender::QBuffer::AccessType::ReadWrite);
myAtomicCounterBuffer->setData(atomicCounterData);
Passing the buffers as QParameters to the shader.
myMaterial->addParameter(new Qt3DRender::QParameter("acCountFrags", QVariant::fromValue(myAtomicCounterBuffer->id()), myMaterial));
myMaterial->addParameter(new Qt3DRender::QParameter("ssboBuffer", QVariant::fromValue(mySSBOBuffer->id()), myMaterial));
I also tried
myMaterial->addParameter(new Qt3DRender::QParameter("acCountFrags", QVariant::fromValue(myAtomicCounterBuffer), myMaterial));
myMaterial->addParameter(new Qt3DRender::QParameter("ssboBuffer", QVariant::fromValue(mySSBOBuffer), myMaterial));
Fragment shader (color has no real use; it's just there to check the shader is working):
#version 430 core
layout(binding = 0, offset = 0) uniform atomic_uint acCountFrags;
layout (std430) buffer ssboBuffer
{
uint fragIds[];
};
out vec4 color;
void main()
{
uint index = atomicCounterIncrement(acCountFrags);
fragIds[index] = 5;
color = vec4(0.2, 0.2, 0.2, 1.0);
}
In all of my attempts, nothing is written to the buffers after rendering; they are still full of zeros, just as after init.
Does anybody know if I'm doing something wrong, or where I could find a working example?
Thank you.
The answer was a missing BufferCapture component in my FrameGraph. I found it thanks to the example given by HappyFeet in the comments.
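For anyone else hitting this, a minimal C++ sketch of the fix (the parent node name lastFrameGraphNode is hypothetical; attach the capture node wherever your framegraph branch ends). With it in place, Qt3D reads the buffer contents back after rendering, so the QBuffer data no longer stays zeroed:
#include <Qt3DRender/QBufferCapture>

// Appending a BufferCapture node to the framegraph asks Qt3D to download
// the GPU contents of buffers (here the SSBO and the atomic counter) again.
auto *capture = new Qt3DRender::QBufferCapture(lastFrameGraphNode);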
As I'm learning more about WebGL2, I've come across a new syntax within shaders where you set the location inside the shader via layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location the traditional way with gl.getAttribLocation(program, 'a_Position')? I assume it's faster? Any other reasons? Also, is it better to set locations to integers, or would you be able to use strings as well?
There are two ideas conflated here:
Manually assigning locations to attributes
Assigning attribute locations in GLSL vs JavaScript
Why would you want to assign locations?
You don't have to look up the location afterwards, since you already know it.
You want to make sure two or more shader programs use the same locations so that they can use the same attributes. This also means a single vertex array can be used with both shaders. If you don't assign the attribute locations, the shaders may use different attributes for the same data; in other words, shaderprogram1 might use attribute 3 for position and shaderprogram2 might use attribute 1 for position.
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head it seems like doing it in JavaScript is more DRY (Don't Repeat Yourself). In fact, if you use consistent naming, you can likely set all locations for all shaders just by binding locations for your common names before calling gl.linkProgram. One other minor advantage of doing it in JavaScript is that it's compatible with GLSL ES 1.0 and WebGL1.
I have a feeling, though, that it's more common to do it in GLSL. That seems bad to me because if you ever ran into a conflict you might have to edit tens or hundreds of shaders. For example, you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course, another way would be to generate your shaders, which is common. You could then either inject the locations
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
POSITION_LOC: 0,
TEXCOORD_LOC: 1,
NORMAL_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/ig;
function replaceStuff(subs, str) {
return str.replace(subRE, (match, group0) => {
return subs[group0];
});
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_TEXCOORD_LOC 1
#define A_NORMAL_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only be calling gl.getAttribLocation at init time, so it won't affect rendering speed, and you generally only use the locations at init time when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers; you're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc. to convert some kind of string into an integer, but ultimately they need to be integers, and they need to be in range of the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127, only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
What does float4 position [[position]]; do in the following snippet?
#include <metal_stdlib>
using namespace metal;
struct Vertex
{
float4 position [[position]];
float4 color;
};
vertex Vertex vertex_main(device Vertex *vertices [[buffer(0)]], uint vid [[vertex_id]])
{
return vertices[vid];
}
I am confused by the [[position]] part, and especially by the similar usage in the function definition.
The Metal Shading Language is documented at https://developer.apple.com/metal/metal-shading-language-specification.pdf
In particular, look at "Table 9" on page 68 of that document. There [[position]] is identified as an attribute qualifier for the return type of a vertex function. In other words, when your vertex function returns, the member marked [[position]] is the one the pipeline uses as the clip-space position of the vertex.
I don't have enough reputation to respond to your comment regarding the name of the brackets, but the [[]] brackets are attribute syntax taken from C++11.
Metal is based on C++, and this is just the syntax of attributes in C++11. See this for more details about the grammar.
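For reference, a small standalone C++ sketch of that attribute syntax (nothing Metal-specific here); Metal reuses the same double-bracket grammar for its own attribute names such as [[position]], [[buffer(0)]], and [[vertex_id]]:
// [[noreturn]] is a standard C++11 attribute; the double brackets are the
// same grammar Metal builds on.
[[noreturn]]
void fail()
{
    throw 1; // never returns normally
}

// Later standards add more of them, e.g. [[deprecated("...")]] in C++14.
[[deprecated("use fail() instead")]]
void oldFail();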
I'm working with a parser that handles Wavefront OBJ 3D object files, and I'm not quite sure if I'm loading the data into OpenGL correctly.
So basically what I do is read my Wavefront OBJ files and parse all the data.
Normally in OpenGL ES 1.1 I do the following when loading data:
glBegin(GL_TRIANGLES);
glNormal3f(normals[faces[i].normal[0]].v[0], normals[faces[i].normal[0]].v[1], normals[faces[i].normal[0]].v[2]);
glVertex3f(vertices[faces[i].vertex[0]].v[0], vertices[faces[i].vertex[0]].v[1], vertices[faces[i].vertex[0]].v[2]);
glNormal3f(normals[faces[i].normal[1]].v[0], normals[faces[i].normal[1]].v[1], normals[faces[i].normal[1]].v[2]);
glVertex3f(vertices[faces[i].vertex[1]].v[0], vertices[faces[i].vertex[1]].v[1], vertices[faces[i].vertex[1]].v[2]);
glNormal3f(normals[faces[i].normal[2]].v[0], normals[faces[i].normal[2]].v[1], normals[faces[i].normal[2]].v[2]);
glVertex3f(vertices[faces[i].vertex[2]].v[0], vertices[faces[i].vertex[2]].v[1], vertices[faces[i].vertex[2]].v[2]);
glEnd();
As for OpenGL ES 2.0, I have tried the following for the vertices, without any luck:
glBufferData(GL_ARRAY_BUFFER, obj.vertices.size()*sizeof(float), &(obj.vertices[0].v), GL_STATIC_DRAW);
My data structure:
struct vertex {
std::vector<float> v;
};
The v vector is created for each vertex, of course, with {x, y, z}.
class waveObj {
public:
std::vector<vertex> vertices;
std::vector<vertex> texcoords;
std::vector<vertex> normals;
std::vector<vertex> parameters;
std::vector<face> faces;
};
struct face {
std::vector<int> vertex;
std::vector<int> texture;
std::vector<int> normal;
};
How can I load my data in OpenGL ES 2.0 the way I did with OpenGL ES 1.1?
Also, is it even possible to load a vector (v), rather than the positions as separate floats (x, y, z)?
There are several options:
Create a separate VBO for each kind of data: one for positions, one for normals, etc.
Create a single VBO with interleaved data, but this would require a few changes to your code.
To start, I suggest using one buffer per vertex attribute plus an index buffer.
One thing about the index buffer:
You have separate indices for position, normal, and texture (you take those values directly from the OBJ file), but if you want to draw geometry using an IBO (index buffer object) you need to create a single index per vertex.
Here is some of my code to do that:
// Look up the (position/texcoord/normal) triple parsed from the "f" line.
map<FaceIndex, GLushort, FaceIndexComparator>::iterator cacheIndex = cache.find(fi);
if (cacheIndex != cache.end()) {
    // This exact combination was emitted before: reuse its index.
    node->mIndices.push_back(cacheIndex->second);
}
else {
    // New combination: append the attribute data and record a new index.
    node->mPositions.push_back(positions[fi.v]);
    node->mNormals.push_back(normals[fi.n]);
    node->mTexCoords.push_back(texCoords[fi.t]);
    node->mIndices.push_back((unsigned int)node->mPositions.size()-1);
    cache[fi] = ((unsigned int)node->mPositions.size()-1);
}
What it does:
It has a vector for each of positions, normals, and tex coords... but when there is an "f" line in the OBJ file, I check whether that index triple is already in my cache.
If there is such a triple, I push its index into my node's indices.
If not, I append the attribute data and create a new index.
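To close the loop on the original ES 2.0 question: once the de-indexed arrays above are built, the data has to live in flat, contiguous memory. A std::vector<vertex> whose elements each hold their own std::vector<float> is not contiguous, so it cannot be handed to glBufferData directly; store plain floats (or a struct of three floats) instead. A minimal sketch, with hypothetical names (a_position, a_normal, uploadMesh, drawMesh) and assuming an already linked ES 2.0 program:
#include <vector>
#include <GLES2/gl2.h> // or your GL loader of choice

// Flat, contiguous data: three floats per position/normal, one index per vertex.
std::vector<float>    positions; // x0,y0,z0, x1,y1,z1, ...
std::vector<float>    normals;
std::vector<GLushort> indices;

GLuint posVbo, normVbo, ibo;

void uploadMesh()
{
    glGenBuffers(1, &posVbo);
    glBindBuffer(GL_ARRAY_BUFFER, posVbo);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &normVbo);
    glBindBuffer(GL_ARRAY_BUFFER, normVbo);
    glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(float),
                 normals.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort),
                 indices.data(), GL_STATIC_DRAW);
}

void drawMesh(GLuint program)
{
    GLint posLoc  = glGetAttribLocation(program, "a_position");
    GLint normLoc = glGetAttribLocation(program, "a_normal");

    glBindBuffer(GL_ARRAY_BUFFER, posVbo);
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, normVbo);
    glEnableVertexAttribArray(normLoc);
    glVertexAttribPointer(normLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_SHORT, nullptr);
}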