What is the correct syntax for passing a variable number of MTLTexture objects as an array to a fragment shader?
This StackOverflow Question: "How to use texture2d_array array in metal?" mentions the use of:
array<texture2d<half>, 5>
However, this requires specifying the size of the array. The Metal Shading Language Specification (Sec. 2.11) also demonstrates this type. However, it also refers to array_ref, but it's not clear to me how to use it, or whether it's even allowed as a parameter type for a fragment shader, given this statement:
"The array_ref type cannot be passed as an argument to graphics
and kernel functions."
What I'm currently doing is just declaring the parameter as:
fragment float4 fragmentShader(RasterizerData in [[ stage_in ]],
                               sampler s [[ sampler(0) ]],
                               const array<texture2d<half>, 128> textures [[ texture(0) ]]) {
    const half4 c = textures[in.textureIndex].sample(s, in.coords);
    return float4(c);
}
I use 128 since that is the limit on fragment textures. In any render pass, I might use between 1 and n textures, where I ensure that n does not exceed 128. That seems to work for me, but am I Doing It Wrong?
My use-case is drawing a 2D plane that is sub-divided into a bunch of tiles. Each tile's content is sampled from a designated texture in the array based on a pre-computed texture index. The textures are set using setFragmentTexture:atIndex in the correct order at the start of the render pass. The texture index is passed from the vertex shader to the fragment shader.
You should consider an array texture instead of a texture array. That is, a texture whose type is MTLTextureType2DArray. You use the arrayLength property of the texture descriptor to specify how many 2-D slices the array texture contains.
To populate the texture, you specify which slice you're writing to with methods such as -replaceRegion:... or -copyFrom{Buffer,Texture}:...toTexture:....
In a shader, you specify which element to sample or read by passing the slice index as the array argument of sample() or read().
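For illustration, here is a minimal MSL sketch of the fragment side, reusing the RasterizerData, coords and textureIndex names from the question (textureIndex is assumed to be an integer). On the host side you would set textureType to MTLTextureType2DArray and arrayLength on the texture descriptor, then bind the single array texture once with setFragmentTexture:atIndex:.
fragment float4 fragmentShader(RasterizerData in [[ stage_in ]],
                               sampler s [[ sampler(0) ]],
                               texture2d_array<half> tiles [[ texture(0) ]]) {
    // The last argument to sample() selects the slice of the array texture.
    const half4 c = tiles.sample(s, in.coords, uint(in.textureIndex));
    return float4(c);
}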
I'm trying to perform a render-to-texture operation that is supposed to accumulate arithmetic calculations in a texture. The output texture format should have at least the following capabilities:
Must have at least 2 components: One for the calculation result, one for alpha.
The calculation result has a value range of 0-65536.
Must be able to perform additive blending on these values using at least the alpha value from the fragment shader (blend function will be GL_ONE, GL_SRC_ALPHA).
Usually, I render to a texture using FBOs. However, according to
https://www.khronos.org/registry/OpenGL-Refpages/es3.1/html/glTexImage2D.xhtml
texture formats are either not color-renderable (so no FBOs) or non-blendable (because integer) or have small component sizes (usually 8-bit).
Is there a texture format that suits my needs? Or is there a non-FBO solution?
Since OpenGL ES does not support shared "uniform blocks", I was wondering whether there is a way to store matrices that can be referenced by a number of different shaders. A simple example would be a worldToViewport or worldToEye matrix, which would not change for an entire frame and which all shaders would reference. I saw one post where the vertex is transformed with 3 or 4 dot() calls against four "column vectors", but I'm wondering whether there is a way to assign the buffer data to a "mat4" in the shader.
The reason I need this is WebGL, which at the moment seems to support only OpenGL ES 2.0.
I wonder whether it supports indexed attribute buffers, as I assume they would not need to be any particular size relative to the position vertex array.
Then the questions are whether one can use a hard-coded or calculated index into the attribute buffer (in the shader), whether one can bind more than one attribute buffer at a time, and whether all buffers bound to the shader can be accessed simultaneously in a shader...
If all of that is true, I can see how it might work. I need a good language/architecture reference on shaders, as I am somewhat new to shader programming; I'm trying to design a wall without knowing the shapes of the bricks :)
Vertex attributes are per-vertex, so there is no way to share vertex attributes amongst multiple vertices.
OpenGL ES 2.0 upwards has CPU-side uniforms, which must be uploaded individually from the CPU at draw time. Uniforms belong to the program object, so for uniforms that are constant over a frame you only have to update each program once, meaning the cost isn't necessarily proportional to draw count.
OpenGL ES 3.0 onwards has Uniform Buffer Objects (UBOs) which allow you to load uniforms from a buffer in memory.
I'm not sure what you mean by "doesn't support shared uniform blocks", as that's pretty much what a UBO is, though it won't work on older hardware which only supports OpenGL ES 2.x.
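As a rough sketch of the UBO path (OpenGL ES 3.0 C API; the FrameMatrices block name, binding point 0 and the matrices pointer are made-up examples, not anything from the question):
// GLSL ES 3.00 side, identical in every shader that needs the per-frame matrices:
//   layout(std140) uniform FrameMatrices {
//       mat4 worldToViewport;
//       mat4 worldToEye;
//   };

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 2 * 16 * sizeof(float), matrices, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // attach the buffer to binding point 0

// Once per program: associate its FrameMatrices block with binding point 0.
GLuint blockIndex = glGetUniformBlockIndex(program, "FrameMatrices");
glUniformBlockBinding(program, blockIndex, 0);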
In THREE you can specify a DataTexture with a given data type and format. My shader pipeline normalizes the raw values based on a few user-controlled uniforms.
In the case of a Float32Array, it is very simple:
data = new Float32Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.FloatType)
And, in the shader, the sampled (swizzled) values are the raw, non-normalized values. However, if I use:
data = new Uint8Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.UnsignedByteType);
Then the texture values are normalized to between 0.0 and 1.0 as input to the pipeline, which is not what I was expecting. Is there a way to prevent this behavior?
Here is an example jsfiddle demonstrating a quick test of what is unexpected (at least for me): http://jsfiddle.net/VsWb9/3796/
three.js r.71
For future reference, this is not currently possible in WebGL: it requires the use of GL_RED_INTEGER and usampler2D, neither of which is supported. This comment from "internalformat of Texture" also describes the issue with the GL internal formats:
For that matter, the format of GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.
Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.
That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
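For reference, the non-normalized path under desktop GL 3.x / OpenGL ES 3.0 would look roughly like this (not available in the WebGL 1 context used by the jsfiddle above; columns, rows and data mirror the names in the question):
// Upload raw 8-bit unsigned integer data without normalization.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, columns, rows, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
// Integer textures cannot be linearly filtered.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// GLSL side: read through an unsigned integer sampler, e.g.
//   uniform usampler2D u_data;
//   uint v = texture(u_data, uv).r;   // 0..255, not normalized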
Due to performance issues drawing thousands of similar triangles (with different attributes), I would like to draw all of them using a single call to drawElements. But in order to draw each triangle with its respective attributes (e.g. world location, color, orientation) I believe I need to send an array buffer to my vertex shader, where the array is a list of all the triangle attributes.
If this approach is indeed the standard way to do it, then I am delighted to know I have the theory correct. I just need to know how to send an array buffer to the shader. Note that I currently know how to send multiple attributes and uniforms (though they are not contiguous in memory, which is what I'm looking for).
If not, I'd appreciate it if a resident expert could point me in the right direction.
I have a related question because I am having trouble actually implementing a VBO.
How to include model matrix to a VBO?
You do have the theory correct, let me just clear up a few things...
You can only "send a vertex array" to the vertex shader via attribute pointers, so this part of the code stays pretty much the same. What you seem to be looking for are two optimisations: putting the whole buffer on the GPU and using interleaved vertex data.
To put the whole buffer on the GPU you need to use VBOs, as already mentioned in the comment. By doing that you create a raw buffer on the GPU into which you can put any data you want and even modify it at runtime if needed. In your case that would be the vertex data. To use non-interleaved data you would create a separate buffer for each attribute: position, colour, orientation...
To use interleaved data you need to put the attributes into the buffer sequentially. If possible, it is best to create a data structure that holds all the data of a single vertex (not the whole array) and send an array of those to the buffer (a simple primitive array will work as well). A C example of such a structure:
typedef union {
    struct {
        float x, y, z;    // position
        float r, g, b, a; // color
        float ox, oy, oz; // orientation
    };
    struct {
        float position[3];
        float color[4];
        float orientation[3];
    };
} Vertex;
What you need to do then is set the correct pointers when using this data. In the VBO the offset starts at NULL (0), which would represent the position in this case; to point at the colour you would then use ((float *)NULL) + 3, or a convenience such as offsetof(Vertex, color) in C. You also need to set the stride, which is the size of the Vertex structure, so you can use sizeof(Vertex) or the hardcoded sizeof(float) * (3 + 4 + 3). A sketch of this setup follows below.
After this, all you need to watch for is correct buffer binding/unbinding; the rest of your code should stay exactly the same.
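A sketch of that pointer setup for the Vertex union above (the vbo, vertices and vertexCount names and the attribute locations 0..2 are assumptions; offsetof comes from <stddef.h>):
// Upload the interleaved array once.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);

// One pointer per attribute; the stride is the size of the whole structure.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const void *)offsetof(Vertex, position));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const void *)offsetof(Vertex, color));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const void *)offsetof(Vertex, orientation));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);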
When setting up attribute locations for an OpenGL shader program, you are faced with two options:
glBindAttribLocation() before linking to explicitly define an attribute location.
or
glGetAttribLocation() after linking to obtain an automatically assigned attribute location.
What is the utility of using one over the other?
And which one, if any, is preferred in practice?
I know one good reason to prefer explicit location definition.
Consider that you hold your geometry data in Vertex Array Objects. For a given object, you create a VAO in such way that the indices correspond to, for example:
index 0: positions,
index 1: normals,
index 2: texcoords
Now consider that you want to draw one object with two different shaders. One shader requires position and normal data as input, the other - positions and texture coords.
If you compile those shaders, you will notice that the first shader will expect the positions at attribute index 0 and normals at 1. The other would expect positions at 0 but texture coords at 1.
Quoting https://www.opengl.org/wiki/Vertex_Shader:
Automatic assignment
If neither of the prior two methods assign an input to an attribute index, then the index is automatically assigned by OpenGL when the program is linked. The index assigned is completely arbitrary and may be different for different programs that are linked, even if they use the exact same vertex shader code.
This means that you wouldn't be able to use your VAO with both shaders. Instead of having one VAO per, say, object, you'd need - in the worst case - a separate VAO per object per shader.
Forcing the shaders to use your own attribute numbering convention via glBindAttribLocation can solve this problem easily - all you need to do is to keep a consistent relation between attributes and their established IDs, and force the shaders to use that convention when linking.
(That's not really a big issue if you don't use separate VAOs, but still might make your code clearer.)
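For completeness, the binding has to happen before the program is linked; a sketch with made-up attribute names:
glBindAttribLocation(program, 0, "a_position");
glBindAttribLocation(program, 1, "a_normal");
glBindAttribLocation(program, 2, "a_texcoord");
glLinkProgram(program);
// After linking, every program sees positions at 0, normals at 1 and
// texcoords at 2, matching the VAO layout described above.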
BTW:
When setting up attribute locations for an OpenGL shader program, you are faced with two options
There's a third option in OpenGL/GLSL 3.3: Specify the location directly in shader code. It looks like this:
layout(location=0) in vec4 position;
But this is not present in the GLSL ES shading language.
Another answer here is that glGetAttribLocation returns data to the caller, which means that it implicitly requires a pipeline flush. If you call it right after you compile your program, you're essentially forcing asynchronous compilation to occur synchronously.
The third option, i.e. layout(location=0) in vec4 position; in the shader code, is now available in OpenGL ES 3.0 / GLSL 300 es, though only for vertex shader input variables.