In my fragment shader, I have the line
gl_FragColor = texture2D(texture, destinationTextureCoordinate) * destinationColor;
where texture is a uniform of type sampler2D. In my code, I always set it to the value 0:
glUniform1i(_uniformTexture, 0);
Is it possible to skip the call to glUniform1i and just hardcode 0 in the fragment shader? I tried replacing texture with 0, and the compiler complained that it wasn't a valid type.
I'm not entirely sure what you're trying to achieve, but here are some thoughts:
sampler2D samples a 2D texture, as the name indicates. It is an opaque GLSL type, so the compiler is right to complain that 0 is not a valid type when it is fed into the first parameter of texture2D.
Unless you are using multiple textures, the second argument to glUniform1i should always be 0, which is also the default value of a sampler uniform. You can skip this call if you are only using a single texture bound to texture unit 0, but it's good practice to leave it in.
Why do you need the call to texture2D if you just want to pass the value 0? Surely you can just do gl_FragColor = destinationColor. This will color your fragment based on the destinationColor varying passed in from the vertex shader. I'm not sure why you are setting up a texture if you don't plan on using it (or so it seems).
EDIT: Code to send two textures to the fragment shader correctly.
//glClear();
// Attach Texture 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture0);
glUniform1i(_uSampler0, 0);
// Attach Texture 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture1);
glUniform1i(_uSampler1, 1);
//glDrawArrays();
You need a layout qualifier, like this:
#version 420
// #extension GL_ARB_shading_language_420pack : enable   // needed for GLSL versions before 420
layout(binding=0) uniform sampler2D diffuseTex;
That way the sampler uniform is bound to texture unit 0, which I think is what you want. Keep in mind that bindings count up from zero, so be sure which unit you actually want to bind. The point of setting sampler uniforms via glUniform1i() is to make that choice explicit and keep your code correct.
Source: http://www.opengl.org/wiki/GLSL_Sampler#Version_4.20_binding
Related
As I'm learning more about WebGL2, I've come across new syntax where you set the location inside the shader via layout (location=0) in vec4 a_Position;. How does this compare to getting the attribute location the traditional way with gl.getAttribLocation('a_Position')? I assume it's faster? Any other reasons? Also, is it better to set locations to integers, or would you be able to use strings as well?
There are two ideas conflated here:
1. Manually assigning locations to attributes
2. Assigning attribute locations in GLSL vs JavaScript
Why would you want to assign locations?
1. You don't have to look up the location, since you already know it.
2. You want to make sure two or more shader programs use the same locations so that they can use the same attributes. This also means a single vertex array can be used with both shaders. If you don't assign the attribute locations, the shaders may use different attributes for the same data; in other words, shaderprogram1 might use attribute 3 for position and shaderprogram2 might use attribute 1 for position.
Why would you want to assign locations in GLSL vs doing it in JavaScript?
You can assign a location like this in GLSL ES 3.0 (not GLSL ES 1.0)
layout (location=0) in vec4 a_Position;
You can also assign a location in JavaScript like this
// **BEFORE** calling gl.linkProgram
gl.bindAttribLocation(program, 0, "a_Position");
Off the top of my head it seems like doing it in JavaScript is more DRY (Don't repeat yourself). In fact if you use consistent naming then you can likely set all locations for all shaders by just binding locations for your common names before calling gl.linkProgram. One other minor advantage to doing it in JavaScript is it's compatible with GLSL ES 1.0 and WebGL1.
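For example, a minimal sketch of that idea (the attribute names and the fixed locations below are just an assumed convention for illustration, not something WebGL requires):
// Assumed naming convention shared by every shader in the app.
const commonAttribLocations = {
  a_Position: 0,
  a_Texcoord: 1,
  a_Normal: 2,
};

function createProgram(gl, vertexShader, fragmentShader) {
  const program = gl.createProgram();
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  // Bind every common name BEFORE linking; names a shader doesn't use are simply ignored.
  for (const [name, location] of Object.entries(commonAttribLocations)) {
    gl.bindAttribLocation(program, location, name);
  }
  gl.linkProgram(program);
  return program;
}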
I have a feeling though it's more common to do it in GLSL. That seems bad to me because if you ever ran into a conflict you might have to edit 10s or 100s of shaders. For example you start with
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
Later in another shader that doesn't have texcoord but has normals you do this
layout (location=0) in vec4 a_Position;
layout (location=1) in vec3 a_Normal;
Then sometime much later you add a shader that needs all 3
layout (location=0) in vec4 a_Position;
layout (location=1) in vec2 a_Texcoord;
layout (location=2) in vec3 a_Normal;
If you want to be able to use all 3 shaders with the same data you'd have to go edit the first 2 shaders. If you'd used the JavaScript way you wouldn't have to edit any shaders.
Of course another way would be to generate your shaders which is common. You could then either inject the locations
const someShader = `
layout (location=$POSITION_LOC) in vec4 a_Position;
layout (location=$TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=$NORMAL_LOC) in vec3 a_Normal;
...
`;
const substitutions = {
  POSITION_LOC: 0,
  TEXCOORD_LOC: 1,
  NORMAL_LOC: 2,
};
const subRE = /\$([A-Z0-9_]+)/ig;
function replaceStuff(subs, str) {
  return str.replace(subRE, (match, group0) => {
    return subs[group0];
  });
}
...
gl.shaderSource(shader, replaceStuff(substitutions, someShader));
or inject preprocessor macros to define them.
const commonHeader = `
#define A_POSITION_LOC 0
#define A_TEXCOORD_LOC 1
#define A_NORMAL_LOC 2
`;
const someShader = `
layout (location=A_POSITION_LOC) in vec4 a_Position;
layout (location=A_TEXCOORD_LOC) in vec2 a_Texcoord;
layout (location=A_NORMAL_LOC) in vec3 a_Normal;
...
`;
gl.shaderSource(shader, commonHeader + someShader);
Is it faster? Yes, but probably not by much. Not calling gl.getAttribLocation is faster than calling it, but you should generally only be calling gl.getAttribLocation at init time anyway, so it won't affect rendering speed, and you generally only use the locations at init time when setting up vertex arrays.
is it better to set locations to integers or would you be able to use strings as well?
Locations are integers. You're manually choosing which attribute index to use. As above, you can use substitutions, shader generation, preprocessor macros, etc. to convert some kind of string into an integer, but ultimately they need to be integers, and they need to be in range for the number of attributes your GPU supports. You can't pick an arbitrary integer like 9127; only 0 to N - 1, where N is the value returned by gl.getParameter(gl.MAX_VERTEX_ATTRIBS). Note that N will always be >= 16 in WebGL2.
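For example, a quick check of that limit (assuming gl is an existing WebGL2 context):
// Any location you assign manually must be less than this value.
const maxAttribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
console.log(maxAttribs); // 16 or more in WebGL2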
I am writing a shader that needs to use one sampler cube for reflection and another for refraction. The water is intended to reflect from a skybox, but refract using a container for the water. I create a uniform in the fragment shader for each sampler cube:
uniform samplerCube reflectMap;
uniform samplerCube refractMap;
I create two texture cubes, tRefract and tReflect, using THREE.ImageUtils.loadTextureCube.
I associate the uniforms with the custom shader, setting their values:
refractMap: {type: 't', value: tRefract}
reflectMap: {type: 't', value: tReflect}
In the fragment shader I sample the textures and mix the colors:
vec4 reflectCubeColor = textureCube(reflectMap, vec3(-vReflect.x, vReflect.yz));
vec4 refractCubeColor = textureCube(refractMap, vec3(-vRefract.x, vRefract.yz));
vec3 c = mix(gl_FragColor.xyz, reflectCubeColor.xyz, reflectivity);
gl_FragColor = vec4(mix(refractCubeColor.xyz, c, opacity), 1.0);
(vReflect and vRefract have been computed in the vertex shader)
What happens, however, is that only the texture associated with the reflection is used: it appears on the skybox, and it is both reflected and refracted in the water.
Thinking maybe this was a matter of not switching texture units, as one might do in "regular" OpenGL, I dug around in the three.js code to see if there was a way to specify the texture unit. From the built-in normal shader it appears that something like this might be doing just that:
refractMap: {type: 't', value: 6, texture: tRefract}
reflectMap: {type: 't', value: 7, texture: tReflect}
where value specifies the texture unit. I didn't see any API, however, for saying which unit should be the active unit. The order in which the textures are loaded or associated with the uniforms doesn't seem to matter, and changing the values as above seems to have no effect.
So, can I make use of more than one samplerCube in my fragment shader?
I'm creating a sphere and adding distortion to it; that part is working fine.
When I look at the wireframe it's like this
and with the wireframe turned off it looks like this
As you can see, there is no shading and the distortion isn't visible when the wireframe is turned off.
What I'm looking for is what to place in my custom fragmentShader.
I used this
// calc the dot product and clamp
// 0 -> 1 rather than -1 -> 1
vec3 light = vec3(0.5,0.2,1.0);
// ensure it's normalized
light = normalize(light);
// calculate the dot product of
// the light to the vertex normal
float dProd = max(0.0, dot(vNormal, light));
// feed into our frag colour
gl_FragColor = vec4(dProd, dProd, dProd, 1.0);
But that just creates a very ugly false light.
Any ideas anybody?
Thanks in advance,
Wezy
If you want to use Three.js' lights (and I can't think of a reason why not), you need to include the corresponding shader chunks:
1. Take a look at how the WebGLRenderer composes its shaders in THREE.ShaderLib.
2. Pick a material there that is close to what you want and copy its definition (uniforms and both shader codes) into your code, renaming it to something custom.
3. Delete the chunks you don't need, and add your custom code in the appropriate places (a rough sketch follows below).
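Something along these lines, as a rough sketch; the exact ShaderLib layout depends on your three.js version, and 'phong' is only an assumed starting point:
// Start from a built-in material definition; 'phong' is just an example choice.
var base = THREE.ShaderLib['phong'];

// Clone the uniforms so the shared library copy isn't modified.
var uniforms = THREE.UniformsUtils.clone(base.uniforms);

var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: base.vertexShader,     // replace or extend with your distortion code
  fragmentShader: base.fragmentShader, // replace or extend with your custom shading
  lights: true                         // so the renderer fills in the light uniforms
});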
What happens if you bind (different textures) to both GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP in the same texture image unit?
For example, suppose I bind one texture to GL_TEXTURE0's GL_TEXTURE_2D target and another texture to the same texture unit's GL_TEXTURE_CUBE_MAP target. Can I then have two uniform variables, one a sampler2D and the other a samplerCube and set both to 0 (to refer to GL_TEXTURE0)?
I suspect the answer is "no" (or that the result is undefined) but I haven't found anything in the spec that specifically prohibits using multiple texture targets in the same texture image unit.
I haven't found anything that says whether you can bind a 2D texture and a cube map texture to the same texture unit, but I guess this is perfectly possible. It makes sense to allow it, since all texture modification functions require you to specify the texture target to operate on anyway.
But the OpenGL ES 2 spec explicitly disallows using both at the same time in a shader, as chapter 2.10 says:
It is not allowed to have variables of different sampler types
pointing to the same texture image unit within a program object. This
situation can only be detected at the next rendering command issued,
and an INVALID_OPERATION error will then be generated.
So you cannot use a sampler2D and a samplerCube that refer to the same texture unit as a way to get around your implementation's texture unit limits.
In Chrome, I get an error when trying such an operation (binding the same texture to both targets):
// Experiment 1: bind the same texture to both TEXTURE_2D and TEXTURE_CUBE_MAP
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, texture)
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // null
gl.getError() // returns 1282 (INVALID_OPERATION)
// Experiment 2: bind only to TEXTURE_2D
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, texture)
// gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // null
gl.getError() // no error
// Experiment 3: bind only to TEXTURE_CUBE_MAP
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
// gl.bindTexture(gl.TEXTURE_2D, texture)
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // null
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // texture
gl.getError() // no error
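For completeness, here is a sketch of the scenario the question actually asks about: two different textures bound to the TEXTURE_2D and TEXTURE_CUBE_MAP targets of the same unit. The bindings themselves succeed; per the spec excerpt quoted in the other answer, an error should only be expected once a single program samples both targets of that unit in a draw call (the names tex2d and texCube are just illustrative):
var gl = document.getElementById("canv00").getContext("webgl");
var tex2d = gl.createTexture();
var texCube = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex2d);         // unit 0, 2D target
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texCube); // unit 0, cube map target
gl.getError(); // no error: different textures on different targets of one unit is fine
// INVALID_OPERATION is only expected at draw time, if a program's sampler2D and
// samplerCube uniforms are both left pointing at texture unit 0.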
I am working with OpenGL ES 2.0 on Ubuntu 10.10 with the PVR SDK.
In my code I am passing the vertices of a triangle to render, but I am unsure how the attribute parameter in the vertex shader comes into play.
I see that in the examples they are using myVertex. What is meant by that?
like this:
const char* pszVertShader = "\
attribute highp vec4 myVertex;\
uniform mediump mat4 projmatrix;\
invariant gl_Position;\
void main(void)\
{\
gl_Position = projmatrix * myVertex;\
}";
===============================render======================
glBindAttribLocation(uiProgramObject, VERTEX_ARRAY, "myVertex");
So I just want to know: I am reading the vertices from a text file; will that affect the myVertex attribute?
If you need any other information I can provide it; I have already posted my whole code in a previous question here.
OpenGL 3 / OpenGL ES 2 abandoned the concept of predefined vertex attributes ("position", "normal", "texcoord", …) that are supplied through glVertexPointer, glNormalPointer, glTexCoordPointer, …
Instead, in your shaders you introduce custom attribute/in identifiers. Those identifiers are referred to by a so-called attribute location, a numeric index. glBindAttribLocation allows you to assign attribute identifiers to specific locations. In the case of the above code fragment there seems to exist a global constant VERTEX_ARRAY – introduced by the program code(!), i.e. not predefined by OpenGL or anything like that – that is universally used for the myVertex attribute in the shader.
So that particular vertex attribute can be supplied with data by glVertexAttribPointer through a common token VERTEX_ARRAY in a statement similar to this:
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, false, 0, isVBO ? (char*)vertex.offset : (char*)vertex.data + vertex.offset);
Of course the exact semantics of the glVertexAttribPointer calls depends on the particular program that does them.
Instead of a (global) constant VERTEX_ARRAY you could as well use a variable shader.attrib.vertex, which you set per shader using
shader.attrib.vertex = glGetAttribLocation(shader.program_object, "myVertex");
and use that variable in the calls to glVertexAttribPointer
glVertexAttribPointer(shader.attrib.vertex, …)