I'm trying to render a car's license plate in WebGL with a purple texture whose uniform name is diffTex.
When I render the rest of the car with a simple black material and no textures, the draw call that renders the license plate binds uniform 35 to diffTex and uniform 36 to specNrmMap, for 6 total activeTexture() calls. The purple plate shows onscreen as expected.
However, when I render the entire car with its own materials, textures, etc., the draw call that renders the license plate skips diffTex and binds uniform 35 to specNrmMap with no #36, for 5 total activeTexture() calls. The purple plate shows up white, without the diffuse texture.
Does WebGL have a uniform limit or a texture binding limit that I might be overlooking? webglreport.com states my Max Texture Image Units is 16 in the fragment shader, and I'm only using 6, so I have 10 to spare. I'm not changing anything in the license plate material, it just works when I render the car in black without textures, and it stops working when I render the rest of the car with textures.
Uniforms do not have numbers in WebGL. The numbers in your debugger are assigned by the debugger itself; how it numbers them is up to the debugger. It could number them by querying them, in which case they'd get different numbers across implementations, and they'd also change if you change the shader. It could number them based on the order you use them, in which case setting different textures would also number them differently.
Uniforms are almost always optimized out if they are not used, so if you stopped using a particular uniform then, again, the debugger you're using might number them differently.
As for limits: as you already checked, there is a limit to the number of texture units, and you can bind a different texture to every unit, so your 6 textures are well under the limit.
For uniforms, the limit is queried via gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) for vertex shaders and gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS) for fragment shaders, though it's unlikely you're hitting that limit because you'd get an error trying to compile the shaders.
Note: how many uniforms you can actually use from that number is defined by the packing algorithm. See this answer
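For completeness, here is a quick sketch of querying those limits yourself (assuming gl is your existing WebGL context):

// Sketch: query the texture-unit and uniform limits directly
console.log('MAX_TEXTURE_IMAGE_UNITS:', gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));
console.log('MAX_COMBINED_TEXTURE_IMAGE_UNITS:', gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS));
console.log('MAX_VERTEX_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));
console.log('MAX_FRAGMENT_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));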
As for why your code is not working, you'd have to post a repro (in the question itself) for us to figure that out.
I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think I have hit the maximum number of triangles that OpenGL can handle this way.
Large amounts of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension, which has pretty good support, so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
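Something along these lines would let you pick the path at startup (a sketch against the raw WebGL context; gl is assumed to be your context):

// Sketch: use a depth texture when the extension exists, otherwise fall back to a second pass
const depthExt = gl.getExtension('WEBGL_depth_texture');
if (depthExt) {
  // attach a DEPTH_COMPONENT texture to the framebuffer and sample it in the SSAO pass
} else {
  // render the scene a second time, packing depth into a colour target
}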
Your voxels' material is double-sided. That may be on purpose, but it can create a huge amount of overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. That's a needlessly complex material: your geometries don't have any normals, so you can't get any lighting from it anyway. Try a simpler, unlit material.
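For example, something like this (a sketch; voxelGeometry stands in for whatever geometry you already build):

// Sketch: an unlit material skips per-fragment lighting work entirely
const material = new THREE.MeshBasicMaterial({ color: 0x8899aa });
const mesh = new THREE.Mesh(voxelGeometry, material);
scene.add(mesh);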
Your goal is to render a huge number of vertices, so assuming the framerate is bound by the vertex shader:
try tools like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the prefetch vertex cache and the post-transform vertex cache.
reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a "grid", you could store vertex positions as 3 shorts instead of 3 floats, halving your VBO size (see the sketch after this list). You could use the same trick for normals if you had them, since all normals would be axis-aligned (cubes).
generally reduce the number of varyings needed by the fragment shader.
if you need more attributes than just a vec3 position, use a single interleaved VBO instead of one per attribute.
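To illustrate the quantized-position idea from the list above, a minimal three.js-flavoured sketch (vertexCount, material and voxelSize are placeholders, and it assumes your grid coordinates fit in the signed 16-bit range):

// Sketch: store grid-aligned positions as 16-bit integers instead of 32-bit floats
const positions = new Int16Array(vertexCount * 3);   // fill with integer grid coordinates
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3)); // addAttribute in older three.js releases
const mesh = new THREE.Mesh(geometry, material);
mesh.scale.set(voxelSize, voxelSize, voxelSize);      // map grid units back to world units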
I would like to ask for help with the WebGL engine I am making. I am stuck on texture atlases. There is a texture containing a 2x2 grid of pictures, and I map its upper-left quarter onto the geometry (the texture coordinates range from 0 to 0.5 on both axes).
This works properly, but when I view it from afar, the sub-images all blur together and give strange-looking colours. I think this is caused by the automatically generated mipmaps: at a distance the texture unit uses the smallest mipmap levels, down to 1x1, where the 4 sub-textures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself and cap the maximum level (GL_TEXTURE_MAX_LEVEL), but that is not supported by WebGL. It was also suggested that I use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias, the value that can be given as the third parameter of the fragment shader's texture2D function, but with this I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias (increasing it so that it becomes more and more negative), which ensures that at greater distances it won't use the last mipmap levels but will always sample from a higher-resolution mipmap level. The issue with this is that I must also account for the angle of the given surface to the camera, because the LOD value depends on this.
So bias = depth + some combination of the angle. I would like to ask for help calculating this. If anyone has other ideas concerning WebGL texture atlases, I would gladly use them.
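For what it's worth, here is a rough sketch of the distance-driven bias idea described above (GLSL kept in a JS string; vViewDistance and the 0.1 scale factor are illustrative guesses, and the view angle is not accounted for):

// Sketch: bias the mipmap LOD toward higher-resolution levels as distance grows
const atlasFragmentShader = `
  precision mediump float;
  uniform sampler2D atlas;
  varying vec2 vUv;
  varying float vViewDistance;   // distance to the camera, passed from the vertex shader
  void main() {
    float bias = -log2(max(vViewDistance * 0.1, 1.0));   // more negative with distance
    gl_FragColor = texture2D(atlas, vUv, bias);
  }
`;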
I am trying to draw large numbers of 2d circles for my 2d games in opengl. They are all the same size and have the same texture. Many of the sprites overlap. What would be the fastest way to do this?
An example of the kind of effect I'm making: http://img805.imageshack.us/img805/6379/circles.png
(It should be noted that the black edges are just due to the expanding explosion of circles; it was filled in a moment after this screenshot was taken.)
At the moment I am using a pair of textured triangles to make each circle. I have transparency around the edges of the texture so as to make it look like a circle. Using blending for this proved to be very slow (and z culling was not possible, as they were rendered as squares to the depth buffer). Instead I am not using blending but having my fragment shader discard any fragments with an alpha of 0. This works; however, it means that early z is not possible (as fragments are discarded).
The speed is limited by the large amounts of overdraw and the gpu's fillrate. The order that the circles are drawn in doesn't really matter (provided it doesn't change between frames creating flicker) so I have been trying to ensure each pixel on the screen can only be written to once.
I attempted this by using the depth buffer. At the start of each frame it is cleared to 1.0f. Then when a circle is drawn it changes that part of the depth buffer to 0.0f. When another circle would normally be drawn there, it is not, as the new circle also has a z of 0.0f; this is not less than the 0.0f that is currently there in the depth buffer, so it is not drawn. This works and should reduce the number of pixels which have to be drawn. However, strangely, it isn't any faster. I have already asked a question about this behavior (opengl depth buffer slow when points have same depth) and the suggestion was that z culling was not being accelerated when using equal z values.
Instead I have to give all of my circles separate false z-values from 0 upwards. Then when I render using glDrawArrays and the default of GL_LESS we correctly get a speed boost due to z culling (although early z is not possible as fragments are discarded to make the circles possible). However this is not ideal as I've had to add in large amounts of z related code for a 2d game which simply shouldn't require it (and not passing z values if possible would be faster). This is however the fastest way I have currently found.
Finally I have tried using the stencil buffer; here I used:
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
Where the stencil buffer is reset to 0 each frame. The idea is that after a pixel is drawn to for the first time, it is changed to be non-zero in the stencil buffer; that pixel should then not be drawn to again, reducing the amount of overdraw. However, this has proved to be no faster than just drawing everything without the stencil buffer or a depth buffer.
What is the fastest way people have found to do what I am trying to do?
The fundamental problem is that you're fill limited, which is the GPU's inability to shade all the fragments you ask it to draw in the time you're expecting. The reason that your depth buffering trick isn't effective is that the most time-consuming part of processing is shading the fragments (either through your own fragment shader, or through the fixed-function shading engine), which occurs before the depth test. The same issue occurs for using stencil; shading the pixel occurs before stenciling.
There are a few things that may help, but they depend on your hardware:
render your sprites from front to back with depth buffering (see the sketch after these suggestions). Modern GPUs often try to determine if a collection of fragments will be visible before sending them off to be shaded. Roughly speaking, the depth buffer (or a representation of it) is checked to see if the fragment that's about to be shaded will be visible, and if not, its processing is terminated at that point. This should help reduce the number of pixels that need to be written to the framebuffer.
Use a fragment shader that immediately checks your texel's alpha value, and discards the fragment before any additional processing, as in:
varying vec2 texCoord;
uniform sampler2D tex;
void main()
{
    vec4 texel = texture( tex, texCoord );
    if ( texel.a < 0.01 ) discard;
    // rest of your color computations
}
(you can also use alpha test in fixed-function fragment processing, but it's impossible to say if the test will be applied before the completion of fragment shading).
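A rough sketch of the front-to-back suggestion from the first bullet (written in WebGL-style JS to match the rest of this page; sprites and drawSprite are placeholders for your own code):

// Sketch: draw the nearest sprites first so the GPU's early visibility checks can reject hidden fragments
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);
sprites.sort((a, b) => a.depth - b.depth);   // smallest depth (nearest) first
for (const sprite of sprites) {
  drawSprite(sprite);                        // each sprite rendered at its own depth value
}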
I have a large number (~1000) of THREE.Mesh objects that have been constructed from the same THREE.Geometry and THREE.MeshPhongMaterial (which has a map).
I would like to tint (color) these objects individually.
Naïvely, I tried changing the mesh.material.color property, but changing this property on any of the objects changes the color of all the objects at once. This makes sense, since there is only one material that is shared among all the objects.
My next idea was to create a separate THREE.MeshPhongMaterial for each object. So, now I have a large number of THREE.Mesh objects that have been constructed from the same THREE.Geometry, but have individual THREE.MeshPhongMaterials (that share the same texture). This allows me to change the colors individually, but the performance is worse. The Chrome profiler shows that the app is spending significant time doing material-related work such as switching textures.
The material color is just a uniform in the shader. So, updating that uniform should be quite quick.
question: Is there a way to override a material color from the mesh level?
If there was, I believe I could share the material among all my objects and get my performance back, while still changing the colors individually.
[I have tested on v49 and v54; they have identical performance and degradation]
update: I have built a test case, and the performance drop due to this is smaller than I thought it was, but is still measurable.
Here are two links:
http://danceliquid.com/docs/threejs/material-test/index.html?many-materials=false
http://danceliquid.com/docs/threejs/material-test/index.html?many-materials=true
In the first case there are only two materials; in the second case each cube has its own material. I measure the framerate of the first case to be 53fps on this machine, and the framerate of the second is 46fps. This is about a 15% drop.
In both cases, the color of the material of every cube is changed every frame. In the case with many materials we actually see each cube getting its own color; in the case with only two materials we see them all having the same color (as expected).
Yes. Per object, clone your material using material.clone(), modify its emissive and color, and set the object's material to this clone. Shaders and attributes are copied by reference, so do not worry that you are cloning the entire material each time; in fact the only things that are copied by value are the uniforms (such as emissive and color). So you can change these per individual object.
Personally I store the original material on a separate, custom property of the object so that I can easily switch back to it later; depends what your needs are.
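A minimal sketch of that per-object clone (assumes three.js; baseMaterial and mesh are your existing objects):

const tinted = baseMaterial.clone();       // shaders and textures are shared by reference
tinted.color.setHex(0xff0000);             // per-object tint
mesh.originalMaterial = mesh.material;     // a custom property, as mentioned above, to switch back later
mesh.material = tinted;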
If you're writing your own shaders, you could use a uniform variable for a general tint (not vertex-specific) and pass that into the shader for factoring into the overall color. vec4f_t and vec4f() are not standard in the C portion, but your code probably already has equivalents.
C:
vec4f_t hue = vec4f(....); // fill in as desired
// load the shader so that GLuint shader_id is available.
// "hue" is a uniform var in the vertex shader
GLint hue_id = glGetUniformLocation(shader_id, "hue"); // returns -1 if the uniform is not found
// later, before rendering the object:
glUniform4fv(hue_id, 1, (const GLfloat *)&hue);
The vertex shader:
uniform vec4 hue; // add this and use it in the texture's color computation
There is a great article about multiple light sources in GLSL
http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Multiple_Lights
But the light0 and light1 parameters are described in the shader code. What if I must draw flare gun shots, e.g. every flare has its own position and color and must illuminate its surroundings? How do we manage other objects' shaders to deal with the unknown positions and colors of flares (well, there is a limit to the max flares on screen)? For example, if there will be at most 8 flares on screen, must I pass 8*2 uniforms even if they don't exist at this time?
Or imagine you are making a level editor where the user can place lamps: how will other objects "know" about the new light source and render correctly when a new lamp has been added?
I think there must be a clever solution, but I can't find one.
Lighting equations usually rely on additive colour. So the output is the colour of light one plus the colour of light two plus the colour of light three, etc.
One of the in-framebuffer blending modes offered by OpenGL is additive blending. So the colour output of anything new that you draw will be added to whatever is already in the buffer.
The most naive solution is therefore to write your shader to do exactly one light. If you have multiple lights, draw the scene that many times, each time with a different nominated light (sketched below). It's an example of multipass rendering.
Better solutions involve writing shaders to do two, four, eight or whatever lights at once, doing, say, 15 lights as an 8-light draw then a 4-light draw then a 2-light draw then a 1-light draw, and including only geometry within reach of each light when you do that pass. Which tends to mean finding intelligent ways to group lights by locality.
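A bare-bones sketch of that one-light-per-pass loop (WebGL-flavoured JS; lights, drawSceneAmbient, drawScene, lightPositionLoc and lightColorLoc are placeholders for your own setup):

// Sketch: lay down the ambient/base pass first, then add one light per pass
drawSceneAmbient();                        // blending off; this also fills the depth buffer
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);              // additive: each pass adds its light's contribution
gl.depthFunc(gl.LEQUAL);                   // re-drawn geometry must pass at equal depth
for (const light of lights) {
  gl.uniform3fv(lightPositionLoc, light.position);
  gl.uniform3fv(lightColorLoc, light.color);
  drawScene();                             // same geometry, shader written for exactly one light
}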
EDIT: with a little more thought, I should add that there's another option in deferred shading, though it's not completely useful on most GL ES devices at the moment due to the limited options for output buffers.
Suppose theoretically you could render your geometry exactly once and store whatever you wanted per pixel. So you wouldn't just output a colour, you'd output, say, a position in 3d space, a normal, a diffuse colour, a specular colour and a specular exponent. Those would then all be in a per-pixel buffer.
You could then render each light by (i) working out the maximum possible space it can occupy when projected onto the screen (so, a 2d rectangle that relates directly to pixels); and (ii) rendering the light as a single quad of that size, for each pixel reading the relevant values from the buffer you just set up and outputting an appropriately lit colour.
Then you'd do all the actual geometry in your scene only exactly once, and each additional light would cost at most a single, full-screen quad.
In practice you can't really do that, because the output buffers you tend to be able to use in ES provide too little storage. But what you can usually do is render to a 32-bit colour buffer with an attached depth buffer. So you can just store depth in the depth buffer and work out the world (x, y, z) from that plus the [uniform] position of the camera in the light shader. You could store 8-bit versions of the normal's x and y in the colour buffer, spending 16 bits, and work out its z because you know that the normal is always of unit length. Then, to pick a concrete example at random, maybe you could store a 16-bit version of the diffuse colour in the remaining space, possibly in YCrCb with extra storage for Y.
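To illustrate the normal-reconstruction step just described, a small sketch (GLSL in a JS string; it assumes view-space normals that face the camera, so the positive square root is taken for z):

// Sketch: recover a unit normal's z from its stored x and y
const unpackNormalChunk = `
  vec3 unpackNormal(vec2 encodedXY) {
    vec2 xy = encodedXY * 2.0 - 1.0;               // from the [0,1] colour range back to [-1,1]
    float z = sqrt(max(0.0, 1.0 - dot(xy, xy)));   // unit length lets us recover z
    return vec3(xy, z);
  }
`;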
The main disadvantage is that hardware antialiasing then doesn't work, due to much the same sort of concerns as with transparency and depth buffers. But if you get to the point where you save dramatically on lighting, it might still make sense to do manual antialiasing by rendering a large version of the scene and then scaling it down in a final pass.