I noticed issues with skinned animations at larger world coordinates. The bones start to jump around more frequently, and by larger distances, the larger the coordinates get.
On iPad the issue already appears at 200 units as a slight precision issue, increasing drastically as I move further out, so precision there seems even lower. On desktop it starts vibrating at 400,000 units. I'm using the recommended 1 unit = 1 meter scale. If I scale everything up immensely (1000x) the issue starts to disappear, but that isn't really a solution I want to go with.
The shader already declares precision highp float, so I don't know if there is anything else that would fix the issue.
Is there any solution where I don't need to scale everything up, and don't need to clamp the world coordinates to a very small range?
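The only workaround I can think of so far is a "floating origin": keep the camera near the origin and periodically shift the whole world instead. A rough, untested sketch of what I mean (assuming top-level scene children hold world-space positions; the threshold value and the rebaseOrigin name are placeholders):

// Floating-origin sketch: when the camera drifts too far from the origin,
// shift everything back so the coordinates the GPU sees stay small.
var REBASE_THRESHOLD = 5000; // world units; placeholder value

function rebaseOrigin(scene, camera) {
  if (camera.position.length() < REBASE_THRESHOLD) return;
  var shift = camera.position.clone();
  // The scene sits at the origin, so its direct children hold world positions.
  scene.children.forEach(function (child) {
    child.position.sub(shift);
  });
  camera.position.set(0, 0, 0);
}

// Call once per frame, before rendering:
// rebaseOrigin(scene, camera);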
I'm trying to render a car's license plate in WebGL with a purple texture whose uniform name is diffTex.
When I render the rest of the car with a simple black material and no textures, the draw call that renders the license plate binds uniform 35 to diffTex and uniform 36 to specNrmMap, for 6 total activeTexture() calls. The purple plate shows up onscreen as expected.
However, when I render the entire car with its own materials, textures, etc., the draw call that renders the license plate skips diffTex and binds uniform 35 to specNrmMap, with no #36, for 5 total activeTexture() calls. The purple plate shows up white, without the diffuse texture.
Does WebGL have a uniform limit or a texture binding limit that I might be overlooking? webglreport.com states my Max Texture Image Units is 16 in the fragment shader, and I'm only using 6, so I have 10 to spare. I'm not changing anything in the license plate material; it works when I render the car in black without textures, and it stops working when I render the rest of the car with textures.
Uniforms do not have numbers in WebGL. The numbers in your debugger are assigned by the debugger itself, and how it numbers them is up to it. It could number them by querying them, in which case they'd get different numbers across implementations, and they'd also change whenever you change the shader. Or it could number them based on the order you use them, in which case setting different textures would also number them differently.
Uniforms are almost always optimized out if they are not used, so if you stopped using a particular uniform then, again, the debugger you're using might number them differently.
As for limits, as you already checked, there is a limit to the number of texture units, and you can bind a different texture to every unit, so your 6 textures are well under the limit.
For uniforms the limit is queried via gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) for vertex shaders and gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS) for fragment shaders, though it's unlikely you're hitting that limit because you'd get an error when compiling or linking the shaders.
Note: how many uniforms you can actually use out of that number is determined by the packing algorithm. See this answer
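For example, you can query all of these limits yourself with standard WebGL calls:

const gl = document.createElement('canvas').getContext('webgl');
console.log('MAX_VERTEX_UNIFORM_VECTORS:',
    gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));
console.log('MAX_FRAGMENT_UNIFORM_VECTORS:',
    gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));
console.log('MAX_TEXTURE_IMAGE_UNITS:',       // fragment shader texture units
    gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));
console.log('MAX_COMBINED_TEXTURE_IMAGE_UNITS:',
    gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS));

Relatedly, gl.getUniformLocation(program, 'diffTex') returns null for a uniform the compiler optimized out, which is a quick way to check whether diffTex still exists in the linked program.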
As for why your code is not working, you'd have to post a repro (in the question itself) for us to figure that out.
I recently tried my app on mobile and noticed some weird behavior: it looks like the camera near plane is clipping the geometry, yet other objects at the same distance aren't clipped... Materials are StandardMaterials, and depthTest and depthWrite are set to true.
I must add that I can't reproduce this issue on my desktop, which makes it difficult to understand what's going on, since at first sight everything works perfectly there.
Here are 2 gifs showing the problem:
You can see the same wall on the left in the next gif
Thanks!
EDIT:
It seems the transparent faces (on mobile) were due to logarithmicDepthBuffer = true (though I don't know why), and I also had additional artefacts caused by the camera near and far planes being too far apart, producing depth issues (see Flickering planes)...
EDIT 2:
Well, I wasn't searching for the right terms... I just found this today: https://github.com/mrdoob/three.js/issues/13047#issuecomment-356072043
So logarithmicDepthBuffer uses EXT_frag_depth, which is only supported by about 2% of mobile devices according to WebGLStats. A workaround would be to tessellate the geometries, or to stay with a linear depth buffer...
You don't need a logarithmic depth buffer to fix this. You've succumbed to the classic temptation to bring your near clip plane REALLY close to the eye and push your far clip plane very far away. This creates a very non-linear depth precision distribution, and it is easily mitigated by pushing the near clip plane out by a reasonable amount. Try to sandwich your 3D data as tightly as possible between the near and far clip planes, and tolerate some near-plane clipping.
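For example, in three.js terms (the numbers here are illustrative, not a recipe):

// Illustrative only: sandwich the scene between near and far.
// A near plane of 0.001 wrecks depth precision; near = 1 with far = 5000
// keeps the far/near ratio small enough for a typical 24-bit depth buffer.
var camera = new THREE.PerspectiveCamera(
  60,                                     // fov (degrees)
  window.innerWidth / window.innerHeight, // aspect
  1,                                      // near: push out as far as you can tolerate
  5000                                    // far: pull in as tight as the scene allows
);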
I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x": 339494.5,
"y": 1294953.7,
"z": 0.75
or do I need to convert these into smaller scene units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something... here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you want to be able to see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera near and far planes were likely out of range for the camera position and terrain dimensions, so I increased the far plane distance.
Your coordinate values were quite huge, so I modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
There is nothing wrong with your approach; just make sure you feed the data so that it makes sense given the camera location, direction, and near + far planes. Pay attention to how you build the faces: the parameters to Face3 are the indices of each point in your vertices array. Later on you might need to take winding order, normals, and UVs into account. You can study the geometry classes included in Three.js for reference.
Three.js does not assign any meaning to units. They are just floating point numbers, and you can decide yourself what one unit (1.0) represents. Whether it's 1 mm, 1 inch, or 1 km depends on what makes the most sense for the application and its scale. Floating point numbers bring precision problems when the actual values are extremely small or extremely big. My own applications typically deal with things in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.
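To make that concrete, here is a minimal sketch using the legacy THREE.Geometry/Face3 API this answer (and the fiddle) were written against; newer three.js releases replaced it with BufferGeometry. The second and third points and the local origin are made up for illustration:

// Subtract a local origin so the huge real-world coordinates become small.
var origin = new THREE.Vector3(339000, 1294000, 0); // arbitrary local origin

var geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3(339494.5, 1294953.7, 0.75).sub(origin),
  new THREE.Vector3(339606.1, 1294948.6, 0.52).sub(origin), // made-up point
  new THREE.Vector3(339512.4, 1295034.2, 0.93).sub(origin)  // made-up point
);
geometry.faces.push(new THREE.Face3(0, 1, 2)); // indices into geometry.vertices

var material = new THREE.MeshBasicMaterial({ color: 0x88aa66, side: THREE.DoubleSide });
var mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);               // scene and camera as in the basic tutorial
camera.lookAt(mesh.position);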
I'm having an issue with faces that are back-facing to the light and shadow mapping, and I can't seem to get past it. I'm still at the relatively early stages of optimizing my engine, but I can't get there: even with everything hand-tuned for this one piece of geometry, it still looks like garbage.
The geometry is a skinny wall that is "curved" via about 5 different chunks of wall. When I create my depth map I cull front faces (relative to the light). This definitely helps, but the front faces on the other side of the wall seem to be what is causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn (see the sketch after this list)
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down at a slight angle toward the right side of the scene, enough that the wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but note that I am doing soft lighting (and hence passing vertex normals) even on this wall
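For reference, the depth-map pass is roughly the following (raw WebGL; shadowFramebuffer and drawSceneFromLight are placeholders for my actual setup):

// Depth-map pass with front faces culled, as described in the notes above.
gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFramebuffer); // placeholder FBO
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);        // cull front faces relative to the light
drawSceneFromLight();         // placeholder: draw with the light's view/projection
gl.cullFace(gl.BACK);         // restore normal culling for the main pass
gl.bindFramebuffer(gl.FRAMEBUFFER, null);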
The answer mentioned here concludes that you should not shadow polygons that are back-facing from the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader just to rule out the true back faces to the light there; however, if anyone feels that is the best/only solution for this geometry, that's what I'll have to do. Considering that the pipeline doesn't make it easy or obvious to pass face normals through, it doesn't feel like the path of least resistance. Note that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely include both non-shadowed and shadowed surfaces).
Note that I also have some nasty perspective aliasing. My next step is to work on cascaded shadow maps, but without fixing this first I feel like I'm just delaying the inevitable, as I've already hand-tightened the view as best I can (or so I think).
Anyways, I feel like I'm missing something, so any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting; it's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can have some diffuse lighting. If it needs to be completely in shadow, then whether it's the shadow map that puts it in shadow, or my code explicitly putting it in shadow because the face normal points away, I'm fine with either - but passing the face normal through to my vertex/fragment shader does not seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4->1, meaning the depth values of that wall start in the 0.4 range. So it's not perfectly clipped, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, via the DEPTH_COMPONENT extension on iOS. For @starmole: does this help to determine whether it's a scaling error in my projection? Do you think the area covered by my map is too large, and that focusing it more tightly might help?
The problem seems to be that you are:
1. Culling the front faces
2. Looking at the back face
3. Not removing the light from the back face, because it's actually not lit according to the normal - or there is some inaccuracy in the computation
4. Probably not adding an epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, more shadow map resolution is not going to help you - just look at the wall in the shadow map: it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon when comparing against the shadow map.
Epsilons. Make sure that the wall is actually facing the light (dot of normal and light direction > epsilon), so that it is treated as shadowed when it's very nearly orthogonal to the light.
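A sketch of both epsilon tests in a WebGL-style fragment shader (embedded as a JS string, the usual WebGL way; the variable names and epsilon values are illustrative, not from your code):

var fragmentSnippet = `
  float shadowDepth = texture2D(shadowMap, shadowCoord.xy).r;
  float bias = 0.005;                // depth epsilon; tune per scene
  float nDotL = dot(normalize(vNormal), normalize(lightDir));
  // In shadow if the stored depth is closer than ours (minus epsilon),
  // or if the surface is not clearly facing the light (the second epsilon).
  float lit = (shadowCoord.z - bias > shadowDepth || nDotL < 0.01) ? 0.0 : 1.0;
  gl_FragColor = vec4(baseColor * (ambient + lit * max(nDotL, 0.0)), 1.0);
`;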
I understand that by setting the depth function in OpenGL ES one can control how overlapping geometry is rendered in a 3D scene. I use gl.depthFunc(gl.LEQUAL) (WebGL) in my code.
However, when two sets of polygons are coincident and have different colors, the resulting surface turns out to be an arbitrary mixed pattern of the two colors (which changes as the camera location changes, leading to flickering). Take a look at this image:
How can I fix this? I have tried different depthFunc values, but none of them solves the problem. I would like the coincident polygons to have a single color; it doesn't matter which one.
This is called z-fighting, and it happens when two objects are rendered at the same depth, but rounding errors (and limited depth buffer precision) occasionally pop one in front of the other. One solution available to you is the glPolygonOffset function:
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glPolygonOffset.xml
You can see an example of it in use at the bottom of this page:
http://www.glprogramming.com/red/chapter06.html
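In WebGL the equivalent is gl.polygonOffset; a minimal sketch (the offset values are typical starting points and usually need tuning):

// Draw one of the coincident surfaces with a small depth offset so the
// depth test no longer sees the two surfaces as exactly equal.
gl.enable(gl.POLYGON_OFFSET_FILL);
gl.polygonOffset(1.0, 1.0);    // (factor, units)
drawOffsetSurface();           // placeholder for the second surface's draw call
gl.disable(gl.POLYGON_OFFSET_FILL);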
What you are experiencing is called Z-fighting, and unfortunately there's no definitive solution to it. What happens is that, due to the limited precision of the depth buffer, rounding errors occur and one or the other primitive "wins" the depth test. Changing the depth function will just toggle the colours in the fighting pattern, not remove it.
One method to get rid of the Z fighting is using polygon offset http://www.opengl.org/wiki/Basics_Of_Polygon_Offset
Unfortunately polygon offset introduces its own share of problems.
Try changing your z-near to be farther from zero in your call to gluPerspective:
void gluPerspective(GLdouble fovy,
                    GLdouble aspect,
                    GLdouble zNear,
                    GLdouble zFar);
From this website:
http://www.opengl.org/resources/faq/technical/depthbuffer.htm
Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye always has a negative impact on depth buffer precision, but it's not one as dramatic as moving the zNear clipping plane.
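To put rough numbers on that (my own back-of-the-envelope estimate, not from the FAQ): with a perspective projection, depth buffer values vary with 1/z, so about half of all representable depth values land between zNear and 2 * zNear. With zNear = 0.01 and zFar = 1000, roughly half of your depth precision is spent on the first centimeter in front of the camera; moving zNear out to 1.0 reclaims nearly all of it for the actual scene.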
Try messing with glPolygonOffset(factor, units). This page might help.