Ugly rendering of GLTF with vertex colors - three.js

The geometry
I have a cube represented as a triangular mesh:
Applying a color
Each vertex is assigned a color, in order to apply a color map onto the cube.
In glTF, the mesh is therefore described as follows:
{
  "primitives": [
    {
      "attributes": {
        "POSITION": 0,
        "COLOR_0": 2
      },
      "indices": 1
    }
  ]
}
Problem: Ugly rendering
Unfortunately, this is rendered with ugly artifacts. The colors of the three vertices of a face are not smoothly interpolated across the face; instead there are brighter and darker zones along the edges of the triangles. This happens with various renderers such as Babylon.js, Cesium, Filament, and Three.js.
What's the reason for those artifacts and how can I get a smooth colormap on my cube?
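For comparison, the same setup can be reproduced directly in three.js without going through glTF. A minimal sketch, where the color values are illustrative assumptions:

const geometry = new THREE.BoxGeometry(1, 1, 1);
const count = geometry.attributes.position.count;
const colors = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  // illustrative color map: blend red and blue by vertex height
  const y = geometry.attributes.position.getY(i); // in [-0.5, 0.5]
  colors[i * 3 + 0] = 0.5 + y; // r
  colors[i * 3 + 1] = 0.0;     // g
  colors[i * 3 + 2] = 0.5 - y; // b
}
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
const material = new THREE.MeshBasicMaterial({ vertexColors: true });
const cube = new THREE.Mesh(geometry, material);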

Related

If alpha is 0, RGB is 0 even with premultipliedAlpha set to false

I have a 3D model with an RGBA texture whose alpha values should be ignored (I must use those alpha values differently, for some reflection effects).
The PNG spec says: colors are not premultiplied by alpha.
So, in a shader, we should be able to ignore the alpha values and display only the RGB colors.
I have tried many things, disabling premultipliedAlpha in WebGLRenderer, Texture and Material, but the RGB values are 0 wherever alpha is 0.
In the example below, the models should have skin-colored hands and neck but appear white.
In the center, the texture is loaded natively by the FBXLoader.
On the right, the texture is loaded by the TextureLoader.
On the left, another texture without alpha is loaded by the TextureLoader for demonstration purposes.
For the ShaderMaterials, the fragment shader is:
uniform sampler2D map;
varying vec2 vUv;

void main() {
  gl_FragColor = vec4(texture2D(map, vUv).rgb, 1.);
}
Codesandbox
What am I doing wrong? Is this a WebGL or three.js limitation?
PS: using 2 textures is not an option
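For reference, a minimal sketch of the attempts described above, disabling premultiplied alpha at each level three.js exposes (the texture URL is a placeholder):

const renderer = new THREE.WebGLRenderer({ premultipliedAlpha: false });
const texture = new THREE.TextureLoader().load('diffuse.png'); // placeholder URL
texture.premultiplyAlpha = false; // Texture property (default is already false)
const material = new THREE.MeshBasicMaterial({ map: texture });
material.premultipliedAlpha = false; // Material property

None of these prevent the RGB values from coming back as 0 where alpha is 0.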

Why is three.js inconsistent about gouraud interpolation?

I want to shade a THREE.BoxBufferGeometry using a simple THREE.MeshLambertMaterial. The material is supposed to use a Lambert illumination model to pick the colors for each vertex (and it does), and then use Gouraud shading to produce smooth gradients on each face.
The Gouraud part is not happening. Instead, the cube's faces are each shaded with one single, solid color.
I have tried various other BufferGeometries and gotten inconsistent results.
For example, if instead I make an IcosahedronBufferGeometry, I get the same problem: each face is one single, solid color.
geometry = new THREE.IcosahedronBufferGeometry(2, 0); // no Gouraud shading.
geometry = new THREE.IcosahedronBufferGeometry(2, 2); // no Gouraud shading.
On the other hand, if I make a SphereBufferGeometry, the Gouraud is present.
geometry = new THREE.SphereBufferGeometry(2, 3, 2); // yes Gouraud shading.
geometry = new THREE.SphereBufferGeometry(2, 16, 16); // yes Gouraud shading.
But then if I make a cube using a PolyhedronBufferGeometry, the Gouraud shading doesn't appear unless I set the detail to something other than 0.
const verticesOfCube = [
  -1, -1, -1,    1, -1, -1,    1,  1, -1,   -1,  1, -1,
  -1, -1,  1,    1, -1,  1,    1,  1,  1,   -1,  1,  1,
];
const indicesOfFaces = [
  2, 1, 0,   0, 3, 2,
  0, 4, 7,   7, 3, 0,
  0, 1, 5,   5, 4, 0,
  1, 2, 6,   6, 5, 1,
  2, 3, 7,   7, 6, 2,
  4, 5, 6,   6, 7, 4
];
let geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 0); // no Gouraud shading
geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 1); // yes Gouraud shading
I am aware of the existence of the BufferGeometry methods computeFaceNormals() and computeVertexNormals(). Normals are emphatically important here, as they are used to determine the colors for each face and vertex, respectively. But while they help with the Icosahedron, they have no effect on the Box, regardless of whether neither, only one, or both are present, in either order.
Here is the code I expect to work:
const geometry = new THREE.BoxBufferGeometry(2, 2, 2);
geometry.computeFaceNormals();
geometry.computeVertexNormals();
const material = new THREE.MeshLambertMaterial({
  color: 0xBE6E37
});
const mesh = new THREE.Mesh(geometry, material);
I should be getting a cube whose faces (the real, triangular ones) are shaded with a gradient. First, the face normals should be computed, and then the vertex normals by averaging the normals of the faces formed by them. Here is a triangular bipyramid on which correct Gouraud shading is being applied:
But the code above produces this instead:
At no point does three.js log any errors or warnings to the console.
So what is going on here? The only explanation I can think of is that the Box actually consists of 24 vertices, three at each corner of the cube, and that they form faces such that each vertex's computed normal is an average of at most two faces pointing in the same direction. But I can't find that written down anywhere, and that explanation doesn't hold for the Polyhedron, where the vertices and faces were explicitly specified in code.
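That 24-vertex explanation can be verified directly, and the usual fix is to weld the duplicated vertices so that computeVertexNormals() averages over every face meeting at a corner. A minimal sketch, assuming a recent three.js where mergeVertices is exported from BufferGeometryUtils (in the release vintage of this question the helper lives at THREE.BufferGeometryUtils.mergeVertices and deleteAttribute is named removeAttribute):

import * as THREE from 'three';
import { mergeVertices } from 'three/addons/utils/BufferGeometryUtils.js';

let geometry = new THREE.BoxGeometry(2, 2, 2);
console.log(geometry.attributes.position.count); // 24: three copies per corner

// The copies differ only in their normal/uv attributes, so drop those
// before welding; otherwise mergeVertices keeps them apart.
geometry.deleteAttribute('normal');
geometry.deleteAttribute('uv');
geometry = mergeVertices(geometry);
console.log(geometry.attributes.position.count); // 8: one per corner

// Now each vertex normal averages all three faces meeting at its corner,
// and MeshLambertMaterial produces the expected Gouraud gradients.
geometry.computeVertexNormals();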

Convert Cubemap coordinates to equivalents in Equirectangular

I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in an equirectangular projection, for a 2500x1250 px image. The layout is like this:
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
1. Convert your image + 2D coordinates to a 3D normalized vector
To make this work as intended, the point (0,0,0) must be the center of your cube map. So basically you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of the texture point (0,0). The direction vectors are just unit vectors where each axis has 3 options {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one such combination... Which one depends on your conventions, which we do not know, as you did not share any specifics.
2. Use a Cartesian to spherical coordinate system transformation
You do not need the radius, just the two angles...
3. Convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution...
U = lon * Usize / (2*Pi)
V = (lat + Pi/2) * Vsize / Pi
plus/minus some orientation signs to match your coordinate systems; see the sketch below.
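Putting the three steps together, a minimal JavaScript sketch; the per-face axis conventions in the switch are assumptions and must be adjusted to match your actual cube map, as noted above:

function cubeToEquirect(face, x, y, faceSize = 500, outW = 2500, outH = 1250) {
  // map the face pixel to [-1, 1] on the face plane
  const u = 2 * (x + 0.5) / faceSize - 1;
  const v = 2 * (y + 0.5) / faceSize - 1;

  // step 1: face pixel -> 3D direction (axis conventions assumed)
  let d;
  switch (face) {
    case 'front':  d = [ u, -v,  1]; break;
    case 'back':   d = [-u, -v, -1]; break;
    case 'left':   d = [-1, -v,  u]; break;
    case 'right':  d = [ 1, -v, -u]; break;
    case 'top':    d = [ u,  1,  v]; break;
    case 'bottom': d = [ u, -1, -v]; break;
  }
  const len = Math.hypot(d[0], d[1], d[2]);
  const [dx, dy, dz] = d.map(c => c / len);

  // step 2: Cartesian -> spherical angles (radius not needed)
  const lon = Math.atan2(dx, dz); // [-Pi, Pi]
  const lat = Math.asin(dy);      // [-Pi/2, Pi/2]

  // step 3: scale the angles to the equirectangular image
  const U = (lon + Math.PI) * outW / (2 * Math.PI);
  const V = (Math.PI / 2 - lat) * outH / Math.PI;
  return [U, V];
}

// e.g. cubeToEquirect('front', 160, 314) -> equirectangular [U, V]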
btw. just found this (possibly duplicate QA):
GLSL Shader to convert six textures to Equirectangular projection

Three.js pixel perfect at z=0 plane

Is there any way to configure the camera in Three.js so that when a 2D object (line, plane, image) is rendered at z=0, it doesn't bleed (from perspective) into other pixels?
Ex:
var plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.x = 4;
plane.position.y = 3;
scene.add(plane);
...
// Get canvas pixel information
context.readPixels(....);
If you examine the data from readPixels, I always find that the pixel is rendering into its surrounding pixels (ex: 3,3,0 may contain some color information), but I would like it to be pixel perfect if the element that is drawn is on the z=0 plane.
You probably want to use THREE.OrthographicCamera for the 2D stuff instead of THREE.PerspectiveCamera. That way it is not affected by perspective projection.
Which pixels get rendered depends on where your camera is. If your camera is, for example, at z=1, then a lot of pixels will get rendered. If you move your camera to z=1000 then, due to perspective, maybe only 1 pixel of your geometry will get rendered.
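A minimal sketch of that suggestion: an orthographic camera set up so that one world unit maps to exactly one canvas pixel at z=0 (the canvas size and material are assumed placeholders):

const width = 800, height = 600; // assumed canvas size in pixels
const camera = new THREE.OrthographicCamera(
  0, width,   // left, right
  height, 0,  // top, bottom
  0.1, 10     // near, far
);
camera.position.z = 1;

// A 1x1 plane centered on a half-pixel coordinate covers exactly one pixel:
const plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
plane.position.set(4.5, 3.5, 0); // fills the pixel at column 4, row 3
scene.add(plane);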

Direct3D9 Calculating view space point light position

I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I only have depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't come close to representing the view-space position. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y-axis is inverted. Only if the camera is at (0,0,0) does the light stand at (0,0), in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView;       // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
    &D3DXVECTOR3(x, y, z),    // the camera position
    &D3DXVECTOR3(xt, yt, zt), // the look-at position
    &D3DXVECTOR3(0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
    fov,   // the horizontal field of view
    asp,   // aspect ratio
    znear, // the near view-plane
    zfar); // the far view-plane
D3DXMATRIX vysl = matView * matProjection;
eff->SetMatrix("worldViewProj", &vysl); // vertices are transformed OK in the shader
// render g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0, 0, 0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight", &lpos); // but lpos is already a mess at this point
// render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist = length(float2(IN.texture0 * 2 - 1) - float2(poslight.xy));
OUT.col = tex2D(Sdiff, IN.texture0) / dist;
I have tried transforming the light only by matView, without the projection, but the problem is still the same. If I transform the light in the shader instead, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me that it should work, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1/+1 screen coordinate, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
                        float viewSpaceZ)
{
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z = viewSpaceZ;
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector, but not yet divided by w). But in your shader you use its xy components without taking w into account. This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for the vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolators, where the hardware eventually divides their xyz by the interpolated w.
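A minimal sketch of the first suggestion, doing the divide by w on the CPU before uploading the light position (variable names reuse those from the question):

D3DXVECTOR4 lpos;
D3DXVECTOR3 lpos2(0, 0, 0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // homogeneous clip space, w not yet divided out
if (lpos.w != 0.0f)
{
    lpos.x /= lpos.w; // after the divide, x and y are in NDC, i.e. (-1,1)
    lpos.y /= lpos.w; // for points inside the view frustum
    lpos.z /= lpos.w;
    lpos.w = 1.0f;
}
eff->SetVector("poslight", &lpos);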
