Jittering vertices - three.js

I'm trying to get rid of spatial jitter. To be more precise, I'm trying to replicate Three.js behavior concerning matrix manipulation.
My current rendering pipeline (WebGL with m4.js)
There is a scene object, a camera, and a mesh. The mesh has its position (mesh.position) set to the large position constant shown below, and the camera floats somewhere near it.
Vertices have positions relative to mesh.position:
new Float32Array([
-5, 0, -5,
5, 0, -5,
0, 0, 5
])
Important part of render loop:
const position = {x: 6428439.8443510765, y: 0, z: 4039717.5286310893}; // mesh is located there
camera.updateMatrixWorld();
camera.updateMatrixWorldInverse();
let modelViewMatrix = m4.multiply(camera.matrixWorldInverse, mesh.matrixWorld);
material.uniforms.projectionMatrix = {type: 'Matrix4fv', value: camera.projectionMatrix};
material.uniforms.modelViewMatrix = {type: 'Matrix4fv', value: modelViewMatrix};
material.use();
mesh.draw(material);
Vertex shader:
#version 300 es
precision highp float;
in vec3 position;
in vec3 color;
out vec3 vColor;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main() {
vColor = color;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Fragment shader:
#version 300 es
precision highp float;
out vec4 FragColor;
in vec3 vColor;
uniform float uSample;
void main() {
FragColor = vec4(vColor * uSample, 1);
}
Result:
When moving or rotating the camera around the mesh, a spatial jitter effect is observed, which is not the expected behavior.
I've implemented the same scene using Three.js, and as expected you can see no jitter while moving vertices or the camera: Codepen link. Three.js must be doing essentially the same thing as my implementation, but obviously I'm missing something.

It turns out that m4.js, which is part of the webgl-3d-math library used on webgl2fundamentals.org, stores matrices in Float32Array objects. This choice may help performance, but it also causes confusion, because JavaScript's Number is a 64-bit float. With world coordinates in the millions, a 32-bit float has too few mantissa bits left for the fractional part, so every matrix operation rounds aggressively and the error shows up as jitter. Three.js, by contrast, keeps Matrix4.elements in a plain JavaScript array, so the model-view multiplication happens in 64 bits and the huge camera and mesh translations cancel before anything is converted to 32 bits at uniform upload.
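A minimal sketch of the corresponding fix, assuming you otherwise keep the pipeline above: do the model-view multiplication in plain 64-bit JavaScript numbers so the large translations cancel first, and only downcast to Float32Array at upload time. The multiply4x4 helper and the *64 variable names are made up for this example; m4.js itself is not used here.
// Hypothetical helper: multiply two column-major 4x4 matrices stored in
// plain JS arrays, so all arithmetic stays in 64-bit floats.
function multiply4x4(a, b) {
  const out = new Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}
// cameraWorldInverse64 and meshWorld64 are assumed to be plain 16-element
// arrays kept in double precision (not Float32Array).
const modelViewMatrix64 = multiply4x4(cameraWorldInverse64, meshWorld64);
material.uniforms.modelViewMatrix = {
  type: 'Matrix4fv',
  // Rounding to 32 bits happens only here, after the large
  // translations have already cancelled each other out.
  value: new Float32Array(modelViewMatrix64),
};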

Related

Raycasting with InstancedMesh, InstancedBufferGeometry, custom shader

Basically, I can't get raycasting to work with them. My guess is that my matrix coordinate calculation is wrong, but I don't know how to do it right.
I set the vertex position and offset in the vertex shader, and in the InstancedMesh I set the same offset, expecting the raycast to return an instanceId, but nothing intersects. You can find my entire code here.
I tried to adapt an official raycasting example here, but can't figure out where I went wrong. My hodgepodge uses InstancedMesh, InstancedBufferGeometry, and a custom shader together. My objective is to learn how these work.
My question is: where did I go wrong?
My vertex shader:
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec4 color;
attribute vec3 offset;
varying vec3 vPosition;
varying vec4 vColor;
void main() {
vColor = vec4(color);
vPosition = offset*1.0 + position;
gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition, 1.0 );
// if gl_Position not set, nothing is shown
}
My InstancedMesh matrix setting:
for(let i = 0; i < SQUARE_COUNT; i++) {
transform.position.set(offsets[i * 3], offsets[i * 3 + 1], offsets[i * 3 + 2]) // offsets holds 3 floats per instance
transform.updateMatrix()
mesh.setMatrixAt(i, transform.matrix)
}
The offsets are set beforehand as follows:
for(let i = 0; i < SQUARE_COUNT; i++ ) {
offsets.push( 0 + i*0.05, 0 + i*0.05, 0 + i*0.05); // same is set in InstancedMesh
colors.push( Math.random(), Math.random(), Math.random(), Math.random() );
}
The raycaster has no awareness of any nonstandard transformation that you do in your vertex shader. It's just the way it works. It has no way of knowing that you are doing:
vPosition = offset*1.0 + position;
in your shader.
It works by assuming that you are running the bog-standard vertex shader with no additional transforms. It also assumes that every object you are casting against has a well-defined/computed bounding box.
If you are going to use raycasting, you may have to make a non-rendered scene that represents your objects in their final rendered positions, and cast against that.
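A minimal sketch of that approach, assuming the plane geometry and offsets array from the question (the proxy names are illustrative): build an invisible scene of ordinary meshes at the instances' final positions and cast against it.
// Never rendered; exists only so the raycaster has real world-space objects.
const proxyScene = new THREE.Scene();
const proxyGeometry = new THREE.PlaneGeometry(1, 1);
const proxyMaterial = new THREE.MeshBasicMaterial();
for (let i = 0; i < SQUARE_COUNT; i++) {
  const proxy = new THREE.Mesh(proxyGeometry, proxyMaterial);
  // Apply the same per-instance offset the vertex shader adds.
  proxy.position.set(offsets[i * 3], offsets[i * 3 + 1], offsets[i * 3 + 2]);
  proxy.userData.instanceId = i; // remember which instance this proxy stands for
  proxyScene.add(proxy);
}
// The proxy scene is never rendered, so update its matrices manually.
proxyScene.updateMatrixWorld(true);
// Cast against the proxies instead of the instanced mesh.
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera); // mouse: pointer position in NDC
const hits = raycaster.intersectObjects(proxyScene.children);
if (hits.length > 0) {
  console.log('hit instance', hits[0].object.userData.instanceId);
}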

How to texture non-unwrapped model using a cubemap

I have lots of models that aren't unwrapped (they don't have UV coordinates), and they are quite complex to unwrap. So I decided to texture them using a seamless cubemap:
[VERT]
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
...
void main()
{
gl_Position = u_projTrans * u_worldTrans * a_position;
texCoord = vec3(a_position);
}
[FRAG]
varying vec3 texCoord;
uniform samplerCube u_cubemapTex;
void main()
{
gl_FragColor = textureCube(u_cubemapTex, texCoord);
}
It works, but the result is quite weird because the texturing depends on the vertex positions. If my model is more complex than a cube or a sphere, I see visible seams and low texture resolution on some parts of the object.
Reflection maps well onto the model, but it has a mirror effect.
Reflection:
[VERT]
attribute vec3 a_normal;
varying vec3 v_reflection;
uniform mat4 u_matViewInverseTranspose;
uniform vec3 u_cameraPos;
...
void main()
{
mat3 normalMatrix = mat3(u_matViewInverseTranspose);
vec3 n = normalize(normalMatrix * a_normal);
//calculate reflection
vec3 vView = a_position.xyz - u_cameraPos.xyz;
v_reflection = reflect(vView, n);
...
}
How can I implement something like a reflection, but with a “sticky” effect, meaning the texture looks attached to a certain vertex (not moving)? Each side of the model must display its own side of the cubemap, and as a result it should look like ordinary 2D texturing. Any advice will be appreciated.
UPDATE 1
I summed up all the comments and decided to calculate the cubemap UV myself. Since I use LibGDX, some names may differ from the OpenGL ones.
Shader class:
public class CubemapUVShader implements com.badlogic.gdx.graphics.g3d.Shader {
ShaderProgram program;
Camera camera;
RenderContext context;
Matrix4 viewInvTraMatrix, viewInv;
Texture texture;
Cubemap cubemapTex;
...
@Override
public void begin(Camera camera, RenderContext context) {
this.camera = camera;
this.context = context;
program.begin();
program.setUniformMatrix("u_matProj", camera.projection);
program.setUniformMatrix("u_matView", camera.view);
cubemapTex.bind(1);
program.setUniformi("u_textureCubemap", 1);
texture.bind(0);
program.setUniformi("u_texture", 0);
context.setDepthTest(GL20.GL_LEQUAL);
context.setCullFace(GL20.GL_BACK);
}
@Override
public void render(Renderable renderable) {
program.setUniformMatrix("u_matModel", renderable.worldTransform);
viewInvTraMatrix.set(camera.view);
viewInvTraMatrix.mul(renderable.worldTransform);
program.setUniformMatrix("u_matModelView", viewInvTraMatrix);
viewInvTraMatrix.inv();
viewInvTraMatrix.tra();
program.setUniformMatrix("u_matViewInverseTranspose", viewInvTraMatrix);
renderable.meshPart.render(program);
}
...
}
Vertex:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec3 a_binormal;
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform mat4 u_matProj;
uniform mat4 u_matView;
uniform mat4 u_matModel;
uniform mat4 u_matViewInverseTranspose;
uniform mat4 u_matModelView;
void main()
{
gl_Position = u_matProj * u_matView * u_matModel * a_position;
v_texCoord = a_texCoord0;
//CALCULATE CUBEMAP UV (WRONG!)
//I decided that tm_l2g mentioned in comments is u_matView * u_matModel
v_cubeMapUV = vec3(u_matView * u_matModel * vec4(a_normal, 0.0));
/*
mat3 normalMatrix = mat3(u_matViewInverseTranspose);
vec3 t = normalize(normalMatrix * a_tangent);
vec3 b = normalize(normalMatrix * a_binormal);
vec3 n = normalize(normalMatrix * a_normal);
*/
}
Fragment:
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform sampler2D u_texture;
uniform samplerCube u_textureCubemap;
void main()
{
vec3 cubeMapUV = normalize(v_cubeMapUV);
vec4 diffuse = textureCube(u_textureCubemap, cubeMapUV);
gl_FragColor = vec4(diffuse.rgb, 1.0); // diffuse is a vec4; assigning it to .rgb does not compile
}
The result is completely wrong:
I expect something like this:
UPDATE 2
The texture looks stretched on the sides and distorted in some places if I use the vertex position as the cubemap coordinate in the vertex shader:
v_cubeMapUV = a_position.xyz;
I uploaded euro.blend, euro.obj and the cubemap files for review.
That code works only for meshes that are centered around (0,0,0); if that is not the case, or even if (0,0,0) is not inside the mesh, artifacts occur...
I would start with computing the BBOX BBOXmin(x0,y0,z0), BBOXmax(x1,y1,z1) of your mesh and translating the position used for the texture coordinate so it is centered around it:
center = 0.5*(BBOXmin+BBOXmax);
texCoord = vec3(a_position-center);
However, non-uniform vertex density would still lead to texture scaling artifacts, especially if the BBOX side sizes differ too much. Rescaling it to a cube would help:
vec3 center = 0.5*(BBOXmin+BBOXmax); // center of BBOX
vec3 size = BBOXmax-BBOXmin; // size of BBOX
vec3 r = a_position-center; // position centered around center of BBOX
r.x/=size.x; // rescale it to cube BBOX
r.y/=size.y;
r.z/=size.z;
texCoord = r;
Again, if the center of the BBOX is not inside the mesh, this will not work...
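For completeness, a CPU-side sketch of that preprocessing in plain WebGL JavaScript; positions is assumed to be a flat array of xyz triples, and the uniform names u_bboxCenter and u_bboxSize are made up here:
// Compute the axis-aligned bounding box once, at mesh load time.
const bmin = [Infinity, Infinity, Infinity];
const bmax = [-Infinity, -Infinity, -Infinity];
for (let i = 0; i < positions.length; i += 3) {
  for (let a = 0; a < 3; a++) {
    bmin[a] = Math.min(bmin[a], positions[i + a]);
    bmax[a] = Math.max(bmax[a], positions[i + a]);
  }
}
const center = [0, 1, 2].map(a => 0.5 * (bmin[a] + bmax[a]));
const size = [0, 1, 2].map(a => bmax[a] - bmin[a]);
// The vertex shader can then do (componentwise division):
//   texCoord = (a_position.xyz - u_bboxCenter) / u_bboxSize;
gl.uniform3f(gl.getUniformLocation(program, 'u_bboxCenter'), center[0], center[1], center[2]);
gl.uniform3f(gl.getUniformLocation(program, 'u_bboxSize'), size[0], size[1], size[2]);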
The reflection part is not clear to me; do you have some images/screenshots?
[Edit1] simple example
I see it like this (without the center offsetting and aspect ratio corrections mentioned above):
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
pixel_col=col;
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec3 pixel_txr;
uniform samplerCube txr_skybox;
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
frag_col=texture(txr_skybox,pixel_txr);
}
//------------------------------------------------------------------
And here is a preview:
The white torus in the first few frames uses the fixed-function pipeline; the rest uses shaders. As you can see, the only inputs I use are the vertex position, color, and the transform matrices: tm_l2g, which converts from mesh coordinates to the global world, and tm_g2s, which holds the perspective projection...
As you can see, I render the BBOX with the same cube map texture I use for rendering the model, so it looks like a cool reflection/transparency effect :) (which was not intentional).
Anyway, when I change the line
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
into:
pixel_txr=pos;
in my vertex shader, the object becomes solid again:
You can combine both by passing two texture coordinate vectors and fetching two texels in the fragment shader, blending them together with some ratio. Of course, you would need to pass two cube map textures, one for the object and one for the skybox...
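A fragment-shader sketch of that combination, in the same style as the shaders above (the second sampler, the blend_ratio uniform, and the two varyings are assumptions for this example):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec3 pixel_txr_obj; // object texture coordinate (pos - center)
in smooth vec3 pixel_txr_sky; // reflection vector for the skybox
uniform samplerCube txr_object; // cube map "glued" to the mesh
uniform samplerCube txr_skybox; // cube map used for the reflection
uniform float blend_ratio; // 0.0 = object only, 1.0 = reflection only
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    vec4 col_obj=texture(txr_object,pixel_txr_obj);
    vec4 col_sky=texture(txr_skybox,pixel_txr_sky);
    frag_col=mix(col_obj,col_sky,blend_ratio);
}
//------------------------------------------------------------------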
The red warnings are from my CPU-side code, reminding me that I am trying to set uniforms that are not present in the shaders (I started from the bump mapping example without changing the CPU-side code...).
[Edit2] here is a preview of your mesh with the offset
The vertex shader changes a bit (I just added the offsetting described above):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
uniform vec3 center=vec3(0.0,0.0,2.0);
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
pixel_col=col;
pixel_txr=pos-center;
gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
So by offsetting the center point you can get rid of the singular-point distortion. However, as I mentioned in the comments, for arbitrary meshes there will always be some distortion with cheap texturing tricks like this instead of proper texture coordinates.
Beware, my mesh was resized/normalized (sadly I do not remember if it is the <-1,+1> range or a different one, and I am too lazy to dig into the source code of the GLSL engine I tested this in), so the offset might need a different magnitude in your environment to achieve the same result.

Diffuse lighting artifacts (OpenGL 4)

I'm trying to implement the simplest possible diffuse lighting after reading a few tutorials, but I fail miserably.
I load a 3D mesh from a Wavefront OBJ file, and if I don't apply lighting, it looks just fine. But when I do apply lighting, the animal looks like a chessboard and the cube is messed up even more:
animal comparison (with normals)
cube comparison (with normals)
Here is the vertex shader:
#version 400
layout (location = 0) in vec4 a_position;
layout (location = 1) in vec3 a_texCoords;
layout (location = 2) in vec3 a_normal;
uniform mat4 u_viewProjection;
uniform mat4 u_model;
out vec3 v_fragPos;
out vec3 v_fragNormal;
void main() {
v_fragPos = a_position.xyz;
v_fragNormal = a_normal;
gl_Position = u_viewProjection * a_position;
}
I pass the fragment position and normals to the fragment shader as-is because I'm not applying any model transformations; I simply assume that a model already has world coordinates after loading it from the file (ignore the u_model uniform, it's not used for now).
Then, the fragment shader:
#version 400
uniform vec3 u_lightPos;
uniform vec3 u_lightColor;
uniform vec3 u_diffuseColor;
uniform vec3 u_specularColor;
uniform vec3 u_ambientColor;
in vec3 v_fragPos;
in vec3 v_fragNormal;
out vec4 o_fragColor;
void main() {
vec3 lightDir = u_lightPos - v_fragPos;
float cosTheta = max(dot(normalize(v_fragNormal), normalize(lightDir)), 0.0);
vec3 diffuseContribution = cosTheta * u_lightColor;
o_fragColor = vec4(u_diffuseColor * diffuseContribution, 1.0);
}
No model or normal matrices are used, because no rotation (or scale) is applied to the model for now.
I thought about incorrect normals, but at least a simple cube should have correct ones, right?
Also, maybe I should mention that I'm using NSOpenGLView under macOS.
Any help will be appreciated.
Thank you!
UPDATE:
Adding the VBO setup.
This is what a single vertex looks like:
struct Vertex1P1N1UV {
glm::vec4 mPosition;
glm::vec3 mTextureCoords;
glm::vec3 mNormal;
Vertex1P1N1UV();
Vertex1P1N1UV(glm::vec4 position, glm::vec3 texcoords, glm::vec3 normal);
};
I initialize my VAO like this
auto* VAO = new GLVertexArray<Vertex1P1N1UV>();
VAO->initialize(subMesh.vertices(), GLVertexArrayLayoutDescription({
static_cast<int>(glm::vec4::length() * sizeof(GLfloat)),
static_cast<int>(glm::vec3::length() * sizeof(GLfloat)),
static_cast<int>(glm::vec3::length() * sizeof(GLfloat)) }));
VAO initialize method
void initialize(const std::vector<Vertex>& vertices, const GLVertexArrayLayoutDescription& layoutDescription) {
bind();
mVertexBuffer.initialize(vertices);
GLuint offset = 0;
for (GLuint location = 0; location < layoutDescription.getAttributeSizes().size(); location++) {
glEnableVertexAttribArray(location);
GLsizei attributeSize = layoutDescription.getAttributeSizes()[location];
glVertexAttribPointer(location, attributeSize / sizeof(GLfloat), GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void *>(offset));
offset += attributeSize;
}
}
And buffer initialize method
void initialize(const std::vector<Vertex>& data) override {
bind();
glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(Vertex), data.data(), GL_STATIC_DRAW);
}
Vertex will become Vertex1P1N1UV in this case
UPDATE:
Implemented normal visualization (re-uploaded the screenshots, scroll to top).
What bothers me is that I can see normals through the mesh, as if it were transparent despite the opaque color. Is this a depth testing problem?
After 2 days of struggling I found the problem: I did not enable the depth buffer on the NSOpenGLView. That is one line of code.
If someone stumbles upon this, they can look at OpenGL GL_DEPTH_TEST not working for a solution.
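For readers coming from WebGL rather than macOS, a sketch of the analogous pitfall and fix (the actual NSOpenGLView one-liner is a depth-size attribute on the pixel format, covered in the linked answer):
// A depth buffer must exist: it is requested at context creation.
// depth: true is the default; an explicit depth: false reproduces the bug.
const gl = canvas.getContext('webgl2', { depth: true });
// ...and depth testing must actually be enabled before drawing.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);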

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes it possible to always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main() {
vec3 vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * -viewVector );
intensity = pow( c - dot(vNormal, vNormel), p );
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works: it reacts to camera movement, which would be great. BUT this element is a child of a car. The car itself moves around, and when it rotates, the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow down my game by 20fps so I'll stick to this lightweight solution.
I found a solution for Direct3D, which says you have to remove the rotation data from the transformation matrix, but I don't know how to do this in THREE.js.
I guess that instead of adding calculations for the car transformation, there must be a way to simplify this shader instead.
How to simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How do I do that with ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem, I think, is intercepting the transformation matrix from ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader too:
uniform vec3 glowColor;
varying float intensity;
void main() {
vec3 glow = glowColor * intensity;
gl_FragColor = vec4( glow, 1.0 );
}
edit: for now I found a workaround, which is eliminating the parent's rotation influence by setting the opposite quaternion. It's not perfect, and it happens on the CPU, not the GPU:
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = this.parent.quaternion._w; // keep w: negating only x/y/z gives the conjugate (the inverse of a unit quaternion); negating all four is the same rotation again
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need to do is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center, basically the 4 vertices of a plane that faces the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
"vec3 worldPos = billboard(a_offset, u_view);"
// then do whatever else.
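Putting it together as a hedged THREE.js sketch: the quad's four vertices all sit at the sprite's center, a_offset picks the corner, and the shader rebuilds the quad from the camera's right/up axes so no parent rotation can tilt it. The geometry layout, the car object, and the 0.5 half-size factor are illustrative, not from the question.
// Four vertices, all at the sprite center; a_offset selects the corner.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(
  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3));
geometry.setAttribute('a_offset', new THREE.Float32BufferAttribute(
  [-1, -1, 1, -1, 1, 1, -1, 1], 2));
geometry.setIndex([0, 1, 2, 0, 2, 3]);
const material = new THREE.ShaderMaterial({
  uniforms: { glowColor: { value: new THREE.Color(0xff2200) } },
  vertexShader: [
    'attribute vec2 a_offset;',
    'void main() {',
    '  // world-space center: parent rotation moves it but cannot tilt the quad',
    '  vec3 center = (modelMatrix * vec4(position, 1.0)).xyz;',
    '  vec3 right = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);',
    '  vec3 up    = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);',
    '  vec3 worldPos = center + (right * a_offset.x + up * a_offset.y) * 0.5;',
    '  gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform vec3 glowColor;',
    'void main() { gl_FragColor = vec4(glowColor, 1.0); }'
  ].join('\n')
});
const sprite = new THREE.Mesh(geometry, material);
sprite.frustumCulled = false; // all positions collapse to one point, so the default bounds are degenerate
car.add(sprite); // stays camera-facing no matter how the car rotates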

Lighting not dynamically changing on objects when moved

I'm having trouble with my light source and objects in my WebGL app. In my "drawScene" function, I load the viewport, clear the view, then render my light. After that I set up my matrix and render my VBOs (pushing and popping the matrix).
When I first load my app, the light source is correct: it's on top of the object.
I then move the light source to the left, and the lighting displays correctly on the object, as it should.
Then here is the problem: I move the object to the left, past the light source, but the lighting on the object does not move to the right side of the object.
If I move the light all the way to the left, it only changes on the object once the light passes the initial starting position (0, 0).
I thought it was a matrix push/pop issue, but I've corrected the architecture of the code: the light renders first, then the matrix is set, and then the object. It may be an issue with my shaders, but I cannot tell... Here is what my shaders look like. Can anyone help me out?
<script id="shader-fs" type="x-shader/x-fragment"> // Textured, lit, normal mapped frag shader precision mediump float;
// uniforms from app
uniform sampler2D samplerD; // diffuse texture map
uniform sampler2D samplerN; // normal texture map
uniform vec3 uLightColor; // directional light color
uniform vec3 uAmbientColor; // ambient light color
// interpolated values from vertex shader
varying vec2 vTextureCoord;
varying vec3 vLightDir;
void main()
{
// get the color values from the texture and normalmap
vec4 clrDiffuse = texture2D(samplerD, vTextureCoord);
vec3 clrNormal = texture2D(samplerN, vTextureCoord).rgb;
// scale & normalize the normalmap color to get a normal vector for this texel
vec3 normal = normalize(clrNormal * 2.0 - 1.0);
// Calc normal dot lightdir to get directional lighting value for this texel.
// Clamp negative values to 0.
vec3 litDirColor = uLightColor * max(dot(normal, vLightDir), 0.0);
// add ambient light, then multiply result by diffuse tex color for final color
vec3 finalColor = (uAmbientColor + litDirColor) * clrDiffuse.rgb;
// finally apply alpha of texture for the final color to render
gl_FragColor = vec4(finalColor, clrDiffuse.a);
}
</script>
<script id="shader-vs" type="x-shader/x-vertex"> // Textured, lit, normal mapped vert shader precision mediump float;
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord; // Texture & normal map coords
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform vec3 uLightDir; // Application can set desired light direction
varying vec2 vTextureCoord; // Passed through to frag shader
varying vec3 vLightDir; // Compute transformed light dir for frag shader
void main(void)
{
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vTextureCoord = aTextureCoord;
vLightDir = uLightDir;
vLightDir = normalize(vLightDir);
}
</script>
@MaticOblak:
function webGLStart() { ... this.light = new Light(); ...
function Light() { ... this.create(); ...
Light.prototype.create = function() {
gl.uniform3fv(shader.prog.lightDir, new Float32Array([0.0, 0.0, 1.0])); // vec3 uniform: 3 components, not 4
gl.uniform3fv(shader.prog.lightColor, new Float32Array([0.8, 0.8, 0.8]));
gl.uniform3fv(shader.prog.ambientColor, new Float32Array([0.2, 0.2, 0.2]));
}
function drawScene() {
...
this.light.render();
mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [-player.getPosition()[0], -player.getPosition()[1], 0.0]);
mat4.translate(mvMatrix, [0, 0, -20.0]);
this.world.render();
this.player.render();
Light.prototype.render = function() {
gl.uniform3f(shader.prog.lightDir, this.position[0], this.position[1], this.position[2]);
}
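A sketch of one way to fix it, assuming the light should behave as a point light at a position rather than a fixed direction: upload the light position (expressed in the same space as uMVMatrix times the vertex) every frame and derive the direction per vertex, so it changes whenever either the light or the object moves. uLightPos is a new uniform replacing uLightDir; the fragment shader above can stay as it is.
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform vec3 uLightPos; // light POSITION, in the same space as uMVMatrix * vertex
varying vec2 vTextureCoord;
varying vec3 vLightDir;
void main(void)
{
    vec4 pos = uMVMatrix * vec4(aVertexPosition, 1.0);
    gl_Position = uPMatrix * pos;
    vTextureCoord = aTextureCoord;
    // Per-vertex direction toward the light; moving the object now changes it.
    // (Re-normalizing in the fragment shader avoids interpolation error.)
    vLightDir = normalize(uLightPos - pos.xyz);
}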
