Cube map distorts when you translate mesh - three.js

I'm trying to create a skybox using the cube map shader (like in the examples) and noticed a distortion when you translate the mesh.
Create a cube of, say, 1 unit in width, height, and depth. Set the side to THREE.BackSide and depthWrite to false. Then scale the mesh to, say, 1000 units on the x, y, and z axes.
When the mesh is positioned at the center of the world everything is fine, but as soon as you translate the mesh the cube map starts to distort badly.
You would want to move the mesh to the same position as the camera, so that the user can never reach the skybox's limits by walking around.
The shader code I'm using is this:
'cube': {
    uniforms: {
        "tCube": { type: "t", value: null },
        "tFlip": { type: "f", value: -1 }
    },
    vertexShader: [
        "varying vec3 vWorldPosition;",
        "void main() {",
        "    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );",
        "    vWorldPosition = worldPosition.xyz;",
        "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
        "}"
    ].join("\n"),
    fragmentShader: [
        "uniform samplerCube tCube;",
        "uniform float tFlip;",
        "varying vec3 vWorldPosition;",
        "void main() {",
        "    gl_FragColor = textureCube( tCube, vec3( tFlip * vWorldPosition.x, vWorldPosition.yz ) );",
        "}"
    ].join("\n")
}
Does anyone know if the shader can be modified to prevent this distortion?
Many thanks!

After doing some research, I found that the cube map shaders for skyboxes rely on the camera being at the center of the world. So to get this working in the scenario I described above, instead of setting the skybox's position to the camera's, I simply set the camera's world position to 0.
Just before rendering the skybox you need to do this:
// Get the current position
this._prevCamPos.getPositionFromMatrix( camera.matrixWorldInverse );
// Now set the position of the camera to be 0,0,0
camera.matrixWorldInverse.elements[12] = 0;
camera.matrixWorldInverse.elements[13] = 0;
camera.matrixWorldInverse.elements[14] = 0;
Then, just after it's rendered, it needs to go back:
// Now revert the camera back
camera.matrixWorldInverse.elements[12] = this._prevCamPos.x;
camera.matrixWorldInverse.elements[13] = this._prevCamPos.y;
camera.matrixWorldInverse.elements[14] = this._prevCamPos.z;
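Put together, the two snippets wrap the skybox pass like this (a sketch; skyboxScene is an assumed separate scene holding the skybox mesh, and this._prevCamPos a THREE.Vector3):
// Cache the camera's translation, then zero it for the skybox pass
this._prevCamPos.getPositionFromMatrix( camera.matrixWorldInverse );
camera.matrixWorldInverse.elements[12] = 0;
camera.matrixWorldInverse.elements[13] = 0;
camera.matrixWorldInverse.elements[14] = 0;
renderer.render( skyboxScene, camera ); // skybox pass: camera treated as if at the origin
// Restore the translation for the normal pass
camera.matrixWorldInverse.elements[12] = this._prevCamPos.x;
camera.matrixWorldInverse.elements[13] = this._prevCamPos.y;
camera.matrixWorldInverse.elements[14] = this._prevCamPos.z;
renderer.render( scene, camera ); // main pass: real camera position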

Related

How can I color points in Three JS using OpenGL and Fragment Shaders to depend on the points' distance to the scene origin

To clarify, I am using React, React Three Fiber, and Three.js.
I have 1000 points mapped into the shape of a disc, and I would like to give them texture via ShaderMaterial, which takes a vertexShader and a fragmentShader. For the color I want the points to transition in a gradient from blue to red: points farther from the origin are blue, and points closest to the origin are red.
This is the vertexShader:
const vertexShader = `
uniform float uTime;
uniform float uRadius;
varying float vDistance;

void main() {
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    vDistance = length(mvPosition.xyz);
    gl_Position = projectionMatrix * mvPosition;
    gl_PointSize = 5.0;
}
`
export default vertexShader
And here is the fragmentShader:
const fragmentShader = `
uniform float uDistance[1000];
varying float vDistance;

void main() {
    // Calculate the distance of the fragment from the center of the point
    float d = 1.0 - length(gl_PointCoord - vec2(0.5, 0.5));
    // Interpolate the alpha value of the fragment based on its distance from the center of the point
    float alpha = smoothstep(0.45, 0.55, d);
    // Interpolate the color of the point between red and blue based on the distance of the point from the origin
    vec3 color = mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance);
    // Set the output color of the fragment
    gl_FragColor = vec4(color, alpha);
}
`
export default fragmentShader
At first I tried to solve this by passing an array of normalized distances for every point, but I now realize the points have no way of knowing which array index holds the distance correlating to themselves.
The main thing I am confused about is how gl_FragColor works. In the example linked, the idea is that every point from the vertexShader file will have a vDistance and use that value to assign a unique color to itself in the fragmentShader.
So far I have only succeeded in getting all of the points to be the same color; they do not seem to differ based on distance at all.
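One thing worth noting is that mix() does not clamp, so vDistance has to land in [0, 1], and length(mvPosition.xyz) measures distance to the camera, not to the origin. A minimal sketch of a vertex shader that addresses both, assuming (as the uniform name suggests) that uRadius holds the disc's radius:
const vertexShader = `
uniform float uRadius;
varying float vDistance;

void main() {
    // Distance to the scene origin, not the camera: use the world position
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    // Normalize into [0, 1] so mix() interpolates instead of extrapolating
    vDistance = clamp(length(worldPosition.xyz) / uRadius, 0.0, 1.0);
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
    gl_PointSize = 5.0;
}
`
export default vertexShader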

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes the material always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main() {
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * -viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works. It reacts to camera movement, which would be great, BUT this element is a child of a car. The car itself moves around, and when it rotates, the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow down my game by 20fps so I'll stick to this lightweight solution.
I found a solution for Direct3D where you remove the rotation data from the transformation matrix, but I don't know how to do this in THREE.js.
Rather than adding calculations for the car's transformation, I suspect there must be a way to simplify this shader instead.
How can I simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How do I do that with ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem here, I think, is intercepting the transformation matrix from ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader as well:
uniform vec3 glowColor;
varying float intensity;

void main() {
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
edit: for now I found a workaround, which is eliminating the parent's rotation influence by setting the opposite (conjugate) quaternion. Not perfect, and it happens on the CPU, not the GPU:
// the inverse of a unit quaternion is its conjugate: negate x, y and z but keep w
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = this.parent.quaternion._w;
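Depending on your three.js revision, the same cancellation can be written with the public quaternion API instead of poking the underscored internals (a sketch, untested):
// copy the parent's rotation and conjugate it to get the inverse rotation
this.quaternion.copy( this.parent.quaternion ).conjugate();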
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need to do is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center, basically the 4 vertices of a plane that faces the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
"vec3 worldPos = billboard(a_offset, u_view);"
// then do whatever else.
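For context, here is a fuller sketch of how that helper might sit in a three.js vertex shader. The u_view above is three.js's built-in viewMatrix; uCenter (the sprite's center in world space) and the aOffset corner attribute are assumptions:
vertexShader: [
    "uniform vec3 uCenter;",
    "attribute vec2 aOffset;",
    "vec3 billboard(vec2 v, mat4 view){",
    "    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
    "    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
    "    return right * v.x + up * v.y;",
    "}",
    "void main() {",
    "    // aOffset is one of (1,1), (1,-1), (-1,1), (-1,-1) per corner",
    "    vec3 worldPos = uCenter + billboard(aOffset, viewMatrix);",
    "    gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);",
    "}"
].join("\n")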

WebGL GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 1

I'm attempting to fix a pre-existing bug in some code that is based on THREE.js rev 49 with some custom shaders.
I'm a total WebGL newb, so I haven't been able to make heads or tails of other answers, since they seemed to assume a lot more knowledge than I have. I would be super appreciative of even a hint as to what to look for! :) The end result of the code is to draw a translucent box wireframe and paint the faces with translucent textures.
The particular error is:
[.WebGLRenderingContext]GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 1
I traced the issue to a particular _gl.drawElements( _gl.TRIANGLES, geometryGroup.__webglFaceCount, _gl.UNSIGNED_SHORT, 0 ); call in THREE.WebGLRenderer.renderBuffer.
Here is a snippet of the calling code:
scene.overrideMaterial = depthMaterial; // shaders below
var ctx = renderer.getContext(); // renderer is a THREE.WebGLRenderer
ctx.disable(ctx.BLEND);
// renderTarget is a THREE.WebGLRenderTarget; _camera and scene are obvious
renderer.render(scene, _camera, renderTarget, true); // error occurs here
Here are the relevant shaders:
uniforms: {},
vertexShader: [
    "varying vec3 vNormal;",
    "void main() {",
    "    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );",
    "    vNormal = normalMatrix * normal;",
    "    gl_Position = projectionMatrix * mvPosition;",
    "}"
].join("\n"),
fragmentShader: [
    "vec4 pack_depth( const in highp float depth ) {",
    "    const highp vec4 bit_shift = vec4( 256.0, 256.0*256.0, 256.0*256.0*256.0, 256.0*256.0*256.0*256.0 );",
    "    vec4 res = depth * bit_shift;",
    "    res.x = min(res.x + 1.0, 255.0);",
    "    res = fract(floor(res) / 256.0);",
    "    return res;",
    "}",
    "void main() {",
    "    gl_FragData[0] = pack_depth( gl_FragCoord.z );",
    "}"
].join("\n")
Thanks for your help!
In WebGL you set up buffers full of data, usually vertex positions, normals, colors, texture coordinates. You then ask WebGL to draw something with those buffers. You can ask with gl.drawArrays or with gl.drawElements. gl.drawElements uses another buffer full of indices to decide which vertices to use.
The error you got means you asked WebGL to draw or access more elements than the buffers you set up. In other words, if you provide only 3 vertices' worth of data but ask it to draw 4 vertices when you call gl.drawArrays, you'll get that error. Similarly, if you provide only 3 vertices' worth of data but then set up indices that access any vertex greater than 2, you'll get that error. You've got 3 vertices numbered #0, #1, and #2, so if any of your indices are greater than 2, you're asking WebGL to access something out of range of the 3 vertices you provided.
So, check your data. Are your indices out of range? Is one of your buffers shorter than the others? Etc.
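To make the failure concrete, here is a minimal plain-WebGL sketch (illustrative only; gl and positionLocation are assumed to already exist) that triggers this exact error: three vertices, but an index referring to vertex #3:
// Three vertices: #0, #1, #2
var positions = new Float32Array([ 0, 0, 0,  1, 0, 0,  0, 1, 0 ]);
// Index 3 is out of range: only vertices #0..#2 exist
var indices = new Uint16Array([ 0, 1, 3 ]);

var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);

var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

// GL_INVALID_OPERATION: attempt to access out of range vertices
gl.drawElements(gl.TRIANGLES, 3, gl.UNSIGNED_SHORT, 0);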
I'm adding this for thoroughness: I was using an imported OBJ model and was getting this error when creating the shader via THREE.ShaderLib["normalmap"].
The fix was simply calling computeTangents() on the mesh's geometry object:
model.geometry.computeTangents();
archived the answer here

How to get fullscreen texture coordinates for a fullscreen texture from a previous rendering pass?

I do two rendering passes in a WebGL application using three.js (contrived example here):
renderer.render(depthScene, camera, depthTarget);
renderer.render(scene, camera);
The first rendering pass is to the render target depthTarget which I want to access in the second rendering pass as a texture uniform:
uniform sampler2D tDepth;

float unpack_depth( const in vec4 rgba_depth ) { ... }

void main() {
    vec2 screenTexCoord = vec2( 1.0, 1.0 );
    float depth = 1.0 - unpack_depth( texture2D( tDepth, screenTexCoord ) );
    gl_FragColor = vec4( vec3( depth ), 1.0 );
}
My question is how do I get the value for screenTexCoord? It is not gl_FragCoord.xy.
To avoid a possible misunderstanding: I don't want to render the texture from the first pass to a quad. I want to use the texture from the first pass while rendering the geometry in the second pass.
EDIT:
According to the WebGL specification, gl_FragCoord contains window coordinates, which are normalized device coordinates (NDC) scaled by the viewport. The NDC are within [-1, 1], so the following should yield coordinates within [0, 1] for texture lookup:
vec2 ndcXY = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 screenTexCoord = (ndcXY+1.0)/2.0;
But somewhere I must be wrong, because the updated example still does not show the (packed) depth?!
I finally figured it out myself. The correct way to calculate the texture coordinates is just:
vec2 screenTexCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
See a working example here.
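Note that viewWidth and viewHeight are ordinary uniforms that have to be fed from JavaScript; a sketch in the uniform style used elsewhere on this page (whether the "t" uniform takes the render target itself or its .texture depends on your three.js revision):
uniforms: {
    "tDepth": { type: "t", value: depthTarget },
    "viewWidth": { type: "f", value: renderer.domElement.width },
    "viewHeight": { type: "f", value: renderer.domElement.height }
}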

Vertex Displacement Doesn't Work in Three.js

I've spent the last week experimenting with Three.js and WebRTC and feel like I've exhausted the documentation on this subject. I'm trying to map uniform sampler2D tDiffuse; to this vertex shader: the brightness of each pixel in tDiffuse should map to a vertex displacement on each vertex of the output. But I get the following error: ERROR: 0:80: 'constructor' : not enough data provided for construction
Right now, this shader is in the effects pipeline after the model is rendered. Do I need to specify width and height, or am I missing something? Is there something wrong with my code, which I've cobbled together from a few different sources? Can I even do vertex displacement in this effects pipeline, or do I need to apply the shader differently, to the mesh in my scene? I understand the theory behind what I need to do, but GLSL, and this three.js pipeline in particular, are new to me, although I have plenty of experience with similar graphical applications.
THREE.RuttEtraShader = {
    uniforms: {
        "tDiffuse": { type: "t", value: null },
        "opacity": { type: "f", value: 1.0 }
    },
    vertexShader: [
        "uniform sampler2D tDiffuse;",
        "varying vec3 vColor;",
        "varying vec2 vUv;",
        "void main() {",
        "    vec4 newVertexPos;",
        "    vec4 dv;",
        "    float df;",
        "    vUv = uv;",
        "    dv = texture2D( tDiffuse, vUv.xy );",
        "    df = 0.30*dv.x + 0.59*dv.y + 0.11*dv.z;",
        "    newVertexPos = vec4( normalize( position ) * df * 10.0 ) + vec4( position, 1.0 );",
        "    vColor = vec3( dv.x, dv.y, dv.z );",
        "    gl_Position = projectionMatrix * modelViewMatrix * newVertexPos;",
        "}"
    ].join("\n"),
    fragmentShader: [
        "uniform float opacity;",
        "uniform sampler2D tDiffuse;",
        "varying vec2 vUv;",
        "void main() {",
        "    vec4 texel = texture2D( tDiffuse, vUv );",
        "    gl_FragColor = opacity * texel;",
        "}"
    ].join("\n")
};
Effectively, your GLSL compiler is brain-dead.
It is having trouble with the line normalize( position ) ... because you have not declared position. Instead of giving you a useful message that explains this, it complains that position does not have the proper dimensions.
This of course assumes you have actually pasted the proper vertex / fragment shaders. I am not convinced, as neither of those shaders has 80 lines of code.
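For what it's worth, the error text ("not enough data provided for construction") also matches the vec4 built from a single vec3 in the vertex shader above: GLSL will not pad the missing w component for you. If that line is the culprit, giving the displacement an explicit w of 0.0 fixes the construction:
"    newVertexPos = vec4( normalize( position ) * df * 10.0, 0.0 ) + vec4( position, 1.0 );",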
