Three.js shader pointLight position - matrix

I'm writing my own shader and adding light sources; I more or less figured it out and got it working.
But there is a problem: the position of the light source is determined incorrectly, and when the camera rotates or moves something unimaginable happens.
In the vertex shader I get the position like this:
vec3 vGlobalPosition = (modelMatrix * vec4(position, 1.0)).xyz;
Then I try to compute the illuminated area:
float lightDistance = pointLights[ i ].distance;
vec3 lightPosition = pointLights[ i ].position;
float diffuseCoefficient = max(
1.0 - (distance(lightPosition,vGlobalPosition) / lightDistance ), 0.0);
gl_FragColor.rgb += color.rgb * diffuseCoefficient;
But, as I wrote above, if I rotate the camera the illuminated area moves around.
If I set the light position manually, everything looks correct:
vec3 lightPosition = vec3(2000,0,2000);
...
The question is: how do I get the correct position of the light source? I need its global (world-space) position, and I don't know what space the position stored in the light uniform is in.
Added an example: http://codepen.io/korner/pen/XMzEaG

Your problem lies with vPos. Currently you do:
vPos = (modelMatrix * vec4(position, 1.0)).xyz;
Instead you need to multiply the position with modelViewMatrix:
vPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
You need to use modelViewMatrix because PointLight.position is relative to the camera.
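Putting it together, here is a minimal sketch (not a drop-in from the codepen) that assumes the ShaderMaterial is created with lights: true, THREE.UniformsLib.lights merged into its uniforms, and at least one PointLight in the scene, so NUM_POINT_LIGHTS and the pointLights array exist:
// vertex shader: pass the view-space position down
varying vec3 vPos;
void main() {
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    vPos = mvPosition.xyz; // same space as pointLights[i].position
    gl_Position = projectionMatrix * mvPosition;
}
// fragment shader: the struct fields must match what three.js uploads (position, color, distance, decay)
struct PointLight {
    vec3 position; // view space
    vec3 color;
    float distance;
    float decay;
};
uniform PointLight pointLights[NUM_POINT_LIGHTS];
uniform vec3 color;
varying vec3 vPos;
void main() {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    for (int i = 0; i < NUM_POINT_LIGHTS; i++) {
        float diffuseCoefficient = max(1.0 - distance(pointLights[i].position, vPos) / pointLights[i].distance, 0.0);
        gl_FragColor.rgb += color.rgb * diffuseCoefficient;
    }
}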

Related

Perpendicular falloff material

I want to make a falloff semi-transparent shader: opaque when normals are perpendicular to the camera direction and transparent when normals face the camera. Here is the code I use so far:
vec3 vertexNormal = normalize( normalMatrix * normal );
vec3 viewDir = vec3( 0.0, 0.0, 1.0 );
float dotProd = dot(vertexNormal, viewDir);
alpha = abs ( 1.0 - dotProd );
It works, but when the objects are not located in the center of the camera view the falloff isn't consistent anymore; the far side has a larger falloff:
[Image: falloff larger towards the edge of the camera view]
Is there a way to get a consistent falloff thickness all over the camera view (all spheres would still be distorted by perspective, but the falloff contour would be the same everywhere)?
Thanks in advance!
Unless you're using an orthographic camera, your view direction is incorrect.
Try
vec4 vp = modelViewMatrix * vec4( position, 1.);
vec3 viewdir = - normalize(vp.xyz);
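A minimal sketch of that idea (vNormal and vViewDir are names I've made up for the varyings), doing the dot product per fragment so it stays correct across the whole view:
// vertex shader
varying vec3 vNormal;
varying vec3 vViewDir;
void main() {
    vec4 vp = modelViewMatrix * vec4(position, 1.0);
    vNormal = normalize(normalMatrix * normal);
    vViewDir = -normalize(vp.xyz); // direction from the vertex toward the camera, in view space
    gl_Position = projectionMatrix * vp;
}
// fragment shader (remember transparent: true on the material)
varying vec3 vNormal;
varying vec3 vViewDir;
uniform vec3 color;
void main() {
    float alpha = 1.0 - abs(dot(normalize(vNormal), normalize(vViewDir)));
    gl_FragColor = vec4(color, alpha);
}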

Three.js Get local position of vertex in shader, is that even what I need?

I am attempting to implement this technique of rendering grass into my three.js app.
http://davideprati.com/demo/grass/
On level terrain at y position 0, everything looks absolutely fantastic!
Problem is, my app (game) has the terrain modified by a heightmap so very few (if any) positions on that terrain are at y position 0.
It seems this vertex shader animation code assumes the grass object is sitting at y position 0 for the following vertex shader code to work as intended:
if (pos.y > 1.0) {
float noised = noise(pos.xy);
pos.y += sin(globalTime * magnitude * noised);
pos.z += sin(globalTime * magnitude * noised);
if (pos.y > 1.7){
pos.x += sin(globalTime * noised);
}
}
This condition works on the assumption that the terrain is flat and at position 0, so that only vertices above the ground animate. Well.. umm.. since with a heightmap (mostly) all vertices are above 1, some strange effects occur, such as grass sliding all over the place lol.
Is there a way to do this where I can specify a y position threshold based more on the sprite than its world position? Or is there a better way all together to deal with this "slidy" problem?
I am an extreme noobie when it comes to shader code =]
Any help would be greatly appreciated.
I have no idea what I'm doing.
Edit: OK, I think the issue is that I am altering the y position of each mesh merged into the main grass container geometry based on the y position of the terrain it sits on. I guess the shader is looking at the local position, but since the geometry itself is vertically displaced, the shader doesn't know how to compensate. Hmm…
Ok, I made a fiddle that demonstrates the issue:
https://jsfiddle.net/titansoftime/a3xr8yp7/
Change the value on line# 128 to a 1 instead of 2 and everything looks fine. Not sure how to go about fixing this.
Also, I have no idea why the colors are doing that, they look fine in my app.
If I understood the question correctly:
You are right in asking for the "local" position. Let's say a single strand of grass is a narrow strip with some height segments.
If you want this to be modular, easy to scale and such, it would most likely extend in some direction in the 0-1 range. Let's say it has four segments along that direction, which would yield vertices with coordinates [0.0, 0.333, 0.666, 1.0]. That makes slightly more sense than an arbitrary range, because it's easy to reason that 0 is the ground and 1 is the tip of the blade.
This is the "local" or model space. When you multiply it with the modelMatrix you transform it to world space (call it localToWorld).
In the shader it could look something like this:
void main(){
vec4 localPosition = vec4( position, 1.);
vec4 worldPosition = modelMatrix * localPosition;
vec4 viewPosition = viewMatrix * worldPosition;
vec4 projectedPosition = projectionMatrix * viewPosition; //either orthographic or perspective
gl_Position = projectedPosition;
}
This is the classic "you have a scene graph node" which you transform. Depending on what you set for your mesh's position, rotation and scale, vec4 worldPosition will be different, but the local position is always the same. You can't tell from the world position alone whether something is the bottom or the top; any value is viable, since your terrain can be anything.
With this approach you can write a shader and logic saying: if a vertex is at a height of 0 (or less than some epsilon), don't animate it.
So this brings us to some logic that works in some assumed space (you have a rule for 1.0 and 1.7).
Because you are translating the geometries and merging them, you no longer have this user-friendly space that is the model space. These blades may very well skip the local-to-world transformation altogether (it may well end up being just an identity matrix).
This messes up your logic for selecting the vertices obviously.
If you have to take the approach of distributing them like this, then you need another channel to carry the meaning of that local space, even if you only use it for this animation.
Two suitable channels already exist: UVs and vertex colors. You can imagine UVs as another flat mesh, in another space, that maps onto the mesh you are rendering. But in this particular case it seems like you could use a custom attribute, say a float aBladeHeight.
void main(){
vec4 worldPosition = vec4(position, 1.); //you "burnt/baked" this transformation in, so no need to go from local to world in the shader
vec2 localPosition = uv; //grass in 2d, not transformed to your terrain
//this check knows whats on the bottom of the grass
//rather than whats on the ground (has no idea where the ground is)
if(localPosition.y > 0.0){ // GLSL needs an explicit comparison here
//since local does not exist, the only space we work in is world
//we apply the transformation in that space, but the filter
//is the check above, in uv space, where we know whats the bottom, whats the top
worldPosition.xy += myLogic();
}
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
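Alternatively, here is a rough sketch of the custom-attribute idea mentioned above: aBladeHeight would be a float you fill in per vertex while building/merging the geometry (0 at the root of a blade, 1 at its tip), so the animation no longer cares where the terrain is:
attribute float aBladeHeight; // 0.0 at the root, 1.0 at the tip of the blade
uniform float globalTime;
void main() {
    vec4 worldPosition = vec4(position, 1.0); // terrain offset already baked into the merged geometry
    if (aBladeHeight > 0.0) {
        // sway grows toward the tip of the blade, independent of terrain height
        worldPosition.xz += sin(globalTime) * aBladeHeight;
    }
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}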
To mimic the "local space"
void main(){
vec4 localSpace = vec4(uv,0.,1.);
gl_Position = projectionMatrix * modelViewMatrix * localSpace;
}
And all the blades would render overlapping each other.
EDIT
With instancing the shader would look something like this:
attribute vec4 aInstanceMatrix0; //16 floats to encode a matrix4
attribute vec4 aInstanceMatrix1;
attribute vec4 aInstanceMatrix2;
//attribute vec4 aInstanceMatrix3; //but one you know will be 0,0,0,1 so you can pack in the first 3
void main(){
vec4 localPos = vec4(position, 1.); //the local position is intact, it's the normalized 0-1 blade
//do your thing in local space
if(localPos.y > foo){
localPos.xz += myLogic();
}
//notice the difference: instead of using the modelMatrix, you use the instance attributes in its place
mat4 localToWorld = mat4(
aInstanceMatrix0,
aInstanceMatrix1,
aInstanceMatrix2,
//aInstanceMatrix3
0. , 0. , 0. , 1. //this is actually wrong i think, it should be the last column not row, but for illustrative purposes,
);
//to pack it more efficiently the rows would look like this
// xyz w
// xyz w
// xyz w
// 000 1
// off the top of my head I don't know what the correct code is
mat4 foo = mat4(
aInstanceMatrix0.xyz, 0.,
aInstanceMatrix1.xyz, 0.,
aInstanceMatrix2.xyz, 0.,
aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1.
);
//you can still use the modelMatrix with this if you want to move the ENTIRE hill with all the grass with .position.set()
vec4 worldPos = localToWorld * localPos;
gl_Position = projectionMatrix * viewMatrix * worldPos;
}
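For reference, a hedged version of just the matrix reconstruction (assuming, as above, that the translation is packed into the w components of the three vec4 attributes): GLSL's mat4 constructor is column-major, so the three basis vectors go into the first three columns with 0 in w, and the translation goes into the fourth column with 1 in w:
mat4 localToWorld = mat4(
    vec4(aInstanceMatrix0.xyz, 0.0), // column 0: x basis
    vec4(aInstanceMatrix1.xyz, 0.0), // column 1: y basis
    vec4(aInstanceMatrix2.xyz, 0.0), // column 2: z basis
    vec4(aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1.0) // column 3: translation
);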

THREE.js/GLSL: WebGL shader to color fragments based on world space position

I have seen solutions that color fragments based on their position in screen space or in their local object space, like Three.js/GLSL - Convert Pixel Coordinate to World Coordinate.
Those work with screen coordinates and change when the camera moves or rotates, or they only apply to local object space.
What I would like to accomplish instead is to color fragments based on their position in world space (as in the world space of the three.js scene graph).
Even when the camera moves, the color should stay constant.
Example of the desired behaviour: a 1x1x1 cube positioned in world space at (x: 0, y: 0, z: 2) would have its third color component (blue == z) always between 1.5 and 2.5. This should be true even if the camera moves.
What I have got so far:
vertex shader
varying vec4 worldPosition;
void main() {
// The following changes on camera movement:
worldPosition = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
// This is closer to what I want (colors don't change when camera moves)
// but it is in local not world space:
worldPosition = vec4(position, 1.0);
// This works fine as long as the camera doesn't move:
worldPosition = modelViewMatrix * vec4(position, 1.0);
// Instead I'd like the above behaviour but without color changes on camera movement
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
fragment shader
uniform vec2 u_resolution;
varying vec4 worldPosition;
void main(void)
{
// I'd like to use worldPosition something like this:
gl_FragColor = vec4(worldPosition.xyz * someScaling, 1.0);
// Here this would mean that fragments xyz > 1.0 in world position would be white and black for <= 0.0; that would be fine because I can still divide it to compensate
}
Here is what I have got:
https://glitch.com/edit/#!/worldpositiontest?path=index.html:163:15
If you move with wasd you can see that the colors don't stay in place. I'd like them to, however.
Your worldPosition is wrong. You shouldn't involve the camera in the calculation: that means no projectionMatrix and no viewMatrix.
// The world position of your vertex: NO CAMERA
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
// Position in normalized screen coords: ADD CAMERA
gl_Position = projectionMatrix * viewMatrix * worldPosition;
// Here you can compute the color based on worldPosition, for example:
vColor = normalize(abs(worldPosition.xyz));
Check this fiddle.
Note
Notice that the abs() used here for the color can give the same color for different positions, which might not be what you're looking for.
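For completeness, a minimal sketch of both shaders with that fix (vWorldPosition and someScaling are placeholder names; someScaling would be chosen so the world coordinates you care about land in the 0-1 range):
// vertex shader
varying vec3 vWorldPosition;
void main() {
    vec4 worldPosition = modelMatrix * vec4(position, 1.0); // no camera involved
    vWorldPosition = worldPosition.xyz;
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
// fragment shader
varying vec3 vWorldPosition;
uniform float someScaling; // e.g. 1.0 / sceneSize
void main() {
    // stays constant when the camera moves, because vWorldPosition is in world space
    gl_FragColor = vec4(abs(vWorldPosition) * someScaling, 1.0);
}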

Three.js shader pointLight to global position

Hello everyone. I'm working on a fog shader and trying to add light sources.
Recently I asked how to correctly determine the position of the light source; that question was solved, but another problem turned up. For the fog I use the world-space position from the modelMatrix:
vec3 fogVPosition = (modelMatrix * vec4( position, 1.0 )).xyz;
To calculate the intensity of the light, I compute the distance from the light source to the fog point:
float diffuseCoefficient = max( 1.0 - (distance(pointLights[i].position,fogVPosition) / plDistance), 0.0);
But, as I found out, the position of the light source is passed relative to the camera, so I can't calculate the distance correctly, and the light source changes its position when I move the camera.
Here's an example of what I did http://codepen.io/korner/pen/BWJLrq
In this example you can see that the light sources are not positioned correctly on the surface.
My goal is to get something like this screenshot: http://dt-byte.ru/f97dc222.png
It works if I set the global position of the light source manually; I just need to somehow get the global position of the light from pointLights[i].position.
The problem is solved!
Forgive me, friends, I was being slow; I didn't realize that I could also use the modelViewMatrix.
I added one more variable, ligVPosition:
fogVPosition = (modelMatrix * vec4( position, 1.0 )).xyz;
ligVPosition = (modelViewMatrix * vec4( position, 1.0 )).xyz;
Then I changed fogVPosition to ligVPosition in the light calculation:
float diffuseCoefficient = max( 1.0 - (distance(pointLights[i].position,ligVPosition) / plDistance), 0.0);
Now everything works fine!
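For anyone else hitting this, a minimal sketch of the resulting vertex shader (variable names as in the question): the world-space position drives the fog, and the view-space position is what you compare against pointLights[i].position:
varying vec3 fogVPosition; // world space, for the fog
varying vec3 ligVPosition; // view space, same space as pointLights[i].position
void main() {
    fogVPosition = (modelMatrix * vec4(position, 1.0)).xyz;
    ligVPosition = (modelViewMatrix * vec4(position, 1.0)).xyz;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}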

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes it possible to always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main() {
vec3 vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * -viewVector );
intensity = pow( c - dot(vNormal, vNormel), p );
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works: it reacts to camera movement, which is great. BUT this element is a child of a car; the car itself moves around, and when it rotates the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow down my game by 20fps so I'll stick to this lightweight solution.
I found a solution for Direct3D where you remove the rotation data from the transformation matrix, but I don't know how to do this in THREE.js.
I guess that instead of adding calculations with car transformation there must be a way to simplify this shader instead.
How to simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How do I do that with ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem here, I think, is intercepting the transformation matrix from ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader as well:
uniform vec3 glowColor;
varying float intensity;
void main() {
vec3 glow = glowColor * intensity;
gl_FragColor = vec4( glow, 1.0 );
}
edit: for now I found a workaround, which is to cancel the parent's rotation by setting the conjugate (inverse) quaternion. Not perfect, and it happens on the CPU, not the GPU:
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = this.parent.quaternion._w; // keep w: negating all four components gives the same rotation, not the inverse
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need to do is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center: basically the four corner vertices of a plane that faces the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
"vec3 worldPos = billboard(a_offset, u_view);"
// then do whatever else.
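Plugged into a full three.js vertex shader, it could look roughly like the sketch below (a_offset and the spriteScale uniform are my own assumptions; viewMatrix, modelMatrix and projectionMatrix are built-ins three.js already provides). Taking the centre from the model matrix's translation column throws away the parent car's rotation, which is exactly what spherical billboarding wants:
attribute vec2 a_offset; // corner offset of the quad, e.g. (1,1), (1,-1), (-1,1), (-1,-1)
uniform float spriteScale; // size of the glow quad in world units
vec3 billboard(vec2 v, mat4 view){
    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);
    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
    return right * v.x + up * v.y;
}
void main() {
    // world-space centre of the glow: the translation part of the model matrix,
    // so the car's (and any other parent's) rotation no longer tilts the quad
    vec3 center = modelMatrix[3].xyz;
    vec3 worldPos = center + billboard(a_offset * spriteScale, viewMatrix);
    gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);
}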
