Three.js 2D matrix visualization

I am trying to visualize 2D matrices using Three.js. These matrices are the states of the neurons in a neural network. The matrices are not huge (64 x 32). The values in these matrices will change, and I want the new values to be displayed in the visualization.
For the 2d matrix I want a plane of neurons.
I have tried creating a particle system using a plane geometry with as many vertices as neurons in the data matrix.
var width = 32;
var height = 64;
var planeGeometry = new THREE.PlaneGeometry( width, height, width - 1 , height - 1 );
var particlePlane = new THREE.ParticleSystem( planeGeometry, shaderMaterial );
In the fragment shader each particle is given a base texture (a white circle)
gl_FragColor = texture2D(baseTexture, gl_PointCoord);
And then I use a second texture containing the data matrix values (greyscale pixel values) to modify each base texture.
// Sets particle texture to desired color
// vertexPosition is a vec2 in coordinates local to the plane
gl_FragColor = gl_FragColor * texture2D( dataTexture, vertexPosition );
To calculate vertexPosition in the vertex shader I do the following (irrelevant lines omitted):
uniform float width;
uniform float height;
varying vec2 vertexPosition;
void main()
{
vertexPosition = vec2( position.x / width, position.y / height );
}
This is where I'm getting caught up. The vertexPosition does not seem to be mapping properly to the dataTexture pixels. I want a one to one correspondence between particles and pixels.
How do I properly map from the location of particles/vertexes on a plane to equivalent pixel locations in a texture?
I am new to three js, so please feel free to tell me my approach is totally off.

To get texture coordinates, there is a ready-to-use uv attribute (along with the built-in projection and model-view matrices) in three.js GLSL shaders; here is what I would use as a vertex shader:
varying vec2 vertexPosition;
void main() {
vertexPosition = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Then the normalized xy texture coordinate is available in the fragment shader through the varying vertexPosition.
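If you also need the changing matrix values to show up, a THREE.DataTexture can be updated in place each frame. A minimal sketch, assuming the neuron states arrive as a Float32Array (the single-channel format constant depends on the three.js version: THREE.RedFormat in newer releases, THREE.LuminanceFormat in older ones):
var width = 32, height = 64;
var data = new Float32Array( width * height );
var dataTexture = new THREE.DataTexture( data, width, height, THREE.RedFormat, THREE.FloatType );
dataTexture.magFilter = THREE.NearestFilter; // no interpolation between neighbouring neurons
dataTexture.minFilter = THREE.NearestFilter;
dataTexture.needsUpdate = true;
// whenever the network state changes:
function updateStates( newValues ) {
dataTexture.image.data.set( newValues ); // overwrite the matrix values in place
dataTexture.needsUpdate = true; // re-upload the texture to the GPU
}
Using NearestFilter keeps the particle-to-pixel lookup one-to-one instead of interpolating between neighbouring data values.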

Related

How can I color points in Three JS using OpenGL and Fragment Shaders to depend on the points' distance to the scene origin

To clarify, I am using React, React Three Fiber, and Three.js.
I have 1000 points mapped into the shape of a disc, and I would like to give them texture via a ShaderMaterial, which takes a vertexShader and a fragmentShader. I want the point colors to form a gradient from blue to red: the points farthest from the origin are blue, and the points closest to the origin are red.
This is the vertexShader:
const vertexShader = `
uniform float uTime;
uniform float uRadius;
varying float vDistance;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
vDistance = length(mvPosition.xyz);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = 5.0;
}
`
export default vertexShader
And here is the fragmentShader:
const fragmentShader = `
uniform float uDistance[1000];
varying float vDistance;
void main() {
// Calculate the distance of the fragment from the center of the point
float d = 1.0 - length(gl_PointCoord - vec2(0.5, 0.5));
// Interpolate the alpha value of the fragment based on its distance from the center of the point
float alpha = smoothstep(0.45, 0.55, d);
// Interpolate the color of the point between red and blue based on the distance of the point from the origin
vec3 color = mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance);
// Set the output color of the fragment
gl_FragColor = vec4(color, alpha);
}
`
export default fragmentShader
At first I tried solving the problem by passing an array of normalized distances for every point, but I now realize a point has no way of knowing which array index holds its own distance.
The main thing I am confused about is how gl_FragColor works. In the example linked, the idea is that every point from the vertexShader will have a vDistance and use that value to assign a unique color to itself in the fragmentShader.
So far I have only succeeded in getting all of the points to be the same color; they do not seem to differ based on distance at all.
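One way to get the per-point gradient is to normalize the distance in the vertex shader before writing it to the varying, so the mix() factor actually spans 0 to 1. A minimal sketch (not the original code), assuming the point positions are defined relative to the scene origin and uRadius holds the disc radius:
const vertexShader = `
uniform float uRadius;
varying float vDistance;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
// distance to the origin in model space, scaled to [0, 1] by the disc radius
vDistance = clamp(length(position) / uRadius, 0.0, 1.0);
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = 5.0;
}
`
export default vertexShader
With vDistance in [0, 1], the existing mix(vec3(1.0, 0.0, 0.0), vec3(0.0, 0.0, 1.0), vDistance) in the fragment shader gives red near the origin and blue toward the rim; each point gets its own value because the varying is written per vertex.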

Applying a perspective transformation matrix from GIMP into a GLSL shader

So I'm trying to add a rotation and a perspective effect to an image in the vertex shader. The rotation works just fine, but I'm unable to get the perspective effect working. I'm working in 2D.
The rotation matrix is generated from the code but the perspective matrix is a bunch of hardcoded values I got from GIMP by using the perspective tool.
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.58302f, -0.29001f, 103.0f,
-0.00753f, 0.01827f, 203.0f,
-0.00002f, -0.00115f, 1.0f
});
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
uniform mat3 u_rotation;
uniform mat3 u_perspective;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
vec3 vec = vec3(a_texCoord0 * 500.0, 1.0);
vec = vec * u_perspective;
vec = vec3((vec.xy / vec.z) / 500.0, 0.0);
vec -= vec3(0.5, 0.5, 0.0);
vec = vec * u_rotation;
v_texCoords = vec.xy + vec2(0.5);
gl_Position = u_projTrans * a_position;
}
For the rotation, I'm offsetting the origin so that it rotates around the center instead of the top left corner.
Pretty much everything I know about GIMP's perspective tool comes from http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch10.ps. Reading it suggested I would be able to reproduce what GIMP does, but it turns out I can't. The result shows nothing (no pixels), while removing the perspective part shows the image rotating properly.
As mentioned in the link, I'm dividing by vec.z to convert my homogeneous coordinates back to a 2D point. I'm not using the origin shift for the perspective transformation, since the link mentions that the top-left corner is used as the origin (p. 11):
There is one thing to be careful about - the origin of GIMP
coordinates is at the upper left, with y increasing downwards.
EDIT:
Thanks to #Rabbid76's answer, it's now showing something! However, it's not transforming my texture like the matrix was transforming my image on GIMP.
In GIMP, my transformation matrix was supposed to do something a bit like this:
But instead, it looks something like this:
This is what I think is happening, judging from the actual result:
https://imgur.com/X56rp8K (Image used)
(As pointed out, the texture parameter is clamp-to-edge instead of clamp-to-border, but that's beside the point.)
It looks like it's doing the exact opposite of what I'm looking for. I tried offsetting the origin to the center of the image and to the bottom left before applying the matrix, without success. This is a new result, but it's still the same problem: how do I apply the GIMP perspective matrix in a GLSL shader?
EDIT2:
With more testing, I can confirm that it's doing the "opposite". Using this simple downscale transformation matrix:
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.75f, 0f, 50f,
0f, 0.75f, 50f,
0f, 0f, 1.0f
});
The result is an upscaled version of the image:
If I invert the matrix programmatically, it works for the simple scaling matrix! But for the perspective matrix, it shows this:
https://imgur.com/v3TLe2d
EDIT3:
Thanks to #Rabbid76 again, it turned out that applying the rotation after the perspective matrix effectively rotates the image before the perspective is applied, and I end up with a result like this: https://imgur.com/n1vWq0M
That is almost it! The only problem is that the image is very squished, as if the perspective matrix had been applied multiple times. But if you look carefully, you can see it rotating while in perspective, just like I want. The problem now is how to unsquish it to get the result I had in GIMP. (The root problem is still the same: how to take a GIMP matrix and apply it in a shader.)
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
The matrix
0.58302 -0.29001 103.0
-0.00753 0.01827 203.0
-0.00002 -0.00115 1.0
is a 2D perspective transformation matrix. It operates on 2D homogeneous coordinates.
See 2D affine and perspective transformation matrices
Since the matrix which is displayed in GIMP is the transformation from the perspective to the orthogonal view, the inverse matrix has to be used for the transformation.
The inverse matrix can be calculated by calling inv().
The matrix is set up to map a Cartesian coordinate in the range [0, 500] to a homogeneous coordinate in the range [0, 500].
Your assumption is correct: you have to scale the input from the range [0, 1] to [0, 500] and the output from [0, 500] back to [0, 1].
But you have to scale the 2D Cartesian coordinates, i.e. scale down only after the perspective divide.
Further, you have to do the rotation after the perspective projection and the perspective divide.
It may be necessary (depending on the bitmap and the texture coordinate attributes) to flip the V coordinate of the texture coordinates.
And most importantly, the transformation has to be done per fragment in the fragment shader.
Note: since this transformation is not linear (it is a perspective transformation), it is not sufficient to calculate the texture coordinates only at the corner points and interpolate them in between.
vec2 Project2D( in vec2 uv_coord )
{
const float scale = 500.0;
// flip Y if necessary
//vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
vec2 uv = uv_coord.xy;
// uv_h: 2D homogeneous coordinate in the range [0, 500]
vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
// uv_p: perspective divide and downscale [0, 500] -> [0, 1]
vec3 uv_p = vec3(uv_h.xy / uv_h.z / scale, 1.0);
// rotate around the center
uv_p = vec3(uv_p.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0);
return uv_p.xy;
}
Of course you can do the transformation in the vertex shader too.
But then you have to pass the 2D homogeneous coordinate from the vertex shader to the fragment shader.
This is similar to setting a clip space coordinate on gl_Position.
The difference is that you have a 2D homogeneous coordinate rather than a 3D one, and you have to do the perspective divide manually in the fragment shader:
Vertex shader:
attribute vec2 a_texCoord0;
varying vec3 v_texCoords_h;
uniform mat3 u_perspective;
vec3 Project2D( in vec2 uv_coord )
{
const float scale = 500.0;
// flip Y if necessary
//vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
vec2 uv = uv_coord.xy;
// uv_h: 2D homogeneous coordinate in the range [0, 500]
vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
// downscale, but keep the coordinate homogeneous (the perspective divide happens in the fragment shader)
return vec3(uv_h.xy / scale, uv_h.z);
}
void main()
{
v_texCoords_h = Project2D( a_texCoord0 );
.....
}
Fragment shader:
varying vec3 v_texCoords_h;
uniform mat3 u_rotation;
void main()
{
// perspective divide
vec2 uv = v_texCoords_h.xy / v_texCoords_h.z;
// rotation
uv = (vec3(uv.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0)).xy;
.....
}
See the preview, where I used the following 2D projection matrix, which is the inverse of the one displayed in GIMP:
2.452f, 2.6675f, -388.0f,
0.0f, 7.7721f, -138.0f,
0.00001f, 0.00968f, 1.0f
A further note: in contrast to u_projTrans, u_perspective is initialized in row-major order.
Because of that, the vector has to be multiplied on the left-hand side of u_perspective (vector * matrix):
vec_h = vec3(vec.xy * 500.0, 1.0) * u_perspective;
But the vector has to be multiplied on the right-hand side of u_projTrans (matrix * vector):
gl_Position = u_projTrans * a_position;
See GLSL Programming/Vector and Matrix Operations
and Data Type (GLSL)
Of course this may change if you transpose the matrix when you set it by glUniformMatrix*

Retrieve Vertices Data in THREE.js

I'm creating a mesh with a custom shader. Within the vertex shader I'm modifying the original position of the geometry vertices. Then I need to access to this new vertices position from outside the shader, how can I accomplish this?
In lieu of transform feedback (which WebGL 1.0 does not support), you will have to use a passthrough fragment shader and floating-point texture (this requires loading the extension OES_texture_float). That is the only approach to generate a vertex buffer on the GPU in WebGL. WebGL does not support pixel buffer objects either, so reading the output data back is going to be very inefficient.
Nevertheless, here is how you can accomplish this:
This will be a rough overview focusing on OpenGL rather than anything Three.js specific.
First, encode your vertex array this way (add a 4th component for index):
Vec4 pos_idx : xyz = Vertex Position, w = Vertex Index (0.0 through NumVerts-1.0)
Storing the vertex index as the w component is necessary because OpenGL ES 2.0 (WebGL 1.0) does not support gl_VertexID.
Next, you need a 2D floating-point texture:
MaxTexSize = Query GL_MAX_TEXTURE_SIZE
Width = MaxTexSize;
Height = ceil (NumVerts / MaxTexSize);
Create an RGBA floating-point texture with those dimensions and use it as FBO color attachment 0.
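A rough WebGL 1 sketch of that texture/FBO setup (names like fbo and vertexTexture are placeholders, not part of any three.js API; width and height are the Width and Height computed above):
var gl = renderer.getContext(); // or canvas.getContext('webgl')
gl.getExtension('OES_texture_float'); // needed for FLOAT textures
var vertexTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, vertexTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); // required for non-power-of-two sizes
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, vertexTexture, 0);
// rendering to a float attachment is implementation-dependent in WebGL 1, so check completeness:
// gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE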
Vertex Shader:
#version 100
attribute vec4 pos_idx;
uniform int width; // Width of floating-point texture
uniform int height; // Height of floating-point texture
varying vec4 vtx_out;
void main (void)
{
float idx = pos_idx.w;
// Position this vertex so that it occupies a unique pixel
vec2 xy_idx = vec2 (mod (idx, float (width)) / float (width),
floor (idx / float (width)) / float (height)) * vec2 (2.0) - vec2 (1.0);
gl_Position = vec4 (xy_idx, 0.0, 1.0);
//
// Do all of your per-vertex calculations here, and output to vtx_out.xyz
//
// Store the index in the W component
vtx_out.w = idx;
}
Passthrough Fragment Shader:
#version 100
varying vec4 vtx_out;
void main (void)
{
gl_FragData [0] = vtx_out;
}
Draw and Read Back:
// Draw your entire vertex array for processing (as `GL_POINTS`)
glDrawArrays (GL_POINTS, 0, NumVerts);
// Bind the FBO's color attachment 0 to `GL_TEXTURE_2D`
// Read the texture back and store its results in an array `verts`
glGetTexImage (GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, verts);
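Note that glGetTexImage is not available in WebGL; from JavaScript the readback would instead go through gl.readPixels on the bound framebuffer. A sketch, reusing the hypothetical gl, fbo, width and height from the setup above:
var verts = new Float32Array(width * height * 4);
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
// reading FLOAT pixels back only works where the implementation supports float color buffers
gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, verts);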

glClipPlane - Is there an equivalent in webGL?

I have a 3D mesh. Is there any possibility to render the sectional view (clipping) like glClipPlane in OpenGL?
I am using Three.js r65.
The latest shader that I have added is:
Fragment Shader:
uniform float time;
uniform vec2 resolution;
varying vec2 vUv;
void main( void )
{
vec2 position = -1.0 + 2.0 * vUv;
float red = abs( sin( position.x * position.y + time / 2.0 ) );
float green = abs( cos( position.x * position.y + time / 3.0 ) );
float blue = abs( cos( position.x * position.y + time / 4.0 ) );
if(position.x > 0.2 && position.y > 0.2 )
{
discard;
}
gl_FragColor = vec4( red, green, blue, 1.0 );
}
Vertex Shader:
varying vec2 vUv;
void main()
{
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
Unfortunately, the OpenGL ES specification against which WebGL is specified has no clip planes, and the vertex shader stage lacks the gl_ClipDistance output by which plane clipping is implemented in modern OpenGL.
However you can use the fragment shader to implement per-fragment clipping. In the fragment shader test the position of the incoming fragment against your set of clip planes and if the fragment does not pass the test discard it.
Update
Let's have a look at how clip planes are defined in fixed function pipeline OpenGL:
void ClipPlane( enum p, double eqn[4] );
The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is
an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn
is an array of four double-precision floating-point values. These are the coefficients
of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The
inverse of the current model-view matrix is applied to these coefficients, at the time
they are specified, yielding
p' = (p'1, p'2, p'3, p'4) = (p1, p2, p3, p4) inv(M)
(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly-conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates transpose( (x_e, y_e, z_e, w_e) ) that satisfy
(p'1, p'2, p'3, p'4) · (x_e, y_e, z_e, w_e)^T ≥ 0
lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.
So what you do is add a uniform by which you pass the clip plane parameters p', and add another out/in pair of variables between the vertex and fragment shaders to pass the vertex eye-space position. Then, in the fragment shader, the first thing you do is perform the clip plane equation test, and if it doesn't pass you discard the fragment.
In the vertex shader
in vec3 vertex_position;
out vec4 eyespace_pos;
uniform mat4 modelview;
void main()
{
/* ... */
eyespace_pos = modelview * vec4(vertex_position, 1);
/* ... */
}
In the fragment shader
in vec4 eyespace_pos;
uniform vec4 clipplane;
void main()
{
if( dot( eyespace_pos, clipplane) < 0 ) {
discard;
}
/* ... */
}
In the newer versions (> r.76) of three.js clipping is supported in the THREE.WebGLRenderer. There is an array property called clippingPlanes where you can add your custom clipping planes (THREE.Plane instances).
For three.js you can check these two examples:
1) WebGL clipping (code base here on GitHub)
2) WebGL clipping advanced (code base here on GitHub)
A simple example
To add a clipping plane to the renderer you can do:
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var plane = new THREE.Plane( normal, constant );
renderer.clippingPlanes = [plane];
Here is a fiddle to demonstrate this.
You can also clip on object level by adding a clipping plane to the object material. For this to work you have to set the renderer localClippingEnabled property to true.
// set renderer
renderer.localClippingEnabled = true;
// add clipping plane to material
var normal = new THREE.Vector3( -1, 0, 0 );
var constant = 0;
var color = 0xff0000;
var plane = new THREE.Plane( normal, constant );
var material = new THREE.MeshBasicMaterial({ color: color });
material.clippingPlanes = [plane];
var mesh = new THREE.Mesh( geometry, material );
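Since material.clippingPlanes stores references to the THREE.Plane instances, mutating a plane later moves the section cut. A small sketch that sweeps the plane back and forth, assuming a scene, camera, and renderer exist alongside the mesh above:
scene.add( mesh );
function animate( time ) {
requestAnimationFrame( animate );
plane.constant = 10 * Math.sin( time * 0.001 ); // shift the clip plane along its normal
renderer.render( scene, camera );
}
requestAnimationFrame( animate );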
Note: In r.77 some of the clipping functionality in THREE.WebGLRenderer was moved to a separate THREE.WebGLClipping class; check here for reference in the three.js master branch.

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position, and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly).
Here is my new method: I pass the normal texture, which stores the normal x, y, and z in the RGB channels and the depth in the w channel. In the SSAO shader I need to get the position, so this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far minus near (I was using the normalized depth value, but you need the world-space depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function will get the 3d world position from the Normal texture containing depth in its w component
/// <summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
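For completeness, the uniforms used above would be wired up on the JavaScript side roughly like this (a sketch, not the original code; ssaoMaterial and normalDepthTexture are placeholder names):
var invProj = new THREE.Matrix4();
invProj.getInverse( camera.projectionMatrix ); // in newer three.js: invProj.copy( camera.projectionMatrix ).invert()
ssaoMaterial.uniforms.invProj.value = invProj;
ssaoMaterial.uniforms.linearDepth.value = camera.far - camera.near; // matches the depth capture pass above
ssaoMaterial.uniforms.tNormals.value = normalDepthTexture; // render target holding normals (rgb) + depth (w)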
