Generate a bumpy plane based on a matrix in three.js

Suppose now I have the following data:
[
[0.32, 0.45, 0.54, 0.39, 0.48, 0.44],
[0.43, 0.43, 0.34, 0.43, 0.65, 0.43],
...
]
How can I generate a plane in three.js whose ups and downs follow the data above? Since the data is discrete, there should be a smooth transition between the points. Can three.js do this?
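One way to approach it (a minimal sketch, assuming an existing scene with lighting; the data matrix is a placeholder): build a THREE.PlaneGeometry with one segment between each pair of samples, write each matrix value into the corresponding vertex height, and call computeVertexNormals() so the lighting is smoothly interpolated between the samples.

import * as THREE from 'three';

// Hypothetical height matrix (rows x cols); each value is a bump height.
const data = [
  [0.32, 0.45, 0.54, 0.39, 0.48, 0.44],
  [0.43, 0.43, 0.34, 0.43, 0.65, 0.43],
  // ...
];
const rows = data.length;
const cols = data[0].length;

// One segment per cell, so the vertex grid matches the matrix: cols x rows vertices.
const geometry = new THREE.PlaneGeometry(cols - 1, rows - 1, cols - 1, rows - 1);

// PlaneGeometry lays its vertices out row by row,
// so vertex index = row * cols + col corresponds to data[row][col].
const pos = geometry.attributes.position;
for (let row = 0; row < rows; row++) {
  for (let col = 0; col < cols; col++) {
    pos.setZ(row * cols + col, data[row][col]);
  }
}
pos.needsUpdate = true;
geometry.computeVertexNormals(); // smooth shading across the bumps

const terrain = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x88ccff }));
terrain.rotation.x = -Math.PI / 2; // lay the plane flat; heights end up along +Y
scene.add(terrain);

If you need the surface itself (not just the shading) to be smoother between samples, you could give the plane more segments and interpolate the matrix values (bilinearly, or with a spline) before writing the heights.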

Related

Ugly rendering of GLTF with vertex colors

The geometry
I have a cube represented as a triangular mesh.
Applying a color
Each of the vertices gets a color assigned, to apply a color map onto the cube.
In GLTF format, the mesh is therefore described as follows:
{
  "primitives": [
    {
      "attributes": {
        "POSITION": 0,
        "COLOR_0": 2
      },
      "indices": 1
    }
  ]
}
Problem: Ugly rendering
Unfortunately, this is rendered with ugly artifacts. The colors of the three vertices of a face are not smoothly interpolated across the face; instead there are brighter and darker zones along the edges of the triangles. This happens with various renderers such as Babylon.js, Cesium, Filament, and Three.js.
What's the reason for those artifacts and how can I get a smooth colormap on my cube?
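For reference, a rough three.js equivalent of the primitive above: an indexed BufferGeometry with per-vertex positions and colors. The positions, colors and indices typed arrays are assumed to hold the data from the glTF accessors 0, 2 and 1.

import * as THREE from 'three';

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3)); // POSITION (accessor 0)
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));       // COLOR_0 (accessor 2)
geometry.setIndex(new THREE.BufferAttribute(indices, 1));                   // indices (accessor 1)

// vertexColors makes the material read the 'color' attribute and
// interpolate it across each triangle.
const material = new THREE.MeshBasicMaterial({ vertexColors: true });
const cube = new THREE.Mesh(geometry, material);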

Does THREE.ShaderMaterial create one shader per particle? I'm not sure how it works. If so, how can I share/update data between each particle's shader?

I would like to create a shader that simulates gravity between 2 particles. For this, each particle must know the position of the other particles, update its position accordingly, and therefore "share" its new position with the other particles.
If I understand correctly, when I do:
material = new THREE.ShaderMaterial({
  depthWrite: false,
  blending: THREE.AdditiveBlending,
  vertexColors: true,
  vertexShader: galaxyVortexShader,
  fragmentShader: galaxyFragmentShader,
  uniforms: {
    uTime: { value: 0 },
    uSize: { value: 10 * renderer.getPixelRatio() },
    uPositions: { value: positionsVec3 }
  }
});
do I create a shader for each particle? The problem is that I send the positions of all the particles once in uPositions, but if each particle has its own shader, how can each one update its position in the uPositions array and share it with the other particles?
What you're describing is demonstrated in the official protoplanets demo. It basically:
Calculates all velocities in a shader whose output is a 64x64 texture.
Passes that texture to a second shader that uses it to calculate all positions. This way each particle has access to all velocities.
Then, when rendering the planets on screen, they all have access to both the velocity and position textures, so each vertex can access all the data for its adjacent vertices. Using 64x64 textures gives you data for 4096 unique particles.
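A condensed sketch of that pattern with three.js's GPUComputationRenderer helper (the one the protoplanets demo is built on). velocityShader and positionShader stand for your own GLSL compute fragment shaders; renderer, scene and camera are assumed to exist.

import * as THREE from 'three';
import { GPUComputationRenderer } from 'three/examples/jsm/misc/GPUComputationRenderer.js';

const WIDTH = 64; // 64 x 64 texels = 4096 particles

const gpuCompute = new GPUComputationRenderer(WIDTH, WIDTH, renderer);

// One RGBA texel per particle; fill .image.data with the initial state.
const dtPosition = gpuCompute.createTexture();
const dtVelocity = gpuCompute.createTexture();

const velocityVariable = gpuCompute.addVariable('textureVelocity', velocityShader, dtVelocity);
const positionVariable = gpuCompute.addVariable('texturePosition', positionShader, dtPosition);

// Each compute pass can sample BOTH textures, so every particle "sees" every other one.
gpuCompute.setVariableDependencies(velocityVariable, [positionVariable, velocityVariable]);
gpuCompute.setVariableDependencies(positionVariable, [positionVariable, velocityVariable]);

const error = gpuCompute.init();
if (error !== null) console.error(error);

// The visible particles use ONE ShaderMaterial that reads the results.
const particleUniforms = {
  texturePosition: { value: null },
  textureVelocity: { value: null },
};

function animate() {
  gpuCompute.compute(); // runs both compute shaders on the GPU
  particleUniforms.texturePosition.value = gpuCompute.getCurrentRenderTarget(positionVariable).texture;
  particleUniforms.textureVelocity.value = gpuCompute.getCurrentRenderTarget(velocityVariable).texture;
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}

In other words, there is one shader program shared by all particles; the per-particle state lives in textures rather than in per-particle uniforms.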

ThreeJS world unit to pixel conversion

Is there a way to compute the ratio between world units and pixels in Three.js? I need to determine how many units apart my objects need to be in order to be rendered 1 pixel apart on the screen.
The camera is looking at the (x,y) plane from a (0, 0, 10) coordinate, and objects are drawn in 2D on the (x,y) plane at z=0.
<Canvas gl={{ alpha: true, antialias: true }} camera={{ position: [0, 0, 10] }}>
I cannot seem to figure out what the maths are, or whether there is a function that already does this...
I'm thinking I might have to compare the size of the canvas in pixels and in world units, but I don't know how to get that either. There's also this raycasting solution, but surely there has to be a way to just compute it, no?
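For a perspective camera looking straight at the z = 0 plane, the height of the visible region at that plane is 2 * distance * tan(fov / 2), so the ratio falls out directly. A minimal sketch, assuming the camera's vertical fov in degrees and the canvas height in CSS pixels:

import * as THREE from 'three';

function worldUnitsPerPixel(camera, canvasHeightPx, planeZ = 0) {
  const distance = camera.position.z - planeZ;               // 10 in the setup above
  const fovRad = THREE.MathUtils.degToRad(camera.fov);       // vertical field of view
  const visibleHeight = 2 * distance * Math.tan(fovRad / 2); // world units spanned top to bottom
  return visibleHeight / canvasHeightPx;                     // world units per screen pixel
}

// Objects need to be at least this many world units apart
// to land roughly 1 pixel apart on screen:
const minSeparation = worldUnitsPerPixel(camera, renderer.domElement.clientHeight);

In react-three-fiber the same inputs are available from the useThree() hook (the camera and the canvas size).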

Convert Cubemap coordinates to equivalents in Equirectangular

I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in equirectangular projection, for a 2500x1250 px image. The layout is like this:
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
1. Convert your image + 2D coordinates to a 3D normalized vector
The point (0,0,0) must be the center of your cube map for this to work as intended. So basically you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of texture point (0,0). The direction vectors are just unit vectors where each axis has 3 options {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one combination... which one depends on your conventions, which we do not know as you did not share any specifics.
2. Use a Cartesian to spherical coordinate system transformation
You do not need the radius, just the two angles...
3. Convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution...
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems.
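Putting the three steps together for a single pixel, here is a sketch in JavaScript. The face-orientation table is just one possible convention (+y up, front looking along +z); adjust the signs to match your own cube layout.

// Direction through a face texel, for ONE assumed orientation convention.
const FACE_DIR = {
  front:  (u, v) => [ u, -v,  1],
  back:   (u, v) => [-u, -v, -1],
  left:   (u, v) => [-1, -v,  u],
  right:  (u, v) => [ 1, -v, -u],
  top:    (u, v) => [ u,  1,  v],
  bottom: (u, v) => [ u, -1, -v],
};

function cubeToEquirect(face, x, y, faceSize = 500, outW = 2500, outH = 1250) {
  // pixel -> [-1, +1] coordinates on the face, [0, 0] being the top-left corner
  const u = 2 * (x + 0.5) / faceSize - 1;
  const v = 2 * (y + 0.5) / faceSize - 1;

  // step 1: 3D direction from the cube center through this texel
  const [dx, dy, dz] = FACE_DIR[face](u, v);

  // step 2: Cartesian -> spherical (the radius is not needed, only the two angles)
  const lon = Math.atan2(dx, dz);                      // -Pi .. +Pi
  const lat = Math.asin(dy / Math.hypot(dx, dy, dz));  // -Pi/2 .. +Pi/2

  // step 3: spherical angles -> equirectangular pixel coordinates
  const U = (lon / (2 * Math.PI) + 0.5) * outW;
  const V = (0.5 - lat / Math.PI) * outH;
  return [U, V];
}

// e.g. cubeToEquirect('front', 160, 314)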
btw. just found this (possibly duplicate QA):
GLSL Shader to convert six textures to Equirectangular projection

THREE.BufferGeometry - vertex normals and face normals

In the documentation for THREE.BufferGeometry is written:
normal (itemSize: 3)
Stores the x, y, and z components of the face or vertex normal vector of each vertex in this geometry. Set by .fromGeometry().
When does this attribute hold vertex normals and when face normals?
Is it as simple as: if a THREE.MeshMaterial is used the normals are interpreted as face normals, and when a THREE.LineMaterial is used they are used as vertex normals? Or is it more complicated than that?
I also understood that THREE.FlatShading can be used for rendering a mesh with flat shading (face normals pointing straight outward):
geometry = new THREE.BoxGeometry( 1000, 1000, 1000 );
material = new THREE.MeshPhongMaterial({
  color: 0xff0000,
  shading: THREE.FlatShading // flatShading: true in current three.js releases
});
mesh = new THREE.Mesh( geometry, material );
In that case I would say normals are not necessary any more. Why do my buffer geometries made from, for example, a THREE.BoxGeometry still hold a normal attribute in such a case? Is this information still used for rendering, or would removing it from the buffer geometry be a possible optimization?
BufferGeometry normals are vertex normals; the shader interpolates the normal value for each fragment from the vertices belonging to that face (in most cases a triangle).
When you convert a THREE.BoxGeometry, which has normals computed by default, they stay set up in the BufferGeometry conversion output, as the geometry has no way to "know" whether you need normals or any other attribute (the material's shader program decides which attributes are actually used).
You can remove the normals with geometry.removeAttribute("normal") (renamed to deleteAttribute in newer three.js releases).
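A small sketch of that last point. With flatShading the renderer derives flat normals per face in the fragment shader, so the stored vertex normals are not what drives the shading you see; in that combination removing the attribute should be a safe memory optimization.

import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1000, 1000, 1000);
console.log(geometry.getAttribute('normal')); // BufferAttribute with itemSize 3, set up by default

const material = new THREE.MeshPhongMaterial({
  color: 0xff0000,
  flatShading: true, // replaces 'shading: THREE.FlatShading' from older releases
});

// Drop the stored normals (removeAttribute in the three.js version this question targets).
geometry.deleteAttribute('normal');

// If you switch back to smooth shading later, recompute them:
// geometry.computeVertexNormals();

const mesh = new THREE.Mesh(geometry, material);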
