Three.js: BufferAttribute or Float32Attribute?

This example shows 500 transparent triangles being rendered. The code uses new THREE.Float32Attribute( triangles * 3 * 4, 4 );
Am I correct that the latest and greatest way is to use a THREE.BufferAttribute instead of the Float32Attribute?
Also,
The transparent property is set in the THREE.RawShaderMaterial; however, opacity is not set anywhere.
I would think this is because the color values are set in a loop, with the fourth color value standing for opacity, but all of the triangles appear consistently transparent (without any variation, as far as I can tell).
Am I just not perceiving things correctly here?

Yes. In this RawShaderMaterial example, you could write it like so:
var colors = new Float32Array( triangles * 3 * 4 );
for ( var i = 0, l = triangles * 3 * 4; i < l; i += 4 ) {
    colors[ i ]     = Math.random(); // red
    colors[ i + 1 ] = Math.random(); // green
    colors[ i + 2 ] = Math.random(); // blue
    colors[ i + 3 ] = Math.random(); // alpha
}
geometry.addAttribute( 'color', new THREE.BufferAttribute( colors, 4 ) );
colors[ i + 3 ] contains the alpha value, and is passed to the fragment shader as a varying:
varying vec4 vColor;
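For reference, the relevant parts of that example's shader pair look roughly like this (a minimal sketch in the spirit of the webgl_buffergeometry_rawshader example; with RawShaderMaterial you declare the uniforms and attributes yourself):
// vertex shader (sketch)
precision mediump float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec4 color;
varying vec4 vColor;
void main() {
    vColor = color; // the fourth component carries the per-vertex alpha
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
// fragment shader (sketch)
precision mediump float;
varying vec4 vColor;
void main() {
    gl_FragColor = vColor; // alpha comes straight from the color attribute
}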
three.js r.73

Related

Use 2 meshes + shader materials with each a different fragment shader in 1 scene (three.js)

I have 2 meshes, each with a ShaderMaterial and a different fragment shader. When I add both meshes to my scene, only one shows up. Below you can find my 2 fragment shaders (see both images to see what they look like); they're basically the same.
What I want to achieve: use mesh1 as a mask and put the other one, mesh2 (the purple blob), on top of the mask.
Purple blob:
// three.js code
const geometry1 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material1 = new THREE.ShaderMaterial({
    uniforms: this.uniforms,
    vertexShader,
    fragmentShader,
    defines: {
        PR: window.devicePixelRatio.toFixed(1)
    }
});
const mesh1 = new THREE.Mesh(geometry1, material1);
this.scene.add(mesh1);
// fragment shader
void main() {
    vec2 res = u_res * PR;
    vec2 st = gl_FragCoord.xy / res.xy - 0.5;
    st.y *= u_res.y / u_res.x * 0.8;
    vec2 circlePos = st;
    float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
    float offx = v_uv.x + sin(v_uv.y + u_time * .1);
    float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
    float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
    float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
    vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
    vec4 bg2 = vec4(0., 0., 0., 0.);
    gl_FragColor = mix(bg, bg2, finalMask);
}
Blue mask:
// three.js code
const geometry2 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material2 = new THREE.ShaderMaterial({
    uniforms,
    vertexShader,
    fragmentShader,
    defines: {
        PR: window.devicePixelRatio.toFixed(1)
    }
});
const mesh2 = new THREE.Mesh(geometry2, material2);
this.scene.add(mesh2);
// fragment shader
void main() {
    vec2 res = u_res * PR;
    vec2 st = gl_FragCoord.xy / res.xy - 0.5;
    st.y *= u_res.y / u_res.x * 0.8;
    vec2 circlePos = st;
    float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
    float offx = v_uv.x + sin(v_uv.y + u_time * .1);
    float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
    float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
    float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
    vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
    vec4 bg2 = vec4(0., 0., 0., 0.);
    gl_FragColor = mix(bg, bg2, finalMask);
}
Render Target code
this.rtWidth = window.innerWidth;
this.rtHeight = window.innerHeight;
this.renderTarget = new THREE.WebGLRenderTarget(this.rtWidth, this.rtHeight);
this.rtCamera = new THREE.PerspectiveCamera(
    this.camera.settings.fov,
    this.camera.settings.aspect,
    this.camera.settings.near,
    this.camera.settings.far
);
this.rtCamera.position.set(0, 0, this.camera.settings.perspective);
this.rtScene = new THREE.Scene();
this.rtScene.add(this.purpleBlob);
const geometry = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight, 1);
const material = new THREE.MeshPhongMaterial({
    map: this.renderTarget.texture,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
I'm still new to shaders so please be patient. :-)
There are probably infinite ways to mask in three.js. Here are a few:
Use the stencil buffer
The stencil buffer is similar to the depth buffer in that, for every pixel in the canvas or render target, there is a corresponding stencil pixel. You need to tell three.js you want a stencil buffer, and then, when rendering, you can tell it what to do with the stencil buffer as you draw things.
You set the stencil settings on Material.
You tell three.js
what to do if the pixel you're drawing fails the stencil test
what to do if the pixel you're drawing fails the depth test
what to do if the pixel you're drawing passes the depth test.
The things you can tell it to do for each of those conditions are: keep (do nothing), increment, decrement, increment with wraparound, decrement with wraparound, or set to a specific value.
You can also specify what the stencil test is by setting Material.stencilFunc.
So, for example, you can clear the stencil buffer to 0 (the default?), set the stencil test so it always passes, and set the conditions so that if the depth test passes, the stencil is set to 1. You then draw a bunch of things. Everywhere they are drawn there will now be a 1 in the stencil buffer.
Now you change the stencil test so it only passes if the stencil equals 1 (or 0), and then draw more stuff; things will only be drawn where the stencil equals the value you set.
This example uses the stencil buffer.
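As a rough sketch of that setup in code (assuming a recent three.js where Material exposes these stencil settings; maskMesh and contentMesh are hypothetical meshes you would create yourself):
// draw the mask first: always pass the stencil test, write 1 wherever it draws
const maskMaterial = new THREE.MeshBasicMaterial({ colorWrite: false }); // invisible
maskMaterial.stencilWrite = true;
maskMaterial.stencilRef = 1;
maskMaterial.stencilFunc = THREE.AlwaysStencilFunc;
maskMaterial.stencilZPass = THREE.ReplaceStencilOp;
maskMesh.renderOrder = 1; // render order matters: mask before content
// then draw the content: only pass where the stencil buffer equals 1
const contentMaterial = new THREE.MeshBasicMaterial({ color: 0x8844ff });
contentMaterial.stencilWrite = true;
contentMaterial.stencilRef = 1;
contentMaterial.stencilFunc = THREE.EqualStencilFunc;
contentMesh.renderOrder = 2;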
Mask with an alpha mask
In this case you need 2 color textures and an alpha texture. How you get those is up to you. For example you could load all 3 from images. Or you could generate all 3 using 3 render targets. Finally you pass all 3 to a shader that mixes them as in
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, valueFromAlphaTexture);
This example uses this alpha mixing method
Note that if one of your 2 color textures has an alpha channel, you could use just 2 textures. You'd just pass one of the color textures as your mask.
Or of course you could calculate a mask based on the colors in one image or the other or both. For example
// assume you have a function that converts from RGB to hue, saturation, value
vec3 hsv = rgb2hsv(colorFromTexture1.rgb);
float hue = hsv.x;
// pick one or the other if color1 is close to green
float mixAmount = step(abs(hue - 0.33), 0.05);
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, mixAmount);
The point here is not that exact code, it's that you can make any formula you want for the mask, based on whatever you want, color, position, random math, sine waves based on time, some formula that generates a blob, whatever. The most common is some code that just looks up a mixAmount from a texture which is what the linked example above does.
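For the common texture-lookup case, the three.js side might look something like this (a sketch; the texture variables are placeholders for however you load or render them):
const mixMaterial = new THREE.ShaderMaterial({
    uniforms: {
        tColor1: { value: colorTexture1 }, // placeholder textures
        tColor2: { value: colorTexture2 },
        tMask: { value: maskTexture },
    },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        }
    `,
    fragmentShader: `
        uniform sampler2D tColor1;
        uniform sampler2D tColor2;
        uniform sampler2D tMask;
        varying vec2 vUv;
        void main() {
            vec4 c1 = texture2D( tColor1, vUv );
            vec4 c2 = texture2D( tColor2, vUv );
            float m = texture2D( tMask, vUv ).r; // any channel can serve as the mask
            gl_FragColor = mix( c1, c2, m );
        }
    `,
});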
ShaderToy style
Your code above appears to be a ShaderToy-style shader drawing a fullscreen quad. Instead of drawing 2 separate things, you can just draw them in the same shader:
vec4 computeBlueBlob() {
    ...
    return blueBlobColor;
}
vec4 computeWhiteBlob() {
    ...
    return whiteBlobColor;
}
void main() {
    vec4 color1 = computeBlueBlob();
    vec4 color2 = computeWhiteBlob();
    // note: color2.a could be any formula that decides which colors to draw
    float mixAmount = color2.a;
    gl_FragColor = mix(color1, color2, mixAmount);
}
Note, just like above, that how you compute mixAmount is up to you. Base it off anything: color1.r, color2.r, some formula, some hue, some other blob-generation function, whatever.

Using Three.js to put a pin on a map of the USA

I'm trying to figure out a way to use latitude and longitude to put a pin on a map of the USA.
I'm using a perspective camera btw.
This is my mesh, which basically adds a color map and a displacement map to give it some height:
const mapMaterial = new MeshStandardMaterial({
    map: colorTexture,
    displacementMap: this.app.textures[displacementMap],
    metalness: 0,
    roughness: 1,
    displacementScale: 3,
    color: 0xffffff,
    //wireframe: true
})
const mapTextureWidth = 100
const mapTextureHeight = 100
const planeGeom = new PlaneGeometry(mapTextureWidth, mapTextureHeight, mapTextureWidth - 1, mapTextureHeight - 1)
this.mapLayer = new Mesh(planeGeom, mapMaterial)
this.mapLayer.rotation.x = -1
this.mapLayer.position.set(0, 0, 0); // set the original position
I've also added a camera to give it a slight tilt so we can see the height in the mountains and such.
In the end it looks like this:
What I need to do is add a map pin on the map by using latitude and longitude.
I've played around with converting lat and long to pixels, but that gives me an x and y relative to the screen, and not the map itself (found this in a different SO post):
convertGeoToPixelPosition(
    latitude, longitude,
    mapWidth,  // in pixels
    mapHeight, // in pixels
    mapLonLeft,  // in degrees
    mapLonDelta, // in degrees (mapLonRight - mapLonLeft)
    mapLatBottom,      // in degrees
    mapLatBottomDegree // in radians (mapLatBottom * Math.PI / 180), despite the name
) {
    const x = (longitude - mapLonLeft) * (mapWidth / mapLonDelta);
    latitude = latitude * Math.PI / 180;
    const worldMapWidth = ((mapWidth / mapLonDelta) * 360) / (2 * Math.PI);
    const mapOffsetY = (worldMapWidth / 2 * Math.log((1 + Math.sin(mapLatBottomDegree)) / (1 - Math.sin(mapLatBottomDegree))));
    const y = mapHeight - ((worldMapWidth / 2 * Math.log((1 + Math.sin(latitude)) / (1 - Math.sin(latitude)))) - mapOffsetY);
    return { "x": x, "y": y };
}
Any thoughts on how I can transform the latitude and longitude to world coordinates?
I've already created the sprite for the map pin, and adding them works great, just have to figure out the proper place to put them....
Add your marker as a child of the mapLayer...
this.mapLayer.add( marker )
then set its position:
marker.position.set( ((x / 4096) - 0.5) * 100, ((y / 4096) - 0.5) * 100, 0 )
where x and y are what you get from your convertGeo function.
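Putting it together, usage might look like this (a sketch; the 4096 assumes a 4096x4096 map texture, and the geographic bounds below are placeholders you would replace with your map's real values):
// placeholder bounds for the USA map image (replace with your map's actual values)
const mapLonLeft = -125;   // west edge, degrees
const mapLonDelta = 58;    // mapLonRight - mapLonLeft, degrees
const mapLatBottom = 24;   // south edge, degrees
const mapLatBottomRad = mapLatBottom * Math.PI / 180;
// e.g. Denver, CO
const { x, y } = this.convertGeoToPixelPosition(
    39.7392, -104.9903,
    4096, 4096, // map texture size in pixels
    mapLonLeft, mapLonDelta,
    mapLatBottom, mapLatBottomRad
);
this.mapLayer.add(marker);
marker.position.set(((x / 4096) - 0.5) * 100, ((y / 4096) - 0.5) * 100, 0);
// if the pin lands mirrored vertically, flip the y term: (0.5 - (y / 4096)) * 100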

How to morphTarget of an .obj file (BufferGeometry)

I'm trying to morph the vertices of a loaded .obj file like in this example: https://threejs.org/docs/#api/materials/MeshDepthMaterial - when 'wireframe' and 'morphTargets' are activated in THREE.MeshDepthMaterial.
But I can't reach the desired effect. From the above example the geometry can be morphed via geometry.morphTargets.push( { name: 'target1', vertices: vertices } ); however, it seems that morphTargets is not available for my loaded 3D object, as it is a BufferGeometry.
Instead, I tried to change each vertex position independently via myMesh.child.child.geometry.attributes.position.array[i]; it kind of works (the vertices of my mesh are moving) but not as well as the above example.
Here is a Codepen of what I could do.
How can I reach the desired effect on my loaded .obj file?
Adding morph targets to THREE.BufferGeometry is a bit different from THREE.Geometry. Example:
// after loading the mesh:
var morphAttributes = mesh.geometry.morphAttributes;
morphAttributes.position = [];
mesh.material.morphTargets = true;
var position = mesh.geometry.attributes.position.clone();
for ( var j = 0, jl = position.count; j < jl; j ++ ) {
    position.setXYZ(
        j,
        position.getX( j ) * 2 * Math.random(),
        position.getY( j ) * 2 * Math.random(),
        position.getZ( j ) * 2 * Math.random()
    );
}
morphAttributes.position.push( position ); // I forgot this earlier.
mesh.updateMorphTargets();
mesh.morphTargetInfluences[ 0 ] = 0;
// later, in your render() loop:
mesh.morphTargetInfluences[ 0 ] += 0.001;
three.js r90

Ray tracing to a Point Cloud with a custom vertex shader in Three.js

How can you ray trace to a Point Cloud with a custom vertex shader in three.js?
This is my vertex shader:
void main() {
    vUvP = vec2( position.x / (width*2.0), position.y / (height*2.0)+0.5 );
    colorP = vec2( position.x / (width*2.0)+0.5, position.y / (height*2.0) );
    vec4 pos = vec4(0.0, 0.0, 0.0, 0.0);
    depthVariance = 0.0;
    if ( (vUvP.x < 0.0) || (vUvP.x > 0.5) || (vUvP.y < 0.5) || (vUvP.y > 0.0) ) {
        vec2 smp = decodeDepth(vec2(position.x, position.y));
        float depth = smp.x;
        depthVariance = smp.y;
        float z = -depth;
        pos = vec4(
            ( position.x / width - 0.5 ) * z * (1000.0/focallength) * -1.0,
            ( position.y / height - 0.5 ) * z * (1000.0/focallength),
            (- z + zOffset / 1000.0) * 2.0,
            1.0
        );
        vec2 maskP = vec2( position.x / (width*2.0), position.y / (height*2.0) );
        vec4 maskColor = texture2D( map, maskP );
        maskVal = ( maskColor.r + maskColor.g + maskColor.b ) / 3.0;
    }
    gl_PointSize = pointSize;
    gl_Position = projectionMatrix * modelViewMatrix * pos;
}
In the Points class, ray tracing is implemented as follows:
function testPoint( point, index ) {
    var rayPointDistanceSq = ray.distanceSqToPoint( point );
    if ( rayPointDistanceSq < localThresholdSq ) {
        var intersectPoint = ray.closestPointToPoint( point );
        intersectPoint.applyMatrix4( matrixWorld );
        var distance = raycaster.ray.origin.distanceTo( intersectPoint );
        if ( distance < raycaster.near || distance > raycaster.far ) return;
        intersects.push( {
            distance: distance,
            distanceToRay: Math.sqrt( rayPointDistanceSq ),
            point: intersectPoint.clone(),
            index: index,
            face: null,
            object: object
        } );
    }
}
var vertices = geometry.vertices;
for ( var i = 0, l = vertices.length; i < l; i ++ ) {
    testPoint( vertices[ i ], i );
}
However, since I'm using a vertex shader, the geometry.vertices don't match up to the vertices on the screen which prevents the ray trace from working.
Can we get the points back from the vertex shader?
I didn't dive into what your vertex-shader actually does, and I assume there are good reasons for you to do it in the shader, so it's likely not feasible to redo the calculations in JavaScript when doing the ray-casting.
One approach could be to have some sort of estimate for where the points are, use those for a preselection and do some more involved calculation for the points that are closest to the ray.
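In code, that two-pass idea might look something like this (a sketch; approximatePoint and exactPoint are hypothetical stand-ins for whatever CPU-side estimates of the shader's output you can compute):
// coarse pass: test cheap position estimates against the ray
var candidates = [];
var coarseThresholdSq = localThresholdSq * 4; // generous margin; tune as needed
for ( var i = 0, l = geometry.vertices.length; i < l; i ++ ) {
    var estimate = approximatePoint( geometry.vertices[ i ] ); // hypothetical estimate
    if ( ray.distanceSqToPoint( estimate ) < coarseThresholdSq ) {
        candidates.push( i );
    }
}
// fine pass: run the exact (more expensive) test only on the survivors
for ( var k = 0; k < candidates.length; k ++ ) {
    var index = candidates[ k ];
    testPoint( exactPoint( geometry.vertices[ index ] ), index ); // exactPoint: hypothetical
}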
If that won't work, your best bet would be to render a lookup-map of your scene, where color-values are the id of a point that is rendered at the coordinates (this is also referred to as GPU-picking, examples here, here and even some library here although that doesn't really do what you will need).
To do that, you need to render your scene twice: create a lookup-map in the first pass and render it regularly in the second pass. The lookup-map will store for every pixel which particle was rendered there.
To get that information you need to set up a THREE.WebGLRenderTarget (this might be downscaled to half the width/height for better performance) and a different material. The vertex-shader stays as it is, but the fragment-shader will just output a single, unique color-value for every particle (or anything that you can use to identify them). Then render the scene (or better: only the parts that should be raycast-targets) into the renderTarget:
var size = renderer.getSize();
var renderTarget = new THREE.WebGLRenderTarget(size.width / 2, size.height / 2);
renderer.render(pickingScene, camera, renderTarget);
After rendering, you can obtain the content of this lookup-texture using the renderer.readRenderTargetPixels-method:
var pixelData = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixelData);
(the layout of pixelData here is the same as for a regular canvas imageData.data)
Once you have that, the raycaster will only need to look up a single coordinate, read and interpret the color-value as an object-id, and do something with it.
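For example, picking at a mouse position could look like this (a sketch, assuming the picking fragment-shader encoded each point's index into the RGB channels, and that the render target is at half resolution):
// read the single pixel under the cursor from the half-size lookup-map
var px = Math.floor( mouseX / 2 );
var py = Math.floor( ( size.height - mouseY ) / 2 ); // flip Y: GL's origin is bottom-left
var pixel = new Uint8Array( 4 );
renderer.readRenderTargetPixels( renderTarget, px, py, 1, 1, pixel );
// decode the 24-bit id the picking shader wrote (0 reserved for "no point")
var id = pixel[ 0 ] | ( pixel[ 1 ] << 8 ) | ( pixel[ 2 ] << 16 );
if ( id > 0 ) {
    var pointIndex = id - 1; // if ids were encoded as index + 1
    // ...do something with the picked point
}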

Odd results from shaders used to pre-process spring physics simulation

I'm doing a spring physics simulation using 2D samplers to house and pre-process some position data in a fragment shader, and getting very odd results. If I start with 16 individually located springs (a point at the end of an invisible spring originating from an invisible anchor), the visualization ends up with eight pairs, each pair hanging from the same spring anchor point. However, if I simply run the visualization to place the points using only the tOffsets values, all the information to calculate each of the anchor points is there and displays correctly (though no physics, of course). It's once I add back in the spring physics that I end up with pairs again. Also, from watching the visualization, I can tell that the pairs' anchor points values are none of the original 16 anchor point values. Any idea what's going on here? (See both the fiddle and the starred inline comments below.)
(three.js v 80)
See the fiddle using v79 here.
uniform sampler2D tPositions;
uniform sampler2D tOffsets;
varying vec2 vUv;
void main() {
    float damping = 0.98;
    vec4 nowPos = texture2D( tPositions, vUv ).xyzw;
    vec4 offsets = texture2D( tOffsets, vUv ).xyzw;
    vec2 velocity = vec2(nowPos.z, nowPos.w);
    vec2 anchor = vec2( offsets.x, 130.0 );
    // Newton's law: F = M * A
    float mass = 24.0;
    vec2 acceleration = vec2(0.0, 0.0);
    // 1. apply gravity's force: **this works fine
    vec2 gravity = vec2(0.0, 2.0);
    gravity /= mass;
    acceleration += gravity;
    // 2. apply the spring force ** something goes wrong once I add the spring physics - the springs display in pairs
    float restLength = length(yAnchor - offsets.y);
    float springConstant = 0.2;
    // Vector pointing from anchor to point position
    vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
    // length of the vector
    float distance = length( springForce );
    // stretch is the difference between the current distance and restLength
    float stretch = distance - restLength;
    // Calculate springForce according to Hooke's Law
    springForce = normalize( springForce );
    springForce *= (1.0 * springConstant * stretch);
    springForce /= mass;
    acceleration += springForce; // ** If I comment out this line, all points display where expected, and fall according to gravity. If I add it back in, the springs work properly but display in 8 pairs as opposed to 16 independent locations
    velocity += acceleration;
    velocity *= damping;
    vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
    // Write new position out to texture for the next shader
    gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y); // **the pair problem shows up with this line active
    // sanity checks with comments:
    // gl_FragColor = vec4(newPosition.x, newPosition.y, 0.0, 0.0); // **the pair problem also shows up in this case
    // gl_FragColor = vec4( offsets.x, offsets.y, velocity ); // **all points display in the correct position, though no physics
    // gl_FragColor = vec4(nowPos.x, nowPos.y, 0.0, 0.0); // **all points display in the correct position, though no physics
}
UPDATE 1:
Could the problem be with the number of values (rgba, xyzw) agreeing between all of the pieces of my program? I've specified rgba values wherever I can think to, but perhaps I've missed one somewhere. Here is a snippet from my JavaScript:
if ( ! renderer.context.getExtension( 'OES_texture_float' ) ) {
    alert( 'OES_texture_float is not :(' );
}
var width = 4, height = 4;
particles = width * height;
// Start creation of DataTexture
var positions = new Float32Array( particles * 4 );
var offsets = new Float32Array( particles * 4 );
// hardcoded dummy values for the sake of debugging:
var somePositions = [10.885510444641113, -6.274578094482422, 0, 0, -10.12020206451416, 0.8196354508399963, 0, 0, 35.518341064453125, -5.810637474060059, 0, 0, 3.7696402072906494, -3.118760347366333, 0, 0, 9.090447425842285, -7.851400375366211, 0, 0, -32.53229522705078, -26.4628849029541, 0, 0, 32.3623046875, 22.746187210083008, 0, 0, 7.844726085662842, -15.305091857910156, 0, 0, -32.65345001220703, 22.251712799072266, 0, 0, -25.811357498168945, 32.4153938293457, 0, 0, -28.263731002807617, -31.015430450439453, 0, 0, 2.0903847217559814, 1.7632032632827759, 0, 0, -4.471604347229004, 8.995194435119629, 0, 0, -12.317420959472656, 12.19576358795166, 0, 0, 36.77312469482422, -14.580523490905762, 0, 0, 36.447078704833984, -16.085195541381836, 0, 0];
for ( var i = 0, i4 = 0; i < particles; i ++, i4 += 4 ) {
    positions[ i4 + 0 ] = somePositions[ i4 + 0 ]; // x
    positions[ i4 + 1 ] = somePositions[ i4 + 1 ]; // y
    positions[ i4 + 2 ] = 0.0; // velocity
    positions[ i4 + 3 ] = 0.0; // velocity
    offsets[ i4 + 0 ] = positions[ i4 + 0 ];// - gridPositions[ i4 + 0 ]; // width offset
    offsets[ i4 + 1 ] = positions[ i4 + 1 ];// - gridPositions[ i4 + 1 ]; // height offset
    offsets[ i4 + 2 ] = 0; // not used
    offsets[ i4 + 3 ] = 0; // not used
}
positionsTexture = new THREE.DataTexture( positions, width, height, THREE.RGBAFormat, THREE.FloatType );
positionsTexture.minFilter = THREE.NearestFilter;
positionsTexture.magFilter = THREE.NearestFilter;
positionsTexture.needsUpdate = true;
offsetsTexture = new THREE.DataTexture( offsets, width, height, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.minFilter = THREE.NearestFilter;
offsetsTexture.magFilter = THREE.NearestFilter;
offsetsTexture.needsUpdate = true;
rtTexturePos = new THREE.WebGLRenderTarget( width, height, {
    wrapS: THREE.RepeatWrapping,
    wrapT: THREE.RepeatWrapping,
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType,
    stencilBuffer: false
} );
rtTexturePos2 = rtTexturePos.clone();
simulationShader = new THREE.ShaderMaterial( {
    uniforms: {
        tPositions: { type: "t", value: positionsTexture },
        tOffsets: { type: "t", value: offsetsTexture },
    },
    vertexShader: document.getElementById( 'texture_vertex_simulation_shader' ).textContent,
    fragmentShader: document.getElementById( 'texture_fragment_simulation_shader' ).textContent
} );
fboParticles = new THREE.FBOUtils( width, renderer, simulationShader );
fboParticles.renderToTexture( rtTexturePos, rtTexturePos2 );
fboParticles.in = rtTexturePos;
fboParticles.out = rtTexturePos2;
UPDATE 2:
Perhaps the problem has to do with how the texels are being read from these textures? Somehow it may be reading between two texels, and so coming up with an averaged position shared by two springs? Is this possible? If so, where would I look to fix it?
I never discovered the problem with the fiddle in my question above; however, I did eventually find the newer version of the THREE.FBOUtils script I was using - it is now called THREE.GPUComputationRenderer. After implementing it, my script finally worked!
For those who find themselves trying to solve a similar problem, here is the new and improved fiddle using the GPUComputationRenderer in place of the old FBOUtils.
Here, from the script documentation, is a basic use case of GPUComputationRenderer:
//Initialization...
// Create computation renderer
var gpuCompute = new GPUComputationRenderer( 1024, 1024, renderer );
// Create initial state float textures
var pos0 = gpuCompute.createTexture();
var vel0 = gpuCompute.createTexture();
// and fill in here the texture data...
// Add texture variables
var velVar = gpuCompute.addVariable( "textureVelocity", fragmentShaderVel, pos0 );
var posVar = gpuCompute.addVariable( "texturePosition", fragmentShaderPos, vel0 );
// Add variable dependencies
gpuCompute.setVariableDependencies( velVar, [ velVar, posVar ] );
gpuCompute.setVariableDependencies( posVar, [ velVar, posVar ] );
// Add custom uniforms
velVar.material.uniforms.time = { value: 0.0 };
// Check for completeness
var error = gpuCompute.init();
if ( error !== null ) {
    console.error( error );
}
// In each frame...
// Compute!
gpuCompute.compute();
// Update texture uniforms in your visualization materials with the gpu renderer output
myMaterial.uniforms.myTexture.value = gpuCompute.getCurrentRenderTarget( posVar ).texture;
// Do your rendering
renderer.render( myScene, myCamera );
