three.js - Adjusting opacity of individual particles

I am trying to vary the opacity of particles as a function of their distance from a plane.
This issue describes my problem, and the answer a year ago was essentially "you can't": opacity is a property of the material, not of an individual element, so per-particle opacity is not possible.
Has anything changed? Is there any way I could achieve this? Since per-particle colouring is possible, I imagine this isn't out of reach.
Cheers

EDIT - This answer shows how to set per-point opacity using a custom ShaderMaterial. See https://stackoverflow.com/a/67892506/1461008 for an approach using PointsMaterial.
ParticleSystem has been renamed to PointCloud and then to Points.
Yes, you can create a Point Cloud and vary the alpha value of each particle's color dynamically.
In three.js, you can do this by setting the Point Cloud's material to be a ShaderMaterial having an attribute equal to the desired alpha value for each particle.
If ShaderMaterials, vertex shaders and fragment shaders are new to you, here is a really simple Fiddle that implements a Point Cloud with dynamic alphas: https://jsfiddle.net/9Lvrnpwc/.
EDIT: Updated fiddle
three.js r.148
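For reference, here is a minimal sketch of that approach with the current API (the buffer contents and sizes are placeholders, not the fiddle's exact code, and 'scene' is assumed to be your THREE.Scene): a per-point alpha attribute is read in the vertex shader, passed to the fragment shader as a varying, and used as the fragment's alpha.

import * as THREE from 'three';

// One alpha value per point, stored alongside the positions.
const count = 1000;
const positions = new Float32Array(count * 3);
const alphas = new Float32Array(count);
for (let i = 0; i < count; i++) {
  positions[i * 3 + 0] = Math.random() * 10 - 5;
  positions[i * 3 + 1] = Math.random() * 10 - 5;
  positions[i * 3 + 2] = Math.random() * 10 - 5;
  alphas[i] = Math.random(); // e.g. compute from the distance to your plane instead
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('alpha', new THREE.BufferAttribute(alphas, 1));

const material = new THREE.ShaderMaterial({
  uniforms: { color: { value: new THREE.Color(0xffffff) } },
  vertexShader: `
    attribute float alpha;
    varying float vAlpha;
    void main() {
      vAlpha = alpha;
      gl_PointSize = 4.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 color;
    varying float vAlpha;
    void main() {
      gl_FragColor = vec4(color, vAlpha);
    }
  `,
  transparent: true
});

const points = new THREE.Points(geometry, material);
scene.add(points);

Updating the alphas each frame is then just a matter of writing into the attribute array and setting geometry.attributes.alpha.needsUpdate = true.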

Not sure why, but the proposed solution didn't work for me. I used somewhat tricky shading to make the points round and blurry at the edges, so the corners of each point were supposed to be transparent, but they rendered black: http://jsfiddle.net/5kz64ero/1/
Relevant part of my fragment shader:
// Distance from 0.0 to 0.5 from the center of the point
float d = distance(gl_PointCoord, vec2(0.5, 0.5));
// Applying sigmoid to smoothen the edge
float opacity = 1.0 / (1.0 + exp(16.0 * (d - 0.25)));
gl_FragColor = vec4(opacity * vColor, opacity);
I figured that traditionally this is solved by depth-sorting (with the farthest points drawn first), and I found some evidence that older implementations of ParticleSystem in three.js had a sortParticles attribute. But it's not there anymore, and in my case sorting would mean re-sorting every time the camera position changes. Instead I set depthWrite: false on the material, and that seems to solve the issue.
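A minimal sketch of that fix (assuming a ShaderMaterial built as in the earlier sketch):

material.transparent = true;   // blend using the alpha computed in the fragment shader
material.depthWrite = false;   // still depth-test against the scene, but don't write depth,
                               // so transparent corners don't occlude points behind them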
The result: http://jsfiddle.net/5kz64ero/6/

Related

Mirror FBX animation in Three.js

Is there any way to mirror FBX animation in Three.js?
I already have boxer animations, and now I need to make a mirrored clone of each so the boxer can take both left- and right-handed stances.
I know that I can do it in Blender or another program, but I want to keep my app size down.
Oh, I forgot to post an answer :) The answer to my question is to apply a scale matrix.
For example, if you want to mirror the model along the X axis, you apply a scale matrix with coefficients (-1, 1, 1). You also need to multiply the position along the X axis and the rotation around the Y axis by negative one, so the mirrored model ends up on the correct side with the correct orientation.
Here's some THREE.js code:
// Matrix4 comes from 'three' (import { Matrix4 } from 'three').
// Negate the X position and the Y rotation first, then mirror the model
// across the X axis with a (-1, 1, 1) scale.
this.model.position.x *= -1.0;
this.model.rotation.y *= -1.0;
this.model.applyMatrix4(new Matrix4().makeScale(-1.0, 1.0, 1.0));

How can I get per-vertex light emission and per-vertex lighting in Three.js?

I want to see a chart with a color specified per vertex, and to get a little bit of shading too.
But if I use MeshBasicMaterial I only get the vertex colors, with no dynamic shading.
On the other hand, if I use MeshPhongMaterial I get shading, but without the emissiveness from my vertex colors.
Since THREE.MeshPhongMaterial supports vertexColors, giving you a nice combination of dynamic lighting and vertex colors, I'm not quite sure I understand your question. Perhaps that is something you should investigate further?
However, as an alternative to writing a custom shader you could try rendering your model in multiple passes.
This will not give you as much control over the way the vertex colors and phong lighting are combined as a shader would, but often a simple add/multiply blend can give pretty decent results.
Algorithm (a sketch in code follows the list):
- create two meshes for the same BufferGeometry, one with the MeshBasicMaterial and one with the MeshPhongMaterial
- for the MeshPhongMaterial, set
  depthFunc = THREE.EqualDepth
  transparent = true
  blending = THREE.AdditiveBlending (or THREE.MultiplyBlending)
- render the first mesh
- render the second mesh at the exact same spot
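Here is a minimal sketch of the two-pass idea above (the triangle data is just a placeholder, 'scene' is assumed to be your THREE.Scene, and the renderOrder handling is my assumption, not part of the original answer):

import * as THREE from 'three';

// One shared geometry with per-vertex colors.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  -1, 0, 0,   1, 0, 0,   0, 1, 0
]), 3));
geometry.setAttribute('color', new THREE.BufferAttribute(new Float32Array([
  1, 0, 0,   0, 1, 0,   0, 0, 1
]), 3));
geometry.computeVertexNormals();

// Pass 1: unlit vertex colors.
const basePass = new THREE.Mesh(
  geometry,
  new THREE.MeshBasicMaterial({ vertexColors: true })
);

// Pass 2: Phong lighting blended on top of the first pass.
const lightPass = new THREE.Mesh(
  geometry,
  new THREE.MeshPhongMaterial({
    depthFunc: THREE.EqualDepth,      // only shade fragments already written by pass 1
    transparent: true,                // required so the blending mode is applied
    blending: THREE.AdditiveBlending  // or THREE.MultiplyBlending
  })
);
lightPass.renderOrder = 1;            // make sure the Phong pass is drawn second

// Both meshes sit at the exact same spot (default transforms).
scene.add(basePass);
scene.add(lightPass);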

apply morphTargets to ParticleSystem or PointCloud

I am using Three.js version 65.
I am displaying a set of points at time t=0 in 3D space using ParticleSystem, and I also have the next set of points at time t=1. Now I want to animate between them, like the JSONLoader morphTarget animation. Could anybody suggest the best way to achieve this?
(or)
Should I use WebGL shader programming for this instead? Please suggest.
Thanks in advance.
Yes, you can do that with shaders. You'd e.g. create a custom shader for your particle system with the attributes vec3 position and vec3 nextPosition and a uniform float scale which goes from 0 to 1.
Then you add some logic to the shader that calculates the new position, e.g. vec3 pos = mix(position, nextPosition, scale), i.e. position * (1.0 - scale) + nextPosition * scale (along with the usual billboard / GL_Point code, of course). When scale reaches 1 you swap position with nextPosition, fill nextPosition with the following state, and reset scale to 0.
Good luck, have fun :)
PS: The code mentioned is just linear interpolation. In your case you might consider other interpolations; you could even add another two attribute vectors holding the preceding and following points, so that the new position can be calculated along a Bézier curve.
Lastly you'll have to give a thought to performance sooner or later. If you have 10k particles and 1k "states" you might run into performance issues.
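A minimal sketch of that idea using the current Points API rather than the r65 ParticleSystem (attribute and uniform names are illustrative, and the two-point data is just a placeholder):

import * as THREE from 'three';

const positionsT0 = new Float32Array([0, 0, 0,   1, 0, 0]); // the points at t = 0
const positionsT1 = new Float32Array([0, 1, 0,   1, 1, 0]); // the same points at t = 1

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positionsT0, 3));
geometry.setAttribute('nextPosition', new THREE.BufferAttribute(positionsT1, 3));

const material = new THREE.ShaderMaterial({
  uniforms: { scale: { value: 0.0 } }, // animate from 0 to 1, then swap buffers
  vertexShader: `
    attribute vec3 nextPosition;
    uniform float scale;
    void main() {
      vec3 pos = mix(position, nextPosition, scale); // linear interpolation
      gl_PointSize = 4.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `
});

const points = new THREE.Points(geometry, material);
// Per frame: material.uniforms.scale.value += delta; once it reaches 1, copy
// nextPosition into position, load the next state into nextPosition, reset scale to 0.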

3D sprites, writing correct depth buffer information

I am writing a particle engine for iOS using Monotouch and openTK. My approach is to project the coordinate of each particle, and then write a correctly scaled textured rectangle at this screen location.
It works fine, but I have trouble calculating a correct depth value so that the sprite correctly overdraws, and is overdrawn by, the 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at the screen coordinates, using the depth value calculated above as the Z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading straight from that, you'd transform as:
z_clip = -((far + near) / (far - near)) * z_eye - ((2 * far * near) / (far - near))
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
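A minimal sketch of that computation, in JavaScript for illustration (the function name is mine; near = 1 and far = 10000 as in the question, and zEye is the view-space Z, which is negative in front of the camera):

// Returns depth in normalized device coordinates: -1 at the near plane, +1 at the far plane.
function ndcDepth(zEye, near, far) {
  const zClip = -((far + near) / (far - near)) * zEye - (2 * far * near) / (far - near);
  const wClip = -zEye;
  return zClip / wClip;
}

ndcDepth(-1, 1, 10000);     // -1.0 (near plane)
ndcDepth(-10000, 1, 10000); //  1.0 (far plane)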
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.
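A minimal GLSL ES sketch of that point-sprite setup (all names are illustrative):

// --- vertex shader ---
attribute vec3 a_position;
uniform mat4 u_modelView;
uniform mat4 u_projection;
uniform float u_pointSizePx;   // sprite size in pixels

void main() {
    gl_Position = u_projection * u_modelView * vec4(a_position, 1.0);
    gl_PointSize = u_pointSizePx;
}

// --- fragment shader ---
precision mediump float;
uniform sampler2D u_sprite;

void main() {
    // gl_PointCoord runs from (0, 0) to (1, 1) across the point.
    gl_FragColor = texture2D(u_sprite, gl_PointCoord);
}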

How do I use a vertex shader to multiply my vertex data by a uniform?

This is a question that came from an earlier problem I had. Basically, I was trying to implement orthographic scaling in my shader by modifying the scale components of my projection matrix, but that wasn't possible. What I actually had to do was scale the verts before "sending them in" to my shader via a draw. That works like a charm...
But, of course, the issue is that in software now I'm responsible for scaling all my verts before handing them off to the shader. This makes me wonder if it would be possible to have a vertex shader do this. I imagine it is, but I can't figure it out.
What I'm doing is just going through all 4 of my verts (held in float vertices[8]) and doing *= scale;. To be slightly more accurate, I multiply the X and Y components separately by scaleX and scaleY.
How can I do this same thing in a vertex shader?
Replace gl_Vertex with (gl_Vertex * scale) everywhere in your vertex shader. Or, if you're using a user-defined input for your coordinate, apply * scale to that.
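A minimal sketch with a user-defined attribute (names are illustrative; the scale is passed as a per-axis uniform):

attribute vec2 a_position;     // the quad's verts, unscaled
uniform vec2 u_scale;          // (scaleX, scaleY), set once per draw
uniform mat4 u_projection;     // your orthographic projection matrix

void main() {
    gl_Position = u_projection * vec4(a_position * u_scale, 0.0, 1.0);
}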
