I am trying to transition between 3+ 3D models with some nice Perlin noise based on user input, just like this:
http://experience.mausoleodiaugusto.it/en/
http://na.leagueoflegends.com/en/featured/skins/project-2016
I can easily transition between two models in my vertex shader, passing down a u_morphFactor uniform variable, which I tween between 0 and 1 (0 = first model, 1 = second model). My question is how I should do it with 3 or more models.
Here is how I handle my geometry:
class CustomGeometry extends THREE.BufferGeometry {
  // pass down the two models' reference geometries
  constructor (geo1, geo2) {
    super()
    let { count } = geo1.attributes.position // number of vertices
    let targetArray = new Float32Array(count * 3) // xyz for each vertex
    for (let i = 0; i < count * 3; i += 3) {
      // assign the next model's vertex position to the current one.
      // if there is no corresponding vertex, simply move to vec3(0, 0, 0)
      targetArray[i + 0] = geo2.attributes.position.array[i + 0] || 0
      targetArray[i + 1] = geo2.attributes.position.array[i + 1] || 0
      targetArray[i + 2] = geo2.attributes.position.array[i + 2] || 0
    }
    // assign position AND a_targetPosition as attributes, so we can transition between them
    this.addAttribute('a_targetPosition', new THREE.BufferAttribute(targetArray, 3))
    this.addAttribute('position', geo1.attributes.position)
  }
}
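For context, a sketch of how this geometry might be wired up on the JavaScript side (geoA, geoB and the shader source strings are assumptions, not part of the original code):

let geometry = new CustomGeometry(geoA, geoB) // two loaded reference geometries
let material = new THREE.ShaderMaterial({
  uniforms,       // the uniforms object shown below
  vertexShader,   // GLSL string containing the mix() line shown below
  fragmentShader
})
scene.add(new THREE.Mesh(geometry, material))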
Now with the two models' vertices uploaded to the GPU, I can pass down the uniform and make my transition:
let uniforms = {
u_time: { value: 0 },
u_morphFactor: { value: 0 } // show first model by default
}
And the GLSL is:
vec3 new_position = mix(position, a_targetPosition, u_morphFactor);
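For context, that line sits inside the vertex shader's main() roughly like this (a sketch of the surrounding boilerplate; position, projectionMatrix and modelViewMatrix are three.js built-ins, the rest comes from the snippets above):

uniform float u_morphFactor;
attribute vec3 a_targetPosition;

void main () {
  vec3 new_position = mix(position, a_targetPosition, u_morphFactor);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(new_position, 1.0);
}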
However, I still can't wrap my head around how I should approach this same technique with 3 or more models. I guess I have to mess with the shader math that handles u_morphFactor...
TL;DR: I know how to map vertices from one 3D model to the next, simply going from 0 to 1 in my shaders. How do I do this with 3 or more models?
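One possible direction (a sketch, not part of the original question): upload each extra model as another attribute and chain the mixes, letting a single u_morphFactor run from 0 to N - 1 so each unit interval blends one pair of models. Assuming a third model uploaded as a_targetPosition2 the same way a_targetPosition is:

vec3 pos01 = mix(position, a_targetPosition, clamp(u_morphFactor, 0.0, 1.0));
vec3 pos12 = mix(pos01, a_targetPosition2, clamp(u_morphFactor - 1.0, 0.0, 1.0));
vec3 new_position = pos12; // u_morphFactor = 0, 1, 2 shows model 1, 2, 3

Tweening u_morphFactor from 1 to 2 then morphs from the second model to the third without touching the first.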
Related
I have a project that loads various models (.obj, but it could be anything) and generates particles from the geometry positions using Float32Arrays.
Since the geometries of each model are completely different, the number of particles changes depending on which model is used.
The code I'm using to populate the buffer attribute is below:
import { Mesh, Float32BufferAttribute } from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

const dataSize = 1024;

const modelLoader = new OBJLoader();
const modelObject = await modelLoader.loadAsync('/path/to/model.obj');
const positionData = new Float32Array(dataSize * dataSize * 3);

const modelChildren = modelObject.children as Mesh[];
const bufferPositions = modelChildren
  .filter(({ isMesh }) => isMesh)
  .map(({ geometry: { attributes } }) => attributes.position.array as Float32Array);
const combinedBuffer = concatFloat32Arrays(bufferPositions); // custom helper that merges the Float32Arrays

for (let index = 0, length = positionData.length; index < length; index += 3) {
  positionData[index] = combinedBuffer[index];
  positionData[index + 1] = combinedBuffer[index + 1];
  positionData[index + 2] = combinedBuffer[index + 2];
}

return new Float32BufferAttribute(positionData, 3);
A portion of the positionData array is empty (e.g. 0), obviously because combinedBuffer[index] is undefined.
Can anyone point me in the right direction?
I basically want an equal number of particles for each geometry, regardless of the model's geometry complexity.
You normally handle this use case by allocating a large enough buffer and then using BufferGeometry.setDrawRange() to decide which part of the data you want to draw. The values of vertices outside of the draw range don't matter with this approach.
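A minimal sketch of that approach (the buffer size and names here are assumptions, not from the post; the THREE namespace is used for brevity):

// allocate one fixed-size buffer, large enough for the biggest model
const maxVertices = dataSize * dataSize;
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(maxVertices * 3), 3));

// when switching models, copy its positions in and only draw that many vertices
function useModel(modelPositions) { // modelPositions: a Float32Array of xyz triplets
  geometry.attributes.position.array.set(modelPositions);
  geometry.attributes.position.needsUpdate = true;
  geometry.setDrawRange(0, modelPositions.length / 3); // vertices outside this range are not drawn
}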
I want to morph the vertices from one 3D model to the next with three.js, using different durations and delays for individual vertices.
First of all, I load both of my models, take the model with the bigger vertex count and copy its vertex positions. Then I take the second model and assign its vertex positions as a targetPosition attribute:
class CustomGeometry extends THREE.BufferGeometry {
  constructor (refGeo1, refGeo2) {
    super()
    let count = refGeo1.attributes.position.count // number of vertices
    let targets = new Float32Array(count * 3)
    for (let i = 0; i < count * 3; i += 3) {
      // if there is no corresponding vertex, simply assign 0
      targets[i + 0] = refGeo2.attributes.position.array[i + 0] || 0
      targets[i + 1] = refGeo2.attributes.position.array[i + 1] || 0
      targets[i + 2] = refGeo2.attributes.position.array[i + 2] || 0
    }
    this.addAttribute('position', refGeo1.attributes.position)
    this.addAttribute('targetPosition', new THREE.BufferAttribute(targets, 3))
  }
}
Then I can pass a uniform variable mixFactor and do this in my vertex shader:
vec3 newPosition = mix(position, targetPosition, mixFactor);
And here is a live example
This works fine, except the vertices move uniformly through space (with the same timing and duration) when the mixFactor variable is changed. How does one add delays and different timings to the mix?
I understand I can use attribute float mixFactor instead of a uniform and then tween all of them on the CPU, but this will kill my performance... I suspect I have to add transitionTiming and transitionDelay float attributes to my vertices, but I am lost on how to do the math in my shaders...
You add yet another attribute for each vertex, maybe call it mixAmountOffset. Use that in your mixAmount calculation in the shader.
attribute float mixAmountOffset;
uniform float mixAmount;
....
float realMixAmount = clamp(mixAmount + mixAmountOffset, 0., 1.);
Then use realMixAmount in your mix:
vec3 newPosition = mix(position, targetPosition, realMixAmount);
You only have to update mixAmount in JavaScript.
To try to be clearer: you need realMixAmount to go from 0 to 1 for every vertex. So let's say you set mixAmountOffset to negative values from 0 to -1 per vertex:
for (let i = 0; i < vertexCount; ++i) {
  mixAmountOffset[i] = -i / vertexCount;
}
Now the math for mixAmount + mixAmountOffset needs to take every vertex from 0 to 1. The range of mixAmountOffset is 0 to -1, which means when mixAmount is 0, mixAmount + mixAmountOffset will go from 0 to -1 across all vertices. Because of the clamp, that will be 0 to 0 across all vertices.
When mixAmount is 1, mixAmount + mixAmountOffset will go from 1 to 0, meaning only the first vertex is using the second model's positions; the last vertex is still using the first model's position.
You need to lerp mixAmount from 0 to 2. That way, when mixAmount is 2, mixAmount + mixAmountOffset will go from 2 to 1 across all vertices, and the clamp will make that 1 to 1 across all vertices. In other words, all vertices will be using model2.
What mixAmountOffsets you use and how you do the math to compute realMixAmount is really up to you. The point is to arrange the math so that before things start all vertices get a realMixAmount of 0, and after things end all vertices get a realMixAmount of 1. There are an infinite number of ways to do that.
Looking at your code, you were passing in the vertex index for mixAmountOffset, so for example you could pass yet another uniform vertexCount and compute scaledMixAmountOffset = mixAmountOffset / vertexCount.
Or, you could put positive values from 0 to 1 in the mixAmountOffset buffer and lerp mixAmount from -1 to 1.
Or ... etc ...
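For example, a sketch of the JavaScript side under that scheme (geometry and material are assumed to be the BufferGeometry and ShaderMaterial from the question; the tween could be done with any library):

// one offset per vertex, spread over 0..-1 as described above
const vertexCount = geometry.attributes.position.count;
const offsets = new Float32Array(vertexCount);
for (let i = 0; i < vertexCount; ++i) {
  offsets[i] = -i / vertexCount;
}
geometry.addAttribute('mixAmountOffset', new THREE.BufferAttribute(offsets, 1));

// animate the single uniform from 0 to 2 so every vertex passes through 0..1
const start = performance.now();
function tick(now) {
  const t = Math.min((now - start) / 2000, 1); // 2 second transition
  material.uniforms.mixAmount.value = t * 2;
  if (t < 1) requestAnimationFrame(tick);
}
requestAnimationFrame(tick);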
Other things you might want to do:
Of course you can use some kind of smoothing function on mixAmount when you lerp, but in this case, since every vertex is effectively lerping separately, you probably want to put the smoothing function inside the shader.
Example:
...
// exponential ease-in-out: maps 0..1 to 0..1 with a slow start and end
float smooth(float pos) {
  pos *= 2.;
  if (pos < 1.) {
    return 0.5 * pow(2., 10. * (pos - 1.));
  }
  return 0.5 * (-pow(2., -10. * --pos) + 2.);
}

void main () {
  float realMixFactor = clamp(mixFactor + mixFactorOffset, 0.0, 1.0);
  realMixFactor = smooth(realMixFactor);
  vec3 newPosition = mix(position, targetPosition, realMixFactor);
...
see: https://codepen.io/greggman/pen/aymqdV
I use this code to draw triangles in Cesium:
var mypositions = Cesium.Cartesian3.fromDegreesArrayHeights(triangles);
// unroll 'mypositions' into a flat array here
var numPositions = mypositions.length;
var pos = new Float64Array(numPositions * 3);
var normals = new Float32Array(numPositions * 3);
for (var i = 0; i < numPositions; ++i) {
pos[i * 3] = mypositions[i].x;
pos[i * 3 + 1] = mypositions[i].y;
pos[i * 3 + 2] = mypositions[i].z;
normals[i * 3] = 0.0;
normals[i * 3 + 1] = 0;
normals[i * 3 + 2] = 1.0;
}
console.log(normals)
var geometry = new Cesium.Geometry({
vertexFormat : Cesium.VertexFormat.ALL,
attributes: {
position: new Cesium.GeometryAttribute({
componentDatatype: Cesium.ComponentDatatype.DOUBLE, // not FLOAT
componentsPerAttribute: 3,
values: pos
}),
normal: new Cesium.GeometryAttribute({
componentDatatype: Cesium.ComponentDatatype.FLOAT,
componentsPerAttribute: 3,
values: normals
})
},
// Don't need the following line if no vertices are shared.
// indices: new Uint32Array([0, 1, 2, 3, 4, 5]),
primitiveType: Cesium.PrimitiveType.TRIANGLES,
boundingSphere: Cesium.BoundingSphere.fromVertices(pos)
});
var myInstance = new Cesium.GeometryInstance({
geometry: geometry,
attributes: {
color: new Cesium.ColorGeometryInstanceAttribute(0.0039215697906911,
0.7333329916000366,
0,
1)
},
show: new Cesium.ShowGeometryInstanceAttribute(true)
});
var TIN = viewer.scene.primitives.add(new Cesium.Primitive({
geometryInstances: [myInstance],
asynchronous: false,
appearance: new Cesium.PerInstanceColorAppearance({
closed: true,
translucent: false,
flat: false
//,
//vertexShaderSource: "",
//fragmentShaderSource: ""
})
}));
This is what I get:
I would like to enable shading, so the result should be similar to the figure below:
I tried to write vertex and fragment GLSL shaders but without success. I am not familiar with GLSL and I was getting a compile error. Is there another way to create this kind of shading?
Thanks!
Regardless of the fact that you haven't posted your GLSL shaders or gotten them to work, your problem (once you eventually figure out the GLSL stuff) is that you're setting all your normals to point in the +Z direction instead of actually being normal to each triangle, as your second screenshot shows.
var normals = new Float32Array(numPositions * 3);
for (var i = 0; i < numPositions; ++i) {
pos[i * 3] = mypositions[i].x;
pos[i * 3 + 1] = mypositions[i].y;
pos[i * 3 + 2] = mypositions[i].z;
normals[i * 3] = 0.0;
normals[i * 3 + 1] = 0;
normals[i * 3 + 2] = 1.0;
}
What you need to do instead is set the positions and then, for the normals, operate on sets of 3 vertices (a triangle) instead of individual vertices. This way you can actually calculate a surface normal. This is one of many explanations of how to do that:
https://math.stackexchange.com/questions/305642/how-to-find-surface-normal-of-a-triangle
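A sketch of that calculation in plain JavaScript, reusing the pos, normals and numPositions arrays from the question (every 3 consecutive positions form one triangle):

for (var i = 0; i < numPositions; i += 3) {
  // the triangle's three corners A, B, C
  var ax = pos[i * 3], ay = pos[i * 3 + 1], az = pos[i * 3 + 2];
  var bx = pos[(i + 1) * 3], by = pos[(i + 1) * 3 + 1], bz = pos[(i + 1) * 3 + 2];
  var cx = pos[(i + 2) * 3], cy = pos[(i + 2) * 3 + 1], cz = pos[(i + 2) * 3 + 2];
  // edge vectors AB and AC
  var abx = bx - ax, aby = by - ay, abz = bz - az;
  var acx = cx - ax, acy = cy - ay, acz = cz - az;
  // cross product AB x AC is perpendicular to the triangle
  var nx = aby * acz - abz * acy;
  var ny = abz * acx - abx * acz;
  var nz = abx * acy - aby * acx;
  var len = Math.sqrt(nx * nx + ny * ny + nz * nz) || 1;
  // the same unit normal for all three vertices of this face
  for (var j = 0; j < 3; ++j) {
    normals[(i + j) * 3] = nx / len;
    normals[(i + j) * 3 + 1] = ny / len;
    normals[(i + j) * 3 + 2] = nz / len;
  }
}

Depending on the triangle winding you may need to flip the sign of the normal.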
I'm not familiar with Cesium, but I'd imagine there'd be some "default" shader that did basic lighting. If not, then the basis for simple lighting like this is called Lambertian reflectance. Your vertex shader would output a color that is calculated as dot(N, L) where N is the normal of your vertex and L is the vector from that vertex to your light source (or just the negative of the direction of your light if it's a directional/environment/sun/etc light). The fragment shader would simply pass that color back out.
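As a generic GLSL sketch of that Lambertian term (the attribute and uniform names here are illustrative, not Cesium's built-ins):

// vertex shader
attribute vec3 a_position;
attribute vec3 a_normal;
uniform mat4 u_modelViewProjection;
uniform mat3 u_normalMatrix;
uniform vec3 u_lightDirection; // normalized, pointing from the surface toward the light
varying float v_diffuse;
void main() {
  vec3 N = normalize(u_normalMatrix * a_normal);
  v_diffuse = max(dot(N, u_lightDirection), 0.0); // Lambertian term
  gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
}

// fragment shader
precision mediump float;
uniform vec3 u_color;
varying float v_diffuse;
void main() {
  gl_FragColor = vec4(u_color * v_diffuse, 1.0);
}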
I am developing a plugin to load multiple objects into a model and perform multiple transformations.
I have been working on this for almost a week; I have gone through SketchUcation, the SketchUp Ruby documentation, topics on transformation matrices, etc., and I cannot figure out what I am doing wrong.
I am 100% sure that scaling and translating work, but I don't know about rotations; basically I want to rotate a given object around an axis.
I would really appreciate any help from professional Ruby plugin makers!
My target is to achieve a component like this:
but my current result is:
My code is here; I know it is not the shortest:
#1) Scale
scale_transformation = Geom::Transformation.scaling(@transform.scale_vector[0],
                                                    @transform.scale_vector[1],
                                                    @transform.scale_vector[2])

#2) Rotate
vector = Geom::Vector3d.new(1, 0, 0) # x axis
rotate_transformation_x = Geom::Transformation.rotation(@transform.transformation.origin, vector, @transform.rotation_vector[0].degrees) # degrees -> radians
vector = Geom::Vector3d.new(0, 1, 0) # y axis
rotate_transformation_y = Geom::Transformation.rotation(@transform.transformation.origin, vector, @transform.rotation_vector[1].degrees) # degrees -> radians
vector = Geom::Vector3d.new(0, 0, 1) # z axis
rotate_transformation_z = Geom::Transformation.rotation(@transform.transformation.origin, vector, @transform.rotation_vector[2].degrees) # degrees -> radians

# if rotate is [0, 0, 0] then the rotate transformation is the identity matrix:
if @transform.rotation_vector[0] == 0
  rotate_transformation_x = Geom::Transformation.new()
end
if @transform.rotation_vector[1] == 0
  rotate_transformation_y = Geom::Transformation.new()
end
if @transform.rotation_vector[2] == 0
  rotate_transformation_z = Geom::Transformation.new()
end
rotate_transformation = rotate_transformation_x * rotate_transformation_y * rotate_transformation_z

#3) Translate
translate_transformation = Geom::Transformation.translation(Geom::Vector3d.new(
  meters_to_inches(@transform.translation_vector[0]),
  meters_to_inches(@transform.translation_vector[1]),
  meters_to_inches(@transform.translation_vector[2])
))

# if translate is [0, 0, 0] then the translate transformation is the identity matrix:
if @transform.translation_vector[0] == 0 && @transform.translation_vector[1] == 0 && @transform.translation_vector[2] == 0
  translate_transformation = Geom::Transformation.new()
end

# using the overloaded '*' operator for matrices
puts @path + "\r\nFROM PARENT : " + @transform.to_s + ", matrix = " + @transform.transformation.to_a.each_slice(4).inject { |str, row| "#{str}\r\n#{row}" }
# @transform.transformation is the identity matrix if there was no previous transformation.
@transform.transformation = (translate_transformation * rotate_transformation * scale_transformation) * @transform.transformation
I think my mistake may be connected with the order of multiplying the matrices.
Thank you for any kind of help!
The rotation and translation definitions are as follows (for example):
I need to orient one node to point its Z-axis at another node in 3D. Yeah, the perfect job for the LookAtConstraint. And for most of my work LookAt is fine. But when I apply LookAt to a particular node, I can no longer animate that node's translation with SCNAction. Picture a hydrogen atom leaving a molecule as it ionizes. The orientation is needed to properly rotate the bond (a cylinder) between the hydrogen and an oxygen atom on the molecule.
I can orient the bond FROM the oxygen TO the hydrogen and animate. But this disorients most of the other bonds which were getting by just fine with LookAt's.
I gave this a mighty try before realizing it answers a somewhat different question:
Calculate rotations to look at a 3D point?
I had a similar issue with a project. What I eventually realized was that I needed to use multiple constraints: one for translation (movement) and the other the look-at constraint.
I would move the object and then apply the look-at constraint; in this case, it was a camera following an object being moved using actions. Code snippet follows:
let targetNodeConstraint = SCNLookAtConstraint(target: someObject)
targetNodeConstraint.gimbalLockEnabled = true
let followObjectConstraint = SCNTransformConstraint(inWorldSpace: true, withBlock: { (node, matrix) -> SCNMatrix4 in
let transformMatrix = SCNMatrix4MakeTranslation(
self.someObject.position.x - 1.0,
self.someObject.position.y, self.someObject.position.z + 1.0)
return transformMatrix
})
// Position the object behind the other object & rotate it to
roadCamera.constraints = [followObjectConstraint, targetNodeConstraint]
The important thing to note is the order in which the constraints are added to the object in the array. In the code above, I am ignoring the current matrix before applying a transform matrix (I should rewrite this code someday).
The complete source code of this "experiment" is on GitHub as I try things out.
https://github.com/ManjitBedi/CubeTrip
Hopefully, this is helpful.
My solution is below. It deals with the situation where a node continuously translates in space and should always point toward a position.
@discardableResult
func yaw(_ node: SCNNode, toPosition position: SCNVector3) -> Float {
    var eulerAngle = SCNVector3Zero
    let transform = node.transform
    // the node's local Z axis (forward) and the direction toward the target, both flattened onto the XZ plane
    var forward = GLKVector3Make(transform.m31, transform.m32, transform.m33)
    var toward = GLKVector3Make(position.x - node.position.x, position.y - node.position.y, position.z - node.position.z)
    forward = GLKVector3Normalize(GLKVector3Make(forward.x, 0, forward.z))
    toward = GLKVector3Normalize(GLKVector3Make(toward.x, 0, toward.z))
    var dotProduct = GLKVector3DotProduct(forward, toward)
    dotProduct = (dotProduct > 1) ? 1 : ((dotProduct < -1) ? -1 : dotProduct)
    var yaw = acos(dotProduct)
    if yaw < 0 {
        assert(false)
    }
    // toward is clockwise of forward
    let isCW = GLKVector3CrossProduct(forward, toward).y < 0
    if isCW {
        yaw = -yaw
    }
    eulerAngle.y = yaw
    node.eulerAngles = SCNVector3Make(eulerAngle.x + node.eulerAngles.x,
                                      eulerAngle.y + node.eulerAngles.y,
                                      eulerAngle.z + node.eulerAngles.z)
    return yaw
}

@discardableResult
func pitch(_ node: SCNNode, toPosition position: SCNVector3) -> Float {
    var eulerAngle = SCNVector3Zero
    let transform = node.transform
    var toward = GLKVector3Make(position.x - node.position.x, position.y - node.position.y, position.z - node.position.z)
    var forward = GLKVector3Make(transform.m31, transform.m32, transform.m33)
    forward = GLKVector3Normalize(forward)
    toward = GLKVector3Normalize(toward)
    var dotProduct = GLKVector3DotProduct(forward, toward)
    dotProduct = (dotProduct > 1) ? 1 : ((dotProduct < -1) ? -1 : dotProduct)
    var pitch = acos(dotProduct)
    // toward is clockwise of forward if the model's right vector and crossProduct.x point the same way
    let crossProduct = GLKVector3CrossProduct(forward, toward)
    let isCW = (crossProduct.x <= 0) != (transform.m11 <= 0)
    if isCW {
        pitch = -pitch
    }
    eulerAngle.x = pitch
    node.eulerAngles = SCNVector3Make(eulerAngle.x + node.eulerAngles.x,
                                      eulerAngle.y + node.eulerAngles.y,
                                      eulerAngle.z + node.eulerAngles.z)
    return pitch
}

func orient(_ node: SCNNode, toPosition position: SCNVector3) {
    self.yaw(node, toPosition: position)
    self.pitch(node, toPosition: position)
}