I've created a basic model in Blender. It's a cube subdivided 4 times (I need the faces to look like squares); then the faces were split by edges (in Blender too). Then I need to separate the final mesh by loose parts in three.js (if I do that in Blender, the exported file is too big, a few MB). So each face becomes a separate mesh.
How should I do that?
Step 1 (Blender)
Step 2 (Blender)
After step 2, each face is a separate mesh. I need to replicate step 2 in three.js.
As a result, I need to explode the faces of a sphere.
Here's what I have so far
I'll need many more faces to achieve the desired result. One possible solution would be to place two spheres one inside the other and then "explode" them simultaneously. But I need the faces to be much smaller too.
My "explosion" code is heavily based on this: https://github.com/akella/ExplodingObjects/blob/0ed8d2668e3fe9913133382bb139c73b9d554494/src/egg.js#L178
And here's a demo:
https://tympanus.net/Development/ExplodingObjects/index-heart.html
In your case I would use BufferGeometry.
According to this showcase: https://threejs.org/examples/#webgl_buffergeometry, 16000 triangles are generated with proper normal orientations, so BufferGeometry easily handles the face counts you are after.
Building on top of your CodePen, here you'll find a solution that gives quad faces (instead of your triangles) oriented along the surface of a sphere.
The core loop that lays the quads along the surface of the sphere:
for (let down = 0; down < segmentsDown; ++down) {
  const v0 = down / segmentsDown;
  const v1 = (down + 1) / segmentsDown;
  const lat0 = (v0 - 0.5) * Math.PI;
  const lat1 = (v1 - 0.5) * Math.PI;

  for (let across = 0; across < segmentsAround; ++across) {
    // for each quad we randomize the radius
    const radius = radiusOfSphere + Math.random() * 1.5 * radiusOfSphere;
    const u0 = across / segmentsAround;
    const u1 = (across + 1) / segmentsAround;
    const long0 = u0 * Math.PI * 2;
    const long1 = u1 * Math.PI * 2;

    // each quad is made of 2 triangles
    // first triangle of the quad
    // getPoint() returns the xyz coords as an array for a given (latitude, longitude, radius)
    positions.push(...getPoint(lat0, long0, radius));
    positions.push(...getPoint(lat1, long0, radius));
    positions.push(...getPoint(lat0, long1, radius));

    // second triangle of the quad; vertex order matters for UV mapping
    positions.push(...getPoint(lat1, long0, radius));
    positions.push(...getPoint(lat1, long1, radius));
    positions.push(...getPoint(lat0, long1, radius));
  }
}
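getPoint() is not defined in the snippet above; a minimal version (a sketch of the usual spherical-to-Cartesian conversion, matching the latitude/longitude ranges used in the loops) could be:

function getPoint(lat, long, radius) {
  // lat in [-PI/2, PI/2], long in [0, 2*PI]
  const y = Math.sin(lat) * radius;
  const r = Math.cos(lat) * radius; // radius of the latitude circle
  return [Math.cos(long) * r, y, Math.sin(long) * r];
}

The positions array then feeds a single BufferGeometry:

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3)); // addAttribute in older releases
geometry.computeVertexNormals();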
https://codepen.io/mquantin/pen/mdqmwMa
I hope this will do the job for you.
I display a "curved tube" and color its vertices based on their distance to the plane the curve lies on.
It works mostly fine; however, when I reduce the resolution of the tube, artifacts start to appear in the tube colors.
Those artifacts seem to depend on the camera position: if I move the camera around, sometimes the artifacts disappear. I'm not sure that makes sense.
Live demo: http://jsfiddle.net/gz1wu369/15/
I do not know if there is actually a problem in the interpolation or if it is just a "screen" artifact.
Afterwards I render the scene to a texture, looking at it from the "top". It then looks like a "deformation" field that I use in another shader, hence the need for continuous color.
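For context, that second pass is just an orthographic camera looking straight down at the tube, rendered into a render target whose texture the other shader samples. A minimal sketch of what I mean (names here are placeholders, not the actual fiddle code):

const topCamera = new THREE.OrthographicCamera(-size, size, size, -size, 0.1, 100);
topCamera.position.set(0, 10, 0);
topCamera.lookAt(new THREE.Vector3(0, 0, 0));

const fieldTarget = new THREE.WebGLRenderTarget(512, 512);
renderer.render(scene, topCamera, fieldTarget); // newer three.js: call renderer.setRenderTarget(fieldTarget) first
// fieldTarget.texture is then used as the "deformation" field in the other shader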
I do not know if this is expected behavior or if there is a problem in my code when setting the vertex colors.
Would using the three.js extrusion tools instead of the tube geometry solve my issue?
const tubeGeo = new THREE.TubeBufferGeometry(closedSpline, steps, radius, curveSegments, false);
const count = tubeGeo.attributes.position.count;
tubeGeo.addAttribute('color', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
const colors = tubeGeo.attributes.color;
const color = new THREE.Color();

for (let i = 0; i < count; i++) {
  const pp = new THREE.Vector3(
    tubeGeo.attributes.position.array[3 * i],
    tubeGeo.attributes.position.array[3 * i + 1],
    tubeGeo.attributes.position.array[3 * i + 2]);

  // distance of the vertex to the curve's plane, normalized by the tube radius
  const distance = plane.distanceToPoint(pp);
  const normalizedDist = Math.abs(distance) / radius;

  // hue varies along the tube; the green channel encodes the distance to the plane
  const t2 = Math.floor(i / (curveSegments + 1));
  color.setHSL(0.5 * t2 / steps, .8, .5);
  const green = 1 - Math.cos(Math.asin(normalizedDist));
  colors.setXYZ(i, color.r, green, 0);
}
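For reference, the vertex colors only show up because the material has vertex colors enabled, along these lines (the constant form matches the three.js version in the fiddle; newer releases take a boolean instead):

const material = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
scene.add(new THREE.Mesh(tubeGeo, material));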
Low-resolution tubes with the "Normals" material show a different artifact:
A high-resolution tube hides the artifacts:
I need to compute 3D coordinates from a screen-space position using a rendered depth-map. Unfortunately, regular raycasting is not an option for me because I am dealing with a single geometry containing something on the order of 5M faces.
So I figured I will do the following:
1. Render a depth-map with RGBADepthPacking into a render target.
2. Use a regular unproject call to compute a ray from the mouse position, exactly as I would when raycasting (sketched below).
3. Look up the depth from the depth-map at the mouse coordinates and compute a point along the ray at that distance.
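For step 2, the ray setup is the usual unproject pattern (a sketch, assuming a perspective camera, with w/h being the viewport size):

const ndcX = mouseX / w * 2 - 1;
const ndcY = -mouseY / h * 2 + 1;
const rayOrigin = camera.position.clone();
const rayDirection = new THREE.Vector3(ndcX, ndcY, 0.5)
  .unproject(camera)
  .sub(rayOrigin)
  .normalize();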
This kind of works, but somehow the located point is always slightly behind the object, so there is probably something wrong with my depth-calculations.
Now for some details about the steps above.
Rendering the depth-map is pretty much straightforward:
const depthTarget = new THREE.WebGLRenderTarget(w, h);
const depthMaterial = new THREE.MeshDepthMaterial({
  depthPacking: THREE.RGBADepthPacking
});

// in the render loop
renderer.setClearColor(0xffffff, 1);
renderer.clear();
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, depthTarget);
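(Side note: the three-argument render(scene, camera, depthTarget) call matches the three.js version I'm using; newer releases set the target separately:)

renderer.setRenderTarget(depthTarget);
renderer.render(scene, camera);
renderer.setRenderTarget(null); // back to the default framebuffer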
I look up the stored color-value at the mouse-position with:
renderer.readRenderTargetPixels(
  depthTarget, x, h - y, 1, 1, rgbaBuffer
);
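Here rgbaBuffer is a 4-byte buffer allocated once up front (an assumption about my setup, not shown above):

const rgbaBuffer = new Uint8Array(4); // one RGBA pixel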
And convert it back to a float using (adapted from the GLSL version in packing.glsl):
const v4 = new THREE.Vector4();
const unpackDownscale = 255 / 256;
const unpackFactors = new THREE.Vector4(
  unpackDownscale / (256 * 256 * 256),
  unpackDownscale / (256 * 256),
  unpackDownscale / 256,
  unpackDownscale
);

function unpackRGBAToDepth(rgbaBuffer) {
  return v4.fromArray(rgbaBuffer)
    .multiplyScalar(1 / 255)
    .dot(unpackFactors);
}
and finally computing the depth-value (I found corresponding code in readDepth() in examples/js/shaders/SSAOShader.js, which I ported to JS):
function computeDepth() {
  const cameraFarPlusNear = cameraFar + cameraNear;
  const cameraFarMinusNear = cameraFar - cameraNear;
  const cameraCoef = 2.0 * cameraNear;

  let z = unpackRGBAToDepth(rgbaBuffer);
  return cameraCoef / (cameraFarPlusNear - z * cameraFarMinusNear);
}
Now, as this function returns values in the range 0..1, I think it is the depth in clip-space coordinates, so I convert it into "real" units using:
const depth = camera.near + depth * (camera.far - camera.near);
There is obviously something slightly off with these calculations; I haven't yet figured out the math and the details of how the depth is stored.
Can someone please point me to the mistake I made?
Addition: other things I tried
First I thought it should be possible to just use the unpacked depth-value as the value for z in my unproject call, like this:
const x = mouseX / w * 2 - 1;
const y = -mouseY / h * 2 + 1;
const v = new THREE.Vector3(x, y, depth).unproject(camera);
However, this also doesn't get the coordinates right.
[EDIT 1 2017-05-23 11:00CEST]
As per @WestLangley's comment, I found the perspectiveDepthToViewZ() function, which sounds like it should help. Written in JS, that function is:
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}
However, when called with unpacked values from the depth-map, the results are several orders of magnitude off. See here.
OK, finally solved it. For everyone having trouble with similar issues, here's the solution:
The last line of the computeDepth function was just wrong. There is a function perspectiveDepthToViewZ in packing.glsl that is pretty easy to convert to JS:
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}
(I believe this is somehow part of the inverse projection matrix.)
function computeDepth() {
  let z = unpackRGBAToDepth(rgbaBuffer);
  return perspectiveDepthToViewZ(z, camera.near, camera.far);
}
Now this will return the view-space z value for the point. What's left is converting that back to world-space coordinates:
const setPositionFromViewZ = (function() {
  const viewSpaceCoord = new THREE.Vector3();
  const projInv = new THREE.Matrix4();

  return function(position, viewZ) {
    projInv.getInverse(camera.projectionMatrix);
    position
      .set(
        mousePosition.x / windowWidth * 2 - 1,
        -(mousePosition.y / windowHeight) * 2 + 1,
        0.5
      )
      .applyMatrix4(projInv);

    position.multiplyScalar(viewZ / position.z);
    position.applyMatrix4(camera.matrixWorld);
  };
})();
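Putting the pieces together, a single lookup then goes roughly like this (a sketch reusing the helpers above; result is just a scratch vector):

const result = new THREE.Vector3();

function pickPointAtMouse() {
  // read the packed depth at the mouse position, as shown in the question
  renderer.readRenderTargetPixels(depthTarget, mousePosition.x, h - mousePosition.y, 1, 1, rgbaBuffer);
  const viewZ = computeDepth();        // negative z in view-space
  setPositionFromViewZ(result, viewZ); // world-space position
  return result;
}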
I'm using jsc3d to load and display some 3D objects on a canvas. The viewer already has a built-in feature that allows rotating the "view coordinates" (correct me if I'm wrong) about the Y axis by dragging the mouse.
The rotation is performed through a classic rotation matrix, and finally the transformation matrix is multiplied by this rotation matrix.
The rotation about the Y axis is calculated in a way that resembles a circular movement around the whole scene of loaded objects:
JSC3D.Matrix3x4.prototype.rotateAboutYAxis = function(angle) {
  if (angle != 0) {
    angle *= Math.PI / 180;
    var c = Math.cos(angle);
    var s = Math.sin(angle);

    var m00 = c * this.m00 + s * this.m20;
    var m01 = c * this.m01 + s * this.m21;
    var m02 = c * this.m02 + s * this.m22;
    var m03 = c * this.m03 + s * this.m23;
    var m20 = c * this.m20 - s * this.m00;
    var m21 = c * this.m21 - s * this.m01;
    var m22 = c * this.m22 - s * this.m02;
    var m23 = c * this.m23 - s * this.m03;

    this.m00 = m00; this.m01 = m01; this.m02 = m02; this.m03 = m03;
    this.m20 = m20; this.m21 = m21; this.m22 = m22; this.m23 = m23;
  }
};
Now, dragging the mouse applies this rotation about the Y axis to the whole world, like on the left side of the picture below. Is there a way to apply a rotation about the up vector instead, so that it keeps its initial position, as on the right side?
I tried something like this:
var rotY = (x - viewer.mouseX) * 360 / viewer.canvas.height;
var rotMat = new JSC3D.Matrix3x4; // identity
rotMat.rotateAboutYAxis(rotY);
viewer.rotMatrix.multiply(rotMat);
but it has no effect.
What operations should I apply to my rotation matrix to achieve a rotation about the up vector?
Sample: https://jsfiddle.net/4xzjnnar/1/
This 3D library already has built-in functions to rotate the scene about the X, Y, and Z axes, so there is no need to implement new matrix operations for that: we can use the existing functions rotateAboutXAxis, rotateAboutYAxis and rotateAboutZAxis, which apply an in-place matrix multiplication for the desired rotation angle in degrees.
The scene in JSC3D is transformed by a 3x4 matrix, where the rotation is stored in the first 3 values of each row.
After applying a scene rotation and/or translation, applying a subsequent rotation about the up vector is a problem of calculating a rotation about an arbitrary axis.
A very clean and didactic explanation of how to solve this problem is given here: http://ami.ektf.hu/uploads/papers/finalpdf/AMI_40_from175to186.pdf
1. Translate the axis point P0(x0, y0, z0) to the origin of the coordinate system.
2. Perform appropriate rotations to make the axis of rotation coincident with the z axis.
3. Rotate about the z axis by the angle θ.
4. Perform the inverse of the combined rotation transformation.
5. Perform the inverse of the translation.
Now it's easy to write a function for that, because we can use the functions already available in JSC3D (the translation part is omitted here):
JSC3D.Viewer.prototype.rotateAboutUpVector = function(angle) {
  angle %= 360;
  /* undo the current pitch (X) and roll (Z), rotate about Y, then reapply them */
  var degX = this.rpy[0], degZ = this.rpy[2];
  this.rotMatrix.rotateAboutXAxis(-degX);
  this.rotMatrix.rotateAboutZAxis(-degZ);
  this.rotMatrix.rotateAboutYAxis(angle);
  this.rotMatrix.rotateAboutZAxis(degZ);
  this.rotMatrix.rotateAboutXAxis(degX);
}
Because all the above-mentioned functions work in degrees, we need to get the actual Euler angles back from the rotation matrix (simplified):
JSC3D.Viewer.prototype.calcRollPitchYaw = function() {
  var m = this.rotMatrix;
  var radians = 180 / Math.PI; // conversion factor from radians to degrees
  var angleX = Math.atan2(-m.m12, m.m22) * radians;
  var angleY = Math.asin(m.m01) * radians;
  var angleZ = Math.atan2(-m.m01, m.m00) * radians;
  this.rpy[0] = angleX;
  this.rpy[1] = angleY;
  this.rpy[2] = angleZ;
}
The tricky part is that we always need the current rotation angles as they result from the rotations applied so far, so a separate function must store the current Euler angles every time a rotation is applied to the scene.
For that, we can use a very simple structure:
JSC3D.Viewer.prototype.rpy = [0, 0, 0];
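Each rotation handler then just has to refresh these angles, along these lines (a sketch; the exact hook depends on how the mouse handler applies its rotations):

viewer.rotMatrix.rotateAboutYAxis(rotY); // whatever rotation the handler applies
viewer.calcRollPitchYaw();               // keep rpy in sync for the next rotateAboutUpVector call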
This will be the final result:
I can't find an explanation anywhere of how to use the frames option of ExtrudeGeometry in Three.js. Its documentation says:
extrudePath — THREE.CurvePath. 3d spline path to extrude shape along. (creates Frames if frames aren't defined)
frames — THREE.TubeGeometry.FrenetFrames. containing arrays of tangents, normals, binormals
but I don't understand how frames must be defined. I think the "frames" option means passing three arrays for tangents, normals and binormals (calculated in some way), but how do I pass them in frames?... Probably (like here for morphNormals):
frames = { tangents: [ new THREE.Vector3(), ... ], normals: [ new THREE.Vector3(), ... ], binormals: [ new THREE.Vector3(), ... ] };
with the three arrays of the same length (perhaps corresponding to the steps or curveSegments option of ExtrudeGeometry)?
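If my guess is right, the usage would be something like this (just a guess at the intended usage; shape and mypath are placeholders):

var mypath = new THREE.SplineCurve3(controlPoints);
var frames = new THREE.TubeGeometry.FrenetFrames(mypath, steps, false); // path, segments, closed
var geometry = new THREE.ExtrudeGeometry(shape, {
    steps: steps,
    extrudePath: mypath,
    frames: frames // the tangents/normals/binormals arrays from above
});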
Many thanks for an explanation.
Edit 1:
String.prototype.format = function() {
  var str = this;
  for (var i = 0; i < arguments.length; i++) {
    str = str.replace('{' + i + '}', arguments[i]);
  }
  return str;
}
var numSegments = 6;
var frames = new THREE.TubeGeometry.FrenetFrames( new THREE.SplineCurve3(spline), numSegments );
var tangents = frames.tangents,
    normals = frames.normals,
    binormals = frames.binormals;
var tangents_list = [],
    normals_list = [],
    binormals_list = [];
for (i = 0; i < numSegments; i++) {
  var tangent = tangents[i];
  var normal = normals[i];
  var binormal = binormals[i];
  tangents_list.push("({0}, {1}, {2})".format(tangent.x, tangent.y, tangent.z));
  normals_list.push("({0}, {1}, {2})".format(normal.x, normal.y, normal.z));
  binormals_list.push("({0}, {1}, {2})".format(binormal.x, binormal.y, binormal.z));
}
alert(tangents_list);
alert(normals_list);
alert(binormals_list);
Edit 2
Some time ago I opened this topic, for which I used this solution:
var spline = new THREE.SplineCurve3([
  new THREE.Vector3(20.343, 19.827, 90.612), // t=0
  new THREE.Vector3(22.768, 22.735, 90.716), // t=1/12
  new THREE.Vector3(26.472, 23.183, 91.087), // t=2/12
  new THREE.Vector3(27.770, 26.724, 91.458), // t=3/12
  new THREE.Vector3(31.224, 26.976, 89.861), // t=4/12
  new THREE.Vector3(32.317, 30.565, 89.396), // t=5/12
  new THREE.Vector3(31.066, 33.784, 90.949), // t=6/12
  new THREE.Vector3(30.787, 36.310, 88.136), // t=7/12
  new THREE.Vector3(29.354, 39.154, 90.152), // t=8/12
  new THREE.Vector3(28.414, 40.213, 93.636), // t=9/12
  new THREE.Vector3(26.569, 43.190, 95.082), // t=10/12
  new THREE.Vector3(24.237, 44.399, 97.808), // t=11/12
  new THREE.Vector3(21.332, 42.137, 96.826)  // t=12/12=1
]);

var spline_1 = [], spline_2 = [], t;

for (t = 0; t <= (7/12); t += 0.0001) {
  spline_1.push(spline.getPoint(t));
}

for (t = (7/12); t <= 1; t += 0.0001) {
  spline_2.push(spline.getPoint(t));
}
But I was considering the possibility of setting the tangent, normal and binormal of the first point (t=0) of spline_2 to be the same as those of the last point (t=1) of spline_1; so I wondered whether the frames option could be useful for this purpose. Would it be possible to overwrite the values of a tangent, normal and binormal in the respective lists, so that the last point (t=1) of spline_1 and the first point (t=0) of spline_2 get the same values, thereby guiding the extrusion? For example, for the tangent at t=0 of spline_2:
tangents[0].x = 0.301;
tangents[0].y = 0.543;
tangents[0].z = 0.138;
doing the same also for normals[0] and binormals[0], to ensure the same orientation at the last point (t=1) of spline_1 and the first one (t=0) of spline_2.
Edit 3
I'm trying to visualize the tangent, normal and binormal of each control point of "mypath" (spline) using ArrowHelper, but, as you can see in the demo (on scene load you need to zoom out slowly until the ArrowHelpers become visible; the relevant code is from line 122 to line 152 in the fiddle), the ArrowHelpers do not start at the origin, but away from it. How can I obtain the same result as in this reference demo (when you check the "Debug normals" checkbox)?
Edit 4
I plotted two splines that respectively end (blue spline) and start (red spline) at point A (= origin), displaying the tangent, normal and binormal vectors at point A for each spline (cyan labels for the blue spline, yellow labels for the red spline).
As mentioned above, to align the two splines and make them continuous, I thought to exploit the three vectors (tangent, normal and binormal). Which mathematical operation, in theory, should I use to turn the end face of the blue spline so that it faces the initial face (yellow face) of the red spline, with the respective tangents (D, D', the latter hidden in the picture), normals (B, B') and binormals (C, C') aligned? Should I use the .setFromUnitVectors(vFrom, vTo) method of Quaternion? Its documentation reads: "Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo ... vFrom and vTo are assumed to be normalized." So, probably, I need to define three quaternions:
quaternion for the rotation of the normalized tangent D vector in the direction of the normalized tangent D' vector
quaternion for the rotation of the normalized normal B vector in the direction of the normalized normal B' vector
quaternion for the rotation of the normalized binormal C vector in the direction of the normalized binormal C' vector
with:
vFrom = the normalized D, B and C vectors
vTo = the normalized D', B' and C' vectors
and apply each of the three quaternions respectively to D, B and C (not normalized)?
Thanks a lot again
Edit 5
I tried this code (looking at the image for how to align the vectors), but nothing changed:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;
var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;
var b1_b2_angle = binormals_1[ binormals_1.length - 1 ].angleTo( binormals_2[ 0 ] ); // angle between binormals_1 (at point A of spline 1) and binormals_2 (at point A of spline 2)
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromAxisAngle( normals_1[ normals_1.length - 1 ], b1_b2_angle ); // quaternion equal to a rotation on normal_1 as axis
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis ); // apply quaternion to binormals_1
var n1_n2_angle = normals_1[ normals_1.length - 1 ].angleTo( normals_2[ 0 ] ); // angle between normals_1 (at point A of spline 1) and normals_2 (at point A of spline 2)
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromAxisAngle( binormals_1[ binormals_1.length - 1 ], -n1_n2_angle ); // quaternion equal to a rotation on binormal_1 as axis
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis ); // apply quaternion to normals_1
and nothing with this other way either:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;
var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromUnitVectors( binormals_1[ binormals_1.length - 1 ].normalize(), binormals_2[ 0 ].normalize() );
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis );
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromUnitVectors( normals_1[ normals_1.length - 1 ].normalize(), normals_2[ 0 ].normalize() );
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis );