Threejs UV mapping distortion - three.js

I have the following problem: the image gets distorted along the face side (red line).
I have a few layers like this, and on each I have the same distortion.
I have double-checked the UV mapping, and both faces have the same 2 UV coordinates for the same 2 points. I would paste some code, but the UV calculation is a bit complex. I am trying to see if somebody has had a similar problem.
UV mapping for this part (red line matches the picture above)
Here is the code for the UV calculation, but you probably won't be able to find an error there.
calculateNewUvMapping(vertices: Array<Vector3>): Array<Vector2> {
    const orderedVertices = _.clone(vertices).reverse();
    const newArray: Array<Vector2> = new Array<Vector2>();
    const startV3 = orderedVertices[0];
    const endV3 = orderedVertices[orderedVertices.length - 1];
    const start = new Vector2(startV3.x, startV3.z);
    const end = new Vector2(endV3.x, endV3.z);
    // Distance between two vertices on a scale from 0 to 1
    const distanceAlpha = 1 / (orderedVertices.length - 1);
    newArray.push(start);
    for (let i = 1; i < orderedVertices.length - 1; i++) {
        const newUv = new Vector2();
        const interpolationValue = distanceAlpha * i;
        newUv.lerpVectors(start, end, interpolationValue);
        newArray.push(newUv);
    }
    newArray.push(end);
    return newArray;
}
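For illustration, here is a minimal sketch of the same interpolation in plain JavaScript (plain `[u, v]` arrays instead of `THREE.Vector2`; `lerpUvs` is a hypothetical helper, not part of the code above). Note that it places the UVs at equal parameter steps between start and end, regardless of the actual spacing of the vertices in between:

```javascript
// Hypothetical helper mirroring the loop above: evenly spaced UVs between
// start and end, using plain [u, v] arrays instead of THREE.Vector2.
function lerpUvs(start, end, count) {
  const distanceAlpha = 1 / (count - 1); // parameter step, as in the original
  const uvs = [];
  for (let i = 0; i < count; i++) {
    const t = distanceAlpha * i;
    uvs.push([
      start[0] + (end[0] - start[0]) * t, // lerp u
      start[1] + (end[1] - start[1]) * t, // lerp v
    ]);
  }
  return uvs;
}
```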
EDIT: It's not the normals calculation.

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea is to stack some images with the same resolution and replicate the gyroid into 2D images, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;
for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);
        // Simulating second image, shifted in x to look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
Resulting in (white represents layer 1, grey layer 2):
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You get just a single ugly slice because I do not see any z in your code (that is correct: the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z ...
I would expect some conditional testing of the actually iterated x,y,z result of the gyroid equation against some small non-zero number, like if (result <= 1e-6), and drawing the stuff only then, or computing the color from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered QUAD, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader, it's not important here).
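For reference, the gyroid implicit equation being tested can be written as a small standalone function (a JavaScript sketch of the same expression the shaders evaluate):

```javascript
// Gyroid implicit equation: the surface is the set of points where this is ~0.
function gyroid(x, y, z) {
  return Math.sin(x) * Math.cos(y)
       + Math.sin(y) * Math.cos(z)
       + Math.sin(z) * Math.cos(x);
}
```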
You have 2 basic options to render this:
Blending the ray-cast surface pixels together, creating an X-ray-like image. This can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example for the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
    float n, x, y, z, dz, d, i, di;
    const float scale = 2.0 * 3.1415926535897932384626433832795;
    n = 100.0;            // layers
    x = pos.x * scale;    // x position of pixel
    y = pos.y * scale;    // y position of pixel
    dz = 2.0 * scale / n; // z step
    di = 1.0 / n;         // color increment
    i = 0.0;              // color intensity
    for (z = -scale; z <= scale; z += dz) // do all layers
    {
        d  = sin(x) * cos(y); // compute gyroid equation
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        if (d <= 1e-6) i += di; // if near surface, add to color
    }
    out_col = vec3(1.0, 1.0, 1.0) * i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
Another technique is to render the first hit on the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply partially differentiating the equation by x,y,z. As the surface is now opaque, we can stop on the first hit and also raycast just a single period in z, as anything after that is hidden anyway. Here is a simple example:
Fragment:
#version 400 core
in vec2 pos;  // input fragment (pixel) position <-1,+1>
out vec3 col; // output fragment (pixel) RGB color <0,1>
void main(void)
{
    bool _discard = true;
    float N, x, y, z, dz, d, i;
    vec3 n, l;
    const float pi = 3.1415926535897932384626433832795;
    const float scale  = 3.0 * pi; // 3.0 periods in x,y
    const float scalez = 2.0 * pi; // 1.0 period in z
    N = 200.0;               // layers per z (quality)
    x = pos.x * scale;       // <-1,+1> -> [rad]
    y = pos.y * scale;       // <-1,+1> -> [rad]
    dz = 2.0 * scalez / N;   // z step
    l = vec3(0.0, 0.0, 1.0); // light unit direction
    i = 0.0;                 // starting color intensity
    n = vec3(0.0, 0.0, 1.0); // starting normal, only to get rid of a warning
    for (z = 0.0; z >= -scalez; z -= dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d  = sin(x) * cos(y);
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        // surface hit test
        if (d > 1e-6) continue; // skip if too far from surface
        _discard = false;       // remember that surface was hit
        // compute normal as the gradient of the gyroid equation
        n.x = +cos(x) * cos(y) - sin(z) * sin(x); // partial derivative by x
        n.y = -sin(x) * sin(y) + cos(y) * cos(z); // partial derivative by y
        n.z = -sin(y) * sin(z) + cos(z) * cos(x); // partial derivative by z
        break; // stop raycasting
    }
    // skip rendering if no hit with the surface (hole)
    if (_discard) discard;
    // directional lighting
    n = normalize(n);
    i = abs(dot(l, n));
    // ambient + directional lighting
    i = 0.3 + (0.7 * i);
    // output fragment (render pixel)
    gl_FragDepth = z;              // depth (optional)
    col = vec3(1.0, 1.0, 1.0) * i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
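The partial derivatives can be sanity-checked numerically. A JavaScript sketch (under the assumption that the normal is the gradient of the implicit equation), comparing the analytic gradient against central finite differences:

```javascript
// Gyroid implicit equation and its analytic gradient (unnormalized normal).
function gyroid(x, y, z) {
  return Math.sin(x) * Math.cos(y)
       + Math.sin(y) * Math.cos(z)
       + Math.sin(z) * Math.cos(x);
}
function gyroidGradient(x, y, z) {
  return [
    +Math.cos(x) * Math.cos(y) - Math.sin(z) * Math.sin(x), // d/dx
    -Math.sin(x) * Math.sin(y) + Math.cos(y) * Math.cos(z), // d/dy
    -Math.sin(y) * Math.sin(z) + Math.cos(z) * Math.cos(x), // d/dz
  ];
}
// Central finite difference of the same equation, for comparison.
function numericGradient(x, y, z, h = 1e-5) {
  return [
    (gyroid(x + h, y, z) - gyroid(x - h, y, z)) / (2 * h),
    (gyroid(x, y + h, z) - gyroid(x, y - h, z)) / (2 * h),
    (gyroid(x, y, z + h) - gyroid(x, y, z - h)) / (2 * h),
  ];
}
```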
[Edit1]
Based on your code, I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
const double scalex = 2.0 * Math.PI / mat.Width;
const double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley; // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex; // x position of pixel
        dz = 2.0 * scalez / layerCount; // z step
        di = 1.0 / layerCount; // color increment
        i = 0.0; // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
        {
            d = Math.Sin(xx) * Math.Cos(yy); // compute gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di; // if near surface, add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)i);
    }
}

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self education. In my camera class I am using glm::lookat() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of a plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for a plane perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so, testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w / h, zNear, zFar) },
    frustumLeftPlane   { setPlane(0, 1) },
    frustumRightPlane  { setPlane(0, 0) },
    frustumBottomPlane { setPlane(1, 1) },
    frustumTopPlane    { setPlane(1, 0) },
    frustumNearPlane   { setPlane(2, 0) },
    frustumFarPlane    { setPlane(2, 1) },
The frustum objects are based off the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.
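The vec4-based distance check from side note 1 can be sketched like this (plain JavaScript arrays and a hypothetical helper name, just to illustrate the convention):

```javascript
// Signed point-to-plane distance in one dot product. The plane is stored as
// [nx, ny, nz, d] with (nx, ny, nz) a unit normal and d the offset along it:
// dist = n . x - d  ==  [n, d] . [x, -1]
function planeDistance(plane, point) {
  return plane[0] * point[0]
       + plane[1] * point[1]
       + plane[2] * point[2]
       + plane[3] * (-1);
}
```

For example, for the plane z = 5 (normal (0, 0, 1), offset 5), the point (0, 0, 7) is 2 units in front of the plane and (0, 0, 3) is 2 units behind it.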

Vertex color interpolation artifacts

I display a "curved tube" and color its vertices based on their distance to the plane the curve lies on.
It works mostly fine; however, when I reduce the resolution of the tube, artifacts start to appear in the tube colors.
Those artifacts seem to depend on the camera position. If I move the camera around, sometimes the artifacts disappear. Not sure that makes sense.
Live demo: http://jsfiddle.net/gz1wu369/15/
I do not know if there is actually a problem in the interpolation or if it is just a "screen" artifact.
Afterwards I render the scene to a texture, looking at it from the "top". It then looks like a "deformation" field that I use in another shader, hence the need for continuous color.
I do not know if it is the expected behavior or if there is a problem in my code while setting the vertices color.
Would using the THREEJS Extrusion tools instead of the tube geometry solve my issue?
const tubeGeo = new THREE.TubeBufferGeometry(closedSpline, steps, radius, curveSegments, false);
const count = tubeGeo.attributes.position.count;
tubeGeo.addAttribute('color', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
const colors = tubeGeo.attributes.color;
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
    const pp = new THREE.Vector3(
        tubeGeo.attributes.position.array[3 * i],
        tubeGeo.attributes.position.array[3 * i + 1],
        tubeGeo.attributes.position.array[3 * i + 2]);
    const distance = plane.distanceToPoint(pp);
    const normalizedDist = Math.abs(distance) / radius;
    const t2 = Math.floor(i / (curveSegments + 1));
    color.setHSL(0.5 * t2 / steps, .8, .5);
    const green = 1 - Math.cos(Math.asin(Math.abs(normalizedDist)));
    colors.setXYZ(i, color.r, green, 0);
}
Low-res tubes with the "Normals" material show a different artifact.
A high-resolution tube hides the artifacts:

Compute 3D point from mouse-position and depth-map

I need to compute 3D coordinates from a screen-space position using a rendered depth-map. Unfortunately, using regular raycasting is not an option for me because I am dealing with a single geometry containing something on the order of 5M faces.
So I figured I will do the following:
render a depth-map with RGBADepthPacking into a renderTarget
use a regular unproject-call to compute a ray from the mouse-position (exactly as I would do when using raycasting)
lookup the depth from the depth-map at the mouse-coordinates and compute a point along the ray using that distance.
This kind of works, but somehow the located point is always slightly behind the object, so there is probably something wrong with my depth-calculations.
Now some details about the steps above
Rendering the depth-map is pretty much straight-forward:
const depthTarget = new THREE.WebGLRenderTarget(w, h);
const depthMaterial = new THREE.MeshDepthMaterial({
    depthPacking: THREE.RGBADepthPacking
});

// in render loop
renderer.setClearColor(0xffffff, 1);
renderer.clear();
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, depthTarget);
Lookup the stored color-value at the mouse-position with:
renderer.readRenderTargetPixels(
depthTarget, x, h - y, 1, 1, rgbaBuffer
);
And convert back to float using (adapted from the GLSL-Version in packing.glsl):
const v4 = new THREE.Vector4();
const unpackDownscale = 255 / 256;
const unpackFactors = new THREE.Vector4(
    unpackDownscale / (256 * 256 * 256),
    unpackDownscale / (256 * 256),
    unpackDownscale / 256,
    unpackDownscale
);

function unpackRGBAToDepth(rgbaBuffer) {
    return v4.fromArray(rgbaBuffer)
        .multiplyScalar(1 / 255)
        .dot(unpackFactors);
}
and finally computing the depth-value (I found corresponding code in readDepth() in examples/js/shaders/SSAOShader.js which I ported to JS):
function computeDepth() {
    const cameraFarPlusNear = cameraFar + cameraNear;
    const cameraFarMinusNear = cameraFar - cameraNear;
    const cameraCoef = 2.0 * cameraNear;
    let z = unpackRGBAToDepth(rgbaBuffer);
    return cameraCoef / (cameraFarPlusNear - z * cameraFarMinusNear);
}
Now, as this function returns values in the range 0..1, I think it is the depth in clip-space coordinates, so I convert them into "real" units using:
const depth = camera.near + depth * (camera.far - camera.near);
There is obviously something slightly off with these calculations and I didn't figure out the math and details about how depth is stored yet.
Can someone please point me to the mistake I made?
Addition: other things I tried
First I thought it should be possible to just use the unpacked depth-value as value for z in my unproject-call like this:
const x = mouseX / w * 2 - 1;
const y = -mouseY / h * 2 + 1;
const v = new THREE.Vector3(x, y, depth).unproject(camera);
However, this also doesn't get the coordinates right.
[EDIT 1 2017-05-23 11:00CEST]
As per #WestLangleys comment I found the perspectiveDepthToViewZ() function which sounds like it should help. Written in JS that function is
function perspectiveDepthToViewZ(invClipZ, near, far) {
    return (near * far) / ((far - near) * invClipZ - far);
}
However, when called with unpacked values from the depth-map, results are several orders of magnitude off. See here.
OK, finally solved it. For everyone having trouble with similar issues, here's the solution:
The last line of the computeDepth function was just wrong. There is a function perspectiveDepthToViewZ in packing.glsl that is pretty easy to convert to JS:
function perspectiveDepthToViewZ(invClipZ, near, far) {
    return (near * far) / ((far - near) * invClipZ - far);
}
(I believe this is somehow part of the inverse projection matrix.)
function computeDepth() {
    let z = unpackRGBAToDepth(rgbaBuffer);
    return perspectiveDepthToViewZ(z, camera.near, camera.far);
}
Now this will return the z-axis value in view-space for the point. Left to do is converting this back to world-space coordinates:
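As a quick sanity check of the view-space mapping (a sketch using the function above; near/far values are arbitrary examples), packed depth 0 should land on the near plane and 1 on the far plane, both negative because view space looks down -z:

```javascript
// perspectiveDepthToViewZ from packing.glsl, converted to JS as above.
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}

// Depth 0 (near plane) and 1 (far plane) map to -near and -far in view space.
const near = 0.1, far = 100;
console.log(perspectiveDepthToViewZ(0, near, far)); // ≈ -0.1
console.log(perspectiveDepthToViewZ(1, near, far)); // ≈ -100
```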
const setPositionFromViewZ = (function() {
    const viewSpaceCoord = new THREE.Vector3();
    const projInv = new THREE.Matrix4();

    return function(position, viewZ) {
        projInv.getInverse(camera.projectionMatrix);
        position
            .set(
                mousePosition.x / windowWidth * 2 - 1,
                -(mousePosition.y / windowHeight) * 2 + 1,
                0.5
            )
            .applyMatrix4(projInv);
        position.multiplyScalar(viewZ / position.z);
        position.applyMatrix4(camera.matrixWorld);
    };
})();

Three.js - How to use the frames option in ExtrudeGeometry

I can't find an explanation anywhere of how to use the frames option for ExtrudeGeometry in Three.js. Its documentation says:
extrudePath — THREE.CurvePath. 3d spline path to extrude shape along. (creates Frames if (frames aren't defined)
frames — THREE.TubeGeometry.FrenetFrames. containing arrays of tangents, normals, binormals
but I don't understand how frames must be defined. I think of using the "frames" option by passing three arrays for tangents, normals and binormals (calculated in some way), but how do I pass them in frames? Probably (like here for morphNormals):
frames = { tangents: [ new THREE.Vector3(), ... ], normals: [ new THREE.Vector3(), ... ], binormals: [ new THREE.Vector3(), ... ] };
with the three arrays of the same length (perhaps corresponding to the steps or curveSegments option in ExtrudeGeometry)?
Many thanks for an explanation.
Edit 1:
String.prototype.format = function () {
    var str = this;
    for (var i = 0; i < arguments.length; i++) {
        str = str.replace('{' + i + '}', arguments[i]);
    }
    return str;
}

var numSegments = 6;
var frames = new THREE.TubeGeometry.FrenetFrames( new THREE.SplineCurve3(spline), numSegments );
var tangents = frames.tangents,
    normals = frames.normals,
    binormals = frames.binormals;
var tangents_list = [],
    normals_list = [],
    binormals_list = [];
for ( i = 0; i < numSegments; i++ ) {
    var tangent = tangents[ i ];
    var normal = normals[ i ];
    var binormal = binormals[ i ];
    tangents_list.push("({0}, {1}, {2})".format(tangent.x, tangent.y, tangent.z));
    normals_list.push("({0}, {1}, {2})".format(normal.x, normal.y, normal.z));
    binormals_list.push("({0}, {1}, {2})".format(binormal.x, binormal.y, binormal.z));
}
alert(tangents_list);
alert(normals_list);
alert(binormals_list);
Edit 2
Some time ago, I opened this topic, for which I used this solution:
var spline = new THREE.SplineCurve3([
    new THREE.Vector3(20.343, 19.827, 90.612), // t=0
    new THREE.Vector3(22.768, 22.735, 90.716), // t=1/12
    new THREE.Vector3(26.472, 23.183, 91.087), // t=2/12
    new THREE.Vector3(27.770, 26.724, 91.458), // t=3/12
    new THREE.Vector3(31.224, 26.976, 89.861), // t=4/12
    new THREE.Vector3(32.317, 30.565, 89.396), // t=5/12
    new THREE.Vector3(31.066, 33.784, 90.949), // t=6/12
    new THREE.Vector3(30.787, 36.310, 88.136), // t=7/12
    new THREE.Vector3(29.354, 39.154, 90.152), // t=8/12
    new THREE.Vector3(28.414, 40.213, 93.636), // t=9/12
    new THREE.Vector3(26.569, 43.190, 95.082), // t=10/12
    new THREE.Vector3(24.237, 44.399, 97.808), // t=11/12
    new THREE.Vector3(21.332, 42.137, 96.826)  // t=12/12=1
]);
var spline_1 = [], spline_2 = [], t;
for( t = 0; t <= (7/12); t += 0.0001) {
    spline_1.push(spline.getPoint(t));
}
for( t = (7/12); t <= 1; t += 0.0001) {
    spline_2.push(spline.getPoint(t));
}
But I was thinking about the possibility of setting the tangent, normal and binormal of the first point (t=0) of spline_2 to be the same as those of the last point (t=1) of spline_1; so I thought that option, frames, could be useful for the purpose. Would it be possible to overwrite the values for a tangent, normal and binormal in the respective lists, to obtain the same values for the last point (t=1) of spline_1 and the first point (t=0) of spline_2, and so guide the extrusion? For example, for the tangent at t=0 of spline_2:
tangents[0].x = 0.301;
tangents[0].y = 0.543;
tangents[0].z = 0.138;
doing the same also for normals[0] and binormals[0], to ensure the same orientation for the last point (t=1) of spline_1 and the first one (t=0) of spline_2.
Edit 3
I'm trying to visualize the tangent, normal and binormal for each control point of "mypath" (spline) using ArrowHelper, but, as you can see in the demo (on scene load you need to zoom out slowly until you see the ArrowHelpers; the relevant code runs from line 122 to line 152 in the fiddle), the ArrowHelper does not start at the origin, but away from it. How do I obtain the same result as in this reference demo (when you check the "Debug normals" checkbox)?
Edit 4
I plotted two splines that respectively end (blue spline) and start (red spline) at point A (= origin), displaying tangent, normal and binormal vectors at point A for each spline (using cyan color for the blue spline's labels, and yellow color for the red spline's labels).
As mentioned above, to align the two splines and make them continuous, I thought to exploit the three vectors (tangent, normal and binormal). Which mathematical operation, in theory, should I use to turn the end face of the blue spline so that it faces the initial face (yellow face) of the red spline, so that the respective tangents (D, D', hidden in the picture), normals (B, B') and binormals (C, C') are aligned? Should I use the .setFromUnitVectors(vFrom, vTo) method of the quaternion? In its documentation I read: "Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo ... vFrom and vTo are assumed to be normalized." So, probably, I need to define three quaternions:
a quaternion for the rotation of the normalized tangent vector D in the direction of the normalized tangent vector D'
a quaternion for the rotation of the normalized normal vector B in the direction of the normalized normal vector B'
a quaternion for the rotation of the normalized binormal vector C in the direction of the normalized binormal vector C'
with:
vFrom = the normalized D, B and C vectors
vTo = the normalized D', B' and C' vectors
and apply each of the three quaternions respectively to D, B and C (not normalized)?
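For what it's worth, here is a hedged sketch of an alternative way to look at the math (plain JavaScript arrays rather than three.js, and only my assumption about the intended operation): since (tangent, normal, binormal) form an orthonormal frame, aligning one frame to another can be expressed as a single rotation matrix R = M2 · M1ᵀ, where the columns of M1 and M2 are the two frames. Three independently applied quaternions, one per vector, generally do not compose into one consistent rotation.

```javascript
// 3x3 helpers on arrays of rows.
function transpose(m) {
  return [[m[0][0], m[1][0], m[2][0]],
          [m[0][1], m[1][1], m[2][1]],
          [m[0][2], m[1][2], m[2][2]]];
}
function multiply(a, b) {
  const r = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      for (let k = 0; k < 3; k++)
        r[i][j] += a[i][k] * b[k][j];
  return r;
}
function matVec(m, v) {
  return m.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
}
// Frame matrix with tangent/normal/binormal as columns.
function frameMatrix(t, n, b) {
  return [[t[0], n[0], b[0]],
          [t[1], n[1], b[1]],
          [t[2], n[2], b[2]]];
}
// Rotation taking frame 1 onto frame 2 (frames assumed orthonormal,
// so the inverse of M1 is just its transpose).
function alignFrames(t1, n1, b1, t2, n2, b2) {
  return multiply(frameMatrix(t2, n2, b2), transpose(frameMatrix(t1, n1, b1)));
}
```

Applying R to the end tangent, normal and binormal of spline_1 then maps all three onto the corresponding vectors of spline_2 at once.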
Thanks a lot again
Edit 5
I tried this code (looking in the image for how to align the vectors), but nothing has changed:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

// angle between binormal_1 (at point A of spline 1) and binormal_2 (at point A of spline 2)
var b1_b2_angle = binormals_1[ binormals_1.length - 1 ].angleTo( binormals_2[ 0 ] );
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromAxisAngle( normals_1[ normals_1.length - 1 ], b1_b2_angle ); // rotation about normal_1 as axis
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis ); // apply quaternion to binormal_1

// angle between normal_1 (at point A of spline 1) and normal_2 (at point A of spline 2)
var n1_n2_angle = normals_1[ normals_1.length - 1 ].angleTo( normals_2[ 0 ] );
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromAxisAngle( binormals_1[ binormals_1.length - 1 ], -n1_n2_angle ); // rotation about binormal_1 as axis
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis ); // apply quaternion to normal_1
Nothing in this other way either:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromUnitVectors( binormals_1[ binormals_1.length - 1 ].normalize(), binormals_2[ 0 ].normalize() );
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis );

var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromUnitVectors( normals_1[ normals_1.length - 1 ].normalize(), normals_2[ 0 ].normalize() );
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis );
