I have used a Collada model for a 3D preview of one of my products, with a canvas as the texture. Normally it works fine, but at some angles the corner parts of the 3D model do not render properly.
I have attached screenshots of the canvas and the 3D model, along with the DAE model.
Is there an issue with the model? Please find the Collada model here.
Please find my code below:
width = 256, height = 256;
camera = new THREE.PerspectiveCamera( 1, width / height, 0.01, 300 );
camera.position.set( 8, 10, 8 );
camera.lookAt( 0, 3, 0 );
scene = new THREE.Scene();
scene.background = new THREE.Color( 0xffffff );
loadingManager = new THREE.LoadingManager( function () {
    // once loading completes, add the loaded Collada scene graph to the scene
    scene.add( collada.scene );
} );
loader = new ColladaLoader( loadingManager );
ambientLight = new THREE.AmbientLight( 0xffffff, 1 );
scene.add( ambientLight );
spotLight = new THREE.SpotLight( 0xffffff, 1 );
spotLight.target = scene;
spotLight.position.set( 0, 0, 0 );
spotLight.castShadow = true;
spotLight.shadow && spotLight.shadow.mapSize.set( width, height );
There is an issue in your Collada file, although it is not necessarily responsible for the rendering artifacts. The problem is the following section:
<library_images>
<image id="Map #3-image" name="Map #3"><init_from>file://D:\Mit\3D Box\57.15 x 57.15 x 152.4 overlay\57.15 x 57.15 x 152.4 Open\57.15 x 57.15 x 152.4 Front Open.png</init_from></image>
<image id="Map #5-image" name="Map #5"><init_from>file://D:\Mit\3D Box\57.15 x 57.15 x 152.4 overlay\57.15 x 57.15 x 152.4 Open\57.15 x 57.15 x 152.4 Back Open.png</init_from></image>
<image id="Map #6-image" name="Map #6"><init_from>file://D:\Mit\3D Box\57.15 x 57.15 x 152.4 overlay\57.15 x 57.15 x 152.4 Open\57.15 x 57.15 x 152.4 Border Open.png</init_from></image>
</library_images>
As you can see, it contains absolute file paths pointing into a local file system. These definitions do not work if you load the Collada asset in a browser. Assuming the textures are located in the same directory as the model, it should be:
<library_images>
<image id="Map #3-image" name="Map #3"><init_from>57.15 x 57.15 x 152.4 Front Open.png</init_from></image>
<image id="Map #5-image" name="Map #5"><init_from>57.15 x 57.15 x 152.4 Back Open.png</init_from></image>
<image id="Map #6-image" name="Map #6"><init_from>57.15 x 57.15 x 152.4 Border Open.png</init_from></image>
</library_images>
The geometry of the model itself seems to render fine when imported into the three.js editor.
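If you cannot edit the DAE file itself, a similar effect can be achieved at load time with THREE.LoadingManager.setURLModifier(), which lets you rewrite each resource URL before it is fetched. A minimal sketch: the path-stripping logic is plain JavaScript and shown standalone; the manager wiring (commented out) assumes the three.js LoadingManager and ColladaLoader APIs, and stripToFileName is my own helper name.

```javascript
// Reduce an absolute file:// path from the DAE to its bare file name,
// so the texture is resolved relative to the model's directory instead.
function stripToFileName( url ) {
    // handle both backslash (Windows) and forward-slash separators
    var parts = url.replace( /^file:\/\//, '' ).split( /[\\/]/ );
    return parts[ parts.length - 1 ];
}

// Wiring it into the loader (assumes three.js is available):
// var manager = new THREE.LoadingManager();
// manager.setURLModifier( function ( url ) {
//     return url.indexOf( 'file://' ) === 0 ? stripToFileName( url ) : url;
// } );
// var loader = new ColladaLoader( manager );
```

This leaves the DAE untouched, which can be convenient when the file is exported repeatedly from the same authoring tool.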
I'm trying to create a 2D square with curved/rounded edges. As I understand it, using planes is the way to go, but I'm struggling to figure out exactly how to do this. It seems like it should be simple, yet I can't find any straightforward answers online either, so I would really appreciate the help.
// Create plane
let geometry = new THREE.PlaneGeometry(1, 1)
// Round the edges somehow?
this.mesh = new THREE.Mesh(geometry, material)
this.mesh.rotation.x = -Math.PI / 2
this.container.add(this.mesh)
Managed to get it working based on @prisoner849's suggestion to use THREE.Shape() and THREE.ShapeBufferGeometry(). Posting my answer, but it's mostly the same as the one found here.
let x = 1;
let y = 1;
let width = 50;
let height = 50;
let radius = 20;
let shape = new THREE.Shape();
shape.moveTo( x, y + radius );
shape.lineTo( x, y + height - radius );
shape.quadraticCurveTo( x, y + height, x + radius, y + height );
shape.lineTo( x + width - radius, y + height );
shape.quadraticCurveTo( x + width, y + height, x + width, y + height - radius );
shape.lineTo( x + width, y + radius );
shape.quadraticCurveTo( x + width, y, x + width - radius, y );
shape.lineTo( x + radius, y );
shape.quadraticCurveTo( x, y, x, y + radius );
let geometry = new THREE.ShapeBufferGeometry( shape );
this.mesh = new THREE.Mesh(geometry, material)
this.mesh.rotation.x = -Math.PI / 2
this.container.add(this.mesh)
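The path commands above can be wrapped in a small helper that works against anything exposing the moveTo / lineTo / quadraticCurveTo interface, i.e. a THREE.Shape or equally a CanvasRenderingContext2D. A sketch, where roundedRectPath is my own helper name; the three.js wiring is shown commented out:

```javascript
// Trace a rounded rectangle onto any path-like target
// (e.g. a THREE.Shape or a canvas 2D context).
function roundedRectPath( ctx, x, y, width, height, radius ) {
    ctx.moveTo( x, y + radius );
    ctx.lineTo( x, y + height - radius );
    ctx.quadraticCurveTo( x, y + height, x + radius, y + height );
    ctx.lineTo( x + width - radius, y + height );
    ctx.quadraticCurveTo( x + width, y + height, x + width, y + height - radius );
    ctx.lineTo( x + width, y + radius );
    ctx.quadraticCurveTo( x + width, y, x + width - radius, y );
    ctx.lineTo( x + radius, y );
    ctx.quadraticCurveTo( x, y, x, y + radius );
    return ctx;
}

// Usage with three.js (assumes THREE is available):
// var shape = roundedRectPath( new THREE.Shape(), 1, 1, 50, 50, 20 );
// var geometry = new THREE.ShapeBufferGeometry( shape );
```

Keeping the path tracing separate from the geometry construction also makes it easy to reuse the same outline for a 2D canvas overlay.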
I use a function like this in three.js r.69:
function Point3DToScreen2D( point3D, camera ) {
    var p = point3D.clone();
    var vector = p.project( camera );
    vector.x = ( vector.x + 1 ) / 2 * window.innerWidth;
    vector.y = - ( vector.y - 1 ) / 2 * window.innerHeight;
    return vector;
}
It works fine when I keep the scene still.
But when I rotate the scene it returns a wrong position on the screen. This happens when I rotate about 180 degrees: the point shouldn't have a position on screen anymore, but it is still shown.
I set a position with var tmpV = Point3DToScreen2D( new THREE.Vector3( -67, 1033, -2500 ), camera ); in my update loop and display it with CSS3D. When I have rotated about 180 degrees, but less than 360, the point shows up on the screen again. It is obviously a wrong position, as can be told from the scene, since I haven't rotated a full 360 degrees.
I know little about matrices, so I don't know how project works.
Here is the source of project in three.js:
project: function () {
    var matrix;
    return function ( camera ) {
        if ( matrix === undefined ) matrix = new THREE.Matrix4();
        matrix.multiplyMatrices( camera.projectionMatrix, matrix.getInverse( camera.matrixWorld ) );
        return this.applyProjection( matrix );
    };
}()
Is the matrix.getInverse( camera.matrixWorld ) part redundant? I tried removing it, but that didn't work.
Can anyone help me? Thanks.
You are projecting a 3D point from world space to screen space using a pattern like this one:
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
vector.z = 0;
However, using this approach, points behind the camera are projected to screen space, too.
You said you want to filter out points that are behind the camera. To do that, you can use this pattern first:
var matrix = new THREE.Matrix4(); // create once and reuse
...
// get the matrix that maps from world space to camera space
matrix.getInverse( camera.matrixWorld );
// transform your point from world space to camera space
p.applyMatrix4( matrix );
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than zero.
// check if point is behind the camera
if ( p.z > 0 ) ...
three.js r.71
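The NDC-to-screen step in the pattern above is plain arithmetic, so it can be factored out and checked without three.js at all. A standalone sketch, where ndcToScreen is my own name:

```javascript
// Map normalized device coordinates (x, y in [-1, 1], origin at center,
// y pointing up) to pixel coordinates (origin at top-left, y pointing down).
function ndcToScreen( ndcX, ndcY, width, height ) {
    return {
        x: Math.round( ( ndcX + 1 ) * width / 2 ),
        y: Math.round( ( - ndcY + 1 ) * height / 2 )
    };
}
```

For example, the NDC origin (0, 0) lands at the center of the canvas, (-1, 1) at the top-left pixel, and (1, -1) at the bottom-right.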
Like the example above, but you can check vector.z to determine whether the point is in front of the camera.
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
// behind the camera if z isn't in 0..1 [frustum range]
if ( vector.z > 1 ) {
    vector = null;
}
To delve a little deeper into this answer:
// behind the camera if z isn't in 0..1 [frustum range]
if ( vector.z > 1 ) {
    vector = null;
}
This is not true. The mapping is not continuous. Points beyond the far plane also map to z-values greater than 1.
What exactly does the z-value of a projected vector stand for? X and Y are in normalized clip space [-1, 1]; what about Z?
Would this be true?
projectVector.project(camera);
var inFrontOfCamera = projectVector.z < 1;
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than 1.
//check if point is behind the camera
if ( p.z > 1 ) ...
NOTICE: if this condition is satisfied, the projected x and y coordinates come out mirrored through the origin (centrosymmetric):
{x: 0.233, y: -0.566, z: 1.388}
// after transform
{x: -0.233, y: 0.566, z: 1.388}
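To make the z behaviour concrete, here is the standard OpenGL-style perspective depth mapping written out without three.js (ndcDepth and zEye are my own names; zEye is the camera-space z, negative in front of the camera). It shows that points between the near and far planes land in [-1, 1], while points behind the camera and points beyond the far plane both come out greater than 1, which is why the z > 1 test alone cannot tell the two cases apart:

```javascript
// NDC depth for a point at camera-space z = zEye
// (camera at origin, looking down the negative-z axis).
function ndcDepth( zEye, near, far ) {
    var clipZ = -zEye * ( far + near ) / ( far - near ) - 2 * far * near / ( far - near );
    var clipW = -zEye; // perspective divide uses w = -zEye
    return clipZ / clipW;
}
```

With near = 1 and far = 100, a point at zEye = -1 maps to -1, a point at zEye = -100 maps to 1, and both a point at zEye = +5 (behind the camera) and one at zEye = -200 (beyond the far plane) map above 1.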
I'm creating a tool to rotate images in three.js, but it doesn't work when dealing with negative scales.
The image is displayed in a Mesh created from a THREE.PlaneGeometry and a material that maps the corresponding image.
The tool is an object with an element called gizmo (a small mesh) that the user selects and drags to rotate the object.
To do the rotation I define an angle and an axis. The angle is defined by two vectors created from the positions of the gizmo (original and current) and the position of the Mesh.
var gizmoOriginalPosition = this.gizmoOriginalPosition.clone().applyMatrix4( this.matrixWorld );
var imagePosition = this.imageToTransformOriginalPosition.clone().applyMatrix4( this.imageToTransformParentOriginalMatrix );
var vector1 = gizmoOriginalPosition.sub( imagePosition ).normalize();
var vector2 = point.sub( imagePosition ).normalize();
var angle = Math.acos( vector1.dot( vector2 ) );
var axis = new THREE.Vector3( 0, 0, 1 );
var ortho = vector2.clone().cross( vector1 );
var _m = this.imageToTransformOriginalMatrix.clone();
this.tempMatrix.extractRotation( _m );
var q = new THREE.Quaternion().setFromRotationMatrix( this.tempMatrix );
var _axis = axis.clone().applyQuaternion( q );
var f = ortho.dot( _axis );
f = f > 0 ? 1 : -1;
angle *= -f;
var q = new THREE.Quaternion().setFromAxisAngle( axis, angle );
var Q = new THREE.Quaternion().multiplyQuaternions( this.imageToTransformOriginalQuaternion, q );
imageToTransform.quaternion.copy( Q );
The axis of rotation is always ( 0, 0, 1) because the Mesh is a plane in XY.
point is the new position of the gizmo using a plane of intersection.
The vectors to define the angle are in world coordinates. ortho is a vector to define the direction of the angle, so the Mesh rotates in the direction of the mouse pointer. I define the direction of the angle with the f value obtained using ortho and axis. The axis ( 0, 0, 1 ) is rotated so its direction is in world coordinates ( ortho is in world coordinates ).
This works as expected in almost every case, except when the Mesh has a negative scale in X and Y. Here the image rotates in the opposite direction to the mouse pointer.
Thanks.
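The direction test in the snippet (the cross product of the two vectors dotted with the rotation axis) can be isolated and checked in plain JavaScript. A sketch with my own helper names, using arrays [x, y, z] in place of THREE.Vector3; the last assertion illustrates how flipping the effective axis, as a mirroring transform can do, reverses the sign of the angle:

```javascript
// Minimal 3D vector helpers.
function cross( a, b ) {
    return [ a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] ];
}
function dot( a, b ) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Angle between unit vectors v1 and v2, signed by which side of the
// axis the rotation falls on -- the same idea as f = ortho.dot( _axis )
// in the snippet above.
function signedAngle( v1, v2, axis ) {
    var angle = Math.acos( Math.min( 1, Math.max( -1, dot( v1, v2 ) ) ) );
    return dot( cross( v1, v2 ), axis ) >= 0 ? angle : -angle;
}
```

Rotating from +X toward +Y around +Z gives +90 degrees; swapping the vectors, or flipping the axis to -Z, gives -90 degrees.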
I have a cube of size 1 x 1 x 2. On the larger (1 x 2) face, I would like to show one color on half of the face, and another color on the other half. What would the recommended way of implementing this? Should I use hierarchy to build this 1 x 1 x 2 cube using two 1 x 1 x 1 cubes of different face color?
Here is the pattern to follow. Adjust to your liking:
var geometry = new THREE.CubeGeometry( 10, 10, 20, 1, 1, 2 );
for ( var i = 0; i < geometry.faces.length; i ++ ) {
    geometry.faces[ i ].color.setHSL( Math.random(), 0.5, 0.5 ); // pick your colors
}
var material = new THREE.MeshBasicMaterial( { vertexColors: THREE.FaceColors } );
var mesh = new THREE.Mesh( geometry, material );
If you are using CanvasRenderer, you can set material.overdraw = 0.5 to try to eliminate the diagonal lines. This is not required for WebGLRenderer.
three.js r.60
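To get the half-and-half split rather than random colors, the color can be picked from each face's centroid along the long axis instead. Sketched here with plain arrays standing in for geometry.vertices / geometry.faces (the three.js face objects expose their vertex indices as a, b, c); colorByHalf is my own helper name:

```javascript
// Decide a color per face from its centroid's z-coordinate.
// vertices: array of [x, y, z]; faces: array of { a, b, c } index triples.
function colorByHalf( vertices, faces, colorNeg, colorPos ) {
    return faces.map( function ( face ) {
        var centroidZ = ( vertices[ face.a ][ 2 ]
                        + vertices[ face.b ][ 2 ]
                        + vertices[ face.c ][ 2 ] ) / 3;
        return centroidZ < 0 ? colorNeg : colorPos;
    } );
}

// With real three.js geometry, the same idea would be:
// geometry.faces.forEach( function ( face ) {
//     var z = geometry.vertices[ face.a ].z; // or use the full centroid
//     face.color.setHex( z < 0 ? 0xff0000 : 0x0000ff );
// } );
```

Because the CubeGeometry above is segmented into two along its long axis, every face falls cleanly into one half or the other.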
With the following code, I want to set the rectangle's texture, but the problem is that the texture image does not repeat over the whole rectangle:
var penGeometry = new THREE.CubeGeometry(length, 15, 120);
var wallTexture = THREE.ImageUtils.loadTexture('../../3D/brick2.jpg');
wallTexture.wrapS = wallTexture.wrapT = THREE.MirroredRepeatWrapping;
wallTexture.repeat.set(50, 1);
var wallMaterial = new THREE.MeshBasicMaterial({ map: wallTexture });
var line = new THREE.Mesh(penGeometry, wallMaterial);
line.position.x = PenArray.lastPosition.x + (PenArray.currentPosition.x - PenArray.lastPosition.x) / 2;
line.position.y = PenArray.lastPosition.y + (PenArray.currentPosition.y - PenArray.lastPosition.y) / 2;
line.position.z = PenArray.lastPosition.z + 60;
line.rotation.z = angle;
The texture image is http://wysnan.com/NightClubBooth/brick1.jpg
The result is http://wysnan.com/NightClubBooth/brick2.jpg
Only a piece of the texture is rendered correctly, not the whole rectangle. Why is that, and how can I render the whole rectangle with this texture image?
For repeat wrapping, your texture's dimensions must be a power-of-two (POT).
For example ( 512 x 512 ) or ( 512 x 256 ).
three.js r.58
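A quick way to check a texture's dimensions before resizing; isPowerOfTwo and nearestPowerOfTwo are my own helper names (modern three.js ships a similar THREE.MathUtils.isPowerOfTwo):

```javascript
// True if n is a power of two (256, 512, 1024, ...).
function isPowerOfTwo( n ) {
    return n > 0 && ( n & ( n - 1 ) ) === 0;
}

// Nearest power of two to n, useful when choosing a resize target.
function nearestPowerOfTwo( n ) {
    return Math.pow( 2, Math.round( Math.log( n ) / Math.LN2 ) );
}
```

For instance, a 500 x 300 image would be resized to 512 x 256 before being used with repeat wrapping.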