Getting global normal from hitTest.face.normal - three.js

I am doing a hitTest to create a section plane on a face normal. To get the global normal, I have to do some reworking of hitTest.face.normal. It almost works, but the result is slightly off from the actual normal, so I think I am doing something wrong:
const normalMatrix = new THREE.Matrix3().getNormalMatrix( this.hitTest.object.matrixWorld );
const normal = this.hitTest.face.normal.clone().applyMatrix3( normalMatrix );
this.SectionExtension.tool.setSectionPlane(normal, this.hitTest.point);
As seen in the picture, the resulting cut plane is slightly off from the actual plane.
Can anyone see what might be wrong with that way of getting the plane, or does anyone have a better way of finding the global normal?
Thank you in advance!

Could you share more details on what the face normal should be? Here is a code snippet showing how the viewer's context-menu Section Plane command creates the section plane from the face hit point; it might help.
const selected = viewer.getSelection();
const intersection = viewer.impl.hitTest(status.canvasX, status.canvasY, false, selected);
// Ensure that the selected object is the one that received the context click.
if (intersection && intersection.face && selected.indexOf(intersection.dbId) !== -1) {
sectionExtension.tool.setSectionPlane(section, intersection.face.normal, intersection.point);
}

In case anyone is interested, I found a solution to my problem.
For some reason, running
const currentFragId = this.hitTest.fragId;
const renderProxy = this.viewer.impl.getRenderProxy(this.viewer.model, currentFragId);
before using the normal gave me the correct result. I do not use the renderProxy for anything, but I assume it helps the viewer in some way. Anyway, this works for me!
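Putting the pieces together, here is a minimal sketch of the whole flow in one place. The canvas coordinates, the viewer handle, and the 'Autodesk.Section' extension lookup are assumptions on my part, not taken from the snippets above:
// Minimal sketch: hit test, touch the render proxy, then transform the local
// face normal into world space before creating the section plane.
const hitTest = viewer.impl.hitTest(canvasX, canvasY, true);
if (hitTest && hitTest.face) {
    // Requesting the render proxy first seems to make sure the fragment's
    // world matrix is up to date before the normal is transformed.
    viewer.impl.getRenderProxy(viewer.model, hitTest.fragId);
    const normalMatrix = new THREE.Matrix3().getNormalMatrix(hitTest.object.matrixWorld);
    const normal = hitTest.face.normal.clone().applyMatrix3(normalMatrix).normalize();
    const sectionExtension = viewer.getExtension('Autodesk.Section');
    sectionExtension.tool.setSectionPlane(normal, hitTest.point);
}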

Related

Issues with placementTransform Matrix when loading multiple models to Autodesk Forge

When loading multiple models, I am using the placementTransform parameter.
The issue I am facing is that the rotation works, but the translation does not.
var Rmat = new THREE.Matrix4();
var Tmat = new THREE.Matrix4().makeTranslation(X, Y, Z);
Rmat.makeRotationZ(Angle);
Rmat.multiply(Tmat);
var modelOptions = {
placementTransform: Rmat,
sharedPropertyDbPath: doc.getRoot().getPropertyDbManifest()
};
As far as I know, placementTransform should support both translation and rotation. Try applying the transformations individually (only translation or only rotation) and see whether each is applied as expected. Also double-check that you're multiplying the matrices in the correct order (see the sketch below).
Moreover, if you can reproduce the issue in a sample app, please share it with us via forge (dot) help (at) autodesk (dot) com and we'll take a look at it.
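For illustration, a minimal sketch of the two composition orders, assuming the goal is to rotate the model about the origin and then shift it by (X, Y, Z):
// T * R: rotate first, then translate - the offset stays axis-aligned.
var Rmat = new THREE.Matrix4().makeRotationZ(Angle);
var Tmat = new THREE.Matrix4().makeTranslation(X, Y, Z);
var placement = new THREE.Matrix4().multiplyMatrices(Tmat, Rmat);
// R * T (as in the snippet above): translate first, then rotate - the offset
// itself gets rotated, which can look like the translation "not working".
// var placement = new THREE.Matrix4().multiplyMatrices(Rmat, Tmat);
var modelOptions = {
    placementTransform: placement,
    sharedPropertyDbPath: doc.getRoot().getPropertyDbManifest()
};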
I tried all the combinations; the only one that worked was using the globalOffset and commenting out this._firstGlobalOffset. The code that worked is as follows:
//this._firstGlobalOffset = {x:0,y:0,z:0}; // Commented
var Rmat = new THREE.Matrix4();
Rmat.makeRotationZ(Angle);
var modelOptions = {
placementTransform: Rmat,
globalOffset: { x: X, y: Y, z: Z },
sharedPropertyDbPath: doc.getRoot().getPropertyDbManifest()
};

Manipulate gltf model children

I've loaded a glTF file with two mesh objects in it (cube1, cube2) and rendered it; it looks OK.
The problem is this:
I'm trying to manipulate the opacity / scale of those objects separately.
I tried to address them with:
var cube1 = gltf.scene.getObjectByName('Cube1');
But when I try to set cube1.opacity, I get an "undefined" error.
Any help is appreciated!
Thanks
Well... even though the question is simple, the answer, not so much.
First, you are using a loader, which usually means your cube model will be a bit more complicated. The hierarchy will look something like this:
3D Object > Children > Mesh[x] > material > opacity
I have a live example here:
https://boxelizer.com/renderer.php?mid=7740369e824e4eadbd83e6f01fa96caa
In which you can go into the console and change that property like this:
model.children[1].material.transparent = true;
model.children[1].material.opacity = .5;
model.children[1].material.needsUpdate = true;
Your model may be a bit different but I hope this example helps you figure out yours.
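For completeness, a minimal sketch of addressing the cubes by name instead of by index; it assumes the meshes are actually named 'Cube1' and 'Cube2' in the file and do not share a material, and the loader call and file name are placeholders:
const loader = new THREE.GLTFLoader();
loader.load('model.gltf', (gltf) => {
    scene.add(gltf.scene);
    const cube1 = gltf.scene.getObjectByName('Cube1');
    if (cube1 && cube1.material) {
        // Opacity lives on the material, not on the object itself.
        cube1.material.transparent = true;
        cube1.material.opacity = 0.5;
        cube1.material.needsUpdate = true;
    }
    const cube2 = gltf.scene.getObjectByName('Cube2');
    if (cube2) {
        cube2.scale.set(2, 2, 2); // scale, by contrast, is set on the object
    }
});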

EffectComposer second pass "overwrites" first pass

I want to render a texture in the background and the 3D scene in the foreground, and I used the EffectComposer to do this.
However, my first pass (the background) seems to be "overwritten" by the second pass (the scene): the result is that only the scene gets drawn, with a black background. It looks like the background of the second pass isn't drawn transparent, or the transparency is lost.
http://jsfiddle.net/mdwzx1f8/8/
var renderTex = new THREE.TexturePass(myTex);
var renderScene = new THREE.RenderPass(scene, camera);
composer.addPass(renderTex);
composer.addPass(renderScene);
var effectCopy = new THREE.ShaderPass(THREE.CopyShader);
effectCopy.renderToScreen = true;
composer.addPass(effectCopy);
I hope someone can take a quick look at it and point me in the right direction.
Thanks in advance!
Updates:
07/07/2015
I tried clearing the z-buffer with renderer.clear(false, true, false);
Found a post on masking, which I looked at, but it wasn't added to three.js as far as I can tell:
https://github.com/mrdoob/three.js/issues/2448
08/07/2015
Found another interesting page, https://github.com/mrdoob/three.js/issues/5979 - not sure if this is related yet.
Updated the fiddle: if you comment out line 53 you will see the first pass, which should stay visible if the scene background were drawn transparent.
Bobafett in the three.js IRC channel helped me out and found my issue; it turns out that I called:
renderer.autoClear = false;
instead of:
renderer.autoClearColor = false;
Here is the modified and working fiddle:
http://jsfiddle.net/mdwzx1f8/9/
I would like to thank all who have helped me in the search for the solution
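For reference, a minimal sketch of the pass chain with that fix applied; the pass setup mirrors the snippet above and the renderer/composer wiring is assumed:
// Keep the color buffer between passes so the TexturePass result survives;
// depth and stencil are still cleared as usual.
renderer.autoClearColor = false;
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.TexturePass(myTex));        // background texture
composer.addPass(new THREE.RenderPass(scene, camera)); // 3D scene on top
var effectCopy = new THREE.ShaderPass(THREE.CopyShader);
effectCopy.renderToScreen = true;
composer.addPass(effectCopy);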

Creating THREE.Line's with different endpoints using THREE.BufferGeometry

I am creating several THREE.Lines using THREE.BufferGeometry. Initially my app had them all starting at the origin and things worked as expected. Now, I would like to be able to start (and end) them at any point.
This fiddle (http://jsfiddle.net/9nVqU/) illustrates (I hope) how changing one end of the line away from the origin causes unexpected results.
I wondered if it was because each line follows on from the previous one, but switching the start/end order didn't change anything, and if that were the cause I'd expect it to break.
Maybe I have the arrays set up incorrectly, or the attributes that tell three.js how to interpret them. I think I need 2 * 3 verts for each line, but the changes I made to buffer_geometry.attributes = { only seemed to make things worse.
FWIW, the actual effect I'm trying to achieve is to selectively turn on and off the lines based on user input. I can do that already by changing the end position but then I lose that value and I don't want to store it elsewhere. I thought that I could move the start point to the end point to switch it off and then move the start point to the origin again to re-enable it. If there is a way to enable/disable lines individually with BufferGeometry, then that would clearly be better.
First of all, you would have to do this:
var line = new THREE.Line( buffer_geometry, material );
line.type = THREE.LinePieces;
Second, this is not supported in r.58, but it should be.
As a work-around, you can hack WebGLRenderer.renderBufferDirect() like so:
// render lines
setLineWidth( material.linewidth );
var position = geometryAttributes[ "position" ];
primitives = ( object.type === THREE.LineStrip ) ? _gl.LINE_STRIP : _gl.LINES;
_gl.drawArrays( primitives, 0, position.numItems / 3 );
_this.info.render.calls ++;
_this.info.render.points += position.numItems;
three.js r.58
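For later readers: newer three.js releases support independent segments out of the box, so no hack is needed. A minimal sketch (THREE.LineSegments replaces THREE.Line with LinePieces, and each segment is two vertices, i.e. six floats):
const positions = new Float32Array([
    0, 0, 0,   1, 1, 0,   // segment 1: (0,0,0) -> (1,1,0)
    2, 0, 0,   2, 2, 0    // segment 2: (2,0,0) -> (2,2,0)
]);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const lines = new THREE.LineSegments(geometry, new THREE.LineBasicMaterial({ color: 0xffffff }));
scene.add(lines);
// To "switch off" segment 2, collapse it onto its start point and re-upload:
// positions.set([2, 0, 0, 2, 0, 0], 6);
// geometry.attributes.position.needsUpdate = true;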

dojo.gfx matrix transformation

Matrix transformations have got my head spinning. I've got a dojox.gfx group which I want to be draggable with Mover and then to be able to rotate around a certain point on the surface. My basic code looks like this:
this.m = dojox.gfx.matrix,
.
.
.
updateMatrix: function(){
var mtx = this.group._getRealMatrix();
var trans_m = this.m.translate(mtx.dx, mtx.dy);
this.group.setTransform([this.m.rotateAt(this.rotation, 0, 0), trans_m]);
}
The rotation point is at (0,0) just to keep things simple. I don't understand how the group is being rotated.
Any reference to a simple tutorial on matrix transformations would also help. The ones I've checked out haven't helped much.
Try the official dojox.gfx matrix tutorial and see if that documentation helps.
The official documentation is where my head started spinning. I've been staring at it for quite a long time because I couldn't work out how to feed the new coordinates into subsequent matrix transformations.
I've finally managed to figure out the problem, though. It was a matter of connecting a listener for when the Mover triggers onMoveStop:
dojo.connect(movable, "onMoveStop", map, "reposition");
I then take the newly moved distances and feed them into any rotation or scaling matrix transformations in my graphics class:
updateMatrix: function(){
//So far it is the group which is being rotated
if (this.group) {
if(!this.curr_matrix){
this.curr_matrix = this.initial_matrix;
}
this.group.setTransform([
this.m.rotateAt(this.rotation, this.stage_w_2, this.stage_h_2),
this.m.scaleAt(this.scaling, this.stage_w_2, this.stage_h_2),
this.curr_matrix
]);
//this.group.setTransform([
// this.m.rotateAt(this.rotation, mid_x, mid_y),
// this.m.scaleAt(this.scaling, mid_x, mid_y),
// this.initial_matrix]);
}
},
reposition: function(){
mtx = this.group._getRealMatrix();
this.curr_matrix = this.m.translate(mtx.dx, mtx.dy);
},
Life's dandy again. Thanks Eugene for the suggestions.
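Putting it together, a minimal sketch of the whole flow in legacy pre-AMD style; the surface and shape creation details are hypothetical placeholders:
var m = dojox.gfx.matrix;
var surface = dojox.gfx.createSurface("gfxHolder", 400, 400);
var group = surface.createGroup();
group.createRect({ x: -20, y: -20, width: 40, height: 40 }).setFill("red");
var movable = new dojox.gfx.Moveable(group);
var curr_matrix = null;
// After every drag, remember the accumulated translation...
dojo.connect(movable, "onMoveStop", function(){
    var mtx = group._getRealMatrix();
    curr_matrix = m.translate(mtx.dx, mtx.dy);
});
// ...then prepend rotation/scaling about a fixed point. setTransform multiplies
// the array left to right, so the stored translation acts on the points first.
function updateMatrix(rotation, scaling, cx, cy){
    group.setTransform([
        m.rotateAt(rotation, cx, cy),
        m.scaleAt(scaling, cx, cy),
        curr_matrix || m.identity
    ]);
}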
