I have a model with joints that individual shapes rotate around. The model is translated to the center, and the matrix applied with applyMatrix() is a rotation matrix built from a quaternion, so the model can rotate without gimbal lock.
But then I want to translate and rotate around a JOINT point, which is defined as an offset from the model, so the point itself is easy to find: it's just model.x/y/z + offset.x/y/z. That sum doesn't take the rotation from applyMatrix() into account, though, and I want to be able to calculate where the joint point (x, y, z) ends up at whatever rotation the model is in.
translate(width/2, height/2, 0);
applyMatrix(m[1][1], m[1][2], m[1][0], 0,
            m[2][1], m[2][2], m[2][0], 0,
            m[0][1], m[0][2], m[0][0], 0,
            0, 0, 0, 1);
pushMatrix();
// rotate around this joint point
translate(offsetX, offsetY, offsetZ);
// !!!!
// WHAT is the new x,y,z right here??
// !!!!
rotate(amt);
translate(-offsetX, -offsetY, -offsetZ);
shape(component);
popMatrix();
Check out the coordinates section of the reference.
Specifically, it sounds like you're looking for the screenX() and screenY() functions, which map a model-space point to screen coordinates. Or you could use the modelX(), modelY(), and modelZ() functions, which give you the position of a point after the current set of transformations has been applied, which is exactly what you need for the joint point.
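If you'd rather compute it yourself: the joint's world position is the model position plus the offset rotated by the same rotation matrix you pass to applyMatrix(). A minimal sketch of that math in JavaScript (the function and variable names are placeholders, and m is assumed to be the 3x3 rotation part of your matrix):

// world position of a joint = model position + (rotation matrix * offset)
// m is the 3x3 rotation part of the matrix passed to applyMatrix()
function jointWorldPosition(model, offset, m) {
  return {
    x: model.x + m[0][0] * offset.x + m[0][1] * offset.y + m[0][2] * offset.z,
    y: model.y + m[1][0] * offset.x + m[1][1] * offset.y + m[1][2] * offset.z,
    z: model.z + m[2][0] * offset.x + m[2][1] * offset.y + m[2][2] * offset.z,
  };
}

// usage with placeholder values (identity rotation leaves the offset unchanged)
const joint = jointWorldPosition(
  { x: 100, y: 50, z: 0 },          // model position
  { x: 10, y: 0, z: 0 },            // joint offset in model space
  [[1, 0, 0], [0, 1, 0], [0, 0, 1]] // rotation matrix
);

The same arithmetic works in Processing; the point is that the offset must go through the rotation before being added to the model position.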
In Three.js there seem to be quite a few ways of doing rotation, which I personally do not find very intuitive. See e.g. the example
http://cloud.engineering-bear.com/apps/robot/robot.html
I get very strange unexpected effects when I apply rotation to multiple objects. E.g. when I rotate objects that have been added to each other and start rotating the parent, the individual objects will all of a sudden be placed differently with respect to each other than they originally were. I am now experimenting with grouping and would like to avoid the same effect.
See http://pi-q-robot.bitplan.com/example/robot?robot=/models/thing3088064.json for the current state of affairs and https://github.com/BITPlan/PI-Q-Robot for the source code.
So I searched for proper examples following the different API options:
rotation
function renderScene() {
  stats.update();
  //side1.rotation.z += 0.02;
  // rotating the pivot rotates everything parented to it
  pivot.rotation.z += 0.02;
  requestAnimationFrame(renderScene);
  renderer.render(scene, camera);
}
https://jsfiddle.net/of1vfhzz/1/
https://github.com/mrdoob/three.js/issues/1958
rotateOnAxis
three.js rotate Object3d around Y axis at it center
How to rotate a 3D object on axis three.js?
ThreeJS - rotation around object's own axis
rotateAroundWorldAxis
object.rotateAroundWorldAxis(p, ax, r * Math.PI * 2 / frames);
How to rotate a object on axis world three.js?
https://stackoverflow.com/a/32038265/1497139
https://jsfiddle.net/b4wqxkjn/7/
THREE.js Update rotation property of object after rotateOnWorldAxis
rotateOnWorldAxis
object.rotateOnWorldAxis( axis, angle );
Rotate around World Axis
rotateAboutPoint
Three JS Pivot point
Rotation anchor point in Three.js
setRotationFromAxisAngle
https://threejs.org/docs/#api/en/core/Object3D.setRotationFromAxisAngle
setEulerFromQuaternion
quaternion = new THREE.Quaternion().setFromAxisAngle( axisOfRotation, angleOfRotation );
object.rotation.setFromQuaternion( quaternion ); // named setEulerFromQuaternion in older three.js releases
Three.js - Rotating a sphere around a certain axis
applyMatrix
this.mesh.updateMatrixWorld(); // important !
childPart.mesh.applyMatrix(new THREE.Matrix4().getInverse(this.mesh.matrixWorld))
Applying a matrix in Three.js does not what I expect
I like the jsFiddle for https://stackoverflow.com/a/56427636/1497139
var pivot = new THREE.Object3D();
pivot.add( cube );
scene.add( pivot );
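Condensed into a runnable form, the pivot pattern looks like this (a minimal sketch assuming the usual scene/renderer/camera setup; the offset and speed values are made up):

// place the pivot at the point you want to rotate around
var pivot = new THREE.Object3D();
pivot.position.set(10, 0, 0);
scene.add(pivot);
// offset the cube inside the pivot so the pivot point is not its center
cube.position.set(-10, 0, 0);
pivot.add(cube);
// rotating the pivot swings the cube around the pivot point
function animate() {
  requestAnimationFrame(animate);
  pivot.rotation.z += 0.02;
  renderer.render(scene, camera);
}
animate();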
I also found the following discussions:
pivot issue on discourse.threejs.org
https://discourse.threejs.org/t/rotate-group-around-pivot/3656
https://discourse.threejs.org/t/how-to-rotate-an-object-around-a-pivot-point/6838
https://discourse.threejs.org/t/set-dynamically-generated-groups-pivot-position-to-the-center-of-its-children-objects-position/6349
https://discourse.threejs.org/t/my-3d-model-is-not-rotating-around-its-origin/3339/3
https://jsfiddle.net/blackstrings/c0o3Lm45/
https://discourse.threejs.org/t/rotate-object-at-end-point/2190
https://jsfiddle.net/f2Lommf5/3594/
Questions
None of the above information is clear enough to get to the point of the problem to be solved. The graphics above state the problem much more clearly than the proposals state a solution.
a) I'd like to use the cylinder as the axis even when the cylinder is moved. I'd expect the easiest way to go would be to use rotateAroundWorldAxis - is that available in the latest revision of three.js, or do I have to add it from e.g. https://stackoverflow.com/a/32038265/1497139?
b) I'd like to get a chain of objects to be rotated, so that I can later apply inverse kinematics as in
https://github.com/jsantell/THREE.IK
https://jsantell.github.io/THREE.IK/
Although I looked at the source code of those solutions, I can't really find the place where the parent-child positioning and rotating happens. What are the relevant lines of code / API functions that would make proper rotation around a chain of joints happen?
I already looked into the Bone/Skeleton API of Three.js but had the same problem there - lots of lines of code but no clear point where the rotation/positioning between child and parent happens.
Question a)
Basically it works as expected:
cylinder.position.set( options.x, 15, options.z );
pivot.position.x=options.x;
pivot.position.z=options.z;
see
https://jsfiddle.net/wf_bitplan_com/4f6ebs90/13/
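Condensed from the fiddle, the fix amounts to moving the pivot together with the cylinder, so the rotation axis keeps passing through it (the function name here is a placeholder):

// whenever the cylinder is moved, keep the pivot on its axis
function onCylinderMoved(options) {
  cylinder.position.set(options.x, 15, options.z);
  pivot.position.x = options.x;
  pivot.position.z = options.z;
}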
Question b)
see
https://codepen.io/seppl2019/pen/zgJVKM
The key is to set the positions correctly. Instead of following the proposal at https://stackoverflow.com/a/43837053/1497139, the size is computed in this case.
// create the pivot to rotate around/about
this.pivot = new THREE.Group();
this.pivot.add(this.mesh);
// shift the pivot position to fit my size + the size of the joint
this.pivot.position.set(
  x,
  y + this.size.y / 2 + this.pivotr,
  z + this.size.z / 2
);
// reposition the mesh accordingly
this.mesh.position.set(0, this.size.y / 2, 0);
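With this layout, a chain of joints falls out naturally: each part's pivot is added to its parent's pivot, so rotating one joint carries all downstream parts with it (a sketch with invented part names):

// build the chain: each child pivot is parented to the previous pivot
upperArm.pivot.add(foreArm.pivot);
foreArm.pivot.add(hand.pivot);
// rotating a joint now moves everything below it in the chain
upperArm.pivot.rotation.z = Math.PI / 4;
foreArm.pivot.rotation.z = -Math.PI / 8;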
I'm trying to transform a Plane according to an Object3D (position and rotation). The Plane is used as a clipping plane.
If I call Plane.applyMatrix4( Object.matrixWorld ), it just applies the matrix once and doesn't bind the Plane to that matrix for future transformations.
However, if I call the same function in a loop, the transformations applied to the Plane keep accumulating.
E.g. if I set Object.rotation.z = 1 once and then call Plane.applyMatrix4( Object.matrixWorld ) in a loop, the Plane rotates a further 1 radian around the Z axis on every iteration.
Any ideas?
Since the Plane is used as a clipping plane, I also tried to transform it in the shader material of the mesh being clipped, which would maybe be the best option performance-wise, but I'm not skilled enough to accomplish that.
I would just do this:
object.add( plane );
In this way, plane is a child of object. All transformations applied to object are also applied to plane. Besides, it's now very easy to transform plane relative to object.
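For example (note that this requires the plane to be an Object3D such as a Mesh; a raw THREE.Plane used for clipping is plain math and cannot be add()ed):

// a plane mesh as a child inherits every transform of object
var planeMesh = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 2),
  new THREE.MeshBasicMaterial({ side: THREE.DoubleSide })
);
object.add(planeMesh); // from now on it moves and rotates with object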
The quickest solution I found is to reset the Plane and re-apply the Object's .matrixWorld to it. As I said before, it would be great to add useful transformation and "binding" methods to the THREE.Plane object, since it's used as a clipping plane too.
Right now I did it this way:
// will store the inverse of the object's previous world matrix
var inversePrevMatrix = new THREE.Matrix4();

function loop() {
  // reset the plane's previous transformations
  plane.applyMatrix4( inversePrevMatrix );
  // apply the actual object matrix in world coordinates
  plane.applyMatrix4( object.matrixWorld );
  // store the inverse for the next iteration
  inversePrevMatrix.getInverse( object.matrixWorld );
}
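An equivalent approach that avoids undoing the previous step each frame is to keep an untransformed copy of the plane and re-derive it from the object's world matrix (a sketch, assuming basePlane holds the plane's initial state):

// the plane's state before any transformation
var basePlane = plane.clone();
function loop() {
  // start from the base state, then apply the current world matrix once
  plane.copy(basePlane).applyMatrix4(object.matrixWorld);
}

This also avoids the floating-point drift that can accumulate when a matrix and its inverse are applied repeatedly.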
I am trying to create a horizontal cylinder. I have found the link below, which is used for creating a vertical cylinder:
How to draw a cylinder on html5 canvas
Can someone tell me the changes needed to create a horizontal cylinder?
In general, if you have a drawing you want to rotate, you can use transforms.
Transforms move, rotate (and scale) the canvas without the need to recode your desired drawing.
A Demo: http://jsfiddle.net/m1erickson/RU26r/
This transform will rotate your original drawing 90 degrees:
drawRotatedCylinder(100,100,50,30,90);
function drawRotatedCylinder(x, y, w, h, degreeAngle) {
  // save the context in its unrotated state
  context.save();
  // translate to the rotation point
  // your object will rotate around the rotation point
  // so if you want it to rotate from the center then
  // translate to x+width/2, y+height/2
  context.translate(x + w / 2, y + h / 2);
  // rotate by 90 degrees
  // rotate() takes radians, so convert to radians
  // with radians == degrees * Math.PI / 180
  context.rotate(degreeAngle * Math.PI / 180);
  // draw your original shape--no recoding required!
  drawCylinder(-w / 2, -h / 2, w, h);
  // restore the context to its untransformed state
  context.restore();
}
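For completeness, here is a minimal sketch of the drawCylinder helper the call above assumes, following the ellipse-plus-body idea from the linked answer (the colors and cap proportions are placeholders):

function drawCylinder(x, y, w, h) {
  var capH = h * 0.25; // vertical radius of the end caps (assumed proportion)
  // body
  context.beginPath();
  context.rect(x, y, w, h);
  context.fillStyle = "skyblue";
  context.fill();
  // bottom cap (lower half-ellipse)
  context.beginPath();
  context.ellipse(x + w / 2, y + h, w / 2, capH, 0, 0, Math.PI);
  context.fill();
  // top cap (full ellipse)
  context.beginPath();
  context.ellipse(x + w / 2, y, w / 2, capH, 0, 0, Math.PI * 2);
  context.fillStyle = "lightcyan";
  context.fill();
  context.stroke();
}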
I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I have only depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't represent the view-space position at all. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y-axis is inverted. Only when the camera is at (0,0,0) does the light stand at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView;       // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
    &D3DXVECTOR3(x, y, z),           // the camera position
    &D3DXVECTOR3(xt, yt, zt),        // the look-at position
    &D3DXVECTOR3(0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
    fov,   // the horizontal field of view
    asp,   // aspect ratio
    znear, // the near view-plane
    zfar); // the far view-plane
D3DXMATRIX vysl = matView * matProjection;
eff->SetMatrix("worldViewProj", &vysl); // vertices are transformed OK in the shader
// render g-buffer
D3DXVECTOR4 lpos;
D3DXVECTOR3 lpos2(0, 0, 0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight", &lpos); // but there is already a mess in lpos at this point
// render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried to transform the light only by matView, without the projection, but the problem is still the same. If I transform the light in the shader instead, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are the vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view-space position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1..1 screen-space coordinate, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen, float viewSpaceZ)
{
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z  = viewSpaceZ;
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
The result of this transform, D3DXVec3Transform(&lpos, &lpos2, &vysl);, is a vector in homogeneous space (i.e. a projected vector, but not yet divided by w). In your shader, however, you use its xy components without taking w into account. This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides xyz by the interpolated w.
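To make the missing divide concrete, here is a stand-alone JavaScript sketch with invented numbers (not the original D3DX code): the clip-space xy of a point only becomes a usable -1..1 coordinate after dividing by w.

// multiply a 4x4 row-major matrix by a column vector [x, y, z, 1]
function transform(m, v) {
  var out = [];
  for (var r = 0; r < 4; r++) {
    out[r] = m[4*r]*v[0] + m[4*r+1]*v[1] + m[4*r+2]*v[2] + m[4*r+3]*v[3];
  }
  return out;
}
// a simple left-handed perspective projection (fov 90 degrees, aspect 1, near 1, far 100)
var n = 1, f = 100;
var proj = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, f/(f-n), -n*f/(f-n),
  0, 0, 1, 0, // w receives the view-space z
];
var lightView = [3, 2, 10, 1];  // light position in view space
var clip = transform(proj, lightView);
console.log(clip[0], clip[1]);  // 3, 2 -- "off the charts" before the divide
var ndcX = clip[0] / clip[3];   // divide by w...
var ndcY = clip[1] / clip[3];
console.log(ndcX, ndcY);        // 0.3, 0.2 -- now inside the -1..1 range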
I know the camera is at (0,0,0) and I need to rotate the world around it, but I'm getting confused about the order in which to do the translations and rotations.
If I have a theoretical x,y,z coordinate system where the camera is at (cx,cy,cz) oriented at (cox,coy,coz), and I have a cube which sits at (bx,by,bz) oriented at (box,boy,boz), then what series of glTranslatef and glRotatef calls is required to rotate the box correctly and place it at the correct location relative to the camera?
Here are the basic operations, but I have no idea what order to put them in and what other operations are required to make it show up as expected.
gl.glLoadIdentity();
// rotation and translation for cube
gl.glRotatef(box, 1,0,0);
gl.glRotatef(boy, 0,1,0);
gl.glRotatef(boz, 0,0,1);
gl.glTranslatef(bx,by,bz);
// rotation and translation for camera
gl.glRotatef(cox, 1,0,0);
gl.glRotatef(coy, 0,1,0);
gl.glRotatef(coz, 0,0,1);
gl.glTranslatef(cx,cy,cz);
// draw the cube
cube.draw(gl);
Do it the other way around: camera transform first, then your object(s):
gl.glLoadIdentity();
// camera (view) transform: the inverse of the camera's placement,
// so both the rotations and the translation are negated
gl.glRotatef(-cox, 1,0,0);
gl.glRotatef(-coy, 0,1,0);
gl.glRotatef(-coz, 0,0,1);
gl.glTranslatef(-cx,-cy,-cz);
// model transform for the cube: translate to its position first,
// then rotate, so the cube spins about its own center instead of
// orbiting the origin
gl.glTranslatef(bx,by,bz);
gl.glRotatef(box, 1,0,0);
gl.glRotatef(boy, 0,1,0);
gl.glRotatef(boz, 0,0,1);
// draw the cube
cube.draw(gl);
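The same idea expressed with matrices, as a sketch in three.js terms (not the original JOGL code): the view transform is the inverse of the camera's placement, and inverting the matrix reverses the order of the individual operations automatically.

// hypothetical camera placement: translate to (cx, cy, cz), then rotate around x
var cx = 1, cy = 2, cz = 5, cox = 0.3;
var cameraPlacement = new THREE.Matrix4()
  .makeTranslation(cx, cy, cz)
  .multiply(new THREE.Matrix4().makeRotationX(cox));
// the view matrix is the inverse: (T * R)^-1 = R^-1 * T^-1, so the
// negated rotation and translation swap places, just like above
var view = cameraPlacement.clone().invert();
// a world-space point expressed in camera coordinates
var p = new THREE.Vector3(0, 0, 0).applyMatrix4(view);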