I am using a handpose estimation model that uses the webcam to generate (x, y, z) coordinates for each joint of a moving hand (the z is estimated accurately). I also have a .glb character, made in Blender, with a full T-pose skeleton that includes hands.
What I cannot figure out is how to use these real-time data points to animate the imported 3D character's hand in ThreeJS. The (x, y, z) values are Cartesian coordinates, and from what I've read in the docs, ThreeJS uses Euler angles/quaternions for rotation (correct me if I'm wrong). I'm at an impasse right now because I am unsure how to convert this positional data into angular data.
I am fairly new to animation so please do let me know if there are other libraries that can help me do this in an easier fashion.
I think you are looking for inverse kinematics. It calculates the variable joint parameters (angles or scales) from the position of the end of a chain. The end of the chain is your (x, y, z) position in space; the chain is, for example, the arm; and the joints are the virtual joints of your character's rig.
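As a hedged sketch of what that can look like in three.js, using the CCDIKSolver that ships with the three.js examples (the bone indices, the target-bone setup, and the landmark scaling below are placeholders you would adapt to your own rig):

import { CCDIKSolver } from 'three/examples/jsm/animation/CCDIKSolver.js';

// skinnedMesh is the hand's SkinnedMesh from your loaded .glb.
// The bone indices are hypothetical -- inspect skinnedMesh.skeleton.bones
// to find the real ones for your rig.
const iks = [{
  target: 12,   // a "target" bone that is part of the same skeleton
  effector: 7,  // e.g. the index fingertip bone
  links: [{ index: 6 }, { index: 5 }, { index: 4 }]  // finger joints, tip to base
}];
const ikSolver = new CCDIKSolver(skinnedMesh, iks);

function onHandposeFrame(landmark) {
  // Map the model's (x, y, z) output into the scene's space; the scale
  // and sign flips here are guesses you would calibrate yourself.
  skinnedMesh.skeleton.bones[12].position.set(
    landmark.x * 0.01, -landmark.y * 0.01, landmark.z * 0.01
  );
  ikSolver.update();  // rotates the link bones so the effector chases the target
}

One solver entry per finger chain (plus one for the wrist/arm) is a reasonable starting structure.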
I am trying to create a 3D Visualization of an RC airplane in Threebox. The RC plane sends live telemetry, including:
GPS Coordinates
Gyro sensor data, showing the pitch, roll and heading of the plane.
I have now loaded a Model of an airplane in Threebox, no problems with that.
My problem comes down to the rotation of the plane. I want the plane object to represent the current orientation of the RC plane. Since I have the live telemetry from the flight controller, this should be possible.
In the documentation, I have found this method, which seemed like exactly what I needed:
plane.setRotation({x: roll, y: pitch, z: yaw/heading})
And it basically works. I can rotate the Plane around its axes. But things get messed up when I combine the rotations.
For example: when I just update the roll axis, the object behaves just like I want it to. However, when I change the heading of the plane by 90 degrees, the roll axis suddenly becomes the pitch axis. It seems to me that the axes of the plane object don't rotate with the plane itself.
I've prepared a recreation of the issue on jsfiddle. You can change the heading of the plane using the slider in the bottom right.
I've been stuck on this for days, would be super happy for any help!
There are lots of issues with your jsfiddle that prevent it from running. To isolate an issue and make it easier to test, you should eliminate as many variables as possible; you're using two third-party libraries that have a big hand in how transformations behave, particularly Threebox.
I would recommend sticking with three.js's built-in transformation tools unless you specifically need lat/lng transformations, or other transformations that move between a local Cartesian space and a global coordinate system. In this case, a very basic plane.setRotationFromEuler(new THREE.Euler(yaw, pitch, roll)) should do the trick. Be aware of how much the order of Euler rotations can affect the outcome, and that three.js uses radians for all its rotations, not degrees.
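The one-liner above leaves the rotation order implicit; here is a minimal sketch with an explicit order and degree-to-radian conversion (the 'YXZ' mapping, yaw about Y, pitch about X, roll about Z, assumes a Y-up convention, so verify it against your flight controller's axes):

// Telemetry arrives in degrees; three.js wants radians.
// 'YXZ' applies yaw first, then pitch, then roll.
const euler = new THREE.Euler(
  THREE.MathUtils.degToRad(pitch),
  THREE.MathUtils.degToRad(yaw),
  THREE.MathUtils.degToRad(roll),
  'YXZ'
);
plane.setRotationFromEuler(euler);

With an order like this, roll is applied last in the body frame, so the roll axis stays the plane's own nose axis even after the heading changes, which is exactly the behaviour missing in the jsfiddle.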
I'm trying to create a 3D tile-system world merged from smaller 3D objects. To create these, we use another application, made in Unity, which loads all the small 3D assets separately and can be used to build a new model. Upon saving these model files, a JSON file is created which contains the scales, positions, rotations, etc. of all the 3D models used.
We have decided to use this 'North, East, South, West' system to make sure everything will look good in production. However, when we try to render these same JSON files in ThreeJS, we have noticed that the X axis is reversed compared to the Unity application we're using.
What we want is this:
North is increasing Z value (north and south are fine)
East is increasing X value
West is decreasing X value
At the moment this is what's going wrong in ThreeJS:
East is decreasing X value
West is increasing X value
What we already have tried is this:
mirror / flip the camera view
when a coordinate drops below 0 we make it absolute (-10 will be 10)
when a coordinate is above 0 we make it negative (10 will be -10)
But nothing of the above had the desired effect. Reversing the coordinates in code brings other problems for scaled and rotated objects that are smaller or larger than 1x1x1. Ideally we wouldn't have to change our coordinates at all, so they could still be used as a solid reference; instead we'd change the direction of the X axis from the left side to the right side of 0,0,0.
ThreeJS currently uses a right-handed coordinate system, and what we want is a left-handed coordinate system. Is this something that can be configured within ThreeJS?
Does anyone have an idea what I can try, other than flipping all the X coordinates?
It's not something you can configure in three.js or Unity. Different file formats typically have a notional coordinate system built into them. GLTF, for example, is represented in a right-handed coordinate system. It's the responsibility of the format importers and exporters to handle the conversion -- this is what the builtin three.js importers do.
I would suggest using an existing format such as GLTF to represent your scene (there is an existing exporter for Unity and an importer for three.js).
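Loading such a file on the three.js side is then only a few lines (a minimal sketch; the file name is a placeholder):

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
// The importer handles glTF's right-handed convention for you.
loader.load('tiles.glb', (gltf) => {
  scene.add(gltf.scene);
});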
Or if you'd like to retain control over your own file format, you can do the left-to-right-handed coordinate system conversion yourself, either at export from Unity or at import into three.js. Looking at your image, it looks like you'll want to multiply all of the X values by -1.0 to get them to look the same. You'll want to save your rotations as quaternions as well, to avoid rotation-order differences.
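A sketch of that conversion at import time, assuming your JSON stores positions as {x, y, z} and rotations as quaternions {x, y, z, w} (mirroring across the YZ plane negates the X position and the quaternion's y and z components):

// Convert one object's transform from the Unity tool's left-handed
// system to three.js's right-handed one by mirroring across the YZ plane.
function unityToThree(entry) {
  const position = new THREE.Vector3(
    -entry.position.x, entry.position.y, entry.position.z
  );
  // A reflection flips the rotation's sense: negate the quaternion's
  // y and z components (x and w stay as they are).
  const quaternion = new THREE.Quaternion(
    entry.rotation.x, -entry.rotation.y, -entry.rotation.z, entry.rotation.w
  );
  return { position, quaternion };
}

Scales can be copied through unchanged, since the mirroring is absorbed into the position and rotation.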
Of course you could always just scale the whole scene by -1.0 on X but that may make it difficult to work with other parts of three.js.
I would consider applying a (-1, 1, 1) scale to the root of your "Unity exported scene"; this way you can keep the rest of your scene unchanged.
obj3d.scale.set(-1, 1, 1);
I am trying to build a simple camera matching (or match moving) application. The functionality is the same as that in most 3d applications like 3ds Max or Maya. Given an image of a cube and a 3d model of the cube, the user selects points on the image corresponding to each vertex of the model. The application must then generate a camera view that displays the 3d cube model from the same angle as shown in the image.
Can anyone point me in the direction of an algorithm for that?
PS: The camera is calibrated and the camera calibration matrix is available to the program
You can try the algorithm illustrated step by step at http://www.offbytwo.net/camera-matching/. The Octave source code is provided, too.
As a plus, you don't need to start with a cube: any two edges parallel to the x axis and two in the y direction are enough.
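A rough JavaScript sketch of that idea (this is not the linked Octave code; vanishing points are homogeneous, so the recovered columns are only defined up to sign and you may need to flip one to keep the rotation right-handed, and translation additionally needs one known point correspondence):

// Homogeneous 2D helpers: the line through two points, and the
// intersection of two lines, are both cross products.
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const normalize = (v) => {
  const n = Math.hypot(v[0], v[1], v[2]);
  return v.map((c) => c / n);
};

// Each edge is given by two homogeneous image points [u, v, 1].
function vanishingPoint(p1, p2, p3, p4) {
  return cross(cross(p1, p2), cross(p3, p4));
}

// Kinv is the inverted 3x3 calibration matrix, as rows.
// vx and vy are the vanishing points of the x- and y-parallel edges.
function rotationFromVanishingPoints(Kinv, vx, vy) {
  const apply = (M, v) =>
    M.map((row) => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
  const r1 = normalize(apply(Kinv, vx));  // first rotation column
  const r2 = normalize(apply(Kinv, vy));  // second rotation column
  const r3 = cross(r1, r2);               // third column: r1 x r2
  return [r1, r2, r3];
}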
I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden.
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
1. Create a transformation matrix by which to transform the cube's vertices.
2. For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
3. I then get the surface normal of the face using Newell's method, and take the dot product of that normal with some made-up vec3, e.g. [-1, 1, 1], since I couldn't think of a good value to put here. I've seen some folks use the position of the camera for this, but...
4. Skipping the usual step of applying a camera matrix, I pull the x and y values from the resulting vectors and send them to my line and face renderers, but only for faces whose dot product is above 0. I realize it's rather arbitrary which ones I pull, really.
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on the faces is incorrect (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that normals aren't pertinent, that it's the direction the line is being drawn in, but it's hard for me to wrap my head around that, or to tell whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
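A minimal sketch of that winding test after projection. Which sign counts as "front" depends on both your vertex winding and whether screen y points up or down, so treat the comparison direction as an assumption and flip it if the wrong faces disappear:

// p1, p2, p3 are the projected 2D vertices of one face, in the order
// they are defined on the model. The value below is twice the
// triangle's signed area; its sign encodes the winding direction.
function isBackFacing(p1, p2, p3) {
  const signedArea =
    (p2.x - p1.x) * (p3.y - p1.y) -
    (p2.y - p1.y) * (p3.x - p1.x);
  return signedArea <= 0;  // flip to >= 0 for the opposite convention
}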
Given the 3D vector of the direction that the camera is facing and the orientation/direction vector of a 3D object in the 3D space, how can I calculate the 2-dimensional slope that the mouse pointer must follow on the screen in order to visually be moving along the direction of said object?
Basically I'd like to be able to click on an arrow and make it move back and forth by dragging it, but only if the mouse pointer drags (roughly) along the length of the arrow, i.e. in the direction that it's pointing to.
Thank you!
I'm not sure I 100% understand your question. Would you mind posting a diagram?
You might find these of interest: I answered previous questions about calculating a local X, Y, Z axis given a camera direction (look-at) vector, and about translating an object in a plane parallel to the camera.
Both of those answers use the vector dot product and vector cross product to compute the required vectors. In your case, the dot product can also be used to find the angle between two vectors once you have them.
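For the "slope on screen" part specifically, one approach is to project two points of the arrow's axis into screen space and take their difference (a sketch; origin and dir, the arrow's base point and unit direction as THREE.Vector3, are assumptions about your setup):

// Project the arrow's axis into pixel space to get the 2D direction
// the mouse drag should follow.
function screenDirection(origin, dir, camera, width, height) {
  const toScreen = (v) => {
    const p = v.clone().project(camera);  // to normalized device coords
    return new THREE.Vector2(
      (p.x + 1) / 2 * width,
      (1 - p.y) / 2 * height  // y flips between NDC and pixels
    );
  };
  const a = toScreen(origin);
  const b = toScreen(origin.clone().add(dir));
  return b.sub(a).normalize();  // the arrow's 2D slope on screen
}

The dot product of the normalized mouse delta with this direction also tells you whether the drag is "roughly along" the arrow: values near ±1 are along it, values near 0 are across it.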
It depends to an extent on the transformation you are using to convert your 3D real-world coordinates to 2D screen coordinates, e.g. perspective, isometric, etc. You will typically have a forward (3D -> 2D) and a backward (2D -> 3D) transformation in play, where the backward transformation loses information: going forward, each 3D point maps to a unique 2D point, but going back from that 2D point may not yield the same 3D point. You can often project the mouse point onto the object to recover the missing dimension.
For mouse dragging, you typically have the user specify an operation (translation in the plane of projection, zooming in or out, or rotating about an anchor point). Your input is the mouse coordinate at the start and end of the drag, which you transform into your 3D coordinate system to get two 3D coordinates; their difference gives you the dx, dy, dz for dragging/translation, etc.
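A sketch of that backward transform in three.js, using a plane through the object and parallel to the screen as the extra constraint (the plane choice and the anchor parameter are assumptions; other constraints, such as the arrow's own axis, work the same way):

const raycaster = new THREE.Raycaster();
const dragPlane = new THREE.Plane();
const hit = new THREE.Vector3();

// ndcX/ndcY are the mouse position in normalized device coordinates
// (-1..1); anchor is a world-space point on the dragged object.
function mouseTo3D(ndcX, ndcY, camera, anchor) {
  dragPlane.setFromNormalAndCoplanarPoint(
    camera.getWorldDirection(dragPlane.normal),  // plane faces the camera
    anchor
  );
  raycaster.setFromCamera(new THREE.Vector2(ndcX, ndcY), camera);
  return raycaster.ray.intersectPlane(dragPlane, hit);  // null if parallel
}

// Call this at drag start and during the drag; the difference of the
// two hits is the (dx, dy, dz) translation.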