I am working on an animation project in which I am supposed to parse a .bvh file and get the animation as the output, but there are a couple of things that I would like to change.
I know that the pelvis is the root joint and that the position and rotation of the pelvis joint are given in global coordinates, so:
Where is the global origin usually located in a .bvh file?
How can I change the origin point (0,0,0) of the .bvh to the ground instead of the root joint?
I'm new to ThreeJS. Currently, I want to load a model from an .obj file and use AxesHelper to measure its length. However, once I loaded the model, I found that the origin point of the model is different from the origin point of the AxesHelper; there is a distance between them. I know I could set their positions manually, but I would like the origin of the model to coincide with the AxesHelper as soon as the model is loaded. How can I make their origin points exactly the same? Thanks in advance.
The origin point depends on how the geometry is translated relative to its local (0,0,0) point. You usually get good results by centering the object's geometry on its bounding box via BufferGeometry.center(). You can also use the geometry's translate() method directly to shift it by an arbitrary offset.
In general, it's preferable to make such model adjustments during the design phase in a tool like Blender, not after the import into a 3D engine.
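As a minimal sketch of the first approach (assuming the OBJLoader addon, a placeholder model path, and an existing scene object; import paths vary by three.js version):

import * as THREE from 'three';
import { OBJLoader } from 'three/addons/loaders/OBJLoader.js';

// Sketch: center each mesh's geometry on its bounding box after loading,
// so the model's local origin coincides with the scene origin (and the AxesHelper).
// 'model.obj' is a placeholder path; `scene` is assumed to exist.
const loader = new OBJLoader();
loader.load('model.obj', (obj) => {
  obj.traverse((child) => {
    if (child.isMesh) {
      // translate the geometry so its bounding-box center lands on (0,0,0)
      child.geometry.center();
    }
  });
  scene.add(obj);
});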
For a personal project, I've created a simple 3D engine in Python using as few libraries as possible. It does what I wanted: I can render simple polygons and move the camera around. However, there is a problem:
I implemented a simple flat shader, but in order for it to work, I need to know the camera location (the camera is my light source). The problem is that I have no way of knowing the camera's location in world space. At any point, I can display my view matrix, but I am unsure how to extract the camera's location from it, especially after I rotate the camera. Here is a screenshot of my engine with the view matrix. The camera has not been rotated yet, and it is very simple to extract its location (0, 1, 4).
However, upon moving the camera to a point between the X and Z axes and pointing it upwards (while staying at the same height), the view matrix changes to this:
It is now obvious that the last column cannot be taken directly as the camera location (it should be something like (4, 1, 4) in the last picture).
I have tried a lot of math, but I can't figure out how to determine the camera's x, y, z location from the view matrix. I would appreciate any and all help in solving this, as it seems to be a simple problem whose solution nevertheless eludes me. Thank you.
EDIT:
I was advised to transform a vertex (0,0,0,1) by my view matrix. This, however, does not work. See the example (the vertex obviously is not located at the printed coordinates):
Just take the transform of the vector (0,0,0,1) with the modelview matrix, which is simply the rightmost column of the modelview matrix. (Strictly speaking, that column is the world origin expressed in view space, not the camera's world position; see the edit below for why you don't actually need the latter.)
EDIT: @ampersander: I wonder why you're trying to work with the camera location in the first place, if you assume the light source to be located at the camera's position. In that case, just be aware that in OpenGL there is no such thing as a camera; what the "view" transform actually does is move everything in the world around so that wherever you assume your camera to be ends up at the coordinate origin (0,0,0).
In other words: after the modelview transform, the transformed vertex position is in fact the vector from the camera to the vertex, in view space. This means that for your illumination calculation, the direction toward the light source is simply the negated vertex position. Take that, normalize it to unit length, and plug it into the illumination term.
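For completeness, the camera's world-space position can still be recovered from the view matrix itself: writing the view matrix as [R | t], the camera centre C satisfies R*C + t = 0, so C = -R^T * t (equivalently, the translation part of the inverted view matrix). A minimal framework-free sketch, with an assumed row-major m[row][col] layout (shown in JavaScript; the same few lines port directly to Python):

// Recover the camera's world position from a 4x4 view matrix m (row-major).
// Only valid while the upper-left 3x3 block is a pure rotation (no scaling).
function cameraPositionFromView(m) {
  const t = [m[0][3], m[1][3], m[2][3]];     // translation column
  const c = [0, 0, 0];
  for (let i = 0; i < 3; i++) {
    // i-th component of -R^T * t: dot the i-th *column* of R with t, negated
    c[i] = -(m[0][i] * t[0] + m[1][i] * t[1] + m[2][i] * t[2]);
  }
  return c;
}

For the rotated view matrix in the question, this should return something like the expected (4, 1, 4).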
I'm trying to create a 3D tile-based world merged from smaller 3D objects. To create these, we use another application made in Unity, which loads all the small 3D assets separately and can be used to compose a new model. Upon saving such a model, a JSON file is created that contains the scale, position, rotation, etc. of every 3D model used.
We decided to use a 'North, East, South, West' system to make sure everything looks right in production. However, when we try to render these same JSON files in ThreeJS, we've noticed that the X axis is reversed compared to the Unity application we're using.
What we want is this:
North is increasing Z value (north and south are fine)
East is increasing X value
West is decreasing X value
At the moment this is what's going wrong in ThreeJS:
East is decreasing X value
West is increasing X value
What we have already tried is this:
mirror / flip the camera view
when a coordinate drops below 0 we make it absolute (-10 will be 10)
when a coordinate is above 0 we make it negative (10 will be -10)
But none of the above had the desired effect. Reversing the coordinates in code brings other problems with scaled and rotated objects that are smaller or larger than 1x1x1. Ideally, we would not have to change our coordinates at all, so that they can still be used as a solid reference; instead, the direction of the X axis would change from the left side to the right side of (0,0,0).
ThreeJS currently uses a right-handed coordinate system, and what we want is a left-handed one. Is this something that can be configured within ThreeJS?
Does anyone have an idea what I can try, other than flipping all X coordinates?
It's not something you can configure in three.js or Unity. Different file formats typically have a notional coordinate system built into them. GLTF, for example, is represented in a right-handed coordinate system. It's the responsibility of the format importers and exporters to handle the conversion -- this is what the built-in three.js importers do.
I would suggest using an existing format such as GLTF to represent your scene (there is an existing exporter for Unity and an importer for three.js).
Or, if you'd like to retain control over your own file format, you can do the left-handed to right-handed conversion yourself, either on export from Unity or on import to three.js. Looking at your image, it looks like you'll want to multiply all of the X values by -1.0 to get them to match. You'll also want to save your rotations as quaternions to avoid rotation-order differences; a sketch follows below.
Of course, you could always just scale the whole scene by -1.0 on X, but that may make it difficult to work with other parts of three.js.
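As a rough sketch of that per-object conversion (the JSON field names position/rotation/scale are assumptions about your format; the quaternion rule is the standard one for mirroring across the YZ plane):

// Sketch: convert one entry of the Unity-exported JSON (left-handed)
// into three.js (right-handed) by negating X, as described above.
function unityEntryToThree(entry) {
  return {
    // negate X to flip handedness
    position: new THREE.Vector3(-entry.position.x, entry.position.y, entry.position.z),
    // an X mirror negates the Y and Z components of a quaternion
    quaternion: new THREE.Quaternion(
      entry.rotation.x, -entry.rotation.y, -entry.rotation.z, entry.rotation.w
    ),
    scale: new THREE.Vector3(entry.scale.x, entry.scale.y, entry.scale.z),
  };
}

This converts the transforms only; if the mesh data itself also comes from Unity, it needs the same X flip (which additionally reverses triangle winding).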
I would consider applying a (-1, 1, 1) scale to the root of your Unity-exported scene; this way, you can keep the other parts of your scene unchanged.
obj3d.scale.set(-1, 1, 1);
Usually when I want to rotate an object/node in my Ogre scene, I call the node's rotate() method. That rotates the node locally, relative to its current rotation. So, for example, when I start with zero rotation and then rotate twice by 5 degrees about one axis, after the second call the object is rotated by 10 degrees in total.
Now I need to set the absolute rotation of the node/object directly, regardless of its current rotation. That is, even if I don't know the object's current rotation, I need to set it to, say, 45 degrees on the X axis; something like setRotation().
I know there is a setOrientation() method in the SceneNode class, which expects a quaternion object. I also know that I can get the current orientation quaternion. What I don't know is: how can I use/change this current orientation quaternion to set the new absolute rotation of the node?
You can find a well-written introduction to quaternions and Ogre here: http://www.ogre3d.org/tikiwiki/Quaternion+and+Rotation+Primer
The section on Resetting Orientation might be of particular interest. In short, you don't need the current orientation at all: construct the absolute orientation you want as a quaternion, e.g. Ogre::Quaternion(Ogre::Degree(45), Ogre::Vector3::UNIT_X), and pass it to setOrientation() instead of composing relative rotations with rotate().
In this example, I am confused about the use of rotate and translate. More specifically:
// The earth rotates around the sun
pushMatrix();           // save the current transformation matrix
rotate(theta);          // rotate the coordinate system about the current origin (the sun)
translate(50,0);        // move the origin 50 units along the rotated x-axis
fill(50,200,255);
ellipse(0,0,10,10);     // draw the earth at the new origin
What does rotate(theta) rotate? What is the relationship between rotate and translate?
rotate and translate act on the current coordinate system.
That is, draw calls (rect(), ellipse(), etc.) apply within the current coordinate system, while rotate and translate move the coordinate system itself.
Prior to the block of code you supplied, the origin was translated to the center of the window in order to draw the sun.
pushMatrix() saves that state; then, relative to the sun's center, "everything" (the coordinate system) is rotated by theta and translated by (50,0), effectively moving the current origin to the right position to draw the earth with ellipse(0,0,10,10).
Note that you could omit the translate step and use ellipse(50,0,10,10) to get the same visual result, were it not for the fact that the next code block depends on that translate to position the moon correctly.
Here is an interesting link that explains this in terms of "moving the graph paper":
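For reference, here is the same idea as a complete, minimal sketch, written in p5.js (the JavaScript sibling of Processing); the 50-unit orbit radius comes from the example above, while the canvas size and rotation speed are arbitrary:

let theta = 0;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  translate(width / 2, height / 2);  // move the origin to the sun
  fill(255, 200, 50);
  ellipse(0, 0, 20, 20);             // the sun

  push();                            // save the coordinate system (pushMatrix in Processing)
  rotate(theta);                     // rotate the "graph paper" about the sun
  translate(50, 0);                  // slide the origin out to the orbit radius
  fill(50, 200, 255);
  ellipse(0, 0, 10, 10);             // the earth, drawn at the new origin
  pop();                             // restore the saved coordinate system (popMatrix)

  theta += 0.02;
}

In the original example, the moon's block would go between the earth and pop(), so that it inherits the earth's rotated, translated coordinate system.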