Three.js: rotating and positioning geometries

I'm trying to write an exporter from a 3D plant-modelling software to Three.js, but I got stuck on the rotations and translations of the objects.
So far I have tried quaternions and transformation matrices, but neither result is satisfactory. For my tests I use a simple binary tree that originally looks like this:
The results of my export look like this:
You can find the code of both exports at
http://ufgb966.forst.uni-goettingen.de/three/test2Quaternion.html
http://ufgb966.forst.uni-goettingen.de/three/test2Matrix.html
It seems that my rotations are made around the wrong point. Each rotation should be done around the origin of its geometry. What would be a way to achieve the result I'm looking for?

Just in case you haven't already, you might want to take a look at Using Matrices & Object3Ds in THREE and How to use matrix for transformation; they helped me out. Also note that three.js uses a right-handed coordinate system, which you probably know.
If you export from Blender, try -Z Forward, Y Up.

In my opinion you must change the order of your translation and rotation transformations: the problem is in the transformation sequence.
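For illustration, here's a minimal sketch (the branch sizes, angles, and helper function are my own assumptions, not the poster's export code) of getting the rotate-then-translate order in three.js by shifting each branch's geometry so its local origin sits at the base:

```typescript
import * as THREE from "three";

// Build one branch whose local origin is at its *base*, so that
// branch.rotation pivots around the base instead of the centre.
function makeBranch(length: number, angle: number, attachY: number): THREE.Mesh {
  const geometry = new THREE.CylinderGeometry(0.05, 0.08, length, 8);
  geometry.translate(0, length / 2, 0); // base of the cylinder is now at (0, 0, 0)
  const branch = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
  branch.rotation.z = angle;   // applied around the branch's own base...
  branch.position.y = attachY; // ...then the branch is moved to its parent's tip
  return branch;
}

// A tiny binary tree: children attach at the tip of their parent.
const trunk = makeBranch(1.0, 0, 0);
const left = makeBranch(0.7, Math.PI / 6, 1.0);
const right = makeBranch(0.7, -Math.PI / 6, 1.0);
trunk.add(left, right);
```

three.js composes each object's local matrix as translation · rotation · scale, which rotates a child around its own origin before translating it into the parent, exactly the order the tree needs.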

Related

Inverse Kinematics: How to Parameterize a Ball-and-Socket Joint?

I'm learning about inverse kinematics, and am trying to write a human skeleton simulation. I am having trouble deciding how to parameterize the rotation of a ball-and-socket joint.
Two methods that I can think of:
The familiar Euler-angle way. The characteristics of the joint can be changed by changing the order of the rotations, and you can also just use rotation matrices.
Using two quaternion rotations: one about the axis of the bone (the twist), and one to orient the axis itself (the swing). I think this is more intuitive in terms of simulating the joint.
So which one should I use? As far as I can make out:
The Euler-angle method is prone to gimbal lock, which I can visualize.
For the other method it is ambiguous which axes should be used when calculating the Jacobian entries, i.e. the v_j vector in ∂s_i/∂θ_j = v_j × (s_i − p_j)
(source: https://www.math.ucsd.edu/~sbuss/ResearchWeb/ikmethods/iksurvey.pdf, page 5)
I'm inclined to use the second method, as I can get around the problem by using CCD instead of Jacobian pseudo-inverses. But I would just like to know which of these methods is standard (Euler angles or quaternions), and if so, what particular details I need to take into account if I were to adopt it.
Any advice would be helpful, but preferably professional, and in a non-esoteric language should you be kind enough to spare some code :-]
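For what it's worth, here is a minimal sketch of the swing-twist decomposition behind the second method (all names here are mine, purely for illustration): any orientation splits into a twist about the bone axis and a swing that reorients the axis itself.

```typescript
// A quaternion as plain data; assumed unit length.
type Quat = { w: number; x: number; y: number; z: number };

function normalize(q: Quat): Quat {
  const n = Math.hypot(q.w, q.x, q.y, q.z);
  return { w: q.w / n, x: q.x / n, y: q.y / n, z: q.z / n };
}

function conjugate(q: Quat): Quat {
  return { w: q.w, x: -q.x, y: -q.y, z: -q.z };
}

// Hamilton product a * b.
function multiply(a: Quat, b: Quat): Quat {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

// Split q into q = swing * twist, where twist rotates about the unit
// vector `axis` (the bone axis). The degenerate 180-degree case, where
// the projection below is zero, is ignored in this sketch.
function swingTwist(q: Quat, axis: [number, number, number]) {
  const d = q.x * axis[0] + q.y * axis[1] + q.z * axis[2];
  const twist = normalize({ w: q.w, x: axis[0] * d, y: axis[1] * d, z: axis[2] * d });
  const swing = multiply(q, conjugate(twist)); // swing = q * twist^-1
  return { swing, twist };
}
```

Joint limits are then easy to state per component: clamp the twist angle for bone roll and the swing angle for the cone of the socket.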

WebGL Heightmap from array?

Does anyone happen to know of an example, or can anyone point me in the right direction, for rendering a heightmap/terrain in WebGL from an array of 3D points? Basically I have an array that contains data for x and y coordinates plus a 'height' (z axis).
Everything I've found (like in the three.js world) shows how to create one dynamically or from a 2D image. Ideally I'd like to have the colour of the pixel/particle related to the height. Basically I'm looking to do something like the below, but in WebGL:
There are many examples of how to do this already available. You can search for three.js + heightmap.
Or try three.js + 3D graph.
Here is something called a "Graphulus-Function" that looks pretty much exactly like what you need.
Here you can find another interesting reference.
Without more details on your data it is hard to say if these examples suit your needs...
Check also this three.js issue 1003 on GitHub: "Terrain from Heightmap" where there is a discussion about this topic and lots of great examples are mentioned.
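If it helps, here's a minimal sketch (the grid layout of the array and the colour ramp are my assumptions about your data) of building such a terrain in three.js from a 2D array of heights, colouring each vertex by height:

```typescript
import * as THREE from "three";

// heights[row][col] holds the height value for the grid point (col, row).
function heightmapMesh(heights: number[][], cellSize = 1): THREE.Mesh {
  const rows = heights.length;
  const cols = heights[0].length;
  const geometry = new THREE.PlaneGeometry(
    (cols - 1) * cellSize, (rows - 1) * cellSize, cols - 1, rows - 1
  );
  geometry.rotateX(-Math.PI / 2); // lie flat, so height maps to +Y

  let min = Infinity, max = -Infinity;
  for (const row of heights) for (const h of row) {
    min = Math.min(min, h); max = Math.max(max, h);
  }

  const pos = geometry.attributes.position;
  const colors = new Float32Array(pos.count * 3);
  for (let i = 0; i < pos.count; i++) {
    const h = heights[Math.floor(i / cols)][i % cols];
    pos.setY(i, h);
    const t = (h - min) / (max - min || 1); // normalized height
    colors[3 * i] = t;                      // low = blue, high = red
    colors[3 * i + 1] = 0.2;
    colors[3 * i + 2] = 1 - t;
  }
  geometry.setAttribute("color", new THREE.BufferAttribute(colors, 3));
  geometry.computeVertexNormals();

  // MeshLambertMaterial needs a light in the scene to be visible.
  return new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ vertexColors: true }));
}
```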

PointCloud with multiple Kinects

I am trying to make a point cloud mapping a user with multiple Kinects in Processing. I capture the user's front and back with two Kinects on opposite sides and generate both point clouds.
The trouble is that the point clouds' X/Y/Z are not synchronized; it just puts the two of them on screen, and it surely looks messy. Is there a way to calculate or compare between them, to translate the second point cloud to "join" the first? I could translate the position manually, but if I move the sensors it will go off again.
Supposing all the Kinects are stationary, I guess you would have to go in this order:

1. decide which Kinect to use as the global reference,
2. get the parameters of a 3D transformation for each of the other Kinects (I'd try to use PMatrix3D and applyMatrix(), although it may be slow),
3. apply the transformation to each of the other Kinects' point clouds and draw the clouds (a sketch of this step follows below).
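A sketch of step 3 (the types and the calibration numbers are mine, and this is plain maths rather than Processing code): one 4x4 rigid transform applied to every point of the second Kinect's cloud, which is what PMatrix3D plus applyMatrix() does via the matrix stack.

```typescript
type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 numbers, row-major

// Apply the rotation + translation part of a 4x4 matrix to a point.
function transformPoint(m: Mat4, p: Vec3): Vec3 {
  const [x, y, z] = p;
  return [
    m[0] * x + m[1] * y + m[2] * z + m[3],
    m[4] * x + m[5] * y + m[6] * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// Example: the second Kinect faces the first from the opposite side, so its
// cloud is rotated 180 degrees about Y and shifted along Z (made-up values;
// the real ones come from your calibration).
const alignSecondKinect: Mat4 = [
  -1, 0,  0, 0,
   0, 1,  0, 0,
   0, 0, -1, 4.0,
   0, 0,  0, 1,
];

const alignCloud = (cloud: Vec3[]): Vec3[] =>
  cloud.map((p) => transformPoint(alignSecondKinect, p));
```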
I don't (yet) know how to get the parameters of such a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, maybe by displaying the point clouds from each pair of Kinects and registering the points you know are the same in both clouds. After getting enough of them, construct a PMatrix3D and apply it inside pushMatrix()/popMatrix().
This is the approach used by this guy: http://www.youtube.com/watch?v=ujUNj1RDL4I
An alternative approach would be to use an Iterative Closest Point (ICP) algorithm and construct the 3D transform from its output. I'd really like an ICP or PCL library for Processing, if anyone knows a good one.

3D triangulation algorithm

Does anybody know which triangulation algorithm Maya uses? Failing that, which algorithms would be the most likely candidates? I tried a few simple ones off the top of my head (shortest/longest resulting edges, smallest minimum angle, smallest/biggest area), but they were all wrong. Is Delaunay the most plausible algorithm?
Edit: by the way, pseudocode for applying Delaunay to a 2D quad in 3D space to generate two triangles is more than welcome!
Edit 2: Unfortunately, this is not the answer in 3D space (it is only applicable in 2D).
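On the pseudocode request, here's a minimal sketch (helper names are mine): for just two triangles, the Delaunay condition is equivalent to splitting the quad along the diagonal whose triangles have the larger minimum angle, which works directly on a planar quad embedded in 3D.

```typescript
type V3 = [number, number, number];

const sub = (a: V3, b: V3): V3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: V3, b: V3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const len = (a: V3) => Math.hypot(a[0], a[1], a[2]);

// Interior angle at vertex b of triangle (a, b, c).
function angleAt(a: V3, b: V3, c: V3): number {
  const u = sub(a, b), v = sub(c, b);
  return Math.acos(dot(u, v) / (len(u) * len(v)));
}

const minAngle = ([a, b, c]: [V3, V3, V3]) =>
  Math.min(angleAt(c, a, b), angleAt(a, b, c), angleAt(b, c, a));

// Split the planar quad a-b-c-d (vertices in winding order) along the
// diagonal whose two triangles have the larger minimum angle.
function triangulateQuad(a: V3, b: V3, c: V3, d: V3): [V3, V3, V3][] {
  const splitAC: [V3, V3, V3][] = [[a, b, c], [a, c, d]];
  const splitBD: [V3, V3, V3][] = [[a, b, d], [b, c, d]];
  const worst = (split: [V3, V3, V3][]) => Math.min(...split.map(minAngle));
  return worst(splitAC) >= worst(splitBD) ? splitAC : splitBD;
}
```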
I don't like to second-guess people's intentions, but if you are simply trying to get out of Maya what is shown in the viewport, you can extract Maya's triangulation by starting with MItMeshPolygon::getTriangles.
(The corresponding normals and vertex colours are straightforwardly accessible. UVs require a little more effort; I don't remember the details (all my Maya code is with my ex-employer), but whilst at first glance it may seem like you don't have the data, in fact it's all there, just not conveniently.)
(One further note -- if your artists try hard enough, they can create polygons that crash Maya when getTriangles is called, even though they render OK and can be manipulated with the UI. This used to happen every few months, so it's worth bearing in mind but probably not worth worrying about too much.)
If you don't want to use the API or Python, then running polyTriangulate before exporting and undoing afterwards (to get back the original polygons) would let you examine the triangulated mesh. (You may want or need to save the scene to a temp file, then reload it afterwards and use file to give it its old name back, if your export process does anything that is difficult or impossible to undo.)
This is a bit hacky, but you're guaranteed to get the exact triangulation Maya is using. Rather easier than writing your own triangulation code, and almost certainly a LOT easier than trying to work out whatever Maya does internally...
Jonathan Shewchuk has a very popular 2D triangulation tool called Triangle, and a 3D version should appear soon. He also has a number of papers on this topic that might be of use.
You might try looking at Voronoi and Delaunay Techniques by Henrik Zimmer. I don't know if it's what Maya uses, but the paper describes some common techniques.
Here you can find an applet that demonstrates the Incremental, Gift Wrap, Divide and Conquer and QuickHull algorithms computing the Delaunay triangulation in 3D. Pointers to each algorithm are provided.

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image-distortion code, but I'm struggling to find out what will create similar effects. The closest reference I could find was the IWarp filter in GIMP, but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of the algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the R/G channels. Then the output colour of a pixel would be:

Texture inputImage
Texture distortionMap
colour(x, y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative ones.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any map will produce a distortion of some kind, obviously, but creating a proper liquify effect is quite complex and I'll leave it to someone more qualified).
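As a CPU-side sketch of the idea above (plain pixel arrays instead of HLSL; the scale factor is an arbitrary choice of mine): each output pixel samples the input at an offset read from the map, with 0.5 subtracted so the map can encode negative offsets.

```typescript
interface Image {
  width: number;
  height: number;
  data: Float32Array; // RGBA per pixel, channel values in [0, 1]
}

function applyDistortion(input: Image, map: Image, scale = 32): Image {
  const out: Image = {
    width: input.width,
    height: input.height,
    data: new Float32Array(input.data.length),
  };
  for (let y = 0; y < input.height; y++) {
    for (let x = 0; x < input.width; x++) {
      const m = 4 * (y * map.width + x);
      // R/G hold the x/y offsets; 0.5 means "no displacement".
      const dx = Math.round((map.data[m] - 0.5) * scale);
      const dy = Math.round((map.data[m + 1] - 0.5) * scale);
      // Clamp the source coordinates to the image bounds.
      const sx = Math.min(input.width - 1, Math.max(0, x + dx));
      const sy = Math.min(input.height - 1, Math.max(0, y + dy));
      const src = 4 * (sy * input.width + sx);
      out.data.set(input.data.subarray(src, src + 4), 4 * (y * input.width + x));
    }
  }
  return out;
}
```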
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
When the user clicks on a location and moves the mouse, he's changing the grid locations.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the Liquify filter in JavaScript. Basically, in the tutorial, the effect is achieved by transforming each pixel's Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
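A minimal sketch of that transform (the centre point and brush radius are parameters I've made up): used as an inverse mapping, the square root on the normalized radius squeezes the image toward the brush centre, a pinch-style warp; invert the exponent for a bulge.

```typescript
// For an output pixel (x, y), return the source coordinates to sample.
function pinchSample(
  x: number, y: number,   // output pixel
  cx: number, cy: number, // centre of the effect
  radius: number          // radius of influence, in pixels
): { sx: number; sy: number } {
  const dx = x - cx, dy = y - cy;
  const r = Math.hypot(dx, dy);
  if (r >= radius) return { sx: x, sy: y }; // outside the brush: unchanged
  const alpha = Math.atan2(dy, dx);               // to polar (r, alpha)
  const rWarped = Math.sqrt(r / radius) * radius; // sqrt warp on the radius
  return {
    sx: cx + rWarped * Math.cos(alpha),           // back to Cartesian
    sy: cy + rWarped * Math.sin(alpha),
  };
}
```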
