I'm trying to find or create a working example of inverse kinematic posing in three.js. Ideally I would like to export human models from Makehuman via their Collada exporter, load them with THREE.ColladaLoader and set them into different poses in three.js programmatically or through some dat.GUI interface. A bit like an artist doll implementation - I don't need animation, but real-time feedback when tweaking the pose would be nice, and inverse kinematic style posing would be highly preferred.
I've been studying and searching information for days. This http://www.youtube.com/watch?v=6om9xy6rnc0 is very close, but I was unable to find any example code or downloads. The closest working example I've found is this: http://mrdoob.github.com/three.js/examples/webgl_animation_skinning.html However that appears to use predefined animation frames, which in turn appears to manipulate the bones in forward kinematics manner so that was not much help either.
I couldn't even find a model for testing, as I don't know what to look for when searching for models with IK rigs/skinning/bones compatible with three.js. Makehuman does seem to have plenty of rigging export options, but I don't know if any of those are usable.
Is there a usable IK system in three.js, and if so, are there any working examples, working human models, or any hints on which exact rigging system/workflow I should study to accomplish this? If direct Collada support is not possible, creating the characters in Blender and exporting them is an option too.
EDIT: found this live demo http://www.akjava.com/demo/poseeditor/ but the code is totally unreadable.
I don’t feel competent enough to answer your question, but I’ll post three links which may put you on the right track.
wylieconlon/kinematics
– a great demo of 2D inverse kinematics animation. The code is totally readable.
https://www.khanacademy.org/computer-programming/inverse-kinematics/1191743453
– another demo, this time less flexible but more terse.
How to calculate inverse kinematics
– a rabbit hole of links. Just in case you’d want to dive right into the thing.
This seems promising.
Fullik: a fast iterative JavaScript solver for inverse kinematics on three.js.
It is a conversion of the Java Caliko 3D library.
The Caliko library is an implementation of the FABRIK inverse kinematics (IK) algorithm.
https://github.com/lo-th/fullik
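For reference, the FABRIK algorithm that Caliko/Fullik implement is simple enough to sketch in a few lines. This is a minimal 2D version under my own hypothetical helper names, not Fullik's actual API; the same idea extends to 3D by using 3-component vectors:

```javascript
// Minimal 2D FABRIK (Forward And Backward Reaching IK) solver.
// A chain of joints is dragged toward a target by alternating
// backward and forward passes that preserve segment lengths.

function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Return the point at distance `len` from `to`, in the direction of `from`.
function reach(from, to, len) {
  const d = dist(from, to) || 1e-9;
  const t = len / d;
  return { x: to.x + (from.x - to.x) * t, y: to.y + (from.y - to.y) * t };
}

function fabrik(joints, target, iterations = 10, tolerance = 1e-3) {
  const lens = [];
  for (let i = 0; i < joints.length - 1; i++) lens.push(dist(joints[i], joints[i + 1]));
  const root = { ...joints[0] };

  for (let it = 0; it < iterations; it++) {
    // Backward pass: pin the end effector to the target, pull the chain after it.
    joints[joints.length - 1] = { ...target };
    for (let i = joints.length - 2; i >= 0; i--) {
      joints[i] = reach(joints[i], joints[i + 1], lens[i]);
    }
    // Forward pass: pin the root back to its original position.
    joints[0] = { ...root };
    for (let i = 1; i < joints.length; i++) {
      joints[i] = reach(joints[i], joints[i - 1], lens[i - 1]);
    }
    if (dist(joints[joints.length - 1], target) < tolerance) break;
  }
  return joints;
}
```

To pose a skinned mesh you would run a solver like this on the bone positions each frame and then write the results back into the skeleton, which is essentially what the demos linked above do.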
I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material, which either interacts with the user's mouse or «flows» on its own,
a little like this
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors are shown in the «liquid sauce» and where they are at the beginning. So to say: I define the initial image, which is then transformed by self-initiated liquid flowing and also by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (a plane too?), and this mesh has some custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules...
I realize that the given example above has unminified code, but it is still so much that I can't really break it down into simpler terms that I fully understand. So I would really appreciate it if someone could give me some simpler concepts to serve as a starting point.
More pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway, thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a similar result (visually) like this image would be great:
I'm trying to accomplish a similar thing. I have been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to lean on three.js for support.
The magic is really in the shaders, mostly the fragment shaders; it would be a good start to understand how to write them and how they work. This link is a good place to begin: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (equations).
Maybe this can help you a bit too: Raindrop simulation
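To make the «flowing» idea concrete: the core of most of these fluid demos is an advection step, where every texel traces backward along the velocity field and samples the previous frame there. Here is a tiny CPU sketch of that per-pixel logic (plain JavaScript rather than GLSL, purely for illustration; a fragment shader would run the same math per pixel with a texture sampler doing the bilinear lookup):

```javascript
// Clamped bilinear sample of a w*h scalar field stored row-major.
function bilinear(field, w, h, x, y) {
  x = Math.min(Math.max(x, 0), w - 1.001);
  y = Math.min(Math.max(y, 0), h - 1.001);
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const fx = x - x0, fy = y - y0;
  const i = y0 * w + x0;
  return field[i] * (1 - fx) * (1 - fy) +
         field[i + 1] * fx * (1 - fy) +
         field[i + w] * (1 - fx) * fy +
         field[i + w + 1] * fx * fy;
}

// Semi-Lagrangian advection: each cell looks backward along the
// velocity (velX, velY) and samples the previous frame there.
function advect(src, velX, velY, w, h, dt) {
  const out = new Float64Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const i = y * w + x;
      out[i] = bilinear(src, w, h, x - dt * velX[i], y - dt * velY[i]);
    }
  }
  return out;
}
```

On the GPU you would do this with two render targets ping-ponged each frame: render the advection shader into one while reading from the other, then swap. Your background image is simply the initial contents of that texture.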
If you have found something more around that, let me know.
I found these already-made shaders. Maybe one of them can help you without forcing you to learn plenty of stuff: splash shaders
Good luck.
I'm looking to implement a simple morphing animation between two images.
Here's a simple demo of what I'm trying to create: http://i.imgur.com/7377yHr.gif
I'm pretty comfortable with Objective-C and JavaScript but since the concepts and algorithms are abstract, I'm more than willing to see examples in any language or framework.
I would like to know how hard it would be to tackle this -- it doesn't have to be exact but as long as it gives the impression of a morph I'll be satisfied.
Where would I start?
It seems like your example uses a combination of mesh warp morphing and cross dissolve morphing. Mesh morphing can be tricky, and as far as I know it requires manual input (defining the mesh), so depending on what you want to do it might not be suitable for you.
If you are looking for a cheap technique (in terms of effort), probably just doing a cross dissolve would work for you, since it is very easy to implement. You just need to combine both images by increasing the alpha of the target image and decreasing the alpha of the origin image.
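A cross dissolve really is just a per-pixel linear blend. A minimal sketch, assuming flat RGBA arrays such as those returned by canvas getImageData:

```javascript
// Blend two same-sized RGBA pixel arrays; t = 0 gives the source
// image, t = 1 gives the target, values in between cross-dissolve.
function crossDissolve(srcPixels, dstPixels, t) {
  const out = new Uint8ClampedArray(srcPixels.length);
  for (let i = 0; i < srcPixels.length; i++) {
    out[i] = Math.round((1 - t) * srcPixels[i] + t * dstPixels[i]);
  }
  return out;
}
```

To animate it, step t from 0 to 1 inside a requestAnimationFrame loop and put the result back with putImageData (or, even cheaper, draw both images to the canvas with complementary globalAlpha values and skip the per-pixel loop entirely).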
These articles give an overview of the techniques:
[PDF] http://css1a0.engr.ccny.cuny.edu/~wolberg/pub/vc98.pdf
[PDF] http://www.sorging.ro/en/member/serveFile/format/pdf/slug/image-morphing-techniques
[PDF] http://cs.haifa.ac.il/hagit/courses/ip/Lectures/Ip05_GeomOper.pdf
The last link comes from a comment in a similar question: Morphing, 3 algorithms, image processing
IES (Illuminating Engineering Society) is a file format (.ies) that enhances lights in animation tools. It adds accurate falloff, dispersion, color temperature, spatial emission, brightness and the like. It's an industry standard for showing what lighting products really look like. Many animation tools (Maya, Cinema4D, Blender etc.) are able to utilize this format.
Yet, I'm still searching for a way to import/use IES in WebGL frameworks. Using an animation tool (in my case Blender) to import and process .ies files and finally export the project to a WebGL format seemed to be the most promising method to me. I tried out a couple of WebGL frameworks (three.js, x3dom, coppercube) and many more export formats/import/converter scripts, but none of them produced satisfying results. The processed results showed no lights, default lights, or at best the same number of lights with no further attributes.
Does anybody by any chance know of a working combination of animation tool -> export function -> WebGL-framework or WebGL-ies-format-import that would do the trick?
Are there people with the same problem and longing for a solution?
You could maybe transfer some attributes from .ies files (color, intensity and such), but the three.js renderer simply doesn't support the complex light properties that .ies is meant to describe (like the shape of the light). So even if you were able to import/export those properties, the default three.js light system would not be able to render them properly.
Even if you implemented your own shaders and lights (you can do that in three.js), it would probably still be prohibitively slow and/or inaccurate, as you probably need some raytracing/pathtracing-based approach for good enough results. The WebGL approach to rendering in general, no matter what library or framework you use, is not a very good fit for complex, accurate simulation of lights and shadows.
That being said, I would be highly interested in any solution, even a crude approximate support would be useful.
Three.js now has an example of area lights. It is a very crude approximation (analytic evaluation of the area integral after some serious simplification). There is an article by Insomniac Games in GPU Pro 5 that offers a more accurate solution.
This only helps you with the area and shape of the light; you are on your own with the other attributes.
P.S. It's hard to tell from your question, but if your light format contains spectral data, there have recently been some nice articles on real-time spectral lighting:
http://www.numb3r23.net/2013/03/26/spectral-rendering-on-the-gpu-now-with-bumps/
I'm going to program a fancy (animated) about-box for an app I'm working on. Since this is where programmers are often allowed to shine and play with code, I'm eager to find out what kind of cool algorithms the community has implemented.
The algorithms can be animated fractals, sine blobs, flames, smoke, particle systems etc.
However, a few natural constraints come to mind: it should be possible to implement the algorithm in virtually any language. Thus advanced DirectX or XNA code that utilizes libraries that aren't accessible in most languages should not be posted. 3D is most welcome, but it shouldn't rely on lots of extra installs.
If you could post an image along with your code effect, it would be awesome.
Here's an example of a cool about box with an animated 3D figure and some animated sine blobs on the titlebar:
And here's an image of the about box used in Winamp, complete with 3D animations:
I tested and ran the code on this page. It produces an old-school 2D flame effect. Even when I ran it on an N270 in HD fullscreen it seemed to work fine with no lag. The code and all source is posted on the given webpage.
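The classic 2D flame effect boils down to a heat buffer that drifts upward. This is my own minimal sketch of the standard algorithm, not the code from that page: seed random heat on the bottom row, then make each cell the average of the cells below it, minus a little cooling.

```javascript
// One frame of the old-school fire effect on a w*h heat buffer
// (row 0 is the top of the screen). Pass a deterministic rng for
// reproducible output; by default it uses Math.random.
function flameStep(heat, w, h, rng = Math.random) {
  // Re-seed the bottom row with random "coals".
  for (let x = 0; x < w; x++) heat[(h - 1) * w + x] = rng() * 255;
  // Each cell averages the row below (left, right, center) and the
  // row two below, then cools slightly, so heat rises and fades.
  for (let y = 0; y < h - 1; y++) {
    const below = (y + 1) * w;
    for (let x = 0; x < w; x++) {
      const l = below + (x - 1 + w) % w;       // wrap horizontally
      const r = below + (x + 1) % w;
      const d = below + x;
      const dd = Math.min(y + 2, h - 1) * w + x;
      heat[y * w + x] = Math.max(0, (heat[l] + heat[r] + heat[d] + heat[dd]) / 4 - 1.5);
    }
  }
  return heat;
}
```

Rendering is then just mapping each heat value through a black-red-yellow-white palette into an image buffer each frame, which is cheap enough to run fullscreen even on weak hardware, matching what I saw on the N270.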
Metaballs are another possibly interesting approach. They define an energy field around a blob and will melt two shapes together when they are close enough. A link to an article can be found here.
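The energy-field idea is only a few lines. A minimal sketch using an inverse-square falloff (the exact falloff function varies between implementations; this is just one common choice):

```javascript
// Summed energy at (x, y) from a list of balls {x, y, r}.
// Each ball contributes r^2 / d^2, so energy is ~1 on the ball's
// own boundary and grows without bound toward its center.
function metaballField(balls, x, y) {
  let e = 0;
  for (const b of balls) {
    const dx = x - b.x, dy = y - b.y;
    e += (b.r * b.r) / (dx * dx + dy * dy + 1e-9); // epsilon avoids /0
  }
  return e;
}

// A point is inside the blob wherever the summed energy crosses
// the threshold; fields of nearby balls add up, which is what
// melts two shapes together.
function insideMetaballs(balls, x, y, threshold = 1) {
  return metaballField(balls, x, y) >= threshold;
}
```

For rendering, evaluate the field per pixel (or march a grid and contour it with marching squares) and fill wherever it exceeds the threshold; animating the ball centers gives the classic lava-lamp melting.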
Something called a Wolfram Worm seems to be an awesome project to attempt. It would be easy to calculate random smooth movement using movement along two connected Bézier curves. Loads of awesome demos can be found on this page:
http://levitated.net/daily/index.html
(source: levitated.net)
I like the Julia 4D quaternion fractal a lot.
(source: macromedia.com)
Video: Julia 4D animation in F#
I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
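As a sketch of that approach, here is a minimal 3x3 Gaussian blur over a grayscale image stored as a flat row-major array (JavaScript rather than Python, purely for illustration; the kernel math translates to PIL/NumPy directly). Smudging could then run this repeatedly over a small window that follows the stroke from each control point.

```javascript
// 3x3 Gaussian kernel; the weights sum to 16.
const KERNEL = [1, 2, 1, 2, 4, 2, 1, 2, 1];

// Convolve a w*h grayscale image with the kernel above, clamping
// samples at the borders so edge pixels stay well-defined.
function gaussianBlur(src, w, h) {
  const out = new Float64Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let acc = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          const sx = Math.min(Math.max(x + kx, 0), w - 1);
          const sy = Math.min(Math.max(y + ky, 0), h - 1);
          acc += src[sy * w + sx] * KERNEL[(ky + 1) * 3 + (kx + 1)];
        }
      }
      out[y * w + x] = acc / 16;
    }
  }
  return out;
}
```

For a directional smudge rather than a symmetric blur, the same loop works with an asymmetric kernel stretched along the stroke direction, or by blending each pixel with the pixel a step behind it along the stroke.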
One method would be to create a grid that your control points move, and then use texture mapping techniques to map the image back onto the distorted grid.
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent blur result.
James