I've recently been getting into character animation, but since I come from a programming background I like to approach everything procedurally.
I got started with rigging and animation using Blender and Cascadeur, but that approach seems very limited from my perspective. I saw that Houdini is capable of procedural animation, but the resources seem quite limited so I am not sure to what extent.
I would like to be able to abstract motion logic and potentially reuse it, layer it, and use custom logic to drive bones. I would also like to be able to use noise and custom blending functions (a spring, for example). In some cases I want to block out the base motion traditionally and layer procedural motion on top; in other cases I want to define the base motion logically and again layer other motion on top.
Example:
I want to animate a cork flip and drive the height of the flip from the angular momentum of the sweeping leg, and also drive the angular momentum of the rotation from the relative positions of the hand and elbow joints and the chest bone. This way I can adjust certain parameters and get different motions. Obviously there are many holes in that example, but that is the gist of what I am looking to achieve.
Is Houdini what I am looking for and if so, what resources could I look into?
Or should I be looking into custom solutions?
Are there any known techniques or approaches for achieving what I've given as an example (in Houdini or as a custom solution)?
Is there a way to animate wind for certain game objects?
For example, branches of trees should gently move, like there's a breeze in game. Not gameplay, more like a special background effect.
If it's not possible in code, what would be the best way to create proper sprite images?
I see a few options to actually achieve that wind effect.
SKFieldNode allows you to apply physics effects to nodes. If you want real tree branches that move based on physics, combine SKFieldNode with SKPhysicsJoint; together they let you create independent branch nodes that receive a force to simulate a wind effect. To understand what you can do with SKPhysicsJoint, check out this guide. This solution can get really complex and hard to achieve with a superficial understanding of the SpriteKit engine, but you can create an amazing effect with it. Personally, I would not recommend it if you have deadlines to meet; physics always gets buggy once you lose your grasp on what you are doing, and you may invest a lot of time trying to achieve this.
Create different animations of your tree responding to wind movement, and control which animation frame to use in each specific case. I would highly recommend this one, because you keep control over what is going on with your tree, and people who play games don't pay that much attention to what is going on in the background, although it is a good thing to consider for the game experience.
Create your tree textures and set the anchor point at the place where you want the effect to be strongest. The farther a coordinate in the node is from the anchor point's coordinate, the less effect it will get from your SKAction.
Sorry for my English; it is not my native language.
I have been using Three.js for a couple of weeks now and am really blown away by the power of this library!
Now, I would like to leverage it in order to run raytracing simulations for lower frequencies than light (RF). I figure it should be possible to alter or create new lights that would represent the sources of RF waves and then write a specific renderer that would take into account the interference aspects that apply in the RF context.
Is that the right approach? If so what functions/libraries should I focus on? Maybe all the raytracing/raycasting is already done and can be reused with little modification?
Any help appreciated!
Thanks
M.
Yes. Depending on the domain you are modelling, radiometric simulations can be done with ray tracing. Photometric renderers are radiometric renderers with the domain restricted to the visible spectrum; usually you can simply swap the photometric units for their radiometric counterparts (luminous flux in lumens becomes radiant flux in watts, for example), which hints that the domain being modelled can lie outside the visible spectrum.
Bear in mind that, depending on what you wish to model, you need to assume the RF waves will behave in a similar vein to a ray/photon. Diffraction, or propagation through vector fields, requires intelligence beyond standard path tracing; look instead at the ray marching used in volumetric renderers. Volumetric ray tracing is more akin to finite element analysis, since we don't just calculate the intersection point of the ray but also consider the ray's propagation through a medium (a vector field) and the effect this medium has on the resultant ray path.
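To make the ray-marching idea concrete, here is a minimal, language-agnostic sketch (written in Java purely for illustration; every name in it is a placeholder, not an existing API): instead of computing a single intersection, you step along the ray through the medium and accumulate extinction, giving Beer-Lambert attenuation.
static double transmittanceAlongRay(double[] origin, double[] dir,
                                    double maxDist, double step) {
    double opticalDepth = 0.0;
    for (double t = 0.0; t < maxDist; t += step) {
        double x = origin[0] + dir[0] * t;
        double y = origin[1] + dir[1] * t;
        double z = origin[2] + dir[2] * t;
        opticalDepth += extinctionAt(x, y, z) * step; // sample the medium at this point
    }
    return Math.exp(-opticalDepth); // Beer-Lambert transmittance along the ray
}
// Placeholder medium model; a real RF simulation would sample its
// propagation environment (the vector field mentioned above) here.
static double extinctionAt(double x, double y, double z) {
    return 0.01; // homogeneous absorber, for the sake of the sketch
}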
P.S. Sorry, I can't help with the specifics of Three.js :( Never used it.
I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite matched what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... and I don't think I really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case). That worked because Minecraft used immediate mode, so Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation-matrices for each sub-shape? Would that put a hard limit on the amount of sub-shapes a model could have? Did anyone try something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor, the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it over to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being that you do not want to load animations for your bones but to compute/generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometries on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, given the transform of the parent bone and your local transform with respect to that parent.
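As a minimal sketch of that accumulation (in Java with JOML, the math library commonly paired with LWJGL; the Bone class and its fields are purely illustrative):
import org.joml.Matrix4f;
import java.util.ArrayList;
import java.util.List;
class Bone {
    final Matrix4f local = new Matrix4f();  // transform relative to the parent bone
    final Matrix4f world = new Matrix4f();  // accumulated model-space transform
    final List<Bone> children = new ArrayList<>();
    void updateWorld(Matrix4f parentWorld) {
        parentWorld.mul(local, world);      // world = parentWorld * local
        for (Bone child : children) {
            child.updateWorld(world);       // children build on this bone's result
        }
    }
}
Calling root.updateWorld(new Matrix4f()) refreshes the whole pose; the resulting world matrices are what you then upload to the GPU.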
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to 100, 100, 10 (where the player is), and then the "head" subcomponent would have an additional translation of 0, 0, 5 (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation and scaling) and use glPushMatrix and glPopMatrix to add and remove a matrix on a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like:
glPushMatrix();
glMultMatrixf(myTransform); // or glTranslatef, glRotatef, and friends instead
// ... draw this node's mesh here ...
for (Child child : children) { child.draw(); }
glPopMatrix();
This gives you a hierarchical setup, so objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a limit (it's implementation dependent), but if you handle the stack yourself, the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time; that is, it doesn't matter for performance how deep a mesh sits in the hierarchy.
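If you go the do-it-yourself route, a client-side stack is only a few lines; here is a hedged sketch assuming JOML for the matrix type (the class and method names are illustrative, not a fixed API):
import org.joml.Matrix4f;
import java.util.ArrayDeque;
import java.util.Deque;
class MatrixStack {
    private final Deque<Matrix4f> saved = new ArrayDeque<>();
    private Matrix4f current = new Matrix4f();
    void push() { saved.push(new Matrix4f(current)); } // save a copy of the current transform
    void pop()  { current = saved.pop(); }             // restore the last saved transform
    Matrix4f current() { return current; }             // multiply local transforms into this, then upload it as a shader uniform
}
Unlike the fixed-function stack, its only depth limit is available memory.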
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between, so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations, and you still have to implement it yourself.
Anything more complicated (e.g. modifying the mesh based on the bone links) you would need to implement yourself.
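For the simple interpolation mentioned above, a minimal time-based sketch (again assuming JOML; the names are illustrative):
import org.joml.Vector3f;
static Vector3f interpolate(Vector3f start, Vector3f end, float elapsed, float duration) {
    float t = Math.min(elapsed / duration, 1.0f); // normalized time, clamped to [0, 1]
    return new Vector3f(start).lerp(end, t);      // linear blend between the two positions
}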
In After Effects, a lot of my time is spent in the graph editor, and a lot of the motion I create is similar. I've used Ease and Wizz to use expressions to duplicate easing motion between layers, but sometimes the motion I'm working on is unsuitable for this technique.
Is there a simple way to just copy the speed graph of a keyframed property in After Effects to another layer, leaving its basic position values intact? It seems like it would be a basic feature of the application.
Almost 4 years later, but if you're still looking for a solution, I think this is what you are looking for:
http://aescripts.com/easecopy/
This script allows you to copy just the curve and leave every other value (position/scale/rotation/etc) intact.
I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx
As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh which has a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to occur for bones above the hip and the running animation to occur for bones beneath the hip?
Also, if I wanted to, for example, have the person lean left or right, would I simply have to find the bone for the hip and multiply a rotation matrix into its matrix? In this case I think the matrix is m_amxBoneOffsets?
Composing multiple animations to a single one is usually the job of an animation system, something that is way out of scope of the D3D sample.
Let's look at your 2 examples:
running and throwing
Well, in this case you could apply the animation for the lower part of the body from the running animation and the animation for the upper part of the body from the throwing animation. And you'd get a very crappy result.
The how is just a matter of knowing which bones are where in the bone palette (something that depends on how they are stored, and in which order, but nothing inherently hard; the definitive reference should be the documentation of the tool generating the animation data).
In practice, you're better off blending the two animations. This, in general, is hard, and software packages exist out there that do it for you; Gamebryo, for example.
Or, an animation of a running guy who throws is different enough from a standing guy who throws that you might be better off having two separate animations.
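To sketch the per-bone blending idea (in Java for illustration; note that a real system decomposes each transform and slerps the rotations, so mixing raw matrices component-wise as below is only a crude stand-in):
// weight 0 = take the running pose, weight 1 = take the throwing pose;
// intermediate weights soften the seam around the hip
static void blendPoses(float[][] runPose, float[][] throwPose,
                       float[] weights, float[][] outPose) {
    for (int bone = 0; bone < outPose.length; bone++) {
        float w = weights[bone];
        for (int i = 0; i < 16; i++) { // each 4x4 bone matrix stored as 16 floats
            outPose[bone][i] = (1 - w) * runPose[bone][i] + w * throwPose[bone][i];
        }
    }
}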
Leaning
If you apply a rotation matrix to the root bone, you'll simply rotate your whole character.
Now if you rotate the next bone in the hierarchy (from the spine), you'll get all the bones that depend on it to rotate likewise. It will probably do what you want, but there's a sure way to find out. Try it!
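As a minimal illustration of that experiment (shown with Java and JOML here, though the idea is API-agnostic): pre-multiply a small rotation into the spine bone's local transform before the hierarchy is re-accumulated.
import org.joml.Matrix4f;
static void applyLean(Matrix4f spineLocal, float leanRadians) {
    spineLocal.rotateLocalZ(leanRadians); // pre-multiplies, so the lean is applied in the parent's space
}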
Well, the thing is, the running animation SHOULD affect the throwing animation slightly. What you need to look into is animation blending.
I'm sure Valve wrote a good paper on how they implemented it in Counter-Strike many years ago. It's not on the Valve site though, so I'm not sure where I got this memory from...