Automation: Find handle lengths for animation curves

Given two end points and the angle of their curve handles, along with two or three sample points that the arc must intersect, is there a method to find the length of the handle at each end? And can the problem be simplified if you can assume the arc follows a parabolic path governed by constants like gravity or friction?
It's for a script that acts as a simple physics engine, but rather than creating a simulation object it creates animation curves for regular objects. I want to optimize it down from adding keyframes on every frame to adding only essential keyframes and using animation curves.
I think simple arcs could be automated this way, but I have no idea what kind of math would be involved.
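One case does simplify nicely: under constant acceleration (gravity with no drag), the trajectory is quadratic in time, and a cubic Bézier keyed on time can reproduce it exactly. The handle lengths fall out of the standard Hermite-to-Bézier conversion: each handle is the endpoint velocity times one third of the segment duration. A minimal sketch of that idea (the function names are my own, not from any particular animation package):

```python
import numpy as np

def parabola_to_bezier(p0, v0, a, T):
    """Cubic Bezier control points that exactly reproduce motion under
    constant acceleration: p(t) = p0 + v0*t + 0.5*a*t^2, for t in [0, T].
    Handle lengths are |v0|*T/3 and |v1|*T/3 along the velocity directions."""
    p0, v0, a = map(np.asarray, (p0, v0, a))
    p1 = p0 + v0 * T + 0.5 * a * T**2     # end point
    v1 = v0 + a * T                       # end velocity
    c0 = p0 + v0 * T / 3.0                # outgoing handle
    c1 = p1 - v1 * T / 3.0                # incoming handle
    return p0, c0, c1, p1

def bezier(p0, c0, c1, p1, t):
    """Evaluate the cubic Bezier at parameter t in [0, 1]."""
    s = 1.0 - t
    return s**3 * p0 + 3 * s**2 * t * c0 + 3 * s * t**2 * c1 + t**3 * p1

if __name__ == "__main__":
    # A 2-second throw under gravity; check the curve against the
    # analytic trajectory at the midpoint.
    p0, c0, c1, p1 = parabola_to_bezier([0, 0], [4, 9], [0, -9.81], 2.0)
    t = 0.5
    exact = np.array([4, 9]) * (t * 2.0) + 0.5 * np.array([0, -9.81]) * (t * 2.0)**2
    print(bezier(p0, c0, c1, p1, t), exact)   # both print [4, 4.095]
```

For the general case (handle angles fixed, two or three sample points the arc must pass through), the two unknown handle lengths appear linearly in the Bézier equation once the tangent directions are known, so it reduces to a small least-squares solve rather than anything exotic.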

Related

Need a geometric edge/crease detection method

I am experimenting with a primitive rendering style (solid colors with highlighted edges/creases) for an open-source game I contribute to. The world geometry is fairly simplistic, mostly composed of blocks and pyramids; there may ultimately be other simple volumes like cylinders, cones, and other kinds of prisms. The rendering will be done with OpenGL ES 2.
I have been experimenting with edge detection methods for the edges/creases. It seemed like shader-based edge detection (I tried the Sobel filter and several other algorithms) on the depth value and face normals would be easiest, but I was unable to get a good result, mostly due to the precision limits of the depth buffer and the complexity of far-away geometry, as well as the inability to do any good antialiasing on the edges.
I ultimately decided that I needed to render the lines geometrically so I could make them thick, smooth out the edges, and so on. I would like to generate the lines programmatically from the geometry definition prior to rendering to improve runtime performance. I can get most of the effect I want by drawing the main geometry, setting a depth offset, then drawing lines over the geometry. However, this technique has some shortcomings, as seen below:
Overlapping Geometry
There may be several pieces of geometry overlapping or adjoining to form more complex structures. Where several pieces have overlapping/coplanar faces, I want to draw an outline around the combined shape, but not around each individual piece in a way that reveals the separate parts.
Current result on top; desired result on bottom:
Creases
This issue was also visible in the image above, but the image below also shows what my goals are. I want to draw lines where there are creases in overlapping geometry to make them stand out a lot more.
Current result on top; desired result on bottom:
From what I can tell so far, for the overlapping faces problem, I think I need to do intersection tests between my lines and any nearby intersecting faces, then somehow break the lines up and get rid of the segments that cross other faces. To create lines in the creases between geometry, I think I need to do some kind of intersection tests between the two faces that form the crease. However, I'm having a hard time wrapping my head around the step-by-step procedure for doing that. Again, I would like to set up these lines programmatically in a pre-rendering step if possible. Any guidance would be much appreciated.
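For the crease case, the core step is plane-plane intersection: the crease lies along the line where the two face planes meet, clipped to the region where the faces actually overlap. For the overlapping-outline case, each outline segment can be split where it crosses another face's plane, after which the buried pieces are discarded. A rough sketch of both primitives, assuming faces are given as a point plus a normal (names are illustrative, not from any engine):

```python
import numpy as np

def plane_plane_intersection(p1, n1, p2, n2, eps=1e-9):
    """Line of intersection of two planes, each given as (point, normal).
    Returns (point_on_line, unit_direction), or None if the planes are
    parallel. A crease line runs along this intersection, trimmed to the
    overlap of the two faces."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d = np.cross(n1, n2)
    if np.linalg.norm(d) < eps:
        return None                       # parallel or coincident planes
    # Solve the two plane equations plus d.x = 0; the third row just
    # pins the point somewhere along the line.
    A = np.array([n1, n2, d])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    point = np.linalg.solve(A, b)
    return point, d / np.linalg.norm(d)

def split_segment_by_plane(a, b, p, n, eps=1e-9):
    """Split segment ab where it crosses the plane (p, n). Use this to
    break outline segments so pieces hidden inside an overlapping face
    can be culled afterwards."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    p, n = np.asarray(p, float), np.asarray(n, float)
    da, db = np.dot(n, a - p), np.dot(n, b - p)
    if da * db >= -eps:                   # both ends on the same side
        return [(a, b)]
    t = da / (da - db)                    # crossing parameter along ab
    m = a + t * (b - a)
    return [(a, m), (m, b)]
```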

Render a THREE.Line with variable thickness based on distance from camera

I'm trying to render lines (railroads, roads etc) onto a globe. At present, I'm using THREE.LineBasicMaterial and using the linewidth property to control thickness, but it would look much better if the thickness of the line at a given point was inversely proportional to the distance of that point from the camera.
Is such a thing possible (perhaps with a custom shader) or is the only way to construct a tube that follows the same path as the line? (And if so, what would the best approach be?)
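Whether this is done in a custom vertex shader or by building ribbon/tube geometry up front, the underlying math is the same: expand each point of the polyline sideways by a half-width proportional to 1 / distance-to-camera. A minimal CPU-side sketch of that expansion, assuming 3D points and ignoring degenerate cases such as a tangent pointing straight at the camera (a shader version would do the same per vertex; the resulting left/right edges can be stitched into triangles for a BufferGeometry):

```python
import numpy as np

def ribbon_vertices(points, camera_pos, k=0.05):
    """Expand a 3D polyline into a camera-facing ribbon whose half-width
    at each point is k / distance(point, camera), i.e. inversely
    proportional to camera distance. Returns the left and right edge
    vertices; zip them into a triangle strip to render."""
    pts = np.asarray(points, float)
    cam = np.asarray(camera_pos, float)
    left, right = [], []
    for i, p in enumerate(pts):
        # Central-difference tangent along the line.
        a = pts[max(i - 1, 0)]
        b = pts[min(i + 1, len(pts) - 1)]
        tangent = b - a
        tangent /= np.linalg.norm(tangent)
        view = cam - p
        dist = np.linalg.norm(view)
        side = np.cross(tangent, view / dist)  # perpendicular, facing camera
        side /= np.linalg.norm(side)
        half_w = k / dist                      # thinner when farther away
        left.append(p + side * half_w)
        right.append(p - side * half_w)
    return np.array(left), np.array(right)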

Is there a common technique for drawing a "stretchy" line

I'm trying to figure out how to draw a stretchy/elastic line between two points in OpenGL/Cocos2d on iPhone. Something like this, where the "band" gets thinner as the line gets longer. iOS uses the same effect I'm aiming for in Mail.app's pull-to-refresh.
First of all, is there a name for this kind of thing?
My first thought was to plot a point on the radius of the starting and ending circles based on the angle between the two, and draw a quadratic Bézier curve using the distance/2 as a control point. But I'm not a maths whizz, so I'm struggling to figure out how to place the control point so that it adjusts the thickness of the path.
But a bigger problem is that I need to fill the shape with a colour, and that doesn't seem to be possible with OpenGL Bézier curves as far as I can tell, since curves don't seem to form part of a shape that can be filled.
So I looked at using a spline created using a point array, but that opens up a whole new world of mathematical pain as I'd have to figure out where all the points along the edge of the path are.
So before I go down that rabbit hole, I'm wondering whether there's something simpler that I'm overlooking, or if anyone can point me towards the most effective technique.
I'm not sure about a "common" technique that people use, other than calculating it mathematically, but this project, SlimeyRefresh, is a good example of how to accomplish this.
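The mathematical route is less painful than it sounds: sample a quadratic Bézier between the two circle centers, offset each sample perpendicular to the curve by a half-width that tapers as the band stretches, and fill the resulting closed outline as a triangle strip. A 2D sketch of that, with the sag and rest-length parameters invented purely for illustration:

```python
import numpy as np

def stretchy_band(p0, p1, r0, r1, rest_len=1.0, sag=0.3, samples=24):
    """Outline of a filled 'stretchy' band between two circles.
    A quadratic Bezier runs between the circle centers, with its control
    point pushed sideways by `sag`; the half-width tapers from r0 to r1
    and shrinks as the band stretches past rest_len. Zip the returned
    left/right edges into a triangle strip and fill it."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    chord = p1 - p0
    length = np.linalg.norm(chord)
    thin = min(1.0, rest_len / length)        # thinner as it stretches
    perp = np.array([-chord[1], chord[0]]) / length
    ctrl = 0.5 * (p0 + p1) + perp * sag       # bend the band slightly
    left, right = [], []
    for t in np.linspace(0.0, 1.0, samples):
        point = (1-t)**2 * p0 + 2*(1-t)*t * ctrl + t**2 * p1
        deriv = 2*(1-t) * (ctrl - p0) + 2*t * (p1 - ctrl)
        n = np.array([-deriv[1], deriv[0]]) / np.linalg.norm(deriv)
        half_w = ((1-t) * r0 + t * r1) * thin  # taper between the circles
        left.append(point + n * half_w)
        right.append(point - n * half_w)
    return np.array(left), np.array(right)
```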

Vertex buffer objects and glutsolidsphere

I have to draw a great collection of spheres in a 3D physical simulation of a "spring-mass" like system.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Is there any method to draw OpenGL spheres in a way faster than glutSolidSphere?
Spheres are self-similar; every sphere is just a scaled version of any other sphere. I see no need to regenerate any geometry. Indeed, I see no need to have more than one sphere at all.
It's simply a matter of providing the proper scaling matrix. I would suggest a sphere of radius one centered at the origin for your display list or buffer object mesh. Then you can just transform it to different locations, using a scale to set the new radius.
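As a sketch of that idea, the per-sphere transform is just a uniform scale by the radius followed by a translation to the center (column-vector convention assumed here):

```python
import numpy as np

def sphere_model_matrix(center, radius):
    """Model matrix mapping a unit sphere at the origin onto a sphere of
    the given radius at `center` (column-vector convention)."""
    m = np.eye(4)
    m[:3, :3] *= radius      # uniform scale sets the radius
    m[:3, 3] = center        # translation places the sphere
    return m
```

Per frame you only upload one such matrix per sphere; the sphere mesh itself never changes.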
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
Why are you generating a display list at all if the geometry you put into it is dynamic? Display lists are meant for static geometry that never changes, or changes only seldom.
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Actually, VBOs are most efficient with static geometry as well. In general you want to keep the number of actual geometry updates as low as possible. In your case the only things updating are the positions (and maybe the sizes) of the spheres. This is a prime example for instanced drawing. However, it also works well to update only a uniform or the transformation matrix and then issue the draw call for a single sphere.
The idea of vertex arrays and VBOs is that you draw a whole batch of geometry with a single call. A sphere would be such a batch.
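A sketch of that one-time batch: generate a unit UV sphere's vertices and indices once, upload them to a VBO/IBO at startup, and per frame change only each sphere's transform (via a uniform, or an instance buffer for instanced drawing). Since the sphere is unit and centered at the origin, each vertex position doubles as its normal. Parameter names here are illustrative:

```python
import numpy as np

def unit_sphere_mesh(stacks=16, slices=32):
    """Vertices and triangle indices for a unit UV sphere, generated
    once and then reused for every sphere in the scene."""
    verts, idx = [], []
    for i in range(stacks + 1):
        phi = np.pi * i / stacks               # polar angle
        for j in range(slices + 1):
            theta = 2 * np.pi * j / slices     # azimuthal angle
            verts.append((np.sin(phi) * np.cos(theta),
                          np.cos(phi),
                          np.sin(phi) * np.sin(theta)))
    for i in range(stacks):
        for j in range(slices):
            a = i * (slices + 1) + j           # this stack
            b = a + slices + 1                 # next stack down
            idx += [(a, b, a + 1), (a + 1, b, b + 1)]
    return np.asarray(verts, np.float32), np.asarray(idx, np.uint32)
```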

Programming 3D animations?

What are the different ways to handle animations for 3D games?
Do you somehow program the animations in the modeler, and thus the file, then read and implement them in the game? Or do you create animation functions to animate your still vectors?
Where are some good tutorials for the programming side of this? Google only gave me the modeler's side.
In production environments, animators use specialized tools such as Autodesk's 3DS Max to generate keyframed animations of 3D models. For each animation for each model, the animator constructs a number of poses for the model called the keyframes, which are then exported out into the game's data format.
The game then loads these keyframes, and to animate the model at a particular time, it picks the two nearest keyframes and interpolates between them to give a smooth animation, even with a small number of keyframes.
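A minimal sketch of that sampling step, assuming keyframe times are sorted and poses support arithmetic (rotations would use slerp rather than this plain lerp):

```python
import bisect

def sample_animation(times, poses, t):
    """Pick the two keyframes bracketing time t and linearly interpolate.
    `times` is a sorted list of keyframe times; `poses` holds one value
    (e.g. a numpy array of bone parameters) per keyframe."""
    if t <= times[0]:
        return poses[0]
    if t >= times[-1]:
        return poses[-1]
    i = bisect.bisect_right(times, t)     # first keyframe after t
    t0, t1 = times[i - 1], times[i]
    u = (t - t0) / (t1 - t0)              # blend factor in [0, 1]
    return (1.0 - u) * poses[i - 1] + u * poses[i]
```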
Animated models are typically constructed with a bone hierarchy. There is a root bone, which controls the model's location and orientation within the game world. All of the other bones are defined relative to some parent bone to create a tree. Each vertex of the model is tied to a particular bone, so the model can be controlled with a much smaller number of parameters: the relative positions, orientations, and scales of each bone.
Smooth skinning is a technique used to improve the quality of animations. With smooth skinning, a vertex is not tied to just one bone, but it can be tied to multiple bones (usually a hard limit, such as 4, is set; vertices rarely need more than 3) with corresponding weights. This makes the animator's job harder, since he has to do a lot more work with vertices near joints, but the result is a much smoother animation with less distortion around the joints.
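The blend itself is usually linear blend skinning: the skinned position is the weighted sum of the vertex run through each influencing bone's skinning matrix (the bone's world transform times its inverse bind pose). A sketch with illustrative names:

```python
import numpy as np

def skin_vertex(bind_pos, bone_matrices, influences):
    """Linear blend skinning for one vertex. `bone_matrices` holds each
    bone's 4x4 skinning matrix (world transform * inverse bind pose);
    `influences` is a list of (bone_index, weight) pairs summing to 1."""
    p = np.append(np.asarray(bind_pos, float), 1.0)   # homogeneous coords
    out = np.zeros(4)
    for bone, weight in influences:
        out += weight * (bone_matrices[bone] @ p)
    return out[:3]
```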
Alternatively, some games use procedural animation, whereby the animation data is constructed at runtime. The bone positions and orientations are computed according to some algorithm, such as inverse kinematics or ragdoll physics. Other options are of course available, but they must be coded by programmers.
Instead of procedurally animating the bones and using forward kinematics to determine the locations of all of the model's vertices, another option is to just procedurally generate the position of each vertex on its own. This allows for more complex animations that aren't bound by bone hierarchies, but of course it's much more complicated and much harder to do well.
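As a taste of the procedural side, here is a sketch of analytic two-bone inverse kinematics in 2D, solving a shoulder and elbow angle directly from the law of cosines (the conventions and names are my own):

```python
import numpy as np

def two_bone_ik(target, l1, l2):
    """Joint angles placing the tip of a two-segment limb (lengths l1,
    l2, rooted at the origin) at `target`, clamped to reachable range.
    Returns (shoulder_angle, elbow_bend), both in radians."""
    x, y = target
    d = np.clip(np.hypot(x, y), abs(l1 - l2) + 1e-9, l1 + l2 - 1e-9)
    # Elbow bend from the law of cosines on the triangle (l1, l2, d).
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = np.pi - np.arccos(np.clip(cos_elbow, -1.0, 1.0))
    # Shoulder: aim at the target, then correct for the bent elbow.
    cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = np.arctan2(y, x) - np.arccos(np.clip(cos_inner, -1.0, 1.0))
    return shoulder, elbow
```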
Different ways to handle animation in games?
For characters, typically:
Skeletal Deformation
Blend shapes
Sometimes even fully animated meshes (L.A. Noire did that)
In the recent Tomb Raider we use a compute job to simulate thousands of splines for the hair, and the vertices are controlled by these splines with a vertex shader.
Often these are driven by key-frame animations which are interpolated at runtime, and sometimes we add code to drive the influences (bones, blendshape weights, etc.) procedurally.
For some simple objects that move rigidly, the designers will sometimes have animation control over the whole object without having to deform it. Sometimes simple things like shrubs can be made to move using something akin to blendshapes, where you have a full mesh pose at a few extremes and you blend between those extremes.
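That blendshape-style evaluation is just a weighted sum of per-vertex offsets on top of a base mesh; a minimal sketch:

```python
import numpy as np

def blend_shapes(base, deltas, weights):
    """Blend-shape (morph target) evaluation: start from the base mesh
    and add each extreme pose's per-vertex offset scaled by its weight.
    `base` is an (N, 3) array; each entry of `deltas` is (target - base)."""
    out = np.asarray(base, float).copy()
    for delta, w in zip(deltas, weights):
        out += w * np.asarray(delta, float)
    return out
```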
As for tutorials... well... Are you interested in writing the math for the animation yourself or are you more interested in getting some animated character in your game and focus on the gameplay?
For just doing the gameplay, Unity is a pretty great platform to play around with.
If you are however interested in understanding the math for animation or games in general, here's what I'd read up on as a primer:
Matrices used for 3d transforms (make sure you understand the commutative, associative and distributive laws, and learn what each row and column represents - they are simpler than they seem)
Quaternions for rotations (less intuitive, but closely tied to an axis-and-angle representation of rotation)
And don't leave out the basics (a small sketch using both follows the list):
Vector Dot Product
Vector Cross Product
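As a tiny example tying these together, rotating a vector by a unit quaternion can be written entirely with dot and cross products:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z), using the
    standard expansion of q * v * q^-1 into dot and cross products:
    v' = (w^2 - |u|^2) v + 2 (u.v) u + 2 w (u x v), where u = (x, y, z)."""
    w, u = q[0], np.asarray(q[1:], float)
    v = np.asarray(v, float)
    return (v * (w * w - np.dot(u, u))
            + 2.0 * u * np.dot(u, v)
            + 2.0 * w * np.cross(u, v))
```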
Also, about 10 years ago I wrote a simple animation library; it's free of most of the more advanced concepts, so it's not a bad place to look if you want a basic idea of how things work: http://animadead.sf.net
Good Luck!
