Mix animations in a 3D tennis game - animation

I am trying to make a 3D tennis game. This is my first Unity game. My game requires multiple animations, like running and hitting the ball, volleying at the net, serving the ball, etc.
I am confused about how to mix multiple animations in Unity. What is the best way to approach this problem? Are blend trees appropriate for this kind of thing?

Related

Is it possible to control a 3D animation's movement from a Unity script?

I have this golfer animation. Its movement is quite smooth, and it is the best golfer animation I could find.
But its movements always follow the same pattern.
For example, suppose I want the model to reproduce a real person's movements while playing golf outside. Is that possible?
That is, can I control the club's 3D position and orientation in the animation from a script?
You could use Inverse Kinematics (https://docs.unity3d.com/Manual/InverseKinematics.html).
With this you attach the hands to the club and move the club as you want: the hands, arms, and nearby bones will follow it in a "realistic" way, while the rest of the body is animated as usual.
However, the club's movement will need some work to feel as nice as your original animations.
Target Matching (https://docs.unity3d.com/Manual/TargetMatching.html) could be useful too, or even better, but I have never used it.
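If it helps, here is a minimal C# sketch of the IK idea (the script and grip-point names are invented for the example); it assumes a humanoid rig with "IK Pass" enabled on the Animator layer:

using UnityEngine;

// Sketch: pin both hands to grip points parented to the club, using Unity's
// built-in IK. Requires a humanoid Avatar and "IK Pass" ticked on the layer.
[RequireComponent(typeof(Animator))]
public class ClubGripIK : MonoBehaviour
{
    public Transform leftGrip;    // hypothetical empty transforms placed on the club
    public Transform rightGrip;

    Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Unity calls this during the Animator's IK pass.
    void OnAnimatorIK(int layerIndex)
    {
        animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
        animator.SetIKRotationWeight(AvatarIKGoal.LeftHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.LeftHand, leftGrip.position);
        animator.SetIKRotation(AvatarIKGoal.LeftHand, leftGrip.rotation);

        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, rightGrip.position);
        animator.SetIKRotation(AvatarIKGoal.RightHand, rightGrip.rotation);
    }
}

With something like this in place, moving the club object from a script drags the hands and arms along with it.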

Artificial intelligence functionality in three.js

Does three.js have any AI (artificial intelligence) functionality? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such functionality or a system for it?
WebGL:
create buffer
bind buffer
allocate data
set up state
issue draw call
run GLSL shaders

three.js:
create a 3D context using WebGL
create 3-dimensional objects
create a scene graph
create primitives like spheres, cubes, toruses
move objects around, rotate them, scale them
test for intersections between rays, triangles, planes, spheres, etc.
create 'materials' (rather than shaders)

JavaScript:
write algorithms
I want enemies to look for me and try to kill me
Yes, three.js is capable of this; you just have to write an algorithm using three.js's classes. Your enemies would be 3D objects casting rays, intersecting with other objects, and so on.
You would be building a game engine, and you could use three.js as your rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
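To make that concrete, here is a rough, engine-agnostic sketch of that kind of algorithm (written in C# with invented names; in three.js the same steps map onto THREE.Vector3 and THREE.Raycaster):

using System.Numerics;

// Sketch of "enemies look for the player": a ray test for line of sight,
// plus a simple chase step when the player is visible.
class Enemy
{
    public Vector3 Position;
    public float Speed = 2f;

    // Ray-sphere test: is the straight line to the player clear of the obstacle?
    public bool CanSee(Vector3 player, Vector3 obstacleCenter, float obstacleRadius)
    {
        Vector3 toPlayer = player - Position;
        float distToPlayer = toPlayer.Length();
        Vector3 dir = toPlayer / distToPlayer;            // unit ray direction

        Vector3 toCenter = obstacleCenter - Position;
        float t = Vector3.Dot(toCenter, dir);             // closest approach along the ray
        if (t < 0f || t > distToPlayer) return true;      // obstacle behind us or past the player
        float distSq = toCenter.LengthSquared() - t * t;  // squared ray-to-center distance
        return distSq > obstacleRadius * obstacleRadius;  // clear if the ray misses the sphere
    }

    // Chase the player only while we can see them.
    public void Update(Vector3 player, Vector3 obstacleCenter, float obstacleRadius, float dt)
    {
        if (!CanSee(player, obstacleCenter, obstacleRadius)) return;
        Position += Vector3.Normalize(player - Position) * (Speed * dt);
    }
}

None of this needs a rendering library at all; the renderer only draws the result.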
Any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'webgl engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3D engine. Wikipedia says:
"Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser. Three.js uses WebGL."
So if I just need to draw a car or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need that - with a few draw instructions you could render a full-screen quad with a very elaborate pixel shader. Three.js gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3D' graphics, you could still leverage WebGL through three.js. You wouldn't need any of the vector, matrix, ray, or geometry classes.
Without Vector3 you probably can't keep PlaneGeometry, but you could use BufferGeometry and manually construct a plane. No transformations need to happen, so there is no need for the matrix classes. You'd use shaders, textures, and perhaps something like the EffectComposer.
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few websites do offer pre-coded features like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't; however, https://mugen87.github.io/yuka/ (Yuka) is a great AI engine that can work alongside three.js.
It provides line-of-sight and shooting-game logic, as well as vehicle logic, which I've been playing around with recently; there is a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o

How to make .fbx animations collide in Unity

I am new to Unity. I have two animations in .fbx format. They can move. Now I want a sound to be produced when they collide with each other. Any idea how I can do this? Thanks in advance.
I think you need to read about how physics works, and then how trigger events and collision detection are handled.
Read this here, and this. The first one gives you insight into how the Unity engine works; the latter provides a video tutorial on how to do collision detection.
If you don't want to do that and just want the code, here is what I found on a quick Google:
var crashSound : AudioClip; // set this to your sound in the Inspector

function OnCollisionEnter (collision : Collision) {
    // the next line requires an AudioSource component on this GameObject
    audio.PlayOneShot(crashSound);
}
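That snippet is old UnityScript. In current Unity versions, a roughly equivalent C# component (assuming an AudioSource on the same GameObject, plus colliders and a Rigidbody on the objects so OnCollisionEnter actually fires) would look like:

using UnityEngine;

// Plays a crash sound whenever this object's collider hits another collider.
// Needs a Collider on both objects and a Rigidbody on at least one of them.
[RequireComponent(typeof(AudioSource))]
public class CrashSound : MonoBehaviour
{
    public AudioClip crashSound;   // assign in the Inspector

    void OnCollisionEnter(Collision collision)
    {
        GetComponent<AudioSource>().PlayOneShot(crashSound);
    }
}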
You can add a MeshCollider to the .fbx meshes, but this is not a good idea because it will cause performance issues.
Instead, you can create an empty GameObject for each character and add to it the .fbx animation and a simple collider (a cube, sphere, capsule, etc.). Then, when you use a script for them, you attach it to the parent object and handle the whole thing from there.
If you want the collider to follow specific parts of the animation (like a punch or a kick), you can ask your 3D animator/modeler to add a simple mesh at those points, for example a sphere on one fist, which will move with the animation. Then, in Unity, you hide the sphere's mesh but add a mesh collider to it. :)
Hope it helps!
Most of the time, if you apply an animation to an object you'll lose the physics reaction. Don't trust me? See here: http://www.youtube.com/watch?v=oINKQUJZc1Q
Obviously, animations are not part of Unity physics. Think about it: Unity physics decides the position and rotation of objects according to the laws of Newton and friends. How could those laws accommodate an arbitrary keyframed animation? They can't, hence the crazy results you get when you try.
How to solve it? Use Unity physics for the animation as well: learn to master rigidbody.AddForce and all the other stuff described here.
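As a toy illustration of letting the physics engine drive motion instead of keyframes (script name and values invented):

using UnityEngine;

// Moves an object with forces, so the physics engine stays in charge of
// position and rotation instead of being overridden by keyframes.
[RequireComponent(typeof(Rigidbody))]
public class PhysicsMover : MonoBehaviour
{
    public float thrust = 10f;

    void FixedUpdate()
    {
        // Push forward; gravity and collisions still apply normally.
        GetComponent<Rigidbody>().AddForce(transform.forward * thrust, ForceMode.Force);
    }
}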
You almost always want to keep the physics and the animation separated. That's how you stay out of trouble.
If you really want to know: here's my personal experience on how to mediate physics with animation.
Sometimes, even binding one parameter to the physics and another to an animation (or a script that mediates user input) can produce catastrophic results. I made a starship: rotation controlled by the user's mouse (by flagging "freeze rigidbody rotation"), direction and speed by physics. It was inside a box collider. Imagine what happens when a cube, tilted by a few degrees, meets flat ground: it should fall and rotate until one of its faces lies completely on the ground. This was impossible, as I had blocked any physics interaction with the body's rotation: as a result, the box wanted to lie flat on the ground but couldn't. This tension eventually made it slide forward forever, something impossible in the real world. To fix this, I made the "freeze rotation" parameter change dynamically with the user input: while the ship is moving, the rotation is controlled by the user, but as soon as the user stops controlling the ship, the rotation is handed back to the physics engine. Another solution would be to cast a ray down from the collider, check whether the ground is near, and avoid collisions if the ship is not moving (this is how the Banshee in Halo: Combat Evolved is controlled, I think). When playing video games, always have a look at how your user input is mediated into the physics engine: you may discover things a normal player wouldn't notice.

Blending animations in DirectX - is this technically possible?

I have an animated mesh in the .x format that I've loaded with D3DXLoadMeshHierarchyFromX, and I have an animation controller for it. The mesh has two animations, one for walking and one for throwing.
Is it at all possible to blend the two animations in such a way that both can run together, with the walk taking priority for frames below the hip while the throwing animation takes priority for frames above it? If it is, will the effect look convincing and therefore be worth pursuing? Do game developers typically blend animations this way to get all the different animations they need, or do they simply create multiple versions of the same animation, i.e. walking while throwing, standing while throwing, walking without throwing?
You can set high- and low-priority animation tracks with ID3DXAnimationController::SetTrackPriority (for example, the walk on the low-priority track and the throw on the high-priority one). You can then blend between the two priority groups using ID3DXAnimationController::SetPriorityBlend.

Programming 3D animations?

What are the different ways to handle animations for 3D games?
Do you somehow program the animations in the modeler, and thus into the file, then read and play them back in the game? Or do you write animation functions that animate your static geometry?
Where are some good tutorials for the programming side of this? Google only gave me the modeler's side.
In production environments, animators use specialized tools such as Autodesk's 3DS Max to generate keyframed animations of 3D models. For each animation for each model, the animator constructs a number of poses for the model called the keyframes, which are then exported out into the game's data format.
The game then loads these keyframes, and to animate the model at a particular time, it picks the two nearest keyframes and interpolates between them to give a smooth animation, even with a small number of keyframes.
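A bare-bones sketch of that sampling step (types invented for illustration; keyframes assumed sorted by time, positions only):

using System.Numerics;

// One keyframe: a time stamp plus a pose parameter (here just a position).
struct Keyframe
{
    public float Time;
    public Vector3 Position;
}

static class AnimationSampler
{
    // Find the two keyframes surrounding 't' and interpolate between them.
    public static Vector3 Sample(Keyframe[] frames, float t)
    {
        if (t <= frames[0].Time) return frames[0].Position;
        if (t >= frames[frames.Length - 1].Time) return frames[frames.Length - 1].Position;

        int i = 1;
        while (frames[i].Time < t) i++;              // frames[i-1].Time <= t < frames[i].Time

        Keyframe a = frames[i - 1], b = frames[i];
        float s = (t - a.Time) / (b.Time - a.Time);  // 0..1 between the two keys
        return Vector3.Lerp(a.Position, b.Position, s);
    }
}

Real engines interpolate bone rotations with quaternion slerp rather than a straight lerp, but the two-keyframe lookup is the same.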
Animated models are typically constructed with a bone hierarchy. There is a root bone, which controls the model's location and orientation within the game world. All of the other bones are defined relative to some parent bone to create a tree. Each vertex of the model is tied to a particular bone, so the model can be controlled with a much smaller number of parameters: the relative positions, orientations, and scales of each bone.
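In code, the hierarchy boils down to composing each bone's local transform with its parent's, roughly like this (types invented; System.Numerics treats vectors as row vectors, hence the multiplication order):

using System.Numerics;

// A bone stores its transform relative to its parent; the world transform
// is built by walking up the hierarchy to the root.
class Bone
{
    public Bone Parent;        // null for the root bone
    public Matrix4x4 Local;    // position/orientation/scale relative to the parent

    public Matrix4x4 World()
    {
        if (Parent == null) return Local;   // the root's local space is world space
        return Local * Parent.World();      // apply Local first, then the parent chain
    }
}

Moving one bone's Local matrix automatically carries all of its children with it, which is exactly the behavior you want from a skeleton.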
Smooth skinning is a technique used to improve the quality of animations. With smooth skinning, a vertex is not tied to just one bone, but it can be tied to multiple bones (usually a hard limit, such as 4, is set; vertices rarely need more than 3) with corresponding weights. This makes the animator's job harder, since he has to do a lot more work with vertices near joints, but the result is a much smoother animation with less distortion around the joints.
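The skinned position of a vertex is then a weighted sum of where each influencing bone would carry it, something like this sketch (names invented):

using System.Numerics;

static class Skinning
{
    // Blends a vertex between up to four bones. Each matrix maps the bind-pose
    // position into the current pose; the weights are assumed to sum to 1.
    public static Vector3 SkinVertex(
        Vector3 bindPosition,
        Matrix4x4[] boneMatrices,   // one matrix per bone
        int[] boneIndices,          // the (up to) 4 bones influencing this vertex
        float[] weights)            // the corresponding weights
    {
        Vector3 result = Vector3.Zero;
        for (int i = 0; i < boneIndices.Length; i++)
        {
            Vector3 moved = Vector3.Transform(bindPosition, boneMatrices[boneIndices[i]]);
            result += moved * weights[i];
        }
        return result;
    }
}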
Alternatively, some games use procedural animation, whereby the animation data is constructed at runtime. The bone positions and orientations are computed according to some algorithm, such as inverse kinematics or ragdoll physics. Other options are of course available, but they must be coded by programmers.
Instead of procedurally animating the bones and using forward kinematics to determine the locations of all of the model's vertices, another option is to just procedurally generate the position of each vertex on its own. This allows for more complex animations that aren't bound by bone hierarchies, but of course it's much more complicated and much harder to do well.
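For example, one classic procedural effect is displacing every vertex with a sine wave each frame; in Unity that might look roughly like this (script name and values invented):

using UnityEngine;

// Procedural per-vertex animation: wave the mesh up and down over time,
// with no bones or keyframes involved.
[RequireComponent(typeof(MeshFilter))]
public class WaveMesh : MonoBehaviour
{
    public float amplitude = 0.1f;
    public float frequency = 2f;

    Mesh mesh;
    Vector3[] original;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;   // instance copy we may modify
        original = mesh.vertices;
    }

    void Update()
    {
        var moved = new Vector3[original.Length];
        for (int i = 0; i < original.Length; i++)
        {
            Vector3 v = original[i];
            v.y += amplitude * Mathf.Sin(frequency * Time.time + v.x);
            moved[i] = v;
        }
        mesh.vertices = moved;
        mesh.RecalculateNormals();
    }
}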
Different ways to handle animation in games?
For characters, typically:
Skeletal Deformation
Blend shapes
Sometimes even fully animated meshes (L.A. Noire did that)
In the recent Tomb Raider we use a compute job to simulate thousands of splines for the hair, and the vertices are controlled by these splines with a vertex shader.
Often these are driven by keyframe animations that are interpolated at runtime, and sometimes we add code to drive the influences (bones, blend shape weights, etc.) procedurally.
For some simple objects that move rigidly, sometimes the designer will have animation control over the whole object without having to deform it. Sometimes simple things like shrubs can be made to move by using something akin to blend shapes, where you have a full mesh pose at a few extremes and you blend between those extremes (a sketch of that blending follows below).
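The blending itself is just per-vertex interpolation between those extreme poses; conceptually it reduces to something like this (a sketch, not any particular engine's API):

using System.Numerics;

// Morph-target blending: each extreme pose is stored as per-vertex deltas
// from the base mesh, and the weighted deltas are summed.
static class BlendShapes
{
    public static Vector3 Blend(Vector3 basePos, Vector3[] targetDeltas, float[] weights)
    {
        Vector3 result = basePos;
        for (int i = 0; i < targetDeltas.Length; i++)
            result += targetDeltas[i] * weights[i];   // weight 0 = base pose, 1 = full extreme
        return result;
    }
}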
As for tutorials... well... Are you interested in writing the math for the animation yourself, or are you more interested in getting an animated character into your game and focusing on the gameplay?
For just doing the gameplay, Unity is a pretty great platform to play around with.
If you are however interested in understanding the math for animation or games in general, here's what I'd read up on as a primer:
Matrices used for 3d transforms (make sure you understand the commutative, associative and distributive laws, and learn what each row and column represents - they are simpler than they seem)
Quaternions for rotations (less intuitive, but closely tied to an axis and an angle of rotation)
And don't leave out the basics:
Vector Dot Product
Vector Cross Product
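Those two operations carry a surprising amount of game math; a toy example of the kind of question they answer (invented values):

using System;
using System.Numerics;

static class BasicsDemo
{
    static void Main()
    {
        // +x is right, +y is up, +z is forward in this example.
        Vector3 forward = new Vector3(0, 0, 1);
        Vector3 toTarget = Vector3.Normalize(new Vector3(1, 0, 1));

        // Dot product of unit vectors = cosine of the angle between them;
        // positive means the target is in front of us.
        float facing = Vector3.Dot(forward, toTarget);

        // Cross product is perpendicular to both inputs; with these axes its
        // y component is positive because the target is off to our +x side.
        Vector3 side = Vector3.Cross(forward, toTarget);

        Console.WriteLine($"in front: {facing > 0}, side.Y: {side.Y:F2}");
    }
}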
Also, about 10 years ago I wrote a simple animation library; it is free of most of the more advanced concepts, so it's not a bad place to look if you want a basic idea of how this works: http://animadead.sf.net
Good Luck!
