I am new to Unity. I have two animations in .fbx format, and they can move. Now I want a sound to play when the two collide with each other. Is there any idea how I can do this? Thanks in advance.
I think you need to read about how physics works, and then how trigger events and collision detection are handled.
Read this here, and this. The first one gives you insight into how the Unity engine works. The latter provides a video tutorial on how to do collision detection.
If you don't want to do that and just want the code, I found this on a quick Google:
var crashSound : AudioClip; // set this to your sound in the inspector

function OnCollisionEnter (collision : Collision) {
    // the next line requires an AudioSource component on this gameObject
    audio.PlayOneShot(crashSound);
}
You can add a MeshCollider to the .fbx meshes. However, this is not a good idea, because it can cause performance issues.
Instead, you can create an empty GameObject for each character and add to it the .fbx animation and a simple collider (a box, sphere, capsule, etc.). Then, when you write a script for the character, you attach it to the parent object and handle the whole thing from there.
If you want the collider to follow specific parts of the animation (like a punch or a kick), you can ask your 3D animator/modeler to add a simple mesh at those points. For example, a sphere on one fist, which will move with the animation. Then, in Unity, you hide the sphere's mesh but add a collider to it. :)
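As a rough sketch of that last step, in the same old-style UnityScript as the collision snippet above (assuming this script sits on the helper sphere itself):

function Start () {
    // Hide the helper sphere's mesh but keep its collider so it can still register hits.
    renderer.enabled = false;
}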
Hope it helps!
Most of the time, if you apply an animation to an object then you'll lose the physics reaction. Don't trust me? See here: http://www.youtube.com/watch?v=oINKQUJZc1Q
Obviously, animations are not part of Unity physics. Think about it: Unity physics decides the position and rotation of objects according to the laws of Newton and friends. How could those laws agree with an arbitrary keyframed animation? They can't: hence the crazy results you get when you try.
How to solve it? Use Unity physics for the animation as well: learn to master rigidbody.AddForce and all the other stuff described here.
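For example, a minimal sketch in the same old-style UnityScript as the snippet above, driving an object with forces instead of keyframed motion (the force value and input axes are just placeholders):

var moveForce : float = 10.0;   // tune this in the inspector

function FixedUpdate () {
    // Read player input and let the physics engine move the rigidbody.
    var input : Vector3 = Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
    rigidbody.AddForce(input * moveForce);   // requires a Rigidbody on this gameObject
}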
In general, you want to keep the physics and the animation separated. That's how you stay out of trouble.
If you really want to know: here's my personal experience on how to mediate physics with animation.
Sometimes, even binding one parameter to the physics and another to an animation (or to a script which mediates user input) can have catastrophic results. I once made a starship: its rotation was controlled by the user's mouse (by flagging "block rigidbody rotation"), while direction and speed were left to the physics. It sat inside a box collider. Imagine what happens when a cube, tilted by a few degrees, meets flat ground: it should fall and rotate until one of its faces lies completely on the ground. Here that was impossible, because I had blocked any physics interaction with the body's rotation: the box wanted to lie flat on the ground but couldn't. That tension eventually made it slide forward forever, something impossible in the real world.
To mitigate this, I made the "block rotation" flag change dynamically with the user input: while the user is steering, the rotation is controlled by the user, but as soon as the user stops controlling the ship, the rotation is handed back to the physics engine. Another solution would be to cast a ray down from the collider, check whether the ground is near, and skip the collision response if the ship is not moving (this is how the Banshee in Halo: Combat Evolved is controlled, I think). When playing video games, always have a look at how user input is mediated into the physics engine: you may discover things a normal player would never notice.
I'm just diving into Cannon.js and was wondering how to achieve this.
My use case is that I have a circular area on the floor where, if the player steps inside, they will trigger some interactions. I didn't want to use raycasting for this because I was already using Cannon.js for other collisions and felt that raycasting would add another layer of performance overhead.
Right now my player object is a simple sphere-shaped rigid body that I move by setting its velocity. I made the interaction area a Cylinder shape with a very low height. However, when the player goes over this object, the collision gets registered successfully, but the player spins out of control and there's a noticeable bump in the movement.
Is there a standard way to register these kinds of objects for Cannon.js? I would like it so there is no bump, almost as if it's an invisible object that the player can pass through but it still registers collisions.
A typical way of solving this issue in games is the use of triggers, so the physics engine does not have to be involved at all.
A trigger can be implemented as a simple bounding volume like a bounding sphere; in three.js, that's an instance of THREE.Sphere. Each simulation step, you test whether the trigger was activated by the player. You can do this in various ways: for example, by testing whether the player's position is inside the bounding sphere via Sphere.containsPoint(), or by representing the player as another bounding volume, like a bounding sphere or an AABB (via THREE.Box3), and performing an intersection test.
Both approaches are very fast and should not noticeably affect the performance of your app, even if you are doing these tests with many game entities that can potentially activate triggers.
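A minimal sketch of the bounding-sphere variant (the names, radius and callbacks are placeholders, and the cannon.js body position is simply copied into a THREE.Vector3 each step):

var triggerArea = new THREE.Sphere(new THREE.Vector3(0, 0, 0), 2); // center and radius of the floor circle
var playerPoint = new THREE.Vector3();
var wasInside = false;

function checkTrigger(playerBody) {
    playerPoint.copy(playerBody.position);               // a cannon.js Vec3 has x/y/z, so copy() works
    var inside = triggerArea.containsPoint(playerPoint);
    if (inside && !wasInside) onEnterArea();             // hypothetical callback: start the interactions
    if (!inside && wasInside) onLeaveArea();             // hypothetical callback: stop them again
    wasInside = inside;
}

// Call checkTrigger(playerBody) once per simulation step, e.g. right after world.step().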
Here is a simple example that demonstrates the concept of triggers in Yuka:
https://mugen87.github.io/yuka/examples/entity/trigger/
I am trying to implement a scene where an object is updated in a different way for each eye (e.g. I want to show the box rotating in opposite directions for each eye).
I have a demo application written in WebGL using Three.js, as described in the Google Developers tutorial.
But there is only one scene, containing one mesh, with a single update function. I can't find a way to split the update so that it is done separately for each eye (just as the rendering is), and I wonder if it's even possible.
Does anybody have any experience with a similar case?
Your use case is rather unusual (and may I say, eye-watering), so basically the answer is no: Three.js has abstracted away the left/right eye dichotomy of VR. Internally it renders the scene using an array of two cameras, with the correct left/right eye settings.
Fortunately, every object has an onBeforeRender(renderer, scene, camera, ...) event. If you hook that event and find a way to distinguish the left/right eye camera, you should be able to modify the orientation just before it gets rendered.
A (perhaps too) simple way to distinguish the cameras would be to keep track of the index with a counter.
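A rough sketch of that counter idea; mesh is whatever object you want to differ per eye, and it assumes onBeforeRender fires once per eye with the left eye rendered first, which you should verify for your three.js version:

var eyeIndex = 0;

mesh.onBeforeRender = function (renderer, scene, camera) {
    // Give each eye the opposite rotation just before its render pass.
    mesh.rotation.y = (eyeIndex === 0) ? 0.3 : -0.3;
    eyeIndex = (eyeIndex + 1) % 2;   // flip between 0 (assumed left) and 1 (assumed right)
};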
I have this golfer animation. Its movement is quite smooth and it is the best golfer animation I can find.
But its movements always follow the same pattern.
For example, I would like to use the model to represent a real person's movements while playing golf outside. Is that possible?
That is, can I control the club's 3D position and orientation in the animation from a script?
You could use Inverse Kinematics (https://docs.unity3d.com/Manual/InverseKinematics.html)
With this you attach the hands to the club and move the club as you want:
The hands, arms and nearby bones will follow it in a "realistic" way, while the rest of the body will be animated as usual.
However, the club movement will need some work to feel as nice as your original animations.
Target Matching (https://docs.unity3d.com/Manual/TargetMatching.html) could also be useful, or even better, but I have never used it.
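Coming back to the IK suggestion, here is a rough UnityScript sketch of pinning one hand to the club. It assumes a humanoid Animator with "IK Pass" enabled on the layer, and clubGrip is a placeholder transform sitting on the club's handle:

var clubGrip : Transform;            // placeholder: an empty transform on the club's grip
private var animator : Animator;

function Start () {
    animator = GetComponent.<Animator>();
}

function OnAnimatorIK (layerIndex : int) {
    // Full weight means the hand follows the club grip exactly.
    animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1.0);
    animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1.0);
    animator.SetIKPosition(AvatarIKGoal.RightHand, clubGrip.position);
    animator.SetIKRotation(AvatarIKGoal.RightHand, clubGrip.rotation);
}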
I have a rigged 3D model of a person with animations that I made. It is a player model in a first-person shooter. I want this model to bow when looking down, or do the opposite when looking up. To achieve this, instead of making an animation for every angle the player might look at, I decided to rotate the model's spine depending on the angle of the camera. In the Scene view, I can easily change the rotation value and get the results I want; however, when the game is running, those parameters seem to be "locked", and no matter what script I tried, I can't seem to change the rotation value. I figured that perhaps I can't change things the animation affects while it is playing, so I made a body mask to exclude the torso from the animations, but the spine's rotation was still locked away from me. Is there a way to rotate the model's spine while it is playing its normal, let's say, idle animation? Is there another easy way to achieve this?
You have to update in LateUpdate(). Unity's Animator makes its changes to the transform in Update(). By doing it in LateUpdate(), your change will be applied after the animation has made its changes.
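A rough UnityScript sketch of this idea; spine and cam are placeholders you would assign in the Inspector, and the exact rotation axis depends on how your rig's spine bone is oriented:

var spine : Transform;    // the spine bone from the rig's hierarchy
var cam : Transform;      // the first-person camera

function LateUpdate () {
    // Runs after the Animator has written its pose, so this rotation is not overwritten.
    var pitch : float = cam.localEulerAngles.x;
    if (pitch > 180.0) pitch -= 360.0;            // map 0..360 to -180..180
    spine.Rotate(pitch, 0.0, 0.0, Space.Self);    // bend the spine by the camera pitch
}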
Using three.js, I am trying to create a floor that reflects the objects that sit upon it. Preferably the floor material should reflect not like a mirror but in a more 'matte' or diffused way.
To achieve this I looked to Jaume Sanchez Elias, who has made a great example using a cube camera; look for the "smooth material" example on this page:
http://www.clicktorelease.com/blog/making-of-cruciform
Here is my attempt using the same technique. But as you can see, the reflections are misplaced: they do not appear underneath the mountain objects as expected.
http://dev.udart.dk/stackoverflow_reflections/
I am looking to correct this or to use any other technique that will achieve a more correct diffused reflection.
There are three.js examples using the cube camera technique, but they all create mirror-like effects, not a soft reflection.
Vibber. Parallax-corrected cubemaps, the technique used in cru·ci·form, only work for closed volumes, like cubes. It works really well for simulating correct reflections inside a room, but not so much for outdoor or open/large scenes. They also can't reflect anything that is inside the cubemap; you'd have to split the volume into many sub-volumes.
I can think of a couple of solutions for what you want to achieve:
SSR: screen-space reflections; you can find more info in many places on the internet. It's not the most trivial of effects to implement, and you might have to change the way you render your scene.
Simpler post-processing approach: since you have a flat floor, render the mountains vertically flipped on a framebuffer object, blur it, and render the regular scene on top. For extra effect, render the depth of the flipped mountains, and use that value as the blur radius, to get diffuse reflections.
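An (untested) sketch of the second idea, with the blur pass left out; mountains and the render-target size are placeholders, and it assumes the floor lies in the y = 0 plane (older three.js versions pass the render target to render() instead of using setRenderTarget):

var reflectionTarget = new THREE.WebGLRenderTarget(512, 512);

function renderReflection(renderer, scene, camera, mountains) {
    mountains.scale.y = -1;                      // mirror the mountains through the floor plane
    renderer.setRenderTarget(reflectionTarget);
    renderer.render(scene, camera);              // flipped mountains end up in reflectionTarget.texture
    renderer.setRenderTarget(null);
    mountains.scale.y = 1;                       // restore the scene for the normal render
    // Blur reflectionTarget.texture in a post-processing pass and sample it in the floor material.
}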
As always, there's a ton of ways to achieve the (un)expected result :)