First person movement tracking using simple camera - algorithm

The situation: I need to be able to track a hovering drone's translation (not height) and rotation over the ground using a downward-facing camera, and I don't know where to start looking. Can anyone with experience point me to some theory or resources? I'm looking for the type of algorithm an optical mouse would use, but am not having much luck so far: most results describe tracking an object in a fixed frame, whereas in my case the environment is relatively static and the camera moves.
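For what it's worth, an optical mouse sensor essentially runs block-matching optical flow between consecutive frames of the surface below it, and the same idea applies to a downward-facing drone camera. Below is a toy JavaScript sketch of that core step, assuming grayscale frames as flat Float32Array images; all names are illustrative, and a real system would add rotation estimation (e.g. phase correlation or feature tracking) and scale the result by altitude:

// Brute-force block-matching translation estimate between two grayscale
// frames of size w*h. `search` is the maximum shift tested in pixels.
// Real implementations use image pyramids or phase correlation instead
// of this O(search^2 * w * h) loop; this only shows the principle.
function estimateTranslation(prev, curr, w, h, search) {
  let best = { dx: 0, dy: 0, sad: Infinity };
  for (let dy = -search; dy <= search; dy++) {
    for (let dx = -search; dx <= search; dx++) {
      let sad = 0; // sum of absolute differences over the overlap region
      for (let y = search; y < h - search; y++) {
        for (let x = search; x < w - search; x++) {
          sad += Math.abs(curr[y * w + x] - prev[(y + dy) * w + (x + dx)]);
        }
      }
      if (sad < best.sad) best = { dx, dy, sad };
    }
  }
  // (dx, dy) maps current pixels back onto the previous frame, so the
  // image content shifted by (-dx, -dy); for a downward camera the
  // drone moved the opposite way, scaled by its height above ground.
  return best;
}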

Related

Arbitrary direction for DeviceOrientationControls

When using DeviceOrientationControls, I need to allow the user to reset their view to an arbitrary direction. Basically if I'm sitting in a chair with limited range of head motion, I want to allow the camera to switch to a different direction (how I trigger that change is not important).
alphaOffsetAngle works great for resetting the view to look left, right, or behind, but not for looking up or down (or left/right, but rotated).
I tried adding offset angles for Beta and Gamma, but that isn't as straightforward as I had hoped. I also tried adding the camera to an Object3D and rotating the parent. That sort of worked, but the controls got all wonky when the camera's parent was rotated.
lookAt() is pretty much what I want, but the DeviceOrientationControls update() seems to blow that away.
Does anyone have a working example of this arbitrary camera direction with DeviceOrientationControls?
This question is similar to these, but I have not found a workable solution:
Add offset to DeviceOrientationControls in three.js
and:
DeviceOrientationControls.js - Calibration to ideal starting center
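
One commonly suggested approach, sketched below in plain three.js JavaScript: let the controls write the raw device orientation into the camera each frame, then premultiply a stored offset quaternion computed at reset time. Here controls, camera, scene, and renderer are the usual three.js objects; this is an assumption-laden sketch, not a drop-in fix:

const offset = new THREE.Quaternion(); // extra world-space rotation applied on top of the device pose

function resetViewTo(targetQuaternion) {
  controls.update(); // camera.quaternion now holds the raw device orientation
  // offset = target * device^-1, so offset * device == target right now
  offset.copy(targetQuaternion).multiply(camera.quaternion.clone().invert()); // .inverse() in older three.js
}

function animate() {
  requestAnimationFrame(animate);
  controls.update();                     // raw device orientation
  camera.quaternion.premultiply(offset); // re-based to the arbitrary direction
  renderer.render(scene, camera);
}

Calling resetViewTo(new THREE.Quaternion()), for example, makes whatever direction the device currently faces map back to the scene's default forward view.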

ThreeJS translate scene position

I've been working on a game set in space, meaning the player can move through the solar system.
The issue comes when the player travels farther away and runs into Float32 precision issues.
I've been searching for a few hours to find a fix for this, but nothing has helped so far.
I also tried rescaling all the meshes to be tiny, about 100 times smaller than their initial scale, but that behaves the same once the coordinates get large.
Another solution would be to translate the world position rather than the player, which would do the job, but I honestly have no clue how to achieve this without changing each mesh position.
I've also set the renderer to use { logarithmicDepthBuffer: true }, but that still doesn't help: the player model starts jumping and flickering.
I've spent a lot of time trying to find a solution to this issue, so I appreciate any kind of advice.
To move your scene you can use:
scene.translateX(i);
scene.translateY(i);
scene.translateZ(i);
Where i is the increment from the existing position offset. This can give you the illusion of first-person movement.
This is a common solution for very large scenes.
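
A hedged sketch of that, assuming a normalized THREE.Vector3 moveDirection for the intended player motion and a scalar speed (both names illustrative): instead of advancing the camera, shift the world the opposite way each frame.

// move the world backwards instead of the player forwards
function moveWorld(moveDirection, speed, dt) {
  const step = speed * dt;
  scene.translateX(-moveDirection.x * step);
  scene.translateY(-moveDirection.y * step);
  scene.translateZ(-moveDirection.z * step);
}

This keeps the player/camera near (0, 0, 0), which is exactly where floating-point coordinates have the most precision.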

How does the smart toothbrush detect regions

I recently came across a product called Kolibree on Kickstarter, which is a smart toothbrush. From what they say on their website, it seems that Kolibree can detect each tooth. I have some exposure to gesture recognition and flight dynamics (roll angle, pitch angle, heading angle, ...), which I believe are the technologies used in this product, but I'm confused how it can accurately detect EACH tooth. I think we can detect the left, right, up, and down regions using the roll and pitch angles, maybe a little more precisely by also using the heading angle, but accuracy down to each tooth is beyond my understanding. Could someone shed light on this?
thanks,
Ted
From the Kickstarter video it has:
Accelerometers
Gyroscopes
Magnetometers
These provide relative position and absolute direction of the device
So how to detect teeth? I would start with this:
tooth shape
by brushing you can collect surface data in close proximity to the brush
but only while no significant surface movement is detected
this can differentiate tooth types by curvature shape/size
so you have an idea of which part of the jaw you are in
vibrations
the spinning brush creates noise pulses in the accelerometer readings
these should depend on the movement and the surface shape
when linear movement is detected (you move the brush from side to side)
the gaps between teeth will create measurable spikes in the acceleration
this can be used to recognize relative tooth position
angular constraints
when we brush teeth on the left/right side or top/bottom of the mouth
we hold the brush differently
this can also be measured
if the overall angular position is within certain bounds
then we can assume which side of the mouth we are actually brushing (see the sketch after this list)
when you put all this data together
you can improve the accuracy of the tooth scan considerably
also, if some kind of calibration is used, that can improve it more
for example, hold/click some button to start calibration
and move around the mouth with a specific calibration movement ...
[notes]
some things that have to be kept in mind
left/right handed people hold the brush differently
this also goes for motor dysfunctions (disabled people)
missing or crooked tooth anomalies (these can later be used as mark points)
my guess is that adding camera info (for example from the linked device)
for head/jaw position detection could improve detection even more
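
As a toy illustration of the angular-constraints point above, here is a JavaScript sketch that derives roll and pitch from a gravity-dominated accelerometer sample and bins them into coarse mouth regions; the axis conventions and thresholds are made up for illustration:

// classify a coarse mouth region from one accelerometer sample (m/s^2),
// assuming the sample is dominated by gravity (brush roughly still)
function classifyRegion(ax, ay, az) {
  const roll  = Math.atan2(ay, az);                  // rotation about the brush axis
  const pitch = Math.atan2(-ax, Math.hypot(ay, az)); // tilt of the brush
  const side = roll  > 0 ? 'left'  : 'right';        // sign convention is arbitrary here
  const row  = pitch > 0 ? 'upper' : 'lower';
  return row + '-' + side;
}

console.log(classifyRegion(0.1, 0.6, 9.7)); // e.g. "upper-left"

A real device would fuse the gyroscope and magnetometer readings on top of this and track far finer bins than four quadrants.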

How to collide .fbx animation in Unity

I am new to Unity. I have two animations in .fbx format. They can move. Now I want a sound to play when they collide with each other. Is there any idea of how I can do this? Thanks in advance.
I think you need to read about how physics works, and then how trigger events and collision detection are handled.
Read this here, and this. The first one gives you insight into how the Unity engine works. The latter provides a video tutorial on how to do collision detection.
If you don't want to do that and just want the code, I found this on a quick Google:
var crashSound : AudioClip; // set this to your sound in the inspector

function OnCollisionEnter (collision : Collision) {
    // the next line requires an AudioSource component on this gameObject
    audio.PlayOneShot(crashSound);
}
You can add a MeshCollider to the .fbx meshes. However, this is not a good idea, because it will cause performance issues.
You can create an empty GameObject for each character and add to it the .fbx animation and a simple collider (a cube, sphere, capsule, etc.). Then, when you use a script for them, you attach it to the parent object and handle the whole thing from there.
If you want the collider to follow specific parts of the animation (like a punch or a kick), you can ask your 3D animator/modeler to add a simple mesh at those points. For example, a sphere on one fist, which will move with the animation. Then, in Unity, you hide the sphere's mesh but add a collider to it. :)
Hope it helps!
Most of the time, if you apply an animation to an object, you'll lose the physics reaction. Don't trust me? See here: http://www.youtube.com/watch?v=oINKQUJZc1Q
Obviously, animations are not part of Unity physics. Think about it: Unity physics decides the position and rotation of objects according to the laws of Newton and friends. How do you think those laws can accommodate an arbitrary keyframe animation? They can't: hence the crazy results you get when you try.
How to solve it? Use Unity physics for the animation too: learn to master rigidbody.AddForce and all the other stuff described here.
You always want to keep the physics and the animation separated. That's how you stay out of trouble.
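To make that concrete, here is a minimal sketch in the same legacy UnityScript style as the snippet above; thrust and its value are purely illustrative:

var thrust : float = 10.0; // tune per object

function FixedUpdate () {
    // drive motion through the physics engine instead of a keyframed
    // Animation clip, so collisions keep producing correct reactions
    rigidbody.AddForce(transform.forward * thrust);
}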
If you really want to know: here's my personal experience on how to mediate physics with animation.
Sometimes, even binding one parameter to the physics and another to an animation (or to a script which mediates user input) can produce catastrophic results. I made a starship: rotation controlled by the user's mouse (by flagging "block rigidbody rotation"), direction and speed by physics. It was inside a box collider. Imagine what happens when a cube, tilted by a few degrees, meets flat ground: it should fall and rotate until one of its faces lies completely on the ground. This was impossible, as I had blocked any physics interaction with the rotation of the body: as a result, the box wanted to lie flat on the ground but couldn't. This tension eventually made it move forward forever: something impossible in the real world. To mediate this error, I made the "block rotation" parameter change dynamically according to the user input: while the ship is moving, the rotation is controlled by the user, but as soon as the user stops controlling the ship, the rotation parameter is handed back to the physics engine. Another solution would be to cast a ray down from the collider, check whether the ground is near, and avoid collisions if the ship is not moving (this is how the Banshee in Halo: Combat Evolved is controlled, I think). When playing video games, always have a look at how your user input is mediated into the physics engine: you may discover things a normal player wouldn't notice.

Are there any C++ libraries for real-time animation of a 3D model using changing x,y coordinates of feature key points

I am working on a project where I am to use the Kinect to track facial expressions and animate a 3D model (.ply) accordingly.
So far I have managed to track a human face with a finite number of key-points on the face. I am able to get the coordinates of each key-point at every frame.
I am not very adept with animation techniques or the general concepts of mesh deformation, and would really appreciate it if someone could point me to a library with a high-level API for doing said animation using the x,y coordinates of the key points.
I am aware of CUBICA but unsure whether it can be used for what I want. Please excuse me as I am new to this; I would appreciate any help.
I too am looking for something that would do this (with .NET if possible), so I could hook the Kinect up, set the position of each joint, and see the character animate - without needing to set the position and angle of each bone, which quickly gets very complex when you take the X, Y, and Z positions into account.
So far my research has led me to believe I will need a 3D engine that supports inverse kinematics - if anyone has any better advice, I'd be keen to hear it.
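
For reference, "inverse kinematics" just means solving joint angles from a desired end-effector position instead of the other way around. A toy 2D two-bone solver in plain JavaScript (law of cosines; all names illustrative, and real engines solve this in 3D per limb):

// solve shoulder and knee angles so a two-bone chain of lengths l1, l2
// reaches the 2D target (tx, ty); angles are returned in radians
function twoBoneIK(tx, ty, l1, l2) {
  const clamp = function (v) { return Math.min(1, Math.max(-1, v)); };
  const d = Math.min(Math.hypot(tx, ty), l1 + l2); // clamp unreachable targets
  const knee = Math.PI - Math.acos(clamp((l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)));
  const shoulder = Math.atan2(ty, tx)
                 - Math.acos(clamp((l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)));
  return { shoulder: shoulder, knee: knee };
}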
