Object floating and moving with user movement - google-project-tango

I'm trying to create a simple Tango application to visualize my 3D models. The problem is that when I hold my Tango device and move around, the model moves with me.
It looks like my physical movement is not reflected in the app at the correct scale: when I step 3 feet, I only move about 2.5 feet in the app. However, other Tango applications on the same device work perfectly; their 3D objects are stable and stay in the same spot without moving.
Please advise. Thank you.

Related

Would it be possible to change the default values on an Xbox 360 Kinect?

I got a Kinect for the 360 in hopes of using it in some projects involving tracking and skeleton work, but I was stopped right away because the Kinect 360 does not have a near mode.
I'm pretty sure there's a way to force the Kinect into a sort of "near mode" by changing the default values for tracking distance, but I have no idea where or how to change them. Perhaps some of you could help me?
There is a "near mode" for Kinect v1. The corresponding property has to be set manually, though. See here.
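For reference, in the Kinect for Windows SDK v1 that property lives on the depth stream. A minimal C# sketch (assuming a single connected sensor; note that near mode is only supported by Kinect for Windows hardware, so an Xbox 360 sensor will reject it):

    using System;
    using Microsoft.Kinect;

    class NearModeExample
    {
        static void Main()
        {
            // Grab the first connected sensor.
            KinectSensor sensor = KinectSensor.KinectSensors[0];

            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();
            sensor.Start();

            try
            {
                // Near mode: depth data starts at roughly 0.4 m instead of 0.8 m.
                sensor.DepthStream.Range = DepthRange.Near;

                // Optional (SDK 1.5+): keep tracking skeletons in the near range.
                sensor.SkeletonStream.EnableTrackingInNearRange = true;
            }
            catch (InvalidOperationException)
            {
                // Thrown by sensors that do not support near mode
                // (the feature is limited to Kinect for Windows hardware).
            }
        }
    }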

Override image feed into Vuforia in Unity

I want to record the video feed captured by Vuforia and then play the scene back, allowing the tracked image marker to be enabled or disabled on playback. I know Vuforia lets me access camera properties through Vuforia.CameraDevice.Instance, but there doesn't seem to be a way to override the incoming image with a prerecorded one.
I know I could record the state (position and rotation) of the objects during the recording (a sketch of that fallback follows the answer below), but it seems more elegant for them to be tracked in real time from a prerecorded video feed. Thanks.
I attempted this as well, to no avail.
From: Is it possible to use Vuforia without a camera?
...but the Vuforia SDK prevents the use of any other source than the camera.
I guess the main reason for this is that camera management is handled entirely inside the Vuforia SDK, probably to make it easier to use, since managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain (especially on Android, where some devices don't behave as expected).
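Since overriding the camera feed is off the table, the fallback the question mentions (recording each tracked object's pose and replaying it) is straightforward in Unity. A rough sketch; PoseRecorder and its members are hypothetical names, not part of Vuforia:

    using System.Collections.Generic;
    using UnityEngine;

    // Attach to a tracked object: captures its pose every frame while Vuforia
    // is tracking, then replays the recorded poses without the camera feed.
    public class PoseRecorder : MonoBehaviour
    {
        struct Sample { public float time; public Vector3 pos; public Quaternion rot; }

        readonly List<Sample> samples = new List<Sample>();
        bool recording = true;
        float playbackStart;
        int index;

        void Update()
        {
            if (recording)
            {
                samples.Add(new Sample { time = Time.time, pos = transform.position, rot = transform.rotation });
            }
            else if (samples.Count > 0)
            {
                // Step through the recorded samples at the original rate.
                float t = Time.time - playbackStart + samples[0].time;
                while (index < samples.Count - 1 && samples[index + 1].time <= t) index++;
                transform.position = samples[index].pos;
                transform.rotation = samples[index].rot;
            }
        }

        public void StartPlayback()
        {
            recording = false;
            playbackStart = Time.time;
            index = 0;
        }
    }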

Track the movement of the user's eyeballs on an iPhone

I am developing an application where I need to track the movement of the user's eyeballs, i.e. whether the user is looking at the top or the bottom of the iPhone. I have used face detection in earlier projects, but it only detected the eyes, not their movement.
Is there any API or framework that can detect the motion of the eyeballs?
Any help would be great.

Monodroid camera preview with OpenGL overlay

I have created an augmented reality application using Monodroid and it works fine on a technical basis. However, the graphics I used were drawn on a canvas and are really too slow.
The application is a simple heads-up compass and speed display, à la Luke Skywalker's binoculars.
I am trying to get a camera preview going with a translucent/transparent OpenGL overlay, and yes, I have read what's available, but it's all pure Android SDK / Java.
Does anyone know of a way to get this effect in C# and Monodroid, possibly using the AndroidGameView? Whatever I do, I can see one or the other but never both at the same time.
Unhelpful jerks are a pleasure to work with.
http://bobpowelldotnet.blogspot.fr/2012/10/monodroid-camera-preview-as-opengl.html
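One common way to get both views visible at once is to give the GL surface an alpha channel and a translucent surface format, then clear it to transparent so the camera preview behind it shows through. A rough Mono for Android sketch using the bound GLSurfaceView (the same Holder.SetFormat and z-order calls should apply to an AndroidGameView, since it is also backed by a SurfaceView; the renderer below only clears the frame, and the compass drawing would go in OnDrawFrame):

    using Android.App;
    using Android.Graphics;
    using Android.Opengl;
    using Android.OS;
    using Android.Views;
    using Android.Widget;
    using Javax.Microedition.Khronos.Egl;
    using Javax.Microedition.Khronos.Opengles;

    [Activity(Label = "CameraGlOverlay", MainLauncher = true)]
    public class OverlayActivity : Activity, ISurfaceHolderCallback
    {
        Android.Hardware.Camera camera;

        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            var root = new FrameLayout(this);

            // Plain SurfaceView that receives the camera preview.
            var preview = new SurfaceView(this);
            preview.Holder.AddCallback(this);
            root.AddView(preview);

            // GL view layered on top, with an alpha channel and a translucent
            // surface so the preview underneath stays visible.
            var glView = new GLSurfaceView(this);
            glView.SetEGLConfigChooser(8, 8, 8, 8, 16, 0);
            glView.Holder.SetFormat(Format.Translucent);
            glView.SetZOrderMediaOverlay(true);
            glView.SetRenderer(new ClearRenderer());
            root.AddView(glView);

            SetContentView(root);
        }

        public void SurfaceCreated(ISurfaceHolder holder)
        {
            camera = Android.Hardware.Camera.Open();
            camera.SetPreviewDisplay(holder);
            camera.StartPreview();
        }

        public void SurfaceChanged(ISurfaceHolder holder, Format format, int width, int height) { }

        public void SurfaceDestroyed(ISurfaceHolder holder)
        {
            camera.StopPreview();
            camera.Release();
            camera = null;
        }

        // Draws nothing but a fully transparent clear; put the compass here.
        class ClearRenderer : Java.Lang.Object, GLSurfaceView.IRenderer
        {
            public void OnSurfaceCreated(IGL10 gl, EGLConfig config) { }

            public void OnSurfaceChanged(IGL10 gl, int width, int height)
            {
                gl.GlViewport(0, 0, width, height);
            }

            public void OnDrawFrame(IGL10 gl)
            {
                gl.GlClearColor(0f, 0f, 0f, 0f);   // alpha 0 = see-through
                gl.GlClear(GL10.GlColorBufferBit);
            }
        }
    }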

Generate and post multitouch events in OS X to control the Mac using an external camera

I am currently working on a research project for my university. The goal is to control a Mac using the Microsoft Kinect camera. Another student is writing the driver for the Kinect (which will be mounted somewhere on the ceiling or on the wall behind the Mac), and it outputs the position of all fingers on the Mac's screen.
It is my responsibility to take those finger positions and react to them. The goal is to use a single finger to control the mouse, and to react to multiple fingers in the very same way as if they were on the trackpad.
I thought this was going to be easy and straightforward, but it's not. It is actually very easy to control the mouse cursor using one finger (using CGEvent; a rough sketch follows the answer below), but unfortunately there is no public API for creating and posting multitouch gestures to the system.
I've done a lot of research, including catching all CGEvents using an event tap at the lowest possible position and trying to disassemble them, but have made no real progress so far.
Then I stumbled over this and realized that even the lowest position for an event tap is not deep enough:
Extending Functionality of Magic Mouse: Do I Need a kext?
If I understand it correctly, the built-in trackpad (as well as the Magic Mouse and the Magic Trackpad) communicates through a kernel extension (KEXT) with the private MultitouchSupport framework, which generates the incoming data and posts it to the OS in some way.
So I would need to use private APIs from the MultitouchSupport.framework to do the very same thing the trackpad does, right?
Or would I need to write a KEXT-Extension?
And if I need to use the MultitouchSupport-framework:
How can I disassemble it to get at the private APIs? (I know class-dump, but that only works on Objective-C frameworks, and this framework is not one.)
Many thanks for any response!
NexD.
"The goal is to use one single finger to control the mouse and react on multiple fingers the very same way" here if I understand what you are trying to do is you try to track fingers from Kinect. But the thing is Kinect captures only major body joints. But you can do this with other third party libraries I guess. Here is a sample project I saw. But its for windows. Just try to get the big picture there http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device
