Responding to tilt of iPhone in Sprite Kit - xcode

I have been building a Sprite Kit game for quite some time now. Just recently I have been adding gyro/tilt functionality. Using the CMMotionManager, I've been able to access the numbers surprisingly easily. However, my problem arises as a result of how the acceleration.x values are stored.
You see, the way my game works, when the game starts the phone quickly calibrates itself to however it is currently being held, and I then respond to changes in the acceleration.x value (with the phone held in landscape orientation, this corresponds to tilting the screen towards or away from you). However, laying the phone flat reads 1.0 and tilting it straight towards you reads 0.0, and the value wraps back through that range if you tilt beyond it. So if someone is sitting upright and their phone calibrates at 0.1, and they then tilt the phone 0.2 downwards, the result is not what they expect.
Is there any easy way to counteract this?

Why are you trying to make your own system for this? You shouldn't really be using the accelerometer values directly.
There is a class called CMAttitude that contains all the information about the orientation of the device.
This orientation is not taken raw from accelerometer data but uses a combination of the accelerometers, gyroscopes and magnetometer to calculate the current attitude of the device.
From this you can then take the roll, pitch and yaw values and use those instead of having to calculate them yourself.
Class documentation for CMAttitude.
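For illustration, here is a minimal Core Motion sketch in Swift (the names motionManager and referenceAttitude and the 60 Hz update interval are my own choices, not from the question's project). It reads the attitude from device motion updates and, for the "calibrate to however the phone is held at start" behaviour the question describes, stores a reference attitude and expresses later readings relative to it via multiply(byInverseOf:):

    import CoreMotion

    // Minimal sketch: read the device attitude instead of raw accelerometer values.
    let motionManager = CMMotionManager()
    var referenceAttitude: CMAttitude?

    func startTiltUpdates() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }

            // "Calibrate" once by remembering the attitude at game start ...
            if referenceAttitude == nil {
                referenceAttitude = attitude.copy() as? CMAttitude
            }
            // ... then express the current attitude relative to that reference,
            // which avoids the wrap-around problem with raw acceleration.x.
            if let reference = referenceAttitude {
                attitude.multiply(byInverseOf: reference)
            }

            // pitch/roll/yaw are in radians; pick whichever axis matches the
            // landscape "tilt towards/away from you" motion.
            let pitch = attitude.pitch
            let roll = attitude.roll
            _ = (pitch, roll) // feed these into the SpriteKit update loop
        }
    }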

Related

Zooming or enhancing performance on ZXing

I'm using ZXing.Mobile.Forms (ZxingScannerPage) in a Xamarin project for an app in which users would have to scan QR Codes and barcodes that are fairly high up (say, around 3 to 4 meters from the ground). The problem is, it's too far for the camera to detect!
I've already tried on more than one phone, one of them being fairly high end. Is there a way to use a zooming functionality, or to enhance the scanner's maximum range? If not, is there a better alternative to ZXing? Or will I just have to tell my bosses to hang all the labels lower?

Does the Project Tango tablet work outdoors?

I'm looking to develop an outdoor application, but I'm not sure if the Tango tablet will work outdoors. Other depth devices out there tend not to work well outside because they depend on IR light being projected from the device and then observed after it bounces off the objects in the scene. I've been looking for information on this, and all I've found is this video - https://www.youtube.com/watch?v=x5C_HNnW_3Q. Based on the video, it appears it can work outside by doing some IR compensation and/or using the depth sensor, but I just wanted to make sure before getting the tablet.
If the sun is out, it will only work in the shade, and darker shade is better. I tested this morning using the Java Point Cloud sample app, and I only get > 10k points in my point cloud in the center of my building's shadow, close to the building. Toward the edge of the shadow the depth point cloud frame rate goes way down and I get the "Few depth points" message. If it's overcast, I'm guessing your results will vary depending on how dark it is; I haven't tested that yet.
The Tango (Yellowstone) tablet also works by projecting IR light patterns, like the other depth-sensing devices you mentioned.
You can expect the pose tracking and area learning to work as well as they do indoors. The depth perception, however, will likely not work well outside in direct sunlight.

Using the Windows Phone combined motion API to track device position

I'd like to track the position of the device with respect to an initial position, ideally with high accuracy, for small-scale motions (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see behavior that is conceptually wrong: if I start from rest, move the phone around and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the velocity to end up very small (ideally 0), but that's not what I see. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI - my device does not have a gyroscope, so the API is going to be the low-accuracy version.)
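To make the observation above concrete, here is a rough sketch of the dead-reckoning integration described in the question (plain Swift with a made-up Vector3 type and made-up sample values, not the Windows Phone SensorReading API): if the acceleration really is the rate of change of velocity, then v += a * dt should return to zero once the device is back at rest, but even a tiny constant bias in the samples accumulates into a spurious velocity.

    // Naive integration of acceleration samples into a velocity estimate.
    struct Vector3 { var x = 0.0, y = 0.0, z = 0.0 }

    func integrate(_ samples: [Vector3], dt: Double) -> Vector3 {
        var v = Vector3()
        for a in samples {
            v.x += a.x * dt
            v.y += a.y * dt
            v.z += a.z * dt
        }
        return v
    }

    // A device that accelerates and then decelerates back to rest: the ideal
    // samples sum to zero ...
    let ideal = [Vector3(x: 0.5), Vector3(x: -0.5)] + Array(repeating: Vector3(), count: 198)
    // ... but a constant 0.01 m/s^2 bias over 200 samples at 50 Hz leaves a
    // spurious velocity of 0.01 * 200 * 0.02 = 0.04 m/s.
    let biased = ideal.map { Vector3(x: $0.x + 0.01, y: $0.y, z: $0.z) }
    print(integrate(ideal, dt: 0.02).x)   // 0.0
    print(integrate(biased, dt: 0.02).x)  // ~0.04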

Image transformation for Kinect game

I am working on a Kinect game where I am supposed to "dress" the player in a kind of garment.
As the player should always stand directly in front of the device, I am using a simple jpg file for this "dressing".
My problem starts when the user, while still standing in the frontal position, bends the knees or leans right or left. I want to apply an appropriate transform to this "dress" image so that it will still cover the player's body more or less correctly.
From the Kinect sensors I can get current information about the positions of the following player's body parts:
Is there any library (C++, C#, Java) or a known algorithm that can perform such a transformation?
Complex task but possible.
I would split the 'dress' into arms, torso/upper body, and lower body. You could then use (from memory) AffineTransform in Java, though most languages have algorithms for matrix transforms against images.
The reason I suggest splitting the image is that a transform distorts the top part of the image; splitting lets you rotate each piece (for when people lean) and warp the arms as they move as well.
EDIT:
I would also NOT transform the image on every frame (CPU intensive); I would instead precompute a lookup table of the possible angles and do a lookup for the pre-transformed image.
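As a rough illustration of the matrix-transform idea (written here in Swift with CoreGraphics' CGAffineTransform rather than Java's AffineTransform; the Joint type and the joint positions are made up, not real Kinect skeleton data), each garment segment can be rotated around its parent joint so that it follows the tracked limb, assuming image coordinates with y increasing downward:

    import CoreGraphics
    import Foundation

    // Hypothetical joint type standing in for Kinect skeleton positions.
    struct Joint { var x: CGFloat; var y: CGFloat }

    // Builds a transform that rotates a garment segment (drawn for a limb hanging
    // straight down, i.e. at restAngle = pi/2 in y-down image coordinates) around
    // the parent joint so it follows the limb from `parent` to `child`.
    func segmentTransform(parent: Joint, child: Joint, restAngle: CGFloat = .pi / 2) -> CGAffineTransform {
        let angle = CGFloat(atan2(Double(child.y - parent.y), Double(child.x - parent.x)))
        return CGAffineTransform(translationX: parent.x, y: parent.y)
            .rotated(by: angle - restAngle)
            .translatedBy(x: -parent.x, y: -parent.y)
    }

    // Example: an upper-arm segment anchored at the shoulder, following the elbow.
    let shoulder = Joint(x: 120, y: 80)
    let elbow = Joint(x: 150, y: 160)
    let armTransform = segmentTransform(parent: shoulder, child: elbow)
    // Apply armTransform (or the precomputed nearest-angle version from the
    // lookup table) when drawing the arm part of the garment image.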

How do you make and recognize gestures?

I am using a web cam to get video feed and then performing motion tracking on this video feed. The motion tracker returns (x,y) co-ordinates continuously.
I want to use these (x,y) to recognize gestures such as "swipe left", "swipe right", "swipe up" or "swipe down".
How do I make and store templates of these gestures, and how do I figure out/recognize whether one of the gestures has happened?
Thank you in advance :)
PS: I am using Flex 4 and ActionScript 3.0. If someone could help me out with the logic, I can write it in ActionScript.
An approach I could see working would be to have a series of (X,Y) coordinates representing points along the gesture. On a small scale, if a gesture that passed through your screen were graphed like this:
|1|1|1|
|1|0|0|
|1|0|0|
and represented (from the upper left corner representing 0,0):
(0,2)(0,1)(0,0)(0,1)(0,2)
Break the x,y coordinates up into individual 2D arrays, pairing each coordinate with the total distance traveled between it and the first point (in this example it would increment by 1 at each step), so you would have two arrays:
X:(0,0)(1,1)(2,2)
Y:(0,1)(1,1)(2,2)
Now do a least-squares fit on each array to find the closest representation of the change in x and the change in y as quadratic functions. Do the same for your pre-determined gestures, then plug the x,y coordinates of the pre-determined gestures into the user gesture's quadratic functions and see which one it matches the closest. That is your gesture.
(I've never tried processing gestures, but I don't see why this wouldn't work)
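For the fitting step itself, here is a minimal sketch in Swift (plain arrays; the function name and the sample values are illustrative). It fits y = a + b*t + c*t^2 by solving the 3x3 normal equations with Cramer's rule; run it once for the cumulative-distance vs. x samples and once for the cumulative-distance vs. y samples, for both the user's gesture and each template, before comparing them as described above:

    // Least-squares fit of y = a + b*t + c*t^2 to sample points.
    func quadraticFit(_ t: [Double], _ y: [Double]) -> (a: Double, b: Double, c: Double)? {
        guard t.count == y.count, t.count >= 3 else { return nil }
        // Sums that appear in the normal equations.
        let s0 = Double(t.count)
        var s1 = 0.0, s2 = 0.0, s3 = 0.0, s4 = 0.0
        var r0 = 0.0, r1 = 0.0, r2 = 0.0
        for (ti, yi) in zip(t, y) {
            s1 += ti; s2 += ti * ti; s3 += ti * ti * ti; s4 += ti * ti * ti * ti
            r0 += yi; r1 += ti * yi; r2 += ti * ti * yi
        }
        // Solve [[s0 s1 s2] [s1 s2 s3] [s2 s3 s4]] * [a, b, c] = [r0, r1, r2]
        // with Cramer's rule.
        func det3(_ m: [[Double]]) -> Double {
            return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
                   m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
                   m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])
        }
        let d = det3([[s0, s1, s2], [s1, s2, s3], [s2, s3, s4]])
        guard abs(d) > 1e-12 else { return nil } // degenerate input
        let a = det3([[r0, s1, s2], [r1, s2, s3], [r2, s3, s4]]) / d
        let b = det3([[s0, r0, s2], [s1, r1, s3], [s2, r2, s4]]) / d
        let c = det3([[s0, s1, r0], [s1, s2, r1], [s2, s3, r2]]) / d
        return (a, b, c)
    }

    // Example with made-up samples: distance along the gesture vs. x coordinate.
    let xFit = quadraticFit([0, 1, 2, 3, 4], [0, 0, 0, 1, 2])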
You should divide your task into smaller subtasks. In computer vision there is no such thing as generic gesture detection that works out of the box in all environments.
First of all, you need to be able to detect motion at all. There are several ways to do this, e.g. background subtraction or blob tracking.
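As a tiny illustration of the background subtraction idea (plain Swift over grayscale byte arrays; the frame layout, threshold value, and function names are assumptions, and a real implementation would also update the background model over time):

    // Mark pixels of the current grayscale frame that differ from a stored
    // background frame by more than a threshold as "motion".
    func motionMask(frame: [UInt8], background: [UInt8], threshold: Int = 30) -> [Bool] {
        precondition(frame.count == background.count, "frames must have the same size")
        return zip(frame, background).map { abs(Int($0.0) - Int($0.1)) > threshold }
    }

    // The centroid of the motion pixels can then feed the (x, y) stream that the
    // question's motion tracker produces (width = frame width in pixels).
    func centroid(of mask: [Bool], width: Int) -> (x: Double, y: Double)? {
        var sumX = 0, sumY = 0, count = 0
        for (i, isMotion) in mask.enumerated() where isMotion {
            sumX += i % width
            sumY += i / width
            count += 1
        }
        guard count > 0 else { return nil }
        return (Double(sumX) / Double(count), Double(sumY) / Double(count))
    }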
Then you need to extract certain features from your image, e.g. a hand. Again, there is more than one way to do this, ranging from skin color approximation/evaluation, which is very error-prone under varying lighting conditions, to more advanced techniques that really try to "analyze" the shape of an object. Those algorithms "learn" over time what a hand should look like.
I can only recommend buying a decent book about computer vision and researching the web for articles etc. There are also libraries like OpenCV you can use to learn more about the implementation side. There should be several ports of OpenCV to ActionScript 3. I can also recommend the articles and tools from Eugene Zatepyakin (http://blog.inspirit.ru). He's doing great CV stuff with ActionScript 3.
Long story short, you should research motion tracking and feature extraction.
The best place to start is to read about how sign language recognition or trackpad input works, such as creating reference images and comparing them to user input. Specific to Adobe, there's the FLARToolKit, which is detailed in an augmented reality article on their website.
References:
Trackpad Science
Hand Gesture Recognition
Sign Language Recognition Research - PDF
Gesture Recognition Walkthrough - Video
