Multiple Sphero in AR - sphero-api

How well does Sphero AR handle multiple Spheros at this point? Are they capable of meshing their locator grids so that the same 'map' is shared between the two? I was thinking that, as in Rolling Dead, the phone camera could briefly mark multiple Spheros to set them up under the same grid by calculating their positions in relation to each other, the player, and a center point designated by the player (as with the cupcakes in Sharky). I program for Android, so I'm unable to test this myself.

The Sphero AR SDK currently does not support more than one Sphero.

Related

Is there any way to get live telemetry data from the DJI Mavic 2 Zoom to another unit, say a computer?

As part of a course at my university, we've been given the task of taking live wind telemetry from a drone and feeding it to a neural network so that it gives better estimates than a sensor alone.
The research we've done so far tells us that our drone, the DJI Mavic 2 Zoom, is only compatible with the Windows SDK, not the Onboard SDK.
Simply put, our question is: is there any way for us to send the raw wind speed and direction data from the drone's sensors to a computer?
Create an Android application with the DJI Mobile SDK and send data from the MSDK to your computer over Wi-Fi.
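As a rough illustration of the "send data to your computer" step (the answer suggests Android, but the same idea applies on any platform), here is a minimal sketch in Swift that posts a telemetry sample as JSON to a computer on the same network. The WindSample fields and the server URL are assumptions for illustration, not anything provided by the DJI SDK.

```swift
import Foundation

// Hypothetical telemetry sample; field names are illustrative, not from the DJI SDK.
struct WindSample: Codable {
    let timestamp: Date
    let speed: Double      // m/s
    let direction: Double  // degrees, the direction the wind comes from
}

// POST one sample as JSON to a server reachable over Wi-Fi.
func send(_ sample: WindSample, to url: URL) {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601
    request.httpBody = try? encoder.encode(sample)
    URLSession.shared.dataTask(with: request) { _, _, error in
        if let error = error { print("send failed:", error) }
    }.resume()
}

// Usage, with a made-up address for the receiving computer:
// send(WindSample(timestamp: Date(), speed: 4.2, direction: 270),
//      to: URL(string: "http://192.168.1.10:8000/wind")!)
```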
The SDK only provides the wind warning level (0, 1, and 2). It does not provide any information about the direction the wind is blowing from or its actual speed.
The aircraft tries to stay in its current position on its own, even if there is moderate wind blowing. However, the drone does not tell the user how hard it has to work, or in which direction, to negate the effect of the wind.
I assume you're better off accessing real-time wind information for your position from a weather service on the internet, if that's available in your country.
I've done a wind meter app.
The best method is:
1. Fly against the wind.
2. In virtual stick mode, use angle mode and set pitch and roll to 0. This will let the drone drift with the wind.
3. Slowly rotate the yaw.
4. Measure the speed; when it stops increasing, the GPS speed gives you the wind speed and direction.
Warning: in strong wind you have to fly against the wind for quite a while.
The yaw rotation is needed because the drone is never exactly level, so it picks up speed in one direction; turning cancels that out.
Send the info to a server over the internet/Wi-Fi.
I've done this on an Android phone connected to the controller.
The Windows API doesn't seem to support virtual sticks, which I find strange. In that case it must be done on Android or iOS and transmitted to a server. I might be wrong, since I have never used the Windows API.
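To make step 4 concrete, here is a minimal sketch of the "measure until the speed stops increasing" logic in Swift. The 0.1 m/s tolerance and the ten-sample plateau count are made-up thresholds, and the GPS velocity components would come from whatever flight controller state your SDK exposes; this is a sketch of the estimation idea, not DJI API code.

```swift
import Foundation

// Feed this with GPS velocity samples (vx east, vy north, in m/s) while the
// drone drifts in virtual stick angle mode with pitch = roll = 0.
struct WindEstimator {
    private var lastSpeed = 0.0
    private var stableCount = 0

    // Returns (speed in m/s, direction in compass degrees the wind comes from)
    // once the drift speed has plateaued, i.e. the drone moves with the wind.
    mutating func update(vx: Double, vy: Double) -> (speed: Double, direction: Double)? {
        let speed = (vx * vx + vy * vy).squareRoot()
        // Count consecutive samples where speed is no longer increasing.
        stableCount = speed <= lastSpeed + 0.1 ? stableCount + 1 : 0
        lastSpeed = speed
        guard stableCount >= 10 else { return nil }   // plateau reached
        // atan2(east, north) is the compass bearing the drone drifts toward;
        // the wind *comes from* the opposite direction.
        let blowingTo = atan2(vx, vy) * 180 / .pi
        var comingFrom = (blowingTo + 180).truncatingRemainder(dividingBy: 360)
        if comingFrom < 0 { comingFrom += 360 }
        return (speed, comingFrom)
    }
}

// Usage: var estimator = WindEstimator(); call estimator.update(vx:vy:) per GPS sample.
```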

Project Tango, camera position?

I would like to be able to take a photo indoors and be able to determine the position (x,y,z coordinates) of the camera in the room. Is this/will this be possible with Project Tango and the Lenovo Phab 2 Pro phone?
Thanks.
Yes, if you made an app yourself you could use an ADF (Area Description File) to determine the device's position relative to where you started recording the ADF. You get position as XYZ and rotation as a quaternion.
Worth noting: the Project Tango Tablet DK has a really bad camera, so it wouldn't be worth the effort on that device, although the Phab 2 Pro probably has a better camera.
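The Tango APIs themselves are Java/C, but as a rough illustration of what "position as XYZ and rotation as a quaternion" gives you, here is the generic pose math in Swift using simd. The pose values are made up; a real app would read them from the Tango pose callback.

```swift
import simd

// A Tango-style pose is a translation (x, y, z) plus a rotation quaternion,
// both relative to the ADF origin. This builds a 4x4 transform you could use
// to place the camera in a scene.
func cameraTransform(translation t: SIMD3<Float>,
                     rotation q: simd_quatf) -> simd_float4x4 {
    var m = simd_float4x4(q)                       // rotation part from the quaternion
    m.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)   // translation part
    return m
}

// Example with made-up pose values (identity rotation, half a meter to the side):
let pose = cameraTransform(translation: SIMD3<Float>(0.5, 1.2, -0.3),
                           rotation: simd_quatf(ix: 0, iy: 0, iz: 0, r: 1))
```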

Does the Project Tango tablet work outdoors?

I'm looking to develop an outdoor application but am not sure if the Tango tablet will work outdoors. Other depth devices tend not to work well outside because they depend on IR light being projected from the device and then observed after it bounces off objects in the scene. I've been looking for information on this and all I've found is this video - https://www.youtube.com/watch?v=x5C_HNnW_3Q. Based on the video, it appears it can work outside by doing some IR compensation and/or using the depth sensor, but I just wanted to make sure before getting the tablet.
If the sun is out, it will only work in the shade, and darker shade is better. I tested this morning using the Java Point Cloud sample app, and I only get more than 10k points in my point cloud in the center of my building's shadow, close to the building. Toward the edge of the shadow the depth point cloud frame rate goes way down and I get the "Few depth points" message. If it's overcast, I'm guessing your results will vary depending on how dark it is; I haven't tested this yet.
The Tango (Yellowstone) tablet also works by projecting IR light patterns, like the other depth-sensing devices you mentioned.
You can expect the pose tracking and area learning to work as well as they do indoors. The depth perception, however, will likely not work well outside in direct sunlight.

Responding to tilt of iPhone in Sprite Kit

I have been building a Sprite Kit game for quite some time now. Just recently I have been adding gyro/tilt functionality. Using the CMMotionManager, I've been able to access the numbers surprisingly easily. However, my problem arises as a result of how the acceleration.x values are stored.
You see, the way my game works, when the game starts the phone quickly calibrates itself to how it's currently being held, and then I respond to changes in the acceleration.x value (holding your phone in landscape orientation, this is equivalent to tilting your screen towards and away from you). However, laying your phone flat gives 1.0 and tilting it straight towards you gives 0.0, and the values loop back through that range if you go beyond it. So if someone is sitting upright and their phone is calibrated at 0.1, and they tilt their phone 0.2 downwards, the results will not be what is expected.
Is there any easy way to counteract this?
Why are you trying to make your own system for this? You shouldn't really be using the accelerometer values directly.
There is a class called CMAttitude that contains all the information about the orientation of the device.
This orientation is not taken raw from accelerometer data but uses a combination of the accelerometer, gyroscope, and magnetometer to calculate the current attitude of the device.
From this you can take the roll, pitch, and yaw values and use those instead of having to calculate them yourself.
Class documentation for CMAttitude.
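Here is a minimal Swift sketch of that approach, including the start-of-game calibration the question describes: capture a reference attitude once, then express each new attitude relative to it with multiply(byInverseOf:), so "no tilt" means however the player was holding the phone. The 60 Hz update interval is an arbitrary choice.

```swift
import CoreMotion

let motionManager = CMMotionManager()
var referenceAttitude: CMAttitude?   // captured once at "calibration" time

motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let attitude = motion?.attitude else { return }

    // On the first sample, remember how the player is holding the phone.
    if referenceAttitude == nil {
        referenceAttitude = attitude.copy() as? CMAttitude
    }

    // Re-express the current attitude relative to the calibrated one.
    if let reference = referenceAttitude {
        attitude.multiply(byInverseOf: reference)
    }

    let pitch = attitude.pitch   // radians; use this instead of acceleration.x
    // ... feed pitch into your Sprite Kit update logic
}
```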

Suitability of using Core Animation on iOS vs using Cocos2D and OpenGL ES?

I finished a breakout game tutorial in a book, but the ball, which is a 20x20 pixel image, was skipping frames and not moving very smoothly. That is the case on the Simulator as well as on an iPhone 4S (the real device). The code wasn't using NSTimer (which may be slower); it was using CADisplayLink and UIImageView's setFrame to do the animation.
Is Core Animation on iOS not suitable for developing these types of animated games? Say the game is:
1. Invaders (Space Invaders)
2. Breakout (as in the tutorial)
3. Arkanoid
4. Angry Birds / Cut the Rope / Fruit Ninja
For these types of games, is Core Animation really suitable only for writing (2) above? For (1), (3), and (4), either Cocos2D or OpenGL ES would be more suitable for the job, and the performance of Cocos2D and OpenGL ES is very close. Is that true?
Cocos2D is often chosen because of its ease for programming common game logic, like collision detection and frame-by-frame sprite animations, scaling, and other processes that are quite common in game development, where you string together multiple animations, combine them, sequence them, do callbacks, and more. That is one of the big benefits of the engine.
However, performance is another. Cocos offers batch nodes, which combine all graphic elements into a single OpenGL call rather than "drawing" each to the screen separately in each frame; this can dramatically improve performance, especially with large numbers of sprites. If you were skipping frames, I wonder whether batched sprites in Cocos would have been the missing link.
I'm very impressed by Core Animation and would hope that it can hold its own performance-wise in games. My understanding is that CA, like Cocos, is also built on top of OpenGL ES, so I'd expect it to be possible to achieve good results with either. It could be that doing so in Cocos is easier simply because it has been designed and optimized internally for game development.
If you are having performance problems with a 2D app, this is likely caused by a lack of understanding of how to get the most efficient results from Core Animation, as opposed to something that switching to OpenGL will fix. A 2D game will work just fine with Core Animation; you just need to start with the right approach.
First off, you should not be re-rendering the entire view on each CADisplayLink callback. Instead, set up a UIView that contains multiple CALayer objects. Set each layer's contents like so: CALayer.contents = (id) cgImage, and then let the system take care of rendering it when the x, y, or animation elements change. You just need to position your elements and define the animations that move them around. With this approach, the system will cache the animating image on the graphics card behind the scenes and redraw using GPU operations.
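A minimal Swift sketch of that layer-per-element approach, assuming a breakout-style ball like the question's; the helper names (makeBallLayer, move) are illustrative, not a standard API:

```swift
import UIKit

// One CALayer per game element; the system composites and animates them on
// the GPU instead of redrawing the whole view each frame.
func makeBallLayer(image: UIImage, in view: UIView) -> CALayer {
    let layer = CALayer()
    layer.contents = image.cgImage        // cached on the GPU after first upload
    layer.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
    view.layer.addSublayer(layer)
    return layer
}

// Move the ball by animating its position; Core Animation interpolates the
// intermediate frames on the render server, not in your CADisplayLink callback.
func move(_ layer: CALayer, to point: CGPoint, duration: CFTimeInterval) {
    let animation = CABasicAnimation(keyPath: "position")
    animation.fromValue = layer.position
    animation.toValue = point
    animation.duration = duration
    layer.position = point                // keep the model value in sync
    layer.add(animation, forKey: "move")
}
```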
