Getting detected features from Google Tango Motion Tracking API - google-project-tango

I would like to know how to get the current feature points used in motion tracking and the ones that are present in the learned area (detected or not).
There is an older, related post without a useful answer:
How is it possible to get the tracked features from the Tango APIs used for motion tracking? I'm using the Tango so that I don't have to do SLAM and IMU integration on my own.
What do I need to do to visualize the tracked features like they did in some of the presentation videos? https://www.youtube.com/watch?v=2y7NX-HUlMc (0:35 - 0:55)
What I want, in general, is some kind of measure or visual guidance on how well the device has learned the current environment. I know there is the Inspector app, but I need this information on the fly.
Thanks for your help ;)

If you want to check which areas are present in your learned area model and which are not, you can use the Tango Debug Overlay App. It has a 'Tracking Success' field that only counts up while the device sees learned feature points (ADF on) or finds new feature points (ADF off) (http://grauonline.de/alexwww/tmp/tango_debug_overlay_app.jpg). Additionally, you can request the same debug information the Tango Debug Overlay App shows (as plain text) via UDP port 29361 in your app and parse the returned debug text (although this is not recommended for a real app, since this interface is not documented).
PS: In Tango Core 01-19-2017 this counter does not seem to work anymore.
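For illustration, here is a minimal Java sketch of querying that UDP debug interface. Since the interface is undocumented, the request payload and the assumption that the service answers any datagram on localhost are guesses; only the port number and the 'Tracking Success' field come from the observations above:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class TangoDebugClient {

        // Undocumented port used by the Tango Debug Overlay App.
        private static final int DEBUG_PORT = 29361;

        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(2000); // fail fast if the service is gone

                // Assumption: any datagram on localhost triggers a text reply.
                byte[] request = new byte[] { 0 };
                InetAddress host = InetAddress.getByName("127.0.0.1");
                socket.send(new DatagramPacket(request, request.length, host, DEBUG_PORT));

                byte[] buffer = new byte[8192];
                DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                socket.receive(reply);

                String text = new String(reply.getData(), 0, reply.getLength(),
                        StandardCharsets.UTF_8);
                // Pick out the 'Tracking Success' counter mentioned above.
                for (String line : text.split("\n")) {
                    if (line.contains("Tracking Success")) {
                        System.out.println(line.trim());
                    }
                }
            }
        }
    }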

Related

DJI-Mobile-SDK/position control

I am using the DJI Mobile SDK to create an app with Android Studio. I want to know how to use the GPS signal of the aircraft and the phone to realize position control. Is there any API in the DJI Mobile SDK I can use?
You may follow this sample to implement simple, higher-level GPS position control: https://developer.dji.com/mobile-sdk/documentation/ios-tutorials/GSDemo.html
If you stop at some waypoint, the aircraft automatically holds its position. It is a simple recreation of the waypoint planning in the DJI Pilot app; a rough sketch of this approach follows below.
Low-level GPS position control requires a deeper understanding of the system. It enables interesting applications such as having the drone follow a person, land precisely on a marker, or circle around a tower. There are not many open-source implementations available on the internet. You have to search the MSDK for the APIs for basic control, and you also need a deep understanding of the field you are working in, e.g. real-time object detection, low-level control frameworks, visual-inertial SLAM, etc.
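As a starting point, here is a rough sketch of the higher-level waypoint approach using the DJI Mobile SDK 4.x mission framework for Android. It assumes SDK registration and the aircraft connection are handled elsewhere, and the small approach-leg offset is an arbitrary example; treat it as an outline rather than a drop-in implementation:

    import dji.common.error.DJIError;
    import dji.common.mission.waypoint.Waypoint;
    import dji.common.mission.waypoint.WaypointMission;
    import dji.common.mission.waypoint.WaypointMissionFinishedAction;
    import dji.common.mission.waypoint.WaypointMissionFlightPathMode;
    import dji.common.mission.waypoint.WaypointMissionHeadingMode;
    import dji.sdk.mission.MissionControl;
    import dji.sdk.mission.waypoint.WaypointMissionOperator;

    public class SimplePositionControl {

        // Fly to (lat, lng) and hover there. DJI missions need at least two
        // waypoints, so a short approach leg is added just before the target
        // (the 0.0001-degree offset, roughly 11 m, is an arbitrary example).
        public void flyToAndHold(double lat, double lng, float altitudeM) {
            WaypointMission mission = new WaypointMission.Builder()
                    .autoFlightSpeed(5f)
                    .maxFlightSpeed(10f)
                    .headingMode(WaypointMissionHeadingMode.AUTO)
                    .finishedAction(WaypointMissionFinishedAction.NO_ACTION) // hover at the end
                    .flightPathMode(WaypointMissionFlightPathMode.NORMAL)
                    .addWaypoint(new Waypoint(lat - 0.0001, lng, altitudeM))
                    .addWaypoint(new Waypoint(lat, lng, altitudeM))
                    .build();

            WaypointMissionOperator operator =
                    MissionControl.getInstance().getWaypointMissionOperator();

            DJIError loadError = operator.loadMission(mission);
            if (loadError != null) {
                return; // invalid mission parameters
            }

            operator.uploadMission(uploadError -> {
                if (uploadError == null) {
                    // A null error means the aircraft accepted the mission.
                    operator.startMission(startError -> { /* flying */ });
                }
            });
        }
    }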

Is getLastLocation of android locationManager using GnssAntennaInfo?

I'm an Android developer.
Android 11 added the GnssAntennaInfo class, which can make use of dual-frequency GNSS, and it has been confirmed that developers can use it.
If so, is GnssAntennaInfo already used by the getLastLocation method of the LocationManager provided by the Google location API?
Or is it still necessary for developers to use GnssAntennaInfo themselves to improve location accuracy?
Sorry, I can't give a definitive answer, but since there hasn't been any other response yet, I'll give what I'm pretty sure is the correct answer.
No.
From: https://www.androidpolice.com/2020/03/19/pixel-4-dual-band-gps/
In the sheer infinite expanse of the Android 11 developer documentation, an entry for GNSS or dual-band GPS support has surfaced. While the Pixel 4 isn't explicitly mentioned, it's plausible that owners will be able to use the hardware once developers add support for the new GnssAntennaInfo class to their apps. Hopefully, Google Maps will be among the first to utilize it — even if the difference isn't too meaningful, it might make GPS more reliable.
This matches my own experience. I recently purchased an S20 hoping for this exact feature (~1 yard accuracy); currently I'm seeing ~4 yards. I've confirmed that I see L5 satellites, and my phone just got the Android 11 update tonight. All of my GPS apps show NO improvement in accuracy. I'm unsure about Google Maps; I don't really know how to tell with it. However, if some "provided" function like getLastLocation were benefiting from higher accuracy, I think I'd see it in my GPS apps.
Hence, if I can figure out exactly how L5 can improve accuracy, I'm going to attempt to write this myself. I'm having a hard time tracking that info down, though...
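For anyone in the same position, here is a minimal sketch of subscribing to GnssAntennaInfo yourself on Android 11 (API level 30). It only logs the reported carrier frequencies; actually improving accuracy would mean combining this with raw GnssMeasurement data, which is beyond this sketch:

    import android.content.Context;
    import android.location.GnssAntennaInfo;
    import android.location.LocationManager;
    import java.util.List;
    import java.util.concurrent.Executor;

    public class AntennaInfoLogger {

        // Requires API level 30 (Android 11). The listener reports the
        // hardware's antenna characteristics, one entry per carrier frequency.
        public void start(Context context) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
            Executor executor = context.getMainExecutor();

            GnssAntennaInfo.Listener listener = (List<GnssAntennaInfo> infos) -> {
                for (GnssAntennaInfo info : infos) {
                    // Dual-frequency hardware reports both L1 (~1575.42 MHz)
                    // and L5 (~1176.45 MHz) entries here.
                    android.util.Log.d("AntennaInfo",
                            "carrier: " + info.getCarrierFrequencyMHz() + " MHz");
                }
            };

            lm.registerAntennaInfoListener(executor, listener);
        }
    }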

DJI Phantom API or hackable procedure

Maybe I haven't looked hard enough, but I spent yesterday googling for a bit and found no relevant projects on hacking the DJI Phantom drone in order to create new coordinating apps. This is besides the coordination app DJI currently provides for their drone. I'm trying to see if there's a way to communicate with the drone using a specific protocol in order to have it accept a set of procedures.
Any help would be awesome,
Thanks.
Great news for you and all us droneys! DJI has launched their SDK since you asked this question. They released it last November, and you can now apply for a license and write your own apps for the Phantom 2 Vision+ using their SDK.
Check it out at https://developer.dji.com/
I am already building a project using the SDK - you can follow my progress on my blog / product site. I will also try to update it with good DJI related development links and tips.
This post is old, but I think it is good to leave a footprint for others :)
There is a new company called NVdrones, which created a piece of hardware that you can attach to any drone (you need physical access to the flight controller). Once you do that, you can use their SDK (Arduino, Java, Android and JavaScript) to write your app without any hacking, soldering or anything else. It is just plug and play.
Another benefit is that you are not locked to a specific drone (DJI SDK or 3DRobotics SDK); you can use the board on anything you want, which gives you a lot of flexibility.
The developer site is http://developers.NVdrones.com
Hope this helps.
This is a great topic!
You could check how to hack your copter here: https://github.com/flyver/Flyver-SDK/wiki/-2.2--How-To:-Flyver-Hack-a-Copter
By opening the drone, taking out the original controller, soldering a few wires, and sticking an Android phone to it, you get the ability to program your Phantom in a modern manner with an open-source SDK and application-based development. This means that you could add computer vision, automation, or additional hardware to it. You could also use smartphones, the web, and other interactive devices for remotely controlling the copter instead of the standard remote controls.
The Phantom, however, is balanced off-center, because most people use a gimbal with it. Without the gimbal it is a lot less stable in my experiments, so you will have to put some extra work into center-balancing it.

Map Tile Caching for Offline Viewing

I'm trying to build an application that will use open source maps from Open Street Maps (though the concept should be applicable to any map provider). The application will enable the user to specify a number of waypoints along a route prior to departure.
Because I don't have a data plan for my cell phone (and because rambling in the countryside rarely gives you a good connection), I want to be able to pre-load the relevant map tiles for the waypoints and/or route before departure so that maps can continue to be used without a data connection.
My initial thought is to download the required tiles from the map provider and store them in isolated storage. However, the Bing Maps control implementation, which uses the TileSource class, relies on returning an absolute URI that it can download the tile(s) from, which clearly won't work with data stored in isolated storage.
The question has already been asked (Windows Phone 7 Map Control with custom layer in offline mode) but wasn't answered, and I'm wondering whether anyone has since cracked the problem.
I've seen this done with a custom layer placed over the map. Tiles are then loaded from anywhere you like (isolated storage, online, somewhere else?) into the custom layer.
Sorry, I don't have any code I can share which demonstrates this at the moment, but I am currently doing something very similar.
I built a small prototype using OpenStreetMap for Android. I think it might be interesting to look at the repository and derive a solution similar to mine. I downloaded the maps beforehand, but maybe you can use an online solution for this. This is the repo: https://github.com/kikofernandez/OpenStreetMapExample and a video of how it could look: https://vimeo.com/40619538.
I used OpenLayers, OpenStreetMap, JavaScript and a WebView in Android for this prototype. I would like to give you further details, but it was just a prototype.
If you can store the data locally (embed it in the XAP), you can reference it via an absolute URI. Chris Walshie talks about it here.
So, for example, once you have the installation path for the app, you can reference the resource like this:
Uri toResource = new Uri("file:///Applications/Install/4FFA38B5-00AF-4760-A7EB-7C0C0BC1D31A/Install/EMBEDDED_RESOURCE", UriKind.Absolute);
Have you set the Build Action on your image(s) to Content?
If your app is running on WP8, use the built-in maps control in the Windows Phone 8 SDK, as it already supports offline maps out of the box. If you are targeting WP7, it is possible to get offline maps to work, but it takes a lot of effort. I created this for a customer a few years ago, and I believe it took me a little over 3000 lines of code. Mind you, they also wanted a framework for adding tiles from various sources, such as downloading over an area and downloading zipped files.
The way I managed to get the rendering to work was to add a canvas to the map without setting its position. This will by default make it a child of the map, but it will not move. I then made the canvas the same size as the map and used the resize event to resize the canvas should the map be resized. I then used the view-change event to trigger a method that renders the tiles. When this event fired, I first calculated all the tiles in view using the code found here: http://msdn.microsoft.com/en-us/library/bb259689.aspx
I would then pull the tiles from isolated storage and draw them on the canvas. For performance, I kept track of which tiles I had added to the canvas, so that if a tile was still in view I simply changed its position rather than reloading it from isolated storage. I also removed any images that were no longer in view. Overall this works fine, but there were some minor issues, such as not having the smooth transition between zoom levels. If you really want that, it is possible to get it to work, but it requires a lot more math. Also, if you zoom into an area where there are no tiles, you end up with an empty map. You can create a custom map mode to prevent the user from going into areas where you don't have tiles.
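For reference, the tile-in-view calculation from that Bing tile-system article boils down to a few lines. Here is a rough Java transcription (not the original WP7 C# code; clamping of out-of-range coordinates is omitted), and the same formulas apply to OSM-style slippy maps:

    public final class TileMath {

        // Convert a WGS84 coordinate to tile indices at the given zoom level.
        public static int[] latLongToTileXY(double lat, double lon, int zoom) {
            int n = 1 << zoom; // number of tiles per axis at this zoom
            int x = (int) Math.floor((lon + 180.0) / 360.0 * n);
            double latRad = Math.toRadians(lat);
            int y = (int) Math.floor(
                    (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI)
                            / 2.0 * n);
            return new int[] { x, y };
        }

        // Enumerate all tiles covering a bounding box, e.g. the current map view.
        public static void printTilesInView(double north, double south,
                                            double west, double east, int zoom) {
            int[] topLeft = latLongToTileXY(north, west, zoom);
            int[] bottomRight = latLongToTileXY(south, east, zoom);
            for (int x = topLeft[0]; x <= bottomRight[0]; x++) {
                for (int y = topLeft[1]; y <= bottomRight[1]; y++) {
                    // Each (zoom, x, y) triple is a tile key you can look up
                    // in isolated storage and position on the canvas.
                    System.out.println(zoom + "/" + x + "/" + y);
                }
            }
        }
    }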
A solution
The question is a bit old, but there's a solution for anyone who can use Qt.
The solution is not limited to the Windows Phone platform; I've done it targeting Android, and it also works on my desktop.
In Qt, you'll want to patch the OSM plugin used by QtLocation. It's simple, quick and easy.
How to do it?
A quick implementation could modify the QGeoTiledMappingManagerEngineOsm class to make it call your own QGeoTileFetcher instead of QGeoTileFetcherOsm.
There may be better ways to achieve this, but at least it works for me.
Basically, you make a fetcher that reads tiles from the filesystem instead of the network.
You build your filesystem database once, for instance from an online resource (see below, including a download sketch at the end of this answer), and you deploy it with your application for offline use.
Where do I get tiles from?
Information on how to get tiles into your offline implementation is available here:
http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames
Here are two sources of tiles that can be used for free:
Open Street Maps project servers
Mapquest Open Tiles servers
Take care of the licensing and terms of use.
Open Street Map
Project: wiki.openstreetmap.org/wiki/Main_Page
License: www.openstreetmap.org/copyright
Terms of use: wiki.openstreetmap.org/wiki/Tile_usage_policy
Servers are currently named like *.tile.openstreetmap.org
MapQuest-OSM Tiles
Project: developer.mapquest.com/web/products/open/map
License: opendatacommons.org/licenses/odbl/
Terms of use: developer.mapquest.com/web/info/terms-of-use
Servers are currently named like otile*.mqcdn.com
(Sorry for the strange links: I haven't got enough reputation to post real links.)
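To build the offline database mentioned above, you can mirror the z/x/y layout on disk. A hedged Java sketch follows; the tile URL is OSM's standard scheme from the slippy-map page, the User-Agent string is a placeholder, and OSM's tile usage policy forbids bulk downloading, so keep any pre-fetch small:

    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class TileDownloader {

        // Download one tile into the z/x/y.png layout that an offline
        // fetcher can read back later.
        public static void fetchTile(int zoom, int x, int y, Path root) throws Exception {
            Path target = root.resolve(zoom + "/" + x + "/" + y + ".png");
            Files.createDirectories(target.getParent());

            URL url = new URL("https://tile.openstreetmap.org/"
                    + zoom + "/" + x + "/" + y + ".png");
            URLConnection conn = url.openConnection();
            // OSM's tile usage policy requires an identifying User-Agent.
            conn.setRequestProperty("User-Agent", "OfflineMapDemo/1.0 (example contact)");

            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }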

Using views from other apps as CoreAnimation Layer

All,
How can I use (NS)Views from other applications as layers in my Core Animation app? For example, I'd like to display a Keynote presentation as a layer in my CA app.
I found the iChat Theater API, which looks promising; however, I'd need the opposite: an API to get the contents from an app, not to provide them.
Any pointers?
Thanks.
Take a look at the "Son of Grab" sample.
It shows you how to use the CGWindow*() API that was introduced with Mac OS X 10.5.
The API only gives you the content of a whole window, so you have to find a way to extract the portions of the window you are interested in.
I don't believe there's a public way to do what you're talking about. Your best approach is probably to reverse-engineer the iChat AV system (the receiving side) and see if you can replicate it. Some initial work has been done by the ICP project. It's very sketchy, but it's a start.
Another approach is the QuickLook API, which has the advantage of not having to run the source application. So far Apple hasn't made the reading side of that API available either. Ciarán Walsh did some handy reverse engineering on QL a couple of years ago, and I've played with that approach, but it is somewhat clunky. You can generate the panel as Ciarán explains, but put it off screen. You can then copy the contents into an NSImage using NSBitmapImageRep -initWithFocusedViewRect:. Unfortunately, you can wind up with some funky visual artifacts (like scroll bars in some cases), but for some applications it can be effective.
