DJI-Mobile-SDK/position control - dji-sdk

I am using the DJI Mobile SDK to create an app in Android Studio. I want to know how to use the GPS signal of the aircraft and the phone to implement position control. Is there an API in the DJI Mobile SDK I can use?

You may follow this sample to get simple, higher-level GPS position control: https://developer.dji.com/mobile-sdk/documentation/ios-tutorials/GSDemo.html
If you stop at a waypoint, the aircraft automatically holds its position. It is a simple recreation of the waypoint planning in the DJI Pilot app.
Low-level GPS position control, on the other hand, requires a deeper understanding of the system. It enables interesting applications such as having the drone follow a person, land precisely on a marker, or circle a tower. There are not many open-source implementations available on the internet. You have to search the MSDK for the basic control APIs, and you also need deep knowledge of the field you are working in, e.g. real-time object detection, low-level control frameworks, or Visual-Inertial SLAM.
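If the higher-level waypoint route is enough for your use case, the MSDK exposes it directly. Below is a minimal sketch, assuming the MSDK v4 WaypointMissionOperator API; the coordinates, speeds, and altitude are placeholders:

```java
import dji.common.mission.waypoint.Waypoint;
import dji.common.mission.waypoint.WaypointMission;
import dji.common.mission.waypoint.WaypointMissionFinishedAction;
import dji.common.mission.waypoint.WaypointMissionFlightPathMode;
import dji.common.mission.waypoint.WaypointMissionHeadingMode;
import dji.sdk.mission.waypoint.WaypointMissionOperator;
import dji.sdk.sdkmanager.DJISDKManager;

public class SimpleWaypointFlight {

    // Builds and flies a two-waypoint GPS mission; the aircraft holds
    // position at each waypoint until the next leg begins.
    public void flyTo(double lat1, double lng1, double lat2, double lng2) {
        WaypointMission mission = new WaypointMission.Builder()
                .autoFlightSpeed(5f)    // m/s, placeholder
                .maxFlightSpeed(10f)    // m/s, placeholder
                .headingMode(WaypointMissionHeadingMode.AUTO)
                .finishedAction(WaypointMissionFinishedAction.NO_ACTION)
                .flightPathMode(WaypointMissionFlightPathMode.NORMAL)
                .addWaypoint(new Waypoint(lat1, lng1, 30f))  // 30 m altitude
                .addWaypoint(new Waypoint(lat2, lng2, 30f))
                .build();

        WaypointMissionOperator operator = DJISDKManager.getInstance()
                .getMissionControl().getWaypointMissionOperator();

        if (operator.loadMission(mission) == null) {  // null DJIError = loaded OK
            operator.uploadMission(uploadError -> {
                if (uploadError == null) {
                    operator.startMission(startError -> { /* handle result */ });
                }
            });
        }
    }
}
```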

Related

Is it possible to send GPS coordinates as navigation target to DJI drones through mobile SDK?

I am new to the drone community. I am trying to use a drone to follow a driving vehicle and record a video; this would be part of my graduate school project.
I know DJI provides an "ActiveTrack" function, but it seems the maximum tracking speed is around 20-25 mph, and it cannot track the vehicle from a top-down angle (drone looking straight down at the vehicle).
My idea is to send the GPS info of the ego vehicle to my mobile device and use a customized app to read the GPS location and set it as a target so the drone can follow it.
Is it doable? Is it worth the effort in general? Or am I better off just using ActiveTrack and working with the best angle I can get?
I've done that.
Send GPS updates from a tracker phone to the phone connected to the RC.
You have to use virtual stick mode in the MSDK. The max speed is 15 m/s.
Virtual stick functions in the DJI Mobile SDK simulate the remote controller's joysticks, and therefore an aircraft can be automated to fly in any way a human can manually fly it. Compared to missions, this is a more complicated, but flexible way to automate flight.
Works very well.
An example:
https://youtu.be/i3axYfIOHTY?t=55
It's a little shaky, but that's fixed now :-)
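A rough sketch of that loop, assuming the MSDK v4 virtual stick classes: enable virtual stick once, then stream a velocity command toward the latest target GPS fix. The proportional gain, the clamping limits, and especially the pitch/roll axis mapping are assumptions to verify in the DJI simulator before a real flight:

```java
import dji.common.flightcontroller.virtualstick.FlightControlData;
import dji.common.flightcontroller.virtualstick.FlightCoordinateSystem;
import dji.common.flightcontroller.virtualstick.RollPitchControlMode;
import dji.common.flightcontroller.virtualstick.VerticalControlMode;
import dji.common.flightcontroller.virtualstick.YawControlMode;
import dji.sdk.flightcontroller.FlightController;

public class GpsFollower {
    private final FlightController fc;

    public GpsFollower(FlightController fc) {
        this.fc = fc;
        // Interpret the roll/pitch fields as velocities in the ground frame.
        fc.setRollPitchControlMode(RollPitchControlMode.VELOCITY);
        fc.setYawControlMode(YawControlMode.ANGULAR_VELOCITY);
        fc.setVerticalControlMode(VerticalControlMode.VELOCITY);
        fc.setRollPitchCoordinateSystem(FlightCoordinateSystem.GROUND);
        fc.setVirtualStickModeEnabled(true, error -> { /* check error */ });
    }

    // Call at 5 Hz or faster (10 Hz is typical), otherwise the aircraft
    // drops out of virtual stick mode.
    public void step(double droneLat, double droneLng,
                     double targetLat, double targetLng) {
        // Flat-earth approximation of the offset to the target, in metres.
        double north = Math.toRadians(targetLat - droneLat) * 6371000.0;
        double east = Math.toRadians(targetLng - droneLng) * 6371000.0
                * Math.cos(Math.toRadians(droneLat));

        // Simple proportional controller, clamped below the 15 m/s limit.
        float vNorth = (float) clamp(0.8 * north, -10.0, 10.0);
        float vEast = (float) clamp(0.8 * east, -10.0, 10.0);

        // NOTE: DJI's pitch/roll naming in VELOCITY mode is confusing;
        // verify which field maps to which axis in the simulator.
        fc.sendVirtualStickFlightControlData(
                new FlightControlData(vNorth, vEast, 0f, 0f),
                error -> { /* check error */ });
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```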

Getting detected features from Google Tango Motion Tracking API

I would like to know how to get the current feature points used in motion tracking and the ones that are present in the learned area (detected or not).
There is an older, related post without a useful answer:
How is it possible to get the tracked features from the Tango APIs used for motion tracking? I'm using the Tango precisely so that I don't have to do SLAM and IMU integration on my own.
What do I need to do to visualize the tracked features like they did in some of the presentation videos? https://www.youtube.com/watch?v=2y7NX-HUlMc (0:35 - 0:55)
What I want in general is some kind of measure or visual guidance on how well the device has learned the current environment. I know there is the Inspector App, but I need this information on the fly.
Thanks for your help ;)
If you want to check which areas are present in your learned area model and which are not, you can use the Tango Debug Overlay App. It has a 'Tracking Success' field that only counts up if the device sees learned feature points (ADF on) or finds new feature points (ADF off) (http://grauonline.de/alexwww/tmp/tango_debug_overlay_app.jpg). Additionally, your app can request the same debug information (as simple text) via UDP port 29361 and parse the returned debug text, although this is not recommended at all for a real app, since the interface is undocumented.
PS: In Tango Core 01-19-2017 this counter no longer seems to work.
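For completeness, a minimal sketch of querying that undocumented debug interface, assuming a simple request/response exchange over UDP port 29361 (the request framing here is a guess; as noted above, don't rely on this in a real app):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class TangoDebugReader {

    // Sends an empty datagram to the undocumented local debug port and
    // returns whatever text comes back (e.g. the 'Tracking Success' field).
    public static String readDebugText() throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000);  // don't hang if nothing answers
            socket.send(new DatagramPacket(new byte[0], 0,
                    InetAddress.getByName("127.0.0.1"), 29361));

            byte[] buf = new byte[8192];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);
            return new String(reply.getData(), 0, reply.getLength(),
                    StandardCharsets.UTF_8);
        }
    }
}
```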

Is it possible to control DJI Phantom 2 Vision (plus) with smartphone?

I've been checking the DJI specifications for the Phantom 2 Vision and Phantom 2 Vision+, and also the API reference, but I'm not sure whether it is possible to pilot the drone without the remote controller. How is the communication via the SDK done? Does the smartphone communicate directly with the drone, or does it go through the remote controller?
DJI SDK Level 2 does allow direct flight control of the DJI Phantom 2 Vision+. It communicates the commands to the aircraft over Wi-Fi. That said, the Wi-Fi extender your phone connects to is attached to the controller, so you still need the controller to be turned on and in the vicinity, but it is the phone that actually sends the commands to the Phantom. I would recommend that someone always have the controller at the ready in case the Wi-Fi network has interference, the phone dies, etc. I am using the SDK this way for my side project www.followmephantom.com.
Hope that helps.
FYI - to use the Level 2 SDK you first have to submit a proposal for your project to DJI and get their approval... and pay a fee in most cases.
Here is a helpful link that shows the feature differences between SDK Level 1 and Level 2:
https://dev.dji.com/en/products/sdk/mobile-sdk/features/level-compare
From three weeks of playing with it: the smartphone controls the gimbal and the camera in general, while all the flight controls come strictly from the remote control unit, although waypoints set from the smartphone could arguably count as control by the phone. I haven't flown it without the smartphone, but I am fairly sure I could; without blades, on the sitting room floor, it seemed possible. I can't imagine why you would want flight controls on your phone; when things go pear-shaped you would be pretty much screwed with just a touch screen.

DJI Phantom API or hackable procedure

Maybe I haven't looked hard enough, but I spent yesterday googling for a bit and found no relevant projects on hacking the DJI Phantom drone to create new coordination apps, besides the coordination app DJI currently uses for their drone. I'm trying to see if there's a way to communicate with the drone using a specific protocol so it will accept a set of procedures.
Any help would be awesome,
Thanks.
Great news for you and all of us droneys! DJI has launched their SDK since you asked this question. They released it last November, and you can now apply for a license and write your own apps for the Phantom 2 Vision+ using their SDK.
Check it out at https://developer.dji.com/
I am already building a project using the SDK; you can follow my progress on my blog / product site. I will also try to keep it updated with good DJI-related development links and tips.
This post is old, but I think it is good to leave a footprint for others :)
There is a new company called NVdrones, which created a piece of hardware that you can attach to any drone (you need physical access to the flight controller). Once you do that, you can use their SDK (Arduino, Java, Android, and JavaScript) to write your app without any hacking, soldering, or anything else. It is just plug and play.
Another benefit is that you are not locked to a specific drone (DJI SDK or 3DRobotics SDK); you can use the board on anything you want, which gives you a lot of flexibility.
The developer site is http://developers.NVdrones.com
Hope this helps.
This is a great topic!
You could check how to hack your copter here: https://github.com/flyver/Flyver-SDK/wiki/-2.2--How-To:-Flyver-Hack-a-Copter
By opening the drone, taking out the original controller, soldering a few wires, and attaching an Android phone to it, you gain the ability to program your Phantom in a modern manner, with an open-source SDK and application-based development. This means you could add computer vision, automation, or additional hardware to it. You could also use smartphones, the web, and other interactive devices to remote-control the copter instead of the standard remote controls.
The Phantom, however, is balanced off-center, because most people fly it with a gimbal. Without the gimbal it is a lot less stable in my experiments, so you will have to put some extra work into balancing it.

Google Glass SDK for Epson Moverio

From what I understand of the technical specs, Google Glass displays a 2D plane on a projector for one eye. The Android SDK, together with the GDK, provides tools for writing apps for the device, with features that can sense eye and voice actions. But it does not provide 3D stereoscopic vision, as that would require projectors for both eyes.
The Epson Moverio, on the other hand, promises a true 3D augmented-reality experience. Having used the Moverio, I can see it has two projectors, one for each eye, that can display stereoscopic images.
Perhaps I should have done more extensive research on the spectrum of products/toolkits available, but I still have some questions/doubts for which I have not been able to find any information.
Q1. Does Google provide any glasses product with projectors for both eyes?
ANS: No.
Q2. Does the Google Glass development kit (the API) provide features for generating left and right views of a 3D object for the Epson Moverio? I have seen that Wikitude and Metaio come with this kind of feature. Does Google provide any support for it in the GDK?
ANS: No, not from Google.
Q3. Does Epson plan to roll out any developer tools for easily creating 3D markers and plotting them in the projected space?
ANS: Not announced by Epson yet.
There is no current support in Google Glass for stereoscopic views.
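On the Moverio side, the usual way to produce left and right views (independent of any Google toolkit) is to render the scene twice into side-by-side half-screen viewports while the device's 3D display mode is active. A minimal GLES sketch; the eye separation value and the drawScene() helper are placeholders:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;

public class SideBySideRenderer {
    private static final float EYE_SEPARATION = 0.06f;  // metres, placeholder

    // Renders the same scene into left and right half-screen viewports,
    // shifting the camera by half the eye separation for each eye.
    public void drawStereoFrame(int width, int height, float[] centerViewM) {
        float[] eyeViewM = new float[16];

        // Left eye: left half of the frame buffer.
        GLES20.glViewport(0, 0, width / 2, height);
        Matrix.translateM(eyeViewM, 0, centerViewM, 0, EYE_SEPARATION / 2, 0f, 0f);
        drawScene(eyeViewM);

        // Right eye: right half of the frame buffer.
        GLES20.glViewport(width / 2, 0, width / 2, height);
        Matrix.translateM(eyeViewM, 0, centerViewM, 0, -EYE_SEPARATION / 2, 0f, 0f);
        drawScene(eyeViewM);
    }

    private void drawScene(float[] viewMatrix) {
        // Placeholder: draw your geometry with the given view matrix.
    }
}
```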
