How to set up the DJI Level 2 API demo? - joystick

I intend to create an Android app to control a drone (in my case a Phantom 2 Vision) using a virtual joystick, and I already have the Level 2 API. To get there, I wanted to see a working app with my own eyes to understand how the API should be used. I tried to run the DJI demo application, following the steps in the documentation (such as putting the API key in the manifest). The application seems to work, but I can only control the gimbal; the virtual joystick does not work for some reason. Is there any limitation in terms of Android OS version, device, Phantom firmware version, etc.? I asked some questions on DJI's forum, but no one gave me a concrete answer, so I hope someone here can give me a hint :)
I'm using a Samsung Galaxy Note 10.1. I'm working on the DJI-SDK-Android-V2.4.0 project.
While debugging, I could see "D/GsProtocolJoystickDemoActivity: GroundStationResult GS_Result_Failed".

Since you can invoke the gimbal and camera APIs correctly, I assume you have already activated your app. The key point is that the virtual joystick can be used only in ground station mode. My suggestions are as follows:
Switch the remote controller to F mode.
Invoke DJIDrone.getDJIGroundStation().openGroundStation().
Invoke the joystick methods.
Please note: the app key must have Level 2 access so that you can invoke the ground-station-related methods.
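Putting the steps together, here is a minimal sketch against the v2.4 SDK from the question. Only openGroundStation() is taken from the answer above; the ordering notes are assumptions, so check GsProtocolJoystickDemoActivity in the demo project for the exact callback and joystick method signatures:

```java
// 1. Switch the remote controller to F mode (a hardware switch, not code).

// 2. Open the ground station BEFORE sending any joystick command:
DJIDrone.getDJIGroundStation().openGroundStation();

// 3. Only start sending joystick input (pitch/roll/yaw/throttle from your
//    virtual stick view) after the open succeeds -- i.e. after the result
//    callback reports GS_Result_Success instead of the GS_Result_Failed
//    you are currently seeing in the log.
```

If the callback keeps returning GS_Result_Failed, the usual suspects are the controller not being in F mode or the app key lacking Level 2 access.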

Related

DJI Mobile SDK and DJI Simulator

Is it possible with the Mobile SDK to write an application that receives waypoints from a web service and then starts the drone and monitors its operation?
The use case is as follows:
- Start drone
- Fly to a height of 2m
- Take picture/video and send/stream picture/video to the app
- Land again
Is it possible to simulate my code in the DJI Simulator and then when I know everything works use a Spark or Mavic for a real-life demonstration?
Yes, absolutely, although it's not necessarily MobileSDK-specific. Here's an example:
1/ You create a desktop (native or web) app that does the mission planning. This app saves the mission in a known format; my advice is to create a framework/library that manages this format.
2/ A mobile app built on top of the MobileSDK reads the mission in that format, using the said framework.
3/ The mobile app translates the mission requirements into the mission systems available in the MobileSDK, either through WaypointMission, MissionControl, or even VirtualStick commands.
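As a concrete illustration of steps 1/ and 2/: the "known format" can be as simple as one waypoint per line. The format below is a hypothetical invention for illustration, not anything the MobileSDK defines; it's a sketch of the round trip the desktop planner and the mobile app would share:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Hypothetical minimal mission format: one "lat,lon,altitude" line per
// waypoint. The desktop planner writes it; the mobile app parses it and
// feeds the result into a WaypointMission.
public class MissionFormat {
    public static class Waypoint {
        public final double lat, lon, altMeters;
        public Waypoint(double lat, double lon, double altMeters) {
            this.lat = lat; this.lon = lon; this.altMeters = altMeters;
        }
    }

    // Serialize a mission to the line-based text format.
    public static String serialize(List<Waypoint> mission) {
        StringBuilder sb = new StringBuilder();
        for (Waypoint wp : mission) {
            sb.append(String.format(Locale.ROOT, "%.6f,%.6f,%.1f%n",
                    wp.lat, wp.lon, wp.altMeters));
        }
        return sb.toString();
    }

    // Parse the text format back into waypoints, skipping blank lines.
    public static List<Waypoint> parse(String text) {
        List<Waypoint> mission = new ArrayList<>();
        for (String line : text.split("\\R")) {
            if (line.isEmpty()) continue;
            String[] parts = line.split(",");
            mission.add(new Waypoint(Double.parseDouble(parts[0]),
                                     Double.parseDouble(parts[1]),
                                     Double.parseDouble(parts[2])));
        }
        return mission;
    }

    public static void main(String[] args) {
        List<Waypoint> mission = new ArrayList<>();
        mission.add(new Waypoint(51.507400, -0.127800, 2.0)); // climb to 2 m
        List<Waypoint> roundTrip = parse(serialize(mission));
        System.out.println(roundTrip.get(0).altMeters); // prints 2.0
    }
}
```

The point of the shared library is exactly this: both apps link the same serialize/parse code, so the format can evolve without the two sides drifting apart.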
As for simulation: once the drone is in simulator mode, the mission will execute and you can watch how it runs.
If you want to take things further, you can even stream data back from the mobile app to your desktop app to overlay the actual path against the planned path.
I can't drop source code for this as it's extensive, but hopefully this helps.

Apple HomeKit - server-driven

Is it possible, with Apple HomeKit and therefore Siri, to communicate with a server using intents rather than communicating with the device itself, like a light?
I would like to say "Siri, turn on the light in room X" and have the light fixture in the room controlled by a server, so that the intent goes to the cloud rather than to the light itself.
You can use Homebridge for that:
Homebridge is a lightweight NodeJS server you can run on your home network that emulates the iOS HomeKit API
It works quite well. I use it, running on a Raspberry Pi, to drive a 433 MHz transmitter that controls lights and other remote-control devices in my home (from Siri and also from the "Home" app).
There are a lot of plugins that you can install to control devices, but it's also possible to create your own plugins to receive the intents and perform an action based on them.

UWP Mediabox - a few questions

I have a question for you, and I really hope you can provide me some information.
I want to build a media center because I have not found any way to cast my content straight to the big screen from my Windows phone.
Of course there is the Wireless Display Adapter from Microsoft, but I do not want to cast my whole display to my TV.
After testing a few products (Amazon Fire TV box, Apple TV 3, the Display Dock, and the wireless dock), I came to the conclusion that there is no all-in-one solution that fits my expectations.
From that point I figured I would have to build my own "TV application".
OK, OK... there is Kodi (XBMC) and so on... but I think that is just a detour.
The following features must be included:
Runs on Windows 10
Casts music, videos, and pictures
Can launch and download Windows Store apps
Implements Project Rome to share data across devices
This seems possible, but here's one big problem...
When we talk about media boxes, we mean those small boxes beside your TV. Instead of building a micro-ATX setup, I want to take this to the next level: using IoT (a Raspberry Pi 3).
Using IoT may have some advantages but there are a few disadvantages I have to worry about.
Will Windows 10 work properly on an IoT device (advantages/disadvantages)?
Media streaming?
ARM architecture
Bluetooth, Wi-Fi, Ethernet connectivity
I have never worked with IoT before, so I am kind of a noob again. I am asking for advice on how to make this possible.
[UWP] How can I stream data (e.g. video, music, images) to another application?
[UWP] How can I implement a remote control, just like the Amazon Fire TV controller?
What are the advantages and disadvantages of using Windows 10 on a Raspberry Pi?
Can I use the Windows 10 default applications (the Groove Music, Images, and Videos applications) to play incoming data?
What do you think? Is it possible to create a media center running on a Raspberry Pi using Windows 10?
Thank you in advance.
The most straightforward idea would be to create an always-running app with a MediaPlayerElement whose Source property can be set programmatically by a remote-control app. The remote-control app could also trigger the pause, play, next, and previous actions.
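The control channel in this idea boils down to a tiny text protocol between the remote app and the media box. Here is a hedged sketch of such a protocol; the command names are my own invention, the real player side of a UWP app would be C#/XAML driving a MediaPlayerElement, and Java is used here only to illustrate the parsing:

```java
import java.util.Locale;

// Hypothetical remote-control protocol: the remote app sends one text
// command per line; the media box parses it and drives the player.
public class RemoteProtocol {
    public enum Action { PLAY, PAUSE, NEXT, PREVIOUS, SET_SOURCE, UNKNOWN }

    public static class Command {
        public final Action action;
        public final String argument; // e.g. the media URI for SET_SOURCE
        public Command(Action action, String argument) {
            this.action = action; this.argument = argument;
        }
    }

    // Parse one line of the protocol into a structured command.
    public static Command parse(String line) {
        String[] parts = line.trim().split("\\s+", 2);
        switch (parts[0].toLowerCase(Locale.ROOT)) {
            case "play":   return new Command(Action.PLAY, null);
            case "pause":  return new Command(Action.PAUSE, null);
            case "next":   return new Command(Action.NEXT, null);
            case "prev":   return new Command(Action.PREVIOUS, null);
            case "source": return new Command(Action.SET_SOURCE,
                                   parts.length > 1 ? parts[1] : null);
            default:       return new Command(Action.UNKNOWN, null);
        }
    }

    public static void main(String[] args) {
        Command c = parse("source http://example.com/movie.mp4");
        System.out.println(c.action + " " + c.argument);
        // prints: SET_SOURCE http://example.com/movie.mp4
    }
}
```

On the box, a SET_SOURCE command would translate into setting the MediaPlayerElement's Source; the transport (TCP socket, HTTP, Project Rome app service) is an independent choice.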
Be aware that there is no hardware video acceleration support for the Raspberry Pi on Windows IoT Core yet, and that probably won't come soon. There are other devices that do have proper video drivers (see the hardware support page of Windows IoT Core).
Also be aware that there is no Windows Store on Windows IoT Core, unless you are an OEM (then you can publish your properly signed apps in an official way to devices that are managed by you).
A simpler way would be to buy a Windows 10 box from AliExpress. Then you can use Miracast to stream your screen, install apps from the Store, and play films directly on it, for example using Kodi, for which remote-control apps exist.

WiFi: what does OS X call a "device"?

Where I work, we are building a GUI to run on iOS or Android; the GUI is intended to control an embedded board. The embedded board does not have WiFi, an Ethernet port, or a USB port, but it does have an RS-232 serial port; so we are using a product called a WiSnap.
http://serialio.com/products/mobile/wifi/WiSnapKit2.php
We have been able to connect to the WiSnap using OS X or iOS (an iPad 2). But none of our Android tablets recognize the device at all. The WiSnap acts as a WiFi access point and broadcasts an SSID; the Android tablets do not list this SSID among the available WiFi access points. Under Linux Mint 12, my laptop can see the WiSnap, but attempts to connect to it fail. Interestingly, my cell phone (a Droid 2) is able to see the WiSnap, but I don't have telnet on my phone, so I haven't tested whether it actually works.
Under OS X, I noticed something. In the drop-down list of WiFi access points, there are two distinct groups: the top group, which contains most of the listed WiFi access points, then a lower group, that has a sub-heading that says "Devices" and contains just the WiSnap and something called "hpsetup". (I don't know what "hpsetup" is or where it might be; there are lots of WiFi users in this neighborhood.)
The WiSnap is operating in "ad-hoc" mode with no security at all.
So, my question is: what is the significance of OS X calling the WiSnap a "device"? It is frustrating to try to search Google for "WiFi devices"; you get a giant haystack of results that are not related to this.
Also, is there anything we can do to make an Android tablet see the WiSnap and connect to it?
Can anyone recommend a good resource where I can read up on WiFi? Again Google hasn't helped much; there are so many introductions to WiFi out there, most of them at a very simple level.
Thanks for any help you can give me.
EDIT: The vendor does claim Android compatibility for some models of WiSnap, but not for others.
This lists Android as supported: http://serialio.com/products/mobile/wifi/WiSnapKit2.php
This does not: http://serialio.com/products/mobile/wifi/WiSnapAAA.php
I guess I should contact the vendor, but I do want to understand what is going on, so I was hoping to get advice from the StackOverflow community about this.
EDIT: We did contact the vendor. What we found out is that the WiSnap can be a stand-alone device only in ad-hoc mode. If you set up a WiFi router or access point, the WiSnap will join the network in infrastructure mode. But the WiSnap will not act as an infrastructure mode access point.
Android OS at the moment only supports infrastructure mode. So, if we want to use an Android tablet with a WiSnap we would have to set up some sort of WiFi router or access point. We are looking into other solutions now.
I'm relatively sure that hpsetup is the ad hoc wifi for an HP wireless printer. So perhaps the ad-hoc/peer-to-peer qualifier is what causes OS X to classify it as a device.

How would I go about implementing an OSX (desktop) Core Location provider?

I want to use Mac OSX for a location application, but want to provide fake location data (via a GPS log file). How would I go about implementing this? I've looked at http://developer.apple.com/library/mac/#documentation/CoreLocation/Reference/CoreLocation_Framework/index.html but it only talks about USING the framework, not about how it gets its location information. I've implemented location providers for the Android platform, but seen nothing so far about core location.
There is clearly an interface for it: if it is carried over from iOS, I know there is a super-secret Bluetooth GPS protocol for providing location information from external GPS devices (there is at least one device that does so), and I expect a similar hook exists somewhere in OS X.
Thanks.