Can I plan a mission with DJI SDK without GPS? - dji-sdk

I'm trying to control the drone to fly autonomously, but in an area without GPS access. Will I be able to use the SDK to tell it to fly x meters forward/backward/up/down etc. without GPS?

The answer is YES. I rarely use GPS in my navigation tasks. The only difference is how complex your hardware/software needs to be.
DJI OSDK
For most of my projects in recent years, I use DJI OSDK ROS to fly the drone with pure LiDAR or camera; see the example video below. Inside, it is running visual inertial navigation with a stereo node. I have tried it with the DJI A3/N3/M100/M210/M600, and all work fine. This approach needs complex onboard hardware, but the software is simple and straightforward.
https://www.youtube.com/watch?v=1AbfRENy3OQ&t=90s
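If you go this route, the control side is small: your LiDAR/visual odometry replaces GPS as the position feedback, and you just keep publishing velocity setpoints. A minimal rospy sketch, assuming the topic and service names from DJI's Onboard-SDK-ROS 3.x (verify them against your release):

```python
#!/usr/bin/env python
# Minimal sketch: fly "forward" at 0.5 m/s for 4 s via DJI Onboard-SDK-ROS.
# Topic/service names assume Onboard-SDK-ROS 3.x; check your version.
import rospy
from sensor_msgs.msg import Joy
from dji_sdk.srv import SDKControlAuthority

rospy.init_node('gpsless_forward')

# Request flight control authority from the flight controller.
rospy.wait_for_service('/dji_sdk/sdk_control_authority')
authority = rospy.ServiceProxy('/dji_sdk/sdk_control_authority',
                               SDKControlAuthority)
authority(1)  # 1 = request control

# Velocity setpoints go out as a Joy message: [vx, vy, vz, yaw_rate] (ENU).
pub = rospy.Publisher('/dji_sdk/flight_control_setpoint_ENUvelocity_yawrate',
                      Joy, queue_size=10)

rate = rospy.Rate(50)  # the flight controller expects setpoints at ~50 Hz
t_end = rospy.Time.now() + rospy.Duration(4.0)
while not rospy.is_shutdown() and rospy.Time.now() < t_end:
    pub.publish(Joy(axes=[0.5, 0.0, 0.0, 0.0]))  # 0.5 m/s east
    rate.sleep()
pub.publish(Joy(axes=[0.0, 0.0, 0.0, 0.0]))      # stop
```

Nothing in this loop touches GPS; how accurate "x meters forward" ends up being depends entirely on the quality of your onboard odometry.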
DJI MSDK or PSDK
For other cases, like the DJI MSDK or PSDK (if you have access), you can use other methods: stream the video down, do the localisation on the ground, and then send the control commands back up. See my video below (this is not using a DJI A3, but it uses a similar concept; I dropped this idea after the school project as it was deemed unsuitable for actual commercial application). It is PTAM with an EKF for IMU fusion.
https://www.youtube.com/watch?v=6xNINp7nnDge
The code running behind it is from here: https://github.com/tum-vision/tum_ardrone.
The DJI MSDK is meant to replace this link mentioned in tum_ardrone: https://github.com/AutonomyLab/ardrone_autonomy.
All you have to do is modify the source code's input and output system into an Android C++ library. It is not an easy job, but I have already seen other people do it. This approach is simple in hardware but more work in software.
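The fusion part of tum_ardrone is a standard loosely coupled EKF: predict at IMU rate, correct whenever a (slower, delayed) PTAM pose arrives. A toy 1D version of that predict/correct loop, just to show the structure (the real filter is larger and handles measurement delay):

```python
import numpy as np

# Toy 1D loosely coupled filter: IMU acceleration drives the prediction,
# an occasional slow visual pose measurement corrects the drift.
# Illustrative only -- not the actual tum_ardrone filter.

dt = 0.005                       # IMU at 200 Hz
x = np.zeros(2)                  # state: [position, velocity]
P = np.eye(2)                    # state covariance
F = np.array([[1, dt], [0, 1]])  # constant-velocity transition
B = np.array([0.5 * dt**2, dt])  # acceleration input model
Q = np.diag([1e-6, 1e-4])        # process noise (IMU noise/bias drift)
H = np.array([[1.0, 0.0]])       # vision measures position only
R = np.array([[0.01]])           # vision measurement noise

def predict(accel):
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(vision_pos):
    global x, P
    y = vision_pos - H @ x                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# Predict every IMU sample; correct whenever PTAM delivers a pose (~5 Hz):
for step in range(2000):
    predict(accel=0.1)
    if step % 40 == 0:
        correct(vision_pos=np.array([0.5 * 0.1 * (step * dt)**2]))
```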
DJI WSDK
Even with the DJI Windows SDK you can still use a pure PTAM-based approach in a feature-rich area, as shown in the image below. It is running semi-direct visual odometry (SVO) from the ETH group. This takes minimal effort in both hardware and software. The only problem is that you need a feature-rich area.
I quite disagree with @Ken, as optical flow is only meant for low-level/microcontroller position hold. It is not meant for dynamic odometry/state estimation. High-level, general localization and mapping require at least a visual odometry/SLAM output. And it is not only low altitude: medium to high altitudes will also work, as shown in the figure below.
The code for producing this image is available here: https://github.com/uzh-rpg/rpg_svo
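To see why the feature-rich requirement matters, here is what a minimal feature-based two-frame odometry step looks like in OpenCV (a rough sketch, not SVO itself, which is semi-direct rather than purely feature-based). With too few trackable corners, every step below degrades or fails:

```python
import cv2
import numpy as np

# Minimal two-frame VO step: track corners with pyramidal LK, then recover
# the relative camera rotation/translation (translation is up to scale).

K = np.array([[700.0, 0, 320],       # assumed pinhole intrinsics;
              [0, 700.0, 240],       # replace with your calibration
              [0, 0, 1]])

img0 = cv2.imread('frame0.png', cv2.IMREAD_GRAYSCALE)  # your frames here
img1 = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)

# In a texture-poor scene this returns few/no corners and the pose fails.
p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                             qualityLevel=0.01, minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)

good0 = p0[status.ravel() == 1]
good1 = p1[status.ravel() == 1]

E, inliers = cv2.findEssentialMat(good0, good1, K,
                                  method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
print('rotation:\n', R, '\ntranslation direction:\n', t.ravel())
```

Note that t comes back only as a direction; absolute scale needs stereo, a rangefinder, or IMU fusion as above.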

Since positioning relies very heavily on the aircraft's GPS system, I cannot see how you can accurately control movements. You will have other issues too, since most autonomous flight operations will not start until there are 6 (or 7) locked satellites.
You will also find that the aircraft will lack the ability to hold position (except when it is low enough for the vision systems to hold position).
My suggestion would be to look into the virtual joystick parts of the SDK (see the sketch below), but honestly I feel you will not be happy, and what you ask above may not be achievable.
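To make the limitation concrete: "fly x meters forward" over virtual stick without GPS reduces to dead reckoning, i.e. commanding a velocity and integrating time, with no feedback to bound the drift. A toy sketch, with a hypothetical send_virtual_stick() helper standing in for whichever SDK call you actually use (e.g. the MSDK's virtual stick mode):

```python
import time

def send_virtual_stick(pitch_mps, roll_mps, yaw_dps, vertical_mps):
    """Hypothetical stand-in for your SDK's virtual stick call."""
    pass

def fly_forward(distance_m, speed_mps=0.5, rate_hz=10):
    # Dead reckoning: distance = speed * time. Without GPS or visual
    # feedback this drifts with wind and attitude error -- which is
    # exactly the concern raised in this answer.
    t_end = time.monotonic() + distance_m / speed_mps
    while time.monotonic() < t_end:
        send_virtual_stick(speed_mps, 0.0, 0.0, 0.0)
        time.sleep(1.0 / rate_hz)
    send_virtual_stick(0.0, 0.0, 0.0, 0.0)  # stop

fly_forward(5.0)  # nominally 5 m forward, with unbounded drift
```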

Related

General tips for running a Unity3D animation on Microsoft HoloLens

I'm going to have the task of making sure that an animation created in Unity3D can run on a Microsoft HoloLens. I don't have any further information about the animation yet, but I wanted to ask in advance if there are any big things I should keep in mind.
In the animation you play a "character" in first-person mode, controlled by WASD or the arrow keys, and you can look up, down, left, and right with the mouse. There are (as far as I know) no special interactions besides colliders.
And another question: is it easier to test the animation on the actual HoloLens or to use a HoloLens emulator on my laptop?
I know it's a lot to ask right now without any code or other details, but I still hope that some of you can give me a little advice :)
In my experience it is difficult to say. The HoloLens, besides being an awesome device with nice specs for its size, has quite limited graphical power. Try to reduce your model's vertices to a reasonably low count (e.g. using Blender's decimate feature, as sketched below). Turn down the quality in Unity's quality settings, as proposed in the Dev Guide.
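If you end up decimating many models, it is scriptable. A small sketch using Blender's bpy API (2.9+; the 0.3 ratio is just an example target):

```python
import bpy

# Reduce every selected mesh to ~30% of its face count with a Decimate
# modifier, then apply it. Run inside Blender; 0.3 is an example ratio.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = 0.3
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)
```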
For your emulation question: the emulator does not emulate the HoloLens' specs (processor, memory, ...) but emulates input concepts etc. while running a Hyper-V virtual machine. So the performance in the emulator depends on your computer's hardware and is not related to the actual performance on a HoloLens.
Also take a look at the performance guidelines from Microsoft.
I worked on HoloLens for a couple of projects. A few points that may be useful for you:
The first big thing I would keep in mind is whether the character has to move in a VR environment. In that case the HoloLens is almost useless, because its lenses will let you see the surroundings [the real ones], distracting you from the virtual world. This is exactly what happens with the pre-installed HoloTour: a nice attempt, but you will not totally feel like you are in Rome or Machu Picchu.
The second big thing I would consider is the fact that, at least for the first release, the HoloLens has a very limited field of view, which "amounts to the size of a monitor in front of you – equivalent to 15 inches" [source]. It is likely that, in a situation where the character will look in every direction, the objects you put in the AR space will end up cut off or invisible.
About testing: the emulator is really exceptional; I didn't find great differences between it and the real device. Of course, if you already have the real HoloLens, I would use that. But if not, I would first develop and test on the emulator to understand whether the project is worth the purchase.

How is it possible to get the tracked features from the Tango APIs used for motion tracking

As shown in the Project Tango GTC video, some local features are extracted and tracked for motion estimation, which is then fused with accelerometer data.
Since any developer may need to track features to develop their apps, I was wondering if there is a way to get those features through the APIs.
Although it is possible to extract some points and retrieve their flow using the estimated 6DOF pose returned by the APIs, this adds extra overhead. Another issue with this approach is that the pure visual flow (including outliers) is not obtainable, as it is influenced by the IMU data.
So my question is: if these features are tracked using hardware-accelerated algorithms, how can we get them through the APIs without having to implement the tracking ourselves and duplicate the work?
Any answer or suggestion would be appreciated.
It is straightforward to compile OpenCV for the Tango with NVIDIA's TADP package; use 3.0r4. You may need to merge some OpenCV4Android bits, but it's easy. The ES examples will fail on the device, but don't sweat it.
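Once OpenCV is on the device, you can extract and match your own features per frame instead of waiting for the internal tracks to be exposed. The usual detect/describe/match loop, sketched with ORB via the Python bindings for brevity (the C++/Android API mirrors it):

```python
import cv2

# Detect ORB keypoints in two frames and match them -- a DIY substitute
# for the Tango pipeline's internal (unexposed) feature tracks.
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev = cv2.imread('prev.png', cv2.IMREAD_GRAYSCALE)  # your frames here
curr = cv2.imread('curr.png', cv2.IMREAD_GRAYSCALE)

kp0, des0 = orb.detectAndCompute(prev, None)
kp1, des1 = orb.detectAndCompute(curr, None)

matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
for m in matches[:50]:                       # keep the 50 best matches
    x0, y0 = kp0[m.queryIdx].pt
    x1, y1 = kp1[m.trainIdx].pt
    print('track: (%.1f, %.1f) -> (%.1f, %.1f)' % (x0, y0, x1, y1))
```

This won't be as cheap as the hardware-accelerated internal tracker, which is exactly the overhead the question complains about.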
Google released the "Project Tango ADF Inspector" on the Play Store. I haven't actually had any time to play with it, but it's the first thing to offer any look inside that data. I think Google considers this data sensitive and is cautious in this area, with good reason: if you look for the starred "important" note on this page, you should get a feel for the sensitivity of that issue.

Interactive Augmented Reality 3D drawer

I'm planning to build an interactive AR application that will use a laser sensor (for distances) and GPS technology to get a location, and then use a compass/gyroscope for tracking 6DOF viewfinder movements. The user can choose from a number of ready-made 3D models and should be able to place them by selecting the desired location on the screen.
My target platform will be an 8-inch handheld device running Windows 8.
Any hints on what would be the best AR SDK or 3D viewer to work with?
Thanks in advance!
There are quite a few 3D viewers that work in the browser, but most recently and most notably: the va3C viewer.
It is a WebGL-based app and doesn't require a server, so if your handheld device supports WebGL you are good to go; however, whether it works in IE is questionable ;).
Based on my experience and your use case, though, I believe client-side JS libraries do not provide enough access to the device's hardware. So you might have to serve information like GPS and gyroscope readings from the server side, then gather it on the client using something like socket.io, and mash it up alongside the geometry (see the sketch below).
I am trying to do something similar, although I haven't quite done it yet. Will keep you posted.
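For the server-side sensor relay idea above, the glue can be tiny. A sketch using the python-socketio and eventlet packages (an assumed stack chosen just to illustrate the pattern; a Node.js socket.io server would look much the same):

```python
# Tiny sensor relay: a device emits readings, browser viewers subscribe.
# Assumed stack: python-socketio + eventlet, purely illustrative.
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins='*')
app = socketio.WSGIApp(sio)

@sio.event
def sensor_reading(sid, data):
    # e.g. data = {'gps': [lat, lon], 'gyro': [x, y, z]}
    sio.emit('sensor_update', data)   # broadcast to all connected clients

if __name__ == '__main__':
    eventlet.wsgi.server(eventlet.listen(('', 5000)), app)
```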
Another approach I am exploring is X3DOM, which gives you the ability to write 3D data, XML-like, alongside HTML; it is quite declarative and simple to pick up. X3DOM derives from X3D.
Tell me if you need more info.
Also worth exploring for its motion abilities is Robot Studio, a desktop app with an SDK.

Great idea for embedded development

For my university, I (and three others) are searching for a project that utilizes at least one embedded device, web services or other web technology, and a graphical user interface.
Currently we are looking at developing a unified remote: an extendable application on a cell phone through which you can control your media center. Any ideas or advice on this will be appreciated, though it is not the focus of this question.
We are having a hard time finding interesting (or fun) projects we could work on for a complete semester. Any ideas will be greatly appreciated. The software will be released as free software (GPL or BSD license).
We all have a BSc in Software Engineering.
EDIT: I am very pleased with the suggestions so far. Thanks to everyone, and keep it coming.
How about a follower: you carry a device, and as you move from room to room in your house, devices configure themselves to your preferences - lights, music, etc. If two people are in the room, some precedence rules apply.
Would that be possible based just on the presence of a mobile phone?
Another idea (off the top of my head):
A work-environment monitoring thing. We programmers like to develop in nice, quiet environments. Unfortunately, some people tend to annoy us with their disturbing behaviour (or just by being loud).
So the project could be to create devices which track the stress levels (sweat levels, pulse, etc.) of individuals and their impact on others.
An example: one individual is very loud (the device should measure this), and others around him become stressed and/or unfocused because of it. The server-side software should then detect this and warn him to quiet down a bit to improve the work environment.
Comments?
What do you peeps like doing? Build an app for it.
So, if you like drinking coffee, build an application which will find the nearest frothy coffee shoppe (or, if you're particular, the nearest Peets/Starbucks/Whatever-ocino). This idea works for beer too.
If you buy stuff off eBay, build a sniper app.
If you enjoy playing frisbee, build an app which locates your nearest friends and sends them a text asking whether they want to skip lectures and go to the park.
Heck, you could even build an app which monitors your SO questions and alerts you when you get an answer (although I don't know whether the data services SO currently offers will be up to the job).
The standout companies that have made great universal (programmable) remotes are Logitech and Philips.
One of the big problems with these types of devices is the ability of the general consumer to actually program all of their various devices. Logitech has done an outstanding job of providing a fairly simple web-based setup experience that then drives a very usable universal control.
I would definitely look at what they have done for ideas on universal remote controls.
How about an app and hardware that will tell me when my wife's plants need watering? (It's somehow my fault if they don't get watered.)
OK then: the recipe-generating fridge. RFID tags on the contents know what's available and the expiry dates. The database knows the recipes. The fridge emails/texts you to say, "Buy some mushrooms and you can have a delicious ham and mushroom omelette while the eggs are still fresh."
Benjamin, and all those aspiring to do embedded projects...
When you start a project, especially in embedded systems, you need to understand that the hardware is not your PC but some special device, and every sensor will be a transducer in itself. The thing that matters most to students is that everything costs money, and can be costly.
So it will be good to make sure that the idea is such that:
it can be completed by the project members within the given timeframe;
all the required development tools, hardware included, can actually be bought;
above all, the project enables you to learn something useful for your career.
To do all this, it is better to set some achievable goals.
Develop a system with which you can program the lighting in your house: you set up the schedule once, and everything works automatically.
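Even a toy version of that scheduler is a useful scope check for a semester project. A sketch, with a hypothetical set_light() standing in for whatever relay or protocol (X10, KNX, a GPIO board) you actually drive:

```python
import datetime
import time

# Toy light scheduler: (hour, minute) -> {light_name: on/off}.
SCHEDULE = {
    (7, 0):   {'kitchen': True,  'porch': False},
    (22, 30): {'kitchen': False, 'porch': True},
}

def set_light(name, on):
    # Hypothetical stand-in for the real hardware interface.
    print('%s -> %s' % (name, 'ON' if on else 'OFF'))

while True:
    now = datetime.datetime.now()
    actions = SCHEDULE.get((now.hour, now.minute))
    if actions:
        for name, on in actions.items():
            set_light(name, on)
        time.sleep(60)   # don't re-fire within the same minute
    time.sleep(1)
```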
I really love working with the Atmel STK1000/STK1006/STK1002 development boards for the AVR32. ATSTK1000
2x Ethernet
QVGA LCD
USB 2.0
SD/MMC
CompactFlash
Embedded Linux support
IR
Audio
PS/2 interfaces
UARTs
and more
Atmel family page:
AVR32 family home
Online forums:
Forums for the CPU

How to implement a voice changer?

I want to write an app which changes the microphone input voice to make it sound like a robot or some funny man's voice. It must support sending the changed voice to all applications, like IM software or game clients. Which technology should I pick: the Windows waveform API? DirectX? An audio driver?
Thank you very much!
There's an MSDN Coding4Fun article that explains how to create a voice changer that operates over Skype, in C# (.NET). The full source code is also hosted as a project on CodePlex. In addition, it should be fairly easy to do something else with the audio (as opposed to streaming it via Skype), since the project is based around the NAudio framework, which provides a good level of abstraction. Anyway, it is a reasonably complete (and stable) example - definitely worth checking out, in my opinion.
If you want or need to use C++ or some other language for development, then this project should at least give you some ideas about how to go about it. Still, if you can use .NET, then I think you're in luck.
A robot voice is often done with a ring modulator effect, mixing the voice with a sine wave - this is the easier option (see the sketch below). Alternatively, use a vocoder effect, modulating the voice onto some other waveform, like a rectangle wave - that might be a bit more tricky. Go read up on how these effects work, and get a program that lets you hear how they sound (Audacity works for the ring modulator; finding and using a vocoder may be a bit harder). Then read up on how it's done, or get a library which will do the processing for you.
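The ring modulator really is just a multiplication. Here is a sketch that applies it offline to a WAV file (assuming NumPy/SciPy are available; the 30 Hz carrier is a typical choice for the classic robot sound):

```python
import numpy as np
from scipy.io import wavfile

# Ring modulator: multiply the voice by a low-frequency sine carrier.
# ~30 Hz gives the classic "Dalek"-style robot voice.
rate, voice = wavfile.read('voice.wav')        # mono 16-bit assumed
voice = voice.astype(np.float32) / 32768.0

t = np.arange(len(voice)) / float(rate)
carrier = np.sin(2 * np.pi * 30.0 * t)         # 30 Hz carrier

robot = voice * carrier
wavfile.write('robot.wav', rate, (robot * 32767).astype(np.int16))
```

Once it sounds right offline, the same multiply goes into whatever real-time pipeline you pick (waveform API, DirectShow filter, etc.).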
You are looking to support VSTi or DXi plugins.
There are tons that also act as vocoders, even for free.
You just need to write the host application.
Take a look here :)
Now that's a neat idea, especially for a mobile app.
I'd probably start offline, using a .wav file as input, to get the effects working the way I wanted. You can use any high-level language for this, but you probably want something that will map reasonably well onto C/C++.
For a production version, I'd go native and do this in C or C++. You want something fast for real-time audio processing, and I like to avoid dependencies on things like .NET for distribution. (Not that I have anything against .NET; it's great for servers and distribution within a company, but I'm not so keen on having it as a dependency for shrink-wrap software.)
Windows DirectShow would be a tempting option - you could do some interesting multimedia effects as well if you had the voice morpher implemented as a DirectShow filter.
What you're looking for is a vocoder. I don't know whether any of the technologies listed above has a vocoder effect, but your best chance would be with DirectX.
Try this sample app; I think it will be useful to you. Link
