DJI Mobile SDK - how is distance between adjacent waypoints calculated?

Using the DJI Mobile SDK to upload Waypoint Missions, if two adjacent waypoints are determined by DJI to be too close (within 0.5 meters), the upload is rejected.
Does anyone know the algorithm used to determine the distance between adjacent waypoints in a waypoint mission?
Specifically, is the DJI algorithm using a haversine calculation for the distance between lat/lon coordinates and, if so, what earth radius is used? Is it the IUGG mean radius of 6371008.8 meters, or some other radius?
Or does it use the ellipsoidal Vincenty formula (WGS-84)?
This information would be useful for more precise waypoint decimation prior to mission upload.
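For reference, a haversine implementation with the IUGG mean radius looks like the sketch below (Python). This is an assumption about the kind of formula involved, not DJI's confirmed code, and the decimation margin is my own suggestion.

    import math

    IUGG_MEAN_RADIUS_M = 6371008.8  # one common choice; DJI's actual radius is unconfirmed

    def haversine_m(lat1, lon1, lat2, lon2, radius=IUGG_MEAN_RADIUS_M):
        """Great-circle distance in meters between two (lat, lon) points given in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * radius * math.asin(math.sqrt(a))

    def decimate(waypoints, min_spacing_m=0.6):  # 0.6 m keeps a margin over the 0.5 m limit
        """Drop waypoints closer than min_spacing_m to the previously kept one."""
        kept = list(waypoints[:1])
        for wp in waypoints[1:]:
            if haversine_m(*kept[-1], *wp) >= min_spacing_m:
                kept.append(wp)
        return kept

Since the exact DJI computation is unknown, decimating with a margin above 0.5 m is safer than cutting exactly at the limit.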

First off, I would note that DJI is very unlikely to answer an internal implementation question, since doing so would commit them to supporting that implementation over time and across aircraft. Different aircraft and different technologies may result in varying implementations.
What has always worked for me is to use standard "distance between points" calculations, either common map formulas or those built into the platform SDK (iOS, Android, etc.). I have found these sufficiently accurate to plan even complex flights.

Based on several tests, I can now confirm that the current DJI internal distance computation depends on latitude and/or longitude, meaning you will get different results for the same (!) waypoint separation depending on where the two points are anchored.
A waypoint pair 1.5 meters apart was accepted as a mission at a location in central Europe but rejected with WAYPOINT_DISTANCE_TOO_CLOSE at a location in the central US.
(We verified with https://gps-coordinates.org/distance-between-coordinates.php that both waypoint pairs had the same 1.5-meter separation.)
So it's safe to assume that DJI has a bug in their distance calculation.
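To illustrate how a distance check can become location-dependent (this is speculation, not DJI's confirmed implementation): a planar shortcut that applies the meters-per-degree-of-latitude factor to longitude differences as well, without the cos(latitude) correction, reports different distances for the same true separation at different anchor latitudes.

    import math

    DEG_LAT_M = 111320.0  # approximate meters per degree of latitude

    def buggy_planar_m(lat1, lon1, lat2, lon2):
        """A flawed planar distance: ignores that longitude degrees shrink with latitude."""
        return math.hypot((lat2 - lat1) * DEG_LAT_M, (lon2 - lon1) * DEG_LAT_M)

    def point_east_of(lat, lon, meters):
        """A point `meters` due east of (lat, lon); small-distance spherical approximation."""
        return lat, lon + meters / (DEG_LAT_M * math.cos(math.radians(lat)))

    for anchor_lat in (50.0, 40.0):  # roughly central Europe vs. the central US
        p1 = (anchor_lat, 0.0)
        p2 = point_east_of(*p1, 1.5)  # a true 1.5 m east-west separation
        print(anchor_lat, round(buggy_planar_m(*p1, *p2), 2))  # prints 2.33, then 1.96

Whether DJI's bug looks anything like this is unknown; the point is only that degree-based shortcuts produce exactly this kind of latitude-dependent behavior.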

Related

Distance to horizon with terrain elevation data

Looking for an algorithm to compute the actual distance from a latitude/longitude/elevation to the visible horizon, taking into account the actual surrounding terrain and the curve of the earth. Assume you have enough terrain data for the surrounding several hundred miles from any of the open elevation datasets. The problem can be simplified to an approximation by checking a few cardinal directions. Ideally I'd like to be able to compute the real solution as well.
Disclosure: I'm the developer and maintainer of the below mentioned software package.
I'm not sure if you're still looking for a solution, as this question is already a bit old. However, one solution to your problem would be to apply the open-source package HORAYZON (https://github.com/ChristianSteger/HORAYZON). It's based on the high-performance ray-tracing library Intel Embree (https://www.embree.org), thus very fast, and it considers Earth's curvature. With this package, you can compute the horizon angle and the distance to the horizon line for one or multiple arbitrary locations on a Digital Elevation Model (DEM) and set various options like the number of cardinal sampling directions, the maximum search distance for the horizon, etc.
However, I'm not sure what you mean by "real solution". Do you mean the "perfect" solution, i.e. considering elevation information from all DEM cells without doing a discrete sampling along the azimuth angle? Unfortunately, this cannot be done with the above-mentioned package (but one could theoretically implement it).
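As a baseline before terrain enters the picture, the smooth-sphere horizon distance follows directly from observer height. A minimal sketch (standard geometry, unrelated to HORAYZON's internals):

    import math

    EARTH_RADIUS_M = 6371008.8  # IUGG mean radius; any mean radius is fine for a rough figure

    def smooth_horizon_m(observer_height_m):
        """Line-of-sight distance to the horizon on a smooth sphere (no terrain, no refraction)."""
        h = observer_height_m
        return math.sqrt(2 * EARTH_RADIUS_M * h + h * h)

    print(smooth_horizon_m(100.0))  # ~35,700 m for an observer 100 m up

Terrain then raises or lowers the visible horizon relative to this baseline, which is what sampling a DEM along azimuth rays accounts for.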

What is the time unit returned by dji.common.mission.waypoint.Builder.calculateTotalTime()

Unfortunately, DJI SDK documentation only states:
Calculate the total time of the waypoint list.
Is that in seconds? Fractions of a second? Minutes?
The same applies to Builder.calculateTotalDistance():
Calculate the total distance of the waypoint list.
I couldn't deduce the correct time (and distance) units, since I got different values for the same set of waypoints in different map locations (!). For instance, the result for the same equally spaced waypoints in middle-east Brazil is different when they're applied in the northwest of Canada. The difference is ~15%. What's the reason?
My SDK version is 4.13.1 for Android (MSDK).
I e-mailed them at dev@dji.com and they answered that the distance is in meters and the time in seconds.
About the different values for middle-east Brazil and the north-west of Canada, I have no clue.
About the geographical variations: 1 degree of longitude at the equator is much bigger (in meters) than 1 degree close to the North Pole, so what you observed sounds right.
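To put numbers on that (a standard spherical approximation; whether this fully explains the ~15% difference in the SDK's results is unconfirmed):

    import math

    EARTH_RADIUS_M = 6371008.8  # mean Earth radius, used here for illustration only

    def meters_per_degree_lon(lat_deg):
        """Approximate length of one degree of longitude at a given latitude."""
        return math.radians(1.0) * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

    print(meters_per_degree_lon(-10.0))  # roughly central Brazil: ~109.5 km
    print(meters_per_degree_lon(60.0))   # roughly northwest Canada: ~55.6 km

So any computation that works in raw degrees rather than meters will give location-dependent totals.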

How to get FLIGHT time between two locations using the Google Maps Distance Matrix API?

I need to get travel time by plane between two locations.
In Distance Matrix API there is no Travel Mode like "flight" - https://developers.google.com/maps/documentation/javascript/distancematrix#travel_modes
Any suggestions?
Unfortunately there is no easy way to do this.
However, you could do a trick and estimate the flight time by using geocoding and the geometry library. At least if you don't need real flight data.
The steps would be:
Transform origin and destination from String to LatLng (Geocoding API)
Create markers for each location. (Geometry lib)
Calculate the airline distance using computeDistanceBetween, which takes two LatLng points (like this: google.maps.geometry.spherical.computeDistanceBetween(origin, destination))
Calculate the approximate flight time using an average airplane speed (see the sketch below).
Hope this helps.
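The same estimate sketched outside the Maps API (Python here; computeDistanceBetween returns the equivalent great-circle distance in meters). The 900 km/h cruise speed is an assumption to tune:

    import math

    EARTH_RADIUS_KM = 6371.0

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine great-circle distance in kilometers."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    def flight_hours(lat1, lon1, lat2, lon2, cruise_kmh=900.0):
        """Rough airliner flight time; ignores taxi, climb, routing and winds."""
        return great_circle_km(lat1, lon1, lat2, lon2) / cruise_kmh

    print(flight_hours(51.47, -0.45, 40.64, -73.78))  # Heathrow -> JFK, about 6.2 hours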
This is sort of a workaround, but an easy one.
Use the measure-distance tool in Google Maps, then, for the flight time of regular commercial airliners, divide the distance in miles by 500 mph to get the flight time in hours. For a small propeller plane (like a Cessna), use 165 mph.

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino), and I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I am having trouble finding the best algorithm or method to determine its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
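A minimal sketch of those steps in one dimension: a robot on a line, measuring a noisy range to a wall at a known position (all values are illustrative, not tuned):

    import math
    import random

    WALL_X = 10.0       # known landmark: a wall at x = 10 m
    N_PARTICLES = 500

    def likelihood(particle_x, measured_range, sigma=0.3):
        """How well a particle's predicted wall distance matches the sensor reading."""
        err = measured_range - (WALL_X - particle_x)
        return math.exp(-err * err / (2 * sigma * sigma))

    # 1. Start with particles spread uniformly, all equally probable.
    particles = [random.uniform(0.0, WALL_X) for _ in range(N_PARTICLES)]
    true_x = 2.0

    for step in range(20):
        # 2. Motion update: command 0.2 m forward; particle motion is noisy too.
        true_x += 0.2
        particles = [p + 0.2 + random.gauss(0.0, 0.05) for p in particles]

        # 3. Measurement update: weight particles by agreement with the range sensor.
        z = (WALL_X - true_x) + random.gauss(0.0, 0.3)
        weights = [likelihood(p, z) for p in particles]

        # 4. Resample: low-weight particles die out, high-weight ones multiply.
        particles = random.choices(particles, weights=weights, k=N_PARTICLES)

    # 5. Estimate: e.g. the mean of the surviving particles.
    print(sum(particles) / N_PARTICLES, "vs. true position", true_x)

In 2D you would track (x, y, heading) per particle and plug in your real sensor models, but the loop structure stays the same.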
A QR Code poster in each room would not only make an interesting Modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If two known markers have an angular displacement (left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers: by the inscribed angle theorem, the arc between the markers on that circle subtends twice the angle you see. If you have the markers at a known height and the camera at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers; using both will help do it with fewer markers.
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements into a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for the given input), there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
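For a feel of the Kalman estimator mentioned above, here is the one-dimensional special case (a sketch of the standard predict/update cycle; the numbers are made up):

    def kalman_step(x, p, u, q, z, r):
        """One predict/update cycle for a 1-D state.
        x, p: current estimate and its variance
        u, q: commanded motion (dead reckoning) and its noise variance
        z, r: position measurement (e.g. from markers) and its noise variance
        """
        # Predict: apply the motion; uncertainty grows.
        x, p = x + u, p + q
        # Update: blend in the measurement, weighted by relative confidence.
        k = p / (p + r)  # Kalman gain
        return x + k * (z - x), (1 - k) * p

    # Fuse noisy odometry with a noisy marker fix:
    x, p = 0.0, 1.0
    x, p = kalman_step(x, p, u=0.5, q=0.02, z=0.62, r=0.1)
    print(x, p)  # the estimate lands between dead reckoning (0.5) and the marker fix (0.62)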
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or the hacked drivers if your Eee PC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field; it simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure), you can also use OpenCV for detecting features in the image.
One solution would be to use a strategy similar to flood fill (see Wikipedia), as sketched below. To get the controller to perform sweeps accurately, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g., running a motor for 1 second yields an xx change in proximity. With that info, you can move your bot an exact distance and sweep the room using flood fill.
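A sketch of the flood-fill part on a coarse occupancy grid (the cell size would come from the motor calibration above; the grid and names are illustrative):

    from collections import deque

    def coverage_order(grid, start):
        """Breadth-first flood fill over free cells; returns the order to visit them.
        grid: 2-D list with 0 = free and 1 = obstacle; start: (row, col) of the robot.
        """
        rows, cols = len(grid), len(grid[0])
        seen, order, queue = {start}, [], deque([start])
        while queue:
            r, c = queue.popleft()
            order.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return order

    room = [[0, 0, 0],
            [0, 1, 0],  # 1 = an obstacle found with the bump sensor
            [0, 0, 0]]
    print(coverage_order(room, (0, 0)))

Note that the breadth-first visit order is not yet a drivable path, since consecutive cells in it may not be adjacent; you would still plan short moves between them.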
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels plus the compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups in my opinion. At least for a start.
Ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range. More info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And the Arduino has lots of GPS-module options:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write a routine for the Arduino to move the robot from point to point (as collected above), assuming it will do all that obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too, but I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range, and it would be cheap. You can then map anything you bump into easily.
Maybe you could also use any interruption of the beacon beams to plot objects that are quite far from the robot.
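The triangulation math is a small linear solve, assuming the rotating eye plus the compass gives absolute bearings to two beacons at known positions (a planar sketch; the coordinates and noise handling are up to you):

    import math

    def locate(beacon1, bearing1, beacon2, bearing2):
        """Planar fix from absolute bearings (radians from the +x axis) to two known beacons.
        Solves beacon_i = P + t_i * (cos b_i, sin b_i) for the robot position P.
        """
        (x1, y1), (x2, y2) = beacon1, beacon2
        d1 = (math.cos(bearing1), math.sin(bearing1))
        d2 = (math.cos(bearing2), math.sin(bearing2))
        det = -d1[0] * d2[1] + d2[0] * d1[1]
        if abs(det) < 1e-9:
            raise ValueError("robot and beacons are collinear; no unique fix")
        t1 = (-(x1 - x2) * d2[1] + d2[0] * (y1 - y2)) / det
        return x1 - t1 * d1[0], y1 - t1 * d1[1]

    # A robot at (1, 1) sighting beacons at (0, 4) and (5, 0):
    b1, b2, p = (0.0, 4.0), (5.0, 0.0), (1.0, 1.0)
    br1 = math.atan2(b1[1] - p[1], b1[0] - p[0])
    br2 = math.atan2(b2[1] - p[1], b2[0] - p[0])
    print(locate(b1, br1, b2, br2))  # recovers ~(1.0, 1.0)

A third beacon turns the exact solve into an overdetermined fit, which is where averaging (or the Kalman estimator mentioned earlier) comes in.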
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; your position in the room can be computed from the angular distance to the borders of the ceiling, and your direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
Good luck!
Use an HC-SR04 ultrasonic sensor or similar.
As mentioned above, sense the distance to the walls with the sensors and identify the room with a QR code.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep going. I think it will help :)

Reach a waypoint using GPS/Compass/Accelerometer - Algorithm?

I currently have a robot with some sensors: a GPS, an accelerometer and a compass. What I would like is for my robot to reach a GPS coordinate that I enter. I wondered if an algorithm to do that already exists. I don't want source code, which wouldn't have any point; just the procedure to follow so that my robot can do it and I can understand what I'm doing. For the moment, let's imagine that I can read the GPS coordinates at any time, so there's no need for a Kalman filter. I know that's unrealistic, but I would like to program it step by step, and Kalman is the next step.
If anyone has an idea...
To get a bearing (positive angle east of north) between two lat-long points use:
bearing=mod(atan2(sin(lon2-lon1)*cos(lat2),(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon2-lon1)),2*pi)
Note - angles probably have to be in radians depending on your math package.
But for small distances you can just calculate how many meters in one degree of lat and long at your position and then treat them as flat X,Y coords.
For typical 45deg latitudes it's around 111.132 km/deg lat, 78.847 km/deg lon.
1) Orient your robot toward its destination.
2) Move forward until the distance between you and the destination starts increasing, at which point you go back to 1).
3) BUT... if you are close enough (under a threshold), consider that you have arrived at the destination.
You can use Android's Location class. Its bearingTo() method computes the bearing you have to follow to reach another location.
There is a very nice page explaining the formulas between GPS-based distance, bearing, etc. calculation, which I have been using:
http://www.movable-type.co.uk/scripts/latlong.html
I am currently trying to do these calculations myself, and I just found out that Martin Becket's answer contains an error. If you compare it with the info on that webpage, you will see that the part in the middle:
(lat1)*sin(lat2)
should actually be:
cos(lat1)*sin(lat2)
I would have left a comment, but I don't have the reputation yet...
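Putting the corrected formula together with the orient-and-move procedure above (a sketch; the arrival threshold is an assumption to tune to your GPS accuracy):

    import math

    def initial_bearing_rad(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, radians east of north.
        Uses the corrected formula: note the cos(lat1) factor."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlmb = math.radians(lon2 - lon1)
        y = math.sin(dlmb) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
        return math.atan2(y, x) % (2 * math.pi)

    def haversine_m(lat1, lon1, lat2, lon2, radius=6371008.8):
        """Great-circle distance in meters."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
        return 2 * radius * math.asin(math.sqrt(a))

    ARRIVAL_THRESHOLD_M = 2.0  # an assumption; tune to your GPS noise

    def navigate_step(robot, target, compass_heading_rad):
        """One control step; robot and target are (lat, lon). Returns (turn_rad, arrived)."""
        if haversine_m(*robot, *target) < ARRIVAL_THRESHOLD_M:
            return 0.0, True
        desired = initial_bearing_rad(*robot, *target)
        turn = (desired - compass_heading_rad + math.pi) % (2 * math.pi) - math.pi
        return turn, False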
