How can I obtain the Absolute Altitude Value (ASL) using the DJI MSDK with an M300 drone?

I am trying to get the drone's absolute altitude (ASL) value in real time (before taking off).
I have two drones: a Mavic 2 Enterprise Advanced and an M300.
With the code below I was able to obtain the ASL on the Mavic 2; however, the same code returns a null value on the M300:
Object heightAboveSeaLevel = KeyManager.getInstance().getValue(FlightControllerKey.create(FlightControllerKey.ABSOLUTE_GPS_ALTITUDE));
I also tried the following, with no luck:
DJIKey GPSKey = FlightControllerKey.create(FlightControllerKey.ABSOLUTE_GPS_ALTITUDE);
DJISDKManager.getInstance().getKeyManager().getValue(GPSKey, new GetCallback() {
    @Override
    public void onSuccess(@NonNull Object value) {
    }

    @Override
    public void onFailure(@NonNull DJIError error) {
    }
});
On a side note, when using the M300, the ASL value is shown in the DJI Pilot app.
Has anyone accomplished the above, or does anyone have any ideas about what I should use?

This is the reply from DJI:
This key is not currently supported on the M300. Are you familiar with MSDK V5? It provides the key for obtaining real-time altitude: KeyRTKAbsoluteAltitude.
MSDK V5: https://developer.dji.com/doc/mobile-sdk-tutorial/cn/
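Based on that reply, a minimal sketch of reading that key through the MSDK V5 key-value interface might look like the following. The package, class, and callback names here (KeyTools, KeyManager, CommonCallbacks.CompletionCallbackWithParam, IDJIError) and the owning key class for KeyRTKAbsoluteAltitude are assumptions taken from the V5 documentation and should be verified against the SDK version you actually use:

// Sketch only: reads the V5 real-time altitude key once.
// Class/package names are assumptions from the MSDK V5 docs.
import dji.sdk.keyvalue.key.FlightControllerKey;
import dji.sdk.keyvalue.key.KeyTools;
import dji.v5.common.callback.CommonCallbacks;
import dji.v5.common.error.IDJIError;
import dji.v5.manager.KeyManager;

public class AltitudeReader {
    public void readAbsoluteAltitude() {
        KeyManager.getInstance().getValue(
                KeyTools.createKey(FlightControllerKey.KeyRTKAbsoluteAltitude),
                new CommonCallbacks.CompletionCallbackWithParam<Double>() {
                    @Override
                    public void onSuccess(Double altitudeMeters) {
                        // altitudeMeters should be the ASL altitude reported by the aircraft
                    }

                    @Override
                    public void onFailure(IDJIError error) {
                        // Key not supported on this aircraft, or aircraft not connected
                    }
                });
    }
}

If you need continuous updates rather than a one-off read, the V5 KeyManager also exposes a listen(...) method for key-value subscriptions; check the V5 docs for its exact signature.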

Related

Xamarin SDK for OpenStreetMap

I'm trying to make a proof of concept (C#) for basic graphical maps as an alternative to Google Maps for Android and iOS devices, because Google started charging fees for their APIs (from my understanding, only affecting web right now).
It doesn't need to be particularly advanced: simply a GUI that shows a base map where you can draw:
Markers
Lines
Polygons
The only requirement I have is that it should be open-source, or at as low a cost as possible.
What I've done so far is to use data from http://openstreetmap.org and set up a tile server (https://switch2osm.org/serving-tiles/) on a separate Linux machine.
Furthermore, it was fairly quick to create a simple web app with OpenLayers.js and Leaflet.js connected to the custom tile server, with the requirements met.
What I need to do now is to find a free or cheap mobile SDK for Xamarin for Android and iOS. I managed to render a map from my own tile server and add markers by referencing .dlls from this zip from 2014 (only tested on Android): https://github.com/OsmSharp/ui/releases/tag/v4.2.0.723
using OsmSharp.Android.UI;
using OsmSharp.Android.UI.Data.SQLite;
using OsmSharp.Math.Geo;
using OsmSharp.UI.Map;
using OsmSharp.UI.Map.Layers;
[Activity(Label = "@string/app_name", MainLauncher = true)]
public class MainActivity : AppCompatActivity
{
    private MapView _mapView { get; set; }
    private Layer _mapLayer { get; set; }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        try
        {
            Native.Initialize();

            // Placeholder coordinates for the map center ("long" is a C# keyword,
            // so the longitude variable is named "lon" here).
            double lat = 0, lon = 0;

            _mapView = new MapView(this, new MapViewSurface(this))
            {
                MapTilt = 0,
                MapCenter = new GeoCoordinate(lat, lon),
                MapZoom = 16,
                Map = new Map()
            };

            // Create a marker from Resources/drawable/pin.png
            using (var bitmap = BitmapFactory.DecodeResource(Resources, Resource.Drawable.pin))
            {
                var marker = new MapMarker(this, new GeoCoordinate(lat, lon), MapMarkerAlignmentType.CenterBottom, bitmap);
                _mapView.AddMarker(marker);
            }

            _mapLayer = _mapView.Map.AddLayerTile("http://*.*.*.*/{0}/{1}/{2}.png");
            SetContentView(_mapView);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }
}
However, these .dlls seemed to lack support for lines and polygons. I have tried to get something similar to work with OsmSharp's latest NuGet package (2018-06-04), but my novice Xamarin experience only gets me so far.
Does anyone have any tips on how to use my own tile server and render native maps on Android and iOS devices?
PS: It doesn't strictly need to be OpenStreetMap with OsmSharp connected to a custom tile server; that's just something I'm leaning towards right now. Again, the requirements are open-source or low cost, with the flexibility to add markers, lines, and polygons.

Are iBeacon regions closed sets?

We are trying to use the AltBeacon library to satisfy the following use case:
We want to put several iBeacons in a room or corridor, with a distance of no more than 3 meters between each of them, and we want to determine the current closest iBeacon from the user's phone, which scans for the beacons.
We first tried to build regions with only one beacon each, assuming that a region was a closed set, meaning that when you enter a region you cannot be in another region at the same time, and when you leave a region you enter the next closest one, and so on. But that's not the approach the library implements.
We want to know if there is any way in the AltBeacon library to apply our approach, or if some kind of patch has to be made to satisfy the use case presented above.
The easiest way to accomplish this goal is to range for all beacons using a single region, and start ranging like this:
@Override
public void onBeaconServiceConnect() {
    try {
        // Set up a region that matches all of your beacons. You may want to replace the first
        // null with a UUID that all your beacons share.
        Region allBeaconsRegion = new Region("all beacons", null, null, null);
        beaconManager.startRangingBeaconsInRegion(allBeaconsRegion);
        beaconManager.setRangeNotifier(this);
    } catch (RemoteException e) {
        Log.e(TAG, "Cannot connect to beacon service");
    }
}
Note: if you are using a custom Application class with the RegionBootstrap, you can put the above code inside the didEnterRegion method instead of the onBeaconServiceConnect method.
Once you start ranging, you will get a callback once per second with a list of all visible beacons. You can add code to determine which one is closest:
@Override
public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
    Beacon closestBeacon = null;
    for (Beacon beacon : beacons) {
        if (closestBeacon == null) {
            closestBeacon = beacon;
        } else if (closestBeacon.getDistance() > beacon.getDistance()) {
            closestBeacon = beacon;
        }
    }
    // Do something with closestBeacon here
}
Keep in mind that the closest beacon may change back and forth due to radio noise, so you probably need to add extra logic to protect against the closest beacon flipping back and forth too often.
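One possible way to do that (not part of the library, just a sketch) is to keep reporting the same beacon until another beacon has measured meaningfully closer for several consecutive ranging cycles. The margin and vote count below are arbitrary illustrative values, and beacons are identified by their minor (getId3()) for simplicity:

// Illustrative smoothing sketch: only switch the reported "closest" beacon when
// another beacon is closer by a margin for several cycles in a row.
private Identifier trackedId = null;                      // id3 (minor) of the beacon we currently report
private int switchVotes = 0;
private static final double SWITCH_MARGIN_METERS = 0.5;   // arbitrary threshold
private static final int VOTES_REQUIRED = 3;              // arbitrary number of consecutive cycles

// Call once per ranging cycle, after finding closestBeacon as in the snippet above.
private void updateTracked(Beacon closestBeacon, Collection<Beacon> beacons) {
    // Find this cycle's distance for the beacon we are currently tracking, if it was seen.
    double trackedDistance = -1;
    for (Beacon b : beacons) {
        if (b.getId3().equals(trackedId)) {
            trackedDistance = b.getDistance();
            break;
        }
    }
    if (trackedId == null || trackedDistance < 0) {
        // Nothing tracked yet, or the tracked beacon vanished: adopt the current closest.
        trackedId = closestBeacon.getId3();
        switchVotes = 0;
        return;
    }
    if (closestBeacon.getDistance() + SWITCH_MARGIN_METERS < trackedDistance) {
        // Another beacon has been meaningfully closer; count consecutive cycles before switching.
        if (++switchVotes >= VOTES_REQUIRED) {
            trackedId = closestBeacon.getId3();
            switchVotes = 0;
        }
    } else {
        switchVotes = 0;
    }
}

Averaging getDistance() over the last few cycles for each beacon before comparing would serve a similar purpose.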

RadiusNetworks iBeacon didRangeBeaconsInRegion returns 0 beacons

I am trying to detect iBeacons with a specific UUID and major. The didRangeBeaconsInRegion method is being called, but the beacon collection it returns has 0 entries.
Below is my code (abridged a bit):
private static final String BEACON_UUID = "F8AD3E82-0D91-4D9B-B5C7-7324744B2026";
private static final int BEACON_MAJOR = 36582;
@Override
public void onIBeaconServiceConnect() {
    iBeaconManager.setRangeNotifier(new RangeNotifier() {
        @Override
        public void didRangeBeaconsInRegion(Collection<IBeacon> iBeacons, Region region) {
            if (iBeacons.size() > 0) {
                IBeacon thisBeacon = iBeacons.iterator().next();
            }
        }
    });
    try {
        iBeaconManager.startRangingBeaconsInRegion(new Region("myUniqueID", BEACON_UUID, BEACON_MAJOR, null));
    } catch (RemoteException e) {
        e.printStackTrace();
    }
}
I am assuming I am doing my binding correctly, since didRangeBeaconsInRegion(...) is being called successfully.
I have used RadiusNetworks' own application to detect the beacons, and that works fine; I can see them all, so it does not seem to be an issue with Bluetooth on my device.
A couple of tips:
Double check that your BEACON_UUID and BEACON_MAJOR are correct for the beacon that is transmitting. For testing, try setting both of these to null temporarily until you get it working, then you can set them back to the values you have.
It is normal for iBeacons.size() to be zero sometimes if a beacon did not happen to be detected in a given cycle, but it should not always be zero. I'm not sure how you are testing, but try adding Log.d(TAG, "Number of beacons detected: " + iBeacons.size()); and let it run to see if you ever get a non-zero number.
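For example, a throwaway debugging variant along these lines ranges with a wide-open region and logs everything it sees each cycle, so you can confirm the UUID and major you expect actually match what is being transmitted. The region name and log tag are arbitrary, and the getter names follow the old RadiusNetworks IBeacon class, so adjust them if they differ in your library version:

// Debugging sketch: match any beacon and log what is detected each ranging cycle.
iBeaconManager.setRangeNotifier(new RangeNotifier() {
    @Override
    public void didRangeBeaconsInRegion(Collection<IBeacon> iBeacons, Region region) {
        Log.d(TAG, "Number of beacons detected: " + iBeacons.size());
        for (IBeacon beacon : iBeacons) {
            Log.d(TAG, "UUID=" + beacon.getProximityUuid()
                    + " major=" + beacon.getMajor()
                    + " minor=" + beacon.getMinor());
        }
    }
});
try {
    // null UUID/major/minor means the region matches every visible iBeacon
    iBeaconManager.startRangingBeaconsInRegion(new Region("debugRegion", null, null, null));
} catch (RemoteException e) {
    e.printStackTrace();
}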
I suggest checking the UUID, major, and minor values of your beacons and making them match the region you want.
didRangeBeaconsInRegion should return an array of beacons.
You can use the "beecon" app to update the values easily.
Hope this helps.
Regards.

Unable to use GL11ExtensionPack in Monodroid

I'm using a GLSurfaceView and need to use some of the methods in IGL11ExtensionPack (GlGenRenderbuffersOES(), for example). In the native Android examples I've seen, this is done simply by casting the GL10 object provided by the framework to a GL11ExtensionPack, as in this example from Vogella:
public void onDrawFrame(GL10 gl) {
    if (mContextSupportsFrameBufferObject) {
        GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;
        ...
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, mFramebuffer);
        ...
    }
}
In Monodroid, I get a class cast exception when I try to do the same cast:
public void OnSurfaceCreated(IGL10 gl, EGLConfig config)
{
    IGL11ExtensionPack extpack = (IGL11ExtensionPack)gl;
    ...
}
Is anyone else using a GLSurfaceView with the GL11ExtensionPack successfully in Monodroid?
As a side note, there is a long-since resolved Android issue that I would not expect to apply here.
As per Atsushi Eno's response on the Xamarin Bugzilla, the solution is to use a JavaCast<T> as in:
IGL11ExtensionPack extpack = gl.JavaCast<IGL11ExtensionPack>();

Difference between the Motion API and the Accelerometer in Windows Phone

I use the Motion API in my application for Windows Phone, but not all devices support the Motion API. However, Windows Phone also has an Accelerometer API that is supported on most devices.
What is the difference between these APIs for my use? Please see these two code blocks:
Using Motion API:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    if (motion == null)
    {
        motion = new Motion();
        motion.TimeBetweenUpdates = TimeSpan.FromMilliseconds(15);
        motion.CurrentValueChanged += OnSensorReadingChangedMotion;
        motion.Start();
    }
}

private void OnSensorReadingChangedMotion(object sender, SensorReadingEventArgs<MotionReading> e)
{
    // Gravity is exposed on the MotionReading, i.e. e.SensorReading.Gravity
    Dispatcher.BeginInvoke(() => CurrentValueChangedMotion(e.SensorReading.Gravity.X, e.SensorReading.Gravity.Y, e.SensorReading.Gravity.Z));
}
And using Accelerometer:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    if (_accelerometer == null)
    {
        _accelerometer = new Accelerometer();
    }
    _accelerometer.TimeBetweenUpdates = TimeSpan.FromMilliseconds(15);
    _accelerometer.CurrentValueChanged += OnSensorReadingChangedAccel;
    _accelerometer.Start();
}

private void OnSensorReadingChangedAccel(object sender, SensorReadingEventArgs<AccelerometerReading> sensorReadingEventArgs)
{
    Dispatcher.BeginInvoke(() => CurrentValueChangedAccelerometer(
        sensorReadingEventArgs.SensorReading.Acceleration.X,
        sensorReadingEventArgs.SensorReading.Acceleration.Y,
        sensorReadingEventArgs.SensorReading.Acceleration.Z));
}
I found that the accelerometer readings are more volatile.
I need to use a vector {x, y, z} in my app. What is actually the difference between
{SensorReading.Acceleration.X, SensorReading.Acceleration.Y, SensorReading.Acceleration.Z}
and
{SensorReading.Gravity.X, SensorReading.Gravity.Y, SensorReading.Gravity.Z}?
What is the difference between the Motion API and the Accelerometer if I use a similar vector?
Here is the difference:
The Acceleration property from the accelerometer includes the resultant of the gravity force as well as forces caused by phone movement, whereas the Gravity property from the Motion API is only the gravity force (calculated using multiple sensors, then separated from the forces caused by phone movement).
According to this MSDN post:
The Accelerometer sensor detects the force of gravity along with any forces resulting from the movement of the phone. The combined motion API, accessed using the Motion class, uses multiple device sensors to separate the gravity vector from the device acceleration and allows you to easily determine the current attitude (yaw, pitch, and roll) of the device.
