Is there any way to get live telemetry data from the DJI Mavic 2 Zoom to another unit, say a computer?

As part of a course at my university we've been given the task of taking live wind telemetry from a drone and feeding it to a neural network, so that it gives better estimates than just using a sensor.
Our research so far tells us that our drone, the DJI Mavic 2 Zoom, is only compatible with the Windows SDK, not the Onboard SDK.
Simply put, our question is: is there any way for us to send the raw wind speed and direction data from the drone's sensors to a computer?

Create an Android application with the DJI Mobile SDK and send the data from the MSDK to your computer over Wi-Fi.
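For example, a minimal sketch, assuming DJI Mobile SDK 4.x on Android; the host/port and the line format are placeholders, and getFlightWindWarning() is only available in more recent 4.x releases:

    // Sketch: stream flight telemetry from the MSDK to a PC over Wi-Fi.
    // Run the socket work on a background thread (Android forbids network
    // I/O on the main thread). Error handling omitted for brevity.
    import java.io.PrintWriter;
    import java.net.Socket;

    import dji.common.flightcontroller.FlightControllerState;
    import dji.sdk.flightcontroller.FlightController;
    import dji.sdk.products.Aircraft;
    import dji.sdk.sdkmanager.DJISDKManager;

    public class TelemetryStreamer {
        public void start() throws Exception {
            // "192.168.1.50" and 9000 are placeholders for your computer.
            Socket socket = new Socket("192.168.1.50", 9000);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

            Aircraft aircraft = (Aircraft) DJISDKManager.getInstance().getProduct();
            FlightController fc = aircraft.getFlightController();

            // Called at roughly 10 Hz with the latest aircraft state.
            fc.setStateCallback(new FlightControllerState.Callback() {
                @Override
                public void onUpdate(FlightControllerState state) {
                    out.println(String.format("vx=%.2f vy=%.2f vz=%.2f windWarning=%s",
                            state.getVelocityX(), state.getVelocityY(), state.getVelocityZ(),
                            state.getFlightWindWarning()));
                }
            });
        }
    }

Note that, as the next answer explains, the state callback only exposes a coarse wind warning level, not raw wind speed or direction.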

The SDK only provides the wind warning level (0, 1, and 2). It does not provide any information about the direction the wind is blowing from or the actual wind speed.
The aircraft tries to hold its current position on its own, even in moderate wind. However, the drone does not tell the user how hard it has to work, or in which direction, to counteract the wind.
I assume you're better off accessing real-time wind information for your position from a weather service on the internet, if one is available in your country.

I've made a wind meter app.
The best method is:
Fly against the wind.
In virtual stick mode, use angle mode and set pitch and roll to 0. This lets the drone drift with the wind.
Slowly rotate the yaw.
Measure the speed; when it stops increasing, the GPS speed gives you the wind speed and direction.
Warning: in strong wind you have to fly against the wind for quite a while first.
The yaw rotation is needed because the drone is never exactly level, so it picks up speed in one direction; rotating cancels that out.
Send the info to a server over the internet/Wi-Fi.
I've done this on an Android phone connected to the controller.
The Windows SDK doesn't seem to support virtual sticks, which I find strange. In that case it must be done on Android or iOS and transmitted to a server. I might be wrong, since I've never used the Windows SDK. A sketch of the drift step follows below.
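A rough sketch of that drift step, assuming DJI Mobile SDK 4.x (classes from dji.common.flightcontroller.virtualstick; the 15 deg/s yaw rate is just an illustrative value):

    // Sketch: enable virtual stick in angle mode, command zero pitch/roll so
    // the aircraft drifts with the wind, and rotate yaw slowly. Assumes DJI
    // Mobile SDK 4.x; virtual stick commands must be re-sent at ~10 Hz.
    import dji.common.flightcontroller.virtualstick.FlightControlData;
    import dji.common.flightcontroller.virtualstick.RollPitchControlMode;
    import dji.common.flightcontroller.virtualstick.YawControlMode;
    import dji.sdk.flightcontroller.FlightController;

    public class WindDrift {
        public void drift(final FlightController fc) {
            fc.setVirtualStickModeEnabled(true, null);
            fc.setRollPitchControlMode(RollPitchControlMode.ANGLE); // angle, not velocity
            fc.setYawControlMode(YawControlMode.ANGULAR_VELOCITY);

            // Zero pitch/roll lets the wind carry the drone; the slow yaw
            // rotation averages out small leveling errors. 15 deg/s is illustrative.
            FlightControlData data = new FlightControlData(0f, 0f, 15f, 0f);
            fc.sendVirtualStickFlightControlData(data, null);
            // Re-send `data` on a timer. Once the GPS ground speed from
            // FlightControllerState stops increasing, speed ≈ wind speed and
            // the drift heading ≈ wind direction.
        }
    }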

Related

How to design a mission to visit several locations and in each location process some computer vision tasks on the mobile platform?

I need my drone, a Mavic 2 Pro, to visit approximately 10 locations at a relatively low altitude (1.7 m). At each location the camera should look in the right direction, and the mission should pause to let the mobile application process some CV tasks. I am not sure of the best approach for a mission that is partially processed on the mobile platform. What should I use in the DJI Mobile SDK API to pause the mission when a location is reached?
I am planning to use a timeline mission composed of a sequence of GoToAction elements. I wonder whether this is a good way to do it, or whether there is a better solution.
Is MissionControl.Listener the right place to interrupt a mission when a TimelineElement finishes, or should I use WaypointReachedTrigger?
I wasn't able to find any suitable example.
Please add a specific programming question. Otherwise, answers will be primarily opinion-based; see https://stackoverflow.com/help/dont-ask for details.
The DJI mission API lets you control the drone's gimbal and GPS navigation through MissionAction. GoToAction is a subclass of the MissionAction class, and it only flies to a given GPS location. So you need other elements in the timeline, such as GimbalAttitudeAction and a photo-capture action, to perform the camera pointing and capturing. See the sketch below.
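A minimal sketch of such a timeline, assuming DJI Mobile SDK 4.x timeline classes; the coordinates and gimbal pitch are illustrative, the 1.7 m altitude is from the question:

    // Sketch: timeline of GoToAction + GimbalAttitudeAction + ShootPhotoAction,
    // pausing after each GoToAction so the app can run its CV task.
    // Assumes DJI Mobile SDK 4.x; error handling omitted.
    import dji.common.error.DJIError;
    import dji.common.gimbal.Attitude;
    import dji.common.model.LocationCoordinate2D;
    import dji.sdk.mission.MissionControl;
    import dji.sdk.mission.timeline.TimelineElement;
    import dji.sdk.mission.timeline.TimelineEvent;
    import dji.sdk.mission.timeline.actions.GimbalAttitudeAction;
    import dji.sdk.mission.timeline.actions.GoToAction;
    import dji.sdk.mission.timeline.actions.ShootPhotoAction;

    public class CvMission {
        public void run() {
            final MissionControl mc = MissionControl.getInstance();
            // One location; repeat these three elements for each of the ~10 points.
            mc.scheduleElement(new GoToAction(new LocationCoordinate2D(48.0, 11.0), 1.7f));
            mc.scheduleElement(new GimbalAttitudeAction(new Attitude(-30, 0, 0)));
            mc.scheduleElement(new ShootPhotoAction()); // single-photo capture

            mc.addListener(new MissionControl.Listener() {
                @Override
                public void onEvent(TimelineElement element, TimelineEvent event, DJIError error) {
                    // Pause once a GoToAction finishes, run the CV task, then resume.
                    if (element instanceof GoToAction && event == TimelineEvent.FINISHED) {
                        mc.pauseTimeline();
                        // ... process frames with OpenCV, then: mc.resumeTimeline();
                    }
                }
            });
            mc.startTimeline();
        }
    }

So a MissionControl.Listener reacting to element-level FINISHED events is a workable place to interrupt the timeline, as the question suspected.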
For CV tasks, it is easy to link a DJI app to OpenCV, but I would strongly recommend against it, since tasks such as detection with a CNN take too many resources. The popular approach is to upload the images from a local buffer to a local server with a GPU for processing in a near-real-time manner, along the lines of the sketch below. I'm using the WSDK with Windows online OCR for detection; video at https://youtu.be/CcndnHkriyA. I tried a local phone-based approach, but the result was limited by the model's accuracy, and I could not use a higher-accuracy model because its processing demands are too high.
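A bare-bones sketch of that upload step (plain HttpURLConnection; the server URL is a placeholder, and in practice you would grab the JPEG frames via the MSDK camera/media APIs):

    // Sketch: POST a captured JPEG to a local GPU server for near-real-time
    // processing. Add threading and retries for real use.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FrameUploader {
        public void upload(byte[] jpegBytes) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://192.168.1.50:8000/detect").openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "image/jpeg");
            try (OutputStream os = conn.getOutputStream()) {
                os.write(jpegBytes); // raw JPEG body; server replies with detections
            }
            System.out.println("Server responded: " + conn.getResponseCode());
        }
    }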
What you want is pretty easy to implement but hard to perfect. Flying at low altitude (1.7 m) requires some degree of obstacle avoidance and GPS-less path planning. What is implemented in the Mavic hardware is only simple avoidance or slipping through gaps. For anything more complex, like going around a wall or through a maze-like environment, it is better to add your own global and local path planners. For feedback you can use the SVO method to get odometry and build a local sparse obstacle map for inflated-radius calculation.
See the video at https://www.youtube.com/watch?v=2YnIMfw6bJY.
The feedback code is available at https://github.com/uzh-rpg/rpg_svo.
For the path planning you can also try ETH's https://github.com/ethz-asl/mav_voxblox_planning.
Good luck with your work.

Remove GPS on Matrice 100

My team and I are programming indoor flight for the Matrice 100, and we have no use for the GPS.
Is it possible to remove it?
Also, at floor level we sometimes have electromagnetic problems and the drone refuses to turn on the rotors; is there any way to force it?
We use Guidance, and I have noticed that even without GPS and with electromagnetic interference the drone is stable.
As of March 2018, on ALL DJI drones you need to have at least the compass connected in order to start flight.
Can't you just remove the GPS module from the M100? The module rises above the rest of the craft; it's that little white puck with "DJI" written on it in red.
Alternatively, I've heard of people covering it with tin foil to prevent the GPS signal from coming through.

Does the Project Tango tablet work outdoors?

I'm looking to develop an outdoor application, but I'm not sure whether the Tango tablet will work outdoors. Other depth devices tend to work poorly outside because they depend on IR light being projected from the device and then observed after it bounces off objects in the scene. I've been looking for information on this, and all I've found is this video: https://www.youtube.com/watch?v=x5C_HNnW_3Q. Based on the video, it appears it can work outside by doing some IR compensation and/or using the depth sensor, but I just wanted to make sure before getting the tablet.
If the sun is out, it will only work in the shade, and darker shade is better. I tested this morning using the Java Point Cloud sample app, and I only get > 10k points in my point cloud in the center of my building's shadow, close to the building. Toward the edge of the shadow the depth point cloud frame rate goes way down and I get the "Few depth points" message. If it's overcast, I'm guessing your results will vary depending on how dark it is; I haven't tested that yet.
The Tango (Yellowstone) tablet also works by projecting IR light patterns, like the other depth-sensing devices you mentioned.
You can expect the pose tracking and area learning to work as well as they do indoors. The depth perception, however, will likely not work well outdoors in direct sunlight.

Disabling inertia tracker

Is there any way to disable the "inertial motion sensors" for a program, so that my program does not track the device's acceleration? I've noticed that if a user suddenly moves and then suddenly stops with the device, the motion tracking becomes inaccurate.
Even if you could, "disabling the sensors" is not a good idea. And when you say "disabling" I assume you mean setting them to an initialization state rather than just ignoring the data stream. The 3-axis accelerometer and gyroscope data are fused with position data to provide your relative location during motion tracking. You have no way of knowing which of these data streams is the source of the inaccuracy, and just turning off acceleration would likely require a re-calibration of all sensors so that tracking (data fusion) is accurate.
Replicate the error with as much data as possible (speed, stopping time, orientation of the tablet, distance to the nearest object, nearest object characteristics, etc.) and report it to the project Tango team.

Using the Windows Phone combined motion API to track device position

I'd like to track the position of the device with respect to an initial position with high accuracy (ideally) for small-scale motions (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this but ran into a few problems.

Apart from the noisy readings (which I was expecting and can tolerate), I see behavior that is conceptually wrong. For example, if I start from rest, move the phone around, and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the velocity to end up very small (ideally 0). But I don't see that.

I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI: my device does not have a gyroscope, so the API will be the low-accuracy version.)
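To make the expectation concrete, here is the naive double-integration loop sketched in Java (the Windows Phone API itself is C#; this only illustrates the math, and nextAcceleration-style sampling is hypothetical). It shows why the integrated velocity drifts rather than returning to zero: any constant bias b in the readings adds b*t to velocity:

    // Naive dead reckoning: integrate acceleration to velocity, and velocity
    // to position. Illustrative only; `accel` stands in for the
    // DeviceAcceleration samples from the combined motion API.
    public class DeadReckoning {
        double[] velocity = new double[3];
        double[] position = new double[3];

        // Call once per sensor sample; dt is the time since the last sample.
        void step(double[] accel, double dt) {
            for (int i = 0; i < 3; i++) {
                velocity[i] += accel[i] * dt;    // v += a*dt (bias b adds b*t over time)
                position[i] += velocity[i] * dt; // p += v*dt (so position drifts as b*t^2/2)
            }
        }
    }

This is why accelerometer-only tracking over even < 1 meter tends to fail without some correction, such as zeroing the velocity whenever the device is detected to be at rest.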
