Remove GPS on Matrice 100 - dji-sdk

My team and I are programming indoor flight for the Matrice 100, and we have no use for the GPS.
Is it possible to remove it?
Also, at floor level we sometimes have electromagnetic problems and the drone refuses to start the rotors; is there any way to force it?
We use the Guidance system, and I have noticed that even without GPS and with electromagnetic interference the drone is stable.

As of Mar 2018, on ALL DJI drones you need to have at least the compass connected in order to start flight.

Can't you just remove the GPS module from the M100? The module rises above the rest of the craft; it's that little white puck with "DJI" written on it in red.
Alternatively, I've heard of people covering it with tin foil to prevent the GPS signal from coming through.

Related

Is there any way to get live telemetry data from the DJI Mavic 2 Zoom to another unit, say a computer?

As part of a course at my university, we've been given the task of taking live wind telemetry from a drone and feeding it to a neural network so that it gives better estimates than a sensor alone.
The research we've done so far tells us that our drone, the DJI Mavic 2 Zoom, is only compatible with the Windows SDK, not the Onboard SDK.
Simply put, our question is: is there any way for us to send the raw wind speed and direction data from the drone's sensors to a computer?
Create an Android application with the DJI Mobile SDK and send data from the MSDK to your computer over Wi-Fi.
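To make that concrete, here is a minimal sketch of the forwarding side on Android in plain Java: it streams whatever values you pull out of the MSDK callbacks as CSV lines over a TCP socket. The server address, port, and choice of fields are illustrative assumptions, not anything prescribed by the SDK.

```java
import java.io.PrintWriter;
import java.net.Socket;

// Minimal sketch of forwarding telemetry from the Android app to a computer
// over the local Wi-Fi network. The host/port and CSV layout are
// illustrative assumptions, not part of the DJI SDK.
public class TelemetryForwarder {
    private static final String SERVER_HOST = "192.168.1.50"; // your computer's LAN IP (assumption)
    private static final int SERVER_PORT = 9000;              // any free port (assumption)

    private Socket socket;
    private PrintWriter out;

    public void connect() throws Exception {
        socket = new Socket(SERVER_HOST, SERVER_PORT);
        out = new PrintWriter(socket.getOutputStream(), true); // autoflush each line
    }

    // Call this from whatever MSDK state callback you use (e.g. the flight
    // controller state callback) with the values you want to forward.
    public void send(double velocityX, double velocityY, double altitude) {
        if (out != null) {
            out.println(System.currentTimeMillis() + "," + velocityX + "," + velocityY + "," + altitude);
        }
    }

    public void close() throws Exception {
        if (socket != null) socket.close();
    }
}
```

On the computer you only need a TCP listener (for example `nc -l 9000`, or a few lines in any language) to receive the lines. Remember that Android requires network I/O to run off the main thread.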
The SDK only provides the wind warning level (0, 1, and 2). It does not provide any information about the direction the wind is blowing from or its actual speed.
The aircraft tries to hold its current position on its own, even in moderate wind, but it does not tell the user how hard it has to work, or in which direction, to counteract the wind.
I assume you're better off accessing real-time wind information for your position from an online weather service, if that's available in your country.
I've done a wind meter app.
The best method is:
Fly against the wind.
In virtual stick mode, use angle mode and set pitch and roll to 0. This will let the drone drift with the wind.
Slowly rotate the yaw.
Measure the speed; when it stops increasing, the GPS speed gives you the wind speed and direction.
Warning: in strong wind you have to fly against the wind for quite a while.
The yaw rotation is needed because the drone is never exactly level, so it will pick up speed in one direction; rotating cancels that out.
Send the info to a server over the internet/Wi-Fi.
I've done this on an Android phone connected to the controller.
The Windows SDK doesn't seem to support virtual sticks, which I find strange; in that case it must be done on Android or iOS and transmitted to a server. I might be wrong, since I've never used the Windows SDK.
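To illustrate that loop, here is a rough Java sketch against the Mobile SDK's virtual stick API. The class and method names follow MSDK 4.x as far as I recall them (FlightController, FlightControlData, setVirtualStickModeEnabled, setStateCallback), so treat them as assumptions and check the current API reference; the yaw rate, thresholds, and logging are placeholders.

```java
import dji.common.error.DJIError;
import dji.common.flightcontroller.FlightControllerState;
import dji.common.flightcontroller.virtualstick.FlightControlData;
import dji.common.flightcontroller.virtualstick.RollPitchControlMode;
import dji.common.flightcontroller.virtualstick.YawControlMode;
import dji.sdk.flightcontroller.FlightController;

// Sketch of the drift-based wind measurement described above. Names follow
// DJI Mobile SDK 4.x as far as I recall; verify against the current docs.
public class WindEstimator {

    private double lastSpeed = 0;

    public void start(final FlightController fc) {
        fc.setRollPitchControlMode(RollPitchControlMode.ANGLE);     // 0 pitch/roll = free drift
        fc.setYawControlMode(YawControlMode.ANGULAR_VELOCITY);      // yaw command in deg/s

        fc.setVirtualStickModeEnabled(true, (DJIError err) -> {
            // Zero pitch/roll plus a slow yaw rotation so any attitude trim
            // error averages out over a full turn. In a real app this command
            // must be re-sent several times per second (e.g. from a timer),
            // or the aircraft drops out of virtual stick mode.
            FlightControlData drift = new FlightControlData(0f, 0f, 15f, 0f);
            fc.sendVirtualStickFlightControlData(drift, null);
        });

        fc.setStateCallback((FlightControllerState state) -> {
            double vx = state.getVelocityX();
            double vy = state.getVelocityY();
            double speed = Math.sqrt(vx * vx + vy * vy);
            double heading = Math.toDegrees(Math.atan2(vy, vx)); // drift direction (wind pushes toward this heading)

            // Once the ground speed stops increasing, it has converged to the wind speed.
            if (speed > 1.0 && speed - lastSpeed < 0.05) {
                System.out.println("Wind ~" + speed + " m/s, drifting toward heading " + heading);
            }
            lastSpeed = speed;
        });
    }
}
```

In practice you would average the speed over a full yaw rotation rather than checking a single threshold, and remember the wind blows *from* the opposite of the drift heading.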

How to design a mission to visit several locations and in each location process some computer vision tasks on the mobile platform?

I need to make my Mavic 2 Pro visit approximately 10 locations at a relatively low altitude of 1.7 m. At each location the camera should look in the right direction and the mission should pause so the mobile application can process some CV tasks. I am not sure what the best approach is for a mission that is partially processed on the mobile platform. What should I use in the DJI Mobile SDK API to pause the mission when a location is reached?
I am going to use a timeline mission composed of a sequence of GotoAction elements. I wonder if this is a good way to do it. Is there a better solution?
Is MissionControl.Listener the right place to interrupt a mission when a TimelineElement finishes, or should I use WaypointReachedTrigger?
I wasn't able to find a suitable example.
Please add a specific programming question. Otherwise, any answer is primarily opinion-based; see https://stackoverflow.com/help/dont-ask for details.
The DJI SDK lets you control the drone's gimbal and GPS navigation through MissionAction elements. GotoAction is a subclass of MissionAction, and it only flies to a GPS location, so you need other actions in the timeline, such as GimbalAttitudeAction and a camera capture action, to perform the camera pointing and capturing.
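As a sketch of how those actions fit together for one location, here is a timeline using the MSDK 4.x classes (GotoAction, GimbalAttitudeAction, ShootPhotoAction, MissionControl.Listener) as I remember them; the coordinates, altitude, gimbal pitch, and the pause-after-photo logic are placeholders, and the class/package names should be checked against the current SDK docs.

```java
import java.util.ArrayList;
import java.util.List;

import dji.common.error.DJIError;
import dji.common.gimbal.Attitude;
import dji.common.gimbal.Rotation;
import dji.common.model.LocationCoordinate2D;
import dji.sdk.mission.MissionControl;
import dji.sdk.mission.timeline.TimelineElement;
import dji.sdk.mission.timeline.TimelineEvent;
import dji.sdk.mission.timeline.actions.GimbalAttitudeAction;
import dji.sdk.mission.timeline.actions.GotoAction;
import dji.sdk.mission.timeline.actions.ShootPhotoAction;

// Sketch of a timeline that visits one location, points the camera, takes a
// photo, then pauses the timeline so the app can run its CV task.
public class InspectionTimeline {

    public void buildAndStart() {
        List<TimelineElement> elements = new ArrayList<>();

        // 1. Fly to the location at 1.7 m altitude (placeholder coordinates).
        elements.add(new GotoAction(new LocationCoordinate2D(49.2100, 16.5990), 1.7f));
        // 2. Point the gimbal (pitch -20 degrees here, roll/yaw untouched).
        Attitude attitude = new Attitude(-20, Rotation.NO_ROTATION, Rotation.NO_ROTATION);
        elements.add(new GimbalAttitudeAction(attitude));
        // 3. Capture a photo for the CV pipeline.
        elements.add(new ShootPhotoAction());

        MissionControl mc = MissionControl.getInstance();
        mc.scheduleElements(elements);

        mc.addListener((TimelineElement element, TimelineEvent event, DJIError error) -> {
            // Pause after the photo so the app can process it, then call
            // mc.resumeTimeline() once the CV task is done.
            if (event == TimelineEvent.ELEMENT_FINISHED && element instanceof ShootPhotoAction) {
                mc.pauseTimeline();
                // runCvTaskThenResume(mc);  // hypothetical application hook
            }
        });

        mc.startTimeline();
    }
}
```

This is also one answer to the pause question above: watch for ELEMENT_FINISHED in MissionControl.Listener, call pauseTimeline(), and resume once the CV task completes.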
For CV tasks, it is easy to link a DJI app to OpenCV, but I strongly recommend against it, because tasks such as detection with a CNN take too many resources. The popular approach is to upload the captured images from a local buffer to a local server with a GPU and process them in near real time. I'm using the Windows SDK with an online OCR service for detection; see the video at https://youtu.be/CcndnHkriyA. I also tried a phone-based approach, but the result was limited by model accuracy, and I could not use a higher-accuracy model because its processing demand is too high.
What you want is pretty easy to implement but hard to perfect. Flying at low altitude (1.7 m) requires some degree of obstacle avoidance and GPS-less path planning. What the Mavic hardware implements is only simple avoidance or slipping through gaps. For anything a bit more complex, such as going around a wall or through a maze-like environment, it is better to add your own global and local path planners. For feedback you can use the SVO method to get odometry and build a local sparse obstacle map for inflated-radius calculation.
A demonstration is in the video https://www.youtube.com/watch?v=2YnIMfw6bJY.
The feedback code is available at https://github.com/uzh-rpg/rpg_svo.
For path planning you can also try ETH's https://github.com/ethz-asl/mav_voxblox_planning.
Good luck with your work.

Multiple Tangos Looking at one location - IR Conflict

I am getting my first Tango in the next day or so; I've worked a little with Occipital's Structure Sensor, which is where my background in depth-perceiving cameras comes from.
Has anyone used multiple Tangos at once (let's say 6-10), looking at the same part of a room, using depth for identification and placement of 3D characters/content? I have been told that multiple devices looking at the same part of a room will confuse each Tango, as they will see the other Tangos' IR dots.
Thanks for your input.
Grisly
I have not tried to use several Tangos, but I have tried to use my Tango in a room where I had a Kinect 2 sensor, which caused the Tango to go bananas. The Tango seems to have a lower-intensity IR projector in comparison, but I would still say it is a reasonable assumption that it will not work.
It might work at certain angles, but I doubt you will be able to find a configuration of that many cameras without any of them interfering with each other. If you do make it work, however, I would be very interested to know how.
You could lower the depth camera rate (defaults to 5/second I believe) to avoid conflicts, but that might not be desirable given what you're using the system for.
Alternatively, only enable the depth camera when placing your 3D models on surfaces, then disable said depth camera when it is not needed. This can also help conserve CPU and battery power.
It did not work. The Occipital Structure Sensor, on the other hand, did work (multiple devices in one place)!

Does the project tango tablet work outdoors?

I'm looking to develop an outdoor application, but I'm not sure if the Tango tablet will work outdoors. Other depth devices tend not to work well outside because they depend on IR light being projected from the device and then observed after it bounces off objects in the scene. I've been looking for information on this, and all I've found is this video - https://www.youtube.com/watch?v=x5C_HNnW_3Q. Based on the video, it appears it can work outside by doing some IR compensation and/or using the depth sensor, but I just wanted to make sure before getting the tablet.
If the sun is out, it will only work in the shade, and darker shade is better. I tested this morning using the Java Point Cloud sample app, and I only get > 10k points in my point cloud in the center of my building's shadow, close to the building. Toward the edge of the shadow the depth point cloud frame rate goes way down and I get the "Few depth points" message. If it's overcast, I'm guessing your results will vary depending on how dark it is; I haven't tested that yet.
The Tango (Yellowstone) tablet also works by projecting IR light patterns, like the other depth-sensing devices you mentioned.
You can expect the pose tracking and area learning to work as well as they do indoors. The depth perception, however, will likely not work well outside in direct sunlight.

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient at cleaning a room. For now, it just goes straight and turns when it hits something.
But I am having trouble finding the best algorithm or method to determine its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary depending on whether the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
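If it helps to see the moving parts, here is a toy one-dimensional particle filter along the lines described above: the "room" is a line, the robot's only observation is a noisy distance to the far wall, and all the noise values are invented for illustration.

```java
import java.util.Random;

// Toy 1-D particle filter: the robot lives on a line of length ROOM, moves a
// commanded distance each step, and measures its distance to the far wall
// with noise. Purely illustrative of the loop described above.
public class ParticleFilter1D {
    static final double ROOM = 10.0;          // room length in metres
    static final int N = 500;                 // number of particles
    static final Random rng = new Random();

    double[] pos = new double[N];             // particle positions
    double[] weight = new double[N];          // particle probabilities

    ParticleFilter1D() {
        for (int i = 0; i < N; i++) { pos[i] = rng.nextDouble() * ROOM; weight[i] = 1.0 / N; }
    }

    // Motion update: move every particle, adding noise for motor uncertainty.
    void predict(double commandedMove) {
        for (int i = 0; i < N; i++) pos[i] += commandedMove + rng.nextGaussian() * 0.05;
    }

    // Measurement update: reweight particles by how well they explain the
    // observed distance to the wall, then resample.
    void update(double measuredDistToWall) {
        double sum = 0;
        for (int i = 0; i < N; i++) {
            double err = measuredDistToWall - (ROOM - pos[i]);
            weight[i] *= Math.exp(-err * err / (2 * 0.2 * 0.2));  // Gaussian sensor model, sigma = 0.2 m
            sum += weight[i];
        }
        for (int i = 0; i < N; i++) weight[i] /= sum;             // normalise
        resample();
    }

    // Systematic resampling: clone high-weight particles, drop low-weight ones.
    void resample() {
        double[] newPos = new double[N];
        double step = 1.0 / N, u = rng.nextDouble() * step, c = weight[0];
        int i = 0;
        for (int j = 0; j < N; j++) {
            while (u > c) { i++; c += weight[i]; }
            newPos[j] = pos[i];
            u += step;
        }
        pos = newPos;
        java.util.Arrays.fill(weight, 1.0 / N);
    }

    // Estimate = weighted mean of the particles.
    double estimate() {
        double e = 0;
        for (int i = 0; i < N; i++) e += pos[i] * weight[i];
        return e;
    }
}
```

The real 2-D version works the same way, except each particle carries (x, y, heading) and the sensor model compares expected and observed landmark bearings or ranges instead of a single wall distance.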
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If two known markers have an angular displacement (left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If the markers are at a known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers; using both will do it with fewer markers.
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to combine multiple noisy measurements into a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large role in your future.
Once you can determine your position, you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles, and then you'll need to devise a way to scan that incorporates the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
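To give a flavour of what the Kalman estimator buys you, here is a one-dimensional sketch that fuses dead reckoning with noisy marker-based position fixes; the process and measurement variances are made-up numbers you would tune on the real robot.

```java
// 1-D Kalman filter sketch: dead reckoning (prediction) corrected by noisy
// marker-based position fixes (measurement). All variances are illustrative.
public class Kalman1D {
    double x = 0.0;     // position estimate (m)
    double p = 1.0;     // estimate variance
    static final double Q = 0.02; // process noise: how much dead reckoning drifts per step
    static final double R = 0.25; // measurement noise: variance of a marker-based fix

    // Dead-reckoning step: wheel encoders say we moved 'dx' metres.
    void predict(double dx) {
        x += dx;
        p += Q;         // uncertainty grows while relying on odometry alone
    }

    // Correction step: a marker sighting gives a noisy absolute position 'z'.
    void correct(double z) {
        double k = p / (p + R);   // Kalman gain: how much to trust the measurement
        x += k * (z - x);
        p *= (1 - k);             // uncertainty shrinks after the fix
    }

    public static void main(String[] args) {
        Kalman1D kf = new Kalman1D();
        kf.predict(0.5);          // drove half a metre by odometry
        kf.correct(0.46);         // marker fix says we are at 0.46 m
        System.out.println("fused position = " + kf.x);
    }
}
```

A full 2-D version tracks (x, y, heading) with a matrix-valued covariance, but the predict/correct structure is the same.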
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your Eee PC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away a landmark is, not just where in the visual field it is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field; it simply assumes that you can. While the Kinect drivers may already include feature detection (I'm not sure), you can also use OpenCV to detect features in the image.
One solution would be to use a strategy similar to "flood fill" (Wikipedia). To get the controller to perform sweeps accurately, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. running a motor for 1 sec = xx change in proximity. With that info, you can move your bot an exact distance and continue sweeping the room using flood fill, as sketched below.
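Here is a rough sketch of that sweep on an occupancy grid whose cell size equals one robot width; `blocked` would come from your bump/proximity map, and `driveTo` stands in for the calibrated "run the motor for N ms per cell" move.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative flood-fill sweep over an occupancy grid whose cell size equals
// one robot width. 'blocked' comes from the bump/proximity map; the visit
// order is the order in which the robot would cover the cells.
public class FloodFillCoverage {

    public static void sweep(boolean[][] blocked, int startRow, int startCol) {
        int rows = blocked.length, cols = blocked[0].length;
        boolean[][] visited = new boolean[rows][cols];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startRow, startCol});

        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            int r = cell[0], c = cell[1];
            if (r < 0 || r >= rows || c < 0 || c >= cols) continue;
            if (visited[r][c] || blocked[r][c]) continue;
            visited[r][c] = true;

            driveTo(r, c);  // hypothetical: move the robot one calibrated cell

            stack.push(new int[]{r + 1, c});
            stack.push(new int[]{r - 1, c});
            stack.push(new int[]{r, c + 1});
            stack.push(new int[]{r, c - 1});
        }
    }

    // Placeholder for the calibrated "run the motor for N ms per cell" move.
    static void driveTo(int row, int col) {
        System.out.println("clean cell (" + row + ", " + col + ")");
    }
}
```

Note that plain flood fill visits cells in an order a physical robot cannot always follow directly (it jumps between branches), so in practice you would insert a short point-to-point move between non-adjacent cells.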
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot leaves the factory there is no information about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that information, and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups in my opinion. At least for a start.
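As a minimal sketch of that approach, dead reckoning with wheel encoders plus the compass looks roughly like this; the metres-per-tick constant is something you would measure on your own robot.

```java
// Minimal dead-reckoning sketch: integrate wheel-encoder distance along the
// compass heading. METERS_PER_TICK must be measured on your own robot.
public class DeadReckoning {
    static final double METERS_PER_TICK = 0.002; // calibration value (assumption)

    double x = 0.0;       // position in the hardcoded room map, metres
    double y = 0.0;

    // Call this every control cycle with the encoder ticks counted since the
    // last call and the current compass heading in degrees (0 = map "north").
    void update(long encoderTicks, double compassHeadingDeg) {
        double d = encoderTicks * METERS_PER_TICK;
        double headingRad = Math.toRadians(compassHeadingDeg);
        x += d * Math.sin(headingRad);
        y += d * Math.cos(headingRad);
    }
}
```

Encoder drift accumulates, so you would still reset the estimate occasionally, e.g. whenever the bot bumps a wall whose position is known from the hardcoded map.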
Ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range - more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino has lots of options for GPS modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can write a routine for the Arduino to move the robot from point to point (as collected above), assuming it also handles all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And in that list I found this, specifically for your case - Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate? Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-centimetre range, and it would be cheap. You can then easily map anything you bump into.
Maybe you could also use any interruption of the beacon beams to plot objects that are quite far from the robot.
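For reference, the triangulation itself is just intersecting two bearing lines. Here is a small self-contained sketch (beacon positions and bearings are placeholders; in practice the bearings would come from the rotating IR 'eye' plus the compass):

```java
// Triangulation sketch: the robot measures absolute (compass-referenced)
// bearings to two beacons at known positions and solves for its own position
// as the intersection of the two bearing lines.
public class BeaconTriangulation {

    // bearing1/bearing2 are the directions (radians, measured from the x-axis)
    // in which the robot sees beacon 1 and beacon 2.
    static double[] locate(double bx1, double by1, double bearing1,
                           double bx2, double by2, double bearing2) {
        double c1 = Math.cos(bearing1), s1 = Math.sin(bearing1);
        double c2 = Math.cos(bearing2), s2 = Math.sin(bearing2);

        // Robot position R satisfies R + t1*(c1,s1) = beacon1 and
        // R + t2*(c2,s2) = beacon2. Eliminate R and solve for t1 (Cramer's rule).
        double dx = bx1 - bx2, dy = by1 - by2;
        double det = s1 * c2 - c1 * s2;       // = sin(bearing1 - bearing2)
        if (Math.abs(det) < 1e-6) {
            return null;                       // beacons are (nearly) in line with the robot
        }
        double t1 = (c2 * dy - s2 * dx) / det;

        return new double[]{ bx1 - t1 * c1, by1 - t1 * s1 };
    }

    public static void main(String[] args) {
        // Beacons at (0,0) and (4,0); robot actually at (2,3).
        double b1 = Math.atan2(0 - 3, 0 - 2);  // bearing from robot to beacon 1
        double b2 = Math.atan2(0 - 3, 4 - 2);  // bearing from robot to beacon 2
        double[] r = locate(0, 0, b1, 4, 0, b2);
        System.out.println("robot at (" + r[0] + ", " + r[1] + ")"); // ~ (2.0, 3.0)
    }
}
```

With three or more beacons you would solve the same system in a least-squares sense, which also smooths out bearing noise.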
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position within the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
Good luck!
Use an ultrasonic sensor such as the HC-SR04 or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with a QR code.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and move again. I think that will help :)
