What is the difference between the ROS and OSDK onboard SDKs? - dji-sdk

I am just getting into drone programming and I want to use my Jetson TX2 to control my Matrice 100. I noticed that there were two SDKs that could be installed, ROS and OSDK.
What is the difference between the two?
Which one is recommended to be installed on a Jetson for autonomy?

To clear up some confusion regarding this - we offer a single SDK on Linux, the DJI Onboard SDK (OSDK). For convenience, we offer a ROS wrapper around the SDK that is compliant with ROS standards. Your Jetson TX2 should be able to run either the SDK by itself, or its ROS wrapper based on what your needs are - use ROS if you want to interface with existing ROS packages and use ROS tools, or use the SDK by itself if you need to create efficient and purpose-built applications. All of the sample code documented here and available on GitHub (Onboard-SDK and Onboard-SDK-ROS) is functionally equivalent on both platforms.
Further reading:
DJI OSDK Doc - http://developer.dji.com/onboard-sdk/documentation/introduction/homepage.html
Quick Start - Run a C++/Linux sample application
ROS Wiki - http://wiki.ros.org/dji_sdk
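To give a feel for the ROS wrapper: flight commands are exposed as ordinary ROS services, so a takeoff script can be as short as the rough sketch below. It assumes the 3.x Onboard-SDK-ROS (dji_sdk) service interface; service names, field names, and task IDs may differ between releases, so check them against the ROS wiki page above.

```python
#!/usr/bin/env python
# Rough sketch: obtain control authority and command a takeoff through the
# dji_sdk ROS wrapper. Assumes the 3.x Onboard-SDK-ROS interface; verify the
# service names and task IDs against your installed version.
import rospy
from dji_sdk.srv import DroneTaskControl, SDKControlAuthority

rospy.init_node("osdk_takeoff_demo")

# The onboard computer must request control authority before sending commands.
rospy.wait_for_service("/dji_sdk/sdk_control_authority")
authority = rospy.ServiceProxy("/dji_sdk/sdk_control_authority", SDKControlAuthority)
authority(control_enable=1)

# In the 3.x wrapper, task 4 is takeoff (6 is landing, 1 is go-home).
rospy.wait_for_service("/dji_sdk/drone_task_control")
task_control = rospy.ServiceProxy("/dji_sdk/drone_task_control", DroneTaskControl)
response = task_control(task=4)
rospy.loginfo("Takeoff accepted: %s", response.result)
```

The plain C++ SDK exposes the same functionality directly; the Quick Start linked above walks through building and running one of the Linux samples.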

Related

How can I deploy my PyTorch model to iOS?

I have a deep learning neural network built in PyTorch that I am seeking to deploy to iOS.
Native support still doesn't exist, as far as I know, but a common approach is to export the model to ONNX and then open it in Caffe2, which supports iOS devices (and Android as well).
So use the ONNX export tutorial and this mobile integration helper.
There is also a path converting ONNX to CoreML, but depending on your project it may not be particularly fast.
There is also the option to convert the ONNX model to TensorFlow Lite and integrate it with your Swift or Objective-C app from there.
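For reference, the ONNX export step itself is short. A rough sketch follows; the torchvision ResNet-18 and the 224x224 input are only stand-ins for your own trained network and its expected input shape.

```python
# Minimal sketch of exporting a trained PyTorch model to ONNX.
# Replace the torchvision model with your own network and give the dummy
# input the shape your network actually expects.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # stand-in for your trained model
model.eval()

# The exporter traces the model with a dummy input of the right shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```

The resulting model.onnx file is what the Caffe2 mobile integration, the CoreML converter, or the TensorFlow/TensorFlow Lite route would then consume.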

How to control the Mavic Air drone with the computer

Right now, I'm on a study project for school (I'm French). I have the Mavic Air drone, and I want to control my drone with my computer. DJI Developer has SDKs for different platforms, including a Windows SDK, but it is in beta and doesn't support flight mode.
My idea is to take the OX SDK (the Android version, in Java) and translate it into a Java app for a Windows version. The OX SDK supports drone control commands. Do you think it's a good idea? And could some people help translate this app?
Can you help me find a solution? Do you have some command-line examples to give me?
Thank you all.
It's unclear from your question if you've actually tried to implement the Windows 10 SDK and ran into difficulties, or if you saw something which stated flight is not supported by the SDK. According to the SDK documentation (https://developer.dji.com/windows-sdk/), high- and low-level flight control are supposed to be supported. For example, the ComponentManager.FlightControllerHandler has methods such as StartTakeoffAsync, StartGoHomeAsync, etc. Joystick control is available via the VirtualRemoteController.UpdateJoystickValue method. So far, I have only used these while my Mavic Air is in simulation mode (without propellers on!) and haven't encountered any issues. But I haven't seen any documentation that states the beta SDK doesn't support actual flight either. Before launching into a conversion effort (does DJI even provide the source? I haven't checked...), I'd stick with the Win10 SDK.

GUI Development for a Robot Compatible with ROS and Windows

For a project I have been assigned, I have been given two robots: one runs ROS and the other basically uses Windows. My task is to develop one graphical user interface that can be used for both robots.
From the GUI, a user should be able to:
- Connect to the Robot
- Move and control the robot.
- Change speed, etc.
I would like to ask for advice as I am about to start this project.
How can I go about this, and which framework has better support for my requirements?
From my research I have read that people recommend Qt for cross-platform development. Are there any other alternatives? Any book recommendations?
The goal is to have a GUI that is compatible with both systems. Any recommendations or help are welcome.
First, set up ROS on Windows using WSL (there are other ways to do it, but WSL is the most stable).
After that, make sure you can achieve everything you want the GUI to do on the robot from a ROS terminal.
Then write the GUI. You can choose any framework you want (you need C++ or Python for compatibility with ROS), but the Qt framework is the most widely used for multi-platform applications and has a lot of support.
Compatibility with the non-ROS robot is something you have to implement in your application yourself, for example by letting the user choose which robot to connect to.
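As a concrete starting point under those assumptions (ROS 1 with a robot that listens for geometry_msgs/Twist on the conventional /cmd_vel topic, and PyQt5 standing in for whichever Qt binding you pick), a minimal control window might look like this rough sketch:

```python
# Rough sketch: a PyQt5 window whose buttons publish velocity commands to a
# ROS 1 robot. The /cmd_vel topic and the velocity values are assumptions;
# adjust them for your robot.
import sys

import rospy
from geometry_msgs.msg import Twist
from PyQt5.QtWidgets import QApplication, QPushButton, QVBoxLayout, QWidget


class TeleopWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        layout = QVBoxLayout(self)

        forward = QPushButton("Forward")
        stop = QPushButton("Stop")
        forward.clicked.connect(lambda: self.send(linear_x=0.2))
        stop.clicked.connect(lambda: self.send(linear_x=0.0))
        layout.addWidget(forward)
        layout.addWidget(stop)

    def send(self, linear_x=0.0, angular_z=0.0):
        """Publish a single velocity command."""
        msg = Twist()
        msg.linear.x = linear_x
        msg.angular.z = angular_z
        self.pub.publish(msg)


if __name__ == "__main__":
    rospy.init_node("gui_teleop", anonymous=True)
    app = QApplication(sys.argv)
    window = TeleopWindow()
    window.show()
    sys.exit(app.exec_())
```

The Windows-only robot would then get its own implementation of send() behind the same buttons, selected at connection time.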
PySimpleGUI is a framework built on top of tkinter that runs on the Pi. There are some example programs written to do robot remote control. There are GUI buttons designed specifically for "realtime" control of hardware that will provide immediate and constant feedback when a button is held.
It runs on Python 2.7 and 3 (recommend 3).
There is a Recipe in the Cookbook that matches your problem located here.
If you use PySimpleGUI in your project, post in the Issues area on GitHub if you have any questions and you'll get support.
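For illustration, here is a minimal PySimpleGUI panel in that style; the send_command function is a hypothetical placeholder, so wire it to whatever actually drives your robot (a ROS publisher on one robot, the Windows-side interface on the other).

```python
# Rough sketch of a PySimpleGUI remote-control panel using RealtimeButtons,
# which report their event repeatedly while held down.
import PySimpleGUI as sg


def send_command(command):
    # Hypothetical placeholder: replace with your robot's control interface.
    print("sending:", command)


layout = [
    [sg.RealtimeButton("Forward")],
    [sg.RealtimeButton("Left"), sg.RealtimeButton("Stop"), sg.RealtimeButton("Right")],
    [sg.RealtimeButton("Back")],
    [sg.Exit()],
]

window = sg.Window("Robot Remote", layout)

while True:
    # A short timeout keeps the loop polling so held buttons repeat.
    event, values = window.read(timeout=100)
    if event in (sg.WIN_CLOSED, "Exit"):
        break
    if event != sg.TIMEOUT_EVENT:
        send_command(event)

window.close()
```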

Windows 10 IoT Core + Raspberry Pi + camera sensor?

Does Windows 10 IoT Core support the Raspberry Pi camera sensor? If so, which C# libraries are there to code the camera module?
Windows IoT officially supports several types of USB cameras; you can find the complete list at https://developer.microsoft.com/en-us/windows/iot/docs/hardwarecompatlist#Cameras.
If you're developing under the UWP framework, which has built-in support for various cameras, follow the tutorial at https://msdn.microsoft.com/en-us/windows/uwp/audio-video-camera/camera.
Microsoft also provides sample projects for camera development; you can find them at https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/CameraStarterKit.
I hope it helps.
tl;dr
No, Windows 10 IoT Core does NOT support the CSI module (Camera Serial Interface) of a Raspberry Pi.
@Jackie already posted a link with the supported cameras. In my experience, others may work, but it is not guaranteed.

How can I use Caffe or TensorFlow in a Windows Universal App?

I have trained several Caffe networks to do basic image classification tasks using NVIDIA DIGITS. I am looking for a way to use the library functions and models in a Windows Universal App, or to convert my model to a TensorFlow model and use the mobile-friendly options available there.
Evidently it is possible to use TensorFlow models in iOS and Android apps, but is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
There is no direct way in a UWP app; these libraries have not added support for the Windows Runtime platform.
A possible way is to create a web service that exposes several methods and, inside these methods, use the machine-learning libraries to implement what you want.
In your UWP app, just use HttpClient to consume the web service.
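A rough sketch of that web-service idea, using Flask with a TensorFlow/Keras model purely as an illustration; the model.h5 file, the 224x224 input size, and the /classify route are placeholders for your own model and API.

```python
# Rough sketch: a small Flask service that classifies an image posted as raw
# bytes. The model file, input size, and preprocessing are assumptions; adapt
# them to your own Caffe or TensorFlow model.
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("model.h5")  # hypothetical model file


@app.route("/classify", methods=["POST"])
def classify():
    # The UWP app POSTs the raw image bytes in the request body.
    image = Image.open(io.BytesIO(request.data)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
    scores = model.predict(batch)[0]
    return jsonify({"class_index": int(np.argmax(scores)),
                    "confidence": float(np.max(scores))})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The UWP app would then send the image bytes with HttpClient.PostAsync and parse the JSON response.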
You may compile TensorFlow (difficult) or TensorFlow Lite (easy) as a static Universal Windows library and link it with your UWP app.
