I have a deep learning neural network that I built in PyTorch and I am looking to deploy it on iOS.
As far as I know there is still no native support, but what some people do is export the model to ONNX and then load it in Caffe2, which has support for iOS devices (and Android as well).
So follow the ONNX export tutorial and this mobile integration helper.
There is also a path that converts the ONNX model to Core ML, but depending on your project it may not be particularly fast.
Another option is to convert the ONNX model to TensorFlow Lite and integrate it with your Swift or Objective-C app from there.
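Whichever path you take, the first step is the ONNX export itself, which is a single call to torch.onnx.export. A minimal sketch (the ResNet-18 stand-in and the 1x3x224x224 input shape are assumptions; substitute your own network and input size):

```python
# Minimal sketch of exporting a PyTorch model to ONNX.
# The ResNet-18 stand-in and the 3x224x224 input shape are assumptions;
# replace them with your own network and input size.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # stand-in for your own network
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input with the shape your model expects
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```

The resulting model.onnx file is what you then feed into Caffe2, the Core ML converter, or the TensorFlow Lite toolchain.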
Related
I am trying to port a desktop app to Android Automotive OS (AAOS). I am using OpenCV DNN for object tracking and OpenGL to render the content. The rendered output (two full HD streams) must be displayed full screen on two monitors. I also have to send some data over serial communication. I don't have any experience with AAOS, so I can't decide whether this app is doable on AAOS. If you have any experience with AAOS, could you give me some feedback on this project? AAOS runs on a Snapdragon SA8155.
Dev board link:
https://www.lantronix.com/products/sa8155p-automotive-development-platform/#tab-features
Android Automotive supports multiple screens, and this platform in particular provides multiple video outputs.
You should check whether the mentioned features are supported by the Android distribution that is provided. Most likely the distribution is supplied by Qualcomm, in which case you will need access to Qualcomm's documentation.
I am trying to make an app in Xamarin.Forms that needs to be able to detect text in images, and I decided to use Firebase ML Kit. How do I use ML Kit with Xamarin.Forms, not just Xamarin.Android? If I can't, is there an alternative I can use with Xamarin.iOS?
I can't see any Firebase ML Kit package for Xamarin.Forms. There are only packages for the platform projects: Xamarin.Firebase.ML.Vision for Xamarin.Android and Xamarin.Firebase.iOS.MLKit for Xamarin.iOS.
I think you should use an alternative such as Microsoft Cognitive Services' Computer Vision or the Tesseract package. I had a chance to implement both, and the Azure service's recognition is much better than Tesseract's. On the other hand, Tesseract has the advantage that it works offline and is faster.
There are two ways to integrate Microsoft Cognitive Services: using their packages or calling the REST service; the results are similar, as sketched below. Tesseract is offline, so you have to use its package.
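To give a feel for the REST route, which works from any platform project because it is just an HTTP POST, here is a minimal Python sketch; the region, the v2.0 OCR path, and the file name are assumptions you would replace with your own resource details, and in the app itself you would issue the same request with HttpClient:

```python
# Sketch of calling the Computer Vision OCR REST endpoint. The same request can be
# made with HttpClient in a Xamarin.Forms app; region, API version and key below
# are assumptions to be replaced with your own Azure resource details.
import requests

SUBSCRIPTION_KEY = "<your-key>"
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/vision/v2.0/ocr"

with open("receipt.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    ENDPOINT,
    params={"language": "unk", "detectOrientation": "true"},
    headers={
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    },
    data=image_bytes,
)
response.raise_for_status()

# The OCR result is grouped into regions -> lines -> words.
for region in response.json().get("regions", []):
    for line in region["lines"]:
        print(" ".join(word["text"] for word in line["words"]))
```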
I have trained IBM Watson to recognize objects of interest. Since remote execution isn't a requirement, I want to export the model to .mlmodel with the tool provided and run it on macOS.
Unfortunately, learning Swift and macOS development isn't in the plan either. Is it possible to invoke Vision directly from the command line or from a scripting language? Alternatively, does anybody know of a skeleton macOS app that runs Vision over a list of files and produces the classification scores in tabular form? Thanks.
The code mentioned in this article uses a downloaded Core ML model in an iOS app through the Watson SDK.
Additionally, here's a code sample that uses Watson Visual Recognition and Core ML to classify images. The workspace has two projects:
Core ML Vision Simple: Classify images locally with Visual Recognition.
Core ML Vision Custom: Train a custom Visual Recognition model for more specialized classification.
Refer to the code and instructions here.
Also, there’s a starter kit that comes with Watson Visual Recognition preconfigured with Core ML - https://console.bluemix.net/developer/appledevelopment/starter-kits/custom-vision-model-for-core-ml-with-watson
You can also load the .mlmodel into Python and use the coremltools package to make predictions. I wouldn't use that in a production environment, but it's fine for getting something basic up and running.
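A minimal sketch of that route, which also covers the "run it from a scripting language over a list of files" use case; the model file name, the "image" input name and the 224x224 size are assumptions, so check them against your exported model (e.g. with model.get_spec()):

```python
# Sketch of scoring an exported .mlmodel from Python with coremltools
# (prediction requires macOS). The file name, "image" input name and 224x224
# size are assumptions; check them against your exported model.
import sys
import coremltools
from PIL import Image

model = coremltools.models.MLModel("WatsonClassifier.mlmodel")

for path in sys.argv[1:]:
    image = Image.open(path).resize((224, 224))  # match the model's expected input size
    prediction = model.predict({"image": image})
    # For a classifier, the output dict typically holds a top label plus a
    # {label: probability} dictionary; print one row per file.
    print(path, prediction)
```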
I am just getting into drone programming and I want to use my Jetson TX2 to control my Matrice 100. I noticed that there are two SDKs that can be installed, ROS and OSDK.
What is the difference between the two?
Which one is recommended to be installed on a Jetson for autonomy?
To clear up some confusion regarding this: we offer a single SDK on Linux, the DJI Onboard SDK (OSDK). For convenience, we also offer a ROS wrapper around the SDK that is compliant with ROS standards. Your Jetson TX2 should be able to run either the SDK by itself or its ROS wrapper, depending on your needs: use ROS if you want to interface with existing ROS packages and use ROS tools, or use the SDK by itself if you need to create efficient, purpose-built applications. All of the sample code documented here and available on GitHub (Onboard-SDK and Onboard-SDK-ROS) is functionally equivalent on both platforms.
Further reading:
DJI OSDK Doc - http://developer.dji.com/onboard-sdk/documentation/introduction/homepage.html
Quick Start - Run a C++/Linux sample application
ROS Wiki - http://wiki.ros.org/dji_sdk
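To give a sense of what the ROS route looks like on the TX2, a subscriber node is only a few lines of rospy. This is a sketch, not official sample code: the /dji_sdk/attitude topic name and its QuaternionStamped message type follow the dji_sdk ROS wiki and should be verified against the wrapper version you install.

```python
# Minimal rospy sketch of listening to the dji_sdk ROS wrapper.
# The /dji_sdk/attitude topic and QuaternionStamped type are taken from the
# dji_sdk wiki; verify them against your version of Onboard-SDK-ROS.
import rospy
from geometry_msgs.msg import QuaternionStamped

def on_attitude(msg):
    q = msg.quaternion
    rospy.loginfo("attitude quaternion: %.3f %.3f %.3f %.3f", q.x, q.y, q.z, q.w)

if __name__ == "__main__":
    rospy.init_node("m100_listener")
    rospy.Subscriber("/dji_sdk/attitude", QuaternionStamped, on_attitude)
    rospy.spin()
```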
I have trained several Caffe networks to do basic image classification tasks using NVIDIA DIGITS. I am looking for a way to use the library functions and models in a Windows Universal App, or to convert my model to a TensorFlow model and use the mobile-friendly options available there.
Evidently it is possible to use TensorFlow models in iOS and Android apps, but is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
There is no direct way in a UWP app; support for the Windows Runtime platform has not been added.
A possible approach is to create a web service that exposes several methods which use the machine-learning libraries to implement what you want, as sketched below.
In your UWP app, just use HttpClient to consume the web service.
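As an illustration of that approach (a hypothetical sketch: Flask, the /classify route and the predict() stub are my own choices, not prescribed by the answer), the service can be as small as one endpoint that receives the image bytes and returns the classification as JSON; the UWP app POSTs the image to it with HttpClient and parses the response.

```python
# Hypothetical sketch of the web-service approach: wrap the model behind an HTTP
# endpoint that the UWP app can call with HttpClient. Flask and the predict()
# stub are illustrative choices, not part of the original answer.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(image_bytes):
    # Load your Caffe/TensorFlow model here and run inference on image_bytes.
    # Returning a placeholder so the sketch stays self-contained.
    return {"label": "placeholder", "score": 0.0}

@app.route("/classify", methods=["POST"])
def classify():
    return jsonify(predict(request.get_data()))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```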
Alternatively, you may compile TensorFlow (difficult) or TensorFlow Lite (easy) as a static Universal Windows library and link it against your UWP app.