Classifying images with Vision and Core ML on macOS

I have trained IBM Watson to recognize objects of interest. Since remote execution isn't a requirement, I want to export to .mlmodel with the tool provided and run it on macOS.
Unfortunately, learning Swift and macOS development isn't in scope either. Is it possible to invoke Vision directly from the command line or from a scripting language? Alternatively, does anybody know of a skeleton macOS app that runs Vision over a list of files and produces classification scores in tabular form? Thanks.

The code mentioned in this article uses a downloaded Core ML model in an iOS app through the Watson SDK.
Additionally, here's a code sample that uses Watson Visual Recognition and Core ML to classify images. The workspace has two projects:
Core ML Vision Simple: classify images locally with Visual Recognition.
Core ML Vision Custom: train a custom Visual Recognition model for more specialized classification.
Refer to the code and instructions here.
Also, there’s a starter kit that comes with Watson Visual Recognition preconfigured with Core ML - https://console.bluemix.net/developer/appledevelopment/starter-kits/custom-vision-model-for-core-ml-with-watson

You can also load the .mlmodel file into Python and use the coremltools package to make predictions. I wouldn't use that in a production environment, but it's fine for getting something basic up and running.
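A minimal sketch of that approach, including the tabular output the question asks for. The input name `image`, the probability output name `classLabelProbs`, and the 224x224 input size are assumptions; check `model.get_spec()` for your model's actual names and shape, and note that coremltools can only run predictions on macOS.

```python
# Sketch: classify a list of image files with an exported .mlmodel and print
# tab-separated scores. Input/output feature names are assumptions; inspect
# model.get_spec() for the real ones.
import sys


def format_rows(scores_by_file):
    """Turn {filename: {label: prob}} into tab-separated lines, best score first."""
    rows = []
    for path, scores in scores_by_file.items():
        for label, prob in sorted(scores.items(), key=lambda kv: -kv[1]):
            rows.append(f"{path}\t{label}\t{prob:.4f}")
    return rows


def classify(model_path, image_paths, input_name="image", proba_key="classLabelProbs"):
    # Imported lazily so format_rows stays usable without Core ML installed.
    import coremltools as ct
    from PIL import Image

    model = ct.models.MLModel(model_path)  # prediction only works on macOS
    results = {}
    for path in image_paths:
        img = Image.open(path).resize((224, 224))  # input size depends on the model
        results[path] = model.predict({input_name: img})[proba_key]
    return results


if __name__ == "__main__" and len(sys.argv) > 2:
    # Usage: python classify.py Model.mlmodel img1.jpg img2.jpg ...
    for line in format_rows(classify(sys.argv[1], sys.argv[2:])):
        print(line)
```

Piping the output to a file gives you a TSV you can open in any spreadsheet.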

Related

How can I deploy my PyTorch model to iOS?

I have a deep learning neural network I built in PyTorch that I am seeking to deploy to iOS.
I don't think native support exists yet, but a common approach is to export the model to ONNX and then open it in Caffe2, which has support for iOS devices (and Android too).
So use the ONNX export tutorial and this mobile integration helper.
There is also a path converting ONNX to Core ML, but depending on your project it may not be especially fast.
There is also the option of converting the ONNX model to TensorFlow Lite and integrating it with your Swift or Objective-C app from there.

How do I use Firebase ML Kit with Xamarin.Forms?

I am trying to make an app in Xamarin.Forms that needs to be able to detect text from images, and I decided to use Firebase ML Kit. How do I use ML Kit with Xamarin.Forms, not just Xamarin.Android? If I can't, is there an alternative I can use with Xamarin.iOS?
I can't see any Firebase ML Kit package for Xamarin.Forms. There are only packages for Xamarin.Android (Xamarin.Firebase.ML.Vision) and Xamarin.iOS (Xamarin.Firebase.iOS.MLKit).
I think you should use alternatives such as Microsoft Cognitive Services' Computer Vision or the Tesseract package. I had a chance to implement both, and the Azure service's recognition is much better than Tesseract's. On the other hand, Tesseract has an advantage: it works offline and is faster.
There are two ways to use Microsoft Cognitive Services: through their packages or through the REST service; the results are similar. Tesseract works offline, so you should use its package.

How can I use Caffe or TensorFlow in a Windows Universal App?

I have trained several Caffe networks to do basic image classification tasks using NVIDIA DIGITS. I am looking for a way to use the library functions and models in a Windows Universal App, or to convert my model to a TensorFlow model and use the mobile-friendly options available there.
Evidently it is possible to use TensorFlow models in iOS and Android apps, but is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
is there a way of using the Caffe or TensorFlow libraries (or models) in a Windows Universal App?
There is no direct way in a UWP app; these libraries have not added support for the Windows Runtime platform.
A possible workaround is to create a web service that exposes several methods which use the machine learning libraries to implement what you want.
In your UWP app, just use HttpClient to consume the web service.
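A minimal sketch of such a web service in Python with Flask. The endpoint name and JSON shape are assumptions, and `run_model` is a hypothetical placeholder for your actual Caffe or TensorFlow inference code.

```python
# Sketch: a tiny HTTP classification service a UWP app could call with
# HttpClient. Replace run_model with real Caffe/TensorFlow inference.
from flask import Flask, jsonify, request

app = Flask(__name__)


def run_model(image_bytes):
    # Placeholder: run your model on the uploaded bytes and return label scores.
    return {"label": "unknown", "score": 0.0}


@app.route("/classify", methods=["POST"])
def classify():
    # The UWP client POSTs the raw image bytes in the request body.
    return jsonify(run_model(request.get_data()))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

On the UWP side, an `HttpClient.PostAsync` call to `http://host:5000/classify` with the image bytes as content would receive the JSON result.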
You may compile TensorFlow (difficult) or TensorFlow Lite (easy) as a static Universal Windows library and link it against your UWP app.

DirectX sample code (C++)

I'm looking for source code (as I bet a lot of others are, were, and will be) for learning DirectX. I would like something similar to the VS 2013 Graphics Editor when dealing with *.fbx files, etc. Everything I find is either old and outdated, or way too simple and doesn't show basics like the transformation cursor or picking objects or points on objects. I'm just looking for something basic.
Many thanks in advance.
The DirectX Tool Kit is a good place to start and includes some tutorial content as well. It supports loading models using the VS 2013 content pipeline that produces CMOs from FBX files.
You didn't state if you were looking to write a Windows desktop application (aka a Win32 application) or if you were looking to write for Windows Store / Windows phone. DirectX Tool Kit supports either, although the tutorial is written using a Windows desktop application template so that developers using Windows 7 could also utilize it.
You should also refer to the DirectX SDK Samples Catalog for the locations of updated versions of the legacy DirectX SDK samples that build fine using VS 2013.

Easiest image processing on Mac

I am looking for an easy way to process images as an app on Mac – e.g. tracking moving objects, finding objects/faces etc.
This was inspired by a recent SO post: How to detect a Christmas Tree?
What is the best language for me to code this in, and how would I do it? I don't have any money to spend on software. I am also a complete beginner to image processing!
Thanks,
Fjpackard.
I would suggest getting OpenCV and connecting it to Xcode. Numerous resources can be found on the web or in book stores.
See for example:
OpenCV MacOS installation
OpenCV and iOS
OpenCV and Xcode
I think the IPython notebook and one of the open-source Python distributions are two tools to look at. Also, there are some talks from PyData on Vimeo on how to do image processing with Python-based tools.
One of the Python modules you can look at is called scikit-image.
The advantage of the IPython notebook is that speakers often post their talks, so if they share their notebooks you can download them and follow along. I did that with one of the image processing talks from PyData, and most of the code and images worked.
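As a first scikit-image experiment, here is a minimal sketch that runs Sobel edge detection on a synthetic image; swap the synthetic square for `skimage.io.imread("photo.png")` to work on a real file.

```python
# Sketch: edge detection with scikit-image on a synthetic test image.
import numpy as np
from skimage import filters


def edge_map(image):
    """Return per-pixel edge strength using a Sobel filter."""
    return filters.sobel(image)


# A 64x64 black image with a bright square in the middle; the square's
# border is the only place with edges.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_map(img)
```

From here, scikit-image's `feature` and `segmentation` modules cover blob detection, corner detection, and object segmentation, which maps well onto the object/face-finding goals above.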
