H2O Sparkling Water AutoML with MLflow

How can the best AutoML model be saved with MLflow? The API has changed and model.leader is no longer available.
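In recent plain-h2o Python releases the best model is still exposed as aml.leader, and MLflow ships an h2o flavor that can log it. Below is a minimal sketch under those assumptions; the dataset path and label column are hypothetical, and if you are on the Sparkling Water H2OAutoML estimator instead, its fit() is believed to return the leading model directly in recent versions (worth verifying against your release).

```python
# Hedged sketch: train H2O AutoML and log the leader model with MLflow.
# "train.csv" and the "label" column are hypothetical placeholders.
import h2o
import mlflow
import mlflow.h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")
predictors = [c for c in train.columns if c != "label"]

aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=predictors, y="label", training_frame=train)

with mlflow.start_run():
    # aml.leader is the best model found by AutoML; mlflow.h2o.log_model
    # stores it as an artifact of the active MLflow run.
    mlflow.h2o.log_model(aml.leader, artifact_path="model")
```

The logged artifact can later be reloaded with mlflow.h2o.load_model and scored against a new H2OFrame.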

Related

Calling built-in AI algorithms to train and predict via REST API

I saw a few references to uploading saved models to Google Cloud and using them via a REST API. Is there any way we can utilize a prebuilt (built-in) Google AI algorithm such as 'wide and deep' in a web application? We are working on a self-service application where users will upload datasets that need to be fed to this algorithm for training. Once it has been trained, we would like to notify the user that the trained model is available and use it via API calls for prediction. We know the saved custom model path and have tried that successfully, but we need to use a built-in algorithm in a similar fashion.
Any pointers are highly appreciated.
Kiran

How to load and save models in Sparkling Water

I want to store a model created within Sparkling Water as a binary file so that I can reload it in a different application.
What is the best way?
The support article is outdated; it was demonstrating something we have since incorporated into our API. That sample code uses water.serial.ObjectTreeBinarySerializer, which no longer exists.
The most convenient way is to use the ModelSerializationSupport.
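ModelSerializationSupport is the Scala-side Sparkling Water helper. As a hedged Python-side alternative (an assumption on my part, not the API named in the answer), the plain h2o module can write the model as a binary file and reload it in another application, as long as both sides run the same H2O version; the dataset path and target column below are hypothetical.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
train = h2o.import_file("train.csv")          # hypothetical dataset
gbm = H2OGradientBoostingEstimator(ntrees=20)
gbm.train(y="label", training_frame=train)    # "label" is a hypothetical target

# Write the model in H2O's binary format; save_model returns the full path.
saved_path = h2o.save_model(gbm, path="/tmp/h2o_models", force=True)

# Later, from a different application connected to an H2O cluster
# of the same version, reload the binary model:
reloaded = h2o.load_model(saved_path)
```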

Algorithms behind the Alchemy API for concept and keywords extraction

I've started using the Alchemy API, but I would like to know whether there is any scientific publication that explains the models used for extracting keywords and concepts from text.
Also, according to this answer (Is there a way to influence AlchemyAPI sentiment analysis?), the models used for the Alchemy API were trained on billions of web pages. My question is: on which type of data were the algorithms trained (only news content, for example)?
Thank you in advance for the answers.
Their website says that AlchemyAPI uses a deep learning model. Other than that tidbit, they keep their secret sauce under wraps.
Now that they've been acquired by IBM, their APIs are part of IBM's Bluemix suite of machine learning services. There is a way to train the model using your own data. The custom model option is not cheap though: $3,500.00 USD/Custom Model Instance per Month, in addition to API request charges.
If you want an open-source alternative, try DatumBox; you can dig in as deep as you'd like.

How to save/export a Spark MLlib model to PMML?

I'd like to train a model using Spark MLlib, but then be able to export the model in a platform-agnostic format. Essentially I want to decouple how models are created and consumed.
My reason for wanting this decoupling is so that I can deploy a model in other projects. E.g.:
Use the model to perform predictions in a separate standalone program which doesn't depend on Spark for the evaluation.
Use the model with existing projects such as OpenScoring and provide APIs which can make use of the model.
Load an existing model back into Spark for high throughput prediction.
Has anyone done something like this with Spark MLlib?
Spark 1.4 now has support for this; see the latest documentation. Not all models are supported yet (see the JIRA issue SPARK-4587).
HTH
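The exporter added in Spark 1.4 is the toPMML method on the supported Scala/Java MLlib models. From Python, a hedged sketch using the JPMML project's pyspark2pmml package is below; that package, the column names, and the file paths are assumptions rather than part of the original answer, and the JPMML-SparkML jar has to be on the Spark classpath (for example via --jars).

```python
# Hedged sketch: export a fitted Spark ML pipeline to PMML with pyspark2pmml.
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession
from pyspark2pmml import PMMLBuilder

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("train.csv", header=True, inferSchema=True)  # hypothetical data

# "f1", "f2" and "label" are hypothetical column names.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
tree = DecisionTreeClassifier(labelCol="label", featuresCol="features")
model = Pipeline(stages=[assembler, tree]).fit(df)

# Serialize the fitted pipeline to a platform-agnostic PMML file that PMML
# consumers such as OpenScoring can evaluate without a Spark dependency.
PMMLBuilder(spark.sparkContext, df, model).buildFile("model.pmml")
```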

ML model deployment through h2o

I have built a machine learning model in R for preventing application fraud in loans, using an ensemble of 5 submodels. I am looking to deploy it, but I am unsure how to use H2O for this. Can anyone explain briefly how to use it?
You can read all about productionizing a model in the H2O User Guide.
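One productionizing route covered there is exporting the trained model as a MOJO and scoring it with the h2o-genmodel library outside of R or Python. A minimal sketch in Python follows (the question used R, which has an equivalent h2o.download_mojo function; the dataset and column names below are hypothetical).

```python
# Hedged sketch: export a trained H2O model as a MOJO for deployment.
# "loans.csv" and the "fraud" column are hypothetical placeholders.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
train = h2o.import_file("loans.csv")
train["fraud"] = train["fraud"].asfactor()    # treat the target as categorical

gbm = H2OGradientBoostingEstimator(ntrees=50)
gbm.train(y="fraud", training_frame=train)

# Download the MOJO plus the h2o-genmodel.jar scoring library; the MOJO can
# then be scored from a JVM application or a REST service wrapping it,
# without a running H2O cluster.
mojo_path = gbm.download_mojo(path="/tmp/models", get_genmodel_jar=True)
```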
