I have created a model in RapidMiner. It is a classification model, and I saved the model in PMML. I want to use this model in H2O.ai to predict further. Is there any way I can import this PMML model into H2O.ai and use it for further prediction?
I appreciate your suggestions.
Thanks
H2O offers no support for importing/exporting(*) PMML models.
It is hard to offer a good suggestion without knowing your motivation for wanting to use both RapidMiner and H2O. I've not used RapidMiner in about 6 or 7 years, and I know H2O well, so my first choice would just be to re-build the model in H2O.
If you are doing a lot of pre-processing steps in RapidMiner, and that is why you want to use it, you could still do all that data munging there, then export the prepared data to CSV, import it into H2O, and build the model there.
*: Though I did just find this tool for converting H2O models to PMML: https://github.com/jpmml/jpmml-h2o. But that is the opposite direction from what you want.
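If you go the CSV route described above, a minimal sketch of the H2O side could look like the following. This is only an illustration, not part of the original answer: the file name prepared_data.csv, the target column label, and the choice of GBM as the algorithm are all hypothetical.

import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

# Import the data that was prepared/munged in RapidMiner and exported to CSV
frame = h2o.import_file("prepared_data.csv")
frame["label"] = frame["label"].asfactor()  # treat the target as categorical

# Re-build the classifier in H2O (GBM is just an example; any H2O classifier works)
model = H2OGradientBoostingEstimator()
model.train(y="label", training_frame=frame)

# Score data with the new H2O model
predictions = model.predict(frame)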
I want to build a text2text model. Specifically, I want to transform some automatically generated, scrambled text pieces into a smooth paragraph in the same language. I've already prepared the text inputs and outputs, so the corpus is not the primary problem now.
I want to use Hugging Face models like:
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the Chinese BERT tokenizer and its masked-language-modeling head
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
because it has already obtained the capacity to generate the language. The model is made for masked language modeling, and there is no mature, ready-made task exactly like mine, as it is really customized. So how could I use the Hugging Face masked language model as a base text2text model without jeopardizing its capacity? I want to fine-tune it to achieve that task/goal. I want to know how.
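One possible starting point, offered only as a hedged sketch and not taken from this thread, is Hugging Face's EncoderDecoderModel, which can warm-start both the encoder and the decoder of a seq2seq model from a masked-LM checkpoint such as bert-base-chinese. Everything beyond the checkpoint name, including the token-id settings and the fine-tuning note, is illustrative.

from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

# Warm-start both encoder and decoder from the pretrained masked-LM weights,
# so the model keeps its Chinese language knowledge while learning the new task
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-chinese", "bert-base-chinese")

# Generation needs to know how target sequences start, pad, and end
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# Fine-tune on (scrambled text, smooth paragraph) pairs, e.g. with
# transformers' Seq2SeqTrainer, then rewrite new inputs with model.generate(...)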
I want to continue training the model.zip file with more images, without retraining from the baseline model from scratch. How do I do that?
This isn't possible at the moment. ML.NET's ImageClassificationTrainer already uses a pre-trained model, so you're using transfer learning to create your model. Any additions would have to be trained "from scratch" on top of the pre-trained model.
Also, looking at the existing trainers that can be re-trained, the ImageClassificationTrainer isn't listed among them.
Have you guys already worked with serialized models in Sparkling Water, or exported models, like in Spark, to put them in production? How can I do that?
Thanks in advance.
Flavio
There are several ways you can export a model.
You can export a binary version (serialization) of the model, a POJO (Plain Old Java Object), or a MOJO (Model Object, Optimized).
You can read more about POJOs and MOJOs here. These two are used for putting models into production for predictions. The good thing about them is that they don't need the H2O framework to be running.
The methods you can use for exporting are available here: https://github.com/h2oai/sparkling-water/blob/master/core/src/main/scala/water/support/ModelSerializationSupport.scala
exportH2OModel exports the binary model and exportPOJOModel exports the POJO.
The method for exporting a MOJO is accidentally missing and will be added in the next Sparkling Water release.
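For reference, a rough sketch of the same exports using the plain H2O Python API rather than the Scala ModelSerializationSupport helpers mentioned above; the trained model variable and the /tmp/models path are hypothetical.

import h2o

# Assuming `model` is an already-trained H2O model
# Binary export: reloadable with h2o.load_model, but needs a running H2O cluster to score
binary_path = h2o.save_model(model, path="/tmp/models", force=True)

# POJO export: plain Java source that can score without the H2O framework running
h2o.download_pojo(model, path="/tmp/models")

# MOJO export: compact scoring artifact, also independent of a running cluster
mojo_path = model.download_mojo(path="/tmp/models")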
I made a sentiment analysis model using Stanford CoreNLP's library. So I have a bunch of ser.gz files that look like the following:
I was wondering which model to use in my Java code, but based on a previous question, I just used the model with the highest F1 score, which in this case is model-0014-93.73.ser.gz. In my Java code, I pointed to the model I want to use with the following line:
props.put("sentiment.model", "/path/to/model-0014-93.73.ser.gz.");
However, by referring to just that model, am I excluding the sentiment analysis from the other models that were made? Should I be referring to all the model files to make sure I've "covered all the bases," or does the highest-scoring model trump everything else?
You should point to only the single highest scoring model. The code has no way to make use of multiple models at the same time.
I'm working on a project about the live upgrade of HA applications in SA Forum middleware. As part of my research, I need to make a UML profile for my input upgrade campaign file and validate that file against some dependency constraints. Now I want to use Alloy instead of UML in my work, especially since it's more abstract and formal than UML (of course, UML + OCL would also be formal). So my question is: if UML + OCL is formal, what's the benefit of using Alloy?
In general, what are the benefits of using Alloy over UML?
As far as I know, there are no tools that let you check your OCL constraints against the UML model and then generate and visualize valid instances, so if you are planning to do formal analysis of your models and specifications, Alloy might be a better choice. Even if you're not planning to do much analysis, Alloy's ability to generate and visualize valid instances is greatly helpful in making sure you got your model and specification right.