Saving gensim LDA model to ONNX - gensim

Is there a way to save a gensim LDA model to ONNX format? We need to be able to train using Python/gensim and then operationalize it into an ONNX model to publish and use.

Currently (March 2020, gensim-3.8.1) I'm not aware of any built-in support for ONNX formats in gensim.
Provided the ONNX format can represent LDA models well – and here's an indication that it does – it would be a plausible new feature.
You could file a feature request at the gensim issue tracker, but for the feature to be added, it would likely require a contribution from a skilled developer who needs the feature and can write the code and test cases.
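Until such a feature exists, one possible workaround (a sketch, not an official gensim or ONNX path): an LDA model's learned state is essentially a topic-term probability matrix, which gensim exposes via `model.get_topics()`. You could export that matrix to a neutral format such as NumPy's `.npy` and reimplement scoring in the target runtime. The matrix below is made up for illustration; in practice it would come from a trained `LdaModel`.

```python
import numpy as np

# Hypothetical stand-in for lda_model.get_topics(): shape (num_topics, vocab_size),
# each row a probability distribution over the vocabulary.
topic_term = np.array([
    [0.6, 0.3, 0.1],
    [0.1, 0.2, 0.7],
])

# Export to a framework-neutral file that any runtime with NumPy can read.
np.save("lda_topic_term.npy", topic_term)

# Reload (e.g. in the serving environment) and sanity-check that each
# topic's term probabilities still sum to 1.
restored = np.load("lda_topic_term.npy")
assert np.allclose(restored.sum(axis=1), 1.0)
```

This only carries the topic-term weights, not gensim's full inference machinery, so it's a stopgap rather than a substitute for real ONNX export.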

Related

Feature extraction using pre-trained model

I'm using a pre-trained model for feature extraction from CT images for COVID, then using a classifier. I need to know which features are extracted when the pre-trained model is used here.

Where does huggingface's transformers library look for models?

I'm trying to make huggingface's transformer library use a model that I have downloaded that is not in the huggingface model repository.
Where does transformers look for models? Is there an equivalent of the $PATH environment variable for transformers models?
Research
This Hugging Face issue talks about manually downloading models.
This issue suggests that you can work around the question of where huggingface looks for models by passing a path as the argument to from_pretrained (`model = BertModel.from_pretrained('path/to/your/directory')`)
Related questions
Where does hugging face's transformers save models?
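As a rough illustration of the default lookup, here is a simplified sketch of how a transformers-style cache directory is resolved. The `HF_HOME` environment variable override is real in recent versions of the Hugging Face libraries, but the actual resolution logic has more layers (e.g. additional variables and a `hub` subdirectory), so treat this as an approximation:

```python
import os
from pathlib import Path

def hf_cache_dir() -> Path:
    """Simplified sketch: honor the HF_HOME override, else fall back to
    the conventional default of ~/.cache/huggingface."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home)
    return Path.home() / ".cache" / "huggingface"

# Pointing HF_HOME somewhere else redirects where models are cached.
os.environ["HF_HOME"] = "/tmp/my_hf_cache"
print(hf_cache_dir())  # /tmp/my_hf_cache
```

If you just want to bypass the cache entirely, passing a local directory path to `from_pretrained` (as in the linked issue) is the simpler route.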

ML.NET doesn't support resuming training for ImageClassificationTrainer

I want to continue training the model.zip file with more images, without retraining from the baseline model from scratch. How do I do that?
This isn't possible at the moment. ML.NET's ImageClassificationTrainer already uses a pre-trained model, so you're using transfer learning to create your model. Any additions would have to be "from scratch" on top of the pre-trained model.
Also, the ImageClassificationTrainer isn't listed among the existing trainers that support re-training.

how use rapidminer model in H2O.ai

I have created a model in RapidMiner. It is a classification model, and I saved it as PMML. I want to use this model in H2O.ai for further prediction. Is there any way I can import this PMML model into H2O.ai and use it for prediction?
I appreciate your suggestions.
Thanks
H2O offers no support for importing/exporting(*) PMML models.
It is hard to offer a good suggestion without knowing your motivation for wanting to use both RapidMiner and H2O. I've not used RapidMiner in about 6 or 7 years, and I know H2O well, so my first choice would just be to re-build the model in H2O.
If you are doing a lot of pre-processing steps in RapidMiner, and that is why you want to use it, you could still do all that data munging there, then export the prepared data to csv, import that into H2O, then build the model.
*: Though I did just find this tool for converting H2O models to PMML: https://github.com/jpmml/jpmml-h2o But that is the opposite direction from what you want.
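The CSV handoff suggested above can be sketched minimally as follows. The column names and values are made up for illustration; the commented H2O calls assume the `h2o` Python package is installed and are shown only to indicate the import step:

```python
import csv

# Hypothetical prepared/munged data as it might leave RapidMiner.
rows = [
    {"age": 34, "income": 52000, "label": "yes"},
    {"age": 41, "income": 31000, "label": "no"},
]

# Export to CSV, a format H2O imports directly.
with open("prepared.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["age", "income", "label"])
    writer.writeheader()
    writer.writerows(rows)

# Then, on the H2O side (assuming the h2o package is available):
# import h2o
# h2o.init()
# frame = h2o.import_file("prepared.csv")
# ...build the model in H2O from `frame`...
```

CSV loses type information (factors vs. numerics), so you would typically re-declare column types after import in H2O.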

Using Alloy Models

I'm working on a project about the live upgrade of HA applications in SA Forum middleware.
As part of my research, I need to make a UML profile for my input upgrade campaign file and validate that file against some dependency constraints. Now I want to use Alloy instead of UML in my work, especially since it's more abstract and formal than UML (of course, UML + OCL is also formal). My question is: if UML + OCL is formal, what's the benefit of using Alloy?
In general, what are the benefits of using Alloy over UML?
As far as I know, there are no tools that let you check your OCL constraints against the UML model and generate and visualize valid instances, so if you are planning to do formal analysis of your models and specifications, Alloy might be a better choice. Even if you're not planning to do much analysis, Alloy's ability to generate and visualize valid instances is greatly helpful in making sure you got your model and specification right.
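For a flavor of what "generate and visualize valid instances" means, here is a toy Alloy model (all names are hypothetical, just loosely echoing the upgrade-campaign domain). The Alloy Analyzer can search for instances satisfying it and render them graphically:

```
// Toy sketch: components and upgrade campaigns with one dependency constraint.
sig Component {}

sig Campaign {
    upgrades: set Component
}

// Constraint: every campaign must upgrade at least one component.
fact NoEmptyCampaign {
    all c: Campaign | some c.upgrades
}

// Ask the Analyzer to find and visualize a satisfying instance.
run {} for 3
```

Executing the `run` command in the Alloy Analyzer produces concrete example worlds (or reports none exist), which is exactly the feedback loop UML + OCL tooling typically lacks.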
