How to export models trained in AutoML to saved model or frozen graph? - google-cloud-automl

I have trained a model on Google AutoML. I wish to use it along with other custom models I have built, so I need the models trained in AutoML in frozen graph or SavedModel format. Any direction would be much appreciated.

You cannot do that with Google AutoML. You would have to train the model on your local machine instead; you can then export it to a platform such as AWS SageMaker and deploy it.
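For the local-training route, the export itself is straightforward in TensorFlow. A minimal sketch (not AutoML-specific; the architecture and paths below are placeholders):

# Minimal sketch: train a model locally with TensorFlow/Keras, then export it
# as a SavedModel that can be served or reloaded alongside other models.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# ... train on your own data here with model.fit(...) ...

# Export as a SavedModel directory, which most serving stacks accept.
tf.saved_model.save(model, "export/my_model")

# Reload later for inference alongside your other custom models.
restored = tf.saved_model.load("export/my_model")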

Related

How do I use Hugging Face models offline? Also, how can I retrain those models on my own datasets for a specific task?

I want to use Hugging Face models offline. I also want to use models trained on a specific dataset for a particular task.
I expect to retrain my model on specific datasets and also be able to use it offline.
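For context, the usual offline pattern with the transformers library looks roughly like this (the model name and local paths are just examples):

# Rough sketch of offline use with Hugging Face transformers: download once,
# save locally, then load from the local path with no network access.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Run once while online.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./local-bert")
model.save_pretrained("./local-bert")

# Later, fully offline: load from the local directory and fine-tune or run inference.
tokenizer = AutoTokenizer.from_pretrained("./local-bert")
model = AutoModelForSequenceClassification.from_pretrained("./local-bert")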

Web application that creates a SPARQL query from user input (predetermined keywords) and returns the result to the user

I want to create a web application that has a front end. The user will search with some keywords, which will trigger a SPARQL query that runs on a knowledge graph or ontology. The results of the query will then be shown back to the user on the UI.
I have a simple ontology of machine learning models built with Protégé. It has a property "accuracy of model". The user wants to search for a model with 90% accuracy, and the web application should find it and show the model's name and details on the UI. There should be a dropdown on the page with different accuracy values; the user selects one of them and then searches for the model.
I tried using NLP techniques but it was too difficult for me, as I don't have much knowledge about those techniques.
I know about OWL, RDF, SPARQL and Apache Jena.
Can anyone help me with the strategy or the tools that I should use for this application?
Any help is appreciated. Thank you.
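For what it's worth, the core of such an application is a parameterised SPARQL query. A minimal Python sketch using rdflib (an alternative to Apache Jena; the ontology file name and the property IRI ex:accuracyOfModel are assumptions to adapt to your Protégé ontology):

# Rough sketch: load the ontology and run a SPARQL query whose accuracy value
# comes from the dropdown selection on the UI.
from rdflib import Graph

g = Graph()
g.parse("ml_models.owl", format="xml")   # ontology exported from Protege (RDF/XML)

query = """
PREFIX ex: <http://example.org/mlmodels#>
SELECT ?model ?accuracy
WHERE {
    ?model ex:accuracyOfModel ?accuracy .
    FILTER (?accuracy = 0.90)            # value selected in the dropdown
}
"""

for row in g.query(query):
    print(row.model, row.accuracy)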

Firebase Location and Validation query

I am in a bit of a dilemma. In the database I have lots of posts, which all have a longitude and latitude attribute.
What I am currently doing is observing the whole post group and then doing the validation on the device, which will become very heavy once there are 1000 or more posts.
if CLLocation(latitude: post.latitude, longitude: post.longitude).distance(from: currentUser.location) < 5000 { ... }
Is there a way to do the validation directly on the database? Keep in mind that currentUser.location is relative to the device, so I would always need to construct the location from the post and compare it to the current location of the user.
Any help would be much appreciated.
It's possible if you use something like geohashes: you could then filter your locations based on their geohash indexes. You could also achieve that by building your data structure around geohashes, but there's no need to do it yourself, since there are libraries which do that.
I've created a library for geo queries, check it out.
Or there's also a good library from Firebase itself.
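To make the geohash idea concrete, here is a rough Python sketch with the Firebase Admin SDK (the posts path and the geohash child key are assumptions about the data structure; each post would need a precomputed geohash stored on it, and the client-side libraries mentioned above handle all of this for you):

# Rough sketch: filter posts by geohash prefix instead of downloading them all.
import firebase_admin
from firebase_admin import credentials, db
import pygeohash

cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://<your-db>.firebaseio.com"})

user_lat, user_lng = 52.52, 13.40                           # current user location (placeholder)
prefix = pygeohash.encode(user_lat, user_lng, precision=5)  # precision 5 is roughly a 5 km cell

# Range query: all posts whose geohash starts with that prefix. A complete
# solution would also query the neighbouring cells, which is exactly what
# the GeoFire-style libraries do for you.
nearby = (db.reference("posts")
            .order_by_child("geohash")
            .start_at(prefix)
            .end_at(prefix + "\uf8ff")
            .get())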

How to add our training data to an existing Stanford NER classifier?

I have to add my training data to the existing CRF classifier [english.all.7class.distsim.crf.ser].
Is there any API to extract the existing models or de-serialise them?
Thanks
Unfortunately, no: there is no way to recover the training data from the serialized model files. Nor is there a way to [easily] continue training from a serialized model. Part of this is by design: Stanford is not allowed to redistribute the data that was used to train the classifier. If you have access to the data, you can of course train with it alongside the new data, as per http://nlp.stanford.edu/software/crf-faq.shtml#a
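If you do have suitable training data, the usual route is to train a fresh CRF model from a properties file. A rough sketch, driven from Python for convenience (file names and feature flags are illustrative; see the crf-faq page linked above for the full set of options):

# Rough sketch: train a new CRF model from scratch on your own TSV data
# (one token per line: word <TAB> label), then serialise it to disk.
import subprocess

props = """\
trainFile = my_training_data.tsv
serializeTo = my-ner-model.ser.gz
map = word=0,answer=1

useClassFeature = true
useWord = true
useNGrams = true
usePrev = true
useNext = true
useSequences = true
usePrevSequences = true
wordShape = chris2useLC
"""

with open("my_ner.prop", "w") as f:
    f.write(props)

# Requires stanford-ner.jar (which contains CRFClassifier) on the classpath.
subprocess.run(
    ["java", "-cp", "stanford-ner.jar",
     "edu.stanford.nlp.ie.crf.CRFClassifier", "-prop", "my_ner.prop"],
    check=True,
)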

Data mine a huge amount of data

I store a huge number of reporting elements in a MySQL database. These elements are stored in a simple way:
KindOfEvent;FromCountry;FromGroupOfUser;FromUser;CreationDate
All these reporting elements should make it possible to display graphs from different points of view. I have tried using SQL queries for that, but it is very slow for users. As these graphs will be used by non-technical users, I need a tool to pre-work the results.
I am very new to all these data-mining, reporting and OLAP concepts. If you know a pragmatic approach that is not too time-consuming, or a tool for that, it would help!
You could set up OLAP cubes on top of your MySQL data. The multi-dimensional model will help your users navigate through and analyse the data, either via Excel or web dashboards. One thing specific to icCube is its ability to integrate any JavaScript charting library and to embed the dashboards within your own pages.
I am not a database expert, but I think MySQL is more than enough for your problem. Well-designed indexes will speed up the query process.
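A concrete way to do that "pre-work" without a full OLAP stack is a summary table that the dashboards read, refreshed periodically. A rough sketch (the events table name, the summary table and the connection details are assumptions; the columns follow the layout above):

# Rough sketch: pre-aggregate the raw events into a small summary table that
# the dashboards query, instead of scanning every raw row on each request.
# Assumes daily_event_counts has a unique key on (day, KindOfEvent, FromCountry).
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="report",
                               password="***", database="reporting")
cur = conn.cursor()

# One-off: an index on the raw table speeds up the aggregation itself.
cur.execute("""
    CREATE INDEX idx_events_date_kind
    ON events (CreationDate, KindOfEvent, FromCountry)
""")

# Periodic job (e.g. nightly): rebuild daily counts per event kind and country.
cur.execute("""
    REPLACE INTO daily_event_counts (day, KindOfEvent, FromCountry, nb_events)
    SELECT DATE(CreationDate), KindOfEvent, FromCountry, COUNT(*)
    FROM events
    GROUP BY DATE(CreationDate), KindOfEvent, FromCountry
""")
conn.commit()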
I am not a DB expert, but if you want to process graphs, you can use Neo4j (a Java graph processing framework), SNAP (a C++ graph processing framework), or employ cloud computing if that is possible. I would recommend either Hadoop (MapReduce) or Giraph (cloud graph processing). For graph display you can use whatever tool suits you. Of course, "the best" technology depends on the data size. If none of the above suits you, try finding something that does on the wiki page: http://en.wikipedia.org/wiki/Graph_database
InfoGrid (http://infogrid.org/trac/) looks like it might suit you.
