How to know the trained model by AutoML? - google-cloud-automl

I used AutoML Vision to train a model to predict cancer based on images. It works quite well. I want to know what the model architecture is: whether it is a CNN, and how many layers it has.
Thank you!

We don't normally release the exact details of the model, because we want to keep changing it under the hood as newer, better models come out.
It is a relatively deep CNN.

Related

DistilBert for self-supervision - switch heads for pre-training: MaskedLM and SequenceClassification

Say I want to train a model for sequence classification. And so I define my model to be:
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
My question is - what would be the optimal way to pre-train this model on a masked language modeling task? After pre-training, I would like the model to train on the downstream task of sequence classification.
My understanding is that I can somehow swap the head of my model with a DistilBertForMaskedLM head for pre-training, and then switch back for the original downstream task, although I haven't figured out whether this is indeed optimal or how to write it.
Does Hugging Face offer any built-in function that accepts the input ids and a percentage of tokens to mask (excluding pad tokens) and simply trains the model?
Thank you in advance
I've tried to implement this myself, and while it does seem to work it is extremely slow. I figured there could already be implemented solutions instead of trying to optimize my code.

How to validate my YOLO model trained on custom data set?

I am doing research on object detection using YOLO, although I am from the civil engineering field and not familiar with computer science. My advisor is asking me to validate my YOLO detection model trained on a custom dataset. My problem is that I really don't know how to validate my model, so please kindly point me in the right direction.
Thanks in advance.
I think first you need to make sure that all the cases you are interested in (location of objects, their size, general view of the scene, etc.) are represented in your custom dataset - in other words, that the collected data reflects your task. You can discuss this with your advisor. The main rule: label the data in the same manner as you want to see it in the output. More information can be found here.
It's really important - garbage in, garbage out: the quality of the output of your trained model is determined by the quality of the input (labelled data).
If this is done, it is common practice to split your data into training and test sets. During model training only the train set is used, so you can later validate the quality (generalizing ability, robustness, etc.) on data the model has not seen - the test set. It's also important that these two subsets don't overlap - otherwise the evaluation is biased toward memorized examples and will not tell you whether the model actually performs the task properly.
Then you can train a few different models (with some architectural changes, for example) on the same train set and validate them on the same test set - this is a regular validation process.
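For detection specifically, validation on the test set usually means matching predicted boxes to ground-truth boxes by IoU (intersection over union) and computing precision/recall at a threshold; full mAP builds per-class precision-recall curves on top of this. A minimal self-contained sketch (boxes as `(x1, y1, x2, y2)` tuples, names my own):

```python
# Match predictions to ground truth by IoU and compute precision/recall.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching: a prediction is a true positive if its
    best unmatched ground-truth box overlaps with IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best >= thr:
            tp += 1
            matched.add(best_i)
    fp = len(preds) - tp   # predictions with no match
    fn = len(gts) - tp     # ground-truth boxes never detected
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

Run this over every test image and aggregate; most YOLO repos also ship a built-in evaluation script that reports mAP directly, which is worth using once you understand what it computes.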

How to train on very small data set?

We are trying to understand the underlying model of Rasa - the forums there still didn't get us an answer - on two main questions:
1. We understand that the Rasa model is a transformer-based architecture. Was it pre-trained on any data set (e.g. Wikipedia)?
2. If we understand correctly, intent classification is a fine-tuning task on top of that transformer. How come it works with such small training sets?
appreciate any insights!
thanks
Lior
The transformer model is not pre-trained on any dataset. We use quite a shallow stack of transformer layers, which is not as data-hungry as the deeper stacks used in large pre-trained language models.
Having said that, there isn't an exact number of data points that will be sufficient for training your assistant, as it varies by domain and problem. Usually a good estimate is 30-40 examples per intent.

Obtaining a HOG feature vector for implementation in SVM in Python

I am new to scikit-learn. I have viewed the online tutorials, but they all seem to leverage existing data (e.g., digits, iris, etc.). I need information on how to process images so that they can be used by scikit-learn.
Details of my study: I have a webcam set up outside my office. It captures all of the traffic on my street that passes through the field of view. I have cropped several hundred images of sedans, trucks, and SUVs. The goal is to predict which of these categories a vehicle belongs to. I have applied Histogram of Oriented Gradients (HOG) to these images to show the differences between the categories. This site will not allow me to post any images, but you can see them here: https://stats.stackexchange.com/questions/149421/obtaining-a-hog-feature-vector-for-implementation-in-svm-in-python. I posted the same question at that site but got no response. This post is the closest answer I have found: Resize HOG feature for Scikit-Learn classifier
I wish to train an SVM classifier based on these images. I understand that there are algorithms in scikit-image that prepare the HOG features for use in scikit-learn. Can someone help me understand this process? I am also grateful for any thoughts, based on your experience, on the probability of success of this classification study. I also understand that I need to train the model using negative images (ones with no vehicles). How is this done?
I know I am asking a lot but I am surprised no one that I am aware of has done a tutorial on these early steps. It seems like a fairly elementary study.
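A minimal sketch of the pipeline in question, assuming scikit-image and scikit-learn are installed (image size and HOG parameters are illustrative choices, not the only valid ones): resize every crop to a fixed size so all HOG vectors have the same length, stack them into the `(n_samples, n_features)` array scikit-learn expects, and fit a linear SVM. Negative images simply get their own label, e.g. 0.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

IMG_SIZE = (64, 64)  # fixed size => fixed-length HOG feature vector

def hog_features(image):
    """Grayscale, resize, and extract a 1-D HOG descriptor."""
    if image.ndim == 3:
        image = rgb2gray(image)
    image = resize(image, IMG_SIZE, anti_aliasing=True)
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_svm(images, labels):
    """images: list of arrays; labels: 0 = negative (no vehicle),
    1 = sedan, 2 = truck, 3 = SUV, for example."""
    X = np.stack([hog_features(im) for im in images])
    clf = LinearSVC()
    clf.fit(X, labels)
    return clf
```

Prediction on a new crop is then `clf.predict(hog_features(crop).reshape(1, -1))`. As usual, hold out a test set to estimate accuracy before trusting the classifier.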

Training Model for Sentiment Analysis with Google Prediction API

I am planning to use the Google Prediction API for sentiment analysis. How can I generate the training model for this? Or where can I find a standard training model available for commercial use? I have already tried the Sentiment Predictor provided in the Prediction Gallery of the Google Prediction API, but it does not seem to work properly.
From my understanding, the "model" for the Google Prediction API is actually not a model, but a suite of models for regression as well as classification. That being said, it's not clear how the Prediction API decides what kind of regression or classification model is used when you present it with training data. You may want to look at how to train a model on the Google Prediction API if you haven't already done so.
If you're not happy with the results of the Prediction API, it might be an issue with your training data. You may want to think about adding more examples to the training file to see if the model comes up with better results. I don't know how many examples you used, but generally, the more you can add, the better.
However, if you want to look at creating one yourself, NLTK is a Python library that you can use to train your own model. Another Python library you can use is scikit-learn.
Hope this helps.
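To make the scikit-learn route concrete, here is a minimal sketch of training your own sentiment classifier (the texts and labels are toy placeholders - a real model needs a sizeable labelled corpus, as noted below):

```python
# TF-IDF features + logistic regression: a common baseline for sentiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I love this product", "Fantastic service, very happy",
    "Absolutely terrible", "Worst purchase I have ever made",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["very happy with this"]))
```

The same `fit`/`predict` interface works unchanged once you swap in a real training set.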
The Google Prediction API is great, but to train a model you will need... a LOT of data.
You can use the sentiment model that is already trained.