I want to perform classification on several different classes of images (car, pen, table) and then use the trained weights to create filters specific to car, pen, and table. Finally, if I apply the car filter to an image, it should give me a similarity percentage.
I am new to deep learning, so I would be grateful for any guidance.
I want to build a filter for each class.
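For example, I imagine something like the sketch below, where the per-class softmax probabilities act as the "filters" and the car probability is the similarity percentage for car. This is only my assumption of how it could work; the model file, input size, and label order are placeholders:

```python
# Minimal sketch (an assumption, not a confirmed approach): a trained
# softmax classifier already yields one confidence per class, which can
# be read as a per-class "similarity percentage".
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["car", "pen", "table"]  # assumed label order from training

model = tf.keras.models.load_model("classifier.h5")  # hypothetical trained model

def similarity_percentages(image_path):
    # Preprocess to the input size the model was trained with (assumed 224x224)
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0
    probs = model.predict(x)[0]  # softmax output: one probability per class
    return {name: float(p) * 100 for name, p in zip(CLASS_NAMES, probs)}

print(similarity_percentages("test_car.jpg"))  # e.g. {'car': 93.1, ...} (illustrative)
```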
I have a problem training a YOLO v5 model on videos. I can't imagine how it would gain knowledge from them. I have seen the YOLO v5 model trained on images before, but I haven't seen how to train the model on videos. Does anyone have experience with this? I would like to understand it properly.
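From what I have gathered so far, the usual approach seems to be sampling frames out of the videos with OpenCV, annotating those frames, and then training exactly as with an image dataset. Something like this sketch (paths and the sampling rate are placeholders):

```python
# Sketch: turn videos into a YOLOv5-style image dataset by saving
# every n-th frame for later annotation.
import os
import cv2

def extract_frames(video_path, out_dir, every_n=30):
    """Save every n-th frame of the video as a JPEG for annotation."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

extract_frames("driving.mp4", "dataset/images/train", every_n=30)
# The saved frames are then labeled (one YOLO .txt file per image)
# and training proceeds exactly as with a normal image dataset.
```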
I have trained a YOLOv4 model. Now I want to do inference by developing a GUI (using OpenCV/Python or any other tool) where I can give an image as input and get the detections back.
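Something like the following is what I have in mind, using OpenCV 4.x's DNN `DetectionModel` helper as a minimal "GUI" (file names, class list, and the 416x416 input size are assumptions; a toolkit like Tkinter could wrap this for file selection):

```python
# Sketch: run a trained YOLOv4 (Darknet) model on one image with OpenCV
# and display the boxes in a window.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn.DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("classes.txt") as f:  # one class name per line, in training order
    class_names = [line.strip() for line in f]

img = cv2.imread("input.jpg")
class_ids, confidences, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)

for cid, conf, box in zip(class_ids, confidences, boxes):
    x, y, w, h = box
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(img, f"{class_names[int(cid)]}: {float(conf):.2f}",
                (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imshow("YOLOv4 inference", img)  # simple OpenCV window as a minimal GUI
cv2.waitKey(0)
```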
I want to use the Google AutoML Vision API for image classification, but with an incremental learning setup - more specifically, I should be able to incrementally provide new training data with possibly brand new (and previously unknown) class labels. For example, let's say I train the network today for three labels: A, B and C. Then, after a week, I want to add some new data labeled with a brand new class D. And after another week, I want to add even newer data labeled with a brand new class E. At this point, the model should be able to classify an input image into any of those five classes, with each incremental addition to the model causing very little accuracy drop.
Is that possible with the Google AutoML Vision API?
Currently you can keep importing new data into an existing AutoML dataset and train a new model each week. There is an import API and a train API.
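A hedged sketch of that weekly workflow with the `google-cloud-automl` Python client might look as follows. The field names follow the v1 API samples and may differ between client versions, and all IDs, URIs, and display names are placeholders:

```python
# Sketch: import this week's new data into an existing dataset, then
# kick off training of a fresh model on the updated dataset.
from google.cloud import automl

client = automl.AutoMlClient()
project_location = "projects/my-project/locations/us-central1"  # placeholder
dataset_full_id = f"{project_location}/datasets/ICN1234567890"  # placeholder

# 1. Import new data (a CSV in GCS listing image URIs and labels,
#    which may include brand-new labels such as D or E).
gcs_source = automl.GcsSource(input_uris=["gs://my-bucket/week2_labels.csv"])
input_config = automl.InputConfig(gcs_source=gcs_source)
client.import_data(name=dataset_full_id, input_config=input_config).result()

# 2. Train a new model on the updated dataset.
model = automl.Model(
    display_name="classifier_week2",
    dataset_id="ICN1234567890",
    image_classification_model_metadata=automl.ImageClassificationModelMetadata(),
)
response = client.create_model(parent=project_location, model=model)
print("Training operation name:", response.operation.name)
```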
The assumption of very little accuracy drop may be unrealistic, though. There are valid cases where adding a new label will make accuracy go down - e.g. adding labels that are hard to distinguish from previous labels, or adding labels without performing data cleanup (adding a label and not applying it to existing images in which objects with this label are visible).
I have been googling for the last few days about the active appearance model (AAM). I found that it involves a shape model and a texture model, and now I'm trying to do some research on the active shape model (ASM), and I'm getting confused.
Are the active shape model (ASM) and the shape model in AAM the same?
An AAM involves (among other things) both a shape model and a texture model. The shape model is usually obtained by what is referred to as an active shape model (ASM). So the answer is yes, the shape model in an AAM is the active shape model (ASM).
Are the active shape model (ASM) and the shape model in AAM the same?
The answer to that is neither a straightforward yes nor a no, and needs some explanation.
The shape model in both the ASM and the AAM is the same. It is a point distribution model (PDM): you have a set of rigidly aligned points, you learn a PCA on them, and then you've got a statistical model of those points, which is what one would call a shape model (see the sketch below).
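To make that concrete, here is a tiny sketch of a PDM, assuming the landmark sets are already rigidly (Procrustes-) aligned; any shape is then approximated as the mean plus a linear combination of the learned modes (names and data here are purely illustrative):

```python
# Sketch of a point distribution model: stack aligned landmarks, run PCA,
# and reconstruct shapes as mean + P @ b.
import numpy as np

def build_pdm(shapes, n_modes=5):
    """shapes: array (n_samples, n_points, 2) of rigidly aligned landmarks."""
    X = shapes.reshape(len(shapes), -1)       # each row: flattened shape vector
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_modes].T                        # principal modes of shape variation
    eigenvalues = (s ** 2 / (len(shapes) - 1))[:n_modes]
    return mean, P, eigenvalues

def reconstruct(mean, P, b):
    """Generate a shape from model parameters b: x = mean + P @ b."""
    return (mean + P @ b).reshape(-1, 2)

# Example: 50 training shapes with 20 landmarks each (random stand-in data)
shapes = np.random.rand(50, 20, 2)
mean, P, eigvals = build_pdm(shapes)
new_shape = reconstruct(mean, P, b=np.zeros(5))  # b = 0 gives the mean shape
```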
Now, what one commonly calls an Active Shape Model is the combination of a PDM and a fitting algorithm - hence the term Active. The simplest method for that is to search along each point's normal ("profile direction") for the strongest edge. A slightly more elaborate method is to learn the 1D gradient profile along each point's normal from the training images. Cootes describes both methods in An Introduction to Active Shape Models.
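A rough sketch of that simplest profile search, for a single landmark on a grayscale image (parameter names are illustrative, and nearest-neighbour sampling is used for brevity):

```python
# Sketch of one ASM update step: sample the image along a landmark's
# normal and move the point to the strongest intensity gradient.
import numpy as np

def profile_search(image, point, normal, half_len=10):
    """image: 2D grayscale array; point, normal: length-2 (x, y) vectors."""
    point = np.asarray(point, dtype=float)
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    offsets = np.arange(-half_len, half_len + 1)
    # Sample intensities along the profile, clamped to the image bounds
    xs = np.clip(np.round(point[0] + offsets * normal[0]).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(point[1] + offsets * normal[1]).astype(int), 0, image.shape[0] - 1)
    profile = image[ys, xs].astype(float)
    # Strongest edge = largest absolute gradient along the profile
    best = np.argmax(np.abs(np.gradient(profile)))
    return point + offsets[best] * normal

# One iteration applies this to every landmark, then projects the moved
# points back onto the PDM so the overall shape stays plausible.
```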
An AAM has the same PDM as an ASM to model the statistical variability of the shape, but the fitting is done differently. AAMs use the learned statistical model of the appearance for the fitting, and not the method used by ASMs.
Hence, strictly speaking, the answer to your question quoted above is: no, the ASM and the shape model in an AAM are not the same. An AAM does not "contain" an ASM. It does, however, contain the PDM part of the ASM, and the shape model in both the ASM and the AAM is the same. The shape model is fitted differently in ASMs and AAMs, though.
I recommend reading the paper I linked above for more details; it's a very well-written and easy-to-understand paper.