Probabilistic image classification using imageCategoryClassifier in MATLAB

I trained a classifier on a training set containing images of three types, following this guide: https://ch.mathworks.com/help/vision/examples/image-category-classification-using-bag-of-features.html
Now I want to use this classifier to classify the images of another dataset. The output should give me the predicted type of each image and the corresponding probability.
I found the function "predict" to do the prediction.
Link: https://ch.mathworks.com/help/vision/ref/imagecategoryclassifier.predict.html
However, I have two questions.
First, the documentation says:
[labelIdx,score] = predict(categoryClassifier,imds) returns the predicted label index and score for the images specified in imds.
I don't understand this "score". The documentation says: "The score provides a negated average binary loss per class", and the values of "score" are negative. Is there any way I can obtain a probability (in [0, 1]) from this "score"?
Second, my test dataset contains images of six types, that is, three more types than my classifier knows. With the function "predict", each image is assigned a label from one of the three known types. How can I add an extra label to mark the images that cannot be classified into any of the three types?
I think this one could be solved if I can get the probabilities from my first question. At least I could set a threshold and change the labels manually.
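For example, I imagine something like this (a rough sketch; the softmax mapping is my own assumption and only rescales the scores into [0, 1], it is not a calibrated probability, and testImds stands for the datastore of the new dataset):

    % Predict label indices and scores for the new dataset
    [labelIdx, score] = predict(categoryClassifier, testImds);

    % Map the negated average binary losses into [0, 1] with a softmax
    % (a monotone rescaling, not a true probability)
    probs = exp(score) ./ sum(exp(score), 2);

    % Flag images whose best pseudo-probability falls below a threshold
    [maxProb, ~] = max(probs, [], 2);
    threshold = 0.5;                      % to be tuned on held-out data
    labelIdx(maxProb < threshold) = -1;   % -1 = none of the three types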
Any suggestions that could help solve these problems? Thanks a lot!

Related

Machine Learning: Image Classification: CNN: How to identify Just Red Cars?

I am trying to come up with a way to classify whether a given image contains a Red Car.
The possible outcomes of the classifier should be:
Image contains a CAR and it is RED. (Desired case)
All others, where the image contains a CAR but it is NOT RED, or where the image does not contain any cars at all.
I know how to implement a convolutional NN that can classify whether an image contains a CAR or not.
But I am having trouble with how to implement fine-grained image classification for this, where the classifier should identify only Red Cars and ignore all other images, whether they contain cars or not.
I read the following papers, but since my use case is much more limited than the similarity learning they propose, I am trying to see whether there is a simpler approach to implement this:
Fast Training of Triplet-based Deep Binary Embedding Networks
Learning Fine-grained Image Similarity with Deep Ranking
Thanks for your help.
Just treat it as a classification problem with two classes: "red car" vs. "no red car". Label every instance of your training data this way. There is no need to train a "car" classifier first.
I know how to implement a convolutional NN that can classify whether an image contains a CAR or not.
Good. Then this should be done within seconds (+ time for labeling).
I read the following papers, but since my use case is much more limited than the similarity learning they propose, I am trying to see whether there is a simpler approach to implement this:
Fast Training of Triplet-based Deep Binary Embedding Networks
Learning Fine-grained Image Similarity with Deep Ranking
Yes: simply treat it as a classification problem, as described above. If you need a starting point, have a look at the TensorFlow CIFAR-10 tutorial.
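A minimal sketch of that two-class setup (shown in MATLAB with the Deep Learning Toolbox to match the parent thread; the folder names, image size, and network shape are all assumptions, and the CIFAR-10 tutorial shows the TensorFlow counterpart):

    % Two folders of labeled images: data/red_car and data/no_red_car
    imds = imageDatastore('data', 'IncludeSubfolders', true, ...
        'LabelSource', 'foldernames');

    % A small CNN with a two-way output: red car vs. no red car
    layers = [
        imageInputLayer([64 64 3])
        convolution2dLayer(3, 16, 'Padding', 'same')
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(2)
        softmaxLayer
        classificationLayer];

    % Resize the images on the fly and train
    augimds = augmentedImageDatastore([64 64], imds);
    opts = trainingOptions('sgdm', 'MaxEpochs', 10);
    net = trainNetwork(augimds, layers, opts);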

How to provide a score value to an image based on pattern information in it?

I saw a few image processing and analysis related questions on this forum and thought I could try it for my question. I have, say, 30 two-dimensional arrays (to keep things simple, although I have a very big data set) which form 30 individual images. Many of these images share a similar base structure but differ in pixel intensities. Due to this intensity variation among pixels, some images have a prominent pattern (say, a larger area of localized intense pixels, or high-intensity pixels outlining an edge), while other images contain only single high-intensity pixels randomly distributed, without any prominent feature (so basically noise).

I am now trying to build an algorithm that can give each image a specific score based on factors like the area fraction of high-intensity pixels, the mean, and the standard deviation, so that I can find the image with the most prominent pattern (in other words, rank them). But these factors depend on a common input, namely a user-defined threshold, which is different for every image. Any input on how I can achieve this ranking or image score in an automated manner (without the use of a threshold)? I initially used MATLAB to perform all the processing and area fraction calculations, but now I am using R to do the same thing.

Can some amount of machine learning (random forests, for instance) help me here? I am not sure. Any input would be very valuable.

P.S. If this is not the right forum to post in, any suggestions on where I can get good advice?
First of all, let me suggest a change in terminology: what you denote as a feature is usually called a pattern in image processing, while what you call a factor is usually called a feature.
I think the main weakness of the features you are using (mean, standard deviation) is that they are based only on the statistics of single pixels (1st-order statistics), without considering correlations (neighborhood relations between pixels). If you take a highly structured image and shuffle its pixels randomly, you will still get the same 1st-order statistics.
There are many ways to take these correlations into account. A simple, efficient, and therefore popular method is to first apply some filters to the image (high-pass, low-pass, etc.) and then compute the 1st-order statistics of the filtered result, as sketched below. Other methods are based on the Fast Fourier Transform (FFT).
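A rough MATLAB sketch of that filter-then-statistics idea (imgaussfilt is from the Image Processing Toolbox; the sigma and the choice of statistic are assumptions to tune on your data):

    % img: one of the 30 two-dimensional arrays, as a double matrix
    lowPass  = imgaussfilt(img, 2);    % low-pass band (sigma = 2, to be tuned)
    highPass = img - lowPass;          % complementary band: edges and fine detail

    % 1st-order statistics of the filtered bands now encode neighborhood
    % structure: coherent regions survive the smoothing, while isolated
    % noise pixels are averaged away
    score = std(lowPass(:));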
Of course, machine learning is also an option here. You could try convolutional neural networks, for example, but I would try the simple filtering approach first.

Image similarity index for plant images produced by L-system fractal

I have read all the material about image similarity indices on this forum, but I think my subject is somewhat different, because the images I want to compare come from an L-system generator, and as you can see below it is hard to find obvious differences. So I could not decide which method and software to choose for my problem.
But let's take the story from the beginning. I have a collection of data, obtained by measuring the angles and branch lengths of some plants (15 in total), and I represented them with the L-system fractal method as mentioned.
The images look like the ones below:
Plant A
Plant B
Plant C
So far I have tried to find differences using two methods:
1) By calculating the fractal dimension of those images, but, as expected, it was 2 for all of them.
2) By calculating the % of area coverage on the same canvas (sketched below). The numbers in that case show some differences, but they are not statistically significant.
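For reference, the coverage measure was computed roughly like this (a MATLAB sketch, assuming each rendering is a grayscale image of the same canvas size with a dark plant on a white background):

    bw = imbinarize(img);               % white canvas -> true, plant pixels -> false
    coverage = nnz(~bw) / numel(bw);    % fraction of the canvas covered by the plant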
So the thought was to use another similarity index, but there are so many protocols and ideas out there that I could not find a starting point. I read about OpenCV, VisualCI, etc., but because I have never used such methods before, I feel somewhat lost.
Any of your suggestions will be welcome.
Thank you.

image retrieval (CBIR) with bag of words

I want to use bag of words for content-based image retrieval.
I'm confused as to how to apply bag-of-words to content-based image retrieval.
To clarify:
I've trained my program using SURF features and extracted the BoW descriptors. I feed these to a support vector machine as training data. Then, given a query image, the support vector machine can predict which class the image belongs to.
In other words, given a query image it can find a class. For example, given a query image of a car, the program would return 'car'. But how would one find similar images?
Would I, given the class, simply return images from the training set? Or would the program, given a query image, instead return the subset of a test set for which the SVM predicts the same class?
The title only mentions BoW, but in your text you also use SVMs.
I think the core idea of CBIR is to find the most similar image according to some distance measure. You can do this with BoW features alone; the SVM is not necessary.
The main purpose of the additional classification step is to speed up the process: after you obtain a class label for your query image, you only need to search that subgroup of your images for the best match. And of course, if the SVM is better at distinguishing certain classes than your distance measure, it may also help to reduce errors.
So the standard workflow (sketched in code below the list) would be:
obtain the class
return the best match from the training samples of this class
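A rough sketch of those two steps in MATLAB (the variable names are mine; queryHist must be built with the same vocabulary as the training histograms, and pdist2/fitcecoc are from the Statistics and Machine Learning Toolbox):

    % queryHist:   1-by-K BoW histogram of the query image
    % bowHists:    N-by-K BoW histograms of the N training images
    % trainLabels: N-by-1 categorical labels used to train svmModel
    %              (e.g. svmModel = fitcecoc(bowHists, trainLabels))

    % Step 1: obtain the class of the query
    predClass = predict(svmModel, queryHist);

    % Step 2: search only the training images of that class
    inClass = find(trainLabels == predClass);
    d = pdist2(queryHist, bowHists(inClass, :));   % e.g. Euclidean distances
    [~, best] = min(d);
    bestMatch = inClass(best);                     % index of the most similar training image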

Face matching after detection

I am working on software that will match a captured image (a face) against 3-4 stored images (faces) of the same person. There are two possibilities:
1. The captured image (face) is of the same person whose 3-4 images (faces) are already stored in the database.
2. The captured image is of a different person.
Now I want to get the correct result in both scenarios, i.e. 'matched' in case 1 and 'not matched' in case 2. I used 40 Gabor filters to get good results, and I obtain the result as an array (histogram). But it does not seem to work well, and environmental conditions such as lighting also influence the matching process. Can anyone suggest a good and efficient technique to achieve this?
Well, this is basically a face identification problem.
You can use LBP (Local Binary Patterns) to extract features from the images. LBP is a very robust, illumination-invariant method.
You can try the following steps (a code sketch of the training side follows the list):
Training:
Extract the face region (e.g. with an OpenCV Haar cascade)
Resize all the extracted face regions to the same size
Divide the resized face into subregions (e.g. 8×9)
Extract LBP features from each subregion and concatenate them, because localization of the features is very important
Train an SVM on the concatenated features, with a different label for each person
Testing:
Take a face image and follow steps 1 to 4
Predict with the SVM (which person's face this is)
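A sketch of the training steps in MATLAB (extractLBPFeatures handles the subregion split via 'CellSize' and already concatenates the per-cell histograms; the sizes and the fitcecoc call are assumptions):

    % Steps 1-2: after detecting and cropping the face, bring it to a common size
    face = imresize(grayFace, [128 128]);

    % Steps 3-4: an 8-by-8 grid of cells, one LBP histogram per cell,
    % concatenated into a single row vector
    feat = extractLBPFeatures(face, 'CellSize', [16 16]);

    % Step 5: collect one such row per training image into X, then train
    % a multiclass SVM with one label per person:
    % model = fitcecoc(X, personLabels);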
