methods of reducing the number of features when doing machine learning on images

I'm performing machine learning on a set of 25 x 125 images. After extracting the RGB components, each example has 9375 features (and I have about 675 examples). I was trying fminunc and fminsearch and thought there was something wrong with my method because it kept 'freezing', but when I decreased the number of features by a factor of 10, it took a while but worked. How can I reduce the number of features while keeping the information in the picture that is relevant? I tried k-means, but I don't see how that helps: I would still have the same number of features, just with a lot of redundancy.

You're looking for feature reduction or selection methods. For example see this library:
http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html
or see this question
Feature Selection in MATLAB
If you google feature selection/reduction matlab you will find many relevant articles/tools. Or you could look up some commonly used methods like PCA (principal component analysis).
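As a minimal, toolbox-agnostic sketch of what PCA-style reduction does here (the array sizes below are shrunken stand-ins for the 675 x 9375 matrix in the question; the idea is identical at full size):

```python
import numpy as np

# Toy stand-in for the real data: rows = examples, columns = raw pixel features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 300))

# Center the data, then get the principal directions via SVD.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 20                               # keep 20 components instead of 300 features
X_reduced = X_centered @ Vt[:k].T    # shape (100, 20)

print(X_reduced.shape)
```

The reduced matrix can then be fed to fminunc/fminsearch-based training in place of the raw pixels; you choose k by looking at how much variance the kept components explain.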

Related

How to programmatically distinguish the professional photo from the amateur photo?

What are the different options and solutions (software) that can help distinguish professional (good) from amateur (bad) photos?
The criteria could be contrast, sharpness, noise, presence of compression artifacts, etc. The question is: what tools allow a machine (not a human) to determine all of this? Do you think all of these criteria can be represented as mathematical models?
Or, in other words: "feed" the tool 1000 high-quality photos and 1000 substandard ones, and let the machine itself identify the factors that distinguish the good images from the bad.
This is a quite vague problem definition. The only thing you have is 1000 high-quality photos and 1000 substandard photos. Your application, however, is quite concrete, and I doubt (though I'm not sure) that you will find ready-made software for it.
Without looking at your images and running some tests, it is also difficult to say whether contrast/gamma alone would be enough to classify them properly.
What you can do, if you know a bit of coding in MATLAB/Python/C, is use some existing libraries to try to solve your problem. I can't help you with that, as it is itself quite tedious work, but I can give you some insights.
To define your problem you will need:
Input: 1000 pro images, 1000 std images
You can represent this as 2000 images and a 2000-element binary label vector (1 for pro, 0 for std)
Features
The images themselves might not give you enough information. What you can do is extract features from the images. This step is called feature extraction and is an open research field in computer vision. There are several feature extractors out there; you can try a couple of the most used ones, such as HOG or SIFT (have a look here for examples).
These feature extractors will give you a 1xM numerical vector for each image. With N images, you get an NxM matrix composed of the N images' descriptors.
Classification:
Once you manage to extract features from the images, giving you a data matrix X (NxM) and a binary label vector y, you can use any machine learning algorithm, such as deep neural networks, random forests, or support vector machines, to train a model and classify new images later.
By putting everything together, you might be able to get decent results.
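To make the X/y setup above concrete, here is a minimal NumPy sketch with synthetic features and a deliberately simple nearest-centroid classifier standing in for an SVM or random forest (all sizes and data here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for extracted descriptors: N images x M features,
# with class 1 ("pro") shifted so the toy problem is learnable.
N, M = 200, 64
y = np.array([0] * (N // 2) + [1] * (N // 2))
X = rng.normal(size=(N, M)) + y[:, None] * 2.0

# Simple train/test split.
idx = rng.permutation(N)
train, test = idx[:150], idx[150:]

# "Training": one centroid per class.
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Classify test images by the nearest centroid.
d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = d.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(accuracy)
```

With real photos you would replace the synthetic X with HOG/SIFT descriptors and the nearest-centroid step with a proper classifier; the surrounding plumbing stays the same.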
Is this professional vs. amateur photographer classification, or equipment classification? I mean, distinguishing between a DSLR photo and a cell phone photo? Or distinguishing between an amateur with a DSLR and a professional with the same equipment? Or are we talking about Photoshop editing at the end?
In the case of equipment, I think the features to look at would be noise, contrast, and color gamut. In the case of the photographer's skill, you will probably have to look at features based on edge representations, natural scene metrics, etc.
But, you will need to create a data matrix and then run a machine learning classification algorithm on it and hopefully find some features.
Are you making or looking for art for humans or for robots?
The definition of a great photo is subjective by definition. What is junk to one person may be genius to another. Trying to take that and turn it into an equation is asking for AI to take over humankind.
I don't mean to be harsh in my assessment. My degree is in Fine Art, Painting. I put a lot of time and effort into thinking about images. It's a wonder to me how a child might make an image that seemed thoughtless but was a breakthrough in my perception. Conversely, you can work for hours, days, months or even years and then feel like the result just doesn't measure up.
Photography is an ingenious invention, therefore all photography is a source for amazement.
I do agree with you, however. Some photos are truly impressive. I think that if we approach this as coders, then what we are seeking is 'likes' or 'page views' or some other method of getting counts from many people. I know that is not the answer you were looking for, but I don't think you can find a better one. I wish you well on your quest.
If you want to judge a photo on technicalities then the edge of the physics should be your target. Currently that would be mirrorless cameras and 3d imaging.

Supervised Descent Method (SDM)

Can someone explain briefly how SDM (Supervised Descent Method) for Feature Extraction works?
I searched a lot on the Internet but couldn't find what I was looking for.
Is it only for feature extraction in videos, or can it be used in both videos and images?
If someone can explain, it would be of great help.
SDM is a method for aligning shapes in images. It uses feature extractors (SIFT and HOG) in the process, but it is not a feature extractor itself.
Similar methods are ASM, AAM and CLM, but SDM has better performance and accuracy.
In SDM's training process, the system learns a set of descent vectors that take an initial shape configuration (different from the shapes in the database) to the ground-truth shapes in the database. Those vectors have the ability to fit a new initial configuration to the face shape in the image you want to align.
This link can help you to learn more about it: http://arxiv.org/pdf/1405.0601v1.pdf
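As a toy illustration of the core idea (learning descent maps from feature differences to shape updates by regression, instead of computing gradients at test time), here is a made-up 1D sketch in NumPy. This is not the actual SDM implementation; the "feature extractor" below is an arbitrary nonlinear function standing in for SIFT/HOG around a shape:

```python
import numpy as np

rng = np.random.default_rng(2)

def h(x):
    """Toy nonlinear 'feature extractor' (stand-in for SIFT/HOG features)."""
    x = np.atleast_1d(x)
    return np.stack([np.sin(3 * x), np.cos(2 * x), x ** 3], axis=-1)

x_star = 0.7              # ground-truth "shape" parameter
phi_star = h(x_star)      # features at the correct alignment

# Training: perturb the true parameter and learn, stage by stage, a linear
# map R_k from feature differences to parameter updates (the descent maps).
xs = x_star + rng.uniform(-0.5, 0.5, size=200)
stages = []
for _ in range(4):
    Phi = h(xs) - phi_star                         # (200, 3) feature differences
    dx = x_star - xs                               # ideal updates
    R, *_ = np.linalg.lstsq(Phi, dx, rcond=None)
    stages.append(R)
    xs = xs + Phi @ R                              # apply update, next stage

# Test time: start from a wrong initialization and run the learned cascade.
x = 0.25
for R in stages:
    x = x + ((h(x) - phi_star) @ R).item()
print(x)   # converges toward 0.7
```

The real method does the same thing with a vector of landmark coordinates instead of a scalar, and with image features sampled around the current landmark estimates.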
About the code: there are some demo samples on the main IntraFace page, but if you are looking for the source code, I don't think you can find it.
For a start you can use vl_sift; it works even more precisely than their original descriptor, although it is not as fast as theirs for real-time use.
Regarding their implementation, no code has been released so far. They use a specialized version with a very fast histogram calculation.

OpenCV: Fingerprint Image and Compare Against Database

I have a database of images. When I take a new picture, I want to compare it against the images in this database and receive a similarity score (using OpenCV). This way I want to detect, if I have an image, which is very similar to the fresh picture.
Is it possible to create a fingerprint/hash of my database images and match new ones against it?
I'm searching for an algorithm, code snippet, or technical demo, not a commercial solution.
Best,
Stefan
As Paul R has commented, this "fingerprint/hash" is usually a set of feature vectors or feature descriptors. But most feature vectors used in computer vision are too computationally expensive to search against a database, so this task needs a special kind of feature descriptor; descriptors such as SURF and SIFT take too much time for searching, even with various optimizations.
The only thing OpenCV has for your task (object categorization) is an implementation of Bag of Visual Words (BoW).
It can compute a special kind of image feature and train a visual-word vocabulary. You can then use this vocabulary to find similar images in your database and compute a similarity score.
Here is OpenCV documentation for bag of words. Also OpenCV has a sample named bagofwords_classification.cpp. It is really big but might be helpful.
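The BoW idea itself is easy to sketch without OpenCV: cluster local descriptors into a vocabulary, then describe each image by its histogram of visual words. Everything below is synthetic stand-in data (real descriptors would come from SIFT/SURF/ORB):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for local descriptors (e.g. SIFT rows) pooled from all images.
all_desc = rng.normal(size=(600, 8))

# 1) Build a vocabulary: tiny k-means on the pooled descriptors.
k = 5
centers = all_desc[rng.choice(len(all_desc), k, replace=False)]
for _ in range(10):
    labels = np.linalg.norm(all_desc[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.stack([all_desc[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j] for j in range(k)])

def bow_histogram(desc):
    """2) Describe one image: histogram of nearest visual words, L1-normalized."""
    words = np.linalg.norm(desc[:, None] - centers[None], axis=2).argmin(axis=1)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# 3) Similarity score between two images = histogram intersection.
img_a = rng.normal(size=(40, 8))
img_b = rng.normal(size=(40, 8))
score = np.minimum(bow_histogram(img_a), bow_histogram(img_b)).sum()
print(score)   # 1.0 means identical word distributions
```

OpenCV's BOWKMeansTrainer/BOWImgDescriptorExtractor do steps 1 and 2 for you at scale; the compact histograms are what make database search cheap.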
Content-based image retrieval systems are still a field of active research: http://citeseerx.ist.psu.edu/search?q=content-based+image+retrieval
First you have to be clear, what constitutes similar in your context:
Similar color distribution: Use something like color descriptors for subdivisions of the image, you should get some fairly satisfying results.
Similar objects: Since the computer does not know what an object is, you will not get very far unless you have extensive domain knowledge about the object (or a few object classes). A good overview of the current state of research can be seen here (results) and soon here.
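For the "similar color distribution" case, here is a minimal sketch (plain NumPy, synthetic images) of the subdivision idea: split the image into a grid, take per-channel histograms per cell, and compare by histogram intersection:

```python
import numpy as np

rng = np.random.default_rng(4)

def color_descriptor(img, grid=2, bins=8):
    """Concatenated per-channel histograms for each cell of a grid x grid split."""
    h, w, _ = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            for c in range(3):
                hist, _ = np.histogram(cell[..., c], bins=bins, range=(0, 256))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

def similarity(a, b, grid=2, bins=8):
    """Histogram intersection, normalized so 1.0 = identical color distribution."""
    da = color_descriptor(a, grid, bins)
    db = color_descriptor(b, grid, bins)
    return np.minimum(da, db).sum() / (grid * grid * 3)

img1 = rng.integers(0, 256, size=(64, 64, 3))
img2 = rng.integers(0, 256, size=(64, 64, 3))
print(similarity(img1, img1), similarity(img1, img2))
```

The grid keeps some spatial information (sky at the top vs. grass at the bottom) that a single global histogram would throw away.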
There is no "serve all needs"-algorithm for the problem you described. The more you can share about the specifics of your problem, the better answers you might get. Posting some representative images (if possible) and describing the desired outcome is also very helpful.
This would be a good question for computer-vision.stackexchange.com, if it already existed.
You can use the pHash algorithm, store the hash values in your database, and then use this code:
double const mismatch = algo->compare(image1Hash, image2Hash);
Here the 'mismatch' value easily tells you the similarity between the two images.
pHash functions:
AverageHash
PHash
MarrHildrethHash
RadialVarianceHash
BlockMeanHash
ColorMomentHash
These functions are good enough to evaluate image similarity in most respects.
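To show what such a hash boils down to, here is an AverageHash-style sketch in plain NumPy (the real OpenCV img_hash implementations differ in details; the images below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

def average_hash(gray, size=8):
    """Shrink to size x size by block averaging, threshold at the mean -> 64 bits."""
    h, w = gray.shape
    crop = gray[:h - h % size, :w - w % size]   # crop to a multiple of size
    small = crop.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits: 0 = identical, 64 = completely different."""
    return int(np.count_nonzero(h1 != h2))

img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = img + rng.normal(0, 2, size=img.shape)          # slightly perturbed copy
other = rng.integers(0, 256, size=(64, 64)).astype(float)

d_noisy = hamming(average_hash(img), average_hash(noisy))
d_other = hamming(average_hash(img), average_hash(other))
print(d_noisy, d_other)   # small distance for the near-duplicate, large otherwise
```

Because the hash is only 64 bits, matching a new picture against a large database reduces to cheap Hamming-distance comparisons.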

How can I visualize changes in a large code base quality?

One of the things I’ve been thinking about a lot off and on is how we can use metrics of some kind to measure change, are we going backwards or not? This is in the context of a large, legacy code base which we are improving. Most of the code is C++ with a C heritage. Some new functions and the GUI are written in C#.
To start with, we could at least check whether the simple complexity level is changing over time in the code. The difficulty is in having a representation: we could maybe do a 3D surface, where a 2D map represents the code and a heat map of color represents complexity, with the 3D surface bulging in and out to show change.
Once you can generate some metrics as numbers, there are a ton of math systems around to take care of this kind of thing.
Over time, I'd like to have more sophisticated numbers in there but the same visualisation techniques used to represent change.
I like the idea in Crap4j of focusing on the ratio between complexity and number of unit tests covering that code.
I'd also like to include Uncle Bob's SOLID metrics and some of the Chidamber and Kemerer OO metrics. The hard part is finding tools to generate these for C++. The only option seems to be Krakatau Essential Metrics (I have no objection to paying for tools). My desire to use the CK metrics comes partly from the books Object-Oriented Metrics: Measures of Complexity by Henderson-Sellers and the earlier Object-Oriented Software Metrics.
If we start using a number of these metrics we could end up with ten or so numbers that are varying across time. I'm fairly ignorant of statistics but it seems it could be interesting to track a bunch of such metrics and then pay attention to which ones tend to vary.
Note that a related question is about measuring code quality across a large code base. I'm more interested in measuring the change.
I'd consider using a Kiviat diagram to represent multiple software metrics dimensions evolving over time. These diagrams represent multiple data points in a concave hull around a center point. Visual inspection will show where a particular metric is going up or down, and one ought to be able to compute an overall ratio of area biased by metric value using some heuristic area computation.
You can also have a glance at the NDepend documentation about code metrics. Disclaimer: I am one of the developers of the tool NDepend.
With the Code Query over LINQ (CQLinq) facility, it is possible to ask for code metric evolution/trending across two different snapshots of the code base. For example, there is a default rule proposed: Avoid making complex methods even more complex.
Several metric trending rules are proposed like:
Avoid decreasing code coverage by tests of types
Types that used to be 100% covered but not anymore
and also, since you mentioned Crap4J, the C.R.A.P. metric can be written with CQLinq, and the query can easily be tweaked to see the trend in the C.R.A.P. metric.
Concerning the visualization of code metrics, NDepend lets you visualize code metric values through an interactive treemap.
There is a fresh approach to this topic.
E.g.
https://github.com/databricks/koalas/pull/840#issuecomment-536949320
See https://softagram.com/docs/visualizing-code-changes/ for more info, or do an image search using the two keywords: softagram koalas.
Disclaimer: I work for Softagram.

What algorithm to use to obtain Objects from an Image

I would like to know what algorithms are used to take an image, find the objects present in it, and process them (extract information about them). Also, how is this done?
I agree with Sid Farkus, there is no simple answer to this question.
Maybe you can get started by checking out the Open Computer Vision Library. There is a Wiki page on object detection with links to a How-To and to papers.
You may find other examples and approaches (i.e. algorithms); it's likely that the algorithms differ by application (i.e. depending on what you actually want to detect).
There are many ways to do object detection, and it is still an open problem.
You can start with template matching. It is probably the simplest approach: it consists of convolving the known image (IA) with the new image (IB). It is a fairly simple idea because it is like applying a filter to a signal; the filter response reaches a maximum where it finds the object. But that technique has several cons: it does not handle variations in scale or rotation, so it has little real application.
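A minimal sketch of the template-matching idea in plain NumPy (brute-force sum of squared differences rather than a true convolution; the image and template are synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

def match_template(image, template):
    """Slide the template over the image; return the best (row, col) by SSD."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = rng.normal(size=(40, 40))
template = image[12:20, 25:33].copy()      # "object" planted at (12, 25)
print(match_template(image, template))     # -> (12, 25)
```

Production code would use a normalized correlation score (e.g. OpenCV's matchTemplate) and FFT-based computation instead of this O(n^4) loop, but the sliding-window idea is the same.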
Another, more robust option is feature matching, which consists of creating a dataset with features such as SIFT, SURF or ORB for different objects; with this you can train an SVM to recognize objects.
You can also check out deformable part models. However, the state of the art in object detection is based on deep learning, such as Faster R-CNN and AlexNet, which learn the features that will be used to detect/recognize the objects.
Well, this is hardly an answerable question, but for most computer vision applications a good starting point is the Hough transform.

Resources