Best CNN architectures for small images (80x80)?

I'm new to the computer vision area and I hope you can help me with some fundamental questions regarding CNN architectures.
I know some of the most well-known ones are:
VGGNet
ResNet
DenseNet
Inception
Xception
They usually expect input images of around 224x224x3, though I have also seen 32x32x3.
Regarding my specific problem: my goal is to train on biomedical images of size 80x80 for a 4-class classification, so at the end I'll have a dense layer with 4 units. My dataset is also quite small (about 1,000 images), and I wanted to use transfer learning.
Could you please help me with the following questions? It seems to me that there is no single correct answer to them, but I need to understand the right way of thinking about them. I would appreciate any pointers as well.
Should I scale my images up? Or the opposite, and shrink them to 32x32 inputs?
Should I change the input of the CNNs to 80x80? Which parameters should I mainly change? Is there any specific ratio to follow for the kernel sizes and other parameters?
Another problem: the input requires 3 channels (RGB), but I'm working with grayscale images. Will that change the results a lot?
Instead of scaling, should I just fill the surroundings (between 80x80 and 224x224) with background? Should the images be centered in that case?
Do you have any recommendations regarding what architecture to choose?
I've seen some adaptations of these architectures to 3D/volume inputs instead of 2D/images. I have a problem similar to the one I described here but with 3D inputs. Is there any common reasoning when choosing a 3D CNN architecture instead of a 2D one?
Thanks in advance!

I am assuming you have basic know-how in using CNNs for classification.
Answering questions 1-3:
You scale your images for several reasons. The smaller the image, the faster training and inference will be; however, you lose information in the process of shrinking the image. There is no one right answer and it all depends on your application. Is real-time processing important? If your answer is no, stick to the original size.
You will also need to resize your images to fit the input size of predefined models if you plan to retrain them. However, since your images are grayscale, you will either need to find models trained on grayscale data or create a 3-channel image by copying the same value into the R, G and B channels. This is not efficient, but it lets you reuse the high-quality models trained by others.
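A minimal sketch of both steps, assuming OpenCV and NumPy (the file name and the 224x224 target size are placeholders for whatever your chosen model expects):

import cv2
import numpy as np

gray = cv2.imread("scan_0001.png", cv2.IMREAD_GRAYSCALE)        # 80x80 grayscale image
resized = cv2.resize(gray, (224, 224), interpolation=cv2.INTER_CUBIC)
rgb = np.repeat(resized[:, :, np.newaxis], 3, axis=2)           # copy the same value into R, G and B
print(rgb.shape)  # (224, 224, 3)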
The best way I see for you to handle this problem is to train everything from scratch. 1,000 images can seem like a small amount of data, but since your domain is specific and only requires 4 classes, training from scratch doesn't seem that bad.
Question 4
When the size is different, always scale. Filling in the surroundings will cause the model to learn the empty space, and that is not what we want.
Also make sure the input size and format during inference is the same as the input size and format during training.
Question 5
If processing time is not a problem, ResNet. If processing time is important, then MobileNet.
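For example, here is a hedged transfer-learning sketch with Keras, keeping the original 80x80 size and assuming the grayscale images have already been replicated to 3 channels as described above (one reasonable setup, not the only one):

import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet",        # reuse the pretrained features
    include_top=False,         # drop the 1000-class ImageNet head
    input_shape=(80, 80, 3),   # 80x80 is above ResNet50's 32x32 minimum
)
base.trainable = False         # freeze the backbone for a ~1,000-image dataset

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),  # your 4-class output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])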
Question 6
It depends on your input data. If you have 3D data, then you can use a 3D architecture. More input information usually helps classification, but 2D will be enough to solve certain problems. If you can classify the samples just by looking at the 2D images, then most probably 2D images will be enough to complete the task.
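If it helps to see the difference concretely, here is a hedged Keras sketch (shapes are illustrative only); the 2D and 3D versions differ mainly in the layer types and the rank of the input:

import tensorflow as tf

model_2d = tf.keras.Sequential([
    tf.keras.Input(shape=(80, 80, 1)),        # height x width x channels
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

model_3d = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 80, 80, 1)),    # depth x height x width x channels
    tf.keras.layers.Conv3D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])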
I hope this clears up some of your problems and points you toward a proper solution.

Related

What image characteristics are best for training object detection models?

I'm a novice at ML but I'm trying to create a model to detect a few objects in my custom photos. Before training my model, I'd like to know if and how I should modify my images to improve its accuracy.
I don't have access to the photos at the moment, however, I can provide an example of the characteristics of the images I'll be working with:
There's a white piece of paper (so white background), and on it are a bunch of insects.
There are a few different kinds of insects, and they look distinct from each other (different colors, shapes, sizes, etc.).
The camera is pretty zoomed out, so each insect is probably ~ 40x40 pixels (so it's not really high definition).
I don't know much about machine learning, but I'd assume that because the insects will be captured in low quality, the model will mainly end up relying on the general shape and color to distinguish/identify the insects (e.g. long or circular spot on photo, etc.).
Therefore, I was wondering if I should do anything to the photos to achieve higher accuracy (before I train the model). For example, if I increase the contrast in my photos, would the insects' borders be more defined and thus easier for the model to detect/identify? Or should I convert the images to grayscale, or stick with RGB? Are there any other factors that should be considered? Any help will be greatly appreciated!
Edit: I'm not sure why someone voted to close this as opinion-based; I'm not asking for an opinion. I'm trying to understand more about the image-detection process by learning what constitutes a "good" photo versus a "bad" one. Even though this sounds opinion-based, it's not. For example, I'm sure having extremely low-light photos would be terrible for training models. That wouldn't be an opinion, but an evidence-based fact.
Similarly, I'd like to learn what kinds of general characteristics make "better" photos, such as if I should use high contrast, brightness, etc. I think this is an answerable question that is not opinion-based.
You can employ standard preprocessing strategies like:
Normalization of the RGB values
Horizontal/Vertical flipping
Affine transformation
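A minimal sketch of those three steps, assuming OpenCV and NumPy (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("insects.jpg")                 # BGR, uint8

norm = img.astype(np.float32) / 255.0           # 1. normalize values to [0, 1]

flipped_h = cv2.flip(img, 1)                    # 2. horizontal flip
flipped_v = cv2.flip(img, 0)                    #    vertical flip

h, w = img.shape[:2]                            # 3. small affine transform
M = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)   # 15-degree rotation
M[:, 2] += (5, -3)                              #    plus a small shift
warped = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)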
P.S. This is more of a comment than an answer (I can't post comments).

Inverting the goal of neural networks

I am studying neural networks, or more specifically image classification at the moment. While I was reading, I was wondering if the following has ever been done/ is doable. If anybody could point me to some sources or ideas, I'd appreciate it!
In a traditional neural network, you have a training data set of images and the weights of the neurons in the network. The goal is to optimize the weights so that classification is accurate on the training data and generalizes as well as possible to new images.
I was wondering if you could reverse this:
Given a neural network and the weights to its neurons, generate a set of images corresponding to the classes that the network separates, i.e., a proto-type of the kinds of images this specific network is able to classify well.
In my mind it would work as follows (I'm sure this is not quite achievable, but just to get the idea across):
Imagine a neural network that is able to classify images containing labels cat, dog and neither of those.
What I want is the "inverse", i.e. an image of a cat, an image of a dog and one that is "furthest away" from the other two classes.
I think that this could be done by generating images and minimizing the loss function for one specific class, while maximizing it for all other classes at the same time.
Is this kind of how Google Deep Dream visualizes what it is "dreaming"?
I hope it is clear what I mean, if not I will answer any questions.
Is this kind of how Google Deep Dream visualizes what it is "dreaming"?
Pretty much, it seems, at least that's how the people behind it explain it:
One way to visualize what goes on [in a neural network layer] is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work [...]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
Source - The whole blog post is worth reading.
I think you can understand the mainstream approach from Karpathy's blog:
http://karpathy.github.io/2015/03/30/breaking-convnets/
Normal ConvNet training: "What happens to the score of the correct class when I wiggle this parameter?"
Creating fooling images: "What happens to the score of (whatever class you want) when I wiggle this pixel?"
Fooling the classifier with an image is very close to what you ask. For your goal you need to add some regularization to the loss function to avoid fully misleading results - the image with the absolute minimum loss can be a very distorted picture.
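A hedged PyTorch sketch of that idea - gradient ascent on the input image for one class score, with a small regularizer; the pretrained ResNet-18 and class index 954 (assumed to be "banana" in ImageNet-1k) are just convenient choices:

import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)                    # optimize the image, not the weights

img = (0.1 * torch.randn(1, 3, 224, 224)).requires_grad_(True)
optimizer = torch.optim.Adam([img], lr=0.05)
target_class = 954                             # assumed ImageNet index for "banana"

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    # maximize the target logit, lightly penalize extreme pixel values
    loss = -logits[0, target_class] + 1e-3 * img.pow(2).mean()
    loss.backward()
    optimizer.step()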

Computer Science Theory: Image Similarity

So I'm trying to run a comparison of different images and was wondering if anyone could point me in the right direction for some basic metrics I can take for the group of images.
Assuming I have two images, A and B, I pretty much want as much data as possible about each so I can later programmatically compare them. Things like "general color", "general shape", etc. would be great.
If you can help me find specific properties and algorithms to compute them that would be great!
Thanks!
EDIT: The end goal here is to be able to have a computer tell me how "similar" two pictures are. If two images are the same but in one someone blurred out a face, they should register as fairly similar. If two pictures are completely different, the computer should be able to tell.
What you are talking about is very general and non-specific.
Image information is formalised as Entropy.
What you seem to be looking for is basically feature extraction and then comparing these features. There are tons of features that can be extracted but a lot of them could be irrelevant depending on the differences in the pictures.
There are space-domain and frequency-domain descriptors of the image, each of which can be useful here. I can probably name more than 100 descriptors, but in your case only one might be sufficient, or none might be useful.
Pre-processing is also important, perhaps you could turn your images to grey-scale and then compare them.
This field is so immensely diverse, so you need to be a bit more specific.
(Update)
What you are looking for is the topic of hundreds if not thousands of scientific articles. But perhaps a simplistic approach can work.
So, assuming that the question here is not about identifying objects, that there is no transform, translation, scale or rotation involved, and that we are only dealing with two images which are the same except that one could have more noise added to it:
1) Image domain (space domain): compare the pixels one by one and add up the squares of the differences. Normalise this value by width*height - just divide by the number of pixels. This can be a useful measure of similarity.
2) Frequency domain: convert each image to a frequency-domain image (using the FFT in an image-processing tool such as OpenCV), which will be 2D as well. Do the same squared-difference sum as above, but perhaps limit it to the lower frequencies. Then normalise by the number of pixels. This fares better against noise, translation and small rotations, but not against scale changes.
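A minimal sketch of both measures, assuming NumPy and two equal-sized grayscale arrays a and b (the loading code is omitted):

import numpy as np

def spatial_mse(a, b):
    # 1) space domain: mean of squared pixel differences
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.mean(d ** 2)

def frequency_mse(a, b, keep=32):
    # 2) frequency domain: squared differences of low-frequency FFT magnitudes
    fa = np.abs(np.fft.fftshift(np.fft.fft2(a)))
    fb = np.abs(np.fft.fftshift(np.fft.fft2(b)))
    cy, cx = fa.shape[0] // 2, fa.shape[1] // 2
    fa = fa[cy - keep:cy + keep, cx - keep:cx + keep]   # limit the frequencies
    fb = fb[cy - keep:cy + keep, cx - keep:cx + keep]
    return np.mean((fa - fb) ** 2)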
SURF is a good candidate algorithm for comparing images
Wikipedia Article
A practical example (in Mathematica): identifying corresponding points in two images of the moon (rotated, colorized and blurred).
You can also calculate the sum of differences between the histogram bins of the two images. But it is not a silver bullet either...
I recommend taking a look at OpenCV. The package offers most (if not all) of the techniques mentioned above.

Detecting if two images are visually identical

Sometimes two image files may be different on a file level, but a human would consider them perceptively identical. Given that, now suppose you have a huge database of images, and you wish to know if a human would think some image X is present in the database or not. If all images had a perceptive hash / fingerprint, then one could hash image X and it would be a simple matter to see if it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command line tool or a library I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
try $image->Sample("160x160!");
try $image->Modulate(saturation=>-100);
try $image->Blur(radius=>3,sigma=>99);
try $image->Normalize();
try $image->Equalize();
try $image->Sample("16x16");
try $image->Threshold();
try $image->Set(magick=>'mono');
($blob) = $image->ImageToBlob();
edit: Warning! ImageMagick $image object seems to contain information about the creation time of an image file that was read in. This means that the blob you get will be different even for the same image, if it was retrieved at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to let it print "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods will make it much faster than the algorithm described in the question.
The usual algorithm doesn't work for images that are not the same scale or rotation, though. You could pre-rotate or pre-scale them, but that's really processor intensive. Apparently you can also do the correlation in a log-polar space and it will be invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
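A hedged sketch of plain (translation-only) phase correlation with OpenCV; the file names are placeholders and both images must be the same size:

import cv2
import numpy as np

a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

(shift_x, shift_y), response = cv2.phaseCorrelate(a, b)
# a high, sharp peak response suggests the images match up to a translation;
# a low response suggests they genuinely differ
print(shift_x, shift_y, response)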
Colour histogram is good for the same image that has been resized, resampled etc.
If you want to match different people's photos of the same landmark, it's trickier - look at Haar classifiers. OpenCV is a great free library for image processing.
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technology like a support vector machine, neural network, naive Bayes classifier or Bayesian network would be best at this type of problem. I've written one each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to a 1x1 pixel... if they are exactly equal, there is a small probability they are the same picture.
Now resize to a 2x2 pixel image; if all 4 pixels are exactly equal, there is a larger probability they are the same.
Then 3x3; if all 9 pixels are exactly equal... a good chance, etc.
Then 4x4; if all 16 pixels are exactly equal... a better chance.
Etc...
Doing it this way, you can make efficiency improvements... if the 1x1 pixel grid is off by a lot, why bother checking the 2x2 grid?
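A rough sketch of this coarse-to-fine idea with Pillow; a small tolerance replaces exact equality so that resampling noise doesn't break the comparison:

from PIL import Image

def probably_same(path_a, path_b, sizes=(1, 2, 4, 8, 16), tol=12):
    a = Image.open(path_a).convert("L")
    b = Image.open(path_b).convert("L")
    for s in sizes:                            # 1x1, then 2x2, then 4x4, ...
        pa = list(a.resize((s, s)).getdata())
        pb = list(b.resize((s, s)).getdata())
        if any(abs(x - y) > tol for x, y in zip(pa, pb)):
            return False                       # bail out early, skip the finer grids
    return True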
If you have lots of images, a color histogram could be used to get rough closeness of images before doing a full image comparison of each image against each other one (i.e. O(n^2)).
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see if they are REALLY different... I guess it will remove lots of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the amount of RGB in each pixel. You could also use other metrics such as luminosity and things like that.

How can I measure the similarity between two images? [closed]

I would like to compare a screenshot of one application (could be a Web page) with a previously taken screenshot to determine whether the application is displaying itself correctly. I don't want an exact-match comparison, because the appearance could be slightly different (in the case of a Web app, depending on the browser, some element could be at a slightly different location). It should give a measure of how similar the screenshots are.
Is there a library / tool that already does that? How would you implement it?
This depends entirely on how smart you want the algorithm to be.
For instance, here are some issues:
cropped images vs. an uncropped image
images with a text added vs. another without
mirrored images
The easiest and simplest algorithm I've seen for this is just to do the following steps to each image:
scale to something small, like 64x64 or 32x32, disregard aspect ratio, use a combining scaling algorithm instead of nearest pixel
scale the color ranges so that the darkest is black and lightest is white
rotate and flip the image so that the lightest color is top-left, then top-right is the next darker, and bottom-left the next darker after that (as far as possible, of course)
Edit A combining scaling algorithm is one that when scaling 10 pixels down to one will do it using a function that takes the color of all those 10 pixels and combines them into one. Can be done with algorithms like averaging, mean-value, or more complex ones like bicubic splines.
Then calculate the mean distance pixel-by-pixel between the two images.
To look up a possible match in a database, store the pixel colors as individual columns in the database, index a bunch of them (but not all, unless you use a very small image), and do a query that uses a range for each pixel value, ie. every image where the pixel in the small image is between -5 and +5 of the image you want to look up.
This is easy to implement, and fairly fast to run, but of course won't handle most advanced differences. For that you need much more advanced algorithms.
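A hedged sketch of those steps (minus the rotate/flip canonicalisation), assuming Pillow and NumPy:

from PIL import Image, ImageOps
import numpy as np

def tiny_fingerprint(path, size=32):
    img = Image.open(path).convert("L")
    img = img.resize((size, size))        # recent Pillow defaults to a combining filter; aspect ratio ignored
    img = ImageOps.autocontrast(img)      # darkest -> black, lightest -> white
    return np.asarray(img, dtype=np.float64)

def mean_pixel_distance(path_a, path_b):
    a, b = tiny_fingerprint(path_a), tiny_fingerprint(path_b)
    return np.mean(np.abs(a - b))         # lower means more similar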
The 'classic' way of measuring this is to break the image up into some canonical number of sections (say a 10x10 grid), then compute a histogram of RGB values inside each cell and compare the corresponding histograms. This type of algorithm is preferred because of both its simplicity and its invariance to scaling and (small!) translation.
Use a normalised colour histogram (read the section on applications here). Colour histograms are commonly used in image retrieval/matching systems and are a standard way of matching images that is very reliable, relatively fast and very easy to implement.
Essentially a colour histogram will capture the colour distribution of the image. This can then be compared with another image to see if the colour distributions match.
This type of matching is pretty resilient to scaling (once the histogram is normalised), rotation, shifting, movement, etc.
Avoid pixel-by-pixel comparisons as if the image is rotated/shifted slightly it may lead to a large difference being reported.
Histograms would be straightforward to generate yourself (assuming you can get access to the pixel values), but if you don't feel like it, the OpenCV library is a great resource for doing this kind of thing. Here is a PowerPoint presentation that shows you how to create a histogram using OpenCV.
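A minimal sketch of normalised colour-histogram matching with OpenCV (file names are placeholders):

import cv2

def colour_histogram(path, bins=8):
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()       # normalise so image size doesn't matter

h1 = colour_histogram("shot_a.png")
h2 = colour_histogram("shot_b.png")
print(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))   # close to 1.0 = very similar distributions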
Don't video encoding algorithms like MPEG compute the difference between each frame of a video so they can just encode the delta? You might look into how video encoding algorithms compute those frame differences.
Look at this open-source image search application: http://www.semanticmetadata.net/lire/. It describes several image similarity algorithms, three of which are from the MPEG-7 standard (ScalableColor, ColorLayout, EdgeHistogram), plus Auto Color Correlogram.
You could use a pure mathematical approach of O(n^2), but it will be useful only if you are certain that there's no offset or anything like that. (Although if you have a few objects with homogeneous coloring, it will still work pretty well.)
Anyway, the idea is to compute the normalized dot-product of the two matrices:
C = (sum(Pij*Qij))^2 / (sum(Pij^2) * sum(Qij^2)).
This formula is essentially the (squared) cosine of the angle between the matrices viewed as vectors (weird).
The greater the similarity (say Pij = Qij, or more generally P proportional to Q), the closer C is to 1; the more dissimilar the two pixel patterns are (the closer they are to orthogonal when viewed as vectors), the closer C gets to zero.
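A small NumPy sketch of that normalised dot-product, assuming two equal-sized greyscale arrays P and Q:

import numpy as np

def cosine_squared(P, Q):
    P = P.astype(np.float64).ravel()
    Q = Q.astype(np.float64).ravel()
    return (P @ Q) ** 2 / ((P @ P) * (Q @ Q))   # 1.0 when P and Q are proportional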
You'll need pattern recognition for that. To determine small differences between two images, Hopfield nets work fairly well and are quite easy to implement. I don't know any available implementations, though.
A ruby solution can be found here
From the readme:
Phashion is a Ruby wrapper around the pHash library, "perceptual hash", which detects duplicate and near duplicate multimedia files
How you measure similarity between two images depends entirely on what you would like to measure - for example contrast, brightness, modality, noise - and then you choose the most suitable similarity measure. There are MAD (mean absolute difference) and MSD (mean squared difference), which are good for measuring brightness differences; there is also CR (correlation coefficient), which is good at representing the correlation between two images. You could also choose histogram-based similarity measures like SDH (standard deviation of the difference image histogram), or multi-modality similarity measures like MI (mutual information) or NMI (normalized mutual information).
Because these similarity measures are computationally expensive, it is advisable to scale the images down before applying them.
I wonder (and I'm really just throwing the idea out there to be shot down) if something could be derived by subtracting one image from the other, compressing the resulting image as a JPEG or GIF, and taking the file size as a measure of similarity.
If you had two identical images, you'd get a flat, uniform difference image, which would compress really well. The more the images differed, the more complex the difference would be to represent, and hence the less compressible it would be.
Probably not an ideal test, and probably much slower than necessary, but it might work as a quick and dirty implementation.
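A quick-and-dirty sketch of that idea with Pillow; the file names are placeholders, both images must be the same size, and the compressed byte count is only a rough proxy for similarity:

import io
from PIL import Image, ImageChops

a = Image.open("shot_a.png").convert("RGB")
b = Image.open("shot_b.png").convert("RGB")

diff = ImageChops.difference(a, b)     # identical regions come out flat (black)
buf = io.BytesIO()
diff.save(buf, format="JPEG", quality=75)
print(len(buf.getvalue()))             # smaller means more similar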
You might look at the code of the open-source tool findimagedupes, though it appears to have been written in Perl, so I can't say how easy it will be to parse...
Reading the findimagedupes page that I linked, I see that there is a C++ implementation of the same algorithm. Presumably this will be easier to understand.
And it appears you can also use gqview.
Well, not to answer your question directly, but I have seen this happen. Microsoft recently launched a tool called PhotoSynth which does something very similar to determine overlapping areas in a large number of pictures (which could be of different aspect ratios).
I wonder if they have any available libraries or code snippets on their blog.
To expand on Vaibhav's note, Hugin is an open-source 'autostitcher' which should have some insight into the problem.
There's software for content-based image retrieval, which does (partially) what you need. All references and explanations are linked from the project site and there's also a short text book (Kindle): LIRE
You can use a Siamese network to see whether two images are similar or dissimilar by following this tutorial. The tutorial clusters similar images; you can use the L2 distance to measure the similarity of two images.
Beyond Compare has pixel-by-pixel comparison for images.
If this is something you will be doing on an occasional basis and doesn't need automating, you can do it in an image editor that supports layers, such as Photoshop or Paint Shop Pro (probably GIMP or Paint.Net too, but I'm not sure about those). Open both screen shots, and put one as a layer on top of the other. Change the layer blending mode to Difference, and everything that's the same between the two will become black. You can move the top layer around to minimize any alignment differences.
Well, a really basic method would be to go through every pixel colour and compare it with the corresponding pixel colour in the second image - but that's probably a very, very slow solution.
