How to deal with different input sizes in CNN models - image

To give a bit of context: I'm fairly new to machine learning; I've read articles and watched some educational videos on how CNNs work.
I've tried two models so far: a random person's CNN model and Google's Inception v3 model. I could understand that random person's CNN model and what's happening inside it. What I don't understand is how to make it work with different input sizes that are not just a different scale or rotation of the same image. Let me explain what I'm doing:
I basically want to be able to classify a picture (containing a logo) as a brand. For example, you give me a picture that contains the Starbucks logo and our model will tell you it's Starbucks. There is going to be only one logo in every picture (for my case).
My first try was with the Inception model: I trained for 20,000 iterations with 2,000 Starbucks receipt pictures, 2,000 Walmart receipt pictures, and 2,000 random pictures unrelated to Starbucks or Walmart, so I could also classify a picture as 'Neither'. I got 88% accuracy, which is not good enough, and the cross entropy doesn't drop below 0.4. Then I tried cropping the logo out of those pictures and training again. This time it would work like a charm on the cropped pictures, but on bigger pictures containing the Starbucks logo (or Walmart, for that matter) it would fail miserably.
The same thing happens with DeepLogo's approach: https://github.com/satojkovic/DeepLogo
It works well with 32 x 32 pictures, but once I change the input size, it fails.
How can I overcome this?
EDIT: I'm using this for retraining on top of the Inception model: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/image_retraining
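For concreteness, here is a sketch of one common workaround for exactly this situation: keep the classifier that works well on tight logo crops, and slide it over patches of the larger picture, taking the most confident non-'Neither' prediction. This is only an illustration, not part of the retraining script; `classify_patch` is a placeholder for whatever model was retrained, and the window/stride sizes are arbitrary.

```python
# Sketch: run a crop-trained classifier over patches of a larger image
# (sliding window). classify_patch is a placeholder for the retrained model.
import numpy as np
from PIL import Image

PATCH = 299    # Inception v3's expected input size
STRIDE = 150   # overlap so a logo isn't split across two windows

def best_logo_guess(image_path, classify_patch):
    """classify_patch(HxWx3 uint8 array) -> dict of class probabilities."""
    img = np.array(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    best = ("Neither", 0.0)
    for y in range(0, max(h - PATCH, 1), STRIDE):
        for x in range(0, max(w - PATCH, 1), STRIDE):
            patch = img[y:y + PATCH, x:x + PATCH]
            probs = classify_patch(patch)
            label = max(probs, key=probs.get)
            if label != "Neither" and probs[label] > best[1]:
                best = (label, probs[label])
    return best
```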

Pooling layer?
From my understanding, a pooling layer improves statistical efficiency and also adds translation invariance. Most importantly for your case, it can be used with images of various sizes.
Maybe you could do some research on that. The book "Deep Learning" by Goodfellow would be my recommendation.
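To make that concrete, here is a minimal sketch (assuming TensorFlow/Keras, which the answer above does not specify): a small convolutional stack ending in global average pooling accepts any spatial size, because the pooling collapses the feature map to a fixed-length vector before the dense layer.

```python
# Minimal sketch: global average pooling makes the network input-size agnostic.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),           # height/width left unspecified
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),         # fixed-length vector for any input size
    tf.keras.layers.Dense(3, activation="softmax"),   # e.g. Starbucks / Walmart / Neither
])

# Both shapes work with the same weights:
model(tf.random.uniform((1, 32, 32, 3)))
model(tf.random.uniform((1, 480, 640, 3)))
```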

Related

Tensorflow object detection training

I would like to detect objects (upper half of the image below) in images (bottom half). Is it smart to train on images at a different scale (or size)? Or should I train it with parts of the bottom half of the image below? What is the best way to label the objects for training?
Kind regards
If I understand your question correctly: if you are exclusively interested in detecting objects at roughly the scale of the bottom picture, your training data should consist of images like that one. To add on: try to include at least a decent range of sizes around that scale, so that small deviations from a specific scale don't throw the detector off, but generally you should be fine.
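If it helps, here is a small sketch of what "a decent range of sizes" could look like as a training-time augmentation step. OpenCV is assumed, and the 0.8 to 1.2 range is an illustrative choice of mine, not something prescribed above.

```python
# Sketch: modest scale jitter so the detector tolerates small scale deviations.
import random
import cv2

def scale_jitter(image, low=0.8, high=1.2):
    """Resize an image by a random factor drawn from [low, high]."""
    factor = random.uniform(low, high)
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * factor), int(h * factor)))
```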

Can a CNN be trained to match two different pictures of the same painted rock?

I am trying to design a convolutional neural network that will match two images together. The images will be of painted rocks. One person will paint a rock, take a picture of it, and hide it. Another person will find the rock and take a picture of it. I want the CNN to be able to search a database of images and match the photos to identify the right rock.
I understand how to train a CNN to classify a photo by giving it many examples and then asking it to match a new picture to the category. I am having trouble figuring out a way to train the CNN to match two images together out of one database. The photos will be slightly different because two people are taking a picture of the same rock at different times.
I can usually, but not always, reduce the size of the database search by using GPS locations. Meaning, the person who hides the rock will hopefully record the location, and the person who finds it will also record where they found it. This way the CNN only has to compare, say, six rocks from around that location to the rock the person found. The rock images will have profiles associated with them to allow for communication between the finder and the hider.
Does anyone have any ideas of how I can set this up?
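One setup that might fit (a sketch under assumptions, not the only option) is to skip per-rock classification entirely: run both photos through a pretrained CNN, keep the pooled feature vector, and rank the handful of GPS-nearby candidates by cosine similarity. The code below assumes TensorFlow/Keras and uses MobileNetV2 purely as an example backbone.

```python
# Sketch: compare rock photos by pretrained-CNN embeddings + cosine similarity.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")

def embed(image_path):
    """Return a pooled feature vector for one photo."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.array(img)[np.newaxis, ...])
    return base.predict(x)[0]

def most_similar(query_path, candidate_paths):
    """Pick the GPS-nearby candidate whose embedding best matches the query."""
    q = embed(query_path)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidate_paths, key=lambda p: cosine(q, embed(p)))
```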

Inverting the goal of neural networks

I am studying neural networks, or more specifically image classification at the moment. While I was reading, I was wondering if the following has ever been done/ is doable. If anybody could point me to some sources or ideas, I'd appreciate it!
In a traditional neural network, you have a training data set of images and the weights of the neurons in the network. The goal is to optimize the weights so that classification is accurate on the training data and generalizes as well as possible to new images.
I was wondering if you could reverse this:
Given a neural network and the weights to its neurons, generate a set of images corresponding to the classes that the network separates, i.e., a proto-type of the kinds of images this specific network is able to classify well.
In my mind it would work as follows (I'm sure this is not quite achievable, but just to get the idea across):
Imagine a neural network that is able to classify images into the labels cat, dog, and neither of those.
What I want is the "inverse", i.e. an image of a cat, an image of a dog and one that is "furthest away" from the other two classes.
I think that this could be done by generating images and minimizing the loss function for one specific class, while maximizing it for all other classes at the same time.
Is this kind of how Google Deep Dream visualizes what it is "dreaming"?
I hope it is clear what I mean, if not I will answer any questions.
Is this kind of how Google Deep Dream visualizes what it is "dreaming"?
Pretty much, it seems, at least that's how the people behind it explain it:
One way to visualize what goes on [in a neural network layer] is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work [...]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
Source - The whole blog post is worth reading.
I think you can understand the mainstream approach from Karpathy's blog:
http://karpathy.github.io/2015/03/30/breaking-convnets/
Normal ConvNet training: "What happens to the score of the correct class when I wiggle this parameter?"
Creating fooling images: "What happens to the score of (whatever class you want) when I wiggle this pixel?"
Fooling the classifier with an image is very close to what you ask. For your goal, you need to add some regularization to your loss function to avoid misleading results: the image with the absolute minimum loss can be a very distorted picture.
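For illustration, here is a rough sketch of that gradient-ascent idea (assuming a trained Keras classifier called `model`, which is my own placeholder): start from noise and nudge the pixels to raise the score of one class. Realistic-looking results need the prior/regularization mentioned above.

```python
# Sketch: class visualization by gradient ascent on the input pixels.
import tensorflow as tf

def visualize_class(model, class_index, steps=200, lr=1.0):
    img = tf.Variable(tf.random.uniform((1, 224, 224, 3)))  # start from noise
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(img, training=False)[0, class_index]
        grads = tape.gradient(score, img)
        # normalized gradient step toward a higher class score
        img.assign_add(lr * grads / (tf.norm(grads) + 1e-8))
        img.assign(tf.clip_by_value(img, 0.0, 1.0))
    return img[0].numpy()
```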

Image feature identification

I am looking for a solution to do the following (the focus of my question is step 2):
1. Take a picture of a house, including the front yard.
2. Extract information from the picture, like the dimensions and locations of the house, trees, sidewalk, and car, as well as the textures and colors of the house, cars, trees, and sidewalk.
3. Use the extracted information to generate a model.
How can I extract that information?
You could also consult Tatiana Jaworska's research on this. As I understand it, it details at least one new algorithm for feature extraction (targeted at roofs, doors, ...) by colour (RGB). More intriguingly, the last publication also uses parameterized objects to be identified in house images... that might be a really good starting point for what you're trying to do.
link to her publications:
http://www.springerlink.com/content/w518j70542780r34/
http://portal.acm.org/citation.cfm?id=1578785
http://www.ibspan.waw.pl/~jaworska/TJ_BOS2010.pdf
Yes, you can extract this information from a picture.
1. Identify these objects in the picture using some detection algorithms.
2. Measure the objects' dimensions and generate a model from the extracted information.
Well, actually your desired goal is not so easy to achieve. First of all, you'll need a good way to figure out what is what and where it is in your image. And there simply is no easy "algorithm" for detecting houses/cars/whatever in an image. There are ways to segment different objects (like cars) from an image, but they don't work in general. Houses are especially hard, since each house looks different and it's hard to find one solid criterion for "this is a house and this is not"...
Am I right in assuming that you are trying to simply photograph a house (with front yard) and build a textured 3D model out of it? That is not going to work from a single shot, since you need several photos of the house to recover the positions of walls/corners and everything else in 3D space (there are approaches that attempt mesh reconstruction from one image only, but they lack depth information and the results are fairly poor). So if you would like to create 3D models, you will need several photos of the house from different angles.
There are several different approaches that use this kind of technique to reconstruct real world objects to triangle-meshes.
Basically they work on the following principle:
Try to find points in images taken from different viewpoints that correspond to the same point on the object. Considering you are photographing a house, these could be salient structures like corners of windows/doors or corners and edges on the walls/roof/...
Knowing where one and the same point of your house appears in several different photos, and knowing the camera positions for those photos, you can reconstruct that point in 3D space.
Doing this for many such corresponding points lets you reconstruct the shape of your house as a 3D model by triangulating the points.
Taking parts of the images as textures and mapping them onto the generated model works as well, since you know which image region corresponds to which surface.
You should have a look at these papers:
http://www.graphicon.ru/1999/3D%20Reconstruction/Valiev.pdf
http://people.csail.mit.edu/wojciech/pubs/LabeledRec.pdf
http://people.csail.mit.edu/sparis/publi/2006/oceans/Paris_06_3D_Reconstruction.ppt
The second paper even has an example of doing exactly what you are trying to achieve, namely reconstructing a textured 3D model of a house photographed from different angles.
The third link is a PowerPoint presentation that shows how the reconstruction works and what its drawbacks are.
So you should get familiar with these papers to see what problems you are up against... If you then want to try this on your own, have a look at OpenCV. This library provides methods for feature extraction in images. You can then try to find salient points in each image and match them.
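A minimal sketch of that last step with OpenCV (ORB is used here only because it ships with the default build, unlike SIFT/SURF in older versions; the two file names are placeholders for photos of the same house from different angles):

```python
# Sketch: detect salient points in two photos and match them with OpenCV.
import cv2

img1 = cv2.imread("house_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("house_view_2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Visualize the 50 best correspondences.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.jpg", vis)
```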
Good luck on your project... If you have problems, please keep asking!
I suggest looking at this blog:
https://jwork.org/main/node/35
It shows how to identify certain features in images using a convolutional neural network. This particular blog post discusses how to identify human faces in images from a large set of random images. You can adapt this example to train a neural network on other images. Note that even in the case of human faces, the identification rate is about 85%; more complex objects can be even harder to identify.

Computer Science Theory: Image Similarity

So I'm trying to run a comparison of different images and was wondering if anyone could point me in the right direction for some basic metrics I can compute for the group of images.
Assuming I have two images, A and B, I pretty much want as much data as possible about each so I can later programmatically compare them. Things like "general color", "general shape", etc. would be great.
If you can help me find specific properties and algorithms to compute them that would be great!
Thanks!
EDIT: The end goal here is to be able to have a computer tell me how "similar" two pictures are. If two images are the same but in one someone blurred out a face, they should register as fairly similar. If two pictures are completely different, the computer should be able to tell.
What you are asking about is very general and non-specific.
Image information is formalised as entropy.
What you seem to be looking for is basically feature extraction and then comparing these features. There are tons of features that can be extracted but a lot of them could be irrelevant depending on the differences in the pictures.
There are spatial-domain and frequency-domain descriptors of the image, each of which can be useful here. I can probably name more than 100 descriptors, but in your case only one could be sufficient, or none could be useful.
Pre-processing is also important, perhaps you could turn your images to grey-scale and then compare them.
This field is so immensely diverse, so you need to be a bit more specific.
(Update)
What you are looking for is a topic of hundreds if not thousands of scientific articles. But well, perhaps a simplistic approach can work.
So, assuming the goal here is not identifying objects, there is no transform, translation, scale, or rotation involved, and we are only dealing with two images that are the same except that one may have had noise added to it:
1) Image domain (spatial domain): Compare the pixels one by one and add up the squares of the differences. Normalise this value by width * height, i.e. just divide by the number of pixels. This could be a useful measure of similarity.
2) Frequency domain: Convert the image to its frequency-domain representation (using the FFT in an image processing tool such as OpenCV), which will be 2D as well. Compute the same squared difference as above, but perhaps limit the frequencies you include, then normalise by the number of pixels. This fares better under noise, translation, and small rotations, but not under scaling.
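Here is a small sketch of both measures (NumPy is assumed; OpenCV's FFT works just as well), for two grayscale arrays of equal size:

```python
# Sketch: (1) pixel-wise MSE and (2) low-frequency spectrum MSE.
import numpy as np

def pixel_mse(a, b):
    # 1) spatial domain: mean squared pixel difference
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def spectrum_mse(a, b, keep=64):
    # 2) frequency domain: compare only a central (low-frequency) block;
    #    assumes the images are at least 2*keep pixels on each side
    fa = np.abs(np.fft.fftshift(np.fft.fft2(a)))
    fb = np.abs(np.fft.fftshift(np.fft.fft2(b)))
    cy, cx = fa.shape[0] // 2, fa.shape[1] // 2
    fa = fa[cy - keep:cy + keep, cx - keep:cx + keep]
    fb = fb[cy - keep:cy + keep, cx - keep:cx + keep]
    return np.mean((fa - fb) ** 2)
```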
SURF is a good candidate algorithm for comparing images
Wikipedia Article
A practical example (in Mathematica) identifies corresponding points in two images of the moon (rotated, colorized, and blurred).
You can also calculate the sum of differences between the histogram bins of the two images. But that is not a silver bullet either...
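For reference, a quick way to do that with OpenCV (correlation is one of several built-in comparison methods; grayscale images and 64 bins are illustrative choices of mine):

```python
# Sketch: compare two images by their grayscale histograms.
import cv2

def histogram_similarity(path_a, path_b, bins=64):
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    ha = cv2.calcHist([a], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([b], [0], None, [bins], [0, 256])
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    # 1.0 means identical histograms; lower values mean less similar
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)
```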
I recommend taking a look at OpenCV. The package offers most (if not all) of the techniques mentioned above.
