Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Does anyone have an idea what sort of algorithm Google might be using to find similar images?
No, but they could be using SIFT.
I'm not sure this has much to do with image processing. When I ask for "similar images" of the Eiffel tower, I get a bunch of photos of Paris Hilton, and street maps from Paris. Curiously, all of these images have the word "Paris" in the file name.
Currently the Google Image Search provides these filtering options:
Image size
Face detection
Continuous-tone ("Photo") vs. smooth shading ("Clipart") vs. bitonal ("Line drawing")
Color histogram
These options can be seen in its Image Search Result page.
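The color-histogram filter mentioned above can be sketched in a few lines. This is a minimal illustration, not Google's actual implementation: it quantizes each RGB pixel into a coarse bin, counts occurrences, and compares two histograms with a normalized L1 distance (the bin count and distance metric are my own arbitrary choices).

```python
from collections import Counter

def color_histogram(pixels, bins_per_channel=4):
    """Quantize each (r, g, b) pixel into a coarse bin and count occurrences.

    With 4 bins per channel the histogram has 4**3 = 64 buckets, which is
    enough to compare the overall color "feel" of two images.
    """
    step = 256 // bins_per_channel
    hist = Counter()
    for r, g, b in pixels:
        hist[(r // step, g // step, b // step)] += 1
    return hist

def histogram_distance(h1, h2):
    """L1 distance between two histograms, normalized by total pixel count."""
    keys = set(h1) | set(h2)
    total = sum(h1.values()) + sum(h2.values())
    return sum(abs(h1[k] - h2[k]) for k in keys) / total

# Toy data: a mostly-red "image" vs. a mostly-blue one.
red_pixels = [(250, 10, 10)] * 90 + [(10, 10, 250)] * 10
blue_pixels = [(10, 10, 250)] * 90 + [(250, 10, 10)] * 10
print(histogram_distance(color_histogram(red_pixels),
                         color_histogram(blue_pixels)))
```

Images with similar color distributions produce small distances, which is all a "similar colors" filter needs.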
I don't know about faces, but see at least:
http://www.incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet08spie
Compare two images the python/linux way
I have heard that one should use this when comparing images (i.e., build the probability model, compute the probabilities, then apply this):
http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
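The idea above can be sketched directly from the definition: treat each image's normalized intensity histogram as a probability distribution P and Q, and compute D_KL(P || Q). The epsilon guard and the toy 4-bin histograms below are my own simplifications.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i).

    p and q are assumed to be normalized probability distributions, e.g.
    image intensity histograms; eps guards against empty bins in q.
    """
    return sum(pi * math.log(pi / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Two 4-bin intensity histograms, already normalized:
p = [0.4, 0.3, 0.2, 0.1]
q = [0.1, 0.2, 0.3, 0.4]
print(kl_divergence(p, q))   # positive: the distributions differ
print(kl_divergence(p, p))   # approximately zero: identical distributions
```

Note that KL divergence is asymmetric (D_KL(P || Q) != D_KL(Q || P)), so for a symmetric "distance" people often use the Jensen-Shannon divergence instead.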
Or it might even be one of those PCFG approaches that MIT people tend to use for robotics. One paper I read used a PCFG model built from basic shapes (which can be rotated) and searched for the best match with
http://en.wikipedia.org/wiki/Inside-outside_algorithm
Related
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I want to create a program that finds duplicate images in a directory, something like this app does, and I wonder what the algorithm would be to determine whether two images are the same.
Any suggestion is welcome.
This task can be solved by perceptual hashing, depending on your use case, combined with a data structure for nearest-neighbor search in high dimensions (kd-tree, ball tree, ...) that can (somewhat) replace brute-force search.
There are tons of approaches for images: DCT-based, Wavelet-based, Statistics-based, Feature-based, CNNs (and more).
Their designs are usually based on different assumptions about the task, e.g. rotation allowed or not?
A google scholar search on perceptual image hashing will list a lot of papers. You can also look for the term image fingerprinting.
Here is some older ugly python/cython code doing the statistics-based approach.
Remark: Digikam can do that for you too. It's using some older Haar-wavelet based approach i think.
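To make the perceptual-hashing idea concrete, here is a minimal pure-Python sketch of the simplest variant, the average hash ("aHash"): shrink the image, threshold each cell at the mean brightness, and compare hashes by Hamming distance. Real libraries resample properly and use stronger variants (dHash, pHash); the block-averaging shortcut below is my simplification.

```python
def average_hash(gray, hash_size=8):
    """Average hash ("aHash"): shrink, threshold at the mean, pack into bits.

    gray is a square 2-D list of grayscale values; for simplicity its side
    must be a multiple of hash_size (a real implementation would resample).
    """
    n = len(gray)
    block = n // hash_size
    # Downscale by averaging block x block tiles.
    small = [
        [
            sum(gray[y][x]
                for y in range(by * block, (by + 1) * block)
                for x in range(bx * block, (bx + 1) * block)) / block ** 2
            for bx in range(hash_size)
        ]
        for by in range(hash_size)
    ]
    mean = sum(sum(row) for row in small) / hash_size ** 2
    # One bit per cell: brighter than the overall mean or not.
    return [1 if v > mean else 0 for row in small for v in row]

def hamming(h1, h2):
    """Number of differing bits; a small distance means 'probably duplicates'."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy example: an image and a uniformly brightened copy hash identically,
# because thresholding at the mean cancels the constant shift.
img = [[(x * y) % 256 for x in range(16)] for y in range(16)]
brighter = [[min(255, v + 10) for v in row] for row in img]
print(hamming(average_hash(img), average_hash(brighter)))
```

This robustness to global changes (brightness, scaling, mild compression) is exactly what distinguishes perceptual hashes from cryptographic ones, where a single changed pixel flips the whole hash.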
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I want your opinion on what would be the best machine learning algorithm, or even better, library, to use if I wanted to match two faces that look similar. Kind of like how Google Photos can automatically group photos of the same person into their own album. What's the best way to tackle this?
Face-based user identification isn't a single algorithm; it's a whole process that's still under active research. One way to experiment with it is to follow these four steps:
Face region detection/extraction using the Histogram of Oriented Gradients (HOG)
Centralize eyes and lips using face landmark estimation
Image encoding using a CNN Model (openface for instance)
Image search using a nearest-neighbor algorithm (KNN, LSHForest, etc.)
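The last step can be sketched in pure Python: once a CNN has turned each face into an embedding vector, matching is just nearest-neighbor search in that vector space. The 3-D vectors and names below are toy stand-ins for real 128-dimensional CNN embeddings.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(query, database, k=3):
    """Return the k labels whose embeddings lie closest to the query.

    database maps a label (e.g. a person's name) to a face embedding,
    such as the 128-dimensional vector produced by a model like openface.
    """
    ranked = sorted(database.items(), key=lambda kv: euclidean(query, kv[1]))
    return [label for label, _ in ranked[:k]]

# Toy 3-D "embeddings" standing in for real 128-D CNN outputs:
db = {
    "alice_1": [0.1, 0.9, 0.2],
    "alice_2": [0.15, 0.85, 0.25],
    "bob_1":   [0.9, 0.1, 0.8],
}
print(knn([0.12, 0.88, 0.22], db, k=2))
```

A query face lands closest to other photos of the same person, which is how an album-grouping feature can cluster faces without ever being told who they are. At scale, the brute-force sort is replaced by an approximate index (LSH, kd-tree, etc.).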
Here's a blog article that gives a nice walkthrough of the steps required for face-based user identification:
machine learning is fun part 4 modern face recognition with deep learning
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
Like the title says.
I have an image on my PC and want to find out where it was taken.
Is there a way to find the place via Google Maps?
Thanks
What you are asking might be possible in principle, but at that scope it is nearly, if not entirely, impossible. It also seems unlikely that such a search would find the result you are looking for.
If you are just trying to place where a picture was taken, you would likely have far more luck, in much shorter time, posting the picture, as well as your best guess to narrow down the area, to a site such as Reddit.
Try running jhead on the image and see if it contains GPS coordinates. It is free here.
Or, if you don't mind uploading your image to a web-service to view the GPS data, you can upload it to here.
If you get the GPS coordinates, you can type them into maps.google.com, like this
52.785,-2.012
and hit enter and the map will centre on your location.
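EXIF stores GPS data as degrees/minutes/seconds plus a hemisphere letter, so you usually have to convert it to the decimal form shown above before pasting it into the maps.google.com search box. A small conversion sketch (the sample coordinate is chosen to match the example above):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds GPS data to the decimal
    form that maps.google.com accepts (e.g. "52.785,-2.012").

    ref is the hemisphere letter: 'N'/'E' give positive values,
    'S'/'W' negative ones.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# 52 deg 47' 6" N, 2 deg 0' 43.2" W  ->  52.785,-2.012
lat = dms_to_decimal(52, 47, 6.0, "N")
lon = dms_to_decimal(2, 0, 43.2, "W")
print(f"{lat:.3f},{lon:.3f}")
```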
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
An SEO question: if the images on a server are allowed to be indexed, are named wisely with descriptive names, and aren't oversized, does the image path or folder make a difference to the ranking of that image in Google Image Search?
E.g., is pic A ranked higher than pic B (below)? If so, why?
A: /images/cat-on-a-chair.jpg
B: /images/repository/cat-on-a-chair.jpg
thanks
It'd be too difficult to run a controlled case study on a factor that, even if it did help, would be too minuscule to notice.
The short answer: it's highly unlikely.
Think of the image itself and the page the image is found on as two completely separate entities (they are, indeed). When you do a Google Image search, you are finding pages that contain that image. So a highly-ranked page is likely going to be a good candidate for image results. You aren't actually being returned direct images.
Other things that influence ranking for images would include image-specific data like ALT tags, description, the image name, and so forth.
For reference, here are paths for top five results for horses:
http://upload.wikimedia.org/wikipedia/commons/thumb/8/85/Points_of_a_horse.jpg/330px-Points_of_a_horse.jpg
http://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Horse-and-pony.jpg/310px-Horse-and-pony.jpg
http://images4.fanpop.com/image/photos/23500000/horse-horses-23582505-1024-768.jpg
http://www.hedweb.com/animimag/horses-gallop.jpg
http://www.horsesmaine.com/images/2%20%20horses.jpg
Scientifically, that's such a small sample that it's not worth mentioning. But let's assume it is: the majority of the results don't have relevant keywords in a directory path. Instead, a very highly-ranked website gets the first few positions.
If you wanted to take this further you could write a script to get a bigger sample, but at this point I'm hoping you've arrived at the conclusion that no, it doesn't make a difference.
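A script along those lines only needs to check, for each result URL, whether the keyword appears in a directory component of the path rather than in the file name. Here is a quick sketch run against three of the paths listed above (fetching a larger result set is left out, since scraping search results is against Google's terms anyway):

```python
from urllib.parse import urlparse

def keyword_in_directory(url, keyword):
    """True if the keyword appears in a directory component of the URL path
    (i.e., everything except the file name itself)."""
    directories = urlparse(url).path.lower().rsplit("/", 1)[0]
    return keyword.lower() in directories

results = [
    "http://upload.wikimedia.org/wikipedia/commons/thumb/8/85/Points_of_a_horse.jpg/330px-Points_of_a_horse.jpg",
    "http://images4.fanpop.com/image/photos/23500000/horse-horses-23582505-1024-768.jpg",
    "http://www.hedweb.com/animimag/horses-gallop.jpg",
]
hits = sum(keyword_in_directory(u, "horse") for u in results)
print(f"{hits}/{len(results)} results have the keyword in a directory")
```

Even in this tiny sample, the only "hit" is the Wikimedia thumbnail URL, where the original file name happens to act as a directory, which supports the conclusion that directory names carry little weight.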
Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I have always thought the way they zoom in and enhance on TV and in movies was strictly impossible, essentially because you cannot create more information than there is to begin with.
There are ways to get better-looking or clearer images, such as re-sampling, but never to the extent seen on film.
Now, it seems that is not true.
I was reading this article, and it seems they have a way to do that now?
Or is this just a better version of what was already possible? Do you still need a fairly clear image to start with? Otherwise, what are the limits of this technique?
There is something called super-resolution. Some companies claim to use fractal theory to enhance images when they are upscaled, but what you see in most movies is just fiction.
Image enhancement always involves pixel interpolation (i.e., prediction) in one way or another. Interpolation can be good or bad, but it will never outperform real pixels recorded by an imaging device at a greater resolution.
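Classical bilinear interpolation makes the point concrete: every "new" pixel is just a weighted average of the four nearest source pixels, so no detail beyond the original resolution can appear. A minimal pure-Python sketch for grayscale images (the coordinate mapping is simplified for clarity):

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image by bilinear interpolation.

    Every output pixel is a weighted average of the four nearest source
    pixels, so no information beyond the original resolution is created.
    """
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        y = min(oy / factor, h - 1)
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for ox in range(w * factor):
            x = min(ox / factor, w - 1)
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 gradient upscaled 2x: new values fall strictly between the originals.
small = [[0, 100], [100, 200]]
big = bilinear_upscale(small, 2)
print(big)
```

Super-resolution methods go beyond this by *predicting* plausible detail from learned priors or from multiple frames, but the prediction can still be wrong, which is why "enhance!" on a licence plate remains fiction.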