Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I have always thought that the way they zoom in and enhance images on TV and in movies was strictly impossible, essentially because you cannot create more information than there is to begin with.
There were ways to get better-looking or clearer images, such as re-sampling, but never to the extent seen on film.
Now, it seems that is not true.
I was reading this article, and it seems they have a way to do that now?
Or is this just a better version of what was already possible? Do you still need a fairly clear image to start with? Otherwise, what are the limits of this technique?
There is something called super-resolution. Some companies claim to use fractal theory to enhance images when they are upscaled, but what you see in most movies is just fiction.
Image enhancement always involves pixel interpolation (i.e. prediction) in one way or another. Interpolation can be good, bad, or anything in between, but it will never outperform a real pixel recorded by an imaging device at a higher resolution.
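To make the interpolation point concrete, here is a minimal pure-Python sketch of bilinear upscaling (the function name and structure are my own, not from the article): every output pixel is a weighted blend of existing input pixels, so no new detail is ever created.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image (list of lists) by an integer
    factor using bilinear interpolation. Each output pixel is a
    weighted average of the four nearest input pixels - it is a
    prediction, never new information."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        # Map the output coordinate back into input space.
        y = min(oy / factor, h - 1)
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for ox in range(w * factor):
            x = min(ox / factor, w - 1)
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            # Blend the four surrounding input pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Real resizers (e.g. in Pillow or OpenCV) use the same idea with fancier kernels (bicubic, Lanczos), but they are all still predictions from the recorded pixels.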
Related
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I want to create a program that finds duplicate images in a directory, something like this app does, and I wonder what algorithm could determine whether two images are the same.
Any suggestion is welcome.
This task can be solved with perceptual hashing, depending on your use case, combined with a data structure for nearest-neighbor search in high dimensions (k-d tree, ball tree, ...), which can (somewhat) replace brute-force search.
There are tons of approaches for images: DCT-based, wavelet-based, statistics-based, feature-based, CNN-based, and more.
Their designs are usually based on different assumptions about the task, e.g. is rotation allowed or not?
A Google Scholar search for perceptual image hashing will list a lot of papers. You can also look for the term image fingerprinting.
Here is some older ugly python/cython code doing the statistics-based approach.
Remark: digiKam can do that for you too. It's using some older Haar-wavelet-based approach, I think.
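As an illustration of the perceptual-hashing idea, here is a minimal pure-Python sketch of the classic average hash (aHash); the function names are my own, and real implementations work on decoded image files rather than lists of pixel values.

```python
def average_hash(img, hash_size=8):
    """Compute a simple perceptual hash (aHash) of a 2D grayscale
    image given as a list of lists of pixel values.

    Steps: shrink the image to hash_size x hash_size by block
    averaging, then set one bit per cell depending on whether the
    cell is brighter than the overall mean."""
    h, w = len(img), len(img[0])
    cells = []
    for by in range(hash_size):
        for bx in range(hash_size):
            # Average the pixels falling into this block.
            y0, y1 = by * h // hash_size, (by + 1) * h // hash_size
            x0, x1 = bx * w // hash_size, (bx + 1) * w // hash_size
            block = [img[y][x] for y in range(y0, max(y1, y0 + 1))
                               for x in range(x0, max(x1, x0 + 1))]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(h1, h2):
    """Number of differing bits; small distance = similar images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Near-duplicate images yield hashes with a small Hamming distance, so you can index the hashes in one of the nearest-neighbor structures mentioned above instead of comparing every pair.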
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
Like the title says.
I have an image on my PC and want to find out where it was taken.
Is there a way to find the place via Google Maps?
Thanks
What you are asking might be possible in principle, but the scope of it is near impossible, if not impossible, and it seems unlikely that such a search would find the result you are looking for.
If you are just trying to place where a picture was taken, you would likely have far more luck, in much less time, by posting the picture, along with your best guess to narrow down the area, to a site such as Reddit.
Try running jhead on the image and see if it contains GPS coordinates. It is free here.
Or, if you don't mind uploading your image to a web service to view the GPS data, you can upload it here.
If you get the GPS coordinates, you can type them into maps.google.com, like this:
52.785,-2.012
and hit enter and the map will centre on your location.
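EXIF stores GPS positions as degree/minute/second values (which is what tools like jhead print), while maps.google.com expects decimal degrees. A small conversion sketch (the function name is my own):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds GPS values into
    the decimal degrees that maps.google.com accepts. ref is the
    hemisphere letter: 'N'/'S' for latitude, 'E'/'W' for longitude."""
    dec = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -dec if ref in ('S', 'W') else dec
```

For example, `dms_to_decimal(52, 47, 6.0, 'N')` gives the `52.785` latitude used above.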
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
This is just a confirmation: are barcodes symmetric?
By symmetric I mean: if I rotate the barcode by 180 degrees, will it decode to the same data?
Basically, does the angle matter while scanning a barcode? And are there any exceptions among the types of barcode?
No, the barcodes themselves are generally not symmetrical (to clarify: you may be able to find one that is, but the vast majority of the standard ones are not).
However, any decent reader (such as the ones at your local supermarket) will scan in a large number of different directions to take care of this, not just backwards and forwards but at other angles as well. So you can generally rotate them to your heart's content.
Even the ones that scan in a line (such as some hand-held units) may scan both directions - it depends on what you've paid for. Of course, if you have a "Dodgy Brothers" brand reader, you'll probably find it won't do that.
Some barcode standards allow for detecting upside-down barcodes. For example, UPC-A swaps black and white in the right-hand side's bar patterns so that readers can detect and adjust for it.
Usually there is a start code and a check code in each bar code so the angle doesn't matter.
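To illustrate the check code mentioned above for one common symbology: UPC-A's last digit is a weighted checksum of the first eleven, which lets a reader reject a misread scan. A sketch (the function name is mine):

```python
def upca_check_digit(digits11):
    """Compute the UPC-A check digit for the first 11 digits.
    Digits in odd positions (1st, 3rd, ...) are weighted by 3,
    even positions by 1; the check digit brings the weighted sum
    up to a multiple of 10."""
    odd = sum(digits11[0::2])   # positions 1, 3, 5, ... (1-indexed)
    even = sum(digits11[1::2])  # positions 2, 4, 6, ...
    return (10 - (odd * 3 + even) % 10) % 10

# Example: for the digits 0,3,6,0,0,0,2,9,1,4,5 the check digit is 2.
print(upca_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]))  # -> 2
```

A reversed or garbled scan will almost always fail this checksum, which is part of why scan direction rarely matters in practice.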
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I would like to hear of any practical uses of digital zoom.
I only see disadvantages, such as:
When enabled, the quality drops
When enabled, the camera gets very sensitive, so images blur
When enabled, it's useless
I see it only as a way to crop the image, but that can be done in software later, with a better view of the image, and you can select exactly the part you would like to crop.
So anyone, please state any useful example of digital zoom :)
I am curious to know.
It's a convenience feature for people who are not very familiar with digital image processing. In addition, the "zoomed" (cropped) image does not require as much disk space.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Does anyone have an idea what sort of algorithm Google might be using to find similar images?
No, but they could be using SIFT.
I'm not sure this has much to do with image processing. When I ask for "similar images" of the Eiffel tower, I get a bunch of photos of Paris Hilton, and street maps from Paris. Curiously, all of these images have the word "Paris" in the file name.
Currently, Google Image Search provides these filtering options:
Image size
Face detection
Continuous-tone ("Photo") vs. smooth shading ("Clipart") vs. bitonal ("Line drawing")
Color histogram
These options can be seen in its Image Search Result page.
I don't know about faces, but see at least:
http://www.incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet08spie
Compare two images the python/linux way
I have heard that one should use this when comparing images
(i.e. build the probability model, calculate the probabilities, and use those):
http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
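As a rough sketch of how KL divergence could be applied here, assuming you compare normalized intensity histograms of the two images (the helper names are my own):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two normalized
    histograms. Smaller means the distributions (e.g. image intensity
    histograms) are more alike. eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def histogram(pixels, bins=8, top=256):
    """Normalized intensity histogram of a list of pixel values."""
    counts = [0] * bins
    for v in pixels:
        counts[min(v * bins // top, bins - 1)] += 1
    n = len(pixels)
    return [c / n for c in counts]
```

Note that KL divergence is not symmetric, so D(P || Q) and D(Q || P) generally differ; some people use a symmetrized variant (e.g. Jensen-Shannon) for retrieval tasks.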
Or it might even be one of those PCFG approaches that MIT people tend to use in robotics. One paper I read used a PCFG model made of basic shapes (that you can rotate magically) and searched for the best match with
http://en.wikipedia.org/wiki/Inside-outside_algorithm