I'm looking for a way to segment an image into different parts (especially a saliency map; see image).
I know about some solutions, such as graph-based segmentation by Pedro F. Felzenszwalb, but for large images my implementation is very slow.
Is there some other solution?
Greetings,
Destiny
Destiny,
What is the specific goal of this segmentation? Are you just trying to create separate regions in a still image? Are you looking for objects, and segmenting the image to find ROIs for later work?
The more specific you can be about your segmentation goals, the more specifically you can tailor your code. Binarizing your image through thresholding, or separating it into smaller chunks via feature detection, could significantly speed up your code.
The only other general image segmentation algorithm I can think of that is implemented in the OpenCV libraries is the watershed algorithm. You can find it in the docs, or look up Laganiere's OpenCV 2 Computer Vision Application Programming Cookbook, which contains an excellent tutorial on both of these algorithms.
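For reference, here is a minimal sketch of the standard marker-based watershed recipe using OpenCV's Python bindings (file names are placeholders; the 0.7 distance-transform threshold is an arbitrary knob to tune):

```python
import cv2
import numpy as np

# Load the image to segment (path is a placeholder).
img = cv2.imread("input.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize with Otsu's threshold to get a rough foreground mask.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Sure background: dilated mask; sure foreground: distance-transform peaks.
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(thresh, kernel, iterations=3)
dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the markers and run watershed; boundary pixels come back as -1.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)
cv2.imwrite("segmented.png", img)
```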
Related
I do not have a background in image recognition/feature extraction, but I am in need of a way to extract trees from an image without the background vegetation.
Seen above is a small example of the kind of imagery I'm working with. I also have access to multi-spectral imagery (though I haven't seen it yet), including NDVI, NIR, and red-edge bands.
From researching the problem, I am aware that feature extraction is an active area of research, and it seems that supervised and unsupervised machine learning are often employed in combination with statistical voodoo such as PCA. The ability to differentiate between trees and background vegetation has been noted as a difficulty in some papers I skimmed.
There are notable features in the imagery I am working with. First, the palm trees have a very distinctive shape. Beyond that, there are obvious differences between the texture of the trees and the texture of the background vegetation.
I am not an academic, and as such I only have access to publicly available papers for my research. I am looking for relevant algorithms that could help me extract the features of interest to me (trees) that either have an implementation (ideally in C or bindings to C, though I'm aware that it is not a commonly used language in this field) or with publicly available papers/tutorials/sites/etc. detailing the algorithm so that I could implement it myself.
Thanks in advance for any help!
Look into OpenCV; it has a lot of options for supervised and semi-supervised learning methods. As you mentioned, there is a visible texture difference between the trees and the background vegetation, so a good place to start would be color-based segmentation, evolving it to use textures as well. The OpenCV ML tutorial is a good starting point. You can also fold in the NDVI data to create a stronger feature set.
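As a starting point, a minimal sketch of colour-based segmentation with OpenCV's Python bindings (the HSV bounds and area threshold are guesses you would tune against your own imagery; findContours is shown with the OpenCV 4.x signature):

```python
import cv2
import numpy as np

# Load the aerial image (path is a placeholder).
img = cv2.imread("trees.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep pixels inside a green hue band; bounds are per-dataset guesses.
lower = np.array([30, 40, 40])
upper = np.array([90, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean the mask with a morphological opening, then extract blob contours.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Filter blobs by area as a crude tree-vs-vegetation heuristic.
trees = [c for c in contours if cv2.contourArea(c) > 500]
cv2.drawContours(img, trees, -1, (0, 0, 255), 2)
cv2.imwrite("candidates.png", img)
```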
First of all, thanks for reading my question. I'm beginner in computer vision.
I read a lot but I didn't find any solution.
I have an image and I want to detect the logo or logos in it.
I also have a whole set of images with different logos, each image containing one logo and nothing more.
Can you help me with any idea of how to detect logos in an image when I have a large training set (thousands of known logos)?
It can be done with the SURF or SIFT feature-detection algorithms for a few known logos, by matching the given image against each of the others, but I have a huge dataset and can't match against every image.
Trying every image in the dataset takes far too much time :)
Would any SDK be useful? (It can be for mobile phones or for desktop.)
Or can I combine multiple algorithms?
I found an interesting paper about this question involving a SIGMA algorithm, but I can't find any description of the algorithm (http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5495345).
I think detecting the features in the images is fine (SIFT, maybe SURF).
The problem, I think, is the large number of known images/logos.
They should probably be stored in a special way,
e.g., built into some kind of tree from the thousands of known logos, or separated into groups.
Is it possible to do this task?
I appreciate any help.
The thousands of training images are useful only for testing your algorithm; they will not help you analyze a new image.
I did a bit of pattern recognition in the past, and I would start this way: look for sharp edges (sharp color transitions too), so an edge filter plus statistical analysis of features located in the same corner. The result of the algorithm will be a number that you can use against your training set.
Since you are doing original research, be prepared for a long job. If an SDK with an ImageHasLogo() function existed, you would already have found it on Google.
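For what it's worth, the tree-structured storage the question asks about is essentially what approximate nearest-neighbour indexes provide. A minimal sketch using OpenCV's FLANN kd-tree index to vote across a logo set (the paths, logo list, and ratio threshold are placeholders, and it assumes an OpenCV build where SIFT_create is available):

```python
import cv2
import numpy as np

# Offline step (hypothetical file list): extract SIFT descriptors per logo.
sift = cv2.SIFT_create()
logo_paths = ["logo1.png", "logo2.png"]          # placeholder paths
logo_descriptors = []
for path in logo_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    logo_descriptors.append(des)

# Stack all descriptors into one FLANN kd-tree; remember each row's owner.
all_des = np.vstack(logo_descriptors)
owners = np.concatenate([np.full(len(d), i) for i, d in enumerate(logo_descriptors)])
flann = cv2.flann_Index(all_des, dict(algorithm=1, trees=4))  # 1 = FLANN_INDEX_KDTREE

# Query: each descriptor in the new image votes for the logo it matches,
# filtered by a crude ratio test on the two nearest neighbours.
query = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
_, qdes = sift.detectAndCompute(query, None)
idx, dist = flann.knnSearch(qdes, 2, params={})
good = dist[:, 0] < 0.7 * dist[:, 1]
votes = np.bincount(owners[idx[good, 0]], minlength=len(logo_paths))
print("best logo:", logo_paths[votes.argmax()])
```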
I'm working on a small program for optical mark recognition.
The processing of the scanned form consists of two steps:
1) Find the form in the scanned image, deskew it, and crop the borders.
2) With this "normalized" form, I can simply search the marks by using coordinates from the original document and so on.
For the first step, I'm currently using the homography functions from OpenCV and a perspective transform to map the points. I also tried the SurfDetector.
However, both approaches are quite slow and do not really meet the speed requirements for scanning forms from a document scanner.
Can anyone point me to an alternative algorithm/solution for this specific problem?
Thanks in advance!
Try the ORB or FAST detectors: they should be faster than SURF (documentation here).
If those don't meet your speed requirement, you should probably use a different approach. Do you need scale and rotation invariance? If not, you could try cross-correlation.
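A minimal sketch of the ORB route for aligning a scan against a form template (file names, feature count, and match cutoff are placeholder choices):

```python
import cv2
import numpy as np

# Reference form template and the scanned page (paths are placeholders).
template = cv2.imread("form_template.png", cv2.IMREAD_GRAYSCALE)
scan = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# ORB keypoints/descriptors; binary descriptors, so Hamming matching.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scan, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

# Estimate a homography from the best matches and warp the scan back
# onto the template's coordinate frame (the "normalized" form).
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
normalized = cv2.warpPerspective(scan, H, (template.shape[1], template.shape[0]))
cv2.imwrite("normalized.png", normalized)
```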
The Viola-Jones cascade classifier is pretty quick. It is used in OpenCV for face detection, but you can train it for a different purpose. Depending on the appearance of what you call your "form", you could also use simpler algorithms such as cross-correlation, as Muffo said.
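And if scale and rotation really are fixed, OpenCV's matchTemplate gives you normalized cross-correlation directly; a tiny sketch with placeholder file names:

```python
import cv2

# Locate a known corner mark on a scanned form via template matching.
scan = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
mark = cv2.imread("corner_mark.png", cv2.IMREAD_GRAYSCALE)

# TM_CCOEFF_NORMED: mean-shifted normalized correlation, robust to brightness.
result = cv2.matchTemplate(scan, mark, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("mark found at", max_loc, "with score", max_val)
```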
Recently I've been messing about with algorithms on images, partly for fun and partly to keep my programming skills sharp.
I've just implemented a 'nearest-neighbour' algorithm that picks n random pixels in an image, and then converts the colour of each other pixel in the image to the colour of its nearest neighbour in the set of n chosen pixels. The result is a kind of "frosted glass" effect on the image, for a reasonably large value of n (if n is too small then the image gets blocky).
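For concreteness, a minimal sketch of that effect (assuming NumPy, SciPy, and Pillow are available; the file name and seed count are arbitrary):

```python
import numpy as np
from PIL import Image
from scipy.spatial import cKDTree

# Nearest-neighbour "frosted glass": recolor every pixel with the colour of
# the closest of n randomly chosen seed pixels (a Voronoi partition).
img = np.asarray(Image.open("input.png").convert("RGB"))
h, w, _ = img.shape
n = 500
rng = np.random.default_rng(0)
seed_y = rng.integers(0, h, n)
seed_x = rng.integers(0, w, n)

# Use a k-d tree so the nearest-seed lookup stays fast on large images.
tree = cKDTree(np.stack([seed_y, seed_x], axis=1))
ys, xs = np.mgrid[0:h, 0:w]
pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
_, nearest = tree.query(pts)

out = img[seed_y[nearest], seed_x[nearest]].reshape(h, w, 3)
Image.fromarray(out.astype(np.uint8)).save("frosted.png")
```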
I'm just wondering if anyone has any other good/fun algorithms on images that might be interesting to implement?
Tom
This book, Digital Image Processing, is one of the most commonly used books in image processing classes, and it will teach you a lot of basic techniques that will help you understand other algorithms better, like the ones Ants Aasma suggested.
Try making an Andy Warhol print. It's pretty easy in Java. For more ideas, just look at the filters available in GIMP or a similar program.
Marching Squares is a computer vision algorithm. Try using that to convert black and white raster images to object based scenes.
Turns the image into a pizza
Take N images, relate them via an MC-Escher-style painting
"Explode" an image from the inside out
Convert the image into single-color blocks (Piet-style) based on all the colours within.
How about tie-dye algorithm?
Filters that are fun to toy with and easy to code include:
kaleidoscope
lens
twirl
There are a lot of other filters, but the kaleidoscope especially gives plenty of bang for the buck. I have made my own graphics editor with lots of filters and am also looking for inspiration.
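As an example of how cheap these warps are, here is a minimal twirl sketch using OpenCV's remap (the strength and radius constants are arbitrary knobs):

```python
import cv2
import numpy as np

# Twirl: rotate each pixel around the image centre by an angle that
# falls off with distance, then resample with cv2.remap.
img = cv2.imread("input.png")
h, w = img.shape[:2]
cy, cx = h / 2.0, w / 2.0
strength, radius = 2.5, min(h, w) / 2.0

ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
dx, dy = xs - cx, ys - cy
r = np.sqrt(dx * dx + dy * dy)
theta = np.arctan2(dy, dx) + strength * np.exp(-r / radius)
map_x = (cx + r * np.cos(theta)).astype(np.float32)
map_y = (cy + r * np.sin(theta)).astype(np.float32)
twirled = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("twirl.png", twirled)
```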
Instead of coding image filters, I personally would love to code Diffusion Curves, but unfortunately I have little time for fun.
If you want to try something more challenging, look for SIGGRAPH papers on the web. There are some really nifty image algorithms presented at that conference. Seam carving is one cool example that is reasonably straightforward to implement.
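A minimal sketch of seam carving (gradient-magnitude energy, dynamic programming, one vertical seam removed at a time; the file name and seam count are placeholders):

```python
import numpy as np
from PIL import Image

def remove_vertical_seam(img):
    """Remove one lowest-energy vertical seam via dynamic programming."""
    gray = img.mean(axis=2)
    # Simple gradient-magnitude energy map.
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1); left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack the cheapest seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = cost[-1].argmin()
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + cost[y, lo:hi].argmin()
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, 3)

img = np.asarray(Image.open("input.png").convert("RGB"))
for _ in range(50):                      # shrink the width by 50 pixels
    img = remove_vertical_seam(img)
Image.fromarray(img.astype(np.uint8)).save("carved.png")
```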
If you want something more challenging, try completing the symmetry of broken objects.
Do you guys know of any algorithms that can be used to compute difference between images?
Take this webpage for example: http://tineye.com/. You give it a link or upload an image and it finds similar images. I doubt that it compares the image in question against all of them (or maybe it does).
By compute I mean something like what the Levenshtein distance or the Hamming distance is for strings.
By no means do I need the correct answer for a project or anything; I just found the website and got very curious. I know Digg pays for a similar service for their website.
The very simplest measures are going to be RMS-error based approaches, for example:
Root Mean Square Deviation
Peak Signal to Noise Ratio
These probably gel with your notions of distance measures, but their results are really only meaningful if you've got two images that are very close already, like when you're looking at how well a particular compression scheme preserved the original image. Also, the same result from either comparison can mean a lot of different things, depending on what kind of artifacts there are (take a look at the paper I cite below for some example photos of how RMS/PSNR can be misleading).
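Both measures boil down to a few lines; a sketch for 8-bit images:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equal-sized uint8 images."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher means more similar)."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)
```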
Beyond these, there's a whole field of research devoted to image similarity. I'm no expert, but here are a few pointers:
A lot of work has gone into approaches using dimensionality reduction (PCA, SVD, eigenvalue analysis, etc) to pick out the principal components of the image and compare them across different images.
Other approaches (particularly in medical imaging) use segmentation techniques to pick out important parts of images, then compare the images based on what's found.
Still others have tried to devise similarity measures that get around some of the flaws of RMS error and PSNR. There was a pretty cool paper on the spatial domain structural similarity (SSIM) measure, which tries to mimic peoples' perceptions of image error instead of direct, mathematical notions of error. The same guys did an improved translation/rotation-invariant version using wavelet analysis in this paper on WSSIM.
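If you'd rather try SSIM than implement it, scikit-image ships an implementation (the channel_axis argument assumes a recent version of the library; file paths are placeholders):

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

# Load two equal-sized RGB images (paths are placeholders).
a = np.asarray(Image.open("a.png").convert("RGB"))
b = np.asarray(Image.open("b.png").convert("RGB"))
# SSIM is 1.0 for identical images; channel_axis handles colour images.
print(structural_similarity(a, b, channel_axis=-1))
```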
It looks like TinEye uses feature vectors with values for lots of attributes to do their comparison. If you hunt around on their site, you eventually get to the Ideé Labs page, and their FAQ has some (but not too many) specifics on the algorithm:
Q: How does visual search work?
A: Idée’s visual search technology uses sophisticated algorithms to analyze hundreds of image attributes such as colour, shape, texture, luminosity, complexity, objects, and regions. These attributes form a compact digital signature that describes the appearance of each image, and these signatures are calculated and indexed by our software. When performing a visual search, these signatures are quickly compared by our search engine to return visually similar results.
This is by no means exhaustive (it's just a handful of techniques I've encountered in the course of my own research), but if you google for technical papers or look through proceedings of recent conferences on image processing, you're bound to find more methods for this stuff. It's not a solved problem, but hopefully these pointers will give you an idea of what's involved.
One technique is to use color histograms. You can use machine learning algorithms to find similar images based on the representation you use, for example the commonly used k-means algorithm. I have seen other solutions that analyze the vertical and horizontal lines in the image after edge detection. Texture analysis is also used.
A recent paper clustered images from Picasa Web. You can also try the clustering algorithm that I am working on.
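For reference, a minimal sketch of the colour-histogram route with OpenCV (bin counts, file names, and the comparison metric are arbitrary choices):

```python
import cv2

def hsv_hist(path, bins=(8, 8, 8)):
    """Normalized 3-D HSV colour histogram of an image."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

h1, h2 = hsv_hist("a.png"), hsv_hist("b.png")
# Correlation: 1.0 means identical histograms, lower means less similar.
print(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))
```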
Consider using lossy wavelet compression and comparing the highest relevance elements of the images.
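One way to read that suggestion, sketched with PyWavelets (the resize, wavelet, level, and keep count are all arbitrary assumptions): compare which coefficient positions carry the most energy in each image.

```python
import numpy as np
import pywt
from PIL import Image

def wavelet_signature(path, keep=200):
    """Positions of the largest-magnitude Haar wavelet coefficients."""
    img = np.asarray(Image.open(path).convert("L").resize((256, 256)),
                     dtype=np.float64)
    coeffs = pywt.wavedec2(img, "haar", level=4)
    flat, _ = pywt.coeffs_to_array(coeffs)
    return set(np.argsort(np.abs(flat.ravel()))[-keep:])

# Fraction of shared high-relevance positions, in [0, 1].
a, b = wavelet_signature("a.png"), wavelet_signature("b.png")
print("overlap:", len(a & b) / 200.0)
```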
What TinEye does is a sort of hashing over the image or parts of it (see their FAQ). It's probably not a real hash function, since they want similar "hashes" for similar (or nearly identical) images. But all they need to do is compare that hash, and probably substrings of it, to know whether the images are similar/identical or whether one is contained in another.
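The simplest member of that family is the "average hash": shrink, grayscale, threshold at the mean, and compare the resulting bit strings with the Hamming distance the question mentions. A sketch, assuming Pillow and NumPy (file paths are placeholders):

```python
import numpy as np
from PIL import Image

def average_hash(path, size=8):
    """Tiny perceptual hash: downscale, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)
    return (pixels > pixels.mean()).flatten()

def hamming(a, b):
    """Number of differing bits; small values mean near-duplicate images."""
    return int(np.count_nonzero(a != b))

print(hamming(average_hash("a.png"), average_hash("b.png")))
```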
Here's an image similarity page, but it's for polygons. You could convert your image into a finite number of polygons based on color and shape, and run these algorithms on each of them.
Here is some code I wrote four years ago in Java (yikes) that does image comparison using histograms. Don't look at any part of it other than buildHistograms().
https://jpicsort.dev.java.net/source/browse/jpicsort/ImageComparator.java?rev=1.7&view=markup
Maybe it's helpful, at least if you are using Java.
Correlation techniques will make a match jump out. If they're JPEGs, you could compare the dominant coefficients for each 8x8 block and get a decent match. This isn't exactly correlation, but it's based on a cosine transform, so it's a first cousin.
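A rough sketch of that idea, taking just the DC (top-left) DCT coefficient of each 8x8 block as a signature (in practice you would read the coefficients straight out of the JPEG rather than recompute them; file names are placeholders):

```python
import cv2
import numpy as np

def block_dct_signature(img):
    """DC coefficient of each 8x8 block of a grayscale image."""
    img = np.float32(img)
    h, w = img.shape[0] // 8 * 8, img.shape[1] // 8 * 8
    sig = np.empty((h // 8, w // 8), np.float32)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            sig[y // 8, x // 8] = cv2.dct(img[y:y + 8, x:x + 8])[0, 0]
    return sig

a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)
sa, sb = block_dct_signature(a), block_dct_signature(b)
# Normalized correlation between the two signatures (1.0 = identical).
sa, sb = sa - sa.mean(), sb - sb.mean()
print(float((sa * sb).sum() / (np.linalg.norm(sa) * np.linalg.norm(sb))))
```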