Detect the vein pattern in leaves?

My aim is to detect the vein pattern in leaves, which characterizes various species of plants.
I have already done the following:
Original image:
After Adaptive thresholding:
However, the veins aren't that clear and get distorted. Is there any way I could get a better output?
EDIT:
I tried color thresholding, but my results are still unsatisfactory; I get the following image:
Please help

The fact that it's a JPEG image is going to give the "block" artifacts, which in the example you posted cause most square areas around the veins to have lots of noise, so ideally work on an image that hasn't been through lossy compression. If that's not possible, then try filtering the image to remove some of the noise.
The veins you are wanting to extract have a different colour from the background, leaf and shadow so some sort of colour based threshold might be a good idea. There was a recent S.O. question with some code that might help here.
After that some sort of adaptive normalisation would help increase the contrast before you threshold it.
[edit]
Maybe thresholding isn't an intermediate step that you want to do. I made the following by filtering to remove JPEG artifacts, doing some CMYK channel math (more cyan and black), then applying adaptive equalisation. I'm pretty sure you could then go on to produce (subpixel, maybe) edge points using image gradients and non-maxima suppression, and maybe use the brightness at each point and the properties of the vein structure (mostly joining at a tangent) to join the points into lines.
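For reference, here is a rough OpenCV sketch of that kind of pipeline (denoise, an approximation of the "more cyan and black" channel math, then adaptive equalisation via CLAHE). The file names, blend weights and CLAHE parameters are assumptions to tune; it is not the exact sequence used for the picture above.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("leaf.jpg");                    // hypothetical input
    cv::Mat denoised;
    cv::fastNlMeansDenoisingColored(bgr, denoised, 5, 5);    // soften JPEG block noise

    // Rough CMY(K) approximation: cyan = 255 - R, black = 255 - max(B, G, R).
    std::vector<cv::Mat> ch;
    cv::split(denoised, ch);                                  // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat cyan  = cv::Scalar(255) - ch[2];
    cv::Mat maxAll = cv::max(cv::max(ch[0], ch[1]), ch[2]);
    cv::Mat black = cv::Scalar(255) - maxAll;

    // Emphasise cyan and black, as described above (the weights are guesses).
    cv::Mat veins;
    cv::addWeighted(cyan, 0.5, black, 0.5, 0.0, veins);

    // Adaptive (local) histogram equalisation to boost contrast.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    clahe->apply(veins, veins);

    cv::imwrite("veins_enhanced.png", veins);
    return 0;
}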

In the past I have had good experiences with the difference-of-Gaussians edge detection algorithm, which basically works like this:
You blur the image twice with a Gaussian blur, but with different blur radii.
Then you calculate the difference between the two images.
Pixels of the same color next to each other will produce the same blurred color in both images.
Pixels of different colors next to each other will create a gradient that depends on the blur radius: for a bigger radius the gradient stretches further, for a smaller one it doesn't.
So basically this is a band-pass filter. If the selected radii are too small, a vein will create two "parallel" lines, but since the veins of leaves are small compared with the extent of the image, you can usually find radii where a vein results in a single line.
Here is the processed picture.
Steps I did on this picture:
desaturate (grayscale)
difference of Gaussians. Here I blurred the first image with a radius of 10px and the second image with a radius of 2px. You can see the result below.
This is only a quickly created result. I would guess that by optimizing the parameters you can get even better ones.
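If you want to reproduce this programmatically, a minimal difference-of-Gaussians sketch in OpenCV could look like the following; the 10px/2px radii match the example above, and the file name and the final normalisation are placeholder choices.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("leaf.jpg", cv::IMREAD_GRAYSCALE);

    cv::Mat wide, narrow;
    // Kernel size (0, 0) lets OpenCV pick it from the sigma.
    cv::GaussianBlur(gray, wide,   cv::Size(0, 0), 10.0);
    cv::GaussianBlur(gray, narrow, cv::Size(0, 0), 2.0);

    // Work in a signed type so the subtraction doesn't clip at zero.
    cv::Mat dog;
    cv::subtract(narrow, wide, dog, cv::noArray(), CV_16S);

    // Stretch the band-pass response back to 0..255 for viewing.
    cv::normalize(dog, dog, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("dog.png", dog);
    return 0;
}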

This sounds like something I did back in college with neural networks. The neural network stuff is a bit hard so I won't go there. Anyways, patterns are perfect candidates for the 2D Fourier transform! Here is a possible scheme:
You have training data and input data
Your data is represented as its 2D Fourier transform
If your database is large you should run PCA on the transform results to reduce the 2D spectrum to a 1D spectrum
Compare the Hamming distance between the spectrum (after PCA) of one image and those of all the images in your dataset.
You should expect ~70% recognition with such primitive methods as long as the images are of approximately the same rotation. If the images are not of the same rotation, you may have to use SIFT. To get better recognition you will need more intelligent models such as a Hidden Markov Model or a neural net. The truth is that getting good results for this kind of problem may be quite a lot of work.
Check out: https://theiszm.wordpress.com/2010/07/20/7-properties-of-the-2d-fourier-transform/
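To make the scheme concrete, here is a hedged OpenCV sketch: compute the log-magnitude of the 2D DFT per image, run PCA over the flattened spectra, binarise the projected coefficients by sign, and compare with a Hamming distance. The image list, the fixed 128x128 size, the 16 components and the sign binarisation are all assumptions.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>
#include <string>

// Flattened log-magnitude spectrum of one grayscale image.
static cv::Mat spectrumRow(const cv::Mat& gray)
{
    cv::Mat f;
    cv::resize(gray, f, cv::Size(128, 128));      // common size so spectra are comparable
    f.convertTo(f, CV_32F);

    cv::Mat planes[] = { f, cv::Mat::zeros(f.size(), CV_32F) };
    cv::Mat complexImg;
    cv::merge(planes, 2, complexImg);
    cv::dft(complexImg, complexImg);
    cv::split(complexImg, planes);

    cv::Mat mag;
    cv::magnitude(planes[0], planes[1], mag);
    mag += cv::Scalar::all(1);                    // avoid log(0)
    cv::log(mag, mag);                            // compress the dynamic range
    return mag.reshape(1, 1);                     // 1 x (128*128) row vector
}

int main()
{
    std::vector<std::string> files = { "img0.png", "img1.png", "img2.png" }; // placeholder dataset
    cv::Mat data;
    for (const auto& fn : files)
        data.push_back(spectrumRow(cv::imread(fn, cv::IMREAD_GRAYSCALE)));

    cv::PCA pca(data, cv::noArray(), cv::PCA::DATA_AS_ROW, 16);  // keep up to 16 components
    cv::Mat proj = pca.project(data);

    cv::Mat bits = proj > 0;                      // sign-binarise so a Hamming distance applies
    cv::Mat query = bits.row(0);
    for (int i = 1; i < bits.rows; ++i)
    {
        int hamming = cv::countNonZero(query != bits.row(i));
        std::printf("image %d: Hamming distance %d\n", i, hamming);
    }
    return 0;
}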

Related

Any way to remove discolorations algorithmically from aerial imagery

I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately some images look patchy and suffer from discolorations, or are partially obstructed by clouds or pixelated, as for example picture1 and picture2.
I would like to take as an input several images of the same area and (by some kind of averaging them) produce 1 picture of improved quality. I know some C/C++ so I could use some image processing library.
Can anybody propose any image processing algorithm to achieve it or knows any research done in this field?
I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This will allow you to write the nine linear equations to determine the matrix coefficients.
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of those problems as the transform might be nonlinear, and completely erasing the edges will be difficult. I also think that colors are so washed out at places that restoring them might create ugly artifacts.
For the sake of illustration, a quick attempt with Photoshop using manual HLS adjustment (less powerful than a color twist).
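For what it's worth, here is a minimal sketch of the color-twist estimation in OpenCV: three color samples from the altered side and three from the reference side give the nine equations, cv::solve finds the 3x3 matrix, and cv::transform applies it per pixel. All sample values and file names below are made up.

#include <opencv2/opencv.hpp>

int main()
{
    // Three colors sampled in the altered area (rows) and their counterparts
    // just across the border, in B, G, R order. The values are placeholders.
    cv::Mat altered   = (cv::Mat_<float>(3, 3) <<  90, 110, 130,
                                                   60,  80, 150,
                                                   40, 150,  70);
    cv::Mat reference = (cv::Mat_<float>(3, 3) << 100, 120, 125,
                                                   70,  90, 140,
                                                   50, 160,  75);

    // Each sample row i should satisfy altered_i * M^T = reference_i,
    // which are the nine linear equations mentioned above.
    cv::Mat Mt;
    cv::solve(altered, reference, Mt, cv::DECOMP_LU);
    cv::Mat M = Mt.t();                        // the 3x3 color-twist matrix

    // Apply the twist to the altered region (here the whole image, for brevity).
    cv::Mat img = cv::imread("patch.png");     // hypothetical input
    cv::Mat corrected;
    cv::transform(img, corrected, M);          // per-pixel 3x3 matrix multiply
    cv::imwrite("patch_corrected.png", corrected);
    return 0;
}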
The first thing I thought of was a kernel matrix of sorts.
Do a first pass of the photo and use an edge detection algorithm to determine the borders between the photos - this should be fairly trivial, however you will need to eliminate any overlap/fading (looks like there's a bit in picture 2), you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixel on either side of the border should be the same color. Determine the difference between the red, green and blue values and average them along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this new value added. The one with the higher red, green or blue value gets this value subtracted.
On either side of this line, every pixel should now be exactly the same. You can remove one of these rows if you'd like, but if the lines don't run the length of the image this could cause size issues, and the line will likely not be very noticeable.
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
The issue with this could be where there was development, fall colors, etc.; this might mess with your algorithm, but there's only one way to find out!
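As a rough illustration of the border-averaging step, assuming a single vertical seam whose column is already known from the edge-detection pass (the seam position and file names are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("mosaic.png");    // hypothetical stitched image
    int seam = 400;                            // placeholder: border column from the edge pass

    // Average per-channel difference across the seam over its whole length, halved.
    cv::Scalar left  = cv::mean(img.col(seam - 1));
    cv::Scalar right = cv::mean(img.col(seam));
    cv::Scalar half;
    for (int c = 0; c < 3; ++c)
        half[c] = (left[c] - right[c]) * 0.5;

    // Brighten the darker side and darken the brighter side by half the gap;
    // the per-channel sign of 'half' handles either direction automatically.
    cv::Mat corrected = img.clone();
    cv::Mat rightSide = corrected.colRange(seam, corrected.cols);
    cv::Mat leftSide  = corrected.colRange(0, seam);
    cv::add(rightSide, half, rightSide);
    cv::subtract(leftSide, half, leftSide);

    cv::imwrite("mosaic_blended.png", corrected);
    return 0;
}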

Detecting hexagonal shapes in greyscale or binary image

For my bachelor thesis I need to analyse images taken in the ocean to count and measure the size of water particles.
my problem:
besides the wanted water particles, the images show hexagonal patches all over the image in:
- different sizes
- irregular shapes
- different greyscale values
(Example image below!)
It is clear that these patches will falsify my image analysis concerning the size and number of particles.
For this reason these patches need to be detected and deleted somehow.
Since it will be just a small part of the work in my thesis, I don't want to spend much time on it, and I have already tried classic approaches (in ImageJ) like:
playing with the threshold (resulting in also deleting wanted water particles)
analysing the image including the hexagonal patches and later sorting out the biggest areas (the hexagonal patches tend to have the biggest areas, but you will still have a lot of hexagons)
playing with filters: using a Gaussian filter on a duplicated image and subtracting the copy from the original deletes many patches (by reducing the greyscale value), but it also deletes small wanted water particles and so again falsifies the result
A more complicated and time-consuming solution would be to use a library, for example in MATLAB or OpenCV, to detect points that describe the shapes.
So far I could not find any code that fits my task.
Has any one of you created such code I could use for my task, or do you have any other idea?
You can see a lot of hexagonal patches at different depths as well.
The little spots with a greater pixel value are the wanted particles!
Image processing is quite an involved area so there are no hard and fast rules.
But if it was me I would "mask" the image. This involves defining what you want to keep or to remove as a pixel "mask". You then scan the mask over the image recursively and compare the mask to the image portion selected. You then select or remove the section (depending on your method) if it meets your criterion.
One example of a criterion would be the spatial and grey-scale error weighted against a likelihood function (e.g. chi-squared, mean square error, etc.) or a normal distribution for which you define the uncertainty.
Some food for thought
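One concrete way to do that scan-and-compare in OpenCV is template matching: slide a cropped example of a hexagonal patch (the "mask") over the image and blank out locations where the similarity is high. The file names, the single fixed-size patch and the 0.7 threshold are assumptions; in practice you would probably need several patches at different sizes.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img  = cv::imread("particles.png", cv::IMREAD_GRAYSCALE);
    cv::Mat mask = cv::imread("hexagon_patch.png", cv::IMREAD_GRAYSCALE);

    cv::Mat score;
    cv::matchTemplate(img, mask, score, cv::TM_CCOEFF_NORMED);

    cv::Mat cleaned = img.clone();
    for (int y = 0; y < score.rows; ++y)
        for (int x = 0; x < score.cols; ++x)
            if (score.at<float>(y, x) > 0.7f)
            {
                // Paint the matched patch with the background value
                // so it no longer counts as a particle.
                cv::Rect r(x, y, mask.cols, mask.rows);
                cleaned(r).setTo(cv::Scalar(0));
            }

    cv::imwrite("particles_cleaned.png", cleaned);
    return 0;
}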
Maybe you can try with the Hough transform:
https://en.wikipedia.org/wiki/Hough_transform
MATLAB has a built-in function, hough, which implements this, but it only works for lines. Maybe you can start from that and change it to recognize hexagons.
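If you end up in OpenCV instead of MATLAB, a different route from the Hough transform is the approxPolyDP idea already mentioned in the question: threshold, find contours, and suppress blobs whose polygonal approximation has roughly six vertices. The threshold choice, the epsilon factor and the area cut-off below are assumptions to tune.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("particles.png", cv::IMREAD_GRAYSCALE);

    cv::Mat bw;
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat cleaned = gray.clone();
    for (const auto& c : contours)
    {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);

        // Hexagonal patches: roughly six corners and a large area
        // compared with the wanted particles.
        if (poly.size() >= 5 && poly.size() <= 7 && cv::contourArea(c) > 500.0)
            cv::drawContours(cleaned, std::vector<std::vector<cv::Point>>{c},
                             -1, cv::Scalar(0), cv::FILLED);
    }
    cv::imwrite("particles_no_hexagons.png", cleaned);
    return 0;
}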

how to improve keypoints detection and matching

I have been working on a personal project in image processing and robotics where, instead of the robot detecting colors and picking out the object as usual, it tries to detect the holes (resembling different polygons) on the board. For a better understanding of the setup, here is an image:
As you can see I have to detect these holes, find out their shapes and then use the robot to fit the object into the holes. I am using a kinect depth camera to get the depth image. The pic is shown below:
I was lost in thought about how to detect the holes with the camera, initially using masking to remove the background portion and some of the foreground portion based on the depth measurement, but this did not work out: at different orientations of the camera the holes would merge with the board... something like inRange (it fully becomes white). Then I came across the adaptiveThreshold function:
adaptiveThreshold(depth1,depth3,255,ADAPTIVE_THRESH_GAUSSIAN_C,THRESH_BINARY,7,-1.0);
With noise removal using erode, dilate, and Gaussian blur, this detected the holes in a better manner, as shown in the picture below. Then I used the cvCanny edge detector to get the edges, but so far it has not been good, as shown in the picture below. After this I tried out various feature detectors from SIFT, SURF, ORB and GoodFeaturesToTrack, and found that ORB gave the best times and detected features. After this I tried to get the relative camera pose of a query image by finding its keypoints and matching those keypoints, with the good matches given to the findHomography function. The results are shown in the diagram below:
In the end I want to get the relative camera pose between the two images and move the robot to that position using the rotation and translation vectors obtained from the solvePnP function.
So is there any other method by which I could improve the quality of the holes detected for keypoint detection and matching?
I had also tried contour detection and approxPolyDP but the approximated shapes are not really good:
I have tried tweaking the input parameters for the threshold and Canny functions, but this is the best I can get.
Also, is my approach to getting the camera pose correct?
UPDATE: No matter what I tried, I could not get good repeatable features to map. Then I read online that a depth image is low in resolution and is only used for things like masking and getting distances. So it hit me that the features were not proper because of the low-resolution image with its messy edges. So I thought of detecting features on an RGB image and using the depth image to get only the distances of those features. The quality of the features I got was literally off the charts. It even detected the screws on the board! Here are the keypoints detected using GoodFeaturesToTrack keypoint detection.
I met another hurdle while getting the distances, with the distances of the points not coming out properly. I searched for possible causes, and it occurred to me after quite a while that there was an offset between the RGB and depth images because of the offset between the cameras. You can see this from the first two images. I then searched the net for how to compensate for this offset but could not find a working solution.
If any one of you could help me compensate for the offset, it would be great!
UPDATE: I could not make good use of the goodFeaturesToTrack function. The function gives the corners as Point2f. If you want to compute descriptors you need KeyPoints, and converting Point2f to KeyPoint with the code snippet below leads to the loss of scale and rotational invariance.
// Wrap each corner in a KeyPoint with a fixed size of 1 px; this throws away
// any scale/orientation information, which is why the invariance is lost.
for (size_t i = 0; i < corners1.size(); i++)
{
    keypoints_1.push_back(KeyPoint(corners1[i], 1.f));
}
The hideous result from the feature matching is shown below.
I have to start on different feature matching now. I'll post further updates. It would be really helpful if anyone could help with removing the offset problem.
Compensating the difference between image output and the world coordinates:
You should use good old camera calibration approach for calibrating the camera response and possibly generating a correction matrix for the camera output (in order to convert them into real scales).
It's not that complicated once you have printed out a checkerboard template and captured various shots. (For this application you don't need to worry about rotation invariance. Just calibrate the world view against the image array.)
You can find more information here: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/own_calib.html
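For reference, a minimal sketch of that checkerboard workflow for a single camera is below; the board dimensions, square size and file names are assumptions. For the RGB/depth offset you would calibrate both cameras and additionally estimate the extrinsics between them (for example with cv::stereoCalibrate), then map depth pixels into the RGB frame.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include <string>

int main()
{
    cv::Size boardSize(9, 6);                 // inner corners of the checkerboard
    float squareSize = 0.025f;                // 25 mm squares (placeholder)

    // Object points: the board corners in the board's own coordinate frame.
    std::vector<cv::Point3f> boardCorners;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            boardCorners.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    for (int i = 0; i < 15; ++i)              // 15 shots of the board (placeholder)
    {
        cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".png",
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners))
        {
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS +
                                              cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(boardCorners);
        }
    }

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\n"
              << "camera matrix:\n" << cameraMatrix << std::endl;
    return 0;
}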
--
Now, since I can't seem to comment on the question, I'd like to ask if your specific application requires the machine to "find out" the shape of the hole on the fly. If there is a finite number of hole shapes, you may then model them mathematically and look for the pixels that support the predefined models in the B/W edge image.
For example, x^2 + y^2 - r^2 = 0 for a circle with radius r, where x and y are the pixel coordinates.
That being said, I believe more clarification is needed regarding the requirements of the application (shape detection).
If you're going to detect specific shapes such as the ones in your provided image, then you're better off using a classifier. Delve into Haar classifiers, or better still, look into Bag of Words.
Using BoW, you'll need to train on a dataset consisting of positive and negative samples. Positive samples will contain N unique samples of each shape you want to detect. It's better if N is > 10, best if > 100, with samples that are highly variant and unique, for good robust classifier training.
Negative samples would (obviously) contain stuff that does not represent your shapes in any way. They are just for checking the accuracy of the classifier.
Also, once you have your classifier trained, you could distribute your classifier data (say you use an SVM).
Here are some links to get you started with Bag of Words:
https://gilscvblog.wordpress.com/2013/08/23/bag-of-words-models-for-visual-categorization/
Sample code:
http://answers.opencv.org/question/43237/pyopencv_from-and-pyopencv_to-for-keypoint-class/
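As a starting point, here is a hedged sketch of the BoW training stage with OpenCV's BOWKMeansTrainer and BOWImgDescriptorExtractor, feeding the resulting histograms to an SVM as suggested above. It assumes OpenCV 4.4+ (SIFT in the main modules); the file lists, labels and vocabulary size are placeholders.

#include <opencv2/opencv.hpp>
#include <vector>
#include <string>

int main()
{
    std::vector<std::string> trainImages = { "pos_0.png", "pos_1.png", "neg_0.png" };
    std::vector<int>         trainLabels = { 1, 1, 0 };   // 1 = shape, 0 = background

    cv::Ptr<cv::Feature2D> sift = cv::SIFT::create();

    // 1. Cluster all training descriptors into a visual vocabulary.
    cv::BOWKMeansTrainer bowTrainer(100);                 // 100 visual words
    for (const auto& fn : trainImages)
    {
        cv::Mat img = cv::imread(fn, cv::IMREAD_GRAYSCALE);
        std::vector<cv::KeyPoint> kp;
        cv::Mat desc;
        sift->detectAndCompute(img, cv::noArray(), kp, desc);
        if (!desc.empty()) bowTrainer.add(desc);
    }
    cv::Mat vocabulary = bowTrainer.cluster();

    // 2. Describe each image as a histogram over the vocabulary.
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create(cv::DescriptorMatcher::FLANNBASED);
    cv::BOWImgDescriptorExtractor bowExtractor(sift, matcher);
    bowExtractor.setVocabulary(vocabulary);

    cv::Mat samples, labels;
    for (size_t i = 0; i < trainImages.size(); ++i)
    {
        cv::Mat img = cv::imread(trainImages[i], cv::IMREAD_GRAYSCALE);
        std::vector<cv::KeyPoint> kp;
        sift->detect(img, kp);
        cv::Mat hist;
        bowExtractor.compute(img, kp, hist);
        samples.push_back(hist);
        labels.push_back(trainLabels[i]);
    }

    // 3. Train an SVM on the histograms, as suggested above.
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    svm->save("shape_bow_svm.yml");
    return 0;
}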

Algorithm to detect the change in visible luminosity in an image

I want a formula to detect/calculate the change in visible luminosity in a part of the image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g. in the above picture, the left side of the image is brighter than the right side, which is in shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general-purpose solution, but it may hold some pointers and cause people who know better to suggest improvements for you! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
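A rough OpenCV translation of Steps 1-5 above, for anyone who wants to script it rather than use Photoshop; the blur radius, the 32-bin histogram and the crude bimodality test are placeholder choices.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("page.png");
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    std::vector<cv::Mat> ch;
    cv::split(lab, ch);
    cv::Mat L = ch[0];                               // Step 1: keep the lightness channel only

    cv::normalize(L, L, 0, 255, cv::NORM_MINMAX);    // Step 2: stretch the contrast
    cv::GaussianBlur(L, L, cv::Size(0, 0), 12.0);    // Step 3: remove local, high-frequency detail

    // Steps 4-5: per-row histogram; a crude bimodality check is to count
    // how many bins carry a large share of the row's pixels.
    for (int y = 0; y < L.rows; ++y)
    {
        cv::Mat row = L.row(y);
        cv::Mat hist;
        int histSize = 32;
        float range[] = { 0.0f, 256.0f };
        const float* ranges[] = { range };
        cv::calcHist(&row, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);

        int strongBins = cv::countNonZero(hist > 0.2f * row.cols);
        if (strongBins >= 2)
            std::cout << "row " << y << " looks bimodal (possible shading edge)\n";
    }
    return 0;
}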
The first problem is in "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on the linear image which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate results you want. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
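Just to illustrate, a small sketch in OpenCV (since, as noted, Photoshop/GIMP-type programs tend to cut corners): undo the sRGB gamma approximately, compute a Rec. 709 luminance channel, and low-pass filter it. The plain 2.2 power law and the 25px blur are simplifying assumptions.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("scene.png");
    cv::Mat f;
    bgr.convertTo(f, CV_32F, 1.0 / 255.0);

    // Approximate sRGB -> linear ("degamma"); the exact sRGB curve also has
    // a small linear toe, which we ignore here.
    cv::pow(f, 2.2, f);

    // Linear-light luminance: Y = 0.2126 R + 0.7152 G + 0.0722 B.
    std::vector<cv::Mat> ch;
    cv::split(f, ch);                                 // B, G, R
    cv::Mat Y = 0.0722f * ch[0] + 0.7152f * ch[1] + 0.2126f * ch[2];

    // Low-pass filter the luminance to expose the slow illumination change.
    cv::Mat smooth;
    cv::GaussianBlur(Y, smooth, cv::Size(0, 0), 25.0);

    cv::Mat out;
    smooth.convertTo(out, CV_8U, 255.0);
    cv::imwrite("luminance_trend.png", out);
    return 0;
}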
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?

Image Warp Filter - Algorithm and Rasterization

I'd like to implement a filter that allows resampling of an image by moving a number of control points that mark edges and tangent directions. The goal is to be able to freely transform an image as seen in Photoshop when you use "Free Transform" and choose the warp mode "Custom". The image is fitted into some kind of spline patch (if that is a valid name) that can be manipulated.
I understand how simple splines (paths) work but how do you connect them to form a patch?
And how can you sample such a patch to render the morphed image? For each pixel in the target I'd need to know what pixel in the source image corresponds. I don't even know where to start searching...
Any helpful info (keywords, links, papers, reference implementations) are greatly appreciated!
This document will get you a good insight into warping: http://www.gson.org/thesis/warping-thesis.pdf
However, this will include filtering out high frequencies, which will make the implementation a lot more complicated but will give a better result.
An easy way to accomplish what you want to do would be to loop through every pixel in your final image, plug the coordinates into your splines and retrieve the pixel in your original image. This pixel might have coordinates 0.4/1.2 so you could bilinearly interpolate between 0/1, 1/1, 0/2 and 1/2.
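A minimal sketch of that backward-mapping loop in OpenCV: warpFunc below is a placeholder standing in for your spline-patch evaluation (here just a sine wobble so the example runs), and cv::remap does the bilinear lookup between the four neighbouring source pixels.

#include <opencv2/opencv.hpp>
#include <cmath>

// Placeholder: returns the source coordinates for a destination pixel.
// A real implementation would evaluate the spline patch here.
static cv::Point2f warpFunc(float dstX, float dstY)
{
    return cv::Point2f(dstX + 10.0f * std::sin(dstY * 0.05f), dstY);
}

int main()
{
    cv::Mat src = cv::imread("input.png");
    cv::Mat mapX(src.size(), CV_32F), mapY(src.size(), CV_32F);

    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
        {
            cv::Point2f p = warpFunc((float)x, (float)y);
            mapX.at<float>(y, x) = p.x;      // fractional coordinates are fine,
            mapY.at<float>(y, x) = p.y;      // remap interpolates between pixels
        }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);  // bilinear sampling
    cv::imwrite("warped.png", dst);
    return 0;
}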
As for splines: there are many resources and solutions online for the 1D case. As for 2D it gets a bit trickier to find helpful resources.
A simple example for the 1D case: http://www-users.cselabs.umn.edu/classes/Spring-2009/csci2031/quad_spline.pdf
Here's a great guide for the 2D case: http://en.wikipedia.org/wiki/Bicubic_interpolation
Based upon this you could derive your own scheme for splines in the 2D case. Define a bivariate (in x and y) polynomial and set your constraints to solve for the coefficients of the polynomial.
Just keep in mind that the borders of the spline patches have to be consistent (both in value and derivative) to avoid ugly jumps.
Good luck!
