I've already asked this question on https://dsp.stackexchange.com/ but didn't get any answer! I hope to get some suggestions here:
I have a project in which I have to recognize two lines in different positions; the lines are orthogonal but can be projected onto different surfaces. I'm using OpenCV.
The intersection can be anywhere in the frame. The lines are red (the images show just the grayscale).
UPDATE
- I'll be using a grayscale camera!
- The background and the objects onto which the lines are projected can change.
I'm not asking for code, only for hints on how to solve this. I tried the HoughLines function, but it only works for flat surfaces.
Thanks in advance!
This is not that difficult a task, since it involves straight lines. I have done a similar kind of project.
1. First of all, if your image is colored, convert it to grayscale.
2. Then use a calibrated median filter to blur the image.
3. Now subtract the blurred image from the grayscale image.
4. After step 3, if you look at the image, you will see that the intensity is higher where the lines are: the lines contrast with their surroundings, so the median-filtered value subtracted there differs more from the original than in the rest of the image.
5. To get a cleaner distinction, create a binary image (i.e. only black and white) with a suitable threshold.
6. Finally you have your lines. If there is noise, you can use top-hat filtering after step 4 and Gaussian filtering after step 5.
You can also get help from this paper on crack detection.
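A minimal OpenCV sketch of steps 1-3 and 5, assuming a grayscale input frame; the file name, kernel size, and threshold are placeholder values that need tuning:

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # step 1
blurred = cv2.medianBlur(img, 21)                            # step 2: kernel size needs tuning
diff = cv2.subtract(img, blurred)                            # step 3: lines come out bright
_, binary = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)  # step 5: threshold needs tuning
cv2.imwrite("lines.png", binary)
```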
I think AMI's idea is good.
You can also think about using a controlled laser source. In that case you can capture an image pair, one with the laser turned on and one with it turned off, then take the difference.
This may be interesting for you: http://www.instructables.com/id/3-D-Laser-Scanner/
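A minimal sketch of the on/off idea; the file names and the threshold value are assumptions:

```python
import cv2

on = cv2.imread("laser_on.png", cv2.IMREAD_GRAYSCALE)
off = cv2.imread("laser_off.png", cv2.IMREAD_GRAYSCALE)
diff = cv2.absdiff(on, off)                                # only the projected lines change
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # threshold is a guess
```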
Here's the result of subtracting the output of a median filter (r=6):
You might be able to improve things a bit by adjusting the median filter radius, but these wavy, discontinuous lines are going to be difficult to detect reliably.
You really need better source images. Here are a few suggestions:
A colour camera would help enormously. Apply a high-pass filter to the red and green channels, and calculate the difference between the two. The red lines will stand out much better then (see the sketch after this list).
Can you make the light source brighter?
Have you tried putting a red filter over the camera lens? Ideally you want one with a pass band that matches the light source's wavelength as closely as possible — if the light is coming from a laser, then a suitable dichroic filter should give good results. But even a sheet of red plastic would be better than nothing. (Have you got an old pair of red/blue 3D glasses sitting around somewhere?)
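As a rough illustration of the first suggestion, assuming an OpenCV colour capture; the blur sigma used for the high-pass is a guess:

```python
import cv2

bgr = cv2.imread("frame_colour.png")
b, g, r = cv2.split(bgr)
# High-pass each channel by subtracting a heavily blurred copy.
r_hp = cv2.subtract(r, cv2.GaussianBlur(r, (0, 0), 5))
g_hp = cv2.subtract(g, cv2.GaussianBlur(g, (0, 0), 5))
red_lines = cv2.subtract(r_hp, g_hp)  # red features remain, neutral edges cancel
```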
Perhaps subtracting the grayscale image from the red channel would help to highlight the red. I'd post this as a comment but cannot do so yet.
I have a figure that looks like this:
I want to find the coordinates of all intersections of three hexagons.
How can I do this? Should I use OpenCV?
I am still trying to think of a faster/better method, but I think the following should work:
threshold your image to pure blacks and whites
generate and save a list of all black pixels for later
label your image so that each white hexagon is effectively flood-filled with a unique color (or shade of grey) - some folks call this "labelling", some call it "Blob Analysis", some call it "Connected Component Analysis". Whatever it is called, you will get something like this:
Now look at each black pixel from the list you saved in the second step and count how many different colours other than black appear in the surrounding 9x9 or 15x15 area. If there are three, it is probably an intersection like the ones you are looking for.
Of course there are variations on this - you could implement a "minimum distance from other intersection" on top, for example. Or a "black line thinning first". Or a dilation of each blob to erode the black lines and make the three colours closer together. You could scale your image down (being careful to use NEAREST_NEIGHBOUR rather than interpolation) after labelling to reduce processing time - if important.
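A sketch of those steps with OpenCV's connected-component labelling; the threshold and the window size are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("hexagons.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)   # step 1

n_labels, labels = cv2.connectedComponents(bw)            # step 3: label the white blobs

intersections = []
r = 7                                                     # half of a 15x15 window
ys, xs = np.where(bw == 0)                                # step 2: list of black pixels
for y, x in zip(ys, xs):                                  # step 4
    window = labels[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    distinct = np.unique(window)
    distinct = distinct[distinct != 0]                    # ignore black/background
    if len(distinct) == 3:
        intersections.append((x, y))
```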
You can try to find these features using the Harris corner detector.
Also check whether findContours, combined with analysis of the resulting contours' intersections, could give you useful information.
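A minimal sketch of the Harris idea; all parameter values are guesses:

```python
import cv2
import numpy as np

gray = np.float32(cv2.imread("hexagons.png", cv2.IMREAD_GRAYSCALE))
response = cv2.cornerHarris(gray, blockSize=5, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (y, x) candidates
```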
I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately some images look patchy and suffer from discolorations, or are partially obstructed by clouds or pixelated, as in, for example, picture1 and picture2.
I would like to take as input several images of the same area and (by some kind of averaging) produce one picture of improved quality. I know some C/C++, so I could use an image processing library.
Can anybody propose any image processing algorithm to achieve it or knows any research done in this field?
I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This will allow you to write the nine linear equations that determine the matrix coefficients.
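A small numpy sketch of fitting that matrix; the sample values here are made up:

```python
import numpy as np

# Three hand-picked colour pairs from either side of a border (hypothetical).
src = np.array([[120, 80, 60], [200, 190, 170], [60, 100, 40]], float)  # altered side
dst = np.array([[140, 95, 70], [210, 205, 180], [75, 115, 55]], float)  # good side

# Three colour pairs give nine linear equations: solve src @ M.T = dst for M.
M = np.linalg.solve(src, dst).T

corrected = np.clip(src @ M.T, 0, 255)  # applying M to the samples recovers dst
```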
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of those problems as the transform might be nonlinear, and completely erasing the edges will be difficult. I also think that colors are so washed out at places that restoring them might create ugly artifacts.
For the sake of illustration, here is a quick attempt in Photoshop using manual HSL adjustment (less powerful than a color twist).
The first thing I thought of was a kernel matrix of sorts.
Do a first pass of the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial; however, you will need to eliminate any overlap/fading (it looks like there's a bit in picture 2) - you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixel on either side of the border should be the same color. Determine the difference between the red, green and blue values and average them along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this new value added. The one with the higher red, green or blue value gets this value subtracted.
On either side of this line, every pixel should now be exactly the same. You can remove one of these rows if you'd like, but if the lines don't run the length of the image this could cause size issues, and the line will likely not be very noticeable.
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
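Here is a toy version of that second pass, assuming a perfectly vertical border at a known column of an (H, W, 3) float image; in practice the border pixels would come from the first pass:

```python
import numpy as np

def blend_seam(img, col):
    left = img[:, col - 1, :]
    right = img[:, col, :]
    offset = (right - left).mean(axis=0) / 2.0  # averaged RGB difference, halved
    img[:, :col, :] += offset                   # the lower side gets the value added
    img[:, col:, :] -= offset                   # the higher side gets it subtracted
    return np.clip(img, 0, 255)
```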
The issue with this could be where there was development, fall colors, etc.; this might mess with your algorithm, but there's only one way to find out!
I want a formula to detect/calculate the change in visible luminosity in a part of the image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g.: in the above picture, notice that the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
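An approximate OpenCV/numpy version of these steps; the blur radius and the row sampling step are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")
L = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[:, :, 0]      # step 1: keep Lightness only
L = cv2.normalize(L, None, 0, 255, cv2.NORM_MINMAX)    # step 2: stretch contrast
L = cv2.GaussianBlur(L, (0, 0), 12)                    # step 3: remove local variation

# Steps 4-5: inspect per-row histograms; two distinct peaks in a row
# suggest an illumination boundary crossing it.
for y in range(0, L.shape[0], 20):
    hist, _ = np.histogram(L[y], bins=32, range=(0, 255))
```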
This is not a complete, general-purpose solution, but it may hold some pointers and prompt people who know better to suggest improvements! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
The first problem is with "visible luminosity": it may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers as well.)
Formula to determine brightness of RGB color
You should make sure you operate on a linear image which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate you want the results to be. Photoshop wants things to look good, not to be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
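One possible concrete pipeline, as a sketch; the choice of luminance formula (Rec. 709 here) and the filter sigma are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_map(rgb8):
    srgb = rgb8 / 255.0
    linear = np.where(srgb <= 0.04045,
                      srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)   # undo the sRGB gamma
    Y = linear @ [0.2126, 0.7152, 0.0722]                # Rec. 709 luminance
    return gaussian_filter(Y, sigma=10)                  # sigma is a guess
```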
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?
I'm interested in some kind of charcoal filter, like the Photoshop Photocopy filter or the Note Paper filter.
Does anyone have a paper or some instructions on how this filter works?
In the best case I want to create the following:
input:
Output:
greetings
I think it's a process akin to pan-sharpening. I could get quite a similar image in GIMP by:
Converting to gray
Duplicating into two layers
Lightly blurring one layer
Edge-detecting in the other layer with a DoG filter with a large radius
Compositing the two layers, playing a bit with the transparency.
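A sketch of that recipe in OpenCV; the radii and blend weights are guesses:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2GRAY).astype("float32")
soft = cv2.GaussianBlur(gray, (0, 0), 2)                                       # lightly blurred layer
dog = cv2.GaussianBlur(gray, (0, 0), 3) - cv2.GaussianBlur(gray, (0, 0), 12)   # large-radius DoG
dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX)
out = cv2.addWeighted(soft, 0.5, dog, 0.5, 0)                                  # composite the layers
```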
What this is doing is converting the color picture into a 0-1 bitmap picture.
They typically use a threshold function which returns 1 (white) for some values and 0 (black) for the others.
One simple function would be to transform the image from color to grayscale, and then select a shade of gray above which everything is white and below which everything is black. The actual threshold could be made adaptive depending on the brightness of the picture (you want a certain percentage of pixels to be white).
It can also be adaptive based on the context within the picture (i.e. a dark area may still have some white pixels to show local contrast). The trees behind the house are not all black because the filtering is sensitive to the average darkness of the region.
Also note that the area close to the light gap in the trees has a cluster of dark pixels because of its relative darkness. The edges of the house and the bench are also highlighted. There is an edge-detection element at play.
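A quick sketch of an adaptive threshold in OpenCV, which gives the local-contrast behaviour described above; the block size and offset are guesses:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2GRAY)
# Each pixel is thresholded against a Gaussian-weighted local mean.
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, 31, 10)
```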
I do not know exactly what effect you gave an example of, but there are a variety that are similar to it. As VSOverFlow pointed out, thresholding an image would result in something very similar to that, though I do not think it is what is being used. OpenCV has a function for this; its documentation can be found here. You may also want to look into Otsu's method for thresholding.
Again, as VSOverFlow pointed out, there is an edge-detection element at play as well. You may want to investigate the Sobel and Prewitt filters. Those are three simple options that will give you something similar to the image you provided. Perhaps you could threshold the result of the Prewitt filter? I have no knowledge of how Photoshop implements its filters. If none of these options are close enough to what you are looking for, I would recommend looking for information on the specific implementations of those filters in Photoshop.
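For instance, a sketch combining the two ideas, Sobel gradients followed by an Otsu threshold (OpenCV has no built-in Prewitt, though one could be applied with a custom kernel via filter2D):

```python
import cv2

gray = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))             # 8-bit gradient magnitude
_, bw = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```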
My aim is to detect the vein pattern in leaves, which characterizes various species of plants.
I have already done the following:
Original image:
After Adaptive thresholding:
However, the veins aren't that clear and get distorted. Is there any way I could get a better output?
EDIT:
I tried color thresholding; my results are still unsatisfactory. I get the following image:
Please help.
The fact that it's a JPEG image is going to give the "block" artifacts, which in the example you posted cause most square areas around the veins to have lots of noise, so ideally work on an image that hasn't been through lossy compression. If that's not possible, then try filtering the image to remove some of the noise.
The veins you are wanting to extract have a different colour from the background, leaf and shadow so some sort of colour based threshold might be a good idea. There was a recent S.O. question with some code that might help here.
After that some sort of adaptive normalisation would help increase the contrast before you threshold it.
[edit]
Maybe thresholding isn't an intermediate step that you want to do. I made the following by filtering to remove the JPEG artifacts, doing some CMYK channel math (more cyan and black), then applying adaptive equalisation. I'm pretty sure you could then go on to produce (maybe subpixel) edge points using image gradients and non-maxima suppression, and perhaps use the brightness at each point and the properties of the vein structure (mostly joining at a tangent) to join the points into lines.
In the past I have had good experiences with the edge-detection algorithm difference of Gaussians, which basically works like this:
You blur the image twice with the Gaussian blur algorithm, but with different blur radii.
Then you calculate the difference between both images.
Pixels of the same color next to each other will produce the same blurred color.
Pixels of different colors next to each other will create a gradient that depends on the blur radius. For a bigger radius the gradient stretches further; for a smaller one it won't.
So basically this is a band-pass filter. If the selected radii are too small, a vein will create two "parallel" lines. But since the veins of leaves are small compared with the extent of the image, you can usually find radii where a vein results in a single line.
Here I have added the processed picture.
Steps I did on this picture:
desaturate (grayscaled)
difference of Gaussians: here I blurred the first image with a radius of 10 px and the second image with a radius of 2 px. You can see the result below.
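In OpenCV terms, those two steps might look like this (treating the stated radii as Gaussian sigmas, which is an approximation):

```python
import cv2

gray = cv2.cvtColor(cv2.imread("leaf.png"), cv2.COLOR_BGR2GRAY).astype("float32")
dog = cv2.GaussianBlur(gray, (0, 0), 10) - cv2.GaussianBlur(gray, (0, 0), 2)
dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
```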
This is only a quickly created result. I would guess that by optimizing the parameters you can get even better ones.
This sounds like something I did back in college with neural networks. The neural network stuff is a bit hard, so I won't go there. Anyway, patterns are perfect candidates for the 2D Fourier transform! Here is a possible scheme:
You have training data and input data
Your data is represented as the 2D Fourier transform.
If your database is large, you should run PCA on the transform results to convert the 2D spectrogram to a 1D spectrogram.
Compare the Hamming distance by testing the spectrum (after PCA) of one image against all of the images in your dataset.
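A very rough numpy sketch of this scheme; training_images and query_image are hypothetical equal-sized grayscale arrays, 32 components is a guess, and a Euclidean nearest neighbour stands in for the Hamming comparison:

```python
import numpy as np

def spectrum(img):
    """Step 2: 2D FFT magnitude, log-scaled and flattened to a 1D vector."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img)))).ravel()

train = np.stack([spectrum(im) for im in training_images])

# Step 3: PCA via SVD to shrink each spectrum to a short vector.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
proj = (train - mean) @ Vt[:32].T

# Step 4: compare the query against every training image.
q = (spectrum(query_image) - mean) @ Vt[:32].T
best = np.argmin(np.linalg.norm(proj - q, axis=1))
```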
You should expect ~70% recognition with such primitive methods, as long as the images are at approximately the same rotation. If the images are not at the same rotation, you may have to use SIFT. To get better recognition you will need more intelligent models such as a hidden Markov model or a neural net. The truth is that getting good results for this kind of problem may be quite a lot of work.
Check out: https://theiszm.wordpress.com/2010/07/20/7-properties-of-the-2d-fourier-transform/