How could I make the discontinuous contour of an image consistent?

In my task, I have a discontinuous edge image. How can I make it closed, in other words make the curve continuous?
The shape could be any kind, because this is the contour of a shadow.

Here are a few ideas that may get you started. I don't feel like coding and debugging a load of C++ in OpenCV - oftentimes folks ask questions and never log in again, or you spend hours working on something only to be told that the single sample image provided was not at all representative of the actual images, and that the method which took 25 minutes to explain is completely inappropriate.
One idea is morphological dilation - you can do that at the command line like this with ImageMagick:
convert gappy.jpg -threshold 50% -morphology dilate disk:5 result.png
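If you would rather do that in Python, here is a minimal OpenCV sketch of the same dilation; the filename and threshold value are assumptions to tune for your images:
import cv2

# Hypothetical filename; threshold to binary first, as in the command above
img = cv2.imread('gappy.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# ImageMagick's disk:5 is a radius-5 disc, roughly an 11x11 elliptical kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
cv2.imwrite('result.png', cv2.dilate(binary, kernel))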
Another idea might be to locate all the "line end" pixels with Hit-and-Miss morphology. This is available in OpenCV, but I am doing it with ImageMagick to save coding/debugging. The structuring elements are like this:
Hopefully you can see that the first (leftmost) structuring element represents the West end of an East-West line, and that the second one represents the North end of a North-South line, and so on. If you still haven't got it, the last one is the South-West end of a North-East to South-West line.
Basically, I find the line ends and then dilate them with blue pixels and overlay that onto the original:
convert gappy.jpg -threshold 50% \
\( +clone -morphology hmt lineends -morphology dilate disk:1 -fill blue -opaque white -transparent black \) \
-flatten result.png
Here's a close-up of before and after:
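If you prefer OpenCV for the line-end detection, hit-and-miss is exposed as cv2.MORPH_HITMISS (OpenCV 3.x onwards). A sketch, building the eight line-end kernels by rotation - in the kernels, 1 means the pixel must be foreground, -1 must be background, 0 is don't care; the filename is an assumption:
import cv2
import numpy as np

img = cv2.imread('gappy.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# West end of an East-West line, and the South-West end of a NE-SW line
west_end = np.array([[-1, -1, -1],
                     [-1,  1,  1],
                     [-1, -1, -1]], dtype=np.int32)
sw_end = np.array([[-1, -1,  1],
                   [-1,  1, -1],
                   [-1, -1, -1]], dtype=np.int32)

# Three 90-degree rotations of each kernel give all eight orientations
kernels = [np.ascontiguousarray(np.rot90(k, r))
           for k in (west_end, sw_end) for r in range(4)]

ends = np.zeros_like(binary)
for k in kernels:
    ends |= cv2.morphologyEx(binary, cv2.MORPH_HITMISS, k)

# Dilate the found ends before overlaying, like the blue pixels above
ends = cv2.dilate(ends, np.ones((3, 3), np.uint8))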
You can also find the singleton pixels with no neighbours, using a "peaks" structuring element like this:
and then you can find all the peaks and dilate them with red pixels like this:
convert gappy.jpg -threshold 50% \
\( +clone -morphology hmt Peaks:1.9 -fill red -morphology dilate disk:2 -opaque white -transparent black \) \
-flatten result.png
Here is a close-up of before and after:
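Continuing the Python sketch above, the peaks need just one more hit-and-miss kernel - a set pixel whose every neighbour is clear:
# An isolated pixel has no set neighbours in any direction
peak = np.array([[-1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1, -1]], dtype=np.int32)
peaks = cv2.morphologyEx(binary, cv2.MORPH_HITMISS, peak)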
Depending on how your original images look, you may be able to apply the above ideas iteratively till your contour is whole - maybe you could detect that by flood filling and seeing if your contour "holds water" without the flood fill "leaking" out everywhere.
Obviously you would do the red peaks and the blue line ends both in white to complete your contour - I am just doing it in colour to illustrate my technique.
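The "holds water" test could be sketched like this in Python - flood fill the background from a corner and see whether any unfilled background survives inside. It assumes the top-left pixel lies outside the contour:
import cv2
import numpy as np

def holds_water(binary):
    # Flood fill the outside from the top-left corner; if the contour is
    # closed, its interior stays untouched, so some zero pixels remain
    h, w = binary.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    flooded = binary.copy()
    cv2.floodFill(flooded, mask, (0, 0), 255)
    return bool((flooded == 0).any())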

Mark Setchell's answer is a fun way to learn new stuff along the way. My approach is rather simple and straightforward.
The following solution came to me off the top of my head. It involves a simple blurring operation sandwiched between two morphological operations.
I have explained what I have done alongside the code:
#---- I converted the image to grayscale and then performed an inverted binary threshold on it ----
import cv2
import numpy as np

img = cv2.imread('leaf.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

#---- Next I performed morphological erosion with a rectangular structuring element of kernel size 7 ----
kernel = np.ones((7, 7), np.uint8)
erosion = cv2.morphologyEx(thresh, cv2.MORPH_ERODE, kernel, iterations=2)
cv2.imshow('erosion', erosion)

#---- I then inverted this image and blurred it with a kernel size of 15. The reason for such a huge kernel is to obtain a smooth leaf edge ----
ret, thresh1 = cv2.threshold(erosion, 127, 255, cv2.THRESH_BINARY_INV)
blur = cv2.blur(thresh1, (15, 15))
cv2.imshow('blur', blur)

#---- I performed another threshold on this image to get the central portion of the edge ----
ret, thresh2 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)

#---- And then performed morphological erosion to thin the edge. For this I used an elliptical structuring element of kernel size 5 ----
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
final = cv2.morphologyEx(thresh2, cv2.MORPH_ERODE, kernel1, iterations=2)
cv2.imshow('final', final)
cv2.waitKey(0)
Hope this helps :)

Here is a slightly different approach to finish it off, using ImageMagick.
1) Threshold and dilate the contour
convert 8y0KL.jpg -threshold 50% -morphology dilate disk:11 step1.gif
2) Erode by a smaller amount
convert step1.gif -morphology erode disk:8 step2.gif
3) Pad by 1 pixel all around with black, flood-fill the outside with white, then remove the 1-pixel pad all around
convert step2.gif -bordercolor black -border 1 -fill white -draw "color 0,0 floodfill" -alpha off -shave 1x1 step3.gif
4) Erode by a smaller amount and get the edge on white side of the transition. Note that we started with dilate 11, then we eroded by 8, then we now erode by 3. So 8+3=11 should get us back to about the center line.
convert step3.gif -morphology erode disk:3 -morphology edgein diamond:1 step4.gif
5) Create animation to compare
convert -delay 50 8y0KL.jpg step4.gif -loop 0 animation.gif
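If you would rather script it, here is a rough OpenCV rendering of the same pipeline; the filename, threshold and the corner seed for the flood fill are assumptions:
import cv2
import numpy as np

def disk(r):
    # Radius-r disc, mirroring ImageMagick's disk:r
    return cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2*r + 1, 2*r + 1))

img = cv2.imread('8y0KL.jpg', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Steps 1 and 2: dilate by 11, erode back by 8
step2 = cv2.erode(cv2.dilate(bw, disk(11)), disk(8))

# Step 3: flood fill the outside with white, leaving the interior hole black
h, w = step2.shape
mask = np.zeros((h + 2, w + 2), np.uint8)
step3 = step2.copy()
cv2.floodFill(step3, mask, (0, 0), 255)

# Step 4: erode by 3 (8+3=11 recovers the centre line), then take the inner
# edge: the image minus its erosion by a diamond-shaped (cross) kernel
step4 = cv2.erode(step3, disk(3))
cross = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
edge = cv2.subtract(step4, cv2.erode(step4, cross))
cv2.imwrite('result.png', edge)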

Here's another suggestion that is more "computer vision literature" oriented.
As a rule of thumb preprocessing step, it is usually a good idea to thin all the edges to make sure they are about 1 pixel thick. A popular edge thinning method is non-maximal suppression (NMS).
Then I would start off by analyzing the image and finding all the connected components. OpenCV already provides the connectedComponents function. Once the groups of connected components are determined, you can fit a Bezier curve to each group. An automatic method of fitting Bezier curves to a set of 2D points is available in the Graphics Gems book. There's also C code available for their method. The goal of fitting a Bezier curve is to get as much high-level understanding of each component group as possible.
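A minimal sketch of that grouping step, assuming a thinned, binary edge map as input (the filename is hypothetical):
import cv2
import numpy as np

edges = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)

# Label each disconnected fragment; 8-connectivity keeps diagonal neighbours together
num_labels, labels = cv2.connectedComponents(binary, connectivity=8)

# Collect each fragment's (x, y) points, ready for curve fitting
groups = [np.column_stack(np.where(labels == i)[::-1])
          for i in range(1, num_labels)]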
Next, you need to join those Bezier curves together. A method of joining lines using endpoints clustering is available in the work of Shpitalni and Lipson. In that paper, take a look at their adaptive clustering method in the section named "Entity Linking and Endpoint Clustering".
Finally, with all the curves grouped together, you can fit a final Bezier curve to all the points that you have to get a nice, natural-looking edge map.
As a side note, you can take a look at the work of Ming-Ming Cheng on cartoon curve extraction. There's OpenCV-based code available for that method here too, but it will output the following once applied to your image:
Disclaimer:
I can attest to the performance of the Bezier curve fitting algorithm as I've personally used it and it works pretty well. Cheng's curve extraction algorithm works well too; however, it will create bad-looking "blobs" where contours are thin, due to its use of gradient detection (which has a tendency to make thin lines thick!). If you can find a way to work around this "thickening" effect, you can skip Bezier curve extraction and jump right into endpoint clustering to join the curves together.
Hope this helps!

You can try using the distance transform:
% Binarize
im=rgb2gray(im); im=im>100;
% Distance transform: distance of each pixel to the nearest contour pixel
bd=bwdist(im);
% Zero everything closer than maxDist, keeping only the region far from the contour
maxDist = 5;
bd(bd<maxDist)=0;
% The perimeter of that far region runs parallel to the contour, offset by maxDist
bw=bwperim(bd); bw=imclearborder(bw);
bw=imfill(bw,'holes');
% Thin back by maxDist to move towards the closed centre line
bw=bwperim(bwmorph(bw,'thin',maxDist));
figure,imagesc(bw+2*im),axis image
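Roughly the same idea in Python, using SciPy's distance transform; note that distance_transform_edt measures distance to the nearest zero, hence the inversion (the filename and threshold are assumptions):
import cv2
import numpy as np
from scipy import ndimage

img = cv2.imread('gappy.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
contour = img > 100

# Distance of every pixel to the nearest contour pixel
dist = ndimage.distance_transform_edt(~contour)

# Everything within maxDist of the contour forms a thick band that bridges the gaps
maxDist = 5
band = dist < maxDist

# Fill the band's interior, shrink back by maxDist, and take the outline
filled = ndimage.binary_fill_holes(band)
shrunk = ndimage.binary_erosion(filled, iterations=maxDist)
outline = shrunk ^ ndimage.binary_erosion(shrunk)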

My proposal:
find the endpoints; these are the pixels with at most one neighbor, after a thinning step to discard "thick" endpoints. Endpoints should come in pairs.
from all endpoints, grow a digital disk until you meet another endpoint which is not the peer.
Instead of growing a disk, you can preprocess the set of endpoints and prepare it for a nearest-neighbor search (2D-tree for instance). You will need to modify the search to avoid hitting the peer.
This approach does not rely on standard functions, but it has the advantage of respecting the original outline; see the sketch below.
On the picture, the original pixels are in white or in green when they are endpoints. The yellow pixels are digital line segments drawn between the nearest endpoint pairs.
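A sketch of the nearest-neighbour variant using a k-d tree; it assumes the endpoint coordinates and each endpoint's peer index have already been computed by earlier steps:
import cv2
import numpy as np
from scipy.spatial import cKDTree

def join_gaps(img, endpoints, peer):
    # endpoints: (N, 2) array of (x, y); peer[i] is the index of endpoint i's
    # partner on the same curve fragment (both assumed precomputed)
    tree = cKDTree(endpoints)
    for i, p in enumerate(endpoints):
        # Query a few neighbours so we can skip the point itself and its peer
        dists, idxs = tree.query(p, k=min(4, len(endpoints)))
        for j in idxs:
            if j != i and j != peer[i]:
                cv2.line(img, (int(p[0]), int(p[1])),
                         (int(endpoints[j][0]), int(endpoints[j][1])), 255, 1)
                break
    return img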

Related

How to find (figure out) the pixelated part of an image

I want to find the pixelated part of an image.
For example,
then I want to find the "AREA" of pixelation with a bounding box.
At first I thought an R-CNN might be helpful, but later I realised that a traditional method might also work for this problem, such as finding differences in image entropy, etc.
Is there any method for solving this problem?
Thank you.
Just thinking aloud really... the pixellated areas are regions of solid colour, so they will have a very low variance and standard deviation. We can therefore try some experiments with ImageMagick, which is included in most Linux distros and is available for macOS and Windows.
If we take your image, convert it to greyscale, calculate the standard deviation in a 7x7 area around each pixel, invert so that the areas with the lowest standard deviation are bright, then normalise to the full black-white range and threshold the very brightest pixels:
convert p.png -colorspace gray -statistic standarddeviation 7x7 -negate -normalize -threshold 99.99% result.png
Changing the numbers a bit:
convert p.png -colorspace gray -statistic standarddeviation 3x3 -negate -normalize -threshold 99.9% result.png
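The same statistic is easy to compute in Python with box filters, since variance = E[x^2] - E[x]^2; the window size and threshold are knobs to tune, just like the numbers above:
import cv2
import numpy as np

gray = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Local standard deviation over a 7x7 window: sqrt(E[x^2] - E[x]^2)
mean = cv2.boxFilter(gray, -1, (7, 7))
mean_sq = cv2.boxFilter(gray * gray, -1, (7, 7))
std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))

# Low standard deviation = flat, possibly pixelated, region
flat = (std < 1.0).astype(np.uint8) * 255

# Bounding box of the flat area, as the question asks
x, y, w, h = cv2.boundingRect(cv2.findNonZero(flat))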

Image Segmentation: Create polygons

I have input images which look like this:
I would like to segment the images so that I get approximated polygons which contain only horizontal and vertical lines.
My first approach was Hough segmentation, but I was only able to create rectangular objects. This does not work for the second image.
Then I tried to use a decision tree: for each image I trained a decision tree with the x and y positions of all pixels as inputs and the black/white classification as the target. Then I used only the first n layers of this tree. With this new tree I made a prediction for all pixels. Sometimes this worked well, but sometimes it didn't. In particular, the tree depth varies from picture to picture...
Maybe someone has an idea how to do this? Or is there already an algorithm for this use case available?
Thank you very much
Regards
Kevin
I get pretty reasonable results using a morphological "thinning" followed by an "erosion" to isolate either horizontally or vertically oriented features. I am just doing it at the command-line with ImageMagick but you can use the Python bindings if you prefer.
So, horizontal features:
convert poly.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:3x1 im1h.png
And vertical features:
convert poly.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:1x3 im1v.png
And, using the other image:
convert poly2.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:1x3 result.png
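In Python the thinning step needs the contrib package (opencv-contrib-python); the directional erosions mirror the rectangle kernels above. A sketch with an assumed filename:
import cv2
import numpy as np

img = cv2.imread('poly.png', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Thin to a one-pixel skeleton (cv2.ximgproc requires opencv-contrib-python)
skel = cv2.ximgproc.thinning(bw)

# A pixel survives a 1x3 erosion only if its left and right neighbours are set
horizontal = cv2.erode(skel, np.ones((1, 3), np.uint8))
# ...and a 3x1 erosion only if the pixels above and below are set
vertical = cv2.erode(skel, np.ones((3, 1), np.uint8))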

How to trim/crop an image uniformly in ImageMagick?

Assume I have an original image (gray background) with a black circle a bit down and to the right (not centered), and the minimum space from any edge of the circle to the image edge is, let's say, 75px. I would like to trim the same amount of space on all sides, and the space should be the maximum possible without cropping the actual object in the image (area in magenta in the image). Would love to hear how this could be solved.
Thanks in advance!
If I understand the question correctly, you want to trim an image based not on the minimum bounding rectangle, but on the outer bounding rectangle.
I would do something like this.
Say I create an image with:
convert -size 200x200 xc:gray75 -fill black -draw 'circle 125 125 150 125' base.png
I would drop the image to a binary edge & trim everything down to the minimum bounding rectangle.
convert base.png -canny 1x1 -trim mbr.png
This will generate the mbr.png image, which also retains the original page information. The page information can be extracted with the identify utility to calculate the outer bounding rectangle.
sX=$(identify -format '%W-(0 %X)-%w\n' mbr.png | bc)
sY=$(identify -format '%H-(0 %Y)-%h\n' mbr.png | bc)
Finally apply the calculated result(s) with -shave back on the original image.
convert base.png -shave "${sX}x${sY}" out.png
I assume that you want to trim your image (or shave, in ImageMagick terms) by the minimal horizontal or vertical distance to the edge. If so, this can be done with this one-liner:
convert circle.png -trim -set page "%[fx:page.width-min(page.width-page.x-w,page.height-page.y-h)*2]x%[fx:page.height-min(page.width-page.x-w,page.height-page.y-h)*2]+%[fx:page.x-min(page.width-page.x-w,page.height-page.y-h)]+%[fx:page.y-min(page.width-page.x-w,page.height-page.y-h)]" -background none -flatten output.png
This may look complicated, but in reality it isn't. First trim the image. The result will still have stored information on the page geometry, including the original width, height and actual offsets. With this info I can set the page geometry (correct width & height and new offsets) using ImageMagick FX expressions. Finally, flattening the image produces the desired output.
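If you prefer to compute it yourself, the logic is just: find the object's bounding box, take the smallest margin to any image edge, and crop that amount from every side. A NumPy sketch with an assumed darkness test for the object:
import cv2
import numpy as np

img = cv2.imread('circle.png')
# Assumption: anything much darker than the grey background is the object
mask = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) < 100

rows = np.where(mask.any(axis=1))[0]
cols = np.where(mask.any(axis=0))[0]
top, bottom, left, right = rows[0], rows[-1], cols[0], cols[-1]

# Smallest distance from the object to any image edge
h, w = mask.shape
margin = min(top, left, h - 1 - bottom, w - 1 - right)
cv2.imwrite('output.png', img[margin:h - margin, margin:w - margin])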

Extract image 2 from image 1

I want to draw a boundary box around the text, as in Image 2,
starting from Image 1. Can anyone suggest a good way to do this, or an algorithm or tutorial?
As you haven't suggested a tool, I will use ImageMagick straight at the command line as it is installed on most Linux distros and is available for OSX and Windows. It also has PHP, Perl, Python and .Net bindings.
So, as your background is uniform (ish) you can just use trim to trim it off:
convert image.jpg -fuzz 20% -trim result.jpg
Now you can add a border like this:
convert result.jpg -bordercolor black -border 5 result.jpg
Except you want the other grey background to be retained, so that doesn't work for you. Instead of actually trimming, we can ask ImageMagick where it would trim, without actually doing it, like this:
convert image.jpg -fuzz 20% -format %@ info:
81x22+1+14
So, we know it would make an 81x22px box starting 1 pixel in from the left and 14 pixels down from the top, so we'll just draw a rectangle there instead of trimming it:
convert image.jpg -fill none -stroke black -draw "rectangle 1,14 82,36" result.jpg
Or, if you want the outline fatter:
convert image.jpg -fill none -stroke black -strokewidth 5 -draw "rectangle 1,14 82,36" result.jpg
For a uniform background, a simple solution would be to identify all of the pixels that do not match the background color and then find the minimum and maximum indices in each axis of those pixels to define a rectangle.
For instance, if you were using Matlab, this might resemble:
Use 'find' to identify non-background pixels (e.g. linearIndices = find(~(image1 == background)), where background is either a hard-coded set of RGB values corresponding to the background pixels or a set of RGB values identified by the mode of the image).
'Find' will return linear indices rather than subscripts (i.e. the bottom-right corner of a 3x3 matrix is 9, not [3,3]), so use 'ind2sub' to convert to subscripts (e.g. [I,J] = ind2sub(imageSize, linearIndices)).
Use 'max' and 'min' to find the range in x and y (e.g. rangeX = [min(I) max(I)]; rangeY = [min(J) max(J)]).
Change pixels along the min and max indices to the border colour. For instance, image1(rangeX(1), rangeY(1):rangeY(2)) = boxColour (where boxColour is the RGB value of the colour you want the box to be) would draw the top border of the box. Repeat this process for the three other borders and you're done.
Of course this approach only works if the background is completely uniform. It also assumes you only want to draw a border that is one pixel thick.
While the function recommendations correspond specifically to Matlab functions, the thought process behind those functions could likely be ported elsewhere.
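For example, the same steps rendered in Python/NumPy; the background and box colours are placeholders:
import cv2
import numpy as np

img = cv2.imread('image.jpg')
background = np.array([128, 128, 128])  # placeholder background colour (BGR)

# Pixels that differ from the background in any channel
mask = np.any(img != background, axis=2)
I, J = np.where(mask)
top, bottom, left, right = I.min(), I.max(), J.min(), J.max()

# Paint a one-pixel border along the extremes
box_colour = (0, 0, 0)
img[top, left:right + 1] = box_colour
img[bottom, left:right + 1] = box_colour
img[top:bottom + 1, left] = box_colour
img[top:bottom + 1, right] = box_colour
cv2.imwrite('result.jpg', img)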

How to remove greyed background pixels from an image

I want to remove the unnecessary greyed background pixels in the above image.
May I know how to do that?
Quick and dirty with ImageMagick:
convert 9AWLa.png -blur x1 -threshold 50% out.png
Somewhat better, still with ImageMagick:
convert 9AWLa.png -morphology thicken '1x3>:1,0,1' out.png
Updated Answer
It is rather harder to keep the noisy border intact while removing noise elsewhere in the image. Is it acceptable to trim 3 pixels off all the way around the edge, then add a 3 pixel wide black border back on?
convert 9AWLa.png -morphology thicken '1x3>:1,0,1' \
-shave 3x3 -bordercolor black -border 3 out.png
Some methods that come to my mind are:
If the background is grey rather than sparse black dots, you can convert the image to binary by thresholding it at a suitable greyscale value, i.e. all pixels above a particular value become white and all below it become black. Something like this.
Another thing you can do is first smooth the picture with a filter such as a mean or median filter, and then convert to binary as in the previous point; see the sketch below.
I think the unnecessary background can be removed this way.
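A short sketch of that smooth-then-threshold variant; the filter size and threshold are guesses to tune:
import cv2

img = cv2.imread('9AWLa.png', cv2.IMREAD_GRAYSCALE)
# The median filter knocks out isolated grey speckles before binarising
smooth = cv2.medianBlur(img, 3)
_, binary = cv2.threshold(smooth, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite('out.png', binary)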
