OpenCV line detection for 45 degree lines - image

I have an image:
In this image, the OpenCV Hough transform can't detect the big -45 degree line using
minLineLength = 150
maxLineGap = 5
line_thr = 150
linesP = cv.HoughLinesP(dst, 1, np.pi / 180, line_thr, None, minLineLength, maxLineGap)
The only lines found are:
I tried playing with various thresholds also but I can't find the line here.
If I manually crop the image like this:
then I can clearly see the OpenCV Hough transform finding the right line:
I want to find the same line in the non-cropped version. Any suggestions on how to find it there?
Also, there can be cases where there is no line at all, or the line doesn't span the full X-axis length.
Examples

I implemented a slightly simpler algorithm than my other answer but in Python with OpenCV this time.
Basically, rather than taking the mean of vertical columns of pixels, it sums the pixels in the columns and chooses the column that is brightest. If I show the padded, rotated image with another image below representing the sums of the columns, you should see how it works:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image as greyscale
im = cv2.imread('45.jpg',cv2.IMREAD_GRAYSCALE)
# Pad with border so it isn't cropped when rotated
bw=300
bordered = cv2.copyMakeBorder(im, top=bw, bottom=bw, left=bw, right=bw, borderType= cv2.BORDER_CONSTANT)
# Rotate -45 degrees
h, w = bordered.shape
M = cv2.getRotationMatrix2D((w/2, h/2), -45, 1)
paddedrotated = cv2.warpAffine(bordered, M, (w, h))
# DEBUG cv2.imwrite('1.tif',paddedrotated)
# Sum the elements of each column and find column with most white pixels
colsum = np.sum(paddedrotated, axis=0, dtype=np.float64)
col = np.argmax(colsum)
# DEBUG cv2.imwrite('2.tif',colsum)
# Fill with black except for the line we have located which we make white
paddedrotated[:,:] = 0
paddedrotated[:,col] = 255
# Rotate back to straight
h, w = paddedrotated.shape
M = cv2.getRotationMatrix2D((w/2, h/2), 45, 1)
straight = cv2.warpAffine(paddedrotated, M, (w, h))
# Remove padding and save to disk
straight = straight[bw:-bw,bw:-bw]
cv2.imwrite('result.png',straight)
Note that you don't actually have to rotate the image back to straight and crop it back to its original size. You could actually stop after the line that says:
col = np.argmax(colsum)
and use some elementary trigonometry to work out what that means in your original image.
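For example, here is a minimal sketch of that back-mapping. It assumes you stopped right after col = np.argmax(colsum), so that M is still the -45 degree rotation matrix and bw is the border width from above:
# Sketch: map the located column back to a line segment in the original image
# (assumes M is still the -45 degree matrix and bw is the border width)
H = paddedrotated.shape[0]
pts = np.array([[[col, 0]], [[col, H - 1]]], dtype=np.float32)
Minv = cv2.invertAffineTransform(M)                    # undo the rotation
p0, p1 = cv2.transform(pts, Minv).reshape(2, 2) - bw   # undo the padding too
print('Line endpoints in original image:', p0, p1)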
Here is the output:
Keywords: line detection, detect line, rotate, pad, border, projection, project, image, image processing, Python, OpenCV, affine, Hough

I did this on the command-line in Terminal with ImageMagick but you can apply exactly the same technique with OpenCV.
Step 1
Take the image and rotate it 45 degrees introducing black pixels as background where required:
convert 45.jpg -background black -rotate 45 result.png
Step 2
Now, building on the previous command, set every pixel to the median of the box 1px wide and 250px tall centred on it:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 result.png
Step 3
Now, again building on the previous command, rotate it back 45 degrees:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
So, in summary, the entire processing is:
convert input.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
Obviously then crop it back to the original size and append side-by-side with the original for checking:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 +repage -gravity center -crop 184x866+0+0 result.png
convert 45.jpg result.png +append result.png
You can also use the mean statistic plus thresholding rather than the median, since taking the mean is quicker than sorting to find the median; however, it tends to lead to smearing:
convert 45.jpg -background black -rotate 45 -statistic mean 1x250 result.png
Your newly-added image gets processed to this result:
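If you would rather stay in Python, here is a rough OpenCV/SciPy equivalent of the rotate-median-rotate pipeline above. It is only a sketch, assuming a grayscale 45.jpg; SciPy does the 1x250 median because cv2.medianBlur only takes square kernels, and padding stands in for ImageMagick's canvas growth (flip the rotation sign if your line ends up horizontal):
import cv2
from scipy.ndimage import median_filter

im = cv2.imread('45.jpg', cv2.IMREAD_GRAYSCALE)
bw = 300
im = cv2.copyMakeBorder(im, bw, bw, bw, bw, cv2.BORDER_CONSTANT, value=0)
h, w = im.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1)
rot = cv2.warpAffine(im, M, (w, h))                      # rotate 45, black background
med = median_filter(rot, size=(250, 1))                  # median of a 1px-wide, 250px-tall box
Minv = cv2.getRotationMatrix2D((w / 2, h / 2), -45, 1)
out = cv2.warpAffine(med, Minv, (w, h))[bw:-bw, bw:-bw]  # rotate back and remove padding
cv2.imwrite('result.png', out)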

The problem is clearly that the line you are searching for is not a line. It actually looks like a train of connected circles and boxes. Therefore, I recommend that you do the following:
Find all contours in the image using findContours:
import cv2 as cv
import numpy as np

img = cv.imread('image.jpg')
img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(img_gray, 127, 255, 0)
contours, hierarchy = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2:]  # OpenCV 3/4 compatible
This will return many, many contours, so use a loop to keep only the sufficiently long ones. Since the image size is 814x1041 pixels, I assume a contour is long if its perimeter is at least 10% of the image width, which is roughly 100 pixels (you will need to tune this value):
long_contours = []
for contour in contours:
    perimeter = cv.arcLength(contour, True)
    if perimeter > 0.1 * 1018:  # roughly 10% of the image width
        long_contours.append(contour)
Now fit a rotated bounding rectangle around each long contour that might be a line. A long contour is considered a line if its width is much greater than its height, i.e. its aspect ratio is large (such as 8; you also need to tune this value):
for long_contour in long_contours:
    rect = cv.minAreaRect(long_contour)
    (cx, cy), (width, height), angle = rect
    aspect_ratio = max(width, height) / max(min(width, height), 1)  # guard against zero height
    if aspect_ratio > 8:
        box = cv.boxPoints(rect)
        box = np.int32(box)
        cv.drawContours(img, [box], 0, (255, 255, 255), cv.FILLED)
Finally, you should get something like this. Please note the code here is for guidance only.

Your original code is clean as a whistle. The only problem is that your image contains too much information, which messes up the accumulator scores. Everything will work out if you increase the line threshold to 255.
minLineLength = 150
maxLineGap = 5
line_thr = 255
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180.0, line_thr, None, minLineLength, maxLineGap)
Here are the results using that value.
3 lines are detected here because the white line is several pixels thick.
[ 1 41 286 326]
[ 0 42 208 250]
[ 1 42 286 327]
5 lines are detected around the same area for the same reason as above. Thinning the white regions with a morphological operation or a distance transform should fix this (see the sketch after these results).
[110 392 121 598]
[112 393 119 544]
[141 567 147 416]
[ 29 263 29 112]
[ 0 93 179 272]
No line found here.
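For example, a minimal sketch of that thinning; it assumes dst is the binary image from the question, and the 3x3 kernel is just a starting point to tune:
import cv2
import numpy as np

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thinned = cv2.erode(dst, kernel, iterations=1)  # shrink thick white regions
linesP = cv2.HoughLinesP(thinned, 1, np.pi / 180.0, 255, None, 150, 5)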

Related

how to remove the white pixel around the image?

How can I crop the image (or use some other method) so that the pixel sums along the boundary are lower than the sums at the hole's x and y position? I mean, in the images below,
the row and column sums should be highest at the white hole, but as the chart shows, the surroundings of the image, especially the x (column) sums, have higher values.
I want to find the pixel sums with high values and use them to locate the hole's pixels or coordinates. The y-axis sums often give a better result than the x-axis sums, though not always. I crop all 1000 images with the code below, then take the pixel sums, and from the index of the maximum value the hole is located.
from PIL import Image, ImageCms
import cv2
import numpy as np

img = Image.open(r'J:\py.pro\path\picture_1.png').convert('L') # convert image to 8-bit grayscale
if img.mode == "CMYK":
    # color profiles can be found at C:\Program Files (x86)\Common Files\Adobe\Color\Profiles\Recommended
    img = ImageCms.profileToProfile(img, "USWebCoatedSWOP.icc", "sRGB_Color_Space_Profile.icm", outputMode="RGB")
# PIL image -> OpenCV image; see https://stackoverflow.com/q/14134892/2202732
img = cv2.cvtColor(np.array(img), cv2.COLOR_GRAY2BGR)
## (1) Convert to gray, and threshold
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, threshed = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
## (2) Morph-op to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10,30))
morphed = cv2.morphologyEx(threshed, cv2.MORPH_CLOSE, kernel)
## (3) Find the max-area contour
cnts = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = sorted(cnts, key=cv2.contourArea)[-1]
## (4) Crop and save it
x,y,w,h = cv2.boundingRect(cnt)
dst = img[y:y+h, x:x+w]
# add border/padding around the cropped image
# dst = cv2.copyMakeBorder(dst, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=[255,255,255])
#cv2.imshow("J:\\py.pro\\path\\pic_1.png", dst)
cv2.imwrite("J:\\py.pro\\path\\pic_1.png", dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
How about adding a 1-pixel wide white border on all sides and flood-filling white-coloured pixels with black starting from (0,0) so that the flood-fill flows around the edges filling all white areas at image edges?
I'll demonstrate with magenta for the flood-fill so you can see which pixels are affected:
I actually did that with ImageMagick in Terminal, but you can do just the same with OpenCV:
magick lattice.png -bordercolor white -border 1 -fill magenta -draw 'color 0,0 floodfill' result.png
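The same idea takes only a few lines in OpenCV. A minimal sketch, assuming the input is lattice.png: pad with a 1-pixel white border, flood-fill from (0,0) with black, then drop the border again:
import cv2
import numpy as np

img = cv2.imread('lattice.png')
img = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=(255, 255, 255))
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)    # floodFill needs a mask 2px larger than the image
cv2.floodFill(img, mask, (0, 0), (0, 0, 0))  # fill the white edge-connected regions with black
cv2.imwrite('result.png', img[1:-1, 1:-1])   # remove the 1px border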

What's the easiest way to find the coordinates of an object in an image?

Imagine having an image of circles of different colors on a background of one color. What would be the easiest way to find the coordinates of the circles' centers (of course programmatically)?
I felt like doing it in Python with OpenCV as well, using the same starting image as my other answer.
The code looks like this:
#!/usr/bin/env python3
import numpy as np
import cv2
# Load image
im = cv2.imread('start.png')
# Convert to grayscale and threshold
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,1,255,0)
# Find contours, draw on image and save
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # OpenCV 3/4 compatible
cv2.drawContours(im, contours, -1, (0,255,0), 3)
cv2.imwrite('result.png',im)
# Show user what we found
for cnt in contours:
    (x,y),radius = cv2.minEnclosingCircle(cnt)
    center = (int(x),int(y))
    radius = int(radius)
    print('Contour: centre {},{}, radius {}'.format(x,y,radius))
That gives this on the Terminal:
Contour: centre 400.0,200.0, radius 10
Contour: centre 500.0,200.0, radius 80
Contour: centre 200.0,150.0, radius 90
Contour: centre 50.0,50.0, radius 40
And this as the result image:
There's a very easy way with ImageMagick which is free and installed on most Linux distros and is available for macOS and Windows - no programming required!
Let's start with this image:
Now you just run this in Terminal or Command Prompt:
magick input.png -define connected-components:verbose=true -connected-components 8 -auto-level output.png
Output
Objects (id: bounding-box centroid area mean-color):
0: 600x300+0+0 297.4,145.3 128391 srgb(0,0,0) <--- black background
2: 181x181+110+60 200.0,150.0 25741 srgb(0,0,255) <--- blue circle
3: 161x161+420+120 500.0,200.0 20353 srgb(255,0,255) <--- magenta circle
1: 81x81+10+10 50.0,50.0 5166 srgb(0,255,0) <--- green circle
4: 21x21+390+190 400.0,200.0 349 srgb(255,255,0) <--- yellow circle
I added the comments above after <---.
Looking at the blue circle, you can see its colour is srgb(0,0,255) which is blue and it measures 181x181 pixels - so its radius is 90 pixels. The top-left corner of the containing rectangle is at [110,60] so the centre is at [200,150], which matches the 200.0,150.0 given for the centroid.
Likewise, looking at the yellow circle, its colour is srgb(255,255,0) which is yellow. Its height and width are 21 pixels which means the radius is 10. The top-left corner of the containing square is at [390,190] which means the centre is at [400,200], matching the centroid given as 400.0,200.0.
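If you want the same information from Python/OpenCV, connectedComponentsWithStats reports each blob's bounding box, centroid and area; a minimal sketch, assuming the same start.png:
import cv2

im = cv2.imread('start.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw, connectivity=8)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    cx, cy = centroids[i]
    print(f'Blob {i}: bbox {w}x{h}+{x}+{y}, centroid {cx:.1f},{cy:.1f}, area {area}')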

Precise adjusting of two colours simultaneously via manipulation with brightness, contrast and gamma

I have a black and white image which has to be rendered on screen as grayscale image of precise colours. Black should be displayed as rgb(40,40,40) and white as rgb(128,128,128).
The problem is the software to render this image does not allow colours to be specified directly; the only parameters I can vary are brightness, contrast and gamma (converting image to the desired colours is not an option).
Is there a formula to calculate specific values for those parameters to adjust the colours as described?
Without knowing how they compute brightness and contrast, it is hard to tell you how to do your computation.
Perhaps I still misunderstand. But you can find the min and max values in your image using Imagemagick
convert image -format %[fx:255*minima] info:
convert image -format %[fx:255*maxima] info:
Those will be in the range of 0 to 255.
As Mark showed, the transformation is linear, so it obeys the equation
Y = a*X + b
where a is a measure of contrast and b is a measure of brightness; X is your input value and Y is your desired output value.
Thus
Ymax = a*Xmax + b
and
Ymin = a*Xmin + b
Subtracting and solving for a, we get
a = (Ymax-Ymin)/(Xmax-Xmin)
and substituting that into the equation for Ymax and solving for b, we get
b = Ymax - a*Xmax = Ymax - ( (Ymax-Ymin)/(Xmax-Xmin) )*Xmax
Then you can use the Imagemagick function -function polynomial to process your image.
In unix, I would do it as follows
Xmin=$(convert image -format %[fx:255*minima] info:)
Xmax=$(convert image -format %[fx:255*maxima] info:)
If your image is pure black and pure white, then you can skip the above and just use
Xmin=0
Xmax=255
And your desired values are
Ymin=40
Ymax=128
These are now variables and I can use fx to do the calculations for a and b. Note that -function polynomial operates on pixel values normalized to the range 0 to 1, so b must be divided by 255:
a=$(convert xc: -format "%[fx:($Ymax-$Ymin)/($Xmax-$Xmin)]" info:)
b=$(convert xc: -format "%[fx:($Ymax - $a*$Xmax)/255]" info:)
Then, to convert your image:
convert image -function polynomial "$a,$b" resultimage
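If you would rather apply the same linear mapping in Python with OpenCV, here is a minimal sketch; it assumes an 8-bit grayscale image.png containing pure black and white, so Xmin=0 and Xmax=255:
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
Xmin, Xmax = 0, 255
Ymin, Ymax = 40, 128
a = (Ymax - Ymin) / (Xmax - Xmin)                # contrast (slope)
b = Ymax - a * Xmax                              # brightness (intercept)
out = cv2.convertScaleAbs(img, alpha=a, beta=b)  # Y = a*X + b, clipped to [0,255]
cv2.imwrite('result.png', out)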
In general, there are several ways to alter an image's contrast, gamma and brightness and it is difficult to know which method your chosen tool uses, and therefore provide the correct answer.
What you are trying to do is move the blue line (no contrast or brightness changes) in the image below to where the red line (decreased contrast) is:
In general, decreasing the contrast will rotate the blue line clockwise whereas increasing it will rotate it anti-clockwise. In general, increasing the brightness will shift the blue line to the right whereas decreasing the brightness will shift it left. Changing the gamma will likely make the line into a curve.
Can you use ImageMagick at the commandline instead?
convert input.png +level 15.69%,50.2% -depth 8 result.png
If you have v7+, use magick in place of convert.
I made a little gradient for you with:
convert -size 60x255 gradient: -rotate 90 gradient.png
And if you apply the suggested command:
convert gradient.png +level 15.69%,50.2% -depth 8 result.png
You will get this:
And you can check the statistics (min and max) with:
identify -verbose result.png | more
Image: result.png
  Format: PNG (Portable Network Graphics)
  Mime type: image/png
  Class: PseudoClass
  Geometry: 255x60+0+0
  Units: Undefined
  Type: Grayscale
  Base type: Palette
  Endianess: Undefined
  Colorspace: Gray
  Depth: 8-bit
  Channel depth:
    Gray: 8-bit
  Channel statistics:
    Pixels: 15300
    Gray:
      min: 40 (0.156863) <--- MIN looks good
      max: 128 (0.501961) <--- MAX looks good
      mean: 84.0078 (0.329443)
      standard deviation: 25.5119 (0.100047)
      kurtosis: -1.19879
      skewness: 0.000197702

How to programmatically generate a ring/annulus with varying blocks of colors

A picture is worth a thousand words...
I'd like to know how to be able to generate an image like that where (1) the two circles are obviously perfect circles, (2) I can define the beginning and ending parts of each region in terms of angles, e.g. section 1 starts at 0 radians from the vertical and ends at pi/2 radians from the vertical, etc., and (3) I can define the color of each region.
In fact the outside and inside of the ring should not have a black border; the border of each region should be the same color as each region.
How might I do this with, say, ImageMagick?
You can create an annulus in vector graphics with the Arc command. See Mozilla's SVG path documentation for details on the parameters.
With ImageMagick, you can -draw any vector-graphic path. Example:
convert -size 100x100 xc:transparent -stroke black -strokewidth 1 \
-fill blue -draw 'path "M 10 50 A 40 40 0 0 1 50 10 L 50 20 A 30 30 0 0 0 20 50 Z"' \
-fill red -draw 'path "M 50 10 A 40 40 0 0 1 90 50 L 80 50 A 30 30 0 0 0 50 20 Z"' \
-fill green -draw 'path "M 90 50 A 40 40 0 0 1 10 50 L 20 50 A 30 30 0 0 0 80 50 Z"' \
annulus.png
update
To create a more programmatic approach, use any OOP scripting language. Below is a quick example with Python & Wand, but Ruby & RMagick are also highly recommended.
#!/usr/bin/env python3
import math
from wand.color import Color
from wand.drawing import Drawing
from wand.image import Image

class Annulus(Image):
    def __init__(self, inner, outer, padding=5):
        self.Ri = inner
        self.Ro = outer
        side = (outer + padding) * 2
        self.midpoint = side / 2
        super(Annulus, self).__init__(width=side,
                                      height=side,
                                      background=Color("transparent"))
        self.format = 'PNG'

    def __iadd__(self, segment):
        cos_start, cos_end = math.cos(segment.As), math.cos(segment.Ae)
        sin_start, sin_end = math.sin(segment.As), math.sin(segment.Ae)
        SiX, SiY = self.midpoint + self.Ri * cos_start, self.midpoint + self.Ri * sin_start
        SoX, SoY = self.midpoint + self.Ro * cos_start, self.midpoint + self.Ro * sin_start
        EiX, EiY = self.midpoint + self.Ri * cos_end, self.midpoint + self.Ri * sin_end
        EoX, EoY = self.midpoint + self.Ro * cos_end, self.midpoint + self.Ro * sin_end
        with Drawing() as draw:
            for key, value in segment.draw_args.items():
                setattr(draw, key, value)
            draw.path_start()
            draw.path_move(to=(SiX, SiY))
            draw.path_elliptic_arc(to=(EiX, EiY),
                                   radius=(self.Ri, self.Ri),
                                   clockwise=True)
            draw.path_line(to=(EoX, EoY))
            draw.path_elliptic_arc(to=(SoX, SoY),
                                   radius=(self.Ro, self.Ro),
                                   clockwise=False)
            draw.path_close()
            draw.path_finish()
            draw(self)
        return self

class Segment(object):
    def __init__(self, start=0.0, end=0.0, **kwargs):
        self.As = start
        self.Ae = end
        self.draw_args = kwargs

if __name__ == '__main__':
    from wand.display import display

    ring = Annulus(20, 40)
    ring += Segment(start=0,
                    end=math.pi/2,
                    fill_color=Color("yellow"))
    ring += Segment(start=math.pi/2,
                    end=math.pi,
                    fill_color=Color("pink"),
                    stroke_color=Color("magenta"),
                    stroke_width=1)
    ring += Segment(start=math.pi,
                    end=0,
                    fill_color=Color("lime"),
                    stroke_color=Color("orange"),
                    stroke_width=4)
    display(ring)
I know little about gnuplot, but I think it probably fits the bill here - my commands may be crude, but they seem pretty legible and effective. Someone cleverer than me may be able to improve them!
Anyway, here is the script I came up with:
set xrange [-1:1]
set yrange [-1:1]
set angles degrees
set size ratio -1
# r1 = annulus outer radius, r2 = annulus inner radius
r1=1.0
r2=0.8
unset border; unset tics; unset key; unset raxis
set terminal png size 1000,1000
set output 'output.png'
set style fill solid noborder
set object circle at first 0,0 front size r1 arc [0:60] fillcolor rgb 'red'
set object circle at first 0,0 front size r1 arc [60:160] fillcolor rgb 'green'
set object circle at first 0,0 front size r1 arc [160:360] fillcolor rgb 'blue'
# Splat a white circle on top to conceal central area
set object circle at first 0,0 front size r2 fillcolor rgb 'white'
plot -10 notitle
And here is the result:
So, if you save the above script as annulus.cmd you would run it and create the file output.png using the command
gnuplot annulus.cmd
Obviously the guts of the script are the 3 lines that start set object circle each of which creates a separate annulus segment in a different colour with a different set of start and end angles.
Noodling around and changing some things gives this:
set xrange [-1:1]
set yrange [-1:1]
set angles degrees
set size ratio -1
# r1 = annulus outer radius, r2 = annulus inner radius
r1=1.0
r2=0.4
unset border; unset tics; unset key; unset raxis
set terminal png size 1000,1000
set output 'output.png'
set style fill solid noborder
set object circle at first 0,0 front size r1 arc [0:60] fillcolor rgb 'red'
set object circle at first 0,0 front size r1 arc [60:120] fillcolor rgb 'green'
set object circle at first 0,0 front size r1 arc [120:180] fillcolor rgb 'blue'
set object circle at first 0,0 front size r1 arc [180:240] fillcolor rgb 'yellow'
set object circle at first 0,0 front size r1 arc [240:300] fillcolor rgb 'black'
set object circle at first 0,0 front size r1 arc [300:360] fillcolor rgb 'magenta'
# Splat a white circle on top to conceal central area
set object circle at first 0,0 front size r2 fillcolor rgb 'white'
plot -10 notitle
As I am better at thinking in straight lines than circles, I thought I would have another go at this, a totally different way...
First, draw our annulus out in a straight line like this:
convert -size 45x40 xc:red xc:lime -size 270x40 xc:blue +append line.png
I sneakily made the lengths of the line segments add up to 360, so that there is one pixel per degree - for my simple brain to cope with :-) So, there are 45 px (degrees) of red, 45 px (degrees) of lime and 270 pixels (degrees) of blue, and they are all appended together with +append to make the line. Note that the first -size 45x40 setting persists until later changed, so it applies to both the red and lime line segments before I change it ready to apply to the blue.
Now we bend that line around a circle, like this:
convert line.png -virtual-pixel White -distort arc 360 result.png
You can also do it all in one go when you get used to the concept, like this:
convert -size 60x40 xc:red xc:lime xc:blue xc:cyan xc:magenta xc:black +append -virtual-pixel White -distort arc 360 result.png
You can add grey borders to your annulus segments like this:
convert -size 600x400 xc:red xc:lime xc:blue xc:cyan xc:magenta xc:black -bordercolor "rgb(180,180,180)" -border 20 +append -virtual-pixel White -distort arc 360 result.png
If you want everything on a transparent background, change all the white above to none.
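For completeness, the bend-into-a-ring step can also be scripted from Python with Wand; a sketch, assuming line.png was created by the first convert command above:
from wand.image import Image

with Image(filename='line.png') as img:
    img.virtual_pixel = 'white'    # same as -virtual-pixel White
    img.distort('arc', (360,))     # bend the strip through 360 degrees
    img.save(filename='result.png')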

how to detect edges in an image having only red object

I have an image in which all red objects have been detected.
Here's an example:
http://img.weiku.com/waterpicture/2011/10/30/18/road_Traffic_signs_634577283637977297_4.jpg
But when I run that image through an edge detection method, the output is only black. I want to detect the edges of the red objects.
r = im(:,:,1); g = im(:,:,2); b = im(:,:,3);
diff = imsubtract(r, rgb2gray(im));  % emphasise red relative to overall brightness
bw = im2bw(diff, 0.18);              % threshold the red-difference image
area = bwareaopen(bw, 300);          % remove blobs smaller than 300 px
rm = immultiply(area, r); gm = g.*0; bm = b.*0;
image = cat(3, rm, gm, bm);          % keep only the red channel content
axes(handles.Image);
imshow(image);
I=image;
Thresholding=im2bw(I);
axes(handles.Image);
imshow(Thresholding)
fontSize=20;
edgeimage=Thresholding;
BW = edge(edgeimage,'canny');
axes(handles.Image);
imshow(BW);
When you apply im2bw, you want to use only the red channel of I (i.e. the 1st channel), so use this command:
Thresholding =im2bw(I(:,:,1));
for example yields this output:
Just FYI for anyone else who manages to stumble here: the HSV colorspace is better suited than RGB for detecting colors. A good example is in gnovice's answer. The main reason is that there are colors whose red channel is a full 255 but which aren't actually red (yellow can be formed from (255,255,0), white from (255,255,255), magenta from (255,0,255), etc.).
I modified his code for your purpose below:
cdata = imread('roadsign.jpg');
hsvImage = rgb2hsv(cdata); %# Convert the image to HSV space
hPlane = 360.*hsvImage(:,:,1); %# Get the hue plane scaled from 0 to 360
sPlane = hsvImage(:,:,2); %# Get the saturation plane
bPlane = hsvImage(:,:,3); %# Get the brightness plane
% Must get colors with high brightness and saturation of the red color
redIndex = ((hPlane <= 20) | (hPlane >= 340)) & sPlane >= 0.7 & bPlane >= 0.7;
% Show edges
imshow(edge(redIndex));
Output:
