How can I discard an extra line from a contour shape in OpenCV?

Is there a way to remove small line segments from a contour?
For example, in this image the largest contour is a bounding box, but there is also a line segment connected to the box.
Since a contour is just a set of points, I guess it should be possible to remove the part of the contour that is not part of the box, for example by detecting and removing short lines or small sub-contours, but I do not know how to do it.
Please note that I want to remove them after finding the contours, not before. How can I remove them? Any ideas?
// After edge detection with Canny.
// The 'canny' variable holds the edge Mat.
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(canny, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

Mat draw = Mat::zeros(canny.size(), CV_8UC3);
double largest_area = 0;
int largest_contour_index = 0;
for (int i = 0; i < contours.size(); i++) {
    double a = contourArea(contours[i], false); // Find the area of the contour
    if (a > largest_area) {
        largest_area = a;
        largest_contour_index = i; // Store the index of the largest contour
    }
}
drawContours(draw, contours, largest_contour_index, Scalar(255, 255, 255), 1, 8, hierarchy);
imshow("Contours", draw);

Related

Processing, ellipse not following alpha values?

class Particle {
  PVector velocity, location; // PVector variables for each particle.

  Particle() { // Constructor - random location and speed for each particle.
    velocity = new PVector(random(-0.5, 0.5), random(-0.5, 0.5));
    location = new PVector(random(0, width), random(0, width));
  }

  void update() { location.add(velocity); } // Motion method.

  void edge() { // Wraparound case for particles.
    if (location.x > width) { location.x = 0; }
    else if (location.x < 0) { location.x = width; }
    if (location.y > height) { location.y = 0; }
    else if (location.y < 0) { location.y = height; }
  }

  void display(ArrayList<Particle> p) { // Display method to show lines and ellipses between particles.
    for (Particle other : p) { // For every particle in the ArrayList.
      float d = PVector.dist(location, other.location); // Get the distance between the two particles.
      float a = 255 - d * 2.5; // Map 'a' as alpha based on distance, e.g. if the distance is high, d = 90, alpha is low: a = 255 - 225 = 30.
      println("Lowest distance of any two particle =" + d); // Debug output.
      if (d < 112) { // If the distance between the two particles falls below 112.
        noStroke(); // No outline.
        fill(0, a); // Particles are coloured black, 'a' varies the alpha.
        ellipse(location.x, location.y, 8, 8); // Draw an ellipse at the particle's location.
        stroke(0, a); // Lines are coloured black, 'a' varies the alpha.
        strokeWeight(0.7);
        line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
      }
    }
  }
}

ArrayList<Particle> particles = new ArrayList<Particle>(); // Create a new ArrayList of type Particle.

void setup() {
  size(640, 640, P2D); // Set up the sketch frame.
  particles.add(new Particle()); // Add five Particle elements to the ArrayList.
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
}

void draw() {
  background(255); // Set white background.
  for (Particle p : particles) { // For every 'p' of type Particle in the ArrayList particles.
    p.update(); // Update location based on velocity.
    p.display(particles); // Display each particle in relation to other particles.
    p.edge(); // Wrap around if the particle reaches the edge of the screen.
  }
}
In the above code there are two shape objects, lines and ellipses, whose transparency is controlled by the variable 'a'.
The variable 'a', or alpha, is derived from 'd', the distance, so the further apart the objects are, the lower their alpha value.
In this scenario the alpha values of the lines behave as expected, fading with distance. However, the ellipses seem to be stuck at alpha 255 despite having very similar code.
If the value of 'a' is hardcoded, e.g.
if (d < 112) { // If the distance between the two particles falls below 112.
  noStroke(); // No outline.
  fill(0, 100); // Particles are coloured black, alpha hardcoded to 100, a grey tint.
  ellipse(location.x, location.y, 8, 8); // Draw an ellipse at the particle's location.
the ellipses change colour to a grey tint as expected.
Edit: I believe I have found the root of the issue. The variable 'a' does not discriminate between the particles that are being iterated. As such, the alpha might be stuck/adding up to 255.
You're going to have to post an MCVE. Note that this should not be your entire sketch, just a few hard-coded lines so we're all working from the same code. We should be able to copy and paste your code into our own machines to see the problem. Also, please try to properly format your code. Your lack of indentation makes your code hard to read.
That being said, I can try to help in a general sense. First of all, you're printing out the value of a, but you haven't told us what its value is. Is its value what you expect? If so, are you clearing out previous frames before drawing the ellipses, or are you drawing them on top of previously drawn ellipses? Are you drawing ellipses elsewhere in your code?
Start over with a blank sketch, and add just enough lines to show the problem. Here's an example MCVE that you can work from:
stroke(0);
fill(0);
ellipse(25, 25, 25, 25);
line(0, 25, width, 25);
stroke(0, 128);
fill(0, 128);
ellipse(75, 75, 25, 25);
line(0, 75, width, 75);
This code draws a black line and ellipse, then draws a transparent line and ellipse. Please hardcode the a value from your code, or add just enough code so we can see exactly what's going on.
Edit: Thanks for the MCVE. Your updated code still has problems. I don't understand this loop:
for (Particle other : p) { // For every particle in the ArrayList.
  float d = PVector.dist(location, other.location); // Get the distance between the two particles.
  float a = 255 - d * 2.5; // Map 'a' as alpha based on distance.
  println("Lowest distance of any two particle =" + d); // Debug output.
  if (d < 112) { // If the distance between the two particles falls below 112.
    noStroke(); // No outline.
    fill(0, a); // Particles are coloured black, 'a' varies the alpha.
    ellipse(location.x, location.y, 8, 8); // Draw an ellipse at the particle's location.
    stroke(0, a); // Lines are coloured black, 'a' varies the alpha.
    strokeWeight(0.7);
    line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
  }
}
}
You're saying for each Particle, you loop through every Particle and then draw an ellipse at the current Particle's location? That doesn't make any sense. If you have 100 Particles, that means each Particle will be drawn 100 times!
If you want each Particle's color to be based off its distance to the closest other Particle, then you need to modify this loop to simply find the closest Particle, and then base your calculations off of that. It might look something like this:
Particle closestNeighbor = null;
float closestDistance = 100000;
for (Particle other : p) { // For every particle in the ArrayList.
  if (other == this) {
    continue;
  }
  float d = PVector.dist(location, other.location);
  if (d < closestDistance) {
    closestDistance = d;
    closestNeighbor = other;
  }
}
Notice the if (other == this) { section. This is important, because otherwise you'll be comparing each Particle to itself, and the distance will be zero!
Once you have the closestNeighbor and the closestDistance, you can do your calculations.
Note that you're only drawing particles when they have a neighbor that's closer than 112 pixels away. Is that what you want to be doing?
If you have a follow-up question, please post an updated MCVE in a new question. Constantly editing the question and answer gets confusing, so just ask a new question if you get stuck again.

Emgu CV draw rotated rectangle

For a few days I've been looking for a solution to draw a rectangle on an image frame. Basically I'm using the CvInvoke.cvRectangle method to draw the rectangle on the image because I need an antialiased rect.
The problem is when I need to rotate a given shape by a given angle. I can't find any good solution.
I have tried to draw the rectangle on a separate frame, then rotate the whole frame and apply this new image on top of my base frame, but with that solution there is a problem with antialiasing. It's not working.
I'm working on a simple application that should allow drawing a few kinds of shapes, resizing them and rotating them by a given angle.
Any idea how to achieve this?
The best way I found to draw a minimum enclosing rectangle on the contour is using the Polylines() function with the vertices returned by the MinAreaRect() function. There are surely other ways to do it as well. Here is the code walkthrough:
// Find contours
var contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(image, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
// According to your metric, get an index of the contour you want to find the min enclosing rectangle for
int index = 2; // Say, 2nd index works for you.
var rectangle = CvInvoke.MinAreaRect(contours[index]);
Point[] vertices = Array.ConvertAll(rectangle.GetVertices(), Point.Round);
CvInvoke.Polylines(image, vertices, true, new MCvScalar(0, 0, 255), 5);
The result can be visualized in the image below, in red is the minimum enclosing rectangle.
I use C# and EMGU.CV(4.1), and I think this code will not be difficult to transfer to any platform.
Add this function to your helper class:
public static Mat DrawRect(Mat input, RotatedRect rect, MCvScalar color = default(MCvScalar),
    int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
{
    var v = rect.GetVertices();
    var prevPoint = v[0];
    var firstPoint = prevPoint;
    var nextPoint = prevPoint;
    var lastPoint = nextPoint;
    for (var i = 1; i < v.Length; i++)
    {
        nextPoint = v[i];
        CvInvoke.Line(input, Point.Round(prevPoint), Point.Round(nextPoint), color, thickness, lineType, shift);
        prevPoint = nextPoint;
        lastPoint = prevPoint;
    }
    CvInvoke.Line(input, Point.Round(lastPoint), Point.Round(firstPoint), color, thickness, lineType, shift);
    return input;
}
This draws the rotated rectangle from its vertices. The points are rounded with Point.Round because RotatedRect stores its vertices as float coordinates while CvInvoke.Line takes integer points.
Use:
var mat = Mat.Zeros(200, 200, DepthType.Cv8U, 3);
mat.GetValueRange();
var rRect = new RotatedRect(new PointF(100, 100), new SizeF(100, 50), 30);
DrawRect(mat, rRect,new MCvScalar(255,0,0));
var brect = CvInvoke.BoundingRectangle(new VectorOfPointF(rRect.GetVertices()));
CvInvoke.Rectangle(mat, brect, new MCvScalar(0,255,0), 1, LineType.EightConnected, 0);
Result:
You should read the OpenCV documentation.
There is a RotatedRect class that you can use for your task. You can specify the angle by which the rectangle is rotated.
Here is a sample code (taken from the docs) for drawing a rotated rectangle:
Mat image(200, 200, CV_8UC3, Scalar(0));
RotatedRect rRect = RotatedRect(Point2f(100,100), Size2f(100,50), 30);
Point2f vertices[4];
rRect.points(vertices);
for (int i = 0; i < 4; i++)
    line(image, vertices[i], vertices[(i+1)%4], Scalar(0,255,0));
Rect brect = rRect.boundingRect();
rectangle(image, brect, Scalar(255,0,0));
imshow("rectangles", image);
waitKey(0);
Here is the result:
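Since the question specifically asks for antialiased edges, one detail worth adding (my note, not part of the original answers): OpenCV's drawing functions accept an antialiased line type. In the C++ sample above that means passing LINE_AA (CV_AA in OpenCV 2.x) when drawing each edge, and in Emgu CV the equivalent parameter should be LineType.AntiAlias on CvInvoke.Line / CvInvoke.Polylines. A minimal variant of the drawing loop:

for (int i = 0; i < 4; i++)
    line(image, vertices[i], vertices[(i+1)%4], Scalar(0,255,0), 1, LINE_AA); // antialiased edge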

Estimate image line gradient (not pixel gradient)

I have a problem where I want to estimate the gradient of a line on a contour. Please note that I don't need the pixel gradient but the rate of change of the line.
If you look at the attached image, you will see a binary image with a green contour. I want to label each pixel based on the gradient of the pixel on the contour.
The reason I need the gradient is that I want to compute the points where the gradient orientation changes from + to - or from - to +.
I cannot think of a good method to estimate these points on the image. Could someone suggest how I can estimate them?
Here is a small program that computes the tangent at each contour pixel location in a very simple way (there exist other and probably better ways! The easy ones are: http://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences):
for a contour pixel c_{i}, get the neighbors c_{i-1} and c_{i+1};
the tangent direction at c_{i} is (c_{i-1} - c_{i+1}).
So this is all on CONTOUR PIXELS, but maybe you could do something similar if you compute the orthogonal to the full image pixel gradient... not sure about that ;)
here's the code:
int main()
{
    cv::Mat input = cv::imread("../inputData/ContourTangentBin.png");
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);

    // binarize
    cv::Mat binary = gray > 100;

    // find contours
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    findContours(binary.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE); // CV_CHAIN_APPROX_NONE to get each single pixel of the contour!!

    for (int i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> & cCont = contours[i];
        std::vector<cv::Point2f> tangents;
        if (cCont.size() < 3) continue;

        // 1. compute tangent for first point
        cv::Point2f cPoint = cCont.front();
        cv::Point2f tangent = cCont.back() - cCont.at(1); // central tangent => you could use another method if you like to
        tangents.push_back(tangent);

        // display first tangent
        cv::Mat tmpOut = input.clone();
        cv::line(tmpOut, cPoint + 10*tangent, cPoint - 10*tangent, cv::Scalar(0,0,255), 1);
        cv::imshow("tangent", tmpOut);
        cv::waitKey(0);

        for (unsigned int j = 1; j < cCont.size(); ++j)
        {
            cPoint = cCont[j];
            tangent = cCont[j-1] - cCont[(j+1) % cCont.size()]; // central tangent => you could use another method if you like to
            tangents.push_back(tangent);

            // display current tangent:
            tmpOut = input.clone();
            cv::line(tmpOut, cPoint + 10*tangent, cPoint - 10*tangent, cv::Scalar(0,0,255), 1);
            cv::imshow("tangent", tmpOut);
            cv::waitKey(0);
            //if(cv::waitKey(0) == 's') cv::imwrite("../outputData/ContourTangentTangent.png", tmpOut);
        }

        // now there are all the tangent directions in "tangents", do whatever you like with them
    }

    for (int i = 0; i < contours.size(); i++)
    {
        drawContours(input, contours, i, cv::Scalar(0,255,0), 1, 8, hierarchy, 0);
    }

    cv::imshow("input", input);
    cv::imshow("binary", binary);
    cv::waitKey(0);
    return 0;
}
I used this image:
and got outputs like:
In the result you get a vector with 2D tangent information (the line direction) for each pixel of that contour.
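Since the original question was about finding the points where the orientation flips sign, here is a minimal sketch of one way to post-process the tangents vector from the loop above (my own addition, not part of the answer; the helper name and the tolerance are made up for illustration): convert each tangent to a slope sign and record the contour indices where that sign changes.

// Hypothetical helper; needs <vector>, <cmath> and the OpenCV core headers.
// Returns the contour indices where the tangent slope (dy/dx) changes sign.
std::vector<size_t> findOrientationChanges(const std::vector<cv::Point2f> &tangents)
{
    std::vector<size_t> changes;
    int prevSign = 0;
    for (size_t i = 0; i < tangents.size(); ++i)
    {
        if (std::abs(tangents[i].x) < 1e-6f)
            continue; // near-vertical tangent, slope undefined -> skip
        int sign = (tangents[i].y / tangents[i].x) > 0.f ? 1 : -1;
        if (prevSign != 0 && sign != prevSign)
            changes.push_back(i); // slope sign flipped between the previous valid pixel and pixel i
        prevSign = sign;
    }
    return changes;
}

Calling findOrientationChanges(tangents) inside the contour loop (after the tangents are filled) gives candidate indices that can then be drawn with cv::circle for visual inspection.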

Prospective algorithmic approach for this image in OpenCV

I am looking for advice from people having extensive experience with computer vision. I have a collection of ultrasonographic B&W images like the one below (without the stars and dotted line):
What I would like to do is detect the contour of a blood vessel (for example, the one highlighted by the yellow star). Of course my first step would be to define the ROI and maximize the contrast. But what would then be the best algorithm to use? Segmentation with the watershed algorithm? Something else?
I am a little unsettled because of the image blur...
Edit:
As requested in the comments, here would be an example of source and result images:
Following is a simple approach to your problem, if I understood you correctly. My result is shown below.
And here is the code
int max_area_threshold = 10000;
int min_area_threshold = 1000;
float rational_threshold = 0.7;

cv::Mat img = cv::imread("sample.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat img_binary;

// Create binary image by thresholding
cv::threshold(img, img_binary, 25, 255, CV_THRESH_BINARY);

// Invert black-white
cv::bitwise_not(img_binary, img_binary);

// Eliminate small segments
cv::erode(img_binary, img_binary, cv::Mat(), cv::Point(-1, -1), 2, 1, 1);
cv::dilate(img_binary, img_binary, cv::Mat(), cv::Point(-1, -1), 1, 1, 1);

// Find contours
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(img_binary, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

for (int i = 0; i < contours.size(); i++)
{
    if (contours[i].size() < 5)
        continue;

    // Fit an ellipse to the contour
    cv::RotatedRect boundRect = cv::fitEllipse(contours[i]);

    // Check the squareness of the bounding box
    if (std::abs((boundRect.size.width / (float)boundRect.size.height) - 1.0) > rational_threshold)
        continue;

    // Eliminate too-big segments
    if (boundRect.boundingRect().area() > max_area_threshold)
        continue;

    // Eliminate too-small segments
    if (boundRect.boundingRect().area() < min_area_threshold)
        continue;

    drawContours(img, contours, i, cv::Scalar(255), 1, 8, hierarchy, 0, cv::Point());
}
cv::imwrite("result.jpg", img);
I hope it helps.

Find the vertices of an object by using vertex detection

I would like to find all the vertices (i.e. return their x, y positions) of the black object.
I will use Java and JavaCV to implement this. Is there any API or algorithm that can help?
Sorry, I don't have enough reputation to post images, so I post the links here.
The original image like this:
http://i.stack.imgur.com/geubs.png
The expected result like this:
http://i.stack.imgur.com/MA7uq.png
Here is what you should do (for an explanation, see the comments in the code).
CODE
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
// Load the image
String path = "/home/bikz05/Desktop/geubs.png";
Mat original = Highgui.imread(path);
Mat image = new Mat();
Imgproc.cvtColor(original, image, Imgproc.COLOR_BGR2GRAY);
// Threshold the image
Mat threshold = new Mat();
Imgproc.threshold(image, threshold, 127, 255, 1);
// Find the contours
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(threshold, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
// Get contour index with largest area
double max_area = -1;
int index = 0;
for (int i = 0; i < contours.size(); i++) {
    if (Imgproc.contourArea(contours.get(i)) > max_area) {
        max_area = Imgproc.contourArea(contours.get(i));
        index = i;
    }
}
// Approximate the largest contour
MatOfPoint2f approxCurve = new MatOfPoint2f();
MatOfPoint2f oriCurve = new MatOfPoint2f( contours.get(index).toArray() );
Imgproc.approxPolyDP(oriCurve, approxCurve, 6.0, true);
// Draw contour points on the original image
Point [] array = approxCurve.toArray();
for (int i = 0; i < array.length; i++) {
    Core.circle(original, array[i], 2, new Scalar(0, 0, 255), 2);
}
INPUT IMAGE
OUTPUT IMAGE
OpenCV allows you to take a binary image and carry out contour analysis.
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
You could use findContours to find all of the contours (all of the edge points) then simply average them or pick and choose the ones that suit your purpose.
Here is a good example for JavaCV:
opencv/javacv: How to iterate over contours for shape identification?
