I'm trying to detect circles, but I am detecting circles that aren't even there. My code is below. Does anyone know how to modify the detectCircle() method to make the detection more accurate? Please and thanks.
void detectCircle( IplImage* img )
{
    int edge_thresh = 1;
    IplImage* gray = cvCreateImage( cvSize(img->width, img->height), 8, 1 );
    IplImage* edge = cvCreateImage( cvSize(img->width, img->height), 8, 1 );
    cvCvtColor( img, gray, CV_BGR2GRAY );
    gray->origin = 1;
    // color threshold
    cvThreshold( gray, gray, 100, 255, CV_THRESH_BINARY );
    // smooths out image
    cvSmooth( gray, gray, CV_GAUSSIAN, 11, 11 );
    // get edges
    cvCanny( gray, edge, (float)edge_thresh, (float)edge_thresh*3, 5 );
    // detects circle (cstorage is presumably a CvMemStorage* created elsewhere)
    CvSeq* circle = cvHoughCircles( edge, cstorage, CV_HOUGH_GRADIENT, 1,
                                    edge->height/50, 5, 35 );
    // draws circle and its centerpoint
    float* p = (float*)cvGetSeqElem( circle, 0 );
    if( p == NULL ) { return; }
    cvCircle( img, cvPoint(cvRound(p[0]), cvRound(p[1])), 3, CV_RGB(255,0,0), -1, 8, 0 );
    cvCircle( img, cvPoint(cvRound(p[0]), cvRound(p[1])), cvRound(p[2]), CV_RGB(200,0,0), 1, 8, 0 );
    cvShowImage( "Snooker", img );
}
cvHoughCircles detects circles that aren't obvious to us. If you know the pixel size of the snooker balls, you can filter the detections by radius: try setting the min_radius and max_radius parameters in your cvHoughCircles call.
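For example, a minimal sketch of the same call with the two radius limits appended; it reuses edge and cstorage from your detectCircle(), and the 20/40 pixel values are placeholders you would tune to the on-screen ball size:
// Same call as in detectCircle(), with min_radius/max_radius added at the end.
// The 20 and 40 pixel limits are made-up values - measure a ball in one of
// your frames and tune them accordingly.
CvSeq* circle = cvHoughCircles( edge, cstorage, CV_HOUGH_GRADIENT,
                                1,                // dp: accumulator resolution
                                edge->height/50,  // min distance between centres
                                5, 35,            // param1 (Canny high threshold), param2 (accumulator threshold)
                                20, 40 );         // min_radius, max_radius in pixels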
On a side note, once you get the circles, you can also filter them by color. If the circle is mostly one color, it has a good chance of being a ball; if it isn't, it's probably a false positive.
Edit: by "circle's color" I mean the pixels inside the circle boundary.
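For instance, here is a rough sketch of that check: build a filled circular mask for a detected circle and look at how uniform the pixels inside it are. The 40.0 standard-deviation threshold and the looksLikeBall flag are made-up placeholders to tune on your own footage; img and p are the frame and circle from your detectCircle() code.
// Rough sketch: measure how uniform the colour inside a detected circle is.
// A low per-channel standard deviation suggests a mostly single-coloured
// region (likely a ball); the 40.0 threshold is an arbitrary starting value.
IplImage* mask = cvCreateImage( cvGetSize(img), 8, 1 );
cvZero( mask );
cvCircle( mask, cvPoint(cvRound(p[0]), cvRound(p[1])), cvRound(p[2]),
          cvScalarAll(255), -1, 8, 0 );      // filled circle = region of interest
CvScalar mean, stddev;
cvAvgSdv( img, &mean, &stddev, mask );       // mean/stddev of BGR pixels under the mask
int looksLikeBall = stddev.val[0] < 40.0 &&
                    stddev.val[1] < 40.0 &&
                    stddev.val[2] < 40.0;
cvReleaseImage( &mask );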
I get different results when running this sample with Processing directly, and with Processing.js in a browser. Why?
I was happy with my result and wanted to share it on OpenProcessing, but the rendering was totally different and I don't see why. Below is a minimal working example.
/* Program that rotates a triangle and draws an ellipse when the third vertex is at the top of the screen */
float y = 3*height/2;
float x = 3*width/2;
float previous_1 = 0.0;
float previous_2 = 0.0;
float current;
float angle = 0.0;
void setup() {
  size(1100, 500);
}
void draw() {
  fill(0, 30);
  // rotate triangle
  angle = angle - 0.02;
  translate(x, y);
  rotate(angle);
  // display triangle
  triangle(-50, -50, -30, 30, -90, -60);
  // detect whether third vertex is on top by comparing its 3 successive positions
  current = screenY(-90, -60); // current position of the third vertex
  if (previous_1 < previous_2 && previous_1 < current) {
    // draw ellipse at the extrema position
    fill(128, 9, 9);
    ellipse(-90, -60, 7, 10);
  }
  // update the 2 previous positions of the third vertex
  previous_2 = previous_1;
  previous_1 = current;
}
When running Processing locally, the ellipse is drawn when the third vertex is at the top, which is my goal.
In the online sketch, the ellipse is drawn the whole time :/
In order to get the same results online as you get by running Processing locally, you will need to specify the rendering mode as 3D when calling size().
For example:
void setup() {
  size(1100, 500, P3D);
}
You will also need to specify the z coordinate in the call to screenY()
current = screenY(-90, -60, 0);
With these two changes you should get the same results online as you get running locally.
Online: Triangle Ellipse Example
Local: (screenshot of the local result)
The problem lies in the screenY() function. Print out the current variable in your Processing sketch locally and online. On OpenProcessing, the variable current quickly grows into the thousands, while it stays between 0 and ~260 locally.
It seems like OpenProcessing has a bug in this function.
To work around it, I would recommend detecting in a different way when the triangle's third vertex is at the top of the circle, for example by using your angle variable:
// Calculate angle and modulo it by 2 * PI
angle = (angle - 0.02) % (2 * PI);
// If the sketch has made a full revolution
if (previous_1 < previous_2 && previous_1 < angle) {
  // draw ellipse at the extrema position
  fill(128, 9, 9);
  ellipse(-90, -60, 7, 10);
}
// update the 2 previous angles of the third vertex
previous_2 = previous_1;
previous_1 = angle;
However, because of how you draw the triangles, the ellipse is at an angle of about PI / 3. To fix this, one option would be to rotate the screen by angle + PI / 3 like so:
rotate(angle + PI / 3);
You might have to experiment with the angle offset a bit more to draw the ellipse perfectly at the top of the circle.
class Particle {
  PVector velocity, location; // PVector variables for each particle.

  Particle() { // Constructor - random location and speed for each particle.
    velocity = new PVector(random(-0.5, 0.5), random(-0.5, 0.5));
    location = new PVector(random(0, width), random(0, width));
  }

  void update() { location.add(velocity); } // Motion method.

  void edge() { // Wraparound case for particles.
    if (location.x > width) { location.x = 0; }
    else if (location.x < 0) { location.x = width; }
    if (location.y > height) { location.y = 0; }
    else if (location.y < 0) { location.y = height; }
  }

  void display(ArrayList<Particle> p) { // Display method to show lines and ellipses between particles.
    for (Particle other : p) { // For every particle in the ArrayList.
      float d = PVector.dist(location, other.location); // Get distance between any two particles.
      float a = 255 - d*2.5; // Map variable 'a' as alpha based on distance. E.g. if the distance is high, say d = 90, the alpha is low: a = 255 - 225 = 30.
      println("Lowest distance of any two particles = " + d); // Debug output.
      if (d < 112) { // If the distance between any two particles falls below 112.
        noStroke(); // No outline.
        fill(0, a); // Particles are coloured black, 'a' varies the alpha.
        ellipse(location.x, location.y, 8, 8); // Draw ellipse at the location of the particle.
        stroke(0, a); // Lines are coloured black, 'a' varies the alpha.
        strokeWeight(0.7);
        line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
      }
    }
  }
}

ArrayList<Particle> particles = new ArrayList<Particle>(); // Create a new ArrayList of type Particle.

void setup() {
  size(640, 640, P2D); // Set up the frame of the sketch.
  particles.add(new Particle()); // Add five Particle elements to the ArrayList.
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
}

void draw() {
  background(255); // Set white background.
  for (Particle p : particles) { // For every 'p' of type Particle in the ArrayList particles.
    p.update(); // Update location based on velocity.
    p.display(particles); // Display each particle in relation to other particles.
    p.edge(); // Wraparound if particle reaches edge of screen.
  }
}
In the above code there are two shape objects, lines and ellipses, whose transparency is controlled by the variable 'a'.
Variable 'a', or alpha, is derived from 'd', the distance. Hence, the further apart the objects are, the lower their alpha value.
In this scenario, the lines behave as intended: their alpha fades with distance. However, the ellipses seem to be stuck at alpha 255, despite being drawn by very similar code.
If the value of 'a' is hardcoded, e.g.
if (d < 112) { // If the distance between any two particles falls below 112.
  noStroke(); // No outline.
  fill(0, 100); // Particles are coloured black, with alpha hardcoded to 100, a grey tint.
  ellipse(location.x, location.y, 8, 8); // Draw ellipse at the location of the particle.
the ellipses change colour to a grey tint, as expected.
Edit: I believe I have found the root of the issue. The variable 'a' does not discriminate between the particles being iterated over. As such, the alpha might be stuck at, or adding up to, 255.
You're going to have to post an MCVE. Note that this should not be your entire sketch, just a few hard-coded lines so we're all working from the same code. We should be able to copy and paste your code into our own machines to see the problem. Also, please try to properly format your code. Your lack of indentation makes your code hard to read.
That being said, I can try to help in a general sense. First of all, you're printing out the value of a, but you haven't told us what its value is. Is its value what you expect? If so, are you clearing out previous frames before drawing the ellipses, or are you drawing them on top of previously drawn ellipses? Are you drawing ellipses elsewhere in your code?
Start over with a blank sketch, and add just enough lines to show the problem. Here's an example MCVE that you can work from:
stroke(0);
fill(0);
ellipse(25, 25, 25, 25);
line(0, 25, width, 25);
stroke(0, 128);
fill(0, 128);
ellipse(75, 75, 25, 25);
line(0, 75, width, 75);
This code draws a black line and ellipse, then draws a transparent line and ellipse. Please hardcode the a value from your code, or add just enough code so we can see exactly what's going on.
Edit: Thanks for the MCVE. Your updated code still has problems. I don't understand this loop:
for (Particle other : p) { // For every particle in the ArrayList.
  float d = PVector.dist(location, other.location); // Get distance between any two particles.
  float a = 255 - d*2.5; // Map variable 'a' as alpha based on distance. E.g. if the distance is high, say d = 90, the alpha is low: a = 255 - 225 = 30.
  println("Lowest distance of any two particles = " + d); // Debug output.
  if (d < 112) { // If the distance between any two particles falls below 112.
    noStroke(); // No outline.
    fill(0, a); // Particles are coloured black, 'a' varies the alpha.
    ellipse(location.x, location.y, 8, 8); // Draw ellipse at the location of the particle.
    stroke(0, a); // Lines are coloured black, 'a' varies the alpha.
    strokeWeight(0.7);
    line(location.x, location.y, other.location.x, other.location.y); // Draw a line between the two particles.
  }
}
}
You're saying for each Particle, you loop through every Particle and then draw an ellipse at the current Particle's location? That doesn't make any sense. If you have 100 Particles, that means each Particle will be drawn 100 times!
If you want each Particle's color to be based off its distance to the closest other Particle, then you need to modify this loop to simply find the closest Particle, and then base your calculations off of that. It might look something like this:
Particle closestNeighbor = null;
float closestDistance = 100000;
for (Particle other : p) { // For every particle in the ArrayList.
  if (other == this) {
    continue;
  }
  float d = PVector.dist(location, other.location);
  if (d < closestDistance) {
    closestDistance = d;
    closestNeighbor = other;
  }
}
Notice the if (other == this) { section. This is important, because otherwise you'll be comparing each Particle to itself, and the distance will be zero!
Once you have the closestNeighbor and the closestDistance, you can do your calculations.
Note that you're only drawing particles when they have a neighbor that's closer than 112 pixels away. Is that what you want to be doing?
If you have a follow-up question, please post an updated MCVE in a new question. Constantly editing the question and answer gets confusing, so just ask a new question if you get stuck again.
For a few days I have been looking for a solution to draw a rectangle on an image frame. Basically I'm using the CvInvoke.cvRectangle method to draw the rectangle because I need an antialiased rect.
The problem is that I need to rotate a given shape by a given angle, and I can't find any good solution.
I have tried drawing the rectangle on a separate frame, rotating the whole frame, and applying this new image on top of my base frame, but with that approach there is a problem with antialiasing. It's not working.
I'm working on a simple application that should allow drawing a few kinds of shapes, resizing them, and rotating them by a given angle.
Any idea how to achieve this?
The best way I found to draw a minimum enclosing rectangle on a contour is to use the Polylines() function with the vertices returned by the MinAreaRect() function. There are surely other ways to do it as well. Here is the code walkthrough:
// Find contours
var contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(image, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
// According to your metric, get an index of the contour you want to find the min enclosing rectangle for
int index = 2; // Say, 2nd index works for you.
var rectangle = CvInvoke.MinAreaRect(contours[index]);
Point[] vertices = Array.ConvertAll(rectangle.GetVertices(), Point.Round);
CvInvoke.Polylines(image, vertices, true, new MCvScalar(0, 0, 255), 5);
The result can be seen in the image below; the minimum enclosing rectangle is drawn in red.
I use C# and Emgu.CV (4.1), and I think this code will not be difficult to port to any other platform.
Add this function to your helper class:
public static Mat DrawRect(Mat input, RotatedRect rect, MCvScalar color = default(MCvScalar),
    int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
{
    var v = rect.GetVertices();
    var prevPoint = v[0];
    var firstPoint = prevPoint;
    var nextPoint = prevPoint;
    var lastPoint = nextPoint;
    for (var i = 1; i < v.Length; i++)
    {
        nextPoint = v[i];
        CvInvoke.Line(input, Point.Round(prevPoint), Point.Round(nextPoint), color, thickness, lineType, shift);
        prevPoint = nextPoint;
        lastPoint = prevPoint;
    }
    CvInvoke.Line(input, Point.Round(lastPoint), Point.Round(firstPoint), color, thickness, lineType, shift);
    return input;
}
This draws a rotated rectangle from its vertices. The points are rounded with Point.Round because RotatedRect stores its vertices in float coordinates, while CvInvoke.Line takes integer points.
Use:
var mat = Mat.Zeros(200, 200, DepthType.Cv8U, 3);
mat.GetValueRange();
var rRect = new RotatedRect(new PointF(100, 100), new SizeF(100, 50), 30);
DrawRect(mat, rRect,new MCvScalar(255,0,0));
var brect = CvInvoke.BoundingRectangle(new VectorOfPointF(rRect.GetVertices()));
CvInvoke.Rectangle(mat, brect, new MCvScalar(0,255,0), 1, LineType.EightConnected, 0);
Result:
You should read the OpenCV documentation.
There is a RotatedRect class that you can use for your task. You can specify the angle by which the rectangle will be rotated.
Here is sample code (taken from the docs) for drawing a rotated rectangle:
Mat image(200, 200, CV_8UC3, Scalar(0));
RotatedRect rRect = RotatedRect(Point2f(100,100), Size2f(100,50), 30);
Point2f vertices[4];
rRect.points(vertices);
for (int i = 0; i < 4; i++)
    line(image, vertices[i], vertices[(i+1)%4], Scalar(0,255,0));
Rect brect = rRect.boundingRect();
rectangle(image, brect, Scalar(255,0,0));
imshow("rectangles", image);
waitKey(0);
Here is the result:
I am looking for advice from people with extensive experience in computer vision. I have a collection of ultrasonographic B&W images like the one below (without the stars and dotted line):
What I would like to do is detect the contour of a blood vessel (for example, the one highlighted by the yellow star). Of course my first step would be to define the ROI and maximize the contrast. But what would then be the best algorithm to use? Segmentation with the watershed algorithm? Something else?
I am a little unsettled because of the image blur...
Edit:
As requested in the comments, here would be an example of source and result images:
Following is a simple approach to your problem, if I understood you correctly. My result is shown below.
And here is the code
int max_area_threshold = 10000;
int min_area_threshold = 1000;
float rational_threshold = 0.7;

cv::Mat img = cv::imread("sample.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat img_binary;

// Create binary image by thresholding
cv::threshold(img, img_binary, 25, 255, CV_THRESH_BINARY);

// Invert black-white
cv::bitwise_not(img_binary, img_binary);

// Eliminate small segments
cv::erode(img_binary, img_binary, cv::Mat(), cv::Point(-1, -1), 2, 1, 1);
cv::dilate(img_binary, img_binary, cv::Mat(), cv::Point(-1, -1), 1, 1, 1);

// Find contours
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(img_binary, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

for (int i = 0; i < contours.size(); i++)
{
    if (contours[i].size() < 5)
        continue;

    // Fit ellipse to contour
    cv::RotatedRect boundRect = cv::fitEllipse(contours[i]);

    // Check the squareness of the bounding box
    if (abs((boundRect.size.width / (float)boundRect.size.height) - 1.0) > rational_threshold)
        continue;

    // Eliminate too-big segments
    if (boundRect.boundingRect().area() > max_area_threshold)
        continue;

    // Eliminate too-small segments
    if (boundRect.boundingRect().area() < min_area_threshold)
        continue;

    cv::drawContours(img, contours, i, cv::Scalar(255), 1, 8, hierarchy, 0, cv::Point());
}

cv::imwrite("result.jpg", img);
I hope it helps.
I am trying to detect the pupil in an infrared image and calculate the center of the pupil.
In my setup, I used a camera sensitive to infrared light, added a visible-light filter to the lens, and placed two infrared LEDs around the camera.
However, the image I got is blurry and not very clear; maybe this is caused by the low resolution of the camera, whose maximum is about 700x500.
In the processing, the first thing I did was convert this RGB image to a grayscale image; however, the result is terrible and I got nothing in the results.
int main()
{
    // Load image
    cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
    if (src.empty())
    {
        std::cout << "failed to find the image";
        return -1;
    }
    cv::namedWindow("original");
    cv::imshow("original", src);
    cv::waitKey(10);

    // Invert the source image and convert to grayscale
    cv::Mat gray;
    cv::cvtColor(~src, gray, CV_BGR2GRAY);
    cv::imshow("image1", gray);
    cv::waitKey(10);

    // Convert to binary image by thresholding it
    cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);
    cv::imshow("image2", gray);
    cv::waitKey(10);

    // Find all contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // Fill holes in each contour
    cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
    cv::imshow("image3", gray);
    cv::waitKey(10);

    for (int i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect rect = cv::boundingRect(contours[i]);
        int radius = rect.width / 2;

        // If the contour is big enough and has a round shape,
        // then it is the pupil
        if (area >= 800 &&
            std::abs(1 - ((double)rect.width / (double)rect.height)) <= 0.3 &&
            std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.3)
        {
            cv::circle(src, cv::Point(rect.x + radius, rect.y + radius), radius, CV_RGB(255, 0, 0), 2);
        }
    }

    cv::imshow("image", src);
    cv::waitKey(0);
}
When the original image is converted, the resulting gray image is terrible; does anyone know a better solution for this? I am completely new to this. As for the rest of the code for finding the circle, if you have any comments, just tell me. I also need to extract the positions of the two glints (the light points) in the original image; does anyone have some idea?
Thanks.
Try equalizing and filtering your source image before thresholding it ;)
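For example, a rough sketch of that preprocessing, reusing src from your code and inserted before the threshold step. The 5x5 blur kernel is just a starting value, and the 220 threshold from your code will likely need retuning once the histogram is equalized:
// Sketch: equalize the grayscale image to spread its contrast, then smooth it
// to suppress sensor noise, and only then threshold it. Kernel size and
// threshold value are starting points to tune on your own images.
cv::Mat gray, smoothed, binary;
cv::cvtColor( ~src, gray, CV_BGR2GRAY );               // same inversion + grayscale as before
cv::equalizeHist( gray, gray );                        // stretch the contrast
cv::GaussianBlur( gray, smoothed, cv::Size(5, 5), 0 ); // mild smoothing
cv::threshold( smoothed, binary, 220, 255, cv::THRESH_BINARY );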