Problems in finding contours and convex hull in OpenCV

I have written the following code
#include"opencv2/opencv.hpp"
#include<iostream>
#include<math.h>
using namespace std;
using namespace cv;
int main()
{
Mat img1,img2,sub,gray1,gray2,lab,ycbcr;
int v[3];
int row,col,i,j,t;
VideoCapture cap(0);
namedWindow("current");
cap>>img1;
sub=img1;
row=img1.rows;
col=img1.cols;
cvtColor(img1,gray1,CV_BGR2GRAY);
vector<vector<Point> > cont;
vector<Vec4i> hierarchy;
while (1) {
cap>>img2;
cvtColor(img2,gray2,CV_BGR2GRAY);
for(i=0;i<row;++i)
{
for (j=0; j<col; ++j)
{
if(abs(gray1.at<uchar>(i,j) - gray2.at<uchar>(i,j))>10)
{
sub.at<Vec3b>(i,j)[0] = img2.at<Vec3b>(i,j)[0];
sub.at<Vec3b>(i,j)[1] = img2.at<Vec3b>(i,j)[1];
sub.at<Vec3b>(i,j)[2] = img2.at<Vec3b>(i,j)[2];
}
else
{
sub.at<Vec3b>(i,j)[0]=0;
sub.at<Vec3b>(i,j)[1]=0;
sub.at<Vec3b>(i,j)[2]=0;
}
}
}
cvtColor(sub,ycbcr,CV_BGR2YCrCb);
inRange(ycbcr,Scalar(7,133,106),Scalar(255,178,129),ycbcr);
findContours(ycbcr,cont,hierarchy,CV_RETR_LIST,CV_CHAIN_APPROX_SIMPLE);
Scalar color = CV_RGB(255,0,0);
for(int i1 = 0 ;i1 >= 0; i1 = hierarchy[i1][0] )
drawContours( ycbcr, cont, i1, color,2, CV_AA, hierarchy );
vector<Point2f > hullPoints;
// convexHull(Mat(cont),hullPoints,false);
imshow("current",ycbcr);
if(waitKey(33)=='q')
break;
img1=img2.clone();
}
}
1.) Why are the contours not displayed in red, although I specified the colour through CV_RGB(255,0,0)?
2.) When I uncomment the line
convexHull(Mat(cont),hullPoints,false);
the program shows a runtime error. Why is this happening? Can anybody tell me the exact format of convexHull()
and the meaning of its arguments?

1) Try (0,0,255) for red. OpenCV uses the BGR format.
2) For finding the convex hull, try the code in the OpenCV tutorial.
Also see this similar question on convexHull: How to calculate convex hull area using openCV functions?

Your hullPoints vector is empty. If you check the size of the vector, you don't get the error:
if(hullPoints.size() > 2) {
convexHull(....);
}
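For reference, the declaration is convexHull(InputArray points, OutputArray hull, bool clockwise = false, bool returnPoints = true): points is a single 2D point set, clockwise picks the orientation of the output, and returnPoints selects hull points (true) versus indices into the input (false). A minimal sketch of calling it on the output of findContours from the code above (it expects one contour at a time, not the whole vector of contours):
vector<vector<Point> > hulls(cont.size());
for (size_t k = 0; k < cont.size(); ++k)
    convexHull(cont[k], hulls[k]);   // hulls[k] holds the hull vertices of contour k
// the hulls can then be drawn with drawContours(), just like cont above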

Related

Has anyone encountered this striping artifact during "Ray Tracing in One Weekend"?

I am trying to port "Ray Tracing in One Weekend" to a Metal compute shader. I am getting these striping artifacts in my project:
Is it because my random generator does not work well?
Does anyone have a clue?
// Implemented following https://www.pcg-random.org/
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;
uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
uint64_t oldstate = rng->state;
rng->state = oldstate * 6364136223846793005ULL + rng->inc;
uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
uint32_t rot = oldstate >> 59u;
return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}
void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
rng->state = 0U;
rng->inc = (initseq << 1u) | 1u;
pcg32_random_r(rng);
rng->state += initstate;
pcg32_random_r(rng);
}
// Generate a float in the range [0, 1)
float randomF(thread pcg32_random_t* rng)
{
//return pcg32_random_r(rng)/float(UINT_MAX);
return ldexp(float(pcg32_random_r(rng)), -32);
}
// Generate a float in the range [x_min, x_max)
float randomRange(thread pcg32_random_t* rng, float x_min, float x_max){
return randomF(rng) * (x_max - x_min) + x_min;
}
I found this link. It says that the primary ray hit point ends up slightly above or below the sphere's surface due to floating-point precision error. It is a z-fighting problem.
Seeing this "circles around the image center" artifact is almost always a dead give-away for z-fighting (rays along any such "circle" always have the same distance to any flat object you're looking at, so they either all round up or all round down, giving you this artifact).
This z-fighting then translates to this artifact because sometimes all the rays on such a circle are inside the sphere (meaning their shadow rays get self-occluded by the sphere), or all outside (they do what they should). What you want to do is offset the ray origin of the shadow rays a tiny bit (say, 1e-3f) along the normal direction at the hitpoint.
You may also want to read up on Carsten Waechter's article in Ray Tracing Gems 1 - it's for triangles, but explains the problem and potential solution very well.
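A minimal sketch of that offset in plain C++ (the Vec3/Ray helpers below are placeholders for whatever types the renderer already has; the 1e-3f epsilon is the value suggested above):
struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
struct Ray { Vec3 origin, dir; };

// Offset the shadow-ray origin slightly along the surface normal so the ray
// cannot immediately re-hit the surface it started on (self-shadowing / "shadow acne").
static Ray makeShadowRay(Vec3 hitPoint, Vec3 unitNormal, Vec3 dirToLight)
{
    const float kEps = 1e-3f;   // epsilon suggested above
    return Ray{ add(hitPoint, scale(unitNormal, kEps)), dirToLight };
}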

Separate image object into N sections of equal pixels (Approach)

Sorry in advance, this is more of an algorithmic problem than a coding problem, but I wasn't sure where else to put it. For simplicity's sake, say you have a binary image (white background, solid black object in the foreground).
Example:
sample input
I want to divide this object (meaning only the black pixels) into N sections, all with the same number of pixels (so each section should contain (1/N)*(total # of black pixels)).
With the current algorithm that I'm using, I (1) find the total number of black pixels and (2) divide by N. Then I (3) scan the image row by row marking all black pixels. The result looks something like this:
current output sketch
The problem with this is the last (yellow) section, which isn't continuous. I want to divide the image in a way that makes more sense, like this:
ideal output
Basically, I'd like the boundary between the sections to be as short as possible.
I've been stumped on this for a while, and my old code just isn't cutting it anymore. I only need an approach to identifying the sections; I'll ultimately be outputting each section as an individual image, as well as a grayscale copy of the input image where every pixel's value corresponds to its section number (these things I don't need help with). Any ideas?
I only need an approach to identifying the sections
Based on this, I tried a couple of approaches; they may serve as guidelines:
Find the contour of the image.
Find the moments of the contour and detect the mass center.
For the outer corners, you can simply use the convex hull.
Find the contour points closest to the mass center (these will be the inner corners).
Then you can separate the shape into the desired regions by using these key points.
Here is the result and code:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
vector<Point>innerCorners;
bool isClose(Point test);
int main()
{
Mat src_gray;
int thresh = 100;
Mat src = imread("image/dir/star.png");
cvtColor( src, src_gray, COLOR_BGR2GRAY );
namedWindow( "Source",WINDOW_NORMAL );
Mat canny_output;
Canny( src_gray, canny_output, thresh, thresh*2 );
vector<vector<Point> > contours;
findContours( canny_output, contours, RETR_TREE, CHAIN_APPROX_SIMPLE );
vector<Vec4i> hierarchy;
vector<vector<Point> >hull( contours.size() );
vector<Moments> mu(contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
for( size_t i = 0; i < contours.size(); i++ )
{
if(contours[i].size()>20)
convexHull( contours[i], hull[i] );
}
vector<Point2f> mc( contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
int onlyOne = 1;
for( size_t i = 0; i< contours.size(); i++ )
{
if(contours[i].size()>20 && onlyOne)
{
circle( src, mc[i], 4, Scalar(0,255,255), -1, 8, 0 );
Scalar color = Scalar(255,0,0);
drawContours( drawing, contours, (int)i, color );
drawContours( src, hull, (int)i, color,5 );
Point centerMass = mc[i];
for(int a=0; a<(int)contours[i].size();a++)
{
if(cv::norm(cv::Mat(contours[i][a]),Mat(centerMass))<200 && isClose(contours[i][a]))
{
circle(src,contours[i][a],5,Scalar(0,0,255),10);
innerCorners.push_back(contours[i][a]);
line(src,contours[i][a],centerMass,Scalar(0,255,255),5);
}
}
onlyOne = 0;
}
}
namedWindow( "Hull demo",WINDOW_NORMAL );
imshow( "Hull demo", drawing );
imshow("Source", src );
waitKey();
return 0;
}
bool isClose(Point test){
if(innerCorners.size()==0)
return 1;
for(Point a:innerCorners)
if((cv::norm(cv::Mat(a),cv::Mat(test)))<70)
return 0;
return 1;
}

Using Gabor filter on an image

I'm looking to write some code in opencv/matlab that'll apply the Gabor filter to images to spot interesting image regions. I've read quite a lot of literature and seen some of the previous matlab/opencv code, but I'd like to attempt it all myself to make sure I fully understand.
I have the equation for the Gabor function and an image. I am unsure of the steps I should take in my algorithm. The general idea I got was to take the discrete Fourier transform of the image, multiply (convolve?) it with the Gabor function, then take the inverse Fourier transform for the result. Any pointers appreciated. Thanks!
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <math.h>
using namespace cv;
int main(int argc, char** argv)
{
int ks = 47;
int hks = (ks-1)/2;
int kernel_size=21;
double sig = 7;
double th = 200;
double ps = 90;
double lm = 0.5+ps/100.0;
double theta = th*CV_PI/180;
double psi = ps*CV_PI/180;
double del = 2.0/(ks-1);
double sigma = sig/ks;
double x_theta;
double y_theta;
Mat image = imread("C:\\users\\michael\\desktop\\tile1.tif",1), dest, src, src_f;
if (image.empty())
{
return -1;
}
imshow("Src", image);
cvtColor(image, src, CV_BGR2GRAY);
src.convertTo(src_f, CV_32F, 1.0/255, 0);
if (!(ks % 2))   // make sure the kernel size is odd
{
ks+=1;
}
Mat kernel(ks,ks, CV_32F);
for (int y=-hks; y<=hks; y++)
{
for (int x=-hks; x<=hks; x++)
{
x_theta = x*del*cos(theta)+y*del*sin(theta);
y_theta = -x*del*sin(theta)+y*del*cos(theta);
kernel.at<float>(hks+y,hks+x) = (float)exp(-0.5*(pow(x_theta,2)+pow(y_theta,2))/pow(sigma,2))* cos(2*CV_PI*x_theta/lm + psi);
}
}
filter2D(src_f, dest, CV_32F, kernel);
imshow("Gabor", dest);
Mat Lkernel(kernel_size*20, kernel_size*20, CV_32F);
resize(kernel, Lkernel, Lkernel.size());
Lkernel /= 2.;
Lkernel += 0.5;
imshow("Kernel", Lkernel);
Mat mag;
pow(dest, 2.0, mag);
imshow("Mag", mag);
waitKey(0);
return 0;
}
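As an aside, newer OpenCV versions (2.4 and later) also provide a built-in generator, cv::getGaborKernel(), so the kernel loop above can be replaced if you only need the result rather than the exercise. A short sketch with illustrative (untuned) parameter values:
// Build a Gabor kernel with OpenCV's helper and convolve with it, as with filter2D above.
Mat gaborKernel = getGaborKernel(Size(31, 31), /*sigma*/ 4.0, /*theta*/ CV_PI/4,
                                 /*lambda*/ 10.0, /*gamma*/ 0.5, /*psi*/ 0.0, CV_32F);
Mat response;
filter2D(src_f, response, CV_32F, gaborKernel);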

accumulateWeighted throws cv::Exception error

I am new to OpenCV and am trying to find contours and draw rectangles around them. Here's my code, but it throws a cv::Exception when it gets to accumulateWeighted().
I tried to prepare both src (the original image) and dst (the background) by converting them to CV_32FC3 and then finding the average using accumulateWeighted().
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv;
using namespace std;
static void help()
{
cout << "\nThis is a Example to implement CAMSHIFT to detect multiple motion objects.\n";
}
Rect rect;
VideoCapture capture;
Mat currentFrame, currentFrame_grey, differenceImg, oldFrame_grey,background;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
bool first = true;
int main(int argc, char* argv[])
{
//Create a new movie capture object.
capture.open(0);
if(!capture.isOpened())
{
//error in opening the video input
cerr << "Unable to open video file: " /*<< videoFilename*/ << endl;
exit(EXIT_FAILURE);
}
//capture current frame from webcam
capture >> currentFrame;
//Size of the image.
CvSize imgSize;
imgSize.width = currentFrame.size().width; //img.size().width
imgSize.height = currentFrame.size().height; ////img.size().height
//Images to use in the program.
currentFrame_grey.create( imgSize, IPL_DEPTH_8U);//image.create().
while(1)
{
capture >> currentFrame;//VideoCapture& VideoCapture::operator>>(Mat& image)
//Convert the image to grayscale.
cvtColor(currentFrame,currentFrame_grey,CV_RGB2GRAY);//cvtColor()
currentFrame.convertTo(currentFrame,CV_32FC3);
background = Mat::zeros(currentFrame.size(), CV_32FC3);
accumulateWeighted(currentFrame,background,1.0,NULL);
imshow("Background",background);
if(first) //Capturing Background for the first time
{
differenceImg = currentFrame_grey.clone();//img1 = img.clone()
oldFrame_grey = currentFrame_grey.clone();//img2 = img.clone()
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);//convertscaleabs()
first = false;
continue;
}
//Minus the current frame from the moving average.
absdiff(oldFrame_grey,currentFrame_grey,differenceImg);//absDiff()
//bluring the differnece image
blur(differenceImg, differenceImg, imgSize);//blur()
//apply threshold to discard small unwanted movements
threshold(differenceImg, differenceImg, 25, 255, CV_THRESH_BINARY);//threshold()
//find contours
findContours(differenceImg,contours,hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0)); //findcontours()
//draw bounding box around each contour
//for(; contours! = 0; contours = contours->h_next)
for(int i = 0; i < contours.size(); i++)
{
rect = boundingRect(contours[i]); //extract bounding box for current contour
//drawing rectangle
rectangle(currentFrame, cvPoint(rect.x, rect.y), cvPoint(rect.x+rect.width, rect.y+rect.height), cvScalar(0, 0, 255, 0), 2, 8, 0);
}
//New Background
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
//display colour image with bounding box
imshow("Output Image", currentFrame);//imshow()
//display threshold image
imshow("Difference image", differenceImg);//imshow()
//clear memory and contours
//cvClearMemStorage( storage );
//contours = 0;
contours.clear();
//background = currentFrame;
//press Esc to exit
char c = cvWaitKey(33);
if( c == 27 ) break;
}
// Destroy All Windows.
destroyAllWindows();
return 0;
}
Please help me solve this.
you might want to RTFM before asking here.
so, you missed the alpha param as well as the dst Mat in your call to addWeighted
Mat dst;
addWeighted(currentFrame, 0.5, background, 0.5, 0, dst);
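For reference, the documented C++ signature is accumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask = noArray()). If you do keep accumulateWeighted, a minimal call without the NULL mask would look like this (assuming, as in the code above, that src and dst have the same size and dst is a floating-point Mat):
// running average with alpha = 0.5; the mask argument is simply omitted (default noArray()) instead of NULL
accumulateWeighted(currentFrame, background, 0.5);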
also, no idea what the whole thing is supposed to achieve. accumulating the current frame before diffing it does not make any sense to me.
if you planned to do background separation, throw it all away and use one of the built-in background subtractors instead.
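A minimal sketch of that route, using cv::createBackgroundSubtractorMOG2() as one of the built-in subtractors (the API shown is for OpenCV 3.x and later; the 2.4 API differs slightly):
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    // MOG2 maintains the background model internally; no manual averaging needed.
    Ptr<BackgroundSubtractor> subtractor = createBackgroundSubtractorMOG2();
    Mat frame, fgMask;
    while (cap.read(frame))
    {
        subtractor->apply(frame, fgMask);    // update the model and get the foreground mask
        imshow("foreground mask", fgMask);   // findContours/boundingRect can then run on fgMask
        if (waitKey(33) == 27) break;        // Esc to quit
    }
    return 0;
}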

What does the function calcHist() give us?

My question is: when we normalize a histogram, is there any built-in function for that? If not, then obviously we can calculate the histogram of the image using calcHist(), but the formula for the normalized histogram is N_k/N. So does calcHist() return us the N in this formula, or do we have to calculate N on our own? And what is its role in the entropy formula?
I am not sure I get your question, but here is a simple example of how to get the L1-normalised histogram of a grayscale image with OpenCV.
In case of an image N is the number of pixels which can be computed simply by multiplying the width and the height of the image. Then it is simply a matter of dividing the histogram by N.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
int main(int argc, char** argv)
{
Mat img = imread(argv[1],CV_LOAD_IMAGE_GRAYSCALE);
Mat hist;
int channels[] = {0};
int histSize[] = {32};
float range[] = { 0, 256 };
const float* ranges[] = { range };
calcHist( &img, 1, channels, Mat(), // do not use mask
hist, 1, histSize, ranges,
true, // the histogram is uniform
false );
Mat histNorm = hist / (img.rows * img.cols);
return 0;
}
To get the example I modified the one from the OpenCV documentation.
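As for a built-in function for the normalisation itself: cv::normalize() with NORM_L1 should produce the same result as the division above (up to floating-point rounding), e.g.:
// L1-normalise so the bins sum to 1 (equivalent to dividing each bin by N)
Mat histNormAlt;
normalize(hist, histNormAlt, 1.0, 0.0, NORM_L1);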
If you want to compute the entropy H = -Σ p_k·log2(p_k) with this histogram (where p_k is the k-th normalised bin), you can do this:
double entropy = 0.0;
for (int i=0; i<histNorm.rows; i++)
{
float binEntry = histNorm.at<float>(i,0);
if (binEntry != 0.0)
entropy -= binEntry * log2(binEntry);
}
