What does the function calcHist() give us?

My question is about normalizing the histogram: is there a built-in function for that? If not, we can obviously compute the histogram of the image using calcHist(), but the formula for a normalized histogram is Nk/N. Is what calcHist() returns the N in this formula, or do we have to calculate N on our own? And what is its role in the entropy formula?

I am not sure I get your question, but here is a simple example of how to get the L1-normalized histogram of a grayscale image with OpenCV.
calcHist() gives you the bin counts, i.e. the Nk values. In the case of an image, N is the total number of pixels, which can be computed simply by multiplying the width and the height of the image. Normalizing is then simply a matter of dividing the histogram by N.
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main(int argc, char** argv)
{
    Mat img = imread(argv[1], IMREAD_GRAYSCALE);
    Mat hist;
    int channels[] = {0};
    int histSize[] = {32};
    float range[] = { 0, 256 };
    const float* ranges[] = { range };
    calcHist( &img, 1, channels, Mat(), // do not use a mask
              hist, 1, histSize, ranges,
              true,    // the histogram is uniform
              false ); // do not accumulate
    // N = width * height; dividing each bin count Nk by N gives Nk/N
    Mat histNorm = hist / (img.rows * img.cols);
    return 0;
}
This example is a modified version of the one in the OpenCV documentation.
If you want to compute the entropy from this histogram, you can do it like this:
double entropy = 0.0;
for (int i = 0; i < histNorm.rows; i++)
{
    float binEntry = histNorm.at<float>(i, 0);
    if (binEntry != 0.0f)
        entropy -= binEntry * log2(binEntry);
}
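As a sanity check for the loop above: it evaluates H = -sum_k p_k * log2(p_k) with p_k = Nk/N, so with 32 bins the entropy can be at most log2(32) = 5 bits, reached when every bin is equally filled.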

Related

Separate image object into N sections of equal pixels (Approach)

Sorry in advance, this is more of an algorithmic problem than a coding problem, but I wasn't sure where else to put it. For simplicity's sake, say you have a binary image (white background, solid black object in the foreground).
Example:
sample input
I want to divide this object (meaning only the black pixels) into N sections, all with the same number of pixels, so each section should contain (1/N) * (total number of black pixels).
With my current algorithm, I (1) find the total number of black pixels, (2) divide by N, and then (3) scan the image row by row, marking black pixels until each section's quota is filled (a minimal sketch of this pass is shown at the end of this question). The result looks something like this:
current output sketch
The problem with this is the last (yellow) section, which isn't contiguous. I want to divide the image in a way that makes more sense, like this:
ideal output
Basically, I'd like the boundary between the sections to be as short as possible.
I've been stumped on this for a while, and my old code just isn't cutting it anymore. I only need an approach for identifying the sections; I'll ultimately be outputting each section as an individual image, as well as a grayscale copy of the input image where every pixel's value corresponds to its section number (I don't need help with those parts). Any ideas?
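For illustration, here is a minimal sketch of the row-by-row marking pass from steps (1)-(3) above. It assumes a CV_8UC1 binary image in which object pixels are 0; the function name and the quota handling are illustrative, not taken from the original code:
#include <opencv2/opencv.hpp>

// Hypothetical sketch of the row-scan partitioning described above.
// img is CV_8UC1 with object (black) pixels == 0; the returned matrix
// holds the section number (1..N) for every object pixel, 0 elsewhere.
cv::Mat rowScanSections(const cv::Mat& img, int N)
{
    int total = img.rows * img.cols - cv::countNonZero(img); // black pixels
    int quota = total / N; // target number of pixels per section
    cv::Mat labels = cv::Mat::zeros(img.size(), CV_8UC1);
    int count = 0, section = 1;
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x < img.cols; x++)
            if (img.at<uchar>(y, x) == 0)
            {
                labels.at<uchar>(y, x) = (uchar)section;
                // start the next section once this one has its quota
                if (++count >= quota && section < N)
                {
                    count = 0;
                    section++;
                }
            }
    return labels;
}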
I only need an approach to identifying the sections
Based on this, I tried a couple of approaches; they may help as guidelines:
Find the contour of the image.
Find the moments of the contour and detect its mass center.
For the outer corners, you can simply use the convex hull.
Find the contour points closest to the mass center (these will be the inner corners).
Then you can separate the shape into the desired regions by using these important points.
Here is the result and code:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
vector<Point>innerCorners;
bool isClose(Point test);
int main()
{
Mat src_gray;
int thresh = 100;
Mat src = imread("image/dir/star.png");
cvtColor( src, src_gray, COLOR_BGR2GRAY );
namedWindow( "Source",WINDOW_NORMAL );
Mat canny_output;
Canny( src_gray, canny_output, thresh, thresh*2 );
vector<vector<Point> > contours;
findContours( canny_output, contours, RETR_TREE, CHAIN_APPROX_SIMPLE );
vector<Vec4i> hierarchy;
vector<vector<Point> >hull( contours.size() );
vector<Moments> mu(contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
for( size_t i = 0; i < contours.size(); i++ )
{
if(contours[i].size()>20)
convexHull( contours[i], hull[i] );
}
vector<Point2f> mc( contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
int onlyOne = 1;
for( size_t i = 0; i< contours.size(); i++ )
{
if(contours[i].size()>20 && onlyOne)
{
circle( src, mc[i], 4, Scalar(0,255,255), -1, 8, 0 );
Scalar color = Scalar(255,0,0);
drawContours( drawing, contours, (int)i, color );
drawContours( src, hull, (int)i, color,5 );
Point centerMass = mc[i];
for(int a=0; a<(int)contours[i].size();a++)
{
if(cv::norm(cv::Mat(contours[i][a]),Mat(centerMass))<200 && isClose(contours[i][a]))
{
circle(src,contours[i][a],5,Scalar(0,0,255),10);
innerCorners.push_back(contours[i][a]);
line(src,contours[i][a],centerMass,Scalar(0,255,255),5);
}
}
onlyOne = 0;
}
}
namedWindow( "Hull demo",WINDOW_NORMAL );
imshow( "Hull demo", drawing );
imshow("Source", src );
waitKey();
return 0;
}
bool isClose(Point test){
if(innerCorners.size()==0)
return 1;
for(Point a:innerCorners)
if((cv::norm(cv::Mat(a),cv::Mat(test)))<70)
return 0;
return 1;
}

How to deblur an image using the Fourier transform in OpenCV or Emgu CV?

I saw this video about deblurring images using the Fourier transform in MATLAB:
video
I want to convert that code to Emgu CV. My code in Emgu CV:
string path = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
Image<Bgr, byte> img = new Image<Bgr, byte>(@"lal.png");
// Blur the image
Image<Gray, byte> gray = img.Convert<Gray, byte>().SmoothBlur(31, 31);
// Convert the image to float and take its Fourier transform
Mat g_fl = gray.Convert<Gray, float>().Mat;
Matrix<float> dft_image = new Matrix<float>(g_fl.Size);
CvInvoke.Dft(g_fl, dft_image, Emgu.CV.CvEnum.DxtType.Forward, 0);
// Here I make an image of the kernel with the size of the original
Image<Gray, float> ker = new Image<Gray, float>(img.Size);
ker.SetZero();
for (int x = 0; x < 31; x++)
{
    for (int y = 0; y < 31; y++)
    {
        // 31 * 31 = 961
        ker[y, x] = new Gray(1 / 961);
    }
}
// Take the Fourier transform of the kernel
Matrix<float> dft_blur = new Matrix<float>(g_fl.Size);
CvInvoke.Dft(ker, dft_blur, Emgu.CV.CvEnum.DxtType.Forward, 0);
// Fourier of the image / Fourier of the blur kernel
Matrix<float> res = new Matrix<float>(g_fl.Size);
for (int x = 0; x < g_fl.Cols; x++)
{
    for (int y = 0; y < g_fl.Rows; y++)
    {
        res[y, x] = dft_image[y, x] / dft_blur[y, x];
    }
}
// Take the inverse Fourier transform
Image<Gray, float> ready = new Image<Gray, float>(g_fl.Size);
CvInvoke.Dft(res, ready, Emgu.CV.CvEnum.DxtType.Inverse, 0);
CvInvoke.Imshow("deblur", ready.Convert<Gray, byte>());
CvInvoke.Imshow("original", gray);
CvInvoke.WaitKey(0);
But the result is black and it does not work. Where is the mistake in my code?
If you have code for this in OpenCV with Python, you can post it too :)
Thanks :)
Here is my old implementation of a Wiener filter:
#include "stdafx.h"
#pragma once
#pragma comment(lib, "opencv_legacy220.lib")
#pragma comment(lib, "opencv_core220.lib")
#pragma comment(lib, "opencv_highgui220.lib")
#pragma comment(lib, "opencv_imgproc220.lib")
#include "c:\Users\Andrey\Documents\opencv\include\opencv\cv.h"
#include "c:\Users\Andrey\Documents\opencv\include\opencv\cxcore.h"
#include "c:\Users\Andrey\Documents\opencv\include\opencv\highgui.h"
#include <string>
#include <iostream>
#include <complex>
using namespace std;
using namespace cv;
//----------------------------------------------------------
// Compute real and implicit parts of FFT for given image
//----------------------------------------------------------
void ForwardFFT(Mat &Src, Mat *FImg)
{
int M = getOptimalDFTSize( Src.rows );
int N = getOptimalDFTSize( Src.cols );
Mat padded;
copyMakeBorder(Src, padded, 0, M - Src.rows, 0, N - Src.cols, BORDER_CONSTANT, Scalar::all(0));
// Create complex representation of our image
// planes[0] Real part, planes[1] Implicit part (zeros)
Mat planes[] = {Mat_<double>(padded), Mat::zeros(padded.size(), CV_64F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
// As result, we also have Re and Im planes
split(complexImg, planes);
// Crop specter, if it have odd number of rows or cols
planes[0] = planes[0](Rect(0, 0, planes[0].cols & -2, planes[0].rows & -2));
planes[1] = planes[1](Rect(0, 0, planes[1].cols & -2, planes[1].rows & -2));
FImg[0]=planes[0].clone();
FImg[1]=planes[1].clone();
}
//----------------------------------------------------------
// Restore our image using specter
//----------------------------------------------------------
void InverseFFT(Mat *FImg,Mat &Dst)
{
Mat complexImg;
merge(FImg, 2, complexImg);
// Apply inverse FFT
idft(complexImg, complexImg);
split(complexImg, FImg);
Dst=FImg[0];
}
//----------------------------------------------------------
// Wiener filter
//----------------------------------------------------------
void wienerFilter(Mat &src,Mat &dst,Mat &_h,double k)
{
//---------------------------------------------------
// small number for numeric stability
//---------------------------------------------------
const double eps=1E-8;
//---------------------------------------------------
int ImgW=src.size().width;
int ImgH=src.size().height;
//--------------------------------------------------
Mat Yf[2];
ForwardFFT(src,Yf);
//--------------------------------------------------
Mat h;
h.create(ImgH,ImgW,CV_64F);
h=0;
_h.copyTo(h(Rect(0, 0, _h.size().width, _h.size().height)));
Mat Hf[2];
ForwardFFT(h,Hf);
//--------------------------------------------------
Mat Fu[2];
Fu[0].create(ImgH,ImgW,CV_64F);
Fu[1].create(ImgH,ImgW,CV_64F);
complex<double> a;
complex<double> b;
complex<double> c;
double Hf_Re;
double Hf_Im;
double Phf;
double hfz;
double hz;
double A;
for (int i=0;i<Hf[0].size().height;i++)
{
for (int j=0;j<Hf[0].size().width;j++)
{
Hf_Re=Hf[0].at<double>(i,j);
Hf_Im=Hf[1].at<double>(i,j);
Phf = Hf_Re*Hf_Re+Hf_Im*Hf_Im;
hfz=(Phf<eps)*eps;
hz =(h.at<double>(i,j)>0);
A=Phf/(Phf+hz+k);
a=complex<double>(Yf[0].at<double>(i,j),Yf[1].at<double>(i,j));
b=complex<double>(Hf_Re+hfz,Hf_Im+hfz);
c=a/b; // Deconvolution
// Other we do to remove division by 0
Fu[0].at<double>(i,j)=(c.real()*A);
Fu[1].at<double>(i,j)=(c.imag()*A);
}
}
//--------------------------------------------------
Fu[0]/=(ImgW*ImgH);
Fu[1]/=(ImgW*ImgH);
//--------------------------------------------------
InverseFFT(Fu,dst);
// remove out of rane values
for (int i=0;i<Hf[0].size().height;i++)
{
for (int j=0;j<Hf[0].size().width;j++)
{
if(dst.at<double>(i,j)>215){dst.at<double>(i,j)=215;}
if(dst.at<double>(i,j)<(-40)){dst.at<double>(i,j)=(-40);}
}
}
}
//----------------------------------------------------------
// Main
//----------------------------------------------------------
int _tmain(int argc, _TCHAR* argv[])
{
// Input image
Mat img;
// Load it from drive
img=imread("data/motion_fuzzy_lena.bmp",0);
//---------------------------------------------
imshow("Src image", img);
// Image size
int ImgW=img.size().width;
int ImgH=img.size().height;
// Deconvolution kernel (coefficient sum must be 1)
// Image was blurred using same kernel
Mat h;
h.create(1,10,CV_64F);
h=1/double(h.size().width*h.size().height);
// Apply filter
wienerFilter(img,img,h,0.05);
normalize(img,img, 0, 1, CV_MINMAX);
imshow("Result image", img);
cvWaitKey(0);
return 0;
}
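For reference, the per-pixel arithmetic in the loop above (divide by H, then scale by A = |H|^2 / (|H|^2 + k)) is, up to the small stabilizing offsets, the classical Wiener deconvolution estimate in the frequency domain:

\hat{F}(u,v) = \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} + k}\, G(u,v)

where G is the spectrum of the blurred image, H is the spectrum of the blur kernel, and k is the regularization constant (0.05 in main above).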
The result:

I am using OpenCV to convert RGB to HSI, then computing a histogram, then applying a Fourier transform, and finally converting HSI back to RGB

I cannot debug this program. I am converting RGB to HSI and then applying a histogram to any one channel, before the Fourier transform and after it.
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <opencv\highgui.h>
#include <iostream>
// ass.cpp : Converts the given RGB image to HSI colour space then
// performs Fourier filtering on a particular channel.
//
using namespace std;
using namespace cv;
// Declarations of 4 unfinished functions
Mat rgb2hsi(const Mat& rgb); // converts RGB image to HSI space
Mat hsi2rgb(const Mat& hsi); // converts HSI image to RGB space
Mat histogram(const Mat& im); // returns the histogram of the selected channel in HSI space
// void filter(Mat& im); // performs frequency-domain filtering on a single-channel image
int main(int argc, char* argv[])
{
    if (argc < 2) // check the number of arguments
    {
        cerr << "feed me something!!" << endl; // no arguments passed
        return -1;
    }
    string path = argv[1];
    Mat im;                // load an RGB image
    Mat hsi = rgb2hsi(im); // convert it to HSI space
    Mat slices[3];         // 3 channels of the converted HSI image
    im = imread(path);     // try to load path
    if (im.empty())        // check it loaded successfully
    {
        cerr << "I cannot load the file : ";
        return -1;
    }
    imshow("BEFORE", im);
    split(hsi, slices); // split the packed HSI image into an array of matrices
    Mat& h = slices[0];
    Mat& s = slices[1];
    Mat& i = slices[2]; // references to the H, S, and I layers
    Mat hist1, hist2;   // histogram of the selected channel before and after filtering
Here I am going to apply the histogram. Maybe I am missing some header; "draw" is not recognized.
Mat histogram(const Mat& im)
{
    Mat hist;
    const float range[] = { 0, 255 };
    const int channels[] = { 0 };
    const int bins = range[1] - range[0];
    const int dims[] = { bins, 1 };
    const Size binSize(2, 240);
    const float* ranges[] = { range };
    // calculate the histogram
    calcHist(&im, 1, channels, Mat(), hist, 1, dims, ranges);
    Mat draw = Mat::zeros(binSize.height, binSize.width * bins, CV_8UC3);
    double maxVal;
    minMaxLoc(hist, NULL, &maxVal, 0, 0);
    for (int b = 0; b < bins; b++)
    {
        float val = hist.at<float>(b, 0);
        int x0 = binSize.width * b;
        int y0 = draw.rows - val / maxVal * binSize.height + 1;
        int x1 = binSize.width * (b + 1) - 1;
        int y1 = draw.rows - 1;
        rectangle(draw,0, cv::(Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
    }
    return draw;
}
imwrite("input-original.png", rgb); // write the input image
imwrite("hist-original.png", histogram(h)); // write the histogram of the selected channel
filter(h); // perform filtering
merge(slices, 3, hsi); // combine the separated H, S, and I layers to a big packed matrix
rgb = hsi2rgb(hsi); // convert HSI back to RGB colour space
imwrite("input-filtered.png", rgb); // write the filtered image
imwrite("hist-filtered.png", histogram(h)); // and the histogram of the filtered channel
return 0;
}
Mat rgb2hsi(const Mat& rgb)
{
    Mat slicesRGB[3];
    Mat slicesHSI[3];
    Mat &r = slicesRGB[0], &g = slicesRGB[1], &b = slicesRGB[2];
    Mat &h = slicesHSI[0], &s = slicesHSI[1], &i = slicesHSI[2];
    split(rgb, slicesRGB);
    //
    // TODO: implement colour conversion RGB => HSI
    //
    // begin of conversion code
    h = r * 1.0f;
    s = g * 1.0f;
    i = b * 1.0f;
    // end of conversion code
    Mat hsi;
    merge(slicesHSI, 3, hsi);
    return hsi;
}

Mat hsi2rgb(const Mat& hsi)
{
    Mat slicesRGB[3];
    Mat slicesHSI[3];
    Mat &r = slicesRGB[0], &g = slicesRGB[1], &b = slicesRGB[2];
    Mat &h = slicesHSI[0], &s = slicesHSI[1], &i = slicesHSI[2];
    split(hsi, slicesHSI);
    // begin of conversion code
    r = h * 1.0f;
    g = s * 1.0f;
    b = i * 1.0f;
    // end of conversion code
    Mat rgb;
    merge(slicesRGB, 3, rgb);
    return rgb;
}
Mat histogram(const Mat& im)
{
    Mat hist;
    const float range[] = { 0, 255 };
    const int channels[] = { 0 };
    const int bins = range[1] - range[0];
    const int dims[] = { bins, 1 };
    const Size binSize(2, 240);
    const float* ranges[] = { range };
    // calculate the histogram
    calcHist(&im, 1, channels, Mat(), hist, 1, dims, ranges);
    Mat draw = Mat::zeros(binSize.height, binSize.width * bins, CV_8UC3);
    double maxVal;
    minMaxLoc(hist, NULL, &maxVal, 0, 0);
    for (int b = 0; b < bins; b++)
    {
        float val = hist.at<float>(b, 0);
        int x0 = binSize.width * b;
        int y0 = draw.rows - val / maxVal * binSize.height + 1;
        int x1 = binSize.width * (b + 1) - 1;
        int y1 = draw.rows - 1;
        rectangle(draw, Point(x0, y0), Point(x1, y1), Scalar::all(255), CV_FILLED);
    }
    return draw;
}
void filter(Mat& im)
{
    int type = im.type();
    // Convert pixel data from unsigned 8-bit integers (0~255)
    // to 32-bit floating point numbers, as required by cv::dft
    if (type != CV_32F) im.convertTo(im, CV_32F);
    // Perform the 2-D Discrete Fourier Transform
    Mat f;
    dft(im, f, DFT_COMPLEX_OUTPUT + DFT_SCALE); // do the DFT
    // Separate the packed complex matrix into two matrices
    Mat complex[2];
    Mat& real = complex[0]; // the real part
    Mat& imag = complex[1]; // the imaginary part
    split(f, complex);      // dft(im) => {real, imag}
    // Frequency-domain filtering
    int xc = im.cols / 2; // (xc,yc) is the highest-
    int yc = im.rows / 2; // frequency component
    for (int y = 0; y < im.rows; y++)     // go through each row..
    {
        for (int x = 0; x < im.cols; x++) // then through each column..
        {
            //
            // TODO: Design your formula here to decide if the component is
            // discarded or kept.
            //
            if (false) // override this condition
            {
                real.at<float>(y, x) = 0;
                imag.at<float>(y, x) = 0;
            }
        }
    }
    // Pack the real and imaginary parts
    // back into the 2-channel matrix
    merge(complex, 2, f); // {real, imag} => f
    // Perform the 2-D Inverse Discrete Fourier Transform
    idft(f, im, DFT_REAL_OUTPUT); // do the iDFT
    // convert im back to its original type
    im.convertTo(im, type);
}
Error List (all in d:\709 Tutorial\Dibya_project\Dibya_project\Dibya_project.cpp):
1. IntelliSense: expected a ';' (line 48)
2. IntelliSense: identifier "draw" is undefined (line 70)
3. IntelliSense: no instance of overloaded function "rectangle" matches the argument list; argument types are: (, int, , cv::Scalar_, int) (line 72)
4. IntelliSense: expected an identifier (line 72)
5. IntelliSense: no instance of constructor "cv::Point_<_Tp>::Point [with _Tp=int]" matches the argument list; argument types are: (, double __cdecl (double _X)) (line 72)
It is broken here (in Mat histogram(...)):
rectangle(draw,0, cv::(Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
It should be either (note that the stray 0 argument has to go, too):
rectangle(draw, cv::Rect(cv::Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
or:
rectangle(draw, cv::Point(x0, y0), cv::Point(x1, y1), Scalar::all(255), CV_FILLED);
I also think there is a typo in the include of the highgui header file (presumably the backslash in #include <opencv\highgui.h> should be a forward slash).

Using Gabor filter on an image

I'm looking to write some code in opencv/matlab that'll apply the Gabor filter to images to spot interesting image regions. I've read quite a lot of literature and seen some of the previous matlab/opencv code, but I'd like to attempt it all myself to make sure I fully understand.
I have the equation for the Gabor function and an image, but I am unsure of the steps I should take in my algorithm. The general idea I got was to take the discrete Fourier transform of the image, multiply (convolve?) it with the Gabor function, then take the inverse Fourier transform for the result. Any pointers appreciated. Thanks!
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    int ks = 47;                  // kernel size
    int hks = (ks - 1) / 2;       // half kernel size
    int kernel_size = 21;
    double sig = 7;               // sigma
    double th = 200;              // theta, in degrees
    double ps = 90;               // psi (phase offset), in degrees
    double lm = 0.5 + ps / 100.0; // lambda (wavelength)
    double theta = th * CV_PI / 180;
    double psi = ps * CV_PI / 180;
    double del = 2.0 / (ks - 1);
    double sigma = sig / ks;
    double x_theta;
    double y_theta;
    Mat image = imread("C:\\users\\michael\\desktop\\tile1.tif", 1), dest, src, src_f;
    if (image.empty())
    {
        return -1;
    }
    imshow("Src", image);
    cvtColor(image, src, CV_BGR2GRAY);
    src.convertTo(src_f, CV_32F, 1.0 / 255, 0);
    if (ks % 2 == 0) // the kernel size must be odd
    {
        ks += 1;
    }
    // Build the Gabor kernel: a Gaussian envelope times a sinusoidal carrier
    Mat kernel(ks, ks, CV_32F);
    for (int y = -hks; y <= hks; y++)
    {
        for (int x = -hks; x <= hks; x++)
        {
            x_theta =  x * del * cos(theta) + y * del * sin(theta);
            y_theta = -x * del * sin(theta) + y * del * cos(theta);
            kernel.at<float>(hks + y, hks + x) =
                (float)exp(-0.5 * (pow(x_theta, 2) + pow(y_theta, 2)) / pow(sigma, 2))
                * cos(2 * CV_PI * x_theta / lm + psi);
        }
    }
    // Spatial-domain convolution with the Gabor kernel
    filter2D(src_f, dest, CV_32F, kernel);
    imshow("Gabor", dest);
    // Enlarge the kernel for display and map its range into [0,1]
    Mat Lkernel(kernel_size * 20, kernel_size * 20, CV_32F);
    resize(kernel, Lkernel, Lkernel.size());
    Lkernel /= 2.;
    Lkernel += 0.5;
    imshow("Kernel", Lkernel);
    Mat mag;
    pow(dest, 2.0, mag); // squared response as a simple energy measure
    imshow("Mag", mag);
    waitKey(0);
    return 0;
}
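Regarding the "DFT, multiply, inverse DFT" route mentioned in the question: the code above instead convolves in the spatial domain with filter2D(). By the convolution theorem the two give the same result up to border handling. Here is a minimal sketch of the frequency-domain version, reusing src_f and kernel from the code above; the zero-padding of the kernel and the circular-shift caveat are assumptions of this sketch, not part of the original code:
// Pad the kernel with zeros up to the image size so the spectra match
Mat kernelPadded = Mat::zeros(src_f.size(), CV_32F);
kernel.copyTo(kernelPadded(Rect(0, 0, kernel.cols, kernel.rows)));

Mat imgSpec, kerSpec, prodSpec, filtered;
dft(src_f, imgSpec, DFT_COMPLEX_OUTPUT);        // spectrum of the image
dft(kernelPadded, kerSpec, DFT_COMPLEX_OUTPUT); // spectrum of the kernel
mulSpectrums(imgSpec, kerSpec, prodSpec, 0);    // per-element complex product
idft(prodSpec, filtered, DFT_SCALE | DFT_REAL_OUTPUT); // back to the spatial domain
// Note: the result is circularly shifted by the kernel half-size relative
// to filter2D(), because the kernel was anchored at the top-left corner.
imshow("Gabor (frequency domain)", filtered);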

Which is best: a simple Gaussian blur or an FFT-based Gaussian blur for sigma = 20?

I'm making a program to blur a 16-bit grayscale image in CUDA.
In my program, a Gaussian blur with sigma = 20 or 30 takes a lot of time, while it is fast with sigma = 2.0 or 3.0.
I've read on some websites that Gaussian blur with an FFT is good for large kernel sizes or large sigma values:
Is it really true?
Which algorithm should I use: a simple Gaussian blur or a Gaussian blur with FFT?
My code for the Gaussian blur is below. Is there something wrong in it?
__global__
void gaussian_blur(
    unsigned short* const blurredChannel,     // return value: blurred channel (either red, green, or blue)
    const unsigned short* const inputChannel, // red, green, or blue channel from the original image
    int rows,
    int cols,
    const float* const filterWeight,          // Gaussian filter weights; the weights look like a bell shape
    int filterWidth                           // number of pixels in x and y directions for calculating the average blurring
)
{
    int r = blockIdx.y * blockDim.y + threadIdx.y; // current row
    int c = blockIdx.x * blockDim.x + threadIdx.x; // current column
    if ((r >= rows) || (c >= cols))
    {
        return;
    }
    int half = filterWidth / 2;
    float blur = 0.f; // will contain the blurred value
    int width = cols - 1;
    int height = rows - 1;
    for (int i = -half; i <= half; ++i)     // rows
    {
        for (int j = -half; j <= half; ++j) // columns
        {
            // Clamp the filter to the image border
            int h = min(max(r + i, 0), height);
            int w = min(max(c + j, 0), width);
            // The blur is a sum of pixel values weighted by the kernel.
            // Since all weights sum to 1, this is a weighted average.
            int idx = w + cols * h; // current pixel index
            float pixel = static_cast<float>(inputChannel[idx]);
            idx = (i + half) * filterWidth + j + half;
            float weight = filterWeight[idx];
            blur += pixel * weight;
        }
    }
    blurredChannel[c + r * cols] = static_cast<unsigned short>(blur);
}

void createFilter(float *gKernel, double sigma, int radius)
{
    double r, s = 2.0 * sigma * sigma;
    double sum = 0.0; // sum is for normalization
    // generate a (2*radius+1) x (2*radius+1) kernel (9x9 for radius 4)
    int m = 0;
    for (int x = -radius; x <= radius; x++)
    {
        for (int y = -radius; y <= radius; y++)
        {
            r = std::sqrt(double(x * x + y * y));
            gKernel[m] = exp(-(r * r) / s) / (3.14 * s);
            sum += gKernel[m];
            m++;
        }
    }
    // normalize the kernel
    m = 0;
    for (int i = 0; i < (radius * 2 + 1); ++i)
        for (int j = 0; j < (radius * 2 + 1); ++j)
            gKernel[m++] /= sum;
}
int main()
{
    cudaError_t cudaStatus;
    const int size = 81; // 9x9 kernel
    float gKernel[size];
    float *dev_p = 0;
    cudaStatus = cudaMalloc((void**)&dev_p, size * sizeof(float));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
    }
    createFilter(gKernel, 20.0, 4);
    cudaStatus = cudaMemcpy(dev_p, gKernel, size * sizeof(float), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
    }
    /* I read the image buffer as unsigned short; that code is not added here because
       it is large, and I copy the image data from host to device. */
    /* So, suppose I have unsigned short *d_img which contains the image data. */
    cudaMalloc((void**)&d_img, length * sizeof(unsigned short));
    cudaMalloc((void**)&d_blur_img, length * sizeof(unsigned short));
    static const int BLOCK_WIDTH = 32;
    int image_width = 1580, image_height = 1050;
    int x = static_cast<int>(ceilf(static_cast<float>(image_width) / BLOCK_WIDTH));
    int y = static_cast<int>(ceilf(static_cast<float>(image_height) / BLOCK_WIDTH));
    const dim3 grid(x, y, 1); // number of blocks
    const dim3 block(BLOCK_WIDTH, BLOCK_WIDTH, 1);
    gaussian_blur<<<grid, block>>>(d_blur_img, d_img, image_height, image_width, dev_p, 9);
    cudaDeviceSynchronize();
    /* After blurring the image, I copy the buffer from device to host and free the GPU memory. */
    cudaFree(d_img);
    cudaFree(d_blur_img);
    cudaFree(dev_p);
    return 0;
}
Short answer: both algorithms are good with respect to image blurring, so feel free to pick the best (fastest) one for your use case.
Kernel size and sigma value are directly correlated: the greater the sigma, the larger the kernel (and thus the more operations per pixel to get the final result); a common rule of thumb sets the kernel radius to about 3 sigma.
If you implemented a naive 2-D convolution, then you should try a separable convolution implementation instead; since a Gaussian kernel factors into two 1-D passes, it will already reduce the computation time by an order of magnitude (see the sketch below).
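To make the separable idea concrete, here is a small CPU-side sketch with OpenCV (not CUDA, and the file names are placeholders): a 2-D Gaussian kernel is the outer product of a 1-D Gaussian with itself, so two 1-D passes give the same blur as one 2-D pass.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // hypothetical input
    if (src.empty()) return -1;

    double sigma = 20.0;
    int ksize = 2 * int(3 * sigma) + 1; // rule of thumb: radius ~ 3 sigma -> 121 taps

    // One 1-D Gaussian kernel; the 2-D kernel is its outer product with itself.
    cv::Mat k1d = cv::getGaussianKernel(ksize, sigma, CV_32F);

    // Two 1-D passes (horizontal then vertical) instead of one k x k pass:
    // roughly 2*k multiply-adds per pixel instead of k*k.
    cv::Mat dst;
    cv::sepFilter2D(src, dst, -1, k1d, k1d);

    cv::imwrite("blurred.png", dst);
    return 0;
}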
Now some more insight: the two approaches implement almost the same Gaussian blurring operation. Why almost? Because taking the FFT of an image implicitly periodizes it: at the border, the convolution kernel sees the image wrapped around its own edge. This is called circular convolution (because of the wrapping). Direct Gaussian blur, on the other hand, implements a plain linear convolution; if the wrap-around at the borders matters for your images, pad the image by at least the kernel radius before the FFT and crop the result afterwards.
