Argument list error using calcHist() function in OpenCV - visual-studio-2010

I'm trying to use the following code:
cv::MatND hist;
cv::Mat image = cv::imread("image.bmp");
float *range = new float[33];
int histSize = 32;
int channel = 0;
for (int i = 0; i < 33; ++i)
    range[i] = (float)6602.0 + 21*i;
float **ranges = &range;
cv::calcHist(&image, 1, &channel, cv::Mat(), hist, 1, &histSize, ranges, false, false);
The image is grayscale, so I'm using the zeroth channel to get the histogram. I know the range is uniform, but I wanted to know my boundaries exactly. The image is of type CV_16U (in the original code the image is read from a camera, but that's too long to include here).
My problem is that at compilation I get the following errors:
error C2665: 'cv::calcHist' : none of the 3 overloads could convert all the argument types
C:\opencv\build_x64\include\opencv2/imgproc/imgproc.hpp(670): could be 'void cv::calcHist(const cv::Mat *,int,const int *,cv::InputArray,cv::OutputArray,int,const int *,const float **,bool,bool)'
C:\opencv\build_x64\include\opencv2/imgproc/imgproc.hpp(676): or 'void cv::calcHist(const cv::Mat *,int,const int *,cv::InputArray,cv::SparseMat &,int,const int *,const float **,bool,bool)'
while trying to match the argument list '(cv::Mat *, int, int *, cv::Mat, cv::MatND, int, int *, float **, bool, bool)'
I know it's kind of silly, but I'm about to go crazy. Any help is appreciated.
PS: I'm using OpenCV 2.4.2 with Microsoft Visual C++ Express in a 64-bit environment.
Best,
Baris

If your OpenCV version is newer than 2.3, which seems to be the case, you should know that cv::Mat and cv::MatND have been merged.
As for the error: since in the new OpenCV a cv::Mat can have any number of dimensions, the definition of calcHist has changed, as you can see here:
C++: void calcHist(const Mat* images, int nimages, const int* channels, InputArray mask, OutputArray hist, int dims, const int* histSize, const float** ranges, bool uniform=true, bool accumulate=false )
C++: void calcHist(const Mat* images, int nimages, const int* channels, InputArray mask, SparseMat& hist, int dims, const int* histSize, const float** ranges, bool uniform=true, bool accumulate=false )
As you can also see in the brand-new tutorial here, a plain cv::Mat will do the trick:
cv::Mat hist;
Using the new documentation will help you clear these things up. Unfortunately, most of the tutorials and code available are based on the old OpenCV, which has undergone big changes to reach the current version.
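For illustration, here is a minimal sketch of a call that matches the first overload quoted above, assuming the single CV_16U image from the question is in image and keeping the 32 explicitly listed bin boundaries. Note also that the parameter is const float**, and in C++ a float** does not implicitly convert to const float**, so declaring the ranges array as const float* removes that part of the mismatch as well:
cv::Mat hist;                               // cv::Mat now covers what cv::MatND used to
int histSize = 32;
int channel = 0;
float range[33];                            // 33 boundaries for 32 non-uniform bins
for (int i = 0; i < 33; ++i)
    range[i] = 6602.0f + 21.0f * i;
const float* ranges[] = { range };          // decays to const float**
cv::calcHist(&image, 1, &channel, cv::Mat(), hist, 1, &histSize, ranges,
             false,                         // uniform = false: boundaries given explicitly
             false);                        // accumulate = false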

Related

stbir_resize_uint8 crashing on memory access

I'm using stb_image to upload an image to the GPU. If I just upload the image with stbi_load, I can confirm (with NVIDIA Nsight) that the image is correctly stored in GPU memory. However, there are some images I'd like to resize before uploading them to the GPU. In that case, I get a crash. This is the code:
int textureWidth;
int textureHeight;
int textureChannelCount;
stbi_uc* pixels = stbi_load(fullPath.string().c_str(), &textureWidth, &textureHeight, &textureChannelCount, STBI_rgb_alpha);
if (!pixels) {
    char error[512];
    sprintf_s(error, "Failed to load image %s!", pathToTexture);
    throw std::runtime_error(error);
}
stbi_uc* resizedPixels = nullptr;
uint32_t imageSize = 0;
if (scale > 1.0001f || scale < 0.9999f) {
    stbir_resize_uint8(pixels, textureWidth, textureHeight, 0, resizedPixels, textureWidth * scale, textureHeight * scale, 0, textureChannelCount);
    stbi_image_free(pixels);
    textureWidth *= scale;
    textureHeight *= scale;
    imageSize = textureWidth * textureHeight * textureChannelCount;
} else {
    resizedPixels = pixels;
    imageSize = textureWidth * textureHeight * textureChannelCount;
}
// Upload the image to the gpu
When this code is run with scale set to 1.0f, it works fine. However, when I set the scale to 0.25f, the program crashes inside stbir_resize_uint8. The image I'm providing in both cases is a 1920x1080 RGBA PNG; the alpha channel is set to 1.0f across the whole image.
Which function do I have to use to resize the image?
EDIT: If I allocate the memory myself, the function no longer crashes and works fine. But I thought stb handled all memory allocation internally. Was I wrong?
I see you found and solved the problem in your edit, but here's some useful advice anyway:
The comments in the source (which are also the documentation) don't explicitly mention that you have to allocate memory for the resized image, but it becomes clear when you take a closer look at the function's signature:
STBIRDEF int stbir_resize_uint8( const unsigned char *input_pixels , int input_w , int input_h , int input_stride_in_bytes,
unsigned char *output_pixels, int output_w, int output_h, int output_stride_in_bytes,
int num_channels);
Think about how you yourself would return the address of a memory chunk that you allocated in a function. The easiest would be to return the pointer directly like so:
unsigned char* allocate_memory( int size )
{ return (unsigned char*) malloc(size); }
However, the return value seems to be reserved for error codes, so your only option is to set the pointer as a side effect. To do that, you'd need to pass a pointer to it (a pointer to a pointer):
int allocate_memory( unsigned char** pointer_to_array, int size )
{
    *pointer_to_array = (unsigned char*) malloc(size);
    /* Check if allocation was successful and do other stuff... */
    return 0;
}
If you take a closer look at the resize function's signature, you'll notice that no such parameter is passed (it takes unsigned char* output_pixels instead of unsigned char** output_pixels), so there's no way for it to return the address of internally allocated memory. As a result, you have to allocate the memory for the resized image yourself.
I hope this helps you in the future.
There is a mention of memory allocation in the docs, but as far as I understand it refers to the allocations required to perform the resizing, which is unrelated to the output buffer.
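For completeness, here is a minimal sketch of the fix, mirroring the names from the question and assuming the output should be scale times the input size. Because the image is loaded with STBI_rgb_alpha, the pixel data has 4 channels regardless of what textureChannelCount reports, so 4 is used as the channel count:
int outWidth  = (int)(textureWidth * scale);
int outHeight = (int)(textureHeight * scale);
// Allocate the output buffer ourselves; stbir_resize_uint8 does not allocate it for us.
resizedPixels = (stbi_uc*)malloc((size_t)outWidth * outHeight * 4);
stbir_resize_uint8(pixels, textureWidth, textureHeight, 0,
                   resizedPixels, outWidth, outHeight, 0, 4);
stbi_image_free(pixels);
textureWidth = outWidth;
textureHeight = outHeight;
imageSize = textureWidth * textureHeight * 4;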

How to flatten an image using OpenCV correctly for image processing and then convert it to Mat again?

I have an image read using cv::imread. I have to flatten it so that I can use CUDA and the GPU to accelerate my image processing algorithms.
My problem: when I read my image, I can show it correctly using imshow; however, when I flatten it and convert it back to a Mat object to use with imshow, only part of my image is displayed. The size of the output image is also wrong, meaning that some data is really lost. What's the problem with my for loop?
// The problematic part of my code
// The Camera Man gray test image
const char* img_gray_name = "../../Test_Images/cameraman.tiff";
const char* img_blur_name = "../cameraman-blur.tiff";
const char* image_general_name = "cameraman_blur";
cv::Mat img = cv::imread(img_gray_name);
unsigned long int img_gray_size = img.rows * img.cols * sizeof(uchar);
uchar *h_img_in;// input image, converted to a flat array to be
// processed by GPU
h_img_in = (uchar *)malloc(img_gray_size);
//*************** The bug should be here! ***************//
for (int i = 0; i < img.rows; ++i) {
    for (int j = 0; j < img.cols; ++j) {
        h_img_in[i*img.cols + j] = img.at<uchar>(i, j);
    }
}
Mat img_test;
img_test = Mat(cv::Size(img.cols, img.rows), CV_8U, h_img_in);
imwrite(img_blur_name, img_test);
// create image window named "camera man"
cv::namedWindow(image_general_name);
// show the image on window
cv::imshow(image_general_name, img_test);
P.S.: I also tested with a new 2D array instead of the 1D h_img_in, and the result is the same. This means that something goes wrong with my usage of img.at<uchar>(i, j).
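Not part of the original post, but one thing worth checking: cv::imread with default flags returns a 3-channel BGR Mat even for a grayscale file, in which case img_gray_size is a third of the actual data size and img.at<uchar>(i, j) indexes raw interleaved BGR bytes rather than pixels, which would explain only part of the image showing up. A minimal sketch of forcing a single-channel load:
// Load as single-channel grayscale so the elements really are uchar
// (CV_LOAD_IMAGE_GRAYSCALE is the OpenCV 2.x name for cv::IMREAD_GRAYSCALE).
cv::Mat img = cv::imread(img_gray_name, CV_LOAD_IMAGE_GRAYSCALE);
CV_Assert(img.type() == CV_8UC1);   // sanity check before flattening
unsigned long int img_gray_size = img.rows * img.cols * sizeof(uchar);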

Using Gabor filter on an image

I'm looking to write some code in OpenCV/MATLAB that will apply the Gabor filter to images to spot interesting image regions. I've read quite a lot of literature and seen some of the previous MATLAB/OpenCV code, but I'd like to attempt it all myself to make sure I fully understand.
I have the equation for the Gabor function and an image. I am unsure of the steps I should take in my algorithm. The general idea I got was to take the discrete Fourier transform of the image, multiply (convolve?) it with the Gabor function, then take the inverse Fourier transform of the result. Any pointers appreciated. Thanks!
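For reference, the code below builds the Gabor kernel directly in the spatial domain and applies it with cv::filter2D; by the convolution theorem this is equivalent to multiplying the image's DFT by the filter's frequency response and transforming back, and for small kernels the spatial route is usually the simpler one: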
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <math.h>
using namespace cv;
int main(int argc, char** argv)
{
    int ks = 47;
    int hks = (ks-1)/2;
    int kernel_size = 21;
    double sig = 7;
    double th = 200;
    double ps = 90;
    double lm = 0.5 + ps/100.0;
    double theta = th*CV_PI/180;
    double psi = ps*CV_PI/180;
    double del = 2.0/(ks-1);
    double sigma = sig/ks;
    double x_theta;
    double y_theta;
    Mat image = imread("C:\\users\\michael\\desktop\\tile1.tif", 1), dest, src, src_f;
    if (image.empty())
    {
        return -1;
    }
    imshow("Src", image);
    cvtColor(image, src, CV_BGR2GRAY);
    src.convertTo(src_f, CV_32F, 1.0/255, 0);
    if (ks % 2 == 0)   // make sure the kernel size is odd
    {
        ks += 1;
    }
    Mat kernel(ks, ks, CV_32F);
    for (int y = -hks; y <= hks; y++)
    {
        for (int x = -hks; x <= hks; x++)
        {
            x_theta = x*del*cos(theta) + y*del*sin(theta);
            y_theta = -x*del*sin(theta) + y*del*cos(theta);
            kernel.at<float>(hks+y, hks+x) = (float)exp(-0.5*(pow(x_theta,2)+pow(y_theta,2))/pow(sigma,2)) * cos(2*CV_PI*x_theta/lm + psi);
        }
    }
    filter2D(src_f, dest, CV_32F, kernel);
    imshow("Gabor", dest);
    Mat Lkernel(kernel_size*20, kernel_size*20, CV_32F);
    resize(kernel, Lkernel, Lkernel.size());
    Lkernel /= 2.;
    Lkernel += 0.5;
    imshow("Kernel", Lkernel);
    Mat mag;
    pow(dest, 2.0, mag);
    imshow("Mag", mag);
    waitKey(0);
    return 0;
}

What the function calcHist() gives us

My question is about normalizing the histogram: is there a built-in function for that? If not, then obviously we can calculate the histogram of the image using calcHist(), but the formula for a normalized histogram is Nk/N. Is what calcHist() returns the N in this formula, or do we have to calculate N on our own? And what is its role in the entropy formula?
I am not sure I get your question, but here is a simple example of how to get the L1-normalized histogram of a grayscale image with OpenCV.
In the case of an image, N is the number of pixels, which can be computed simply by multiplying the width and height of the image. Then it is just a matter of dividing the histogram by N.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
int main(int argc, char** argv)
{
    Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    Mat hist;
    int channels[] = {0};
    int histSize[] = {32};
    float range[] = { 0, 256 };
    const float* ranges[] = { range };
    calcHist( &img, 1, channels, Mat(),  // do not use mask
              hist, 1, histSize, ranges,
              true,   // the histogram is uniform
              false );
    Mat histNorm = hist / (img.rows * img.cols);
    return 0;
}
To build this example, I modified the one from the OpenCV documentation.
If you want to compute the entropy H = -sum_k(p_k * log2(p_k)) with this histogram, you can do this:
double entropy = 0.0;
for (int i = 0; i < histNorm.rows; i++)
{
    float binEntry = histNorm.at<float>(i, 0);
    if (binEntry != 0.0)
        entropy -= binEntry * log2(binEntry);
}
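One portability note: log2 is only guaranteed by C++11 <cmath>; on older toolchains (for example the Visual C++ 2010 compiler mentioned elsewhere on this page) you can use log(binEntry) / log(2.0) instead.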

Xcode: convert raw data from 8 bits/channel to 16 bits/channel using a native Apple framework

I have RGB raw data at 8 bits per channel, and I need to convert it to 16 bits per channel. Can any native framework do this? Thanks.
So the input is the image width, height, and raw data. Any suggestions?
If you just want to convert an 8 bpc raw image into a 16 bpc raw image (without any special color processing or filtering), you don't need any special framework. You can do it yourself by converting pixel by pixel. For an unsigned short 16-bit target, the conversion looks like this:
unsigned short *convert_8bpc_to_16bpc(const unsigned char *src, int width, int height, int channel_count)
{
    size_t ent_cnt = (size_t)width * height * channel_count;
    unsigned short *dst = new unsigned short[ent_cnt];
    unsigned short *dst_ptr = dst;
    const unsigned char *src_ptr = src;
    while (ent_cnt--)
        *dst_ptr++ = (unsigned short)(*src_ptr++) << 8;   // scale 0..255 up to 0..65280
    return dst;
}
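A hypothetical usage example, assuming the snippet is wrapped in the convert_8bpc_to_16bpc helper shown above (the function name is only illustrative, and raw8, width, and height come from your own loading code):
// Convert an 8-bit RGB buffer (3 channels) to 16 bits per channel.
unsigned short *raw16 = convert_8bpc_to_16bpc(raw8, width, height, 3);
// ... hand raw16 to whatever expects 16-bit data ...
delete[] raw16;   // the helper allocates with new[]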
