cvThreshold of an IplImage - visual-studio-2010

I'm writing code in Visual C++ using the OpenCV and Qt libraries. I'm trying to apply a threshold to an IplImage and display it, but I have a problem: when I pass my IplImage to the cvThreshold function (with a hypothetical threshold of 0), it doesn't return a white image, and I don't know why.
To display the result I'm using emit:
uchar *qimout=new uchar[sImg];
IplImage *greyImage=cvCreateImage(cvSize(wImg,hImg),IPL_DEPTH_8U,1);
cvThreshold(currentImage,greyImage,0,255,cv::THRESH_BINARY);
greyImage->imageData = (char*)qimout;
emit renderImage(QImage(qimout,wImg,hImg,QImage::Format_Indexed8));
Can someone help me?
Thanks in advance.

I think this is related to this post: Convert RGB to Black & White in OpenCV. First convert the image to grayscale:
// C
IplImage *im_rgb = cvLoadImage("image.jpg");
IplImage *im_gray = cvCreateImage(cvGetSize(im_rgb),IPL_DEPTH_8U,1);
cvCvtColor(im_rgb,im_gray,CV_RGB2GRAY);
// C++
Mat im_rgb = imread("image.jpg");
Mat im_gray;
cvtColor(im_rgb,im_gray,CV_RGB2GRAY);
Then threshold the grayscale image to black & white:
// C
IplImage* im_bw = cvCreateImage(cvGetSize(im_gray),IPL_DEPTH_8U,1);
cvThreshold(im_gray, im_bw, 128, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
// C++
Mat img_bw = im_gray > 128;
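There is also a second problem in the snippet from the question: greyImage->imageData is pointed at the uninitialized qimout buffer after cvThreshold has already run, so the emitted QImage shows garbage. A minimal sketch of a corrected order, assuming currentImage is already 8-bit single-channel and wImg is a multiple of 4 (so the IplImage rows carry no alignment padding):
IplImage *greyImage = cvCreateImage(cvSize(wImg, hImg), IPL_DEPTH_8U, 1);
cvThreshold(currentImage, greyImage, 0, 255, CV_THRESH_BINARY); // src must match dst: 8U, 1 channel
// copy the thresholded pixels into the buffer handed to QImage
uchar *qimout = new uchar[sImg];
memcpy(qimout, greyImage->imageData, sImg);
emit renderImage(QImage(qimout, wImg, hImg, QImage::Format_Indexed8));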

Related

Convert from Android.Media.Image to Android.Graphics.Bitmap (possibly with Xamarin)

I have an Android.Media.Image frame acquired from the Camera2 API preview (in a Xamarin app) that I want to process.
The image format is YUV_420_888.
In a previous iteration I was able to get an Android.Graphics.Bitmap from the camera preview (but not anymore), and with that bitmap I was able to:
crop it
convert it to JPEG, save it or send it to server
convert it to a Mat to process with Emgu in real time
I searched for how to convert from Android.Media.Image to Android.Graphics.Bitmap, thinking it would be quite straightforward, but it's not.
I mostly found Java code that I tried to port to C# Xamarin, but without success.
The first step is supposed to be converting to a YuvImage (I found two different versions, hence the comments):
private static byte[] YUV_420_888toNV21(Android.Media.Image image)
{
byte[] nv21;
ByteBuffer yBuffer = image.GetPlanes()[0].Buffer;
ByteBuffer uBuffer = image.GetPlanes()[1].Buffer;
ByteBuffer vBuffer = image.GetPlanes()[2].Buffer;
int ySize = yBuffer.Remaining();
int uSize = uBuffer.Remaining();
int vSize = vBuffer.Remaining();
nv21 = new byte[ySize + uSize + vSize];
//yBuffer.Get(nv21, 0, ySize);
//vBuffer.Get(nv21, ySize, vSize);
//uBuffer.Get(nv21, ySize + vSize, uSize);
yBuffer.Get(nv21, 0, ySize);
uBuffer.Get(nv21, ySize, uSize);
vBuffer.Get(nv21, ySize + uSize, vSize);
return nv21;
}
And then the saving:
var bbitmap = YUV_420_888toNV21(image);
YuvImage yuv = new YuvImage(bbitmap, ImageFormatType.Nv21, image.Width, image.Height, null);
var ms = new MemoryStream();
yuv.CompressToJpeg(new Android.Graphics.Rect(0, 0, image.Width, image.Height), 100, ms);
File.WriteAllBytes("/sdcard/Download/testFrame.jpg", ms.ToArray());
But the result is a garbled image.
This was just a test to save and check the captured frame; even if it worked, at the end of this I still wouldn't have a Bitmap.
For real-time processing I don't want/need the JPEG conversion anyway, because it would only slow everything down.
I'd like to convert the Image to a Bitmap, crop it, and feed it to Emgu as a Mat.
Any suggestion?
Thanks
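One shortcut worth trying, since Emgu wraps OpenCV: skip the Bitmap and the JPEG entirely and hand the NV21 bytes straight to OpenCV's color converter. (Note that NV21 stores V before U, so the commented-out ordering in the function above is the one that actually matches NV21 when the chroma planes are tightly packed.) A sketch of the idea in C++ terms, where nv21, width and height come from the question; Emgu should expose the equivalent call as CvInvoke.CvtColor:
#include <opencv2/imgproc.hpp>
// NV21 is 12 bits per pixel: a full-resolution Y plane followed by a half-height interleaved VU plane
cv::Mat yuv(height + height / 2, width, CV_8UC1, nv21);
cv::Mat bgr;
cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);
// bgr can now be cropped with a cv::Rect ROI and processed directly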

OpenCV, Qt, imread, namedWindow, imshow does not work

In the .pro file:
QT += core
QT -= gui
TARGET = latihan_2
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
INCLUDEPATH += E:\OpenCV\OpenCV\opencv\build\include
LIBS += E:\OpenCV\OpenCV\opencv\build\x86\mingw\lib\libopencv_core246.dll.a
LIBS += E:\OpenCV\OpenCV\opencv\build\x86\mingw\lib\libopencv_highgui246.dll.a
LIBS += E:\OpenCV\OpenCV\opencv\build\x86\mingw\lib\libopencv_imgproc246.dll.a
LIBS += E:\OpenCV\OpenCV\opencv\build\x86\mingw\lib\libopencv_features2d246.dll.a
LIBS += E:\OpenCV\OpenCV\opencv\build\x86\mingw\lib\libopencv_calib3d246.dll.a
In main.cpp:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main(){
//read image
Mat image = imread("img.jpg", 1);
//create image window named "My image"
namedWindow("My Image", CV_WINDOW_AUTOSIZE);
//show the image on window
imshow("My image", image);
//wait key for 5000ms
waitKey(5000);
return 1;
}
When I hit run, there is no error, but it only shows a black window named qtcreator_process_stub.exe.
Why the "My image" window doesn't come out and shows the img.jpg?
I use Qt creator 2.8.1, based on Qt 5.1.1, and openCV-2.4.6.0.
You could also display a cv::Mat in a Qt window. I demonstrate how to do that in cvImage. The code below is adapted from cvImage::_open():
std::string filename = ...
cv::Mat mat = cv::imread(filename);
// Since OpenCV uses BGR order, we need to convert it to RGB
// NOTE: OpenCV 2.x uses CV_BGR2RGB, OpenCV 3.x uses cv::COLOR_BGR2RGB
cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB);
// image is created according to Mat dimensions
QImage image(mat.size().width, mat.size().height, QImage::Format_RGB888);
// Copy the cv::Mat pixels to the QImage (assumes rows are tightly packed)
memcpy(image.scanLine(0), mat.data, static_cast<size_t>(image.width() * image.height() * mat.channels()));
// Display the QImage in a QLabel
QLabel label;
label.setPixmap(QPixmap::fromImage(image));
label.show();
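One caveat: the memcpy above assumes the Mat's rows are tightly packed and that the QImage uses the same row stride, which only holds when the image width is a multiple of 4. A stride-safe variant (a sketch) wraps the Mat's pixels directly and passes mat.step as the bytes-per-line; the copy() detaches the QImage from mat's memory:
QImage image(mat.data, mat.cols, mat.rows,
             static_cast<int>(mat.step), QImage::Format_RGB888);
QLabel label;
label.setPixmap(QPixmap::fromImage(image.copy()));
label.show();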
First guess is that the image is at the wrong path, so the first test should be to specify the full path to the image.
Also check the return value of your program (make sure it doesn't return some crash error code; be consistent and return 0 for success and other values for failure).
And a little bit of code that tells you where things fail doesn't hurt, so check image.data, or you can also use image.empty():
if(! image.data )
{
std::cout << "No image to display";
//can be any other method to display the error: qDebug, a message box...
return 1;
}
else
{
//use the image
//if nothing goes wrong:
//return 0;
}
Check the Projects -> Run Settings -> Run in Terminal checkbox. If it is disabled, enable it.
I faced the same problem and solved it by fixing my PATH environment variable. I had wrongly added some OpenCV folders to it, so I deleted them and added only the bin folder for the OpenCV DLLs, and the problem was solved.

UINT16 monochrome image to 8-bit monochrome QImage using FreeImage

I want to convert a UINT16 monochrome image to an 8-bit image, in C++.
I have that image in a
char *buffer;
I'd like to give the new converted buffer to a QImage (Qt).
I'm trying with FreeImagePlus:
fipImage fimage;
if (fimage.loadFromMemory(...) == false)
//error
loadFromMemory needs a fipMemoryIO address:
loadfromMemory(fipMemoryIO &memIO, int flag = 0)
So I do
fipImage fimage;
BYTE *buf = (BYTE*)malloc(gimage.GetBufferLength() * sizeof(BYTE));
// 'buf' is empty, I have to fill it with 'buffer' content
// how can I do it?
fipMemoryIO memIO(buf, gimage.GetBufferLength());
fimage.loadFromMemory(memIO);
if (fimage.convertTo8Bits() == true)
cout << "Good";
Then I would do something like
fimage.saveToMemory(...
or
fimage.saveToHandle(...
I don't understand what a FREE_IMAGE_FORMAT is, which is the first argument to either of those two functions. I can't find information about those types in the FreeImage documentation.
Then I'd finish with
imageQt = new QImage(destiny, dimX, dimY, QImage::Format_Indexed8);
How can I fill 'buf' with the content of the initial buffer?
And how can I get the data from the fipImage into a uchar* for a QImage?
Thanks.
The conversion is simple to do in plain old C++; there is no need for external libraries unless they are significantly faster and you care about such a speedup. Below is how I'd do the conversion, at least as a first cut. The data is converted in place, inside the input buffer, since the output is smaller than the input.
QImage from16Bit(void * buffer, int width, int height) {
    int size = width*height*2; // length of the data in the buffer, in bytes
    if (!size) return QImage();
    quint8 * output = reinterpret_cast<quint8*>(buffer);
    const quint16 * input = reinterpret_cast<const quint16*>(buffer);
    do {
        *output++ = *input++ >> 8; // keep the most significant byte of each 16-bit sample
    } while (size -= 2);
    // construct the image from the start of the buffer, one byte per pixel, no row padding
    return QImage(reinterpret_cast<uchar*>(buffer), width, height, width,
                  QImage::Format_Indexed8);
}
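Two things to watch with the returned QImage: it does not own the buffer (the caller must keep the buffer alive for the QImage's lifetime), and Format_Indexed8 needs a color table before it renders as grayscale. A hedged usage sketch, with buffer, dimX and dimY as in the question:
QImage img = from16Bit(buffer, dimX, dimY);
QVector<QRgb> grayTable(256);
for (int i = 0; i < 256; ++i)
    grayTable[i] = qRgb(i, i, i); // map palette index i to gray level i
img.setColorTable(grayTable);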

Error converting IplImage** to IplImage*

IplImage *img;
img = (IplImage **)malloc(IMAGE_NUM * sizeof(IplImage *));
for(index=0; index<IMAGE_NUM; index++){
sprintf(filename, "preproc/preproc%d.jpg", index);
img = cvLoadImage(filename, 0);
}
Hi! This piece of code produces the error: cannot convert ‘IplImage** {aka _IplImage**}’ to ‘IplImage* {aka _IplImage*}’ in assignment. I am trying to load multiple images here. What am I doing wrong? Thanks!
You declare 'img' to be a pointer to IplImage, and then you're trying to convert it into a pointer to a pointer (IplImage**). This typecast is incorrect for this particular case, since you're trying to assign an IplImage** to an IplImage*.
Declare img to be: IplImage **img;
Try this:
IplImage** img;
img = (IplImage**)malloc(IMAGE_NUM * sizeof(IplImage *));
for(index=0; index<IMAGE_NUM; index++){
sprintf(filename, "preproc/preproc%d.jpg", index);
*img = cvLoadImage(filename, 0);
}
By the way, the next error you'll get will be for not advancing the img pointer after each loop iteration.
Try declaring IplImage** img;, then img[index] = cvLoadImage(filename, 0), since img is an array of IplImage pointers and cvLoadImage() returns a single image.
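Putting the two answers together, a minimal corrected version of the loop from the question might look like this (IMAGE_NUM and the file naming are taken from the question):
IplImage **img = (IplImage**)malloc(IMAGE_NUM * sizeof(IplImage*));
char filename[64];
for (int index = 0; index < IMAGE_NUM; index++) {
    sprintf(filename, "preproc/preproc%d.jpg", index);
    img[index] = cvLoadImage(filename, 0); // each element holds one loaded image
}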

Analyzing camera feed under OSX

I am looking for a way to programmatically analyze a video feed from an external usb webcam under OSX.
Since I haven't done any low level programming like this before I am currently kind of lost on where to start.
How can I access a webcam feed and grab the image data to then process further?
At this point I am just trying to understand the basic concept and am not looking for language-specific solutions.
Any sample code would be highly appreciated.
I'd appreciate it very much if someone could point me in the right direction and help me get started.
Thank you very much in advance!
Thomas
Use OpenCV.
And check my previous answer on this subject if you are looking for a code example that displays the webcam images. It converts the video feed to grayscale and displays it in a window:
OpenCV 2.1: Runtime error
If you just want to display the frames, then replace the else block by this:
else
{
cvShowImage("Colored video", color_frame);
}
In case you are wondering how to manipulate the pixels of the frame:
int width = color_frame->width;
int height = color_frame->height;
int bpp = color_frame->nChannels;
for (int i=0; i < width*height*bpp; i+=bpp)
{
if (!(i % (width*bpp))) // print empty line for better readability
std::cout << std::endl;
std::cout << std::dec << "B:" << (int)(unsigned char) color_frame->imageData[i] <<
" G:" << (int)(unsigned char) color_frame->imageData[i+1] <<
" R:" << (int)(unsigned char) color_frame->imageData[i+2] << " ";
}
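If you are starting from scratch, OpenCV's C++ API is less error-prone than the legacy C interface shown above. A minimal capture-and-display sketch (untested, assuming the default camera at device index 0):
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // open the default webcam
    if (!cap.isOpened()) return 1;
    cv::Mat frame, gray;
    for (;;) {
        cap >> frame;                     // grab the next BGR frame
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::imshow("Grayscale video", gray);
        if (cv::waitKey(30) >= 0) break;  // quit on any key press
    }
    return 0;
}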
For quick access to a webcam and for manipulation of pixel data, you can use Processing with the Video library; the easiest way to start is to check out the examples bundled with the IDE.
Processing is a Java-based visualisation language which is easy to learn and use and works on Windows, Mac OS X and Linux. I found the webcam stuff worked out of the box on my MacBook.
Here is an example script (based on an example bundled in the IDE) which loads a webcam feed and renders the pixels in greyscale.
import processing.video.*;
int numPixels;
Capture video;
void setup() {
// Change size to 320 x 240 if too slow at 640 x 480
size(640, 480, P2D);
video = new Capture(this, width, height, 24);
numPixels = video.width * video.height;
// Make the pixels[] array available for direct manipulation
loadPixels();
}
void draw() {
if (video.available()) {
video.read(); // Read a new video frame
video.loadPixels(); // Make the pixels of video available
for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
// Make all the pixels grey if mouse is pressed
if (mousePressed) {
float greyVal = brightness(video.pixels[i]);
pixels[i] = color(greyVal);
} else {
// If mouse not pressed, show normal video
pixels[i] = video.pixels[i];
}
}
updatePixels(); // Notify that the pixels[] array has changed
}
}
Moreover, there is a great interface to OpenCV which can be used for edge detection etc.
