I'm working right out of a book, and copied the following code (essentially the openCV equivalent of "hello world"):
// helloCV.cpp
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat img = cv::imread(argv[1], -1);
    if (img.empty()) return -1;
    cv::namedWindow("Example1", cv::WINDOW_AUTOSIZE);
    cv::imshow("Example1", img);
    cv::waitKey(0);
    cv::destroyWindow("Example1");
    return 0;
} // main
Unfortunately, when I run this code, I get a window with only the title bar and nothing in it:
I suspect that I've messed up the environment or some such when installing OpenCV, but cmake throws no errors, and the code runs as expected, exiting correctly on a keystroke and all of that, with the glaring exception of a lack of a displayed photo.
Any tips?
Thanks!
Thanks to @DanMašek for the lead on this one, and to all the people on this page: http://answers.opencv.org/question/160201/video-window-not-loading-frame/
To repeat what they said, what worked for me was the following:
To resolve this, locate the file window_cocoa.mm; if built from source it'll be in opencv/modules/highgui/src.
Change the following:
@implementation CVView
#if defined(__LP64__)
@synthesize image;
#else // 32-bit Obj-C does not have automatic synthesize
@synthesize image = _image;
#endif
To this:
@implementation CVView
@synthesize image = _image;
Do the same thing for the CVWindow and CVSlider implementations to accommodate videos as well.
Recompile OpenCV and test out your code.
Hope this helps other people struggling with this issue!
I am trying to "print" items that contain text, onto a mono image.
I have items that either subclass QGraphicsTextItem, or some other class that uses painter->drawText().
I can do it in Qt4 - but in Qt5, all I get are some blobs. I have tried to experiment with a small sample program, but I can't figure out what is going on...
Here is a simple self contained program:
#include <QApplication>
#include <QGraphicsView>
#include <QGraphicsTextItem>

class TextItem : public QGraphicsItem
{
public:
    TextItem() {}
    virtual void paint(QPainter* painter, const QStyleOptionGraphicsItem*, QWidget* = NULL)
    {
        QPen p(Qt::red);
        painter->setPen(p);
        painter->setBrush(Qt::NoBrush);
        //painter->setFont(QFont("Arial", 30));
        painter->drawText(boundingRect(), Qt::AlignCenter, "abcd");
    }
    virtual QRectF boundingRect() const { return QRectF(0, 0, 30, 20); }
};

void processScene(QGraphicsScene* s) {
    QGraphicsTextItem* t = new QGraphicsTextItem();
    t->setPlainText("abcd");
    // t->setDefaultTextColor(Qt::red); // won't make a difference
    t->setPos(0, 0);
    s->addItem(t);
    //t->setFont(QFont("Arial", 30));
    //t->setTransform(t->transform().fromScale(3,3));

    TextItem* t1 = new TextItem();
    t1->setPos(0, 20);
    s->addItem(t1);

    QImage image = QImage(s->sceneRect().size().toSize(),
                          QImage::Format_Mono);
                          //QImage::Format_RGB32); // this works in qt5
    image.fill(QColor(Qt::color0).rgb());

    QPainter painter;
    painter.begin(&image);
    s->render(&painter);
    painter.end();
    image.save(QString("../test.bmp"));
}

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QGraphicsScene s;
    s.setSceneRect(0, 0, 50, 50);
    QGraphicsView view(&s);
    view.show();
    processScene(&s);
    return app.exec();
}
Running it in Qt4 gives what is expected: a mono image containing the text, in both versions, using QGraphicsTextItem or painter->drawText() (left image).
The same code in Qt5 only shows some blobs in place of the text (right image):
I have tried all kinds of experiments... Setting the font doesn't make a difference (except, for QGraphicsTextItem, setting the font to something big AND scaling to something big - but that is not useful in a practical application).
The other thing that worked in Qt5 is rendering to a color image instead of mono, but that is very bad for my application, where I need to store small mono images. My thought then would be, for Qt5, to render the text to a color bitmap and then convert it to mono... But really!
I have also tried to save the items themselves to a bitmap, in a function implemented inside their class - it did the same thing.
I have tried to set point size on the QFont before applying it.
Is there something else I can set on QPainter in order to draw text ?
I found a related question, QT5 Text rendering issue, also asked again in the Qt forums. But that question refers to all rendering, and to a Qt5 alpha on a MIPS-based platform... It implies that other platforms don't see that issue...
I'm using 5.5.0... I am also running under Windows... And rendering on a color image works... And...
I simply do not understand the "solution":
For time being, when we tried forcefully to enable Glyph cache, then
we are not seeing this issue.
Not sure what that means and how to do it.
Why does text paint as blobs on mono images (but fine on color images) in Qt5, and what can I do to fix it?
Using Qt 5.5.0 32 bit, in Windows 7
Edit:
1) I tested the code in 5.4.2 and it renders text correctly.
I tried to compare some of the relevant code in qpainter.cpp, qtextlayout.cpp, qpaintengine_raster.cpp, qtextengine.cpp
- but with my limited understanding, the changes between Qt 5.4.2 and Qt 5.5.0 look logical... and would not affect the behavior...
2) I painted my text on a color QImage, then painted that onto the mono image. It worked fine for my own text items, but third-party software that creates very small text gives me very bad (illegible) resolution when I try the same.
There's an article describing how to do this here, that seems to have worked for other people but it does not compile for me.
Here's a copy of the .h file that was used:
//
// NSImage+OpenCV.h
//
#import <AppKit/AppKit.h>
@interface NSImage (NSImage_OpenCV) {
}
+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
-(id)initWithCVMat:(const cv::Mat&)cvMat;
@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;
@end
I'm on Xcode 4.4, using openCV 2.4.2. The compiler errors I'm getting for the header file are 4x of the following:
Semantic issue: Use of undeclared identifier 'cv'
This error seems rather obvious: the header file does not define what a cv::Mat is.
So I took a guess from looking at the OpenCV 2.4 tutorial, that I needed to add
#include <opencv2/core/core.hpp>
This generated 20 other errors, where the compiler was complaining about the core.hpp file.
The first of which says:
Semantic issue: Non-const static data member must be initialized out of line
So my question is what am I doing wrong? How do I get this code to work?
Another stackoverflow Q&A (link: How to include OpenCV in Cocoa Application?) had the missing piece on the undefined symbols.
To summarize:
The OpenCV headers have to be included before the Cocoa headers, due to a macro conflict.
Xcode automatically includes the Cocoa headers at the top of every Objective-C file (*.m, *.mm), which prevents you from adding the OpenCV headers first, as stated in point 1.
Xcode has a "-Prefix.pch" file containing code that is automatically prepended to every source file.
Normally, it looks like:
#ifdef __OBJC__
#import <Cocoa/Cocoa.h>
#endif
Instead, the file should be changed to look like:
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <Cocoa/Cocoa.h>
#endif
So for any *.cpp file, the OpenCV header gets added automatically. For any *.m file, the Cocoa headers are added. And in the special case of *.mm files, they BOTH get added, in the correct order.
Kudos to Ian Charnas and user671435 for figuring this out.
That is C++ syntax. You have to give your source file the .mm suffix to make it Objective-C++.
I have recently setup openCV 2.3.1 in Visual Studio 2010. I set it up using cmake and managed to run simple 'hello world' code as follows:
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <cxcore.h>
#include <highgui.h>

int _tmain(int argc, _TCHAR* argv[])
{
    IplImage *img = cvLoadImage("funny-pictures-cat-goes-pew.jpg");
    cvNamedWindow("Image:", 1);
    cvShowImage("Image:", img);
    cvWaitKey();
    cvDestroyWindow("Image:");
    cvReleaseImage(&img);
    return 0;
}
This code was able to run the first time, though it displayed a grey image instead of the cat. As I was trying to see what's wrong, it started giving the following error: Unhandled Exception at a certain memory location... (I can't upload an image because of low reputation points, but I hope you understood my problem description.)
Regards,
Ruzzar
Can you please recheck that *img is being properly filled out?
I just tested this, with the single change that my path to the image is absolute (IplImage *img = cvLoadImage("D:\\Development\\TestProjects\\OpenCVTest\\funny-pictures-cat-goes-pew.jpg");); aside from that it's the same. It worked perfectly fine on my system here.
OpenCV 2.4.2 used, image from
http://i288.photobucket.com/albums/ll169/critterclaw101/funny/funny-pictures-cat-goes-pew.jpg
Edit: when testing with a wrongly set path, I get a grey image, thus I am sure that your code cannot find the image.
I'm following the lazyfoo tutorials on SDL and I'm on lesson 01, getting an image on the screen, but SDL is giving me "Couldn't load hello.bmp", and I can't figure out why.
I'm using OS X, Xcode 3.2, and the latest version of SDL from their website.
I suspect it has something to do with not loading the hello.bmp image into Xcode correctly, but I've followed the tutorial and further Googling has produced no helpful results. Does anyone know how to troubleshoot this further?
Edit: It seems it has to do with relative paths. Still not sure what part is wrong though...
Edit: I've figured out that by going to Project -> Edit Active Executable and changing Set The Working Directory to 'Project Directory' works for now, but I don't understand why it won't load the hello.bmp in the .app itself. What am I missing?
Edit: Below is the source code for the lazyfoo lesson 01, included as per request. This is the code I'm using character for character; if you need any information about my Xcode (version 3.2), let me know.
/*This source code copyrighted by Lazy Foo' Productions (2004-2012)
and may not be redestributed without written permission.*/
//Include SDL functions and datatypes
#include "SDL/SDL.h"

int main( int argc, char* args[] )
{
    //The images
    SDL_Surface* hello = NULL;
    SDL_Surface* screen = NULL;

    //Start SDL
    SDL_Init( SDL_INIT_EVERYTHING );

    //Set up screen
    screen = SDL_SetVideoMode( 640, 480, 32, SDL_SWSURFACE );

    //Load image
    hello = SDL_LoadBMP( "3.app/Contents/Resources/hello.bmp" );

    //Apply image to screen
    SDL_BlitSurface( hello, NULL, screen, NULL );

    //Update Screen
    SDL_Flip( screen );

    //Pause
    SDL_Delay( 2000 );

    //Free the loaded image
    SDL_FreeSurface( hello );

    //Quit SDL
    SDL_Quit();

    return 0;
}
This is incorrect:
SDL_LoadBMP( "3.app/Contents/Resources/hello.bmp" );
You should get the path for a resource in your application's bundle by calling [[NSBundle mainBundle] pathForResource:@"hello" ofType:@"bmp"], which will return an NSString object with the absolute path of the file corresponding to that resource.
I've found that by replacing "hello.bmp" in the lesson01 source code with "X.app/Contents/Resources/hello.bmp", where X is the name of your Xcode project, this correctly references the app that is built, even if the .app is copied to the desktop and run there.
Also, don't forget to add hello.bmp to the Xcode project.
I just want to display this "img1.jpg" image in a C++ project, using the OpenCV libs, for future processing, but it only displays an empty gray window. What is the reason for this? Is there a mistake in this code? Please help!
Here is the code;
Mat img1;
char imagePath[256] = "img1.jpg";
img1 = imread(imagePath, CV_LOAD_IMAGE_GRAYSCALE);
namedWindow("result", 1);
imshow("result", img1);
Thanks...
I had the same problem and solved putting waitKey(1); after imshow(). The OpenCV documentation explains why:
This function is the only method in HighGUI that can fetch and handle
events, so it needs to be called periodically for normal event
processing, unless HighGUI is used within some environment that takes
care of event processing.
Thanks @b_froz.
For more details about this issue, you can refer to: http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#imshow
Note: This function should be followed by the waitKey function, which displays the image for the specified milliseconds. Otherwise, it won't display the image. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame for 25 ms, after which the display will be automatically closed. (If you put it in a loop to read videos, it will display the video frame by frame.)
So not only can waitKey(1) be put after imshow(), but also waitKey(0) or waitKey() with other integers. Here is the explanation of the function waitKey(): http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#waitkey
Are you importing the correct library?
This is another very easy way to load an image:
#define CV_NO_BACKWARD_COMPATIBILITY
#include <cv.h>
#include <highgui.h>

int main() {
    IplImage* img = cvLoadImage("yourPicture.jpg");
    cvNamedWindow("Original", 1);
    cvShowImage("Original", img);
    cvWaitKey(0);
    return 0;
}
If you have OpenCV correctly installed, you can compile it like this (Ubuntu):
g++ NameOfYourProgram.cpp -o sample -I/usr/local/include/opencv/ -L/usr/local/lib -lcv -lhighgui
and then run it with ./sample
The problem you are having is due to the type of your Mat img1. When you load your image with the flag CV_LOAD_IMAGE_GRAYSCALE, the type of your Mat is 0 (CV_8UC1), and the function imshow() is not able to show the image correctly.
You can solve this, converting your Mat to type 16 (CV_8UC3):
img1.convertTo(img1,CV_8UC3);
and then show it with imshow():
imshow("result", img1);
I hope this helps.