I'm following the lazyfoo tutorials on SDL and I'm on lesson 01, getting an image on the screen, but SDL is giving me "Couldn't load hello.bmp", and I can't figure out why.
I'm using OS X, Xcode 3.2, and the latest version of SDL from their website.
I suspect it has something to do with not loading the hello.bmp image into Xcode correctly, but I've followed the tutorial and further Googling has produced no helpful results. Does anyone know how to troubleshoot this further?
Edit: It seems it has to do with relative paths. Still not sure what part is wrong though...
Edit: I've figured out that going to Project -> Edit Active Executable and changing "Set the working directory" to 'Project Directory' works for now, but I don't understand why it won't load the hello.bmp from inside the .app itself. What am I missing?
Edit: Below is the source code for the Lazy Foo lesson 01, included as per request. This is the code I'm using character for character; if you need any information about my Xcode (version 3.2), let me know.
/*This source code copyrighted by Lazy Foo' Productions (2004-2012)
and may not be redistributed without written permission.*/
//Include SDL functions and datatypes
#include "SDL/SDL.h"
int main( int argc, char* args[] )
{
    //The images
    SDL_Surface* hello = NULL;
    SDL_Surface* screen = NULL;

    //Start SDL
    SDL_Init( SDL_INIT_EVERYTHING );

    //Set up screen
    screen = SDL_SetVideoMode( 640, 480, 32, SDL_SWSURFACE );

    //Load image
    hello = SDL_LoadBMP( "3.app/Contents/Resources/hello.bmp" );

    //Apply image to screen
    SDL_BlitSurface( hello, NULL, screen, NULL );

    //Update screen
    SDL_Flip( screen );

    //Pause
    SDL_Delay( 2000 );

    //Free the loaded image
    SDL_FreeSurface( hello );

    //Quit SDL
    SDL_Quit();

    return 0;
}
This is incorrect:
SDL_LoadBMP( "3.app/Contents/Resources/hello.bmp" );
You should get the path for a resource in your application's bundle by calling [[NSBundle mainBundle] pathForResource:@"hello" ofType:@"bmp"], which will return an NSString object with the absolute path of the file corresponding to that resource.
I've found that by replacing "hello.bmp" in the lesson01 source code with "X.app/Contents/Resources/hello.bmp", where X is the name of your Xcode project, this correctly references the app that is built, even if the .app is copied to the desktop and run there.
Also, don't forget to add hello.bmp to the Xcode project.
I have an app that loads an image into a QPixmap; the image changes depending on the user's request.
Not all images have the same format or the same size; some are vertical, others horizontal.
I am using Qt 5.14.2, with MinGW 7.3.0 32-bit and Qt Creator 4.11.1, on Windows 10.
On a machine running Windows 8.1 with the same versions of Qt, MinGW and Qt Creator, everything has been working fine.
The first image loads perfectly, but when I change it, QPixmap::fromImage returns a null QPixmap.
I have tried different things to load the image: resizing the QPixmap (with fixed dimensions and with the image's own dimensions),
saving the QImage to a file and loading it back from the file in PNG format, etc.
I have been able to verify that the images are generated correctly, to rule out a problem when converting the raw data to QImage.
Any idea what is happening? I can't post the code because the app is quite large, so I'll narrow it down to where the problem is.
My_class.h:
QPixmap pixmap;
My_class.cpp:
int w=num_celdas_imagen_x*128;
int h=num_celdas_imagen_y*128;
unsigned char *data =(unsigned char *)malloc(w*h*3);
// filling data
QImage img = QImage((unsigned char *)data,w,h,QImage::Format_RGB888);
// At this point I have been able to check that the image is generated correctly
this->pixmap = QPixmap::fromImage(img);
// At this point the QPixmap is null (except with the first image, which loads fine)
Thanks
I'm working right out of a book, and copied the following code (essentially the OpenCV equivalent of "hello world"):
//helloCV.cpp
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat img = cv::imread(argv[1], -1);
    if (img.empty()) return -1;
    cv::namedWindow("Example1", cv::WINDOW_AUTOSIZE);
    cv::imshow("Example1", img);
    cv::waitKey(0);
    cv::destroyWindow("Example1");
    return 0;
} //main
Unfortunately, when I run this code, I get a window with the header and nothing in it:
I suspect that I've messed up the environment or some such when installing OpenCV, but cmake throws no errors, and the code otherwise runs as expected, exiting correctly on a keystroke and all of that, with the glaring exception that no photo is displayed.
Any tips?
Thanks!
Thanks to @DanMašek for the lead on this one, and to all the people on this page: http://answers.opencv.org/question/160201/video-window-not-loading-frame/
To repeat what they said, what worked for me was the following:
To resolve this, locate the file window_cocoa.mm; if built from source it'll be in opencv/modules/highgui/src.
Change the following:
@implementation CVView
#if defined(__LP64__)
@synthesize image;
#else // 32-bit Obj-C does not have automatic synthesize
@synthesize image = _image;
#endif
To this:
@implementation CVView
@synthesize image = _image;
Do the same thing for the CVWindow and CVSlider implementations to accommodate videos as well.
Recompile OpenCV and test out your code.
Hope this helps other people struggling with this issue!
I am trying to "print" items that contain text, onto a mono image.
I have items that either subclass QGraphicsTextItem, or some other class that uses painter->drawText().
I can do it in Qt4 - but in Qt5, all I get are some blobs. I have tried to experiment with a small sample program, but I can't figure out what is going on...
Here is a simple self contained program:
#include <QApplication>
#include <QGraphicsView>
#include <QGraphicsTextItem>
class TextItem : public QGraphicsItem
{
public:
    TextItem() {}

    virtual void paint(QPainter* painter, const QStyleOptionGraphicsItem*, QWidget* = NULL)
    {
        QPen p(Qt::red);
        painter->setPen(p);
        painter->setBrush(Qt::NoBrush);
        //painter->setFont(QFont("Arial", 30));
        painter->drawText(boundingRect(), Qt::AlignCenter, "abcd");
    }

    virtual QRectF boundingRect() const { return QRectF(0, 0, 30, 20); }
};

void processScene(QGraphicsScene* s) {
    QGraphicsTextItem* t = new QGraphicsTextItem();
    t->setPlainText("abcd");
    // t->setDefaultTextColor(Qt::red); // won't make a difference
    t->setPos(0, 0);
    s->addItem(t);
    //t->setFont(QFont("Arial", 30));
    //t->setTransform(t->transform().fromScale(3,3));

    TextItem* t1 = new TextItem();
    t1->setPos(0, 20);
    s->addItem(t1);

    QImage image = QImage(s->sceneRect().size().toSize(),
                          QImage::Format_Mono);
                          //QImage::Format_RGB32); // this works in qt5
    image.fill(QColor(Qt::color0).rgb());

    QPainter painter;
    painter.begin(&image);
    s->render(&painter);
    painter.end();
    image.save(QString("../test.bmp"));
}

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QGraphicsScene s;
    s.setSceneRect(0, 0, 50, 50);
    QGraphicsView view(&s);
    view.show();
    processScene(&s);
    return app.exec();
}
Running it in Qt4 gives what is expected: a mono image containing the text, in both versions (QGraphicsTextItem and painter->drawText()) (left image).
The same code in Qt5 only shows some blobs in place of the text (right image):
I have tried all kinds of experiments... Setting the font doesn't make a difference (except, for QGraphicsTextItem, setting the font to something big AND scaling to something big - but that is not useful in a practical application).
The other thing that worked in Qt5 is rendering to a color image instead of mono, but that is very bad for my application, where I need to store small mono images. My fallback for Qt5 would be to render the text to a color bitmap and then convert it to mono... but really!
I have also tried to save the items themselves to a bitmap, in a function implemented inside their class - it did the same thing.
I have tried to set point size on the QFont before applying it.
Is there something else I can set on QPainter in order to draw text ?
I found a related question, QT5 Text rendering issue, also asked again in the Qt forums: QT5 Text rendering issue. But that question refers to all rendering, and to a Qt5 alpha on a MIPS-based platform... It implies that other platforms don't see the issue...
I'm using 5.5.0... I am also running under Windows... And rendering to a color image works... And...
I simply do not understand the "solution":
For time being, when we tried forcefully to enable Glyph cache, then
we are not seeing this issue.
Not sure what that means and how to do it.
Why does text paint as blobs on mono images (but fine on color images) in Qt5, and what can I do to fix it?
Using Qt 5.5.0 32-bit, on Windows 7.
Edit:
1) I tested the code in 5.4.2 and it renders text correctly.
I tried to compare some of the relevant code in qpainter.cpp, qtextlayout.cpp, qpaintengine_raster.cpp and qtextengine.cpp,
but with my limited understanding, the changes between Qt 5.4.2 and Qt 5.5.0 look logical and should not affect this behavior.
2) I did paint my text on a color QImage and then painted that onto the mono image. It worked fine for my own text items, but a third-party component that creates very small text becomes illegible when I try the same.
In my program I am using Qt's function:
qApp->primaryScreen()->grabWindow(qApp->desktop()->winId(), x_offset, y_offset, w, h);
But it is a little slow for a "main task", which is why I am asking the question above. The program has to work under Windows and Mac OS X. I have heard of OpenGL as a nice screen grabber, since it is closer to the GPU than the native APIs and is a cross-platform solution. So this is the first thing I want to know: is OpenGL realistic as a desktop screen grabber, like a "print screen" button?
If it is, how? If it's not:
Windows: can you please advise how to do it? BitBlt, GetDC, something like that?
Mac OS X: AVFoundation? Can you describe this, or give some link about how to capture a screenshot using that framework? (It's the hard way for me, since I know almost nothing about Objective-C(++).)
UPDATE: I have read a lot about ways to capture screenshots. What I have learned so far:
1. OpenGL may work as a screen grabber, but it would probably be the wrong tool for this software. That said, if there is a working solution I will accept it.
2. DirectX is not a way to solve my problem, since this software must also run under Mac OS X.
Just to expand on @Zhenyi Luo's answer, here is a code snippet I have used in the past.
It also uses FreeImage for exporting the screenshot.
void Display::SaveScreenShot (std::string FilePath, SCREENSHOT_FORMAT Format){
    // Create pixel array
    GLubyte* pixels = new GLubyte [3 * Window::width * Window::height];

    // Read pixels from the framebuffer into the array.
    // Note: readback is governed by the *pack* alignment;
    // GL_UNPACK_ALIGNMENT only affects uploads such as glTexImage2D.
    glPixelStorei (GL_PACK_ALIGNMENT, 1);
    glReadPixels (0, 0, Window::width, Window::height, GL_BGR, GL_UNSIGNED_BYTE, pixels);

    // Convert to FreeImage and save
    FIBITMAP* image = FreeImage_ConvertFromRawBits (pixels, Window::width,
                          Window::height, 3 * Window::width, 24,
                          0x0000FF, 0xFF0000, 0x00FF00, false);
    FreeImage_Save ((FREE_IMAGE_FORMAT) Format, image, FilePath.c_str (), 0);

    // Free resources
    FreeImage_Unload (image);
    delete[] pixels;
}
For reference, glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid* data) reads a block of pixels from the framebuffer into client memory starting at location data.
I am using GLFW to display some OpenGL content in MSVS 2010. I want to use AntTweakBar to modify some directive variables (speed rotation, object size, ...) and I want this bar to be above the OpenGL content (not behind as you can see in the picture).
I read the manual and followed the examples, but I can't figure out how to set this.
The examples use the old deprecated fixed-function pipeline, whereas I use the programmable pipeline, so I guess that might be the problem.
Picture (I don't have enough rep to post it directly):
http://s9.postimg.org/43aa3pt0v/cube.png
Code:
TwInit(TW_OPENGL_CORE, NULL);
int width=0;
int height=0;
glfwGetWindowSize(&width,&height);
TwWindowSize(width, height);
TwBar * BuildingGUI = TwNewBar("Window settings");
TwSetParam(BuildingGUI, NULL, "refresh", TW_PARAM_CSTRING, 1, "0.1");
TwDefine(" 'Window settings' alwaystop=true ");
TwAddVarRW(BuildingGUI, "Movement Speed" , TW_TYPE_FLOAT, &speed, "step=0.1");
Thank you for your time !
I had the same problem and fixed it by placing the TwDraw(); call just before glfwSwapBuffers() inside the draw function.