I have an app that loads an image into a QPixmap, and the image changes depending on the user's request.
Not all images have the same format or the same size; some are vertical and others horizontal.
I am using Qt 5.14.2 with MinGW 7.3.0 32-bit and Qt Creator 4.11.1 on Windows 10.
On a machine running Windows 8.1 with the same versions of Qt, MinGW and Qt Creator, the app has been working fine.
The first image loads perfectly, but when I change it, QPixmap::fromImage() returns a null QPixmap.
I have tried different ways to load the image: resizing the QPixmap (both to fixed dimensions and to the dimensions of the image),
saving the QImage to a file and loading it back from the file in PNG format, etc.
I have been able to verify that the images themselves are generated correctly, to rule out a problem when converting the raw data to a QImage.
Any idea what is happening? I can't post the code because the app is quite large, so I'll narrow it down to where the problem is.
My_class.h:
QPixmap pixmap;
My_class.cpp:
int w = num_celdas_imagen_x * 128;
int h = num_celdas_imagen_y * 128;
unsigned char *data = (unsigned char *)malloc(w * h * 3);
// filling data
QImage img = QImage(data, w, h, QImage::Format_RGB888);
// At this point I have been able to check that the image is generated correctly
this->pixmap = QPixmap::fromImage(img);
// At this point the QPixmap is null (except with the first image, which loads fine)
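One detail worth double-checking with this constructor: QImage(uchar *data, int w, int h, format) does not copy or take ownership of the buffer, so data must stay valid (and unmodified) for as long as the QImage, or anything created from it, still shares it. Whether that is the cause here is only a guess, but a minimal sketch of a conversion that forces a deep copy and passes the stride explicitly, reusing the variables from the snippet above:
int bytes_per_line = w * 3; // pass the real stride instead of relying on 32-bit aligned scanlines
QImage img = QImage(data, w, h, bytes_per_line, QImage::Format_RGB888);
this->pixmap = QPixmap::fromImage(img.copy()); // copy() detaches from the malloc'd buffer
free(data); // safe now: neither the copied image nor the pixmap references the buffer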
Thanks
Apologies if this is a duplicate; I've been searching with every combination of keywords I can think of but I can't find an article that addresses this.
I'm building a Python Tkinter application (Python 2.7.x, Win 7) which includes buttons with image data (drawn from XBM files). For example,
self._ResetIcon = tk.BitmapImage(file='ResetIcon.xbm')
self._Reset = tk.Button(self,
                        width=20, height=20,
                        image=self._ResetIcon,
                        command=self._reset)
Now I'm 99.9% sure there's a way to include the XBM image data as a declaration of some kind directly in the Python module itself, rather than pulling it from the external file. But I can't find anything online that describes how to do it.
Is there a way to do this?
Did some more digging via Google and found it.
http://effbot.org/tkinterbook/bitmapimage.htm
An X11 bitmap image consists of a C fragment that defines a width, a
height, and a data array containing the bitmap. To embed a bitmap in a
Python program, you can put it inside a triple-quoted string:
BITMAP = """
#define im_width 32
#define im_height 32
static char im_bits[] = {
0xaf,0x6d,0xeb,0xd6,0x55,0xdb,0xb6,0x2f,
0xaf,0xaa,0x6a,0x6d,0x55,0x7b,0xd7,0x1b,
0xad,0xd6,0xb5,0xae,0xad,0x55,0x6f,0x05,
0xad,0xba,0xab,0xd6,0xaa,0xd5,0x5f,0x93,
0xad,0x76,0x7d,0x67,0x5a,0xd5,0xd7,0xa3,
0xad,0xbd,0xfe,0xea,0x5a,0xab,0x69,0xb3,
0xad,0x55,0xde,0xd8,0x2e,0x2b,0xb5,0x6a,
0x69,0x4b,0x3f,0xb4,0x9e,0x92,0xb5,0xed,
0xd5,0xca,0x9c,0xb4,0x5a,0xa1,0x2a,0x6d,
0xad,0x6c,0x5f,0xda,0x2c,0x91,0xbb,0xf6,
0xad,0xaa,0x96,0xaa,0x5a,0xca,0x9d,0xfe,
0x2c,0xa5,0x2a,0xd3,0x9a,0x8a,0x4f,0xfd,
0x2c,0x25,0x4a,0x6b,0x4d,0x45,0x9f,0xba,
0x1a,0xaa,0x7a,0xb5,0xaa,0x44,0x6b,0x5b,
0x1a,0x55,0xfd,0x5e,0x4e,0xa2,0x6b,0x59,
0x9a,0xa4,0xde,0x4a,0x4a,0xd2,0xf5,0xaa
};
"""
To create X11 bitmaps, you can use the X11 bitmap editor provided with
most Unix systems, or draw your image in some other drawing program
and convert it to a bitmap using e.g. the Python Imaging Library.
The BitmapImage class can read X11 bitmaps from strings or text files:
bitmap = BitmapImage(data=BITMAP)
bitmap = BitmapImage(file="bitmap.xbm")
I am trying to "print" items that contain text onto a mono image.
I have items that either subclass QGraphicsTextItem or use painter->drawText() in some other class.
I can do this in Qt 4, but in Qt 5 all I get are some blobs. I have experimented with a small sample program, but I can't figure out what is going on.
Here is a simple self-contained program:
#include <QApplication>
#include <QGraphicsView>
#include <QGraphicsTextItem>

class TextItem : public QGraphicsItem
{
public:
    TextItem() {}
    virtual void paint(QPainter* painter, const QStyleOptionGraphicsItem*, QWidget* = NULL)
    {
        QPen p(Qt::red);
        painter->setPen(p);
        painter->setBrush(Qt::NoBrush);
        //painter->setFont(QFont("Arial", 30));
        painter->drawText(boundingRect(), Qt::AlignCenter, "abcd");
    }
    virtual QRectF boundingRect() const { return QRectF(0, 0, 30, 20); }
};

void processScene(QGraphicsScene* s) {
    QGraphicsTextItem* t = new QGraphicsTextItem();
    t->setPlainText("abcd");
    // t->setDefaultTextColor(Qt::red); // won't make a difference
    t->setPos(0, 0);
    s->addItem(t);
    //t->setFont(QFont("Arial", 30));
    //t->setTransform(t->transform().fromScale(3,3));

    TextItem* t1 = new TextItem();
    t1->setPos(0, 20);
    s->addItem(t1);

    QImage image = QImage(s->sceneRect().size().toSize(),
                          QImage::Format_Mono);
                          //QImage::Format_RGB32); // this works in qt5
    image.fill(QColor(Qt::color0).rgb());

    QPainter painter;
    painter.begin(&image);
    s->render(&painter);
    painter.end();
    image.save(QString("../test.bmp"));
}

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QGraphicsScene s;
    s.setSceneRect(0, 0, 50, 50);

    QGraphicsView view(&s);
    view.show();

    processScene(&s);
    return app.exec();
}
Running it in Qt 4 gives what is expected: it outputs a mono image containing the text, in both versions (using QGraphicsTextItem and using painter->drawText()) (left image).
The same code in Qt 5 only shows some blobs in place of the text (right image):
I have tried all kinds of experiments... Setting the font doesn't make a difference (except, for QGraphicsTextItem, setting the font to something big AND scaling by something big, but that is not useful in a practical application).
The other thing that works in Qt 5 is rendering to a color image instead of mono, but that is very bad for my application, where I need to store small mono images. My thought would then be, for Qt 5, to render the text to a color bitmap and then convert it to mono... But really!
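Something along these lines is what I have in mind; this is just a sketch, and the conversion flags are a guess on my part (the default dithering might be acceptable too):
QImage renderSceneToMono(QGraphicsScene* s)
{
    // Render to a colour image first, where Qt 5 text rendering works...
    QImage color(s->sceneRect().size().toSize(), QImage::Format_RGB32);
    color.fill(Qt::white);
    QPainter painter(&color);
    s->render(&painter);
    painter.end();
    // ...then reduce the result to 1 bit per pixel.
    return color.convertToFormat(QImage::Format_Mono,
                                 Qt::MonoOnly | Qt::ThresholdDither);
}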
I have also tried saving the items themselves to a bitmap, in a function implemented inside their class; it did the same thing.
I have tried setting the point size on the QFont before applying it.
Is there something else I can set on QPainter in order to draw text?
I found a related question, QT5 Text rendering issue, also asked again in the Qt forums: QT5 Text rendering issue. But that question refers to all rendering, and to a Qt 5 alpha on a MIPS-based platform... It implies that other platforms don't see this issue...
I'm using 5.5.0... I am also running under Windows... And rendering to a color image works... And...
I simply do not understand the "solution":
For time being, when we tried forcefully to enable Glyph cache, then
we are not seeing this issue.
Not sure what that means or how to do it.
Why does text paint as blobs on mono images (but fine on color images) in Qt 5, and what can I do to fix it?
Using Qt 5.5.0 32-bit, on Windows 7.
Edit:
1) I tested the code in 5.4.2 and it renders text correctly.
I tried to compare some of the relevant code in qpainter.cpp, qtextlayout.cpp, qpaintengine_raster.cpp and qtextengine.cpp, but with my limited understanding the changes between Qt 5.4.2 and Qt 5.5.0 look logical... and would not affect this behavior...
2) I did paint my text on a color QImage and then painted that onto the mono image. It worked fine for my own text items, but a third-party component that creates very small text gives me very bad (illegible) resolution when I try the same.
Sorry for the lengthy explanation; as I am new to OpenCV I want to give more details, with an example.
My requirement is to find the delta of two static images. For this I am using the following technique:
cv::Mat prevImg = cv::imread("prev.bmp");
cv::Mat currImg = cv::imread("curr.bmp");
cv::Mat deltaImg;
cv::absdiff(prevImg,currImg,deltaImg);
cv::namedWindow("image", CV_WINDOW_NORMAL);
cv::imshow("image", deltaImg);
In deltaImg I am getting the difference between the images, but it also includes the background of the first image. I know I have to remove the background using BackgroundSubtractorMOG2, but I am unable to understand how to use this class, as most of the examples are based on webcam captures.
Please note that my images are static (screenshots of desktop activity).
Please guide me in resolving this issue; some sample code would be helpful.
Note: I want to calculate the delta in RGB.
Detailed Explanation:
Images are at: https://picasaweb.google.com/105653560142316168741/OpenCV?authkey=Gv1sRgCLesjvLEjNXzZg#
prev.bmp: the previous screenshot of my desktop
curr.bmp: the current screenshot of my desktop
The delta between prev.bmp and curr.bmp should be the Start menu image only; please find the image below:
The delta image should contain only the Start menu, but it also contains the background image from prev.bmp; this background is what I want to remove.
Thanks in advance.
After computing cv::absdiff, your image contains non-zero values for each pixel that changed its value, so you want to use all image regions that changed.
cv::Mat deltaImg;
cv::absdiff(currImg, prevImg, deltaImg);
cv::Mat grayscale;
cv::cvtColor(deltaImg, grayscale, CV_BGR2GRAY);
// create a mask that includes all pixels that changed their value
cv::Mat mask = grayscale > 0;
cv::Mat output;
currImg.copyTo(output, mask);
Here are sample images: previous, current, mask, and output, plus the deltaImg before the mask is computed.
Problems occur if foreground pixels have the same value as the background pixels but belong to some other 'object'. You can use the cv::dilate operator (followed by cv::erode) to fill single-pixel gaps, as sketched below. Or you might want to extract just the rectangle of the Start menu if you are not interested in all the other parts of the image that changed, too.
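A minimal sketch of that dilate/erode clean-up, applied to the mask from above (the 3x3 kernel size is an assumption; tune it for your images):
// close single-pixel holes in the change mask before using it
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::dilate(mask, mask, kernel);
cv::erode(mask, mask, kernel);
currImg.copyTo(output, mask); // re-apply the cleaned-up mask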
I just want to display this "img1.jpg" image in a C++ project using the OpenCV libraries, for further processing, but it only displays an empty gray window. What is the reason for this? Is there a mistake in this code? Please help!
Here is the code:
Mat img1;
char imagePath[256] = "img1.jpg";
img1 = imread(imagePath, CV_LOAD_IMAGE_GRAYSCALE);
namedWindow("result", 1);
imshow("result", img1);
Thanks...
I had the same problem and solved it by putting waitKey(1); after imshow(). The OpenCV documentation explains why:
This function is the only method in HighGUI that can fetch and handle
events, so it needs to be called periodically for normal event
processing, unless HighGUI is used within some environment that takes
care of event processing.
Thanks @b_froz.
For more details about this issue, you can refer to: http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#imshow
Note This function should be followed by waitKey function which displays the image for specified milliseconds. Otherwise, it won’t display the image. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame for 25 ms, after which display will be automatically closed. (If you put it in a loop to read videos, it will display the video frame-by-frame)
So not only waitKey(1) can be put after imshow(), but also waitKey(0) or waitKey(other integers). Here is the explanation of the waitKey() function: http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#waitkey
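Putting it together with the snippet from the question, a minimal sketch (assuming the usual "using namespace cv;" is in scope):
Mat img1 = imread("img1.jpg", CV_LOAD_IMAGE_GRAYSCALE);
namedWindow("result", 1);
imshow("result", img1);
waitKey(0); // pump GUI events and block until a key is pressed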
Are you including the correct libraries?
Here is another very easy way to load an image:
#define CV_NO_BACKWARD_COMPATIBILITY
#include <cv.h>
#include <highgui.h>
#include <math.h>
int main() {
    IplImage* img = cvLoadImage("yourPicture.jpg");
    cvNamedWindow("Original", 1);
    cvShowImage("Original", img);
    cvWaitKey(0); // needed here as well, otherwise the window will not show the image
    return 0;
}
I think you have OpenCV correctly installed, so you can compile it like this (Ubuntu):
g++ NameOfYourProgram.cpp -o Sample -I/usr/local/include/opencv/ -L/usr/local/lib -lcv -lhighgui
and then run it with ./Sample
The problem you are having is due to the type of your Mat img1. When you load your image with the flag CV_LOAD_IMAGE_GRAYSCALE, the type of your Mat is 0 (CV_8UC1), and the function imshow() is not able to show the image correctly.
You can solve this by converting your Mat to type 16 (CV_8UC3):
cvtColor(img1, img1, CV_GRAY2BGR); // convertTo() keeps the channel count, so use cvtColor to get a 3-channel image
and then show it with imshow():
imshow("result", img1);
I hope this helps.
I'm currently using FreeImage to load PFMs into a program that otherwise uses IplImages (the old data type for OpenCV). Here's a sample of what I'm doing (ignore the part about img being an array of Mats, that's related to some other code).
FIBITMAP *src;
// Load a PFM file using freeimage
src = FreeImage_Load(FIF_PFM, "test0.pfm", 0);
Mat* img;
img = new Mat[3];
// Create a copy of the image in an OpenCV matrix (using .clone() copies the data)
img[1] = Mat(FreeImage_GetHeight(src), FreeImage_GetWidth(src), CV_32FC3, FreeImage_GetScanLine(src, 0)).clone();
// Flip the image vertically because OpenCV row ordering is the reverse of FreeImage's
flip(img[1], img[1], 0);
// Save a copy
imwrite("OpenCV_converted_image.jpg", img[1]);
What's strange is that if I use FreeImage to load JPEGs instead by changing FIF_PFM to FIF_JPEG and CV_32FC3 to CV_8U, this works fine, i.e. the copied picture comes out unchanged. This makes me think that OpenCV and FreeImage generally agree on the ordering of RGB channels, and that the problem is related to PFMs specifically and their being a non-standardized format.
The PFMs I'm loading were written with this code (under "Local Histogram Equalization"), which appears to write them in RGB order although I could be wrong about that. It just takes the data from a MATLAB 3D matrix of doubles and dumps it into a file using fwrite. Also, if I modify that code to write PPMs instead, then view them in IrfanView, they look correct.
So that leaves me thinking FreeImage is taking the file data to be BGR-ordered on disk already, which it is not, and should not be.
Any thoughts? Is there an error in FreeImage's reading of PFMs, or is there something more subtle going on here? Thanks.
Well, I never really got this one sorted out; long story short, FreeImage and OpenCV agree on color channel order (BGR) when loading most image formats, but not when loading PFMs. I can only assume that the makers of FreeImage have therefore misinterpreted the admittedly not very solidified specs for PFMs. Since I was only using FreeImage to read/write PFMs, and it was proving quite complicated to get data back into a FreeImage structure after processing with OpenCV functions, I wrote my own PFM read/write code which turned out to be very simple.
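For reference, a minimal sketch of what such a PFM reader can look like (assumptions: colour "PF" files only, negative/little-endian scale factor, no error handling; the helper name readPFM is hypothetical, not the code actually used):
#include <opencv2/opencv.hpp>
#include <cstdio>

// Read a colour PFM ("PF") into a 32-bit float, BGR cv::Mat.
cv::Mat readPFM(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    char magic[3] = {0};
    int w = 0, h = 0;
    float scale = 0.0f;
    std::fscanf(f, "%2s %d %d %f", magic, &w, &h, &scale); // scale < 0 means little-endian data
    std::fgetc(f); // consume the single whitespace byte before the raster

    // The raster is w*h RGB float triplets, rows stored bottom-to-top.
    cv::Mat rgb(h, w, CV_32FC3);
    std::fread(rgb.data, sizeof(float), (size_t)w * h * 3, f);
    std::fclose(f);

    cv::Mat bgr;
    cv::cvtColor(rgb, bgr, CV_RGB2BGR); // OpenCV convention is BGR
    cv::flip(bgr, bgr, 0);              // undo the bottom-to-top row order
    return bgr;
}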