My program consists of two functions: first, a user clicks a button (btn_image) to upload an image from the desktop, and it is displayed on a label (lbl_image). Second, the user pushes another button (cnv_image) to convert that uploaded image to black and white.
I have managed to implement the first function: the image chosen by the user displays successfully. However, I am confused about how to convert that image to black and white. I wrote a function that is triggered by clicking the cnv_image button, but the problem is how to refer to the uploaded image. When I click the cnv_image button, the uploaded image simply disappears.
I tried to use image.load(ui->lbl_image) to refer to the label that contains the image, but it produces an error.
How can I implement my second function?
void MainWindow::on_btn_image_clicked()
{
QString fileName = QFileDialog::getOpenFileName(this, tr("Choose"), "", tr("Images (*.png *.jpg *.jpeg)"));
if (QString::compare(fileName, QString()) != 0) {
QImage image;
bool valid = image.load(fileName);
if (valid) {
ui->lbl_image->setPixmap(QPixmap::fromImage(image));
}
}
}
void MainWindow::on_cnv_image_clicked()
{
QImage image;
image.load(ui->lbl_image);
QSize sizeImage = image.size();
int width = sizeImage.width(), height = sizeImage.height();
QRgb color;
for (int f1=0; f1<width; f1++) {
for (int f2=0; f2<height; f2++) {
int gray = qGray(color);
image.setPixel(f1, f2, qRgb(gray, gray, gray));
}
}
ui->lbl_image->setPixmap(QPixmap::fromImage(image));
}
I updated your code: I added QImage image; as a private member of the MainWindow class, so in mainwindow.h:
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
QT_BEGIN_NAMESPACE
namespace Ui
{
class MainWindow;
}
QT_END_NAMESPACE
class MainWindow: public QMainWindow
{
Q_OBJECT
public:
MainWindow(QWidget *parent = nullptr);
~MainWindow();
private slots:
void on_btn_image_clicked();
void on_cnv_image_clicked();
private:
Ui::MainWindow *ui;
QImage image;
};
#endif // MAINWINDOW_H
and in the on_cnv_image_clicked() function:
void MainWindow::on_cnv_image_clicked()
{
QSize sizeImage = image.size();
int width = sizeImage.width(), height = sizeImage.height();
QRgb color;
for (int f1 = 0; f1 < width; f1++)
{
for (int f2 = 0; f2 < height; f2++)
{
color = image.pixel(f1, f2);
int gray = (qRed(color) + qGreen(color) + qBlue(color)) / 3;
image.setPixel(f1, f2, qRgb(gray, gray, gray));
}
}
ui->lbl_image->setPixmap(QPixmap::fromImage(image));
ui->lbl_image->setScaledContents(true);
}
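Note that for this to work, on_btn_image_clicked() must load into the QImage member rather than into a local variable, otherwise the member stays empty and the conversion has nothing to work on. A minimal sketch of the adjusted slot (same logic as the original, just without the local QImage):
void MainWindow::on_btn_image_clicked()
{
    QString fileName = QFileDialog::getOpenFileName(this, tr("Choose"), "", tr("Images (*.png *.jpg *.jpeg)"));
    if (!fileName.isEmpty()) {
        // load into the image member declared in mainwindow.h
        if (image.load(fileName)) {
            ui->lbl_image->setPixmap(QPixmap::fromImage(image));
        }
    }
}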
Result :
Welcome to Stack Overflow!
First of all, it's a good idea to keep a copy of the QImage in your class when you load it. It helps to avoid extra conversions from QPixmap to QImage in later steps. I'll skip that here because it's outside the scope of your question.
You can use QImage::convertTo to convert the format of a QImage in place. That means it does not create a new QImage. As per the documentation, it may detach the QImage. You can read more about Implicit Sharing if you are interested.
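As a quick illustration of implicit sharing (this small snippet is only illustrative; original here stands for any existing QImage):
QImage copy = original;              // shallow copy: both objects share the pixel data
copy.setPixel(0, 0, qRgb(0, 0, 0));  // copy detaches here; original is unchanged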
So, the implementation should be something like:
void MainWindow::on_cnv_image_clicked()
{
QImage image = ui->lbl_image->pixmap().toImage();
image.convertTo(QImage::Format_Grayscale8);
ui->lbl_image->setPixmap(QPixmap::fromImage(image));
}
Take a look at the list of QImage::Formats to evaluate other grayscale/mono options.
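For instance, a dithered 1-bit black-and-white conversion (rather than 8-bit grayscale) only needs a different format value; this is just a variation of the snippet above:
image.convertTo(QImage::Format_Mono);   // 1 bit per pixel, dithered black & white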
QDockWidget has a feature where you can double-click on the title bar and the dock will toggle to a floating window and back to its docked state. The problem is that if you move and resize the floating window, then toggle back to the docked state and then back to floating again, your position and size are lost.
I've looked for solutions to resize and move a QDockWidget, and I've taken a look at the Qt source code for QDockWidget. I've created a small subclass of QDockWidget that appears to solve the problem. It handles the mouse double-click, resize and move events, filtering out the unwanted "random" resizing and positioning by Qt, and stores the screen, position and size in a struct that I keep in QSettings for persistence between sessions (sketched after the code below).
// header
#include <QDockWidget>
#include "global.h"
class DockWidget : public QDockWidget
{
Q_OBJECT
public:
DockWidget(const QString &title, QWidget *parent = nullptr);
QSize sizeHint() const;
void rpt(QString s);
struct DWLoc {
int screen;
QPoint pos;
QSize size;
};
DWLoc dw;
bool ignore;
protected:
bool event(QEvent *event);
void resizeEvent(QResizeEvent *event);
void moveEvent(QMoveEvent *event);
};
// cpp
#include "dockwidget.h"
DockWidget::DockWidget(const QString &title, QWidget *parent)
: QDockWidget(title, parent)
{
ignore = false;
}
bool DockWidget::event(QEvent *event)
{
if (event->type() == QEvent::MouseButtonDblClick) {
ignore = true;
setFloating(!isFloating());
if (isFloating()) {
// move and size to previous state
QRect screenres = QApplication::desktop()->screenGeometry(dw.screen);
move(QPoint(screenres.x() + dw.pos.x(), screenres.y() + dw.pos.y()));
ignore = false;
adjustSize();
}
ignore = false;
return true;
}
QDockWidget::event(event);
return true;
}
void DockWidget::resizeEvent(QResizeEvent *event)
{
if (ignore) {
return;
}
if (isFloating()) {
dw.screen = QApplication::desktop()->screenNumber(this);
QRect r = geometry();
QRect a = QApplication::desktop()->screen(dw.screen)->geometry();
dw.pos = QPoint(r.x() - a.x(), r.y() - a.y());
dw.size = event->size();
}
}
QSize DockWidget::sizeHint() const
{
return dw.size;
}
void DockWidget::moveEvent(QMoveEvent *event)
{
if (ignore || !isFloating()) return;
dw.screen = QApplication::desktop()->screenNumber(this);
QRect r = geometry();
QRect a = QApplication::desktop()->screen(dw.screen)->geometry();
dw.pos = QPoint(r.x() - a.x(), r.y() - a.y());
dw.size = QSize(r.width(), r.height());
}
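The QSettings part isn't shown above; roughly, the DWLoc struct gets written out and read back like this (the function and key names are only illustrative, and QSettings needs to be included):
// call when the application shuts down
void DockWidget::saveLocation(QSettings &settings) const
{
    settings.setValue("dock/screen", dw.screen);
    settings.setValue("dock/pos", dw.pos);
    settings.setValue("dock/size", dw.size);
}
// call at startup, before the dock is first floated
void DockWidget::restoreLocation(QSettings &settings)
{
    dw.screen = settings.value("dock/screen", 0).toInt();
    dw.pos = settings.value("dock/pos", QPoint(0, 0)).toPoint();
    dw.size = settings.value("dock/size", QSize(400, 300)).toSize();
}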
While this appears to be working, is there a simpler way to accomplish this? And what do I do if a QDockWidget was on a screen that is now turned off?
Currently I am working on displaying a gray-level image with a zoom feature. I am able to get the position of the pixel, and the zoom feature is working well. However, I encountered two problems:
1.) How can I get the grey-level value of the pixel pointed to by the mouse? I only managed to obtain the RGB value through QRgb rgbValue = pix.toImage().pixel(x,y). How can I convert it into a grey-level value? Or is there any direct way to get the grey-level value of the pixel?
2.) I have implemented mouseMoveEvent(QMouseEvent *event) and called setMouseTracking(true). However, mouseMoveEvent(QMouseEvent *event) is not being called when I move the mouse. What is wrong with my code?
mainwindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QGraphicsScene>
#include <QGraphicsItem>
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
~MainWindow();
private slots:
void on_pushButton_clicked();
protected:
void mouseMoveEvent(QMouseEvent * event);
private:
Ui::MainWindow *ui;
QGraphicsScene* scene;
QGraphicsItem* item;
QPixmap pix;
};
#endif // MAINWINDOW_H
mainwindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
QImage image("E:/image_00002.bmp");
pix = QPixmap::fromImage(image);
scene = new QGraphicsScene(this);
ui->graphicsView->setScene(scene);
scene->addPixmap(pix);
}
MainWindow::~MainWindow()
{
delete ui;
}
void MainWindow::on_pushButton_clicked()
{
ui->graphicsView->setMouseTracking(true);
}
void MainWindow::mouseMoveEvent(QMouseEvent *event)
{
QPoint local_pt = ui->graphicsView->mapFromGlobal(event->globalPos());
QPointF img_coord_pt = ui->graphicsView->mapToScene(local_pt);
double x = img_coord_pt.x();
double y = img_coord_pt.y();
/* How can I get a gray level image here */
QRgb rgbValue = pix.toImage().pixel(x,y);
ui->label_X->setText(QString::number(x));
ui->label_Y->setText(QString::number(y));
ui->label_Value->setText(QString::number(rgbValue));
}
To elaborate a bit on what hank said: in order for your QMainWindow to receive the events from the QGraphicsScene, you need to install an event filter (see http://qt-project.org/doc/qt-4.8/qobject.html#installEventFilter). Quoting from the docs:
An event filter is an object that receives all events that are sent to this object.
The filter can either stop the event or forward it to this object.
In order to process the events, you need to define an eventFilter() method in your main window class. Below are the on_pushButton_clicked() and eventFilter() methods that should do what you want:
void MainWindow::on_pushButton_clicked()
{
ui->graphicsView->setMouseTracking(true);
scene->installEventFilter(this);
}
bool MainWindow::eventFilter(QObject *obj, QEvent *event)
{
if (event->type() == QEvent::GraphicsSceneMouseMove ) {
QGraphicsSceneMouseEvent *mouseEvent = static_cast<QGraphicsSceneMouseEvent*>(event);
QPointF img_coord_pt = mouseEvent->scenePos();
double x = img_coord_pt.x();
double y = img_coord_pt.y();
QColor color = QColor(pix.toImage().pixel(x,y));
int average = (color.red()+color.green()+color.blue())/3;
ui->label_X->setText(QString::number(x));
ui->label_Y->setText(QString::number(y));
ui->label_Value->setText(QString::number(average));
return true;
} else {
return QObject::eventFilter(obj, event);
}
}
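As an aside (this is an addition, not part of the filter above): if you prefer a standard luminance value over a plain average, Qt's qGray() helper computes a weighted gray value directly from a QRgb, so the averaging step could be replaced with something like:
int gray = qGray(pix.toImage().pixel(x, y));   // weighted RGB-to-gray conversion
ui->label_Value->setText(QString::number(gray));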
Does this help?
Qt developers!
Is there a way to add an image to the background of my mdiArea, like in the picture below?
I know I can use something like this
QImage img("logo.jpg");
mdiArea->setBackground(img);
But I don't want my image to be tiled (repeated) across the background.
Thank you!
As I said in my comment above, you can subclass QMdiArea, override its paintEvent() function and draw your logo image yourself (in the bottom right corner). Here is sample code that implements this idea:
class MdiArea : public QMdiArea
{
public:
MdiArea(QWidget *parent = 0)
:
QMdiArea(parent),
m_pixmap("logo.jpg")
{}
protected:
void paintEvent(QPaintEvent *event)
{
QMdiArea::paintEvent(event);
QPainter painter(viewport());
// Calculate the logo position - the bottom right corner of the mdi area.
int x = width() - m_pixmap.width();
int y = height() - m_pixmap.height();
painter.drawPixmap(x, y, m_pixmap);
}
private:
// Store the logo image.
QPixmap m_pixmap;
};
And finally use the custom mdi area in the main window:
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
QMainWindow mainWindow;
QMdiArea *mdiArea = new MdiArea(&mainWindow);
mainWindow.setCentralWidget(mdiArea);
mainWindow.show();
return app.exec();
}
I displayed a picture on a QLabel and wanted to get the coordinates and paint a point on the image on a mouse click event. I am able to get the coordinates, but the painter is painting the point behind my image on the label; I want it on top of my image.
My code is :
main.cpp
#include "imageviewer.h"
#include <QApplication>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
imageviewer w;
w.showMaximized();
return a.exec();
}
imageviewer.h
#ifndef IMAGEVIEWER_H
#define IMAGEVIEWER_H
#include <QLabel>
#include <QPushButton>
class imageviewer : public QLabel
{
Q_OBJECT
public:
explicit imageviewer(QWidget *parent = 0);
private slots:
void mousePressEvent(QMouseEvent * e);
void paintEvent(QPaintEvent * e);
private:
QLabel *label1 ;
int mFirstX;
int mFirstY;
bool mFirstClick;
bool mpaintflag;
};
#endif
imageviewer.cpp
#include <QtGui>
#include <QHBoxLayout>
#include <QVBoxLayout>
#include "imageviewer.h"
#include <QDebug>
imageviewer::imageviewer(QWidget *parent)
: QLabel(parent)
{
label1 = new QLabel;
label1->setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
QPixmap pm1("/home/nishu/Pictures/img_0002.jpg");
label1->setPixmap(pm1);
label1->adjustSize();
label1->setScaledContents(true);
QHBoxLayout *hlayout1 = new QHBoxLayout;
hlayout1->addWidget(label1);
setLayout(hlayout1);
}
void imageviewer :: mousePressEvent(QMouseEvent *e)
{
mFirstX=0;
mFirstY=0;
mFirstClick=true;
mpaintflag=false;
if(e->button() == Qt::LeftButton)
{
//store 1st point
if(mFirstClick)
{
mFirstX = e->x();
mFirstY = e->y();
mFirstClick = false;
mpaintflag = true;
qDebug() << "First image's coordinates" << mFirstX << "," << mFirstY ;
update();
}
}
}
void imageviewer :: paintEvent(QPaintEvent * e)
{
QLabel::paintEvent(e);
if(mpaintflag)
{
QPainter painter(this);
QPen paintpen(Qt::red);
paintpen.setWidth(10);
QPoint p1;
p1.setX(mFirstX);
p1.setY(mFirstY);
painter.setPen(paintpen);
painter.drawPoint(p1);
}
}
Can you help me sort out what exactly the problem is?
With the line QPainter painter(this); you set the QPainter to draw on your main widget instead of on the QLabel's pixmap. Change the block to this and it will work:
if(mpaintflag)
{
QImage tmp(label1->pixmap()->toImage());
QPainter painter(&tmp);
QPen paintpen(Qt::red);
paintpen.setWidth(10);
QPoint p1;
p1.setX(mFirstX);
p1.setY(mFirstY);
painter.setPen(paintpen);
painter.drawPoint(p1);
label1->setPixmap(QPixmap::fromImage(tmp));
}
EDIT:
Just noticed that you derived from QLabel, not from QWidget as I automatically assumed, looking at the layout. Indeed, you don't need label1 and the layout inside your imageviewer class. The whole point of subclassing is that you implement the behavior and filter events the way you want, and then add the widget to the main widget if needed.
EDIT2:
The imageviewer class should be derived from QLabel; remove label1 and the layout, and paint not on the image but on the imageviewer itself, i.e. this. Then you need to add a new class to your program, derived from QMainWindow or QWidget for example, where you include your imageviewer class, create a layout and add your class to it like this:
#include "imageviewer.h"
//.... somewhere in constructor ....
imageviewer *viewer1 = new imageviewer(this); // create a new imageviewer object
viewer1->setPixmap(...);
hlayout1->addWidget(viewer1);
You derived your class from QLabel, so you should not create another QLabel *label1 and put it inside the label's layout. That doesn't make any sense. Why would anyone put a label into a label? You need to remove label1 and use the imageviewer object as a label instead. Your constructor should contain only the following code:
setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
QPixmap pm1(...);
setPixmap(pm1);
adjustSize();
setScaledContents(true);
I've checked that it fixes your problem.
I've been fighting with this problem for a long time.
I can't get OpenCV to work, even though I have followed a lot of tutorials about it and about how to use it in Qt, so I'm tired of it and want to avoid using OpenCV for this.
Now, my requirement or question: I need to show a webcam feed (real-time video, without audio) in a Qt GUI application with only one button, "Take Snapshot", which, obviously, takes a picture from the current feed and stores it.
That's all.
Is there anyway to get this done without using OpenCV ?
System specification:
Qt 4.8
Windows XP 32 bits
USB 2.0.1.3M UVC WebCam (the one I'm using now, it should support other models too)
I hope somebody can help me with this, because it's driving me crazy.
Thanks in advance!
OK, I finally did it, so I will post my solution here so we have something clear about this.
I used a library called 'ESCAPI': http://sol.gfxile.net/escapi/index.html
It provides an extremely easy way to capture frames from the device. With the raw data, I just create a QImage which I later show in a QLabel.
I created a simple object to handle this.
#include <QDebug>
#include "camera.h"
Camera::Camera(int width, int height, QObject *parent) :
QObject(parent),
width_(width),
height_(height)
{
capture_.mWidth = width;
capture_.mHeight = height;
capture_.mTargetBuf = new int[width * height];
int devices = setupESCAPI();
if (devices == 0)
{
qDebug() << "[Camera] ESCAPI initialization failure or no devices found";
}
}
Camera::~Camera()
{
deinitCapture(0);
}
int Camera::initialize()
{
if (initCapture(0, &capture_) == 0)
{
qDebug() << "[Camera] Capture failed - device may already be in use";
return -2;
}
return 0;
}
void Camera::deinitialize()
{
deinitCapture(0);
}
int Camera::capture()
{
doCapture(0);
while(isCaptureDone(0) == 0);
image_ = QImage(width_, height_, QImage::Format_ARGB32);
for(int y(0); y < height_; ++y)
{
for(int x(0); x < width_; ++x)
{
int index(y * width_ + x);
image_.setPixel(x, y, capture_.mTargetBuf[index]);
}
}
return 1;
}
And the header file:
#ifndef CAMERA_H
#define CAMERA_H
#include <QObject>
#include <QImage>
#include "escapi.h"
class Camera : public QObject
{
Q_OBJECT
public:
explicit Camera(int width, int height, QObject *parent = 0);
~Camera();
int initialize();
void deinitialize();
int capture();
const QImage& getImage() const { return image_; }
const int* getImageRaw() const { return capture_.mTargetBuf; }
private:
int width_;
int height_;
struct SimpleCapParams capture_;
QImage image_;
};
#endif // CAMERA_H
It's very simple and just for example purposes.
Usage would be something like:
Camera cam(320, 240);
cam.initialize();
cam.capture();
QImage img(cam.getImage());
ui->label->setPixmap(QPixmap::fromImage(img));
Of course, you can use a QTimer to update the frame in the QLabel and you will have video there...
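A rough sketch of that QTimer idea (Qt 4 style connect; the cam_ member, the updateFrame() slot and the 33 ms interval are placeholders, not part of the code above):
// in the window's constructor (requires #include <QTimer>)
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(updateFrame()));
timer->start(33);   // roughly 30 frames per second

// hypothetical slot that grabs one frame and shows it
void MainWindow::updateFrame()
{
    cam_.capture();
    ui->label->setPixmap(QPixmap::fromImage(cam_.getImage()));
}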
Hope it helps! And thanks, Nicholas, for your help!