I want to write formatted text to an image, but OpenCV offers only a limited set of default fonts. Is it possible to use others? For example, can they be read from a *.ttf file (on Ubuntu)?
If you cannot or don't want to use the Qt bindings, here is a way to do it with Cairo:
#include <opencv2/opencv.hpp>
#include <cairo/cairo.h>
#include <string>

void putTextCairo(
        cv::Mat &targetImage,
        std::string const& text,
        cv::Point2d centerPoint,
        std::string const& fontFace,
        double fontSize,
        cv::Scalar textColor,
        bool fontItalic,
        bool fontBold)
{
    // Create a Cairo surface the same size as the target image
    cairo_surface_t* surface =
            cairo_image_surface_create(
                CAIRO_FORMAT_ARGB32,
                targetImage.cols,
                targetImage.rows);
    cairo_t* cairo = cairo_create(surface);

    // Wrap the Cairo surface buffer with a Mat (no copy)
    cv::Mat cairoTarget(
            cairo_image_surface_get_height(surface),
            cairo_image_surface_get_width(surface),
            CV_8UC4,
            cairo_image_surface_get_data(surface),
            cairo_image_surface_get_stride(surface));

    // Put the image onto the Cairo surface
    cv::cvtColor(targetImage, cairoTarget, cv::COLOR_BGR2BGRA);

    // Set the font and write the text
    cairo_select_font_face(
            cairo,
            fontFace.c_str(),
            fontItalic ? CAIRO_FONT_SLANT_ITALIC : CAIRO_FONT_SLANT_NORMAL,
            fontBold ? CAIRO_FONT_WEIGHT_BOLD : CAIRO_FONT_WEIGHT_NORMAL);
    cairo_set_font_size(cairo, fontSize);
    // Cairo expects color components in [0, 1]; OpenCV scalars are BGR in [0, 255]
    cairo_set_source_rgb(cairo, textColor[2] / 255.0, textColor[1] / 255.0, textColor[0] / 255.0);

    cairo_text_extents_t extents;
    cairo_text_extents(cairo, text.c_str(), &extents);
    cairo_move_to(
            cairo,
            centerPoint.x - extents.width / 2 - extents.x_bearing,
            centerPoint.y - extents.height / 2 - extents.y_bearing);
    cairo_show_text(cairo, text.c_str());

    // Copy the data back to the output image
    cv::cvtColor(cairoTarget, targetImage, cv::COLOR_BGRA2BGR);

    cairo_destroy(cairo);
    cairo_surface_destroy(surface);
}
Example call:
putTextCairo(mat, "Hello World", cv::Point2d(50,50), "arial", 15, cv::Scalar(0,0,255), false, false);
It assumes that the target image is BGR.
It centers the text at the given point. If you want different positioning, you have to modify the cairo_move_to call.
It's possible to use other fonts, but you need to build OpenCV with Qt support and use the cvAddText function with a cvFontQt:
http://docs.opencv.org/modules/highgui/doc/qt_new_functions.html#addtext
http://docs.opencv.org/modules/highgui/doc/qt_new_functions.html#fontqt
There are other solutions you might try, with roughly the same performance as OpenCV. For instance, you can use Cairo to render fonts into the image.
Related
I need a way to overlay text on an OpenCV image without actually modifying the underlying image. For example, inside a while loop I create a random point on each iteration and want to display it. If I use putText, the matrix gets overwritten with the added text on each cycle.
My question is how to overlay text on an OpenCV image without modifying the underlying matrix. I know I could keep a temporary copy of the original image and restore it every time, but I want to avoid that.
My attempt (which fails) is below:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main(int argc, char * argv[])
{
    RNG rng( 0xFFFFFFFF );
    cv::Mat image(480, 640, CV_8UC3, cv::Scalar(0,255,0));
    int fontFace = FONT_HERSHEY_COMPLEX_SMALL;
    double fontScale_small=1.5;
    double fontScale_large=10.;
    std::string text="X";
    Point p;
    while(1)
    {
        p.x = rng.uniform( 0, 639 );
        p.y = rng.uniform( 0, 479 );
        putText(image, "X", p, fontFace, 1, Scalar(0,0,255), 2);
        imshow("IMAGE", image);
        waitKey(1);
    }
    return 0;
}
If you are on OpenCV 3+ and have built OpenCV with Qt, then you can use the highgui functions on the Qt GUI window to overlay text, instead of actively modifying your image. See displayOverlay(). You can also simply change the status bar text, which is sometimes more useful (so that you don't cover the image or have to deal with color clashes) with displayStatusBar(). However, note that displayOverlay() does not allow you to give specific positions for your text.
Without QT support, I don't believe this is possible in OpenCV. You'll need to either modify your image via putText() or addText(), or roll your own GUI window that you can overlay text on.
You can draw your text on a black image of the same size, then XOR it with the source image to display the text; a second XOR with the same text image will clear the text and restore the source exactly.
I didn't know what title would correctly describe my problem; I hope this one is not confusing.
I started my adventure with OpenCV a few days ago. So far I have managed to find a chessboard in a live stream from my webcam and display a resized image on it. My next goal is to make the program rotate the image while I rotate the chessboard. Unfortunately I have no idea how to do that; I have looked at many code samples and examples, but none of them helped. My ultimate goal is to do something like this: http://www.youtube.com/watch?v=APxgPYZOd0I (I can't get anything from his code; he uses Qt, which I have only met once and am not interested in - yet).
Here is my code:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

vector<Point3f> Create3DChessboardCorners(Size boardSize, float squareSize);

int main(int argc, char* argv[])
{
    Size boardSize(6,9);
    float squareSize=1.f;
    namedWindow("Viewer");
    namedWindow("zdjecie");
    namedWindow("changed");
    Mat zdjecie=imread("D:\\Studia\\Programy\\cvTest\\Debug\\przyklad.JPG");
    resize(zdjecie, zdjecie, Size(200,150));
    Mat changed=Mat(zdjecie);
    imshow("zdjecie", changed);
    vector<Point2f> corners;
    VideoCapture video(0);
    cout<<"video height: "<<video.get(CV_CAP_PROP_FRAME_HEIGHT)<<endl;
    cout<<"video width: "<<video.get(CV_CAP_PROP_FRAME_WIDTH)<<endl;
    Mat frame;
    bool found;
    Point2f src[4];
    Point2f dst[4];
    Mat perspMat;
    while(1)
    {
        video>>frame;
        found=findChessboardCorners(frame, boardSize, corners, CALIB_CB_FAST_CHECK);
        changed=Mat(zdjecie);
        // drawChessboardCorners(frame, boardSize, Mat(corners), found);
        if(found)
        {
            line(frame, corners[0], corners[5], Scalar(0,0,255));
            line(frame, corners[0], corners[48], Scalar(0,0,255));
            src[0].x=0;
            src[0].y=0;
            src[1].x=zdjecie.cols;
            src[1].y=0;
            src[2].x=zdjecie.cols;
            src[2].y=zdjecie.rows;
            src[3].x=0;
            src[3].y=zdjecie.rows;
            dst[0].x=corners[0].x;
            dst[0].y=corners[0].y;
            dst[1].x=corners[boardSize.width-1].x;
            dst[1].y=corners[boardSize.width-1].y;
            dst[2].x=corners[boardSize.width*boardSize.height-1].x;
            dst[2].x=corners[boardSize.width*boardSize.height-1].y;
            dst[3].x=corners[boardSize.width*(boardSize.height-1)].x;
            dst[3].y=corners[boardSize.width*(boardSize.height-1)].y;
            perspMat=getPerspectiveTransform(src, dst);
            warpPerspective(zdjecie, changed, perspMat, frame.size());
        }
        imshow("changed", changed);
        imshow("Viewer", frame);
        if(waitKey(20)!=-1)
            break;
    }
    return 0;
}
I was trying to understand this code: http://dsynflo.blogspot.com/2010/06/simplar-augmented-reality-for-opencv.html
but nothing helped. It didn't even work for me: the image from my webcam was inverted, frames were changing only every few seconds, and nothing was being displayed.
So what I am asking for is not a whole solution. If someone explained how to approach it, I'd be glad. I want to understand it from the basics, and I just don't know where to go now. I have spent a lot of time trying to solve it; otherwise I would not bother you with my problem.
I'm looking forward to your answers!
Greetings,
Daniel
EDIT:
I changed the code. Now you can see how I try to warp the perspective of my image. At first I thought the reason my image Mat changed (it was Mat krzywe; I renamed it to avoid confusion) comes out black after calling warpPerspective was that I wasn't starting the warp from the original photo each time, so I added the line
changed=Mat(zdjecie);
I guess my problem is pretty simple to solve, but I really have no idea right now.
As I see it, there are two problems. First, the coordinates you are warping from and to are wrong. The source coordinates are simply the corners of the source image:
src[0].x=0;
src[0].y=0;
src[1].x=zdjecie.cols;
src[1].y=0;
src[2].x=zdjecie.cols;
src[2].y=zdjecie.rows;
src[3].x=0;
src[3].y=zdjecie.rows;
The destination coordinates must be the corner points of the found chessboard points. Which points are the corner points changes with the size of the chessboard. Since my chessboard had different dimensions, I tried to make it adaptive by taking the size of the board into account:
dst[0].x=corners[0].x;
dst[0].y=corners[0].y;
dst[1].x=corners[ boardSize.width-1 ].x;
dst[1].y=corners[ boardSize.width-1 ].y;
dst[2].x=corners[ boardSize.width * boardSize.height - 1 ].x;
dst[2].y=corners[ boardSize.width * boardSize.height - 1 ].y;
dst[3].x=corners[ boardSize.width * (boardSize.height-1) ].x;
dst[3].y=corners[ boardSize.width * (boardSize.height-1) ].y;
The second problem is that you warp into an image that has only the size of the source image, but it needs to have the size of the target image:
warpPerspective(zdjecie, changed, perspMat, frame.size());
Another thing I found peculiar is that your imshow("Viewer", frame); call is inside the if clause. This means the image is only updated when a chessboard is found. I am not sure if that was intended.
Now you should have one window showing the video and another window showing the transformed source image. Your next step would now be to blend those both images together.
Update:
Here is how I merge the two images:
Mat mask = changed > 0;
Mat merged = frame.clone();
changed.copyTo(merged,mask);
The mask matrix is true for all pixels that are not zero in the warped image. Then all non-zero pixels from the warped image are copied into the frame.
I have a binary file containing the 16-bit intensities of an image, which I have read into a short array. I create a 16-bit gray image from it using the following code:
IplImage *img=cvCreateImage( cvSize( Image_width, Image_height ), IPL_DEPTH_16S, 1 );
cvSetData(img,Data, sizeof(short )*Image_width);
where Data is the short array.
Then I set the ROI for this image using this function:
cvSetImageROI(img, cvRect(crop_start.x, crop_start.y, crop_width, Image_height));
and the ROI is set successfully.
Now, after setting the ROI, I want to access the intensities of the image; that is, I want a pointer to the pixel data of the cropped image. I have tried this code:
short *crop_Imagedata=(short *)img->imageData;
But this pointer does not give the right intensities, as I have verified by debugging the code.
Can anybody please tell me how I can get a pointer to the image data inside the ROI?
Thanks in advance.
Hello, I tried the following to reproduce what you may want to do:
IplImage *img=cvCreateImage( cvSize( 15, 15 ), IPL_DEPTH_16S, 1 );
cvSet(img,cvScalar(15));//sets all values to 15
cvSetImageROI(img, cvRect(4, 0, 10, 15));
short *crop_Imagedata=(short *)img->imageData;
((*crop_Imagedata) == 15) // true
The value you get is not inside the ROI! imageData in the IplImage structure is just a plain pointer, not a function, so it still points at the start of the full image. The ROIs in OpenCV are, in my opinion, not that well documented or easy to use. Most OpenCV algorithms honor ROIs somehow; I use them too, but there is no automatic mechanism in the plain IplImage structure that applies them for you.
If you want more convenience, try the newer cv::Mat object.
But if you still want to use ROIs, you will always have to use the
CvRect roi = cvGetImageROI(img);
method to check the position, and then add the ROI offset yourself:
((short*)(img->imageData + (roi.y+ypos)*img->widthStep))[roi.x+xpos]
Remember that this part of OpenCV is a C library, not C++. By the way, ROIs can also be a bit annoying when mixing with cv::Mat. To copy an IplImage to a cv::Mat while ignoring its ROI, I have to do the following:
CvRect roitmp = cvGetImageROI(ilimage);
cvResetImageROI(ilimage);
cv::Mat tmp = cv::Mat(ilimage).clone();
cvSetImageROI(ilimage,roitmp);
Maybe someone here knows a better way of working with ROIs...
There is a QSvgRenderer class in the QtSvg module which can render onto a QPaintDevice, which can be a QImage. In that case we would create:
QImage svgBufferImage(renderer.defaultSize(), QImage::Format_ARGB32);
But how do I render to a QImage of a different size than the renderer's default? Since the SVG format can be scaled without quality loss, is it possible to generate static images, like PNG, from SVG files using QSvgRenderer?
Does anyone have a better idea? Basically I need to create images like PNG from SVG files at different sizes.
Just give your QImage the desired size. The SVG renderer will scale to fit the whole image.
#include <QApplication>
#include <QSvgRenderer>
#include <QPainter>
#include <QImage>
// In your .pro file:
// QT += svg
int main(int argc, char **argv)
{
    // A QApplication instance is necessary if fonts are used in the SVG
    QApplication app(argc, argv);

    // Load your SVG
    QSvgRenderer renderer(QString("./svg-logo-h.svg"));

    // Prepare a QImage with desired characteristics
    QImage image(500, 200, QImage::Format_ARGB32);
    image.fill(0xaaA08080); // partly transparent red-ish background

    // Get QPainter that paints to the image
    QPainter painter(&image);
    renderer.render(&painter);

    // Save; image format based on file extension
    image.save("./svg-logo-h.png");
}
This will create a 500x200 PNG image from the passed-in SVG file.
Example output with an SVG image from the SVG logos page:
Here is a complete answer:
QImage QPixmap::toImage()
If the pixmap has 1-bit depth, the returned image will also be 1 bit deep. Images with more bits will be returned in a format that closely represents the underlying system. Usually this will be QImage::Format_ARGB32_Premultiplied for pixmaps with an alpha channel and QImage::Format_RGB32 or QImage::Format_RGB16 for pixmaps without alpha.
QImage img = QIcon("filepath.svg").pixmap(QSize(requiredsize)).toImage();
Also, copied from the answer above:
// Save, image format based on file extension
image.save("./svg-logo-h.png");
If you have a simple SVG, like an icon or a small image (for example, for replacing a color), it may be better to use the NanoSVG lib.
It has a Qt wrapper, written by me: Qt Svg Render.
It's very small: only 2 headers and one cpp file.
Usage:
QString svgPath = "your svg Path";
QString pngPath = "path for png";
const QSize resultSize(/* size that you want to get */);
QImage image(resultSize, QImage::Format_ARGB32);

QFile svgFile(svgPath);
if (svgFile.open(QIODevice::ReadOnly))
{
    int svgFileSize = svgFile.size();
    QByteArray ba;
    ba.resize(svgFileSize + 1);
    svgFile.read(ba.data(), svgFileSize);
    ba.back() = {}; // nsvgParse expects a null-terminated buffer
    if (auto const nsi = nsvgParse(ba.data(), "px", 96); nsi)
    {
        image.fill(Qt::transparent);
        QPainter p(&image);
        p.setRenderHint(QPainter::Antialiasing, true);
        drawSVGImage(&p, nsi, resultSize.width(), resultSize.height());
        nsvgDelete(nsi);
        image.save(pngPath); // if you want to save the image
    }
}
I want to draw transparent windows (with an alpha channel) on Linux (GTK) and OS X. Is there an API to do that? Note that I don't want to set a global transparency; the alpha level should be set per pixel.
I'm looking for the same kind of API as the UpdateLayeredWindow function on Windows, as in this example: Per Pixel Alpha Blend.
For Mac OS X, see the RoundTransparentWindow sample code. It works by using a custom, completely transparent window and drawing shapes in it. Although the example only uses hard-edged shapes plus an overall alpha, arbitrary alpha can be used.
Although the example uses a custom window, you can use the same technique to punch holes in normal windows by calling setOpaque:NO. Hacky example:
@implementation ClearView

- (void)drawRect:(NSRect)rect
{
    if (mask == nil) mask = [[NSImage imageNamed:@"mask"] retain];
    [self.window setOpaque:NO];
    [mask drawInRect:self.bounds
            fromRect:(NSRect){{0, 0}, mask.size}
           operation:NSCompositeCopy
            fraction:1.0];
}

@end
The primary limitation of this technique is that the standard drop shadow doesn’t interact very well with alpha-blended edges.
I found this code in my experiments folder from earlier this year. I can't remember how much of it I wrote and how much is based on examples from elsewhere on the Internet.
This example will display a partially-transparent blue window with a fully opaque GTK+ button in the centre. Drawing, for example, an alpha-blended PNG somewhere inside the window should result in it being composited correctly. Hopefully this will put you on the right track.
Compile it with the following:
$ gcc `pkg-config --cflags --libs gtk+-2.0` -o per-pixel-opacity per-pixel-opacity.c
Now for the code:
#include <gtk/gtk.h>

static gboolean on_window_expose_event(GtkWidget *widget, GdkEventExpose *event, gpointer data)
{
    cairo_t *cr;
    cr = gdk_cairo_create(widget->window); // create cairo context
    cairo_set_source_rgba(cr, 0.0, 0.0, 1.0, 0.2);
    // SOURCE compositing operator -> replace destination
    cairo_set_operator(cr, CAIRO_OPERATOR_SOURCE);
    cairo_paint(cr); // paint source
    cairo_destroy(cr);
    return FALSE;
}

gint main(gint argc, gchar **argv)
{
    GtkWidget *window, *button;
    GdkScreen *screen;
    GdkColormap *colormap;

    gtk_init(&argc, &argv);

    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    g_signal_connect(G_OBJECT(window), "delete-event", G_CALLBACK(gtk_main_quit), NULL);
    g_signal_connect(G_OBJECT(window), "expose-event", G_CALLBACK(on_window_expose_event), NULL);
    gtk_window_set_decorated(GTK_WINDOW(window), FALSE);
    gtk_container_set_border_width(GTK_CONTAINER(window), 20);
    gtk_widget_set_app_paintable(window, TRUE);

    // Use the screen's RGBA colormap so the window gets an alpha channel
    screen = gtk_widget_get_screen(window);
    colormap = gdk_screen_get_rgba_colormap(screen);
    gtk_widget_set_colormap(window, colormap);

    button = gtk_button_new();
    gtk_button_set_label(GTK_BUTTON(button), "Don't Press!");
    gtk_container_add(GTK_CONTAINER(window), GTK_WIDGET(button));

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}