I'm trying to save a QGraphicsScene rendered with OpenGL as an image (PNG or JPEG), but I don't want the image to depend on the current view (zoom). That's why I'm not using grabFrameBuffer() but render() instead:
QImage imgToSave(1024, 768, QImage::Format_ARGB32_Premultiplied);
// fill the image
// and define rectbuffer(), the QRect containing what I want to save
QPainter painter(&imgToSave);
m_scene = new QGraphicsScene;
// fill the scene
m_scene->render(&painter, imgToSave.rect(), rectbuffer());
It does work. My question is: is it using OpenGL capabilities or not?
If not, how can I make it use them?
N.B.: I am using a QGLWidget as the viewport for my QGraphicsView, and the OpenGL display works fine. My concern is only with saving the image.
I am going to guess: no. For the QGraphicsScene to render through OpenGL, you need to specify a QGLWidget-derived object as the viewport of the view; you haven't, so it's almost certainly using the raster engine. Secondly, a QPainter uses whatever paint device you construct it with as its backend, and you have specified a plain QImage, which does not use OpenGL.
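For reference, switching the view itself onto the GL path is just a matter of swapping its viewport widget. A minimal sketch, assuming Qt 4 and an existing QGraphicsView* named view (not part of the original code):

#include <QGraphicsView>
#include <QGLWidget>

// Make the view rasterize its scene through OpenGL instead of the raster engine.
view->setViewport(new QGLWidget(QGLFormat(QGL::SampleBuffers)));
view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);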
If you can't/won't use a QGLWidget through a QGraphicsView, then you might be able to render onto a QGLFramebufferObject. But this brings its own complications: you will have to create a (possibly hidden) GL context beforehand, since a framebuffer object needs a current context.
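A minimal sketch of that approach (Qt 4 assumed, reusing the asker's m_scene and rectbuffer(); the invisible QGLWidget exists only to provide a current GL context):

#include <QGLWidget>
#include <QGLFramebufferObject>
#include <QPainter>
#include <QImage>

// An OpenGL context must be current before the FBO can be created;
// an invisible QGLWidget is one way to provide it.
QGLWidget glContext;
glContext.makeCurrent();

QGLFramebufferObject fbo(QSize(1024, 768), QGLFramebufferObject::CombinedDepthStencil);
QPainter painter(&fbo);                    // this QPainter uses the OpenGL paint engine
m_scene->render(&painter, QRectF(0, 0, 1024, 768), rectbuffer());
painter.end();

QImage imgToSave = fbo.toImage();          // read the rendered pixels back
imgToSave.save("scene.png");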
Related
I want to render some graphics, text, and shapes to a bitmap on my server, using a canvas-like API such as nanovega in D.
I know how to create an arsd window with an OpenGL context and render to it (as per the documentation), but is it also possible to render to a headless context, or even to draw directly into some memory buffer? (I don't think my server has OpenGL available, so it would probably need to use a software renderer.)
Mainly I want to use gradients, text, shapes, rounded corners, images, and masks, and render all of that to an image. I know that nanovega implements all of the rendering parts, so I would like to keep using it.
I would like to apply a CIFilter to a CGPath. A quick search reveals this is fairly straightforward on iOS. What are the options on OS X?
Are the steps:
create an image context,
create a CGPath that draws into the image context,
apply the filter,
draw the image into the current graphics context (i.e., for the NSView)?
This seems like a huge amount of boilerplate for a reasonably common task. I just want to check that I have not missed anything!
Core Image operates on an image's pixels.
Filters in Core Image generate CIImage objects and do not change the original context. But you can create a CIContext to draw into the image context.
You can't apply a filter to the image context directly. But you can create an image from the image context, then apply the filter and blend the images together.
I have a square image. I need to give it a perspective view and then animate it back to the flat, square view. This must work at least in the newest versions of all browsers, and without Flash.
I have tried to do this as follows:
Using RaphaelJS I can only clip the image (create a path and set its fill to the image URL), not add perspective.
Canvas behaves like SVG and VML in RaphaelJS... I can't add perspective with its help.
The CSS3 3D transform rotateX adds perspective, but it is supported only by Chrome and Safari.
There is no way to add perspective to an image using the built-in tools.
The only workable solution I have found is using SVG to add a clip path to the image; it doesn't add a perspective view, but it's the best available option.
As I understand it, we can use canvas to add perspective, but you must write your own JavaScript algorithm to transform the flat image into a perspective view.
Is there any way to animate an image change in IKImageView (using one of the supplied Core Animation transitions) without resorting to two independent IKImageViews when a new image loads?
There is no public API for that. You can only add a Core Animation overlay/background layer to IKImageView.
Is it possible to convert a vector image into Quartz 2D code (Mac) so that the image can be drawn programmatically?
Not easily; you would have to write all the code yourself. You might like to have a look at the Opacity image editor, which lets you design images and export them as Quartz or Cocoa drawing code.
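For a sense of what such drawing code looks like, hand-written Quartz 2D is just a series of C-level CGContext calls. A minimal sketch that fills a triangle in an offscreen bitmap context (the size, path, and color are arbitrary placeholders):

#include <CoreGraphics/CoreGraphics.h>

// Create an offscreen RGBA bitmap context to draw into.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 256, 256, 8, 0, colorSpace,
                                         kCGImageAlphaPremultipliedLast);

// Build and fill a simple path.
CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 128, 16);
CGContextAddLineToPoint(ctx, 240, 240);
CGContextAddLineToPoint(ctx, 16, 240);
CGContextClosePath(ctx);
CGContextFillPath(ctx);

// Snapshot the result as a CGImage, then clean up.
CGImageRef image = CGBitmapContextCreateImage(ctx);
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);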
What kind of vector image?
NSImage loads PDFs the same way it loads bitmaps.
NSImage is Quartz 2D drawing, but if you meant that you need a CGImage, NSImage in 10.6 has a method for getting one. However, CGImage is explicitly bitmap-based, unlike NSImage. The parameters you pass to -[NSImage CGImageForProposedRect:context:hints:] determine how the art is rasterized: it will be rasterized the same way it would be if drawn into the passed rect in the passed context.
You can use Vector code (http://www.vectorcodeapp.com) to generate Core Graphics code, which you may use programmatically, or even use to generate PostScript.