What does CIImageAccumulator do? - macos

Problem
The Apple documentation states when CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. However, adding the first image darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic/algorithm is used when setting and getting images in and out of a CIImageAccumulator?

The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the state of a previous rendering step, blend it with something new, and store that result in the accumulator again.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.
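In code, that feedback loop looks roughly like this (a minimal sketch; strokeImage is a placeholder for whatever CIImage represents the newly painted stroke):

// Create the accumulator once, matching the canvas size.
CIImageAccumulator *accumulator =
    [CIImageAccumulator imageAccumulatorWithExtent:CGRectMake(0, 0, 1024, 768)
                                            format:kCIFormatARGB8];

// For every new stroke: blend it over the previous state and store the result back.
CIFilter *compose = [CIFilter filterWithName:@"CISourceOverCompositing"];
[compose setValue:strokeImage forKey:kCIInputImageKey];                  // the new stroke
[compose setValue:accumulator.image forKey:kCIInputBackgroundImageKey];  // previous contents
[accumulator setImage:compose.outputImage];

// Finally, render accumulator.image with a CIContext to display the canvas.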

Related

Vulkan - How to know what is current image layout?

After rendering or some other operations, I want to read the target image back to the CPU.
For this, I first need to do a layout transition, changing the image's current (old) layout to one that allows transferring its data into a staging image: VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL.
For the transition operation, I need to provide both the current layout and the new layout.
But how do I know what the current layout is? Each render pass may set finalLayout to a different value, and some transitions may have been done in the meantime.
A solution, I think, is to store each image's current layout and update it after each render pass and after each transition operation.
Is this correct?
But how do I know what the current layout is? Each render pass may set finalLayout to a different value, and some transitions may have been done in the meantime.
Yes, but you created those render passes. You issued commands to use those render passes on that image. Therefore, at any point in the command stream, you know what layout the image is in.
Vulkan expects you to be aware of what you've done. How you pull that off is up to you. Maybe you always leave the image in color-attachment optimal. Maybe you explicitly keep track of it with some higher-level layer. It could be any number of things.
But at the end of the day, it's up to you. With great power comes great responsibility.
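One straightforward way to do that bookkeeping is to carry the layout around next to the image handle and update it whenever you record a transition or finish a render pass. A minimal sketch (TrackedImage and transitionLayout are made-up helpers; access masks are omitted for brevity):

typedef struct {
    VkImage       image;
    VkImageLayout currentLayout;   // updated after every transition / render pass
} TrackedImage;

static void transitionLayout(VkCommandBuffer cmd, TrackedImage *img,
                             VkImageLayout newLayout,
                             VkPipelineStageFlags srcStage,
                             VkPipelineStageFlags dstStage)
{
    VkImageMemoryBarrier barrier = {0};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.oldLayout = img->currentLayout;          // the layout we have tracked
    barrier.newLayout = newLayout;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = img->image;
    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    barrier.subresourceRange.levelCount = 1;
    barrier.subresourceRange.layerCount = 1;

    vkCmdPipelineBarrier(cmd, srcStage, dstStage, 0, 0, NULL, 0, NULL, 1, &barrier);
    img->currentLayout = newLayout;                  // keep the bookkeeping in sync
}

// After vkCmdEndRenderPass, also set img->currentLayout to that render pass's finalLayout.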

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests and it works quite fine to just render (using drawArrays etc.) and then present the render buffer, performing these two actions together just once, and then not doing either again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant re-rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give an example, the standard Android OpenGL helper classes have an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
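On iOS the same on-demand pattern is available as well; here is a minimal sketch assuming a GLKit-based setup (eaglContext is a placeholder for your existing EAGLContext):

GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds context:eaglContext];
glView.enableSetNeedsDisplay = YES;   // draw only when explicitly asked, not every frame
glView.delegate = self;               // -glkView:drawInRect: does the actual rendering
[self.view addSubview:glView];

// ... later, whenever the scene actually changes:
[glView setNeedsDisplay];             // triggers exactly one redraw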

Best Practice for laying out images for printing in a WYSIWYG Mac app?

I'm in the concept phase of a Mac application that should let the user easily select and layout images for printing. It's a document-based app and a document can have multiple pages with lots of pictures in different sizes and rotations on it. The UI would kind of be like the UI of Pages.app.
Those pictures can possibly be large hi-res images. The user should also be able to print them in the best quality that the images offer.
I have re-watched some WWDC sessions about Quartz, 2D drawing optimization and NSView.
I know that there are a few different ways of accomplishing what I want to do, namely:
Use a custom view for a "page" and draw the images in drawRect: with Core Graphics/Quartz. Use CG transforms to rotate and scale images.
Also use a custom view for a "page", but use NSImageView-subviews to display the images. Use Core Animation and layer transforms to scale/rotate images.
What is the best practice for this? Drawing with Core Graphics or using NSViews? Why?
Thank you so much!
Johannes
Depends on how interactive these pages should be. If there is a lot of mouse interaction, e.g. dragging, selecting, etc., I'd go with views. If you want fluid animations I'd even use plain CALayers with their contents set to one image each. This would also let you use zPosition to order the images where they overlap; a view-based solution makes z-ordering hard.
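A minimal sketch of that layer-per-image idea (hostView and cgImage are placeholders; hostView is assumed to be a layer-backed NSView):

CALayer *pictureLayer = [CALayer layer];
pictureLayer.contents = (__bridge id)cgImage;                 // the picture's CGImageRef
pictureLayer.bounds = CGRectMake(0, 0, 400, 300);
pictureLayer.position = CGPointMake(200, 300);
pictureLayer.zPosition = 2;                                   // controls overlap order
[pictureLayer setAffineTransform:CGAffineTransformMakeRotation(M_PI / 12)];
[hostView.layer addSublayer:pictureLayer];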
The drawRect: approach should be fastest, but you'll have a harder time integrating user interaction and you must handle z-ordering manually.
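For comparison, the drawRect:-based approach for a single placed image looks roughly like this (a sketch; image, position, angle and scale stand in for your per-picture model):

- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, position.x, position.y);   // move to the picture's origin
    CGContextRotateCTM(ctx, angle);                        // rotation in radians
    CGContextScaleCTM(ctx, scale, scale);
    [image drawInRect:NSMakeRect(0, 0, image.size.width, image.size.height)
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
    CGContextRestoreGState(ctx);
}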
This is a reply I got from opening one of my two Apple Technical Support Incidents:
Hi Johannes,
Thanks for contacting Apple DTS regarding your question about printing and the different ways to construct your application's general UI (with views).
There is a trend toward using layer-backed views in OS X (utilizing Core Animation layers), motivated by the ability to easily animate your application's user interface with little work when needed. However, in terms of printing, you would be better off implementing drawRect: for custom views so that the view contents can be drawn at "full resolution" when rendered into the context for printing.
If you instead use layer-backed views and ask those layers to renderInContext:, the layer contents would be used to render, and those contents will commonly not be at the full resolution of your source documents/images. This is because layer-backed views take additional memory to store those bitmaps (cached layer contents), and because of that, they are recommended to be sized appropriately for the screen (which may not necessarily be sized appropriately for the printed page).
Does this help guide your application architecture? Please let me know.
So basically this means that using layer-backed views might result in sub-optimal printing quality. I've replied with some follow-up questions ("Would setting wantsLayer = NO on the root view right before printing help?") and will post the answers as soon as I get them.
All three approaches should work. Since you should be using scaled-down representations of large images anyway, I don't think there will be much difference. Do what you feel most comfortable doing.
My guess is that just using layer-backed NSViews (one per draggable image) will probably work best for starters. If you find performance lacking, you can always micro-optimize. Note that you may have to make your views a tad larger than the images so you can draw the selection handles outside them.
This is all assuming that you will never want to do a more complex drawing.

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path for delivering a texture to a CIImage. Your best alternative would be to use glReadPixels() to pull the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format.
(Update 3/14/2012) On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.
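A minimal sketch of that read-back path, here using the bitmap-data initializer (width and height are placeholders; the texture is assumed to be attached to the currently bound framebuffer as RGBA8):

size_t bytesPerRow = (size_t)width * 4;
NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.mutableBytes);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CIImage *image = [CIImage imageWithBitmapData:pixels
                                  bytesPerRow:bytesPerRow
                                         size:CGSizeMake(width, height)
                                       format:kCIFormatRGBA8
                                   colorSpace:colorSpace];
CGColorSpaceRelease(colorSpace);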
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
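A minimal sketch of that initializer (textureID, width and height are placeholders; the texture is assumed to live in the same EAGLContext sharegroup as the CIContext that will render the image):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CIImage *image = [[CIImage alloc] initWithTexture:textureID
                                             size:CGSizeMake(width, height)
                                          flipped:YES
                                       colorSpace:colorSpace];
CGColorSpaceRelease(colorSpace);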
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU. However the iOS features mentioned above might make this easier and more efficient.

Anything wrong using display lists in OpenGL for a single object?

First of all, I know that display lists were deprecated in OpenGL 3.0 and removed in 3.1. But I still have to use them for this university project, for which OpenGL 2.1 is being used.
Every article or tutorial I read on display lists use them on some kind of object that is drawn multiple times, for instance, trees. Is there anything wrong in creating one list for a single object?
For instance, I have an object (in .obj format) for a building. This specific building is only drawn once. For basic performance analysis I have the current frames per second on the window title bar.
Doing it like this:
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
I get around 260 FPS. If you don't know the GLM library: the glmDraw function basically makes a bunch of glVertex calls.
But doing it like this:
glNewList(gameDisplayLists.building, GL_COMPILE);
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
glEndList();
glCallList(gameDisplayLists.building);
I get around 420 FPS. Of course, the screen doesn't actually refresh that fast, but as I said, it's just a simple and basic way to measure performance.
It looks much better to me.
I'm also using display lists for when I have some type of object that I repeat many times, like defense towers. Again, is there anything wrong doing this for a single object or can I keep doing this?
Using display lists for this stuff is perfectly fine, and before vertex arrays were around they were the de facto standard for placing geometry in fast memory. But display lists have a few interesting pitfalls. For example, in OpenGL 1.0 there were no texture objects; instead, placing the glTexImage call in a display list was the way to emulate them. This behaviour still prevails, so binding a texture and calling glTexImage inside a display list will effectively re-initialize the texture object with whatever was recorded during display list compilation, every time the display list is called. On the other hand, display lists don't work with vertex arrays.
TL;DR: If your geometry is static, you don't expect to transition to OpenGL 3 core anytime soon, and it gives you a huge performance increase (as it did for you), then just use them!
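To stay clear of the glTexImage pitfall described above, keep texture creation outside the list and record only the bind and the geometry; a minimal sketch (buildingTexture, w, h and pixels are placeholders):

// Create and upload the texture ONCE, outside any display list.
GLuint buildingTexture;
glGenTextures(1, &buildingTexture);
glBindTexture(GL_TEXTURE_2D, buildingTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Compile the list once, e.g. at load time.
glNewList(gameDisplayLists.building, GL_COMPILE);
glBindTexture(GL_TEXTURE_2D, buildingTexture);            // recording the bind is fine
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);     // no glTexImage* calls in here
glEndList();

// Per frame:
glCallList(gameDisplayLists.building);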

Resources