Render CIImage to a specific rect in IOSurface - macos

I'm trying to render a CIImage to a specific location in an IOSurface using [CIContext render:toIOSurface:bounds:colorSpace:] by specifying the bounds argument r as the destination rectangle.
According to the documentation this should work, but Core Image always renders the image into the bottom-left corner of the IOSurface.
This seems to me like a bug in Core Image.
I can work around this by rendering the image into an intermediate IOSurface with the same size as the CIImage, and then copying the contents of that surface into the target surface.
However, I would like to avoid the extra allocation and copy.
Any suggestion?

What you want to happen isn't currently possible with that API (which is a huge bummer).
You can, however, wrap your IOSurface up as a texture (using CGLTexImageIOSurface2D), create a Core Image context with CIContext's contextWithCGLContext:…, and then finally use drawImage:inRect:fromRect: to do this.
It's a huge hack, but it works (mostly):
https://github.com/ccgus/FMMicroPaintPlus/blob/master/CIMicroPaint/FMIOSurfaceAccumulator.m

Since macOS 10.13 you can use CIRenderDestination and CIContext.startTask(toRender:from:to:at:) to achieve the same result without having to provide an intermediate image.
In my case I used a combination of Metal and Core Image to render only a subpart of the output image as part of my pipeline, as follows:
let renderDst = CIRenderDestination(mtlTexture: texture, commandBuffer: commandBuffer)
try! context.startTask(toRender: ciimage,
                       from: dirtyRect,
                       to: renderDst,
                       at: dirtyRect.origin)
As I'm already synchronizing against the MTLCommandBuffer I didn't need to synchronize against the returned CIRenderTask.
If you want more details, you can check the slides (starting at slide 83) of Advances in Core Image: Filters, Metal, Vision, and More (WWDC 2017 video).
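For the IOSurface case in the original question, the same pattern should work with an IOSurface-backed render destination. Roughly (an untested sketch; surface, ciimage, context, and dirtyRect are assumed to be set up as in the question):
let destination = CIRenderDestination(ioSurface: surface)
let task = try context.startTask(toRender: ciimage,
                                 from: dirtyRect,
                                 to: destination,
                                 at: dirtyRect.origin)
// Block until Core Image has finished writing into the surface.
_ = try task.waitUntilCompleted()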

Related

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic / algorithm is being used when setting and getting images in and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the result of a previous rendering step, blend it with something new, and store that result again in the accumulator.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.
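Roughly, that painting loop looks like this (a sketch in Swift; canvasExtent and strokeImage are illustrative names, not from your code):
import CoreImage

// Hypothetical painting loop: the accumulator keeps the canvas state alive
// between rendering passes.
let canvasExtent = CGRect(x: 0, y: 0, width: 512, height: 512)
let accumulator = CIImageAccumulator(extent: canvasExtent, format: .RGBA8)!  // failable init, force-unwrapped for brevity

func add(stroke strokeImage: CIImage) {
    // Blend the new stroke over whatever is already in the accumulator...
    let combined = strokeImage.composited(over: accumulator.image())
    // ...and write the result back so the next stroke builds on it.
    accumulator.setImage(combined)
    // accumulator.image() is then what you hand to the CIContext for display.
}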

How do you work with really large images in Metal?

TL;DR: In macOS 10.13, an MTLTexture has a maximum width and height of 16,384. What strategies can you use to be able to process and display images larger than 16,384 pixels using Metal?
In a photo viewer that I'm currently working on, I've moved most of the viewing of images into a Metal backed view that uses Core Image for doing any image adjustments. This is working really well but I've recently started testing against some really large images (panoramas) and I'm now hitting some limits that I'm not entirely sure how to work around while remaining relatively performant.
My current environment looks like this:
Load and decode an image from an NSURL into an IOSurface. This is done using either Image I/O directly or a Core Image pipeline that renders into the IOSurface. The IOSurface is then passed from an XPC service back into the main app.
In the main app, a new MTLTexture is created that is backed by the IOSurface. Then, a CIImage is created from the MTLTexture and that CIImage is used throughout an image pipeline as the root "source image".
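For reference, that wiring looks roughly like this (a simplified sketch; the function and variable names are illustrative, not my actual code):
import CoreImage
import Metal
import IOSurface

// Hypothetical helper: wrap an IOSurface in a Metal texture (no copy),
// then wrap that texture in a CIImage to use as the root source image.
func makeSourceImage(from surface: IOSurfaceRef, device: MTLDevice) -> CIImage? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm,
        width: IOSurfaceGetWidth(surface),
        height: IOSurfaceGetHeight(surface),
        mipmapped: false)
    descriptor.usage = .shaderRead
    guard let texture = device.makeTexture(descriptor: descriptor,
                                           iosurface: surface,
                                           plane: 0) else { return nil }
    return CIImage(mtlTexture: texture,
                   options: [.colorSpace: CGColorSpaceCreateDeviceRGB()])
}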
However, if I attempt to open an image larger than 16,384 pixels in one dimension, I'm unable to create the original IOSurface, which appears to be limited to 16,384 pixels on my laptop. (13" MBP-TB 2016)
But even if I could create a larger IOSurface, then I'm still stuck with the same limit on the MTLTexture.
See: Apple's Metal Feature Set Tables
I'm curious what strategies others would recommend to allow one to open large image files while still taking advantage of Core Image and Metal.
One attempt I've made is to just have the root source image be a CIImage that was created from a CGImageRef. However, there's a significant drop in performance between that arrangement and a CIImage backed by a texture, even for smaller images.
Another idea I've had, but haven't yet explored, is to use CIImageProvider in some capacity, but I'm not entirely sure how I'd go about "tiling" potentially several IOSurfaces or MTLTextures, whether that even makes sense, or whether it would be better to just allocate a single large buffer to read from. (Or perhaps use dispatch_data in some capacity?)
(macOS 10.13 or even 10.14 would be fine.)

How to get an OpenCV image into an enaml space - Is it possible?

Is it possible to have enaml as a target for OpenCV?
I'm thinking about how to set up the GUI and what to use.
Nothing too complicated: I need to be able to set a bitmap background and draw rectangles and circles over it, but also be able to select and move these graphics objects.
Also, I would prefer not to have to manage all these elements myself when I stretch the window, etc.; they should adjust automatically, since they would be defined in some "absolute" space. I think I could easily make it work for the bitmaps (even from memory) by overriding request_image in an ImageProvider object (even though I see some strange caching happening in the provider/enaml view).
The problem I'm having now with OpenCV (OS X, 64-bit) is that even when I get resizing to work with the Qt backend and CV_WINDOW_NORMAL, the content does not stretch.
I like OpenCV because it gives me basic UI functions easily.
On the other hand, I've started to like enaml, so I'm wondering whether anyone has managed to get these two to work together.
I figure that if linking with MPL works, coupling with OpenCV should be possible too :)
Thanks!
If you can get your image into argb32 or png format, you can use an Enaml ImageView to display it.
Take a look at the ImageView example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/image_view.enaml
This should do it:
from enaml.image import Image
from cv2 import imread, imencode

# Read with OpenCV, encode the array as PNG bytes, then wrap it in an
# enaml Image (tobytes() replaces the deprecated tostring()).
open_cv_image = imread('./cat.png')
png_image = imencode('.png', open_cv_image)[1].tobytes()
enaml_image = Image(data=png_image)

NSCursor images on a retina display

I am trying to modify the default I-beam cursor image. I'm using [[[NSCursor IBeamCursor] image] representations], passing each one through a CIFilter and adding it to a new image. However, the resulting cursor looks as though it is rendering the low-resolution images.
The High Resolution Guidelines say:
For custom cursors, you can pass a multirepresentation TIFF to the NSCursor class method initWithImage:hotSpot:.
So I would expect this to work. Additionally, if I get the -TIFFRepresentation of the original image and my modified image, and write them to disk, they both look like multi-page TIFF files with the same size images. What could I be doing wrong?
I have a somewhat-temporary solution: manually call -setSize: on each image representation, dividing the pixel height and width by the screen's scale factor. However, this technique doesn't seem like it will work ideally with multiple screens.
You're right on. I've been debugging this all day and I'm pretty sure I've got it nailed. I'm not doing exactly the same thing you are (my images are loaded from a file) but the end result is exactly the same.
The trick is to set the first representation of the multi-representation image to the non-retina size. If you are loading your cursors from an image file, you must take this extra step to adjust the size of the representations to match. It doesn't work 'out-of-the-box' as you would expect.
I've tested this on a machine with two monitors and dragging the window from the retina display to the non-retina display acts as it should, displaying the high/low resolution images for the cursor.
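In Swift, the adjustment is roughly this (a sketch; the 24-point base size and hot spot are just examples, not from the question):
import AppKit

// Hypothetical fix-up: give every representation the same point size (the
// non-Retina dimensions), so AppKit treats the larger reps as @2x variants
// rather than as bigger images.
func normalizedCursorImage(_ image: NSImage, baseSize: NSSize) -> NSImage {
    for rep in image.representations {
        rep.size = baseSize
    }
    image.size = baseSize
    return image
}

// Example usage:
// let image = normalizedCursorImage(filteredImage, baseSize: NSSize(width: 24, height: 24))
// let cursor = NSCursor(image: image, hotSpot: NSPoint(x: 4, y: 9))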
I had a similar problem a while ago: I had my cursor in a PDF, and it always drew as if it was a pixel image at 1:1 size, blown up. There's a solution to that in NSCursor: Using high-resolution cursors with cursor zoom (or retina).
Maybe someone can use that technique to solve this problem? My guess is that creating an image with the same size but a different CTM marks it as the same size but Retina. What @jtbrandes is doing probably marks it as a different size and non-Retina, so you're effectively losing the scale-factor information. If you create an image with a CTM in the hints, maybe you can draw the filtered images into it and it'll be detected correctly.

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. It looks like your best alternative would be to use glReadPixels() to pull in the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format.
Update (3/14/2012): On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
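In Swift, that call looks roughly like this (a sketch; textureName and textureSize are placeholders standing in for an existing RGBA OpenGL ES texture owned by your GL context):
import CoreImage
import CoreGraphics

// Hypothetical: wrap an existing OpenGL ES texture in a CIImage without
// copying the pixels back to the CPU.
let textureName: UInt32 = 1                        // placeholder GL texture name
let textureSize = CGSize(width: 1024, height: 768) // placeholder texture dimensions

let image = CIImage(texture: textureName,
                    size: textureSize,
                    flipped: true,
                    colorSpace: CGColorSpaceCreateDeviceRGB())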
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU. However the iOS features mentioned above might make this easier and more efficient.
