What is the fastest way to get NSAttributedString drawn into a CVPixelBufferRef (macOS)

What is the most performant way to get text drawn via -[NSAttributedString drawAtPoint:] into an RGBA32 CVPixelBufferRef?
Just to clarify my objective...
I'm being handed CVPixelBufferRef objects at 60 fps via a CVDisplayLink while a movie is playing. These are getting wrapped in CMSampleBuffers for output. I am using Apple's "AVGreenScreenPlayer" sample code as my base to work from.
I have an NSAttributedString object which represents a string (e.g. @"ABC"). I want to draw this onto a small background (possibly) and then draw the resulting text-with-background into the CVPixelBufferRef, in a corner of the video that is playing.
While a CIFilter would likely be the most performant option, I need access to a video frame containing the video+overlay result as a CVPixelBuffer or vImageBuffer.
This is for Mac OS X 10.10.3, Objective-C.
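One common approach, sketched below as an assumption rather than a benchmark-backed answer: lock the pixel buffer, wrap its base address in a CGBitmapContext, and make that the current NSGraphicsContext so drawAtPoint: lands directly in the buffer. This assumes a 32-bit BGRA buffer (adjust the bitmapInfo for other layouts) and omits error handling.
#import <Cocoa/Cocoa.h>
#import <CoreVideo/CoreVideo.h>

// Sketch: draw an attributed string directly into a BGRA CVPixelBufferRef.
static void DrawAttributedStringIntoPixelBuffer(NSAttributedString *text,
                                                CVPixelBufferRef pixelBuffer,
                                                NSPoint point)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Route AppKit string drawing into the Core Graphics context that wraps the buffer.
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithCGContext:cgContext flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];
    [text drawAtPoint:point];
    [NSGraphicsContext restoreGraphicsState];

    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}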

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is to return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic/algorithm is used when setting and getting images in and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to use the state of a previous rendering step, blend it with something new, and store that result again in the accumulator.
Apple's main use case is interactive painting: you retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.
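To make that flow concrete, here is a rough sketch of the paint loop described above (my own illustration, not Apple's code); canvasRect, strokeImage, and strokeRect are hypothetical placeholders for the canvas extent, a CIImage containing just the newest brush stroke, and the rect that stroke covers.
// Create the accumulator once, covering the canvas.
CIImageAccumulator *accumulator =
    [[CIImageAccumulator alloc] initWithExtent:canvasRect format:kCIFormatARGB8];

// Each time the user paints: blend the new stroke over whatever the accumulator holds...
CIFilter *compose = [CIFilter filterWithName:@"CISourceOverCompositing"];
[compose setValue:strokeImage forKey:kCIInputImageKey];
[compose setValue:[accumulator image] forKey:kCIInputBackgroundImageKey];

// ...store the blended result back so it becomes the starting point for the next stroke...
[accumulator setImage:[compose valueForKey:kCIOutputImageKey] dirtyRect:strokeRect];

// ...and display [accumulator image] (for example, render it with a CIContext).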

Render CIImage to a specific rect in IOSurface

I'm trying to render a CIImage to a specific location in an IOSurface using [CIContext render:toIOSurface:bounds:colorSpace:] by specifying the bounds argument r as the destination rectangle.
According to the documentation this should work, but Core Image always renders the image to the bottom-left corner of the IOSurface.
It seems to me like a bug in Core Image.
I can overcome this problem by rendering the image to an intermediate IOSurface with the same size as the CIImage, and then copying the content of that surface to another surface.
However, I would like to avoid the extra allocation and copy.
Any suggestion?
What you want to happen isn't currently possible with that API (which is a huge bummer).
You can, however, wrap your IOSurface up as a texture (using CGLTexImageIOSurface2D), create a CIContext with contextWithCGLContext:…, and then finally use drawImage:inRect:fromRect: to do this.
It's a huge hack, but it works (mostly):
https://github.com/ccgus/FMMicroPaintPlus/blob/master/CIMicroPaint/FMIOSurfaceAccumulator.m
Since macOS 10.13 you can use CIRenderDestination and CIContext.startTask(toRender:from:to:at:) to achieve the same result without having to provide an intermediate image.
In my case I used a combination of Metal and Core Image to render only a subpart of the output image as part of my pipeline, as follows:
// Render only the dirty sub-rect of ciimage into the Metal texture, at the same origin.
let renderDst = CIRenderDestination(mtlTexture: texture, commandBuffer: commandBuffer)
try! context.startTask(toRender: ciimage,
                       from: dirtyRect,
                       to: renderDst,
                       at: dirtyRect.origin)
As I'm already synchronizing against the MTLCommandBuffer I didn't need to synchronize against the returned CIRenderTask.
If you want more details, you can check the slides (starting from slide 83) of Advances in Core Image: Filters, Metal, Vision, and More (WWDC 2017 video).

NSWindow Flip Animation - Like iWork

I'm attempting to implement window-flipping identical to that in iWork -
https://dl.dropbox.com/u/2338382/Window%20Flipping.mov
However, I can't quite seem to find a straightforward way of doing this. Some tutorials suggest sticking snapshot images of both sides of the window in a bigger, transparent window and animating those. This might work, but seems a bit hacky, and the sample code is always bloated. Some tutorials suggest using private APIs, and since this app may be MAS-bound, I'd like to avoid that.
How should I go about implementing this? Does anyone have any hints?
NSWindow+Flipping
I've rewritten the ancient code linked below into NSWindow+Flipping. You can grab these source files from my misc. Cocoa collection on GitHub, PCSnippets.
You can achieve this using CoreGraphics framework. Take a look at this:
// Note: the CGS* symbols below are private Core Graphics Services APIs
// (declared only in private headers), so this approach is not Mac App Store safe.
- (void)flipWithDuration:(float)duration forwards:(BOOL)forwards
{
    CGSTransitionSpec spec;
    CGSTransitionHandle transitionHandle;
    CGSConnection cid = CGSDefaultConnection;

    spec.type = CGSFlip;
    spec.option = 0x80 | (forwards ? 2 : 1);
    spec.wid = [self windowNumber];
    spec.backColor = nil;
    transitionHandle = -1;

    CGSNewTransition(cid, &spec, &transitionHandle);
    CGSInvokeTransition(cid, transitionHandle, duration);
    [[NSRunLoop currentRunLoop] runUntilDate:
        [NSDate dateWithTimeIntervalSinceNow:duration]];
    CGSReleaseTransition(cid, transitionHandle);
}
You can download a sample project here. More info here.
UPDATE:
Take a look at this project. It's actually what you need.
About this project:
This category on NSWindow allows you to switch one window for
another, using the "flip" animation popularized by Dashboard widgets.
This was a nice excuse to learn something about CoreImage and how to
use it in Cocoa. The demo app shows how to use it. Scroll to the end
to see what's new in this version!
Basically, all you need to do is something like:
[someWindow flipToShowWindow:someOtherWindow forward:YES];
However, this code makes some assumptions:
— someWindow (the initial window) is already visible on-screen.
— someOtherWindow (the final window) is not already visible on-screen.
— Both windows can be resized to the same size, and aren't too large or complicated — the latter conditions being less important the faster your CPU/video card is.
— The windows won't go away while the animation is running.
— The user won't try to click on the animated window or do something while the animation is running.
The implementation is quite straightforward. I move the final window to the same position and size as the initial window. I then position a larger transparent window so it covers that frame. I render both window contents into CIImages, hide both windows, and start the animation. Each frame of the animation renders a perspective-distorted image into the transparent window. When the animation is done, I show the final window. Some tricks are used to make this faster; the flipping window is set up only once, and the final window is hidden by setting its alpha to 0.0, not by ordering it out and later ordering it back in again, for instance.
The main bottleneck is the CoreImage filter, and the first frame
always takes much longer to render — 4 or 6 times what it takes for
the remaining frames. I suppose this time is spent with setup and
downloading to the video card. So I calculate the time this takes and
draw a second frame at a stage where the rotation begins to show. The
animation begins at this point, but, if those first two frames took
too long, I stretch the duration to make sure that at least 5 more
frames will get rendered. This will happen with slow hardware or large
windows. At the end, I don't render the last frame at all and swap the
final window in instead.
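The window-snapshot step described above can also be done with public API. Here is a hedged sketch (my own, not the category's actual code) of capturing an on-screen window's contents as a CIImage with CGWindowListCreateImage:
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: capture an on-screen NSWindow's contents as a CIImage.
static CIImage *SnapshotOfWindow(NSWindow *window)
{
    CGImageRef cgImage = CGWindowListCreateImage(CGRectNull,
                                                 kCGWindowListOptionIncludingWindow,
                                                 (CGWindowID)[window windowNumber],
                                                 kCGWindowImageBoundsIgnoreFraming);
    if (cgImage == NULL) {
        return nil;
    }
    CIImage *image = [CIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}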

Displaying AVPlayer content on two views simultaneously

I am creating an HTTP Live Streaming Client for Mac that will control video playback on a large screen. My goal is to have a control UI on the main screen, and full screen video on the secondary screen.
Using AVFoundation, I have successfully been able to open the stream and control all aspects of it from my control UI, and I am now attempting to duplicate the video on a secondary screen. This is proving more difficult than I imagined...
On the control screen, I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
Digging deeper, I found this in the AVFoundation docs:
You can create arbitrary numbers of player layers with the same AVPlayer object. Only the most-recently-created player layer will actually display the video content on-screen.
This is actually useless to me, because I need the video showing correctly in both views.
I can create a new instance of AVPlayerItem from the same AVAsset, then create a new AVPlayer and add it to a new AVPlayerLayer and have video show up, but they are no longer in sync because they are two different players generating two different audio streams playing different parts of the same stream.
Does anyone have any suggestions on how to get the same AVPlayer content into two different views? Perhaps some sort of CALayer mirroring trick?
AVSynchronizedLayer may help. I'm using it differently (to synchronize two different media objects rather than the same one), but in principle it should be possible to load the same item twice and then use an AVSynchronizedLayer to keep them synced.
I see that this topic is very old, but I think it can still be helpful. You wrote that
I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
But it does work. I just tried it in my project. Here's my layer initialization code:
// First layer: fills _videoView and is driven by _testPlayer.
AVPlayerLayer *playerLayer = [AVPlayerLayer new];
[playerLayer setPlayer:_testPlayer];
playerLayer.frame = CGRectMake(0, 0, _videoView.frame.size.width, _videoView.frame.size.height);
playerLayer.contentsGravity = kCAGravityResizeAspect;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_defaultTransform = playerLayer.affineTransform;
[_videoView.layer insertSublayer:playerLayer atIndex:0];

// Second layer: shares the same AVPlayer, in a smaller frame layered on top.
AVPlayerLayer *testLayer_1 = [AVPlayerLayer playerLayerWithPlayer:_testPlayer];
testLayer_1.frame = CGRectMake(100, 100, 200, 200);
testLayer_1.contentsGravity = kCAGravityResizeAspect;
testLayer_1.videoGravity = AVLayerVideoGravityResizeAspect;
[_videoView.layer insertSublayer:testLayer_1 atIndex:1];
And here's what I got: two AVPlayerLayers playing the same AVPlayerItem in perfect sync.
Apple's docs now state this:
You can create arbitrary numbers of player layers with the same AVPlayer object, but you should limit the number of layers you create to avoid impacting playback performance.
This does indeed work in my app as well.

Trying to turn [NSImage imageNamed:NSImageNameUser] into NSData

If I create an NSImage via something like:
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
it only has one representation, an NSCoreUIImageRep, which seems to be a private class.
I'd like to archive this image as NSData, but if I ask for the TIFFRepresentation I get a small icon, whereas the original NSImage seemed to be vector-based and would scale up to fill my image views nicely.
I was kinda hoping images made this way would have an NSPDFImageRep I could use.
Any ideas how can I get an NSData (pref the vector version or at worse a large scale bitmap version) of this NSImage?
UPDATE
I spoke with some people on Twitter and they suggested that the real source of these images is multi-resolution .icns files (probably not vector at all). I couldn't find their location on disk, but it was interesting to hear nonetheless.
Additionally they suggested I create the system NSImage and manually render it into a high res NSImage of my own. I'm doing this now and it's working for my needs. My code:
+ (NSImage *)pt_businessDefaultIcon
{
    // Draws NSImageNameUser into a rendered bitmap.
    // We do this because trying to create an NSData from
    // [NSImage imageNamed:NSImageNameUser] directly results in a 32x32 image.
    NSImage *icon = [NSImage imageNamed:NSImageNameUser];
    NSImage *renderedIcon = [[NSImage alloc] initWithSize:CGSizeMake(PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize)];

    [renderedIcon lockFocus];
    NSRect inRect = NSMakeRect(0, 0, PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize);
    NSRect fromRect = NSMakeRect(0, 0, icon.size.width, icon.size.height);
    [icon drawInRect:inRect fromRect:fromRect operation:NSCompositeCopy fraction:1.0];
    [renderedIcon unlockFocus];

    return renderedIcon;
}
(Tried to post this as my answer but I don't have enough reputation?)
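A minimal follow-on sketch (assuming the pt_businessDefaultIcon category above) of archiving the rendered image as NSData, here as PNG via an NSBitmapImageRep:
NSImage *rendered = [NSImage pt_businessDefaultIcon];
NSData *tiffData = [rendered TIFFRepresentation];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:tiffData];
// PNG keeps the alpha channel; any NSBitmapImageFileType would do.
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:@{}];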
You seem to be ignoring the documentation. Both of your major questions are answered there. The Cocoa Drawing Guide (companion guide linked from the NSImage API reference) has an Images section you really need to read thoroughly and refer to any time you have rep/caching/sizing/quality issues.
...if I ask for the TIFFRepresentation I get a small icon when the
real NSImage I originally created seemed to be vector and would scale
up to fill my image views nicely.
Relevant subsections of the Images section for this question are: How an Image Representation is Chosen, Images and Caching, and Image Size and Resolution. By default, the -cacheMode for a TIFF image "Behaves as if the NSImageCacheBySize setting were in effect." Also, for in-memory scaling/sizing operations, -imageInterpolation is important: "Table 6-4 lists the available interpolation settings." and "NSImageInterpolationHigh - Slower, higher-quality interpolation."
I'm fairly certain this applies to a named system image as well as any other.
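As a small illustration (mine, not from the guide), the interpolation setting can be applied to the current graphics context right before the drawInRect: call in pt_businessDefaultIcon above:
// Ask for the slower, higher-quality scaling when rendering the icon.
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];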
I was kinda hoping images made [ by loading an image from disk ] would have an NSPDFImageRep I could use.
Relevant subsection: Image Representations. "...with file-based images, most of the images you create need only a single image representation." and "You might create multiple representations in the following situations, however: For printing, you might want to create a PDF representation or high-resolution bitmap of your image."
You get only the representation that suits the loaded image. If you want a PDF representation of a TIFF image, for example, you must create it yourself. To do so at high resolution, you'll need to refer back to the caching mode so you can get higher-res output.
There are a lot of fine details too numerous to list because of the high number of permutations of images/creation mechanisms/settings/ and what you want to do with it all. My post is meant to be a general guide toward finding the specific information you need for your situation.
For more detail, add specific details: the code you attempted to use, the type of image you're loading or creating -- you seemed to mention two different possibilities in your fourth paragraph -- and what went wrong.
I would guess that the image is "hard wired" into the graphics system somehow, and the NSImage representation of it is merely a number indicating which hard-wired graphic it is. So likely what you need to do is to draw it and then capture the drawing.
Very generally, create a view controller that will render the image, reference the VC's view property to cause the view to load, get that view's layer, render the layer into a bitmap graphics context, build an NSImage from that context, and extract whatever representation you want from the NSImage.
(There may be a simpler way, but this is the one I ended up using in one case.)
(And, sigh, I suppose this scheme doesn't preserve scaling either.)
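For the macOS case, here is a hedged sketch of that "draw it and then capture the drawing" idea using public AppKit API; imageView stands for a hypothetical NSImageView already displaying the system image at the size you want:
NSRect bounds = imageView.bounds;
NSBitmapImageRep *rep = [imageView bitmapImageRepForCachingDisplayInRect:bounds];
[imageView cacheDisplayInRect:bounds toBitmapImageRep:rep];
// rep now holds a bitmap of the drawn view; archive it however you like.
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:@{}];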
