I want to draw an image with the HardLight compositing operation. I've created an NSImageView subclass with the following draw code:
- (void)drawRect:(NSRect)dirtyRect {
    //[super drawRect:dirtyRect];
    if (self.image != nil) {
        [self.image drawInRect:self.bounds
                      fromRect:NSZeroRect
                     operation:NSCompositeHardLight
                      fraction:1.0];
    }
}
In the usual case it works well, but it does not work over an NSVisualEffectView.
How can I blend HardLight over NSVisualEffectView?
In the image linked below you can see a rounded rectangle that blends HardLight over the window background and a colour image. Over the NSVisualEffectView (the red rectangle at the bottom), however, it draws as plain grey.
https://www.dropbox.com/s/bcpe6vdha6xfc5t/Screenshot%202015-03-27%2000.32.53.png?dl=0
Roughly speaking, image compositing takes zero, one, or both pixels from the source and destination, applies some composite operation, and writes the result to the destination. To get any effect that takes the destination pixel into account, that pixel's colour information must be known at the moment the compositing operation takes place, which is in your implementation of -drawRect:.
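To make that concrete, the standard per-channel hard-light formula (S = source, D = destination, values in 0..1) depends directly on D:

    R = 2 * S * D                      if S <= 0.5
    R = 1 - 2 * (1 - S) * (1 - D)      if S > 0.5

Without knowing D, neither branch can be evaluated, so the result simply cannot be computed.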
I'm assuming you're talking about behind-window blending (NSVisualEffectBlendingModeBehindWindow) here. The problem with NSVisualEffectView is that it does not draw anything itself. Instead, it defines a region that tells the WindowServer process to apply its vibrancy effect there, and that happens after your app draws its views.
Therefore a compositing operation in your app cannot take into account the pixels that the window server draws later. In short, this cannot be done.
I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but it fades out on the side. That is, it's drawn as a linear gradient, from alpha=1.0 to alpha=0.0.
My best guess is that one of the draw() variants taking an NSCompositingOperation might help me do what I want, but I'm having trouble understanding how they could. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to use different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. That operation effectively punches out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 on top with the .destinationOut operation, you'll end up with your image fading from alpha 1.0 to 0.0.
Because that punch-out affects whatever is already in the backing store where the gradient is drawn, you'll want to be careful where and how you use it.
If you want to do all of this within the drawing of the view, do the drawing of your image and the punch-out gradient inside a transparency layer, using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer, to avoid punching out anything else.
You could also first create a new NSImage representing this faded variant, either using the drawingHandler constructor or an NSCustomImageRep, and do the same image-and-gradient drawing in there (without worrying about transparency layers). Then draw that image into your view, or simply use it with an NSImageView; a sketch of this approach follows.
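For concreteness, here is a minimal sketch of that last approach, assuming the original image is in a (hypothetical) variable named sourceImage:

NSImage *fadedImage =
    [NSImage imageWithSize:sourceImage.size
                   flipped:NO
            drawingHandler:^BOOL(NSRect dstRect) {
        // Draw the original image normally first.
        [sourceImage drawInRect:dstRect
                       fromRect:NSZeroRect
                      operation:NSCompositingOperationSourceOver
                       fraction:1.0];

        // Then punch it out with a clear-to-opaque gradient: wherever
        // the gradient is opaque, destinationOut makes the image
        // transparent, so the result fades from alpha 1.0 to 0.0.
        [[NSGraphicsContext currentContext]
            setCompositingOperation:NSCompositingOperationDestinationOut];
        NSGradient *fade =
            [[NSGradient alloc] initWithStartingColor:[NSColor clearColor]
                                          endingColor:[NSColor blackColor]];
        [fade drawInRect:dstRect angle:0.0];
        return YES;
    }];

The resulting fadedImage can then be drawn into the view or handed to an NSImageView as usual.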
In our app we have UIScrollView above CAEAGLLayer. UIScrollView contains some UIViews (red rectangles). In CAEAGLLayer we draw white rectangles. Centers of white rectangles are the same as the centers of red rectangles. When UIScrollView scrolls, we update positions of white rectangles in CAEAGLLayer and render them.
We are getting the expected result: centers of white rectangles are always the same as centers of red rectangles.
But we can't synchronize updates of the CAEAGLLayer with the movement of the views inside UIScrollView.
We have some kind of mistiming – red rectangles lag behind white rectangles.
Roughly speaking, we want the CAEAGLLayer to lag together with the UIScrollView.
We have prepared sample code. Run it on a device and scroll, and you will see that the white rectangles (drawn by OpenGL) move faster than the red ones (regular UIViews). The OpenGL content is updated within the scrollViewDidScroll: delegate call.
https://www.dropbox.com/s/kzybsyu10825ikw/ios-opengl-scrollview-test.zip
It behaves the same even in the iOS Simulator; just take a look at the video: http://www.youtube.com/watch?v=1T9hsAVrEXw
Red = UIKit, White = OpenGL
The code is:

- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    // reuses red squares that have gone outside the bounds
    [overlayView updateVisibleRect:CGRectMake(...)];
    // draws white squares using OpenGL under the red squares
    [openGlView updateVisibleRect:CGRectMake(...)];
}
Edit:
The same issue can easily be demonstrated with a much simpler sample. The working xcodeproj can be found at:
https://www.dropbox.com/s/vznzccibhlf8bon/simple_sample.zip
The sample project basically draws and animates a set of WHITE squares in OpenGL and does the same with a RED set of UIViews. The lag between the red and white squares is easy to see.
In iOS 9, CAEAGLLayer gained a presentsWithTransaction property that synchronizes the two.
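A minimal sketch of opting in, assuming openGlView is backed by a CAEAGLLayer as in the sample:

// iOS 9+: make -presentRenderbuffer: wait for the enclosing Core
// Animation transaction, so the GL content appears in the same frame
// as the UIKit changes.
CAEAGLLayer *glLayer = (CAEAGLLayer *)openGlView.layer;
glLayer.presentsWithTransaction = YES;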
In fact, you can't synchronize them using the current public APIs. MobileMaps.app and Apple's Map Kit use a private CAEAGLLayer property, asynchronous, to work around this issue.
Here is the related radar: http://openradar.appspot.com/radar?id=3118401
After digging around a little I'd like to extend my previous answer:
Your OpenGL rendering is done immediately, from within the scrollViewDidScroll: method, while the UIKit drawing is performed later, during the normal CATransaction updates from the run loop.
To synchronize the UIKit updates with the OpenGL rendering, enclose both in one explicit transaction and flush it, which forces UIKit to commit the changes to backboardd immediately:
- (void)scrollViewDidScroll:(UIScrollView *)aScrollView {
    [CATransaction begin];
    [overlayView updateVisibleRect:CGRectMake(...)];
    [openGlView updateVisibleRect:CGRectMake(...)];
    [CATransaction flush]; // trigger a UIKit and Core Animation graphics update
    [CATransaction commit];
}
Lacking a proper answer, I'd like to share my thoughts:
There are two kinds of drawing involved: Core Animation (UIKit) and OpenGL. In the end, all drawing is done by OpenGL, but the Core Animation part is rendered in backboardd (or Springboard.app, before iOS 6), which serves as a kind of window server.
To make this work, your app's process serializes the layer hierarchy, along with changes to its properties, and passes the data over to backboardd, which in turn renders and composites the layers and makes the result visible.
When mixing OpenGL with UIKit, the CAEAGLLayer's rendering (which is done in your app) has to be composited with the rest of the Core Animation layers. I'm guessing that the render buffer used by the CAEAGLLayer is somehow shared with backboardd to provide a fast way of transferring your application's rendering, and that this mechanism does not necessarily have to be synchronized with the updates from Core Animation.
To solve your issue, one would have to find a way of synchronizing the OpenGL rendering with the serialization of the Core Animation layers and their transfer to backboardd. Sorry for not being able to present a solution, but I hope these thoughts and guesses help you understand the reason for the problem.
I am trying to solve exactly the same issue. I tried all of the methods described above, but none were acceptable.
Ideally, this could be solved by accessing Core Animation's final compositing GL context, but we can't access that private API.
Another way is transferring the GL result to CA. I tried this by reading the frame-buffer pixels back and dispatching them to a CALayer, but it was too slow: over 200 MB/sec of data has to be transferred. I am now trying to optimize this, because it is the last approach left to try.
Update
It is still heavy, but reading the frame buffer and setting the result as a CALayer's contents is working and shows acceptable performance on my iPhone 4S. Even so, over 70% of the CPU load goes to this reading and setting operation alone. With glReadPixels in particular, I think most of that is just time spent waiting for the data to come back from GPU memory.
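A hedged sketch of that readback path (targetLayer is a hypothetical CALayer; real code would also have to compensate for glReadPixels returning rows bottom-up):

GLint width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);

// Stall point: the CPU waits here while pixels come back from the GPU.
size_t bytesPerRow = (size_t)width * 4;
void *pixels = malloc(bytesPerRow * (size_t)height);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Wrap the raw pixels in a CGImage and hand it to the layer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8,
    bytesPerRow, colorSpace,
    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef frame = CGBitmapContextCreateImage(bitmap);
targetLayer.contents = (__bridge id)frame;

CGImageRelease(frame);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
free(pixels);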
Update2
I gave up on that last method. Its cost is almost purely pixel transfer, and it is still too slow. I decided to draw every dynamic object in GL; only some static popup views will be drawn with CA.
In order to synchronize UIKit and OpenGL drawing, you should try to render where Core Animation expects you to draw, i.e. something like this:
- (void)displayLayer:(CALayer *)layer {
    [self drawFrame];
}

- (void)updateVisibleRect:(CGRect)rect {
    _visibleRect = rect;
    [self.layer setNeedsDisplay];
}
I am drawing a rounded image in the centre of my custom view by creating a rounded NSBezierPath and adding it to the clipping region of the graphics context. This works well, but I'm wondering if I can improve the performance of drawing the view's background (the area outside of the rounded, centred image) by inverting this clip and then performing the background draw (an NSGradient fill).
Can anyone suggest a method of inverting a clip path?
You won't improve the drawing performance of the background by doing this. If anything, using a more complex clipping path will slow down drawing.
If you want to draw objects over a background without significant overhead of redrawing the background, you could use Core Animation layers as I explained in my answer to this question.
Regardless of performance, this question comes up first when you search for how to invert a clip path, so if that's what you're looking for:
Using NSBezierPath addClip - How to invert clip
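For reference, a minimal sketch of the usual even-odd approach to inverting a clip, where roundedPath is a hypothetical variable holding the existing rounded NSBezierPath:

// Append a rect covering the whole view to the rounded path, then clip
// with the even-odd winding rule: the clip region becomes everything
// OUTSIDE the rounded shape.
NSBezierPath *inverted = [NSBezierPath bezierPathWithRect:self.bounds];
[inverted appendBezierPath:roundedPath];
[inverted setWindingRule:NSEvenOddWindingRule];

[NSGraphicsContext saveGraphicsState];
[inverted addClip];
// ... draw the background gradient here; it only fills outside roundedPath
[NSGraphicsContext restoreGraphicsState];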
If I use a standard NSWindow to host an NSOpenGLView that extends over the whole window frame, the two bottom corners of the window are automatically rounded.
When switching to NSBorderlessWindowMask, I have to handle the corner rounding myself.
I have already implemented a transparent custom NSWindow and a rounded custom NSView and they both work fine.
After that, I implemented a transparent NSOpenGLContext by setting NSOpenGLCPSurfaceOpacity to 0.
If I set a background colour for the OpenGL context, the view is drawn correctly and I get the desired rounded corners.
But, since the app is a movie player, I need to draw the texture corresponding to every movie frame.
When I do this (using glTexCoord2f and glVertex2f), the texture is drawn all the way into the corners, so the image extends outside the rounded corners and I lose the rounded look of my window.
What does the system do for a standard, non-NSBorderlessWindowMask window that I can't seem to reproduce?
What is the best way to round the corner of the texture while drawing it to the frame buffer?
You could apply the texture to geometry with rounded corners, use an additional rounded-corner alpha mask on the movie texture, or use the stencil test to round off the corners of your viewport; a sketch of the stencil approach follows.
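As an illustration of the stencil-test option, here is a hedged sketch; drawRoundedMaskGeometry() and drawMovieFrameQuad() are hypothetical helpers, and it assumes the pixel format was created with a stencil buffer:

glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: write the rounded-rect shape into the stencil buffer only.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawRoundedMaskGeometry();

// Pass 2: draw the movie frame only where the stencil was set.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawMovieFrameQuad();

glDisable(GL_STENCIL_TEST);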
I'm trying to draw an image with a certain color. To get this effect, first I draw the image (consisting of mostly white and black), and then I draw a rectangle over it in a specified color (let's say red). There are various compositing options available, but none are exactly what I want.
I want the colors (and alpha values) to be multiplied together; I believe this is sometimes called a "modulate" operation in other graphics systems. So the white values in the image will be multiplied by the red of the overlay rectangle to produce red, and the black values in the image will stay black, since black multiplied by anything is black.
In other words, R = S * D (result equals source multiplied by destination).
This is the code I'm working with now:
[image drawInRect:[self bounds]
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0];
NSColor *blend = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1.0];
[blend setFill];
NSRectFillUsingOperation(self.bounds, NSCompositePlusDarker);
NSCompositePlusDarker is NOT what I want, but it's close. (It's addition instead of multiplication).
Is there a way to do this in Quartz? It's very easy to do in OpenGL, for instance. I've looked at CIFilters, but they seem a bit cumbersome for this; if they're the only way, though, I would appreciate any sample code to point me in the right direction.
Yes.
Note that AppKit compositing operations and Quartz blend modes, though they have quite a few modes in common, are not interchangeable. kCGBlendModeMultiply has the same numeric value as NSCompositeCopy, so the latter (flat overwriting) is what will happen if you try to pass it to any of AppKit's drawing APIs.
So you'll need to use Quartz for every part of the fill. You can use NSGraphicsContext to get the CGContext you'll need, and you can ask your NSColor for its CGColor.
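A minimal sketch of the multiply fill, reusing the variables from the question. (The CGContext property of NSGraphicsContext requires OS X 10.10; on earlier systems, cast the graphicsPort instead.)

- (void)drawRect:(NSRect)dirtyRect {
    // Draw the base image normally first.
    [image drawInRect:self.bounds
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];

    CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
    CGContextSaveGState(ctx);

    // R = S * D: multiply the red fill with the pixels already drawn.
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    NSColor *blend = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1.0];
    CGContextSetFillColorWithColor(ctx, blend.CGColor);
    CGContextFillRect(ctx, NSRectToCGRect(self.bounds));

    CGContextRestoreGState(ctx);
}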