Is there a way to use a layer-backed NSView as the contentView of an NSDockTile? I've tried all sorts of tricks, but all I get is a transparent area. I also tried going a different route: getting an image out of the CALayer and using that for [NSApp setApplicationIconImage:], but no luck either - I think the issue there is creating an image representation for an offscreen view.
As usual, I got my answer soon after posting the question :) I'll post it here for future reference. I solved it by creating an NSImage out of the layer, as described in this Cocoa Is My Girlfriend blog post: http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The only missing piece is that, in order to have anything rendered, the view must be added to a window. Using the example code from the post, my solution is:
NSView *myView = ...
// The view has to live in a window (even an offscreen, borderless one) before its layer tree will render.
NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(-1000.0, -1000.0, 256.0, 256.0) styleMask:0 backing:NSBackingStoreNonretained defer:NO];
[window setContentView:myView];
// Render the layer tree into an offscreen bitmap context.
NSUInteger pixelsHigh = myView.bounds.size.height;
NSUInteger pixelsWide = myView.bounds.size.width;
NSUInteger bitmapBytesPerRow = pixelsWide * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
[myView.layer.presentationLayer renderInContext:context];
// Wrap the bitmap in an NSImage and hand it to the Dock.
CGImageRef image = CGBitmapContextCreateImage(context);
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CFRelease(image);
CGContextRelease(context);
NSImage *img = [[NSImage alloc] initWithData:[bitmap TIFFRepresentation]];
[NSApp setApplicationIconImage:img];
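Since the original goal was an NSDockTile, the same rendered image could also be wrapped in an NSImageView and used as the dock tile's contentView. An untested sketch, reusing the img variable from above:

// Untested sketch: drive the dock tile directly instead of replacing the application icon.
NSDockTile *dockTile = [NSApp dockTile];
NSImageView *tileView = [[NSImageView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 256.0, 256.0)];
tileView.image = img;
dockTile.contentView = tileView;
[dockTile display]; // the Dock only refreshes when -display is called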
How can I shift an NSImage horizontally so that the shifted pixels appear on the other side, making it look like a loop?
Currently I am using drawInRect:. Is there a CIFilter or a smarter way to do this?
- (CIImage *)image:(NSImage *)image shiftedBy:(CGFloat)shiftAmount
{
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                                     pixelsWide:width
                                                                     pixelsHigh:height
                                                                  bitsPerSample:8
                                                                samplesPerPixel:4
                                                                       hasAlpha:YES
                                                                       isPlanar:NO
                                                                 colorSpaceName:NSDeviceRGBColorSpace
                                                                    bytesPerRow:0
                                                                   bitsPerPixel:0];
    [rep setSize:NSMakeSize(width, height)];

    [NSGraphicsContext saveGraphicsState];
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:rep];
    [NSGraphicsContext setCurrentContext:context];

    // Split the image at shiftAmount: the left slice wraps around to the right edge,
    // and the right slice slides over to x = 0.
    CGRect rect0 = CGRectMake(0, 0, width, height);
    CGRect leftSourceRect, rightSourceRect;
    CGRectDivide(rect0, &leftSourceRect, &rightSourceRect, shiftAmount, CGRectMinXEdge);

    CGRect rightDestinationRect = CGRectOffset(leftSourceRect, width - rightSourceRect.origin.x, 0);
    CGRect leftDestinationRect = rightSourceRect;
    leftDestinationRect.origin.x = 0;

    [image drawInRect:leftDestinationRect fromRect:rightSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];
    [image drawInRect:rightDestinationRect fromRect:leftSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];

    [NSGraphicsContext restoreGraphicsState];
    return [[CIImage alloc] initWithBitmapImageRep:rep];
}
I tried it with a CIFilter, but it was 3-4x slower. The code is more readable, though.
- (CIImage *)image:(NSImage *)image shiftXBy:(CGFloat)shiftX YBy:(CGFloat)shiftY
{
    // Avoid calling TIFFRepresentation more than necessary; it makes the performance hit even bigger.
    CIImage *ciImage = [[CIImage alloc] initWithData:[image TIFFRepresentation]];

    CGAffineTransform xform = CGAffineTransformIdentity;
    NSValue *xformObj = [NSValue valueWithBytes:&xform objCType:@encode(CGAffineTransform)];

    // Tile the image infinitely, then crop out a width x height window offset by the shift.
    ciImage = [ciImage imageByApplyingFilter:@"CIAffineTile"
                         withInputParameters:@{kCIInputTransformKey : xformObj}];
    ciImage = [ciImage imageByCroppingToRect:CGRectMake(shiftX, shiftY, image.size.width, image.size.height)];
    return ciImage;
}
For optimum performance you need to use CALayer. Here's the basic concept:
Your NSView subclass should have wantsLayer and layerUsesCoreImageFilters set to YES.
Assign your image to the contents property of the view's layer (or add a new sublayer for it).
Create a CIAffineTile filter and add it to the layer's filters.
Now you can change the filter's values without reloading or redrawing the image, and it is all hardware accelerated. A rough sketch is below.
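A minimal, untested sketch of that setup, assuming a layer-backed NSView subclass and an existing cgImage; the filter name "tile" and the shiftX/shiftY variables are just placeholders:

// Inside the NSView subclass: enable layers and Core Image filters on them.
self.wantsLayer = YES;
self.layerUsesCoreImageFilters = YES;
self.layer.contents = (__bridge id)cgImage; // or put the image on a dedicated sublayer

CIFilter *tile = [CIFilter filterWithName:@"CIAffineTile"];
[tile setDefaults];
tile.name = @"tile"; // naming the filter lets us address it via a key path later
self.layer.filters = @[tile];

// Later, shift the image without redrawing it:
CGAffineTransform xform = CGAffineTransformMakeTranslation(shiftX, shiftY);
[self.layer setValue:[NSValue valueWithBytes:&xform objCType:@encode(CGAffineTransform)]
          forKeyPath:@"filters.tile.inputTransform"];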
I want to show an image in an NSView or in an NSImageView. In my header file I have
@interface FVView : NSView
{
    NSImageView *imageView;
}
@end
Here is what I have been trying to do in my implementation file:
- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];

    // (Here I get an image called fitsImage... then I do:)

    //Here I make the image
    CGImageRef cgImage = CGImageRetain([fitsImage CGImageScaledToSize:maxSize]);
    NSImage *imageR = [self imageFromCGImageRef:cgImage];
    [imageR lockFocus];

    //Here I have the view context
    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

    //Here I set the view dimensions
    CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
    [self.layer renderInContext:ctx];
    [imageR unlockFocus];
    CGContextDrawImage(ctx, renderRect, cgImage);
    CGImageRelease(cgImage);
}
I don't get anything in the NSView window when I run it. There are no errors at all; I just can't see what I'm doing wrong. My Xcode version is 5.1.1.
I'm trying to learn how to manipulate a CGImageRef and view it in a window or NSView.
Thank you.
I'm not quite sure what exactly your setup is. Drawing an image in a custom view is a separate thing from using an NSImageView. Also, a custom view that may (or may not) be layer-backed is different from a layer-hosting view.
You have a lot of the right elements, but they're all mixed up together. In no case do you have to lock focus on an NSImage. That's for drawing into an NSImage. Also, a custom view that subclasses from NSView doesn't have to call super in its -drawRect:. NSView doesn't draw anything.
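As an aside, -lockFocus is appropriate when you want to draw into an NSImage, roughly like this (a throwaway example; the size and color are arbitrary):

NSImage *target = [[NSImage alloc] initWithSize:NSMakeSize(64.0, 64.0)];
[target lockFocus];
[[NSColor redColor] set];
NSRectFill(NSMakeRect(0.0, 0.0, 64.0, 64.0));
[target unlockFocus];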
To draw an image in a custom view, try:
- (void) drawRect:(NSRect)dirtyRect
{
CGImageRef cgImage = /* ... */;
NSSize maxSize = /* ... */;
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
CGContextDrawImage(ctx, renderRect, cgImage);
CGImageRelease(cgImage);
}
If you have an NSImageView, then you don't need a custom view or any drawing method or code. Just do the following at the point where you obtain the image or the information necessary to generate it:
NSImageView* imageView = /* ... */; // Often an outlet to a view in a NIB rather than a local variable.
CGImageRef cgImage = /* ... */;
NSImage* image = [[NSImage alloc] initWithCGImage:cgImage size:/* ... */];
imageView.image = image;
CGImageRelease(cgImage);
If you're working with a layer-hosting view, you just need to set the CGImage as the layer's content. Again, you do this whenever you obtain the image or the information necessary to generate it. It's not in -drawRect:.
CALayer* layer = /* ... */; // Perhaps someView.layer
CGImageRef cgImage = /* ... */;
layer.contents = (__bridge id)cgImage;
CGImageRelease(cgImage);
So I've got a UISlider that I've been working on and I need it to have a custom image. I've got a nice @2x image which has plenty of pixels and looks great when I add it to a UIImageView, but the moment I use the same image as a replacement for the UISlider thumb image, it pixelates the heck out of it and makes the thumb of the slider look like crap. Any thoughts on how to remedy this?
Here is some of my sample code.
slider = [[UISlider alloc] initWithFrame:CGRectMake(sliderOffset, 29.5, siderW, 50)];
[slider addTarget:self action:@selector(sliderAction) forControlEvents:UIControlEventValueChanged];
[slider setBackgroundColor:[UIColor clearColor]];
slider.minimumValue = 0.0;
slider.maximumValue = 10000.0;
slider.continuous = YES;
slider.value = 3000.00;
[slider setThumbImage:[UIImage imageNamed:@"Slider@2x.png"] forState:UIControlStateNormal];
[slider setThumbImage:[UIImage imageNamed:@"Slider@2x.png"] forState:UIControlStateHighlighted];
Thanks in advance!
Here's how I solved it:
1) Subclassed UISlider
2) Created a clear image and set the slider thumb to that clear image.
[self setThumbImage:[UIImage imageNamed:@"ClearImageForOverride.png"] forState:UIControlStateNormal];
[self setThumbImage:[UIImage imageNamed:@"ClearImageForOverride.png"] forState:UIControlStateHighlighted];
3) Created a UIImageView and made that follow the center of the thumb.
float imageWH = PassedInHeightWidthFromSubClass;
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 1, imageWH, imageWH)];
imageView.image = [UIImage imageNamed:@"NewThumbImage"];
[self addSubview:imageView];
[self bringSubviewToFront:imageView];
// thumbRect, localImageViewCenter and sliderImageViewYOffset are computed elsewhere in the subclass
imageView.center = CGPointMake(thumbRect.origin.x + self.frame.origin.x + localImageViewCenter, self.frame.origin.y - sliderImageViewYOffset);
[self addTarget:self action:@selector(sliderValueChanged) forControlEvents:UIControlEventValueChanged];
4) Then, when the slider value changes, I tell the imageView where its center should be.
-(void)sliderValueChanged {
    CGRect trackRect = [self trackRectForBounds:self.bounds];
    CGRect thumbRect = [self thumbRectForBounds:self.bounds trackRect:trackRect value:self.value];
    imageView.center = CGPointMake(thumbRect.origin.x + self.frame.origin.x + localImageViewCenter, self.frame.origin.y - sliderImageViewYOffset);
}
Essentially, with this approach you can use an image of any size for the slider knob, and it looks great and works flawlessly.
I'm trying to make a custom animation for replacing one NSView with another.
For that reason I need to get an image of the NSView before it appears on the screen.
The view may contain layers and NSOpenGLView subviews, so standard options like initWithFocusedViewRect: and bitmapImageRepForCachingDisplayInRect: do not work well in this case (they did not capture the layers or OpenGL content well in my experiments).
I am looking for something like CGWindowListCreateImage that is able to "capture" an offscreen NSWindow, including layers and OpenGL content.
Any suggestions?
I created a category for this:
@implementation NSView (PecuniaAdditions)

/**
 * Returns an offscreen view containing all visual elements of this view for printing,
 * including CALayer content. Useful only for views that are layer-backed.
 */
- (NSView *)printViewForLayerBackedView
{
    NSRect bounds = self.bounds;
    int bitmapBytesPerRow = 4 * bounds.size.width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 bounds.size.width,
                                                 bounds.size.height,
                                                 8,
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context == NULL)
    {
        NSLog(@"printViewForLayerBackedView: Failed to create context.");
        return nil;
    }

    [[self layer] renderInContext: context];
    CGImageRef img = CGBitmapContextCreateImage(context);
    NSImage* image = [[NSImage alloc] initWithCGImage: img size: bounds.size];

    NSImageView* canvas = [[NSImageView alloc] initWithFrame: bounds];
    [canvas setImage: image];

    CFRelease(img);
    CFRelease(context);

    return canvas;
}

@end
This code is primarily for printing NSViews which contain layered child views, but it might help you too.
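Usage might look something like this (untested; myLayerBackedView is just a placeholder name):

NSView *snapshotView = [myLayerBackedView printViewForLayerBackedView];
NSImage *snapshot = [(NSImageView *)snapshotView image]; // the category actually returns an NSImageView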
I'd like to get a window that has a semi-transparent blurred background, just like what the Terminal can do. See this video, about 30 sec in, to see what I mean: http://www.youtube.com/watch?v=zo8KPRY6-Mk
See an image here: http://osxdaily.com/wp-content/uploads/2011/04/mac-os-x-lion-terminal.jpg
I've been googling for an hour and can't get anything to work. I believe I need to somehow create a Core Animation layer and add a background filter to it, but I've been unsuccessful so far... I just see the gray background of my window. Here's the code I've got so far:
// Get the content view -- everything but the titlebar.
NSView *theView = [[self window] contentView];
[theView setAlphaValue:0.5];
// Create core animation layer, with filter
CALayer *backgroundLayer = [CALayer layer];
[theView setWantsLayer:YES];
[theView setLayer:backgroundLayer];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[theView layer].backgroundFilters = [NSArray arrayWithObject:blurFilter];
[[theView layer] setBackgroundFilters:[NSArray arrayWithObject:blurFilter]];
Any tips or examples to do what I'm trying to do?
Thanks!
No need for layers and filters; NSWindow can do it itself:
[mywindow setOpaque:NO];
[mywindow setBackgroundColor: [NSColor colorWithCalibratedHue:0.0 saturation:0.0 brightness:0.2 alpha:0.5]];
Please do not use the following, as it will apply the alpha to your title bar as well (posted here just in case others need it):
[mywindow setOpaque:NO];
[mywindow setBackgroundColor: [NSColor blackColor]];
[mywindow setAlphaValue:0.5];
For the transparency use Jiulong Zhao's suggestion.
For a blurred background use this
The call, on an NSWindow:
[self enableBlurForWindow:self];
The function:
-(void)enableBlurForWindow:(NSWindow *)window
{
    //!!!! Uses private API - copied from http://blog.steventroughtonsmith.com/2008/03/using-core-image-filters-onunder.html
    CGSConnection thisConnection;
    uint32_t compositingFilter;
    int compositingType = 1; // Under the window

    /* Make a new connection to CoreGraphics */
    CGSNewConnection(NULL, &thisConnection);

    /* Create a CoreImage filter and set it up */
    CGSNewCIFilterByName(thisConnection, (CFStringRef)@"CIGaussianBlur", &compositingFilter);
    NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:2.0] forKey:@"inputRadius"];
    CGSSetCIFilterValuesFromDictionary(thisConnection, compositingFilter, (__bridge CFDictionaryRef)options);

    /* Now apply the filter to the window */
    CGSAddWindowFilter(thisConnection, [window windowNumber], compositingFilter, compositingType);
}
NB: It uses a private API
For those reading this in 2017, using Swift 4, and wanting to change the background alpha: you can add the following to your custom NSWindow class:
self.backgroundColor = NSColor.black
self.backgroundColor = NSColor.init(calibratedHue: 0, saturation: 0, brightness: 0, alpha: 0.2)
P.S. I do not need the blur effect yet; when I do, I'll update the answer.