UIImage scale and tile

Is it possible to show a UIImage in a UIImageView such that the image is both scaled and tiled?
For example, it should be scaled to 2 times its actual resolution but tiled across the full size of the UIImageView.
Is this doable?

This should work:
UIImage *image = [UIImage imageNamed:@"tile"];
UIImage *newImage = [UIImage imageWithCGImage:[image CGImage] scale:0.5 orientation:image.imageOrientation];
[self.myImageView setBackgroundColor:[UIColor colorWithPatternImage:newImage]];
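The scale:0.5 works because a UIImage's point size is its pixel size divided by its scale, so the half-scale copy reports twice the size and each tile in the pattern is drawn at double its native resolution. If you would rather not rely on that, a rough alternative (just a sketch, not part of the original answer) is to pre-render the tile at twice its size into a new bitmap and tile that instead:
UIImage *tile = [UIImage imageNamed:@"tile"];
CGSize scaledSize = CGSizeMake(tile.size.width * 2.0, tile.size.height * 2.0);
// Render the tile into a bitmap at twice its natural size (scale 0.0 = use the screen scale).
UIGraphicsBeginImageContextWithOptions(scaledSize, NO, 0.0);
[tile drawInRect:CGRectMake(0, 0, scaledSize.width, scaledSize.height)];
UIImage *scaledTile = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Tile the enlarged bitmap across the image view.
self.myImageView.backgroundColor = [UIColor colorWithPatternImage:scaledTile];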

Displaying an image with alpha channel on OSX

I am trying to display a 32-bit, 4-channel image in my window. The fourth channel is an alpha channel. As an experiment, I create a completely red image, where each pixel has the RGB value 255,0,0. For each pixel I also add an alpha value of 204.
What I expected to see was a completely red image with some transparency, but what I see instead is a completely opaque image with the values altered.
The code I am using:
NSRect windowRect = {0,0,200,200};
m_NSWindow = [[NSWindow alloc] initWithContentRect:windowRect styleMask:NSBorderlessWindowMask backing:NSBackingStoreBuffered defer:NO];
[m_NSWindow setTitle:@"overlayWindow"];
[m_NSWindow makeKeyAndOrderFront:nil];
g_imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0,0,200,200)];
[m_NSWindow.contentView addSubview:g_imageView];
[m_NSWindow setOpaque:NO];
[m_NSWindow setAlphaValue:1.0];
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:200
pixelsHigh:200
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bitmapFormat:NSAlphaNonpremultipliedBitmapFormat
bytesPerRow:(200*4)
bitsPerPixel:32];
memcpy(imageRep.bitmapData,m_paintBuffer.data,160000);
NSSize imageSize = NSMakeSize(200,200);
NSImage* myImage = [[NSImage alloc] initWithSize: imageSize];
[myImage addRepresentation:imageRep];
[g_imageView setImage:myImage];
where m_paintBuffer.data points to the raw pixel data for the image.
I'm not sure whether it is relevant to the question, but m_paintBuffer is a cv::Mat from OpenCV.
The window's content view is not the entirety of the window's content, as it were. Even a borderless window has (private) theme views surrounding and containing the content view.
In particular, the window has a background color of a light gray. This is drawn "behind" your content view and is opaque. However, you can change it by setting the window's backgroundColor property to, for example, [NSColor clearColor]. And that's what you should do to achieve your desired effect.
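A minimal sketch of that fix, using the window from the question (nothing else needs to change):
[m_NSWindow setOpaque:NO];
// Without this, the image's alpha is composited against the window's default
// opaque gray background and the transparency is lost.
[m_NSWindow setBackgroundColor:[NSColor clearColor]];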

What is the best way to display a single-page PDF as an image?

I would like to display a single-page PDF in an NSView.
So far, I have two solutions but they both have downsides. Can anyone help me with any of these downsides?
First solution: with NSImage and NSImageView
NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"pdf"];
NSImage * image = [[NSImage alloc] initWithContentsOfFile:path] ;
NSImageView * imageView = [[NSImageView alloc] init] ;
imageView.frame = NSMakeRect(0, 0, 2*image.size.width, 2*image.size.height) ;
imageView.image = image ;
imageView.imageScaling = NSImageScaleAxesIndependently ;
return imageView ;
Downsides:
the image is not anti-aliased
I don't understand why the factor of 2 is needed. Why is my PDF displayed smaller in an NSView than it is in the Finder?
Second solution: with PDFDocument and PDFView
NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"pdf"];
NSURL *urlPDF = [NSURL fileURLWithPath:path] ;
PDFDocument * myPDFDocument = [[PDFDocument alloc] initWithURL:urlPDF] ;
PDFView *myPDFView = [[PDFView alloc] init] ;
myPDFView.document = myPDFDocument ;
PDFPage * firstPage = [myPDFDocument pageAtIndex:0] ;
NSRect myBounds = [firstPage boundsForBox:kPDFDisplayBoxMediaBox] ;
NSRect myNewBounds = NSMakeRect(0, 0, myBounds.size.width*2, myBounds.size.height*2+5) ;
myPDFView.frame = myNewBounds ;
myPDFView.autoScales = YES ;
return myPDFView ;
Downsides:
I am able to select the text of my PDF, and I can zoom in and out. But I would like my PDF document to be displayed as an image, without these interactions.
I don't understand why the factor of 2 is needed. Why is my PDF displayed smaller in an NSView than it is in the Finder?
There are margins around my image.
I'm not seeing the problems you describe with NSImageView. I implemented a nib-based window and NSImageView. In my case I have an overlapping sibling view, so I turned CALayers on in the nib. I'm on 10.9.2. Sizing is normal (1x) and the text in my PDF is anti-aliased (sub-pixel, I think, since I see colors when I blow it up). I do have scaling set to NONE - maybe scaling is preventing anti-aliased text?
Otherwise my guess is there's something different about your views or PDF content. Try a simpler PDF and/or a nib-based view, and if it works, you can look for differences.
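If you really do want the page shown as a plain, non-interactive image, a third option (a sketch only, assuming rasterizing via an NSImage is acceptable; variable names follow the question's code) is to draw the first page into an NSImage yourself and hand that to an NSImageView:
PDFPage *firstPage = [myPDFDocument pageAtIndex:0];
NSRect pageBounds = [firstPage boundsForBox:kPDFDisplayBoxMediaBox];
NSImage *pageImage = [NSImage imageWithSize:pageBounds.size
                                    flipped:NO
                             drawingHandler:^BOOL(NSRect dstRect) {
    // Scale the page to fill whatever rect the image is drawn into,
    // so the vector content stays sharp at any size.
    NSAffineTransform *scale = [NSAffineTransform transform];
    [scale scaleXBy:NSWidth(dstRect) / NSWidth(pageBounds)
                yBy:NSHeight(dstRect) / NSHeight(pageBounds)];
    [scale concat];
    [firstPage drawWithBox:kPDFDisplayBoxMediaBox];
    return YES;
}];
NSImageView *pageView = [[NSImageView alloc] initWithFrame:pageBounds];
pageView.image = pageImage;
Because no PDFView is involved, there is no text selection, zooming, or PDFView margin.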

NSImage and PDFImageRep caching still draws at only one resolution

I have an NSImage, initialized with PDF data, created like this:
NSData* data = [view dataWithPDFInsideRect:view.bounds];
slideImage = [[NSImage alloc] initWithData:data];
The slideImage is now the size of the view.
When I try to render the image in an NSImageView, it only draws sharp when the image view is exactly the original size of the image, even if you clear the cache or change the image size. I tried setting the cacheMode to NSImageCacheNever, which also didn't work. The only image rep in the image is the PDF one, and when I render it to a PDF file it shows that it's vector.
As a workaround, I create an NSBitmapImageRep with a different size, call drawInRect on the original image, put the bitmap representation inside a new NSImage, and render that. It works, but it feels like it's not optimal:
- (NSBitmapImageRep*)drawToBitmapOfWidth:(NSInteger)width
andHeight:(NSInteger)height
withScale:(CGFloat)scale
{
NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:width * scale
pixelsHigh:height * scale
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:NSAlphaFirstBitmapFormat
bytesPerRow:0
bitsPerPixel:0
];
bmpImageRep = [bmpImageRep bitmapImageRepByRetaggingWithColorSpace:
[NSColorSpace sRGBColorSpace]];
[bmpImageRep setSize:NSMakeSize(width, height)];
NSGraphicsContext *bitmapContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bmpImageRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:bitmapContext];
[self drawInRect:NSMakeRect(0, 0, width, height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1];
[NSGraphicsContext restoreGraphicsState];
return bmpImageRep;
}
- (NSImage*)rasterizedImageForSize:(NSSize)size
{
NSImage* newImage = [[NSImage alloc] initWithSize:size];
NSBitmapImageRep* rep = [self drawToBitmapOfWidth:size.width andHeight:size.height withScale:1];
[newImage addRepresentation:rep];
return newImage;
}
How can I get the PDF to render nicely at any size without resorting to hacks like mine?
The point of NSImage is that you create it with the size (in points) that you want it to be. The backing representation can be vector based (e.g. PDF), and the NSImage is resolution independent (i.e. it supports different pixels per point), but the NSImage still has a fixed size (in points).
One of the points of an NSImage is that it will/can add a cached representation to speed up subsequent drawing.
If you need to draw a PDF at multiple sizes, and you want to use an NSImage, you're probably best off creating an NSImage for your given target size. If you want to, you can keep the NSPDFImageRep around -- I don't think it'll save you much.
We tried the following:
NSPDFImageRep* rep = self.representations.lastObject;
return [NSImage imageWithSize:size flipped:NO drawingHandler:^BOOL (NSRect dstRect)
{
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[rep drawInRect:dstRect fromRect:NSZeroRect operation:NSCompositeCopy fraction:1 respectFlipped:YES hints:@{
NSImageHintInterpolation: @(NSImageInterpolationHigh)
}];
return YES;
}];
And that does give you nice results when scaling up, but makes for blurry images when scaling down.
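One way to follow the advice above is to keep the NSPDFImageRep around and snapshot it into a fixed-size image whenever the target size changes, so the expensive PDF draw happens once per size. A rough sketch (the helper name is made up, and whether this beats the bitmap-rep workaround from the question is untested):
// Re-rasterize the retained PDF rep at a given point size.
- (NSImage *)snapshotOfRep:(NSPDFImageRep *)pdfRep atSize:(NSSize)size
{
    NSImage *snapshot = [[NSImage alloc] initWithSize:size];
    [snapshot lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [pdfRep drawInRect:NSMakeRect(0, 0, size.width, size.height)];
    [snapshot unlockFocus];
    return snapshot;
}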

How come IKImageBrowserView can resize images so much faster than I can?

This is my image resize code:
CALayer *newCALayer = [[CALayer layer] retain];
NSImage* image = [[NSImage alloc] initWithData:[NSData dataWithContentsOfFile:path]];
CGImageRef newCGImageFullResolution = [image CGImageForProposedRect:nil context:nil hints:nil];
CGContextRef context = CGBitmapContextCreate(NULL, drawRect.size.width, drawRect.size.height,
CGImageGetBitsPerComponent(newCGImageFullResolution),
CGImageGetBytesPerRow(newCGImageFullResolution),
CGImageGetColorSpace(newCGImageFullResolution),
CGImageGetAlphaInfo(newCGImageFullResolution));
CGContextDrawImage(context, CGRectMake(0, 0, drawRect.size.width, drawRect.size.height), newCGImageFullResolution);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);
newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
newCALayer.contentsGravity = kCAGravityResizeAspect;
newCALayer.opacity = 0.0;
newCALayer.anchorPoint = CGPointMake(0.0f,0.0f);
newCALayer.frame = CGRectMake( 0.0,
0.0,
[Singleton sharedSingleton].fullscreenRect.size.width,
[Singleton sharedSingleton].fullscreenRect.size.height);
[newCALayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
//CGImageRelease(newCGImageFullResolution); (bonus points if you can explain why I can't release this! I mean, I can release the scaled image ok??)
CGContextRelease(context);
[image release];
I am doing all of this from a background thread in order to preload pictures so my GUI feels snappy. It took some work getting synchronization and whatnot set up so the CALayers end up in view.
But I believe the term for describing how fast this is would be "it's a dog".
Compare this to IKImageView: that thing flings up thumbnails of images faster than I can scroll.
Does anybody have some suggestions for how to handle this better than I am doing it now?
In other words, my problem is that I want to have a super-fast UX. I believe the way to accomplish this is by preloading things to CALayers (this may be wrong? I tried NSImageView and some IK-stuff, but at least CALayer is better than that).
ImageKit is probably using CGImageSourceCreateThumbnailAtIndex() to quickly get an image appropriate to the destination, rather than reading in the entire image file.
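For reference, a minimal sketch of what that might look like (path, maxPixelSize, and newCALayer are borrowed from the question's code; this is an illustration, not necessarily what ImageKit does internally):
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,   // honor EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize)
};
// Decodes only enough of the file to produce a thumbnail of the requested size.
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
newCALayer.contents = (id)thumbnail;   // the layer retains the image
CGImageRelease(thumbnail);
CFRelease(source);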
Here:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
[image setScalesWhenResized:YES]; // *
[image setDataRetained:YES]; // *
[image setSize:desiredNewSize];
Then use the image as it is.
As for why your app is slow, run it under Instruments. That will tell you specifically where you are spending the majority of the processor time you use—it may not be in your scaling code after all.
*Since 10.6, these messages do nothing useful and are deprecated, so you can omit them if you are requiring Snow Leopard or later.

Clearing the alpha channel of an NSImage

It can be done by mallocing a temporary bitmap with 32 bits per pixel, clearing the alpha component with a for loop, and finally turning it back into an NSImage again.
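For what it's worth, that brute-force route might look roughly like this (a sketch only; width, height, and the step that copies the source pixels in are assumed):
// Non-premultiplied RGBA rep, so zeroing alpha does not wipe out the colors.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                    bitmapFormat:NSAlphaNonpremultipliedBitmapFormat
                     bytesPerRow:0
                    bitsPerPixel:0];
// ... copy the source image's pixels into rep.bitmapData here ...
unsigned char *pixels = [rep bitmapData];
NSInteger rowBytes = [rep bytesPerRow];
for (NSInteger y = 0; y < height; y++) {
    unsigned char *row = pixels + y * rowBytes;
    for (NSInteger x = 0; x < width; x++) {
        row[x * 4 + 3] = 0;   // RGBA layout: byte 3 is the alpha component
    }
}
NSImage *result = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[result addRepresentation:rep];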
I suspect it can be done in a simpler way, using a clever combination of NSColor and NSCompositingOperation. Or perhaps the image needs to be composited with itself using drawAtPoint.
My code looks like this:
NSImage* img = some image with RGB and Alpha;
NSRect rect = some rect inside the image;
[img lockFocus];
[[NSColor clearColor] set];
NSRectFillUsingOperation(rect, NSCompositeXOR);
[img unlockFocus];
NOTE: Setting the alpha channel to 1 can be done by using a blackColor with NSCompositePlusLighter.
What is the secret in clearing the alpha channel?
It won't be fast but this will work as well:
NSImage *newImage = [[NSImage alloc] initWithSize:[srcImage size]];
[newImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0,0,[newImage size].width, [newImage size].height));
[srcImage compositeToPoint:NSZeroPoint operation:NSCompositeCopy];
[newImage unlockFocus];
Please read the AppKit release notes on the subject of image mutability. NSImage should basically be treated as immutable.
All of the pixel formats supported in graphics contexts have premultiplied alpha. If the alpha channel is zero, the other channels have to be zero too.
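To see the premultiplication concretely, here is a small standalone check (purely illustrative):
// Draw 50%-alpha red into a premultiplied RGBA bitmap and inspect the stored bytes.
unsigned char pixel[4] = {0};
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
                                         kCGImageAlphaPremultipliedLast);
CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 0.5);
CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));
// pixel is now roughly {128, 0, 0, 128}: the red channel is stored already
// multiplied by the alpha, so a zero alpha forces the color channels to zero.
CGContextRelease(ctx);
CGColorSpaceRelease(space);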
