Can NSView be made to use software rendering for CIImages? - cocoa

From a previous question,
CIImage drawing EXC_BAD_ACCESS,
I learned to work around a CoreImage issue by telling a CIContext to do software rendering. I am now trying to figure out a crasher that happens when AppKit tries to draw an NSImageView that I've set to display a CIImage using the code below:
- (void)setCIImage:(CIImage *)processedImage;
{
    NSSize size = [processedImage extent].size;
    if (size.width == 0) {
        [self setImage:nil];
        return;
    }

    NSData * pixelData = [[OMFaceRecognizer defaultRecognizer] imagePlanarFData:processedImage];
    LCDocument * document = [[[self window] windowController] document];
    [[NSNotificationCenter defaultCenter] postNotificationName:LCCapturedImageNotification
                                                        object:document
                                                      userInfo:@{ @"data": pixelData, @"size": [NSValue valueWithSize:size] }];

#if 1
    static dispatch_once_t onceToken;
    static CGColorSpaceRef colorSpace;
    static size_t bytesPerRow;
    dispatch_once(&onceToken, ^ {
        colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
        bytesPerRow = size.width * sizeof(float);
    });

    // For whatever bizarre reason, CoreGraphics uses big-endian floats (!)
    const float * data = [[[OMFaceRecognizer defaultRecognizer] byteswapPlanarFData:pixelData
                                                                        swapInPlace:NO] bytes];
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
    CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
                                                  size:size];
    CGImageRelease(renderedImage);
#else
    NSCIImageRep * rep = [NSCIImageRep imageRepWithCIImage:processedImage];
    NSImage * image = [[NSImage alloc] initWithSize:size];
    [image addRepresentation:rep];
#endif

    [self setImage:image];
}
Is there some way to get the NSImageView to use software rendering? I looked around in IB but I did not see anything that looked obviously promising…
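One way to take AppKit's CoreImage path out of the equation entirely is to pre-render through a CIContext created with kCIContextUseSoftwareRenderer, and hand the view a plain bitmap-backed NSImage. A minimal sketch, assuming a helper of my own naming (not part of the code above):

// Sketch: render the CIImage with a software CIContext up front, so the
// NSImageView only ever sees an ordinary bitmap-backed NSImage.
- (NSImage *)softwareRenderedImageFromCIImage:(CIImage *)ciImage
{
    CGRect extent = [ciImage extent];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, extent.size.width, extent.size.height,
                                                   8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    CIContext *ciContext = [CIContext contextWithCGContext:bitmapCtx
                                                   options:@{ kCIContextUseSoftwareRenderer: @YES }];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:extent];
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:extent.size];
    CGImageRelease(cgImage);
    CGContextRelease(bitmapCtx);
    CGColorSpaceRelease(colorSpace);
    return image;
}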

Related

AVFoundation image captured is dark

On OS X I use AVFoundation to capture an image from a USB camera. Everything works fine, but the image I get is darker compared to the live video.
Device capture configuration
-(BOOL)prepareCapture{
    captureSession = [[AVCaptureSession alloc] init];
    NSError *error;

    imageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSNumber * pixelFormat = [NSNumber numberWithInt:k32BGRAPixelFormat];
    [imageOutput setOutputSettings:[NSDictionary dictionaryWithObject:pixelFormat forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    videoOutput = [[AVCaptureMovieFileOutput alloc] init];

    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:MyVideoDevice error:&error];
    if (videoInput) {
        [captureSession beginConfiguration];
        [captureSession addInput:videoInput];
        [captureSession setSessionPreset:AVCaptureSessionPresetHigh];
        //[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
        [captureSession addOutput:imageOutput];
        [captureSession addOutput:videoOutput];
        [captureSession commitConfiguration];
    }
    else {
        // Handle the failure.
        return NO;
    }
    return YES;
}
Add view for live preview
-(void)settingPreview:(NSView*)View{
    // Attach preview to session
    previewView = View;
    CALayer *previewViewLayer = [previewView layer];
    [previewViewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
    AVCaptureVideoPreviewLayer *newPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
    [newPreviewLayer setFrame:[previewViewLayer bounds]];
    [newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
    [previewViewLayer addSublayer:newPreviewLayer];
    //[self setPreviewLayer:newPreviewLayer];
    [captureSession startRunning];
}
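Note that this method assumes the view passed in is already layer-backed; if it is not, [previewView layer] returns nil and the preview layer is silently dropped. A hedged call-site sketch (captureController stands in for whichever object owns the method):

// Hypothetical call site: make the view layer-backed before handing it over.
[View setWantsLayer:YES];
[captureController settingPreview:View];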
Code to capture the image
-(void)captureImage{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in imageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CFDictionaryRef exifAttachments =
                CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
            if (exifAttachments) {
                // Do something with the attachments.
            }
            // Continue as appropriate.
            // IMG is a global NSImage
            IMG = [self imageFromSampleBuffer:imageSampleBuffer];
            [[self delegate] imageReady:IMG];
        }];
}
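A side note, not from the original poster: AVFoundation does not guarantee that this completion handler runs on the main thread, so if imageReady: touches the UI, it is safer to bounce the callback onto the main queue. A hedged variant of the last two lines of the handler:

IMG = [self imageFromSampleBuffer:imageSampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
    [[self delegate] imageReady:IMG];
});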
Create an NSImage from the sample buffer data; I think the problem is here:
- (NSImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    //UIImage *image = [UIImage imageWithCGImage:quartzImage];
    NSImage * image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Solution found
The problem was in imageFromSampleBuffer. I used this code instead and the picture is perfect:
// Continue as appropriate.
//IMG = [self imageFromSampleBuffer:imageSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer) {
    CVBufferRetain(imageBuffer);
    NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
    IMG = [[NSImage alloc] initWithSize:[imageRep size]];
    [IMG addRepresentation:imageRep];
    CVBufferRelease(imageBuffer);
}
Code found in this answer
In my case, you still need to call captureStillImageAsynchronouslyFromConnection: multiple times to force the built-in camera to expose properly.
int primeCount = 8; //YMMV
for (int i = 0; i < primeCount; i++) {
    [imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {}];
}

[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    if (imageBuffer) {
        CVBufferRetain(imageBuffer);
        NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
        IMG = [[NSImage alloc] initWithSize:[imageRep size]];
        [IMG addRepresentation:imageRep];
    }
}];

Is it possible to print IKImageBrowserView In Cocoa programmatically?

I want to print an IKImageBrowserView together with its content images. I tried the following code:
if (code == NSOKButton) {
    NSPrintInfo *printInfo;
    NSPrintInfo *sharedInfo;
    NSPrintOperation *printOp;
    NSMutableDictionary *printInfoDict;
    NSMutableDictionary *sharedDict;

    sharedInfo = [NSPrintInfo sharedPrintInfo];
    sharedDict = [sharedInfo dictionary];
    printInfoDict = [NSMutableDictionary dictionaryWithDictionary:sharedDict];
    [printInfoDict setObject:NSPrintSaveJob forKey:NSPrintJobDisposition];
    [printInfoDict setObject:[sheet filename] forKey:NSPrintSavePath];

    printInfo = [[NSPrintInfo alloc] initWithDictionary:printInfoDict];
    [printInfo setHorizontalPagination:NSAutoPagination];
    [printInfo setVerticalPagination:NSAutoPagination];
    [printInfo setVerticallyCentered:NO];

    printOp = [NSPrintOperation printOperationWithView:imageBrowser
                                             printInfo:printInfo];
    [printOp setShowsProgressPanel:NO];
    [printOp runOperation];
}
because IKImageBrowserView inherits from NSView, but the print preview shows a null image. Please help me overcome this problem. Thanks in advance.
/*
 1) allocate a C buffer at the size of the visible rect of the image
    browser
*/
NSRect vRect = [imageBrowser visibleRect];
NSSize size = vRect.size;
NSLog(@"Size W = %f and H = %f", size.width, size.height);
void *buffer = malloc(size.width * size.height * 4);

// 2) read the pixels using OpenGL
[imageBrowser lockFocus];
glReadPixels(0,
             0,
             size.width,
             size.height,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             buffer);
[imageBrowser unlockFocus];

// 3) create a bitmap with those pixels
unsigned char *planes[2];
planes[0] = (unsigned char *)buffer;
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:planes pixelsWide:size.width
    pixelsHigh:size.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES
    isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bitmapFormat:0
    bytesPerRow:size.width*4 bitsPerPixel:32];

/*
 4) create a temporary image with this bitmap and set it flipped
    (because OpenGL and the AppKit don't have the same pixel coordinate
    system)
*/
NSImage *img = [[NSImage alloc] initWithSize:size];
[img addRepresentation:imageRep];
[img setFlipped:YES];
[imageRep release];

/*
 5) draw this temporary image into another image so that we get an
    image without any reference to our "buffer" buffer, so that we can
    release it after that
*/
NSImage *finalImage = [[NSImage alloc] initWithSize:size];
[finalImage lockFocus];
[img drawAtPoint:NSZeroPoint
        fromRect:NSMakeRect(0, 0, size.width, size.height)
       operation:NSCompositeCopy fraction:1.0];
[finalImage unlockFocus];

//[NSString stringWithFormat:@"/tmp/%@.tiff", marker]
NSData *imageData = [finalImage TIFFRepresentation];
NSString *writeToFileName = [NSString stringWithFormat:@"/Users/Desktop/%@.png", [NSDate date]];
[imageData writeToFile:writeToFileName atomically:NO];

// 6) release intermediate objects
[img release];
free(buffer);
After this I send imageData for print, which works great for me.
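For completeness, the final print step alluded to above could look something like this; the view wrapping is my own glue code, not part of the original answer:

// Hypothetical final step: wrap the captured bitmap in a view and print it.
NSImageView *printView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, size.width, size.height)];
[printView setImage:finalImage];
NSPrintOperation *printOp = [NSPrintOperation printOperationWithView:printView
                                                           printInfo:[NSPrintInfo sharedPrintInfo]];
[printOp runOperation];
[printView release];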

NSImage to cv::Mat and vice versa

While working with OpenCV I need to convert an NSImage to an OpenCV multi-channel 2D matrix (cv::Mat) and vice versa.
What's the best way to do it?
Greets,
Dom
Here's my outcome, which works pretty well.
NSImage+OpenCV.h:
//
//  NSImage+OpenCV.h
//
#import <AppKit/AppKit.h>
#import <opencv2/core/core.hpp> // added: the header uses cv::Mat, so importers must be Objective-C++ (.mm); assumes OpenCV 2.x header layout

@interface NSImage (NSImage_OpenCV) {
}

+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
-(id)initWithCVMat:(const cv::Mat&)cvMat;

@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end
NSImage+OpenCV.mm:
//
//  NSImage+OpenCV.mm
//
#import "NSImage+OpenCV.h"

static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
    return;
}

@implementation NSImage (NSImage_OpenCV)

-(CGImageRef)CGImage
{
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL /*data - pass NULL to let CG allocate the memory*/,
                                                   [self size].width,
                                                   [self size].height,
                                                   8 /*bitsPerComponent*/,
                                                   0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
                                                   [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                   kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
    [self drawInRect:NSMakeRect(0, 0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
    CGContextRelease(bitmapCtx);

    return cgImage;
}
-(cv::Mat)CVMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGImageRelease(imageRef);

    return cvMat;
}
-(cv::Mat)CVGrayscaleMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(imageRef);

    return cvMat;
}
+ (NSImage *)imageWithCVMat:(const cv::Mat&)cvMat
{
    return [[[NSImage alloc] initWithCVMat:cvMat] autorelease];
}
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent

    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:imageRef];
    NSImage *image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

@end
Example usage:
Just import it like this:
#import "NSImage+OpenCV.h"
And use it like this:
cv::Mat cvMat_test;
NSImage *image = [NSImage imageNamed:@"test.jpg"];
cvMat_test = [image CVMat];
[myImageView setImage:[NSImage imageWithCVMat:cvMat_test]];
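One caveat worth flagging (my note, not part of the original answer): most OpenCV routines assume BGR(A) channel order, while the bitmap context above fills the matrix as RGB(A). If red and blue come out swapped, a conversion such as the following should fix it:

// Hedged fix-up: swap RGBA (as written by the CGBitmapContext) to BGRA
// (what most OpenCV functions expect).
cv::cvtColor(cvMat_test, cvMat_test, CV_RGBA2BGRA);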
In -(id)initWithCVMat:(const cv::Mat&)cvMat, shouldn't you be adding the representation to self, rather than a new NSImage?
-(id)initWithCVMat:(const cv::Mat *)iMat
{
    if (self = [super init]) {
        NSData *tData = [NSData dataWithBytes:iMat->data length:iMat->elemSize() * iMat->total()];

        CGColorSpaceRef tColorSpace;
        if (iMat->elemSize() == 1) {
            tColorSpace = CGColorSpaceCreateDeviceGray();
        } else {
            tColorSpace = CGColorSpaceCreateDeviceRGB();
        }

        CGDataProviderRef tProvider = CGDataProviderCreateWithCFData((CFDataRef)tData);
        CGImageRef tImage = CGImageCreate(
            iMat->cols,
            iMat->rows,
            8,
            8 * iMat->elemSize(),
            iMat->step[0],
            tColorSpace,
            kCGImageAlphaNone | kCGBitmapByteOrderDefault,
            tProvider,
            NULL,
            false,
            kCGRenderingIntentDefault);

        NSBitmapImageRep *tBitmap = [[NSBitmapImageRep alloc] initWithCGImage:tImage];
        [self addRepresentation:tBitmap];
        [tBitmap release];

        CGImageRelease(tImage);
        CGDataProviderRelease(tProvider);
        CGColorSpaceRelease(tColorSpace);
    }
    return self;
}
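A hypothetical call site for this corrected initializer (note it takes a pointer rather than a reference):

// Round-trip example under the corrected signature.
cv::Mat mat = [image CVMat];
NSImage *roundTripped = [[NSImage alloc] initWithCVMat:&mat];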

Using CALayer's renderInContext: method with geometryFlipped

I have a CALayer (containerLayer) that I'm looking to convert to an NSBitmapImageRep before saving the data out as a flat file. containerLayer has its geometryFlipped property set to YES, and this seems to be causing issues. The PNG file that is ultimately generated renders the content correctly, but doesn't seem to take the flipped geometry into account. I'm obviously looking for test.png to accurately represent the content shown to the left.
Attached below is a screenshot of the problem and the code I'm working with.
- (NSBitmapImageRep *)exportToImageRep
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    int pixelsHigh = (int)[[self containerLayer] bounds].size.height;
    int pixelsWide = (int)[[self containerLayer] bounds].size.width;

    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    context = CGBitmapContextCreate(NULL,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        NSLog(@"Failed to create context.");
        return nil;
    }
    CGColorSpaceRelease(colorSpace);

    [[[self containerLayer] presentationLayer] renderInContext:context];
    CGImageRef img = CGBitmapContextCreateImage(context);
    NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:img];
    CFRelease(img);
    return bitmap;
}
For reference, here's the code that actually saves out the generated NSBitmapImageRep:
NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
[imageData writeToFile:@"test.png" atomically:NO];
You need to flip the destination context BEFORE you render into it.
Update your code with this, I have just solved the same problem:
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, pixelsHigh);
CGContextConcatCTM(context, flipVertical);
[[[self containerLayer] presentationLayer] renderInContext:context];
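To make the placement concrete (my annotation, not part of the answer): within -exportToImageRep above, the two transform lines belong right after the context == NULL guard, so every subsequent render sees the flipped CTM:

// Inside -exportToImageRep, immediately after the NULL check:
CGColorSpaceRelease(colorSpace);

CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, pixelsHigh);
CGContextConcatCTM(context, flipVertical);

[[[self containerLayer] presentationLayer] renderInContext:context];
CGImageRef img = CGBitmapContextCreateImage(context);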

reset image in tableView imageCell is blurry

I'm trying to show an 'info' icon during a cursor rollover on an NSTableView cell. I'm getting a copy of the cell's image, drawing the 'info' icon on it, and telling the cell to setImage: with this copy. Just drawing the icon would scale it, so it wouldn't be the same size in every row, since the images in the table are different sizes; I'm not having a problem with the scaling or with positioning the correctly sized icon.
My problem is that the replaced image is slightly blurry and its edges are not crisp on close examination. The lack of a clean edge makes it appear to 'move' slightly when the mouseEntered happens and the image is replaced.
I've tried a number of drawing techniques that don't use lockFocus on an NSImage (drawing in a CGBitmapContext, or using CIFilter compositing), and they produce the same results.
I'm using NSTableView's preparedCellAtColumn: because drawing in willDisplayCell: seems to be unpredictable (I read that somewhere but can't remember where).
Here is my preparedCellAtColumn: method:
- (NSCell *)preparedCellAtColumn:(NSInteger)column row:(NSInteger)row
{
    NSCell *cell = [super preparedCellAtColumn:column row:row];
    if ((self.mouseOverRow == row) && column == 0) {
        NSCell * imageCell = [super preparedCellAtColumn:0 row:row];
        NSImage *sourceImage = [[imageCell image] copy];

        NSRect cellRect = [self frameOfCellAtColumn:0 row:row];
        NSSize cellSize = cellRect.size;
        NSSize scaledSize = [sourceImage proportionalSizeForTargetSize:cellSize];

        NSImage *outputImage = [[NSImage alloc] initWithSize:cellSize];
        [outputImage setBackgroundColor:[NSColor clearColor]];

        NSPoint drawPoint = NSZeroPoint;
        drawPoint.x = (cellSize.width - scaledSize.width) * 0.5;
        drawPoint.y = (cellSize.height - scaledSize.height) * 0.5;
        NSRect drawRect = NSMakeRect(drawPoint.x, drawPoint.y, scaledSize.width, scaledSize.height);

        NSPoint infoPoint = drawPoint;
        infoPoint.x += NSWidth(drawRect) - self.infoSize.width;

        [outputImage lockFocus];
        [sourceImage drawInRect:drawRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        [self.infoImage drawAtPoint:infoPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        [outputImage unlockFocus];

        [cell setImage:outputImage];
    }
    return cell;
}
(This is the enclosed scaling method, from Scott Stevenson:)
- (NSSize)proportionalSizeForTargetSize:(NSSize)targetSize
{
    NSSize imageSize = [self size];
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;

    if (NSEqualSizes(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor;
        CGFloat heightFactor;

        widthFactor = targetWidth / width;
        heightFactor = targetHeight / height;

        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
    }
    return NSMakeSize(scaledWidth, scaledHeight);
}
I need to support 10.6, so I can't use the groovy new Lion methods.
Thanks for your consideration…
Your code is setting the origin of the image to non-integral coordinates. That means that the edge of the image will not be pixel-aligned, producing a fuzzy result.
You just need to do this:
NSPoint drawPoint = NSZeroPoint;
drawPoint.x = round((cellSize.width - scaledSize.width) * 0.5);
drawPoint.y = round((cellSize.height - scaledSize.height) * 0.5);
That will ensure the image origin has no fractional component.
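Extending the same idea a little (my addition, not the original answer): the scaled size itself can also be fractional, so snapping the whole destination rect keeps both edges pixel-aligned as well:

// Round the entire rect, not just its origin. Note NSIntegralRect rounds
// outward, so the size may grow by up to a pixel.
NSRect drawRect = NSIntegralRect(NSMakeRect(drawPoint.x, drawPoint.y,
                                            scaledSize.width, scaledSize.height));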
