I'm trying to create an ICNS file (including a 1024x1024 image) programmatically. Currently I create an NSImage, then create CGImageRef objects at the appropriate resolutions, and finally add them to the icon using CGImageDestinationAddImage(). Peter Hosey has already helped me create '@2x' images, but the sizes of the images refuse to be set.
This is the code (still a bit messy; sourcefile is the path to the source image):
NSSize sizes[10];
sizes[0] = NSMakeSize(1024,1024);
sizes[1] = NSMakeSize(512,512);
sizes[2] = NSMakeSize(512,512);
sizes[3] = NSMakeSize(256,256);
sizes[4] = NSMakeSize(256,256);
sizes[5] = NSMakeSize(128,128);
sizes[6] = NSMakeSize(64,64);
sizes[7] = NSMakeSize(32,32);
sizes[8] = NSMakeSize(32,32);
sizes[9] = NSMakeSize(16,16);
int count = 0;
for (int i = 0; i < 10; i++) {
    if ([[NSUserDefaults standardUserDefaults] boolForKey:[NSString stringWithFormat:@"Size%i", i+1]]) count++;
}
NSURL *fileURL = [NSURL fileURLWithPath:aPath];
// Create icns
CGImageDestinationRef dr = CGImageDestinationCreateWithURL((CFURLRef)fileURL, kUTTypeAppleICNS , count, NULL);
NSImage *img = [[NSImage alloc] initWithContentsOfFile:sourcefile];
for (int i = 0; i < 10; i++) {
    if ([[NSUserDefaults standardUserDefaults] boolForKey:[NSString stringWithFormat:@"Size%i", i+1]]) {
        // Create dictionary
        BOOL is2X = true;
        if (i == 1 || i == 3 || i == 5 || i == 7 || i == 9) is2X = false;
        int dpi = 144, size = (int)(sizes[i].width/2);
        if (!is2X) { dpi = 72; size = sizes[i].width; }
        [img setSize:NSMakeSize(size, size)];
        for (NSImageRep *rep in [img representations]) [rep setSize:NSMakeSize(size, size)];
        const void *keys[2] = {kCGImagePropertyDPIWidth, kCGImagePropertyDPIHeight};
        const void *values[2] = {CFNumberCreate(0, kCFNumberSInt32Type, &dpi), CFNumberCreate(0, kCFNumberSInt32Type, &dpi)};
        CFDictionaryRef imgprops = CFDictionaryCreate(NULL, keys, values, 2, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        // Add image
        NSRect prect = NSMakeRect(0, 0, size, size);
        CGImageRef i1 = [img CGImageForProposedRect:&prect context:nil hints:nil];
        CGImageDestinationAddImage(dr, i1, imgprops);
    }
}
CGImageDestinationFinalize(dr);
CFRelease(dr);
size is the width or height that the current image should have. dpi is 144 if we're making an '@2x' image, otherwise 72. I've verified these values with NSLog.
The images in the resulting ICNS file are all the same size as the input image. If the size of the input image is 1024x1024, ImageIO complains:
ImageIO: _CGImagePluginWriteICNS unsupported image size (1024 x 1024) - scaling factor: 1
The above error is displayed every time the dpi is 72 and the size is 1024x1024.
I need to know how to set the size of the CGImage that is to be added to the ICNS file.
EDIT: I logged the images:
2012-12-31 12:48:51.281 Eicon[912:680f] |NSImage 0x101b4caf0
Size={512, 512} Reps=(
"NSBitmapImageRep 0x10380b900 Size={512, 512} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=1024x1024 Alpha=YES
Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting)
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.058 Eicon[912:680f] |NSImage 0x101b4caf0
Size={512, 512} Reps=(
"NSBitmapImageRep 0x10380b900 Size={512, 512} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO
Format=2 CurrentBacking=|CGImageRef: 0x101c14630|
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.111 Eicon[912:680f] |NSImage 0x101b4caf0
Size={256, 256} Reps=(
"NSBitmapImageRep 0x10380b900 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO
Format=2 CurrentBacking=|CGImageRef: 0x101c14630|
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.238 Eicon[912:680f] |NSImage 0x101b4caf0
Size={256, 256} Reps=(
"NSBitmapImageRep 0x10380b900 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO
Format=2 CurrentBacking=|CGImageRef: 0x101c14630|
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.309 Eicon[912:680f] |NSImage 0x101b4caf0
Size={128, 128} Reps=(
"NSBitmapImageRep 0x10380b900 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO
Format=2 CurrentBacking=|CGImageRef: 0x101c14630|
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.409 Eicon[912:680f] |NSImage 0x101b4caf0
Size={128, 128} Reps=(
"NSBitmapImageRep 0x10380b900 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO
Format=2 CurrentBacking=|CGImageRef: 0x101c14630|
CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.534 Eicon[912:680f] |NSImage 0x101b4caf0 Size={32,
32} Reps=(
"NSBitmapImageRep 0x10380b900 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO Format=2
CurrentBacking=|CGImageRef: 0x101c14630| CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.616 Eicon[912:680f] |NSImage 0x101b4caf0 Size={32,
32} Reps=(
"NSBitmapImageRep 0x10380b900 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO Format=2
CurrentBacking=|CGImageRef: 0x101c14630| CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.729 Eicon[912:680f] |NSImage 0x101b4caf0 Size={16,
16} Reps=(
"NSBitmapImageRep 0x10380b900 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO Format=2
CurrentBacking=|CGImageRef: 0x101c14630| CGImageSource=0x10380ae70"
)|
2012-12-31 12:48:52.864 Eicon[912:680f] |NSImage 0x101b4caf0 Size={16,
16} Reps=(
"NSBitmapImageRep 0x10380b900 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=8 BPP=32 Pixels=1024x1024 Alpha=YES Planar=NO Format=2
CurrentBacking=|CGImageRef: 0x101c14630| CGImageSource=0x10380ae70"
)|
The error message is correct. You're putting in images of a size that is not supported by the IconFamily format. Specifically, from your output:
2012-12-26 13:48:57.682 Eicon[1131:1b0f] |NSImage 0x1025233b0 Size={11.52, 11.52} Reps=( "NSBitmapImageRep 0x10421fc30 Size={11.52, 11.52} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=1024x1024 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x104221170"
11.52 points is not a valid size for any element of an IconFamily. You need to find out why this image and rep have that size.
A couple of other things:
As I told you in the other answer, you don't need to change the pixel size of the representation. Leave the pixel size alone; set the size (point size) of the rep and the image (preferably to something valid).
The -[NSImage initWithSize:] documentation says:
It is permissible to initialize the receiver by passing a size of (0.0, 0.0); however, the receiver’s size must be set to a non-zero value before the NSImage object is used or an exception will be raised.
You are not setting either object's size, which is what you need to do. (I'm surprised you're not getting an exception about this like the documentation promises.)
As I mentioned on your other question, there is no 1024-point element anymore; the correct specification for a 1024-by-1024-pixel element is 512 points @2x. That's a size (of both image and rep) of (NSSize){ 512.0, 512.0 } (points), with the rep being 1024 pixelsWide and 1024 pixelsHigh.
Looks like I was missing one key ingredient before. Here it is.
The CGImage that you give to the CGImageDestination doesn't have a point size associated with it—only NSImages and NSImageReps have that. The CGImage only has a pixel size; nothing to indicate the image's physical size or resolution.
To tell the CGImageDestination whether a given CGImage is meant to be @1x or @2x, you need to create a dictionary that gives the image's DPI:
NSDictionary *imageProps1x = @{
    (__bridge NSString *)kCGImagePropertyDPIWidth: @72.0,
    (__bridge NSString *)kCGImagePropertyDPIHeight: @72.0,
};
NSDictionary *imageProps2x = @{
    (__bridge NSString *)kCGImagePropertyDPIWidth: @144.0,
    (__bridge NSString *)kCGImagePropertyDPIHeight: @144.0,
};
Pass the correct dictionary as the last argument to CGImageDestinationAddImage.
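For example (a sketch only; `dest` and `cgImage1024px` are placeholder names for your CGImageDestinationRef and the 1024-pixel CGImage):

```objc
// Adding the 1024-pixel image with the @2x DPI dictionary, so ImageIO
// records it as the 512-point @2x element of the icon family.
// `dest` and `cgImage1024px` are placeholders, not names from the code above.
CGImageDestinationAddImage(dest, cgImage1024px,
                           (__bridge CFDictionaryRef)imageProps2x);
CGImageDestinationFinalize(dest);
```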
I'd like a way to tell whether a file's icon only provides a 32 x 32 image or if it has modern large icon sizes (512 x 512), so that I know which upscale method to use when enlarging.
I've found that many cross-platform apps still only provide 32 x 32 size Mac icons, which tend to look terrible when blown up to larger sizes, unless upscaled with a nearest neighbour method. On the other hand, modern large icons tend to look terrible when upscaled with nearest neighbour. In my QuickLook extension, I get icon images using [[NSWorkspace sharedWorkspace] iconForFile:path] which always returns an NSImage of size 32 x 32, but provides no clue to what the actual largest icon size available is.
If I iterate through its image representations, the largest one is always 2048 x 2048 pixels.
Is there a way to find out?
You need to check the image representations. If it's an NSPDFImageRep, you can just call [image setSize:NSMakeSize(512,512)];
The easiest way to get the largest bitmap representation:
NSImageRep *imgRep = [image bestRepresentationForRect:NSMakeRect(0, 0, 2048, 2048) context:nil hints:nil];
NSLog(@"%@", NSStringFromSize([imgRep size]));
Alternatively check all given bitmap representations.
NSURL *URL;
CFErrorRef error;
URL = (__bridge_transfer NSURL *)LSCopyDefaultApplicationURLForContentType(kUTTypeVCard, kLSRolesViewer, &error);
NSImage *image = [[NSWorkspace sharedWorkspace] iconForFile:[URL path]];
NSArray<NSImageRep *> *representations = [image representations];
NSUInteger indexOfLargest = 0;
NSUInteger surfaceArea = 0;
for (NSUInteger i = 0; i < [representations count]; i++) {
    NSImageRep *rep = representations[i];
    if ([rep size].width * [rep size].height >= surfaceArea) {
        surfaceArea = [rep size].width * [rep size].height;
        indexOfLargest = i;
    }
}
NSLog(@"Representations %@", representations);
NSLog(@"Largest index is: %lu", (unsigned long)indexOfLargest);
Console output:
2021-07-16 19:02:07.816423+0200 testImages[13756:755791] Representations (
"NSISIconImageRep 0x600002abe2b0 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=0 Pixels=32x32 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe0d0 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=0 Pixels=32x32 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe1c0 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=0 Pixels=64x64 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe3a0 Size={32, 32} ColorSpace=Generic RGB colorspace BPS=0 Pixels=64x64 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe3f0 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=0 Pixels=16x16 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe440 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=0 Pixels=16x16 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe490 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=0 Pixels=32x32 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe530 Size={16, 16} ColorSpace=Generic RGB colorspace BPS=0 Pixels=32x32 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe580 Size={18, 18} ColorSpace=Generic RGB colorspace BPS=0 Pixels=18x18 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe5d0 Size={18, 18} ColorSpace=Generic RGB colorspace BPS=0 Pixels=18x18 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe620 Size={18, 18} ColorSpace=Generic RGB colorspace BPS=0 Pixels=36x36 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe4e0 Size={18, 18} ColorSpace=Generic RGB colorspace BPS=0 Pixels=36x36 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe670 Size={24, 24} ColorSpace=Generic RGB colorspace BPS=0 Pixels=24x24 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe6c0 Size={24, 24} ColorSpace=Generic RGB colorspace BPS=0 Pixels=24x24 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe710 Size={24, 24} ColorSpace=Generic RGB colorspace BPS=0 Pixels=48x48 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe760 Size={24, 24} ColorSpace=Generic RGB colorspace BPS=0 Pixels=48x48 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe7b0 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=0 Pixels=128x128 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe800 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=0 Pixels=128x128 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe850 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=0 Pixels=256x256 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe8a0 Size={128, 128} ColorSpace=Generic RGB colorspace BPS=0 Pixels=256x256 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe8f0 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=0 Pixels=256x256 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe940 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=0 Pixels=256x256 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abe990 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=0 Pixels=512x512 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abe9e0 Size={256, 256} ColorSpace=Generic RGB colorspace BPS=0 Pixels=512x512 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abea30 Size={512, 512} ColorSpace=Generic RGB colorspace BPS=0 Pixels=512x512 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abea80 Size={512, 512} ColorSpace=Generic RGB colorspace BPS=0 Pixels=512x512 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abead0 Size={512, 512} ColorSpace=Generic RGB colorspace BPS=0 Pixels=1024x1024 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abeb20 Size={512, 512} ColorSpace=Generic RGB colorspace BPS=0 Pixels=1024x1024 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abeb70 Size={1024, 1024} ColorSpace=Generic RGB colorspace BPS=0 Pixels=1024x1024 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abebc0 Size={1024, 1024} ColorSpace=Generic RGB colorspace BPS=0 Pixels=1024x1024 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua",
"NSISIconImageRep 0x600002abec10 Size={1024, 1024} ColorSpace=Generic RGB colorspace BPS=0 Pixels=2048x2048 Alpha=NO AppearanceName=NSAppearanceNameAqua",
"NSISIconImageRep 0x600002abec60 Size={1024, 1024} ColorSpace=Generic RGB colorspace BPS=0 Pixels=2048x2048 Alpha=NO AppearanceName=NSAppearanceNameDarkAqua"
)
2021-07-16 19:02:07.854533+0200 testImages[13756:755791] Largest index is: 31
If you want to have access to original icon representations:
NSImage *image = nil;
IconRef iconRef = NULL;
OSErr err = GetIconRefFromTypeInfo(0, 0, CFSTR("pdf"), 0, 0, &iconRef);
if (err == noErr && iconRef) {
    IconFamilyHandle iconFamily = NULL;
    err = IconRefToIconFamily(iconRef, kSelectorAllAvailableData, &iconFamily);
    ReleaseIconRef(iconRef);
    if (err == noErr && iconFamily) {
        NSData *data = [NSData dataWithBytes:*iconFamily length:GetHandleSize((Handle)iconFamily)];
        image = [[NSImage alloc] initWithData:data];
    }
}
Console:
2021-07-19 13:35:33.716142+0200 testImages2[10434:559291] Representations (
"NSBitmapImageRep 0x60000016e8b0 Size={512, 512} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=1024x1024 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x60000014bc60 Size={256, 256} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=512x512 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x60000014bdb0 Size={512, 512} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=512x512 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x60000014bf00 Size={128, 128} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=256x256 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x60000014ca10 Size={256, 256} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=256x256 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x600000144000 Size={128, 128} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=128x128 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x6000001441c0 Size={32, 32} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=64x64 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x600000144310 Size={16, 16} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=32x32 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x6000001443f0 Size={32, 32} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=32x32 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0",
"NSBitmapImageRep 0x6000001444d0 Size={16, 16} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=16x16 Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600002a3eca0"
)
I want to know whether it's possible to remove only the black dots in the image.
Here are two methods:
Method #1: Contour filtering
We convert the image to grayscale, apply Otsu's threshold to get a binary image, then find contours and filter using a minimum contour area. We remove the black dots by filling in the small contours, effectively erasing them.
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find contours and fill in the small ones (the dots)
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    if cv2.contourArea(c) < 10:
        cv2.drawContours(thresh, [c], -1, (0,0,0), -1)

result = 255 - thresh
cv2.imshow('result', result)
cv2.waitKey()
Method #2: Morphological operations
Similarly, we convert to grayscale, then apply Otsu's threshold. From here we create a kernel and perform a morphological open to remove the dots.
import cv2
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = 255 - cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
cv2.imshow('opening', opening)
cv2.waitKey()
So I have an NSImage *startingImage
It is represented by an NSBitmapImageRep with a gray colorspace
I need to invert the colors on it, so I convert it to a CIImage
CIImage *startingCIImage = [[CIImage alloc] initWithBitmapImageRep:(NSBitmapImageRep *)[startingImage representations][0]];
CIFilter *invertColorFilter = [CIFilter filterWithName:NEVER_TRANSLATE(@"CIColorInvert")];
[invertColorFilter setValue:startingCIImage forKey:NEVER_TRANSLATE(@"inputImage")];
CIImage *outputImage = [invertColorFilter valueForKey:NEVER_TRANSLATE(@"outputImage")];
If I view the outputImage at this point, it is exactly what I expect, the same image except with inverted colors.
I then convert it back into an NSImage like so:
NSBitmapImageRep *finalImageRep = [[NSBitmapImageRep alloc] initWithCIImage:outputImage];
NSImage *finalImage = [[NSImage alloc] initWithSize:[finalImageRep size]];
[finalImage addRepresentation:finalImageRep];
Here's my issue... My original NSImage has a Gray colorspace, and 8 bits per pixel.
<NSImage 0x610000071440 Size={500, 440} Reps=(
"NSBitmapImageRep 0x6100002a1800 Size={500, 440} ColorSpace=Device Gray colorspace BPS=8 BPP=8 Pixels=500x440 Alpha=NO Planar=NO Format=0
CurrentBacking=<CGImageRef: 0x6100001ab0c0>" )>
However, after I convert everything, and log out the image, this is what I have
<NSImage 0x61800127e540 Size={500, 440} Reps=(
"NSBitmapImageRep 0x6080000b8cc0 Size={500, 440} ColorSpace=ASUS PB278 colorspace BPS=8 BPP=32 Pixels=500x440 Alpha=YES Planar=NO
Format=0 CurrentBacking=<CGImageRef: 0x6180001a3f00>" )>
And as you may know, NSBitmapImageRep is meant to be immutable, and when I try setColorSpaceName or setAlpha, the image ends up just being a black box.
Is there something I'm missing so that I can convert my NSImage into a CIImage, invert the black and white, then convert back into an NSImage?
Maybe you could replace the color space at the end:
NSBitmapImageRep *fixedRep = [finalImageRep bitmapImageRepByConvertingToColorSpace:[startingImageRep colorSpace]
                                                                   renderingIntent:NSColorRenderingIntentDefault];
I am trying to create an NSImage that is exactly 200 x 300 pixels large from the contents of another NSImage. I'm not just scaling, but taking it from a chunk of a much larger image.
The resulting image looks just like the pixels I want. However, there are too many. The NSImage reports a size of 200 x 300, and the image representations report a size of 200 x 300, but the image representations report a number of pixels twice that : 400 x 600. When I save this image representation to a file, I get an image that's 400 x 600.
Here's how I am doing it:
NSRect destRect = NSMakeRect(0, 0, 200, 300);
NSImage *destImage = [[NSImage alloc] initWithSize:destRect.size];
// lock focus, set interpolation
[destImage lockFocus];
NSImageInterpolation oldInterpolation = [[NSGraphicsContext currentContext] imageInterpolation];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[_image drawInRect:destRect fromRect:srcRect operation:NSCompositeCopy fraction:1.0];
// restore interpolation before unlocking, while the image's context is still current
[[NSGraphicsContext currentContext] setImageInterpolation:oldInterpolation];
[destImage unlockFocus];
NSData *tiffImageData = [destImage TIFFRepresentation];
NSBitmapImageRep *tiffImageRep = [NSBitmapImageRep imageRepWithData:tiffImageData];
In the debugger, you can see the NSBitmapImageRep has the right size, but twice the number of pixels.
po tiffImageRep
(NSBitmapImageRep *) $5 = 0x000000010301ad80 NSBitmapImageRep 0x10301ad80 Size={200, 300} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) **Pixels=400x600** Alpha=YES Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x10f353a40
So, when I save it to disk, I get an image that is 400 x 600, not 200 x 300. How do I fix this?
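One way to guarantee an exact pixel size (a sketch under the assumption that lockFocus is picking up a 2x backing scale; `_image` and `srcRect` are the names from the code above) is to skip lockFocus and draw into an NSBitmapImageRep whose pixel dimensions are set explicitly:

```objc
// Render into a rep with explicit pixel dimensions so the result is
// 200x300 pixels regardless of the screen's backing scale factor.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL pixelsWide:200 pixelsHigh:300
               bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[_image drawInRect:NSMakeRect(0, 0, 200, 300) fromRect:srcRect
         operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *tiffData = [rep TIFFRepresentation];  // 200x300 pixels on disk
```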
I'm decoding video with the ffmpeg libraries and storing the decoded frames in an array. Now I want to draw these frames on the screen with OpenGL. I googled and found that Apple suggests using the GL_APPLE_ycbcr_422 format for efficient drawing, so I switched frame decoding in ffmpeg to the PIX_FMT_YUYV422 format (packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr), which seems to be the equivalent of GL_YCBCR_422_APPLE in OpenGL. Now I'm trying to draw frames to the surface with this code:
GLsizei width = 2, height = 2;
uint8_t data[8] = {128,200,123,10,5,13,54,180};
GLuint texture_name;
glEnable(GL_TEXTURE_RECTANGLE_ARB);
assert(glGetError() == GL_NO_ERROR);
glGenTextures (1,&texture_name);
assert(glGetError() == GL_NO_ERROR);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texture_name);
assert(glGetError() == GL_NO_ERROR);
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_ARB, width * height * 2, (void*)data);
assert(glGetError() == GL_NO_ERROR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE , GL_STORAGE_SHARED_APPLE);
assert(glGetError() == GL_NO_ERROR);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
assert(glGetError() == GL_NO_ERROR);
// not sure about code above
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
assert(glGetError() == GL_NO_ERROR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
assert(glGetError() == GL_NO_ERROR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
assert(glGetError() == GL_NO_ERROR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
assert(glGetError() == GL_NO_ERROR);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
assert(glGetError() == GL_NO_ERROR);
// end
glViewport(0, 0, width,height);
assert(glGetError() == GL_NO_ERROR);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
assert(glGetError() == GL_NO_ERROR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, width, height, 0,
GL_YCBCR_422_APPLE,GL_UNSIGNED_SHORT_8_8_APPLE,
(void*)data);
assert(glGetError() == GL_NO_ERROR);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, width, height,
GL_YCBCR_422_APPLE,GL_UNSIGNED_SHORT_8_8_APPLE,
(void*)data);
assert(glGetError() == GL_NO_ERROR);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);
glOrtho(0, width, 0, height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_QUADS);
glVertex3f(0, 0, -1.0f);
glVertex3f(width, 0, -1.0f);
glVertex3f(width, height, -1.0f);
glVertex3f(0, height, -1.0f);
glEnd();
But I don't see correct frames; instead I see corrupted pictures of my display. So I understand that my use of OpenGL is not correct, and maybe I don't understand some fundamental things.
Please help me correct and understand my mistakes. If I should RTFM, please give me a link that will actually help, because I didn't find any useful info.
Here's what I use in one of my applications to set up a YUV 4:2:2 texture target for uploaded data from YUV frames pulled off of a CCD camera:
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, textureName);
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, videoImageSize.width * videoImageSize.height * 2, videoTexture);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE , GL_STORAGE_SHARED_APPLE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, videoImageSize.width, videoImageSize.height, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
The actual upload to the texture is accomplished using the following:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, textureName);
glTexSubImage2D (GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, videoImageSize.width, videoImageSize.height, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, videoTexture);
In the above, videoTexture is the memory location of the raw bytes from a captured YUV frame off of the camera. I'm also using Apple's storage hints in the above to accelerate the upload, although I never could get the GL_UNPACK_CLIENT_STORAGE_APPLE optimization to speed things up further.
This was copied and pasted out of a working application, and the only real differences I see are my use of GL_RGBA instead of GL_RGB in glTexImage2D() and the use of GL_UNSIGNED_SHORT_8_8_REV_APPLE instead of GL_UNSIGNED_SHORT_8_8_APPLE. Both the ARB and EXT rectangle extensions resolve to the same thing on the Mac.