I created a 32-bit NSImage with the following code.
NSBitmapImageRep *sourceRep = [[NSBitmapImageRep alloc] initWithData: imageData];
// create a new bitmap representation scaled down
NSBitmapImageRep *newRep =
[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: NULL
pixelsWide: imageSize
pixelsHigh: imageSize
bitsPerSample: 8
samplesPerPixel: 4
hasAlpha: YES
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
bytesPerRow: 0
bitsPerPixel: 0];
// save the graphics context, create a bitmap context and set it as current
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep];
[NSGraphicsContext setCurrentContext: context];
// draw the bitmap image representation in it and restore the context
[sourceRep drawInRect: NSMakeRect(0.0f, 0.0f, imageSize, imageSize)];
[NSGraphicsContext restoreGraphicsState];
// set the size of the new bitmap representation
[newRep setSize: NSMakeSize(imageSize, imageSize)];
NSDictionary *imageProps2 = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithFloat:1.0], NSImageCompressionFactor, // AppKit key; ignored for PNG anyway
    nil];
imageData = [newRep representationUsingType: NSPNGFileType properties: imageProps2];
NSImage *bitImage = [[NSImage alloc]initWithData:imageData];
Now I need to create 8-bit (256 colors), 4-bit (16 colors), and 1-bit (black & white) NSBitmapImageRep representations. What should I do?
Unfortunately it seems that Cocoa doesn't support operating on paletted images.
I've been trying that before and my conclusion is that it's not possible for PNG. NSGIFFileType is a hardcoded exception, and Graphics Contexts are even more limited than bitmap representations (e.g. RGBA is supported only with premultiplied alpha).
To work around it I convert NSBitmapImageRep to raw RGBA bitmap, use libimagequant to remap it to a palette and then libpng or lodepng to write the PNG file.
Sadly, I believe you can't do this using Core Graphics. Graphics contexts don't support formats with that few bits.
The documentation has a table of supported pixel formats.
Apparently Carbon had (has?) support for it, as seen referenced here where they also lament Cocoa's lack of support for it:
Turns out that basically Cocoa/Quartz does not support downsampling images to 8-bit colour. It supports drawing them, and it supports upsampling, but not going the other way. I guess this is a deliberate design on Apple's part to move away from indexed images as a standard graphics data type - after all, 32-bit colour is much simpler, right? Well, it is, but there are still useful uses for 8-bit. So..... what to do? One possibility is using Carbon, since QuickDraw's GWorld does support downsampling, etc.
From this thread
Well, this is probably going to be too long for a comment...
It sure seems like this just isn't possible: all of Cocoa's drawing machinery really wants to work in 24/32-bit colorspaces. I was able to make an 8-bit NSBitmapImageRep, but it was grayscale.
So I guess we have to figure out the "why" here. If you want to use NSImages backed by those kinds of representations, I don't think that is possible.
If you want to naively downsample (map each pixel to the nearest representable value), that is very possible; this would give the appearance of 8-bit images.
If you want to be able to write these files out with good dithering / index colors then I think the best option would be to write to an image format that supports what you want (like writing to a 256 color GIF).
If you wanted to do this downsampling yourself for some reason, there are two issues at hand:
1. Palette (CLUT) selection.
2. Dithering.
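As a sketch of issue 1, the remapping step can be done naively in plain C by picking the nearest palette entry for each pixel (no dithering). The palette here is an assumed input; real quantizers such as libimagequant also build the palette adaptively and apply error-diffusion dithering.

```c
#include <stdint.h>
#include <stddef.h>

// Naive remap of packed RGB pixels to the nearest entry of a fixed
// palette. `palette` holds npal entries of 3 bytes (R, G, B); `out`
// receives one palette index per pixel. Nearest is measured by
// squared Euclidean distance in RGB space.
void remap_to_palette(const uint8_t *rgb, size_t npixels,
                      const uint8_t *palette, size_t npal,
                      uint8_t *out) {
    for (size_t i = 0; i < npixels; i++) {
        int r = rgb[i*3], g = rgb[i*3+1], b = rgb[i*3+2];
        long best = 0, bestDist = -1;
        for (size_t p = 0; p < npal; p++) {
            long dr = r - palette[p*3];
            long dg = g - palette[p*3+1];
            long db = b - palette[p*3+2];
            long dist = dr*dr + dg*dg + db*db;
            if (bestDist < 0 || dist < bestDist) {
                bestDist = dist;
                best = (long)p;
            }
        }
        out[i] = (uint8_t)best;
    }
}
```

This is O(npixels × npal); a real implementation would use a k-d tree or cached lookups for large images.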
If you didn't want to use indexed colors and just wanted to break the 8 bits into 3-3-2 RGB, that is a little easier, but the result is much worse than indexed color.
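As a sketch, that 3-3-2 packing keeps the top 3 bits of red, the top 3 bits of green, and the top 2 bits of blue in a single byte:

```c
#include <stdint.h>

// Pack an 8-bit-per-channel RGB pixel into one 3-3-2 byte:
// bits 7-5 = red, bits 4-2 = green, bits 1-0 = blue.
uint8_t pack_rgb332(uint8_t r, uint8_t g, uint8_t b) {
    return (uint8_t)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

// Expand back to approximate 8-bit channels for display.
void unpack_rgb332(uint8_t p, uint8_t *r, uint8_t *g, uint8_t *b) {
    *r = (uint8_t)(p & 0xE0);
    *g = (uint8_t)((p & 0x1C) << 3);
    *b = (uint8_t)((p & 0x03) << 6);
}
```

The round trip is lossy (the low bits of each channel are discarded), which is why the result looks so much worse than an indexed palette tuned to the image.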
The 4-bit case is a bit trickier, because I don't really know of a good historical use of 4-bit color.
I used indexed color to display escape times from a Mandelbrot set in a little project I did once...
I just verified that it doesn't work anymore (it was old fixed-function-pipeline OpenGL),
but basically, for the view, you would use glPixelMapuiv to map the index colors to byte values, then display the byte buffer with glDrawPixels.
So... I guess if you comment and say why you are trying to do what you are doing we may be able to help.
Related
I have a very strange problem, which I did not find mentioned anywhere. My company develops plugins for various hosts. Right now we are trying to move our OpenGL code to Metal. I tried with some hosts (like Logic and Cubase), and it worked. Here is the example:
However, recently, new versions of those apps became available, compiled with 10.14 MacOS SDK, and here is what I started to get:
So, we have two problems: color and flipped textures. I found a solution for the color (see the code below), but I have absolutely no idea how to solve the texture problem! I can, of course, flip the textures, but then on the previous app versions they will appear corrupted.
I believe that something has changed in PNG loading, since, if you look carefully, the text textures that are generated on the fly look the same in both cases.
Here is my code:
imageOptions = @{MTKTextureLoaderOptionSRGB : @NO}; // Solves the color problem
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
where imageData is the memory where the PNG is placed. I also tried this approach:
CGDataProviderRef imageData = CGDataProviderCreateWithData(nullptr, imageBuffer, imageBufferSize, nullptr);
CGImageRef loadedImage = CGImageCreateWithPNGDataProvider(imageData, nullptr, false, kCGRenderingIntentDefault);
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithCGImage:loadedImage options:nil error:&error];
And got EXACTLY the same result.
The issue happens with all applications built with the latest (10.14) SDK, running on macOS 10.14. Does anyone have a clue what causes it, or at least a way to find out which SDK an app was compiled with?
MTKTextureLoaderOptionOrigin: a key used to specify whether to flip the pixel coordinates of the texture.
If you omit this option, the texture loader doesn’t flip loaded textures.
This option cannot be used with block-compressed texture formats, and can be used only with 2D, 2D array, and cube map textures. Each mipmap level and slice of a texture are flipped.
imageOptions = @{MTKTextureLoaderOptionSRGB : @NO, MTKTextureLoaderOptionOrigin : @YES}; // Solves both the color and the flip problem
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
requestedMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:imageOptions error:&error];
I'm trying to build an image pixel by pixel. First I wrote a small class that loops over the pixels and draws a different color at each one. It works nicely only if alpha is set to 255; changing the alpha makes the colors darker and changes the picture. Size and placement are OK.
var rep:NSBitmapImageRep = NSBitmapImageRep(
bitmapDataPlanes: nil,
pixelsWide: width,
pixelsHigh: height,
bitsPerSample: 8,
samplesPerPixel: 4,
hasAlpha: true,
isPlanar: false,
colorSpaceName: NSDeviceRGBColorSpace,
bytesPerRow: 0,
bitsPerPixel: 0)!
for posX in 0-offset...width+offset*2-1 {
    for posY in 0-offset...height+offset*2-1 {
        var R = Int(Float(posX)/Float(width)*255)
        var G = Int(Float(posY)/Float(height)*255)
        var B = Int(rand() % 256)
        pixel = [R,G,B,alpha]
        rep.setPixel(&pixel, atX: posX, y: posY)
    }
}
rep.drawInRect(bounds)
alpha set to 255
alpha set to 196
alpha equals 127 this time.
And 64.
Where am I wrong?
The most likely problem is that you're not premultiplying the alpha with the color components.
From the NSBitmapImageRep class reference:
Alpha Premultiplication and Bitmap Formats
When creating a bitmap using a premultiplied format, if a coverage
(alpha) plane exists, the bitmap’s color components are premultiplied
with it. In this case, if you modify the contents of the bitmap, you
are therefore responsible for premultiplying the data. Note that
premultiplying generally has negligible effect on output quality. For
floating-point image data, premultiplying color components is a
lossless operation, but for fixed-point image data, premultiplication
can introduce small rounding errors. In either case, more rounding
errors may appear when compositing many premultiplied images; however,
such errors are generally not readily visible.
For this reason, you should not use an NSBitmapImageRep object if you
want to manipulate image data. To work with data that is not
premultiplied, use the Core Graphics framework instead. (Specifically,
create images using the CGImageCreate function and kCGImageAlphaLast
parameter.) Alternatively, include the
NSAlphaNonpremultipliedBitmapFormat flag when creating the bitmap.
Note
Use the bitmapFormat parameter to the
initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel:
method to specify the format for creating a bitmap. When creating or
retrieving a bitmap with other methods, the bitmap format depends on
the original source of the image data. Check the bitmapFormat property
before working with image data.
You have used the -init... method without the bitmapFormat parameter. In that case, you need to query the bitmapFormat of the resulting object and make sure you build your pixel values to match that format. Note that the format dictates where the alpha appears in the component order, whether the color components are premultiplied by the alpha, and whether the components are floating point or integer.
You can switch to using the -init... method that does have the bitmapFormat parameter and specify NSAlphaNonpremultipliedBitmapFormat mixed in with your choice of other flags (first or last, integer or floating point, endianness). Note, though, that not all possible formats are supported for drawing.
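A minimal sketch of the premultiplication step itself, assuming 8-bit integer RGBA components: each color value is scaled by alpha/255 (with rounding) before being stored into a premultiplied-format rep.

```c
#include <stdint.h>

// Premultiply one 8-bit RGBA pixel in place: each color component is
// scaled by alpha/255, rounding to nearest. This is the form a
// premultiplied bitmap format expects; alpha itself is unchanged.
void premultiply_rgba(uint8_t px[4]) {
    uint8_t a = px[3];
    for (int i = 0; i < 3; i++) {
        px[i] = (uint8_t)((px[i] * a + 127) / 255);
    }
}
```

Note the rounding error the documentation mentions: for fixed-point data the original component cannot always be recovered exactly after premultiplication, which is why it recommends Core Graphics or a non-premultiplied format for repeated pixel manipulation.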
By the way, I strongly recommend reading the sections about NSImage and NSBitmapImageRep in the 10.6 AppKit release notes. Search for "NSImage, CGImage, and CoreGraphics impedance matching" and start reading there through the section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes", which is most relevant here. That last section, in particular, has important information about working directly with pixels.
I'm looking for the best way to quickly and repeatedly "blit" RGB bitmap data to a specific area within a Mac OS X window, for the purpose of displaying video frames coming from a custom video engine in real time. The data is in a simple C-style array containing a 32-BPP bitmap.
On Win32, I'd setup HWND and HDC's, copy the raw data into its memory space, and then use BitBlt(). On iOS, I've done it via UIImageView, although I didn't fully assess the performance of that approach (really didn't need to in that particular limited case). I have neither available to me on Mac OS X with Cocoa, so what should I do?
I know there are several bad or convoluted ways for me to accomplish this, but I'm hoping someone with experience can point me to something that's actually meant for this use and/or is performance efficient while being reasonably straightforward and reliable.
Thanks!
I would recommend either creating NSImages or CGImages with your data and then drawing them to the current context.
If you use NSImage, you'll need to create an NSBitmapImageRep with the data of your image. You don't need to copy the data, just pass the pointer to it as one of the parameters to the initializer.
If you use CGImage, you can create a CGBitmapContextRef using CGBitmapContextCreate(), and as above, just pass a pointer to the existing image data. Then you can create a CGImage from it by calling CGBitmapContextCreateImage().
This did the trick... (32-BPP RGBA bitmap data)
int RowBytes = Width * 4;
NSBitmapImageRep *ImageRep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:&Data
                                            pixelsWide:Width
                                            pixelsHigh:Height
                                         bitsPerSample:8
                                       samplesPerPixel:4
                                              hasAlpha:YES
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedRGBColorSpace
                                           bytesPerRow:RowBytes
                                          bitsPerPixel:32];
NSImage * Image = [[NSImage alloc] init];
[Image addRepresentation:ImageRep];
[ImageView setImage:Image];
Compared to a Windows bitmap, the Red and Blue channels are swapped (RGBA vs BGRA), and of course the Y rows are in opposite order (ie upside-down), but that's all easy enough to accommodate by manipulating the source data.
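Those two adjustments (channel swap and row flip) can be done in a single pass over the source data. A sketch, assuming tightly packed 32-bpp rows (width * 4 bytes, no row padding):

```c
#include <stdint.h>
#include <stddef.h>

// Convert a bottom-up BGRA bitmap (Windows-style, first row in memory
// is the bottom of the image) into a top-down RGBA buffer, swapping
// the red and blue channels and reversing row order in one pass.
void bgra_bottomup_to_rgba_topdown(const uint8_t *src, uint8_t *dst,
                                   size_t width, size_t height) {
    for (size_t y = 0; y < height; y++) {
        const uint8_t *srow = src + (height - 1 - y) * width * 4;
        uint8_t *drow = dst + y * width * 4;
        for (size_t x = 0; x < width; x++) {
            drow[x*4 + 0] = srow[x*4 + 2];  // R comes from source B slot
            drow[x*4 + 1] = srow[x*4 + 1];  // G unchanged
            drow[x*4 + 2] = srow[x*4 + 0];  // B comes from source R slot
            drow[x*4 + 3] = srow[x*4 + 3];  // A unchanged
        }
    }
}
```

If the Windows source uses padded rows (DWORD-aligned stride), the `width * 4` row pitch would need to be replaced with the actual stride.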
This has been driving me mad for months: I have a little app to preview camera raw images. As the files in question can be quite big and stored on a slow network drive I wanted to offer the user a chance to stop the loading of the image.
Handily I found this thread:
Cancel NSData initWithContentsOfURL in NSOperation
and am using Nick's great convenience method to cache the data and be able to issue a cancel request halfway through.
Anyway, once I have the data I use:
NSImage *sourceImage = [[NSImage alloc]initWithData:data];
The problem comes when looking at Nikon .NEF files; sourceImage returns only a thumbnail and not the full size. Displaying Canon .CR2 files and in fact any other .TIFFs and .JPEGs seems fine, and sourceImage is the expected size. I've checked the amount of data being loaded (with NSLog and [data length]) and it does seem that all of the Nikon file's 12 MB is there for -initWithData:.
If I use
NSImage *sourceImage = [[NSImage alloc]initWithContentsOfURL:myNEFURL];
then I get the full sized image of the Nikon files but of course the app blocks.
So after poking around for what is beginning to feel like my entire life, I think I know that the problem is related to the Nikon metadata stating that the file's DPI is 300, whereas Canon et al. use 72.
I hoped a solution would be to lazily access the file with:
NSImage*tempImg = [[NSImage alloc] initByReferencingURL:myNEFURL];
and having seen similar postings here and elsewhere I found a common possible answer of simply
[sourceImage setSize:tempImg.size];
but of course this just resizes the tiny thumbnail up to 3000x2000 or thereabouts.
I've been messing with the following hoping that they would provide a way to get the big picture from the .NEF:
CGImageSourceRef isr = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
CGImageRef isrRef = CGImageSourceCreateImageAtIndex(isr, 0, NULL);
and
NSBitmapImageRep *bitMapIR = [[NSBitmapImageRep alloc] initWithData:data];
But checking the sizes on these shows similar thumbnail widths and heights. In fact, isrRef returns an even smaller thumbnail, one that is 4.2 times smaller. Perhaps worth noting that 300 / 72 ≈ 4.2, so isrRef is taking account of the DPI on an image where the DPI (possibly) has already been observed.
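For reference, that ~4.2× factor falls out of the points-vs-pixels conversion AppKit uses: an image's reported size is in points, and points = pixels × 72 / DPI, so a 3000-pixel-wide image tagged at 300 DPI reports a size of 720 points.

```c
// AppKit image sizes are in points, derived from pixel dimensions and
// the DPI stored in the image metadata: points = pixels * 72 / dpi.
// At 300 DPI the reported size is 300/72 (about 4.17x) smaller than
// the pixel size, matching the "4.2 times smaller" observation.
double pixels_to_points(double pixels, double dpi) {
    return pixels * 72.0 / dpi;
}
```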
Please! Can someone [nicely] put me out of my misery and help me get the full-sized image from the loaded data?!?! Currently, I'm special-case'ing the NEF files with a case insensitive search on the file extension and then loading the URL with the blocking methods. I have to take a hit on the app blocking and the search can't be fool-proof in the long run.
As an aside: is this actually a bug in the OS? It does seem like NSImage's -initWithData: and -initWithContentsOfURL: methods use different engines to render the image. Would it not be reasonable to assume that -initWithContentsOfURL: simply loads the data, which then gets rendered just as though it had been presented to the class with -initWithData:?
It's a bug - confirmed when I did a DTS. Apparently I need to file a bug report. Currently the only way is to use the NSURL methods. Instead of checking the file extension I should probably traverse the meta dictionaries and check the manufacturer's entry for "Nikon", though...
If I create an NSImage via something like:
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
it only has one representation, an NSCoreUIImageRep, which seems to be a private class.
I'd like to archive this image as an NSData but if I ask for the TIFFRepresentation I get a
small icon when the real NSImage I originally created seemed to be vector and would scale up to fill my image views nicely.
I was kinda hoping images made this way would have a NSPDFImageRep I could use.
Any ideas how can I get an NSData (pref the vector version or at worse a large scale bitmap version) of this NSImage?
UPDATE
Spoke with some people on Twitter and they suggested that the real source of these images is multi-resolution .icns files (probably not vector at all). I couldn't find the location of these on disk, but interesting to hear nonetheless.
Additionally they suggested I create the system NSImage and manually render it into a high res NSImage of my own. I'm doing this now and it's working for my needs. My code:
+ (NSImage *)pt_businessDefaultIcon
{
// Draws NSImageNameUser into a rendered bitmap.
// We do this because trying to create an NSData from
// [NSImage imageNamed:NSImageNameUser] directly results in a 32x32 image.
NSImage *icon = [NSImage imageNamed:NSImageNameUser];
NSImage *renderedIcon = [[NSImage alloc] initWithSize:CGSizeMake(PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize)];
[renderedIcon lockFocus];
NSRect inRect = NSMakeRect(0, 0, PTAdditionsBusinessDefaultIconSize, PTAdditionsBusinessDefaultIconSize);
NSRect fromRect = NSMakeRect(0, 0, icon.size.width, icon.size.height);
[icon drawInRect:inRect fromRect:fromRect operation:NSCompositeCopy fraction:1.0];
[renderedIcon unlockFocus];
return renderedIcon;
}
(Tried to post this as my answer but I don't have enough reputation?)
You seem to be ignoring the documentation. Both of your major questions are answered there. The Cocoa Drawing Guide (companion guide linked from the NSImage API reference) has an Images section you really need to read thoroughly and refer to any time you have rep/caching/sizing/quality issues.
...if I ask for the TIFFRepresentation I get a small icon when the
real NSImage I originally created seemed to be vector and would scale
up to fill my image views nicely.
Relevant subsections of the Images section for this question are: How an Image Representation is Chosen, Images and Caching, and Image Size and Resolution. By default, the -cacheMode for a TIFF image "Behaves as if the NSImageCacheBySize setting were in effect." Also, for in-memory scaling/sizing operations, -imageInterpolation is important: "Table 6-4 lists the available interpolation settings." and "NSImageInterpolationHigh - Slower, higher-quality interpolation."
I'm fairly certain this applies to a named system image as well as any other.
I was kinda hoping images made [ by loading an image from disk ] would
have a NSPDFImageRep I could use.
Relevant subsection: Image Representations. "...with file-based images, most of the images you create need only a single image representation." and "You might create multiple representations in the following situations, however: For printing, you might want to create a PDF representation or high-resolution bitmap of your image."
You get the representation that suits the loaded image. You must create a PDF representation for a TIFF image, for example. To do so at high resolution, you'll need to refer back to the caching mode so you can get higher-res items.
There are a lot of fine details too numerous to list because of the high number of permutations of images/creation mechanisms/settings/ and what you want to do with it all. My post is meant to be a general guide toward finding the specific information you need for your situation.
For more detail, add specific details: the code you attempted to use, the type of image you're loading or creating -- you seemed to mention two different possibilities in your fourth paragraph -- and what went wrong.
I would guess that the image is "hard wired" into the graphics system somehow, and the NSImage representation of it is merely a number indicating which hard-wired graphic it is. So likely what you need to do is to draw it and then capture the drawing.
Very generally, create a view controller that will render the image, reference the VC's view property to cause the view to be rendered, extract the contentView of the VC, get the contentView.layer, render the layer into a UIGraphics context, get the UIImage from the context, extract whatever representation you want from the UIImage.
(There may be a simpler way, but this is the one I ended up using in one case.)
(And, sigh, I suppose this scheme doesn't preserve scaling either.)