I have an instance of NSImage that's been handed to me by an API whose implementation I don't control.
I would like to obtain the original data (NSData) from which that NSImage was created, without the data being converted to another representation/format (or otherwise "molested"). If the image was created from a file, I want the exact, byte-for-byte contents of the file, including all metadata, etc. If the image was created from some arbitrary NSData instance I want an exact, byte-for-byte-equivalent copy of that NSData.
To be pedantic (since this is the troublesome case I've come across), if the NSImage was created from an animated GIF, I need to get back an NSData that actually contains the original animated GIF, unmolested.
EDIT: I realize that this may not be strictly possible for all NSImages all the time. How about for the subset of images that were definitely created from files and/or data?
I have yet to figure out a way to do this. Anyone have any ideas?
I agree with Ken, and having a subset of conditions (I know it's a GIF read from a file) doesn't change anything. By the time you have an NSImage, a lot of things have already happened to the data. Cocoa doesn't like to hold a bunch of data in memory that it doesn't directly need. If you had the original CGImage (not one generated out of the NSImage), you might get really lucky and find the data you wanted in CGDataProviderCopyData, but even if that happened to work, there are no promises about it.
But thinking through how you might, if you happened to get incredibly lucky, try to make it work (a sketch follows these steps):
Get the list of representations with -representations.
Find the one that matches the original (hopefully there's just the one).
Get a CGImage from it with -CGImageForProposedRect:context:hints:. You probably want a rect that matches the size of the image, and I'd probably pass a hint of no interpolation.
Get the data provider with CGImageGetDataProvider.
Copy its data with CGDataProviderCopyData. (But I doubt this will be the actual original data including metadata, byte-for-byte.)
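Sketched out, that long shot might look something like this (a sketch only; GuessOriginalData is a hypothetical name, and the copied bytes will almost certainly be decoded pixels rather than the original file):

#import <Cocoa/Cocoa.h>

// Walk the reps, pull a CGImage out of each, and copy its provider's data.
// Nothing here is guaranteed to return the original, byte-for-byte file data.
static NSData *GuessOriginalData(NSImage *image) {
    for (NSImageRep *rep in image.representations) {
        NSRect rect = NSMakeRect(0, 0, rep.pixelsWide, rep.pixelsHigh);
        NSDictionary *hints = @{ NSImageHintInterpolation:
                                     @(NSImageInterpolationNone) };
        CGImageRef cgImage = [rep CGImageForProposedRect:&rect
                                                 context:nil
                                                   hints:hints];
        if (cgImage == NULL) continue;
        CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
        if (data != NULL) return CFBridgingRelease(data);
    }
    return nil;
}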
There are callbacks that will get you a direct byte pointer into the internal data of a CGDataProvider (like CGDataProviderGetBytePointerCallback), but I don't know of any way to request the list of callbacks from an existing CGDataProvider. That's typically something Quartz accesses, and we just pass the callbacks in during creation.
I strongly suspect this is impossible.
This is not possible.
For one thing, not all images are backed by data. Some may be procedural. For example, an image created using +imageWithSize:flipped:drawingHandler: takes a block which draws the image.
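For instance, a minimal sketch (the size and the fill are arbitrary):

// A purely procedural image: no file and no NSData ever existed for it.
NSImage *image = [NSImage imageWithSize:NSMakeSize(64, 64)
                                flipped:NO
                         drawingHandler:^BOOL(NSRect dstRect) {
    [[NSColor redColor] setFill];
    NSRectFill(dstRect);
    return YES;
}];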
But, in any case, even CGImage converts the data on import, and that's about as low-level as the Mac frameworks get.
Related
I noticed that when creating an NSBitmapImageRep using initWithBitmapDataPlanes: and providing an existing, caller-owned data buffer, after calling drawInRect: the NSBitmapImageRep object may have replaced the pixel data with its own copy. bitmapData then returns a different pointer than the one I originally provided, so changing the pixels in my buffer no longer has any effect when the NSBitmapImageRep is redrawn. This is clearly contrary to the documentation. I wanted to take advantage of the documented fact that when you provide your own pixel data buffer it will be preserved, but it is not. Am I missing something? If not, I want to share this info here so that others don't have to waste a lot of time looking for a code bug when the issue is the NSBitmapImageRep class behaving differently than documented.
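Here is roughly how to reproduce it (a sketch; the size and format are arbitrary):

#import <Cocoa/Cocoa.h>

// Create a rep around a caller-owned buffer, draw it once, then check
// whether -bitmapData still points at that same buffer.
static void CheckBufferOwnership(void) {
    unsigned char *myBuffer = calloc(64 * 64 * 4, 1);
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:&myBuffer
                      pixelsWide:64
                      pixelsHigh:64
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:64 * 4
                    bitsPerPixel:32];

    NSImage *canvas = [[NSImage alloc] initWithSize:NSMakeSize(64, 64)];
    [canvas lockFocus];
    [rep drawInRect:NSMakeRect(0, 0, 64, 64)];
    [canvas unlockFocus];

    if (rep.bitmapData != myBuffer) {
        NSLog(@"rep no longer uses the buffer I provided");
    }
    // (Buffer intentionally leaked in this sketch; the rep may still point at it.)
}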
In a Mac OS X app (Cocoa), I'm copying some images from my app to others using a NSDraggingSession. The NSDraggingItem makes use of an object that implements the protocol NSPasteboardItemDataProvider, to provide the data when the user drops it.
As I'm dealing with images, the types involved are: NSPasteboardTypePNG, kPasteboardTypeFileURLPromise, kUTTypeFileURL, com.adobe.photoshop-image and public.svg-image. These images are in a remote location, so before I can provide them to the pasteboard, I have to download them from the Internet.
I implement the method -pasteboard:item:provideDataForType:, doing something like this:
If the type requested is kPasteboardTypeFileURLPromise, I get the paste location, then build the URL string for where the file is supposed to be written in the future and set it in the pasteboard.
If the type requested is kUTTypeFileURL, I download the file, pick a temporary location, and write the downloaded file there. Then I set the URL string of that location in the pasteboard.
If the type requested is one of the others, I download the file and set the plain NSData in the pasteboard.
All these operations are performed on the main thread, producing some lags that I want to get rid of.
I've tried to perform these operations on a background thread and come back to the main thread to set the final data in the pasteboard, but this doesn't work, because the provider method returns before the data is ready.
Does anyone know a way to achieve it?
Promises of pasteboard types are usually meant to be an alternative format of data that you already have, where you want to avoid the expense in time and memory of converting before it's necessary. I don't think it's really appropriate to use it to defer downloading any of the data, at all. For one thing, the download could fail when it's ultimately requested. For another, it could take an arbitrarily long time, as you're struggling with now.
So, I think you should download the data in advance. Either keep it in memory or save it to a temporary file. Use promised types, if appropriate, to deliver it in different forms, but have it on hand in advance.
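For instance, a sketch with NSURLSession (the URL and file name are placeholders):

// Fetch the remote image up front, so the pasteboard provider only ever
// hands over data that is already local.
NSURL *remoteURL = [NSURL URLWithString:@"https://example.com/image.png"];
[[[NSURLSession sharedSession] dataTaskWithURL:remoteURL
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (data == nil) return; // download failed: don't promise this item at all
    NSString *path = [NSTemporaryDirectory()
        stringByAppendingPathComponent:@"dragged-image.png"];
    [data writeToFile:path atomically:YES];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Stash `data` and `path` on your dragging source here. Later,
        // pasteboard:item:provideDataForType: just calls -setData:forType:
        // (or writes the already-downloaded file to the drop location).
    });
}] resume];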
I have been looking at creating PARGB32 bitmaps. This seems to be necessary to produce images which work fully with post-XP menu items.
This example http://msdn.microsoft.com/en-us/library/bb757020.aspx?s=6 is very interesting, but rather complicated, as I have rarely if ever used OLE interfaces before. However, after carefully studying the piece of code that uses WIC and OLE, I think I understand how it works. The one thing which confuses me is the comment by user 'David Hellerman'. To summarize, he states this example function is not complete: it does not take into account any potential alpha information in the source icon, and if there IS alpha data, it must be pre-multiplied on a pixel-by-pixel basis while scanning through the ppvBuffer variable.
My question has two parts. How do I detect the presence of alpha data in my icons while using WIC instead of GDI and how do I go about pre-multiplying it in to the pixels if it does exist?
Technically, the sample code is wrong because it does not account for or check the format of the IWICBitmap object when calling CopyPixels. CreateBitmapFromHICON presumably always uses a specific format (that sample suggests it's 32-bit PBGRA, but the comments suggest it's BGRA) when creating the bitmap, but that is not documented by MSDN, and relying on it is at the very least bad form. WIC supports many pixel formats, and not all of them are 32-bit or RGB.
You can use the WICConvertBitmapSource function (http://msdn.microsoft.com/en-us/library/windows/desktop/ee719819(v=vs.85).aspx) to convert the data to a specific, known format; in your case you'll want GUID_WICPixelFormat32bppPBGRA. (You're probably used to seeing that format written as PARGB, but WIC uses an oddly sensible naming convention based on the order of components in an array of bytes rather than in a 32-bit int.) If converting means premultiplying the data, it will do that; the point is that if you want a specific format, you don't need to worry about how it gets there. You can use the resulting IWICBitmapSource in the same way that the MSDN sample uses its IWICBitmap.
You can use IWICBitmapSource::GetPixelFormat (http://msdn.microsoft.com/en-us/library/windows/desktop/ee690181(v=vs.85).aspx) to determine the pixel format of the image. However, there is no way to know in general whether the alpha data (if the format has alpha data) is premultiplied; you'd simply have to recognize the format GUID. I generally use GetPixelFormat when I want to handle more than one format specifically, but I still fall back on WICConvertBitmapSource for the formats I don't handle.
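A sketch of both steps in plain C (error handling abbreviated; `bitmap` stands for the IWICBitmap that CreateBitmapFromHICON gave you):

#include <wincodec.h>

// Normalize whatever CreateBitmapFromHICON produced to premultiplied
// 32bpp BGRA, converting only when the source is in some other format.
HRESULT GetPremultipliedBGRA(IWICBitmap *bitmap, IWICBitmapSource **outSource)
{
    WICPixelFormatGUID format;
    HRESULT hr = bitmap->lpVtbl->GetPixelFormat(bitmap, &format);
    if (FAILED(hr)) return hr;

    if (IsEqualGUID(&format, &GUID_WICPixelFormat32bppPBGRA)) {
        /* Already what we want; hand back an extra reference. */
        *outSource = (IWICBitmapSource *)bitmap;
        bitmap->lpVtbl->AddRef(bitmap);
        return S_OK;
    }
    /* Converts the pixels (premultiplying if necessary) in one call. */
    return WICConvertBitmapSource(&GUID_WICPixelFormat32bppPBGRA,
                                  (IWICBitmapSource *)bitmap, outSource);
}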
Edit: I missed part of your question. It's not possible to detect based on the IWICBitmap whether the icon originally had an alpha channel, because WIC creates a bitmap with an alpha channel from the icon in all cases. It's also not necessary, because premultiplying the alpha is a no-op in this case.
I am looking for a way to create a CGImageRef buffer once and use it again for different images.
My application is facing a performance hit, as it creates an image and then draws it into a context. This process runs in a timer which fires every 1 ms. I wonder if there is anything I could do to avoid calling CGBitmapContextCreateImage(bitmapcontext) on every single tick.
Thanks
There is one way you could theoretically do it (sketched after these steps):
Create an NSMutableData or CFMutableData.
Use its mutableBytes (NSMutableData) or CFDataGetMutableBytePtr (CFMutableData) as the backing buffer of the bitmap context.
Create a CGDataProvider with the data object.
Create a CGImage with that data provider, making sure to use all the same parameter values you created the bitmap context with.
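A sketch of those four steps (the size and pixel format are arbitrary, and the CGImageCreate parameters deliberately mirror those of the context):

#import <Cocoa/Cocoa.h>

// One shared buffer sits behind both the bitmap context and the CGImage.
// Whether the CGImage keeps reading live from that buffer is exactly the
// part that is NOT guaranteed.
static void MakeSharedBufferImage(void) {
    size_t width = 256, height = 256, bytesPerRow = width * 4;
    NSMutableData *backing = [NSMutableData dataWithLength:bytesPerRow * height];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(backing.mutableBytes,
        width, height, 8, bytesPerRow, colorSpace,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

    CGDataProviderRef provider =
        CGDataProviderCreateWithCFData((__bridge CFDataRef)backing);
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow,
        colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast, provider,
        NULL, false, kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // In real code, keep ctx and image around: draw into ctx each tick and
    // reuse image, instead of calling CGBitmapContextCreateImage every time.
}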
However, I'm not sure that this is guaranteed to work. More specifically, I don't think that the CGImage is guaranteed not to copy, cache, and reuse any data that your data provider provided. If it ever does, you'll find your app showing a stale image (or even an image that is partly stale).
You might be better off simply holding on to the CGImage(s). If you generate each image based on some input, consider whether you might be able to cache the resulting images by that input. For example, if you're drawing a number or two into the context, consider caching the CGImages in a dictionary or NSCache keyed by the numbers' string. Of course, how feasible this is depends on how big the images are and how limited memory is; if this is on iOS, you'd probably be dropping items from that cache pretty quickly.
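A sketch of that caching idea (CreateNumberImage is a hypothetical stand-in for your own drawing code):

#import <Cocoa/Cocoa.h>

// Hypothetical renderer: draws a number string once, returns a +1 CGImage.
static CGImageRef CreateNumberImage(NSString *numberString) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 32, 32, 8, 0, space,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    // ... draw numberString into ctx here ...
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return image;
}

static CGImageRef CachedNumberImage(NSCache *cache, NSString *key) {
    CGImageRef image = (__bridge CGImageRef)[cache objectForKey:key];
    if (image == NULL) {
        // CF objects can be stored in an NSCache as ids; the cache owns them.
        [cache setObject:CFBridgingRelease(CreateNumberImage(key)) forKey:key];
        image = (__bridge CGImageRef)[cache objectForKey:key];
    }
    return image; // owned by the cache; don't release
}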
Also, doing anything every 1 ms will not actually be visible to the user. If you mean to show these images to the user, there's no way to do that 1000 times per second; even if your application could generate them that fast, the user simply cannot see that fast. As of Snow Leopard (and I think since Tiger, if not earlier), Mac OS X limits drawing to 60 frames per second; I think this is also true on iOS. What you should do, at a reasonable interval (1/60th of a second being plenty reasonable), is set a view as needing display, and do the drawing/image generation only when the view is told to draw.
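For example, a sketch of that pattern (assumes the block-based NSTimer API from 10.12; in production, hold on to the timer and use a weak reference to self):

#import <Cocoa/Cocoa.h>

@interface TickerView : NSView
@end

@implementation TickerView
- (void)startUpdating {
    // Mark the view dirty at most ~60 times per second...
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                    repeats:YES
                                      block:^(NSTimer *timer) {
        self.needsDisplay = YES;
    }];
}
- (void)drawRect:(NSRect)dirtyRect {
    // ...and generate/draw the image only here, when AppKit asks for it.
}
@end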
I have an NSData that I would like to read as an NSInputStream. This way I can have a consistent API for processing both files and in-memory data. As part of the processing, I would like to make sure that the stream begins with some set of bytes (if it doesn't, I need to process it differently). I'd like to avoid reading a whole file into memory if it's of the wrong type.
So I'm looking for either a way to rewind the stream, or a way to "peek" at the upcoming bytes without moving the read pointer. If this is an NSInputStream created from a URL, I can use setProperty:forKey: with NSStreamFileCurrentOffsetKey, but bizarrely this does not work on an NSInputStream created from an NSData (even though you would presume this would have been even easier to implement than the file version). I can't close and reopen the stream to reset the input pointer either (this is explicitly not allowed by NSStream).
I can rework this problem using an NSData-only interface and -initWithContentsOfMappedFile, but I'd rather stay with the NSStream approach if I can.
I think I don't understand something here. An NSInputStream can take data from three places: a socket, an NSData object, or a file. You haven't said that you want to use a socket, which leaves the other two as your data sources. Also, docs for NSStream say that only file-based streams are seekable. (NSStream, overview, 3rd paragraph)
Given that, I'd think that an NSData object would be a better choice. An NSData object will handle both files and bytes (which I think is what you mean by data in memory).
But you consider that and say that you'd prefer to stick with streams. Is there some other consideration here?
(Edit) Sorry, I should have made this a real answer. My answer for the issue you've described is that using NSData really is the right thing to do.
If you prefer a different answer, then please give more details.
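For what it's worth, a sketch of the NSData route (the path is a placeholder, and "GIF8" is just an example signature to test for):

// Map the file rather than reading it all in, then peek at the first bytes.
NSData *data = [NSData dataWithContentsOfFile:@"/tmp/input.bin"
                                      options:NSDataReadingMappedIfSafe
                                        error:NULL];
BOOL looksLikeGIF = (data.length >= 4 &&
                     memcmp(data.bytes, "GIF8", 4) == 0);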
You can indeed seek in an NSInputStream that is reading a file:
BOOL samplePositionAccepted = [iStream setProperty:[NSNumber numberWithUnsignedLong:samplePosition] forKey:NSStreamFileCurrentOffsetKey];
I am not sure if this works for NSData though. (Sorry I haven't got enough points to write a comment yet...)
(Oops sorry, didn't see you already tried this...)