I have been looking at creating PARGB32 bitmaps. This seems to be necessary to produce images that work fully with menu items on post-XP versions of Windows.
This example http://msdn.microsoft.com/en-us/library/bb757020.aspx?s=6 is very interesting but rather complicated, as I have rarely if ever used OLE interfaces before. However, after carefully studying the piece of code that uses WIC and OLE, I think I understand how it works. The one thing that confuses me is the comment by user 'David Hellerman'. To summarize, he states that this example function is not complete: it does not take into account any potential alpha information in the source icon, and if there IS alpha data, it must be pre-multiplied on a pixel-by-pixel basis while scanning through the ppvBuffer variable.
My question has two parts: how do I detect the presence of alpha data in my icons while using WIC instead of GDI, and how do I go about pre-multiplying it into the pixels if it does exist?
Technically, the sample code is wrong because it does not account for or check the format of the IWICBitmap object when calling CopyPixels. CreateBitmapFromHICON presumably always uses a specific format (the sample suggests it's 32-bit PBGRA, but the comments suggest it's BGRA) when creating the bitmap; that is not documented by MSDN, and relying on it is at the very least bad form. WIC supports many pixel formats, and not all of them are 32-bit or RGB.
You can use the WICConvertBitmapSource function (http://msdn.microsoft.com/en-us/library/windows/desktop/ee719819(v=vs.85).aspx) to convert the data to a specific, known format; in your case you'll want GUID_WICPixelFormat32bppPBGRA. (You're probably used to seeing that format written as PARGB, but WIC uses an oddly sensible naming convention based on the order of components in an array of bytes rather than in a 32-bit int.) If converting means premultiplying the data, it will do that; the point is that if you ask for a specific format, you don't need to worry about how it gets there. You can use the resulting IWICBitmapSource in the same way that the MSDN sample uses its IWICBitmap.
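For reference, the premultiplication such a conversion performs is simple per-channel math: each color channel is scaled by alpha. A minimal sketch of the operation on one BGRA pixel (illustrative Python, not WIC code; the integer-truncation rounding here is an assumption, real converters may round differently):

```python
def premultiply_bgra(b, g, r, a):
    """Premultiply one BGRA pixel: scale each color channel by alpha/255."""
    scale = lambda c: (c * a) // 255  # integer truncation; rounding is an assumption
    return (scale(b), scale(g), scale(r), a)

# A half-transparent pixel (alpha = 128) has its color channels roughly halved:
print(premultiply_bgra(200, 100, 50, 128))
```

A fully opaque pixel (alpha = 255) is unchanged by this, which is why converting an already-premultiplied or fully opaque image is harmless.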
You can use IWICBitmapSource::GetPixelFormat (http://msdn.microsoft.com/en-us/library/windows/desktop/ee690181(v=vs.85).aspx) to determine the pixel format of the image. However, there is no way to know in general whether the alpha data (if the format has any) is premultiplied; you simply have to recognize the format GUID. I generally use GetPixelFormat when I want to handle more than one format specifically, but I still fall back on WICConvertBitmapSource for the formats I don't handle.
Edit: I missed part of your question. It's not possible to detect based on the IWICBitmap whether the icon originally had an alpha channel, because WIC creates a bitmap with an alpha channel from the icon in all cases. It's also not necessary, because premultiplying the alpha is a no-op in this case.
I need to design a binary format to save data from a scientific application. This data has to be encoded in a binary format that can't be easily read by any other application (it is a requirement by some of our clients). As a consequence, we decided to build our own binary format, its encoder and its decoder.
We got some inspiration from many binary formats, including protobuf. One thing that puzzles me is the way protobuf encodes the length of embedded messages. According to https://developers.google.com/protocol-buffers/docs/encoding, the size of an embedded message is encoded at its very beginning as a varint.
But before we encode an embedded message, we don't yet know its size (think, for instance, of an embedded message that contains many integers encoded as varints). As a consequence, we need to encode the message entirely in memory before we write it to disk, so that we know its size.
Imagine that this message is huge. It then becomes difficult to encode it efficiently. We could encode the size as a fixed-width int and seek back to that part of the file once the embedded message is written, but then we lose the nice property of varints: you don't need to specify whether you have a 32-bit or a 64-bit integer. So, going back to Google's implementation using a varint:
Is there an implementation trick I am missing, or is this scheme likely to be inefficient for large messages?
Yes, the correct way to do this is to write the message first, at the back of the buffer, and then prepend the size. With proper buffer management, you can write the message in reverse.
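A minimal sketch of the idea in Python (the varint encoding follows the protobuf wire format; the buffering strategy here, a scratch buffer rather than true back-to-front writing, is a simplification):

```python
def encode_varint(n):
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def write_embedded(out, write_payload):
    """Serialize the payload into a scratch buffer first, then prepend
    its varint-encoded length; no seeking in the output is required."""
    scratch = bytearray()
    write_payload(scratch)
    out += encode_varint(len(scratch))
    out += scratch

buf = bytearray()
write_embedded(buf, lambda b: b.extend(b"\x08\xac\x02"))  # some 3-byte payload
# buf now starts with the varint 0x03 (the length), followed by the payload
```

The cost is one extra copy of each embedded message; a real implementation can avoid even that by sizing the buffer up front and filling it from the back, as the answer describes.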
That said, why write your own message format when you can just use protobuf? It would be better to use protobuf directly and encrypt the resulting file. That would be easy for you to work with, and still hard for other applications to read.
I have an instance of NSImage that's been handed to me by an API whose implementation I don't control.
I would like to obtain the original data (NSData) from which that NSImage was created, without the data being converted to another representation/format (or otherwise "molested"). If the image was created from a file, I want the exact, byte-for-byte contents of the file, including all metadata, etc. If the image was created from some arbitrary NSData instance I want an exact, byte-for-byte-equivalent copy of that NSData.
To be pedantic (since this is the troublesome case I've come across), if the NSImage was created from an animated GIF, I need to get back an NSData that actually contains the original animated GIF, unmolested.
EDIT: I realize that this may not be strictly possible for all NSImages all the time; how about for the subset of images that were definitely created from files and/or data?
I have yet to figure out a way to do this. Anyone have any ideas?
I agree with Ken, and having a subset of conditions (I know it's a GIF read from a file) doesn't change anything. By the time you have an NSImage, a lot of things have already happened to the data. Cocoa doesn't like to hold a bunch of data in memory that it doesn't directly need. If you had an original CGImage (not one generated out of the NSImage), you might get really lucky and find the data you wanted in CGDataProviderCopyData, but even if that happened to work, there are no promises about it.
But thinking through how you might, if you happened to get incredibly lucky, try to make it work:
Get the list of representations with -imageRepresentations.
Find the one that matches the original (hopefully there's just the one).
Get a CGImage from it with -CGImageForProposedRect:context:hints:. You probably want a rect that matches the size of the image, and I'd probably pass a hint of no interpolation.
Get the data provider with CGImageGetDataProvider.
Copy its data with CGDataProviderCopyData. (But I doubt this will be the actual original data including metadata, byte-for-byte.)
There are callbacks that will get you a direct byte pointer into the internal data of a CGDataProvider (like CGDataProviderGetBytePointerCallback), but I don't know of any way to request the list of callbacks from an existing CGDataProvider. That's typically something Quartz accesses, and something we just pass in during creation.
I strongly suspect this is impossible.
This is not possible.
For one thing, not all images are backed by data. Some may be procedural. For example, an image created using +imageWithSize:flipped:drawingHandler: takes a block which draws the image.
But, in any case, even CGImage converts the data on import, and that's about as low-level as the Mac frameworks get.
When I thought about resizing images and saving the resized copies alongside the originals on the server, I came to the following question:
// Original size
DSC_18342.jpg
// New size: Use an "x" for "times"
DSC_18342_640x480px.jpg
// New size: Use the real "×" for "times"
DSC_18342_640×480px.jpg
The point is that it's slightly easier to read with a real × instead of an x in the file name, since the unit px already contains an x, which makes a name like 640x480px a little bit harder to read.
Question: What problems could I run into when using the × character (the HTML entity &times;) in the file name?
Side note: I'm writing an open-source, publicly available script, so the target server can be anything. Therefore I'm also interested in (and will vote up) edge cases that I'm not aware of.
Thank you all!
You may have noticed that I'm aware I could simply avoid it (which I'll do anyway), but I'm interested in this issue and in learning about it, so please just take the above example as a possible case.
There are file systems that simply don't support Unicode. This may be less of a problem if you make Unicode support a requirement of your application.
Some considerations about different Unicode file systems are given in File Systems, Unicode, and Normalization.
A concluding remark (from the viewpoint of Solaris file systems) is:
Complete compatibility and seamless interoperability with all other existing Unicode file systems appears not 100% possible due to inherent differences.
I can imagine that there will be problems especially when migrating the application. Just storing files is probably not a problem, but if their names are stored in a database, there might be a mismatch after migration.
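To make the encoding issues concrete: the × character (U+00D7) is two bytes in UTF-8, so it only survives on systems that agree on the encoding, and it must be percent-encoded in URLs. And if the literal HTML entity &times; ended up in the name, the & and ; would themselves need escaping in URLs and shells. A quick illustration (standard-library Python; the file name is the one from the question):

```python
import urllib.parse

name = "DSC_18342_640\u00d7480px.jpg"      # real multiplication sign
print(name.encode("utf-8"))                # the x becomes the two bytes 0xC3 0x97
print(urllib.parse.quote(name))            # it must be percent-encoded in a URL

entity_name = "DSC_18342_640&times;480px.jpg"  # literal HTML entity in the name
print(urllib.parse.quote(entity_name))         # & and ; need escaping too
```

So every layer that touches the name (file system, database, URL, shell) must agree on the encoding; any one of them disagreeing produces the mismatch described above.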
I am looking for a way to create a CGImageRef buffer once and reuse it for different images.
My application is facing a performance hit as it creates an image and then draws it on a context. This process runs in a timer that fires every 1 ms. I wonder if there is anything I could do to avoid calling
CGBitmapContextCreateImage(bitmapcontext); on every single tick.
Thanks
There is one way you could theoretically do it:
Create an NSMutableData or CFMutableData.
Use its mutableBytes (NSMutableData) or CFDataGetMutableBytePtr (CFMutableData) as the backing buffer of the bitmap context.
Create a CGDataProvider with the data object.
Create a CGImage with that data provider, making sure to use all the same parameter values you created the bitmap context with.
However, I'm not sure that this is guaranteed to work. More specifically, I don't think that the CGImage is guaranteed not to copy, cache, and reuse any data that your data provider provided. If it ever does, you'll find your app showing a stale image (or even an image that is partly stale).
You might be better off simply holding on to the CGImage(s). If you generate the image based on some input, consider whether you might be able to cache resulting images by that input—for example, if you're drawing a number or two into the context, consider caching the CGImages in a dictionary or NSCache keyed by the number(s) string. Of course, how feasible this is depends on how big the images are and how limited memory is; if this is on iOS, you'd probably be dropping items from that cache pretty quickly.
Also, doing anything every 1 ms will not actually be visible to the user. If you mean to show these images to the user, there's no way to do that 1000 times per second—even if you could do that in your application, the user simply cannot see that fast. As of Snow Leopard (and I think since Tiger, if not earlier), Mac OS X limits drawing to 60 frames per second; I think this is also true on iOS. What you should do at a reasonable interval—1/60th of a second being plenty reasonable—is set a view as needing display, and you should do this drawing/image generation only when the view is told to draw.
I am in the process of selecting an image format that will be used as the storage format for all in-house textures.
The format will be used as a source format from which compressed textures for different platforms and configurations will be generated, and so needs to cover all possible texture types (2D, cube, volumetric, varying numbers of mip-maps, floating-point pixel formats, etc.) and be completely lossless.
In addition the format has to be able to keep a bit of metadata.
Currently a custom format is used for this, but a commonly available format will be easier for the artists to work with, since it's viewable in most image editors.
I have thought of using DDS, but this format does not support metadata as far as I can see.
All suggestions appreciated!
With your requirements you should stay with your self-made format. I don't know of any image format besides DDS that supports volumetric and cube textures. Unfortunately, DDS does not support metadata.
The closest thing you can find is TIFF. It does not directly support cube maps or volumetric textures, but it supports any number of sub-images. That way you could reuse the sub-images as slices or cube faces.
TIFF also has very good support for custom metadata. The libtiff image reading/writing library works pretty well. It looks a bit archaic if you come from an OO background, but it gets its job done.
Nils
When peeking inside various games' resources, I found out that most of them store textures (I don't know whether they're compressed or not) in TGA.
TIFF would probably be your best bet for a format that supports arbitrary metadata and multiple frames, but I think you are better off keeping the assets (in this case, images) separate from how they are converted and utilized in your engine.
Keep images in 32-bit PNG format, and put type and meta information in XML. That keeps your data human-viewable, readable, and editable. Obscure custom formats are for engines, not people.
Stick with whatever your artists work with. If you are a Windows/Mac shop and use Photoshop, stick with .psd; if you are a Unix shop and use GIMP, stick with .xcf. These formats will store layers and all the stuff your artists need and are used to. Since your artists will be creating loads of assets, make their lives as easy as possible, even if it means writing some extra code.
Put the metadata (whatever it may be) somewhere alongside the images if the native format (psd/xcf) doesn't support it. For stuff like cube maps and mipmaps (if not generated by the converter), stick to naming guidelines, or guidelines on how to put them into one file. Depending on what tool you use to create the volumetric stuff, just stick with that tool's native format.
While writing custom formats for the target platform is usually a good idea, writing custom formats for artists results in mayhem...
My experience with DDS is that it is a poorly documented and difficult format to work with, and it offers few advantages. It is generally simpler to just store a master file for each image class that has references to the source images that make it up (i.e. 6 faces for a cube map, an arbitrary number of slices for a volume texture) as well as any other useful metadata.
It's always going to be a good idea to keep the metadata in a separate file (or in a database), as you do not want to be loading large numbers of images when carrying out searches, populating browsers, etc. It also makes sense to separate your source image format (tiff, tga, jpeg, dds, ...) from your "meta-format" (cube, volume, ...), since you may well find that you need to use lossy compression to support HDR formats or very large source volume data.
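As an illustration of that split, the master file can be as simple as a small JSON document that names the source images and carries the metadata (the schema and file names here are hypothetical):

```python
import json

# A hypothetical master file for a cube map: the six faces stay as
# ordinary source images; the metadata lives here, not in the pixels.
master = {
    "type": "cube",
    "faces": {face: f"sky_{face}.tif" for face in ("px", "nx", "py", "ny", "pz", "nz")},
    "metadata": {"artist": "unknown", "colorspace": "linear", "mips": "generate"},
}

text = json.dumps(master, indent=2)  # what would be written to disk
assert json.loads(text) == master    # round-trips losslessly
```

Because the master file is tiny and the images are referenced rather than embedded, tools can search and browse the metadata without ever touching the pixel data.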
Have you tried PNG? http://java.sun.com/javase/6/docs/api/javax/imageio/metadata/doc-files/png_metadata.html
As an alternative solution, maybe spend some time writing a plugin for a free image editor to support your file format? I've never done it before, so I don't know the work involved, but there is a boatload of example code out there for you.