Color Space options for CIContext - macOS

When creating a Core Image CIContext using the following:
contextWithCGLContext:pixelFormat:colorSpace:options:
You can specify a color space with a CGColorSpaceRef. However, the options dictionary that you can pass also allows you to set certain color space parameters. From Apple's documentation:
kCIContextOutputColorSpace
A key for the color space to use for images before they are rendered
to the context. By default, Core Image uses the GenericRGB color
space, which leaves color matching to the system. You can specify a
different output color space by providing a Quartz 2D CGColorSpace
object (CGColorSpaceRef). (See Quartz 2D Programming Guide for
information on creating and using CGColorSpace objects.)
kCIContextWorkingColorSpace
A key for the color space to use for image operations. By default,
Core Image assumes that processing nodes are 128 bits-per-pixel,
linear light, premultiplied RGBA floating-point values that use the
GenericRGB color space. You can specify a different working color
space by providing a Quartz 2D CGColorSpace object (CGColorSpaceRef).
Note that the working color space must be RGB-based. If you have YUV
data as input (or other data that is not RGB-based), you can use
ColorSync functions to convert to the working color space. (See Quartz
2D Programming Guide for information on creating and using
CGColorSpace objects.)
My question is, how do these various color space parameters interact?
My assumption would be that creating the CIContext with a color space set using the colorSpace: parameter above would be the same as setting the color space using the kCIContextOutputColorSpace key in the options: dictionary. Is that correct? If not, what does each parameter mean?
If so, then what happens if the color space is set using both the colorSpace: parameter as well as the kCIContextOutputColorSpace key in the options: dictionary? Does one override the other? Is there some other behavior?

Related

What is the Color Space of the Display Native Values on Digital Color Meter

Does anyone know what the color space of the "Display native values" option in Digital Color Meter is, and how to get this color space as an NSColorSpace or CGColorSpace object? Thanks.
I assume that that's NSColorSpace.deviceRGBColorSpace a.k.a. NSDeviceRGBColorSpace a.k.a. CGColorSpaceCreateDeviceRGB().
In other words, I think those are the actual values in the hardware framebuffer (modulo float vs. integer representation).
The "Display native values" color space is simply the active display profile.
Now how do we go about getting hold of it? We have two options:
NSScreen.colorSpace
CGDisplayCopyColorSpace — also see CGMainDisplayID()
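For example, a minimal sketch of the second option (CoreGraphics is a plain C API, so this compiles as C or C++, linked against the CoreGraphics framework):

    #include <CoreFoundation/CoreFoundation.h>
    #include <CoreGraphics/CoreGraphics.h>
    #include <stdio.h>

    int main(void) {
        // The active display profile of the main display.
        CGColorSpaceRef space = CGDisplayCopyColorSpace(CGMainDisplayID());

        // The profile name, if the color space has one (may be NULL).
        CFStringRef name = CGColorSpaceCopyName(space);
        char buf[256];
        if (name && CFStringGetCString(name, buf, sizeof buf, kCFStringEncodingUTF8))
            printf("Main display color space: %s\n", buf);
        if (name)
            CFRelease(name);

        CGColorSpaceRelease(space);
        return 0;
    }

NSScreen.colorSpace should give you the same profile per screen, as an NSColorSpace.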

Xcode custom symbol image set asset

From Apple's documentation I read:
An alternative to creating bitmap images is to use template images or symbol images instead. Template images specify the shape you want to draw, but not the associated color information. Symbol images are similar to template images but are vector based, so they scale to different sizes. Both types of images simplify the process for supporting Dark Mode. They also reduce the number of image assets you must ship with your app.
Do I understand this correctly that I cannot use a vector-based symbol image set as a scalable replacement for bitmaps?
Update
My situation:
I have an Image Set in my iOS project, where I can add three bitmaps for the three scaling options iOS devices support. Those images must be *.png files.
Now I would like to replace those three bitmaps with a single vector graphics file (*.svg). This does not seem to be possible. If I want to use vector graphics images, I have to add another asset type called "symbol image set" (Xcode -> Editor -> Add Assets -> New Symbol Image Set).
So far so good, but now my question: those "symbol image set" assets take only the shape, not the color information, into account. That's why I would like the "old" way, where the color information is taken into account, but with the file being a vector graphics file (*.svg).
You don't actually need a Symbol Set for what you are trying to achieve.
All you need is a vector image in PDF format, marked as Single Scale.
Then, you can mark the asset as template (meaning you can tint it with any color) or original if you want to retain its original colors.
Finally, note that although you supply vector images, Xcode by default will produce PNG versions for all scales behind the scenes. If you need to preserve the vector data (which can help in some cases where you need to upscale), you can enable Preserve Vector Data as well.
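For reference, a sketch of what those settings correspond to in the image set's Contents.json (icon.pdf is a placeholder filename; "template-rendering-intent" can be "template" or "original", and is omitted when Render As is left at Default):

    {
      "images" : [
        {
          "filename" : "icon.pdf",
          "idiom" : "universal"
        }
      ],
      "info" : {
        "author" : "xcode",
        "version" : 1
      },
      "properties" : {
        "preserves-vector-representation" : true,
        "template-rendering-intent" : "template"
      }
    }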

What is the difference between framebuffer and image in Vulkan?

I know that the framebuffer is the final destination of the rendering pipeline and that a swapchain contains many images. So what is the relation between those two things? Which one is the actual render target? And does the framebuffer later attach the final picture of the current frame to the image view? If so, how is it transferred?
A description with a drawing or diagram would be appreciated.
VkFramebuffer + VkRenderPass defines the render target.
Render pass defines which attachment will be written with colors.
VkFramebuffer defines which VkImageView is to be which attachment.
VkImageView defines which part of VkImage to use.
VkImage defines which VkDeviceMemory is used and the format of the texels.
Or maybe in the opposite sequence:
VkDeviceMemory is just a sequence of N bytes in memory.
A VkImage object adds to it e.g. information about the format (so you can address it by texels, not bytes).
A VkImageView object helps select only a part (array slice or mip level) of the VkImage (like a stringView, arrayView, or what have you). It can also help match some incompatible interface (by type-casting the format).
VkFramebuffer binds a VkImageView with an attachment.
VkRenderPass defines which attachment will be drawn into.
So it's not like you do not use an image. You do, through the Vulkan Framebuffer.
A swapchain image is no different from any other image, except that the driver is the owner of the image. You can't destroy it directly or allocate it yourself; you just borrow it from the driver for the duration between the acquire and present operations.
There are (usually) several swapchain images for the purposes of buffering and advance rendering. AFAIK you need a separate VkFramebuffer for each image (which is annoying, but more in tune with what actually happens underneath).
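For illustration, a rough C++ sketch of borrowing the images and wrapping each one in a view (the helper name is made up; device, swapchain, and the surface format are assumed to exist already):

    #include <vulkan/vulkan.h>
    #include <vector>

    // Wrap every swapchain image in a 2D color view.
    std::vector<VkImageView> makeSwapchainViews(VkDevice device,
                                                VkSwapchainKHR swapchain,
                                                VkFormat surfaceFormat) {
        // The driver owns these VkImages; we only query handles to them.
        uint32_t count = 0;
        vkGetSwapchainImagesKHR(device, swapchain, &count, nullptr);
        std::vector<VkImage> images(count);
        vkGetSwapchainImagesKHR(device, swapchain, &count, images.data());

        std::vector<VkImageView> views(count);
        for (uint32_t i = 0; i < count; ++i) {
            VkImageViewCreateInfo info{};
            info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
            info.image = images[i];
            info.viewType = VK_IMAGE_VIEW_TYPE_2D;
            info.format = surfaceFormat; // must be compatible with the image
            // The "part of the VkImage" this view exposes:
            // color aspect, mip 0, array layer 0.
            info.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
            vkCreateImageView(device, &info, nullptr, &views[i]);
        }
        return views;
    }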
Probably the best single sentence from the Vulkan spec that describes framebuffers is:
The specific image views that will be used for the attachments, and
their dimensions, are specified in VkFramebuffer objects.
Yes, you would need a VkFramebuffer object for each image in a swapchain, but you generally would need to allocate only one VkMemory for a depth buffer VkImage and then add the VkImageView for that single depth buffer VkImage to all of your framebuffers.
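A minimal sketch of that arrangement (a made-up helper; it assumes a render pass with one color and one depth attachment, plus already-created swapchain views, a shared depth view, and the swapchain extent):

    #include <vulkan/vulkan.h>
    #include <vector>

    std::vector<VkFramebuffer> makeFramebuffers(VkDevice device,
                                                VkRenderPass renderPass,
                                                const std::vector<VkImageView>& swapchainViews,
                                                VkImageView depthView,
                                                VkExtent2D extent) {
        std::vector<VkFramebuffer> framebuffers(swapchainViews.size());
        for (size_t i = 0; i < swapchainViews.size(); ++i) {
            // One framebuffer per swapchain image; the single depth view
            // is reused as attachment 1 in every framebuffer.
            VkImageView attachments[] = { swapchainViews[i], depthView };

            VkFramebufferCreateInfo info{};
            info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
            info.renderPass = renderPass;    // defines the attachment slots
            info.attachmentCount = 2;
            info.pAttachments = attachments; // which views fill those slots
            info.width = extent.width;
            info.height = extent.height;
            info.layers = 1;
            vkCreateFramebuffer(device, &info, nullptr, &framebuffers[i]);
        }
        return framebuffers;
    }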

DX11 add a simple black box on a texture

I want to add a simple black box (like this effect) on a texture (ID3D11ShaderResourceView). Is there a simple way to do it in DX11? I don't want to write a shader to do it.
Well, what you're trying to do is actually "initializing a texture programmatically". Textures from the D3D point of view are nothing more than pieces of memory with a clearly defined layout. Normally, you create a texture resource, read data from a texture file (like *.BMP for example), put the data in the texture, and then feed it to the pipeline for sampling.
In your case though, you need a few additional steps:
Create the texture resource using either D3D11_USAGE_DEFAULT or D3D11_USAGE_DYNAMIC usage, so you can update it from the CPU (DYNAMIC additionally requires D3D11_CPU_ACCESS_WRITE)
Read the color map into your texture
Depending on the chosen usage, either put your data into the initial data or Map/Unmap and add your data (by "your data" I mean overwriting each edge of the image with black); see the sketch below
This can also be done to kind of "generate" textures, like for example a checkerboard pattern or clouds.
All the information you need can be found here.
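For example, a C++ sketch of the initial-data variant from step 3 (the helper name, border width, and RGBA8 format are my own choices; srcPixels is assumed to hold the source image, one uint32_t per pixel):

    #include <d3d11.h>
    #include <cstdint>
    #include <vector>

    // Copy the source pixels, overwrite each edge with black, and create
    // the texture and its shader resource view from that data.
    HRESULT createBorderedTexture(ID3D11Device* device,
                                  const std::vector<uint32_t>& srcPixels,
                                  UINT width, UINT height,
                                  ID3D11ShaderResourceView** outSrv) {
        std::vector<uint32_t> pixels = srcPixels;
        const uint32_t black = 0xFF000000u; // opaque black in R8G8B8A8_UNORM
        const UINT border = 4;              // box thickness in pixels

        for (UINT y = 0; y < height; ++y)
            for (UINT x = 0; x < width; ++x)
                if (x < border || y < border ||
                    x >= width - border || y >= height - border)
                    pixels[y * width + x] = black;

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT; // step 1; IMMUTABLE also works here
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = pixels.data();
        init.SysMemPitch = width * 4; // bytes per row

        ID3D11Texture2D* tex = nullptr;
        HRESULT hr = device->CreateTexture2D(&desc, &init, &tex);
        if (FAILED(hr))
            return hr;

        hr = device->CreateShaderResourceView(tex, nullptr, outSrv);
        tex->Release(); // the view keeps its own reference
        return hr;
    }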

Find the number of components in the current fill & stroke color space from a CGContext?

I am looking for a way of finding the color space of a CGContext.
CGBitmapContextGetColorSpace() will not work, as I am working with a general context and not a bitmap context.
Thank you,
Rahul
There does not seem to be any way to retrieve this information; for a system-provided CGContext, the color space actually used may not even be any of the publicly-defined color spaces.
There also does not seem to be any way to retrieve the current fill or stroke color or color space. However, there is also no documented restriction on the types of colors that may be passed to CGContextSetFillColorWithColor or CGContextSetStrokeColorWithColor, so in general you should just set the color you want and let the framework handle translating it into the appropriate format for the CGContext.
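For example, a small sketch of that approach (plain C CoreGraphics calls; the function name is made up): since you choose the color yourself, you can also keep the color space and its component count on your side instead of asking the context.

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdio.h>

    // Set a fill color and report the component count of the color space
    // we picked -- tracked by us, not read back from the context.
    static void setFillAndReport(CGContextRef ctx) {
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
        const CGFloat comps[] = { 1.0, 0.0, 0.0, 1.0 }; // red, alpha = 1
        CGColorRef color = CGColorCreate(space, comps);

        CGContextSetFillColorWithColor(ctx, color);

        // 3 for an RGB space (alpha is not counted).
        size_t n = CGColorSpaceGetNumberOfComponents(space);
        printf("fill color space has %zu components\n", n);

        CGColorRelease(color);
        CGColorSpaceRelease(space);
    }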
You can't directly read color space data from CGContext properties, so you'll have to work around this shortcoming in the CGContext API by
- intercepting property updates on the way to the CGContext of interest, or
- outputting PDF data whose stream you then examine for the color space & color data.
There are
    CGColorSpaceRef CGContextGetStrokeColorSpace(CGContextRef);
    CGColorSpaceRef CGContextGetFillColorSpace(CGContextRef);
but these are private.
