What is the difference between framebuffer and image in Vulkan?

I know that the framebuffer is the final destination of the rendering pipeline and that the swapchain contains many images. So what is the relation between those two things? Which one is the actual render target? And does the framebuffer later attach the final picture of the current frame to the image view? If so, how is it transferred?
A drawing or diagram describing this would be appreciated.

VkFramebuffer + VkRenderPass defines the render target.
Render pass defines which attachment will be written with colors.
VkFramebuffer defines which VkImageView is to be which attachment.
VkImageView defines which part of VkImage to use.
VkImage defines which VkDeviceMemory is used and the format of the texels.
Or maybe in the opposite order:
VkDeviceMemory is just a sequence of N bytes in memory.
A VkImage object adds information to it, e.g. the format (so you can address it by texels, not bytes).
A VkImageView object selects only part (an array slice or mip level) of the VkImage (much like a stringView, arrayView, or whathaveyou does). It can also help match an otherwise incompatible interface (by "casting" the format).
VkFramebuffer binds a VkImageView to an attachment.
VkRenderPass defines which attachment will be drawn into.
So it's not like you do not use an image. You do, through the Vulkan Framebuffer.
A swapchain image is no different from any other image, except that the driver is the owner of the image. You can't destroy it directly or allocate it yourself; you just borrow it from the driver for the duration between the acquire and the present operation.
There are (usually) several swapchain images for the purposes of buffering and rendering ahead. AFAIK you would need a separate VkFramebuffer for each image (which is annoying, but more in tune with what actually happens underneath).
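To make that chain concrete, here is a minimal sketch (illustrative names; it assumes a device, one swapchain image, the swapchain format and extent, and a compatible renderPass already exist, and it omits error handling):

// Requires <vulkan/vulkan.h>.
// Wrap one swapchain VkImage in a VkImageView that exposes its color aspect.
VkImageViewCreateInfo viewInfo{};
viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image    = swapchainImage;                   // which VkImage to use
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format   = swapchainFormat;                  // how to interpret its texels
viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel   = 0;
viewInfo.subresourceRange.levelCount     = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount     = 1;

VkImageView colorView = VK_NULL_HANDLE;
vkCreateImageView(device, &viewInfo, nullptr, &colorView);

// Bind that view to attachment 0 of a render pass via a VkFramebuffer.
VkFramebufferCreateInfo fbInfo{};
fbInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
fbInfo.renderPass      = renderPass;                  // defines how attachment 0 is used
fbInfo.attachmentCount = 1;
fbInfo.pAttachments    = &colorView;                  // attachment 0 = colorView
fbInfo.width           = extent.width;
fbInfo.height          = extent.height;
fbInfo.layers          = 1;

VkFramebuffer framebuffer = VK_NULL_HANDLE;
vkCreateFramebuffer(device, &fbInfo, nullptr, &framebuffer);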

Probably the best single sentence from the Vulkan spec that describes framebuffers is:
The specific image views that will be used for the attachments, and
their dimensions, are specified in VkFramebuffer objects.
Yes, you would need a VkFramebuffer object for each image in a swapchain, but you generally need to allocate only one VkDeviceMemory for a depth buffer VkImage, and can then attach the VkImageView for that single depth buffer VkImage to all of your framebuffers.
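For example, a hedged sketch of that setup (assuming device, swapchainImageViews, a single depthView for the shared depth buffer, a compatible renderPass and the swapchain extent already exist; names are illustrative and error handling is omitted):

// Requires <vulkan/vulkan.h> and <vector>.
// One VkFramebuffer per swapchain image; every framebuffer reuses the same depth view.
std::vector<VkFramebuffer> framebuffers(swapchainImageViews.size());

for (size_t i = 0; i < swapchainImageViews.size(); ++i) {
    VkImageView attachments[] = { swapchainImageViews[i], depthView };

    VkFramebufferCreateInfo fbInfo{};
    fbInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    fbInfo.renderPass      = renderPass;       // attachment 0 = color, attachment 1 = depth
    fbInfo.attachmentCount = 2;
    fbInfo.pAttachments    = attachments;
    fbInfo.width           = extent.width;
    fbInfo.height          = extent.height;
    fbInfo.layers          = 1;

    vkCreateFramebuffer(device, &fbInfo, nullptr, &framebuffers[i]);
}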

Related

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic / algorithm is used when setting and getting images in and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to use the state of a previous rendering step, blend it with something new, and store that result again in the accumulator.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.

Use Ghostscript to extract a single spot color

I am trying to use Ghostscript to extract the image for a single spot color (from a PDF), REGARDLESS of whether it would be visible when printed.
I tried using the tiffsep device, but the problem is that any spot color content hidden by objects above it does not get output.
Is there any device or setting that would allow all objects, regardless of visibility, to be extracted to a bitmap file?
If an object overlies another in PostScript, then it is absolutely correct that it causes the underlying object not to render in that position (modulo overprint), because PostScript has an opaque imaging model.
So no, you can't prevent this; it's supposed to work like that.

DX11 add a simple black box on a texture

I want to add a simple black box (like this: effect) on a texture (ID3D11ShaderResourceView). Is there a simple way to do it in DX11? I don't want to write a shader to do it.
Well, what you're trying to do is actually "initializing a texture programmatically". Textures, from the D3D point of view, are nothing more than pieces of memory with a clearly defined layout. Normally, you create a texture resource, read data from a texture file (like *.BMP for example), put the data in the texture, and then feed it to the pipeline for sampling.
In your case though, you need an additional step:
Create the texture resource using either the D3D11_USAGE_DEFAULT or the D3D11_USAGE_DYNAMIC usage, so you can update it from the CPU
Read the color data into your texture
Depending on the chosen usage, either apply your changes to the initial data, or Map/Unmap and write your data (by "your data" I mean overwriting each edge of the image with black; see the sketch below)
This can also be done to "generate" textures procedurally, for example a checkerboard or clouds.
All the information you need can be found here.
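For illustration, here is a rough sketch of the CPU-side variant (assuming device is an ID3D11Device*, the source image is already decoded into a std::vector<uint32_t> of RGBA8 pixels named pixels, and width, height and borderSize are known; names are illustrative and error handling is omitted):

// Requires <d3d11.h>, <vector>, <cstdint>.
// Overwrite a black border directly in the pixel data, then create the texture with it.
const uint32_t black = 0xFF000000u;   // R=0, G=0, B=0, A=255 for DXGI_FORMAT_R8G8B8A8_UNORM (little-endian)

for (UINT y = 0; y < height; ++y) {
    for (UINT x = 0; x < width; ++x) {
        const bool onBorder = x < borderSize || y < borderSize ||
                              x >= width - borderSize || y >= height - borderSize;
        if (onBorder)
            pixels[y * width + x] = black;
    }
}

D3D11_TEXTURE2D_DESC desc{};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;          // the data is supplied as initial data
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init{};
init.pSysMem     = pixels.data();
init.SysMemPitch = width * sizeof(uint32_t);

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&desc, &init, &texture);

ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texture, nullptr, &srv);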

Sharing an OpenGL framebuffer between processes in Mac OS X

Is there any way in Mac OS X to share an OpenGL framebuffer between processes? That is, I want to render to an off-screen target in one process and display it in another.
You can do this with DirectX (actually DXGI) on Windows by creating a surface (the DXGI equivalent of an OpenGL framebuffer) in shared mode, getting an opaque handle for that surface, passing that to another process via whatever means you like, and then creating a surface in that other process by passing in the existing handle. You then use the surface as a render target in one process and consume it as a texture in the other however you wish. In fact the whole compositing window system works like this from Vista onwards.
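Roughly, the Windows-side mechanism looks like this (a sketch only, with illustrative names, no error handling, and the handle transport between the processes left out):

// Requires <d3d11.h> and <dxgi.h>.
// Process A: create a shareable texture and obtain its shared handle.
D3D11_TEXTURE2D_DESC desc{};
desc.Width            = 1024;
desc.Height           = 768;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED;   // make the resource shareable

ID3D11Texture2D* sharedTex = nullptr;
deviceA->CreateTexture2D(&desc, nullptr, &sharedTex);

IDXGIResource* dxgiRes = nullptr;
sharedTex->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);

HANDLE sharedHandle = nullptr;
dxgiRes->GetSharedHandle(&sharedHandle);              // send this handle to process B somehow

// Process B: open the same texture from the received handle and use it as a texture.
ID3D11Texture2D* importedTex = nullptr;
deviceB->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D),
                            (void**)&importedTex);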
If this isn't possible I can of course get the contents of the framebuffer into system memory and use cross-process shared memory to get it to the target process, then upload it again from there, but that would be unnecessarily slow.
Depending on what you're really trying to do this sample code project may be what you want:
MultiGPUIOSurface sample code
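The core idea in that sample is IOSurface sharing. Heavily abbreviated, and assuming you already have a CGL context on the consumer side and some way to send a mach port between the two processes, it looks roughly like this (a sketch with illustrative names, not the sample's actual code; error handling omitted):

#include <mach/mach.h>
#include <IOSurface/IOSurface.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLIOSurface.h>
#include <OpenGL/gl.h>

// Producer process: create an IOSurface and a mach port that can be handed
// to another process (how you transport the port -- XPC, bootstrap, etc. -- is up to you).
IOSurfaceRef CreateSharedSurface(int width, int height, mach_port_t* outPort) {
    int bytesPerElement = 4;  // BGRA8
    CFNumberRef w   = CFNumberCreate(NULL, kCFNumberIntType, &width);
    CFNumberRef h   = CFNumberCreate(NULL, kCFNumberIntType, &height);
    CFNumberRef bpe = CFNumberCreate(NULL, kCFNumberIntType, &bytesPerElement);
    const void* keys[]   = { kIOSurfaceWidth, kIOSurfaceHeight, kIOSurfaceBytesPerElement };
    const void* values[] = { w, h, bpe };
    CFDictionaryRef props = CFDictionaryCreate(NULL, keys, values, 3,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    IOSurfaceRef surface = IOSurfaceCreate(props);
    *outPort = IOSurfaceCreateMachPort(surface);      // pass this port to the consumer process
    CFRelease(props); CFRelease(w); CFRelease(h); CFRelease(bpe);
    return surface;
}

// Consumer process: look the surface up from the received port and bind it
// to a rectangle texture in the local OpenGL context, then draw with it.
GLuint TextureFromPort(CGLContextObj cglContext, mach_port_t port,
                       GLsizei width, GLsizei height) {
    IOSurfaceRef surface = IOSurfaceLookupFromMachPort(port);
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    CGLTexImageIOSurface2D(cglContext, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                           width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                           surface, 0);
    return tex;
}

On the producer side the same CGLTexImageIOSurface2D call can be used to attach the surface to an FBO color attachment, so one process renders into the IOSurface and the other samples from it.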
It really depends upon the context of how you're using it.
Objects that may be shared between contexts include buffer objects,
program and shader objects, renderbuffer objects, sampler objects,
sync objects, and texture objects (except for the texture objects
named zero).
Some of these objects may contain views (alternate interpretations) of
part or all of the data store of another object. Examples are texture
buffer objects, which contain a view of a buffer object’s data store,
and texture views, which contain a view of another texture object’s
data store. Views act as references on the object whose data store is
viewed.
Objects which contain references to other objects include framebuffer,
program pipeline, query, transform feedback, and vertex array objects.
Such objects are called container objects and are not shared.
Chapter 5 / OpenGL-4.4 core specification
The reason you can do those things on Windows and not on OS X is that the Windows graphics stack provides an API that allows DirectX surfaces to be shared between processes. If OS X doesn't have the capability within the OpenGL API, then you're going to have to come up with your own solution. Take a look at the OpenGL Programming Guide for Mac; there's a small section that describes using multiple OpenGL contexts.

Trying to understand OpenGL

I am reading the OpenGL ES guide provided by Apple, but I don't understand it in full detail and have a question. I would be very grateful if you could help me understand. On page 28, in the chapter about drawing, it says the following:
To correctly create a framebuffer:
Create a framebuffer object.
Create one or more targets (renderbuffers or textures), allocate storage for them, and attach each to an attachment point on the
framebuffer object.
Test the framebuffer for completeness.
My question is: In point 2, shouldn't it say "create one or more sources..."? As I currently understand it, the frame buffer is what will be rendered to the screen in my draw method. Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer. Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.
Your program renders into the framebuffer, by executing actions that cause OpenGL to rasterize fragments into the framebuffer.
But, the framebuffer isn't displayed anywhere, unless you do as the documentation says and send it out to a target.
It's a bit like this (very very rough and off the top of my head):
+----------+ +--------+ +---------------------+ +----------------------+
|draw calls|---|pipeline|---|pixels in framebuffer|---|pixels in renderbuffer|
+----------+ +--------+ +---------------------+ +----------------------+
Target is correct. A framebuffer renders to a region of memory that will later either be composited to the screen (a renderbuffer) or be used as a texture in a secondary render pass.
The framebuffer attachment is first the target of the writes, then the source of the reads. Note that when the spec says "target", it usually means the active object that's being processed or changed, regardless of the actual operation.
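In plain GL calls, the three steps quoted from the guide look roughly like this (a minimal sketch with illustrative names; width and height are assumed to exist, and error handling is mostly omitted):

// Requires the GL headers for your platform, e.g. <OpenGLES/ES2/gl.h> on iOS.
// 1. Create a framebuffer object.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// 2. Create a target (here a texture), allocate its storage, and attach it.
GLuint colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // storage only, no pixel data yet
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // sampleable without mipmaps
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// 3. Test the framebuffer for completeness.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}

// Everything drawn while fbo is bound now lands in colorTex, which can later
// be sampled as a texture in another pass (or copied/resolved for display).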
Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.
Yes, you fundamentally misunderstood something. A framebuffer object is used when you want to render things not to the screen but to some off-screen image, for example a texture, or some image that is later post-processed. If you want to render just to the screen, you don't need an FBO.
Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer.
No. A framebuffer object is not a source of image content, but a "canvas", a receiver, that you can draw on. Or, more precisely, a "frame" for the actual canvas, where the attached target is the canvas to draw on.
The framebuffer attachments have nothing to do with the content you render. You can attach multiple render targets to each framebuffer and switch between them. However, this is something you don't actually need, and it has nothing to do with the rendering step itself until you use multiple passes or need some other render-to-texture technique.
I would skip this part of your OpenGL learning, start directly with VBOs and shaders, and just use some template for the framebuffers at this point. You may need them later, but not now.
