Vulkan - How to know what the current image layout is?

After rendering or some other operations, I want to read the target image back to the CPU.
To do this, I first need to perform a layout transition, changing the image's current layout (the old layout) to one that allows transferring its data to a staging image: VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL.
For the transition operation, I need to supply both the current layout and the new layout.
But how do I know what the current layout is? Each render pass may set finalLayout to a different value, and some transitions may also have been performed in the meantime.
A solution I can think of is to store the current layout per image and update it after each render pass and after each transition operation.
Is this correct?

But how do I know what the current layout is? Each render pass may set finalLayout to a different value, and some transitions may also have been performed in the meantime.
Yes, but you created those render passes. You issued commands to use those render passes on that image. Therefore, at any point in the command stream, you know what layout the image is in.
Vulkan expects you to be aware of what you've done. How you pull that off is up to you. Maybe you always leave the image in color-attachment optimal. Maybe you explicitly keep track of it with some higher-level layer. It could be any number of things.
But at the end of the day, it's up to you. With great power comes great responsibility.
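For example, a minimal sketch of the explicit-tracking approach (the TrackedImage wrapper and the deliberately coarse stage/access masks are illustrative, not anything Vulkan mandates):

#include <vulkan/vulkan.h>

// Illustrative wrapper: the last-known layout travels with the image.
struct TrackedImage {
    VkImage       image;
    VkImageLayout currentLayout = VK_IMAGE_LAYOUT_UNDEFINED;
};

// Route every transition through one helper so the record never goes stale.
// A real version would derive the stage/access masks from the two layouts.
void TransitionLayout(VkCommandBuffer cmd, TrackedImage &img, VkImageLayout newLayout)
{
    VkImageMemoryBarrier barrier = {};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.oldLayout           = img.currentLayout;   // tracked, not queried
    barrier.newLayout           = newLayout;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = img.image;
    barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
    barrier.srcAccessMask       = VK_ACCESS_MEMORY_WRITE_BIT;
    barrier.dstAccessMask       = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT;

    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                         VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &barrier);

    img.currentLayout = newLayout;   // record the layout the barrier leaves behind
}

// Render passes change layouts without a barrier, so after one finishes,
// update the record by hand to whatever finalLayout that pass declared:
//   img.currentLayout = passFinalLayout;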

Related

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that uses a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic/algorithm is used when setting and getting images in and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the state of a previous rendering step, blend it with something new, and store that result in the accumulator again.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.
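In code, one iteration of that painting loop might look like this (a sketch; the source-over blend is just one reasonable choice of compositing filter):

#import <CoreImage/CoreImage.h>

// One step of the feedback loop: blend the latest stroke over whatever the
// accumulator already holds, then store the result back into it.
void AccumulateStroke(CIImageAccumulator *accumulator, CIImage *strokeImage)
{
    CIFilter *compose = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [compose setValue:strokeImage forKey:kCIInputImageKey];
    [compose setValue:accumulator.image forKey:kCIInputBackgroundImageKey];

    // setImage: replaces the accumulator's contents; on the next stroke this
    // result becomes the background, which is what makes it "accumulate".
    [accumulator setImage:compose.outputImage];
}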

Rendering to CAMetalLayer from dedicated render thread / loop

In Windows World, a dedicated render thread would loop something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();
        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen render target (i.e., an MTLTexture with dimensions matching your drawable size), and then blit from the most recently drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
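A sketch of that blit-and-present step (assumes commandQueue and offscreenTexture already exist, that the texture matches the drawable's size and pixel format, and that the layer was created with framebufferOnly set to NO so its textures can be blit destinations):

id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

// Copy the most recently completed off-screen frame into the drawable.
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:offscreenTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(offscreenTexture.width, offscreenTexture.height, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];

[commandBuffer presentDrawable:drawable];
[commandBuffer commit];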
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
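For instance (a macOS sketch; quit, UpdateStates, and RenderWithMetalAndPresent stand in for your own loop machinery):

#import <CoreVideo/CoreVideo.h>
#import <dispatch/dispatch.h>

static dispatch_semaphore_t frameSemaphore;
static volatile bool quit = false;     // set from elsewhere to stop the loop
void UpdateStates(void);               // your app logic (assumed)
void RenderWithMetalAndPresent(void);  // encodes, presents, commits (assumed)

// Fires once per refresh on a dedicated CoreVideo thread.
static CVReturn DisplayLinkFired(CVDisplayLinkRef link,
                                 const CVTimeStamp *now,
                                 const CVTimeStamp *outputTime,
                                 CVOptionFlags flagsIn,
                                 CVOptionFlags *flagsOut,
                                 void *context)
{
    dispatch_semaphore_signal(frameSemaphore);  // permit one more frame
    return kCVReturnSuccess;
}

void RenderThread(void)
{
    frameSemaphore = dispatch_semaphore_create(0);

    CVDisplayLinkRef link;
    CVDisplayLinkCreateWithActiveCGDisplays(&link);
    CVDisplayLinkSetOutputCallback(link, DisplayLinkFired, NULL);
    CVDisplayLinkStart(link);

    while (!quit) {
        // Block until the display link says another frame is wanted,
        // so the loop never runs ahead of the refresh rate.
        dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);
        UpdateStates();
        RenderWithMetalAndPresent();
    }

    CVDisplayLinkStop(link);
    CVDisplayLinkRelease(link);
}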
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = layer.nextDrawable;
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
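Continuing the snippet above, those remaining steps might look like this (assumes a commandQueue created at startup; pipeline setup and the actual draw calls are elided):

desc.colorAttachments[0].loadAction = MTLLoadActionClear;
desc.colorAttachments[0].storeAction = MTLStoreActionStore;
desc.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... set pipeline state, bind buffers, and encode draw calls here ...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];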
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
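Both of those layer-level knobs are plain property assignments, for example:

CAMetalLayer *layer = [CAMetalLayer layer];
layer.device = MTLCreateSystemDefaultDevice();
layer.pixelFormat = MTLPixelFormatBGRA8Unorm;
layer.displaySyncEnabled = YES;    // macOS only: present in step with v-sync (the default)
layer.maximumDrawableCount = 2;    // double-buffer instead of the default 3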

What is the best way to convert PDF pairs into single pages?

I need to take an existing PDF (created with Prawn), and combine pairs after page 1 (the cover) into single pages. I would also like to add a vertical line in the center of the joined pages. The pages are to be printed in books, and the goal is to make single PDF pages that are similar to the side by side view in Acrobat. I know I can convert them to images, do what I need to with ImageMagick, then put them back into a PDF format, but I am trying to minimize the number of conversions so I can save as much quality as possible.
I also realize I can do this from the start with Prawn, but I am trying to avoid that as it would require a very large change to our application.
It is possible to do this with Ghostscript and the pdfwrite device, but it's by no means simple. You need to write some PostScript to do the job.
You would need to add BeginPage and EndPage procedures. The BeginPage procedure would need to check the current page number (which you would track yourself). If it's page 1, process normally. If it's an even page, throw away the current PageSize and replace it with one that covers a pair of pages, process the even page, and do not transmit the content.
If the page is odd (and not 1), translate the origin so that it's offset to the right by the width of a page, then process the odd page. Use moveto, lineto and stroke to draw the required line between the two pages, and transmit the page.
This assumes that all the pages are the same size and orientation, or at least that the sizes of each page are known in advance. It would be possible to retrieve those programmatically as well, but that is more complex.
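An untested outline of that approach (this is not the exact code from the answers mentioned below; it assumes uniform US Letter pages and, for simplicity, leaves the cover on the left half of a double-width page rather than resizing page 1 separately):

%!PS
% Run as, e.g.:
%   gs -dFIXEDMEDIA -sDEVICE=pdfwrite -o out.pdf impose.ps input.pdf
/W 612 def  /H 792 def
<< /PageSize [W 2 mul H] >> setpagedevice
<<
  % BeginPage receives the number of pages completed so far. An even,
  % non-zero count means pages 3, 5, 7, ...: the right-hand half of a pair.
  /BeginPage {
    dup 0 gt exch 2 mod 0 eq and {
      gsave W 0 moveto W H lineto stroke grestore  % centre dividing line
      W 0 translate                                % shift content to the right
    } if
  } bind
  % EndPage receives the page count and a reason code (0 = showpage).
  % Return true to transmit: after the cover and after each completed pair.
  % Returning false holds the raster so the next page lands beside it.
  % (A trailing unpaired page is not handled here.)
  /EndPage {
    0 eq { 1 add 2 mod 1 eq } { pop false } ifelse
  } bind
>> setpagedevice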
It's definitely non-trivial, but if you rummage through my answers in the PostScript tags and look for anything with the word 'imposition', you'll probably find program outlines to do the job.
I did a quick look, and here's an answer I wrote some time back. It uses a different approach from the one outlined above: it copies some of the guts of the PDF interpreter and repurposes them. It does a chunk of what you want, though.

Restoring modified size of wxFrame in wxWidgets

I am using wxWidgets to design a GUI on Windows. The requirement is: if the user has modified the frame size, I have to store the modified size and use it for the next session. I am able to store the size, but I still get the old size, not the modified size, in the next session. My window has several children (checkboxes, text controls, labels). These controls are placed in a panel using sizers. Each time, the best size is queried and recalculated, and SetClientSize(size) is called. Is this the reason why the modified size is not reflected?
First, don't save and restore the frame size yourself; use wxPersistentTLW, which does it for you. See the overview for more information and the "widgets" sample for an example of using it to preserve the frame geometry.
Second, the layout mechanism in wxWidgets is fully deterministic, so restoring the same frame size as in the last run should definitely result in the same positions and sizes being used for the children. If this isn't the case (I'm not really sure it is; you don't actually say what the problem is), the most likely explanation is that your size saving/restoring code doesn't work correctly, and simply getting rid of it and using the built-in support should fix the problem (whatever it is).
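A minimal sketch of the built-in mechanism (MyApp, MyFrame, and the window name are illustrative):

#include <wx/wx.h>
#include <wx/persist.h>
#include <wx/persist/toplevel.h>

bool MyApp::OnInit()
{
    wxFrame *frame = new MyFrame(nullptr, wxID_ANY, "My App");

    // The name keys the saved settings; it must be unique and stable.
    frame->SetName("MainFrame");

    // Restores the geometry saved by a previous session (if any) and
    // arranges for the current geometry to be saved when the frame closes.
    wxPersistentRegisterAndRestore(frame, "MainFrame");

    frame->Show();
    return true;
}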

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests and found it works quite well to just render (using drawArrays etc.) and then present the render buffer (these two actions, together) just once, and then not do either again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without additional constant rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. As an example, Android's standard OpenGL helper classes offer an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
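With Android's GLSurfaceView, that looks roughly like this (MyRenderer stands in for your own GLSurfaceView.Renderer):

import android.content.Context;
import android.opengl.GLSurfaceView;

public class StaticSceneView extends GLSurfaceView {
    public StaticSceneView(Context context) {
        super(context);
        setRenderer(new MyRenderer());
        // Must be set after setRenderer(): onDrawFrame() now runs only when
        // requestRender() is called, plus when the surface is (re)created,
        // which also covers the lost-context case mentioned above.
        setRenderMode(RENDERMODE_WHEN_DIRTY);
    }

    public void sceneChanged() {
        requestRender();   // schedule exactly one redraw
    }
}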
