SDL_RenderPresent() implementation - sdl-2

The SDL wiki says of SDL_RenderPresent():
SDL's rendering functions operate on a backbuffer; that is, calling a rendering function such as SDL_RenderDrawLine() does not directly put a line on the screen, but rather updates the backbuffer. As such, you compose your entire scene and present the composed backbuffer to the screen as a complete picture. Therefore, when using SDL's rendering API, one does all drawing intended for the frame, and then calls this function once per frame to present the final drawing to the user. The backbuffer should be considered invalidated after each present; do not assume that previous contents will exist between frames. You are strongly encouraged to call SDL_RenderClear() to initialize the backbuffer before starting each new frame's drawing, even if you plan to overwrite every pixel.
Why is the backbuffer invalidated? I'd like to optimize rendering performance by only redrawing parts of the rendering target that need to be redrawn. How can I do this if the backbuffer is invalidated? The Win32 API allows one to redraw portions of the rendering target. Why not SDL?

When you call SDL_RenderPresent(), SDL sends the backbuffer to the display and swaps it out, so the new backbuffer's contents are garbage and you have to redraw it from scratch. This is done for performance reasons.
What I think you can do, if you're not modifying the whole screen, is minimize draw calls by using your own backbuffer and then presenting that, something along the lines of:
// We'll use this texture as our own persistent backbuffer.
SDL_Texture *buffer = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGB888,
                                        SDL_TEXTUREACCESS_TARGET, w, h);

SDL_SetRenderTarget(renderer, buffer);  // draw calls now go to buffer instead of the SDL backbuffer
SDL_RenderCopy(renderer, some_texture, src_rect, dst_rect);  // for example, this draws into buffer
SDL_SetRenderTarget(renderer, NULL);    // switch back so we can draw to the SDL backbuffer
SDL_RenderCopy(renderer, buffer, NULL, NULL);  // copy our whole backbuffer in one call
SDL_RenderPresent(renderer);
With this you can minimize draw calls, since you only update the parts of your own backbuffer that changed and then present that, instead of redrawing the whole screen each time.
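A minimal per-frame sketch of that idea, assuming a hypothetical dirty_rect that covers whatever changed this frame:
// Each frame: redraw only the dirty region into our persistent buffer,
// then blit the whole buffer to SDL's backbuffer and present.
SDL_SetRenderTarget(renderer, buffer);         // route draw calls to our buffer
SDL_RenderSetClipRect(renderer, &dirty_rect);  // hypothetical: the region that changed
// ... draw only what intersects dirty_rect ...
SDL_RenderSetClipRect(renderer, NULL);
SDL_SetRenderTarget(renderer, NULL);           // back to the real backbuffer
SDL_RenderCopy(renderer, buffer, NULL, NULL);  // one full-screen copy
SDL_RenderPresent(renderer);                   // buffer's contents survive; the backbuffer's don't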

Related

Multi-window support for opengles2

I am currently writing a game editor for my project. I want to implement an editor that has four viewports, like 3ds Max or other 3D software.
So, how do I use OpenGL ES 2 to render to multiple windows?
You can usually have multiple views, each with its own frame buffer. In that case all you need to do is bind the correct frame buffer before drawing to each view. You might also need a separate context for each view, and to make it current before drawing (and before binding the frame buffer). If you need multiple contexts, though, you will have to find a way to share resources between them.
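Here is a rough sketch of the single-context, framebuffer-per-view variant; fbo, viewWidth, viewHeight and drawView() stand in for whatever FBO setup and scene code you already have (in the multi-context variant you would instead call eglMakeCurrent for each window's surface before drawing):
#include <GLES2/gl2.h>

extern GLuint fbo[4];                // one FBO per view, color attachment already set up
extern GLint viewWidth, viewHeight;  // size of each view's attachment
void drawView(int i);                // placeholder: issues the draw calls for view i

void renderAllViews()
{
    for (int i = 0; i < 4; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);  // route drawing to view i
        glViewport(0, 0, viewWidth, viewHeight);    // match the attachment's size
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawView(i);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default framebuffer
}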
Another approach is having a single view and simply using glViewport to draw to its different parts. In this case you set glViewport for the specific part, set up ortho or frustum (if the view segments are of different sizes), and that is it. For instance, if you split a view whose buffer has dimensions bWidth and bHeight into 4 equal rectangles and you want to refresh the top-right one:
glViewport(bWidth * .5f, bHeight * .5f, bWidth * .5f, bHeight * .5f); // top-right quadrant (GL's origin is bottom-left)
glOrthof(.0f, bWidth * .5f, bHeight * .5f, .0f, .1f, 1.0f); // same for each quadrant in this case; in ES 2.0 you build the equivalent ortho matrix yourself and pass it as a uniform
// do all the drawing
When you are finished with everything you want to update, just present the frame buffer.

Why does WebGL canvas turn white on the second frame when alpha is masked?

Please see this thread for details.
To summarize, given the following circumstances:
gl = canvas.getContext('experimental-webgl');
gl.clearColor(0, 0, 0, 1);
gl.colorMask(1, 1, 1, 0);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.BLEND);
...and a standard render loop:
function doRender() {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // render stuff, and request another frame
  requestAnimationFrame(doRender);
}
...then I would like to know what the expected output should theoretically be.
In actuality, I'm seeing that the first frame renders as if there were no color mask, and the second (and subsequent) frames render the entire canvas opaque white.
Note that it doesn't matter what the alpha level is set to: the second frame is always immediately, completely white (including areas that were not rendered to), even if the rendered alpha values are extremely low.
The Question: what is the expected result of the above operations on the first, second, and subsequent frames? Also, is what I am experiencing the expected result, or due to some bug in the GL driver or WebGL implementation? And finally, if it is the expected result, why? What is actually happening on the video card to produce this result?
System details: Chrome / Firefox (both) on a MacBook Pro / GeForce 320M / Snow Leopard.
WebGL automatically clears the drawing buffer on each frame unless you tell it not to. Try:
gl = canvas.getContext('experimental-webgl', {
  preserveDrawingBuffer: true
});
That's potentially slower than letting it clear, though, since the browser might have to copy the drawing buffer each frame to preserve it, so it has a copy to composite with the rest of the page while you draw new stuff into it. So it's probably better to call gl.clear inside your render loop. Of course, if the effect you were going for was to continually blend stuff into the drawing buffer, then you either need to tell it to be preserved, as in the example above, or you need to draw into a texture or renderbuffer using a framebuffer and then render that to the drawing buffer.

Does Direct2D use back-buffer?

Reading https://learn.microsoft.com/en-us/windows/win32/direct2d/comparing-direct2d-and-gdi :
Presentation Model
When Windows was first designed, there was insufficient memory to allow every window to be stored in its own bitmap. As a result, GDI always rendered logically directly to the screen, with various clipping regions applied to ensure that it did not render outside of its window. In contrast, Direct2D follows a model where the application renders to a back-buffer and the result is atomically “flipped” when the application is done drawing. This allows Direct2D to handle animation scenarios much more fluidly than GDI can.
The author says Direct2D uses a back-buffer, and by “flipped” I guess he means a swap chain. I created a simple demo that draws a rectangle at a random location on mouse click. But previous rectangles are not cleared, so it seems that it draws directly to the screen and does not use any back-buffer.
When you initialize the render target for your Direct2D operations, you can specify a D2D1_PRESENT_OPTIONS value in the second parameter.
I think what confuses you is the D2D1_PRESENT_OPTIONS_RETAIN_CONTENTS and the fact that the buffer isn't swapped but copied.
That doesn't disprove the existence of back-buffers, it only means the back-buffer isn't cleared between redraws. Right observation, wrong conclusion!
If you increase the number of back-buffers in the chain, you'll start noticing flickering rectangles as you keep clicking, so you should always clear your back-buffer between redraws.
Direct2D does indeed use a back-buffer.
Perhaps you forgot to clear your render target (which is the back-buffer) right after calling BeginDraw(), and so previous draws stayed there?
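A minimal sketch of both points for an HWND render target; m_pFactory, m_pRenderTarget, hwnd, width and height are assumed to exist already:
// Keep the back-buffer contents between presents instead of discarding them.
HRESULT hr = m_pFactory->CreateHwndRenderTarget(
    D2D1::RenderTargetProperties(),
    D2D1::HwndRenderTargetProperties(
        hwnd,
        D2D1::SizeU(width, height),
        D2D1_PRESENT_OPTIONS_RETAIN_CONTENTS),
    &m_pRenderTarget);

// Typical frame: clear first so earlier frames' rectangles don't linger.
m_pRenderTarget->BeginDraw();
m_pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
// ... draw this frame's rectangles ...
hr = m_pRenderTarget->EndDraw();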

Can I create a device context that is just a portion of another device context?

I have subclassed a graphics control that takes a device context handle, HDC, as an input and uses that for drawing. My new control is just the original control centered on top of a larger image. I would like to be able to call the original control's Draw() method for code re-use, but I'm unsure how to proceed.
Here's the idea:
void CCheckBox::DrawCtrl( HDC hdc, HDC hdcTmp, LPSIZE pCtlSize, BYTE alpha ) {
    // original method draws a checkbox
}

void CBorderedCheckBox::DrawCtrl( HDC hdc, HDC hdcTmp, LPSIZE pCtlSize, BYTE alpha ) {
    // Draw my image here.
    // Create new hdc2 and hdcTmp2 which are just some portion of hdc and hdcTmp.
    // For example, hdc2 may just be a rectangle inside of hdc that is 20 pixels
    // indented on all sides.
    // Call CCheckBox::DrawCtrl() with hdc2 and hdcTmp2.
}
I think you may be confused about what a device context is. A device context is a place in memory that you can draw to, be it the screen buffer, a bitmap, or something else. Since I imagine you only want to draw on the screen, you only need one DC. To accomplish what you want, I would recommend passing a rectangle to the function that tells it where to draw. Alternatively, and with poorer performance, you could create a new bitmap for the smaller area and give the function the bitmap's DC to draw on. Now that I think about it, that might have been what you meant in the first place :P Good luck!
While not foolproof, you can fake a DC as a subsection of another DC by combining SetViewportOrgEx with SelectObject and a region clipped to the sub-area in question.
The problem with this approach is that if the drawing code already uses these APIs, it needs to be rewritten to combine its own offsets and clipping regions with the existing ones.
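A hedged sketch of that trick applied to the question's 20-pixel inset, using IntersectClipRect in place of SelectObject with a region (same effect here), and SaveDC/RestoreDC to undo everything afterwards:
void CBorderedCheckBox::DrawCtrl( HDC hdc, HDC hdcTmp, LPSIZE pCtlSize, BYTE alpha )
{
    // ... draw the surrounding image here ...

    int saved = SaveDC(hdc);  // snapshot the origin and clip state

    // Shift the origin so the inner control's (0,0) lands at (20,20),
    // then clip so it cannot paint outside its sub-rectangle.
    SetViewportOrgEx(hdc, 20, 20, NULL);
    IntersectClipRect(hdc, 0, 0, pCtlSize->cx - 40, pCtlSize->cy - 40);

    SIZE inner = { pCtlSize->cx - 40, pCtlSize->cy - 40 };
    CCheckBox::DrawCtrl(hdc, hdcTmp, &inner, alpha);  // unchanged base drawing code

    RestoreDC(hdc, saved);  // undo the origin shift and clipping
}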

GDI fast scroll

I use GDI to create a custom text widget. I draw directly to the screen, unbuffered.
Now I'd like to implement fast scrolling that simply pixel-shifts the relevant part of the framebuffer (and only redraws the newly visible lines).
I noticed that, for example, the rich text control does it like this: if I use some GDI drawing functions to draw directly to the framebuffer over a rich text control and then scroll the rich text, it scrolls my drawing along with the text. So I assume the rich text control simply pixel-shifts its part of the framebuffer.
I'd like to do the same, but don't know how.
Can someone help? (Independent of programming language.)
Thanks!
The ScrollWindowEx() API function is optimized to do this.
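A minimal sketch, assuming a window handle hwnd and a line height in pixels; Windows blits the existing pixels and hands back the strip that needs repainting:
RECT newlyExposed;

// Pixel-shift the client area up by one line. The existing pixels are
// blitted; the uncovered strip at the bottom is reported in newlyExposed.
ScrollWindowEx(hwnd, 0, -lineHeight,
               NULL, NULL,            // scroll and clip the whole client area
               NULL, &newlyExposed,
               SW_INVALIDATE);        // mark the uncovered strip as dirty

UpdateWindow(hwnd);  // triggers WM_PAINT for just the newly visible lines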
See BitBlt function:
The BitBlt function performs a bit-block transfer of the color data corresponding to a rectangle of pixels from the specified source device context into a destination device context.
and the example at the end of its documentation: Capturing an Image:
You can use a bitmap to capture an image, and you can store the captured image in memory, display it at a different location in your application's window. [...]
In some cases, you may want your application to capture images and store them only temporarily. [...] To store an image temporarily, your application must call CreateCompatibleDC to create a DC that is compatible with the current window DC. After you create a compatible DC, you create a bitmap with the appropriate dimensions by calling the CreateCompatibleBitmap function and then select it into this device context by calling the SelectObject function.
After the compatible device context is created and the appropriate bitmap has been selected into it, you can capture the image. The BitBlt function captures images. This function performs a bit-block transfer; that is, it copies data from a source bitmap into a destination bitmap. [...]
To redisplay the image, call BitBlt a second time, specifying the compatible DC as the source DC and a window DC as the target DC.
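Condensed into code, those steps look roughly like this; hdcWindow, width, height and the scroll offset dy are placeholders:
// Capture: copy a width x height block from the window into a memory bitmap.
HDC hdcMem = CreateCompatibleDC(hdcWindow);
HBITMAP hbm = CreateCompatibleBitmap(hdcWindow, width, height);
HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, hbm);
BitBlt(hdcMem, 0, 0, width, height, hdcWindow, 0, 0, SRCCOPY);

// Redisplay: blit the saved pixels back at the shifted position.
BitBlt(hdcWindow, 0, -dy, width, height, hdcMem, 0, 0, SRCCOPY);

// Clean up the temporaries.
SelectObject(hdcMem, hbmOld);
DeleteObject(hbm);
DeleteDC(hdcMem);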
