Your app called glInvalidateFramebuffer before presenting renderbuffer - opengl-es

"Your app called glInvalidateFramebuffer before presenting renderbuffer" is an error message I get in the line:
int retVal = UIApplicationMain(argc, argv, nil, @"myAppDelegate");
of my main.m file.
There is not a single call to the glInvalidateFramebuffer method in my project. What is more, the project uses OpenGL ES 2.0 (running with a GLKView) and the problematic method is part of OpenGL ES 3.0.
I use a number of offscreen framebuffers to draw procedural textures. This error was not reported before iOS 10. Also, it does not prevent rendering, does not cause any visible issues, and it is impossible to pinpoint where exactly it happens in the code (other than main.m) using Capture Frame.

This is almost certainly a false alarm by Xcode's GPU Report.
When presenting, neither the multisample buffer nor the depth buffer is required any more.
The multisample buffer has already been resolved, and the depth values are not needed to present the pixels.
This means that GLKView does the correct thing: invalidate them before presenting.
Note: the false warning goes away with GLKViewDrawableMultisampleNone instead of 4x.
So in GLKView's case, it is triggered by the invalidation of the multisample buffer after resolving.
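For reference, here is a minimal sketch of the resolve-and-discard sequence a GLKView effectively performs before presenting when 4x multisampling is on. This is the OpenGL ES 2.0 flavor (glDiscardFramebufferEXT from EXT_discard_framebuffer rather than the ES 3.0 glInvalidateFramebuffer the warning names); the framebuffer and context variable names are placeholders, not GLKit internals:

// Resolve the 4x multisample buffer into the single-sample framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();

// The multisample color buffer and the depth buffer are no longer needed,
// so they are discarded; this is the step the GPU Report flags.
const GLenum discards[] = { GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT };
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 2, discards);

// Present only the resolved color renderbuffer.
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];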

Just for reference: when I do multisample anti-aliasing, I get the same message with Instruments 8, but when I switch to Instruments 7 the message is gone. I'm not using a GLKView, but a CAEAGLLayer instead.

Related

DirectX to OpenGL hot-swap, doesn't display on Win32 window

During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and I came across the following problem:
I implemented both renderers (OpenGL 3.0 and Direct3D 11), and both work fine on their own. The swapping mechanism is the following:
Destroy the current rendering context, and build up the new one. For example: Release all DirectX objects, and then create OpenGL context, via WGL. I'm trying to implement this, using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX11 works. (After destroying OpenGL, DirectX renders fine)
Destroying OpenGL and then recreating OpenGL again, works. Same with DirectX.
When I try to swap from DirectX to OpenGL, the window stops displaying the newly drawn content and keeps showing only the last DirectX frame.
To construct the OpenGL context I'm using WGL. The class for the window was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I use SetPixelFormat with the previously returned value from ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
Additional information:
All of the DirectX references are released; this was verified by calling ReportLiveDeviceObjects and checking that ID3D11Device1::Release returned 0. ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL calls report an error via glGetError; this is checked after every call. The same goes for all OS and WGL calls.
The OpenGL rendering calls are executing as expected, for example:
OpenGL rendering with 150 fps
Swap to DirectX, render with 60 fps (VSYNC)
Swap back to OpenGL, rendering again with 150 fps (not more)
There are other scenarios where OpenGL renders with more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers somehow doesn't work, even though SwapBuffers returns TRUE.
I tried using SaveDC and RestoreDC before and after using DirectX, but this did not solve it.
Using wglSwapLayerBuffers instead of SwapBuffers gives no change.
Can I somehow restore the HWND, or HDC to the original state, or do you guys have any idea why this might happen?
I guess I posted my question too soon; however, this is how I solved it.
I dug around the documentation for DirectX, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap chain descriptor, which could mean that DirectX sets up a flip-model swap chain for the window; when I then try to use the window with OpenGL, swapping the buffers somehow fails.
The solution is to not use a FLIP mode when creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd;
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
scd.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set the Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
The interesting part is that DirectX still does not properly destroy the flip model on the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of CreateSwapChainForHwnd.
Edit: I found this question again after some time. If anybody still has an idea how to revert back to using GDI instead of the DWM backbuffer, it would be greatly appreciated.

DirectX capture renderer in C++

I have to grab (via shared memory or GPU memory) the DirectX render from a Windows application called "Myapp" and show this render (view) in four simple DirectX applications (displaying exactly the same view as the first application, "Myapp").
Some people mention the back buffer, and others mention GetFrontBufferData.
1) How can I easily get the DirectX render of a DirectX Windows application in C++?
2) How can I easily and quickly share this render with 4 other DirectX applications in C++?
Thanks in advance
You can never get the rendering data from the back buffer of a third-party application; the only interface Microsoft provides is GetFrontBufferData(). This function is the only way to take an anti-aliased screenshot, and it's very slow.
The front buffer contains the data currently displayed on your screen.
The back buffer contains the data being drawn, but not presented yet.
When you call Present, DirectX swaps the two buffers by simply exchanging the buffer pointers, so the front buffer becomes the back buffer and the back buffer becomes the front buffer. This is called surface flipping.
There are many ways to share memory between processes.
Can I ask a question, what do you want to do with the rendering data?
Thanks for your answer.
I just want to show the render / view of the application "Myapp" in 4 other DirectX views without changes (in C++).

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests and found that it works quite fine to just render (using drawArrays etc.) and then present the render buffer (these two actions, together) just once, and then not do either again until something on screen changes.
Is this "normal" ? I just don't see this really done much. Once drawn, the graphics stay on the screen without additional constant rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example, Android's standard OpenGL helper classes have an option to only draw when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
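For iOS with GLKit (the setup used in the first question above), a comparable render-on-demand configuration might look like this sketch; enableSetNeedsDisplay and glkView:drawInRect: are GLKit API, while the renderer object and the surrounding view controller are assumptions:

// Inside a view controller: create a GLKView that is not driven by a
// GLKViewController, so it only redraws when explicitly asked to.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
glView.enableSetNeedsDisplay = YES;   // draw only when -setNeedsDisplay is called
glView.delegate = self.renderer;      // hypothetical object implementing -glkView:drawInRect:
[self.view addSubview:glView];

// Later, whenever the scene content actually changes:
[glView setNeedsDisplay];             // triggers exactly one redraw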

Cocos2d. Load and store images correctly?

There are a lot of answers for this question. But all of them are incorrect!
For example, say I have created a CCLayer object with one CCSprite object. I have 3 textures and I want to switch between them on every touch.
For example, I will use something similar to this:
link
I run this application in the Simulator. Then I trigger a memory warning. Then I try to switch between images (textures), and I see that 2 of the 3 images have been deleted (all except the image that was shown at the moment the memory warning appeared).
I tried to use retain/release on the CCSprite and CCTexture2D objects, but that leads to a situation where the dealloc method of a released object is never called.
So how do I store them correctly? I want to keep them through a memory warning and release/remove them when the current layer is destroyed.
Store them in one texture atlas, created with TexturePacker. Then it's as simple as calling [sprite setDisplayFrame:frameName] to switch the displayed texture.
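A minimal sketch of that approach with the cocos2d-iphone API (the atlas file and frame names are made up for illustration):

// Load all three textures from one atlas created with TexturePacker.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"textures.plist"];

// Create the sprite from one frame of the atlas and add it to the layer.
CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"texture1.png"];
[self addChild:sprite];

// On each touch, switch to another frame from the same atlas:
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"texture2.png"];
[sprite setDisplayFrame:frame];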
By default on memory warning cocos2d will remove unused textures. The whole point of memory warning is that OS tells your app "hey, that's not okay, cut down your memory appetite or I'll shut you down", and your app should be like "oops, sorry, freeing memory now".
If you receive a memory warning when preloading textures, cocos2d's default behavior of removing unused textures will shoot you in the foot. More about this issue here.
My advice: remove the call that purges cocos2d's caches from the memory warning method in the AppDelegate. Of course you then want to be extra careful with your memory usage. Alternatively, you could disable that behavior only while you're preloading images, but this might just move the problem to a later point.
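As a rough sketch of that alternative (cocos2d-iphone; the default handler differs between cocos2d versions, and isPreloadingTextures is a hypothetical flag you would set around your preloading code):

// In the app delegate: skip cocos2d's cache purge while textures are still
// being preloaded, otherwise keep the default behavior.
- (void)applicationDidReceiveMemoryWarning:(UIApplication *)application
{
    if (self.isPreloadingTextures)                   // hypothetical flag
        return;                                      // keep preloaded textures alive
    [[CCDirector sharedDirector] purgeCachedData];   // cocos2d's default purge
}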

Buffer issue with an OpenGL ES 3D view in a Cocos2D app

I'm trying to insert an OpenGL ES 3D view in a Cocos2D app on the iPad. I'm relatively new to these frameworks, so I basically added these lines to my CCLayer:
CGRect rScreen;
// some code to define the bounds and origin of my frame
EAGL3DView * view3d = [[EAGL3DView alloc] initWithFrame:rScreen] ;
[[[CCDirector sharedDirector] openGLView] addSubview: view3d];
[view3d startAnimation];
The code I'm using for the 3D part is based on a sample code from Apple Developer : http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html
The only changes I made were to create my view programmatically (no xib file, initWithCoder -> initWithFrame...), and I also renamed the EAGLView class & files to EAGL3DView so as not to interfere with the EAGLView that comes along with Cocos2D.
Now onto my problem: when I run this, I get an "OpenGL error 0x0502 in -[EAGLView swapBuffers]"; the 3D view is displayed properly, but the rest of the screen is completely pink.
I went into the swapBuffers function of Cocos2D's EAGLView, and it turns out the only block of code that matters is this one:
if(![context_ presentRenderbuffer:GL_RENDERBUFFER_OES])
CCLOG(@"cocos2d: Failed to swap renderbuffer in %s\n", __FUNCTION__);
which, by the way, does not enter the "if" branch (presentRenderbuffer does not return NO, but something is still wrong, since the CHECK_GL_ERROR() right after it reports the 0x0502 error).
So I understand that my 3D view somehow incorrectly overrides the OpenGL ES renderbuffer (since Cocos2D also uses OpenGL ES), causing the Cocos2D view to stop working properly. This is what I've got so far, and I can't figure out precisely what needs to be done to fix it. So what do you think?
Hoping this is only a newbie problem…
Pixelvore
I think the correct approach for what you are trying to do is:
create your own custom CCSprite/CCNode class;
put all the GL code that you are using from the Apple sample into that class (i.e., override the draw or visit method of the class), as sketched below.
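A bare-bones sketch of that first approach (cocos2d 2.x / OpenGL ES 2.0 style; My3DNode and drawMy3DScene are hypothetical, and exactly which GL state you must save and restore depends on what the Apple sample changes):

#import "cocos2d.h"

extern void drawMy3DScene(void);   // hypothetical hook into the sample's GL code

@interface My3DNode : CCNode
@end

@implementation My3DNode
- (void)draw
{
    // Remember the framebuffer cocos2d is currently rendering into.
    GLint cocosFBO = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &cocosFBO);

    drawMy3DScene();   // the GL code taken from the Apple sample

    // Restore cocos2d's framebuffer (and any other state the 3D code touched)
    // so cocos2d can keep rendering and presenting its own renderbuffer.
    glBindFramebuffer(GL_FRAMEBUFFER, cocosFBO);
}
@end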
If you want to try and make the two GL views work nicely together, you could try reading this post, which will explain how you associate different buffers to your views.
As to the first approach, have a look at this post and this one.
To be fair, the first approach might be more complex (depending on how the Apple sample does its OpenGL work), but it will use less memory and be more optimized than the second.
