Multi-window support for OpenGL ES 2

Recently I have been writing a game editor for my project. I want to implement an editor that has four viewports, like 3ds Max or other 3D software.
So, how do I use OpenGL ES 2 to render to multiple windows?

You can usually have multiple views, each with its own framebuffer. In that case all you need to do is bind the correct framebuffer before drawing to each view. You might also need a separate context for each view and to make it current before drawing (and before binding the framebuffer). If you do need multiple contexts, though, you will have to find a way to share resources between them.
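On most OpenGL ES 2 platforms the window binding goes through EGL, so the per-frame flow for the multi-window case can be sketched roughly like this. One EGLSurface per window and a single context reused for all of them is assumed here; drawViewContents() is a placeholder for your own drawing code, and if you need one context per window you simply pass that window's context to eglMakeCurrent instead:
// Sketch only: render every view window in turn with one shared context.
void renderAllViews(EGLDisplay display, EGLContext context,
                    EGLSurface *surfaces, int surfaceCount)
{
    for (int i = 0; i < surfaceCount; ++i) {
        // Make this window's surface current before touching any GL state.
        eglMakeCurrent(display, surfaces[i], surfaces[i], context);

        // Bind the framebuffer for this view (0 = the window surface itself).
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        drawViewContents(i);              // placeholder per-view draw call

        // Present the finished frame for this window.
        eglSwapBuffers(display, surfaces[i]);
    }
}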
Another approach is to have a single view and simply use glViewport to draw to different parts of it. In this case you set glViewport for a specific part, set up the ortho or frustum projection (per segment, if the view segments are of different sizes), and that is it. For instance, if you split a view whose buffer has dimensions bWidth and bHeight into 4 equal rectangles and you want to refresh the top-right one:
glViewport(bWidth*.5f, bHeight*.5f, bWidth*.5f, bHeight*.5f); // top-right quadrant: GL's viewport origin is the bottom-left corner
glOrthof(.0f, bWidth*.5f, bHeight*.5f, .0f, .1f, 1.0f); //same for each in this case
//do all the drawing
and when you have finished everything you wanted to update, just present the framebuffer.
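Note that OpenGL ES 2 has no glOrthof, so you build the equivalent projection matrix yourself and hand it to your shader. A rough sketch of one quadrant's setup, assuming your shader program is already in use and exposes a projection-matrix uniform (the uniform name and the quadrant indexing here are only illustrative):
// Draw into one of the four equal quadrants of a bWidth x bHeight buffer.
// ix/iy select the quadrant: (1, 1) is the top-right one, since GL's
// viewport origin is the bottom-left corner of the buffer.
void setupQuadrant(int ix, int iy, float bWidth, float bHeight, GLint uProjectionLoc)
{
    float w = bWidth * 0.5f, h = bHeight * 0.5f;
    glViewport((GLint)(ix * w), (GLint)(iy * h), (GLsizei)w, (GLsizei)h);

    // Column-major matrix equivalent to glOrthof(0, w, h, 0, 0.1, 1.0),
    // written out by hand because ES 2 has no fixed-function matrix stack.
    float zNear = 0.1f, zFar = 1.0f;
    GLfloat proj[16] = {
        2.0f / w,  0.0f,      0.0f,                              0.0f,
        0.0f,     -2.0f / h,  0.0f,                              0.0f,
        0.0f,      0.0f,     -2.0f / (zFar - zNear),             0.0f,
       -1.0f,      1.0f,     -(zFar + zNear) / (zFar - zNear),   1.0f,
    };
    glUniformMatrix4fv(uProjectionLoc, 1, GL_FALSE, proj);

    // ...issue the draw calls for this quadrant here...
}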

Related

GameMaker Studio 2, Draw GUI layer conflict

I would like to preface this by saying I am still quite new to GameMaker Studio and do not know all there is to know about how the software works; that is probably the root cause of my problem, as I do not know why I am having this layering issue.
I have been having an issue where I have TWO DrawGUI events in separate objects within the same room.
The first object is a Fog Of War that draws a GUI and reveals the map as the player moves, and keeps explored places visible but not in view.
The second object is the joystick where a player will use their thumb to drag the stick to move the player.
Ever since I implemented the Fog of War, I have been unable to see the joystick. It appears that the fog of war draws over top of it and I am unable to use it.
I understand there are other draw events where I can do this.
Draw
Draw GUI
Draw Begin
Draw End
Draw GUI Begin
Draw GUI End
I have tried changing where the drawing code runs.
For example, at first the joystick and the fog were both in Draw GUI; after moving one of them from Draw GUI to Draw GUI Begin, the same issue appears.
I have made sure to place the joystick on the topmost layer in the room and the fog of war on the bottommost layer.
I have also tried applying depth to the object:
oJoystick_Stick.depth = -100;
this does not achieve anything.
Is there another way to force one of two objects on the GUI layer to be drawn on top of the other?
To my understanding, DrawGUI always prioritises the objects drawn there above anything in Draw, including Draw Begin and Draw End. This is because DrawGUI is for an interface (like the stats you see about your character, such as health, ammo, etc.), and objects drawn there aren't part of the room itself. You may also have noticed that objects drawn in DrawGUI follow the camera/viewport.
So, to clear up the draw priority:
First is the DrawGUI layer, which places objects in front of everything, like an interface.
After that, the depth variable and the layers inside the room have priority; each layer is also given a depth value, at intervals of 100.
If the depth is also the same (for example, when objects are in the same layer), then the order in which the objects and code were loaded decides the order they are drawn in.
The latter is not always reliable when multiple objects overlap at the same depth, because if the objects are redrawn in-game (e.g. a persistent object loaded into a new room, or a pause and unpause using instance_activate_all), the order they are drawn in may differ. When objects overlap, keep them in different layers to prevent mixed priorities.
However, I've not used a Fog of War system myself, so I don't know whether it's built-in or not, but I wouldn't recommend placing it in DrawGUI, as that should be reserved for the interface layout. With the default Draw events, you'll have more flexibility with the layers inside the room.
Some advice about DrawGUI
DrawGUI Begin --> the event is called before every DrawGUI event in that step
DrawGUI --------> the event is called in sync with the screen refresh rate
DrawGUI End ----> the event is called after every DrawGUI event in that step
so the GameMaker Studio "pipeline" for DrawGUI is like the step event: we have a Begin, a current, and an End.
To prioritize object rendering, GameMaker Studio uses the built-in depth variable. Take 0 as the reference value: objects with a depth value > 0 are drawn first (behind), and objects with a depth value < 0 are drawn last, in front of everything else.
Check the depth passed when calling instance_create_depth(), check where the depth variable is being changed, and check the depth value and z-order of each instance layer in the room editor.

Qt Quick: Can I use the layout components for item layout, without the overhead of rendering the items to the window?

I want to render a QML subtree manually to a Canvas3D by enabling the be-texture-source flag (layer.enabled) on all the subtree's items, and taking each item's geometry (x, y, width, height) from its QML properties (calculated by layouts, anchors, etc.), so that I know how to position and size each item with OpenGL calls.
I have no problems with implementing this, except for a performance issue: since I'm drawing the items in OpenGL, I don't want Qt to also draw them to the window (this would be doing the same work twice). I could achieve that easily by setting opacity: 0 on the items, but that would probably only fix the visual problem; it wouldn't save any performance. I could also set visible: false, but then I think the layout components wouldn't be able to do their job correctly.
The reasons I want to implement the described architecture are:
I want to implement custom rendering for some of the items in the subtree, but still use their positions and sizes (calculated by the layout for me) for the custom rendering. This custom rendering is complex enough that it can't be done with QML features alone, and I don't want to use Qt3D.
I could use another approach: a separate Canvas3D for every item that I want to render with custom code, and let Qt Quick render the rest of the items for me normally. That's what I've been doing until now. The problem, however, is that:
The different Canvas3Ds can't share GL resources, so I need to initialize and keep copies of the GL resources in each Canvas3D. This increases load time and memory usage.
The creation time of each Canvas3D (even without my GL init code) is significant, I think.
So instead I want to use a single, big Canvas3D.

Single OpenGL context, multiple views

I have a Windows app which can create several view windows, each of which can render some models using OpenGL (3.2+). Each window can either render its own independent object, or two (or more) windows can render the same object (but, for example, from different camera perspectives).
After reading various posts here on Stack Overflow I decided to create a single OpenGL context (HGLRC), and for each window that I am rendering to (HDC) I switch with
wglMakeCurrent(targetWindowHDC, m_deviceContext)
As you can see in the screenshot, that in principle seems to work fine (the window code runs on the main thread, and for rendering I have my own RenderThread, to which all the OpenGL operations are confined). For each of the windows I render to an FBO (which has MSAA support if the user activates it), which only gets updated if something in the scene changes; otherwise it is just drawn to the window as is.
My question is now, what states do I have to set every time I switch to drawing to another window? And is my approach reasonable in terms of performance?
This is what I now set every time after I make the context current for another HDC:
glClearDepth( 1.0f );
glClearColor( color.r, color.g, color.b, 1.0f );
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
glPointSize(3.0f);
glEnable(GL_BLEND);
glBlendFunc( srcBlend, dstBlend );
glPolygonMode( GL_FRONT_AND_BACK, targetType );
glEnable( GL_CULL_FACE );
glCullFace( GL_BACK );
glViewport( 0, 0, vp.width, vp.height );
These are basically all the settings that could be changed when the user sets up the render windows, so I need to be sure they are set correctly before rendering each window.
But is it really necessary to do all those calls? It means in the above example with 4 render windows I need to call those 4 times each frame. Is there a better way? Would it be more efficient with several GL contexts?
The absolute minimum set of state you need to track between windows is the viewport, clearing color, color and depth masks, depth test function and depth range; you've got those covered in your code snippet already, so you're good.
Most other OpenGL state should be set on demand right before it's needed anyway (and also cleaned up when no longer needed). So I'd say setting blend modes, face culling and so on is actually superfluous in your snippet.
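Put together, a per-window pass under a single shared HGLRC can be sketched like this; ViewSettings and renderScene() stand in for your own per-window data and draw code, they are not from the question:
// Sketch: switch to a window, restore only the state that differs per view,
// draw, and present. Everything else is set right where it's needed.
void RenderWindow(HDC windowDC, HGLRC sharedContext, const ViewSettings &vs)
{
    wglMakeCurrent(windowDC, sharedContext);

    glViewport(0, 0, vs.width, vs.height);
    glClearColor(vs.clearColor.r, vs.clearColor.g, vs.clearColor.b, 1.0f);
    glClearDepth(1.0);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LEQUAL);
    glDepthRange(0.0, 1.0);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    renderScene(vs);        // blending, culling etc. are enabled inside,
                            // right before the draw calls that need them

    SwapBuffers(windowDC);
}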
Using the same context for multiple windows makes sense if the kind of rendering is the same for all the windows. For example in a typical 3D modeler there's a "quad view". If those subviews are implemented using multiple windows, then reusing a single context makes sense.
I'm one of those guys who keeps reminding people, that there's no need to have a separate OpenGL context for each window. That doesn't mean doing this is a bad thing if it makes your life simpler.
If your concern is about multiple windows with largely different rendering settings, then using separate render contexts is sensible.
So how do you decide whether to use multiple contexts or a single one? Well, that's easy:
If the windows are sharing much of the render code and conceptually show the same thing (the same scene from different vantage points, different objects that make use of the same texture and are rendered using the same code) then context reuse it is.
If the contents of the windows differ a lot, then multiple contexts.

Xcode GLKit printing Text on GLKView without using UIImages

I have an app; it's a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on my screen (if that is possible). How can I do it?
I draw all of my game objects using images (wrapped in some kind of sprite). It's possible to scale, move, and rotate them, and everything works fine. But finding out how to print text on that GLKView gets me deep into problems. ^^
I don't want to use UIImages because I also don't know how to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image with all the text glyphs you need in it. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates or glTexSubImage2D() to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly, and upload the entire word, phrase, or sentence as a texture. You can create a CGBitmapContext and use Core Graphics to draw text into it, then upload it with glTexImage2D(). A sketch of this approach follows the list.
3) Get the individual glyphs out of the fonts and draw them directly using the bezier curves that make up the glyphs. This allows for 3D extrusion, too. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems that fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, I think Core Text can help.
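For what it's worth, a rough sketch of option 2 using the C-level Core Graphics/Core Text APIs with OpenGL ES 2 could look like the following; the font name, size, and text position are arbitrary choices for illustration, not requirements of the approach:
#include <CoreGraphics/CoreGraphics.h>
#include <CoreText/CoreText.h>
#include <OpenGLES/ES2/gl.h>

// Draw 'text' into an offscreen RGBA bitmap and upload it as a GL texture.
GLuint textToTexture(CFStringRef text, size_t width, size_t height)
{
    // 1. RGBA bitmap context to draw into (premultiplied alpha), cleared to transparent.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextClearRect(ctx, CGRectMake(0, 0, width, height));

    // 2. Lay out and draw the string with Core Text (black text by default).
    CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 24.0, NULL);
    CFStringRef keys[] = { kCTFontAttributeName };
    CFTypeRef values[] = { font };
    CFDictionaryRef attrs = CFDictionaryCreate(NULL, (const void **)keys, (const void **)values, 1,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef attrString = CFAttributedStringCreate(NULL, text, attrs);
    CTLineRef line = CTLineCreateWithAttributedString(attrString);
    CGContextSetTextPosition(ctx, 4.0, 4.0);
    CTLineDraw(line, ctx);

    // 3. Upload the bitmap to a texture (CLAMP_TO_EDGE so NPOT sizes work in ES 2).
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));

    // 4. Release the Core Foundation / Core Graphics objects.
    CFRelease(line);
    CFRelease(attrString);
    CFRelease(attrs);
    CFRelease(font);
    CGContextRelease(ctx);
    return tex;
}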
There are at least two clean ways to do this, depending on your requirements.
While documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to false and a clear background color, on top of a GLKView in your Storyboard in Interface Builder in the Apple GLKit template or your app. In my test on an iPhone 5, frame rendering time remained around 1ms, even while scrolling in the text view. If your text needs are static, or you don't want the user to interact with the text, use CATextLayer as a child layer of your EAGLLayer instead of a view.
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext allows you to retrieve the UIImage to use as a texture.
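The compositing step of that second approach can be sketched like this; drawFullScreenQuad() and the simple textured-quad shader program are assumed to exist in your own code, the names are only illustrative:
// Sketch: overlay the text texture on top of the rendered scene.
void compositeTextTexture(GLuint textTexture, GLuint quadProgram, GLint samplerLoc)
{
    glDisable(GL_DEPTH_TEST);                       // text goes over everything
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);    // Quartz bitmaps are premultiplied

    glUseProgram(quadProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textTexture);
    glUniform1i(samplerLoc, 0);

    drawFullScreenQuad();                           // placeholder: draws a screen-filling rectangle

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}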

"CoreAnimation: surface is too large"

I'm creating a custom (layer-hosting) document view, which is contained within a scroll view. The root layer has two sublayers of the same size: one for the view's content, and one for anything that needs to hover over the main content. I set the frame to 2500x2500 and added a number of cells to the content layer, which was fine. On adding a translucent clone of one of the cell's layers to the overlay layer, the whole view clears briefly, and I get the log message 'CoreAnimation: surface 2502x2502 is too large'. This happens between adding the new layer and the next cycle of the event loop, so I guess it occurs when Core Animation renders the new layer.
I knew that a layer's content size is related to OpenGL texture size, but didn't think its frame mattered. I'm not drawing anything to these layers, not setting any style properties, and I remove offscreen sublayers. All I'm really using them for is to handle the geometry of the document view. Is this an appropriate use of CA layers? If not, are there better ways of handling a large Core Animation-based document view?
Edit:
I've had this problem again, caused by an implicit animation on adding sublayers to the large parent. So in addition to what is suggested below, that's one to check if you run into this.
I would check to make sure that you're not setting any properties on your 2500x2500 layers which could require offscreen rendering. (This causes the layer to try and create a full-size buffer off-screen and render its contents into that buffer, rather than just rendering the contents to the screen directly.)
For example, setting an opacity, masksToBounds, mask, shouldRasterize, etc, could cause offscreen-rendering. You can see if offscreen-rendering is happening with the Core Animation instrument. (There's a checkbox to highlight offscreen-rendered areas.)
