Rendering Multiple Viewports using GLUT

Using OpenGL and GLUT, I want to render a scene from two different viewpoints. For the first viewpoint, it is a standard perspective projection using shaders. For the second viewpoint, it is a visualisation of the depth buffer. I want these two images to be contained within the same window, side-by-side.
So far, I have been using GLUT for display. For example, I use:
glutInitWindowSize(1000, 1000);
glutInitWindowPosition(500, 200);
glutCreateWindow("OpenGL Test");
This will draw my scene across the entire window for the one viewport which I have defined. But can I use GLUT to draw two different images from two different viewports, as described above? Or perhaps this is not so easy with just GLUT, and I will need to create a window natively in my operating system (I am using Ubuntu), and then define two different areas in that window which I should draw upon...
Thank you!

GLUT ultimately has nothing to do with it. It creates and manages a window. What you do within that window is entirely up to you.
What you need to do is use the viewport transform. Because the viewport transform happens after clipping, drawing commands will not render to any part of the framebuffer outside the viewport rectangle (buffer clearing, however, will still clear the whole framebuffer). The viewport effectively defines the region of the window that all transformed vertices will lie within.
So you call glViewport, specifying half of the window. Then you render the stuff you want in that half. Then you call glViewport to specify the other half. Then you render the stuff you want there. And then you're done; just swap buffers.
However, this also means that the typical tactic of only calling glViewport in your GLUT resize callback will not work. You must store the window's current size, then use that in your display function.
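As a minimal sketch (assuming double buffering; g_width, g_height, and the two draw functions are placeholder names for your own code), the reshape and display callbacks might look like:

#include <GL/glut.h>

static int g_width = 1000, g_height = 1000;   /* cached window size */

void drawSceneShaded(void);          /* your shader-based pass, defined elsewhere */
void drawDepthVisualisation(void);   /* your depth-buffer view, defined elsewhere */

void reshape(int w, int h)
{
    /* don't call glViewport here; just remember the size */
    g_width  = w;
    g_height = h;
}

void display(void)
{
    /* clears ignore the viewport, so one clear covers both halves */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glViewport(0, 0, g_width / 2, g_height);             /* left half */
    drawSceneShaded();

    glViewport(g_width / 2, 0, g_width / 2, g_height);   /* right half */
    drawDepthVisualisation();

    glutSwapBuffers();
}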

Two ways you can do this:
You can create a new window with glutCreateWindow(). Note that each window gets its own OpenGL context, and that glutCreateWindow() returns an integer identifier for the window (see the sketch after this list).
You can select part of the window using glViewport(), and then call glViewport() again to draw into a different part of the same window.
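If you take the first route, a minimal sketch (displayA and displayB are placeholders for your two render functions) would be:

#include <GL/glut.h>

void displayA(void);   /* perspective view, defined elsewhere */
void displayB(void);   /* depth visualisation, defined elsewhere */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

    /* each call returns an integer id and makes that window current,
       so the callbacks below attach to the window just created */
    int winA = glutCreateWindow("Perspective view");
    glutDisplayFunc(displayA);

    int winB = glutCreateWindow("Depth view");
    glutDisplayFunc(displayB);

    (void)winA; (void)winB;   /* keep the ids for glutSetWindow() later */
    glutMainLoop();
    return 0;
}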

There is always the option of rendering your two views into a single texture, and then simply making a screen-size quad and rendering that texture onto it.
I'm not sure it's going to satisfy all your needs, but from a visual perspective this should give you the same result.
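One common way to set that up, assuming framebuffer-object support (GL 3.0+ or ARB_framebuffer_object) and using placeholder names, is roughly:

#include <GL/glew.h>   /* or whichever extension loader you prefer */

/* One-time setup: an FBO with a colour texture you can later sample
   from a screen-size quad. A rough sketch, not production code. */
GLuint createRenderTexture(int w, int h, GLuint *fboOut)
{
    GLuint tex, depth, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *fboOut = fbo;
    return tex;
}

Per frame you would bind the FBO, render each view into its half with glViewport as described above, then bind framebuffer 0 and draw a textured screen-size quad.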

Related

Possible to draw OpenGL HUD overlay over a different application's window (without flickering)?

I need to draw an overlay consisting of lines and text on top of another application. The application in question is a 3D outside world viewpoint, and the overlay is a head up display.
I don't have access to any type of callback from the outside world application to execute draw code in its draw loop.
Drawing directly over the application's window will result in flickering as the draw loops will not be synchronized, so to me that doesn't seem like an option.
One method I can think of is to capture the outside world application's pixels and stream them into my application, so I can draw the overlay on top in the same draw loop, but that seems very inefficient.
Is there an efficient way to draw over the outside world application without flickering?
Is it possible to draw something over the final graphics card output / at the monitor's refresh rate?
P.S. It doesn't have to be OpenGL, but the HUD is already written in OpenGL, so that would make it easier.
To repeat what I said in the comments, I've come across quite a few apps that hijack the 3D API calls and inject their own code to draw stuff right at the end of each frame: Steam, TeamSpeak, and Mumble all do this. Since the drawing happens inside the same application there's no flickering, and you can draw directly rather than copying the result somewhere and compositing. I've never done it before and probably won't do a good job explaining it.
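On Linux, one well-known form of that trick is an LD_PRELOAD shim that intercepts glXSwapBuffers, draws the overlay while the target application's GL context is still current, and then forwards to the real call. A rough sketch (drawHudOverlay is a placeholder, and this glosses over saving/restoring GL state):

/* overlay.c: gcc -shared -fPIC overlay.c -o overlay.so -ldl
   usage:     LD_PRELOAD=./overlay.so target_application     */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <GL/glx.h>

void drawHudOverlay(void);   /* your HUD drawing, defined elsewhere */

void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
{
    static void (*realSwap)(Display *, GLXDrawable);
    if (!realSwap)
        realSwap = (void (*)(Display *, GLXDrawable))
                       dlsym(RTLD_NEXT, "glXSwapBuffers");

    drawHudOverlay();        /* runs inside the app's frame: no flicker */
    realSwap(dpy, drawable);
}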
A related question is here: Overlaying on a 3D fullscreen application

OpenGL ES. Hide layers in 2D?

For example, I have 2 layers: background and image. In my case I must show or hide the image when the zoom value (simply a float variable) changes.
The only solution I know is to keep 2 separate frame buffers for the background and the image, and not to draw the image when it is not necessary.
But is it possible to do this in an easier way?
Just don't pass the geometry to glDrawArrays() for the layer you want to hide when the zoom occurs. OpenGL ES completely re-renders everything every frame. You should have a glClear() call at the start of your frame render loop. So, removing something is done by just not sending its triangles. You might need to divide your geometry into separate lists for each layer.
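In sketch form (GL ES 2.0 headers; the draw helpers and the zoom threshold are placeholders):

#include <GLES2/gl2.h>

void drawBackground(void);   /* background layer, defined elsewhere */
void drawImageLayer(void);   /* the layer you want to show/hide */

void renderFrame(float zoom)
{
    glClear(GL_COLOR_BUFFER_BIT);

    drawBackground();

    if (zoom < 2.0f)         /* hypothetical show/hide threshold */
        drawImageLayer();    /* hiding it = simply not drawing it */

    /* eglSwapBuffers(...) happens in your platform loop */
}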

Warping GUI elements in Unity's OnGUI

I am using Unity3D, and I have a function which is being called inside of OnGUI to lay out the various gui components of my application. Ordinarily, the labels and buttons are all inside of a certain Rect that I supply, which is centered on the screen.
No problem there... however, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just baked into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons that were drawn in the function should still respond normally.
I was rather hoping I could just specify some values in GUI.matrix to map the rectangle to a trapezoid, but my initial exploration seems to show that the GUI elements don't use homogeneous coordinates, and everything still shows up as rectangular.
Is there any way to do this with Unity, ideally without requiring access to pro-only features?
As of now, Unity3D's GUI system isn't very flexible; the new GUI system is one of the features still not released in Unity 4 (we are all waiting for it).
From my point of view it has several problems, particularly:
You are forced to lay out components following the flow of the code, instead of having a more declarative (or at least a more structured) way to do it.
It's quite inefficient (at least one draw call per button).
It isn't flexible at all. Adding, removing, and enabling/disabling buttons can become quite painful as the number of buttons increases.
however, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just baked into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons that were drawn in the function should still respond normally.
This is quite hard if not impossible to obtain using Unity's GUI classes.
I see 2 possibilities:
Don't use GUI classes to do that. If your GUI is simple enough, you can implement your own (even 3D) buttons using, for example:
A mesh (a plane or a trapezoid mesh) with a texture for the button background
TextMesh for drawing 3D text
RayCasting to check if a button has been pressed
Use a library that implements a more advanced GUI system like NGUI
When I ran into the same problem, I just used normal 3D GameObject cubes with textures and used OnMouseDown (PC/Mac) or raycasting (Android/iOS) on them. I guess that's how everyone does it.

Make HTML5 canvas behave like TclTK canvas for scale/translate?

I'm trying to port a TclTK program I wrote 20 years ago to HTML5.
After hours of frustration, I learned that when you "scale" or "translate" HTML5's canvas element, it only applies to future drawings, not items already on the canvas.
This is the opposite of TclTK, where items already on the canvas are scaled/translated instead.
Short of creating a draw/redraw loop (where I clear the canvas and redraw all the objects myself when I want to scale/translate), is there any way to make HTML5's canvas element behave like TclTK's?
Or am I missing something big?
The Canvas 2D Context is based around pixel-wise image manipulations; it is not a "retained mode" graphics interface of the kind you are apparently familiar with. There literally is no record of your graphics for it to redraw. If you want to change the graphics, you have to redraw them somehow.
Everything is redrawn in the end (though the redrawing may be hidden from your code), but there are ways to reduce the amount of work you have to do. Here are some options, roughly in order of how much change you'll have to make to your code (and roughly in order of improved quality/performance):
Draw your graphics on the canvas, then scale and translate the canvas itself using CSS properties (not the width and height attributes of the canvas, which will clear it). This will rescale the image, possibly losing quality, since you're not drawing it anew optimized for the current scale.
Draw your graphics on the canvas, then export them into an ImageData or a data URL, then when needed redraw that onto the canvas. Again, may lose quality.
The above two are essentially kludges to keep using the canvas code you've already written. To get a proper system like the Tk one you describe, you want to:
Build your own scene graph: Create a set of objects like Circle, Line, etc. which represent graphics, and containers for those which store transform attributes like scale and position. Then write routines to walk this graph and execute the appropriate drawing commands, whenever you need to redraw.
Use SVG instead. SVG is a language for vector graphics which, in modern browsers, you can embed directly in your HTML, and manipulate in JavaScript just like you would the rest of your page. In SVG, you can simply change a scale attribute and get the change you expect to see.
(The previous option is basically reinventing a small amount of SVG.)

OpenGL draw partial object in scrollable panel

I am making a GUI in OpenGL (more specifically lwjgl). I have tried hard to research different ways of doing this, but I am having a hard time finding exactly what I want. I do not want to use any external libraries (only what OpenGL itself provides; I'm even trying to stay away from GLUT), and I would like it to work on anything that supports OpenGL (e.g. frame buffer objects don't work on older graphics cards).
I am making a 3D GUI with a scrollable panel as a component. The problem is I don't know how to draw a partial GUI component without doing a lot of calculations to render only part of it. I am making the components out of OpenGL primitives, not textures. I was hoping there is an easy way to do this, like using multiple viewports. I don't really even understand what viewports are.
In short: I need to have a scrollable panel as a component overlapping other GUI components (since it will be a drop down menu) and not let any of the components in my panel draw outside my panel.
If you just want to prevent drawing pixels that are outside of a rectangular region (and I think that's what you're asking), then glScissor is exactly what you're looking for.
In lwjgl, you can find the function in org.lwjgl.opengl.GL11.
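At the GL level the pattern is just a few calls (shown here in C; the lwjgl GL11 methods map one-to-one). A minimal sketch, with the panel rectangle given in window coordinates (origin at the bottom-left):

#include <GL/gl.h>

void drawPanelContents(void);   /* the panel's components, defined elsewhere */

void drawClippedPanel(int x, int y, int width, int height)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);   /* pixels outside are discarded */

    drawPanelContents();

    glDisable(GL_SCISSOR_TEST);
}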
If you want to scroll a larger scene within a fixed region on the screen, the most straightforward way to go is by just modifying your projection matrix for the scroll position and redrawing the scene. If you are using gluPerspective to set up your projection matrix you'll have to convert it to a direct call to glFrustum; if you're using glOrtho it's much more straightforward.
Keep in mind that "scrolling" a perspective view has no one right way to do things - it depends on what sort of effect you want to achieve, and what particular sort of distortion you want near the edges of the overall viewport.
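For the glOrtho case, a minimal sketch of that projection update (fixed-function GL assumed; scrollX and scrollY are hypothetical names for your scroll state):

#include <GL/gl.h>

/* shift the visible window of the scene by the current scroll offsets */
void applyScrolledOrtho(double scrollX, double scrollY,
                        double viewW, double viewH)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(scrollX, scrollX + viewW,   /* left, right */
            scrollY, scrollY + viewH,   /* bottom, top */
            -1.0, 1.0);                 /* near, far   */
    glMatrixMode(GL_MODELVIEW);
}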
