Possible to draw OpenGL HUD overlay over a different application's window (without flickering)? - windows

I need to draw an overlay consisting of lines and text on top of another application. The application in question is a 3D outside world viewpoint, and the overlay is a head up display.
I don't have access to any type of callback from the outside world application to execute draw code in its draw loop.
Drawing directly over the application's window will result in flickering as the draw loops will not be synchronized, so to me that doesn't seem like an option.
One method I can think of is to capture the outside world application's pixels and stream them into my application, so I can draw the overlay on top in the same draw loop, but that seems very inefficient.
Is there an efficient way to draw over the outside world application without flickering?
Is it possible to draw something over the final graphics card output / at the monitor's refresh rate?
P.S. It doesn't have to be OpenGL, but the HUD is already written in OpenGL, so that would make things easier.

To repeat what I said in the comments, I've come across quite a few apps that hook the 3D API calls and inject their own code to draw right at the end of each frame: Steam, TeamSpeak, and Mumble all do this. Since the drawing happens inside the target application there's no flickering, and you can draw directly rather than copying the result somewhere and compositing. I've never done it myself, so I probably won't do a good job explaining it.
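Another route worth noting, on Windows Vista and later, is a separate topmost, click-through layered window: the DWM composites each window independently, so the overlay can redraw on its own schedule without flickering over the target. A minimal sketch in C of just the window setup (assuming an ANSI build; the HUD itself would be drawn through an OpenGL context created on this window, with black as the transparent key color):

    /* Sketch: a click-through overlay window. WS_EX_LAYERED makes it
       composited with transparency; WS_EX_TRANSPARENT lets mouse input
       fall through to the window underneath. Error handling omitted. */
    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
    {
        if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProc(h, m, w, l);
    }

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = inst;
        wc.lpszClassName = "HudOverlay";
        RegisterClass(&wc);

        HWND hwnd = CreateWindowEx(
            WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST,
            "HudOverlay", "HUD", WS_POPUP,
            0, 0, 800, 600, NULL, NULL, inst, NULL);

        /* Treat pure black as fully transparent; draw the HUD lines and
           text in any other color. (Per-pixel alpha would instead need
           UpdateLayeredWindow.) */
        SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 0, LWA_COLORKEY);
        ShowWindow(hwnd, SW_SHOW);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }

The trade-off against hooking is that you composite over the final output rather than drawing inside the target's frame, so you cannot read the target's depth buffer or match its frame timing exactly.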
A related question is here: Overlaying on a 3D fullscreen application

Related

Does Chipmunk/Pymunk have culling of objects that are outside the screen boundary?

I found culling mentioned only under spatial hashing for collisions. I'm referring to the kind of culling (e.g. view-frustum culling) performed by 3D graphics libraries, where anything that need not be visible isn't rendered.
Does Chipmunk2D/Pymunk have any provision for not drawing objects that are outside the screen bounds, or does the user have to implement it themselves?
For example:
The red rectangle is the screen boundary. All blue objects should get drawn because they are within the screen. Green objects shouldn't be drawn.
I was hoping debug_draw() would have a culling functionality.
P.S. By the way, if I don't use debug_draw() for drawing, what is the other way of drawing? I don't see a draw() or release_draw() function. Would the user have to write code to iterate over all objects individually and draw them? I guess that would work fine, because then the user can do a rectangle intersection test and decide which objects to cull. Perhaps debug_draw could be renamed to drawAll().
The debug draw method is mainly meant for debugging and quick prototyping, so more advanced features such as culling are out of scope for its implementation.
If you feel limited by debug draw, it might be time to transition to your own drawing code, where you have full control. It should be quite easy to emulate what debug draw does yourself; some of the example code does custom drawing.
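For reference, the intersection test that decision needs is just an axis-aligned bounding-box overlap. pymunk shapes expose a cached bounding box (shape.bb), so the Python version is only a few lines; the logic itself, sketched in C with hypothetical names:

    /* Minimal axis-aligned bounding-box culling test. All names here are
       hypothetical; the same comparison works against pymunk's shape.bb. */
    typedef struct { float left, bottom, right, top; } BBox;

    void draw_shape(int index); /* hypothetical per-shape draw call */

    static int bb_overlaps(BBox a, BBox b)
    {
        return a.left <= b.right && b.left <= a.right &&
               a.bottom <= b.top && b.bottom <= a.top;
    }

    /* Draw only the shapes whose bounding box touches the screen rect;
       everything outside it (the "green objects") is skipped. */
    void draw_visible(const BBox *boxes, int count, BBox screen)
    {
        for (int i = 0; i < count; ++i)
            if (bb_overlaps(boxes[i], screen))
                draw_shape(i);
    }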

Rendering Multiple Viewports using GLUT

Using OpenGL and GLUT, I want to render a scene from two different viewpoints. For the first viewpoint, it is a standard perspective projection using shaders. For the second viewpoint, it is a visualisation of the depth buffer. I want these two images to be contained within the same window, side-by-side.
So far, I have been using GLUT for display. For example, I use:
glutInitWindowSize(1000, 1000);
glutInitWindowPosition(500, 200);
glutCreateWindow("OpenGL Test");
This will draw my scene across the entire window for the one viewport which I have defined. But can I use GLUT to draw two different images from two different viewports, as described above? Or perhaps this is not so easy with just GLUT, and I will need to create a window natively in my operating system (I am using Ubuntu), and then define two different areas in that window which I should draw upon...
Thank you!
GLUT ultimately has nothing to do with it. It creates and manages a window. What you do within that window is entirely up to you.
What you need to do is use the viewport transform. Because the viewport is applied after clipping, drawing commands will not render primitives outside the viewport's range (buffer clears, however, still affect the whole framebuffer). This effectively defines the region of the window that all vertices will lie within.
So you call glViewport, specifying half of the window. Then you render the stuff you want in that half. Then you call glViewport to specify the other half. Then you render the stuff you want there. And then you're done; just swap buffers.
However, this also means that the typical tactic of only calling glViewport in your GLUT resize callback will not work. You must store the window's current size, then use that in your display function.
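Concretely, the display and reshape callbacks might look like the following sketch, where drawScene and drawDepthVisualisation are hypothetical placeholders for your two rendering passes:

    #include <GL/glut.h>

    void drawScene(void);               /* hypothetical: perspective pass   */
    void drawDepthVisualisation(void);  /* hypothetical: depth-buffer view  */

    static int winWidth = 1000, winHeight = 1000;

    /* Cache the size instead of calling glViewport here, as noted above. */
    void reshape(int w, int h)
    {
        winWidth  = w;
        winHeight = h;
    }

    void display(void)
    {
        /* Clearing ignores the viewport and covers the whole window. */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Left half: the standard perspective rendering. */
        glViewport(0, 0, winWidth / 2, winHeight);
        drawScene();

        /* Right half: the depth-buffer visualisation. */
        glViewport(winWidth / 2, 0, winWidth / 2, winHeight);
        drawDepthVisualisation();

        glutSwapBuffers();
    }

Register these with glutDisplayFunc(display) and glutReshapeFunc(reshape) after glutCreateWindow.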
Two ways you can do this:
You can create a new window with glutCreateWindow(). Note that this window will have its own OpenGL context. Also note that glutCreateWindow() returns an integer identifier, which you can pass to glutSetWindow() to select which window subsequent calls affect.
You can select part of the window using glViewport(), and then call glViewport() again to draw into a different part of the same window.
There is always the option of rendering your two views into a single texture, then simply making a screen-sized quad and rendering that texture onto the quad.
I'm not sure it's going to satisfy all your needs, but visually this should give you the same result.

Warping GUI elements in Unity's OnGUI

I am using Unity3D, and I have a function which is being called inside of OnGUI to lay out the various gui components of my application. Ordinarily, the labels and buttons are all inside of a certain Rect that I supply, which is centered on the screen.
No problem there... however, what I want is to sometimes render the exact same GUI elements, which can be dynamic (and thus not just baked into a prefabbed texture), into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons that were drawn in the function should still respond normally.
I was rather hoping I could just specify some values in GUI.matrix to map the rectangle to a trapezoid, but my initial exploration seems to show that the GUI elements don't use homogeneous coordinates, and everything still shows up as rectangular.
Is there any way to do this with Unity, ideally without requiring access to pro-only features?
As of now, Unity3D's GUI system isn't very flexible. The new GUI system is one of the features still not released in Unity 4 (we are all waiting for it).
From my point of view it has several problems, particularly:
You are forced to layout components using the flow of the code, instead of having a more declarative (or at least a more structured) way to do that.
It's quite inefficient (at least one draw call per button).
It isn't flexible at all: adding, removing, and enabling/disabling buttons can become quite painful as the number of buttons increases.
To quote the question: "what I want is to sometimes render the exact same GUI elements, which can be dynamic, and thus not just put into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons that were drawn in the function should still respond normally."
This is quite hard, if not impossible, to achieve using Unity's GUI classes.
I see two possibilities:
Don't use the GUI classes to do that. If your GUI is simple enough, you can implement your own (even 3D) buttons using, for example:
A mesh (a plane or a trapezoid mesh) with a texture for the button background
TextMesh for drawing 3D text
Raycasting to check whether a button has been pressed
Use a library that implements a more advanced GUI system like NGUI
When I ran into the same problem, I just used normal 3D GameObjects (textured cubes) and handled input with OnMouseDown (PC/Mac) or raycasting (Android/iOS). I guess that's how everyone does it.

LibGDX - Sprite to Pixmap

I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't quite so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while everything is drawn on the same screen isn't applicable either, as that would obviously pick up the background colors as well.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (an "FBO", wrapped by the FrameBuffer class in libgdx), blending them as they're added, and then render that off-screen buffer to the screen as a single draw call. If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
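For reference, the raw GL ES 2.0 calls that libgdx's FrameBuffer class wraps look roughly like the sketch below (in C; assumes a current context and, on desktop, a loader such as GLEW; error handling and status checks omitted):

    /* Sketch: merge sprites into one texture via a framebuffer object. */
    GLuint render_merged_sprite(int w, int h)
    {
        GLuint fbo, tex;

        /* Colour attachment: the texture the sprites get merged into. */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        /* Draw the background sprite first, then blend the generated
           sprites on top; everything lands in `tex`, not the screen. */
        glViewport(0, 0, w, h);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        /* ... sprite draw calls ... */

        /* Back to the default framebuffer; `tex` now holds the merged
           image (libgdx exposes it as fbo.getColorBufferTexture()). */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        return tex;
    }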

OpenGL draw partial object in scrollable panel

I am making a GUI in OpenGL (more specifically, lwjgl). I have tried hard to research different ways of doing this, but I am having a hard time finding exactly what I want. I do not want to use any external libraries (only what is built into OpenGL; I am even trying to stay away from GLUT), and I would like it to work on anything that supports OpenGL (e.g. framebuffer objects don't work on older graphics cards).
I am making a 3D GUI with a scrollable panel as one of its components. The problem is that I don't know how to draw a partial GUI component without doing a lot of calculations to render only part of it. I am building the components out of OpenGL primitives, not textures. I was hoping there is an easy way to do this, such as using multiple viewports. I don't really even understand what viewports are.
In short: I need to have a scrollable panel as a component overlapping other GUI components (since it will be a drop down menu) and not let any of the components in my panel draw outside my panel.
If you just want to prevent drawing pixels that are outside of a rectangular region (and I think that's what you're asking), then glScissor is exactly what you're looking for.
In lwjgl, you can find the function in org.lwjgl.opengl.GL11.
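lwjgl's GL11 binds the same C API one-to-one, so the call sequence carries over directly. A sketch (in C; drawComponents is a hypothetical placeholder):

    #include <GL/gl.h>

    void drawComponents(void); /* hypothetical: draws the panel's children */

    /* Clip all drawing to the panel's rectangle. glScissor takes window
       coordinates with the origin at the bottom-left corner, so a panel
       given in top-left UI coordinates needs its y flipped. */
    void draw_panel(int panelX, int panelY, int panelW, int panelH, int windowH)
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(panelX, windowH - (panelY + panelH), panelW, panelH);

        drawComponents();   /* pixels outside the rectangle are discarded */

        glDisable(GL_SCISSOR_TEST);
    }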
If you want to scroll a larger scene within a fixed region on the screen, the most straightforward way to go is by just modifying your projection matrix for the scroll position and redrawing the scene. If you are using gluPerspective to set up your projection matrix you'll have to convert it to a direct call to glFrustum; if you're using glOrtho it's much more straightforward.
Keep in mind that "scrolling" a perspective view has no one right way to do things - it depends on what sort of effect you want to achieve, and what particular sort of distortion you want near the edges of the overall viewport.
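For the glOrtho case, scrolling amounts to offsetting the projection volume by the scroll position before redrawing; a minimal sketch:

    #include <GL/gl.h>

    /* Offset an orthographic projection by the scroll position so the
       fixed on-screen panel shows a different slice of its content.
       scrollX/scrollY are in the same units as the panel's content. */
    void apply_scrolled_projection(double scrollX, double scrollY,
                                   double panelW, double panelH)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(scrollX, scrollX + panelW,     /* left, right */
                scrollY + panelH, scrollY,     /* bottom, top: y grows down */
                -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
    }

Combined with the glScissor clipping above, this gives a scrollable panel whose contents never draw outside its bounds.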
