I am making a GUI in OpenGL (more specifically lwjgl). I have tried hard to research different ways of doing this, but I am having a hard time finding exactly what I want. I do not want to use any external libraries (only what is built into OpenGL, and I'm even trying to stay away from GLUT), and I would like it to work on anything that supports OpenGL (e.g. Frame Buffer Objects don't work on older graphics cards).
I am making a 3D GUI with a scrollable panel as a component. The problem is I don't know how to draw part of a GUI component without doing a lot of calculations to render only the visible portion. I am making the components out of OpenGL primitives, not textures. I was hoping there is an easy way to do this, such as using multiple viewports. I don't really even understand what viewports are.
In short: I need to have a scrollable panel as a component overlapping other GUI components (since it will be a drop down menu) and not let any of the components in my panel draw outside my panel.
If you just want to prevent drawing pixels that are outside of a rectangular region (and I think that's what you're asking), then glScissor is exactly what you're looking for.
In lwjgl, you can find the function in org.lwjgl.opengl.GL11.
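For illustration, here is a minimal sketch of how that might look with LWJGL's GL11 bindings; the panel bounds and the drawing code inside are placeholders, not anything from your project:

    import static org.lwjgl.opengl.GL11.*;

    public class ScissorPanel {
        /**
         * Everything drawn between enable and disable is clipped to the panel
         * rectangle. Scissor coordinates are window pixels, with the origin at
         * the bottom-left corner of the window.
         */
        static void drawClippedPanel(int panelX, int panelY, int panelWidth, int panelHeight) {
            glEnable(GL_SCISSOR_TEST);
            glScissor(panelX, panelY, panelWidth, panelHeight);

            // ... draw the panel's contents here with ordinary primitives; any
            // pixels falling outside the rectangle are simply discarded ...

            glDisable(GL_SCISSOR_TEST); // back to normal drawing for the rest of the GUI
        }
    }

One thing to keep in mind: glScissor measures from the bottom-left of the window, so if your GUI math uses a top-left origin you need to flip the Y value (windowHeight - panelY - panelHeight).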
If you want to scroll a larger scene within a fixed region on the screen, the most straightforward way to go is by just modifying your projection matrix for the scroll position and redrawing the scene. If you are using gluPerspective to set up your projection matrix you'll have to convert it to a direct call to glFrustum; if you're using glOrtho it's much more straightforward.
Keep in mind that "scrolling" a perspective view has no single right way to do it; it depends on what sort of effect you want to achieve and what particular sort of distortion you want near the edges of the overall viewport.
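For the orthographic case, here is a hedged sketch of what "scrolling by adjusting the projection" can look like (again using LWJGL's GL11 bindings; the method and variable names are made up for the example):

    import static org.lwjgl.opengl.GL11.*;

    public class ScrolledPanel {
        /**
         * Draws the panel's contents with the projection shifted by scrollY.
         * Combined with the scissor rectangle above, only the visible slice
         * of the content appears inside the panel.
         */
        static void drawScrolled(int panelWidth, int panelHeight, float scrollY) {
            glMatrixMode(GL_PROJECTION);
            glPushMatrix();
            glLoadIdentity();
            // Top-left origin; the visible window onto the content starts at scrollY.
            glOrtho(0, panelWidth, scrollY + panelHeight, scrollY, -1, 1);

            glMatrixMode(GL_MODELVIEW);
            // ... draw every item at its fixed, "unscrolled" position ...

            glMatrixMode(GL_PROJECTION);
            glPopMatrix();
            glMatrixMode(GL_MODELVIEW);
        }
    }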
I'm developing a game with THREE.js and webvr-boilerplate. I'm struggling a bit with how to properly render a HUD (score, distance, powerups, etc.) that always stays on top of the scene. I've tried using a plane (with a texture brought in from a hidden canvas element), but positioning it in space proves difficult since I can't match the right depth.
Any clues please? :)
Well, you shouldn't have a classic HUD; VR doesn't work like that.
You're looking for what is called diegetic or spatial UI: the scores and other indicators are rendered as geometry in scene space, at a fixed position or distance from the camera (this is spatial UI). For best results, draw the information on a game object that mimics a real display, for example a fuel gauge on the dashboard of a car or the remaining bullets visible on a gun (this is diegetic UI).
Unity has made a nice page describing these concepts.
I created a performance statistics HUD specifically for WebVR & THREE.js Projects.
https://github.com/Sean-Bradley/StatsVR
While the default setup shows specific information, you can modify it to show custom graphics and other data.
And if you don't believe me, just check out the StatsVR video tutorial.
Using OpenGL and GLUT, I want to render a scene from two different viewpoints. For the first viewpoint, it is a standard perspective projection using shaders. For the second viewpoint, it is a visualisation of the depth buffer. I want these two images to be contained within the same window, side-by-side.
So far, I have been using GLUT for display. For example, I use:
glutInitWindowSize(1000, 1000);
glutInitWindowPosition(500, 200);
glutCreateWindow("OpenGL Test");
This will draw my scene across the entire window for the one viewport I have defined. But can I use GLUT to draw two different images from two different viewports, as described above? Or perhaps this is not so easy with just GLUT, and I will need to create a window natively in my operating system (I am using Ubuntu) and then define two different areas of that window to draw into...
Thank you!
GLUT ultimately has nothing to do with it. It creates and manages a window. What you do within that window is entirely up to you.
What you need to do is use the viewport transform. Because the viewport transform happens after clipping, no primitives outside the range of the viewport will be rendered by drawing commands (buffer clearing will still clear the whole framebuffer). This effectively defines the region of the window that all of your vertices will lie within.
So you call glViewport, specifying half of the window. Then you render the stuff you want in that half. Then you call glViewport to specify the other half. Then you render the stuff you want there. And then you're done; just swap buffers.
However, this also means that the typical tactic of only calling glViewport in your GLUT resize callback will not work. You must store the window's current size, then use that in your display function.
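As a rough illustration, a display function along these lines would do it; the two draw helpers are hypothetical, and the gl* calls are the same whether you reach them through GLUT in C or through a binding such as LWJGL's GL11 mentioned earlier in this thread:

    import static org.lwjgl.opengl.GL11.*;

    public class SplitViewExample {
        // Updated from the resize callback; matches glutInitWindowSize(1000, 1000) above.
        static int windowWidth = 1000, windowHeight = 1000;

        static void display() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clears the whole framebuffer

            // Left half: the normal perspective rendering.
            glViewport(0, 0, windowWidth / 2, windowHeight);
            // drawSceneWithShaders();  // hypothetical helper

            // Right half: the depth-buffer visualisation.
            glViewport(windowWidth / 2, 0, windowWidth / 2, windowHeight);
            // drawDepthVisualisation();  // hypothetical helper

            // Swap buffers afterwards (glutSwapBuffers() in the GLUT case).
        }
    }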
Two ways you can do this:
You can create a new window with glutCreateWindow(). Note that this will have a different OpenGL context. Also note that it has a return value, an integer.
You can select part of the window using glViewport(), and then call glViewport() again to draw into a different part of the same window.
There is always the option of rendering your two views into a single texture, and then simply making a screen-sized quad and rendering that texture onto it.
I'm not sure it's going to satisfy all your needs, but from a visual perspective this should give you the same result.
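If you go that route, here is a hedged sketch of the setup, assuming GL30-level framebuffer objects are available (which, as noted in the first question above, is not the case on very old hardware). You would render each view into this texture with glViewport as described earlier, then draw a screen-sized quad sampling it:

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL30.*;

    /** A colour texture plus the framebuffer object that renders into it. */
    public class RenderTarget {
        public final int fbo;
        public final int texture;

        public RenderTarget(int width, int height) {
            texture = glGenTextures();
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            fbo = glGenFramebuffers();
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                    GL_TEXTURE_2D, texture, 0);
            // ... also attach a depth renderbuffer and check glCheckFramebufferStatus ...
            glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
        }
    }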
I'm using Unity 4.6 to develop a 2D game. I want to know if having a lot of GameObjects in the scene (out of the camera's sight) has a considerable influence on performance.
For example, is it efficient to make a scrollable list of names (say 1000 of them)? (Each one is a GameObject and has a text, a button, etc.)
I mask them in a specified area (for example 10 of them are visible at the same time).
Thanks in advance!
Depends on whether or not the objects have visible components. If they do, the engine will draw them even if they are 'off-camera'. A game object by itself has a pretty light load - a tile based game could have thousands in memory. You'll want to toggle the visibility of sprites if you plan on drawing a large number to the scene off-camera. This is where a SpriteManager comes in. It'll check to see if the sprite is in the camera's rectangle and disable sprites that aren't. There is a semi-official example here that is good, if a little complicated:
http://wiki.unity3d.com/index.php?title=SpriteManager
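The underlying check is just a rectangle-overlap test. Here is an engine-agnostic sketch of the idea (the Rect and Sprite types are invented for illustration; a Unity SpriteManager does the equivalent with the camera and renderer components):

    public class VisibilityCuller {
        /** Axis-aligned rectangle in world units (illustrative type, not Unity's). */
        static class Rect {
            float x, y, width, height;
            Rect(float x, float y, float width, float height) {
                this.x = x; this.y = y; this.width = width; this.height = height;
            }
            boolean overlaps(Rect other) {
                return x < other.x + other.width && other.x < x + width
                    && y < other.y + other.height && other.y < y + height;
            }
        }

        /** Hypothetical sprite wrapper: only its bounds and a visibility flag matter here. */
        static class Sprite {
            Rect bounds;
            boolean visible;
            Sprite(Rect bounds) { this.bounds = bounds; }
        }

        /** Enable only the sprites whose bounds intersect the camera rectangle. */
        static void cull(Iterable<Sprite> sprites, Rect cameraRect) {
            for (Sprite sprite : sprites) {
                sprite.visible = sprite.bounds.overlaps(cameraRect);
            }
        }
    }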
I am using Unity3D, and I have a function which is being called inside of OnGUI to lay out the various gui components of my application. Ordinarily, the labels and buttons are all inside of a certain Rect that I supply, which is centered on the screen.
No problem there... however, what I want is to sometimes render the exact same GUI elements, which can be dynamic (and thus not just put into a prefabbed texture), into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons that were drawn in the function should still respond normally.
I was rather hoping I could just specify some values in GUI.matrix to map the rectangle to a trapezoid, but my initial exploration seems to show that the GUI elements don't appear to use homogeneous coordinates, and everything still shows up as rectangular.
Is there any way to do this with Unity, ideally without requiring access to pro-only features?
As of now, the Unity3D GUI system isn't very flexible. The new GUI system is one of the features still not released in Unity 4 (we are all waiting for it).
From my point of view it has several problems, particularly:
You are forced to lay out components following the flow of the code, instead of having a more declarative (or at least more structured) way to do it.
It's quite inefficient (at least one draw call per button).
It isn't flexible at all. Adding, removing, and enabling/disabling buttons can quickly become painful operations as the number of buttons increases.
however, what I want is to sometimes render the exact same GUI elements, which can be dynamic, and thus not just put into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen, and rotated slightly. All GUI buttons that were drawn in the function should still respond normally.
This is quite hard, if not impossible, to achieve using Unity's GUI classes.
I see 2 possibilities:
Don't use GUI classes to do that. If your GUI is simple enough, you can implement your own (even 3D) buttons using, for example:
A mesh (a plane or a trapezoid mesh) with a texture for the button background
TextMesh for drawing 3D text
RayCasting to check if a button has been pressed
Use a library that implements a more advanced GUI system like NGUI
When I ran into the same problem, I just used normal 3D GameObjects (cubes with textures) and used OnMouseDown (PC/Mac) or RayCasting (Android/iOS) on them. I guess that's how everyone does it.
I am developing a map app for our school. Our school provided me with its own map image and coordinate information, so I want to use that map image as the map source and show a point on it according to the user's location. Can anybody give me some advice?
Thanks in advance.
There are 2 ways:
It is possible to change the source of the map tiles (e.g. from Bing to, say, Nokia or Google) of the Map Control. However, for this to work, it is important that the map-tile source implements mechanisms like quadkeys (e.g. see this). Therefore, to answer your question: if you would like to use the Bing Map Control with your school's map so that you can leverage the positioning features of the control, it would require that you have a properly designed map-tile server. And there might be some legal issues with altering the Bing Map control, if I am not mistaken.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest that it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you could do a simple mapping between these 2 systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply use the fact that you can overlay UI controls in Silverlight. For the latter, have a canvas with an image of the map of your school, and then on top of that canvas have an <image> representing the device and change its top-left coordinate relative to the canvas.
So, in summary: as the geo-watcher gives geo coordinates to your code, a mapping function (which you have pre-calibrated) converts them to pixel X-Y, and you use that X-Y to position an overlay <image> or draw a "pin" on a WriteableBitmap on which you have previously drawn the image of your school's map. Things get complicated with this approach when you also want zooming, but the solution is easily scalable.
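The mapping function itself is just a linear interpolation between two calibration points. A hedged sketch of the idea (in plain Java rather than the Silverlight/C# of the original platform, and with made-up calibration values):

    public class MapCalibration {
        // Calibration: the geo coordinates of the map image's edges and the image
        // size in pixels (these numbers are invented for the example).
        static final double LON_LEFT = 103.770, LON_RIGHT = 103.780;   // longitude range
        static final double LAT_TOP = 1.300, LAT_BOTTOM = 1.292;       // latitude range
        static final double IMAGE_WIDTH = 2048, IMAGE_HEIGHT = 1536;   // map image in pixels

        /** Converts a geo coordinate from the geo-watcher into pixel X-Y on the map image. */
        static double[] geoToPixel(double latitude, double longitude) {
            double x = (longitude - LON_LEFT) / (LON_RIGHT - LON_LEFT) * IMAGE_WIDTH;
            double y = (latitude - LAT_TOP) / (LAT_BOTTOM - LAT_TOP) * IMAGE_HEIGHT;
            return new double[] { x, y };
        }
    }

This assumes the map image is axis-aligned (north up) and small enough for a linear fit to be accurate; zooming then just means scaling and offsetting the resulting pixel coordinates before positioning the pin.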
Does this help clear things a bit?
Answering the 2nd question in the comment below:
Yes, you can zoom in and out of the canvas, but you would have to program it yourself; the canvas control itself does not have this capability. Hence, you would have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch and stretch gestures) and react to them by re-drawing a portion of the region on the canvas so that it now stretches over the entire canvas. That is zooming. For instance, in the zoom-in case: you would have to determine a geometrical area which corresponds to the zoom factor and is in ratio to the dimensions of the canvas object. Then you would have to scale that portion up so that edges, and the empty spaces representing walls and the spaces between them, grow proportionately. Also, you have to determine the center point of that region, which you fix on the canvas so that everything grows away from it. Hence you would achieve an appropriate zooming effect. At that point you would have to re-adjust your mapping function of geo-coordinates to pixel X-Y so that the "pin" or object of interest can be drawn precisely and accurately on the newly rendered surface.
I understand that this can appear quite involved, but it is straightforward once you appreciate the mechanics of what is required.
Another, easier option could be to use SVG (Scalable Vector Graphics) in a WebBrowser control. Note that you would still require the geo-coordinate to pixel X-Y mapping. However, with this approach you get the zooming for free through the combination of SVG (which has transformation capabilities for scaling up and down) and the WebBrowser control (which renders the SVG and handles the zoom gestures for you). For that, I believe the main cost would be re-creating the map of your school, which is a bitmap, as SVG. There are tools like Inkscape which you can use to load the image of your map and trace the outlines over it. You can then save that outline document as an SVG. In fact, I would recommend trying this approach before tackling the canvas method, as I feel it would be the easiest path for your needs.