I'm developing a game with THREE.js and webvr-boilerplate. I'm struggling a bit with how to properly render a HUD (score, distance, power-ups, etc.) that always stays on top of the scene. I've tried using a plane (with a texture drawn from a hidden canvas element), but positioning it in space proves difficult since I can't match the right depth.
Any clues please? :)
Well, you shouldn't have a classic HUD; VR doesn't work like that.
You're looking for something called diegetic or spatial UI. When the scores and other icons are rendered as geometry in scene space at a fixed position or distance, that's called spatial UI. For best results, draw the information on some game object that mimics a real display, for example a fuel gauge on the dashboard of a car or the visible remaining bullets on a gun; that's called diegetic UI.
Unity has made a nice page describing these concepts.
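For the fixed-distance (spatial UI) variant, a minimal sketch with THREE.js might look like the following. The `scene` and `camera` are assumed to be your existing objects, and the sizes, distance and canvas contents are placeholder assumptions; the key idea is parenting the HUD plane to the camera so it keeps a constant depth in view:

```typescript
import * as THREE from 'three';

declare const scene: THREE.Scene;              // your existing scene
declare const camera: THREE.PerspectiveCamera; // your existing (VR) camera

// Draw HUD data into a canvas, map it onto a small plane, and parent the plane
// to the camera so it stays at a fixed position and depth in view (spatial UI).
const hudCanvas = document.createElement('canvas');
hudCanvas.width = 512;
hudCanvas.height = 256;
const ctx = hudCanvas.getContext('2d')!;
ctx.font = '48px sans-serif';
ctx.fillStyle = '#ffffff';
ctx.fillText('Score: 0', 20, 60);

const hudTexture = new THREE.CanvasTexture(hudCanvas);
const hudMaterial = new THREE.MeshBasicMaterial({
  map: hudTexture,
  transparent: true,
  depthTest: false, // draw on top of scene geometry
});
const hudPlane = new THREE.Mesh(new THREE.PlaneGeometry(0.4, 0.2), hudMaterial);

hudPlane.position.set(0, -0.15, -1); // about 1 m in front of the eyes, slightly below center
camera.add(hudPlane);                // the camera must itself be part of the scene graph
scene.add(camera);

// To update the HUD, redraw the canvas and flag the texture:
// ctx.clearRect(0, 0, hudCanvas.width, hudCanvas.height);
// ctx.fillText(`Score: ${score}`, 20, 60);
// hudTexture.needsUpdate = true;
```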
I created a performance statistics HUD specifically for WebVR and THREE.js projects.
https://github.com/Sean-Bradley/StatsVR
While the default setup shows specific information, you can modify it to show custom graphics and other data.
And if you don't believe me, just check out the StatsVR video tutorial.
I'm trying to implement a scene where an object is updated differently for each eye (e.g. I want to show the box with the opposite rotation in each eye).
I have a demo application written with WebGL using Three.js, as described in the Google Developers tutorial.
But there is only one scene, containing one mesh, with a single update function. I can't find a way to separate the update so that it's done separately for each eye (just as the rendering is), and I wonder if it's even possible.
Does anybody have experience with a similar case?
Your use case is rather unusual (and may I say, eye-watering), so basically the answer is no: Three.js has abstracted away the left/right eye dichotomy of VR. Internally it renders the scene using an array of two cameras, each with the correct left/right eye settings.
Fortunately, every object has an onBeforeRender(renderer, scene, camera, ...) event. If you hook that event and find a way to distinguish the left-eye camera from the right-eye camera, you should be able to modify the orientation just before the object gets rendered.
A (perhaps too) simple way to distinguish the cameras would be to keep track of the index with a counter.
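A rough sketch of that approach (the alternating-counter eye detection is an assumption about render order rather than a documented contract, and `box` / `baseRotationY` are placeholders for your own mesh and rotation):

```typescript
import * as THREE from 'three';

declare const box: THREE.Mesh;     // the mesh to flip per eye (assumed to exist)
const baseRotationY = Math.PI / 8; // placeholder rotation amount

// onBeforeRender fires once per eye in VR, so a simple counter can tell the two
// passes apart (assumption: the left and right eyes are rendered alternately).
let eyePass = 0;

box.onBeforeRender = () => {
  const isLeftEye = eyePass % 2 === 0;
  eyePass++;

  // Rotate one way for the left eye, the opposite way for the right eye.
  box.rotation.y = isLeftEye ? baseRotationY : -baseRotationY;

  // Refresh the world matrix so the change is picked up for this draw call.
  box.updateMatrixWorld(true);
};
```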
I have noticed that in three.js, every example of sizing an object relative to its container seems to be based only on the height of the canvas/window it is responding to. This works well for worlds where the canvas is the only thing on the page. However, in a new project we are using overlaying divs as part of the design, and as the window gets thinner, we are hoping to be able to scale down the main object in our scene. I have linked to, and taken two screenshots of, a basic example. I am wondering if anyone has come up with a good solution for making the width responsive as well.
https://threejs.org/examples/#webgl_geometry_cube
I'm using Unity 4.6 to develop a 2D game. I want to know if having a lot of GameObjects in the scene (out of the camera's sight) has a considerable influence on performance.
For example, is it efficient to make a scrollable list of names (say 1000 of them)? (Each one is a GameObject and has a text, a button, etc.)
I mask them in a specified area (for example 10 of them are visible at the same time).
Thanks in advance!
Depends on whether or not the objects have visible components. If they do, the engine will draw them even if they are 'off-camera'. A game object by itself has a pretty light load - a tile-based game could have thousands in memory. You'll want to toggle the visibility of sprites if you plan on drawing a large number to the scene off-camera. This is where a SpriteManager comes in. It'll check whether each sprite is inside the camera's rectangle and disable the sprites that aren't. There is a semi-official example here that is good, if a little complicated:
http://wiki.unity3d.com/index.php?title=SpriteManager
I am using Unity3D, and I have a function which is being called inside of OnGUI to lay out the various gui components of my application. Ordinarily, the labels and buttons are all inside of a certain Rect that I supply, which is centered on the screen.
No problem there... however, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just put into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons drawn in the function should still respond normally.
I was rather hoping I could just specify some values in GUI.matrix to map the rectangle to a trapezoid, but my initial exploration seems to show that the GUI elements don't use homogeneous coordinates, and everything still shows up as rectangular.
Is there any way to do this with Unity, ideally without requiring access to pro-only features?
As of now, the Unity3D GUI system isn't very flexible. The new GUI system is one of the features still not released in Unity 4 (we are all waiting for it).
From my point of view it has several problems, particularly:
You are forced to lay out components using the flow of the code, instead of having a more declarative (or at least more structured) way to do it.
It's quite inefficient (at least one draw call per button).
It isn't flexible at all. Adding, removing, and enabling/disabling buttons can quickly become a painful operation as the number of buttons increases.
"However, what I want is to sometimes render the exact same GUI elements, which can be dynamic and thus not just put into a prefabbed texture, into a trapezoid-shaped area off to the side, looking as if that GUI were actually on a flat plane, pushed away from the center of the screen and rotated slightly. All GUI buttons drawn in the function should still respond normally."
This is quite hard, if not impossible, to achieve using Unity's GUI classes.
I see 2 possibilities:
Don't use GUI classes to do that. If your GUI is simple enough, you can implement your own (even 3d) buttons using for example:
A mesh (a plane or a trapezoid mesh) with a texture for the button background
TextMesh for drawing 3D text
RayCasting to check if a button has been pressed
Use a library that implements a more advanced GUI system like NGUI
When I ran into the same problem, I just used normal 3D GameObject cubes with textures and called OnMouseDown (PC/Mac) or used raycasting (Android/iOS) on them. I guess that's how everyone does it.
I am making a GUI in OpenGL (more specifically, lwjgl). I have tried hard to research different ways of doing this, but I am having a hard time finding exactly what I want. I do not want to use any external libraries (only what is built into OpenGL; I'm even trying to stay away from GLUT), and I would like it to work on anything that supports OpenGL (e.g. framebuffer objects don't work on older graphics cards).
I am making a 3D GUI with a scrollable panel as a component. The problem is I don't know how to draw a partial GUI component without doing a lot of calculations to only render part of it. I am making the components out of OpenGL primitives, not textures. I was hoping there is an easy way to do this like use multiple viewports. I don't really even understand what viewports are.
In short: I need to have a scrollable panel as a component overlapping other GUI components (since it will be a drop down menu) and not let any of the components in my panel draw outside my panel.
If you just want to prevent drawing pixels that are outside of a rectangular region (and I think that's what you're asking), then glScissor is exactly what you're looking for.
In lwjgl, you can find the function in org.lwjgl.opengl.GL11.
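A minimal sketch of the idea, shown here with the WebGL API since its gl.enable(gl.SCISSOR_TEST) / gl.scissor calls map one-to-one onto GL11.glEnable(GL11.GL_SCISSOR_TEST) / GL11.glScissor in lwjgl. The panel rectangle is whatever window-space region your drop-down occupies:

```typescript
// Clip everything drawn by `drawContents` to the panel rectangle.
// Scissor coordinates are in window pixels with the origin at the bottom-left.
function drawClippedPanel(
  gl: WebGLRenderingContext,
  panelX: number, panelY: number,
  panelWidth: number, panelHeight: number,
  drawContents: () => void
): void {
  gl.enable(gl.SCISSOR_TEST);
  gl.scissor(panelX, panelY, panelWidth, panelHeight);
  drawContents(); // draw the full panel; pixels outside the rectangle are discarded
  gl.disable(gl.SCISSOR_TEST);
}
```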
If you want to scroll a larger scene within a fixed region on the screen, the most straightforward way to go is by just modifying your projection matrix for the scroll position and redrawing the scene. If you are using gluPerspective to set up your projection matrix you'll have to convert it to a direct call to glFrustum; if you're using glOrtho it's much more straightforward.
Keep in mind that "scrolling" a perspective view has no one right way to do things - it depends on what sort of effect you want to achieve, and what particular sort of distortion you want near the edges of the overall viewport.
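As a rough illustration of the glOrtho-style case (sketched with THREE.js's OrthographicCamera, which the earlier questions already use; with raw OpenGL the same shifted bounds would go directly into glOrtho), scrolling amounts to offsetting the orthographic window and redrawing:

```typescript
import * as THREE from 'three';

// Shift the orthographic bounds by the scroll offset, then redraw the scene.
// viewWidth/viewHeight are the size of the visible region in world units;
// scrollX/scrollY are assumed to come from your own input handling.
function applyScroll(
  camera: THREE.OrthographicCamera,
  viewWidth: number, viewHeight: number,
  scrollX: number, scrollY: number
): void {
  camera.left   = scrollX;
  camera.right  = scrollX + viewWidth;
  camera.bottom = scrollY;
  camera.top    = scrollY + viewHeight;
  camera.updateProjectionMatrix(); // rebuild the projection after changing the bounds
}
```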