I have a WebGL 2 app that renders a bunch of point lights with a deferred pipeline. I would like to port this to A-Frame for use with an Oculus Rift S.
My questions relate only to rendering. I know next to nothing about VR-specific rendering, other than the fact that two images are rendered, one for each eye, and then passed through some distortion filters. I see that there are components (last updated quite a while ago) that provide this functionality. My pipeline is written with a low-level WebGL library and I do not want to port it to some other component (for performance and compatibility reasons, plus my own vanity).
I would also like to avoid as much direct integration of this pipeline with three.js as possible. Right now I have a basic three.js scene with a full-screen quad textured with the output of my deferred renderer. I assume leaving this as-is and shoving the scene into A-Frame wouldn't render properly on a Rift, so how would I go about rendering a full-screen quad for each eye in A-Frame? Are the camera frustums and views for each eye easily exposed in A-Frame? Is my thinking way off entirely?
Thanks for any help. I've looked through the A-Frame Git repository for some time now and cannot find any clear place to start.
Related
I'm trying to implement a scene where an object is updated differently for each eye (e.g. I want to show the opposite rotation of a box for each eye).
I have a demo application written with WebGL using Three.js, as described in the Google Developers tutorial.
But there is only one scene, containing one mesh, with a single update function. I can't find a way to split the update so it's done separately for each eye (just as rendering is done), and I wonder if it's even possible.
Does anybody have any experience with a similar case?
Your use case is rather unusual (and, may I say, eye-watering), so basically the answer is no: Three.js has abstracted away the left/right-eye dichotomy of VR. Internally it renders the scene using an array of two cameras, with the correct settings for each eye.
Fortunately, every object has an onBeforeRender(renderer, scene, camera, ...) callback. If you hook that callback and find a way to distinguish the left- and right-eye cameras, you should be able to modify the orientation just before the object gets rendered.
A (perhaps too) simple way to distinguish the cameras would be to keep track of the eye index with a counter.
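A rough sketch of that counter idea, assuming a recent three.js where the per-eye cameras are rendered one after the other while presenting (so onBeforeRender fires twice per frame); in older versions the callback may only receive the combined ArrayCamera once, so verify against the version you use. `renderer`, `scene` and `camera` are assumed to already exist:

```js
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(box);

let angle = 0;   // base rotation, advanced once per frame
let eyePass = 0; // 0 = first eye rendered this frame, 1 = second

box.onBeforeRender = function (renderer /*, scene, camera */) {
  if (renderer.xr.isPresenting) {
    // First pass of the frame spins one way, second pass the other.
    this.rotation.y = eyePass === 0 ? angle : -angle;
    this.updateMatrixWorld(true); // matrixWorld is read right after this callback
    eyePass = (eyePass + 1) % 2;
  }
};

renderer.setAnimationLoop(() => {
  angle += 0.01;
  box.rotation.y = angle; // non-VR fallback
  renderer.render(scene, camera);
});
```

As the answer says, this is fragile (for example, the counter can drift if an eye pass skips the object), but it shows where the per-eye hook lives.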
I'm developing a game with three.js and webvr-boilerplate. I'm struggling a bit with how to properly render a HUD (score, distance, power-ups, etc.) that always stays on top of the scene. I've tried a plane (with a texture brought in from a hidden canvas element), but positioning it in space proves difficult since I can't match the right depth.
Any clues please? :)
Well, you shouldn't have a classic HUD; VR doesn't work like that.
You're looking for something called diegetic or spatial UI: the scores and other icons are rendered as geometry in scene space, at a fixed position or distance (this is called spatial UI). For best results, draw the information on some game object mimicking a real display, for example a fuel gauge on the dashboard of a car or the visible remaining bullets on a gun (this is called diegetic UI).
Unity has made a nice page describing these concepts.
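For a three.js scene, a minimal sketch of the spatial-UI idea could look like this: draw the score onto a canvas, use it as a texture on a small plane, and park that plane at a fixed spot in scene space. The canvas size, plane dimensions and position are placeholder values, and `scene` is assumed to already exist:

```js
// Draw the HUD content onto an offscreen canvas.
const scoreCanvas = document.createElement('canvas');
scoreCanvas.width = 256;
scoreCanvas.height = 128;
const ctx = scoreCanvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.font = '48px sans-serif';
ctx.fillText('Score: 0', 16, 64);

// Use the canvas as a texture on a plane placed in the world.
const hudTexture = new THREE.CanvasTexture(scoreCanvas);
const hudPlane = new THREE.Mesh(
  new THREE.PlaneGeometry(0.4, 0.2), // roughly 40 x 20 cm at world scale
  new THREE.MeshBasicMaterial({ map: hudTexture, transparent: true })
);
hudPlane.position.set(0, 1.2, -1.5); // a fixed spot in front of where the player starts
scene.add(hudPlane);

// Whenever you redraw the canvas, flag the texture for re-upload:
// hudTexture.needsUpdate = true;
```

Because the plane lives in the scene at a real depth, the stereo rendering handles it like any other object, which avoids the depth-matching problem described above.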
I created a performance statistics HUD specifically for WebVR & THREE.js Projects.
https://github.com/Sean-Bradley/StatsVR
While the default setup shows specific information, you can modify it to show custom graphics and other data.
And if you don't believe me, just check out the StatsVR video tutorial.
Does Three.js have any functionality or capability for AI (artificial intelligence)? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such functionality or a system for it?
WebGL
- create a buffer
- bind a buffer
- allocate data
- set up state
- issue a draw call
- run GLSL shaders

three.js
- create a 3D context using WebGL
- create 3-dimensional objects
- create a scene graph
- create primitives like spheres, cubes, toruses
- move objects around, rotate them, scale them
- test for intersections between rays, triangles, planes, spheres, etc.
- create 'materials' (rather than shaders)

JavaScript
- write algorithms
I want enemies to look for me and try to kill me
Yes, three.js is capable of doing this; you just have to write an algorithm using three.js's classes. Your enemies would be 3D objects casting rays, intersecting with other objects, etc.
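To give a flavor of what that algorithm might look like, here is a hedged sketch: each enemy checks line of sight to the player with a Raycaster and, if the player is visible, turns and walks toward them. `enemies`, `player` and `obstacles` are assumed to exist in your scene, and the speed is arbitrary:

```js
const raycaster = new THREE.Raycaster();
const toPlayer = new THREE.Vector3();

function updateEnemies(delta) {
  for (const enemy of enemies) {
    // Vector from this enemy to the player.
    toPlayer.subVectors(player.position, enemy.position);
    const distance = toPlayer.length();

    // Cast a ray toward the player and see whether anything blocks it first.
    raycaster.set(enemy.position, toPlayer.normalize());
    const hits = raycaster.intersectObjects(obstacles, true);
    const canSeePlayer = hits.length === 0 || hits[0].distance > distance;

    if (canSeePlayer) {
      enemy.lookAt(player.position); // a non-camera object's +Z now points at the player
      enemy.translateZ(2 * delta);   // ...so walk forward along +Z (2 units per second)
      // here you would also decide when to shoot, play animations, etc.
    }
  }
}
```

None of this is built into three.js; the library only provides the math and scene-graph pieces (rays, vectors, intersections) that such logic is built from.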
You would be building a game engine, and you could use three.js as the rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
Any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'WebGL engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3D engine. Wikipedia says:
"Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser. Three.js uses WebGL."
So if I just need to draw a car or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this: with a few draw instructions, you could render a full-screen quad with a very elaborate pixel shader. Three.js gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3D' graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and manually construct a plane. No transformations need to happen, so no need for matrix classes. You'd use shaders and textures, and perhaps something like the EffectComposer.
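A stripped-down sketch of that "full-screen quad plus pixel shader" idea, assuming a recent three.js and an existing `renderer` (the texture URL 'photo.jpg' is a placeholder, and the fragment shader here only inverts colors, swap in whatever processing you need):

```js
const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

// Two triangles covering clip space, built by hand (no PlaneGeometry needed).
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array([
  -1, -1, 0,   1, -1, 0,   1, 1, 0,
  -1, -1, 0,   1,  1, 0,  -1, 1, 0,
]);
const uvs = new Float32Array([
  0, 0,  1, 0,  1, 1,
  0, 0,  1, 1,  0, 1,
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));

const material = new THREE.ShaderMaterial({
  uniforms: { map: { value: new THREE.TextureLoader().load('photo.jpg') } },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position, 1.0); // already in clip space, no matrices
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      vec4 c = texture2D(map, vUv);
      gl_FragColor = vec4(1.0 - c.rgb, c.a); // trivial "processing": invert colors
    }
  `,
});

const quad = new THREE.Mesh(geometry, material);
quad.frustumCulled = false; // the vertex shader ignores the camera anyway
scene.add(quad);

renderer.render(scene, camera);
```

No Vector3, no matrix math, no lights; three.js is only being used here as a convenient way to manage the GL state, program and texture.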
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few websites do offer pre-coded features like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't; however, https://mugen87.github.io/yuka/ (Yuka) is a great AI engine that can work in collaboration with three.js to create AI.
It provides line-of-sight and shooting game logic, as well as car logic, which I've been playing around with recently; there's a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o
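A rough sketch of how Yuka is typically wired to three.js, following the Vehicle/SeekBehavior/setRenderComponent pattern from its documentation (the mesh, target position and loop structure are made up for illustration, so check the current Yuka docs before relying on the exact API):

```js
import * as THREE from 'three';
import * as YUKA from 'yuka';

// Yuka drives the logic; three.js only renders.
const enemyMesh = new THREE.Mesh(
  new THREE.ConeGeometry(0.2, 0.5),
  new THREE.MeshNormalMaterial()
);
enemyMesh.matrixAutoUpdate = false; // Yuka will supply the transform
scene.add(enemyMesh);

const enemy = new YUKA.Vehicle();
const target = new YUKA.Vector3(5, 0, 5); // e.g. the player's position
enemy.steering.add(new YUKA.SeekBehavior(target));

// Copy Yuka's simulated transform onto the three.js mesh each frame.
enemy.setRenderComponent(enemyMesh, (entity, renderComponent) => {
  renderComponent.matrix.copy(entity.worldMatrix);
});

const entityManager = new YUKA.EntityManager();
entityManager.add(enemy);

const time = new YUKA.Time();
renderer.setAnimationLoop(() => {
  const delta = time.update().getDelta();
  entityManager.update(delta); // runs the steering behaviors
  renderer.render(scene, camera);
});
```

The division of labor is the point: Yuka owns the entities and their behaviors, three.js owns the meshes, and the render-component callback is the only bridge between them.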
I'm making a game in Unity 5; it's Minecraft-like. For the world rendering, I don't know if I should destroy cubes that I don't see or make them invisible.
My idea was to destroy them, but creating them again each time they become visible would take too much processing power, so I'm searching for alternatives. Is making them invisible a viable solution?
I'll be loading a ton of cubes at the same time, for those unfamiliar with minecraft, here is a screenshot so that you get the idea.
That is just a part of what is rendered at the same time in a typical session.
Unity, like all graphics engines, can cause the GPU to process geometry that would not be visible on screen. The processes that try to limit this are culling and depth testing:
- Frustum culling - prevents objects fully outside of the camera's viewing area (frustum) from being rendered. The viewing frustum is defined by the near and far clipping planes and the four planes connecting near and far on each side. This is always on in Unity and is defined by your camera's settings. Excluded objects will not be sent to the GPU.
- Occlusion culling - prevents objects that are within the camera's view frustum but completely occluded by other objects from being rendered. This is not on by default. For information on how to enable and configure it, see occlusion culling in the Unity manual. Occluded objects will not be sent to the GPU.
- Back-face culling - prevents polygons whose normals face away from the camera from being rendered. This occurs at the shader level, so the geometry IS processed by the GPU. Most shaders do cull back faces. See the Cull setting in the Unity shader docs.
- Z-culling/depth testing - prevents polygons that won't be seen, because they are further from the camera than opaque geometry already rendered this frame, from being rendered. Only fully opaque (no transparency) polygons can trigger this. Depth testing is also done in the shader, and therefore the geometry IS processed by the GPU. This process can be controlled by the ZWrite and ZTest settings described in the Unity shader docs.
On a related note, if you are using so many geometrically identical blocks, make sure you are using a prefab. This allows Unity to reuse the same set of triangles rather than having 2 x 6 x thousands of them in your scene, thereby reducing GPU memory load.
A middle ground between rendering the object as invisible and destroying it is to keep the underlying C++ object but detach it from the scene graph.
This gives you all the rendering-speed benefits of destroying it, but when it comes time to put it back you won't need to pay for recreation; you just reattach it at the right place in the graph.
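The question is about Unity, but the same trade-off is easy to see in three.js terms (purely as an illustration; `chunkMesh` is a made-up name for one of the cube meshes):

```js
// Option 1: keep the object in the graph but skip rendering it.
chunkMesh.visible = false;   // still traversed every frame, cheap to toggle

// Option 2: detach it from the scene graph but keep the object alive.
scene.remove(chunkMesh);     // no traversal or render cost while hidden...
scene.add(chunkMesh);        // ...and reattaching later recreates nothing

// Option 3 (the expensive one): destroy and rebuild from scratch.
chunkMesh.geometry.dispose();
chunkMesh.material.dispose();
```

Option 2 is the "middle ground" the answer describes: the data stays in memory, it just stops participating in rendering.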
I create a THREE.PlaneGeometry with varying heights, and at the highest point I place a THREE.PointLight, but it illuminates areas that are not visible from that point.
Why?
I want the light to illuminate only the areas that are visible from that point.
By default, the appearance of any given point on a surface is calculated using the lights, their properties, and of course the material properties; it does not take the rest of the scene into account, as that would be very computationally expensive. Various ray-tracing renderers do this, but they are really slow, and that's not how WebGL and Three.js work.
What you want is shadows. Three.js is capable of rendering shadows using the shadow map method. There are various examples of using shadow maps, both on the net and in the Three.js examples folder.
A word of warning though: getting shadows to work well can be hard if you don't have the basics down, so you may need to do some studying. Shadows can slow your application down (especially with many lights) and look ugly if not properly configured and fine-tuned. Also, I think shadow maps are only supported for SpotLight and DirectionalLight; PointLights are trickier.
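A minimal sketch of what turning shadows on looks like in three.js, assuming an existing `renderer` and `scene`; `terrainMesh` and `someObject` are placeholder names for your plane and whatever sits on it, and the numbers are just starting points to tune:

```js
// Enable the shadow map pass on the renderer.
renderer.shadowMap.enabled = true;

// Use a light type with solid shadow support, positioned above the terrain.
const light = new THREE.SpotLight(0xffffff, 1);
light.position.set(10, 20, 10);
light.castShadow = true;
light.shadow.mapSize.set(1024, 1024); // resolution vs. performance trade-off
scene.add(light);

// The terrain receives shadows; anything standing on it casts them.
terrainMesh.receiveShadow = true;
someObject.castShadow = true;
```

With this in place, areas of the height-mapped plane that are not visible from the light fall into shadow instead of being lit through the geometry.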