I would like to ask if there is some support or some plugin in three.js for 3D active glasses.
I have Nvidia 3D Vision active glasses, and for my web application I need to implement a cross-eye effect in three.js.
Can you please help me with it?
Doing 3D stereo with active glasses is not a matter of a plugin for three.js. If you read
Active Shutter 3D System you will see that you need a way to black out the right eye while displaying the left-eye image, then black out the left eye while displaying the right-eye image. So you need to synchronize the glasses with the display, and the display needs a higher refresh rate than the normal 60 Hz in order to keep a real-time feel to the rendering. This limitation cannot be overcome in a browser window.
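If a side-by-side (cross-eye or parallel-view) pair is acceptable instead of shutter sync, three.js ships a StereoEffect in its examples folder. A minimal sketch, assuming you already have a scene and camera set up:

```javascript
import * as THREE from 'three';
import { StereoEffect } from 'three/examples/jsm/effects/StereoEffect.js';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Wrap the renderer; it renders the scene twice, side by side, once per eye.
const effect = new StereoEffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

function animate() {
  requestAnimationFrame(animate);
  effect.render(scene, camera);
}
animate();
```

This gives you the cross-eye style image pair in the browser, but it does not synchronize with the Nvidia glasses.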
I have a WebGL 2 app that renders a bunch of point lights with a deferred pipeline. I would like to port this to A-Frame for use with an Oculus Rift S.
My questions relate only to rendering. I know next to nothing about VR-specific rendering, other than the fact that two images are rendered, one for each eye, and then passed through some distortion filters. I see that there exist components (last updated quite a while ago) that provide this functionality. My pipeline is written with a low-level WebGL lib, and I do not want to port it to some other component (performance and compatibility reasons, plus my own vanity).
I would also like to avoid as much direct integration of this pipeline with three.js as possible. Right now I have a basic three.js scene with a full-screen quad textured with the output of my deferred renderer. I assume leaving this as-is and shoving the scene into A-Frame wouldn't render properly on a Rift, so how would I go about rendering a full-screen quad for each eye in A-Frame? Are the camera frustums and views for each eye easily exposed in A-Frame? Is my thinking way off entirely?
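To make the question concrete, here is the kind of thing I imagine, as a sketch. I'm assuming three.js's stereo layer convention, where the left-eye camera renders layer 1 and the right-eye camera renders layer 2; quadGeometry, leftEyeMaterial, and rightEyeMaterial are placeholders for my own resources:

```javascript
// One full-screen quad per eye, separated via three.js camera layers.
// In WebXR, three.js enables layer 1 on the left-eye camera and layer 2
// on the right-eye camera, so a mesh restricted to one of those layers
// is drawn for that eye only.
const quadL = new THREE.Mesh(quadGeometry, leftEyeMaterial);
quadL.layers.set(1); // left eye only
scene.add(quadL);

const quadR = new THREE.Mesh(quadGeometry, rightEyeMaterial);
quadR.layers.set(2); // right eye only
scene.add(quadR);
```

Is something like this the right direction in A-Frame?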
Thanks for any help. I've looked through the A-Frame git for some time now and cannot find any clear place to start.
I have a 3D app in which the camera doesn't move, but I do have to work within the camera's far clipping plane.
That leaves me a choice between three options:
1. A skybox, using only one side of it (see the sketch below). Not my favorite option, because it looks blurry and cropped.
2. A canvas + image (the problem is that I have to keep adjusting its size because of the camera's far plane).
3. A 3D quad with an image material (the same problem as in 2).
Which option will give me the best performance on mobile?
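For reference, this is the skybox variant I mean, as a sketch. I'm assuming a three.js setup, and the six file names are hypothetical:

```javascript
// Cube-map skybox via scene.background. The background is drawn behind
// everything and is not clipped by the camera's far plane, so it never
// needs per-frame resizing.
const loader = new THREE.CubeTextureLoader();
scene.background = loader.load([
  'px.jpg', 'nx.jpg', // +x, -x
  'py.jpg', 'ny.jpg', // +y, -y
  'pz.jpg', 'nz.jpg', // +z, -z
]);
```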
I'm developing a game with three.js and webvr-boilerplate. I'm struggling a bit with how to properly render a HUD (score, distance, power-ups, etc.) that always stays on top of the scene. I've tried a plane (with a texture brought in from a hidden canvas element), but positioning it in space proves difficult, since I can't match the right depth.
Any clues please? :)
Well, you shouldn't have a classic HUD; VR doesn't work like that.
You're looking for something called diegetic or spatial UI, where scores and other indicators are rendered as geometry in scene space, at a fixed position or distance (that one is called spatial UI). For best results, draw the information on a game object that mimics a real display, for example a fuel gauge on the dashboard of a car or the visible remaining bullets on a gun (that one is called diegetic UI).
Unity has made a nice page describing these concepts.
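For the spatial variant in three.js, a minimal sketch could look like the following. It assumes a standard perspective camera; remember that the camera must itself be added to the scene for its children to render:

```javascript
// Draw the HUD contents into an offscreen canvas.
const canvas = document.createElement('canvas');
canvas.width = 512;
canvas.height = 256;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.font = '48px sans-serif';
ctx.fillText('Score: 1200', 20, 60);

// Put the canvas on a small plane parented to the camera, at a fixed depth.
const texture = new THREE.CanvasTexture(canvas);
const hud = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 0.5),
  new THREE.MeshBasicMaterial({ map: texture, transparent: true })
);
hud.position.set(0, -0.3, -2); // always 2 units in front of the viewer
camera.add(hud);
scene.add(camera);

// After redrawing the canvas, flag the texture for re-upload:
// texture.needsUpdate = true;
```

A fixed distance of around 2 meters is usually comfortable; UI placed much closer tends to cause eye strain in VR.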
I created a performance statistics HUD specifically for WebVR and three.js projects.
https://github.com/Sean-Bradley/StatsVR
While the default setup shows specific information, you can modify it to show custom graphics and other data.
And if you don't believe me, just check out the StatsVR video tutorial.
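From memory, basic usage looks roughly like this; treat the import name and constructor signature as assumptions and check the repository's README for the exact API:

```javascript
import StatsVR from 'statsvr';

// Attach the stats panel to the scene so it is visible inside the headset.
const statsVR = new StatsVR(scene, camera);

function animate() {
  requestAnimationFrame(animate);
  statsVR.update(); // refresh the FPS and custom readouts each frame
  renderer.render(scene, camera);
}
animate();
```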
I am a total beginner with Unity, but I'm getting familiar with the interface and the way scripts work with game objects. Some days ago I came across an article about a 3D LED matrix controlled by Unity, and since then I've been trying to make it work with my project.
Original article: http://philippseifried.com/blog/2014/10/29/3d-led-matrix-with-unity/
Basically, once the script is attached to an orthographic camera (or at least that's what I understood from the article), the camera layers and "slices" the scene, transforms it into a pixel matrix, and paints the result into some dynamically generated preview layers.
I have managed to attach the camera and get the preview layers to show up. However, I'm unable to get the final result the article shows, as the preview layers display absolutely nothing. I think it has to do with the fact that the author is using some kind of transparent planes that I have been unable to replicate.
It would be great if someone could guide me a bit toward getting the exact same result as the article, by reading it and watching the last Vine, which shows his Unity screen with the transparent layers up and running.
The script was looking at the background color to decide whether a pixel had to be painted or not.
Changing the camera background to transparent (RGBA) was enough to see the final result.
I would like to put a photorealistic virtual scene on a tablet, so that when the user rotates the tablet, it looks as if the tablet were a window into a virtual world.
Pre-rendered scenes can be photorealistic, while real-time rendering tends to have a "computer-made" look. Given that for one scene the point of view can be rotated but not translated in space, can a pre-rendered panoramic scene give an immersive impression?
I doubt that this is easy, since rotating the viewpoint will cause some sort of distortion. This kind of distortion is easy to handle for apps like Starwalk, but difficult for photos. Can anyone point me in a direction?
I know that this would be tremendously easy if motion were restricted to a single axis, but I would like the user to have a full 3D experience.
You need to either warp the photographs before applying them as textures to your "sky dome", or use non-uniform texture coordinates. Done right, this will even out most of the distortion, giving a more realistic appearance.
Another alternative is to use more photographs, so that you only actually use the central area of each one.
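As a sketch of the sky-dome approach in three.js, assuming the photos have already been stitched into a single equirectangular image ('panorama.jpg' is a placeholder path); the equirectangular mapping onto a sphere is exactly the kind of pre-warp that evens out the distortion:

```javascript
// Map an equirectangular photo onto the inside of a large sphere that
// surrounds the rotation-only camera.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // invert the sphere so the texture faces inward

const texture = new THREE.TextureLoader().load('panorama.jpg');
const material = new THREE.MeshBasicMaterial({ map: texture });
scene.add(new THREE.Mesh(geometry, material));
```

Driving the camera's rotation from the tablet's device-orientation sensors then gives the window-into-a-world effect.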
I've found that http://code.google.com/p/panoramagl/ can render cubic, spherical, and cylindrical panoramic images, so the problem reduces to how to make a panorama, which can be solved by stitching. I will still leave this answer open to see if anyone else has better answers.