Google Cardboard view using three.js?

Is it possible to add a Google Cardboard view camera, as shown in the image below, to Google VR View, using Three.js? If so, how can I do it?
More specifically, how can I add Three.js to the Google VR View code below?
function onLoad() {
  // Load VR View.
  vrView = new VRView.Player('#vrview', {
    width: window.innerWidth,
    height: window.innerHeight,
    video: 'crusher-final.mp4',
    is_stereo: true,
    loop: false,
  });
}

This is not possible with the VRView library: it runs inside an iframe and doesn't provide any interface for adding your own content to its 3D views. However, it is fully open source and implemented with three.js; the source code is here: https://github.com/googlevr/vrview
So you could take that code and add your own content to it, or implement the viewer yourself.
The easiest way to do that is to use the webvr-polyfill, which does most of the work automatically and lets you use the WebVR API even in browsers that don't support it natively.
Three.js has built-in support for the WebVR API, so there is little more to do than enable it with renderer.vr.enabled = true and set the VR display via navigator.getVRDisplays().then(displays => renderer.vr.setDevice(displays[0]));. (Note that this is the legacy WebVR API; recent three.js versions use renderer.xr and WebXR instead.)
See the three.js WebVR examples and the WebVR specification for further reference.
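The wiring described above can be sketched as a small helper. This assumes the legacy WebVR API from older three.js releases (the `renderer.vr` and `getVRDisplays` names come from the answer, not from a specific verified release); the stub objects below are stand-ins, not real three.js or browser APIs, so the call sequence can be followed outside a browser.

```javascript
// Sketch of the legacy WebVR setup described above. `renderer.vr` and
// `navigator.getVRDisplays()` are the old WebVR names; modern three.js
// replaces them with `renderer.xr` and WebXR.
function enableVR(renderer, nav) {
  renderer.vr.enabled = true;                   // tell three.js to render in stereo
  return nav.getVRDisplays().then(displays => { // ask the browser for headsets
    renderer.vr.setDevice(displays[0]);         // use the first one found
    return displays[0];
  });
}

// Stand-in objects (NOT real three.js / browser APIs), just to show the flow:
const stubRenderer = {
  vr: { enabled: false, device: null, setDevice(d) { this.device = d; } }
};
const stubNavigator = {
  getVRDisplays: () => Promise.resolve([{ displayName: 'stub display' }])
};

enableVR(stubRenderer, stubNavigator); // once this resolves, vr.device is set
```

In a real page you would pass the actual `THREE.WebGLRenderer` instance and `window.navigator` instead of the stubs.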

Related

AFrame specify WebGL version

A-Frame 1.1.0 uses three.js r123, which now defaults to WebGL2.
Some A-Frame components don't work with WebGL2 yet, so it would be great if we could use A-Frame with WebGL1. Three.js still supports WebGL1 rendering.
When creating the renderer you can pass your own canvas and context to the constructor instead of letting the engine create them. So you could grab the <canvas> element, request a WebGL1 context from it, and use both when initializing the renderer:
var myCanvas = document.getElementById('myCanvas');
var gl = myCanvas.getContext('webgl'); // 'webgl' (not 'webgl2') yields a WebGL1 context
var renderer = new THREE.WebGLRenderer({
  canvas: myCanvas,
  context: gl
});
According to the maintainer of A-Frame, this is currently not possible with 1.1.0.
A-Frame creates its own canvas in a-scene.js setupCanvas(), so you can't pass in one with the WebGL1 context already set.
We ended up making a patch to our own version of A-Frame. You can see the logic here for choosing WebGL2 vs WebGL1 via the renderer system:
https://github.com/8thwall/8frame/commit/533c4304f20153834a85f60a19e161e61ce71ec6
Even with the patch, there are still WebGL1 issues around GLSL shader compatibility.

How to look at objects using lookAt() with the a-frame camera component and look-controls

Goal: I want to create a web-based experience where the user first needs to see a series of elements in the scene, and afterwards I want to let them explore on their own.
I have several objects around the scene and I want the camera to look at them one by one. I am using the lookAt() method, but it is not working correctly. I found this three.js example:
http://jsfiddle.net/L0rdzbej/135/
But my scene does not behave like that example.
Update: after @Mugen87's answer it is working, with one small modification: access the camera like this:
document.getElementById('cam').sceneEl.camera.lookAt
You can see the example here:
https://glitch.com/~aframe-lookat-cam-not-working
Please click on the button "animate camera".
As mentioned in this thread, you have to remove or disable look-controls if you're overriding camera rotation manually. So you can do:
var cameraEl = document.getElementById('camera');
cameraEl.setAttribute('look-controls', {enabled: false});
to disable the controls, perform your lookAt() operations, and then enable the controls via:
cameraEl.setAttribute('look-controls', {enabled: true})
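The disable, look, re-enable sequence above can be sketched as one helper. The `setAttribute` and `sceneEl.camera` names are A-Frame APIs taken from the answer; the stub entity below only mimics their shape so the order of operations can be checked without a browser.

```javascript
// Sketch: disable look-controls, point the camera, then hand control back.
function lookAtThenRelease(cameraEl, target) {
  cameraEl.setAttribute('look-controls', { enabled: false }); // stop look-controls overriding rotation
  cameraEl.sceneEl.camera.lookAt(target);                     // aim the underlying THREE camera
  cameraEl.setAttribute('look-controls', { enabled: true });  // give control back to the user
}

// Stand-in entity (not real A-Frame) that records the calls in order:
const calls = [];
const stubCameraEl = {
  setAttribute(name, value) { calls.push([name, value.enabled]); },
  sceneEl: { camera: { lookAt(t) { calls.push(['lookAt', t]); } } }
};

lookAtThenRelease(stubCameraEl, 'some-target-position');
// calls now records: controls off, lookAt, controls back on
```

In a real scene you would keep the controls disabled for the whole duration of an animated lookAt, not just a single call.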
I finally got it working. I am new to three.js and A-Frame, and I surely don't understand rotations, positions, and world coordinates well enough yet, but I think I did a decent job. Here is the link:
https://glitch.com/~aframe-lookat-camera-working
I hope it will be useful to somebody in the future.

Are all renderers good for textures?

So, the scene includes an earth spinning on its axis, a moon rotating around the earth, and a light source to the right that helps simulate the effect of an eclipse. I thought it would be easy, because we've done shadows and transformations before, but I ran into a problem.
In our template we have the following at the top:
// For the assignment where a texture is required you should
// deactivate the Detector and use ONLY the CanvasRenderer. There are some
// issues in using what are called Cross Domain images for textures. You
// can get more details by looking up WebGL and CORS using Google search.
// if ( Detector.webgl )
//     var renderer = new THREE.WebGLRenderer();
// else
var renderer = new THREE.CanvasRenderer();
My problem is, when I leave it like that, the spotlight doesn't appear on the scene. However, as was warned, if I activate the Detector, the textures won't work.
But I need both textures and the spotlight. How do I work around this?
You are confusing yourself. Detector.webgl only checks whether the browser supports WebGL. The code below uses the WebGL renderer if the current browser supports WebGL, and the CanvasRenderer if it does not:
if ( Detector.webgl )
    var renderer = new THREE.WebGLRenderer();
else
    var renderer = new THREE.CanvasRenderer();
With WebGL, loading textures from the local filesystem runs into cross-domain (CORS) restrictions. It's best to serve the code from a web server, or from a local server such as http://www.wampserver.com/en/ for Windows or https://www.mamp.info/en/ for Mac, or an npm package such as https://github.com/tapio/live-server.
As far as I know, shadows are not supported by the CanvasRenderer. I would ask your assignment head to clarify.

ThreeJS: Updating existing objects with Matrix4

Can anyone offer suitable documentation for updating three.js objects with Matrix4? I've found very few samples online, and they seem to use outdated syntax; in this post, for example, multiplySelf is deprecated and the jsfiddle doesn't work.
I've had success getting the transformation to work during the init() function:
object.matrixAutoUpdate = false;
scene.add( object );
var m = new THREE.Matrix4( 1, 0,     0,   0,
                           0, 1.132, 0,   0,
                           0, 0,     1.3, 0,
                           0, 0,     0,   1 );
object.applyMatrix( m );
But I'm specifically trying to activate a transition based on Matrix4 (a user clicks a button and the transformation happens as an animation). I'm having a lot of trouble getting the transformation to operate after the scene is loaded, so thanks in advance for any tips.
You should be able to do this:
object.matrixAutoUpdate = false;
object.matrix.set( 1, 0,     0,   0,
                   0, 1.132, 0,   0,
                   0, 0,     1.3, 0,
                   0, 0,     0,   1 ); // set() takes values in row-major order
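To turn that one-shot set() into the click-triggered animation the question asks for, one option is to interpolate the 16 matrix elements between the identity and the target each frame. The helper below is a hypothetical sketch in plain JS; element-wise interpolation is a simplification that is only safe here because both matrices are pure (diagonal) scales.

```javascript
// Sketch: animate a Matrix4-style transform by interpolating its 16
// elements between a start and an end matrix. Valid here only because
// the two matrices differ in scale terms alone.
function lerpElements(from, to, t) {
  return from.map((v, i) => v + (to[i] - v) * t);
}

const identity = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const target   = [1, 0, 0, 0,  0, 1.132, 0, 0,  0, 0, 1.3, 0,  0, 0, 0, 1];

// Halfway through the transition, the scale terms are halfway as well:
const halfway = lerpElements(identity, target, 0.5);

// In a three.js render loop (matrixAutoUpdate must stay false) you would
// then push the current values into the object each frame, e.g.:
//   object.matrix.set(...lerpElements(identity, target, t));
```

A smoother result comes from easing t (e.g. t * t * (3 - 2 * t)) instead of feeding the raw animation progress straight in.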

Combine THREE.WebGLRenderer and Kinetic.js Layers

Hey guys,
I'm trying to combine three.js and Kinetic.js in my web application, but I'm having problems doing this with THREE.WebGLRenderer. How can I set up my view so that I have a 3D layer rendered by THREE.WebGLRenderer and a separate layer on top of it for 2D elements (e.g. labels) using Kinetic.js?
I've tried to give the WebGLRenderer the canvas element of an instance of a Kinetic.Layer Element. But it does not work.
this.renderer = new THREE.WebGLRenderer({
  antialias: true,
  preserveDrawingBuffer: true,
  canvas: this.layer3D.getCanvas()._canvas
});
Until now I only found examples that do this with the THREE.CanvasRenderer.
Ideas somebody? Thanks a lot.
A canvas can have either a 2D context or a 3D context, but not both, as the two are considered incompatible. When you pass in the canvas from a Kinetic layer, it already has a 2D context bound to it.
However you can have another HTML element (ex, DIV) on top of the GL rendered canvas.
Hello, I just want to say this may not be possible. As far as I know, KineticJS is canvas-based, so what you want to do is only possible using the CanvasRenderer.
The workaround I can think of: if the browser supports WebGL, you might be able to place the WebGL element on top of your KineticJS element.
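The overlay both answers suggest boils down to absolutely positioning two elements at the same origin and ordering them with z-index. The sketch below applies that as inline styles from JS; the stub objects stand in for real DOM elements (in real code you would pass the Kinetic layer's canvas and the renderer's domElement), so the logic can be checked anywhere.

```javascript
// Sketch: stack a 2D layer above the WebGL canvas instead of sharing
// one canvas between the two libraries.
function stackLayers(glCanvas, uiCanvas) {
  [glCanvas, uiCanvas].forEach((el, i) => {
    el.style.position = 'absolute'; // both layers share the same origin
    el.style.top = '0';
    el.style.left = '0';
    el.style.zIndex = String(i);    // i = 1 puts the 2D layer on top
  });
}

// Stand-in elements (real code would use actual DOM nodes):
const glStub = { style: {} };
const uiStub = { style: {} };
stackLayers(glStub, uiStub);
```

Note that the top layer will also swallow mouse events unless you set pointer-events: none on the regions that should click through to the WebGL canvas.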
