Can I have a 2D layer for UI, text, buttons, etc. over the 3D scene in three.js?
Ideally something like the engine from PixiJS inside three.js? I've seen that PixiJS offers some 3D features, so why not combine both libraries into something super-powerful? I just don't want to place any HTML DOM elements over the WebGL canvas, as this would probably hurt performance on mobile devices.
One way to solve this is to implement the UI as screen-space sprites, as demonstrated in the following official example (check out how the red sprites are rendered):
https://threejs.org/examples/webgl_sprites
The idea is to render them with a separate orthographic camera and an additional call to WebGLRenderer.render(). Besides, instances of THREE.Sprite support raycasting, which is of course useful when implementing interaction.
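A minimal sketch of that two-pass setup (the names uiScene and uiCamera and the button.png texture are placeholders, not part of the linked example):

const uiScene = new THREE.Scene();
const uiCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 1, 10);
uiCamera.position.z = 10;

// a hypothetical button sprite placed in normalized screen coordinates
const buttonTexture = new THREE.TextureLoader().load('button.png');
const button = new THREE.Sprite(new THREE.SpriteMaterial({ map: buttonTexture }));
button.scale.set(0.2, 0.1, 1);
button.position.set(0.8, -0.85, 1); // bottom-right corner
uiScene.add(button);

renderer.autoClear = false; // we clear manually between the two passes

function render() {
  renderer.clear();
  renderer.render(scene, camera);   // 3D scene with the perspective camera
  renderer.clearDepth();            // keep the UI on top of the 3D content
  renderer.render(uiScene, uiCamera);
  requestAnimationFrame(render);
}

For interaction, convert the pointer position to normalized device coordinates, pass it together with uiCamera to a THREE.Raycaster, and test it against the sprites.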
Building on Mugen87's answer, you can also use THREE.Shape to make visual containers adapted to the user's screen size:
https://threejs.org/docs/#api/en/extras/core/Shape
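As a rough illustration (reusing the overlay uiScene from the sprite answer above; the sizes and colors are arbitrary), a rounded-rectangle panel could be built from a THREE.Shape and scaled to the current viewport:

function makePanel(width, height, radius) {
  const shape = new THREE.Shape();
  shape.moveTo(-width / 2 + radius, -height / 2);
  shape.lineTo(width / 2 - radius, -height / 2);
  shape.quadraticCurveTo(width / 2, -height / 2, width / 2, -height / 2 + radius);
  shape.lineTo(width / 2, height / 2 - radius);
  shape.quadraticCurveTo(width / 2, height / 2, width / 2 - radius, height / 2);
  shape.lineTo(-width / 2 + radius, height / 2);
  shape.quadraticCurveTo(-width / 2, height / 2, -width / 2, height / 2 - radius);
  shape.lineTo(-width / 2, -height / 2 + radius);
  shape.quadraticCurveTo(-width / 2, -height / 2, -width / 2 + radius, -height / 2);
  const material = new THREE.MeshBasicMaterial({ color: 0x222222, transparent: true, opacity: 0.7 });
  return new THREE.Mesh(new THREE.ShapeGeometry(shape), material);
}

const panel = makePanel(1.8, 0.3, 0.05); // e.g. a bar along the bottom of the screen
panel.position.set(0, -0.8, 1);
uiScene.add(panel);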
You can also use THREE.Shape to make mesh-based text, as illustrated in this example:
https://threejs.org/examples/?q=text#webgl_geometry_text_shapes
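A sketch along the lines of that example (the font path is a placeholder, and in recent three.js versions FontLoader lives in examples/jsm/loaders/FontLoader.js rather than on the THREE namespace):

const loader = new THREE.FontLoader();
loader.load('fonts/helvetiker_regular.typeface.json', (font) => {
  const shapes = font.generateShapes('Score: 0', 0.08); // text and size in UI units
  const geometry = new THREE.ShapeGeometry(shapes);
  geometry.center();
  const label = new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ color: 0xffffff, side: THREE.DoubleSide })
  );
  label.position.set(0, -0.8, 2); // in front of the panel above
  uiScene.add(label);
});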
You should also have a look at three-mesh-ui, an add-on for building mesh-based user interfaces with three.js:
https://github.com/felixmariotto/three-mesh-ui
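A rough usage sketch (the exact options and the MSDF font assets come from three-mesh-ui's examples and may differ between versions):

import ThreeMeshUI from 'three-mesh-ui';

const container = new ThreeMeshUI.Block({
  width: 1.2,
  height: 0.5,
  padding: 0.05,
  fontFamily: './assets/Roboto-msdf.json', // placeholder asset paths
  fontTexture: './assets/Roboto-msdf.png',
});
container.add(new ThreeMeshUI.Text({ content: 'Hello UI' }));
container.position.set(0, 1, -2);
scene.add(container);

function animate() {
  ThreeMeshUI.update(); // three-mesh-ui needs this call in the render loop
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}

Unlike the orthographic overlay above, these blocks live in the 3D scene itself, which also makes them usable in VR.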
I have a working 360° video viewer built using Rajawali/Google VR. I'm trying to modify it to display a texture that is the output of a non-Rajawali GL program, rather than displaying the StreamingTexture from the video directly (i.e. within onDrawEye I'm using a separate GL program and passing it a texture ID; it renders to a framebuffer whose attachment will be used as the texture on the sphere).
The problem I'm having is that however I manage the textures (either directly with OpenGL or using Rajawali's texture classes), they are empty within my inner GL program (i.e. black/no output when bound to a sampler). The only thing that works is adding the texture to a material on some dummy object visible within the scene; then the texture is available within my separate program, but that's not what I want. I've tried just adding the texture to the TextureManager, but that isn't sufficient to keep it around. What I'm trying to do is extremely simple and works fine without Rajawali or the GVR machinery.
What's causing even textures that I generate and manage myself to be compromised? I don't have a minimal failing example, but I could put one together with some effort.
I have a hypothetical question:
Is it possible to simulate an animation of objects without rendering it to the canvas? I just want to capture the objects' positions using Vector.project(camera) and present them using CSS, with THREE.DeviceOrientationControls controlling how the camera "views" the simulation.
I tried commenting out THREE.WebGLRenderer, but it seems that THREE.PerspectiveCamera then no longer updates its matrixWorld property. Hence the camera does not seem to move and Vector.project(camera) gives a static value. I'm doing this because I need to put my three.js code inside a web worker.
Do I still need to use THREE.WebGLRenderer to have a working simulation?
UPDATE:
I checked the following:
I dug deeper into ((three.scene.getObjectByName("one")).matrixWorld.getPosition()).project(three.camera); and inspected the following values under the above constraints (inside a web worker, no renderer), using this example:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":1.7000000476837158,"13":1,"14":0,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
But in the normal case (no modification), I see the following:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":-1,"7":0,"8":0,"9":1,"10":-2.220446049250313e-16,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":1,"7":0,"8":0,"9":-1,"10":-2.220446049250313e-16,"11":0,"12":1.7000000476837158,"13":-2.220446049250313e-16,"14":1,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":-4.761761943205948e-16,"6":-1.000100016593933,"7":-1,"8":0,"9":-2.1445069313049316,"10":2.2206681307011713e-16,"11":2.220446049250313e-16,"12":5.4684929847717285,"13":-4.761761943205948e-16,"14":-1.2001099586486816,"15":-1}}
I noticed that the camera.matrixWorld property differs significantly between the two conditions. I do not understand what makes the difference.
Apparently, the following lines from THREE.WebGLRenderer.render are still needed to update the camera.matrixWorld property:
scene.updateMatrixWorld();
camera.updateMatrixWorld();
camera.matrixWorldInverse.getInverse(camera.matrixWorld);
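A hedged sketch of how such a renderer-less update loop could look inside the worker (controls, screenWidth and screenHeight are placeholders; DeviceOrientationControls would also need the orientation events forwarded into the worker, since it normally listens on window):

function tick() {
  controls.update(); // DeviceOrientationControls driving the camera

  // do manually what WebGLRenderer.render() would otherwise do:
  scene.updateMatrixWorld();
  camera.updateMatrixWorld();
  camera.matrixWorldInverse.getInverse(camera.matrixWorld);

  // project() now yields up-to-date normalized device coordinates (-1..1)
  const ndc = new THREE.Vector3()
    .setFromMatrixPosition(scene.getObjectByName('one').matrixWorld)
    .project(camera);

  // convert to CSS pixel coordinates for the main thread to apply
  const x = (ndc.x + 1) / 2 * screenWidth;
  const y = (1 - ndc.y) / 2 * screenHeight;
  postMessage({ x: x, y: y });
}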
Within my current MATLAB GUI project, I have two axes objects. The first is used by the "uibutton" workaround (I don't use GUIDE) to display a LaTeX formula (as far as I know, only axes labels are capable of using LaTeX, whereas normal static text fields aren't...). The other axes object is used to actually plot a 3D function.
The program has the following steps:
1. The first axes creates the LaTeX formula (e.g. f(x)=).
2. The user enters a function in the edit field after the LaTeX formula (e.g. f(x)=a+b).
3. The user presses a "plot" button.
4. The 3D function is plotted in the second axes object.
Problem:
As soon as the 3D function is plotted, the nicely rendered LaTeX formula becomes pixelated. Is there any way to prevent this from happening?
http://i42.tinypic.com/348pq2u.png (See picture for problem demonstration)
Check your figure's renderer property before and after you draw the 3D plot:
get(gcf, 'renderer')
My guess is that plotting the 3D function changes the renderer from the default ("painters") to another (likely OpenGL). MATLAB's LaTeX rendering does not seem to play well with zbuffer or OpenGL (these produce bitmaps rather than line art).
You may be stuck if painters can't render your 3D graphics properly, but you can try to force it by manually setting the renderer back to painters:
set(gcf, 'renderer', 'painters')