Procedure for RenderToTexture in Android NDK using OpenGL ES 2.0?

We have been working on an Android NDK project that uses OpenGL ES 2.0 and have been successful in rendering 3D models, but we have been unable to work out the "RenderToTexture" functionality, which draws the output to a desired texture. What is the procedure?

Rendering to a texture and then using that texture for further rendering involves first creating an FBO and binding it as the current render target, doing the first-pass render, then setting the rendered texture up as an input and rendering again. The shaders used in the two passes, and other GL state, can differ.
Assuming all other state stays the same, a simple approach to rendering offscreen and reusing the result as an input (this is native C, not NDK, but the API and flow should be the same) is described in:
https://gist.github.com/prabindh/8173489
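To make the flow concrete, here is a minimal sketch in TypeScript against WebGL 1.0, whose API mirrors OpenGL ES 2.0 (in the NDK the same sequence is done with the gl* C functions, e.g. glGenFramebuffers/glFramebufferTexture2D). The drawScene and drawFullScreenQuad callbacks are hypothetical placeholders for your own first-pass and second-pass draw calls.

function renderToTexture(
  gl: WebGLRenderingContext,
  size: number,
  drawScene: () => void,           // hypothetical: issues the first-pass draw calls
  drawFullScreenQuad: () => void,  // hypothetical: second-pass draw that samples the texture
) {
  // Create the texture that will receive the first-pass output
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  // Create the FBO and attach the texture as its color buffer
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error('FBO incomplete');
  }

  // Pass 1: render the scene into the texture
  gl.viewport(0, 0, size, size);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawScene();

  // Pass 2: back to the default framebuffer, sampling the texture just rendered
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  drawFullScreenQuad();
}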

Related

The best approach to using only ThreeJS for building an interactive UI without HTML DOM overlays

Can I have a 2D layer for UI, text, buttons, etc. over the 3D scene in ThreeJS?
Ideally something like the engine from PixiJS inside ThreeJS? I've seen that PixiJS offers some 3D features, so why not combine both libraries into something super-powerful? I just do not want to place any HTML DOM elements over the WebGL canvas, as this will probably slow down performance on mobile devices.
One way to solve this issue is to implement the UI as screen-space sprites, as demonstrated in the following official example (check out how the red sprites are rendered):
https://threejs.org/examples/webgl_sprites
The idea is to render them with a separate orthographic camera and an additional call of WebGLRenderer.render(). Besides, instances of THREE.Sprite support raycasting, which is of course useful when implementing interaction.
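A minimal sketch of that setup, assuming a module build of three.js; the button.png asset and all sizes are hypothetical:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.autoClear = false; // we issue two render() calls per frame
document.body.appendChild(renderer.domElement);

// Main 3D scene
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

// UI layer: screen-space sprites drawn with an orthographic camera (units = pixels)
const sceneUI = new THREE.Scene();
const cameraUI = new THREE.OrthographicCamera(
  -window.innerWidth / 2, window.innerWidth / 2,
  window.innerHeight / 2, -window.innerHeight / 2, 1, 10);
cameraUI.position.z = 10;

const buttonTexture = new THREE.TextureLoader().load('button.png'); // hypothetical asset
const button = new THREE.Sprite(new THREE.SpriteMaterial({ map: buttonTexture }));
button.scale.set(128, 48, 1);                                                     // size in pixels
button.position.set(-window.innerWidth / 2 + 80, window.innerHeight / 2 - 40, 1); // near the top-left corner
sceneUI.add(button);

function animate() {
  requestAnimationFrame(animate);
  renderer.clear();
  renderer.render(scene, camera);     // 3D content
  renderer.clearDepth();              // keep the UI on top
  renderer.render(sceneUI, cameraUI); // UI overlay
}
animate();

For interaction, raycast against sceneUI with cameraUI rather than the perspective camera, since the sprites live in the orthographic UI layer.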
Building on Mugen87's answer, you can also use THREE.Shape to make visual containers adapted to the user's screen size:
https://threejs.org/docs/#api/en/extras/core/Shape
You can also use THREE.Shape to make mesh-based text, as illustrated in this example:
https://threejs.org/examples/?q=text#webgl_geometry_text_shapes
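As a rough illustration of the container idea (all sizes here are hypothetical), a rounded panel can be built from a THREE.Shape outline and added to the same orthographic UI layer described above:

import * as THREE from 'three';

// A width x height panel (in UI pixels) with rounded corners, built from a 2D outline
function makePanel(width = 200, height = 80, radius = 12) {
  const shape = new THREE.Shape();
  const w = width / 2, h = height / 2, r = radius;
  shape.moveTo(-w + r, -h);
  shape.lineTo(w - r, -h);
  shape.quadraticCurveTo(w, -h, w, -h + r);
  shape.lineTo(w, h - r);
  shape.quadraticCurveTo(w, h, w - r, h);
  shape.lineTo(-w + r, h);
  shape.quadraticCurveTo(-w, h, -w, h - r);
  shape.lineTo(-w, -h + r);
  shape.quadraticCurveTo(-w, -h, -w + r, -h);

  const geometry = new THREE.ShapeGeometry(shape);
  const material = new THREE.MeshBasicMaterial({ color: 0x222222, transparent: true, opacity: 0.8 });
  return new THREE.Mesh(geometry, material);
}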
You should also have a look at three-mesh-ui, an add-on for building mesh-based user interfaces with three.js:
https://github.com/felixmariotto/three-mesh-ui

Three.js - is there a simple way to process a texture in a fragment shader and get it back in JavaScript code using GPUComputationRenderer?

I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object.
As this texture is not meant to change over time, I want to process it only once.
I think that GPUComputationRenderer could do the trick, but I can't figure out how, or what the minimal code is that can achieve this.
Sounds like you just want to perform basic RTT. In this case, I suggest you use THREE.WebGLRenderTarget. The idea is to set up a simple scene with a full-screen quad and a custom instance of THREE.ShaderMaterial containing the shader code that produces the texture. Instead of rendering to the screen (the default framebuffer), you render to the render target. In the next step, you can use this render target as a texture in your actual scene.
Check out the following example that demonstrates this workflow.
https://threejs.org/examples/webgl_rtt
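A minimal sketch of that workflow; the gradient fragment shader is just a hypothetical stand-in for your own procedural code:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();

// Offscreen target that will hold the generated texture
const size = 512;
const target = new THREE.WebGLRenderTarget(size, size);

// Full-screen quad with the procedural shader
const rttScene = new THREE.Scene();
const rttCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const rttMaterial = new THREE.ShaderMaterial({
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(gl_FragCoord.xy / ${size}.0, 0.0, 1.0); // placeholder gradient
    }
  `,
});
rttScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), rttMaterial));

// Render once into the target, then restore the default framebuffer
renderer.setRenderTarget(target);
renderer.render(rttScene, rttCamera);
renderer.setRenderTarget(null);

// Use the result like any other texture in the actual scene
const material = new THREE.MeshBasicMaterial({ map: target.texture });

If you additionally need the pixel data back on the CPU, WebGLRenderer.readRenderTargetPixels() can copy the render target's contents into a typed array.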
GPUComputationRenderer is actually intended for GPGPU which I don't think is necessary for your use case.

Preserving textures not visible in a Rajawali scene

I have a working 360° video viewer built using Rajawali/Google VR. I'm trying to modify it to display a texture that is the output of a non-Rajawali GL program, rather than displaying the StreamingTexture from the video directly (i.e. within onDrawEye I'm using a separate GL program and passing it a texture ID; it renders to a framebuffer with an attachment that will be used as the texture on the sphere).
The problem I am having is that however I manage the textures (either directly with OpenGL, or using Rajawali's texture classes), they are empty within my inner GL program (i.e. black/no output when bound to a sampler). The only thing that works is adding the texture to a material on some dummy object visible within the scene; then the texture is available within my separate program, but that's not what I want. I've tried just adding the texture to the TextureManager, but that is not enough to keep it around. What I'm trying to do is extremely simple and works fine without Rajawali or the GVR machinery.
What is causing even textures I generate and manage myself to be compromised? I don't have a minimal failing example, but could put one together with some effort.

Three.js within web worker: Simulating animation without rendering to canvas

I have a hypothetical question:
Is it possible to simulate an animation of objects without rendering it to the canvas? I just want to capture the objects' positions using Vector.project(camera) and present them using CSS, while THREE.DeviceOrientationControls controls how the camera "views" the simulation.
I tried commenting out THREE.WebGLRenderer, but it seems that THREE.PerspectiveCamera then cannot update its matrixWorld property. Hence, the camera seems not to move and Vector.project(camera) gives a static value. I am doing this because I need to put my three.js code inside a web worker.
Do I still need to use THREE.WebGLRenderer to have a working simulation?
UPDATE:
I dug deeper into ((three.scene.getObjectByName("one")).matrixWorld.getPosition()).project(three.camera); and inspected the following values under the above constraints (inside a web worker, no renderer), using this example:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":1.7000000476837158,"13":1,"14":0,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
But in the normal, unmodified case (renderer present), I inspect the following:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":-1,"7":0,"8":0,"9":1,"10":-2.220446049250313e-16,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":1,"7":0,"8":0,"9":-1,"10":-2.220446049250313e-16,"11":0,"12":1.7000000476837158,"13":-2.220446049250313e-16,"14":1,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":-4.761761943205948e-16,"6":-1.000100016593933,"7":-1,"8":0,"9":-2.1445069313049316,"10":2.2206681307011713e-16,"11":2.220446049250313e-16,"12":5.4684929847717285,"13":-4.761761943205948e-16,"14":-1.2001099586486816,"15":-1}}
I noticed that the camera.matrixWorld property differs significantly between the two conditions. I do not understand what makes the difference.
Apparently, the following lines from THREE.WebGLRenderer.render are still needed to update the camera.matrixWorld property:
scene.updateMatrixWorld();
camera.updateMatrixWorld();
camera.matrixWorldInverse.getInverse(camera.matrixWorld);
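A minimal sketch of reproducing those updates by hand inside the worker, with no renderer at all. The object name "one" comes from the snippet above; the viewport size is a hypothetical value posted in from the main thread, and getInverse() matches the older three.js API quoted above (recent releases use copy(...).invert() instead):

import * as THREE from 'three';

const width = 800, height = 600; // hypothetical viewport size sent from the main thread

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, width / height, 0.1, 100);

const one = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1));
one.name = 'one';
one.position.set(0, 0, -5);
scene.add(one);

function simulate() {
  // The part of WebGLRenderer.render() we have to reproduce ourselves:
  scene.updateMatrixWorld();
  camera.updateMatrixWorld();
  camera.matrixWorldInverse.getInverse(camera.matrixWorld); // older API, as quoted above

  // Projection now reflects the current camera orientation
  const pos = new THREE.Vector3();
  pos.setFromMatrixPosition(scene.getObjectByName('one')!.matrixWorld);
  pos.project(camera); // normalized device coordinates in [-1, 1], ready to map to CSS
}

setInterval(simulate, 16); // drive the loop inside the worker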

How to prevent LaTeX labels in a MATLAB GUI from becoming blurry?

Within my current MATLAB GUI project, I have two axes objects. The first is used by the "uibutton" workaround (I don't use GUIDE) in order to display a LaTeX formula (as far as I know, only axes labels are capable of using LaTeX, whereas normal static text fields aren't). The other axes object is used to actually plot a 3D function.
The program has the following steps:
The first axes creates the LaTeX formula (e.g. f(x)=).
The user enters a function in the edit field after the LaTeX formula (e.g. f(x)=a+b).
The user presses a "plot" button.
The 3D function is plotted in the second axes object.
Problem:
As soon as the 3D function is plotted, the nicely rendered LaTeX formula becomes blurry. Is there any way to prevent this from happening?
http://i42.tinypic.com/348pq2u.png (See picture for problem demonstration)
Check your figure properties before and after you draw the 3D plot:
get(gcf, 'renderer')
My guess is that plotting the 3D function changes the renderer from the default ("painters") to another (likely OpenGL). MATLAB's LaTeX rendering does not seem to play well with zbuffer or OpenGL (these produce bitmaps rather than line art).
You may be stuck if painters can't render your 3D graphics properly, but you can try to force it by setting the renderer manually back to painters:
set(gcf, 'renderer', 'painters')
