Preserving textures not visible in a Rajawali scene - opengl-es

I have a working 360 video viewer built using Rajawali/Google VR. I'm trying to modify it to display a texture that is the output of a non-Rajawali GL program, rather than displaying the StreamingTexture from the video directly (i.e. within onDrawEye I'm using a separate GL program and passing it a texture ID; it renders to a framebuffer whose attachment will be used as the texture on the sphere).
The problem I am having is that however I manage the textures (either directly with OpenGL, or using Rajawali's texture classes), they are empty within my inner GL program (i.e. black/no output when bound to a sampler). The only thing that works is adding the texture to a material on some dummy object visible within the scene - then the texture is available within my separate program - but that's not what I want. I've tried simply adding the texture to the TextureManager, but that isn't enough to keep it around. What I'm trying to do is extremely simple and works fine without Rajawali or the GVR machinery.
What's causing even textures I generate and manage myself to be compromised? I don't have a minimal failing example, but could put one together with some effort.

Related

Three.js - is there a simple way to process a texture in a fragment shader and get it back in javascript code using GPUComputationRenderer?

I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object.
Since this texture is not meant to change over time, I want to generate it only once.
I think that GPUComputationRenderer could do the trick, but I can't figure out how, or what the minimal code to achieve this would look like.
It sounds like you just want to perform basic RTT (render-to-texture). In this case, I suggest you use THREE.WebGLRenderTarget. The idea is to set up a simple scene with a full-screen quad and a custom instance of THREE.ShaderMaterial containing the shader code that produces the texture. Instead of rendering to the screen (the default framebuffer), you render to the render target. In the next step, you can use this render target's texture in your actual scene.
Check out the following example that demonstrates this workflow.
https://threejs.org/examples/webgl_rtt
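A minimal sketch of that workflow, assuming an existing renderer and scene and an illustrative procedural fragment shader (older three.js versions pass the target to renderer.render() instead of using setRenderTarget()):

// Offscreen scene: a full-screen quad driven by the texture-generating shader.
var rttScene = new THREE.Scene();
var rttCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var renderTarget = new THREE.WebGLRenderTarget( 512, 512 );
var rttMaterial = new THREE.ShaderMaterial( {
  vertexShader: 'void main() { gl_Position = vec4( position, 1.0 ); }',
  fragmentShader: 'void main() { gl_FragColor = vec4( gl_FragCoord.xy / 512.0, 0.5, 1.0 ); }' // illustrative pattern
} );
rttScene.add( new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), rttMaterial ) );

// Render once into the target (on older versions: renderer.render( rttScene, rttCamera, renderTarget )).
renderer.setRenderTarget( renderTarget );
renderer.render( rttScene, rttCamera );
renderer.setRenderTarget( null );

// Reuse the generated texture on an object in the actual scene.
var box = new THREE.Mesh(
  new THREE.BoxBufferGeometry( 1, 1, 1 ),
  new THREE.MeshBasicMaterial( { map: renderTarget.texture } )
);
scene.add( box );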
GPUComputationRenderer is actually intended for GPGPU, which I don't think is necessary for your use case.

Occlusion of real-world objects using three.js

I’m using three.js inside an experimental augmented-reality web browser. (The browser is called Argon. Essentially, Argon uses Qualcomm’s Vuforia AR SDK to track images and objects in the phone camera. Argon sends the tracking information into JavaScript, where it uses transparent web pages with three.js to create 3D graphics on top of the phone video feed.) My question is about three.js, however.
The data Argon sends into the web page allows me to align the 3D camera with the physical phone camera and draw 3D graphics such that they appear to align with the real world as expected. I would also like to have some of the things in the physical world occlude the 3D graphics (I have 3D models of the physical objects, because I’ve set the scene up or because they are prepared objects like boxes that are being tracked by Vuforia).
I’m wondering if folks have suggestions on the best way to accomplish this occlusion with three.js. Thanks.
EDIT: it appears that the next version of three.js (R71) will have a simpler way to do this, so if you can use the dev branch (or just wait), you can do this much more easily. See this post: three.js transparent object occlusion
MY ORIGINAL ANSWER (without using the new features in R71):
I think the best way to do this (to avoid extra work, such as creating new rendering passes) is to modify the WebGL renderer (src/renderers/WebGLRenderer.js) and add support for a new kind of object, perhaps called “occlusionObjects”.
If you look in the renderer, you will see two current object lists, opaqueObjects and transparentObjects. The renderer sorts the renderable objects into these two lists, so that it can render the opaque objects first, and then the transparent objects after them. What you need to do is store all of your new objects into the occlusionObjects list rather than those two. You will see that the opaque and transparent objects are sorted based on their material properties. I think here, you may want to add a property to an object you want to be an occluder (“myObject.occluder = true”, perhaps), and just pull those objects out.
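A sketch of that sorting change (the surrounding renderer code and variable names vary between three.js versions, so treat this as illustrative):

// Inside the renderer's object-sorting step (illustrative only):
if ( object.occluder === true ) {
    occlusionObjects.push( webglObject );      // new depth-only list
} else if ( material.transparent ) {
    transparentObjects.push( webglObject );
} else {
    opaqueObjects.push( webglObject );
}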
Once you have the three lists, look what the render() function does with these object lists. You’ll see a couple of places with rendering calls like this:
renderObjects( opaqueObjects, camera, lights, fog, true, material );
Add something like this before that line, to turn off writing into the color buffers, render the occlusion objects into the depth buffer only, and then turn color buffer writes back on before you render the remaining objects.
context.colorMask( false, false, false, false );  // stop writing to the color buffers
renderObjects( occlusionObjects, camera, lights, fog, true, material );  // depth-only pass for the occluders
context.colorMask( true, true, true, true );  // re-enable color writes for the remaining objects
You’ll need to do this in a couple of places, but it should work.
Now you can just mark any objects in your scene as “occluder = true” and they will only render into the depth buffer, allowing the video to show through and occluding any opaque or transparent objects rendered behind them.
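In application code that would look something like this (the geometry and material here are placeholders):

var wall = new THREE.Mesh( wallGeometry, new THREE.MeshBasicMaterial() );  // wallGeometry: a model of the real-world object
wall.occluder = true;  // custom flag read by the modified renderer
scene.add( wall );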

Procedure for RenderToTexture in android NDK using opengles2.0?

We have been working on an Android NDK project that uses OpenGL ES 2.0 and have been successful in rendering 3D models, but we have been unable to work out the "render to texture" functionality, which draws the output to a desired texture. What's the procedure?
Rendering to a texture, and then using that texture for further rendering, involves first creating the FBO and binding it as the current render target, doing the first-pass render, then setting up the rendered texture as an input, and rendering again. The shaders used in the two steps, and other states, can be different.
Assuming all other states remain the same, a simple approach to rendering offscreen and reusing the result as input (this is native C, not NDK, but the API and flow should be the same) is described in:
https://gist.github.com/prabindh/8173489
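To make the flow concrete, here is a minimal sketch. It is written against the WebGL JavaScript API to match the other snippets in this document; the GLES2 calls used from the NDK (glGenFramebuffers, glBindFramebuffer, glFramebufferTexture2D, glBindTexture, ...) follow the same sequence. Program handles and draw routines are placeholders.

var width = 512, height = 512;
// Create the texture that will receive the first pass.
var targetTex = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, targetTex );
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR );
// Create the FBO and attach the texture as its color buffer.
var fbo = gl.createFramebuffer();
gl.bindFramebuffer( gl.FRAMEBUFFER, fbo );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, targetTex, 0 );
// Pass 1: render offscreen into targetTex.
gl.viewport( 0, 0, width, height );
gl.useProgram( firstPassProgram );   // placeholder program
drawFirstPassGeometry();             // placeholder draw call
// Pass 2: back to the default framebuffer, sampling targetTex.
gl.bindFramebuffer( gl.FRAMEBUFFER, null );
gl.viewport( 0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight );
gl.useProgram( secondPassProgram );  // placeholder program
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture( gl.TEXTURE_2D, targetTex );
gl.uniform1i( gl.getUniformLocation( secondPassProgram, "u_texture" ), 0 );
drawSecondPassGeometry();            // placeholder draw call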

three.js bind same texture multiple times with different wrapping/filterings

Thanks for reading.
I have a WebGLRenderTarget that I render to. At a subsequent stage in the rendering process I use that texture as input to a shader.
I would like to be able to use that same render target with multiple wrappings/filterings. I have looked some at the internals of three.js, and am not sure that it is possible.
It seems like in WebGL I would be able to just bind the same texture multiple times with different parameter settings. I figure I could fork three.js to support a new type of texture that just reuses another texture with new parameters, but wanted to see if there was some way to do this without forking.
Thanks in advance!
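For reference, the raw-WebGL idea mentioned in the question looks roughly like this (in WebGL 1 the wrap/filter settings are part of the texture object's state, so they have to be re-set before each use; texture and draw names are placeholders):

gl.bindTexture( gl.TEXTURE_2D, sharedTexture );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT );   // REPEAT needs a power-of-two texture in WebGL 1
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST );
drawPassA();   // placeholder draw using the texture with these settings
gl.bindTexture( gl.TEXTURE_2D, sharedTexture );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR );
drawPassB();   // placeholder draw using the same texture with different settings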

Placing objects on scrolling background in XNA

How do I place objects that appear only if the background scrolls to a certain point?
Example: I have this long image that keeps scrolling using the technique above. However, after scrolling to a certain part of the image, I want to add a platform there. How would I do that?
In general, you will probably need to save the locations of your objects in a file and then load that file at the beginning of the level (assuming you are making some kind of platformer game). You can do this by creating a class or struct containing all the relevant information for the platform (position, size, texture, etc.) and then using XML serialization to write an array of those classes/structs to a file.
Your level loader would then load and deserialize the level data, which would end up being a list of all the objects in your level (such as platforms). Now that you have the locations of your platforms in memory you have a couple different options on how to get them to the screen.
Draw all the objects (platforms) all the time, whether or not they are in the view of the camera. If your levels don't contain a lot of objects, this would be simple to implement.
Draw only those in the camera's view. Without knowing how you implemented the horizontal scroller, it's kind of hard to make suggestions for this part. Whatever mechanism you currently have to identify the boundaries of what part of the background to show could be used to determine which objects to draw as well.
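As a rough sketch of that second option (written in JavaScript like the other snippets here purely for illustration; the XNA/C# version is a direct translation, and the field and function names are made up):

// Draw a platform only when its horizontal extent overlaps the visible strip.
function isPlatformVisible( platform, cameraX, viewWidth ) {
  return platform.x + platform.width > cameraX && platform.x < cameraX + viewWidth;
}
// Each frame, cull against the camera before drawing.
for ( var i = 0; i < level.platforms.length; i++ ) {
  var platform = level.platforms[ i ];
  if ( isPlatformVisible( platform, cameraX, viewWidth ) ) {
    drawPlatform( platform, cameraX );   // made-up draw routine
  }
}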
I'm working on a game that scrolls vertically right now, and I needed a way to do something similar: place objects in a level and have them appear when the background scrolled to them. I used TorqueX 2D (free engine binaries if you've paid to develop for XNA) and its 2D scene editor to set this up pretty easily. My camera scrolls up while the background stays in place; when it gets to an object position defined in the XML level file, it spawns the object in the level.
