OpenGL & Multiview - opengl-es

I am trying to implement multiview using OpenGL ES. In my vertex shader I specified:
#extension GL_OVR_multiview : enable
layout(num_views=2) in;
And I am sending two MVP (model-view-projection) matrices.
I have created an FBO with an array texture as an attachment. I bind this FBO and render to it, and everything works as expected.
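For reference, a simplified sketch of this setup, assuming the GL_OVR_multiview entry points are loaded (sizes and names are placeholders, error checking omitted):

GLuint texArray = 0, fbo = 0;
const GLsizei width = 1024, height = 1024;   // placeholder per-view size

glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2); // 2 layers = 2 views

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
// Attach layers 0..1 of the array texture as the two views:
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 texArray, 0 /*level*/, 0 /*baseViewIndex*/, 2 /*numViews*/);
// In the vertex shader, the per-view matrix is then selected with
// something like: gl_Position = u_mvp[gl_ViewID_OVR] * a_position;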
Now my question is:
is it possible to render to only one view in one draw call, even though I specified two views? I want to render to a single view per draw call and change the view index in the vertex shader.
Is it possible to change gl_ViewID_OVR in the vertex shader?
Please help me with this.
Thank you.

Related

RealityKit fit model entity within field of view

Using RealityKit, I'm trying to position a PerspectiveCamera to fit a ModelEntity within its field of view; specifically, I want the entity to fill the frame as much as possible. Does anyone have a function to do this? I've seen an example in a different environment but have not been successful in translating it to RealityKit. I need the function to take all three axes into account when determining the camera distance.
Thanks,
Spiff

Three.js - is there a simple way to process a texture in a fragment shader and get it back in JavaScript code using GPUComputationRenderer?

I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object.
As this texture is not meant to change over time, I want to process it only once.
I think that GPUComputationRenderer could do the trick, but I can't figure out how, or what the minimal code is that can achieve this.
Sounds like you just want to perform basic RTT (render-to-texture). In this case, I suggest you use THREE.WebGLRenderTarget. The idea is to set up a simple scene with a full-screen quad and a custom instance of THREE.ShaderMaterial containing the shader code that produces your texture. Instead of rendering to the screen (the default framebuffer), you render to the render target. In the next step, you can use this render target as a texture in your actual scene.
Check out the following example that demonstrates this workflow.
https://threejs.org/examples/webgl_rtt
GPUComputationRenderer is actually intended for GPGPU, which I don't think is necessary for your use case.

Procedure for RenderToTexture in Android NDK using OpenGL ES 2.0?

We have been working on an Android NDK project that uses OpenGL ES 2.0 and are successfully rendering 3D models, but we have been unable to work out the "RenderToTexture" functionality, which draws the output to a desired texture. What's the procedure?
Rendering to a texture, and then using that texture for further rendering, involves first creating the FBO and binding it as the current render target, doing the first-pass render, then setting up the rendered texture as an input and rendering again. The shaders used in the two passes, and other states, can differ.
Assuming all other states remain the same, a simple approach to rendering offscreen and reusing the result as input (this is native C, not NDK, but the API and flow should be the same) is described in:
https://gist.github.com/prabindh/8173489
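For reference, a bare-bones sketch of that flow, assuming a working ES 2.0 context (the 512x512 size and the program names are placeholders):

GLuint fbo = 0, tex = 0;

/* Create the texture that will receive the first pass. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Create the FBO and attach the texture as its color buffer. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
/* glCheckFramebufferStatus(GL_FRAMEBUFFER) should report GL_FRAMEBUFFER_COMPLETE here. */

/* Pass 1: render the scene into the texture. */
glViewport(0, 0, 512, 512);
/* glUseProgram(firstPassProgram); ... draw calls ... */

/* Pass 2: render to the default framebuffer, sampling the texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex);
/* glUseProgram(secondPassProgram); ... draw a full-screen quad sampling tex ... */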

Render multiple objects with different characteristics with OpenGL ES 2.0

I am a beginner with OpenGL ES 2.0 and have been learning it for a few weeks. I can now render multiple objects. However, I have a problem. My question is: how can I render two objects, one that rotates and one that does not? When I want to rotate an object, I use the function esRotate() on the model-view matrix.
Thanks
The simple solution is to call glDrawArrays() or glDrawElements() twice: the first call for the model you do want to rotate, and the second for the model you do not. Only apply esRotate() for the first call.
Note that you will also need to call glUniformMatrix4fv() twice to reload the model-view matrix for each model, with and without the rotation applied.
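A minimal sketch of that pattern; the esUtil matrix helpers, the uniform location mvpLoc, and the vertex counts are assumptions from a typical ES 2.0 setup:

ESMatrix mvp;

/* First object: apply the rotation before uploading the matrix. */
esMatrixLoadIdentity(&mvp);
esRotate(&mvp, angle, 0.0f, 1.0f, 0.0f);   /* e.g. rotate about the Y axis */
/* ... multiply in the view/projection matrices as usual ... */
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, (GLfloat *)&mvp.m[0][0]);
glDrawArrays(GL_TRIANGLES, 0, rotatingVertexCount);

/* Second object: reload the matrix without calling esRotate(). */
esMatrixLoadIdentity(&mvp);
/* ... view/projection only ... */
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, (GLfloat *)&mvp.m[0][0]);
glDrawArrays(GL_TRIANGLES, 0, staticVertexCount);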

Rendering a QGraphicsScene using OpenGL in Qt

I'm trying to save a QGraphicsScene rendered with OpenGL as an image (PNG or JPEG), but I don't want the image to depend on the current view (zoom). That's why I'm not using grabFrameBuffer() but render() instead:
QImage imgToSave(1024,768,QImage::Format_ARGB32_Premultiplied);
// fill the image
// and define rectbuffer(), the QRect() containing what I want to save
QPainter painter(&imgToSave);
m_scene = new QGraphicsScene;
// fill the Scene
m_scene->render(&painter,imgToSave.rect(),rectbuffer());
It does work. My question is: is it using OpenGL capabilities or not?
If not, how can I do so?
N.B.: I am using a QGLWidget as the viewport for my QGraphicsView, and the display using OpenGL is working. My concern is only with the image saving.
I am going to guess: no, because for the QGraphicsScene to render through OpenGL you need to specify a QGLWidget-derived object as the viewport of the scene, which is not what happens here, so this path is almost certainly using the raster engine. Secondly, QPainter uses whatever paint device it is constructed with as its backend, and you have specified a plain QImage, which does not use OpenGL.
If you can't or won't use a QGLWidget through a QGraphicsView, then you might be able to render onto a QGLFramebufferObject, but this brings its own complications; namely, you will have to create a hidden context beforehand.
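A rough sketch of that route, assuming a current OpenGL context is available (here taken from the QGLWidget viewport; glWidget, sceneSourceRect, and the 1024x768 size are placeholders):

glWidget->makeCurrent();                          // an OpenGL context must be current
QGLFramebufferObject fbo(1024, 768);              // a GL-backed paint device
QPainter painter(&fbo);                           // painting now goes through OpenGL
m_scene->render(&painter, QRectF(0, 0, 1024, 768), sceneSourceRect);
painter.end();
fbo.toImage().save("scene.png");                  // read back into a QImage and save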
