How to render an Entity or a Layer exactly once / in one frame in Qt3D

I am rendering a mesh on screen. When the user presses a button, I render a black square to a texture and then the mesh to the same texture. After this I fire up a compute shader that performs image processing on the "snapshot". My question is: how do I do the rendering exactly once?
These are the steps I need to perform:
1. Render a black square to the texture (to clear the texture).
2. Render the geometry to the texture (to take a snapshot of it so I can process it).
3. Start the compute shader with the texture as input.
I am able to run the compute shader exactly once using the trigger(1) call (technically this was not straightforward, though: https://bugreports.qt.io/browse/QTBUG-86493).
However, how do I render the black square and then the mesh (steps 1 and 2) to a texture exactly once?
Ideally I would like to start the compute shader just after steps 1 and 2 are finished, and it is important that steps 1 and 2 stop being executed once step 3 has run, or else the result of step 3 will be overwritten; this is the problem I am facing now in my Qt3D application.
I guess I could just set the enabled property on the Qt3D entities to true, then to false, and then start the compute shader. But how do I know when to set enabled back to false? If I do it too early, they might get turned on and off before they have had time to be rendered.
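A minimal sketch (not from the post) of one way to settle the "when do I set enabled back to false?" timing: attach a Qt3DLogic::QFrameAction, give the frame graph a frame or two to execute the snapshot branch, then disable it and arm the compute pass. captureBranch is a hypothetical entity grouping the work of steps 1 and 2, and Qt3D gives no hard ordering guarantee here, so the frame count is only a heuristic.

#include <Qt3DCore/QEntity>
#include <Qt3DLogic/QFrameAction>

// Enable the snapshot branch, let it render for a couple of frames,
// then disable it so steps 1 & 2 stop overwriting the result of step 3.
void takeSnapshotOnce(Qt3DCore::QEntity *root, Qt3DCore::QEntity *captureBranch)
{
    auto *frameAction = new Qt3DLogic::QFrameAction(root);
    root->addComponent(frameAction);

    captureBranch->setEnabled(true);   // steps 1 & 2 may now run
    QObject::connect(frameAction, &Qt3DLogic::QFrameAction::triggered, root,
                     [captureBranch, framesLeft = 2](float) mutable {
        if (framesLeft > 0 && --framesLeft == 0) {
            captureBranch->setEnabled(false);
            // At this point the compute pass could be armed, e.g. with the
            // trigger(1) call mentioned above.
        }
    });
}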

Related

Per-object post-processing

Suppose I need to render the following scene:
Two cubes, one yellow, another red.
The red cube needs to 'glow' with red light, the yellow one does not glow.
The cubes are rotating around the common center of gravity.
The camera is positioned in such a way that when the red, glowing cube is close to the camera, it partially obstructs the yellow cube, and when the yellow cube is close to the camera, it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
WAY 1
1. Render the yellow cube to the screen.
2. Compute where the red cube will end up on the screen (easy, we have the vertices + the ModelView matrix), then render it to an off-screen FBO just big enough (leave margins for the glow); make sure to save the depths to a texture (see the sketch after this list).
3. Post-process the FBO and make the glow.
4. Now the hard part: merge the FBO with the screen. We need to take the depths (which we have stored in a texture) into account, so it looks like we need to do the following:
a) render a quad textured with the FBO's color attachment;
b) set up the ModelView matrix appropriately (we need to move the texture by some vector because, for speed reasons, we intentionally rendered the red cube to an FBO smaller than the screen in step 2);
c) in the 'merging' fragment shader, write gl_FragDepth from the FBO's depth attachment texture (and not from gl_FragCoord.z).
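As a side note, here is a minimal sketch of the FBO setup that step 2 relies on, assuming an ES 3.0-level context (in ES 2.0 depth textures need an extension); the size and names are placeholders:

#include <GLES3/gl3.h>

// Create an off-screen FBO with a colour texture and a depth texture that can
// later be sampled in the 'merging' fragment shader (step 4c).
GLuint createSnapshotFbo(int width, int height,
                         GLuint *colourTexOut, GLuint *depthTexOut)
{
    GLuint colourTex, depthTex, fbo;

    glGenTextures(1, &colourTex);
    glBindTexture(GL_TEXTURE_2D, colourTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colourTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);

    *colourTexOut = colourTex;
    *depthTexOut  = depthTex;
    return fbo;
}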
WAY 2
1. Render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1s.
2. Post-process the FBO so that the marked area gets blurred, and blend this in to make the glow.
3. Blit the FBO to the screen.
WAY 1 works, but its major problem is speed, namely step 4c: writing to gl_FragDepth in the fragment shader disables the early z-test.
WAY 2 also kind of works, and looks like it should be much faster, but it does not give 100% correct results.
The problem arises when the red cube is partially obstructed by the yellow one: pixels of the red cube that are close to the yellow one get 'yellowish' when we blur them, i.e. the closer, yellow cube 'creeps' into the glow.
I guess I could partly remedy this by stopping the blur when the pixels being read suddenly decrease in depth (meaning we just jumped from a farther object to a closer one), but that would mean twice as many texture accesses while blurring (in addition to fetching the COLOR texture we would keep fetching the DEPTH texture), plus a conditional statement in the blurring fragment shader. I haven't tried it, but I am not convinced it would be any faster than WAY 1, and even that wouldn't give 100% correct results (the red pixels close to the border with the yellow cube would only be influenced by the visible part of the red cube, rather than the whole (-blurRadius, +blurRadius) area, so in this area the glow would not be 100% the same).
Would anyone have suggestions on how best to implement such 'per-object post-processing'?
EDIT:
What I am writing is a sort of OpenGL ES library for graphics effects. Clients are able to give it a series of instructions like 'take this Mesh, texture it with this, apply the following matrix transformations to its ModelView matrix, apply the following distortions to its vertices and the following set of fragment effects, and render to the following Framebuffer'.
In my library, I already have what I call 'matrix effects' (modifying the ModelView), 'vertex effects' (various vertex distortions) and 'fragment effects' (various per-fragment changes of RGBA).
Now I am trying to add what I call 'post-processing' effects, this 'GLOW' being the first of them. I define the effect and envision it exactly as you described above.
The effects are applied to whole Meshes; thus now I need what I call 'per-object post-processing'.
The library is aimed mostly at kind of '2.5D' usages, like GPU-accelerated UIs in Mobile Apps, 2-2.5D games (think Candy Crush), etc. I doubt people will actually ever use it for any real 3D, large game.
So FPS, while always important, is a bit less crucial than usual.
I try really hard to keep the API 'Mesh-local', i.e. the rendering pipeline only knows about the current Mesh it is rendering. My main complaint about the above is that it has to be aware of the whole set of meshes we are going to render to a given Framebuffer. That being said, if 'Mesh-locality' is impossible or cannot be done efficiently with post-processing effects, then I guess I'll have to give it up (and make my tutorials more complicated).
Yesterday I was thinking about this:
# 'Almost-Mesh-local' algorithm for rendering N different Meshes, some of them glowing
Create FBO, attach texture the size of the screen to COLOR0, another texture 1/4 the size of the screen to COLOR1.
Enable DEPTH test, clear COLOR/DEPTH
FOREACH( glowing Mesh )
{
use MRT to render it to COLOR0 and COLOR1 in one go
}
Detach COLOR1, attach STENCIL texture
Set up STENCIL so that the test always passes and writes 1s when Depth test passes // see the GL sketch after this block
Switch off DEPTH/COLOR writes
FOREACH( glowing Mesh )
{
enlarge it by N% (amount of GLOW needs to be modifiable!)
render to STENCIL // i.e. mark the future 'glow' regions with 1s in stencil
}
Set up STENCIL so that test always passes and writes 0 when Depth test passes
Switch on DEPTH/COLOR writes
FOREACH( not glowing Mesh )
{
render to COLOR0/STENCIL/DEPTH // now COLOR0 contains everything rendered, except for the GLOW. STENCIL marks the unobstructed glowing areas with 1s
}
Blur the COLOR1 texture with BLUR radius 'N'
Merge COLOR0 and COLOR1 to the screen in the following way:
IF ( STENCIL==0 ) take pixel from COLOR0
ELSE blend COLOR0 and COLOR1
END
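For concreteness, here is a minimal sketch (mine, not part of the original algorithm) of what the stencil-marking steps above look like as raw GL ES calls:

#include <GLES2/gl2.h>

// Stencil state for the "mark the future glow regions" pass: the test always
// passes, 1 is written wherever the depth test also passes, and colour/depth
// writes are switched off.
void beginGlowStencilMarkPass()
{
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);       // always pass, reference value 1
    glStencilOp(GL_KEEP,                     // stencil fail         -> keep
                GL_KEEP,                     // depth fail           -> keep
                GL_REPLACE);                 // stencil + depth pass -> write 1
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    // ... render the enlarged glowing meshes here ...
}

// Afterwards: restore writes and switch the reference value to 0 for the
// non-glowing meshes, exactly as in the steps above.
void beginNonGlowingPass()
{
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
}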
This is not Mesh-local (we still need to be able to process all 'glowing' Meshes first) although I call it 'almost Mesh-local' because it differentiates between meshes only on the basis of the Effects being applied to them, and not which one is where or which obstructs which.
It also can have problems when two GLOWING Meshes obstruct each other (blend does not have to be done in the right order) although with the GLOW being half-transparent, I am hoping the final look will be more or less ok.
Looks like it can even be turned into a completely 'Mesh-local' algorithm by doing one giant
FOREACH(Mesh)
{
if( glowing )
{
}
else
{
}
}
although at a cost of having to attach and detach stuff from FBO and setting STENCILS differently at each loop iteration.
A knee-jerk suggestion is to do the hybrid:
1. compute where the red cube will end up on screen, so render it to an off-screen FBO just big enough (or one the same size as the screen, since creating FBOs on the hoof may not be efficient); don't worry about depths, it's only the colours you're after;
2. render both cubes to an off-screen FBO; set up stencil so that the unobstructed part of the red cube is marked with 1s;
3. post-process to the screen by using an original pixel from (2) wherever the stencil is 0, or a blurred pixel computed by sampling (1) wherever the stencil is 1 (see the sketch below).
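A minimal sketch of step 3's composite pass, assuming the stencil marks from step 2 are available in the destination framebuffer; drawFullScreenQuad is a hypothetical helper that binds a texture and shader and draws a screen-filling quad:

#include <GLES2/gl2.h>

// Hypothetical helper: binds a shader + texture and draws a screen-filling quad.
void drawFullScreenQuad(GLuint texture);

void compositeGlow(GLuint sceneColourTexture, GLuint blurredGlowTexture)
{
    glEnable(GL_STENCIL_TEST);
    glStencilMask(0x00);                     // read-only pass: leave the stencil untouched
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    glStencilFunc(GL_EQUAL, 0, 0xFF);        // where stencil == 0 ...
    drawFullScreenQuad(sceneColourTexture);  // ... take the original pixel from (2)

    glStencilFunc(GL_EQUAL, 1, 0xFF);        // where stencil == 1 ...
    drawFullScreenQuad(blurredGlowTexture);  // ... take the blurred pixel sampled from (1)

    glDisable(GL_STENCIL_TEST);
}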

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system should automatically be rendered at infinite distance so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances within which things are rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of two ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still synchronize the position of the camera and skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
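The answers above are phrased in general OpenGL terms, so here is a minimal raw-GL sketch (not three.js API) of option 2 combined with the depth advice: draw the stardome first without writing depth, so everything rendered afterwards appears in front of it. drawStardome and drawForegroundScene are hypothetical stand-ins for the application's own draw code:

#include <GLES2/gl2.h>

// Hypothetical application draw calls.
void drawStardome();        // centred on the camera position, rotation untouched
void drawForegroundScene();

void renderFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Render the stars first, without writing depth, so they never occlude
    // anything drawn later regardless of the far plane.
    glDepthMask(GL_FALSE);
    drawStardome();
    glDepthMask(GL_TRUE);

    glEnable(GL_DEPTH_TEST);
    drawForegroundScene();
}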

How to store and access per fragment attributes in WebGL

I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off screen. Each pixel of this texture represent a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per particle attributes, like its velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attributes buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
It works, but sampling textures is slow as far as I know, and I wonder if there are better ways to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure what is available in WebGL.
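For illustration only, here is the ping-pong scheme described above written as raw GL calls (the question uses Three.js render targets, but the idea is the same); the struct, the uPositions uniform name, and the full-screen-quad step are assumptions:

#include <GLES2/gl2.h>

// Two FBOs whose colour attachments each store one particle per texel.
struct PingPong {
    GLuint fbo[2];      // framebuffers to render into
    GLuint texture[2];  // their colour attachments (particle positions)
    int    write = 0;   // index currently being written
};

void updateParticles(PingPong &pp, GLuint updateProgram)
{
    const int read = 1 - pp.write;

    glBindFramebuffer(GL_FRAMEBUFFER, pp.fbo[pp.write]);  // write this frame's positions
    glUseProgram(updateProgram);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, pp.texture[read]);       // read last frame's positions
    glUniform1i(glGetUniformLocation(updateProgram, "uPositions"), 0);

    // ... draw a full-screen quad so the fragment shader visits every particle texel ...

    pp.write = read;                                      // swap targets for the next frame
}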

OpenGL ES. Hide layers in 2D?

For example, I have two layers: a background and an image. In my case I must show or hide the image when a zoom value changes (a simple float variable).
The only solution I know of is to keep two separate framebuffers, one for the background and one for the image, and not to draw the image when it is not necessary.
But is it possible to do this in an easier way?
Just don't pass the geometry to glDrawArrays() for the layer you want to hide when the zoom occurs. OpenGL ES completely re-renders everything every frame. You should have a glClear() call at the start of your frame render loop. So, removing something is done by just not sending its triangles. You might need to divide your geometry into separate lists for each layer.
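A minimal sketch of that suggestion; drawBackground, drawImageLayer and the zoom threshold are hypothetical stand-ins for the application's own code:

#include <GLES2/gl2.h>

// Hypothetical application draw calls.
void drawBackground();
void drawImageLayer();

// Redraw everything each frame and simply skip the draw call of the hidden layer.
void renderFrame(float zoom, float hideImageAboveZoom)
{
    glClear(GL_COLOR_BUFFER_BIT);

    drawBackground();

    if (zoom <= hideImageAboveZoom) {
        drawImageLayer();   // omitting this call hides the layer
    }
}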

OpenGL performance issue

I'm writing a 2D RPG using LWJGL and Java 1.6. So far, I have a 'World' class, which holds an ArrayList of Tile (an interface with basic code for every Tile), and a GrassTile class, which makes use of a spritesheet.
When using immediate mode to draw a 64x64 grid of GrassTiles I get around 100 FPS. I do this by calling the .draw() method of each tile inside the ArrayList, which binds the spritesheet and draws a certain area of it (with glTexCoord2f()). I heard it's better to use VBOs, so I found a basic tutorial and tried to implement them in the .draw() method.
Now there are two issues. First, I don't know how to bind only a certain area of a texture to a VBO (binding the whole texture would simply be glBindTexture()), so I tried using VBOs with colours only.
That takes me to the second issue: I got only +20 FPS (120 total), which is not really what I expected, so I suppose I'm doing something wrong. Also, I am creating a single VBO for each GrassTile while iterating over the ArrayList. I think that's kind of wrong, because I could simply throw all the tiles into a single FloatBuffer.
So, how can I draw similar geometry in a better way and how can I bind only a certain area of a Texture to a VBO?
So, how can I draw similar geometry in a better way...
Like #Ian Mallett described, put all your vertex data into a single vertex buffer object. This makes it possible to render your map in one call. If your map gets 1000 times bigger you may want to implement a camera solution which only draws the vertices shown on the screen, but that is a question that will arise later, if you're planning a significantly bigger map.
...and how can I bind only a certain area of a Texture to a VBO?
You can only bind a whole texture. You have to point to a certain area of the texture that you want to be mapped.
Every texture coordinate relates to a specific vertex. Every tile relates to four vertices. Common tiles in your game will share the same texture, hence the 'tile map' name. Make use of that. Place all your tile textures in a texture sheet and bind that texture sheet.
For every new 'tile' you create, check whether the area is meant to be air, grass or ground and then point to the part of the texture that corresponds to what you intend.
Let's say your texture area is 100x100 pixels and the ground area is 15x15, starting from the lower-left corner. Following that logic, the example code below shows how to set the coordinates:
// The vertexData array simply contains information about a tile's four
// vertices (or six vertices if you draw using GL_TRIANGLES).
mVertexBuffer.put(0, vertexData[0]);
mVertexBuffer.put(1, vertexData[1]);
mVertexBuffer.put(2, vertexData[2]);
mVertexBuffer.put(3, vertexData[3]);
mVertexBuffer.put(4, vertexData[4]);
mVertexBuffer.put(5, vertexData[5]);
mVertexBuffer.put(6, vertexData[6]);
mVertexBuffer.put(7, vertexData[7]);
mVertexBuffer.put(8, vertexData[8]);
mVertexBuffer.put(9, vertexData[9]);
mVertexBuffer.put(10, vertexData[10]);
mVertexBuffer.put(11, vertexData[11]);

if (tileIsGround) {
    // The ground occupies the 15x15 area in the lower-left corner of the
    // 100x100 sheet, i.e. texture coordinates 0.0 .. 0.15 on both axes.
    mTextureCoordBuffer.put(0, 0.0f);   // lower-left corner
    mTextureCoordBuffer.put(1, 0.0f);
    mTextureCoordBuffer.put(2, 0.15f);  // lower-right corner
    mTextureCoordBuffer.put(3, 0.0f);
    mTextureCoordBuffer.put(4, 0.15f);  // upper-right corner
    mTextureCoordBuffer.put(5, 0.15f);
    mTextureCoordBuffer.put(6, 0.0f);   // upper-left corner
    mTextureCoordBuffer.put(7, 0.15f);
} else { /* Other texture coordinates. */ }
You actually wrote the solution. The only difference is that you should upload the texture coordinate data to the GPU as well.
This is the key:
I am making a single VBO for each GrassTile while iterating inside the ArrayList.
Don't do this. You make a VBO once, and then you update it if necessary. Creating textures, VBOs and shaders is the slowest possible use of OpenGL; no wonder you're getting problematic framerates: you're doing it O(n) times, every frame.
I think that's kind of wrong, because I can['t?] simply throw all the tiles inside a single FloatBuffer.
You only gain performance when you batch draw calls. This means that when you draw your tiles, you should draw all of them at once with one VBO.
//Initialize
Make a single VBO (or two: one for vertex data, one for texture coordinates; whatever--the key point is O(1) VBOs).
Fill your VBO with ALL of your tiles' data.

//Main loop
while (true) {
    Draw the VBO with a single draw call, thus drawing all your tiles at once.
}
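For illustration, a raw-GL sketch of that batching (the question uses LWJGL, whose GL11/GL15 classes expose the same calls one-to-one); the buffer layout and helper names are assumptions:

#include <GLES2/gl2.h>
#include <vector>

// One VBO holds the vertex data for every tile; it is filled once at load time.
GLuint createTileBatch(const std::vector<float> &allTileVertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Upload ALL tiles in one go; use glBufferSubData later if tiles change.
    glBufferData(GL_ARRAY_BUFFER,
                 allTileVertices.size() * sizeof(float),
                 allTileVertices.data(), GL_STATIC_DRAW);
    return vbo;
}

// Draw the whole map with a single call.
void drawTileBatch(GLuint vbo, int tileCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // ... set up vertex/texcoord pointers and bind the spritesheet here ...
    glDrawArrays(GL_TRIANGLES, 0, tileCount * 6);  // 6 vertices per tile
}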
