Do elements drawn outside the clip volume affect OpenGL performance?

OpenGL question: I have something to ask about the clip space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. If that is so, do elements outside the clip space affect performance or not? Since they will not be drawn, it seems they should have no cost.
Assuming they do affect performance, in the case of a 2D game like Super Mario, I am thinking of not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.

OpenGL has only limited knowledge of your scene and clips very late in the pipeline, so it cannot apply a broad-phase test. If you can, you should.
Suppose you had a model with 30,000 triangles: OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the model's bounding sphere, you could determine that the whole thing is completely outside the frustum with a single test and save almost 30,000 triangles' worth of effort.
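As an illustration (in three.js terms, since the rest of this page uses it; camera and mesh are assumed to already exist), such a broad-phase test can be a single sphere-versus-frustum check:
// Build the view frustum from the camera, then test the mesh's bounding sphere.
const projScreenMatrix = new THREE.Matrix4()
  .multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
const frustum = new THREE.Frustum().setFromProjectionMatrix(projScreenMatrix);
mesh.geometry.computeBoundingSphere();
const sphere = mesh.geometry.boundingSphere.clone().applyMatrix4(mesh.matrixWorld);
mesh.visible = frustum.intersectsSphere(sphere); // one test instead of 30,000 transforms
Note that three.js performs a per-object version of this automatically when object.frustumCulled is enabled, which it is by default.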
In a 2D game like Mario, what this usually means is using the scroll position to index into the map and generating geometry only for the potentially visible tiles and sprites within the visible area.
For the map, that generally just means figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, so the vast majority of the geometry is discarded with zero processing.
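A hypothetical sketch of that indexing (the tile size and the map/drawTile names are mine, not from the answer; clamping at the map edges is omitted):
const TILE = 16;                                // tile size in pixels (assumed)
const firstCol = Math.floor(scrollX / TILE);    // leftmost potentially visible column
const firstRow = Math.floor(scrollY / TILE);    // topmost potentially visible row
const cols = Math.ceil(screenWidth / TILE) + 1; // +1 catches the partially visible edge tile
const rows = Math.ceil(screenHeight / TILE) + 1;
for (let r = firstRow; r < firstRow + rows; r++) {
  for (let c = firstCol; c < firstCol + cols; c++) {
    drawTile(map[r][c], c * TILE - scrollX, r * TILE - scrollY);
  }
}
// every tile outside this range is skipped with zero processing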
For the sprites, this is generally why in that sort of game you often see enemies reset to their starting position if you walk a little way from them and then walk back: they are added to the active list by a map-location trigger and removed when you walk far enough away. While inactive, no mutable storage is afforded to them.

Related

Understanding the display of the pixels on the screen

I'm sorry if this is a stupid question, but I want to make sure whether I'm right or not.
Suppose we have an 8x8-pixel screen and we want to represent a 2x2 square, where a pixel can be black (1) or white (0). I would imagine this as an 8x8 matrix:
[[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]]
Using this matrix, we paint the pixels and update them (for example) every second. We also have the coordinates of the pixels representing the square: (4,4), (4,5), (5,4), (5,5), and if we want to move the square we add 1 to the x part of each coordinate.
Is that right or not?
Graphics rendering is a complex mesh of art, mathematics, and hardware, assuming you're asking about how the screen actually works rather than posing a toy problem about simulating displays.
The buffer you described in the question is the interface which software uses to tell the hardware (video card) what to draw on the screen, and how it is actually done is in the realm of hardware. Hence, the logic for manipulating graphics objects (things you want drawn) is separate from the rendering process itself. Your program tells the buffer which pixels you want to update, and that's all; this can be done as often as you like, regardless of whether the hardware actually manages to flush its buffers onto the screen.
The software would be responsible for sorting out what exactly to draw on the screen; this is usually handled on multiple logical levels. Higher levels would construct a virtual worldspace for your objects and determine their interactions and attributes (position, velocity, collision, etc.), as well as a camera to determine the FOV the screen should display (if your world is 3D). Lower levels would then figure out the actual pixel values to write to the buffer, based on the camera FOV (3D), or just plain pixel coordinates after applying the desired transformations (rotation, shear, resize, etc.) to the associated image (2D).
It should be noted that virtual worldspace coordinates do not necessarily map to pixel coordinates, even in 2D worlds. I'm not an expert on this subject, frankly, but I suspect it is easier to first determine how far you want the object to move in virtual space, and then apply the necessary transformations to show the result in a viewing window with customizable dimensions.
In short, you probably don't want to 'add 1 to x' when you want to move something on screen; you move it in a higher abstraction layer and then draw the result. This will save you a lot of trouble, especially once you have a complex scene with all kinds of objects and a background.
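For instance (a hypothetical sketch; the names square, dt, drawSquareAt and the scale factor are mine):
const PIXELS_PER_UNIT = 4;         // assumed world-to-pixel scale
square.x += square.velocityX * dt; // move in world space, at the high level
// only at draw time are world coordinates converted to buffer pixel coordinates
const px = Math.round(square.x * PIXELS_PER_UNIT);
const py = Math.round(square.y * PIXELS_PER_UNIT);
drawSquareAt(px, py);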
Assuming you want to move a group of pixels to the right, then yes, all you need to do is identify the group of pixels and add 1 to their x coordinates. Of course you need to fill the vacated spots with zeroes, otherwise it would be a copy operation rather than a move.
Keep in mind that my answer is a bit naive, in the sense that when you reach the rightmost boundary you have to wrap.
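In code, that naive move (without wrapping) could be as simple as shifting every row, sketched here with screen standing for the 8x8 matrix above:
for (const row of screen) {
  row.pop();      // drop the rightmost pixel
  row.unshift(0); // zero-fill the vacated left column
}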

What is the most efficient way to create a reservoir in Three.js?

I am creating a 3D reservoir model which looks like this.
It's made of hundreds of thousands of cells with outlines. The outlines are needed for all the cells underneath, because there is an IJK filter used to hide cells at any level and thus show the rest. Once the model is rendered, it shouldn't need to be updated in terms of position or scale.
That's enough about the background. The approach I'm using is to create one large geometry which stores all vertices across the reservoir in one triangle strip. It also stores an IJK index for each cell, so the IJK filter works at the shader level. This creates the mesh part. Then I create another object to draw all the outlines using one THREE.LineSegments.
The approach works pretty well for a small number of cells, but for large data sets the frame rate drops.
I'm considering another way of doing this: barycentric outlines plus instanced drawing. Barycentric outline drawing removes the extra LineSegments object, since it draws the outline in the fragment shader. However, it comes with drawbacks. Because WebGL lacks geometry shaders, I have to use full triangles rather than a triangle strip in order to store barycentric coordinates for each vertex. I'm OK with this extra memory usage if instanced drawing can boost the performance. That is to say, I draw one cube with an outline, then create as many instances as I need and put them in the right positions.
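For reference, the barycentric outline trick could look roughly like this in the fragment shader (a sketch: vBary is a varying carrying each vertex's barycentric coordinates, fillColor/outlineColor are assumed vec3s, and fwidth needs the standard-derivatives extension in WebGL 1):
float edge = min(min(vBary.x, vBary.y), vBary.z);             // distance to the nearest edge
float line = 1.0 - smoothstep(0.0, fwidth(edge) * 1.5, edge); // 1.0 on the outline, 0.0 inside
gl_FragColor = vec4(mix(fillColor, outlineColor, line), 1.0);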
I am wondering whether this approach should indeed increase performance in theory. Any thoughts are welcome!
OK, I think I am going to answer this question myself.
I implemented the change based on the above ideas and it works pretty well compared to the original version.
Let's put the result first: this approach has no problem rendering hundreds of thousands of cells at a reasonable frame rate. My demo contains 400,000 cells, with the frame rate at 50 fps in the worst case, running on my Nvidia 1050 Ti card and a 4K monitor. For comparison, drawing 400,000 cells with the previous version could drop the frame rate to 10 fps.
This means using instanced drawing for a large object is faster than composing a single large geometry. It also helps rendering performance that the instanced cube is drawn single-sided, whereas the triangle-stripped cube was double-sided. Once I can draw a single unit cube with the ideal outline, I can transform it to any position, in "any" shape, in the vertex shader. But of course instanced drawing comes with restrictions: the cells don't have to be the same shape, but they must have the same number of vertices, faces, etc.; and I lose the ability to change individual vertex colors.
As for memory usage, the new approach actually uses less. I provide positions for 8 vertices per cell instead of 14. Even though the unit cube has 36 vertices, I can use their unit positions (0/1, 0/1, 0/1) as indices, so for each instance I only need to provide the 8 real corner positions.
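A minimal three.js sketch of this setup (illustrative only; the attribute names and the filling of the corner arrays are my assumptions, not the author's exact code). The base cube supplies the unit (0/1) vertex positions, eight instanced attributes carry each cell's real corners, and the vertex shader blends them trilinearly:
const cellCount = 400000;
const base = new THREE.BoxGeometry(1, 1, 1).translate(0.5, 0.5, 0.5); // vertices at 0/1
const geometry = new THREE.InstancedBufferGeometry().copy(base);
geometry.instanceCount = cellCount;
const corners = ['c000', 'c100', 'c010', 'c110', 'c001', 'c101', 'c011', 'c111'];
for (const name of corners) {
  const positions = new Float32Array(cellCount * 3); // fill with that corner of every cell
  geometry.setAttribute(name, new THREE.InstancedBufferAttribute(positions, 3));
}
const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 c000; attribute vec3 c100; attribute vec3 c010; attribute vec3 c110;
    attribute vec3 c001; attribute vec3 c101; attribute vec3 c011; attribute vec3 c111;
    void main() {
      vec3 u = position; // each component is 0.0 or 1.0: an index into the corners
      vec3 p = mix(mix(mix(c000, c100, u.x), mix(c010, c110, u.x), u.y),
                   mix(mix(c001, c101, u.x), mix(c011, c111, u.x), u.y), u.z);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }`,
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }',
});
const cells = new THREE.Mesh(geometry, material);
cells.frustumCulled = false; // the base cube's bounds don't cover the instances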
Hope this helps for people who want to implement the same optimization.

Using three.js, how would you project a globe world to a map on the screen?

I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say a two-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), just some explanation of how it could be done.
My own initial idea is to place a lot of cameras (in fact, one for every pixel in the map projection) around the globe, within a Group object attached to some kind of orbit controls (rotation only), but I expect the number of object-culling operations to become a huge performance issue. I am sure there must be more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it at the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like spherical Mercator).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on from there.
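A sketch of what such a vertex shader might compute (my illustration: uRotation would be a uniform mat3 that recenters the map, uMaxR a depth scale, and PI a defined constant; position is assumed to be in model space centered on the planet):
vec3 p = uRotation * position;             // rotate so the chosen point ends up centered
float r = length(p);
float theta = acos(p.y / r);               // polar angle in [0, PI]
float phi = atan(p.z, p.x);                // azimuth in [-PI, PI]
gl_Position = vec4(phi / PI,               // x: longitude mapped to [-1, 1]
                   1.0 - 2.0 * theta / PI, // y: latitude mapped to [-1, 1]
                   1.0 - r / uMaxR,        // z: larger radius = closer to the wrap-around camera
                   1.0);
One caveat: triangles that straddle the ±PI seam or the poles get vertices on opposite sides of the map, so they would need to be split or otherwise handled specially.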
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a near clipping plane and a far clipping plane, which effectively set the minimum and maximum distances at which things are rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still need to synchronize the positions of the camera and the skybox.
If you do not depth-cull, draw the skybox first and match its position, but not its rotation, to the camera's.
Also disable lighting on the skybox; instead, bake the ambience directly into its texture.
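In three.js, that layered approach might look like this sketch (starGeometry, scene, and camera are assumed; the essentials are drawing first, ignoring depth, and copying only the camera's position):
const stars = new THREE.Points(
  starGeometry,
  new THREE.PointsMaterial({ color: 0xffffff, depthWrite: false, depthTest: false })
);
stars.renderOrder = -1; // draw before everything else, so the scene paints over it
stars.onBeforeRender = () => stars.position.copy(camera.position); // position, not rotation
scene.add(stars);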
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of other things. The best way to do the first is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second is achieved by modifying your depth test so that the skybox is always considered further away than whatever you are currently drawing.
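A common way to make the depth test treat the skybox as maximally distant is a one-line change in its vertex shader (a sketch; the material would also need a less-or-equal depth function, e.g. THREE.LessEqualDepth in three.js, so that depth 1.0 still passes against the cleared buffer):
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
gl_Position.z = gl_Position.w; // after the perspective divide, depth is exactly 1.0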
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size, and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50), the touching faces flicker and the render depth no longer works.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;       // enable polygon offset for this material's triangles
material.polygonOffsetFactor = -0.1; // negative values pull fragments toward the camera
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the terms 'decal textures'/'decals'). So basically you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of depth precision (it depends on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example if you have a camera at 0,0,0 looking at an object at 0,0,10, if you want another object to be behind the first object put it at 0,0,11 it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does unless it's ShaderToy or some video filter or the like. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that does NOT move).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how the depth offset would work. Say you want to draw a decal on a surface: you 'draw' another mesh onto that surface, by projecting a quad onto it, say. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable difference, and then move the bullet-hole mesh by that amount toward the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth in the smallest possible increment), but it does translate in the depth direction, ending up in front of the wall.
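Tying this back to the polygon offset shown above, the bullet-hole case might be sketched like this (the texture and quad names are mine):
const decalMaterial = new THREE.MeshBasicMaterial({ map: bulletHoleTexture });
decalMaterial.polygonOffset = true;
decalMaterial.polygonOffsetFactor = -1; // scaled by the polygon's depth slope
decalMaterial.polygonOffsetUnits = -1;  // scaled by the smallest resolvable depth step
scene.add(new THREE.Mesh(decalQuad, decalMaterial)); // coplanar with the wall, but wins the depth test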
I don't see any flicker; the cube movement in 3D seems super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.
