Does OpenGL (ES) draw objects completely outside of the viewport? - opengl-es

I have a viewport set up with an orthographic projection. If I ask OpenGL to draw a quad outside of the viewport (its x/y bounds) using glDrawArrays(), does it ignore it, or does it still draw it?

OpenGL will still process your vertices (modelview transform etc.), because that's how it figures out where the pixels would end up, but when it comes to actually rendering it will not 'draw' anything, because the pixel coordinates don't exist in the framebuffer. Depending on exactly where the coordinates are and other factors, OpenGL may be able to stop processing your vertices sooner, but in general it will at least do all of the coordinate transformations.
So in a word no, it will not 'draw' them.
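For illustration, a minimal fixed-function ES 1.x-style sketch (the projection bounds and quad coordinates here are just assumptions). The glDrawArrays() call is perfectly legal; the vertices are still transformed, but the quad lies entirely outside the ortho bounds, so it is clipped away and no pixels are written:

glOrthof(0.0f, 10.0f, 0.0f, 10.0f, -1.0f, 1.0f);   // visible x/y range is 0..10

GLfloat quad[] = {                                  // a quad at x in 20..21,
    20.0f, 0.0f,   21.0f, 0.0f,                     // entirely outside those bounds
    20.0f, 1.0f,   21.0f, 1.0f,
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);              // transformed, then clipped; nothing drawn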

Related

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things will be rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still need to synchronize the position of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling function so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
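At the GL level, the "layers" approach from option 2 can look roughly like this sketch (the two draw functions are placeholders for your own code, and the stardome is kept centered on the camera so the viewer can never approach it):

glDepthMask(GL_FALSE);                 // the stardome writes no depth
drawStardomeCenteredOnCamera();        // translate it to the camera position each frame
glDepthMask(GL_TRUE);
glClear(GL_DEPTH_BUFFER_BIT);          // optional: guarantee everything else draws over it
drawRestOfScene();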

Is drawing outside the viewport in OpenGL expensive?

I have several thousand quads to draw, some of which might fall entirely outside the viewport. I could write code which will detect which quads fall wholly outside viewport and ask OpenGL to draw only those which will be at least partially visible. Alternatively, I could simply have OpenGL draw all of the quads, regardless of whether they intersect with the viewport.
I don't have enough experience with OpenGL to know if one of these is obviously better (or if OpenGL offers some quick viewport intersection test I can use). Are draws outside the viewport close to being no-ops, or are they expensive enough that I should try to avoid them?
It depends on your circumstances.
Drawing is best done in batches, preferably batches that are static in structure (ie: each batch is drawn in its entirety). So you shouldn't be culling down at the quad level. But doing some culling of large groups of quads is not unwelcome.
The primary performance cost you'll pay is vertex transform (aka your vertex shader). A vertex shader has to be run on every vertex you provide, regardless of anything else. However, hardware will discard triangles that are trivially outside of the viewport, so you won't soak up any fillrate or other performance.
However, that doesn't mean it's fine just because your vertex T&L is cheap. Rendering large blocks of triangles that aren't visible may very well stall the rasterizer, because all of those triangles are being culled. That is, if you draw a lot of stuff that gets culled by being off screen, the fillrate that you might have used on actually visible triangles may be lost.
So it's not a good idea to just hurl geometry at the GPU willy-nilly.
In any case, if you're doing 2D rendering, coarse culling of discrete groups of quads is really all you need. You could divide your tilemap into screen-sized portions and draw up to 4 of these based on the position of the camera.
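A rough sketch of that coarse culling, assuming screen-sized chunks laid out on a grid, a camera position in the same pixel units, and a drawChunk() batch call of your own:

extern void drawChunk(int cx, int cy);                /* your own batched draw call (assumed) */

void drawVisibleChunks(float cameraX, float cameraY,
                       int screenWidth, int screenHeight,
                       int numChunksX, int numChunksY)
{
    int firstX = (int)(cameraX / screenWidth);        /* chunk column under the camera */
    int firstY = (int)(cameraY / screenHeight);       /* chunk row under the camera    */

    for (int y = firstY; y <= firstY + 1; ++y)
        for (int x = firstX; x <= firstX + 1; ++x)
            if (x >= 0 && x < numChunksX && y >= 0 && y < numChunksY)
                drawChunk(x, y);                      /* at most 4 batches get submitted */
}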

How to create a shader to mask using a degree offset from a central point?

I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, which I show in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2D coordinate space. (Lower-left of screen is origin.) I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0 to 1.0) into the shader, or I could precompute an angle; either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
And here's an example of "80%":
The general idea, I think, is to pass in a uniform 'A' vec2, and within the fragment shader determine whether the fragment lies within the area swept clockwise, by the correct angle, from the line running straight down from 'A' to the bottom of the screen. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or 1.0 if keeping, right?)
But how can I actually achieve this?? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0)
One solution to this would be to calculate the difference between gl_FragCoord (I hope that exists under ES 2.0!) and the point (make sure the point is in screen coords), and use the atan function with two parameters, giving you an angle. If the angle is not in the range you want (greater than the minimum and less than the maximum), kill the fragment.
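A minimal ES 2.0 fragment-shader sketch of that idea; the uniform names, the 0.0-1.0 mask amount, and the "sweep from straight down" convention are assumptions, so flip the signs in the atan call if the wedge opens the wrong way for your setup:

precision mediump float;

uniform sampler2D uTexture;
uniform vec2  uMaskPoint;      // point 'A' in window pixels, origin bottom-left
uniform float uMaskAmount;     // 0.0 = nothing masked, 1.0 = fully masked
varying vec2  vTexCoord;

void main()
{
    vec2 d = gl_FragCoord.xy - uMaskPoint;
    // angle of this fragment around A, measured from "straight down"
    float ang = atan(d.x, -d.y);          // range -pi .. pi
    if (ang < 0.0) ang += 6.2831853;      // wrap to 0 .. 2*pi
    if (ang < uMaskAmount * 6.2831853)
        discard;                          // inside the masked wedge
    gl_FragColor = texture2D(uTexture, vTexCoord);
}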
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards) and interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations, too (MSAA for example).
Why don't you just draw some black triangles on top of the red rectangle?

Limiting the drawing to a rectangle in OpenGLES

I need to limit the drawing of an object to a rectangle. I can't just change the viewport to match the rectangle because the ModelView matrix (which should change the rectangle, but not the content) may not be identity. A solution that would work is to draw to an FBO that matches the rectangle, then draw the FBO to the screen, but that seems too slow. Is there a better way to do this?
If I understood you correctly, glScissor should be the function you are looking for. It crops the rendering to a selected sub-rectangle of the viewport. This does not modify the viewport, so the objects cover the same size on the screen; it just prevents you from drawing any pixels outside of the scissor region. If this is not what you want, and you want the sub-rectangle to contain the whole scene and thus your objects to shrink, then changing the viewport is the solution of choice.
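A minimal scissor sketch (the rectangle values and drawObject() are just placeholders):

glEnable(GL_SCISSOR_TEST);
glScissor(100, 50, 256, 128);    // x, y, width, height in window pixels, origin bottom-left
drawObject();                    // only pixels inside that rectangle are written
glDisable(GL_SCISSOR_TEST);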
EDIT: If you want the rectangle to be transformable and especially rotatable (and therefore no longer a rectangle on the screen), then rendering into an FBO and using this as a texture on a quad is probably the best solution. Otherwise you could probably also just modify the vertex coordinates after projection, i.e. multiply the transformation matrix of the target rectangle with the projection matrix and use this as the new projection matrix, but I'm not completely sure about that (at least something similar should do it).

iPhone OpenGL ES: Applying a Depth Test on Textures that have transparent pixels for 2D game

Currently, I have blending and depth testing turned on for a 2D game. When I draw my textures, the "upper" texture removes portions of the lower textures where they intersect. Clearly, the transparent pixels of the textures are taken into account by the depth test, and they wipe out the colors of the lower textures wherever they overlap. Moreover, alpha blending is rendered incorrectly. Is there any sort of function that can tell OpenGL not to include transparent pixels in depth testing?
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_EQUAL, 1.0f );
This will discard all pixels with an alpha of anything other than fully opaque. Those pixels will then not be written to the Z-buffer. It does, however, interfere with various Z-buffer pipeline optimisations, so it may cause some serious slowdowns. Only use it if you really have to.
No, it's not possible. This is true of all hardware depth testing.
GL (full or ES -- and D3D) all have the same model: they paint polygons in the order you specify them. If you draw polygon A before polygon B, and polygon A is logically in front of polygon B, polygon B won't be painted (courtesy of the depth test).
The solution is to draw your polygons in order from farthest to nearest the current view origin. Happily, in a 2D game this should just be a simple sort (one you probably won't even need to do very often).
In 3D games BSPs are the basic solution to this issue.
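In the 2D case, the sort itself can be as simple as the following sketch; the Sprite struct, its depth convention (larger z = nearer the viewer), and drawSprite() are assumptions:

#include <stdlib.h>

typedef struct { float z; /* plus texture, position, ... */ } Sprite;

extern void drawSprite(const Sprite *s);       /* your own draw call (assumed) */

static int compareFarToNear(const void *a, const void *b)
{
    float za = ((const Sprite *)a)->z, zb = ((const Sprite *)b)->z;
    return (za < zb) ? -1 : (za > zb) ? 1 : 0; /* smallest z (farthest) first */
}

static void drawSpritesBackToFront(Sprite *sprites, size_t count)
{
    qsort(sprites, count, sizeof(Sprite), compareFarToNear);
    for (size_t i = 0; i < count; ++i)
        drawSprite(&sprites[i]);               /* painted back to front */
}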
If you're using shaders, you can try disabling blending and discarding the pixels with alpha 0:
if(texColor.w == 0.0)
discard;
What type of blending are you using?
glEnable(GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Should prevent any fragments with alpha of 0 from writing to the depth buffer.
