I have been reading articles about clipping for hours now, but I don't seem to find a solution to my problem.
This is my scenario:
Within an OpenGL ES environment (iOS, Android) I have a 2D scene graph consisting of drawable objects, forming a tree.
Each tree node has its own coordinate space with its own transformation matrix, and each node passes its coordinate space on to its children. Each node has a rectangular bounding box, but these bounding boxes are not axis-aligned.
This setup works perfectly for rendering the 2D scene graph: iterate through the tree, rendering each object and then its children.
Now comes my problem: I am looking for a way to enable clipping on a node. With clipping enabled, a node's children should be clipped where they leave the area of the parent's bounding box.
For example, I want a node containing a set of text nodes as children, which can be scrolled up and down within the parent's bounding box and are clipped when they leave that area.
Since the bounding boxes are not axis-aligned, I cannot use glScissor, which would be the easiest way.
I was thinking about using the stencil buffer: when clipping is enabled, draw the filled bounding rectangle into it and then enable the stencil test. This might work, but it leads to another problem: what happens if a child inside a clipped node has clipping enabled again? The stencil mask would have to be set up for the child, erasing the parent's stencil mask.
Another solution I was thinking of is to do the clipping in software. This would be possible because, within each node, clipping can be done relatively easily in its own local coordinate space. The drawback of this solution is that clipping would have to be implemented for each new node type.
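For illustration, the per-node test I have in mind would look roughly like this (a minimal TypeScript sketch; the inverse world transform and the box extents are assumed to be stored on the node):

// Transform a point into the node's local space, where the bounding box
// IS axis-aligned, and test it there. `inv` is the node's inverse world
// transform as a 2D affine matrix [a, b, c, d, e, f].
function insideNode(px: number, py: number,
                    inv: number[], w: number, h: number): boolean {
  const lx = inv[0] * px + inv[2] * py + inv[4];
  const ly = inv[1] * px + inv[3] * py + inv[5];
  return lx >= 0 && lx <= w && ly >= 0 && ly <= h;
}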
Can anybody here point me into the right direction?
What I am looking for is something like the functionality of glScissor for clipping non-axis-aligned rectangular areas.
The scissor box is a simple tool for simple needs. If you have less simple needs, then you're going to have to use the less simple OpenGL mechanism for clipping: the stencil buffer.
Assuming that you aren't using the stencil buffer for anything else at present, you have access to 8 stencil bits. By giving each level in your scene graph a higher number than the last, you can assign each level its own stencil mask. Of course, if two siblings at the same depth level overlap, or simply are too close, then you've got a problem.
Another alternative is to assign each node a number from 1 to 255. Using a stencil test of GL_EQUAL, you will be able to prevent the overlap issue above. However, this means that you can have no more than 255 nodes in your entire scene graph. Depending on your particular needs, this may be too much of a restriction.
You can of course clear the stencil buffer (and the depth buffer; don't try to clear just one of them) when you run out of bits with either method. But that can be somewhat costly, and it requires that you clear the depth buffer, which you may not want to do.
Another method is to use the increasing number and GL_EQUAL test only for non-axis-aligned nodes. You use the scissor box for most nodes, but if there is some rotation (whether on that node or inherited), you use the stencil mask instead, bumping the mask counter as you process the graph. Again, you are limited to 255 nodes, but at least the limit applies only to rotated nodes. Depending on your needs and how many rotated nodes there are, that may be sufficient.
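To make the nesting concrete, here is a minimal sketch of that counter-based approach in TypeScript against the WebGL API (which mirrors GL ES 2.0 one-to-one). drawClipRect is a placeholder for filling the node's transformed bounding box, and the stencil buffer is assumed to be cleared to 0 at the start of the frame:

// Entering a clipping node at nesting depth `level` (1-based): promote the
// pixels inside the parent's region to `level`, then restrict children to it.
function pushClip(gl: WebGLRenderingContext, level: number,
                  drawClipRect: () => void): void {
  gl.enable(gl.STENCIL_TEST);
  gl.colorMask(false, false, false, false);   // stencil-only pass
  gl.stencilFunc(gl.EQUAL, level - 1, 0xff);  // only inside the parent's clip
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR);    // those pixels become `level`
  drawClipRect();
  gl.colorMask(true, true, true, true);
  gl.stencilFunc(gl.EQUAL, level, 0xff);      // children draw only inside
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
}

// Leaving the node: decrement the region back so the parent's mask survives,
// which avoids the erase-the-parent problem from the question.
function popClip(gl: WebGLRenderingContext, level: number,
                 drawClipRect: () => void): void {
  gl.colorMask(false, false, false, false);
  gl.stencilFunc(gl.EQUAL, level, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.DECR);
  drawClipRect();
  gl.colorMask(true, true, true, true);
  gl.stencilFunc(gl.EQUAL, level - 1, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
}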
I'm sorry if this is a stupid question, but I want to make sure whether I'm right or not.
Suppose we have an 8x8-pixel screen and we want to represent a 2x2 square; a pixel can be black (1) or white (0). I would imagine this as an 8x8 matrix:
[[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]]
Using this matrix, we paint the pixels and update them (for example) every second. We also have the coordinates of the pixels representing the square: (4,4), (4,5), (5,4), (5,5). If we want to move the square, we add 1 to the x part of each coordinate.
Is that right or not?
Graphics rendering is a complex mesh of art, mathematics, and hardware, assuming you're asking about how the screen actually works rather than about a toy problem simulating a display.
The buffer you described in the question is the interface which software uses to tell the hardware (video card) what to draw on the screen, and how it is actually done is in the realm of hardware. Hence, the logic for manipulating graphics objects (things you want drawn) is separate from the rendering process itself. Your program tells the buffer which pixels you want to update, and that's all; this can be done as often as you like, regardless of whether the hardware actually manages to flush its buffers onto the screen.
The software would be responsible for sorting out what exactly to draw on the screen; this is usually handled on multiple logical levels. Higher levels would construct a virtual worldspace for your objects and determine their interactions and attributes (position, velocity, collision, etc.), as well as a camera to determine the FOV the screen should display (if your world is 3D). Lower levels would then figure out the actual pixel values to write to the buffer, based on the camera FOV (3D), or just plain pixel coordinates after applying the desired transformations (rotation, shear, resize, etc.) to the associated image (2D).
It should be noted that virtual worldspace coordinates do not necessarily reflect pixel coordinates, even in 2D worlds. I'm not an expert on this subject, frankly, but I suspect it'll be easier if you first determine how far you want the object to move in virtual space, and then apply the necessary transformations to show the results in a viewing window with customizable dimensions.
In short, you probably don't want to 'add 1 to x' when you want to move something on screen; you move it in a high abstraction layer, and then draw the results. This will save you a lot of trouble, especially if you have a complex scene with all kinds of stuff and a background.
Assuming you want to move a group of pixels to the right, then yes: all you need to do is identify the group of pixels and add 1 to their X coordinates. Of course you need to fill in the vacated spots with zeroes; otherwise it would be a copy operation rather than a move.
Keep in mind, my answer is a bit naive in the sense that when you reach the rightmost boundary, you have to wrap.
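A toy sketch of that move in TypeScript (the function and names are mine; edge wrapping is deliberately left out):

// Shift every lit pixel of an 8x8 matrix of 0s and 1s one column to the
// right, leaving zeroes in the vacated cells so it's a move, not a copy.
function moveRight(screen: number[][]): number[][] {
  const h = screen.length;
  const w = screen[0].length;
  const next = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (screen[y][x] === 1 && x + 1 < w) {
        next[y][x + 1] = 1; // the pixel moves right; its old spot stays 0
      }
    }
  }
  return next;
}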
When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40, 50, 50), the touching faces flicker and the render depth doesn't work anymore.
How do I fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to bring a face forward (display it) and positive factors to push it back (hide it).
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects, all of which requires pretty low-level tinkering.
If the far range reduction helps, then you're experiencing a lack of depth precision (it depends on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at (0,0,0) looking at an object at (0,0,10) and you want another object to be behind the first one, put it at (0,0,11); that should work.
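In three.js terms that is just the following (a sketch; frontBox and backBox are hypothetical meshes, and the usual THREE namespace is assumed):

camera.position.set(0, 0, 0);
camera.lookAt(new THREE.Vector3(0, 0, 10)); // default cameras look down -Z
frontBox.position.set(0, 0, 10);
backBox.position.set(0, 0, 11); // farther from the camera, so it renders behind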
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with
coplanar polygons, where two faces occupy essentially the same space,
with neither in front. Affected pixels are rendered with fragments
from one polygon or the other arbitrarily, in a manner determined by
the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does, unless it's Shadertoy or some video filter or something. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here's how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh onto this surface, by, say, projecting a quad onto it. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth buffer precision, find the smallest representable difference, and then move the bullet hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube, moving planes back and forth in the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
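One way to get that nudge without touching the mesh yourself is the hardware's built-in polygon offset (a WebGL sketch; drawBulletHoleDecal is a placeholder, and factor/units of -1 are a common starting point rather than a universal constant):

gl.enable(gl.POLYGON_OFFSET_FILL);
gl.polygonOffset(-1.0, -1.0); // negative values pull the depth towards the camera
drawBulletHoleDecal();        // the coplanar decal now wins the depth test
gl.disable(gl.POLYGON_OFFSET_FILL);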
I don't see any flicker; the cube movement in 3D seems super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.
I am actually trying to develop a web application that visualizes a finite element mesh. To do so, I am using WebGL. Right now I have a page with all the code necessary to draw the mesh in the viewport using triangles as primitives (each quad element of the mesh was split into two triangles for drawing). The problem is that, when using triangles, the whole piece looks "continuous" and you can't see the separation between triangles. What I would like to achieve is to add lines between the nodes so that around each quad element (formed by two triangles) these lines are drawn in black, and the mesh can actually be seen.
I was able to define the lines in my page, but since one draw call can only use one type of primitive, if I add the code for the line buffers and bind them, it just shows the lines, not the elements (as the line buffers were the last ones bound).
The closest solution I have found is using multiple shaders managed through multiple programs, but this would only let me either plot the geometry with triangles or draw just the lines, depending on which program is currently selected.
Could any of you help me with how to approach this issue? I have seen a Windows application that shows FE meshes using OpenGL, and it is able to mix triangles with points and lines, apart from using different layers, illumination, etc. So I am aware that this may be complicated, but I assume that if it is somehow possible with OpenGL, it should be possible with WebGL as well.
If you provide a solution, I would really appreciate it containing some example code, for instance drawing a single triangle with three black lines at its borders and maybe three points at the vertices.
setup()
{
  // <your current code here>
  // Additional step: unbind the previous textures, then upload and bind a
  // 1x1 black pixel as a texture. Let this texture object be borderID.
}

Draw loop()
{
  // Pass 1: unbind the previous textures, bind your normal textures, and
  // draw the mesh as in your current setup. This fills the elements with
  // their colors, without borders (the current case).
  // Pass 2: bind the borderID texture and draw the same vertices again,
  // except this time use context.LINE_LOOP instead of context.TRIANGLES.
  // The lines sample the black texture and appear as borders on top of the
  // previously drawn fill colors. You can have something like below:
  if (currDrawMode == 0)
    context3dStore.bindTexture(context3dStore.TEXTURE_2D, meshTextureObj[bindId]);
  else
    context3dStore.bindTexture(context3dStore.TEXTURE_2D, borderTexture1pixObj[bindId]);
  context3dStore.drawElements((currDrawMode == 0) ? context3dStore.TRIANGLES : context3dStore.LINE_LOOP,
      indicesCount[bindId], context3dStore.UNSIGNED_SHORT, 0);
  // currDrawMode toggles between drawing the mesh fill (0) and the border (1).
}

Since the line texture appears as a border over the flat colors you had earlier, this should solve your need.
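Since you asked for a runnable example, here is a self-contained TypeScript/WebGL sketch of the single-triangle case: the same vertex buffer is drawn twice, once with gl.TRIANGLES for the fill and once with gl.LINE_LOOP for the black border. It uses a color uniform instead of the texture trick above, which is a simpler way to get the same two-pass effect; all names and the canvas id are my assumptions:

const gl = (document.getElementById("c") as HTMLCanvasElement)
  .getContext("webgl") as WebGLRenderingContext;

const vsSource = `
  attribute vec2 aPos;
  void main() { gl_Position = vec4(aPos, 0.0, 1.0); }`;
const fsSource = `
  precision mediump float;
  uniform vec4 uColor;
  void main() { gl_FragColor = uColor; }`;

function compile(type: number, src: string): WebGLShader {
  const s = gl.createShader(type)!;
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

const prog = gl.createProgram()!;
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// One triangle; the same three vertices serve both passes.
const verts = new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

const aPos = gl.getAttribLocation(prog, "aPos");
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);
const uColor = gl.getUniformLocation(prog, "uColor");

// Pass 1: the fill.
gl.uniform4f(uColor, 0.8, 0.2, 0.2, 1.0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
// Pass 2: the border - same vertices, different primitive type.
gl.uniform4f(uColor, 0.0, 0.0, 0.0, 1.0);
gl.drawArrays(gl.LINE_LOOP, 0, 3);
// Points at the vertices would be a third pass with gl.POINTS
// (remember to set gl_PointSize in the vertex shader for that).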
I have a general question (I know I should present specific code with a problem, but in my case the problem is of a more general nature).
In Processing, let's say I make an ellipse:
ellipse(30, 30, 10, 10);
Now, is there a way to get the pixels where this ellipse is on the canvas? The reason would be to have a way of creating user interaction with the mouse (for instance). So when someone clicks the mouse over the ellipse, something happens.
I thought of turning everything into objects and using a constructor to somehow store the position of the shape, but this is easier said than done, particularly for more complex shapes. And that is what I am interested in: it's one thing to calculate the position of an ellipse, but what about more complex shapes? Are there any libraries?
Check out the geomerative library. It has a way to check whether the mouse is inside any SVG shape. I can't remember exactly off the top of my head, but it works something like this: you make a shape:
RShape myShape = RG.loadShape("shape.svg");
and a point:
RPoint p = new RPoint(mouseX, mouseY);
and the boolean function contains() will tell you if the point is inside the shape:
myShape.contains(p);
It's better to use a mathematical formula than pixel-by-pixel checking of the mouse position (it's much faster, and involves less code).
For a perfect circle, you can calculate the Euclidean distance using Pythagoras' theorem. Assume your circle is centred at position (circleX,circleY), and has a radius (not diameter) of circleR. You can check if the mouse is over the circle like this:
if(sq(mouseX-circleX)+sq(mouseY-circleY) <= sq(circleR)) {
// mouse is over circle
} else {
// mouse is not over circle
}
This approach basically imagines a right-angled triangle, where the hypotenuse (the longest side) runs from the centre of the circle to the mouse position. It uses Pythagoras' theorem to calculate the length of that hypotenuse, and if it's less than the circle's radius then the mouse is inside the circle. (It includes a slight optimisation though -- it's comparing squares to avoid doing a square root, as that can be comparatively slow.)
An alternative to my original mathematical answer also occurred to me. If you can afford the memory and processing power of drawing all your UI elements twice then you can get good results by using a secondary buffer.
The principle involves having an off-screen graphics buffer (e.g. using PGraphics). It must be exactly the same size as the main display, and have anti-aliasing disabled. Draw all your interactive UI elements (buttons etc.) to this buffer. However, instead of drawing them the normal way, give each one a unique colour which it uses for fill and stroke (don't add any text or images... just solid colours). For example, one button might be entirely red, and another entirely green. Any other RGB value works, as long as each item has a unique colour. Make sure the background has a unique colour too.
The user never sees that buffer, so don't draw it to the screen (unless you're debugging or something). When you want to detect what item the mouse is over, just lookup the mouse position on that off-screen buffer. Get the pixel colour at that location, and match it to the UI element.
After you've done all that, just go ahead and draw everything to the main display as normal.
It's worth noting that you can cut-down the processing time of this approach a lot if your UI elements never (or rarely) move. You only need to redraw the secondary buffer when something appears/disappears, animates, or changes size/position.
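Not Processing, but here is the same lookup sketched in TypeScript against the 2D canvas API (all names are mine; note that canvas shape anti-aliasing can blend edge pixels, which is exactly why the anti-aliasing caveat above matters):

// Hidden buffer: every interactive element gets a unique solid color.
const picking = document.createElement("canvas");
picking.width = 400;
picking.height = 400;
const pctx = picking.getContext("2d", { willReadFrequently: true })!;

const idToColor = new Map<string, string>([
  ["playButton", "#ff0000"],
  ["stopButton", "#00ff00"],
]);

function redrawPickingBuffer(): void {
  pctx.fillStyle = "#000000"; // unique background color
  pctx.fillRect(0, 0, picking.width, picking.height);
  pctx.fillStyle = idToColor.get("playButton")!;
  pctx.beginPath();
  pctx.ellipse(100, 100, 30, 30, 0, 0, Math.PI * 2); // same geometry as on screen
  pctx.fill();
  // ...draw every other interactive element in its own color...
}

function pick(x: number, y: number): string | null {
  const [r, g, b] = pctx.getImageData(x, y, 1, 1).data;
  const hex = "#" + [r, g, b].map(v => v.toString(16).padStart(2, "0")).join("");
  for (const [id, color] of idToColor) if (color === hex) return id;
  return null; // background or a blended edge pixel
}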
Some background:
I am very new to OGL. My application concerns itself with 2D only. All objects are normal to the viewing direction, and I use orthographic projection. I find that the performance of the system is limited by the number of draw* calls, indicating that I need to batch more.
There is only one object that I need to draw, but it consists of thousands of triangles that potentially overlap. In my particular application I am able to pre-compute the geometry and order the triangles back to front, since they have varying degrees of transparency. The vertex attributes consist of the color only, including the alpha that is used in the fragment program.
What I've done:
All the primitives are triangles, and I assign the 3 vertices of each triangle the same color, since the color is constant across a face. I put all of the vertices for all triangles, plus their colors, into a single VBO (with 16-bit indices; there aren't that many vertices). The index buffer orders the triangles back to front, and I issue a single draw call. I use alpha blending (SRC_ALPHA, ONE_MINUS_SRC_ALPHA).
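In WebGL terms the setup described above looks roughly like this (a sketch; vertexBuffer, indexBuffer, and indexCount are assumed to have been created from the pre-sorted data):

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.disable(gl.DEPTH_TEST); // depth and stencil are off, as in the question

gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);        // positions + RGBA per vertex
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer); // triangles sorted back to front
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);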
Result:
I see that the result is correctly blended and rendered on the only machine that I possess and test on; I have not tried it on others. I've searched for quite some time, but in vain, for a definitive answer. BTW, the only reference I found is in the VBO extension spec, where there is a mention of a "sequence of primitives", but it does not address what happens when the primitives overlap.
Question:
Is this guaranteed behavior? That is, will the result be the same as issuing multiple calls between glBegin(...) and glEnd(...) in immediate mode (where the ordering is guaranteed by the standard)?
Note: Depth buffer and stencil buffer are turned off.
It is guaranteed by the OpenGL specification that primitives will be rendered in the order provided. Each primitive pulled from a glDraw* command will be rendered in the order specified by its component vertices.
So yes: if you put the triangles in an order, that's the order you'll get them out when you render them.