I am experimenting with a primitive rendering style (solid colors with highlighted edges/creases) for an open-source game I contribute to. The world geometry is fairly simple: mostly blocks and pyramids, with other simple volumes (cylinders, cones, other kinds of prisms, etc.) possibly added later. The rendering will be done with OpenGL ES 2.
I have been experimenting with edge detection methods for the edges/creases. Shader-based edge detection on the depth value and face normals (I tried the Sobel filter and several other algorithms) seemed like it would be easiest, but I was unable to get a good result, mostly due to the precision limits of the depth buffer and the complexity of far-away geometry, as well as the inability to do any good antialiasing on the edges.
I ultimately decided that I needed to render the lines geometrically so I could make them thick, smooth out the edges, and so on. I would like to generate the lines programmatically from the geometry definition prior to rendering to improve runtime performance. I can get most of the effect I want by drawing the main geometry, setting a depth offset, and then drawing lines over the geometry. However, this technique has some shortcomings, as seen in the sections below.
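For reference, here is a minimal sketch of my current two-pass approach in WebGL 1 terms (the API mirrors OpenGL ES 2); drawSolidGeometry and drawEdgeLines are hypothetical helpers standing in for my real draw calls:

```javascript
// Sketch of the "geometry first, offset lines on top" technique (WebGL 1 / GLES 2 style).
// drawSolidGeometry() and drawEdgeLines() are hypothetical helpers that bind the
// appropriate shaders/buffers and issue the draw calls.
function renderFrame(gl) {
  gl.enable(gl.DEPTH_TEST);

  // Pass 1: solid faces, pushed slightly away from the camera so the
  // line pass can win the depth test on shared surfaces.
  gl.enable(gl.POLYGON_OFFSET_FILL);
  gl.polygonOffset(1.0, 1.0);          // factor, units; tune per scene
  drawSolidGeometry(gl);
  gl.disable(gl.POLYGON_OFFSET_FILL);

  // Pass 2: pre-generated edge/crease lines drawn over the faces.
  // (Drawn as thin quads in practice, since GLES 2 line width support is limited.)
  drawEdgeLines(gl);
}
```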
Overlapping Geometry
There may be several pieces of geometry overlapping or adjoining to form more complex structures. Where several pieces have overlapping/coplanar faces, I want to draw an outline around the combined shape, but not around each individual piece (which would make every separate part visible).
Current result on top; desired result on bottom:
Creases
This issue is also visible in the image above, but the image below shows my goal more clearly. I want to draw lines where there are creases in overlapping geometry, to make the creases stand out much more.
Current result on top; desired result on bottom:
From what I can tell so far, for the overlapping faces problem, I think I need to do intersection tests between my lines and any nearby intersecting faces, then somehow break the lines up and get rid of the segments that cross other faces. To create lines in the creases between geometry, I think I need to do some kind of intersection tests between the two faces that form the crease. However, I'm having a hard time wrapping my head around the step-by-step procedure for doing that. Again, I would like to set up these lines programmatically in a pre-rendering step if possible. Any guidance would be much appreciated.
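To make this concrete, here is a rough sketch of the kind of pre-processing pass I have in mind; the face representation (unit normal, reference point on the plane, a containsPoint test) is assumed, and a real version would need proper segment splitting and polygon clipping rather than these shortcuts:

```javascript
// Rough pre-processing sketch (not a robust clipping/CSG routine).
// Assumptions: each face is a planar convex polygon carrying a unit normal
// (face.normal), a point on its plane (face.point), and a containsPoint(p) test.
const EPS = 1e-4;
const sub   = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
const dot   = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Overlapping faces: drop an outline segment when it lies on another, coplanar
// face (a real version would split the segment at the polygon's border instead).
function keepOutlineSegment(a, b, otherFace) {
  const coplanar =
    Math.abs(dot(sub(a, otherFace.point), otherFace.normal)) < EPS &&
    Math.abs(dot(sub(b, otherFace.point), otherFace.normal)) < EPS;
  if (!coplanar) return true;
  const mid = scale([a[0] + b[0], a[1] + b[1], a[2] + b[2]], 0.5);
  return !otherFace.containsPoint(mid); // inside the other face => hidden seam, drop it
}

// Creases: the crease between two intersecting faces lies on the intersection
// line of their planes. This returns that infinite line; it still needs to be
// clipped to the region where both polygons actually overlap.
function planePlaneIntersection(faceA, faceB) {
  const n1 = faceA.normal, n2 = faceB.normal;
  const dir = cross(n1, n2);
  const len2 = dot(dir, dir);
  if (len2 < EPS) return null; // parallel or coplanar planes: no crease
  const d1 = dot(n1, faceA.point);
  const d2 = dot(n2, faceB.point);
  // A point on the line: ((d1*n2 - d2*n1) x dir) / |dir|^2
  const point = scale(cross(sub(scale(n2, d1), scale(n1, d2)), dir), 1 / len2);
  return { point, dir };
}
```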
Related
I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector. I found it difficult to control the width, though.
I saw another web-based viewer and it's capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders, two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try to do something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed.
Is there an approach like this already documented somewhere?
Edit 8/15/17
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of that parametric surface at the vertex, push the surface out by some factor that would cover some number of pixels in screen space, stencil it, and draw a thick line, clipping it against the actual shape of the surface.
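If updating the line endpoints on the CPU is acceptable, the "two lines per cylinder" idea could be sketched like this (the orthographic view-direction approximation and the cylinder description with center, unit axis, radius, and halfLength are my own assumptions, not something read out of that viewer):

```javascript
// Sketch: approximate silhouette generator lines of a cylinder for a view
// direction treated as orthographic. The side silhouette lies where the surface
// normal is perpendicular to the view direction, i.e. offset along +/-(axis x view).
const cross3 = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

function cylinderSilhouetteLines(cyl, viewDir) {
  // cyl = { center: [x,y,z], axis: unit [x,y,z], radius, halfLength }
  const u = cross3(cyl.axis, viewDir);
  const len = Math.hypot(u[0], u[1], u[2]);
  if (len < 1e-6) return []; // looking straight down the axis: no side silhouette

  const lines = [];
  for (const sign of [1, -1]) {
    const base = cyl.center.map((c, i) => c + sign * (u[i] / len) * cyl.radius);
    const a = base.map((c, i) => c + cyl.axis[i] * cyl.halfLength);
    const b = base.map((c, i) => c - cyl.axis[i] * cyl.halfLength);
    lines.push([a, b]); // feed these endpoints into the pre-allocated line entities
  }
  return lines;
}
```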
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
There is also the old-fashioned OpenGL technique from Jeff Lander.
OpenGL Question: I have something to ask about the clip space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. Given that, do elements outside the clip space affect performance or not? Since they will not be drawn, it seems like they shouldn't.
Assuming it does affect performance, in the case of a 2D game like Super Mario I am thinking about not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only a certain amount of knowledge about your scene and will clip very late in the pipeline. It can't apply a broad phase test. Assuming you can, you should.
Supposing you had a model with 30,000 triangles, OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the bounding sphere for the model, it's possible you could see that the whole thing is completely outside of the frustum in a single test and save almost 30,000 triangles' worth of effort.
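As a rough illustration of that broad-phase test (the plane representation and the frustumPlanes array are assumptions on my part, not something from your question):

```javascript
// Sketch of a broad-phase test: reject a whole model if its bounding sphere
// lies entirely on the outside of any frustum plane.
// Each plane is { normal: [x,y,z] (unit, pointing into the frustum), d: number },
// so a point p is inside the half-space when dot(normal, p) + d >= 0.
function sphereIntersectsFrustum(center, radius, frustumPlanes) {
  for (const plane of frustumPlanes) {
    const dist =
      plane.normal[0] * center[0] +
      plane.normal[1] * center[1] +
      plane.normal[2] * center[2] + plane.d;
    if (dist < -radius) return false; // completely outside this plane: cull the model
  }
  return true; // potentially visible: submit its 30,000 triangles
}
```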
In a 2D game like Mario, what this usually means is using the scroll position to index into the map and generating geometry only for potentially visible tiles and sprites that are within the visible area.
For the map, that will generally just mean figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, which means discarding the vast majority of the geometry with zero processing.
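For instance (the tile size, map layout, and drawTile call are all hypothetical here), the visible tile range can be derived directly from the scroll position:

```javascript
// Sketch: only generate geometry for tiles that can actually appear on screen.
// tileSize, screenWidth/Height, map.tileAt() and drawTile() are hypothetical.
function drawVisibleTiles(map, scrollX, scrollY, screenWidth, screenHeight, tileSize) {
  const firstCol = Math.floor(scrollX / tileSize);
  const firstRow = Math.floor(scrollY / tileSize);
  const cols = Math.ceil(screenWidth / tileSize) + 1;  // +1 covers partial tiles at the edge
  const rows = Math.ceil(screenHeight / tileSize) + 1;

  for (let row = firstRow; row < firstRow + rows; row++) {
    for (let col = firstCol; col < firstCol + cols; col++) {
      const tileId = map.tileAt(col, row);   // everything outside this window is never touched
      if (tileId !== 0) drawTile(col * tileSize - scrollX, row * tileSize - scrollY, tileId);
    }
  }
}
```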
For the sprites, this is generally why in that sort of game you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map-location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.
When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50), the touching faces flicker and renderDepth doesn't work anymore.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to pull a surface forward (so it displays in front) and positive factors to push it back (so it hides behind).
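Putting it together, a fuller sketch for the two-box case might look like this; the sizes, colors, and scene setup are just example values, and polygonOffsetUnits is optional extra tuning:

```javascript
// Sketch: two coplanar boxes, with the one we want "in front" pulled toward
// the camera in depth-buffer terms via a negative polygon offset.
// Sizes, colors, and the surrounding scene/camera setup are example values only.
const frontMaterial = new THREE.MeshLambertMaterial({
  color: 0xff0000,
  polygonOffset: true,
  polygonOffsetFactor: -0.1, // negative: rendered "closer", wins the depth test
  polygonOffsetUnits: -1
});
const backMaterial = new THREE.MeshLambertMaterial({ color: 0x0000ff });

const frontBox = new THREE.Mesh(new THREE.BoxGeometry(40, 50, 50), frontMaterial);
const backBox  = new THREE.Mesh(new THREE.BoxGeometry(50, 50, 50), backMaterial);
scene.add(frontBox, backBox);
```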
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of precision (depending on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10, and you want another object to be behind the first object, put it at 0,0,11; that should work.
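In Three.js terms, using the example numbers above (objectA and objectB are placeholder meshes):

```javascript
// Place the second object one unit farther from the camera along the view
// direction, so the depth buffer resolves their order unambiguously.
camera.position.set(0, 0, 0);
objectA.position.set(0, 0, 10);
objectB.position.set(0, 0, 11);
camera.lookAt(objectA.position); // camera looks toward both objects
```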
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does unless it's Shadertoy or some video filter or something. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh on this surface, for example by projecting a quad onto it. Say you want to draw a bullet hole over a concrete wall; you end up with two coplanar surfaces, the wall and the bullet hole. You can figure out the depth buffer precision, find the smallest representable increment, and then move the bullet hole mesh by that amount towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
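As a very rough illustration only (the paragraph above describes doing this precisely in NDC; this sketch instead nudges the mesh by a small, hand-tuned world-space amount, which is a cruder approximation):

```javascript
// Sketch: nudge the decal mesh toward the camera by a tiny world-space amount,
// a crude stand-in for one depth-buffer increment in NDC.
// 'decal' and 'camera' are placeholder objects; epsilon needs tuning per scene.
function nudgeTowardCamera(decal, camera, epsilon = 0.01) {
  const toCamera = new THREE.Vector3()
    .subVectors(camera.position, decal.position)
    .normalize();
  decal.position.addScaledVector(toCamera, epsilon);
}
```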
I don't see any flicker. The cube movement in 3D seems to be super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.
I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face but you aren't able to update those after the first render so that wouldn't work. I've tried looking into UV coordinates but I don't understand how it would work in this situation, nor how to actually implement that in Three.js – there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
I'm not 100% sure what you are trying to do, or whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree-based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can be also be used to do all sorts of weird things because of that, but that's another adventure entirely)
Generally mipmapping is automatic based on the filtering you use - however it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
Which shows you how to load each mipmap level in manually, rather than have it just be automatically generated.
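The gist of that example, in rough Three.js terms (the mip256 through mip1 images/canvases are placeholders, largest first, going all the way down to 1x1 so the chain is complete; the filter constant was spelled LinearMipMapLinearFilter in older releases):

```javascript
// Sketch: supply each mipmap level yourself instead of letting Three.js
// generate them, so each "zoom level" can be a different hand-made image.
const texture = new THREE.Texture(mip256);
texture.mipmaps = [mip256, mip128, mip64, mip32, mip16, mip8, mip4, mip2, mip1];
texture.generateMipmaps = false;                    // don't overwrite the manual chain
texture.minFilter = THREE.LinearMipmapLinearFilter; // pick levels by on-screen size
texture.needsUpdate = true;

const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(256, 256),
  new THREE.MeshBasicMaterial({ map: texture })
);
```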
HTH
Once upon a time, I dabbled in programming Homebrew for the Nintendo DS. During testing of the 3D hardware, you could pull a count of currently rendered polygons from a hardware register. I was doing this to confirm how many, approximately, of a certain model would be rendered at various angles because there was a max of 2048 triangles allowed.
I'm hoping this isn't a completely stupid question... but here goes. Is there any way to get the number of polygons that are actually being rendered each frame (not omitted by depth buffering) in OpenGL? Specifically, OpenGL ES 1.1?
You could just run each triangle through an algorithm to determine if it is inside the view frustum by treating the viewing area as four planes. Then it's just a matter of checking which side of each plane a triangle is on, and making sure you only count triangles that are on the correct side of all four planes. This wouldn't be good for rendering speed, but it would give you an accurate count of how many polygons are being rendered for each viewing angle. The Graphics Gems website contains a lot of good source code that can help you with the math portion of things if you need it. It contains the source code for a series of five books that contain graphics algorithms such as ray-triangle intersection, etc.
Edit:
I didn't notice your comment about depth buffering; the above description is for all triangles in the viewing window. You could just add two more planes at your depth buffer distances (near and far) and use those to further filter out visible polygons.
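A minimal sketch of that counting pass (the plane representation and triangle format are my assumptions; note this counts triangles inside the frustum and does not account for occlusion by the depth buffer):

```javascript
// Sketch: count triangles that are at least partially inside the frustum.
// Planes are { normal: [x,y,z] pointing inward, d }, so "inside" means dot(n, p) + d >= 0.
// A triangle is culled only if all three vertices are outside the same plane
// (conservative: it may over-count near frustum corners).
function countPotentiallyVisibleTriangles(triangles, planes) {
  let count = 0;
  for (const tri of triangles) {             // tri = [v0, v1, v2], each vertex [x, y, z]
    let culled = false;
    for (const plane of planes) {            // 4 side planes, plus near/far if desired
      const allOutside = tri.every(v =>
        plane.normal[0] * v[0] + plane.normal[1] * v[1] + plane.normal[2] * v[2] + plane.d < 0
      );
      if (allOutside) { culled = true; break; }
    }
    if (!culled) count++;
  }
  return count;
}
```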