Inner glow effect for primitives using GLSL ES 2.0

I'm trying to create an inner glow effect for a triangle fan primitive using GLSL ES 2.0, where only the outer edges are subject to the effect. I guess there are many ways to do this, but I haven't found any of them described so far.
There is the technique described in Make the edges of a textured polygon glow in OpenGL ES 2.0; however, it doesn't work for me, as I'm working purely with primitives at this stage.
My initial thought was to somehow calculate the distance to the nearest edge in the fragment shader, and then set the color according to whether or not that distance falls within some threshold. (Of course, the color and alpha are to be a function of the distance from the nearest edge; the exact gradient profile is not important at this point.)
This approach poses two problems:
1) How do I calculate the distance from a fragment to the nearest edge?
2) How do I exclude common edges in this process, i.e. edges that are common to two (or more) triangles?
Is this a sensible approach, and if so: how do I resolve my two issues? Suggestions for alternative approaches are also greatly appreciated. (For instance, I've been reading that texture data need not be an image, and that it may be utilized for custom purposes. Could a non-image texture be part of the solution?) :)

To answer your two questions: I don't think there is any GLSL magic that will do this for you. By the time you get to the fragment shader, there is no longer any information available about edges, much less any way to segregate true outer edges from internal ones.
What I recommend is to add more vertices to your fan and use a new custom attribute to define the 'glow level'. See the image for an example: I would put a row of vertices near the edge, define these (and the center of the fan) to have maximum glow, define the edge vertices themselves to have zero glow, and then let interpolation give you a glow value between the edge and the new vertices.
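As a minimal sketch of that idea in GLSL ES 2.0 (the attribute, uniform and varying names are mine, and the linear falloff is just a placeholder for whatever gradient profile you settle on):

    // Vertex shader: pass the per-vertex glow level through to the fragments.
    // a_glow is 0.0 at the outer-edge vertices and 1.0 at the inner row of
    // vertices and the fan center.
    attribute vec4 a_position;
    attribute float a_glow;
    uniform mat4 u_mvp;
    varying float v_glow;

    void main() {
        v_glow = a_glow;
        gl_Position = u_mvp * a_position;
    }

    // Fragment shader: map the interpolated glow level to a color gradient.
    precision mediump float;
    uniform vec4 u_baseColor;   // fill color of the fan
    uniform vec4 u_glowColor;   // color the rim glows with
    varying float v_glow;

    void main() {
        // v_glow runs from 0.0 at the outer edge to 1.0 at the inner row,
        // so 1.0 - v_glow serves as the glow intensity near the edge.
        gl_FragColor = mix(u_baseColor, u_glowColor, 1.0 - v_glow);
    }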

Related

"face-based" vs "vertex-based" attributes in fragment shader

I'm writing a rather complex WebGL application in three.js, and in order to have more control over my mesh materials I'm defining them through shaders.
However, I'm facing a serious problem for which I could not find any truly satisfying answer. Let's assume the following strong prerequisites:
1) my geometry is indexed, i.e. the vertices are NOT duplicated among faces, and I don't want to change this
2) the geometry is stored in a THREE.BufferGeometry object; again, I don't want to switch to THREE.Geometry
3) I don't want to duplicate faces and/or create new geometries
4) I don't want any two-pass rendering that would drop my framerate
Given the above, let's say I click on a triangle of my geometry (i.e., shoot a ray and get the closest face intersecting the ray) and I want that triangle to be highlighted with a different color.
I'm already performing this in a quite efficient way by assigning a custom vertex attribute, say 1.0, to each vertex belonging to a face to be highlighted.
In the vertex shader I pass that attribute to the fragment shader through a varying, and within the fragment shader I essentially execute the following:
vec4 frag_color = mix(default_color, highlight_color, step(1.0 - eps, vert_attribute));
where eps is a small value (say 1.0e-4).
The idea is that each fragment interpolates its corresponding vertex attributes, say Vatt_1, Vatt_2 and Vatt_3; if all three attributes have the value 1.0, their interpolation is still nearly 1.0 (not exactly 1.0 because of roundoff error, which is why I use the small tolerance eps) and the test passes.
If the test is true I have frag_color=highlight_color.
On the other hand, if at least one Vatt_i (i = 1, 2, 3) is not 1.0 but the default 0.0, the interpolated value at the current fragment is < 1 - eps and the test fails (giving frag_color = default_color).
This seems to work perfectly, but now I have the following problem that looks truly challenging (given the constraints 1 and 2 above):
I don't have any simple way in the fragment shader to know whether the current fragment belongs to a specific face (or, equivalently, lies within three specific vertices). So, if I select two triangles T1 and T2, and by chance a third triangle T3 shares one (or two) of its vertices with T1 and the other two (or one) with T2, then T3 gets highlighted too, because all three of its vertices carry the attribute 1.0.
Of course, seen from a human perspective this shouldn't happen: the three "highlighted" vertices of T3 logically refer to two different highlighted faces. But for the obvious reasons above, the fragment shader highlights T3 as well.
This is a quite general problem, I guess; I have read a lot of forums without finding any satisfying solution. I understand that "this is how it works" and that the fragment shader has no knowledge of the underlying triangle or its vertices, but here I'm looking for some clever idea or trick.
Does anybody have a suggestion for how to face this issue? Sorry for the long preamble, but to head off some possible objections: I consider the four points 1-4 above strong requirements, because otherwise I'd run into other problems with overall performance, and I don't want to pay that price.
Thanks in advance
You can know whether a point is inside a triangle using barycentric coordinates. This can be done at the vertex or fragment shader level, depending on your needs.
Just take the coordinates of the vertices of the triangle that you want to test and convert the coordinates of the current vertex or fragment to barycentric coordinates. After that, a simple test on the values of the barycentric coordinates will tell you whether the point is inside the triangle.
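A fragment-level sketch of that test, assuming the triangle's corners are handed in as uniforms and the fragment position arrives through a varying in the same 2D space (all names here are mine):

    precision mediump float;
    uniform vec2 u_a;   // corners of the triangle under test
    uniform vec2 u_b;
    uniform vec2 u_c;
    varying vec2 v_pos; // fragment position, interpolated from the vertices

    void main() {
        // Express v_pos in barycentric coordinates (l1, l2, l3) of the triangle.
        vec2 e0 = u_b - u_a;
        vec2 e1 = u_c - u_a;
        vec2 e2 = v_pos - u_a;
        float d  = e0.x * e1.y - e1.x * e0.y;        // signed doubled area
        float l2 = (e2.x * e1.y - e1.x * e2.y) / d;
        float l3 = (e0.x * e2.y - e2.x * e0.y) / d;
        float l1 = 1.0 - l2 - l3;
        // The point is inside iff all three coordinates are non-negative.
        bool inside = min(l1, min(l2, l3)) >= 0.0;
        gl_FragColor = inside ? vec4(1.0, 1.0, 0.0, 1.0)   // highlight
                              : vec4(0.5, 0.5, 0.5, 1.0);  // default
    }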

3D mesh edge detection / feature line computation algorithm

I have a program that visualizes triangular meshes and allows the users to draw on the meshes using a pen. I want to have a "snapping" mode in my system. The snapping mode performs drawing corrections for the user in the sense that the user-drawn lines are snapped to the nearest edge (or the silhouette) of that part of the mesh.
I'm looking for an algorithm that computes the edges of the mesh visible from a given point of view. By edges, I'm referring to the outlines of the shape: corner points and the lines between them (similar to the definition of an edge in computer vision/image processing, such as Canny edges).
So far I've thought of two approaches for this:
1) Edge detection: so far I've only found this paper. Their method is understandable, yet the implementation is not trivial (due to tensor computations and some ambiguity in their explanations). The problem with this approach is that it produces "edge strength values": a value in the range [0, 1] for every vertex, where 1 indicates an edge vertex with high confidence. This introduces extra thresholding parameters into the system which I'd rather not have. Their output looks like this (range [0, 1] scaled to [0, 65535]):
2) Rendering or non-photorealistic methods, such as the one asked about in this question or this paper. They seem to be able to create the silhouettes I'm after, as can be seen below:
I'm not a graphics expert, and as yet I don't know whether their methods can be used to compute the feature lines rather than render them.
I was wondering if anybody has ideas about a good algorithm for what I want to do. Since the system is very interactive, performance is important. The snapping feature does not have to be enabled all the time, so if the method is computationally expensive, some delay when the "snapping enabled" mode is toggled can be tolerated while the algorithm computes the edges. Also, if you know of any implementation (preferably open source), I'd be grateful if you could share it.
There are two types of edges that you want to detect:
silhouette edges are viewpoint dependent; they correspond to the places where the line of sight is tangent to the surface. With a triangulated model they are easy to determine, as they are shared by a front-facing triangle and a back-facing one.
"angular" edges are viewpoint independent, formed by a discontinuity in the tangent-plane direction. As a triangulated model has this kind of discontinuity everywhere, there is no exact criterion to find them; just set a threshold on the angle formed by two adjacent triangles, chosen so that smooth patches do not trigger it.
By this approach, you will find the wanted edges in 3D.
This is not enough, as some of them are hidden by other surfaces. You have the option of integrating them as edges in the 3D model and letting the rendering engine do its job, or, if you have the courage, of implementing a hidden-line removal algorithm. (The Wikipedia link is a little terse.)
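For reference, both edge tests above boil down to a couple of dot products per edge. A sketch in GLSL (function and parameter names are mine; in practice this usually runs on the CPU over the mesh's edge list, but the math is identical):

    // n0 and n1 are the unit normals of the two triangles sharing an edge;
    // viewDir points from the edge toward the eye.

    // Silhouette edge: one face is front-facing, the other back-facing.
    bool isSilhouetteEdge(vec3 n0, vec3 n1, vec3 viewDir) {
        return dot(n0, viewDir) * dot(n1, viewDir) < 0.0;
    }

    // "Angular" (crease) edge: the dihedral angle exceeds a threshold.
    // Choose cosThreshold so that smooth patches do not trigger the test.
    bool isAngularEdge(vec3 n0, vec3 n1, float cosThreshold) {
        return dot(n0, n1) < cosThreshold;
    }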
Since posting the question, something else came to mind: 2D edge detection is a very well-studied problem, so one way of tackling this is to perform 2D edge detection on the projected image of the mesh.
In other words, given a specific view of the mesh, one could generate a 2D image. A 2D edge detection algorithm (such as Canny edge detector) could then be run on the 2D image and the results can be back-projected to 3D to determine the silhouettes of the mesh in question. One possible advantage of this is simplicity!
Edit (2017):
Even though I moved away from this, I returned to the problem for a different purpose. To anybody else looking into it: there is a paper on extracting various kinds of contours from meshes that is worth reading ("Suggestive Contours for Conveying Shape" by DeCarlo et al.).
A working implementation of the methods discussed in the paper is available here.

How to discriminate between vertices 1,2, and 3 inside a GLES Vertex Shader that is drawing triangle(s)

Is there any way to tell, from within a gl es vertex shader (that is drawing triangles) which of the three vertices is being processed?
Using gl_VertexID doesn't work for me, because it gives the index of the vertex in the list of vertices, but I use indices to specify a different order to draw the vertices, and so the value I want cannot be determined from gl_VertexID alone.
You can add a vertex attribute to represent the indices 0, 1, 2, but as #matic-oblak noted you may have to replicate some vertices that are shared between triangles. If the mesh is "three-colorable" (in the graph theory sense) then you can assign indices without any replication.
A tetrahedron is not 3-colorable, whereas a cube is 2-colorable, and we can triangulate the faces of a cube and get a 3-colorable mesh. Ordinary vertices have degree 6 in a triangular mesh and are "locally" 3-colorable.
Therefore you can 3-color a mesh as much as possible; where it fails, you will have to replicate vertices. Unfortunately 3-coloring is an NP-complete problem, but with some simple heuristics I think you can do a fairly reasonable job.
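As a sketch of what the attribute buys you (a_corner is my name for it, holding 0.0, 1.0 or 2.0 per vertex as assigned by the coloring pass), the vertex shader can also expand the index into barycentric coordinates:

    attribute vec4 a_position;
    attribute float a_corner;  // 0.0, 1.0 or 2.0, from the 3-coloring
    uniform mat4 u_mvp;
    varying vec3 v_bary;       // (1,0,0), (0,1,0) or (0,0,1) per corner

    void main() {
        v_bary = vec3(0.0);
        if (a_corner < 0.5)      v_bary.x = 1.0;
        else if (a_corner < 1.5) v_bary.y = 1.0;
        else                     v_bary.z = 1.0;
        gl_Position = u_mvp * a_position;
    }

Interpolated across each triangle, v_bary gives every fragment its barycentric coordinates, which is also a handy building block for wireframe and edge-distance effects.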
As I commented above, what I was looking for is deliberately not available for pipeline efficiency reasons. See the comment by Alfonse Reinheart at the following page:
https://www.opengl.org/discussion_boards/showthread.php/181822-gl_VertexId-gl_InstanceID-gl_PrimitiveID-but-where-is-gl_IndexID
The other answer, posted by wcochran, is interesting and could be a way to pass less information to the rendering pipeline, although, as s/he points out, it comes at the cost of some substantial preprocessing.

OpenGL ES GL_TRIANGLES gradient issue

I am trying to draw an area graph with a gradient. This is what I have right now.
If you look at the red-green graph, you will notice the gradient does not look the way it's supposed to.
EDIT: The gradient should be uniform like this:
I am using OpenGL ES 2.0 and GLKit to draw a bunch of charts. The chart is drawn using GL_TRIANGLES. I understand that the issue is that the gradient is being drawn for each triangle individually.
The only approach I can think of is to use a stencil buffer: draw the gradient in a big rectangle and clip it to this shape using the stencil. Is there a better way to do this? If not, could you help me draw a stencil with the specified points? I am new to OpenGL and haven't found a good explanation of how to use the stencil buffer.
You don't need a stencil buffer. I don't think more triangles will help, either — more likely that'd just cause you more confusion because you'd be assigning per-vertex colors to intermediate vertices and having to interpolate them yourself.
Your gradients are coming out that way because of how and where you assign vertex colors for interpolation. Notice the difference in colors between your output and the example of what you're looking for:
You've got 100% red at every vertex along the top edge of your graph, and 100% green at every vertex along the bottom edge. OpenGL interpolates colors linearly across the face of each triangle, which is why you've got more red in the shorter parts of your graph.
In the output you're looking for, the top of the graph starts out less red in the shorter parts, so that it makes the transition to white over a shorter distance.
There are a few different ways to do this, but probably the easiest (for your plan of using GLKBaseEffect instead of writing your own shaders) might be to use a 1D texture for your gradient, and assign a texture coordinate to each vertex that's proportional to its Y coordinate on the graph, like so:
(The example coordinates in my diagram assume your graph vertices cover the range 0.0 to 1.0, but the point stands regardless: the vertical texture coordinate for each point should be a fraction of the graph's total height, between 0.0 and 1.0.)
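If you do go beyond GLKBaseEffect and write shaders, the idea sketches out like this (all names are mine; GLES 2.0 has no true 1D textures, so the gradient is stored as an N x 1 2D texture):

    // Vertex shader: derive the gradient coordinate from the graph-space Y.
    attribute vec4 a_position;    // graph-space position
    uniform mat4 u_mvp;
    uniform float u_graphHeight;  // total height of the graph, same units
    varying float v_gradCoord;

    void main() {
        v_gradCoord = a_position.y / u_graphHeight;  // 0.0 bottom, 1.0 top
        gl_Position = u_mvp * a_position;
    }

    // Fragment shader: look the color up in the N x 1 gradient texture.
    precision mediump float;
    uniform sampler2D u_gradient;
    varying float v_gradCoord;

    void main() {
        gl_FragColor = texture2D(u_gradient, vec2(v_gradCoord, 0.5));
    }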
Alternatively, you could look into drawing in two passes: First, draw the shape of your graph, then draw a quad (two triangles) covering the entire screen with your gradient, using the appropriate glBlendFunc so that it only draws over the area you've filled in with your graph shape.
OpenGL ES can do what you want, but you need to increase the tessellation of your model. In other words, instead of using just a few large triangles, use more and smaller triangles, with the vertex color changes spread evenly over them. This will give you better control over the gradients. Triangles are cheap on accelerated OpenGL ES, so even increasing their number 100-fold will not have much impact on performance.
You might also consider a different approach, where the entire graph is covered by a single texture which contains the gradient. That would be easier to implement.

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture and then drawing that with the convolution shader). I've also tried doing two passes, the first drawing single-colored, overscaled geometry to represent an outline, and then the normal drawing on top, but this results in varying thicknesses or misaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show how the drawing is divided into triangles. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is this possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English reading order, with the first being '1', point 6 would be one)...
If a point is interior, then when you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two sequential edges don't have a polygon in common, then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
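A sketch of that outline pass (names are mine): draw the expanded geometry in the outline color first, then draw the normal geometry on top. a_normal is a per-vertex outward normal that the CPU updates whenever fixtures are removed:

    attribute vec2 a_position;
    attribute vec2 a_normal;       // unit outward normal, maintained on the CPU
    uniform mat4 u_mvp;
    uniform float u_outlineWidth;  // outline thickness in model units

    void main() {
        // Push the vertex outward so the silhouette grows by u_outlineWidth.
        vec2 p = a_position + a_normal * u_outlineWidth;
        gl_Position = u_mvp * vec4(p, 0.0, 1.0);
    }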
