How many normals?

If you are calculating the normals of a polygon for rendering it in WebGL, do you use a normal for every index in the index array or for every vertex in the vertex array?

In the notes here, the user is calculating them for each vertex.

Every vertex. A vertex, in the WebGL sense (which is the same as OpenGL ES and other predecessors), isn't really a point in space, but rather a combination of attributes. One of these is almost always the location (though in unusual cases you might not have that), and others are generally things like the normal vector, the colour, the texture coordinates, and so on.
The index array, by contrast, is an offset into the vertex attribute arrays. So when you specify index (say) 0 in an index array, it's shorthand for "the vertex made by combining the first location in the location buffer, the first normal in the normal buffer, the first colour in the colour buffer, and the first texture coordinate in the texture coordinate buffer".
The most counter-intuitive thing for me when learning this was separating vertices from the locations they happen to occupy. There's no reason why two vertices can't share the same location: a cube's corner, for instance, is one location but typically three vertices, each carrying the normal of a different face.
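In raw WebGL this pairing is explicit. Here is a minimal sketch (attribute names and the data itself are just illustrative), where the two attribute arrays run in parallel so that index i always selects position i and normal i together, as one vertex:

```javascript
// A minimal sketch (plain WebGL; attribute names and data are illustrative).
const positions = new Float32Array([
  0, 0, 0,   1, 0, 0,   0, 1, 0,  // three locations (x, y, z each)
]);
const normals = new Float32Array([
  0, 0, 1,   0, 0, 1,   0, 0, 1,  // one normal per vertex, same count as positions
]);
const indices = new Uint16Array([0, 1, 2]); // each index names a whole vertex

// Assumes `gl` is a WebGL context and `program` a linked shader program
// with attributes called aPosition and aNormal.
function uploadAttribute(name, data, size) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, name);
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, size, gl.FLOAT, false, 0, 0);
}

uploadAttribute('aPosition', positions, 3);
uploadAttribute('aNormal', normals, 3);

const ibo = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
```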

Does three.js count vertices that aren't referenced in the face list?

Let's say I have a geometry with unnecessary vertices in its vertex list, but those vertices are not referenced in the geometry's face list. Will three.js do extra work while rendering this geometry because of the unused vertices, or will it just ignore them?
Three.js technically doesn't care about the contents of the vertex list. It uses that data to create a buffer, which it sends to the GPU. The same is true for the faces list.
The GPU will look at the faces buffer to determine which vertices to use to draw a particular face. If there are extra vertices in the vertex list, they should be ignored, but they do take up extra memory.
Take a look at how BufferGeometry builds its vertex list (BufferGeometry.attributes.position) and faces list (BufferGeometry.index). This is much closer to how GL and the GPU use these lists, and should give you a clearer picture of how the relationships work.
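A minimal sketch of that relationship (using the current BufferGeometry API; older three.js releases called setAttribute addAttribute):

```javascript
// Vertex 3 is uploaded to the GPU but never referenced by the index,
// so it costs buffer memory without ever being drawn.
import * as THREE from 'three';

const geometry = new THREE.BufferGeometry();
const positions = new Float32Array([
  0, 0, 0,
  1, 0, 0,
  0, 1, 0,
  5, 5, 5, // unused vertex: occupies memory, never rendered
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setIndex([0, 1, 2]); // the face list never mentions vertex 3

const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
```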

OpenGL ES: real-time position of any vertex that is displayed?

I'm currently dealing with OpenGL ES (2, iOS 6) and I have a question:
i. Suppose there is a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So, how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
It's not entirely clear what you are after, but if you want to know where your object ends up after a series of rotations and translations, one very easy option (provided you perform these changes in your program code instead of in the shader) is to take the entire last row or column of your transformation matrix, depending on whether you are using row-major or column-major matrices. That row or column is the final translation of your object's center as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local coordinate center vector, which is (0,0,0,1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
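As a rough sketch in plain JavaScript (assuming column-major matrices, OpenGL's usual layout; names are illustrative):

```javascript
// Multiply a column-major 4x4 matrix (stored as 16 floats) by a
// homogeneous vertex [x, y, z, w].
function transformVertex(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col]; // element (row, col) of m
    }
  }
  return out;
}

// The object's centre after all rotations/translations; with (0,0,0,1) this
// just reads back the matrix's translation column:
// const centre = transformVertex(modelMatrix, [0, 0, 0, 1]);
// Any other local-space vertex works the same way:
// const p = transformVertex(modelMatrix, [x, y, z, 1]);
```

Note that if the transforms only ever happen in the shader, ES 2 gives you no way to read the results back (transform feedback only arrived in ES 3.0), so you would keep a CPU-side copy of the matrix for queries like this.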
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious as to why you care about what is not displayed?

How to detect a click on an edge of a multigraph?

I have written a Win32 API-based GUI app which uses GDI+ features such as DrawCurve() and DrawLine().
This app draws lines and curves that represent a multigraph.
The data structure for an edge is simply a struct of five ints: (x1, y1, x2, y2, and id).
If there is only one edge between two vertices, a straight line segment is drawn using DrawLine().
If there is more than one edge, curves are drawn using DrawCurve(). Here I spread the straight-line edges out around the midpoint of the two vertices, turning them into curves: a control point some number of unit pixels away from the midpoint is calculated using the line's normal equation. If more edges are added, the next control point is placed two unit pixels from the midpoint, then three, and so on.
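In code, that offset scheme reads roughly like this (a sketch; the spacing unit is a free parameter):

```javascript
// The k-th extra edge gets a control point k spacing-units from the
// segment midpoint, along the segment's unit normal.
function curveControlPoint(x1, y1, x2, y2, k, unit = 10) {
  const mx = (x1 + x2) / 2, my = (y1 + y2) / 2; // midpoint of the segment
  const dx = x2 - x1, dy = y2 - y1;
  const len = Math.hypot(dx, dy);
  const nx = -dy / len, ny = dx / len;          // unit normal to the segment
  return [mx + nx * k * unit, my + ny * k * unit];
}
```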
Now I have two questions about detecting clicks on edges.
In finding straight-line edges, what should I do to minimize the search time?
It's quite simple to check whether the clicked pixel is on a given line segment, but comparing against all edges would be inefficient if the number of edges is large. It seems possible to do it in O(log n), where n is the number of edges.
EDIT: at this point the edges (class Edge) are stored in a std::map that maps edge IDs (ints) to Edge objects, and I'm considering declaring another container that maps pixels to edge IDs.
I'm considering using binary search trees, but what would the key be? Or should I just use a 2D pixel array?
Can I get the array of points used by DrawCurve()? If this is impossible, then I should re-calculate the cardinal spline, get the array of points, and check if the point clicked by the user matches any point in that array.
If you have complex-shaped lines, you can do as follows:
Create an internal bitmap the size of your graph and fill it with black.
When you render your graph also render to this bitmap the edges you want to have click-able, but, render them with a different color. Store these color values in a table together with the corresponding ID. The important thing here is that the colors are different (unique).
When the graph is clicked, transfer the X and Y co-ordinates to your internal bitmap and read the pixel. If non-black, look up the color value in your table and get the associated ID.
This way you don't need to worry about the shape at all, nor do you need your own curve algorithm and so forth. The cost is extra memory, which is a consideration, but unless it is a huge graph (in which case you can buffer the drawing) it is in most cases not an issue. You can render the internal bitmap in a second pass so the main graphics appear faster (as usual).
Hope this helps!
(Tip: you can render the "internal" lines with a wider Pen so hit detection becomes more forgiving.)
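For illustration only, here is the same idea sketched with an HTML canvas rather than GDI+ (the technique is API-agnostic; graphWidth and graphHeight are placeholders):

```javascript
// Offscreen picking surface, filled with black = "no edge".
const graphWidth = 800, graphHeight = 600; // placeholder dimensions
const pickCanvas = document.createElement('canvas');
pickCanvas.width = graphWidth;
pickCanvas.height = graphHeight;
const pickCtx = pickCanvas.getContext('2d');
pickCtx.fillStyle = '#000000';
pickCtx.fillRect(0, 0, pickCanvas.width, pickCanvas.height);

const colourToEdgeId = new Map();

function drawPickableEdge(edgeId, x1, y1, x2, y2) {
  // Encode the id directly in the RGB channels so every edge gets a unique colour.
  const r = (edgeId >> 16) & 0xff, g = (edgeId >> 8) & 0xff, b = edgeId & 0xff;
  colourToEdgeId.set((r << 16) | (g << 8) | b, edgeId);
  pickCtx.strokeStyle = `rgb(${r},${g},${b})`;
  pickCtx.lineWidth = 6; // wider than the visible line: more forgiving clicks
  pickCtx.beginPath();
  pickCtx.moveTo(x1, y1);
  pickCtx.lineTo(x2, y2);
  pickCtx.stroke();
}

function edgeAt(x, y) {
  const [r, g, b] = pickCtx.getImageData(x, y, 1, 1).data;
  const key = (r << 16) | (g << 8) | b;
  return colourToEdgeId.get(key) ?? null; // null means the background was clicked
}
```

One caveat: anti-aliasing blends colours along stroke boundaries, so the picking surface should be rendered with smoothing disabled (in GDI+, turn smoothing off for that bitmap).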

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline does not depend on the texture, and it is too slow (I tried rendering the textured geometry to another texture, and then drawing that with the convolution shader). I've also tried doing two passes, the first being single-coloured, overscaled geometry to represent an outline, with normal drawing on top, but this results in varying thicknesses and misaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken apart (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show how the drawing is separated into triangles. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is it possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one)...
If a point is interior, then when you list all the polygon edges connected to it in clockwise order, each consecutive pair of edges will have a polygon in common. If any two consecutive edges don't have a polygon in common, it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
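As a CPU-side sketch of the connectivity test (plain JavaScript, assuming a flat triangle index list like the one in the question, where removed fixtures leave degenerate all-equal triangles):

```javascript
// Find exterior edges from a flat triangle index list. An edge shared by
// two triangles is interior; an edge used exactly once lies on the outline.
function boundaryEdges(indices) {
  const counts = new Map();
  const key = (a, b) => Math.min(a, b) + ',' + Math.max(a, b);
  for (let i = 0; i < indices.length; i += 3) {
    const tri = [indices[i], indices[i + 1], indices[i + 2]];
    if (tri[0] === tri[1] && tri[1] === tri[2]) continue; // skip degenerate (removed) triangles
    for (let e = 0; e < 3; e++) {
      const k = key(tri[e], tri[(e + 1) % 3]);
      counts.set(k, (counts.get(k) || 0) + 1);
    }
  }
  const edges = [];
  for (const [k, n] of counts) {
    if (n === 1) edges.push(k.split(',').map(Number)); // exterior edge
  }
  return edges; // chain these end-to-end to walk the outline described above
}
```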
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
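For the rectangle case mentioned above, this amounts to something as small as the following (field names assumed):

```javascript
// Same centre, dimensions padded by an absolute amount, which keeps the
// outline thickness uniform (unlike scaling the geometry up).
function outlineRect(rect, thickness) { // rect: {cx, cy, w, h}, shape assumed
  return {
    cx: rect.cx,
    cy: rect.cy,
    w: rect.w + 2 * thickness, // pad both sides so the centre stays put
    h: rect.h + 2 * thickness,
  };
}
```

Drawing the padded rectangles first in the outline colour, then the originals on top, gives a uniform border, which the overscaling tried in the question does not.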

How should I specify rectangles in a 3D scene?

When rendering 3D rectangles (i.e. rectangles in 3D space), of course, they are specified as a list of vertexes for two triangles. However, that representation contains a lot of extraneous information that gets tiresome to code multiple times. I'd like to create a "Rectangle" object that will allow me to specify its texture, size, position, and orientation in space and export the list of vertexes (and indexes), but I'm not sure of the best way to do it. Should I specify the position of the lower left corner (pre-rotation), or the center of the rectangle? How should I specify the orientation, as a vector containing rotation angles? This is such a simple and standard requirement that I'm sure people have thought about it before, but I can't find anything on this site or elsewhere on the subject. I plan to use these objects a lot, so my primary goal (apart from performance) is ease of use rather than anything to do with the internal representation. It wouldn't be hard for me to simply code the first thing I can think of, but I don't want to miss anything and make it unnecessarily difficult.
So, how should I represent a Rectangle object? Opinions are welcome, but sources would be especially helpful.
Edit: if it helps, I believe I'd primarily be using the rectangles on the faces of cubes, though not necessarily as the entire faces of those cubes.
It would probably be simplest to store the homogeneous matrix that transforms a standard, axis-aligned square into the desired location, along with a separate matrix that determines how to map the texture onto it.
For the location matrix, you can store the 4x3 matrix that doesn't affect the w-coordinate. This is only a bit redundant: it uses 12 values where a general rectangle needs 8, but on the other hand, it will be much easier to convert it back to a form usable for rendering.
Alternately, you can store a point location (edge or center depending on whatever is most convenient), and two direction vectors, describing the direction and length of each edge; you are relying on your rectangle generator to make sure the edge vectors are orthogonal. This will take 9 values, which is almost the best you can do.
For the texture mapping, you can store a 3x2 matrix that defines an affine mapping of the (u,v) coordinates onto the coordinates defined by the edges of the rectangle. You can choose a zero-based (0,1)x(0,1) mapping, or a symmetric (-1,1)x(-1,1) mapping, based on whatever is convenient for your application. In any case, this will require 6 values.
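A minimal sketch of the point-plus-two-edge-vectors variant and its expansion back into renderable data (JavaScript; class and method names are illustrative, not an established API):

```javascript
class Rect3D {
  constructor(origin, edgeU, edgeV) { // each a [x, y, z]; edgeU, edgeV assumed orthogonal
    this.origin = origin; // one corner of the rectangle
    this.edgeU = edgeU;   // direction and length of the first edge
    this.edgeV = edgeV;   // direction and length of the second edge
  }
  vertices() { // the four corners, in order around the quad
    const add = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
    const { origin: o, edgeU: u, edgeV: v } = this;
    return [o, add(o, u), add(o, add(u, v)), add(o, v)];
  }
  indices(base = 0) { // two triangles covering the quad
    return [base, base + 1, base + 2, base, base + 2, base + 3];
  }
  uvs() { // zero-based (0,1)x(0,1) texture mapping
    return [[0, 0], [1, 0], [1, 1], [0, 1]];
  }
}
```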
As a rectangle is just a bounded plane, what about storing it as an extension of that: a point and a normal vector (defining the centre, or perhaps one of the corners, and the orientation), with two more components for the width and height bounds?
I think it really depends on how you intend to use the rectangle.
For example: if you have lots of rectangles, storing three points of one of the two triangles might be best, because then you only have to calculate one more point.
If you typically center your rectangles on something, then the center point, width, height, and rotation angles might be more appropriate.
I'd say: start with whatever seems natural to you. Make sure your class is able to do all the necessary calculations and hide them behind accessors. Have a good suite of tests for that.
That way you can change the implementation at any time. Or you can even have different rectangle implementations for different needs.
