Drawing Stippled Lines using OpenGL-ES

I have an application that draws a 3-D map view, marked up with lines that show various features.
I am porting the map over to an OpenGL-ES architecture, but am having a bit of trouble working out how to display dashed lines.
After a lot of googling, I've found many references to the idea that support for drawing stippled lines and polygons was removed from OpenGL ES because it can easily be emulated using textured triangles. That's great, but I can't find anyone who actually does this emulation or describes the steps involved.
For example, one problem I've encountered while prototyping this concept is that perspective squeezes my lines to invisibility as they approach the horizon. Using GL_LINE_STRIP, this doesn't happen, and the lines remain a constant width across the map.
Any advice on how to achieve dashed constant width lines in a perspective view would be much appreciated.

I believe you can apply a texture to a line, not just a triangle. You'll need to set texture coordinates for each end of the line; the texture will be interpolated linearly along the line.
The effectiveness of this solution should be independent of whether you use lines or line strips; line strips are just a way to specify lines with fewer vertices.
There is one other problem: the texture pattern will tend to become compressed as a line recedes from the camera. This happens because texture coordinate interpolation is perspective-correct even for lines (see section 3.5 of the GL spec).
There are two ways to get around this:
If you can calculate a "q" texture coordinate that undoes the perspective, you can restore screen-space texturing, but this is probably too expensive to be practical.
You can project the texture in eye space (e.g. glTexGen).
Texture coordinate generation is of course not available in GLES 1.1, but if you are using vertex arrays, you can fake it by:
Setting your texture coordinate array to be your vertex coordinate array and
Using the texture matrix to "transform" the vertices.
The disadvantage of this technique is that the texture pattern will be in fixed screen space - that is, the texture won't run across the lines.
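For illustration, here is a minimal sketch of that trick using Android's GL10 bindings, assuming a small dash-pattern texture dashTexture, a FloatBuffer lineVertices of 3-D points, and the current modelview matrix passed in as modelView (all names hypothetical):

public void drawDashedLines(GL10 gl, FloatBuffer lineVertices, int vertexCount,
                            int dashTexture, float[] modelView)
{
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, dashTexture);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, lineVertices);

    // Step 1: reuse the vertex array as the texture coordinate array.
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glTexCoordPointer(3, GL10.GL_FLOAT, 0, lineVertices);

    // Step 2: use the texture matrix to "transform" the vertices; loading
    // the modelview matrix here projects the dash pattern in eye space.
    gl.glMatrixMode(GL10.GL_TEXTURE);
    gl.glLoadMatrixf(modelView, 0);
    gl.glMatrixMode(GL10.GL_MODELVIEW);

    gl.glDrawArrays(GL10.GL_LINE_STRIP, 0, vertexCount);

    // Restore the texture matrix so later draws are unaffected.
    gl.glMatrixMode(GL10.GL_TEXTURE);
    gl.glLoadIdentity();
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}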

If all you want is to draw dashed lines, just change from GL_LINE_STRIP to GL_LINES. That way, OpenGL will connect vertices 1&2, 3&4, and 5&6, but not 2&3 or 4&5, leaving gaps there. In essence you get a half-and-half dashed line, the rough equivalent of glLineStipple(1, 0x5555);
i.e., given the vertex array
[0,0, 1,1, 2,2, 3,3, 4,4, 5,5, 6,6]
OpenGL will connect (0,0) to (1,1), but will not connect (1,1) to (2,2) [whereas it would with GL_LINE_STRIP]. It will then connect (2,2) to (3,3), but NOT (3,3) to (4,4). The final segment drawn is (4,4) to (5,5); the odd vertex (6,6) has no partner and is ignored.
This is what it looks like when I did it:
[screenshot: dotted lines rendered on Android]
The line is not 50/50 dashed/empty because in my case each segment represents the position of a game entity on each game frame, and game frames are not necessarily of equal length, hence the inconsistent dash-to-gap ratio.
The code looks like this to draw it:
public void draw(GL10 gl)
{
    gl.glDisable(GL10.GL_TEXTURE_2D);
    gl.glColor4f(color[0], color[1], color[2], color[3] / fade);
    // pointsBuffer holds the 2-D points on the line
    // (GL_VERTEX_ARRAY is assumed to be enabled elsewhere)
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, pointsBuffer);
    // GL_LINES draws every other segment, which produces the dashes
    gl.glDrawArrays(GL10.GL_LINES, 0, points.length / 2);
    gl.glEnable(GL10.GL_TEXTURE_2D);
}
An alternative idea, if you want a more deliberately patterned stipple, is to skip certain segments by drawing with an index array (glDrawElements) instead; a rough sketch of that follows.
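For instance, a minimal sketch (assuming the same pointsBuffer setup as in the draw() method above, and the java.nio buffer classes), where omitting the index pair for every third segment yields a repeating two-dashes-one-gap pattern:

// A segment is drawn only if its vertex pair appears in the index array,
// so leaving a pair out leaves a gap in the polyline.
short[] stipple = { 0, 1,  1, 2,  /* skip (2,3) */  3, 4,  4, 5 /* skip (5,6) */ };
ShortBuffer indexBuffer = ByteBuffer.allocateDirect(stipple.length * 2)
        .order(ByteOrder.nativeOrder())
        .asShortBuffer();
indexBuffer.put(stipple).position(0);

gl.glVertexPointer(2, GL10.GL_FLOAT, 0, pointsBuffer);
gl.glDrawElements(GL10.GL_LINES, stipple.length,
                  GL10.GL_UNSIGNED_SHORT, indexBuffer);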

Related

Using multiple primitives in WebGL at the same time

I am trying to develop a web application that visualizes a finite element mesh. To do so, I am using WebGL. Right now I have a page with all the code necessary to draw the mesh in the viewport using triangles as primitives (each quad element of the mesh was split into two triangles for drawing). The problem is that, when using triangles, the whole part looks "continuous" and you can't see the separation between triangles. What I would like to achieve is to add lines between the nodes, so that each quad element (formed by two triangles) is surrounded by black lines and the mesh structure is actually visible.
I was able to define the lines in my page, but since one draw call can only use one type of primitive, when I add the code for the line buffers and bind them, only the lines show up, not the elements (as the line buffers were the last ones bound).
The closest solution I have found is using multiple shaders managed through multiple programs, but that would only let me either draw the geometry with triangles or draw just the lines, depending on which program is currently selected.
Could any of you help me with how to approach this issue? I have seen a Windows application that shows FE meshes using OpenGL, and it is able to mix triangles with points and lines, apart from using different layers, illumination, etc. So I am aware this may be complicated, but I assume that if it is somehow possible with OpenGL, it should be possible with WebGL as well.
If you do provide a solution, I would really appreciate it including some example code, for instance drawing a single triangle but with three black lines at its borders and maybe three points at the vertices.
setup()
{
    <your current code here>
    Additional step: unbind the previous textures, then upload and bind a single 1x1 black pixel as a texture. Let this texture object be borderID.
}

drawLoop()
{
    Pass 1: unbind the previous textures, bind your normal textures, and draw the mesh as in your current setup. This fills the elements with their colours, without borders (the current case).
    Pass 2: bind the borderID texture and draw the same vertices again, except this time use context.LINE_LOOP instead of context.TRIANGLES. The lines pick up the black texture and appear as borders on top of the previously drawn colours of each triangle. You can have something like:

    if (currDrawMode == 0)
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, meshTextureObj[bindId]);
    else
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, borderTexture1pixObj[bindId]);
    context3dStore.drawElements((currDrawMode == 0) ? context3dStore.TRIANGLES : context3dStore.LINE_LOOP,
                                indicesCount[bindId], context3dStore.UNSIGNED_SHORT, 0);

    where currDrawMode toggles between drawing the mesh fill and drawing the borders.
}

Since the black line texture appears as a border over the flat colours drawn in the first pass, this should solve your need.
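For concreteness, here is the same two-pass idea as a hedged sketch in Android's GL10 Java bindings, used here only because the calls carry the same names as the WebGL context's bindTexture and drawElements; meshTexture, borderTexture and the two index buffers are assumed to be built by the caller, with both index buffers referring to the same vertex array:

void drawMeshWithBorders(GL10 gl, int meshTexture, int borderTexture,
                         ShortBuffer triIndices, int triCount,
                         ShortBuffer edgeIndices, int edgeCount)
{
    // Pass 1: the filled elements, using the normal mesh texture.
    gl.glBindTexture(GL10.GL_TEXTURE_2D, meshTexture);
    gl.glDrawElements(GL10.GL_TRIANGLES, triCount,
                      GL10.GL_UNSIGNED_SHORT, triIndices);

    // Pass 2: the same vertices as lines, with the 1x1 black texture bound,
    // so the element edges appear in black on top of the fill.
    gl.glBindTexture(GL10.GL_TEXTURE_2D, borderTexture);
    gl.glDrawElements(GL10.GL_LINES, edgeCount,
                      GL10.GL_UNSIGNED_SHORT, edgeIndices);
}

Depending on your depth settings, you may also need a small depth bias (e.g. glPolygonOffset on the fill pass) so the lines aren't lost to z-fighting against the coplanar triangles.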

Orthographic 3D Backface Culling using Surface Normals

I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden.
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
1. Create a transformation matrix by which to transform the cube's vertices.
2. For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
3. I then get the surface normal of the face using Newell's method, and take the dot product of that normal with some made-up vec3, e.g. [-1, 1, 1], since I couldn't think of a good value to put here. I've seen some folks use the position of the camera for this, but...
4. Skipping the usual step of applying a camera matrix, I pull the x and y values from the resulting vectors to send to my line and face renderers, but only if the face's dot product is above 0. I realize it's rather arbitrary which components I pull, really.
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on each face is wrong (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that normals aren't pertinent, that what matters is the direction in which the outline is drawn, but it's hard for me to wrap my head around that, or to tell whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
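As a concrete illustration (a sketch, not tied to the asker's codebase): after all transforms, take two edges of the projected 2-D triangle and look at the sign of their cross product, which is twice the triangle's signed area:

// Returns true if the projected triangle (a, b, c) is wound counter-clockwise
// in a y-up coordinate system; with canvas-style y-down coordinates the sign
// flips, so pick the convention that matches your axes.
static boolean isFrontFacing(double ax, double ay, double bx, double by,
                             double cx, double cy)
{
    // Twice the signed area; positive means counter-clockwise winding.
    double signedArea = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    return signedArea > 0.0;
}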

How to create a shader to mask using a degree offset from a central point?

I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, which I show in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2d coordinate space. (Lower-left of screen is origin). I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0 to 1.0) in to the shader, or I could precompute an angle, either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
( Here you can see the scene drawn with '80%' masking )
The general idea, I think, is to pass in a uniform vec2 for A, and within the fragment shader determine whether the fragment lies in the wedge that starts at the line running from A to the bottom of the screen and sweeps clockwise through the correct angle. If it is within that wedge, discard the fragment. (Discarding makes more sense than setting alpha to 0.0 for masked fragments and 1.0 for kept ones, right?)
But how can I actually achieve this? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0.)
One solution would be to calculate the difference between gl_FragCoord (which does exist under ES 2.0) and the point (after making sure the point is in window coordinates), then use the two-argument atan function to get an angle. If the angle is not in the range you want (greater than your minimum and less than your maximum), kill the fragment.
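A hedged sketch of that as a GLSL ES 2.0 fragment shader, written as a Java string in the Android style used elsewhere on this page; the uniform names are assumptions, and the mask is assumed to sweep clockwise starting from the direction pointing straight down from A:

private static final String MASK_FRAGMENT_SHADER =
    "precision mediump float;                                       \n" +
    "uniform vec2  uPoint;      // point A, in window coordinates   \n" +
    "uniform float uMaskAmount; // 0.0 = intact, 1.0 = fully masked \n" +
    "uniform sampler2D uTexture;                                    \n" +
    "varying vec2 vTexCoord;                                        \n" +
    "const float TWO_PI = 6.2831853;                                \n" +
    "void main() {                                                  \n" +
    "    vec2 d = gl_FragCoord.xy - uPoint;                         \n" +
    "    // Angle of this fragment around A, measured clockwise     \n" +
    "    // from straight down, in [0, 2*pi).                       \n" +
    "    float angle = atan(-d.x, -d.y);                            \n" +
    "    if (angle < 0.0) angle += TWO_PI;                          \n" +
    "    if (angle < uMaskAmount * TWO_PI) discard;                 \n" +
    "    gl_FragColor = texture2D(uTexture, vTexCoord);             \n" +
    "}                                                              \n";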
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing": killing fragments still causes the fragment shader to run (the result is only discarded afterwards), and using discard interferes with early depth/stencil fragment rejection.
Constructing a triangle fan as whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations too (with MSAA, for example).
Why don't you just draw some black triangles on top of the red rectangle?

How do I add an outline to a 2d concave polygon?

I'm successfully drawing the convex polys which make up the following white concave shape.
The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.
Evidently using...
glScalef(1.1, 1.1, 0.0);
... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.
I just have a nagging feeling I'm missing a more simple way to do this.
Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.
Also, I'm using a parallel projection matrix, which is why glScalef's z is set to 0.0 (a reminder that there is no perspective scaling).
Any ideas? Thanks!
Nope, you won't get anywhere with glScale in this case. Possible options are:
a) construct an extruded polygon from the original one (possibly rounding sharp corners), or
b) draw the polygon outline with GL_LINES, with glLineWidth set to your desired outline width (in fact you might want to draw the outline at 2x the width first).
The first approach generates CPU load; the second might slow down rendering significantly, AFAIK.
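A minimal sketch of option (b), shown with the GLES 1.1 Java bindings for consistency with the rest of this page; outlineBuffer, outlineVertexCount (the shape's boundary loop) and outlineWidth are assumed to exist:

// Draw the outline first at twice the desired width, so that half of each
// line remains visible once the white shape is drawn over it.
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glColor4f(1.0f, 0.55f, 0.0f, 1.0f);             // orange
gl.glLineWidth(2.0f * outlineWidth);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, outlineBuffer);
gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, outlineVertexCount);
gl.glEnable(GL10.GL_TEXTURE_2D);
// ...then draw the white textured polygons on top, as before.

Note that wide lines are drawn as independent segment rectangles, so sharp corners can show notches; the edge-extrusion maths in a later answer avoids this.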
You can displace your polygon in the 8 directions of the compass.
You can have a look at this link: http://simonschreibt.de/gat/cell-shading/
It's a nice trick, and might do the job
Unfortunately there is no simple way to get an outline of consistent width; you just have to do the maths (a sketch follows below):
For each edge: calculate the outward normal, scale it to the desired outline width, and add it to the edge's vertices to get a line segment on the new, expanded edge.
Calculate the intersection of the lines through each pair of adjacent expanded segments to find the expanded vertex positions.
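A compact sketch of that maths, assuming a simple 2-D polygon with counter-clockwise winding stored as a flat [x0, y0, x1, y1, ...] array; the miter formula below is just the closed form of intersecting the two offset edge lines (very sharp or concave corners can still make the expanded outline self-intersect, which needs extra handling):

static float[] expandPolygon(float[] poly, float width)
{
    int n = poly.length / 2;
    float[] out = new float[poly.length];
    for (int i = 0; i < n; i++) {
        int p = (i + n - 1) % n;   // previous vertex
        int q = (i + 1) % n;       // next vertex
        // Unit outward normals of the two edges meeting at vertex i.
        float[] n1 = outwardNormal(poly[2 * p], poly[2 * p + 1],
                                   poly[2 * i], poly[2 * i + 1]);
        float[] n2 = outwardNormal(poly[2 * i], poly[2 * i + 1],
                                   poly[2 * q], poly[2 * q + 1]);
        // Intersection of the two edges offset by `width` along their
        // normals: offset = 2 * width * (n1 + n2) / |n1 + n2|^2.
        float mx = n1[0] + n2[0], my = n1[1] + n2[1];
        float len2 = Math.max(mx * mx + my * my, 1e-6f);
        float scale = 2.0f * width / len2;
        out[2 * i]     = poly[2 * i]     + mx * scale;
        out[2 * i + 1] = poly[2 * i + 1] + my * scale;
    }
    return out;
}

// For counter-clockwise winding, (dy, -dx) points away from the interior.
static float[] outwardNormal(float ax, float ay, float bx, float by)
{
    float dx = bx - ax, dy = by - ay;
    float len = (float) Math.sqrt(dx * dx + dy * dy);
    return new float[] { dy / len, -dx / len };
}

Drawing the expanded polygon in orange first, then the white shape over it, gives the constant-width outline, and also leaves the orange behind any transparent areas of the white texture.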
A distinct answer from those offered to date, posted just for interest: if you're on GLES 2.0 and have access to shaders, you could render the source polygon to a framebuffer with a texture bound as the colour renderbuffer, then do a second pass to write to the screen (using the image of the white polygon as the input texture and running a post-processing fragment shader on every pixel of the screen) with a shader that obeys the following logic for an outline of thickness q (sketched in code after the list):
if the input pixel is white, output a white pixel
if the input pixel is black, sample every pixel within a radius of q of the current pixel; if any of them is white, output an orange pixel, otherwise output a black pixel
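A sketch of that second pass as a GLSL ES fragment shader, again written as a Java string; uScene, uTexel and uRadius are assumed uniform names, the circular radius is approximated by a square neighbourhood, and GLSL ES requires constant loop bounds, hence the MAX_R cap:

private static final String OUTLINE_FRAGMENT_SHADER =
    "precision mediump float;                                        \n" +
    "uniform sampler2D uScene;   // first-pass render                \n" +
    "uniform vec2  uTexel;       // 1.0 / framebuffer size           \n" +
    "uniform float uRadius;      // outline thickness q, in texels   \n" +
    "varying vec2 vTexCoord;                                         \n" +
    "const float MAX_R = 8.0;                                        \n" +
    "void main() {                                                   \n" +
    "    if (texture2D(uScene, vTexCoord).r > 0.5) {                 \n" +
    "        gl_FragColor = vec4(1.0);         // inside: white      \n" +
    "        return;                                                 \n" +
    "    }                                                           \n" +
    "    for (float y = -MAX_R; y <= MAX_R; y += 1.0) {              \n" +
    "        for (float x = -MAX_R; x <= MAX_R; x += 1.0) {          \n" +
    "            if (abs(x) > uRadius || abs(y) > uRadius) continue; \n" +
    "            vec2 at = vTexCoord + vec2(x, y) * uTexel;          \n" +
    "            if (texture2D(uScene, at).r > 0.5) {                \n" +
    "                gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);        \n" +
    "                return;       // near the shape: orange         \n" +
    "            }                                                   \n" +
    "        }                                                       \n" +
    "    }                                                           \n" +
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // black         \n" +
    "}                                                               \n";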
In practice you'd spend an awful lot of time on texture sampling, and that would probably become the bottleneck. The samples would also be mostly dependent reads, which are bad for the pipeline on many GPUs, including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.
EDIT: actually, you could speed this up substantially: if your radius is q, have the hardware generate mipmaps for your framebuffer object, and take the first level at which each pixel covers at least a q-by-q area of the source image. You've then essentially got a set of bins that will be pure black if no part of the polygon touched that region and pure white if the area was entirely inside the polygon. For each output fragment that might be on the border, you can quite possibly jump straight to a conclusion of definitely inside, or definitely outside and beyond the border, based on four samples of the mipmap.

Bresenham line algorithm (thickness)

I was wondering if anyone knew of an algorithm for drawing a line with a specific thickness, based on Bresenham's line algorithm or anything similar.
On second thought, I've been wondering about just drawing a circle for each setPixel(x, y), e.g. filledCircle(x, y, thickness) for every (x, y), but that would of course be very slow. I also tried using a dictionary to avoid plotting the same pixel twice, but that would fill the memory in no time, and checking whether the pixels I'm about to draw already have the same colour is also not efficient enough for large brushes.
Perhaps I could somehow draw half circles depending on the angle?
Any input would be appreciated.
Thanks.
duplicate: how do I create a line of arbitrary thickness using Bresenham?
You cannot actually draw circles along the line; that approach is patented. :)
You can still read the patent for inspiration.
I don't know what is commonly used, but it seems to me that you could run Bresenham for the 1-pixel-wide line, but extend it by a set number of pixels vertically or horizontally. For instance, suppose your line is roughly 30 degrees from the horizontal and you want it to be four pixels wide; the vertical thickness then works out to five pixels (4 / cos 30° ≈ 4.6). You run Bresenham, but for each pixel (x, y) you actually draw (x, y), (x, y+1), ... (x, y+4). And if you want the ends of the line to be rounded, draw a circle at each end.
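A rough sketch of that idea, where setPixel stands in for whatever framebuffer write you have and thick is the precomputed vertical thickness (e.g. the five pixels in the example above):

// Standard Bresenham, with each plotted pixel extended into a vertical run
// of `thick` pixels. Suits mostly-horizontal lines; for steep lines, swap
// the roles of x and y and extend horizontally instead.
static void thickLine(int x0, int y0, int x1, int y1, int thick,
                      java.util.function.BiConsumer<Integer, Integer> setPixel)
{
    int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    int x = x0, y = y0;
    while (true) {
        for (int t = 0; t < thick; t++) {
            setPixel.accept(x, y + t);
        }
        if (x == x1 && y == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
    }
}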
For overkill, make a pixel map of the stylus (a circle or diagonal nib, or whatever), then draw a set of parallel Bresenham lines, one for each pixel in the stylus.
There are variations on Bresenham's algorithm which calculate pixel coverage, such as those used in the Anti-Grain Geometry library. Whether you want that level of quality depends on the output medium, which you don't specify; most systems more capable than on/off LCDs support pens with thickness anyway.
