In OpenGL, NURBS can be drawn using evaluators. But it seems evaluators were removed from the OpenGL ES spec to keep it lightweight. In that case, how can one draw NURBS using the OpenGL ES API?
You won't get around implementing the NURBS evaluation yourself. That means you have to sample the curve or surface at discrete points, converting it to an ordinary line strip or triangle set respectively. This can then be drawn with the usual vertex arrays/buffers, which should also be faster than evaluators or the GLU NURBS functions.
In OpenGL, NURBS curves are rendered in two steps:
1. Evaluate some number of points (100 or 1000) on the curve using the mathematical formula. In OpenGL 4 this can be done on the GPU using SSBOs (Shader Storage Buffer Objects).
2. Render the evaluated points as a line strip, using VBOs.
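As a rough illustration of step 1 on the CPU (the same math an SSBO-based pass would run on the GPU), here is a minimal sketch that samples a NURBS curve with the Cox-de Boor recursion, assuming a clamped knot vector; the names and structure are illustrative, not a fixed API:

```cpp
// Minimal CPU-side NURBS curve sampling (step 1). The resulting points can be
// uploaded to a VBO and drawn as GL_LINE_STRIP (step 2).
#include <cstddef>
#include <vector>

struct Point2 { float x, y; };

// Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).
static float basis(int i, int p, float u, const std::vector<float>& knots) {
    if (p == 0)
        return (u >= knots[i] && u < knots[i + 1]) ? 1.0f : 0.0f;
    float left = 0.0f, right = 0.0f;
    float d1 = knots[i + p] - knots[i];
    float d2 = knots[i + p + 1] - knots[i + 1];
    if (d1 > 0.0f) left  = (u - knots[i]) / d1 * basis(i, p - 1, u, knots);
    if (d2 > 0.0f) right = (knots[i + p + 1] - u) / d2 * basis(i + 1, p - 1, u, knots);
    return left + right;
}

// One point on a rational curve of degree p with control points ctrl and weights w.
static Point2 evalNurbs(float u, int p, const std::vector<Point2>& ctrl,
                        const std::vector<float>& w, const std::vector<float>& knots) {
    Point2 c = {0.0f, 0.0f};
    float denom = 0.0f;
    for (std::size_t i = 0; i < ctrl.size(); ++i) {
        float b = basis((int)i, p, u, knots) * w[i];
        c.x += b * ctrl[i].x;
        c.y += b * ctrl[i].y;
        denom += b;
    }
    if (denom > 0.0f) { c.x /= denom; c.y /= denom; }
    return c;
}

// Sample 'count' (>= 2) points over the curve's valid parameter range.
std::vector<Point2> sampleCurve(int count, int p, const std::vector<Point2>& ctrl,
                                const std::vector<float>& w, const std::vector<float>& knots) {
    std::vector<Point2> pts;
    float u0 = knots[p];
    float u1 = knots[knots.size() - p - 1];
    for (int k = 0; k < count; ++k) {
        float u = u0 + (u1 - u0) * (float)k / (float)(count - 1);
        if (k == count - 1)                 // stay inside the half-open last knot span
            u = u1 - (u1 - u0) * 1e-6f;
        pts.push_back(evalNurbs(u, p, ctrl, w, knots));
    }
    return pts;  // upload with glBufferData, draw with glDrawArrays(GL_LINE_STRIP, 0, count)
}
```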
If you want to understand NURBS in more detail, there is a nice web app available here.
I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector. I found it difficult to control the width, though.
I saw another web-based viewer that is capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders, two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try to do something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed.
Is there an approach like this already documented somewhere?
edit 8/15/17
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about cylinders and cones, my idea is to sample the normal of that parametric surface from the vertex, push the surface out by some factor that would cover some number of pixels in screen space, stencil it, and draw a thick line, thus clipping it with the actual shape of the surface.
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
There is also the old-fashioned OpenGL technique from Jeff Lander.
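For reference, here is a minimal sketch of the fwidth + dot(normal, eye) variant mentioned in the question, with a uniform to make the width easier to control. This is the screen-space shading approach, not the analytic quad-based lines of the other viewer, and all names (uEdgeScale, vNormal, vViewDir) are illustrative:

```cpp
// Fragment shader source for an fwidth-based silhouette, as an ES 2.0 / WebGL 1 string.
const char* kSilhouetteFrag = R"glsl(
// fwidth needs this extension in ES 2.0 / WebGL 1 (enable it on the API side too).
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying vec3 vNormal;      // interpolated eye-space normal
varying vec3 vViewDir;     // fragment-to-eye vector, eye space
uniform float uEdgeScale;  // silhouette width, roughly in pixels
uniform vec3  uBaseColor;
uniform vec3  uEdgeColor;

void main() {
    float ndv = abs(dot(normalize(vNormal), normalize(vViewDir)));
    // fwidth(ndv) estimates how much ndv changes per pixel, so using it as the
    // threshold unit keeps the silhouette width roughly constant on screen.
    float w = max(fwidth(ndv) * uEdgeScale, 1e-5);
    float edge = 1.0 - smoothstep(0.0, w, ndv);
    gl_FragColor = vec4(mix(uBaseColor, uEdgeColor, edge), 1.0);
}
)glsl";
```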
Is there any way to render to a different set of screen coordinates than the standard equidistant grid between (-1,-1) and (1,1)?
I am not talking about something that can be accomplished by transformations in the vertex shader.
Specifically, ES 2.0 would be nice, but any version is fine.
Is this even directly OpenGL-related, or is the standard grid typically provided by plumbing libraries?
No, there isn't any other way. The values you write to gl_Position in the vertex (or tessellation or geometry) shader are clip-space coordinates. The GPU will convert these to normalized device space (the "[-1,1] grid") by dividing by the w coordinate (after the actual primitive clipping, of course) and will finally use the viewport parameters to transform the results to window space.
There is no way to directly use window coordinates when you want to use the rendering pipeline. There are some operations which bypass large parts of that pipeline, like framebuffer blitting, which provides a limited way to draw some things directly to the framebuffer.
However, working with pixel coordinates isn't hard to achieve. You basically have to invert the transformations the GPU will be doing. While typically "ortho" matrices are used for this, the only required operations are a scale and a translation, which boil down to a fused multiply-add per component, so you can implement this extremely efficiently in the vertex shader.
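As a sketch of that, assuming the vertex positions arrive in pixels and that the illustrative uniforms uScale/uOffset are precomputed from the viewport size on the CPU:

```cpp
// ES 2.0 vertex shader: window/pixel coordinates in, clip space out,
// using one multiply-add per component.
const char* kPixelSpaceVert = R"glsl(
attribute vec2 aPositionPx;   // vertex position in pixels
uniform vec2 uScale;          // e.g. ( 2.0 / viewportWidth, -2.0 / viewportHeight )
uniform vec2 uOffset;         // e.g. (-1.0, 1.0) to put the origin at the top-left

void main() {
    // pixels -> normalized device coordinates (w = 1, so no further divide)
    gl_Position = vec4(aPositionPx * uScale + uOffset, 0.0, 1.0);
}
)glsl";
// CPU side (hypothetical uniform locations):
//   glUniform2f(uScaleLoc, 2.0f / viewportWidth, -2.0f / viewportHeight);
//   glUniform2f(uOffsetLoc, -1.0f, 1.0f);
```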
So, I am still working on an OpenGL ES 2.0 terrain rendering program.
I have weird drawing happening at the tops of ridges. I am guessing this is due to surface normals not being applied.
So, I have calculated normals.
I know that in other versions of OpenGL you can just activate the normal array and it will be used for culling.
To use normals in OpenGL ES, can I just activate the normals, or do I have to use a lighting algorithm within the shader?
Thanks
OpenGL doesn't use normals for culling. It culls based on whether the projected triangle has its vertices arranged clockwise or anticlockwise. The specific decision is based on (i) which way around you said was considered to be front-facing via glFrontFace; (ii) which of front-facing and/or back-facing triangles you asked to be culled via glCullFace; and (iii) whether culling is enabled at all via glEnable/glDisable.
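As a sketch, those three pieces of state map to the following ES 2.0 calls (apart from the glEnable, these values are also the defaults):

```cpp
#include <GLES2/gl2.h>

void setupCulling() {
    glEnable(GL_CULL_FACE);   // (iii) turn culling on at all
    glFrontFace(GL_CCW);      // (i)  counter-clockwise windings count as front-facing
    glCullFace(GL_BACK);      // (ii) discard the back-facing triangles
}
```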
Culling is identical in both ES 1.x and 2.x. It's a fixed hardware feature. It's external to the programmable pipeline (and, indeed, would be hard to reproduce within the ES 2.x programmable pipeline because there's no shader with per-triangle oversight).
If you don't have culling enabled, then you are more likely to see depth-buffer fighting at ridges, as the face with its back to the camera and the face with its front to the camera have very similar depths close to the ridge, and limited precision can make them impossible to distinguish correctly.
Lighting in ES 1.x is calculated from the normals. Per-vertex lighting can produce weird problems at hard ridges because normals at vertices are usually the average of those at the faces that join at that vertex, so e.g. a fixed mesh shaped like \/\/\/\/\ ends up with exactly the same normal at every vertex. But if you're not using 1.x then that won't be what's happening.
To implement lighting in ES 2.x you need to do so within your shader. As a result of that, and of normals not being used for any other purpose, there is no formal way to specify normals as anything special. They're just another vertex attribute and you can do with them as you wish.
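A minimal sketch of what that looks like in ES 2.0, treating the normal as an ordinary attribute and doing simple per-vertex diffuse lighting; the attribute/uniform names are made up, and the light direction is assumed to already be in the same space as the normals:

```cpp
// Vertex shader: normals are just another attribute; lighting is computed here.
const char* kTerrainVert = R"glsl(
attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uMvpMatrix;
uniform vec3 uLightDir;        // normalized, pointing towards the light
varying float vDiffuse;

void main() {
    vDiffuse = max(dot(normalize(aNormal), uLightDir), 0.0);
    gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
}
)glsl";

// Fragment shader: apply the interpolated diffuse term to the terrain colour.
const char* kTerrainFrag = R"glsl(
precision mediump float;
uniform vec3 uTerrainColor;
varying float vDiffuse;

void main() {
    // A small ambient term keeps back-facing slopes from going pitch black.
    gl_FragColor = vec4(uTerrainColor * (0.2 + 0.8 * vDiffuse), 1.0);
}
)glsl";
```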
I'm looking for a way to draw lines with thickness and smoothness in OpenGL ES 2.0 without using the built-in glLineWidth function, which has a lot of limitations. I think this can be done using shaders, but my GLSL skills are limited.
What I've already tried is to actually construct a polygon with rounded joints, as in this question's answer. However, for my purpose of free drawing this is overkill and makes my app run very slowly. So I'm thinking that doing the same in the vertex shader would increase the performance, but perhaps not enough to become usable for my purpose (drawing).
So, right now I have a set of points that would describe the line nicely, if I were able to connect them and give each connected segment thickness.
What's often done in this case is to draw a quad with no smoothing. Then draw the outline of the quad using smooth, single-pixel wide line drawing.
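A rough sketch of the quad part of that idea, assuming 2D points and leaving the joints and the smoothed outline pass aside; the types and names are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Expand one segment (a, b) into 4 corners in triangle-strip order by
// offsetting both endpoints along the segment's perpendicular.
std::vector<Vec2> segmentToQuad(Vec2 a, Vec2 b, float thickness) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) len = 1.0f;                 // guard against degenerate segments
    float nx = -dy / len * 0.5f * thickness;     // unit perpendicular * half thickness
    float ny =  dx / len * 0.5f * thickness;
    return { {a.x + nx, a.y + ny}, {a.x - nx, a.y - ny},
             {b.x + nx, b.y + ny}, {b.x - nx, b.y - ny} };
}
// Upload the 4 vertices and draw with glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// the joints between segments still need a bevel or round cap, as in the
// answer linked in the question.
```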
Is this easy to do? I don't want to use texture images. I want to create a rectangle, probably from two polygons, and then set a color on it. A friend who claims to know OpenGL a little bit said that I must always use triangles for everything and that I must use textures for everything I want colored. I can't imagine that is true.
You can set per-vertex colors (which can all be the same) and draw quads. The tricky part about OpenGL ES is that it doesn't support immediate mode, so you have a much steeper initial learning curve compared to OpenGL.
This question covers the differences between OpenGL and ES:
OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?
With OpenGL ES 2.0, you do have to use a shader, which (among other things) normally sets the color. As long as you want one solid color for the whole thing, you can do it in the vertex shader.
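A minimal sketch of that, with shader compilation, linking and error checking omitted; all names are illustrative:

```cpp
#include <GLES2/gl2.h>

const char* kRectVert = R"glsl(
attribute vec2 aPosition;
uniform vec4 uColor;
varying vec4 vColor;
void main() {
    vColor = uColor;                      // the solid colour is decided here
    gl_Position = vec4(aPosition, 0.0, 1.0);
}
)glsl";

const char* kRectFrag = R"glsl(
precision mediump float;
varying vec4 vColor;
void main() { gl_FragColor = vColor; }
)glsl";

// Rectangle as a triangle strip (two triangles), in clip space for brevity.
static const GLfloat kRectVerts[] = { -0.5f, -0.5f,   0.5f, -0.5f,
                                      -0.5f,  0.5f,   0.5f,  0.5f };

void drawRect(GLuint program) {           // program built from the two shaders above
    GLint posLoc   = glGetAttribLocation(program, "aPosition");
    GLint colorLoc = glGetUniformLocation(program, "uColor");
    glUseProgram(program);
    glUniform4f(colorLoc, 1.0f, 0.5f, 0.0f, 1.0f);   // one solid colour
    glEnableVertexAttribArray((GLuint)posLoc);
    glVertexAttribPointer((GLuint)posLoc, 2, GL_FLOAT, GL_FALSE, 0, kRectVerts);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```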