I have a model, soucast1.3DS.
If I open this model in the Autodesk Inventor CAD application, it looks like this:
[image: model in Autodesk Inventor]
If I use a simple OpenGL application, it looks like this:
[image: model in OpenGL app]
GL.Color3(Color.Aqua);
GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
DrawMatrix(); // first pass: draw the model with filled polygons

GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);
GL.Enable(EnableCap.PolygonOffsetLine);
GL.PolygonOffset(-1.0f, -1.0f); // pull the lines toward the viewer so they don't z-fight with the fill
GL.Color3(Color.Black);
DrawMatrix(); // second pass: draw the same model as a black wireframe
So, my question is:
How can I get the same result in my OpenGL application as you can see in Inventor?
(Only the edges of the planar areas are black.)
There are many approaches. You could use a tricky combination of shaders, the stencil buffer, or object scaling:
Draw a slightly scaled-up model in black, culling front faces (glCullFace(GL_FRONT)). Then draw the model at the correct scale with its real colors, culling back faces (glCullFace(GL_BACK)).
It's not a perfect solution; it's usable for cartoon-like shading in games, but may be too imprecise for CAD. If it doesn't fit your needs, google "opengl edge outline".
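A minimal fixed-function sketch of the scaling approach (drawModel() stands in for your own draw call, the scale factor is something you have to tune, and scaling around the origin only behaves well if the model is centered there; in the general case you push vertices out along their normals instead):

// Pass 1: enlarged model in solid black with front faces culled, so only
// a thin shell remains visible around the silhouette.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glColor3f(0.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(1.02f, 1.02f, 1.02f); // outline thickness: tune this factor
drawModel();
glPopMatrix();

// Pass 2: the real model at its true scale, back faces culled as usual.
glCullFace(GL_BACK);
glColor3f(0.0f, 1.0f, 1.0f); // aqua, as in the question
drawModel();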
To clarify things: 3D modeling software always has information about edges in addition to faces, so it can simply draw the edge lines after the model is drawn, and you get an outline. If you don't have edges (or, as in games, don't even want them because of memory consumption and other issues), you have to perform some form of edge detection or another hack.
I solved it. I converted the model to STL (which contains triangles and normals), found the planar areas (by comparing normals), and wrote an algorithm to find their borders.
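The core of the border-finding step looks roughly like this (a sketch, not my exact code; it assumes the duplicated STL vertices have already been welded into shared indices and that the facet normals are unit length):

#include <algorithm>
#include <array>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { std::array<int, 3> v; Vec3 n; }; // vertex indices + facet normal

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// An edge is a border when its two owning triangles have facet normals
// that disagree beyond a tolerance, or when it has only one owner
// (the mesh is open there).
std::vector<std::pair<int, int>> findBorderEdges(const std::vector<Tri> &tris,
                                                 float cosTolerance = 0.999f)
{
    std::map<std::pair<int, int>, std::vector<int>> edgeToTris;
    for (int t = 0; t < (int)tris.size(); ++t) {
        for (int i = 0; i < 3; ++i) {
            int a = tris[t].v[i], b = tris[t].v[(i + 1) % 3];
            edgeToTris[{ std::min(a, b), std::max(a, b) }].push_back(t);
        }
    }
    std::vector<std::pair<int, int>> border;
    for (const auto &entry : edgeToTris) {
        const std::vector<int> &owners = entry.second;
        if (owners.size() != 2 ||
            dot(tris[owners[0]].n, tris[owners[1]].n) < cosTolerance)
            border.push_back(entry.first);
    }
    return border;
}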
And the result is:
[image: result in OpenGL]
Related
I'm developing software that can display 3D IFC building models. I'm using Xbim and the Helix Toolkit for that.
I managed to display the geometry correctly, but there is something off with the shading as far as I can tell.
Can I alter the calculation of shading either during Xbim triangulation or in the Helix Toolkit? The goal is to have sharp edges and to remove the ugly shading lines that appear on flat surfaces at the border between two triangles in the mesh.
I already did some research, and what I found out is that the shading calculation may be based on the vertices: if a vertex is shared by two polygons, the shading will be smooth, and if there are two vertices in the same place, each belonging to a separate polygon, there will be a hard edge. Is this theory correct?
EDIT: I think this may be more related to Xbim than to the Helix Toolkit. So the question is: how do I tell Xbim that I want sharp edges, like the ones you get in Xbim.WindowsUI?
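The theory can at least be illustrated in code: un-index the mesh so every triangle owns its three vertices, each carrying the face normal, and the shading becomes flat with hard edges. A minimal sketch, independent of Xbim and the Helix Toolkit (all names hypothetical):

#include <cmath>
#include <vector>

// One vertex with position and normal, as it would go into a vertex buffer.
struct Vertex { float px, py, pz, nx, ny, nz; };

// Un-index the mesh: every triangle gets its own three vertices, all
// carrying the triangle's face normal, giving hard edges instead of
// smooth shading across shared vertices.
std::vector<Vertex> unweld(const std::vector<Vertex> &verts,
                           const std::vector<int> &indices)
{
    std::vector<Vertex> out;
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vertex a = verts[indices[i]];
        Vertex b = verts[indices[i + 1]];
        Vertex c = verts[indices[i + 2]];
        // Face normal = normalize(cross(b - a, c - a)).
        float ux = b.px - a.px, uy = b.py - a.py, uz = b.pz - a.pz;
        float vx = c.px - a.px, vy = c.py - a.py, vz = c.pz - a.pz;
        float nx = uy * vz - uz * vy;
        float ny = uz * vx - ux * vz;
        float nz = ux * vy - uy * vx;
        float len = std::sqrt(nx * nx + ny * ny + nz * nz);
        if (len > 0.0f) { nx /= len; ny /= len; nz /= len; }
        for (Vertex v : { a, b, c }) {
            v.nx = nx; v.ny = ny; v.nz = nz;
            out.push_back(v);
        }
    }
    return out;
}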
Recently, I tried to render a semi-transparent surface in Qt3D. I put this surface into the Qt3D scene graph together with many other entities that should be drawn. However, I found that the drawing order of the surface is not fixed, which seriously affects the blending result.
How can I find out the drawing order of the entities in the Qt3D scene graph? And how can I make sure that my semi-transparent surface is drawn last?
Thank you.
What you are looking for is the class QSortPolicy. You can set its sort type to back-to-front there, so the transparent surface is drawn last. According to the documentation, when no sorting policy is present, the drawing order depends on where the entities appear in the scene graph.
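A minimal C++ sketch of a hand-built framegraph with such a sort policy in place (assuming you pass in your Qt3DRender::QCamera and set the returned node as the active framegraph; adapt it to your own setup):

#include <Qt3DRender/QCamera>
#include <Qt3DRender/QCameraSelector>
#include <Qt3DRender/QClearBuffers>
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QRenderSurfaceSelector>
#include <Qt3DRender/QSortPolicy>
#include <Qt3DRender/QViewport>

// Framegraph chain: surface -> viewport -> camera -> clear -> sort.
Qt3DRender::QFrameGraphNode *buildFrameGraph(Qt3DRender::QCamera *camera)
{
    auto *surfaceSelector = new Qt3DRender::QRenderSurfaceSelector;
    auto *viewport = new Qt3DRender::QViewport(surfaceSelector);
    auto *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
    cameraSelector->setCamera(camera);
    auto *clearBuffers = new Qt3DRender::QClearBuffers(cameraSelector);
    clearBuffers->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);
    // Draw entities back to front so the transparent surface comes last.
    auto *sortPolicy = new Qt3DRender::QSortPolicy(clearBuffers);
    sortPolicy->setSortTypes(QVector<Qt3DRender::QSortPolicy::SortType>{
        Qt3DRender::QSortPolicy::BackToFront });
    return surfaceSelector;
}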
Other than that, I found a framegraph in Qt3D that showcases a transparent object here: https://github.com/alpqr/q3dpostproc. It's written in QML, but you should be able to translate it to C++ without much work.
I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector. I found it difficult to control the width, though.
I saw another web-based viewer that's capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed there.
Is there an approach like this already documented somewhere?
EDIT (8/15/17):
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of the parametric surface at each vertex, push the surface out by a factor that covers some number of pixels in screen space, write that to the stencil buffer, and then draw a thick line clipped by the actual shape of the surface.
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
There is also the old-fashioned OpenGL technique from Jeff Lander.
I need to add the classic effect that consists of highlighting a 3D model by stroking its outline, just like this example (without the transparent gradient, just a solid stroke):
I found a way to do this here, which seems pretty simple and easy to implement. The author plays with the stencil buffer to compute the model's shape, then draws the model in wireframe, and the thickness of the lines does the job.
This is my problem: the wireframes. I'm using OpenGL ES 2.0, which means I can't use glPolygonMode to change the render mode to GL_LINE.
And I'm stuck here; I can't find any simple alternative way to do it. The most relevant solution I've found so far is to implement the wireframe rendering myself, which is clearly not the easiest option. To draw my objects I'm using glDrawElements with GL_TRIANGLES as the primitive; I tried GL_TRIANGLE_STRIP, but the result is definitely not the right one.
Any idea or trick to work around the lack of glPolygonMode in OpenGL ES? Thanks in advance.
Drawing an outline or border for a model in OpenGL ES 2.0 is not as straightforward as in the example you mentioned.
Method 1:
The easiest way is to do it in multiple passes.
Step 1 (Shape Pass): Render only the object, drawn in solid black with the same camera settings, and fill every other pixel with a different color.
Step 2 (Render Pass): This is the usual render pass, where you draw the objects in their real colors. This time, for every fragment, you test the corresponding pixel in the shape-pass image to see whether any of the 8 neighboring pixels differ in color. If all the neighbors have the same color, the fragment is not on a border; otherwise, darken it to draw the border.
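Such an edge test could look roughly like this as a GLSL ES 2.0 fragment shader, here embedded as a C++ string and applied as a full-screen overlay over the shape-pass texture (the uniform and varying names are made up):

// Hypothetical border-test shader: sample the shape-pass mask around the
// current pixel; output black when any neighbor differs, otherwise discard.
static const char *kOutlineFrag = R"(
precision mediump float;
uniform sampler2D uShapeTex; // mask rendered in Step 1
uniform vec2 uTexelSize;     // 1.0 / resolution of uShapeTex
varying vec2 vUv;

void main() {
    float center = texture2D(uShapeTex, vUv).r;
    float edge = 0.0;
    for (int dx = -1; dx <= 1; ++dx) {
        for (int dy = -1; dy <= 1; ++dy) {
            vec2 off = vec2(float(dx), float(dy)) * uTexelSize;
            edge = max(edge, abs(texture2D(uShapeTex, vUv + off).r - center));
        }
    }
    if (edge < 0.5) discard;                  // interior or background pixel
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // border pixel: solid black
}
)";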
Method 2: There are other techniques that can give you a similar effect in a single scene pass. You can draw the same object twice: first slightly scaled up in a single color, then at the real scale with the real colors.
I'm writing a little 3D engine. I've just added alpha blending to my program, and I'm wondering one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: a scene composed of one skybox and one tree with alpha-blended leaves!
Here's a screenshot of such a scene:
Up to here everything seems correct concerning the alpha blending of the leaves relative to each other.
But if we get closer...
... we can see there is a little problem in the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think about my supposition?
PS: I want to point out that all the geometry for the leaves is rendered in just one draw call.
But if I'm right, it means that whenever I need to render an alpha-blended mesh like this tree, I have to re-sort all the primitives (triangles or quads) by distance from the camera and update my VBO every time the camera moves, so that the primitives at the back are rendered first...
What do you think of my idea?
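For reference, the sorting described above would look roughly like this (a sketch with made-up types; the sorted triangles are then re-uploaded to the VBO whenever the camera moves):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Squared distance from the camera to the triangle's centroid;
// squared is enough for ordering and avoids the sqrt.
static float dist2(const Triangle &t, const Vec3 &cam)
{
    Vec3 c = { (t.a.x + t.b.x + t.c.x) / 3.0f,
               (t.a.y + t.b.y + t.c.y) / 3.0f,
               (t.a.z + t.b.z + t.c.z) / 3.0f };
    float dx = c.x - cam.x, dy = c.y - cam.y, dz = c.z - cam.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort back to front so the farthest primitives are drawn first.
void sortBackToFront(std::vector<Triangle> &tris, const Vec3 &camPos)
{
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle &l, const Triangle &r) {
                  return dist2(l, camPos) > dist2(r, camPos);
              });
}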