Ugly shading in Xbim - helix-3d-toolkit

I'm developing software that displays 3D IFC building models, using Xbim and the Helix Toolkit.
I managed to display the geometry correctly, but as far as I can tell, something is off with the shading.
Can I alter the shading calculation, either during Xbim triangulation or in the Helix Toolkit? The goal is to get sharp edges and to remove the ugly shading lines that appear on flat surfaces at the border between two triangles of the mesh.
I have already done some research, and what I found is that the shading calculation may be based on the vertices: if a vertex is shared by two polygons, the shading will be smooth, and if there are two vertices in the same place, each belonging to a separate polygon, there will be a hard edge. Is this theory correct?
EDIT: I think this is more related to Xbim than to the Helix Toolkit. So the question is: how do I tell Xbim that I want sharp edges, like the ones you get in Xbim.WindowsUI?
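To make the theory concrete, this is the kind of transformation I have in mind; a minimal sketch assuming the triangles end up in a WPF MeshGeometry3D (the type Helix Toolkit's WPF wrapper builds on) — the method name is mine:

using System.Windows.Media.Media3D;

// Un-weld a smooth (index-shared) mesh: give every triangle its own three
// vertices and a single face normal, so coplanar triangles shade identically
// and every real edge becomes a hard edge.
static MeshGeometry3D ToFlatShaded(MeshGeometry3D smooth)
{
    var flat = new MeshGeometry3D();
    for (int i = 0; i < smooth.TriangleIndices.Count; i += 3)
    {
        Point3D p0 = smooth.Positions[smooth.TriangleIndices[i]];
        Point3D p1 = smooth.Positions[smooth.TriangleIndices[i + 1]];
        Point3D p2 = smooth.Positions[smooth.TriangleIndices[i + 2]];

        Vector3D n = Vector3D.CrossProduct(p1 - p0, p2 - p0);
        n.Normalize(); // degenerate triangles would need a guard here

        foreach (Point3D p in new[] { p0, p1, p2 })
        {
            flat.Positions.Add(p);
            flat.Normals.Add(n);
            flat.TriangleIndices.Add(flat.Positions.Count - 1);
        }
    }
    return flat;
}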

Related

Objects look stretched out/distorted only around edges of screen [duplicate]

This question already has answers here:
90 degree field of view without distortion in THREE.PerspectiveCamera (2 answers)
Is this an FOV issue with my perspective camera? In my scene, spheres look egg-shaped (oval) rather than round when they reach the edges of the screen. Does anyone know why this happens?
It sounds like you've encountered one of the unfortunate realities of 3D.
In any 3-dimensional scene, the view from a given point is most naturally thought of as a sphere. When we render a scene, we're rendering a piece of that sphere, but we need to somehow convert that piece of a sphere into a flat rectangle, since our computer screens are flat, not round.
So, in order to render a 3D scene as a rectangle, the software needs to use a projection. For 3D rendering, the most common projection is probably a rectilinear projection, also called a gnomonic projection. (On Wikipedia, see "Rectilinear lens" for a discussion of rectilinear projections in photography, and "Gnomonic projection" for a discussion of rectilinear projections in mapmaking.)
The biggest advantage of a rectilinear projection is that straight lines in the scene appear as straight lines in the rendering. A big disadvantage is that objects far from the center are distorted: small circles get turned into large ovals.
This phenomenon is an unalterable mathematical fact that no software will ever be able to overcome. However, there are things you may be able to do to mitigate the situation. One option is to use a narrower field of view. Another option is to use a different projection; the answers here have a few suggestions for how to do that: Three.js - Fisheye effect
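To put a number on the distortion: a rectilinear projection maps a point at angle θ off the view axis to radius r(θ) = tan θ on the image plane. A small circle at angle θ is therefore stretched by dr/dθ = sec²θ in the radial direction but only by r / sin θ = sec θ tangentially, so it renders as an ellipse with aspect ratio sec θ. With a 90° field of view, the sides of the screen sit at θ = 45°, where sec θ ≈ 1.41: a sphere there is drawn about 41% longer in the radial direction than in the tangential one, which matches the egg shapes you're describing.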

silhouette rendering with webgl / opengl

I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector, but I found it difficult to control the width.
I saw another web-based viewer that is capable of something like this: [image: silhouette lines of the style I'm after]
I started digging through its shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders, two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points being computed.
Is an approach like this already documented somewhere?
Edit 8/15/17: I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of the parametric surface at the vertex, push the surface out by some factor that covers some number of pixels in screen space, stencil it, and draw a thick line, thus clipping it with the actual shape of the surface.
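To make the cylinder case concrete: the two silhouette lines can be computed analytically from the eye position. A minimal CPU-side sketch with System.Numerics (the function name and parameters are mine, not from the viewer I was digging through):

using System;
using System.Numerics;

// The two silhouette generators of an infinite cylinder seen from `eye`.
// Returns the points on the cross-section circle through `axisPoint` where
// the tangent lines from the eye touch it; each silhouette line then runs
// from that point along `axisDir`.
static (Vector3, Vector3) CylinderSilhouette(
    Vector3 eye, Vector3 axisPoint, Vector3 axisDir, float radius)
{
    Vector3 n = Vector3.Normalize(axisDir);

    // Project the eye into the cross-section plane through axisPoint.
    Vector3 toEye = eye - axisPoint;
    Vector3 inPlane = toEye - Vector3.Dot(toEye, n) * n;
    float d = inPlane.Length(); // eye distance from the axis (must be > radius)

    // Tangent points of a circle of radius r, seen from distance d, lie at
    // angle +/- acos(r/d) from the direction toward the eye.
    float ang = MathF.Acos(radius / d);
    Vector3 u = inPlane / d;            // toward the eye, in-plane
    Vector3 v = Vector3.Cross(n, u);    // in-plane, perpendicular to u

    Vector3 a = axisPoint + radius * (MathF.Cos(ang) * u + MathF.Sin(ang) * v);
    Vector3 b = axisPoint + radius * (MathF.Cos(ang) * u - MathF.Sin(ang) * v);
    return (a, b);
}

Those two lines could then be expanded into screen-space quads to get a controllable width.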
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
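The core of the model is a cool-to-warm blend; a minimal sketch of just that blend (the constants are the example values commonly quoted from the paper, the function shape is mine, and silhouette lines are drawn separately on top):

using System.Numerics;

// Gooch-style cool-to-warm shading: warm tint where the surface faces the
// light, cool tint where it faces away, so shape reads without losing
// detail in shadow.
static Vector3 GoochShade(Vector3 normal, Vector3 lightDir, Vector3 baseColor)
{
    Vector3 cool = new Vector3(0f, 0f, 0.4f) + 0.2f * baseColor;
    Vector3 warm = new Vector3(0.4f, 0.4f, 0f) + 0.6f * baseColor;

    float t = (Vector3.Dot(Vector3.Normalize(normal),
                           Vector3.Normalize(lightDir)) + 1f) / 2f;
    return t * warm + (1f - t) * cool;
}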
There is also an old-fashioned OpenGL technique from Jeff Lander.

Mesh simplification in three.js

I am using the Constructive Solid Geometry library for Three.js, made by Chandler Prall.
https://github.com/chandlerprall/ThreeCSG
The resulting meshes are accurate, but they are very fragmented (lots and lots of unnecessary triangles), and they break some functionality in Three.js; for example, the EdgesHelper class is no longer able to find the edges. This problem is also mentioned here:
http://moczys.com/2014/01/13/three-js-experiment-3-additive-geometry/
Is there a mesh simplification library for Three.js that can take care of this? Perhaps there already is a polygon union function (merging all co-planar adjacent triangles into a 2D shape), whose result could then be triangulated back into 3D again?
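If I had to roll this myself, I imagine the first half (finding the co-planar groups) looking roughly like the sketch below. It is written in C# only to pin the algorithm down, and the adjacency test is a placeholder (a real version would use an edge-to-triangle map instead of this O(n²) scan):

using System;
using System.Collections.Generic;
using System.Numerics;

// Region-grow across shared edges whenever two triangle normals agree within
// a tolerance; each resulting region is one flat polygon to re-triangulate.
static List<List<int>> CoplanarRegions(
    int triangleCount,
    IReadOnlyList<Vector3> faceNormals,
    Func<int, int, bool> shareEdge,   // placeholder adjacency test
    float cosTolerance = 0.9999f)
{
    var regions = new List<List<int>>();
    var seen = new bool[triangleCount];
    for (int seed = 0; seed < triangleCount; seed++)
    {
        if (seen[seed]) continue;
        var region = new List<int>();
        var stack = new Stack<int>();
        stack.Push(seed);
        seen[seed] = true;
        while (stack.Count > 0)
        {
            int i = stack.Pop();
            region.Add(i);
            for (int j = 0; j < triangleCount; j++)
                if (!seen[j] && shareEdge(i, j) &&
                    Vector3.Dot(faceNormals[i], faceNormals[j]) > cosTolerance)
                {
                    seen[j] = true;
                    stack.Push(j);
                }
        }
        regions.Add(region);
    }
    return regions;
}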

Draw model with edges in OpenGL

I have the model soucast1.3DS.
If I open this model in the CAD application Autodesk Inventor, it looks like this:
[image: model in Autodesk Inventor]
If I use a simple application written with OpenGL, it looks like this:
[image: model in the OpenGL app]
// Pass 1: draw the model filled, in aqua.
GL.Color3(Color.Aqua);
GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
DrawMatrix(); // draw model

// Pass 2: draw the same model as a black wireframe, pulled slightly
// towards the viewer with polygon offset so the lines pass the depth test.
GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);
GL.Enable(EnableCap.PolygonOffsetLine);
GL.PolygonOffset(-1.0f, -1.0f);
GL.Color3(Color.Black);
DrawMatrix(); // draw model
So, my question is:
How can I get the same result in my application (OpenGL) as in Inventor?
(Only the edges of the faces are black.)
There are many approaches. You could use a tricky shader combination, the stencil buffer, or object scaling:
Draw a slightly scaled-up copy of the model in black with front faces culled (glCullFace(GL_FRONT)); then draw the model at its normal scale and colours with back faces culled (glCullFace(GL_BACK)).
This is not a perfect solution; it is usable for cartoon-like shading in games, but it may be too imprecise for CAD. If it doesn't fit your needs, google "opengl edge outline".
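A minimal sketch of the scaling variant with OpenTK-style calls, matching the question's snippet (the uniform scale is a rough stand-in for pushing vertices out along their normals and assumes the model is roughly centred at the origin; DrawMatrix is the question's own draw call):

GL.Enable(EnableCap.CullFace);

// Pass 1: enlarged model in black, front faces culled, so only its back
// faces remain and poke out around the real model as an outline.
GL.CullFace(CullFaceMode.Front);
GL.Color3(Color.Black);
GL.PushMatrix();
GL.Scale(1.03f, 1.03f, 1.03f); // outline thickness; tune per model
DrawMatrix();
GL.PopMatrix();

// Pass 2: the real model, normal scale and colours, back faces culled.
GL.CullFace(CullFaceMode.Back);
GL.Color3(Color.Aqua);
DrawMatrix();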
To clarify things: 3D modeling software always has information about edges in addition to faces, so it can simply draw the edge lines after the model is drawn, and you get an outline. If you don't have edges (or, as in games, don't even want them because of memory consumption and other issues), you have to perform some form of edge detection or another workaround.
I solved it. I converted the model to STL (which contains triangles and normals), found the flat areas (by comparing normals), and wrote an algorithm to find their borders.
And the result is:
[image: the model rendered in OpenGL with black edges]
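The border-finding step as a sketch (this assumes STL-style vertices that are bitwise-identical where triangles meet; real data may need welding with a tolerance first):

using System.Collections.Generic;
using System.Numerics;

// Within one flat area (triangles grouped by normal), an edge used by two
// triangles is interior; an edge used by exactly one is a border edge to draw.
static List<(Vector3, Vector3)> BorderEdges(
    IEnumerable<(Vector3 a, Vector3 b, Vector3 c)> areaTriangles)
{
    var count = new Dictionary<(Vector3, Vector3), int>();

    (Vector3, Vector3) Key(Vector3 p, Vector3 q)
    {
        // Canonical endpoint order so (p,q) and (q,p) hit the same key.
        bool swap = p.X > q.X || (p.X == q.X && (p.Y > q.Y ||
                    (p.Y == q.Y && p.Z > q.Z)));
        return swap ? (q, p) : (p, q);
    }

    void Tally(Vector3 p, Vector3 q)
    {
        var k = Key(p, q);
        count[k] = count.TryGetValue(k, out var n) ? n + 1 : 1;
    }

    foreach (var t in areaTriangles)
    {
        Tally(t.a, t.b);
        Tally(t.b, t.c);
        Tally(t.c, t.a);
    }

    var border = new List<(Vector3, Vector3)>();
    foreach (var kv in count)
        if (kv.Value == 1)
            border.Add(kv.Key);
    return border;
}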

Recommend some Bresenham's-like algorithm of sphere mapping in 2D?

I need the fastest sphere-mapping algorithm, something like Bresenham's line-drawing one, and something like the implementation I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this? I really don't want to reinvent the wheel. Please help...
Description of the problem:
I have a place on a 2D surface where the sphere has to appear. The sphere (let it be the Earth) has to be textured with a fine map and has to be able to scale and rotate freely. I want to implement it with a lookup table or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined as a number of pixels from the cylindrical map of the sphere. This gives me the ability to antialias the resulting image. I am also thinking about using mipmaps where one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel that this can be implemented with some trivial math, but so far these are just my thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but there were no answers available on my question.
UPD: Assume I have no hardware support; I want a cross-platform solution.
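To sketch the kind of "simple transformation function" I mean (an orthographic view and an equirectangular cylindrical map are assumed; everything here is CPU-only):

using System;

// For a pixel at (px, py) relative to the sphere's centre on screen, return
// the (u, v) position in the cylindrical map, or null outside the disc.
// Rotation around the vertical axis folds into a longitude offset, so the
// per-pixel part can be precomputed once into a lookup table.
static (double u, double v)? SpherePixelToMapUV(
    double px, double py, double radius, double spin)
{
    double x = px / radius, y = py / radius;
    double d2 = x * x + y * y;
    if (d2 > 1.0) return null;       // outside the sphere's disc

    double z = Math.Sqrt(1.0 - d2);  // reconstruct the front hemisphere

    double lon = Math.Atan2(x, z) + spin; // spin = longitude offset
    double lat = Math.Asin(y);

    // Wrap longitude into [-pi, pi), then map to [0,1) x [0,1].
    lon -= 2 * Math.PI * Math.Floor((lon + Math.PI) / (2 * Math.PI));
    return (lon / (2 * Math.PI) + 0.5, lat / Math.PI + 0.5);
}

Antialiasing and mipmap selection can then hang off this: the local stretch of the mapping (largest near the limb and the poles) tells you how many map pixels one screen pixel covers.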
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
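For reference, the addressing math for both options as a sketch (per-face orientation conventions differ between APIs, so treat the signs here as illustrative):

using System;

// Cube map: pick the face from the dominant component of the direction,
// then project the other two components onto it; u and v land in [-1, 1].
static (char face, double u, double v) CubeFaceUV(double x, double y, double z)
{
    double ax = Math.Abs(x), ay = Math.Abs(y), az = Math.Abs(z);
    if (ax >= ay && ax >= az) return (x >= 0 ? 'X' : 'x', y / ax, z / ax);
    if (ay >= ax && ay >= az) return (y >= 0 ? 'Y' : 'y', x / ay, z / ay);
    return (z >= 0 ? 'Z' : 'z', x / az, y / az);
}

// Parabolic map, front paraboloid (z >= 0): not a projective transform,
// which is why it has to be evaluated "by hand".
static (double u, double v) ParaboloidUV(double x, double y, double z)
    => (x / (1 + z), y / (1 + z));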
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal-area parts (very little distortion).
Very simple, recursive geometry of the sphere, with arbitrary precision.
[image: example of the HEALPix pixelization]
Did you take a look at Jim Blinn's articles "How to draw a sphere"? I do not have access to the full articles, but it sounds like what you need.
I'm a big fan of Star Control 2, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al's Computer Graphics -- Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
For the point-cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x, y, z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor: summing up the colors as you draw them and dividing by the number of points. You'd also have the problem of what to do if there are no points; for example, with extreme magnification you'd need to look for the nearest point.
