I'm currently trying to get my head around a method to create roof-like structures on top of my extruded shapes in Three.js. Without delving into straight skeletons, the simplest approach I can think of is to extrude and scale the top face of my extruded mesh to look roof-like, or create a new mesh on top that is shaped to look like a roof.
This is the most basic style that I'm after, if applied to an extruded rectangle (a cube):
The shaded area of the roof is higher than the non-shaded area.
And the same style, if applied to a more complex extruded shape:
What I can't work out is how to create a roof structure like that, especially for complex shapes like the second example. I have the vertices for the building 'footprint' but I don't know how to extrude them while scaling the top face to give the slanted sides.
I could definitely work out the scaled vertex positions, but then I'd have another problem in not knowing how to connect the top (scaled) face to the bottom face (i.e. how to make the side faces).
Any ideas?
Sounds like you want to experiment with the bevel parameters in ExtrudeGeometry.
https://github.com/mrdoob/three.js/blob/master/src/extras/geometries/ExtrudeGeometry.js
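For example, a minimal sketch of a bevelled extrusion (the values are illustrative and the option names follow the API of that era; in recent releases "amount" has been renamed "depth"):

var footprint = new THREE.Shape();
footprint.moveTo(0, 0);
footprint.lineTo(20, 0);
footprint.lineTo(20, 10);
footprint.lineTo(0, 10);

var geometry = new THREE.ExtrudeGeometry(footprint, {
    amount: 15,        // extrusion depth, i.e. wall height ("depth" in newer releases)
    bevelEnabled: true,
    bevelThickness: 4, // how far the bevel extends along the extrusion axis
    bevelSize: 2,      // how far the bevel offsets the contour sideways
    bevelSegments: 1   // one segment gives flat slanted faces rather than a rounded edge
});

Experiment with bevelThickness and bevelSize until the slant matches the roof pitch you're after.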
As the top roof polygon will have the same number of verts as the bottom roof polygon, isn't this just a case of looping through each top roof polygon vert and connecting it to its bottom roof polygon vert?
And, any two verts on the top roof polygon along with their associated verts on the bottom roof polygon will give you all the verts for a side face.
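A minimal sketch of that stitching, assuming bottom and top are arrays of THREE.Vector3 of the same length and winding order (classic Geometry/Face3 API; flip the triangle winding if the faces come out inside-out):

var geometry = new THREE.Geometry();
geometry.vertices = bottom.concat(top);

var n = bottom.length;
for (var i = 0; i < n; i++) {
    var next = (i + 1) % n;
    // two triangles per quad side face between ring positions i and next
    geometry.faces.push(new THREE.Face3(i, next, n + next));
    geometry.faces.push(new THREE.Face3(i, n + next, n + i));
}
geometry.computeFaceNormals();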
How can I adapt a geometry (a box geometry to start with) to another one? I am looking for an effect like the one in the picture
where the cyan part was originally a box and then it got "adapted" to the plane and over the red part.
This is possible in some software packages (Modo, for example), but I'd like to do it in WebGL/three.js.
Consider modifying mesh geometry.
That implies that, for good results, the mesh will need a high polygon count.
If you want to hug a simple shape (box, sphere), vertex displacement can be sufficient:
Pass your red shape's parameters as uniforms when drawing the blue shape
For any blue shape vertex, check whether it is inside the red shape and offset the vertex position if needed
Choosing the offset direction as the closest face normal of the red shape should be OK
That will give you just visuals; if you need a more robust solution, generate new geometry entirely on the CPU on demand.
For example:
Loop through all vertices and offset them, marking the offset vertices
Do an additional loop to relax hard edges
I suspect the real algorithms in 3D modelling software are more complex.
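Still, a rough sketch of the CPU-side version for a box-shaped red obstacle, assuming classic THREE.Geometry (the structure here is my own illustration, not a standard recipe):

var box = new THREE.Box3().setFromObject(redMesh);

blueGeometry.vertices.forEach(function (v) {
    if (!box.containsPoint(v)) return;
    // push the vertex out through the nearest box face
    var faces = [
        { dist: v.x - box.min.x, move: function () { v.x = box.min.x; } },
        { dist: box.max.x - v.x, move: function () { v.x = box.max.x; } },
        { dist: v.y - box.min.y, move: function () { v.y = box.min.y; } },
        { dist: box.max.y - v.y, move: function () { v.y = box.max.y; } },
        { dist: v.z - box.min.z, move: function () { v.z = box.min.z; } },
        { dist: box.max.z - v.z, move: function () { v.z = box.max.z; } }
    ];
    faces.sort(function (a, b) { return a.dist - b.dist; });
    faces[0].move(); // snap to the closest face
});

blueGeometry.verticesNeedUpdate = true;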
I'm working on a simple building planning editor. For the 3D preview I'm using the Three.js library for Dart (from GitHub). So far the algorithm is pretty simple: it converts single lines to rectangles and then extrudes them (based on thickness and height).
Is it possible to normalize vertex positions depending on adjacent walls? Technically I store a list of walls, from which I can query adjacent walls and calculate a Vector2 list for mesh generation for each wall. I have to apply changes to each wall separately due to the extrusion.
Thanks in advance!
Maybe you could instead try to properly tessellate the 2D thickened walls, and then only extrude them (instead of extruding, tessellating and then trying to fix the joints). For simple polylines, joint tessellation can be handled like described in this article: http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation.
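For instance, a minimal miter-join sketch (my own helper, sketched in JavaScript with three.js's Vector2; the same math carries over to the Dart port). Given the shared corner p, the two unit wall directions d1 and d2 pointing away from the corner, and the wall half-thickness t, the offset corner lies along the angle bisector:

function miterCorner(p, d1, d2, t) {
    var bisector = d1.clone().add(d2).normalize();
    var cosHalf = bisector.dot(d1); // cosine of half the angle between the walls
    var sinHalf = Math.sqrt(Math.max(0, 1 - cosHalf * cosHalf));
    // the miter length grows as the walls approach collinear; guard that case separately
    return p.clone().add(bisector.multiplyScalar(t / sinHalf));
}

Negate the bisector for the corner on the other side of the joint, then extrude the resulting 2D outline once.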
I am trying to create a terrain solution in Three.js and I'm running into some trouble with the generation of the normals. I am approaching the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created I go through each and set the UVs so that each tile represents a part of the whole. I also generate height values for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call this problem. Each of the normals on the plane's corners points in the same direction as the face when created. This makes the terrain look like a flat-shaded object. To prevent this I thought perhaps what I needed to do was make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile has the same normal as all the immediate normals around it from the adjacent planes.
figure A: visualising each of the 4 normals on the mesh; you can see that at each corner the normals are the same (on top of each other)
figure B
EDIT: figure C
EDIT: figure D
Except even when the verts all share the same normals it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.
You're basically right about what should happen. The shading you're getting is not consistent with continuous normals. If all the vertex-face normals at a given location are the same, you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quads don't have a real 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal-averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same; if that were the case you'd get consistent shading across each quad, although the quads would not be continuous. Is it possible your original vertex-face normals are not in fact lined up with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is being done in your shader you may have incorrect binormals/bitangents rather than bad vert normals.
Instead of averaging the neighbourhood normals at each vertex/corner, you should average the four normals that each vertex has (4 tiles meet at each vertex).
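A minimal sketch of that, assuming classic THREE.Geometry where each face stores per-corner normals in face.vertexNormals, and assuming you have already collected the matching { face, corner } entries from all tiles meeting at one world position (that collection step is yours):

function averageSharedNormals(entries) {
    var avg = new THREE.Vector3();
    entries.forEach(function (e) {
        avg.add(e.face.vertexNormals[e.corner]);
    });
    avg.normalize(); // average direction of all normals at this position

    entries.forEach(function (e) {
        e.face.vertexNormals[e.corner].copy(avg);
    });
}

Remember to set geometry.normalsNeedUpdate = true on each affected tile afterwards.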
I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture, and then drawing that with the convolution shader). I've also tried doing 2 passes, the first being single-colored overscaled geometry to represent an outline, and then normal drawing on top, but this results in different thicknesses or unaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is this possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one)...
If a point is interior then, listing all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two adjacent edges don't have a polygon in common then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
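A sketch of that exterior test, where edgesAt(v) is a hypothetical helper returning the edges at vertex v sorted clockwise, and each edge carries the list of polygons (fixtures) it borders:

function isExteriorVertex(v) {
    var edges = edgesAt(v);
    for (var i = 0; i < edges.length; i++) {
        var a = edges[i];
        var b = edges[(i + 1) % edges.length];
        var shared = a.polygons.filter(function (p) {
            return b.polygons.indexOf(p) !== -1;
        });
        if (shared.length === 0) return true; // gap between consecutive edges: boundary vertex
    }
    return false;
}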
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
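For example (sketched in JavaScript for brevity; drawRect and the rectangle fields are hypothetical stand-ins for your own drawing code):

var OUTLINE = 0.05; // world units added on each side

// first pass: enlarged rectangles in the outline colour
rects.forEach(function (r) {
    drawRect(r.cx, r.cy, r.w + 2 * OUTLINE, r.h + 2 * OUTLINE, outlineColor);
});

// second pass: normal geometry drawn on top
rects.forEach(function (r) {
    drawRect(r.cx, r.cy, r.w, r.h, fillColor);
});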
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
* SOLVED *
It was not about 0,0,0 or distortion. It's super weird, but I found out that computing the geometry's bounding sphere worked! (Even at the tile corners, where you'd think a sphere wouldn't cover it, it does.)
http://threejs.org/docs/#Reference/Core/Geometry
computeBoundingSphere();
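In context, that means recomputing the bounds after moving the vertices each frame, something like (classic Geometry API of that era):

plane.geometry.verticesNeedUpdate = true;
plane.geometry.computeBoundingSphere(); // the ray test culls against this sphere first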
Problem description follows below.
Hey, I'm building a WebGL wall for my portfolio site. I need ray intersection to know both when the user hovers over the wall and, when they click, what plane they're clicking on, so I can redirect them to the correct project.
http://www.martinlindelof.com
What I do is add all planes at xyz(0,0,0), then I'm using dynamic geometry to place their vertices out on a point grid that's affected by a repelling particle (using traer).
Now when I'm doing the ray intersect (using examples from Three.js r49) I get an empty array back, nothing hit.
Could this be because all the planes' origins are at 0,0,0? Should I maybe, on each frame, move not only the vertices but the entire plane?
Or is it something else?
(Face normals seem to be pointing in the right direction: I see the texture on the plane and it's not inverted, as it would be if I were seeing the face's back side. As for double-sided planes, I guess that's not the default in Three.js when creating a plane.)
Ray has problems with an object position of 0,0,0 (somewhere it divides by the position, and that division fails at zero). Try another position.