I have a cube (six faces). I render three faces of a statically positioned cube with a material that has its transparent property set.
I want to retrieve the three faces closest to the camera, so that I can set their transparency/opacity.
If I programmatically rotate the cube in the render loop, how would I calculate the distance of each of the cube's faces (Face3) from the camera?
At any moment, only one of two opposite faces can be in the 'closest' group, which is the same as the group of faces that are facing the camera.
So, for a pair of opposite faces, take the normal of one of the faces and calculate the dot product of this vector with the vector linking that face to the camera. If the dot product is positive, choose this face; else, choose the opposite face.
Then repeat for the remaining two pairs of faces.
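For reference, here is a minimal sketch of that test using the legacy Geometry/Face3 API (pickFacingFaces is an illustrative name, not from the original answer). Note that a legacy cube geometry stores two Face3 triangles per side, so both triangles of a side pass or fail the test together:

    function pickFacingFaces(mesh, camera) {
      // Transforms local face normals into world space.
      const normalMatrix = new THREE.Matrix3().getNormalMatrix(mesh.matrixWorld);
      const facing = [];
      mesh.geometry.faces.forEach(function (face) {
        const worldNormal = face.normal.clone().applyMatrix3(normalMatrix).normalize();
        // Face centroid in world space.
        const centroid = new THREE.Vector3()
          .add(mesh.geometry.vertices[face.a])
          .add(mesh.geometry.vertices[face.b])
          .add(mesh.geometry.vertices[face.c])
          .divideScalar(3)
          .applyMatrix4(mesh.matrixWorld);
        // Vector linking the face to the camera.
        const toCamera = camera.position.clone().sub(centroid);
        // Positive dot product: this face, not its opposite, faces the camera.
        if (worldNormal.dot(toCamera) > 0) facing.push(face);
      });
      return facing;
    }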
I want to know the basic idea behind creating 2D views of a 3D geometry in CAD programs like AutoCAD, SolidWorks, etc.
Here are some basic ideas I have come up with so far.
Which of these methods do they use? Or is there a method I haven't listed?
Idea A:
First, render every single face into a plane (image) space.
Then detect the boundaries of the faces.
Do something magical that can recognize the 2D curves from the boundary pixels.
Do something magical again to recognize which segments of the curves should be hidden.
Construct a final view from the lines and curves generated in the steps above.
Idea B:
They create projection rules for every type of surface with boundary wires (plane, cylinder, sphere, spline), and those rules can be used for all projection angles.
Then apply the projection rules to every face, which finally yields a view made of many curves.
Iterate over all curves generated in step 2 and check the visibility of each curve.
Construct a final view.
Idea C:
First, tessellate every face into many triangles.
Then find the boundaries of every face from the triangles.
Step 2 gives us many polylines.
Iterate over all polylines generated for every face and check their visibility.
Construct a final view.
I found a solution; it works like this:
Tessellate every face and edge into triangles and segments.
Project all those triangles and segments onto a plane.
Then choose a suitable resolution and rasterize the projected triangles and segments into pixels, each carrying a height parameter.
Find contours for every face and edge from those pixels.
Set a visibility value for every pixel on each contour, depending on how its height parameter compares with the topmost height recorded for that pixel in the whole view.
Reconstruct lines, circles, and polylines from the pixels.
I tested this method on some models, and it works well. A rough sketch of the visibility test is shown below.
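Here is a rough sketch, with hypothetical names and data layout, of the per-pixel visibility test this pipeline relies on. It assumes a heightBuffer that already stores, for each projected pixel, the greatest height of any triangle covering it (built in the rasterization step); an edge pixel is visible when nothing sits above it:

    // Tolerance so an edge does not occlude itself due to rounding.
    const EPS = 1e-6;

    // edgePixels: array of { x, y, height } in the chosen raster resolution.
    // heightBuffer: Float32Array of size width * height (max height per pixel).
    function classifyEdgePixels(edgePixels, heightBuffer, width) {
      return edgePixels.map(function (p) {
        const topHeight = heightBuffer[p.y * width + p.x];
        return { x: p.x, y: p.y, visible: p.height >= topHeight - EPS };
      });
    }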
I have added a pyramid mesh to the scene, and I can rotate it about the x, y and z axes individually.
What I need to do is add an object to the scene consisting of 5 coloured dots representing the 5 vertices of the pyramid, and then rotate that object.
I know the coordinates of the vertices, but I'm not sure how I would implement this. To rotate the pyramid mesh I am using mesh.rotation.x, mesh.rotation.y and mesh.rotation.z.
Should I maybe try to create a custom mesh containing the 5 vertices and use mesh.rotation, or is a different approach easier?
The usual approach for solving this issue is to add the coloured dots as child objects of your pyramid. If you then rotate the pyramid, the dots will rotate too (because they keep their position relative to their parent).
The positions of the coloured dots are the coordinates of the respective pyramid vertices.
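A minimal sketch of this, assuming pyramidVertices is an array of THREE.Vector3 in the pyramid's local space and mesh is the pyramid (both names are placeholders):

    const dotColors = [0xff0000, 0x00ff00, 0x0000ff, 0xffff00, 0xff00ff];

    pyramidVertices.forEach(function (vertex, i) {
      const dot = new THREE.Mesh(
        new THREE.SphereGeometry(0.05),
        new THREE.MeshBasicMaterial({ color: dotColors[i] })
      );
      dot.position.copy(vertex); // local coordinates, relative to the pyramid
      mesh.add(dot);             // child of the pyramid, so it rotates with it
    });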
I know I can go from the 3D space of the mesh to its 2D UV space by getting the corresponding UV coordinates of each vertex.
When I transform to UV space, each vertex has its color, and I can put that color at the pixel position the UV coordinate gives for that vertex. The issue is how to derive the pixels that lie in between them; I want a smooth gradient.
For example, say the color value at UV coordinate (0.5, 0.5) is [30, 40, 50] (RGB), at (0.75, 0.75) it's [70, 80, 90], and there is a third vertex at (0.25, 0.6) with [10, 20, 30]. How do I derive the colors for the area these three UV/vertex coordinates fill, i.e. the in-between values for the pixels?
Just draw this mesh on the GPU. You already know that you can replace the vertex positions with the UVs, so the mesh is represented in texture space.
Keep your vertex colors unchanged and draw this mesh into your target texture; the GPU will do the color interpolation for every triangle you draw.
And keep in mind that if two or more triangles share the same texture space, the result will depend on the triangle order.
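A minimal three.js sketch of this, using the three vertices from the question (renderer is assumed to be an existing THREE.WebGLRenderer; the render-target size is illustrative):

    const geometry = new THREE.BufferGeometry();
    // UV coordinates become positions in the z = 0 plane.
    const positions = new Float32Array([
      0.50, 0.50, 0,
      0.75, 0.75, 0,
      0.25, 0.60, 0,
    ]);
    // RGB colors from the question, scaled to 0..1.
    const colors = new Float32Array([
      30 / 255, 40 / 255, 50 / 255,
      70 / 255, 80 / 255, 90 / 255,
      10 / 255, 20 / 255, 30 / 255,
    ]);
    geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

    const mesh = new THREE.Mesh(
      geometry,
      new THREE.MeshBasicMaterial({ vertexColors: true })
    );
    const scene = new THREE.Scene();
    scene.add(mesh);

    // Orthographic camera covering the 0..1 UV square, rendered to a texture.
    const camera = new THREE.OrthographicCamera(0, 1, 1, 0, -1, 1);
    const target = new THREE.WebGLRenderTarget(1024, 1024);
    renderer.setRenderTarget(target);
    renderer.render(scene, camera);
    renderer.setRenderTarget(null); // target.texture now holds the gradient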
Newbie to three.js. I have multiple n-sided polygons to be displayed as faces (I want the polygon faces to be opaque). Each polygon is facing a different direction in 3D space (essentially these faces are part of some building).
Here are a couple of methods I tried, but they do not fit the bill:
Used a Geometry object, added the n vertices, and used a line mesh. It created the polygon as a hollow polygon. As my number of points is not just 3 or 4, I could not use the Face3 or Face4 object; essentially I need a Face-n object.
I looked at the WebGL geometric shapes example. The Shape object works in 2D with extrusion, and all the objects in the example lie in one plane, while my requirement is that each polygon has a different 3D normal vector. Should I use a 2D Shape, take note of the face normal, and rotate the 2D shape after rendering?
Or is there a better way to render multiple 3D flat polygons with opaque faces from just x, y, z vertices?
As long as your polygons are convex, you can still use the Face3 object. If you take one n-sided polygon, let's say a hexagon, you can create Face3 triangles by taking vertices (0,1,2) as one face, vertices (0,2,3) as another face, vertices (0,3,4) as another, and vertices (0,4,5) as the last. I think you will get the idea if you draw it on paper. But this works only for convex polygons.
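A minimal sketch of that fan triangulation, using the legacy Geometry/Face3 API this answer refers to (polygonToMesh is an illustrative name; points is an ordered array of THREE.Vector3 around the polygon):

    function polygonToMesh(points, material) {
      const geometry = new THREE.Geometry();
      geometry.vertices = points;
      // Fan from vertex 0: (0,1,2), (0,2,3), ..., (0, n-2, n-1).
      for (let i = 1; i < points.length - 1; i++) {
        geometry.faces.push(new THREE.Face3(0, i, i + 1));
      }
      geometry.computeFaceNormals();
      return new THREE.Mesh(geometry, material);
    }

Note that Geometry and Face3 were removed from three.js in r125; with current versions the same fan of index triples goes into the index of a BufferGeometry instead.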
I'm drawing a simple cube using 8 vertices and 36 indices. No problem as long as I don't try to texture it.
However, I want to texture it. Can I do that with only 8 vertices? I seem to get some strange texture behaviour. Do I need to set up the cube with 24 vertices and 36 indices to be able to texture it correctly?
It just doesn't make sense to use vertices AND indices then; I could just as well use vertices only.
One index refers to one whole set of attributes (position, normal, color, edge flag, etc.), not to the position alone. If you are willing to have the texture mirrored on adjacent faces of the sides of your cube, you could share both texture and vertex coordinates for the sides. However, the top and bottom faces could not share those same coordinates, because one axis of their texture coordinates would not vary. Once you add other attributes (the normal in particular), a cube needs 24 individual vertices (each with its own position, texture coordinate and normal) to have "flat" sides.
Another approach that may work for you is texture coordinate generation. However, needing 24 individual vertices for a cube is perfectly normal.
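To illustrate, here is a sketch of the usual layout (coordinates made up): 4 unique vertices per face, 24 in total, referenced by 36 indices. The indices still pay off, because each face's two triangles share its 4 vertices:

    const vertices = new Float32Array([
      // front face: position (x, y, z), then uv (u, v)
      -1, -1,  1,   0, 0,
       1, -1,  1,   1, 0,
       1,  1,  1,   1, 1,
      -1,  1,  1,   0, 1,
      // top face: two corners coincide with front-face corners, but they
      // are separate records because the UVs (and normals) differ
      -1,  1,  1,   0, 0,
       1,  1,  1,   1, 0,
       1,  1, -1,   1, 1,
      -1,  1, -1,   0, 1,
      // ... four more faces, 4 vertices each
    ]);
    const indices = new Uint16Array([
      0, 1, 2,  0, 2, 3,  // front: two triangles over 4 shared vertices
      4, 5, 6,  4, 6, 7,  // top
      // ... two triangles per remaining face
    ]);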