I have a set of geometry collections that I want to intersect, in order to get a geometry collection containing the cross intersection of the whole set.
I tried the 'intersection' method, but it only computes the shared geometry between two geometries; it does not return a collection of geometries representing the cross of two geometries.
If someone knows a good way to do this, I would appreciate it!
Thank you
Your question does not specify which geometric primitives your geometry uses. In any case, I suggest using a computational geometry library such as CGAL: represent your primitives with its classes, and it provides functions to compute the intersections between all types of primitives.
The explanation in the three.js documentation for BufferGeometry is quite hard for me to understand.
It says:
BufferGeometry is a representation of mesh, line, or point geometry. Includes vertex positions, face indices, normals, colors, UVs, and custom attributes within buffers, reducing the cost of passing all this data to the GPU.
I didn't quite understand what those sentences meant.
What is the purpose of BufferGeometry? How do you visualize BufferGeometry in real life?
Thank you!
An instance of this class holds the geometry data intended for rendering.
If you want to visualize this data, you have to define a material and the type of 3D object (mesh, lines or points). The code example on the documentation page shows the respective JavaScript statements.
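For illustration, here is a minimal sketch (not the documentation's own example; the scene variable is assumed to exist, and in older three.js releases setAttribute was named addAttribute) that builds a BufferGeometry for a single triangle and renders it as a mesh:

const geometry = new THREE.BufferGeometry();

// three vertices (x, y, z) stored in one flat typed array
const positions = new Float32Array([
  0, 0, 0,
  1, 0, 0,
  0, 1, 0
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// the geometry only becomes visible once it is paired with a material
// and wrapped in a 3D object type (Mesh, Line or Points)
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);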
I want to be able to reference a specific face on a specific mesh in a glTF file. I am confused by the notion of primitives, however. Normally, I would use the face index (i.e., in three.js) and I would always be able to reference the same face. However, sometimes meshes in glTF have multiple primitives. Do these use the same face buffer? Do they at least use consecutive face buffers? I am wondering if I can reference a face in a mesh using just one number (i.e., a face index) or if I need to also use a primitive index.
Do mesh primitives share a pool of vertices?
Two glTF primitives in a single mesh could be related, or unrelated, the same ways as two glTF meshes each containing a single primitive. Two primitives could have:
same vertex attributes but different indices.
same vertex attributes AND indices, but different materials.
no shared vertex attributes or indices
entirely different draw modes (POINTS, LINES, TRIANGLES, ...)
So unless you're fully in control of the files you're loading, the default and safest assumption would be to treat each primitive as a completely separate mesh. If there are more specific cases you want to check for (like the first two bullets above), you can always add that as a later optimization.
If you're loading a glTF file into threejs, each primitive will become a separate THREE.Mesh under a common THREE.Group.
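As a hedged sketch (the file name 'model.gltf' is a placeholder, and GLTFLoader is assumed to be available on the THREE namespace, e.g. via the examples scripts), you can see this one-Mesh-per-primitive mapping by traversing the loaded scene:

const loader = new THREE.GLTFLoader();
loader.load('model.gltf', (gltf) => {
  gltf.scene.traverse((object) => {
    if (object.isMesh) {
      // every glTF primitive arrives here as its own THREE.Mesh,
      // collected under a THREE.Group
      console.log(object.name, object.geometry.attributes.position.count, 'vertices');
    }
  });
});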
For further details, see the glTF specification section on Meshes.
I'm trying to learn more about performant geometries in THREE.js, and have come to understand that an indexed BufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
My understanding so far is that in an indexed BufferGeometry, vertices that are re-used in a geometry are only added to the geometry once, and each instance of a given re-used vertex are referenced by their index position in the vertex array.
My understanding of the InstancedBufferGeometry is that this geometry allows one to create a "blueprint" of an object, send one copy of that object's vertices to a shader, then use custom attributes to modify each copy of the blueprint's position, rotation, scale, etc. [source]
I'd like to better understand: are there cases in which an indexed BufferGeometry will be more performant than an InstancedBufferGeometry?
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large? How are the vertices in an InstancedBufferGeometry counted?
If anyone could help clarify the situations in which indexed BufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.
[...] an indexed BufferGeometry and InstancedBufferGeometry are the two most performant geometry types.
Yes, BufferGeometries in general are the most performant way to deal with geometry-data as they store data in exactly the format that is used in the communication with the GPU via WebGL. Any plain Geometry is internally converted to a BufferGeometry before rendering.
You are also correct in your descriptions of the indexed and instanced geometries, but I'd like to note one more detail: in an indexed geometry, the instructions that tell the GPU how to assemble the triangles are separated from the vertex data and presented to the GPU in a special index attribute (as opposed to being an implied part of the vertex order, as in non-indexed arrays).
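As a small sketch (a hypothetical quad, not taken from the question), four vertices plus an index are enough to assemble two triangles:

const geometry = new THREE.BufferGeometry();

// four unique vertices of a unit quad
const vertices = new Float32Array([
  0, 0, 0,   // 0
  1, 0, 0,   // 1
  1, 1, 0,   // 2
  0, 1, 0    // 3
]);
geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));

// the index assembles two triangles; vertices 0 and 2 are reused
geometry.setIndex([0, 1, 2,  0, 2, 3]);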
I'd like to better understand: are there cases in which an indexed BufferGeometry will be more performant than an InstancedBufferGeometry?
They do different things at different levels, so I don't think there are many use-cases where a choice between them makes much sense.
In fact, you can even create an instanced geometry whose "blueprint" geometry is an indexed BufferGeometry.
Let's dive a bit into the details to explain. An instanced geometry allows you to render multiple "clones" of the same "blueprint"-geometry in a single draw-call.
The first part of this, the creation of the blueprint, is identical to rendering a single geometry. For this, the attributes (positions, normals, uv-coordinates and possibly the index for an indexed geometry) need to be transferred to the GPU.
The special thing about instanced geometries is a set of extra attributes (in three.js, InstancedBufferAttribute). These control how many times the geometry will be rendered and provide some instance-specific values. A typical use case would be an additional vec3 attribute for the instance position and a vec4 attribute for a per-instance quaternion, but it could really be anything else as well.
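A hedged sketch of that setup (the triangle blueprint, the instance count and the attribute name instanceOffset are assumptions, chosen to match the shader below):

const geometry = new THREE.InstancedBufferGeometry();

// blueprint: a single triangle
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  0, 0, 0,
  1, 0, 0,
  0, 1, 0
]), 3));

// one offset per instance; in most three.js versions the number of instances
// is derived from the instanced attribute's count (it can also be set explicitly)
const count = 1000;
const offsets = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  offsets[i * 3 + 0] = Math.random() * 10;
  offsets[i * 3 + 1] = Math.random() * 10;
  offsets[i * 3 + 2] = Math.random() * 10;
}
geometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));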
In the vertex-shader, these special attributes look just like any other attribute and you need to manually apply the instance-specific updates per vertex. So instead of this:
attribute vec3 position;
void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
You would have something like this:
attribute vec3 position;
attribute vec3 instanceOffset; // this is the InstancedBufferAttribute
void main() {
  gl_Position =
      projectionMatrix
      * modelViewMatrix
      * vec4(position + instanceOffset, 1.0);
}
What you don't see here is that the vertex-shader in the instanced version will not only be called once per vertex of your geometry (as it is the case for regular rendering) but once per vertex and instance.
So there isn't actually any magic going on, instanced geometries are in fact nothing but a very efficient way to express duplication of entire geometries.
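To connect the two pieces, here is a hedged sketch of the JavaScript side (assuming the InstancedBufferGeometry from above and an existing scene; THREE.ShaderMaterial already declares position, projectionMatrix and modelViewMatrix for you, so only the instanced attribute needs declaring):

const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 instanceOffset; // the InstancedBufferAttribute from above
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + instanceOffset, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
  `
});

// a single draw call renders all instances of the blueprint
scene.add(new THREE.Mesh(geometry, material));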
Also, in the InstancedBufferGeometry, are there WebGL maximum parameters (such as maximum vertices per mesh) that one must consider so as to avoid making a mesh too large?
I am not sure about that, but I haven't encountered any so far. If you are aware that rendering 1000 instances of an object with 1000 vertices will invoke the vertex shader a million times, that should help you judge the performance implications.
If anyone could help clarify the situations in which an indexed BufferGeometry and InstancedBufferGeometry should be used, and the performance ceilings of InstancedBufferGeometry, I'd be very grateful.
You can (and maybe should) use indexed geometries for almost any kind of geometry. But they are not free of drawbacks:
when using indices, all attributes will get the same treatment. So for instance, you can't use per-face colors in indexed geometries (see Access to faces in BufferGeometry)
for point clouds or geometries with few repeated vertices, they will do more harm than good (due to the extra memory/bandwidth needed for the index)
most of the time, though, they will give a performance benefit (see the sketch after this list):
less memory/bandwidth required for vertex-data
GPUs can cache results of vertex-shaders and re-use them for repeated vertices (so, in an optimal case you'd end up with one VS-invocation per stored vertex, not per index)
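As a rough illustration of that trade-off (a hedged sketch; the class may be called BoxBufferGeometry in older three.js releases), compare an indexed cube with its non-indexed "triangle soup" copy:

const indexed = new THREE.BoxGeometry(1, 1, 1);    // 24 vertices shared via a 36-entry index
const nonIndexed = indexed.toNonIndexed();         // per-face attributes become possible, but...

console.log(indexed.attributes.position.count);    // 24
console.log(nonIndexed.attributes.position.count); // 36 -- every triangle stores its own vertices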
For instanced geometries
if you have a larger number of somewhat similar objects where the differences can be expressed in just a few numbers, go for instanced geometries (simple case: render copies of the same object at different locations, complex case: render a forest by changing the tree's geometry based on some instance-attribute or render a crowd of people by changing the individual persons pose with an instance-attribute)
another thing I found quite inspiring: the rendering of fat lines using instancing, where each line segment is an instance made of 6 triangles (see https://github.com/mrdoob/three.js/blob/dev/examples/js/lines/LineSegmentsGeometry.js by @WestLangley)
Drawbacks:
as it is right now, there is no built-in support for using regular materials together with instanced geometries; you have to write the shaders yourself (to be precise: there is a way to do it, but it requires some intimate knowledge of how the three.js shaders work).
So, I want to start making a game engine, and I realized that I will have to draw 3D objects and a GUI (immediate mode) at the same time.
The 3D objects will use a perspective projection matrix, and since the GUI is in 2D space I will have to use an orthographic projection matrix.
How can I implement that? I'd appreciate any guidance; I'm not a professional graphics programmer.
Also, I'm using DirectX 11, so please keep it to that.
To preface my answer, when I say "draw at the same time", I mean all drawing that takes place with a single call to ID3D11DeviceContext::Draw (or DrawIndexed/DrawAuto/etc). You might mean something different.
You are not required to draw objects with orthographic and perspective projections at the same time, and it isn't very commonly done.
Generally, the projection matrix is provided to a vertex shader via a shader constant (frequently as a concatenation of the World, View and Projection matrices). When you draw a perspective object, you bind one set of constants; when drawing an orthographic one, you bind different ones. Frequently, different shaders are used to render perspective and orthographic objects, because they generally have completely different properties (e.g. lighting).
You could draw the two different types of objects at the same time, and there are several ways you could accomplish that. A straightforward way would be to provide both projection matrices to the vertex shader, and have an additional vertex stream which determines which projection matrix to use.
In some edge cases, you might get a small performance benefit from this sort of batching, but I don't suggest you do that. Make your life easier and use separate draw calls for orthographic and perspective objects.
In the example Interactive Raycasting Points there are 4 different functions to generate the point cloud:
1. generatePointcloud (with buffer geometry)
2. generateIndexedPointcloud (buffer geometry with indices)
3. generateIndexedWithOffsetPointcloud (buffer geometry with added drawcall)
4. generateRegularPointcloud (with normal geometry)
Could someone explain what the difference is between these 4 types, and if there are any performance benefits/certain situations where one is suited more than the others?
Thanks!
The purpose of the example Interactive Raycasting Points is to demonstrate that raycasting against THREE.Points works for a variety of geometry types.
So-called "regular geometry", THREE.Geometry, is the least memory-efficient geometry type, and in general, has longer load times than THREE.BufferGeometry.
BufferGeometry can be "indexed" or "non-indexed". Indexed BufferGeometry, when used with meshes, allows for vertices to be reused; that is, faces that share an edge can share a vertex. In the case of point clouds, however, I do not see a benefit to the "indexed" type.
BufferGeometry with draw calls -- now called groups -- allows for only a subset of the geometry to be rendered, and also allows for a different material index to be associated with each group.
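A hedged sketch of the group API (the point count, split and materials are arbitrary choices for this illustration; whether every object type honors material arrays can depend on the three.js version):

const positions = new Float32Array(6000 * 3);   // 6000 random points
for (let i = 0; i < positions.length; i++) positions[i] = Math.random() * 10;

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// addGroup(start, count, materialIndex): only vertices covered by a group are drawn
geometry.addGroup(0, 3000, 0);      // first half  -> materials[0]
geometry.addGroup(3000, 3000, 1);   // second half -> materials[1]

const points = new THREE.Points(geometry, [
  new THREE.PointsMaterial({ color: 0xff0000, size: 0.05 }),
  new THREE.PointsMaterial({ color: 0x0000ff, size: 0.05 })
]);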
The function generateIndexedWithOffsetPointcloud appears to have been named when draw calls, a.k.a. groups, were called "offsets".
I do not believe raycasting in three.js honors groups. I believe it raycasts against the entire geometry. In fact, I am not sure groups are working correctly at all in the example you reference.
three.js r.73