I would like to know if there is any way to update the vertices of a loaded json/glb file dynamically.
I am planning to set the position of each vertex in x, y, z coordinates.
As I understand it, I can only access positions from a BufferGeometry, but that will not have the indexing for the actual vertices. I would like to update each vertex like a vector position.
for example i have an array like below
array = [
  {x: 0, y: 10, z: 3}, {x: 0, y: 10, z: 3}, {x: 0, y: 10, z: 3}
]
Now, by using this, I should be able to update each vertex of the glb/json to the respective position, in the same index order.
Any help on this is highly appreciated.
The file format used to load your model doesn't really matter here — once it's loaded it is part of a THREE.Scene, which may contain some mix of Group, Object3D, Mesh, and other object types. It might have many meshes, so you'll have to "traverse" the scene hierarchy and update any meshes you need.
For example:
model.traverse((object) => {
  if (object.isMesh) {
    // modify object.geometry here.
  }
});
To understand how to modify the mesh, see the Mesh and BufferGeometry documentation. To update the Nth vertex position you would do something like:
object.geometry.attributes.position.setXYZ(n, x, y, z);
Note that the position attribute stores positions of each vertex, and is a BufferAttribute instance.
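Putting it together with the array from the question: a minimal sketch, assuming the array order matches the vertex order in the position attribute (which for glTF is the order the exporter wrote them, so verify it for your asset), and remembering to flag the attribute for re-upload:

model.traverse((object) => {
  if (!object.isMesh) return;
  const position = object.geometry.attributes.position;
  for (let i = 0; i < array.length && i < position.count; i++) {
    position.setXYZ(i, array[i].x, array[i].y, array[i].z);
  }
  position.needsUpdate = true; // tell three.js to re-upload the buffer to the GPU
  object.geometry.computeVertexNormals(); // optional: keep lighting consistent after moving vertices
});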
Related
I am trying to fit known well-defined shapes (eg boxes, cylinders; with configurable positions, rotations and dimensions) to a set of points with normals generated from sampling a 3D mesh. My current method is to define a custom fit function for each shape and pass it to a third-party optimisation function:
fitness = get_fitness(shape_parameters, points)
best_parameters = external.optimise(get_fitness, initial_parameters, points)
(for reference, I'm currently using Python 3 and scipy.optimize.minimize with bounds, but language is irrelevant).
This fitness function for a box would look something like:
def get_fitness(parameters, points):
    side_fitnesses = []
    for side in [top, right, bottom, left, back, front]:
        dists = get_side_distances(parameters, points, side)
        ndevs = get_side_normal_deviations(parameters, points, side)
        side_fitnesses.append(combine_dists_and_ndevs(dists, ndevs))
    fitnesses = choose_best_side_for_each_point(side_fitnesses)
    return mean(fitnesses)
However this means I have to determine outliers (with/without caching), and I can only fit one shape at a time.
For example (in 2D), for these points (with normals), I'd like the following result:
Notice that there are multiple shapes returned, and outliers are ignored. In general, there can be many, one or zero shapes in the input data. Post-processing can remove invalid (eg too small) results.
Note: my real problem is in 3D. I have segments of a 3D mesh representation of real-world objects, which means I have more information than just the points/normals shown in the example above (such as face areas and connectivity).
Further reading:
Not well-defined shape-fitting
Highly technical n-dimensional fitting
Primitive shape-fitting thesis
Unanswered SO question on something similar
PS: I'm not sure if StackOverflow is the best StackExchange site for this question
Well, so you will have to handle meshes with volume then. That changes things a lot ...
segment objects
by selecting all faces enclosing their interior ... so it's similar to this:
Finding holes in 2d point sets?
So simply find a point inside a yet-unused mesh ... and "fill" the volume until you hit all the faces it is composed of. Select those faces as belonging to the new object, and mark them as used ... Beware that touching objects may lead to faces being used twice or more (a simplified grouping sketch follows after this step).
You can also do this with vector math: just test whether a line from some inside point to a face hits any other face ... if it does not, you have found a surface face ... similar to Hit test
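For a rough idea of the grouping step, here is a simplified sketch in JavaScript: it labels connected components of faces over shared edges rather than doing a true volumetric fill, so it will not separate touching objects; `faces` as an array of vertex-index triples is an assumed input format.

// Sketch: group faces into connected components by shared edges.
// Assumes `faces` is an array of [a, b, c] vertex-index triples.
function segmentFaces(faces) {
  const edgeToFaces = new Map();
  faces.forEach((f, i) => {
    for (let k = 0; k < 3; k++) {
      const a = f[k], b = f[(k + 1) % 3];
      const key = a < b ? `${a}_${b}` : `${b}_${a}`;
      if (!edgeToFaces.has(key)) edgeToFaces.set(key, []);
      edgeToFaces.get(key).push(i);
    }
  });
  const used = new Array(faces.length).fill(false);
  const objects = [];
  for (let seed = 0; seed < faces.length; seed++) {
    if (used[seed]) continue;
    const stack = [seed], component = [];
    used[seed] = true;
    while (stack.length) {
      const i = stack.pop();
      component.push(i);
      for (let k = 0; k < 3; k++) {
        const a = faces[i][k], b = faces[i][(k + 1) % 3];
        const key = a < b ? `${a}_${b}` : `${b}_${a}`;
        for (const j of edgeToFaces.get(key)) {
          if (!used[j]) { used[j] = true; stack.push(j); }
        }
      }
    }
    objects.push(component); // face indices of one object
  }
  return objects;
}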
process object (optional)
you can further segment the object mesh into the "planar" pieces it is composed of, by grouping faces belonging to the same plane ... or inside an enclosing edge/contour ... then detect what they are:
triangle
rectangle
polygon
disc
from the count and type of faces you can detect basic objects like:
cone = 1 disc + 1 curved surface with a single apex point above the disc center
box/cube = 6 rectangles/squares
cylinder = 2 discs + 1 curved surface with its center axis going through the discs' centers
compute basic geometric properties of individual objects (optional)
like BBOX or OBB, surface, volume, geom. center, mass center, ...
Now just decide what type of object it is. For example, the ratio between surface area and volume can hint at a sphere or ellipsoid; if the OBB matches the sides, it hints at a box; if the geometric and mass centers are the same, it hints at a symmetrical object ... (a sketch for computing area and volume follows below)
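If it helps, the surface area and volume mentioned above are cheap to compute for a closed, consistently wound triangle mesh. A minimal sketch, with vertices as [x, y, z] arrays and faces as index triples (both assumed formats, not from the original post):

// Sketch: surface area and signed volume of a closed triangle mesh.
function cross(u, v) { return [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]]; }
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
function sub(u, v) { return [u[0]-v[0], u[1]-v[1], u[2]-v[2]]; }
function length(u) { return Math.sqrt(dot(u, u)); }

function areaAndVolume(vertices, faces) {
  let area = 0, volume = 0;
  for (const [ia, ib, ic] of faces) {
    const a = vertices[ia], b = vertices[ib], c = vertices[ic];
    area += 0.5 * length(cross(sub(b, a), sub(c, a)));
    volume += dot(a, cross(b, c)) / 6; // signed tetrahedron volume against the origin
  }
  return { area, volume: Math.abs(volume) };
}

// For a sphere, area^3 / volume^2 = 36*PI; larger ratios hint at less sphere-like shapes.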
pass the mesh to the fitting function of each possible object type
so based on bullets #2, #3 you have an idea of which shapes each object could be, so just confirm it with your fitting function ...
to ease this process you can use the properties from #3; for an example of this, see something similar:
ellipse matching
so you can come up with similar techniques for basic 3D shapes...
I use a pivot group to rotate my planegeometries instead of rotating each one individually. After rotating the pivot group object, I want to find the new position of each child planegeometry mesh, corresponding to its actual world position.
How do I go about doing this?
The easy way
Like Craig mentioned, getWorldPosition is a function on Object3D (the base class of pretty much everything in the scene), which returns the object's world position as a new Vector3:
var childPlaneWorldPosition = childPlane.getWorldPosition();
(Recent three.js releases expect a target vector instead of allocating one: childPlane.getWorldPosition(new THREE.Vector3()).)
The harder way:
There are two methods for converting between local and world positions: localToWorld and worldToLocal.
These are also functions on Object3D, and take a Vector3. The vector is then (destructively) converted to the desired coordinate system. Just know that it's not smart enough to know if the vector you're giving it is already in the right coordinate system--you'll need to keep track of that.
So, to convert a child plane's position from local to world coordinates, you would do this:
// clone because localToWorld changes the vector passed to it
var childPlanePosition = childPlane.position.clone();
childPlane.parent.localToWorld(childPlanePosition);
Notice that localToWorld is called on childPlane's parent. This is because the childPlane is local to its parent, and therefore its position is local to its parent's coordinate system.
The hard(er to understand) way:
Each childPlane stores not only its local transformation matrix (childPlane.matrix), but also its world transformation matrix (childPlane.matrixWorld). You can, of course, get the world position directly from the matrixWorld property in one step.
var childWorldPosition = new THREE.Vector3(
childPlane.matrixWorld.elements[12],
childPlane.matrixWorld.elements[13],
childPlane.matrixWorld.elements[14]
);
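For what it's worth, there is also a built-in helper that reads the translation column of a matrix for you, doing the same thing in one line:

var childWorldPosition = new THREE.Vector3().setFromMatrixPosition(childPlane.matrixWorld);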
Edit to answer some questions
"If I understand correctly, can I find the "real" position of the meshes in the pivot-group children-array?"
Yes. If you called:
pivotGroup.add(childPlane);
Then that childPlane will be listed in the pivotGroup.children array, which you can use to iterate over all of the childPlane objects.
"And clone these to the position object for each meshes?"
If you want the planes to be in world coordinates (in the scene), but you used the above code to add them to the group, then they are no longer direct children of the scene. You would need to re-add them to the scene:
scene.add(childPlane);
And then apply their calculated world positions. That said, why not just leave them in the group?
(You didn't ask this one) "How would you leave the planes as direct children of the scene, but rotate them as a group?"
Well, you wouldn't. But three.js does this group rotation by multiplying matrices to come up with finalized world matrices for each plane. So you could do the same thing manually, by creating a rotation matrix, and applying it to all of your planes.
var rotMat = new THREE.Matrix4().makeRotationFromEuler(new THREE.Euler(x, y, z));
for (var i = 0; i < planesArray.length; ++i) { // I guess this would loop over your 3D array
  planesArray[i].applyMatrix4(rotMat); // named applyMatrix() in older three.js releases
}
Use plane_mesh.getWorldPosition()
I'm using OpenMesh to remesh/manage some mesh objects.
With subdivide/decimate/smooth and other tools from OpenFlipper, I can change the mesh topology.
This however results in the vertex colors losing their meaning, as new vertices will all have a black color and there is no interpolation when the mesh topology changes, resulting in visual artifacts.
Is there a way to tell OpenMesh to reproject vertex colors back to the old mesh to interpolate the vertex color?
If not, what would be a good way to do it manually? Is there any state of the art for vertex back-projection?
In OpenFlipper using requestTriangleBsp() you can request a BSP tree for your original mesh object. (You will have to keep a copy of your original mesh as long as you want to use that BSP tree.) Whenever you want to project a point onto your original mesh, you can then use the nearest() member function on the BSP tree in order to get the closest face to the supplied point. After that it's only a matter of projecting your point into that face, computing barycentric coordinates and interpolating the vertex colors.
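The last two steps (barycentric coordinates and interpolation) are plain math rather than OpenMesh API. A language-agnostic sketch, shown here in JavaScript, with positions and colors as plain [x, y, z] / [r, g, b] arrays (an assumed format):

// Sketch of the interpolation step. a, b, c: triangle corner positions;
// p: query point already projected into the triangle's plane;
// ca, cb, cc: corner colors.
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
function sub(u, v) { return [u[0]-v[0], u[1]-v[1], u[2]-v[2]]; }

function interpolateColor(p, a, b, c, ca, cb, cc) {
  const v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
  const d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
  const d20 = dot(v2, v0), d21 = dot(v2, v1);
  const denom = d00 * d11 - d01 * d01;
  const v = (d11 * d20 - d01 * d21) / denom; // barycentric weight of corner b
  const w = (d00 * d21 - d01 * d20) / denom; // barycentric weight of corner c
  const u = 1 - v - w;                       // barycentric weight of corner a
  return [0, 1, 2].map(i => u * ca[i] + v * cb[i] + w * cc[i]);
}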
I think you want to get this information for each vertex of the output mesh: VertexInfo = {origin face id, barycentric coordinate}. You can project each vertex onto the original mesh to compute its VertexInfo. However, it is not recommended to recover topology info from geometry info. Just imagine a box mesh that is nearly flat: I don't think you can get the right VertexInfo by reprojecting. The best way to get VertexInfo is to compute it inside each concrete command from topology info, as in the subdivide/decimate commands you mentioned.
I'll explain the problem I'm facing, because probably I'm doing something wrong.
I have some models in Blender; these are exported to json and used in three.js. In these models there are some planes, which are then replaced on the fly in js with another mesh to enable a cloth simulation.
The models can rotate once in the scene, and these planes, being children of the models, will also rotate. Moreover, the original planes from Blender could have some rotation applied.
However we want the wind to be global, so for each plane and each frame, a global (world) wind direction vector is cloned and then transformed into the local coordinates of each plane, so that cloth particles can be moved correctly.
This is accomplished simply with :
globalDir = new THREE.Vector3(0,0,1); // Wind from north
// ...
var localDir = plane.worldToLocal(globalDir.clone());
// use localDir vector for moving around vertices based on wind
This "works", meaning that all clothes children of a single model are aligned to the same global wind, but :
nothing else changing, only refreshing the page, given the same values for the globalDir vector, the wind direction is always different.
from model to model, the direction is different.
It seems to be all about how the world matrix gets updated relative to the object hierarchy, the order in which the models are loaded and added, and so on.
I've been trying to add and remove calls to updateMatrix and updateMatrixWorld everywhere, so I'm asking for guidelines on how the methods updateMatrix, updateMatrixWorld, localToWorld and worldToLocal are supposed to be used.
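One thing worth checking, as a guess from the symptoms: worldToLocal transforms points, so it applies the full inverse world matrix, translation included, while a direction such as wind should only be rotated. A rotation-only conversion would look something like this (assuming plane.matrixWorld is current; world matrices are normally refreshed during render, so call model.updateMatrixWorld(true) first if you read them before the first frame):

// Convert a world-space direction into the plane's local space, rotation only.
// worldToLocal() also applies the inverse translation, which skews directions.
var localDir = globalDir.clone().applyQuaternion(
  plane.getWorldQuaternion(new THREE.Quaternion()).invert() // .inverse() in older three.js releases
);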
I created a TriangleMesh in JavaFx. The vertices were assigned some initial texture coordinates. I want to animate the texture on the mesh by changing the texture coordinates of the vertices slowly over time. Is this possible? If so, how to do it? If not, what is the best way to achieve this effect?
It is possible...
One approach would be to create a DoubleProperty for uvStartX and uvStartY (u, v).
Override its invalidated() method; in the invalidated method, rebuild the texture coordinates and call setMesh() with the updated mesh.
Then change those property values in a Timeline.
I've done something similar with points, mesh divisions, etc.
It works and is rather smooth, too.
The hard part is making sure all your code maps to points between 0 and 1.0, and keeping those points lined up with each other.