Fitting geometries together - three.js

Say you have one 'body' geometry (from an OBJ file) and several different 'lens' geometries, of different sizes, that can be 'mounted' on the body object.
How is the mount information best represented (I am thinking the data models for body and lens would need offset data representing the respective mount points)?
In THREE.js, for a chosen lens/body pair would a Group be used to 'put together' the two pieces?
New to THREE.js (sorry)...

That sounds like a good approach.
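A minimal sketch of that idea, assuming the mount offsets live on hypothetical data models alongside the loaded geometries (all names and offset values here are made up):

    import * as THREE from 'three';

    // Hypothetical data models: each stores its mount point as an offset
    // from the geometry's own origin.
    const bodyData = { geometry: bodyGeometry, mount: new THREE.Vector3(0, 0.5, 0.2) };
    const lensData = { geometry: lensGeometry, mount: new THREE.Vector3(0, 0, -0.1) };

    const body = new THREE.Mesh(bodyData.geometry, new THREE.MeshStandardMaterial());
    const lens = new THREE.Mesh(lensData.geometry, new THREE.MeshStandardMaterial());

    // Translate the lens so its mount point lands on the body's mount point.
    lens.position.copy(bodyData.mount).sub(lensData.mount);

    // Group the pair so the assembly moves and rotates as one object.
    const assembly = new THREE.Group();
    assembly.add(body, lens);
    scene.add(assembly); // 'scene', bodyGeometry and lensGeometry assumed to exist

Because both meshes are children of the group, swapping lenses is just removing one child mesh and adding another with its own offset applied.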

Why should I use the node structure for a non-rigged model?

I've been working with Vulkan for a few weeks. I have a question about the data structure of a model:
I'm using Assimp for loading FBX and DAE files. The loaded model then usually contains several nodes (the root node and its child nodes).
Should I keep this structure for non-rigged models? Or could I transform all the meshes (or rather their vertices) into world space at load time by multiplying with the offset matrix of each node (and then delete the node structure in my program)? I've never seen anyone transform a node (and its meshes) after loading if the model isn't rigged.
Or is there any other reason why I should keep this structure?
Assimp also offers the flag aiProcess_PreTransformVertices to flatten the transformation hierarchy. You can also do it manually, of course, by using the aiNode::mTransformation matrices and multiplying them in the right order.
A potential problem with these approaches (especially with the flag) is that the material properties of sub-meshes can get lost. Assimp doesn't care if sub-meshes have different material properties and just merges them, so that the material properties of a given sub-mesh can get lost. This also applies to other properties like sub-mesh names: if sub-meshes get merged, only one (arbitrarily chosen?) sub-mesh name remains.
That is, you'll want to avoid flattening the node hierarchy if you would like to use specific properties (names, material properties) of sub-meshes.
If your concern is only about transforming them to world space: if you are never going to do anything with the nodes in object space (e.g. transforming a sub-node relative to its parent node), then I don't see a reason not to transform them into world space.
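For illustration, here is a minimal sketch of the manual flattening, written with three.js math types rather than Assimp's C++ API; the node shape { transform, meshes, children } is hypothetical, standing in for aiNode::mTransformation, the node's meshes, and aiNode::mChildren:

    import * as THREE from 'three';

    // Recursively compose each node's local transform with its parent's,
    // then bake the result into the vertices so the hierarchy can be dropped.
    function flattenNode(node, parentMatrix) {
      const worldMatrix = new THREE.Matrix4().multiplyMatrices(parentMatrix, node.transform);
      for (const geometry of node.meshes) {
        geometry.applyMatrix4(worldMatrix); // vertices are now in world space
      }
      for (const child of node.children) {
        flattenNode(child, worldMatrix);
      }
    }

    flattenNode(rootNode, new THREE.Matrix4()); // identity matrix at the root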

How to apply multiple textures on one model in aframe

I have a model in Maya/Blender which has multiple UVs.
I thought the .mtl had all the info about materials/textures (as I can see the links in the .mtl), but apparently I have to link every texture to an object with src="texture.jpg".
Is there any other way than combining those textures in Photoshop/GIMP, or breaking my model into separate .obj files, each with its own texture?
Should I look more into the custom shading options in A-Frame/three.js (registerShader)?
The OBJ/MTL format does not support multiple UV sets. It may not support multiple materials on the same geometry either, I'm not sure. FBX and Collada do support multiple UVs, so you could try one of those.
But, searching for "threejs multiple UVs" shows that it is not easy to do multiple UVs without custom shaders, even once you have a newer model format. I would maybe try to bake your multiple UVs down into a single set in the modeling software, if that's possible.
MTL files can associate different texture maps with different material groups in the OBJ file, but the OBJ file can only describe a single set of UVs per poly face. Whether or not your OBJ writer or THREE's OBJ reader supports it is a different matter.
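If you do end up with a single geometry and a handful of textures, note that three.js can render one mesh with an array of materials, keyed by geometry groups. This still uses a single UV set, so it doesn't solve the multiple-UV problem, but it covers the per-material-group texturing case. A rough sketch (group ranges and file names are made up):

    import * as THREE from 'three';

    const loader = new THREE.TextureLoader();
    const materials = [
      new THREE.MeshStandardMaterial({ map: loader.load('body.jpg') }),
      new THREE.MeshStandardMaterial({ map: loader.load('trim.jpg') }),
    ];

    // Each group maps a range of vertex indices to a material index.
    geometry.clearGroups();
    geometry.addGroup(0, 300, 0);   // first 100 triangles -> materials[0]
    geometry.addGroup(300, 600, 1); // next 200 triangles  -> materials[1]

    const mesh = new THREE.Mesh(geometry, materials); // 'geometry' assumed loaded elsewhere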
On a side note: the actual Wavefront OBJ spec is interesting in that it supported all kinds of things no one implemented after 1999 or so, including NURBS patches with trim curves, and 1D texture maps (essentially LUTs)
https://en.wikipedia.org/wiki/Wavefront_.obj_file

Nearest neighbours

I'm trying to find a lightweight way to find nearby objects in three.js.
I have a bunch of cubes, and I want each cube to be able to determine the nearest cubes to it on demand.
Is there a better way to do this than just iterating through all objects and calculating the distance between them? I know the renderer does something similar to what I want when it sorts to find the order to render with, but I'm not getting too far just trying to read the three.js code.
The renderer is doing the same thing you're describing, but you may want to use a k-d tree in your case.
Have a look at this example:
http://threejs.org/examples/webgl_nearestneighbour.html
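For a modest number of cubes, the brute-force approach you describe is perfectly serviceable; a k-d tree only starts paying off at larger counts. A sketch of the brute-force query, assuming the cubes are plain meshes in an array:

    // O(n) nearest-neighbour query; compares squared distances to avoid sqrt.
    function nearestCube(target, cubes) {
      let best = null;
      let bestDistSq = Infinity;
      for (const cube of cubes) {
        if (cube === target) continue;
        const distSq = cube.position.distanceToSquared(target.position);
        if (distSq < bestDistSq) {
          bestDistSq = distSq;
          best = cube;
        }
      }
      return best;
    }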

Generating a multipoint topojson file

I am trying to create a map layer using D3 and Leaflet to display a large number of unique GPS data points. I created it using GeoJSON and Leaflet, but the performance was poor. I finally got TopoJSON installed and working, but I cannot get it to produce a MultiPoint geometry, only Point geometries, which do not shrink the file much. I passed in a CSV with all the points and converted it to a GeoJSON file, but I only get 70,000 Point geometries instead of one MultiPoint. What am I missing? Do I need to write the TopoJSON myself? I want to avoid this if possible.
TopoJSON won't help you in this case. To quote the website:
Rather than representing geometries discretely, geometries in TopoJSON files are stitched together from shared line segments called arcs.
As you have no line segments, there's no point in using TopoJSON -- it won't reduce the size of the file.
+1 What Lars said. Your best bet is probably to load the point data as a CSV using d3.csv() instead of a GeoJSON or TopoJSON, as it is much more compact. You can then loop over the data, adding each point to a layer group.
That said, 70,000 is a lot and your map is still probably going to be very slow. You might want to think about using something like PostGIS (or CartoDB for that matter) and request only those points that are visible in a given map state.
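A rough sketch of that CSV approach, assuming a promise-based d3.csv (D3 v5+), an existing Leaflet map in 'map', and hypothetical column names lat/lng:

    const layer = L.layerGroup().addTo(map);

    d3.csv('points.csv').then((rows) => {
      for (const row of rows) {
        // Unary + converts the CSV's string fields to numbers.
        L.circleMarker([+row.lat, +row.lng], { radius: 2 }).addTo(layer);
      }
    });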

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object, not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
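As a crude illustration of that "slap them onto something" step, here is a three.js sketch that maps six photos onto the six faces of a box; the file names are hypothetical, and the result is only a textured shell, not a modeled surface:

    import * as THREE from 'three';

    const loader = new THREE.TextureLoader();
    const sides = ['right.jpg', 'left.jpg', 'top.jpg', 'bottom.jpg', 'front.jpg', 'back.jpg'];
    const materials = sides.map((file) => new THREE.MeshBasicMaterial({ map: loader.load(file) }));

    // BoxGeometry has one group per face, so an array of six materials
    // assigns one image to each side.
    const box = new THREE.Mesh(new THREE.BoxGeometry(2, 1, 4), materials);
    scene.add(box); // 'scene' assumed to exist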
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images: being able to say that this spot in image A is this spot in image B, and that they both match this spot in image C, etc.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera: you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.
