Best method for spline with lots of faces - three.js

Let's say I have a spline that is 20,000ft long. So I set the segments to 20,000. I also need 36 faces around each segment. I'm also grabbing information from a database about each segment's faces.
Segment 1 [Face1: Red, Face2: Green, Face3: Black ..... Face 36: Red]
.
.
Segment 20000 [...]
What would be the best way to render something like this where I have a lot of data going into it without a lot of performance issues?

Indeed no simple matter. What I'd do myself is probably disassemble THREE.Spline and change its logic to use BufferGeometry. So e.g. instead of creating a vertex object you'd add a position to the typed array. But that would probably take a while to figure out. Using a worker is not really a good idea because this is a memory issue, not really a speed issue. And then you'd additionally have to pass the geometries between "threads".
Another approach is to build up your spline sequentially. So instead of
Create THREE.Spline with thousands of vertices
you could
Create THREE.Spline with 40 vertices which are around the camera. Display the inner 38 vertices. When the camera moves, create the following spline again with 40 vertices.
You'd have to have some camera logic like "where is my camera" and such. If you can do that computationally - perfect. If you don't know in advance where your vertices are located, or if there's no logic to their location, then use a helper construct like http://threejs.org/examples/#webgl_nearestneighbour.
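A rough sketch of that sliding-window idea, assuming your control points already sit in an ordered array of THREE.Vector3 and a hypothetical buildSplineMesh() that turns a slice of points into a renderable mesh (both names are placeholders for your own code):

var WINDOW = 40;           // control points kept around the camera
var windowStart = -1;      // index of the first point currently displayed
var currentMesh = null;

function updateSplineWindow(camera, allPoints) {
  // Find the control point nearest the camera (linear scan for clarity;
  // use a kd-tree or your own positional logic for large point counts).
  var nearest = 0, best = Infinity;
  for (var i = 0; i < allPoints.length; i++) {
    var d = camera.position.distanceToSquared(allPoints[i]);
    if (d < best) { best = d; nearest = i; }
  }

  var start = Math.max(0, Math.min(allPoints.length - WINDOW, nearest - WINDOW / 2));
  if (start === windowStart && currentMesh) return; // window unchanged, nothing to rebuild

  windowStart = start;
  if (currentMesh) scene.remove(currentMesh);

  // Hypothetical helper: builds a small spline/tube mesh from just 40 points.
  currentMesh = buildSplineMesh(allPoints.slice(start, start + WINDOW));
  scene.add(currentMesh);
}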

Why don't you check out the new BufferGeometry? I've managed to get a huge performance boost by doing so. Here's the docs entry to get you started: http://threejs.org/docs/#Reference/Core/BufferGeometry. Unfortunately, many examples are outdated, so beware. This one seems to work in r67, others don't: http://mrdoob.github.io/three.js/examples/#webgl_buffergeometry_rawshader.
Basically, you would preallocate a Float32Array of size 20000 * 36 * 3 * 3 (20000 segments * 36 faces * 3 vertices * 3 coordinates) and another one for colors. Then the arrays would be gradually filled with the data.
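A minimal sketch of that preallocation. The writeFace() helper is hypothetical, standing in for your own code that computes the three corners of one face from the spline and looks up its color from the database; newer three.js versions use setAttribute instead of addAttribute:

var SEGMENTS = 20000;
var FACES_PER_SEGMENT = 36;
var VERTS_PER_FACE = 3;
var FLOATS = SEGMENTS * FACES_PER_SEGMENT * VERTS_PER_FACE * 3; // 6,480,000 floats per array

var positions = new Float32Array(FLOATS);
var colors    = new Float32Array(FLOATS);

var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.addAttribute('color',    new THREE.BufferAttribute(colors, 3));

// Gradually fill the arrays; writeFace() writes 9 floats (3 vertices * xyz)
// into each array at the given offset.
var offset = 0;
for (var s = 0; s < SEGMENTS; s++) {
  for (var f = 0; f < FACES_PER_SEGMENT; f++) {
    writeFace(s, f, positions, colors, offset);
    offset += VERTS_PER_FACE * 3;
  }
}

var material = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
scene.add(new THREE.Mesh(geometry, material));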

Related

Kalman Filter on a set of points belonging to the same object?

Let's say you're tracking a set of 20 segments with the same length belonging to the same 3D plane.
To visualize, imagine that you're drawing a set of segments of length 10 cm randomly on a sheet of paper. And make someone move this sheet in front of the camera.
Let's say those segments are represented by two points A and B.
Let's assume we manage to track A_t and B_t for all the segments. The tracked points aren't stable from frame to frame resulting in occasional jitter which might be solved by a Kalman filter.
My questions are concerning the state vector:
A Kalman filter for A and B for each segment (with 20 segments this results in 40 KF) is an obvious solution but it looks too heavy (knowing that this should run in real-time).
Since all the tracked points have the same properties (belonging to the same 3D plane, have the same length) isn't it possible to create one big KF with all those variables?
Thanks.
Runtime: keep in mind that the Kalman equations involve matrix multiplications and one inversion, so having 40 states means having some 40x40 matrices. That will always take longer to calculate than running 40 one-state filters, where your matrices are 1x1 (scalar). Anyway, running the big filter only makes sense if you know of a mathematical relationship between your states (= correlation); otherwise its output is effectively the same as running the 40 one-state filters.
With the information given that's really hard to tell. E.g. if your segments always form a polyline you could describe that differently, in contrast to knowing nothing about the shape.
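To make the size argument concrete, here is a minimal sketch of the "many small filters" option with a constant-position model and made-up noise values (q, r); every tracked coordinate gets its own scalar filter, so all matrices stay 1x1:

// One-dimensional Kalman filter: state x, variance p,
// process noise q, measurement noise r (all scalars).
function ScalarKalman(q, r, initial) {
  this.q = q; this.r = r;
  this.x = initial; this.p = 1;
}
ScalarKalman.prototype.update = function (measurement) {
  // Predict: constant-position model, so the state stays put and uncertainty grows.
  this.p += this.q;
  // Correct.
  var k = this.p / (this.p + this.r);        // Kalman gain
  this.x += k * (measurement - this.x);
  this.p *= (1 - k);
  return this.x;
};

// 20 segments * 2 endpoints * 2 image coordinates = 80 independent scalar filters.
var filters = [];
for (var i = 0; i < 20 * 2 * 2; i++) filters.push(new ScalarKalman(0.01, 4, 0));

// Per frame: feed each raw coordinate through its own filter.
function smooth(rawCoords) {               // rawCoords: flat array of 80 numbers
  return rawCoords.map(function (c, i) { return filters[i].update(c); });
}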

Plumb bob camera distortion in Three.js

I'm trying to replicate a real-world camera within Three.js, where I have the camera's distortion specified as parameters for a "plumb bob" model. In particular I have P, K, R and D as specified here:
If I understand everything correctly, I want to set up Three to do the translation from "X" on the right to the "input image" in the top left. One approach I'm considering is making a Three.js Camera of my own, setting the projectionMatrix to ... something involving K and D. Does this make sense? What exactly would I set it to, considering K and D are specified as 1-dimensional arrays of length 9 and 5 respectively? I'm a bit lost on how to combine all the numbers :(
I notice in this answer that there are complicated things necessary to render straight lines as curved, the way they would be with certain camera distortions (like a fish eye lens). I do not need that for my purposes if that is more complicated. Simply rendering each vertex in the correct spot is sufficient.
This document shows the step by step derivation of the camera matrix (Matlab reference).
See: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
So, yes. You can calculate the matrix using this procedure and use it to map real-world 3D points to the 2D output image.
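As a starting point, here is a hedged sketch of turning the intrinsic matrix K (row-major, length 9: fx = K[0], cx = K[2], fy = K[4], cy = K[5]) into a three.js projection matrix. The signs of the cx/cy terms depend on your image-origin convention, and depending on your three.js version you may also have to keep projectionMatrixInverse in sync, so verify against a few known reference points. imgW, imgH, near and far are your own values. The distortion coefficients D are nonlinear and cannot be folded into this matrix; since straight lines may stay straight for your use case, they are simply left out here:

// Build a three.js projection matrix from pinhole intrinsics
// K = [fx 0 cx; 0 fy cy; 0 0 1], assuming image origin top-left, y down.
function projectionFromIntrinsics(fx, fy, cx, cy, width, height, near, far) {
  var m = new THREE.Matrix4();
  m.set(
    2 * fx / width, 0,               1 - 2 * cx / width,            0,
    0,              2 * fy / height, 2 * cy / height - 1,           0,
    0,              0,              -(far + near) / (far - near),  -2 * far * near / (far - near),
    0,              0,              -1,                             0
  );
  return m;
}

var camera = new THREE.PerspectiveCamera(); // fov/aspect are ignored once we override the matrix
camera.projectionMatrix = projectionFromIntrinsics(K[0], K[4], K[2], K[5], imgW, imgH, 0.1, 1000);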

Three.js: Interpreting renderer.info.render output

I'm looking at one of the Google Arts Experiments [link to page] that renders many images in a browser with WebGL + GLSL. I'm puzzled, though, by the fact that this scene includes 1,167,858 vertices and 389,286 quad faces, which equals 3 vertices per quad face (we see these numbers if we run renderer.info.render in the console on this page).
My question is: How in GLSL can one build or represent a quad face given fewer than 4 vertices? I'd be very grateful for any suggestions others can offer on this question!
More generally, are there tools one can use to investigate the ways a given page is using vertices, faces, and textures? I'd love to be able to really study the above-linked page as thoroughly as possible, so any tools that can help with this task would be very helpful!
The renderer.info.render output isn't necessarily 100 percent accurate. It only accumulates stats that it can gather with minimal overhead, since the stats gathering is enabled all the time, so take any discrepancies with a grain of salt. Also, that demo is using InstancedBufferGeometry. Geometry instancing works differently than classical rendering, in that the vertex stream is just used to parameterize each instance of the rectangle, so those 3 vertices are probably used to derive the rectangle's width/height/uv coordinates and position.
In summary, you can use instancing to draw a bunch of rectangles and only have to specify the parameters that make each one different, i.e. position/tex coord/scaling, etc.
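A sketch of that idea, not taken from the Arts Experiment page itself: one shared unit quad plus per-instance attributes, so each rectangle only costs a handful of floats. The attribute names and the shader are illustrative; older three.js uses addAttribute/PlaneBufferGeometry, newer versions use setAttribute/PlaneGeometry:

var base = new THREE.PlaneBufferGeometry(1, 1);            // one shared unit quad

var geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.addAttribute('position', base.attributes.position);
geometry.addAttribute('uv', base.attributes.uv);

var COUNT = 100000;
var offsets = new Float32Array(COUNT * 3);                  // per-instance position
var scales  = new Float32Array(COUNT * 2);                  // per-instance width/height
// ... fill offsets/scales from your data ...
geometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.addAttribute('scale',  new THREE.InstancedBufferAttribute(scales, 2));

var material = new THREE.RawShaderMaterial({
  vertexShader: [
    'precision highp float;',
    'uniform mat4 modelViewMatrix; uniform mat4 projectionMatrix;',
    'attribute vec3 position; attribute vec3 offset; attribute vec2 scale;',
    'void main() {',
    '  vec3 p = vec3(position.xy * scale, position.z) + offset;',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: 'precision highp float; void main() { gl_FragColor = vec4(1.0); }'
});

scene.add(new THREE.Mesh(geometry, material));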

Very fast boolean difference between two meshes

Let's say I have a static object and a movable object which can be moved and rotated. What is the best way to very quickly calculate the boolean difference of those two meshes?
Precision here is not so important, speed is though, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is to be preferred. Should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work oriented at varying degrees (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance then the generic CSG approach may be too slow for you (but still depending on meshes and target hardware).
You may try to find some specialized algorithm, coded for your specific meshes. Let's say you have two cubes - one is a 'wall' and second is a 'window' - then it's much easier/faster to compute resulting mesh with your custom code, than full CSG. Unfortunately you don't say anything about your meshes.
You may also try to make it a 2D problem, using some simplified meshes to compute a result that will "look as expected".
If the movement of your meshes is somehow limited you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use some space partitioning like BSP or Octrees to divide your meshes during precomputing stage. This way you could split one big problem into many smaller ones that may be faster to compute or at least to make the solution multi-threaded.
You've mentioned voxels: if you're fine with their look and limits, you may voxelize both meshes and just read and mix two voxel values instead of one. Then you would triangulate the result using an algorithm like Marching Cubes.
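For illustration, the voxel mix itself is only a per-cell boolean once both objects are voxelized; here is a sketch assuming flat occupancy grids of identical resolution (the voxelization and the re-meshing are the expensive parts):

// panel and tool are flat occupancy grids (1 = solid, 0 = empty) of identical
// dimensions nx * ny * nz; the tool grid is re-voxelized or shifted whenever the tool moves.
function subtractVoxels(panel, tool) {
  for (var i = 0; i < panel.length; i++) {
    if (tool[i]) panel[i] = 0;       // boolean difference: panel AND NOT tool
  }
}
// Afterwards run a Marching Cubes implementation (e.g. the one in the three.js
// examples) over the panel grid to rebuild the render mesh.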
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
With your description it looks like you're modeling some bas-relief, so you may use Relief Mapping to fake this effect. It's based on a height map stored as a texture, so you'd just need to update a few pixels of the texture and render a plane. It should be quite fast compared to other approaches; the downside is that it's based on a height map, so you can't get shapes that a T-slot or dovetail cutter would create.
If you want the real geometry then I'd start from a simple plane as your panel (you don't need full 3D yet, just a front surface) and divide it with a 2D grid. The grid element should be slightly bigger than the drill size and every element is a separate mesh. In the frame update you'd cut one, or at most 4, elements that are touched by the drill. Thanks to this grid all your cutting operations will run on very simple meshes, so they may work at your intended speed. You can also cut all current elements in separate threads. After the cutting is done you upload to the GPU only the currently modified elements, so you may end up with a quite complex mesh overall but only small modifications per frame.
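A rough sketch of that grid bookkeeping, assuming panel dimensions of 6000x2000 mm, a 3 mm tool, a panelMaterial of your own, and a hypothetical cutCell() that does the actual cutting (CSG or direct vertex editing) on one small cell mesh:

var CELL = 5;                          // mm, slightly bigger than the 3x3 mm tool
var HALF_TOOL = 1.5;                   // mm, half the tool width
var cols = Math.ceil(6000 / CELL), rows = Math.ceil(2000 / CELL);
var cells = [];                        // cells[row * cols + col] -> small mesh for that patch

// Build the grid of small front-surface patches once.
for (var r = 0; r < rows; r++) {
  for (var c = 0; c < cols; c++) {
    var mesh = new THREE.Mesh(new THREE.PlaneBufferGeometry(CELL, CELL), panelMaterial);
    mesh.position.set(c * CELL + CELL / 2, r * CELL + CELL / 2, 0);
    cells.push(mesh);
    scene.add(mesh);
  }
}

// Per frame: only the (at most four) cells under the drill footprint are touched.
function cutAt(drillX, drillY, drillShape) {
  var c0 = Math.floor((drillX - HALF_TOOL) / CELL), c1 = Math.floor((drillX + HALF_TOOL) / CELL);
  var r0 = Math.floor((drillY - HALF_TOOL) / CELL), r1 = Math.floor((drillY + HALF_TOOL) / CELL);
  for (var r = r0; r <= r1; r++) {
    for (var c = c0; c <= c1; c++) {
      if (r < 0 || c < 0 || r >= rows || c >= cols) continue;
      cutCell(cells[r * cols + c], drillShape);   // hypothetical: cut one small mesh,
                                                  // then re-upload only that cell's geometry
    }
  }
}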

Grab objects near camera

I was looking at the http://threejs.org/examples/webgl_nearestneighbour.html and had a few questions come up. I see that they use the kdtree to store all the particle positions and then have a function to determine the nearest particle and color it. Let's say that you have a canvas with around 100 buffered geometries with around 72000 vertices / geometry. The only way I know to do this is to get the positions of the buffered geometries and then put them into the kdtree to determine the nearest vertex and go from there. This sounds very expensive.
What other way is there to return the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It has the ability to see how far away an object is and render different levels depending on the settings you provide.
Define "expensive". The point of the kdtree is to find nearest neighbour elements quickly, its primary focus is not on saving memory (Although it does everything inplace on a typed array, it's quit cheap in terms of memory already). If you need to save memory you maybe have to find another way.
Yet a typed array with length 21'600'000 is indeed a bit long. I highly doubt you have to have every single vertex in there. Why not have a position reference point for every geometry part? And, if you need to get the vertices associated with that point, a dictionary. Then you can call myGeometryVertices[ geometryReferencePoint ].
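A sketch of that layout, reusing the Kdtree helper from the nearestneighbour example (TypedArrayUtils.js); `geometries` is assumed to be your array of ~100 BufferGeometry parts, and the bounding-sphere center stands in as the reference point:

// One reference point per geometry instead of every vertex:
// ~100 entries in the tree instead of 7.2 million.
var refPositions = new Float32Array(geometries.length * 3);
var myGeometryVertices = {};                    // dictionary: ref-point index -> full vertex data

geometries.forEach(function (geometry, i) {
  geometry.computeBoundingSphere();
  var center = geometry.boundingSphere.center;  // reference point for this part
  refPositions.set([center.x, center.y, center.z], i * 3);
  myGeometryVertices[i] = geometry.attributes.position.array;
});

// Squared-distance function, as in the example.
var distance = function (a, b) {
  var dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return dx * dx + dy * dy + dz * dz;
};
var kdtree = new THREE.TypedArrayUtils.Kdtree(refPositions, distance, 3);

// Per frame: which geometry parts are closest to the camera?
var p = camera.position;
var hits = kdtree.nearest([p.x, p.y, p.z], 5, 1000); // 5 closest reference points within a max distance;
// see the example for how to read the returned [node, distance] pairs,
// then look up the full data via myGeometryVertices[index].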
THREE.LOD works with raycasts. If you have a (few) hundred objects that might work well; if you have hundreds of thousands or even millions of positions you'll run into trouble. Also, if you're not using meshes you can't raycast e.g. a particle.
Really, just build your own logic with them. Neither of the two provides a prebuilt, perfect-for-all-cases solution.
