I'm looking for a lightweight way to find nearby objects in three.js.
I have a bunch of cubes, and I want each cube to be able to determine the nearest cubes to it on demand.
Is there a better way to do this than just iterating through all objects and calculating the distance between them? I know the renderer does something similar to what I want when it sorts to find the order to render with, but I'm not getting too far just trying to read the three.js code.
The renderer is doing the same thing you're describing, but in your case you may want to use a KDTree.
Have a look at this example:
http://threejs.org/examples/webgl_nearestneighbour.html
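In case it helps, here is roughly how that example wires things up: a sketch assuming you include TypedArrayUtils.js from the three.js examples, with cubes standing in for your array of cube meshes.

    // Flatten the cube positions into a typed array for the kd-tree.
    var positions = new Float32Array( cubes.length * 3 );
    cubes.forEach( function ( cube, i ) {
        cube.position.toArray( positions, i * 3 );
    } );

    // Squared euclidean distance, as in the example (avoids the sqrt).
    var distanceFunction = function ( a, b ) {
        return Math.pow( a[ 0 ] - b[ 0 ], 2 ) +
               Math.pow( a[ 1 ] - b[ 1 ], 2 ) +
               Math.pow( a[ 2 ] - b[ 2 ], 2 );
    };

    var kdtree = new THREE.TypedArrayUtils.Kdtree( positions, distanceFunction, 3 );

    // On demand: the 10 nearest neighbours within a squared radius of 100.
    var nearest = kdtree.nearest(
        [ cube.position.x, cube.position.y, cube.position.z ], 10, 100 );
    // Each entry is [ node, squaredDistance ]; node.obj holds the xyz triple.

Note that because the metric is squared, the radius argument has to be squared too.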
I have been working with raycasting in three.js for a couple of days and have understood how it works.
At the moment I am able to intersect objects using raycasting, but I have run into one problem.
I am trying to find out whether any obstacle intersects a 5 meter range around my source, and I need to check those 5 meters in all directions (like a boundary of 5 meters around the object).
Right now it only checks in one direction.
Can someone help me achieve this?
Raycasting is neither a precise nor a performant approach for this kind of collision detection, since you would have to cast an infinite number of rays to produce a 100% reliable result.
You should work with bounding volumes instead and perform intersection tests between those. three.js provides a bounding sphere (THREE.Sphere) and an AABB (THREE.Box3) implementation in the core, which are sufficiently tight bounding volumes for many use cases. You can also give the newer OBB class a try, which lives in the examples. All three classes provide methods for intersection tests against the other types.
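For instance, the 5 meter check becomes a single sphere test per obstacle. A minimal sketch, assuming source and obstacles are your own meshes:

    // One world-space sphere around the source, radius 5 (the 5 meters).
    var range = new THREE.Sphere( source.position.clone(), 5 );

    var blocking = obstacles.filter( function ( obstacle ) {
        if ( obstacle.geometry.boundingSphere === null ) {
            obstacle.geometry.computeBoundingSphere(); // computed once, then cached
        }
        // Bring the cached local-space sphere into world space.
        var sphere = obstacle.geometry.boundingSphere.clone()
            .applyMatrix4( obstacle.matrixWorld );
        return range.intersectsSphere( sphere );
    } );
    // blocking now holds every obstacle within 5 meters, in all directions.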
I have an app that uses a Three.js ParticleSystem to render on the order of 50,000 points. I have spent a lot of time searching for efficient ways to do picking (ray-casting) so as to be able to interact with individual points but have not found a good solution. I am considering changing to just using an array of Particles instead of ParticleSystems.
My questions are:
Am I missing something; is there a good way to do picking with the ParticleSystem?
Will I suffer a performance hit using an array of Particles instead of the ParticleSystem, especially since I am taking advantage of the ability to pass several arrays of attributes into the shader?
Thanks for any insight anyone can provide!
You're checking 50,000 points every time. That's a bit too much.
You may want to split those points into several ParticleSystems, like 10 objects with 5,000 particles each.
Ideally each object would cover a different "quadrant", so the Raycaster can check each boundingSphere first and skip all of an object's points when the ray doesn't intersect it.
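A sketch of that chunking, written against the current API (ParticleSystem has since been renamed THREE.Points); allPositions, totalPoints and material are assumed to exist already:

    var CHUNK = 5000;
    var chunks = [];

    for ( var offset = 0; offset < totalPoints; offset += CHUNK ) {
        var count = Math.min( CHUNK, totalPoints - offset );
        var geometry = new THREE.BufferGeometry();
        // allPositions: one big Float32Array of xyz triples, ideally
        // pre-sorted so each chunk covers its own region of space.
        geometry.setAttribute( 'position', new THREE.BufferAttribute(
            allPositions.subarray( offset * 3, ( offset + count ) * 3 ), 3 ) );
        geometry.computeBoundingSphere(); // lets the Raycaster reject whole chunks
        chunks.push( new THREE.Points( geometry, material ) );
    }

    var raycaster = new THREE.Raycaster();
    raycaster.params.Points.threshold = 0.5; // picking tolerance in world units
    var hits = raycaster.intersectObjects( chunks );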
I was looking at http://threejs.org/examples/webgl_nearestneighbour.html and a few questions came up. I see that they use the kdtree to store all the particle positions and then have a function to determine the nearest particle and color it. Let's say you have a canvas with around 100 buffered geometries at around 72,000 vertices / geometry. The only way I know to do this is to take the positions of the buffered geometries, put them into the kdtree to determine the nearest vertex, and go from there. This sounds very expensive.
What other way is there to return the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It can tell how far away an object is and render different levels of detail depending on the settings you provide.
Define "expensive". The point of the kdtree is to find nearest neighbour elements quickly, its primary focus is not on saving memory (Although it does everything inplace on a typed array, it's quit cheap in terms of memory already). If you need to save memory you maybe have to find another way.
Yet a typed array with length 21'600'000 is indeed a bit long. I highly doubt you have to have every single vertex in there. Why not have a position reference point for every geometry part? And, if you need to get the vertices associated to that point, a dictionary. Then you can call myGeometryVertices[ geometryReferencePoint ].
Three.LOD works with raycasts. If you have a (few) hundred objects that might work well. If you have hundred thousands or even millions of positions you'll get some troubles. Also if you're not using meshes; you can't raytrace e.g. a particle.
Really just build your own logic with them. None of those two provide a prebuilt perfect-for-all-cases solution.
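A sketch of that reference-point-plus-dictionary idea; with only around 100 geometries a brute-force scan already does the job, and the same reference points could be fed into the example's Kdtree if the count grows:

    // One reference point per geometry instead of every vertex.
    var refs = geometries.map( function ( geometry ) {
        geometry.computeBoundingSphere();
        return {
            center: geometry.boundingSphere.center, // the reference point
            vertices: geometry.attributes.position  // the "dictionary" payload
        };
    } );

    // Find the geometry whose reference point is closest to a given point.
    function nearestGeometry( point ) {
        var best = null, bestDist = Infinity;
        refs.forEach( function ( ref ) {
            var d = ref.center.distanceToSquared( point );
            if ( d < bestDist ) { bestDist = d; best = ref; }
        } );
        return best; // best.vertices plays the role of myGeometryVertices[ ... ]
    }

    var near = nearestGeometry( camera.position );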
Three.JS noob here trying to do 2d visualization.
I used d3.js to make an interactive visualization involving thousands of nodes (rectangle shaped). Needless to say, there were performance issues during animation, because the browser has to create an SVG DOM element for every one of those 10 thousand nodes.
I wish to recreate the same visualization using WebGl in order to leverage hardware acceleration.
Now ThreeJS is a library which I have chosen because of its popularity (btw, I did look at PixiJS and its API didn't appeal to me). I want to know the best approach for doing 2d graphics in three.js.
I tried creating one PlaneGeometry for every rectangle, but it seems that 10 thousand plane geometries are not the way to go (animation becomes super duper slow).
I am probably missing something. I just need to know what is the best primitive way to create 2d rectangles and still identify them uniquely so that I can interact with them once drawn.
Thanks for any help.
EDIT: Would you guys suggest to use another library by any chance?
I think you're on the right track with looking at WebGL, but depending on what you're doing in your visualization you might need to get closer to the metal than "out of the box" threejs.
I recommend taking a look at GLSL and at how you can implement your visualization using vertex and fragment shaders. You can still use threejs for a lot of the WebGL plumbing.
The reason you'll probably need to get directly into GLSL shader work is because you want to take most of the poly manipulation logic out of javascript, at least as much as is possible. Any time you ask js to do a tight loop over tens of thousands of polys to update position, etc... you are going to struggle with CPU usage.
It is going to be much more performant to have js pass in data parameters to your shaders and let the vertex manipulation happen there.
Take a look here: http://www.html5rocks.com/en/tutorials/webgl/shaders/ for a nice shader tutorial.
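To make that concrete, here is a hedged sketch of the pattern: JavaScript touches a single uniform per frame while the vertex shader repositions all the rectangles. The geometry setup is assumed: one merged BufferGeometry holding all 10,000 rectangles, with a per-vertex offset attribute (one vec2 shared by a rectangle's four vertices), plus the usual renderer, scene and camera.

    var material = new THREE.ShaderMaterial( {
        uniforms: { time: { value: 0 } },
        vertexShader: [
            'attribute vec2 offset;  // per-rectangle position, filled once from JS',
            'uniform float time;',
            'void main() {',
            '    vec3 p = position + vec3( offset, 0.0 );',
            '    p.y += sin( time + offset.x ); // the animation runs on the GPU',
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4( p, 1.0 );',
            '}'
        ].join( '\n' ),
        fragmentShader: 'void main() { gl_FragColor = vec4( 1.0, 0.5, 0.0, 1.0 ); }'
    } );

    // Per frame, js updates one number; the GPU moves all 40,000 vertices.
    function animate( t ) {
        material.uniforms.time.value = t / 1000;
        renderer.render( scene, camera );
        requestAnimationFrame( animate );
    }
    requestAnimationFrame( animate );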
I am trying to make a PointCloud mapping a user with multiple Kinects in Processing. I capture the user's front and back with 2 Kinects on opposite sides and generate both PointClouds.
The trouble is that the PointClouds' X/Y/Z are not synchronized; it just puts the two of them on screen, and it surely looks messy. Is there a way to calculate or make a comparison between them, so I can translate the second PointCloud to "join" the first? I could translate the position manually, but if I move the sensors it will go off again.
Supposing all the Kinects are stationary, I guess you would have to go in this order:

1. decide which Kinect to use as the global reference,
2. get the parameters of a 3D transformation for each of the other Kinects - I'd try using PMatrix3D and applyMatrix(), although it may be slow,
3. apply the transformations to each of the other Kinects' point clouds and draw the clouds.
I don't (yet) know how to get the transformation parameters for a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, maybe by displaying the point clouds from each pair of Kinects and registering the points you know are the same in both point clouds. After getting enough of them, construct a PMatrix3D and apply it inside push/popMatrix.
This is the approach used by this guy: http://www.youtube.com/watch?v=ujUNj1RDL4I
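Once you have the transform, applying it is a few lines. Expressed with three.js's Matrix4 for illustration (the PMatrix3D math in Processing is identical); cloud2 is a placeholder for the second Kinect's points, and the rotation/translation values are made up:

    // Example transform for a second Kinect facing the first from the opposite
    // side: rotate 180 degrees around Y, then translate along Z. The real
    // values come from your matched reference points.
    var kinect2ToKinect1 = new THREE.Matrix4()
        .makeRotationY( Math.PI )
        .setPosition( new THREE.Vector3( 0, 0, 4.0 ) ); // e.g. sensors 4 m apart (placeholder)

    cloud2.forEach( function ( point ) {      // cloud2: array of THREE.Vector3
        point.applyMatrix4( kinect2ToKinect1 ); // now in Kinect 1's frame
    } );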
An alternative approach would be to use an Iterative Closest Point (ICP) algorithm and construct the 3D transform from its output. I'd really like an ICP or PCL library for Processing, if anyone knows a good one.