Find which Object3Ds the camera can see in Three.js - Raycast from the camera to each object - three.js

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. I can then find which rays intersect the model and remove the points corresponding to those rays from the list of all points, leaving me with a list of the points the camera can see.
So far so good: the ray creation and intersection code is placed in the render loop (as it has to run whenever the camera moves), and therefore it's horrendously slow (obviously).
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
// create a ray from the camera position to each point in the grid
for (var point in gridPoints.geometry.vertices) {
    var direction = gridPoints.geometry.vertices[point].clone();
    var vector = new THREE.Vector3().subVectors(direction, startPoint);
    var ray = new THREE.Raycaster(startPoint, vector.normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the ray hits the model, so this point is hidden from the camera
        var index = gridPointsVisible.indexOf(gridPoints.geometry.vertices[point]);
        if (index !== -1) gridPointsVisible.splice(index, 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have two questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!

Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
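As a rough, non-authoritative sketch of that idea in Three.js (the names pickingScene, pickingTarget, findVisiblePoints, and the index-to-color mapping are assumptions, not part of the question; pickingScene is assumed to contain a copy of the points drawn with flat per-point ID colors):
// Sketch: render the points into an off-screen target using flat ID colors,
// then read the pixels back to see which IDs survived the depth test.
var pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low resolution is fine

function findVisiblePoints(renderer, pickingScene, camera, pointCount) {
    // Newer three.js versions; older ones use renderer.render(pickingScene, camera, pickingTarget) instead
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);
    renderer.setRenderTarget(null);

    var buffer = new Uint8Array(pickingTarget.width * pickingTarget.height * 4);
    renderer.readRenderTargetPixels(pickingTarget, 0, 0, pickingTarget.width, pickingTarget.height, buffer);

    var visible = new Set();
    for (var i = 0; i < buffer.length; i += 4) {
        // Decode the RGB color back into a point index (0 is reserved for "no point")
        var id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];
        if (id > 0 && id <= pointCount) visible.add(id - 1);
    }
    return visible; // indices of the points the camera can see
}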

Related

In SceneKit, how can one tile a texture on differently sized objects while keeping draw calls minimal?

To improve performance/fps in a SceneKit scene, I would like to minimise the number of draw calls. The scene contains a procedurally generated city, for which I generate houses of random heights (each an SCNBox) and tile them with a single, identical repeating facade texture, like so:
The proper way to apply the textures appears to be as follows:
let material = SCNMaterial()
material.diffuse.contents = image
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
buildingGeometry.firstMaterial = material
This works. But as written, it stretches the material to fit the size of the faces of the box. To resize the textures to maintain aspect ratio, one needs to add the following code:
material.diffuse.contentsTransform = SCNMatrix4MakeScale(sx, sy, sz)
where sx, sy and sz are appropriate scale factors derived from the size of the faces in the geometry. This also works.
But the latter approach implies that every node needs a custom material, which in turn means that I cannot re-use a single material for all of the houses, which in turn means that every single node requires an extra draw call.
Is there a way to use a single texture material to tile all of the houses (without stretching the texture)?
Using a surface shader modifier (SCNShaderModifierEntryPointSurface) you could modify _surface.diffuseTexcoord based on scn_node.boundingBox.
Since the bounding box is dynamically fed to the shader all the objects will be using the same shader and will benefit from instancing (reducing the number of draw calls).
The SCNShadable.h header file has more details on that.

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and "far clipping plane", which effectively denote the minimum and maximum distance that they'll render things in. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
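A minimal three.js sketch of option 2, assuming separate starScene and mainScene objects (those names are placeholders, not from the answer above):
// Sketch: draw the star layer first, then clear only the depth buffer,
// so everything in the main scene is always drawn in front of the stars.
renderer.autoClear = false;

function render() {
    renderer.clear();                   // clear color and depth
    renderer.render(starScene, camera);
    renderer.clearDepth();              // the stars can no longer occlude anything
    renderer.render(mainScene, camera);
}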
Even if you used option 1, you would still synchronize the position of the camera and skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of other things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling so that the skybox is always considered farther away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
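For example, a rough sketch of keeping the stardome centred on the camera every frame (skyDome, scene and renderer are placeholder names, not from the answer):
// Sketch: the dome follows the camera's position (but not its rotation), so the
// viewer can never move closer to it; depthWrite is off so it never hides anything.
skyDome.material.depthWrite = false;
skyDome.renderOrder = -1; // draw it before everything else

function animate() {
    requestAnimationFrame(animate);
    skyDome.position.copy(camera.position); // position only; rotation stays untouched
    renderer.render(scene, camera);
}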

Three.js: How to make a ray (or rays) from the camera to all objects in the renderer to check faceIndex

I have a project for children, http://kinosura.kiev.ua/sova/, and I need to check the faceIndex of all the cubes on screen.
At the moment I use the intersections array from the mouse, but that only works when the user points at a cube.
How can I cast a ray (or rays) from the camera to every object to check the faceIndex?
I tried to make four rays to the cubes, but if I set cube.position as the origin, like this:
raycaster.setFromCamera( cube1.position, camera )
I get an empty array of intersections.
I also tried to set a static 2D vector as the origin (taking the coordinates from the mouse), but the renderer size is relative and those coordinates change all the time... it doesn't work.
Thanks for any answers.
I suggest that you try another approach. It appears that your cubes do not cover one another, relative to the camera view, so use the surface normals and compare them to the view direction to determine whether they are facing the camera or facing away from it, with a simple one-per-polygon dot product.
When you are creating your geometry, before adding it to a THREE.Mesh, call .computeFaceNormals() on it.
Instead of raycasting, iterate through all faces, grab the surface normal of each face, transform it relative to the view (inverse transpose of the object's matrix), then dot(). It might sound complicated at first, but it's actually just a couple of steps and much faster than doing a lot of raycasts (which would probably include this anyway!).
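A minimal sketch of that, assuming an old-style THREE.Geometry mesh (cube and camera are placeholders for your own objects):
// Sketch: for each face, compare the world-space face normal with the
// direction from the cube towards the camera; a positive dot product
// means the face (and its faceIndex) is pointing at the viewer.
var normalMatrix = new THREE.Matrix3().getNormalMatrix(cube.matrixWorld);
var worldNormal = new THREE.Vector3();
var toCamera = new THREE.Vector3().subVectors(camera.position, cube.position).normalize();

cube.geometry.faces.forEach(function (face, faceIndex) {
    worldNormal.copy(face.normal).applyMatrix3(normalMatrix).normalize();
    var facingCamera = worldNormal.dot(toCamera) > 0;
    // facingCamera tells you whether this faceIndex points towards the camera
});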

Fixed size for certain objects

I have a 3D scene which includes interface objects rendered with the same perspective camera. I want those objects to keep a fixed on-screen size, whatever the distance from their center to the camera.
My current solution involves adding/removing those objects to/from an array, calculating the bounding sphere for each of them every frame, and rescaling them based on the camera's distance to the center of each sphere.
But that doesn't feel right, and it will cost too many resources once the number of those objects gets big enough. Is there any efficient way to solve this, e.g. by setting a fixed size for the objects? I don't really want to use a second camera, because that would display the interface objects in a weird way, as if they don't really belong there.
If I understand you correctly, you want an object to stay the same size as you move the camera closer/further. One method is to first "attach" the item to the camera, then offset it. This would be executed every frame.
fixedSizeObject.position.copy( this.camera.position );
fixedSizeObject.rotation.copy( this.camera.rotation );
fixedSizeObject.updateMatrix();
fixedSizeObject.translateZ( -30 ); //where -30 is the distance you'd like from the camera
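For example, you could run it once per frame in a standard animate loop (renderer and scene here are assumed to be your usual render setup):
// Re-apply the camera-relative placement every frame, so the object keeps a
// constant distance from the camera and therefore a constant apparent size.
function animate() {
    requestAnimationFrame(animate);

    fixedSizeObject.position.copy(camera.position);
    fixedSizeObject.rotation.copy(camera.rotation);
    fixedSizeObject.updateMatrix();
    fixedSizeObject.translateZ(-30);

    renderer.render(scene, camera);
}
animate();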

OpenGL performance issue

I'm writing a 2D RPG using LWJGL and Java 1.6. So far I have a 'World' class, which holds an ArrayList of Tile (an interface with basic code for every Tile), and a GrassTile class, which makes use of a spritesheet.
When using immediate mode to draw a grid of 64x64 GrassTiles I get around 100 FPS. I do this by calling the .draw() method of each tile inside the ArrayList, which binds the spritesheet and draws a certain area of it (with glTexCoord2f()). I heard it's better to use VBOs, so I found a basic tutorial and tried to implement them in the .draw() method.
Now there are two issues: I don't know how to bind only a certain area of a texture to a VBO (the whole texture would simply be glBindTexture()), so I tried using them with colours only.
That brings me to the second issue: I only got +20 FPS (120 total), which is not really what I expected, so I suppose I'm doing something wrong. Also, I am making a single VBO for each GrassTile while iterating inside the ArrayList. I think that's kind of wrong, because I can simply throw all the tiles inside a single FloatBuffer.
So, how can I draw similar geometry in a better way and how can I bind only a certain area of a Texture to a VBO?
So, how can I draw similar geometry in a better way...
Like @Ian Mallett described, put all your vertex data into a single vertex buffer object. This makes it possible to render your map in one call. If your map gets 1000 times bigger you may want to implement a camera solution which only draws the vertices that are being shown on the screen, but that is a question that will arise later if you're planning on a significantly bigger map.
...and how can I bind only a certain area of a Texture to a VBO?
You can only bind a whole texture. You have to point to a certain area of the texture that you want to be mapped.
Every texture coordinate relates to a specific vertex. Every tile relates to four vertices. Common tiles in your game will share the same texture, hence the 'tile map' name. Make use of that. Place all your tile textures in a texture sheet and bind that texture sheet.
For every new 'tile' you create, check whether the area is meant to be air, grass or ground and then point to the part of the texture that corresponds to what you intend.
Let's say your texture is 100x100 pixels and the ground area is 15x15 pixels, starting from the lower-left corner. Following the logic above, the example code below points a ground tile's texture coordinates at that area:
// The vertexData array simply contains information
// about a tile's four vertices (or six
// vertices if you draw using GL_TRIANGLES).
for (int i = 0; i < vertexData.length; i++) {
    mVertexBuffer.put(i, vertexData[i]);
}

if (tileIsGround) {
    // Texture coordinates for the 15x15 ground area in the
    // lower-left corner of the 100x100 texture sheet.
    mTextureCoordBuffer.put(0, 0.00f);
    mTextureCoordBuffer.put(1, 0.00f);
    mTextureCoordBuffer.put(2, 0.15f);
    mTextureCoordBuffer.put(3, 0.00f);
    mTextureCoordBuffer.put(4, 0.15f);
    mTextureCoordBuffer.put(5, 0.15f);
    mTextureCoordBuffer.put(6, 0.00f);
    mTextureCoordBuffer.put(7, 0.15f);
} else { /* Other texture coordinates. */ }
You actually wrote the solution. The only difference is that you should upload the texture coordinate data to the GPU.
This is the key:
I am making a single VBO for each GrassTile while iterating inside the ArrayList.
Don't do this. You make a VBO once, and then you update it if necessary. Creating textures, VBOs, and shaders is the slowest possible use of OpenGL; no wonder you're getting problematic framerates when you're doing it O(n) times, every frame.
I think that's kind of wrong, because I can['t?] simply throw all the tiles inside a single FloatBuffer.
You only gain performance when you batch draw calls. This means that when you draw your tiles, you should draw all of them at once with one VBO.
//Initialize
Make a single VBO (or two: one for vertex, one for texture
coordinates, whatever--the key point is O(1) VBOs).
Fill your VBO with ALL of your tiles' data.
//Main loop
while (true) {
Draw the VBO with a single draw call,
thus drawing all your tiles all at once.
}
