I've tried improving rendering time in my project by creating meshes and merging them into a single larger geometry, and adding just that one geometry to the scene. I thought I would still be able to manage picking by keeping an array of the original meshes and passing those to the raycaster. I used the following code:
// Build a pick ray from the mouse position
var vector = new THREE.Vector3( ( loc_x / window.innerWidth ) * 2 - 1, - ( loc_y / window.innerHeight ) * 2 + 1, 0.5 );
projector.unprojectVector( vector, camera );
var raycaster = new THREE.Raycaster( camera.position, vector.sub( camera.position ).normalize() );

// Gather the original meshes from every active region
var objects = [];
var i = active_regions.length;
while ( i-- ) {
    objects = objects.concat( active_regions[ i ].mesh_entities );
}

var intersects = raycaster.intersectObjects( objects );
if ( intersects.length > 0 ) {
    console.log( "Intersection: " + intersects );
}
So in the above code, active_regions contains the original individual meshes, and I create an array on the fly to specify which objects I want to select from. Unfortunately intersects comes up empty.
If I modify my project slightly so that I have all those mesh_entities added to the scene individually, then the above code works and I can successfully select objects. Unfortunately, the whole scene then renders slowly.
What's a good way (or some good ways) for me to successfully check for intersection with the ray, without slowing down my rendering?
Thanks!
You need to update the matrices manually for objects that are not in the rendered scene, since that normally happens as part of the render process. So if you are using your ghost scene, you don't need to render it; just update the matrices before doing the intersection:
scene_ghost.updateMatrixWorld(true);
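For example (a minimal sketch; pick_meshes is a stand-in name for whatever array of meshes you pass to the raycaster):
// One call on the ghost scene refreshes every descendant's matrixWorld:
scene_ghost.updateMatrixWorld( true );

// If the meshes are loose objects with no parent scene, update each one instead:
pick_meshes.forEach( function ( mesh ) {
    mesh.updateMatrixWorld( true );
} );

var intersects = raycaster.intersectObjects( pick_meshes );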
I solved this by having a ghost scene. Essentially, I added all objects to the ghost scene as their individual meshes, and then when I use the raycaster it works.
However, I had to use functions along these lines:
function flip_render_ghost(yes) {
    if (yes == true) {
        scene_ghost.add(camera);
        render_ghost = true;
    } else {
        scene.add(camera);
        render_ghost = false;
    }
    render();
}

function render() {
    if (render_ghost == true) {
        renderer.render( scene_ghost, camera );
    } else {
        renderer.render( scene, camera );
    }
}
Whenever I am about to check for collisions, I flip to rendering the ghost scene, check for hits, then flip back to normal rendering.
Edit: I have since discovered that objects cannot belong to multiple scenes (though geometries can be shared). So what I have done is create simple meshes for the picking scene. This requires more memory, but gives the option of using a simpler mesh for selection, which makes picking faster. Also, it worked for me to send the children of the ghost scene itself to the raycaster. You may need to, like me, add a property to each object in the ghost scene that references the main object you are trying to pick.
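As a rough sketch of that idea (the proxy geometry, the userData.pick_target property name, and the helper function are my own assumptions, not part of the original project):
function add_pickable( real_mesh ) {
    // Simplified stand-in shape for picking; a real project would use
    // something that roughly matches the visible mesh.
    var proxyGeometry = new THREE.BoxGeometry( 1, 1, 1 );
    var proxy = new THREE.Mesh( proxyGeometry, new THREE.MeshBasicMaterial() );
    proxy.position.copy( real_mesh.position );
    proxy.userData.pick_target = real_mesh; // back-reference to the main object
    scene_ghost.add( proxy );
}

// When picking: refresh matrices, raycast against the ghost scene's children,
// then map the hit proxy back to the real object.
scene_ghost.updateMatrixWorld( true );
var hits = raycaster.intersectObjects( scene_ghost.children, true );
if ( hits.length > 0 ) {
    var picked = hits[ 0 ].object.userData.pick_target;
}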
I am doing something similar and have verified that you need to render the scene to do proper raycasting. It is easy to optimize this, however, by rendering both scenes and letting one draw over the other. You should be able to change your code to the following, with the caveat that the second render call will draw over the first:
function render() {
    renderer.render( scene_ghost, camera );
    renderer.render( scene, camera );
}
I've imported a 3D model into a THREE.Js project I've been working on, and I want to add several copies of them to the scene. Here's the code I've been using to duplicate it:
let ball = new THREE.Mesh();

loader.load( './ball.gltf', function ( gltf ) {

    gltf.scene.traverse( function ( model ) { // for gltf shadows!
        if ( model.isMesh ) {
            model.castShadow = true;
            model.material = sphereMaterial;
        }
    } );

    ball = gltf.scene;
    scene.add( ball );

}, undefined, function ( error ) {
    console.error( error );
} );
const ball2 = ball.clone()
ball2.position.set(0.5, 0.5, 0.5)
scene.add(ball2)
However, only one of the "balls" shows up in the scene, the one from the loader.load() call. Does anyone happen to know what I should do differently to successfully duplicate the model?
Your onLoad() handler (function ( gltf ) {...}) is asynchronous, and the code that clones ball (const ball2 = ball.clone()) is executing before ball is initialized with the GLTF data. So at the time ball.clone() executes, ball is simply the empty Mesh you created before loading the GLTF, and that empty Mesh is what gets cloned.
I suspect you were, at some point, getting a console error relating to reading clone on undefined, and that's why you added the line to initialize ball to an empty Mesh, which is unnecessary.
There are a few ways to handle this, but the simplest is to move the code that clones ball into the onLoad handler (i.e., after scene.add( ball )).
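A minimal sketch of that rearrangement, reusing the loader, sphereMaterial, and path from the question:
loader.load( './ball.gltf', function ( gltf ) {

    gltf.scene.traverse( function ( model ) { // for gltf shadows!
        if ( model.isMesh ) {
            model.castShadow = true;
            model.material = sphereMaterial;
        }
    } );

    const ball = gltf.scene;
    scene.add( ball );

    // Clone only after the GLTF has actually loaded
    const ball2 = ball.clone();
    ball2.position.set( 0.5, 0.5, 0.5 );
    scene.add( ball2 );

}, undefined, function ( error ) {
    console.error( error );
} );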
Proof of concept here.
I have one Renderer object and two scene objects. One scene contains the objects that should not be processed by the UnrealBloom post-processing pass, and the other scene contains all “glowing” objects.
Now I thought I could do:
const THREED_Composer = new THREE.EffectComposer( THREED_Renderer );
const THREED_RenderPass = new THREE.RenderPass( THREED_Scene, THREED_Camera );
const THREED_RenderPassGlow = new THREE.RenderPass( THREED_SceneGlow, THREED_Camera );
const THREED_BloomPass = new THREE.UnrealBloomPass(
    new THREE.Vector2( window.innerWidth, window.innerHeight ), 0.5, 0.5, 0.4 );
// THREED_BloomPass.renderToScreen = false; ???

THREED_Composer.addPass( THREED_RenderPassGlow );
THREED_Composer.addPass( THREED_BloomPass );
THREED_Composer.addPass( THREED_RenderPass );
The intention was to first render the glowing objects and then render the non-glowing objects over them. I want the non-glowing objects to be able to obscure the glowing objects.
My animate function looks like this:
function animate()
{
    if ( GLOBAL_FocusLost )
        return;

    requestAnimationFrame( animate );
    update();
    THREED_Composer.render();
}
Ultimate goal:
I want to have a glowing monolith in the middle of a room that can be obscured by all other objects.
I tried to read my way through the documentation, but I don't think I understand it well enough.
Cheers. Any help is very much appreciated!
I have a model, a background sky and a ground surface. Texturing the surface results in no surface.
I've tried the basic approach and come to the conclusion that the problem is probably that the scene is being rendered before the texture has finished loading. Having searched and found various possible solutions, I have tried several of them without really understanding how they are supposed to work. None of them has worked. One problem is that this is an old issue, and most of the suggestions involve outdated versions of the three.js library.
// Ground
// create a textured Ground based on an answer in Stackoverflow.
var loader = new THREE.TextureLoader();
loader.load( 'Textures/Ground128.jpg',
    function ( texture ) {
        var groundGeometry = new THREE.PlaneBufferGeometry( 2000, 2000, 100, 100 );
        const groundMaterial = new THREE.MeshLambertMaterial( { map: texture } );
        var ground = new THREE.Mesh( groundGeometry, groundMaterial );
        ground.receiveShadow = true; // Illumination addition
        ground.rotation.x = -0.5 * Math.PI; // rotate into the horizontal.
        scene.add( ground );
    }
);
// This variation does not work either
http://lhodges.users37.interdns.co.uk/me/downloads/Aphaia/Temple.htm
http://lhodges.users37.interdns.co.uk/me/downloads/Aphaia/Temple7jsV0.15b.htm
The first of the above is the complete page in which the ground is a plain billiard table green. The second is the page containing the above code.
There appear to be no errors (last time I tried).
By the time your texture loads and you add the ground, your scene has already rendered (and there is no other render call).
You need to call renderer.render(scene, camera); after adding the ground to the scene.
// Ground
// create a textured Ground based on an answer in Stackoverflow.
var loader = new THREE.TextureLoader();
loader.load( 'Textures/Ground128.jpg',
    function ( texture ) {
        var groundGeometry = new THREE.PlaneBufferGeometry( 2000, 2000, 100, 100 );
        const groundMaterial = new THREE.MeshLambertMaterial( { map: texture } );
        var ground = new THREE.Mesh( groundGeometry, groundMaterial );
        ground.receiveShadow = true; // Illumination addition
        ground.rotation.x = -0.5 * Math.PI; // rotate into the horizontal.
        scene.add( ground );
        renderer.render( scene, camera ); // <--- add this line
    }
);
I am trying to use THREE.Raycaster to show an HTML label when the user hovers an object. It works fine if I use THREE.Mesh, but with THREE.Sprite it looks like there is a gap that increases with the scale of the object.
The creation process is the same for both scenarios; I only change the type based on the USE_SPRITE variable.
if ( USE_SPRITE ) {
    // using SpriteMaterial / Sprite
    m = new THREE.SpriteMaterial( { color: 0xff0000 } );
    o = new THREE.Sprite( m );
} else {
    // using MeshBasicMaterial / Mesh
    m = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
    o = new THREE.Mesh( new THREE.PlaneGeometry( 1, 1, 1 ), m );
}
https://plnkr.co/edit/J0HHFMpDB5INYLSCTWHG?p=preview
I am not sure if it is a bug with THREE.Sprite or if I am doing something wrong.
Thanks in advance.
three.js r73
I would consider this a bug in three.js r.75.
Raycasting with meshes in three.js is exact. However, with sprites, it is an approximation.
Sprites always face the camera, can have different x-scale and y-scale applied (be non-square), and can be rotated (sprite.material.rotation = Math.random()).
In THREE.Sprite.prototype.raycast(), make this change:
var guessSizeSq = this.scale.x * this.scale.y / 4;
That should work much better for square sprites. The corners of the sprite will be missed, as the sprite is treated as a disk.
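If you prefer not to edit the library file directly, the same change can be applied as a runtime override. This is a sketch reconstructed from the r73/r75-era Sprite.raycast implementation, so treat the surrounding code as an approximation; the only intended change is the / 4 term:
THREE.Sprite.prototype.raycast = ( function () {
    var matrixPosition = new THREE.Vector3();
    return function raycast( raycaster, intersects ) {
        matrixPosition.setFromMatrixPosition( this.matrixWorld );
        // Distance from the pick ray to the sprite's center...
        var distanceSq = raycaster.ray.distanceSqToPoint( matrixPosition );
        // ...compared against a disk of radius scale / 2 (the suggested change)
        var guessSizeSq = this.scale.x * this.scale.y / 4;
        if ( distanceSq > guessSizeSq ) return; // ray misses the approximating disk
        intersects.push( {
            distance: Math.sqrt( distanceSq ),
            point: this.position,
            face: null,
            object: this
        } );
    };
}() );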
three.js r.75
This is my code:
var sprite = new THREE.Sprite(material);
sprite.renderDepth = 10;
The above renderDepth setting has no effect; it does not work for sprites.
How can I solve this problem?
You want one sprite to always be on top.
Since SpriteMaterial does not support a user-specified renderDepth, you have to implement a work-around.
Sprites are rendered last when using WebGLRenderer.
The easiest way to do what you want is to have two scenes and two render passes, with one sprite in the second scene like so:
renderer.autoClear = false;
scene2.add( sprite2 );
then in the render loop
renderer.render( scene, camera );
renderer.clearDepth();
renderer.render( scene2, camera );
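For reference, a consolidated sketch of this approach (scene, scene2, sprite2, renderer, and camera as in the snippets above; with autoClear disabled you clear once yourself at the start of each frame):
renderer.autoClear = false;
scene2.add( sprite2 );

function animate() {
    requestAnimationFrame( animate );
    renderer.clear();                  // clear color and depth once per frame
    renderer.render( scene, camera );  // everything else
    renderer.clearDepth();             // discard depth so the sprite is not occluded
    renderer.render( scene2, camera ); // the sprite that must stay on top
}
animate();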
three.js r.64