This is my code:
var sprite = new THREE.Sprite(material);
sprite.renderDepth = 10;
The renderDepth setting above is invalid; it does not work for sprites.
How can I solve this problem?
You want one sprite to always be on top.
Since SpriteMaterial does not support a user-specified renderDepth, you have to implement a workaround.
Sprites are rendered last when using WebGLRenderer.
The easiest way to do what you want is to have two scenes and two render passes, with one sprite in the second scene like so:
renderer.autoClear = false;
scene2.add( sprite2 );
then in the render loop
renderer.render( scene, camera );
renderer.clearDepth();
renderer.render( scene2, camera );
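Put together, a minimal sketch of the two-pass setup might look like this (scene2, sprite2, and the sprite's material are carried over from the snippet above; treat it as a sketch, not a drop-in):
var scene2 = new THREE.Scene(); // holds only the always-on-top sprite
var sprite2 = new THREE.Sprite( material );
scene2.add( sprite2 );
renderer.autoClear = false; // we clear manually in the loop
function animate() {
    requestAnimationFrame( animate );
    renderer.clear();                  // clear color and depth once
    renderer.render( scene, camera );  // pass 1: main scene
    renderer.clearDepth();             // discard depth so nothing can occlude pass 2
    renderer.render( scene2, camera ); // pass 2: overlay sprite
}
animate();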
three.js r.64
I'm having an issue when rendering a white material in ThreeJS version 87.
Here are the steps to replicate:
A white PNG image is loaded as a texture
The texture is used to create a MeshBasicMaterial (passed as the map parameter)
The MeshBasicMaterial is used along with a PlaneGeometry to create a Mesh
The Mesh is added to an empty Scene and rendered on a WebGLRenderer with alpha: true and clearColor set to white
The problem is that the rendered texture now has grey edges on parts that should be fully white.
This happens with any image with white edges. I've also tried many different configurations for the renderer and the material but to no avail.
I've made a very simple CodePen that replicates the behavior as simply as possible. Does anyone know how this problem can be solved?
CodePen:
https://codepen.io/ivan-i1/pen/pZxwZX
var renderer, width, height, scene, camera, dataUrl, threeTexture, geometry, material, mesh;
width = window.innerWidth;
height = window.innerHeight;
dataUrl = '//data url from image';
threeTexture = THREE.ImageUtils.loadTexture(dataUrl);
material = new THREE.MeshBasicMaterial({
map: threeTexture,
transparent: true,
alphaTest: 0.1
});
material.needsUpdate = true;
geometry = new THREE.PlaneGeometry(5, 5);
mesh = new THREE.Mesh(geometry, material);
mesh.position.z = -5;
scene = new THREE.Scene();
scene.add(mesh);
camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 1, 1000 );
renderer = new THREE.WebGLRenderer({
alpha: true
});
document.body.appendChild( renderer.domElement );
renderer.setSize(width, height);
renderer.setClearColor( 0xffffff, 1 );
//renderer.render(scene, camera);
function render() {
//Finally, draw to the screen
requestAnimationFrame(render);
renderer.render(scene, camera);
}
render();
Any help is truly appreciated.
ThreeJS/87
Edit:
I think my post lacked precision.
This is the original full alpha image:
It might not show because it's all white.
And this is the same image with different transparencies on its 4 quadrants:
This one too might not show because it's all white.
I got a helpful answer telling me to raise the alphaTest, but the problem is that doing so wipes the transparent parts out of the images, and I need to preserve those parts.
Here is a copy of the codepen with the updated images and showing the same (but slight) grey edges:
codepen
Sorry for not being as precise the first time; any further help is even more appreciated.
Set alphaTest to 0.9 or higher and observe the improvement.
Your star texture has gray or black in the area outside the star, which is why you're seeing a gray halo. You can fix it by filling the image with white (but not changing the alpha channel) in your image editing tool.
Also, you should upgrade to the latest three.js (r95).
edit:
I'm not sure what your exact expectation is, but there are many different settings that control alpha blending in three.js. There is renderer.premultipliedAlpha = true/false (defaults to true) and material.transparent = true/false; material.alphaTest is a threshold value that controls the alpha level below which a pixel is discarded completely. There are also material.blending, .blendEquation, .blendEquationAlpha, .blendDst and .blendSrc, and so on. You probably need to read up on those.
https://threejs.org/docs/#api/materials/Material
For instance, here is your texture with:
renderer.premultipliedAlpha = false;
Notice the black border on one quadrant of your texture.
https://codepen.io/manthrax/pen/KBraNB
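One detail worth noting: premultipliedAlpha affects how the WebGL context is created, so it belongs in the WebGLRenderer constructor rather than being set afterwards. A minimal sketch (the alpha flag is carried over from the question):
var renderer = new THREE.WebGLRenderer({
    alpha: true,               // transparent canvas, as in the question
    premultipliedAlpha: false  // default is true
});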
As part of a project, I have to turn the camera around an object (at position (0, 0, 0)) that itself stays still. For this, I want to know whether the lookAt function is the one best suited, and also how it works.
Integrating OrbitControls takes only a few lines of code. The basic lines are:
// init
var controls = new THREE.OrbitControls( camera, renderer.domElement );
controls.enableZoom = false; // optional
controls.enablePan = false; // optional
controls.center.set(0,0,0); // should be initially 0,0,0
controls.addEventListener( 'change', render ); // if you are not using requestAnimationFrame()
camera.position.z = 500; // should be bigger than the radius of your sphere
// render
function render() {
renderer.render(scene, camera);
}
<script src="js/controls/OrbitControls.js"></script>
Now, you should be able to rotate the camera around your sphere using your mouse.
All the other essential stuff (camera, renderer) can be found at the example: https://threejs.org/examples/#misc_controls_orbit
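If you would rather orbit the camera by hand, lookAt is indeed the key piece: move the camera along a circle each frame and point it at the target. A rough sketch (the radius of 500 and the speed of 0.01 are arbitrary assumptions):
var angle = 0;
function animate() {
    requestAnimationFrame( animate );
    angle += 0.01; // orbit speed
    camera.position.set( 500 * Math.sin( angle ), 0, 500 * Math.cos( angle ) );
    camera.lookAt( scene.position ); // keep the object at (0,0,0) in view
    renderer.render( scene, camera );
}
animate();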
I am trying to use dat.gui with a very simple three.js (r73) scene but am running into an issue with rotate and pan not working after adding "renderer.domElement" to the trackballControls initialization. Zoom works as expected.
Without renderer.domElement, I get working rotate, zoom, and pan functionality, but the dat.gui interface sliders "latch" when clicked, which is annoying and not functional. The issue is as described here: Issue while using dat.GUI in a three.js example.
I looked over more info here but didn't see a great resolution: https://github.com/mrdoob/three.js/issues/828
I also found this issue. Defining the container element, a.k.a. renderer.domElement, doesn't work; I am unable to click outside of the canvas area without the scene rotating.
Allow mouse control of three.js scene only when mouse is over canvas
Has anyone run into the same thing recently? If so, what workarounds are possible? Any help is appreciated.
The code is set up as follows:
// setup scene
// setup camera
// setup renderer
// ..
var trackballControls = new THREE.TrackballControls(camera, renderer.domElement);
trackballControls.rotateSpeed = 3.0;
trackballControls.zoomSpeed = 1.0;
trackballControls.panSpeed = 1.0;
// ..
// render loop
var clock = new THREE.Clock();
function render() {
stats.update();
var delta = clock.getDelta();
trackballControls.update(delta);
requestAnimationFrame( render );
renderer.render( scene, camera );
}
I debugged the issue with the help of this post: Three.js Restrict the mouse movement to Scene only.
Apparently, if you append renderer.domElement to the document after initializing the trackballControls, the controls don't know anything about the renderer.domElement object. This also does something strange to dat.gui, as described previously.
Basically, make sure this line:
document.getElementById("WebGL-output").appendChild(renderer.domElement);
appears before this line:
var trackballControls = new THREE.TrackballControls(camera, renderer.domElement);
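Put together, the working order looks like this (a sketch assuming the same container id as above):
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
// 1. attach the canvas to the document first
document.getElementById("WebGL-output").appendChild( renderer.domElement );
// 2. only then create the controls against it
var trackballControls = new THREE.TrackballControls( camera, renderer.domElement );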
Make sure the renderer DOM element is added to the html before it is being used as a reference.
document.body.appendChild(renderer.domElement);
I've tried improving rendering time in my project by merging meshes into one larger geometry and adding just that single geometry to the scene. I thought I would still be able to manage picking by keeping an array of the original meshes and passing those to the raycaster. I used the following code:
var vector = new THREE.Vector3( ( loc_x / window.innerWidth ) * 2 - 1, - ( loc_y / window.innerHeight ) * 2 + 1, 0.5 );
projector.unprojectVector(vector, camera);
var raycaster = new THREE.Raycaster( camera.position, vector.sub( camera.position ).normalize() );
var objects = [];
var i = active_regions.length;
while (i--) {
objects = objects.concat(active_regions[i].mesh_entities);
}
var intersects = raycaster.intersectObjects( objects );
if ( intersects.length > 0 ) {
console.log("Intersection: " + intersects);
}
So in the above code, active_regions contains the original individual meshes, and I create an array on the fly to specify which objects I want to select from. Unfortunately intersects comes up empty.
If I modify my project slightly so that I have all those mesh_entities added to the scene individually, then the above code works and I can successfully select objects. Unfortunately, the whole scene then renders slowly.
What's a good way (or some good ways) for me to successfully check for intersection with the ray, without slowing down my rendering?
Thanks!
You need to update the matrices of objects that are not in the rendered scene manually, since that is normally done as part of the render process. So if you are using a ghost scene, you don't need to render it; just update the matrices before doing the intersection:
scene_ghost.updateMatrixWorld(true);
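In other words, something like this should work without rendering the ghost scene at all (a sketch; scene_ghost is assumed to hold the pickable meshes):
scene_ghost.updateMatrixWorld( true ); // refresh world matrices manually
var intersects = raycaster.intersectObjects( scene_ghost.children, true );
if ( intersects.length > 0 ) {
    console.log( "Intersection: ", intersects[ 0 ] );
}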
I solved this by having a ghost scene. Essentially, I added all objects to the ghost scene as their individual meshes, and then when I use raycaster it works.
However, I had to use functions along these lines:
function flip_render_ghost(yes) {
if (yes == true) {
scene_ghost.add(camera);
render_ghost = true;
} else {
scene.add(camera);
render_ghost = false;
}
render();
}
function render() {
if (render_ghost == true) {
renderer.render( scene_ghost, camera );
} else {
renderer.render( scene, camera );
}
}
Whenever I am about to check for collisions, I flip to rendering ghost scene, check for hits, then flip back to normal rendering.
Edit: I have since discovered that objects cannot belong to multiple scenes (though geometries can be shared). So what I have done is created simple meshes for the picking scene. This requires more memory, but gives the option of having a simpler mesh to use for selection for faster picking. Also, it seemed to work for me to send the children of the ghost scene itself to the raycaster. You may need to, like me, add a property to each object in the ghost scene that references the main object you are trying to pick.
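A sketch of that last idea (the pickTarget property name and the simplePickingGeometry/realMesh names are placeholders of my own; any back-reference works):
// a simplified stand-in mesh used only for picking; the ghost scene is
// never rendered, so the material does not matter
var proxy = new THREE.Mesh( simplePickingGeometry, new THREE.MeshBasicMaterial() );
proxy.pickTarget = realMesh; // back-reference to the object we actually want
scene_ghost.add( proxy );
// later, when picking:
var hits = raycaster.intersectObjects( scene_ghost.children, true );
if ( hits.length > 0 ) {
    var picked = hits[ 0 ].object.pickTarget; // the real object
}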
I am doing something similar in my project and have verified that you need to render the scene to do proper raycasting. It is easy to optimize this, however, by rendering both scenes and letting one clear over the other. You should be able to change your code to this, granted the second render call will clear and draw over the first:
function render() {
renderer.render( scene_ghost, camera );
renderer.render( scene, camera );
}
I have a 3D maze with walls and a floor.
I have an image with a key (or another object, it's not important, but all of them are images and not 3D models).
I want to display it on the floor, and when the camera moves around, the object needs to look the same without rotating it. How can I achieve this?
Update 1:
I created a plane geometry, added the image (it's a transparent PNG), and I rotate the plane at render time. It works well, but when I turn the camera, the plane sometimes loses its transparency for a few milliseconds and gets a solid black background (blinking).
Any idea why?
here is the code:
var texture = THREE.ImageUtils.loadTexture('assets/images/sign.png');
var material = new THREE.MeshBasicMaterial( {map: texture, transparent: true} );
plane = new THREE.Mesh(new THREE.PlaneGeometry(115, 115,1,1), material );
plane.position.set(500, 0, 1500);
scene.add(plane);
// at render:
plane.rotation.copy( camera.rotation );
This can be achieved by copying the camera's rotation to the object in the animation loop:
function animate() {
not3dObject.rotation.z = camera.rotation.z;
not3dObject.rotation.x = camera.rotation.x;
not3dObject.rotation.y = camera.rotation.y;
...
render();
}
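Copying the three Euler components one by one works, and so does copying the whole rotation in a single call, which is what the plane.rotation.copy( camera.rotation ) line in the question already does. Copying the quaternion is equivalent:
not3dObject.quaternion.copy( camera.quaternion );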