I'm trying to offset a texture that's being used as an environment map, but not having much luck.
The texture is loaded with the loadTextureCube() method, which gets applied to my mesh just fine, but the offsets don't seem to have any effect.
The texture is just a big gray circle to give a little bit of gloss.
Any thoughts on what I'm doing wrong?
var urls = [
    'pos-x.png',
    'neg-x.png',
    'pos-y.png',
    'neg-y.png',
    'pos-z.png',
    'neg-z.png'
];
var cubemap = THREE.ImageUtils.loadTextureCube(urls);
cubemap.offset.y = -.5;
cubemap.offset.x = -.5;
cubemap.needsUpdate = true;
I'm assuming, based on loadTextureCube, that you're using the cube shader approach to skyboxes. So far everything in your code is fine. The problem you're seeing is that while your texture has offset properties, the material (or more specifically the cube shader program therein) does not have uniforms to pass them along to the fragment shader. This, of course, is assuming you're doing something like this:
E.g. cube shader material
var skyboxGeo = new THREE.CubeGeometry( 5000, 5000, 5000 );
var cubeShader = THREE.ShaderUtils.lib[ "cube" ];
cubeShader.uniforms[ "tCube" ].value = cubemap;
var skyboxMat = new THREE.ShaderMaterial( {
    fragmentShader: cubeShader.fragmentShader,
    vertexShader: cubeShader.vertexShader,
    uniforms: cubeShader.uniforms,
    side: THREE.BackSide
});
var skybox = new THREE.Mesh( skyboxGeo, skyboxMat );
scene.add( skybox );
There are likely a number of workarounds, but you could always try something like a MeshFaceMaterial on a standard cube to achieve the desired result:
E.g. standard material
var skyboxGeo = new THREE.CubeGeometry( 5000, 5000, 5000 );
var materialArray = [];
for (var i = 0; i < 6; i++) {
    var cubeTex = THREE.ImageUtils.loadTexture( urls[i] );
    cubeTex.offset.x = -.5;
    cubeTex.offset.y = -.5;
    materialArray.push( new THREE.MeshBasicMaterial({
        map: cubeTex,
        side: THREE.BackSide
    }));
}
var skyboxMat = new THREE.MeshFaceMaterial( materialArray );
var skyBox = new THREE.Mesh( skyboxGeo, skyboxMat );
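One caveat, as an assumption on my part rather than something from your snippet: with the default clamp wrapping, a negative offset may simply clamp at the texture edge instead of wrapping around, so you may also need to set the wrap mode on each texture inside the loop:
// assumption: wrapping is needed for the -.5 offsets to wrap rather than clamp at the edge
cubeTex.wrapS = cubeTex.wrapT = THREE.RepeatWrapping;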
Hope that helps
~D
Related
I am attempting to have my skybox not be affected by the camera.far parameter. I would like to cull all other scene objects with this, just not the skybox.
When I set skyBox.frustumCulled = false; it makes no difference. skyBox being the mesh of course.
Is this done by adding another render pass? If so, I would need two different cameras, one with a really high far value to allow viewing of the skybox, right? How can this be done efficiently?
For clarity here is the snippet from my terrain object code used for drawing the skybox:
shader = THREE.ShaderLib["cube"];
shader.uniforms["tCube"].value = this.cubetexture;
mat = new THREE.ShaderMaterial({
    uniforms: shader.uniforms,
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    depthWrite: false,
    side: THREE.BackSide
});
geo = new THREE.BufferGeometry().fromGeometry(new THREE.BoxGeometry(1024, 1024, 1024));
mesh = new THREE.Mesh(geo, mat);
mesh.rotation.y += 90;
mesh.scale.x = mesh.scale.y = mesh.scale.z = 50;
mesh.frustumCulled = false;
mesh.matrixAutoUpdate = false;
mesh.rotationAutoUpdate = false;
mesh.updateMatrix();
this.skybox = mesh;
scene.add(this.skybox);
You're adding the skybox to the 'main' scene. A better way to accomplish a skydome is to create a new scene; this will be the 'background' to your 'main' scene. There's also a discussion about skydomes vs. skyboxes. Simply put, a box saves polys and a dome looks better. In this example I'll be using a dome/sphere.
var renderer = new THREE.WebGLRenderer( {alpha: true, antialias: true} );
var mainScene = new THREE.Scene();
var mainCamera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 20000 );
var skydome = {
    scene: new THREE.Scene(),
    camera: new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 20000) // no trailing semicolon here; that would be a syntax error inside the object literal
};
skydome.material = new THREE.MeshBasicMaterial({color: 0x0F0F0F}); // the material for the skydome; for the sake of laziness I took a MeshBasicMaterial
skydome.mesh = new THREE.Mesh(new THREE.SphereGeometry(100, 20, 20), skydome.material);
skydome.scene.add(skydome.mesh);
Now, during the render function, you adjust only the rotation of the skydome camera, not the position.
var render = function(){
    requestAnimationFrame( render );
    // copy only the rotation of the main camera, not its position
    skydome.camera.quaternion.copy( mainCamera.quaternion );
    renderer.render( skydome.scene, skydome.camera ); // first render the skydome
    renderer.render( mainScene, mainCamera ); // then render the rest over the skydome
};
renderer.autoClear = false; // note the casing: autoClear; otherwise only the main scene will be rendered
render(); // kick off the loop
How do I add a reflection material with an environment map? I am using two cameras and two scenes in order to achieve it, based on the webgl_materials_cubemap example,
and I am using an object that is loaded with OBJMTLLoader. I can see both the environment map and the object in my scene, but the reflection of the environment is not working on the object.
Find my code below:
var urls = [
    'textures/cube/canary/pos-x.png',
    'textures/cube/canary/neg-x.png',
    'textures/cube/canary/pos-y.png',
    'textures/cube/canary/neg-y.png',
    'textures/cube/canary/pos-z.png',
    'textures/cube/canary/neg-z.png'
];
var cubemap = THREE.ImageUtils.loadTextureCube(urls); // load textures
cubemap.format = THREE.RGBFormat;
var shader = THREE.ShaderLib['cube']; // init cube shader from built-in lib
shader.uniforms['tCube'].value = cubemap; // apply textures to shader
// create shader material
var skyBoxMaterial = new THREE.ShaderMaterial( {
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    depthWrite: false,
    side: THREE.BackSide
});
skybox = new THREE.Mesh( new THREE.BoxGeometry( 1000, 1000, 1000 ), skyBoxMaterial );
scene.add( skybox );
var object = scene.getObjectByName( "myname", true );
object.traverse( function ( child ) {
    if ( child instanceof THREE.Mesh ) {
        //child.geometry.computeFaceNormals();
        var geometry = child.geometry;
        var reflectionMaterial = new THREE.MeshBasicMaterial({
            color: 0xcccccc,
            envMap: cubemap
        });
        mesh = new THREE.Mesh(geometry, reflectionMaterial);
        sceneCube.add(mesh);
    }
});
Here I am just changing scene.add(mesh); to sceneCube.add(mesh); with that the reflection works, but the camera doesn't.
You can see the differences here:
Example 1
Example 2
In the first demo you can see that the scene works fine with the environment but without the object reflection.
In the second, the reflection works fine but the camera behavior gets weird.
I fixed it myself by adding the following lines inside the object traverse:
child.material.map = reflectionMaterial;
child.material.needsUpdate = true;
However, I don't know whether what I am doing is correct, but it worked as expected.
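For what it's worth, a commonly used alternative is to leave the loaded meshes in place and just assign the environment map to their existing materials during the traverse; a minimal sketch, assuming the materials produced by OBJMTLLoader accept an envMap:
object.traverse( function ( child ) {
    if ( child instanceof THREE.Mesh ) {
        // assumption: the loaded material type supports an environment map
        child.material.envMap = cubemap;
        child.material.needsUpdate = true;
    }
});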
I was playing with WebGL and three.js when I ran into the following issue:
Textures with large images get pixelated when seen from a distance.
Check the example: http://jsfiddle.net/4qTR3/1/
Below is the code:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 10, 7000);
var light = new THREE.PointLight(0xffffff);
light.position.set(0, 150, 100);
scene.add(light);
var light2 = new THREE.AmbientLight(0x444444);
scene.add(light2);
var renderer = new THREE.WebGLRenderer({
    antialias: true
});
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
var geometry = new THREE.PlaneGeometry(500, 500, 10, 10);
//I use different textures in my project
var texture = new THREE.ImageUtils.loadTexture(TEST_IMAGE);
var textureBack = new THREE.ImageUtils.loadTexture(TEST_IMAGE);
textureBack.anisotropy = renderer.getMaxAnisotropy();
texture.anisotropy = renderer.getMaxAnisotropy();
//Filters
texture.magFilter = THREE.NearestFilter;
texture.minFilter = THREE.LinearMipMapLinearFilter;
textureBack.magFilter = THREE.NearestFilter;
textureBack.minFilter = THREE.LinearMipMapLinearFilter;
var materials = [
    new THREE.MeshLambertMaterial({
        transparent: true,
        map: texture,
        side: THREE.FrontSide
    }),
    new THREE.MeshLambertMaterial({
        transparent: true,
        map: textureBack,
        side: THREE.BackSide
    })
];
for (var i = 0, len = geometry.faces.length; i < len; i++) {
    var face = geometry.faces[i].clone();
    face.materialIndex = 1;
    geometry.faces.push(face);
    geometry.faceVertexUvs[0].push(geometry.faceVertexUvs[0][i].slice(0));
}
planeObject = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials));
planeObject.overdraw = true;
planeObject.position.z = -5000;
scene.add(planeObject);
camera.position.z = 1000;
(function render() {
    requestAnimationFrame(render);
    planeObject.rotation.y += 0.02;
    renderer.render(scene, camera);
})();
If the texture image has text in it, the text becomes very pixelated, with poor quality.
How can I fix this?
In order not to get pixelated textures, you need to use mipmaps, but WebGL can't generate mips for non-power-of-2 textures. Your texture is 800x533; neither of those is a power of 2.
A couple of options:
1) Scale the picture offline to powers of 2, like 512x512 or 1024x512.
2) Scale the picture at runtime before making a texture (a sketch of this is shown after the note below).
Load the image yourself; once loaded, make a canvas that is power-of-2, call drawImage(img, 0, 0, canvas.width, canvas.height) to scale the image into the canvas, then load the canvas into a texture.
You also probably want to change your mag filtering from NearestFilter to LinearFilter.
Note: (1) is the better option. (2) takes time on the user's machine, uses more memory, and you have no guarantee what the quality of the scaling will be.
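A minimal sketch of option 2, assuming a global THREE is in scope; the loadPowerOfTwoTexture helper name and the 1024 target size are my own choices, not anything from the original code:
function loadPowerOfTwoTexture( url, size, onLoad ) {
    var img = new Image();
    img.onload = function () {
        // scale the source image into a power-of-2 canvas
        var canvas = document.createElement( 'canvas' );
        canvas.width = size;
        canvas.height = size;
        canvas.getContext( '2d' ).drawImage( img, 0, 0, canvas.width, canvas.height );
        // use the canvas as the texture source so mipmaps can be generated
        var texture = new THREE.Texture( canvas );
        texture.magFilter = THREE.LinearFilter;
        texture.minFilter = THREE.LinearMipMapLinearFilter;
        texture.needsUpdate = true;
        onLoad( texture );
    };
    img.src = url;
}
// usage: loadPowerOfTwoTexture(TEST_IMAGE, 1024, function (tex) { /* assign tex to material.map */ });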
Example here.
Demo can be seen here
I'm using a rather small image to make the skybox and for some reason it's being stretched instead of tiled. I found this tutorial on how to pattern a texture. I noted these lines
var crateTexture = new THREE.ImageUtils.loadTexture( 'images/crate.gif' );
crateTexture.wrapS = crateTexture.wrapT = THREE.RepeatWrapping;
crateTexture.repeat.set( 5, 5 );
var crateMaterial = new THREE.MeshBasicMaterial( { map: crateTexture } );
var crate = new THREE.Mesh( cubeGeometry.clone(), crateMaterial );
crate.position.set(60, 50, -100);
scene.add( crate );
So I tried using this method for the skybox and it didn't produce any change
var path = "/images/";
var urls = [
path + 'startile.png', path + 'startile.png',
path + 'startile.png', path + 'startile.png',
path + 'startile.png', path + 'startile.png'
];
var textureCube = THREE.ImageUtils.loadTextureCube( urls , new THREE.CubeRefractionMapping() );
textureCube.wrapS = textureCube.wrapT = THREE.RepeatWrapping;
textureCube.repeat.set( 10, 10 );
var material = new THREE.MeshBasicMaterial( { color: 0xffffff, envMap: textureCube, refractionRatio: 0.95 } );
// Skybox
var shader = THREE.ShaderLib[ "cube" ];
shader.uniforms[ "tCube" ].value = textureCube;
var material = new THREE.ShaderMaterial( {
        fragmentShader: shader.fragmentShader,
        vertexShader: shader.vertexShader,
        uniforms: shader.uniforms,
        side: THREE.BackSide
    } ),
    mesh = new THREE.Mesh( new THREE.CubeGeometry( 1200, 1200, 1200 ), material );
//mesh.overdraw = false;
// mesh.rotation.x = Math.PI * 0.1;
scene.add( mesh );
Any ideas?
Tiling is not supported for texture cubes. However, in your case, since all faces of your cube are identical, you can do something like this:
var geometry = new THREE.CubeGeometry( 1000, 1000, 1000 );
var texture = THREE.ImageUtils.loadTexture( "startile.png" );
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
texture.repeat.set( 10, 10 );
var material = new THREE.MeshBasicMaterial( {
    color: 0xffffff,
    map: texture,
    side: THREE.BackSide
} );
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
three.js r.59
For my project I need collision tests in Three.js. In my CollisionDetection class I'm trying to get a Raycaster to work, and I found some weirdness that I can't explain and can't work around:
My CollisionDetector works fine for cubes, but when I use spheres instead it doesn't give me the same results. Am I wrong to expect the same results as for the cubes, or am I missing something else?
Here is my Code:
var renderer, camera, scene;
init();
animate();
function init() {
    var container = document.getElementById("scene");
    var width = window.innerWidth;
    var height = window.innerHeight;
    renderer = new THREE.WebGLRenderer();
    renderer.setSize(width, height);
    camera = new THREE.OrthographicCamera( 0, width, 0, height, 1, 10000 );
    camera.position.z = 300;
    scene = new THREE.Scene();
    scene.add(camera);
    container.appendChild(renderer.domElement);
    var geometry = new THREE.SphereGeometry(10, 16, 16);
    //var geometry = new THREE.CubeGeometry( 10, 10, 10 );
    var material1 = new THREE.MeshBasicMaterial( { color: 0xFF3333} );
    var material2 = new THREE.MeshBasicMaterial( { color: 0xFF3333} );
    var material3 = new THREE.MeshBasicMaterial( { color: 0xFF3333} );
    var material4 = new THREE.MeshBasicMaterial( { color: 0xFF3333} );
    var material5 = new THREE.MeshBasicMaterial( { color: 0xFF3333} );
    var element1 = new THREE.Mesh( geometry, material1 );
    var element2 = new THREE.Mesh( geometry, material2 );
    var element3 = new THREE.Mesh( geometry, material3 );
    var element4 = new THREE.Mesh( geometry, material4 );
    var element5 = new THREE.Mesh( geometry, material5 );
    element1.position.set(200,200,0);
    element2.position.set(200,100,0);
    element3.position.set(200,300,0);
    element4.position.set(100,200,0);
    element5.position.set(300,200,0);
    scene.add(element1);
    scene.add(element2);
    scene.add(element3);
    scene.add(element4);
    scene.add(element5);
    var CollisionDetector = new CollisionDetection();
    CollisionDetector.addRay(new THREE.Vector3(0, -1, 0));
    CollisionDetector.addRay(new THREE.Vector3(0, 1, 0));
    CollisionDetector.addRay(new THREE.Vector3(1, 0, 0));
    CollisionDetector.addRay(new THREE.Vector3(-1, 0, 0));
    CollisionDetector.addElement(element1);
    CollisionDetector.addElement(element2);
    CollisionDetector.addElement(element3);
    CollisionDetector.addElement(element4);
    CollisionDetector.addElement(element5);
    document.onclick = function(){
        CollisionDetector.testElement(element1);
    };
}
function CollisionDetection(){
    var caster = new THREE.Raycaster();
    var rays = [];
    var elements = [];
    this.testElement = function(element){
        for(var i=0; i<rays.length; i++) {
            caster.set(element.position, rays[i]);
            var hits = caster.intersectObjects(elements, true);
            for(var k=0; k<hits.length; k++) {
                console.log("hit", hits[k]);
                hits[k].object.material.color.setHex(0x0000ff);
            }
        }
    };
    this.addRay = function(ray) {
        rays.push(ray.normalize());
    };
    this.addElement = function(element){
        elements.push(element);
    };
}
function animate() {
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
}
Or best, see for yourself how it behaves: http://jsfiddle.net/mymL5/12/
On click, every element hit by a ray should turn blue, and all hits are logged to the console.
Note the (imho) weird console output for spheres.
Also, why is the lower sphere not hit while the upper is?
You can switch between cubes and spheres by commenting/uncommenting lines 19/20.
Can anyone help me? What am I not getting?
PS: I'm new to Three.js, so I'm probably being dumb.
Since this is homework-related, I am only going to provide some tips.
Your scene is rendering upside down because your arguments to the orthographic camera are incorrect (the constructor order is sketched after these tips).
Your sphere is bigger than your cube.
Your rays are hitting the north and south poles of your spheres exactly. What is different about those points?
The material.side property tells Raycaster which side(s) of a face to consider the "front".
Your fiddle example is running an old version (r.54) of three.js.
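For reference, the constructor signature is THREE.OrthographicCamera(left, right, top, bottom, near, far); a minimal sketch of a top/bottom ordering that keeps world y pointing up on screen (my example values, not part of the original answer):
// THREE.OrthographicCamera( left, right, top, bottom, near, far )
camera = new THREE.OrthographicCamera( 0, width, height, 0, 1, 10000 );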
three.js r.58
Increased the sphere size.
Rotated the spheres by some non-trivial angle (so they don't get hit right at the N/S pole).
Now it works? :P
var geometry = new THREE.SphereGeometry(20,17, 17);
element1.position.set(0,0,0);
element2.position.set(0,100,0);
element3.position.set(100,0,0);
element4.position.set(0,-100,0);
element5.position.set(-100,0,0);
element1.rotation.set(0,0,10);
element2.rotation.set(0,0,10);
element3.rotation.set(0,0,10);
element4.rotation.set(0,0,10);
element5.rotation.set(0,0,10);
Still, the ray test should be able to handle hitting an exact vertex or edge of a triangle, so that might be considered a place to improve in Three.js.
I filed an issue about this in the Three.js repository:
https://github.com/mrdoob/three.js/issues/3541