A simple z-fighting problem.
http://jsfiddle.net/jmchen/y03q54oa/
polygonOffset: true,
polygonOffsetFactor: 1.0,
polygonOffsetUnits: 4.0
The code works fine in the latest Chrome, Firefox, and even Edge, but in IE11 it generates artifacts. My findings show that, in IE11, polygonOffset changes the Z-position of the mesh, not the depth buffer value! Can someone confirm my suspicion? Should I report the bug to Microsoft or to the three.js library?
It seems there is a problem in the WebGLState.setPolygonOffset function:
if ( polygonOffset ) {
    enable( gl.POLYGON_OFFSET_FILL );
    if ( currentPolygonOffsetFactor !== factor || currentPolygonOffsetUnits !== units ) {
        gl.polygonOffset( factor, units );
        currentPolygonOffsetFactor = factor;
        currentPolygonOffsetUnits = units;
    }
} else {
    disable( gl.POLYGON_OFFSET_FILL );
}
If there is one mesh with polygonOffset = true, the library enables gl.POLYGON_OFFSET_FILL. It seems that, in IE11, disabling this feature for the other materials (polygonOffset = false) does not work properly, so the remaining meshes keep using the same factor and units and the artifacts become visible. If you instead set polygonOffset = true with the default values factor = 0, units = 0 on the other meshes, as in your example, the library sets the new factor and units and everything is fine. I think WebGLState.setPolygonOffset could be changed to set factor and units for every material, not only for those with polygonOffset = true, whenever at least one mesh has polygonOffset = true.
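That change could be sketched as follows. This is a hypothetical standalone version, not the actual three.js code: the real function lives inside WebGLState's closure and uses its own enable/disable helpers, so here `gl` is taken as a parameter instead.

```javascript
// Hypothetical sketch of the proposed change. When polygonOffset is off,
// the offset values are reset to (0, 0) in addition to disabling
// POLYGON_OFFSET_FILL, so drivers that ignore the disable (as IE11
// appears to) still render with a zero offset.
function createSetPolygonOffset( gl ) {
    var currentFactor = null;
    var currentUnits = null;

    return function setPolygonOffset( polygonOffset, factor, units ) {
        if ( polygonOffset ) {
            gl.enable( gl.POLYGON_OFFSET_FILL );
        } else {
            gl.disable( gl.POLYGON_OFFSET_FILL );
            factor = 0;
            units = 0;
        }

        if ( currentFactor !== factor || currentUnits !== units ) {
            gl.polygonOffset( factor, units );
            currentFactor = factor;
            currentUnits = units;
        }
    };
}
```

With this shape, a material that turns polygonOffset off always leaves the GL offset state at (0, 0), which matches the workaround of setting polygonOffset = true with zero factor and units on every other mesh.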
Regards
Waldemar
I got it to work in IE11 with the following:
var mesh0 = new THREE.Mesh(
    new THREE.BoxGeometry( 50, 50, 50 ),
    new THREE.MeshBasicMaterial( {
        color: 0x0000ff,
        polygonOffset: true,
        polygonOffsetFactor: 0,
        polygonOffsetUnits: 0
    } )
);
It seems to happen only when the two shapes are both meshes (the lines work fine). While it doesn't make much sense (every browser on Windows renders through ANGLE unless you force it into native GL mode), this is an IE problem.
I tested your original code in Edge, and it works fine. Microsoft has pretty much put IE to sleep, only producing security updates going forward.
I am currently trying to create a mesh that is colored using a DataTexture. My initial coloring shows up just fine, but now my next goal is to offset the texture along the y axis, very similar to this example:
http://math.hws.edu/graphicsbook/demos/c5/textures.html
How I create my texture / mesh:
this.colorTexture = new DataTexture( colors, this.frameWidth, frameCount,
    RGBFormat, FloatType, UVMapping, RepeatWrapping, RepeatWrapping );

const material = new MeshBasicMaterial( {
    side: FrontSide,
    vertexColors: true,
    wireframe: false,
    map: this.colorTexture
} );

this.mesh = new Mesh( geometry, material );
How I attempt to animate the texture using offset:
this.mesh.material.map.offset.y -= 0.001;
this.mesh.material.map.needsUpdate = true;
this.mesh.material.needsUpdate = true;
this.mesh.needsUpdate = true;
I have confirmed that the function I'm using to apply the offset is being called on each animation frame; however, the visualization itself is not animating or showing any changes apart from the initial positioning of the colors I wrote to the texture.
Any help is greatly appreciated :)
The uv transformation matrix of a texture is updated automatically as long as Texture.matrixAutoUpdate is set to true (which is also the default value). You can simply modulate Texture.offset. There is no need to set any needsUpdate flags (Mesh.needsUpdate does not exist anyway).
It's best if you strictly stick to the code from the webgl_materials_texture_rotation example. If this code does not work, please demonstrate the issue with a live example.
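A minimal per-frame update along those lines could look like this. It is a sketch: updateTextureScroll is a hypothetical helper name, and mesh.material.map is assumed to be the DataTexture from the question.

```javascript
// Modulate Texture.offset each frame; no needsUpdate flags are required,
// because the uv transform is rebuilt automatically while
// Texture.matrixAutoUpdate is true (the default).
function updateTextureScroll( texture, dy ) {
    texture.offset.y -= dy;
    // With RepeatWrapping only the fractional part of the offset matters;
    // keep it in [0, 1) so it never drifts over long sessions:
    texture.offset.y -= Math.floor( texture.offset.y );
}

// In the animation loop:
// updateTextureScroll( this.mesh.material.map, 0.001 );
```

If the colors still don't move with this, the problem is likely elsewhere (for example, the render loop itself), which is where a live example would help.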
I am trying to make use of Raycaster in a ThreeJS scene to create a sort of VR interaction.
Everything works fine in normal mode, but not when I enable stereo effect.
I am using the following snippet of code.
// "camera" is a three.js camera; "objectContainer" holds the objects (Object3D) I want to interact with
var raycaster = new THREE.Raycaster();
var origin = new THREE.Vector2( 0, 0 );

raycaster.setFromCamera( origin, camera );

var intersects = raycaster.intersectObjects( objectContainer.children, true );
if ( intersects.length > 0 && intersects[ 0 ].object.visible === true ) {
    // trigger some function myFunc()
}
So basically, when I try the above snippet in normal mode, myFunc gets triggered whenever I am looking at any of the relevant 3D objects.
However as soon as I switch to stereo mode, it stops working; i.e., myFunc never gets triggered.
I tried updating the value of origin.x to -0.5. I did that because in VR mode, the screen gets split into two halves. However that didn't work either.
What should I do to make the raycaster intersect the 3D objects in VR mode (when stereo effect is turned on)?
Could you please provide a jsfiddle with the code?
Basically, if you are using stereo in your app, you are using two cameras, so you need to check for intersections from both cameras' views; this can become an expensive process.
var cameras = {
    camera1: new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 1, 10000 ),
    camera2: new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 1, 10000 )
};

for ( var cam in cameras ) {
    raycaster.setFromCamera( origin, cameras[ cam ] );
    // continue your logic
}
You could use a vector object that simulates the camera intersection to avoid checking twice, but this depends on what you are trying to achieve.
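To make the per-eye origin concrete: setFromCamera expects coordinates in NDC relative to the viewport that camera renders into, and in a side-by-side stereo layout each eye's viewport is half the canvas. The following helper is hypothetical (toNDC is not a three.js API), but it shows why the center of each half maps to (0, 0) for that eye's camera, and why a fixed origin.x of -0.5 against the full-screen camera doesn't work:

```javascript
// Convert a pixel position into normalized device coordinates relative to
// one viewport (hypothetical helper for a side-by-side stereo layout).
function toNDC( px, py, viewportX, viewportWidth, viewportHeight ) {
    return {
        x: ( ( px - viewportX ) / viewportWidth ) * 2 - 1,
        y: - ( py / viewportHeight ) * 2 + 1
    };
}
```

For a 1000x500 canvas, the gaze point of the left eye is pixel (250, 250) inside the viewport starting at x = 0 with width 500, and toNDC returns (0, 0) there; the right eye's viewport starts at x = 500 and its center likewise maps to (0, 0) for the right camera.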
I encountered a similar problem and eventually found the reason. With StereoEffect, three.js displays the meshes to the two eyes, but in reality it adds only one mesh to the scene, exactly in the middle of the line between the left-eye mesh and the right-eye mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not on the illusion displayed to each eye!
I detailed how to do it here:
Three.js StereoEffect displays meshes across 2 eyes
Hope it solves your problem!
You can use my StereoEffect.js file in your project to resolve the problem. See the example of its use, and also see my Raycaster stereo pull request.
We have a setup with two WebGLRenderers using clones of the same scene (to avoid issues): same scene, same lights, same camera. The second renderer is used for snapshotting on demand (to avoid problems with the aliasing of render-target rendering, etc.).
All this works like a charm in Chrome, but in Firefox (35.0.1) we are missing shadows completely (there is only one shadow caster in the scene, a SpotLight). Is this a known issue/limitation of Firefox (Windows 7/8/8.1)?
Any insight greatly appreciated.
var renderer = new THREE.WebGLRenderer( {
    alpha: false,
    antialias: true,
    preserveDrawingBuffer: true // required to support .toDataURL()
} );

// shadows
renderer.shadowMapSoft = true;
renderer.physicallyBasedShading = true;
renderer.shadowMapEnabled = true;

renderer.render( snapshot.scene, snapshot.camera );
var data = renderer.domElement.toDataURL( "image/jpeg" );
I forgot to mention in the post that the shadows are missing only in the second WebGLRenderer instance (the snapshot one).
What should I debug in Firefox (some WebGL implementation structures?)? When comparing the state of the three.js scene/renderer/camera/lights between Chrome and Firefox, everything seems to be OK and identical.
This is a problem with blending on floating-point textures. See http://3dwayfinder.com/webgl-broken-in-firefox-35-0-1-for-windows/
I have the following logic to create a Three.js R69 WebGL renderer that is supposed to handle high-DPI displays. It worked for quite a while, but about a week ago one (and only one) three.js page started rendering as if high DPI were correctly set, except that my 3D coordinate origin became the upper-left corner of the rendering canvas rather than the expected center. (There have been no changes to my environment that I can tell; maybe the browsers auto-updated. I'm testing with Chrome, Firefox, and Safari on OS X 10.10.1.)
// create our renderer:
gCex3.renderer = new THREE.WebGLRenderer( {
    antialias: true,
    alpha: true,
    devicePixelRatio: window.devicePixelRatio || 1
} );

// Three.js R69: I started needing to explicitly set this so clear alpha is 1:
gCex3.renderer.setClearColor( new THREE.Color( 0x000000 ), 1 );

gCex3.rendererDOM = $( '#A3DH_Three_wrapper' );
gCex3.rendererDOM.append( gCex3.renderer.domElement );

// fbWidth & fbHeight are the w,h of a div located within the page:
gCex3.renderer.setSize( gs.fbWidth, gs.fbHeight, true ); // 'true' means update the canvas style
Checking the latest R69 examples, they don't seem to do anything special for high-DPI displays. Checking the WebGLRenderer source, the devicePixelRatio logic now appears to be embedded within the WebGLRenderer() constructor.
I've tried that minimal logic in the examples, specifically this:
renderer = new THREE.WebGLRenderer( { antialias: false } );
renderer.setClearColor( new THREE.Color( 0x000000 ), 1 );
renderer.setSize( gs.fbWidth, gs.fbHeight, true );
And I see the same behavior: my coordinate system origin is the upper left of the rendering canvas.
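For reference, the high-DPI sizing that the renderer performs internally can be sketched roughly like this (sizeForDPI is a hypothetical helper, with the pixel ratio passed in explicitly rather than read from window):

```javascript
// Size a canvas's drawing buffer for a high-DPI display while keeping its
// CSS size in logical pixels (sketch of what setSize with style updates
// effectively does; not the actual three.js implementation).
function sizeForDPI( canvas, cssWidth, cssHeight, ratio ) {
    canvas.width = Math.floor( cssWidth * ratio );   // drawing-buffer pixels
    canvas.height = Math.floor( cssHeight * ratio );
    canvas.style.width = cssWidth + 'px';            // layout size
    canvas.style.height = cssHeight + 'px';
    return ratio;
}
```

If the drawing buffer and the GL viewport ever get out of sync (for example after a lost context), symptoms like an origin stuck at a corner can appear, so it may be worth comparing canvas.width against canvas.clientWidth * devicePixelRatio at render time.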
Note that in Chrome, when running the JavaScript debugger, I see a "webgl context lost" event while the previous page is being exited, but before this logic executes. Could the WebGLRenderer be getting created during a period when there is no WebGL context?
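One way to test that suspicion is to watch the context events on the canvas itself; webglcontextlost and webglcontextrestored are the standard DOM event names. This is a sketch with a hypothetical watchContext helper:

```javascript
// Log WebGL context loss/restoration so renderer creation can be
// correlated with a dead context (attach before creating the renderer).
function watchContext( canvas, onLost, onRestored ) {
    canvas.addEventListener( 'webglcontextlost', function ( event ) {
        event.preventDefault(); // allows the context to be restored later
        onLost();
    }, false );
    canvas.addEventListener( 'webglcontextrestored', onRestored, false );
}
```

If onLost fires before your renderer setup runs, that would support the theory that the WebGLRenderer is being created against a lost context.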
I found an odd behavior while working on my pet game. I wanted to draw a few objects on a canvas, some of which required an image/icon to be rotated. It is quite a common use case, and the usual solution is to use the context's rotate method. Going with the flow, I also used translate to put the images in the desired place nicely and consistently.
This worked fine until I tried Chrome on a Windows laptop with hardware acceleration enabled. Images started to blink and teleport across the screen. I found that it is related to acceleration (turning off accelerated graphics fixes the problem), and my best guess is that, when updating a frame, the renderer assumes the drawing calls are independent and can be executed in parallel. When context transforms take place, that is not the case, because the context state changes.
Example of the undesired behavior: given a canvas element with ID 'screen', try the following:
var canvas = document.getElementById( "screen" ),
    ctx = canvas.getContext( "2d" );

var drawrect = function () {
    ctx.fillStyle = this.color;
    ctx.translate( this.x + 10, this.y + 10 );
    ctx.rotate( this.rotation );
    ctx.fillRect( -10, -10, 20, 20 );
    ctx.rotate( -this.rotation );
    ctx.translate( -this.x - 10, -this.y - 10 );
};

var red = {
    x: 22,
    y: 22,
    rotation: 0,
    color: "#ff0000",
    draw: drawrect
};

var blu = {
    x: 22,
    y: 111,
    rotation: 0,
    color: "#0000ff",
    draw: drawrect
};

function main_loop() {
    ctx.clearRect( 0, 0, 450, 450 );
    frameId = requestAnimationFrame( main_loop );
    red.draw();
    red.x += 1;
    red.rotation += 0.1;
    blu.draw();
    blu.x += 1;
    blu.rotation -= 0.1;
}
main_loop();
Working example: http://jsfiddle.net/1u2d7uhr/7/ (tested on Chrome, Chromium, Firefox; accelerated Chrome glitches, others do not)
I was able to 'fix' this by removing the translations and rendering the rotating elements to a separate canvas, which is then (after the rotations) drawn onto the main one. This seems hackish to me, though.
Is it a code error on my part?
If so, what is the right way to render rotations? (Perhaps I should take this question to Code Review instead, but I'm not sure it belongs there.)
Or is it buggy behavior on the browser's side? I understand the logic behind it, but it can be very surprising (and cause some confusion) to developers. Or am I the only one...
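For what it's worth, the idiomatic way to avoid manually undoing transforms (and any accumulated floating-point error from imperfectly paired inverse calls) is ctx.save()/ctx.restore(), which snapshots and restores the full context state. Whether this also sidesteps the accelerated-Chrome glitch would need testing; the sketch below takes ctx as a parameter, unlike the closure-based drawrect above:

```javascript
// Same drawing logic as drawrect, but using save/restore instead of
// manually inverting the rotate/translate calls.
var drawrectSaved = function ( ctx ) {
    ctx.save();                                // snapshot transform + styles
    ctx.translate( this.x + 10, this.y + 10 );
    ctx.rotate( this.rotation );
    ctx.fillStyle = this.color;
    ctx.fillRect( -10, -10, 20, 20 );
    ctx.restore();                             // back to the clean state
};
```

In the example above this would be used as red.draw = drawrectSaved and called as red.draw( ctx ).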