I'm using three.js version 92 with textures.
three.js warning demo
Thanks to three.js, the conversion of an image's dimensions to powers of two happens automatically, but I would like to suppress the warning it prints to the console without removing all the other warnings from other modules or functions.
I would like to hide the three.js warnings from my users, but keep the warnings from other libraries.
Is there a way to configure the texture loader to hide them?
You have a few options.
The first and easiest is to remove the console.warn line from three.js. You've said that's not feasible due to how you acquire three.js, but there are other ways.
The next is to override the code where the warning is occurring. This warning (and there are several places where it can occur) happens in the WebGLRenderer object.
Download the THREE.js source.
Open threejs/build/three.js in an editor.
Find the WebGLRenderer.
Copy that whole function into a JavaScript file you control.
Edit that new file to remove those console warnings.
Reference the new file in your project.
In the code where you interact with the THREE namespace, but before you create the WebGLRenderer object, replace THREE.WebGLRenderer like this:
THREE.WebGLRenderer = WebGLRenderer;
The reference to WebGLRenderer on the left is to the loaded THREE namespace. The reference to WebGLRenderer on the right is to the one from the new file (with the warnings removed).
Finally, you can just disable warnings entirely. This warning should only appear during the first renderer.render call when the texture is available. I personally don't recommend doing this unless you're comfortable with losing all console warnings in the interim between when you turn warnings off, and whenever the texture finishes loading. That would go something like this:
var originalWarning = console.warn; // back up the original method
console.warn = function(){}; // now warnings do nothing!
var tex = texLoader.load("texture.png", function(){
    renderer.render(scene, camera); // sends the texture to the GPU
    console.warn = originalWarning; // turn warnings back on
});
var mat = new THREE.SomeMaterial({
    map: tex
});
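A middle ground, not from the original answer but a sketch of the same idea: instead of silencing console.warn entirely, wrap it with a filter that drops only messages containing a marker string. The "THREE." substring check below is an assumption about how three.js prefixes its warnings; adjust it to match the exact message you see.

```javascript
// Sketch: filter console.warn instead of silencing it completely.
// Warnings from other libraries still get through.
function makeFilteredWarn(realWarn) {
  return function () {
    var first = arguments[0];
    if (typeof first === "string" && first.indexOf("THREE.") !== -1) {
      return; // swallow three.js warnings only
    }
    realWarn.apply(console, arguments);
  };
}

console.warn = makeFilteredWarn(console.warn);
console.warn("THREE.WebGLRenderer: image is not power of two"); // suppressed
console.warn("warning from some other library"); // still printed
```

This avoids the window during which all warnings are lost, at the cost of depending on the wording of the three.js messages.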
I'm building a system that has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera into a render texture, which is used to generate a texture and save it to disk after the layers are populated. I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-texture conversion.
To my surprise, the final image still shows the disabled layers. I'm really confused, because I've debugged the code every way I could and, given the code, those specific layers shouldn't be visible at all. It must have something to do with how often render textures update, or some other obscure execution order. The entire module is composed of 3 or 4 classes with dozens of lines, so to demonstrate the issue more succinctly, I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
renderTexture = GetComponent<Camera>().targetTexture;
RenderTexture.active = renderTexture;
var finalTexture = new Texture2D(renderTexture.width,
renderTexture.height, textureFormat, false);
/*First test: confirming that the marked quad's mesh renderer
is, in fact, disabled, meaning it shouldn't be visible to the camera
and consequently invisible in the RT. The console shows "false", so it's
disabled. Even so, the quad is still rendered in the final image.*/
//Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);
/*Second test: changing the object's layer, because the projection camera
has a culling mask set to only capture objects in one specific layer.
Again, it doesn't work and the quad's content is still saved in the final image.*/
//transform.GetChild(6).GetChild(0).gameObject.layer = 0;
/*Final test: destroying the object to ensure it doesn't appear in the RT.
This also doesn't work, confirming that no matter what I do, the RT is
"fixed" at this point of execution and doesn't pick up any changes made
to its composition.*/
//Destroy(transform.GetChild(6).GetChild(0).gameObject);
finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
renderTexture.height), 0, 0);
finalTexture.Apply();
finalTexture.name = textureName;
var teamTitle = generationController.activeTeam.title;
var kitIndex = generationController.activeKitIndex;
var customDirectory = saveDirectory + teamTitle + "/"+kitIndex+"/";
StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);
RenderTexture.active = null;
onSaved();
}
The funny thing is, if I manually disable that quad in the inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show my problem; this is one of those issues that are hard to demonstrate here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer on the Unity Forum). The first is to use the methods OnPreRender and OnPostRender to properly organize what should happen before and after the camera's render update. What I ended up doing, though, was calling the manual render method on the Camera, using the line "GetComponent<Camera>().Render();", which updates the camera render manually. Considering that my structure was already in place, this single line solved my problem!
I have a scene that has 10 cubes in it.
See this fiddle: https://jsfiddle.net/gilomer88/tcpwez72/47/
When a user taps on one of the cubes, I want a new “details scene” to pop-open on top of the current "main scene" and have it display just that one cube the user tapped on.
When the user is done examining and playing with the cube they can close the “details scene”, thereby revealing the original "main scene" - which was really just sitting underneath it this whole time.
I keep getting the following error:
Uncaught TypeError: scope.domElement.addEventListener is not a function
Can't figure out why that's happening.
Note that the fiddle I made is 95% complete - it's just missing this bit of HTML you see here:
<canvas id="detailsCanvas"></canvas>
<div id="renderingSpot"></div>
(As soon as I added this HTML - and the CSS - everything started breaking up on me, no matter what I did to try fixing it - so I finally just took it out.)
Either way, I'm basically creating a new scene in which to display the "details view", along with a new renderer, OrbitControl, etc. - and the error occurs on this line:
const detailsOrbitController = new OrbitControls(detailsScene.userData.camera, detailsScene.userData.renderingElement);
(That line is in my makeDetailsScene() function - which you'll see in the fiddle.)
For what it's worth, I was following this THREE.js example when I was putting this together: https://threejs.org/examples/?q=mul#webgl_multiple_elements
I'm not sure why you need this extra <div id="renderingSpot"></div>. You could also just pass the canvas to the OrbitControls. When I run the following code, I don't get any errors:
detailsScene.userData.renderingElement = document.getElementById("renderingSpot");
const detailsOrbitController = new OrbitControls(detailsScene.userData.camera, detailsScene.userData.renderingElement);
I made some changes to your fiddle: https://jsfiddle.net/xn19qbf7/
(I also made some HTML, CSS, JS additions such that the details view displays on top of the main canvas and that its scene gets rendered.)
I guess your problem was your original assignment with jQuery:
let renderingElement = $("#renderingSpot");
detailsScene.userData.renderingElement = renderingElement;
renderingElement is still a jQuery object, but OrbitControls expects a plain DOM element. So you should get the underlying element:
let renderingElement = $("#renderingSpot");
detailsScene.userData.renderingElement = renderingElement.get(0);
// or as above
detailsScene.userData.renderingElement = document.getElementById("renderingSpot");
After messing around with this demo of Three.js rendering a scene to a texture, I successfully replicated the essence of it in my project: amidst my main scene, there's now a sphere, and a secondary scene is drawn onto it via a THREE.WebGLRenderTarget buffer.
I don't really need a sphere, though, and that's where I've hit a huge brick wall. When trying to map the buffer onto my simple custom mesh, I get an infinite stream of the following errors:
three.js:23444 WebGL: INVALID_VALUE: pixelStorei: invalid parameter for alignment
three.js:23557 Uncaught TypeError: Cannot read property 'width' of undefined
My geometry, approximating an annular shape, is created using this code. I've successfully UV-mapped a canvas onto it by passing {map: new THREE.Texture(canvas)} into the material options, but if I use {map: myWebGLRenderTarget} I get the errors above.
A cursory look through the call stack makes it look like three.js is assuming the presence of the texture.image attribute on myWebGLRenderTarget and attempting to call clampToMaxSize on it.
Is this a bug in three.js, or am I simply doing something wrong? Since I only need flat rendering (with MeshBasicMaterial), one of the first things I did when adapting the render-to-texture demo above was remove all trace of the shaders, and it worked great with just the sphere. Do I need those shaders back in order to use UV mapping and a custom mesh?
For what it's worth, I was needlessly setting needsUpdate = true on my texture. (The handling of needsUpdate apparently assumes the presence of a <canvas> that the texture is based on.)
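As a side note, newer three.js builds expose the target's texture directly, so the material can reference renderTarget.texture rather than the render target object itself. A minimal sketch; the stand-in object below is hypothetical, just to show the shape of the API without a WebGL context:

```javascript
// Sketch: pass the render target's .texture to the material options, not
// the render target itself. (In older three.js builds the render target
// object was passed directly as the map; newer builds expose .texture.)
function basicMaterialOptions(renderTarget) {
  return { map: renderTarget.texture };
}

// Hypothetical stand-in so the sketch runs without a WebGL context:
var fakeTarget = { texture: { isTexture: true } };
var opts = basicMaterialOptions(fakeTarget);
console.log(opts.map.isTexture); // true
```

In a real project the argument would be an actual THREE.WebGLRenderTarget, and opts would be passed to new THREE.MeshBasicMaterial(opts).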
How can I dynamically turn on and off antialiasing and shadows in WebGLRenderer?
Simply changing the antialias and shadowMapEnabled properties does not work. I looked in the source and found a method updateShadowMap(), but it was removed in release 69.
UPDATE: OK, I found the answer to the second half of the question here:
https://github.com/mrdoob/three.js/issues/2466
As a result the following code works fine:
renderer.shadowMapEnabled = false;
for (var i in tiles.children)
    tiles.children[i].material.needsUpdate = true;
renderer.clearTarget(sun.shadowMap);
You can't enable/disable antialiasing on a WebGL context after creation. The only way is to create a new context and submit all the buffers and textures again.
So, ideally you would only need to create a new WebGLRenderer with the antialias boolean. This doesn't work yet, though, but I'm working to have it working ASAP.
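The replace-the-renderer approach might be sketched like this. The helper name is mine, and the DOM swap assumes the renderer's canvas was appended to a container element; only the testable option-building part runs outside a browser:

```javascript
// Sketch: antialias can only be chosen at context creation, so build the
// options for a brand-new renderer and swap its canvas into the page.
// Scene, camera, and geometry objects survive; only the renderer is replaced.
function nextRendererOptions(baseOptions, antialias) {
  var opts = Object.assign({}, baseOptions); // keep the other settings
  opts.antialias = antialias;
  return opts;
}

// Usage in a browser (assumes THREE, a `container` element, and `oldRenderer`):
// var opts = nextRendererOptions({ alpha: true }, true);
// var renderer = new THREE.WebGLRenderer(opts);
// container.replaceChild(renderer.domElement, oldRenderer.domElement);
console.log(nextRendererOptions({ alpha: true }, true).antialias); // true
```

Materials and textures will be re-uploaded to the new context on the next render call, which is exactly the "submit all the buffers and textures again" cost mentioned above.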
I want to generate the top and perspective view of an object.
Input: A 3d object, maybe .obj or .dae file.
Output: image files presenting the top and front views of the loaded object.
Here is some expected output:
(image: the perspective view of a chair)
(image: the top view of a chair)
Can anyone give me some suggestions for solving this problem? A demo would be preferred.
You could create a small three.js scene with your .obj or Collada file loaded using the appropriate loaders (see the examples for the specific loaders). Then create the cameras you want to have in the scene; see the orthographic and perspective camera examples that come with three.js, too.
To produce the images you want, you could use the canvas toDataURL function; see this thread:
Three.js and HTML5 Canvas toDataURL
In essence, after the objects are loaded, you could do something like:
renderer.render(scene, topViewCamera);
dataurl = canvas.toDataURL();
renderer.render(scene, perspectiveCamera);
dataurl2 = canvas.toDataURL();
I think you could also use two render targets and then use those for output, but if you are new to three.js, start with the toDataURL() method from HTML5.
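The two render calls above can be wrapped in a small helper. One caveat worth noting: reading the canvas must happen immediately after render (or the renderer must be created with preserveDrawingBuffer: true), because the WebGL drawing buffer may be cleared between frames otherwise. A sketch, with a hypothetical stand-in renderer so it runs outside a browser:

```javascript
// Sketch: render with a given camera, then immediately capture the canvas.
function snapshot(renderer, scene, camera) {
  renderer.render(scene, camera); // draw first...
  return renderer.domElement.toDataURL("image/png"); // ...then read right away
}

// Hypothetical stand-in renderer (a real one would be THREE.WebGLRenderer,
// whose domElement is the <canvas> it draws into):
var fakeRenderer = {
  render: function (scene, camera) { this.lastCamera = camera; },
  domElement: { toDataURL: function (type) { return "data:" + type; } }
};
console.log(snapshot(fakeRenderer, {}, { name: "top" })); // "data:image/png"
```

With a real renderer you would call snapshot(renderer, scene, topViewCamera) and snapshot(renderer, scene, perspectiveCamera) to get the two data URLs.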