Issues with text geometry in Three.js

I'm trying to render some 3D text using THREE.FontLoader. The object is added to the scene but never appears. The only possible problem I've spotted is that the mesh seems to have a BufferGeometry instead of a TextGeometry, for whatever reason. Is there anything wrong with my code?
Link to my code:
https://puu.sh/w78xs/3e350985e1.png

I'm going to assume you have lights in your scene and your camera is oriented correctly.
The loader.load call is asynchronous, but you're creating your mesh synchronously.
// This is an asynchronous call, which may take some time.
loader.load( '/assets/delvetiker_regular.typeface.json', function ( font ) {
    // This function is a callback, and is only executed AFTER the load completes.
    geometry = ...;
} );
// ...
// At this point, geometry MAY OR MAY NOT exist yet.
// If it doesn't, this won't work.
mesh = new THREE.Mesh( geometry, mat );
If you move all the code you have at the bottom to inside the loader callback, you should see a difference.
// This is an asynchronous call, which may take some time.
loader.load( '/assets/delvetiker_regular.typeface.json', function ( font ) {
    // This function is a callback, and is only executed AFTER the load completes.
    geometry = ...;
    mat = ...;
    // At this point, geometry DOES exist.
    mesh = new THREE.Mesh( geometry, mat );
    super(...);
} );
//...
I'm also assuming the call to super adds the mesh to the scene, but if it doesn't, you'll also need to call scene.add(mesh) in the loader callback.

BufferGeometry isn't the issue. If you look at the source, most THREE.XxxxXxxxxGeometry classes use BufferGeometry somewhere behind the scenes.
The first thing I saw is that you have no lights in your scene. Try MeshBasicMaterial to make sure it's working: MeshPhongMaterial expects lights, while MeshBasicMaterial just paints the mesh a flat color.
Also make sure your camera is not positioned inside the model, and call camera.lookAt(textmesh.position); so it's not looking away.
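Putting the two answers together, a minimal version of the loader callback might look like this (a sketch: the font path comes from the question, while the text string, size, and height values are illustrative):
var loader = new THREE.FontLoader();
loader.load( '/assets/delvetiker_regular.typeface.json', function ( font ) {
    // Everything that depends on the font lives inside this callback.
    var geometry = new THREE.TextGeometry( 'Hello three.js!', {
        font: font,
        size: 10,
        height: 2
    } );
    // MeshBasicMaterial needs no lights, so the text shows up even in an unlit scene.
    var material = new THREE.MeshBasicMaterial( { color: 0xffff00 } );
    var mesh = new THREE.Mesh( geometry, material );
    scene.add( mesh );
    camera.lookAt( mesh.position ); // make sure the camera actually faces the text
} );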

Related

Setting MatrixWorld of camera in threeJS

I'm trying to sync the camera orientation of one three.js scene with another by passing the world matrix of the first scene's camera to the second and then using
camera.matrix.fromArray(cam1array); //cam1array is the flattened worldmatrix of the 1st scene
camera.updateMatrixWorld( true );
However, when I print camera.matrixWorld for the second scene, it appears that no update has happened (matrixWorld is the same as before the commands above). The second scene does use OrbitControls, so maybe they are overriding my commands, but could anyone advise me on how to give the second scene's camera the same world matrix as the first's?
I should also clarify that part of the issue is that setting
camera.matrixAutoUpdate = false;
to prevent the override seems to stop OrbitControls from functioning correctly.
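One way around this (a sketch, not from the thread itself) is to decompose the received matrix into position, quaternion, and scale instead of writing camera.matrix directly; matrixAutoUpdate can then stay enabled, so OrbitControls keeps working:
var m = new THREE.Matrix4().fromArray( cam1array ); // cam1array as in the question
m.decompose( camera.position, camera.quaternion, camera.scale );
camera.updateMatrixWorld( true );
Note that OrbitControls will still re-aim the camera at its own target on its next update, so you may also need to copy the first scene's controls.target across.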

Render a single Object / Mesh immediately into a renderTarget

I need to render a single specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already arranged for every other mesh to be ignored while the scene renders, so I have basically achieved my goal. What I hate is that a lot of unnecessary work still happens for the whole scene graph during the render call. I need to render this texture every frame, so with my current method I get extreme FPS drops (there are lots of meshes in the scene graph).
What I found was the renderBufferImmediate function of THREE.WebGLRenderer (link to the renderer source code here). My pseudo code to achieve my goal would look like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget);
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program;
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture;
The renderBufferImmediate function takes an instance of THREE.Object3D, a WebGLShaderProgram, and a THREE.Material. The problem I see here: the implementation looks up properties on the Object3D which, AFAIK, don't exist (like "hasPositions" or "hasNormals"). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I am currently using it wrong) or whether there is another solution to my problem.
Thanks in advance.
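For what it's worth, a common way to avoid traversing the whole graph (a sketch using the pre-r102 render signature seen elsewhere on this page, not something from the original thread) is to move the mesh into a minimal helper scene and render only that:
var rtScene = new THREE.Scene();
rtScene.add( mesh ); // note: add() detaches the mesh from its previous parent
renderer.render( rtScene, camera, renderTarget ); // only this one mesh is processed
var texture = renderTarget.texture;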

How to apply 2-pass postprocessing without using EffectComposer?

I need to post-process the scene that I rendered previously into textureA (as a render target) with my custom shader, and save the result to textureB (input: textureA, output: textureB). Therefore, I don't need a scene and a camera; I think it's too simple to even bother with three.js classes like EffectComposer, ShaderPass, CopyShader, TexturePass, etc.
So, how do I setup this computing-like post-processing in a simple way?
I've created a fiddle for you that shows a basic post-processing effect without EffectComposer. The idea behind this code is to work with an instance of WebGLRenderTarget.
First, you draw the scene into this render target. In the next step, you use this render target as a texture for a plane which is rendered with an orthographic camera. The code in the render loop looks like this:
renderer.clear();
renderer.render( scene, camera, renderTarget );
renderer.render( sceneFX, cameraFX );
The corresponding material of this plane is your custom shader. I've used a luminosity shader from the official repo.
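For reference, the setup for the second pass could look roughly like this (a sketch: sceneFX, cameraFX, and renderTarget match the names in the loop above, and THREE.LuminosityShader comes from the examples directory):
var cameraFX = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var sceneFX = new THREE.Scene();
// Full-screen quad whose material runs the custom (here: luminosity) shader.
var materialFX = new THREE.ShaderMaterial( {
    uniforms: { tDiffuse: { value: renderTarget.texture } }, // output of pass 1 as input
    vertexShader: THREE.LuminosityShader.vertexShader,
    fragmentShader: THREE.LuminosityShader.fragmentShader
} );
sceneFX.add( new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), materialFX ) );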
As for "Therefore, I don't need a scene and a camera": please do it like in the example. That's the intended way of the library.
Demo: https://jsfiddle.net/f2Lommf5/5149/
three.js R91

TrackballControls change events

I have a static scene with no animation loop, and am trying to use the change event of TrackballControls to trigger the render function following the pattern in this thread, i.e.:
var controls = new THREE.TrackballControls( camera, renderer.domElement );
controls.addEventListener( 'change', render );

function render() {
    renderer.render( scene, camera );
}
This works well with OrbitControls, but the change events don't fire when I substitute TrackballControls. However, if I add the line:
_this.update();
at the end of the mousewheel(), mousemove(), and touchmove() functions in TrackballControls.js, the change events fire correctly (in my case, anyway). I am not sure this is the best way to get them firing, though. Is forking a local copy of TrackballControls the best solution here, have I overlooked something, or does it make sense to change TrackballControls.js itself?
TrackballControls was written to require an animation loop in which controls.update() is called.
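In other words, the usual TrackballControls pattern is a loop like this (minimal sketch):
function animate() {
    requestAnimationFrame( animate );
    controls.update(); // dispatches 'change' events and applies damping
    renderer.render( scene, camera );
}
animate();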
OrbitControls, on the other hand, can be used in static scenes in which the scene is rendered only when the mouse is moved, like so:
controls.addEventListener( 'change', render );
In either case, the controls are part of the examples -- not the library -- so you are free to hack them to your liking.
What you are proposing is fine if your scene is static and there is no damping required.
EDIT: corrected for three.js r.73

Render scene onto custom mesh with three.js

After messing around with this three.js demo of rendering a scene to a texture, I successfully replicated the essence of it in my project: amidst my main scene there's now a sphere, and a secondary scene is drawn onto it via a THREE.WebGLRenderTarget buffer.
I don't really need a sphere, though, and that's where I've hit a huge brick wall. When trying to map the buffer onto my simple custom mesh, I get an infinite stream of the following errors:
three.js:23444 WebGL: INVALID_VALUE: pixelStorei: invalid parameter for alignment
three.js:23557 Uncaught TypeError: Cannot read property 'width' of undefined
My geometry, approximating an annular shape, is created using this code. I've successfully UV-mapped a canvas onto it by passing {map: new THREE.Texture(canvas)} into the material options, but if I use {map: myWebGLRenderTarget} I get the errors above.
A cursory look through the call stack suggests that three.js assumes a texture.image attribute exists on myWebGLRenderTarget and attempts to call clampToMaxSize on it.
Is this a bug in three.js, or am I simply doing something wrong? Since I only need flat rendering (with MeshBasicMaterial), one of the first things I did when adapting the render-to-texture demo above was remove all traces of the shaders, and it worked great with just the sphere. Do I need those shaders back in order to use UV mapping on a custom mesh?
For what it's worth, I was needlessly setting needsUpdate = true on my texture. (The handling of needsUpdate apparently assumes the presence of a <canvas> that the texture is based on.)
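So the working setup is simply to hand the material the render target's texture and leave needsUpdate alone (a sketch: myCustomGeometry stands in for the annular geometry from the question, and in recent three.js versions the map should be the target's .texture property):
var material = new THREE.MeshBasicMaterial( { map: myWebGLRenderTarget.texture } );
var mesh = new THREE.Mesh( myCustomGeometry, material );
scene.add( mesh );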
