I need to render a single specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already managed to get the renderer to ignore all other meshes except this one while rendering the scene, so I basically achieved my goal. The thing I don't like is that there is still a lot of unnecessary work going on for the whole scene graph during that render pass. I need to render this texture every frame, so with my current method I get extreme FPS drops (there are lots of meshes in the whole scene graph).
What I found is the function "renderBufferImmediate" of the THREE.WebGLRenderer (link to the renderer source code here). My pseudo code to achieve my goal would look like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget);
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program;
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture;
The renderBufferImmediate function takes an instance of THREE.Object3D, a WebGLShaderProgram and a THREE.Material. The problem I see here: the implementation of this function tries to look up properties on the Object3D which, as far as I know, don't exist (like "hasPositions" or "hasNormals"). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I am currently using it wrong) or whether there is another solution for my problem.
Thanks in advance.
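For reference, a common way to avoid touching the rest of the scene graph is to keep the mesh in a small dedicated scene that is rendered into the render target. A minimal sketch; the dedicated scene and the variable names are assumptions, not part of the original setup:
// Sketch: render only this mesh into the render target via a tiny dedicated scene.
// Note: add() reparents the mesh, so it no longer inherits transforms from the main scene.
var renderTargetScene = new THREE.Scene();
renderTargetScene.add(mesh);

function renderMeshToTexture() {
    renderer.setRenderTarget(renderTarget);
    renderer.render(renderTargetScene, camera);
    renderer.setRenderTarget(null); // switch back to rendering to the canvas
    return renderTarget.texture;
}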
I'm building a system which has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera into a render texture, which is used to generate a texture and save it to disk after the layers are populated. It happens that I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the disabled layers are still present in the final image. I'm really confused about this, because I have already debugged the code in every way I could and, considering the code, those specific layers shouldn't be visible at all. It must have something to do with how often render textures update or some other obscure execution order. The entire module is composed of 3 or 4 classes with dozens of lines, so to show the issue more succinctly, I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;

    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);

    /*First test: confirming that the marked quad's mesh renderer is, in fact,
    disabled, meaning it shouldn't be visible to the camera and consequently
    should be invisible in the RT. The console shows "false", meaning it's
    disabled. Even so, the quad is still rendered in the final image.*/
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);

    /*Second test: changing the object's layer, because the projection camera
    has a culling mask set to only capture objects in one specific layer.
    Again, it doesn't work and the quad content is still saved in the final image.*/
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;

    /*Final test: destroying the object to ensure it doesn't appear in the RT.
    This also doesn't work, confirming that no matter what I do, the RT is
    "fixed" at this point of execution and doesn't pick up any changes made
    to its composition.*/
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);

    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;

    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";

    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);

    RenderTexture.active = null;
    onSaved();
}
Funny thing is, if I manually disable that quad in the inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show my problem; this is one of those issues that are kind of hard to show here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer at the Unity Forum). The first one is to use the methods OnPreRender and OnPostRender to properly organize what should happen before or after the camera's render update. What I ended up doing, though, was calling the manual render method on the Camera, using the "GetComponent<Camera>().Render();" line, which updates the camera render manually. Since my structure was already in place, this single line solved my problem!
I'm using the three.js library to work with 3D models (mostly .glb, but it shouldn't matter).
The idea is to import a 3d model that contains groups and meshes. I want to be able to move meshes between already existing groups within a model without changing the visual representation of the model.
A piece of my code (from movedInternal) is below:
movedMesh.matrixWorldNeedsUpdate = true; // not sure if it's needed
let meshPosition = new THREE.Vector3();
movedMesh.getWorldPosition(meshPosition);
oldParent.remove(movedMesh);
newParent.add(movedMesh);
movedMesh.worldToLocal(meshPosition);
movedMesh.position.set(meshPosition.x, meshPosition.y, meshPosition.z);
And this is not working. The mesh changes its global position because the new parent's position is not the same as the previous parent's position, but I expect it to stay where it was and only change its local position to account for the new parent's transform.
What am I doing wrong?
I think you need to update movedMesh's world matrix after you change its parent. When you .remove() and then .add(), it doesn't yet know that its parent has changed (three.js usually updates this when rendering, but your code runs before the next frame is rendered).
let meshPosition = new THREE.Vector3();
movedMesh.getWorldPosition(meshPosition);
oldParent.remove(movedMesh);
newParent.add(movedMesh);
// Here it needs to re-learn its new coordinates
movedMesh.updateMatrixWorld(true);
movedMesh.worldToLocal(meshPosition);
movedMesh.position.set(meshPosition.x, meshPosition.y, meshPosition.z);
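As a side note, newer versions of three.js also provide Object3D.attach(), which reparents an object while keeping its world transform, so the same move can be written in one call:
// attach() re-parents the mesh and recomputes its local transform
// so that its world position, rotation and scale stay the same.
newParent.attach(movedMesh);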
I had been searching for a solution for a week and wasn't able to tackle the exact problem. Though my question is different, it has the same scenario, so I would like to give a solution for those looking for it.
I have a 3D object with different child meshes, and I want to replace one of the child meshes with another 3D model.
Code snippet below; replacementMesh is the new 3D object, and meshToReplace is the child mesh that needs to be replaced.
// Copy the transformation matrix from meshToReplace to replacementMesh
replacementMesh.matrix.copy(meshToReplace.matrix);
// Update replacementMesh's position, rotation, and scale from that matrix
replacementMesh.position.setFromMatrixPosition(replacementMesh.matrix);
replacementMesh.rotation.setFromRotationMatrix(replacementMesh.matrix);
replacementMesh.scale.setFromMatrixScale(replacementMesh.matrix);
meshToReplace.parent.add(replacementMesh);
meshToReplace.parent.remove(meshToReplace);
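A slightly more compact variant (a sketch using the same variable names, not part of the original answer) is to decompose the source matrix directly into the replacement's transform components:
// decompose() writes position, quaternion and scale straight from the matrix;
// swapping the meshes on the parent then works exactly as above.
meshToReplace.matrix.decompose(
    replacementMesh.position,
    replacementMesh.quaternion,
    replacementMesh.scale
);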
I'm trying to render some 3D text using THREE.FontLoader. The object is in the scene but does not appear. The only thing I thought could be the problem is that the mesh appears to have a BufferGeometry instead of a TextGeometry, for whatever reason. Is there anything wrong with my code?
Link to my code:
https://puu.sh/w78xs/3e350985e1.png
I'm going to assume you have lights in your scene and your camera is oriented correctly.
The loader.load call is asynchronous, but you're creating your mesh synchronously.
// This is an asynchronous call, which may take some time.
loader.load('/assets/delvetiker_regular.typeface.json',
function(font){
// This function is a callback, and is only executed AFTER load completes
geometry = ...;
});
//...
// At this point, geometry MAY OR MAY NOT EXIST.
// If it doesn't, this won't work.
mesh = new THREE.Mesh(geometry, mat);
If you move all the code you have at the bottom to inside the loader callback, you should see a difference.
// This is an asynchronous call, which may take some time.
loader.load('/assets/delvetiker_regular.typeface.json',
function(font){
// This function is a callback, and is only executed AFTER load completes
geometry = ...;
mat = ...;
// At this point, geometry DOES EXIST.
mesh = new THREE.Mesh(geometry, mat);
super(...);
});
//...
I'm also assuming the call to super adds the mesh to the scene, but if it doesn't, you'll also need to call scene.add(mesh) in the loader callback.
BufferGeometry isn't the issue. If you look at the source, most THREE.XxxxGeometry classes use BufferGeometry somewhere behind the scenes.
The first thing I saw is that you have no lights in your scene. Try MeshBasicMaterial to make sure everything else is working; MeshPhongMaterial expects lights, while MeshBasicMaterial will just paint the mesh a flat color.
Also make sure that your camera is not positioned inside the model, and call camera.lookAt(textmesh.position); to make sure it's not looking away from it.
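A minimal sanity check along those lines (the material, camera position and variable names here are assumptions, not the asker's actual values) might look like:
// Rule out lighting and camera orientation as the cause.
var debugMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 }); // needs no lights
textmesh.material = debugMaterial;

camera.position.set(0, 0, 100);    // make sure the camera is outside the mesh
camera.lookAt(textmesh.position);  // and actually pointing at it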
I am developing an augmented reality project using three.js and aruco-js. I wrote my code so that all my 3D objects are added to the (empty) scene at the beginning, but the data only gets loaded on marker detection.
Now I want to create an interface for changing the objects' appearance, starting with the possibility of scaling an object.
So I created an updateObject() function to set the new values like this:
function updateObject(object, rotation, translation)
{
...
...
...
// first method
object.scale.x = 200;
object.scale.y = 200;
object.scale.z = 200;
// second attempt
object.scale.set(300, 300, 300);
};
I tried both of the methods shown above to set the scale of my object, but they have no effect on the rendered images I get. The interesting thing is that the values on the objects in my scene3d object are the values I set in my function. But why doesn't it have any impact on the output?
I'm not very familiar with 3D programming in WebGL or three.js, so if you could give me any hint as to where the problem might have its origin, I would really appreciate an answer.
FIX:
I took a closer look at the 3D objects I was loading and discovered that they have a child called "mesh" nested inside another child. By changing the scale of only that mesh, I found out that it works this way. But I think it looks very ugly:
scene3d.children[visibleModels[0][0]+3].children[0].children[0].scale.set(2, 2, 2);
//visibleModels is a list of the markers/models that should be loaded
This is only a test changing one single object, but at least I found a way to solve this. Is this an ordinary way to change the scale of objects? If you have a better solution or anything to add, feel free to contribute.
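As a side note (not from the original post), if you would rather not rely on child indices, one common way to reach the nested mesh is to traverse the loaded object; a small sketch with an assumed loadedModel variable:
// Scale every mesh found under the loaded model, regardless of nesting depth.
loadedModel.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
        child.scale.set(2, 2, 2);
    }
});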
You could also try to scale the object by changing its matrix using the THREE.Matrix4.makeScale method instead:
object.matrix.makeScale( xScale, yScale, zScale );
Or the even simpler THREE.Matrix4.scale method:
object.matrix.scale( scale );
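One caveat with editing the matrix directly: by default three.js recomposes object.matrix from position, quaternion and scale on every render, so a manually set matrix gets overwritten unless automatic matrix updates are disabled:
// Keep three.js from rebuilding the matrix from position/quaternion/scale each frame.
object.matrixAutoUpdate = false;
object.matrix.makeScale(2, 2, 2);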
I want to generate the top and perspective views of an object.
Input: a 3D object, maybe an .obj or .dae file.
Output: image files presenting the top and front views of the loaded object.
Here is some expected output:
The perspective view of a chair.
The top view of a chair.
Can anyone give me some suggestions to solve this problem? A demo would be preferred.
You could create a small three.js scene with your OBJ or Collada file loaded using the appropriate loaders (see the examples for the specific loaders). Then create the cameras you need in the scene; see the orthographic and perspective camera examples that come with three.js, too.
To produce the images you want, you could use the canvas toDataURL function; see this thread here, or search for:
Three.js and HTML5 Canvas toDataURL
In essence, after the objects are loaded, you could do something like:
renderer.render(scene, topViewCamera);
dataurl = canvas.toDataURL();
renderer.render(scene, perspectiveCamera);
dataurl2 = canvas.toDataURL();
I think you could also use two render targets and then use those for the output, but if you are new to three.js, maybe start with the HTML5 toDataURL() method.
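A fuller sketch of that idea (the camera setup and the preserveDrawingBuffer flag are assumptions added here, and an existing scene and perspectiveCamera are assumed):
// toDataURL reads the canvas, so the renderer usually needs preserveDrawingBuffer.
var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

// Top view: an orthographic camera looking straight down at the object.
var topViewCamera = new THREE.OrthographicCamera(-5, 5, 5, -5, 0.1, 100);
topViewCamera.position.set(0, 10, 0);
topViewCamera.lookAt(new THREE.Vector3(0, 0, 0));

renderer.render(scene, topViewCamera);
var topViewImage = renderer.domElement.toDataURL('image/png');

renderer.render(scene, perspectiveCamera);
var perspectiveImage = renderer.domElement.toDataURL('image/png');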