I have a few scenes, each with its own camera, on one page. One of them is a map view.
I can load a scene easily enough with a function, but is there a simple way in Three.js to remove a whole scene?
When the map is not shown, I don't want to keep its objects loaded and rendered, for better performance.
So far I have only found how to remove children one by one in a loop, but I hope there is a better solution.
Just removing THREE objects from your scene is not enough to delete them from memory. You also have to call the dispose() method on the objects' geometries, materials and textures.
https://github.com/mrdoob/three.js/issues/5175
After you call your dispose and remove methods, run a diagnostic like this (where this.renderer is your THREE.WebGLRenderer):
if (this.renderer && (this.renderer.info.memory.geometries ||
                      this.renderer.info.memory.programs ||
                      this.renderer.info.memory.textures)) {
    loge("VIEW.clear: geometries=" + this.renderer.info.memory.geometries +
         " programs=" + this.renderer.info.memory.programs +
         " textures=" + this.renderer.info.memory.textures);
}
If the number of programs, geometries and textures isn't stable, you are inviting performance issues and a memory leak.
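The dispose-and-remove step can be sketched as a recursive traversal. This is a minimal sketch, not the library's own API: disposeHierarchy is a hypothetical helper name, and the function is deliberately duck-typed, assuming only that nodes look like THREE.Object3D (a children array, an optional geometry/material with dispose() methods, a parent with remove()):

```javascript
// Recursively dispose of a node's GPU resources, then detach it from its
// parent. Duck-typed: works on anything shaped like a THREE.Object3D
// (children array, optional geometry/material, optional dispose methods).
function disposeHierarchy(node) {
  // Copy the array first: removing children mutates node.children mid-loop.
  for (const child of [...node.children]) {
    disposeHierarchy(child);
  }
  if (node.geometry && typeof node.geometry.dispose === "function") {
    node.geometry.dispose();
  }
  if (node.material) {
    // A mesh may hold a single material or an array of them.
    const materials = Array.isArray(node.material) ? node.material : [node.material];
    for (const m of materials) {
      // Dispose the material's texture map, if it has one, then the material.
      if (m.map && typeof m.map.dispose === "function") m.map.dispose();
      if (typeof m.dispose === "function") m.dispose();
    }
  }
  if (node.parent) node.parent.remove(node);
}
```

Calling disposeHierarchy on each top-level child of the map scene (or on a group that holds the whole map) frees geometries, materials and textures in one pass, after which the renderer.info.memory counters above should drop back down.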
I'm building a system with a set of quads stacked in front of each other, forming a layer system. These layers are rendered by an orthographic camera into a render texture, which is used to generate a texture and save it to disk once the layers are populated. I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the final image still shows the disabled layers. I'm really confused by this, because I have already debugged the code every way I could, and according to the code those layers shouldn't be visible at all. It must have something to do with how often render textures update, or some other obscure execution-order issue. The entire module is composed of three or four classes with dozens of lines each, so to show the issue more succinctly, I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;
    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);

    /* First test: confirm that the marked quad's mesh renderer is, in fact,
       disabled, meaning it shouldn't be visible to the camera and therefore
       invisible in the RT. The console shows "false", so it is disabled.
       Even so, the quad is still rendered in the final image. */
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);

    /* Second test: change the object's layer, because the projection camera
       has a culling mask set to capture only objects in one specific layer.
       Again, it doesn't work and the quad's content is still saved in the
       final image. */
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;

    /* Final test: destroy the object to ensure it can't appear in the RT.
       This doesn't work either, confirming that no matter what I do, the RT
       is "frozen" at this point of execution and doesn't pick up any changes
       made to its composition. */
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);

    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;
    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";
    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);
    RenderTexture.active = null;
    onSaved();
}
The funny thing is, if I manually disable that quad in the inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I've tried my best to describe the problem; it's one of those issues that is hard to show here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer on the Unity Forum). The first is to use the OnPreRender and OnPostRender methods to properly organize what should happen before and after the camera's render update. What I ended up doing, though, was calling the camera's manual render method with the line GetComponent<Camera>().Render();, which updates the camera render immediately. Since my structure was already in place, this single line solved my problem!
I would like to preface this by saying I am still quite new to GameMaker Studio and do not know all there is to know about how the software works, which is probably the root cause of my problem, as I do not know why I am having this layering issue.
I have an issue where two Draw GUI events exist in separate objects within the same room.
The first object is a fog of war that draws a GUI and reveals the map as the player moves, keeping explored places visible even when they are out of view.
The second object is a joystick that the player drags with their thumb to move the character.
Ever since I implemented the fog of war, I have been unable to see the joystick: the fog of war draws over it and I am unable to use it.
I understand there are other draw events where I can do this.
Draw
Draw GUI
Draw Begin
Draw End
Draw GUI BEGIN
Draw GUI END
I have tried changing where the drawing code runs. For example, at first the joystick and the fog were both in Draw GUI; after moving one of them from Draw GUI to Draw GUI Begin, the same issue appears.
I have made sure to place the joystick on the topmost layer in the room and the fog of war on the bottommost layer.
I have also tried applying a depth to the object:
oJoystick_Stick.depth = -100;
but this does not achieve anything.
Is there another way to force two objects on the GUI layer to be on top of the other?
To my understanding, Draw GUI always draws its objects above anything in Draw, including Draw Begin and Draw End. This is because Draw GUI is meant for the interface (the stats you see about your character, like health, ammo, etc.), and objects drawn there aren't part of the room itself. You may also have noticed that objects drawn in Draw GUI follow the camera/viewport.
So, to clear up the draw priority:
First comes the Draw GUI layer, which places objects in front of everything, like an interface.
After that, the depth variable and the layers inside the room take priority; each layer is also given a depth value, at intervals of 100.
If the depth is also the same (for example, when objects are in the same layer), then the order in which the objects and code were loaded decides the draw order.
The latter is not always reliable when multiple objects overlap at the same depth, because if the objects are re-created in-game (e.g. a persistent object loaded into a new room, or a pause/unpause using instance_activate_all), the order they are drawn in may differ. When overlapping objects, keep them on different layers to prevent mixed priorities.
I haven't used a fog-of-war system myself, however, so I don't know whether it's built-in, but I wouldn't recommend placing it in Draw GUI, as that should be reserved for the interface layout. With the default Draw events, you'll have more flexibility with the layers inside the room.
Some advice about DrawGUI
DrawGUI Begin --> called before every Draw GUI event that frame
DrawGUI --------> called in sync with the screen refresh rate
DrawGUI End ----> called after every Draw GUI event that frame
So GameMaker Studio's "pipeline" for Draw GUI works like the step event: there is a Begin, a current, and an End.
To prioritize object rendering, GameMaker Studio uses the built-in depth variable. Take 0 as the reference value: objects with a depth value > 0 are drawn first (further back), and objects with a depth value < 0 are drawn last, in front of everything.
Check the depth passed to the instance_create_depth() function, check everywhere the depth variable is changed, and check the depth value and z-order of each instance layer in the room editor.
I need to render one specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already arranged that, while the scene renders, all other meshes except this one are ignored, so I have basically achieved my goal. What I don't like is that a lot of unnecessary work is still done for the whole scene graph during the render. I need to render this texture every frame, so with my current method I get extreme FPS drops (there are lots of meshes in the whole scene graph).
What I found was the renderBufferImmediate function of THREE.WebGLRenderer. (Link to the renderer source code here.) My pseudo-code to achieve my goal would look like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget);
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program;
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture;
The renderBufferImmediate function takes a THREE.Object3D instance, a WebGL shader program and a THREE.Material. The problem I see here: the implementation of this function looks up properties of the Object3D which, as far as I know, don't exist (like "hasPositions" or "hasNormals"). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I'm currently using it wrong), or whether there is another solution to my problem.
Thanks in advance.
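One common workaround (not from the question above, and renderBufferImmediate is an internal API, so treat this as an assumption): either keep the mesh in a small dedicated scene and render that to the target, or temporarily hide everything else so the renderer skips the other nodes. A duck-typed sketch of the visibility-toggling idea follows; renderOnly and the renderFn callback are hypothetical names, and the only assumption about nodes is that they expose children, parent and visible, matching THREE.Object3D's shape:

```javascript
// Temporarily hide every node except `target` (its subtree and its
// ancestors), run the supplied render callback, then restore the original
// visibility flags. Duck-typed against THREE.Object3D's shape.
function renderOnly(root, target, renderFn) {
  // Keep the target's whole subtree visible, plus its ancestor chain,
  // so the renderer can still reach it from the root.
  const keep = new Set();
  (function mark(n) { keep.add(n); for (const c of n.children) mark(c); })(target);
  for (let n = target.parent; n; n = n.parent) keep.add(n);

  // Record every node's visibility, then hide everything not in `keep`.
  const saved = [];
  (function walk(node) {
    saved.push([node, node.visible]);
    node.visible = keep.has(node);
    for (const c of node.children) walk(c);
  })(root);

  try {
    renderFn(); // e.g. renderer.setRenderTarget(rt); renderer.render(scene, cam);
  } finally {
    for (const [node, v] of saved) node.visible = v; // restore
  }
}
```

Note this still traverses the graph once per call on the JavaScript side; if even that is too slow, a separate scene containing only the mesh avoids the traversal entirely.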
Currently I am trying to dive deeper into grouped, or rather hierarchically ordered, objects. However, I have a strange issue with the position and visibility of an Object3D and its children.
I have a set of objects: a light source, which is the main object so far, and a few spheres, for example, which are children of the main object.
The problem is that child objects positioned behind other objects (siblings, from the camera's view) are visible, and all child objects appear in front of the main object even when they are actually positioned behind it.
Sadly, I can't reproduce this in a CodePen, so I had to take some pictures of the scene; my apologies for that.
In the pictures, the camera rotates clockwise around the light source (the main object to the left).
So basically what I am doing is:
mObj = new THREE.Object3D();
mObj.add(new THREE.Mesh(someGeometry, someMaterial));
cObj = new THREE.Object3D();
cObj.add(new THREE.Mesh(someGeometry, someMaterial));
mObj.add(cObj);
scene.add(mObj);
Is this related to the hierarchical order of the objects, or to something else?
The second, more negligible issue is that on one of my PCs, the parts of objects that are dark (because they receive no light) show strange black/grey faces which I can't explain. Maybe the graphics card or driver?
What's the distance between these objects? It could be a floating-point rounding issue; I've run into it myself.
If so, and you're getting flickering models, you'll need to keep your camera and the active model at the origin and move the universe around them to keep precision in the important places (near the player).
I am developing an augmented reality project using Three.js and aruco-js. I wrote my code so that all my 3D objects are added to the (empty) scene at the beginning, but the data is only loaded on marker detection.
Now I want to create an interface for changing the objects' appearance, starting with the ability to scale an object.
So I created an updateObject() function to set the new values, like this:
function updateObject(object, rotation, translation)
{
...
...
...
// first method
object.scale.x = 200;
object.scale.y = 200;
object.scale.z = 200;
// second attempt
object.scale.set(300, 300, 300);
}
I tried both of the methods shown above to set the scale of my object, but they have no effect on the rendered images. The interesting thing is that the objects in my scene3d DOM object do carry the values I set in my function. But why doesn't this have any impact on the output?
I'm not very familiar with 3D programming in WebGL or Three.js, so if you could give me any hint about where the problem might originate, I would really appreciate an answer.
FIX:
I took a closer look at the 3D objects I was loading and discovered that they have a child called "mesh" nested inside another child. By changing the scale of only that mesh, I found out that it works this way. But I think it looks very ugly:
scene3d.children[visibleModels[0][0]+3].children[0].children[0].scale.set(2, 2, 2);
//visibleModels is a list of the markers/models that should be loaded
This is only a test for one single object to change, but at least I found a way to solve this. Is this the ordinary way to change the scale of objects? If you have a better solution or anything to add, feel free to contribute.
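The hard-coded children[0].children[0] lookup could be replaced by a small recursive search. This is a sketch, not part of the original fix: findFirstMesh is a hypothetical helper name, and it only assumes nodes follow THREE.Object3D's shape (a children array, and the isMesh flag that three.js sets on every THREE.Mesh):

```javascript
// Depth-first search for the first descendant flagged as a mesh.
// three.js sets `isMesh = true` on every THREE.Mesh instance, so this finds
// the nested "mesh" child without hard-coding children[0].children[0].
function findFirstMesh(node) {
  if (node.isMesh) return node;
  for (const child of node.children || []) {
    const found = findFirstMesh(child);
    if (found) return found;
  }
  return null; // no mesh anywhere in this subtree
}

// Hypothetical usage against the loaded model from the snippet above:
// const mesh = findFirstMesh(scene3d.children[visibleModels[0][0] + 3]);
// if (mesh) mesh.scale.set(2, 2, 2);
```

This keeps the scaling code working even if the loader wraps the mesh one level deeper or shallower than expected.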
You could also try scaling the object by changing its matrix directly with the THREE.Matrix4.makeScale method:
object.matrix.makeScale( xScale, yScale, zScale );
Or the even simpler THREE.Matrix4.scale method, which takes a THREE.Vector3:
object.matrix.scale( scaleVector );
Note that if object.matrixAutoUpdate is true (the default), the matrix is recomposed from position, quaternion and scale on every render, overwriting direct matrix edits; set object.matrixAutoUpdate = false if you take this route.