Cache scene in Three.js

PIXI.js has Container#cacheAsBitmap, which causes the container to "render" itself to an image, save that image, and render it in place of its children; when a child is added, removed, or updated, the cache is regenerated.
What's the alternative for Three.js (but instead of an image it would be a mesh)?

I may not be understanding your question properly, but your reply to Sabee's answer was helpful. It sounds like you're looking to either merge multiple geometries into a single mesh or implement a form of model instancing, with the goal of reducing draw calls.
There is more than one way to accomplish this, depending on your requirements. You can merge multiple geometries into a single geometry object, and provide either one material or an array of materials (where each index corresponds to one of the merged geometries). You can also use GPU-accelerated instancing to achieve a similar effect with only a single copy of the geometry.
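For example, here is a minimal merge sketch, assuming a recent three.js build where BufferGeometryUtils ships with the examples (older releases call the same helper mergeBufferGeometries), and assuming scene comes from your own setup:

    import * as THREE from 'three';
    import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

    // Bake each geometry's placement into its vertices before merging
    const boxGeometry = new THREE.BoxGeometry(1, 1, 1);
    const sphereGeometry = new THREE.SphereGeometry(0.5, 16, 16).translate(2, 0, 0);

    const boxMaterial = new THREE.MeshStandardMaterial({ color: 0xff0000 });
    const sphereMaterial = new THREE.MeshStandardMaterial({ color: 0x00ff00 });

    // useGroups = true keeps one group per source geometry, so the
    // material array below is indexed per original mesh
    const merged = BufferGeometryUtils.mergeGeometries([boxGeometry, sphereGeometry], true);
    scene.add(new THREE.Mesh(merged, [boxMaterial, sphereMaterial]));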
I'll refer you to Dusan Bosnjak's excellent Medium series on instancing, which starts here: https://medium.com/@pailhead011/instancing-with-three-js-36b4b62bc127
As well, here are the three.js examples regarding instancing: https://threejs.org/examples/?q=instanc#webgl_buffergeometry_instancing_dynamic
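And a rough InstancedMesh sketch (InstancedMesh only exists in newer three.js releases; geometry, material, and scene are assumed to come from your own setup):

    const count = 1000;
    const instanced = new THREE.InstancedMesh(geometry, material, count);
    const transform = new THREE.Object3D(); // scratch object for composing matrices

    for (let i = 0; i < count; i++) {
      transform.position.set(Math.random() * 50, 0, Math.random() * 50);
      transform.updateMatrix();
      instanced.setMatrixAt(i, transform.matrix);
    }

    instanced.instanceMatrix.needsUpdate = true;
    scene.add(instanced); // all 1000 copies render in a single draw call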

Pixi.js is a 2D JavaScript library that uses WebGL to render images (frames) into an HTML5 canvas. Three.js allows the creation of GPU-accelerated (Graphical Processing Unit) 3D animations using WebGL.
The browser cannot cache rendered 3D frames; that work happens in the GPU's render pipeline and depends on the hardware it runs on. There are helpful posts explaining what's going on behind the scenes.
But you can cache your assets in the browser: images, JSON objects of 3D models, and so on.
In Three.js, the Cache class is a global object used by asset loaders (TextureLoader, ImageLoader, AudioLoader, ...). It is disabled (false) by default. To enable it, set THREE.Cache.enabled = true;
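A minimal sketch of what that looks like in practice (the texture path is hypothetical):

    THREE.Cache.enabled = true; // must be set before the loaders run

    const loader = new THREE.TextureLoader();
    loader.load('textures/brick.jpg', (texture) => {
      // first load: fetched over the network and stored in THREE.Cache
    });
    loader.load('textures/brick.jpg', (texture) => {
      // second load: served from THREE.Cache, no second request
    });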
I think the browser should cache the textures by default for performance reasons, but if you want to be sure, simply force-enable the cache in Three.js. The creator of Three.js has also answered this question.

Related

FPS drop when model appears - three.js

When I use three.js to load my Collada file, FPS is only 5-7.
I tried to optimize it with Blender and MeshLab; I can get it to load smoothly, but the model becomes worse.
Can anyone explain to me why my model is rendered with a low frame rate?
You can download my model right here.
Your model is rendered with 66011 draw calls. You can see this information by inspecting the WebGLRenderer.info object in your debugger. Such a high number of draw calls is unfavorable and most likely the main reason for your poor performance.
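If you want to watch this number yourself, a quick sketch (renderer, scene, and camera are assumed to come from your own setup):

    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
      // draw calls issued for the frame just rendered
      console.log(renderer.info.render.calls);
    }
    animate();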
So the first thing you should try is to merge geometries in your content creation tool (e.g. Blender). Also avoid the usage of multiple materials per 3D object.
BTW: Instead of using Collada, export your model as glTF and then load it via GLTFLoader. It's the recommended 3D format of three.js. More information right here:
https://threejs.org/docs/index.html#manual/en/introduction/Loading-3D-models
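A minimal GLTFLoader sketch, assuming a module build of three.js (the model path is hypothetical):

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const loader = new GLTFLoader();
    loader.load('models/scene.gltf', (gltf) => {
      scene.add(gltf.scene); // scene assumed to exist
    }, undefined, (error) => console.error(error));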

Monogame Extended Tiled

I'm making an isometric city builder using Monogame Extended and Tiled. I've got everything set up, and now I need to somehow access specific tiles so I can change them at runtime as the user clicks on a tile to build an object. The problem is, I can't seem to find a map.GetLayer("Layername").GetTile(x,y) or .SetTile(x,y) function or anything similar.
Now what I can do is edit the XML (.tmx) file, which has a matrix in it that represents the map and its drawn tiles. The problem with this is that I need to build the map in the content pipeline again after editing for the changes to be displayed. I can't really build at runtime, or can I?
Thanks in advance!
Something like this will get you part way there.
var tileLayer = map.GetLayer<TiledMapTileLayer>("layername");
TiledMapTile tile;

// TryGetTile returns false if there is no tile at (x, y)
if (tileLayer.TryGetTile(x, y, out tile))
{
    // do something with tile
}
However, there's only a limited amount of things you can actually do with the tile once you've got it from the map.
There's no such thing as a SetTile method because changing tile data at runtime is not currently supported. This is a limitation of the renderer, which has been optimized for rendering very large maps by building static geometry that can't be changed once it's loaded into the graphics card.
There has been some discussion about building another renderer that would handle dynamic map changes but at this stage nothing like that has been implemented in the library. You could always have a go at implementing a simple renderer yourself, a really basic one is not as hard as you might think.
An alternative approach to dealing with this kind of problem might be to pre-process the map data before giving it to the renderer. The idea would be to effectively separate the layers of the map that are static from those that are dynamic and render the dynamic tiles as normal sprites. Just a thought, I'm not sure about the details of how this might work.
I plan to eventually revisit the Tiled API in the next major version of MonoGame.Extended. Don't hold your breath, these things can take a lot of time, but I am paying attention to the feedback and kinds of problems people are experiencing with the existing API.
Since the map data is stored in an XML (or CSV) file which runs through the Content Pipeline, you cannot change it at runtime.
Anyway, in a city builder you usually do not change existing tiles; you place objects on top of existing tiles.

The render loop freezes temporarily at first "contact" of the camera frustum with multiple meshes

I have a basic scene into which I am loading objects using the JSONLoader. The objects themselves have a very small footprint; for example, a milk carton: 560 kB with textures, 34 kB JSON file.
When rendering, say, 10 new objects, if I orbit the camera to bring them into view, the animation loop freezes for a second or so. After this first freeze, the camera orbits smoothly no matter how many objects there are. Loading the objects dynamically would be a solution, but for my specific use case I still need to load at least 50 objects at first load.
Update - I have added the preload functions I use in my production project, and I also added 21 different models just to illustrate my specific scenario. I have tried the following solution:
preloading the json files,
reading the source path to the textures,
loading them with the texture loader,
overwriting the maps of the JSON material objects with the preloaded textures,
finally releasing the objects into the scene. The same behavior occurs again.
Try clicking the setCamera link to see how laggy it is. I need to cut this lag to 0 ms. Thanks for the support!
Working demo: http://demo.adrianmoisa.ro/flexikom-loader/ First try to orbit the camera up and down to check it's working ok, then left and right. Any advice is much appreciated!
Looking at your code, I see you are loading the same object 10 times, creating 10 meshes that are all the same. All use the same geometry and the same material. This is where your lag comes from: both the loading (asynchronous requests to the server) and the object creation.
What you need to do is load the object once and create one material that you assign to it. Then you clone() the object 10 times, assigning each cloned object the position you want.
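A rough sketch of that idea, using the JSONLoader from the question (it has since been removed from three.js; the model path is hypothetical, and on very old builds you may need to wrap the material array in THREE.MultiMaterial):

    const loader = new THREE.JSONLoader();
    loader.load('models/milk-carton.json', (geometry, materials) => {
      const prototype = new THREE.Mesh(geometry, materials);
      for (let i = 0; i < 10; i++) {
        const copy = prototype.clone(); // shares geometry/material, no re-download
        copy.position.set(i * 2, 0, 0);
        scene.add(copy);
      }
    });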
Gaitat is correct that you should not be loading the same object 10 times. But I think the lag is directly related to the textures.
You should load the textures outside of the loop.
As it is now, you are loading 30 textures onto the GPU when you could be loading just 2 (at least I think this is how it is working).
Profiling the page shows that texture2D is taking a lot of time.
I am almost certain that this will stop the lag.
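Something along these lines, with the texture paths and cartonGeometry as placeholders for whatever your models actually use:

    // Load each unique texture exactly once, before the loop
    const textureLoader = new THREE.TextureLoader();
    const diffuseMap = textureLoader.load('textures/carton-diffuse.jpg');
    const normalMap = textureLoader.load('textures/carton-normal.jpg');

    // One material referencing the two shared textures
    const material = new THREE.MeshPhongMaterial({ map: diffuseMap, normalMap: normalMap });

    for (let i = 0; i < 10; i++) {
      const mesh = new THREE.Mesh(cartonGeometry, material); // 2 textures total, not 30
      mesh.position.set(i * 2, 0, 0);
      scene.add(mesh);
    }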

How to redraw partially in OpenGL ES 2.0

As per my needs, I want to redraw only some part of the scene each frame, instead of redrawing the entire scene, and only if some portion of it has been updated.
Is there a way to do that in OpenGL ES 2.0?
Any input on this would be really helpful.
OpenGL does not really support incremental rendering. You need to draw the entire frame every time you are asked to redraw.
The closest I can think of is that you render your static data to an offscreen framebuffer, using a FBO (Frame Buffer Object). You should be able to find plenty of examples online and in books if you look for keywords like "OpenGL FBO". You will be using calls like glGenFramebuffers(), glBindFramebuffer(), glFramebufferTexture2D(), etc.
Once you rendered the static content into an FBO, you can copy it to the default framebuffer at the start of each redraw, and then render the dynamic content on top of it. This can be a worthwhile method if rendering the static content is very expensive. Otherwise, doing the copy from FBO to default framebuffer can be more expensive than simply re-rendering the static content each time.
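Since WebGL 1 exposes the same ES 2.0 framebuffer API under slightly different names (glGenFramebuffers becomes gl.createFramebuffer, and so on), here is a sketch of the setup in those terms; gl, width, and height are assumed to come from your own context:

    // One-time setup: a texture-backed FBO for the static content
    const staticTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, staticTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, staticTexture, 0);

    // ... render the static content into the FBO once ...

    gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the default framebuffer

    // Each frame: draw a fullscreen quad sampling staticTexture (ES 2.0 has no
    // glBlitFramebuffer), then render the dynamic content on top of it.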
The above is pretty easy if the static content is in the background, and the dynamic content is completely in front of it. If static and dynamic content overlap, it gets trickier. You will then have to restore the depth buffer resulting from rendering the static content each time before starting to render the dynamic content. I can't think of a good way to do that in ES 2.0. The features to do this relatively smoothly (depth textures, glBlitFramebuffer) are only in ES 3.0 and later.
There is one other option that I don't think is very appealing, but I wanted to mention it for completeness' sake: EGL defines an EGL_SWAP_BEHAVIOR attribute that can be set to EGL_BUFFER_PRESERVED. One big caveat is that it's optional and not supported on all devices. It also only preserves the color buffer, and not auxiliary buffers like the depth buffer. If you want to read up on it anyway, see eglSwapBuffers and eglSurfaceAttrib.

How to import Blender 3D animation to iPhone OpenGL ES?

I am trying to do animations on iPhone using OpenGL ES. I am able to do the animation in Blender 3D software. I can export as a .obj file from Blender to OpenGL and it works on iPhone.
But I am not able to export my animation work from Blender 3D to OpenGL. Can anyone please help me to solve this?
If you have a look at this article by Jeff LaMarche, you'll find a blender script that will output a 3D model to a C header file. There's also a followup article that improves upon the aforementioned script.
After you've run the script, it's as simple as including the header in your source, and passing the array of vertices through your drawing function. Ideally you'd want a method of loading arbitrary model files at runtime, but for prototyping this method is the simplest to implement.
Seeing as you already have a method of importing models (obj) then the above may not apply. However, the advantage of using a blender script is that you can then modify the script to suit your own needs, perhaps also exporting bone information or model keyframes.
Well first off, I wouldn't recommend .obj for this purpose since the obj file format doesn't support animation, only static 3D models. So you'll need to export the animation data as a separate file that you load at the same time as the obj.
Which file format I would recommend depends on what exactly your animations are. I don't remember off the top of my head what file formats Blender supports, but as I recall it does not export Collada files with animation, which would be the most general recommendation. Other options would be md2 for character animations, or 3ds for simple "rigid objects moving around" animations. I think Blender's FBX exporter will work, although that file format may be too complicated for your needs.
That said, and assuming you only need simple rigid-object movements, you could use .obj for the 3D model shapes and then write a simple Python script to export a file from Blender that has the keyframes listed, with the frame, position, and rotation for each keyframe. Then load that data in your code and play back those keyframes on the 3D model.
This is an old question, and since then some new iOS frameworks have been released, such as GLKit. I recommend relying on them as much as possible, since they take care of many inherent conversions like this, though I haven't researched the specifics. Also, while not on iOS, the new scene graph technology for OS X (which will likely arrive on iOS in the future) takes all this quite a bit further, and a crafty individual could do some conversions with that tool and then take the output to iOS.
Also have a look at SIO2.
I haven't used recent versions of Blender, but my understanding is that it supports exporting mesh animation as a sequence of .obj files. If you can already display a single .obj in your app, then displaying several of them one after another will achieve what you want.
Now, note that this is not the most efficient way to export this type of animation, since each .obj file will contain a lot of duplicated info. If your mesh structure stays fixed over time (i.e. only the vertices move, with the polygon structure, UV coords, etc. all fixed), then you can import the entire first .obj and, from the rest, read just the vertex array.
If you wanted to optimize this even more, you could compress the vertex arrays so that you only store the differences from the previous frame of the animation.
Edit: I see that Blender 2.59 has export to COLLADA. According to the Blender manual, you can export object transformations, and you can also export baked animation for rigged objects. The benefit for you in supporting the COLLADA format in your iPhone app is that you are free to switch between animation tools, since most of them export this format.
