Rendering large STL files in three.js

I have an STL model with more than 3 million vertices that I have to render on screen. Right now, loading the model with the three.js STL loader crashes the browser. Based on some research, I'm considering the following approach:
1. Stream the STL file to the client and have the STL loader load the model in 1 MB chunks.
2. Every time a chunk gets loaded, create a buffer geometry and construct a mesh from it. Do this for every chunk and add all the meshes to the scene. The scene will then contain multiple meshes, but look like the original model.
I have a pretty good idea of how to do step 2, but is the first part possible? Can a three.js STL loader load an STL in chunks (assuming it's being delivered to the client in chunks) as I've described?

The three.js STL loader won't do that for you (it always loads the whole file and then parses it).
You'd have to load and parse the file in chunks yourself. It's a pretty straightforward file format (https://en.wikipedia.org/wiki/STL_(file_format)); however, you'd have to handle the case where an STL triangle record spans two chunks.
Another option, if it makes sense for your application, is to read and preprocess the file into a format that can be read in chunks in the browser.
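To illustrate the first option, here's a minimal sketch of a chunked binary-STL reader (assuming a binary, not ASCII, STL). It reads the response body as a stream via fetch(), carries any partial 50-byte triangle record over to the next chunk, and builds one BufferGeometry per chunk as described in step 2. Function names, the material choice, and the error handling are all illustrative:

    import * as THREE from 'three';

    const HEADER_BYTES = 84; // 80-byte header + 4-byte triangle count
    const TRI_BYTES = 50;    // normal (12) + 3 vertices (36) + attribute (2)

    async function streamBinarySTL(url, scene) {
      const response = await fetch(url);
      const reader = response.body.getReader();
      let pending = new Uint8Array(0); // unparsed bytes carried between chunks
      let headerSkipped = false;

      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;

        // Join the leftover from the previous chunk with the new bytes.
        const bytes = new Uint8Array(pending.length + value.length);
        bytes.set(pending);
        bytes.set(value, pending.length);

        let offset = 0;
        if (!headerSkipped) {
          if (bytes.length < HEADER_BYTES) { pending = bytes; continue; }
          offset = HEADER_BYTES;
          headerSkipped = true;
        }

        // Parse only complete 50-byte records; save the partial one for later.
        const triCount = Math.floor((bytes.length - offset) / TRI_BYTES);
        pending = bytes.slice(offset + triCount * TRI_BYTES);
        if (triCount === 0) continue;

        const view = new DataView(bytes.buffer, offset);
        const positions = new Float32Array(triCount * 9);
        for (let t = 0; t < triCount; t++) {
          const base = t * TRI_BYTES + 12; // skip the per-face normal
          for (let i = 0; i < 9; i++) {
            positions[t * 9 + i] = view.getFloat32(base + i * 4, true); // little-endian
          }
        }

        // One geometry/mesh per chunk, as in step 2 of the question.
        const geometry = new THREE.BufferGeometry();
        geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
        geometry.computeVertexNormals(); // flat normals for non-indexed triangles
        scene.add(new THREE.Mesh(geometry, new THREE.MeshNormalMaterial()));
      }
    }

Note that fetch() delivers chunks at whatever size the network provides rather than a fixed 1 MB, but the carry-over logic is the same either way.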

Related

unity3d - large file with animations

I have a project that uses a large truck model (around 500 MB); a very detailed CAD model converted to FBX format in 3ds Max.
There are animations in the FBX model too.
Question 1:
Should I have all the animations as clips in the same FBX file, or in separate animation files?
Having a separate FBX file for each animation will increase the overall app size.
Question 2:
How do I optimize the mesh, which is around 500 MB with plenty of child objects (it is a very detailed mesh), for performance? Will culling reduce draw calls, or will combining meshes reduce draw calls? Is there a way to reduce the tri/poly count of the mesh for optimization?
If you really care about app size, go with a single file. Otherwise it's generally much more convenient to have separate .fbx files for animations.
You can try limited dissolve and decimation in Blender to reduce the vertex/poly count.

Three.js FBXLoader failed to load a big FBX model in three.js

When I load a large (100 MB) FBX model in three.js, Chrome crashes; it doesn't crash when I load a small FBX model. Can anyone tell me how to fix this?
Load less than 100 MB?
100 MB is an extremely large single asset for a page, three.js or otherwise. I highly recommend optimizing your asset. Even if you get the contents into memory, three.js may not be able to render them.
Assuming everything in the file is absolutely required, you could break the model into smaller chunks and load them as a set of assets; a sketch of that follows below. You might also have better luck with other 3D solutions (a non-JS page) that could handle such a (still quite fat) asset.
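Loading pre-split parts in parallel is straightforward. A minimal sketch, assuming hypothetical part files and using the FBXLoader that ships with the three.js examples:

    import * as THREE from 'three';
    import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader.js';

    const scene = new THREE.Scene();
    const loader = new FBXLoader();

    // Hypothetical pre-split pieces of the original 100 MB model.
    const partUrls = ['model_part1.fbx', 'model_part2.fbx', 'model_part3.fbx'];

    Promise.all(partUrls.map((url) => loader.loadAsync(url)))
      .then((parts) => parts.forEach((part) => scene.add(part)))
      .catch((err) => console.error('failed to load a part:', err));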

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem; I quickly wrote my own texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds as needed, and displays them appropriately.
When I made this change I also had to update the texture-loading method. In the old version I loaded the Texture2Ds with Texture2D.FromStream(), which worked pretty well, but XNA doesn't seem to be able to store/load textures above the limit: if I tried to load, say, a 4092*2048 PNG file, I ended up with a 2048*2048 Texture2D in my app. So I switched to loading the images with [System.Drawing.] Image.FromFile and casting to a Bitmap, which doesn't seem to have any such limitation (later converting the Bitmap to a list of Texture2Ds).
The problem is that loading textures this way is noticeably slower, because now even images under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I'm looking for a way to check an image file's dimensions (width and height) before loading it into my application. If it's under the texture limit, I can load it straight into a Texture2D without loading it into a Bitmap and converting it into a single-element Texture2D list.
Is there any (clean and preferably fast) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I'd guess the slowest part here is opening/seeking the file (probably hardware-bound on HDDs), not streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
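The linked answer isn't reproduced here, but the underlying technique is to read only the fixed-size image header. As an illustration of the idea in Node.js (the same byte offsets apply from C# with a FileStream and BinaryReader), a PNG stores its width and height as big-endian 32-bit integers in the IHDR chunk, at bytes 16-23 of the file; the file and dimensions below are made up:

    const fs = require('fs');

    // Read a PNG's dimensions without decoding the image:
    // 8-byte signature, 4-byte chunk length, 4-byte "IHDR" tag,
    // then width at offset 16 and height at offset 20 (big-endian).
    function pngDimensions(path) {
      const fd = fs.openSync(path, 'r');
      const head = Buffer.alloc(24);
      fs.readSync(fd, head, 0, 24, 0); // only the first 24 bytes are needed
      fs.closeSync(fd);
      return { width: head.readUInt32BE(16), height: head.readUInt32BE(20) };
    }

    console.log(pngDimensions('large_texture.png')); // e.g. { width: 4096, height: 2048 }

Since only a couple dozen bytes are read, the cost is dominated by opening the file, which is still far cheaper than decoding the whole image into a Bitmap.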

Sharing VBOs across multiple mesh objects

I'm working on a very small game engine that uses OpenGL ES 2.0. I'm having a bit of a design issue with integrating VBOs into my Mesh Class.
The problem is that I don't want to instantiate a new VBO for each mesh, and I want the VBO size to be determined by the number of meshes I load into it (not just a fixed size of 2MB or something).
Since there's no realloc function for VBOs, I need to batch load all my vertex data at once. This is ok, since I only have 4 or 5 small meshes. So I created a MeshList class.
I call MeshList.AddMesh(Mesh mesh) and it aggregates the vertex/index data of the mesh object and returns the offsets into the array of vertex data/index data back to the mesh that was added. This way the mesh knows where it is in the VBO (but not which VBO it's in).
However, none of the MeshList data is uploaded into a VBO until I call MeshList.BindToVBO(). But now, none of my meshes know which VBO they're in. So I was thinking of creating an array of pointers in MeshList that point to integer member variables in each Mesh class that would hold the VBO Handle. This way, when BindToVBO() is called, it iterates over the pointer array and updates the VBO Handles in the mesh objects.
I figured, this way it gives me the flexibility of having different mesh objects in different VBOs or all in one VBO. The only concern I have is whether or not this is a good design.
It's not clear to someone glancing at the code that MeshList.BindToVBO() updates a whole bunch of mesh objects. I mean, MeshList does interact with all of the Mesh objects prior to the BindToVBO() call, but there's nothing explicitly saying that passing a Mesh object to MeshList.AddMesh() essentially subscribes its VBOHandle member to updates at some point in the future.
I've tried to make this as clear as I can. Let me know if something needs clarification.
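For concreteness, here's a minimal sketch of the design being described, written against WebGL (which mirrors the OpenGL ES 2.0 API). The MeshList/AddMesh/BindToVBO names come from the question; the rest is illustrative, and meshes are assumed to carry a flat vertices array. In a garbage-collected language the pointer array isn't needed, since MeshList can keep references to the Mesh objects and write the handle back directly:

    class MeshList {
      constructor() {
        this.meshes = [];
        this.vertexData = []; // aggregated vertex attributes of all meshes
      }

      addMesh(mesh) {
        // Tell the mesh where its vertices start in the shared buffer.
        mesh.vertexOffset = this.vertexData.length;
        this.vertexData.push(...mesh.vertices);
        this.meshes.push(mesh);
      }

      bindToVBO(gl) {
        // Upload everything at once, since VBOs can't be reallocated.
        const vbo = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(this.vertexData), gl.STATIC_DRAW);
        // Back-propagate the handle so each mesh knows which VBO it lives in.
        for (const mesh of this.meshes) mesh.vboHandle = vbo;
      }
    }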
Honestly, to me this sounds like a lot of trouble for a dubious payoff. Do you have a reason to believe that putting multiple meshes in the same buffer is going to make a noticeable difference in your performance?
It sounds like premature optimization to me.
Sure, if you have a particle system with 50,000 particles I could see wanting that to be in a shared buffer, but in general I don't know if there's a benefit to storing two arbitrary meshes in the same buffer. It just sounds like a huge potential for bugs and headaches.

Vertex Animation stored in FBX file without using Point Cache?

Everything I've found seems to indicate that in order to export a vertex animation a point cache file must also be generated, but that means in addition to the FBX file a whole new folder with that cache data must also be built. Is there no way to store the (vertex) animation data entirely in the FBX file?
That's correct. The FBX stores the mesh/topology, and the point cache stores the offsets of the vertices over time.
The FBX file format stores mesh topology, shapes and skin deformers, but not the actual vertex cache data since it can be of various formats, such as MCX (Maya), PC2 (Max) or ABC (Alembic). Also, for access and performance reasons, it is preferable that the cache data stays in a separate file so that software can read asynchronously from it without having to deal with the complexity of the FBX data model.
