Three.js FBXLoader failed to load a big FBX model

When I load a large FBX model (about 100 MB) in three.js, Chrome crashes; it doesn't crash when I load a small FBX model. Can anyone tell me how to fix it?

Load less than 100MB?
100MB is an extremely large single asset for a page, three.js or otherwise. I highly recommend optimizing your asset. Even if you get the contents into memory, three.js may not be able to render them.
Assuming everything in the model is absolutely required, you could break it into smaller chunks and load it as a collection of separate assets. You might also have better luck with other 3D solutions (outside a JS webpage) that can handle such a (still quite fat) asset.
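If it helps, here is a minimal sketch of the chunked approach, assuming the model has already been split into several smaller .fbx files in a modelling tool (the file names and the existing scene variable are placeholders):

```javascript
import * as THREE from 'three';
import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader.js';

// Hypothetical chunk files -- the splitting itself has to happen in a modelling tool.
const chunkUrls = ['model_part_a.fbx', 'model_part_b.fbx', 'model_part_c.fbx'];

const loader = new FBXLoader();
const group = new THREE.Group();
scene.add(group); // assumes an existing `scene`

// Load the chunks sequentially so only one file is being parsed at a time,
// which keeps peak memory use lower than parsing everything at once.
async function loadChunks() {
  for (const url of chunkUrls) {
    const part = await loader.loadAsync(url);
    group.add(part);
  }
}

loadChunks().catch(console.error);
```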

Related

3d gltf model rendering optimization (threejs)

I have issues loading some 3D glTF models using three.js on iPad. Loading itself works fine: the models load on desktop computers and Android tablets, but in my case the page needs to run on an iPad, and there it keeps crashing because it uses up all of the memory while trying to render the model (I guess Android gives the browser more memory to use).
My question is how to optimize the model so it can run on iPad. My first thought was that the number of vertices/indices etc. affects rendering, but it turned out that a model with more vertices and indices was able to load while the "optimized" model couldn't. We loaded the models into the Babylon.js online previewer to inspect them, and what I noticed is that the older model with more vertices and indices had fewer meshes and fewer draw calls than the new one that doesn't work. So is that something we should focus on optimizing instead of the number of vertices and indices?
The problem is that we need to optimize the model to render on iPad, but I can't figure out which part of the model needs to be optimized, so any help would be much appreciated!
P.S. I tried implementing DRACO compression and DRACOLoader, but it doesn't help: it only compresses the file, and once the model needs to be rendered on screen that compression doesn't matter, because it's basically still the same 3D data that has to be rendered. I can share code if needed, but I don't think it matters because there are no issues with the loading; it's just that the model is not optimized.
Oversized textures were the problem. We had textures that were 2048x2048 px but contained just one flat color, so I reduced all of those textures to 1x1 px and it worked perfectly.
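For anyone hitting the same wall, a rough way to audit texture sizes after loading (a sketch, not from the original post; it assumes a glTF already loaded into `gltf`):

```javascript
// Walk the loaded scene and log the dimensions of every texture so that
// oversized, single-color maps like the 2048x2048 ones above stand out.
gltf.scene.traverse((node) => {
  if (!node.isMesh) return;
  const materials = Array.isArray(node.material) ? node.material : [node.material];
  for (const material of materials) {
    for (const key of ['map', 'normalMap', 'roughnessMap', 'metalnessMap', 'aoMap', 'emissiveMap']) {
      const tex = material[key];
      if (tex && tex.image) {
        console.log(node.name, key, `${tex.image.width}x${tex.image.height}`);
      }
    }
  }
});
```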

AnimatedSprite vs AnimatedImage in QML

In QML I have multiple ways of including animations. Among others there are
AnimatedImage
AnimatedSprite
which both seem to serve a similar purpose. With the right tools it is quite easy to turn a sprite sheet into an animated GIF or MNG file that could be handled by an AnimatedImage, and the other way around is not much harder.
In the documentation of Sprite they say:
The sprite engine internally copies and cuts up images to fit in an easier to read internal format, which leads to some graphics memory limitations. Because it requires all the sprites for a single engine to be in the same texture, attempting to load many different animations can run into texture memory limits on embedded devices. In these situations, a warning will be output to the console containing the maximum texture size.
On the other hand, AnimatedImage usually caches the individual frames, especially when the animation is supposed to loop (which might also put the maximum texture size at risk?).
I know that Sprite has a fancy state machine and related machinery, but AnimatedSprite seems to be stripped of this.
Since producing content for either of them is the same amount of work, I want to know whether one of them is superior in any use case, or whether their use cases and performance are effectively the same and the choice is just a matter of taste.
Actually I did not find a single reference that mentioned both in the same context...

WebGL vs CSS3D for large scatter plot of images

I am building a web application which will display a large number of image thumbnails as a 3D cloud and provide the ability to click on individual images to launch a larger view. I have successfully done this in CSS3D using three.js by creating a THREE.CSS3DObject for each thumbnail and then appending the thumbnail as an svg:image.
It works great for up to ~1200 thumbnails and then performance starts to drop off (very low FPS and long load times). By the time you hit 2500 thumbnails it is unusable. Ideally I want to work with over 10k thumbnails.
From what I can tell I would be able to achieve the same result by creating each thumbnail as a WebGL mesh with texture. I am a beginner with three.js though, so before I put in the effort I was hoping for guidance on whether I can expect performance to be better or am I just asking too much of 3D in the browser?
As far as rendering goes, CSS3D should be relatively okay for rendering a fairly large number of "sprites", but 10k would probably be too much.
WebGL would probably be a better option, though. You could also look into further optimizations, such as storing the thumbnails in an atlas texture.
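As a sketch of the atlas idea (not from the answer; `positions` and `uvRects` are hypothetical inputs describing where each thumbnail sits in space and inside the atlas texture), all quads can be packed into one geometry so the whole cloud is a single draw call:

```javascript
import * as THREE from 'three';

// Build one mesh containing every thumbnail quad, each mapped to its own
// sub-rectangle of a shared atlas texture.
function buildThumbnailCloud(atlasTexture, positions, uvRects, half = 1) {
  const count = positions.length;
  const verts = new Float32Array(count * 4 * 3);
  const uvs = new Float32Array(count * 4 * 2);
  const index = [];

  for (let i = 0; i < count; i++) {
    const [x, y, z] = positions[i];
    const { u0, v0, u1, v1 } = uvRects[i];
    const corners = [
      [x - half, y - half, z], [x + half, y - half, z],
      [x + half, y + half, z], [x - half, y + half, z],
    ];
    const cornerUvs = [[u0, v0], [u1, v0], [u1, v1], [u0, v1]];
    for (let c = 0; c < 4; c++) {
      verts.set(corners[c], (i * 4 + c) * 3);
      uvs.set(cornerUvs[c], (i * 4 + c) * 2);
    }
    const o = i * 4;
    index.push(o, o + 1, o + 2, o, o + 2, o + 3);
  }

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(verts, 3));
  geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
  geometry.setIndex(index);

  const material = new THREE.MeshBasicMaterial({ map: atlasTexture, side: THREE.DoubleSide });
  return new THREE.Mesh(geometry, material);
}
```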
But rendering is just one part. Event handling can be a serious bottleneck if not handled carefully.
I don't know how you're handling the mouse click event and the transition to the full-size image, but attaching an event listener to each of 2.5k+ objects probably isn't a good choice anyway. With pure WebGL you could do the picking in image space: encode each tile with a different id/color, render that, and read back the pixel under the cursor to determine what was clicked. I imagine a WebGL/CSS3D combo could use this approach as well.
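A rough sketch of that image-space picking in three.js (assuming a parallel `pickingScene` in which every tile is drawn with a flat color that encodes its integer id; all names are placeholders):

```javascript
import * as THREE from 'three';

const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixel = new Uint8Array(4);

// mouseX/mouseY are the cursor position in device pixels relative to the canvas.
function pickAt(renderer, pickingScene, camera, mouseX, mouseY) {
  // Render only the 1x1 region under the cursor from the picking scene.
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    mouseX, mouseY, 1, 1
  );
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  // Read back the single pixel and decode the id from its RGB channels.
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}
```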
To answer the question: WebGL should handle 10k fine. You may need to think about some performance optimization if your rectangles are big and take up a significant amount of the screen, but there are ways around that if the problem appears.

Is the JSON model format better for THREE.js?

I am using Three.js to create a simple game. I load about 100 low-poly models in OBJ format, but performance is not smooth; all the models together are no more than 18 MB. If I use the JSON format, will it be faster, even though the size will be more than double?
I tried Collada, but for simple objects like mine OBJ is faster. If JSON is not the best solution, what is?
No single file format is better overall; it depends on your needs and requirements, the external software used, and whether the asset contains animation. Personally I don't use JSON that much, I use OBJ, but JSON is heavily supported by three.js. That's more of an opinion, though.
There are many factors as to why your application can be heavy.
Without the source code or the model files themselves I can only speculate.
Few things to consider:
Are your models optimized as well as they can be? 100 models in one scene is quite a lot to show at once at 18 MB; does that include textures?
Are textures compressed and reused? This will increase performance.
Shadows, lighting, and animation types all have an impact; Google has plenty of resources to offer you.
There are several techniques to keep your poly count down; subdivision surfaces are a good example, and there is a really useful article on this:
http://www.kadrmasconcepts.com/blog/2011/11/06/subdivision-surfaces-with-three-js/
Also consider LOD (level of detail): objects are rendered with more or less detail depending on how far or near they are.
A great explanation is here:
http://www.pheelicks.com/2014/03/rendering-large-terrains/
Three.js supports this without any added libraries.
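A small sketch of the built-in THREE.LOD object (the three geometries are assumed to be pre-built versions of the same model at different poly counts):

```javascript
import * as THREE from 'three';

const lod = new THREE.LOD();
// addLevel(object, distance): the mesh shown once the camera is at least
// `distance` units away, so cheaper meshes take over as you move back.
lod.addLevel(new THREE.Mesh(highPolyGeometry, material), 0);
lod.addLevel(new THREE.Mesh(mediumPolyGeometry, material), 50);
lod.addLevel(new THREE.Mesh(lowPolyGeometry, material), 200);
scene.add(lod);

// WebGLRenderer switches levels automatically each frame while
// lod.autoUpdate is true (the default), so no extra render-loop code is needed.
```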
Detail, and how you render it, is the key to the best performance.
Even how you have set up your project can have a major influence. Take a look at your functions and how you use them; for example, mousemove and DOM element click handlers can slow your three.js app dramatically if they are not optimized and used efficiently.
Reuse and share is your best option; there is no point in loading the same model twice just because one is blue and the other is green.
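For the reuse point, a tiny sketch: keep one geometry from a single load and vary only the material (the names here are hypothetical):

```javascript
import * as THREE from 'three';

// One geometry from a single load, shared by both meshes; only the
// materials differ, so nothing is loaded or stored twice.
const blueMesh = new THREE.Mesh(sharedGeometry, new THREE.MeshStandardMaterial({ color: 0x2244ff }));
const greenMesh = new THREE.Mesh(sharedGeometry, new THREE.MeshStandardMaterial({ color: 0x22ff44 }));

greenMesh.position.x = 5; // the copies can still be placed independently
scene.add(blueMesh, greenMesh);
```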
I think there is no single better format; it really depends on what you need and what you don't. For me, though, I would go with OBJ!

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2D-s (at least on my PC) can't have dimensions larger than 2048*2048. No problem, I quickly wrote a custom texture class which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2D-s as needed, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. In the old version I loaded the Texture2D-s with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load a, say, 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, as that doesn't seem to have any such limitation. (Later I convert this Bitmap to a list of Texture2D-s.)
The problem is that loading the textures this way is noticeably slower, because now even images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to analyze an image file and check its dimensions (width; height) before even loading it into my application. If it is under the texture limit I can load it straight into a Texture2D, without loading it into a Bitmap and then converting it into a single-element Texture2D list.
Is there any (clean and possibly very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I guess the slowest part here is the file opening/seeking (probably hardware-bound, when it comes to HDDs), not streaming the contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
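The linked answer's approach boils down to parsing only the image header rather than decoding the whole file. As a rough illustration of the idea for PNG files (shown as a JavaScript/Node sketch; the width and height live at fixed offsets in the IHDR chunk, so only the first 24 bytes need to be read):

```javascript
const fs = require('node:fs');

// PNG layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR" tag,
// then width and height as big-endian 32-bit integers (offsets 16 and 20).
function pngDimensions(path) {
  const fd = fs.openSync(path, 'r');
  const header = Buffer.alloc(24);
  fs.readSync(fd, header, 0, 24, 0); // read only the first 24 bytes
  fs.closeSync(fd);
  return {
    width: header.readUInt32BE(16),
    height: header.readUInt32BE(20),
  };
}
```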
