I'm about to perform some benchmark experiments for WebGL (Three.js). I'm thinking about loading time (from link click to loaded scene), loading time for files, and frame rate.
Across many tests I need to gather statistics that can be put into diagrams. All I have now is an FPS counter. Are there any good scripts for this?
What I need is a way of saving the FPS number for each frame to a text file,
and time counters for loading the scene and the files.
Please help!
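For the per-frame FPS log, one minimal sketch (the FpsLog name is my own, not from any library) is to record a timestamp every frame, convert the deltas to FPS values, and serialize the samples as one value per line; in the browser you would then save the text via a Blob download:

```javascript
// Minimal per-frame FPS logger. FpsLog is a hypothetical helper name;
// in the browser you would feed it performance.now() from your render loop.
class FpsLog {
  constructor() {
    this.samples = []; // one FPS value per rendered frame
    this.lastTime = null;
  }

  // Call once per rendered frame with a millisecond timestamp.
  tick(nowMs) {
    if (this.lastTime !== null) {
      const delta = nowMs - this.lastTime;
      this.samples.push(1000 / delta);
    }
    this.lastTime = nowMs;
  }

  // One FPS value per line, ready to save as a .txt file.
  toText() {
    return this.samples.map((fps) => fps.toFixed(1)).join("\n");
  }
}

// Simulate three frames 16 ms apart (~62.5 FPS each).
const log = new FpsLog();
[0, 16, 32].forEach((t) => log.tick(t));
console.log(log.toText()); // prints two lines, both "62.5"
```

In a real page you would call `log.tick(performance.now())` inside your `requestAnimationFrame` loop, and for the load timers record `performance.now()` once at the link click and once when the scene or file finishes loading. Saving the text from the browser can be done with `new Blob([log.toText()])` and `URL.createObjectURL`.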
I have started using Blender to make simple 3D animations, but my laptop heats up too quickly when rendering, and even simple scenes take too long to render.
I just want a quick and easy way to output a low-res image of the scene, like what I see in the 3D View, but I haven't found one.
Preferably I will be using Cycles.
Can someone help me?
Making renders faster is almost a university degree by itself. Here is a forty-seven-minute video with some tips.
The simplest step with any render engine is to reduce the resolution. In the render dimensions panel set the percentage to 50% or 25%.
Using 2.79 you can do OpenGL renders, which capture what you see in the viewport. You can enable the "Only Render" display option to hide all the viewport overlays.
For Cycles, set the sample count low and use the denoising options. The daily builds, which will become 2.81 when released, also include the new Denoise compositing node, which uses Intel's OpenImageDenoise for better results.
If you really want to speed up render times, use Blender 2.80 and the Eevee render engine; it is almost realtime and offers amazing results. With 2.80 almost all of the Cycles material nodes produce the same result in Eevee, and this has been improved even further for 2.81.
I have been doing some research on three.js to display a 3D model on a webpage, and I succeeded. However, the load time (preprocessing time) is pretty slow: as you can see here, it is 11+ seconds, and my file size is 69.5 MB. During that time I also cannot do anything else on the webpage, like click a button or write something in a textbox. I tried to search for solutions to reduce the load time, but most of them reduce my file size, which reduces the resolution too.
So my questions are:
How can I reduce the load time (preprocessing time) without reducing the file size?
How can I load models asynchronously so that I can still interact with the webpage?
Thank you.
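On the second point, the page usually freezes not during the download (which is already asynchronous) but during the synchronous parse afterwards. One general pattern (a sketch with illustrative names, not a specific three.js API) is to split blocking work into chunks and yield back to the event loop between them, so clicks and typing stay responsive:

```javascript
// Process a large array in chunks, yielding to the event loop between
// chunks so pending UI events (clicks, typing) can still be handled.
// processChunked and chunkSize are illustrative names, not library APIs.
async function processChunked(items, chunkSize, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(fn(item));
    }
    // Give the browser a chance to run queued events before continuing.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}

// Example: square 10 numbers in chunks of 4.
processChunked([...Array(10).keys()], 4, (x) => x * x).then((r) =>
  console.log(r) // [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
);
```

For really heavy parsing, moving the work into a Web Worker avoids blocking the main thread entirely, at the cost of transferring the parsed data back.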
I am loading a ThreeJS scene on a website and I would like to optimize it depending on the capacity of the graphic card.
Is there a way to quickly benchmark the client's computer and get data that will let me decide how demanding or simple my scene should be in order to run at a decent FPS?
I am thinking of a benchmark library that can be easily plugged in, or a benchmark-as-a-service. And it has to run without the user noticing.
You can use stats.js to monitor performance. It is used in almost all three.js examples and is included in the three.js repository.
The problem with this is that the framerate is locked to 60 FPS, so you can't tell how many milliseconds are lost to vsync.
The only thing I found to be reliable is to measure the render time and increase quality if it's under a limit, or decrease it if it takes too long.
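That render-time feedback loop can be sketched roughly like this (the function name and thresholds are illustrative, not from stats.js or three.js):

```javascript
// Adjust a quality factor (0..1) based on a measured render time in ms.
// Thresholds are illustrative; tune them for your scene and target FPS.
function adjustQuality(quality, renderTimeMs) {
  const budget = 1000 / 60; // ~16.7 ms per frame at 60 FPS
  if (renderTimeMs > budget) {
    return Math.max(0.25, quality - 0.05); // too slow: step quality down
  }
  if (renderTimeMs < budget * 0.5) {
    return Math.min(1.0, quality + 0.05); // lots of headroom: step up
  }
  return quality; // within budget: leave as-is
}

// In a real loop you would time renderer.render() with performance.now()
// and map the quality factor to e.g. renderer.setPixelRatio or LOD choice.
let q = 1.0;
q = adjustQuality(q, 25); // a slow 25 ms frame -> quality drops
console.log(q.toFixed(2)); // 0.95
```

Averaging the render time over a window of frames before adjusting avoids oscillating on a single slow frame.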
I have started looking into the client side performance of my app using Chrome's Timeline tools. However, whilst I have found many articles on how to use them, information on how to interpret the results is more sparse and often vague.
Currently I am looking at scroll performance and attempting to hit 60FPS.
This screenshot shows the results of my most recent timeline recording.
As can be seen, most frames exceed the 60 FPS budget and several exceed even the 30 FPS budget.
If I zoom in on one particular frame - the one with a duration of 67.076 ms - I can see a few things:
the duration of the frame is 67 ms, but the aggregated time is 204 ms
201 ms of this time is spent painting, BUT the two paint events in this frame have durations of 1.327 ms and 0.106 ms
The total duration for the JS event, update layer tree, and paint events is only 2.4 ms
There is a long green hollow bar (Rasterize Paint) which lasts the duration of the frame and in fact starts before it and continues after it
I have a few questions on this:
the aggregated time is far longer than the frame time - is it correct to assume that these are parallel processes?
the paint time for the frame (204 ms) far exceeds the combined time of the two paint events (1.433 ms) - is this because it includes the rasterize paint events?
why does the rasterize paint event span multiple frames?
where would one start optimizing this?
Finally can someone point me to some good resources on understanding this?
This is a somewhat unfortunate result of the way the 'classic' waterfall Timeline view coalesces multiple events. If you expand that long "Rasterize Paint" event, you'll see a bunch of individual events, which will be somewhat shorter. You may want to switch to Flame Chart mode for troubleshooting rendering performance, where the rasterize events are shown on their appropriate threads.
I am building a web application which will display a large number of image thumbnails as a 3D cloud and provide the ability to click on individual images to launch a larger view. I have successfully done this in CSS3D using three.js by creating a THREE.CSS3DObject for each thumbnail and then appending the thumbnail as an svg:image.
It works great for up to ~1200 thumbnails, and then performance starts to drop off (very low FPS and long load times). By the time you hit 2500 thumbnails it is unusable. Ideally I want to work with over 10k thumbnails.
From what I can tell I would be able to achieve the same result by creating each thumbnail as a WebGL mesh with texture. I am a beginner with three.js though, so before I put in the effort I was hoping for guidance on whether I can expect performance to be better or am I just asking too much of 3D in the browser?
As far as rendering goes, CSS3D should be relatively okay for rendering a fairly large number of "sprites", but 10k would probably be too much.
WebGL would probably be a better option, though. You could also apply further optimizations, such as storing the thumbnails in an atlas texture.
But rendering is just one part. Event handling can be a serious bottleneck if not handled carefully.
I don't know how you're handling the mouse click event and the transition to the full-size image, but attaching an event listener to each of 2.5k+ objects probably isn't a good choice anyway. With pure WebGL you could use image-space picking to detect the clicked object: encode each tile with a distinct id/color and use that to determine what was clicked. I imagine a WebGL/CSS3D combo could use this approach as well.
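The id/color encoding behind that picking trick can be sketched like this (helper names are mine; in practice you would render each tile with its id color into an offscreen buffer and read the pixel under the cursor with gl.readPixels):

```javascript
// Encode an object id (0..16777215) into an RGB triple, and decode a
// picked pixel back to the id. This is the classic GPU picking trick:
// render every tile flat-shaded with its id color, then on click read
// one pixel from that buffer and decode it.
function idToColor(id) {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

const color = idToColor(70000);
console.log(color);               // [ 1, 17, 112 ]
console.log(colorToId(...color)); // 70000
```

Three 8-bit channels give you 16.7 million distinct ids, so 10k thumbnails fit comfortably; just make sure the picking pass is rendered without lighting or antialiasing, which would alter the colors.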
To answer the question: WebGL should handle 10k fine. You may need to think about some performance optimizations if your rectangles are big and take up a significant portion of the screen, but there are ways around that if the problem appears.