three.js benchmark and progressive enhancement

I am loading a three.js scene on a website and I would like to optimize it depending on the capacity of the graphics card.
Is there a way to quickly benchmark the client computer and get some data that lets me decide how demanding or simple my scene has to be in order to run at a decent FPS?
I am thinking of a benchmark library that can be easily plugged in, or a benchmark-as-a-service. It has to run without the user noticing.

You can use stats.js to monitor performance. It is used in almost all three.js examples and is included in the three.js repository.
The problem with this is that the framerate is locked to 60 FPS (requestAnimationFrame is synced to the display refresh), so you can't tell how many milliseconds are lost to vsync.
The only thing I found to be reliable is to take the render time and increase quality if it is under a limit, or decrease it if it takes too long (see the sketch below).
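As a rough illustration of that approach, here is a minimal sketch (not from the original answer): it times renderer.render() on the CPU each frame and nudges the pixel ratio up or down. The 8 ms / 16 ms budgets and the choice of setPixelRatio() as the quality knob are illustrative assumptions, and CPU timing is only a proxy, since the actual GPU work finishes later.

```
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;
scene.add(new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial()));

const maxRatio = window.devicePixelRatio || 1;
let pixelRatio = maxRatio;
renderer.setPixelRatio(pixelRatio);

function animate() {
  const start = performance.now();
  renderer.render(scene, camera);
  const cpuTime = performance.now() - start; // CPU submission time only, not true GPU time

  // Hypothetical budget: raise quality when well under ~8 ms, lower it above ~16 ms.
  if (cpuTime < 8 && pixelRatio < maxRatio) {
    pixelRatio = Math.min(pixelRatio + 0.25, maxRatio);
    renderer.setPixelRatio(pixelRatio);
  } else if (cpuTime > 16 && pixelRatio > 0.5) {
    pixelRatio = Math.max(pixelRatio - 0.25, 0.5);
    renderer.setPixelRatio(pixelRatio);
  }

  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```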

Related

How to quick render simple scenes in low res in Blender?

I have started using Blender to make simple 3D animations, but my laptop heats up too fast when rendering, and it also takes too much time to render even simple scenes.
I just want a fast, easy way to output a low-res image of the scene, like what I see in the 3D View, but I haven't found any easy or fast way to do that.
Preferably I will be using Cycles.
Can someone help me?
Making renders faster is almost a university degree by itself. Here is a forty-seven-minute video with some tips.
The simplest step with any render engine is to reduce the resolution: in the render dimensions panel, set the percentage to 50% or 25%.
In 2.79 you can do OpenGL renders, which is what you see in the viewport. You can enable the option to display only renderable objects to hide all the viewport overlays.
For Cycles, set the samples low and use the denoising options. The daily builds, which will become 2.81 when released, also include the new Denoise compositing node, which uses Intel's OpenImageDenoise for better results.
If you really want to speed up render times, use Blender 2.80 and the Eevee render engine; it is almost real-time and offers amazing results. With 2.80 almost all of the material nodes from Cycles produce the same result in Eevee, and in 2.81 this has been improved even more.

What is the WEBGL version/level required for the smaa post-processing example?

The SMAA post-processing example is by far the best antialiasing method in my tests, but it's extremely complex and I'm worried that it most likely isn't WebGL 1.0, so it won't run on older PCs and devices at all.
Does anyone know which version it requires?
And what is the actual load on the GPU? Is there a tool to inspect the milliseconds per frame? Relying just on dropped framerate is next to useless.
You can use the SMAAPass with WebGL 1. WebGL 2 is not even supported by three.js. Looking at the respective shader code, I would say it should also compile on older hardware.
I don't have any concrete performance measurements but I can guarantee that this pass will add noticeable overhead to your application, especially on mobile devices.
three.js R92
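For reference, a minimal sketch of wiring SMAAPass into an EffectComposer is below. The module import paths reflect current three.js builds; around R92 the same classes were plain scripts under examples/js, and constructor details may differ between releases.

```
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { SMAAPass } from 'three/examples/jsm/postprocessing/SMAAPass.js';

const renderer = new THREE.WebGLRenderer({ antialias: false }); // SMAA replaces MSAA here
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 4;
scene.add(new THREE.Mesh(new THREE.TorusKnotGeometry(1, 0.3), new THREE.MeshNormalMaterial()));

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// SMAAPass is created with the drawing-buffer size in pixels.
const pr = renderer.getPixelRatio();
composer.addPass(new SMAAPass(window.innerWidth * pr, window.innerHeight * pr));

function animate() {
  composer.render();
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```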

Three.js: What's the upper limit for holding 60 FPS on an average desktop?

I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.
I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my desktop (above average) can't run it above 20 FPS. I'm using Lambert material as well as a single ambient light, so I feel like this isn't too much to ask.
With these figures in mind, is this the expected behavior for three.js rendering?
I'm pretty sure that is not the end of the line, and you are probably missing some opportunities for massive performance improvements.
But just to give you some numbers first:
If you leave out everything fancy (including three.js) and just render an ultra-simple point cloud with one fragment per point, you can easily render 10-20 million (yes, million) points/vertices on an average GPU.
Just with simple shapes and materials, I have already gotten three.js to render something in the range of 500k triangles (at 1080p resolution) at 60 FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
However, these kinds of numbers are not really helpful.
Some hints:
If you want to debug your rendering performance, you should first add some metrics. Renderstats is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example).
Generally the choice of material shouldn't matter too much; the GPU is far more capable than most people think, so the problem is more likely somewhere else in the pipeline. EDIT from comment: in some cases, like high-resolution displays with slow GPUs (think mobile devices), this is less true and complicated shader code can slow your site down, but it might be worth looking at the other points first. As the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query extension to get some information about what is going on on the GPU (see the first sketch after this list).
The number of drawcalls shouldn't be too high: three.js needs to issue a drawcall for every Mesh and Points object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. You can reduce the number of drawcalls by merging multiple geometries into one and making use of multi-materials, vertex colors and things like that (see the second sketch after this list).
If you are doing postprocessing, the GPU needs to render every pixel on screen several times, which can also massively limit your performance. This can be optimized by merging multiple postprocessing passes into one (admittedly, that would be a lot of hard work).
Another problem could be on the JS side: use the profiler or timeline view in the Chrome devtools to see whether it's the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12 ms). I've been told there are ways to optimize JavaScript performance as well :)
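First, a sketch of the EXT_disjoint_timer_query idea mentioned above, assuming an existing renderer, scene and camera. The extension is not exposed in every browser or on every device, so the code has to cope with it being unavailable and with results arriving a few frames late.

```
// 'renderer', 'scene' and 'camera' are assumed to be your existing three.js objects.
const gl = renderer.getContext();
const ext = gl.getExtension('EXT_disjoint_timer_query'); // may be null on unsupported devices

let query = null;

function renderTimed() {
  // Poll the previous query first; results arrive asynchronously, a few frames later.
  if (ext && query) {
    const available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available) {
      if (!disjoint) {
        const gpuMs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT) / 1e6;
        console.log('GPU frame time: ' + gpuMs.toFixed(2) + ' ms');
      }
      query = null; // allow a new query to be issued
    }
  }

  if (ext && !query) {
    query = ext.createQueryEXT();
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
    renderer.render(scene, camera);
    ext.endQueryEXT(ext.TIME_ELAPSED_EXT);
  } else {
    renderer.render(scene, camera);
  }

  requestAnimationFrame(renderTimed);
}
requestAnimationFrame(renderTimed);
```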
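Second, a sketch of the geometry-merging idea: many small geometries are baked into one BufferGeometry so the whole cloud becomes a single Mesh and a single drawcall. The BufferGeometryUtils helper lives in the three.js examples; its export name has changed between releases (mergeGeometries vs. the older mergeBufferGeometries), so check your version.

```
import * as THREE from 'three';
// Current releases export mergeGeometries; older ones exported mergeBufferGeometries.
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const scene = new THREE.Scene();

// Build many small boxes as separate geometries...
const geometries = [];
for (let i = 0; i < 1000; i++) {
  const g = new THREE.BoxGeometry(0.1, 0.1, 0.1);
  g.translate(Math.random() * 10 - 5, Math.random() * 10 - 5, Math.random() * 10 - 5);
  geometries.push(g);
}

// ...and merge them so the whole cloud is one Mesh and one drawcall instead of 1000.
const merged = mergeGeometries(geometries);
scene.add(new THREE.Mesh(merged, new THREE.MeshNormalMaterial()));
```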

Kinect 2 - Significant delay on hand motion

I'm using the Kinect 2 to rotate and zoom a virtual camera that is looking at a 3D object by moving my hand in all three directions. The problem I'm currently tackling is that these operations are executed with noticeable delay. When my hand is steady again, the camera still continues to move for a short time. It feels as if I'm pushing the camera rather than controlling it in real time. Perhaps the frame rate is a problem: as far as I know the Kinect runs at 30 FPS while my application runs at 60 FPS (VSync enabled).
What could be the cause for this issue? How can I control my camera without any significant delay?
The Kinect is a very graphics- and processing-intensive piece of hardware. For your application I'd suggest a minimum specification of a GTX 960 and a 4th-gen i7 processor. Your hardware will be a prime factor in how fast you can process Kinect data.
You will also want to avoid loops as much as possible and instead rely on multithreading; if you are looping, make sure there are no foreach loops, as they take longer to execute. It is very important that the code reading the Kinect data and the code issuing the positioning commands run asynchronously.
The Kinect will never be real-time responsive. There is just too much data for it to handle; the best you can do is optimize your code and increase your hardware power to shrink the response time.

WebGL vs CSS3D for large scatter plot of images

I am building a web application which will display a large number of image thumbnails as a 3D cloud and provide the ability to click on individual images to launch a larger view. I have successfully done this in CSS3D using three.js by creating a THREE.CSS3DObject for each thumbnail and then appending the thumbnail as an svg:image.
It works great for up to ~1200 thumbnails, and then performance starts to drop off (very low FPS and long load times). By the time you hit 2500 thumbnails it is unusable. Ideally I want to work with over 10k thumbnails.
From what I can tell, I could achieve the same result by creating each thumbnail as a WebGL mesh with a texture. I am a beginner with three.js though, so before I put in the effort I was hoping for guidance on whether I can expect performance to be better, or whether I am just asking too much of 3D in the browser.
As far as rendering goes, CSS3D should be relatively okay for quite a large number of "sprites", but 10k would probably be too much.
WebGL would probably be the better option. You could also look into further optimizations, such as storing the thumbnails in an atlas texture.
But rendering is just one part. Event handling can be a serious bottleneck if not handled carefully.
I don't know how you're handling the mouse click event and the transition to the full-size image, but attaching an event listener to each of 2.5k+ objects probably isn't a good choice anyway. With pure WebGL you could use image space to detect the clicked object: encode each tile with a different id/color and use that to determine what was clicked (see the sketch at the end of this answer). I imagine a WebGL/CSS3D combo could use this approach as well.
To answer the question: WebGL should handle 10k fine. You may need to think about some performance optimizations if your rectangles are big and take up a significant portion of the screen, but there are ways around that if the problem appears.
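A sketch of that image-space (color-ID) picking idea follows; the names pickingScene, addPickingTwin and pick are illustrative, not an existing API. Each thumbnail gets a twin whose flat color encodes its index, the twin scene is rendered into a 1x1 render target at the cursor position, and the pixel is read back and decoded.

```
import * as THREE from 'three';

const pickingScene = new THREE.Scene();
const pickingTarget = new THREE.WebGLRenderTarget(1, 1); // we only ever read one pixel

// For every visible thumbnail, add a twin whose flat color encodes its index (id + 1,
// because 0 is reserved for "nothing hit"). In a real implementation make sure no
// lighting, fog or color-space conversion alters the encoded value.
function addPickingTwin(thumbnailMesh, id) {
  const material = new THREE.MeshBasicMaterial({ color: new THREE.Color(id + 1) });
  const twin = new THREE.Mesh(thumbnailMesh.geometry, material);
  twin.position.copy(thumbnailMesh.position);
  twin.quaternion.copy(thumbnailMesh.quaternion);
  twin.scale.copy(thumbnailMesh.scale);
  pickingScene.add(twin);
}

// On click: render only the pixel under the cursor into the 1x1 target and decode the id.
function pick(renderer, camera, mouseX, mouseY) {
  const pr = renderer.getPixelRatio();
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    mouseX * pr, mouseY * pr, 1, 1
  );

  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  const pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  const id = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
  return id === 0 ? null : id - 1; // index of the clicked thumbnail, or null
}
```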
