I have been experimenting with GreenSock's TweenMax JS and Three.js. As both libraries use requestAnimationFrame (rAF), I needed to decide which library should handle it.
If I use the rAF loop built into Three.js, it runs at around 30fps and is pretty smooth.
If I use TweenMax, e.g. TweenMax.ticker.addEventListener('tick', animate); it runs at about 55-60fps but is a little choppy.
I can change the fps in TweenMax with TweenMax.ticker.fps(30); which, as expected, runs similarly to the Three.js rAF version.
My question is: which method is preferred, and is one considered best practice over the other? Also, if I choose Three.js, am I able to change the fps in its rAF implementation?
Finally, how would you decide on the fps to suit a wider audience? Limiting to 30fps seems OK but feels a bit arbitrary; some users may be capable of much higher rates than I am allowing.
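For reference, here is a minimal sketch of the two loops I am comparing (renderer, scene and camera come from my own setup):

// Option 1: plain requestAnimationFrame, as used in the Three.js examples
function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}
animate();

// Option 2: the TweenMax ticker drives the loop instead
TweenMax.ticker.addEventListener('tick', onTick);
function onTick() {
    renderer.render(scene, camera);
}
// Optional cap: TweenMax.ticker.fps(30);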
UPDATE:
Based on the feedback from mrdoob and jack, I have tried both rAF in Three.js and rAF via TweenMax, with antialiasing on and off.
antialias on:
Three.js rAF (default) - 30fps smooth.
http://jsfiddle.net/cR7Lx/21/
TweenMax rAF (default) - 55-60fps slightly choppy.
http://jsfiddle.net/cR7Lx/23/
TweenMax rAF (fps(30)) - 30fps smooth.
http://jsfiddle.net/cR7Lx/24/
antialias off:
Three.js rAF (default) - 30-60fps slightly choppy.
TweenMax rAF (default) - 92-120fps slightly choppy.
TweenMax rAF (fps(30)) - 30fps smooth.
It would help if someone who knows how requestAnimationFrame works under the hood could explain the differences. For now I will use either TweenMax at 30fps or Three.js, both with antialiasing on.
Just to clarify, the default TweenMax RAF behavior doesn't put a cap on the fps because...well...that's the point of requestAnimationFrame in the first place - it is intended to be something that the browser dictates (and it's typically around 60fps). Setting a specific fps with TweenMax.ticker.fps() simply puts a cap on it (unless you set TweenMax.ticker.useRAF(false), in which case it will use setTimeout() to get as close as possible to the fps you set).
I noticed someone said that you must set an fps in TweenMax in order to make it smooth, and I just wanted to clarify that this isn't true - doing so just skips RAF updates if/when they occur too quickly. I can't imagine how that would make anything smoother; it would likely do the opposite.
Only use TweenMax.ticker.fps() if you want to LOWER the frame rate below the normal browser refresh rate which, again, is typically around 60fps. If you're looking for the smoothest results, I'd probably stick with the default configuration of TweenMax. You could try turning off requestAnimationFrame and using a very high fps (TweenMax.ticker.useRAF(false); TweenMax.ticker.fps(100)), but the downside there is that you lose all the benefits of RAF, like improved battery life on mobile devices when the tab is inactive, updates synchronized with the native browser refreshes, etc.
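To make the options concrete, here is a quick sketch of the three configurations described above (render stands in for whatever you update on each tick):

// Default: the browser's RAF dictates timing (typically ~60fps)
TweenMax.ticker.addEventListener('tick', render);

// Cap below the refresh rate - this skips RAF updates that arrive too quickly
TweenMax.ticker.fps(30);

// Or abandon RAF and approximate a target rate with setTimeout() instead
TweenMax.ticker.useRAF(false);
TweenMax.ticker.fps(100); // loses RAF benefits (battery savings, synced refreshes)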
The biggest cause of jerky behavior is graphics rendering in the browser, which is unrelated to the JavaScript animation engine.
I'm not familiar with Three.js, so I can't speak to its capabilities or which option would be better for driving your other stuff (sorry). All I can say is that I'm a big GreenSock fan (ha ha).
When I inspected both implementations, I found they are actually quite similar; they both use the same delay-based implementation, with one small difference.
The Three.js implementation measures the time since the last call and delays the next call by (16ms - elapsed) to fix your fps at 60. If a frame can't finish within 16ms, it will naturally wait as long as it takes.
The TweenMax implementation does not fix your fps initially; instead you have to call fps() manually. That is what creates the choppy situation, because you always have to supply the right fps value (not more, not less) for the delay time to be adjusted correctly.
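For reference, a rough sketch of the delay-based pattern both libraries build on (16ms corresponds to roughly 60fps; the names here are illustrative):

var lastTime = 0;

function requestFrame(callback) {
    var now = Date.now();
    // Space calls ~16ms apart: wait out whatever is left of the frame budget
    var delay = Math.max(0, 16 - (now - lastTime));
    setTimeout(function () {
        lastTime = Date.now();
        callback(lastTime);
    }, delay);
}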
EDIT:
I have to correct the TweenMax behaviour described above: I missed the constructor call this.fps(fps) at the end of the Ticker class, which sets the default fps to 60 when fps is undefined.
Related
I use this -
if (timeSinceLastClick > 60)
{
    Time.timeScale = 0; // freeze time-scaled behaviour after the idle threshold
}
But that doesn't freeze the AR camera/AR tracking or the audio. Besides the animations, everything else seems to keep running.
I'd like to pause the app completely so users can save their batteries and avoid heating.
If you want to save battery and avoid heating, I'd recommend either culling as much as you can or lowering the target framerate (in Unity, e.g. via Application.targetFrameRate).
Setting Time.timeScale to 0 will affect Rigidbodies and all scripts that multiply their movement by Time.deltaTime (which is highly recommended for framerate independence). Everything else still computes every frame, as often as possible - so see the suggestions above.
If you have occlusion culling baked, you could move the camera to an area that doesn't point at any renderable objects. A dynamic cube in front of the camera will probably not work, as it isn't baked into the occlusion culling data. But you could, for example, disable the main camera and render the pause menu on a second, UI-only camera.
The animation in my 2D game runs at 24 FPS. Is there any good reason not to set the game's target frame rate to 24 FPS? Wouldn't that increase performance consistency and battery life on mobile? What would I be giving up?
You write nothing about the kind of game, but I will try to answer anyway.
Setting 24 FPS would indeed increase performance consistency and battery life.
The downside is, besides choppier visuals, an increased input lag: at 24 FPS each frame lasts about 42ms (1000/24), versus roughly 17ms at 60 FPS, so reactions to input can arrive up to 25ms later. That will affect not only the 3D controls but also every UI button. Your game will feel a bit more laggy than other games - a very subtle feeling that adds up after a while.
You could get away with 24 depending on the nature of your game, but you should test it with different people; some are more sensitive to this issue than others.
If you set up the animations with their correct framerate, Unity will interpolate the animation to the game's framerate, so there is no need for the animations and the game to share the same value.
I'm testing this on mobile. I have a 30-frame animation with a 30fps frame rate, and I built it to my phone twice, with in-game target frame rates of 30 and 60. Since animation playback is time-based in Unity, the animation lasts exactly 1 second in both builds.
This is what I assume would happen:
1) On the 30fps build, the animation plays 1 frame on each in-game frame, so it takes 1 second.
2) On the 60fps build, the animation plays 1 frame on every 2 in-game frames, so it takes 1 second as well.
My question is: why does the 60fps build look better compared with the 30fps build, if the animation still only plays 30 frames over that 1 second?
There isn't anything else in the scene, only the animation, so nothing else could create the impression that 60fps looks better.
I hope you can all understand the question. If anyone has noticed the same thing, or you can test it yourself to see if you feel the same, feel free to comment, answer or discuss. Thanks.
I think I might have the answer: since the animation is time-based, Unity fills in the gaps between keyframes better at 60fps. Example: set a position keyframe on the 1st frame, then another position keyframe on the 30th frame; Unity will effectively play this as a 60fps animation, because everything between the two keyframes is interpolated per rendered frame rather than per authored frame.
I'm not sure if this is the exact answer; if someone can confirm it, or no other answer comes in, I'll mark this as the answer.
I find the question very vague and lacking specifics about what kind of animation is being discussed and how it is used. 2D or 3D? Sprites or geometry? Frames of bitmap graphics, or tweened motion created within Unity?
As an example: if your animation were a bitmap sprite animation and the sprite never changed coordinate position, then 30 frames of bitmap animation played over a duration of 1 second would appear EXACTLY THE SAME in a 60fps build as in a 30fps build.
If you are also moving the sprite from one XY position to another, then the MOTION would appear smoother in the 60fps build due to interpolation of the coordinate positions (as you said in your own answer). But your bitmap frames still play exactly the same (1 image per 1/30th of a second).
Similarly, if the animation discussed here is geometry, then everything is based on interpolation of the shapes, and thus yes, 60fps allows for 60 unique frames of animation.
I am loading a Three.js scene on a website and I would like to optimize it depending on the capability of the client's graphics card.
Is there a way to quickly benchmark the client computer and get some data that will let me decide how demanding or simple my scene should be in order to run at a decent FPS?
I am thinking of a benchmark library that can be easily plugged in, or a benchmark-as-a-service. It also has to run without the user noticing.
You can use stats.js to monitor performance. It is used in almost all Three.js examples and is included in the Three.js repository.
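For reference, a minimal stats.js setup looks something like this (renderer, scene and camera are assumed from your existing Three.js code):

var stats = new Stats();
document.body.appendChild(stats.dom); // older builds expose stats.domElement instead

function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
    stats.update(); // refresh the fps/ms panel once per rendered frame
}
animate();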
The problem with this is that the framerate is capped at 60fps, so you can't tell how many milliseconds per frame are being lost to vsync.
The only thing I found to be reliable is to take the render time and increase quality if it stays under a limit, decreasing it if frames take too long.
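A minimal sketch of that idea, assuming a qualityLevel knob that your scene setup interprets (the names and thresholds are illustrative, and this only measures the JavaScript side - WebGL work on the GPU runs asynchronously):

var qualityLevel = 3; // hypothetical 0-5 scale your scene interprets

function render() {
    requestAnimationFrame(render);

    var start = performance.now();
    renderer.render(scene, camera);
    var elapsed = performance.now() - start;

    // Raise quality only with comfortable headroom under the ~16ms/60fps
    // budget, and drop it when frames run over. In practice you would
    // average over many frames to avoid oscillating between levels.
    if (elapsed < 8 && qualityLevel < 5) qualityLevel++;
    else if (elapsed > 16 && qualityLevel > 0) qualityLevel--;
}
requestAnimationFrame(render);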
So I have established the base for an SFML game using a tilemap, and for the most part I've optimized it so it can run at a good 60fps. However, I only get this 60fps if at some point the map is halfway off the screen, so that less of it is being rendered. On its own that would make sense - less being drawn means it runs faster - but once the fps increases it stays there permanently, even if I then fill the entire screen with the map again.
I can't understand this irregularity: I either have to start the map slightly offset, or move it offscreen for a moment, to get a solid fps. Clearly my computer is capable of rendering at this rate, since the fps stays there once it starts, so why does the map have to be offscreen momentarily for it to reach this speed?