I am creating a WebGL game and have it fairly well optimized, but there is one problem: my framerate limiter destroys performance. I know what you're thinking: "duh, of course it does... it's an fps limiter". Well, the issue is that it is not behaving as expected. Here is the code:
var renderTimer = null;

function animate() {
    clearTimeout(renderTimer);
    renderTimer = setTimeout(function () {
        _frame = requestAnimationFrame(animate);
    }, 33);
    render();
}
function render() {
    // operations for mesh positioning/animation
    handleObjects();
    renderer.render(scene, camera);
}
On my desktop this works as expected: gameplay is smooth and holds at 29-30fps.
On my laptop, fps drops to 22-24 and gameplay is jerky. If I change the interval delay to 16ms, gameplay is relatively smooth and holds at about 35fps. If I remove the interval altogether, gameplay is completely smooth and holds at about 45fps.
I don't completely understand this behavior. If the cap is 30fps, why does my laptop's performance drop below 25fps? I would expect it to also be around 25fps without the interval, yet it goes faster. Curious.
I would happily just remove the interval, but I do want my fps capped at 30; players getting a higher fps than that would have an advantage.
Thoughts?
There are a couple of considerations:
setTimeout in JavaScript can't really be counted on to be very accurate.
The browser can do its own things, like garbage collection, at any time, delaying stuff.
Rendering itself (WebGL/three.js plus your own game logic) takes time. Even though you create the timeout before the main render call, it still introduces 33 ms of idle time.
Well, I'm not actually sure about that last point, now that I think about it. Anyway, I have observed similar problems, and I managed to make a solution that works pretty well: it keeps the framerate smooth and within +/- 1-2 FPS of a target framerate, if the computer can handle that framerate at all. It's a hack though.
First you could take a look at the requestAnimationFrame implementation in Three.js (for browsers that do not have it built-in):
var lastTime = 0; // declared outside this function in the original three.js shim

requestAnimationFrame = function ( callback ) {
    var currTime = Date.now(), timeToCall = Math.max( 0, 16 - ( currTime - lastTime ) );
    var id = self.setTimeout( function() { callback( currTime + timeToCall ); }, timeToCall );
    lastTime = currTime + timeToCall;
    return id;
};
You can see it adjusts the setTimeout based on the last call (targeting around 60FPS). So that's one solution you could try.
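Adapted to a 30fps cap, that same scheduling idea might look roughly like this (an illustrative sketch only, not actual three.js code; render() is the function from the question):

var lastTime = 0;

function animate() {
    var currTime = Date.now();
    // aim for ~33 ms between frames instead of the shim's 16 ms
    var timeToCall = Math.max(0, 33 - (currTime - lastTime));
    setTimeout(function () {
        requestAnimationFrame(animate);
    }, timeToCall);
    lastTime = currTime + timeToCall;
    render();
}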
What I did (the hack that seems to work well) is to have the timeout value as a variable, with the initial value being the original 33 ms (or whatever matches the desired framerate). Then on each frame I record Date.now() and compare it to the previous frame time. If we missed the budgeted frame time, we decrement the timeout value by 1. If we were faster than desired, we increment the timeout by 1. So the code is continually, smoothly adjusting the timeout to match the desired framerate. By only slightly incrementing/decrementing, we avoid the problem of unpredictable garbage collections etc. completely throwing off the calculations and messing things up. And it just works (tm).
I won't post the code because there is so much more going on my render loop (only render new frame if something is changed etc), it would be tedious to isolate a relevant code example.
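Stripped to the bare mechanism, though, a minimal sketch of the idea could look something like this (illustrative only, not my actual loop; the variable names and the 33 ms / ~30 FPS budget are assumptions):

var targetDelay = 33;             // current timeout in ms, starts at the ~30 FPS budget
var lastFrameTime = Date.now();

function animate() {
    setTimeout(function () {
        requestAnimationFrame(animate);
    }, targetDelay);

    var now = Date.now();
    var elapsed = now - lastFrameTime;
    lastFrameTime = now;

    // Nudge the delay by 1 ms per frame instead of recomputing it outright,
    // so a single GC pause or slow frame doesn't throw everything off.
    if (elapsed > 33) {
        targetDelay = Math.max(0, targetDelay - 1);   // running slow: wait less
    } else if (elapsed < 33) {
        targetDelay = targetDelay + 1;                // running fast: wait more
    }

    render();
}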
I have an application I was running that I created with Processing where I was drawing a lot of objects to the screen. Normally the sketch runs at 60 fps and predictably when a lot of stuff is drawn to the screen, it starts to reduce. I wanted to see what changing Processing's renderer would do, as there is a P3D option when you set the size. P3D is a '3D graphics renderer that makes use of OpenGL-compatible graphics hardware.'
I noticed that the performance improved when I used this, in that I could draw more objects to the screen before the framerate dropped, without really having to change anything in the code. Then I noticed something odd.
I started up the computer the next day and ran the program again and noticed that suddenly the framerate was lower, around 50 fps. There didn't seem to be anything wrong with my computer, as it wasn't doing anything else. Then I thought it probably had something to do with the graphics card. I opened a YouTube video and it seemed to be fine. Then I ran the sketch again and it went back up to 60fps.
I just want to know what might be going on here hardware-wise. I'm using an NVIDIA GTX970 (I think it's the OC edition). It seems to me that watching the video sort of jump-started the card and made it perform properly on the Processing sketch. Why didn't the sketch itself make that happen?
As an example:
Vector<DrawableObject> objects;

void setup()
{
  size(400, 400, P3D); // here is the thing to change. P3D is an option
  objects = new Vector<DrawableObject>();
  for (int i = 0; i < 400; i++)
  {
    objects.add(new DrawableObject());
  }
}

void draw()
{
  for (int i = 0; i < objects.size(); i++)
  {
    DrawableObject o = objects.get(i);
    o.run();
  }
}
I have a simple AVPlayer-based OS X app that plays local media. It has a skip forward and backward feature based on -seekToTime:. On some media, there is an annoying 3-7 second delay in getting the media to continue playing (especially going forward). I have tried -seekToTime:toleranceBefore:toleranceAfter: with various tolerances. No luck.
Posting a previously solved issue for the record... I noticed that the seekToTime: skipping worked fine when playback was paused. I immediately (i.e., several weeks later) realized that it might make sense to stop playback before seeking, then restart it. So far the problem is 100% solved, and it is blazing fast. This might be of some use to people trying to do smooth looping (though I don't know how to trigger the completion handler signaling the end of the loop). I don't know if it works on iOS. Sample code is attached:
- (void)movePlayheadToASpecificTime
{
    // assumes this is a method of an AVPlayerView subclass, properly wired with IB
    // self.player is a property of AVPlayerView, which points to an AVPlayer

    // save the current rate
    float currentPlayingRate = self.player.rate;

    // may need fancier tweaking if the time scale changes within the asset
    int32_t timeScale = self.player.currentItem.asset.duration.timescale;

    // stop playback
    self.player.rate = 0;

    Float64 desiredTimeInSeconds = 4.5; // or whatever

    // convert the desired time to a CMTime
    CMTime target = CMTimeMakeWithSeconds(desiredTimeInSeconds, timeScale);

    // perform the move
    [self.player seekToTime:target];

    // restore the playing rate
    self.player.rate = currentPlayingRate;
}
I am a student learning to become a comic artist.
Now we have this course called "Media" in which we have to make an interactive program using a program called Processing.
I have to show this to a jury in 2 days, but I have been literally stuck with this code for the past 3 weeks. I just can't get it to work the way I want it to, so here I am asking if anyone would be able to help me with this.
What I want to make:
Basically, I wanted it to be interactive without being interactive, so I tried to accomplish this by making a Buddha-themed program.
So what does it have to do? I think it shouldn't be all that hard. All I want it to do is take the amount of sound it picks up, and when the sound is below a certain level, the screen, which is completely white, starts fading to black; whenever there is sound, it rapidly becomes white again.
So after 30 seconds of no sound it should be completely black, and it should go into a new mechanism where it starts fading the black screen (there is a picture with the word "emptyness" behind it), so that word should become visible very slowly (approx. 30 seconds again). Then, when that picture is completely visible, it should start fading again and start showing a picture of a Buddha (which is behind the picture with the word), and that's all I want it to do.
So now I will show you what I have. I've got the screen fading whenever it's really quiet, but that's where I get stuck; I don't know how to set the timer, how to set the images behind it, etc.:
import ddf.minim.*;

Minim minim;
AudioInput in;
PImage img;

int a = 125;       // index of the sample to check in the input buffer
int fade = 0;      // starting fade; bigger fade is darker
int stmin = 2;     // fade step darker
int stplus = 20;   // fade step lighter
float gw = 0.001;  // sensitivity; smaller = more sensitive

void setup() {
  size(1000, 1000);  // size() should be the first call in setup()
  background(0);
  frameRate(10);     // at most 10 frames per second
  img = loadImage("emptyness.jpg");

  minim = new Minim(this);
  // get a line in from Minim, default bit depth is 16
  in = minim.getLineIn(Minim.STEREO, 640);
}

void draw() {
  image(img, 10, 10);
  fill(255);
  rect(0, 0, 1000, 1000);

  if (abs(in.left.get(a)) > gw) {
    fade = fade - stplus;  // sound heard: get lighter quickly
  }
  else {
    fade = fade + stmin;   // silence: get darker slowly
  }
  fade = constrain(fade, 0, 300);

  fill(0, fade);
  rect(0, 0, 1000, 1000);
}

void stop()
{
  // always close Minim audio classes when you are done with them
  in.close();
  minim.stop();
  super.stop();
}
I really hope someone can help me with this, for posting this here really is my last resort. I have only 2 days left until my jury. I've been trying, getting crashes, and, worst of all, I really don't understand anything about Java or Processing, because we never got any lessons about it; they just expected us to 'find out ourselves'.
Thanks for reading this, and hopefully someone can help me.
Greetz and lots of thanks in advance.
The advice I gave you on the Processing forum still stands: You have to break your problem down into smaller individual steps and take on those steps one at a time instead of trying to tackle the whole thing at once.
Can you create a simple sketch that just fades to black after 30 seconds?
Can you create a simple sketch that fades to black after 30 seconds, but then fades back to white when you click the mouse?
Can you create a simple sketch that shows you whether it can hear sound?
Now can you combine those ideas to create a sketch that fades to black after 30 seconds, but fades back to white when it hears a sound?
This might seem like a lot for 2 days (and that's a lesson in time management), but you'll have better luck if you take a step back and focus on one small thing at a time instead of your whole project. That will also allow you to ask more specific questions, as this one is too broad to really answer without doing your homework for you. And you don't want to cheat, do you?
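To give you an idea of how small each of those steps can be, here is a minimal sketch of the first two combined, with a mouse click standing in for "hearing sound" (illustrative only; wiring in Minim and your images is still up to you):

int fadeStart;   // millis() when the current fade began

void setup() {
  size(400, 400);
  fadeStart = millis();
}

void draw() {
  // 0.0 at the start of the fade, 1.0 once 30 seconds have passed
  float progress = constrain((millis() - fadeStart) / 30000.0, 0, 1);
  background(lerp(255, 0, progress));   // 255 = white, 0 = black
}

void mousePressed() {
  fadeStart = millis();   // "sound heard": jump back to white and start the fade over
}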
I run quite a simple loop creating 30 new Cube meshes:
for (var i = 0; i < 30; i++) {
    var zPos = i * (cubeHeight + ySpace) + cubeHeight / 2;
    cube = new THREE.Mesh(new THREE.CubeGeometry(cubeWidth, cubeWidth, cubeHeight), material);
    cube.position.z = zPos;
    cube.castShadow = true;
    cube.receiveShadow = true;
    parent.add(cube);
}
This runs terribly slow. What could the causes be?
(I assume I should be able to re-render 30 boxes continuously without performance issues?)
We need a few more details to completely answer your question:
What version of three.js are you using?
What else is going on in the scene?
What render timer method are you using? (setInterval, setTimeout, or requestAnimationFrame)
My guesses as to why it could be slow:
Some other section of code is actually using up more of the time before this code executes.
Your render isn't being called often enough and it looks choppy.
Your computer doesn't support some function of three.js and it is using a workaround to make up for it.
Your computer's JavaScript timer may be slow. (Depends on platform and browser.)
You are creating and destroying these blocks without any caching. (You should overwrite old values instead of using the new operator as much as possible; requesting memory can be expensive.)
new THREE.CubeGeometry(...) should be initialized once, outside the for loop; you only need one geometry for all the cubes. Because they are all the same, you can share that geometry instance between the meshes. I hope it helps.
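For example, a rough adaptation of the loop from the question with a single shared geometry (illustrative only; cubeWidth, cubeHeight, ySpace, material and parent are assumed to be defined as in the original code):

// create the geometry once, outside the loop
var cubeGeometry = new THREE.CubeGeometry(cubeWidth, cubeWidth, cubeHeight);

for (var i = 0; i < 30; i++) {
    var zPos = i * (cubeHeight + ySpace) + cubeHeight / 2;

    // every mesh shares the same geometry (and material) instance
    var cube = new THREE.Mesh(cubeGeometry, material);
    cube.position.z = zPos;
    cube.castShadow = true;
    cube.receiveShadow = true;
    parent.add(cube);
}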
You could also check how many instances of three.js you have running on your computer. Maybe you have some demos from their website running in the background (also in other browsers).
Closing them will give you more performance.
I am trying to get a quite simple OpenGL ES 1 program to run at a smooth, solid 60fps on a couple of devices out there, and I am stuck on the HTC Desire. The phone itself is quick, snappy, powerful, and overall a breeze to use; however, I can't seem to display anything fullscreen at 60fps with OpenGL. After getting stuck for a long time with my app, I decided to make a test app with code taken straight from the sample code in the documentation.
Here is what I am doing: simple initialization code with GLSurfaceView. I have three versions of onDrawFrame, all dead simple. One is empty. One contains only glClear. One contains just enough state to draw a fullscreen quad. I trace times before and after. There is no view other than my GLSurfaceView in my program. I can't explain the times I get.
In all cases, the onDrawFrame function itself always finishes in under 2 ms. But very often, onDrawFrame does not get called again for another 30-40 ms, dropping my frame rate all the way to 30fps or less.
I get around 50fps with an empty onDrawFrame, 45 with glClear and 35 with a quad.
The same code runs at 60fps on the HTC Magic, on the Samsung Galaxy S, and on the Sharp ISO1. The Sony Xperia X10 caps at a solid 30fps because of its screen. I have been running much more complicated scenes at a solid 60fps on the HTC Magic, which is very underpowered compared to the Desire. I don't have a Nexus One handy to test.
Sure, I expect buffer swapping to block for a couple of milliseconds. But it just skips over frames all the time.
Trying to find out what the phone is doing outside of the onDrawFrame handler, I tried to use Debug.startMethodTracing. There is no way I can get the trace to reflect the actual time the phone spends outside of the loop.
At the end of onDrawFrame, I call startMethodTracing and then save the current time (SystemClock.uptimeMillis) in a variable. At the start of the next call, I Log.e the time difference since the function last exited, and call stopMethodTracing. This gets called over and over, so I arrange to stop once I get a trace for an iteration with a 40+ ms pause.
The time scale on the resulting trace is under 2 ms, as if the system were spending the 38 ms outside of my program.
I tried a lot of things: enumerating EGL configs and trying them all one after the other, and, just to see if it changed anything, switching to a render-when-dirty scheme requesting a redraw at each frame. To no avail. Whatever I do, the expected 14-16 ms gap to swap buffers takes 30+ ms around half the time, and no matter what I do it seems like the device is waiting for two screen refreshes. ps on the device shows my application at around 10% CPU, and system_server at 35%. Of course I also tried the obvious: killing other processes, rebooting the device... I always get the same exact result.
I do not have the same problem with canvas drawing.
Does anyone know why the Desire (and, as far as I can tell, the Desire only) behaves like this?
For reference, here is what my test code looks like:
import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.util.Log;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GLTest extends Activity {

    private GLSurfaceView mGLView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(this);
        mGLView.setRenderer(new ClearRenderer());
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }
}

class ClearRenderer implements GLSurfaceView.Renderer {

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {}

    public void onSurfaceChanged(GL10 gl, int w, int h) {
        gl.glViewport(0, 0, w, h);
    }

    long start;
    long end;

    public void onDrawFrame(GL10 gl) {
        start = System.currentTimeMillis();
        if (start - end > 20)
            Log.e("END TO START", Long.toString(start - end));

        // gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        end = System.currentTimeMillis();
        if (end - start > 15)
            Log.e("START TO END", Long.toString(end - start));
    }
}
You should look at this: http://www.google.com/events/io/2010/sessions/writing-real-time-games-android.html
He recommends that you keep the framerate at 30fps, not 60fps.
Maybe I've got the answer: the OpenGL driver may decide to do a large part of the actual rendering in a later step. By default this step is done right after onDrawFrame, and it seems to be the reason why the device is idling after leaving the method. The good news is that you can pull this step into your onDrawFrame method: just call gl.glFinish() - this will do the final rendering and return when it is finished. There should be no idle time afterwards.
However, the bad news is that this was not actually idle time (the driver was rendering), so you won't be able to get that time back (I had some illusions about how fast my rendering was... now I have to face the real slowness of my implementation ;) ).
What you should know about glFinish: there seems to be an OS-level bug that causes deadlocks on some HTC devices (like the Desire), which is still present in 2.2 as far as I understood. It seems to happen if the animation runs for some hours. However, there is a patched GLSurfaceView implementation on the net which you could use to avoid this problem. Unfortunately I don't have the link right now, but you should be able to find it via Google (sorry for that!).
There might be some way to use the time spent in glFinish(): it may be GPU activity for some (all?) part, so the CPU would be free to do other processing.
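For instance, in the test renderer from the question the change would be just one extra call at the end of the frame (a rough sketch of where glFinish() would go, not a measured fix):

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    // ... draw the scene ...

    // Block until the driver has actually finished rendering, so the cost
    // shows up inside onDrawFrame instead of as unexplained idle time
    // between frames.
    gl.glFinish();
}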