Low performance with Chrome, but not other browsers - p5.js

I have a performance issue with p5.js when rendering in Chrome, but not in other browsers. I get around 10-15 fps in Chrome, even with a simple moving ellipse. I've tried restarting my computer, doing a clean install of Chrome, and updating the libraries, but nothing worked. While the sketch is running, CPU usage goes up quite a bit, to around 50% on a 2.5 GHz i7. Chrome's dev tools don't really help either; the profiler just shows "Program" as the culprit for the CPU usage.
The weirdest thing is that a couple of weeks ago everything was running smoothly and I never ran into this problem.
So I'm wondering whether any of you have ever had such an issue or know what's going on. Thanks!
Edit: here's a simple test sketch that runs very poorly on my machine.
function setup() {
  createCanvas(windowWidth, windowHeight);
}
let a = 0;
let b = 0;
function draw() {
  background(0);
  fill(255);
  ellipse(a, b, 200);
  a += 1;
  b += 1;
}
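For what it's worth, here is a minimal variant of that test sketch (only adding p5.js's built-in frameRate(), textSize() and text() calls, nothing from the original question) that overlays the measured frame rate, so the Chrome vs. other-browser difference can be read straight off the canvas:
let a = 0;
let b = 0;
function setup() {
  createCanvas(windowWidth, windowHeight);
}
function draw() {
  background(0);
  fill(255);
  ellipse(a, b, 200);
  a += 1;
  b += 1;
  // overlay the measured frame rate so the slowdown is visible on screen
  textSize(24);
  text(round(frameRate()) + " fps", 20, 40);
}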

Related

Graphics card getting activated during video? A test using a Java (Processing) program

I have an application I created with Processing where I was drawing a lot of objects to the screen. Normally the sketch runs at 60 fps and, predictably, when a lot of stuff is drawn to the screen, the frame rate starts to drop. I wanted to see what changing Processing's renderer would do, as there is a P3D option when you set the size. P3D is a '3D graphics renderer that makes use of OpenGL-compatible graphics hardware.'
I noticed that the performance improved when I used this, in that I could draw more objects to the screen before the framerate dropped, without really having to change anything in the code. Then I noticed something odd.
I started up the computer the next day, ran the program again, and noticed that suddenly the framerate was lower, around 50 fps. There didn't seem to be anything wrong with my computer, as it wasn't doing anything else. Then I thought it probably had something to do with the graphics card. I opened a YouTube video and it seemed to be fine. Then I ran the sketch again and it went back up to 60 fps.
I just want to know what might be going on here hardware-wise. I'm using an NVIDIA GTX 970 (I think it's the OC edition). It seems to me that watching the video sort of jump-started the card and made it perform properly on the Processing sketch. Why didn't the sketch itself make that happen?
As an example:
Vector<DrawableObject> objects; // DrawableObject is the sketch's own class (not shown here)
void setup()
{
  size(400, 400, P3D); // here is the thing to change; P3D is an option
  objects = new Vector<DrawableObject>();
  for (int i = 0; i < 400; i++)
  {
    objects.add(new DrawableObject());
  }
}
void draw()
{
  for (int i = 0; i < objects.size(); i++)
  {
    DrawableObject o = objects.get(i);
    o.run();
  }
}

glfwSwapBuffers() and vertical refresh on Windows

I want to do something that is very trivial with OpenGL and GLFW: I want to scroll a 100x100 white filled rectangle from left to right and back again. The rectangle should be moved by 1 pixel per frame and the scrolling should be perfectly smooth. This is my code:
#include <GLFW/glfw3.h> // also pulls in the OpenGL header by default

int main(void)
{
    GLFWwindow* window;
    int i = 0, mode = 0;
    if (!glfwInit()) return -1;
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 0, 480, -1, 1);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glColor3f(1.0, 1.0, 1.0);
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glRecti(i, 190, i + 100, 290);
        glfwSwapBuffers(window);
        glfwPollEvents();
        if (!mode) {
            i++;
            if (i >= 539) mode = 1;
        } else {
            i--;
            if (i <= 0) mode = 0;
        }
    }
    glfwTerminate();
    return 0;
}
On Mac OS X and Linux this code works really well. The scrolling is perfectly synced with the vertical refresh and you cannot see any stuttering or flickering. It is perfectly smooth.
On Windows, things are more difficult. By default, glfwSwapInterval(1) has no effect on Windows when desktop compositing is enabled. The GLFW docs say this was done because enabling the swap interval with DWM compositing enabled can lead to severe jitter. This behaviour can be changed by compiling GLFW with GLFW_USE_DWM_SWAP_INTERVAL defined, in which case the code above works really well on Windows too: the scrolling is perfectly synced and there is no jitter. I tested it on a variety of machines running XP, Vista, 7, and 8.
However, there has to be a very good reason the GLFW authors disabled the swap interval on Windows by default, so I suppose there are many configurations where it does indeed lead to severe jitter; maybe I was just lucky that none of my machines showed it. So defining GLFW_USE_DWM_SWAP_INTERVAL is not really a solution I can live with. It somewhat escapes me that the GLFW team didn't come up with a nicer solution, because as it stands, GLFW programs aren't really portable because of this issue. Take the code above as an example: it will be perfectly synced on Linux and OS X, but on Windows it will run at lightning speed. This somewhat defeats GLFW's concept of portability in my eyes.
Given the situation that GLFW_USE_DWM_SWAP_INTERVAL cannot be used on Windows because the GLFW team explicitly warns about its use, I'm wondering what else I should do. The natural solution is of course a timer which measures the time and makes sure that glfwSwapBuffers() is not called more often than the monitor's vertical refresh rate.
However, this also is not as simple as it sounds, since I cannot use Sleep(), which would be far too imprecise. Hence, I'd have to use a polling loop with QueryPerformanceCounter(). I tried this approach and it pretty much works, but CPU usage is of course up to 100% now because of the busy-wait loop. With GLFW_USE_DWM_SWAP_INTERVAL, on the other hand, CPU usage is at a mere 1%.
An alternative would be to set up a timer that fires at regular intervals, but AFAIK the precision of CreateTimerQueueTimer() is not very satisfying and probably doesn't yield perfectly synced results.
To cut a long story short: what is the recommended way of dealing with this problem? The example code above is of course just for illustration purposes. My general question is that I'm looking for a clean way to make glfwSwapBuffers() swap buffers in sync with the monitor's vertical refresh on Windows. On Linux and Mac OS X this already works fine, but on Windows there is the problem with severe jitter that the GLFW docs talk about (which I don't see here).
I'm still somewhat puzzled that GLFW doesn't provide a built-in solution to this problem and pretty much leaves it up to the programmer to work around it. I'm still an OpenGL newbie, but from my naive point of view, a function that swaps buffers in sync with the vertical refresh is a feature of fundamental importance, so it escapes me why GLFW doesn't have it on Windows.
So once again my question is: how can I work around the problem that glfwSwapInterval() doesn't work correctly on Windows? What is the suggested approach to solve this problem? Is there a nicer way than using a poll timer that will hog the CPU?
I think your issue has solved itself by a strange coincidence of timing. This commit was added to GLFW's master branch just a few days ago, and it removes GLFW_USE_DWM_SWAP_INTERVAL because GLFW now uses DWM's DwmFlush() API to do the syncing when DWM is in use. The changelog for this commit includes:
[Win32] Removed GLFW_USE_DWM_SWAP_INTERVAL compile-time option
[Win32] Bugfix: Swap interval was ignored when DWM was enabled
So probably grabbing the newest git HEAD for GLFW is all you need to do.

WebGL and three.js running great on Chrome but HORRIBLE on Firefox

Basically I am downloading a zip file and extracting a collada file to load in the browser. This works freaking awesome in Chrome, but model movement from the mouse is REALLY slow in Firefox. I can't seem to figure this out, or whether there's a setting I'm missing to speed up Firefox, or what. The file is loaded up here:
http://moneybagsnetwork.com/3d/test.htm
It's using jsunzip and three.js to load everything. I've bypassed jsunzip and that's not the issue. I've also dumbed down the model to use no event listeners and no lights, and that didn't help one bit. Completely stumped here, and the model really isn't that big :/
Here is a link to a zip of the files I'm using:
http://moneybagsnetwork.com/3d/good.zip
Sorry about the multiple commented lines. I might turn things back on if this gets fixed.
I have noticed that Chrome is usually way faster and more responsive with three.js applications than Firefox. The difference is not so much on the WebGL side as in the plain JavaScript supporting code.
Looking at your code, it seems you do some very heavy JavaScript work in your onmousemove function. This could very well cause much of the performance gap between the browsers. mousemove fires many, many times during every mouse movement, so it quickly adds up to slow performance. It could also be that Firefox actually generates more mousemove events than Chrome for similar cursor movements (not sure).
You could move most of the raycasting and other work from mousemove to mouseclick. Alternatively, you could implement a delayed mousemove so that the function is called only a maximum of X times per second, or only when the mouse has stopped. Something like this (untested but should work):
var mousemovetimer = null;
function onmousemove(event) {
  if (mousemovetimer) {
    window.clearTimeout(mousemovetimer);
  }
  // pass the latest event along once the mouse has paused for 100 ms
  mousemovetimer = window.setTimeout(function() { delayedmousemove(event); }, 100);
}
function delayedmousemove(event) {
  // your original mousemove code here
}
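If you would rather keep updating while the mouse is in motion, a simple throttle is an alternative to the debounce above. This is just a sketch (the 100 ms interval and the variable names are arbitrary), capping the heavy handler at roughly ten calls per second:
var lastmousemove = 0;
function onmousemove(event) {
  var now = Date.now();
  if (now - lastmousemove < 100) {
    return; // skip this event; run the heavy code at most ~10 times per second
  }
  lastmousemove = now;
  // your original mousemove code here (raycasting etc.)
}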
Maybe your graphics card is in our blacklist. There is usually a note about this towards the bottom of about:support.
Cards can be blacklisted for various reasons: missing drivers/features, occasional crashes... See:
http://www.sitepoint.com/firefox-enable-webgl-blacklisted-graphics-card/
To enable WebGL, set webgl.force-enabled to true.
To enable Layers Acceleration, set layers.acceleration.force-enabled to true.
To enable Direct2D in Windows Vista/7, set gfx.direct2d.force-enabled to true.

Poor Canvas2D performance with Firefox on Linux

I just hit something particularly hard to debug when doing some pretty intensive rendering with Canvas2D. I use all kinds of things, from globalCompositeOperation to multiple off-screen canvases, with some drawImage magic in-between.
It works perfectly fine and smooth on :
Chrome (26) [OSX 10.7.5]
Safari (6.0.2) [OSX 10.7.5]
Firefox (Both 18 and 20 Aurora) [OSX 10.7.5]
Chrome (24) [Windows 7]
Firefox (12) [Windows 7]
Chromium (24) [Archlinux, Gnome 3]
EDIT: Added tests for Windows 7. Strangely enough, it works for FF12 (I had an old version on my dual boot) but there's a definite performance hit after upgrading to FF18. It's not as bad on Windows as it is on Linux though, and the same version on OSX works flawlessly. Regression maybe?
For some reason, with Firefox on Linux (I tried both 18 and 20 Aurora), I get bad rendering performance when dragging and rendering at the same time.
If I fire-and-forget an animation, it is on par with Chrome/Safari, but if I drag and render, I often end up only seeing the end frame after I release the drag.
Neither requestAnimationFrame nor rendering directly in the mouse event handler helps.
After profiling, the reported timings for the rendering parts are well within an acceptable range (up to 100 ms at the absolute worst), and definitely do not correspond to what I see on the screen.
I tried reducing the load by removing some stuff, ending up with reported render times under 15ms, but what I saw didn't change.
What baffles me is that it works almost everywhere else except with Firefox on Linux. Any idea as to where I should look, a bug report or a solution to my problem?
I have fully switched to Chrome on Linux because of this issue. It stems from the old, outdated 2D rendering engine Firefox uses, called Cairo. Azure was meant to replace this engine, and they have done so on basically all platforms except Linux.
http://blog.mozilla.org/joe/2011/04/26/introducing-the-azure-project/
https://bugzilla.mozilla.org/show_bug.cgi?id=781731
I think I know where you should look based on this:
If I fire-and-forget an animation, it is on par with Chrome/Safari, but if I drag and render, I often end up only seeing the end frame after I release the drag.
This is probably a double-buffering bug with Firefox on linux.
Canvas implementations have double-buffering built in. You can see it in action on any browser in a simple example like this one: http://jsfiddle.net/simonsarris/XzAjv/ (which uses a setTimeout vs extra work to illustrate that clearing does not happen right away)
The implementations try to delay all rendering by drawing to an internal bitmap, and then, all at once (at the next pause in script execution), presenting it to the canvas. This stops the "flickering" effect of clearing a canvas before redrawing a scene, which is good.
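For reference, here is a small reconstruction of that kind of demo; it is not the exact fiddle, and it assumes a <canvas id="canvas"> element on the page. Clear the canvas, block the main thread for half a second, then draw again: the cleared state is never shown, because the canvas is only presented to the screen once the script yields.
var canvas = document.getElementById('canvas'); // assumes a <canvas id="canvas"> in the page
var ctx = canvas.getContext('2d');
ctx.fillStyle = 'red';
ctx.fillRect(0, 0, canvas.width, canvas.height);
canvas.onclick = function() {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // clear...
  var start = Date.now();
  while (Date.now() - start < 500) { } // ...then block the main thread for 500 ms
  ctx.fillStyle = 'blue';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // the blank canvas is never shown: red appears to switch straight to blue
};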
But it seems there's a plain old bug in Firefox on Linux. During your drag-and-render it does not seem to be updating the canvas, probably in an attempt to buffer, but it is doing so when it should not. This would explain why fire-and-forget scenarios work.
So I think a bug report is in order. I haven't got any Linux machines lying around, so I can't reproduce it and submit something myself to be certain, sorry.
This is in reply to a comment: You could, during the mouse move, dispatch the drawing portion to a tiny timer.
For instance:
// The old way
theCanvas.addEventListener('mousemove', function() {
  // if we're dragging and are redrawing
  drawingCode();
}, false);
// The new way
theCanvas.addEventListener('mousemove', function() {
  // if we're dragging and are redrawing
  // in 1 millisecond, fire off drawing code
  setTimeout(function() { drawingCode(); }, 1);
}, false);
There isn't such a method; it's totally hidden. What you could do is, during mouse move, dispatch the drawing portion to a tiny timer, as in the code above.

HTC Desire specific OpenGL ES 1 frame rate - can't get it right

I am trying to get a quite simple OpenGL ES 1 program to run at a smooth, solid 60 fps on a couple of devices out there, and I am stuck on the HTC Desire. The phone itself is quick, snappy, powerful, and overall a breeze to use; however, I can't seem to display anything fullscreen at 60 fps with OpenGL. After getting stuck for a long time with my app, I decided to make a test app with code straight out of the sample code from the documentation.
Here is what I am doing. Simple initialization code with GLSurfaceView. I have three versions of onDrawFrame, all dead simple: one is empty, one contains only glClear, and one contains just enough state to draw a fullscreen quad. I trace times before and after. There is no view other than my GLSurfaceView in my program. I can't explain the times I get.
In all cases, the onDrawFrame function itself always finishes in under 2 ms. But very often, onDrawFrame does not get called again for another 30-40 ms, dropping my frame rate all the way to 30 fps or less.
I get around 50 fps with an empty onDrawFrame, 45 with glClear, and 35 with a quad.
The same code runs at 60 fps on the HTC Magic, on the Samsung Galaxy S, and on the Sharp IS01. The Sony Xperia X10 caps at a solid 30 fps because of its screen. I have been doing much more complicated scenes at a solid 60 fps on the HTC Magic, which is very underpowered compared to the Desire. I don't have a Nexus One handy to test with.
Sure, I expect buffer swapping to block for a couple of milliseconds. But it just skips frames all the time.
Trying to find out what the phone is doing outside of the onDrawFrame handler, I tried to use Debug.startMethodTracing. There is no way I can get the trace to reflect the actual time the phone spends outside of the loop.
At the end of onDrawFrame, I call startMethodTracing and then save the current time (SystemClock.uptimeMillis) in a variable. At the start of the next call I Log.e the time difference since the function last exited, and call stopMethodTracing. This gets called over and over, so I arrange to stop once I get a trace for an iteration with a 40+ ms pause.
The time scale on the resulting trace is under 2 ms, as if the system were spending the other 38 ms outside of my program.
I tried a lot of things: enumerating EGL configs and trying them one after the other; switching to a render-when-dirty scheme and requesting a redraw each frame, just to see if it changed anything. To no avail. Whatever I do, the expected gap of 14-16 ms to swap buffers takes 30+ ms around half the time, and no matter what I do it seems like the device is waiting for two screen refreshes. ps on the device shows my application at around 10% CPU and system_server at 35%. Of course I also tried the obvious: killing other processes, rebooting the device... I always get the exact same result.
I do not have the same problem with canvas drawing.
Does anyone know why the Desire (and as far as I can tell, the Desire only) behaves like this?
For reference, here is what my test code looks like:
import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.util.Log;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GLTest extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(this);
        mGLView.setRenderer(new ClearRenderer());
        setContentView(mGLView);
    }
    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }
    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }
    private GLSurfaceView mGLView;
}

class ClearRenderer implements GLSurfaceView.Renderer {
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {}
    public void onSurfaceChanged(GL10 gl, int w, int h) { gl.glViewport(0, 0, w, h); }
    long start;
    long end;
    public void onDrawFrame(GL10 gl)
    {
        start = System.currentTimeMillis();
        if (start - end > 20)
            Log.e("END TO START", Long.toString(start - end));
        // gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        end = System.currentTimeMillis();
        if (end - start > 15)
            Log.e("START TO END", Long.toString(end - start));
    }
}
You should look at this: http://www.google.com/events/io/2010/sessions/writing-real-time-games-android.html
He recommends keeping the frame rate at 30 fps, not 60 fps.
Maybe I've got the answer: the OpenGL driver may decide to do a large part of the actual rendering in a later step. This step is by default done right after onDrawFrame, and seems to be the reason why the device is idling after leaving the method. The good news is that you can pull this step into your onDrawFrame method: just call gl.glFinish(). This performs the final rendering and returns when it is finished, so there should be no idle time afterwards.
However, the bad news is that there was actually no idle time to begin with, so you won't be able to get this time back (I had some illusions about how fast my rendering was... now I have to face the real slowness of my implementation ;) ).
What you should know about glFinish: there seems to be an OS-level bug that causes deadlocks on some HTC devices (like the Desire), which is still present in 2.2 as far as I understood. It seems to happen if the animation runs for some hours. However, there is a patched GLSurfaceView implementation floating around on the net which you could use to avoid this problem. Unfortunately I don't have the link right now, but you should be able to find it via Google (sorry for that!).
There might be some way to use the time spent in glFinish(): it may be GPU activity for some (all?) part, so the CPU would be free to do other processing.
