I just hit something particularly hard to debug when doing some pretty intensive rendering with Canvas2D. I use all kinds of things, from globalCompositeOperation to multiple off-screen canvases, with some drawImage magic in-between.
It works perfectly fine and smoothly on:
Chrome (26) [OSX 10.7.5]
Safari (6.0.2) [OSX 10.7.5]
Firefox (Both 18 and 20 Aurora) [OSX 10.7.5]
Chrome (24) [Windows 7]
Firefox (12) [Windows 7]
Chromium (24) [Archlinux, Gnome 3]
EDIT: Added tests for Windows 7. Strangely enough, it works for FF12 (I had an old version on my dual boot) but there's a definite performance hit after upgrading to FF18. It's not as bad on Windows as it is on Linux though, and the same version on OSX works flawlessly. Regression maybe?
For some reason, with Firefox on Linux (I tried both 18 and 20 Aurora), I get bad rendering performance when dragging and rendering at the same time.
If I fire-and-forget an animation, it is on par with Chrome/Safari, but if I drag and render, I often end up only seeing the end frame after I release the drag.
Neither requestAnimationFrame nor rendering directly in the mouse event handler works.
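Roughly, both approaches look like this (heavily simplified; dragging, needsRender and render() stand in for my real code):
var dragging = false, needsRender = false; // maintained by mousedown/mouseup handlers (omitted)
// Variant 1: render directly from the mouse event handler
theCanvas.addEventListener('mousemove', function () {
    if (dragging) render(); // redraw immediately on every move
}, false);
// Variant 2: only flag the need to redraw, and render from a requestAnimationFrame loop
theCanvas.addEventListener('mousemove', function () {
    if (dragging) needsRender = true;
}, false);
(function loop() {
    if (needsRender) { render(); needsRender = false; }
    requestAnimationFrame(loop);
})();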
After profiling, the reported timings for the rendering parts are well within an acceptable range (up to 100ms at the absolute worst) and definitely do not correspond to what I see on the screen.
I tried reducing the load by removing some stuff, ending up with reported render times under 15ms, but what I saw didn't change.
What baffles me is that it works almost everywhere else except with Firefox on Linux. Any idea as to where I should look, an existing bug report, or a solution to my problem?
I have fully switched to Chrome on Linux because of this issue. It stems from the old, outdated 2D rendering engine Firefox uses, called Cairo. Azure was meant to replace this engine, and it is done on basically every platform except Linux.
http://blog.mozilla.org/joe/2011/04/26/introducing-the-azure-project/
https://bugzilla.mozilla.org/show_bug.cgi?id=781731
I think I know where you should look based on this:
If I fire-and-forget an animation, it is on par with Chrome/Safari, but if I drag and render, I often end up only seeing the end frame after I release the drag.
This is probably a double-buffering bug with Firefox on Linux.
Canvas implementations have double-buffering built in. You can see it in action in any browser with a simple example like this one: http://jsfiddle.net/simonsarris/XzAjv/ (which uses a setTimeout, versus doing extra synchronous work, to illustrate that clearing does not happen right away)
The implementations try to delay all rendering by rendering it to an internal bitmap, and then all at once (at the next pause) render it to the canvas. This stops the "flickering" effect when clearing a canvas before redrawing a scene, which is good.
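As a rough sketch of what that buffering means in practice (the canvas id here is made up):
var canvas = document.getElementById('canvas'); // assumed element
var ctx = canvas.getContext('2d');
function redraw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height); // this cleared state is never shown on its own
    ctx.fillRect(20, 20, 100, 100); // only this final state gets composited
}
redraw(); // the browser presents the internal bitmap once the script yields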
But it seems there's a plain old bug in Firefox on Linux. During your drag-and-render it seems not to be updating the canvas, probably in an attempt to buffer, but doing so when it should not. This would explain why it works in fire-and-forget scenarios.
So I think a bug report is in order. I haven't got any Linux machines lying around, so I can't reproduce it and submit something myself to be certain, sorry.
This is in reply to a comment: You could, during the mouse move, dispatch the drawing portion to a tiny timer.
For instance:
// The old way: draw synchronously inside the event handler
var theCanvas = document.getElementById('canvas'); // your canvas element
theCanvas.addEventListener('mousemove', function() {
    // if we're dragging and are redrawing
    drawingCode();
}, false);
// The new way: defer the drawing to a tiny timer
theCanvas.addEventListener('mousemove', function() {
    // if we're dragging and are redrawing,
    // fire off the drawing code 1 millisecond from now
    setTimeout(function() { drawingCode(); }, 1);
}, false);
There isn't such a method; it's totally hidden.
Related
I am developing an A-Frame project on my MacBook Pro (late 2013). When running the project, the fan of my computer always spins fast, regardless of which browser I use (Firefox, Safari, Chrome) and of the project size (it also happens with a project containing just a simple a-box).
aframe-stats shows me that my project (1028244 vertices, 342748 faces) still runs at 20 fps.
Is it somehow possible to limit the frame rate to 10 fps in order to keep my computer quiet? Or is there any other way to limit the FLOP consumption of the A-Frame project? I already tried a native approach with sudo cputhrottle plugin-container 10, but that throttled not just the A-Frame renderer but the whole Firefox browser. Can I pull the brake somewhere in the JavaScript or the browser settings?
It's difficult to say without your project code. Large data sets will simply max out even a high-spec MacBook Pro. I have found it helpful to pause any rendering whenever possible to quiet the users' machines.
I personally removed automated next-animation-frame rendering in favor of waiting for controls and objects to change.
For example:
this.controls.addEventListener('change', function (e) { addToRenderStack(); });
A simple function addToRenderStack pushes a new value onto a list, with the expectation that the render will occur at some point in the future rather than right away. The list can also be used to log who requested the render in the call stack, which helps narrow down performance hogs.
addToRenderStack places a render request in the list. In the requestAnimationFrame loop, if the list has any length, a render is called on the scene. The list is immediately cleared rather than processed one by one. If controls or animations continue to make render requests, the list will have a length again and the requestAnimationFrame loop will process them the same way with another render.
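A minimal sketch of that pattern, assuming the usual three.js renderer, scene, camera and controls objects already exist:
var renderStack = [];
function addToRenderStack(reason) {
    // record who asked for the render; handy when hunting performance hogs
    renderStack.push(reason || 'unknown');
}
(function animate() {
    requestAnimationFrame(animate);
    if (renderStack.length > 0) {
        renderStack.length = 0; // clear every pending request at once
        renderer.render(scene, camera); // one render covers them all
    }
})();
controls.addEventListener('change', function () { addToRenderStack('controls'); });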
In this way, the code only renders when absolutely required. This saved me much grinding on framerate, and the fans only come on during intensive operations and then shut down when it's complete, much like a typical 3D game experience.
Your mileage may vary depending on what's happening in your app. I work in engineering so often the view of the 3d world is stopped as an engineer examines or shows a model.
First of all: after 4 hours of debugging, it turned out there is no problem with my code. But I'm curious why I had the issue that I had.
I created a fullscreen window with D3D11 rendering. The problem occurred when I alt-tabbed out of the window while I didn't yet have Present() in my loop (I simply hit this before implementing the rendering function). In that case, after minimizing the window, the red and blue channels on my screen were swapped (yes, literally).
It took me a long time to find because I suspected my swap chain or the window itself (SDL). Can you help me find the reason for this bug, for educational purposes?
This is usually due to a graphics driver bug with RGBA swap chains. You can try updating your driver (run Windows Update). But to improve compatibility, you can change your swap chain surface format to BGRA (specifically, DXGI_FORMAT_B8G8R8A8_UNORM). As long as you are just doing normal rendering (and not doing anything fancy like UpdateSubresource directly to the back buffer), you should be able to leave everything else as-is and it will render correctly.
Basically, I am downloading a zip file and extracting a Collada file to load in the browser. This works freaking awesome in Chrome, but model movement from the mouse is REALLY slow in Firefox. I can't seem to figure out whether there's a setting I'm missing to speed up Firefox or what. The file is loaded up here:
http://moneybagsnetwork.com/3d/test.htm
It's using jsunzip and three.js to load everything. I've bypassed the jsunzip part, and that's not the issue. I've also dumbed down the model to use no event listeners and no lights, and that didn't help one bit. Completely stumped here, and the model really isn't that big :/
Here is a link to a zip of the files I'm using
http://moneybagsnetwork.com/3d/good.zip
Sorry about the multiple commented lines. I might turn things back on if this gets fixed.
I have noticed that Chrome is usually way faster and more responsive with three.js applications than Firefox. The difference is not so much on the WebGL side, but in the plain JavaScript supporting code.
Looking at your code, it seems you do some very heavy JavaScript work in your onmousemove function. This could very well cause much of the performance gap between the browsers. mousemove is executed many, many times during each and every mouse movement, so it quickly adds up to slow performance. It could also be that Firefox actually creates more mousemove events for similar cursor movements (not sure).
You could either move most of the raycasting and other work from mousemove to mouseclick. Alternatively, you could implement a delayed mousemove so that the function is called at most X times per second, or only when the mouse has stopped. Something like this (untested but should work):
var mousemovetimer = null;
function onmousemove(event) {
    // restart the timer on every move, so delayedmousemove
    // only fires once the mouse has been still for 100ms
    if (mousemovetimer) {
        window.clearTimeout(mousemovetimer);
    }
    mousemovetimer = window.setTimeout(function () {
        delayedmousemove(event); // forward the last event
    }, 100);
}
function delayedmousemove(event) {
    // your original mousemove code here
}
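The version above fires only once the mouse has stopped; the "at most X times per second" option would be a throttle instead, roughly like this (the 100ms cap is just an example):
var lastmousemove = 0;
function onmousemovethrottled(event) {
    var now = Date.now();
    if (now - lastmousemove < 100) { return; } // too soon, skip this event
    lastmousemove = now;
    delayedmousemove(event); // run the original mousemove code
}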
Maybe your graphics card is on our blacklist. There is usually a note about this towards the bottom of about:support.
Cards can be blacklisted for various reasons: missing drivers or features, occasional crashes ... see:
http://www.sitepoint.com/firefox-enable-webgl-blacklisted-graphics-card/
To enable WebGL, set webgl.force-enabled to true.
To enable Layers Acceleration, set layers.acceleration.force-enabled to true.
To enable Direct2D on Windows Vista/7, set gfx.direct2d.force-enabled to true.
I’m building a mobile web application, and even though I’m still at the prototyping stage, I’m having a hard time fixing certain performance problems.
The application itself (it works smoothly in desktop browsers, but is significantly sluggish in Mobile Safari): Hancards webapp prototype. You may log in as mifeng:wangwang or create a new user.
The overall clumsy performance would be tolerable, except for one thing: the browser simply crashes (!) when you open a set page, tap ‘view’ (enlarge all cards) and then try to go back to the previous page.
The code that gets executed when ‘view’ is tapped is this (very sluggish by itself as well; any way to improve it?):
if ($(this).hasClass('big')) {
    // shrink: remove the wrapper and make the cards small again
    $('.card').unwrap().removeClass('big flippable').addClass('small');
    $(this).removeClass('big');
}
else {
    // enlarge: wrap each card and make it big and flippable
    $('.card').wrap('<div class="bigCardWrap" />').removeClass('small').addClass('big flippable');
    $(this).addClass('big');
}
And another thing, a pretty weird bug: very often the ‘word of the day’ block won’t display the text node for the last element (<div class="meaning">), even though it’s in the code. The text will not show unless you ‘shake’ the DOM somehow (unticking and re-ticking one of the associated CSS properties can also achieve that). This happens in both desktop and mobile Safari browsers.
The code that writes it in there is this:
// While we are here, also display the Word of the day
$.post('ajax.php', {action: 'stuff:showWotd'}, function(data) {
// Decode the received data
var msg = decodeResponse(data);
// Insert the values
$('.wotd .hanzi').text(msg.content[0]['hanzi']);
$('.wotd .pinyin').text(msg.content[0]['pinyin']);
$('.wotd .meaning').text(msg.content[0]['meaning']);
});
I don’t expect you to advise me on how to fix the performance of the whole application (I will probably have to revise the overall scope of the project instead of trying to find workarounds), but I would at least like to see how to solve these two problems. Thank you!
The only performance issue I see in the script is the wrap/unwrap calls - adding and removing elements from the DOM tends to be fairly slow, and you can probably get the same effect by always having a wrapper element and changing its class rather than adding or removing it.
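For instance, with a permanent wrapper already in the markup (the .cardWrap class here is made up), the toggle becomes pure class switching:
if ($(this).hasClass('big')) {
    $('.cardWrap').removeClass('bigCardWrap'); // keep the wrapper, just restyle it
    $('.card').removeClass('big flippable').addClass('small');
    $(this).removeClass('big');
}
else {
    $('.cardWrap').addClass('bigCardWrap');
    $('.card').removeClass('small').addClass('big flippable');
    $(this).addClass('big');
}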
However, the performance issues you are seeing are most likely in your CSS:
3D transforms can be much faster than 2D due to hardware acceleration (see the sketch after this list). It looks like you already have this, though you do need to be careful about which elements it is applied to
Shadows have real performance issues, especially when animated. Removing them will probably fix most of the slowness.
Rearranging background images can help - A single background image under transparent pages is faster than having a background image for each page.
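As a rough example of the 3D-transform point (the .page selector is an assumption), a no-op translate3d is a common way to force hardware compositing:
var page = document.querySelector('.page'); // illustrative selector
page.style.webkitTransform = 'translate3d(0, 0, 0)'; // promotes the element to its own GPU layer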
I am making digital instruments for a car. These instruments will be constantly updated with information through AJAX. They will be served from a server onboard the vehicle through WLAN (fast) to my iPhone 3G. It is imperative to the success of the project that the updating of the tachometer is smooth and very responsive; otherwise it will look terrible.
The first problem I encountered was with this demo, where the tachometer moves quickly back and forth between zero and a thousand RPM: http://www.kingoslo.com/instruments/ When viewed on my iPhone 3G, the arrow simply doesn't move back and forth smoothly enough.
This JavaScript works by changing the source of the arrow img element (which is a semi-transparent 4-color PNG floating on top of the static picture of the scale, a 16-color PNG, by the way).
I've been made aware of the new image-editing features in HTML5 and wondered whether any of those, or any other method, would be more responsive. Also, I am getting an iPhone 4 for Xmas, so that may be a bit faster, but I have the feeling it will still fall short with the current build, especially once I add the constant AJAX updating required to keep the instruments changing values as the driver drives along.
Thank you for your time. Any help is greatly appreciated.
Kind regards,
Marius
I think using canvas will result in much faster animation - it was created to handle drawing, whilst manipulating DOM elements is comparatively expensive.
Mobile Safari is compatible with canvas.
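A rough sketch of drawing the needle with canvas instead of swapping img sources (the element id, image path and sizes are made up):
var canvas = document.getElementById('tacho'); // assumed canvas element
var ctx = canvas.getContext('2d');
var needle = new Image();
needle.src = 'needle.png'; // the arrow image, drawn pointing up
function drawNeedle(angleRadians) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.save();
    ctx.translate(canvas.width / 2, canvas.height / 2); // pivot at the dial center
    ctx.rotate(angleRadians);
    ctx.drawImage(needle, -needle.width / 2, -needle.height / 2);
    ctx.restore();
}
needle.onload = function () { drawNeedle(0); }; // start drawing once the image is ready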
Alternatively, you could incorporate all the needle angles into one large CSS sprite and then just manipulate its background-position CSS property (element.style.backgroundPosition in the JavaScript DOM API).
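For instance, with one frame per angle stacked vertically in the sprite (the frame height and element id are assumptions):
var FRAME_HEIGHT = 200; // height of one pre-rendered needle frame, in px
var gauge = document.getElementById('gauge'); // div with the sprite as its background
function showFrame(index) {
    // a negative vertical offset scrolls the sprite so frame number `index` shows
    gauge.style.backgroundPosition = '0px ' + (-index * FRAME_HEIGHT) + 'px';
}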