Performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree (these basically represent layers). Whenever something in a layer changes, the engine marks the affected layers as "soiled", and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent, and so on up to the root layer (the onscreen canvas). This can result in multiple drawImage() calls per frame, but it also avoids re-rendering anything below the affected layer. However, in frames where nothing changes, no rendering or drawImage() calls take place at all, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
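Roughly, the idea looks like this (a minimal sketch, not my actual engine code; Layer and its methods are illustrative):

    // A layer owns an offscreen canvas plus optional child layers.
    function Layer(width, height) {
        this.canvas = document.createElement('canvas');
        this.canvas.width = width;
        this.canvas.height = height;
        this.ctx = this.canvas.getContext('2d');
        this.children = [];
        this.dirty = false;
    }

    Layer.prototype.addChild = function (child, x, y) {
        child.parent = this;
        child.x = x;
        child.y = y;
        this.children.push(child);
    };

    // Mark this layer and every ancestor as needing a recomposite.
    Layer.prototype.markDirty = function () {
        for (var layer = this; layer; layer = layer.parent) {
            layer.dirty = true;
        }
    };

    // Recomposite: only dirty subtrees get drawImage()'d into their parents.
    Layer.prototype.render = function () {
        if (!this.dirty) return;
        this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);
        // (a real layer would repaint its own content here, then composite children)
        this.children.forEach(function (child) {
            child.render();
            this.ctx.drawImage(child.canvas, child.x, child.y);
        }, this);
        this.dirty = false;
    };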
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
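As I understand it, that approach boils down to something like this (a minimal sketch; the sizes and the 'stage' container div are made up):

    // Two stacked onscreen canvases; the browser composites them for us.
    // Assumes a <div id="stage"> with position: relative in the page.
    function makeLayer(zIndex) {
        var c = document.createElement('canvas');
        c.width = 640;
        c.height = 480;
        c.style.position = 'absolute';
        c.style.left = '0';
        c.style.top = '0';
        c.style.zIndex = zIndex;
        document.getElementById('stage').appendChild(c);
        return c.getContext('2d');
    }

    var bg = makeLayer(0);   // redrawn rarely
    var fg = makeLayer(1);   // cleared and redrawn only on frames that change

    function renderForeground() {
        fg.clearRect(0, 0, 640, 480);
        // ... redraw only the foreground objects here ...
    }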
I already know the answer to this question is "Do it both ways and benchmark." But in order to get accurate data I need a real-world application, and that is months away. By then, if I have an acceptable approach, I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight.

The browser cannot determine anything when it comes to the canvas element and its rendering, as canvas is a passive element - everything in it is user-rendered by means of JavaScript. The only thing the browser does is pipe what's on the canvas to the display (and, more annoyingly, clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule/answer as to what the best optimization is, as this will vary from case to case. There are many techniques that could be mentioned, but they are merely tools; you will still have to figure out what the right tool, or the right combination of tools, is for your specific case. Perhaps layering is good in one case but brings nothing to another.
Optimization in general is very much an in-depth analysis and break-down of patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust, and repeat. Of course, experience reduces this process to a minimum, but even with experience the specifics come in a variety of combinations that still require some fine-tuning from case to case (given that they are not identical).
Even if you find a good recipe for your current project, there is no guarantee that it will work optimally for your next one. This is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want to achieve is a minimum of clear operations and a minimum of area to redraw (drawImage or shapes). The point of layers is to group elements together so as to enable this goal.
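As a concrete illustration of "minimum clear, minimum redraw", one such tool is the dirty-rectangle technique: clear and repaint only the region that actually changed rather than the whole canvas. A minimal sketch (the sprite object is hypothetical):

    // Repaint only the union of the old and new positions of a moved sprite.
    function repaintMoved(ctx, sprite, oldX, oldY) {
        var x = Math.min(oldX, sprite.x);
        var y = Math.min(oldY, sprite.y);
        var w = Math.abs(sprite.x - oldX) + sprite.width;
        var h = Math.abs(sprite.y - oldY) + sprite.height;

        ctx.clearRect(x, y, w, h);                        // clear only the dirty rect
        ctx.drawImage(sprite.image, sprite.x, sprite.y);  // redraw the sprite
    }

In a layered setup, the cleared region simply exposes the layer below, which is exactly why grouping elements into layers enables this goal.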

Related

How to properly manage drawing many different shapes on google maps from a speed and data standpoint

I have an app that goes out and gets a large number of points for each zip code in a given geography. It then turns those points into a polygon roughly representing the boundaries of a zip code (the data had to be shrunk down to send in a timely manner) and places them on Google Maps. Each zip code has a popup and a color with additional info.
My question is: what is the best method of keeping the script from crashing on devices like the iPad when the script has not hung but just needs time to process through all the data coming back, make a shape, and draw it on the map?
My current thought is web workers doing part of the computation, but since it still needs to come back to the main thread (it needs the window and document objects), there might be alternatives that I haven't thought of.
The fastest way to do it would be to move the heavy rendering to the server-side, though that may not be practical in many cases.
If you do want to take that route, check out Google Maps Engine, a geo DB that can render large tables of polygons by rendering the shapes server-side and sending them to the client as map tiles.
If you're keen on keeping it client-side, then you can avoid lock-ups on platforms like the iPad by releasing control back to the browser as much as possible. Use setTimeout to run the work asynchronously, and try to break it up such that you only process a single row or geometry per setTimeout call.
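A minimal sketch of that chunking pattern (rows and drawPolygon stand in for the question's data and drawing code):

    // Process one geometry per timeout so the UI thread can breathe
    // (handle input, repaint) between rows.
    function processRows(rows, index) {
        if (index >= rows.length) return;   // done

        drawPolygon(rows[index]);           // the per-row work goes here

        setTimeout(function () {
            processRows(rows, index + 1);
        }, 0);                              // yield back to the browser
    }

    processRows(zipCodeRows, 0);            // zipCodeRows: hypothetical data array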

D3: What are the most expensive operations?

I was rewriting my code just now and it feels orders of magnitude slower. Previously it was pretty much instant; now my animations take 4 seconds to react to mouse hovers.
I tried removing transitions and not having opacity changes but it's still really slow.
Though it is more readable. - -;
The only thing I did was split large functions into smaller, more logical ones, reorder the grouping, and use new selections. What could cause such a huge difference in speed? My dataset isn't large either... 16 KB.
edit: I also split up my monolithic huge chain.
edit2: I fudged around with my code a bit, and it seems that switching to nodeGroup.append("path") caused it to be much slower than svg.append("path"). The inelegant thing about this, though, is that I have to transform the drawn paths to the middle when using svg, while the entire group is already transformed. Can anyone shed some insight on group.append vs svg.append?
edit3: Additionally, I was using opacity:0 to hide all my path lines before redrawing, which caused it to become slower and slower because these lines were never removed. I switched to remove().
Without data it is hard to work with or suggest a solution. You don't need to share private data, but it helps to generate some fake data with the same structure. It's also not clear where your performance hit comes from if we can't see how many DOM elements you are trying to make/interact with.
As for obvious things that stand out: you are not drawing your segments in a data-driven way. Any time you see a for loop, it is a hint that you are not using d3's selections when you could be.
You should bind listEdges to your paths and draw them from within the selection; it's fine to transform them to the center from there. Also, you shouldn't use d3.select when you can use nodeGroup.select; that way you don't need to traverse the entire page when searching for your circles.
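A minimal sketch of that data join (assuming listEdges is an array of edge objects, and that a line generator and center offsets cx/cy already exist; those names are made up):

    // One path per edge, driven by the data join instead of a for loop.
    var paths = nodeGroup.selectAll('path.edge')
        .data(listEdges);

    paths.enter().append('path')
        .attr('class', 'edge')
        .attr('transform', 'translate(' + cx + ',' + cy + ')')    // center offset
        .attr('d', function (d) { return lineGenerator(d.points); });

    paths.exit().remove();   // drop stale paths instead of hiding them with opacity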

Speed of large amount of animated bitmaps in EaselJS

I seem to have a bit of trouble using a large number of animated bitmaps (all based on the same sprite sheet) in EaselJS. When I run a couple of these at once on my stage there is no problem at all, but when running a larger number of them at the same time (starting at around 30 to 40) while moving them around, they start to "flicker" quite a bit, even at an FPS of around 10.
I'm not using any shadows or anything else along those lines. Just using animated bitmaps and moving them around.
Does anyone have any good advice around increasing this performance?
Without seeing your code it's hard to know exactly where the bottleneck is. But here are a few places to start looking (starting with the more trivial fixes):
Make sure you are using a modern browser. At the very least, check across a few other browsers/platforms to see if that makes any significant change in performance. From what I understand, EaselJS performance is significantly worse on non-hardware-accelerated canvas implementations.
If you can, use CreateJS's version of TweenJS over other tweening libraries. TweenJS will tie itself to EaselJS's Ticker class, which is more efficient.
Do not call stage.update() unless absolutely necessary. Since stage.update() is such an expensive call, you should be as stingy as possible. In fact, you shouldn't really call it at all if you are using the Ticker to regularly update the stage.
Cache wisely and aggressively. If you have complex static elements on the stage, caching them will save some cycles. However, there is an overhead to caching so save it for containers with a lot of static elements or complexly drawn shapes.
Lower the frequency at which EaselJS checks for mouseovers. If you have enabled mouse over on the stage, pass a lower frequency to enableMouseOver (see its documentation). If you don't need it (if you are only listening to clicks), don't enable it at all. Monitoring mouse overs is pretty dang expensive, especially if you have plenty of elements on stage.
Set stage.snapToPixelEnabled to true. This may or may not help. Theoretically, rendering bitmaps on whole pixels is much more efficient; however, this may cause some animations to become jagged, and I haven't played around with it enough to know what the other pros and cons are.
I was able to get decent performance with around 600-800 sprite sheet animations at 30 FPS and basic tweening, using Chrome on a 4-year-old iMac (just a quick test).
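Putting a few of those tips together, a minimal sketch of the intended setup (EaselJS 0.x-era API; 'gameCanvas' is a made-up canvas id):

    var stage = new createjs.Stage('gameCanvas');
    stage.snapToPixelEnabled = true;        // render bitmaps on whole pixels

    // Only enable mouse-over if you actually need it, and at a low frequency:
    // stage.enableMouseOver(10);

    // Let the shared Ticker drive stage updates instead of calling
    // stage.update() manually after every change.
    createjs.Ticker.setFPS(30);
    createjs.Ticker.addEventListener('tick', stage);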
Try using several Stage objects at the same time.

How to handle large numbers of pushpins in Bing Maps

I am using Bing Maps with Ajax and I have about 80,000 locations to drop pushpins into. The purpose of the feature is to allow a user to search for restaurants in Louisiana and click the pushpin to see the health inspection information.
Obviously it doesn't do much good to have 80,000 pins on the map at one time, but I am struggling to find the best solution to this problem. Another problem is that the distance between these locations is very small (All 80,000 are in Louisiana). I know I could use clustering to keep from cluttering the map, but it seems like that would still cause performance problems.
What I am currently trying to do is to simply not show any pins until a certain zoom level and then only show the pins within the current view. The way I am currently attempting to do that is by using the viewchangeend event to find the zoom level and the boundaries of the map and then querying the database (through a web service) for any points in that range.
It feels like I am going about this the wrong way. Is there a better way to manage this large amount of data? Would it be better to try to load all points initially and then have the data on hand without having to hit my web service every time the map moves? If so, how would I go about it?
I haven't been able to find answers to my questions, which usually means that I am asking the wrong questions. If anyone could help me figure out the right question it would be greatly appreciated.
Well, I've implemented a slightly different approach to this. It was just a fun exercise, but I'm displaying all my data (about 140,000 points) in Bing Maps using the HTML5 canvas.
I load all the data to the client up front. Then I optimized the drawing process so much that I attached it to the "viewchange" event (which fires continuously during the view change process).
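The core of that approach looks roughly like this (a simplified sketch against the v7 AJAX control; the overlay canvas wiring and the points array are assumptions, not code from the post):

    // A canvas positioned over the map; redraw every point on each viewchange.
    Microsoft.Maps.Events.addHandler(map, 'viewchange', drawPoints);

    function drawPoints() {
        var ctx = overlayCanvas.getContext('2d');
        ctx.clearRect(0, 0, overlayCanvas.width, overlayCanvas.height);

        for (var i = 0; i < points.length; i++) {   // points: Location objects
            var px = map.tryLocationToPixel(points[i],
                         Microsoft.Maps.PixelReference.control);
            if (px) ctx.fillRect(px.x, px.y, 2, 2); // one tiny square per point
        }
    }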
I've blogged about this. You can check it here.
My example does not have interaction, but that could easily be added (it should be a nice topic for a blog post). You would then have to handle the events manually and search for the corresponding points yourself, or, if the number of points to draw and/or the zoom level were below some threshold, show regular pushpins.
Anyway, another option, if you're not restricted to Bing Maps, is to use the likes of Leaflet. It allows you to create a Canvas Layer, which is a tile-based layer rendered client-side using the HTML5 canvas. It opens up a new range of possibilities. Check, for example, this map in GisCloud.
Yet another option, although more suitable for static data, is a technique called UTFGrid. The lads who developed it can certainly explain it better than I can, but it scales to as many points as you want with phenomenal performance. It consists of a tile layer with your info, plus an accompanying JSON file with something like an "ascii-art" description of the features on the tiles. Then a library called wax provides complete mouse-over and mouse-click events on it, without any performance impact whatsoever.
I've also blogged about it.
I think clustering would be your best bet if you can get away with using it. You say that you tried clustering but it still caused performance problems? I went to test it out with 80,000 data points in the V7 Interactive SDK and it seems to perform fine. Test it out yourself by going to the link and changing this line in the Load module - clustering tab:
TestDataGenerator.GenerateData(100,dataCallback);
to
TestDataGenerator.GenerateData(80000,dataCallback);
then hit the Run button. The performance seems acceptable to me with that many data points.

Any name for this concept?

Say we have a program that gets user input, or any other unpredictable events, at arbitrary moments in time.
For each kind of event the program should perform some computation or access a resource, which is time-consuming enough to matter. The program should output a result as fast as possible. If new events arrive, it might be acceptable to drop previous computations and take up new ones.
To complicate it further, some computations/resource accesses might be interdependent, i.e. produce data that can be used in other computations.
What's important is that we know the pattern in which these events usually occur - for example, their relative frequency with respect to each other, or a common order and the time intervals in which they happen.
The task is to design an algorithm that deals with the problem in the most statistically efficient way. Approaches yielding sub-optimal solutions can be more than sufficient.
Is there a concept which embraces designing such algorithms?
Example:
A tabbed internet browser.
When told to load different web pages in several tabs, it should decide whether to load the page in the active tab with higher priority, whether to render just the visible part of the page or pre-render the full page, and in the latter case what to do first - pre-render the whole page for the active tab, or render the other tabs instead, etc.
(I know nothing about how browsers actually work, but assuming it is this way won't hurt)
I think scheduling algorithms deal with this kind of scenario.
What you're describing is a prioritizing application scheduler. You would need to be more specific to determine which algorithm would be best, but here's a list that you might find useful.
I am tossing out keywords: scheduling with pre-emption? Also prefetching and double-buffering.
I don't know a lot about it, but this sounds like something the reactor pattern may be used for.
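To make the pre-emption idea concrete, here is a minimal "latest event wins" sketch in JavaScript: the work is cut into steps, and a newer event invalidates whatever computation is still in flight (all names here are made up for illustration):

    var currentJob = 0;   // token identifying the most recent request

    function onEvent(input) {
        var job = ++currentJob;          // pre-empt: invalidate older jobs
        computeStep(input, 0, job);
    }

    function computeStep(input, step, job) {
        if (job !== currentJob) return;  // a newer event arrived; drop this work

        // ... do one slice of the expensive computation here ...

        if (step < TOTAL_STEPS) {        // TOTAL_STEPS: hypothetical constant
            setTimeout(function () { computeStep(input, step + 1, job); }, 0);
        } else {
            showResult(input);           // showResult: hypothetical output hook
        }
    }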
