How to draw graphs using d3.js for a big dataset?

I tried creating 10 line charts, each with 3000 points, at an SVG size of 300×300. It crashed my browser; in the task manager, the Chrome renderer process was going crazy, with memory utilization at 1.2 GB and CPU utilization at 100%.

There's no easy solution for things like this. You can scrutinize your code and make it as efficient as possible, but no matter what, if your code needs to do hundreds of thousands of operations in one "thread" things will freeze up.
A general solution to avoid this freeze-up is to split the drawing process into smaller tasks, which you invoke asynchronously (e.g. from inside a setTimeout). This way the browser doesn't lock up for extended periods while your JS runs, and perhaps (I'm no expert on this) the garbage collector gets a chance to clean things up midway, too.
The result is not a faster overall draw time, but to a user it "feels" faster, because the browser doesn't freeze. And you can even add a progress bar then.
Some drawing operations can't be broken down into sub-tasks. For example, you can't split up d3.svg.line(), the d3 generator that produces your graph's path definitions. However, you can split up the drawing code of the 10 charts so that it draws one chart per tick of a setTimeout, as in the sketch below. You can similarly split the preparation of the data from the actual drawing.
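Here is a minimal sketch of that pattern; drawChartsAsync, drawChart and datasets are hypothetical stand-ins for your own chart-drawing code and your 10 data arrays:

    // Minimal sketch: draw one chart per setTimeout tick so the
    // browser can repaint (and collect garbage) between charts.
    function drawChartsAsync(datasets, drawChart) {
        var i = 0;
        (function tick() {
            if (i >= datasets.length) return;  // done: all charts drawn
            drawChart(datasets[i], i);         // draw exactly one chart
            i += 1;
            setTimeout(tick, 0);               // yield to the browser first
        })();
    }

This is also the natural place to update a progress bar, since you regain control between charts.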
I wrote an answer to a different scenario with a similar problem here: CSS transitions blocked by JavaScript

Related

How to properly manage drawing many different shapes on google maps from a speed and data standpoint

I have an app that goes out and gets a large number of points for each zip code in a given geography. It then turns those points into a polygon roughly representing the boundary of the zip code (the data had to be shrunk down to send it in a timely manner) and places the polygons on Google Maps. Each zip code gets a popup and a color with additional info.
My question is: what is the best way to keep the script from being killed on devices like the iPad when it hasn't actually hung, but just needs time to process all the data coming back, build the shapes, and draw them on the map?
My current thought is to have web workers do part of the computation, but since the results still need to come back to the main thread (the drawing needs the window and document objects), there might be alternatives I haven't thought of.
The fastest way to do it would be to move the heavy rendering to the server-side, though that may not be practical in many cases.
If you do want to take that route, check out Google Maps Engine, a geo DB that can render large tables of polygons by rendering the shapes server-side and sending them to the client as map tiles.
If you're keen on keeping it client-side, then you can avoid locks on platforms like the iPad by releasing control back to the browser as much as possible. Use setTimeout to run the work asynchronously and try to break it up such that you only process a single row or geometry per setTimeout call.
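As a rough sketch of that approach (zipShapes, toLatLngPath and map are placeholders for your own shape data, decoding helper and map instance; google.maps.Polygon is the standard Maps API polygon overlay):

    // Build one zip-code polygon per setTimeout tick so the browser
    // never blocks for long on slower devices like the iPad.
    function drawZipCodes(zipShapes, map) {
        var queue = zipShapes.slice();          // copy so we can consume it
        (function drawNext() {
            if (queue.length === 0) return;     // all shapes drawn
            var shape = queue.shift();
            new google.maps.Polygon({           // one geometry per tick
                paths: toLatLngPath(shape),     // your points -> LatLng array
                map: map
            });
            setTimeout(drawNext, 0);            // release control to the browser
        })();
    }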

performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree (these basically represent layers). Whenever something in a layer changes, the engine marks the affected layers as "soiled", and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent, and so on up to the root layer (the onscreen canvas). This can result in multiple drawImage() calls per frame, but it also avoids re-rendering anything below the affected layer. In frames where nothing changes, no rendering or drawImage() calls take place at all, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
I already know the answer to this question is "Do it both ways and benchmark." But in order to get accurate data I need real-world application, and that is months away. By then if I have an acceptable approach I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight into this.
The browser cannot determine anything when it comes to the canvas element and its rendering, as canvas is a passive element: everything in it is rendered by your own JavaScript. The only thing the browser does is pipe what's on the canvas to the display (and, more annoyingly, clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule for what the best optimization is, as this varies from case to case. There are many techniques that could be mentioned, but they are merely tools; you still have to figure out which tool, or combination of tools, is right for your specific case. Perhaps layering helps in one case and brings nothing to another.
Optimization in general is an in-depth analysis and breakdown of the patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust, and repeat. Experience reduces this process to a minimum, but even with experience the specifics come in so many combinations that some fine-tuning is still required from case to case (given they are not identical).
Even if you find a good recipe for your current project, there is no guarantee it will work optimally for your next one. This is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want is a minimum of clear operations and minimal areas to redraw (whether via drawImage() or shape drawing). The point of layers is to group elements together so as to enable this.
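To illustrate that goal (this is not your engine's actual code), a render pass over layered offscreen canvases might look roughly like this, clearing and redrawing only the layers flagged as changed; the layer structure with a dirty flag and a draw callback is an assumption:

    function render(layers, screenCtx) {
        layers.forEach(function (layer) {
            if (layer.dirty) {
                var ctx = layer.canvas.getContext("2d");
                // clear and redraw only this layer
                ctx.clearRect(0, 0, layer.canvas.width, layer.canvas.height);
                layer.draw(ctx);
                layer.dirty = false;
            }
        });
        // composite bottom-up; unchanged layers cost one drawImage() each
        layers.forEach(function (layer) {
            screenCtx.drawImage(layer.canvas, 0, 0);
        });
    }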

D3: What are the most expensive operations?

I was rewriting my code just now and it feels orders of magnitude slower. Previously it was pretty much instant; now my animations take 4 seconds to react to mouse hovers.
I tried removing transitions and not having opacity changes but it's still really slow.
Though it is more readable. - -;
The only thing I did was split large functions into smaller, more logical ones, reorder the grouping, and use new selections. What could cause such a huge difference in speed? My dataset isn't large either... 16 KB.
edit: I also split up my monolithic huge chain.
edit2: I fudged around with my code a bit, and it seems that switching to nodeGroup.append("path") made it much slower than svg.append("path"). The inelegant thing is that when appending to svg I have to transform the drawn paths to the middle myself, whereas the group is already transformed. Can anyone shed some light on group.append vs svg.append?
edit3: Additionally, I was using opacity:0 to hide all my path lines before redrawing, which made things slower and slower because those lines were never removed. I switched to remove().
Without data it is hard to work with or suggest a solution. You don't need to share private data, but it helps to generate some fake data with the same structure. It's also not clear where your performance hit comes from if we can't see how many DOM elements you are trying to create and interact with.
As for obvious things that stand out: you are not drawing your segments in a data-driven way. Any time you see a for loop, it is a hint that you are not using d3's selections when you could be.
You should bind listEdges to your paths and draw them from within the selection; it's fine to transform them to the center from there. Also, don't use d3.select when you can use nodeGroup.select; that way you don't traverse the entire page when searching for your circles.
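A rough sketch of what that join could look like, reusing the names from the question (listEdges, nodeGroup); the line generator and the d.points accessor are assumptions about your data's shape:

    // Bind one datum per path and let d3 create/update/remove them.
    var line = d3.svg.line()                   // d3 v3 line generator
        .x(function (d) { return d.x; })
        .y(function (d) { return d.y; });

    var paths = nodeGroup.selectAll("path.edge")
        .data(listEdges);                      // the data join

    paths.enter().append("path")               // create only missing paths
        .attr("class", "edge");

    paths.attr("d", function (d) { return line(d.points); });

    paths.exit().remove();                     // remove stale paths rather
                                               // than hiding them with opacity:0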

Speed of large amount of animated bitmaps in EaselJS

I seem to have a bit of trouble using a large number of animated bitmaps (all based on the same sprite sheet) in EaselJS. When I run a couple of them at once on my stage there is no problem at all, but with a higher count (starting at around 30 to 40) moving around at the same time, they start to "flicker" quite a bit, even at an FPS of around 10.
I'm not using any shadows or anything else along those lines. Just using animated bitmaps and moving them around.
Does anyone have any good advice around increasing this performance?
Without seeing your code it's hard to know exactly where the bottleneck is. But here are a few places to start looking (starting with the more trivial fixes):
Make sure you are using a modern browser. At the very least, check across a few other browsers/platforms to see whether that makes a significant difference in performance. From what I understand, EaselJS performance is significantly worse on non-hardware-accelerated canvas implementations.
If you can, use CreateJS's own TweenJS over other tweening libraries. TweenJS ties itself to EaselJS's Ticker class, which is more efficient.
Do not call stage.update() unless absolutely necessary. Since stage.update() is such an expensive call, you should be as stingy as possible. In fact, you shouldn't really call it at all if you are using the Ticker to regularly update the stage.
Cache wisely and aggressively. If you have complex static elements on the stage, caching them will save some cycles. However, there is an overhead to caching so save it for containers with a lot of static elements or complexly drawn shapes.
Lower the frequency that EaselJS checks for mouseovers. If you have enabled mouse over on the stage, pass in a lower frequency (documentation). If you don't need it (if you are only listening to clicks), don't enable it at all. Monitoring mouse overs is pretty dang expensive, especially if you have plenty of elements on stage.
Set stage.snapToPixelsEnabled to true. This may or may not help. Theoretically, rendering bitmaps on whole pixels is much more efficient, however this may cause some animations to become jagged and I haven't played around with it enough to know what the other pros and cons are.
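Putting the Ticker, mouse-over and pixel-snapping tips together, the setup might look something like this (a sketch assuming a CreateJS build where these calls exist; the numbers are illustrative, not tuned):

    createjs.Ticker.setFPS(30);
    // Let the Ticker drive rendering; don't call stage.update() yourself.
    createjs.Ticker.addEventListener("tick", stage);

    // Only if you really need mouseover: check 10 times per second
    // instead of the default 20. Skip this entirely for click-only UIs.
    stage.enableMouseOver(10);

    // May speed up bitmap rendering, but can make animation look jagged.
    stage.snapToPixelsEnabled = true;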
I was able to get decent performance with around 600-800 spritesheets at 30FPS and basic tweening using Chrome on a 4 year old iMac (just a quick test).
Try using several Stage objects at the same time.

Performance issue with QGraphicsScene::createItemGroup

I'm using the Qt graphics API to display layers in some GIS software.
Each layer is a group containing graphic primitives. I have a performance issue when loading fairly large data sets; for example, this is what happens when making a group composed of ~96k circular paths (points from a shapefile):
(callgrind image: http://f.imagehost.org/0750/profile-createItemGroup.png)
The complete callgrind dump is here.
The QGraphicsScene::createItemGroup() call takes about 150 seconds to complete on my 2.4 GHz Core 2, and it seems all this time is spent in QGraphicsItemPrivate::updateEffectiveOpacity(), which itself spends 37% of its time calling QGraphicsItem::flags() 4 billion times (the data comes from a test script with no GUI, just a scene, not even tied to a view).
All the rest is pretty much instantaneous (creating the items, reading the file, etc.). I tried disabling the scene's index before creating the group and obtained similar results.
What could I do to improve performance in this case? If I can't, is there a faster way to create groups?
After studying the source code a little, I found that updateEffectiveOpacity() is O(n²) with regard to the children of the item's parent item (search for the method qt_allChildrenCombineOpacity). This is probably also why the method disappeared in Qt 4.6, apparently replaced by something else. Anyway, you should try setting the ItemDoesntPropagateOpacityToChildren flag on the group item (i.e. you'll have to create the group yourself), at least while adding all the items.
