D3: What are the most expensive operations?

I was rewriting my code just now and it feels orders of magnitude slower. Previously it was pretty much instant; now my animations take 4 seconds to react to mouse hovers.
I tried removing transitions and not having opacity changes but it's still really slow.
The new code is more readable, though.
The only things I did were split large functions into smaller, more logical ones, reorder the grouping, and use new selections. What could cause such a huge difference in speed? My dataset isn't large either: 16 KB.
edit: I also split up my monolithic huge chain.
edit2: I fudged around with my code a bit, and it seems that switching to nodeGroup.append("path") caused it to be much slower than svg.append("path"). The inelegant thing about this, though, is that when using svg I have to transform the drawn paths to the middle, while the entire group is already transformed. Can anyone shed some insight on group.append vs svg.append?
edit3: Additionally, I was using opacity:0 to hide all my path lines before redrawing, which caused things to become slower and slower because those lines were never removed. Switched to remove().

Without data it is hard to work with your code or suggest a solution. You don't need to share private data, but it helps to generate some fake data with the same structure. It's also not clear where your performance hit comes from if we can't see how many DOM elements you are trying to create and interact with.
As for obvious things that stand out: you are not drawing your segments in a data-driven way. Any time you see a for loop, it is a hint that you are not using d3's selections when you could.
You should bind listEdges to your paths and draw them from within the selection; it's fine to transform them to the center from there. Also, you shouldn't use d3.select when you can use nodeGroup.select; that way you don't need to traverse the entire page when searching for your circles.
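To make that concrete, here is a minimal sketch of the join pattern described above; listEdges comes from the question, while line, cx and cy are hypothetical stand-ins for your path generator and group center:

    // Bind the data: one path per edge, no for loop needed.
    var paths = nodeGroup.selectAll("path.edge")
        .data(listEdges);

    // Enter: create paths only for new data.
    paths.enter().append("path")
        .attr("class", "edge")
        .attr("transform", "translate(" + cx + "," + cy + ")") // center once
        .attr("d", line);

    // Update: reshape existing paths in place instead of re-creating them.
    paths.attr("d", line);

    // Exit: actually remove stale paths; hiding them with opacity:0 leaves
    // them in the DOM and every redraw gets slower (as noted in edit3).
    paths.exit().remove();

    // Scope lookups to the group instead of traversing the whole document:
    var circles = nodeGroup.selectAll("circle"); // not d3.selectAll("circle")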

Related

performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree (these basically represent layers). Whenever something in a layer changes, the engine marks the affected layers as "soiled", and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent, and so on up to the root layer (the onscreen canvas). This can result in multiple drawImage() calls per frame, but it also prevents re-rendering anything below the affected layer. In frames where nothing changes, no rendering or drawImage() calls take place at all, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
I already know the answer to this question is "do it both ways and benchmark." But in order to get accurate data I need a real-world application, and that is months away. By then, if I have an acceptable approach, I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight.
The browser cannot determine anything when it comes to the canvas element and the rendering as it is a passive element - everything in it is user rendered by the means of JavaScript. The only thing the browser does is to pipe what's on the canvas to the display (and more annoyingly clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule/answer as to what the best optimization is, because this varies from case to case. There are many techniques that could be mentioned, but they are merely tools; you still have to figure out what the right tool, or the right combination of tools, is for your specific case. Perhaps layering is good in one case and brings nothing to another.
Optimization in general is an in-depth analysis and breakdown of patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust, experiment, benchmark, re-adjust... Of course, experience reduces this process to a minimum, but even with experience the specifics come in a variety of combinations that still require some fine-tuning from case to case (given that they are not identical).
Even if you find a good recipe for your current project, it is not a given that it will work optimally for your next project. This is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want to achieve is a minimum of clear operations and minimal areas to redraw (whether via drawImage or shapes). The point of layers is to group elements together to enable this goal.
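As an illustration of that goal, here is a rough sketch of dirty-flag layering over offscreen canvases; the layer objects and their fields are hypothetical:

    // Hypothetical layer shape: { canvas, ctx, dirty, dirtyRect, draw }.
    function renderLayer(layer) {
      if (!layer.dirty) return;                // untouched layers cost nothing
      var r = layer.dirtyRect;                 // clear/redraw only the dirty area
      layer.ctx.clearRect(r.x, r.y, r.w, r.h);
      layer.ctx.save();
      layer.ctx.beginPath();
      layer.ctx.rect(r.x, r.y, r.w, r.h);
      layer.ctx.clip();                        // keep the redraw inside the rect
      layer.draw(layer.ctx);
      layer.ctx.restore();
      layer.dirty = false;
    }

    function composite(layers, screenCtx, width, height) {
      // Skip the whole frame if nothing changed anywhere.
      if (!layers.some(function (l) { return l.dirty; })) return;
      screenCtx.clearRect(0, 0, width, height);
      layers.forEach(function (layer) {
        renderLayer(layer);                    // redraws only if dirty
        screenCtx.drawImage(layer.canvas, 0, 0);
      });
    }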

How to draw graphs using d3.js for a big dataset?

I tried creating 10 line charts, each with 3,000 points, at a 300×300 SVG size. It crashed my browser; I checked the task manager, and the Chrome renderer was going crazy, with memory utilization at 1.2 GB and CPU utilization at 100%.
There's no easy solution for things like this. You can scrutinize your code and make it as efficient as possible, but no matter what, if your code needs to do hundreds of thousands of operations in one "thread" things will freeze up.
A general solution to avoid this freeze-up is to split the drawing process into smaller tasks, which you call asynchronously (i.e. from inside a setTimeout). This way the browser doesn't lock up for extended periods while it runs your JS code, and perhaps (I'm no expert on this) the garbage collector has a chance to clean things up midway too.
The result is not a faster overall draw time, but to a user it "feels" faster, because the browser doesn't freeze. And you can even add a progress bar then.
Some drawing operations can't be broken down into sub-tasks. For example, you can't split up d3.svg.line(), the d3 function that generates your graph's path definitions. However, you can split up the drawing code of the 10 charts such that it draws one chart at a time on every tick of a setTimeout. You can also similarly split the preparation of the data from the actual drawing.
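A sketch of that chunking pattern, where datasets, drawChart and updateProgress are hypothetical placeholders for your own data and drawing code:

    // Draw one chart per setTimeout tick so the browser can repaint
    // (and the garbage collector can run) between chunks.
    function drawChartsAsync(datasets, drawChart, updateProgress, onDone) {
      var i = 0;
      function step() {
        drawChart(datasets[i]);                // the expensive part
        i += 1;
        updateProgress(i / datasets.length);   // e.g. move a progress bar
        if (i < datasets.length) {
          setTimeout(step, 0);                 // yield to the browser
        } else if (onDone) {
          onDone();
        }
      }
      setTimeout(step, 0);
    }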
I wrote an answer to a different scenario but a similar problem here: CSS transitions blocked by JavaScript

Speed of large amount of animated bitmaps in EaselJS

I seem to have a bit of trouble with using a large amount of animated bitmaps (all based on the same spritesheet) when using EaselJS. When I run a couple of these at once on my stage, there is no problem at all, but when running a higher amount of them at the same time (starting at around 30 to 40) whilst moving them around I start to have them "flicker" quite a bit, even at an fps of around 10.
I'm not using any shadows or anything else along those lines. Just using animated bitmaps and moving them around.
Does anyone have any good advice around increasing this performance?
Without seeing your code it's hard to know exactly where the bottleneck is. But here are a few places to start looking (starting with the more trivial fixes):
Make sure you are using a modern browser. At the very least, check across a few other browsers/platforms to see if that makes any significant difference in performance. From what I understand, EaselJS performance is significantly worse on non-hardware-accelerated canvas implementations.
If you can, use createJS's version of TweenJS over other tweening libraries. TweenJS will tie itself to EaselJS's Ticker class, which is more efficient.
Do not call stage.update() unless absolutely necessary. Since stage.update() is such an expensive call, you should be as stingy as possible. In fact, you shouldn't really call it at all if you are using the Ticker to regularly update the stage.
Cache wisely and aggressively. If you have complex static elements on the stage, caching them will save some cycles. However, there is an overhead to caching so save it for containers with a lot of static elements or complexly drawn shapes.
Lower the frequency that EaselJS checks for mouseovers. If you have enabled mouse over on the stage, pass in a lower frequency (documentation). If you don't need it (if you are only listening to clicks), don't enable it at all. Monitoring mouse overs is pretty dang expensive, especially if you have plenty of elements on stage.
Set stage.snapToPixelsEnabled to true. This may or may not help. Theoretically, rendering bitmaps on whole pixels is much more efficient, however this may cause some animations to become jagged and I haven't played around with it enough to know what the other pros and cons are.
I was able to get decent performance with around 600-800 sprite sheets at 30 FPS and basic tweening, using Chrome on a four-year-old iMac (just a quick test).
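Putting several of those tips together, a minimal setup might look like the following; the canvas id, frame rate, cache bounds and the background container are all hypothetical:

    // Let the Ticker drive updates; no manual stage.update() calls.
    var stage = new createjs.Stage("game-canvas");
    stage.snapToPixelsEnabled = true;          // whole-pixel bitmap rendering
    createjs.Ticker.setFPS(30);
    createjs.Ticker.addEventListener("tick", stage);

    // Only enable mouseover if you actually need it, and at a low frequency.
    stage.enableMouseOver(10);                 // 10 checks per second

    // Cache a container of static elements so it is rasterized once.
    var background = new createjs.Container();
    // ... addChild() your static shapes/bitmaps here ...
    background.cache(0, 0, 800, 600);          // cache bounds are hypothetical
    stage.addChild(background);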
Try using several Stage objects at the same time.

Concerning Octrees

I am creating a Minecraft-like terrain engine, and I was wondering what exactly octrees are. With my engine I have separated each part of it into chunks or regions, which from what I have read has something to do with octrees. Also, I was wondering whether indices increase performance within a game, and if so, by how much? Any other ideas/ways to increase performance would be much appreciated. Note that I have already included backface culling, and that if a box or one of its sides is hidden, that side is not drawn.
Read this excellent article on FlipCode
Googling for octree and FlipCode or GameDev.net will give you a lot of references.
Thoughts on performance are hard to give because a lot depends on what you are doing: how many changes are being made to the 'world', whether any objects are moving, and what you want to use the octree for (visibility, collision detection, rendering, ...). Read about k-d trees too, because they might be more appropriate for your problem.
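For a rough feel of the data structure itself, here is a minimal sparse octree sketch over a cubic voxel region (all names hypothetical; 0 means air):

    // A node is either a uniform leaf (blockType covers its whole cube)
    // or has 8 children covering its octants. size must be a power of two.
    function OctreeNode(x, y, z, size) {
      this.x = x; this.y = y; this.z = z;   // minimum corner of the cube
      this.size = size;
      this.blockType = 0;                   // uniform content while a leaf
      this.children = null;
    }

    OctreeNode.prototype.set = function (x, y, z, type) {
      if (this.size === 1) { this.blockType = type; return; }
      var half = this.size / 2;
      if (this.children === null) {         // subdivide lazily on first write
        this.children = [];
        for (var i = 0; i < 8; i++) {
          var child = new OctreeNode(
            this.x + ((i & 1) ? half : 0),
            this.y + ((i & 2) ? half : 0),
            this.z + ((i & 4) ? half : 0),
            half);
          child.blockType = this.blockType; // children inherit uniform value
          this.children.push(child);
        }
      }
      var index = ((x >= this.x + half) ? 1 : 0) |
                  ((y >= this.y + half) ? 2 : 0) |
                  ((z >= this.z + half) ? 4 : 0);
      this.children[index].set(x, y, z, type);
    };

    OctreeNode.prototype.get = function (x, y, z) {
      if (this.children === null) return this.blockType; // uniform region
      var half = this.size / 2;
      var index = ((x >= this.x + half) ? 1 : 0) |
                  ((y >= this.y + half) ? 2 : 0) |
                  ((z >= this.z + half) ? 4 : 0);
      return this.children[index].get(x, y, z);
    };

The win over a flat array comes from large uniform regions (all air, all stone) collapsing into a single leaf instead of storing every block.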

Storing Large 2D Game Worlds

I've been experimenting with different ideas of how to store a 2D game world. I'm interested in hearing techniques for storing large quantities of objects while managing the set that's visible (let's say 100,000 tiles square). Obviously the techniques can vary based on how the game renders that space.
Let's assume that we're describing a scrolling 2D game world rather than a screen-based one, as you could fairly easily do screen-based rendering from such a setup, while the converse is a bit messier.
Looking for language agnostic solutions here so it's more helpful to others.
Edit: I think a good answer here would be a general review of the ideas to consider when thinking about this, as some of the responders have attempted, but also begin to explain how different solutions would apply to those scenarios. It's a somewhat complex question, so I would expect a good answer to reflect that.
Quadtrees are a fairly efficient solution for storing data about a large 2-dimensional world and the objects within it.
You might get some ideas on how to implement this from spatial data structures like range trees or k-d trees.
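As a rough sketch of the quadtree idea (point-shaped objects for brevity; real games usually store bounding boxes):

    // A quadtree storing objects with x/y, queryable by a rectangle
    // (e.g. the visible screen area). Out-of-bounds inserts return false.
    function Quadtree(x, y, w, h, capacity) {
      this.bounds = { x: x, y: y, w: w, h: h };
      this.capacity = capacity || 8;  // max items before splitting
      this.items = [];
      this.children = null;
    }

    Quadtree.prototype.insert = function (item) {
      var b = this.bounds;
      if (item.x < b.x || item.x >= b.x + b.w ||
          item.y < b.y || item.y >= b.y + b.h) return false;
      if (this.children === null && this.items.length < this.capacity) {
        this.items.push(item);
        return true;
      }
      if (this.children === null) this.split();
      for (var i = 0; i < 4; i++) {
        if (this.children[i].insert(item)) return true;
      }
      return false;
    };

    Quadtree.prototype.split = function () {
      var b = this.bounds, hw = b.w / 2, hh = b.h / 2;
      this.children = [
        new Quadtree(b.x,      b.y,      hw, hh, this.capacity),
        new Quadtree(b.x + hw, b.y,      hw, hh, this.capacity),
        new Quadtree(b.x,      b.y + hh, hw, hh, this.capacity),
        new Quadtree(b.x + hw, b.y + hh, hw, hh, this.capacity)
      ];
      var old = this.items;
      this.items = [];
      for (var i = 0; i < old.length; i++) this.insert(old[i]);
    };

    Quadtree.prototype.query = function (rect, out) {
      out = out || [];
      var b = this.bounds;
      if (rect.x >= b.x + b.w || rect.x + rect.w <= b.x ||
          rect.y >= b.y + b.h || rect.y + rect.h <= b.y) return out;
      for (var i = 0; i < this.items.length; i++) {
        var it = this.items[i];
        if (it.x >= rect.x && it.x < rect.x + rect.w &&
            it.y >= rect.y && it.y < rect.y + rect.h) out.push(it);
      }
      if (this.children !== null) {
        for (var j = 0; j < 4; j++) this.children[j].query(rect, out);
      }
      return out;
    };

Calling query() with the camera rectangle then gives you the visible set to render or update.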
However, the answer to this question would vary considerably depending exactly on how your game works.
Are we talking a 2D platformer with 10 enemies onscreen, 20 more offscreen but "active", and an unknown number more "inactive"? If so, you can probably store your whole level as an array of "screens" where you manipulate the ones closest to you.
Or do you mean a true 2D game with lots of up/down movement too? You might have to be a bit more careful here.
The platform is also of some importance. If you're implementing a simple platformer for desktop PCs, you probably wouldn't have to worry about performance as much as you would on an embedded device. This is no excuse to be naive about it, but you might not have to be terribly clever either.
This is a somewhat interesting question I think. Presumably someone smarter than I who has experience with implementing platformers has thought these things out already.
Break the world into smaller areas, and deal with them. Any solution to this problem is going to boil down to this concept (such as quadtrees, mentioned in another answer). The differences will be in how they subdivide the world.
How much data is stored per tile? How fast can players move across the world? What's the behavior of NPCs, etc., that are offscreen? Do they just reset when the player comes back (like old Zelda games)? Do they simply resume where they were? Do they do some kind of catch-up script?
How much different rendering data is going to be needed for different areas?
How much of the world can be seen at one time?
All of these questions are going to impact your solution, as will the capabilities of your platform. Coming up with a general answer without a reasonable idea of these parameters is going to be a bit difficult.
Assuming that your game will only update what is visible plus some area around it, just break the world into "screens" (a "screen" is a rectangular area of the tilemap big enough to fill the whole screen). Keep the "screens" around the visible area in memory (and some more if you want to update entities which are close to the character; there is little reason to update an entity farther away than that), and keep the rest on disk, with a cache to avoid repeatedly loading/unloading commonly visited areas as you move around. Something like this setup:
+---+---+---+---+---+---+---+
|FFF|FFF|FFF|FFF|FFF|FFF|FFF|
+---+---+---+---+---+---+---+
|FFF|NNN|NNN|NNN|NNN|NNN|FFF|
+---+---+---+---+---+---+---+
|FFF|NNN|NNN|NNN|NNN|NNN|FFF|
+---+---+---+---+---+---+---+
|FFF|NNN|NNN|VVV|NNN|NNN|FFF|
+---+---+---+---+---+---+---+
|FFF|NNN|NNN|NNN|NNN|NNN|FFF|
+---+---+---+---+---+---+---+
|FFF|NNN|NNN|NNN|NNN|NNN|FFF|
+---+---+---+---+---+---+---+
|FFF|FFF|FFF|FFF|FFF|FFF|FFF|
+---+---+---+---+---+---+---+
Where "V" part is the "screen" where the center (hero or whatever) is, the "N" parts are those who are nearby and have active (updating) entities, are checked for collisions, etc and "F" parts are far parts which might get updated infrequently and are prone to be "swapped" out (stored to disk). Of course you might want to use more "N" screens than two :-).
Note btw that since 2D games do not usually hold much data instead of saving the far away parts to disk you might want to just keep them in memory compressed.
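A small sketch of that swapping logic; loadScreen and saveScreen are hypothetical stand-ins for your disk/cache (or compressed in-memory) layer:

    var RADIUS = 2;                  // 2 rings of "N" screens around "V"
    var loaded = {};                 // "sx,sy" -> screen data in memory

    function updateLoadedScreens(heroX, heroY, screenW, screenH) {
      var cx = Math.floor(heroX / screenW);
      var cy = Math.floor(heroY / screenH);
      var wanted = {};
      // Make sure every screen near the hero is in memory.
      for (var sy = cy - RADIUS; sy <= cy + RADIUS; sy++) {
        for (var sx = cx - RADIUS; sx <= cx + RADIUS; sx++) {
          var key = sx + "," + sy;
          wanted[key] = true;
          if (!(key in loaded)) loaded[key] = loadScreen(sx, sy);
        }
      }
      // Swap far screens out.
      for (var key2 in loaded) {
        if (!wanted[key2]) {
          saveScreen(key2, loaded[key2]);
          delete loaded[key2];
        }
      }
    }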
You probably want to use a single int or byte array that maps to block types. If you need more optimization from there, you can link to more complicated data structures like octrees from your array. There is a good discussion on a Java game forum here: http://www.javagaming.org/index.php/topic,20505.30.html
Anything with links becomes very expensive, because each pointer takes up something like 8 bytes depending on the language; eight pointers at 8 bytes each is 64 bytes per node, whereas a byte array costs 1 byte per tile. So unless your world is almost entirely empty (fewer than one tile in 64 occupied), a byte array is going to be a much better option. You are also going to spend a lot of time iterating down the tree whenever you do a lookup for collision or anything else, whereas a byte array lookup is instantaneous.
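For comparison, a minimal sketch of the flat byte-array storage (dimensions and type codes hypothetical):

    // One byte per tile: 1024x1024 tiles is only 1 MB, with O(1) lookups.
    var WIDTH = 1024, HEIGHT = 1024;
    var world = new Uint8Array(WIDTH * HEIGHT); // 0 = empty, 1+ = block types

    function getTile(x, y) {
      return world[y * WIDTH + x];   // no tree walk, just index arithmetic
    }

    function setTile(x, y, type) {
      world[y * WIDTH + x] = type;
    }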
Hopefully that's detailed enough for you. :-)
