How can I make transitions between dc.js charts smooth when I have a map with more than 20k points?

I've recently built a smaller version of a prototype data explorer incorporating crossfilter, dc.js, and Leaflet.markercluster. The small version (prototype dashboard) works properly. The problem appears when I try to scale it up to 20k points or more.
The charts still render correctly, and the map works to update the charts smoothly when zooming or panning, but when I interact with one of the charts, the transitions between the other charts are no longer smooth. They jump to their next position rather than smoothly transitioning.
I tried removing the map and this restored the transitions between the other charts to a nice smooth transition again.
I'm wondering if the re-rendering process is getting caught up with the 20k points each time an interaction occurs.
If anyone has any suggestions about where I might look for a solution I'd be grateful.

Thanks for posting a block, that makes things easier to test.
I simulated a lot more points by generating 200 rows for each of yours, about 46k rows in total. I saw only a little stuttering at 100x (about 23k rows) on a 2017 iMac with plenty of RAM.
Leaflet.markercluster is known to be slow with more than 10K points. With 46k rows, it took about 475ms for Leaflet.markercluster to clear and add the Leaflet layers.
Since there is only one thread in JavaScript (unless you use workers), D3 needs to get timeouts (actually requestAnimationFrame) every 16ms or so in order to produce fluid animation.
One workaround is to delay the map redraw by 500ms, until the other charts are done:
dc.override(mapChart, 'redraw', function() {
    // defer the expensive marker-cluster redraw so the other charts
    // get the animation frames they need for their transitions
    window.setTimeout(() => mapChart._redraw(), 500);
});
Fork of your block with workaround.
Of course, this also makes the map take 500ms longer to redraw. And if you click around fast enough, the last map redraw will still be running when it's trying to draw the charts.
You could also try the chunked addLayers options, but I think you would have to set the chunkInterval so low that it would also slow down the marker clustering.
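For reference, a minimal sketch of what that might look like (the chunkedLoading, chunkInterval and chunkDelay option names come from the Leaflet.markercluster documentation; the markers array and map variable stand in for whatever your block already builds):
var clusterGroup = L.markerClusterGroup({
    chunkedLoading: true, // process addLayers in chunks instead of one long blocking task
    chunkInterval: 100,   // work for ~100ms at a time...
    chunkDelay: 50        // ...then pause 50ms so D3 can get its animation frames
});
clusterGroup.addLayers(markers); // markers: your array of L.marker objects
map.addLayer(clusterGroup);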
Processing this much data efficiently is possible in JavaScript - obviously crossfilter has no problem here. I don't know if the clustering algorithm is inherently too expensive. Someone on the issue suggested pre-aggregating the points, but I think this would mean you wouldn't be able to see individual points.

Related

Three.js: What's the upper limit for holding 60 FPS on an average desktop?

I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.
I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my desktop (above average) can't run it above 20 FPS. I'm using Lambert material as well as a single ambient light, so I feel like this isn't too much to ask.
With these figures in mind, is this the expected behavior for three.js rendering?
I am pretty sure that is not the end of the line, and you are probably missing some possibilities for massive performance improvements.
But just to give you some numbers first,
If you leave out everything fancy (including three.js) and just render an ultra-simple point cloud with one fragment per point, you can easily get to rendering 10-20 million (yes, million) points/vertices on an average GPU.
With just simple shapes and materials, I have already gotten three.js to render something in the range of 500k triangles (at 1080p resolution) at 60 FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
However, these kinds of numbers are not really helpful.
Some hints:
If you want to debug your rendering performance, you should first add some metrics. renderstats.js is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example).
Generally, the choice of material shouldn't matter too much; the GPU is way more capable than most people think, and the problem is more likely somewhere else in the pipeline. EDIT from comment: in some cases, like high-resolution displays with slow GPUs (think mobile devices), this is less true and complicated shader code can slow down your site, but it is worth looking at the other points first. Since the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query extension to get some information about what is going on on the GPU.
The number of draw calls shouldn't be too high: three.js needs to issue a draw call for every Mesh and Points object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. You can reduce the number of draw calls by merging multiple geometries into one and making use of multi-materials, vertex colors and the like (see the sketch after these hints).
If you are doing postprocessing, the GPU needs to render every pixel on screen several times, which can also massively limit your performance. This can be optimized by merging multiple postprocessing passes into one (I admit, that would be a lot of hard work..)
Another problem could be on the JS side: use the profiler or timeline view from the Chrome devtools to see whether it's the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12ms per frame). I've been told there are ways to optimize JavaScript performance as well :)
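To make the draw-call hint concrete, here is a minimal sketch of merging many static objects into one geometry so they render in a single draw call. The import path and the mergeGeometries name match recent three.js builds (older versions call it mergeBufferGeometries), and the box count and positions are made up for illustration:
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

const scene = new THREE.Scene();
const geometries = [];
for (let i = 0; i < 1000; i++) {
    const box = new THREE.BoxGeometry(1, 1, 1);
    // bake each object's transform into its vertices before merging
    box.translate(Math.random() * 100, 0, Math.random() * 100);
    geometries.push(box);
}
const merged = mergeGeometries(geometries);
const material = new THREE.MeshLambertMaterial({ color: 0x88aa44 });
scene.add(new THREE.Mesh(merged, material)); // 1000 boxes, one draw call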

Efficiently rendering tiled map using SpriteKit

As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the number of draws needed; however, swapping textures of nodes that need to be animated was still very slow and had a huge detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be a more efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit that are animated by changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and an SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which then forwards it to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages which is a wrapper around a Texture Atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but may help.
SpriteKit will likely be able to handle what you need, but you need to look at different optimizations, both for SpriteKit and, even more so, for your game logic.
SpriteKit: Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures as long as you need them and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead keep reusing that texture by storing it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but you then have to manage your own zPosition on all the sprites.
Game logic: Updating every tile on every loop is going to be very expensive. You will want to look for a better way to do that, such as keeping smaller arrays or doing logic (model) updates on a background thread.
I currently have a project you can look into if you want, called Old Frank. I have a map that is 75 x 75 with 32px by 32px tiles that may be stacked 2 tall. It has both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. I'm not saying there isn't optimization work to be done (it is a work in progress), but I feel it might at least help get you pointed in the right direction.
Hope that helps.

Improving Quartz2D drawing performance

I'm using Core Plot to perform some charting. However, the performance of the charts starts to get slow after adding 2 hosting views and attempting to scroll the 3 charts together.
Using the time profiler, I found that the majority of the time is spent in two functions, CGSFillDRAM8by1 and CGSColorMaskCopyARGB8888.
What can I do to improve the performance of these two functions? It seems that these two functions are the bottleneck in my drawing performance.
Make sure you set the blend mode to copy instead of normal; that should help some. You can also change properties of the path, such as the miter limit.
If you don't need to save the chart and it's just for viewing, I would use a CAShapeLayer and attach a path to it representing your chart. That will render far faster than Quartz2D.

Millions of Google Map Marker using MarkerClusterer, JSON/AJAX

I am developing a large geolocation web site. There are over 2.5 million places to show on a Google Map with markers and an info window (when a marker is clicked).
I am using MarkerClusterer to reduce the load of the individual markers.
But I am afraid that so much data in the browser (JSON etc.) would really kill the page.
Any suggestions on loading JSON on demand by identifying the map bounds when panning changes?
Any recommendations for resources are also appreciated.
Have a look at Cluster; I think it may do what you want:
Only the markers currently visible actually get created.
If too many markers would be visible, then they are grouped together into cluster markers.
You can look into quadkeys. A quadkey is well suited to reducing the dimensional complexity and building clusters of points of interest. There are many different space-filling curves, such as the Z-order curve, the Hilbert curve and the Peano curve. To further limit the work, you can tie the clustering to the bounding box and the zoom level of the Google Map.
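A rough sketch of the quadkey idea (latLngToQuadkey is a hypothetical helper; the tile math follows the standard Web Mercator tiling convention):
function latLngToQuadkey(lat, lng, zoom) {
    const n = Math.pow(2, zoom);
    const sinLat = Math.sin(lat * Math.PI / 180);
    const x = Math.floor((lng + 180) / 360 * n);
    const y = Math.floor((0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * n);
    let quadkey = '';
    for (let i = zoom; i > 0; i--) {
        // interleave the bits of x and y, most significant first
        let digit = 0;
        const mask = 1 << (i - 1);
        if (x & mask) digit += 1;
        if (y & mask) digit += 2;
        quadkey += digit;
    }
    return quadkey;
}
// Points that share a quadkey prefix of length k fall in the same tile at zoom k,
// so grouping markers by that prefix clusters them for the current zoom level.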
There is a version of MarkerClusterer that works with v3 of the Google Maps API, but that isn't the issue here. The issue is that you'd still be handling the underlying data in the browser with JS (2.5 million places retrieved through JSON/AJAX). That is most likely too much, unless you're on a fast connection using the fastest computers with a lot of RAM.
For those contemplating this issue on their own sites, keep in mind that more and more mobile devices are accessing these sites, and the JavaScript on such devices just can't handle nearly as many points. My own site broke with the latest release of iOS 6, and now I have to compensate by changing my JS to put a lighter load on the system.
But to get back to the answer at hand: what you'll have to do is make a new AJAX call whenever the map bounds change, and if the zoom goes too far out, you'll have to limit the number retrieved and implement some system to show the user that not all results are shown. My site uses a limit of 250, if I recall correctly, and shows a bounding rectangle around the locations (along with MarkerClusterer to cluster them). Before populating with real data, I built a test database of thousands and thousands of points, and this number seemed to be the best tradeoff of performance and information. (But that was before I went mobile and before v3 of the API.) v3 is supposed to be more streamlined, but mobile devices are limited, so you'll have to test.
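Something along these lines, as a sketch only: the /markers endpoint, its query parameters and the markersById bookkeeping are made up for illustration, while the 'idle' event and the bounds calls are standard Maps API v3:
var markersById = {};

google.maps.event.addListener(map, 'idle', function() {
    var bounds = map.getBounds();
    if (!bounds || map.getZoom() < 10) return; // zoomed out too far: skip (or show a notice)
    var sw = bounds.getSouthWest(), ne = bounds.getNorthEast();
    var url = '/markers?swLat=' + sw.lat() + '&swLng=' + sw.lng() +
              '&neLat=' + ne.lat() + '&neLng=' + ne.lng() + '&limit=250';
    fetch(url).then(function(res) { return res.json(); }).then(function(places) {
        places.forEach(function(p) {
            if (markersById[p.id]) return; // already on the map
            markersById[p.id] = new google.maps.Marker({
                position: { lat: p.lat, lng: p.lng },
                map: map
            });
        });
    });
});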
I am using the MarkerClustererPlus library with a marker cap of 200 and a default zoom level of 8. On zoom change or drag, another 200 markers come onto the map.
If you zoom out, the markers are clustered, and vice versa.
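For reference, a minimal MarkerClustererPlus setup along the lines described above (gridSize and maxZoom are standard options; the cap of 200 markers per fetch would live in your own loading code, and places stands in for whatever batch of points you just retrieved):
var markers = places.map(function(p) {
    return new google.maps.Marker({ position: { lat: p.lat, lng: p.lng } });
});
var clusterer = new MarkerClusterer(map, markers, {
    gridSize: 60,  // pixel size of each cluster cell
    maxZoom: 15    // past this zoom, show individual markers instead of clusters
});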

Google Maps API v3, lots of markers, clustering and performance

I have about 5000 markers I need to render on Google Map. I'm currently using the API (v3) and there are performance issues on slower machines, especially in IE. I have done the following already to help speed things up:
Used a simple marker class that extends OverlayView and renders a single DIV element per marker
Implemented the MarkerClusterer library to cluster the markers at different levels
Render GIFs for IE, instead of alpha PNGs
Are there faster clustering classes? Any other tips? I'm trying to avoid server-side clustering unless this is the only option left to squeeze performance out of the system.
Thanks
I used a method that loads all the markers onto the page, and then listens for the map to finish panning.
When the map has finished panning, I first check the zoom level - if the map is zoomed out too far, I don't display anything. If it's at an acceptable level, I loop through the markers I have stored and see if they fall into the bounding box of the map. If they do, they get added. A second loop then removes any that have moved out of the view.
The highest number I've used is about 30,000 markers with this method, although I have it so you must be zoomed in quite far to see them. In areas with a higher concentration of markers it's obviously a little slower, but it's usable.
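A minimal sketch of that approach, assuming allMarkers is an array of google.maps.Marker objects created once up front and minZoom is your own visibility threshold:
function refreshVisibleMarkers(map, allMarkers, minZoom) {
    var bounds = map.getBounds();
    var zoomedIn = map.getZoom() >= minZoom;
    allMarkers.forEach(function(marker) {
        var visible = zoomedIn && bounds.contains(marker.getPosition());
        // attach markers that entered the viewport, detach the ones that left it
        marker.setMap(visible ? map : null);
    });
}

// 'idle' fires after every pan or zoom has finished
google.maps.event.addListener(map, 'idle', function() {
    refreshVisibleMarkers(map, allMarkers, 12);
});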
The solution mentioned above works for a much higher number of markers. We use it for millions of GPS points on the backend (including polygons etc.). The only tricky part is the logic behind it, such as proper caching of spatial queries, or fetching new results only if the user moves the map by more than X meters. It takes a lot of work to get it done, but for viewing a really high number of points there is nothing better.
Marker clusterers usually work on the browser side, so they still need to load all the points at once, and this makes that method unusable for large numbers.
You can check it out live at http://www.tixik.com/london-2354567.htm (just click "plan a trip" and start planning). Just try to move the map or zoom in or out, and the points will show/hide on zoom/drag.
