Google Maps API v3, lots of markers, clustering and performance

I have about 5000 markers I need to render on Google Map. I'm currently using the API (v3) and there are performance issues on slower machines, especially in IE. I have done the following already to help speed things up:
Used a simple marker class that extends OverlayView and renders a single DIV element per marker
Implemented the MarkerClusterer library to cluster the markers at different levels
Rendered GIFs for IE, instead of alpha PNGs
Are there faster clustering classes? Any other tips? I'm trying to avoid server-side clustering unless this is the only option left to squeeze performance out of the system.
Thanks

I used a method that loads all the markers onto the page, and then listens for the map to finish panning.
When the map has finished panning, I first check the zoom level - if the map is zoomed out too far I don't display anything. If it's at an acceptable level, I then loop through the markers I have stored and see if they fall into the bounding box of the map. If they do, they get added. A second loop then removes any that have moved out of the view.
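A minimal sketch of that approach using the v3 idle event (the zoom threshold and the allMarkers array are assumptions for illustration):

google.maps.event.addListener(map, 'idle', function() {
  var bounds = map.getBounds();
  var zoomedIn = map.getZoom() >= 12;            // assumed zoom threshold
  for (var i = 0; i < allMarkers.length; i++) {  // allMarkers: your stored markers
    var m = allMarkers[i];
    var show = zoomedIn && bounds.contains(m.getPosition());
    if (show !== !!m.getMap()) {
      m.setMap(show ? map : null);               // add or remove from the map
    }
  }
});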
The highest number I've used is about 30,000 markers with this method, although I have it set up so you must be zoomed in quite far to see them. In areas with a higher concentration of markers it's obviously a little slower, but it's usable.

The solution mentioned above works for much higher numbers of markers. We use it for millions of GPS points on the backend (including polygons etc.). The only problem is the logic behind it, like proper caching of spatial queries, or fetching new results only if the user moves the map by more than X meters. There is a lot of work to get it done, but for viewing really high numbers of points, there is nothing better.
Marker clusterers usually work on the browser side, so you still need to load all the points at once - and this makes the method unusable for large numbers.
You can check it out live at http://www.tixik.com/london-2354567.htm (just click "plan a trip" and start planning). Just try to move the map, zoom in or out, and all the points will show/hide on zoom/drag.
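A rough sketch of the "refetch only after moving more than X meters" check, assuming the Maps geometry library is loaded (&libraries=geometry) and a hypothetical fetchVisiblePoints() helper that does the actual AJAX call:

var lastCenter = null;
var REFETCH_METERS = 500; // assumed threshold

google.maps.event.addListener(map, 'idle', function() {
  var center = map.getCenter();
  if (lastCenter && google.maps.geometry.spherical
        .computeDistanceBetween(lastCenter, center) < REFETCH_METERS) {
    return; // moved less than X meters - keep the cached results
  }
  lastCenter = center;
  fetchVisiblePoints(map.getBounds()); // hypothetical server call
});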

Related

How can I make transitions between dc.js charts smooth when I have a map with more than 20k points?

I've recently built a smaller version of a prototype data explorer incorporating crossfilter, dc.js, and Leaflet.markercluster. The small version (prototype dashboard) works properly. The problem I am having is when I try to scale it up to 20k points or more.
The charts still render correctly, and the map works to update the charts smoothly when zooming or panning, but when I interact with one of the charts, the transitions between the other charts are no longer smooth. They jump to their next position rather than smoothly transitioning.
I tried removing the map and this restored the transitions between the other charts to a nice smooth transition again.
I'm wondering if the re-rendering process is getting caught up with the 20k points each time an interaction occurs.
If anyone has any suggestions about where I might look for a solution I'd be grateful.
Thanks for posting a block; that makes things easier to test.
I simulated a lot more points by generating 200 rows for each of yours, ~46k rows in total. I saw only a little stuttering at 100x (~23k rows) on a 2017 iMac with plenty of RAM.
Leaflet.markercluster is known to be slow with more than 10K points. With 46k rows it took about 475ms for Leaflet.markercluster to clear and add the Leaflet layers.
Since there is only one thread in JavaScript (unless you use workers), D3 needs to get timeouts (actually requestAnimationFrame) every 16ms or so in order to produce fluid animation.
One workaround is to delay the map redraw by 500ms so the other charts can finish their transitions first:
dc.override(mapChart, 'redraw', function() {
  // defer the expensive map redraw until the chart transitions have finished
  window.setTimeout(() => mapChart._redraw(), 500);
});
Fork of your block with workaround.
Of course, this also makes the map take 500ms longer to redraw. And if you click around fast enough, the last map redraw will still be running when it's trying to draw the charts.
You could also try the chunked addLayers options, but I think you would have to set the chunkInterval so low that it would also slow down the marker clustering.
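For reference, a sketch of those options as I understand them from the Leaflet.markercluster docs (the values are example settings, not tuned recommendations):

var markers = L.markerClusterGroup({
  chunkedLoading: true, // split addLayers work into chunks
  chunkInterval: 200,   // ms of work per chunk before yielding to the browser
  chunkDelay: 50        // ms pause between chunks
});
markers.addLayers(arrayOfLeafletMarkers); // arrayOfLeafletMarkers is assumed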
Processing this much data efficiently is possible in JavaScript - obviously crossfilter has no problem here. I don't know if the clustering algorithm is inherently too expensive. Someone on the issue thread suggested pre-aggregating the points, but I think this would mean you wouldn't be able to see individual points.

Spritekit: Efficiently create many SpriteNodes

I am using SpriteKit to write a bullet-hell style shoot-em-up, and the SK framework seems to be able to handle hundreds of nodes at 60fps without any problems at all. My problem however is that sometimes I need to spawn 50+ nodes in a single frame, and that does seem to cause the odd hitch in framerate. Are there any tricks I should be using to make sure that creation of many nodes is as performant as possible?
I am re-using SKTextures, should I also have a persistent collection of SKSpriteNodes and SKActions that get 'recycled' instead of creating them new?
You can pre-create all nodes before the game scene loads (using a completion handler) and, when needed, just show them, i.e. setHidden = NO. This way you don't have to recreate nodes again and again. When nodes are not needed, just set them hidden. You can read more here; just find the part about lasers. I think that would be one way to resolve the framerate drop caused by spawning many nodes at the same time. And I hope you use atlases... To be sure that everything works correctly, enable the draws and nodes count info in your view controller and check the stats. If you don't use atlases, or use them incorrectly, it may happen that the draws count is high compared to the nodes count.
About atlases from the docs:
When you create a texture atlas, you want to strike a balance between collecting too many textures or too few. If you use too few images, Sprite Kit may still need many drawing passes to render a frame. If you include too many images, then large amounts of texture data may need to be loaded into memory at once. Because Xcode builds the atlases for you, you can switch between different atlas configurations with relative ease. So experiment with different configurations of your texture atlases and choose the combination that gives you the best performance.
A couple of points to keep in mind:
Remove any nodes no longer needed from parent.
Do not use usesPreciseCollisionDetection unless you absolutely have to.
Use a texture atlas.
Do not go overboard with the number of nodes. Stay realistic and only use the minimum needed.

Three.js performance

Hi!
I am working with huge-vertex-count objects. I am able to show lots of models because I have split them into smaller parts (under 65K vertices each). I am also using three.js cameras. I want to increase performance by using a priority queue: when the user is moving the camera, show only the top 10 models, then when the movement stops, show the rest. This part is not that hard, but I don't want to render models when they are behind another object - maybe send out some rays from the camera's viewpoint (checking for bounding-box hits) and build the priority queue from the resulting hit list.
What do you think?
Also, how can I detect whether I can load the next model or not (on the fly)?
Option A: Occlusion culling; you will need to find a library for this.
Option B: Use an AABB/plane test between the camera frustum planes and each object's bounding box; this will tell you if an object is in the camera's field of view (not whether it is actually visible behind another object - an exact test for that is impractical, and WebGL most likely already does this kind of culling to a degree).
Implementation:
Google it; three.js probably supports this.
Option C: Use a max object render limit, prioritized by distance from the camera and size of the object. E.g. calculate which objects are visible (Option B), then prioritize the closest and biggest ones and disable the rest.
pseudo-code:
if (objectIsInFrustum(object)) {
    // bigger objects that are closer to the camera get higher priority
    var priority = (bounding.max - bounding.min) / distanceToCamera;
}
Make sure your shaders are only doing one pass, as extra passes will roughly double the calculation time (depending on the situation).
Option D: Raycast to the eight corners of the bounding box; if all the rays fail, don't render the object. This is pretty accurate but by no means perfect.
Option A will be the best for sure. Option C is great if you don't care that small objects far away don't get rendered. Option D works well with objects that have a lot of verts; you may want to raycast more points of the object depending on the situation. Option B probably won't be useful on its own for your scenario, but it's a part of C and of other optimization methods. Overall, there has never been an extremely reliable and optimal way to tell if something is behind something else.
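As a starting point for Options B and C, here is a minimal three.js sketch of the frustum test (setFromProjectionMatrix is the method name in recent three.js releases; older versions called it setFromMatrix):

var frustum = new THREE.Frustum();
var projScreenMatrix = new THREE.Matrix4();

function visibleObjects(camera, objects) {
  camera.updateMatrixWorld();
  projScreenMatrix.multiplyMatrices(
      camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(projScreenMatrix);
  // keep only objects whose bounding sphere intersects the view frustum
  return objects.filter(function(obj) {
    return frustum.intersectsObject(obj);
  });
}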

Millions of Google Map Marker using MarkerClusterer, JSON/AJAX

I am developing a large geolocation web site. There are over 2.5 million places to show on a Google Map, with markers and an info window (when a marker is clicked).
I am using MarkerClusterer to narrow down the load of individual markers.
But I am afraid that so much data in the browser (JSON etc.) would really kill the page.
Any suggestions for loading JSON on demand by identifying the map bounds when panning changes?
Any recommendations for resources are also appreciated.
Have a look at Cluster; I think it may do what you want:
Only the markers currently visible actually get created.
If too many markers would be visible, then they are grouped together into cluster markers.
You can look into quadkeys. A quadkey is perfect for reducing the dimensional complexity and building clusters of points of interest. There are many different methods, like the Z-order curve, Hilbert curve, or Peano curve. To further limit the constraints, you can tie the clustering to the bounding box and the zoom level of the Google Map.
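A quick sketch of the quadkey idea, following the well-known Bing Maps tile-system convention (the function name is mine):

function latLngToQuadkey(lat, lng, zoom) {
  var sinLat = Math.sin(lat * Math.PI / 180);
  var n = 1 << zoom; // tiles per axis at this zoom level
  var x = Math.floor((lng + 180) / 360 * n);
  var y = Math.floor(
      (0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * n);
  var quadkey = '';
  for (var i = zoom; i > 0; i--) {
    var mask = 1 << (i - 1);
    quadkey += (x & mask ? 1 : 0) + (y & mask ? 2 : 0); // digit 0-3 per level
  }
  return quadkey;
}

Markers sharing a quadkey prefix of length z fall in the same tile at zoom level z, so grouping by prefix gives you cheap clusters per zoom level.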
There is a version of MarkerClusterer that works with v3 of the Google Maps API, but that isn't the issue here. The issue is that you'd still be handling the underlying data in the browser with JS (2.5 million places retrieved through JSON/AJAX). That is most likely too much, unless you're on a fast connection using the fastest computers with a lot of RAM.
For those contemplating this issue on their own sites, keep in mind that more and more mobile devices are accessing these sites, and the JavaScript on such devices just can't handle nearly as many points. My own site broke with the latest release of iOS 6, and now I have to accommodate by changing my JS to put an easier load on the system.
But to get back to the answer at hand: what you'll have to do is make a new AJAX call whenever the map bounds change, and if the zoom goes too far out, you'll have to limit the number retrieved and implement some system to show the user that not all results are shown. My site uses a limit of 250, if I recall correctly, and shows a bounding rectangle around the locations (along with MarkerClusterer to cluster them). Before populating with real data, I did a test database of thousands and thousands, and this number seemed to be the best tradeoff of performance and information. (But that was before I went mobile and before v3 of the API.) v3 is supposed to be more streamlined, but mobile devices are limited, so you'll have to test.
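A bare-bones version of that bounds-limited AJAX call, assuming jQuery; the /markers endpoint, its parameters, and the renderMarkers() helper are hypothetical:

google.maps.event.addListener(map, 'idle', function() {
  var ne = map.getBounds().getNorthEast();
  var sw = map.getBounds().getSouthWest();
  $.getJSON('/markers', {
    north: ne.lat(), east: ne.lng(),
    south: sw.lat(), west: sw.lng(),
    limit: 250 // cap the result set, as described above
  }, function(data) {
    renderMarkers(data.markers); // hypothetical helper
    if (data.truncated) {
      // tell the user that not all results are shown
    }
  });
});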
I am using the MarkerClustererPlus library with a marker size cap of 200 and a default zoom level of 8. On zoom change or drag, another 200 markers will come onto the map.
If you zoom out, the markers will be clustered, and vice versa.

How taxing would a game map grid be to a web browser?

Suppose we're making a strategy game (think Civilization) in a web browser. The game has a visible map portion - say 30x30 squares. Each square is 30x30px and has several overlaid images - the terrain, resources, units, roads, etc. The classical way of drawing this would be with a huge <table> where each cell would contain absolutely positioned images. It would probably be rendered in Javascript to reduce traffic. But it's still several thousand images and a huge table.
Can the browser take it? Will performance stay within acceptable limits? Alternatively, I could keep a pre-rendered map image with as many overlays as possible baked in, but that would be more work, I think.
You should really look into using the canvas element, which does not require the browser to store and compute the whole layout and other DOM stuff.
That being said, a modern browser on a high-performance workstation can display hundreds of images at the same time, as demonstrated by the FishIETank demo. However, many devices - ranging from smartphones to old PCs - can not. Oh, and using a table is probably slower than a div with position:relative or absolute and absolutely positioned images therein.
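A bare-bones sketch of the canvas approach (the element id, grid size, and layers structure are assumptions):

var TILE = 30, SIZE = 30;
var ctx = document.getElementById('map').getContext('2d'); // assumed <canvas id="map">

// layers: an array of 2D arrays of preloaded Image objects (terrain, roads, units...)
function drawMap(layers) {
  ctx.clearRect(0, 0, SIZE * TILE, SIZE * TILE);
  for (var y = 0; y < SIZE; y++) {
    for (var x = 0; x < SIZE; x++) {
      for (var i = 0; i < layers.length; i++) {
        var img = layers[i][y][x];
        if (img) ctx.drawImage(img, x * TILE, y * TILE, TILE, TILE);
      }
    }
  }
}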
Look at online games like Grepolis - they already do this sort of grid-based game, and modern browsers can take it easily.
