Improving Quartz2D drawing performance

I'm using Core Plot to do some charting. However, charting performance degrades noticeably once I add two hosting views and try to scroll the three charts together.
Using the Time Profiler, I found that the majority of the time is spent in two functions, CGSFillDRAM8by1 and CGSColorMaskCopyARGB8888.
What can I do to improve performance here? These two functions appear to be the bottleneck in my drawing.

Make sure you set the blend mode to copy instead of normal; that should help somewhat. You can also tweak properties of the path, such as the miter limit.
If you don't need to save the chart and it's just for viewing, I would use a CAShapeLayer and attach a path to it representing your chart. That will render far faster than Quartz 2D.

Related

Autodesk forge custom geometry

I am currently facing the problem that rendering custom geometry into my Forge viewer allocates a huge amount of memory. I am using the technique suggested on the Autodesk website: https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/custom-geometry/.
I need to render up to 300 custom geometries in my viewer, but when I try, the page simply crashes and memory usage jumps above 5 GB. Is there any good way to render a large amount of custom geometry in Forge while keeping performance at a usable level?
Thx, JT
I'm afraid this is not something Forge Viewer will be able to help with. Here's why:
The viewer contains many interesting optimizations for efficient loading and rendering of a single complex model. For example, it builds a special BVH of all the geometries in a model so that they can be traversed and rendered in an efficient way. The viewer can also handle very complex models by moving their geometry data in and out of the GPU to make room for others. But again, all this assumes that there's just one (or only a few) models in the scene. If you try and add hundreds of models into the scene instead, many of these optimizations cannot be applied anymore, and in some cases they can even make things worse (imagine that the viewer suddenly has to traverse 300 BVHs instead of a single one).
So what I would recommend is: try to avoid situations where you would have hundreds of separate models in the scene. If possible, consider "consolidating" them into a single model. For example, if the 300 models were Inventor assemblies that you need to place at specific positions, you could:
aggregate all the assemblies using Design Automation for Inventor, and convert the result into a single Forge model, or
create a single Forge model with all 300 geometries, and then move them around at runtime using the Viewer APIs (see the sketch at the end of this answer)
If none of these options work for you, you could also take a look at the new format called SVF2 (https://forge.autodesk.com/blog/svf2-public-beta-new-optimized-viewer-format) which significantly reduces the memory footprint.
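If you go with the second option, fragments of the loaded model can be repositioned at runtime. Here's a rough sketch of that idea, assuming you already have a viewer instance and know the fragId of the part you want to move; the fragment id and offset values below are just placeholders:
// Move one fragment of the loaded model at runtime (sketch).
function moveFragment(viewer, fragId, offset) {
    const fragProxy = viewer.impl.getFragmentProxy(viewer.model, fragId);
    fragProxy.getAnimTransform();                 // read the fragment's current transform
    fragProxy.position.set(offset.x, offset.y, offset.z);
    fragProxy.updateAnimTransform();              // write the modified transform back
    viewer.impl.sceneUpdated(true);               // ask the viewer to re-render
}
// Example: shift fragment 0 by 10 units along X (placeholder values).
moveFragment(viewer, 0, { x: 10, y: 0, z: 0 });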

How can I make transitions between dc.js charts smooth when I have a map with more than 20k points?

I've recently built a smaller version of a prototype data explorer incorporating crossfilter, dc.js, and Leaflet.markercluster. The small version (prototype dashboard) works properly. The problem I am having is when I try to scale it up to 20k points or more.
The charts still render correctly, and the map works to update the charts smoothly when zooming or panning, but when I interact with one of the charts, the transitions between the other charts are no longer smooth. They jump to their next position rather than smoothly transitioning.
I tried removing the map and this restored the transitions between the other charts to a nice smooth transition again.
I'm wondering if the re-rendering process is getting caught up with the 20k points each time an interaction occurs.
If anyone has any suggestions about where I might look for a solution I'd be grateful.
Thanks for posting a block, that makes things easier to test.
I simulated a lot more points by generating 200 rows for each of yours, for roughly 46k rows. I saw only a little stuttering at 100x (roughly 23k rows) on a 2017 iMac with plenty of RAM.
Leaflet.markercluster is known to be slow with more than 10k points. With 46k rows it took about 475 ms for Leaflet.markercluster to clear and re-add the Leaflet layers.
Since there is only one thread in JavaScript (unless you use workers), D3 needs to get timeouts (actually requestAnimationFrame) every 16ms or so in order to produce fluid animation.
One workaround is to delay the map redraw by 500 ms until the other charts have finished:
dc.override(mapChart, 'redraw', function() {
    // defer the map's own redraw so the other charts can animate first
    window.setTimeout(() => mapChart._redraw(), 500);
});
Fork of your block with workaround.
Of course, this also makes the map take 500ms longer to redraw. And if you click around fast enough, the last map redraw will still be running when it's trying to draw the charts.
You could also try the chunked addLayers options (chunkedLoading and chunkInterval), but I think you would have to set chunkInterval so low that it would also slow the clustering down.
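For reference, the chunked loading options look roughly like this; the interval values below are just numbers to experiment with, not recommendations:
// Leaflet.markercluster chunked loading (values are examples to tune).
var markers = L.markerClusterGroup({
    chunkedLoading: true,   // add markers in small batches instead of all at once
    chunkInterval: 50,      // ms of work per batch before yielding to the browser
    chunkDelay: 20          // ms to wait between batches
});
markers.addLayers(allYourMarkers);   // allYourMarkers: your array of L.marker objects
map.addLayer(markers);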
Processing this much data efficiently is possible in JavaScript - obviously crossfilter has no problem here. I don't know if the clustering algorithm is inherently too expensive. Someone on that issue suggested pre-aggregating the points, but I think that would mean you couldn't see individual points.

KineticJS layers redraw optimization

Background: I am developing a real-time multiplayer HTML5 canvas game using KineticJS which will primarily be played in mobile phone browsers. There's a lot going on in the game, such as socket communication with the server every second, redrawing and animations using KineticJS based on the server responses, and all of this on top of a heavy graphical interface. The game runs well in desktop browsers but is sluggish on mobile phones, so I need to find all the ways in which the code can be optimized.
My questions:
Let's say I need to redraw a particular part of the screen based on a server response I just received: should I keep these need-to-be-redrawn elements in a separate layer, so that fewer elements have to be redrawn? Since in my case I need to redraw every second, will this lead to a performance improvement?
If the answer to the above is yes, then what is the optimal number of layers to divide my layout into? I ask because I have many different parts of the screen that need to be redrawn in response to different server responses (although not all at the same time). If all of these need to go in separate layers, I need to know how far I can stretch the logic above; for example, can I have 10 different layers without sacrificing the performance that is the aim of this whole exercise?
Eric Rowell (creator of KineticJS) has done some stress tests here: http://www.html5canvastutorials.com/labs/html5-canvas-kineticjs-drag-and-drop-stress-test-with-1000-shapes/
And he says this:
"Create 10 layers each containing 1000 shapes to create 10,000 shapes. This greatly improves performance because only 1,000 shapes will have to be drawn at a time when a circle is removed from a layer rather than all 10,000 shapes."
"Keep in mind that having too many layers can also slow down performance. I found that using 10 layers each made up of 1,000 shapes performs better than 20 layers with 500 shapes or 5 layers with 2,000 shapes."
So your takeaway is that:
1. Yes, multiple canvas layers that isolate different groups of redrawables are the way to go (a minimal sketch follows below).
And,
2. To optimize the tradeoff (the overhead of multiple canvases vs. the simplicity of one canvas), you need to test with your own particular code in the environments it will be operating in.
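As a rough sketch of point 1 (the container id and layer names below are made up for illustration): keep the static artwork and the frequently updated elements on separate layers, and redraw only the layer that changed.
// Static background and frequently updated game objects live on separate layers.
var stage = new Kinetic.Stage({ container: 'game', width: 800, height: 600 });
var backgroundLayer = new Kinetic.Layer();   // drawn once at startup
var unitsLayer = new Kinetic.Layer();        // redrawn on every server update
stage.add(backgroundLayer);
stage.add(unitsLayer);

function onServerUpdate(units) {
    // ...update the positions of the shapes already added to unitsLayer...
    unitsLayer.draw();   // redraws only this layer, not the background
}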
Good luck with your game :)

Google Maps API v3, lots of markers, clustering and performance

I have about 5000 markers I need to render on a Google Map. I'm currently using the API (v3) and there are performance issues on slower machines, especially in IE. I have done the following already to help speed things up:
Used a simple marker class that extends OverlayView and renders a single DIV element per marker
Implemented the MarkerClusterer library to cluster the markers at different levels
Render GIFs for IE, instead of alpha PNGs
Are there faster clustering classes? Any other tips? I'm trying to avoid server-side clustering unless this is the only option left to squeeze performance out of the system.
Thanks
I used a method that loads all the markers onto the page, and then listens for the map to finish panning.
When the map has finished panning, I first check the zoom level: if the map is zoomed out too far I don't display anything. If it's at an acceptable level, I then loop through the markers I have stored and see if they fall inside the map's bounding box. If they do, they get added. A second loop then removes any that have moved out of view.
The highest number I've used with this method is about 30,000 markers, although you must be zoomed in quite far to see them. In areas with a higher concentration of markers it's obviously a little slower, but it's usable.
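A minimal sketch of that approach with the v3 API (the zoom threshold and the markers array are placeholders you'd adapt to your own marker class):
// Attach only the markers that fall inside the current viewport.
var MIN_ZOOM = 12;   // placeholder threshold; tune it for your data density

google.maps.event.addListener(map, 'idle', function() {
    var bounds = map.getBounds();
    var zoomedInEnough = map.getZoom() >= MIN_ZOOM;
    for (var i = 0; i < markers.length; i++) {
        var visible = zoomedInEnough && bounds.contains(markers[i].getPosition());
        markers[i].setMap(visible ? map : null);   // add to, or remove from, the map
    }
});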
The solution mentioned above also works for much higher numbers of markers. We use it for millions of GPS points on the backend (including polygons, etc.). The only problem is the logic behind it, such as proper caching of spatial queries, or fetching new results only when the user moves the map more than X meters. It takes a lot of work to get right, but for viewing a really high number of points there is nothing better.
Marker clusterers usually work on the browser side, so you still need to load all the points at once, and this makes that method unusable for large numbers.
You can check it out live at http://www.tixik.com/london-2354567.htm (just click "plan a trip" and start planning). Just try to move the map, or zoom in or out, and all the points will show/hide as the map is zoomed/dragged.

HTML5 Canvas Performance: Loading Images vs Drawing

I'm planning on writing a game using JavaScript / canvas and I just had one question: what performance considerations should I think about with regard to loading images vs. just drawing with the canvas methods? Because my game will use very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
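A small sketch of that draw-once-then-reuse idea (the sizes and colors are arbitrary):
// Pre-render an expensive shape once to an off-screen canvas...
var sprite = document.createElement('canvas');
sprite.width = sprite.height = 32;
var sctx = sprite.getContext('2d');
var grad = sctx.createRadialGradient(16, 16, 2, 16, 16, 16);
grad.addColorStop(0, 'rgba(255,255,255,1)');
grad.addColorStop(1, 'rgba(255,255,255,0)');
sctx.fillStyle = grad;
sctx.fillRect(0, 0, 32, 32);

// ...then stamp copies of it every frame instead of rebuilding the gradient.
function drawParticles(ctx, particles) {
    for (var i = 0; i < particles.length; i++) {
        ctx.drawImage(sprite, particles[i].x, particles[i].y);
    }
}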
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: Canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, you might want to take a look at Doodle-js.
Loading an image out of the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so they get into the cache.
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to get faster compared to recreating it from primitives, as the complexity of the image gets higher.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic, then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as those browsers take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which can either be images you draw as part of initialization and then move around, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics will need to be, how large the bitmapped images would be, what type of framerate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects you can also use divs. They can be circles, squares and lines in a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), but there would be hardly any performance hit.
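For example, a circle is just an absolutely positioned div with a border radius (the size and color below are arbitrary):
// A 20px circle drawn with a plain div and CSS, no canvas involved.
var dot = document.createElement('div');
dot.style.cssText =
    'position:absolute; left:40px; top:40px;' +
    'width:20px; height:20px; border-radius:50%; background:#c00;';
document.body.appendChild(dot);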
