I am creating a FireMonkey (FMX) interface that includes several TChart components to display data with 1,000,000 to 10,000,000 points (C++Builder 10.4). The curves are displayed quickly by following the advice given in Steema's note “Real-Time Charting in TeeChart”. However, two problems arise:
Even with 1,000,000 points, successive zooms slow the interface down enormously (beyond three zooms this becomes quite painful for the user).
When the number of points is high, the interface becomes very slow, and interactions with the TChart are no longer smooth.
I have tried changing many parameters of the TChart and of the TFastLineSeries, but without success. I also tried modifying the global display variables "GlobalUseGPUCanvas" and "GlobalUseDirect2D" without being able to observe any improvement. If I compare with the VCL TChart, the curves are displayed a little more slowly there, but interactions such as zooming are instantaneous and do not seem to slow down the interface, whatever the number of points, unlike in the FMX case.
Do you have any idea how to solve these two problems?
Thank you,
I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.
I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my desktop (above average) can't run it above 20 FPS. I'm using Lambert material as well as a single ambient light, so I feel like this isn't too much to ask.
With these figures in mind, is this the expected behavior for three.js rendering?
I'm pretty sure that is not the end of the line; you are probably missing some possibilities for massive performance improvements.
But just to give you some numbers first:
If you leave everything fancy out (including three.js) and just render an ultra-simple point cloud with one fragment per point, you can easily get to rendering 10-20 million (yes, million) points/vertices on an average GPU.
Just with simple shapes and materials, I have already gotten three.js to render something in the range of 500k triangles (at 1080p resolution) at 60 FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
However, these kinds of numbers are not really helpful.
Some hints:
If you want to debug your rendering performance, you should first add some metrics. Renderstats is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example).
Generally the choice of material shouldn't matter too much; the GPU is way more capable than most people think, and it's more likely a problem somewhere else in the pipeline. EDIT from comment: In some cases, like high-resolution displays with slow GPUs (think mobile devices), this is less true and complicated shader code can slow down your site, but it might be worth looking at the other points first. As the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query extension to get some information about what is going on on the GPU.
The number of draw calls shouldn't be too high: three.js needs to issue a single draw call for every Mesh and Points object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. You can reduce the number of draw calls by merging multiple geometries into one and making use of multi-materials, vertex colors and things like that (see the sketch after these hints).
If you are doing postprocessing, the GPU needs to render every pixel on screen several times, which can also massively limit your performance. This can be optimized by merging multiple postprocessing passes into one (I admit, that'd be a lot of hard work...).
Another problem could be on the JS side: use the profiler or timeline view in the Chrome devtools to see whether it's the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12 ms per frame). I've been told there are ways to optimize JavaScript performance as well :)
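To make the draw-call hint concrete, here is a minimal sketch of merging many small geometries into a single mesh. It assumes a recent three.js build with the BufferGeometryUtils addon (older releases call the helper mergeBufferGeometries), and the box count, sizes and the existing scene variable are made up for illustration:

```javascript
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Build many small boxes at random positions...
const pieces = [];
for (let i = 0; i < 5000; i++) {
  const box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(
    Math.random() * 200 - 100,
    Math.random() * 200 - 100,
    Math.random() * 200 - 100
  );
  pieces.push(box);
}

// ...then merge them into one BufferGeometry, so the whole batch is rendered
// with a single draw call instead of 5000 separate ones.
const merged = mergeGeometries(pieces);
const mesh = new THREE.Mesh(merged, new THREE.MeshLambertMaterial({ color: 0x88aaff }));
scene.add(mesh); // assumes an existing `scene`
```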
Today's displays have quite a huge range of sizes and resolutions. For example, my 34.5 cm × 19.5 cm display (a diagonal of 39.6 cm or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is simple to explain when developers use pixels to define their GUI. For "traditional" displays this is not a big problem, as the pixels are about the same size on most monitors, but on the new monitors with much higher pixel density the pixels are simply smaller.
So how can / should user interface developers deal with that problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OS X/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem with automatic scaling, but if bitmap graphics are involved, developers might need to provide different bitmaps, as on Android (which faces the widest variety of resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, different UIs for tablet and smartphone) and perhaps the DPI, to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
Is that still a problem (it's been a while since I last read about it) or was that fixed meanwhile?
Depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OS X display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed size) or make objects occupy a percentage of the screen with simple math. For web development, you can use vw and vh (viewport width and viewport height): by adding these to the end of a value instead of px, the objects take up a percentage of the viewport - for example, 100vh takes 100% of the viewport height. Then what I think is the best way to do it, but time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options can be found with an easy Google search.
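As a minimal sketch of the "percentage of the screen" approach mentioned above (the playButton element and the 30%/10% proportions are hypothetical; the pure-CSS equivalent would be width: 30vw; height: 10vh):

```javascript
// Size an element as a fraction of the viewport instead of in fixed pixels.
function sizeToViewport(el, widthFraction, heightFraction) {
  el.style.width = Math.round(window.innerWidth * widthFraction) + 'px';
  el.style.height = Math.round(window.innerHeight * heightFraction) + 'px';
}

const button = document.getElementById('playButton'); // hypothetical element
sizeToViewport(button, 0.3, 0.1);

// Re-apply whenever the window (and therefore the viewport) changes size.
window.addEventListener('resize', () => sizeToViewport(button, 0.3, 0.1));
```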
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about GUI application design:
Never set or expect a constant pixel size for text - the user can change it from the OS system settings. Use real-world measures for the text and check its pixel size when drawing. Provide some way to fit text of arbitrary size within the boundaries of the window.
Never set or expect a constant pixel size for GUI widgets. Try to position them in the window in some adaptive way, according to the size of the window. Most GUI widget toolkits today provide such mechanisms.
Never set or expect a constant pixel size for dialog windows. Let the OS choose the size for you and then use what you get (X), or, if you need to set some size and position (Windows), define it as a percentage of the screen size.
If possible, use scalable image formats for icons; SVG is actually great for icons. Using sets of bitmap icons at different sizes is acceptable, but it is highly suboptimal in memory use and still will not provide perfect scaling in most cases.
First, a disclaimer. I'm well aware of the standard answer for X vs. Y - "it depends". However, I'm working on a very general-purpose product, and I'm trying to figure out "it depends on what". I'm also not really able to test on a wide variety of hardware, so try-and-see is an imperfect measure at best.
I've been doing some googling, and I've found very little reference to using an offscreen render target/surface for hit testing. I'm not sure of the nomenclature; what I'm talking about is using very simple shaders to render a geometry ID (for example) into a buffer, then reading the pixel value under the mouse to see which geometry is directly under the mouse pointer.
I have, however, found 101 different tutorials on doing triangle intersection, à la D3DXIntersect and the DirectX "Pick" sample.
I'm a little curious about this - I would have thought using the hardware was the standard method. By all rights, it should be many orders of magnitude faster, and it should scale far better.
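To make the offscreen-buffer approach above concrete, here is a minimal WebGL-flavoured sketch of the same idea (a Direct3D version would use an extra render target plus a CPU-readable copy instead; the drawSceneWithIdColours callback and the colour-to-ID encoding are assumptions for illustration):

```javascript
// Create an offscreen colour + depth target used only for picking.
// Assumes the picking target has the same size as the main canvas.
function createPickingTarget(gl, width, height) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  const depth = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return fbo;
}

// drawSceneWithIdColours is a hypothetical callback that renders every object with a
// flat colour encoding its ID (low byte in red, next byte in green, next in blue).
function pick(gl, fbo, drawSceneWithIdColours, mouseX, mouseY, canvasHeight) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.clearColor(0, 0, 0, 0); // ID 0 means "nothing under the cursor"
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawSceneWithIdColours();

  // Read back only the single pixel under the mouse (GL's origin is bottom-left).
  const pixel = new Uint8Array(4);
  gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16); // decode the object ID
}
```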
I'm relatively new to graphics programming, so here are my assumptions, for you to disabuse me of.
1) A simple shader that does the geometry transform and writes a node ID + UV value should be nearly free.
2) The main cost in the HW pick method would be the buffer fetch, when getting the rendered surface back off the GPU for the CPU to read. I have no idea how costly this is. Microseconds? Milliseconds? Seconds? Minutes?
3) This may be obvious, but I am assuming that triangle intersection (D3DXIntersect) is only possible on the CPU.
4) A possible cost people want to avoid is the cost of the extra render target(s) (z-buffer + surface). I'm guessing about 10 MB for 1024x1280 (a standard screen size?). This is acceptable to me, although if I could render to a smaller surface (trading accuracy for memory) I would do so (is that possible?).
This all leads to a few thoughts.
1) For very simple scenes, triangle intersection may be faster. Exactly what counts as simple or complex is hard to guess at this point. I'm looking at possibly hundreds of tris up to tens of thousands, probably not much more than that.
2) The HW buffer needs to be rendered regardless of whether or not it's used (in my case). However, it can be reused without cost (e.g. click-drag, where the mouse tracks across a static scene).
2a) Possibly, triangle intersection may be preferable if my scene updates every frame, or if I have limited mouse interaction.
Now that I've finished writing, I see a similar question has been asked (3D Graphics Picking - What is the best approach for this scenario). My problem with it is (a) why would you need to re-render your picking surface on click-drag, as your scene hasn't actually changed, and (b) wouldn't it still be faster than triangle intersection?
I welcome thoughts, criticism, and any manner of side-tracking :-)
Background: I am developing a real-time multiplayer HTML5 canvas game using KineticJS that will primarily be played in mobile phone browsers. There's a lot going on in the game, such as socket communication with the server every second and redrawing and animations using KineticJS based on the server responses, all on top of a heavy graphical interface. The game works well in desktop browsers but is sluggish on mobile phones, so I need to find all the ways in which the code can be optimized.
My questions:
Let's say I need to redraw a particular part of the screen based on a server response I have just received. Should I keep these need-to-be-redrawn elements in a separate layer, so that I have to redraw fewer elements? Since in my case I need to redraw every second, will this lead to a performance improvement?
If the answer to the above is yes, what is the optimal number of layers into which I should divide my layout? I ask because I have a lot of different parts of the screen that need to be redrawn based on different server responses (although not all at the same time). If these all need to be put in separate layers, I need to know how far I can stretch the logic above - for example, can I have 10 different layers without sacrificing performance, which is the aim of this whole exercise anyway?
Eric Rowell (creator of KineticJS) has done some stress tests here: http://www.html5canvastutorials.com/labs/html5-canvas-kineticjs-drag-and-drop-stress-test-with-1000-shapes/
And he says this:
"Create 10 layers each containing 1000 shapes to create 10,000 shapes. This greatly improves performance because only 1,000 shapes will have to be drawn at a time when a circle is removed from a layer rather than all 10,000 shapes."
"Keep in mind that having too many layers can also slow down performance. I found that using 10 layers each made up of 1,000 shapes performs better than 20 layers with 500 shapes or 5 layers with 2,000 shapes."
So your takeaways are:
1. Yes, multiple canvas layers that isolate different groups of redrawables are the way to go (a minimal sketch follows below).
2. To optimize the tradeoff (the overhead of multiple canvases vs. the simplicity of one canvas), you need to test with your own particular code in the environments it will be running in.
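For point 1, here is a minimal KineticJS sketch of the split (the stage size, container id, shapes and the socket event are assumptions for illustration):

```javascript
// Static artwork that never changes goes on one layer...
var stage = new Kinetic.Stage({ container: 'game', width: 800, height: 600 });
var backgroundLayer = new Kinetic.Layer();
var dynamicLayer = new Kinetic.Layer(); // ...frequently updated shapes go on another.

backgroundLayer.add(new Kinetic.Rect({ x: 0, y: 0, width: 800, height: 600, fill: '#223' }));
var player = new Kinetic.Circle({ x: 100, y: 100, radius: 20, fill: 'red' });
dynamicLayer.add(player);

stage.add(backgroundLayer);
stage.add(dynamicLayer);

// On each server response, only the dynamic layer is redrawn;
// the background layer's canvas is left untouched.
socket.on('update', function (data) { // hypothetical socket event
  player.setX(data.x);
  player.setY(data.y);
  dynamicLayer.batchDraw();
});
```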
Good luck with your game :)
I have about 5,000 markers I need to render on a Google Map. I'm currently using the API (v3), and there are performance issues on slower machines, especially in IE. I have already done the following to help speed things up:
Used a simple marker class that extends OverlayView and renders a single DIV element per marker (see the sketch after this list)
Implemented the MarkerClusterer library to cluster the markers at different levels
Rendered GIFs for IE instead of alpha PNGs
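For reference, a minimal sketch of the kind of OverlayView-based marker I mean (the class name, CSS class, pane choice and styling here are illustrative assumptions, not my exact implementation):

```javascript
// A lightweight marker: one absolutely positioned DIV per point.
class FastMarker extends google.maps.OverlayView {
  constructor(position, map) {          // position is a google.maps.LatLng
    super();
    this.position = position;
    this.div = null;
    this.setMap(map);                   // triggers onAdd() and draw()
  }
  onAdd() {
    this.div = document.createElement('div');
    this.div.className = 'fast-marker'; // styled via CSS (size, background image)
    this.div.style.position = 'absolute';
    this.getPanes().overlayMouseTarget.appendChild(this.div);
  }
  draw() {
    const point = this.getProjection().fromLatLngToDivPixel(this.position);
    if (point) {
      this.div.style.left = point.x + 'px';
      this.div.style.top = point.y + 'px';
    }
  }
  onRemove() {
    if (this.div) {
      this.div.parentNode.removeChild(this.div);
      this.div = null;
    }
  }
}
```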
Are there faster clustering classes? Any other tips? I'm trying to avoid server-side clustering unless this is the only option left to squeeze performance out of the system.
Thanks
I used a method that loads all the markers onto the page, and then listens for the map to finish panning.
When the map has finished panning, I first check the zoom level - if the map is zoomed out too far, I don't display anything. If it's at an acceptable level, I then loop through the markers I have stored and see whether they fall within the bounding box of the map. If they do, they get added. A second loop then removes any that have moved out of the view.
The highest number I've used with this method is about 30,000 markers, although I have it set up so you must be zoomed in quite far to see them. In areas with a higher concentration of markers it's obviously a little slower, but it's usable.
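A minimal sketch of that approach (the MIN_ZOOM threshold and the markers array are assumptions for illustration):

```javascript
// markers is an array of google.maps.Marker objects created once, not yet attached to the map.
var MIN_ZOOM = 12; // hypothetical threshold - when zoomed out further than this, show nothing

// 'idle' fires once the map has finished panning or zooming.
google.maps.event.addListener(map, 'idle', function () {
  var bounds = map.getBounds();
  var zoomedInEnough = map.getZoom() >= MIN_ZOOM;

  markers.forEach(function (marker) {
    var visible = zoomedInEnough && bounds.contains(marker.getPosition());
    // Attach markers that entered the view; detach the ones that left it.
    marker.setMap(visible ? map : null);
  });
});
```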
The solution mentioned above works for a much higher number of markers. We use it for millions of GPS points on the backend (including polygons, etc.). The only problem is some of the logic behind it, such as proper caching of spatial queries, or fetching new results only if the user moves the map by more than X meters. It is a lot of work to get it done, but for viewing a really high number of points there is nothing better.
Marker clusterers usually work on the browser side, so all points still need to be loaded at once - and this makes that method unusable for large numbers.
You can check it out live at http://www.tixik.com/london-2354567.htm (just click "plan a trip" and start planning). Just try to move the map or zoom in or out, and all points will show/hide as the map zooms and drags.