QComboBox is very slow with QSqlQueryModel and a large model - performance

I have a few comboboxes with very big data sets of ~100K rows and more. I tried QStandardItemModel - it works fast enough if the model is preloaded, and loading the model takes only a few seconds when performed in a separate thread. Then I tried the comboboxes with QSqlQueryModel, without threading, to improve performance, but found it works much slower than QStandardItemModel (in our project QSqlQueryModel is very fast with this amount of data in a QTreeView, for example). What could be the problem here? Is there a way to speed up the combobox, some parameters?
P.S. The QComboBox::AdjustToMinimumContentsLengthWithIcon policy suggested by the Qt docs does not speed things up much: a dialog with such combos takes too long to open and 10-20 seconds to close. AdjustToMinimumContentsLength works a little faster, but the delays are still too long.

Found the solution. My first thought was to find a model that would work faster, for example QStringListModel in place of QStandardItemModel or QSqlQueryModel. However, they seem to work at almost the same speed. Then I found in the Qt docs that a combobox by default uses QStandardItemModel to store the items, and a QListView subclass displays the popup list; you can access the model and view directly (with model() and view()). That was strange to me, since I know QTreeView works just fine with even bigger amounts of data, and the simpler QListView, also inherited from QAbstractItemView, should too. I started to dig into QListView and found the properties that solved the problem: the combobox now opens immediately on a large amount of data. A static function was written to tweak all such combos (the explanatory comments are from the Qt docs):
void ComboboxTools::tweak(QComboBox *combo)
{
    // For performance reasons use this policy on large models,
    // or AdjustToMinimumContentsLengthWithIcon
    combo->setSizeAdjustPolicy(QComboBox::AdjustToMinimumContentsLength);
    QListView *view = qobject_cast<QListView *>(combo->view());
    if (!view)
        return;
    // Improving Performance: It is possible to give the view hints
    // about the data it is handling in order to improve its performance
    // when displaying large numbers of items. One approach that can be taken
    // for views that are intended to display items with equal sizes
    // is to set the uniformItemSizes property to true.
    view->setUniformItemSizes(true);
    // This property holds the layout mode for the items. When the mode is Batched,
    // the items are laid out in batches of batchSize items, while processing events.
    // This makes it possible to instantly view and interact with the visible items
    // while the rest are being laid out.
    view->setLayoutMode(QListView::Batched);
    // batchSize: the number of items laid out in each batch
    // if layoutMode is set to Batched. The default value is 100.
    view->setBatchSize(100);
}


Qt performance - avoid crashing

I have a problem with my application, which runs in real time. It receives values and plots two graphs; it also updates a few (8) QLabels and displays them in a window. The application should run for a very long time while processing data, so I wanted to test it in a faster regime and set it to send values to the application every 10 ms. When I start the application everything works fine, but after some time the QLabels or the graph plotting get stuck, and in the end the application crashes.
I would like to ask: is 10 ms too fast for updating QLabels and graphs (QCustomPlot), and is that the reason the program crashes, or do I need to search for the problem somewhere else?
void mainWnd::useDataFromPipe(double val) {
    if(val<=*DOF) {
        FSOcounter=FSOcounter+1.0;
        lastRecievedFSO->saveValue(val);
        updateGraphMutex->lock();
        FSOvectorXshort->replace(1, FSOcounter);
        updateGraphMutex->unlock();
        if(!firstDataFSO()) {
            if(val>maximumRecievedFSO->loadValue()) maximumRecievedFSO->saveValue(val);
            else if(val<minimumRecievedFSO->loadValue()) minimumRecievedFSO->saveValue(val);
        }
        else {
            maximumRecievedFSO->saveValue(val);
            minimumRecievedFSO->saveValue(val);
        }
        numberOfRecievedValuesFSO->saveValue(numberOfRecievedValuesFSO->loadValue()+1.0);
    }
    else {
        RFcounter=RFcounter+1.0;
        lastRecievedRF->saveValue(val);
        updateGraphMutex->lock();
        RFvectorXshort->replace(1, RFcounter);
        updateGraphMutex->unlock();
        if(!firstDataRF()) {
            if(val>maximumRecievedRF->loadValue()) maximumRecievedRF->saveValue(val);
            else if(val<minimumRecievedRF->loadValue()) minimumRecievedRF->saveValue(val);
        }
        else {
            maximumRecievedRF->saveValue(val);
            minimumRecievedRF->saveValue(val);
        }
        numberOfRecievedValuesRF->saveValue(numberOfRecievedValuesRF->loadValue()+1.0);
    }
}
where saveValue() and loadValue() are
void infoLine::saveValue(double val) {
    *value=val;
    valueLabel->setText(QString("<b>%1</b>").arg(val));
}

double infoLine::loadValue(void) {
    return *value;
}
You're not "updating" the labels every 10ms, you're merely calling setText or similar methods on them every 10ms. The labels will be updated as dictated by how many events are in the event queue. As long as you "update" the labels by calling setText etc., you're OK - this should not be a problem.
A crash is usually due to a memory bug. A memory bug typically can be due to running out of memory, a double free, or access to deallocated memory.
What you describe seems like a memory bug due to running out of memory. The most trivial explanation is that you're leaking memory. That's an easy one - smart pointers (QScopedPointer and QSharedPointer) will help with that.
You may also not be leaking memory, but recursing too deeply if you indeed recurse into the event loop.
You may also be updating the widget(s) incorrectly, resulting in forced, repeated, expensive repaints that are not compressed into one repaint.
You'd need to show how you "update" the plot. While the labels are implemented correctly, and repeated calls to various QLabel setXxxx methods are cheap, the custom plot class may simply be implemented incorrectly.
The idiomatic, and indeed the only correct way of updating a widget in Qt is:
Set some data members.
Call update().
This is what QLabel::setText does internally, and it is what any other widget should be doing as well. This functionality should be exposed in a setXxxx-like method, and is an implementation detail, although an important one. The users of the widget don't need to be aware of it.
The update events are posted to the widget and are compressed, such that repeated updates without returning to the event loop result in only one repaint.
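A minimal, framework-free sketch of this compression pattern (a hypothetical Widget type standing in for QWidget, not Qt's actual classes) might look like:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a widget and its event loop, illustrating how
// repeated setXxxx calls compress into a single repaint.
struct Widget {
    std::string text;
    bool updatePending = false; // set by update(), cleared on repaint
    int repaints = 0;

    // The idiomatic pattern: store the data member, then schedule a repaint.
    void setText(const std::string &t) {
        text = t;
        update();
    }

    // Like QWidget::update(): posts at most one pending paint request.
    void update() { updatePending = true; }

    // Called when control returns to the event loop.
    void processEvents() {
        if (updatePending) {
            updatePending = false;
            ++repaints; // one actual repaint, however many updates were posted
        }
    }
};
```

Calling setText a thousand times before returning to the event loop still yields a single repaint, which is why frequent QLabel updates are cheap.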
It's really hard to tell without seeing a self-contained, minimal example. By minimal I mean that anything with no bearing on the issue should be left out.

How do I preload items in a SproutCore ListView?

I've got a relatively fast SproutCore app that I'm trying to make just a tad bit faster.
Right now, when the user scrolls my SC.ListView and they scroll into view some list items that have not been loaded from the server (say from a relationship), the app automatically makes a call to the server to load these records. Even though this is fast, there is still a short period of time where my list items are blank.
I know that I can make them say "Loading..." or something like that (and I have), but I was wondering: is there was a way to pre-load my "off-screen" records so that as the user scrolls, the list items are already loaded?
My ListItemViews will be fairly large (pixel-wise), so even loading double the amount of data is not going to be killer from an AJAX perspective, and it would be nice if as the user scrolled, the content was always loaded (unless they scroll SUPER-SUPER-fast, in which case I'm okay with them seeing a loading indicator).
I currently found a solution by adding the following to my SC.ListView, but I've noticed some major performance issues on mobile and they are directly related to making this change, so I was wondering if there was a better way.
contentIndexesInRect: function(rect) {
    rect.height = rect.height * 2;
    return sc_super();
}
Overriding contentIndexesInRect is the way I would do this. I would do less than double it though – I might get the result from sc_super() and then add a few extra items to the resulting index set. (I believe it comes back frozen, so you may have to clone-edit-freeze.) One or two extra may give you enough breathing room to get the stuff loaded, without contributing nearly as much to the apparent performance issue.
I'm surprised that it results in major performance issues though. It sounds to me like your list items themselves may be heavier-weight than they need to be – for example, they may have a lot of bindings to hook and unhook. If that's what's going on, you may benefit more from improving their efficiency.
I think you would be better served to load the additional data outside of the context of what the list is actually displaying. For instance, forcing more list items to render in order to trigger additional requests does result in having the extra data available, but also adds several unnecessary elements to the DOM, which is actually detrimental to overall performance. In fact these extra elements are most likely the cause of the major slowdown on mobile once you get to a sufficient number of extras.
Instead, I would first ensure that your list item views are properly pooling so that only the visible items are updating in place with as little DOM manipulation as possible. Then second, I would lazily load in additional data only after the required data is requested. There are quite a few ways to do this depending on your setup. You might want to add some logic to a data source to trigger an additional request on each filled request range or you might want to do something like override itemViewForContentIndex in SC.CollectionView as the point to trigger the extra data. In either case, I imagine it could look something like this,
// …
prefetchTriggered: function (lastIndex) {
    // A query that will fetch more data (this depends totally on your setup).
    var query = SC.Query.remote(MyApp.Record, {
        // Parameters to pass to the data source so it knows what to request.
        lastIndex: lastIndex
    });

    // Run the query.
    MyApp.store.find(query);
},
// …
As I mention in the comments above, the structure of the request depends totally on your setup and your API so you'll have to modify it to meet your needs. It will work better if you are able to request a suitable range of items, rather than one-at-a-time.

What is the biggest performance issue in Emberjs?

For the last six months I have been developing Ember.js components.
I first had performance problems when I tried to develop a table component. Each cell in this table was an Ember.View, and each cell had a binding to an object property. When the table had 6 columns and I tried to list about 100 items, the browser froze for a while. So I discovered that it is better to write a function which returns a string instead of using Handlebars, and to handle the bindings manually with observers.
So, is there any good practice for using a minimum of bindings, or for writing bindings without losing a lot of performance? For example, not using bindings for a big amount of data?
How many Ember.View objects can be appended to a page?
Thanks for any response.
We've got an ember application that displays 100s of complex items on a single page. Each item uses out-of-box handlebars bindings to output about a dozen properties. It renders very quickly (< 100ms) and very little of that time is spent on bindings.
If you find the browser is hanging at 6 columns and 100 items, something else is surely wrong.
So, is there any good practice for using a minimum of bindings, or for writing bindings without losing a lot of performance?
Try setting Ember.LOG_BINDINGS = true to see what is going on. It could be that the properties you are binding to have circular dependencies or are costly to resolve. See this post for great tips on debugging:
http://www.akshay.cc/blog/2013-02-22-debugging-ember-js-and-ember-data.html
How many Ember.View objects can be appended to a page?
I don't believe there is a concrete limit; it will certainly depend on the browser. I wouldn't consider optimizing unless there were more than a few thousand.
Check out this example to see how ember can be optimized to handle very large tables:
http://addepar.github.com/ember-table/
Erik Bryn recently gave a talk about Ember.ListView which would drastically cut down the number of views on a page by reusing list cells which have gone out of the viewport.
Couple this technique with the group helper to reduce the number of metamorph script tags if your models update multiple properties at the same time. Additionally, consider using {{unbound}} if you don't need any of the properties to live-update.

MFC CListCtrl: Update virtual-list data in bulk

Is there a way, either natively supported by CListCtrl or with some coding technique, to update a chunk of data in a virtual-list-mode CListCtrl instead of one cell at a time?
By default, when the list requires data for a cell, we handle it via LVN_GETDISPINFO. If I have, say, 8x8 (64) cells visible and constantly updating, it basically calls the LVN_GETDISPINFO handler 64 times. This is fine and expected behavior, but I believe there is a small overhead in calling this function repeatedly, as opposed to just doing it in a for-loop over all 64 cells. And this is a problem for me because my control constantly updates all 64 cells (imagine something like a TCP packet trace).
CListCtrl supports caching, of course (although it is useless in my situation), but again I feel there is overhead in calling the LVN_GETDISPINFO handler over and over. A simple example: checking that my pointer to the database is valid (not null) before obtaining the data ... in essence this line of code is called 64 times, when I could have done it just once and then for-looped over the pointer to get the data for my 64 cells. The pointer is just a simple example; there is more that I am doing (which cannot be avoided) that I won't explain, since it requires code.
As time is of the essence, I can't go back and write my own list control that is more efficient, as it would take time to duplicate the other benefits of CListCtrl that I enjoy by inheriting from it directly. The only issue now is speed. If there were, say, a handler that passed in a null-terminated array of cells to update, so that we could update them in bulk in just one function, that would be great.
Or maybe it is possible to know which cells are pending update in LVN_GETDISPINFO, so that, if possible, I can update all the cells and mark the entire update as done to stop further LVN_GETDISPINFO calls?
Any ideas? Thanks in advance.
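One avenue worth checking: a virtual-mode list control also sends LVN_ODCACHEHINT, which hands you the range of items about to be requested, so the expensive once-per-update work (validating the database pointer, locating the row block) can be done per range instead of per cell. A framework-free sketch of that caching pattern (hypothetical RowCache type, not the MFC API itself):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical row cache filled once per LVN_ODCACHEHINT-style range hint,
// so the per-cell display-info handler only reads prefetched data.
struct RowCache {
    int first = 0;
    std::vector<std::string> rows;
    int bulkFetches = 0;

    // Called once when the control hints at the range it is about to display.
    void onCacheHint(int from, int to) {
        ++bulkFetches; // validate pointers, seek the data source, etc. once
        first = from;
        rows.clear();
        for (int i = from; i <= to; ++i)
            rows.push_back("row " + std::to_string(i)); // bulk fetch
    }

    // Called per cell (the LVN_GETDISPINFO analogue): a cheap cache read.
    const std::string &getDispInfo(int item) const {
        return rows.at(static_cast<size_t>(item - first));
    }
};
```

With a hint for items 0..63, the 64 subsequent per-cell callbacks become plain vector reads, and the setup cost is paid once per update rather than 64 times.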

How to extract or load a large array into listview?

I have an array of 50k items. I want to load all the items into my listview control as fast as possible. Using a loop is not the solution as a loop is very slow.
The underlying ListView control has a virtual mode, which means your app only passes a count to the control, and it then calls back periodically to get information on the entries that are visible. Unfortunately, this functionality is not exposed by the VB6 common controls, but you can still use the underlying control.
See this vbVision example.
There is no way to load in bulk as far as I know, but there are other tricks to make it a bit faster. One is to prevent the control from updating (repainting) during the load; this can be done as simply as hiding it while loading. Another technique is to load a chunk of records up front (say 2K) and then use a timer to load the rest in chunks in the background.
But honestly, I doubt the usefulness of a grid with 50K items displayed. That is too much data to present to a user in one pass. Have you considered refactoring your UI to limit the amount of data a user has to sift through at one time?
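The chunked-load idea can be sketched language-agnostically (hypothetical ChunkedLoader type; the timer is replaced by explicit step calls, and the push_back stands in for ListView item insertion):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical chunked loader: an up-front chunk keeps the UI responsive,
// and the remaining items are appended in background steps (e.g. on a timer).
struct ChunkedLoader {
    std::vector<int> source;   // the full 50K-item array
    std::vector<int> listView; // items handed to the control so far
    std::size_t pos = 0;

    explicit ChunkedLoader(std::vector<int> data) : source(std::move(data)) {}

    // Load one chunk; returns false once everything has been loaded.
    bool step(std::size_t chunkSize) {
        std::size_t end = std::min(pos + chunkSize, source.size());
        for (; pos < end; ++pos)
            listView.push_back(source[pos]); // stands in for an item Add call
        return pos < source.size();
    }
};
```

The first step loads the visible chunk immediately; the rest arrives in background steps without ever blocking on one giant loop.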
