QTreeView: Filtering contents - looking for best practices - proxy

I have a QTreeView in which I wish to filter the contents. I only wish to filter these contents on the top level children (the ones immediately below the root index). Currently I am accomplishing this by creating a simple filtering method in my QTreeView subclass and selectively hiding those rows that do not match.
While the above approach seems to work fine, I am wondering whether I should re-implement this using a QSortFilterProxyModel. If so, what would be the advantages?
If I change to using the QSortFilterProxyModel, I have a few (hopefully small) questions:
1) Since I am filtering only on the top-level children, I would have to re-implement whatever method actually does the filtering so that it leaves all the grandchildren alone, right?
2) My data model has a number of custom methods in it that are responsible for unique keyboard navigation and the like. Do I re-implement these in the proxy model and have them point to my data model's methods? If so, how do I reference the model? I can't seem to find anything comparable to a QTreeView's model() method.
Thanks!

Using a class derived from QSortFilterProxyModel is better: it keeps the responsibility for sorting and filtering out of your tree view.
To reuse as much of the existing code as possible, you can override filterAcceptsRow() like this:
bool MySortFilterProxyModel::filterAcceptsRow(int sourceRow,
                                              const QModelIndex &sourceParent) const
{
    if (sourceParent.isValid())
        return true; // Don't filter anything other than the top level
    return QSortFilterProxyModel::filterAcceptsRow(sourceRow, sourceParent);
}
For the custom methods, you will need to implement them in your proxy and have them delegate to your data model; the proxy keeps a pointer to that model, which you can get back with sourceModel() (the counterpart of QTreeView's model()). For the navigation you may need to use mapToSource and mapFromSource to convert proxy indexes to source model indexes and back.
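For example, a rough sketch of forwarding one such method (nextTopLevelIndex() and MyDataModel are made-up names, not something taken from your model):

QModelIndex MySortFilterProxyModel::nextTopLevelIndex(const QModelIndex &proxyIndex) const
{
    // Map the proxy index into the source model, call the source model's custom
    // method, then map the result back so the view can use it directly.
    MyDataModel *source = qobject_cast<MyDataModel *>(sourceModel());
    if (!source)
        return QModelIndex();
    return mapFromSource(source->nextTopLevelIndex(mapToSource(proxyIndex)));
}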


How to model updated items with UICollectionViewDiffableDataSource

I'm struggling to understand how to use UICollectionViewDiffableDataSource and NSDiffableDataSourceSnapshot to model change of items.
Let's say I have a simple item which looks like this:
struct Item {
    var id: Int
    var name: String
}
Based on the names of the generic parameters, UICollectionViewDiffableDataSource and NSDiffableDataSourceSnapshot should operate not on Item itself, but only on an identifier, which is Int in this example.
On the other hand, again based on the names of the generic parameters, UICollectionView.CellRegistration should operate on complete Items. So my guess is that UICollectionViewDiffableDataSource.CellProvider is responsible for finding complete Items by id. Which is unfortunate, because then, aside from the snapshots, I need to maintain a separate storage of items, and there is a risk that this storage may go out of sync with the snapshots.
But it is still not clear to me how do I inform UICollectionViewDiffableDataSource that some item changed its name without changing its id. I want UICollectionView to update the relevant cell and animate change in content size, but I don't want insertion or removal animation.
There are two approaches that would solve your problem in this scenario.
The first is to conform your Item model to the Hashable protocol. This would allow you to use the entire model as the identifier, and the cell provider closure would hand you an object of type Item. UICollectionViewDiffableDataSource would use the hash value of each instance of your model (which considers both the id and name properties, thereby covering your name-changing issue) to identify the data for a cell. This is better than trying to trick the data source into considering only the id as the identifier because, as you stated, other aspects of the model might change. The whole point of structs is to act as value types, where the composition of all the model's properties determines its 'value', so there is no need to make the data source look only at Item.id.
The second is to do as you said and create a separate dictionary from which you can retrieve the Items by their ids. While it is slightly more work to maintain a dictionary, it is a fairly trivial difference in terms of lines of code; all you need to do is rebuild the dictionary every time you apply a new snapshot. In this case, to update a cell when the model changes, swap out the model in your dictionary and call reloadItems(_:) on your snapshot.
The second option is generally my preferred choice, because the point of a diffable data source is to handle massive data sets while only concerning the data source with a simple identifier for each item. In this case, though, your model is so simple that there is really no concern about wasted compute time calculating hash values and the like. If you think your model is likely to grow over time, I would go with the dictionary approach.
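A rough sketch of that dictionary approach (itemsByID, Section and dataSource are assumed names, not taken from your code):

var itemsByID: [Int: Item] = [:]        // rebuilt whenever a fresh snapshot is applied

func rename(itemID: Int, to newName: String) {
    itemsByID[itemID]?.name = newName    // keep the lookup storage in sync
    // dataSource is assumed to be a UICollectionViewDiffableDataSource<Section, Int>
    var snapshot = dataSource.snapshot()
    snapshot.reloadItems([itemID])       // reloads the cell in place; no insert/removal animation
    dataSource.apply(snapshot, animatingDifferences: true)
}

The cell provider then looks the fresh Item up in itemsByID using the Int identifier it is handed.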

Overriding backbone model prototype vs extending model

I'm new to Backbone.js and in my recent project I need a custom validation mechanism for models. I see two ways I could do that.
Extending the Backbone.Model.prototype
_.extend(Backbone.Model.prototype, {
...
});
Creating a custom model that inherits from Backbone.Model
MyApp.Model = Backbone.Model.extend({ ... });
I'm quite unsure which one is the better approach in this case. I'm aware that overriding the prototype is not good for native objects, but does that apply to the Backbone model prototype as well? What kind of problems will I face if I go with the first approach?
You are supposed to use the second approach, that's the whole point of Backbone.Model.extend({}).
It already does what your first approach does, plus other neat tricks, to actually set up a proper inheritance chain (_.extend only copies object properties; you can compare the code for Backbone's extend() and Underscore's _.extend to see the difference. Just extending the .prototype isn't enough for 'real' inheritance).
When I first read your question, I misunderstood and thought you were asking whether to extend from your own Model class or directly from Backbone.Model. That's not your question, so I apologize for the first answer; just to keep a summary here: you can use both approaches together. Most large codebases I have seen or worked on first extend Backbone.Model to create a generic MyApp.Model (which is why I got confused; that's usually the name they give it), which is meant to REPLACE Backbone.Model. Then each concrete model (for instance User, Product, Comment, whatever) extends this MyApp.Model rather than Backbone.Model. This way they can modify some global Backbone behavior for all of their models without changing Backbone's code.
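A bare-bones sketch of that layering (the validate() body is only an illustration):

MyApp.Model = Backbone.Model.extend({
    // Behaviour shared by every model in the app, e.g. a generic validate().
    validate: function (attrs) {
        var missing = _.filter(this.requiredAttrs || [], function (key) {
            return attrs[key] == null;
        });
        if (missing.length) return "Missing: " + missing.join(", ");
    }
});

// Concrete models extend MyApp.Model instead of Backbone.Model.
MyApp.User = MyApp.Model.extend({
    requiredAttrs: ["name", "email"]
});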
_.extend(Backbone.Model.prototype, {...mystuff...}) would add your properties to every Backbone.Model and to objects based on it. You might have meant to do the opposite, _.extend({...mystuff...}, Backbone.Model), which won't change Backbone itself.
If you look at the annotated Backbone source you'll see lines like
_.extend(Collection.prototype, Events, { ... Collection functions ...} )
This adds the Events object contents to every Collection, along with some other collection functions. Similarly, every Model has Events:
_.extend(Model.prototype, Events, { ... Model functions ...})
This seems to be a common pattern for making "classes" in Javascript:
function MyClass(args) {
//Do stuff
}
MyClass.prototype = {....}
It's even used in the Firefox source code.

How to calculate figure's size including all sub figures (that have separate edit parts) in GEF?

I'm trying to draw a diagram that contains a single entity which holds multiple elements inside.
My MVC structure looks something like this:
Model: contains EntityModel.java and ElementModel.java, which represent my model objects.
View: EntityFigure.java and ElementFigure.java
Controller: EntityEditPart.java and ElementEditPart.java
I'm overriding getModelChildren() in EntityEditPart.java to return a list of ElementModel.java objects; that is how GEF knows that an element "belongs" to an entity.
Since I would like to calculate my entity figure's size including the embedded elements, I cannot call entityFigure.getPreferredSize() during createFigure() in EntityEditPart.java, because at that point the element figures do not exist yet (createFigure() in ElementEditPart.java has not been invoked).
I'm looking for a place to size my entity figure after all the child figures have been created.
I thought about overriding addNotify() in ElementEditPart.java; however, it is called after each individual inner element is created, not after all elements have been created.
Any ideas?
Hope I was clear enough...
You can do it by overriding the refreshChildren() method of your edit part, since all the child creation is done in refreshChildren(), which the superclass (AbstractEditPart) calls from its refresh() method:
public void refresh() {
    refreshVisuals();
    refreshChildren();
}
Or you can just override refresh() itself.
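For example, a rough sketch of that second variant inside EntityEditPart.java (only the override is shown; createFigure(), getModelChildren() and the imports are omitted):

@Override
public void refresh() {
    super.refresh();   // refreshes visuals and creates the child edit parts and their figures
    // All embedded element figures exist at this point, so the preferred size
    // finally reflects them.
    IFigure entityFigure = getFigure();
    entityFigure.setSize(entityFigure.getPreferredSize());
}

Whether setting the size explicitly like this is enough depends on how the parent figure's layout manager is configured; the key point is simply that the calculation happens after super.refresh() has built the children.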

Best practices when writing glue code

I asked this question to get some opinions on the subject of glue code.
For example, imagine you have a class (pseudocode):
class MyClass
int attribute a
string attribute b
And to represent that data model, you have BOTH a slider and a text box to represent a, and a text box and say... the window label to represent b.
Obviously, when one of these view objects is changed, you want to update the others. However, updating the entire view is obviously inefficient.
method onSomethingHappened(uiObject)
model.appropriateAttribute = uiObject.value
The question is, what is your opinion on what to do next? Should the model object implement a callback that notifies a listener when the value has been changed, allowing one to write glue code like:
method modelChangedCallback(model, attribute)
uiObject1.value = model.a
uiObject2.value = model.a
Where you might examine which attribute changed and respond accordingly? This is the model used by Objective-C and Cocoa on the Mac, for the most part.
OR, would you rather have the responsibility lie completely in the glue code?
method onSomethingHappened(uiObject)
model.appropriateAttribute = uiObject.value
self.updateForAttribute("appropriateAttribute")
Both of these approaches can get pretty hairy (as is the problem with glue code) when your project gets large. Maybe there are other approaches. What do you think?
Thanks for any input!
For me I think it comes down to where the behavior is needed. In the situation you describe, the fact that you are binding multiple controls to a property is what is driving the requirement, so it doesn't make sense to add code to the model to support that.
In a web-based model I would probably put the logic in the web page, since that can be done rather cheaply using Javascript. If I don't have that luxury (i.e. I'm dealing with a "dumb" view), then it would probably make sense to do it in the controller or in model glue code. If this sort of thing becomes common enough, I may go as far as creating some form of generic helper to reduce the amount of boilerplate code I have to deal with.
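For the web-page case, a tiny sketch of what such a generic helper could look like (the element ids and the model object are made up for illustration):

// Keep several controls in sync with one model attribute.
function bind(model, attribute, controls) {
    controls.forEach(function (control) {
        control.addEventListener("input", function () {
            model[attribute] = control.value;                        // view -> model
            controls.forEach(function (other) {
                if (other !== control) other.value = model[attribute];   // model -> other views
            });
        });
    });
}

// Both the slider and the text box stay in sync with model.a.
bind(model, "a", [document.getElementById("aSlider"),
                  document.getElementById("aTextBox")]);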

Displaying computed data with external dependencies

I'm building a report that needs to include an 'estimate' column, which is based on data that's not available in the dataset.
Ideally I'd like to be able to define a Java interface
public int getEstimate(int foo_id, int bar_id, int quantity);
where foo_id, bar_id and quantity are available in the row in which I want the estimate presented.
There will be multiple strategies for producing the estimate so it would be good to use an interface to allow swapping them when needed.
Looking at the BIRT docs, I think it's possible I ought to be using the event handler mechanisms, but that seems to only allow defining a class to use and I'd somehow like to inject a configured estimator.
A non-obfuscated example might be to say that I have a dataset which includes an IP address column, and I'd like to be able to use some GeoIP service to resolve the country from the IP address. In that case I'd have an interface public String getCountryName(String address) and the actual implementations may use MaxMind, a local cache or some other system.
How would I go about doing this?
Or.. would I be better off by writing a scripted data source that can integrate the computed data before delivering it to BIRT?
Or.. some sort of scripted data source that is then used to create a join data set?
I think a Scripted Data Source would work fine, but a Java-based event handler would be more straightforward. You can implement it as a simple POJO and get access to any and all the complex objects and tools that will allow you to calculate your estimate. The simplest solution of all may simply be to add a calculated field to the data set.
When creating the calculated field, you can get pretty complex in terms of the scripting logic you can leverage in order to produce the resultant value. The nicest thing about this route is that all the other column values in the row (which I assume you need to calculate the estimate) are made available via the Expression editor. You can pull in complex objects (POJOs) to help in your calculations here as well by using the "Packages" object (i.e. var red = new Packages.redwood.HelloWorld())
If you want to create the event handler class, here is what I would do: create a text item and bind its onCreate event to your POJO (by extending TextItemEventAdapter and overriding the onCreate method). There you can do any work you want, and at the end simply call 'text.setText(theEstimateResult);' to make the estimate itself visible. As for accessing data values for your calculations, you can get to those in the POJO too; I assume the estimate will be part of a larger table of values, and you can access any specific row value via the reportContext.
Those are the two ideas I would give a try first. The computed column is the fastest to implement and the least likely to throw you a curve during deployment. Let me know which way you choose and we can hash it out further if needed.
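For the strategy-swapping part of your question, a plain-Java sketch of keeping the configured estimator behind your interface (the class names are made up; only the getEstimate signature comes from your post):

public interface Estimator {
    int getEstimate(int foo_id, int bar_id, int quantity);
}

// One possible strategy; a cache-backed or service-backed estimator would
// implement the same interface.
class FlatRateEstimator implements Estimator {
    private final int perUnit;

    FlatRateEstimator(int perUnit) { this.perUnit = perUnit; }

    public int getEstimate(int foo_id, int bar_id, int quantity) {
        return perUnit * quantity;   // placeholder logic for illustration only
    }
}

Whichever BIRT hook you end up using (event handler, calculated-field POJO or scripted data source) only ever sees the Estimator interface, so swapping implementations never touches the report design.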
