Nebula NatTable persist state - filter

We use NatTable in one of our applications. The table loads its contents from an XML file and has sorting and filtering implemented. I read a post about persistence in NatTable: http://www.eclipse.org/nattable/documentation.php?page=persistence.
This topic is new to me, so I have not been able to change our implementation to save the filter row state when the table is closed and to load that saved state when the table is opened again later. Can you please tell me the steps to follow for this? I have referred to the PersistentNatExample.
There is a class in our application which extends FilterRowHeaderComposite. I have overridden the saveState() method in this class to store the Properties object to the file system. When I debugged the code, the "filterIndexToObjectMap" in the FilterRowDataProvider class was empty by the time its saveState() method was reached. I suspect this is why the filter state is not saved and loaded as expected. Have I missed anything?
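For context, here is a minimal sketch of the Properties-based save/load cycle that the linked persistence documentation describes, assuming the state is written to a plain file (the class name, prefix and file handling are illustrative and not taken from our code). The filter row state only ends up in the Properties if the filter layers take part in the persisted layer composition and their state is populated when saveState() runs:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

import org.eclipse.nebula.widgets.nattable.NatTable;

public final class NatTableStateFile {

    // Write the table state (including the filter row, if its layers take part
    // in persistence) to a file, e.g. when the table is disposed.
    public static void save(NatTable natTable, File stateFile) throws IOException {
        Properties properties = new Properties();
        natTable.saveState("myTable", properties); // the prefix is arbitrary
        try (FileOutputStream out = new FileOutputStream(stateFile)) {
            properties.store(out, "NatTable state");
        }
    }

    // Read the previously saved state back into the same layer composition,
    // e.g. right after the table has been created.
    public static void load(NatTable natTable, File stateFile) throws IOException {
        Properties properties = new Properties();
        try (FileInputStream in = new FileInputStream(stateFile)) {
            properties.load(in);
        }
        natTable.loadState("myTable", properties);
        natTable.refresh();
    }
}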

Related

Can I commit a portion of an @Transactional sequence?

I have a Spring Boot application with a webservice where a user can POST a model of a CollegeCourse instance, including links between that class and the Students who are taking it. (The data is used to store rows in the association table, since those classes have a many-to-many relationship.) This works fine.
Say the enrollment in the course changes. The user expects to send the same JSON structure to the webservice handling the PUT call. The code took the easy path for updating: first finding and deleting all the existing CollegeCourse-Student links, then saving the new links (rather than iterating through the two lists and matching up items). This part also worked as given.
We then added a uniqueness constraint to the CollegeCourse-Student association table, so that said table could not have a single Student linked to one CollegeCourse multiple times. This crashed and burned. A debugging session revealed the culprit: the delete of the CollegeCourse-Student records did not actually remove them from the database until the transaction completed. Thus, when we tried to add the new links back in, any holdovers from the original POST conflicted with what was already in the database.
The service method handling the PUT is annotated with @Transactional. I tried moving the code that finds and deletes the associations into a separate method, and tried both @Transactional(propagation=Propagation.REQUIRED) and REQUIRES_NEW, but neither prevented the uniqueness constraint violation. I also added @EnableTransactionManagement to my Application class, with the same result. Is there a simple solution to my dilemma?
Without knowing exactly what your repository looks like, have you tried to do a manual flush on the entity manager after the deletions?
Something along the lines of
entityManager.flush();
Or, if you're using a Spring Data JPA repository, you should be able to define a flush method in that interface and call it.
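As a rough sketch of that suggestion, assuming a Spring Data JPA repository for the association entity (the repository, entity and method names below are made up for illustration), the deletions can be flushed before the new links are inserted so they no longer collide with the unique constraint:

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CourseEnrollmentService {

    // Hypothetical Spring Data JPA repository (extends JpaRepository) for the
    // CollegeCourse-Student association rows.
    private final CourseStudentLinkRepository linkRepository;

    public CourseEnrollmentService(CourseStudentLinkRepository linkRepository) {
        this.linkRepository = linkRepository;
    }

    @Transactional
    public void replaceEnrollment(Long courseId, List<CourseStudentLink> newLinks) {
        linkRepository.deleteByCourseId(courseId); // assumed derived delete query
        linkRepository.flush();                    // push the pending DELETEs to the
                                                   // database within the transaction
        linkRepository.saveAll(newLinks);          // the re-inserted links no longer
                                                   // trip the uniqueness constraint
    }
}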

Core Data cross storage fetched property

I'm currently wrapping my head around a problem with Core Data.
I have one user model in its own store that I do not have any control over, as it gets shipped with a framework. A Persistent Store Coordinator, Managed Object Model and Context for this model get created automatically and cannot be touched. This model contains a single user entity.
On the other hand, I have a properties model with a properties entity in it that I have complete control over. In there I store properties for some user entities in the other store. Both the user and property entities have an id attribute similar to a foreign key.
This model has its own Persistent Store Coordinator, Managed Object Model and Context.
What I now want is to have the associated user entity as an attribute of the properties entity so I might be able to bind to key-paths similar to myproperty.user.someValueOfTheUserEntity (I'm aware that myproperty might be an array when using fetched properties).
However, as cross-store relationships are not supported, I thought of using a weak relationship via Fetched Properties. That one would just have to match the two corresponding id attributes. I have created a Fetched Property for the user in Xcode and the required accessors in my properties entity's class file (as suggested in other questions, I'm treating the values returned by the Fetched Property as an array).
However, I'm unable to set a destination entity for the Fetched Property in Xcode, as the target entity resides in a completely different store. Would I also have to define my user entity in the properties store? If so, how does Core Data know that that entity shall be fetched not from my properties store but from the users store?
Some threads mentioned using configurations for this, but I cannot find any documentation that goes further than mentioning "use configurations for this".
Can somebody enlighten me on how to set up cross-storage fetched properties?
You can use several persistent stores that share the same data model:
Use a single data model (xcdatamodeld) and add all your entities
Create configurations (Editor/Add Configuration) for each "logical set" of entities that should be stored in a separate store file
Assign (drag) entities to the appropriate configurations
Add the configured persistent stores to your context (see below)
Configure fetched properties
// 1. Add "static", read-only store
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"your static configuration name goes here..."
                                    URL:storeUrl
                                options:@{
                                    NSReadOnlyPersistentStoreOption: @(YES),
                                    NSInferMappingModelAutomaticallyOption: @(YES)
                                }
                                  error:&error];

// 2. Add "dynamic", writable content
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"your dynamic configuration name goes here..."
                                    URL:storeUrl
                                options:@{
                                    NSMigratePersistentStoresAutomaticallyOption: @(YES),
                                    NSInferMappingModelAutomaticallyOption: @(YES)
                                }
                                  error:&error];

How can I update an object/entity that is not completely filled out?

I have an entity with several fields, but in one view I want to edit only one of them. For example, I have a User entity; a user has an id, name, address, username, pwd, and so on. In one of the views I want to be able to change the pwd (and only the pwd), so the view only knows the id and sends the pwd. I want to update my entity without having to load the rest of the fields (there are many, many more), change the one pwd field, and then save them ALL back to the database. Has anyone tried this, or does anyone know where I can look? All help is greatly appreciated.
Thanks in advance.
PS
I should have given more detail. I'm using Hibernate, and Roo is creating my entities. I agree that each view should have its own entity; the problem is, I'm only building controllers, and everything else was done before. We were using finders from the service layer, but we wanted to use some other finders that did not seem to be accessible through the service layer, so the decision was made to blow away the service layer and just interact with the entities directly (through the finders). UserService.update(user) is no longer an option. I have recently found User.persist() and User.merge(); does merge update all the fields on the object, or only the ones that are not null? And if I want a field to now be null, how would it know the difference?
Which technologies besides Spring are you using?
First of all, have separate DTOs for every view, stripped down to only what's needed: one DTO for id + password, another for address data, etc. Remember that DTOs can inherit from each other, so you can avoid duplication. And never pass business/ORM entities directly to the view. It is too risky; leaks in some frameworks might allow users to modify fields you never intended them to.
After the DTO comes back from the view (most web frameworks work like this) simply load the whole entity and fill only the fields that are present in the DTO.
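As a minimal sketch of that load-and-fill step, assuming plain JPA/Hibernate and a hypothetical PasswordChangeDto carrying just the id and the new pwd (all names here are illustrative):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PasswordService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void changePassword(PasswordChangeDto dto) {
        // Load the managed entity and touch only the field the DTO carries.
        User user = entityManager.find(User.class, dto.getId());
        user.setPwd(dto.getPwd());
        // No explicit save/merge call is needed: dirty checking flushes the change
        // when the transaction commits (combine with dynamic-update, below, to keep
        // the generated UPDATE limited to the changed column).
    }
}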
But it seems like it's the persistence that is troubling you. Assuming you are using Hibernate, you can take advantage of the dynamic-update setting:
dynamic-update (optional - defaults to false): specifies that UPDATE SQL should be generated at runtime and can contain only those columns whose values have changed.
In this case you are still loading the whole entity into memory, but Hibernate will generate as small UPDATE as possible, including only modified (dirty) fields.
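With annotation-based mappings, the same switch is exposed as Hibernate's @DynamicUpdate entity annotation (on older Hibernate versions it lives in the XML mapping quoted above); a sketch with an illustrative entity:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate // generate UPDATE SQL at runtime containing only the dirty columns
public class User {

    @Id
    private Long id;

    private String name;
    private String address;
    private String username;
    private String pwd;

    // getters and setters omitted
}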
Another approach is to have separate entities for each use-case/view. So you'd have an entity with only id and password, an entity with only address data, etc. All of them are mapped to the same table, but to different subsets of columns. This easily becomes a mess and should be treated as a last resort.
See the Hibernate reference here:
For persist()
persist() makes a transient instance persistent. However, it does not guarantee that the identifier value will be assigned to the persistent instance immediately; the assignment might happen at flush time. persist() also guarantees that it will not execute an INSERT statement if it is called outside of transaction boundaries. This is useful in long-running conversations with an extended Session/persistence context.
For merge()
if there is a persistent instance with the same identifier currently associated with the session, copy the state of the given object onto the persistent instance
if there is no persistent instance currently associated with the session, try to load it from the database, or create a new persistent instance
the persistent instance is returned
the given instance does not become associated with the session, it remains detached
persist() and merge() have nothing to do with whether the columns are modified or not. Use dynamic-update, as @Tomasz Nurkiewicz has suggested, to save only the modified columns. Use dynamic-insert to insert only the non-null columns.
Some JPA providers such as EclipseLink support fetch groups. So you can load a partial instance and update it.
See:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/AttributeGroup

Accessing properties of Core Data objects via bindings from non-Core Data objects

I have a set of data created by another app and stored in XML format on disk. Since this data is managed by this other app, I don't want to bother with loading this data into a Core Data store for two reasons: 1) it would be redundant storage of the same data, and 2) I would have to constantly update my own Core Data store to match updates in the XML file produced by the other app.
However, I have data created in my own app that needs to be associated with the data from the XML from the other app, and I want to save the data created in my own app to disk.
To accomplish this, the XML data from the other app has persistent, unique IDs associated with each object stored in the XML file. I store these unique IDs in my own Core Data store. Upon every launch of my app, I load the XML data created by the other app, and then I can access the corresponding data in my own app via Core Data by issuing a fetch request for managed objects matching the unique ID.
OtherAppObjects represents items loaded from the XML data. They have their own unique properties in addition to the uniqueID. These OtherAppObjects are controlled by an NSArrayController. Then I have MyManagedObjects which are loaded from the Core Data store, and have distinct unique properties in addition to a uniqueID.
I have a table view which needs to display properties from both the OtherAppObjects as well as the MyManagedObjects, so I want to be able to access and set properties of the MyManagedObjects via bindings from the OtherAppObjects. Thus, I figured that I could create a correspondingMyManagedObject property of the OtherAppObjects, and then I'd be able to access the Core Data properties of the MyManagedObject via bindings.
For example, if I wanted to display property "foo" of the OtherAppObjects, and "bar" of the MyManagedObjects in the table view, I could simply bind one table column to the NSArrayController with a model key path of "foo", and bind the second table column to the model key path of "correspondingMyManagedObject.bar".
This works when not dealing with multiple threads, or when passing around a single managed object context. But since that's "strongly discouraged", I wanted to try to do this the right way by passing around a single persistent store coordinator, but creating separate managed object contexts.
However, this breaks down. The problem is that when the table view attempts to access the bar property, it needs to first access the correspondingMyManagedObject property. So, the OtherAppObject dutifully creates a new managed object context and a corresponding fetch request with the appropriate uniqueID and returns the managed object. But in doing so, it releases the managed object context and now the managed object is no longer valid, so the table view can't access the bar property!
I see only two ways around this, and I wanted to verify that there isn't another easier way to do this:
Load the objects from the XML data into my own Core Data store. In essence, create ManagedOtherAppObjects from the OtherAppObjects, with a relationship to the MyManagedObjects, and then accessing via bindings will be peachy. However, this means there's redundant storage of the same data on disk, and I'll have to recreate the ManagedOtherAppObjects every single time I launch the app (because the XML file is updated fairly frequently).
Create custom setters/getters on the OtherAppObject class. So, for example, I'd create -(NSValue *)bar and -(void)setBar:(NSValue *)newValue methods in OtherAppObject. Then, instead of binding the table view column to the key value path "correspondingMyManagedObject.bar" of OtherAppObjects, I'd just bind it to the key path "bar" of OtherAppObjects. These methods would be able to fetch the corresponding MyManagedObject and retrieve or set the value within the managed object context, and then return the correct value.
This second method isn't particularly appealing because I'd have to create two custom methods for every single property of MyManagedObject (and for properties of other managed objects for which MyManagedObject has a relationship).
I suppose I could create the generalized methods -(NSValue *)retrieveCoreDataPropertyUsingKeyPath:(NSString *)keyPath and -(void)setCoreDataProperty:(NSValue *)newValue usingKeyPath:(NSString *)keyPath, but I'd still have to create shell setters/getters for each individual property.
[UPDATE: Hmm, maybe I could just override valueForKeyPath: and setValue:forKeyPath:, and then everything would work OK?]
Is this correct, or am I missing something?
One variation on option #1 that could be worth a try would be to set things up so that you have a single persistent store coordinator that splits the objects between two separate persistent stores. You would keep MyManagedObjects (MMO) the same, being stored separately on disk, but then the OtherAppObjects (OAO) could either be backed by some temporary store on disk (e.g. in ~/Library/Caches or something) or just by an in-memory store.
Upon launch, you would create your PSC and add the store containing the MMOs. You would then add a second store to the PSC (using -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:]), read in the XML file and create all the OAOs, and associate those objects with that store using -[NSManagedObjectContext assignObject:toPersistentStore:].
Core Data doesn't allow directly modeling relationships between objects in different stores, but you could still do the lookup via unique ID like you're doing now to associate a MMO with an OAO. The difference would be that the OAO could simply use its own managed object context to fetch the MMO, so you would be sure that the MMO would stick around at least as long as the OAO.
Then, when you quit the app, you'd either delete the temporary store in ~/Library/Caches, or if using an in-memory store, just let it disappear into the ether, leaving the other store with the MMOs intact.

How to refresh a relational property of a LINQ class?

I have two instances of a program that manipulate the same Northwind database.
When I add some records to the database from one of the instances (for example, adding some orders to the Orders table with the customer foreign key John), I can query these new records from the other instance of the program properly. The problem begins when I want to access these new records using John.Orders. In this situation, the second instance of the program does not see the newly added records. What should I do?
The problem you are having is probably related to how long you keep the LINQ to SQL DataContext instance alive. It should typically be destroyed after each unit of work you do with it (since it follows the 'unit of work' design pattern), which usually means after each use case / business transaction.
You are probably keeping the DataContext alive for the entire lifetime of the application. The DataContext class is not suited for this, because it caches all objects it has once retrieved, meaning that your data will get stale.
Create a new DataContext for every operation, or every time the user opens a new form / screen.
