Uniqueness validation performance

When performing uniqueness validation in Core Data the usual way (via the NSManagedObject validate… methods), complexity is O(n²), because every entity must compare itself against every other entity of its type.
Is there a straightforward way to get linear performance for Core Data uniqueness validations? Unfortunately, there doesn't seem to be a class-level or context-level validation.

There is no default implementation for validation because it very much depends on your application and business logic.
If you are importing data, it is best to gather all of the unique IDs and then perform a single fetch to determine existence.
If you are creating a single new record, then I recommend doing the one-off, expensive fetch to determine uniqueness.
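Framework specifics aside, the gather-then-single-fetch idea is generic. As a rough illustration only (not Core Data; table and column names are hypothetical), here is the pattern in plain JDBC:

```java
import java.sql.*;
import java.util.*;

public class BatchExistenceCheck {
    /** Returns the subset of candidate IDs that already exist, using one query. */
    static Set<String> findExisting(Connection conn, Collection<String> ids) throws SQLException {
        if (ids.isEmpty()) return Collections.emptySet();
        // One "IN (?, ?, ...)" query instead of one fetch per record.
        String placeholders = String.join(", ", Collections.nCopies(ids.size(), "?"));
        String sql = "SELECT message_id FROM messages WHERE message_id IN (" + placeholders + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int i = 1;
            for (String id : ids) ps.setString(i++, id);
            Set<String> existing = new HashSet<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) existing.add(rs.getString(1));
            }
            return existing;
        }
    }
}
```

Anything not in the returned set is safe to insert as new; the uniqueness check then costs one round trip for the whole batch instead of one per record.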

Related

What is the best way of updating data in Spring Boot?

For a PUT request, do I always have to check the old data and update only the changed fields in order to update the existing record? Is checking each field for changes the right way?
I do not know of any project which takes the effort to only update fields that were actually changed.
Usually you just overwrite all fields in the row with the new values, as this is the easiest and most reliable way of doing it.
Also consider that custom logic deciding what to update needs to be maintained and can have bugs. If you end up with a bug in that logic, you will most likely discover it through data consistency errors, which may be unfixable.
Most likely, when you use Spring Boot, you will also use Spring Data JPA and Hibernate, which take care of mapping your objects to your database. In that case, Hibernate decides on the update strategy anyway.
If you are worried about data consistency and concurrent updates to the same record, I recommend looking into optimistic locking, which is an easy way to handle that issue. It is very easy to set up: just add a version column to your table.
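As a minimal sketch of that suggestion (the Product entity and its fields are made up for illustration), optimistic locking in JPA is just an annotated column:

```java
import javax.persistence.*;

@Entity
public class Product {
    @Id
    @GeneratedValue
    private Long id;

    private String description;
    private int stock;

    // The provider increments this on every update and includes it in the
    // UPDATE's WHERE clause; a conflicting concurrent write then raises an
    // OptimisticLockException instead of silently overwriting data.
    @Version
    private Long version;

    // getters and setters omitted
}
```

With Spring Data JPA, a stale save surfaces as an ObjectOptimisticLockingFailureException, which you can catch to retry the operation or report a conflict to the user.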

Should there be an abstraction layer between database and model?

What I see a lot is that people use an Object-Relational Mapper (ORM) for SQL work in an MVC environment. But if I have really complex queries, I would like to write the whole query myself. What is the best practice for this kind of situation?
1. Having an abstraction layer between the model and the database that holds the complex queries
2. Still using the model, creating specific methods that handle the queries
Or is there any other way that might be better? Please tell me :)
Consider the Single Responsibility Principle. Specifically, the question would be...
"If I put data access logic in my model, what will that mean when I need to change something?"
Any time you need to change business logic, you're also changing the objects which maintain data access logic. So the data access logic also needs to be re-tested. Conversely, any time you need to change data access logic, you're also changing the objects which maintain business logic. So the business logic also needs to be re-tested.
As the logic expands, this becomes more difficult very quickly.
The idea behind the Single Responsibility Principle is to separate the dependencies of different roles which can enact changes to the application. (Keep in mind that "roles" doesn't map 1-to-1 with "people." One person may have multiple roles, but it's still important to separate those roles.) It's a matter of simpler support. If you want to make a change to a database query (say, for performance reasons) which shouldn't have any visible effect on anything else in the system, then there's no reason to be changing objects which contain business logic.
1. Having an abstraction layer between your model and the database that holds the complex queries
Yes, you should have a persistence abstraction that sits between storage (a database or any other data source) and your business logic. Your business logic should not depend on "where", "how", or even "if" the data is actually stored.
Basically, your code should (at least try to) adhere to the SOLID principles, but as @david already pointed out, you are already violating the first one on that list.
Also, you should consider using a service layer, which would be responsible for the interaction between your domain model implementation and your persistence abstraction (it doesn't matter whether you are using custom-written data mappers or some third-party ORM); see the sketch after this section.
In the article (more like an excerpt, actually), the "MVC model" is actually all three concentric circles together. The domain model is not code; it is a term that describes the accumulated knowledge about the project. Most of the domain model gets turned into pieces of code, and those pieces are referred to as domain objects.
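As a rough sketch of that arrangement (all names hypothetical), the business logic depends only on an interface, while the hand-written complex query lives behind it:

```java
import java.util.List;

// Hypothetical domain object.
record Order(long id, String customer) {}

// Persistence abstraction: business logic sees only this contract,
// not SQL, ORM mappings, or even whether a database exists.
interface OrderRepository {
    List<Order> findOverdueOrders();
}

// Service layer: coordinates domain logic with the persistence abstraction.
class BillingService {
    private final OrderRepository orders;

    BillingService(OrderRepository orders) {
        this.orders = orders;
    }

    void sendReminders() {
        for (Order order : orders.findOverdueOrders()) {
            // Pure business logic; how the orders were fetched is invisible here.
            System.out.println("Reminder for order " + order.id());
        }
    }
}
```

A complex query can then be tuned or rewritten inside the OrderRepository implementation without touching, or re-testing, BillingService.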
2. Still using the model, creating specific methods that handle the queries
This would imply an implementation of active record. It is a useful, but mostly misused, pattern for cases where your objects have no (or almost no) business logic. Basically, you should use active record only if all you need are glorified setters and getters that talk to the database.
The active record pattern is a very good choice when you need to quickly prototype something, but it should not be used when you are attempting to implement a fully realized model layer.
ORMs in general do not have any specific drawbacks versus using direct SQL to fetch data from the database. ORMs, as the name implies, help keep your relational model (designed using your SQL DDL or JPA annotations) and your OO model in sync and help them integrate well together.
When using an ORM, you can write your queries in JPQL, which is object-oriented SQL. So instead of writing queries that manipulate tables, you write queries that manipulate objects, and you use the relationships between those objects to get your desired result. Sometimes it is easier to just write native SQL, so the JPA specification also lets you run native SQL; this simply returns a list of generic objects which you can organize any way you like. If you go this route and pick a JPA provider such as Hibernate, these providers offer extended functionality, so if you have complex relationships you can use facilities like the Criteria API to build queries for them.
So, if building a large MVC application, it is generally a good idea to have this abstraction layer in the middle, handling all these relationships; it makes it easier for you, the developer, to focus on the big picture and the business side of the application.
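To make the JPQL-versus-native distinction concrete, a small sketch with plain JPA (the Product entity is hypothetical):

```java
import javax.persistence.*;
import java.util.List;

@Entity
class Product {            // hypothetical mapped entity
    @Id Long id;
    String name;
    int price;
}

class ProductQueries {
    // JPQL: you query the object model, not the tables.
    static List<Product> expensiveProducts(EntityManager em) {
        return em.createQuery(
                "SELECT p FROM Product p WHERE p.price > :limit", Product.class)
            .setParameter("limit", 100)
            .getResultList();
    }

    // Native SQL: full control over the query, but rows come back untyped.
    @SuppressWarnings("unchecked")
    static List<Object[]> expensiveProductRows(EntityManager em) {
        return em.createNativeQuery(
                "SELECT id, name, price FROM product WHERE price > 100")
            .getResultList();
    }
}
```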
Imho, no. I think even the ORM layer often adds more complexity than needed. Databases have very good and sophisticated mechanisms for high-level data manipulation: triggers, views, constraints, complex keys and indexes, (sub)transactions, stored procedures, and procedural extensions of the query language are normally more than enough for everything.
ORMs, because of their structural barriers, cannot give a real interface to this feature set.
And the common practice is that applications use practically nothing of this beyond a NoSQL-style record service, reimplementing in unneeded "middleware" what was the mission of the database.
What I would find really interesting is if the feature set of databases got some OO-like interface (see "SQL abstract types") and the client-side logic moved into the application (see "REST"). That would practically eliminate the need for the middle layer.

Cost of time-stamping as a method of concurrency control with Entity Framework

With optimistic concurrency, the usual way to control concurrency is a timestamp field. However, in my particular case, not all the fields need to be controlled with respect to concurrency.
For example, I have a products table, holding the amount of stock. This table has fields like description, code... etc. For me, it is not a problem that one user modifies these fields, but I have to control if some other user changes the stock.
So if I use a timestamp and one user changes the description and another changes the amount of stock, the second user will get an exception.
However, if I use the stock field for the concurrency check instead, then the first user can update the description and the second can update the stock without problems.
Is it a good solution to use the stock field to control concurrency, or is it better to always use a timestamp field?
And if in the future I need to add a new important field, do I then need two fields to control concurrency, one for the stock and one for the new field? Does that have a high cost in terms of performance?
Consider the definition of optimistic concurrency:
In the field of relational database management systems, optimistic concurrency control (OCC) is a concurrency control method that assumes that multiple transactions can complete without affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. (Wikipedia)
Clearly this definition is abstract and leaves a lot of room for your specific implementation.
Let me give you an example. A few years back I evaluated the same thing with a bunch of colleagues and we realized that in our application, on some of the tables, it was okay for the concurrency to simply be based on the fields the user was updating.
So, in other words, as long as the fields they were updating hadn't changed since they read the row, we'd let them update it, because the rest of the fields really didn't matter, and the row was going to get refreshed on update anyway, so they would get the most recent changes by other users.
So, in short, I would say what you're doing is just fine and there aren't really any hard and fast rules. It really depends on what you need. If you need it to be more flexible, like what you're talking about, then make it more flexible; simple as that.
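In SQL terms, checking only the field you care about amounts to a compare-and-swap style update. A hedged sketch (table and column names hypothetical):

```java
import java.sql.*;

public class StockUpdater {
    /**
     * Succeeds only if nobody changed the stock since we read it;
     * returns false on a concurrency conflict for this field alone.
     * Changes to description etc. by other users do not interfere.
     */
    static boolean updateStock(Connection conn, long productId,
                               int expectedStock, int newStock) throws SQLException {
        String sql = "UPDATE products SET stock = ? WHERE id = ? AND stock = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, newStock);
            ps.setLong(2, productId);
            ps.setInt(3, expectedStock);
            return ps.executeUpdate() == 1;  // 0 rows => someone got there first
        }
    }
}
```

One caveat: unlike a timestamp or version column, comparing the value itself can miss an A-B-A change (stock altered and then restored between your read and write), which is usually acceptable for a counter like stock.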

A Spring DAO that can adapt to changes in the data

For application developers, I suppose the traditional paradigm for writing an application with domain objects that can be persisted to an underlying data store (an SQL database, for argument's sake) is to write the domain objects and then write (or generate) the table structure. There is a tight coupling between what the domain object looks like and the structure of the underlying data store. So if you want to add a piece of information to your domain object, you add the field to your code and then add a column to the appropriate database table. All familiar?
This is all well and good for data stores that have a well defined structure (I'm mainly talking about SQL databases whereby the tables and columns are pre-defined and fixed), but now a number of alternatives to the ubiquitous SQL database exist and these often do not constrain the data in this way. For instance, MongoDB is a NoSQL database whereby you divide data into collections but aside from that there is no structuring of the data. You don't define new columns when you want to add a new field.
Now to the question: given the flexibility of a data store like MongoDB, how would one go about achieving a similar kind of flexibility in the domain objects that represent this data? For instance, if I'm using Spring and creating my own domain objects, when I add a "middleName" field to my data, how can I avoid having to add a "middleName" field to my domain object? I'm looking for some kind of mechanism/approach/framework to dynamically inspect the data and have access to it in my domain object without having to make a code change every time. All ideas welcome.
I think you have a couple of choices:
You can use a dynamic programming language and not have domain objects (Clojure, for example).
If you're set on using Java, the Mongo Java driver returns data in a DBObject, which is essentially a Map, so the default behavior already provides what you want. It's only when you map the DBObject into domain objects, using a library like Morphia (or Spring Data), that you have to worry about domain objects at all.
But if I were using Java, I would stick with the standard convention of domain objects mapped via Morphia, because I think adding a field is a very minor inconvenience compared against the benefits.
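For example, with the classic Mongo Java driver the raw document already behaves like a Map, so a newly added field is simply there without any class change (database and field names made up):

```java
import com.mongodb.*;

public class DynamicFieldDemo {
    public static void main(String[] args) {
        MongoClient mongo = new MongoClient("localhost");
        DB db = mongo.getDB("mydb");                      // hypothetical database
        DBCollection users = db.getCollection("users");

        DBObject doc = users.findOne(new BasicDBObject("lastName", "Smith"));

        // No "middleName" property was ever declared anywhere; if the data
        // has it, it is accessible immediately, map-style.
        if (doc != null && doc.containsField("middleName")) {
            System.out.println("Middle name: " + doc.get("middleName"));
        }
        mongo.close();
    }
}
```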
I think the question is inherently paradoxical:
On one hand, you want to have domain objects, i.e. objects that represent the data (and behaviour) of your problem domain.
On the other hand, you say that you don't want your domain objects to be explicitly influenced by changes to the data.
But when you have objects that represent your problem domain, you want to do just that: represent your problem domain.
So if, for example, a middle name is added, then your representation of the real-life 'User' entity should change to accommodate this change to the real-life user; perhaps not only by adding this piece of data to your object, but also by adding some related behaviour (validation of the middle name, or some functionality related to it).
In essence, what I'm trying to say here is that when you have (classic OO) domain objects, you may need to change your behaviour and functionality along with your data, and since you don't have any automatic way of changing your behaviour, the question of automatically changing your data becomes irrelevant.
If you don't want behaviour associated with your data, then you essentially have DTOs, and @Kevin's answer is what you're looking for.
Honestly, it sounds more like you're looking for some kind of blackbox DTO where, like you describe, fields are added or removed "arbitrarily" depending on the data. This makes me inclined to suggest a simple Map to do the job. You can't really have a domain-driven design if your domain model is constantly changing.

Thread-safe unique entity instance in Core Data

I have a Message entity that has a messageID property. I'd like to ensure that there's only ever one instance of a Message entity with a given messageID. In SQL, I'd just add a unique constraint to the messageID column, but I don't know how to do this with Core Data. I don't believe it can be done in the data model itself, so how do you go about it?
My initial thought is to use a validation method to do a fetch on the NSManagedObject's context for the ID, see if it finds anything but itself, and if so, fail the validation. I suspect this will work - but I'm worried about the performance of something like that. I went through a lot of effort to minimize the fetch requests needed for the entire import routine, and having it validate by performing a fetch for every single new message entity seems a bit excessive. I can get all pre-existing objects I need and identify all the new objects I need to insert into the store using just two fetch requests before I do the actual work of importing and connecting everything together. This would add a fetch to every single update or insert in addition to those two - which would seem to eliminate any performance advantage I had by pre-processing the import data in the first place!
The main reason this is an issue is that the importer can (potentially) run several batches concurrently on several threads and may include some overlapping/duplicate data that needs to ultimately result in just one object in the store and not duplicate entries. Is there a reasonable way to do this and does what I'm asking for make sense for Core Data?
The only way to guarantee uniqueness is to do a fetch. Fortunately you can just do a -countForFetchRequest:error: and check to see if it is zero or not. That is the least expensive way to guarantee uniqueness at this time.
You can probably accomplish this in the validation or run it in the loop that is processing the data. Personally I would do it above the creation of the NSManagedObject so that you do not have the unnecessary allocs when a record already exists.
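For comparison, the same "count instead of fetch" idea expressed outside Core Data, in plain JDBC (schema hypothetical): counting is cheaper because no objects have to be materialized.

```java
import java.sql.*;

public class UniquenessCheck {
    /** True if no row with this messageID exists yet; one cheap COUNT query. */
    static boolean isUnique(Connection conn, String messageId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM messages WHERE message_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, messageId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1) == 0;
            }
        }
    }
}
```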
I don't think there is a way to easily guarantee an attribute is unique without doing a lot of work on your own. You can, of course, use CFUUIDCreate to create a globally unique UUID, which should be unique even in a multithreaded environment. But...
The objectID (type NSManagedObjectID) of every managed object is guaranteed to be unique within the persistent store coordinator. Since you can add arbitrarily many persistent stores to the coordinator, this effectively makes objectIDs globally unique. Why don't you use the objectID as your messageID? You can't, of course, change the objectID once it's assigned (and it won't get assigned until the context containing the inserted object is saved; until then it is a temporary but still unique ID).
So you have an NSManagedObjectContext for each thread, backed by the same persistent store, is that correct? And before you save a context, you'd like to make sure the messageID is unique, that is, that you are not updating an existing row and that it is not in one of the other contexts, correct?
Given that model (correct me if I misunderstand), I think you'd be better served having one object that manages access to the persistent store. That way, all threads would update one context and you can do your validation in there, using Marcus's -countForFetchRequest:error: suggestion. Granted, that places a bottleneck on this operation.
Just to add my 2 cents: I think inconsistencies will occur sooner or later anyway, and the only way to mitigate them seems to be at the application level, with rather complex code.
So in my case I decided to allow duplicate values for what are supposed to be "unique" fields.
I added code, however, that detects these problems later (e.g. when a fetch that should return 1 object returns more than 1) and fixes them when they occur (usually by deleting).
It's a "go ahead, make a mistake, ill fix it later for you"-strategy.
This is not ideal, of course, but a valid way to attack this problen, imho.
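In SQL terms, the later detect-and-fix pass might look like this hedged sketch (schema hypothetical): find the IDs that occur more than once, then delete all but one row per ID.

```java
import java.sql.*;
import java.util.*;

public class DuplicateFixer {
    /** Detects messageIDs stored more than once and deletes all but one row each. */
    static void fixDuplicates(Connection conn) throws SQLException {
        List<String> duplicated = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT message_id FROM messages " +
                 "GROUP BY message_id HAVING COUNT(*) > 1")) {
            while (rs.next()) duplicated.add(rs.getString(1));
        }
        // Keep the row with the lowest primary key, delete the rest.
        String sql = "DELETE FROM messages WHERE message_id = ? AND id <> " +
                     "(SELECT MIN(id) FROM messages m WHERE m.message_id = ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (String id : duplicated) {
                ps.setString(1, id);
                ps.setString(2, id);
                ps.executeUpdate();
            }
        }
    }
}
```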
