I am using Core Data for its storage features. At some point I make external API calls that require me to update the local object graph. My current (dumb) plan is to clear out all instances of old NSManagedObjects (regardless of whether they have been updated) and replace them with their new equivalents -- a trump merge policy of sorts.
I feel like there is a better way to do this. I have unique identifiers from the server, so I should be able to match them to my objects in the store. Is there a way to do this without manually fetching objects from the context by their identifiers and resetting each property? Is there a way to create a completely new context, regenerate the object graph, and hand it to Core Data to merge based on the unique identifiers?
Your strategy of matching based on the server's unique IDs is a good approach. Hopefully you can get your server to deliver only the objects that have changed since the time of your last update (which you will keep track of and provide in the server call).
In order to update the Core Data objects, though, you will have to fetch them, instantiate the NSManagedObjects, make the changes, and save them. You can do all of this in a background thread (a child context with performBlock:), but you will still have to round-trip your objects into memory and back to the store. Doing it in a child context on its own thread will keep your UI snappy, but you will still have to do the processing.
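The merge pass itself is simple. Core Data's API is Objective-C, but the matching logic is platform-neutral; here is a minimal sketch in Java of the find-or-create pass, with hypothetical ServerRecord and LocalObject types standing in for the parsed API response and your managed objects:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical shape of a record parsed from the API response.
class ServerRecord {
    String serverId;
    String name;
}

// Hypothetical stand-in for a locally stored object (an NSManagedObject here).
class LocalObject {
    String serverId;
    String name;
}

class SyncMerger {
    // Index local objects by server ID, then update-or-insert each incoming
    // record instead of wiping the graph and recreating it.
    static void merge(List<LocalObject> local, List<ServerRecord> incoming) {
        Map<String, LocalObject> byId = new HashMap<>();
        for (LocalObject o : local) {
            byId.put(o.serverId, o);
        }
        for (ServerRecord r : incoming) {
            LocalObject existing = byId.get(r.serverId);
            if (existing == null) {
                LocalObject created = new LocalObject(); // new on the server
                created.serverId = r.serverId;
                created.name = r.name;
                local.add(created);
            } else {
                existing.name = r.name; // known object: update in place
            }
        }
        // In Core Data, you would now save the child context so the changes
        // propagate up to the parent.
    }
}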
Another idea: In the last day or so I've been reading about AFIncrementalStore, an NSIncrementalStore implementation which uses AFNetworking to provide Core Data properties on demand, caching locally. I haven't built anything with it yet but it looks pretty slick. It sounds like your project might be a good use of this library. Code is on GitHub: https://github.com/AFNetworking/AFIncrementalStore.
I'm trying to introduce caching into an existing server application because the database is starting to become overloaded.
Like many server applications, we have the concept of a data layer. This data layer has many different methods that return domain model objects. For example, we have an employee data access object with methods like:
findEmployeesForAccount(long accountId)
findEmployeesWorkingInDepartment(long accountId, long departmentId)
findEmployeesBySearch(long accountId, String search)
Each method queries the database and returns a list of Employee domain objects.
Obviously, we want to try to cache as much as possible to limit the number of queries hitting the database, but how would we go about doing that?
I see a couple of possible solutions:
1) We create a cache entry for each method call. E.g. for findEmployeesForAccount we would add an entry with the key account-employees-accountId. For findEmployeesWorkingInDepartment we could add an entry with the key department-employees-accountId-departmentId, and so on. The problem I see with this is that when we add a new employee into the system, we need to ensure that we add it to every list where appropriate, which seems hard to maintain and bug-prone (see the sketch after option 2).
2) We make findEmployeesForAccount a more generic query (with more joins and/or queries, because more information will be required). The other methods then call findEmployeesForAccount and remove entries from the list that don't fit the specified criteria.
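For what it's worth, here is a minimal Java sketch of option 1, showing how the invalidation burden spreads on every write (all names here are hypothetical):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class Employee {
    long id;
    long accountId;
    long departmentId;
}

class CachedEmployeeDao {
    private final ConcurrentMap<String, List<Employee>> cache = new ConcurrentHashMap<>();

    // Read path: one cache entry per method + argument combination.
    List<Employee> findEmployeesForAccount(long accountId) {
        String key = "account-employees-" + accountId;
        return cache.computeIfAbsent(key, k -> queryEmployeesForAccount(accountId));
    }

    // Write path: every derived key that could contain this employee must be
    // cleared, which is exactly the bug-prone maintenance burden of option 1.
    void addEmployee(Employee e) {
        insertIntoDatabase(e);
        cache.remove("account-employees-" + e.accountId);
        cache.remove("department-employees-" + e.accountId + "-" + e.departmentId);
        // Cached search results for this account are the hard part: they are
        // keyed by arbitrary search strings, so they are easy to miss.
    }

    private List<Employee> queryEmployeesForAccount(long accountId) {
        return new ArrayList<>(); // stand-in for the real database query
    }

    private void insertIntoDatabase(Employee e) {
        // stand-in for the real INSERT
    }
}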
I'm new to caching, so I'm wondering what strategies people use to handle situations like this. Any advice and/or resources on this type of stuff would be greatly appreciated.
I've been struggling with the same question myself for a few weeks now... so consider this a half-answer at best. One bit of advice that has been working out well for me is to use the Decorator Pattern to implement the cache layer. For example, here is an article detailing this in C#:
http://stevesmithblog.com/blog/building-a-cachedrepository-via-strategy-pattern/
This allows you to literally "wrap" your existing data access methods without touching them. It also makes it easy to swap the cached version of your DAL for the direct-access version at runtime (which can be useful for unit testing).
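A minimal Java sketch of the same idea (the article is in C#, but the pattern translates directly; this reuses the Employee type from the sketch above, and the key scheme is just illustrative):

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

interface EmployeeDao {
    List<Employee> findEmployeesForAccount(long accountId);
}

// The decorator wraps the real DAO without modifying it; swapping the cached
// and direct-access versions is just a matter of which implementation you
// hand to the rest of the application.
class CachingEmployeeDao implements EmployeeDao {
    private final EmployeeDao inner;
    private final ConcurrentMap<String, List<Employee>> cache = new ConcurrentHashMap<>();

    CachingEmployeeDao(EmployeeDao inner) {
        this.inner = inner;
    }

    @Override
    public List<Employee> findEmployeesForAccount(long accountId) {
        String key = "account-employees-" + accountId;
        return cache.computeIfAbsent(key, k -> inner.findEmployeesForAccount(accountId));
    }

    // The heavy-handed escape hatch: wipe everything when targeted
    // invalidation gets out of hand.
    public void clearAll() {
        cache.clear();
    }
}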
I'm still struggling to manage my cache keys, which seem to spiral out of control when there are numerous parameters involved. Inevitably, something ends up not being properly cleared from the cache and I have to resort to heavy-handed ClearAll() approaches that just wipe everything out. If you find a solution for cache key management, I would be interested, but I hope the decorator-pattern approach is helpful.
Our app was originally built with NHibernate, keeping its batch-processing limitations in mind. Over time, however, it has turned into a data cruncher, and we are seeing significant performance degradation.
The session ends up having to maintain about 1,000 objects or more, and our profiling has revealed that auto flushing and dirty checking are the biggest offenders. We tried turning off auto flush and managing flushing ourselves on Save/Update operations, but that led to disastrous performance for batch saves/updates.
We're now looking at the option of evicting objects we no longer need from the session.
I came across the 2nd-level-cache eviction method (sessionFactory.Evict(typeof(Cat));), which lets us evict by type, but we do not use a 2nd-level cache. Can I still use this method to evict objects from the 1st-level cache?
I also read about a pattern of fetching objects, evicting them from the session, and then reassociating them with a session, if needed, by calling Update() on them. Is this a recommended and accepted pattern? I ask because I also read that NH3 has put up a wall against this. (We can still use it, as we have not upgraded to NH3.)
While we realize that we are not using NHibernate in the best way, we are just looking to improve the current situation somehow. Answers to the above questions and any other suggestions/recommendations are greatly appreciated. Thanks.
Update
After looking at the NH documentation and code, I realize that the first option is probably not possible. I'm still looking for pointers or tips on using Evict(). I was able to drastically reduce the number of objects in a session, but I still do not know whether there is a price to pay when updating or deleting evicted objects. Thanks for your help in advance.
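For reference, the standard way to keep the 1st-level cache small during batch work is to flush and clear the session in chunks. A minimal sketch in Java Hibernate, whose session API NHibernate mirrors (Session.Flush(), Session.Clear(), Session.Evict(obj)), with a stand-in Cat entity whose real mapping is omitted:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import java.util.List;

// Minimal stand-in for the Cat entity from the question; real mapping omitted.
class Cat {
    private boolean processed;
    public void setProcessed(boolean processed) { this.processed = processed; }
}

class BatchUpdater {
    // session.evict(entity) drops a single instance from the 1st-level cache;
    // the sessionFactory-level Evict in the question only touches the 2nd level.
    static void updateAll(SessionFactory factory, List<Long> ids) {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        int count = 0;
        for (Long id : ids) {
            Cat cat = session.get(Cat.class, id);
            cat.setProcessed(true);
            if (++count % 50 == 0) {
                session.flush(); // push pending UPDATEs to the database
                session.clear(); // then empty the 1st-level cache entirely
            }
        }
        tx.commit();
        session.close();
    }
}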
It's hard to say without knowing more about your requirements but maybe you could use IStatelessSession. It doesn't have a 1st level cache to worry about.
Ayende has a good post on using it for bulk operations here.
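A minimal sketch of a stateless-session bulk pass, shown in Java Hibernate since NHibernate's IStatelessSession mirrors it (reusing the Cat stand-in from the sketch above; nothing is cached or dirty-checked, so every change must be written explicitly):

import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

class StatelessBulkUpdate {
    // A stateless session performs no dirty checking and keeps no 1st-level
    // cache, so memory stays flat no matter how many rows you process.
    static void run(SessionFactory factory) {
        StatelessSession session = factory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        ScrollableResults rows = session.createQuery("from Cat")
                                        .scroll(ScrollMode.FORWARD_ONLY);
        while (rows.next()) {
            Cat cat = (Cat) rows.get(0);
            cat.setProcessed(true);
            session.update(cat); // explicit: nothing is saved automatically
        }
        tx.commit();
        session.close();
    }
}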
Why not use more sessions, instead of one large one? That, in conjunction with turning off autoflush, has helped me in the past. Also, you should really think about using HQL for bulk updates if possible.
I know that this is old, but I just came across it while looking for something else, having just solved this myself. I solved it as Trent mentioned, by using more than one session. I created one session to fetch all of the objects I wanted, then closed that session. My case was iterating through the list, operating on each object, and trying to commit on each iteration. So I wrote a foreach over my list, creating and disposing of a new session inside the loop, and reattaching the object from the list to the new session. That took a process that had been taking about 2.5 hours down to 2 minutes 40 seconds!
See this article for the inspiration for how I solved it, although my version is not exactly the same because I have unit-of-work wrappers around NHibernate:
http://weblogs.asp.net/ricardoperes/archive/2013/03/21/attaching-disconnected-entities-in-nhibernate-without-going-to-the-database.aspx
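A minimal Java sketch of the same shape (the original is C#/NHibernate; Hibernate's Session API mirrors it, and Cat reuses the stand-in entity from the earlier sketch):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import java.util.List;

class SessionPerItemProcessor {
    // One short-lived session per item: the 1st-level cache never holds more
    // than one object, so flushing and dirty checking stay cheap.
    static void process(SessionFactory factory, List<Cat> detached) {
        for (Cat cat : detached) {
            Session session = factory.openSession();
            Transaction tx = session.beginTransaction();
            session.update(cat); // reattach the detached instance
            cat.setProcessed(true);
            tx.commit();
            session.close();
        }
    }
}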
I'm trying to find a good way to handle memcache keys for storing, retrieving, and updating data to/from the cache layer in a more civilized way.
I found this pattern, which looks great, but how do I turn it into a functional part of a PHP application?
The Identity Map pattern: http://martinfowler.com/eaaCatalog/identityMap.html
Thanks!
Update: I have been told about the modified memcache (memcache-tag) that apparently does do a lot of this, but I can't install Linux software on my Windows development box...
Well, memcache use IS an identity map pattern: you check your cache, then you hit your database (or whatever else you're using). You can get at information about the source by storing whole objects instead of just values, but you'll take a performance hit for that.
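To make that concrete, here is a minimal Identity Map sketch. Java is used for illustration (the question's context is PHP, but the structure is identical); the Loader stand-in is where the memcache lookup and database fallback would go:

import java.util.HashMap;
import java.util.Map;

// Cache-miss handler: check memcache first, then fall back to the database.
interface Loader<T> {
    T load(String key);
}

// A minimal Identity Map: within one request, each key is loaded at most once
// and every lookup for the same key returns the same instance.
class IdentityMap<T> {
    private final Map<String, T> loaded = new HashMap<>();

    T get(String key, Loader<T> loader) {
        T value = loaded.get(key);
        if (value == null) {
            value = loader.load(key);
            loaded.put(key, value);
        }
        return value;
    }
}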
You effectively cannot ask the cache what it contains as a list. To mass-invalidate, you'll have to keep a list of what you put in and iterate over it, or you'll have to iterate over every possible key that could fit the pattern of concern. The resource you point out, memcache-tag, can simplify this, but it doesn't appear to be maintained in line with the memcache project.
So your options now are iterative deletes or totally flushing everything that is cached. Given that, I'd suggest the real question here is one of design. To give you a useful answer, I have to ask: why do you want to do this?
I have been playing with some LINQ ORM (LINQ directly to SQL) and I have to admit I like its expressive power. For small utility-like apps, it also works quite fast: drop a SQL Server database onto the designer surface and you're set to LINQ away.
For larger apps, however, the DAL never was that big of an issue for me to set up or maintain, and more often than not, once it was in place, most of the programming wasn't happening there anyway...
My honest question (I am an ORM newbie): what is the big advantage of an ORM over writing a decent DAL by hand?
(This seems like a duplicate; I couldn't find it, though.)
UPDATE: OK, it's a duplicate :-) I found it myself eventually:
ORM vs Handcoded Data Access Layer
Strong typing
No need to write the DAL yourself => time savings
No need to write SQL code yourself => less error-prone
I've used Hibernate in the past to dynamically create quite complex queries. The logic to generate the appropriate SQL by hand would have been very time-consuming to implement, compared with the logic to build the appropriate Criteria. Additionally, Hibernate knew how to work with various databases, so I didn't need to put any of that logic in our code. We had to test against different databases of course, and I needed to write an extension to handle "like" queries appropriately, but then it ran against SQL Server, Oracle, and HSQLDB (for testing) with no issues.
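A sketch of the kind of conditional query building this enables, using Hibernate's classic Criteria API (Employee and its properties are hypothetical here):

import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;
import java.util.List;

// Hypothetical mapped entity; mapping omitted.
class Employee {
}

class EmployeeSearch {
    // Build the query piece by piece; Hibernate renders dialect-appropriate
    // SQL, so the same code runs against SQL Server, Oracle, or HSQLDB.
    static List<?> search(Session session, String name, Long departmentId) {
        Criteria criteria = session.createCriteria(Employee.class);
        if (name != null) {
            criteria.add(Restrictions.ilike("name", "%" + name + "%"));
        }
        if (departmentId != null) {
            criteria.add(Restrictions.eq("departmentId", departmentId));
        }
        return criteria.list();
    }
}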
There's also the fact that it's more code you don't have to write, which is always a nice thing :) I can't say I've used LINQ to SQL in anything big, but where I've used it for a "quick and dirty" web-site (very small, rarely updated, little benefit from full layer abstraction) it was lovely.
I used JPA in a project, and at first I was extremely impressed. Gosh it saved me all that time writing SQL! Gradually, however, I became a bit disenchanted.
Difficulty defining tables without surrogate keys. Sometimes we need tables that don't have surrogate keys. Sometimes we want a multicolumn primary key. TopLink had difficulties with that.
Forced data-structure relationships. JPA uses annotations to describe the relationship between a field and the container or referencing class. While this may seem great at first sight, what do you do when you reference the objects differently in the application? Say, for example, you need just specific objects that reference specific records based on some specific criteria (and it needs to be high-performance, with no unnecessary object allocation or record retrieval). The effort to modify Entity classes will almost always exceed the effort that would have existed had you never used JPA in the first place (assuming you are at all successful in getting JPA to do what you want).
Caching. JPA defines the notion of caches for your objects. It must be remembered that the database has its own cache, typically optimized around minimizing disk reads. Now you're caching your data twice (ignoring the uncollected GC heap). How this can be an advantage is beyond me.
Data != Objects. For high-performance applications, the retrieval of data from the DB must be done very efficiently. Forcing object creation is not always a good thing. For example, sometimes you may want arrays of primitives. This is about 30 minutes of work for an experienced programmer working with straight JDBC (see the JDBC sketch after this list).
Performance, debugging.
It is much more difficult to gauge the performance of an application with complex things going on in the (sub-optimal, autogenerated) caching subsystem, further straining project resources and budgets.
Most developers don't really understand the impedance mismatch problem that has always existed when mapping objects to tables. This fact ensures that JPA and friends will probably enjoy considerable (cough cough) success for the foreseeable future.
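To illustrate the straight-JDBC point from the list above, a minimal sketch that reads one column into a primitive array with no object graph (table and column names are hypothetical):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;

class SalaryQuery {
    // Straight JDBC, no entity objects: read one column into a long[].
    static long[] loadSalaries(Connection conn) throws SQLException {
        ArrayList<Long> values = new ArrayList<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT salary FROM employee")) {
            while (rs.next()) {
                values.add(rs.getLong(1));
            }
        }
        long[] result = new long[values.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = values.get(i);
        }
        return result;
    }
}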
Well, for me it is a lot about not having to reinvent/recreate the wheel each time I need to implement a new domain model. It is simply a lot more efficient to use, for instance, NHibernate (my ORM of choice) for creating, using, and maintaining the data access layer.
You don't specify exactly how you build your DAL, but I used to spend quite some time doing the same stuff over and over again. I used to start with the database model and work my way up from there, creating stored procedures etc. Even if I sometimes used little tools to generate parts of the setup, it was a lot of repetitive coding.
Nowadays I start with the domain. I model it in UML, and most of the time I'm able to generate everything from that model, including the database schema. It needs a few tweaks here and there, but with my current setup I get 95% of the data access work done in no time at all. The time I save I can use to fine-tune the parts that need tuning. I seldom need to write any SQL statements.
That's my two cents. :-)
Portability between different db vendors.
My honest question (I am an ORM newbie): what is the big advantage of an ORM over writing a decent DAL by hand?
Not all programmers are willing or even capable of writing "a decent DAL". Those who can't, or are scared by the mere thought of it, find LINQ or any other ORM a blessing.
I personally use LINQ to manipulate collections in the code because of its expressiveness. It offers a very compact and transparent way to perform some common tasks on collections directly in code.
LINQ will stop being useful to you when you want to create very specific and optimized queries by hand. Then you are likely to end up with LINQ queries intermingled with custom stored procedures wired into them. Because of these considerations, I decided against LINQ to SQL in my current project (since I have a decent (imho) DAL layer). But I'm sure LINQ will do just fine for simple sites like maybe your blog (or SO for that matter).
With LINQ/ORM there may also be a performance consideration for high-traffic sites (since each incoming query has to be compiled all over again). Though I have to admit I don't see any performance issues on SO.
You can also consider waiting for Entity Framework v2. It should be more powerful than LINQ (and hopefully not as bad as v1, according to some people).
Transparent persistence - changes get saved (and cascaded) without you having to call Save(). At first glance this seems like a nightmare, but once you get used to working with it rather than against it, your domain code can be freed of persistence concerns almost completely. I don't know of any ORM other than Hibernate / NHibernate that does this, though there might be some...
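A minimal sketch of what this looks like with Hibernate in Java (entity mapping omitted; the point is just that there is no explicit save call):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Hypothetical mapped entity; real mapping omitted.
class Employee {
    private String name;
    public void setName(String name) { this.name = name; }
}

class TransparentPersistenceDemo {
    static void renameEmployee(SessionFactory factory, long id, String newName) {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        Employee e = session.get(Employee.class, id);
        e.setName(newName); // just mutate the managed object...
        tx.commit();        // ...dirty checking issues the UPDATE; no save() call
        session.close();
    }
}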
The best way to answer the question is to understand exactly what libraries like Hibernate are actually accomplishing on your behalf. Most of the time abstractions exist for a reason: often to make certain problems less complex, or, in Hibernate's case, to provide almost a DSL for expressing certain persistence concepts in a simple, terse manner.
One can easily change the fetch strategy for collections by changing an annotation rather than writing up lots of code.
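For instance, with JPA annotations the change is a single word (a hedged sketch; the entities here are hypothetical):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import java.util.List;

@Entity
class Department {
    @Id
    private long id;

    // Swapping FetchType.LAZY for FetchType.EAGER changes the generated SQL
    // (deferred separate select vs. eager load) without touching query code.
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "department")
    private List<Employee> employees;
}

@Entity
class Employee {
    @Id
    private long id;

    @ManyToOne
    private Department department;
}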
Hibernate and LINQ are proven and tested by many; there is little chance you can achieve this quality without lots of work.
Hibernate addresses many features that would take you months and years to code.
Also, while the JPA documentation says that composite keys are supported, it can get very tricky very quickly. You can easily spend hours (days?) trying to get something quite simple working. If JPA really made things simpler, then developers would be freed from thinking too much about these details. It doesn't, and we are left having to understand two levels of abstraction, the ORM (JPA) and JDBC. For my current project I'm using a very simple implementation that uses a package-protected static get "constructor" that takes a ResultSet and returns an Object. This is about 4 lines of code per class, plus one line of code for each field. It's simple, high-performance, and quite effective, and I retain total control. If I need to access objects differently, I can add another method that reads different fields (leaving the others null, for example). I don't require a spec that tells me "how ORMs must (!) be done". If I require caching for that class, I can implement it precisely as required.
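Something like the static ResultSet "constructor" described above might look like this (field and column names are hypothetical):

import java.sql.ResultSet;
import java.sql.SQLException;

class Employee {
    private long id;
    private String name;
    private long departmentId;

    // Package-protected static "constructor": roughly one line per field,
    // and total control over exactly which columns are read.
    static Employee get(ResultSet rs) throws SQLException {
        Employee e = new Employee();
        e.id = rs.getLong("id");
        e.name = rs.getString("name");
        e.departmentId = rs.getLong("department_id");
        return e;
    }
}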
I have used LINQ and found it very useful. It saves a lot of time writing data access code. But for large applications you need more than a DAL; for those, you can easily extend the classes it creates. Believe me, it really improves your productivity.