I have a couple of questions about Core Data model migration.
I have a pretty complex data model with a couple of cases of entity inheritance. I made some changes to the data model in a new version and tried to set up migration, but when the store was migrated I lost some of the data belonging to an entity that inherits from another entity.
In my case I have a few entities that all inherit from a "Resource" entity. This Resource entity has an attribute "name". When I migrate the data store, all entities that inherit from "Resource" lose their name.
Is there any way to get model migration working for a data model with inheritance? I have already shipped a beta and I need to make a couple of updates to the model, but I obviously don't want the users to lose all of their data.
Thanks
Try "playing" on your new model with Column properties > Versioning > Renaming identier, entering the previous field name, which I guess is the same. I doubt that will work with inheritance, but that is worth the try... (That not so documented feature, allowing to keep data across renames, saved me several times).
If that doesn't work, I'm afraid you have to do a "manual migration"... with Model mappings and other things... which is imho a bit complex. See Apple documentation on this topic... I then would suggest to just rollback your changes and forget inheritance, quicker & easier, even if it is less "clean". Or just assume your users will loose some data, at beta stage this is not so important... (Or maybe you can just collect old data in memory/plist file before migrating model an then repopulate)
Good luck! CoreData automatic model migration is great, but take care that it will work only with simple modifications...
Oh, just one another trick, add -com.apple.CoreData.SQLDebug 1 to your app launch arguments, and you will get all sql requests generated by CoreData... That might help you to understand the migration process. (and some other things...)
Related
I've made a small change to my database (just added a new entity to my model) and created a lightweight mapping model to handle the migration.
The migration still seems to be quite slow; looking at the migration log, it seems all SQLite tables are created again and all data is migrated.
So, this is how Core Data works? I can't have a faster migration?
P.S. I have a complex model, with 30 entities and many relationships. They do not inherit from the same parent entity. Maybe Core Data is not designed to handle such complexity?
Migration is a relatively rare event. It can take a while, especially for large, complex models with lots of data.
I am not entirely sure that new tables are created, but I think that this is indeed happening. It is how I would implement the migration if Core Data did not do it for me.
Here are some suggestions for improvement:
Make sure the migration occurs in the background. Inform your user and keep the app responsive as much as possible.
Perhaps something went wrong when you "created a lightweight mapping model". If you are making no more than a certain subset of changes (see the docs), lightweight migration does not require you to create a mapping model at all.
We are developing an application based on DDD principles. We have run into a couple of problems so far that we can't resolve ourselves, nor can we find answers to on the Internet.
Our application is intended to be a cloud application for multiple companies.
One of the requirements is that there are no physical deletions from the database; we delete only passively, by setting an entity's Active property to false. That takes care of select, insert and delete operations, but we don't know how to handle updates.
An update changes property values, but it also means the past values are lost, and there are many reasons we don't want that; one of the primary reasons is accounting.
If we implemented every update as "archive the old values, then create the new ones", we would end up with a great number of duplicate values. For example, a Company has Branches, and Company is the aggregate root for Branches. If I change the company's phone number, I would have to archive the old company and all of its branches and create a completely new company with branches, just for one property. That may seem fine at first, but over time all those copies can clog up the database. A phone number is perhaps a trivial property, but changing the Address (if the street name changed while the company stayed in the same physical location) is a far more serious problem.
Currently we are using ASP.NET MVC with EF Code First for the repository, but one of the requirements is that we can easily switch to, or add, another technology like WPF or WCF. We currently use AutoMapper to map DTOs to domain entities and vice versa, and the DTOs are the primary source for views, i.e. we have no view models. The application is layered according to DDD principles, and mapping occurs in the service layer.
Another requirement is that we mustn't create an initial entity in the database and then fill in the values; an entire aggregate should be stored as a whole.
Any comments or suggestions are appreciated.
We also welcome changes to the requirements (as this is an internal project, not one for a customer) and to the architecture, but only if absolutely necessary.
Thank you.
Have you ever come across event sourcing? Sounds like it could be of use if you're interested in tracking the complete history of aggregates.
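To make that concrete, here is a minimal sketch of the idea (all names here are hypothetical, not a library API): every change to an aggregate is captured as an immutable event, so history is preserved without archiving whole object graphs.

    using System;
    using System.Collections.Generic;

    // One event type per business change; events are never updated or deleted.
    public abstract class DomainEvent
    {
        public DateTime OccurredAt { get; } = DateTime.UtcNow;
    }

    public class CompanyPhoneChanged : DomainEvent
    {
        public string NewPhone { get; set; }
    }

    // The aggregate records events instead of overwriting history.
    public class Company
    {
        private readonly List<DomainEvent> _uncommittedEvents = new List<DomainEvent>();

        public string Phone { get; private set; }

        public void ChangePhone(string newPhone)
        {
            var e = new CompanyPhoneChanged { NewPhone = newPhone };
            Phone = e.NewPhone;          // apply to current state
            _uncommittedEvents.Add(e);   // persist the event, not a snapshot
        }

        // Current state can always be rebuilt by replaying stored events.
        public IEnumerable<DomainEvent> UncommittedEvents
        {
            get { return _uncommittedEvents; }
        }
    }

Note how changing the phone number stores one small event rather than an archived copy of the company and all of its branches, which sidesteps the duplication you describe.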
To be honest, I would create another table that acts as a change log, inserting the old record (and deleted records, etc.) into it before updating the live data. Yes, you are creating a lot of records, but you are keeping this data away from the live records and keeping the live tables as lean as possible.
Also, when it comes to clean-up and backup, you have your live data and your changed/deleted data separately, and you can routinely back up and trim the old changed/deleted data to reduce its size, depending on how long you have agreed to keep that data live with the supplier or business you are working with.
I think this would be the best way to go, as your core functionality will be working on a leaner dataset, and I'm assuming your users won't want to check revisions and deletions of records all the time. By separating the data, you access it only when it is needed, instead of all the time because everything is intermingled.
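Since you are on EF, one convenient place to hang this is a SaveChanges override. A rough sketch (ChangeLogEntry and the flattened-values format are made up for illustration):

    using System;
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Linq;

    public class ChangeLogEntry
    {
        public int Id { get; set; }
        public string EntityName { get; set; }
        public string Operation { get; set; }   // "Modified" or "Deleted"
        public string OldValues { get; set; }   // flattened original values
        public DateTime ChangedAt { get; set; }
    }

    public class AppContext : DbContext
    {
        public DbSet<ChangeLogEntry> ChangeLog { get; set; }

        public override int SaveChanges()
        {
            // snapshot the original values of anything about to change
            var entries = ChangeTracker.Entries()
                .Where(e => e.State == EntityState.Modified
                         || e.State == EntityState.Deleted)
                .ToList();

            foreach (var entry in entries)
            {
                ChangeLog.Add(new ChangeLogEntry
                {
                    EntityName = entry.Entity.GetType().Name,
                    Operation  = entry.State.ToString(),
                    OldValues  = Flatten(entry.OriginalValues),
                    ChangedAt  = DateTime.UtcNow
                });
            }
            return base.SaveChanges();
        }

        private static string Flatten(DbPropertyValues values)
        {
            return string.Join(";",
                values.PropertyNames.Select(p => p + "=" + values[p]));
        }
    }

Swap the flattening for JSON or XML serialization if you ever need to query the history rather than just inspect it.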
I have predefined tables in a database, based on which I have to develop a web application.
Should I base my model classes on the structure of the data in the tables?
The problem is that the tables are very poorly designed and there is a lot of redundant data in them (which I cannot change!).
E.g., two tables share three columns:
Table: Student_details
Student_id, Name, Age, Class, School

Table: Student_address
Student_id, Name, Age, Street1, Street2, City
I think you should make your models in a way that is best suited to how they will be used. Don't worry about how or where the data is stored; otherwise, why go through the trouble of layering your code at all? Why not just do the direct DB query right in your view? So if you are going to create an abstraction of your data, a "model", make one that is designed around how it will be used, not around how it is (or will be) persisted.
This seems like a risky project: presumably there's another application somewhere that populates these tables. As the data model is not very sound from a relational point of view, I'm guessing there's a bunch of business/data logic glued into that app, for instance, putting the student's age into the Student_address table.
I'd support jsobo in recommending that you build your business logic independently of the underlying persistence mechanism, and that you try to keep your models as domain-focused as possible, without too much emphasis on how the database happens to be structured.
You should, however, plan on spending a certain amount of time translating your domain models into their respective data representations and dealing with whatever quirks the data model imposes. I'd strongly recommend containing all of this in a separate translation layer; don't litter it throughout the rest of the application.
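For example, the two redundant Student tables from the question could be folded into one domain model inside that translation layer. This is only a sketch; the row classes and repository are hypothetical stand-ins for whatever your data access generates:

    using System.Linq;

    // Rows as they exist in the legacy tables (redundant columns included).
    public class StudentDetailsRow
    {
        public int Student_id { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public string Class { get; set; }
        public string School { get; set; }
    }

    public class StudentAddressRow
    {
        public int Student_id { get; set; }
        public string Name { get; set; }      // duplicated
        public int Age { get; set; }          // duplicated
        public string Street1 { get; set; }
        public string Street2 { get; set; }
        public string City { get; set; }
    }

    // The domain model: shaped by use, not by the tables.
    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public string School { get; set; }
        public string City { get; set; }
    }

    // The translation layer: the only place that knows about the quirks.
    public class StudentRepository
    {
        private readonly IQueryable<StudentDetailsRow> _details;
        private readonly IQueryable<StudentAddressRow> _addresses;

        public StudentRepository(IQueryable<StudentDetailsRow> details,
                                 IQueryable<StudentAddressRow> addresses)
        {
            _details = details;
            _addresses = addresses;
        }

        public Student Find(int id)
        {
            var d = _details.Single(x => x.Student_id == id);
            var a = _addresses.Single(x => x.Student_id == id);
            return new Student
            {
                Id = id,
                Name = d.Name,   // pick one source for the duplicated columns
                Age = d.Age,
                School = d.School,
                City = a.City
            };
        }
    }

On save, the same repository would write Name and Age back to both tables, so the legacy redundancy stays hidden from the rest of the application.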
In the early stages of development the database is subject to continuous change. I'm toying around with LINQ to SQL, and in most cases the entity model is just a 1:1 representation of the DB.
How can I keep the model up to date with the DB changes?
Thanks.
I noticed that there is an "update model from database" command available if you right-click the Entity Framework design surface. I couldn't find such a thing for LINQ to SQL, so you might have to maintain it by hand.
OTOH, it's just XML, so you could "just write some code".
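Something along these lines, for instance. This is only a sketch (the file and column names are made up; the XML namespace is, as far as I recall, the one Visual Studio writes into .dbml files):

    using System.Xml.Linq;

    class DbmlPatcher
    {
        static void Main()
        {
            XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";
            var dbml = XDocument.Load("Northwind.dbml");

            // e.g. propagate a column rename from the database into the model
            foreach (var column in dbml.Descendants(ns + "Column"))
            {
                if ((string)column.Attribute("Name") == "OldColumnName")
                    column.SetAttributeValue("Name", "NewColumnName");
            }

            dbml.Save("Northwind.dbml");
        }
    }

You would still need to regenerate the designer code afterwards, e.g. by reopening the .dbml in the designer or running SqlMetal against it.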
The other thing to add is that I prefer the fact that in EF, I don't have to keep up to date with the physical database. I'm defining the entities that developers will use to access the data, and separately I'm defining the mapping between those entities and the logical database structure.
They don't need to be the same. If I want to split a table into two, or combine two entities into one table, I can do this, without requiring developers to rewrite their code.
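To illustrate with EF Code First's fluent API (a hypothetical sketch; the same kind of mapping is done in the EDMX designer if you use Database First): one Customer entity that developers program against, physically stored across two tables.

    using System.Data.Entity;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Phone { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Entity splitting: one entity, two physical tables.
            // Developers never see the split; they just use Customer.
            modelBuilder.Entity<Customer>()
                .Map(m =>
                {
                    m.Properties(c => new { c.Id, c.Name });
                    m.ToTable("Customers");
                })
                .Map(m =>
                {
                    m.Properties(c => new { c.Id, c.Phone });
                    m.ToTable("CustomerContacts");
                });
        }
    }

If I later merge the two tables back into one, only this mapping changes; the Customer class and the code that uses it stay the same.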
This question is addressed to a degree in this question on LINQ to SQL .dbml best practices, but I am not sure how to add to that question.
One of our applications uses LINQ to SQL and we currently have one .dbml file for the entire database, which is becoming difficult to manage. We are looking at refactoring it into separate files that are more module/functionality specific, but one problem is that many of the high-level classes would have to be duplicated in several .dbml files, since associations can't span .dbml files (as far as I know), along with the additional partial class code.
Has anyone grappled with this problem and what recommendations would you make?
Take advantage of the namespace settings. You can get to them in the Properties window by clicking the white space of the O/R designer.
This allows me to have a Users table and a User class for one set of business rules, and a second Users table (over the same data store) with its own User class for another set of business rules.
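Roughly, the result looks like this (all names hypothetical; each class below stands in for what the designer generates once you give each .dbml its own namespace):

    // From Billing.dbml with its namespace property set to MyApp.Billing
    namespace MyApp.Billing
    {
        public partial class User
        {
            public string Name { get; set; }
            public decimal Balance { get; set; }
        }

        // billing-specific rules live in a partial class alongside it
        public partial class User
        {
            public bool IsDelinquent { get { return Balance > 0m; } }
        }
    }

    // From Support.dbml with its namespace property set to MyApp.Support,
    // mapped to the very same Users table
    namespace MyApp.Support
    {
        public partial class User
        {
            public string Name { get; set; }
            public int OpenTickets { get; set; }
        }
    }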
Or break up the library, which should also have the effect of changing the namespaces, depending on your company's naming conventions. I've never worked on an enterprise app where I needed access to every single table.
Past a certain size it probably becomes easier to work with the XML instead of the .dbml designer.
I have written a tool too! Mine is for scripting changes to .dbml files using C#, so you can rerun them and not lose your changes. See my blog http://www.adverseconditionals.com for more details.
The approach that we've used is to keep two .dbml files. One of them holds the stored procs, and all production DB access is done through it. The other is in a unit test folder; it holds the tables and their relationships and is used for DB data manipulation and querying in unit tests.
I have written a utility to address exactly that problem; I needed a quick app that lets you select only the database objects you need. In my case I often needed a complex view, but no tables.
http://www.codeplex.com/SqlMetalInclude/