Entity Framework 4.3 data migrations: non-existing database - asp.net-mvc-3

I want to handle the following situation:
I have an existing model (e.g. version 1.0);
I've added several data migrations and run them one by one, sequentially;
for some reason the existing database has been dropped.
I created the database from scratch using CreateDatabaseIfNotExists&lt;MyContext&gt;, which produces the latest schema with an already-empty __MigrationHistory table.
On the next execution of the website, a data migrations error occurs.
The one way I found to handle this is to manually fill the __MigrationHistory table with the metadata of all the data migrations, which does not look very appealing.
Is there any other way to handle this situation (schema comparison, for example)?
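One conventional way to avoid re-creating __MigrationHistory rows by hand is to let migrations build the database themselves: the MigrateDatabaseToLatestVersion initializer (available since EF 4.3) replays all pending migrations against a missing or outdated database and records each one in __MigrationHistory as it goes. A minimal sketch, assuming the MyContext class from the question and the Configuration class that Enable-Migrations scaffolds:

```csharp
using System.Data.Entity;

// In Application_Start (or anywhere before first context use): replace
// CreateDatabaseIfNotExists with the migrations-aware initializer, so a
// dropped database is rebuilt by replaying every migration, leaving
// __MigrationHistory correctly populated.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<MyContext, Configuration>());
```

With this initializer in place, a freshly created database and an existing one both end up at the same migration level without any manual metadata insertion.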

Related

Populating tables in production with Laravel

In Laravel we have the concept of seeding data, used in conjunction with model factories to populate data for the testing environment.
How should we proceed (where to put the code) when we need to populate data for production? For example, I might have a permissions table, and I need to add some default permissions along with the schema creation. After a while I might need to add a new permission to my app. Should these data insertions stay together with the migrations?
What mechanism should we use for inserting data? Models or data arrays? My problem with data arrays is that none of the business logic from the models, such as casts or relationships, is applied.
I know there are two discussions about this subject, but for me the solutions do not cover all the problems:
Laravel : Migrations & Seeding for production data
Laravel DB Seeds - Test Data v Sample Data
How should we proceed (where to put the code) when we need to populate data for production?
Our team makes a brand new migration for inserting production seeds (separate from the migration that creates the table). That way, if you need to add more seeds to your production data in the future, you can simply make a new standalone migration.
For example, your first migration could be 2016_03_05_213904_create_permissions_table.php, followed by your production seeds: 2016_03_05_214014_seed_permissions_table.php.
You could put this data in the same migration as your table creation, but in my opinion, the migration becomes less readable and arguably violates SRP. If you needed to add more seeds in the future, you would have two different "standards" (one group of seeds in your original migration, and another in a separate migration).
To answer your second question:
What mechanism should we use for inserting data?
I would always use your model's create() method for inserting production seeds. This ensures that any event listeners that are listening for your model's creation event properly fire. As you said, there could be extra code that needs to fire when your model is created.
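Putting the two recommendations together, a standalone production-seed migration might look like the following sketch (file name taken from the answer; the Permission model and permission names are illustrative):

```php
<?php
// database/migrations/2016_03_05_214014_seed_permissions_table.php

use Illuminate\Database\Migrations\Migration;
use App\Permission; // hypothetical model

class SeedPermissionsTable extends Migration
{
    public function up()
    {
        // create() fires the model's creating/created events and applies
        // casts and mutators, unlike a raw DB::table()->insert().
        foreach (['view-posts', 'edit-posts'] as $name) {
            Permission::create(['name' => $name]);
        }
    }

    public function down()
    {
        Permission::whereIn('name', ['view-posts', 'edit-posts'])->delete();
    }
}
```

When a new permission is needed later, a new dated migration with the same shape keeps the production seeds ordered and reversible alongside the schema history.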

Don't Leave Me Table

I am trying to create a rake task that will roll back the database but keep one table. I would guess the easiest way to do that would be to store that table (maybe in seeds.rb) and then re-insert it. My ORM is ActiveRecord and my database is PostgreSQL.
If you only need to do the rollback in your development environment, you could do the rollback, edit the migration file to contain only the one table you want to keep, and then re-migrate. (Don't forget you may need to roll back both the dev and test environments.)
If you're in a team that has already performed this migration, you're probably better off not rolling back. Instead you could create a new migration that undoes all changes except the one table's.
Do you mean drop all tables except one?
You can list Postgres tables via ActiveRecord::Base.connection.tables.
Different ways to list tables are described here.
Then you could use drop_table (note the cascade option).
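The "drop everything except one table" idea boils down to a simple set difference over the table list. A minimal sketch in plain Ruby (in a real rake task, all_tables would come from ActiveRecord::Base.connection.tables and each statement would be run through the connection; the names here are illustrative):

```ruby
# Build DROP TABLE statements for every table except the ones to keep.
# CASCADE removes dependent objects such as foreign-key constraints.
def drop_statements(all_tables, keep:)
  (all_tables - keep).map { |t| %(DROP TABLE IF EXISTS "#{t}" CASCADE;) }
end
```

For example, `drop_statements(%w[users posts schema_migrations], keep: %w[schema_migrations])` yields the two DROP statements for users and posts while leaving schema_migrations (and the kept data table) untouched.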

Rewrite PK and related FK based on an oracle sequence

I want to migrate a subset of customer data from one shared database environment to another shared database environment. I use hibernate and have quite a few ID and FK_ID columns which are auto generated from an oracle sequence.
I have a Liquibase changelog, exported from Jailer, which contains the customer-specific data.
I want to be able to rewrite all of the sequence ID columns so that they don't clash with what's already in the target database.
I would like to avoid building something that my company has to manage, and would prefer to upstream this to liquibase.
Is anyone aware of anything within Liquibase that might be a good place to start?
I would like to either do this on the Liquibase XML before passing it to the 'update' command, or as part of the update command itself; ideally the latter.
I am aware that I would need to make Liquibase aware of which columns are PK sequence columns and which are the related FK columns. The database structure has this all well defined, so I should be able to read it into the update process.
Alternatively, I had thought I could use the extraction model CSV from Jailer.
Jailer - http://jailer.sourceforge.net/
I would suggest that for one-time data migrations like this, Liquibase is not the best tool. It is really better for schema management rather than data management. I think that an ETL tool such as Pentaho would be a better solution.
I actually managed to figure this out myself with Liquibase's command-line 'update' command, using a custom change exec listener.
1) I pushed an MR to Liquibase to allow registration of a change exec listener.
2) I implemented my own change exec listener that intercepts each insert statement and rewrites each FK and PK field to one not yet allocated in the target database. I achieve this using an Oracle sequence. To avoid going back to the database for each new sequence value, I implemented my own version of Hibernate's sequence caching.
https://github.com/liquibase/liquibase/pull/505
https://github.com/pellcorp/liquibase-extensions
This turned out to be quite a generic solution, and in concert with some fixes upstreamed to Jailer to improve its Liquibase export support, it's a very viable and reusable solution.
Basic workflow is:
1) Export a subset of data from the source DB using Jailer to Liquibase XML.
2) Run the Liquibase update command, with the custom change exec listener, against the target.
3) TODO: Run the Jailer export on the target DB and compare with the original source data.
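The PK/FK rewrite in step 2 comes down to maintaining a mapping from source IDs to freshly allocated target IDs: the first time a source PK is seen it gets a new value from the sequence, and every later FK reference to that source ID reuses the same target value, preserving referential integrity. A minimal sketch of that remapping logic (class and method names are hypothetical; a plain counter stands in for the Oracle sequence, with no Liquibase dependency):

```java
import java.util.HashMap;
import java.util.Map;

// Remaps source-database IDs onto a fresh, non-clashing ID range.
class IdRemapper {
    private final Map<Long, Long> mapping = new HashMap<>();
    private long next; // stand-in for SELECT seq.NEXTVAL FROM dual

    IdRemapper(long firstFreeId) {
        this.next = firstFreeId;
    }

    // First sight of a source ID allocates a new target ID; repeat
    // sightings (FK references) reuse the already-allocated value.
    long remap(long sourceId) {
        return mapping.computeIfAbsent(sourceId, k -> next++);
    }
}
```

In the real listener, the starting value and subsequent blocks of IDs would come from the target database's sequence, cached in blocks much like Hibernate's sequence-style generators do.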

Update H2Database schema with ORMLite

I am using H2Database with ORMLite. We have 60 tables, all created with ORMLite's "create if not exists". Now we are going to ship a major release, and the requirement is to update the old version's database. I need to know how to do this with ORMLite, as in the new version some tables will be new and some are existing old tables with modifications; e.g. we have a job table in the previous version's DB, and in this release we added 2 more columns and changed the datatype of one column. Any suggestions? I have seen some other posts regarding ORMLite for Android SQLite. How can that approach be used for other DBs? E.g. like this post:
ORMLite update of the database
But I need to know how to do this with ORMLite, as in the new version some tables will be new and some are existing old tables with modifications; e.g. we have a job table in the previous version's DB, and in this release we added 2 more columns and changed the datatype of one column.
I'm not sure there is any easy answer here. ORMLite doesn't directly provide any magic capabilities to make the migration of data any easier. Here are some thoughts however:
You will need to use some sort of SQL logic to determine whether your application has the "old" or "new" schema installed. You could use raw SQL to look for the existence of particular tables or columns. Going forward, it might be a good idea to store a meta table with the database version, which Android gets for free.
You can create new and old versions of each of your entities (OldAccount versus Account) and map them both to the same table with @DatabaseTable(tableName = "accounts"). Then you can read the old entities using oldAccountDao.iterator(), convert them to new entities and (as long as you aren't mucking with the primary key) update them using the new accountDao.update(...).
You can certainly come up with a series of SQL statements that will need to be performed in the proper order to change the schema. Then call dao.executeRaw(...) with them in order.
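For the job table from the question, that series of statements might look like the following sketch in H2 SQL (the column names and types are hypothetical; each line would be passed to dao.executeRaw(...) in order):

```sql
-- Two new columns added in this release (illustrative names):
ALTER TABLE job ADD COLUMN priority INT DEFAULT 0;
ALTER TABLE job ADD COLUMN started_at TIMESTAMP;
-- Datatype change for an existing column (H2 syntax):
ALTER TABLE job ALTER COLUMN retries BIGINT;
```

Running these only when the version check above reports the old schema keeps the upgrade idempotent.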
Obviously the new entities will just be created.
You might want to consider dumping a backup file of all tables somewhere before the conversion process and telling the user about it in case there is some failure so your users could revert and run the old version of your application.
Hopefully something here is helpful.

EF 5 - Code first migrations optimization

I use Entity Framework 5 code first with migrations enabled. I made many changes to my model classes, and now I have too many migration classes, because after every change I updated the database.
Now I want to merge all my updates into one "initial class" (or two), so that I could run the update-database command only once if I have to create my database again.
Is that possible without code (too heavy), I mean with a command, for instance?
Thanks.
The solution is based on whether you want to keep existing data in databases (if you have production databases this is definitely a must) or you can simply drop your database.
I. Current database can be dropped
First you need to delete all migration steps and delete your current database, then run the command
add-migration Initial
This way you will have only one migration step instead of a lot.
II. Data must be kept
First create a backup of the current database (the one set as the default database in your solution), then drop the database, so that when you run the add-migration command Entity Framework will think that no migrations have been applied yet.
After this do the steps as described in the first part and then you will have only one migration step named Initial. After this run the
update-database
command, which will create a database matching your current model, however with only one row in the __MigrationHistory table. Save this row from the __MigrationHistory table.
Now you can restore the database you've just backed up, delete all rows in its __MigrationHistory table, and insert the one row you saved before. After this, Entity Framework will correctly think that the database and your model are up to date and that this was achieved by running the Initial migration step, and this way you can keep all your data.
Of course, to do this for multiple databases you only need to perform these steps once; for the other databases you just delete the current rows in the __MigrationHistory table and insert the new row for the Initial migration step.
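The per-database step then reduces to two SQL statements. A sketch (the column list is EF5's __MigrationHistory shape; the MigrationId and ProductVersion values are illustrative, and the Model value must be the exact varbinary saved from the freshly migrated database):

```sql
-- Run against each existing database after the Initial migration exists:
DELETE FROM __MigrationHistory;
INSERT INTO __MigrationHistory (MigrationId, Model, ProductVersion)
VALUES ('201603050000000_Initial', @SavedModel, '5.0.0');
```

Because the Model column stores a compressed snapshot of the model, it cannot be typed by hand; it has to be copied from the row that update-database wrote.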