How to squash/merge migrations in flyway - oracle

Let's say I have migration scripts from V1_1 to V1_300 - that is quite a large number, and running them all takes a very long time. But from time to time there is a release - can I, from Flyway's point of view, somehow merge all those migrations so that:
All migrations from V1_1 to V1_300 will be in one file (for instance: V2_1)
The amount of time taken by these migrations will drop
Checking for overlaps manually is really time-consuming. Thank you in advance for your answers.

We had the same problem on my project and decided to do a roll-up of the versions already deployed to production. To roll the incremental changes up into one file, I ran the migrations from scratch on a fresh database and then dumped (exported) the whole database back into a single SQL file.
I named the file after the last version of the migrations, in your case V1_300__rollup.sql. Then you can continue adding new versions: V2_1, V2_2, etc., and repeat the roll-up whenever you want.
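For illustration, the rolled-up file is simply the dumped DDL (plus any reference data) in one script. A minimal sketch of what V1_300__rollup.sql might contain, with made-up table names - your export tool produces the real content:

-- V1_300__rollup.sql : consolidated schema as of migration V1_300
-- (the tables below are placeholders; use the dump of your own schema)
CREATE TABLE customers (
  id   NUMBER(19)    NOT NULL PRIMARY KEY,
  name VARCHAR2(200) NOT NULL
);

CREATE TABLE orders (
  id          NUMBER(19) NOT NULL PRIMARY KEY,
  customer_id NUMBER(19) NOT NULL REFERENCES customers(id),
  created_at  TIMESTAMP  DEFAULT SYSTIMESTAMP NOT NULL
);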


Is there an alternative migrations workflow for the very early stages of development? (Laravel)

My reasoning (you can skip below if not interested)
I understand how the general procedure of migrations works and the purpose it serves, and I am very happy to use migrations in the expected way, that is, by adding and removing fields as necessary throughout the application's life.
My query is that at the very beginning of a project I rarely know many of the fields I will need in a given table, and at the very early stages of my projects I want to get the main features and relationships set up, maybe using some dud fields until the client makes up their mind on things.
The bottom line is it hurts my OCD knowing there are extra migration files sitting there that potentially look nothing like v1.0 of the project... once I'm at v0.5 I may decide I'm far enough along to start properly managing migrations.
Those are my thoughts, but here is the question:
What are the cleanest steps to reuse the same migration script again and again in the early stages of a project, while there is no worry about loss of data or rolling back?
Just to add to this, I would not want to refresh all the migrations, because I would really prefer to keep any data I am playing around with, and especially the users table so I can stay logged in to the backend, etc.
Would it be wrong to do this:
Could I just remove the migration's row from the migrations table, then run the migration again?
This feels like it would have side effects, and could possibly break rolling back - is that the case? What part does the migrations table play? Because this approach seems to work in practice.
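For clarity, what I have in mind is something along these lines (the migration name is made up):

-- delete the bookkeeping row so Laravel no longer considers this migration as run
DELETE FROM migrations WHERE migration = '2023_01_01_000000_create_articles_table';
-- then edit the migration file and re-run it with: php artisan migrate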
Final words
Please bear in mind this is just a concept I'm trying to get my head around. If its absolutely bad practice no matter the circumstance I can accept it!
Edit:
Create a new seeder with php artisan make:seeder UserSeeder
Edit the seeder to "seed" the necessary data. Ex:
// Inside the run() method of your UserSeeder class
// (assumes: use Illuminate\Support\Facades\DB; and use Illuminate\Support\Facades\Hash;)
DB::table('users')->insert([
    'name' => 'John Doe',
    'email' => 'johndoe@example.com',
    'password' => Hash::make('password'),
]);
Then call the built-in artisan command php artisan migrate:fresh --seed, which will drop all tables, re-run all of your migrations, and then seed your data with your seeders.
You can read more about this process in the Laravel documentation on database seeding.
Original:
If you plan to support a live application in the long run, chances are you will have very many of these separate migration files that will still hurt your OCD every time you make a new one. It happens to me every time I create a new migration to alter a table, haha.
However, in development I can understand your point if you're working on a private codebase and no other developers are trying to keep up with your changes. If they are, any changes you make to old migration files will be very hard for them to mimic: the migrations table keeps track of which migrations still need to be run (if any), so if someone else tried to migrate after you had changed a previous migration, nothing would happen.
What I would do is either set up a database seeder in Laravel so you can quickly reseed the data in a table after you roll back a migration, or take an SQL dump of your table and reinsert it after you've migrated again.
Another thing you could consider is not worrying about the migrations directory at this point in development, and once you're ready to deploy or push, going through your table alterations and "refactoring" them into your desired migrations. But definitely run thorough testing after this to ensure you're not missing any columns or alterations.

rake db:schema:dump and rake db:schema:load equivalent in Sequel

I tried following the source code and the docs, but after a lost morning I give up: on the one hand it's as if not enough assumptions are made in the SchemaDumper, and on the other there is no SchemaLoader; following the sequel command's source code, it appears to clobber the migration information accumulated to date (since the resulting file contains no "migrations applied so far" information).
The motivation is a failed migration in tests (Sequel thinks the tables are not there, yet they are, so it breaks both when migrating to new versions and when checking for pending migrations) - plus previous experience that running every migration from the start of history to today is generally a bad way to stand up a database.
I've got this so far:
namespace :schema do
  task :dump => :migrations_environment do
    schema = without_sequel_logging { DB.dump_schema_migration }
    File.open("db/schema.rb", 'w') { |f| f.write(schema) }
  end

  task :load => :migrations_environment do
    Sequel::Migrator.run(DB, "db/schema.rb")
  end
end
Normally the load fails, since the Migrator makes a load of assumptions, starting with expecting a folder full of files in a specific order - yet this is apparently exactly what sequel -m and sequel -d should do according to the current source code, and the sequel -m / sequel -d combination is apparently what you should use when you want to do a schema dump and load.
Any ideas?
I think you are misunderstanding the point of Sequel's schema dump and load. Schema dumping should only be used if you have an existing database and would like to produce a migration from it, either for review of what tables/columns exist, or for loading into an empty database. Loading a migration dumped by the schema dumper should only be done on an empty database.
If you already have an existing test database that is not empty (i.e. previous migrations have been applied to it), you shouldn't be using schema dump and load, you should just run the migrator on the test database. In general, it's best to migrate your test database before you migrate your development database, so you can then run your tests and see if the migration breaks anything.
The only time you should have to run all migrations since the beginning is if you have an empty database. If you migrate your test databases similar to the way you migrate your development and production databases, you are generally only applying a single migration at a time.
Note that the schema dumper only handles a small fraction of what is possible, and will only work correctly for the simplest cases. It doesn't handle dumping views, functions, triggers, partial/functional indexes, and a whole range of other things. For all but the simplest cases, use the database's tools to dump and load schema.

EF 5 - Code first migrations optimization

I use Entity Framework 5 Code First with migrations enabled. I made many changes to my model classes, and now I have too many migration classes because I updated the database after every change.
Now I want to merge all my updates into one "initial" migration class (or two), so that I only have to run the update-database command once if I need to create my database again.
Is that possible without writing code (too heavy), for instance with a command?
Thanks.
The solution depends on whether you need to keep existing data in your databases (if you have production databases this is definitely a must) or whether you can simply drop your database.
I. Current database can be dropped
First delete all existing migration steps and your current database, then run the command
add-migration Initial
This way you will have only one migration step instead of a lot.
II. Data must be kept
First create a backup of the current database (the one set as the default database in your solution), then drop it, so that when you run the add-migration command Entity Framework will think that no migrations have been applied yet.
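A minimal sketch of that step, assuming SQL Server and placeholder names/paths (adapt to your environment):

-- back up the current database, then drop it
BACKUP DATABASE MyAppDb TO DISK = 'C:\Backups\MyAppDb_before_rollup.bak';
ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- close open connections
DROP DATABASE MyAppDb;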
After this, follow the steps described in the first part, and you will have only one migration step, named Initial. Then run the
update-database
command, which will create a database matching your current model, but with only one row in the __MigrationHistory table. Save this row from the __MigrationHistory table.
Now you can restore the database you backed up, delete all rows in its __MigrationHistory table, and insert the one row you saved before. After this, Entity Framework will correctly think that the database and your model are up to date and that this was achieved by running only the Initial migration step - and this way you keep all your data.
Of course, to do this for multiple databases you only need to perform these steps once; for the other databases you just delete the current rows in their __MigrationHistory table and insert the new row for the Initial migration step.
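As a rough sketch of that last bit, assuming SQL Server and EF 5's default history table layout (the values are placeholders - use the ones from the row you saved):

-- run against the restored database
DELETE FROM __MigrationHistory;

-- re-insert the single row saved from the freshly created database,
-- so EF believes only the rolled-up Initial migration was ever applied
INSERT INTO __MigrationHistory (MigrationId, Model, ProductVersion)
VALUES ('201301010000000_Initial',  -- placeholder: your saved MigrationId
        0x1F8B0800,                 -- placeholder: your saved Model blob
        '5.0.0.net45');             -- placeholder: your saved ProductVersion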

Snapshot too old error

I am getting a 'snapshot too old' error frequently when my workflow runs for more than 5 hours. My source is Oracle and my target is Teradata. Please help me solve this issue. Thanks in advance.
The best explanation of the ORA-01555 snapshot too old error that I've read is found in this AskTom thread.
Regards.
The snapshot too old error is more or less directly related to the running time of your queries (often a cursor in a FOR loop). So the best solution is to optimize your queries so they run faster.
As a short term solution you can try to increase the size of the UNDO log.
Update:
The UNDO log stores the previous version of a record before it's updated. It is used to rollback transactions and to retrieve older version of a record for consistent data snapshots for long running queries.
You'll probably need to dive into Oracle DB administration if you want to solve it via increasing the UNDO log. Basically you do (as SYSDBA):
ALTER SYSTEM SET UNDO_RETENTION = 21600;
21600 is 6 hours in seconds.
However, Oracle will only keep 6 hours of old data if the UNDO log files are big enough, which depends on the size of the rollback segments and the amount of updates executed on the database.
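To get a feel for whether the undo tablespace is large enough, you can look at the recent undo statistics. A small sketch (V$UNDOSTAT keeps statistics in ten-minute intervals and requires the appropriate privileges):

-- how much undo has been consumed recently, and the longest-running query seen
SELECT begin_time, end_time, undoblks, maxquerylen
FROM   v$undostat
ORDER  BY begin_time DESC;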
So in addition to changing the undo retention time, you should also make sure that few concurrent updates are executed while your job is running. In particular, updates of the data your job is reading should be minimized.
If everything fails, increase the UNDO logs.
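If you do need to grow the undo tablespace, a minimal sketch as SYSDBA (the file path and sizes are placeholders for your environment):

-- enlarge the undo tablespace's datafile so the longer retention can be honoured
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 8G;
-- or let it grow on demand, up to a limit
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' AUTOEXTEND ON NEXT 512M MAXSIZE 16G;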

Database Project Insists on "Rebuilding" Table on Deployment for Dropped Columns

So I have a VS2010 Database Project that I am deploying with a few schema changes. There is one table in particular that VSDBCMD insists on "rebuilding", i.e. rename -> create -> copy -> drop.
The only changes to this table are dropping some columns, which could be handled by, I dunno, simply dropping the columns. Normally I wouldn't mind, except this particular table is called "Attachments" and weighs in at 15 gigs or so. The rebuild takes a long time, locks up the database, fails locally (as I don't have 15+ gigs free), and times out remotely in our testing environment.
Can anyone direct me to the rules VSDBCMD follows for changing the schema when it deploys?
Or perhaps you have experienced similar issues and have a suggestion?
Thanks!
VSDBCMD just 'likes' rebuilding tables too often. Unfortunately I don't have the 'magic VSDBCMD manual' for when it chooses to rebuild a table, but I wouldn't trust the output of VSDBCMD on a production database without checking it manually first anyway.
There's a setting called 'IgnoreColumnOrder' in the 'dbname.sqldeployment' file that might help prevent rebuilding the table (maybe the rebuild is triggered because the column order has changed).
In your case I would just run a manually created script on your DB.
Heck, writing 'alter table Attachments drop column uselessData' would've probably cost you 10% of the time you put into asking this question in the first place :)
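Something along these lines, with placeholder names for the columns you're actually removing:

-- drop the unwanted columns in place instead of letting VSDBCMD rebuild the 15 GB table
ALTER TABLE dbo.Attachments DROP COLUMN UselessData, ObsoleteBlob;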
