Don't Leave Me Table - ruby

I am trying to create a rake task that will roll back the database but keep one table. I would guess that the easiest way to do that would be to store that table (maybe in seeds.rb) and then re-insert it. My ORM is ActiveRecord and my database is PostgreSQL.

If you only need to do the rollback in your development environment, you could do the rollback, edit the migration file to contain only the one table you want to keep, and then re-migrate. (Don't forget you may need to roll back both the dev and test environments.)
If you're in a team that already has performed this migration, you're probably better off not rolling back. Instead you could create a new migration that undoes all but the one table's changes.

Do you mean drop all tables except one?
You can list the Postgres tables via ActiveRecord's connection.tables (or by querying pg_tables directly). Then you could drop each unwanted one with drop_table (note the cascade option, since other tables may reference the ones you drop).
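For example, a rough SQL sketch that a rake task could run via ActiveRecord::Base.connection.execute. The kept table name settings is a placeholder, and schema_migrations is also kept so the migration history survives:

-- Drop every table in the public schema except the ones to keep.
-- 'settings' is a placeholder name; adjust to the table you want to preserve.
DO $$
DECLARE
  t text;
BEGIN
  FOR t IN
    SELECT tablename
    FROM   pg_tables
    WHERE  schemaname = 'public'
      AND  tablename NOT IN ('settings', 'schema_migrations')
  LOOP
    EXECUTE format('DROP TABLE IF EXISTS %I CASCADE', t);
  END LOOP;
END $$;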

Related

How to install patches in parallel in Liquibase?

My project has large Oracle SQL scripts. Liquibase locks the schema (the DATABASECHANGELOGLOCK table) while installing a single patch. How do I install multiple patches in parallel, without a queue?
P.S. Oracle itself will also take locks at its own discretion.
Any DDL statement produces a new schema state that is based on the previous state. If the previous state is not valid, you can't apply the next DDL statement (it is impossible to add a constraint to a column that does not exist). To check the previous state, you use preconditions in your changesets.
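For example, a minimal sketch of a precondition in a Liquibase formatted-SQL changeset (the author/id and the Oracle table and column names are made up for illustration):

--liquibase formatted sql
--changeset your.name:add-orders-fk
--preconditions onFail:HALT onError:HALT
--precondition-sql-check expectedResult:1 SELECT COUNT(*) FROM user_tab_columns WHERE table_name = 'ORDERS' AND column_name = 'CUSTOMER_ID'
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id) REFERENCES customers (id);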
So, in general it is impossible to parallelise the schema update, because the schema changes must be applied in order and that order can't be changed.
The lock on DATABASECHANGELOGLOCK exists to make sure that two schema update processes cannot run at the same time. That is a reasonable restriction, so don't try to get around it.
If the update process takes too much time, just make sure that you:
do not use Liquibase to change database state (adding data to tables),
do not use Liquibase to update code objects (functions, procedures, etc.) in the database,
do not use Liquibase to migrate large amounts of data.

Cassandra Best Practice on edits: delete & re-insert vs. update?

I am new to Cassandra. I am looking at many examples online. Here is one from JHipster Cassandra examples on GitHub:
https://gist.github.com/jdubois/c3d3bedb869466731316
The repository save(user) method does a read (to check for existence), then a delete and re-insert of the existing user across all the denormalized tables whenever the user data changes.
Is this best practice?
Is this only because of how the data model for this sample is designed?
Is this sample's design a result of twisting a POJO framework into a NoSQL database design?
When would I want to just do a update in Cassandra? It supports updates at the field-level, so it seems like that would be preferred.
First of all, the delete operations should be part of the batch for more robust error handling. But it looks like there are also some concurrency issues with the code. It updates the user based on the current user value read beforehand, and it's not safe to assume this will still be the latest value when save() is actually executed. It will also just overwrite any keys in the lookup table that might be in use by a different user at that point, e.g. the login could already exist for another user when insertByLoginStmt is executed.
It is not necessary to delete a row before inserting a new one.
But if you are replacing rows and the new columns differ from the existing columns, then you need to delete all the existing columns and insert the new ones. Whether you insert the new data or delete the old data first does not matter, as long as it happens in a batch.
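For illustration, a rough CQL sketch of the batched delete-and-re-insert across denormalized tables (the user_by_login and users tables and their columns are assumptions, not the actual JHipster schema):

BEGIN BATCH
  -- remove the lookup row that pointed at the old login
  DELETE FROM user_by_login WHERE login = 'old_login';
  -- re-insert the lookup row and the main row with the new data
  INSERT INTO user_by_login (login, id)
    VALUES ('new_login', 123e4567-e89b-12d3-a456-426614174000);
  INSERT INTO users (id, login, email)
    VALUES (123e4567-e89b-12d3-a456-426614174000, 'new_login', 'new@example.com');
APPLY BATCH;

A logged batch keeps the denormalized tables consistent with each other, but it does not by itself solve the read-before-write race described above.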

Update H2Database schema with ORMLite

I am using H2Database with ORMLite. We have 60 tables, all created with ORMLite's "create if not exists". Now we are going to ship a major release, and the requirement is to upgrade the old version's database. I need to know how to do this with ORMLite, since in the new version some tables will be new and some are existing tables with modifications, e.g. we have a job table in the previous version's database, and in this release we added 2 more columns and changed the datatype of one column. Any suggestions? I have seen some other posts regarding ORMLite for Android SQLite. How can that approach be used for other databases? E.g. like this post:
ORMLite update of the database
I'm not sure there is any easy answer here. ORMLite doesn't directly provide any magic capabilities to make the migration of data any easier. Here are some thoughts however:
You will need to use some sort of SQL logic to determine whether your application has the "old" or "new" schema installed. You could use raw SQL to look for the existence of particular tables or columns. Going forward, it might be a good idea to store a meta table with the database version, which Android gives you for free.
You can create new and old versions of each of your entities (OldAccount versus Account) and map them both to the same table with @DatabaseTable(tableName = "accounts"). Then you can read the old entities using oldAccountDao.iterator(), convert them to new entities and (as long as you aren't mucking with the primary key) update them using the new accountDao.update(...).
You can certainly come up with a series of SQL statements that need to be performed in the proper order to change the schema. Then call dao.executeRaw(...) with them in order, as sketched below.
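As a rough sketch in H2 syntax (the PRIORITY, OWNER and RETRIES columns and the DB_VERSION meta table are invented for illustration), the existence check would go through dao.queryRaw(...) and the alterations through dao.executeRaw(...):

-- Detect the old schema: does JOB already have one of the new columns?
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'JOB' AND COLUMN_NAME = 'PRIORITY';

-- If the count is 0, apply the upgrade statements in order:
ALTER TABLE JOB ADD COLUMN PRIORITY INT DEFAULT 0;
ALTER TABLE JOB ADD COLUMN OWNER VARCHAR(255);
ALTER TABLE JOB ALTER COLUMN RETRIES BIGINT;   -- the datatype change
UPDATE DB_VERSION SET VERSION = 2;             -- your own schema-version meta table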
Obviously the new entities will just be created.
You might want to consider dumping a backup file of all tables somewhere before the conversion process, and telling the user about it, so that if there is some failure your users can revert and run the old version of your application.
Hopefully something here is helpful.

EF 5 - Code first migrations optimization

I use Entity Framework 5 code first with migrations enabled. I made many changes to my model classes, and now I have too many migration classes because I updated the database after every change.
Now I want to merge all my updates into one "initial" migration (or two), so that I only have to run the update-database command once if I have to create my database again.
Is that possible without writing code (too heavy), for instance with a command?
Thanks.
The solution depends on whether you need to keep the existing data in your databases (if you have production databases this is definitely a must) or whether you can simply drop the database.
I. Current database can be dropped
First delete all migration steps and your current database, then run the command
add-migration Initial
This way you will have only one migration step instead of a lot.
II. Data must be kept
First create a backup of the current database (the one set as the default database in your solution), then drop the database, so that when you run the add-migration command Entity Framework will think that no migrations have been applied yet.
After this, do the steps described in the first part; you will then have only one migration step, named Initial. Then run the
update-database
command, which will create a database matching your current model, but with only one row in the __MigrationHistory table. Save this row from the __MigrationHistory table.
Now you can restore the database you've just backed up, delete all rows in the __MigrationHistory table and insert the one row you saved before. After this, Entity Framework will correctly think that the database and your model are up to date and that this was achieved by running only the Initial migration step, and this way you can keep all your data.
Of course, to do this for multiple databases you only need to perform these steps once; for the other databases you just delete the current rows in the __MigrationHistory table and insert the new row for the Initial migration step, as sketched below.
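In SQL Server terms, that history swap could look roughly like this (FreshDb and the staging table name are placeholders; the column list matches EF 5's default __MigrationHistory table):

-- In the freshly created database: keep a copy of the single Initial row.
SELECT MigrationId, Model, ProductVersion
INTO   dbo.MigrationHistory_Initial
FROM   dbo.__MigrationHistory;

-- In the restored (old) database: replace the existing history with that one row.
DELETE FROM dbo.__MigrationHistory;

INSERT INTO dbo.__MigrationHistory (MigrationId, Model, ProductVersion)
SELECT MigrationId, Model, ProductVersion
FROM   FreshDb.dbo.MigrationHistory_Initial;   -- FreshDb is a placeholder name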

Auditing in Oracle

I need some help in auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any table in any field. So the things we want to have in this audit are:
user who modified
time of change occurred
old value and new value
So we started creating the trigger that was supposed to perform the audit for any table, but then ran into issues...
As I mentioned before, we have many tables and we cannot go creating a trigger for each one. So the idea is to create a master trigger that behaves dynamically for whatever table fires it. I have been trying to do that with no luck at all... it seems that Oracle restricts a trigger to the single table declared in its code, rather than allowing it to be attached dynamically the way we want.
Do you have any idea on how to do this or any other advice for solving this issue?
If you have 10g enterprise edition you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
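A minimal sketch of adding an FGA policy to one table (the schema, table and policy names are placeholders; auditing INSERT/UPDATE/DELETE with FGA needs 10g or later):

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'ORDERS_DML_AUDIT',
    statement_types => 'INSERT,UPDATE,DELETE');
END;
/
-- Captured activity is then visible in DBA_FGA_AUDIT_TRAIL.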
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means including the CHANGED_TIMESTAMP and CHANGED_USER on the application table, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
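For a single, made-up table JOBS whose only columns are JOB_ID, STATUS, CHANGED_USER and CHANGED_TIMESTAMP, the shadow table and trigger could look roughly like this:

-- Shadow audit table with an identical structure (no rows copied).
CREATE TABLE aud_jobs AS SELECT * FROM jobs WHERE 1 = 0;

CREATE OR REPLACE TRIGGER jobs_aud_trg
AFTER INSERT OR UPDATE OR DELETE ON jobs
FOR EACH ROW
BEGIN
  IF DELETING THEN
    -- for deletes, only the primary key is populated
    INSERT INTO aud_jobs (job_id) VALUES (:OLD.job_id);
  ELSE
    INSERT INTO aud_jobs (job_id, status, changed_user, changed_timestamp)
    VALUES (:NEW.job_id, :NEW.status, :NEW.changed_user, :NEW.changed_timestamp);
  END IF;
END;
/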
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
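A sketch of that generation step (this version uses LISTAGG, so it needs 11g or later; on the 9i/10g releases mentioned above you would build the same strings in a PL/SQL loop). Spool and review the output before running it, and watch the 30-character identifier limit when prefixing object names:

SELECT 'CREATE OR REPLACE TRIGGER aud_' || table_name || '_trg'          || CHR(10)
    || 'AFTER INSERT OR UPDATE ON '     || table_name || ' FOR EACH ROW' || CHR(10)
    || 'BEGIN'                                                           || CHR(10)
    || '  INSERT INTO aud_' || table_name || ' ('
    ||     LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    || ') VALUES ('
    ||     LISTAGG(':NEW.' || column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    || ');' || CHR(10)
    || 'END;' AS trigger_ddl
FROM   user_tab_columns
WHERE  table_name NOT LIKE 'AUD\_%' ESCAPE '\'
GROUP  BY table_name;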
You don't need to write your own triggers.
Oracle ships with flexible and fine grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
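For completeness, the coarser standard audit trail looks like this (it requires the AUDIT_TRAIL initialization parameter to be enabled, e.g. AUDIT_TRAIL = DB; the schema and table names are placeholders). Note that it records who did what and when, but not the old and new column values:

AUDIT INSERT, UPDATE, DELETE ON app.orders BY ACCESS;

-- Review what was captured:
SELECT username, action_name, timestamp
FROM   dba_audit_trail
WHERE  obj_name = 'ORDERS';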
You can audit so much that it can be like drinking from the firehose - and that can hurt the server performance at some point, or could leave you with so much audit information that you won't be able to extract meaningful information from it quickly, and/or you could end up eating up lots of disk space. Spend some time thinking about how much audit information you really need, and how long you might need to keep it around. To do so might require starting with a basic configuration, and then tailoring it down after you're able to get a sample of the kind of volume of audit trail data you're actually collecting.
