To generate insert statements from my databases (Oracle, DB2) I've used the Liquibase generateChangeLog command with the argument
--diffTypes="data"
This command generates correct XML with insert statements, however it is not aware of foreign key constraints, so I cannot use this file again to fill my databases. A similar problem has been described here: Is there a way to generate Liquibase data in the right order?. The proposed workaround is unfortunately not possible for my databases, because there is no command to switch off constraint checks.
My question is whether any other solution to this problem exists. Why can I generate a changelog of data insert statements, but not use it because of foreign key constraints?
Jens is right in the comment. Liquibase has no way of determining dependencies, because the main use case is tracking changeSets that have already been run. GenerateChangeLog is a useful capability, but it is not intended to handle all cases, and managing dependencies is a complex task that is definitely out of scope.
My normal recommendation is that the output of generateChangeLog should be considered a useful first step for working with the changeLog, and if you have dependency issues, just reorder the changeSets before executing them. If there are many of them, you may want to write a script to reorder them based on your knowledge of your dependencies.
If you can export schema + data, a good solution is to separate the DDL, the DML, and the part of the DDL that creates the constraints. You then have to reorder those parts as follows:
DDL (without constraints)
DML (data)
DDL (the constraints removed from step 1)
It requires a little bit of manual editing, but you'll only have to do it once for your project.
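A rough sketch of what the three parts look like, executed from top to bottom (the customer/orders tables and the constraint name are just illustrative):

-- 1) DDL without the foreign key constraints
CREATE TABLE customer (id NUMBER PRIMARY KEY, name VARCHAR2(100));
CREATE TABLE orders (id NUMBER PRIMARY KEY, customer_id NUMBER, total NUMBER);

-- 2) DML (data)
INSERT INTO customer (id, name) VALUES (1, 'Acme');
INSERT INTO orders (id, customer_id, total) VALUES (10, 1, 99);

-- 3) DDL for the constraints removed from step 1, added last
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customer (id);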
And, as a side note: for your next project, start using Liquibase from the beginning; that way you will never run into this problem again.
The Jailer Liquibase export feature is excellent for this. It generates a changelog in topological order.
http://jailer.sourceforge.net/
I have a multi-module Maven project with two modules that contain some Liquibase configuration with tables.
I want to add a foreign key between two tables, but table A is in module A and table B is in module B. The problem is that during compilation I get an error because Liquibase doesn't see table A...
Is there a solution for that?
I'm assuming this problem only appears when you use jOOQ's LiquibaseDatabase, which simulates your liquibase migrations for code generation. I can think of a few solutions to this problem:
Use testcontainers-based code generation instead. The referenced blog post uses Flyway, but it will work the same with Liquibase. This means that during the build of both modules, you'll have a running database instance, which will be up to date by the time you generate jOOQ code. That also removes any simulation-related issues and, for example, allows you to use vendor-specific features of your RDBMS.
Extract the database change management and schema into another module, making the schema "global", instead of module dependent. Your foreign key kinda hints at the schema being global anyway. The relational model isn't really directed in a tree form. It's a graph. Sooner rather than later, you'll have a key from A to B and your dependencies will become cyclic.
Stay purely "modular" and make the schemas independent, removing the key. Of course, data integrity is a good thing, so the price to pay for this is high, but it would solve your immediate problem.
Remove the key from jOOQ's code generation and declare a synthetic foreign key in jOOQ's code generation (though, B would still somehow have to know about A in the generation process)
Run both liquibase migrations for A and B when you generate code for module B. That way, B has all the information available again.
Which to pick is a subjective choice and doesn't have a clear answer. But at least, this gives you options.
My project has large Oracle SQL scripts. Liquibase locks the schema (DATABASECHANGELOGLOCK table) when installing a single patch. How do I install multiple patches in parallel, without a queue?
P.S. Oracle will independently take locks at its own discretion.
Any DDL creates a new schema state that is based on the previous state. If the previous state is not valid, you can't apply the next DDL (it is impossible to add a new constraint on a column that does not exist). To check the previous state, you use preconditions in your changesets.
So, in general, it is impossible to parallelise schema updates, because the schema changes have to be applied in order and that order can't be changed.
The lock on DATABASECHANGELOGLOCK is there to make sure that two schema update processes cannot run at the same time. That is a reasonable restriction, so don't try to get around it.
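For the precondition part, a changeset in a SQL-formatted changelog can guard itself roughly like this (the author/id, table and column are made up for illustration):

--liquibase formatted sql

--changeset alice:42
--preconditions onFail:HALT onError:HALT
--precondition-sql-check expectedResult:1 SELECT COUNT(*) FROM user_tables WHERE table_name = 'CUSTOMER'
ALTER TABLE customer ADD (loyalty_level NUMBER);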
If the update process takes too much time, just be sure that you do not:
use Liquibase to change database state (add data to tables)
use Liquibase to update code objects (functions, procedures, etc.) in the database
use Liquibase to migrate large amounts of data
Where's the best place to store the version of a table in Oracle? Is it possible to store the version in the table itself, e.g. similar to the comment assigned to a table?
I don't think you can store that information in Oracle, except maybe in a comment on the table, but that would be error prone.
But personally I think you shouldn't want to keep track of versions of tables. After all, to get from a version 1 to a version 2, you may need to modify data as well, or other objects like triggers and procedures that use the new version of the table.
So in a way, it's better to version the entire database, so you can 'combine' multiple changes in one atomic version number.
There are different approaches to this, and different tools that can help you with that. I think Oracle even has some built-in feature, but with Oracle, that means that you will be charged gold bars if you use it, so I won't get into that, and just describe the two that I have tried:
Been there, done that: saving schema structure in Git
At some point we wanted to save our database changes in GitHub, where our other source code is too.
We've been using Red Gate Source Control for Oracle (and Schema Compare, a similar tool), and have been looking into other similar tools as well. These tools use version control like Git to keep the latest structure of the database, and they can help you get your changes from your development database into a scripts folder or VCS, and they can generate migration scripts for you.
Personally I'm not a big fan, because those tools and scripts focus only on the structure of the database (like you would with versioning individual tables). You'd still need to know how to get from version 1 to version 2, and sometimes only adding a column isn't enough; you need to migrate your data too. This isn't covered properly by tools like this.
In addition, I thought they were overall quite expensive for the work that they do, they don't work as easily as promised on the box, and you'd need different tools for different databases.
Working with migrations
A better solution would be to have migration scripts. You just make a script to get your database from version 1 to version 2, and another script to get it from version 2 to version 3. These migrations can be about table structure, object modifications, or even just data; it doesn't matter. All you need to do is remember which script was executed last, and execute all versions after that.
Executing migrations can be done by hand, or you can simply script it. But there are tools for this as well. One of them is Flyway, a free tool (with paid pro support should you need it) that does exactly this. You can feed it SQL scripts from a folder, which are sorted and executed in order. Each script is a 'version'. Metadata about the process is stored in a separate table in your database. The whole process is described in more detail on Flyway's website.
The advantage of this tool is that it's really simple and flexible, because you just write the migration scripts yourself. All the tool does is execute them and keep track of it. And it can do it for all kinds of databases, so you can introduce the same flow for each database you have.
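For instance, a migration file following Flyway's V<version>__<description>.sql naming convention can combine a structural change with the data fix that belongs to it (the table and column here are just illustrative):

-- V2__add_customer_email.sql
ALTER TABLE customer ADD (email VARCHAR2(255));
UPDATE customer SET email = 'unknown@example.com' WHERE email IS NULL;

Flyway records in its metadata table that version 2 has been applied, so the script runs exactly once per database.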
One way is to define a comment on the table:
comment on table your_table is 'some comment';
Then you can read that meta information from the all_tab_comments view.
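For example (assuming the table lives in another schema; otherwise user_tab_comments is enough):

select comments
from all_tab_comments
where owner = 'YOUR_SCHEMA'
  and table_name = 'YOUR_TABLE';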
See
How to get table comments via SQL in Oracle?
For further reading, see:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4009.htm
I want to migrate a subset of customer data from one shared database environment to another shared database environment. I use Hibernate and have quite a few ID and FK_ID columns which are auto-generated from an Oracle sequence.
I have a Liquibase changelog that I exported from Jailer which contains the customer-specific data.
I want to be able to rewrite all of the sequence ID columns so that they don't clash with what's already in the target database.
I would like to avoid building something that my company has to manage, and would prefer to upstream this to liquibase.
Is anyone aware of anything within Liquibase that might be a good place to start?
I would like to either do this on the Liquibase XML before passing it to the 'update' command, or as part of the update command itself. Ideally as part of the update command itself.
I am aware that I would need to make liquibase aware of which columns are PK sequence columns and the related FK columns. The database structure does have this all well defined, so I should be able to read this into the update process.
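As a sketch of what that lookup could be built on, Oracle exposes the PK/FK relationships in its standard data dictionary views (the query itself is only illustrative):

select c.table_name   as child_table,
       cc.column_name as fk_column,
       p.table_name   as parent_table
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
join   user_constraints  p  on p.constraint_name  = c.r_constraint_name
where  c.constraint_type = 'R';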
Alternatively, I had thought I could use the extraction model CSV from Jailer.
Jailer - http://jailer.sourceforge.net/
I would suggest that for one-time data migrations like this, Liquibase is not the best tool. It is really better for schema management rather than data management. I think that an ETL tool such as Pentaho would be a better solution.
I actually managed to figure it out for myself with the command-line 'update' command of Liquibase by using a custom change exec listener.
1) I pushed an MR to Liquibase to allow registration of a change exec listener
2) I implemented my own change exec listener that intercepts each insert statement and rewrites each FK and PK field to one that is not yet allocated in the target database. I achieve this by using an Oracle sequence. In order to avoid having to go back to the database each time for a new sequence value, I implemented my own version of the Hibernate sequence caching (sketched below).
https://github.com/liquibase/liquibase/pull/505
https://github.com/pellcorp/liquibase-extensions
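The caching mentioned in point 2) works along the lines of Hibernate's pooled optimizer: reserve a whole block of sequence values in one round trip and hand them out in memory. A sketch, assuming a sequence customer_seq created with INCREMENT BY 50:

-- one NEXTVAL reserves a block of 50 ids
SELECT customer_seq.NEXTVAL AS hi FROM dual;
-- ids (hi - 49) .. hi can now be assigned locally without further round trips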
This turned out to be quite a generic solution, and in concert with some fixes upstreamed to Jailer to improve the Liquibase export support, it's a very viable and reusable solution.
Basic workflow is:
1) Export a subset of data from the source DB to Liquibase XML using Jailer
2) Run the Liquibase update command, with the custom change exec listener, against the target DB.
3) TODO: Run the Jailer export on the target DB and compare with the original source data.
I use the visual editor to create scheduler chains in SQL Developer 3.2. But I have run into a problem: I cannot get the SQL code of the chain, namely the sequence of calls to create_chain, define_chain_step and define_chain_rule (from the DBMS_SCHEDULER package).
In addition, a number of properties of the created steps and rules, like program_name and so on, cannot be changed.
DBMS_METADATA can be used to pull the definitions.
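For example, scheduler objects (including chains) come back under the PROCOBJ metadata type, so the extraction would look something like this (chain name as in the rule query below):

select dbms_metadata.get_ddl('PROCOBJ', 'EXPORT') from dual;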
When I last tried this in 10.2.0.4 it did not pull the scheduler rules -- I pulled them by just reading the relevant system tables using SQL.
select 'exec DBMS_SCHEDULER.DEFINE_CHAIN_RULE('''||chain_name||''','''||condition||''','''||action||''','''||rule_name||''','''||comments||''');'
from user_scheduler_chain_rules
where chain_name = 'EXPORT';
I always found it more robust to completely drop and redefine a schedule rather than modify it in place. Some notes here: http://oraclesponge.wordpress.com/category/oracle/dbms_scheduler/
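For example, before re-running the creation script you can drop the whole chain (chain name again just illustrative); the optional force => TRUE is there for the case where jobs still reference the chain:

exec DBMS_SCHEDULER.DROP_CHAIN(chain_name => 'EXPORT', force => TRUE);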