I have an old Liquibase .xml file that adds an index on two columns. It has been in the DATABASECHANGELOG table and in production for years. But now I updated the H2 database for my integration tests, and they fail because of "article_id ": there is a trailing blank space in the column name.
<createIndex tableName="order_journal" indexName="IDX_ArticleId_Customer">
    <column name="article_id "/>
    <column name="customer_id"/>
</createIndex>
I removed the blank space and the tests worked. Of course, the application doesn't start anymore, because I edited an already committed file in the changelog and the stored checksum no longer matches.
What is the common way to edit an old Liquibase file, or is there an approach specific to the H2 database?
There are a couple of options; which one suits you best depends on the conventions in your project.
Preconditions do not affect checksums: you may add a precondition to the existing changeset that prevents execution on the test database, and write a correct changeset for the test database with the opposite precondition (though it is not clear how to properly distinguish test and prod databases) - see "Prevent error when dropping not existing sequences, creating existing users" for an example.
You may specify <validCheckSum> for the modified changeset - this seems to be exactly your case; see the sketch after this list.
You may create another changeset for the test database and mark both changesets with failOnError="false" - if a changeset contains only a createIndex change, that seems safe; this scenario is also described in the Liquibase blog and in "Prevent error when dropping not existing sequences, creating existing users".
You may specify <validCheckSum>ANY</validCheckSum> - in that case you do not need to figure out what the previous checksum was; however, that does not seem safe.
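A minimal sketch of the <validCheckSum> approach, assuming the changeset keeps its original id and author (the values below are placeholders); the old checksum can be copied from the validation error message or from the MD5SUM column of DATABASECHANGELOG:
<changeSet id="add-order-journal-index" author="somebody">
    <!-- Placeholder: the checksum recorded before the blank space was removed -->
    <validCheckSum>7:2f9c0a4d8e1b6c3a5d7e9f0b1c2d3e4f</validCheckSum>
    <createIndex tableName="order_journal" indexName="IDX_ArticleId_Customer">
        <column name="article_id"/>
        <column name="customer_id"/>
    </createIndex>
</changeSet>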
My project has large Oracle SQL scripts. Liquibase locks the schema (the DATABASECHANGELOGLOCK table) while installing a single patch. How do I install multiple patches in parallel, without a queue?
P.S. Oracle will take its own locks at its discretion anyway.
Any DDL produces a new schema state based on the previous state. If the previous state is not valid, you can't apply the next DDL (it is impossible to add a constraint to a column that does not exist). To check the previous state, you use preconditions in your changesets.
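For illustration, a sketch of such a precondition (the table, column, and changeset names below are made up): the changeset runs only if the column it alters already exists, and halts the update otherwise.
<changeSet id="add-status-not-null" author="somebody">
    <preConditions onFail="HALT">
        <columnExists tableName="order_journal" columnName="status"/>
    </preConditions>
    <addNotNullConstraint tableName="order_journal"
                          columnName="status"
                          columnDataType="varchar(20)"/>
</changeSet>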
So, in general, it is impossible to parallelise the schema update, because schema changes must be applied in order and that order can't be changed.
The lock on DATABASECHANGELOGLOCK exists to make sure that two schema update processes cannot run at the same time. That is a reasonable restriction, so don't try to get around it.
If the update process takes too much time, just make sure that you:
do not use Liquibase to change database state (add data to tables)
do not use Liquibase to update code objects (functions, procedures, etc.) in the database
do not use Liquibase to migrate large amounts of data
Problem: I have selected a few issues. Now I want to trace each issue within the source code files, starting from the moment it was first detected as an issue until it is repaid/resolved/removed/deleted or still remaining in the latest repository.
So, for each unique issue (unique to a specific source file), I want a list with N rows (N = number of analyses, e.g., SNAPSHOTS), where each row shows the existence of the issue in a source file (preferably also with its location in the source file).
Questions: Apparently, I couldn't find an API for this. When I explored the database, I was unable to establish a connection between the SNAPSHOTS and ISSUES tables that I could use to separate the issues of one SNAPSHOT/analysis from another.
Do you see any way to solve the problem?
How can I separate the issues of one snapshot from the others?
What is the format/encoding of the LOCATION field of the ISSUES table? Can this be used to identify an issue's location in the source file?
The relation between issues and analyses is not persisted over time. Still, each issue has a creation date, the date of its last change (status, assignee, ...) and optionally a close date. That allows you to match issues with the dates of analyses: for example, an issue whose creation date falls between two analysis dates was first detected by the later of the two.
As a side note, the database must never be accessed by plugins or external applications. The only supported API for extraction is the web services, api/issues/search and api/issues/changelog in your case.
I want to migrate a subset of customer data from one shared database environment to another shared database environment. I use Hibernate and have quite a few ID and FK_ID columns which are auto-generated from an Oracle sequence.
I have a Liquibase changelog, exported from Jailer, which contains the customer-specific data.
I want to be able to rewrite all of the sequence ID columns so that they don't clash with what's already in the target database.
I would like to avoid building something that my company has to manage, and would prefer to upstream this to liquibase.
Is anyone aware of anything within Liquibase that might be a good place to start?
I would like to either do this on the Liquibase XML before passing it to the 'update' command, or as part of the update command itself. Ideally as part of the update command itself.
I am aware that I would need to make Liquibase aware of which columns are PK sequence columns and which are the related FK columns. The database structure has all of this well defined, so I should be able to read it into the update process.
Alternatively, I had thought I could use the extraction model CSV from Jailer (http://jailer.sourceforge.net/).
I would suggest that for one-time data migrations like this, Liquibase is not the best tool. It is really better suited to schema management than to data management. I think that an ETL tool such as Pentaho would be a better solution.
I actually managed to figure this out for myself, using the command-line 'update' command of Liquibase with a custom change exec listener.
1) I pushed an MR to Liquibase to allow registration of a change exec listener.
2) I implemented my own change exec listener that intercepts each insert statement and rewrites each FK and PK field to a value not yet allocated in the target database. I achieve this by using an Oracle sequence. To avoid having to go back to the database for every new value, I implemented my own version of Hibernate's sequence caching.
https://github.com/liquibase/liquibase/pull/505
https://github.com/pellcorp/liquibase-extensions
This turned out to be quite a generic solution, and in concert with some fixes upstreamed to Jailer to improve its Liquibase export support, it's a very viable and reusable solution.
The basic workflow is:
1) Export a subset of data from the source DB to Liquibase XML using Jailer.
2) Run the Liquibase update command, with the custom change exec listener, against the target.
3) TODO: Run the Jailer export on the target DB and compare with the original source data.
I am trying to run liquibase update against an Oracle database from the command line, but it is not able to identify the already executed changesets and tries to start the update from the beginning of the changelog file:
java -jar liquibase.jar --driver=oracle.jdbc.driver.OracleDriver --classpath=ojdbc14-11.2.0.3.0.jar --changeLogFile=ParentDBChangeLog.xml --url="jdbc:oracle:thin:@172.25.XX.XXX:1521:ora11g" --username=xxxx --password=xxxxx update
It fails at the first changeset it tries to execute, and the error I get is:
Error: java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object
How can I resolve this problem?
You should check that the database still contains the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables and that they are appropriately populated (DATABASECHANGELOG should have one row for each of the changesets that have been applied to the database, and the ID, AUTHOR, and FILENAME columns should match the values in your changelog.xml).
I have seen many instances of well-meaning DBAs seeing those tables, not recognizing what they are, and removing them. I have also seen instances where someone was starting to introduce Liquibase but not everyone knew about it; the others continued to use whatever process was already in place to manage schema changes, and that process removed or altered those tables.
To generate insert statements from my databases (Oracle, DB2) I've used the Liquibase generateChangeLog command with the argument
--diffTypes="data"
This command generates a correct XML file with insert statements; however, it is not aware of foreign key constraints, so I cannot use the file to fill my databases again. A similar problem has been described here: Is there a way to generate Liquibase data in the right order?. The proposed workaround is unfortunately not possible for my databases, because there is no command to switch off constraint checks.
My question is: does any other solution to this problem exist? Why can I generate a data-insert changelog but then be unable to use it because of foreign key constraints?
Jens is right in the comment. Liquibase has no way of determining dependencies, because its main use case is tracking which changeSets have been run. GenerateChangeLog is a useful capability, but it is not intended to handle all cases, and managing dependencies is a complex task that is definitely out of scope.
My normal recommendation is to treat the output of generateChangeLog as a useful first step for working with the changeLog; if you have dependency issues, just reorder the changeSets before executing them. If there are many of them, you may want to write a script that reorders them based on your knowledge of your dependencies; parent-table inserts have to come before the child-table inserts that reference them, as sketched below.
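A minimal sketch of such a reordering (the table names, ids, and values are made up): the changeset inserting into the referenced customer table is moved ahead of the changeset inserting into order_journal, which carries the foreign key.
<changeSet id="data-customer" author="generated">
    <insert tableName="customer">
        <column name="id" valueNumeric="1"/>
        <column name="name" value="ACME"/>
    </insert>
</changeSet>
<!-- Must come after data-customer, because customer_id references customer.id -->
<changeSet id="data-order_journal" author="generated">
    <insert tableName="order_journal">
        <column name="id" valueNumeric="10"/>
        <column name="customer_id" valueNumeric="1"/>
    </insert>
</changeSet>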
If you can export schema + data, a good solution is to separate the DDL, the DML, and the part of the DDL that creates the constraints. You then have to reorder those in the following order (see the sketch after the list):
DDL (without constraints)
DML (data)
DDL (the constraints removed from step 1)
It requires a little bit of manual editing, but you only have to do it once for your project.
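One way to express that ordering is a master changelog that includes the three parts in sequence; the file names below are made up, and Liquibase runs included files in the order they appear:
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
    <!-- Step 1: tables without constraints, step 2: data, step 3: the constraints -->
    <include file="01-ddl-without-constraints.xml"/>
    <include file="02-dml-data.xml"/>
    <include file="03-ddl-constraints.xml"/>
</databaseChangeLog>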
And, as a side note: for your next project, use Liquibase from the very start; that way you will never encounter this problem again.
The Jailer export-to-Liquibase feature is excellent for this. It generates a changelog in topological order.
http://jailer.sourceforge.net/