Can liquibase handle a partially updated schema? - maven

I recently started working on a branch that had been under development by a dev who left the organization, and it looks like he left the associated test environment schema in a bad state.
There is a Liquibase change file that makes a number of changes that are all necessary for the code to run, but it looks like only some of those changes have actually been applied to the schema.
I try never to update any schema by hand, especially one that isn't my personal dev environment, so I was hoping to make the existing (fairly complicated) change file work.
The error that I get is this:
SEVERE 12/12/12 12:15 PM:liquibase: Change Set db/changelogs/linechanges.xml::14::limit failed. Error: Error executing SQL ALTER TABLE limit ADD id serial: ERROR: column "id" of relation "lineitem_limitgroup" already exists
liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE limit ADD id serial: ERROR: column "id" of relation "limit" already exists
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:62)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:104)
at liquibase.database.AbstractDatabase.execute(AbstractDatabase.java:1075)
at liquibase.database.AbstractDatabase.executeStatements(AbstractDatabase.java:1059)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:317)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:27)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:58)
at liquibase.Liquibase.update(Liquibase.java:113)
at org.liquibase.maven.plugins.LiquibaseUpdate.doUpdate(LiquibaseUpdate.java:31)
at org.liquibase.maven.plugins.AbstractLiquibaseUpdateMojo.performLiquibaseTask(AbstractLiquibaseUpdateMojo.java:24)
at org.liquibase.maven.plugins.AbstractLiquibaseMojo.execute(AbstractLiquibaseMojo.java:302)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
Note that this change file includes multiple changeSets. When I inspect the schema, it looks like the changes from some of the changeSets have been applied, but some of the others have no changes applied.
So, is there a way to tell liquibase (preferably via the Maven plugin) to ignore failed changeSets and continue?
Or (less usefully) is there a way to tell liquibase to apply some changeSets and not others?
Thanks!!!

Looks like your changesets were modified after they had already been run, which is why you are seeing this issue. It's good practice to create new changesets for schema changes to already-created entities rather than editing existing changesets.
Having said that:
Check your log to see whether some other changeset is already adding that column. And yes, there are ways to tell Liquibase to apply some changes and not others.
One workaround, since your file already has issues, is to move the offending changes under a changeset that has already run successfully. A changeset is identified by its id, author and changelog file path, so once it is recorded as run, Liquibase will not run it again regardless of the SQL inside it. (Be aware, though, that Liquibase also stores a checksum of each changeset, so editing an already-run changeset can trigger a checksum validation error unless you clear its MD5SUM or add a validCheckSum.)

Liquibase won't apply a changeset twice. But probably some of the same (or incompatible) changes were made in different changesets in other branches. I think you have no other choice but to manually edit the changesets of this branch so that they apply cleanly.

I think your best option is to use a Liquibase precondition to tell it to run the failing changeset only if the column doesn't exist. You can use the columnExists tag like this:
<preConditions onFail="MARK_RAN">
    <not>
        <columnExists columnName="id" tableName="limit" schemaName="yourSchemaNameHere" />
    </not>
</preConditions>
With onFail="MARK_RAN", Liquibase will mark the changeset as run without actually executing it if the id column already exists in the limit table.
You also have two other options:
First option: set the failOnError attribute to false on the changesets that trigger an error. An error-prone changeset will run up to the point where it fails, and the following changesets will then run normally.
Use this with care, since it neither rolls the failing changeset back nor marks it as run. Also note that if you already have a changeset with this attribute set to false, that may explain why you ended up with a partial update.
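A sketch of what that looks like (the id and author are taken from the failing changeset in your log, and the addColumn body is only a guess at what that changeset actually contains):
<changeSet id="14" author="limit" failOnError="false">
    <!-- keep the existing change definitions; this body is illustrative -->
    <addColumn tableName="limit">
        <column name="id" type="serial"/>
    </addColumn>
</changeSet>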
Second option: insert a row into the DATABASECHANGELOG table to tell Liquibase that it does not have to run a particular changeset. Strictly speaking it means the changeset ran successfully, but the end result is that Liquibase will never try to run it again.
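A hedged sketch of that insert (the exact column set of DATABASECHANGELOG varies between Liquibase versions, so match it to your table; MD5SUM can be left NULL and Liquibase will fill it in on the next run):
-- id/author/filename taken from the failing changeset in the log; other values are illustrative
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, MD5SUM, DESCRIPTION, COMMENTS, LIQUIBASE)
VALUES ('14', 'limit', 'db/changelogs/linechanges.xml', NOW(), NULL, 'manually marked as run', '', '2.0');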

Related

Sqitch - single plan row, multiple sql files

I'm looking to move to sqitch, but my team likes their migrations as multiple files. That is, for example, if we create a foreign key and create an index in the same jira ticket, we like PROJ-123-create-fk.sql and PROJ-123-create-index.sql.
I'd like to keep a single row in the sqitch.plan file, so that each jira ticket corresponds to a single row.
Basically, short of adding lines to sqitch.plan, can I do this? Is there a way to "include" other sql files from a master one? Something like
PROJ-123-main.sql
\include PROJ-123-create-fk.sql
\include PROJ-123-create-index.sql
Thanks so much!
The \ir psql directive solved this for me.
in PROJ-123-deploy.sql
\ir PROJ-123-create-fk.sql
\ir PROJ-123-create-index.sql
\ir includes files relative to the directory of the script containing the command, so as long as the fk and index sql files sit next to PROJ-123-deploy.sql they will be executed.
This doesn't quite match your goal of keeping a single line in the plan file, but I'll describe the method my team uses below in case parts of it are useful.
We are in a similar situation, but we use sqitch tags. Each version of our application corresponds to a main task, and each subtask under it corresponds to one change (one sql file). We create as many sql files as there are subtasks, then group them under a tag named after the main task. Our database CI/CD pipeline also moves between versions using these tags. I wanted to add this method here in case anyone prefers a similar structure.
Simple example:
Say v2.0 of our application is installed and v2.1 requires a new table and an index.
We create two subtasks, "create table" and "create index", under the main task named v2.1.
We create two sql files:
app_v2.1_table_create.sql to create the table, and app_v2.1_index_create.sql to create the index.
After that, we create a sqitch tag called v2.1 (note that it has the same name as the main task); a rough sketch of the commands is shown below.
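The command sequence would look roughly like this (change names and notes are illustrative, and the deploy flag may be --to-change depending on your sqitch version):
sqitch add app_v2.1_table_create -n 'v2.1: create table'
sqitch add app_v2.1_index_create -n 'v2.1: create index'
sqitch tag v2.1 -n 'Release v2.1'
# later, deploy everything up to and including that tag
sqitch deploy --to @v2.1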

Why does SSDT VS 2019 table rename not drop the table in DB on "Publish"

I've been teaching myself SSDT for use on an upcoming project that I expect to be working on. My understanding of the "publish" operation is that it will take my SQL Server Data Project code, use that to generate something like a reference database, and then use that to compare against my target-deploy database, figure out what changes are required to get the schema into line with the reference db, and then make them.
But for a table rename, this did not happen, and I'm hoping somebody can explain what is wrong with my mental model of the process.
I've got a very simple "library" themed test database with tables like "Libraries", "Books", and "Categories". All very simple 2-3 columns just to experiment with. Then I added a 4th table "Books_MM_Categories" to represent a many-to-many link table between "Books" and "Categories".
I published that, and all was as expected. But I'd deliberately named the link table 'wrong' so that I could try renaming it. So I renamed the sql file in my DB project and changed its code to instead create a table named "Books_Categories_Link".
This time when I published, I expected the "Books_MM_Categories" table to be deleted from the DB, and the new one added... or to have some kind of sp_rename procedure show up to rename the table.
Instead, what I got was that both tables are now present. I can understand that my sloppy rename would have lost all the data, simply causing a new table to be created and the old one dropped instead of ACTUALLY being renamed... But what I can't figure out is why the original table is not dropped. In my mental model of how this works, a table/column/view/sproc that no longer exists in the reference should likewise be eliminated from the published database. If not, then I should expect to see some error message telling me it chose not to drop the table because of anticipated data loss.
I did see a couple of posts explaining how to use the "refactor" option in the code view window. That works as I would expect, so I understand how to do it properly going forward.
Can anybody explain what's wrong with my mental model of how this works? I'm sure it's working as it is supposed to, but I'd like to understand where I went wrong. Why does a table not listed in my project not get deleted on publish? (I've not tried it, but I expect the exact same behavior if I export a .dacpac first and then use that to perform the deployment of the new schema.)
Thanks
EDIT 1
Somewhat curiously, when running a "Schema Compare" operation, the extra table is detected and flagged for deletion.
Your mental model seems to be correct. Check the 'Advanced' options in the 'Publish Database' dialog.
In the 'Drop' tab you can enable 'Drop objects in target but not in source' to produce the intended result.
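If you publish from a saved profile or from the command line, the same switch exists as a publish-profile property / SqlPackage parameter (a sketch from memory, with illustrative database names, so verify the property against your own profile):
<!-- in the .publish.xml profile, inside the PropertyGroup -->
<DropObjectsNotInSource>True</DropObjectsNotInSource>
<!-- keep this on so an accidental drop of a table with data still blocks the publish -->
<BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>

or on the command line:
SqlPackage /Action:Publish /SourceFile:Library.dacpac /TargetServerName:. /TargetDatabaseName:Library /p:DropObjectsNotInSource=True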

JPA add a condition to every single query automatically

Before anything, I must say this first: this table design is not my decision. We protested, but to no avail, so please don't tell me not to create a table like that.
We have a database in which every table has a flag column. This flag indicates which environment a row belongs to: production or test data.
On the server side we have one variable, currently stored in a ThreadLocal, that indicates which environment the request belongs to, with the same values as the flag in the database.
Our requirement is that if the request belongs to the test environment, then we must select only records belonging to that environment. We would need to add a condition to every query we make against the database, something like:
SELECT t FROM TABLE t WHERE t.flag = :environment
But that means updating every single query, and updating every object to set this flag before insert/update. This would require a lot of effort, since our system was built long ago and is not a work in progress, and it also brings a lot of risk if someone forgets to add the condition to a new query.
So is there any way to inject a condition that checks this flag into every query without having to edit the query strings manually? Something like an interceptor that adds the condition?
Which JPA provider?
With Hibernate, you could try using a @Filter.
Multitenancy could be another option, but it is probably overkill in your scenario.
Finally, since you tagged the question with Oracle, perhaps the easiest approach would be to provide dedicated schemas (one per environment) with views over every table in your db, filtered by the flag column. Not sure if you're allowed to do that, though.
With some of the above, you would need a global entity listener to populate the flag field of your entities before they are persisted.
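A minimal sketch of the @Filter approach combined with a lifecycle callback, assuming Hibernate as the provider and the classic javax.persistence API; the entity, the flag column and the EnvironmentHolder helper are illustrative names, not anything from your codebase:
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import org.hibernate.Session;
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

// Hypothetical ThreadLocal holder, standing in for the one your server already has.
class EnvironmentHolder {
    private static final ThreadLocal<String> ENV = new ThreadLocal<>();
    static void set(String env) { ENV.set(env); }
    static String get() { return ENV.get(); }
}

@Entity
@FilterDef(name = "envFilter", parameters = @ParamDef(name = "env", type = "string"))
@Filter(name = "envFilter", condition = "flag = :env")
class LineItem {
    @Id
    private Long id;
    private String flag; // the environment flag column

    // Stamp the flag before insert/update. To apply this to every entity, move it
    // into a listener class registered as a default entity listener in orm.xml.
    @PrePersist
    @PreUpdate
    void stampFlag() {
        this.flag = EnvironmentHolder.get();
    }
}

class FilterEnabler {
    // Enable the filter once per session/request, e.g. from a request interceptor.
    static void enable(EntityManager em) {
        Session session = em.unwrap(Session.class);
        session.enableFilter("envFilter").setParameter("env", EnvironmentHolder.get());
    }
}
Note that Hibernate filters apply to queries but not to direct lookups by primary key (em.find()), so by-id loads can still return rows from the other environment.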

sequel migration error, relation already exists?

I just created a new migration file for my ruby project (e.g. 003_foo3.rb)
I use sequel 3.48.
Testing locally first:
$ sequel -m ~/myproject/db/migration postgres://postgres@localhost/myproject_db
Error: Sequel::DatabaseError: PG::Error: ERROR: relation "bank" already exists
That 'bank' table is created in the first migration file (001_foo1.rb).
I thought Sequel tracks migrations that have already run?
What am I missing?
I feel your pain, as I get similar error messages from Sequel every now and then.
Sequel creates a table called schema_info in your app database to track migrations you ran.
create_table(:schema_info) do
  column :version, "int(11)", :default=>0, :null=>false
end
It does this with either integer versions or timestamps.
Your error might be due to Sequel not having created that table, or because you recreated your application database from scratch, in which case the recorded schema version has been lost.
It's not possible to say what exactly happened given the information you have given.
I occasionally get similar errors, and I comment out all the migration code in the migration file, run the migrations, and then uncomment the code again.
If you are sure that you have already run a certain migration, you can change the value of the version field in the schema_info table.
Supposing you have the following migrations:
001_some_migration.rb
002_some_other_migration.rb
...and you have already run 001 and you get an "already exists" error, then you can set schema_info.version = 1 and run your migrations again. Migration #1 will not be executed, but #2 will be.
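A quick sketch of that fix-up through Sequel itself (assuming the default integer migrator and its schema_info table; adjust the connection string to yours):
require 'sequel'
DB = Sequel.connect('postgres://postgres@localhost/myproject_db')
DB[:schema_info].update(:version => 1)  # record that migration 001 has already run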

EntityFramework code-first, run a database update script after DropCreate

I'm trying to find some nice work arounds for the issues of computed columns in code first. Specifically, I have a number of CreatedAt datetime columns that need to be set to getdate().
I've looked at doing this via the POCO constructors, but to do that I must remove the Computed option (or it won't persist the data); however, there is no easy way to ensure the column is only set when inserting a record, so this would overwrite CreatedAt on every update.
I'm looking to create an alter script that can be called after the DropCreate that would go through and alter various columns to include the default value of getdate().
Is there an event to hook into, something like OnDropCreateCompleted, where I could then run additional SQL?
What would be the best way to handle the alter script? I am thinking of just sending raw sql to the server to run.
Is there another way to handle the getdate() issue that might be more graceful and more in line with code first that I'm missing?
Thanks
You can make a custom initializer derived from the one you want and override its Seed method, where you can execute any SQL you need; a sketch of such an initializer is below.
If you are using migrations, you can just add the custom SQL to the Up method.
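A minimal sketch, assuming EF code-first (EF 4.3+/5/6) with a hypothetical MyContext and an Orders table whose CreatedAt column needs the default:
using System.Data.Entity;

public class GetDateDefaultsInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Runs after the database has been dropped and re-created, so we can
        // patch in the defaults that code-first cannot express directly.
        context.Database.ExecuteSqlCommand(
            "ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_CreatedAt DEFAULT GETDATE() FOR CreatedAt");

        base.Seed(context);
    }
}

// Register it at startup:
// Database.SetInitializer(new GetDateDefaultsInitializer());
With migrations, the equivalent is a Sql("ALTER TABLE ...") call inside the Up() method of a migration.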
