Sequel migration error: relation already exists? - Ruby

I just created a new migration file for my Ruby project (e.g. 003_foo3.rb).
I am using Sequel 3.48.
Testing locally first:
$ sequel -m ~/myproject/db/migration postgres://postgres@localhost/myproject_db
Error: Sequel::DatabaseError: PG::Error: ERROR: relation "bank" already exists
That 'bank' table is created in the first migration file (001_foo1.rb).
I thought Sequel tracked migrations that have already run?
What am I missing?

I feel your pain, as I get similar error messages from Sequel now and then.
Sequel creates a table called schema_info in your app database to track migrations you ran.
create_table(:schema_info) do
column :version, "int(11)", :default=>0, :null=>false
end
Sequel supports both integer-versioned and timestamp-based migrations; integer versions are tracked in schema_info as above, while timestamped migrations are recorded in a schema_migrations table instead.
Your error message might be due to Sequel not having created that table, or because you recreated your application database from scratch, in which case the stored schema version was lost, producing this error.
It's not possible to say exactly what happened from the information you have provided.
I occasionally get similar errors, and I comment out all the migration code in the migration file, run the migrations, and then uncomment the code again.
If you are sure that you have already run a certain migration, you can change the value of the version field in the schema_info table.
Supposing you have the following migrations:
001_some_migration.rb
002_some_other_migration.rb
...and you already ran 001 and you get an "already exists" error, then you can set schema_info.version = 1 and run your migrations again. Migration #1 will not be executed, but #2 will be executed directly.
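For example, a minimal sketch in SQL (schema_info and version are the defaults used by Sequel's integer migrator):
-- Mark migration 001 as already applied so the next run starts at 002
UPDATE schema_info SET version = 1;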

Related

How to create a get-graphql-schema enum from Hasura?

I am working on a project which is backed by Hasura. I am having difficulty creating an enum and getting it to auto-generate the values.
I've successfully created a migration according to the enum spec, and verified it is loading values into the database. Next I ran yarn hasura console, and from the console started tracking both tables I created & set BaseColor to be an enum type. I added a permission for public to SELECT from BaseColor.
Next I ran yarn hasura metadata export. That generated a tables.yml with BaseColor's table definition having is_enum true.
Then I ran yarn update-schema (i.e. get-graphql-schema http://localhost:8080/v1/graphql > schema.graphql). The generated file is missing the BaseColor_enum I would expect to be present for an enum.
get-graphql-schema only generates an _enum for a table if that table is referenced by another.
So, if I add a foreign key that references BaseColor, it will generate BaseColor_enum, otherwise it won't.
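As a hedged illustration of that, the sketch below uses a Hasura-style enum table (a text primary key plus an optional comment column, per Hasura's enum spec); the referencing product table is hypothetical, and the foreign key on it is what makes BaseColor_enum appear in the generated schema:
-- Enum table following Hasura's enum spec (illustrative)
CREATE TABLE "BaseColor" (
  value text PRIMARY KEY,
  comment text
);
-- Hypothetical referencing table; this FK triggers generation of BaseColor_enum
CREATE TABLE product (
  id serial PRIMARY KEY,
  base_color text NOT NULL REFERENCES "BaseColor" (value)
);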

Why does SSDT VS 2019 table rename not drop the table in DB on "Publish"

I've been teaching myself SSDT for use on an upcoming project. My understanding of the "publish" operation is that it takes my SQL Server Database Project code, uses it to generate something like a reference database, compares that against my target-deploy database, figures out what changes are required to bring the schema into line with the reference db, and then makes them.
But for a table rename, this did not happen, and I'm hoping somebody can explain what is wrong with my mental model of the process.
I've got a very simple "library" themed test database with tables like "Libraries", "Books", and "Categories". All very simple 2-3 columns just to experiment with. Then I added a 4th table "Books_MM_Categories" to represent a many-to-many link table between "Books" and "Categories".
I published that, and all was as expected. But I'd deliberately named the link table 'wrong' so that I could try renaming it. So I renamed the .sql file in my DB project and changed its code to instead create a table named "Books_Categories_Link".
This time when I published, I expected the "Books_MM_Categories" table to be deleted from the DB, and the new one added... or to have some kind of sp_rename procedure show up to rename the table.
Instead, what I got was that both tables are now present. I can understand that my sloppy rename would have lost all the data, simply causing a new table to be created and the old one dropped instead of an ACTUAL rename... But what I can't figure out is why the original table was not dropped. In my mental model of how this works, a table/column/view/sproc that no longer exists in the reference should likewise be eliminated from the published database. If not, then I should expect to see some error message telling me it chose not to drop the table because of anticipated data loss.
I did see a couple of posts explaining how to use the "refactor" option in the code view window... That works as I would expect, so I understand how to do it properly going forward.
Can anybody explain what's wrong with my mental model of how this works? I'm sure it's working as it is supposed to, but I'd like to understand where I went wrong. Why does a table not listed in my project not get deleted on publish? (I've not tried it, but I expect the exact same behavior if I export a .dacpac first and then use that to perform the deployment of the new schema.)
Thanks
EDIT 1
Somewhat curiously, when running a "Schema Compare" operation, the extra table is detected and flagged for deletion.
Your mental model seems to be correct. Check the 'Advanced' options in the 'Publish Database' dialog.
In the 'Drop' tab you can enable 'Drop objects in target but not in source' to produce the intended result.
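As a hedged sketch of the difference, with that option enabled the generated publish script contains a plain DROP for the orphaned table, whereas a rename done via Refactor > Rename is recorded in the refactorlog and scripted with sp_rename, preserving data:
-- With 'Drop objects in target but not in source' enabled (illustrative):
DROP TABLE [dbo].[Books_MM_Categories];
GO
-- With a proper Refactor > Rename, the script uses sp_rename instead:
EXEC sp_rename N'[dbo].[Books_MM_Categories]', N'Books_Categories_Link';
GO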

How can I go back to a migration with Flyway?

Let's assume that we have 2 scripts.
One that creates a table (ex: Students) and is named V1_Students_create.sql, and another, named V2_Teachers_create.sql, that creates a second table (ex: Teachers). After I have migrated to the second version, how can I go back to the first migration (the one where I executed the first script, V1_Students_create.sql) and have only the first table (Students) created?
I am using an Oracle database.
Thank you
Currently you cannot go back or downgrade a migration. There is an open issue for this feature with a lot of ongoing discussion, but nothing has landed yet. Here is the link for the issue: https://github.com/flyway/flyway/issues/109
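A common workaround, not from the answer above but worth sketching, is to "roll forward": write a new versioned migration that reverses the unwanted change. The file name below is hypothetical:
-- V3__Drop_teachers.sql: a roll-forward migration that undoes V2
DROP TABLE Teachers;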

Model changed since database was created

I have uploaded my MVC3 project; it's a simple blog. At first it worked well, but after a couple of hours the following error appeared (I set customErrors to Off to see it):
The model backing the 'SiteContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.
To solve this I have to manually delete my database, create it again, and then restore the backup that I created. But after 2 hours I get the error again!
I really have no idea what caused this.
When you create a model and ask EF to create a database from it, EF hashes the model and stores the hash value with the database. Whenever the context is created, EF recomputes the hash and matches it against what is stored in the database. If the model changes in any way, the resulting hash will differ and EF will throw the exception you have just seen. This is done in order to keep the model in sync with the database.
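For the EF version of the MVC3 era (EF 4.1 Code First), the hash is stored in an EdmMetadata table inside the application database; later EF versions use __MigrationHistory instead. A hedged way to inspect the stored value:
-- Look at the stored model hash (table name assumes EF 4.1-era Code First)
SELECT ModelHash FROM dbo.EdmMetadata;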
Is there any way the model could have changed during runtime?
One thing you could do to figure out the difference is to:
1. Re-create the database from the model as you are doing now and script it (script1.sql).
2. Wait till the error happens, then delete the db, re-create it again, and script it (script2.sql).
3. Compare the two and see whether you can spot a difference in the schemas.
This should give you an idea of what has changed in the model.
Good luck!

Can liquibase handle a partially updated schema?

I recently started to work on a branch that had been under development by a dev who left the organization and it looks like he left the associated test environment schema in a bad state.
There is a Liquibase change file that makes a number of changes that are all necessary for the code to run, but it looks like the associated schema already has some of those changes applied.
I try never to update any schemata by hand, especially when it's not my personal dev environment, so I was hoping to make the existing (fairly complicated) changes work.
The error that I get is this:
SEVERE 12/12/12 12:15 PM:liquibase: Change Set db/changelogs/linechanges.xml::14::limit failed. Error: Error executing SQL ALTER TABLE limit ADD id serial: ERROR: column "id" of relation "lineitem_limitgroup" already exists
liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE limit ADD id serial: ERROR: column "id" of relation "limit" already exists
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:62)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:104)
at liquibase.database.AbstractDatabase.execute(AbstractDatabase.java:1075)
at liquibase.database.AbstractDatabase.executeStatements(AbstractDatabase.java:1059)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:317)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:27)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:58)
at liquibase.Liquibase.update(Liquibase.java:113)
at org.liquibase.maven.plugins.LiquibaseUpdate.doUpdate(LiquibaseUpdate.java:31)
at org.liquibase.maven.plugins.AbstractLiquibaseUpdateMojo.performLiquibaseTask(AbstractLiquibaseUpdateMojo.java:24)
at org.liquibase.maven.plugins.AbstractLiquibaseMojo.execute(AbstractLiquibaseMojo.java:302)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
Note that this change file includes multiple changeSets. When I inspect the schema, it looks like the changes from some of the changeSets have been applied, but some of the others have no changes applied.
So, is there a way to tell Liquibase (preferably via the Maven plugin) to ignore failed changeSets and continue?
Or (less usefully) is there a way to tell Liquibase to apply some changeSets and not others?
Thanks!!!
It looks like your changeset was updated unintentionally, which is why you are seeing the issue. It's good practice to create new changesets for schema changes to already-created entities rather than updating existing changesets.
Having said that:
In your log, check whether some other changeset is already adding that column. Yes, there are ways to tell Liquibase to apply some changes and not others.
One workaround: since your file already has issues, you can move the previously applied changes under a changeset that has already run correctly. A changeset is uniquely identified by its author, its changelog file, and its changeset ID; since such a changeset has been run once, Liquibase will not run it again no matter what SQL is inside it.
Liquibase won't apply a changeset twice. But probably some of the same (or incompatible) changes were made in different changesets in other branches. I think you have no other choice but to manually edit the changesets of this branch so that they apply cleanly.
I think your best option is to use a Liquibase precondition to run the failing changeset only if the column doesn't exist. You will have to use the columnExists tag like this:
<preConditions onFail="MARK_RAN">
  <not>
    <columnExists columnName="id" tableName="limit" schemaName="yourSchemaNameHere" />
  </not>
</preConditions>
The onFail="MARK_RAN" attribute makes Liquibase mark the changeset as run without actually running any update if you already have the id column in the limit table.
You also have two other options:
First option: set the attribute failOnError to false on the changesets that trigger an error. An error-prone changeset will be run up to the point it triggers an error, and the following changesets will then run normally.
Use this with care, since it doesn't roll back the changeset on error, and it doesn't mark the changeset as run either. Also note that if you already have a changeset with this attribute set to false, that may explain why you have a partial update.
Second option: insert a row into the DATABASECHANGELOG table to tell Liquibase that it doesn't have to run a particular changeset. Strictly speaking, it records that the changeset ran successfully, but the result is that Liquibase will never try to run it again.
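A hedged sketch of that insert, using the changeset identifiers from the error above. The exact column set of DATABASECHANGELOG varies between Liquibase versions, so check your table first; leaving MD5SUM NULL lets Liquibase compute and store it on its next run:
-- Tell Liquibase that changeset 14 in linechanges.xml has already been executed
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, MD5SUM, DESCRIPTION)
VALUES ('14', 'limit', 'db/changelogs/linechanges.xml', CURRENT_TIMESTAMP, NULL, 'manually marked as run');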
