How to solve SchemaVersionTrackingException in dbdeploy due to no default value in delta_set? - gradle

I am trying to set up automatic DB migration using dbdeploy.
I followed the steps in this post: http://blog.codeborne.com/2012/09/using-dbdeploy-in-gradle.html
I created the change log table as:
CREATE TABLE changelog (
change_number INTEGER NOT NULL,
delta_set VARCHAR(10) NOT NULL,
start_dt TIMESTAMP NOT NULL,
complete_dt TIMESTAMP NULL,
applied_by VARCHAR(100) NOT NULL,
description VARCHAR(500) NOT NULL
);
ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set);
The updateDatabase task in build.gradle is:
task updateDatabase << {
ant.dbdeploy(driver: dbDriver,
url: dbUrl,
userid: dbUsername,
password: dbPassword,
dir: './src/main/resources/deploy/sql',
dbms: 'mysql'
)
}
When I do gradle updateDatabase, I get com.dbdeploy.exceptions.SchemaVersionTrackingException: Could not update change log because: Field 'delta_set' doesn't have a default value.
I tried assigning 'Main' as the default value in the changelog table definition:
delta_set VARCHAR(10) NOT NULL DEFAULT 'Main'
But, I still got the same exception.
I also tried removing the delta_set column entirely, but I got the same exception. This really confused me.
I am completely new to data migration, so any help regarding this error and how I should go about it will be deeply appreciated.
Thank you in advance.

The DBDeploy documentation isn't very explicit about this, but the changelog table format changed between versions 2.X and 3.X (see the upgrade instructions). I suspect that you are using DBDeploy 3.X.
You need to:
Remove the old changelog table:
DROP TABLE changelog;
Recreate it using the new format:
CREATE TABLE changelog (
change_number INTEGER NOT NULL,
complete_dt TIMESTAMP NOT NULL,
applied_by VARCHAR(100) NOT NULL,
description VARCHAR(500) NOT NULL
);
ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number);
After this, everything should work.
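If you are unsure which changelog format an existing database already has, one quick check is whether the table still has a delta_set column. Here is a minimal JDBC sketch of that check; the connection URL and credentials are placeholders, not values from the question:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ChangelogFormatCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own dbUrl/dbUsername/dbPassword.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT COUNT(*) FROM information_schema.columns "
               + "WHERE table_schema = DATABASE() "
               + "AND table_name = 'changelog' AND column_name = 'delta_set'")) {
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                // A delta_set column means the old dbdeploy 2.X layout is still in place.
                System.out.println(rs.getInt(1) > 0
                        ? "Old 2.X changelog format - drop and recreate it"
                        : "New 3.X changelog format");
            }
        }
    }
}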

Related

I can't see my created tables in HSQLDB, what could be the issue?

My environment is HSQLDB, Maven and spring-boot.
I have created 2 entity POJOs. I do see the CREATE TABLE commands in the testedb.log file. But when I open the Data Source Explorer in Eclipse, I can't see my tables, although I do see all the system tables.
I have looked at this question too, but to no avail: Where can I see the HSQL database and tables
Here is my partial pom.xml:
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>2.4.0</version>
<scope>runtime</scope>
</dependency>
Here is my partial application.properties:
# DataSource
#spring.datasource.driverClassName=org.hsqldb.jdbc.JDBCDriver
spring.datasource.url=jdbc:hsqldb:file:resources/db/testedb;DATABASE_TO_UPPER=false
#spring.datasource.url=jdbc:hsqldb:mem:memTestdb
spring.datasource.username=sa
spring.datasource.password=
# Hibernate
spring.jpa.show-sql=true
#spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.hibernate.ddl-auto=create
And below is my HSQLDB created on disk:
HSQLDB folder in my workspace
Here is my partial testedb.script:
SET FILES LOG SIZE 50
CREATE USER SA PASSWORD DIGEST 'd41d8cd98f00b204e9800998ecf8427e'
ALTER USER SA SET LOCAL TRUE
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
SET SCHEMA PUBLIC
CREATE SEQUENCE PUBLIC.HIBERNATE_SEQUENCE AS INTEGER START WITH 1
CREATE MEMORY TABLE PUBLIC.ENQUIRY_HISTORY(ID BIGINT NOT NULL PRIMARY KEY,FROM_AMOUNT DOUBLE NOT NULL,FROM_CURRENCY VARCHAR(255) NOT NULL,QUERY_DATE TIMESTAMP NOT NULL,TO_AMOUNT DOUBLE NOT NULL,TO_CURRENCY VARCHAR(255) NOT NULL,USER_ID BIGINT NOT NULL,VERSION INTEGER NOT NULL)
CREATE MEMORY TABLE PUBLIC.USERS(ID BIGINT NOT NULL PRIMARY KEY,EMAIL VARCHAR(255) NOT NULL,LAST_LOGIN TIMESTAMP NOT NULL,PASSWORD VARCHAR(255) NOT NULL,VERSION VARCHAR(255) NOT NULL)
ALTER SEQUENCE SYSTEM_LOBS.LOB_ID RESTART WITH 1
ALTER SEQUENCE PUBLIC.HIBERNATE_SEQUENCE RESTART WITH 1
SET DATABASE DEFAULT INITIAL SCHEMA PUBLIC
GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.SQL_IDENTIFIER TO PUBLIC
Note above that the CREATE TABLE statements contain the word MEMORY even though I created a file DB.
And here is my testedb.log:
/*C12*/SET SCHEMA PUBLIC
drop table enquiry_history if exists
drop table users if exists
drop sequence hibernate_sequence if exists
create sequence hibernate_sequence start with 1 increment by 1
create table enquiry_history (id bigint not null, from_amount float not null, from_currency varchar(255) not null, query_date timestamp not null, to_amount float not null, to_currency varchar(255) not null, user_id bigint not null, version integer not null, primary key (id))
create table users (id bigint not null, email varchar(255) not null, last_login timestamp not null, password varchar(255) not null, version varchar(255) not null, primary key (id))
/*C14*/SET SCHEMA PUBLIC
DISCONNECT
/*C17*/SET SCHEMA PUBLIC
And finally, here is a screenshot of my Data Source Explorer:
Data Source Explorer
Any pointer will be awesome, thanks for your time.
file and mem are in-process modes. For testing/debugging, if you need concurrent access to the data from another process, start the database in Server mode.
Check the various available modes here.
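As an illustration, here is a minimal sketch of starting the same file database in Server mode from Java, so that the Spring Boot app and Eclipse's Data Source Explorer can connect concurrently. The port and database path are assumptions based on the question's setup:
import org.hsqldb.server.Server;

public class HsqlServerStarter {
    public static void main(String[] args) {
        Server server = new Server();
        // Serve the file database referenced in application.properties.
        server.setDatabaseName(0, "testedb");
        server.setDatabasePath(0, "file:src/main/resources/db/testedb");
        server.setPort(9001); // HSQLDB's default server port
        server.start();
        // Clients can now connect concurrently with:
        // jdbc:hsqldb:hsql://localhost:9001/testedb
    }
}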
I was able to see the tables using the built-in HSQLDB interface; not a fancy one, but it still works for me.
I used the following command (listed in this answer: https://stackoverflow.com/a/35240141/8610216):
java -cp /path/to/hsqldb.jar org.hsqldb.util.DatabaseManager
Then specify the path to your database:
jdbc:hsqldb:file:mydb
There was one more thing I was doing incorrectly: the path of the db. The correct setting in application.properties is spring.datasource.url=jdbc:hsqldb:file:src/main/resources/db/userx;DATABASE_TO_UPPER=false.

H2DB - executeUpdate() returns 0 or 1 on DELETE depending on table definition

I wonder if someone could explain the behaviour of the H2 JDBC driver when deleting an entry from a rather simple table.
When using the following table definition, the method executeUpdate() for a PreparedStatement instance returns 1 if one entry has been deleted (expected behaviour).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL
);
When adding a PRIMARY KEY constraint on the CODE column, the same method returns 0 although the entry gets deleted successfully (behaviour not expected).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
Most interestingly, when adding an INT-typed column to serve as the PRIMARY KEY, the return value is 1 again:
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("ID")
);
Is someone able to reproduce this behaviour and perhaps explain it to me?
I have included the current version of H2 DB using maven.
EDIT:
If I additionally add a UNIQUE constraint on the CODE column, the return value is 0 again ...
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL UNIQUE,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
EDIT 2:
The query used to delete an entry looks like the following (used in a PreparedStatement):
DELETE FROM MATERIAL WHERE CODE = ?
SOLUTION:
I'm sorry to have bothered you with this. Actually, there was no problem with the table definition or the JDBC driver. It was my test data: from earlier testing I had tried to INSERT two entries having the same CODE. It was a multiple-row insert, and obviously it failed when CODE was the PK or had a UNIQUE index. Thus, in these cases executeUpdate() could only return 0 because there was no data in the table at all.
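For anyone hitting the same symptom, here is a minimal sketch that reproduces this diagnosis against an in-memory H2 database (the table layout and values are simplified assumptions):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class DeleteReturnValueDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE MATERIAL (CODE VARCHAR(5) PRIMARY KEY, "
                     + "NAME VARCHAR(100) NOT NULL)");
            try {
                // Multi-row insert with a duplicate CODE: the statement fails
                // as a whole, leaving the table empty.
                st.executeUpdate("INSERT INTO MATERIAL VALUES ('A', 'first'), ('A', 'second')");
            } catch (SQLException e) {
                System.out.println("Insert failed: " + e.getMessage());
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "DELETE FROM MATERIAL WHERE CODE = ?")) {
                ps.setString(1, "A");
                // Prints 0: there was nothing to delete.
                System.out.println("Rows deleted: " + ps.executeUpdate());
            }
        }
    }
}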

Automatic Casting of Default Values on Insert

This may be a feature rather than a bug, so I thought I'd post it on SO instead of MariaDB's Jira.
Yesterday I updated my MariaDB install on Homebrew from 10.1.23 to 10.2.6. All my selects are still working correctly, but now in my legacy app, I get a bunch of errors on inserts where the code is "assuming" MariaDB will set a default value. For example...
INSERT INTO table SET
email = 'some@email.com', -- varchar
phone_number = '', -- bigint
ts = '2017-05-30 23:51:23', -- datetime
some_val = '689728' -- varchar
This code was working fine before, but since I've upgraded I now get the following couple of errors...
Error 1 (is_some_toggle is a tinyint and is not set in the query above; the code assumes MariaDB will just insert a 0):
Field 'is_some_toggle' doesn't have a default value
Error 2 (after I set a default value for is_some_toggle):
Incorrect integer value: '' for column 'phone_number' at row 1
I'm guessing this is a feature, not a bug. I've looked through the changelogs for the 10.2 series and nothing jumps out, but there's a lot, so I could have missed it. I saw a server config for OLD_SQL, but that didn't seem to be what I was looking for. Any thoughts?
macOS Sierra 10.12.5 btw
CREATE TABLE `table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(200) NOT NULL,
`phone_number` bigint(20) NOT NULL,
`some_val` varchar(6) NOT NULL,
`ts` datetime DEFAULT NULL,
`is_some_toggle` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `email_code` (`email`(15),`some_val`),
KEY `phone_number_code` (`phone_number`,`some_val`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
Looks like going through the changelog VERY slowly worked for me.
sql_mode was updated, as outlined in this article:
Option | Old default value | New default value
sql_mode | NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION | STRICT_TRANS_TABLES, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION
I changed it back to the default and I'm good as gold.
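For reference, here is a sketch of how the old mode can be restored at runtime over JDBC. The connection details are placeholders, the account needs the SUPER privilege for SET GLOBAL, and editing my.cnf is the permanent alternative:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RestoreOldSqlMode {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/mydb", "root", "password");
             Statement st = con.createStatement()) {
            // Drop STRICT_TRANS_TABLES so inserts fall back to implicit
            // defaults, as they did under 10.1.
            st.execute("SET GLOBAL sql_mode = 'NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'");
        }
    }
}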

How to get DataMapper to not specify a NULL default value

I'm getting an error when I use DataMapper's auto_upgrade! method to add fields in an SQLite3 db based on the properties defined in my code:
DataObjects::SyntaxError at /history
Cannot add a NOT NULL column with default value NULL
An example of an offending line would be:
property :fieldname, Text, required: true
The error goes away if I (a) remove the line, (b) remove required: true, (c) change true to false, or (d) add a default value.
SQLite does not require a default value to be specified for every field, so this problem is definitely with DataMapper, not SQLite.
How can I get around this, so DataMapper can specify that a field is required without assuming that not specifying a default value automatically means the default should be NULL?
(If you want to know more about why I'm designing this way: there will be another client process accessing SQLite and logging data into the SQLite database, while a Sinatra app will be pulling data out of the db for display in a browser. I want the database therefore to enforce the field requirements, but DM's auto_upgrade is a very convenient way to be able to upgrade the database as needed—so long as it doesn't foul things up in the process.)
You are requiring the field, hence it cannot be NULL. That is a simple table property.
When DataMapper runs auto_upgrade!, it runs SQL commands like this on the database:
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
And doing something like this won't work.
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL DEFAULT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
I tested it in MySQL and this is the error.
02:52:43 CREATE TABLE TestTest ( P_Id int NOT NULL, lname varchar(255) NOT NULL DEFAULT NULL, fname varchar(255), Address varchar(255), City varchar(255) )
Error Code: 1067. Invalid default value for 'lname' (0.062 sec)
Correction: SQLite does allow you to create a table with such properties. However, trying to insert anything into that table then throws an error, whether the field is NULL or not. So DataMapper might be doing some sanitization for you before even creating the table.
It is not clear to me if you are creating a new table or modifying an existing one.
If you have an existing table and are trying to alter it with a column defined as NOT NULL, then you must provide a default value so that the existing rows can be migrated. The RDBMS needs to know what to put in the field for pre-existing rows.
If you are creating a new table, then the property definition you have should be fine.
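To make this distinction concrete outside DataMapper, here is a minimal JDBC sketch against SQLite (using the xerial sqlite-jdbc driver; the table and column names are made up for illustration):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SqliteNotNullDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement st = con.createStatement()) {
            // Creating a new table with a NOT NULL column and no default is fine.
            st.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, note TEXT NOT NULL)");
            try {
                // Adding a NOT NULL column to an existing table without a
                // default fails: SQLite cannot fill pre-existing rows.
                st.execute("ALTER TABLE history ADD COLUMN source TEXT NOT NULL");
            } catch (SQLException e) {
                System.out.println("As expected: " + e.getMessage());
            }
            // With a default value, the same ALTER succeeds.
            st.execute("ALTER TABLE history ADD COLUMN source TEXT NOT NULL DEFAULT ''");
        }
    }
}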

Table creation with h2 database

I am new to H2. I am using H2 in Spring embedded mode with Hibernate. I am trying to execute the following scripts on H2.
CREATE TABLE acct_authority (
id bigint(20) NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
value varchar(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY name (name)
);
The table acct_authority is created without any error. But then I create another table with the following script:
CREATE TABLE acct_role (
id bigint(20) NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
PRIMARY KEY (id),
UNIQUE KEY name (name)
);
It shows an error saying that the constraint name already exists. What mistake did I make?
You tried to create two constraints with the same name. As you can see, both CREATE TABLE statements contain the following:
UNIQUE KEY name (name)
The result is that the first statement creates a constraint named name, and the second fails because a constraint named name already exists. The problem can be solved by using unique names. In general it also makes sense to give database objects slightly more descriptive names. For example, you could use something like the following:
UNIQUE KEY acct_authority_name_UNIQUE (name)
...
UNIQUE KEY acct_role_name_UNIQUE (name)
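Putting it together, here is a minimal sketch against an in-memory H2 database (in MySQL compatibility mode, since the DDL uses auto_increment; the URL is an assumption) showing both tables created with uniquely named constraints:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class UniqueConstraintNamesDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:acct;MODE=MySQL");
             Statement st = con.createStatement()) {
            // Each UNIQUE constraint gets its own descriptive name,
            // so the second CREATE TABLE no longer clashes with the first.
            st.execute("CREATE TABLE acct_authority ("
                    + "id bigint NOT NULL auto_increment, "
                    + "name varchar(255) NOT NULL default '', "
                    + "value varchar(255) NOT NULL, "
                    + "PRIMARY KEY (id), "
                    + "CONSTRAINT acct_authority_name_UNIQUE UNIQUE (name))");
            st.execute("CREATE TABLE acct_role ("
                    + "id bigint NOT NULL auto_increment, "
                    + "name varchar(255) NOT NULL default '', "
                    + "PRIMARY KEY (id), "
                    + "CONSTRAINT acct_role_name_UNIQUE UNIQUE (name))");
            System.out.println("Both tables created without a name clash.");
        }
    }
}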
