I'm having a problem with the H2 setup in Spring. I placed all my SQL data into one file named data.sql; however, when I rename it to anything else, it is no longer picked up. Any idea how to set up multiple separate files?
Let's say I have a table User with some data inserted, but I'm aiming to have two separate files, e.g. user-schema and user-data, and so on: multiple schema files with a matching number of insert-data files.
My current spring.properties looks as follows:
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect
spring.h2.console.enabled=true
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=create-drop
spring.h2.console.path=/h2
My current data.sql looks as follows:
DROP TABLE IF EXISTS User;
CREATE TABLE User
(
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(20) NOT NULL,
surname VARCHAR(20) NOT NULL,
role VARCHAR(20) NOT NULL,
email VARCHAR(30) NOT NULL
);
INSERT INTO User (name, surname, role, email)
VALUES ('Thor', '', 'admin', 'thor@marvel.com'),
('Hulk', '', 'user', 'hulk@marvel.com'),
('Venom', '', 'user', 'venom@marvel.com'),
('Spider', 'Man', 'user', 'spider-man@marvel.com'),
('Super', 'Man', 'user', 'super-man@marvel.com');
If you want to split your data input across many files, you have to tell Spring where those files are located:
spring.datasource.data=classpath*:sql/mock-*.sql
spring.datasource.initialization-mode=always
In my project we keep our .sql files in the resources/sql folder, and every file is named mock-*.sql, e.g. mock-user.sql, mock-role.sql; that is why I have a wildcard in the path. In any case, spring.datasource.data has to point to the file(s) containing the SQL inserts.
spring.datasource.initialization-mode=always tells Spring to always initialize the DB from those files. You should set that property, since you create-drop the database each time your tests start.
Spring documentation about data and schema initialization: https://docs.spring.io/spring-boot/docs/2.1.x/reference/html/howto-database-initialization.html
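For the layout described in the question (paired schema and data files per table), a minimal sketch of the properties could look like this; the sql/ folder and the *-schema.sql / *-data.sql naming are assumptions, not a fixed convention:
spring.datasource.schema=classpath*:sql/*-schema.sql
spring.datasource.data=classpath*:sql/*-data.sql
spring.datasource.initialization-mode=always
Both properties also accept a comma-separated list of locations, so you can list user-schema.sql, user-data.sql and so on explicitly instead of relying on wildcards.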
Related
When I run tests, the following migration file causes this error:
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "ALTER TABLE ACCOUNT
ADD IS_PROVIDER_ROOT_ACCOUNT VARCHAR(1) NOT NULL,[*]
ADD PROVIDER_ORGANISATION_ID VARCHAR(255) NULL"; SQL statement:
alter table account
add is_provider_root_account varchar(1) not null,
add provider_organisation_id varchar(255) null [42000-200]
The migration in question:
alter table account
add is_provider_root_account varchar(1) not null,
add provider_organisation_id varchar(255) null;
The thing is, if I remove any one of the adds there are no errors. So what can I do here?
My testing configuration file:
spring.datasource.url=jdbc:h2:mem:testdb:MODE=MYSQL
spring.datasource.username=sa
spring.datasource.password=secret
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.h2.console.enabled=true
Looking at the H2 syntax (note parentheses), when adding multiple columns, one should do:
alter table account
add (
is_provider_root_account varchar(1) not null,
provider_organisation_id varchar(255) null
);
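Alternatively, and a bit more portable across databases, the same change can be made with two separate statements; a sketch, assuming nothing else about the migration:
alter table account
add is_provider_root_account varchar(1) not null;
alter table account
add provider_organisation_id varchar(255) null;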
Spring Boot Hibernate always drops and recreates ALL the indexes on server startup
spring.jpa.hibernate.ddl-auto = update
Hibernate: alter table product_category_1 drop index UKkqfeccp86g07ipixmg25dnfia
Hibernate: alter table product_category_1 add constraint
UKkqfeccp86g07ipixmg25dnfia unique (org_id, pr_ty_id, name)
Hibernate: alter table product_category_2 drop index UKqa7n4ip0gfa4qpg034ba7bkob
Hibernate: alter table product_category_2 add constraint UKqa7n4ip0gfa4qpg034ba7bkob unique (org_id, pr_ca1_id, name)
If your column type is longtext, the index is not actually created, so Hibernate tries to recreate it on every startup.
I was experiencing the same thing, where starting my application resulted in my unique constraints being dropped and re-added:
Hibernate: alter table category drop constraint if exists UK_CATEGORY_PARENT_NAME
Hibernate: alter table category add constraint UK_CATEGORY_PARENT_NAME unique (parent_id, name)
After much internet digging and debugging, I found that simply adding the following to my application properties stopped the constraints from being dropped:
spring.jpa.properties.hibernate.schema_update.unique_constraint_strategy=RECREATE_QUIETLY
I observed that a few of the unique keys are dropped and created again and again with the property
spring.jpa.hibernate.ddl-auto = update
Hibernate executes the drop and create of the unique index every time you restart the project. You have to move the uniqueConstraints declared inside the @Table annotation down to a column-level uniqueness check.
@Table(name = "XXXX",
uniqueConstraints = { @UniqueConstraint(columnNames = { "tempUserId" }) }
)
Resolve it by adding unique = true at column level:
@Column(unique = true)
private Long tempUserId;
And delete the uniqueConstraints entry from the @Table annotation.
This will resolve the problem.
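Putting the two snippets together, a hypothetical entity (class, table and field names are only for illustration, and the id mapping is an assumption) would look roughly like this:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
@Entity
@Table(name = "XXXX") // no uniqueConstraints here any more
public class TempUser {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // id generation is just an example
    private Long id;
    // uniqueness is now declared at column level
    @Column(unique = true)
    private Long tempUserId;
    // getters and setters omitted
}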
Spring Boot 2.4.3.
schema-h2.sql seems to have no effect.
The H2 database is created, but no tables are created.
schema-h2.sql:
DROP TABLE IF EXISTS Blogger;
CREATE TABLE Blogger(
id bigint NOT NULL,
name varchar(100),
age int,
PRIMARY KEY (id)
);
DROP TABLE IF EXISTS Story;
CREATE TABLE Story(
id bigint NOT NULL,
title varchar(100),
content varchar(400),
posted date,
blogger_id int,
PRIMARY KEY (id)
);
application.properties:
spring.thymeleaf.cache=false
spring.web.locale-resolver=fixed
spring.web.locale=en
spring.h2.console.enabled=true
spring.h2.console.path=/db
spring.datasource.url=jdbc:h2:mem:testdb
Any advice?
Thank you!
Found the solution: I needed to add the spring.datasource.platform=h2 property.
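For context: the default platform is all, so without that property only schema.sql / schema-all.sql are picked up. A minimal sketch of the relevant lines (the rest of application.properties unchanged):
spring.datasource.platform=h2
# with this set, Spring Boot also runs schema-h2.sql (and data-h2.sql, if present) from the classpath root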
My environment is HSQLDB, Maven and Spring Boot.
I have created 2 entity POJOs. I do see the CREATE TABLE commands in the testedb.log file, but when I open the Data Source Explorer in Eclipse, I can't see my tables, albeit I do see all the system tables.
I have looked at this question too, but to no avail: Where can I see the HSQL database and tables
Here is my partial pom.xml:
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>2.4.0</version>
<scope>runtime</scope>
</dependency>
Here is my partial application.properties:
# DataSource
#spring.datasource.driverClassName=org.hsqldb.jdbc.JDBCDriver
spring.datasource.url=jdbc:hsqldb:file:resources/db/testedb;DATABASE_TO_UPPER=false
#spring.datasource.url=jdbc:hsqldb:mem:memTestdb
spring.datasource.username=sa
spring.datasource.password=
# Hibernate
spring.jpa.show-sql=true
#spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.hibernate.ddl-auto=create
And below is my HSQLDB created on disk (screenshot: HSQLDB folder in my workspace).
Here is my partial testedb.script:
SET FILES LOG SIZE 50
CREATE USER SA PASSWORD DIGEST 'd41d8cd98f00b204e9800998ecf8427e'
ALTER USER SA SET LOCAL TRUE
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
SET SCHEMA PUBLIC
CREATE SEQUENCE PUBLIC.HIBERNATE_SEQUENCE AS INTEGER START WITH 1
CREATE MEMORY TABLE PUBLIC.ENQUIRY_HISTORY(ID BIGINT NOT NULL PRIMARY KEY,FROM_AMOUNT DOUBLE NOT NULL,FROM_CURRENCY VARCHAR(255) NOT NULL,QUERY_DATE TIMESTAMP NOT NULL,TO_AMOUNT DOUBLE NOT NULL,TO_CURRENCY VARCHAR(255) NOT NULL,USER_ID BIGINT NOT NULL,VERSION INTEGER NOT NULL)
CREATE MEMORY TABLE PUBLIC.USERS(ID BIGINT NOT NULL PRIMARY KEY,EMAIL VARCHAR(255) NOT NULL,LAST_LOGIN TIMESTAMP NOT NULL,PASSWORD VARCHAR(255) NOT NULL,VERSION VARCHAR(255) NOT NULL)
ALTER SEQUENCE SYSTEM_LOBS.LOB_ID RESTART WITH 1
ALTER SEQUENCE PUBLIC.HIBERNATE_SEQUENCE RESTART WITH 1
SET DATABASE DEFAULT INITIAL SCHEMA PUBLIC
GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.SQL_IDENTIFIER TO PUBLIC
Please note above that the CREATE TABLE statements contain the word MEMORY even though I have created a file DB.
And my testedb.log:
/*C12*/SET SCHEMA PUBLIC
drop table enquiry_history if exists
drop table users if exists
drop sequence hibernate_sequence if exists
create sequence hibernate_sequence start with 1 increment by 1
create table enquiry_history (id bigint not null, from_amount float not null, from_currency varchar(255) not null, query_date timestamp not null, to_amount float not null, to_currency varchar(255) not null, user_id bigint not null, version integer not null, primary key (id))
create table users (id bigint not null, email varchar(255) not null, last_login timestamp not null, password varchar(255) not null, version varchar(255) not null, primary key (id))
/*C14*/SET SCHEMA PUBLIC
DISCONNECT
/*C17*/SET SCHEMA PUBLIC
And finally, here is a screenshot of the database in the Data Source Explorer.
Any pointer will be awesome, thanks for your time.
file and mem are in-process modes. For testing/debugging, if you need concurrent access to the data from another process, start the database in Server mode.
Check the various available modes here.
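As a sketch (the jar path and database alias are placeholders), serving the existing file database so another process such as Eclipse can connect to it could look like this:
java -cp /path/to/hsqldb.jar org.hsqldb.server.Server --database.0 file:resources/db/testedb --dbname.0 testedb
The Data Source Explorer would then connect with the client URL jdbc:hsqldb:hsql://localhost/testedb instead of the in-process file URL.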
I was able to see the tables using the built-in HSQLDB interface; not a fancy one, but it still works for me.
I used the following (listed in this answer: https://stackoverflow.com/a/35240141/8610216):
java -cp /path/to/hsqldb.jar org.hsqldb.util.DatabaseManager
And then specify the path to your database:
jdbc:hsqldb:file:mydb
There was one more thing that I was doing incorrectly: the path of the db. The correct setting is spring.datasource.url=jdbc:hsqldb:file:src/main/resources/db/userx;DATABASE_TO_UPPER=false in application.properties.
The problem is an existing Oracle table (that I cannot change) with mixed-case column names, e.g.
create table BADTAB ( ID varchar(16) not null, "Name" varchar2(64),
constraint I_BADTAB_PK PRIMARY KEY(ID) );
When I try to do a DBUnit INSERT from an XML dataset it fails
Caused by: java.sql.SQLException: ORA-00904: "NAME": invalid identifier
When I enclose the column name in quotes it fails
<column>"Name"</column>
org.dbunit.dataset.NoSuchColumnException: BADTAB."NAME" - (Non-uppercase input column: "ReadingsPres") in ColumnNameToIndexes cache map.
Note that the map's column names are NOT case sensitive.
at org.dbunit.dataset.AbstractTableMetaData.getColumnIndex(AbstractTableMetaData.java:117)
...
QUESTION:
How can I override DBUnit's column metadata to make it recognize the lowercase column name?
What classes do I override and how do I inject them into the DBUnit test run?
There have been some previous discussions around this, e.g. org.dbunit.dataset.NoSuchTableException: Did not find table 'xxx' in schema 'null'.
You should be able to set a database configuration property to cater for case-sensitive names:
DatabaseConfig config = databaseConnection.getConfig();
config.setProperty(DatabaseConfig.FEATURE_CASE_SENSITIVE_TABLE_NAMES, true);
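For completeness, a sketch of where that setting usually lives; the JDBC connection and schema name below are placeholders, not something taken from the question:
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
// wrap the plain JDBC connection used by the test (schema name is a placeholder)
IDatabaseConnection databaseConnection = new DatabaseConnection(jdbcConnection, "MYSCHEMA");
DatabaseConfig config = databaseConnection.getConfig();
// keep quoted, mixed-case identifiers instead of letting DBUnit upper-case them
config.setProperty(DatabaseConfig.FEATURE_CASE_SENSITIVE_TABLE_NAMES, true);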