Not running DDL scripts with Spring Batch + Spring Boot + SQL Server application

My project has a requirement where the user uploads a CSV file which has to be pushed to a SQL Server database.
I am following the basic example below to load a CSV file into a SQL Server database.
https://github.com/michaelcgood/Spring-Batch-CSV-Example
We are running this repo after changing the datasource: here we use SQL Server instead of the in-memory DB.
Here is the addition to the POM file:
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>sqljdbc4</artifactId>
<version>4.0</version>
</dependency>
Additions to the application.properties file:
spring.datasource.url=jdbc:sqlserver://localhost;databaseName=springbootdb
spring.datasource.username=sa
spring.datasource.password=Projects#123
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
Below is the error generated while running the code base. We could not find the exact root cause; we tried the multiple approaches suggested online but had no luck. While running this Spring Batch + Spring Boot 2.0.4 application we are facing the errors below:
o.s.boot.autoconfigure.jdbc.Datasource.initailizer disabled(not running DDL scripts)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE
where JOB_NAME = ? order by JOB_INSTANCE_ID desc]; nested exception is java.sql.SQLServerException: Invalid object name 'BATCH_JOB_INSTANCE'.
We are assuming the root cause is one of the below:
1. We are using the Spring starter project for creating the Spring Batch configuration, so we are not aware of how to define the default table schema and so on.
2. Maybe we don't have access and/or we need to define the schema. http://forum.spring.io/forum/spring-projects/batch/63771-badsqlgrammarexception-running-commandlinejobrunner
3. We are not sure why the error says Invalid object name 'BATCH_JOB_INSTANCE' instead of object does not exist.
What is the list of SQL queries to create the metadata tables for a Spring Batch application, so we can create them manually before running the application?
4. We have been struggling for a couple of days to find the root cause of this issue. Please suggest an approach based on the above GitHub code sample.
Any help here would be appreciated. Please let us know if we missed anything. Thanks in advance.

The project you are linking to uses an in-memory HSQLDB instance, so Spring Boot will automatically create the Spring Batch meta-data tables for you.
If you want to adapt the sample to your project and use SQL Server (which is not embedded in your JVM), Spring Boot will not create the Spring Batch tables (unless you instruct it to do so). In this case, you have two options:
Run the Spring Batch DDL script for SQL Server yourself against the database server you are using before running your app
Or instruct Spring Boot to do it for you by specifying the following properties in your application.properties file:
spring.batch.initialize-schema=always
spring.batch.schema=classpath:org/springframework/batch/core/schema-sqlserver.sql
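If you go with the second option, those two properties are all you need. If you prefer the first option but want the application itself to apply the script at startup, the same DDL file that ships inside spring-batch-core can be run through a DataSourceInitializer bean. The snippet below is only a sketch (not part of the original answer) and assumes a single SQL Server DataSource configured through the spring.datasource.* properties shown above:

// Sketch: programmatically apply the Spring Batch DDL shipped in spring-batch-core.
// Assumes one SQL Server DataSource bean configured via the spring.datasource.* properties.
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DataSourceInitializer;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@Configuration
public class BatchSchemaConfig {

    @Bean
    public DataSourceInitializer batchSchemaInitializer(DataSource dataSource) {
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator(
                new ClassPathResource("org/springframework/batch/core/schema-sqlserver.sql"));
        populator.setContinueOnError(true); // the BATCH_* tables will already exist on later runs

        DataSourceInitializer initializer = new DataSourceInitializer();
        initializer.setDataSource(dataSource);
        initializer.setDatabasePopulator(populator);
        return initializer;
    }
}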

Related

How should I control order of execution between Liquibase and springboot data.sql?

We have an (internal to the company, external to the project) library (jar) that includes some Liquibase scripts to add tables to a schema to support the functionality of that library.
Using Spring Boot and Maven to run integration tests with H2, we have been using SQL files, listed in property files, to initialise the DB.
We want to be able to add data to the tables created by the external (to the project) library for the ITs, but we are finding that the tables haven't been created by Liquibase when Spring Boot/Spring Data attempts to run the insert statements in our SQL files.
Given the errors we're seeing (tables not existing when Spring attempts to run the insert.sql files), it looks like Spring is executing those files before Liquibase has done its thing.
How can I ensure the Liquibase config run by the library to create tables has completed before Spring does its thing with running the SQL files specified by the spring.datasource.data property?
We don't really want to include the test data in the library (which was working, but introduced other issues we are trying to work around, with Liquibase inserting test data into the production DB).
What about using a different context for your tests?
So you will have an application.properties in your test folder, and there you will define another changelog that includes all the changelogs that are needed (even from the library), and you will also include the .sql file that you are running, probably with JPA. Try to look here if that helps.
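If including the .sql file in a test changelog is not an option, another way to force the ordering (not mentioned in the answer above, and assuming Spring Boot's Liquibase auto-configuration registers its SpringLiquibase bean under the default name liquibase) is to load the test data through an initializer bean that explicitly depends on Liquibase, instead of through spring.datasource.data. A rough sketch, with test-data.sql as a placeholder for your insert scripts:

// Sketch only: run the test inserts after Liquibase has created the tables.
// Assumes the auto-configured SpringLiquibase bean is named "liquibase";
// the @TestConfiguration still has to be imported into the test (e.g. with @Import).
import javax.sql.DataSource;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.DependsOn;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DataSourceInitializer;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@TestConfiguration
public class TestDataConfig {

    @Bean
    @DependsOn("liquibase") // make sure Liquibase runs before the inserts
    public DataSourceInitializer testDataInitializer(DataSource dataSource) {
        DataSourceInitializer initializer = new DataSourceInitializer();
        initializer.setDataSource(dataSource);
        initializer.setDatabasePopulator(
                new ResourceDatabasePopulator(new ClassPathResource("test-data.sql"))); // placeholder script name
        return initializer;
    }
}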

Unable to run SQL script to initialize test data for Spring Boot test

I have a Spring Boot 2.3 reactive application with WebFlux and R2DBC. Normally it runs on an MS SQL database. I want unit tests to run on H2. I got to the point where the correct database driver is loaded based on which application.properties file is in use (main or test). But I can't figure out how to run SQL scripts to create the schema and load data.
Tried the following without success:
schema-XXX.sql and data-XXX.sql files; they work fine with non-reactive JPA.
@Sql annotation referring to the *.sql files in the resources directory.
ConnectionFactoryInitializer bean located in the same package as the application configuration files, but under the test source tree. @Configuration and @Bean are used properly, but the bean is not instantiated. Maybe this is the problem?
Nothing in the log suggests that there was even a failed attempt to execute *.sql files.
Thank you.
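For reference, a ConnectionFactoryInitializer bean of the kind described above usually looks like the sketch below. The package names assume Spring Framework 5.3+ (in Boot 2.3 the equivalent classes ship with spring-data-r2dbc under a different package), the script names are placeholders, and the configuration class still has to be picked up by the test context (e.g. via @Import on the test):

// Sketch of an R2DBC initializer that creates the schema and loads data for tests.
// Assumptions: Spring Framework 5.3+ package names; schema-h2.sql / data-h2.sql are
// placeholder script names on the test classpath.
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.r2dbc.connection.init.CompositeDatabasePopulator;
import org.springframework.r2dbc.connection.init.ConnectionFactoryInitializer;
import org.springframework.r2dbc.connection.init.ResourceDatabasePopulator;

@Configuration
public class TestDatabaseConfig {

    @Bean
    public ConnectionFactoryInitializer initializer(ConnectionFactory connectionFactory) {
        ConnectionFactoryInitializer initializer = new ConnectionFactoryInitializer();
        initializer.setConnectionFactory(connectionFactory);

        // Create the schema first, then load the test data.
        CompositeDatabasePopulator populator = new CompositeDatabasePopulator();
        populator.addPopulators(
                new ResourceDatabasePopulator(new ClassPathResource("schema-h2.sql")),
                new ResourceDatabasePopulator(new ClassPathResource("data-h2.sql")));
        initializer.setDatabasePopulator(populator);
        return initializer;
    }
}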

Spring Boot: Executing the Newer version of SQL file each time we rebuild the application

I have a Spring Boot application with PostgreSQL. Some of the tables are created using models, and other tables have to be created prior to the start of the application or while starting the application. That can be done by running an SQL file at startup.
But the DB tends to change over time: we may have to alter some of the tables, add some new tables without disturbing the existing data, etc.
Is there a way to add new SQL files and run only the SQL files which were not already run, each time we rebuild and rerun the Spring Boot application? And not run any of the SQL files at startup if everything has already been executed?
For your scenario Liquibase is the best solution. You can integrate Liquibase into your Spring Boot application.
Ex: https://javadeveloperzone.com/spring-boot/spring-boot-liquibase-example/
You can use Flyway. It allows you to have versioned SQL scripts:
flywaydb.org
flywaydb spring boot plugin
Examples:
Spring Boot Database Migrations with Flyway
Incrementally changing your db with java and flyway
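For illustration (not part of either answer above), this is roughly how Flyway's versioned-migration model works: scripts named V1__create_tables.sql, V2__add_column.sql, ... are applied in order, and Flyway records each applied version in its flyway_schema_history table so only pending scripts run on the next start. In a Spring Boot application you would normally just add the flyway-core dependency and put the scripts under src/main/resources/db/migration, but the same thing can be driven programmatically; the connection details below are placeholders:

// Sketch using Flyway's fluent API (Flyway 5+). The JDBC URL and credentials are placeholders.
import org.flywaydb.core.Flyway;

public class ManualMigration {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/mydb", "user", "password") // placeholder connection details
                .locations("classpath:db/migration") // where the versioned V*__*.sql scripts live
                .load();
        flyway.migrate(); // applies only the scripts that have not been run yet
    }
}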

spring boot hsqldb details

I have created two Spring Boot microservices for experimentation purposes; both use HSQLDB. Is there a way to see the HSQLDB details, like the name of the DB that Spring Boot created, the username and password, what tables are there in each one, and to query those tables?
You can use the built-in SQL functions and INFORMATION_SCHEMA views in HSQLDB to check these properties. See the Guide http://hsqldb.org/doc/2.0/guide/
But the quick and easy way is to use a file: database connection and check the *.script file of the database, which contains SQL statements that set the properties and create the tables.
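As an illustration of the INFORMATION_SCHEMA approach (a sketch, not part of the original answer), a CommandLineRunner inside one of the microservices can print the JDBC URL Spring Boot generated together with the tables it created, using the application's own DataSource:

// Sketch: dump the generated JDBC URL and the tables in the embedded HSQLDB instance.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SchemaDumpConfig {

    @Bean
    public CommandLineRunner dumpTables(DataSource dataSource) {
        return args -> {
            try (Connection conn = dataSource.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT table_schema, table_name FROM INFORMATION_SCHEMA.TABLES "
                       + "WHERE table_type = 'BASE TABLE'")) {
                System.out.println("Connected to: " + conn.getMetaData().getURL());
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "." + rs.getString(2));
                }
            }
        };
    }
}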

Spring Boot Spring Batch : Multiple DataSource without Spring batch metadata table

I am writing a Spring Boot application for Spring Batch, where the ItemReader reads the data from an Oracle database and the data is written into PostgreSQL, but I am getting the below error:
Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? order by JOB_INSTANCE_ID desc]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "batch_job_instance" does not exist
Position: 39
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:234) ~[spring-jdbc-5.0.8.RELEASE.jar:5.0.8.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72) ~[spring-jdbc-5.0.8.RELEASE.jar:5.0.8.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1402) ~[spring-jdbc-5.0.8.RELEASE.jar:5.0.8.RELEASE]
I don't want to create the Spring Batch metadata tables; my application does not need job monitoring. Please suggest an approach. Thanks in advance!!
According to your exception, it looks like Spring Batch is configured to use the Postgres data source for its metadata and it does not find the BATCH_JOB_INSTANCE table.
If you don't want to use meta-data tables, you can:
use the MapJobRepositoryFactoryBean to create an in-memory JobRepository (see the sketch below)
use an embedded database (H2, HSQL, etc) with the ResourcelessTransactionManager. Spring Boot will automatically create tables in the in-memory datasource for you (more details here: https://docs.spring.io/spring-boot/docs/2.0.4.RELEASE/reference/htmlsingle/#howto-initialize-a-spring-batch-database).
Then you can configure an Oracle datasource for your reader and a Postgres datasource for your writer.
Nothing prevents you from having multiple datasources in your Spring Batch app; you just need to configure which one to use for your business logic and which one is used by Spring Batch for its internal mechanics.
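For the first option, a common shortcut (a sketch only, not spelled out in this answer; it assumes Spring Batch 4.x, where DefaultBatchConfigurer falls back to a Map-based JobRepository when it has no DataSource) is to extend DefaultBatchConfigurer and simply not give it a DataSource:

// Sketch: with no DataSource set, DefaultBatchConfigurer builds its JobRepository from
// MapJobRepositoryFactoryBean with a ResourcelessTransactionManager, so no BATCH_* tables
// are needed in either Oracle or Postgres.
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.stereotype.Component;

@Component
public class InMemoryBatchConfigurer extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // Deliberately empty: overriding the autowired setter keeps both the Oracle and the
        // Postgres DataSource out of Spring Batch, so job meta-data stays in memory.
    }
}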
There are similar questions that might help, I'm adding them here for reference:
Use of multiple DataSources in Spring Batch
How to java-configure separate datasources for spring batch data and business data? Should I even do it?
Use the below properties:
spring.batch.initialize-schema=always or never
Or else you can create the tables in your DB yourself.
You can also find the schema scripts for the respective databases in the jar below:
spring-batch-core.jar in your Maven dependencies, if you are using Maven,
under classpath:/org/springframework/batch/core/
