How to insert data into a table on Spring Batch application startup

I am new to Spring Batch programming and stuck in the following scenario.
I have a Spring Batch application, and I want to run a SQL INSERT script every time the app starts.
Scenario:
My Spring Batch application has 2 databases: (1) HSQL and (2) MySQL.
All the tables internal to Spring Batch are created in HSQL, and all the tables required by the application are in MySQL. I want to run an INSERT SQL script against MySQL every time the application starts.
I went through many articles and, as suggested in most of them, created data.sql and data-mysql.sql and ran my app. But it tries to find the table in HSQL and throws an "Object not found" exception.
Is there any way I can execute a SQL script so that it connects to MySQL and then does the inserts?

You can use a JobExecutionListener's beforeJob callback together with Spring's ScriptUtils.
Sample Code:
public class MyJobListener implements JobExecutionListener {

    @Autowired
    private DataSource yourMySQLDataSource;

    // Called before the job starts: run the script against the MySQL datasource
    @Override
    public void beforeJob(JobExecution jobExecution) {
        try (Connection connection = yourMySQLDataSource.getConnection()) {
            ScriptUtils.executeSqlScript(connection, new ClassPathResource("your.sql"));
        } catch (SQLException e) {
            throw new IllegalStateException("Could not execute your.sql", e);
        }
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
    }
}
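For the listener to actually be invoked, register it on the job. A minimal sketch, assuming the pre-Spring-Batch-5 JobBuilderFactory API; the job, step, and bean names here are illustrative:

@Bean
public Job myJob(JobBuilderFactory jobs, Step myStep, MyJobListener myJobListener) {
    return jobs.get("myJob")
            .listener(myJobListener)  // beforeJob() runs before the first step
            .start(myStep)
            .build();
}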
Reference: ScriptUtils (org.springframework.jdbc.datasource.init.ScriptUtils)

Related

SpringBoot, test containers to test queries sent to external DB

My Spring app has its own DB for persistence.
The same app needs to send ad-hoc queries to external databases.
Queries are provided by users.
App takes SQL query provided by user
App takes external database type (postgres / oracle / whatever jdbc)
App submits adhoc query in runtime to external DB
App returns result as json to user
Is there any way to utilize Testcontainers in order to test this functionality?
My goal is:
Write tests for every supported DB
each test starts a test container with a supported DB (some subset of these, I suppose: https://www.testcontainers.org/modules/databases/)
each test uploads sample data to the container DB
each test runs a set of "must work" queries against it.
I see many examples where the app itself is tested against Testcontainers, but can I just start a container without plugging it in as the app's persistence DB?
can I just start a container without plugging it in as the app's persistence DB?
Yes, this is perfectly possible.
Testcontainers in itself has nothing to do with Spring or Spring Boot.
What you would do is:
pick the container you want to use (different containers for different databases)
instantiate the container
start it up
construct a DataSource from it
Use that DataSource for your tests.
Spring Data JDBC does exactly that to run tests against various databases.
I've added the class doing that for MySQL at the end.
It is a Spring application context configuration, but you could put the same logic in a JUnit before method, a JUnit 4 rule, a JUnit 5 extension, or just a normal method that you call at the start of your test.
@Configuration
@Profile("mysql")
class MySqlDataSourceConfiguration extends DataSourceConfiguration {

    private static final MySQLContainer MYSQL_CONTAINER = new MySQLContainer().withConfigurationOverride("");

    static {
        MYSQL_CONTAINER.start();
    }

    /*
     * (non-Javadoc)
     * @see org.springframework.data.jdbc.testing.DataSourceConfiguration#createDataSource()
     */
    @Override
    protected DataSource createDataSource() {
        MysqlDataSource dataSource = new MysqlDataSource();
        dataSource.setUrl(MYSQL_CONTAINER.getJdbcUrl());
        dataSource.setUser(MYSQL_CONTAINER.getUsername());
        dataSource.setPassword(MYSQL_CONTAINER.getPassword());
        dataSource.setDatabaseName(MYSQL_CONTAINER.getDatabaseName());
        return dataSource;
    }

    @PostConstruct
    public void initDatabase() throws SQLException, ScriptException {
        // Reset the test database before the context is used
        ScriptUtils.executeSqlScript(createDataSource().getConnection(),
                new ByteArrayResource("DROP DATABASE test;CREATE DATABASE test;".getBytes()));
    }
}
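If you would rather skip Spring configuration entirely, the same four steps fit in plain JUnit 5. A minimal sketch, assuming the Testcontainers mysql module and JUnit 5 are on the classpath; the class, table, and query are illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.MySQLContainer;

class ExternalDbQueryTest {

    private static final MySQLContainer<?> MYSQL = new MySQLContainer<>("mysql:8");

    @BeforeAll
    static void startContainer() {
        MYSQL.start();
    }

    @AfterAll
    static void stopContainer() {
        MYSQL.stop();
    }

    @Test
    void runsAdHocQuery() throws SQLException {
        // Connect straight to the container; the app's own persistence DB is untouched
        try (Connection connection = DriverManager.getConnection(
                     MYSQL.getJdbcUrl(), MYSQL.getUsername(), MYSQL.getPassword());
             Statement statement = connection.createStatement()) {
            statement.execute("CREATE TABLE t (id INT)");   // upload sample data
            statement.execute("INSERT INTO t VALUES (1)");
            try (ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM t")) {
                rs.next();
                assertEquals(1, rs.getInt(1));              // the "must work" query
            }
        }
    }
}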

Can I use Spring ItemProcessor from Spring Batch standalone?

My requirement is to read the CSV, XML, JSON, and Excel file formats through a file upload feature in a Spring controller, then read each file, transform it, and save it into the database.
I want to use a generic item processor like Spring Batch's ItemProcessor to read the above file formats. My transform logic will be common for all of them, followed by saving to the database.
Is there any way by which I can use Spring Batch's ItemProcessor in a standalone way without creating a batch job, or is there any open source tool which can read files in the above formats?
You can use Apache Camel to get the desired flow. There are lots of code examples on the internet for configuring Apache Camel with Spring Boot.
In this example, I assume that data is posted to the /csv, /excel, and /xml endpoints and routed to direct:records. From direct:records we have a custom processor to process the data. We can chain multiple steps like that.
@Component
public class StudentRoute extends RouteBuilder {

    @Override
    public void configure() {
        restConfiguration()
            .component("servlet")
            .bindingMode(RestBindingMode.json);

        rest("/student").produces("application/json")
            .post("/csv").to("direct:records")
            .post("/excel").to("direct:records")
            .post("/xml").to("direct:records");

        from("direct:records")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    Object value = exchange.getIn().getBody(Object.class);
                    // write your transform-and-save logic here
                }
            });
    }
}
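Once the route is running you can exercise it with any HTTP client. A hedged sketch using the JDK's HttpClient; the host, port, context path (camel-servlet is typically mapped under /camel/* in a Spring Boot app), and payload are placeholders for your setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostToRoute {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/camel/student/csv"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Jane\"}"))
                .build();
        // The route's processor receives this body from the exchange
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}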
Also, follow this link for more about Apache Camel: Spring Boot, Apache Camel, and Swagger UI

Spring Cloud DataFlow - getting Execution ID after running task

Currently I'm moving from Spring XD as my workflow and runtime environment to Spring Cloud DataFlow and Apache Airflow. I want to create workflows in Airflow and use a custom Airflow operator to run Spring Cloud Tasks on a Spring Cloud DataFlow server via its REST API.
It's possible using:
curl -X GET http://SERVER:9393/tasks/deployments/...
Unfortunately, DataFlow doesn't return the job execution ID in this request, which would give a simple way to monitor the app. Is there a way to get this ID synchronously? Getting the last execution of a specific job can lead to mistakes, e.g. missing a job execution if I run many identical jobs at the same time.
On Spring Cloud DataFlow I am running Spring Batch jobs, so maybe a better way is to somehow set the job execution ID and pass it as an input parameter?
Try to use the following annotations to collect the task information from your bean:
public class MyBean {

    @BeforeTask
    public void methodA(TaskExecution taskExecution) {
    }

    @AfterTask
    public void methodB(TaskExecution taskExecution) {
    }

    @FailedTask
    public void methodC(TaskExecution taskExecution, Throwable throwable) {
    }
}
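The TaskExecution handed to these callbacks carries the ID you are after. A small sketch of what the callback body could look like (the logging is illustrative):

@BeforeTask
public void methodA(TaskExecution taskExecution) {
    // getExecutionId() returns the execution ID Spring Cloud Task records for this run
    long executionId = taskExecution.getExecutionId();
    System.out.println("Task execution id: " + executionId);
}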
https://docs.spring.io/spring-cloud-task/docs/current-SNAPSHOT/reference/htmlsingle/#features-task-execution-listener

Completely auto DB upgradable Spring boot application

I am trying to use Flyway for DB migrations together with Spring Boot's Flyway support for auto-upgrading the DB on application start-up; this database will subsequently be used by my JPA layer.
However, this requires that the schema be present in the DB so that primary datasource initialization succeeds. What options are available to run a SQL script that creates the required schema before the Flyway migrations happen?
Note that if I use the Flyway Gradle plugin and give the URL as jdbc:mysql://localhost/mysql, it does create the schema for me. I am wondering if I could make this happen from Java code on application startup.
Flyway does not support a full installation when the schema is empty, just migration-by-migration execution.
You could, though, add schema/user creation scripts in the first migration, but then your migration scripts need to be executed as a sysdba/root/admin user and you need to set the current schema at the beginning of each migration.
If using Flyway, the least problematic way is to install the schema manually the first time and run a Flyway baseline task (also manually). Then you are ready for subsequent migrations to be done automatically.
Although Flyway is a great tool for database migrations, it does not cover this particular use case (installing a schema for the first time) well.
"Am wondering if I could make this happen from Java code on application startup."
The simple answer is yes as Flyway supports programmatic configuration from with java applications. The starting point in the flyway documentation can be found here
https://flywaydb.org/documentation/api/
flyway works with a standard JDBC DataSource and so you can code the database creation process in Java and then have flyway handle the schema management. In many environment you are likely to require 2 steps anyway as the database/schema creation will need admin rights to the database, while the ongoing schema management will need an account with reduced access rights.
What you need is to implement the interface FlywayCallback.
To kick off the migration manually from your code, you can use the migrate() method on the Flyway class.
The migration process can be tracked through the MigrationInfoService returned by the info() method of the Flyway class.
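A minimal sketch of that programmatic route, assuming the fluent Flyway 5+ API; the URL, credentials, and the createDatabaseIfNotExist flag (a MySQL Connector/J property) are placeholders for your environment:

// Let the JDBC driver create the schema, then have Flyway apply the migrations
Flyway flyway = Flyway.configure()
        .dataSource("jdbc:mysql://localhost:3306/myschema?createDatabaseIfNotExist=true",
                "admin", "secret")
        .load();
flyway.migrate();                              // apply pending migrations
System.out.println(flyway.info().current());  // inspect the current migration state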
Unfortunately, if your app has a single datasource that expects the schema to exist, Flyway will not be able to use that datasource to create the schema. You must create another datasource that is not bound to the schema and use that unbound datasource by way of a FlywayMigrationStrategy.
In your properties file:
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/myschema
  bootstrapDatasource:
    url: jdbc:mysql://localhost:3306
In your config file:
@Bean
@Primary
@ConfigurationProperties("spring.datasource")
public DataSourceProperties primaryDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("spring.datasource")
public DataSource primaryDataSource() {
    return primaryDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("spring.bootstrapDatasource")
public DataSource bootstrapDataSource() {
    return DataSourceBuilder.create().build();
}
And in your FlywayMigrationStrategy file:
@Inject
@Qualifier("bootstrapDataSource")
public void setBootstrapDataSource(DataSource bootstrapDataSource) {
    this.bootstrapDataSource = bootstrapDataSource;
}

@Override
public void migrate(Flyway flyway) {
    flyway.setDataSource(bootstrapDataSource);
    ...
    flyway.migrate();
}

Auto insert default record when deploying Spring MVC App with Spring Security

I am looking for a way to auto-insert a default admin account, using JPA, when my Spring MVC application is deployed.
My database is generated based on the entities.
I want to kick off something that inserts a default admin user and assigns roles every time the application is deployed.
It depends on which implementation of JPA you use.
If you use Hibernate, you can add an import.sql file (containing the records to load) to the classpath. More info here.
As a workaround you can also use the DbUnit tool.
I would recommend a migration utility that keeps your database in sync with your codebase; these are typically DDL scripts, but the queries to insert a default admin user, assign roles, etc. can also be part of the migrations. There are very good ones available: Flyway is one I have used, Liquibase is another.
There is also a very good comparison of the different migration utilities on the Flyway homepage that you can look at.
I use the CommandLineRunner interface.
@Component
public class CommandLineAppStartupRunner implements CommandLineRunner {

    @Autowired
    UserRepository userRepository;

    @Override
    public void run(String... args) throws Exception {
        // Assumes a User entity with a username-style constructor
        User admin = new User("admin");
        userRepository.save(admin);
    }
}
This class is executed once the application context is ready, right as the app starts up.
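Because this runs on every deployment, you may want to guard against inserting a duplicate admin. A sketch, assuming a hypothetical findByUsername finder on your repository:

@Override
public void run(String... args) throws Exception {
    // findByUsername is a hypothetical Spring Data finder; adapt it to your entity
    if (userRepository.findByUsername("admin") == null) {
        userRepository.save(new User("admin"));
    }
}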
You can find other ways here: Guide To Running Logic on Startup in Spring
