Camel route to read data from H2 DB in Spring boot app - spring-boot

My Spring Boot app uses a Camel route to read from a MySQL DB:
from("sql:select * from Students?repeatCount=1").convertBodyTo(String.class)
    .to("file:outbox");
Now I want to create a route that reads from an in-memory H2 DB, but I'm not sure which Camel components to use and how to create the route.

If you are on Spring Boot, you can select the database via the dataSource=#mysql endpoint parameter: declare separate DataSource beans and reference whichever one you need.
@Configuration
public class DataSourceConfig {

    @Bean(name = "mysqlProperties")
    @ConfigurationProperties("spring.datasource.mysql")
    public DataSourceProperties mysqlDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "mysql")
    public DataSource mysqlDataSource(@Qualifier("mysqlProperties") DataSourceProperties properties) {
        return properties
                .initializeDataSourceBuilder()
                .build();
    }

    @Bean(name = "h2Properties")
    @ConfigurationProperties("spring.datasource.h2")
    public DataSourceProperties h2DataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "h2")
    public DataSource h2DataSource(@Qualifier("h2Properties") DataSourceProperties properties) {
        return properties
                .initializeDataSourceBuilder()
                .build();
    }
}
Afterwards you can configure the different DataSources in your application.yml, but don't forget to disable/exclude the DataSource auto-configuration:
spring:
  autoconfigure:
    exclude:
      - org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
  datasource:
    mysql:
      url: jdbc:mysql://localhost:3306/db_example
      username: user
      password: pass
      driver-class-name: com.mysql.cj.jdbc.Driver
    h2:
      url: jdbc:h2:mem:h2testdb
      username: sa
      password: sa
      driver-class-name: org.h2.Driver
Your Camel route should then look like this:
from("sql:select * from Students?repeatCount=1&dataSource=#mysql")
    .convertBodyTo(String.class)
    .to("file:outbox");
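For the in-memory H2 DB, the same Camel sql component works unchanged; only the referenced DataSource bean name differs. A minimal sketch, assuming the h2 bean from the configuration above and that a Students table exists in the H2 schema (e.g. created via a schema.sql script):

```java
// Sketch: same Camel SQL component, pointed at the H2 DataSource bean.
// The "h2outbox" directory name is an arbitrary example.
from("sql:select * from Students?repeatCount=1&dataSource=#h2")
    .convertBodyTo(String.class)
    .to("file:h2outbox");
```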

Related

Amazon RDS read replica configuration for a Postgres database from a Spring Boot application deployed on PCF?

Hi all, we currently have our Spring Boot code deployed to PCF, which runs on AWS.
We are using an AWS database, with a CUP service and VCAP_SERVICES holding the DB parameters.
Below is our configuration to obtain the datasource:
@Bean
public DataSource dataSource() {
    if (dataSource == null) {
        dataSource = connectionFactory().dataSource();
        configureDataSource(dataSource);
    }
    return dataSource;
}

@Bean
public JdbcTemplate jdbcTemplate() {
    return new JdbcTemplate(dataSource());
}

private void configureDataSource(DataSource dataSource) {
    org.apache.tomcat.jdbc.pool.DataSource tomcatDataSource = asTomcatDatasource(dataSource);
    tomcatDataSource.setTestOnBorrow(true);
    tomcatDataSource.setValidationQuery("SELECT 1");
    tomcatDataSource.setValidationInterval(30000);
    tomcatDataSource.setTestWhileIdle(true);
    tomcatDataSource.setTimeBetweenEvictionRunsMillis(60000);
    tomcatDataSource.setRemoveAbandoned(true);
    tomcatDataSource.setRemoveAbandonedTimeout(60);
    tomcatDataSource.setMaxActive(Environment.getAsInt("MAX_ACTIVE_DB_CONNECTIONS", tomcatDataSource.getMaxActive()));
}

private org.apache.tomcat.jdbc.pool.DataSource asTomcatDatasource(DataSource dataSource) {
    Objects.requireNonNull(dataSource, "There is no DataSource configured");
    DataSource targetDataSource = ((DelegatingDataSource) dataSource).getTargetDataSource();
    return (org.apache.tomcat.jdbc.pool.DataSource) targetDataSource;
}
Now that read replicas have been created, what configuration do I need to modify so that our Spring Boot application uses them?
Is @Transactional(readOnly = true) on the get call enough, so that it is handled automatically, or do I need to add more configuration?
@Repository
public class PostgresSomeRepository implements SomeRepository {

    @Autowired
    public PostgresSomeRepository(JdbcTemplate jdbcTemplate, RowMapper<Consent> rowMapper) {
        this.jdbcTemplate = jdbcTemplate;
        this.rowMapper = rowMapper;
    }

    @Override
    @Transactional(readOnly = true)
    public List<SomeValue> getSomeGetCall(List<String> userIds, String applicationName, String propositionName, String since, String... types) {
        //Some logic
        try {
            return jdbcTemplate.query(sql, rowMapper, paramList.toArray());
        } catch (DataAccessException ex) {
            throw new ErrorGettingConsent(ex.getMessage(), ex);
        }
    }
}
Note: we have not added any Spring AWS JDBC dependency.
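On its own, @Transactional(readOnly = true) only marks the transaction read-only; it does not route connections to a replica. One common pattern (a sketch under assumptions, not taken from this thread: the bean names "writerDataSource"/"readerDataSource" are hypothetical) is an AbstractRoutingDataSource keyed on the transaction's read-only flag:

```java
// Sketch: route connections by the current transaction's read-only flag.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "reader" : "writer";
    }
}

@Configuration
class RoutingDataSourceConfig {
    @Bean
    public DataSource routingDataSource(@Qualifier("writerDataSource") DataSource writer,
                                        @Qualifier("readerDataSource") DataSource reader) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        routing.setTargetDataSources(Map.of("writer", writer, "reader", reader));
        routing.setDefaultTargetDataSource(writer);
        // LazyConnectionDataSourceProxy defers fetching a connection until first use,
        // so the read-only flag is already set when the routing decision is made.
        return new LazyConnectionDataSourceProxy(routing);
    }
}
```

Without the lazy proxy, Spring fetches the connection before the transaction attributes are applied, and the routing key would always resolve to the writer.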
Let's assume the cloud service name is my_db.
Map the cloud service to the application config application-cloud.yml, used by default in CF (BTW, this is better than using the connector because you can customize the datasource):
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    # my_db
    url: ${vcap.services.my_db.credentials.url}
    username: ${vcap.services.my_db.credentials.username}
    password: ${vcap.services.my_db.credentials.password}
    hikari:
      poolName: Hikari
      auto-commit: false
      data-source-properties:
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
  jpa:
    generate-ddl: false
    show-sql: true
Put the service in the application manifest.yml:
---
applications:
  - name: my-app
    env:
      SPRING_PROFILES_ACTIVE: "cloud" # by default
    services:
      - my_db

Unable to update hikari datasource configuration in spring boot app

The Hikari datasource configuration always takes the default values, even though I provide the actual configuration in the application.yml file.
MainApp.java
@SpringBootApplication
public class MainApp {

    public static void main(String[] args) {
        SpringApplication.run(MainApp.class, args);
    }

    @Bean
    @Primary
    public DataSourceProperties dataSourceProperties() {
        return new DataSourceProperties();
    }

    // I use the method below to set the password from a different source (Vault)
    // instead of taking it from application.yml
    @Bean
    public DataSource dataSource(DataSourceProperties properties) {
        DataSource ds = properties.initializeDataSourceBuilder()
                .password("setting a password from vault")
                .build();
        return ds;
    }
}
application.yml
spring:
  application:
    name: demo
  datasource:
    hikari:
      connection-timeout: 20000
      minimum-idle: 5
      maximum-pool-size: 12
      idle-timeout: 300000
      max-lifetime: 1200000
      auto-commit: true
    driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
    url: jdbc:sqlserver://ip:port;databaseName=sample
    username: username
I'm using Spring Boot 2.1.1.RELEASE. When I start the application I log the Hikari settings, and they are all defaults. Debugging the second bean (dataSource) shows it is actually a HikariDataSource, but apart from the defaults its properties are empty and do not reflect the values from application.yml. Please comment if I have misconfigured anything or made a mistake!
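A likely cause: once you define your own DataSource bean, Spring Boot's auto-configuration backs off and the spring.datasource.hikari.* block is no longer bound to anything. A minimal sketch of one way to restore the binding (an assumption, not confirmed by this thread) is to annotate the custom bean with @ConfigurationProperties, mirroring what Boot's own auto-configuration does:

```java
// Sketch: bind spring.datasource.hikari.* onto the hand-built pool.
// Without this annotation, a custom DataSource bean ignores the hikari
// block in application.yml entirely.
@Bean
@ConfigurationProperties("spring.datasource.hikari")
public DataSource dataSource(DataSourceProperties properties) {
    return properties.initializeDataSourceBuilder()
            .type(HikariDataSource.class)
            .password("password resolved from vault") // still set programmatically
            .build();
}
```

The binding is applied to the returned bean after the builder runs, so the pool settings from YAML (maximum-pool-size, idle-timeout, ...) land on the HikariDataSource while the programmatically set password is preserved.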

Non-primary embedded database schema not being initialized

I'm currently working on a Spring Boot (1.5.10.RELEASE) application that uses two embedded databases. The two databases are defined identically, but one is defined in the application itself and one in a custom auto-configuration class from an imported JAR; yet only the one flagged as @Primary (the one in the application) gets its schema initialized.
This is the current definition for both datasources:
Primary, on the application:
@Configuration
public class DataSourceConfiguration {

    @Bean
    @ConfigurationProperties(prefix = "first.datasource")
    @Primary
    public DataSourceProperties firstProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource firstDataSource() {
        return firstProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    @Primary
    public JdbcTemplate firstTemplate() {
        return new JdbcTemplate(firstDataSource());
    }
}
On the autoconfiguration:
@Configuration
@ConditionalOnProperty(name = "second.datasource.url")
public class SecondDataSourceAutoconfiguration {

    @Bean
    @ConfigurationProperties(prefix = "second.datasource")
    public DataSourceProperties secondProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource secondDataSource() {
        return secondProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public JdbcTemplate secondJdbcTemplate(@Qualifier("secondDataSource") DataSource datasource) {
        return new JdbcTemplate(datasource);
    }
}
And my application.yml fills the properties:
first:
  datasource:
    url: jdbc:h2:firstdb;DB_CLOSE_ON_EXIT=FALSE
second:
  datasource:
    url: jdbc:h2:seconddb
    platform: h2
My resources folder contains both a schema.sql that is executed on firstdb and a schema-h2.sql which should be executed on seconddb but is not. I tried playing with the datasource.schema and datasource.initialize properties, switching the script names and the platform property to the first datasource (in that case schema-h2.sql gets executed on firstdb, but nothing runs on seconddb), and changing the embedded database provider to HSQLDB; but I can't get the schema of the non-primary in-memory DB initialized at all.
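This matches Spring Boot's documented behavior: the schema-*.sql script initializer only runs against the primary (auto-configured) DataSource. A minimal sketch of initializing the secondary one explicitly, assuming the secondDataSource bean from the question (the bean wiring itself is an assumption):

```java
// Sketch: run schema-h2.sql against the non-primary DataSource manually,
// since Boot's script initializer only targets the primary one.
@Bean
public DataSourceInitializer secondDataSourceInitializer(
        @Qualifier("secondDataSource") DataSource dataSource) {
    ResourceDatabasePopulator populator =
            new ResourceDatabasePopulator(new ClassPathResource("schema-h2.sql"));
    DataSourceInitializer initializer = new DataSourceInitializer();
    initializer.setDataSource(dataSource);
    initializer.setDatabasePopulator(populator);
    return initializer; // populator runs when the bean is initialized
}
```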

Disable production datasource autoconfiguration in a Spring DataJpaTest

In my application.yml I have the following configuration (to be able to customize variables per environment with docker/docker-compose):
spring:
  datasource:
    url: ${SPRING_DATASOURCE_URL}
    username: ${SPRING_DATASOURCE_USERNAME}
    password: ${SPRING_DATASOURCE_PASSWORD}
The trouble is that Spring tries to auto-configure this datasource while I am in a @DataJpaTest, so with an embedded H2 database, and obviously it does not like the placeholders.
I tried to exclude some auto-configurations:
@DataJpaTest(excludeAutoConfiguration = {
        DataSourceAutoConfiguration.class,
        DataSourceTransactionManagerAutoConfiguration.class,
        HibernateJpaAutoConfiguration.class})
But then nothing works; the entityManagerFactory bean is missing, ...
I could probably use profiles but if possible I preferred another solution.
Did you try defining your own DataSource bean in a test configuration class and pulling it into the test with @Import(JpaTestConfiguration.class)?
@Configuration
public class JpaTestConfiguration {
    //...
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:db;DB_CLOSE_DELAY=-1");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }
    //...
}
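Used from a test, this could look like the following sketch (the entity and repository names are hypothetical placeholders): the imported DataSource bean takes the place of the auto-configured one, so the ${SPRING_DATASOURCE_URL} placeholders are never resolved.

```java
// Sketch: a @DataJpaTest that imports the test configuration above,
// so its explicit H2 DataSource is used instead of the production config.
@DataJpaTest
@Import(JpaTestConfiguration.class)
public class StudentRepositoryTest {

    @Autowired
    private StudentRepository repository; // hypothetical repository under test

    @Test
    public void savesAndLoads() {
        // exercise the repository against the in-memory H2 database
    }
}
```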

How to set up liquibase in Spring for multiple data sources?

I need to set up Liquibase for two datasources in Spring. At the moment it seems that only one Liquibase setup is possible, and you can choose which data source it applies to.
If you are using spring boot, here is the setup which can help you:
Configuration class:
@Configuration
public class DatasourceConfig {

    @Primary
    @Bean
    @ConfigurationProperties(prefix = "datasource.primary")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "datasource.primary.liquibase")
    public LiquibaseProperties primaryLiquibaseProperties() {
        return new LiquibaseProperties();
    }

    @Bean
    public SpringLiquibase primaryLiquibase() {
        return springLiquibase(primaryDataSource(), primaryLiquibaseProperties());
    }

    @Bean
    @ConfigurationProperties(prefix = "datasource.secondary")
    public DataSource secondaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "datasource.secondary.liquibase")
    public LiquibaseProperties secondaryLiquibaseProperties() {
        return new LiquibaseProperties();
    }

    @Bean
    public SpringLiquibase secondaryLiquibase() {
        return springLiquibase(secondaryDataSource(), secondaryLiquibaseProperties());
    }

    private static SpringLiquibase springLiquibase(DataSource dataSource, LiquibaseProperties properties) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog(properties.getChangeLog());
        liquibase.setContexts(properties.getContexts());
        liquibase.setDefaultSchema(properties.getDefaultSchema());
        liquibase.setDropFirst(properties.isDropFirst());
        liquibase.setShouldRun(properties.isEnabled());
        liquibase.setLabels(properties.getLabels());
        liquibase.setChangeLogParameters(properties.getParameters());
        liquibase.setRollbackFile(properties.getRollbackFile());
        return liquibase;
    }

    ...
}
properties.yml
datasource:
  primary:
    url: jdbc:mysql://localhost/primary
    username: username
    password: password
    liquibase:
      change-log: classpath:/db/changelog/db.primary.changelog-master.xml
  secondary:
    url: jdbc:mysql://localhost/secondary
    username: username
    password: password
    liquibase:
      change-log: classpath:/db/changelog/db.secondary.changelog-master.xml
I've built a project in which you can create multiple dataSources, each with its own changeSets; if you need to add another dataSource, you only change your application.yml, with no code changes needed.
Configuration class
@Configuration
@ConditionalOnProperty(prefix = "spring.liquibase", name = "enabled", matchIfMissing = true)
@EnableConfigurationProperties(LiquibaseProperties.class)
@AllArgsConstructor
public class LiquibaseConfiguration {

    private LiquibaseProperties properties;
    private DataSourceProperties dataSourceProperties;

    @Bean
    @DependsOn("tenantRoutingDataSource")
    public MultiTenantDataSourceSpringLiquibase liquibaseMultiTenancy(Map<Object, Object> dataSources,
            @Qualifier("taskExecutor") TaskExecutor taskExecutor) {
        // run the Liquibase changeSets asynchronously
        MultiTenantDataSourceSpringLiquibase liquibase = new MultiTenantDataSourceSpringLiquibase(taskExecutor);
        dataSources.forEach((tenant, dataSource) -> liquibase.addDataSource((String) tenant, (DataSource) dataSource));
        dataSourceProperties.getDataSources().forEach(dbProperty -> {
            if (dbProperty.getLiquibase() != null) {
                liquibase.addLiquibaseProperties(dbProperty.getTenantId(), dbProperty.getLiquibase());
            }
        });
        liquibase.setContexts(properties.getContexts());
        liquibase.setChangeLog(properties.getChangeLog());
        liquibase.setDefaultSchema(properties.getDefaultSchema());
        liquibase.setDropFirst(properties.isDropFirst());
        liquibase.setShouldRun(properties.isEnabled());
        return liquibase;
    }
}
application.yml
spring:
  dataSources:
    - tenantId: db1
      url: jdbc:postgresql://localhost:5432/db1
      username: postgres
      password: 123456
      driver-class-name: org.postgresql.Driver
      liquibase:
        enabled: true
        default-schema: public
        change-log: classpath:db/master/changelog/db.changelog-master.yaml
    - tenantId: db2
      url: jdbc:postgresql://localhost:5432/db2
      username: postgres
      password: 123456
      driver-class-name: org.postgresql.Driver
    - tenantId: db3
      url: jdbc:postgresql://localhost:5432/db3
      username: postgres
      password: 123456
      driver-class-name: org.postgresql.Driver
 
Link of repository: https://github.com/dijalmasilva/spring-boot-multitenancy-datasource-liquibase
I needed to support a dynamic number of DataSources, not a fixed one. I found that you can reuse the same SpringLiquibase bean for multiple DataSources by writing a service like this:
@Service
@DependsOn("liquibase")
public class LiquibaseService {

    @Autowired
    @Qualifier("liquibase")
    private SpringLiquibase liquibase;

    @PostConstruct
    public void initialize() {
        /* Obtain the datasources from wherever. I obtain them from a master DB. It's up to you. */
        List<DataSource> dataSources = obtainDataSources();
        for (DataSource dataSource : dataSources) {
            try {
                liquibase.setDataSource(dataSource);
                liquibase.setChangeLog("classpath:liquibase/emp.changelog.xml");
                liquibase.setShouldRun(true);
                // This runs Liquibase
                liquibase.afterPropertiesSet();
            } catch (LiquibaseException ex) {
                throw new RuntimeException(ex);
            }
        }
    }
}
For this to work, you need a SpringLiquibase bean declared somewhere. In this example it lives in one of my configuration files:
@Bean
public SpringLiquibase liquibase(LiquibaseProperties properties) {
    SpringLiquibase liquibase = new SpringLiquibase();
    liquibase.setDataSource(systemDataSource);
    liquibase.setChangeLog("classpath:liquibase/sis.changelog.xml");
    liquibase.setContexts(properties.getContexts());
    liquibase.setDefaultSchema(properties.getDefaultSchema());
    liquibase.setDropFirst(properties.isDropFirst());
    liquibase.setLabels(properties.getLabels());
    liquibase.setChangeLogParameters(properties.getParameters());
    liquibase.setRollbackFile(properties.getRollbackFile());
    // We run the migration manually, so don't let SpringLiquibase trigger it itself.
    liquibase.setShouldRun(false);
    return liquibase;
}
The above depends heavily on your DataSource configuration requirements. You may also need to exclude the Liquibase auto-configuration on your main application class so it doesn't kick in:
@SpringBootApplication(exclude = {
        LiquibaseAutoConfiguration.class
})
public class Application {
    // Stuff...
}
Just have two datasources and two beans:
<bean id="liquibase1" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource1" />
    <property name="changeLog" value="classpath:db1-changelog.xml" />
</bean>

<bean id="liquibase2" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource2" />
    <property name="changeLog" value="classpath:db2-changelog.xml" />
</bean>
You can also run multiple Liquibase instances (i.e. you are not limited to a primary and a secondary).
For example, your configuration class can have:
@Bean
@ConfigurationProperties(prefix = "liquibase1")
...
@Bean
@ConfigurationProperties(prefix = "liquibase2")
...
@Bean
@ConfigurationProperties(prefix = "liquibase3")
Your application.properties can have:
liquibase1.default-schema=schemaA
...
liquibase2.default-schema=schemaB
...
liquibase3.default-schema=schemaC
...
And these SpringLiquibase instances can use the same DataSource or different DataSources, however you like.
Running order? I haven't found any official documentation on it; from my observations while debugging, the migrations run in the order the instances are declared in application.properties. If you want to run a migration on one datasource, then on another, then come back to the first and run something else, this multiple-instance approach is worth trying.
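If you need a guaranteed order rather than relying on declaration order, one option (an assumption, not stated in the answer above) is to chain the beans with @DependsOn; since SpringLiquibase runs its migration during bean initialization, ordering the beans orders the migrations:

```java
// Sketch: force liquibase2 to run only after liquibase1 has been initialized.
// Bean and datasource names follow the examples above; the exact wiring is assumed.
@Bean
@DependsOn("liquibase1")
public SpringLiquibase liquibase2(@Qualifier("dataSource2") DataSource dataSource) {
    SpringLiquibase liquibase = new SpringLiquibase();
    liquibase.setDataSource(dataSource);
    liquibase.setChangeLog("classpath:db2-changelog.xml");
    return liquibase;
}
```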
