Unable to update hikari datasource configuration in spring boot app - spring-boot

The Hikari datasource configuration always picks up default values, even though I provide the actual configuration in the application.yml file.
MainApp.java
@SpringBootApplication
public class MainApp {

    public static void main(String[] args) {
        SpringApplication.run(MainApp.class, args);
    }

    @Bean
    @Primary
    public DataSourceProperties dataSourceProperties() {
        return new DataSourceProperties();
    }

    // I use the method below to set the password from a different source
    // instead of taking it from application.yml
    @Bean
    public DataSource dataSource(DataSourceProperties properties) {
        DataSource ds = properties.initializeDataSourceBuilder()
                .password("setting a password from vault")
                .build();
        return ds;
    }
}
application.yml
spring:
  application:
    name: demo
  datasource:
    hikari:
      connection-timeout: 20000
      minimum-idle: 5
      maximum-pool-size: 12
      idle-timeout: 300000
      max-lifetime: 1200000
      auto-commit: true
    driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
    url: jdbc:sqlserver://ip:port;databaseName=sample
    username: username
Using Spring Boot 2.1.1.RELEASE. When I start the application I log the Hikari configuration and it reports all default values. So I debugged the second bean (dataSource): it is indeed a HikariDataSource, but apart from the defaults the remaining properties are empty and do not reflect the values given in application.yml. Please point out if I misconfigured anything or made a mistake!
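Note that defining your own DataSource bean switches off Spring Boot's datasource auto-configuration, so the spring.datasource.hikari.* keys are no longer bound automatically. A minimal sketch of one way to restore that binding while still injecting the password programmatically (not from the original thread; lookupPasswordFromVault() is a hypothetical placeholder for the Vault call):

@Bean
@Primary
@ConfigurationProperties(prefix = "spring.datasource.hikari")
public HikariDataSource dataSource(DataSourceProperties properties) {
    // Build a HikariDataSource explicitly so that @ConfigurationProperties
    // can bind the spring.datasource.hikari.* keys onto the returned pool.
    HikariDataSource ds = properties.initializeDataSourceBuilder()
            .type(HikariDataSource.class)
            .build();
    ds.setPassword(lookupPasswordFromVault()); // hypothetical Vault lookup
    return ds;
}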

Related

jooq DataSourceConnectionProvider with TransactionAwareDataSourceProxy is not taking part in spring transactions

I have the following spring data source setup:
datasource:
  name: postgres-datasource
  url: ${POSTGRES_URL:jdbc:postgresql://localhost:5432/mydb}?reWriteBatchedInserts=true&prepareThreshold=0
  username: ${POSTGRES_USER:mydb}
  password: ${POSTGRES_PASS:12345}
  driver-class: org.postgresql.Driver
  hikari:
    minimumIdle: 2
    maximum-pool-size: 30
    max-lifetime: 500000
    idleTimeout: 120000
    auto-commit: false
    data-source-properties:
      cachePrepStmts: true
      useServerPrepStmts: true
      prepStmtCacheSize: 500
jpa:
  database-platform: org.hibernate.dialect.PostgreSQLDialect
  properties:
    hibernate:
      dialect: org.hibernate.dialect.PostgreSQLDialect
      # generate_statistics: true
      order_inserts: true
      order_updates: true
      jdbc:
        lob:
          non_contextual_creation: true
        batch_size: 50
Notice that auto-commit is false.
Since I need to use both jOOQ and JPA, and also have multiple schemas in my DB, I have configured the following DataSourceConnectionProvider:
public class SchemaSettingDataSourceConnectionProvider extends DataSourceConnectionProvider {

    public SchemaSettingDataSourceConnectionProvider(TransactionAwareDataSourceProxy dataSource) {
        super(dataSource);
    }

    @Override
    public Connection acquire() {
        try {
            String tenant = TenantContext.getTenantId();
            log.debug("Setting schema to {}", tenant);
            Connection connection = dataSource().getConnection();
            Statement statement = connection.createStatement();
            statement.executeUpdate("SET SCHEMA '" + tenant + "'");
            statement.close();
            return connection;
        } catch (SQLException var2) {
            throw new DataAccessException("Error getting connection from data source " + dataSource(), var2);
        }
    }
}
I have @EnableTransactionManagement on the Spring Boot config. With this setup in place, the connection does not commit after the transaction is over.
@Transactional(propagation = Propagation.REQUIRES_NEW)
public FlowRecord insert(FlowName name, String createdBy) {
    return dslContext.insertInto(FLOW, FLOW.NAME, FLOW.STATUS)
            .values(name.name(), FlowStatus.CREATED.name())
            .returning(FLOW.ID)
            .fetch()
            .get(0);
}
This does not commit. So I tried adding the following code to my SchemaSettingDataSourceConnectionProvider class:
@Override
public void release(Connection connection) {
    try {
        connection.commit(); // commit() throws the checked SQLException, so it must be handled here
    } catch (SQLException e) {
        throw new DataAccessException("Error committing connection", e);
    }
    super.release(connection);
}
However, now the issue is that even when a transaction should be rolled back, e.g. due to a runtime exception, it still commits every time.
Is there some configuration I am missing?
UPDATE
Following the answer below, I provided a DataSourceTransactionManager bean and it worked for JOOQ.
@Bean
public DataSourceTransactionManager jooqTransactionManager(DataSource dataSource) {
    // DSTM is a PlatformTransactionManager
    return new DataSourceTransactionManager(dataSource);
}
However, now my regular JPA calls are all failing with:
Caused by: javax.persistence.TransactionRequiredException: Executing an update/delete query
at org.hibernate.internal.AbstractSharedSessionContract.checkTransactionNeededForUpdateOperation(AbstractSharedSessionContract.java:445)
at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1692)
So I provided a JpaTransactionManager bean. But that caused the jOOQ auto-configuration to throw an exception about multiple DataSourceTransactionManager beans being present. After much trial and error, the one that worked for me was this:
private final TransactionAwareDataSourceProxy dataSource;

public DslConfig(DataSource dataSource) {
    // A transaction-aware datasource is needed, otherwise the Spring @Transactional
    // is ignored and updates do not work.
    this.dataSource = new TransactionAwareDataSourceProxy(dataSource);
}

@Bean("transactionManager")
public PlatformTransactionManager transactionManager() {
    // Needed for JPA transactions to work
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setDataSource(dataSource);
    return transactionManager;
}
Notice that I am using a JpaTransactionManager but setting the datasource to the TransactionAwareDataSourceProxy. Further testing is required, but it looks like both JPA and jOOQ transactions are working now.
One thing to watch out for is to make sure you're using the correct @Transactional annotation. In my project there were two: one from a Jakarta package and one from a Spring package; make sure you're using the Spring annotation.
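For illustration, the two imports that are easy to mix up (an aside, not from the original comment):

// Make sure the Spring annotation is the one imported:
import org.springframework.transaction.annotation.Transactional;
// and not the Jakarta one:
// import jakarta.transaction.Transactional;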
I don't know if your Spring config is correct.
We use Java config, so it's hard to compare.
One obvious difference is that we have an explicit TransactionManager defined; is that something you maybe need to do?
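For completeness, a hedged sketch (assuming jOOQ's DefaultConfiguration/DefaultDSLContext API and reusing the SchemaSettingDataSourceConnectionProvider from the question) of how the connection provider is typically wired around a TransactionAwareDataSourceProxy so jOOQ acquires the connection bound to the current Spring transaction:

@Bean
public DefaultDSLContext dslContext(DataSource dataSource) {
    // Wrap the pool so jOOQ sees the connection that Spring's transaction
    // manager has bound to the current thread.
    TransactionAwareDataSourceProxy proxy = new TransactionAwareDataSourceProxy(dataSource);

    DefaultConfiguration configuration = new DefaultConfiguration();
    configuration.set(new SchemaSettingDataSourceConnectionProvider(proxy));
    configuration.set(SQLDialect.POSTGRES);
    return new DefaultDSLContext(configuration);
}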

Amazon RDS Read Replica configuration for a Postgres database from a Spring Boot application deployed on PCF?

Hi all, currently we have deployed our Spring Boot code to PCF, which is running on AWS.
We are using an AWS database, where we have a CUPS service and VCAP_SERVICES holding the DB parameters.
Below is our configuration to get the datasource:
@Bean
public DataSource dataSource() {
    if (dataSource == null) {
        dataSource = connectionFactory().dataSource();
        configureDataSource(dataSource);
    }
    return dataSource;
}

@Bean
public JdbcTemplate jdbcTemplate() {
    return new JdbcTemplate(dataSource());
}

private void configureDataSource(DataSource dataSource) {
    org.apache.tomcat.jdbc.pool.DataSource tomcatDataSource = asTomcatDatasource(dataSource);
    tomcatDataSource.setTestOnBorrow(true);
    tomcatDataSource.setValidationQuery("SELECT 1");
    tomcatDataSource.setValidationInterval(30000);
    tomcatDataSource.setTestWhileIdle(true);
    tomcatDataSource.setTimeBetweenEvictionRunsMillis(60000);
    tomcatDataSource.setRemoveAbandoned(true);
    tomcatDataSource.setRemoveAbandonedTimeout(60);
    tomcatDataSource.setMaxActive(Environment.getAsInt("MAX_ACTIVE_DB_CONNECTIONS", tomcatDataSource.getMaxActive()));
}

private org.apache.tomcat.jdbc.pool.DataSource asTomcatDatasource(DataSource dataSource) {
    Objects.requireNonNull(dataSource, "There is no DataSource configured");
    DataSource targetDataSource = ((DelegatingDataSource) dataSource).getTargetDataSource();
    return (org.apache.tomcat.jdbc.pool.DataSource) targetDataSource;
}
Now that we have read replicas created, what configuration do I need to modify so that our Spring Boot application uses the read replicas?
Is @Transactional(readOnly = true) on the get call enough, so that it is taken care of automatically? Or do I need to add some more configuration?
@Repository
public class PostgresSomeRepository implements SomeRepository {

    @Autowired
    public PostgresSomeRepository(JdbcTemplate jdbcTemplate, RowMapper<Consent> rowMapper) {
        this.jdbcTemplate = jdbcTemplate;
        this.rowMapper = rowMapper;
    }

    @Override
    @Transactional(readOnly = true)
    public List<SomeValue> getSomeGetCall(List<String> userIds, String applicationName, String propositionName, String since, String... types) {
        // Some logic
        try {
            return jdbcTemplate.query(sql, rowMapper, paramList.toArray());
        } catch (DataAccessException ex) {
            throw new ErrorGettingConsent(ex.getMessage(), ex);
        }
    }
}
Note: we have not added any Spring AWS JDBC dependency.
Let's assume the cloud service name is my_db.
Map the cloud service to the application config application-cloud.yml used by default in CF (BTW, this is better than using the connector because you can customize the datasource):
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    # my_db
    url: ${vcap.services.my_db.credentials.url}
    username: ${vcap.services.my_db.credentials.username}
    password: ${vcap.services.my_db.credentials.password}
    hikari:
      poolName: Hikari
      auto-commit: false
      data-source-properties:
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
  jpa:
    generate-ddl: false
    show-sql: true
Put the service into the application manifest.yml:
---
applications:
  - name: my-app
    env:
      SPRING_PROFILES_ACTIVE: "cloud" # by default
    services:
      - my_db
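The answer above covers the datasource binding but not the replica routing asked about. As a hedged sketch (an assumption, not from this thread), a common pattern is to route read-only transactions to the replica with Spring's AbstractRoutingDataSource and LazyConnectionDataSourceProxy (both from org.springframework.jdbc.datasource); primaryDataSource and replicaDataSource are hypothetical beans for the two endpoints:

public class ReadReplicaRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // true only inside an active @Transactional(readOnly = true)
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "replica" : "primary";
    }
}

@Bean
public DataSource routingDataSource(DataSource primaryDataSource, DataSource replicaDataSource) {
    ReadReplicaRoutingDataSource routing = new ReadReplicaRoutingDataSource();
    routing.setTargetDataSources(Map.of("primary", primaryDataSource, "replica", replicaDataSource));
    routing.setDefaultTargetDataSource(primaryDataSource);
    // LazyConnectionDataSourceProxy defers fetching the physical connection until
    // first use, i.e. after the transaction's read-only flag has been set.
    return new LazyConnectionDataSourceProxy(routing);
}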

Camel route to read data from H2 DB in Spring boot app

My Spring Boot app is using a route to read from a MySQL DB.
from("sql:select * from Students?repeatCount=1").convertBodyTo(String.class)
.to("file:outbox");
Now I want to create a route to read from an in-memory H2 DB, but I'm not sure which Camel components to use and how to create the route.
If you have Spring Boot, then you can actually inject everything via the dataSource=#mysql endpoint parameter: introduce separate DataSource beans and use the required one:
@Configuration
public class DataSourceConfig {

    @Bean(name = "mysqlProperties")
    @ConfigurationProperties("spring.datasource.mysql")
    public DataSourceProperties mysqlDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "mysql")
    public DataSource mysqlDataSource(@Qualifier("mysqlProperties") DataSourceProperties properties) {
        return properties
                .initializeDataSourceBuilder()
                .build();
    }

    @Bean(name = "h2Properties")
    @ConfigurationProperties("spring.datasource.h2")
    public DataSourceProperties h2DataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "h2")
    public DataSource h2DataSource(@Qualifier("h2Properties") DataSourceProperties properties) {
        return properties
                .initializeDataSourceBuilder()
                .build();
    }
}
Afterwards you can declare the different DataSources in your application.yml, but don't forget to disable/exclude the datasource autoconfiguration:
spring:
  autoconfigure:
    exclude:
      - org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
  datasource:
    mysql:
      url: jdbc:mysql://localhost:3306/db_example
      username: user
      password: pass
      driver-class-name: com.mysql.cj.jdbc.Driver
    h2:
      url: jdbc:h2:mem:h2testdb
      username: sa
      password: sa
      driver-class-name: org.h2.Driver
Your Camel route should look like this to use everything properly:
from("sql:select * from Students?repeatCount=1&dataSource=#mysql")
.convertBodyTo(String.class)
.to("file:outbox");

Spring boot how to use Hikari auto configuration but set username/password at runtime

I am using Spring Boot 2.0.1 with HikariCP and want to use application properties to set Hikari datasource properties like connection timeout, maximum pool size, etc., but the username and password should be set at runtime. I tried the code below, but when the datasource is created it doesn't have the connection timeout value I am trying to set.
Below is the code for the datasource bean.
@Value("${spring.datasource.url}")
private String url;

@ConfigurationProperties(prefix = "spring.datasource.hikari")
@Bean
public DataSource dataSource() throws Exception {
    // Username and password are fetched from some other data storage
    HikariConfig hikariConfig = new HikariConfig();
    hikariConfig.setJdbcUrl(url);
    hikariConfig.setUsername(username);
    hikariConfig.setPassword(password);
    // The data source created here doesn't have the connection timeout value
    // set by me
    return new HikariDataSource(hikariConfig);
}
Below is my application properties file:
spring.datasource.url={Our DB URL}
spring.datasource.hikari.maximumPoolSize=100
spring.datasource.hikari.idleTimeout=30000
spring.datasource.hikari.poolName=SpringBootJPAHikariCP
spring.datasource.hikari.connectionTimeout=40000
spring.datasource.hikari.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.hibernate.connection.provider_class=org.hibernate.hikaricp.internal.HikariCPConnectionProvider
I referred to the Spring documentation below, but it only covers auto-configuring properties like the URL and credentials (which worked), not connection timeout, idle timeout, etc.
https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-a-datasource
Please let me know if I am missing anything.
@ConfigurationProperties(prefix = "spring.datasource.hikari")
@Bean
@Primary
public DataSource dataSource(String username, String password) {
    return DataSourceBuilder.create().username(username).password(password).build();
}
And use these in the yml/property file, without giving the username and password properties:
spring:
  profiles: dev
  # Development database configuration
  datasource.hikari:
    driverClassName: oracle.jdbc.driver.OracleDriver
    jdbcUrl: jdbc:oracle:thin:@url:1621:sid
    type: com.zaxxer.hikari.HikariDataSource
    connectionTimeout: 40000
This will work. Let me know if it doesn't work for you.
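A slightly fuller sketch of the same idea, assuming the credentials come from some runtime source (CredentialsService and its getters are hypothetical placeholders, not from any library):

@ConfigurationProperties(prefix = "spring.datasource.hikari")
@Bean
@Primary
public DataSource dataSource(CredentialsService credentials) {
    // The spring.datasource.hikari.* keys (connectionTimeout, pool size, ...)
    // are bound onto the returned datasource by @ConfigurationProperties;
    // only the credentials are supplied programmatically.
    return DataSourceBuilder.create()
            .username(credentials.getUsername())
            .password(credentials.getPassword())
            .build();
}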

Disable production datasource autoconfiguration in a Spring DataJpaTest

In my application.yml, I have the following configuration (to be able to customize variables for different environments with docker/docker-compose):
spring:
  datasource:
    url: ${SPRING_DATASOURCE_URL}
    username: ${SPRING_DATASOURCE_USERNAME}
    password: ${SPRING_DATASOURCE_PASSWORD}
The trouble is that Spring tries to auto-configure this datasource while I am in a @DataJpaTest, i.e. with an embedded H2 database, and obviously it does not like the placeholders.
I tried excluding some auto-configurations:
@DataJpaTest(excludeAutoConfiguration = {
        DataSourceAutoConfiguration.class,
        DataSourceTransactionManagerAutoConfiguration.class,
        HibernateJpaAutoConfiguration.class})
But then nothing works; the entityManagerFactory bean is missing, etc.
I could probably use profiles, but I would prefer another solution if possible.
Did you try defining your own DataSource bean?
// On the test class (name is illustrative):
@DataJpaTest
@Import(JpaTestConfiguration.class)
public class SomeRepositoryTest {
    // ...
}

// The imported test configuration:
@Configuration
public class JpaTestConfiguration {
    // ...
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:db;DB_CLOSE_DELAY=-1");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }
    // ...
}
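Another option (an assumption on my part, not from this thread) is to let the tests shadow the production config with a src/test/resources/application.yml that pins harmless values, so the placeholders never need resolving:

spring:
  datasource:
    url: jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
    username: sa
    password: ""

Since test classpath resources take precedence over main resources with the same name, @DataJpaTest then starts against the embedded database without touching the production placeholders.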
