Getting "too many connections for role" when using DataSource - spring-boot

I have a REST service, and when it is called it has to do some inserts and updates on almost 25 databases. With the code below it was working on my localhost, but when I deployed it to my staging server I got FATAL: too many connections for role "user123"
List<String> databaseUrls = null; // placeholder: populated with the ~25 database URLs
databaseUrls.forEach(databaseUrl -> {
    DataSource dataSource = DataSourceBuilder.create()
            .driverClassName("org.postgresql.Driver")
            .url(databaseUrl)
            .username("user123")
            .password("some-password")
            .build();
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.update("Some...Update...Query");
});
As per my understanding, a DataSource need not be closed because it is never opened.
Note:
A DataSource implementation need not be closed, because it is never
“opened”. A DataSource is not a resource, is not connected to the
database, so it is not holding networking connections nor resources on
the database server. A DataSource is simply information needed when
making a connection to the database, with the database server's
network name or address, the user name, user password, and various
options you want specified when a connection is eventually made.
Can someone tell me why I am getting this issue?

The problem is in DataSourceBuilder: it actually creates one of the connection pools listed below, which spawns a number of connections and keeps them running:
private static final String[] DATA_SOURCE_TYPE_NAMES = new String[] {
        "org.apache.tomcat.jdbc.pool.DataSource",
        "com.zaxxer.hikari.HikariDataSource",
        "org.apache.commons.dbcp.BasicDataSource" };
Javadoc says:
/**
 * Convenience class for building a {@link DataSource} with common implementations and
 * properties. If Tomcat, HikariCP or Commons DBCP are on the classpath one of them will
 * be selected (in that order with Tomcat first). In the interest of a uniform interface,
 * and so that there can be a fallback to an embedded database if one can be detected on
 * the classpath, only a small set of common configuration properties are supported. To
 * inject additional properties into the result you can downcast it, or use
 * <code>@ConfigurationProperties</code>.
 */
Try using e.g. SingleConnectionDataSource instead; then your problem will be gone:
List<String> databaseUrls = null; // placeholder: populated with the database URLs
Class.forName("org.postgresql.Driver");
databaseUrls.forEach(databaseUrl -> {
    SingleConnectionDataSource dataSource = null;
    try {
        dataSource = new SingleConnectionDataSource(
                databaseUrl, "user123", "some-password", true /*suppressClose*/);
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.update("Some...Update...Query");
    } catch (Exception e) {
        log.error("Failed to run queries for {}", databaseUrl, e);
    } finally {
        // release resources
        if (dataSource != null) {
            dataSource.destroy();
        }
    }
});
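Note: the suppressClose = true flag makes the SingleConnectionDataSource hand out a close-suppressing proxy, so JdbcTemplate can "close" the connection after each operation without actually closing the single underlying connection; the destroy() call in the finally block is what really closes it and frees the slot on the database server.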

First of all, it is a questionable architecture decision to have a single application managing 50 databases. Anyway, instead of creating a DataSource inside the loop, you should use the Factory design pattern to create one DataSource per database, and add a connection pooling mechanism to your system; HikariCP and the Tomcat pool are the most widely used. Also analyse the logs of the failing thread for any further issues. A rough sketch of such a factory is shown below.
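For illustration, a minimal sketch of that idea, assuming HikariCP is on the classpath (the class name, pool size and credentials below are examples, not taken from the question):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Caches one small pool per database URL instead of building a new DataSource on every request.
public class TenantDataSourceFactory {

    private final Map<String, DataSource> pools = new ConcurrentHashMap<>();

    public DataSource forUrl(String databaseUrl) {
        return pools.computeIfAbsent(databaseUrl, url -> {
            HikariConfig config = new HikariConfig();
            config.setDriverClassName("org.postgresql.Driver");
            config.setJdbcUrl(url);
            config.setUsername("user123");
            config.setPassword("some-password");
            config.setMaximumPoolSize(2); // keep the per-database footprint small
            return new HikariDataSource(config);
        });
    }
}

Inside the loop you would then build the JdbcTemplate from factory.forUrl(databaseUrl), so repeated requests reuse the same pool instead of opening fresh connections each time.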

Related

Include newly added data sources into route Data Source object without restarting the application server

I implemented Spring's AbstractRoutingDataSource, dynamically determining the actual DataSource based on the current context.
I referred to this article: https://www.baeldung.com/spring-abstract-routing-data-source.
On Spring Boot application start-up I create a map of contexts to DataSource objects to configure our AbstractRoutingDataSource. All these client context details are fetched from a database table.
@Bean
@DependsOn("dataSource")
@Primary
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetching from database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
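(For context: RoutingDataSource here is presumably a thin subclass of Spring's AbstractRoutingDataSource along the lines of the sketch below; the ThreadLocal context holder is an assumption taken from the linked Baeldung article, not from the original post.)

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void setContext(String estCode) {
        CONTEXT.set(estCode);
    }

    public static void clear() {
        CONTEXT.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // the est code set for the current request decides which target DataSource is used
        return CONTEXT.get();
    }
}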
The problem is that if I add new client details, I cannot get them into routeDataSource. The obvious reason is that these values are set on start-up.
How can I add a new client context? I would have to re-initialize the routeDataSource object.
I am planning to write a service that fetches any newly added client contexts and resets the routeDataSource object, so there is no need to restart the server each time the client details change.
A simple solution to this situation is adding @RefreshScope to the bean definition:
@Bean
@Primary
@RefreshScope
public DataSource routeDataSource() {
    RoutingDataSource routeDataSource = new RoutingDataSource();
    DataSource defaultDataSource = (DataSource) applicationContext.getBean("dataSource");
    List<EstCredentials> credentials = LocalDataSourcesDetailsLoader.getAllCredentails(defaultDataSource); // fetching from database table
    localDataSourceRegistrationBean.registerDataSourceBeans(credentials);
    routeDataSource.setDefaultTargetDataSource(defaultDataSource);
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (EstCredentials credential : credentials) {
        targetDataSources.put(credential.getEstCode().toString(),
                (DataSource) applicationContext.getBean(credential.getEstCode().toString()));
    }
    routeDataSource.setTargetDataSources(targetDataSources);
    return routeDataSource;
}
Add Spring Boot Actuator as a dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Then trigger the refresh endpoint by sending a POST to /actuator/refresh to update the DataSource (actually every refresh-scoped bean).
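If you prefer to trigger the refresh from inside the application rather than over HTTP, a minimal sketch is shown below. It assumes spring-cloud-context (which is what provides @RefreshScope and ContextRefresher) is on the classpath, and the service and method names are made up for illustration:

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.stereotype.Service;

@Service
public class DataSourceRefreshService {

    private final ContextRefresher contextRefresher;

    public DataSourceRefreshService(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    public void onNewClientRegistered() {
        // rebuilds every @RefreshScope bean, including routeDataSource,
        // so newly added client contexts are picked up on the next lookup
        contextRefresher.refresh();
    }
}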
So this will depend on how much you know about the datasources to be added, but you could set this up as a multi-tenant project. Another example of creating new datasources:
@Autowired
private Map<String, DataSource> mars2DataSources;

public void addDataSourceAtRuntime() {
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(
            MultiTenantJPAConfiguration.class.getClassLoader())
            .driverClassName("org.postgresql.Driver")
            .username("postgres")
            .password("postgres")
            .url("jdbc:postgresql://localhost:5412/somedb");
    mars2DataSources.put("tenantX", dataSourceBuilder.build());
}
Given that you are using Oracle, you could also use its database change notification feature.
Think of it as a listener in the JDBC driver that gets notified whenever something changes in your database table. Upon receiving a change, you could reinitialize/add datasources.
You can find a tutorial on how to do this here: https://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm#JJDBC28820
Though, depending on your organization, database notifications may need some extra firewall settings for the communication to work.
Advantage: you do not need to call the REST endpoint manually when something changes (though Marcos Barberios' answer is perfectly valid!). A rough sketch of the registration is given below.
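For illustration only, here is a rough sketch based on the linked Oracle tutorial. The table and column names are invented, and the connection is assumed to already be an unwrapped OracleConnection:

import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public DatabaseChangeRegistration watchClientTable(OracleConnection connection) throws Exception {
    Properties options = new Properties();
    options.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");

    DatabaseChangeRegistration registration = connection.registerDatabaseChangeNotification(options);
    registration.addListener(event -> {
        // a row in the watched table changed: reload or register datasources here
    });

    // executing a query through a statement bound to the registration tells the driver which table to watch
    try (Statement statement = connection.createStatement()) {
        ((OracleStatement) statement).setDatabaseChangeRegistration(registration);
        try (ResultSet rs = statement.executeQuery("SELECT est_code FROM client_credentials")) {
            while (rs.next()) {
                // iterate once so the driver records the queried table
            }
        }
    }
    return registration;
}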

Why does Querydsl always require the connection to be transactional?

Recently I tried a new tool for DB access called Querydsl in my Spring Boot app. Here is how I configure the context in a @Configuration class:
@Bean
public com.querydsl.sql.Configuration querydslConfiguration() {
    SQLTemplates templates = OracleTemplates.builder().build();
    com.querydsl.sql.Configuration configuration = new com.querydsl.sql.Configuration(templates);
    configuration.setExceptionTranslator(new SpringExceptionTranslator());
    return configuration;
}

@Bean
public SQLQueryFactory queryFactory(DataSource dataSource) {
    Provider<Connection> provider = new SpringConnectionProvider(dataSource);
    return new SQLQueryFactory(querydslConfiguration(), provider);
}
My query is quite a simple select:
fun detailedEntityByIds(ids: Set<String>): List<DetailedEntity> {
    val qDetails = QTContainerDetails.tContainerDetails
    return sqlQueryFactory.select(qDetails).from(qDetails)
        .where(qDetails.id.`in`(ids))
        .fetch().map { mapper.qDslEntToModel(it) }
}
Then what I faced was the following exception:
java.lang.IllegalStateException: Connection is not transactional
I quickly found this question: [QueryDSL/Spring] java.lang.IllegalStateException: Connection is not transactional, with advice to use @Transactional to solve the problem.
Why does Querydsl require connections to be transactional? I used to put @Transactional on the service-layer methods where I really need it. Now Querydsl 'forces' me to put it on the whole DAO class, because it looks like it is required for every Querydsl query.
From the Javadoc
/**
 * {@code SpringConnectionProvider} is a Provider implementation which provides a transactionally bound connection
 *
 * <p>Usage example</p>
 * <pre>
 * {@code
 * Provider<Connection> provider = new SpringConnectionProvider(dataSource());
 * SQLQueryFactory queryFactory = SQLQueryFactory(configuration, provider);
 * }
 * </pre>
 */
The reason is resource management. You don't have access to the underlying JDBC implementation; the transaction will close the ResultSets, Statements, Connections, and so on. Without a transaction, every connection would be left open, the connection pool would become saturated, and the database would run out of resources.
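For completeness, the usual fix is simply to make the calling method transactional. A minimal Java sketch (the original DAO is Kotlin; the class, mapper and method names here are assumptions for illustration):

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.querydsl.sql.SQLQueryFactory;

@Repository
public class DetailedEntityDao {

    private final SQLQueryFactory sqlQueryFactory;
    private final DetailedEntityMapper mapper; // assumed mapper, mirroring the Kotlin code above

    public DetailedEntityDao(SQLQueryFactory sqlQueryFactory, DetailedEntityMapper mapper) {
        this.sqlQueryFactory = sqlQueryFactory;
        this.mapper = mapper;
    }

    @Transactional
    public List<DetailedEntity> detailedEntityByIds(Set<String> ids) {
        QTContainerDetails qDetails = QTContainerDetails.tContainerDetails;
        // inside the transaction SpringConnectionProvider can hand Querydsl the bound connection
        return sqlQueryFactory.select(qDetails).from(qDetails)
                .where(qDetails.id.in(ids))
                .fetch()
                .stream()
                .map(mapper::qDslEntToModel)
                .collect(Collectors.toList());
    }
}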
If you want to manage your own resources, you could write your own Provider<Connection> and pass in the DataSource, e.g.:
private static Provider<Connection> getConnection(DataSource dataSource) {
    return () -> org.springframework.jdbc.datasource.DataSourceUtils.getConnection(dataSource);
}
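Note that DataSourceUtils.getConnection(dataSource) returns the transaction-bound connection when a transaction is active and falls back to dataSource.getConnection() otherwise; in the latter case you are responsible for releasing the connection yourself (for example via DataSourceUtils.releaseConnection), otherwise you are back to leaking connections.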

Unable to dynamically provision a GORM-capable data source in Grails 3

We are implementing a multitenant application (database per tenant) and would like to include dynamic provisioning of new tenants without restarting the server. This is Grails 3.2.9 / GORM 6.
Among other things this involves creating a dataSource at runtime, without it being configured in application.yml at the application startup.
According to the documentation (11.2.5. Adding Tenants at Runtime) there is a ConnectionSources API for adding tenants at runtime, but a ConnectionSource created this way doesn't seem to be properly registered with Spring (beans for the dataSource, session and transaction manager), and Grails complains about missing beans when we try to use the new datasource.
We expect that when we use the ConnectionSources API to create a connection source for a new database, Grails should initialise it with all the tables according to the GORM domains in our application, execute Bootstrap.groovy, etc., just like it does for the sources statically configured in application.yml. This is not happening either, though.
So my question is whether the ConnectionSources API is intended for a different purpose than the one we are trying to use it for, or whether it is just not finished/tested yet.
I meant to come back to you. I did manage to figure out a solution. Now this is for schema per customer, not database per customer, but I suspect it would be easy to adapt. I first create the schema using a straight Groovy Sql object as follows:
void createAccountSchema(String tenantId) {
    Sql sql = null
    try {
        sql = new Sql(dataSource as DataSource)
        sql.withTransaction {
            sql.execute("create schema ${tenantId}" as String)
        }
    } catch (Exception e) {
        log.error("Unable to create schema for tenant $tenantId", e)
        throw e
    } finally {
        sql?.close()
    }
}
Then I run the same code as the Liquibase plugin uses, with some simple defaults, as follows:
void updateAccountSchema(String tenantId) {
    def applicationContext = Holders.applicationContext
    // Now try create the tables for the schema
    try {
        GrailsLiquibase gl = new GrailsLiquibase(applicationContext)
        gl.dataSource = applicationContext.getBean("dataSource", DataSource)
        gl.dropFirst = false
        gl.changeLog = 'changelog-m.groovy'
        gl.contexts = []
        gl.labels = []
        gl.defaultSchema = tenantId
        gl.databaseChangeLogTableName = defaultChangelogTableName
        gl.databaseChangeLogLockTableName = defaultChangelogLockTableName
        gl.afterPropertiesSet() // this runs the update command
    } catch (Exception e) {
        log.error("Exception trying to create new account schema tables for $tenantId", e)
        throw e
    }
}
Finally, I tell Hibernate about the new schema as follows:
try {
    hibernateDatastore.addTenantForSchema(tenantId)
} catch (Exception e) {
    log.error("Exception adding tenant schema for ${tenantId}", e)
    throw e
}
Anywhere you see me referring to 'hibernateDatastore' or 'dataSource' I have those injected by Grails as follows:
def hibernateDatastore
def dataSource
protected String defaultChangelogTableName = "databasechangelog"
protected String defaultChangelogLockTableName = "databasechangeloglock"
Hope this helps.

JDBC connection pool using ThreadPoolExecutor in Spring Boot

I have an application that runs through multiple databases and, for each database, runs select queries on all tables and dumps the results to Hadoop.
My design is to create one datasource connection at a time and use the connection pool obtained to run the select queries in multiple threads. Once done with this datasource, close the connection and create a new one.
Here is the @Async code:
@Component
public class MySampleService {

    private final static Logger LOGGER = Logger.getLogger(MySampleService.class);

    @Async
    public Future<String> callAsync(JdbcTemplate template, String query) throws InterruptedException {
        try {
            template.queryForList(query);
            // process the results
            return new AsyncResult<String>("success");
        } catch (Exception ex) {
            return new AsyncResult<String>("failed");
        }
    }
}
Here is the caller
public String taskExecutor() throws InterruptedException, ExecutionException {
    Future<String> asyncResult1 = mySampleService.callAsync(jdbcTemplate, query1);
    Future<String> asyncResult2 = mySampleService.callAsync(jdbcTemplate, query2);
    Future<String> asyncResult3 = mySampleService.callAsync(jdbcTemplate, query3);
    Future<String> asyncResult4 = mySampleService.callAsync(jdbcTemplate, query4);
    LOGGER.info(asyncResult1.get());
    LOGGER.info(asyncResult2.get());
    LOGGER.info(asyncResult3.get());
    LOGGER.info(asyncResult4.get());
    // now all threads are finished, close the connection
    jdbcTemplate.getDataSource().getConnection().close();
    return "all queries finished";
}
I am wondering if this is the right way to do it, or whether there is an existing/optimized solution out of the box that I am missing. I can't use spring-data-jpa since my queries are complex.
Thanks
Spring Boot docs:
Production database connections can also be auto-configured using a pooling DataSource. Here’s the algorithm for choosing a specific implementation:
We prefer the Tomcat pooling DataSource for its performance and concurrency, so if that is available we always choose it.
Otherwise, if HikariCP is available we will use it.
If neither the Tomcat pooling datasource nor HikariCP are available and if Commons DBCP is available we will use it, but we don’t recommend it in production.
Lastly, if Commons DBCP2 is available we will use it.
If you use the spring-boot-starter-jdbc or spring-boot-starter-data-jpa ‘starters’ you will automatically get a dependency to tomcat-jdbc.
So you should get sensible defaults out of the box.
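If you do want to cap the pool explicitly rather than rely on those defaults, here is a minimal sketch assuming tomcat-jdbc is the pool on the classpath (the driver, sizes and credentials are example values only):

import org.apache.tomcat.jdbc.pool.PoolProperties;
import org.springframework.jdbc.core.JdbcTemplate;

public JdbcTemplate jdbcTemplateFor(String databaseUrl) {
    PoolProperties pool = new PoolProperties();
    pool.setUrl(databaseUrl);
    pool.setDriverClassName("org.postgresql.Driver");
    pool.setUsername("user");
    pool.setPassword("password");
    pool.setMaxActive(4);   // at most four connections for the four async queries
    pool.setInitialSize(1);
    return new JdbcTemplate(new org.apache.tomcat.jdbc.pool.DataSource(pool));
}

When you are finished with one database, closing the pool (org.apache.tomcat.jdbc.pool.DataSource.close()) releases all of its connections at once, which is what the manual connection close in the question is trying to achieve.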

How to set autocommit to false in Spring JdbcTemplate

Currently I'm setting autocommit to false in Spring by adding a property to the datasource bean definition like below:
<property name="defaultAutoCommit" value="false" />
But I need to set it specifically in a single Java method before executing my procedure.
I used the below code snippet:
getJdbcTemplate().getDataSource().getConnection().setAutoCommit(false);
But the above line was not setting autocommit to false.
Am I missing anything, or is there an alternative way to set autocommit in a specific Java method with Spring?
Thanks
The problem is that you are setting autocommit on a Connection, but JdbcTemplate doesn't remember that Connection; instead, it gets a new Connection for each operation, and that might or might not be the same Connection instance, depending on your DataSource implementation. Since defaultAutoCommit is not a property on DataSource, you have two options:
Assuming your concrete datasource has a setter for defaultAutoCommit (for example, org.apache.commons.dbcp.BasicDataSource), cast the DataSource to your concrete implementation. Of course this means that you can no longer change your DataSource in your Spring configuration, which defeats the purpose of dependency injection.
((BasicDataSource)getJdbcTemplate().getDataSource()).setDefaultAutoCommit(false);
Set the DataSource to a wrapper implementation that sets AutoCommit to false each time you fetch a connection.
final DataSource ds = getJdbcTemplate().getDataSource();
getJdbcTemplate().setDataSource(new DataSource() {
    // You'll need to implement all the methods, simply delegating to ds
    @Override
    public Connection getConnection() throws SQLException {
        Connection c = ds.getConnection();
        c.setAutoCommit(false);
        return c;
    }
});
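A lighter variant of the same idea (a sketch, not part of the original answer): Spring ships org.springframework.jdbc.datasource.DelegatingDataSource, so you only need to override the methods you care about instead of implementing the whole interface.

final DataSource ds = getJdbcTemplate().getDataSource();
getJdbcTemplate().setDataSource(new DelegatingDataSource(ds) {
    @Override
    public Connection getConnection() throws SQLException {
        Connection c = super.getConnection();
        c.setAutoCommit(false); // every connection handed to the JdbcTemplate starts with autocommit off
        return c;
    }
});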
You need to get the current connection, e.g.:
Connection conn = DataSourceUtils.getConnection(jdbcTemplate.getDataSource());
try {
    conn.setAutoCommit(false);
    /**
     * Your Code
     */
    conn.commit();
} catch (SQLException e) {
    try {
        conn.rollback();
    } catch (SQLException rollbackEx) {
        rollbackEx.printStackTrace();
    }
    e.printStackTrace();
} finally {
    // hand the connection back to Spring (and the pool)
    DataSourceUtils.releaseConnection(conn, jdbcTemplate.getDataSource());
}
I'm posting this because I was looking for it everywhere: I used a configuration property in Spring Boot to set the default autocommit mode:
spring.datasource.hikari.auto-commit: false
Spring Boot 2.4.x Doc for Hikari
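The programmatic equivalent, as a sketch (assuming HikariCP is the pool in use; the URL and credentials are example values):

@Bean
public DataSource hikariDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:postgresql://localhost:5432/somedb");
    config.setUsername("user");
    config.setPassword("password");
    config.setAutoCommit(false); // connections come out of the pool with autocommit already off
    return new HikariDataSource(config);
}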
You will have to do this for each statement that the JdbcTemplate executes, because for each jdbcTemplate.execute() etc. it gets a new connection from the DataSource's connection pool. So you have to set autocommit on the connection that the JdbcTemplate uses for that particular query, something like:
jdbcTemplate.execute("<your sql query>", new PreparedStatementCallback<Integer>() {
    @Override
    public Integer doInPreparedStatement(PreparedStatement stmt) throws SQLException, DataAccessException {
        Connection cxn = stmt.getConnection();
        // set autocommit for that cxn object to false
        cxn.setAutoCommit(false);
        // set parameters etc in the stmt
        ....
        ....
        cxn.commit();
        // restore autocommit to true for that cxn object, because if the same object is
        // obtained from the connection pool later, autocommit will still be false
        cxn.setAutoCommit(true);
        return 0;
    }
});
Hope this helps
After 5 years this is still a valid question; I resolved my issue this way:
set up a connection with connection.setAutoCommit(false);
create a JdbcTemplate with that connection;
do your work and commit.
Connection connection = dataSource.getConnection();
connection.setAutoCommit(false);
JdbcTemplate jdbcTemplate =
        new JdbcTemplate(new SingleConnectionDataSource(connection, true));
// ignore case in mapping result
jdbcTemplate.setResultsMapCaseInsensitive(true);
// do your stuff
connection.commit();
connection.close(); // suppressClose=true means the JdbcTemplate will not close the connection for you
I just came across this and thought the solution would help someone, even if it's too late.
As Yosef said, the connection that you get by calling getJdbcTemplate().getDataSource().getConnection() may or may not be the one used for your operation's communication with the database.
Instead, if your requirement is just to test your script and not commit the data, you can use an Apache Commons DBCP datasource with auto commit set to false. The bean definition is given below:
/**
 * A datasource with auto commit set to false.
 */
@Bean
public DataSource dbcpDataSource() throws Exception {
    BasicDataSource ds = new BasicDataSource();
    ds.setUrl(url);
    ds.setUsername(username);
    ds.setPassword(password);
    ds.setDefaultAutoCommit(false);
    ds.setEnableAutoCommitOnReturn(false);
    return ds;
}

// Create either JdbcTemplate or NamedParameterJdbcTemplate as per your needs
@Bean
public NamedParameterJdbcTemplate dbcpNamedParameterJdbcTemplate() throws Exception {
    return new NamedParameterJdbcTemplate(dbcpDataSource());
}
And use this datasource for any such operations.
If you wish to commit your transactions, I suggest you have one more datasource bean with auto commit set to true, which is the default behavior.
Hope it helps someone!
I needed this for some unit testing.
In fact, Spring already provides the SingleConnectionDataSource implementation, which has a setAutoCommit method:
// import org.springframework.jdbc.datasource.SingleConnectionDataSource;
SingleConnectionDataSource dataSource = new SingleConnectionDataSource();
dataSource.setAutoCommit(false);
dataSource.setDriverClassName("xxx");
dataSource.setUrl("xxx");
dataSource.setUsername("xxx");
dataSource.setPassword("xxx");
In some cases you could just add @Transactional to the method, e.g. after some batch inserts, so that the commit is executed at the end.
