So I am running a Spring Boot server which I use to query a MySQL database. So far I have been using the auto-configured HikariCP connection pool with JOOQ, so I had almost nothing to do with the connection pool. But now I need to query two different schemas (on the same server), and since it seems I can't auto-configure two connection pools, I have to set up the DataSource myself. I would like to keep the native behavior of the pool, i.e. a set of persistent connections, so that the server can dispatch queries and, once a query is resolved, the connection is still there and free to use again. I have found multiple implementations of connection pools that allow multiple DataSources to query multiple servers, but I don't know whether any of them behaves the way I just described.
Implementation #1 :
https://www.ru-rocker.com/2018/01/28/configure-multiple-data-source-spring-boot/
Implementation #2 :
https://www.stubbornjava.com/posts/database-connection-pooling-in-java-with-hikaricp
I feel like #2 is the most straightforward solution, but I am skeptical about the idea of creating a new DataSource every time I want to query. If I don't close it, am I just opening new connections over and over again? So obviously I would have to close them once finished, but then it's not really a connection pool anymore. (Or am I misunderstanding this?)
Meanwhile #1 seems more reliable, but again, I would be calling new HikariDataSource every time, so is that what I am looking for?
(Or is there a simpler solution that I have been missing, since I only need to query two different schemas that are on the same server and use the same dialect?)
OK, so it turns out I don't have to set up multiple connection pools in my case. As I am querying the same server with the same credentials, I don't have to set up a pool for each schema. I just removed the schema that I specified in my JDBC URL config:
spring.datasource.url=jdbc:mysql://localhost:5656/db_name?useUnicode=true&serverTimezone=UTC
Becomes
spring.datasource.url=jdbc:mysql://localhost:5656/?useUnicode=true&serverTimezone=UTC
And since I had already generated the POJOs with the JOOQ generator, I could reference my tables through the schema objects, i.e. Client.CLIENT.ID.as("idClient") becomes ClientSchema.CLIENTSCHEMA.CLIENT.ID.as("idClient"). This way I can query multiple schemas without setting up any additional connection pool.
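To illustrate, here is a minimal sketch of a cross-schema query running over the single auto-configured pool. The generated classes and columns (ClientSchema, BillingSchema, their packages and field types) are hypothetical examples of what the jOOQ generator could produce, not something from my actual project:

import java.util.List;

import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;

// Hypothetical generated packages/classes, named after the two schemas:
import com.example.generated.client_schema.ClientSchema;
import com.example.generated.billing_schema.BillingSchema;

@Repository
public class ClientBillingRepository {

    // Spring Boot's auto-configured DSLContext, backed by the single HikariCP pool
    private final DSLContext dsl;

    public ClientBillingRepository(DSLContext dsl) {
        this.dsl = dsl;
    }

    public List<Long> clientIdsWithInvoices() {
        // Both tables are fully qualified with their schema, so one connection pool serves both.
        return dsl
                .select(ClientSchema.CLIENTSCHEMA.CLIENT.ID.as("idClient"))
                .from(ClientSchema.CLIENTSCHEMA.CLIENT)
                .join(BillingSchema.BILLINGSCHEMA.INVOICE)
                .on(BillingSchema.BILLINGSCHEMA.INVOICE.CLIENT_ID
                        .eq(ClientSchema.CLIENTSCHEMA.CLIENT.ID))
                .fetchInto(Long.class);
    }
}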
How to configure Maven and JOOQ to generate sources from multiple schemas:
https://www.jooq.org/doc/3.13/manual/code-generation/codegen-advanced/codegen-config-database/codegen-database-catalog-and-schema-mapping/
We have an application that uses several data sources. A DB underlying one of those data sources is down at the moment: IOError. Network adapter couldn't establish the connection & Socket read timed out.
Is there an annotation (or other means) of configuring Spring Boot such that it bypasses the culprit data source and still starts up? The DB is not essential in current development work. spring.datasource.continue-on-error=true doesn't seem to work. This is Spring Boot 2.2.2.RELEASE.
One option is to use multiple data sources so that your app still starts when one of them fails at startup; I mean, use an in-memory DB / SQLite as a fallback to handle the connection error...
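As a rough sketch of that fallback idea (the bean name, the placeholder JDBC URL, and the choice of an embedded H2 fallback are my assumptions, and H2 must be on the classpath):

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class FallbackDataSourceConfig {

    @Bean
    public DataSource reportingDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/reporting"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setInitializationFailTimeout(1000); // fail fast instead of blocking startup
        try {
            return new HikariDataSource(config); // throws if the DB cannot be reached
        } catch (RuntimeException e) {
            // DB is down: fall back to an in-memory H2 so the rest of the app can still start
            return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
        }
    }
}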
One other thing I'm finding is that JaVers appears to be grabbing all of the available connections out of my connection pool (created via Spring's DataSourceBuilder). I'm not using Hibernate/JPA, just straight JDBC via JdbcTemplate and mostly MyBatis for my entity queries.
I've added a logging statement to my ConnectionProvider for JaVers, and at the start of the application, when it queries for the schema, it pulls 4 connections to check for each of the tables and then never returns any of them, even after the commit from the PlatformTransactionManager.
I understand from https://stackoverflow.com/a/35147884/570291 that it's supposed to participate in the same connection as the current Transaction. Since I'm not using Hibernate/JPA, does that mean I need to implement the connection tracking/etc from MyBatis to the Javers ConnectionProvider to return the same connection (if there is one), and then handle closing (returning to the pool) of that connection at the end of the transaction?
I found DataSourceUtils.getConnection(DataSource), a Spring utility method that gets a connection from the given DataSource, returning the one tied to the current transaction if there is one. Using that in the ConnectionProvider looks like it has done the trick of keeping the connection for the existing transaction.
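For reference, a minimal sketch of what such a ConnectionProvider can look like (the class name and constructor wiring are mine; it assumes a single Spring-managed DataSource):

import java.sql.Connection;
import javax.sql.DataSource;

import org.javers.repository.sql.ConnectionProvider;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class SpringManagedConnectionProvider implements ConnectionProvider {

    private final DataSource dataSource;

    public SpringManagedConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection() {
        // Returns the connection bound to the current Spring transaction if there is one,
        // otherwise hands out a fresh connection from the pool (which the application must release).
        return DataSourceUtils.getConnection(dataSource);
    }
}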
JaVers won't return connections to the application's connection pool for the same reason it won't call SQL commit or rollback.
Managing connections and transactions is the application's responsibility, not JaVers'. We call it passive mode; from the JaVers doc:
- JaVers doesn’t create JDBC connections on its own and uses connections provided by an application (via ConnectionProvider.getConnection()).
- JaVers philosophy is to use application’s transactions and never to call SQL commit or rollback commands on its own.
Thanks to this approach, data managed by an application (domain objects) and data managed by JaVers (object snapshots) can be saved to SQL database in one transaction.
The JaVers project has no MyBatis support, so you need to implement a ConnectionProvider for MyBatis on your own.
A proper implementation of ConnectionProvider shouldn't create a new SQL connection for every getConnection() call. It should return the connection underlying the current application's transaction. Typically, this is implemented using a ThreadLocal.
As you mentioned, your side (the ConnectionProvider and the application code around it) should handle committing transactions and closing connections.
I have been researching Spring Data REST, especially for Cassandra, and one of the questions my coworkers and I had was: when does Spring Data connect to the database? We don't always want a REST controller to connect to the database, so when does Spring establish a connection if, say, we have a class extending CrudRepository? Does it connect to the database at application startup? Is that something we can control?
For example, I implemented this example on Spring's website:
https://spring.io/guides/gs/accessing-data-rest/
At what point in the code does spring connect to the database?
Spring will connect to the DB as soon as the DataSource gets initialized. Basically, the Spring context comes alive somehow (web listeners, manual bootstrapping) and starts creating beans. As soon as it reaches the DataSource, a connection is made and the connection pool is populated.
Of course, the above describes a normal out-of-the-box configuration; everything can be set up to your taste.
So unless you decide to manage the connections yourself, DB connections will be sitting there waiting to be used.
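A small way to observe this, assuming Spring Boot's default auto-configured HikariDataSource (the runner class below is mine, not part of the guide):

import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

// Logs pool statistics right after the application context has started.
@Component
public class PoolStartupLogger implements ApplicationRunner {

    private final HikariDataSource dataSource;

    public PoolStartupLogger(HikariDataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void run(ApplicationArguments args) {
        // HikariCP only builds its pool when a connection is first requested; with Spring Data
        // repositories that usually happens during startup, so the MXBean is normally available here.
        HikariPoolMXBean pool = dataSource.getHikariPoolMXBean();
        if (pool == null) {
            System.out.println("Pool not started yet - nothing has requested a connection");
        } else {
            System.out.printf("total=%d active=%d idle=%d%n",
                    pool.getTotalConnections(),
                    pool.getActiveConnections(),
                    pool.getIdleConnections());
        }
    }
}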
Disagree with the above answer.
As part of my research I initialized the DataSource using a bean configuration and then changed my database password (not in my Spring application, but the actual DB user's password).
The connection stays alive for a while, and then at some point (maybe after idle time) it stops working and throws a credentials exception.
This is enough to say that JPA does not keep the connection sitting around waiting to be used, but uses some mechanism to acquire/release the DB connection as needed.
I am using MongoDB with the C# driver and am wondering what is the most efficient yet safe way to create connections to the database.
Thread Safety
According to the MongoDB C# driver documentation, the MongoClient, MongoServer, MongoDatabase, MongoCollection and MongoGridFS classes are thread safe. Does this mean I can have a singleton instance of MongoClient or MongoDatabase?
The documentation also states that a connection pool is used for MongoClient, so the management of connections to MongoDB is abstracted from the MongoClient class anyway.
Example Scenario
Let's say I have three MongoDB instances in my replica set, so I create MongoClient and MongoDatabase objects based upon the three server addresses for these instances. Can I create a static singleton for the database and client objects and use them across multiple requests simultaneously? What if one of the instances dies; if I cache the Mongo objects, how can I make sure this scenario is dealt with safely?
In my project I'm using only a singleton MongoClient, then get MongoServer and everything else from that MongoClient.
This is because, as you said, the connection pool lives in the MongoClient, and I definitely don't want more than one connection pool. Here's what the documentation says:
When you are connecting to a replica set you will still use only one instance of MongoClient, which represents the replica set as a whole. The driver automatically finds all the members of the replica set and identifies the current primary.
Actually the MongoClient was added to the C# driver in 1.7 to represent the whole replica set and handle failover and load balancing, because MongoServer doesn't have the ability to do that. Thus you shouldn't cache MongoServer, because once a server goes offline you have no way of knowing it.
EDIT: Just had a look at the source code. I may have made a mistake. The MongoClient doesn't handle the connection pool; the MongoServer does (at least up to driver 1.7, I haven't looked at the latest driver source yet). This makes sense because MongoServer represents a real Mongo instance, and one connection pool stores connections only to that server.
My application stack consists of Spring MVC, Hibernate and MySQL hosted on Apache Tomcat 7.
I have set up Spring to manage transactions, and the Hibernate session factory uses the Tomcat DBCP connection-pool-backed DataSource for getting connections.
I have a use case in my application in which I have to run a long-running task that is initiated through the web UI (say, a button click). This task runs for, let's say, 10 minutes, and then my connection pool starts to throw "connection closed" exceptions. This is obviously because of the connection pool setting whereby a connection that is not returned to the pool within a specific time is marked as abandoned and later removed. I could solve this by tinkering with the timeout settings and increasing them to a large enough value, but I may have several other use cases like this and may not currently know how long those will run.
So I am thinking of another approach here.
This use case will not be initiated very often, so I may use a separate datasource definition without a connection pool. Of course I can set up two transaction managers in Spring with different names, "abc" and "xyz", and use @Transactional("abc") and @Transactional("xyz"). Both these transaction managers would use their respective datasources: one with a connection pool to support the common use cases and one without a connection pool to support the long-running transactions. This way I won't have to worry about changing the timeout configurations.
Will this be a generally accepted solution or should I take the timeout configuration approach?
Avoiding the connection pool will cause problems if you don't have another way to limit the number of connections that your application can open. For example (a trivial example, of course), if you're going to launch your batch process each time a user clicks a button, make sure you limit how many times they can trigger this task.
Another way would be to define a new JDBC resource in your application server (jdbc/batchprocess) and configure a longer timeout on this resource. Then switch from one to the other using dynamic data source routing.
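A minimal sketch of such routing with Spring's AbstractRoutingDataSource (the lookup keys and the ThreadLocal holder are illustrative assumptions; the routing bean still needs its targets registered via setTargetDataSources and setDefaultTargetDataSource):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes to the "batch" resource (long timeout) or the "default" pool, depending on a
// ThreadLocal flag that the long-running task sets before starting and clears when done.
public class BatchAwareRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT = ThreadLocal.withInitial(() -> "default");

    public static void useBatchDataSource() {
        CURRENT.set("batch");
    }

    public static void reset() {
        CURRENT.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT.get();
    }
}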
You can open Hibernate Sessions, supplying your own Connection:
sessionFactory.withOptions().connection( yourConnection ).openSession();
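For example, a sketch of running the long task on a dedicated connection that never touches the pool, so the abandoned-connection timeout does not apply to it (the JDBC URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class LongTaskRunner {

    void runLongTask(SessionFactory sessionFactory) throws SQLException {
        try (Connection longTaskConnection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/db_name", "user", "password")) {
            // Open a Session bound to the dedicated connection instead of the pooled DataSource.
            Session session = sessionFactory.withOptions()
                    .connection(longTaskConnection)
                    .openSession();
            try {
                Transaction tx = session.beginTransaction();
                // ... long-running work ...
                tx.commit();
            } finally {
                session.close(); // the supplied connection is then closed by try-with-resources
            }
        }
    }
}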