Kafka Connect JDBC Connector | Closing JDBC Connection after each poll

We are going to use the Kafka Connect JDBC Source Connector to ingest data from Oracle databases.
We have one Kafka JDBC Connector per Oracle DB.
Looking at the JDBC Connector implementation, if we have N maxTasks per connector, there will be N JDBC connections to the server (held inside CachedConnectionProvider).
These connections are kept alive and will not be closed after each poll().
Our DB admins have strict limits on the number of live connections to the DB servers.
Because of this, we are thinking of closing the JDBC Connection after each poll() since our poll times are usually 10 mins or more.
Is this supported natively by the JDBC Connector, or do we have to write a patch?
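For illustration, here is a minimal sketch of the per-poll pattern described above in a custom SourceTask. The class name, config key, and query are hypothetical, the record mapping is omitted, and this is not how the stock connector's CachedConnectionProvider behaves:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class CloseAfterPollSourceTask extends SourceTask {
    private String jdbcUrl; // hypothetical config key, read in start()

    @Override
    public void start(Map<String, String> props) {
        jdbcUrl = props.get("connection.url");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        List<SourceRecord> records = new ArrayList<>();
        // Open a fresh connection for this poll and close it before returning,
        // so nothing stays open during the ~10 min wait between polls.
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM source_table")) { // hypothetical query
            while (rs.next()) {
                // map each row to a SourceRecord and add it to records
            }
        } catch (java.sql.SQLException e) {
            throw new ConnectException(e);
        }
        return records;
    }

    @Override
    public void stop() { }

    @Override
    public String version() {
        return "0.1";
    }
}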

Related

AWS RDS connection and Spring Boot JPA

I connected my DB (AWS RDS) to Spring Boot JPA, and my number of connections increased dramatically.
It is 12 now; I think it comes from Spring Boot (5) + browser (5) + Workbench (1), and others?
How can I reduce my number of connections? How can I maintain these connections safely?
You should be looking at database connection pooling.
Database connection pooling keeps database connections open so they can be reused, while capping the total number of connections at a limit you specify.
The default connection pool in Spring Boot is HikariCP; all you have to do is configure it properly.
Sample connection pool configuration:
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.idle-timeout=10000
spring.datasource.hikari.max-lifetime=1000
spring.datasource.hikari.auto-commit=true
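If you prefer to build the pool in code rather than through Spring's auto-configuration, the equivalent with HikariCP's own API looks roughly like this; the URL and credentials are placeholders, and maximumPoolSize is the knob that caps what the server sees:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://example-rds:3306/mydb"); // placeholder endpoint
        config.setUsername("user");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);      // hard cap on connections this app opens
        config.setMinimumIdle(10);
        config.setConnectionTimeout(20000); // ms to wait for a free connection
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // use conn; closing it returns it to the pool, not to the server
        }
    }
}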

Flink JDBC Sink and connection pool

I cannot find any info in the docs about connection reuse in JDBCAppendTableSink in Flink. Should I use my own connection pool, or does Flink reuse the connection for me?
Is this really a gap in the documentation, or am I missing something?
Each instance of the sink creates a connection when the sink is created, and that connection (and a prepared statement) is then automatically reused for you.
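That is the usual open-once, reuse-per-record pattern. For reference, a minimal sketch of the same idea in a hand-rolled RichSinkFunction, in case you need more control than JDBCAppendTableSink gives you; the URL and SQL are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class JdbcRowSink extends RichSinkFunction<String> {
    private transient Connection conn;
    private transient PreparedStatement stmt;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Called once per parallel sink instance; the connection and
        // prepared statement are then reused for every record.
        conn = DriverManager.getConnection("jdbc:postgresql://localhost/db"); // placeholder URL
        stmt = conn.prepareStatement("INSERT INTO events (payload) VALUES (?)"); // hypothetical table
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        stmt.setString(1, value);
        stmt.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (stmt != null) stmt.close();
        if (conn != null) conn.close();
    }
}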

Can I / Should I pool neo4j's JDBC connections in Bolt mode?

Reading the neo4j JDBC driver's documentation, there are two transports supported for connecting to a neo4j server at the moment:
through the Bolt protocol (3.0.X) using jdbc:neo4j:bolt://<host>:<port>/
through the HTTP protocol (2.X+) using jdbc:neo4j:http://<host>:<port>/
Obviously, the HTTP protocol does not need connection pooling (unless it's HTTP/2, which is not the case here). But I'm not familiar with Bolt, so I'm wondering: can I pool neo4j's connections in Bolt mode? And if I can, is it like any ordinary JDBC connection, so that I can use, for example, HikariCP to keep its connections alive?
The Neo4j driver handles a pool of connections to the database for you.
Take a look here if you want to see the default config: https://github.com/neo4j/neo4j-java-driver/blob/1.1/driver/src/main/java/org/neo4j/driver/internal/net/pooling/PoolSettings.java
For now, you can't configure the Bolt Java driver via the JDBC one; you can only specify the EncryptionLevel. (https://github.com/neo4j-contrib/neo4j-jdbc/blob/master/neo4j-jdbc-bolt/src/main/java/org/neo4j/jdbc/bolt/BoltDriver.java#L58-L60)
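So ordinary JDBC usage should be enough, and an external pool like HikariCP should not be needed just to keep connections alive. A minimal sketch against the Bolt transport, with the reuse happening inside the driver's own pool; the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BoltExample {
    public static void main(String[] args) throws Exception {
        // The neo4j-jdbc Bolt driver registers itself with DriverManager;
        // connection pooling happens inside the underlying Java driver.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:neo4j:bolt://localhost:7687", "neo4j", "password"); // placeholders
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("MATCH (n) RETURN count(n)")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}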
Cheers

Database failure shuts down ActiveMQ windows service using JDBC persistence

I have an ActiveMQ broker running as a Windows service. It's using jdbcPersistenceAdapter with an Oracle data source and Oracle's Universal Connection Pooling (UCP).
When the database is down (due to network problems or scheduled maintenance), the ActiveMQ Windows service shuts down completely. This, of course, makes the broker unavailable even after the database is restored.
I have tried connection validation in UCP, DBCP with connection validation, and even a MySQL data source, without any success. The service shuts down within 30 seconds of database failure (I believe this is because the default cleanupInterval is 30 seconds).
Is there a way to prevent the Windows service from shutting down and make it wait for database availability?
Any help is greatly appreciated.
Here is my current configuration from activemq.xml:
<persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</persistenceAdapter>
<bean id="oracle-ds" class="oracle.ucp.jdbc.PoolDataSourceFactory"
    factory-method="getPoolDataSource"
    p:URL="jdbc:oracle:thin:@localhost:1521:amq"
    p:connectionFactoryClassName="oracle.jdbc.pool.OracleDataSource"
    p:validateConnectionOnBorrow="true"
    p:user="appuser" p:password="userspassword" />
Generally, you should use JDBC master/slave to support failover to another broker when the database becomes unavailable...
see http://activemq.apache.org/jdbc-master-slave.html
That said, there is a known issue with JDBC master/slave failover that was fixed in 5.6.0...
see https://issues.apache.org/jira/browse/AMQ-1958
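A different approach, if master/slave is not an option: 5.x brokers can be given an IOExceptionHandler so that DB errors stop and restart the transport connectors and retry, instead of shutting the JVM down. A hedged sketch of the programmatic equivalent, with the data source wiring omitted (this roughly corresponds to an ioExceptionHandler element in activemq.xml):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
import org.apache.activemq.util.DefaultIOExceptionHandler;

public class ResilientBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
        // data source setup omitted; reuse the UCP pool from activemq.xml
        broker.setPersistenceAdapter(jdbc);

        // Keep the broker process alive on DB errors: pause the transport
        // connectors and periodically re-check the database instead of exiting.
        DefaultIOExceptionHandler handler = new DefaultIOExceptionHandler();
        handler.setIgnoreSQLExceptions(false);
        handler.setStopStartConnectors(true);
        handler.setResumeCheckSleepPeriod(5000); // ms between DB re-checks
        broker.setIoExceptionHandler(handler);

        broker.start();
    }
}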

Oracle OCI Connection Pooling vs Oracle UCP

Oracle offers 4 different JDBC connection pooling mechanisms when the OCI driver is used for JDBC connections:
OracleDataSource
Oracle OCI Connection Pooling
Oracle UCP (Universal Connection Pooling; recommended over OracleDataSource)
Oracle Database Resident Connection Pooling
What are the pros and cons of using Oracle UCP (Universal Connection Pooling) as compared to Oracle OCI Connection Pooling provided by the OCI driver?
I will try adding some details based on the reading I have done so far.
Below are some distinct features supported by the different connection pooling mechanisms:
Oracle UCP (Universal Connection Pooling)
a. Supports features like Fast Connection Failover (FCF), runtime connection load balancing, and connection affinity.
b. JMX Support
c. Support for labelling connections
d. Support for connection harvesting
OCI (Oracle Call Interface) Connection Pooling
a. Support for session multiplexing.
OracleDataSource
a. Implicit connection cache.
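For concreteness, a minimal UCP usage sketch; the connection details are placeholders, and features such as FCF are switched on via additional setters on the same PoolDataSource:

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpExample {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//localhost:1521/orcl"); // placeholder
        pds.setUser("appuser");
        pds.setPassword("secret");
        pds.setInitialPoolSize(2);
        pds.setMaxPoolSize(10); // upper bound on pooled connections
        try (Connection conn = pds.getConnection()) {
            // use conn; close() returns it to the UCP pool
        }
    }
}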
