I cannot find any info in the docs about connection reuse in JDBCAppendTableSink in Flink. Should I use my own connection pool, or does Flink reuse the connection for me?
Is this really a gap in the documentation, or am I missing something?
Each instance of the sink creates a connection when the sink is created, and that connection and its prepared statement are then automatically reused for you.
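To make the lifecycle concrete, here is a minimal sketch of the same open-once/reuse pattern written as a custom RichSinkFunction. This is illustrative, not Flink's actual JDBCOutputFormat internals, and the URL and SQL are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class ReusingJdbcSink extends RichSinkFunction<String> {
    private transient Connection connection;       // created once per parallel sink instance
    private transient PreparedStatement statement; // prepared once, reused for every record

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = DriverManager.getConnection("jdbc:postgresql://host:5432/db"); // placeholder URL
        statement = connection.prepareStatement("INSERT INTO t (col) VALUES (?)");
    }

    @Override
    public void invoke(String value) throws Exception {
        statement.setString(1, value); // no new connection per record
        statement.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (statement != null) statement.close();
        if (connection != null) connection.close();
    }
}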
We are going to use the Kafka Connect JDBC Source Connector to ingest data from Oracle databases.
We have one Kafka JDBC Connector per Oracle DB.
Looking at the JDBC Connector implementation, if a Connector has N maxTasks, there will be N JDBC connections to the server (held inside CachedConnectionProvider).
These connections are kept alive and will not be closed after each poll().
Our DB Admins have strict conditions about number of live connections to the db servers.
Because of this, we are thinking of closing the JDBC connection after each poll(), since our poll intervals are usually 10 minutes or more.
Is this supported natively by the JDBC Connector, or do we have to write a patch?
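If the connector doesn't offer this out of the box and a patch is needed, the change would live in the task's poll(). Here is a minimal sketch of that idea as a standalone SourceTask; all names and the query helper are illustrative, not the actual Confluent connector code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class CloseAfterPollTask extends SourceTask {
    private String jdbcUrl;

    @Override
    public void start(Map<String, String> props) {
        jdbcUrl = props.get("connection.url"); // e.g. an Oracle thin URL (illustrative)
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // Open a fresh connection for this poll only; try-with-resources closes it
        // again before the task sleeps until the next poll interval.
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            return runQueryAndBuildRecords(conn);
        } catch (SQLException e) {
            throw new ConnectException(e);
        }
    }

    // Hypothetical helper standing in for the real query/record-building logic.
    private List<SourceRecord> runQueryAndBuildRecords(Connection conn) {
        return Collections.emptyList();
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}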
Reading the Neo4j JDBC driver's documentation, there are two transports supported for connecting to a Neo4j server at the moment:
through the Bolt protocol (3.0.X) using jdbc:neo4j:bolt://<host>:<port>/
through the HTTP protocol (2.X+) using jdbc:neo4j:http://<host>:<port>/
Obviously, the HTTP transport does not need connection pooling (unless it's HTTP/2, which is not the case here). But I'm not familiar with Bolt, so I'm wondering whether I can pool Neo4j connections in Bolt mode. And if I can, does it behave like any ordinary JDBC connection, so that I can use, for example, HikariCP to keep its connections alive?
The Neo4j driver handles a pool of connections to the database for you.
Take a look here if you want to see the default config: https://github.com/neo4j/neo4j-java-driver/blob/1.1/driver/src/main/java/org/neo4j/driver/internal/net/pooling/PoolSettings.java
For now, you can't configure the Bolt Java driver via the JDBC one; you can only specify the EncryptionLevel. (https://github.com/neo4j-contrib/neo4j-jdbc/blob/master/neo4j-jdbc-bolt/src/main/java/org/neo4j/jdbc/bolt/BoltDriver.java#L58-L60)
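In other words, plain DriverManager usage is usually enough, since the underlying Bolt driver pools its network connections internally. A minimal sketch (host and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Neo4jBoltExample {
    public static void main(String[] args) throws Exception {
        // The Bolt driver underneath maintains its own connection pool,
        // so no extra pooling layer (e.g. HikariCP) is required here.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:neo4j:bolt://localhost:7687", "neo4j", "password"); // placeholder host/credentials
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("MATCH (n) RETURN count(n)")) { // Cypher, not SQL
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}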
Cheers
I am using Hive version 2.1.0.
I have a JDBC connection from the Java side to connect to HiveServer2. But now I need to create a JDBC connection once and build a DataSource pool, so that multiple queries do not create a new connection every time and use the pooling mechanism instead. Is there any way to implement a pooling mechanism for Hive?
Thanks in advance...
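One common approach is to wrap the standard Hive JDBC driver in a generic JDBC pool such as Commons DBCP, since nothing about the pooling itself is Hive-specific. A minimal sketch, assuming HiveServer2 is reachable at the default port (host, port, and database are placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

public class HivePoolExample {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.apache.hive.jdbc.HiveDriver");
        ds.setUrl("jdbc:hive2://localhost:10000/default"); // placeholder HiveServer2 endpoint
        ds.setInitialSize(2); // connections opened up front
        ds.setMaxTotal(8);    // upper bound on pooled connections (DBCP 2.x)

        // Each getConnection() borrows from the pool instead of opening a new connection.
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}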
I'm currently trying to set up Apache Ignite with C3P0 as my JDBC connection pool, but I noticed that since the Ignite driver doesn't support transactions, C3P0 isn't usable.
Has anyone had any luck getting a JDBC connection pool going with the Ignite driver? Suggestions?
EDIT:
Updating with exactly why C3P0 doesn't work with Ignite's JDBC driver.
So, take a look at this line of code.
To create a new pooled connection, C3P0 attempts to set the transaction isolation level through the connection/driver.
That eventually leads us to this line of code in the Ignite driver, which basically tells us that the Ignite driver doesn't support SQL transactions.
Ignite itself DOES support transactions, as specified here, but it appears the JDBC implementation does not.
So I need an alternative to C3P0 if I want to set up a JDBC connection pool; any suggestions?
It turns out the JDBC driver for Apache Ignite isn't currently JDBC compliant. Specifically, the part that breaks it is that it doesn't have transaction support. As a result, your typical JDBC pool implementation won't work with the Ignite driver.
There's now a ticket for this here: https://issues.apache.org/jira/browse/IGNITE-4191
Try BasicDataSource: http://commons.apache.org/proper/commons-dbcp/configuration.html

import org.apache.commons.dbcp2.BasicDataSource; // org.apache.commons.dbcp in DBCP 1.x

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("org.apache.ignite.IgniteJdbcDriver");
ds.setUrl("jdbc:ignite:cfg://cache=default#file:///the/path/to/ignite-config.xml");
ds.setInitialSize(2); // connections opened when the pool starts
ds.setMinIdle(2);     // keep at least two idle connections alive
I am using logstash to create a pipeline from Postgres to CockroachDB. Below is the config.
The input plugin (source is Postgres) is working fine, but I am unable to establish a connection in the output plugin (CockroachDB) using JDBC. I am getting the error below.
JDBC - Connection is not valid. Please check connection string or that your JDBC endpoint is available. {:level=>:error, :file=>"logstash/outputs/jdbc.rb", :line=>"154", :method=>"setup_and_test_pool!"}
The destination (CockroachDB) is open for connections at the specified IP and port.
As the CockroachDB JDBC connection string is very similar to Postgres's, I tried the connection strings below, and still got the same error.
jdbc:postgresql://host/database
jdbc:postgresql://host/database?sslmode=disable
jdbc:postgresql://host:port/database
jdbc:postgresql://host:port/database?sslmode=disable
How do I connect to cockroachDB through JDBC from logstash output plugin?
Your JDBC connection strings are OK.
Do not forget that with JDBC the driver must be registered beforehand. You can do this either with Class.forName("org.postgresql.Driver") before your first JDBC call, or by invoking java.sql.DriverManager.registerDriver(new org.postgresql.Driver()); before you create your connection. Perhaps you forgot to register the driver?
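A minimal sketch of that registration step followed by a connection attempt (URL, database, and credentials are placeholders; 26257 is CockroachDB's default SQL port):

import java.sql.Connection;
import java.sql.DriverManager;

public class RegisterDriverExample {
    public static void main(String[] args) throws Exception {
        // Explicitly register the Postgres driver. JDBC 4+ drivers normally
        // auto-register via ServiceLoader, but doing it by hand rules that out
        // as the failure point while debugging.
        Class.forName("org.postgresql.Driver");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:26257/mydb?sslmode=disable", // placeholder
                "root", "")) {
            System.out.println("connected: " + conn.isValid(5));
        }
    }
}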
For posterity, this should be working now. The problem was that JDBC's isValid() method was failing because CockroachDB failed to prepare empty statements, which has since been fixed in CockroachDB.