JDBC Kafka Connect with DB2

I'm struggling to get Confluent's Kafka Connect JDBC connector to connect to DB2.
I am running an Ubuntu instance inside Docker for testing purposes. The solution needs to be deployed to Kubernetes, so Docker it is.
I have installed the Confluent Platform using apt-get after adding their repos. All services are running: Kafka, ZooKeeper, Schema Registry, and Kafka REST.
I have created my kafka connect properties file as described in this article: https://www.progress.com/blogs/build-an-etl-pipeline-with-kafka-connect-via-jdbc-connectors
I assumed this would work the same for DB2. The step I'm missing from the above tutorial is this one:
java -jar PROGRESS_DATADIRECT_JDBC_POSTGRESQL_ALL.jar
I tried to run it like this:
java -jar /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I get this error:
no main manifest attribute, in /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I proceeded anyway, but of course I get an error:
No suitable driver found for jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
This is my command to start the connector:
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /etc/kafka-connect-jdbc/db2.properties
This is my properties file:
name=test-db2-jdbc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=modified_time
topic.prefix=test_jdbc_
table.whitelist=data_log
I am sure I'm close. I just need to get the DB2 driver to register inside java or for kafka connect to pick it up and be able to use it.
I have tried other values for connector.class, but if I change it to the driver class name, as I would in other Java apps, I get this error:
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Class com.ibm.db2.jcc.DB2Jcc does not implement Connector
Any help or suggestions will be appreciated.

I am the author of the tutorial you mentioned. I just noticed this thread, and I see that you are using the IBM-supplied DB2 driver (db2jcc.jar) with the DataDirect DB2 connection string (jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db), which is why, as soon as you changed the connection string to the IBM driver's format, you were able to connect properly.
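For reference, a source connector config using the IBM JCC driver's URL format (rather than the DataDirect one) would look roughly like the sketch below; the host, credentials, and database name are simply carried over from the question, and connection.user/connection.password are the standard JDBC source connector properties:

name=test-db2-jdbc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# IBM JCC URL format: jdbc:db2://<host>:<port>/<database>
connection.url=jdbc:db2://db2-server:50000/test_db
connection.user=db2admin
connection.password=pwd
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=modified_time
topic.prefix=test_jdbc_
table.whitelist=data_log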

Related

TIBCO BusinessWorks 6.6 JDBC Resource connection - Snowflake

Has anyone successfully created a JDBC Resource connection for the Snowflake database? I have a specific case where I would like to connect directly, not through the Snowflake plugin. I am stuck at database driver selection: I can't import snowflake-jdbc-3.13.24.jar to choose it in the dropdown menu.
I already tried this, but it doesn't work:
https://docs.tibco.com/pub/activematrix_businessworks/6.2.1/doc/html/GUID-DF12A927-F788-46DC-ABA1-0A1BA797DE2F.html
I have never worked with Snowflake, but the BusinessWorks 6.6 documentation provides updated instructions on how to set up a custom JDBC driver in the BusinessWorks environment. You can check it at the following URL:
https://docs.tibco.com/pub/activematrix_businessworks/6.6.1/doc/html/GUID-DF12A927-F788-46DC-ABA1-0A1BA797DE2F.html

Export data from Kafka to Oracle

I am trying to export data from Kafka to an Oracle DB. I've searched related questions and the web, but could not figure out whether we need a platform (Confluent etc.) or not. I have read the link below, but it's not clear enough.
https://docs.confluent.io/3.2.2/connect/connect-jdbc/docs/sink_connector.html
So, what do we actually need to export data without a third-party platform? Thanks in advance.
It's not clear what you mean by "third-party" here
What you linked to is Kafka Connect, which is Apache 2.0 Licensed and open source.
Kafka Connect is a plugin ecosystem: you install connectors individually, written by anyone, or write your own, just like any other Java dependency (i.e. a third party).
The JDBC connector just happens to be maintained by Confluent, and you can use the Confluent Hub CLI to install it within any Kafka Connect distribution (or use Kafka Connect Docker images from Confluent), for example as shown below.
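As a rough example (the version tag is illustrative, not something from the question), installing the JDBC connector into an existing Kafka Connect installation with the Confluent Hub CLI looks like this:

# install the Confluent JDBC connector plugin for the local Connect worker
confluent-hub install confluentinc/kafka-connect-jdbc:latest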
Alternatively, you can use Apache Spark, Flink, NiFi, or many other Kafka consumer libraries to read the data and then start an Oracle transaction per record batch.
Or you can explore non-JVM Kafka libraries and use a language you're more familiar with for the Oracle operations.
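If you do stay with Kafka Connect, a JDBC sink configuration for Oracle would look roughly like the following sketch; the topic, host, service name, and credentials are placeholders, not values from the question:

name=oracle-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic
# Oracle thin-driver URL: jdbc:oracle:thin:@//<host>:<port>/<service>
connection.url=jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1
connection.user=kafka_user
connection.password=changeme
auto.create=true
insert.mode=insert

The Oracle JDBC driver JAR still has to be placed alongside the connector plugin so the sink can load it.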

How to implement kafka-connect using apache-kafka instead of confluent

I would like to use an open-source version of Kafka Connect instead of the Confluent one, as it appears that the Confluent CLI is not for production and only for dev. I would like to be able to listen to changes on a MySQL database on AWS EC2. Can someone point me in the right direction?
Kafka Connect is part of Apache Kafka. Period. If you want to use Kafka Connect you can do so with any modern distribution of Apache Kafka.
You then need a connector plugin to use with Kafka Connect, specific to your source technology. For integrating with a database there are various considerations, and for MySQL specifically you have:
Kafka Connect JDBC - see it in action here
Debezium - see it in action here
The Confluent CLI is just a tool for helping manage and deploy Confluent Platform on developer machines. Confluent Platform itself is widely used in production.
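As a minimal sketch of running Kafka Connect from a plain Apache Kafka download (the Kafka version, paths, and plugin directory below are illustrative assumptions):

# unpack an Apache Kafka release (version is just an example)
tar xzf kafka_2.13-3.6.0.tgz && cd kafka_2.13-3.6.0

# point plugin.path at a directory holding your connector plugins
# (e.g. Debezium MySQL or the Kafka Connect JDBC connector)
echo "plugin.path=/opt/connect-plugins" >> config/connect-distributed.properties

# start a distributed Connect worker against your existing brokers
bin/connect-distributed.sh config/connect-distributed.properties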

adding kafka connect jdbc driver

I'm trying to access an SAP Advantage DB with Kafka Connect using JDBC.
I'm using the Docker container, and I have added the JDBC driver JAR
FROM: http://devzone.advantagedatabase.com/dz/content.aspx?Key=20&Release=19&Product=12&Platform=11
When I try to use it, I get a bad URL error:
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
mode=bulk
topic.prefix=adv-
connection.password=password
tasks.max=1
connection.user=admin
name=JdbcSourceConnector2
connection.url=jdbc:extendedsystems:advantage://localhost:6262/mydb
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
You need to register the database in the dialect classes of the JDBC connector; only then will the "advantage" URL be recognized.
Add an Advantage dialect to the connector and rebuild it, and you will no longer get the bad URL error. A rough sketch of that change follows.
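The following is only a hypothetical sketch of such a dialect, modeled on the pattern the built-in dialects in the kafka-connect-jdbc source (package io.confluent.connect.jdbc.dialect) follow; the class name, the exact subprotocol strings, and the provider registration details are assumptions to verify against the connector version you are building:

// AdvantageDatabaseDialect.java - hypothetical, not shipped with the connector
package io.confluent.connect.jdbc.dialect;

import io.confluent.connect.jdbc.dialect.DatabaseDialectProvider.SubprotocolBasedProvider;
import org.apache.kafka.common.config.AbstractConfig;

public class AdvantageDatabaseDialect extends GenericDatabaseDialect {

  // Provider mapping the Advantage JDBC subprotocol(s) to this dialect.
  // It must be registered the same way the built-in providers are
  // (recent connector versions discover providers via Java's ServiceLoader).
  public static class Provider extends SubprotocolBasedProvider {
    public Provider() {
      super(AdvantageDatabaseDialect.class.getSimpleName(),
            "extendedsystems", "extendedsystems:advantage");
    }

    @Override
    public DatabaseDialect create(AbstractConfig config) {
      return new AdvantageDatabaseDialect(config);
    }
  }

  public AdvantageDatabaseDialect(AbstractConfig config) {
    super(config);
  }
}

After rebuilding, the worker should then accept connection.url=jdbc:extendedsystems:advantage://localhost:6262/mydb instead of reporting a bad URL.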

WSO2 ESB 4.0.3 and governance registry configuration

I have one server running the ESB and another running the governance registry.
I am using the embedded h2 database. I can't connect the ESB to the Registry.
I get the following error
SQLNestedException: Cannot create JDBC driver of class 'org.h2.Driver' for connect URL
The JAR with the driver is on the classpath and works with a local H2 instance.
Any help appreciated
I believe this can be due to the H2 database not being able to accept multiple connections, but you need to provide more details for this question to be answered, perhaps the entire stack trace. Usually, when we work with remote registries, we use MySQL or a similar DB.
Since the H2 driver is shipped with the product and is already on the classpath anyway, this can't be due to a problem with the H2 driver itself, unless you changed the driver.
