No suitable driver found for jdbc:mysql in Kafka Connect - jdbc

connect-standalone.properties
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
bootstrap.servers=10.33.62.20:9092,10.33.62.110:9092,10.33.62.200:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/grid/1/mukul/confluent-5.0.0/share/java
source-sqlite.properties
name=test-source-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=5
connection.url=jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxxx
table.whitelist=banner_hourly_statistics_v2
group.id=test-mysql-kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=demo-1-distributed-config
offset.storage.topic=demo-1-distributed-offset
status.storage.topic=demo-1-distributed-status
bootstrap.servers=10.33.62.20:9092,10.33.62.110:9092,10.33.62.200:9092
mode=bulk
#incrementing.column.name=id
topic.prefix=test-sqlite-jdbc-
CMD: connect-standalone /grid/1/mukul/confluent-5.0.0/etc/kafka/connect-standalone.properties /grid/1/mukul/confluent-5.0.0/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
In the startup logs, it clearly shows the JDBC connector plugins being loaded:
[2018-08-09 06:59:30,072] INFO Loading plugin from: /grid/1/mukul/confluent-5.0.0/share/java/kafka-connect-jdbc (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:218)
[2018-08-09 06:59:30,133] INFO Registered loader: PluginClassLoader{pluginLocation=file:/grid/1/mukul/confluent-5.0.0/share/java/kafka-connect-jdbc/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:241)
[2018-08-09 06:59:30,133] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:170)
[2018-08-09 06:59:30,133] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:170)
But it fails with the following exception:
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx for configuration Couldn't open connection to jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxx
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx for configuration Couldn't open connection to jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
I tried changing the plugin directories, but that didn't work. I also tried moving the Confluent share/* directories to /usr/share/java, but that didn't work either.

Download the MySQL Connector/J JAR from https://dev.mysql.com/downloads/connector/j/5.1.html
Place it inside the plugin dir (next to the kafka-connect-jdbc JAR)
Run Connect
It will start pulling data from MySQL.
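A minimal sketch of those steps, reusing the plugin path and startup command from the question (the Connector/J file name is an illustrative assumption):
# copy the MySQL driver next to the kafka-connect-jdbc JARs
cp mysql-connector-java-5.1.46.jar /grid/1/mukul/confluent-5.0.0/share/java/kafka-connect-jdbc/
# restart the standalone worker so the plugin loader picks up the driver
connect-standalone /grid/1/mukul/confluent-5.0.0/etc/kafka/connect-standalone.properties /grid/1/mukul/confluent-5.0.0/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties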

Maybe a little late. I had the same "No suitable driver found" issue when connecting to DB2 using the Kafka JDBC connector.
1st Possible Solution:
I resolved it by placing the DB2 driver in the exact location where the JDBC connector is.
Within Kafka Connect:
find / -name kafka-connect-jdbc\*.jar
Once you have found the location from the above command, copy the DB2 jar to that location:
cp {your DB2 jar location}/db2.jar {location from the 'find' command}
Example
cp /Download/db2.jar /Users/share/java/kafka-connect-jdbc/
Restart Kafka Connect and it will pick up the DB2 driver.
2nd Possible Solution:
Download the jt400 jar (the jdk-8 variant) and put it next to the other JDBC drivers (DB2, SQL Server, etc.).
Happy coding :)

Related

Debezium content-based routing configuration

I'm using Confluent, so I've installed the Debezium connectors according to the Confluent docs using confluent-hub. In connect.properties I have the entry
plugin.path=/usr/share/java,/opt/confluent-6.0.0/share/confluent-hub-components
I need to use io.debezium.transforms.ContentBasedRouter (https://debezium.io/documentation/reference/1.3/configuration/content-based-routing.html),
so according to the Debezium doc I've downloaded debezium-scripting-1.3.1.Final.jar,
put it into
/opt/confluent-6.0.0/share/confluent-hub-components/, and also copied it into the
/opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/lib directory.
Here are the entries in my mysql_src.json connector:
"transforms": "unwrap,route",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.add.fields": "source.snapshot",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.__source_snapshot == 'false' ? 'test'"
When I try to configure/load this connector, I get the following error message:
[2020-12-15 22:18:45,351] ERROR [Worker clientId=connect-1, groupId=connect-cluster] Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1369)
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
Any suggestions on how to fix this problem?
According to the docs, you need to additionally obtain a JSR-223 script engine implementation and add its contents to the Debezium plug-in directories of your Kafka Connect environment, since:
Debezium does not come with any implementations of the JSR 223 API. To use an expression language with Debezium, you must download the JSR 223 script engine implementation for the language, and add it to your Debezium connector plug-in directories, along with any other JAR files used by the language implementation.
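As a hedged sketch, for Groovy that means dropping the language runtime and its JSR-223 binding next to the connector JARs (the Groovy version below is an illustrative assumption):
cp groovy-3.0.7.jar groovy-jsr223-3.0.7.jar groovy-json-3.0.7.jar /opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/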
I am not sure the configuration is correct, but I got past the first configuration problem (I hope). I'm facing another problem now, which I will describe in a different question.
I am not sure what was wrong; I did the following:
Clean up zookeeper directories
Clean up kafka directories
Run Kafka in distributed mode using the command-line start/stop scripts (not the Confluent CLI), as sketched below
This solved the java.lang.NoClassDefFoundError: io/debezium/DebeziumException error.
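For reference, a hedged sketch of those steps with the stock Kafka scripts (the state directories are the defaults from the sample configs and are assumptions; adjust dataDir and log.dirs to your setup):
# stop the services, then wipe the ZooKeeper and Kafka state directories
rm -rf /tmp/zookeeper /tmp/kafka-logs
# restart with the command-line scripts instead of the Confluent CLI
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties
bin/connect-distributed.sh config/connect-distributed.properties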

Kafka Connect Elasticsearch - NoSuchMethodError

I am trying to run the kafka-connect-elasticsearch plugin from Confluent in order to stream topics from Kafka (V0.11.0.1) directly into Elasticsearch (without putting Logstash in between).
I built the connector using Maven:
$ cd kafka-connect-elasticsearch
$ mvn clean package
I then created the required configuration file:
name=es-cluster-lab
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=filebeats-test
topic.index.map=filebeats-test:kafka_test_index
key.ignore=true
schema.ignore=true
connection.url=http://elastic:9200
type.name=log
As per the new Kafka Classpath Isolation spec, I also added the following line to my connect-standalone.properties file -
plugin.path=/home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/
I go to run the script ...
bin/connect-standalone.sh config/connect-standalone.properties config/elasticsearch-connect.properties
... and receive the below error.
[2017-09-14 16:08:26,599] INFO Loading plugin from: /home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/slf4j-api-1.7.25.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.collect.Sets$SetView.iterator()Lcom/google/common/collect/UnmodifiableIterator;
at org.reflections.Reflections.expandSuperTypes(Reflections.java:380)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:221)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:198)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:190)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:150)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:68)
I also tried to move the JAR files into the /app/kafka/libs directory (default CLASSPATH) and even tried to create a subdirectory /app/kafka/libs/connect_libs and add that manually to my CLASSPATH environment variable.
Not sure what my next step is besides putting Logstash between Kafka and Elastic.
Try changing the Guava version to 20 before you build it.
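A hedged sketch of what that could look like in the connector's pom.xml (pinning Guava as a direct dependency is one way to force the newer version; whether the project instead exposes a version property is an assumption I have not verified):
<!-- Guava 20 provides the Sets$SetView.iterator() signature that org.reflections expects -->
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>20.0</version>
</dependency>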
I think you are missing the star '*' at the end of the plugin path.
plugin.path=/home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/*

Connect to Teradata Using Airflow JDBC Connection

I'm trying to execute a SqlSensor task in Airflow using a connection to a Teradata database. The connection is configured as follows:
In particular, I have provided two driver paths separated by ", ", but I am not sure if that's the proper way to do it:
/home/airflow/java_sample/tdgssconfig.jar
/home/airflow/java_sample/terajdbc4.jar
When the DAG executes, it triggers the error message
[2017-08-02 02:32:45,162] {models.py:1342} INFO - Executing <Task(SqlSensor): check_running_batch> on 2017-08-02 02:32:12
[2017-08-02 02:32:45,179] {base_hook.py:67} INFO - Using connection to: jdbc:teradata://myservername.mycompanyname.org/database=MYDBNAME,TMODE=ANSI,CHARSET=UTF8
[2017-08-02 02:32:45,313] {sensors.py:109} INFO - Poking: SELECT BATCH_KEY FROM MYDBNAME.AUDIT_BATCH WHERE BATCH_OWNER='ARO_TEST' AND AUDIT_STATUS_KEY=1;
[2017-08-02 02:32:45,316] {base_hook.py:67} INFO - Using connection to: jdbc:teradata://myservername.mycompanyname.org/database=MYDBNAME,TMODE=ANSI,CHARSET=UTF8
[2017-08-02 02:32:45,497] {models.py:1417} ERROR - java.lang.RuntimeException: Class com.teradata.jdbc.TeraDriver not found
What am I doing wrong?
The appropriate way to input multiple jars in the connections page is to separate both fully qualified paths with a comma, which you did above.
I can confirm this is the approach I took and it worked (Airflow 1.10.1 and 1.10.2).
See: https://github.com/apache/airflow/blob/master/airflow/hooks/jdbc_hook.py#L51
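For illustration, a hedged sketch of the connection settings (the field names follow the Airflow JDBC connection form; the conn id, paths, and URL are taken from the question and the SqlSensor example below):
Conn Id:        ed_data_quality_edw_dev
Conn Type:      jdbc
Connection URL: jdbc:teradata://myservername.mycompanyname.org/database=MYDBNAME,TMODE=ANSI,CHARSET=UTF8
Driver Path:    /home/airflow/java_sample/tdgssconfig.jar,/home/airflow/java_sample/terajdbc4.jar
Driver Class:   com.teradata.jdbc.TeraDriver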
Bonus: If you use Ad Hoc Query in Data Profiling to test it out, you'll notice that you'll get an error when you send a SELECT statement because Airflow wraps it in a LIMIT clause which TD doesn't support.
The solution provided by my team member was to merge the two jars into a single jar file. After doing that and pointing the driver path at the new jar file, it worked as expected.
Here is the link to the JAR file: https://github.com/alexisrolland/linux-setup/blob/master/teradataDriverJdbc.jar
Here is a code snippet example using the connection in a SqlSensor task:
from airflow.sensors.sql_sensor import SqlSensor  # import path in Airflow 1.10.x

CheckRunningBatch = SqlSensor(
    task_id='check_running_batch',
    conn_id='ed_data_quality_edw_dev',
    sql="SELECT CASE WHEN MAX(BATCH_KEY) IS NOT NULL THEN 0 ELSE 1 END FROM DATABASE.AUDIT_BATCH WHERE STATUS_KEY=1;",
    poke_interval=300,
    dag=dag)

Unable to connect to Oracle with SchemaSpy

I've installed Oracle Instant Client 64-bit; when connecting with SchemaSpy I get the error message below.
PLEASE NOTE: Both these files exist
C:\app\instantclient_12_1\ojdbc6.jar
C:\app\instantclient_12_1\ocijdbc12.dll
And "C:\app\instantclient_12_1\" is in the PATH.
I've tried C:\app\instantclient_12_1\ojdbc7.jar as well, same result.
Windows 7, 64-bit.
Would greatly appreciate any help from anyone who got this to work correctly.
Error message:
Failed to load driver [oracle.jdbc.driver.OracleDriver] from classpath [file:/C:/app/instantclient_12_1/ojdbc6.jar]
Make sure the reported library (.dll/.lib/.so) from the following line can be
found by your PATH (or LIB*PATH) environment variable
java.lang.UnsatisfiedLinkError: C:\app\instantclient_12_1\ocijdbc12.dll: Specified process not found
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(Unknown Source)
at java.lang.ClassLoader.loadLibrary(Unknown Source)
at java.lang.Runtime.loadLibrary0(Unknown Source)
at java.lang.System.loadLibrary(Unknown Source)
at oracle.jdbc.driver.T2CConnection$1.run(T2CConnection.java:4115)
at java.security.AccessController.doPrivileged(Native Method)
at oracle.jdbc.driver.T2CConnection.loadNativeLibrary(T2CConnection.java:4111)
at oracle.jdbc.driver.T2CConnection.logon(T2CConnection.java:308)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:662)
at oracle.jdbc.driver.T2CDriverExtension.getConnection(T2CDriverExtension.java:54)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:560)
at net.sourceforge.schemaspy.SchemaAnalyzer.getConnection(SchemaAnalyzer.java:582)
at net.sourceforge.schemaspy.SchemaAnalyzer.analyze(SchemaAnalyzer.java:157)
at net.sourceforge.schemaspy.Main.main(Main.java:42)
Here's how to run SchemaSpy 6 against an Oracle database:
Dependencies
Make sure you have the following available on your machine:
The latest version from schemaspy.org; the following describes the process for schemaspy-6.0.0-rc1.
The Oracle JDBC thin driver, otherwise you'll have to mess around with Oracle OCI. You can get it from the Oracle Database 12.1.0.2 JDBC Driver & UCP Downloads page.
SchemaSpy uses GraphViz to generate the diagrams; get it from graphviz.org. You'll need to update your PATH variable: add C:\Program Files (x86)\Graphviz2.38\bin to it (make sure the version fits the one you downloaded).
Database Type
Note, SchemaSpy supports Oracle OCI (-t ora) and Oracle Thin (-t orathin) as database types. To get the list of available database types:
java -jar schemaspy-6.0.0-rc1.jar -dbhelp
Configuration
You can put most configuration parameters into a file called schemaspy.properties; put this file into the same directory as schemaspy-6.0.0-rc1.jar.
Example schemaspy.properties:
# type of database. Run with -dbhelp for details
schemaspy.t=orathin
# path to the downloaded Oracle JDBC driver, for example
schemaspy.dp=C:\tools\dbdoc\drivers\ojdbc7.jar
# database properties: host, port number, name, user, password
schemaspy.host=[oracle database host]
schemaspy.port=[oracle database port, usually 1521]
schemaspy.db=[database name or SID]
schemaspy.u=[username]
schemaspy.p=[password; for complex ones, put it in quotation marks]
# output dir to save generated files
schemaspy.o=C:\tools\dbdoc\output
# db schema for which to generate diagrams
schemaspy.s=[schema name]
Generate documentation
With the configuration in place, now all you have to do is run:
java -jar schemaspy-6.0.0-rc1.jar
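Equivalently, a hedged sketch of the same run using command-line flags instead of schemaspy.properties (the flag names mirror the property keys above; bracketed values are placeholders):
java -jar schemaspy-6.0.0-rc1.jar -t orathin -dp C:\tools\dbdoc\drivers\ojdbc7.jar -host [host] -port 1521 -db [database name or SID] -u [username] -p [password] -s [schema name] -o C:\tools\dbdoc\output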

Error: Unable to connect to database. Driver class not found: com.ibm.db2.jcc.DB2Driver

I am trying to connect to a DB2 database that I just created. I have CLASSPATH pointing to the two necessary jar files: C:\Program Files (x86)\IBM\SQLLIB\java\db2jcc.jar;C:\Program Files (x86)\IBM\SQLLIB\java\db2jcc_license_cu.jar
My connection configuration file looks like
database.driverClassName=com.ibm.db2.jcc.DB2Driver
database.url=jdbc:db2:test
database.userName=test
database.schemaNames=test
database.password=test
database.driverLocation='C:\Program Files (x86)\IBM\SQLLIB\java\db2jcc.jar'
When I try to connect I get the error:
Creating data source. Driver: com.ibm.db2.jcc.DB2Driver, url: jdbc:db2:test, user: test, password: test
# Error: Unable to connect to database. Driver class not found: com.ibm.db2.jcc.DB2Driver
# Cause: com.ibm.db2.jcc.DB2Driver
I am fairly new to this, so I would appreciate any help on what I am doing wrong.
Thanks
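As a minimal standalone check (a sketch, not a definitive fix; the driver class, URL, and credentials are taken from the question), you can try loading the driver and opening the connection outside the tool. If Class.forName fails here too, the jars are not on the classpath the JVM actually uses:
import java.sql.Connection;
import java.sql.DriverManager;

public class Db2ConnectTest {
    public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException if db2jcc.jar is not on the classpath
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        // jdbc:db2:test is a type-2 URL and needs a locally catalogued database;
        // a type-4 URL would look like jdbc:db2://host:50000/test
        try (Connection c = DriverManager.getConnection("jdbc:db2:test", "test", "test")) {
            System.out.println("Connected: " + c.getMetaData().getDatabaseProductName());
        }
    }
}
Compile and run it with the same jars your CLASSPATH names:
javac Db2ConnectTest.java
java -cp ".;C:\Program Files (x86)\IBM\SQLLIB\java\db2jcc.jar;C:\Program Files (x86)\IBM\SQLLIB\java\db2jcc_license_cu.jar" Db2ConnectTest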
