Create Kafka Connect without Confluent - Windows

I recently started with Kafka and I am trying to create a Kafka Connect connector for Oracle, but I can't get it working. The information I found is about Confluent, but that doesn't work on Windows ... How can I configure or create one with Java?
For my test I use a standalone connection:
cmd .\windows\connect-standalone.bat .\config\connect-standalone.properties .\config\connect-bbdd.properties ->
name=jdbc-conector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=dbc:oracle:thin#localhost:xe
connection.user: user
connection.password: pwd
mode = bulk
topic.prefix=test
table.whitelist: mytable
Error:
WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
jul 21, 2019 10:36:13 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains
empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource
contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2019-07-21 22:36:13,886] ERROR Failed to create job for ..\config\connect-bbdd.properties (org.apache.kafka.connect.cli.ConnectStandalone)
[2019-07-21 22:36:13,888] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration
is invalid and contains the following 2 error(s):
Invalid value java.sql.SQLException: No suitable driver found for jdbc:oracle:thin#localhost:xe
for configuration Couldn't open connection to jdbc:oracle:thin#localhost:xe
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:118)
...and other errors from "any class loader (org.reflections.Reflections)"

The confluent command doesn't work natively in Windows, no.
But connect-distributed and connect-standalone are not Confluent-only scripts; they ship with Apache Kafka as well, and both should work on Windows and load the JDBC connector provided with Confluent Platform, if that is what you downloaded.
Otherwise, if you have only Apache Kafka, you will need to download the JDBC connector separately and set it up yourself via the plugin.path property mentioned in the Connect config files.
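For instance, a minimal sketch of the relevant line in connect-standalone.properties, assuming you extracted the connector under C:\kafka\plugins (that directory name is an assumption):
# connect-standalone.properties - point plugin.path at the directory that contains
# the extracted kafka-connect-jdbc folder (forward slashes avoid the
# backslash-escaping rules of .properties files)
plugin.path=C:/kafka/plugins
The worker scans that directory at startup and makes io.confluent.connect.jdbc.JdbcSourceConnector available to your connector config.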

This error that you get:
No suitable driver found for jdbc:oracle:thin#localhost:xe
for configuration Couldn't open connection to jdbc:oracle:thin#localhost:xe
is because you've not made the Oracle JDBC driver available. See https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector#jdbc-drivers.
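As a rough sketch of the fix, assuming Confluent Platform's default layout on Windows (the paths below are assumptions), download the Oracle driver JAR and place it next to the connector's own JARs, then restart the worker:
REM copy the Oracle JDBC driver (e.g. ojdbc8.jar, downloaded from Oracle) next to the connector JARs
copy ojdbc8.jar C:\confluent\share\java\kafka-connect-jdbc\
REM restart the standalone worker so the driver is picked up
.\windows\connect-standalone.bat .\config\connect-standalone.properties .\config\connect-bbdd.properties
Any directory on the worker's plugin.path works equally well; the key point is that the driver JAR must be visible to the same class loader as the JDBC connector.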

Related

No pools configured yet in tomcat for apex

Tomcat is not starting for Oracle APEX because it's pointing to a wrong path which does not exist. Is there a way to fix this? Below is the error message.
INFO [main] . No pools configured yet
SEVERE [main] oracle.dbtools.common.logging.LegacyLoggingAdaptor.severe Error writing to:
D:\oracle\product\old_ords\defaults.xml, D:\oracle\product\old_ords\defaults.xml (The system cannot find the path specified)

Why does Test Connection fail in Wildfly 20 Using SQL Anywhere sajdbc4 driver?

I had Wildfly 10 running previously and have just upgraded to Wildfly 20 (under Ubuntu 20). My configuration from Wildfly 10 no longer works when it comes to getting the Sybase SQL Anywhere 17 sajdbc4 driver working. When I "Test Connection" it fails. I am using the same configuration and testing against the exact same (SQL Anywhere High Availability) database server.
"Test Connection" on the following Datasource triggers an "Invalid ODBC handle" error:
<datasource jndi-name="java:jboss/datasources/TestDB" pool-name="TestDB" spy="true" tracking="true" enlistment-trace="true">
<connection-url>jdbc:sqlanywhere:Host=192.168.1.45:19000,192.168.1.45:19001;ServerName=TestDB</connection-url>
<driver>sajdbc4.jar</driver>
<security>
<user-name>...</user-name>
<password>...</password>
</security>
</datasource>
Connection is not valid
Caused by: java.sql.SQLException: Invalid ODBC handle
at deployment.sajdbc4.jar//sap.jdbc4.sqlanywhere.IDriver.makeODBCConnection(Native Method)
at deployment.sajdbc4.jar//sap.jdbc4.sqlanywhere.IDriver.connect(IDriver.java:809)
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:321)
... 35 more
How I set this up:
I used the console to Deploy the sajdbc4.jar and that appears to work fine. I see no errors and sajdbc4 shows up as Deployed in the console and it also shows up as a JDBC Driver in the Subsystems. Here is what was created in standalone.xml after using the console:
<deployment name="sajdbc4.jar" runtime-name="sajdbc4.jar">
    <content sha1="b690ff7a8ba1a3c2e8dd5079138b7970d969c2b9"/>
</deployment>
Next I had to ensure that the java.library.path and classpath included the path to the sajdbc4.jar and its support files so Wildfly can find them. To do so I added the "HACK" to the following in standalone.conf:
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
# ADDED FOLLOWING HACK
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main -cp .:/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main/sajdbc4.jar"
echo "Java Properties Next:"
java -XshowSettings:properties -version
else
echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi
Finally, I added the datasource block shown at the top. After starting Wildfly TestDB shows up as a Datasource in the Datasources Subsystem but when I Test Connection I get the "Invalid ODBC handle" error.
I feel confident that the driver and all its support files are "working" because I have a very simple Java test app that just makes a connection to TestDB, fetches from a table and displays the rows. Note that it uses the exact same java.library.path and classpath as I set in standalone.conf:
cd $HOME/Desktop
export LD_LIBRARY_PATH=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main
export CLASSPATH=.:/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main/sajdbc4.jar
java sajdbc4DriverTest.java
Note that server.log shows no errors and in fact shows lines like:
[org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0027: Starting deployment of "sajdbc4.jar" (runtime-name: "sajdbc4.jar")
...
[org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0005: Deploying non-JDBC-compliant driver class sap.jdbc4.sqlanywhere.IDriver (version 4.0)
[org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-3) WFLYJCA0018: Started Driver service with driver-name = sajdbc4.jar
[org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-3) WFLYJCA0001: Bound data source [java:jboss/datasources/TestDB]
...
[org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "sajdbc4.jar" (runtime-name : "sajdbc4.jar")
Note that my connection string is for connecting to a SQL Anywhere High Availability system (hence the two URLs). In Wildfly 20 I see that there is now a new "HA URL Separator" field in the console's Datasource definition page. I tried setting that to a comma and that just changed the Test Connection error to "Unable to create connection from URL":
2020-08-25 11:45:08,378 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (External Management Request Threads -- 1) IJ000604: Throwable while attempting to get a new connection: null: javax.resource.ResourceException: IJ031085: Unable to create connection from URL: jdbc:sqlanywhere:Host=192.168.1.45:19000,192.168.1.45:19001;ServerName=TestDB
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.getHALocalManagedConnection(LocalManagedConnectionFactory.java:381)
How do I get "Test Connection" to work?
Thank you in advance.
The problem turned out to be related to the fact that I was running Wildfly as a service, and apparently my efforts above to set java.library.path were not taking effect. I know the reason for the error, but I do not know how to set the path when running as a service.
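A minimal sketch of one way around this, assuming Wildfly is started from a systemd unit (the unit path /etc/systemd/system/wildfly.service is an assumption): on Linux the JVM seeds java.library.path from LD_LIBRARY_PATH, so the variable can be set on the service itself:
# /etc/systemd/system/wildfly.service (assumed location) - under the [Service] section
[Service]
Environment="LD_LIBRARY_PATH=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main"
followed by systemctl daemon-reload and a restart of the service. If the service is driven by an init script instead, the equivalent export would go into that script's environment file.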

jOOQ and Spring Boot: Upgraded jOOQ via starter: failed to bind 'spring.jooq.sql-dialect' to org.jooq.SQLDialect

So I upgraded spring-boot-starter-parent to 2.2.8.RELEASE, which results in jOOQ 3.12.4. Previously I had 3.11.5.
I am getting the following error:
Failed to bind properties under 'spring.jooq.sql-dialect' to org.jooq.SQLDialect:
Property: spring.jooq.sqldialect
Value: MYSQL_5_7
Origin: "spring.jooq.SQLDialect" from property source "applicationConfig: [classpath:/config/application.yaml]"
Reason: failed to convert java.lang.String to org.jooq.SQLDialect
Here is what my application.yaml was before
spring:
  jooq:
    sql-dialect: mysql_5_7
If you read the whole error message you will see this:
Failed to bind properties under 'spring.jooq.sql-dialect' to org.jooq.SQLDialect:
Property: spring.jooq.sql-dialect
Value: MYSQL_5_7
Origin: class path resource [application.properties]:2:25
Reason: failed to convert java.lang.String to org.jooq.SQLDialect
Action:
Update your application's configuration. The following values are valid:
CUBRID
DEFAULT
DERBY
FIREBIRD
H2
HSQLDB
MARIADB
MYSQL
POSTGRES
SQL99
SQLITE
MYSQL_5_7 is not a value supported by the jOOQ open source edition. It's only available in the commercial distributions:
MYSQL_5_7
@Pro
public static final SQLDialect MYSQL_5_7
The MySQL 5.7 dialect.
This dialect is available in commercial jOOQ distributions, only.
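One way out in the open source edition is to fall back to the generic MYSQL dialect in application.yaml, which is in the list of valid values above:
spring:
  jooq:
    sql-dialect: mysql
Alternatively, keep MYSQL_5_7 and switch to a commercial jOOQ distribution.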

Netezza Jar not working with source jdbc connector in confluent

I'm facing an issue while connecting to a Netezza database from the Confluent source JDBC connector.
source-jdbc connector properties:
name=source-netezza
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:netezza://10.245.*.*:5480/DATABASE=mydb,user=admin,password=password
table.whitelist=netezza_str
mode=bulk
topic.prefix=netezza_str_
After executing the command I'm getting the below error:
Invalid value java.sql.SQLException: No suitable driver found for jdbc:netezza://10.245.*.*:5480/mydb,user=admin,password=password for configuration Couldn't open connection to jdbc:netezza://10.245.*.*:5480/mydb,user=admin,password=password
I've tried with the netezza-3.2.2 jar and the nzjdbc jar.

Unable to start Hive CLI Hadoop(MapR)

I am trying to access the Hive CLI. However, it is failing to start with the following AccessControl issue.
Strangely enough, I am able to query Hive data from Hue without the AccessControl issue. However, the Hive CLI is not working.
I am on a MapR cluster.
Any help is much appreciated.
[<user_name>#<edge_node> ~]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/mapr/hive/hive-2.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in file:/opt/mapr/hive/hive-2.1/conf/hive-log4j2.properties Async: true
2017-09-23 23:52:08,988 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-api-jdo-4.2.4.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-api-jdo-4.2.1.jar."
2017-09-23 23:52:08,993 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-core-4.1.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-core-4.1.6.jar."
2017-09-23 23:52:09,004 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-rdbms-4.1.19.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-rdbms-4.1.7.jar."
2017-09-23 23:52:09,038 INFO [main] DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2017-09-23 23:52:09,039 INFO [main] DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2017-09-23 23:52:14,2251 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:2172 Thread: 20235 mkdirs failed for /user/<user_name>, error 13
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name>
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:617)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:646)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name>
at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1256)
at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1276)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:823)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:917)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:616)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:256)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.beginOpen(TezSessionState.java:220)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:614)
... 10 more
The error is saying you're denied access to create a directory in the file system. This is likely /user/<user_name>, which will need to be created by the HDFS / MapR FS superuser, as sketched below.
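A minimal sketch of what that looks like, run as a user with filesystem superuser rights (the <user_name> placeholder matches the one in the error):
hadoop fs -mkdir -p /user/<user_name>
hadoop fs -chown <user_name> /user/<user_name>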
I am able to query hive data from Hue without the AccessControl
Hue communicates via Thrift and HiveServer2.
Hive CLI bypasses HiveServer2 and is deprecated.
You should use Beeline instead.
beeline -n $(whoami) -u jdbc:hive2://hiveserver:10000/default
And if you're in a kerberized cluster, then you'll need some extra options there.
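For example, with Kerberos you typically authenticate with kinit first and pass the HiveServer2 principal in the JDBC URL (the realm below is a placeholder):
kinit
beeline -u "jdbc:hive2://hiveserver:10000/default;principal=hive/_HOST@EXAMPLE.COM"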
