Kafka Connect cannot connect to Oracle database

I am trying to create a topic in Kafka. When I send a POST request to Kafka Connect to create a topic, the connector is created but the topic is not. When I checked the Kafka Connect log I saw the error below:
Exception in thread "Thread-14" org.apache.kafka.connect.errors.ConnectException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:69)
at io.confluent.connect.jdbc.source.TableMonitorThread.updateTables(TableMonitorThread.java:141)
at io.confluent.connect.jdbc.source.TableMonitorThread.run(TableMonitorThread.java:76)
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:774)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:211)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:88)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:66)
... 2 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:523)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:521)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:660)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:286)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1438)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:518)
... 10 more
Caused by: java.io.IOException: Connection refused, socket connect lapse 1 ms. /10.206.41.145 1521 0 1 true
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
at oracle.net.nt.ConnOption.connect(ConnOption.java:161)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)
... 15 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at oracle.net.nt.TimeoutSocketChannel.<init>(TimeoutSocketChannel.java:81)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:169)
You can see my POST request below:
{
  "name": "jdbc_source_oracle_order",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@10.206.41.111:1521:ORCLCDB",
    "connection.user": "SYS AS SYSDBA",
    "connection.password": "123456",
    "topic.prefix": "oracle01-",
    "mode": "timestamp+incrementing",
    "table.whitelist": "SYS.oc_order",
    "incrementing.column.name": "order_id",
    "validate.non.null": false
  }
}
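For reference, a config like this is typically submitted to the Connect REST API roughly as follows (a sketch; the worker address localhost:8083 and the file name are assumptions):

# Save the JSON above as jdbc_source_oracle_order.json, then:
curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d @jdbc_source_oracle_order.json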
When I check the connector status, the task list is also empty:
{
  "name": "jdbc_source_oracle_order",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "oracle01-",
    "connection.password": "kafka_connect",
    "validate.non.null": "false",
    "connection.user": "kafka_connect as sysdba",
    "task.max": "3",
    "name": "jdbc_source_oracle_order",
    "connection.url": "jdbc:oracle:thin:@10.206.43.77:1521:ORCLCDB",
    "table.whitelist": "sys.oc_order"
  },
  "tasks": [],
  "type": "source"
}
I could not solve the problem. How can I solve this?
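"Connection refused" is a plain TCP-level rejection: nothing reachable was listening on port 1521 at the address the driver tried. Note that the stack trace shows the attempt going to 10.206.41.145 while the POSTed config points at 10.206.41.111, which is worth reconciling. A quick check (a sketch; addresses taken from the config and the trace above, and it must run from the machine or container hosting the Connect worker, which may be on a different network than your shell):

# From the Kafka Connect worker host/container:
nc -vz 10.206.41.111 1521   # does the Oracle listener accept TCP connections?
nc -vz 10.206.41.145 1521   # the address the stack trace actually tried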

Related

Unable to ExtractField with SMT transformation in Oracle database

I'm not able to perform the SMT transformation "ExtractField" to extract a field from the key struct to a simple long value with an Oracle database. It works fine with a Postgres database.
I tried the "ReplaceField" SMT to rename the key and it works fine. I suspect a problem in the class "org.apache.kafka.connect.transforms.ExtractField" in the schema handling used to get the field. Schema handling seems to work differently between "ReplaceField" and "ExtractField".
Oracle database version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.8.0.0.0
Debezium connect: 1.6
Kafka version: 2.7.0
Instant Client basic (Oracle client and drivers): 21.3.0.0.0
I got an "Unknown field: ID_MYTABLE" error:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:339)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: Unknown field: ID_MYTABLE
at org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:65)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 11 more
Here is my Kafka connector configuration:
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.server.name": "serverName",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.url": "jdbc:oracle:thin:...",
    "database.dbname": "dbName",
    "database.pdb.name": "PDBName",
    "database.connection.adapter": "logminer",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.data",
    "schema.include.list": "mySchema",
    "table.include.list": "mySchema.myTable",
    "log.mining.strategy": "online_catalog",
    "snapshot.mode": "initial",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "transforms": "unwrap,route,extractField",
    "transforms.extractField.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractField.field": "ID_MYTABLE",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$1_$2_$3"
  }
}
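Since "Unknown field" means the key struct the SMT sees does not carry a field by that exact name, a first diagnostic (a sketch; the broker address comes from the config above, and the topic name assumes the RegexRouter rewrote serverName.mySchema.myTable to serverName_mySchema_myTable) is to look at what field names the key actually carries. With the JSON key converter the key prints as readable JSON, even though the Avro-encoded value will not:

# Print one record's key next to its value to inspect the key's field names
kafka-console-consumer --bootstrap-server kafka:9092 \
  --topic serverName_mySchema_myTable \
  --property print.key=true \
  --from-beginning --max-messages 1

If the printed key uses a different name or case than ID_MYTABLE, "transforms.extractField.field" has to match it exactly.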

Spring Actuator errors out the health check during startup: RedisReactiveHealthIndicator: Redis health check failed

We are seeing the below error after upgrading Spring Boot to 2.2.11.RELEASE. I think the first health check is failing with the below error, but when I invoke the health endpoint I see that the health check succeeds.
2020-12-28 05:42:08.840 WARN 1 --- [oundedElastic-8] o.s.b.a.r.RedisReactiveHealthIndicator : Redis health check failed
org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to xc-dev-redis.xylem-cloud.com:6379
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1511) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.getConnection(LettuceConnectionFactory.java:1419) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getNativeConnection(LettuceConnectionFactory.java:1205) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getConnection(LettuceConnectionFactory.java:1188) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getSharedReactiveConnection(LettuceConnectionFactory.java:962) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getReactiveConnection(LettuceConnectionFactory.java:439) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getReactiveConnection(LettuceConnectionFactory.java:99) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at reactor.core.publisher.MonoSupplier.subscribe(MonoSupplier.java:56) ~[reactor-core-3.3.11.RELEASE.jar!/:3.3.11.RELEASE]
at reactor.core.publisher.Mono.subscribe(Mono.java:4213) ~[reactor-core-3.3.11.RELEASE.jar!/:3.3.11.RELEASE]
at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:124) ~[reactor-core-3.3.11.RELEASE.jar!/:3.3.11.RELEASE]
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84) ~[reactor-core-3.3.11.RELEASE.jar!/:3.3.11.RELEASE]
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37) ~[reactor-core-3.3.11.RELEASE.jar!/:3.3.11.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_272]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_272]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[na:1.8.0_272]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_272]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_272]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_272]
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to xc-dev-redis.xylem-cloud.com:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:230) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at io.lettuce.core.RedisClient.connect(RedisClient.java:207) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at org.springframework.data.redis.connection.lettuce.StandaloneConnectionProvider.lambda$getConnection$1(StandaloneConnectionProvider.java:115) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at java.util.Optional.orElseGet(Optional.java:267) ~[na:1.8.0_272]
at org.springframework.data.redis.connection.lettuce.StandaloneConnectionProvider.getConnection(StandaloneConnectionProvider.java:115) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.getConnection(LettuceConnectionFactory.java:1417) ~[spring-data-redis-2.2.11.RELEASE.jar!/:2.2.11.RELEASE]
... 16 common frames omitted
Caused by: java.lang.InterruptedException: null
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347) ~[na:1.8.0_272]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) ~[na:1.8.0_272]
at io.lettuce.core.DefaultConnectionFuture.get(DefaultConnectionFuture.java:68) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:227) ~[lettuce-core-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
... 21 common frames omitted
And here is the response from the health endpoint:
{
  "status": "UP",
  "components": {
    "clientConfigServer": {
      "status": "UNKNOWN",
      "details": {
        "error": "no property sources located"
      }
    },
    "discoveryComposite": {
      "description": "Discovery Client not initialized",
      "status": "UNKNOWN",
      "components": {
        "discoveryClient": {
          "description": "Discovery Client not initialized",
          "status": "UNKNOWN"
        }
      }
    },
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": 536858308608,
        "free": 463599861760,
        "threshold": 10485760
      }
    },
    "hystrix": {
      "status": "UP"
    },
    "kubernetes": {
      "status": "UP",
      "details": {
        "inside": true,
        "namespace": "dev",
        "podName": "xc-api-gateway-647d8c4f5f-sfqx4",
        "podIp": "10.16.64.36",
        "serviceAccount": "default",
        "nodeName": "ip-10-16-64-163.ec2.internal",
        "hostIp": "10.16.64.163",
        "labels": {
          "app": "xc-api-gateway",
          "draft": "draft-app",
          "pod-template-hash": "647d8c4f5f"
        }
      }
    },
    "ping": {
      "status": "UP"
    },
    "reactiveDiscoveryClients": {
      "description": "Discovery Client not initialized",
      "status": "UNKNOWN",
      "components": {
        "Kubernetes Reactive Discovery Client": {
          "description": "Discovery Client not initialized",
          "status": "UNKNOWN"
        },
        "Simple Reactive Discovery Client": {
          "description": "Discovery Client not initialized",
          "status": "UNKNOWN"
        }
      }
    },
    "redis": {
      "status": "UP",
      "details": {
        "version": "5.0.5"
      }
    },
    "refreshScope": {
      "status": "UP"
    }
  }
}
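Note the WARN is logged by the reactive health indicator at startup, while the endpoint reflects the state at request time, which is why the two can disagree. A quick way to re-check just the Redis component once the app is up (a sketch; the service port and actuator base path are assumptions):

curl -s http://localhost:8080/actuator/health | jq '.components.redis'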

Confluent Kafka JDBC Source to Oracle EBR table

Confluent JDBC Source (Confluent 5.5) cannot detect a table with EBR (Edition Based Redefinition) from Oracle:
Config:
{
  "name": "source-oracle-bulk2",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": 4,
    "connection.url": "jdbc:oracle:thin:@//host:1521/service",
    "connection.user": "user",
    "connection.password": "passw",
    "mode": "bulk",
    "topic.prefix": "oracle-bulk-",
    "numeric.mapping": "best_fit",
    "poll.interval.ms": 60000,
    "table.whitelist": "SCHEMA1.T1"
  }
}
Connect log:
connect | [2020-07-20 17:24:09,156] WARN No tasks will be run because no tables were found (io.confluent.connect.jdbc.JdbcSourceConnector)
With a query it successfully ingests data:
{
  "name": "source-oracle-bulk1",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": 1,
    "connection.url": "jdbc:oracle:thin:@//host:1521/service",
    "connection.user": "user",
    "connection.password": "pass",
    "mode": "bulk",
    "topic.prefix": "oracle-bulk-T1",
    "numeric.mapping": "best_fit",
    "poll.interval.ms": 60000,
    "dialect.name": "OracleDatabaseDialect",
    "query": "SELECT * FROM SCHEMA1.T1"
  }
}
Could it be that "table.types" should be specified for some specific type?
I tried "VIEW" but then the connector cannot even be created and fails with a timeout: {"error_code":500,"message":"Request timed out"}

Kafka JDBC Connector SAP can't read tables with #: "Illegal initial character: #"

I am using the JDBC connector to connect to SAP and read tables. Some of them have # at the beginning, and JDBC returns this error:
Illegal initial character: #
This is my connector configuration:
{
  "name": "sap-jdbc",
  "config": {
    "name": "sap-jdbc",
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "10",
    "topic.prefix": "sap_",
    "table.whitelist": "DB.#MYTABLE",
    "connection.url": "jdbc:sap://server:30015/",
    "connection.user": "user",
    "connection.password": "password",
    "retention.ms": "86400000",
    "mode": "bulk",
    "poll.interval.ms": "86400000"
  }
}
I tried these configurations without result:
"table.whitelist": "\"DB\".\"#MYTABLE\"",
"table.whitelist": "DB.\"#MYTABLE\"",
"table.whitelist": "DB.'#MYTABLE'",
"table.whitelist": "DB.\\#MYTABLE",
Has anyone solved this?
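One workaround sketch: the "query" option bypasses the connector's table-name parsing entirely (the same approach used in the EBR question above). The exact HANA identifier quoting is an assumption, and note that with "query" the "topic.prefix" is used as the full topic name:

{
  "name": "sap-jdbc-query",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:sap://server:30015/",
    "connection.user": "user",
    "connection.password": "password",
    "mode": "bulk",
    "poll.interval.ms": "86400000",
    "topic.prefix": "sap_mytable",
    "query": "SELECT * FROM \"DB\".\"#MYTABLE\""
  }
}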

Kafka Connect failing to flush records to Elasticsearch

I'm running a simple Kafka Docker instance and trying to insert data into an Elasticsearch instance, however I'm seeing this kind of exception:
[2018-01-08 16:17:20,839] ERROR Failed to execute batch 36528 of 1 records after total of 6 attempt(s) (io.confluent.connect.elasticsearch.bulk.BulkProcessor)
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:48)
at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:57)
at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:34)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.execute(BulkProcessor.java:350)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:327)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:313)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My Connect config is as follows:
{
  "name": "elasticsearch-analysis",
  "config": {
    "tasks.max": 1,
    "topics": "analysis",
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://elasticsearch:9200",
    "topic.index.map": "analysis:analysis",
    "schema.ignore": true,
    "key.ignore": false,
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema_registry:8081",
    "type.name": "analysis",
    "batch.size": 200,
    "flush.timeout.ms": 600000,
    "transforms": "insertKey,extractId",
    "transforms.insertKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.insertKey.fields": "Id",
    "transforms.extractId.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractId.field": "Id"
  }
}
There's not much data in the topic, just about 70,000 unique messages.
As you can see, I've increased the flush timeout and reduced the batch size, but I still experience these timeouts.
I was unable to find what the fix for it could be.
Possibly your index is refreshing too quickly (the default is 1 second). Try updating it to something less frequent, or even turning it off initially:
curl -X PUT http://$ES_HOST/$ELASTICSEARCH_INDEX_NEW/_settings \
-H "Content-Type: application/json" -d '
{
"index" : {
"refresh_interval" : "15s"
}
}'
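If the index settings alone don't help, the "Read timed out" in the trace can also be addressed from the connector side: the Confluent Elasticsearch sink exposes HTTP timeout settings that default quite low. A sketch (values are illustrative) of lines to add to the connector's "config" block:

"connection.timeout.ms": "5000",
"read.timeout.ms": "30000"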