I'm running a simple Kafka Docker instance and trying to insert data into an Elasticsearch instance; however, I'm seeing this kind of exception:
[2018-01-08 16:17:20,839] ERROR Failed to execute batch 36528 of 1 records after total of 6 attempt(s) (io.confluent.connect.elasticsearch.bulk.BulkProcessor)
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:48)
at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:57)
at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:34)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.execute(BulkProcessor.java:350)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:327)
at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:313)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My Connect config is as follows:
{
  "name": "elasticsearch-analysis",
  "config": {
    "tasks.max": 1,
    "topics": "analysis",
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://elasticsearch:9200",
    "topic.index.map": "analysis:analysis",
    "schema.ignore": true,
    "key.ignore": false,
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema_registry:8081",
    "type.name": "analysis",
    "batch.size": 200,
    "flush.timeout.ms": 600000,
    "transforms": "insertKey,extractId",
    "transforms.insertKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.insertKey.fields": "Id",
    "transforms.extractId.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractId.field": "Id"
  }
}
There's not much data in the topic, only about 70,000 unique messages.
As you can see, I've increased the flush timeout and reduced the batch size, but I still experience these timeouts.
I haven't been able to find a fix for it.
Possibly your index is refreshing too quickly (the default is 1 second). Try updating it to something less frequent, or even turning it off initially.
curl -X PUT http://$ES_HOST/$ELASTICSEARCH_INDEX_NEW/_settings \
  -H "Content-Type: application/json" -d '
{
  "index": {
    "refresh_interval": "15s"
  }
}'
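If that isn't enough, refresh can be switched off completely for the duration of the bulk load and restored afterwards. A minimal sketch, reusing the same placeholder host and index as above:

curl -X PUT http://$ES_HOST/$ELASTICSEARCH_INDEX_NEW/_settings \
  -H "Content-Type: application/json" \
  -d '{ "index": { "refresh_interval": "-1" } }'

# ...run the bulk load, then restore the default:
curl -X PUT http://$ES_HOST/$ELASTICSEARCH_INDEX_NEW/_settings \
  -H "Content-Type: application/json" \
  -d '{ "index": { "refresh_interval": "1s" } }'

Since the stack trace shows the HTTP client giving up while waiting for a response, it may also help to raise the sink's timeouts (read.timeout.ms and connection.timeout.ms in the Confluent Elasticsearch sink connector, if I recall the property names correctly) alongside the smaller batch.size.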
Related
I'm not able to use the "ExtractField" SMT to extract a field from the key struct into a simple long value with an Oracle database. It works fine with a Postgres database.
I tried the "ReplaceField" SMT to rename the key and it works fine, so I suspect a problem in the class "org.apache.kafka.connect.transforms.ExtractField" with the schema handling used to look up the field. Schema handling seems to work differently between "ReplaceField" and "ExtractField".
Oracle database version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.8.0.0.0
Debezium Connect: 1.6
Kafka version: 2.7.0
Instant Client Basic (Oracle client and drivers): 21.3.0.0.0
I got an "Unknown field ID_MYTABLE":
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:339)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: Unknown field: ID_MYTABLE
at org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:65)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 11 more
Here is my Kafka connector configuration:
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.server.name": "serverName",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.url": "jdbc:oracle:thin:...",
    "database.dbname": "dbName",
    "database.pdb.name": "PDBName",
    "database.connection.adapter": "logminer",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.data",
    "schema.include.list": "mySchema",
    "table.include.list": "mySchema.myTable",
    "log.mining.strategy": "online_catalog",
    "snapshot.mode": "initial",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "transforms": "unwrap,route,extractField",
    "transforms.extractField.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractField.field": "ID_MYTABLE",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$1_$2_$3"
  }
}
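Since the exception comes from ExtractField$Key, it may help to see what the key struct actually contains before pointing the transform at ID_MYTABLE: temporarily drop extractField from the "transforms" list so records make it to the topic, then inspect one. A hedged diagnostic sketch with the console consumer (the topic name below is just what the RegexRouter replacement would produce from the placeholders in this config; substitute your real names):

kafka-console-consumer --bootstrap-server kafka:9092 \
  --topic serverName_mySchema_myTable \
  --property print.key=true \
  --from-beginning --max-messages 1

If the printed key shows the field under a different name or casing than ID_MYTABLE, point transforms.extractField.field at that exact name.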
I have a table “books” in the database motor. This is my source, and for the source connection I created a topic “mysql-books”. So far so good: I am able to see messages in Confluent Control Center. Now I want to sink these messages into another database called "motor-audit", so that in audit I should see all the changes that happened to the table “books”. I have given the topic “mysql-books” in the curl for my sink connector, since changes are being published to this topic.
My source config -
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "jdbc_source_mysql_001",
  "config": {
    "value.converter.schema.registry.url": "http://0.0.0.0:8081",
    "key.converter.schema.registry.url": "http://0.0.0.0:8081",
    "name": "jdbc_source_mysql_001",
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "connection.url": "jdbc:mysql://localhost:3306/motor",
    "connection.user": "yagnesh",
    "connection.password": "yagnesh123",
    "catalog.pattern": "motor",
    "mode": "bulk",
    "poll.interval.ms": "10000",
    "topic.prefix": "mysql-",
    "transforms": "createKey,extractInt",
    "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.createKey.fields": "id",
    "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractInt.field": "id"
  }
}'
My Sink config -
curl -X PUT http://localhost:8083/connectors/jdbc_sink_mysql_001/config \
  -H "Content-Type: application/json" -d '{
  "value.converter.schema.registry.url": "http://0.0.0.0:8081",
  "value.converter.schemas.enable": "true",
  "key.converter.schema.registry.url": "http://0.0.0.0:8081",
  "name": "jdbc_sink_mysql_001",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "topics": "mysql-books",
  "connection.url": "jdbc:mysql://mysql:3306/motor",
  "connection.user": "yagnesh",
  "connection.password": "yagnesh123",
  "insert.mode": "insert",
  "auto.create": "true",
  "auto.evolve": "true"
}'
On the topic, the message keys show up as raw bytes, and even if I use either AvroConverter or StringConverter for the key and keep it the same in both source and sink, I still face the same error.
The database table involved is created with this schema:
CREATE TABLE `motor`.`books` (
`id` INT NOT NULL AUTO_INCREMENT,
`author` VARCHAR(45) NULL,
PRIMARY KEY (`id`));
With all this I am facing this error -
io.confluent.rest.exceptions.RestNotFoundException: Subject 'mysql-books-key' not found.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:69)
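The "Subject 'mysql-books-key' not found" message means no schema was ever registered for the record keys of that topic. A quick, hedged check of what the registry actually holds (assuming Schema Registry is at localhost:8081, as in the configs above):

curl http://localhost:8081/subjects

If only mysql-books-value shows up in that list, the keys were never serialized as Avro, which is consistent with the raw bytes seen on the topic.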
Edit: I modified the URL in the sink to use localhost, switched the key to StringConverter, and kept AvroConverter for the value, and now I am getting a new error:
Caused by: java.sql.SQLException: Exception chain:
java.sql.SQLSyntaxErrorException: BLOB/TEXT column 'id' used in key specification without a key length
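That MySQL error typically appears when a table is auto-created with a TEXT/BLOB column inside a key definition, which is what happens if the primary key is derived from a string-typed record key. A minimal sketch of one alternative, assuming it is acceptable to take the key from the value instead (pk.mode and pk.fields are standard JDBC sink properties), so that auto.create makes id an integer column:

  "pk.mode": "record_value",
  "pk.fields": "id"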
Edit 2:
As suggested by @OneCricketeer, I am trying Debezium, using the config below for MySqlConnector. I have already enabled bin_log in mysqld.cnf, but upon launching I am getting errors like:
Caused by: org.apache.kafka.connect.errors.DataException: Field does not exist: id
This is my Debezium config:
{
  "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
  "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
  "value.converter.schema.registry.url": "http://0.0.0.0:8081",
  "transforms.extractInt.field": "id",
  "transforms.createKey.fields": "id",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "key.converter.schema.registry.url": "http://0.0.0.0:8081",
  "name": "mysql-connector-deb-demo",
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "transforms": [
    "createKey",
    "extractInt",
    "unwrap"
  ],
  "database.hostname": "localhost",
  "database.port": "3306",
  "database.user": "yagnesh",
  "database.password": "**********",
  "database.server.name": "mysql",
  "database.server.id": "1",
  "event.processing.failure.handling.mode": "ignore",
  "database.history.kafka.bootstrap.servers": "localhost:9092",
  "database.history.kafka.topic": "dbhistory.demo",
  "table.whitelist": [
    "motor.books"
  ],
  "table.include.list": [
    "motor.books"
  ],
  "include.schema.changes": "true"
}
Before using "unwrap" I was facing mismatched input '-' expecting <EOF> SQL
hence upon looking for this fixed this using "unwrap" following this question - Fix for mismatched input.
Let me know if this is actually needed or not.
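One thing worth checking: Connect applies SMTs in the order they are listed, and with createKey ahead of unwrap the ValueToKey transform runs against the raw Debezium envelope, where the row columns sit under "before"/"after" rather than at the top level, which is exactly the kind of thing that produces a "Field does not exist: id" error. A minimal sketch of the reordered list (everything else unchanged):

  "transforms": [
    "unwrap",
    "createKey",
    "extractInt"
  ]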
I am trying to create a topic in Kafka. When I send a POST request to Kafka Connect to create a topic, the connector is created but the topic is not. When I checked the Kafka Connect log I saw the error below:
Exception in thread "Thread-14" org.apache.kafka.connect.errors.ConnectException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:69)
at io.confluent.connect.jdbc.source.TableMonitorThread.updateTables(TableMonitorThread.java:141)
at io.confluent.connect.jdbc.source.TableMonitorThread.run(TableMonitorThread.java:76)
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:774)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:211)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:88)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:66)
... 2 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:523)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:521)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:660)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:286)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1438)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:518)
... 10 more
Caused by: java.io.IOException: Connection refused, socket connect lapse 1 ms. /10.206.41.145 1521 0 1 true
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
at oracle.net.nt.ConnOption.connect(ConnOption.java:161)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)
... 15 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at oracle.net.nt.TimeoutSocketChannel.<init>(TimeoutSocketChannel.java:81)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:169)
You can see my POST request below:
{
  "name": "jdbc_source_oracle_order",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:#10.206.41.111:1521:ORCLCDB",
    "connection.user": "SYS AS SYSDBA",
    "connection.password": "123456",
    "topic.prefix": "oracle01-",
    "mode": "timestamp+incrementing",
    "table.whitelist": "SYS.oc_order",
    "incrementing.column.name": "order_id",
    "validate.non.null": false
  }
}
When I check the connector status, the task list is also empty:
{
  "name": "jdbc_source_oracle_order",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "oracle01-",
    "connection.password": "kafka_connect",
    "validate.non.null": "false",
    "connection.user": "kafka_connect as sysdba",
    "task.max": "3",
    "name": "jdbc_source_oracle_order",
    "connection.url": "jdbc:oracle:thin:#10.206.43.77:1521:ORCLCDB",
    "table.whitelist": "sys.oc_order"
  },
  "tasks": [],
  "type": "source"
}
I could not solve the problem. How can I solve this?
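The root cause at the bottom of the chain is a plain Connection refused on port 1521, so before touching the connector config it is worth confirming the database is reachable from wherever the Connect worker runs (inside the container, if Connect is containerized). A minimal sketch, assuming nc is available there and using the IP from the stack trace:

nc -vz 10.206.41.145 1521

If that fails, the problem is networking (wrong host or port, a firewall, or the Oracle listener not running) rather than the connector configuration, which would also explain why the task list stays empty.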
I'm trying to use the TimeBasedPartitioner with a timestamp extracted via RecordField, using the following configuration:
{
  "name": "s3-sink",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "tasks.max": "10",
  "topics": "topics1.topics2",
  "s3.region": "us-east-1",
  "s3.bucket.name": "bucket",
  "s3.part.size": "5242880",
  "s3.compression.type": "gzip",
  "timezone": "UTC",
  "rotate.schedule.interval.ms": "900000",
  "flush.size": "1000000",
  "schema.compatibility": "NONE",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
  "partitioner.class": "io.confluent.connect.storage.partitioner.HourlyPartitioner",
  "partition.duration.ms": "900000",
  "locale": "en",
  "timestamp.extractor": "RecordField",
  "timestamp.field": "time",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "key.converter.schemas.enabled": false,
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "value.converter.schemas.enabled": false,
  "interal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "internal.key.converter.schemas.enabled": false,
  "interal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "internal.value.converter.schemas.enabled": false
}
I keep getting the following error and I'm not finding much that explains what is going on. I looked at the source code and it appears that the record is not a Struct or Map type so I'm wondering if there is an issue with using ByteArrayFormat?
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: io.confluent.connect.storage.errors.PartitionException: Error encoding partition.
at io.confluent.connect.storage.partitioner.TimeBasedPartitioner$RecordFieldTimestampExtractor.extract(TimeBasedPartitioner.java:294)
at io.confluent.connect.s3.TopicPartitionWriter.executeState(TopicPartitionWriter.java:199)
at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:176)
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:195)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
I've been able to write out using the default partitioner.
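That reading of the source looks right: the RecordField extractor can only pull a timestamp out of a Struct or a Map, and ByteArrayConverter hands the partitioner an opaque byte array, so there is nothing to read "time" from. A hedged sketch of one alternative, assuming the payload is actually JSON with a top-level "time" field: let the converter parse it into a Map (note the property is schemas.enable, not schemas.enabled) and write JSON instead of raw bytes:

  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": false,
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "timestamp.extractor": "RecordField",
  "timestamp.field": "time"

If the output must stay as raw bytes, the usual fallback is "timestamp.extractor": "Record", which partitions by the Kafka record timestamp and works with ByteArrayFormat.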
I am attempting to flatten a topic before sending it along to my Postgres DB, using something like the connector below. I am using the Confluent 4.1.1 Kafka Connect Docker image; the only change is that I copied a custom connector jar into /usr/share/java, and I am running it under a different account.
version (kafka connect) "1.1.1-cp1"
commit "0a5db4d59ee15a47"
{
  "name": "problematic_postgres_sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schema.registry.url": "http://kafkaschemaregistry.service.consul:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://kafkaschemaregistry.service.consul:8081",
    "connection.url": "jdbc:postgresql://123.123.123.123:5432/mypostgresdb",
    "connection.user": "abc",
    "connection.password": "xyz",
    "insert.mode": "upsert",
    "auto.create": true,
    "auto.evolve": true,
    "topics": "mytopic",
    "pk.mode": "kafka",
    "transforms": "Flatten",
    "transforms.Flatten.type": "org.apache.kafka.connect.transforms.Flatten$Value",
    "transforms.Flatten.delimiter": "_"
  }
}
I get a 400 error code:
Connector configuration is invalid and contains the following 1 error(s):
Invalid value class org.apache.kafka.connect.transforms.Flatten for configuration transforms.Flatten.type: Error getting config definition from Transformation: null
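Worth noting: the error reports the class as org.apache.kafka.connect.transforms.Flatten, without the $Value suffix that the config above specifies. One possible cause (an assumption, since the submission command isn't shown) is that the JSON was posted via curl inside double quotes, in which case the shell expands $Value to an empty string before Connect ever sees it. Posting the payload from a file, or wrapping it in single quotes, avoids that; a minimal sketch with a placeholder host and file name:

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  --data @problematic_postgres_sink.json

Escaping the dollar sign (Flatten\$Value) inside a double-quoted payload works as well.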