Facing issues with Kafka keys while building a SQL audit system using Kafka Connect & Debezium

I have a table “books” in the database motor. This is my source, and for the source connector I created the topic “mysql-books”. So far so good: I can see messages on Confluent Control Center. Now I want to sink these messages into another database called "motor-audit", so that in audit I can see all the changes that happened to the table “books”. Since the changes are published to “mysql-books”, I pass that topic in the curl for the sink connector.
My source config -
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "jdbc_source_mysql_001",
  "config": {
    "value.converter.schema.registry.url": "http://0.0.0.0:8081",
    "key.converter.schema.registry.url": "http://0.0.0.0:8081",
    "name": "jdbc_source_mysql_001",
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "connection.url": "jdbc:mysql://localhost:3306/motor",
    "connection.user": "yagnesh",
    "connection.password": "yagnesh123",
    "catalog.pattern": "motor",
    "mode": "bulk",
    "poll.interval.ms": "10000",
    "topic.prefix": "mysql-",
    "transforms": "createKey,extractInt",
    "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.createKey.fields": "id",
    "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractInt.field": "id"
  }
}'
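To confirm the source side is healthy I also check the connector status on the Connect REST API (host and port are just my local defaults):

curl -s http://localhost:8083/connectors/jdbc_source_mysql_001/status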
My Sink config -
curl -X PUT http://localhost:8083/connectors/jdbc_sink_mysql_001/config \
  -H "Content-Type: application/json" -d '{
  "value.converter.schema.registry.url": "http://0.0.0.0:8081",
  "value.converter.schemas.enable": "true",
  "key.converter.schema.registry.url": "http://0.0.0.0:8081",
  "name": "jdbc_sink_mysql_001",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "topics": "mysql-books",
  "connection.url": "jdbc:mysql://mysql:3306/motor",
  "connection.user": "yagnesh",
  "connection.password": "yagnesh123",
  "insert.mode": "insert",
  "auto.create": "true",
  "auto.evolve": "true"
}'
This is how the messages look on the topic: the keys show up as raw bytes. Even if I switch the key converter to AvroConverter or StringConverter, keeping it the same in both source and sink, I still face the same error.
The database table involved was created with this schema -
CREATE TABLE `motor`.`books` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `author` VARCHAR(45) NULL,
  PRIMARY KEY (`id`));
With all this in place I am facing this error -
io.confluent.rest.exceptions.RestNotFoundException: Subject 'mysql-books-key' not found.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:69)
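For what it's worth, this is how I list the subjects that actually got registered in the Schema Registry (same localhost:8081 as in the configs above):

curl -s http://localhost:8081/subjects

If only "mysql-books-value" comes back, no Avro schema was ever registered for the key, which would explain why the sink's key AvroConverter cannot find the "mysql-books-key" subject.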
Edit: I modified the connection URL in the sink to use localhost, set StringConverter for the key and kept AvroConverter for the value, and now I am getting a new error -
Caused by: java.sql.SQLException: Exception chain:
java.sql.SQLSyntaxErrorException: BLOB/TEXT column 'id' used in key specification without a key length
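My guess is that with StringConverter the key is now plain text, so auto.create tries to build the audit table with a TEXT id in the key definition, which MySQL rejects without a key length. A minimal sketch of what I am considering instead, taking the key column from the Avro value rather than from the record key (the field name id is from my books table; the rest of the sink config stays as above):

  "pk.mode": "record_value",
  "pk.fields": "id"

Since the value schema has id as an int, the column should then be created as INT instead of TEXT. For a purely append-only audit table, pk.mode=none (no key columns at all) might be even simpler.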
Edit 2:
As suggested by @OneCricketeer, I am trying Debezium with the config below for MySqlConnector. I have already enabled the binlog in mysqld.cnf, but upon launching I get errors like -
Caused by: org.apache.kafka.connect.errors.DataException: Field does not exist: id
This is my Debezium config -
{
  "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
  "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
  "value.converter.schema.registry.url": "http://0.0.0.0:8081",
  "transforms.extractInt.field": "id",
  "transforms.createKey.fields": "id",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "key.converter.schema.registry.url": "http://0.0.0.0:8081",
  "name": "mysql-connector-deb-demo",
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "transforms": ["createKey", "extractInt", "unwrap"],
  "database.hostname": "localhost",
  "database.port": "3306",
  "database.user": "yagnesh",
  "database.password": "**********",
  "database.server.name": "mysql",
  "database.server.id": "1",
  "event.processing.failure.handling.mode": "ignore",
  "database.history.kafka.bootstrap.servers": "localhost:9092",
  "database.history.kafka.topic": "dbhistory.demo",
  "table.whitelist": ["motor.books"],
  "table.include.list": ["motor.books"],
  "include.schema.changes": "true"
}
Before using "unwrap" I was facing a mismatched input '-' expecting <EOF> SQL error, and I fixed that by adding "unwrap" after following this question: Fix for mismatched input.
Let me know whether "unwrap" is actually needed here or not.
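One thing I am unsure about is the transform order: as far as I know, Connect applies SMTs in the order they are listed, so here createKey runs against the raw Debezium envelope (before/after/source/op), which has no top-level id field; that would match the "Field does not exist: id" error. The ordering I plan to try next, with unwrap first and the transform definitions unchanged:

  "transforms": ["unwrap", "createKey", "extractInt"],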

Related

Can JDBC Sink Connector work on unique key instead of primary key?

I have a source PostgreSQL table with the following columns:
ID: Long
FirstName: Varchar
...
I am getting the messages as events in Kafka using Debezium. This is working fine.
My question is related to the JDBC Sink. My target table is:
ID: UUID
UserID: Long
FirstName: Varchar
Notice that the ID type here is UUID, and UserID is what was ID in the source table.
So the question is: can I have my own primary key, i.e. ID, and still have upsert commands work?
My Config:
{
  "name": "users-task-service",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "key.converter.schemas.enable": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "database.hostname": "host.docker.internal",
    "topics": "postgres.public.users",
    "connection.url": "jdbc:postgresql://host.docker.internal:5432/tessting",
    "connection.user": "postgres",
    "connection.password": "",
    "auto.create": "false",
    "insert.mode": "upsert",
    "table.name.format": "users_temp",
    "dialect.name": "PostgreSqlDatabaseDialect",
    "transforms": "unwrap, RenameField",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.RenameField.renames": "id:userid",
    "pk.fields": "id",
    "pk.mode": "record_key",
    "delete.enabled": "true",
    "fields.whitelist": "userid"
  }
}
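For what it's worth, the JDBC sink derives its upsert key from pk.mode/pk.fields rather than from the target table's actual primary key, so one option (a sketch, untested) is to key the upsert on the renamed value field instead of the record key:

  "insert.mode": "upsert",
  "pk.mode": "record_value",
  "pk.fields": "userid"

In PostgreSQL the upsert (INSERT ... ON CONFLICT) then needs a unique index on userid; it does not have to be the primary key, so the table's own UUID ID can stay. Note that delete.enabled is only supported with pk.mode=record_key, so it would have to be dropped in this variant.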

S3 connector with HourlyPartitioner failing

When we write into S3 through the S3 sink connector with the default config, it works without any issue. But when we try the hourly partitioner, it fails with the error below.
Please find both configs and the error message below, and help us resolve this issue.
Default:
{
  "value.converter.schemas.enable": "false",
  "name": "tibconew1-test-s3standard-default-sink-connector",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "tasks.max": "2",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.kafka.connect.storage.StringConverter",
  "errors.tolerance": "all",
  "topics": [
    "test.s3custom.default.dax.shipment.data",
    "test.s3custom.default.dax.shipment.data",
    "test.s3custom.hourly.onprem.tibco.dax_shipment.dpp_asn"
  ],
  "topics.regex": "",
  "errors.deadletterqueue.topic.name": "dlq_test.s3custom.default.dax.shipment.data",
  "errors.deadletterqueue.context.headers.enable": "true",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "flush.size": "1000",
  "s3.bucket.name": "test-stg-raw",
  "s3.region": "us-east-1",
  "s3.credentials.provider.class": "com.amazonaws.auth.InstanceProfileCredentialsProvider",
  "s3.acl.canned": "bucket-owner-full-control",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "topics.dir": "streams_dir",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner"
}
Hourly:
{
  "value.converter.schema.registry.url": "https://confschema.test-dsol-core.testdigital-stg.com",
  "value.converter.schemas.enable": "false",
  "name": "test.s3custom.hourly.tibco.dax_shipment.dpp_asn.sink-connector",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "tasks.max": "2",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "errors.tolerance": "all",
  "topics": [
    "test.s3custom.hourly.onprem.tibco.dax_shipment.dpp_asn"
  ],
  "topics.regex": "",
  "errors.deadletterqueue.topic.name": "dlq_test.s3custom.hourly.onprem.tibco.dax_shipment.dpp_asn.sink",
  "errors.deadletterqueue.context.headers.enable": "true",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "flush.size": "10",
  "s3.bucket.name": "test-stg-raw",
  "s3.region": "us-east-1",
  "s3.credentials.provider.class": "com.amazonaws.auth.InstanceProfileCredentialsProvider",
  "s3.acl.canned": "bucket-owner-full-control",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "topics.dir": "streams_dir",
  "partitioner.class": "io.confluent.connect.storage.partitioner.HourlyPartitioner",
  "locale": "en-US",
  "timezone": "America/Chicago",
  "timestamp.extractor": "RecordField",
  "timestamp.field": "DPP_ASN.LST_UPDT_TS"
}
Resolution:
Finally we found the reason: the timestamp received in the payload was in an invalid format (it had an additional space in it), so we corrected the format on the source side. For the hourly partitioner, the connector expects a value it can resolve down to the hour.
Hourly Partitioner:
io.confluent.connect.storage.partitioner.HourlyPartitioner is equivalent to the TimeBasedPartitioner with path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH and partition.duration.ms=3600000 (one hour).
Message was: "LST_UPDT_TS":"2021-02-01 07:16:23.567"
Corrected as: "LST_UPDT_TS":"2015-08-01T17:00:00.69243-05:00"
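Based on that equivalence, the hourly layout can also be spelled out explicitly with the TimeBasedPartitioner, e.g. (a sketch reusing the extractor settings from the failing config above):

  "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
  "partition.duration.ms": "3600000",
  "locale": "en-US",
  "timezone": "America/Chicago",
  "timestamp.extractor": "RecordField",
  "timestamp.field": "DPP_ASN.LST_UPDT_TS"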

Kafka not retrieving data from ClickHouse

I have to push data from ClickHouse to Kafka topics, so I tried to use the Confluent JDBC connector.
I am following this tutorial, which uses MySQL instead of ClickHouse.
Here is my configuration; it works with MySQL but fails with this error against ClickHouse:
Missing columns: 'CURRENT_TIMESTAMP' while processing query: 'SELECT CURRENT_TIMESTAMP', required columns: 'CURRENT_TIMESTAMP', source columns: 'dummy' (version 19.17.4.11 (official build))
My configuration:
{
  "name": "jdbc_source_clickhouse_my-table_01",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "connection.url": "jdbc:clickhouse://localhost:8123/default?user=default&password=12344esz",
    "table.whitelist": "my-table",
    "mode": "timestamp",
    "timestamp.column.name": "order_time",
    "validate.non.null": "false",
    "topic.prefix": "clickhouse-"
  }
}
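As far as I can tell, the SELECT CURRENT_TIMESTAMP in the error is the query the JDBC source runs to read the database's current time in timestamp mode, and this ClickHouse version treats CURRENT_TIMESTAMP as an unknown column rather than a function. A quick check against the ClickHouse HTTP interface on port 8123 (reusing the credentials from the connection URL; now() is ClickHouse's usual equivalent):

curl 'http://localhost:8123/?user=default&password=12344esz' --data-binary 'SELECT CURRENT_TIMESTAMP'
curl 'http://localhost:8123/?user=default&password=12344esz' --data-binary 'SELECT now()'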

Kafka JDBC Source connector: create topics from column values

I have a microservice that uses Oracle DB and publishes the system changes to an EVENT_STORE table. The EVENT_STORE table contains a column TYPE with the name of the event type.
Is it possible for the JDBC Source connector to take the EVENT_STORE table changes and publish them to a Kafka topic named after the value of the TYPE column?
This is my source Kafka connector config:
{
  "name": "kafka-connector-source-ms-name",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:oracle:thin:@localhost:1521:xe",
    "connection.user": "squeme-name",
    "connection.password": "password",
    "topic.prefix": "",
    "table.whitelist": "EVENT_STORE",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "CREATE_AT",
    "incrementing.column.name": "ID",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "config.action.reload": "restart",
    "errors.retry.timeout": "0",
    "errors.retry.delay.max.ms": "60000",
    "errors.tolerance": "none",
    "errors.log.enable": "false",
    "errors.log.include.messages": "false",
    "connection.attempts": "3",
    "connection.backoff.ms": "10000",
    "numeric.precision.mapping": "false",
    "validate.non.null": "true",
    "quote.sql.identifiers": "ALWAYS",
    "table.types": "TABLE",
    "poll.interval.ms": "5000",
    "batch.max.rows": "100",
    "table.poll.interval.ms": "60000",
    "timestamp.delay.interval.ms": "0",
    "db.timezone": "UTC"
  }
}
You can try the ExtractTopic transform to pull a topic name from a field.
Add the following properties to the JSON:
  "transforms": "ValueFieldExample",
  "transforms.ValueFieldExample.type": "io.confluent.connect.transforms.ExtractTopic$Value",
  "transforms.ValueFieldExample.field": "TYPE"
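With that transform in place, each row is routed to a topic named after its TYPE value (for example, a hypothetical TYPE of ORDER_CREATED would land in a topic called ORDER_CREATED), and the name derived from topic.prefix is overridden.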

Having a problem with the Flatten value transformation

I am attempting to flatten a topic before sending it along to my Postgres DB, using something like the connector below. I am using the Confluent 4.1.1 Kafka Connect Docker image; the only change is that I copied a custom connector jar into /usr/share/java and am running it under a different account.
version (Kafka Connect): "1.1.1-cp1"
commit: "0a5db4d59ee15a47"
{
  "name": "problematic_postgres_sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schema.registry.url": "http://kafkaschemaregistry.service.consul:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://kafkaschemaregistry.service.consul:8081",
    "connection.url": "jdbc:postgresql://123.123.123.123:5432/mypostgresdb",
    "connection.user": "abc",
    "connection.password": "xyz",
    "insert.mode": "upsert",
    "auto.create": true,
    "auto.evolve": true,
    "topics": "mytopic",
    "pk.mode": "kafka",
    "transforms": "Flatten",
    "transforms.Flatten.type": "org.apache.kafka.connect.transforms.Flatten$Value",
    "transforms.Flatten.delimiter": "_"
  }
}
I get a 400 error code:
Connector configuration is invalid and contains the following 1 error(s): Invalid value class org.apache.kafka.connect.transforms.Flatten for configuration transforms.Flatten.type: Error getting config definition from Transformation: null
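To see the full validation output rather than just that summary line, the Connect config-validation endpoint can help (a sketch, assuming Connect on localhost:8083; note the body must be the flat config map, not the nested name/config document above):

curl -s -X PUT http://localhost:8083/connector-plugins/io.confluent.connect.jdbc.JdbcSinkConnector/config/validate \
  -H "Content-Type: application/json" \
  -d '{"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector", "topics": "mytopic", "transforms": "Flatten", "transforms.Flatten.type": "org.apache.kafka.connect.transforms.Flatten$Value", "transforms.Flatten.delimiter": "_"}'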
