Configure Kafka JDBC source connector

I am dealing with Kafka from a Spring Boot app. When I produce a message using KafkaTemplate with an Avro schema, the produced message is something like:
"{\"kkk\":{\"string\":\"somevalue\" ....}
but when I use the Kafka Connect JDBC source connector I get something like this:
"{\"kkk\":\"somevalue\" ...
My question is: how do I make the JDBC connector produce the same format as the KafkaTemplate?

The first output shown is the Avro JSON encoding of a union (nullable) field. You cannot remove the type wrapper unless you make the field a required, non-nullable, non-union type.
So it seems the JDBC source has inferred the schema of that column as not allowing nulls, with the column always a varchar type, which is why its output has no wrapper.
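For illustration, it is the union in the producer-side schema that produces the wrapped form; a minimal sketch, assuming a single nullable field named kkk:

{
  "type": "record",
  "name": "Example",
  "fields": [
    {"name": "kkk", "type": ["null", "string"], "default": null}
  ]
}

With the union above, the Avro JSON encoding of a non-null value is {"kkk":{"string":"somevalue"}}; if the field were declared as plain "string", the encoding would be {"kkk":"somevalue"}, which matches what the JDBC source emits for a NOT NULL varchar column.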

Related

Send multiple Oracle tables into a single Kafka topic

I'm using the JDBC source connector to transfer data from Oracle to a Kafka topic. I want to transfer 10 different Oracle tables to the same Kafka topic using the JDBC source connector, with the table name mentioned somewhere in the message (e.g. a header). Is it possible?
with the table name mentioned somewhere in the message
You can use the ExtractTopic transform to read the topic name from a column in the tables.
Otherwise, if that data isn't in the table, you can use the InsertField transform with static.value before the extract one to force the topic name to be the same; see the sketch below.
Note: if you use Avro or another record type with schemas, and your tables do not have the same schema (column names and types), then you should expect all but the first producer to fail, because the schemas would be incompatible.
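A minimal sketch of that transform chain in the source connector config, assuming everything should land in a topic named all_tables and that the Confluent ExtractTopic transform plugin is installed (the transform aliases and field name are illustrative):

"transforms": "AddTopicField,RouteToTopic",
"transforms.AddTopicField.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.AddTopicField.static.field": "topic_name",
"transforms.AddTopicField.static.value": "all_tables",
"transforms.RouteToTopic.type": "io.confluent.connect.transforms.ExtractTopic$Value",
"transforms.RouteToTopic.field": "topic_name"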

Kafka JDBC sink connector - is it possible to store the topic data as JSON in the DB

Kafka JDBC sink connector - is it possible to store the topic data as JSON in a Postgres DB? Currently it parses each JSON record from the topic and maps it to the corresponding column in the table.
If anyone has worked on a similar case, can you please help me with the config details I should add to the connector?
I used the config below, but it didn't work.
"key.converter":"org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable":"false",
"value.converter":"org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable":"false"
The JDBC sink requires a Struct type (JSON with schema, Avro, etc.).
If you want to store a string, that string needs to be the value of a key that corresponds to a database column. That string can be anything, including delimited JSON.
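A minimal sketch of what such a record could look like with the JsonConverter and schemas enabled, assuming a single text column named json_data that holds the raw JSON string (field and column names are illustrative):

"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true"

and a matching message value on the topic:

{"schema":{"type":"struct","fields":[{"field":"json_data","type":"string","optional":false}]},"payload":{"json_data":"{\"any\":\"json\",\"you\":\"like\"}"}}

The sink would then write the whole JSON string into the json_data column (a text column in Postgres).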

Kafka Connect JDBC sink: Value schema must be of type Struct

I want to use a Kafka Connect JDBC sink connector with the Avro converter.
These are my Avro configs:
"key.converter":"io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url" : "http://myurl.com" ,
"value.converter":"io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url" : "http://myurl.com" ,
Schemas are set to false
key.converter.schemas.enable=false
value.converter.schemas.enable=false
Now, when I start the connector, I get this error:
Caused by: org.apache.kafka.connect.errors.ConnectException: Value schema must be of type Struct
From what I read, Structs are for JSON schemas, right? I should not have any Struct if I am using an Avro schema?
The Avro schema types are record, enum, array, map, union and fixed, but there is no struct.
What am I missing?
Thanks!
An Avro record creates a Struct Connect data type.
The error is saying your data is not a record.
Schemas are set to false
Those properties don't mean anything to the Avro converter; Avro always has a schema.
I want to use a kafka connect jdbc sink connector with the avro converter.
Then the producer needs to send records with schemas. This includes Avro records or JSON with schemas enabled
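For illustration, the converter maps a top-level Avro record to a Connect Struct; a top-level primitive such as string would trigger exactly this error. A minimal sketch of a value schema the JDBC sink can handle (the names are made up):

{
  "type": "record",
  "name": "OrderValue",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "status", "type": "string"}
  ]
}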
An Avro record always has a schema, so you need to set the converters as shown below:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
key.converter.enhanced.avro.schema.support=true
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
value.converter.enhanced.avro.schema.support=true

table.whitelist is case sensitive even after specifying quote.sql.identifiers=NEVER

I have used the JDBC source connector to ingest data from Oracle into Kafka topics. My Kafka topics are created in lowercase, so I have to specify table.whitelist=table_name (in lowercase). Since by default the connector puts identifiers in quotes, I explicitly set quote.sql.identifiers=NEVER to make the lookup case insensitive, but it is not working.
I assume you are using the Confluent Platform.
You can set the topic name using the ExtractTopic transformation, which can take any message field and use its value as the topic name.
In your case you can add a field with your topic name to the JDBC source connector's query property (SELECT ..., 'topicName' FROM ...) and then use ExtractTopic to set the topic name, as in the sketch below.
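A minimal sketch of that approach, assuming a table named MY_TABLE, a desired topic my_topic, and the Confluent ExtractTopic transform plugin installed (all names are illustrative):

"query": "SELECT t.*, 'my_topic' AS TOPIC_NAME FROM MY_TABLE t",
"transforms": "RouteToTopic",
"transforms.RouteToTopic.type": "io.confluent.connect.transforms.ExtractTopic$Value",
"transforms.RouteToTopic.field": "TOPIC_NAME"

Note that query and table.whitelist are mutually exclusive in the JDBC source connector, so with this approach the whitelist is no longer needed.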

Oracle Number Type Column not converted from Bytes when it comes to Kafka

I'm connecting Oracle to Kafka using a JDBC connector. When data comes in from Oracle, it is converted correctly except for the Oracle columns that are NUMBER types. For those columns, the data is not decoded. The following is an example:
{"ID":"\u0004{","TYPE":"\u0000Ù","MODE":{"bytes":"\u0007"},"STAT_TEMP":{"string":"TESTING"}}
I should mention that I'm also connecting Kafka to Spark, and I get the same output in Spark.
I'm wondering what the best way to convert the data is: in Kafka or in Spark? If in Kafka, what is your suggestion for how to convert it?
Add numeric.mapping to your connector config:
"numeric.mapping":"best_fit"
This maps Oracle NUMBER columns to the closest primitive Connect type (int/long/double) where the precision and scale allow it, instead of the default Decimal logical type, which is serialized as bytes. For more detail, see the JDBC source connector documentation on numeric.mapping.
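A minimal sketch of where the property sits in the source connector config (the connection details are placeholders):

"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCL",
"numeric.mapping": "best_fit"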
