Is there any way to delete the existing task of a Debezium MySQL connector and replace it with a new task? - apache-kafka-connect

I am using a Debezium MySQL connector, and I want to replace the existing task with a new task.

Tasks are not something you create or delete directly; the worker generates them from the connector's configuration.
You would have to send your updated configuration to the PUT /connectors/<connector>/config endpoint, which causes the connector to restart its tasks with the new settings.
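For example, assuming the connector was registered under the name mysql-connector and the Connect worker's REST API is on localhost:8083 (both placeholders), the update could be sent like this; note that the body is the full connector configuration, not just the changed keys:

# PUT replaces the whole connector configuration; the connector then restarts its tasks
curl -X PUT http://localhost:8083/connectors/mysql-connector/config \
  -H "Content-Type: application/json" \
  -d '{
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.name": "dbserver1",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
      }'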

Related

How to create a Postgres -> Kafka connector with custom configuration?

The heroku data:connectors:create command only has "Tables to include" and "Columns to exclude" options, but I need a Debezium connector configured with transforms, converters, and predicates. Is there some way to specify a custom configuration file for the connector? Or is my best option not to use a Heroku-provided connector, but instead to run a Debezium container manually?

best practice on debezium kafka connector deployment automation

We are trying to use the Debezium Kafka connector to capture the Postgres changelog. Based on the online tutorial, we need to start a Debezium server and send a POST HTTP request to the Debezium API to deploy the connector. We want to keep the Kafka connector configuration in a code repository and have automated, Debezium-based Kafka connector deployment. What is the Debezium best practice for this kind of deployment automation?
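For reference, the deployment step described above is a single REST call that can be scripted, for example from a CI job; in this sketch the Connect REST endpoint (connect:8083) and the config file name (register-postgres.json, kept under version control) are placeholders:

# register the connector from a JSON config file stored in the repository
curl -X POST http://connect:8083/connectors \
  -H "Content-Type: application/json" \
  -d @register-postgres.json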

Kafka Ignite sink connector remote node data transfer

I am trying to send data from Kafka to Ignite using the Ignite Sink Connector, and I have run a few experiments:
When I run Kafka, Ignite, and the connector on the same machine locally, I am able to send the data. In this case I provided the Ignite XML configuration file in connector.properties, which includes the cache name and discovery properties.
When I run Ignite on a remote node and the connector on the Kafka server's node, it is unable to push the data, even if I change the IP in the discovery property. In this case I start Ignite with its XML configuration on the other node, with that node's IP, via a terminal shell script.
When Kafka and Ignite run on remote nodes but the connector runs on the Ignite side, it is able to pull from Kafka and push into the cache.
I am very new to Ignite, and I am using the same XML configuration file that ships with the Ignite setup, example-cache.xml. Please help me with these doubts:
Why is this the case?
Ideally, on which side should the worker and connector run, Kafka or Ignite? If I want to run them only on the Kafka server, what changes do I need to make?
Have I misconfigured something in the XML? If yes, what configuration should I set in my Ignite server XML file and in the XML file that I pass to the connector?

Configuring spring Batch tasks in Spring cloud data flow

I have created a project with two REST APIs that launch different jobs. My project is connected to a MySQL database. I would like to monitor both jobs in Spring Cloud Data Flow. Please help me understand how to configure SCDF with MySQL so that both jobs will be monitored. Additionally, I would like to know whether, if we launch a job by calling the API, SCDF will monitor those job instances. If not, please let me know how we can do that.
Thanks in advance.
Please take a moment to read the Spring Batch Admin to SCDF migration guide. It is a requirement that the jobs are wrapped with the Spring Cloud Task programming model.
Once you have the batch jobs wrapped as Tasks, you can register them in SCDF to build Task/Batch pipelines using SCDF's DSL or the GUI.
As for the datasource, all you have to make sure is that the same datasource is shared between SCDF and the batch jobs. With this, SCDF's Dashboard will automatically list the jobs and their execution details.
Here are a few examples for your reference.
Additionally, I would like to know whether, if we launch a job by calling the API, SCDF will monitor those job instances.
Assuming you're referring to SCDF's Task launch API (e.g., via a scheduled trigger or by other means): if triggered that way, then yes, the job executions will be captured in the database, as long as SCDF and the batch jobs share a common datasource, as explained previously.
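As a minimal sketch of the shared-datasource point above, and assuming a MySQL schema named scdf with placeholder credentials, both the SCDF server and the batch/task applications could point at the same database through the standard Spring Boot properties:

# same datasource settings for the SCDF server and for each batch/task application
spring.datasource.url=jdbc:mysql://localhost:3306/scdf
spring.datasource.username=scdf_user
spring.datasource.password=scdf_password
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

With both sides writing their task and batch metadata to this shared schema, the Dashboard can pick up the job executions as described above.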

Why Debezium creates Topics for all the tables, even when table.whitelist is specified

I am using the Debezium plugin for Kafka Connect to stream MySQL database changes.
I have explicitly specified my whitelisted table in connector.properties:
table.whitelist=tripDriverMapping
database.tables=azuga.tripDriverMapping
Why does Debezium create topics for all the tables in the database? Is there any workaround to avoid the creation of all these unnecessary topics, since I'm going to consume from only one topic?
The correct configuration for the connector is:
database.whitelist=azuga
table.whitelist=azuga.tripDriverMapping
database.whitelist might be optional; the key point is that table.whitelist entries must be fully qualified as databaseName.tableName.
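For context, a fuller connector.properties could look like the sketch below; the connection details, server name, and history topic are placeholders, and it assumes a Debezium version that still uses the whitelist property names (newer releases renamed them to database.include.list and table.include.list):

name=azuga-mysql-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
# placeholder connection details; adjust for your environment
database.hostname=mysql.example.com
database.port=3306
database.user=debezium
database.password=dbz
database.server.id=184054
database.server.name=azuga-server
# capture changes only from the fully qualified tables listed here
database.whitelist=azuga
table.whitelist=azuga.tripDriverMapping
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=schema-changes.azuga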
