Confluent Cloud Elasticsearch sink connector - elasticsearch

I want to connect Confluent Cloud to Elasticsearch (local environment). Is it possible to connect a local Elasticsearch instance to Confluent Cloud Kafka?
Thanks,
Bala

Yes; a local instance of Kafka Connect with the Elasticsearch sink connector can be installed and consume from Confluent Cloud (or any reachable Kafka cluster).
A connector running in the cloud would be unlikely to connect/write to your local instance without port-forwarding on your router, which is why you should consume from the remote cluster and write to Elasticsearch locally.
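As a sketch, the extra settings on a locally running Connect worker that point it at Confluent Cloud look roughly like the following (the bootstrap endpoint and the API key/secret are placeholders for your own cluster's values):

```properties
# connect-distributed.properties (local worker, remote Confluent Cloud cluster)
bootstrap.servers=pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";

# The embedded consumer used by sink connectors needs the same credentials
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";
```

The Elasticsearch sink connector itself is then configured with a local `connection.url` such as `http://localhost:9200`, so the worker reads from the cloud and writes locally.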

Related

Sending data from elasticsearch to kafka and finally to influxdb?

I would like to know how I can send data from Elasticsearch to Kafka and then to InfluxDB.
I've already tried using Confluent Platform with a source connector for Elasticsearch and a sink connector for InfluxDB, but I'm stuck on sending data from Elasticsearch to Kafka.
Moreover, once my computer is off I no longer have the backup of the connectors and I have to start from scratch.
Hence my questions:
How do I send data from Elasticsearch to Kafka? Using Confluent Platform?
Do I really have to use Confluent Platform if I want to use Kafka Connect?
Kafka Connect is Apache 2.0-licensed and is included with the Apache Kafka download.
Confluent (among other companies) writes plugins for it, such as sinks for Elasticsearch or InfluxDB.
The Elasticsearch source on Confluent Hub is not built by Confluent, for example.
Related - Use Confluent Hub without Confluent Platform installation
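In short, you don't need Confluent Platform to use a plugin: you can download a connector archive, unpack it, and point the worker's `plugin.path` at the parent directory. A minimal sketch (the directory location is an example, not a requirement):

```properties
# connect-standalone.properties (or connect-distributed.properties)
# Each connector from Confluent Hub is unzipped into its own
# subdirectory under this path; Connect scans it at startup.
plugin.path=/opt/kafka/connect-plugins
```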
once my computer is off I no longer have the backup of the connectors and I have to start from scratch
Kafka Connect distributed mode stores its config data in Kafka topics... Kafka defaults to storing topic data in /tmp... which is deleted when you shut down your computer.
Similarly, if you are running any of these systems in Docker without mounted volumes, Docker is also not persistent by default.
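For example, pointing Kafka and ZooKeeper at a durable directory instead of /tmp keeps the Connect config topics across reboots (the paths below are illustrative):

```properties
# server.properties
log.dirs=/var/lib/kafka/data

# zookeeper.properties
dataDir=/var/lib/zookeeper
```

With Docker, the equivalent is mounting a volume over the data directory, e.g. `-v kafka-data:/var/lib/kafka/data`, so the container's data survives restarts.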

Does Elastic-sink connector in Confluent Platform work with elastic cloud?

I tried a simple example with the Elasticsearch sink connector in Confluent Platform version 6.2 and observed that if I connect the connector to a local Elasticsearch (http://192.168.x.x:9200) it works well, but if I use my Elastic Cloud deployment the connector fails. So, as far as I understand, the Elasticsearch sink connector I use in Confluent Platform does not work with Elastic Cloud, right? Should I use Confluent Cloud if I want to connect my sink connector to Elastic Cloud?
The example I followed is: https://docs.confluent.io/kafka-connect-elasticsearch/current/overview.html

How to connect to Kafka installed using the Confluent Helm chart

I have a Kubernetes cluster hosted on Azure. I installed Kafka resources using the Helm chart at https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka. This chart successfully deployed ZooKeeper pods, broker pods, etc. Now I want to write a Go application that connects to any of the Kafka brokers on my Kubernetes cluster, creates a producer, and publishes messages. Any help would be highly appreciated.
You can use the following string in bootstrap.servers to communicate with the brokers: <helm-release-name>-cp-kafka-headless.<namespace>:9092, or the bootstrap service created as part of the Confluent Helm chart: <helm-release-name>-cp-kafka. When you hit this service, it will go to a random broker for the first request and fetch the cluster metadata, which is synced through ZooKeeper.
Subsequent requests are then made to individual brokers based on the information found in that metadata.
You would deploy your Go code in a container in Kubernetes, then set bootstrap.servers to the Kafka Deployment's Service name, ideally via an environment variable.
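As a small sketch of the naming convention above, this builds the in-cluster bootstrap address from the Helm release name and namespace (the release and namespace values are examples, not anything from your cluster):

```go
package main

import "fmt"

// bootstrapServers builds the address of the cp-helm-charts headless
// service, following the <release>-cp-kafka-headless.<namespace>:9092
// convention described above.
func bootstrapServers(release, namespace string) string {
	return fmt.Sprintf("%s-cp-kafka-headless.%s:9092", release, namespace)
}

func main() {
	// For a Helm release named "my-kafka" in the "default" namespace:
	fmt.Println(bootstrapServers("my-kafka", "default"))
	// → my-kafka-cp-kafka-headless.default:9092
}
```

The resulting string is what you would hand to your Go Kafka client's bootstrap/broker configuration (for example the `bootstrap.servers` setting in confluent-kafka-go), ideally injected via an environment variable rather than hard-coded.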

How to implement Kafka Connect using Apache Kafka instead of Confluent

I would like to use an open-source version of Kafka Connect instead of the Confluent one, as it appears the Confluent CLI is not for production, only for development. I would like to be able to listen to changes on a MySQL database on an AWS EC2 instance. Can someone point me in the right direction?
Kafka Connect is part of Apache Kafka. Period. If you want to use Kafka Connect, you can do so with any modern distribution of Apache Kafka.
You then need a connector plugin to use with Kafka Connect, specific to your source technology. For integrating with a database there are various considerations; for MySQL specifically you have:
Kafka Connect JDBC - see it in action here
Debezium - see it in action here
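As a sketch, registering a Debezium MySQL source connector involves posting a config along these lines to the Connect REST API (the hostname, credentials, and names are placeholders, and this uses the pre-2.x property names; newer Debezium versions renamed some of these, so check the docs for your version):

```json
{
  "name": "mysql-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "<ec2-host>",
    "database.port": "3306",
    "database.user": "<user>",
    "database.password": "<password>",
    "database.server.id": "184054",
    "database.server.name": "mydb",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```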
The Confluent CLI is just a tool for managing and deploying Confluent Platform on developer machines. Confluent Platform itself is widely used in production.

How to configure a fallback in Logstash if Elasticsearch is disconnected?

We are going to deploy Elasticsearch in a VM and point our Logstash output to it. We don't plan a multi-node cluster or cloud hosting for Elasticsearch, but we are checking for any possibility of falling back to a locally running Elasticsearch service in case of a connection failure to the VM-hosted Elasticsearch.
Is it possible to configure Logstash in any way to have such a fallback when the connection to Elasticsearch is unavailable?
We use version 5.6.5 of Logstash and Elasticsearch. Thanks!
