Is an upsert possible with Kafka Connect to ElasticSearch?

I'm receiving events which end up in Kafka. From these events I fetch the id using a Kafka Streams application and post it back to Kafka as a pair of (id, 1) in another topic. Then I would like to see if the id already exists in Elasticsearch, and if so update its counter; otherwise create a new record in Elasticsearch with the id from Kafka and the counter set to 1, i.e. an upsert of the record (id, 1) to ES.
I was hoping to use the Kafka Connect Elasticsearch sink for this, but it seems to be not that straightforward, if possible at all. I can see that adding records to ES works, but merging with existing records is something I haven't figured out yet. Is this possible already, and if so, how? And if not, is it planned for an upcoming release?

I forked the datamountaineer ES sink connector to allow upserts. With it you can specify a PK and run an update with docAsUpsert into ES. You can grab the project and compile the JAR from my GitHub fork.
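For reference, what the connector ultimately needs to issue is a scripted upsert against Elasticsearch's _update endpoint. A minimal sketch of that request on a recent Elasticsearch version, assuming a hypothetical counts index and counter field:

POST /counts/_update/<id>
{
  "script": {
    "source": "ctx._source.counter += params.inc",
    "lang": "painless",
    "params": { "inc": 1 }
  },
  "upsert": { "counter": 1 }
}

If the document <id> does not exist yet, the upsert body is indexed as-is (counter = 1); otherwise the script increments the stored counter. Newer versions of the Confluent Elasticsearch sink connector also expose a write.method=upsert setting (doc-as-upsert on the record value), though that merges fields rather than incrementing them.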

Related

For the Kafka sink connector, can I send a single message to multiple index documents in Elasticsearch?

I am receiving a very complex JSON payload inside a topic message, so I want to do some computations on it using SMTs and send the results to different Elasticsearch index documents. Is this possible?
I am not able to find a solution for this.
The Elasticsearch sink connector only writes to one index per record, based on the topic name. It's explicitly written in the Confluent documentation that topic-altering transforms such as RegexRouter will not work as expected.
I'd suggest looking at the Logstash Kafka input and Elasticsearch output as an alternative; however, I'm still not sure how you'd "split" a record into multiple documents there either.
You may need an intermediate Kafka consumer such as Kafka Streams or ksqlDB to extract your nested JSON and emit the multiple records you expect in Elasticsearch, along the lines of the sketch below.
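A hedged sketch of the ksqlDB route, assuming the payload contains an array field; the stream names, topic names, and fields (id, items) are hypothetical:

-- declare a stream over the source topic (the schema here is an assumption)
CREATE STREAM complex_events (id VARCHAR, items ARRAY<VARCHAR>)
  WITH (KAFKA_TOPIC='source-topic', VALUE_FORMAT='JSON');

-- fan out: one output record per array element, written to a new topic;
-- the Elasticsearch sink can then index that topic, producing one document per element
CREATE STREAM exploded WITH (KAFKA_TOPIC='exploded-topic') AS
  SELECT id, EXPLODE(items) AS item
  FROM complex_events
  EMIT CHANGES;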

kafka-connect-elasticsearch: How to sync elasticsearch with consumer group?

I want to query messages in a Kafka topic, but not all of them and not from the beginning. I just need to see which messages are not yet committed based on a consumer group. So, basically, what I want is to delete the documents whose offset is lower than the consumer group's offset.
At this point, if I use the Elasticsearch connector, is there any way or workaround to delete documents from the Elasticsearch index after a message is consumed and committed?
Or should I use Kafka Streams instead, and if so, how?
The sink connector only deletes documents when delete behaviour is explicitly enabled (behavior.on.null.values=delete) and there is a null-valued record (a tombstone) for a document ID in the topic you're reading. This means the connector has to actually consume and process that null record; the relevant settings are sketched below.
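A minimal sketch of those sink settings (the property names are from the Confluent Elasticsearch sink connector; the rest of the connector config is assumed):

# delete the Elasticsearch document when a tombstone (null value) arrives for its key
behavior.on.null.values=delete
# derive the document ID from the record key so the delete targets the right document
key.ignore=false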
see which messages are not yet committed
This would imply messages that have not yet been processed by the connector, so they would not be searchable in Elasticsearch in the first place.
delete the documents whose offset is lower than a consumer group offset
If you created a fresh index in Elasticsearch that's only used by the connector, you could pause the connector, clear the index (for example by deleting and recreating it, or with a delete-by-query), then resume the connector.
is there any way or a workaround to delete documents from the elastic index after a message is consumed and committed
Directly use the Elasticsearch DELETE API, as sketched below.
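A hedged sketch of both options against the Elasticsearch REST API; the index name, document ID, and the kafka_offset field (which you would have to add to each document yourself, e.g. with an InsertField SMT) are hypothetical:

# delete a single document by ID
DELETE /my-index/_doc/my-document-id

# delete every document whose stored offset is below some threshold
POST /my-index/_delete_by_query
{
  "query": {
    "range": { "kafka_offset": { "lt": 12345 } }
  }
}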

Elasticsearch best practices

1) We are fairly new to Elasticsearch. In our Spring Boot application, we use Spring's Elasticsearch support, which is based on an in-memory node client. Inserts/updates/deletes happen on our primary relational database (DB2), and we use Elasticsearch solely for handling searches. We have a synchronizing mechanism to keep Elasticsearch up to date with the latest changes.
2) In production, we have 4 instances of our application running. To synchronize the in-memory Elasticsearch store across all 4 servers, we have a JMS topic in place to which all the DB2 updates are posted. The application has a topic listener that consumes any DB changes posted to this JMS topic and updates the in-memory Elasticsearch store.
Question:
i) Is the above an ideal way to implement Elasticsearch in your application? If not, what else would you recommend?
ii) Any Elasticsearch best practices that you can point us to?
Thanks Much!
1- In production, use 3 master nodes and 4 data nodes. Always keep an odd number of master-eligible nodes so the cluster has a clear quorum.
2- Define your mappings and indices in advance; don't rely on auto-creation with dynamic mappings.
Define explicit data types.
Define amount fields as scaled_float with a scaling factor of 100 (two decimal places).
Map numeric fields as long so that range ('between') queries, sorting, and aggregations work on them.
Choose carefully between the keyword and text field types; use text only where full-text search is actually needed.
3- Use external versioning if you update the same record again and again, to avoid overwriting it with stale data. (Points 2 and 3 are illustrated in the sketch below.)
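A hedged illustration of points 2 and 3; the orders index and its field names are hypothetical:

PUT /orders
{
  "mappings": {
    "properties": {
      "order_id": { "type": "keyword" },
      "amount":   { "type": "scaled_float", "scaling_factor": 100 },
      "quantity": { "type": "long" },
      "comment":  { "type": "text" }
    }
  }
}

# external versioning: Elasticsearch rejects a write whose version is not higher than the stored one
PUT /orders/_doc/1?version=42&version_type=external
{ "order_id": "1", "amount": 19.99, "quantity": 3, "comment": "first order" }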

Using elasticsearch generated ID's in kafka elasticsearch connector

I noticed that documents indexed in Elasticsearch using the Kafka Elasticsearch connector have their IDs in the following format: topic+partition+offset.
I would prefer to use IDs generated by Elasticsearch. It seems topic+partition+offset is not usually unique, so I am losing data.
How can I change that?
As Phil says in the comments -- topic-partition-offset should be unique, so I don't see how this is causing data loss for you.
Regardless - you can either let the connector generate the key (as you are doing), or you can define the key yourself (key.ignore=false). There is no other option.
You can use Single Message Transformations with Kafka Connect to derive a key from the fields in your data. Based on your message in the Elasticsearch forum it looks like there is an id in your data - if that's going to be unique you could set that as your key, and thus as your Elasticsearch document ID too. Here's an example of defining a key with SMT:
# Add the `id` field as the key using Single Message Transformations
transforms=InsertKey, ExtractId
# `ValueToKey`: push an object of one of the column fields (`id`) into the key
transforms.InsertKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.InsertKey.fields=id
# `ExtractField`: convert key from an object to a plain field
transforms.ExtractId.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.ExtractId.field=id
(via https://www.confluent.io/blog/building-real-time-streaming-etl-pipeline-20-minutes/)
@Robin Moffatt, as far as I can see, topic-partition-offset can cause duplicates if you upgrade your Kafka cluster not in a rolling-upgrade fashion but by replacing the cluster with a new cluster (which is sometimes easier). In that case you will experience data loss because existing documents get overwritten.
Regarding your excellent example: it can be the solution for many cases, but I'd add another option. Maybe you could add an epoch timestamp element to the topic-partition-offset, so the ID becomes topic-partition-offset-current_timestamp.
What do you think?

how do you update or sync with a jdbc river

A question about rivers and data syncing with a production database using Elasticsearch:
Are rivers suited only for bulk-loading data initially, or do they somehow listen for or monitor changes?
If I have a nightly import of data, is it just better to delete rivers and indexes, and re-index and recreate the rivers?
If I update or change a river, do I have to delete and re-create the index?
How do I set up a schedule with a river to fetch new data periodically? Can it store the last max id so that it can do diff queries in the SQL to select into the river?
Any suggestions on a better way to keep the database and elastic search in sync - without calling individual index update functions with a PUT command?
All of the Elasticsearch rivers are different - some are provided directly by Elasticsearch, many more are developed by third parties:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html
Each operates differently, so to answer your questions you have to choose a specific river. For your case, since you're looking to index data from a production database, I'll assume that the JDBC river is what you would use:
https://github.com/jprante/elasticsearch-river-jdbc
This river will index data from your JDBC source, including picking up changes. It can do so on a schedule (there is detailed documentation on the schedule parameter on this page: https://github.com/jprante/elasticsearch-river-jdbc). However, this river will not pick up deletes:
https://github.com/jprante/elasticsearch-river-jdbc/issues/213
You may find this discussion useful concerning how to get around the lack of delete support by building a new river/index daily and using index aliases: ElasticSearch river JDBC MySQL not deleting records
You can simply alias your database id column to _id in the river's SQL (see the sketch below); that way Elasticsearch can tell which document each row belongs to and whether it has changed.
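A hedged sketch of a JDBC river definition combining the schedule and the _id alias, following the format documented in the plugin's README; the connection details, SQL, and cron expression are hypothetical:

PUT /_river/my_jdbc_river/_meta
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/mydb",
    "user": "river_user",
    "password": "secret",
    "sql": "select id as _id, name, updated_at from products",
    "schedule": "0 0/15 * * * ?",
    "index": "products"
  }
}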
