Execute code when two previous events have been processed (Apache Kafka) - spring-boot

I'm new to Apache Kafka and Spring Boot. I'm trying to create a Spring Boot listener that generates a new event only when two specific messages (sent through Apache Kafka) have been received for a given resource.
The obvious solution is to use the database: change the status of the resource when the first event arrives, and execute the code when the second event arrives (if the customer is in the correct status in the database). My concern with this approach is what happens if both events arrive at the same time.
Is there a way to aggregate both messages in Spring Boot/Apache Kafka instead of doing this manually?
Thanks.

You can do it with Kafka Streams. Example topology:
input stream (key/value from input topic A)
filter (filter by event type, for example)
groupBy (group events by key or some field)
aggregate (aggregate events into a new data structure)
filter (verify that the aggregate is complete)
map (generate a new output event from the aggregate values)
output stream (key/value to topic B)
Check the details in the official docs: https://kafka.apache.org/24/documentation/streams/developer-guide/dsl-api.html#creating-source-streams-from-kafka
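The core of the aggregate and "is complete" filter steps can be sketched in plain Java (the class and event-type names are hypothetical; in a real topology this logic would live inside the aggregator passed to aggregate() and the predicate passed to the following filter()):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical aggregate: collects the event types seen so far for one
// resource key. In Kafka Streams this would be the aggregate value built
// by KGroupedStream#aggregate.
class TwoEventAggregate {
    private final Set<String> seenTypes = new HashSet<>();

    TwoEventAggregate add(String eventType) {
        seenTypes.add(eventType);
        return this;
    }

    // The downstream filter would only forward the record when this is true.
    boolean isComplete() {
        return seenTypes.contains("EVENT_A") && seenTypes.contains("EVENT_B");
    }
}

public class TwoEventDemo {
    public static void main(String[] args) {
        TwoEventAggregate agg = new TwoEventAggregate();
        agg.add("EVENT_A");
        System.out.println(agg.isComplete()); // false: only one event so far
        agg.add("EVENT_B");
        System.out.println(agg.isComplete()); // true: both events received
    }
}
```

Because the aggregation is keyed, two events for the same resource that arrive "at the same time" are still processed sequentially by the same stream task, which avoids the race the question worries about.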

Related

Why Kafka Streams creates topics for aggregation and joins

I recently created my first Kafka Streams application for learning. I used spring-cloud-stream-kafka-binding. This is a simple eCommerce system, in which I am reading a topic called products, which has a product entry whenever new stock of a product comes in. I am aggregating the quantity to get the total quantity of a product.
I had two choices -
Send the aggregate details (KTable) to another Kafka topic called aggregated-products
Materialize the aggregated data
I opted for the second option, and found that the application created a Kafka topic by itself; when I consumed messages from that topic, I got the aggregated messages.
.peek((k, v) -> LOGGER.info("Received product with key [{}] and value [{}]", k, v))
.groupByKey()
.aggregate(Product::new,
        (key, value, aggregate) -> aggregate.process(value),
        Materialized.<String, Product, KeyValueStore<Bytes, byte[]>>as(PRODUCT_AGGREGATE_STATE_STORE)
                .withValueSerde(productEventSerde)
                // .withKeySerde(keySerde) omitted: keySerde is configured in application.properties
);
Using InteractiveQueryService, I am able to access this state store in my application to find out the total quantity available for a product.
Now I have a few questions -
Why did the application create a new Kafka topic?
If the answer is 'to store aggregated data', then how is this different from option 1, in which I could have sent the aggregated data myself?
Where does RocksDB come into the picture?
The code of my application (which does more than what I explained here) can be accessed at this link -
https://github.com/prashantbhardwaj/kafka-stream-example/blob/master/src/main/java/com/appcloid/kafka/stream/example/config/SpringStreamBinderTopologyBuilderConfig.java
The internal topics are called changelog topics and are used for fault tolerance. The state of the aggregation is stored both locally on disk, using RocksDB, and on the Kafka broker in the form of a changelog topic, which is essentially a "backup". If a task is moved to a new machine, or the local state is lost for another reason, Kafka Streams can restore the local state by reading all changes to the original state from the changelog topic and applying them to a new RocksDB instance. After restoration has finished (the whole changelog topic has been processed), the same state is on the new machine, and the new machine can continue processing where the old one stopped. There are a lot of intricate details to this (e.g. in the default setting, it can happen that the state is updated twice for the same input record when failures happen).
See also https://developer.confluent.io/learn-kafka/kafka-streams/stateful-fault-tolerance/

KStream topology with in-memory state store: data not committed

I need to aggregate client information and push it to an output topic every hour.
I have a topology with:
input-topic
processor
sink topic
Data arrives in the input topic with a String key which contains a clientID concatenated with a date in YYYYMMDDHH format.
In my processor I use a simple InMemoryKeyValueStore (withCachingDisabled) to merge/aggregate data with specific rules (data is sometimes not aggregated, according to business logic).
In a punctuator, every hour the program iterates over the state store to get all the messages, transforms them and forwards them to the sink topic, after which I clean the state store of all the messages processed.
After the punctuation, I check the size of the store, which is effectively empty (via .all() and approximateNumEntries()); everything is OK.
But when I restart the application, the state store is restored with all the elements that were supposedly deleted.
When I manually read the changelog topic of the state store in Kafka (with a simple KafkaConsumer), I see that I have two records for each key:
The first record is committed, and the message contains my aggregation.
The second record is a deletion message (a message with a null value) but is not committed (visible only with read_uncommitted), which is dangerous in my case because the next punctuator will forward the aggregate again.
I have played with commits in the punctuator that forwards; I have also created another punctuator which commits the context periodically (every 3 seconds), but after the restart I still have my data restored in the store (which makes sense, since my delete message is not committed).
I have a classic Kafka Streams configuration:
acks=all
enable.idempotence=true
processing.guarantee=exactly_once_v2
commit.interval.ms=100
isolation.level=read_committed
with the latest version of the library, kafka-streams 3.2.2, and a cluster on 2.6.
Any help is welcome to get my records in the state store committed. I don't use TimeWindowedKStream, which is not exactly what I need (sometimes I don't aggregate but forward directly).

Kafka Streams - Access data from last instance of the topic

We have a producer sending data periodically that is being processed using Kafka Streams. We need to find the difference between the data that the producer sends in two consecutive attempts. One option would be to store the data in a database to look back at the previous transmission. I was wondering if there is a way to do this using the aggregate or reduce functions.
i.e. if the producer first sends
[100,500,600,800] and then sends [100,500,800], I need to be able to identify the missing element [600].
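The diff itself could be computed by keeping the previous payload as the aggregate state and subtracting the new payload from it. A plain-Java sketch of that logic (class and method names are hypothetical; in Kafka Streams it would sit inside the aggregator, with the stored aggregate as 'previous'):

```java
import java.util.ArrayList;
import java.util.List;

public class DiffDemo {
    // Elements present in the previous transmission but missing from the
    // current one. In a Kafka Streams aggregate, 'previous' would be the
    // stored aggregate value and 'current' the incoming record's payload.
    static List<Integer> missing(List<Integer> previous, List<Integer> current) {
        List<Integer> result = new ArrayList<>(previous);
        result.removeAll(current);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(missing(List.of(100, 500, 600, 800),
                                   List.of(100, 500, 800))); // [600]
    }
}
```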

StateStoreSupplier to store a sequence in KafkaStreams

I need to re-sequence the data that comes from two topics (merged using an outer join). Is it good practice to use a StateStore to keep the latest sequence and modify the downstream value with the re-sequenced message?
Simplified problem :
(seq from topic A, seq from topic B) -> new seq to output (keeping the current sequence in the StateStore)
(10,100) -> 1
(11,101) -> 2
(12,102) -> 3
(...,...) -> ...
The new sequence would be stored as value for the key "currentSeq" in the stateStore. The sequence will be incremented on each message and stored back to the stateStore.
You should use the Processor API with a registered (maybe custom) state store.
You can also mix and match the Processor API with the DSL using process(), transform() or transformValues() and reference a state store (by name).
See
How to add a custom StateStore to the Kafka Streams DSL processor?
How to filter keys and value with a Processor using Kafka Stream DSL
How to create a state store with HashMap as value in Kafka streams?
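The core of such a transformer can be sketched in plain Java, with a HashMap standing in for the KeyValueStore (names are hypothetical; in a real transform() the counter would be read from and written back to the store under the "currentSeq" key, as the question proposes):

```java
import java.util.HashMap;
import java.util.Map;

public class Resequencer {
    // Stand-in for a Kafka Streams KeyValueStore<String, Long> that would
    // be registered with the topology and referenced by name.
    private final Map<String, Long> stateStore = new HashMap<>();

    // Assigns the next sequence number to each incoming (seqA, seqB) pair
    // from the joined stream, persisting the counter under a fixed key.
    long resequence(long seqA, long seqB) {
        long next = stateStore.getOrDefault("currentSeq", 0L) + 1;
        stateStore.put("currentSeq", next);
        return next;
    }

    public static void main(String[] args) {
        Resequencer r = new Resequencer();
        System.out.println(r.resequence(10, 100)); // 1
        System.out.println(r.resequence(11, 101)); // 2
        System.out.println(r.resequence(12, 102)); // 3
    }
}
```

Because the state store is backed by a changelog topic, the counter survives restarts and rebalances, which is what makes it preferable to a plain instance field.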

Spark Streaming Direct Stream approach with Group ID

I was reading the Spark Streaming Kafka integration guide on the latest documentation page, which is based on the Kafka 0.10 version.
http://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html#creating-a-direct-stream
In it I can see that one of the Kafka params is "group.id" -> "example".
I thought we don't have to pass group.id as a parameter when we use the DirectStream approach. I am confused by this documentation. What is the relation between group.id and the Spark Streaming Direct Stream approach?
group.id is a Kafka consumer configuration that is used to group a set of consumer processes into a group so that each Kafka partition can be assigned to exactly one node in the group.
Looking at the Kafka Consumer Configuration, the parameter is optional, unless we use Kafka-based offset management (which Spark Streaming doesn't use with its direct approach). So it should be an optional parameter.
Also, looking at the source code of the Spark Kafka Direct DStream, Spark doesn't add Kafka params that the client doesn't set, so group.id will default to an empty string if not given.
In general, consumer group ids are needed when you have multiple consumers (a Spark Streaming job, an Akka application, etc.) for the same Kafka topic and you don't want all of them to fall under the same group (which they will if you don't give a group id to all of them). So I think it is good practice to name each consumer group with its own group id. If you use operational tools around Kafka, this will also give you visibility into each consumer group, if you name them properly.
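A minimal sketch of the kafkaParams map from the linked guide, with group.id set explicitly (the broker address, group name and other values are hypothetical placeholders; in the real application the map would be passed to KafkaUtils.createDirectStream):

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsDemo {
    // Builds the consumer-config map used by the 0-10 direct-stream integration.
    static Map<String, Object> kafkaParams() {
        Map<String, Object> params = new HashMap<>();
        params.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        params.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        params.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        params.put("group.id", "my-spark-streaming-group"); // names this consumer group
        params.put("auto.offset.reset", "latest");
        params.put("enable.auto.commit", false); // Spark manages offsets itself
        return params;
    }

    public static void main(String[] args) {
        System.out.println(kafkaParams().get("group.id"));
    }
}
```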
