Read __consumer_offsets with kafka-go (Go)

I want to read from the topic __consumer_offsets using this library: https://github.com/segmentio/kafka-go
My problem is that unless I specify a partition, nothing seems to happen. There are 100 partitions for this topic by default, and it seems unreasonable to query Kafka for the list of partitions and then loop over them to read each one; I'm hoping for a pre-existing method in the library that reads messages from all partitions of a topic.
Currently the following works, after I verified with kafkacat that there are messages in partition 15 of the __consumer_offsets topic:
r := kafka.NewReader(kafka.ReaderConfig{
	Brokers:   []string{"kafka:9092"},
	Topic:     "__consumer_offsets",
	Partition: 15,
})
r.SetOffset(0)

for {
	m, err := r.ReadMessage(context.Background())
	if err != nil {
		log.Println("Error while trying to read message")
		log.Fatal(err)
	}
	log.Printf("message at offset %d\n", m.Offset)
}
r.Close()
I would imagine that choosing a partition should be transparent at the user level unless it is specifically needed. Am I wrong?
Is there a way to read from the topic regardless of which partitions the messages are in? Or, to rephrase, to read from all partitions?

Use the consumer group API and you don't need to specify partitions at all.
https://github.com/segmentio/kafka-go#consumer-groups
// GroupID holds the optional consumer group id. If GroupID is specified, then
// Partition should NOT be specified e.g. 0
GroupID string
// Partition to read messages from. Either Partition or GroupID may
// be assigned, but not both
Partition int
https://godoc.org/github.com/segmentio/kafka-go#ReaderConfig
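For completeness, a minimal sketch of a group-based reader for the same topic (assuming the broker address from the question; the group id "offsets-reader" is an arbitrary, made-up name). With GroupID set, the broker assigns every partition of the topic to this reader, so no Partition field and no SetOffset call are needed:
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers:     []string{"kafka:9092"}, // broker address taken from the question
		GroupID:     "offsets-reader",       // hypothetical consumer group id
		Topic:       "__consumer_offsets",
		StartOffset: kafka.FirstOffset, // start from the beginning of partitions that have no committed offset
	})
	defer r.Close()

	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Println("error while reading message:", err)
			break
		}
		log.Printf("partition %d, offset %d", m.Partition, m.Offset)
	}
}
Note that a group-based reader is itself a member of a consumer group, so it commits its own offsets back to __consumer_offsets as it reads.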

Related

Apache Kafka with multiple partitions distributed deployment

I have a Kafka topic with 10 partitions. I plan to deploy two applications on different servers. One application will read from partitions 0 to 4, while the other will read from partitions 5 to 9.
Deployment 1
@KafkaListener(topicPartitions =
		{ @TopicPartition(topic = "testpartition", partitions = { "0", "1", "2", "3", "4" })
})
public void receive(ConsumerRecord record) {
	System.out.println(String.format("Listener 1 - Topic - %s, Partition - %d, Value: %s", kafkaTopic, record.partition(), record.value()));
}
Deployment 2
@KafkaListener(topicPartitions =
		{ @TopicPartition(topic = "testpartition", partitions = { "5", "6", "7", "8", "9" })
})
public void receive(ConsumerRecord record) {
	System.out.println(String.format("Listener 2 - Topic - %s, Partition - %d, Value: %s", kafkaTopic, record.partition(), record.value()));
}
Since the application is deployed separately on different servers, we will have two consumer groups.
As each application is consuming from different partitions, will this lead to unwanted replication of messages on the Kafka topic?
Will all the messages get replicated twice? If so, will there be message duplication?
Is this the right way to deploy the consumer application in a distributed environment, or is there a better way?
Since you are manually assigning the partitions, no, there will be no duplication and each instance will only receive records from its assigned partitions.
When you say "replicated": that depends on the replication factor set when the topic is created. Replicas are used to keep multiple copies on different broker instances in order to handle server failures. Replication is not the same as duplication.
But even though records are replicated in that way, there is only one logical instance of each record.
It is possible to get duplicate records in certain (rare) failure scenarios, unless you enable exactly once semantics.
The other way to deploy it is to use Kafka Group Management and let Kafka distribute the partitions across the instances, either using its default algorithm or using a custom ConsumerPartitionAssignor.

Kafka Streams: Add Sequence to each message within a group of messages

Set Up
Kafka 2.5
Apache KStreams 2.4
Deployment to OpenShift (containerized)
Objective
Group a set of messages from a topic using a set of value attributes and assign a unique group identifier.
-- This can be achieved by using selectKey and groupByKey:
originalStreamFromTopic
	.selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
	.groupByKey();

groupedStream.mapValues((k, v) -> {
	v.setGroupKey(k);
	return v;
});
For each message within a specific group, create a new message with an itemCount number as one of the attributes.
e.g. A group with key "keypart1|keyPart2" can have 10 messages, and each of those messages should have an incremental id from 1 through 10.
Options considered:
aggregate?
count plus some additional StateStore-based implementation.
One of the options listed above can make use of a couple of state stores:
state store 1 -> mapping of each groupId to an individual item (KTable)
state store 2 -> count per groupId (KTable)
A join of these two tables would stamp a sequence on each message as it gets published to the final topic.
Other statistics:
The average number of messages per group would be in the thousands, except for an outlier case where it can go up to 500k.
In general the candidates for a group should be available on the source within a span of 15 minutes at most.
The following points are of concern from an optimal-solution perspective:
I am still not clear how I would be able to stamp a sequence number on the messages unless some kind of state store is used to keep track of the messages published within a group.
Use of KTables and state stores (either explicitly, or implicitly through the use of a KTable) would add to the state store size considerably.
Given that the problem involves some kind of stateful processing, the state store can't be avoided, but any possible optimizations would be useful.
Any thoughts or references to similar patterns would be helpful.
You can use one state store with which you maintain the ID for each composite key. When you get a message you select a new composite key and then you lookup the next ID for the composite key in the state store. You stamp the message with the new ID that you just looked up. Finally, you increase the ID and write it back to the state store.
Code-wise, it would be something like:
// create the state store
StoreBuilder<KeyValueStore<String, Long>> keyValueStoreBuilder = Stores.keyValueStoreBuilder(
	Stores.persistentKeyValueStore("idMaintainer"),
	Serdes.String(),
	Serdes.Long()
);

// add the store to the topology
builder.addStateStore(keyValueStoreBuilder);

originalStreamFromTopic
	.selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
	.repartition()
	.transformValues(() -> new ValueTransformerWithKey<String, V, NewValueType>() {
		private KeyValueStore<String, Long> state;

		public void init(ProcessorContext context) {
			state = (KeyValueStore<String, Long>) context.getStateStore("idMaintainer");
		}

		public NewValueType transform(String key, V value) {
			// your logic to:
			// - look up the current ID for the composite key,
			// - stamp the record,
			// - increase the ID,
			// - write the ID back to the state store,
			// - return the stamped record
		}

		public void close() {
		}
	}, "idMaintainer")
	.to("output-topic");
You do not need to worry about concurrent access to the state store, because in Kafka Streams the same key is always processed by one single task and tasks do not share state stores. That means all records with the same composite key will be processed by a single task, and that task exclusively maintains the IDs for those composite keys in its state store.

Could using changelogs cause a bottleneck for the app itself?

I have a Spring Cloud Kafka Streams application that rekeys incoming data to be able to join two topics, selects keys, maps values and aggregates data. Over time the consumer lag seems to increase, and scaling by adding multiple instances of the app doesn't help at all. With every additional instance the consumer lag still keeps increasing.
I scaled the instances up and down from 1 to 18, but no big difference is noticeable. The number of messages it lags behind keeps increasing every 5 seconds, independent of the number of instances.
KStream<String, MappedOriginalSensorData> flattenedOriginalData = originalData
	.flatMap(flattenOriginalData())
	.through("atl-mapped-original-sensor-data-repartition", Produced.with(Serdes.String(), new MappedOriginalSensorDataSerde()));

//#2. Save the modelid and algorithm parts of the key of the errorscore topic and reduce the key
//    to installationId:assetId:tagName.
//    Repartition ahead of time, avoiding multiple repartition topics and thereby duplicating data.
KStream<String, MappedErrorScoreData> enrichedErrorData = errorScoreData
	.map(enrichWithModelAndAlgorithmAndReduceKey())
	.through("atl-mapped-error-score-data-repartition", Produced.with(Serdes.String(), new MappedErrorScoreDataSerde()));

return enrichedErrorData
	//#3. Join
	.join(flattenedOriginalData, join(),
		JoinWindows.of(
			// allow messages within one second to be joined together based on their timestamp
			Duration.ofMillis(1000).toMillis())
			// configure the retention period of the local state store involved in this join
			.until(Long.parseLong(retention)),
		Joined.with(
			Serdes.String(),
			new MappedErrorScoreDataSerde(),
			new MappedOriginalSensorDataSerde()))
	//#4. Set the installation:assetid:modelinstance:algorithm::tag key back
	.selectKey((k, v) -> v.getOriginalKey())
	//#5. Map to ErrorScore (basically removing the originalKey field)
	.mapValues(removeOriginalKeyField())
	.through("atl-joined-data-repartition");
Then the aggregation part:
Materialized<String, ErrorScore, WindowStore<Bytes, byte[]>> materialized = Materialized
	.as(localStore.getStoreName());

// Set the retention of the changelog topic
materialized.withLoggingEnabled(topicConfig);

// Configure what the windows look like and how long data will be retained in the local stores
TimeWindows configuredTimeWindows = getConfiguredTimeWindows(
	localStore.getTimeUnit(), Long.parseLong(topicConfig.get(RETENTION_MS)));

// Processing description:
// 2. With groupByKey we group the data on the new key
// 3. With windowedBy we split up the data into time intervals depending on the provided LocalStore enum
// 4. With reduce we determine the maximum value in the time window
// 5. Materialized makes it stored in a table
stream.groupByKey()
	.windowedBy(configuredTimeWindows)
	.reduce((aggValue, newValue) -> getMaxErrorScore(aggValue, newValue), materialized);
}

private TimeWindows getConfiguredTimeWindows(long windowSizeMs, long retentionMs) {
	TimeWindows timeWindows = TimeWindows.of(windowSizeMs);
	timeWindows.until(retentionMs);
	return timeWindows;
}
I would expect that increasing the number of instances would decrease the consumer lag tremendously.
So in this setup there are multiple topics involved such as:
* original-sensor-data
* error-score
* kstream-joinother
* kstream-jointhis
* atl-mapped-original-sensor-data-repartition
* atl-mapped-error-score-data-repartition
* atl-joined-data-repartition
The idea is to join the original-sensor-data with the error-score. The rekeying requires the atl-mapped-* topics, the join then uses the kstream-* topics, and as a result of the join the atl-joined-data-repartition topic is filled. After that the aggregation also creates topics, but I'll leave those out of scope for now.
original-sensor-data
  \
   \
    \  atl-mapped-original-sensor-data-repartition -- kstream-jointhis  --\
    /  atl-mapped-error-score-data-repartition     -- kstream-joinother ---\
   /                                                                        \
error-score                                                          atl-joined-data-repartition
As increasing the number of instances doesn't seem to have much effect anymore since I introduced the join and the atl-mapped topics, I'm wondering whether this topology could become its own bottleneck. From the consumer lag it seems that the original-sensor-data and error-score topics have a much smaller consumer lag compared to, for instance, the atl-mapped-* topics. Is there a way to cope with this by removing these changelogs, or does this mean the application cannot scale?

Why is searchResult.TotalHits() different from len(searchResult.Hits.Hits)?

I work with the golang elastic 5 API to run queries in ElasticSearch. I check the number of hits with searchResult.TotalHits() and it gives me a large number (more than 100), but when I try to iterate over the hits I only get 10 entities. Likewise, when I check len(searchResult.Hits.Hits) I get 10.
I tried different queries that select fewer than 10 entities, and those work fine.
query = elastic.NewBoolQuery()
ctx := context.Background()
query = query.Must(elastic.NewTermQuery("key0", "term"),
	elastic.NewWildcardQuery("key1", "*term2*"),
	elastic.NewWildcardQuery("key3", "*.*"),
	elastic.NewRangeQuery("timestamp").From(fromTime).To(toTime),
)
searchResult, err = client.Search().Index("index").
	Query(query).Pretty(true).Do(ctx)
fmt.Printf("TotalHits(): %v", searchResult.TotalHits())                 // It gives me 482
fmt.Printf("length of the hits array: %v", len(searchResult.Hits.Hits)) // It gives 10
for _, hit := range searchResult.Hits.Hits {
	var tweet Tweet
	_ = json.Unmarshal(*hit.Source, &tweet)
	fmt.Printf("entity: %s", tweet) // It prints 10 entities
}
What am I doing wrong? Are there batches in the SearchResult or what could be the solution?
It's not specified in your question, so please comment if you're using a different client library (e.g. the official client), but it appears you're using github.com/olivere/elastic. Based on that assumption, what you're seeing is the default result set size of 10. The TotalHits number is how many documents match your query in total; the Hits number is how many were returned in your current result, which you can manipulate using Size, Sort and From. Size is documented as:
Size is the number of search hits to return. Defaults to 10.
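For illustration, a minimal sketch that reuses the client, query and ctx variables from the question and raises Size above the default of 10 (the value 500 is just an arbitrary number larger than the 482 total hits mentioned above):
searchResult, err = client.Search().
	Index("index").
	Query(query).
	Size(500).               // return up to 500 hits instead of the default 10
	Sort("timestamp", true). // sort ascending by timestamp; assumes the field is sortable
	Do(ctx)
if err != nil {
	panic(err)
}
fmt.Printf("TotalHits(): %d, hits returned: %d\n",
	searchResult.TotalHits(), len(searchResult.Hits.Hits))
For paging, From can be combined with Size; for very large result sets the library also offers a scroll API, which avoids the index.max_result_window limit (10,000 by default).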

using kafka-streams to create a new KStream containing multiple aggregations

I am sending JSON messages containing details about a web service request and response to a Kafka topic. I want to process each message as it arrives in Kafka using Kafka Streams and send the results, as a continuously updated summary (a JSON message), to a websocket to which a client is connected.
The client will then parse the JSON and display the various counts/summaries on a web page.
Sample input messages are as below
{
	"reqrespid": "048df165-71c2-429c-9466-365ad057eacd",
	"reqDate": "30-Aug-2017",
	"dId": "B198693",
	"resp_UID": "N",
	"resp_errorcode": "T0001",
	"resp_errormsg": "Unable to retrieve id details. DB Procedure error",
	"timeTaken": 11,
	"timeTakenStr": "[0 minutes], [0 seconds], [11 milli-seconds]",
	"invocation_result": "T"
}
{
	"reqrespid": "f449af2d-1f8e-46bd-bfda-1fe0feea7140",
	"reqDate": "30-Aug-2017",
	"dId": "G335887",
	"resp_UID": "Y",
	"resp_errorcode": "N/A",
	"resp_errormsg": "N/A",
	"timeTaken": 23,
	"timeTakenStr": "[0 minutes], [0 seconds], [23 milli-seconds]",
	"invocation_result": "S"
}
{
	"reqrespid": "e71b802d-e78b-4dcd-b100-fb5f542ea2e2",
	"reqDate": "30-Aug-2017",
	"dId": "X205014",
	"resp_UID": "Y",
	"resp_errorcode": "N/A",
	"resp_errormsg": "N/A",
	"timeTaken": 18,
	"timeTakenStr": "[0 minutes], [0 seconds], [18 milli-seconds]",
	"invocation_result": "S"
}
As the stream of messages comes into Kafka, I want to be able to compute on the fly:
total number of requests, i.e. a count of all messages
total number of requests with invocation_result equal to 'S'
total number of requests with invocation_result not equal to 'S'
total number of requests with invocation_result equal to 'S' and resp_UID equal to 'Y'
total number of requests with invocation_result equal to 'S' and resp_UID not equal to 'Y'
minimum time taken, i.e. min(timeTaken)
maximum time taken, i.e. max(timeTaken)
average time taken, i.e. avg(timeTaken)
and write them out into a KStream whose new key is set to the reqDate value and whose new value is a JSON message containing the computed values, as shown below for the 3 messages above:
{
"total_cnt":3, "num_succ":2, "num_fail":1, "num_succ_data":2,
"num_succ_nodata":0, "num_fail_biz":0, "num_fail_tech":1,
"min_timeTaken":11, "max_timeTaken":23, "avg_timeTaken":17.3
}
I am new to Kafka Streams. How do I do the multiple counts, over differing columns, all in one step or as a chain of different steps? Would Apache Flink or Calcite be more appropriate? My understanding of a KTable suggests that you can only have a key, e.g. 30-AUG-2017, and a single column value, e.g. a count of 3. I need a resulting table structure with one key and multiple count values.
All help is very much appreciated.
You can just do a complex aggregation step that computes all those at once. I am just sketching the idea:
class AggResult {
	long total_cnt = 0;
	long num_succ = 0;
	// and many more
}

stream.groupBy(...).aggregate(
	new Initializer<AggResult>() {
		public AggResult apply() {
			return new AggResult();
		}
	},
	new Aggregator<KeyType, JSON, AggResult>() {
		public AggResult apply(KeyType key, JSON value, AggResult aggregate) {
			++aggregate.total_cnt;
			if (value.get("success").equals("true")) {
				++aggregate.num_succ;
			}
			// add more conditions to get all the other aggregate results
			return aggregate;
		}
	},
	// other parameters omitted for brevity
)
.toStream()
.to("result-topic");
