Spring Cloud Stream 2.0 and startOffset latest - spring-boot

While upgrading our app to Spring Boot 2.0.3, we encountered an issue with consuming messages when using startOffset: latest.
The scenario that we tested is as follows:
Deploy an app to PCF with a consumer (topic = TEST_CHANNEL, group = kafka2_test) configured with startOffset: latest, and increase the number of instances to 5.
Stop the app.
Publish 40 messages to the TEST_CHANNEL topic.
Start the app.
We can see that the app successfully gets assigned the partitions; however, some of the instances do not consume from the partitions they are assigned to.
We also verified it by checking the lag for the group.
This scenario worked when using spring boot 1.5.10. It also worked when using spring boot 2.0.3 and not setting the startOffset: latest.
Note: this is not a new group, so an offset is supposed to be present for the consumer to use, and resetOffsets should not have any effect.
Is this an issue in Spring Boot 2.0.3?

not a new group ... and resetOffsets should not have any effect.
resetOffsets is specifically designed to reset the offset for an existing group to the startOffset value.
It was broken in the 1.3.x version of the binder (the version used by Boot 1.5.x) and had no effect.
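For reference, both properties live on the Kafka binder's consumer binding. A minimal application.yml sketch, assuming a binding named "input" (the binding name is a placeholder; destination and group are from the question):

```yaml
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: TEST_CHANNEL
          group: kafka2_test
      kafka:
        bindings:
          input:
            consumer:
              # Where a consumer with NO committed offset starts reading.
              startOffset: latest
              # When true, an EXISTING group is forced back to startOffset,
              # discarding its committed offsets.
              resetOffsets: true
```

With resetOffsets left at its default of false, an existing group resumes from its committed offsets and startOffset only applies to brand-new groups.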

Related

Migration from Transport client to Java API client in ElasticSearch Springboot project

I am new to Elasticsearch and I have an Elasticsearch Spring Boot project. We use Docker to set up the Elasticsearch cluster and use Spring Boot (backend) with an ES client to fetch data from Elasticsearch. It is an old project and now I have the task of upgrading the cluster to its latest version, which is 8.x. Looking through the dependencies, I saw that the project uses the Transport client, but per the documentation the Transport client is long deprecated; instead we were supposed to use the High Level REST Client, which was in turn deprecated in favor of the Java API Client. So for the app to work I have to update the dependencies. I found the migration guide from the Transport client to the High Level REST Client, and the migration guide from the High Level REST Client to the Java API Client. I also gathered from the documentation that the Java API Client and the High Level REST Client do not have any relation, and the transition to the Java API Client can be done gradually.
Now my question is:
Suppose I upgrade from the Transport client to the Java API Client. Will this bring a whole lot of changes to my code, as it is a really big project, or is there a workaround where I don't have to make that many changes and can still work with Elasticsearch version 8?
The need for the client upgrade didn't arise till now because we were still using 7.17.x and that was working fine, but when I upgraded the cluster to version 8, the Transport client was no longer provided. One approach is to migrate to the High Level REST Client first and then gradually transition to the Java API Client.
Any help will be appreciated.
Regards
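For a sense of scale: the Java API Client bootstrap (per the Elasticsearch 8.x documentation) is quite different from the TransportClient's, so call sites generally need rewriting rather than a drop-in swap. A minimal connection sketch, assuming the elasticsearch-java and elasticsearch-rest-client dependencies are present (host and port are placeholders):

```java
import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class EsClientFactory {
    public static ElasticsearchClient create() {
        // The low-level REST client handles HTTP; replace host/port with your cluster's.
        RestClient restClient = RestClient.builder(
                new HttpHost("localhost", 9200)).build();
        // The transport layer maps JSON to and from the typed API objects.
        ElasticsearchTransport transport =
                new RestClientTransport(restClient, new JacksonJsonpMapper());
        return new ElasticsearchClient(transport);
    }
}
```

The typed request/response classes of the Java API Client do not share types with the TransportClient, which is why the two-step path (TransportClient to High Level REST Client, then gradually to the Java API Client) is the usual recommendation for large codebases.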

Spring Boot application stops serving traffic during Kafka consumer rebalancing

I'm running Spring Boot applications in a k8s cluster with Kafka.
During a rolling update or when scaling my services, some of them rebalance, which is OK since consumers are being added or removed, but this causes the service that is rebalancing to stop serving traffic.
I'm using
Spring Boot 2.1.1.RELEASE
Spring Integration Kafka 3.1.0.RELEASE
Spring Kafka 2.2.7.RELEASE
I have 3 topics, each with 2000 partitions; there are 30-50 service instances depending on the system load.
Each topic has its own consumer group.
First I thought that new services were signaling that they are ready (via the Actuator readiness probe), causing them to accept traffic before they were actually ready, but that's not the case, since the existing instances also stop serving traffic while they rebalance.
What are the best practices for scaling or rolling updates that will trigger the minimum rebalancing possible?
Boot 2.1 is end of life. The last release, last month, was 2.1.18. The current 2.2.x release of spring-kafka is 2.2.14.
If you can upgrade to (at least) Boot 2.2.11 (spring-kafka 2.4.11 - Boot brings in 2.3.x by default) (and a broker >= 2.3), you could consider configuring incremental cooperative rebalancing.
Current releases are Boot 2.4.0 and spring-kafka 2.6.3.
https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
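Once on those versions, switching to cooperative rebalancing is a single consumer property; a minimal application.properties sketch (standard Spring Boot Kafka keys):

```properties
# CooperativeStickyAssignor ships with kafka-clients 2.4+; a broker >= 2.3 is also required.
# With the default eager assignors, every consumer revokes ALL of its partitions on each
# rebalance; the cooperative assignor only revokes the partitions that actually move.
spring.kafka.consumer.properties.partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor
```

That keeps the non-moving partitions being consumed throughout the rebalance, which is what addresses the "stops serving traffic" symptom.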

Micrometer KafkaConsumerMetrics present when running locally but not when deployed

When I run locally I can see that kafka.consumer.* metrics are being collected, while when I deploy my service those metrics are not present.
I use Kafka version 1.11.0, Java 11 and Spring Boot 2.2.
How can I determine what is missing?
In case anyone has this issue: I had to explicitly add
spring.jmx.enabled=true
It is needed since the Kafka client publishes its metrics to JMX, and Micrometer reads them from there. JMX is disabled by default starting from Spring Boot 2.2.
It worked locally because IDEA added spring.jmx.enabled=true flag under the covers.

Can JMS messaging be performed on Quarkus using Apache Camel routes?

Sorry for a naive question, just starting off with Quarkus here. Since I read that Quarkus already supports Camel, is it possible to create a JMS route to send a message to a JMS queue?
I also have some legacy services which use database bean map handlers (Apache Commons DbUtils). If I include them as part of a Quarkus app, can these still be deployed on GraalVM?
The list of components currently fully supported by camel-quarkus is here: https://github.com/apache/camel-quarkus/tree/master/extensions
Other components not listed there work out of the box in JVM mode, but some work may be required to make them work as a native image.
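Assuming a JMS-capable camel-quarkus extension is on the classpath and a ConnectionFactory is configured, a JMS route is just the usual Camel Java DSL; a minimal sketch (the queue name is a placeholder):

```java
import org.apache.camel.builder.RouteBuilder;

public class JmsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Anything sent to the direct:send endpoint is forwarded
        // to the JMS queue "myQueue".
        from("direct:send")
            .to("jms:queue:myQueue");
    }
}
```

In Quarkus, Camel discovers RouteBuilder beans automatically, so no further registration is needed in the common case.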

Upgrading consumers from ZooKeeper-based offset storage to Kafka-based storage

I am using Golang and the Sarama client. The Kafka version is 0.9, which I plan to upgrade.
I am planning to upgrade the Sarama clients to the latest version and use sarama-cluster instead of wvanbergen/kafka. I see that offsets will now be committed to Kafka.
The Apache Kafka page says that to migrate from ZooKeeper-based storage to Kafka-based storage you need to do the following:
Set offsets.storage=kafka and dual.commit.enabled=true in your consumer config.
There is no such property in the wvanbergen/kafka library, and they don't have plans to add it either.
Has anyone performed a similar upgrade from wvanbergen/kafka to sarama-cluster without the dual.commit.enabled setting on a production system? How did you migrate offsets from ZooKeeper to Kafka?
