Kafka consumer LeaveGroup request to coordinator - spring-boot

I have one interesting scenario. It seems that when there are no new messages to pick up (at least that's what I think is happening), my consumer suddenly shuts down.
I am using Kotlin + Spring Boot Kafka Producer and Consumer. My consumer is configured like this:
spring:
  profiles:
    active: ${SPRING_PROFILE:prod}
  cloud:
    stream:
      bindingRetryInterval: ${BINDING_RETRY_INTERVAL:0}
      default:
        contentType: application/*+avro
        group: my-app
        consumer:
          useNativeDecoding: false
          concurrency: ${CONSUMER_CONCURRENCY:3}
          maxAttempts: ${CONSUMER_MAX_ATTEMPTS:3}
      bindings:
        outbox:
          # Topic to consume from
          destination: my_outbox_topic
      kafka:
        bindings:
          outbox:
            consumer:
              enableDlq: ${ENABLE_DLQ:false}
        binder:
          brokers: ${BOOTSTRAP_SERVERS:localhost:9092}
          configuration.ssl.endpoint.identification.algorithm: ${SSL_ALGORITHM:}
          consumerProperties:
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            specific.avro.reader: false
So I start the app; it consumes for a while and then I get these kinds of logs:
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-app] Member consumer-2-dc0e4115-6e9a-4a99-9d04-a3a2efb2c7b3 sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-2, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=my-app] Member consumer-3-701431a3-7cb9-4962-8f89-f2df41a176fd sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-3, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.189 INFO 4254 --- [container-1-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped
2021-09-28 12:57:38.197 INFO 4254 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped
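
The "Shutting down ExecutorService" and "Consumer stopped" lines are the normal, orderly shutdown sequence of the listener containers, which suggests something is closing the application context rather than the consumer failing on its own. One way to find out what triggers it is to dump a stack trace when the context closes. A minimal diagnostic sketch (in Java for brevity; the bean name is arbitrary):

import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.event.ContextClosedEvent;

@Bean
ApplicationListener<ContextClosedEvent> shutdownTracer() {
    return event ->
            // prints the stack of the thread performing the close, which
            // usually reveals what initiated the shutdown (a JVM shutdown
            // hook, an actuator endpoint, a failing bean, etc.)
            new Exception("context is closing").printStackTrace();
}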

Related

Trace id propagation in Spring Boot 3 with Spring cloud streams and WebFlux

I am trying to use Spring Cloud Stream with the Kafka binder. But when I call WebClient in the chain, the trace id is lost.
My flow is 'external service' -> 'functionStream-in' -> 'http call' -> 'functionStream-out' -> 'testStream-in' -> 'testStream-out' -> 'external service'.
After the http call (or is it?) the trace id is no longer propagated, and I don't understand why. If I remove the http call, everything is OK.
I tried to add Hooks.enableAutomaticContextPropagation(); but that didn't help.
I also tried to add ContextSnapshot.setThreadLocalsFrom around the http call - same thing.
How can I solve it?
Dependencies:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
    implementation 'org.springframework.cloud:spring-cloud-stream'
    implementation 'org.springframework.cloud:spring-cloud-starter-stream-kafka'
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
    implementation "io.projectreactor:reactor-core:3.5.3"
    implementation "io.micrometer:context-propagation:1.0.2"
    implementation "io.micrometer:micrometer-core:1.10.4"
    implementation "io.micrometer:micrometer-tracing:1.0.2"
}
application.yml:
spring:
  cloud.stream:
    kafka.binder:
      enableObservation: true
      headers:
        - b3
    function.definition: functionStream;testStream
    default.producer.useNativeEncoding: true
    bindings:
      functionStream-in-0:
        destination: spring-in
        group: spring-test1
      functionStream-out-0:
        destination: test-in
      testStream-in-0:
        destination: test-in
        group: spring-test2
      testStream-out-0:
        destination: spring-out
  integration:
    management:
      observation-patterns: "*"
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
management:
  tracing:
    enabled: true
    sampling.probability: 1.0
    propagation.type: b3
logging.pattern.level: "%5p [%X{traceId:-},%X{spanId:-}]"
Code:
@Bean
WebClient webClient(final WebClient.Builder builder) {
    return builder.build();
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient, final ObservationRegistry registry) {
    return flux -> flux
            .<Message<String>>handle((msg, sink) -> {
                log.info("functionStream-1");
                sink.next(msg);
            })
            .flatMap(msg -> webClient.get()
                    .uri("http://localhost:8080/test")
                    .exchangeToMono(httpResponse -> httpResponse.bodyToMono(String.class)
                            .map(httpBody -> MessageBuilder.withPayload(httpBody)
                                    .copyHeaders(httpResponse.headers().asHttpHeaders())
                                    .build())
                            .<Message<String>>handle((m, sink) -> {
                                log.info("functionStream-3");
                                sink.next(m);
                            })
                    )
            )
            .handle((msg, sink) -> {
                log.info("functionStream-2");
                sink.next(msg);
            });
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> testStream(final ObservationRegistry registry) {
    return flux -> flux
            .publishOn(Schedulers.boundedElastic())
            .<Message<String>>handle((msg, sink) -> {
                log.info("testStream-1");
                sink.next(msg);
            })
            .map(msg -> MessageBuilder
                    .withPayload(msg.getPayload())
                    .copyHeaders(msg.getHeaders())
                    .build());
}

@Bean
RouterFunction<ServerResponse> router(final ObservationRegistry registry) {
    return route()
            .GET("/test", r -> ServerResponse.ok().body(Mono.deferContextual(contextView -> {
                try (final var scope = ContextSnapshot.setThreadLocalsFrom(contextView, ObservationThreadLocalAccessor.KEY)) {
                    log.info("GET /test");
                }
                return Mono.just("answer");
            }), String.class))
            .build();
}
With this code I get this output:
2023-02-16T17:06:22.111 INFO [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:06:22.166 WARN [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#523fe6a9]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.170 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#545339d8]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.187 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#44400bcc]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.361 INFO [63ee385de15f1061dea076eb06b0d1e0,908f48f8485a4277] 220348 --- [ctor-http-nio-4] com.example.demo.TestApplication : GET /test
2023-02-16T17:06:22.407 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-3
2023-02-16T17:06:22.409 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:06:22.448 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.456 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382456
2023-02-16T17:06:22.477 INFO [,] 220348 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:06:22.512 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,b5babc6bef4e30ca] 220348 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:06:22.539 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.543 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382543
Without the http call I get this output:
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:03:09.615 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189628
2023-02-16T17:03:09.691 INFO [,] 204228 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:03:09.859 INFO [63ee379d924e5645fc1d9e27b8135b48,b92a1a59ffd32d80] 204228 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:03:09.868 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189874
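
One detail that can matter with automatic context propagation is where the hook is installed: it is generally recommended to call it as early as possible, before any reactive pipeline is assembled. A minimal sketch of that placement in the com.example.demo.TestApplication seen in the logs (a sketch only, not a guaranteed fix for the lost trace id):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import reactor.core.publisher.Hooks;

@SpringBootApplication
public class TestApplication {

    public static void main(String[] args) {
        // install the hook before the context starts so every Flux/Mono
        // assembled afterwards participates in context propagation
        Hooks.enableAutomaticContextPropagation();
        SpringApplication.run(TestApplication.class, args);
    }
}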

spring kafka: Transactions in consumer Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn

We have a transactional producer, and there are no issues there.
For the consumer, we see the following in the logs. The question is: why is a transaction being started here while we are consuming the message (and why does it result in this exception)?
2022-12-28 18:02:05.986 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2022-12-28 18:02:10.437 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Created Kafka transaction on producer [CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[S] Consuming message] data[record = ConsumerRecord(topic =XXXXXXX)))]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Processed message]
2022-12-28 18:03:10.439 INFO [qa] 85474 --- [tainer#0-51-C-1] abc.abc.common.Generic : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Throwable t] data[exception = [Ljava.lang.StackTraceElement;@6d314e1d] ex[Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn] sts[org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[E] Consuming message]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Initiating transaction commit
2022-12-28 18:04:10.444 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
commitTransaction failed: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]
2022-12-28 18:04:10.444 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Initiating transaction rollback after commit exception
2022-12-28 18:04:10.445 WARN [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : Error during some operation; producer removed from cache: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]
2022-12-28 18:04:12.436 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.l.KafkaMessageListenerContainer : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Transaction rolled back
This is the configuration:
spring:
  data:
    mongodb:
      uri: indb-prop
      auto-index-creation: false
  kafka:
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      retries: 2
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}PP
      transaction-id-prefix: tx-${spring.kafka.producer.client-id}-
      properties:
        enable.idempotence: true
        spring.json.add.type.headers: false
      bootstrap-servers: ${kafka_bootstrap_servers_2}
    # listener:
    #   missing-topics-fatal: true
    #   type: batch
    #   concurrency: 15
    #   ack-mode: manual_immediate
    #   poll-timeout: 1s
    consumer:
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      group-id: ${spring.application.name}-${info.cluster}-111111111112
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}
      # client-id: ${spring.kafka.consumer.group-id}-${IP_ADDRESS}
      auto-offset-reset: earliest
      enable-auto-commit: false
      isolation-level: read_committed
      # max-poll-records: 3
      fetch-max-wait: 5s
      properties:
        max.poll.interval.ms: 20000000
        spring.json.trusted.packages: '*'
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: net.abc.abc.EventData
      bootstrap-servers: ${kafka_bootstrap_servers}
And the consumer:
@KafkaListener(
        topics = "SOME_TOPIC",
        autoStartup = "true")
// @Transactional(transactionManager = "mongoTransactionManager", propagation = Propagation.REQUIRED)
@Override
public void onMessage(ConsumerRecord<String, EventData> data) {
    if (data == null || data.value() == null) {
        return;
    }
    logger.debug(m -> m.event(KAFKA)
            .msg("[S] Consuming message")
            .with("record", data));
    try {
        logger.debug(m -> m.event(KAFKA)
                .msg("Processed message"));
    }
    finally {
        logger.debug(m -> m.event(KAFKA)
                .msg("[E] Consuming message"));
    }
}
The Spring Boot version is 2.6.6.
How many broker nodes are you using? If 1, have you overridden the broker params KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR and KAFKA_TRANSACTION_STATE_LOG_MIN_ISR to 1?
Similar sounding issue here:
https://github.com/testcontainers/testcontainers-java/issues/1816
Rob.
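
For reference, on a single-node broker (for example a Testcontainers-based setup like the one in the linked issue) those two parameters could be overridden roughly as in the sketch below; this is an illustration only, and the image tag is an assumption:

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

// transactions require the __transaction_state internal topic to be creatable,
// which on a one-node cluster means replication factor and min ISR of 1
KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.3.2"))
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", "1")
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", "1");
kafka.start();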

How to make different instances of consumers in the same consumer group consume different shards of the same kinesis stream?

I'm following the example given in spring-cloud-stream-samples with the following modifications.
application.yml
spring:
  cloud:
    stream:
      instanceCount: 2
      bindings:
        produceOrder-out-0:
          destination: test_stream
          content-type: application/json
          producer:
            partitionCount: 2
            partitionSelectorName: eventPartitionSelectorStrategy
            partitionKeyExtractorName: eventPartitionKeyExtractorStrategy
        processOrder-in-0:
          group: eventConsumers
          destination: test_stream
          content-type: application/json
      function:
        definition: processOrder;produceOrder
ProducerConfiguration.java
package demo.config;

import demo.stream.Event;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class ProducerConfiguration {

    private static Logger logger = LoggerFactory.getLogger(ProducerConfiguration.class);

    @Bean
    public PartitionSelectorStrategy eventPartitionSelectorStrategy() {
        return new PartitionSelectorStrategy() {
            @Override
            public int selectPartition(Object key, int partitionCount) {
                if (key instanceof Integer) {
                    int partition = (((Integer) key) % partitionCount + partitionCount) % partitionCount;
                    logger.info("key {} falls into partition {}", key, partition);
                    return partition;
                }
                return 0;
            }
        };
    }

    @Bean
    public PartitionKeyExtractorStrategy eventPartitionKeyExtractorStrategy() {
        return new PartitionKeyExtractorStrategy() {
            @Override
            public Object extractKey(Message<?> message) {
                if (message.getPayload() instanceof Event) {
                    return ((Event) message.getPayload()).hashCode();
                } else {
                    return 0;
                }
            }
        };
    }
}
When I run two instances of this application by setting --spring.cloud.stream.instanceIndex=0 and --spring.cloud.stream.instanceIndex=1, I am able to see the events getting produced. However, only one of the instances is consuming the records, from both partitions; the other instance is not consuming anything, despite the producer creating partitioned records.
Logs seen in KinesisProducer
2022-09-04 00:17:22.628 INFO 34029 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:22.658 INFO 34029 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64398 (http) with context path ''
2022-09-04 00:17:22.723 INFO 34029 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.487 seconds (JVM running for 19.192)
2022-09-04 00:17:23.938 INFO 34029 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-09-04 00:17:55.224 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
2022-09-04 00:17:55.598 INFO 34029 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:17:56.337 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1397835167 falls into partition 1
2022-09-04 00:18:02.047 INFO 34029 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:02.361 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 147530256 falls into partition 0
Logs seen in KinesisConsumer
2022-09-04 00:17:28.050 INFO 34058 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:28.076 INFO 34058 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64399 (http) with context path ''
2022-09-04 00:17:28.116 INFO 34058 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.566 seconds (JVM running for 19.839)
2022-09-04 00:17:29.365 INFO 34058 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=AFTER_SEQUENCE_NUMBER, sequenceNumber='49632927200161141377996226513172299243826807332967284754', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:57.346 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:04.384 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
spring-cloud-stream-binder-kinesis version: 2.2.0
I have the following questions:
For static shard distribution within a single consumer group, is there any other parameter that needs to be configured that I have missed?
Do I need to specify the DynamoDB Checkpoint properties only for dynamic shard distribution?
EDIT
I have added the DEBUG logs seen in KinesisProducer below:
2022-09-07 08:30:38.120 INFO 4993 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=b3927132-a80d-481e-a219-dbd0c0c7d124, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:30:38.806 INFO 4993 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1842629003 falls into partition 1
2022-09-07 08:30:38.812 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, id=9cb8ec58-4a9e-7b6f-4263-c9d4d1eec906, contentType=application/json, timestamp=1662519638809}]
2022-09-07 08:30:38.813 DEBUG 4993 --- [ask-scheduler-3] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:30:38.832 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:35:51.153 INFO 4993 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=6a5b3084-11dc-4080-a80e-61cc73315139, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:35:51.915 INFO 4993 --- [ask-scheduler-5] demo.config.ProducerConfiguration : key 1525662264 falls into partition 0
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, id=115c5421-00f2-286d-de02-0020e9322a17, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.917 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]
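
One thing that may be worth double-checking (an assumption on my part, not a confirmed fix): besides the top-level property, instanceCount and instanceIndex can also be set per binding on the consumer side, and the consumer binding above shows neither. A hedged sketch for the first instance:

spring:
  cloud:
    stream:
      instanceCount: 2
      instanceIndex: 0          # 0 on the first instance, 1 on the second
      bindings:
        processOrder-in-0:
          consumer:
            instanceCount: 2    # per-binding override of the same settings
            instanceIndex: 0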

Auto queue exchange creation with spring cloud function and rabbitmq

We are creating a RabbitMQ consumer using the new Spring Cloud Function library.
However, we find that on startup of the application, the queues and exchanges are not created on the RabbitMQ instance.
Here is our config.
spring:
  cloud:
    function:
      definition: someReceiver
    stream:
      binders:
        rabbit:
          type: rabbit
      bindings:
        someReceiver-in-0:
          consumer:
            max-attemps: 1
            batch-mode: true
          binder: rabbit
          destination: someExhange
          group: someQueue
      default-binder: rabbit
      rabbit:
        bindings:
          someReceiver-in-0:
            consumer:
              acknowledge-mode: MANUAL
              auto-bind-dlq: true
              queue-name-group-only: true
              exchange-type: topic
              max-concurrency: 10
              prefetch: 200
              enable-batching: true
              batch-size: 10
              receive-timeout: 200
              dlq-dead-letter-exchange:
This is our consumer.
@Bean
public Consumer<Message<Long>> someReceiver() {
    return ....
}
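
For context, with batch-mode: true and acknowledge-mode: MANUAL the function receives the whole batch in one Message and has to ack through the AMQP channel headers. A rough illustration of what such a body could look like (purely hypothetical; it does not restore the elided code above, and it assumes the batched payloads convert to Long):

import com.rabbitmq.client.Channel;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.List;
import java.util.function.Consumer;

@Bean
public Consumer<Message<List<Long>>> someReceiver() {
    return message -> {
        // batch-mode: true delivers the batch as one Message with a List payload
        List<Long> payloads = message.getPayload();
        payloads.forEach(p -> System.out.println("received " + p));
        // acknowledge-mode: MANUAL passes the channel and delivery tag as headers
        Channel channel = message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
        Long deliveryTag = message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
        try {
            channel.basicAck(deliveryTag, true); // multiple=true acks the whole batch
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    };
}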
In the logs we can see:
o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel errorChannel
o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel nullChannel
o.s.i.monitor.IntegrationMBeanExporter : Registering MessageHandler _org.springframework.integration.errorLogger
o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
o.s.i.channel.PublishSubscribeChannel : Channel 'X' has 1 subscriber(s).
The problem we are having is that no queue or exchange is being created on the RabbitMQ broker on application startup.
We were expecting a queue named someQueue and an exchange named someExchange to be created on application startup.
Works as designed for me, with a project generated directly from https://start.spring.io:
2021-04-19 12:02:07.476 INFO 28260 --- [ main] o.s.c.s.m.DirectWithAttributesChannel : Channel 'application.someReceiver-in-0' has 1 subscriber(s).
2021-04-19 12:02:07.563 INFO 28260 --- [ main] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel errorChannel
2021-04-19 12:02:07.595 INFO 28260 --- [ main] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel nullChannel
2021-04-19 12:02:07.600 INFO 28260 --- [ main] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel someReceiver-in-0
2021-04-19 12:02:07.613 INFO 28260 --- [ main] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageHandler _org.springframework.integration.errorLogger
2021-04-19 12:02:07.631 INFO 28260 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2021-04-19 12:02:07.631 INFO 28260 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 1 subscriber(s).
2021-04-19 12:02:07.631 INFO 28260 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started bean '_org.springframework.integration.errorLogger'
2021-04-19 12:02:07.632 INFO 28260 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Creating binder: rabbit
2021-04-19 12:02:07.740 INFO 28260 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Caching the binder: rabbit
2021-04-19 12:02:07.740 INFO 28260 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Retrieving cached binder: rabbit
2021-04-19 12:02:07.792 INFO 28260 --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: someQueue, bound to: someExhange
2021-04-19 12:02:07.793 INFO 28260 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2021-04-19 12:02:07.938 INFO 28260 --- [ main] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#244e619a:0/SimpleConnection#73a0f2b [delegate=amqp://guest#127.0.0.1:5672/, localPort= 51143]
2021-04-19 12:02:07.991 INFO 28260 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'someQueue.errors' has 1 subscriber(s).
2021-04-19 12:02:07.991 INFO 28260 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'someQueue.errors' has 2 subscriber(s).
2021-04-19 12:02:08.002 INFO 28260 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started bean 'inbound.someQueue'
2021-04-19 12:02:08.010 INFO 28260 --- [ main] o.s.s.c.s.s.So67160902Application : Started So67160902Application in 1.659 seconds (JVM running for 2.147)
And I can see the queue and exchange in the RabbitMQ Management Console.
You are probably missing some dependency, or something else in your configuration prevents the RabbitMQ binder from doing its work.
These are my dependencies, in the pom generated from https://start.spring.io:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-amqp</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.amqp</groupId>
        <artifactId>spring-rabbit-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
        <scope>test</scope>
        <classifier>test-binder</classifier>
        <type>test-jar</type>
    </dependency>
</dependencies>

Error registering service to eureka server

I am trying to register a client with a Spring Eureka server; the client deregisters just after registering.
eureka-server logs:
2018-05-13 16:02:47.290 INFO 25557 --- [io-9091-exec-10] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status UP (replication=false)
2018-05-13 16:02:47.438 INFO 25557 --- [nio-9091-exec-3] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status DOWN (replication=false)
2018-05-13 16:02:47.457 INFO 25557 --- [nio-9091-exec-2] c.n.e.registry.AbstractInstanceRegistry : Cancelled instance HELLO-CLIENT/192.168.43.96:hello-client:8072 (replication=false)
2018-05-13 16:02:47.950 INFO 25557 --- [nio-9091-exec-5] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status DOWN (replication=true)
2018-05-13 16:02:47.951 INFO 25557 --- [nio-9091-exec-5] c.n.e.registry.AbstractInstanceRegistry : Cancelled instance HELLO-CLIENT/192.168.43.96:hello-client:8072 (replication=true)
2018-05-13 16:03:25.747 INFO 25557 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry : Running the evict task with compensationTime 4ms
Eureka-client logs:
2018-05-13 16:02:47.163 INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072: registering service...
2018-05-13 16:02:47.212 INFO 25676 --- [           main] c.a.helloclient.HelloClientApplication : Started HelloClientApplication in 7.62 seconds (JVM running for 8.573)
2018-05-13 16:02:47.224 INFO 25676 --- [       Thread-5] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@6f7923a5: startup date [Sun May 13 16:02:42 IST 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@5c30a9b0
2018-05-13 16:02:47.226 INFO 25676 --- [       Thread-5] o.s.c.n.e.s.EurekaServiceRegistry : Unregistering application hello-client with eureka with status DOWN
2018-05-13 16:02:47.227 WARN 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1526207567227, current=DOWN, previous=UP]
2018-05-13 16:02:47.232 INFO 25676 --- [       Thread-5] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-13 16:02:47.235 INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Shutting down DiscoveryClient ...
2018-05-13 16:02:47.292 INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - registration status: 204
2018-05-13 16:02:47.423 INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072: registering service...
2018-05-13 16:02:47.440 INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - registration status: 204
2018-05-13 16:02:47.442 INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Unregistering ...
2018-05-13 16:02:47.460 INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - deregister status: 200
2018-05-13 16:02:47.494 INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Completed shut down of DiscoveryClient
2018-05-13 16:02:47.495 INFO 25676 --- [       Thread-5] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-13 16:02:47.498 INFO 25676 --- [       Thread-5] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans
Please let me know what could be possibly wrong.
Add
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
into the Eureka server and the client app.
It really works!
The Eureka client deregisters when an app has been shut down, so check whether there is any other reason why the app is stopping that leads the Eureka client to deregister.
In my case, the application was shutting down because the spring-boot-starter-web dependency was missing: without a web server (or some other non-daemon thread) to keep the JVM alive, the context closes right after startup, which matches the "Closing ... AnnotationConfigApplicationContext" line in the client logs. After resolving this, the application started fine.
Looks like a dependency issue. If the app works fine (the core functionality) without the Eureka integration, then try changing the eureka-client dependency version.
I would suggest you check the following:
Check all the port numbers that you are running
Check for any version issues
Add the web dependency above in your eureka pom.xml (it worked for me in Maven projects)
