How to make different instances of consumers in the same consumer group consume different shards of the same kinesis stream? - spring-boot

I'm following the example given in spring-cloud-stream-samples with the following modifications.
application.yml
spring:
  cloud:
    stream:
      instanceCount: 2
      bindings:
        produceOrder-out-0:
          destination: test_stream
          content-type: application/json
          producer:
            partitionCount: 2
            partitionSelectorName: eventPartitionSelectorStrategy
            partitionKeyExtractorName: eventPartitionKeyExtractorStrategy
        processOrder-in-0:
          group: eventConsumers
          destination: test_stream
          content-type: application/json
    function:
      definition: processOrder;produceOrder
ProducerConfiguration.java
package demo.config;

import demo.stream.Event;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class ProducerConfiguration {

    private static Logger logger = LoggerFactory.getLogger(ProducerConfiguration.class);

    @Bean
    public PartitionSelectorStrategy eventPartitionSelectorStrategy() {
        return new PartitionSelectorStrategy() {
            @Override
            public int selectPartition(Object key, int partitionCount) {
                if (key instanceof Integer) {
                    int partition = (((Integer) key) % partitionCount + partitionCount) % partitionCount;
                    logger.info("key {} falls into partition {}", key, partition);
                    return partition;
                }
                return 0;
            }
        };
    }

    @Bean
    public PartitionKeyExtractorStrategy eventPartitionKeyExtractorStrategy() {
        return new PartitionKeyExtractorStrategy() {
            @Override
            public Object extractKey(Message<?> message) {
                if (message.getPayload() instanceof Event) {
                    return ((Event) message.getPayload()).hashCode();
                } else {
                    return 0;
                }
            }
        };
    }
}
When I run two instances of this application by setting --spring.cloud.stream.instanceIndex=0 and --spring.cloud.stream.instanceIndex=1, I can see the events getting produced. However, only one of the instances is consuming the records, from both partitions; the other instance is not consuming anything, despite the producer creating partitioned records.
Logs seen in KinesisProducer
2022-09-04 00:17:22.628 INFO 34029 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:22.658 INFO 34029 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64398 (http) with context path ''
2022-09-04 00:17:22.723 INFO 34029 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.487 seconds (JVM running for 19.192)
2022-09-04 00:17:23.938 INFO 34029 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-09-04 00:17:55.224 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
2022-09-04 00:17:55.598 INFO 34029 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:17:56.337 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1397835167 falls into partition 1
2022-09-04 00:18:02.047 INFO 34029 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:02.361 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 147530256 falls into partition 0
Logs seen in KinesisConsumer
2022-09-04 00:17:28.050 INFO 34058 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:28.076 INFO 34058 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64399 (http) with context path ''
2022-09-04 00:17:28.116 INFO 34058 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.566 seconds (JVM running for 19.839)
2022-09-04 00:17:29.365 INFO 34058 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=AFTER_SEQUENCE_NUMBER, sequenceNumber='49632927200161141377996226513172299243826807332967284754', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:57.346 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:04.384 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
spring-cloud-stream-binder-kinesis version: 2.2.0
I have the following questions:
For static shard distribution within a single consumer group, is there any other parameter that needs to be configured that I have missed?
Do I need to specify the DynamoDB checkpoint properties only for dynamic shard distribution?
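For reference, Spring Cloud Stream also exposes per-binding consumer instance properties; this is a sketch of how they would be set (I am not certain whether the Kinesis binder requires them in addition to the global instanceCount/instanceIndex):
spring:
  cloud:
    stream:
      bindings:
        processOrder-in-0:
          consumer:
            instanceCount: 2  # total number of application instances
            instanceIndex: 0  # 0 on the first instance, 1 on the second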
EDIT
I have added the DEBUG logs seen in KinesisProducer below:
2022-09-07 08:30:38.120 INFO 4993 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=b3927132-a80d-481e-a219-dbd0c0c7d124, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:30:38.806 INFO 4993 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1842629003 falls into partition 1
2022-09-07 08:30:38.812 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, id=9cb8ec58-4a9e-7b6f-4263-c9d4d1eec906, contentType=application/json, timestamp=1662519638809}]
2022-09-07 08:30:38.813 DEBUG 4993 --- [ask-scheduler-3] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:30:38.832 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:35:51.153 INFO 4993 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=6a5b3084-11dc-4080-a80e-61cc73315139, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:35:51.915 INFO 4993 --- [ask-scheduler-5] demo.config.ProducerConfiguration : key 1525662264 falls into partition 0
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, id=115c5421-00f2-286d-de02-0020e9322a17, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.917 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]

Related

Trace id propagation in Spring Boot 3 with Spring cloud streams and WebFlux

I tried to use Spring Cloud Stream with the Kafka binder, but when I call WebClient in the chain, the trace id is lost.
My flow is 'external service' -> 'functionStream-in' -> 'http call' -> 'functionStream-out' -> 'testStream-in' -> 'testStream-out' -> 'external service'.
After the http call (or maybe not there?) the trace id is no longer propagated, and I don't understand why. If I remove the http call, everything is OK.
I tried to add Hooks.enableAutomaticContextPropagation(); but that didn't help.
I tried to add ContextSnapshot.setThreadLocalsFrom around the http call - same thing.
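The wrapping I tried around the http call looked roughly like this (a simplified sketch, not the exact code; the retrieve()/bodyToMono shape is condensed):
@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient) {
    return flux -> flux
            .flatMap(msg -> Mono.deferContextual(contextView -> {
                // Restore observation thread-locals from the Reactor context
                // before building the WebClient call (this did not help either).
                try (ContextSnapshot.Scope scope = ContextSnapshot.setThreadLocalsFrom(
                        contextView, ObservationThreadLocalAccessor.KEY)) {
                    return webClient.get()
                            .uri("http://localhost:8080/test")
                            .retrieve()
                            .bodyToMono(String.class)
                            .map(body -> MessageBuilder.withPayload(body).build());
                }
            }));
}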
How can I solve it?
Dependencies:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
    implementation 'org.springframework.cloud:spring-cloud-stream'
    implementation 'org.springframework.cloud:spring-cloud-starter-stream-kafka'
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
    implementation "io.projectreactor:reactor-core:3.5.3"
    implementation "io.micrometer:context-propagation:1.0.2"
    implementation "io.micrometer:micrometer-core:1.10.4"
    implementation "io.micrometer:micrometer-tracing:1.0.2"
}
application.yml:
spring:
  cloud.stream:
    kafka.binder:
      enableObservation: true
      headers:
        - b3
    function.definition: functionStream;testStream
    default.producer.useNativeEncoding: true
    bindings:
      functionStream-in-0:
        destination: spring-in
        group: spring-test1
      functionStream-out-0:
        destination: test-in
      testStream-in-0:
        destination: test-in
        group: spring-test2
      testStream-out-0:
        destination: spring-out
  integration:
    management:
      observation-patterns: "*"
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
management:
  tracing:
    enabled: true
    sampling.probability: 1.0
    propagation.type: b3
logging.pattern.level: "%5p [%X{traceId:-},%X{spanId:-}]"
Code:
@Bean
WebClient webClient(final WebClient.Builder builder) {
    return builder.build();
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient, final ObservationRegistry registry) {
    return flux -> flux
            .<Message<String>>handle((msg, sink) -> {
                log.info("functionStream-1");
                sink.next(msg);
            })
            .flatMap(msg -> webClient.get()
                    .uri("http://localhost:8080/test")
                    .exchangeToMono(httpResponse -> httpResponse.bodyToMono(String.class)
                            .map(httpBody -> MessageBuilder.withPayload(httpBody)
                                    .copyHeaders(httpResponse.headers().asHttpHeaders())
                                    .build())
                            .<Message<String>>handle((m, sink) -> {
                                log.info("functionStream-3");
                                sink.next(m);
                            })
                    )
            )
            .handle((msg, sink) -> {
                log.info("functionStream-2");
                sink.next(msg);
            });
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> testStream(final ObservationRegistry registry) {
    return flux -> flux
            .publishOn(Schedulers.boundedElastic())
            .<Message<String>>handle((msg, sink) -> {
                log.info("testStream-1");
                sink.next(msg);
            })
            .map(msg -> MessageBuilder
                    .withPayload(msg.getPayload())
                    .copyHeaders(msg.getHeaders())
                    .build());
}

@Bean
RouterFunction<ServerResponse> router(final ObservationRegistry registry) {
    return route()
            .GET("/test", r -> ServerResponse.ok().body(Mono.deferContextual(contextView -> {
                try (final var scope = ContextSnapshot.setThreadLocalsFrom(contextView, ObservationThreadLocalAccessor.KEY)) {
                    log.info("GET /test");
                }
                return Mono.just("answer");
            }), String.class))
            .build();
}
With this code I get this output:
2023-02-16T17:06:22.111 INFO [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:06:22.166 WARN [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#523fe6a9]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.170 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#545339d8]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.187 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#44400bcc]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.361 INFO [63ee385de15f1061dea076eb06b0d1e0,908f48f8485a4277] 220348 --- [ctor-http-nio-4] com.example.demo.TestApplication : GET /test
2023-02-16T17:06:22.407 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-3
2023-02-16T17:06:22.409 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:06:22.448 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.456 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382456
2023-02-16T17:06:22.477 INFO [,] 220348 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:06:22.512 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,b5babc6bef4e30ca] 220348 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:06:22.539 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.543 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382543
Without the http call I get this output:
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:03:09.615 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189628
2023-02-16T17:03:09.691 INFO [,] 204228 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:03:09.859 INFO [63ee379d924e5645fc1d9e27b8135b48,b92a1a59ffd32d80] 204228 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:03:09.868 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189874

application shutting down in spring boot

In a Spring Boot project, the configuration file contains a task executor whose code goes like this:
@Bean(name = "asyncExec")
public Executor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(50);
    executor.setQueueCapacity(500);
    executor.setThreadNamePrefix("CashFlowThread-");
    executor.initialize();
    return executor;
}
I am deploying an API which downloads from an S3 bucket, creates 4 PDFs and stores them in the target folder. While the API is called, the console shows an error that asyncExec is shutting down.
The stack trace shows:
Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-12-01 17:04:30.174 INFO 3680 --- [nio-5000-exec-2] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-12-01 17:04:30.179 INFO 3680 --- [nio-5000-exec-2] o.s.web.servlet.DispatcherServlet : Completed initialization in 5 ms
2020-12-01 17:04:30.185 INFO 3680 --- [nio-5000-exec-2] com.zaxxer.hikari.HikariDataSource : HikariPool-17 - Starting...
2020-12-01 17:04:35.767 INFO 3680 --- [nio-5000-exec-2] com.zaxxer.hikari.HikariDataSource : HikariPool-17 - Start completed.
File is created!
Successfully obtained bytes from an S3 object
2020-12-01 17:04:43.907 INFO 3680 --- [ Thread-174] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'asyncExec'
2020-12-01 17:04:43.907 INFO 3680 --- [ Thread-174] com.zaxxer.hikari.HikariDataSource : HikariPool-17 - Shutdown initiated...
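The executor is referenced from @Async methods, roughly like this hypothetical sketch (class and method names are invented for illustration, not the real code):
// Hypothetical usage sketch: PDF generation runs on the "asyncExec" pool
// and returns a future the caller can wait on before the request completes.
@Service
public class ReportService {

    @Async("asyncExec")
    public CompletableFuture<byte[]> renderPdf(String s3Key) {
        byte[] pdf = downloadAndRender(s3Key); // real code: fetch from S3, build the PDF
        return CompletableFuture.completedFuture(pdf);
    }

    private byte[] downloadAndRender(String s3Key) {
        return new byte[0]; // placeholder for the actual S3 + PDF logic
    }
}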

Connect limits service to spring cloud config server failed

I would like to ask for your help regarding the question above. I am trying to connect one application (SpringCloudConfigServer) with another application (Limits-service); the Limits-service application has to pick up the SpringCloudConfigServer properties file. When I hit http://localhost:8080/limits I need to get output like {"maximum":888,"minimum":8}, but I'm getting {"maximum":999,"minimum":99}.
This is my stack trace:
2020-04-29 22:51:23.740 INFO 2016 --- [ restartedMain] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
2020-04-29 22:51:25.063 INFO 2016 --- [ restartedMain] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2020-04-29 22:51:25.074 WARN 2016 --- [ restartedMain] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/limits-service/dev": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
2020-04-29 22:51:25.074 INFO 2016 --- [ restartedMain] c.i.m.l.LimitsServiceApplication : The following profiles are active: dev
2020-04-29 22:51:27.792 INFO 2016 --- [ restartedMain] o.s.cloud.context.scope.GenericScope : BeanFactory id=fe72d2a0-4692-3641-b317-8a12c5b9eb59
2020-04-29 22:51:29.065 INFO 2016 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-04-29 22:51:29.099 INFO 2016 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-04-29 22:51:29.099 INFO 2016 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
2020-04-29 22:51:29.349 INFO 2016 --- [ restartedMain] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-04-29 22:51:29.349 INFO 2016 --- [ restartedMain] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4166 ms
2020-04-29 22:51:30.477 INFO 2016 --- [ restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-04-29 22:51:31.571 INFO 2016 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2020-04-29 22:51:31.941 INFO 2016 --- [ restartedMain] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2020-04-29 22:51:32.269 INFO 2016 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-04-29 22:51:32.627 INFO 2016 --- [ restartedMain] c.i.m.l.LimitsServiceApplication : Started LimitsServiceApplication in 12.155 seconds (JVM running for 14.055)
2020-04-29 22:51:39.114 INFO 2016 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-04-29 22:51:39.115 INFO 2016 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-04-29 22:51:39.151 INFO 2016 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 35 ms
Here is my limits-service code:
package com.in28minutes.microservices.limitsservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LimitsServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LimitsServiceApplication.class, args);
    }
}
Controller
package com.in28minutes.microservices.limitsservice;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.in28minutes.microservices.limitsservice.bean.LimitConfiguration;

@RestController
public class LimitsConfigurationController {

    @Autowired
    private Configuration configuration;

    @GetMapping("/limits")
    public LimitConfiguration retrieveLimitsFromConfigurations() {
        return new LimitConfiguration(configuration.getMaximum(),
                configuration.getMinimum());
    }
}
component class
package com.in28minutes.microservices.limitsservice;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties("limits-service")
public class Configuration {

    private int minimum;
    private int maximum;

    public int getMinimum() {
        return minimum;
    }

    public int getMaximum() {
        return maximum;
    }

    public void setMinimum(int minimum) {
        this.minimum = minimum;
    }

    public void setMaximum(int maximum) {
        this.maximum = maximum;
    }
}
Bean class
package com.in28minutes.microservices.limitsservice.bean;
public class LimitConfiguration {
private int maximum;
private int minimum;
protected LimitConfiguration() {
}
public LimitConfiguration(int maximum, int minimum) {
super();
this.maximum = maximum;
this.minimum = minimum;
}
public int getMaximum() {
return maximum;
}
public int getMinimum() {
return minimum;
}
}
bootstrap.properties
spring.application.name=limits-service
spring.cloud.config.server.uri=http://localhost:8888
/git-localconfig-repo/limits-service.properties
limits-service.minimum=8
limits-service.maximum=888
Please help me figure out how to fix this.
It looks like your config server call fails. That is why you are not able to load properties from the config server and fall back to your service's local config file:
Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/limits-service/dev": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
2020-04-29 22:51:25.074 INFO 2016 --- [ restartedMain] c.i.m.l.LimitsServiceApplication : The following profiles are active: dev
In your /git-localconfig-repo/limits-service.properties, add spring.cloud.config.server.git.uri=file:/git-localconfig-repo/limits-service
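Separately, one thing worth double-checking (my note, not part of the original setup): the standard client-side property for locating a config server is spring.cloud.config.uri, not spring.cloud.config.server.uri, so a minimal client bootstrap.properties would look like:
spring.application.name=limits-service
spring.cloud.config.uri=http://localhost:8888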

Why Spring Integration QueueChannel runs sequentially with the delayed delivery message in kafka

When using the Kafka integration and configuring a QueueChannel, the messages received after the queue channel are processed sequentially, one second apart, and I cannot understand the reason. The queue channel should accumulate messages (up to the configured limit) and release them as long as the queue is not empty and there is a consumer.
Why are messages released sequentially with a delay of one second?
The log follows. As can be seen, the messages are received immediately (according to the log timestamps) but are processed sequentially with a delay of one second.
2020-04-06 13:08:28.108 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 2 - enriched
2020-04-06 13:08:28.109 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 2 - enriched
2020-04-06 13:08:28.110 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 7 - enriched
2020-04-06 13:08:28.111 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 5 - enriched
2020-04-06 13:08:28.116 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 6 - enriched
2020-04-06 13:08:28.119 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 4 - enriched
2020-04-06 13:08:28.120 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 1 - enriched
2020-04-06 13:08:28.121 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 8 - enriched
2020-04-06 13:08:28.122 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 3 - enriched
2020-04-06 13:08:28.123 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 9 - enriched
2020-04-06 13:08:28.124 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 10 - enriched
2020-04-06 13:08:29.111 INFO 30718 --- [ask-scheduler-2] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 7 - enriched
2020-04-06 13:08:30.112 INFO 30718 --- [ask-scheduler-4] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 5 - enriched
2020-04-06 13:08:31.112 INFO 30718 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 6 - enriched
2020-04-06 13:08:32.113 INFO 30718 --- [ask-scheduler-5] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 4 - enriched
2020-04-06 13:08:33.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 1 - enriched
2020-04-06 13:08:34.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 8 - enriched
2020-04-06 13:08:35.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 3 - enriched
2020-04-06 13:08:36.114 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 9 - enriched
2020-04-06 13:08:37.114 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 10 - enriched
package br.com.gubee.kafaexample

import org.apache.kafka.clients.admin.NewTopic
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.MediaType
import org.springframework.integration.annotation.Gateway
import org.springframework.integration.annotation.MessagingGateway
import org.springframework.integration.config.EnableIntegration
import org.springframework.integration.context.IntegrationContextUtils
import org.springframework.integration.dsl.IntegrationFlow
import org.springframework.integration.dsl.IntegrationFlows
import org.springframework.integration.kafka.dsl.Kafka
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.listener.ContainerProperties
import org.springframework.scheduling.annotation.Async
import org.springframework.stereotype.Component
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

@RestController
@RequestMapping(path = ["/testKafka"], produces = [MediaType.APPLICATION_JSON_VALUE])
class TestKafkaResource(private val testKafkaGateway: TestKafkaGateway) {

    @GetMapping("init/{param}")
    fun init(@PathVariable("param", required = false) param: String? = null) {
        (1..10).forEach {
            println("Send async item $it")
            testKafkaGateway.init("item: $it")
        }
    }
}

@MessagingGateway(errorChannel = IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
@Component
interface TestKafkaGateway {

    @Gateway(requestChannel = "publishKafkaChannel")
    @Async
    fun init(param: String)
}

@Configuration
@EnableIntegration
class TestKafkaFlow(private val kafkaTemplate: KafkaTemplate<*, *>,
                    private val consumerFactory: ConsumerFactory<*, *>) {

    @Bean
    fun readKafkaChannelTopic(): NewTopic {
        return NewTopic("readKafkaChannel", 40, 1)
    }

    @Bean
    fun publishKafka(): IntegrationFlow {
        return IntegrationFlows
            .from("publishKafkaChannel")
            .transform<String, String> { "${it} - enriched" }
            .handle(
                Kafka.outboundChannelAdapter(kafkaTemplate)
                    .topic("readKafkaChannel")
                    .sendFailureChannel(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
            )
            .get()
    }

    @Bean
    fun readFromKafka(): IntegrationFlow {
        return IntegrationFlows
            .from(
                Kafka.messageDrivenChannelAdapter(consumerFactory, "readKafkaChannel")
                    .configureListenerContainer { kafkaMessageListenerContainer ->
                        kafkaMessageListenerContainer.concurrency(2)
                        kafkaMessageListenerContainer.ackMode(ContainerProperties.AckMode.RECORD)
                    }
                    .errorChannel(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
            )
            .channel { c -> c.queue(10) }
            .log<String> {
                "readKafkaChannel: ${it.payload}"
            }
            .channel("channelThatIsProcessingSequential")
            .get()
    }

    @Bean
    fun kafkaFlowAfter(): IntegrationFlow {
        return IntegrationFlows
            .from("channelThatIsProcessingSequential")
            .log<String> {
                "channelThatIsProcessingSequential - ${it.payload}"
            }
            .get()
    }
}
As Gary said, it is not good to shift Kafka messages into a QueueChannel. Consumption on the Kafka.messageDrivenChannelAdapter() is already async - there is really no reason to move messages to a separate thread.
Anyway, it looks like you are using Spring Cloud Stream, whose PollerMetadata is configured with a one-message-per-second polling policy.
If that doesn't fit your requirements, you can always change that .channel { c -> c.queue(10) } to use a second lambda and configure a custom poller there.
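In Java DSL terms, a rough (untested) sketch of draining the queue channel through a bridge endpoint that carries its own poller:
@Bean
public IntegrationFlow readFromKafka(ConsumerFactory<Object, Object> consumerFactory) {
    return IntegrationFlows
            .from(Kafka.messageDrivenChannelAdapter(consumerFactory, "readKafkaChannel"))
            .channel(c -> c.queue(10))
            // Give the endpoint that drains the queue its own poller:
            // every 100 ms, up to 10 messages per poll, instead of the default.
            .bridge(e -> e.poller(Pollers.fixedDelay(100).maxMessagesPerPoll(10)))
            .channel("channelThatIsProcessingSequential")
            .get();
}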
BTW, we already have some Kotlin DSL implementation in Spring Integration: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/kotlin-dsl.html#kotlin-dsl

Why Spring Boot Application logs that it started twice after adding spring-cloud-bus dependency

This is the simple code in my Spring Boot application:
package com.maxxton.SpringBootHelloWorld;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootHelloWorldApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootHelloWorldApplication.class, args);
    }
}
And an ApplicationListener class to listen for ApplicationEvent:
package com.maxxton.SpringBootHelloWorld;

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class Test implements ApplicationListener {

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event.getClass().getSimpleName().equals("ApplicationReadyEvent")) {
            System.out.println("-------------------------------------");
            System.out.println(event.getClass().getSimpleName());
            System.out.println("-------------------------------------");
        }
    }
}
build.gradle contains these dependencies:
dependencies {
    compile("org.springframework.boot:spring-boot-starter-amqp")
    compile("org.springframework.cloud:spring-cloud-starter-bus-amqp")
    compile('org.springframework.boot:spring-boot-starter-web')
    compile('org.springframework.boot:spring-boot-starter')
    compile("org.springframework.cloud:spring-cloud-starter")
    compile("org.springframework.cloud:spring-cloud-starter-security")
    compile("org.springframework.cloud:spring-cloud-starter-eureka")
    testCompile('org.springframework.boot:spring-boot-starter-test')
}
Now, when I run this spring boot application, I see this log printed twice:
[main] c.m.S.SpringBootHelloWorldApplication : Started SpringBootHelloWorldApplication in ... seconds (JVM running for ...)
Usually this log gets printed only once, but it gets printed twice if I add these dependencies:
compile("org.springframework.boot:spring-boot-starter-amqp")
compile("org.springframework.cloud:spring-cloud-starter-bus-amqp")
This is the complete log:
2017-11-17 15:44:07.372 INFO 5976 --- [ main] o.s.c.support.GenericApplicationContext : Refreshing org.springframework.context.support.GenericApplicationContext#31c7c281: startup date [Fri Nov 17 15:44:07 IST 2017]; root of context hierarchy
-------------------------------------
ApplicationReadyEvent
-------------------------------------
2017-11-17 15:44:07.403 INFO 5976 --- [ main] c.m.S.SpringBootHelloWorldApplication : Started SpringBootHelloWorldApplication in 1.19 seconds (JVM running for 10.231)
2017-11-17 15:44:09.483 WARN 5976 --- [ main] o.s.amqp.rabbit.core.RabbitAdmin : Failed to declare exchange: Exchange [name=springCloudBus, type=topic, durable=true, autoDelete=false, internal=false, arguments={}], continuing... org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-11-17 15:44:09.492 INFO 5976 --- [ main] o.s.integration.channel.DirectChannel : Channel 'a-bootiful-client.springCloudBusOutput' has 1 subscriber(s).
2017-11-17 15:44:09.493 INFO 5976 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2017-11-17 15:44:09.530 INFO 5976 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2017-11-17 15:44:09.530 INFO 5976 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'a-bootiful-client.errorChannel' has 1 subscriber(s).
2017-11-17 15:44:09.530 INFO 5976 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2017-11-17 15:44:09.530 INFO 5976 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147482647
2017-11-17 15:44:09.539 INFO 5976 --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: springCloudBus.anonymous.kZ1vvxHaRfChKe1TncH-MQ, bound to: springCloudBus
2017-11-17 15:44:11.562 WARN 5976 --- [ main] o.s.amqp.rabbit.core.RabbitAdmin : Failed to declare exchange: Exchange [name=springCloudBus, type=topic, durable=true, autoDelete=false, internal=false, arguments={}], continuing... org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-11-17 15:44:13.587 WARN 5976 --- [ main] o.s.amqp.rabbit.core.RabbitAdmin : Failed to declare queue: Queue [name=springCloudBus.anonymous.kZ1vvxHaRfChKe1TncH-MQ, durable=false, autoDelete=true, exclusive=true, arguments={}], continuing... org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-11-17 15:44:15.611 WARN 5976 --- [ main] o.s.amqp.rabbit.core.RabbitAdmin : Failed to declare binding: Binding [destination=springCloudBus.anonymous.kZ1vvxHaRfChKe1TncH-MQ, exchange=springCloudBus, routingKey=#], continuing... org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-11-17 15:44:17.662 INFO 5976 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started inbound.springCloudBus.anonymous.kZ1vvxHaRfChKe1TncH-MQ
2017-11-17 15:44:17.662 INFO 5976 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {message-handler:inbound.springCloudBus.default} as a subscriber to the 'bridge.springCloudBus' channel
2017-11-17 15:44:17.662 INFO 5976 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started inbound.springCloudBus.default
2017-11-17 15:44:17.663 INFO 5976 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2017-11-17 15:44:17.714 INFO 5976 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
-------------------------------------
ApplicationReadyEvent
-------------------------------------
2017-11-17 15:44:17.717 INFO 5976 --- [ main] c.m.S.SpringBootHelloWorldApplication : Started SpringBootHelloWorldApplication in 20.131 seconds (JVM running for 20.545)
As you can see, ApplicationReadyEvent is happening twice.
Why is this happening?
Is there any way to avoid this?
spring-cloud-bus uses spring-cloud-stream which puts the binder in a separate boot child application context.
You should make your event listener aware of the application context it is running in. You can also use generics to select the event type you are interested in...
@Component
public class Test implements ApplicationListener<ApplicationReadyEvent>, ApplicationContextAware {

    private ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        if (event.getApplicationContext().equals(this.applicationContext)) {
            System.out.println("-------------------------------------");
            System.out.println(event.getClass().getSimpleName());
            System.out.println("-------------------------------------");
        }
    }
}
Are you using a multiple-binders RabbitMQ configuration in your application.yml/.xml?
If yes, you can try to exclude RabbitAutoConfiguration:
@EnableDiscoveryClient
@EnableAutoConfiguration(exclude = {RabbitAutoConfiguration.class})
@SpringBootApplication
public class SpringBootHelloWorldApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootHelloWorldApplication.class, args);
    }
}
