Spring WebFlux throws IOException for SSE

I've been trying to implement Server-Sent Events using Spring WebFlux (2.1.1.RELEASE) and consume them in a JavaScript app (Angular 7).
The problem is that any time I call the .close() method on the EventSource on the client, the server throws:
Error [java.io.IOException: An established connection was aborted by the software in your host machine] for HTTP GET "/price", but ServerHttpResponse already committed (200 OK)
The code is pretty straightforward:
@RestController("/price")
public class PriceController {

    private final PriceProvider priceProvider;

    public PriceController(PriceProvider priceProvider) {
        this.priceProvider = priceProvider;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Price> randomPrices() {
        return priceProvider.getPrices().log();
    }
}
The Flux is created as follows:
Flux.interval(Duration.ofSeconds(1)).map(i -> randomPrice());
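For context, a self-contained sketch of such a provider could look like the one below; the Price class and the currency list are illustrative assumptions, not the original code:

import java.time.Duration;
import java.util.List;
import java.util.Random;

import org.springframework.stereotype.Component;
import reactor.core.publisher.Flux;

@Component
public class PriceProvider {

    // Illustrative currencies only; the real provider may differ.
    private static final List<String> CURRENCIES = List.of("EUR", "PLN", "CHF");
    private final Random random = new Random();

    public Flux<Price> getPrices() {
        // Emits one random price per second for as long as the client stays subscribed.
        return Flux.interval(Duration.ofSeconds(1)).map(i -> randomPrice());
    }

    private Price randomPrice() {
        String currency = CURRENCIES.get(random.nextInt(CURRENCIES.size()));
        return new Price(currency, random.nextDouble() * 100);
    }
}

// Simple value type so Jackson can serialize each SSE payload.
class Price {
    private final String currency;
    private final double rate;

    Price(String currency, double rate) {
        this.currency = currency;
        this.rate = rate;
    }

    public String getCurrency() { return currency; }
    public double getRate() { return rate; }
}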
On the client side I tried both the native EventSource and the polyfills, with the same result every time. Here's the output:
2018-12-03 17:29:59.388 INFO 15080 --- [ restartedMain] c.m.t.sseserver.SseServerApplication : Started SseServerApplication in 1.844 seconds (JVM running for 2.874)
2018-12-03 17:30:10.519 INFO 15080 --- [ctor-http-nio-2] reactor.Flux.OnAssembly.1 : | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
2018-12-03 17:30:10.519 INFO 15080 --- [ctor-http-nio-2] reactor.Flux.OnAssembly.1 : | request(1)
2018-12-03 17:30:11.522 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=EUR, rate=30.314275239679823))
2018-12-03 17:30:11.565 INFO 15080 --- [ctor-http-nio-2] reactor.Flux.OnAssembly.1 : | request(31)
2018-12-03 17:30:12.521 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=PLN, rate=41.7888937560866))
2018-12-03 17:30:13.521 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=CHF, rate=89.64097216739523))
2018-12-03 17:30:14.521 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=EUR, rate=87.5498883139903))
2018-12-03 17:30:15.521 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=PLN, rate=28.019190555534855))
2018-12-03 17:30:16.521 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=PLN, rate=78.07885390201281))
2018-12-03 17:30:17.522 INFO 15080 --- [ parallel-1] reactor.Flux.OnAssembly.1 : | onNext(Price(currency=EUR, rate=95.44618060483998))
2018-12-03 17:30:17.534 INFO 15080 --- [ctor-http-nio-2] reactor.Flux.OnAssembly.1 : | cancel()
2018-12-03 17:30:17.549 ERROR 15080 --- [ctor-http-nio-2] o.s.w.s.adapter.HttpWebHandlerAdapter : [e24fb0e9] Error [java.io.IOException: An established connection was aborted by the software in your host machine] for HTTP GET "/price", but ServerHttpResponse already committed (200 OK)
Despite this the program seems to behave correctly, but my logs are full of these ugly errors. Is there any way to fix it, or at least swallow the exception?

This looks a lot like SPR-17257. In this case we're getting an IOException, and it's hard to tell whether it means the client went away or a remote call failed when you're streaming data from another server.
SPR-17341 will try to address that in the Spring Framework 5.2 release, to be included in Spring Boot 2.2.
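Until that improvement lands, a common interim workaround is simply to silence the logger that emits the message. A minimal sketch in application.properties, assuming the default Spring Boot logging setup (and accepting that other errors from the same adapter are hidden too):

# The message above is logged by HttpWebHandlerAdapter, so turning its logger off
# removes the "connection aborted" noise; keep it at ERROR instead if you still
# want to see other failures reported by this class.
logging.level.org.springframework.web.server.adapter.HttpWebHandlerAdapter=OFF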

Related

Trace id propagation in Spring Boot 3 with Spring cloud streams and WebFlux

I tried to use Spring Cloud Stream with the Kafka binder, but when I call WebClient in the chain, the trace id is lost.
My flow is 'external service' -> 'functionStream-in' -> 'http call' -> 'functionStream-out' -> 'testStream-in' -> 'testStream-out' -> 'external service'.
After the HTTP call (or somewhere around it?) the trace id is no longer propagated, and I don't understand why. If I remove the HTTP call, everything is OK.
I tried adding Hooks.enableAutomaticContextPropagation();, but that didn't help.
I tried wrapping the HTTP call with ContextSnapshot.setThreadLocalsFrom - same thing.
How can I solve it?
Dependencies:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
    implementation 'org.springframework.cloud:spring-cloud-stream'
    implementation 'org.springframework.cloud:spring-cloud-starter-stream-kafka'
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
    implementation "io.projectreactor:reactor-core:3.5.3"
    implementation "io.micrometer:context-propagation:1.0.2"
    implementation "io.micrometer:micrometer-core:1.10.4"
    implementation "io.micrometer:micrometer-tracing:1.0.2"
}
application.yml:
spring:
  cloud.stream:
    kafka.binder:
      enableObservation: true
      headers:
        - b3
    function.definition: functionStream;testStream
    default.producer.useNativeEncoding: true
    bindings:
      functionStream-in-0:
        destination: spring-in
        group: spring-test1
      functionStream-out-0:
        destination: test-in
      testStream-in-0:
        destination: test-in
        group: spring-test2
      testStream-out-0:
        destination: spring-out
  integration:
    management:
      observation-patterns: "*"
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

management:
  tracing:
    enabled: true
    sampling.probability: 1.0
    propagation.type: b3

logging.pattern.level: "%5p [%X{traceId:-},%X{spanId:-}]"
Code:
@Bean
WebClient webClient(final WebClient.Builder builder) {
    return builder.build();
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient, final ObservationRegistry registry) {
    return flux -> flux
            .<Message<String>>handle((msg, sink) -> {
                log.info("functionStream-1");
                sink.next(msg);
            })
            .flatMap(msg -> webClient.get()
                    .uri("http://localhost:8080/test")
                    .exchangeToMono(httpResponse -> httpResponse.bodyToMono(String.class)
                            .map(httpBody -> MessageBuilder.withPayload(httpBody)
                                    .copyHeaders(httpResponse.headers().asHttpHeaders())
                                    .build())
                            .<Message<String>>handle((m, sink) -> {
                                log.info("functionStream-3");
                                sink.next(m);
                            })
                    )
            )
            .handle((msg, sink) -> {
                log.info("functionStream-2");
                sink.next(msg);
            });
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> testStream(final ObservationRegistry registry) {
    return flux -> flux
            .publishOn(Schedulers.boundedElastic())
            .<Message<String>>handle((msg, sink) -> {
                log.info("testStream-1");
                sink.next(msg);
            })
            .map(msg -> MessageBuilder
                    .withPayload(msg.getPayload())
                    .copyHeaders(msg.getHeaders())
                    .build());
}

@Bean
RouterFunction<ServerResponse> router(final ObservationRegistry registry) {
    return route()
            .GET("/test", r -> ServerResponse.ok().body(Mono.deferContextual(contextView -> {
                try (final var scope = ContextSnapshot.setThreadLocalsFrom(contextView, ObservationThreadLocalAccessor.KEY)) {
                    log.info("GET /test");
                }
                return Mono.just("answer");
            }), String.class))
            .build();
}
With this code I get the following output:
2023-02-16T17:06:22.111 INFO [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:06:22.166 WARN [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#523fe6a9]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.170 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#545339d8]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.187 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#44400bcc]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.361 INFO [63ee385de15f1061dea076eb06b0d1e0,908f48f8485a4277] 220348 --- [ctor-http-nio-4] com.example.demo.TestApplication : GET /test
2023-02-16T17:06:22.407 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-3
2023-02-16T17:06:22.409 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:06:22.448 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.456 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382456
2023-02-16T17:06:22.477 INFO [,] 220348 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:06:22.512 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,b5babc6bef4e30ca] 220348 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:06:22.539 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.543 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382543
Without the HTTP call I get this output:
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:03:09.615 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189628
2023-02-16T17:03:09.691 INFO [,] 204228 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:03:09.859 INFO [63ee379d924e5645fc1d9e27b8135b48,b92a1a59ffd32d80] 204228 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:03:09.868 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189874

How to make different instances of consumers in the same consumer group consume different shards of the same kinesis stream?

I'm following the example given in spring-cloud-stream-samples with the following modifications.
application.yml
spring:
  cloud:
    stream:
      instanceCount: 2
      bindings:
        produceOrder-out-0:
          destination: test_stream
          content-type: application/json
          producer:
            partitionCount: 2
            partitionSelectorName: eventPartitionSelectorStrategy
            partitionKeyExtractorName: eventPartitionKeyExtractorStrategy
        processOrder-in-0:
          group: eventConsumers
          destination: test_stream
          content-type: application/json
      function:
        definition: processOrder;produceOrder
ProducerConfiguration.java
package demo.config;

import demo.stream.Event;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class ProducerConfiguration {

    private static Logger logger = LoggerFactory.getLogger(ProducerConfiguration.class);

    @Bean
    public PartitionSelectorStrategy eventPartitionSelectorStrategy() {
        return new PartitionSelectorStrategy() {
            @Override
            public int selectPartition(Object key, int partitionCount) {
                if (key instanceof Integer) {
                    int partition = (((Integer) key) % partitionCount + partitionCount) % partitionCount;
                    logger.info("key {} falls into partition {}", key, partition);
                    return partition;
                }
                return 0;
            }
        };
    }

    @Bean
    public PartitionKeyExtractorStrategy eventPartitionKeyExtractorStrategy() {
        return new PartitionKeyExtractorStrategy() {
            @Override
            public Object extractKey(Message<?> message) {
                if (message.getPayload() instanceof Event) {
                    return ((Event) message.getPayload()).hashCode();
                } else {
                    return 0;
                }
            }
        };
    }
}
When I run two instances of this application by setting --spring.cloud.stream.instanceIndex=0 and --spring.cloud.stream.instanceIndex=1, I can see the events getting produced. However, only one of the instances consumes records, and it consumes from both partitions; the other instance consumes nothing, even though the producer is creating partitioned records.
Logs seen in KinesisProducer
2022-09-04 00:17:22.628 INFO 34029 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:22.658 INFO 34029 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64398 (http) with context path ''
2022-09-04 00:17:22.723 INFO 34029 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.487 seconds (JVM running for 19.192)
2022-09-04 00:17:23.938 INFO 34029 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-09-04 00:17:55.224 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
2022-09-04 00:17:55.598 INFO 34029 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:17:56.337 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1397835167 falls into partition 1
2022-09-04 00:18:02.047 INFO 34029 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:02.361 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 147530256 falls into partition 0
Logs seen in KinesisConsumer
2022-09-04 00:17:28.050 INFO 34058 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:28.076 INFO 34058 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64399 (http) with context path ''
2022-09-04 00:17:28.116 INFO 34058 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.566 seconds (JVM running for 19.839)
2022-09-04 00:17:29.365 INFO 34058 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=AFTER_SEQUENCE_NUMBER, sequenceNumber='49632927200161141377996226513172299243826807332967284754', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:57.346 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:04.384 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
spring-cloud-stream-binder-kinesis version : 2.2.0
I have the following questions:
For static shard distribution within a single consumer group, is there any other parameter that needs to be configured that I have missed?
Do I need to specify the DynamoDB checkpoint properties only for dynamic shard distribution?
EDIT
I have added the DEBUG logs seen in KinesisProducer below:
2022-09-07 08:30:38.120 INFO 4993 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=b3927132-a80d-481e-a219-dbd0c0c7d124, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:30:38.806 INFO 4993 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1842629003 falls into partition 1
2022-09-07 08:30:38.812 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, id=9cb8ec58-4a9e-7b6f-4263-c9d4d1eec906, contentType=application/json, timestamp=1662519638809}]
2022-09-07 08:30:38.813 DEBUG 4993 --- [ask-scheduler-3] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:30:38.832 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:35:51.153 INFO 4993 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=6a5b3084-11dc-4080-a80e-61cc73315139, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:35:51.915 INFO 4993 --- [ask-scheduler-5] demo.config.ProducerConfiguration : key 1525662264 falls into partition 0
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, id=115c5421-00f2-286d-de02-0020e9322a17, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.917 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]

How to control parallelism of Flux.flatMap (Mono)?

The code below executes all web requests (webClient) in parallel, not respecting the limit I put in parallel(5).
Flux.fromIterable(dataListWithHundredsElements)
    .parallel(5).runOn(Schedulers.boundedElastic())
    .flatMap(element ->
        webClient.post()
            .bodyValue(element)
            .retrieve()
            .bodyToMono(String.class)
            .doOnError(err -> element.setError(Utils.toString(err)))
            .doOnSuccess(r -> element.setResponse(r))
    )
    .sequential()
    .onErrorContinue((e, v) -> {})
    .doOnComplete(() -> updateInDatabase(dataListWithHundredsElements))
    .subscribe();
I would like to know whether it is possible to execute requests according to the value specified in parallel(5), and how best to do that.
One detail: this code is in a Spring MVC application that makes requests to an external service.
UPDATE 01
In fact Flux does create the 5 threads; however, all requests (the WebClient Monos) are executed at the same time.
What I want is to have 5 requests executed at a time, so that when one request ends another one starts, but at no point should there be more than 5 requests in parallel.
Since Mono is also a reactive type, it seems that the 5 Flux threads just subscribe to it without blocking, so in practice all requests happen in parallel.
UPDATE 02 - External Service Logs
This is the log of the external service, which takes about 5 seconds to respond. As you can see below, 14 requests arrive at the same time.
2020-05-08 11:53:56.655 INFO 28223 --- [nio-8080-exec-8] EXTERNAL SERVICE LOG {"id": 21} http-nio-8080-exec-8
2020-05-08 11:53:56.655 INFO 28223 --- [nio-8080-exec-7] EXTERNAL SERVICE LOG {"id": 20} http-nio-8080-exec-7
2020-05-08 11:53:56.659 INFO 28223 --- [nio-8080-exec-2] EXTERNAL SERVICE LOG {"id": 27} http-nio-8080-exec-2
2020-05-08 11:53:56.659 INFO 28223 --- [nio-8080-exec-6] EXTERNAL SERVICE LOG {"id": 19} http-nio-8080-exec-6
2020-05-08 11:53:56.659 INFO 28223 --- [io-8080-exec-10] EXTERNAL SERVICE LOG {"id": 23} http-nio-8080-exec-10
2020-05-08 11:53:56.660 INFO 28223 --- [nio-8080-exec-5] EXTERNAL SERVICE LOG {"id": 18} http-nio-8080-exec-5
2020-05-08 11:53:56.660 INFO 28223 --- [nio-8080-exec-9] EXTERNAL SERVICE LOG {"id": 17} http-nio-8080-exec-9
2020-05-08 11:53:56.660 INFO 28223 --- [nio-8080-exec-1] EXTERNAL SERVICE LOG {"id": 29} http-nio-8080-exec-1
2020-05-08 11:53:56.661 INFO 28223 --- [nio-8080-exec-4] EXTERNAL SERVICE LOG {"id": 24} http-nio-8080-exec-4
2020-05-08 11:53:56.666 INFO 28223 --- [io-8080-exec-11] EXTERNAL SERVICE LOG {"id": 25} http-nio-8080-exec-11
2020-05-08 11:53:56.675 INFO 28223 --- [io-8080-exec-13] EXTERNAL SERVICE LOG {"id": 42} http-nio-8080-exec-13
2020-05-08 11:53:56.678 INFO 28223 --- [io-8080-exec-14] EXTERNAL SERVICE LOG {"id": 28} http-nio-8080-exec-14
2020-05-08 11:53:56.680 INFO 28223 --- [io-8080-exec-12] EXTERNAL SERVICE LOG {"id": 26} http-nio-8080-exec-12
2020-05-08 11:53:56.686 INFO 28223 --- [io-8080-exec-15] EXTERNAL SERVICE LOG {"id": 22} http-nio-8080-exec-15
UPDATE 03 - Reactor Logs
To reinforce the point: the external service takes about 5 seconds to respond, yet all 14 requests are made at almost the same time.
2020-05-08 11:53:56.051 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
2020-05-08 11:53:56.053 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : request(unbounded)
2020-05-08 11:53:56.081 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
2020-05-08 11:53:56.081 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : request(unbounded)
2020-05-08 11:53:56.082 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
2020-05-08 11:53:56.082 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : request(unbounded)
2020-05-08 11:53:56.093 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
2020-05-08 11:53:56.093 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : request(unbounded)
2020-05-08 11:53:56.094 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : onSubscribe([Fuseable] FluxPublishOn.PublishOnSubscriber)
2020-05-08 11:53:56.095 INFO 28223 --- [nio-8080-exec-1] reactor.Parallel.RunOn.1 : request(unbounded)
2020-05-08 11:53:56.110 INFO 28223 --- [oundedElastic-1] reactor.Parallel.RunOn.1 : onNext(#40ddcd53)
2020-05-08 11:53:56.112 INFO 28223 --- [oundedElastic-5] reactor.Parallel.RunOn.1 : onNext(#200e0819)
2020-05-08 11:53:56.112 INFO 28223 --- [oundedElastic-2] reactor.Parallel.RunOn.1 : onNext(#3b81eee2)
2020-05-08 11:53:56.113 INFO 28223 --- [oundedElastic-3] reactor.Parallel.RunOn.1 : onNext(#60af2a4d)
2020-05-08 11:53:56.115 INFO 28223 --- [oundedElastic-4] reactor.Parallel.RunOn.1 : onNext(#723db553)
2020-05-08 11:53:56.440 INFO 28223 --- [oundedElastic-2] reactor.Parallel.RunOn.1 : onNext(#387743b5)
2020-05-08 11:53:56.440 INFO 28223 --- [oundedElastic-3] reactor.Parallel.RunOn.1 : onNext(#62ed2f8d)
2020-05-08 11:53:56.440 INFO 28223 --- [oundedElastic-5] reactor.Parallel.RunOn.1 : onNext(#1a40554a)
2020-05-08 11:53:56.442 INFO 28223 --- [oundedElastic-3] reactor.Parallel.RunOn.1 : onNext(#1bcb696a)
2020-05-08 11:53:56.440 INFO 28223 --- [oundedElastic-4] reactor.Parallel.RunOn.1 : onNext(#46c98823)
2020-05-08 11:53:56.443 INFO 28223 --- [oundedElastic-3] reactor.Parallel.RunOn.1 : onComplete()
2020-05-08 11:53:56.446 INFO 28223 --- [oundedElastic-5] reactor.Parallel.RunOn.1 : onComplete()
2020-05-08 11:53:56.442 INFO 28223 --- [oundedElastic-2] reactor.Parallel.RunOn.1 : onNext(#1c0da4a)
2020-05-08 11:53:56.448 INFO 28223 --- [oundedElastic-2] reactor.Parallel.RunOn.1 : onComplete()
2020-05-08 11:53:56.452 INFO 28223 --- [oundedElastic-4] reactor.Parallel.RunOn.1 : onNext(#14d54d26)
2020-05-08 11:53:56.453 INFO 28223 --- [oundedElastic-4] reactor.Parallel.RunOn.1 : onComplete()
2020-05-08 11:53:56.490 INFO 28223 --- [oundedElastic-1] reactor.Parallel.RunOn.1 : onNext(#46e43af)
2020-05-08 11:53:56.492 INFO 28223 --- [oundedElastic-1] reactor.Parallel.RunOn.1 : onNext(#5ca02355)
2020-05-08 11:53:56.496 INFO 28223 --- [oundedElastic-1] reactor.Parallel.RunOn.1 : onComplete()
You could use the ParallelFlux#flatMap(Function<? super T,? extends Publisher<? extends R>>, boolean, int) method to control concurrency.
For your situation it could be:
.flatMap(element ->
    webClient.post()
        .bodyValue(element)
        .retrieve()
        .bodyToMono(String.class)
        .doOnError(err -> element.setError(Utils.toString(err)))
        .doOnSuccess(r -> element.setResponse(r)),
    false, 1
)
But, actually, you don't have to create a ParallelFlux at all. Just use the Flux#flatMap(Function<? super T,? extends Publisher<? extends V>>, int) method:
Flux.fromIterable(dataListWithHundredsElements)
    .flatMap(element -> webclient.post()..., 5)
    ...
The second argument of the flatMap method is responsible for concurrency.
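For completeness, here is a minimal, self-contained sketch of that Flux#flatMap(mapper, concurrency) approach; the base URL, path and element type are placeholders, not taken from the question:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ConcurrencyDemo {

    public static void main(String[] args) {
        WebClient webClient = WebClient.create("http://localhost:8080"); // placeholder base URL
        List<Integer> elements = IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toList());

        Flux.fromIterable(elements)
                // At most 5 inner publishers (HTTP calls) are subscribed at any moment;
                // a new call starts only once one of the 5 in flight completes.
                .flatMap(element -> webClient.post()
                        .uri("/endpoint") // placeholder path
                        .bodyValue(element)
                        .retrieve()
                        .bodyToMono(String.class)
                        .onErrorResume(err -> Mono.empty()), 5)
                .blockLast(); // blocking only for this standalone demo
    }
}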

Why is my web service forbidding my connections when using .hasIpAddress()?

I've definitely searched Stack Exchange for this already; problems with hasIpAddress often seem unique.
I believe I understand the route of my request to my server.
User -> Zuul -> My web service
http.authorizeRequests().antMatchers("/**").permitAll();
in my web service allows me to send requests and receive responses from localhost and my system's IP.
http.authorizeRequests().antMatchers("/**").hasIpAddress(10.10.1.24);
or
http.authorizeRequests().antMatchers("/**").hasIpAddress("127.0.0.1");
both fail.
When Zuul gives access to my web service... is it misreporting my request IP or something?
If my hasIpAddress() shouldn't be localhost, 127.0.0.1 or 10.10.1.24, then what else could it be?
I've shut down Zuul, Eureka and the ws and started them all up again.
I also did a maven clean.
2019-10-23 11:58:46.608 INFO 7468 --- [ restartedMain] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter#27040a7b, org.springframework.security.web.context.SecurityContextPersistenceFilter#6d6a2d29, org.springframework.security.web.header.HeaderWriterFilter#3485fdae, org.springframework.security.web.authentication.logout.LogoutFilter#198a3831, org.springframework.security.web.savedrequest.RequestCacheAwareFilter#2c7fb62d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter#20043371, org.springframework.security.web.authentication.AnonymousAuthenticationFilter#ad82f08, org.springframework.security.web.session.SessionManagementFilter#2285c828, org.springframework.security.web.access.ExceptionTranslationFilter#6511c7f9, org.springframework.security.web.access.intercept.FilterSecurityInterceptor#7e27dfef]
2019-10-23 11:58:46.618 WARN 7468 --- [ restartedMain] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2019-10-23 11:58:46.618 INFO 7468 --- [ restartedMain] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-10-23 11:58:46.621 WARN 7468 --- [ restartedMain] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2019-10-23 11:58:46.621 INFO 7468 --- [ restartedMain] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2019-10-23 11:58:46.731 INFO 7468 --- [ restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-10-23 11:58:47.264 WARN 7468 --- [ restartedMain] ockingLoadBalancerClientRibbonWarnLogger : You already have RibbonLoadBalancerClient on your classpath. It will be used by default. As Spring Cloud Ribbon is in maintenance mode. We recommend switching to BlockingLoadBalancerClient instead. In order to use it, set the value of `spring.cloud.loadbalancer.ribbon.enabled` to `false` or remove spring-cloud-starter-netflix-ribbon from your project.
2019-10-23 11:58:47.365 INFO 7468 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 54293 (http) with context path ''
2019-10-23 11:58:47.366 INFO 7468 --- [ restartedMain] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 54293
2019-10-23 11:58:47.370 INFO 7468 --- [ restartedMain] o.s.c.n.eureka.InstanceInfoFactory : Setting initial instance status as: STARTING
2019-10-23 11:58:47.392 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Initializing Eureka in region us-east-1
2019-10-23 11:58:47.491 INFO 7468 --- [ restartedMain] c.n.d.provider.DiscoveryJerseyProvider : Using JSON encoding codec LegacyJacksonJson
2019-10-23 11:58:47.492 INFO 7468 --- [ restartedMain] c.n.d.provider.DiscoveryJerseyProvider : Using JSON decoding codec LegacyJacksonJson
2019-10-23 11:58:47.570 INFO 7468 --- [ restartedMain] c.n.d.provider.DiscoveryJerseyProvider : Using XML encoding codec XStreamXml
2019-10-23 11:58:47.571 INFO 7468 --- [ restartedMain] c.n.d.provider.DiscoveryJerseyProvider : Using XML decoding codec XStreamXml
2019-10-23 11:58:47.690 INFO 7468 --- [ restartedMain] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration
2019-10-23 11:58:47.814 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2019-10-23 11:58:47.814 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2019-10-23 11:58:47.814 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2019-10-23 11:58:47.814 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Application is null : false
2019-10-23 11:58:47.815 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2019-10-23 11:58:47.815 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2019-10-23 11:58:47.815 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2019-10-23 11:58:47.890 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : The response status is 200
2019-10-23 11:58:47.892 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2019-10-23 11:58:47.894 INFO 7468 --- [ restartedMain] c.n.discovery.InstanceInfoReplicator : InstanceInfoReplicator onDemand update allowed rate per min is 4
2019-10-23 11:58:47.897 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 1571842727896 with initial instances count: 0
2019-10-23 11:58:47.900 INFO 7468 --- [ restartedMain] o.s.c.n.e.s.EurekaServiceRegistry : Registering application USERS-WS with eureka with status UP
2019-10-23 11:58:47.900 INFO 7468 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1571842727900, current=UP, previous=STARTING]
2019-10-23 11:58:47.902 INFO 7468 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_USERS-WS/users-ws:90ae4ec0932916bcd2b9155854f3a269: registering service...
2019-10-23 11:58:47.945 INFO 7468 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_USERS-WS/users-ws:90ae4ec0932916bcd2b9155854f3a269 - registration status: 204
2019-10-23 11:58:48.064 INFO 7468 --- [ restartedMain] c.p.p.a.u.PhotoAppApiUsersApplication : Started PhotoAppApiUsersApplication in 5.037 seconds (JVM running for 5.825)
2019-10-23 11:59:17.895 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2019-10-23 11:59:17.895 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2019-10-23 11:59:17.895 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2019-10-23 11:59:17.896 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
2019-10-23 11:59:17.896 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2019-10-23 11:59:17.896 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
2019-10-23 11:59:17.896 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2019-10-23 11:59:17.959 INFO 7468 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
Here is a working basic example using spring-boot 2.2.0.RELEASE with spring-boot-starter-security and spring-boot-starter-web
Works when accessing via http://localhost:8080/ip
@SpringBootApplication
public class SpringSecurityHasIpAddressApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringSecurityHasIpAddressApplication.class, args);
    }
}

@RestController
class HelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello World!";
    }

    @GetMapping("/ip")
    public String ip(HttpServletRequest request) {
        return request.getRemoteAddr();
    }

    @GetMapping("/secure")
    public String secure(Principal principal, HttpServletRequest request) {
        return principal.getName() + " with " + request.getRemoteAddr();
    }
}

@Configuration
class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests(
                authorizeRequests ->
                    authorizeRequests
                        .antMatchers("/hello").permitAll()
                        .antMatchers("/secure").authenticated()
                        .antMatchers("/ip").hasIpAddress("0:0:0:0:0:0:0:1") // localhost
                        .anyRequest().authenticated()
            )
            .formLogin();
    }
}
If you access the /secure path, you can see the IP address you're actually using.
0:0:0:0:0:0:0:1 is my localhost address, so I can access /ip without authentication.
Setting the log level of org.springframework.security can also be very helpful.
application.properties
logging.level.org.springframework.security=debug
Then you can see something like this in the logs:
2019-10-30 19:14:01.039 DEBUG 3692 --- [nio-8080-exec-6] o.s.s.w.a.i.FilterSecurityInterceptor : Previously Authenticated: org.springframework.security.authentication.AnonymousAuthenticationToken#536ff536: Principal: anonymousUser; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#166c8: RemoteIpAddress: 0:0:0:0:0:0:0:1; SessionId: 7602343558C34E2576CD0D3E20EDCBEE; Granted Authorities: ROLE_ANONYMOUS
2019-10-30 19:14:01.040 DEBUG 3692 --- [nio-8080-exec-6] o.s.s.access.vote.AffirmativeBased : Voter: org.springframework.security.web.access.expression.WebExpressionVoter#7527e914, returned: -1
2019-10-30 19:14:01.041 DEBUG 3692 --- [nio-8080-exec-6] o.s.s.w.a.ExceptionTranslationFilter : Access is denied (user is anonymous); redirecting to authentication entry point
If you try via http://127.0.0.1/ip, the above solution will fail. In that case you can use
...
.antMatchers("/ip").hasIpAddress("127.0.0.1/32")
...
If you want to allow a range of IP addresses, you could use
...
.antMatchers("/access") // multiple IP matching
.access("hasIpAddress('192.168.0.1/16') or hasIpAddress('127.0.0.1/32')")
...
hasIpAddress("1.1.1.1") has always worked fine for me. You don't need the /32, but you can use it; it's the same IP with /32 appended, and you only really need the suffix when matching a range of IPs in a subnet. My guess is that you're getting the IP of Zuul rather than localhost/127.0.0.1, and that you're using embedded Tomcat without the <Valve className="org.apache.catalina.valves.RemoteIpValve" /> installed. Also, enable the Tomcat access log to see which IP is actually hitting your service, via the Spring Boot properties listed at https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html - just search for tomcat.
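For the access-log part, a sketch of the relevant Spring Boot properties (the values are just examples) might look like this; server.tomcat.remote-ip-header additionally makes embedded Tomcat honour the X-Forwarded-For header that Zuul typically sets, which is what the RemoteIpValve would otherwise do:

# application.properties - log every request with the remote host (%h) that Tomcat sees
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.directory=logs
server.tomcat.accesslog.pattern=%h %t "%r" %s %b
# trust the forwarded client IP coming from the Zuul proxy (RemoteIpValve equivalent)
server.tomcat.remote-ip-header=X-Forwarded-For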

Error registering service to eureka server

I am trying to register a client with a Spring Eureka server, but the client deregisters right after registering.
eureka-server logs:
2018-05-13 16:02:47.290  INFO 25557 --- [io-9091-exec-10] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status UP (replication=false)
2018-05-13 16:02:47.438  INFO 25557 --- [nio-9091-exec-3] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status DOWN (replication=false)
2018-05-13 16:02:47.457  INFO 25557 --- [nio-9091-exec-2] c.n.e.registry.AbstractInstanceRegistry : Cancelled instance HELLO-CLIENT/192.168.43.96:hello-client:8072 (replication=false)
2018-05-13 16:02:47.950  INFO 25557 --- [nio-9091-exec-5] c.n.e.registry.AbstractInstanceRegistry : Registered instance HELLO-CLIENT/192.168.43.96:hello-client:8072 with status DOWN (replication=true)
2018-05-13 16:02:47.951  INFO 25557 --- [nio-9091-exec-5] c.n.e.registry.AbstractInstanceRegistry : Cancelled instance HELLO-CLIENT/192.168.43.96:hello-client:8072 (replication=true)
2018-05-13 16:03:25.747  INFO 25557 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry : Running the evict task with compensationTime 4ms
Eureka-client logs:
2018-05-13 16:02:47.163  INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072: registering service...
2018-05-13 16:02:47.212  INFO 25676 --- [           main] c.a.helloclient.HelloClientApplication : Started HelloClientApplication in 7.62 seconds (JVM running for 8.573)
2018-05-13 16:02:47.224  INFO 25676 --- [       Thread-5] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#6f7923a5: startup date [Sun May 13 16:02:42 IST 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#5c30a9b0
2018-05-13 16:02:47.226  INFO 25676 --- [       Thread-5] o.s.c.n.e.s.EurekaServiceRegistry : Unregistering application hello-client with eureka with status DOWN
2018-05-13 16:02:47.227  WARN 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1526207567227, current=DOWN, previous=UP]
2018-05-13 16:02:47.232  INFO 25676 --- [       Thread-5] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-13 16:02:47.235  INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Shutting down DiscoveryClient ...
2018-05-13 16:02:47.292  INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - registration status: 204
2018-05-13 16:02:47.423  INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072: registering service...
2018-05-13 16:02:47.440  INFO 25676 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - registration status: 204
2018-05-13 16:02:47.442  INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Unregistering ...
2018-05-13 16:02:47.460  INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : DiscoveryClient_HELLO-CLIENT/192.168.43.96:hello-client:8072 - deregister status: 200
2018-05-13 16:02:47.494  INFO 25676 --- [       Thread-5] com.netflix.discovery.DiscoveryClient : Completed shut down of DiscoveryClient
2018-05-13 16:02:47.495  INFO 25676 --- [       Thread-5] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-13 16:02:47.498  INFO 25676 --- [       Thread-5] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans
Please let me know what could possibly be wrong.
Add
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
to both the Eureka app and the client app.
It really works!!!
The Eureka client deregisters when the app shuts down.
Check whether there is some other reason the app is stopping, which in turn makes the eureka-client deregister.
In my case, the application was shutting down because of a missing spring-boot-starter-web dependency. After resolving this, the application started fine.
This looks like a dependency issue.
If the app works fine (the core functionality) without the Eureka integration, try changing the eureka-client dependency version.
I would suggest checking the following:
Check all the port numbers that you are running on
Check for any version issues (one way to pin them is sketched below)
Add the web dependency above to your Eureka pom.xml (it worked for me in Maven projects)
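If it does turn out to be a version mismatch, one common way to pin compatible versions is to import the Spring Cloud BOM instead of hard-coding individual artifact versions. A sketch for a Maven pom.xml (the release train name is only an example; pick the one matching your Boot version):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <!-- example release train, not a recommendation for your exact setup -->
            <version>Greenwich.SR6</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        <!-- version managed by the BOM above -->
    </dependency>
</dependencies>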
