Why does a Spring Integration QueueChannel process Kafka messages sequentially with a one-second delay?

When using the Kafka integration with a QueueChannel, the messages received by the queue channel are processed sequentially, with a delay of one second between them, and I cannot understand why. A queue channel should accumulate messages (up to its configured limit) and release them as long as the queue is not empty and there is a consumer. Why are the messages released sequentially, one per second?
The log below shows that the messages are received immediately (per the log timestamps) but are then processed sequentially, one second apart:
2020-04-06 13:08:28.108 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 2 - enriched
2020-04-06 13:08:28.109 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 2 - enriched
2020-04-06 13:08:28.110 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 7 - enriched
2020-04-06 13:08:28.111 INFO 30718 --- [ntainer#0-0-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 5 - enriched
2020-04-06 13:08:28.116 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 6 - enriched
2020-04-06 13:08:28.119 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 4 - enriched
2020-04-06 13:08:28.120 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 1 - enriched
2020-04-06 13:08:28.121 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 8 - enriched
2020-04-06 13:08:28.122 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 3 - enriched
2020-04-06 13:08:28.123 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 9 - enriched
2020-04-06 13:08:28.124 INFO 30718 --- [ntainer#0-1-C-1] o.s.integration.handler.LoggingHandler : readKafkaChannel: item: 10 - enriched
2020-04-06 13:08:29.111 INFO 30718 --- [ask-scheduler-2] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 7 - enriched
2020-04-06 13:08:30.112 INFO 30718 --- [ask-scheduler-4] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 5 - enriched
2020-04-06 13:08:31.112 INFO 30718 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 6 - enriched
2020-04-06 13:08:32.113 INFO 30718 --- [ask-scheduler-5] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 4 - enriched
2020-04-06 13:08:33.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 1 - enriched
2020-04-06 13:08:34.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 8 - enriched
2020-04-06 13:08:35.113 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 3 - enriched
2020-04-06 13:08:36.114 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 9 - enriched
2020-04-06 13:08:37.114 INFO 30718 --- [ask-scheduler-3] o.s.integration.handler.LoggingHandler : channelThatIsProcessingSequential - item: 10 - enriched
package br.com.gubee.kafaexample

import org.apache.kafka.clients.admin.NewTopic
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.MediaType
import org.springframework.integration.annotation.Gateway
import org.springframework.integration.annotation.MessagingGateway
import org.springframework.integration.config.EnableIntegration
import org.springframework.integration.context.IntegrationContextUtils
import org.springframework.integration.dsl.IntegrationFlow
import org.springframework.integration.dsl.IntegrationFlows
import org.springframework.integration.kafka.dsl.Kafka
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.listener.ContainerProperties
import org.springframework.scheduling.annotation.Async
import org.springframework.stereotype.Component
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

@RestController
@RequestMapping(path = ["/testKafka"], produces = [MediaType.APPLICATION_JSON_VALUE])
class TestKafkaResource(private val testKafkaGateway: TestKafkaGateway) {

    @GetMapping("init/{param}")
    fun init(@PathVariable("param", required = false) param: String? = null) {
        (1..10).forEach {
            println("Send async item $it")
            testKafkaGateway.init("item: $it")
        }
    }
}

@MessagingGateway(errorChannel = IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
@Component
interface TestKafkaGateway {

    @Gateway(requestChannel = "publishKafkaChannel")
    @Async
    fun init(param: String)
}

@Configuration
@EnableIntegration
class TestKafkaFlow(private val kafkaTemplate: KafkaTemplate<*, *>,
                    private val consumerFactory: ConsumerFactory<*, *>) {

    @Bean
    fun readKafkaChannelTopic(): NewTopic {
        return NewTopic("readKafkaChannel", 40, 1.toShort())
    }

    @Bean
    fun publishKafka(): IntegrationFlow {
        return IntegrationFlows
            .from("publishKafkaChannel")
            .transform<String, String> { "${it} - enriched" }
            .handle(
                Kafka.outboundChannelAdapter(kafkaTemplate)
                    .topic("readKafkaChannel")
                    .sendFailureChannel(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
            )
            .get()
    }

    @Bean
    fun readFromKafka(): IntegrationFlow {
        return IntegrationFlows
            .from(
                Kafka.messageDrivenChannelAdapter(consumerFactory, "readKafkaChannel")
                    .configureListenerContainer { kafkaMessageListenerContainer ->
                        kafkaMessageListenerContainer.concurrency(2)
                        kafkaMessageListenerContainer.ackMode(ContainerProperties.AckMode.RECORD)
                    }
                    .errorChannel(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
            )
            .channel { c -> c.queue(10) }
            .log<String> { "readKafkaChannel: ${it.payload}" }
            .channel("channelThatIsProcessingSequential")
            .get()
    }

    @Bean
    fun kafkaFlowAfter(): IntegrationFlow {
        return IntegrationFlows
            .from("channelThatIsProcessingSequential")
            .log<String> { "channelThatIsProcessingSequential - ${it.payload}" }
            .get()
    }
}

As Gary said, it is not a good idea to shift Kafka messages into a QueueChannel. Consumption on the Kafka.messageDrivenChannelAdapter() is already async - there is really no reason to hand the messages off to a separate thread.
Anyway, it looks like you are using Spring Cloud Stream, whose default PollerMetadata is configured with a one-message-per-second polling policy.
If that doesn't fit your requirements, you can always change that .channel { c -> c.queue(10) } to use a second lambda and configure a custom poller there.
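For example, the one-message-per-second default can be overridden with a custom default poller bean. A minimal sketch in Java (the fixedDelay of 100 ms and maxMessagesPerPoll of 10 are illustrative values, not taken from your flow):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.scheduling.PollerMetadata;

@Configuration
public class PollerConfig {

    // Replaces the default poller: poll the queue channel every 100 ms,
    // taking up to 10 messages per poll, instead of 1 message per second.
    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        return Pollers.fixedDelay(100).maxMessagesPerPoll(10).get();
    }
}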
BTW, we already have a Kotlin DSL implementation in Spring Integration: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/kotlin-dsl.html#kotlin-dsl

Related

Trace id propagation in Spring Boot 3 with Spring cloud streams and WebFlux

I tried to use Spring Cloud Stream with the Kafka binder, but when I call a WebClient in the chain, the trace id is lost.
My flow is: 'external service' -> 'functionStream-in' -> 'http call' -> 'functionStream-out' -> 'testStream-in' -> 'testStream-out' -> 'external service'.
After the http call (or not?) the trace id is no longer propagated, and I don't understand why. If I remove the http call, everything is OK.
I tried adding Hooks.enableAutomaticContextPropagation(), but that didn't help.
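(For reference, a sketch of where that hook is typically installed; the class name TestApplication is taken from the log output below, the rest is assumed:)

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import reactor.core.publisher.Hooks;

@SpringBootApplication
public class TestApplication {
    public static void main(String[] args) {
        // Install Reactor's automatic ThreadLocal <-> Context propagation
        // before any reactive pipelines are created.
        Hooks.enableAutomaticContextPropagation();
        SpringApplication.run(TestApplication.class, args);
    }
}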
I tried adding ContextSnapshot.setThreadLocalsFrom around the http call - same thing.
How can I solve it?
Dependencies:
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'org.springframework.boot:spring-boot-starter-webflux'
implementation 'org.springframework.cloud:spring-cloud-stream'
implementation 'org.springframework.cloud:spring-cloud-starter-stream-kafka'
implementation 'io.micrometer:micrometer-tracing-bridge-brave'
implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
implementation "io.projectreactor:reactor-core:3.5.3"
implementation "io.micrometer:context-propagation:1.0.2"
implementation "io.micrometer:micrometer-core:1.10.4"
implementation "io.micrometer:micrometer-tracing:1.0.2"
}
application.yml:
spring:
  cloud.stream:
    kafka.binder:
      enableObservation: true
      headers:
        - b3
    function.definition: functionStream;testStream
    default.producer.useNativeEncoding: true
    bindings:
      functionStream-in-0:
        destination: spring-in
        group: spring-test1
      functionStream-out-0:
        destination: test-in
      testStream-in-0:
        destination: test-in
        group: spring-test2
      testStream-out-0:
        destination: spring-out
  integration:
    management:
      observation-patterns: "*"
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
management:
  tracing:
    enabled: true
    sampling.probability: 1.0
    propagation.type: b3
logging.pattern.level: "%5p [%X{traceId:-},%X{spanId:-}]"
Code:
@Bean
WebClient webClient(final WebClient.Builder builder) {
    return builder.build();
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient, final ObservationRegistry registry) {
    return flux -> flux
            .<Message<String>>handle((msg, sink) -> {
                log.info("functionStream-1");
                sink.next(msg);
            })
            .flatMap(msg -> webClient.get()
                    .uri("http://localhost:8080/test")
                    .exchangeToMono(httpResponse -> httpResponse.bodyToMono(String.class)
                            .map(httpBody -> MessageBuilder.withPayload(httpBody)
                                    .copyHeaders(httpResponse.headers().asHttpHeaders())
                                    .build())
                            .<Message<String>>handle((m, sink) -> {
                                log.info("functionStream-3");
                                sink.next(m);
                            })
                    )
            )
            .handle((msg, sink) -> {
                log.info("functionStream-2");
                sink.next(msg);
            });
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> testStream(final ObservationRegistry registry) {
    return flux -> flux
            .publishOn(Schedulers.boundedElastic())
            .<Message<String>>handle((msg, sink) -> {
                log.info("testStream-1");
                sink.next(msg);
            })
            .map(msg -> MessageBuilder
                    .withPayload(msg.getPayload())
                    .copyHeaders(msg.getHeaders())
                    .build());
}

@Bean
RouterFunction<ServerResponse> router(final ObservationRegistry registry) {
    return route()
            .GET("/test", r -> ServerResponse.ok().body(Mono.deferContextual(contextView -> {
                try (final var scope = ContextSnapshot.setThreadLocalsFrom(contextView, ObservationThreadLocalAccessor.KEY)) {
                    log.info("GET /test");
                }
                return Mono.just("answer");
            }), String.class))
            .build();
}
With this code I get the following output:
2023-02-16T17:06:22.111 INFO [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:06:22.166 WARN [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#523fe6a9]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.170 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#545339d8]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.187 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope#44400bcc]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.361 INFO [63ee385de15f1061dea076eb06b0d1e0,908f48f8485a4277] 220348 --- [ctor-http-nio-4] com.example.demo.TestApplication : GET /test
2023-02-16T17:06:22.407 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-3
2023-02-16T17:06:22.409 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:06:22.448 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.456 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382456
2023-02-16T17:06:22.477 INFO [,] 220348 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:06:22.512 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,b5babc6bef4e30ca] 220348 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:06:22.539 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.543 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382543
Without the http call, the output is:
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:03:09.615 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189628
2023-02-16T17:03:09.691 INFO [,] 204228 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:03:09.859 INFO [63ee379d924e5645fc1d9e27b8135b48,b92a1a59ffd32d80] 204228 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:03:09.868 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189874

How to make different instances of consumers in the same consumer group consume different shards of the same kinesis stream?

I'm following the example given in spring-cloud-stream-samples with the following modifications.
application.yml
spring:
  cloud:
    stream:
      instanceCount: 2
      bindings:
        produceOrder-out-0:
          destination: test_stream
          content-type: application/json
          producer:
            partitionCount: 2
            partitionSelectorName: eventPartitionSelectorStrategy
            partitionKeyExtractorName: eventPartitionKeyExtractorStrategy
        processOrder-in-0:
          group: eventConsumers
          destination: test_stream
          content-type: application/json
      function:
        definition: processOrder;produceOrder
ProducerConfiguration.java
package demo.config;

import demo.stream.Event;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class ProducerConfiguration {

    private static Logger logger = LoggerFactory.getLogger(ProducerConfiguration.class);

    @Bean
    public PartitionSelectorStrategy eventPartitionSelectorStrategy() {
        return new PartitionSelectorStrategy() {
            @Override
            public int selectPartition(Object key, int partitionCount) {
                if (key instanceof Integer) {
                    // Normalize possibly-negative hash codes into [0, partitionCount)
                    int partition = (((Integer) key) % partitionCount + partitionCount) % partitionCount;
                    logger.info("key {} falls into partition {}", key, partition);
                    return partition;
                }
                return 0;
            }
        };
    }

    @Bean
    public PartitionKeyExtractorStrategy eventPartitionKeyExtractorStrategy() {
        return new PartitionKeyExtractorStrategy() {
            @Override
            public Object extractKey(Message<?> message) {
                if (message.getPayload() instanceof Event) {
                    return ((Event) message.getPayload()).hashCode();
                } else {
                    return 0;
                }
            }
        };
    }
}
When I run two instances of this application by setting --spring.cloud.stream.instanceIndex=0 and --spring.cloud.stream.instanceIndex=1, I am able to see the events getting produced. However, only one of the instances is consuming the records, from both partitions; the other instance is not consuming anything, despite the producer creating partitioned records.
Logs seen in KinesisProducer
2022-09-04 00:17:22.628 INFO 34029 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:22.658 INFO 34029 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64398 (http) with context path ''
2022-09-04 00:17:22.723 INFO 34029 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.487 seconds (JVM running for 19.192)
2022-09-04 00:17:23.938 INFO 34029 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000000', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-09-04 00:17:55.222 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-09-04 00:17:55.224 INFO 34029 --- [io-64398-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
2022-09-04 00:17:55.598 INFO 34029 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:17:56.337 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1397835167 falls into partition 1
2022-09-04 00:18:02.047 INFO 34029 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:02.361 INFO 34029 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 147530256 falls into partition 0
Logs seen in KinesisConsumer
2022-09-04 00:17:28.050 INFO 34058 --- [ main] a.i.k.KinesisMessageDrivenChannelAdapter : started KinesisMessageDrivenChannelAdapter{shardOffsets=[KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}], consumerGroup='eventConsumers'}
2022-09-04 00:17:28.076 INFO 34058 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 64399 (http) with context path ''
2022-09-04 00:17:28.116 INFO 34058 --- [ main] demo.KinesisApplication : Started KinesisApplication in 18.566 seconds (JVM running for 19.839)
2022-09-04 00:17:29.365 INFO 34058 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : The [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=AFTER_SEQUENCE_NUMBER, sequenceNumber='49632927200161141377996226513172299243826807332967284754', timestamp=null, stream='test_stream', shard='shardId-000000000001', reset=false}, state=NEW}] has been started.
2022-09-04 00:17:57.346 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=5fbaca2f-d947-423d-a1f1-b1c9c268d2d0, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-04 00:18:04.384 INFO 34058 --- [esis-consumer-1] demo.stream.OrderStreamConfiguration : An order has been placed from this service Event [id=null, subject=Order [id=83021259-89b5-4451-a0ec-da3152d37a58, name=pen], type=ORDER, originator=KinesisProducer]
spring-cloud-stream-binder-kinesis version : 2.2.0
I have the following questions:
1) For static shard distribution within a single consumer group, is there any other parameter that needs to be configured that I have missed?
2) Do I need to specify the DynamoDB checkpoint properties only for dynamic shard distribution?
EDIT
I have added the DEBUG logs seen in KinesisProducer below:
2022-09-07 08:30:38.120 INFO 4993 --- [io-64398-exec-1] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=b3927132-a80d-481e-a219-dbd0c0c7d124, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:30:38.806 INFO 4993 --- [ask-scheduler-3] demo.config.ProducerConfiguration : key 1842629003 falls into partition 1
2022-09-07 08:30:38.812 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, id=9cb8ec58-4a9e-7b6f-4263-c9d4d1eec906, contentType=application/json, timestamp=1662519638809}]
2022-09-07 08:30:38.813 DEBUG 4993 --- [ask-scheduler-3] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:30:38.832 DEBUG 4993 --- [ask-scheduler-3] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=1, scst_partitionOverride=0, id=731f444b-d3df-a51a-33de-8adf78e1e746, contentType=application/json, timestamp=1662519638813}]
2022-09-07 08:35:51.153 INFO 4993 --- [io-64398-exec-2] demo.stream.OrdersSource : Event sent: Event [id=null, subject=Order [id=6a5b3084-11dc-4080-a80e-61cc73315139, name=pen], type=ORDER, originator=KinesisProducer]
2022-09-07 08:35:51.915 INFO 4993 --- [ask-scheduler-5] demo.config.ProducerConfiguration : key 1525662264 falls into partition 0
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, id=115c5421-00f2-286d-de02-0020e9322a17, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.916 DEBUG 4993 --- [ask-scheduler-5] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#63811d15 received message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]
2022-09-07 08:35:51.917 DEBUG 4993 --- [ask-scheduler-5] o.s.c.s.m.DirectWithAttributesChannel : postSend (sent=true) on channel 'bean 'produceOrder-out-0'', message: GenericMessage [payload=byte[126], headers={scst_partition=0, scst_partitionOverride=0, id=145be7e8-381f-af73-e430-9cb645ff785f, contentType=application/json, timestamp=1662519951916}]

KAFKA : splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE

I am sending 10 messages. 2 messages are "right" and 1 message is over 1 MB in size, which gets rejected by the Kafka broker due to a RecordTooLargeException.
I have 2 doubts:
1) MESSAGE_TOO_LARGE appears only from the second time the scheduler calls the method onwards. When the scheduler calls the method for the first time, "splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE" does not appear.
2) Why are the retries not getting decreased? I have set retries=1.
I am calling the Sender class using the Spring Boot scheduling mechanism, something like this:
@Scheduled(fixedDelay = 30000)
public void process() {
    sender.sendThem();
}
I am using Spring Boot KafkaTemplate.
@Configuration
@EnableKafka
public class KakfaConfiguration {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        // props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, appProps.getJksLocation());
        // props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, appProps.getJksPassword());
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.ACKS_CONFIG, acks);
        config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackOffMsConfig);
        config.put(ProducerConfig.RETRIES_CONFIG, retries);
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-99");
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean(name = "ktm")
    public KafkaTransactionManager kafkaTransactionManager() {
        KafkaTransactionManager ktm = new KafkaTransactionManager(producerFactory());
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
        return ktm;
    }
}
@Component
@EnableTransactionManagement
class Sender {

    @Autowired
    private KafkaTemplate<String, String> template;

    private static final Logger LOG = LoggerFactory.getLogger(Sender.class);

    @Transactional("ktm")
    public void sendThem(List<String> toSend) throws InterruptedException {
        List<ListenableFuture<SendResult<String, String>>> futures = new ArrayList<>();
        CountDownLatch latch = new CountDownLatch(toSend.size());
        ListenableFutureCallback<SendResult<String, String>> callback = new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOG.info(" message sucess : " + result.getProducerRecord().value());
                latch.countDown();
            }

            @Override
            public void onFailure(Throwable ex) {
                LOG.error("Message Failed ");
                latch.countDown();
            }
        };
        toSend.forEach(str -> {
            ListenableFuture<SendResult<String, String>> future = template.send("t_101", str);
            future.addCallback(callback);
            futures.add(future); // track the future so the timeout branch below can inspect it
        });
        if (latch.await(12, TimeUnit.MINUTES)) {
            LOG.info("All sent ok");
        } else {
            for (int i = 0; i < toSend.size(); i++) {
                if (!futures.get(i).isDone()) {
                    LOG.error("No send result for " + toSend.get(i));
                }
            }
        }
    }
}
I am getting the following logs:
2020-05-01 15:55:18.346 INFO 6476 --- [ scheduling-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1588328718345
2020-05-01 15:55:18.347 INFO 6476 --- [ scheduling-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-prod-991, transactionalId=prod-991] ProducerId set to -1 with epoch -1
2020-05-01 15:55:18.351 INFO 6476 --- [oducer-prod-991] org.apache.kafka.clients.Metadata : [Producer clientId=producer-prod-991, transactionalId=prod-991] Cluster ID: bL-uhcXlRSWGaOaSeDpIog
2020-05-01 15:55:48.358 INFO 6476 --- [oducer-prod-991] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-prod-991, transactionalId=prod-991] ProducerId set to 13000 with epoch 10
Value of kafka template----- 1518752790
2020-05-01 15:55:48.377 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 8 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.379 INFO 6476 --- [oducer-prod-991] com.a.kafkaproducer.producer.Sender : message sucess : TTTT0
2020-05-01 15:55:48.379 INFO 6476 --- [oducer-prod-991] com.a.kafkaproducer.producer.Sender : message sucess : TTTT1
2020-05-01 15:55:48.511 ERROR 6476 --- [oducer-prod-991] com.a.kafkaproducer.producer.Sender : Message Failed
2020-05-01 15:55:48.512 ERROR 6476 --- [oducer-prod-991] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='
2020-05-01 15:55:48.514 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 10 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.518 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 11 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.523 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 12 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.527 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 13 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.531 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 14 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.534 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 15 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.538 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 16 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.542 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 17 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
2020-05-01 15:55:48.546 WARN 6476 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Got error produce response in correlation id 18 on topic-partition t_101-2, splitting and retrying (1 attempts left). Error: MESSAGE_TOO_LARGE
Then, after some time, the program completes with the following log:
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for t_101-0:120000 ms has passed since batch creation
2020-05-01 16:18:31.322 WARN 17816 --- [ scheduling-1] o.s.k.core.DefaultKafkaProducerFactory : Error during transactional operation; producer removed from cache; possible cause: broker restarted during transaction: CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer#7085a4dd, txId=prod-991]
2020-05-01 16:18:31.322 INFO 17816 --- [ scheduling-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-prod-991, transactionalId=prod-991] Closing the Kafka producer with timeoutMillis = 5000 ms.
2020-05-01 16:18:31.324 INFO 17816 --- [oducer-prod-991] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-prod-991, transactionalId=prod-991] Aborting incomplete transaction due to shutdown
error message here
------ processing done in parent class------
The broad picture of the producer workflow is this: by setting the RETRIES_CONFIG property, we can guarantee that, in case of failure, this producer will try to send the message again.
If a batch is too large, the producer splits it and sends the split batches again; the retry attempt count is not decremented in this case.
You can go through the source code below and find the scenarios in which the retry count is decremented:
https://github.com/apache/kafka/blob/68ac551966e2be5b13adb2f703a01211e6f7a34b/clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java
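If the goal is to avoid MESSAGE_TOO_LARGE responses rather than to understand the retry accounting, one option is to enforce size limits on the producer side, so oversized records fail fast in send() instead of being rejected by the broker. A hedged sketch (the values shown are the Kafka defaults, used here only for illustration) that could be added to the config map in producerFactory() above:

// Client-side size limits: records larger than max.request.size fail
// locally with RecordTooLargeException instead of reaching the broker.
config.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1_048_576); // 1 MB per request
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16_384);          // keep batches small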

Simple unit test for Apache Camel SNMP route

I'm having some trouble getting a working Camel Spring Boot unit test that exercises a simple SNMP route. Here is what I have so far:
SnmpRoute.kt
open class SnmpRoute(private val snmpProperties: SnmpProperties, private val repository: IPduEventRepository) : RouteBuilder() {

    @Throws(Exception::class)
    override fun configure() {
        logger.debug("Initialising with properties [{}]", snmpProperties)
        from("snmp:0.0.0.0:1161?protocol=udp&type=TRAP")
            .process { exchange ->
                // do stuff
            }
            .bean(repository, "save")
    }
}
SnmpRouteTest.kt
@CamelSpringBootTest
@SpringBootApplication
@EnableAutoConfiguration
open class SnmpRouteTest : CamelTestSupport() {

    object SnmpConstants {
        const val SNMP_TRAP = "<snmp><entry><oid>...datadatadata...</oid><value>123456</value></entry></snmp>"
        const val MOCK_SNMP_ENDPOINT = "mock:snmp"
    }

    @Mock
    lateinit var snmpProperties: SnmpProperties

    @Mock
    lateinit var repository: IPduEventRepository

    @InjectMocks
    lateinit var snmpRoute: SnmpRoute

    @EndpointInject(SnmpConstants.MOCK_SNMP_ENDPOINT)
    lateinit var mock: MockEndpoint

    @Before
    fun setup() {
        initMocks(this)
    }

    @Throws(Exception::class)
    override fun createRouteBuilder(): RouteBuilder {
        return snmpRoute
    }

    @Test
    @Throws(Exception::class)
    fun `Test SNMP endpoint`() {
        mock.expectedBodiesReceived(SnmpConstants.SNMP_TRAP)
        template.sendBody(SnmpConstants.MOCK_SNMP_ENDPOINT, SnmpConstants.SNMP_TRAP)
        mock.assertIsSatisfied()
        verify(repository).save(PduEvent(1234, PDU.TRAP))
    }
}
However, when I run this test, it fails as the repository mock never has any interactions:
Wanted but not invoked:
repository.save(
PduEvent(requestId=1234, type=-89)
);
-> at org.meanwhile.in.hell.camel.snmp.route.SnmpRouteTest.Test SNMP endpoint(SnmpRouteTest.kt:61)
Actually, there were zero interactions with this mock.
Can someone help me understand why this isn't interacting correctly? When run manually, this works and saves as expected.
Now I see what is going on here!
Your RouteBuilder under test has a from("snmp"). If you wish to deliver a mock message there for testing, you need to swap the snmp: component for something like a direct: or seda: component during test execution.
Your current test delivers a message to a mock endpoint and verifies that it was received there; it never interacts with the real route builder. That's why your mock endpoint assertions passed but Mockito.verify() failed.
TL;DR
Presuming that you are using Apache Camel 3.x, here is how to do it. I'm not fluent in Kotlin, so I'll show how to do it in Java.
AdviceWithRouteBuilder.adviceWith(context, "route-id", routeBuilder -> {
    routeBuilder.replaceFromWith("direct:snmp-from"); // Replaces the from part of the route `route-id` with a direct component
});
1) Modify your route builder code to assign an ID to the route (say, route-id).
2) Replace the SNMP component at the start of the route with a direct component.
3) Deliver test messages to the direct: component instead of SNMP.
TL;DR ends.
Full-blown sample code below.
PojoRepo.java
@Component
public class PojoRepo {
    public void save(String body) {
        System.out.println(body);
    }
}
SNMPDummyRoute.java
@Component
public class SNMPDummyRoute extends RouteBuilder {

    PojoRepo pojoRepo;

    public SNMPDummyRoute(PojoRepo pojoRepo) {
        this.pojoRepo = pojoRepo;
    }

    @Override
    public void configure() throws Exception {
        from("snmp:0.0.0.0:1161?protocol=udp&type=TRAP")
            .id("snmp-route")
            .process(exchange -> {
                exchange.getMessage().setBody(String.format("Saw message [%s]", exchange.getIn().getBody()));
            })
            .to("log:snmp-log")
            .bean(pojoRepo, "save");
    }
}
SNMPDummyRouteTest.java
Note: This class uses CamelSpringBootRunner instead of extending CamelTestSupport, but the core idea is the same.
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@DisableJmx(false)
@MockEndpoints("log:*")
public class SNMPDummyRouteTest {

    @MockBean
    PojoRepo repo;

    @EndpointInject("mock:log:snmp-log")
    MockEndpoint mockEndpoint;

    @Produce
    ProducerTemplate testTemplate;

    @Autowired
    CamelContext camelContext;

    @Test
    public void testRoute() throws Exception {
        AdviceWithRouteBuilder.adviceWith(camelContext, "snmp-route", routeBuilder -> {
            routeBuilder.replaceFromWith("direct:snmp-from");
        });
        testTemplate.sendBody("direct:snmp-from", "One");
        testTemplate.sendBody("direct:snmp-from", "Two");
        mockEndpoint.expectedMinimumMessageCount(2);
        mockEndpoint.setAssertPeriod(2_000L);
        mockEndpoint.assertIsSatisfied();
        Mockito.verify(repo, Mockito.atLeast(2)).save(anyString());
    }
}
Logs from the test run are below. Take a closer look at the XML piece where the SNMP endpoint gets swapped out for a direct component.
2019-11-12 20:52:57.126 INFO 32560 --- [ main] o.a.c.component.snmp.SnmpTrapConsumer : Starting trap consumer on udp:0.0.0.0/1161
2019-11-12 20:52:58.363 INFO 32560 --- [ main] o.a.c.component.snmp.SnmpTrapConsumer : Started trap consumer on udp:0.0.0.0/1161 using udp protocol
2019-11-12 20:52:58.364 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Route: snmp-route started and consuming from: snmp://udp:0.0.0.0/1161
2019-11-12 20:52:58.368 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Total 1 routes, of which 1 are started
2019-11-12 20:52:58.370 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Apache Camel 3.0.0-M4 (CamelContext: MyCamel) started in 2.645 seconds
2019-11-12 20:52:59.670 INFO 32560 --- [ main] o.a.c.i.engine.DefaultShutdownStrategy : Starting to graceful shutdown 1 routes (timeout 10 seconds)
2019-11-12 20:52:59.680 INFO 32560 --- [ - ShutdownTask] o.a.c.component.snmp.SnmpTrapConsumer : Stopped trap consumer on udp:0.0.0.0/1161
2019-11-12 20:52:59.683 INFO 32560 --- [ - ShutdownTask] o.a.c.i.engine.DefaultShutdownStrategy : Route: snmp-route shutdown complete, was consuming from: snmp://udp:0.0.0.0/1161
2019-11-12 20:52:59.684 INFO 32560 --- [ main] o.a.c.i.engine.DefaultShutdownStrategy : Graceful shutdown of 1 routes completed in 0 seconds
2019-11-12 20:52:59.687 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Route: snmp-route is stopped, was consuming from: snmp://udp:0.0.0.0/1161
2019-11-12 20:52:59.689 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Route: snmp-route is shutdown and removed, was consuming from: snmp://udp:0.0.0.0/1161
2019-11-12 20:52:59.691 INFO 32560 --- [ main] o.apache.camel.builder.AdviceWithTasks : AdviceWith replace input from [snmp:0.0.0.0:1161?protocol=udp&type=TRAP] --> [direct:snmp-from]
2019-11-12 20:52:59.692 INFO 32560 --- [ main] org.apache.camel.reifier.RouteReifier : AdviceWith route after: Route(snmp-route)[From[direct:snmp-from] -> [process[Processor#0x589dfa6f], To[log:snmp-log], Bean[org.foo.bar.POJORepo$MockitoMock$868728200]]]
2019-11-12 20:52:59.700 INFO 32560 --- [ main] org.apache.camel.reifier.RouteReifier : Adviced route before/after as XML:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<route xmlns="http://camel.apache.org/schema/spring" customId="true" id="snmp-route">
<from uri="snmp:0.0.0.0:1161?protocol=udp&type=TRAP"/>
<process id="process1"/>
<to id="to1" uri="log:snmp-log"/>
<bean id="bean1" method="save"/>
</route>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<route xmlns="http://camel.apache.org/schema/spring" customId="true" id="snmp-route">
<from uri="direct:snmp-from"/>
<process id="process1"/>
<to id="to1" uri="log:snmp-log"/>
<bean id="bean1" method="save"/>
</route>
2019-11-12 20:52:59.734 INFO 32560 --- [ main] .i.e.InterceptSendToMockEndpointStrategy : Adviced endpoint [log://snmp-log] with mock endpoint [mock:log:snmp-log]
2019-11-12 20:52:59.755 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Route: snmp-route started and consuming from: direct://snmp-from
2019-11-12 20:52:59.834 INFO 32560 --- [ main] snmp-log : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Saw message [One]]
2019-11-12 20:52:59.899 INFO 32560 --- [ main] snmp-log : Exchange[ExchangePattern: InOnly, BodyType: String, Body: Saw message [Two]]
2019-11-12 20:52:59.900 INFO 32560 --- [ main] o.a.camel.component.mock.MockEndpoint : Asserting: mock://log:snmp-log is satisfied
2019-11-12 20:53:01.903 INFO 32560 --- [ main] o.a.camel.component.mock.MockEndpoint : Re-asserting: mock://log:snmp-log is satisfied after 2000 millis
2019-11-12 20:53:01.992 INFO 32560 --- [ main] o.a.c.s.boot.SpringBootCamelContext : Apache Camel 3.0.0-M4 (CamelContext: MyCamel) is shutting down
2019-11-12 20:53:01.993 INFO 32560 --- [ main] o.a.c.i.engine.DefaultShutdownStrategy : Starting to graceful shutdown 1 routes (timeout 10 seconds)
2019-11-12 20:53:01.996 INFO 32560 --- [ - ShutdownTask] o.a.c.i.engine.DefaultShutdownStrategy : Route: snmp-route shutdown complete, was consuming from: direct://snmp-from
2019-11-12 20:53:01.996 INFO 32560 --- [ main] o.a.c.i.engine.DefaultShutdownStrategy : Graceful shutdown of 1 routes completed in 0 seconds

Spring Cloud Gateway in Docker Compose returns ERR_NAME_NOT_RESOLVED

I'm building a microservices app and I've run into a problem configuring the Spring Cloud Gateway to proxy calls to the API from a frontend running on an Nginx server.
When I make a POST request to /user/login, I get this response: OPTIONS http://28a41511677e:8082/login net::ERR_NAME_NOT_RESOLVED.
The string 28a41511677e is the service's Docker container ID. When I call another service (using the GET method), it returns data just fine.
I'm using a Eureka discovery server, which seems to find all the services correctly. The service in question is registered as 28a41511677e:users-service:8082.
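(Side note on that registration: by default, the Eureka client registers the instance hostname, which inside a Docker container is the container ID, a name the browser cannot resolve. A common mitigation, shown here only as a hedged sketch, is to have each service register its IP address instead:)

eureka:
  instance:
    prefer-ip-address: true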
Docker compose:
version: "3.7"
services:
db:
build: db/
expose:
- 5432
registry:
build: registryservice/
expose:
- 8761
ports:
- 8761:8761
gateway:
build: gatewayservice/
expose:
- 8080
depends_on:
- registry
users:
build: usersservice/
expose:
- 8082
depends_on:
- registry
- db
timetable:
build: timetableservice/
expose:
- 8081
depends_on:
- registry
- db
ui:
build: frontend/
expose:
- 80
ports:
- 80:80
depends_on:
- gateway
Gateway implementation:
@EnableDiscoveryClient
@SpringBootApplication
public class GatewayserviceApplication {

    @Bean
    public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("users-service", p -> p.path("/user/**")
                        .uri("lb://users-service"))
                .route("timetable-service", p -> p.path("/routes/**")
                        .uri("lb://timetable-service"))
                .build();
    }

    public static void main(String[] args) {
        SpringApplication.run(GatewayserviceApplication.class, args);
    }
}
Gateway settings:
spring:
  application:
    name: gateway-service
  cloud:
    gateway:
      globalcors:
        cors-configurations:
          '[/**]':
            allowedOrigins: "*"
            allowedMethods:
              - GET
              - POST
              - PUT
              - DELETE
eureka:
  client:
    service-url:
      defaultZone: http://registry:8761/eureka
Users service controller:
@RestController
@CrossOrigin
@RequestMapping("/user")
public class UserController {

    private static final Logger logger = LoggerFactory.getLogger(UserController.class);

    private UserService userService;

    @Autowired
    public UserController(UserService userService) {
        this.userService = userService;
    }

    @PostMapping(path = "/login")
    ResponseEntity<Long> login(@RequestBody LoginDto loginDto) {
        logger.info("Logging in user");
        Long uid = userService.logIn(loginDto);
        return new ResponseEntity<>(uid, HttpStatus.OK);
    }
}
Edit:
This also happens on the NPM dev server. I tried changing lb://users-service to http://users:8082, with no success; I'm still getting ERR_NAME_NOT_RESOLVED.
I did, however, find that when I call the endpoint, the following output appears in the log:
gateway_1 | 2019-05-19 23:55:10.842 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
gateway_1 | 2019-05-19 23:55:10.866 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
gateway_1 | 2019-05-19 23:55:10.867 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
gateway_1 | 2019-05-19 23:55:10.868 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
gateway_1 | 2019-05-19 23:55:10.868 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
gateway_1 | 2019-05-19 23:55:10.869 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
gateway_1 | 2019-05-19 23:55:10.871 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
gateway_1 | 2019-05-19 23:55:11.762 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
users_1 | 2019-05-19 21:55:19.268 INFO 1 --- [nio-8082-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
users_1 | 2019-05-19 21:55:19.273 INFO 1 --- [nio-8082-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
users_1 | 2019-05-19 21:55:19.513 INFO 1 --- [nio-8082-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 239 ms
users_1 | 2019-05-19 21:55:20.563 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
users_1 | 2019-05-19 21:55:20.565 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
users_1 | 2019-05-19 21:55:20.565 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
users_1 | 2019-05-19 21:55:20.567 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
users_1 | 2019-05-19 21:55:20.958 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
Edit 2:
I enabled logging for the gateway service, and this is the output whenever I call /user/login. According to the logs, the gateway matches /user/login correctly, but then starts using just /login for some reason.
2019-05-20 12:58:47.002 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] New http connection, requesting read
2019-05-20 12:58:47.025 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.BootstrapHandlers : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-05-20 12:58:47.213 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Increasing pending responses, now 1
2019-05-20 12:58:47.242 DEBUG 1 --- [or-http-epoll-2] reactor.netty.http.server.HttpServer : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter#575e590e
2019-05-20 12:58:47.379 TRACE 1 --- [or-http-epoll-2] o.s.c.g.f.WeightCalculatorWebFilter : Weights attr: {}
2019-05-20 12:58:47.817 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_USERS-SERVICE applying {pattern=/USERS-SERVICE/**} to Path
2019-05-20 12:58:47.952 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_USERS-SERVICE applying filter {regexp=/USERS-SERVICE/(?<remaining>.*), replacement=/${remaining}} to RewritePath
2019-05-20 12:58:47.960 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition matched: CompositeDiscoveryClient_USERS-SERVICE
2019-05-20 12:58:47.961 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_GATEWAY-SERVICE applying {pattern=/GATEWAY-SERVICE/**} to Path
2019-05-20 12:58:47.964 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_GATEWAY-SERVICE applying filter {regexp=/GATEWAY-SERVICE/(?<remaining>.*), replacement=/${remaining}} to RewritePath
2019-05-20 12:58:47.968 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition matched: CompositeDiscoveryClient_GATEWAY-SERVICE
2019-05-20 12:58:47.979 TRACE 1 --- [or-http-epoll-2] o.s.c.g.h.p.RoutePredicateFactory : Pattern "/user/**" matches against value "/user/login"
2019-05-20 12:58:47.980 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : Route matched: users-service
2019-05-20 12:58:47.981 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: POST http://gateway:8080/user/login] to Route{id='users-service', uri=lb://users-service, order=0, predicate=org.springframework.cloud.gateway.support.ServerWebExchangeUtils$$Lambda$333/0x000000084035ac40#276b060f, gatewayFilters=[]}
2019-05-20 12:58:47.981 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : [ff6d8305] Mapped to org.springframework.cloud.gateway.handler.FilteringWebHandler#4faea64b
2019-05-20 12:58:47.994 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.handler.FilteringWebHandler : Sorted gatewayFilterFactories: [OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter#773f7880}, order=-2147482648}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter#65a4798f}, order=-1}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter#4c51bb7}, order=0}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter#878452d}, order=10000}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerClientFilter#4f2613d1}, order=10100}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter#83298d7}, order=2147483646}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter#6d24ffa1}, order=2147483647}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter#426b6a74}, order=2147483647}]
2019-05-20 12:58:47.996 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start
2019-05-20 12:58:47.999 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url before: lb://users-service/user/login
2019-05-20 12:58:48.432 INFO 1 --- [or-http-epoll-2] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:48.492 INFO 1 --- [or-http-epoll-2] c.n.u.concurrent.ShutdownEnabledTimer : Shutdown hook installed for: NFLoadBalancer-PingTimer-users-service
2019-05-20 12:58:48.496 INFO 1 --- [or-http-epoll-2] c.netflix.loadbalancer.BaseLoadBalancer : Client: users-service instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=users-service,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2019-05-20 12:58:48.506 INFO 1 --- [or-http-epoll-2] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater
2019-05-20 12:58:48.543 INFO 1 --- [or-http-epoll-2] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:48.555 INFO 1 --- [or-http-epoll-2] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client users-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=users-service,current list of Servers=[157e1f567371:8082],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:157e1f567371:8082; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 01:00:00 CET 1970; First connection made: Thu Jan 01 01:00:00 CET 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#3cd9b0bf
2019-05-20 12:58:48.580 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url chosen: http://157e1f567371:8082/user/login
2019-05-20 12:58:48.632 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : Creating new client pool [proxy] for 157e1f567371:8082
2019-05-20 12:58:48.646 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439] Created new pooled channel, now 0 active connections and 1 inactive connections
2019-05-20 12:58:48.651 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.BootstrapHandlers : [id: 0xa9634439] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (SimpleChannelPool$1#0 = io.netty.channel.pool.SimpleChannelPool$1), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-05-20 12:58:48.673 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}, [connected])
2019-05-20 12:58:48.679 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [configured])
2019-05-20 12:58:48.682 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Registering pool release on close event for channel
2019-05-20 12:58:48.690 DEBUG 1 --- [or-http-epoll-2] r.netty.http.client.HttpClientConnect : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Handler is being applied: {uri=http://157e1f567371:8082/user/login, method=POST}
2019-05-20 12:58:48.701 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] New sending options
2019-05-20 12:58:48.720 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /user/login HTTP/1.1
content-length: 37
accept-language: cs-CZ,cs;q=0.9,en;q=0.8
referer: http://localhost/user/login
cookie: JSESSIONID=6797219EB79F6026BD8F19E9C46C09DB
accept: application/json, text/plain, */*
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36
content-type: application/json;charset=UTF-8
origin: http://gateway:8080
accept-encoding: gzip, deflate, br
Forwarded: proto=http;host="gateway:8080";for="172.19.0.7:42958"
X-Forwarded-For: 172.19.0.1,172.19.0.7
X-Forwarded-Proto: http,http
X-Forwarded-Port: 80,8080
X-Forwarded-Host: localhost,gateway:8080
host: 157e1f567371:8082
2019-05-20 12:58:48.751 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Channel connected, now 1 active connections and 0 inactive connections
2019-05-20 12:58:48.759 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object
2019-05-20 12:58:48.762 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.FluxReceive : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Subscribing inbound receiver [pending: 1, cancelled:false, inboundDone: true]
2019-05-20 12:58:48.808 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object EmptyLastHttpContent
2019-05-20 12:58:48.809 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [request_sent])
2019-05-20 12:58:49.509 INFO 1 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:49.579 DEBUG 1 --- [or-http-epoll-2] r.n.http.client.HttpClientOperations : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Received response (auto-read:false) : [Set-Cookie=JSESSIONID=7C47A99C1F416F910AB554F4617247D6; Path=/; HttpOnly, X-Content-Type-Options=nosniff, X-XSS-Protection=1; mode=block, Cache-Control=no-cache, no-store, max-age=0, must-revalidate, Pragma=no-cache, Expires=0, X-Frame-Options=DENY, Location=http://157e1f567371:8082/login, Content-Length=0, Date=Mon, 20 May 2019 10:58:49 GMT]
2019-05-20 12:58:49.579 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [response_received])
2019-05-20 12:58:49.581 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.NettyWriteResponseFilter : NettyWriteResponseFilter start
2019-05-20 12:58:49.586 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.FluxReceive : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Subscribing inbound receiver [pending: 0, cancelled:false, inboundDone: false]
2019-05-20 12:58:49.586 DEBUG 1 --- [or-http-epoll-2] r.n.http.client.HttpClientOperations : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Received last HTTP packet
2019-05-20 12:58:49.593 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Writing object DefaultFullHttpResponse(decodeResult: success, version: HTTP/1.1, content: EmptyByteBufBE)
HTTP/1.1 302 Found
Set-Cookie: JSESSIONID=7C47A99C1F416F910AB554F4617247D6; Path=/; HttpOnly
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: http://157e1f567371:8082/login
Date: Mon, 20 May 2019 10:58:49 GMT
content-length: 0
2019-05-20 12:58:49.595 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Detected non persistent http connection, preparing to close
2019-05-20 12:58:49.595 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Last Http packet was sent, terminating channel
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [disconnecting])
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Releasing channel
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Channel cleaned, now 0 active connections and 1 inactive connections
I managed to fix it. The problem was actually not in the gateway but in the users service: its security configuration was wrong and required a login for every endpoint. That is why the log above shows a 302 Found with Location: http://157e1f567371:8082/login — any call to the service got redirected to its login page.
I added the following code to the service and it works properly now.
import java.util.Arrays;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.CorsConfigurationSource;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Override
    protected void configure(HttpSecurity httpSecurity) throws Exception {
        // Requests that match no rule are no longer challenged, so nothing
        // gets redirected to /login anymore.
        httpSecurity.authorizeRequests().antMatchers("/").permitAll();
        httpSecurity.cors().and().csrf().disable();
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        // Wide-open CORS: any origin, method and header is accepted.
        CorsConfiguration configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Arrays.asList("*"));
        configuration.setAllowedMethods(Arrays.asList("*"));
        configuration.setAllowedHeaders(Arrays.asList("*"));
        configuration.setAllowCredentials(true);
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        return source;
    }
}
That's probably not a proper solution, but for non-production code it gets the job done.
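If you want something closer to production, a slightly safer variant would lock CORS to the origins you actually serve and keep everything except the genuinely public endpoints behind authentication. A minimal sketch, assuming http://localhost is your only browser origin and that only /user/login needs to stay public (adjust both to your actual setup):

import java.util.Arrays;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.CorsConfigurationSource;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity httpSecurity) throws Exception {
        httpSecurity
            .authorizeRequests()
                // only the truly public endpoint stays open
                .antMatchers("/user/login").permitAll()
                // everything else still requires authentication
                .anyRequest().authenticated()
            .and()
            .cors()
            .and()
            // keep CSRF disabled only if the service is called by
            // non-browser clients (e.g. only through the gateway)
            .csrf().disable();
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        CorsConfiguration configuration = new CorsConfiguration();
        // restrict CORS to known origins, methods and headers instead of "*"
        configuration.setAllowedOrigins(Arrays.asList("http://localhost"));
        configuration.setAllowedMethods(Arrays.asList("GET", "POST"));
        configuration.setAllowedHeaders(Arrays.asList("Content-Type", "Authorization"));
        configuration.setAllowCredentials(true);
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        return source;
    }
}

Note that allowCredentials(true) combined with a wildcard origin is rejected by newer Spring versions anyway, so naming the origins explicitly also keeps the configuration forward-compatible.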
