Spring Kafka transactions in consumer: Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn - spring-boot

We have a transactional producer, and there are no issues there.
For the consumer, we see the following in the logs. The question is: why is a transaction being started here while we are consuming the message (and why does it lead to the exception below)?
2022-12-28 18:02:05.986 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2022-12-28 18:02:10.437 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Created Kafka transaction on producer [CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[S] Consuming message] data[record = ConsumerRecord(topic =XXXXXXX)))]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Processed message]
2022-12-28 18:03:10.439 INFO [qa] 85474 --- [tainer#0-51-C-1] abc.abc.common.Generic : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Throwable t] data[exception = [Ljava.lang.StackTraceElement;@6d314e1d] ex[Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn] sts[org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn
]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[E] Consuming message]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Initiating transaction commit
2022-12-28 18:04:10.444 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
commitTransaction failed: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]
2022-12-28 18:04:10.444 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Initiating transaction rollback after commit exception
2022-12-28 18:04:10.445 WARN [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : Error during some operation; producer removed from cache: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer@43e18a26]
2022-12-28 18:04:12.436 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.l.KafkaMessageListenerContainer : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Transaction rolled back
This is the configuration:
spring:
  data:
    mongodb:
      uri: indb-prop
      auto-index-creation: false
  kafka:
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      retries: 2
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}PP
      transaction-id-prefix: tx-${spring.kafka.producer.client-id}-
      properties:
        enable.idempotence: true
        spring.json.add.type.headers: false
      bootstrap-servers: ${kafka_bootstrap_servers_2}
    # listener:
    #   missing-topics-fatal: true
    #   type: batch
    #   concurrency: 15
    #   ack-mode: manual_immediate
    #   poll-timeout: 1s
    consumer:
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      group-id: ${spring.application.name}-${info.cluster}-111111111112
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}
      # client-id: ${spring.kafka.consumer.group-id}-${IP_ADDRESS}
      auto-offset-reset: earliest
      enable-auto-commit: false
      isolation-level: read_committed
      # max-poll-records: 3
      fetch-max-wait: 5s
      properties:
        max.poll.interval.ms: 20000000
        spring.json.trusted.packages: '*'
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: net.abc.abc.EventData
      bootstrap-servers: ${kafka_bootstrap_servers}
And the consumer:
@KafkaListener(
        topics = "SOME_TOPIC",
        autoStartup = "true")
// @Transactional(transactionManager = "mongoTransactionManager", propagation = Propagation.REQUIRED)
@Override
public void onMessage(ConsumerRecord<String, EventData> data) {
    if (data == null || data.value() == null) {
        return;
    }
    logger.debug(m -> m.event(KAFKA)
            .msg("[S] Consuming message")
            .with("record", data));
    try {
        logger.debug(m -> m.event(KAFKA)
                .msg("Processed message"));
    }
    finally {
        logger.debug(m -> m.event(KAFKA)
                .msg("[E] Consuming message"));
    }
}
The Spring Boot version: 2.6.6

How many broker nodes are you using? If 1, have you overridden the broker params KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR and KAFKA_TRANSACTION_STATE_LOG_MIN_ISR to 1?
A similar-sounding issue is described here:
https://github.com/testcontainers/testcontainers-java/issues/1816
Rob.
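As to why a transaction is started on the consume side at all: once spring.kafka.producer.transaction-id-prefix is set, Spring Boot auto-configures a KafkaTransactionManager, and the listener container then runs each delivery inside a Kafka transaction so the consumed offsets can be sent to it; the AddOffsetsToTxn request that times out is exactly that offset hand-off. If this is a single-node broker, a minimal docker-compose sketch of the overrides above, assuming the Confluent images (env var names differ for other distributions):

broker:
  image: confluentinc/cp-kafka:6.0.1
  environment:
    # The internal __transaction_state topic defaults to replication factor 3
    # and min ISR 2, which a single broker can never satisfy, so transactional
    # requests such as AddOffsetsToTxn and EndTxn time out.
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1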

Related

Trace id propagation in Spring Boot 3 with Spring Cloud Stream and WebFlux

I tried to use Spring Cloud Stream with the Kafka binder, but when I call a WebClient in the chain, the trace id is lost.
My flow is 'external service' -> 'functionStream-in' -> 'http call' -> 'functionStream-out' -> 'testStream-in' -> 'testStream-out' -> 'external service'.
After the http call (or not?) the trace id is no longer propagated, and I don't understand why. If I remove the http call, everything is OK.
I tried adding Hooks.enableAutomaticContextPropagation();, but that didn't help.
I tried adding ContextSnapshot.setThreadLocalsFrom around the http call; same thing.
How can I solve it?
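For reference, roughly how I registered the hook (a minimal sketch; the main class is assumed to look like this, TestApplication being the class that shows up in the logs below):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import reactor.core.publisher.Hooks;

@SpringBootApplication
public class TestApplication {

    public static void main(String[] args) {
        // Attempt: turn on Reactor's automatic context propagation before
        // the application context starts (this did not help in my case).
        Hooks.enableAutomaticContextPropagation();
        SpringApplication.run(TestApplication.class, args);
    }
}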
Dependencies:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
    implementation 'org.springframework.cloud:spring-cloud-stream'
    implementation 'org.springframework.cloud:spring-cloud-starter-stream-kafka'
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
    implementation "io.projectreactor:reactor-core:3.5.3"
    implementation "io.micrometer:context-propagation:1.0.2"
    implementation "io.micrometer:micrometer-core:1.10.4"
    implementation "io.micrometer:micrometer-tracing:1.0.2"
}
application.yml:
spring:
  cloud.stream:
    kafka.binder:
      enableObservation: true
      headers:
        - b3
    function.definition: functionStream;testStream
    default.producer.useNativeEncoding: true
    bindings:
      functionStream-in-0:
        destination: spring-in
        group: spring-test1
      functionStream-out-0:
        destination: test-in
      testStream-in-0:
        destination: test-in
        group: spring-test2
      testStream-out-0:
        destination: spring-out
  integration:
    management:
      observation-patterns: "*"
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
management:
  tracing:
    enabled: true
    sampling.probability: 1.0
    propagation.type: b3
logging.pattern.level: "%5p [%X{traceId:-},%X{spanId:-}]"
Code:
@Bean
WebClient webClient(final WebClient.Builder builder) {
    return builder.build();
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> functionStream(final WebClient webClient, final ObservationRegistry registry) {
    return flux -> flux
            .<Message<String>>handle((msg, sink) -> {
                log.info("functionStream-1");
                sink.next(msg);
            })
            .flatMap(msg -> webClient.get()
                    .uri("http://localhost:8080/test")
                    .exchangeToMono(httpResponse -> httpResponse.bodyToMono(String.class)
                            .map(httpBody -> MessageBuilder.withPayload(httpBody)
                                    .copyHeaders(httpResponse.headers().asHttpHeaders())
                                    .build())
                            .<Message<String>>handle((m, sink) -> {
                                log.info("functionStream-3");
                                sink.next(m);
                            })
                    )
            )
            .handle((msg, sink) -> {
                log.info("functionStream-2");
                sink.next(msg);
            });
}

@Bean
Function<Flux<Message<String>>, Flux<Message<String>>> testStream(final ObservationRegistry registry) {
    return flux -> flux
            .publishOn(Schedulers.boundedElastic())
            .<Message<String>>handle((msg, sink) -> {
                log.info("testStream-1");
                sink.next(msg);
            })
            .map(msg -> MessageBuilder
                    .withPayload(msg.getPayload())
                    .copyHeaders(msg.getHeaders())
                    .build());
}

@Bean
RouterFunction<ServerResponse> router(final ObservationRegistry registry) {
    return route()
            .GET("/test", r -> ServerResponse.ok().body(Mono.deferContextual(contextView -> {
                try (final var scope = ContextSnapshot.setThreadLocalsFrom(contextView, ObservationThreadLocalAccessor.KEY)) {
                    log.info("GET /test");
                }
                return Mono.just("answer");
            }), String.class))
            .build();
}
With this code, I get the following output:
2023-02-16T17:06:22.111 INFO [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:06:22.166 WARN [63ee385de15f1061dea076eb06b0d1e0,39a60588a695a702] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope@523fe6a9]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.170 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope@545339d8]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.187 WARN [63ee385de15f1061dea076eb06b0d1e0,de5d233d531b10f7] 220348 --- [container-0-C-1] i.m.o.c.ObservationThreadLocalAccessor : Scope from ObservationThreadLocalAccessor [null] is not the same as the one from ObservationRegistry [io.micrometer.observation.SimpleObservation$SimpleScope@44400bcc]. You must have created additional scopes and forgotten to close them. Will close both of them
2023-02-16T17:06:22.361 INFO [63ee385de15f1061dea076eb06b0d1e0,908f48f8485a4277] 220348 --- [ctor-http-nio-4] com.example.demo.TestApplication : GET /test
2023-02-16T17:06:22.407 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-3
2023-02-16T17:06:22.409 INFO [,] 220348 --- [ctor-http-nio-3] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:06:22.448 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.456 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.457 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,dd1b0fd86a6c39ca] 220348 --- [ctor-http-nio-3] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382456
2023-02-16T17:06:22.477 INFO [,] 220348 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:06:22.481 INFO [,] 220348 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:06:22.512 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,b5babc6bef4e30ca] 220348 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:06:22.539 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:06:22.543 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:06:22.544 INFO [63ee385eda64dcebdd1b0fd86a6c39ca,30126c50752d5928] 220348 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556382543
Without the http call, I get the following output:
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-1
2023-02-16T17:03:09.518 INFO [63ee379d924e5645fc1d9e27b8135b48,9ad408700a3b5684] 204228 --- [container-0-C-1] com.example.demo.TestApplication : functionStream-2
2023-02-16T17:03:09.615 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.629 INFO [63ee379d924e5645fc1d9e27b8135b48,3d4c6bd14a3ca4b6] 204228 --- [container-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189628
2023-02-16T17:03:09.691 INFO [,] 204228 --- [| adminclient-6] o.a.kafka.common.utils.AppInfoParser : App info kafka.admin.client for adminclient-6 unregistered
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-02-16T17:03:09.693 INFO [,] 204228 --- [| adminclient-6] o.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2023-02-16T17:03:09.859 INFO [63ee379d924e5645fc1d9e27b8135b48,b92a1a59ffd32d80] 204228 --- [oundedElastic-1] com.example.demo.TestApplication : testStream-1
2023-02-16T17:03:09.868 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.3.2
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: b66af662e61082cb
2023-02-16T17:03:09.874 INFO [63ee379d924e5645fc1d9e27b8135b48,db97f5eed98602f6] 204228 --- [oundedElastic-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1676556189874

How to close RabbitMQ connection before Spring context shutdown

Spring - Close RabbitMQ consumer connection without closing the Spring application
We have a requirement to close the RabbitMQ connection, wait a few minutes while cleaning up other resources, and then close the Spring application. We have a buffer of 6 minutes to clean up the other resources. However, during this time, the RabbitMQ connection is automatically recreated.
We are creating the connections as follows...
@Configuration
public class RabbitMQSubscribersHolder extends BaseRabbitMQConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(RabbitMQSubscribersHolder.class);

    private ConcurrentMap<String, SimpleMessageListenerContainer> containers = new ConcurrentHashMap<>();

    public RabbitMQSubscribersHolder() {
    }

    public void addSubscriber(String mqAlias, MessageListener receiver) {
        ConnectionFactory connectionFactory = this.connectionFactory(mqAlias);
        RabbitAdmin admin = null;
        try {
            admin = new RabbitAdmin(connectionFactory);
            AbstractExchange exchange = this.exchange(mqAlias);
            admin.declareExchange(exchange);
            Queue queue = null;
            if (!StringUtils.isEmpty(this.getMqQueueNameForQueueAlias(mqAlias))) {
                queue = this.queue(mqAlias);
                admin.declareQueue(queue);
            } else {
                queue = admin.declareQueue();
            }
            admin.declareBinding(this.binding(mqAlias, queue, exchange));
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setQueueNames(new String[]{queue.getName()});
            container.setMessageListener(this.listenerAdapter(receiver));
            container.setRabbitAdmin(admin);
            container.start();
            this.containers.put(mqAlias, container);
        } catch (Exception var8) {
            LOGGER.error(var8.getMessage(), var8);
        }
    }

    MessageListenerAdapter listenerAdapter(MessageListener receiver) {
        return new MessageListenerAdapter(receiver, "onMessage");
    }

    public SimpleMessageListenerContainer getContainer(String mqAlias) {
        return (SimpleMessageListenerContainer) this.containers.get(mqAlias);
    }
}
This is our destroy method:
public void destroy() {
    Iterator var1 = this.aliasToConnectionFactory.entrySet().iterator();
    while (var1.hasNext()) {
        Map.Entry<String, CachingConnectionFactory> entry = (Map.Entry) var1.next();
        try {
            ((CachingConnectionFactory) entry.getValue()).destroy();
            LOGGER.info("RabbitMQ caching connection factory closed for alias: {}", entry.getKey());
        } catch (Exception var4) {
            LOGGER.error("RabbitMQ caching connection destroy operation failed for alias: {} due to {}", entry.getKey(), StringUtils.getStackTrace(var4));
        }
    }
}
It seems the connection is re-created automatically as soon as it is destroyed (verified from Rabbit UI). Here are the logs:
2022-07-11 19:42:55.334 INFO 35761 --- [ Thread-11] c.g.f.f.r.GracefulShutdownRabbitMQ : RabbitMQ caching connection factory closed for alias: preMatchFreeze
2022-07-11 19:42:55.395 INFO 35761 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-tSAgXuPdMnLd9q79ryBWGg=fury_pre_matchfreeze_queue}], channel=Cached Rabbit Channel: AMQChannel(amqp://gwportal@10.24.128.76:5672/,1), conn: Proxy@205b73d8 Shared Rabbit Connection: null, acknowledgeMode=AUTO local queue size=0
2022-07-11 19:42:55.405 INFO 35761 --- [ Thread-11] c.g.f.f.r.GracefulShutdownRabbitMQ : RabbitMQ caching connection factory closed for alias: parentMatchFreezeEds
2022-07-11 19:42:55.447 INFO 35761 --- [ Thread-11] c.g.f.f.r.GracefulShutdownRabbitMQ : RabbitMQ caching connection factory closed for alias: fantasy_matchFreeze
2022-07-11 19:42:55.489 INFO 35761 --- [ Thread-11] c.g.f.f.r.GracefulShutdownRabbitMQ : RabbitMQ caching connection factory closed for alias: parentMatchFreezeCore
2022-07-11 19:42:55.489 INFO 35761 --- [ Thread-11] c.g.f.fury.GracefulShutdownHandler : Waiting before starting graceful shutdown
2022-07-11 19:42:55.794 INFO 35761 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-sK8_FY4PtHW52HXkQV3S_w=fury_match_freeze_queue}], channel=Cached Rabbit Channel: AMQChannel(amqp://gwportal@10.24.128.76:5672/,1), conn: Proxy@4bc9389 Shared Rabbit Connection: null, acknowledgeMode=AUTO local queue size=0
2022-07-11 19:42:56.046 INFO 35761 --- [cTaskExecutor-2] o.s.a.r.c.CachingConnectionFactory : Created new connection: SimpleConnection@3828e912 [delegate=amqp://gwportal@10.24.128.76:5672/, localPort= 58170]
2022-07-11 19:42:56.098 INFO 35761 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{amq.ctag-KyvnWaqZco9NRTessjupJQ=fantasy_parent_match_freeze_queue}], channel=Cached Rabbit Channel: AMQChannel(amqp://gwportal@10.24.128.76:5672/,1), conn: Proxy@17d45cfb Shared Rabbit Connection: null, acknowledgeMode=AUTO local queue size=0
2022-07-11 19:43:13.127 INFO 35761 --- [cTaskExecutor-2] o.s.a.r.c.CachingConnectionFactory : Created new connection: SimpleConnection@4c3cefad [delegate=amqp://gwportal@10.24.141.77:5672/, localPort= 58182]
2022-07-11 19:43:16.849 INFO 35761 --- [cTaskExecutor-2] o.s.a.r.c.CachingConnectionFactory : Created new connection: SimpleConnection@801eaf8 [delegate=amqp://gwportal@10.24.141.77:5672/, localPort= 58185]
You need to stop the message listener container(s) to prevent them from trying to reconnect.
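For example, roughly like this (a sketch inside the posted RabbitMQSubscribersHolder; aliasToConnectionFactory is the map already used in the destroy method):

public void shutdownRabbit() {
    // 1. Stop the listener containers first; a running container notices the
    //    closed connection and transparently opens a new one.
    for (SimpleMessageListenerContainer container : containers.values()) {
        container.stop();
    }
    // 2. With nothing left to trigger a reconnect, the caching connection
    //    factories can now be destroyed safely.
    for (CachingConnectionFactory factory : aliasToConnectionFactory.values()) {
        factory.destroy();
    }
}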

Kafka consumer LeaveGroup request to coordinator

I have one interesting scenario. It seems that when there are no new messages to pick up (at least that's what I think is happening), my consumer suddenly shuts down.
I am using Kotlin + Spring Boot Kafka Producer and Consumer. My consumer is configured like this:
spring:
  profiles:
    active: ${SPRING_PROFILE:prod}
  cloud:
    stream:
      bindingRetryInterval: ${BINDING_RETRY_INTERVAL:0}
      default:
        contentType: application/*+avro
        group: my-app
        consumer:
          useNativeDecoding: false
          concurrency: ${CONSUMER_CONCURRENCY:3}
          maxAttempts: ${CONSUMER_MAX_ATTEMPTS:3}
      bindings:
        outbox:
          # Topic to consume from
          destination: my_outbox_topic
      kafka:
        bindings:
          outbox:
            consumer:
              enableDlq: ${ENABLE_DLQ:false}
        binder:
          brokers: ${BOOTSTRAP_SERVERS:localhost:9092}
          configuration.ssl.endpoint.identification.algorithm: ${SSL_ALGORITHM:}
          consumerProperties:
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            specific.avro.reader: false
So I start the app, it consumes for a while, and then I get these types of logs:
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2,
groupId=my-app] Member consumer-2-dc0e4115-6e9a-4a99-9d04-a3a2efb2c7b3 sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-2, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=my-app] Member consumer-3-701431a3-7cb9-4962-8f89-f2df41a176fd sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-3, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.189 INFO 4254 --- [container-1-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped
2021-09-28 12:57:38.197 INFO 4254 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped

Kafka SASL_PLAINTEXT / SCRAM authentication fails in Spring Boot consumer

I'm trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
When I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM, authentication fails for both SHA-512 and SHA-256.
I tried many different things, but it's not working. How can I fix it?
Broker log:
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring Boot log:
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
My docker-compose.yml:
...
...
zookeeper3:
  image: confluentinc/cp-zookeeper:6.0.1
  hostname: zookeeper3
  container_name: zookeeper3
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
    ZOOKEEPER_CLIENT_PORT: 2183
    ZOOKEEPER_TICK_TIME: 2000
    ZOOKEEPER_SERVER_ID: 3
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
      -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
      -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
      -Dquorum.auth.enableSasl=true \
      -Dquorum.auth.learnerRequireSasl=true \
      -Dquorum.auth.serverRequireSasl=true \
      -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
      -Dquorum.auth.server.saslLoginContext=QuorumServer \
      -Dquorum.cnxn.threads.size=20 \
      -DrequireClientAuthScheme=sasl"
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
broker1:
  image: confluentinc/cp-kafka:6.0.1
  hostname: broker1
  container_name: broker1
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  ports:
    - "9091:9091"
    - "9101:9101"
    - "29091:29091"
  expose:
    - "29090"
  environment:
    KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
    KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
    KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
broker2:
  image: confluentinc/cp-kafka:6.0.1
  hostname: broker2
  container_name: broker2
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  ports:
    - "9092:9092"
    - "9102:9102"
    - "29092:29092"
  expose:
    - "29090"
  environment:
    KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
    KAFKA_BROKER_ID: 2
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
    KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
    KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9102
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
  volumes:
    - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_client="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="client"
  password="password";
};
zookeeper_jaas.conf:
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="password";
};
QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};
ConsumerConfig.java:
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE = "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
Solved. It was because I didn't add the user information in ZooKeeper.
Adding this service fixed it:
zookeeper-add-kafka-users:
  image: confluentinc/cp-kafka:6.0.1
  container_name: "zookeeper-add-kafka-users"
  depends_on:
    - zookeeper1
    - zookeeper2
    - zookeeper3
  command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
    cub zk-ready zookeeper1:2181 120 && \
    cub zk-ready zookeeper2:2182 120 && \
    cub zk-ready zookeeper3:2183 120 && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
    kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
  environment:
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
    KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
  volumes:
    - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
If you don't use Docker, you can run the command directly:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
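To double-check that the credentials were actually stored, the same tool can describe the user config (a sketch; --describe instead of --alter):
bin/kafka-configs --zookeeper localhost:2181 --describe --entity-type users --entity-name client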

Spring Cloud Gateway in Docker Compose returns ERR_NAME_NOT_RESOLVED

I'm building a microservices app, and I've run into a problem configuring the Spring Cloud Gateway to proxy calls to the API from a frontend running on an Nginx server.
When I make a POST request to /users/login, I get this response: OPTIONS http://28a41511677e:8082/login net::ERR_NAME_NOT_RESOLVED.
The string 28a41511677e is the service's Docker container ID. When I call another service (using the GET method), it returns data just fine.
I'm using a Eureka discovery server, which seems to find all the services correctly. The service in question is registered as 28a41511677e:users-service:8082.
Docker compose:
version: "3.7"
services:
db:
build: db/
expose:
- 5432
registry:
build: registryservice/
expose:
- 8761
ports:
- 8761:8761
gateway:
build: gatewayservice/
expose:
- 8080
depends_on:
- registry
users:
build: usersservice/
expose:
- 8082
depends_on:
- registry
- db
timetable:
build: timetableservice/
expose:
- 8081
depends_on:
- registry
- db
ui:
build: frontend/
expose:
- 80
ports:
- 80:80
depends_on:
- gateway
Gateway implementation:
@EnableDiscoveryClient
@SpringBootApplication
public class GatewayserviceApplication {

    @Bean
    public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("users-service", p -> p.path("/user/**")
                        .uri("lb://users-service"))
                .route("timetable-service", p -> p.path("/routes/**")
                        .uri("lb://timetable-service"))
                .build();
    }

    public static void main(String[] args) {
        SpringApplication.run(GatewayserviceApplication.class, args);
    }
}
Gateway settings:
spring:
  application:
    name: gateway-service
  cloud:
    gateway:
      globalcors:
        cors-configurations:
          '[/**]':
            allowedOrigins: "*"
            allowedMethods:
              - GET
              - POST
              - PUT
              - DELETE
eureka:
  client:
    service-url:
      defaultZone: http://registry:8761/eureka
Users service controller:
@RestController
@CrossOrigin
@RequestMapping("/user")
public class UserController {

    private static final Logger logger = LoggerFactory.getLogger(UserController.class);

    private UserService userService;

    @Autowired
    public UserController(UserService userService) {
        this.userService = userService;
    }

    @PostMapping(path = "/login")
    ResponseEntity<Long> login(@RequestBody LoginDto loginDto) {
        logger.info("Logging in user");
        Long uid = userService.logIn(loginDto);
        return new ResponseEntity<>(uid, HttpStatus.OK);
    }
}
Edit:
This also happens on the npm dev server. I tried changing lb://users-service to http://users:8082, with no success; I still get ERR_NAME_NOT_RESOLVED.
However, I found that when I call the endpoint, the following output appears in the log:
gateway_1 | 2019-05-19 23:55:10.842 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
gateway_1 | 2019-05-19 23:55:10.866 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
gateway_1 | 2019-05-19 23:55:10.867 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
gateway_1 | 2019-05-19 23:55:10.868 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
gateway_1 | 2019-05-19 23:55:10.868 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
gateway_1 | 2019-05-19 23:55:10.869 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
gateway_1 | 2019-05-19 23:55:10.871 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
gateway_1 | 2019-05-19 23:55:11.762 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
users_1 | 2019-05-19 21:55:19.268 INFO 1 --- [nio-8082-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
users_1 | 2019-05-19 21:55:19.273 INFO 1 --- [nio-8082-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
users_1 | 2019-05-19 21:55:19.513 INFO 1 --- [nio-8082-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 239 ms
users_1 | 2019-05-19 21:55:20.563 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
users_1 | 2019-05-19 21:55:20.565 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
users_1 | 2019-05-19 21:55:20.565 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
users_1 | 2019-05-19 21:55:20.566 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
users_1 | 2019-05-19 21:55:20.567 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
users_1 | 2019-05-19 21:55:20.958 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
Edit 2:
I enabled logging for the gateway service, and this is the output whenever I call /user/login. According to the logs, the gateway matches /user/login correctly but then starts using just /login for some reason.
2019-05-20 12:58:47.002 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] New http connection, requesting read
2019-05-20 12:58:47.025 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.BootstrapHandlers : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-05-20 12:58:47.213 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Increasing pending responses, now 1
2019-05-20 12:58:47.242 DEBUG 1 --- [or-http-epoll-2] reactor.netty.http.server.HttpServer : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@575e590e
2019-05-20 12:58:47.379 TRACE 1 --- [or-http-epoll-2] o.s.c.g.f.WeightCalculatorWebFilter : Weights attr: {}
2019-05-20 12:58:47.817 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_USERS-SERVICE applying {pattern=/USERS-SERVICE/**} to Path
2019-05-20 12:58:47.952 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_USERS-SERVICE applying filter {regexp=/USERS-SERVICE/(?<remaining>.*), replacement=/${remaining}} to RewritePath
2019-05-20 12:58:47.960 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition matched: CompositeDiscoveryClient_USERS-SERVICE
2019-05-20 12:58:47.961 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_GATEWAY-SERVICE applying {pattern=/GATEWAY-SERVICE/**} to Path
2019-05-20 12:58:47.964 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition CompositeDiscoveryClient_GATEWAY-SERVICE applying filter {regexp=/GATEWAY-SERVICE/(?<remaining>.*), replacement=/${remaining}} to RewritePath
2019-05-20 12:58:47.968 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.r.RouteDefinitionRouteLocator : RouteDefinition matched: CompositeDiscoveryClient_GATEWAY-SERVICE
2019-05-20 12:58:47.979 TRACE 1 --- [or-http-epoll-2] o.s.c.g.h.p.RoutePredicateFactory : Pattern "/user/**" matches against value "/user/login"
2019-05-20 12:58:47.980 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : Route matched: users-service
2019-05-20 12:58:47.981 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: POST http://gateway:8080/user/login] to Route{id='users-service', uri=lb://users-service, order=0, predicate=org.springframework.cloud.gateway.support.ServerWebExchangeUtils$$Lambda$333/0x000000084035ac40@276b060f, gatewayFilters=[]}
2019-05-20 12:58:47.981 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : [ff6d8305] Mapped to org.springframework.cloud.gateway.handler.FilteringWebHandler@4faea64b
2019-05-20 12:58:47.994 DEBUG 1 --- [or-http-epoll-2] o.s.c.g.handler.FilteringWebHandler : Sorted gatewayFilterFactories: [OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter@773f7880}, order=-2147482648}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter@65a4798f}, order=-1}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter@4c51bb7}, order=0}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter@878452d}, order=10000}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerClientFilter@4f2613d1}, order=10100}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter@83298d7}, order=2147483646}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter@6d24ffa1}, order=2147483647}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter@426b6a74}, order=2147483647}]
2019-05-20 12:58:47.996 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start
2019-05-20 12:58:47.999 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url before: lb://users-service/user/login
2019-05-20 12:58:48.432 INFO 1 --- [or-http-epoll-2] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:48.492 INFO 1 --- [or-http-epoll-2] c.n.u.concurrent.ShutdownEnabledTimer : Shutdown hook installed for: NFLoadBalancer-PingTimer-users-service
2019-05-20 12:58:48.496 INFO 1 --- [or-http-epoll-2] c.netflix.loadbalancer.BaseLoadBalancer : Client: users-service instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=users-service,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2019-05-20 12:58:48.506 INFO 1 --- [or-http-epoll-2] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater
2019-05-20 12:58:48.543 INFO 1 --- [or-http-epoll-2] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:48.555 INFO 1 --- [or-http-epoll-2] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client users-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=users-service,current list of Servers=[157e1f567371:8082],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:157e1f567371:8082; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 01:00:00 CET 1970; First connection made: Thu Jan 01 01:00:00 CET 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@3cd9b0bf
2019-05-20 12:58:48.580 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url chosen: http://157e1f567371:8082/user/login
2019-05-20 12:58:48.632 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : Creating new client pool [proxy] for 157e1f567371:8082
2019-05-20 12:58:48.646 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439] Created new pooled channel, now 0 active connections and 1 inactive connections
2019-05-20 12:58:48.651 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.BootstrapHandlers : [id: 0xa9634439] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (SimpleChannelPool$1#0 = io.netty.channel.pool.SimpleChannelPool$1), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-05-20 12:58:48.673 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}, [connected])
2019-05-20 12:58:48.679 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [configured])
2019-05-20 12:58:48.682 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Registering pool release on close event for channel
2019-05-20 12:58:48.690 DEBUG 1 --- [or-http-epoll-2] r.netty.http.client.HttpClientConnect : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Handler is being applied: {uri=http://157e1f567371:8082/user/login, method=POST}
2019-05-20 12:58:48.701 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] New sending options
2019-05-20 12:58:48.720 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /user/login HTTP/1.1
content-length: 37
accept-language: cs-CZ,cs;q=0.9,en;q=0.8
referer: http://localhost/user/login
cookie: JSESSIONID=6797219EB79F6026BD8F19E9C46C09DB
accept: application/json, text/plain, */*
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36
content-type: application/json;charset=UTF-8
origin: http://gateway:8080
accept-encoding: gzip, deflate, br
Forwarded: proto=http;host="gateway:8080";for="172.19.0.7:42958"
X-Forwarded-For: 172.19.0.1,172.19.0.7
X-Forwarded-Proto: http,http
X-Forwarded-Port: 80,8080
X-Forwarded-Host: localhost,gateway:8080
host: 157e1f567371:8082
2019-05-20 12:58:48.751 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Channel connected, now 1 active connections and 0 inactive connections
2019-05-20 12:58:48.759 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object
2019-05-20 12:58:48.762 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.FluxReceive : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Subscribing inbound receiver [pending: 1, cancelled:false, inboundDone: true]
2019-05-20 12:58:48.808 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Writing object EmptyLastHttpContent
2019-05-20 12:58:48.809 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [request_sent])
2019-05-20 12:58:49.509 INFO 1 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: users-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-05-20 12:58:49.579 DEBUG 1 --- [or-http-epoll-2] r.n.http.client.HttpClientOperations : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Received response (auto-read:false) : [Set-Cookie=JSESSIONID=7C47A99C1F416F910AB554F4617247D6; Path=/; HttpOnly, X-Content-Type-Options=nosniff, X-XSS-Protection=1; mode=block, Cache-Control=no-cache, no-store, max-age=0, must-revalidate, Pragma=no-cache, Expires=0, X-Frame-Options=DENY, Location=http://157e1f567371:8082/login, Content-Length=0, Date=Mon, 20 May 2019 10:58:49 GMT]
2019-05-20 12:58:49.579 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [response_received])
2019-05-20 12:58:49.581 TRACE 1 --- [or-http-epoll-2] o.s.c.g.filter.NettyWriteResponseFilter : NettyWriteResponseFilter start
2019-05-20 12:58:49.586 DEBUG 1 --- [or-http-epoll-2] reactor.netty.channel.FluxReceive : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Subscribing inbound receiver [pending: 0, cancelled:false, inboundDone: false]
2019-05-20 12:58:49.586 DEBUG 1 --- [or-http-epoll-2] r.n.http.client.HttpClientOperations : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Received last HTTP packet
2019-05-20 12:58:49.593 DEBUG 1 --- [or-http-epoll-2] r.n.channel.ChannelOperationsHandler : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Writing object DefaultFullHttpResponse(decodeResult: success, version: HTTP/1.1, content: EmptyByteBufBE)
HTTP/1.1 302 Found
Set-Cookie: JSESSIONID=7C47A99C1F416F910AB554F4617247D6; Path=/; HttpOnly
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: http://157e1f567371:8082/login
Date: Mon, 20 May 2019 10:58:49 GMT
content-length: 0
2019-05-20 12:58:49.595 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Detected non persistent http connection, preparing to close
2019-05-20 12:58:49.595 DEBUG 1 --- [or-http-epoll-2] r.n.http.server.HttpServerOperations : [id: 0xff6d8305, L:/172.19.0.4:8080 - R:/172.19.0.7:42958] Last Http packet was sent, terminating channel
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] onStateChange(POST{uri=/user/login, connection=PooledConnection{channel=[id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082]}}, [disconnecting])
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Releasing channel
2019-05-20 12:58:49.598 DEBUG 1 --- [or-http-epoll-2] r.n.resources.PooledConnectionProvider : [id: 0xa9634439, L:/172.19.0.4:59624 - R:157e1f567371/172.19.0.5:8082] Channel cleaned, now 0 active connections and 1 inactive connections
I managed to fix it. The problem was actually not in the gateway; it was in the users service. It had an improper security configuration and required a login when accessing its endpoints. So when I called any endpoint, it got redirected to /login.
I added the following code to the service and it works properly now.
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Override
    protected void configure(HttpSecurity httpSecurity) throws Exception {
        httpSecurity.authorizeRequests().antMatchers("/").permitAll();
        httpSecurity.cors().and().csrf().disable();
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        CorsConfiguration configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Arrays.asList("*"));
        configuration.setAllowedMethods(Arrays.asList("*"));
        configuration.setAllowedHeaders(Arrays.asList("*"));
        configuration.setAllowCredentials(true);
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        return source;
    }
}
That's probably not a proper solution, but for non-production code it gets the job done.
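For anything beyond local testing, a hedged sketch of a tighter version of the same configuration (the allowed origin is an assumption; substitute the real frontend host):

@Override
protected void configure(HttpSecurity httpSecurity) throws Exception {
    // Permit only the endpoints meant to be public instead of effectively
    // everything, and require authentication for the rest.
    httpSecurity.authorizeRequests()
            .antMatchers("/user/login").permitAll()
            .anyRequest().authenticated();
    httpSecurity.cors().and().csrf().disable();
}

@Bean
CorsConfigurationSource corsConfigurationSource() {
    CorsConfiguration configuration = new CorsConfiguration();
    // Assumption: the Nginx frontend is served from http://localhost.
    // A specific origin is also required once allowCredentials is true.
    configuration.setAllowedOrigins(Arrays.asList("http://localhost"));
    configuration.setAllowedMethods(Arrays.asList("GET", "POST", "PUT", "DELETE"));
    configuration.setAllowedHeaders(Arrays.asList("*"));
    configuration.setAllowCredentials(true);
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", configuration);
    return source;
}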
