How to shut down/stop a RabbitMQ consumer of Spring Cloud Stream bindings - spring-boot

I want to stop the RabbitMQ consumers of a queue created from Spring Cloud Stream bindings when the endpoint /prepare-for-shutdown is hit. Please find the configuration below.
Dependency added in pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
Application.yml:
spring:
  cloud:
    stream:
      bindings:
        produceChannel:
          binder: rabbit
          content-type: application/json
          destination: internal-exchange
        consumeChannel:
          binder: rabbit
          content-type: application/json
          destination: internal-exchange
          group: small-queue
      rabbit:
        bindings:
          consumeChannel:
            consumer:
              autoBindDlq: true
              durableSubscription: true
              requeueRejected: false
              republishToDlq: true
              bindingRoutingKey: admin
          produceChannel:
            producer:
              routingKeyExpression: '"admin"'
Sample.java:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.SubscribableChannel;

public interface Sample {

    @Input("consumeChannel")
    SubscribableChannel consumeChannel();

    @Output("produceChannel")
    SubscribableChannel produceChannel();
}
The integration with RabbitMQ has been achieved using Spring Cloud's @StreamListener and @EnableBinding abstractions, as shown below:
@EnableBinding(Sample.class)
public class SampleListener {

    @StreamListener("consumeChannel")
    public void sampleMessage(String message) {
        // code
    }
}
I am looking to stop a consumer of a RabbitMQ queue programmatically. Thanks in advance.
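One possible approach is sketched below. It is not from the question: it assumes Spring Boot Actuator is on the classpath so that Spring Cloud Stream registers a BindingsEndpoint bean, and ShutdownController is a hypothetical class; in some Spring Cloud Stream versions the State enum lives on BindingsLifecycleController instead of BindingsEndpoint.

import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ShutdownController {

    private final BindingsEndpoint bindingsEndpoint;

    public ShutdownController(BindingsEndpoint bindingsEndpoint) {
        this.bindingsEndpoint = bindingsEndpoint;
    }

    @PostMapping("/prepare-for-shutdown")
    public void prepareForShutdown() {
        // "consumeChannel" is the binding name from the bindings section above;
        // STOPPED stops the underlying RabbitMQ listener container for that binding
        bindingsEndpoint.changeState("consumeChannel", BindingsEndpoint.State.STOPPED);
    }
}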

I analyzed why I am getting empty values when invoking the actuator endpoint /actuator/bindings.
When hitting the actuator bindings endpoint, it invokes the method gatherInputBindings() in BindingsEndpoint.class.
In BindingsEndpoint.java, the binding values are fetched from inputBindingLifecycle:
(Collection<Binding<?>>) new DirectFieldAccessor(inputBindingLifecycle).getPropertyValue("inputBindings");
In the methods below, an empty bindings list is set on inputBindings.
In InputBindingLifecycle.java:
void doStartWithBindable(Bindable bindable) {
    this.inputBindings = bindable.createAndBindInputs(bindingService);
}
In Bindable.java:
default Collection<Binding<Object>> createAndBindInputs(BindingService adapter) {
    return Collections.<Binding<Object>>emptyList();
}
Please suggest how to fix this: do I need to change any dependency or any code configuration?

Related

Spring Cloud Stream [2021.0.5] Kafka Batch mode Avro native encoding doesn't work with spring cloud sleuth

I'm working on upgrading Spring Boot to 2.7.8 and Spring Cloud to 2021.0.5.
I have a Spring Cloud Stream Kafka consumer using Avro deserialization in batch mode, and I was trying to use useNativeEncoding according to the documentation.
The problem is that when using an input of Message<List>, the Spring Cloud Stream code (when using Sleuth) overrides the native encoding flag to false in the class SimpleFunctionRegistry, and thus the message payload is empty.
Without the Message wrapper, i.e. using a plain List, it works fine.
After spending more than a day trying to debug the problem without understanding why, I moved it into a side project to test, and it stopped working once Sleuth was added.
The Bug
The problem is in the class SimpleFunctionRegistry, in the method private FunctionInvocationWrapper wrapInAroundAdviceIfNecessary(FunctionInvocationWrapper function): it calls apply and overrides the flag.
Spring Cloud Stream team, is there any workaround or an easy fix?
application.yaml example:
spring:
  cloud:
    stream:
      binders:
        kafka-string-avro-native:
          type: kafka
          defaultCandidate: true
          environment.spring.cloud.stream.kafka.binder.consumerProperties:
            dlqProducerProperties.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
            dlqProducerProperties.configuration.value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://0.0.0.0:55013}
            specific.avro.reader: true
            useNativeDecoding: true
      bindings:
        revenueEventConsumer-in-0:
          binder: kafka-string-avro-native
          destination: email.campaign_revenue_events
          group: test-4
          consumer:
            concurrency: 1
            batch-mode: true
            use-native-decoding: true
      function:
        definition: revenueEventConsumer
      kafka:
        binder:
          brokers: 0.0.0.0:55008
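For reference, a batch-mode consumer matching the function definition above might look like the following sketch; RevenueEvent and RevenueEventConsumerConfig are placeholders, since the actual Avro class and consumer code are not shown in the question.

import java.util.List;
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class RevenueEventConsumerConfig {

    @Bean
    public Consumer<Message<List<RevenueEvent>>> revenueEventConsumer() {
        return message -> {
            // with batch mode and native decoding, the payload is a list of Avro objects
            List<RevenueEvent> events = message.getPayload();
            events.forEach(event -> System.out.println("received " + event));
        };
    }
}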
I found a workaround for the issue by overriding the TraceFunctionAroundWrapper bean and calling setSkipInputConversion(true).
See the code below:
@Bean
@Primary
TraceFunctionAroundWrapper customTraceFunctionAroundWrapper(Environment environment, Tracer tracer, Propagator propagator,
        Propagator.Setter<MessageHeaderAccessor> injector, Propagator.Getter<MessageHeaderAccessor> extractor,
        ObjectProvider<List<FunctionMessageSpanCustomizer>> customizers) {
    return new CustomTraceFunctionAroundWrapper(environment, tracer, propagator, injector, extractor,
            customizers.getIfAvailable(ArrayList::new));
}

public class CustomTraceFunctionAroundWrapper extends TraceFunctionAroundWrapper {

    public CustomTraceFunctionAroundWrapper(Environment environment, Tracer tracer,
            Propagator propagator,
            Propagator.Setter<MessageHeaderAccessor> injector,
            Propagator.Getter<MessageHeaderAccessor> extractor) {
        super(environment, tracer, propagator, injector, extractor);
    }

    public CustomTraceFunctionAroundWrapper(Environment environment, Tracer tracer, Propagator propagator,
            Propagator.Setter<MessageHeaderAccessor> injector,
            Propagator.Getter<MessageHeaderAccessor> extractor,
            List<FunctionMessageSpanCustomizer> customizers) {
        super(environment, tracer, propagator, injector, extractor, customizers);
    }

    @Override
    protected Object doApply(Object message, SimpleFunctionRegistry.FunctionInvocationWrapper targetFunction) {
        targetFunction.setSkipInputConversion(true);
        return super.doApply(message, targetFunction);
    }
}
This is only a workaround until the bug is fixed in Spring Cloud Stream and Sleuth.

Spring-cloud kafka stream schema registry

I am trying to transform, with functional programming (and Spring Cloud Stream), an input Avro message from an input topic and publish a new message on an output topic.
Here is my transform function:
@Bean
public Function<KStream<String, Data>, KStream<String, Double>> evenNumberSquareProcessor() {
    return kStream -> kStream.transform(() -> new CustomProcessor(STORE_NAME), STORE_NAME);
}
The CustomProcessor is a class that implements the Transformer interface; a sketch of what such a class might look like is shown below.
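The sketch below is hypothetical (the asker's implementation is not shown): a Transformer that uses a key-value state store and squares a numeric field of the Avro record; Data.getNumber() is an assumed accessor.

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class CustomProcessor implements Transformer<String, Data, KeyValue<String, Double>> {

    private final String storeName;
    private KeyValueStore<String, Double> store;

    public CustomProcessor(String storeName) {
        this.storeName = storeName;
    }

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        // the state store must be registered with the topology under the same name
        this.store = (KeyValueStore<String, Double>) context.getStateStore(storeName);
    }

    @Override
    public KeyValue<String, Double> transform(String key, Data value) {
        double squared = value.getNumber() * value.getNumber();
        store.put(key, squared);
        return KeyValue.pair(key, squared);
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}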
I have tried the transformation with non-Avro input and it works fine.
My difficulty is how to declare the schema registry in the application.yaml file or in the Spring application.
I have tried a lot of different configurations (it seems difficult to find the right documentation), and each time the application doesn't find the settings for schema.registry.url. I get the following error:
Error creating bean with name 'kafkaStreamsFunctionProcessorInvoker':
Invocation of init method failed; nested exception is
java.lang.IllegalStateException:
org.apache.kafka.common.config.ConfigException: Missing required
configuration "schema.registry.url" which has no default value.
Here is my application.yml file:
spring:
  cloud:
    stream:
      function:
        definition: evenNumberSquareProcessor
      bindings:
        evenNumberSquareProcessor-in-0:
          destination: input
          content-type: application/*+avro
          group: group-1
        evenNumberSquareProcessor-out-0:
          destination: output
      kafka:
        binder:
          brokers: my-cluster-kafka-bootstrap.kafka:9092
          consumer-properties:
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
I have tried this configuration too:
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: my-cluster-kafka-bootstrap.kafka:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            evenNumberSquareProcessor-in-0:
              consumer:
                destination: input
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            evenNumberSquareProcessor-out-0:
              destination: output
My Spring Boot application is declared in this way, with the activation of the schema registry client:
@EnableSchemaRegistryClient
@SpringBootApplication
public class TransformApplication {

    public static void main(String[] args) {
        SpringApplication.run(TransformApplication.class, args);
    }
}
Thanks for any help you could bring to me.
Regards
CG
Configure the schema registry under the binder's configuration; it will then be available to all bindings. By the way, the Avro Serde is set under the bindings for the specific channel. If you want a default, use the property default.value.serde. Your Serde might be the wrong one too.
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: localhost:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            process-in-0:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
Don't use @EnableSchemaRegistryClient. Enable the schema registry on the Avro Serde instead. In this example, I am using the Data bean from your definition. Try to follow this example here.
@Service
public class CustomSerdes extends Serdes {

    // SCHEMA_REGISTRY_URL_CONFIG ("schema.registry.url") is typically statically
    // imported from Confluent's AbstractKafkaAvroSerDeConfig
    private static final Map<String, String> serdeConfig = Stream.of(
            new AbstractMap.SimpleEntry<>(SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

    public static Serde<Data> DataAvro() {
        final Serde<Data> dataAvroSerde = new SpecificAvroSerde<>();
        // false = this Serde is used for record values, not keys
        dataAvroSerde.configure(serdeConfig, false);
        return dataAvroSerde;
    }
}

Why is X-RateLimit-Remaining -1 in the response header when using Spring Cloud API Gateway rate limiting with Redis?

I implemented rate limiting with Redis in my Spring Cloud API Gateway. Here is part of application.yml:
spring:
  cloud:
    gateway:
      httpclient:
        ssl:
          useInsecureTrustManager: true
      discovery:
        locator:
          enabled: true
      routes:
        - id: test-rest-service
          uri: lb://test-rest-service
          predicates:
            - Path=/test/**
          filters:
            - RewritePath=/test/(?<path>.*), /$\{path}
            - name: RequestRateLimiter
              args:
                key-resolver: "#{@userRemoteAddressResolver}"
                redis-rate-limiter.replenishRate: 2
                redis-rate-limiter.burstCapacity: 3
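The key-resolver above references a userRemoteAddressResolver bean by name. A minimal sketch of such a bean is shown here as an assumption, since the asker's bean is not part of the question; RateLimiterConfig is a hypothetical class name.

import java.util.Objects;

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimiterConfig {

    @Bean
    public KeyResolver userRemoteAddressResolver() {
        // rate-limit per client IP address
        return exchange -> Mono.just(
                Objects.requireNonNull(exchange.getRequest().getRemoteAddress())
                        .getAddress().getHostAddress());
    }
}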
I called a GET API via Postman and checked the response headers:
X-RateLimit-Remaining -1
X-RateLimit-Burst-Capacity 3
X-RateLimit-Replenish-Rate 2
The rate limit is not working. Why am I getting negative value for X-RateLimit-Remaining? What does it mean? How do I fix it?
This happened to me because there was no Redis instance launched. You have two options:
1) Download and run a Redis instance using docker:
docker run --name redis -d redis
2) For testing, you can use an embedded Redis server, as explained in the following article, by adding the Maven dependency:
<dependency>
    <groupId>it.ozimov</groupId>
    <artifactId>embedded-redis</artifactId>
    <version>0.7.2</version>
    <scope>test</scope>
</dependency>
And including the following snippet:
@TestConfiguration
public class TestRedisConfiguration {

    private RedisServer redisServer;

    public TestRedisConfiguration() {
        this.redisServer = new RedisServer(6379);
    }

    @PostConstruct
    public void postConstruct() {
        redisServer.start();
    }

    @PreDestroy
    public void preDestroy() {
        redisServer.stop();
    }
}
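One possible way to use it (not part of the original answer; RateLimiterIntegrationTest is a hypothetical test class) is to import the configuration into an integration test:

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;

@SpringBootTest
@Import(TestRedisConfiguration.class)
class RateLimiterIntegrationTest {
    // tests that exercise the RequestRateLimiter filter go here
}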
I faced the same issue recently. In my case, an older version of Redis was already installed, which caused X-RateLimit-Remaining to be set to -1 constantly. Shutting down that stale instance fixed it:
redis-cli shutdown

Why isn't the KafkaTransactionManager being applied to this Spring Cloud Stream Kafka Producer?

I have configured a Spring Cloud Stream Kafka application to use transactions (full source code available on Github):
spring:
  application:
    name: message-relay-service
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            transaction-id-prefix: message-relay-tx-
            producer:
              configuration:
                retries: 1
                acks: all
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
      bindings:
        output:
          destination: transfer
          contentType: application/*+avro
      schema-registry-client:
        endpoint: http://localhost:8081
      schema:
        avro:
          subjectNamingStrategy: org.springframework.cloud.stream.schema.avro.QualifiedSubjectNamingStrategy
  datasource:
    url: jdbc:h2:tcp://localhost:9090/mem:mydb
    driver-class-name: org.h2.Driver
    username: sa
    password:
  jpa:
    hibernate:
      ddl-auto: create
    database-platform: org.hibernate.dialect.H2Dialect
server:
  port: 8085
This app has a scheduled task that periodically checks a database for records to send, using a @Scheduled method. The method is annotated with @Transactional, and the main class declares @EnableTransactionManagement.
However, when debugging the code I've realized that the KafkaTransactionManager isn't being invoked, that is to say, there are no Kafka transactions in place. What's the problem?
@EnableTransactionManagement
@EnableBinding(Source::class)
@EnableScheduling
@SpringBootApplication
class MessageRelayServiceApplication

fun main(args: Array<String>) {
    runApplication<MessageRelayServiceApplication>(*args)
}
---
@Component
class MessageRelay(private val outboxService: OutboxService,
                   private val source: Source) {

    @Transactional
    @Scheduled(fixedDelay = 10000)
    fun checkOutbox() {
        val pending = outboxService.getPending()
        pending.forEach {
            val message = MessageBuilder.withPayload(it.payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, it.messageKey)
                .setHeader(MessageHeaders.CONTENT_TYPE, it.contentType)
                .build()
            source.output().send(message)
            outboxService.markAsProcessed(it.id)
        }
    }
}
I don't see @EnableTransactionManagement in account-service, only in message-relay-service.
In any case, your scenario is not currently supported; the transactional binder was designed for processors where the consumer starts the transaction, any records sent on the consumer thread participate in that transaction, the consumer sends the offset to the transaction and then commits the transaction.
It was not designed for producer-only bindings; please open a GitHub issue against the binder because it should be supported.
I am not sure why you are not seeing a transaction starting but, even if it does, the problem is that @Transactional will use Boot's auto-configured KafkaTransactionManager (and producer factory), while the binding uses a different producer factory (the one from your configuration).
Even if a transaction is in process, the producer won't participate in it.
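For context, here is a minimal sketch (not from the question; RelayProcessor is a hypothetical class) of the processor-style binding the transactional binder was designed for, where the output record is sent on the consumer thread and therefore participates in the consumer-initiated transaction:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(Processor.class)
public class RelayProcessor {

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String relay(String payload) {
        // the returned record is sent on the consumer thread, inside the
        // consumer-initiated Kafka transaction
        return payload;
    }
}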

Kafka producer JSON serialization

I'm trying to use Spring Cloud Stream to integrate with Kafka. The message being written is a Java POJO, and while it works as expected (the message is written to the topic and I can read it off with a consumer app), there are some unknown characters being added to the start of the message which are causing trouble when trying to integrate Kafka Connect to sink the messages from the topic.
With the default setup this is the message being pushed to Kafka:
 contentType "text/plain"originalContentType "application/json;charset=UTF-8"{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471,"version":null}}
If I configure the Kafka producer within the Java app then the message is written to the topic without the leading characters / headers:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Message on Kafka:
{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471}
Since I'm just setting the key/value serializers I would've expected to be able to do this within the application.yml properties file, rather than doing it through the code.
However, when the yml is updated to specify the serializers, it does not work as I would expect, i.e. it does not generate the same message as the producer configured in Java (above):
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
Message on Kafka:
"/wILY29udGVudFR5cGUAAAAMInRleHQvcGxhaW4iE29yaWdpbmFsQ29udGVudFR5cGUAAAAgImFwcGxpY2F0aW9uL2pzb247Y2hhcnNldD1VVEYtOCJ7InBheWxvYWQiOnsidXNlcm5hbWUiOiJqb2huIn0sIm1ldGFkYXRhIjp7ImV2ZW50TmFtZSI6IkxvZ2luIiwic2Vzc2lvbklkIjoiNGI3YTBiZGEtOWQwZS00Nzg5LTg3NTQtMTQyNDUwYjczMThlIiwidXNlcm5hbWUiOiJqb2huIiwiaGFzU2VudCI6bnVsbCwiY3JlYXRlRGF0ZSI6MTUxMTE4NjI2NDk4OSwidmVyc2lvbiI6bnVsbH19"
Should it be possible to configure this solely through the application yml? Are there additional settings that are missing?
Credit to @Gary for the answer above!
For completeness, the configuration which is now working for me is below.
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          producer:
            useNativeEncoding: true
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
See headerMode and useNativeEncoding in the producer properties (....session.producer.useNativeEncoding).
headerMode
When set to raw, disables header embedding on output. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when producing data for non-Spring Cloud Stream applications.
Default: embeddedHeaders.
useNativeEncoding
When set to true, the outbound message is serialized directly by client library, which must be configured correspondingly (e.g. setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use appropriate decoder (ex: Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding/decoding is used the headerMode property is ignored and headers will not be embedded into the message.
Default: false.
Now the spring.kafka.producer.value-serializer property can be used.
yml:
spring:
  kafka:
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
