Spring Cloud Stream - Multiple dynamic destinations for sources and sinks

There was a change request on my system, which currently listens to multiple channels and sends messages to multiple channels as well; now the destination names will live in the database and can change at any time.
I'm having trouble believing I'm the first one to come across this, but I see limited information out there.
All I found are these two:
Dynamic sink destination:
https://github.com/spring-cloud-stream-app-starters/router/tree/master/spring-cloud-starter-stream-sink-router, but how would that work for actively listening to those channels the way it's done with @StreamListener?
Dynamic source destinations:
https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/source-samples/dynamic-destination-source/, which does this:
@Bean
@ServiceActivator(inputChannel = "sourceChannel")
public ExpressionEvaluatingRouter router() {
    ExpressionEvaluatingRouter router = new ExpressionEvaluatingRouter(
            new SpelExpressionParser().parseExpression("payload.id"));
    router.setDefaultOutputChannelName("default-output");
    router.setChannelResolver(resolver);
    return router;
}
But what's that "payload.id"? And where are the destinations specified there?

Feel free to improve my answer; I hope it will help others.
Now the code (it worked in my debugger). This is an example, not production-ready!
This is how to send a message to a dynamic destination:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@EnableBinding
public class MessageSenderService {

    @Autowired
    private BinderAwareChannelResolver resolver;

    @Transactional
    public void sendMessage(final String topicName, final String payload) {
        // Resolves (and binds, if necessary) the output channel for the given destination name
        final MessageChannel messageChannel = resolver.resolveDestination(topicName);
        messageChannel.send(new GenericMessage<>(payload));
    }
}
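For completeness, a hypothetical call site (the repository and payload here are made up for illustration), with the destination name coming from the database at runtime:

// Hypothetical usage: the destination name is read from the database at runtime.
String topicName = destinationRepository.findCurrentTopicName(); // assumed repository
messageSenderService.sendMessage(topicName, "{\"id\": 42}");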
And the configuration for Spring Cloud Stream:
spring:
  cloud:
    stream:
      dynamicDestinations: output.topic.1,output.topic.2,output.topic.3
I found this in the Spring Cloud Stream reference documentation:
https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/index.html#dynamicdestination
If dynamicDestinations is left unset, any destination can be bound dynamically; if it is set, only the listed destinations can be used. This works in Spring Cloud Stream version 2+; I use 2.1.2:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
This is how to consume a message from a dynamic destination:
https://stackoverflow.com/a/56148190/4587961
Configuration:
spring:
  cloud:
    stream:
      default:
        consumer:
          concurrency: 2
          partitioned: true
      bindings:
        # inputs
        input:
          group: application_name_group
          destination: topic-1,topic-2
          content-type: application/json;charset=UTF-8
The Java consumer:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
@EnableBinding(Sink.class)
public class CommonConsumer {

    private final static Logger logger = LoggerFactory.getLogger(CommonConsumer.class);

    @StreamListener(target = Sink.INPUT)
    public void consumeMessage(final Message<Object> message) {
        logger.info("Received a message: \nmessage:\n{}", message.getPayload());
        final String topic = message.getHeaders().get("kafka_receivedTopic", String.class);
        // Here I define logic which handles messages depending on message headers and topic.
        // In my case I have a configuration which forwards these messages to webhooks,
        // so I need a mapping from topic name to webhook URI.
    }
}
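As a sketch of the topic-to-webhook mapping mentioned in the comments (all names and URIs here are hypothetical), something like this could live inside CommonConsumer, with java.net.URI and java.util.Map imported:

// Hypothetical mapping from topic name to webhook URI, e.g. loaded from configuration or the database.
private static final Map<String, URI> WEBHOOK_BY_TOPIC = Map.of(
        "topic-1", URI.create("https://example.com/hooks/one"),
        "topic-2", URI.create("https://example.com/hooks/two"));

private void forwardToWebhook(final String topic, final Object payload) {
    final URI webhook = WEBHOOK_BY_TOPIC.get(topic);
    if (webhook != null) {
        // POST the payload to the webhook here, e.g. with RestTemplate or WebClient.
    }
}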

Related

How to intercept message republished to DLQ in Spring Cloud RabbitMQ?

I want to intercept messages that are republished to the DLQ after the retry limit is exhausted; my ultimate goal is to eliminate the x-exception-stacktrace header from those messages.
Config:
spring:
  application:
    name: sandbox
  cloud:
    function:
      definition: rabbitTest1Input
    stream:
      binders:
        rabbitTestBinder1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost:55015
                username: guest
                password: guest
                virtual-host: test
      bindings:
        rabbitTest1Input-in-0:
          binder: rabbitTestBinder1
          consumer:
            max-attempts: 3
          destination: ex1
          group: q1
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              autoBindDlq: true
              bind-queue: true
              binding-routing-key: q1key
              deadLetterExchange: ex1-DLX
              dlqDeadLetterExchange: ex1
              dlqDeadLetterRoutingKey: q1key_dlq
              dlqTtl: 180000
              prefetch: 5
              queue-name-group-only: true
              republishToDlq: true
              requeueRejected: false
              ttl: 86400000
@Configuration
class ConsumerConfig {

    companion object : KLogging()

    @Bean
    fun rabbitTest1Input(): Consumer<Message<String>> {
        return Consumer {
            logger.info("Received from test1 queue: ${it.payload}")
            throw AmqpRejectAndDontRequeueException("FAILED") // force republishing to DLQ after N retries
        }
    }
}
First I tried to register a @GlobalChannelInterceptor (like here), but since RabbitMessageChannelBinder uses its own private RabbitTemplate instance (not autowired) for republishing (see #getErrorMessageHandler), it doesn't get intercepted.
Then I tried to extend the RabbitMessageChannelBinder class, throwing away the code related to x-exception-stacktrace, and declared this extension as a bean:
/**
 * Forked from {@link org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder} with the goal
 * to eliminate the {@link RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE} header from messages republished to DLQ
 */
class RabbitMessageChannelBinderWithNoStacktraceRepublished
    : RabbitMessageChannelBinder(...)
// and then
@Configuration
@Import(
    RabbitAutoConfiguration::class,
    RabbitServiceAutoConfiguration::class,
    RabbitMessageChannelBinderConfiguration::class,
    PropertyPlaceholderAutoConfiguration::class,
)
@EnableConfigurationProperties(
    RabbitProperties::class,
    RabbitBinderConfigurationProperties::class,
    RabbitExtendedBindingProperties::class
)
class RabbitConfig {

    @Bean
    @Primary
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    @Order(Ordered.HIGHEST_PRECEDENCE)
    fun customRabbitMessageChannelBinder(
        appCtx: ConfigurableApplicationContext,
        ... // required injections
    ): RabbitMessageChannelBinder {
        // remove the original (auto-configured) bean. Explanation is after the code snippet
        val registry = appCtx.autowireCapableBeanFactory as BeanDefinitionRegistry
        registry.removeBeanDefinition("rabbitMessageChannelBinder")
        // ... and replace it with a custom binder. It's initialized exactly the same way as the original bean, but is of the forked class
        return RabbitMessageChannelBinderWithNoStacktraceRepublished(...)
    }
}
But in this case my channel binder doesn't respect the YAML properties (e.g. addresses: localhost:55015) and uses default values (e.g. localhost:5672)
INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: [localhost:5672]
INFO o.s.a.r.l.SimpleMessageListenerContainer - Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
On the other hand, if I don't remove the original binder from the Spring context, I get the following error:
Caused by: java.lang.IllegalStateException: Multiple binders are available, however neither default nor per-destination binder name is provided. Available binders are [rabbitMessageChannelBinder, customRabbitMessageChannelBinder]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:145)
Could anyone give me a hint how to solve this problem?
P.S. I use Spring Cloud Stream 3.1.6 and Spring Boot 2.6.6
1) Disable the binder retry/DLQ configuration (maxAttempts=1, republishToDlq=false, and the other DLQ-related properties); a configuration sketch follows this list.
2) Add a ListenerContainerCustomizer to add a custom retry advice to the advice chain, with a customized dead letter publishing recoverer.
3) Manually provision the DLQ with a Queue @Bean.
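A minimal YAML sketch of step 1, assuming the binding name from the question (only the retry/DLQ-related properties are shown; the rest of the configuration stays as it was):

spring:
  cloud:
    stream:
      bindings:
        rabbitTest1Input-in-0:
          consumer:
            max-attempts: 1          # disable binder-level retry
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              republishToDlq: false  # the custom recoverer below republishes instead
              autoBindDlq: false     # the DLQ is provisioned manually via @Bean

The complete example below implements steps 2 and 3.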
@SpringBootApplication
public class So72871662Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871662Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return str -> {
            System.out.println(str);
            throw new RuntimeException("test"); // always fail, to exercise retry and the recoverer
        };
    }

    @Bean
    ListenerContainerCustomizer<MessageListenerContainer> customizer(RetryOperationsInterceptor retry) {
        return (cont, dest, grp) -> {
            ((AbstractMessageListenerContainer) cont).setAdviceChain(retry);
        };
    }

    @Bean
    RetryOperationsInterceptor interceptor(MessageRecoverer recoverer) {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(3_000L, 2.0, 10_000L)
                .recoverer(recoverer)
                .build();
    }

    @Bean
    MessageRecoverer recoverer(RabbitTemplate template) {
        return new RepublishMessageRecoverer(template, "DLX", "errors") {

            @Override
            protected void doSend(@Nullable String exchange, String routingKey, Message message) {
                // strip the stack trace header before the message is republished to the DLQ
                message.getMessageProperties().getHeaders().remove(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
                super.doSend(exchange, routingKey, message);
            }
        };
    }

    @Bean
    FanoutExchange dlx() {
        return new FanoutExchange("DLX");
    }

    @Bean
    Queue dlq() {
        return new Queue("errors");
    }

    @Bean
    Binding dlqb() {
        return BindingBuilder.bind(dlq()).to(dlx());
    }
}

Spring Cloud Stream Kafka send message

How can I send a message with the new Spring Cloud Stream Kafka functional model?
The deprecated way looked like this:
public interface OutputTopic {

    @Output("output")
    MessageChannel output();
}

@Autowired
OutputTopic outputTopic;

public void someMethod() {
    outputTopic.output().send(MessageBuilder.withPayload("some payload").build());
}
But how can I send a message in the functional style?
application.yml:
spring:
  cloud:
    function:
      definition: process
    stream:
      bindings:
        process-out-0:
          destination: output
          binder: kafka
@Configuration
public class Configuration {

    @Bean
    Supplier<Message<String>> process() {
        return () -> MessageBuilder.withPayload("foo")
                .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
                .build();
    }
}
I would autowire a MessageChannel, but there is no MessageChannel bean for process, process-out-0, output, or anything like that. Or can I send a message with a Supplier bean?
Could someone please give me an example?
Thanks a lot!
You can either use StreamBridge or the Reactor API; see "Sending arbitrary data to an output (e.g. Foreign event-driven sources)" in the reference documentation. A StreamBridge sketch follows.
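A minimal sketch of the StreamBridge approach, assuming the process-out-0 binding from the question's application.yml (the service and method names are made up):

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Service;

@Service
public class EventPublisher {

    private final StreamBridge streamBridge;

    public EventPublisher(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void publish(String payload) {
        // "process-out-0" is the binding name from the question's configuration
        streamBridge.send("process-out-0", MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
                .build());
    }
}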

Unable to send custom header using spring cloud stream kafka

I have two microservices written in Java, using Spring Boot.
I use Kafka, through Spring Cloud Stream Kafka, to send messages between them.
I need to send a custom header, but I have had no success so far.
I have read and tried most of the things I have found on the internet and in the Spring Cloud Stream documentation, but I have still been unable to make it work.
This means I never receive the message in the receiver, because the header cannot be found and must not be null.
I suspect the header is never written into the message. Right now I am trying to verify this with kafkacat.
Any help will be welcome.
Thanks in advance.
------ information --------------------
Here is the sender code:
@SendTo("notifications")
public void send(NotificationPayload payload, String eventId) {
    var headerMap = Collections.singletonMap("EVENT_ID",
            eventId.getBytes(StandardCharsets.UTF_8));
    MessageHeaders headers = new MessageHeaders(headerMap);
    var message = MessageBuilder.createMessage(payload, headers);
    notifications.send(message);
}
Where notifications is a MessageChannel.
Here is the related configuration for the message sender:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        notifications:
          binder: kafka
          destination: notifications
          contentType: application/x-java-object;type=com.types.NotificationPayload
          producer:
            partitionCount: 1
            headerMode: headers
      kafka:
        binder:
          headers: EVENT_ID
I have also tried with headers: "EVENT_ID"
Here is the code for the receiver part:
@StreamListener("notifications")
public void receiveNotif(@Header("EVENT_ID") byte[] eventId,
        @Payload NotificationPayload payload) {
    var eventIdS = new String(eventId, StandardCharsets.UTF_8);
    ...
    // do something with the payload
}
And the configuration for the receiving part:
spring:
  cloud:
    stream:
      kafka:
        bindings:
          notifications:
            consumer:
              headerMode: headers
Versions
<spring-cloud-stream-dependencies.version>Horsham.SR4</spring-cloud-stream-dependencies.version>
<spring-cloud-stream-binder-kafka.version>3.0.4.RELEASE</spring-cloud-stream-binder-kafka.version>
<spring-cloud-schema-registry.version>1.0.4.RELEASE</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.0.4.RELEASE</spring-cloud-stream.version>
What version are you using? Describe "can't get it to work" in more detail.
This works fine...
@SpringBootApplication
@EnableBinding(Source.class)
public class So64586916Application {

    public static void main(String[] args) {
        SpringApplication.run(So64586916Application.class, args);
    }

    @InboundChannelAdapter(channel = Source.OUTPUT)
    Message<String> source() {
        return MessageBuilder.withPayload("foo")
                .setHeader("myHeader", "someValue")
                .build();
    }

    @KafkaListener(id = "in", topics = "output")
    void listen(Message<?> in) {
        System.out.println(in);
    }
}
spring.kafka.consumer.auto-offset-reset=earliest
GenericMessage [payload=byte[3], headers={myHeader=someValue, kafka_offset=0, ...
GenericMessage [payload=byte[3], headers={myHeader=someValue, kafka_offset=1, ...
EDIT
I also tested it by sending to the channel directly; again with no problems:
@Autowired
MessageChannel output;

@Bean
public ApplicationRunner runner() {
    return args -> {
        this.output.send(MessageBuilder.withPayload("foo")
                .setHeader("myHeader", "someValue")
                .build());
    };
}

Spring Cloud Stream Kafka Channel Not Working in Spring Boot Application

I have been attempting to get an inbound SubscribableChannel and an outbound MessageChannel working in my Spring Boot application.
I have successfully set up the Kafka channel and tested it.
Furthermore, I have created a basic Spring Boot application that tests adding and receiving things from the channel.
The issue I am having is that when I put the equivalent code into the application it belongs in, the messages never seem to get sent or received. It's hard to ascertain what's going on by debugging, but the only thing that looks different to me is the channel name: in the working implementation the channel name is like application.channel; in the non-working app it's localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking or altering the creation of the channels into a different channel source?
Has anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email)
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure if one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike my basic application, which does work with the above code, I never see the ConsumerConfig being logged, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id = consumer-2
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the Kafka channel.)
I assume some Spring Boot configuration from one of the libraries I'm using is creating a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it's hard to determine if anything gets in the way. Also, when you say "...it appears that the messages never get sent or received...", are there any exceptions in the logs? Please also state the version of Kafka you're using, as well as the version of Spring Cloud Stream.
Now, I did try to reproduce it based on your code (after cleaning it up a bit to leave only the relevant parts) and was able to successfully send and receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {

            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging I discovered that something was creating a Test Support Binder (how, I don't know yet), which is obviously used to avoid adding messages to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
the Kafka channel configurations work and messages are being sent. It would be interesting to know what on earth is setting up this test support binder; I'll find that sucker eventually.
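For what it's worth, the test binder is typically registered when the spring-cloud-stream-test-support artifact ends up on the runtime classpath (an assumption about the setup here); scoping it to tests avoids the need for the exclusion, for example:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-test-support</artifactId>
    <scope>test</scope>
</dependency>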

Spring Cloud Stream dynamic channels

I am using Spring Cloud Stream and want to programmatically create and bind channels. My use case is that during application startup I receive the dynamic list of Kafka topics to subscribe to. How can I then create a channel for each topic?
I ran into a similar scenario recently, and below is my sample of creating SubscribableChannels dynamically.
// Assumes these beans are injected: BindingServiceProperties bindingServiceProperties,
// SubscribableChannelBindingTargetFactory bindingTargetFactory, BindingService bindingService,
// and ConfigurableListableBeanFactory beanFactory.
ConsumerProperties consumerProperties = new ConsumerProperties();
consumerProperties.setMaxAttempts(1);
BindingProperties bindingProperties = new BindingProperties();
bindingProperties.setConsumer(consumerProperties);
bindingProperties.setDestination(retryTopic);
bindingProperties.setGroup(consumerGroup);
bindingServiceProperties.getBindings().put(consumerName, bindingProperties);
// Create the input channel, register it as a bean, and bind it to the destination
SubscribableChannel channel = (SubscribableChannel) bindingTargetFactory.createInput(consumerName);
beanFactory.registerSingleton(consumerName, channel);
channel = (SubscribableChannel) beanFactory.initializeBean(channel, consumerName);
bindingService.bindConsumer(channel, consumerName);
channel.subscribe(consumerMessageHandler);
I had to do something similar for the Camel Spring Cloud Stream component.
Perhaps the consumer code to bind a destination (really just a String indicating the channel name) would be useful to you?
In my case I only bind a single destination, but I don't imagine it being much different conceptually for multiple destinations.
Below is the gist of it:
@Override
protected void doStart() throws Exception {
    SubscribableChannel bindingTarget = createInputBindingTarget();
    bindingTarget.subscribe(message -> {
        // have your way with the received incoming message
    });

    endpoint.getBindingService().bindConsumer(bindingTarget,
            endpoint.getDestination());

    // at this point the binding is done
}

/**
 * Create a {@link SubscribableChannel} and register in the
 * {@link org.springframework.context.ApplicationContext}
 */
private SubscribableChannel createInputBindingTarget() {
    SubscribableChannel channel = endpoint.getBindingTargetFactory()
            .createInputChannel(endpoint.getDestination());
    endpoint.getBeanFactory().registerSingleton(endpoint.getDestination(), channel);
    channel = (SubscribableChannel) endpoint.getBeanFactory().initializeBean(channel,
            endpoint.getDestination());
    return channel;
}
See here for the full source for more context.
I had a task where I did not know the topics in advance. I solved it by having one input channel which listens to all the topics I need.
https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/_configuration_options.html
Destination
The target destination of a channel on the bound middleware (e.g., the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations and the destination names can be specified as comma-separated String values. If not set, the channel name is used instead.
So my configuration
spring:
  cloud:
    stream:
      default:
        consumer:
          concurrency: 2
          partitioned: true
      bindings:
        # inputs
        input:
          group: application_name_group
          destination: topic-1,topic-2
          content-type: application/json;charset=UTF-8
Then I defined one consumer which handles messages from all these topics.
@Component
@EnableBinding(Sink.class)
public class CommonConsumer {

    private final static Logger logger = LoggerFactory.getLogger(CommonConsumer.class);

    @StreamListener(target = Sink.INPUT)
    public void consumeMessage(final Message<Object> message) {
        logger.info("Received a message: \nmessage:\n{}", message.getPayload());
        // Here I define logic which handles messages depending on message headers and topic.
        // In my case I have a configuration which forwards these messages to webhooks,
        // so I need a mapping from topic name to webhook URI.
    }
}
Note: in your case this may not be a solution. I needed to forward messages to webhooks, so I could have a configuration mapping.
I also thought about other ideas:
1) Use a Kafka client consumer without Spring Cloud.
2) Create a predefined number of inputs, for example 50 (input-1, input-2, ..., input-50), and then have configuration for some of these inputs; a sketch follows.
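A hypothetical configuration sketch for idea 2), with made-up topic names; inputs you don't need yet would simply stay unconfigured:

spring:
  cloud:
    stream:
      bindings:
        input-1:
          destination: topic-a
        input-2:
          destination: topic-b
        # ... input-3 through input-50, configured as needed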
Related discussions
Spring cloud stream to support routing messages dynamically
https://github.com/spring-cloud/spring-cloud-stream/issues/690
https://github.com/spring-cloud/spring-cloud-stream/issues/1089
We use Spring Cloud 2.1.1.RELEASE:
MessageChannel messageChannel = createMessageChannel(channelName);
messageChannel.send(getMessageBuilder().apply(data));

public MessageChannel createMessageChannel(String channelName) {
    return (MessageChannel) applicationContext.getBean(channelName);
}

public Function<Object, Message<Object>> getMessageBuilder() {
    return payload -> MessageBuilder
            .withPayload(payload)
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
            .build();
}
For incoming messages, you can explicitly use the BinderAwareChannelResolver to dynamically resolve the destination. You can check this example, where the router sink uses the binder-aware channel resolver.
