Sending a message with Spring Cloud Stream and RabbitMQ changes its ID

I'm using Spring Cloud Stream and RabbitMQ to exchange messages between different microservices.
This is my setup for publishing a message:
public interface OutputChannels {

    static final String OUTPUT_CHANNEL = "outputChannel";

    @Output
    MessageChannel outputChannel();
}
@EnableBinding(OutputChannels.class)
@Log4j
public class OutputProducer {

    @Autowired
    private OutputChannels outputChannels;

    public void createMessage(MyContent myContent) {
        Message<MyContent> message = MessageBuilder
                .withPayload(myContent)
                .build();
        outputChannels.outputChannel().send(message);
        log.info("Sent message: " + message.getHeaders().getId() + myContent);
    }
}
And this is the setup for receiving the message:
public interface InputChannels {

    String INPUT_CHANNEL = "inputChannel";

    @Input
    SubscribableChannel inputChannel();
}
@EnableBinding(InputChannels.class)
@Log
public class InputConsumer {

    @StreamListener(InputChannels.INPUT_CHANNEL)
    public void receive(Message<MyContent> message) {
        MyContent myContent = message.getPayload();
        log.info("Received message: " + message.getHeaders().getId() + ", " + myContent);
    }
}
I am able to exchange messages successfully with this setup. I would expect the IDs of the sent message and the received message to be equal, but they are always different UUIDs.
Is there a way for the message to keep the same ID all the way from the producer, through RabbitMQ, to the consumer?

Spring Messaging messages are immutable; every time a message is mutated (i.e. rebuilt), it gets a new ID.
You can use a custom header or IntegrationMessageHeaderAccessor.CORRELATION_ID to convey a constant value; in most use cases, the correlation ID header is set by the application to the ID header at the start of the message's journey.
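For example, a minimal sketch of the producer side, reusing the OutputProducer class from the question (the random UUID used as the stable value is just an illustration, and the imports org.springframework.integration.IntegrationMessageHeaderAccessor and java.util.UUID are assumed):

    public void createMessage(MyContent myContent) {
        Message<MyContent> message = MessageBuilder
                .withPayload(myContent)
                // carry a stable identifier in the correlation ID header;
                // unlike the framework-generated ID header, this value is not replaced when the message is rebuilt
                .setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, UUID.randomUUID().toString())
                .build();
        outputChannels.outputChannel().send(message);
        log.info("Sent message with correlationId: "
                + message.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID));
    }

On the consumer side the same value should then be readable with message.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID).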

Related

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to the designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, callback) (keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing a ProducerListener a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have now implemented the following, and it seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}
@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka producer properties, specifically recordMetadataChannel:
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
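For illustration, a minimal sketch, assuming the functional binding name event-out-0 derived from the Supplier above, a hypothetical channel bean named sendResults, and an SLF4J log field as in the listener from the question:

# application.properties (assumed binding name)
spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    // the sent message, enriched with the Kafka client's metadata header
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    log.info("Published to partition {} at offset {}", meta.partition(), meta.offset());
}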

Bind RabbitMQ consumer using Spring Cloud Stream to RabbitMQ producer

I have two microservices: one collects XML files from an internal FTP server, transforms them into DTO objects, and publishes them as bytes to RabbitMQ; the other deserializes the incoming bytes from RabbitMQ back to DTO objects, maps them to JPA entities, and persists them to the database.
I'd like to configure the RabbitMQ broker between these two microservices as follows:
1) For the microservice that collects XML files, I edited application.properties as below:
spring.cloud.stream.bindings.output.destination=TOPIC
spring.cloud.stream.bindings.output.group=proactive-policy
2) For the microservice that persists the incoming DTO objects, I configured application.properties as follows:
spring.cloud.stream.bindings.input.destination=TOPIC
spring.cloud.stream.bindings.input.group=proactive-policy
For receiving the incoming bytes from RabbitMQ, I'm using the second microservice as a sink:
@EnableJpaAuditing
@EnableBinding(Sink.class)
@SpringBootApplication(scanBasePackages = { "org.proactive.policy.data.cache" })
@RefreshScope
public class ProactivePolicyDataCacheApplication {

    private static Logger logger = LoggerFactory.getLogger(ProactivePolicyDataCacheApplication.class);

    @Autowired
    PolicyService policyService;

    public static void main(String[] args) {
        SpringApplication.run(ProactivePolicyDataCacheApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void input(Message<byte[]> message) throws Exception {
        if (Objects.isNull(message) || Objects.isNull(message.getPayload())) {
            logger.error("the message is null ");
            throw new IllegalArgumentException("`message` and `message.payload` cannot be null");
        }
        byte[] data = message.getPayload();
        if (data.length == 0) {
            logger.warn("Received empty message");
            return;
        }
        logger.info("Got data from policy-collector = " + new String(data, "UTF-8"));
        PolicyListDto policyListDto = (PolicyListDto) SerializationUtils.deserialize(data);
        logger.info("Policies.xml from policy-collector = " + policyListDto.getPolicy().toString());
        policyService.save(policyListDto);
    }
}
But when I open the RabbitMQ console to look at the exchanges, nothing arrives in the queue TOPIC.proactive-policy. Instead, the incoming messages are received in another queue that I haven't configured, named FTPSTREAM.proactive-policy-collector.
Is there any suggestion for resolving this issue?
A couple of points:
1. There is no such thing as a 'group' for an output binding; a consumer group is a consumer property. Here is the relevant fragment of the javadocs:
/**
 * Unique name that the binding belongs to (applies to consumers only). Multiple
 * consumers within the same group share the subscription. A null or empty String
 * value indicates an anonymous group that is not shared.
 * @see org.springframework.cloud.stream.binder.Binder#bindConsumer(java.lang.String,
 * java.lang.String, java.lang.Object,
 * org.springframework.cloud.stream.binder.ConsumerProperties)
 */
private String group;
2. The name 'FTPSTREAM.proactive-policy-collector' is definitely not something generated by spring-cloud-stream, so look into your configuration and see what you have missed.
It tells me that you have some consumer whose 'destination' is named FTPSTREAM and whose 'group' is proactive-policy-collector. It also tells me that your producer sends messages to the FTPSTREAM exchange.
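As a minimal sketch, reusing the binding names output and input from the question, the matching configuration would set the group only on the consumer side, optionally with requiredGroups on the producer so the TOPIC.proactive-policy queue is created and bound even before the consumer starts:

# producer (policy collector)
spring.cloud.stream.bindings.output.destination=TOPIC
spring.cloud.stream.bindings.output.producer.requiredGroups=proactive-policy

# consumer (data cache)
spring.cloud.stream.bindings.input.destination=TOPIC
spring.cloud.stream.bindings.input.group=proactive-policy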

AMQP unable to receive message back from listener

I have an issue with receiving a message back from the listener to the publisher: I am getting an AmqpReplyTimeoutException. Below is the code of the publisher from where I am publishing to the queue.
for (CsvWrapperPojo item : items) {
    resultList.addAll(item.getDbResultList());
    for (CSVPojo pojo : item.getQueueRequestList()) {
        sampleResponseMessageRabbitConverterFuture = asyncRabbitTemplate.convertSendAndReceive("spring-boot-rabbitmq-Interactive.async_Solve_InteractiveMsg", "Interactive_RequestQueue", pojo);
        //CSVPojo res = (CSVPojo) rabbitTemplate.convertSendAndReceive("spring-boot-rabbitmq-Interactive.async_Solve_InteractiveMsg", "Interactive_RequestQueue", pojo);
        System.out.println("heyyyyyy:" + sampleResponseMessageRabbitConverterFuture.get().getLatitute());
        //resultList.add(res);
        //resultList.add(sampleResponseMessageRabbitConverterFuture.get());
    }
}
With this I am able to publish to the queue. My subscriber code is below:
@EnableRabbit
public class ListenerQueueSubscriber {

    @RabbitHandler
    @RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "Interactive_RequestQueue")
    public void subscribeToRequestQueue(@Payload CSVPojo sampleRequestMessage, Message message) throws InterruptedException {
        System.out.println("inside listener");
        sampleRequestMessage.setResult("Hello");
        Thread.sleep(120000);
        System.out.println("After sleep:" + sampleRequestMessage.getLongitude());
        //return sampleRequestMessage;
    }
}
With the above subscriber I am able to listen to the message; I append "Hello", sleep for 2 minutes, and after that the message should be sent back to the publisher it came from. But unfortunately I am not receiving the message with "Hello" appended; I get an AmqpReplyTimeoutException instead. Can you please help me achieve this behavior?
Thanks in advance!
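A minimal sketch of what the reply path would need, based only on the code in the question: the listener has to actually return the modified payload (the commented-out return above), and the AsyncRabbitTemplate's receive timeout has to be raised above the 2-minute sleep, otherwise the reply times out before it arrives.

@RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "Interactive_RequestQueue")
public CSVPojo subscribeToRequestQueue(@Payload CSVPojo sampleRequestMessage) throws InterruptedException {
    sampleRequestMessage.setResult("Hello");
    Thread.sleep(120000);
    return sampleRequestMessage; // the return value is sent back to the caller's reply address
}

// publisher side: the default receive timeout is 30 seconds, so raise it above the sleep
asyncRabbitTemplate.setReceiveTimeout(180000);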

Not able to filter messages received using the condition attribute in the Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communicating between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my receiver class methods as below.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in the source app as below:
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below:
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the sink/receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream version Chelsea.SR2.
Can someone help me resolve this issue?
It seems the headers are not being propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers (see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties).
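For example, a minimal sketch for the eventType header used in the question:

spring.cloud.stream.kafka.binder.headers=eventType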

How data should be visible to a user and their sub-users using Spring WebSocket

I want to achieve this functionality, as illustrated in the image, using SockJS + STOMP + spring-boot-websocket.
You have 2 options:
1. Create a utility that sends the message to a user; inside it you will send the message to all 3 users (the user + the sub-users), and use this utility inside your controller (a sketch of this option follows the interceptor code below).
2. Create dedicated channels that every user and sub-user will subscribe to. You can add security inside the subscribe message if you would like to.
Look at the interceptor below:
public class MyClassInterceptor extends ChannelInterceptorAdapter {

    private static final Logger LOGGER = LogManager.getLogger(MyClassInterceptor.class);

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        MessageHeaders headers = message.getHeaders();
        SimpMessageType type = (SimpMessageType) headers.get("simpMessageType");
        String simpSessionId = (String) headers.get("simpSessionId");
        if (type == SimpMessageType.CONNECT) {
            Principal principal = (Principal) headers.get("simpUser");
            LOGGER.debug("WsSession " + simpSessionId + " is connected for user " + principal.getName());
        } else if (type == SimpMessageType.DISCONNECT) {
            LOGGER.debug("WsSession " + simpSessionId + " is disconnected");
        }
        return message;
    }
}
Personally I think option one is simpler and doesn't require you to deal with too many things.
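A minimal sketch of option one (the names NotificationService, notifyUserAndSubUsers, and the /queue/notifications destination are illustrative assumptions, not part of the question):

@Service
public class NotificationService {

    private final SimpMessagingTemplate messagingTemplate;

    public NotificationService(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // Sends the payload to the user and to each of their sub-users
    public void notifyUserAndSubUsers(String username, List<String> subUsers, Object payload) {
        messagingTemplate.convertAndSendToUser(username, "/queue/notifications", payload);
        for (String subUser : subUsers) {
            messagingTemplate.convertAndSendToUser(subUser, "/queue/notifications", payload);
        }
    }
}

Your controller can then call this utility once and every relevant user receives the message on their own user destination.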
You cannot do it with Spring alone, because it creates a prefix for every user according to the user name, so every username has its own specific queue.
