How to read the messages from rabbitmq other than using listener configuration? - spring

I am attempting to implement a Spring Boot API to fetch RabbitMQ messages on demand for asynchronous cart notifications in the UI. I already have a working implementation based on a registered listener, but I am looking for an alternative, with or without Spring.
@Component
public class Receiver {

    private CountDownLatch latch = new CountDownLatch(1);

    public void receiveMessage(String message) {
        System.out.println("Received <" + message + ">");
        latch.countDown();
    }

    public CountDownLatch getLatch() {
        return latch;
    }
}
Main class with Receiver configuration:
@SpringBootApplication
public class MessagingRabbitmqApplication {

    static final String topicExchangeName = "spring-boot-exchange";
    static final String queueName = "spring-boot";

    @Bean
    Queue queue() {
        return new Queue(queueName, false);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with("foo.bar.#");
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    public static void main(String[] args) throws InterruptedException {
        SpringApplication.run(MessagingRabbitmqApplication.class, args).close();
    }
}
My current implementation is based on this reference: https://spring.io/guides/gs/messaging-rabbitmq/

See RabbitTemplate API:
/**
 * Receive a message if there is one from a specific queue. Returns immediately,
 * possibly with a null value.
 *
 * @param queueName the name of the queue to poll
 * @return a message or null if there is none waiting
 * @throws AmqpException if there is a problem
 */
@Nullable
Message receive(String queueName) throws AmqpException;

/**
 * Receive a message if there is one from a specific queue and convert it to a Java
 * object. Returns immediately, possibly with a null value.
 *
 * @param queueName the name of the queue to poll
 * @return a message or null if there is none waiting
 * @throws AmqpException if there is a problem
 */
@Nullable
Object receiveAndConvert(String queueName) throws AmqpException;
And respective docs: https://docs.spring.io/spring-amqp/reference/html/#polling-consumer
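For the on-demand use case described in the question, the polling call can be wrapped in a REST endpoint. The following is only a minimal sketch, not part of the original answer: it assumes spring-boot-starter-web is on the classpath, reuses the "spring-boot" queue name from the guide, and the controller name and endpoint path are made up.
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CartNotificationController {

    private final RabbitTemplate rabbitTemplate;

    public CartNotificationController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @GetMapping("/cart/notifications")
    public ResponseEntity<Object> nextNotification() {
        // Polls once and returns immediately; null means the queue is currently empty.
        Object payload = rabbitTemplate.receiveAndConvert("spring-boot");
        return payload == null
                ? ResponseEntity.noContent().build()
                : ResponseEntity.ok(payload);
    }
}
With this approach no listener container is needed at all; each HTTP call drains at most one message from the queue.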

Related

How to add a Spring Boot HealthIndicator to an Imap receiver IntegrationFlow

How do you integrate a Spring Boot HealthIndicator with an IntegrationFlow polling email with imap?
I can get exceptions from the IntegrationFlow via the errorChannel, but how do I "clear" the exception once the IntegrationFlow starts working again, e.g., after a network outage?
@SpringBootApplication
public class MyApplication {

    @Bean
    IntegrationFlow mailFlow() {
        return IntegrationFlows
                .from(Mail.imapInboundAdapter(receiver()).get(),
                        e -> e.autoStartup(true)
                                .poller(Pollers.fixedRate(5000)))
                .channel(mailChannel()).get();
    }

    @Bean
    public ImapMailReceiver receiver() {
        String mailServerPath = format("imaps://%s:%s@%s/INBOX", mailUser,
                encode(mailPassword), mailServer);
        ImapMailReceiver result = new ImapMailReceiver(mailServerPath);
        return result;
    }

    @Bean
    DirectChannel mailChannel() {
        return new DirectChannel();
    }

    @Autowired
    @Qualifier("errorChannel")
    private PublishSubscribeChannel errorChannel;

    @Bean
    public IntegrationFlow errorHandlingFlow() {
        return IntegrationFlows.from(errorChannel).handle(message -> {
            MessagingException ex = (MessagingException) message.getPayload();
            log.error("", ex);
        }).get();
    }

    @Bean
    HealthIndicator mailReceiverHealthIndicator() {
        return () -> {
            /*
             * How to error check the imap polling ???
             */
            return Health.up().build();
        };
    }
}
I would go with an AtomicReference<Exception> bean and set its value in that errorHandlingFlow. The HealthIndicator impl would consult that AtomicReference and return down() when it has a value.
The PollerSpec for the Mail.imapInboundAdapter() could be configured with a ReceiveMessageAdvice:
/**
 * Specify AOP {@link Advice}s for the {@code pollingTask}.
 * @param advice the {@link Advice}s to use.
 * @return the spec.
 */
public PollerSpec advice(Advice... advice) {
Its afterReceive() impl could just clear that AtomicReference, so your HealthIndicator would return up().
The point is that afterReceive() is called only when invocation.proceed() doesn't fail with an exception, and it is called regardless of whether there are new messages to process or not.
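A minimal sketch of that approach (not the poster's code): it assumes Spring Boot Actuator is on the classpath and Spring Integration 5.3+, where ReceiveMessageAdvice and its afterReceive() hook are available; the class and bean names are made up.
import java.util.concurrent.atomic.AtomicReference;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aop.ReceiveMessageAdvice;
import org.springframework.messaging.Message;

@Configuration
public class MailHealthConfig {

    // Holds the most recent poll failure; null means the flow is healthy.
    @Bean
    public AtomicReference<Exception> lastMailError() {
        return new AtomicReference<>();
    }

    // Consulted by the /actuator/health endpoint.
    @Bean
    public HealthIndicator mailReceiverHealthIndicator(AtomicReference<Exception> lastMailError) {
        return () -> {
            Exception ex = lastMailError.get();
            return ex == null ? Health.up().build() : Health.down(ex).build();
        };
    }

    // Attach this to the poller; afterReceive() only runs when the poll itself
    // succeeded, whether or not a new message arrived, so it clears the failure.
    @Bean
    public ReceiveMessageAdvice mailPollAdvice(AtomicReference<Exception> lastMailError) {
        return new ReceiveMessageAdvice() {

            @Override
            public Message<?> afterReceive(Message<?> result, Object source) {
                lastMailError.set(null);
                return result;
            }
        };
    }
}
The advice would be wired in via Pollers.fixedRate(5000).advice(mailPollAdvice), and the errorHandlingFlow's handle() would call lastMailError.set(ex) alongside the logging.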

How to write Integration Test for rabbit template with confirm and return callback

I have a publisher that publishes messages to an exchange, and the associated RabbitTemplate is configured with confirm and return callbacks. I would just like to know how to write an integration test for this class, using mocks or any other framework.
Code:
@Configuration
public class TopicConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(TopicConfiguration.class);

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    EPPQ2Publisher eppQ2Publisher;

    /**
     * Caching connection factory
     * @return CachingConnectionFactory
     */
    @Bean
    public CachingConnectionFactory cachingConnectionFactory() {
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(rabbitMqConfig.getPublisherHosts(),
                rabbitMqConfig.getPublisherPort());
        cachingConnectionFactory.setUsername(rabbitMqConfig.getPublisherUsername());
        cachingConnectionFactory.setPassword(rabbitMqConfig.getPublisherPassword());
        cachingConnectionFactory.setVirtualHost("hydra.services");
        cachingConnectionFactory.createConnection();
        cachingConnectionFactory.setPublisherReturns(true);
        cachingConnectionFactory.setPublisherConfirms(true);
        cachingConnectionFactory.setConnectionNameStrategy(f -> "publisherConnection");
        return cachingConnectionFactory;
    }

    /**
     * Bean RabbitTemplate
     * @return RabbitTemplate
     */
    @Bean
    public RabbitTemplate template(
            @Qualifier("cachingConnectionFactory") CachingConnectionFactory cachingConnectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(cachingConnectionFactory);
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        RetryTemplate retryTemplate = new RetryTemplate();
        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(500);
        backOffPolicy.setMultiplier(10.0);
        backOffPolicy.setMaxInterval(10000);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        rabbitTemplate.setRetryTemplate(retryTemplate);
        rabbitTemplate.setExchange(rabbitMqConfig.getPublisherTopic());
        rabbitTemplate.setUsePublisherConnection(true);
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
            if (correlation != null) {
                LOGGER.info("Received " + (ack ? " ack " : " nack ") + "for correlation: " + correlation);
                if (ack) {
                    // this is confirmation received..
                    // here is code to ack Q1. correlation.getId() and ack it !!
                    eppQ2Publisher.ackMessage(new Long(correlation.getId().toString()));
                } else {
                    // no confirmation received and no need to do any thing for
                    // retry..
                }
            }
        });
        rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            LOGGER.error("Returned: " + message + "\nreplyCode: " + replyCode + "\nreplyText: " + replyText
                    + "\nexchange/rk: " + exchange + "/" + routingKey);
        });
        return rabbitTemplate;
    }

    /**
     * Bean Jackson2JsonMessageConverter
     * @return Jackson2JsonMessageConverter
     */
    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
@Component
public class EPPQ2PublisherImpl implements EPPQ2Publisher {

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    private Channel channel;

    /**
     * Method sendMessage for sending individual scrubbed and encrypted message to publisher queue (Q2).
     * @param msg - message domain object
     * @param deliveryTag - is message delivery tag.
     */
    @Override
    public void sendMessage(Message msg, Long deliveryTag) {
        rabbitTemplate.convertAndSend(rabbitMqConfig.getRoutingKey(), msg, new CorrelationData(deliveryTag.toString()));
    }

    /**
     * sendMessages for sending list of scrubbed and encrypted messages to publisher queue (Q2)
     * @param msgList - is list of scrubbed and encrypted messages
     * @param channel - is amqp client channel
     * @param deliveryTagList - is list of incoming message delivery tags.
     */
    @Override
    public void sendMessages(List<Message> msgList, Channel channel, List<Long> deliveryTagList) {
        if (this.channel == null) {
            this.channel = channel;
        }
        for (int i = 0; i < msgList.size(); i++) {
            sendMessage(msgList.get(i), deliveryTagList.get(i));
        }
    }

    /**
     * Method ackMessage for sending acknowledgement to subscriber queue (Q1).
     * @param deliveryTag - is deliveryTag for each individual message.
     */
    @Override
    public void ackMessage(Long deliveryTag) {
        try {
            channel.basicAck(deliveryTag, false);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Could someone guide me to a reference or example, if one is available? That would be great.
Some JUnit and integration testing examples for a RabbitTemplate with confirms and returns would be a great help.
There are lots of test cases in the framework itself...
RabbitTemplatePublisherCallbacksIntegrationTests
and RabbitTemplatePublisherCallbacksIntegrationTests2
and RabbitTemplatePublisherCallbacksIntegrationTests3
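For a self-contained starting point, here is a minimal sketch (my own illustration, not taken from those framework tests): it builds a test-scoped template with publisher confirms enabled against a locally running broker and waits for the confirm with a latch. The broker address and queue name are placeholders, the queue is assumed to already exist, and it uses the pre-2.2 setPublisherConfirms(boolean) API shown in the question.
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.Test;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PublisherConfirmIntegrationTest {

    @Test
    public void publishIsConfirmedByBroker() throws Exception {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost"); // placeholder broker
        cf.setPublisherConfirms(true);
        cf.setPublisherReturns(true);
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setMandatory(true);

        CountDownLatch confirmLatch = new CountDownLatch(1);
        AtomicBoolean acked = new AtomicBoolean();
        template.setConfirmCallback((correlation, ack, cause) -> {
            acked.set(ack);
            confirmLatch.countDown();
        });

        // Publishing to the default exchange routes directly to the queue of the same name;
        // replace with your own exchange/routing key as needed.
        template.convertAndSend("", "some.test.queue", "hello", new CorrelationData("test-1"));

        assertTrue("no confirm received", confirmLatch.await(10, TimeUnit.SECONDS));
        assertTrue("publish was nacked", acked.get());
        cf.destroy();
    }
}
A pure mock-based unit test would instead capture the ConfirmCallback set on a mocked RabbitTemplate and invoke it directly, asserting on the downstream behaviour (for example, that ackMessage() is called on an ack).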

Error - Rabbit Template publish Confirm - reply-code=403, reply-text=ACCESS_REFUSED - cannot publish to internal exchange

My use case is:
Subscribe to Q1 and read messages in batches of a specified size.
Pass the read message collection on for processing.
Publish the collected messages to Q2 and ack each message on Q1 upon successful confirmation of the Q2 publish.
Code
@Component
public class EPPQ2Subscriber {

    private static final Logger LOGGER = LoggerFactory.getLogger(EPPQ2Subscriber.class);

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    AppConfig appConfig;

    List<Message> messageList = new ArrayList<Message>();
    List<Long> diliveryTag = new ArrayList<Long>();

    /**
     * Method is listener's receive message method, invoked when there is a message ready to read
     * @param message - Domain object of message encapsulated
     * @param channel - rabbitmq client channel
     * @param messageId - TODO delete it later.
     * @param messageProperties - amqp message properties, contains message properties such as delivery tag etc..
     */
    @RabbitListener(id = "messageListener", queues = "#{rabbitMqConfig.getSubscriberQueueName()}", containerFactory = "queueListenerContainer")
    public void receiveMessage(Message message, Channel channel, @Header("id") String messageId,
            MessageProperties messageProperties) {
        LOGGER.info("Result:" + message.getClass() + ":" + message.toString());
        if (messageList.size() <= appConfig.getSubscriberChunkSize()) {
            messageList.add(message);
            diliveryTag.add(messageProperties.getDeliveryTag());
        } else {
            // call the service here to decrypt, read pan, call danger to scrub, encrypt pan and re-pack them in message again.
            // after this branch messageList should have scrubbed and encrypted message objects ready to publish.
            // Here is the call to publish and ack messages..
        }
    }
}
@Component
@Configuration
public class TopicConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(TopicConfiguration.class);

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    EPPQ2Publisher eppQ2Publisher;

    /**
     * Caching connection factory
     * @return CachingConnectionFactory
     */
    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(rabbitMqConfig.getPublisherHosts(), rabbitMqConfig.getPublisherPort());
        connectionFactory.setUsername(rabbitMqConfig.getPublisherUsername());
        connectionFactory.setPassword(rabbitMqConfig.getPublisherPassword());
        return connectionFactory;
    }

    /**
     * Bean RabbitTemplate
     * @return RabbitTemplate
     */
    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        RetryTemplate retryTemplate = new RetryTemplate();
        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(500);
        backOffPolicy.setMultiplier(10.0);
        backOffPolicy.setMaxInterval(10000);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        rabbitTemplate.setRetryTemplate(retryTemplate);
        /* rabbitTemplate.setExchange(rabbitMqConfig.getPublisherTopic());
        rabbitTemplate.setRoutingKey(rabbitMqConfig.getRoutingKey()); */
        rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
            if (correlation != null) {
                LOGGER.info("Received " + (ack ? " ack " : " nack ") + "for correlation: " + correlation);
                if (ack) {
                    // this is confirmation received..
                    // here is code to ack Q1. correlation.getId() and ack
                    eppQ2Publisher.ackMessage(new Long(correlation.getId().toString()));
                } else {
                    // no confirmation received and no need to do any thing for retry..
                }
            }
        });
        rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            LOGGER.error("Returned: " + message + "\nreplyCode: " + replyCode + "\nreplyText: " + replyText
                    + "\nexchange/rk: " + exchange + "/" + routingKey);
        });
        return rabbitTemplate;
    }

    /**
     * Bean Jackson2JsonMessageConverter
     * @return Jackson2JsonMessageConverter
     */
    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
public interface EPPQ2Publisher {

    public void sendMessage(Message msg, Long deliveryTag);

    public void sendMessages(List<Message> msgList, Channel channel, List<Long> deliveryTagList);

    public void ackMessage(Long deliveryTag);
}

@Component
public class EPPQ2PublisherImpl implements EPPQ2Publisher {

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    private Channel channel;

    /**
     * Method sendMessage for sending individual scrubbed and encrypted message to publisher queue (Q2).
     * @param msg - message domain object
     * @param deliveryTag - is message delivery tag.
     */
    @Override
    public void sendMessage(Message msg, Long deliveryTag) {
        rabbitTemplate.convertAndSend(rabbitMqConfig.getPublisherTopic(), rabbitMqConfig.getRoutingKey(), msg, new CorrelationData(deliveryTag.toString()));
    }

    /**
     * sendMessages for sending list of scrubbed and encrypted messages to publisher queue (Q2)
     * @param msgList - is list of scrubbed and encrypted messages
     * @param channel - is amqp client channel
     * @param deliveryTagList - is list of incoming message delivery tags.
     */
    @Override
    public void sendMessages(List<Message> msgList, Channel channel, List<Long> deliveryTagList) {
        if (this.channel == null) {
            this.channel = channel;
        }
        for (int i = 0; i < msgList.size(); i++) {
            sendMessage(msgList.get(i), deliveryTagList.get(i));
        }
    }

    /**
     * Method ackMessage for sending acknowledgement to subscriber Q1
     * @param deliveryTag - is deliveryTag for each individual message.
     */
    @Override
    public void ackMessage(Long deliveryTag) {
        try {
            channel.basicAck(deliveryTag, false);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
org.springframework.amqp.rabbit.connection.CachingConnectionFactory: Creating cached Rabbit Channel from AMQChannel(amqp://dftp_subscriber@10.15.190.18:5672/hydra.services,2)
I expected it to be dftp_publisher, so I guess my topic configuration is not injected properly.
Error log:
org.springframework.amqp.rabbit.core.RabbitTemplate: Executing callback RabbitTemplate$$Lambda$285/33478758 on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://dftp_subscriber@10.15.190.18:5672/hydra.services,2), conn: Proxy@1dc339f Shared Rabbit Connection: SimpleConnection@2bd7c8 [delegate=amqp://dftp_subscriber@10.15.190.18:5672/hydra.services, localPort= 55553]
org.springframework.amqp.rabbit.core.RabbitTemplate: Publishing message (Body:'{"HEADER":{"RETRY_COUNT":0,"PUBLISH_EVENT_TYPE":"AUTH"},"PAYLOAD":{"MTI":"100","MTI_REQUEST":"100","PAN":"6011000000000000","PROCCODE":"00","PROCCODE_REQUEST":"00","FROM_ACCOUNT":"00","TO_ACCOUNT":"00","TRANSACTION_AMOUNT":"000000000100","TRANSMISSION_MMDDHHMMSS":"0518202930","STAN":"000001","LOCALTIME_HHMMSS":"010054","LOCALDATE_YYMMDD":"180522","EXPIRATION_DATE_YYMM":"2302","MERCHANT_TYPE":"5311","ACQUIRING_COUNTRY_CODE":"840","POS_ENTRY_MODE":"02","POS_PIN_ENTRY_CAPABILITIES":"0","FUNCTION_CODE":"100","ACQUIRING_ID_CODE":"000000","FORWARDING_ID_CODE":"000000","RETRIEVAL_REFERENCE_NUMBER":"1410N644D597","MERCHANT_NUMBER":"601100000000596","CARD_ACCEPTOR_NAME":"Discover Acq Simulator","CARD_ACCEPTOR_CITY":"Riverwoods","CARD_ACCEPTOR_STATE":"IL","CARD_ACCEPTOR_COUNTRY":"840","CARD_ACCEPTOR_COUNTRY_3NUMERIC":"840","NRID":"123456789012345","TRANSACTION_CURRENCY_CODE":"840","POS_TERMINAL_ATTENDANCE_INDICATOR":"0","POS_PARTIAL_APPROVAL_INDICATOR":"0","POS_TERMINAL_LOCATION_INDICATOR":"0","POS_TRANSACTION_STATUS_INDICATOR":"0","POS_ECOMMERCE_TRAN_INDICATOR":"0","POS_TYPE_OF_TERMINAL_DEVICE":"0","POS_CARD_PRESENCE_INDICATOR":"0","POS_CARD_CAPTURE_CAPABILITIES_INDICATOR":"0","POS_TRANSACTION_SECURITY_INDICATOR":"0","POS_CARD_DATA_TERMINAL_INPUT_CAPABILITY_INDICATOR":"C","POS_CARDHOLDER_PRESENCE_INDICATOR":"0","DFS_POS_DATA":"0000000000C00","GEODATA_STREET_ADDRESS":"2500 LAKE COOK ROAD ","GEODATA_POSTAL_CODE":"600150000","GEODATA_COUNTY_CODE":"840","GEODATA_STORE_NUMBER":"10001","GEODATA_MALL_NAME":"DISCOVER FINANCIAL SR","ISS_REFERENCE_ID":"72967956","ISS_PROCESSOR_REFERENCE_ID":"123459875","VERSION_INDICATOR":"03141"}}' MessageProperties [headers={TypeId=com.discover.dftp.scrubber.domain.Message}, contentType=application/json, contentEncoding=UTF-8, contentLength=1642, deliveryMode=PERSISTENT, priority=0, deliveryTag=0])on exchange [hydra.hash2Syphon.exc], routingKey = [100]
org.springframework.amqp.rabbit.connection.CachingConnectionFactory$DefaultChannelCloseLogger: Channel shutdown: channel error; protocol method: #method(reply-code=403, reply-text=ACCESS_REFUSED - cannot publish to internal exchange 'hydra.hash2Syphon.exc' in vhost 'hydra.services', class-id=60, method-id=40)
EDIT 2.
@Component
@Configuration
public class ListenerContainerFactory {

    static final Logger logger = LoggerFactory.getLogger(ListenerContainerFactory.class);

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    EPPQ2Subscriber receiver;

    @Autowired
    EPPQ2ChanelAwareSubscriber receiverChanel;

    public ListenerContainerFactory(ConfigurableApplicationContext ctx) {
        printContainerStartMsg();
    }

    private void printContainerStartMsg() {
        logger.info("----------- Scrubber Container Starts --------------");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory queueListenerContainer(AbstractConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        connectionFactory.setAddresses(rabbitMqConfig.getSubscriberHosts());
        connectionFactory.setVirtualHost("hydra.services");
        connectionFactory.setPort(rabbitMqConfig.getSubscriberPort());
        connectionFactory.setUsername(rabbitMqConfig.getSubscriberUsername());
        connectionFactory.setPassword(rabbitMqConfig.getSubscriberPassword());
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setErrorHandler(errorHandler());
        return factory;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(EPPQ2Subscriber receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    /*@Bean
    MessageListenerAdapter listenerAdapterWithChanel(EPPQ2ChanelAwareSubscriber receiverChanel) {
        return new MessageListenerAdapter(receiverChanel);
    }*/

    @Bean
    public ErrorHandler errorHandler() {
        return new ConditionalRejectingErrorHandler(fatalExceptionStrategy());
    }

    @Bean
    public ScrubberFatalExceptionStrategy fatalExceptionStrategy() {
        return new ScrubberFatalExceptionStrategy();
    }
}
And the latest TopicConfiguration:
@Component
@Configuration
public class TopicConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(TopicConfiguration.class);

    @Autowired
    RabbitMqConfig rabbitMqConfig;

    @Autowired
    EPPQ2Publisher eppQ2Publisher;

    /**
     * Bean Queue
     * @return Queue
     */
    @Bean
    Queue queue() {
        return new Queue(rabbitMqConfig.getPublisherQueueName(), false);
    }

    /**
     * Bean TopicExchange
     * @return TopicExchange
     */
    @Bean
    TopicExchange exchange() {
        return new TopicExchange(rabbitMqConfig.getPublisherTopic());
    }

    /**
     * Bean Binding
     * @param queue - Queue
     * @param exchange - TopicExchange
     * @return Binding
     */
    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(rabbitMqConfig.getRoutingKey());
    }

    /**
     * Caching connection factory
     * @return CachingConnectionFactory
     */
    @Bean
    public CachingConnectionFactory cachingConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(rabbitMqConfig.getPublisherHosts(),
                rabbitMqConfig.getPublisherPort());
        connectionFactory.setUsername(rabbitMqConfig.getPublisherUsername());
        connectionFactory.setPassword(rabbitMqConfig.getPublisherPassword());
        return connectionFactory;
    }

    /**
     * Bean RabbitTemplate
     * @return RabbitTemplate
     */
    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(cachingConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        RetryTemplate retryTemplate = new RetryTemplate();
        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(500);
        backOffPolicy.setMultiplier(10.0);
        backOffPolicy.setMaxInterval(10000);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        rabbitTemplate.setRetryTemplate(retryTemplate);
        /* rabbitTemplate.setExchange(rabbitMqConfig.getPublisherTopic());
        rabbitTemplate.setRoutingKey(rabbitMqConfig.getRoutingKey()); */
        rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
            if (correlation != null) {
                LOGGER.info("Received " + (ack ? " ack " : " nack ") + "for correlation: " + correlation);
                if (ack) {
                    // this is confirmation received..
                    // here is code to ack Q1. correlation.getId() and ack
                    eppQ2Publisher.ackMessage(new Long(correlation.getId().toString()));
                } else {
                    // no confirmation received and no need to do anything
                }
            }
        });
        rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            LOGGER.error("Returned: " + message + "\nreplyCode: " + replyCode
                    + "\nreplyText: " + replyText + "\nexchange/rk: " + exchange + "/" + routingKey);
        });
        return rabbitTemplate;
    }

    /**
     * Bean Jackson2JsonMessageConverter
     * @return Jackson2JsonMessageConverter
     */
    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
It's not clear what you are asking. If you mean the subscriber user doesn't have permissions to write to that exchange, your wiring is wrong.
You don't show the subscriber configuration.
Is it possible the subscriber connection factory bean is also called connectionFactory? In that case, one or the other will win.
They need different bean names.
Also check the permissions for your user: if the exchange name is not provided when initiating the consumer/producer, it will fall back to the default exchange.
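A minimal sketch of the wiring described above (names are illustrative, reusing the RabbitMqConfig accessors from the question): give the publisher and subscriber connection factories distinct bean names and qualify where each is used, so the subscriber credentials cannot leak into the publishing RabbitTemplate.
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectionFactories {

    // Publisher connection: dftp_publisher credentials.
    @Bean
    public CachingConnectionFactory publisherConnectionFactory(RabbitMqConfig cfg) {
        CachingConnectionFactory cf = new CachingConnectionFactory(cfg.getPublisherHosts(), cfg.getPublisherPort());
        cf.setUsername(cfg.getPublisherUsername());
        cf.setPassword(cfg.getPublisherPassword());
        cf.setVirtualHost("hydra.services");
        return cf;
    }

    // Subscriber connection: dftp_subscriber credentials.
    @Bean
    public CachingConnectionFactory subscriberConnectionFactory(RabbitMqConfig cfg) {
        CachingConnectionFactory cf = new CachingConnectionFactory(cfg.getSubscriberHosts(), cfg.getSubscriberPort());
        cf.setUsername(cfg.getSubscriberUsername());
        cf.setPassword(cfg.getSubscriberPassword());
        cf.setVirtualHost("hydra.services");
        return cf;
    }

    // The publishing template is built on the publisher factory...
    @Bean
    public RabbitTemplate rabbitTemplate(@Qualifier("publisherConnectionFactory") CachingConnectionFactory cf) {
        return new RabbitTemplate(cf);
    }

    // ...and the listener container factory on the subscriber factory.
    @Bean
    public SimpleRabbitListenerContainerFactory queueListenerContainer(
            @Qualifier("subscriberConnectionFactory") CachingConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        return factory;
    }
}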

How to build a nonblocking Consumer when using AsyncRabbitTemplate with Request/Reply Pattern

I'm new to RabbitMQ and currently trying to implement a nonblocking producer with a nonblocking consumer. I've built a test producer where I played around with type references:
@Service
public class Producer {

    @Autowired
    private AsyncRabbitTemplate asyncRabbitTemplate;

    public <T extends RequestEvent<S>, S> RabbitConverterFuture<S> asyncSendEventAndReceive(final T event) {
        return asyncRabbitTemplate.convertSendAndReceiveAsType(QueueConfig.EXCHANGE_NAME, event.getRoutingKey(), event, event.getResponseTypeReference());
    }
}
And, in some other place, the test function that gets called in a RestController:
@Autowired
Producer producer;

public void test() throws InterruptedException, ExecutionException {
    TestEvent requestEvent = new TestEvent("SOMEDATA");
    RabbitConverterFuture<TestResponse> reply = producer.asyncSendEventAndReceive(requestEvent);
    log.info("Hello! The Reply is: {}", reply.get());
}
This was pretty straightforward so far; where I'm stuck now is how to create a consumer that is non-blocking too. My current listener:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public TestResponse onReceive(TestEvent event) {
    Future<TestResponse> replyLater = proccessDataLater(event.getSomeData());
    return replyLater.get();
}
As far as I'm aware, when using @RabbitListener the listener runs in its own thread, and I could configure the listener container to use more than one thread for the active listeners. Because of that, blocking the listener thread with future.get() does not block the application itself. Still, there might be cases where all threads are blocked and new events sit in the queue when they wouldn't need to. What I would like is to receive the event without having to return the result immediately, which is probably not possible with @RabbitListener. Something like:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public void onReceive(TestEvent event) {
    /*
     * Some fictional RabbitMQ API call where I get a ReplyContainer which contains
     * the CorrelationID for the event. I can call replyContainer.reply(testResponse) later
     * in the code without blocking the listener thread
     */
    ReplyContainer replyContainer = AsyncRabbitTemplate.getReplyContainer();
    // processDataLater calls reply on the container when done with its action
    proccessDataLater(event.getSomeData(), replyContainer);
}
What is the best way to implement such behaviour with RabbitMQ in Spring?
EDIT Config Class:
@Configuration
@EnableRabbit
public class RabbitMQConfig implements RabbitListenerConfigurer {

    public static final String topicExchangeName = "exchange";

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public AsyncRabbitTemplate asyncRabbitTemplate() {
        return new AsyncRabbitTemplate(rabbitTemplate());
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    Queue queue() {
        return new Queue("test", false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with("foo.#");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter(producerJackson2MessageConverter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }
}
I don't have time to test it right now, but something like this should work; presumably you don't want to lose messages so you need to set the ackMode to MANUAL and do the acks yourself (as shown).
UPDATE
@SpringBootApplication
public class So52173111Application {

    private final ExecutorService exec = Executors.newCachedThreadPool();

    @Autowired
    private RabbitTemplate template;

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        return args -> {
            RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo", "test");
            future.addCallback(r -> {
                System.out.println("Reply: " + r);
            }, t -> {
                t.printStackTrace();
            });
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate template) {
        return new AsyncRabbitTemplate(template);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag,
            @Header(AmqpHeaders.CORRELATION_ID) String correlationId,
            @Header(AmqpHeaders.REPLY_TO) String replyTo) {
        ListenableFuture<String> future = handleInput(in);
        future.addCallback(result -> {
            Address address = new Address(replyTo);
            this.template.convertAndSend(address.getExchangeName(), address.getRoutingKey(), result, m -> {
                m.getMessageProperties().setCorrelationId(correlationId);
                return m;
            });
            try {
                channel.basicAck(tag, false);
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }, t -> {
            t.printStackTrace();
        });
    }

    private ListenableFuture<String> handleInput(String in) {
        SettableListenableFuture<String> future = new SettableListenableFuture<String>();
        exec.execute(() -> {
            try {
                Thread.sleep(2000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            future.set(in.toUpperCase());
        });
        return future;
    }

    public static void main(String[] args) {
        SpringApplication.run(So52173111Application.class, args);
    }
}

How to Bind Publisher Message to Custom class in my Receiver without @Payload

In my application I am publishing a message from a FileProcess service (which processes a CSV file, converts it to a CSVPojo, and publishes it to a queue using RabbitTemplate):
rabbitTemplate.convertAndSend("spring-boot-rabbitmq-BulkSolve.async_BulkSolve_Msg", "BulkSolve_GeneralrequestQueue", pojo);
I have another service, BusinessProcess, that has to listen to this queue, get the messages, and do some business processing on them. We intended to do this using Spring Batch, so I created a job that listens to the queue and processes the messages. The trigger point for the job is below.
@EnableRabbit
public class Eventscheduler {

    @Autowired
    Job csvJob;

    @Autowired
    private JobLauncher jobLauncher;

    //@Scheduled(cron="0 */2 * ? * *")
    @RabbitListener(queues = "BulkSolve_GeneralrequestQueue")
    public void trigger() {
        Reader.batchstatus = false;
        Map<String, JobParameter> maps = new HashMap<String, JobParameter>();
        maps.put("time", new JobParameter(System.currentTimeMillis()));
        JobParameters jobParameters = new JobParameters(maps);
        JobExecution execution = null;
        try {
            //JobLauncher jobLauncher = new JobLauncher();
            execution = jobLauncher.run(csvJob, jobParameters);
        } catch (JobExecutionAlreadyRunningException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobRestartException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobInstanceAlreadyCompleteException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobParametersInvalidException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("JOB Executed:" + execution.getStatus());
    }
}
So my job triggers when a message is published to this queue. After the job is triggered, I get the following exception in my reader:
org.springframework.amqp.support.converter.MessageConversionException: failed to resolve class name [com.comcast.FileProcess.Pojo.CSVPojo]
Below is my Reader class, which I use to read the message on the receiver side.
@Component
public class Reader extends AmqpItemReader<List<RequestPojo>> {

    @Autowired
    @Qualifier("rabbitTemplate")
    private RabbitTemplate rabbitTemplate;

    public static boolean batchstatus;

    private List<RequestPojo> reqList = new ArrayList<RequestPojo>();

    /*@Autowired
    private SimpleMessageListenerContainer messagelistener;*/

    public Reader(AmqpTemplate rabbitTemplate) {
        super(rabbitTemplate);
        // TODO Auto-generated constructor stub
    }

    List<RequestPojo> msgList = new ArrayList<RequestPojo>();

    @Override
    @SuppressWarnings("unchecked")
    public List<RequestPojo> read() {
        if (!batchstatus) {
            RequestPojo msg = (RequestPojo) rabbitTemplate.receiveAndConvert("BulkSolve_GeneralrequestQueue");
            //return (List<RequestPojo>) rabbitTemplate.receive();
            System.out.println("I am inside Reader");
            msgList.add((RequestPojo) msg);
            //Object result = rabbitTemplate.receiveAndConvert();
            batchstatus = true;
            return msgList;
        }
        return null;
    }
}
Here the consumer gets the POJO class with its package name from the publisher.
I am able to consume messages using @Payload (that code is below), but I want to consume messages using RabbitTemplate.receiveAndConvert("QueueName"), as shown in my Reader class.
/* Below code successfully consumed messages on the receiver side using @Payload */
@RabbitHandler
@RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "BulkSolve_GeneralrequestQueue")
public void subscribeToRequestQueue(@Payload RequestPojo sampleRequestMessage, Message message) throws InterruptedException {
    System.out.println(sampleRequestMessage.toString());
}
Can anyone help me resolve this error so that I can consume the published messages on the receiver side using RabbitTemplate.receiveAndConvert("QueueName")?
As per your suggestion, I made the configuration changes below, adding a Jackson2JsonMessageConverter to bind the message to my custom class RequestPojo, but it still does not bind the message to my custom class. Can you please point out what I am doing wrong here and what I need to do to make it work?
@Bean
public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMessageConverter(jsonMessageConverter());
    return template;
}

@Bean
public MessageConverter jsonMessageConverter() {
    return jsonCustomMessageConverter();
}

@Bean
public Jackson2JsonMessageConverter jsonCustomMessageConverter() {
    Jackson2JsonMessageConverter jsonConverter = new Jackson2JsonMessageConverter();
    jsonConverter.setClassMapper(classMapper());
    return jsonConverter;
}

@Bean
public DefaultClassMapper classMapper() {
    DefaultClassMapper classMapper = new DefaultClassMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<String, Class<?>>();
    idClassMapping.put("RequestPojo", RequestPojo.class);
    // idClassMapping.put("bar", Bar.class);
    classMapper.setIdClassMapping(idClassMapping);
    return classMapper;
}
I changed it as per your suggestion but am getting the error below.
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.springframework.amqp.support.converter.MessageConversionException: Cannot handle message
... 15 common frames omitted
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.comcast.BusinessProcess.Pojos.RequestPojo] for GenericMessage [payload=byte[230], headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedRoutingKey=BulkSolve_SummaryrequestQueue, amqp_contentEncoding=UTF-8, amqp_receivedExchange=spring-boot-rabbitmq-BulkSolve_summary.async_BulkSolve_Msg, amqp_deliveryTag=1, amqp_consumerQueue=BulkSolve_SummaryrequestQueue, amqp_redelivered=false, id=d79db57c-3cd4-d104-a343-9373215400b8, amqp_consumerTag=amq.ctag-sYwuWA5pmN07gnEUTO-p6A, contentType=application/json, __TypeId__=com.comcast.FileProcess.Pojo.CSVPojo, timestamp=1535661077865}]
at org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver.resolveArgument(PayloadArgumentResolver.java:142) ~[spring-messaging-4.3.11.RELEASE.jar!/:4.3.11.RELEASE]
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:112) ~[spring-messaging-4.3.11.RELEASE.jar!/:4.3.11.RELEASE]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:135) ~[spring-messaging-4.3.11.RELEASE.jar!/:4.3.11.RELEASE]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:107) ~[spring-messaging-4.3.11.RELEASE.jar!/:4.3.11.RELEASE]
at org.springframework.amqp.rabbit.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:49) ~[spring-rabbit-1.7.4.RELEASE.jar!/:na]
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:126) ~[spring-rabbit-1.7.4.RELEASE.jar!/:na]
... 14 common frames omitted
Below is my RabbitConfiguration class.
#Configuration("asyncRPCConfig")
#EnableScheduling
#EnableRabbit
public class RabbitMqConfiguration {
public static String replyQueue;
public static String directExchange;
public static String requestRoutingKey;
public static String replyRoutingKey;
//public static final int threads=3;
/*#Bean
public ExecutorService executorService(){
return Executors.newFixedThreadPool(threads);
}*/
/*#Bean
public CsvPublisher csvPublisher(){
return new CsvPublisher();
}
#Bean
public ExcelPublisher excelPublisher(){
return new ExcelPublisher();
}*/
/*#Bean
public GeneralQueuePublisher generalQueuePublisher(){
return new GeneralQueuePublisher();
}
*/
/*#Bean
public SummaryQueuePublisher summaryQueuePublisher(){
return new SummaryQueuePublisher();
}*/
/*#Bean
public Subscriber subscriber(){
return new Subscriber();
}*/
/*#Bean
public Subscriber1 subscriber1(){
return new Subscriber1();
}
#Bean
public Subscriber2 subscriber2(){
return new Subscriber2();
}
#Bean
public RestClient restClient(){
return new RestClient();
}*/
/*#Bean
public SubscriberGeneralQueue1 SubscriberGeneralQueue1(){
return new SubscriberGeneralQueue1();
}*/
/*#Bean
public SubscriberSummaryQueue1 SubscriberSummaryQueue1(){
return new SubscriberSummaryQueue1();
}*/
#Bean
public Eventscheduler Eventscheduler(){
return new Eventscheduler();
}
#Bean
public Executor taskExecutor() {
return Executors.newCachedThreadPool();
}
#Bean
public SimpleRabbitListenerContainerFactory simpleMessageListenerContainerFactory(ConnectionFactory connectionFactory,
SimpleRabbitListenerContainerFactoryConfigurer configurer) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setTaskExecutor(taskExecutor());
configurer.configure(factory, connectionFactory);
return factory;
}
/* #Bean
public SimpleRabbitListenerContainerFactory simpleMessageListenerContainerFactory_Summary(ConnectionFactory connectionFactory,
SimpleRabbitListenerContainerFactoryConfigurer configurer) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setTaskExecutor(taskExecutor());
configurer.configure(factory, connectionFactory);
return factory;
}*/
#Bean
public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setMessageConverter(jsonMessageConverter());
return template;
}
#Bean
public MessageConverter jsonMessageConverter() {
return jsonCustomMessageConverter();
}
#Bean
public Jackson2JsonMessageConverter jsonCustomMessageConverter() {
Jackson2JsonMessageConverter jsonConverter = new Jackson2JsonMessageConverter();
jsonConverter.setClassMapper(classMapper());
return jsonConverter;
}
#Bean
public DefaultClassMapper classMapper() {
DefaultClassMapper classMapper = new DefaultClassMapper();
Map<String, Class<?>> idClassMapping = new HashMap<String, Class<?>>();
idClassMapping.put("com.comcast.FileProcess.Pojo.CSVPojo", RequestPojo.class);
// idClassMapping.put("bar", Bar.class);
classMapper.setIdClassMapping(idClassMapping);
return classMapper;
}
#Bean
public Queue replyQueueRPC() {
return new Queue("BulkSolve_GeneralreplyQueue");
}
#Bean
public Queue requestQueueRPC() {
return new Queue("BulkSolve_GeneralrequestQueue");
}
/*below are the newly added method for two other queues*/
#Bean
public Queue summaryreplyQueueRPC() {
return new Queue("BulkSolve_SummaryreplyQueue");
}
#Bean
public Queue summaryrequestQueueRPC() {
return new Queue("BulkSolve_SummaryrequestQueue");
}
#Bean
public SimpleMessageListenerContainer rpcGeneralReplyMessageListenerContainer(ConnectionFactory connectionFactory) {
SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
simpleMessageListenerContainer.setQueues(replyQueueRPC());
simpleMessageListenerContainer.setTaskExecutor(taskExecutor());
//simpleMessageListenerContainer.setMessageListener(listenerAdapter1);
return simpleMessageListenerContainer;
}
#Bean
public SimpleMessageListenerContainer rpcSummaryReplyMessageListenerContainer(ConnectionFactory connectionFactory) {
SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
simpleMessageListenerContainer.setQueues(summaryreplyQueueRPC());
//simpleMessageListenerContainer.setMessageListener(listenerAdapter2);
simpleMessageListenerContainer.setTaskExecutor(taskExecutor());
return simpleMessageListenerContainer;
}
/* #Bean
#Qualifier("listenerAdapter1")
MessageListenerAdapter listenerAdapter1(SubscriberGeneralQueue1 generalReceiver) {
return new MessageListenerAdapter(generalReceiver, "receivegeneralQueueMessage");
}*/
/* #Bean
#Qualifier("listenerAdapter2")
MessageListenerAdapter listenerAdapter2(SubscriberSummaryQueue1 summaryReceiver) {
return new MessageListenerAdapter(summaryReceiver, "receivesummaryQueueMessage");
}*/
#Bean
public RequestPojo requestPojo(){
return new RequestPojo();
}
/* #Bean
#Qualifier("asyncGeneralRabbitTemplate")
public AsyncRabbitTemplate asyncGeneralRabbitTemplate(ConnectionFactory connectionFactory) {
AsyncRabbitTemplate asyncGeneralRabbitTemplate = new AsyncRabbitTemplate(rabbitTemplate(connectionFactory),
rpcGeneralReplyMessageListenerContainer(connectionFactory),
"spring-boot-rabbitmq-BulkSolve.async_BulkSolve_Msg" + "/" + "BulkSolve_GeneralreplyQueue");
AsyncRabbitTemplate at = new AsyncRabbitTemplate(connectionFactory, "spring-boot-rabbitmq-examples.async_rpc", "rpc_request", "replyQueueRPC","replyQueueRPC");
return asyncGeneralRabbitTemplate;
}
template defined for other 2 queues
#Bean
#Qualifier("asyncSummaryRabbitTemplate")
public AsyncRabbitTemplate asyncSummaryRabbitTemplate(ConnectionFactory connectionFactory) {
AsyncRabbitTemplate asyncSummaryRabbitTemplate = new AsyncRabbitTemplate(rabbitTemplate(connectionFactory),
rpcSummaryReplyMessageListenerContainer(connectionFactory),
"spring-boot-rabbitmq-BulkSolve_summary.async_BulkSolve_Msg" + "/" + "BulkSolve_SummaryreplyQueue");
AsyncRabbitTemplate at = new AsyncRabbitTemplate(connectionFactory, "spring-boot-rabbitmq-examples.async_rpc", "rpc_request", "replyQueueRPC","replyQueueRPC");
return asyncSummaryRabbitTemplate;
}*/
#Bean
public DirectExchange directExchange() {
return new DirectExchange("spring-boot-rabbitmq-BulkSolve.async_BulkSolve_Msg");
}
//Added new exchange
#Bean
public DirectExchange directExchange1() {
return new DirectExchange("spring-boot-rabbitmq-BulkSolve_summary.async_BulkSolve_Msg");
}
#Bean
public List<Binding> bindings() {
return Arrays.asList(
BindingBuilder.bind(requestQueueRPC()).to(directExchange()).with("BulkSolve_GeneralrequestQueue"),
BindingBuilder.bind(replyQueueRPC()).to(directExchange()).with("BulkSolve_GeneralreplyQueue"),
BindingBuilder.bind(summaryrequestQueueRPC()).to(directExchange1()).with("BulkSolve_SummaryrequestQueue"),
BindingBuilder.bind(summaryreplyQueueRPC()).to(directExchange1()).with("BulkSolve_SummaryreplyQueue")
);
}
}
Below is my Reader class:
@Component
public class Reader extends AmqpItemReader<List<RequestPojo>> {

    @Autowired
    @Qualifier("rabbitTemplate")
    private RabbitTemplate rabbitTemplate;

    public static boolean batchstatus;

    private List<RequestPojo> reqList = new ArrayList<RequestPojo>();

    /*@Autowired
    private SimpleMessageListenerContainer messagelistener;*/

    public Reader(AmqpTemplate rabbitTemplate) {
        super(rabbitTemplate);
        // TODO Auto-generated constructor stub
    }

    List<RequestPojo> msgList = new ArrayList<RequestPojo>();

    @Override
    @SuppressWarnings("unchecked")
    public List<RequestPojo> read() {
        if (!batchstatus) {
            RequestPojo msg = (RequestPojo) rabbitTemplate.receiveAndConvert("BulkSolve_GeneralrequestQueue");
            //rabbitTemplate.receiveandco
            //return (List<RequestPojo>) rabbitTemplate.receive();
            System.out.println("I am inside Reader" + msg);
            msgList.add(msg);
            //Object result = rabbitTemplate.receiveAndConvert();
            batchstatus = true;
            return msgList;
        }
        return null;
    }
}
Below is my trigger point code, which triggers the job when there is a message in the queue.
@EnableRabbit
public class Eventscheduler {

    @Autowired
    Job csvJob;

    @Autowired
    private JobLauncher jobLauncher;

    //@Scheduled(cron="0 */2 * ? * *")
    @RabbitListener(queues = "BulkSolve_GeneralrequestQueue")
    public void trigger() {
        Reader.batchstatus = false;
        Map<String, JobParameter> maps = new HashMap<String, JobParameter>();
        maps.put("time", new JobParameter(System.currentTimeMillis()));
        JobParameters jobParameters = new JobParameters(maps);
        JobExecution execution = null;
        try {
            //JobLauncher jobLauncher = new JobLauncher();
            execution = jobLauncher.run(csvJob, jobParameters);
        } catch (JobExecutionAlreadyRunningException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobRestartException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobInstanceAlreadyCompleteException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (JobParametersInvalidException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("JOB Executed:" + execution.getStatus());
    }
}
Thanks.
As you noticed, there is a slight difference between subscribeToRequestQueue(@Payload RequestPojo sampleRequestMessage) and rabbitTemplate.receiveAndConvert("BulkSolve_GeneralrequestQueue"). Right: it is exactly the @Payload RequestPojo target type that is missing in the raw receiveAndConvert(). When that method is invoked, there is no target type to consult for the expected conversion, so we just fall back to whatever is in the incoming message. In your case that is the __TypeId__ header carrying the source type from the producer, com.comcast.FileProcess.Pojo.CSVPojo.
If you really want to enforce the conversion into RequestPojo on the consumer side, you need to consider the overloaded receiveAndConvert() variant:
/**
 * Receive a message if there is one from a specific queue and convert it to a Java
 * object. Returns immediately, possibly with a null value. Requires a
 * {@link org.springframework.amqp.support.converter.SmartMessageConverter}.
 *
 * @param queueName the name of the queue to poll
 * @param type the type to convert to.
 * @param <T> the type.
 * @return a message or null if there is none waiting
 * @throws AmqpException if there is a problem
 * @since 2.0
 */
<T> T receiveAndConvert(String queueName, ParameterizedTypeReference<T> type) throws AmqpException;
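A minimal usage sketch of that variant (my own illustration, not part of the answer): it assumes Spring AMQP 2.0+, since the stack trace above shows spring-rabbit 1.7.x where this overload does not exist, and it relies on the Jackson2JsonMessageConverter already set on the template, which is a SmartMessageConverter in 2.x. The class name is made up.
import java.util.ArrayList;
import java.util.List;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.core.ParameterizedTypeReference;

public class TypedQueueReader {

    private final RabbitTemplate rabbitTemplate;

    public TypedQueueReader(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public List<RequestPojo> readOne() {
        // The explicit target type means the converter no longer depends on the
        // producer's __TypeId__ header (com.comcast.FileProcess.Pojo.CSVPojo).
        RequestPojo msg = rabbitTemplate.receiveAndConvert(
                "BulkSolve_GeneralrequestQueue",
                new ParameterizedTypeReference<RequestPojo>() { });
        if (msg == null) {
            return null; // the queue was empty
        }
        List<RequestPojo> batch = new ArrayList<>();
        batch.add(msg);
        return batch;
    }
}
With the explicit type on the receive call, the DefaultClassMapper idClassMapping workaround shown earlier should no longer be necessary.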
