multiple queues in EJB @MessageDriven annotation - ejb-3.0

I have 3 queues that need to be listened to by an MDB, and based on the input it reads, I will split out the task for each category of input.
As of now the code works fine for only one queue, and I don't know how to implement it for more than one queue. Could you please guide me?
@MessageDriven(mappedName="receiver1")
public class MDBMessages implements MessageListener
How can I make my MDBMessages bean listen to the receiver2 and receiver3 queues?
Thanks
Prabhakar

From Documentation :
A message-driven bean is defined for a
single messaging type, in accordance
with the message listener interface it
employs.
Therefore it will not be possible to map one MDB to multiple destination types.
I haven't tried it, but you can try configuring the MDB in ejb-jar.xml with different JNDI names pointing to the same class, and add a different destination to each of them. If that configuration works, MDBMessages will be able to listen for messages on all the queues specified in the XML.
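That descriptor approach might look like the sketch below (untested; the ejb-names are made up, and the exact activation property name varies by server -- older JBoss versions use destination, JMS 2.0 containers use destinationLookup):

```xml
<enterprise-beans>
  <!-- Two MDB declarations backed by the same bean class, one per queue -->
  <message-driven>
    <ejb-name>MDBMessagesReceiver2</ejb-name>
    <ejb-class>MDBMessages</ejb-class>
    <activation-config>
      <activation-config-property>
        <activation-config-property-name>destination</activation-config-property-name>
        <activation-config-property-value>receiver2</activation-config-property-value>
      </activation-config-property>
    </activation-config>
  </message-driven>
  <message-driven>
    <ejb-name>MDBMessagesReceiver3</ejb-name>
    <ejb-class>MDBMessages</ejb-class>
    <activation-config>
      <activation-config-property>
        <activation-config-property-name>destination</activation-config-property-name>
        <activation-config-property-value>receiver3</activation-config-property-value>
      </activation-config-property>
    </activation-config>
  </message-driven>
</enterprise-beans>
```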

Use the deployment descriptor to create multiple instances of your MDB; each instance listens to one queue.
There are also brokers (like ActiveMQ) that allow one MDB to listen on multiple destinations of the same type (queue or topic), provided the MDB uses the ActiveMQ resource adapter.

@Consumer(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/MyTasksProcess")
})
public class MyProcessorMDBean implements Downloader {
    public void processSomething(Serializable anyParameter) {
        // Do the actual processing
    }
}
For a given message-driven bean you can route your messages to a single queue, so you can use only a single destination type in your bean class.

Related

Way to determine Kafka Topic for @KafkaListener on application startup?

We have 5 topics and we want a service that scales to, for example, 5 instances of the same app.
This means I would want to dynamically determine (via, for example, Redis locking or a similar mechanism) which instance should listen to which topic.
I know that we could have 1 topic with 5 partitions, and each node in the same consumer group would pick up a partition. Also, if we have a separately deployed service, we can set the topic via properties.
The issue is that those two approaches are not suitable for our situation, and we want to see if it is possible to do it the way I explained above.
@PostConstruct
private void postConstruct() {
    // Do logic via Redis locking or similar to determine the topic
    dynamicallyDeterminedVariable = // SOME LOGIC
}

@KafkaListener(topics = "#{dynamicallyDeterminedVariable}")
void listener(String data) {
    LOG.info(data);
}
Yes, you can use SpEL for the topic name:
#{@someOtherBean.whichTopicToUse()}
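A sketch of what such a bean might look like (the class name TopicResolver, its constructor arguments, and the modulo-based selection are all made up for illustration; the Redis-lock step is reduced to a pre-computed instance index):

```java
import java.util.List;

// Hypothetical helper: each app instance resolves its own topic at startup.
class TopicResolver {
    private final List<String> topics;
    private final int instanceIndex; // e.g. acquired via a Redis lock at startup

    TopicResolver(List<String> topics, int instanceIndex) {
        this.topics = topics;
        this.instanceIndex = instanceIndex;
    }

    // Pick one topic per instance; wraps around if there are more instances than topics.
    public String whichTopicToUse() {
        return topics.get(instanceIndex % topics.size());
    }
}
```

With the bean registered under the name topicResolver, the listener could then reference it as @KafkaListener(topics = "#{@topicResolver.whichTopicToUse()}").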

What is the difference between events, channels, and topics?

With reference to Kafka, what is the difference between all of these?
Let's say I have a component "Order" which must emit events to a Kafka channel when I create/cancel/modify orders.
And I create a channel, say "Order-out". Is "topic" the name I can use for this channel?
What is a topic vs. a channel?
And this is the Order-Details component, which creates and maintains records of all such orders.
I want to use an OrderEvents class inside the subscriber section of this component.
public class OrderEvents {
public static final String ORDER_CREATED = "ORDER_CREATED";
public static final String ORDER_MODIFIED = "ORDER_MODIFIED";
public static final String ORDER_CANCELLED = "ORDER_CANCELLED";
}
An event is a single record. In Spring, you might work with a Message class to wrap an event.
Channel is a Spring Integration term used via Spring-Kafka or Spring Cloud Stream Binders for inputs and outputs. A Binder determines the implementation of the Channel.
Topic is a Kafka unit of organization.
An event will be serialized into bytes, and sent via a channel to a Kafka topic.
A Kafka record will be consumed from a Kafka topic, through a channel, and deserialized into an application event class.
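In Spring Cloud Stream, that channel-to-topic bridging is pure configuration. A sketch (the binding name order-out and the topic name orders are illustrative, matching the example in the question):

```properties
# application.properties (hypothetical names)
# Bridge the channel "order-out" to the Kafka topic "orders"
spring.cloud.stream.bindings.order-out.destination=orders
```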

Multiple consumers with the same name in different projects subscribed to the same queue

We have a UserCreated event that gets published from UserManagement.Api. I have two other APIs, Payments.Api and Notification.Api, that should react to that event.
In both APIs I have public class UserCreatedConsumer : IConsumer<UserCreated> (so different namespaces), but only one queue (on SQS) gets created for both consumers.
What is the best way to deal with this situation?
You didn't share your configuration, but if you're using:
x.AddConsumer<UserCreatedConsumer>();
As part of your MassTransit configuration, you can specify an InstanceId for that consumer to generate a unique endpoint address.
x.AddConsumer<UserCreatedConsumer>()
.Endpoint(x => x.InstanceId = "unique-value");
Every separate service (not an instance of the same service) needs to have a different queue name of the receiving endpoint, as described in the docs:
cfg.ReceiveEndpoint("queue-name-per-service-type", e =>
{
// rest of the configuration
});
It's also mentioned in the common mistakes article.

Workaround to fix StreamListener constant Channel Name

I am consuming messages with Spring Cloud Stream, using something like:
@StreamListener(target = "CONSTANT_CHANNEL_NAME")
public void readingData(String input){
    System.out.println("consumed info is " + input);
}
But I want the channel name to vary per environment and be picked up from a property file, while according to Spring the channel name should be a constant.
Is there any workaround for this problem?
Edit:1
Let's see the actual situation:
I am using multiple queues and DLQ queues, and their binding is done with RabbitMQ.
I want to change my channel names and queue names per environment.
I want to do it all on the same AMQP host.
My Sink Code
public interface ProcessorSink extends Sink {

    @Input(CONSTANT_CHANNEL_NAME)
    SubscribableChannel channel();

    @Input(CONSTANT_CHANNEL_NAME_1)
    SubscribableChannel channel2();

    @Input(CONSTANT_CHANNEL_NAME_2)
    SubscribableChannel channel3();
}
You can pick target value from property file as below:
@StreamListener(target = "${streamListener.target}")
public void readingData(String input){
    System.out.println("consumed info is " + input);
}
application.yml
streamListener:
  target: CONSTANT_CHANNEL_NAME
While there are many ways to do that, I wonder why you even care. If anything, you do want to keep the channel name constant so it is always the same, and then, through configuration properties, map it to different remote destinations (e.g., Kafka, Rabbit etc.). For example, spring.cloud.stream.bindings.input.destination=myKafkaTopic states that the channel named input will be mapped to (bridged with) the Kafka topic named myKafkaTopic.
In fact, to further prove my point, we completely abstracted away channels altogether for users of the spring-cloud-function programming model, but that is a whole different discussion.
My point is that I believe you are actually creating a problem rather than solving one, since by externalising the channel name you create the possibility that, due to misconfiguration, your actual bound channel and the channel you mention in your properties will not be the same.
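The per-environment setup the answer recommends can be sketched with Spring profiles: the channel name stays constant in code, and only the destination varies (the destination values below are made up):

```properties
# application-dev.properties
spring.cloud.stream.bindings.input.destination=orders-dev

# application-prod.properties
spring.cloud.stream.bindings.input.destination=orders-prod
```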

Is there a way to make the message queue processing by MDB, on WildFly, FIFO?

Creating a JMS queue on WildFly 8.2 (with JMS provider HornetQ), and having a message-driven bean "activated" by this queue, I saw that if the producer sends multiple messages to the queue in rapid succession, the message-driven bean does not necessarily process them in the order in which they were sent. Can one configure WildFly so that the messages are processed in the order in which they are sent (first in, first out)?
(I think I understood what happens after reading https://stackoverflow.com/a/6744508/999264)
There are multiple threads executing the onMessage method of the message-driven bean (MDB), one thread per message, and thus, if multiple messages arrive almost simultaneously, one cannot know which message will be processed first (because one cannot know which of the threads will finish its onMessage execution first). The only way to guarantee the order is to make sure that the number of threads is 1: in that case the single thread first processes the first message, then the second one, and so on.
In WildFly and JBoss, the @MessageDriven annotation supports the activation config property maxSession, which, as I understand it, controls the maximum number of sessions (threads) used to process the messages that arrive from the queue at the MDB. Setting its value to 1, as below,
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/myOwnQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1")})
public class MyOwnMDB implements MessageListener {
    public void onMessage(Message message) {
        System.out.println("message received " + message.toString());
    }
}
and running the code, I see that the messages are indeed processed by the message-driven bean in the order in which they were sent.
I changed the title of the question, since the original title, "Is there a way to make the message queue on WildFly FIFO?", was incorrect: the queue itself is FIFO (I actually found it written somewhere that this is part of the JMS spec, though I cannot pinpoint the exact place).
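The effect of maxSession = 1 can be illustrated without a broker: a single-threaded executor (the FifoDemo class below is a made-up stand-in for the lone JMS session) runs submitted tasks strictly in submission order, which is why the MDB above sees messages in FIFO order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Plain-Java analogy for maxSession=1: one worker thread, strict FIFO processing.
class FifoDemo {
    public static List<Integer> process(int messages) {
        List<Integer> processed = new ArrayList<>(); // touched only by the single worker thread
        ExecutorService singleSession = Executors.newSingleThreadExecutor();
        for (int i = 0; i < messages; i++) {
            final int msg = i;
            singleSession.submit(() -> processed.add(msg)); // plays the role of onMessage
        }
        singleSession.shutdown();
        try {
            singleSession.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed; // in submission order: 0, 1, ..., messages-1
    }
}
```

With more than one worker thread, the same loop could interleave completions arbitrarily, which mirrors what happens when maxSession is greater than 1.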
