I have this design problem and am trying to find the best way to implement it using JMS.
The web app has a single Listener, multiple Producers, and multiple WorkerBees. All of the messages produced are queued in the Listener, which pushes them to the WorkerBees. I want to implement a process that can send messages from the Listener queue to each WorkerBee using some kind of load balancing, so that only a single instance of each message exists, either in the Listener queue or in a WorkerBee queue.
What's the best way to do it? Does JMS sound like a good choice here? How would I push a message from the Listener queue to a WorkerBee queue?
Open to suggestions :) I appreciate your responses.
Messaging is a good choice for such a task. I think you should use the Apache Camel framework as the routing platform for this purpose. It lets you separate business logic from the integration layer in quite a graceful way.
Camel supports plenty of components out of the box, including JMS endpoints.
You can delegate to one of your WorkerBees using the Load Balancer pattern.
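For illustration, a Camel route using the round-robin load balancer might look roughly like this (the JMS queue names here are placeholders, not anything from your setup):

```java
import org.apache.camel.builder.RouteBuilder;

public class WorkerBeeRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Drain the Listener queue and round-robin each message
        // to exactly one of the WorkerBee queues.
        from("jms:queue:listenerQueue")
            .loadBalance().roundRobin()
                .to("jms:queue:workerBee1")
                .to("jms:queue:workerBee2")
                .to("jms:queue:workerBee3")
            .end();
    }
}
```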
JMS sounds like a good option. Since you have multiple instances of WorkerBee, make all WorkerBees listen to a single queue, e.g. InWorkBeeQueue.
Now you can publish messages from the web app Listener to InWorkBeeQueue. Write simple Java JMS producer code to publish messages to this queue. Whichever WorkerBee instance is free will read a message from InWorkBeeQueue and process it.
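A minimal producer sketch, assuming ActiveMQ as the broker (the broker URL and payload are placeholders):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class InWorkBeeQueueProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("InWorkBeeQueue");
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("work item payload");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

Because all WorkerBees consume from the same queue, the broker itself does the load balancing for you.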
If you want to avoid writing new JMS producer code, you can map messages directly from the web app queue to InWorkBeeQueue using Apache Camel routes.
I am currently exploring de-duplication strategies within ActiveMQ. Artemis supports duplicate detection, but I'm not sure about ActiveMQ 5.
Is it possible to prevent a message from being placed on a queue if it currently exists on the queue in ActiveMQ 5?
Messages which are no longer on the queue, but were on it at some point in the past, should be allowed back onto the queue.
The underlying capability I am trying to achieve is flow control in which multiple messages of the same value are not placed on the queue, so as to avoid duplicate processing.
Based on the documentation, I have tried using the defined message property _AMQ_DUPL_ID, but I am still experiencing duplication. I suspect this may not be supported in ActiveMQ 5 and am unsure what the alternative options are. I'm open to suggestions.
NOTE: The ActiveMQ instance being used is provided by Amazon MQ.
As you suspect, ActiveMQ 5.x doesn't support automatic duplicate detection; this is only supported in ActiveMQ Artemis. That said, even in Artemis messages are not removed from the broker's duplicate ID cache when they are consumed from the queue, because in most cases a duplicate sent after the original has been consumed is still considered a duplicate.
You may be able to implement some kind of duplicate detection in a broker plugin, but I have no idea if Amazon MQ supports adding custom plugins. It's more likely that you'll have to implement duplicate detection in the clients themselves.
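As a rough sketch of what client-side duplicate detection could look like (the "dedupId" property name and the cache size are made up; use whatever key your producers set to identify logically identical messages):

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Remember the application-level duplicate IDs already processed and skip repeats.
public class DeduplicatingListener implements MessageListener {

    private static final int MAX_REMEMBERED = 10_000;
    private final Set<String> seenIds = Collections.synchronizedSet(new LinkedHashSet<>());

    @Override
    public void onMessage(Message message) {
        try {
            String dedupId = message.getStringProperty("dedupId");
            if (dedupId != null && !seenIds.add(dedupId)) {
                return; // already processed this logical message
            }
            if (seenIds.size() > MAX_REMEMBERED) {
                seenIds.clear(); // crude cap; a real cache would evict the oldest entries
            }
            process(message);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(Message message) {
        // application-specific handling goes here
    }
}
```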
I'm confused when it comes to JMS queues vs. topics. What I want is for messages to go to every subscriber, and for subscribers to receive the messages they missed while inactive once they become active again. However, I don't have control over whether or not subscribers use durable subscriptions. Is there a way to set up a persistent queue so that every subscriber will receive the same messages? And how would I set this up using Spring config?
Thanks much.
This is mostly a question where the design of your system affects the outcome.
You could use UI tooling to create durable subscriptions for the clients that need them, but that is cumbersome and error prone. You could use something like Camel, or other configuration on the target broker, to fan out messages from an incoming queue to outgoing queues that map to the consumer subscriptions, as sketched below.
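As a sketch of the Camel option, assuming one incoming queue and a known set of per-consumer queues (all names here are invented):

```java
import org.apache.camel.builder.RouteBuilder;

public class FanoutRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Give every consumer its own copy of each incoming message.
        from("jms:queue:incoming.events")
            .multicast()
                .to("jms:queue:consumerA.events",
                    "jms:queue:consumerB.events");
    }
}
```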
It all depends mostly on the requirements and your overall design so a real answer is beyond the scope of a SO answer without you doing some more legwork to narrow the scope a bit. JMS itself does not define any answer for this so it will come down a bit to the broker you've chosen and possibly other third party tooling that you might pick to do what you need.
At the moment we are designing and planning to transform our system to a microservice architecture.
For loose coupling we are thinking about an event-driven design with a JMS topic. This looks great, but I don't know how we can solve the problem of multiple instances of a microservice.
For failover and load balancing we have n instances of each service. If an event is published to the topic, each instance will receive and process that event.
It's possible to handle this with locks and processed-state flags in the data store, but that solution looks very expensive, and every instance still does the same work. That is not load balancing to me.
Is there a good solution or best practice for this pattern?
Why not use a queue instead of a topic? Then your instances will compete for messages rather than each getting a copy.
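To make the competing-consumers idea concrete, here is a rough sketch assuming ActiveMQ and a made-up queue name; every service instance runs the same consumer, and the broker delivers each message to only one of them:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EventConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("service.events"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);
        while (true) {
            // Blocks until the broker hands this instance a message.
            TextMessage message = (TextMessage) consumer.receive();
            System.out.println("processed: " + message.getText());
        }
    }
}
```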
EDIT
RabbitMQ might be a better fit for you: publish to a fanout exchange and have any number of queues bound to it, with each queue having any number of competing consumers.
I have also seen JMS topics used where competing clients connect with the same client id. Some (all?) brokers will only allow one such client to consume. The others keep trying to reconnect until the current consumer dies.
I'm looking at swapping out ActiveMQ for RabbitMQ for a few reasons. I currently have multiple services which are each capable of publishing events (and they publish those events to a specific VirtualTopic in ActiveMQ). Each of the services is also capable of consuming messages from the other services. Consumers are set up so that each subscribes to its own queue on the VirtualTopic.
This buys me the ability to fan messages out to multiple queues (topic-like functionality) while keeping the benefits of queues (load balancing and persistence).
It seems like this is roughly equivalent to RabbitMQ's fanout exchange. However, the part that I found very useful in ActiveMQ is that the producer doesn't need to have any knowledge of the consumers; it simply publishes to the virtual topic. It seems that in RabbitMQ, when the exchange is created, I need a definitive list of queues to publish that message to.
tl;dr
Is there any routing scheme in RabbitMQ that is equivalent to ActiveMQ's Virtual Topic, such that I can produce messages to a topic that are distributed to any queue that has been created off of that Virtual Topic, without requiring a hard-coded routing scheme somewhere in RMQ?
I realized after posting this question that it is pretty trivial to do this (not sure why I never thought of it before).
I was looking at it from the wrong direction, wondering how I could automatically have the publisher configure queues for the recipients - which isn't the right way to approach this question.
Instead, I have the subscribers, when they start up, bind themselves to the exchange that the publisher uses, which provides the inversion of control I'm looking for (publishers need not know anything about their consumers).
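A sketch of that subscriber-side binding using the RabbitMQ Java client (the exchange and queue names are just examples):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class EventSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declaring the exchange is idempotent, so publisher and subscribers
        // can both declare it without knowing about each other.
        channel.exchangeDeclare("service.events", "fanout", true);

        // Each subscriber declares its own durable queue and binds it to the exchange itself.
        String queueName = "orders-service.events";
        channel.queueDeclare(queueName, true, false, false, null);
        channel.queueBind(queueName, "service.events", "");

        DeliverCallback callback = (consumerTag, delivery) ->
                System.out.println("received: " + new String(delivery.getBody(), "UTF-8"));
        channel.basicConsume(queueName, true, callback, consumerTag -> { });
    }
}
```

The publisher only ever names the exchange; it never sees the queues.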
I am trying to understand how Facebook's chat feature receives messages without continuously polling the server.
Firebug shows me a single GET XmlHttpRequest continuously sitting there, waiting for a response from the server. Even after 5 minutes, it never timed out.
How are they preventing timeout?
An AJAX request can just sit there like that indefinitely, waiting for a response?
Can I do this with JSONRequest? I see this at json.org:
JSONRequest is designed to support duplex connections. This permits applications in which the server can asynchronously initiate transmissions. This is done by using two simultaneous requests: one to send and the other to receive. By using the timeout parameter, a POST request can be left pending until the server determines that it has timely data to send.
Or is there another way to let an AJAX call just sit there, waiting, besides using JSONRequest?
Facebook uses a technique which is now called Comet to push messages from the server to the client instead of having the client poll the server.
There are many ways that this can be implemented, with XMLHttpRequest long polling being just one option. The principle behind this method is that the client sends an ordinary XMLHttpRequest but the server doesn't respond until some event happens (such as another user sending a message), so the client is forced to wait. When the client receives a response (or if the request times out) the client simply creates a new request so that it always has one open request to the server.
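A minimal sketch of the client side of long polling, written here in Java with java.net.http rather than browser JavaScript (the endpoint URL is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI uri = URI.create("https://example.com/poll"); // placeholder endpoint
        while (true) {
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .timeout(Duration.ofMinutes(5)) // server holds the request open until it has data
                    .GET()
                    .build();
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("event: " + response.body());
            } catch (HttpTimeoutException e) {
                // No data arrived within the timeout; fall through and poll again.
            }
            // Immediately re-issue the request so one is always pending at the server.
        }
    }
}
```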