Is there a way to implement a topology with ActiveMQ where P is a publisher, s_a is a subscriber of service A, and s_b1 and s_b2 are subscribers of service B? The latter two are clustered for load balancing (so either s_b1 or s_b2 gets a message, but not both).
Is there a way to combine publish-subscribe with point-to-point messaging, so that one of the subscribers would be a queue that two consumers listen on?
Thanks,
Gil
Yes this can be done.
You may want to look at Apache ActiveMQ Artemis, which implements JMS 2.0 and supports what you ask for out of the box. JMS 2.0 allows multiple consumers to share a topic subscription so that messages are load balanced among them.
For ActiveMQ 5.x and JMS 1.1 you can use Virtual Destinations instead. They work with naming conventions.
If you name your topic: VirtualTopic.StockPrice and publish messages to it, you will be able to consume from queues named Consumer.Consumer1.VirtualTopic.StockPrice, Consumer.Consumer2.VirtualTopic.StockPrice etc. Consumer1 can be anything.
You can reconfigure ActiveMQ to use other names for Virtual Destinations (prefix, suffix etc).
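For illustration, a minimal sketch of that pattern with the ActiveMQ 5.x JMS client could look like the following (the broker URL, the topic/queue names and the Consumer1/Consumer2 identifiers are just examples):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Service A gets its own queue, so it receives a copy of every message.
        MessageConsumer serviceA = session.createConsumer(
                session.createQueue("Consumer.Consumer1.VirtualTopic.StockPrice"));

        // Service B uses a second queue with two competing consumers, so each
        // message is delivered to only one of them (load balancing).
        MessageConsumer serviceB1 = session.createConsumer(
                session.createQueue("Consumer.Consumer2.VirtualTopic.StockPrice"));
        MessageConsumer serviceB2 = session.createConsumer(
                session.createQueue("Consumer.Consumer2.VirtualTopic.StockPrice"));

        // The publisher only ever talks to the virtual topic.
        MessageProducer producer = session.createProducer(
                session.createTopic("VirtualTopic.StockPrice"));
        producer.send(session.createTextMessage("IBM 142.50"));

        System.out.println(((TextMessage) serviceA.receive(5000)).getText());
        connection.close();
    }
}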
We have configured our ActiveMQ message broker as a Spring Boot project, and there's another Spring Boot application (let's call it service-A) that has a listener configured to listen to some topics using the @JmsListener annotation. It's a Spring Cloud microservice application.
The problem:
service-A may have multiple instances running.
If we have 2 instances running, then any message arriving on the topic gets consumed twice.
How can we avoid every instance listening to the topic?
We want to make sure that the topic is consumed only once, no matter the number of service-A instances.
Is it possible to run the microservice in a cluster mode or something similar? I also checked out ActiveMQ virtual destinations, but I'm not sure whether that's the solution to the problem.
We have also thought of an approach where we can decide who's the leader node from the multiple instances, but that's the last resort and we are looking for a cleaner approach.
Any useful pointers, references are welcome.
What you really want is a shared topic subscription which was added in JMS 2. Unfortunately ActiveMQ 5.x doesn't support JMS 2. However, ActiveMQ Artemis does.
ActiveMQ Artemis is the next generation broker from ActiveMQ. It supports most of the same features as ActiveMQ 5.x (including full support for OpenWire clients) as well as many other features that 5.x doesn't support (e.g. JMS 2, shared-nothing high-availability using replication, last-value queues, ring queues, metrics plugins for integration with tools like Prometheus, duplicate message detection, etc.). Furthermore, ActiveMQ Artemis is built on a high-performance, non-blocking core which means scalability is much better as well.
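A minimal sketch of what a JMS 2 shared subscription looks like from the client side, assuming the ActiveMQ Artemis JMS client and placeholder broker URL, topic and subscription names - each service-A instance runs the same code, and the broker delivers every message to exactly one of the instances sharing the subscription:

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedSubscriptionSketch {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext()) {
            Topic topic = context.createTopic("service-a.events");
            // All instances use the same subscription name, so the broker treats
            // them as one shared durable subscription.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "service-a-shared");
            System.out.println("Got: " + consumer.receiveBody(String.class, 5000));
        }
    }
}

Spring's @JmsListener containers can also be configured to use a shared subscription once the underlying provider supports JMS 2.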
I'm using ActiveMQ Artemis messaging system, and I'm testing my setup with STOMP (stomp.py).
I created an "address" on Artemis called Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Topic, and attached two queues to it:
Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue (multicast)
Site.SOF.Order.Fulfillment.Submission.ActiveOmni.log.queue (multicast)
Here are the exported bindings:
<bindings>
<address-binding routing-types="ANYCAST" name="DLQ" id="2"/>
<address-binding routing-types="ANYCAST" name="ExpiryQueue" id="6"/>
<address-binding routing-types="MULTICAST" name="activemq.notifications" id="10"/>
<address-binding routing-types="MULTICAST" name="Site.SOF.Order.Fulfillment.Submission.Topic" id="92"/>
<queue-binding address="Site.SOF.Order.Fulfillment.Submission.Topic" filter-string="" name="Site.SOF.Order.Fulfillment.Submission.log.Queue" id="97" routing-type="MULTICAST"/>
<queue-binding address="DLQ" filter-string="" name="DLQ" id="4" routing-type="ANYCAST"/>
<queue-binding address="ExpiryQueue" filter-string="" name="ExpiryQueue" id="8" routing-type="ANYCAST"/>
<queue-binding address="Site.SOF.Order.Fulfillment.Submission.Topic" filter-string="" name="Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Queue" id="94" routing-type="MULTICAST"/>
</bindings>
I created a user with access to Site.*
So how do I access the queues? For example, if I use the stomp.py command line tool like so:
> subscribe Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue
I get the error:
[username] does not have permission='CREATE_ADDRESS' on address Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue
So I try adding /queue/ to the front, as I've seen in the tutorials
> subscribe /queue/Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue
But I get the same error:
[username] does not have permission='CREATE_ADDRESS' on address /queue/Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue
I can send to the topic/address without trouble. The following results in a "hello" message appearing in both queues.
send Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Topic "hello"
Is this a naming convention I'm missing? Or a way to specify topic vs queue? What am I missing here that's too obvious to be clearly documented?
Let me provide some background information first...
The ActiveMQ Artemis address model includes 3 main elements - addresses, queues, and routing types. These are low-level entities which are used to implement all the different semantics for all the different protocols and configurations the broker supports.
STOMP, by contrast, just supports ambiguous "destinations." The STOMP 1.2 specification says this about destinations:
A STOMP server is modelled as a set of destinations to which messages can be sent. The STOMP protocol treats destinations as opaque string and their syntax is server implementation specific. Additionally STOMP does not define what the delivery semantics of destinations should be. The delivery, or “message exchange”, semantics of destinations can vary from server to server and even from destination to destination. This allows servers to be creative with the semantics that they can support with STOMP.
In the core address model messages are sent to addresses and consumed from queues. Addresses and queues are named independently so the names that the core producer and consumer use can be different.
ActiveMQ Artemis supports both point-to-point and pub/sub semantics for STOMP destinations. You can read more about how to configure these semantics in the latest ActiveMQ Artemis STOMP documentation.
In the STOMP point-to-point use-case a message is sent to a destination and consumed from that same destination. The name that both the STOMP producer and consumer use is the same. In order to support these semantics the broker uses a core address and a core anycast queue with the same name.
In the STOMP pub/sub use-case a client creates a subscription on a destination which it can then use to receive messages sent to that destination. The name that both the STOMP subscriber and producer use is the same. In order to support these semantics the broker uses a core address and a core multicast queue with different names. The name of the core queue is generated by the broker automatically. The subscriber can then receive the messages directly from the underlying core subscription queue.
All this is done behind the scenes for STOMP clients, but it's important to understand how STOMP maps to ActiveMQ Artemis' address model so you can configure things appropriately.
In your case you've configured an address with 2 multicast queues and you're trying to consume from those queues. This use-case is a mix between point-to-point and pub/sub because you have statically defined queues (i.e. not queues auto-created by the broker in response to a subscriber) but the queues are multicast. These queues are like statically configured durable subscriptions. To access the queues directly you need to use their fully-qualified queue name (i.e. FQQN) from your client. The FQQN follows the pattern of <address>::<queue> so in your case you'd use either
Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Topic::Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue
or
Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Topic::Site.SOF.Order.Fulfillment.Submission.ActiveOmni.log.queue
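The FQQN is just another destination name, so your STOMP subscriber would pass that address::queue string as its destination. As a sanity check you can also consume from an FQQN with any other supported client; a minimal sketch with the Artemis JMS client (URL and credentials are placeholders):

import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class FqqnConsumerSketch {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext("username", "password")) {
            // The <address>::<queue> form points at one specific queue bound to the address.
            Queue queue = context.createQueue(
                "Site.SOF.Order.Fulfillment.Submission.ActiveOmni.Topic::Site.SOF.Order.Fulfillment.Submission.ActiveOmni.queue");
            JMSConsumer consumer = context.createConsumer(queue);
            System.out.println("Received: " + consumer.receiveBody(String.class, 5000));
        }
    }
}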
We are using ActiveMQ as the message broker for our microservices. My question is about how to build a failover mechanism for a consumer service. If the consumer service is down, how should we proceed? Can we use something like Hystrix or any other failover/circuit-breaker mechanism for such scenarios?
Short answer is 'yes'. The details of how to implement it depend on your use case. Recommended architecture:
1. Publish to a virtual topic.
Option A: multi-threaded consumers
2. Multiple competing consumers read from the queue that acts as the virtual topic's subscriber.
Option B: single-threaded HA consumers
2. Multiple competing consumers read from the queue that acts as the virtual topic's subscriber, using an exclusive consumer so that only one thread processes data while the other consumers remain hot standby (see the sketch below).
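For Option B, the exclusive consumer is requested with an ActiveMQ destination option; a minimal sketch (the broker URL and queue name are just examples):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExclusiveConsumerSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Every instance opens a consumer on the same queue with the exclusive option;
        // the broker picks one active consumer and the others stay in hot standby,
        // taking over automatically if the active one disconnects.
        Queue queue = session.createQueue("Consumer.ServiceB.VirtualTopic.Orders?consumer.exclusive=true");
        MessageConsumer consumer = session.createConsumer(queue);
        Message message = consumer.receive();
        // ... process message ...
        connection.close();
    }
}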
I read the HornetQ documentation and am still quite confused. Can someone give an exact example of creating a JMS topic in HornetQ, i.e. the XML configuration in hornetq-jms.xml and hornetq-configuration.xml? Assume we have a topic named top and two subscribers named sub1 and sub2. What I gather is that we should define two queues (one for each subscriber) and bind them to an address which is actually the topic name, but how would the subscribers know which queue to connect to? (They only know the topic name.)
I think you are confused by the way HornetQ handles topics internally, in contrast to the way the JMS specification describes topics.
Let's start with the JMS specification. Here you have one topic on which n subscribers listen for the messages published by a client. In JMS we talk about the destination only in the singular, e.g. we send a message to the topic, or to the queue respectively.
HornetQ is a JMS Provider - a server that implements the JMS specification, so Java-clients can connect to it, and use the JMS-API. The JMS Provider might change, but the code should still work when using another JMS provider.
However, HornetQ does not internally distinguish between kinds of destinations (topic or queue), since it tries to be a generic messaging middleware. In HornetQ all topics and queues are implemented as "addresses" and "queues". When you use the HornetQ API (core API) instead of the JMS API, you have to deal with such things. You should read the Address section in the HornetQ documentation:
In core, there is no concept of a Topic, Topic is a JMS only term. Instead, in core, we just deal with addresses and queues. For example, a JMS topic would be implemented by a single address to which many queues are bound. Each queue represents a subscription of the topic. A JMS Queue would be implemented as a single address to which one queue is bound - that queue represents the JMS queue.
For an example of how to use a topic with JMS via HornetQ, I strongly recommend the examples that come with HornetQ itself. After downloading and extracting the HornetQ archive, just go to the examples/jms/topic directory and see the readme.html for a brief overview of how to implement and run the example (mvn verify).
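To make the last point from the quote concrete: subscribers never reference the internal queues; they only use the JMS topic name, and HornetQ creates and binds a subscription queue for each of them behind the scenes. A minimal client-side sketch, assuming the topic top is defined in hornetq-jms.xml and the JNDI settings of a standalone HornetQ server (adjust names and URLs to your environment):

import java.util.Properties;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TopicSubscriberSketch {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://localhost:1099");
        InitialContext ctx = new InitialContext(env);

        // Both JNDI entries come from hornetq-jms.xml; the client only knows the topic name.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("/topic/top");

        Connection connection = cf.createConnection();
        connection.setClientID("sub1"); // sub2 would use its own client id
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // HornetQ creates the underlying core subscription queue automatically.
        TopicSubscriber sub1 = session.createDurableSubscriber(topic, "sub1");
        Message message = sub1.receive(5000);
        connection.close();
    }
}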
For me JMS and ESB seem to be very related things and I'm trying to understand how exactly they are related.
I've seen it said that JMS can be used as a transport for an ESB - so what else besides the transport should be present in such an ESB? Is JMS a simple ESB, and if not, what does it lack compared to a real ESB?
JMS offers a set of APIs for messaging: put a message on a queue; someone else, sometime later, perhaps geographically far away, takes the message off the queue and processes it. We have decoupled the message provider and consumer in time and location. Even if the message consumer happens to be down for a time, we can keep producing messages.
JMS also offers a publish/subscribe capability where the producer puts the message on a "topic" and any interested parties can subscribe to that topic, receiving messages as and when they are produced, but for the moment let's focus just on the queue capability.
We have decoupled some aspects of the relationship between provider and consumer. However some coupling remains. First, as things stand every message is processed in the same way. Suppose we want to introduce different kinds of processing for different kinds of messages:
if ("Platinum".equals(message.getStringProperty("customerType"))) { // hypothetical message property
    // do something special
}
Obviously we can write code like that, but an alternative would be to have a messaging system that can send different messages to different places. We set up three queues:
Request Queue - the producer(s) put their requests here
Platinum Queue - platinum consumer processing reads from here
Standard Queue - a standard consumer reads messages from here
And then all we need is a little bit of cleverness in the queue system itself to transfer the message from the Request Queue to the Platinum Queue or the Standard Queue.
So this is a Content-Based Routing capability, and is something that an ESB provides. Note that the ESB uses the fundamental Queueing capabilities offered by JMS.
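For illustration, that bit of cleverness could be expressed as a content-based router in an integration framework such as Apache Camel (the queue names and the customerType header are hypothetical):

import org.apache.camel.builder.RouteBuilder;

public class CustomerRoutingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Content-based routing: inspect each request and forward it to the
        // queue whose consumers handle that kind of customer.
        from("activemq:queue:Request")
            .choice()
                .when(header("customerType").isEqualTo("Platinum"))
                    .to("activemq:queue:Platinum")
                .otherwise()
                    .to("activemq:queue:Standard");
    }
}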
A second kind of coupling is that the consumer and producer must agree about the message format. In simple cases that's fine. But when you start to have many producers all putting messages onto the same queue you start to hit versioning problems. New message formats are introduced but you don't want to change all the existing providers.
Request Version 1 Queue - existing providers write here
Request Version 2 Queue - new providers write here, new consumers read here
And the ESB picks up the Version 1 Queue messages and transforms them into Version 2 messages and puts them onto the Version 2 queue.
Message transformation is another possible ESB capability.
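A sketch of that idea, again using Apache Camel for illustration (the queue names follow the example above, and upgradeToV2 is a hypothetical bean that rewrites the old format into the new one):

import org.apache.camel.builder.RouteBuilder;

public class VersionUpgradeRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Message transformation: old-format requests are converted so that
        // consumers only ever have to understand the Version 2 format.
        from("activemq:queue:Request.Version1")
            .to("bean:upgradeToV2")               // hypothetical transformer bean
            .to("activemq:queue:Request.Version2");
    }
}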
Have a look at ESB products and see what they can do. As I work for IBM, I'm most familiar with WebSphere ESB.
I would say an ESB is like a facade over a number of protocols, JMS being one of them.
An addition to the above list is another open-source ESB - UltraESB.
JMS is not well suited to the integration of REST services, file systems, S/FTP, email, Hessian, SOAP etc., which are better handled by an ESB that supports these types natively. For example, if you have a process that dumps a 500 MB CSV file at midnight, and you want another system to pick up the file, parse the CSV and import it into a database, this can easily be accomplished by an ESB - whereas a solution with just JMS would be awkward. Similarly, integration of REST services, with load balancing/failover across multiple backend instances, is better done with an ESB that supports HTTP/S natively.
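As a rough illustration of the file-pickup scenario with an ESB-style tool (Apache Camel here; the directory and the orderImporter bean that writes rows to the database are hypothetical):

import org.apache.camel.builder.RouteBuilder;

public class CsvImportRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll the drop directory, parse the CSV and hand each row to a bean
        // that inserts it into the database - no JMS involved at all.
        from("file:/data/exports?include=.*\\.csv")
            .unmarshal().csv()          // body becomes a list of rows
            .split(body())              // one exchange per row
            .to("bean:orderImporter");  // hypothetical bean doing the DB insert
    }
}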
This transformation does not happen automatically. You need to configure the mapping or write a transformation service.
Look at https://access.redhat.com/knowledge/docs/en-US/JBoss_Enterprise_SOA_Platform/4.2/html/SOA_ESB_Message_Transformation_Guide/ch02s03.html
An ESB offers integration with a lot of different protocols in addition to JMS.
Most ESBs use JMS behind the scenes to transfer, store and move messages. One such solution, OpenESB, uses XML-format messages.
There are open-source ESBs which you could check out:
OpenESB
Apache Camel
MuleESB
WSO2 ESB
JMS implementations like ActiveMQ come with Camel built in.
JMS is an API for communicating with an underlying messaging layer. An ESB operates at a higher level, offering integration with multiple technologies and protocols, of which JMS is one, in a uniform way that makes management of complex flows much simpler.
There are JMS message brokers that you can easily configure with an ESB. https://docs.wso2.com/display/ESB470/JMS+Transport
JMS and ESBs both provide a way for different applications to communicate, but their contexts are different. JMS is for simple needs. It is implemented by a JMS provider and it is Java-specific.
Examples of JMS providers are Apache ActiveMQ, IBM MQ, HornetQ etc.
An ESB is for complex needs. An ESB is a component in EAI that provides communication facilities to various applications. It is generic and not specific to Java; JMS is one of the supported protocols.
Examples of ESB providers are MuleESB, Apache Camel, OpenESB.
Use case: it may be an overhead to use an ESB if all our communicating applications are in Java and use the same message format; in that case JMS may be sufficient.