I am using programmatic endpoint registration of listener endpoints:
MethodRabbitListenerEndpoint endpoint = new MethodRabbitListenerEndpoint();
endpoint.setId(endpointId);
endpoint.setQueues(eventsQueue);
endpoint.setBean(handlerMethod.bean);
endpoint.setMethod(handlerMethod.method);
endpoint.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
registrar.registerEndpoint(endpoint);
My question is, how do I determine the routing key for this endpoint?
Edit: To further clarify, I am using a single queue for different types of messages, and I want to route them to different methods based on the routing key. This is in addition to the routing key used to route the messages to this queue to begin with.
Basically the use case is a general-purpose events bus. All the events go to the same exchange. Each type of event has a unique routing key. Each service has an events queue. Each service subscribes to the events it is interested in by adding the appropriate binding between the events exchange and its own events queue using the routing key for that event type. Each event type has a different handler method.
You say Listener, so you are going to listen to some queue for messages.
And right, you do that via setQueues().
Now regarding routingKey:
The routing key is a message attribute. The exchange might look at this key when deciding how to route the message to queues (depending on exchange type).
So, it really doesn't relate to the Listener.
Although I agree that we should declare the Binding exactly in the place where we deal with the queue, therefore in the listener part.
So, if you do MethodRabbitListenerEndpoint registration manually (bypassing @RabbitListener definitions), you should declare and register the Binding manually, too, with an appropriate routingKey: http://docs.spring.io/spring-amqp/reference/html/_reference.html#_binding
UPDATE
There is no built-in feature like the one you are looking for.
We have MultiMethodRabbitListenerEndpoint, which does the routing based on the payload type, but not on any other filter.
What you want can be achieved with the Spring Integration router which can make the decision based on the AmqpHeaders.RECEIVED_ROUTING_KEY header.
On the other hand, it might be better to register a unique queue for each routing key and have a single listener with the appropriate method for each queue.
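Lacking a built-in filter, the dispatch the question describes can be sketched in plain Java. This is only an illustration, not Spring AMQP API: the class and method names below are made up, and in a real listener the key would come from the AmqpHeaders.RECEIVED_ROUTING_KEY header (MessageProperties.getReceivedRoutingKey()).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch: one shared events queue, many event types, dispatch by
// routing key. In a real Spring AMQP listener the key would be read from
// message.getMessageProperties().getReceivedRoutingKey().
class RoutingKeyDispatcher {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    // Each service registers one handler per event type (routing key).
    public void on(String routingKey, Consumer<String> handler) {
        handlers.put(routingKey, handler);
    }

    // Called from the single listener method for the shared events queue.
    // Returns false when no handler is subscribed to this event type.
    public boolean dispatch(String routingKey, String payload) {
        Consumer<String> handler = handlers.get(routingKey);
        if (handler == null) {
            return false;
        }
        handler.accept(payload);
        return true;
    }
}
```

The single registered endpoint then stays generic: its one handler method looks up the received routing key and delegates, instead of there being one endpoint per event type.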
Related
I am trying to write a consumer for an existing queue.
RabbitMQ is running in a separate instance and a queue named "org-queue" is already created and bound to an exchange. org-queue is a durable queue and has some additional properties as well.
Now I need to receive messages from this queue.
I have used the code below to get an instance of the queue:
conn = Bunny.new
conn.start
ch = conn.create_channel
q = ch.queue("org-queue")
It throws an error stating a different durable property. It seems that by default Bunny uses durable = false, so I've added durable: true as a parameter. Now it states the difference in other parameters. Do I need to specify all the parameters to connect to it? As RabbitMQ is maintained in a different environment, it is hard for me to get all the properties.
Is there a way to get the list of queues and listen to the required queue in the client, instead of connecting to a queue by specifying all its parameters?
Have you tried the :passive => true parameter on queue()? A real example is the rabbitmq plugin of Logstash. :passive means to only check queue existence rather than to declare the queue when consuming messages from it.
Based on the documentation here http://reference.rubybunny.info/Bunny/Queue.html and
http://reference.rubybunny.info/Bunny/Channel.html
Using the ch.queues() method you could get a hash of all the queues on that channel. Then, once you find the instance of the queue you want to connect to, you could use the q.options() method to find out what options are set on that RabbitMQ queue.
It seems like a roundabout way to do it, but it might work. I haven't tested this, as I don't have a RabbitMQ server up at the moment.
Maybe there is a way to get the queue info with rabbitmqctl or the management plugin. Even if so, I would not bother.
There are two possible solutions that come to my mind.
First solution:
In general, if you want to declare an already existing queue, it has to be with ALL the correct parameters. So what I'm doing is having a helper function for declaring a specific queue (I'm using the C++ client, so the API may be different, but I'm sure the concept is the same). For example, if I have 10 subscribers that are consuming queue1, and each of them needs to declare the queue in the same way, I will simply write a util that declares this queue, and that's that.
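The helper-function idea can be sketched like this (illustrative only: QueueSpec, AmqpClient, and the x-max-priority argument are made-up names standing in for whatever client and parameters your environment actually uses). The point is that the exact declaration parameters live in exactly one place:

```java
import java.util.Map;

// Illustrative sketch: keep the queue's exact declaration parameters in one
// place so that all subscribers declare the queue identically and never hit
// a PRECONDITION_FAILED mismatch against the broker.
final class QueueSpec {
    static final String NAME = "queue1";
    static final boolean DURABLE = true;
    static final boolean EXCLUSIVE = false;
    static final boolean AUTO_DELETE = false;
    static final Map<String, Object> ARGUMENTS = Map.of("x-max-priority", 10);

    // 'client' stands in for whatever AMQP client the service uses.
    // Every consumer calls this instead of spelling the parameters out itself.
    static void declareOn(AmqpClient client) {
        client.declareQueue(NAME, DURABLE, EXCLUSIVE, AUTO_DELETE, ARGUMENTS);
    }

    interface AmqpClient {
        void declareQueue(String name, boolean durable, boolean exclusive,
                          boolean autoDelete, Map<String, Object> args);
    }
}
```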
Before the second solution, a little aside: maybe this is a case of a misconception that happens too often :)
You don't really need a specific queue to get the messages. What you need is a queue and the correct binding. When sending a message, you are not really sending it to the queue but to the exchange, sometimes with a routing key, sometimes without one; let's say with. On the receiving end you need a queue to consume a message, so naturally you declare one and bind it to an exchange with a routing key. You don't even need the name of the queue explicitly; the server will provide a generated one for you, so that you can use it when binding.
Second solution:
relies on the fact that "It is perfectly legal to bind multiple queues with the same binding key" (found here: https://www.rabbitmq.com/tutorials/tutorial-four-java.html).
So each of your subscribers can declare a queue in whatever way they want, as long as they do the binding correctly. Of course these would be different queues with different names.
I would not recommend this: it implies that every message goes to multiple queues, and I am assuming the use case here needs each message to be processed only once, by one subscriber.
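Both points above, that publishing goes to the exchange rather than to a queue and that multiple queues may share a binding key, can be sketched with a tiny in-memory model of a direct exchange. This is purely illustrative (not broker code, and the class name is made up):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Illustrative in-memory model of a direct exchange: publishers send to the
// exchange with a routing key; the exchange copies the message into every
// queue whose binding key matches the routing key exactly.
class InMemoryDirectExchange {
    private final Map<String, List<Queue<String>>> bindings = new HashMap<>();

    // Declare a fresh queue and bind it to this exchange under bindingKey.
    public Queue<String> bindNewQueue(String bindingKey) {
        Queue<String> q = new ArrayDeque<>();
        bindings.computeIfAbsent(bindingKey, k -> new ArrayList<>()).add(q);
        return q;
    }

    public void publish(String routingKey, String message) {
        // The publisher never names a queue, only the exchange + routing key;
        // messages with an unbound key are simply dropped.
        for (Queue<String> q : bindings.getOrDefault(routingKey, List.of())) {
            q.add(message);
        }
    }
}
```

Binding two queues with the same key makes every matching message land in both, which is exactly why the second solution duplicates delivery.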
It seems like the worker pattern, fanout, and filtered topics can all be implemented with topic exchanges. Why would I ever use a direct or fanout exchange instead?
We would like to codify common patterns found in our org in a library that abstracts the infinite flexibility of amqp (naming conventions, defaulting to durable, sending common headers, expirations etc.). Should we leverage the different exchange types or implement all patterns with topics; why?
(We have consumers/publishers in Java via spring boot, in golang, and in php)
Why shouldn't I use rabbitmq topic exchanges for everything?
Nothing says you shouldn't. If it works for you, have fun with it!
From my RabbitMQ: Layout ebook:
The truth about exchange types is that there is no “master” type - not one to be used as a default, or most of the time. Sure, a given application may have its needs served by a single exchange or exchange type, but this will not always be the case. Even within a single system, there may be a need to route messages in different ways and have them end up in the same queue.
If you find yourself in a situation where choosing one of the above exchange types will preclude a needed set of routing behaviors for your messages, use more than one exchange. You can route from any number of exchanges to a single queue, or from a single exchange to any number of queues.
Don’t limit your systems routing needs to a single exchange type for any given message or destination. Take advantage of each one, as needed.
On the different exchange types (again, from my ebook)
Direct:
A direct exchange allows you to bind a queue to an exchange with a routing key that is matched, case sensitively. This may be the most straightforward exchange of them all, as there is no pattern matching or other behavior to track and consider. If a routing key from a message matches the routing key of a binding in the exchange, the message is routed.
Fanout:
Fanout exchanges allow you to broadcast a message to every queue bound to an exchange, with no way to filter which queues receive the message. If a queue is bound to a fanout exchange, it will receive any message published through that exchange.
and topic exchanges:
A topic exchange is similar to a direct exchange in that it uses routing keys. Unlike a direct exchange, though, the routing keys do not have to match exactly for a message to be routed. Topic exchanges allow you to specify wild-card matching of “topics” (routing keys) in your bindings. This lets you receive messages from more than one routing key and provides a level of flexibility not found in the other exchange types.
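The wildcard semantics a topic exchange applies can be illustrated with a small matcher. This is a sketch of the matching rules only, not broker code: '*' matches exactly one dot-separated word, '#' matches zero or more words.

```java
// Illustrative matcher for topic-exchange binding patterns:
// '*' matches exactly one word, '#' matches zero or more words.
final class TopicMatcher {
    public static boolean matches(String bindingPattern, String routingKey) {
        return match(bindingPattern.split("\\."), 0, routingKey.split("\\."), 0);
    }

    private static boolean match(String[] p, int pi, String[] k, int ki) {
        if (pi == p.length) return ki == k.length;   // pattern exhausted
        if (p[pi].equals("#")) {
            // '#' may consume zero or more words of the routing key
            for (int skip = ki; skip <= k.length; skip++) {
                if (match(p, pi + 1, k, skip)) return true;
            }
            return false;
        }
        if (ki == k.length) return false;            // key exhausted, pattern not
        if (p[pi].equals("*") || p[pi].equals(k[ki])) {
            return match(p, pi + 1, k, ki + 1);      // word-for-word step
        }
        return false;
    }
}
```

A binding of "logs.*.error" receives "logs.app.error" but not "logs.app.db.error", while "logs.#" receives both; a direct exchange would need one binding per exact key to get the same coverage.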
We have successfully integrated our multitenancy strategy with MassTransit due to some help from Chris Patterson. However we are stumbling over getting our (Automatonymous) sagas multitenant. I have something that works but I am not at all comfortable with it. We are using the "schema per tenant" database strategy, but are willing to flex this for sagas if that is the cleanest way to solve it.
We have the tenant ID on the header of all messages. We scrape it off the IConsumeContext<> of incoming messages and put it back on the IPublishContext<> of outgoing messages. This works fine with ISagaRepository<>.GetSaga(...) because one of its parameters is IConsumeContext<>. The problem is, when we call the other ISagaRepository<> methods, they do not have IConsumeContext<>, and we have no way of filtering by tenant within the repository. If we stick with our current database strategy, we have to know the tenant so we know which schema to hit. If we change to centralized tenant tables, we have to include the tenant in the filtering, because the value being correlated on is not necessarily unique across tenants.
The PropertySagaLocator<,> seems to be the key point based on my current understanding. In its Find(IConsumeContext<>) method we have the tenant context we need accessible, but it is not being passed down to the saga repository.
In my current attempt to get this working, I have therefore created a property saga locator for multitenancy that works with a specialized tenant saga repository and gives it the tenant context it needs to use its .Where(...) method appropriately. But here's where it gets ugly: the PropertySagaLocator<,> concrete class is instantiated by Automatonymous, so to swap it out I have to start at the edge of Automatonymous, at one of the .StateMachineSaga(...) extension methods, and swap out concrete classes all the way down to the point where it integrates with MassTransit via the PropertySagaLocator<,>, since it is a chain of concrete classes instantiating each other all the way down. I am not comfortable with making such a deep cut through Automatonymous, but it seems to me that whether we take the "schema per tenant" strategy or switch it, we are stuck with needing to integrate at this same point.
The other aspect is that we need to put the tenant ID on outgoing messages when Automatonymous' .Publish(...) notation is employed. The way I am currently doing this is with a decorator pattern on ServiceBus; the point at which I inject the decorated, tenant-specific service bus is when the bus is copied from the consume context to the instance state, i.e. in my overrides of the saga message sink's GetHandlers() method.
Does anyone have experience with integrating Automatonymous sagas with multitenancy? What we are doing now just seems too invasive, and we would like to hit a more natural seam.
I've found another approach that's a lot less invasive, but more restrictive. Specifically, you cannot use the PropertySagaLocator, i.e. all your correlations have to be correlation IDs via the CorrelatedBy<> interface. Make sure you don't make any StateMachineSagaRepository<>.Correlate(...) calls, because if you do, it will use the property locator, even if you give it the actual correlation ID.
What that allows me to do is avoid the use of any methods on the saga repository except GetSaga(...), where I have the context I need for our multitenancy strategy. I then throw a NotImplementedByDesignException in the others.
That leaves me with only one thing to worry about: how to get the tenant ID onto the headers of messages going out from .Publish(...) calls. To do this I subclassed ConsumeContext<>, implemented IConsumeContext<>, and overrode the Bus property with new so that I could set the bus on it. I then used a decorator pattern on the service bus that ensures the bus publishes with the tenant header no matter what method you call on it. Finally, in my saga repository I wrapped the actions I return in a lambda that passes my subclassed consume context, along with my tenant-specific decorated bus, into the consumer for the saga instead of just the straight consume context. This results in the bus that gets set on the saga state being specific to that tenant, and all outgoing messages then have the tenant ID on them.
I'm evaluating Reactor (https://github.com/reactor/reactor) to see whether it would be suitable for creating an event dispatching framework inside my Spring / enterprise application.
First, consider a scenario in which you have an interface A and concrete event classes B, C, and so on. I want to dispatch concrete events to multiple consumers, i.e. observers, which are registered to a global Reactor instance during bean post-processing. However, they can also be registered dynamically. In most cases there is one producer sending events to multiple consumers at a high rate.
I have used Selectors, namely the ClassSelector, to dispatch the correct event types to the correct consumers. This seems to work nicely.
Reactor reactor = ...
B event = ...
Consumer<Event<B>> consumer = ...
// Registration is used to cancel the subscription later
Registration<?> registration = reactor.on(T(event.getClass()), consumer);
To notify, use the type of the event as the key:
B event = ...
reactor.notify(event.getClass(), Event.wrap(event));
However, I'm wondering if this is the suggested way to dispatch events efficiently?
Secondly, I was wondering whether it is possible to filter events based on the event data. If I understand correctly, Selectors only inspect the key. I'm not referring to event headers here, but to domain-specific object properties. I was considering using Streams and Stream.filter(Predicate<T> p) for this, but is it also possible to filter using Reactor and Selectors? Of course, I could write a delegating consumer that inspects the data and delegates it to registered consumers if needed.
There is a helper class called Selectors that creates the various kinds of built-in Selector implementations. There you can see references to the PredicateSelector. The PredicateSelector is very useful, as it allows you complete control over the matching of the notification key. It can be a Spring @Bean, an anonymous inner class, a lambda, or anything else conforming to the simple Predicate interface.
Optionally, if you have the JsonPath library in your classpath, then you can use the JsonPathSelector to match based on JsonPath queries.
In either of these cases, you don't need a separate object for the key if the important data is actually the domain object itself. Just notify on the object and pass the Event<Object> as the second parameter:
MyPojo p = service.next();
reactor.notify(p, Event.wrap(p));
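The delegating-consumer idea mentioned in the question (inspect the event data before forwarding) can also be sketched in plain Java, independently of Reactor's Selector API. The class name here is illustrative, not part of Reactor:

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative delegating consumer: inspect the event data and forward it
// only when the predicate on the domain object holds. Reactor's Selectors
// match on the notification key; this filters on the payload instead.
final class FilteringConsumer<T> implements Consumer<T> {
    private final Predicate<T> filter;
    private final Consumer<T> delegate;

    FilteringConsumer(Predicate<T> filter, Consumer<T> delegate) {
        this.filter = filter;
        this.delegate = delegate;
    }

    @Override
    public void accept(T event) {
        if (filter.test(event)) {
            delegate.accept(event);
        }
    }
}
```

Registered in place of the plain consumer, it gives you payload-based filtering even where only key-based Selectors are available.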
I'm working in the Symfony2 framework and wondering when would one use a Doctrine subscriber versus a listener. Doctrine's documentation for listeners is very clear, however subscribers are rather glossed over. Symfony's cookbook entry is similar.
From my point of view, there is only one major difference:
The Listener is signed up specifying the events on which it listens.
The Subscriber has a method telling the dispatcher what events it is listening to.
This might not seem like a big difference, but if you think about it, there are some cases when you want to use one over the other:
You can assign one listener to many dispatchers with different events, as they are set at registration time. You only need to make sure every method is in place in the listener
You can change the events a subscriber is registered for at runtime and even after registering the subscriber by changing the return value of getSubscribedEvents (Think about a time where you listen to a very noisy event and you only want to execute something one time)
There might be other differences I'm not aware of though!
I don't know whether it is done accidentally or intentionally, but subscribers have a higher priority than listeners - https://github.com/symfony/symfony/blob/master/src/Symfony/Bridge/Doctrine/DependencyInjection/CompilerPass/RegisterEventListenersAndSubscribersPass.php#L73-L98
On the Doctrine side, it doesn't matter which one it is (listener or subscriber); eventually both are registered as listeners - https://github.com/doctrine/common/blob/master/lib/Doctrine/Common/EventManager.php#L137-L140
This is what I spotted.
You should use an event subscriber when you want to deal with multiple events in one class. For example, on this symfony2 doc page article, one may notice that an event listener can only manage one event. But let's say you want to deal with several events for one entity: prePersist, preUpdate, postPersist, etc. If you use event listeners, you would have to code several listeners, one for each event; if you go with an event subscriber, you just have to code one class, the event subscriber, which can manage more than one event. That's the way I use it; I prefer to code focused on what the business model needs. One example of this may be when you want to handle several lifecycle events globally for only a group of your entities. To do that, you can code a parent class defining those global methods, make your entities inherit from it, and then, in your event subscriber, subscribe to every event you want (prePersist, preUpdate, postPersist, etc.), check for that parent class, and execute those global methods.
Another important thing: Doctrine EventSubscribers do not allow you to set a priority.
Read more on this issue here
Both allow you to execute something on a particular event pre / post persist etc.
However listeners only allow you to execute behaviours encapsulated within your Entity. So an example might be updating a "date_edited" timestamp.
If you need to move outside the context of your Entity, then you'll need a subscriber. A good example might be for calling an external API, or if you need to use / inspect data not directly related to your Entity.
Here is what the docs say about this as of Symfony 4.1.
As this applies to events globally, I suppose it's also valid for Doctrine (not 100% sure).
Listeners or Subscribers
Listeners and subscribers can be used in the same application indistinctly. The decision to use either of them is usually a matter of personal taste. However, there are some minor advantages for each of them:
Subscribers are easier to reuse because the knowledge of the events is kept in the class rather than in the service definition. This is the reason why Symfony uses subscribers internally;
Listeners are more flexible because bundles can enable or disable each of them conditionally depending on some configuration value.
http://symfony.com/doc/master/event_dispatcher.html#listeners-or-subscribers
From the documentation:
The most common way to listen to an event is to register an event listener with the dispatcher. This listener can listen to one or more events and is notified each time those events are dispatched.
Another way to listen to events is via an event subscriber. An event subscriber is a PHP class that's able to tell the dispatcher exactly which events it should subscribe to. It implements the EventSubscriberInterface interface, which requires a single static method called getSubscribedEvents().
See the example here :
https://symfony.com/doc/3.3/components/event_dispatcher.html