Consumes<T>.For<G> does not work - MassTransit

I know that with MassTransit you can have a correlation id on your message and consume only messages that have the same correlation id.
I did this in a console application but it does not work: the consumer gets all the messages, even ones with a different correlation id. In fact, my "CorrelationId" property is never read.
Thanks

Consumers of this type can only be registered as instance-based consumers:
x.Subscribe(s => s.Instance(consumer));
And the instance should have a fixed Guid for the CorrelationId.
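For illustration, a rough sketch of what that can look like with the legacy 2.x API (the message, consumer, and queue names here are invented, and the exact interface shape is from memory, so treat it as an approximation rather than a reference):

using System;
using MassTransit;

// The message carries the correlation id (CorrelatedBy<Guid>).
public class RoutedMessage : CorrelatedBy<Guid>
{
    public Guid CorrelationId { get; set; }
    public string Text { get; set; }
}

// Consumes<T>.For<Guid> requires a CorrelationId property on the consumer;
// only messages whose correlation id matches it are dispatched to Consume().
public class RoutedMessageConsumer : Consumes<RoutedMessage>.For<Guid>
{
    public RoutedMessageConsumer(Guid correlationId)
    {
        CorrelationId = correlationId;
    }

    public Guid CorrelationId { get; private set; }

    public void Consume(RoutedMessage message)
    {
        Console.WriteLine("Correlated message: {0}", message.Text);
    }
}

public class Program
{
    public static void Main()
    {
        // A fixed Guid, known to both the sender and this consumer instance.
        var knownCorrelationId = new Guid("A4A2BFF8-7B49-4AE5-8E48-191A22F11D44");
        var consumer = new RoutedMessageConsumer(knownCorrelationId);

        // Registration as an instance-based consumer, as described above.
        var bus = ServiceBusFactory.New(sbc =>
        {
            sbc.ReceiveFrom("loopback://localhost/my_queue");
            sbc.Subscribe(s => s.Instance(consumer));
        });
    }
}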
This is really something that was put into MT very early on and is not really useful in practice, as the endpoint.SendRequest() feature is better for request/response. For content-based routing the distributor is a better choice.

Related

How does a microservice return data to the caller when using a message broker or a message queue?

I am pretty new to microservices, and I am trying to figure out how to set up a microservice architecture in which a publisher that emits an event can receive a response with data from the consumer.
From what I have read about message brokers and message queues, it seems like one-way communication: the producer emits an event (or rather, sends a message) which is handled by the message broker, and then the consumer consumes that event and performs some action.
This allows for decoupled code, which is part of what I'm looking for, but I don't understand whether the consumer is able to return any data to the caller.
Say, for example, I have a microservice that communicates with an external API to fetch data. I want to be able to send a message or emit an event from my front-facing server, which then calls the service that fetches the data, parses it, and returns it to my front-facing server.
Is there a way to make message brokers or queues bidirectional, or are they only usable in one direction? I keep reading that message brokers allow services to communicate with each other, but I only find examples in which data flows one way.
Even reading the RabbitMQ documentation hasn't really made it clear to me how I could do this.
In general, when talking about messaging, it's one-way.
When you send a letter to someone you're not opening up a mind-meld so that they telepathically communicate their response to you.
Instead, you include a return address (or some other means of contacting you).
So to map a request-response interaction onto explicit messaging (e.g. via a message queue), the solution is the same: you include some directions which the recipient can/will interpret as "send a response here". That could, for instance, be "publish a message on this queue with this correlation ID".
Your publisher then, after sending this message, subscribes to the queue it designated and waits for a message with the expected correlation ID.
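Libraries typically package this reply-address-plus-correlation pattern up for you. Since this page is mostly about MassTransit, here is a rough sketch using its request client (the contract and class names are invented for illustration):

using System.Threading.Tasks;
using MassTransit;

// Request/response contracts (invented names).
public record FetchExternalData(string Query);
public record ExternalDataResult(string Payload);

// The data-fetching service: handle the request and respond. MassTransit
// routes the response back using the response address and request id it
// put on the incoming message.
public class FetchExternalDataConsumer : IConsumer<FetchExternalData>
{
    public async Task Consume(ConsumeContext<FetchExternalData> context)
    {
        var data = await CallExternalApiAsync(context.Message.Query); // your own code
        await context.RespondAsync(new ExternalDataResult(data));
    }

    static Task<string> CallExternalApiAsync(string query) => Task.FromResult("...");
}

// The front-facing server: send the request and await the correlated response.
public class FrontFacingServer
{
    readonly IRequestClient<FetchExternalData> _client;

    public FrontFacingServer(IRequestClient<FetchExternalData> client) => _client = client;

    public async Task<string> GetDataAsync(string query)
    {
        Response<ExternalDataResult> response =
            await _client.GetResponse<ExternalDataResult>(new FetchExternalData(query));
        return response.Message.Payload;
    }
}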
Needless to say, this is fairly elaborate: you are, in some sense, reimplementing a decent portion of a session protocol like TCP on top of a datagram protocol like IP (albeit in this case, we may have some stronger reliability guarantees than we'd get from IP). It's worth noting that this sort of request-response interaction intrinsically couples the two parties (we can't really say "sender and receiver": each is the other's audience), so we're basically putting in some effort to decouple the two sides and then some more effort to recouple them.
With that in mind, if the actual business use case calls for a request-response interaction like this, consider implementing it with an actual request-response protocol (e.g. REST over HTTP or gRPC...) and accept that you have this coupling.
Alternatively, if you really want to pursue loose coupling, go for broke and embrace the asynchronicity at the heart of the universe (maybe that way lies true enlightenment?). Have your publisher return success with that correlation ID as soon as it has sent its message. Meanwhile, have a different service track the state of those correlation IDs and expose a query interface (CQRS, hooray!). Your client can then check at any time whether the thing it wanted succeeded, even if its connection to your publisher gets interrupted.
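A rough sketch of that shape (every type below is invented; a real implementation would use a durable store and a proper query API rather than an in-memory dictionary):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using MassTransit;

public record WorkRequested(Guid CorrelationId, string Query);
public record WorkCompleted(Guid CorrelationId, string Result);

// 1. The publisher returns the correlation id as soon as the message is sent.
public class WorkSubmitter
{
    readonly IPublishEndpoint _publish;
    public WorkSubmitter(IPublishEndpoint publish) => _publish = publish;

    public async Task<Guid> SubmitAsync(string query)
    {
        var id = Guid.NewGuid();
        await _publish.Publish(new WorkRequested(id, query));
        return id; // the client polls (or is notified) using this id
    }
}

// 2. A separate service tracks the state of each correlation id and can be
//    queried at any time, even if the original connection was interrupted.
public class WorkStatusStore
{
    readonly ConcurrentDictionary<Guid, string> _results = new();
    public void Complete(Guid id, string result) => _results[id] = result;
    public bool TryGetResult(Guid id, out string result) => _results.TryGetValue(id, out result);
}

public class WorkCompletedConsumer : IConsumer<WorkCompleted>
{
    readonly WorkStatusStore _store;
    public WorkCompletedConsumer(WorkStatusStore store) => _store = store;

    public Task Consume(ConsumeContext<WorkCompleted> context)
    {
        _store.Complete(context.Message.CorrelationId, context.Message.Result);
        return Task.CompletedTask;
    }
}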
Queues are the wrong level of abstraction for request-reply. You can build an application out of them, but it would be nontrivial to support and operate.
The solution is to use an orchestration system like temporal.io or AWS Step Functions. Out of the box, these services provide state management, asynchronous communication, and automatic recovery from various types of failures.

Is it possible to define a single saga which will process many messages

My team is considering whether we can use MassTransit as our primary solution for sagas over RabbitMQ (vs. NServiceBus). I admit that our experience with solutions like MassTransit and NServiceBus is minimal, and we have only just started to introduce messaging into our system, so I am sorry if my question is simple or even stupid.
However, when I reviewed the MassTransit documentation, I was not sure whether it is possible to solve one of our cases.
The case looks like this:
One of our components will produce up to 100 messages which will be sent to a queue. These messages are the result of a single operation in the system. All of the messages will have the same correlation id and the same internal publication id.
1) Is it possible to define a single saga instance (by correlation id) which will wait until it receives all messages from the queue and then process them as a single batch?
2) Otherwise, is there any way to ensure that all of the sent messages were processed (batch consistency)? I assume that the correlation id will serve as the way to find the existing saga instance (singleton). Ideally, I would like to complete a saga instance only when the system has processed every message that belongs to a single group (one publication).
I looked at CompositeEvent too, but I am not sure whether I could use it to ensure that every message was processed and only then complete the saga for a specific correlation id.
Can you explain how this could be achieved, and which mechanism I should look at in order to correlate many messages with the same id to a single saga and then complete it once all of the messages have been consumed?
Thank you in advance for any response.
What you describe is how correlation by id works; you get it out of the box.
So, in short: when you configure correlation for your messages correctly, all messages with the same correlation id will be handled by the same saga instance.
Concerning the second question: unless you publish a separate event that informs the saga how many messages it should expect, how would it know that? You can certainly schedule a long timeout and assume that all the messages will be received by the saga within it, but that's not reliable.
Composite events won't help here: they allow messages of different types to be handled as one once all of them have arrived, but they don't count the number of messages of each type. A composite event just waits for one message of each type.
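To make that concrete, here is a rough sketch of the kind of state machine hinted at above: a separate event tells the saga how many messages to expect, correlated messages are counted, and the saga finalizes once the count is reached (MassTransit v8-style syntax; every contract and property name here is invented):

using System;
using MassTransit;

public record PublicationStarted(Guid CorrelationId, int ExpectedMessages);
public record PublicationMessage(Guid CorrelationId, string Body);

public class PublicationState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; }
    public int Expected { get; set; }
    public int Received { get; set; }
}

public class PublicationStateMachine : MassTransitStateMachine<PublicationState>
{
    public PublicationStateMachine()
    {
        InstanceState(x => x.CurrentState);

        // Every message with the same CorrelationId is dispatched to the
        // same saga instance.
        Event(() => Started, x => x.CorrelateById(m => m.Message.CorrelationId));
        Event(() => MessageReceived, x => x.CorrelateById(m => m.Message.CorrelationId));

        Initially(
            When(Started)
                .Then(ctx => ctx.Saga.Expected = ctx.Message.ExpectedMessages)
                .TransitionTo(Collecting),
            // Messages may arrive before the "started" event; count them anyway.
            When(MessageReceived)
                .Then(ctx => ctx.Saga.Received++)
                .TransitionTo(Collecting));

        During(Collecting,
            When(Started)
                .Then(ctx => ctx.Saga.Expected = ctx.Message.ExpectedMessages)
                .If(ctx => ctx.Saga.Expected > 0 && ctx.Saga.Received >= ctx.Saga.Expected,
                    x => x.Finalize()),
            When(MessageReceived)
                .Then(ctx => ctx.Saga.Received++)
                .If(ctx => ctx.Saga.Expected > 0 && ctx.Saga.Received >= ctx.Saga.Expected,
                    x => x.Finalize()));

        SetCompletedWhenFinalized();
    }

    public State Collecting { get; private set; }
    public Event<PublicationStarted> Started { get; private set; }
    public Event<PublicationMessage> MessageReceived { get; private set; }
}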
The ability to receive a series of messages and then operate on them in a batch is a common case, so much so that there is a sample showing how to do just that:
Batch Sample
Each saga instance has a unique correlation identifier, and as long as those messages can be correlated to that single instance, MassTransit will manage the concurrency (either optimistic or pessimistic, and depending upon the saga storage engine).
I'd suggest reviewing the state machine in the sample, and seeing how that compares to your scenario.

What is the Purpose of the DestinationAddress field in the MassTransit Envelope?

When sending a message, MassTransit wraps that payload with an envelope which has a field called destinationAddress. What purpose does this field have?
I found this because I have a number of C# microservices communicating with some Node- and Java-based services - so I've been using the minimum payload defined here:
http://masstransit-project.com/MassTransit/advanced/interoperability.html
I've had no problem integrating the services; I was just wondering what the point is of having the destinationAddress as part of the message itself. Is it just a belt-and-braces kind of thing to make sure messages don't end up on the wrong queue by mistake?
I would have thought that all of this information could be derived, since it is literally just built from a) the message bus host and b) the queue name used when actually sending the message.
Transports have a variety of ways of delivering messages. For instance, publishing a message to a topic would set the destination address to the URI of that topic, but the message may be delivered to a queue (via a subscription, forwarded by the transport) with a different address. In this case, the envelope retains the original destinationAddress, whereas the queue has a different address.
There are also cases where messages may be scheduled, redelivered, faulted, etc., and having that information helps in troubleshooting production systems in cases where the original destination may not be known otherwise.
So, yes, in the simplest case it seems superfluous; however, it comes in useful down the road when you're trying to figure out why something doesn't work.
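For reference, a trimmed example of what an envelope can look like on the wire (the values are made up, and a real envelope carries more optional fields such as conversationId, responseAddress, and headers, as described on the interoperability page linked above):

{
  "messageId": "ac160000-5d5b-3c96-2f4a-08d8f2b0a123",
  "correlationId": "ac160000-5d5b-3c96-7b31-08d8f2b0a124",
  "sourceAddress": "rabbitmq://localhost/front_end_service",
  "destinationAddress": "rabbitmq://localhost/MyCompany.Contracts:OrderSubmitted",
  "messageType": [
    "urn:message:MyCompany.Contracts:OrderSubmitted"
  ],
  "message": {
    "orderId": 42
  }
}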

Approach for taking action on reception of two different JMS messages

Say I have one JMS message FooCompleted
{"businessId": 1,"timestamp": "20140101 01:01:01.000"}
and another JMS message BazCompleted
{"businessId": 1,"timestamp": "20140101 01:02:02.000"}
The use case is that I want some action triggered when both messages have been received for the business id in question - essentially a join point on the reception of the two messages. The two messages are published on two different queues, and the order in which FooCompleted and BazCompleted are received may vary. In reality, I may need to join the reception of several different messages for the businessId in question.
The naive approach is to store the reception of each message in a database, check whether the messages for its dependent join arm(s) have been received, and only then kick off the desired action. Given that the problem seems generic enough, we were wondering if there is a better way to solve this.
Another thought was to move messages from these two queues into a third queue on reception. The listener on this third queue would use a specialised variant of DefaultMessageListenerContainer which overrides doReceiveAndExecute to call receiveMessage for all outstanding messages in the queue, adding back those messages whose dependent messages have not all arrived yet; the remaining ones would be acknowledged and hence removed. Given that the volume of messages will be low, probing the queue and re-adding messages should not be a problem. The advantage would be avoiding the DB dependency and the associated scaffolding code. I wanted to see if there is something glaringly bad with this approach.
Gurus, please critique and point out better ways to achieve this.
Thanks in advance!
Spring Integration with a JMS message-driven adapter, an aggregator with custom correlation and release strategies, and a persistent (JDBC) message store will provide your first solution without writing much (or any) code.
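Independent of the framework, the aggregator idea boils down to: group messages by a correlation key (here businessId) and release the group once the release condition is met (one FooCompleted and one BazCompleted have arrived). A minimal in-memory sketch of that idea, in C# for consistency with the rest of this page (a real aggregator would use a persistent message store, as the answer above suggests):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Aggregates messages by businessId and fires an action once both
// FooCompleted and BazCompleted have arrived. Purely illustrative.
public class JoinAggregator
{
    readonly ConcurrentDictionary<long, HashSet<string>> _groups = new();
    readonly string[] _required = { "FooCompleted", "BazCompleted" };
    readonly Action<long> _onComplete;

    public JoinAggregator(Action<long> onComplete) => _onComplete = onComplete;

    public void OnMessage(string messageType, long businessId)
    {
        var group = _groups.GetOrAdd(businessId, _ => new HashSet<string>());
        bool complete;
        lock (group)
        {
            group.Add(messageType);
            complete = Array.TrueForAll(_required, t => group.Contains(t));
        }
        if (complete && _groups.TryRemove(businessId, out _))
            _onComplete(businessId); // the "join point": both messages received
    }
}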

Is there an enterprise message queue which can drop duplicate messages (first value stays)?

I am looking for a message queue with these requirements. I couldn't find one; maybe the closest was the rabbitmq-lvc plugin (but I need the first value in the line to stick and stay in front).
Would anyone know a technology to support these?
message queue is FIFO
if a duplicate message is being enqueued, the message queue itself either rejects or drops it.
For example, producers put these three messages (each with a discriminator value) into the queue in this sequence: M1(discriminator=7654), M2(discriminator=2435), M3(discriminator=7654).
Now I want the message queue to see that M3 has the same discriminator value as M1 and thus drop/reject M3. Consumers receive only: M1, M2.
Thanks
Tom
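To make the requested behaviour concrete: the queue being asked for would behave roughly like the sketch below (purely illustrative and in-memory; the question is whether any broker provides this natively):

using System.Collections.Concurrent;
using System.Collections.Generic;

// A FIFO queue that rejects a message whose discriminator has already been
// seen; M1(7654), M2(2435), M3(7654) => consumers receive only M1, M2.
public class DeduplicatingQueue<T>
{
    readonly Queue<T> _queue = new();
    readonly HashSet<int> _seenDiscriminators = new();
    readonly object _sync = new();

    public bool TryEnqueue(T message, int discriminator)
    {
        lock (_sync)
        {
            if (!_seenDiscriminators.Add(discriminator))
                return false;            // duplicate: reject/drop
            _queue.Enqueue(message);     // first value stays in line
            return true;
        }
    }

    public bool TryDequeue(out T message)
    {
        lock (_sync)
        {
            if (_queue.Count == 0) { message = default; return false; }
            message = _queue.Dequeue();
            return true;
        }
    }
}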
I don't know the other transports but I know that WebSphere MQ doesn't do this and I believe that the explanation why would apply broadly across the category. I'd be very surprised to find that any messaging transport actually provides this. Here are a few reasons why:
Async messages are supposed to be atomic. Different vendors make their own accommodations for message affinity (a relationship between two or more messages) but as a rule, message affinity is to be avoided. Your use case not only requires the transport to deal with message affinity, but to do so over an indeterminate interval between related messages.
Message payload is a blob. For performance reasons, WMQ doesn't touch message payloads except for things like compression or code page conversion. Anything that requires parsing the message payload is a job for WebSphere Message Broker, DataPower or WebSphere ESB. I would expect any messaging transport which claims to be performant would face similar issues because parsing payloads results in longer code paths and non-linear performance degradation. The exception is message properties but WMQ uses these for selection only and I expect that is generally the case.
Stateless operation. As a transport, the state of the application may be stored in a persistent message but the state of the transport layer should not depend on the state of the application across different units of work. Again, an ESB type of product is best suited when you want to delegate management of some of the application state to the messaging layer and especially when such management spans many units of work.
Assured delivery. WMQ was designed to never lose your persistent message. If the app explicitly sets expiry the message might go away because the sender said it was OK to do so. If the message is non-persistent it might go away, but only in an exceptional condition and, again, because the sender said it was OK to do so. The use case you describe might result in a message going away not because the sender said it was OK, or even because the recipient said it was OK but because of an interaction with some unrelated 3rd party who happened to beat you to the queue with a duplicate value. What if that first message has an invalid header or code page problem and gets rolled back? What if I as an attacker spew out garbage messages with all possible 4-digit values for discriminator?
As I said, I don't know the other messaging products, so there may be something out there which meets your requirement; if so, I'll be interested to read about it. However, in the event that nobody else replies, this post may shed some light on the reasons why.