MassTransit with RabbitMQ using .NET creates 2 exchanges for a given type

Below is a sample POC, developed as an ASP.NET Core 6.0 API, that uses MassTransit and RabbitMQ to simulate simple publish/subscribe with a MassTransit consumer. However, when the code is executed, it creates 2 exchanges and 1 queue in RabbitMQ.
Program.cs
using System.Reflection; // needed for Assembly.GetEntryAssembly()

builder.Services.AddMassTransit(msConfig =>
{
    msConfig.AddConsumers(Assembly.GetEntryAssembly());

    msConfig.UsingRabbitMq((hostcontext, cfg) =>
    {
        cfg.Host("localhost", 5700, "/", h =>
        {
            h.Username("XXXXXXXXXXX");
            h.Password("XXXXXXXXXXX");
        });

        cfg.ConfigureEndpoints(hostcontext);
    });
});
OrderConsumer.cs
public class OrderConsumer : IConsumer<OrderDetails>
{
    readonly ILogger<OrderConsumer> _logger;

    public OrderConsumer(ILogger<OrderConsumer> logger)
    {
        _logger = logger;
    }

    public Task Consume(ConsumeContext<OrderDetails> context)
    {
        _logger.LogInformation("Message picked by OrderConsumer. OrderId : {OrderId}", context.Message.OrderId);
        return Task.CompletedTask;
    }
}
Model
public class OrderDetails
{
    public int OrderId { get; set; }
    public string OrderName { get; set; }
    public int Quantity { get; set; }
}
Controller
readonly IPublishEndpoint _publishEndpoint;

[HttpPost("PostOrder")]
public async Task<ActionResult> PostOrder(OrderDetails orderDetails)
{
    await _publishEndpoint.Publish<OrderDetails>(orderDetails);
    return Ok();
}
Output from ASP.NET
As highlighted, 2 exchanges are created: Sample:OrderDetails and Order.
However, Sample:OrderDetails is bound to the Order exchange,
and the Order exchange routes to the "Order" queue.
So the question is about the 2 exchanges that got created: I am not sure whether that is by design or whether a mistake in the code led to both being created, and if it is by design, why the need for 2 exchanges?

I was pondering the same question when I first started playing with MassTransit, and in the end came to understand it as follows:
You are routing two types of messages via MassTransit: events and commands. Events are multicast to potentially multiple consumers; commands go to a single consumer. Every consumer has its own input queue, to which messages are routed via exchanges.
For every message type, MassTransit by default creates one fanout exchange based on the message type, plus one fanout exchange and one queue for every consumer of that message.
This makes absolute sense for events, as you publish events using the event type (with no idea who, or whether anyone at all, will consume them), so in your case you publish to the OrderDetails exchange. MassTransit has to make sure that all consumers of this event are bound to this exchange; in this case you have one consumer, OrderConsumer. By default, MassTransit generates the name of the consumer exchange from the type name of the consumer, removing the Consumer suffix. The actual input queue for this consumer is bound to this exchange.
So you get something like this:
EventTypeExchange => ConsumerExchange => ConsumerQueue
or in your case:
Sample:OrderDetails (based on the type Sample.OrderDetails) => Order (based on the type OrderConsumer) => Order (again based on the OrderConsumer type)
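As an aside, if you prefer different endpoint names, MassTransit lets you swap in an endpoint name formatter when registering the bus. A minimal sketch, using the kebab-case formatter that ships with MassTransit (everything else mirrors the question's setup):

builder.Services.AddMassTransit(msConfig =>
{
    // With this formatter, the OrderConsumer endpoint (exchange + queue)
    // is named "order" instead of the default "Order".
    msConfig.SetKebabCaseEndpointNameFormatter();

    msConfig.AddConsumers(Assembly.GetEntryAssembly());

    msConfig.UsingRabbitMq((hostcontext, cfg) =>
    {
        cfg.ConfigureEndpoints(hostcontext);
    });
});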
For commands this is a bit less obvious, because a command can only ever be consumed by one consumer. In fact, you can tell MassTransit not to create the exchanges based on the command type. However, you would then have to route commands not by the command type but by the command handler type, which is really not a good approach: when sending a command, you would have to know the type name of its handler. This would introduce coupling that you really do not want. So I think it's best to keep the exchanges based on the command type and route to them by the command type.
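To make the distinction concrete: publishing goes to the message-type exchange, while sending targets one specific endpoint address. A rough sketch (the "queue:order" address is assumed for illustration, and ISendEndpointProvider is injected the same way as IPublishEndpoint):

readonly ISendEndpointProvider _sendEndpointProvider;

public async Task SendOrder(OrderDetails orderDetails)
{
    // Send delivers to exactly one endpoint; Publish fans out to every
    // consumer of the message type via the type-based exchange.
    var endpoint = await _sendEndpointProvider.GetSendEndpoint(new Uri("queue:order"));
    await endpoint.Send(orderDetails);
}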
As Chris (the author of MassTransit) mentions in the MassTransit RabbitMQ deep dive video (YouTube), this setup also lets you do interesting things like siphoning off messages to another queue for monitoring/auditing/debugging, just by creating a new queue and binding it to the existing fanout exchange.
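For instance, with the raw RabbitMQ .NET client you could attach an audit queue to the message-type exchange without touching the application at all (names and connection details here just follow the example above):

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declare a side queue and bind it to the existing fanout exchange;
// every published OrderDetails message is now also copied here.
channel.QueueDeclare("order-audit", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("order-audit", "Sample:OrderDetails", routingKey: "");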
All the above is based on my playing with the framework, so it's possible I got some of this wrong, but it does make sense to me at least. RabbitMQ is extremely flexible with its routing options, so Chris could have chosen a different approach (e.g. Brighter, a "competing" library, uses RabbitMQ differently to achieve the same result), but this one has merit as well.
MassTransit also, unlike some other frameworks such as NServiceBus or Brighter, doesn't really distinguish technically between the two or care about their semantic difference; e.g., you can send or publish a command just as you can an event.

Related

Way to determine Kafka topic for @KafkaListener on application startup?

We have 5 topics, and we want a service that scales to, for example, 5 instances of the same app.
This means I would want to dynamically determine (via, for example, Redis locking or a similar mechanism) which instance should listen to which topic.
I know that we could have 1 topic with 5 partitions, and each node in the same consumer group would pick up a partition. Also, if we have a separately deployed service, we can set the topic via properties.
The issue is that those two options are not suitable for our situation, and we want to see whether it is possible to do this via the approach explained above.
@PostConstruct
private void postConstruct() {
    // Do logic via Redis locking or similar to determine the topic
    dynamicallyDeterminedVariable = // SOME LOGIC
}

@KafkaListener(topics = "#{dynamicallyDeterminedVariable}")
void listener(String data) {
    LOG.info(data);
}
Yes, you can use SpEL for the topic name:
#{@someOtherBean.whichTopicToUse()}
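A minimal sketch of how that can look (the bean and topic names are made up; the expression is evaluated once, when the listener container starts):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component("someOtherBean")
class TopicChooser {
    // Placeholder for the real selection logic (Redis locking, etc.)
    public String whichTopicToUse() {
        return "topic-3";
    }
}

@Component
class DynamicTopicListener {
    // Spring resolves the SpEL expression against the application context,
    // calling whichTopicToUse() on the bean named someOtherBean.
    @KafkaListener(topics = "#{@someOtherBean.whichTopicToUse()}")
    void listen(String data) {
        System.out.println(data);
    }
}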

Multiple consumers with the same name in different projects subscribed to the same queue

We have a UserCreated event that gets published from UserManagement.Api. I have two other APIs, Payments.Api and Notification.Api, that should react to that event.
In both APIs I have public class UserCreatedConsumer : IConsumer<UserCreated> (so, different namespaces), but only one queue (on SQS) gets created for both consumers.
What is the best way to deal with this situation?
You didn't share your configuration, but if you're using:
x.AddConsumer<UserCreatedConsumer>();
As part of your MassTransit configuration, you can specify an InstanceId for that consumer to generate a unique endpoint address.
x.AddConsumer<UserCreatedConsumer>()
    .Endpoint(e => e.InstanceId = "unique-value");
Every separate service (not an instance of the same service) needs to have a different queue name of the receiving endpoint, as described in the docs:
cfg.ReceiveEndpoint("queue-name-per-service-type", e =>
{
    // rest of the configuration
});
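In each service that might look like the following (the queue name is illustrative, and context is the registration context already in scope in the configuration callback):

cfg.ReceiveEndpoint("payments-user-created", e =>
{
    // A unique queue per service means Payments.Api and Notification.Api
    // each get their own copy of every UserCreated event.
    e.ConfigureConsumer<UserCreatedConsumer>(context);
});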
It's also mentioned in the common mistakes article.

nestjs microservices - have one clientProxy to publish message to any microService

Sometimes, you want to say, "I have this message, who can handle it?"
In NestJS, a client proxy is bound directly to a single microservice.
So, as an example, let's say that I have the following microservices:
CleaningService and FixingService.
Both of the above can handle the message car, but only CleaningService can handle the message glass.
So, I want to have something like:
this.generalProxy.emit('car', {id: 2});
In this case, I want 2 different microservices to handle the car: CleaningService and FixingService.
And in this case:
this.generalProxy.emit('glass', {id: 5});
I want only CleaningService to handle it.
How is that possible? How can I create a ClientProxy that is not bound directly to a specific microservice?
The underlying transport layer matters: despite the abstraction in front of the different transports, each one has completely different characteristics and capabilities. The type of messaging pattern you're describing is simple to accomplish with RabbitMQ, because it has the notion of exchanges, queues, publishers, subscribers, etc., while a TCP-based microservice requires a connection from one service to another. Likewise, the Redis transport layer uses simple channels, without the underlying machinery to fan some messages out to multiple subscribers while routing others directly to specific subscribers.
This might not be the most popular opinion, but I've been using NestJS professionally for over 3 years, and I can definitely say that the official microservices packages are not sufficient for most real production applications. They work great as a proof of concept but quickly fall apart because of exactly these kinds of issues.
Luckily, NestJS provides great building blocks and primitives, in the form of its Module and DI systems, that allow much more feature-rich plugins to be built. I created one specifically for RabbitMQ to support the exact type of scenario you are describing.
Since you're already using RabbitMQ, I highly recommend that you check out @golevelup/nestjs-rabbitmq, which can easily support what you want to accomplish using native RMQ concepts like exchanges and routing keys. (Disclaimer: I am the author.) It also allows you to manage as many exchanges and queues as you like (instead of being forced to push everything through a single queue) and has native support for multiple messaging patterns, including PubSub and RPC.
You simply decorate the methods that you want to act as microservice message handlers with the appropriate metadata, and messaging will just work as expected. For example:
@Injectable()
export class CleaningService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'cleaning-cars',
  })
  public async cleanCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }

  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'glass',
    queue: 'cleaning-glass',
  })
  public async cleanGlass(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}

@Injectable()
export class FixingService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'fixing-cars',
  })
  public async fixCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}
With this setup, both the cleaning service and the fixing service will receive the car message on their individual handlers (since they use the same routing key), and only the cleaning service will receive the glass message.
Publishing messages is simple: you just include the exchange and routing key, and the right handlers will receive the message based on their configuration:
amqpConnection.publish('app', 'cars', { year: 2020, make: 'toyota' });
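For completeness, a rough sketch of the module setup this assumes (the exchange name, type, URI, and file paths are illustrative; see the package docs for the full options):

import { RabbitMQModule } from '@golevelup/nestjs-rabbitmq';
import { Module } from '@nestjs/common';
import { CleaningService } from './cleaning.service';
import { FixingService } from './fixing.service';

@Module({
  imports: [
    // Declares the 'app' exchange used by the handlers above.
    RabbitMQModule.forRoot(RabbitMQModule, {
      exchanges: [{ name: 'app', type: 'topic' }],
      uri: 'amqp://guest:guest@localhost:5672',
    }),
  ],
  providers: [CleaningService, FixingService],
})
export class AppModule {}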

How to balance multiple message queues

I have a task that is potentially long running (hours). The task is performed by multiple workers (AWS ECS instances, in my case) that read from a message queue (AWS SQS, in my case). I have multiple users adding messages to the queue. The problem is that if Bob adds 5000 messages to the queue, enough to keep the workers busy for 3 days, and Alice then comes along wanting to process 5 tasks, Alice will need to wait 3 days before any of her tasks even start.
I would like to feed messages to the workers from Alice and Bob at an equal rate as soon as Alice submits tasks.
I have solved this problem in another context by creating multiple queues (subqueues) for each user (or even each batch a user submits) and alternating between all subqueues when a consumer asks for the next message.
This seems, at least in my world, to be a common problem, and I'm wondering if anyone knows of an established way of solving it.
I don't see any solution with ActiveMQ. I've looked a little at Kafka, with its ability to round-robin partitions in a topic, and that may work. Right now, I'm implementing something using Redis.
I would recommend Cadence Workflow instead of queues, as it supports long-running operations and state management out of the box.
In your case, I would create a workflow instance per user. Every new task would be sent to the user's workflow via the signal API. The workflow instance would then queue up the received tasks and execute them one by one.
Here is an outline of the implementation:
public interface SerializedExecutionWorkflow {

    @WorkflowMethod
    void execute();

    @SignalMethod
    void addTask(Task t);
}

public interface TaskProcessorActivity {

    @ActivityMethod
    void process(Task poll);
}

public class SerializedExecutionWorkflowImpl implements SerializedExecutionWorkflow {

    private final Queue<Task> taskQueue = new ArrayDeque<>();
    private final TaskProcessorActivity processor = Workflow.newActivityStub(TaskProcessorActivity.class);

    @Override
    public void execute() {
        while (!taskQueue.isEmpty()) {
            processor.process(taskQueue.poll());
        }
    }

    @Override
    public void addTask(Task t) {
        taskQueue.add(t);
    }
}
And here is the code that enqueues a task to the workflow through its signal method:
private void addTask(WorkflowClient cadenceClient, Task task) {
    // Set workflowId to userId
    WorkflowOptions options = new WorkflowOptions.Builder().setWorkflowId(task.getUserId()).build();
    // Use the workflow interface stub to start/signal the workflow instance
    SerializedExecutionWorkflow workflow = cadenceClient.newWorkflowStub(SerializedExecutionWorkflow.class, options);
    BatchRequest request = cadenceClient.newSignalWithStartRequest();
    request.add(workflow::execute);
    request.add(workflow::addTask, task);
    cadenceClient.signalWithStart(request);
}
Cadence offers a lot of other advantages over using queues for task processing:
Built-in exponential retries with an unlimited expiration interval
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running, heartbeating operations
The ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, when using queues, all you know is whether there are some messages in a queue, and you need an additional DB to track overall progress; with Cadence, every event is recorded.
The ability to cancel an update in flight.
See the presentation that goes over the Cadence programming model.

Workaround to fix StreamListener constant Channel Name

I am using Spring Cloud Stream to consume messages, with something like:
@StreamListener(target = "CONSTANT_CHANNEL_NAME")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
But I want the channel name to depend on my environment and be picked up from a property file, whereas according to Spring the channel name should be a constant.
Is there a workaround for this problem?
Edit 1:
Here is the actual situation:
I am using multiple queues and DLQ queues, and their binding is done with RabbitMQ.
I want to change my channel names and queue names per environment.
I want to do it all on the same AMQP host.
My Sink code:
public interface ProcessorSink extends Sink {

    @Input(CONSTANT_CHANNEL_NAME)
    SubscribableChannel channel();

    @Input(CONSTANT_CHANNEL_NAME_1)
    SubscribableChannel channel2();

    @Input(CONSTANT_CHANNEL_NAME_2)
    SubscribableChannel channel3();
}
You can pick the target value from a property file, as below:
@StreamListener(target = "${streamListener.target}")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
application.yml:
streamListener:
  target: CONSTANT_CHANNEL_NAME
While there are many ways to do that, I wonder why you even care. If anything, you want the channel name to be constant, so it is always the same, and then map it to different remote destinations (e.g., Kafka, Rabbit) through configuration properties. For example, spring.cloud.stream.bindings.input.destination=myKafkaTopic states that the channel named input will be mapped to (bridged with) the Kafka topic named myKafkaTopic.
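For example (the destination names are made up), the binding name input stays fixed in code while each environment's configuration maps it to its own destination:

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: orders-dev   # e.g. orders-prod in the production profile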
In fact, to further prove my point, we completely abstracted channels away altogether for users of the spring-cloud-function programming model, but that is a whole different discussion.
My point is that I believe you are actually creating a problem rather than solving one: by externalising the channel name, you create the risk that, due to misconfiguration, the channel you actually bind and the channel you mention in your properties will not be the same.
