The EndpointName property in a ConsumerDefinition file seems to be ignored by MassTransit. I know the ConsumerDefinition is being used, because the retry logic works. How do I get different commands to go to different queues? I can get them all to go through one central queue, but I don't think that is best practice for commands.
Here is my app configuration that executes on startup when creating the MassTransit bus.
Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        host.SharedAccessSignature(s =>
        {
            s.KeyName = _config.KeyName;
            s.SharedAccessKey = _config.SharedAccessKey;
            s.TokenTimeToLive = TimeSpan.FromDays(1);
            s.TokenScope = TokenScope.Namespace;
        });
    });

    cfg.ReceiveEndpoint("publish", ec =>
    {
        // register all consumers in the assembly and use their definition files
        ec.ConfigureConsumers(provider);
    });
});
And here is my handler definition in the consumer (an Azure worker service):
public class CreateAccessPointCommandHandlerDef : ConsumerDefinition<CreateAccessPointCommandHandler>
{
    public CreateAccessPointCommandHandlerDef()
    {
        EndpointName = "specific";
        ConcurrentMessageLimit = 4;
    }

    protected override void ConfigureConsumer(
        IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<CreateAccessPointCommandHandler> consumerConfigurator
    )
    {
        endpointConfigurator.UseMessageRetry(r =>
        {
            r.Immediate(2);
        });
    }
}
In the app that sends the message, I have to configure it to send to the "publish" queue, not "specific".
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:specific")); // does not work
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:publish")); // this does work
Because you are configuring the receive endpoint yourself, and giving it the name publish, that receive endpoint is what gets created; the EndpointName from the consumer definition is never applied.
To configure the endpoints using the definitions, use:
cfg.ConfigureEndpoints(provider);
This will use the definitions that were registered in the container to configure the receive endpoints, using the endpoint name defined for each consumer.
This is also explained in the documentation.
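For example, a minimal sketch of the startup code with the manual "publish" endpoint removed (the SAS host configuration is elided; it stays the same as above):

Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        // same SharedAccessSignature configuration as above
    });

    // creates one receive endpoint per registered consumer, using the
    // EndpointName from each consumer definition ("specific" in this case)
    cfg.ConfigureEndpoints(provider);
});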
I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services, including the api.service.js file, look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one with the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The setup below should have everything that you need.
As a skeleton, I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js service, which is responsible for communicating with the remote socket.io server, so it needs a socket.io client. When the service has started(), I establish a connection with the remote server at port 8071. After that I can use this connection in my service actions to communicate with the socket.io server, which is exactly what I'm doing in the sensor.list action (see the sketch after this list).
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
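For reference, here is a minimal sketch of what sensor.service.js could look like; the address, the getSensors event name, and the (err, data) reply shape are illustrative assumptions, not part of the original question:

// sensor.service.js (sketch)
const io = require("socket.io-client");

module.exports = {
    name: "sensor",
    started() {
        // open one client connection to the local program when the service starts
        this.socket = io("http://localhost:8071");
    },
    stopped() {
        // close the connection when the service stops
        this.socket.close();
    },
    actions: {
        list() {
            // ask the local program for data and resolve with its acknowledgement
            return new Promise((resolve, reject) => {
                this.socket.emit("getSensors", (err, data) => {
                    if (err) reject(err);
                    else resolve(data);
                });
            });
        }
    }
};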
It doesn't matter whether or not your services use socket.io: all services are declared in the same way, i.e., module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();

broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner, just copy the service declarations into my-service.service.js files. For example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};
Then run npm run dev or moleculer-runner --repl --hot services.
I am developing APIs & microservices in NestJS. This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
    return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, DTO validation works fine, but when I call this via the microservice client, validation does not run and the message passes through to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
    @IsNotEmpty()
    action: string;

    // @IsString()
    readonly rec_id: string;

    @IsNotEmpty()
    readonly data: Object;

    extras: Object;
    // readonly extras2: Object;
}
When I call the API without the action parameter, it shows an "action required" error, but when I call it from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = { id: '5d1de5d787db5151903c80b9', extras: { 'asdf': 'dsf' } };
return this.client.send<number>(pattern, data);
it does not throw an error and goes through to the service.
I have also added a global validation pipe:
app.useGlobalPipes(new ValidationPipe({
    disableErrorMessages: false, // set true to hide detailed error messages
    whitelist: false, // set true to strip params which are not in the DTO
    transform: false // set true if you want params converted to the DTO class; false by default
}));
How can I make it work for both the API and the microservice? I need everything in one place with the same functionality, so it can be called by either kind of client.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException. You can catch the HTTP exception in a filter and return it as an RpcException:
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
    catch(exception: HttpException, host: ArgumentsHost) {
        return new RpcException(exception.getResponse());
    }
}
@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
    @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDTO,
) {
    // payload validates to SomeDTO
    ...
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and micro services. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule, or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
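As a sketch, binding the pipe from the root module could look like this (assuming a standard AppModule; adjust the pipe options to your needs):

// app.module.ts (sketch)
import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
    providers: [
        {
            // pipes bound via the APP_PIPE token are applied by the framework itself
            provide: APP_PIPE,
            useValue: new ValidationPipe(),
        },
    ],
})
export class AppModule {}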
As I understand it, useGlobalPipes works fine for the API but not for the microservice.
The reason is that a Nest app with a connected microservice is a hybrid application, and hybrid applications have some restrictions. Please refer to the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to the official Nest documentation.
So you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
    {
        transport: Transport.TCP,
    },
    { inheritAppConfig: true },
);
It worked for me!
I'm using the following code to send a request/response message between two different processes.
This is the process that "sends" the request:
// configure host
var hostUri = new Uri(configuration["RabbitMQ:Host"]);

services.AddSingleton(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(hostUri, h =>
    {
        h.Username(configuration["RabbitMQ:Username"]);
        h.Password(configuration["RabbitMQ:Password"]);
    });
}));

// add request client
services.AddScoped(provider => provider.GetRequiredService<IBus>()
    .CreateRequestClient<QueryUserInRole, QueryUserInRoleResult>(
        new Uri(hostUri, "focus-authorization"),
        TimeSpan.FromSeconds(5)));

// add dependencies
services.AddSingleton<IPublishEndpoint>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<ISendEndpointProvider>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<IBus>(provider => provider.GetRequiredService<IBusControl>());

// add the service class so that the runtime can automatically handle the start and stop of our bus
services.AddSingleton<Microsoft.Extensions.Hosting.IHostedService, BusService>();
Here's the implementation of the BusService:
public class BusService : Microsoft.Extensions.Hosting.IHostedService
{
    private readonly IBusControl _busControl;

    public BusService(IBusControl busControl)
    {
        _busControl = busControl;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        return _busControl.StartAsync(cancellationToken);
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        return _busControl.StopAsync(cancellationToken);
    }
}
The problem is that when the CreateRequestClient code runs, the bus has not started up yet. Thus the response is never returned from the consumer.
If I replace the host configuration with the following code, I get the desired behavior:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(hostUri, h =>
    {
        h.Username(configuration["RabbitMQ:Username"]);
        h.Password(configuration["RabbitMQ:Password"]);
    });
});

bus.Start();

services.AddSingleton(bus);
For some reason, the BusService (IHostedService) executes after the AddScoped delegates.
My question is: what is the correct way to start up the bus before using the CreateRequestClient method? Or is the latter approach to starting up the bus sufficient?
I am trying to implement a heartbeat feature for my application, so I was trying to use the recurring message feature from MassTransit with RabbitMQ. I based my attempt on the sample given on MassTransit's website. Here is all of the code:
namespace MasstransitBasicSample
{
    using System;
    using System.Threading.Tasks;
    using MassTransit;
    using MassTransit.Scheduling;

    class Program
    {
        static void Main(string[] args)
        {
            var bus = Bus.Factory.CreateUsingRabbitMq(sbc =>
            {
                var host = sbc.Host(new Uri("rabbitmq://localhost"), h =>
                {
                    h.Username("guest");
                    h.Password("guest");
                });

                sbc.UseMessageScheduler(new Uri("rabbitmq://localhost/quartz"));

                sbc.ReceiveEndpoint(host, "test_queue", ep =>
                {
                    ep.Handler<YourMessage>(context =>
                    {
                        return Console.Out.WriteLineAsync($"Received: {context.Message.Text}");
                    });

                    ep.Handler<PollExternalSystem>(context =>
                    {
                        return Console.Out.WriteLineAsync($"Received: {context.Message}");
                    });
                });
            });

            bus.Start();

            SetRecurring(bus);

            Console.WriteLine("Press any key to exit");
            Console.ReadKey();

            bus.Stop();
        }

        private static async Task SetRecurring(IBusControl bus)
        {
            var schedulerEndpoint = await bus.GetSendEndpoint(new Uri("rabbitmq://localhost/quartz"));

            var scheduledRecurringMessage = await schedulerEndpoint.ScheduleRecurringSend(
                new Uri("rabbitmq://localhost/test_queue"),
                new PollExternalSystemSchedule(),
                new PollExternalSystem());
        }
    }

    public class YourMessage { public string Text { get; set; } }

    public class PollExternalSystemSchedule : DefaultRecurringSchedule
    {
        public PollExternalSystemSchedule()
        {
            CronExpression = "* * * * *"; // this means every minute
        }
    }

    public class PollExternalSystem { }
}
I have created a queue called quartz in my RabbitMQ broker.
When I run the application, it sends a message to the quartz queue, and that message just stays there; it never goes to the test queue.
I was also expecting another message to be sent to the quartz queue after a minute, based on the cron expression; that does not happen either.
Is my setup wrong?
Any help would be much appreciated.
You need to run the scheduling service that listens on rabbitmq://localhost/quartz, where your messages are being sent.
The documentation page says:
There is a standalone MassTransit service, MassTransit.QuartzService, which can be installed and used on servers for this purpose. It is configured via the App.config file and is a good example of how to build a standalone MassTransit service.
Alternatively, you can host Quartz scheduling in the same process by using the in-memory scheduler, described here, configured like this:
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.UseInMemoryScheduler();
});
Configuring a RetryPolicy inside a ReceiveEndpoint of a queue used to store messages from commands and events (as shown below) appears not to work when the queue is a saga endpoint queue.
This configuration works fine (note the endpoint RegisterOrderServiceQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.RegisterOrderServiceQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
...but the same RetryPolicy configuration on the Windows service that runs the saga state machine does not work (note the endpoint SagaQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.SagaQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
StateMachine class source code that throws an ArgumentException:
...
During(Registered,
    When(ApproveOrder)
        .Then(context =>
        {
            throw new ArgumentException("Test for monitoring sagas");
            context.Instance.EstimatedTime = context.Data.EstimatedTime;
            context.Instance.Status = context.Data.Status;
        })
        .TransitionTo(Approved),
...
But when ApproveOrder occurs, the RetryPolicy rules are ignored; connecting a ConsumeObserver to the bus the saga is connected to shows that the ConsumeFault method is executed 5 times (which is the default behavior of MassTransit).
Should this work? Is there any misconception in my configuration?