I've been receiving the following error while setting up the Quartz.NET scheduler with MassTransit.
After the message is scheduled on RabbitMQ, when Quartz.NET tries to read it, this error is thrown:
MT-Fault-Message: Object reference not set to an instance of an object.
MT-Fault-Timestamp: 2021-06-02T18:46:56.1335404Z
MT-Fault-StackTrace: at MassTransit.QuartzIntegration.ScheduleMessageConsumer.TranslateJsonBody(String body, String destination)
at MassTransit.QuartzIntegration.ScheduleMessageConsumer.CreateJobDetail(ConsumeContext context, Uri destination, JobKey jobKey, Nullable`1 tokenId)
at MassTransit.QuartzIntegration.ScheduleMessageConsumer.Consume(ConsumeContext`1 context)
at MassTransit.Pipeline.ConsumerFactories.DelegateConsumerFactory`1.Send[TMessage](ConsumeContext`1 context, IPipe`1 next)
at MassTransit.Pipeline.ConsumerFactories.DelegateConsumerFactory`1.Send[TMessage](ConsumeContext`1 context, IPipe`1 next)
at MassTransit.Pipeline.Filters.ConsumerMessageFilter`2.GreenPipes.IFilter<MassTransit.ConsumeContext<TMessage>>.Send(ConsumeContext`1 context, IPipe`1 next)
at MassTransit.Pipeline.Filters.ConsumerMessageFilter`2.GreenPipes.IFilter<MassTransit.ConsumeContext<TMessage>>.Send(ConsumeContext`1 context, IPipe`1 next)
at GreenPipes.Partitioning.Partition.Send[T](T context, IPipe`1 next)
at GreenPipes.Filters.TeeFilter`1.<>c__DisplayClass5_0.<<Send>g__SendAsync|1>d.MoveNext()
--- End of stack trace from previous location ---
at GreenPipes.Filters.OutputPipeFilter`2.SendToOutput(IPipe`1 next, TOutput pipeContext)
at GreenPipes.Filters.OutputPipeFilter`2.SendToOutput(IPipe`1 next, TOutput pipeContext)
at GreenPipes.Filters.DynamicFilter`1.<>c__DisplayClass10_0.<<Send>g__SendAsync|0>d.MoveNext()
--- End of stack trace from previous location ---
at MassTransit.Pipeline.Filters.DeserializeFilter.Send(ReceiveContext context, IPipe`1 next)
at GreenPipes.Filters.RescueFilter`2.GreenPipes.IFilter<TContext>.Send(TContext context, IPipe`1 next)
MT-Fault-ConsumerType: MassTransit.QuartzIntegration.ScheduleMessageConsumer
MT-Fault-MessageType: MassTransit.Scheduling.ScheduleMessage
This is how ConfigureServices is set up:
.ConfigureServices((host, services) =>
{
services.Configure<OtherOptions>(host.Configuration);
services.Configure<QuartzOptions>(host.Configuration.GetSection("Quartz"));
services.AddSingleton<QuartzConfiguration>();
services.AddMassTransit(x =>
{
x.UsingRabbitMq((context, cfg) =>
{
var options = context.GetService<QuartzConfiguration>();
cfg.AddScheduling(s =>
{
s.SchedulerFactory = new StdSchedulerFactory(options.Configuration);
s.QueueName = options.Queue;
});
var vhost = host.Configuration.GetValue<string>("RabbitMQ:VirtualHost");
cfg.Host(string.Empty, vhost, h =>
{
h.Username( host.Configuration.GetValue<string>("RabbitMQ:User"));
h.Password(host.Configuration.GetValue<string>("RabbitMQ:Password"));
h.UseCluster(c =>
{
c.Node(host.Configuration.GetValue<string>("RabbitMQ:Node1"));
c.Node(host.Configuration.GetValue<string>("RabbitMQ:Node2"));
});
});
});
});
services.AddMassTransitHostedService();
});
And this is how the Quartz configuration is set up:
public NameValueCollection Configuration
{
get
{
var configuration = new NameValueCollection(13)
{
{"quartz.scheduler.instanceName", _options.Value.InstanceName},
{"quartz.scheduler.instanceId", "AUTO"},
{"quartz.plugin.timeZoneConverter.type","Quartz.Plugin.TimeZoneConverter.TimeZoneConverterPlugin, Quartz.Plugins.TimeZoneConverter"},
{"quartz.serializer.type", "json"},
{"quartz.threadPool.type", "Quartz.Simpl.SimpleThreadPool, Quartz"},
{"quartz.threadPool.threadCount", (_options.Value.ThreadCount ?? 10).ToString("F0")},
{"quartz.jobStore.misfireThreshold", "60000"},
{"quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz"},
{"quartz.jobStore.driverDelegateType", "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz"},
{"quartz.jobStore.tablePrefix", _options.Value.TablePrefix},
{"quartz.jobStore.dataSource", "default"},
{"quartz.dataSource.default.provider", _options.Value.Provider},
{"quartz.dataSource.default.connectionString", _options.Value.ConnectionString},
{"quartz.jobStore.useProperties", "true"}
};
foreach (var key in configuration.AllKeys)
{
_logger.LogInformation("{Key} = {Value}", key, configuration[key]);
}
return configuration;
}
}
Also, here is the extension method used to set up MassTransit scheduling:
public static void AddScheduling(this IBusFactoryConfigurator configurator, Action<InMemorySchedulerOptions> configure)
{
if (configurator == null)
throw new ArgumentNullException(nameof(configurator));
var options = new InMemorySchedulerOptions();
configure?.Invoke(options);
if (options.SchedulerFactory == null)
throw new ArgumentNullException(nameof(options.SchedulerFactory));
var observer = new SchedulerBusObserver(options);
configurator.ReceiveEndpoint(options.QueueName, e =>
{
var partitioner = configurator.CreatePartitioner(Environment.ProcessorCount);
e.Consumer(() => new ScheduleMessageConsumer(observer.Scheduler), x =>
x.Message<ScheduleMessage>(m => m.UsePartitioner(partitioner, p => p.Message.CorrelationId)));
e.Consumer(() => new CancelScheduledMessageConsumer(observer.Scheduler), x =>
x.Message<CancelScheduledMessage>(m => m.UsePartitioner(partitioner, p => p.Message.TokenId)));
configurator.UseMessageScheduler(e.InputAddress);
configurator.ConnectBusObserver(observer);
});
}
I tried debugging and couldn't find a reason for the error I'm receiving: the database connections are created successfully, and the message scheduling on the consumer completes successfully.
Is there a way to actually debug while the scheduler is reading the message, or to find out exactly what is missing and throwing the object reference error?
EDIT
Debugging the MassTransit.QuartzIntegration code, I noticed the issue occurs while deserializing the message in the TranslateJsonBody method.
var envelope = JObject.Parse(body);
envelope["destinationAddress"] = destination;
var message = envelope["message"];
var payload = message["payload"];
var payloadType = message["payloadType"];
As you can see, the method expects camelCase keys, but our broker is sending PascalCase. Since there is no matching key, the method gets null when it reads envelope["message"], because we're sending it as "Message". Is there any configuration to force camelCase when deserializing?
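One idea I'm considering is forcing camelCase on the producing side. If the producer also uses MassTransit v7 with the Newtonsoft serializer (an assumption on my part), a sketch like this on its bus configuration should put camelCase property names back in place via the ConfigureJsonSerializer/ConfigureJsonDeserializer hooks:
cfg.ConfigureJsonSerializer(settings =>
{
    // assumption: the producer overrode the default contract resolver; force camelCase back
    settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    return settings;
});
cfg.ConfigureJsonDeserializer(settings =>
{
    settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    return settings;
});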
Related
I am trying to configure a Producer to send a message to a Consumer that has a dead-letter queue configured. The Producer is using a SendEndpoint (or rather the request/response pattern), but I get an exception from RabbitMQ.
I have the following consumer:
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddMassTransit(x =>
{
x.AddConsumer<SomeMessageRequestConsumer>();
x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(busConfig =>
{
busConfig.Host(new Uri("rabbitmq://rabbit#localhost"), "/", hostConfigurator =>
{
hostConfigurator.Password("Guest");
hostConfigurator.Username("Guest");
});
busConfig.ReceiveEndpoint(nameof(SomeMessage), x =>
{
x.ConfigureConsumer<SomeMessageRequestConsumer>(provider);
x.Durable = false;
x.ConfigureConsumeTopology = false;
x.BindDeadLetterQueue("SomeMessageDeadLetter", "SomeMessageDeadLetter", null);
});
}));
});
services.AddMassTransitHostedService();
}
I have the following Producer:
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddSingleton<IReplyToClientFactory, ReplyToClientFactory>();
services.AddMassTransit(x =>
{
x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(busConfig =>
{
busConfig.Host(new Uri("rabbitmq://rabbit#localhost"), "/", hostConfigurator =>
{
hostConfigurator.Password("Guest");
hostConfigurator.Username("Guest");
});
}));
});
services.AddMassTransitHostedService();
}
In the Producer project I have a controller that sends the message like so:
private readonly IReplyToClientFactory clientFactory;

public ProducerController(IReplyToClientFactory clientFactory)
{
    this.clientFactory = clientFactory;
}
[HttpPost]
public async Task<IActionResult> Post(CancellationToken cancellationToken)
{
var serviceAddress = new Uri($"queue:{nameof(SomeMessage)}?durable=false");
var client = this.clientFactory.GetFactory().CreateRequestClient<SomeMessage>(serviceAddress);
var (successResponse, failResponse) = await client.GetResponse<SomeMessageSuccessResponse, SomeMessageFailResponse>(new SomeMessage()
{
Text = "Hello",
}, cancellationToken, TimeSpan.FromSeconds(5));
return Ok();
}
I get the following error from RabbitMQ:
operation queue.declare caused a channel exception precondition_failed: inequivalent arg 'x-dead-letter-exchange' for queue 'SomeMessage' in vhost '/': received none but current is the value 'SomeMessageDeadLetter' of type 'longstr'
I have tried to configure the deadletter on the Publish, Send and Message Topologies but with no success. Is what I am trying to do possible or am I chasing the wind here?
You could change the destination address from a queue to an exchange to decouple your producer from the consumer queue configuration. With a queue: address, the request client declares the queue itself without the dead-letter arguments, which conflicts with the queue the consumer already declared and triggers the precondition_failed error. To send to the exchange instead, change your address format to:
$"exchange:{nameof(SomeMessage)}"
That way, you don't need to know the queue configuration to send the request.
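Applied to the controller above, only the address changes (a sketch based on the posted Post action):
var serviceAddress = new Uri($"exchange:{nameof(SomeMessage)}");
var client = this.clientFactory.GetFactory().CreateRequestClient<SomeMessage>(serviceAddress);
// The consumer still receives the request through its own queue (with the dead-letter
// binding it configured); the sender no longer declares that queue at all.
var (successResponse, failResponse) = await client.GetResponse<SomeMessageSuccessResponse, SomeMessageFailResponse>(
    new SomeMessage { Text = "Hello" }, cancellationToken, TimeSpan.FromSeconds(5));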
I'm making a request to a 3rd party API via NestJS's built-in HttpService. I'm trying to simulate a scenario where the initial call to one of this API's endpoints might return an empty array on the first try. I'd like to use RxJS's retryWhen to hit the API again after a delay of 1 second. However, I'm currently unable to get the unit test to mock the second response:
it('Retries view account status if needed', (done) => {
jest.spyOn(httpService, 'post')
.mockReturnValueOnce(of(failView)) // mock gets stuck on returning this value
.mockReturnValueOnce(of(successfulView));
const accountId = '0812081208';
const batchNo = '39cba402-bfa9-424c-b265-1c98204df7ea';
const response = client.viewAccountStatus(accountId, batchNo);
response.subscribe(
data => {
expect(data[0].accountNo)
.toBe('0812081208');
expect(data[0].companyName)
.toBe('Some company name');
done();
},
)
});
My implementation is:
viewAccountStatus(accountId: string, batchNo: string): Observable<any> {
const verificationRequest = new VerificationRequest();
verificationRequest.accountNo = accountId;
verificationRequest.batchNo = batchNo;
this.logger.debug(`Calling 3rd party service with batchNo: ${batchNo}`);
const config = {
headers: {
'Content-Type': 'application/json',
},
};
const response = this.httpService.post(url, verificationRequest, config)
.pipe(
map(res => {
console.log(res.data); // always empty
if (res.status >= 400) {
throw new HttpException(res.statusText, res.status);
}
if (!res.data.length) {
this.logger.debug('Response was empty');
throw new HttpException('Account not found', 404);
}
return res.data;
}),
retryWhen(errors => {
this.logger.debug(`Retrying accountId: ${accountId}`);
// It's entirely possible the first call will return an empty array
// So we retry with a backoff
return errors.pipe(
delayWhen(() => timer(1000)),
take(1),
);
}),
);
return response;
}
When logging from inside the initial map, I can see that the array is always empty. It's as if the second mocked value never happens. Perhaps I also have a solid misunderstanding of how observables work and I should somehow be trying to assert against the SECOND value that gets emitted? Regardless, when the observable retries, we should be seeing that second mocked value, right?
I'm also getting:
Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout.
on each run... so I'm guessing I'm not calling done() in the right place.
I think the problem is that retryWhen(notifier) will resubscribe to the same source when its notifier emits.
Meaning that if you have
new Observable(s => {
s.next(1);
s.next(2);
s.error(new Error('err!'));
}).pipe(
retryWhen(/* ... */)
)
The callback will be invoked every time the source is re-subscribed. In your example, it will re-run the logic responsible for sending the request, but it won't call the post method again, so the second mocked return value is never used.
The source could be thought of as the Observable's callback: s => { ... }.
What I think you'll have to do is to conditionally choose the source, based on whether the error took place or not.
Maybe you could use mockImplementation:
let hasErr = false;
jest.spyOn(httpService, 'post')
.mockImplementation(
() => hasErr ? of(successfulView) : (hasErr = true, of(failView))
)
Edit
I think the above does not do anything different; here's what I think mockImplementation should look like:
let err = false;
jest.spyOn(httpService, 'post').mockImplementation(
  () => new Observable(s => {
    if (err) {
      s.next(successfulView);
    } else {
      err = true;
      s.next(failView);
    }
  })
);
I would like to instantiate a turnContext to be used in integration testing. How would I be able to instantiate one without calling the processActivity() method of the adapter?
I am looking at the documentation, but it shows that I would need the request of the post call as a parameter. I would like my testing to be independent of the post call. I would then assume that I need to instantiate the request? How would I go about doing so?
Image of documentation
This is a bit hard to answer without knowing how you are planning to use the code. That being said, it's not that hard to create a new turnContext and also bypass the processActivity(). Given how you are referencing turnContext and processActivity(), I'm assuming you are using the Node SDK. Implementing in C# wouldn't be too different.
Here are two options, both of which create a new adapter; you can also pass in an already established turnContext, if desired:
Use .createContext in server.post in the index.js file, or
Maintain the processActivity() method in server.post. This calls a new "onTurn" method in the bot.js file, which lets you control when and how the new "onTurn" is accessed.
Option 1: In the index.js file, you will want to create a new adapter or make a copy of the first depending on your needs:
const adapter = new BotFrameworkAdapter({
appId: endpointConfig.appId || process.env.MicrosoftAppId,
appPassword: endpointConfig.appPassword || process.env.MicrosoftAppPassword
});
const newAdapter = adapter;
or
const adapter = new BotFrameworkAdapter({
appId: endpointConfig.appId || process.env.MicrosoftAppId,
appPassword: endpointConfig.appPassword || process.env.MicrosoftAppPassword
});
const newAdapter = new BotFrameworkAdapter({
appId: endpointConfig.appId || process.env.MicrosoftAppId,
appPassword: endpointConfig.appPassword || process.env.MicrosoftAppPassword
});
Include the onTurnError code to catch errors:
// Catch-all for errors.
adapter.onTurnError = async (context, error) => {
console.error(`\n [onTurnError]: ${ error }`);
await context.sendActivity(`Oops. Something went wrong!`);
};
// Catch-all for errors.
newAdapter.onTurnError = async (context, error) => {
console.error(`\n [onTurnError]: ${ error }`);
await context.sendActivity(`Oops. Something went wrong!`);
};
Then, set the new adapters and create the new turnContext:
server.post('/api/messages', (req, res) => {
adapter.processActivity(req, res, async (turnContext) => {
await bot.onTurn(turnContext);
});
newAdapter.createContext(req, res);
});
Option 2: In the index.js file, building off of the above code, set the adapters to await the individual "onTurn" methods:
// Listen for incoming requests.
server.post('/api/messages', (req, res) => {
adapter.processActivity(req, res, async (turnContext) => {
await bot.onTurn(turnContext);
});
newAdapter.processActivity(req, res, async (turnContext) => {
await bot.newOnTurn(turnContext);
});
});
In the bot.js file, you will have your two "onTurn" methods. In this example, the different "onTurn" methods are called based on whether a message is sent or I am deleting user data (I am sending this event via the Emulator => Conversation menu item). What you decide to match on is up to you.
async newOnTurn(turnContext) {
if (turnContext.activity.type === ActivityTypes.DeleteUserData) {
const dc = await this.dialogs.createContext(turnContext);
await dc.context.sendActivity(`Looks like you deleted some user data.`);
}
}
async onTurn(turnContext) {
if (turnContext.activity.type === ActivityTypes.Message) {
const dc = await this.dialogs.createContext(turnContext);
await dc.context.sendActivity(`Looks like you sent a message.`);
}
}
Hope this helps!
Note: After resolving the redirection issue, I ran into another issue: the error "Cannot cast Newtonsoft.Json.Linq.JArray to Newtonsoft.Json.Linq.JToken". So in my answer I have provided the solution for both.
I have an Identity Server project and a client project. Everything works up to authentication without any issues, and it even redirects to the correct client URL, but the URL (e.g. "https://localhost:44309/signin-oidc") gives a blank page.
Note: SSL is enabled for both the Identity Server and the client application.
It is authenticating the user as expected, as you can see below in the screenshot.
My Identity Server contains the following config values for the client:
// OpenID Connect hybrid flow and client credentials client (MVC)
new Client
{
ClientId = "mvc",
ClientName = "MVC Client",
AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,
ClientSecrets =
{
new Secret("secret".Sha256())
},
RedirectUris = { /*"http://localhost:5002/signin-oidc",*/"https://localhost:44309/signin-oidc" },
PostLogoutRedirectUris = { /*"http://localhost:5002/signout-callback-oidc",*/"https://localhost:44309/signout-callback-oidc" },
AllowedScopes =
{
IdentityServerConstants.StandardScopes.OpenId,
IdentityServerConstants.StandardScopes.Profile,
//"api1"
},
AllowOfflineAccess = true
}
The startup.cs is as follows.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
// configure identity server with in-memory stores, keys, clients and scopes
services.AddIdentityServer()
.AddDeveloperSigningCredential()
.AddInMemoryIdentityResources(Config.GetIdentityResources())
.AddInMemoryApiResources(Config.GetApiResources())
.AddInMemoryClients(Config.GetClients())
.AddTestUsers(Config.GetUsers());
services.AddAuthentication()
//.AddGoogle("Google", options =>
//{
// options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
// options.ClientId = "434483408261-55tc8n0cs4ff1fe21ea8df2o443v2iuc.apps.googleusercontent.com";
// options.ClientSecret = "3gcoTrEDPPJ0ukn_aYYT6PWo";
//})
.AddOpenIdConnect("oidc", "dataVail Login", options =>
{
options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
options.SignOutScheme = IdentityServerConstants.SignoutScheme;
options.Authority = "https://login.microsoftonline.com/d0e2ebcc-0961-45b2-afae-b9ed6728ead7";//"https://demo.identityserver.io/";
options.ClientId = "f08cc131-72da-4831-b19d-e008024645e4";
options.UseTokenLifetime = true;
options.CallbackPath = "/signin-oidc";
options.RequireHttpsMetadata = false;
options.TokenValidationParameters = new TokenValidationParameters
{
NameClaimType = "name",
RoleClaimType = "role"
};
});
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
app.Use(async (context, next) =>
{
context.Request.Scheme = "https";
await next.Invoke();
});
app.UseIdentityServer();
app.UseStaticFiles();
app.UseMvcWithDefaultRoute();
}
Here is the startup.cs for my client app
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
services.AddAuthentication(options =>
{
options.DefaultScheme = "Cookies";
options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies")
.AddOpenIdConnect("oidc", options =>
{
options.SignInScheme = "Cookies";
options.Authority = "https://localhost:44392/";
options.RequireHttpsMetadata = false;
options.ClientId = "mvc";
options.ClientSecret = "secret";
options.ResponseType = "code id_token";
options.SaveTokens = true;
options.GetClaimsFromUserInfoEndpoint = true;
//options.Scope.Add("api1");
options.Scope.Add("offline_access");
});
}
Can anyone please help me sort this out?
I was able to resolve this with the help of the IdentityServer4 folks.
If anyone comes across this problem, here is the solution.
I missed adding "UseAuthentication" when configuring the client MVC pipeline; a minimal sketch of the fix is shown below. After adding that, I was redirected as expected, and then I had another issue, shown after the sketch.
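This is roughly what the client's Configure method ends up looking like (a sketch; your other middleware may differ, the key point is calling UseAuthentication before MVC):
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    app.UseAuthentication(); // this was the missing call
    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}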
System.InvalidCastException: Cannot cast Newtonsoft.Json.Linq.JArray to Newtonsoft.Json.Linq.JToken.
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.d__12.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.<Invoke>d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.<Invoke>d__7.MoveNext()
I'm getting this exception while connecting my application to IdentityServer4 with Azure AD as an external authentication provider. My application is using the hybrid flow to connect to IdentityServer4. I get properly redirected to Azure, log in, and the code and id_token are properly issued. This exception is raised in my application when the userInfo endpoint is invoked.
To resolve this, I had to remove the name claim that appears twice.
I confirmed that AAD sends two name claims. Removing one of them resolved the problem.
// AAD sends the name claim twice; remove one copy so only a single name claim remains.
var namesClaim = externalUser.FindFirst(ClaimTypes.Name) ??
    throw new Exception("Unknown names");
claims.Remove(namesClaim);
Hope this helps someone.
I had the same problem with multiple roles. Here is the solution:
.AddOpenIdConnect("oidc", options =>
{
// ...
options.Scope.Add("roles");
// ... using MapJsonKey instead of MapUniqueJsonKey for having 2 or more roles
options.ClaimActions.MapJsonKey(claimType: "role", jsonKey: "role");
});
I am trying to add a microservice containing a MassTransit observer to an existing system; the observer should watch the request/response and publish messages already flowing through the system. I cannot easily redeploy the existing services, so I would prefer to avoid that if possible.
The following code only executes when the service starts; the observers are never invoked when a message is sent.
BusControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri($"{settings.Protocol}://{settings.RabbitMqHost}/"), h =>
{
h.Username(settings.RabbitMqConsumerUser);
h.Password(settings.RabbitMqConsumerPassword);
});
cfg.ReceiveEndpoint(host, "pub_sub_flo", ec => { });
host.ConnectSendObserver(new RequestObserver());
host.ConnectPublishObserver(new RequestObserver());
});
Observers:
public class RequestObserver : ISendObserver, IPublishObserver
{
public Task PreSend<T>(SendContext<T> context) where T : class
{
return Task.CompletedTask;
}
public Task PostSend<T>(SendContext<T> context) where T : class
{
var proxy = new StoreProxyFactory().CreateProxy("fabric:/MessagePatterns");
proxy.AddEvent(new ConsumerEvent()
{
Id = Guid.NewGuid(),
ConsumerId = Guid.NewGuid(),
Message = "AMQPRequestResponse",
Date = DateTimeOffset.Now,
Type = "Observer"
}).Wait();
return Task.CompletedTask;
}
public Task SendFault<T>(SendContext<T> context, Exception exception) where T : class
{
return Task.CompletedTask;
}
public Task PrePublish<T>(PublishContext<T> context) where T : class
{
return Task.CompletedTask;
}
public Task PostPublish<T>(PublishContext<T> context) where T : class
{
var proxy = new StoreProxyFactory().CreateProxy("fabric:/MessagePatterns");
proxy.AddEvent(new ConsumerEvent()
{
Id = Guid.NewGuid(),
ConsumerId = Guid.NewGuid(),
Message = "AMQPRequestResponse",
Date = DateTimeOffset.Now,
Type = "Observer"
}).Wait();
return Task.CompletedTask;
}
public Task PublishFault<T>(PublishContext<T> context, Exception exception) where T : class
{
return Task.CompletedTask;
}
}
Can anyone help?
Many thanks in advance.
The observers are only called for messages sent, published, etc. on the bus instance to which they are attached. They will not observe messages sent or received by other bus instances.
If you want to observe those messages, you could create an observer queue and bind that queue to your service exchanges so that copies of the request messages are sent to your service. The replies, however, would not be easy to get since they're sent directly to the client queues via temporary exchanges.
cfg.ReceiveEndpoint(host, "service-observer", e =>
{
e.Consumer<SomeConsumer>(...);
e.Bind("service-endpoint");
});
This will bind the service endpoint exchange to your receive endpoint queue, so that copies of the messages are sent to your consumer.
This is commonly referred to as a wire tap.
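A minimal sketch of what such a wire-tap consumer might look like (SomeRequest is a placeholder for whatever message type your services actually exchange):
using System;
using System.Threading.Tasks;
using MassTransit;

public class SomeConsumer : IConsumer<SomeRequest>
{
    public Task Consume(ConsumeContext<SomeRequest> context)
    {
        // The wire tap receives a copy of every SomeRequest sent to the bound service
        // exchange; record or inspect it here without affecting the original service.
        Console.WriteLine($"Observed {context.MessageId} sent to {context.DestinationAddress}");
        return Task.CompletedTask;
    }
}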