MassTransit: How to configure retry policy for Send/Publish

I am using MassTransit with Azure Service Bus and would like to configure a retry policy for Send/Publish.
The way I did it is:
```
private void ConfigureUsingAzureServiceBus(IServiceCollectionConfigurator x)
{
    x.AddBus(provider => Bus.Factory.CreateUsingAzureServiceBus(cfg =>
    {
        cfg.ConfigurePublish(c =>
        {
            c.UseRetry(rc => rc.Interval(90, TimeSpan.FromSeconds(2)));
        });

        cfg.ConfigureSend(c =>
        {
            c.UseRetry(rc => rc.Interval(90, TimeSpan.FromSeconds(2)));
        });
    }));
}
```
I am not sure if this is the right way, because I sometimes get a Microsoft.Azure.ServiceBus.ServiceBusException and my message is not sent to the bus.

MassTransit does not support a retry policy for Publish/Send.
For Azure Service Bus, the transport relies on the retry policy built into the Azure Service Bus .NET client library under the hood. If the exception ultimately surfaces, it's because the client library gave up.
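If you need resilience beyond what the client library provides, one option is to retry the Publish call in your own code. This is not a MassTransit feature, just a hypothetical sketch that assumes the exception type from the question (Microsoft.Azure.ServiceBus.ServiceBusException) is what surfaces when the client library gives up:
```
// Hypothetical helper, not part of MassTransit: retries a Publish that failed
// after the Azure Service Bus client library itself gave up.
using System;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Azure.ServiceBus;

public static class PublishRetry
{
    public static async Task PublishWithRetry<T>(IBus bus, T message, int attempts = 3)
        where T : class
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await bus.Publish(message);
                return;
            }
            catch (ServiceBusException) when (attempt < attempts)
            {
                // Brief back-off before the next attempt.
                await Task.Delay(TimeSpan.FromSeconds(2));
            }
        }
    }
}
```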

Related

MassTransit endpoint name is ignored in ConsumerDefinition

The EndpointName property in a ConsumerDefinition class seems to be ignored by MassTransit. I know the ConsumerDefinition is being used, because the retry logic it configures works. How do I get different commands to go to different queues? I can get them all to go through one central queue, but I don't think that is best practice for commands.
Here is my app configuration that executes on startup when creating the MassTransit bus.
```
Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        host.SharedAccessSignature(s =>
        {
            s.KeyName = _config.KeyName;
            s.SharedAccessKey = _config.SharedAccessKey;
            s.TokenTimeToLive = TimeSpan.FromDays(1);
            s.TokenScope = TokenScope.Namespace;
        });
    });

    cfg.ReceiveEndpoint("publish", ec =>
    {
        // this is done to register all consumers in the assembly and to use their definition files
        ec.ConfigureConsumers(provider);
    });
});
```
And here is my handler definition in the consumer (an Azure worker service):
```
public class CreateAccessPointCommandHandlerDef : ConsumerDefinition<CreateAccessPointCommandHandler>
{
    public CreateAccessPointCommandHandlerDef()
    {
        EndpointName = "specific";
        ConcurrentMessageLimit = 4;
    }

    protected override void ConfigureConsumer(
        IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<CreateAccessPointCommandHandler> consumerConfigurator)
    {
        endpointConfigurator.UseMessageRetry(r =>
        {
            r.Immediate(2);
        });
    }
}
```
In the app that sends the message, I have to configure it to send to the "publish" queue, not "specific":
```
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:specific")); // does not work
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:publish")); // this does work
```
Because you are configuring the receive endpoint yourself and giving it the name publish, that is the receive endpoint your consumers are attached to, and the EndpointName from the definition is not used.
To configure the endpoints using the definitions, use:
```
cfg.ConfigureEndpoints(provider);
```
This will use the definitions that were registered in the container to configure the receive endpoints, using the consumer endpoint name defined.
This is also explained in the documentation.
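For reference, a minimal sketch of the bus configuration with ConfigureEndpoints in place of the hand-rolled "publish" endpoint (the host setup is elided and stays the same as in the question; consumers and their definitions are assumed to be registered in the container):
```
Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        // same SAS configuration as in the question
    });

    // Creates one receive endpoint per registered consumer definition,
    // using the EndpointName from each definition (e.g. "specific").
    cfg.ConfigureEndpoints(provider);
});
```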

Dynamics 365 SDK throws exception "The Security Support Provider Interface (SSPI) negotiation failed"

I'm connecting to a Dynamics 365 v9.0 on-premises organization across Active Directory domains through the Microsoft.Xrm.Sdk + Microsoft.Pfe.Xrm.Core NuGet packages to trigger SDK requests. Sometimes I get an exception back: The Security Support Provider Interface (SSPI) negotiation failed.
My machine and the Dynamics server are located in different domains. Fiddler traces show that both machines are accessible in the network.
The exception is thrown in the PFE library, specifically at the operation() line below.
```
Parallel.ForEach<TRequest, ParallelOrganizationOperationContext<TRequest, bool>>(requests,
    new ParallelOptions() { MaxDegreeOfParallelism = this.MaxDegreeOfParallelism },
    () => new ParallelOrganizationOperationContext<TRequest, bool>(),
    (request, loopState, index, context) =>
    {
        try
        {
            operation(request, threadLocalProxy.Value);
        }
        catch (FaultException<OrganizationServiceFault> fault)
        {
            // Track faults locally
            if (errorHandler != null)
            {
                context.Failures.Add(new ParallelOrganizationOperationFailure<TRequest>(request, fault));
            }
            else
            {
                throw;
            }
        }

        return context;
    },
    (context) =>
    {
        // Join faults together
        Array.ForEach(context.Failures.ToArray(), f => allFailures.Add(f));
    });
```
Source: https://github.com/seanmcne/XrmCoreLibrary/blob/8892a9e93c42d8c35aac2a212588d45359cfd1a2/v8/Client/ParallelServiceProxy.cs#L236
Sandrino Di Mattia provided a workaround in the Early binding tips and tricks for Dynamics CRM 2011 article:
If you're working with a virtual machine that is part of another domain you might get this error (cross-domain call). To solve this you'll need to change the way you pass the authentication arguments to CrmSvcUtil.exe. Instead of calling CrmSvcUtil.exe using the following line:
```
CrmSvcUtil.exe /url:"http://srv/org/XRMServices/2011/Organization.svc" /out:Context.cs
    /username:"sandrino" /password:"pass" /domain:"somedomain" /serviceContextName:Context
```
Change it to the following:
```
CrmSvcUtil.exe /url:"http://srv/org/XRMServices/2011/Organization.svc" /out:Context.cs
    /username:"sandrino#somedomain" /password:"pass" /serviceContextName:Context
```
By removing the domain argument and appending the domain to the username (separated with the # sign) you'll solve the cross-domain problem.
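The quoted workaround targets CrmSvcUtil.exe. Applying the same idea at runtime is a guess on my part, but it would roughly mean passing the domain explicitly with the credentials instead of relying on SSPI to work it out across domains. A hypothetical sketch using OrganizationServiceProxy directly (user, password, domain, and URL are placeholders):
```
// Hypothetical sketch: supply the domain explicitly on the runtime connection
// rather than relying on cross-domain SSPI negotiation.
using System;
using System.Net;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk.Client;

var credentials = new ClientCredentials();
credentials.Windows.ClientCredential = new NetworkCredential("sandrino", "pass", "somedomain");
// Alternatively, mirror the CrmSvcUtil trick: user name "sandrino#somedomain" with no domain.

using (var proxy = new OrganizationServiceProxy(
    new Uri("https://srv/org/XRMServices/2011/Organization.svc"),
    null,          // home realm URI
    credentials,
    null))         // device credentials
{
    proxy.EnableProxyTypes();
    // hand this proxy (or a factory for it) to the PFE ParallelServiceProxy helpers
}
```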

Service Fabric https endpoint with kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint, following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server. I get:
- Browser: HTTP ERROR 504
- VM event viewer: HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
- SF event table: Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
```
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
```
ServiceManifest:
```
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
```
Startup:
```
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
```
plus:
```
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
```
Here is what I have in Azure (deployed through an ARM template).
Health probes:

| Name | Protocol | Port | Used by |
| --- | --- | --- | --- |
| AppPortProbe | TCP | 44338 | AppPortLBRule |
| FabricGatewayProbe | TCP | 19000 | LBRule |
| FabricHttpGatewayProbe | TCP | 19080 | LBHttpRule |
| SFReverseProxyProbe | TCP | 19081 | LBSFReverseProxyRule |

Load balancing rules:

| Name | Load balancing rule | Backend pool | Health probe |
| --- | --- | --- | --- |
| AppPortLBRule | AppPortLBRule (TCP/44338) | LoadBalancerBEAddressPool | AppPortProbe |
| LBHttpRule | LBHttpRule (TCP/19080) | LoadBalancerBEAddressPool | FabricHttpGatewayProbe |
| LBRule | LBRule (TCP/19000) | LoadBalancerBEAddressPool | FabricGatewayProbe |
| LBSFReverseProxyRule | LBSFReverseProxyRule (TCP/19081) | LoadBalancerBEAddressPool | SFReverseProxyProbe |
I have a cluster certificate and a reverse proxy certificate, and authentication to the API goes through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
I'm not sure what else could be relevant; any ideas or suggestions are very welcome.
Edit: code for GetCertificate():
```
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
```
Digging into your code, I've realized there is nothing wrong with it except one thing. Since you use Kestrel, you don't need to set up anything extra in the AppManifest; those settings are for the Http.Sys implementation. You don't even need an endpoint in the ServiceManifest (although it is recommended), because those settings are about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from being the recommended approach, which lets you accept both IPv4 and IPv6 connections, it also results in a 'correct' endpoint registration in Service Fabric. When you use IPAddress.Any, Service Fabric registers an endpoint like https://0.0.0.0:44338, and that is the address the reverse proxy will try to reach the service on, which obviously won't work: 0.0.0.0 doesn't correspond to any particular IP, it's just a way of saying 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address that can be resolved from within the VNet. You can see this for yourself in Service Fabric Explorer if you go down to the endpoint section in the service instance blade.
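Applied to the listener snippet from the question, the change amounts to swapping the address; everything else stays the same (a sketch):
```
.UseKestrel(options =>
{
    // IPv6Any accepts both IPv4 and IPv6 connections and lets Service Fabric
    // register a resolvable endpoint instead of https://0.0.0.0:44338.
    options.Listen(IPAddress.IPv6Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
```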

How to use masstransit's retry policy with sagas?

Configuring a retry policy inside the ReceiveEndpoint of a queue used to store messages from commands and events (as shown below) does not seem to work when the queue is a saga endpoint queue.
This configuration works fine (note the endpoint RegisterOrderServiceQueue):
```
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.RegisterOrderServiceQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
```
But the same retry policy configuration in the Windows service that runs the saga state machine does not work (note the endpoint SagaQueue):
```
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.SagaQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
```
StateMachine class source code that throws an ArgumentException:
```
...
During(Registered,
    When(ApproveOrder)
        .Then(context =>
        {
            throw new ArgumentException("Test for monitoring sagas");
            context.Instance.EstimatedTime = context.Data.EstimatedTime;
            context.Instance.Status = context.Data.Status;
        })
        .TransitionTo(Approved),
...
```
But when ApproveOrder occurs, the retry policy rules are ignored. Connecting a ConsumeObserver to the bus the saga is connected to, I see the ConsumeFault method executed 5 times (which is MassTransit's default behavior).
Should this work? Is there any misconception in my configuration?

Subscribing to a removed queue with spring-websocket and RabbitMQ broker (Queue NOT_FOUND)

I have a spring-websocket (4.1.6) application on Tomcat8 that uses a STOMP RabbitMQ (3.4.4) message broker for messaging. When a client (Chrome 47) starts the application, it subscribes to an endpoint creating a durable queue. When this client unsubscribes from the endpoint, the queue will be cleaned up by RabbitMQ after 30 seconds as defined in a custom made RabbitMQ policy. When I try to reconnect to an endpoint that has a queue that was cleaned up, I receive the following exception in the RabbitMQ logs: "NOT_FOUND - no queue 'position-updates-user9zm_szz9' in vhost '/'\n". I don't want to use an auto-delete queue since I have some reconnect logic in case the websocket connection dies.
This problem can be reproduced by adding the following code to the spring-websocket-portfolio github example.
In the container div in index.html, add:
```
<button class="btn" onclick="appModel.subscribe()">SUBSCRIBE</button>
<button class="btn" onclick="appModel.unsubscribe()">UNSUBSCRIBE</button>
```
In portfolio.js, replace:
```
stompClient.subscribe("/user/queue/position-updates", function(message) {
```
with:
```
positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
```
and also add the following:
```
self.unsubscribe = function() {
    positionUpdates.unsubscribe();
}

self.subscribe = function() {
    positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
        self.pushNotification("Position update " + message.body);
        self.portfolio().updatePosition(JSON.parse(message.body));
    });
}
```
Now you can reproduce the problem by:
1. Launch the application.
2. Click unsubscribe.
3. Delete the position-updates queue in the RabbitMQ console.
4. Click subscribe.
5. Find the error message in the websocket frame via the Chrome devtools and in the RabbitMQ logs.
The "reconnect logic in case the websocket connection dies" and the "no queue 'position-updates-user9zm_szz9' in vhost" error are two entirely different stories.
I'd suggest you implement "re-subscribe" logic for the case of a deleted queue.
Actually, that is how STOMP works: it creates an auto-deleted (generated) queue for the subscription and yes, it is removed on unsubscribe.
See more info in the RabbitMQ STOMP Adapter Manual.
On the other hand, consider subscribing to an existing AMQP queue:
"To address existing queues created outside the STOMP adapter, destinations of the form /amq/queue/<name> can be used."
The problem is that STOMP won't recreate the queue if it gets deleted by the RabbitMQ policy. I worked around it by creating the queue myself when the SessionSubscribeEvent is fired.
```
public void onApplicationEvent(AbstractSubProtocolEvent event) {
    if (event instanceof SessionSubscribeEvent) {
        MultiValueMap nativeHeaders = (MultiValueMap) event.getMessage().getHeaders().get("nativeHeaders");
        List destination = (List) nativeHeaders.get("destination");
        String queueName = ((String) destination.get(0)).substring("/queue/".length());
        try {
            Connection connection = connectionFactory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(queueName, true, false, false, null);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
