MassTransit saga states receiving unexpected events

I am using MassTransit.Automatonymous (version 3.3.5) to manage a saga, and I seem to be receiving unexpected events after a state has transitioned.
Here is my state setup:
Initially(
    When(Requested)
        .ThenAsync(InitialiseSaga)
        .TransitionTo(Initialising)
);

During(Initialising,
    When(InitialisationCompleted)
        .ThenAsync(FetchTraceSetMetaData)
        .TransitionTo(FetchingTraceSetMetaData)
);

During(FetchingTraceSetMetaData,
    When(TraceSetMetaDataRetrieved)
        .ThenAsync(ExtractTiffFiles)
        .TransitionTo(ExtractingTiffFiles)
);

During(ExtractingTiffFiles,
    When(TiffFilesExtracted)
        .ThenAsync(DispatchTiffParseMessages)
        .TransitionTo(DispatchingTiffParseMessages)
);
The error I sometimes receive is:
The TraceSetMetaDataRetrieved event is not handled during the ExtractingTiffFiles state for the ImportTraceSetDataStateMachine state machine
My understanding of how this should work at the point of the error is as follows:
During the FetchingTraceSetMetaData state, at some point I'll receive a TraceSetMetaDataRetrieved event. When this occurs, run the ExtractTiffFiles method, and transition to the ExtractingTiffFiles state.
Once in the ExtractingTiffFiles state, I wouldn't expect to receive the TraceSetMetaDataRetrieved event, since it's what got us into the ExtractingTiffFiles state in the first place.
The FetchTraceSetMetaData and ExtractTiffFiles methods are as follows (truncated for brevity):
public async Task FetchTraceSetMetaData(BehaviorContext<ImportTraceSetDataSagaState, InitialisationCompleteEvent> context)
{
    var traceSetId = context.Instance.TraceSetId;
    _log.Information($"Getting pixel indices for trace set with id {traceSetId}");
    // Snip...!
    await context.Publish(new TraceSetMetaDataRetrievedEvent { CorrelationId = context.Data.CorrelationId });
}

public async Task ExtractTiffFiles(BehaviorContext<ImportTraceSetDataSagaState, TraceSetMetaDataRetrievedEvent> context)
{
    _log.Information($"Extracting tiffs for {context.Instance.TiffZipFileKey} and trace set with id {context.Instance.TraceSetId}");
    // Snip...!
    // Dispatch an event to put the saga in the next state where we dispatch the parse messages
    await context.Publish(new TiffFilesExtractedEvent { CorrelationId = context.Data.CorrelationId });
}
Post posting pondering
It's just occurred to me that perhaps I should have my TransitionTo statements before my ThenAsync statements, e.g.:
During(FetchingTraceSetMetaData,
    When(TraceSetMetaDataRetrieved)
        .TransitionTo(ExtractingTiffFiles)
        .ThenAsync(ExtractTiffFiles)
);
Is that what I'm doing wrong?

Related

MassTransit: how to disconnect from RabbitMQ

I am using MassTransit with RabbitMQ. As part of a deployment procedure, at some point I need my service to disconnect and stop receiving any messages.
Assuming that I won't need the bus until the next restart of the service, will it be OK to use bus.StopAsync()?
Is there a way to get a list of endpoints and then stop them from listening?
You should StopAsync the bus, and then when ready, call StartAsync to bring it back up (or start it at the next service restart).
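For reference, a minimal sketch of that stop/start cycle (the RabbitMQ host, credentials, and endpoint configuration below are placeholders; adjust for your own setup):
IBusControl busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });
    // receive endpoints configured here...
});

await busControl.StartAsync();  // connect and begin consuming

// ... when the deployment window starts:
await busControl.StopAsync();   // stops the receive endpoints; no further messages are consumed

// ... when the service should resume (or simply start again at the next service restart):
await busControl.StartAsync();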
To stop receiving messages without stopping the bus, I needed a solution that would prevent the consume message pipeline from consuming any type of message. I tried observers, but without success. My solution ended up being a custom consume message filter.
The filter part looks like this:
public class ComsumersBlockingFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    public void Probe(ProbeContext context)
    {
        var scope = context.CreateFilterScope("messageFilter");
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Check if the service is degraded (true for this demo)
        var isServiceDegraded = true;
        if (isServiceDegraded)
        {
            // Suspend the message for 5 seconds
            await Task.Delay(TimeSpan.FromMilliseconds(5000), context.CancellationToken);
            if (!context.CancellationToken.IsCancellationRequested)
            {
                // Republish the message
                await context.Publish(context.Message);
                Console.WriteLine($"Message {context.MessageId} has been republished");
            }
            // NotifyConsumed to avoid a skipped message
            await context.NotifyConsumed(TimeSpan.Zero, "messageFilter");
        }
        else
        {
            // The next filter in the pipe is called
            await next.Send(context);
        }
    }
}
The main idea is to delay with the cancellation token and then republish the message. After that, call context.NotifyConsumed to avoid the remaining pipeline filters and return normally.

receiving data from 1st error but not subsequent ones in Subject

I am facing a peculiar issue. My component subscribes to a Subject. I receive data from the associated observable the first time error is called, but not on subsequent calls. I have not unsubscribed.
I sign up with the details of an existing user. I get an error the first time (the right behaviour). Then I click the sign-up button again to send the same request, but this time the component doesn't receive the message from error.
Interestingly, if I use next instead of error then the code works fine. Does an observable stop working once error is called?
The code snippets
component.ts
ngOnInit() {
    this.userSignupSubscription = this.subscribeToSignupAttempt();
    this.createForm();
}

subscribeToSignupAttempt() {
    return this.userManagementService.userSignUpState$.subscribe(
        (res: Result) => { console.log('signup response ', res); this.handleSignupResponse(res); },
        (error: Result) => { console.log('signup error response ', error); this.handleSignupErrorResponse(error); },
    );
}
The backend service sending the data is:
@Injectable()
export class UserManagementService {
    ...
    private signUpStateSubject: Subject<Result>;
    public userSignUpState$: Observable<Result>; // naming convention: stream names end with $.

    constructor() {
        this.signUpStateSubject = new Subject<Result>();
        this.userSignUpState$ = this.signUpStateSubject.asObservable();
    }

    addUser() {
        ...
        (error: ServerResponseAPI) => { // this code sends the message
            console.log("got error from the Observable: ", error);
            const errorMessage: string = this.helper.userFriendlyErrorMessage(error);
            this.signUpStateSubject.error(new Result(errorMessage, error['additional-info'])); // change this to .next and the code works
        }
    }
It seems that I should use next instead of error, because that is how RxJS works; it is part of the Observable contract. error means that the stream has terminated and is no longer operational, and a new one would need to be created.
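The Observer contract is the same across Rx implementations. As a language-agnostic illustration, here is a minimal sketch using a Subject in Rx.NET (C#, System.Reactive); the RxJS Subject in the question follows the same rule:
// Illustration of the Rx grammar: once error is signalled, the sequence is over.
using System;
using System.Reactive.Subjects;

class Program
{
    static void Main()
    {
        var subject = new Subject<string>();

        subject.Subscribe(
            value => Console.WriteLine($"next: {value}"),
            ex => Console.WriteLine($"error: {ex.Message}"));

        subject.OnNext("first");                          // delivered
        subject.OnError(new Exception("sign-up failed")); // delivered, terminates the subject
        subject.OnNext("second");                         // ignored: the sequence has terminated
    }
}
After the error notification the subject is terminated: later next calls are ignored and any new subscriber immediately receives the error. To keep reporting recoverable failures on the same stream, emit them with next (for example as a Result carrying an error flag) rather than error.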

Can a TPL Dataflow ActionBlock be Reset after a Fault?

I have a TPL Dataflow ActionBlock that I'm using to receive trigger messages for a camera and then do some processing. If the processing task throws an exception, the ActionBlock enters the faulted state. I would like to send a faulted message to my UI and send a reset message to the ActionBlock so it can continue processing incoming trigger messages. Is there a way to return the ActionBlock to a ready state (clear the fault)?
Code for the curious:
using System.Threading.Tasks.Dataflow;

namespace Anonymous
{
    /// <summary>
    /// Provides a messaging system between objects that inherit from Actor
    /// </summary>
    public abstract class Actor
    {
        // The Actor uses an ActionBlock from the DataFlow library. An ActionBlock has an input queue you can
        // post messages to and an action that will be invoked for each received message.
        // The ActionBlock handles all of the threading issues internally so that we don't need to deal with
        // threads or tasks. Thread-safety comes from the fact that ActionBlocks are serialized by default.
        // If you send two messages to it at the same time it will buffer the second message until the first
        // has been processed.
        private ActionBlock<Message> _action;

        ...Properties omitted for brevity...

        public Actor(string name, int id)
        {
            _name = name;
            _id = id;
            CreateActionBlock();
        }

        private void CreateActionBlock()
        {
            // We create an action that will convert the actor and the message to dynamic objects
            // and then call the HandleMessage method. This means that the runtime will look up
            // a method called 'HandleMessage' with a parameter of the message type and call it.
            // In TPL Dataflow, if an exception goes unhandled during the processing of a message
            // (HandleMessage), the exception will fault the block's Completion task.
            // Dynamic objects expose members such as properties and methods at run time, instead
            // of at compile time. This enables you to create objects to work with structures that
            // do not match a static type or format.
            _action = new ActionBlock<Message>(message =>
            {
                dynamic self = this;
                dynamic msg = message;
                self.HandleMessage(msg); // implement HandleMessage in the derived class
            }, new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = 1 // This specifies a maximum degree of parallelism of 1.
                                           // This causes the dataflow block to process messages serially.
            });
        }

        /// <summary>
        /// Send a message to an internal ActionBlock for processing
        /// </summary>
        /// <param name="message"></param>
        public async void SendMessage(Message message)
        {
            if (message.Source == null)
                throw new Exception("Message source cannot be null.");
            try
            {
                _action.Post(message);
                await _action.Completion;
                message = null;
                // In TPL Dataflow, if an exception goes unhandled during the processing of a message,
                // the exception will fault the block's Completion task.
            }
            catch (Exception ex)
            {
                _action.Completion.Dispose();
                //throw new Exception("ActionBlock for " + _name + " failed.", ex);
                Trace.WriteLine("ActionBlock for " + _name + " failed." + ExceptionExtensions.GetFullMessage(ex));
                if (_action.Completion.IsFaulted)
                {
                    _isFaulted = true;
                    _faultReason = _name + " ActionBlock encountered an exception while processing task: " + ex.ToString();
                    FaultMessage msg = new FaultMessage { Source = _name, FaultReason = _faultReason, IsFaulted = _isFaulted };
                    OnFaulted(msg);
                    CreateActionBlock();
                }
            }
        }

        public event EventHandler<FaultMessageEventArgs> Faulted;

        public void OnFaulted(FaultMessage message)
        {
            Faulted?.Invoke(this, new FaultMessageEventArgs { Message = message.Copy() });
            message = null;
        }

        /// <summary>
        /// Use to await the message processing result
        /// </summary>
        public Task Completion
        {
            get
            {
                _action.Complete();
                return _action.Completion;
            }
        }
    }
}
An unhandled exception in an ActionBlock is like an unhandled exception in an application. Don't do this. Handle the exception appropriately.
In the simplest case, log it or do something inside the block's delegate. In more complex scenarios you can use a TransformBlock instead of an ActionBlock and send Success or Failure messages to downstream blocks.
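To make that concrete, here is a hedged sketch of the TransformBlock approach; ProcessingResult, FaultTolerantPipeline, and the process/notifyUi delegates are illustrative names (only the Message type comes from the question's code):
using System;
using System.Threading.Tasks.Dataflow;

// Result type carrying the processed message and, on failure, the exception.
public record ProcessingResult(Message Message, Exception Error = null)
{
    public bool Succeeded => Error == null;
}

public class FaultTolerantPipeline
{
    public TransformBlock<Message, ProcessingResult> Processor { get; }
    public ActionBlock<ProcessingResult> Reporter { get; }

    public FaultTolerantPipeline(Action<Message> process, Action<ProcessingResult> notifyUi)
    {
        // Catch exceptions inside the delegate so the block itself never faults.
        Processor = new TransformBlock<Message, ProcessingResult>(message =>
        {
            try
            {
                process(message);
                return new ProcessingResult(message);
            }
            catch (Exception exc)
            {
                return new ProcessingResult(message, exc);
            }
        });

        // The downstream block reports failures (e.g. raises the UI fault event)
        // while the pipeline keeps accepting new trigger messages.
        Reporter = new ActionBlock<ProcessingResult>(result =>
        {
            if (!result.Succeeded)
                notifyUi(result);
        });

        Processor.LinkTo(Reporter, new DataflowLinkOptions { PropagateCompletion = true });
    }
}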
The code you posted though has some critical issues. Dataflow blocks aren't agents and agents aren't dataflow blocks. You can use the one to build the other of course, but they represent different paradigms. In this case your Actor emulates ActionBlock's own API with several bugs.
For example, you don't need to create a SendAsync; blocks already have one. You should not complete the block once you send a message, as you won't be able to handle any other messages; only call Complete() when you really don't want to use the ActionBlock any more. You also don't need to set a DOP of 1, as that's the default value.
You can set bounds on a dataflow block so that it accepts only, e.g., 10 messages at a time. Otherwise, all messages would be buffered until the block gets the chance to process them.
You could replace all of this code with the following:
void MyMethod(MyMessage message)
{
    try
    {
        //...
    }
    catch (Exception exc)
    {
        // ToString logs the complete exception, no need for anything more
        _log.Error(exc.ToString());
    }
}

var blockOptions = new ExecutionDataflowBlockOptions
{
    BoundedCapacity = 10,
    NameFormat = "Block for MyMessage {0} {1}"
};

var block = new ActionBlock<MyMessage>(MyMethod, blockOptions);

for (int i = 0; i < 10000; i++)
{
    // Will await if more than 10 messages are waiting
    await block.SendAsync(new MyMessage(i));
}

block.Complete();
// Await until all leftover messages are processed
await block.Completion;
Notice the call to Exception.ToString(). This will generate a string containing all exception information, including the call stack.
NameFormat allows you to specify a name template for a block that can be filled by the runtime with the block's internal name and task ID.

RxJS: Auto (dis)connect on (un)subscribe with Websockets and Stomp

I'm building a little RxJS wrapper for Stomp over WebSockets, which already works.
But now I have an idea for a really cool feature that may (hopefully; correct me if I'm wrong) be easily done using RxJS.
Current behavior:
myStompWrapper.configure("/stomp_endpoint");
myStompWrapper.connect(); // onSuccess: set state to CONNECTED
// state (Observable) can be DISCONNECTED or CONNECTED
var subscription = myStompWrapper.getState()
.filter(state => state == "CONNECTED")
.flatMap(myStompWrapper.subscribeDestination("/foo"))
.subscribe(msg => console.log(msg));
// ... and some time later:
subscription.unsubscribe(); // calls 'unsubscribe' for this stomp destination
myStompWrapper.disconnect(); // disconnects the stomp websocket connection
As you can see, I must wait for state == "CONNECTED" in order to subscribe via subscribeDestination(..). Otherwise I'd get an error from the Stomp library.
The new behavior:
The next implementation should make things easier for the user. Here's what I imagine:
myStompWrapper.configure("/stomp_endpoint");
var subscription = myStompWrapper.subscribeDestination("/foo")
.subscribe(msg => console.log(msg));
// ... and some time later:
subscription.unsubscribe();
How it should work internally:
configure can only be called while DISCONNECTED
when subscribeDestination is called, there are 2 possibilities:
    if CONNECTED: just subscribe to the destination
    if DISCONNECTED: first call connect(), then subscribe to the destination
when unsubscribe is called, there are 2 possibilities:
    if this was the last subscription: call disconnect()
    if this wasn't the last subscription: do nothing
I'm not yet sure how to get there, but that's why I ask this question here ;-)
Thanks in advance!
EDIT: more code, examples and explanations
When configure() is called while not disconnected, it should throw an error. But that's not a big deal.
stompClient.connect(..) is non-blocking. It has an onSuccess callback:
public connect() {
    stompClient.connect({}, this.onSuccess, this.errorHandler);
}

public onSuccess = () => {
    this.state.next(State.CONNECTED);
}
observeDestination(..) subscribes to a Stomp message channel (= destination) and returns an Rx.Observable which can then be used to unsubscribe from that Stomp message channel:
public observeDestination(destination: string) {
    return this.state
        .filter(state => state == State.CONNECTED)
        .flatMap(_ => Rx.Observable.create(observer => {
            let stompSubscription = this.client.subscribe(
                destination,
                message => observer.next(message),
                {}
            );
            return () => {
                stompSubscription.unsubscribe();
            }
        }));
}
It can be used like this:
myStompWrapper.configure("/stomp_endpoint");
myStompWrapper.connect();
myStompWrapper.observeDestination("/foo")
.subscribe(..);
myStompWrapper.observeDestination("/bar")
.subscribe(..);
Now I'd like to get rid of myStompWrapper.connect(). The code should automatically call this.connect() when the first subscriber subscribes via observeDestination(..).subscribe(..), and it should call this.disconnect() when the last subscriber unsubscribes.
Example:
myStompWrapper.configure("/stomp_endpoint");
let subscription1 = myStompWrapper.observeDestination("/foo")
.subscribe(..); // execute connect(), because this
// is the first subscription
let subscription2 = myStompWrapper.observeDestination("/bar")
.subscribe(..);
subscription2.unsubscribe();
subscription1.unsubscribe(); // execute disconnect(), because this
// was the last subscription
I agree the code you are suggesting to tuck away into myStompWrapper will be happier in its new home.
I would still suggest using a name like observeDestination rather than subscribeDestination("/foo"), as you are not actually subscribing in that method but rather just completing your observable chain.
configure() can only be called while DISCONNECTED
You do not specify here what should happen if it is called while not DISCONNECTED. As you do not seem to be returning any value here that you would use, I will assume that you intend to throw an exception if the status is inconvenient. To keep track of such statuses, I would use a BehaviorSubject that starts with the initial value of DISCONNECTED. You will likely want to keep state within observeDestination to decide whether to throw an exception, though.
if CONNECTED: just subscribe to the destination
if DISCONNECTED: first call connect(), then subscribe to the destination
As I mentioned before, I think you will be happier if the subscription does not happen within subscribeDestination("/foo"), but rather that you just build your observable chain there. As you simply want to call connect() in some cases, I would use a .do() call within your observable chain that contains a condition on the state.
To make use of the Rx-y logic, you likely want to call disconnect() as part of your observable's unsubscribe, and simply return a shared, refcounted observable to start with. This way, each new subscriber does not recreate a new subscription; instead, .refCount() will make a single subscription to the observable chain and unsubscribe once there are no more subscribers downstream.
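As a language-agnostic illustration of that refcount behaviour (sketched here with Rx.NET in C#; the idea is the same as RxJS's .publish().refCount() or .share(), and the Console.WriteLine calls stand in for the Stomp client):
using System;
using System.Reactive.Linq;

var source = Observable.Create<string>(observer =>
{
    Console.WriteLine("connect()");                 // runs when the first subscriber arrives
    var ticks = Observable.Interval(TimeSpan.FromSeconds(1))
        .Select(i => $"message {i}")
        .Subscribe(observer);

    return () =>
    {
        Console.WriteLine("disconnect()");          // runs when the last subscriber leaves
        ticks.Dispose();
    };
})
.Publish()
.RefCount();

var sub1 = source.Subscribe(m => Console.WriteLine($"sub1: {m}")); // prints "connect()"
var sub2 = source.Subscribe(m => Console.WriteLine($"sub2: {m}")); // no second connect
sub2.Dispose();
sub1.Dispose();                                                    // prints "disconnect()"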
Assuming the messages are coming in as this.observedData$ in myStompWrapper, my suggested code as part of myStompWrapper would look something like this:
observeDestination() {
    return Rx.Observable.create(observer => {
        var subscription = this.getState()
            .do(state => { if (state == "DISCONNECTED") this.connect(); })
            .filter(state => state == "CONNECTED")
            .switchMap(() => this.observedData$)
            .publish()
            .refCount()
            .subscribe(
                value => {
                    try {
                        observer.next(someCallback(value));
                    } catch (err) {
                        observer.error(err);
                    }
                },
                err => observer.error(err),
                () => observer.complete());

        return () => {
            this.disconnect();
            subscription.unsubscribe();
        };
    });
}
Because I am missing some of your code, I am allowing myself not to test this. But hopefully it illustrates the concepts I mentioned in my answer.

Handling transition to state for multiple events

I have a MassTransitStateMachine that orchestrates a process which involves creating multiple events.
Once all of the events are done, I want the state to transition to a 'clean up' phase.
Here is the relevant state declaration and filter function:
During(ImportingData,
    When(DataImported)
        // When we get a data imported event, mark this source as done.
        .Then(MarkImportCompletedForLocation),
    When(DataImported, IsAllDataImported)
        // Once all are done, we can transition to cleaning up...
        .Then(CleanUpSources)
        .TransitionTo(CleaningUp)
);
...snip...
private static bool IsAllDataImported(EventContext<DataImportSagaState, DataImportMappingCompletedEvent> ctx)
{
    return ctx.Instance.Locations.Values.All(x => x);
}
So while the state is ImportingData, I expect to receive multiple DataImported events. Each event marks its location as done so that the IsAllDataImported method can determine whether we should transition to the next state.
However, if the last two DataImported events arrive at the same time, the handler for transitioning to the CleaningUp phase fires twice, and I end up trying to perform the clean up twice.
I could solve this in my own code, but I was expecting the state machine to manage this. Am I doing something wrong, or do I just need to handle the contention myself?
The solution proposed by Chris won't work in my situation because I have multiple events of the same type arriving. I need to transition only when all of those events have arrived. The CompositeEvent construct doesn't work for this use case.
My solution was to raise a new AllDataImported event from within the MarkImportCompletedForLocation method. This method now determines, in a thread-safe way, whether all sub-imports are complete.
So my state machine definition is:
During(ImportingData,
    When(DataImported)
        // When we get a data imported event, mark the URI in the locations list as done.
        .Then(MarkImportCompletedForLocation),
    When(AllDataImported)
        // Once all are done, we can transition to cleaning up...
        .TransitionTo(CleaningUp)
        .Then(CleanUpSources)
);
The IsAllDataImported method is no longer needed as a filter.
The saga state has a Locations property:
public Dictionary<Uri, bool> Locations { get; set; }
And the MarkImportCompletedForLocation method is defined as follows:
private void MarkImportCompletedForLocation(BehaviorContext<DataImportSagaState, DataImportedEvent> ctx)
{
    lock (ctx.Instance.Locations)
    {
        ctx.Instance.Locations[ctx.Data.ImportSource] = true;
        if (ctx.Instance.Locations.Values.All(x => x))
        {
            var allDataImported = new AllDataImportedEvent { CorrelationId = ctx.Instance.CorrelationId };
            this.CreateEventLift(AllDataImported).Raise(ctx.Instance, allDataImported);
        }
    }
}
(I've just written this so that I understand how the general flow will work; I recognise that the MarkImportCompletedForLocation method needs to be more defensive by verifying that keys exist in the dictionary.)
You can use a composite event to accumulate multiple events into a subsequent event that fires when the dependent events have fired. This is defined using:
CompositeEvent(() => AllDataImported, x => x.ImportStatus, DataImported, MoreDataImported);

During(ImportingData,
    When(DataImported)
        .Then(context => { /* do something with the data */ }),
    When(MoreDataImported)
        .Then(context => { /* do something with more data */ }),
    When(AllDataImported)
        .Then(context => { /* okay, have all the data now */ }));
Then, in your state machine state instance:
class DataImportSagaState :
    SagaStateMachineInstance
{
    public int ImportStatus { get; set; }
}
This should address the problem you are trying to solve, so give it a shot. Note that event order doesn't matter, they can arrive in any order as the state of which events have been received is in the ImportStatus property of the instance.
The data of the individual events is not saved, so you'll need to capture that into the state instance yourself using .Then() methods.
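For example, a hedged fragment of the state machine definition showing per-event data being captured inside .Then(); the Payload, ImportedData, and MoreImportedData names are illustrative, not from the original code:
During(ImportingData,
    When(DataImported)
        .Then(context => context.Instance.ImportedData = context.Data.Payload),
    When(MoreDataImported)
        .Then(context => context.Instance.MoreImportedData = context.Data.Payload),
    When(AllDataImported)
        .TransitionTo(CleaningUp));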
