MassTransit error handling for batch consumer

I've configured a consumer that receives a Batch of messages, with retry and redelivery configured in case an error occurs. The batch consumer works perfectly, but I've noticed that if any message inside the batch faults, the whole batch becomes faulted.
As shown in the example below, assume Message[2] faults: it appears that the whole batch is retried/redelivered before it becomes faulted.
My query: is there any way to configure the consumer so that only the faulted message(s) inside a batch are redelivered or become faulted, while the other messages in the batch are processed normally?
public class MyConsumer : IConsumer<Batch<MyClass>>, IConsumer<Fault<MyClass>>
{
    public async Task Consume(ConsumeContext<Batch<MyClass>> context)
    {
        for (int i = 0; i < context.Message.Length; i++)
        {
            if (i == 2)
            {
                // Simulate a failure on the third message in the batch
                throw new Exception();
            }
        }
    }

    public async Task Consume(ConsumeContext<Fault<MyClass>> context)
    {
        Console.WriteLine($"Error in Context. Name :{context.Message.Message.Name}");
    }
}

When configured properly, the entire batch would be retried as a unit, which means you'd need to keep track of individual items another way. One approach is to have each item be a separate idempotent operation, but at that point the real question becomes "why are you using batch for discrete operations?"
If you're processing each message in a batch separately, just use a regular consumer and save yourself the complexity and hassle.
If you must use batch for whatever reason, consider catching the exceptions of individual messages and publishing some type of event signaling that the individual item failed.
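For example, a minimal sketch of that approach (ProcessAsync and the MyItemFailed event type are hypothetical stand-ins; Length, the Batch<T> indexer, and context.Publish are the same MassTransit members used above):

public async Task Consume(ConsumeContext<Batch<MyClass>> context)
{
    for (int i = 0; i < context.Message.Length; i++)
    {
        var item = context.Message[i];
        try
        {
            await ProcessAsync(item.Message); // hypothetical per-item handler
        }
        catch (Exception ex)
        {
            // Signal the individual failure instead of faulting the whole batch
            await context.Publish(new MyItemFailed
            {
                Name = item.Message.Name,
                Reason = ex.Message
            });
        }
    }
}

This way the batch itself always completes, and failed items can be retried or inspected via their own MyItemFailed events.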

Related

How to balance multiple message queues

I have a task that is potentially long running (hours). The task is performed by multiple workers (AWS ECS instances in my case) that read from a message queue (AWS SQS in my case), and I have multiple users adding messages to the queue. The problem: if Bob adds 5000 messages to the queue, enough to keep the workers busy for 3 days, and then Alice comes along wanting to process 5 tasks, she will need to wait 3 days before any of her tasks even starts.
I would like to feed messages to the workers from Alice and Bob at an equal rate as soon as Alice submits tasks.
I have solved this problem in another context by creating multiple queues (subqueues) for each user (or even each batch a user submits) and alternating between all subqueues when a consumer asks for the next message.
This seems, at least in my world, to be a common problem, and I'm wondering if anyone knows of an established way of solving it.
I don't see any solution with ActiveMQ. I've looked a little at Kafka with its ability to round-robin partitions in a topic, and that may work. Right now, I'm implementing something using Redis.
I would recommend Cadence Workflow instead of queues, as it supports long-running operations and state management out of the box.
In your case I would create a workflow instance per user. Every new task would be sent to the user's workflow via the signal API. The workflow instance would then queue up the received tasks and execute them one by one.
Here is an outline of the implementation:
public interface SerializedExecutionWorkflow {
    @WorkflowMethod
    void execute();

    @SignalMethod
    void addTask(Task t);
}

public interface TaskProcessorActivity {
    @ActivityMethod
    void process(Task poll);
}

public class SerializedExecutionWorkflowImpl implements SerializedExecutionWorkflow {
    private final Queue<Task> taskQueue = new ArrayDeque<>();
    private final TaskProcessorActivity processor = Workflow.newActivityStub(TaskProcessorActivity.class);

    @Override
    public void execute() {
        while (!taskQueue.isEmpty()) {
            processor.process(taskQueue.poll());
        }
    }

    @Override
    public void addTask(Task t) {
        taskQueue.add(t);
    }
}
And then the code that enqueues a task to the workflow through the signal method:
private void addTask(WorkflowClient cadenceClient, Task task) {
    // Set workflowId to userId
    WorkflowOptions options = new WorkflowOptions.Builder().setWorkflowId(task.getUserId()).build();
    // Use the workflow interface stub to start/signal the workflow instance
    SerializedExecutionWorkflow workflow = cadenceClient.newWorkflowStub(SerializedExecutionWorkflow.class, options);
    BatchRequest request = cadenceClient.newSignalWithStartRequest();
    request.add(workflow::execute);
    request.add(workflow::addTask, task);
    cadenceClient.signalWithStart(request);
}
Cadence offers a lot of other advantages over using queues for task processing:
Built-in exponential retries with an unlimited expiration interval.
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running heartbeating operations.
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA).
Complete visibility into the current state of the update. With queues, all you know is whether there are messages in a queue, and you need an additional DB to track overall progress; with Cadence, every event is recorded.
Ability to cancel an update in flight.
See the presentation that goes over the Cadence programming model.

Using Observables to process queue messages which require a callback at end of processing?

This is a bit of a conceptual question, so let me know if it's off topic.
I'm looking at writing yet another library to process messages off a queue - in this case an Azure storage queue. It's pretty easy to create an observable and throw a message into it every time a message is available.
However, there's a snag here that I'm not sure how to handle. The issue is this: when you're done processing the message, you need to call an API on the storage queue to actually delete the message. Otherwise the visibility timeout will expire and the message will reappear to be dequeued again.
As an example, here's how this loop looks in C#:
public event EventHandler<string> OnMessage;

public void Run()
{
    while (true)
    {
        // Read message
        var message = queue.GetMessage();
        if (message != null)
        {
            // Run any handlers
            OnMessage?.Invoke(this, message.AsString);
            // Delete off queue when done
            queue.DeleteMessage(message);
        }
        else
        {
            Thread.Sleep(2500);
        }
    }
}
The important thing here is that we read the message, trigger any registered event handlers to do things, then delete the message after the handlers are done. I've omitted error handling here, but in general if the handler fails we should NOT delete the message, but instead let it return to visibility automatically and get redelivered later.
How do you handle this kind of thing using Rx? Ideally I'd like to expose the observable for anyone to subscribe to. But I need to do stuff at the end of processing for that message, whatever the "end" happens to mean here.
I can think of a couple of possible solutions, but I don't really like any of them. One would be to have the library call a function supplied by the consumer that takes in the source observable, hooks up whatever it wants, and returns a new observable that the library then subscribes to for the final cleanup. But that's pretty limiting, as consumers basically get only one shot to hook up to the messages.
I guess I could put the call to delete the message after the call to OnNext, but then I don't know whether the processing succeeded or failed, unless there's some sort of back channel in that API I don't know about.
Any ideas/suggestions/previous experience here?
Try having a play with this:
IObservable<int> source =
    Observable
        .Range(0, 3)
        .Select(x =>
            Observable
                .Using(
                    () => Disposable.Create(() => Console.WriteLine($"Removing {x}")),
                    d => Observable.Return(x)))
        .Merge();

source.Subscribe(x => Console.WriteLine($"Processing {x}"));
It produces:
Processing 0
Removing 0
Processing 1
Removing 1
Processing 2
Removing 2
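Mapped onto the storage-queue scenario from the question, the same trick might look like this. It's only a sketch, reusing the GetMessage/DeleteMessage/AsString members from the question's own loop, and it omits any polling backoff:

IObservable<string> messages =
    Observable
        .Defer(() => Observable.Return(queue.GetMessage()))
        .Repeat()              // poll forever
        .Where(m => m != null) // skip empty polls
        .Select(m =>
            Observable.Using(
                () => Disposable.Create(() => queue.DeleteMessage(m)),
                _ => Observable.Return(m.AsString)))
        .Concat();             // delete runs after each message's OnNext completes

One caveat: Dispose also runs if the pipeline terminates with an error, so the "don't delete on failure" requirement from the question still needs explicit handling.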

Azure ServiceBus TopicClient SendAsync implementation in own wrapper

What is the proper implementation of the SendAsync method of the Azure ServiceBus TopicClient?
In the second implementation, will the BrokeredMessage actually be disposed before the SendAsync happens?
public async Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        await this._topicClient.Value.SendAsync(bm);
    }
}

public Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        return this._topicClient.Value.SendAsync(bm);
    }
}
I would like to get most from await/async pattern.
Answer to your question: the second approach could cause issues with disposed objects; you have to wait for SendAsync to finish before you can release its resources.
Detailed explanation.
If you await the call, execution of the method is suspended at that point and does not resume until the awaited task completes. The brokered message is kept alive in the meantime and is not disposed.
If you don't await, execution continues and all resources of the brokered message are freed (since using calls Dispose on the object at the end) before, or while, they are actually consumed. This will definitely lead to exceptions inside SendAsync, whose execution has only just started at that point.
What await does is suspend the method (without blocking the thread) until the task completes and its result is available. And that's what you actually need here. The purpose of async/await is to allow a task to execute concurrently with something else, while providing the ability to wait for the result of that concurrent operation when it is really needed and further execution isn't possible without it.
The first approach is good if every method up the call chain is async too: that is, if the caller of your SendAsync is an async Task method, and the caller of that caller, and so on up to the top-level caller.
Also, consider the exceptions that could be raised; they are listed here. As you can see, there are so-called transient errors, a kind of error that a retry can often fix. Your code has no such exception handling. An example of the retry pattern can be found here, but the article on exceptions may suggest better solutions; that is a topic for another question. I would also add some logging, to at least be aware of any non-transient exceptions.
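For example, a minimal retry sketch along those lines (assuming the older Microsoft.ServiceBus client, where MessagingException exposes IsTransient; the attempt count and delay are illustrative, and MessagingHelper/_topicClient come from the question):

public async Task SendWithRetryAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        // Create a fresh BrokeredMessage per attempt; an instance cannot be reused after a send attempt.
        using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
        {
            try
            {
                await this._topicClient.Value.SendAsync(bm);
                return;
            }
            catch (MessagingException ex) when (ex.IsTransient && attempt < maxAttempts)
            {
                // Transient error: fall through to the delay below and retry
            }
        }
        await Task.Delay(TimeSpan.FromSeconds(attempt)); // simple linear backoff
    }
}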

Not receiving Apache Camel Event Notifications under the smallest load

I have extended EventNotifierSupport and set isEnabled() to return true for all events. I have a notify() that logs which events I receive and the corresponding Exchange ID for the event.
I have added my ExchangeMessageNotifier with this.context.getManagementStrategy().addEventNotifier(this.exchangeMessageNotifier);
I run my program under basically no load, sending 1 message at a time with a 1 second delay between messages into Camel to send out. Everything works the way I expect; I receive my events and everything looks good.
I decrease the delay between messages to 0 milliseconds, and I find that for roughly 1 out of 20 messages I fail to receive one of the events (often the Completed event).
Add a second thread sending at the same rate and I don't get any events for any messages.
What am I missing? I've searched and haven't found anything that I need to do differently.
I was using Apache Camel 2.16.3, and after moving to 2.18.1 I still see the same behavior.
Well, I found my own answer. Part of the fun of inheriting code without any information.
In your implementation of EventNotifierSupport, you need to override the doStart() method and configure which events you wish to receive.
@Override
protected void doStart() throws Exception {
    // filter out unwanted events
    setIgnoreCamelContextEvents(true);
    setIgnoreServiceEvents(true);
    setIgnoreRouteEvents(true);
    setIgnoreExchangeCreatedEvent(true);
    setIgnoreExchangeCompletedEvent(false);
    setIgnoreExchangeFailedEvents(true);
    setIgnoreExchangeRedeliveryEvents(true);
    setIgnoreExchangeSentEvents(false);
}
This is in addition to doing the following:
@Override
public boolean isEnabled(EventObject event) {
    return true;
}
This lets you decide whether you want a particular event from the groups you selected in doStart().
Once these changes were in I was receiving consistent events.

Cancel running job scheduled with Hangfire.io

I schedule a job using the hangfire.io library and I can observe it being processed in the built-in dashboard. However, my system has a requirement that the job can be cancelled from the dashboard.
There is an option to delete a running job, but this only changes the state of the job in the database and does not stop the running job.
I see in the documentation there is an option to pass IJobCancellationToken; however, as I understand it, that is used to correctly stop the job when the whole server is shutting down.
Is there a way to achieve programmatic cancellation of an already running task?
Should I write my own component that would periodically poll the database and check whether the current server instance is running a job that has been cancelled? For instance, maintain a dictionary jobId -> CancellationTokenSource and then signal cancellation using the appropriate token source.
The documentation is a bit incomplete. The IJobCancellationToken.ThrowIfCancellationRequested method throws an exception when any of the following conditions is met:
Hangfire Server shutdown has been initiated. This event is triggered when someone calls the Stop or Dispose method of BackgroundJobServer.
The background job is no longer in the Processing state.
The background job is being performed by another worker.
The latter two cases are checked by querying the job storage for the current background job state. So the cancellation token will also throw if you delete or re-queue the job from the dashboard.
This works if you delete the job from the dashboard:
public static void DoWork(IJobCancellationToken cancellationToken)
{
    Debug.WriteLine("Starting Work...");
    for (int i = 0; i < 10; i++)
    {
        Debug.WriteLine("Ping");
        try
        {
            cancellationToken.ThrowIfCancellationRequested();
        }
        catch (Exception)
        {
            // Thrown when the job is deleted (or otherwise leaves the Processing state)
            Debug.WriteLine("ThrowIfCancellationRequested");
            break;
        }
        //Debug.WriteProgressBar(i);
        Thread.Sleep(5000);
    }
    Debug.WriteLine("Finished.");
}
