MassTransit 3.2.1 - Validation

I want to validate an incoming message using FluentValidation, and if validation fails, the consumer should return immediately. I looked into http://docs.masstransit-project.com/en/latest/usage/observers.html, and I like the idea of
public class ConsumeObserver : IConsumeObserver
{
    Task IConsumeObserver.PreConsume<T>(ConsumeContext<T> context)
    {
        // 1. Validate here.
        // 2. If validation succeeds, go on to the consumer.
        // 3. If it fails, exit with the result of the validation and don't go through the consumer.
        return Task.CompletedTask;
    }

    Task IConsumeObserver.PostConsume<T>(ConsumeContext<T> context)
    {
        return Task.CompletedTask;
    }

    Task IConsumeObserver.ConsumeFault<T>(ConsumeContext<T> context, Exception exception)
    {
        return Task.CompletedTask;
    }
}
because I get the message already deserialized, so it is easy to use the validator. The problem is that I don't know how to return without going through the consumer while at the same time keeping the validation errors.
Thank you.

Observers typically watch rather than take action, and that's the approach taken with observers in MassTransit. While you could throw an exception from the PreConsume method, which would cause the message to either be retried or be sent to the error queue, it's not the most obvious behavior to developers down the road who may not understand why the message is failing.
Another approach would be to create a middleware component that can validate the message and, if it's not valid, perform a specific action on it (such as moving it to an invalid queue, dumping it to a log, or whatever) so that the message is removed from the queue. It's important to understand how this might impact the message producer.
For instance, if it was a request message, and the sender is waiting on a response, discarding the message means that no response will be received. The default behavior of a consumer that throws an exception is to propagate the fault back to the requestor, completing the cycle, so keep that in mind.
Another option is to just add the validation behavior to the consumer, using either an injected validation interface or validating within the consumer itself. That way, the handling of the message is close to the consumer, which improves code cohesion and makes it easy to see what is happening.
Ideally, validating at the message producer is the best option, to avoid flooding the queue with invalid messages in the first place.
So, several choices; your requirements will dictate which makes the most sense.

Related

Splitting SQS Lambda batch into partial success/partial failure

The AWS SQS -> Lambda integration allows you to process incoming messages in a batch, where you configure the maximum number you can receive in a single batch. If you throw an exception during processing to indicate failure, none of the messages are deleted from the incoming queue, and they can be picked up by another Lambda for processing once the visibility timeout has passed.
Is there any way to keep the batch processing, for performance reasons, but allow some messages from the batch to succeed (and be deleted from the inbound queue) while leaving only the failed part of the batch un-deleted?
The problem with manually re-enqueueing the failed messages is that you can get into an infinite loop where those items perpetually fail, get re-enqueued, and fail again. Since they are being re-sent to the queue, their retry count gets reset every time, which means they'll never fail out into a dead-letter queue. You also lose the benefits of the visibility timeout. This is also bad for monitoring purposes, since you'll never be able to know whether you're in a bad state unless you go manually check your logs.
A better approach is to manually delete the successful items and then throw an exception to fail the rest of the batch. The successful items will be removed from the queue, all the items that actually failed will hit their normal visibility timeout periods and retain their receive count values, and you'll be able to actually use and monitor a dead-letter queue. This is also less work overall than the other approach; see the sketch after the considerations below.
Considerations
Only override the default behavior if there has been a partial batch failure. If all the items succeeded, let the default behavior take its course
Since you're tracking the failures of each queue item, you'll need to catch and log each exception as they come in so that you can see what's going on later
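To make this concrete, here is a minimal Java sketch (assuming the AWS SDK v2, a QUEUE_URL environment variable, and a hypothetical process() helper): it deletes the successes manually only when part of the batch failed, then throws so the failures are redelivered.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;

import java.util.ArrayList;
import java.util.List;

public class PartialBatchHandler implements RequestHandler<SQSEvent, Void> {

    private final SqsClient sqs = SqsClient.create();
    private final String queueUrl = System.getenv("QUEUE_URL"); // assumed configuration

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        List<SQSEvent.SQSMessage> successes = new ArrayList<>();
        List<String> failedIds = new ArrayList<>();

        for (SQSEvent.SQSMessage msg : event.getRecords()) {
            try {
                process(msg.getBody());            // hypothetical business logic
                successes.add(msg);
            } catch (Exception e) {
                failedIds.add(msg.getMessageId()); // catch and log each failure
            }
        }

        if (!failedIds.isEmpty()) {
            // Partial failure: delete the successes ourselves, then fail the batch
            // so only the undeleted (failed) messages are redelivered.
            for (SQSEvent.SQSMessage msg : successes) {
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(msg.getReceiptHandle())
                        .build());
            }
            throw new RuntimeException("Failed message ids: " + failedIds);
        }
        // All items succeeded: return normally and let the default behavior delete the batch.
        return null;
    }

    private void process(String body) { /* hypothetical business logic */ }
}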
I recently encountered this problem, and the best way to handle it without writing any code on our side is to use the FunctionResponseTypes property of the EventSourceMapping. With this, we just have to return the list of failed message IDs, and the event source mapping will take care of deleting the successful messages.
Please check out Using SQS and Lambda.
CloudFormation template to configure the event source mapping for the Lambda; the FunctionResponseTypes property is the important part:
"FunctionEventSourceMapping": {
"Type": "AWS::Lambda::EventSourceMapping",
"Properties": {
"BatchSize": "100",
"Enabled": "True",
"EventSourceArn": {"Fn::GetAtt": ["SQSQueue", "Arn"]},
"FunctionName": "FunctionName",
"MaximumBatchingWindowInSeconds": "100",
"FunctionResponseTypes": ["ReportBatchItemFailures"] # This is important
}
}
After you configure your event source as above, you just have to return a response from the Lambda in the following format:
{"batchItemFailures": [{"itemIdentifier": "85f26da9-fceb-4252-9560-243376081199"}]}
Provide the list of failed message IDs in the batchItemFailures list.
If your Lambda runtime is Python, return a dict in the format shown above; for a Java-based runtime you can use the aws-lambda-java-events library.
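For a Java runtime, a minimal sketch (assuming the aws-lambda-java-events library; the process() helper is hypothetical) could look like this:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

import java.util.ArrayList;
import java.util.List;

public class ReportBatchItemFailuresHandler
        implements RequestHandler<SQSEvent, SQSBatchResponse> {

    @Override
    public SQSBatchResponse handleRequest(SQSEvent event, Context context) {
        List<SQSBatchResponse.BatchItemFailure> failures = new ArrayList<>();
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            try {
                process(message.getBody()); // hypothetical business logic
            } catch (Exception e) {
                // Only the failed ids are reported; the rest are deleted for us.
                failures.add(new SQSBatchResponse.BatchItemFailure(message.getMessageId()));
            }
        }
        return new SQSBatchResponse(failures);
    }

    private void process(String body) { /* hypothetical business logic */ }
}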
The advantages of this approach are:
You don't have to add any code to manually delete the message from the SQS queue.
You don't have to include any third-party library or boto just for deleting messages from the queue, which helps reduce your final artifact size.
It keeps things simple and stupid (KISS).
On a side note, make sure your Lambda has the required permissions on SQS to get and delete messages.
Thanks
One option is to manually send the failed messages back to the queue, and then reply with success to SQS so that there are no duplicates.
You could do something like setting up a fail count: if all messages failed, you can simply return a failed status for all of them; otherwise, if the fail count is < 10 (10 being the max batch size you can get from the SQS -> Lambda event), you can individually send the failed messages back to the queue and then reply with a success message.
Additionally, to avoid any possible infinite retry loop, add a property to the event such as a "retry" count before sending it back to the queue, and drop the event when "retry" is greater than X.
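A rough sketch of that idea (assuming the AWS SDK v2; MAX_RETRIES and the "retry" attribute name are illustrative):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.MessageAttributeValue;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

import java.util.Map;

public class RetryRequeue {

    private static final int MAX_RETRIES = 3; // the X in the text; an assumed limit

    private final SqsClient sqs = SqsClient.create();

    // Re-enqueue a failed message, tracking attempts in a custom "retry" attribute.
    public void requeue(String queueUrl, String body, int retriesSoFar) {
        if (retriesSoFar >= MAX_RETRIES) {
            // Drop (or route to your own dead-letter queue) instead of looping forever.
            return;
        }
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(queueUrl)
                .messageBody(body)
                .messageAttributes(Map.of("retry", MessageAttributeValue.builder()
                        .dataType("Number")
                        .stringValue(Integer.toString(retriesSoFar + 1))
                        .build()))
                .build());
    }
}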

How to test delivery in PublishSubscribeChannel?

I have a PublishSubscribeChannel in my application, which should deliver messages to different MessageHandlers inside the same JVM. Handlers are subscribed to the channel using the @StreamListener annotation. The channel uses an Executor, so delivery is asynchronous.
Now, I want to test that senders and handlers agree on the specific object type sent through the channel (the type of the message body). AFAIU I have two ways to test this:
1. Find all subscribers of the given channel and verify their signatures.
2. Send a message to the channel and verify that no handlers have thrown an exception.
I have no idea how to do (1). And I think I could do (2) by listening to the errorChannel (there should be no messages there), but I don't quite understand how long I should wait for error messages.
Any suggestions?
For (1), you can use reflection to look at the collection of handlers in the channel's dispatcher, then use reflection again to look at each handler's Method.
However, your design is flawed, unless you don't mind losing messages; the incoming message will be ack'd as soon as you hand off to the executor; if the server then crashes, the message will be lost.
If you get rid of the executor, it would be simpler to add an interceptor to the channel, which will be notified of any exception in its afterSendCompletion() method (satisfying your (2)).
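For example, a minimal interceptor sketch (assuming Spring's spring-messaging ChannelInterceptor contract):

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

// Records any exception thrown by a subscriber when the channel has no executor,
// so a test can assert that the send completed cleanly.
public class ErrorRecordingInterceptor implements ChannelInterceptor {

    private volatile Exception lastError;

    @Override
    public void afterSendCompletion(Message<?> message, MessageChannel channel,
                                    boolean sent, Exception ex) {
        if (ex != null) {
            this.lastError = ex; // a handler threw; the test can assert on this
        }
    }

    public Exception getLastError() {
        return lastError;
    }
}

It can be registered on the channel in the test with channel.addInterceptor(new ErrorRecordingInterceptor()), and the test can then assert that getLastError() stays null after the send.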

@KafkaListener should pull new data only when a certain condition is met; if the condition fails, pulling should stop until the condition is met

The use case I am working on is that a message received by a @KafkaListener triggers an async method. I want this async method to finish, and only then receive a new message from the Kafka topic. Any ideas or suggestions regarding this implementation? Can Kafka support such a scenario?
e.g. (pseudocode):
while (asyncMethod.idle()) {
    @KafkaListener
    public void listen(String data) {
        process(data);
        asyncMethod.execute();
    }
}
I am confused by this question, but it sounds like you want to make this synchronous rather than asynchronous?
Either that, or you could implement a lock: don't process a new message unless the lock is free, and take the lock once a message has been received.
You may want to rework your implementation/architecture though; Kafka shouldn't be used to maintain order or to block that way.
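As a sketch of the synchronous approach (assuming Spring for Apache Kafka; AsyncService, the topic name, and the future-returning execute() are hypothetical):

import java.util.concurrent.CompletableFuture;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Blocking on the future inside the listener makes processing effectively
// synchronous: the consumer thread does not poll the next record until the
// current one has finished.
@Component
public class BlockingListener {

    private final AsyncService asyncService;

    public BlockingListener(AsyncService asyncService) {
        this.asyncService = asyncService;
    }

    @KafkaListener(topics = "my-topic") // topic name is an assumption
    public void listen(String data) throws Exception {
        CompletableFuture<Void> result = asyncService.execute(data);
        result.get(); // block until the async work completes
    }

    // Hypothetical async service triggered by each message.
    public interface AsyncService {
        CompletableFuture<Void> execute(String data);
    }
}

Note that blocking longer than max.poll.interval.ms will trigger a consumer rebalance, so that setting may need to be raised to cover the longest expected run of the async method.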

Strategy for passing same payload between messages when optional outbound gateways fail

I have a workflow whose message payload (MasterObj) is being enriched several times. During the 2nd enrichment, an UnknownHostException was thrown by an outbound gateway. My error channel on the enricher is called, but the message the error channel receives is an exception, and the failed message inside that exception is no longer my MasterObj (the original payload) but is now the object produced by the request-payload-expression on the enricher.
The enricher calls an outbound gateway, and business-wise this enrichment is optional. I just want to continue my workflow with the payload that I've been enriching. The docs say that the error-channel on the enricher can be used to provide an alternate object (to what the enricher's request-channel would return), but even when I return an object from the enricher's error-channel, it still takes me to the workflow's overall error channel.
How do I trap errors from the enricher's outbound gateways and continue processing my workflow with the same payload I've been working on?
Is trying to maintain a single payload object for the entire workflow the right strategy? I need to be able to access it whenever I need.
I was thinking of using a bean scoped to the session where I store the payload, but that seems to defeat the purpose of SI, no?
Thanks.
Well, if you worry about your MasterObj in the error-channel flow, don't use that request-payload-expression, and let the original payload go to the enricher's sub-flow.
You can always use a simple <transformer expression=""> in that flow.
On the other hand, you're right: it isn't a good strategy to carry a single object through the whole flow. You carry messages via channels, and it isn't good to be tied to one payload type at each step. The Spring Integration goal is to be able to switch between different MessageChannel types at any time with little effort for their producers and consumers. You can also switch to a distributed mode where consumers and producers are on different machines.
If you still need to enrich the same object several times, consider writing some custom Java code. You can use a @MessagingGateway for that and still get the benefits of Spring Integration.
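A minimal sketch of that idea (MasterObj comes from the question; the channel names and two-step flow are assumptions for illustration):

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway
public interface EnrichmentGateway {

    @Gateway(requestChannel = "enrichStep1Channel") // channel names are assumptions
    MasterObj enrichStep1(MasterObj masterObj);

    @Gateway(requestChannel = "enrichStep2Channel")
    MasterObj enrichStep2(MasterObj masterObj);
}

// Plain Java orchestration: a failure in the optional enrichment is caught,
// and the payload enriched so far simply continues through the workflow.
class EnrichmentService {

    private final EnrichmentGateway gateway;

    EnrichmentService(EnrichmentGateway gateway) {
        this.gateway = gateway;
    }

    MasterObj enrich(MasterObj obj) {
        obj = gateway.enrichStep1(obj);
        try {
            obj = gateway.enrichStep2(obj); // business-wise optional
        } catch (Exception e) {
            // keep going with the same MasterObj; log if needed
        }
        return obj;
    }
}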
And right, session scope is not good for an integration flow, because you can simply switch to a different channel type there and lose the ThreadLocal context.

Need help handling MDB exceptions in two ways

I'm trying to handle two different types of problems while processing a message.
The first problem is the remote database being down. In that case, processing of the message should stop and be retried later. The message should never go to a DLQ, and should keep being retried until the remote database is up.
The second problem is when there is a problem with the message. In that case, it should go to the DLQ.
How should I be structuring the following code?
@Override
public void onMessage(Message message) {
    try {
        // Do some processing
        messageProcessing(message);   // Should DLQ if message is bad
        // Save to the database
        putNamedLocation(message);    // <<--- Exception when external DB is down
    } catch (Exception e) {
        logger.error(e.getMessage());
        mdc.setRollbackOnly();
    }
}
Assuming you can detect bad messages definitively in the code body of the MDB, I would write the bad messages to the DLQ directly. This gives you a bit more freedom to categorize the error and optionally send different types of bad messages to different "DLQ-like" queues, and/or apply a time-to-live to DLQ'ed messages so that no-hope-of-ever-being-processed messages don't pile up in the queue forever. You can add @Resource-annotated instance variables to your MDB class referencing the ConnectionFactory and Queue references to support sending the messages to the target DLQ. The bottom line is: make sure you detect the error and DLQ the message yourself.
As for the DB being down, you can detect this by catching exceptions when acquiring a connection or writing your updates. In this case, clean up your resources and throw a RuntimeException. This will cause the message to be redelivered, but you will want to check the JMS configuration for two things:
Make sure the max-redelivery count is high enough, otherwise the count will tick over and the message will be DLQed eventually anyway.
If your JMS implementation supports it, add a redelivery delay to rejected messages to allow some time for the DB to come back up, otherwise your messages will endlessly spin in a deliver/reject loop.
To avoid #2 (which is tricky if your JMS implementation does not support redelivery delay, like WebSphere MQ), you can use the JBoss JMX management interface for the MDB to stop (and later restart) delivery on the MDB. However, you can't do this inside the MDB in the same thread that is processing the message, because the MDB will wait for the message to complete processing, which it can't because it is waiting for the MDB to stop, which it can't because... [and so on], so... your best bet is to start some sort of sentry that polls the DB: when it finds the DB down, it stops the MDB, and when it finds it up again, it restarts it. See this question for a snippet on how to do that.
That last part should also help deal with any unexpected exceptions resulting from message validation (i.e., the DB is fine, but for some reason the message is totally fubar, resulting in uncaught exceptions which cause the message to be redelivered). Since down-DB messages should not be redelivered more than a few times (on account of your sentry), you can check a message's redelivery count, and if it is ridiculously high then you know you have a poison message and you can ditch it, or DLQ it.
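Putting the two paths together, a minimal sketch of the restructured MDB (isValid(), putNamedLocation(), and the JNDI names are assumptions standing in for your own code):

import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.jms.*;

@MessageDriven
public class LocationMdb implements MessageListener {

    @Resource
    private MessageDrivenContext mdc;

    @Resource(name = "jms/ConnectionFactory") // JNDI names are assumptions
    private ConnectionFactory connectionFactory;

    @Resource(name = "jms/AppDLQ")
    private Queue dlq;

    @Override
    public void onMessage(Message message) {
        if (!isValid(message)) {          // bad message: DLQ it ourselves and move on
            sendToDlq(message);
            return;
        }
        try {
            putNamedLocation(message);    // external DB write
        } catch (Exception dbDown) {
            // DB down: roll back so the message is redelivered later
            mdc.setRollbackOnly();
        }
    }

    private void sendToDlq(Message message) {
        try {
            Connection conn = connectionFactory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(dlq).send(message);
            } finally {
                conn.close();
            }
        } catch (JMSException e) {
            mdc.setRollbackOnly(); // can't even reach the DLQ; fall back to redelivery
        }
    }

    private boolean isValid(Message message) { /* assumed validation */ return true; }

    private void putNamedLocation(Message message) { /* assumed DB write */ }
}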
Hope that's helpful.
