Serial consumption between message types - MassTransit

I have a MassTransit system that will consume 2 message types, one for a batch process, the other for CRUD operations on a single entity. Whilst the batch process is running, the CRUD operations should not be de-queued.
Is this possible to achieve using MassTransit? It seems that the exchange binding by message type name could make this behavior difficult.
A solution would be to use one message type to denote both operations and then interrogate the message contents to discern between single and batch, but this feels like a code smell. Also, this would require concurrency configuration to ensure only one consumer is ever active.
Can anyone help with an alternative solution here? Essentially, we need to pause all message consumption whilst an event driven process is running.
Thanks in advance.

By pause, do you mean that you want the CRUD operations to be able to occur without being blocked by the batch process? Because if it's only a matter of not having the two separate messages get in the way of each other, the most logical solution is using two separate queues, one receive endpoint for the batch process and another for the CRUD operations.
Now, if you truly need to separate the batch process such that it doesn't happen during the CRUD operations, that will require more work. And what if you receive a CRUD operation while the batch process is already running?
I think separate queues are your best solution, however.

Related

Camunda: Receive multiple, different messages at once

I am currently developing a fairly complex workflow with Camunda. The goal of this workflow is to orchestrate the execution of different external business processes, which includes starting, monitoring, and synchronizing these workflows. Everything besides the synchronization works as expected.
Example:
My example has one main workflow which starts multiple sub workflows. The main workflow has to be aware when all sub workflows are finished. Every sub workflow is triggered by a message and sends a message back to the main workflow at the end of execution. Therefore, all sub workflows should be synchronized in the main workflow.
The XML can be accessed here: https://pastebin.com/2aj4z0zU
Unfortunately, this leads to numerous message correlation exceptions at the choke point in the main workflow (1st lane, after the first parallel gateway). I am using the following code to correlate the messages:
this.runtimeService.createMessageCorrelation(messageName)
.processInstanceId(processInstanceId)
.setVariables(payload)
.correlate();
The whole workflow is executable and runs without errors, but only if one example_workflow at a time is executed. Starting multiple example_workflows quickly one after another results in this type of exception randomly for every message type:
ENGINE-16004 Exception while closing command context: Cannot correlate message 'PROCESS_B_FINISHED': No process definition or execution matches the parameters org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'PROCESS_B_FINISHED': No process definition or execution matches the parameters
at org.camunda.bpm.engine.impl.cmd.CorrelateMessageCmd.execute(CorrelateMessageCmd.java:88) ~[camunda-engine-7.14.0.jar!/:7.14.0]
Currently, the correlation exceptions occur when a PostgreSQL database is used. The same workflow runs much better, though not perfectly, when we use an H2 file-based database. None of the receive tasks are configured asynchronously; only the send tasks are (async before + exclusive).
Questions:
Is this already the best practice to synchronize multiple messages in one workflow?
What could be the reason for the correlation exceptions while using a postgresql database?
Used software:
- Spring Boot application [Version: 2.3.4]
- Camunda [Version: 7.14.0]
- H2 [Version: 1.4.200]
- PostgreSQL driver [Version: 42.2.22]
The process model seems to contain sequences where it can run into a deadlock (what if blue is followed directly by green? Or yellow?) or where you have race conditions. If the process has not yet reached a state in which it is waiting to receive the message, then message delivery will fail (as indicated in the error message you shared).
(The reason you are observing the CorrelationException more frequently on PostgreSQL is the race condition. With this external database some operations take slightly more time, increasing the chance of the race condition occurring.)
The process engine needs to be able to match a message to a unique receiver. If there are multiple potential receivers for the same message name, and no other correlation criteria creating a unique match are provided, then the delivery will also fail. You either need to use unique message names per instance or, better, use a businessKey or a process variable that is unique per instance as an additional correlation criterion. This is why it does not work when you run multiple process instances.
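For illustration, here is a minimal sketch of adding such a criterion to the correlation call from the question, assuming the item ID from the payload is also used as the business key of the process instance (the itemId name is an assumption):

// Correlate by message name *and* business key, so only the one instance
// started for this item can receive the message. itemId is assumed to be
// part of the payload and to have been used as the business key at start.
runtimeService.createMessageCorrelation("PROCESS_B_FINISHED")
.processInstanceBusinessKey(itemId)
.setVariables(payload)
.correlate();

// Alternatively, correlate on a process variable that is unique per instance:
runtimeService.createMessageCorrelation("PROCESS_B_FINISHED")
.processInstanceVariableEquals("itemId", itemId)
.setVariables(payload)
.correlate();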
Modelling a workflow with this parallel message bottleneck leads to a race condition, as mentioned in rob2universe's answer.
To solve this problem, I first had to correlate the messages precisely. I did this by adding a unique identifier to every message, which was not a big deal because an item ID was already defined within the payload of every message. Secondly, I had to remove the asynchronous and exclusive markers for every receive task and the connected gateways. And thirdly, I had to reset the job executor properties to their default values; limiting the pool size and jobs per acquisition did not benefit the workflow execution.
After all these changes, my workflow now runs as expected with no errors. Unfortunately, due to the described bottleneck, optimistic locking exceptions are common, but the workflow engine handles these exceptions without further errors.

How to use Event-Driven architecture to remove "api-based lambda calling another lambda" anti-pattern?

Suppose I have an API POST /order which invokes a PlaceOrder Lambda and expects a response from it. The PlaceOrder Lambda does some work, then invokes a ProcessPayment Lambda and expects a response. ProcessPayment in turn invokes a CreateInvoice Lambda, expecting a response. The whole architecture is like a request-response cycle. I would like to achieve this without one Lambda invoking another Lambda, as that is considered an anti-pattern. My question is: what is the best design pattern to achieve this behavior within 29 seconds with an event-driven architecture?
What AWS suggests: as per this official documentation, they suggest using SQS. But regarding SQS, I have some thoughts.
My thoughts:
With event sources, I can orchestrate these Lambdas using SQS, SNS, and other event sources, but in that case the flow would not be synchronous and thus I would not get a response back from the API.
My other solution:
Using Step Functions: I can orchestrate this workflow with a Step Function, and I think it is a more elegant solution in this synchronous calling case. But I would like to achieve this via event sources.
How can I design this scenario with best practices using an event-driven architecture?
In an Event-Driven Architecture, the communication between producers and consumers is asynchronous by design, that's the way the architecture scales.
You can get nearly synchronous communication between two services in an EDA by creating dedicated queues/channels to communicate between them and making sure they are scaled up to a level where the latency is acceptable (close to synchronous values).
This adds some complexity, because the services which need responses have to wait in a hot loop to get them as soon as possible, and if messages are lost you also need retry policies, etc.
I think you need to focus more on the mechanics of your program and a bit less on design patterns. You need to use the design patterns that fit your use-case, the other way around will not work. In the end, you build a program to fulfill a certain task or set of tasks, so that should be your end goal.
You’re stating that you have a process order Lambda, a create invoice Lambda and a process payment Lambda. I’d say the most interesting question is what you need to get done before you return a response to the user. Maybe you can process the order, respond to the user that it is done and handle the invoicing and payments on a later moment. Typically that would mean you put a message in a SQS queue or on an SNS topic.
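As a rough sketch of that hand-off (the queue URL and message shape are invented, and this assumes the AWS SDK for Java v2), the order Lambda would publish a message for the follow-up work instead of invoking another Lambda directly:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class OrderHandler {
    private static final SqsClient SQS = SqsClient.create();
    // Hypothetical queue consumed by a separate "process payment" Lambda.
    private static final String PAYMENT_QUEUE_URL =
        "https://sqs.eu-west-1.amazonaws.com/123456789012/process-payment";

    // Called after the order has been persisted; enqueue the follow-up work
    // and return to the API caller immediately.
    public void handOffPayment(String orderId) {
        SQS.sendMessage(SendMessageRequest.builder()
            .queueUrl(PAYMENT_QUEUE_URL)
            .messageBody("{\"orderId\":\"" + orderId + "\"}")
            .build());
    }
}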
It could be that you need your payment to be processed before you respond to the user, because they need to be informed about the status of the payment. You could then combine both actions in a single Lambda, because there is no way to split the two tasks from one another. Keep in mind that often another option exists where you process the order first, put a message in a queue for the payment processing (as it typically is a process that involves a third party), and the front end polls for an update on the payment status. This way you can return a response quickly and still give an update on the payment as soon as possible.
The invoice creation process is typically something you would never want to invoke synchronously during order confirmation. What if your invoicing application (internal or external) is down? Theoretically you could still process orders as long as you create the invoice at some later moment in time. If you couple everything together, you make order confirmation dependent on your invoice creation process, which I would regard as an unnecessary dependency.
I would really advise against Step Functions for this use case. They can be useful for long-running processes that need to keep state and 'wake up' at specific moments, but for this specific flow I would say they do not help and are unnecessarily complex. If you have three things you need to do that you cannot separate from one another, just run them in the same Lambda.

How to get data without polling?

This is more of a theoretical question.
Well, imagine that I have two programs that run simultaneously; the main one only does something when it receives a flag set to true from a secondary program. So the main program has a function that keeps asking the secondary program for the value of the flag, and when it gets true, it does something.
What I learned at college is that polling is the simplest way of doing that. But when I started working as a developer, coworkers told me that this method generates some overhead, or wastes computation, by asking for a value at regular intervals.
I tried to come up with some ideas for doing this in a different way, and searched the internet for something like this, but didn't find a useful way to do it.
I read about interrupts and passive approaches where the main program gets the data only when it is informed by the secondary program. But how does that happen? The main program still needs a function to check for the interrupt, right? So won't it end up the same way as before?
What could I do differently?
There is no magic...
No program will guess when it has new information to be read; what you can do is decide between two approaches:
A -> asks -> B
A <- is informed <- B
When to use each? It depends on many other factors, such as:
1. How fast do you need the data delivered from the moment it is generated? As soon as possible, or can it accumulate for a while?
2. How fast is the data generated?
3. How many simultaneous clients are requesting data from the same server?
4. What type of data do you deal with? Persistent? Fast-changing?
If you are building something like a stock analyzer, where you need to ask for the price of stocks every second (and it will also change every second), the approach you mentioned may be the best.
If you are writing a chat-based app like WhatsApp, where you need to check whether there is a new message for the client and most of the time there won't be, publish/subscribe may be the best.
But all of this is a very superficial look into a high-impact architecture decision; it is not possible to pick the best approach by looking at just one factor.
What I want to show is that
coworkers told me that this method generate some overhead or it's waste of computation
is not a universally correct statement. It may be in some particular scenario, but overhead will always exist in distributed systems.
The typical way to prevent polling is by using the Publish/Subscribe pattern.
Your client program will subscribe to the server program and when an event occurs, the server program will publish to all its subscribers for them to handle however they need to.
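A minimal in-process sketch of that pattern in Java (all names are invented; over a network the same role is usually played by a message broker):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface FlagListener {
    void onFlagRaised();                      // called when the event occurs
}

class FlagPublisher {
    private final List<FlagListener> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(FlagListener listener) {
        subscribers.add(listener);
    }

    // The "secondary program" calls this when the flag becomes true;
    // every subscriber reacts immediately instead of polling.
    void raiseFlag() {
        for (FlagListener listener : subscribers) {
            listener.onFlagRaised();
        }
    }
}

The main program would register once with publisher.subscribe(() -> doSomething()) and then simply wait; the secondary program calls publisher.raiseFlag() when the flag becomes true.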
If you flip the direction of the requests, you end up with something more similar to a standard web API. Your main program would be a server listening for requests, and the secondary program would be a client hitting an endpoint on the server to trigger an event.
There are many ways to accomplish this in every language, and it doesn't have to be tied to TCP/IP requests.
I'll add a few links for you shortly.
Well, in most languages you won't implement things at such a low level. But theoretically speaking, there are different waiting strategies; what you are describing is active waiting (busy-waiting), and doing this you can easily burn CPU for nothing.
Most languages provide libraries that let you start a process as a service that waits passively and is triggered when a request comes in.
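A small Java sketch of that passive waiting, using a blocking queue (names invented): the main thread consumes no CPU until the secondary thread hands it a value.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PassiveWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Boolean> flagQueue = new LinkedBlockingQueue<>();

        // "Main program": blocks here until a value arrives, using no CPU while waiting.
        Thread mainProgram = new Thread(() -> {
            try {
                if (flagQueue.take()) {
                    System.out.println("Flag received, doing the work...");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        mainProgram.start();

        // "Secondary program" (here just another thread): signals when ready.
        Thread.sleep(1000);
        flagQueue.put(true);
        mainProgram.join();
    }
}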

Compensating Events on CQRS/ES Architecture

So, I'm working on a CQRS/ES project in which we are having some doubts about how to handle seemingly trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API, and each customer has a unique document (number), so when I'm registering a new customer I have to verify that there is no other customer with that document to avoid duplication. But when it comes to a CQRS/ES architecture, where we have eventual consistency, I found out that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also we are using eventstore.
My current solution:
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%. That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
Although this works, there are two things that bother me here. The first is my command application relying on the query application: if my query application is down, my command side is affected (today I just return false from the validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side have some kind of API for that? Or should the query side emit the event directly to eventstore using some common shared library? And if I have more than one view/read model, which one should handle this?
I really hope someone can shed some light on these questions and help me with these matters.
For reference, you may want to be reviewing what Greg Young has written about Set Validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
This spelling doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
as far as I know, eventstore doesn't have transactions across different streams
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from entering an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, bring it into a valid permanent state. For this you could allow the command to pass and yield the CustomerRegistered event, and then, using a Saga/Process manager, verify against a collection uniquely indexed by document and issue a compensating command (not an event!), e.g. UnregisterCustomer.
Instead of sending the command directly, you create and start a Saga/Process that pre-allocates the document in a collection uniquely indexed by document and, if successful, then sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/Process manager. In order for the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails or is restarted).
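As a rough sketch of the compensating-command option (every type below is invented for the example): the process manager subscribes to CustomerRegistered, tries to claim the document number in a collection with a unique index, and sends UnregisterCustomer to the write model when the claim fails.

// All of these types are assumptions for the sketch.
record CustomerRegistered(String customerId, String documentNumber) {}
record UnregisterCustomer(String customerId, String reason) {}

interface DocumentIndex {          // collection with a unique index on the document number
    boolean tryClaim(String documentNumber, String customerId);
}

interface CommandBus {             // delivers commands to the write model
    void send(Object command);
}

class CustomerUniquenessProcessManager {
    private final DocumentIndex index;
    private final CommandBus commands;

    CustomerUniquenessProcessManager(DocumentIndex index, CommandBus commands) {
        this.index = index;
        this.commands = commands;
    }

    // Subscribed to the event store; invoked for every CustomerRegistered event.
    void on(CustomerRegistered event) {
        if (!index.tryClaim(event.documentNumber(), event.customerId())) {
            // Duplicate document: compensate with a command, not an event.
            commands.send(new UnregisterCustomer(event.customerId(), "duplicate document"));
        }
    }
}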
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a unique constraint table. If it can reserve the key, the command can go ahead.
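A minimal sketch of that reservation step, assuming a helper table customer_document with a unique constraint on document_number (table, column, and class names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class DocumentReservation {
    // Try to reserve the document number before emitting CustomerCreated.
    // Returns true if the key was free, false if another customer already owns it.
    boolean tryReserve(Connection connection, String documentNumber) throws SQLException {
        String sql = "INSERT INTO customer_document (document_number) VALUES (?)";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, documentNumber);
            stmt.executeUpdate();
            return true;
        } catch (SQLException e) {
            // SQLState class 23 = integrity constraint violation (duplicate key).
            if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
                return false;
            }
            throw e;
        }
    }
}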
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business or the way the application works, then you can deal with it either manually or at the time via compensating commands. If the latter, it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue then you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD style implementations but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.

Restful triggering of Camunda process definitions from nodejs

I'm a beginner at Camunda/BPMN and I want to use it to control what is going on in nodejs, most likely using a REST API, at least for now. (Unless folks have a better idea for how nodejs should talk to Camunda.) My goal is to deliver systems where non-programmers can update the business logic in very practical ways.
I'd like to trigger the start of perhaps more than one process by sending a REST message, say to reflect that "a new insurance policy has been sold". That might trigger the instantiation of, say, 2 processes on Monday, but perhaps on Tuesday we add a third, and now the same REST API call should trigger more activity on Wednesday. (I figure it is better for nodejs to know about events but not about the process definitions. After all, my goal is to use Camunda as a sort of business logic server for my application. The less the nodejs code needs to know, the better.)
Which REST API should I be using to express the message that, say "a new insurance policy has been sold"? When I look at:
https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/
I find it very confusing. What should "name" match in the biz process definitions? I assume I don't need an executionId? I assume I can leave out tenantId?
Would some string in the message match the ID of a start event in one or more process definitions (or what has to match what)?
When I look at a process, is there an easy way to tell what variables I need to supply to start that process running?
Should I perhaps avoid using this event-oriented style of kicking off processes and just use the POST /process-definition/key/{key}/start? It would seem to me to be better form to trigger activity with events or signals or something like that rather than to have my nodejs code know about the specific process definition by name.
Should I be using events or signals in this case?
I gather that the start event should not be a "None Start Event", but I'm not clear on what type of start event TO use if I want automatic triggering based on events or signals. Would a "Non-interrupting - Message Start Event" be the right sort? I'm finding this confusing.
Once I have triggered the process to start, what does nodejs need to send to step the process forward from one task in that instance to the next?
Thanks!
In order to instantiate a new workflow instance you have the following possibilities:
Start exactly one instance:
- Start a workflow instance by its known "key": https://docs.camunda.org/manual/develop/reference/rest/process-definition/post-start-process-instance/
- Start a workflow by a message start event: https://docs.camunda.org/manual/develop/reference/rest/message/post-message/. A message can only start one specific workflow; the relationship between message name and process definition must be unique. The message start event is the one you have to use in your BPMN process model. See also https://docs.camunda.org/manual/develop/reference/bpmn20/events/message-events/. This might indeed be the better approach to make your client independent of the process definition key.
Start multiple instances:
- Start workflow instances by a BPMN signal event: https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/. The signal name can start many instances at once.
The name of the message or name of signal would be configured in the BPMN model. Both could work for your use case.
Once a process instance is started, it will automatically execute the next steps.
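For illustration, a minimal sketch of correlating a start message via the REST API (the message name, variable, and engine URL are invented; from nodejs you would POST the same JSON body with any HTTP client):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StartByMessage {
    public static void main(String[] args) throws Exception {
        // Correlates the message "NewPolicySold" (configured on the message start
        // event of the process model) and passes one process variable.
        String body = """
            {
              "messageName": "NewPolicySold",
              "processVariables": {
                "policyId": { "value": "P-4711", "type": "String" }
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/engine-rest/message"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}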
Perhaps following this example (https://blog.bernd-ruecker.com/use-camunda-without-touching-java-and-get-an-easy-to-use-rest-based-orchestration-and-workflow-7bdf25ac198e) step by step can give you a better idea?
