How to implement an outbox-like pattern with a third-party API - microservices

I am implementing an integration with a third-party system that I don't control, and I use RabbitMQ as a message queue to publish a message after making some updates on the third-party system. My implementation follows this scenario:
await createItemOnProvider()
await queue.publishMessage()
If I were implementing a database update and wanted to publish a message after it succeeds, I would use the outbox pattern to handle that case. But in the current case I need the two steps to be atomic, and there is no transaction wrapper that guarantees either both happen or neither does. I am not sure what pattern should be used here; for example, if publishing the message fails, what should I do?

Don't reinvent the wheel. Use an orchestrator like temporal.io to implement your logic. You can write code exactly as per your requirements:
await createItemOnProvider()
await publishMessage()
and Temporal will ensure that both lines complete execution in the presence of any failures, including process crashes and network outages.
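Below is a minimal sketch of what that could look like with Temporal's TypeScript SDK. The activity names mirror the question; the file layout, timeout, and retry policy are illustrative assumptions, not recommendations.

// workflow.ts - sketch of a Temporal workflow wrapping the two steps.
// `createItemOnProvider` and `publishMessage` are assumed to be implemented
// in ./activities; the timeout and retry values are illustrative only.
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { createItemOnProvider, publishMessage } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
  retry: { maximumAttempts: 10 }, // each activity retries independently
});

export async function createItemWorkflow(input: { itemId: string }): Promise<void> {
  // Temporal records the result of each completed activity, so if the process
  // crashes between the two calls, publishMessage still runs when the
  // workflow resumes; createItemOnProvider is not re-executed.
  await createItemOnProvider(input);
  await publishMessage(input);
}

Note that Temporal gives at-least-once activity execution, so createItemOnProvider should be idempotent in case it is retried after a partial failure on the provider side.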

How to know the running status of a Spring Integration flow

I have a simple integration flow that polls data from a database based on a cron job, publishes on a DirectChannel, then does a split and some transformations, publishes on another executor-service channel, performs some operations, and finally publishes to an output channel; it is written in the DSL style.
I also have an endpoint where I might receive an HTTP request to trigger this flow; at that point I send messages to one of the channels mentioned above.
I want to make sure that the manual trigger doesn't happen if the flow is already running due to either the cron job or another request.
I have used the isRunning method of the StandardIntegrationFlow, but it seems that it's not thread-safe.
I also tried using .wireTap(myService) and .handle(myService), where this service has an AtomicBoolean flag, but the flag got set for every message, which is not a solution.
I want to know whether the flow is running without much intervention on my side, and if this is not supported, how I can apply the AtomicBoolean logic to the overall flow rather than to every message.
How can I simulate the race condition in a test in order to make sure my implementation prevents this?
The IntegrationFlow is just a logical container for the configuration phase. It does have those lifecycle methods, but only for internal framework logic. Even though they are there, they don't help, because endpoints must always be running if you want them to do something in response to an event or an input message.
It is hard to control all of that, since it is in an async state, as you explain. Even if we stop the SourcePollingChannelAdapter at the beginning of the flow to let your manual call do something, that doesn't mean messages in other threads are no longer in flight. The AtomicBoolean cannot help here for the same reason: even if you set it to true in MessageSourceMutator.beforeReceive() and reset it back to false in afterReceive() when the message is null, that still doesn't mean the messages you pushed downstream on other threads have already been processed.
You might consider using an aggregator to reset the AtomicBoolean at the end of a batch. Since you mention that you pull data from the DB, perhaps there is a number of records per poll that you can track downstream. This way your manual call would be skipped until the aggregator collects the results for that batch.
You also need to think about stopping the SourcePollingChannelAdapter at the moment a manual action is permitted, so there won't be any further race conditions with the cron.

Atomically update database and send message. Outbox pattern or not?

You have a command/operation for which you both need to save something in the database and send an event/message to another system. For example, you have an OrderService, and when a new order is created you want to publish an "OrderCreated" event for other systems to react to (either as a direct message or via a message broker) and do something.
The easiest (and naive) implementation is to save to the DB and, if successful, then send the message. But of course this is not bulletproof, because the other service/message broker may be down, or your service may crash before sending the message.
One (and common?) solution is to implement the "outbox pattern": instead of publishing messages directly, you save the message to an outbox table in your local database as part of your database transaction (in this example, saving to the outbox table as well as the order table) and have a different process (polling the DB or using change data capture) read the outbox table and publish the messages.
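As a concrete illustration, here is a minimal sketch of the outbox write path in TypeScript with node-postgres; the orders and outbox table names and columns are assumed schema for this example.

// Minimal sketch of the outbox write path using node-postgres.
// The `orders` and `outbox` tables (and their columns) are assumptions.
import { Pool } from 'pg';

const pool = new Pool();

export async function createOrder(order: { id: string; total: number }): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'INSERT INTO orders (id, total) VALUES ($1, $2)',
      [order.id, order.total],
    );
    // The event is written in the same transaction as the business data,
    // so both are committed or neither is.
    await client.query(
      'INSERT INTO outbox (event_type, payload) VALUES ($1, $2)',
      ['OrderCreated', JSON.stringify(order)],
    );
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

A separate relay process then polls the outbox table (or tails the log via CDC), publishes each row to the broker, and marks it as sent; since the relay may crash after publishing but before marking, consumers must tolerate duplicates.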
What is your solution to this dilemma, i.e. "update the database and send the message, or do neither"? Note: I am not talking about using sagas (this could be part of a saga, but that is the next level).
I have in the past used different approaches:
"Do nothing", i.e just try to send the message and hope it will be sent. Which might be fine in some cases especially with a stable message broker running on same machine.
Using DTC (in my case MSDTC). Beside all the problem with DTC it might not work with your current solution.
Outbox pattern
Using an orchestrator that retries the process if you have not received a "completed" event.
In my current project this is not handled well, IMO, and I want to change it to be more resilient and self-correcting. Sometimes, when a service calls another service and the call fails, the user might retry and it might work. But some operations might require our support team to fix them (if the failure is even discovered).
At the moment it is not a microservice solution but rather two large (legacy) monoliths communicating, running on the same server; we are moving to a microservice architecture in the near future and might run on multiple machines.

Accessing saga repository

I need to access the saga repository from within a consumer to read the current status of the saga correlated to the message being consumed.
Scenario:
I have an external service; when this service consumes an event coming from the saga, I want to check whether the saga is still in the correct state, because if the saga has changed its state in the meantime, the consumer must skip the event.
How: I could certainly query the chosen saga repository implementation using its native framework, but I would like to use an abstraction, an interface, to load the saga state from within the consumer, in order to be able to switch to a different repository implementation in the future.
Any help is appreciated.
If the saga initiated the command, sending it to the consumer, why would the consumer need to check the saga's state? Is there a long delay between the time the command is sent and the consumer is able to process it?
The type of check you are asking about somewhat goes against what a system would generally do when processing commands. If you do need this type of check, I'd actually suggest a request/response interaction using the request client, to which the saga would respond whether the command is still valid. That way, the logic (and locking) of the saga repository remains under the saga's control.
If needed, a separate endpoint could be used for that request to ensure it isn't backed up behind other messages targeting the saga. If that is desired, post a comment and I'll update the answer.
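A language-agnostic sketch of that interaction could look like the following; the bus.request abstraction here is hypothetical and merely stands in for your framework's request client (e.g. MassTransit's IRequestClient), not a real API.

// Hypothetical request/response contracts for checking saga state.
interface CheckOrderState { correlationId: string; }
interface OrderStateResult { isValid: boolean; currentState: string; }

async function consumeEvent(
  // `bus` is a hypothetical abstraction over your framework's request client.
  bus: { request<TReq, TRes>(req: TReq): Promise<TRes> },
  event: { correlationId: string },
): Promise<void> {
  // Ask the saga itself (not its repository) whether the event still applies.
  const res = await bus.request<CheckOrderState, OrderStateResult>({
    correlationId: event.correlationId,
  });
  if (!res.isValid) {
    return; // the saga has moved on; skip the event as required
  }
  // ...process the event...
}

The design point is that the consumer depends only on the request/response contract, so the saga repository implementation can change without touching the consumer.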

How to share events code in a microservice architecture

I'm working on a "microservice-like" architecture. Each microservice can fire events to RabbitMQ. The events are identified by an event code. At the moment, the code of the event being fired is a hard-coded const string declared inside the microservice that fires the event.
My problem is that each microservice that wants to subscribe to this event must duplicate this event code string. This is error-prone, especially when an event code is renamed, because all microservices that subscribe to it need to be changed accordingly... which is very bad.
I see the following possible alternatives:
Declare the event code only in the microservice that fires the event, and let the consuming microservices directly access the code declared there. In this case, the event is declared once, but it creates a source-code dependency between microservices... which is bad.
Create a source file (outside all microservices) that contains all the event codes of the whole application and is shared by all microservices. In this case, each event is declared once, but it creates a global dependency for all microservices, which is against the single responsibility principle... which is bad.
How do you tackle this problem?
At the moment, the code of the event being fired is a hard-coded const string declared inside the microservice that fires the event. My problem is that each microservice that wants to subscribe to this event must duplicate this event code string. This is error-prone, especially when an event code is renamed, because all microservices that subscribe to it need to be changed accordingly... which is very bad.
Events are messages. All of the constraints that we use to manage the evolution of messages apply to events as well.
In a microservices architecture, we expect to be able to deploy instances of the services independently of one another. Requiring that all of the services shut down together to coordinate a change in message schema kind of misses the point. That in turn implies that we need to design reasonable behaviors for the cases where the producer and consumer don't have matching understandings of the message.
In practice, this means something like:
We never introduce a new required field, only optional fields (with documented default values).
Unrecognized fields are ignored (but forwarded).
Consumers of optional fields know which default value to use when an expected field is missing.
When these constraints cannot be satisfied, you are introducing a new message.
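For illustration, a tolerant consumer following these rules might look like this sketch, where the "OrderCreated" contract, its field names, and the 'USD' default are all hypothetical:

// Hypothetical "OrderCreated" contract; `currency` was added later as an
// optional field with a documented default.
interface OrderCreated {
  orderId: string;
  total: number;
  currency?: string; // optional; documented default is 'USD'
}

function handleOrderCreated(raw: Record<string, unknown>): void {
  // Read only the fields this consumer understands; anything else on `raw`
  // is ignored here (and kept intact if the message is forwarded).
  const msg = raw as unknown as OrderCreated;
  const currency = msg.currency ?? 'USD'; // default when the field is missing
  console.log(`order ${msg.orderId}: ${msg.total} ${currency}`);
}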
If you have the message contracts in place, then you aren't restricting yourself to microservice implementations that share the same runtime platform (because two different implementations of the same contract are equivalent).
Recommended reading:
ZeroMQ RFC 42/C4, specifically section 2.6, which describes the evolution of public contracts
Versioning in an Event Sourced System, specifically "Basic Type Based Versioning"

EasyNetQ / RabbitMQ consuming events in Web API

I have created a Web API which allows messages to be sent to the queue. My Web API is designed with CQRS and DDD in mind. I want my message consumer to always be waiting for any messages on the queue. Currently, the way it's done, messages are only read when I make a request to the API that hits the method.
Is there a way, using a console application or something similar, to have something always running that consumes messages at any given time without having to make a request to the Web API? So, more of an automated task?
If so, how do I go about it? I.e., if it's a console app, how would I keep it always running (IIS?), and is there a way to use dependency injection, as I need to consume the message and then send it to my repository, which lives in a separate solution?
Or is there a way to make EasyNetQ run at startup?
The best way to handle this situation in your case is to subscribe to bus events using AMQP through the EasyNetQ library. The recommended way of hosting it is to write a Windows service using the Topshelf library and subscribe to the bus events inside that service on start.
IIS processes and threads are not reliable for such tasks, as they are designed to be recycled on a regular basis, which may cause instability and inconsistency in your application.
and is there a way to use dependency injection, as I need to consume the message and then send it to my repository, which lives in a separate solution?
It is better to create a separate question for this, as it is obviously off-topic here. It also requires further elaboration, as it is not clear what specifically you are struggling with.
