axon transaction manager between command and query side - spring

I have a Spring Boot Maven multi-module project with Axon 4.4.2. The project hierarchy is as below:
application
--core
--command-side
----command-side-axon
----command-side-rest
--query-side
----query-side-persistence
----query-side-rest
I have an example of creating a new catalog as below. When I send a request from command-side-rest, it always returns an id as response, even if the query-side-persistence crashes and the data is not saved in the database. How can I handle transactions in this case? I want the event not to be saved in the event store and an exception to be thrown when the query-side-persistence crashes.
command-side-rest
@PostMapping
public String save(@RequestBody Catalog catalog) {
    return (String) commandGateway.sendAndWait(new CreateCatalogCommand(catalog));
}
command-side-axon
@CommandHandler
public void handle(CreateCatalogCommand cmd) {
    apply(new CatalogCreatedEvent(cmd.externalId, cmd.name));
}
@EventSourcingHandler
@Order(1)
public void on(CatalogCreatedEvent evt) {
    this.externalId = evt.externalId;
}
query-side-persistence
@EventHandler
@Transactional(propagation = Propagation.REQUIRED)
@Order(2)
public void on(CatalogCreatedEvent event) {}

I think I can give you some information on the matter, concerning your main question:
"I want the event not to be saved in the event store and an exception to be thrown when the query-side-persistence crashes."
Well, that essentially means you do not want to use CQRS at all. What CQRS stands for is the segregation of your Command Model (e.g. the Aggregate handling the command and publishing the event) and your Query Models (e.g. the classes handling events to update your models).
Segregating the two gives you the benefit that you can optimize your models for both scenarios. Additionally, the request to perform an operation, a.k.a. the command, no longer depends on whether the subsequent event can actually update a query model. On top of that, it shouldn't if you are doing CQRS! Having this exact segregation means you can recreate your Query Models whenever you need, entirely separately from performing the commands.
Thus, essentially, your Event Store becomes the single model you can base all other models on, independently of one another. This is why using events when going for CQRS works so nicely with Event Sourcing (which recreates your Command Model based on the events it has published).
So, my recommendation? I would embrace the fact that even if event handling to update your query model fails, the event has still happened. That failure doesn't change the fact that your system made the decision to publish that event, so let it be so.
If you do want to have your Command Model and Query Model in a single transaction, you could still achieve this by only using the SubscribingEventProcessor in Axon. Again though, I would discuss with your team/business whether that is really what you need.
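For completeness, a sketch of what that could look like in Spring Boot. Note that the processing group name "catalog" is an assumption on my end; it has to match the @ProcessingGroup annotation (or, by default, the package name) of your query-side event handler class:
@Configuration
public class AxonConfig {

    @Autowired
    public void configure(EventProcessingConfigurer configurer) {
        // Assign the query-side processing group to a SubscribingEventProcessor,
        // so its event handlers run in the same thread, and thus the same
        // transaction, as the command handling that published the event.
        configurer.registerSubscribingEventProcessor("catalog");
    }
}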
Check out some of the posts on the "learning" page of AxonIQ to get a feel for why CQRS, DDD and Event Sourcing can be beneficial for you.
Hope this sheds some light on the situation, Aymen!

Related

guava eventbus post after transaction/commit

I am currently playing around with Guava's EventBus in Spring, and while the general functionality is working fine so far, I came across the following problem:
When a user wants to change data on a "Line" entity, this is handled as usual in a backend service. In this service the data is persisted via JPA first, and after that I create a "NotificationEvent" with a reference to the changed entity. Via the EventBus I send the reference of the line to all subscribers.
public void notifyUI(String lineId) {
    EventBus eventBus = getClientEventBus();
    eventBus.post(new LineNotificationEvent(lineId));
}
The EventBus itself is simply created using new EventBus() in the background.
Now in this case my subscribers are on the frontend side, outside of the @Transactional realm. So when I change my data, post the event and let the subscribers get all necessary updates from the database, the actual transaction is not committed yet, which makes the subscribers fetch the old data.
The only quick fix I can think of is handling it asynchronously and waiting for a second or two. But is there another way to post the events using Guava AFTER the transaction has been committed?
I don't think Guava is "aware" of Spring at all, and in particular not of its @Transactional stuff.
So you need a creative solution here. One solution I can think of is to move this code to a place where you're sure the transaction has finished.
One way to achieve that is using TransactionSynchronizationManager:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // do what you want to do after commit,
        // in this case call the notifyUI method
        notifyUI(lineId);
    }
});
Note that if the transaction fails (rolls back), this method won't be called; in that case you'll probably need the afterCompletion method instead. (On Spring versions before 5.3, where TransactionSynchronization has no default methods, extend TransactionSynchronizationAdapter instead.) See the documentation.
Another possible approach is refactoring your application to something like this:
@Service
public class NonTransactionalService {

    @Autowired
    private ExistingService existing;

    public void entryPoint() {
        String lineId = existing.invokeInTransaction(...);
        // now you know for sure that the transaction has been committed
        notifyUI(lineId);
    }
}
@Service
public class ExistingService {

    @Transactional
    public String invokeInTransaction(...) {
        // do your stuff that you've done before
    }
}
One last thing I would like to mention here is that Spring itself provides an events mechanism that you might use instead of Guava's.
See this tutorial for example.
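To illustrate, a minimal sketch of that Spring-native approach, reusing the LineNotificationEvent from your question (the service and listener names are my assumptions). @TransactionalEventListener handles the event after the commit by default:
@Service
public class LineService {

    @Autowired
    private ApplicationEventPublisher publisher;

    @Transactional
    public void changeLine(String lineId) {
        // ... persist the Line changes via JPA as before ...
        publisher.publishEvent(new LineNotificationEvent(lineId));
    }
}

@Component
public class LineNotificationListener {

    // AFTER_COMMIT is the default phase: this only fires once the surrounding
    // transaction has committed, so subscribers will fetch the fresh data
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onLineChanged(LineNotificationEvent event) {
        // notify the UI here
    }
}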

Spring Boot - Camel - Tracking an exchange all the way through

We are trying to set up a very simple auditing database table for a very complex Spring Boot/Camel application with many routes (mostly internal routes using seda://)... the idea being that we record each route's processing outcome in the table. Then, when issues arise, we can log in to the database, query the table and pinpoint exactly where the issue happened. I thought I could just use the exchange id as the unique tracking identifier, but quickly learned that all the seda:// routes create new exchanges, or at least that's what I'm seeing (Camel version 2.24.3). Frankly, I don't care what we use for the unique identifier... I can generate a UUID easily enough and then use exchange.setProperty("id-unique", UUID).
I did manage to get something to work using exchange.setProperty("id-exchange", exchange.getExchangeId()) and have it persist the unique identifier through the routes... (I did read that certain pre-defined route prefixes such as jms:// will not persist exchange properties, though). The thought being, the very first Processor places the exchangeId (unique id) on the exchange properties, and my tracking logic sits in a processor that I can include as part of the route's definition:
@Override
public void configure() throws Exception {
    // EVENTS : Collect statistics from Camel events
    this.getContext().getManagementStrategy().addEventNotifier(this.camelEventNotifier);

    // INITIAL : ${body} exchange coming from a simple URL endpoint
    //           POST request with an XML Message...simulates an MQ
    //           message from Central MQ. The Web/UI service places the
    //           message onto the camel route using producerTemplate.
    from("direct:" + Globals.ROUTEID_LBR_INTAKE_MQ)
        .routeId(Globals.ROUTEID_LBR_INTAKE_MQ)
        .description("Loss Backup Reports MQ XML inbound messages")
        .autoStartup(false)
        .process(processor)
        .process(getTrackingProcessor())
        .to("seda:" + Globals.ROUTEID_LBR_VALIDATION)
        .end();
}
This proof-of-concept (POC) allowed me to at least get things tracking like we want... note the multiple rows with the same unique identifier:
ID_ROW  ID_EXCHANGE                           PROCESS_GROUP        PROCESS_STEP    RESULTS_STEP  RESULTS_MESSAGE
1       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-intake-mq   add           lbr-intake-mq
2       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-validation  add           lbr-intake-mq
Thing is, this POC is proving to be rigid, and it is difficult to record outcomes such as SUCCESS versus EXCEPTION.
My question is: has anyone done anything like this? And if so, how was it implemented? Or is there a fancy way in Camel to handle this that I just couldn't find on the web?
My other ideas were:
Create an old-fashioned abstract TrackerProcessor class that all my tracked Processors extend, then just have a handful of methods in there to create, update, etc. Each processor then calls the inherited methods to create and manage the audit entries. The advantage here is that the exchange is readily available, with all the data involved, to store in the database table.
@Component
public abstract class ProcessorAbstractTracker implements Processor {

    @Override
    abstract public void process(Exchange exchange) throws Exception;

    public void createTracker(Exchange exchange) {
    }

    public void updateTracker(Exchange exchange, String theResultsMessage, String theResultsStep) {
    }
}
Create an @Autowired bean that every tracked Camel Processor wires in, and put the tracking logic in the bean. This seems simple and clean. My only concern/question here is how to scope the bean (maybe prototype)... since there would be many routes utilizing the bean concurrently, is there any chance we get mixed processing values? A stateless variant is sketched below.
@Autowired
ProcessorTracker tracker;
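If the bean keeps no per-exchange state, the default singleton scope should be safe under concurrent routes, since all values travel as method arguments. A sketch (the method name and signature are illustrative):
@Component
public class ProcessorTracker {

    // no mutable fields: concurrent exchanges cannot see each other's values
    public void track(Exchange exchange, String theResultsStep, String theResultsMessage) {
        String uniqueId = exchange.getProperty("id-exchange", String.class);
        // ... insert or update the audit row for uniqueId here ...
    }
}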
Other ideas?
tia, adym

DDD: Where to raise "created" domain event

I am struggling to find and implement the best practice for the following problem: where is the best location to raise a "created" domain event (the event that notifies about the creation of an aggregate)? For example, if we have an Order aggregate in our bounded context, we would like to notify all interested parties when an order is created. The event could be OrderCreatedEvent.
What I tried first was to raise this event in the constructor (I have a collection of domain events in each aggregate). This works only when we create the order, because whenever we want to do anything with this aggregate in the future, we are going to create a new instance of it through the constructor. Then OrderCreatedEvent would be raised again, even though it is not true.
I thought it would be okay to raise the event in the application layer, but that is an anti-pattern (domain events should live only in the domain). Maybe having a Create method that just adds the OrderCreatedEvent to the aggregate's domain events list, and calling it in the application layer when an order is created, is an option.
An interesting fact I found on the internet is that raising domain events in the constructor is considered an anti-pattern, which means the last described option (a Create method) would be the best approach.
I am using Spring Boot for the application and MapStruct for the mapper that maps the database/repository entity to the domain model aggregate. I also tried to find a way to create a mapper that skips the constructor of the target class, but as all properties of the Order aggregate are private, that seems impossible.
Usually constructors are used only to make assignments to an object's fields. They are not the right place to trigger behaviour, especially when it throws exceptions or has side effects.
DDD theorists (from Eric Evans onwards) suggest implementing factories for aggregate creation. A factory method, for example, can invoke the aggregate constructor (and wire up the aggregate with child domain objects as well) and also register an event.
Publishing events from the application layer is not an anti-pattern per se. Application services can depend on the domain event publisher; the important thing is that it is not the application layer that decides which event to send.
To summarize, with a stack like Java Spring Boot and its domain events support, your code could look like:
public class MyAggregate extends AbstractAggregateRoot<MyAggregate> {

    public static MyAggregate create() {
        MyAggregate created = new MyAggregate();
        created.registerEvent(new MyAggregateCreated());
        return created;
    }
}
public class MyApplicationService {

    @Autowired private MyAggregateRepository repository;

    public void createAnAggregate() {
        repository.save(MyAggregate.create());
    }
}
Notice that event publishing happens automagically after calling repository.save(). The downside here is that, when you use db-generated identifiers, the aggregate id is not available in the event payload, since it is only assigned after persisting the aggregate. In that case I would change the application service code like this:
public class MyApplicationService {

    @Autowired private MyAggregateRepository repository;
    @Autowired private ApplicationEventPublisher publisher;

    public void createAnAggregate() {
        // note: domainEvents() is protected in AbstractAggregateRoot, so the
        // aggregate has to expose it (e.g. through a public override)
        repository.save(MyAggregate.create()).domainEvents().forEach(evt -> {
            publisher.publishEvent(evt);
        });
    }
}
The application layer is in charge of deciding what to do to fulfill this workflow (create an aggregate, persist it and send some events), but all the steps happen transparently. I can add a new property to the aggregate root, change the DBMS or change the event contract; this won't change these lines of code. The application layer decides what to do and the domain layer decides how to do it. Factories are part of the domain layer, events are a transient part of the aggregate state, and the publishing part is transparent from the domain standpoint.
Check out this question: Is it safe to publish Domain Event before persisting the Aggregate?
"I thought it would be okay to raise the event in the application layer, but that is an anti-pattern (domain events should live only in the domain)." - Domain events live in the Domain layer, but the Application layer references the Domain layer and can easily emit domain events.

Axon State-Stored Aggregate Test IllegalStateException

PROBLEM: Customer technical limitations force me to use Axon with state-stored aggregates in PostgreSQL. I try a simple JPA-entity Axon test and get an IllegalStateException.
RESEARCH: A simplified project on the case is available at https://gitlab.com/ZonZonZon/simple-axon.git
In my test on
fixture.givenState(MyAggregate::new)
.when(command)
.expectState(state -> {
System.out.println();
});
I get
The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
java.lang.IllegalStateException: The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
at org.axonframework.common.Assert.state(Assert.java:44)
QUESTION: How to test an aggregate state using Axon and escape the error?
There are some missing parts in your project to let the test run properly. I will try to tackle them as concisely as possible:
Your Command should contain the piece of information that connects it to the Aggregate. @TargetAggregateIdentifier is the annotation provided by the framework that connects a certain field to its @AggregateIdentifier counterpart in your Aggregate. You can read more here: https://docs.axoniq.io/reference-guide/implementing-domain-logic/command-handling/aggregate#handling-commands-in-an-aggregate
That said, a UUID field needs to be added to your Create command.
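A sketch of what that command could look like; I am assuming Lombok here, since the test below uses Create.builder():
@Value
@Builder
public class Create {

    // connects this command to the aggregate instance it targets
    @TargetAggregateIdentifier
    UUID uuid;

    String login;
    String password;
    String token;
}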
This information will then be passed into the Created event: events are stored and can be processed both by a replay and by Aggregate re-hydration (upon the client's restart). They are the source of truth for our information.
An @EventSourcingHandler annotated method will be responsible for applying the event and updating the @Aggregate's values:
@EventSourcingHandler
public void on(Created event) {
    uuid = event.getUuid();
    login = event.getLogin();
    password = event.getPassword();
    token = event.getToken();
}
The test will then look like:
@Test
public void a_VideochatAccount_Created_ToHaveData() {
    Create command = Create.builder()
            .uuid(UUID.randomUUID())
            .login("123")
            .password("333")
            .token("d00a1f49-9e37-4976-83ae-114726938c73")
            .build();

    Created expectedEvent = Created.builder()
            .uuid(command.getUuid())
            .login(command.getLogin())
            .password(command.getPassword())
            .token(command.getToken())
            .build();

    fixture.givenNoPriorActivity()
           .when(command)
           .expectEvents(expectedEvent);
}
This test will validate the Command part of your CQRS.
I would then suggest separating the Query part from your @Aggregate: you will need to handle events with an @EventHandler annotated method placed in a Projection @Component class, and implement the piece of logic that takes care of storing the information, in the form you need, in a PostgreSQL @Entity, using the @Repository JPA way, which I am sure you are familiar with.
You can find useful information in the reference guide at https://docs.axoniq.io/reference-guide/implementing-domain-logic/event-handling, following the video example on The Query Model, based on code that can be found in this repo: https://github.com/AxonIQ/food-ordering-demo/tree/master
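A minimal sketch of such a projection; the class, entity and repository names here are my assumptions:
@Component
public class AccountProjection {

    private final AccountEntityRepository repository;

    public AccountProjection(AccountEntityRepository repository) {
        this.repository = repository;
    }

    // invoked for each Created event the processor receives
    @EventHandler
    public void on(Created event) {
        // map the event onto a plain JPA entity and store it in PostgreSQL
        repository.save(new AccountEntity(event.getUuid(), event.getLogin()));
    }
}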
Hope that all is clear,
Corrado.

ArgumentResolvers within single transaction?

I am wondering if there is a way to wrap all argument resolvers, like those for @PathVariable or @ModelAttribute, in one single transaction? We are already using the OEMIV filter, but Spring/Hibernate is spawning too many transactions (one per select if they are not wrapped within a service class, which is the case in path-variable resolvers, for example).
While the system is still pretty fast, I think this is neither necessary nor consistent with the rest of the architecture.
Let me explain:
Let's assume that I have a request mapping including two entities, and the conversion is based on a StringToEntityConverter.
The actual URL would be like this (if we supported GET): http://localhost/app/link/User_231/Item_324
@RequestMapping(value = "/link/{user}/{item}", method = RequestMethod.POST)
public String linkUserAndItem(@PathVariable("user") User user, @PathVariable("item") Item item) {
    userService.addItem(user, item);
    return "linked";
}
@Converter
// simplified
public Object convert(String classAndId) {
    return entityManager.find(getClass(classAndId), getId(classAndId));
}
The UserService.addItem() method is transactional, so there is no issue there.
BUT:
The entity converter resolves the User and the Item against the database before the call to the controller, thus creating two selects, each running in its own transaction. Then we have @ModelAttribute methods, which might also issue some selects, and each will spawn a transaction again.
And this is what I would like to change: I would like to create ONE read-only transaction.
I was not able to find any way to intercept/listen/etc. by the means of Spring.
First I wanted to override the RequestMappingHandlerAdapter, but the resolver calls are well "hidden" inside the invokeHandleMethod method...
The ModelFactory is not a Spring bean, so I cannot write an interceptor either.
So currently I only see a way by completely replacing the RequestMappingHandlerAdapter, but I would really like to avoid that.
Any ideas?
This seems like a design failure to me. OEMIV is usually a sign that you're doing it wrong™.
Instead, do:
@RequestMapping(value = "/link/User_{userId}/Item_{itemId}", method = RequestMethod.POST)
public String linkUserAndItem(@PathVariable("userId") Long userId,
                              @PathVariable("itemId") Long itemId) {
    userService.addItem(userId, itemId);
    return "linked";
}
Where your service layer takes care of fetching and manipulating the entities. This logic doesn't belong in the controller.
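For illustration, the service method could look something like this (a sketch; the entity mapping details are assumptions). A single transaction covers both lookups and the association, so the controller never touches the EntityManager:
@Service
public class UserService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void addItem(Long userId, Long itemId) {
        // both lookups and the update run in one transaction
        User user = em.find(User.class, userId);
        Item item = em.find(Item.class, itemId);
        user.getItems().add(item); // flushed and committed on method exit
    }
}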
