Axon State-Stored Aggregate Test IllegalStateException - spring-boot

PROBLEM: Due to customer technical limitations I have to use Axon with state-stored aggregates in PostgreSQL. When I run a simple Axon test against a JPA entity, I get an IllegalStateException.
RESEARCH: A simplified project on the case is available at https://gitlab.com/ZonZonZon/simple-axon.git
In my test, on
fixture.givenState(MyAggregate::new)
       .when(command)
       .expectState(state -> {
           System.out.println();
       });
I get
The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
java.lang.IllegalStateException: The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
at org.axonframework.common.Assert.state(Assert.java:44)
QUESTION: How to test an aggregate state using Axon and escape the error?

There are some missing parts in your project preventing the test from running properly. I will try to tackle them as concisely as possible:
Your Command should contain the piece of information that connects it to the Aggregate. @TargetAggregateIdentifier is the annotation provided by the framework that connects a certain field to its @AggregateIdentifier counterpart in your Aggregate. You can read more here: https://docs.axoniq.io/reference-guide/implementing-domain-logic/command-handling/aggregate#handling-commands-in-an-aggregate.
That said, a UUID field needs to be added to your Create command.
This information will then be passed into the Created event: events are stored and can be processed both by a replay and by Aggregate re-hydration (upon client restart). They are the source of truth for our information.
An @EventSourcingHandler-annotated method will be responsible for applying the event and updating the Aggregate's values:
@EventSourcingHandler
public void on(Created event) {
    uuid = event.getUuid();
    login = event.getLogin();
    password = event.getPassword();
    token = event.getToken();
}
The test will then look like this:
@Test
public void a_VideochatAccount_Created_ToHaveData() {
    Create command = Create.builder()
            .uuid(UUID.randomUUID())
            .login("123")
            .password("333")
            .token("d00a1f49-9e37-4976-83ae-114726938c73")
            .build();

    Created expectedEvent = Created.builder()
            .uuid(command.getUuid())
            .login(command.getLogin())
            .password(command.getPassword())
            .token(command.getToken())
            .build();

    fixture.givenNoPriorActivity()
           .when(command)
           .expectEvents(expectedEvent);
}
This test will validate the Command part of your CQRS.
I would then suggest separating the Query part from your Aggregate: you will need to handle events with the @EventHandler annotation placed on a method in a Projection @Component class, and implement the piece of logic that takes care of storing the information, in the form that you need, into a PostgreSQL @Entity, using the @Repository JPA way, which I am sure you are familiar with.
You can find useful information in the reference guide at https://docs.axoniq.io/reference-guide/implementing-domain-logic/event-handling, following the video example on the Query Model, based on code that can be found in this repo: https://github.com/AxonIQ/food-ordering-demo/tree/master
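For illustration, a minimal sketch of such a projection, assuming hypothetical AccountEntity/AccountEntityRepository names (they are not from your project):

// Hypothetical query-side projection: handles events and stores a read
// model into PostgreSQL via a Spring Data JPA repository.
@Component
public class AccountProjection {

    private final AccountEntityRepository repository; // assumed JPA @Repository

    public AccountProjection(AccountEntityRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(Created event) {
        // map the event onto the JPA @Entity used for querying
        AccountEntity entity = new AccountEntity();
        entity.setUuid(event.getUuid());
        entity.setLogin(event.getLogin());
        entity.setToken(event.getToken());
        repository.save(entity);
    }
}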
Hope that all is clear,
Corrado.

Related

guava eventbus post after transaction/commit

I am currently playing around with Guava's EventBus in Spring, and while the general functionality is working fine so far, I came across the following problem:
When a user wants to change data on a "Line" entity, this is handled as usual in a backend service. In this service the data is persisted via JPA first, and after that I create a "NotificationEvent" with a reference to the changed entity. Via the EventBus I send the reference of the line to all subscribers.
public void notifyUI(String lineId) {
    EventBus eventBus = getClientEventBus();
    eventBus.post(new LineNotificationEvent(lineId));
}
The EventBus itself is simply created using new EventBus() in the background.
Now in this case my subscribers are on the frontend side, outside of the @Transactional realm. So when I change my data, post the event and let the subscribers get all necessary updates from the database, the actual transaction is not committed yet, which makes the subscribers fetch the old data.
The only quick fix I can think of is handling it asynchronously and waiting for a second or two. But is there another way to post the events using Guava AFTER the transaction has been committed?
I don't think Guava is "aware" of Spring at all, and in particular not of its @Transactional stuff.
So you need a creative solution here. One solution I can think of is to move this code to a place where you're sure that the transaction has finished.
One way to achieve that is using TransactionSynchronizationManager:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // runs only after a successful commit:
        // call the notifyUI method here
    }
});
(On Spring versions before 5.3, extend TransactionSynchronizationAdapter instead, since the TransactionSynchronization interface only gained default methods in 5.3.)
Note that if the transaction fails (rolls back), the method won't be called; in that case you'll probably need the afterCompletion method. See the documentation.
Another possible approach is refactoring your application to something like this:
@Service
public class NonTransactionalService {

    @Autowired
    private ExistingService existing;

    public void entryPoint() {
        String lineId = existing.invokeInTransaction(...);
        // now you know for sure that the transaction has been committed
        notifyUI(lineId);
    }
}

@Service
public class ExistingService {

    @Transactional
    public String invokeInTransaction(...) {
        // do your stuff that you've done before
    }
}
One last thing I would like to mention here is that Spring itself provides an events mechanism that you might use instead of Guava's.
See this tutorial for example
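For instance, a rough sketch using Spring's @TransactionalEventListener (available since Spring 4.2); the service and method names here are illustrative, and LineNotificationEvent is the event class from the question:

// Publisher side: publish a Spring application event inside the transaction.
@Service
public class LineService {

    @Autowired
    private ApplicationEventPublisher publisher;

    @Transactional
    public void changeLine(String lineId) {
        // ... persist the Line changes via JPA ...
        publisher.publishEvent(new LineNotificationEvent(lineId));
    }
}

// Subscriber side: invoked only after the surrounding transaction commits,
// so subscribers read committed data.
@Component
public class LineNotificationListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onLineChanged(LineNotificationEvent event) {
        // notify the UI here
    }
}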

Spring Boot - Camel - Tracking an exchange all the way through

We are trying to set up a very simple auditing database table for a very complex Spring Boot/Camel application with many routes (mostly internal routes using seda://)...the idea being that we record each route's processing outcome in the database table. Then when issues arise we can log in to the database, query the table and pinpoint exactly where the issue happened. I thought I could just use the exchange id as the unique tracking identifier, but quickly learned that all the seda:// routes create new exchanges, or at least that's what I'm seeing (Camel version 2.24.3). Frankly, I don't care what we use for the unique identifier...I can generate a UUID easily enough and then use exchange.setProperty("id-unique", UUID).
I did manage to get something to work using exchange.setProperty("id-exchange", exchange.getExchangeId()) and have it persist the unique identifier through the routes...(I did read that certain pre-defined route prefixes such as jms:// will not persist exchange properties though). The thought being: the very first Processor places the exchangeId (unique id) on the exchange properties, and my tracking logic sits in a processor that I can include as part of the route's definition:
@Override
public void configure() throws Exception {
    // EVENTS : Collect statistics from Camel events
    this.getContext().getManagementStrategy().addEventNotifier(this.camelEventNotifier);

    // INITIAL : ${body} exchange coming from a simple URL endpoint
    //           POST request with an XML message...simulates an MQ
    //           message from Central MQ. The Web/UI service places the
    //           message onto the camel route using producerTemplate.
    from("direct:" + Globals.ROUTEID_LBR_INTAKE_MQ)
        .routeId(Globals.ROUTEID_LBR_INTAKE_MQ)
        .description("Loss Backup Reports MQ XML inbound messages")
        .autoStartup(false)
        .process(processor)
        .process(getTrackingProcessor())
        .to("seda:" + Globals.ROUTEID_LBR_VALIDATION)
        .end();
}
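(For context, a sketch of one possible shape for a tracking processor like the one used above; the TrackerDao type and its insertAuditRow method are placeholders, not real API:)

// Sketch: ensure a unique tracking id exists on the exchange, then record
// an audit row for the current route step.
public class TrackingProcessor implements Processor {

    private final TrackerDao trackerDao; // hypothetical audit DAO

    public TrackingProcessor(TrackerDao trackerDao) {
        this.trackerDao = trackerDao;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        String uniqueId = exchange.getProperty("id-unique", String.class);
        if (uniqueId == null) {
            // first tracked step: generate and stash the unique identifier
            uniqueId = UUID.randomUUID().toString();
            exchange.setProperty("id-unique", uniqueId);
        }
        // one audit row per route step, keyed by the unique id
        trackerDao.insertAuditRow(uniqueId, exchange.getFromRouteId());
    }
}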
This proof-of-concept (POC) allowed me to at least get things tracking like we want...note the multiple rows with the same unique identifier:
ID_ROW  ID_EXCHANGE                           PROCESS_GROUP        PROCESS_STEP    RESULTS_STEP  RESULTS_MESSAGE
1       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-intake-mq   add           lbr-intake-mq
2       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-validation  add           lbr-intake-mq
Thing is, this POC is proving rigid, and it is difficult to record outcomes such as SUCCESS versus EXCEPTION.
My question is, has anyone done anything like this? And if so, how was it implemented? Or is there a fancy way in Camel to handle this that I just couldn't find on the web?
My other ideas were :
Set up an old-fashioned abstract TrackerProcessor class that all my tracked Processors extend. Then just have a handful of methods in there to create, update, etc. the audit entries. Each processor then just calls the inherited methods to create and manage them. The advantage here being the exchange is readily available with all the data involved to store in the database table.
@Component
public abstract class ProcessorAbstractTracker implements Processor {

    @Override
    abstract public void process(Exchange exchange) throws Exception;

    public void createTracker(Exchange exchange) {
        // insert a new audit row for this exchange
    }

    public void updateTracker(Exchange exchange, String theResultsMessage, String theResultsStep) {
        // update the audit row with the step outcome
    }
}
Set up an @Autowired bean that every tracked Camel Processor wires in, and put the tracking logic in the bean. This seems simple and clean. My only concern/question here is how to scope the bean (maybe prototype)...since there would be many routes utilizing the bean concurrently, is there any chance we get mixed processing values? (A stateless sketch follows the snippet below.)
@Autowired
ProcessorTracker tracker;
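Regarding the scoping question: if the tracker keeps no per-exchange state in fields and everything it needs is passed as method arguments, the default singleton scope is safe under concurrency, and prototype scope buys nothing. A stateless sketch (persistence details elided):

// Stateless tracker bean: safe as a singleton because no per-exchange
// data is ever stored in fields.
@Component
public class ProcessorTracker {

    public void createTracker(Exchange exchange) {
        // read the unique id from the exchange and insert a new audit row
    }

    public void updateTracker(Exchange exchange, String resultsMessage, String resultsStep) {
        // update the audit row for this exchange with the step outcome
    }
}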
Other ideas?
tia, adym

axon transaction manager between command and query side

I have a Spring Boot Maven multi-module project with Axon 4.4.2. The hierarchy of the project is as below:
application
--core
--command-side
----command-side-axon
----command-side-rest
--query-side
----query-side-persistence
----query-side-rest
I have an example of creating a new catalog, shown below. When I send a request from command-side-rest, it always returns an id as the response, even if query-side-persistence crashes and the data is not saved in the database. How can I handle transactions in this case? I want that, when query-side-persistence crashes, the event does not get saved in the event store and an exception is thrown.
command-side-rest
@PostMapping
public String save(@RequestBody Catalog catalog) {
    return (String) commandGateway.sendAndWait(new CreateCatalogCommand(catalog));
}
command-side-axon
@CommandHandler
public void handle(CreateCatalogCommand cmd) {
    apply(new CatalogCreatedEvent(cmd.externalId, cmd.name));
}

@EventSourcingHandler
@Order(1)
public void on(CatalogCreatedEvent evt) {
    this.externalId = evt.externalId;
}
query-side-persistence
@EventHandler
@Transactional(propagation = Propagation.REQUIRED)
@Order(2)
public void on(CatalogCreatedEvent event) {}
I think I can give you some information on the matter, concerning your main question:
I want that, when query-side-persistence crashes, the event does not get saved in the event store and an exception is thrown.
Well, that essentially means you do not want to use CQRS at all. CQRS stands for the segregation of your Command Model (e.g. the Aggregate handling the command and publishing the event) and your Query Models (e.g. the classes handling events which update your models).
Segregating these two gives you the benefit that you can optimize your models for both scenarios. Additionally, the request to perform an operation, aka the command, is no longer influenced by whether the subsequent event can actually update a query model. On top of that, it shouldn't be if you are doing CQRS! Having this exact segregation means you can recreate your Query Models whenever you need, entirely separately from performing the commands.
Thus essentially, your Event Store becomes the single model you can base all other models on, each independent from the others. This is why using events when going for CQRS works so nicely with Event Sourcing (which states to recreate your Command Model based on the events it has published).
So, my recommendation? I would embrace the fact that if event handling to update your query model fails, the event has still happened. It doesn't change the fact your system has made the decision to publish that event, so let it be so.
If you do want to have your Command Model and Query Model updated in a single transaction, you could still achieve this by using only the SubscribingEventProcessor in Axon. Again though, I would discuss with your team/business whether that is really what you need.
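For reference, a sketch of how that could look with axon-spring-boot-starter in Axon 4 (this switches all processors to subscribing; it can also be done per processing group via properties):

// Sketch: run event processors as subscribing processors, so query-model
// updates happen in the same thread/transaction that publishes the event.
@Configuration
public class AxonConfig {

    @Autowired
    public void configureProcessors(EventProcessingConfigurer configurer) {
        configurer.usingSubscribingEventProcessors();
    }
}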
Check out some of the posts on the "learning" page of AxonIQ to get a feel for why CQRS, DDD and Event Sourcing can be beneficial for you.
Hope this sheds some light on the situation Aymen!

How can I test that a JPA save actually saves data?

I am using JPA with Spring and saving an entity in a test. In the process of writing a test to validate that an entity's relationship with another entity is correctly set up, I have come across a problem that I hit frequently. I have a test method (set to roll back) that:
Creates entity
Saves entity
Flushes
Retrieves entity
Validates entity
The problem is that when I look at the Hibernate logs, I only see a single insert to the database where I'd expect to see an insert and then a select.
I know this is because Hibernate's trying to save me some time and knows that it's got the entity with the ID I'm trying to retrieve but that bypasses an important step: I want to make sure that the entity actually made it to the database and looks like what I thought it should. What's the best way to deal with this so I can test that the entity is actually in the database?
Note: I assume this involves somehow detaching the entity or telling Hibernate to clear its cache, but I'm not sure how to do that when all I have access to is a JpaRepository object.
Some code:
public interface UserRepository extends JpaRepository<User, Long> {
    //...
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = JpaConfig.class, // JpaConfig just loads our config stuff
        loader = AnnotationConfigContextLoader.class)
@TransactionConfiguration(defaultRollback = true)
public class UserRepositoryTest {

    @Autowired
    private UserRepository userRepository;

    @Test
    @Transactional
    public void testRoles() {
        User user = new User("name", "email@email.com");
        // eventually more here to test entity-to-entity relationship
        User savedUser = userRepository.save(user);
        userRepository.flush();
        savedUser = userRepository.findOne(savedUser.getId());
        Assert.assertNotNull(savedUser);
        // more validation here
    }
}
You basically want to test Hibernate's functionality instead of your own code. My first suggestion: don't do it! It has already been tested and validated many times.
If you really want to test it, there are a couple of options:
Execute a query (rather than a get). The query will get executed (you should see it in the log) and the result interpreted. The object you get back will still be the same object you saved, since it is in the session.
You can evict the object from the session and then get it again. If you use SessionFactory.getCurrentSession(), you'll get the same session the repository is using. With that you can evict the object.
You have two strategies:
(1) issue a native SQL query, thereby bypassing any JPA cache;
(2) ensure the persistence context is cleared before reloading.
For (1), you can change your tests to extend the following Spring class which, in addition to automatically beginning/rolling back a transaction at the start/end of each test, will give you access to a Spring JdbcTemplate you can use to issue the native SQL:
http://docs.spring.io/spring-framework/docs/2.5.6/api/org/springframework/test/context/junit4/AbstractTransactionalJUnit4SpringContextTests.html
http://docs.spring.io/spring-framework/docs/2.5.6/api/org/springframework/jdbc/core/simple/SimpleJdbcTemplate.html
For (2), you can clear the persistence context by doing the following (where the EntityManagerFactory is injected into your test):
EntityManagerFactoryUtils.getTransactionalEntityManager(entityManagerFactory).clear();
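Applied to the test above, that could look roughly like this (assuming the EntityManagerFactory is @Autowired into the test class):

@Test
@Transactional
public void testRolesWithClear() {
    User savedUser = userRepository.save(new User("name", "email@email.com"));
    userRepository.flush();
    // detach all managed entities so the next lookup issues a real select
    EntityManagerFactoryUtils.getTransactionalEntityManager(entityManagerFactory).clear();
    savedUser = userRepository.findOne(savedUser.getId());
    Assert.assertNotNull(savedUser);
}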
See the following base test class which I normally use; it demonstrates the above and also allows for populating the database with known data before each test (via DBUnit):
https://github.com/alanhay/spring-data-jpa-bootstrap/blob/master/src/test/java/uk/co/certait/spring/data/repository/AbstractBaseDatabaseTest.java
(In fact in the above I am actually creating a new JdbcTemplate by injecting a datasource. Can't remember why...)

ArgumentResolvers within single transaction?

I am wondering if there is a way to wrap all argument resolvers, like those for @PathVariable or @ModelAttribute, into one single transaction. We are already using the OpenEntityManagerInView (OEMIV) filter, but Spring/Hibernate is spawning too many transactions (one per select if they are not wrapped within a service class, which is the case in @PathVariable resolvers for example).
While the system is still pretty fast, I think this is unnecessary and not consistent with the rest of the architecture.
Let me explain:
Let's assume that I have a request mapping including two entities, and the conversion is based on a StringToEntityConverter.
The actual URL would be like this if we support GET: http://localhost/app/link/User_231/Item_324
@RequestMapping(value = "/link/{user}/{item}", method = RequestMethod.POST)
public String linkUserAndItem(@PathVariable("user") User user, @PathVariable("item") Item item) {
    userService.addItem(user, item);
    return "linked";
}
// StringToEntityConverter (simplified)
public Object convert(String classAndId) {
    return entityManager.find(getClass(classAndId), getId(classAndId));
}
The UserService.addItem() method is transactional so there is no issue here.
BUT:
The entity converter resolves the User and the Item against the database before the call to the controller, thus creating two selects, each running in its own transaction. Then we have @ModelAttribute methods, which might also issue some selects, and each will spawn a transaction.
And this is what I would like to change: I would like to create ONE read-only transaction.
I was not able to find any way to intercept/listen/etc. by means of Spring.
First I wanted to override the RequestMappingHandlerAdapter but the resolver calls are well "hidden" inside the invokeHandleMethod method...
The ModelFactory is not a Spring bean, so I cannot write an interceptor either.
So currently I only see a way by completely replacing the RequestMappingHandlerAdapter, but I would really like to avoid that.
Any ideas?
This seems like a design failure to me. OEMIV is usually a sign that you're doing it wrong™.
Instead, do:
@RequestMapping(value = "/link/User_{userId}/Item_{itemId}", method = RequestMethod.POST)
public String linkUserAndItem(@PathVariable("userId") Long userId,
                              @PathVariable("itemId") Long itemId) {
    userService.addItem(userId, itemId);
    return "linked";
}
Where your service layer takes care of fetching and manipulating the entities. This logic doesn't belong in the controller.
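A sketch of what that service method might look like (the repository fields and the addItem domain method are assumptions), with both lookups sharing one transaction:

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository; // hypothetical repositories
    @Autowired
    private ItemRepository itemRepository;

    @Transactional
    public void addItem(Long userId, Long itemId) {
        // both selects and the update run in a single transaction/session
        User user = userRepository.findOne(userId);
        Item item = itemRepository.findOne(itemId);
        user.addItem(item); // assumed domain method
    }
}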
