I am using AxonIQ Axon Framework version 4.5.3 with Spring Boot and a custom event store.
I'm using MongoEventStorageEngine and have configured a separate MongoDB database for the event storage.
I am doing some business logic against my business database in a microservice, and in the same microservice I've configured the custom event storage.
But a few tables (viz. association_value_entry, saga_entry, token_entry) are getting created in my business database, which is a PostgreSQL database.
Why is Axon Framework creating new tables in my business database when I have already configured a separate MongoDB database for event storage? Ideally, all the database objects Axon needs in order to work should be created in the event storage database rather than in my business database.
The tables you mentioned should be part of your 'read' model (I believe that is what you call the business database).
They are not used for event storage or event sourcing, but rather for specific things that are controlled on the client side. For example, token_entry is, among other things, the table where your app keeps track of the tokens and events it has already consumed - you can read more about it here. It is similar for the saga tables: sagas are stored on the client side and have nothing to do with the event store - you can read more about it here.
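If you nevertheless want those tables out of your PostgreSQL business database, the Axon Mongo extension also provides Mongo-backed token and saga stores that you can register as beans. Below is a minimal sketch, assuming the axon-mongo extension is on the classpath; the database name "axon" is a placeholder, and the exact package/builder details should be verified against your extension version.

```java
import com.mongodb.client.MongoClient;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.extensions.mongo.DefaultMongoTemplate;
import org.axonframework.extensions.mongo.MongoTemplate;
import org.axonframework.extensions.mongo.eventhandling.saga.repository.MongoSagaStore;
import org.axonframework.extensions.mongo.eventsourcing.tokenstore.MongoTokenStore;
import org.axonframework.modelling.saga.repository.SagaStore;
import org.axonframework.serialization.Serializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonMongoStoreConfig {

    // "axon" is an assumed database name on your dedicated MongoDB instance
    @Bean
    public MongoTemplate axonMongoTemplate(MongoClient mongoClient) {
        return DefaultMongoTemplate.builder()
                                   .mongoDatabase(mongoClient, "axon")
                                   .build();
    }

    // Replaces the auto-configured JPA TokenStore, so token_entry is no
    // longer created in the relational business database
    @Bean
    public TokenStore tokenStore(MongoTemplate axonMongoTemplate, Serializer serializer) {
        return MongoTokenStore.builder()
                              .mongoTemplate(axonMongoTemplate)
                              .serializer(serializer)
                              .build();
    }

    // Replaces the JPA SagaStore, covering saga_entry and association_value_entry
    @Bean
    public SagaStore<Object> sagaStore(MongoTemplate axonMongoTemplate, Serializer serializer) {
        return MongoSagaStore.builder()
                             .mongoTemplate(axonMongoTemplate)
                             .serializer(serializer)
                             .build();
    }
}
```

With these beans present, Axon's Spring Boot auto-configuration should back off from creating the JPA-based stores, so those three tables would no longer appear in PostgreSQL.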
Related
I'm using CQRS with Axon Framework in a project, with Kafka as the event bus and MongoDB as the event store.
I have two microservices: one for the command side and the other for the query side.
On the query side I'm trying to use a MySQL database for storing aggregate events, but the event handler does not work and I don't know why.
Command Microservice
Query Microservice
Could you share the configuration of the event handler? Or maybe some kind of error you are getting? It could be a lot of things. It might be that, because you use MySQL with Spring Boot, Axon is actually trying to use MySQL as the event store rather than MongoDB.
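If that turns out to be the cause, explicitly registering a Mongo-backed EventStorageEngine usually fixes it. A minimal sketch, assuming the axon-mongo extension is on the classpath and an Axon MongoTemplate bean named axonMongoTemplate already exists (both assumptions):

```java
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.extensions.mongo.MongoTemplate;
import org.axonframework.extensions.mongo.eventsourcing.eventstore.MongoEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventStoreConfig {

    // With an EventStorageEngine bean present, Axon's Spring Boot
    // auto-configuration should no longer fall back to the JPA (MySQL) engine
    @Bean
    public EventStorageEngine eventStorageEngine(MongoTemplate axonMongoTemplate) {
        return MongoEventStorageEngine.builder()
                                      .mongoTemplate(axonMongoTemplate)
                                      .build();
    }
}
```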
I'm here to ask you for a suggestion on which technology to use to add new features to an existing application based on Spring Boot.
The new features consist of some workflows, exposed as synchronous REST services, that must update the database and call REST services exposed by external applications, which will consequently update their own databases.
For example, a service could implement this workflow:
1) insert rows in the database
2) call the REST service of application X, which will update its database
3) update rows in the database
4) call the REST service of application Y, which will update its database
5) update rows in the database
This workflow must be synchronous, and it is started by a human operator who will receive the outcome within a few seconds.
If, for example, step 4) fails, I need to:
make a compensating call to another REST service of application X, in order to undo what it did in step 2)
roll back the inserts/updates made in my database in steps 1) and 3)
Which technology, framework, tool or other would you use? In the past I implemented a similar scenario using Oracle SOA, but in this case I would rather avoid introducing new infrastructure into my application based on Spring Boot.
Thank you
I guess you need to learn a bit more about Spring Framework and Spring Boot.
1. Insert rows in the database: Spring Data JPA.
2. Call the REST service of application X, which will update its database: an HTTP client such as RestTemplate or WebClient.
3. Update rows in the database: Spring Data JPA (again).
4. Call the REST service of application Y, which will update its database: RestTemplate or WebClient (again).
5. Update rows in the database: Spring Data JPA once more.
And so on.
If you want a real workflow engine, you can use Activiti.
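To make the compensation part of the question concrete, here is a minimal sketch of such a workflow using plain Spring; the endpoint URLs, the Order entity and the OrderRepository are hypothetical placeholders, not a definitive implementation:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderWorkflow {

    private final RestTemplate restTemplate = new RestTemplate();
    private final OrderRepository orderRepository; // hypothetical Spring Data JPA repository

    public OrderWorkflow(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // Steps 1, 3 and 5 run in one local transaction; if anything throws,
    // they roll back automatically. Steps 2 and 4 are remote calls and
    // therefore need explicit compensation.
    @Transactional
    public void execute(Order order) {
        orderRepository.save(order);                                                 // step 1
        restTemplate.postForEntity("http://app-x/api/resource", order, Void.class);  // step 2
        order.setStatus("X_DONE");
        orderRepository.save(order);                                                 // step 3
        try {
            restTemplate.postForEntity("http://app-y/api/resource", order, Void.class); // step 4
        } catch (RestClientException e) {
            // Step 4 failed: compensate step 2 on application X, then rethrow
            // so the local transaction (steps 1 and 3) rolls back as well
            restTemplate.postForEntity("http://app-x/api/compensate", order, Void.class);
            throw e;
        }
        order.setStatus("COMPLETED");
        orderRepository.save(order);                                                 // step 5
    }
}
```

Note that RestClientException is a RuntimeException, so rethrowing it makes @Transactional roll back the local writes while the explicit call compensates the remote one.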
We are rewriting a legacy app using microservices. Each microservice has its own DB. Certain API calls require calling another microservice and persisting data into both DBs. How can we implement distributed transaction management effectively in this case?
Since we have not migrated completely to the new microservices environment, we still write back data to the old monolith. For this, when a microservice endpoint is called, we call the monolith service from the microservice API to write back the same data. How do we deal with the same problem in this case as well?
Thanks in advance.
There are different distributed transaction frameworks, usually included and maintained as part of heavyweight application servers like JBoss and WebLogic.
The standard usually used by such services is Jakarta Transactions (JTA; formerly Java Transaction API).
Tomcat and Spring don't support distributed transactions out of the box. You can add this functionality using a third-party framework like Atomikos (just googled it; I've never used it).
But remember, a microservice with JTA is not "micro" anymore :-)
Here is a small overview of available technologies and possible workarounds:
https://www.baeldung.com/transactions-across-microservices
If you can afford to write to the legacy system later (i.e. allow some latency between updating the microservice and the legacy system), you can use the outbox pattern.
Essentially, that means you write to the microservice database in a transactional way, both to the tables you usually write to and to an additional "outbox" table of changes to apply, and then have a separate process that reads that table and updates the legacy system.
You can also achieve something similar with a change data capture mechanism on the DB used in the microservice(s).
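For illustration, a minimal sketch of the outbox pattern under these assumptions; the entities, repositories and LegacyClient are hypothetical placeholders:

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class CustomerService {

    private final CustomerRepository customerRepository; // hypothetical JPA repository
    private final OutboxRepository outboxRepository;     // hypothetical JPA repository

    CustomerService(CustomerRepository customerRepository, OutboxRepository outboxRepository) {
        this.customerRepository = customerRepository;
        this.outboxRepository = outboxRepository;
    }

    // One local transaction covers both writes: the business change and its
    // outbox record are committed together, or neither is
    @Transactional
    public void updateCustomer(Customer customer, String payload) {
        customerRepository.save(customer);
        outboxRepository.save(new OutboxEntry("CUSTOMER_UPDATED", payload));
    }
}

@Component
class OutboxRelay {

    private final OutboxRepository outboxRepository; // hypothetical JPA repository
    private final LegacyClient legacyClient;         // hypothetical HTTP client for the monolith

    OutboxRelay(OutboxRepository outboxRepository, LegacyClient legacyClient) {
        this.outboxRepository = outboxRepository;
        this.legacyClient = legacyClient;
    }

    // A separate process drains the outbox and applies each change to the
    // monolith; retries are safe as long as the legacy call is idempotent
    @Scheduled(fixedDelay = 5000)
    public void relay() {
        for (OutboxEntry entry : outboxRepository.findByProcessedFalse()) {
            legacyClient.apply(entry);
            entry.markProcessed();
            outboxRepository.save(entry);
        }
    }
}
```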
Check out this answer on "Why is 2-phase commit not suitable for a microservices architecture?": https://stackoverflow.com/a/55258458/3794744
I am starting a project where I want to have multiple services that communicate with each other using Axon Server.
I have more than one service with the following stack:
Spring Boot 2.3.0.RELEASE (with starters: Data, JPA, Web, MySQL)
Axon Spring Boot Starter 4.2.1
Each of the services uses a different schema in the MySQL server.
When I start a Spring Boot service with the Axon framework activated, some tables for tokens, sagas, etc. are created in the database schema of each application.
I have two questions:
1. In the architecture that I am trying to build, should I have only one database for all the 'Axon-enabled' services, so the sagas, tokens, events, etc. are only in one place?
2. If so, can anyone provide an example of how to configure a custom EntityManagerProvider to have the database of the service separated from the database of Axon?
I assume each of your microservices models a sub-domain. Since the events do model a (sub)domain, along with aggregates, entities and value objects, I very much favor keeping the Axon-related schemas separated, most likely along with the databases/schemas corresponding to each service. I would thus prefer a modeling-first approach when considering such technical options.
It is what we're currently doing in our microservices ecosystem.
There is at least one more technical reason to go with the same schema (one per sub-domain, that is) for both Axon assets and application-specific assets; it was pointed out to me by my colleague Marian. If you use (or will use) event sourcing, thus reconstructing the state of an aggregate by fetching and applying all past events produced while handling commands, then you will most likely need transactions that encompass both this fetching and the command-handling code, which might in turn trigger (through events) writes to your microservice-specific database.
Axon can require five tables, depending on your usage of Axon, of course.
These are:
The Event table.
The Snapshot Event table.
The Token table.
The Saga table.
The Association Value Entry table.
When using Axon Server, tables 1 and 2 will not be created since Axon Server is the storage solution for events and snapshots.
When not using Axon Server, I would indeed suggest having a dedicated data source for these.
Table 3, which backs the TokenStore, should be as close as possible to your query models. The tokens portray how far a given EventProcessor has come in handling events. As these EventProcessors typically service projectors that create your query models, keeping them together is sensible from a transactional perspective.
Tables 4 and 5 are both required for sagas. The "Saga table" stores the serialized sagas, whereas the "Association Value Entry table" carries the association values between events and sagas so that the framework can load the right sagas. I'd store these either in a dedicated database or along with the other tables of the given (micro)service.
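Regarding your second question, a minimal sketch of a custom EntityManagerProvider pointing at a second, Axon-dedicated datasource could look like the following. It assumes a second EntityManagerFactory bean (here called axonEntityManagerFactory) that is configured against the dedicated schema and maps Axon's token/saga entities; verify the exact wiring against your Axon version.

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.jpa.SimpleEntityManagerProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.SharedEntityManagerCreator;

@Configuration
public class AxonJpaConfig {

    // "axonEntityManagerFactory" is an assumed second EntityManagerFactory,
    // pointing at the Axon-dedicated database/schema
    @Bean
    public EntityManagerProvider entityManagerProvider(
            @Qualifier("axonEntityManagerFactory") EntityManagerFactory axonEntityManagerFactory) {
        // A Spring-managed, thread-bound EntityManager proxy for Axon to use
        EntityManager sharedEntityManager =
                SharedEntityManagerCreator.createSharedEntityManager(axonEntityManagerFactory);
        return new SimpleEntityManagerProvider(sharedEntityManager);
    }
}
```

With this bean present, Axon's JPA-based TokenStore and SagaStore should use the dedicated schema, while your application's own repositories keep using the service's primary datasource.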
I have a particular instance of a class that loads some data from the database, so every time the database is updated the system should recreate the instance of that class to get the updated data.
There are several trivial solutions:
(1) Use scheduling to load the latest data from the DB periodically.
(2) Provide a web service, such as a RESTful API, to reload the latest data from the DB on demand.
(3) If your DB supports event-driven listeners, you can trigger your application to reload either by invoking the service described in (2) or by sending a message to a queue and handling it with a consumer.
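As an illustration of option (1), here is a minimal sketch using Spring's scheduling support; Settings and SettingsRepository are hypothetical placeholders:

```java
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ReloadableSettings {

    private final SettingsRepository repository; // hypothetical data access object
    private final AtomicReference<Settings> current = new AtomicReference<>();

    public ReloadableSettings(SettingsRepository repository) {
        this.repository = repository;
        current.set(repository.loadAll()); // initial load at startup
    }

    // Recreate the instance every 30 seconds; readers always see a complete,
    // consistent snapshot because the reference swap is atomic
    @Scheduled(fixedDelay = 30_000)
    public void refresh() {
        current.set(repository.loadAll());
    }

    public Settings get() {
        return current.get();
    }
}
```

Remember that @EnableScheduling must be present on a configuration class for the @Scheduled method to run.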