Use eventstore with Axon Framework 3 and Spring Boot - spring-boot

I'm trying to build a simple distributed application, and I would like to save all the events into an event store.
For this reason, as suggested in the Axon documentation, I would like to use MySQL as the event store.
Since I don't have much experience with Spring, I can't figure out how to get it working.
I have two separate services: one for the command side and one for the query side. Since I'm planning to add more services, I would like to know how to configure them to use an external event store (not stored inside any of these services).
For the distribution of the commands and events, I'm using RabbitMQ:
@Bean
public org.springframework.amqp.core.Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("AxonEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("AxonEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This creates the required queue on a local running RabbitMQ instance (with default username and password).
My question is: How can I configure Axon to use mysql as an event store?

As the Reference Guide currently does not specify this, I'll point it out here.
Currently you've roughly got two approaches you can follow when distributing an Axon application or separating an Axon application into (micro)services:
Use a fully open-source approach
Use AxonHub / AxonDb
Taking approach 2, which you can do in a developer environment, you would only have to run AxonHub and AxonDb and configure them in your application.
That's it, you're done; you can scale out your application and all the messages are routed as desired.
If you want to take route 1, however, you will have to provide several configurations.
Firstly, you state you use RabbitMQ to route commands and events.
In fact, the framework does not support using RabbitMQ to route commands at all. Do note it is a solution for distributing EventMessages, just not CommandMessages.
I suggest using either JGroups or Spring Cloud to route your commands in an open-source scenario (the Reference Guide has pages on distributing the CommandBus with both JGroups and Spring Cloud).
To distribute your events, you can take three approaches:
Use a shared database for your events.
Use AMQP to send your events to different instances.
Use Kafka to send your events to different instances.
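For the AMQP route, a minimal sketch of the Spring Boot configuration could look like the following, assuming the axon-amqp module is on the classpath and reusing the exchange name from the question:

```properties
# Hedged sketch: points Axon's AMQP auto-configuration at the exchange
# declared in the question's RabbitMQ setup. Requires the axon-amqp
# dependency; the exchange name is taken from the question.
axon.amqp.exchange=AxonEvents
```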
My personal preference when starting an application, though, is to begin with one monolith and separate when necessary.
I think the term 'Evolutionary Microservices' captures this nicely.
Anyhow, if you use the messaging paradigm supported by Axon to its fullest, splitting out the command side from the query side afterwards should be quite simple.
If you'd in addition use AxonHub to distribute your messages, then you are practically done.
Concluding, though, I did not find a very exact request in your question.
Does this give you the required information to proceed, @Federico Ponzi?
Update
After having given it some thought, I think your solution is quite simple.
You are using Spring Boot and you want to set up your EventStore to use MySQL. For Axon to set the right EventStorageEngine (the infrastructure component used under the covers to read/write events), you can simply add a dependency on spring-boot-starter-data-jpa and point your datasource at MySQL.
Axon's auto-configuration will in that scenario automatically notice that you have Spring Data JPA on your classpath, and as such will set the JpaEventStorageEngine.
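A minimal sketch of what that datasource configuration could look like, assuming a local MySQL instance; the URL, schema name, and credentials below are placeholders:

```properties
# Hedged sketch: standard Spring Boot datasource properties so the
# JpaEventStorageEngine writes events to MySQL. Add spring-boot-starter-data-jpa
# and the MySQL JDBC driver to your build; values below are illustrative.
spring.datasource.url=jdbc:mysql://localhost:3306/axonevents
spring.datasource.username=axon
spring.datasource.password=secret
spring.jpa.hibernate.ddl-auto=update
```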

Related

Using multiple file suppliers in Spring Cloud Stream

I have an application with the file-supplier Spring Cloud module included. My workflow is: track file creation/modification in a particular directory, then push events to a Kafka topic if there are such events. FileSupplierConfiguration is used to configure this file supplier. But now I have to track one more directory and push events to another, relevant Kafka topic. So there is an issue, because there is no possibility to include multiple FileSupplierConfigurations in the project to configure another file supplier. I remember that one of the main principles of microservices, for which spring-cloud-stream was designed, is 'you do one thing and do it well without affecting others', but this is still the same microservice with the same tracking/pushing functionality, just for another directory and topic. Is there any possibility to add one more file supplier with its own configuration using the file-supplier module? Or is the best solution for this issue to run one more application instance with another configuration?
Yes, the best way is to have another instance of this application, but with its own specific configuration properties. This is really how these functions have been designed: microservices based on convention over configuration. What you are asking really contradicts Spring Boot's expectations. Imagine you needed to connect to several databases: you can only have a single JdbcTemplate auto-configured; the rest is only possible manually. It is better to have the same code base, which relies on the auto-configuration and may apply different props.
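As a sketch of that approach, each instance would carry its own properties file; the directory and destination names below are illustrative, and the property keys follow the file-supplier and Spring Cloud Stream conventions:

```properties
# Instance A (hedged sketch): same code base, instance-specific props.
# A second instance would use its own directory and destination values.
file.supplier.directory=/data/incoming-a
spring.cloud.stream.bindings.fileSupplier-out-0.destination=file-events-a
```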

Different polling delay for different suppliers in Spring Cloud Stream Function

I'm trying to implement suppliers using Spring Cloud Function and Kafka. I need one supplier to publish every 10 secs and another to publish every 30 secs. I can see from the documentation that I can change the delay using the spring.cloud.stream.poller.fixed-delay property. Reference
But I need to set a different delay for each topic. Is there any way to do it?
From the spring-cloud-function perspective there isn't any kind of polling, as it is not the responsibility of the framework.
From the spring-cloud-stream perspective, which uses spring-cloud-function, there is indeed the mechanism you have described. However, keep in mind that spring-cloud-stream is primarily designed to support the concept of microservices (not to be your general messaging framework), and in microservices we embrace the "do one thing and do it well without affecting others" approach. So having more than one supplier kind of goes against this model.
If you are building a general-purpose messaging app, then I'd suggest using the Spring Integration framework, which provides all the necessary hooks to accomplish what you need, but will require a bit more configuration detail.

How to consume different commands in different microservices using Axon?

I have 2 different microservices, todo-service and validation-service, and command types CreateTodoCommand and ValidateTodoCommand.
If I have one command handler in one service and another one in the second service, I receive a "No node known to accept" exception on sending a command that the service is not aware of.
Can I split my @CommandHandlers to have them in different services?
Yes, that's definitely possible, @Stephan.
Assuming you're in a Spring Boot environment, the only thing you have to do is set the right dependency for the type of DistributedCommandBus you want (thus Spring Cloud or JGroups) and set the axon.distributed.enabled property to true.
The framework will automatically search for all the @CommandHandler annotated functions. If your application still states it cannot find a node to accept the command, then something is failing during the wiring process.
Or put more simply, I can think of two reasons:
1) Your @CommandHandler annotated functions aren't registered with the DistributedCommandBus.
2) The message-handling information isn't shared between your nodes.
Anyhow, the simple answer to your question is: yes, you can have your @CommandHandler annotated functions in different services.
Why it isn't working, though, is something different.
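As a sketch, with the JGroups flavour this boils down to adding the axon-distributed-commandbus-jgroups dependency (the Axon 3 artifact name; there is a Spring Cloud counterpart as well) and flipping one property:

```properties
# Hedged sketch: enables the DistributedCommandBus once a distributed
# command bus module (JGroups or Spring Cloud) is on the classpath.
axon.distributed.enabled=true
```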

blocking consumers in apache camel using existing components

I would like to use the Apache Camel JDBC component to read an Oracle table. I want Camel to run in a distributed environment to meet availability concerns. However, the table I am reading is similar to a queue, so I only want to have a single reader at any given time so I can avoid locking issues (messy in Oracle).
If the reader goes down, I want another reader to take over.
How would you accomplish this using the out-of-the-box Camel components? Is it possible?
It depends on your deployment architecture. For example, if you deploy your Camel apps on Servicemix (or ActiveMQ) in a master/slave configuration (for HA), then only one consumer will be active at a given time...
But, if you need multiple running (clustered for scalability), then (by default) they will compete/duplicate reads from the table unless you write your own locking logic.
This is easy using Hazelcast Distributed Locking. There is a camel-hazelcast component, but it doesn't support the lock API. Once you configure your apps to participate in a Hazelcast cluster, just use the lock API around any code that you need to synchronize for a given object...
import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;

Lock lock = Hazelcast.getLock(myLockedObject);
lock.lock();
try {
    // do something here
} finally {
    lock.unlock();
}

OSGI bundle (or service)- how to register for a given time period?

Searching did not give me a hint for how to handle the following situation:
I'd love to have two OSGi implementations of the same interface: one is regular, the other should work (be active/present/whatever) only during a given time period (e.g. for the Christmas weeks :))
The main goal is to call the same interface without specifying any flags/properties and without manually switching the ranking. The application should somehow switch implementations for this special period, doing the other/regular job before and after :)
I'm a newbie, so maybe I don't completely understand the OSGi concept somewhere; sorry for that, and please give me a hint or link. Sorry for my English.
Using Felix/Equinox with Apache Aries.
The publisher of a service can register and unregister that service whenever it likes using the normal API. If it chooses then it can do so according to some regular schedule.
If there is another service instance that is available continuously, then the consumer of the service will sometimes see two instances of the service and sometimes see one. When there is only one instance available then it is trivial to get only that instance. When there are two instances then you need a way to ensure you get your "preferred" instance. The SERVICE_RANKING property is a way to do this. The getService method of a normal ServiceTracker will always return the higher ranked service, so this would appear to satisfy your requirement.
I have yet to see an OSGi container that supports date/time-based availability of services at the framework level.
If I were you I would simply drop a proxy service in front of the two interface implementations and put the service invocation based on date logic in there.
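A minimal, plain-Java sketch of that proxy idea (all names here are illustrative, the date window is hard-coded for the example, and a real OSGi proxy would look up the two delegates from the service registry instead of instantiating them):

```java
import java.time.LocalDate;

// Hedged sketch: GreetingService and its implementations are made-up
// stand-ins for the two registered services from the question.
interface GreetingService {
    String greet();
}

class RegularGreeting implements GreetingService {
    public String greet() { return "Hello"; }
}

class HolidayGreeting implements GreetingService {
    public String greet() { return "Merry Christmas"; }
}

// The proxy picks a delegate per call based on the date, so callers never
// need flags, properties, or service-ranking tricks.
class GreetingProxy implements GreetingService {
    private final GreetingService regular = new RegularGreeting();
    private final GreetingService holiday = new HolidayGreeting();
    // Illustrative hard-coded window; a real version would read this
    // from configuration.
    private final LocalDate from = LocalDate.of(2024, 12, 20);
    private final LocalDate to = LocalDate.of(2025, 1, 2);

    GreetingService delegateFor(LocalDate date) {
        boolean inWindow = !date.isBefore(from) && !date.isAfter(to);
        return inWindow ? holiday : regular;
    }

    public String greet() {
        return delegateFor(LocalDate.now()).greet();
    }
}
```

The OSGi-specific wiring is deliberately left out; the point is only that consumers see a single service (the proxy) while the date logic stays in one place.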
I don't believe there is any framework support for what you are asking for.
If you are intent on avoiding service filters, you might try this.
Implement a PolicyService. This service is in charge of deciding which instance of your service should be registered at a given point in time. When it's time for the policy service to switch implementations, it just uses the register/unregister APIs as usual. Your policy service implementation can read in a config file that specifies the date-range-to-service-implementation mapping. This will allow you to add new behavior by modifying your config file and installing a new bundle with the new service.
I agree with Neil that a service should only publish itself if it can actually be invoked. My solution to this problem would be to have all service producers be dependent on a "time constraint dependency". Whilst such a dependency is not available in the standard dependency frameworks (like Declarative Services, Blueprint, iPOJO) it is easily implemented with the Apache Felix Dependency Manager, which allows you to create your own types of dependencies. Note that writing such a new dependency once is some work, but if this is a core piece of your application I think it's worth it. Service consumers would not require any special logic, they would simply invoke the service that was there.
Ok, what I have finally done:
Implemented a common dispatcher bundle, and I call any of the services only through it (therefore cron is not needed, as calls are on-demand).
When the dispatcher obtains a request, it searches for the interface in its own cache, and
when there is more than one service with the same ranking and both are equal (registered),
the dispatcher resolves the needed service via my own @TimingPolicy annotation with "from" and "to" fields.
For the given server date/time, the dispatcher returns the proper service instance.
Much work, I should say :) But it works nearly perfectly :)
Thanks everybody for giving me hints, it was really helpful!!!