How to consume different commands in different microservices using Axon?

I have 2 different microservices, todo-service and validation-service, and two command types, CreateTodoCommand and ValidateTodoCommand.
If I have one command handler in one service and the other in the second service, I receive a "No node known to accept" exception when sending a command that the receiving service is not aware of.
Can I split my @CommandHandlers so that they live in different services?

Yes, that's definitely possible, @Stephan.
Assuming you're in a Spring Boot environment, the only thing you have to do is add the right dependency for the type of DistributedCommandBus you want (thus Spring Cloud or JGroups) and set the axon.distributed.enabled property to true (a short sketch follows at the end of this answer).
The framework will automatically search for all the @CommandHandler annotated functions. If your application still states it cannot find a node to accept the command, then something fails during the wiring process.
Put more simply, I can think of two reasons:
1) Your @CommandHandler annotated functions aren't registered with the DistributedCommandBus.
2) The message-handling information isn't shared between your nodes.
Anyhow, the simple answer to your question is: yes, you can have your @CommandHandler annotated functions in different services.
Why it isn't working, though, is a different matter.
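As a minimal sketch, assuming Axon's Spring Boot autoconfiguration picks up the handlers (the command and service names come from the question; everything else is illustrative), the two handlers simply live in the two applications:

import org.axonframework.commandhandling.CommandHandler;
import org.springframework.stereotype.Component;

// todo-service
@Component
public class TodoCommandHandler {

    @CommandHandler
    public void handle(CreateTodoCommand command) {
        // create the todo item here
    }
}

// validation-service (a separate Spring Boot application)
@Component
public class ValidationCommandHandler {

    @CommandHandler
    public void handle(ValidateTodoCommand command) {
        // validate the todo item here
    }
}

// stub commands for the sketch; with a DistributedCommandBus a command
// typically carries a routing key, e.g. a field annotated with
// @TargetAggregateIdentifier
class CreateTodoCommand { }
class ValidateTodoCommand { }

// application.properties of both services:
// axon.distributed.enabled=true

With the JGroups or Spring Cloud connector on both classpaths, each node advertises which commands it can handle, so a ValidateTodoCommand dispatched from todo-service should be routed to validation-service, and vice versa.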

Related

Spring ServiceFacade

Sorry for a potentially silly question - but is it acceptable with Spring to create one dedicated service, let's say a ServiceFacade, inject 20-30 other services into it, and then pass such a ServiceFacade reference as a parameter to different business logic? Will such an approach lead to issues within the application?
Yes, it is possible; Spring will correctly handle a bean with 20-30 dependencies. However, it is discouraged from a design point of view. Instead of one ServiceFacade you might have multiple facades, each with a manageable number of dependencies (e.g. 5), and a factory returning the different facade instances.
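For illustration, a hedged sketch of one such smaller facade (all type names hypothetical), using constructor injection so the dependency count stays visible:

import org.springframework.stereotype.Service;

class Order { }
interface OrderService { void save(Order order); }
interface PaymentService { void charge(Order order); }
interface ShippingService { void schedule(Order order); }

@Service
public class OrderFacade {

    private final OrderService orderService;
    private final PaymentService paymentService;
    private final ShippingService shippingService;

    public OrderFacade(OrderService orderService,
                       PaymentService paymentService,
                       ShippingService shippingService) {
        this.orderService = orderService;
        this.paymentService = paymentService;
        this.shippingService = shippingService;
    }

    // business logic receives this narrow facade instead of one with 20-30 dependencies
    public void placeOrder(Order order) {
        paymentService.charge(order);
        shippingService.schedule(order);
        orderService.save(order);
    }
}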

Spring Beans Dependency Injection with different configurations

I have the following doubt, probably a very basic one, that I have already managed to work out, but I would like to hear whether there is a different approach or whether I am actually getting something wrong.
Background
I have a Spring Boot implementation with a classic layered approach, using Spring stereotypes and wiring everything up via field DI (yes... I am aware it is not the best approach):
Service -> Repository -> (Something else)
In my case, (something else) is a third-party REST API which I am calling using a RestTemplate with a specific configuration. The current solution has many services and repositories to deal with each of the third party's domain entities, all of them using the same RestTemplate bean. The bean is injected at the repository level.
Problem
So now I have been told by the third-party system that, depending on which business scenario my local services are executing, the repositories need to use one of two different users; therefore, I assume a different RestTemplate config needs to be added. At first glance this drives me to move the decision of which RestTemplate to use even higher, to the service level rather than the repository level. So I would need to have, let's say, a service A under a specific context whose dependency (the repository) needs a specific template, and the same service A under another context with a different dependency config.
Approach
The approach that I took is to have a configuration class where I generate different versions of the same service with different dependencies; in particular, their repositories each use a specific template (a sketch follows below). Github Example
This approach seems odd to me because up till now I have never had to do something like this, and it leaves me wondering whether something different can be done to achieve the same.
Another approach would be to inject both RestTemplates into the base repository and decide, via an extra parameter in each method used at service and repository level, which one to use. I dislike that option.
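A hedged sketch of that kind of configuration class, assuming the two users translate into two differently authenticated RestTemplates (EntityService and EntityRepository stand in for the question's service and repository; all names hypothetical, and note that newer Spring Boot versions rename basicAuthorization to basicAuthentication):

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

class EntityRepository {
    private final RestTemplate template;
    EntityRepository(RestTemplate template) { this.template = template; }
}

class EntityService {
    private final EntityRepository repository;
    EntityService(EntityRepository repository) { this.repository = repository; }
}

@Configuration
public class ThirdPartyConfig {

    @Bean
    public RestTemplate userARestTemplate(RestTemplateBuilder builder) {
        // credentials for the first business scenario
        return builder.basicAuthorization("userA", "passwordA").build();
    }

    @Bean
    public RestTemplate userBRestTemplate(RestTemplateBuilder builder) {
        // credentials for the second business scenario
        return builder.basicAuthorization("userB", "passwordB").build();
    }

    // two instances of the same service class, each wired with a repository
    // that uses a differently authenticated template
    @Bean
    public EntityService entityServiceForUserA(@Qualifier("userARestTemplate") RestTemplate template) {
        return new EntityService(new EntityRepository(template));
    }

    @Bean
    public EntityService entityServiceForUserB(@Qualifier("userBRestTemplate") RestTemplate template) {
        return new EntityService(new EntityRepository(template));
    }
}

Callers then pick the service variant for their business context by qualifier, which keeps the decision at wiring time instead of passing a flag through every method.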

Use an event store with Axon Framework 3 and Spring Boot

I'm trying to build a simple distributed application, and I would like to save all the events into the event store.
For this reason, as suggested in the Axon documentation here, I would like to use MySQL as the event store.
Since I don't have much experience with Spring, I cannot figure out how to get it working.
I would have two separate services: one for the command side and one for the query side. Since I'm planning to have more services, I would like to know how to configure them to use an external event store (not stored inside any of these services).
For the distribution of the commands and events, I'm using RabbitMQ:
@Bean
public org.springframework.amqp.core.Exchange exchange() {
    // fanout exchange to which Axon publishes the events
    return ExchangeBuilder.fanoutExchange("AxonEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("AxonEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This creates the required queue on a locally running RabbitMQ instance (with the default username and password).
My question is: how can I configure Axon to use MySQL as an event store?
As the Reference Guide currently does not specify this, I am going to point it out here.
Currently you've roughly got two approaches you can follow when distributing an Axon application or separating an Axon application into (micro)services:
1) Use a full open-source approach
2) Use AxonHub / AxonDb
Taking approach 2, which you can do in a development environment, you would only have to run AxonHub and AxonDb and configure them for your application.
That's it, you're done; you can scale out your application and all the messages are routed as desired.
If you want to take route 1, however, you will have to provide several configurations.
Firstly, you state you use RabbitMQ to route commands and events.
The framework, however, does not support using RabbitMQ to route commands at all. Do note that it is a solution for distributing EventMessages, just not CommandMessages.
I suggest either using JGroups or Spring Cloud to route your commands in an open-source scenario (I have added links to the Reference Guide pages on distributing the CommandBus with JGroups and Spring Cloud).
To distribute your events, you can take three approaches:
Use a shared database for your events.
Use AMQP to send your events to different instances.
Use Kafka to send your events to different instances.
My personal preference when starting an application, though, is to begin with one monolith and separate when necessary.
I think the term 'Evolutionary Micro Services' captures this nicely.
Anyhow, if you use the messaging paradigm supported by Axon to its fullest, splitting out the command side from the query side afterwards should be quite simple.
If you'd additionally use AxonHub to distribute your messages, then you are practically done.
Concluding, though, I did not find a very exact question in your post.
Does this give you the required information to proceed, @Federico Ponzi?
Update
After having given it some thought, I think your solution is quite simple.
You are using Spring Boot and you want to set up your EventStore to use MySQL. For Axon to set the right EventStorageEngine (the infra component used under the covers to read/write events), you can simply add a dependency on spring-boot-starter-data-jpa.
Axon's auto-configuration will in that scenario automatically notice that you have Spring Data JPA on your classpath, and as such will set the JpaEventStorageEngine.
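If you want or need to wire it explicitly, a minimal sketch might look like this; treat the exact constructor and the provider beans as assumptions to verify against your Axon 3 version (the MySQL settings are plain Spring Boot datasource properties, shown as comments):

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventStoreConfig {

    // EntityManagerProvider and TransactionManager are normally supplied by
    // Axon's JPA auto-configuration once spring-boot-starter-data-jpa is present
    @Bean
    public EventStorageEngine eventStorageEngine(EntityManagerProvider entityManagerProvider,
                                                 TransactionManager transactionManager) {
        return new JpaEventStorageEngine(entityManagerProvider, transactionManager);
    }
}

// application.properties - point every service at the same external MySQL database:
// spring.datasource.url=jdbc:mysql://your-db-host:3306/axon_eventstore
// spring.datasource.username=axon
// spring.datasource.password=secret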

How do I manage namespaces in a Spring Integration project with multiple flows

I have a Spring Integration project that has several flows (somewhere between 10 and 15). I would like to keep my namespace clean, since several flows might have similar-sounding components (for example, several flows might have a channel named fileValidatorChannel). I think I have a couple of options to keep names from colliding:
A. Preface every component name with the flow it belongs to, e.g. flowAFileValidatorChannel, flowBFileValidatorChannel, etc.
B. Create a context hierarchy where every flow is its own context, with every flow inheriting from a master context where all the common beans/sub-flows are.
What's the better approach? Is there a better way to keep my namespace clean?
To be honest, your problem isn't clear.
Any Spring Integration component is ultimately a bean, so their ids serve just to distinguish them from other beans.
Let's imagine you didn't have Spring Integration in your application: wouldn't you worry about a clean naming strategy for all your beans anyway?
On the other hand, consider using the Spring Integration Flow project:
The goal is to support these, and potentially other, semantics while providing better encapsulation and configuration options. Configuration is provided via properties and/or referenced bean definitions. Each flow is initialized in a child application context. This allows you to configure multiple instances of the same flow differently.
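A minimal sketch of the child-context idea from option B, using plain Spring APIs (the XML file names are hypothetical):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class FlowContexts {

    public static void main(String[] args) {
        // shared beans and common sub-flows live in the parent context
        ApplicationContext parent = new ClassPathXmlApplicationContext("common-beans.xml");

        // each flow gets its own child context, so both can declare a channel
        // named "fileValidatorChannel" without colliding
        ApplicationContext flowA =
                new ClassPathXmlApplicationContext(new String[] {"flowA.xml"}, parent);
        ApplicationContext flowB =
                new ClassPathXmlApplicationContext(new String[] {"flowB.xml"}, parent);
    }
}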

OSGi bundle (or service) - how to register for a given time period?

Searching did not give me a hint, so: how can I handle the following situation?
I'd love to have 2 OSGi implementations of the same interface: one is the regular one, and the other should work (be active/present/whatever) in a given time period (e.g. for the Christmas weeks :))
The main goal is to call the same interface without specifying any flags/properties and without manually switching the ranking. The application should somehow switch implementations for this special period, doing the other/regular job before and after :)
I'm a newbie; maybe I don't completely understand the OSGi concept somewhere. Sorry for that; please give me a hint or a link, and sorry for my English.
I am using Felix/Equinox with Apache Aries.
The publisher of a service can register and unregister that service whenever it likes using the normal API. If it chooses then it can do so according to some regular schedule.
If there is another service instance that is available continuously, then the consumer of the service will sometimes see two instances of the service and sometimes see one. When there is only one instance available, it is trivial to get only that instance. When there are two instances, you need a way to ensure you get your "preferred" instance. The SERVICE_RANKING property is a way to do this: the getService method of a normal ServiceTracker will always return the highest-ranked service, so this would appear to satisfy your requirement.
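A minimal sketch of that register/unregister-with-ranking approach, using the plain OSGi API (the GreetingService interface and its implementations are hypothetical):

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;

interface GreetingService {
    String greet();
}

class ChristmasGreetingService implements GreetingService {
    public String greet() { return "Merry Christmas!"; }
}

public class SeasonalPublisher {

    private ServiceRegistration<GreetingService> registration;

    // call at the start of the period: the higher ranking wins in getService
    public void activateSeasonal(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put(Constants.SERVICE_RANKING, 100); // the regular implementation keeps the default rank of 0
        registration = context.registerService(
                GreetingService.class, new ChristmasGreetingService(), props);
    }

    // call at the end of the period: consumers fall back to the regular instance
    public void deactivateSeasonal() {
        if (registration != null) {
            registration.unregister();
            registration = null;
        }
    }
}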
I have yet to see an OSGi container that supports date/time-based availability of services at the framework level.
If I were you, I would simply drop a proxy service in front of the two interface implementations and put the date-based service invocation logic in there.
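A hedged sketch of that proxy idea, reusing the hypothetical GreetingService interface from the sketch above (the date window is illustrative):

import java.time.LocalDate;
import java.time.MonthDay;

public class DateBasedGreetingProxy implements GreetingService {

    private final GreetingService regular;
    private final GreetingService seasonal;

    public DateBasedGreetingProxy(GreetingService regular, GreetingService seasonal) {
        this.regular = regular;
        this.seasonal = seasonal;
    }

    @Override
    public String greet() {
        // delegate to the seasonal implementation only between Dec 18 and Jan 6
        MonthDay today = MonthDay.from(LocalDate.now());
        boolean inSeason = !today.isBefore(MonthDay.of(12, 18)) || !today.isAfter(MonthDay.of(1, 6));
        return inSeason ? seasonal.greet() : regular.greet();
    }
}

Only the proxy is published as an OSGi service; the two concrete implementations stay internal to the bundle.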
I don't believe there is any framework support for what you are asking for.
If you are intent on avoiding service filters, you might try this.
Implement a PolicyService. This service is in charge of deciding which instance of your service should be registered at a given point in time. When it's time for the policy service to switch implementations, it just uses the register/unregister APIs as usual. Your policy service implementation can read in a config file that specifies the date-range-to-service-implementation mapping. This will allow you to add new behavior by modifying your config file and installing a new bundle with the new service.
I agree with Neil that a service should only publish itself if it can actually be invoked. My solution to this problem would be to have all service producers depend on a "time constraint dependency". Whilst such a dependency is not available in the standard dependency frameworks (like Declarative Services, Blueprint, iPOJO), it is easily implemented with the Apache Felix Dependency Manager, which allows you to create your own types of dependencies. Note that writing such a new dependency is some work, but if this is a core piece of your application I think it's worth it. Service consumers would not require any special logic; they would simply invoke the service that is there.
OK, here is what I finally did...
I implemented a common dispatcher bundle and call any of the services only through it (therefore no cron is needed, since calls are on-demand).
When the dispatcher receives a request, it searches for the interface in its own cache, and:
when there is more than one service with the same ranking and both are equally registered,
the dispatcher resolves the needed service via my own @TimingPolicy annotation with "from" and "to" fields (a sketch follows below),
and for the server's current date/time (new Date()) it returns the proper service instance.
Much work, I should say :) But it works nearly perfectly :)
Thanks everybody for giving me hints, it was really helpful!!!
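A hedged reconstruction of that dispatcher idea (the annotation name and its "from"/"to" fields are as described above; everything else is illustrative):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.time.LocalDate;
import java.util.List;

// custom annotation carrying the activity window of an implementation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TimingPolicy {
    String from(); // inclusive start date, e.g. "2018-12-17"
    String to();   // inclusive end date, e.g. "2019-01-06"
}

public class Dispatcher {

    // pick the implementation whose window covers today; fall back to the default
    public static <T> T resolve(List<T> candidates, T defaultImpl) {
        LocalDate now = LocalDate.now();
        for (T candidate : candidates) {
            TimingPolicy policy = candidate.getClass().getAnnotation(TimingPolicy.class);
            if (policy != null
                    && !now.isBefore(LocalDate.parse(policy.from()))
                    && !now.isAfter(LocalDate.parse(policy.to()))) {
                return candidate;
            }
        }
        return defaultImpl;
    }
}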
