EventStore 3.0 and Publishing Events - event-sourcing

So what would be the recommended way to publish events via Event Store 3.0? Assuming I wire up the EventStore like this:
.UsingAsynchronousDispatchScheduler()
.DispatchTo(new DelegateMessageDispatcher(DispatchCommit))
where 'DispatchCommit' looks like this:
DispatchCommit(Commit commit)
I can watch the committed events fire off as expected. However, ES 2.0 had the IContainer passed into the message dispatcher, so I could resolve a bus instance and send events. Should I use a class that implements IDispatchCommits?
Anyone using ES 3.0 with any thoughts?

Here's the code I'm using in production to dispatch commits: https://gist.github.com/1311195
I have my container configured to only create a single instance of the dependency.

Related

Sending RabbitMQ events after transaction commit

I have a big task running in a transaction (a @Transactional method). In this task ids are collected, and after the transaction commits they must be sent to RabbitMQ (with the convertAndSend method). The RabbitMQ listener on the other side takes the ids and updates the statuses in the DB (it's important to update the statuses after the transaction's changes, because only then is the updated data current).
I have the following questions:
What is the best way to hook into the end (commit) of the transaction? I need a single "afterCommit" method for several service classes (many transactional methods);
What should I use to store the ids? I thought about something like a ThreadLocal variable, but that is not an option - if a parallelStream is used, new threads are created;
Maybe there is some other solution?
I have read about the RabbitMQ delayed message plugin, but it is not an option for my task - the timing varies too much.
Looking at the tags in your question, I suppose you are using the Spring Framework.
So you could use Spring Events and its ApplicationEventPublisher to publish your specific ApplicationEvent together with all the necessary data (the ids in your case).
Spring allows you to bind an event listener to a phase of the current transaction. Just use the @TransactionalEventListener annotation on a method that finally sends the data to RabbitMQ. Binding is possible to different transaction phases. With the default binding (AFTER_COMMIT), the listener will only be invoked if the transaction has completed successfully.
This Baeldung article on Spring Events is a nice place to find more detailed information.
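For illustration, a rough sketch of that setup could look like this (the class names, exchange name, and routing key are made up for this example, not taken from your project; each class would normally live in its own file):

import java.util.ArrayList;
import java.util.List;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// Event object carrying the ids collected inside the transaction (illustrative name).
class IdsCollectedEvent {
    private final List<Long> ids;
    IdsCollectedEvent(List<Long> ids) { this.ids = ids; }
    List<Long> getIds() { return ids; }
}

@Service
class BigTaskService {
    private final ApplicationEventPublisher publisher;

    BigTaskService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Transactional
    public void runBigTask() {
        List<Long> ids = new ArrayList<>();
        // ... do the work and collect the ids ...
        // Published here, but listeners bound to AFTER_COMMIT only run once the transaction commits.
        publisher.publishEvent(new IdsCollectedEvent(ids));
    }
}

@Component
class IdsCollectedListener {
    private final RabbitTemplate rabbitTemplate;

    IdsCollectedListener(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // AFTER_COMMIT is the default phase; it is spelled out here for clarity.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onIdsCollected(IdsCollectedEvent event) {
        rabbitTemplate.convertAndSend("status-exchange", "status.update", event.getIds());
    }
}

Because the event is plain state held by Spring until the transaction completes, this also sidesteps the ThreadLocal concern: the ids travel with the event object, not with the thread that collected them.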

Use eventstore with Axon Framework 3 and Spring Boot

I'm trying to build a simple distributed application, and I would like to save all the events into the event store.
For this reason, as suggested in the "documentation" of Axon here, I would like to use MySQL as the event store.
Since I don't have much experience with Spring, I can't figure out how to get it working.
I would have two separate services, one for the command side and one for the query side. Since I'm planning to have more services, I would like to know how to configure them to use an external event store (not stored inside any of these services).
For the distribution of the commands and events, I'm using RabbitMQ:
@Bean
public org.springframework.amqp.core.Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("AxonEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("AxonEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This creates the required queue on a locally running RabbitMQ instance (with the default username and password).
My question is: how can I configure Axon to use MySQL as an event store?
As the Reference Guide currently does not specify this, I am going to point it out here.
Currently you have roughly two approaches you can follow when distributing an Axon application or separating an Axon application into (micro)services:
Use a fully open-source approach
Use AxonHub / AxonDb
Taking approach 2, which you can do in a developer environment, you would only have to run AxonHub and AxonDb and configure them for your application.
That's it, you're done; you can scale out your application and all the messages are routed as desired.
If you want to take route 1, however, you will have to provide several pieces of configuration.
Firstly, you state you use RabbitMQ to route commands and events.
In fact, the framework simply does not allow using RabbitMQ to route commands at all. Do note that it is a solution for distributing EventMessages, just not CommandMessages.
I suggest either using JGroups or Spring Cloud to route your commands in an open-source scenario (I have added links to the Reference Guide pages regarding distributing the CommandBus with JGroups and Spring Cloud).
To distribute your events, you can take three approaches:
Use a shared database for your events.
Use AMQP to send your events to different instances.
Use Kafka to send your events to different instances.
My personal preference when starting an application, though, is to begin with one monolith and separate when necessary.
I think the term 'Evolutionary Micro Services' captures this nicely.
Anyhow, if you use the messaging paradigm supported by Axon to its fullest, splitting the command side from the query side afterwards should be quite simple.
If you additionally use AxonHub to distribute your messages, then you are practically done.
Concluding though, I did not find a very exact question in your post.
Does this give you the required information to proceed, @Federico Ponzi?
Update
After having given it some thought, I think your solution is quite simple.
You are using Spring Boot and you want to set up your EventStore to use MySQL. For Axon to set the right EventStorageEngine (the infrastructure component used under the covers to read/write events), you can simply add a dependency on spring-boot-starter-data-jpa.
Axon's auto-configuration will in that scenario automatically notice that you have Spring Data JPA on your classpath, and as such will set up the JpaEventStorageEngine.
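Just to illustrate the MySQL side (normally you would simply set the spring.datasource.* properties in application.properties; the URL, schema name, and credentials below are placeholders), the DataSource that the JPA-based event storage ends up using could be defined like this:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class EventStoreDataSourceConfig {

    // Placeholder MySQL connection details - with Spring Boot you would usually
    // configure these via spring.datasource.url / username / password instead,
    // and use a pooled DataSource rather than DriverManagerDataSource.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/axon_events");
        dataSource.setUsername("axon");
        dataSource.setPassword("secret");
        return dataSource;
    }
}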

Prism event aggregator - ThreadOption.UIThread worked in v4 but not v6

I've been using Prism 4.1 and have numerous classes that subscribe to events (usually in their constructor) using the IEventAggregator and the ThreadOption.UIThread option.
I've now upgraded to Prism 6 but when I run my application it falls over on one such line with an InvalidOperationException. The message is:
To use the UIThread option for subscribing, the EventAggregator must be constructed on the UI thread
The call stack shows that the class in question is being resolved by my DI container (Castle Windsor), which is why it's not on the UI thread. However, it all worked fine with Prism 4.1, so what has changed?
It turns out it was down to the way my application was starting up. I was using a "Main" style of entry point, but I needed to move it to the App.xaml OnStartup() instead:
Prism EventAggregator Exception - must be constructed on the UI thread
I doubt it worked before in the sense of "the subscribers were run on the UI thread"; rather, they were run on whichever thread the EventAggregator happened to be created on. In older versions of Prism, it just didn't care (and ThreadOption.UIThread lied, in a way).
I don't know Castle Windsor that well, but being resolved by the DI framework doesn't by itself mean being created on a different thread. To be on the safe side, resolve the EventAggregator once in InitializeModules in the bootstrapper, so that it gets created on the UI thread.
With Unity it would look like this:
internal class Bootstrapper : UnityBootstrapper
{
    protected override void InitializeModules()
    {
        Container.Resolve<IEventAggregator>();
        base.InitializeModules();
    }
}

Spring application level transaction

We are using a service provider which provides some general services. Some of these services need to be performed in a transaction, like below:
service.transferAccount()
service.changeAccountType()
service.closeAccount()
So we need to develop a method TransferAccount_ChangeAccount_CloseAccount() which may look something like this:
TransferAccount_ChangeAccount_CloseAccount() {
    try {
        service.transferAccount();
        service.changeAccountType();
        service.closeAccount();
    } catch (Exception e) {
        // find which service call failed and roll back until everything is back to the start state
    }
}
We do not have any database here. These are only services... and the transaction needs to be managed at the service level.
The project is based on Spring.
Can you please let me know what the best approach for this use case is?
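One way to sketch the rollback hinted at in the catch block above is manual compensation: record an undo action for every step that succeeds and run them in reverse order on failure. The undo operations below (reverseTransfer, restoreAccountType, reopenAccount) are hypothetical and assume the service provider offers such inverse calls:

import java.util.ArrayDeque;
import java.util.Deque;

// Manual compensation sketch: every completed step pushes an action that undoes it;
// on failure, the completed steps are rolled back in reverse order.
public class AccountWorkflow {

    // Hypothetical facade over the provider's services, including inverse operations.
    public interface AccountService {
        void transferAccount();
        void reverseTransfer();
        void changeAccountType();
        void restoreAccountType();
        void closeAccount();
        void reopenAccount();
    }

    private final AccountService service;

    public AccountWorkflow(AccountService service) {
        this.service = service;
    }

    public void transferChangeTypeAndClose() {
        Deque<Runnable> compensations = new ArrayDeque<>();
        try {
            service.transferAccount();
            compensations.push(service::reverseTransfer);

            service.changeAccountType();
            compensations.push(service::restoreAccountType);

            service.closeAccount();
            compensations.push(service::reopenAccount);
        } catch (RuntimeException e) {
            // roll back the steps that already completed, most recent first
            while (!compensations.isEmpty()) {
                compensations.pop().run();
            }
            throw e;
        }
    }
}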

OSGi dynamic version for one class

I have an OSGi bundle in which one service class is used by two clients. Both clients currently use the same version of the service class, say 1.0. I have now made some changes to the service class and want to bump its version to 1.1. My problem is that I want both versions of the service class to be available, so that one client can use 1.0 and the other can use 1.1. How can I achieve this? A sample of this kind of dynamic versioning would be really helpful.
Thanks.
The version of a service is the version of the interface it implements, which comes from the version of the exported package that that interface exists in.
The service implementation class version is irrelevant because the consumer of the service has no knowledge of the implementation class. Therefore if you register multiple services with the same interface, they will all be visible to the consumer.
I don't think OSGi has a real notion of service versions, but you can use any key/value pair you like when registering a service. The Knopflerfish tutorial is pretty good I think.
For example, when registering a service:
Hashtable<String, Object> props = new Hashtable<>();
props.put("version", "1.0");
bundleContext.registerService(ServiceInterface.class.getName(), impl, props);
Then, when consuming a service, you can use those properties in a filter to require a particular version.
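For example (a rough sketch; ServiceInterface stands for the interface from the registration snippet above), the consumer could select a specific version with an LDAP-style filter:

try {
    // look up only the registrations carrying the "version" property set to 1.0
    ServiceReference[] refs =
        bundleContext.getServiceReferences(ServiceInterface.class.getName(), "(version=1.0)");
    if (refs != null && refs.length > 0) {
        ServiceInterface service = (ServiceInterface) bundleContext.getService(refs[0]);
        // ... use the service, and later release it with bundleContext.ungetService(refs[0])
    }
} catch (InvalidSyntaxException e) {
    // the filter string above is constant, so this should not happen
}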
Having multiple versions of this service is very easy, the tricky part is how the service consumers deal with it.
If you have two consumers using version 1.0, and 1.1 appears (for example when a new bundle has been started), should the consumers stop using 1.0 and start using 1.1? In your example, one of the consumers should ignore this, while the other should rewire to 1.1. This gets especially complicated when one consumer consumes multiple services.
I recommend looking into Declarative Services; it can make this a lot easier and keep your code cleaner. I'd say start here.
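As a small sketch of what that could look like with the Declarative Services annotations (assuming the same "version" service property as above; ReportClient is just an example name):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// A declarative-services consumer that binds only to the registration whose
// "version" property is 1.0, via the target filter on the reference.
@Component
public class ReportClient {

    private volatile ServiceInterface service;

    @Reference(target = "(version=1.0)")
    void setService(ServiceInterface service) {
        this.service = service;
    }

    void unsetService(ServiceInterface service) {
        this.service = null;
    }
}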
