How to upcast events with Axon Server? - spring-boot

I'm relatively new to the Axon Framework and just evaluating whether it is suitable for a project of mine. Versioning of events is described in this post, but in the example the EventStore is changed to JPA. Is it possible to upcast events with Axon Server as the event store? Or did I misunderstand something?

The Upcaster logic provided by Axon Framework is in no way biased towards the type of EventStore backing your application. It is thus perfectly doable to provide an UpcasterChain to the AxonServerEventStore, containing the EventUpcaster implementation you have written.
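As a minimal sketch, assuming the Jackson serializer and a hypothetical ProductCreatedEvent that gains a currency field in revision 2.0, such an upcaster could look like this (with Axon's Spring Boot starter, declaring it as a bean is typically enough for it to end up in the upcaster chain backing the AxonServerEventStore):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.upcasting.event.IntermediateEventRepresentation;
import org.axonframework.serialization.upcasting.event.SingleEventUpcaster;

// Upcasts ProductCreatedEvent from revision 1.0 to 2.0 by adding a default
// value for the new "currency" field (event and field names are hypothetical).
public class ProductCreatedEventUpcaster extends SingleEventUpcaster {

    private static final SimpleSerializedType OLD_TYPE =
            new SimpleSerializedType("com.example.ProductCreatedEvent", "1.0");

    @Override
    protected boolean canUpcast(IntermediateEventRepresentation representation) {
        return representation.getType().equals(OLD_TYPE);
    }

    @Override
    protected IntermediateEventRepresentation doUpcast(IntermediateEventRepresentation representation) {
        return representation.upcastPayload(
                new SimpleSerializedType(OLD_TYPE.getName(), "2.0"),
                JsonNode.class,
                payload -> ((ObjectNode) payload).put("currency", "EUR"));
    }
}
```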
Upcaster Registration Update
Nicolas asked the following as a follow-up on my response:
But the upcaster is only applied for the service that contains it.
So is it possible to register an upcaster globally or do I have to implement it in every service connected with the Axon Server?
Axon Server will not distribute registered Upcaster instances to the connected Axon Server clients (that is, the Axon Framework applications).
It doesn't, because you could have a heterogeneous deployment of services: one working with old event versions and another on the most recent version, including such an Upcaster.
Think about it from a Blue-Green Deployment strategy or a Rolling-Upgrade approach; you wouldn't want Axon Server to push upcasters to the clients directly, as each client should be in charge of the exact version it is interested in.
Having said this, you would thus have to share the upcasters together with your messages, as part of your API so to speak. This would be a requirement for an Axon application regardless of whether you use Axon Server. Note that this is the situation for Axon Server up to and including (the upcoming) version 4.3; I do not know (yet) whether such a feature will be added in the future.

Related

Running multiple Quarkus instances on one machine

I have an application separated into various OSGi bundles which run on a single Apache Karaf instance. However, I want to migrate to a microservice framework because
Apache Karaf is pretty tough to set up due to its dependency mechanism, and
I want to be able to bring the application later to the cloud (AWS, GCloud, whatever)
I did some research, had a look at various frameworks and concluded that Quarkus might be the right choice due to its container-based approach, the performance and possible cloud integration opportunities.
Now, I am struggling at one point and haven't found a solution so far, but maybe I also have a misunderstanding here: my plan is to migrate almost every OSGi bundle of my application into a separate microservice. That way, I would be able to scale horizontally only the services for which this is necessary, and I could also update/deploy them separately without having to restart the whole application. Thus, I assume that every service needs to run in a separate Quarkus instance. However, Quarkus does not seem to support this out of the box; instead, I would need to create a separate configuration for each Quarkus instance.
Is this really the way to go? How can the services discover each other? And is there a way for a service A to communicate with a service B not only via REST calls, but also to use objects and methods of service B by declaring a dependency on service B in service A?
Thanks a lot for any ideas on this!
I think you are mixing some points between microservices and OSGi-based applications. With microservices you usually have an independent process running each microservice, which can be deployed on the same or other machines. Because of that you can scale as you said and gain benefits. But the communication model is not process-to-process; it has to use a different approach, and it is highly recommended that you use a standard integration mechanism: REST, JSON-RPC, SOAP, or queues/topics for event-driven communication. Through these mechanisms you invoke the 'other' service's operations as you do in OSGi, just over a different interface: instead of a local invocation you do a remote invocation.
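For illustration, a minimal sketch of such a remote invocation in Quarkus using the MicroProfile REST Client; InventoryClient, the /inventory path and the inventory-service config key are hypothetical names:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Service A only depends on this interface; Quarkus generates the HTTP call.
// The base URL of service B is configured in application.properties, e.g.:
//   inventory-service/mp-rest/url=http://inventory-service:8080
@Path("/inventory")
@RegisterRestClient(configKey = "inventory-service")
public interface InventoryClient {

    @GET
    @Path("/items/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    String getItem(@PathParam("id") String id); // raw JSON here; map it to a DTO in practice
}
```

Service A then injects the client with @Inject @RestClient InventoryClient and calls getItem(...) like a local method, while the invocation actually travels over HTTP.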
Service discovery is something you can do with just virtual IPs, accessing other services through a common DNS name and a load balancer, or using Kubernetes DNS if you go for Kubernetes as your platform. You could also use a central configuration service, or let each service register itself in a central registry. There are already plenty of different solutions to tackle this complexity.
Also, and more importantly, you will have to be aware of your new complexities (some of which you already have):
Contract versioning and design
Synchronous or asynchronous communication between services.
How to deal with security at the boundaries of the services (do I even need security in most of my services, or do I just need information about the user's identity?)
Increased maintenance cost and redundant boilerplate code for common features (here Quarkus helps you a lot with its extensions, and you also have MicroProfile compatibility).
...
Deciding to go with microservices is not an easy decision, and not one that should be taken in a single step. My recommendation is that you analyse your application domain and check whether your design is ready for microservices (in terms of separation of concerns and model cohesion), then extract small parts of your OSGi platform into microservices. Otherwise you will mostly be forced to make changes to your service interfaces, which is more difficult due to the service-to-service contract dependency than changing a method and some invocations.

Transaction management in microservices

We are rewriting a legacy app using microservices. Each microservice has its own DB. Certain API calls require calling another microservice and persisting data into both DBs. How can distributed transaction management be implemented effectively in this case?
Since we have not migrated completely to the new microservices environment, we still write data back to the old monolith. For this, when a microservice endpoint is called, we call a monolith service from the microservice API to write back the same data. How do we deal with the same problem in this case as well?
Thanks in advance.
There are different distributed transaction frameworks, usually included and maintained as part of heavyweight application servers like JBoss and WebLogic.
The standard usually used by such servers is Jakarta Transactions (JTA; formerly the Java Transaction API).
Tomcat and Spring don't support distributed transactions out of the box. You can add this functionality using a third-party framework like Atomikos (just googled it, I've never used it).
But remember, a microservice with JTA is not "micro" anymore :-)
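For reference, coordinating two resources under JTA on such a server looks roughly like this; the java:comp/UserTransaction JNDI name is the standard lookup, while the actual database writes are only hinted at:

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Rough sketch of a JTA-managed distributed transaction on an application
// server; both databases must be reached through XA-capable DataSources
// for the commit to be atomic.
public class WriteBackExample {

    public void writeBoth() throws Exception {
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            // ... write to the microservice's own database ...
            // ... write back to the monolith's database ...
            tx.commit(); // two-phase commit across both resources
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}
```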
Here is a small overview over available technologies and possible workarounds:
https://www.baeldung.com/transactions-across-microservices
If you can afford to write to the legacy system later (i.e. allow some latency between updating the microservice and the legacy system), you can use the outbox pattern.
Essentially that means that you write to the microservice database in a transactional way, both to the tables you usually write to and to an additional "outbox" table of changes to apply, and then have a separate process that reads that table and updates the legacy system.
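A minimal sketch of the transactional side, assuming Spring with JdbcTemplate; the orders and outbox tables and their columns are hypothetical:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Outbox pattern: the business write and the outbox entry share one local
// transaction, so the legacy system can be updated later without 2PC.
@Service
public class OrderService {

    private final JdbcTemplate jdbc;

    public OrderService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Transactional // one local transaction covers both inserts
    public void placeOrder(String orderId, String payloadJson) {
        // 1. The write the service would do anyway.
        jdbc.update("INSERT INTO orders (id, payload) VALUES (?, ?)", orderId, payloadJson);
        // 2. The change to replay against the legacy system; a separate poller
        //    reads this table, calls the monolith, and marks rows as processed.
        jdbc.update(
                "INSERT INTO outbox (event_type, aggregate_id, payload) VALUES (?, ?, ?)",
                "ORDER_PLACED", orderId, payloadJson);
    }
}
```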
You can also achieve something similar with a change data capture mechanism on the DB used in the microservice(s).
Check out this answer on "Why is 2-phase commit not suitable for a microservices architecture?": https://stackoverflow.com/a/55258458/3794744

How to implement microservice Saga in google cloud platform

I am investigating how to implement the microservice Saga pattern on a platform hosted in K8s on GCP.
There are two options: Eventuate Tram and Axon. However, these frameworks do not seem to support message brokers managed by a cloud provider, such as Google Cloud Pub/Sub, and I do not want to deploy either Kafka or RabbitMQ to K8s since GCP supports Pub/Sub already.
So is there any way to integrate either Eventuate or Axon with Google Cloud Pub/Sub?
Thanks
I'm uncertain about Eventuate's angle on this, but Axon works with extensions for message brokers other than Axon Server. Throughout Axon's lifecycle (read: the last 10 years), several of these have been provided, but none of them currently cover all the types of messages defined by Axon Framework. So you wouldn't be able to use Kafka for sending commands in Axon, for example.
Reasoning for this? Commands, events and queries have different routing requirements which should be reflected by using the right tool for the job.
To be a bit more specific on Axon's side, the following extensions can be used for distributing your messages:
AMQP -> for Events
Kafka -> for Events
JGroups -> for Commands
Spring Cloud Discovery -> for Commands
As you can tell, there currently is no Pub/Sub extension out there to allow you to distribute your messages. On top of that, my gut tells me that if one were available, it would likely only be used for Event messages, given Pub/Sub's intent as a message broker.
Luckily, this actually makes it rather straightforward to create such an extension yourself. Going into all the details of building it would be a little much, so I would recommend having a look at Axon's AMQP extension first. Hints on the matter: for publication, you should add a component that handles Axon's events and publishes them to Pub/Sub. For handling events, you are required to build a StreamableMessageSource or SubscribableMessageSource. These interfaces are used by the TrackingEventProcessor and SubscribingEventProcessor respectively, which in turn are the components in charge of the technical aspects of handling events.
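To make the subscribing side more concrete, here is a hypothetical sketch of a Pub/Sub-backed SubscribableMessageSource; a real implementation would use Axon's Serializer to rebuild the original event, whereas this sketch simply wraps the raw payload:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import org.axonframework.common.Registration;
import org.axonframework.eventhandling.EventMessage;
import org.axonframework.eventhandling.GenericEventMessage;
import org.axonframework.messaging.SubscribableMessageSource;

// Hypothetical Pub/Sub-backed event source for a SubscribingEventProcessor.
public class PubSubMessageSource implements SubscribableMessageSource<EventMessage<?>> {

    private final Set<Consumer<List<? extends EventMessage<?>>>> processors =
            new CopyOnWriteArraySet<>();
    private final Subscriber subscriber;

    public PubSubMessageSource(ProjectSubscriptionName subscription) {
        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer ack) -> {
            // Simplification: wrap the raw payload; use your Serializer in practice.
            EventMessage<?> event =
                    GenericEventMessage.asEventMessage(message.getData().toStringUtf8());
            processors.forEach(p -> p.accept(Collections.singletonList(event)));
            ack.ack();
        };
        this.subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync(); // begin pulling messages in the background
    }

    @Override
    public Registration subscribe(Consumer<List<? extends EventMessage<?>>> processor) {
        processors.add(processor);
        return () -> processors.remove(processor); // Registration.cancel()
    }
}
```

Such a source could then be assigned to a SubscribingEventProcessor through Axon's EventProcessingConfigurer.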
By the way, if you are building such an extension and need a hand, it would be best to ask on the AxonIQ forum.
One last, and rather important, note: such a connector would not be able to deal with all types of messages. If you require a more full-fledged distributed Axon application, I would highly recommend giving Axon Server a try before building your own solution from the ground up.

How to update data in real-time

I have a small stock-market application built with Spring Boot, and whenever a product is updated I want to serve the updated product to clients in real time.
Does it make sense to use message queues like RabbitMQ together with SSE (Server-Sent Events) for this, or is there a more sensible solution?
Solution
Publish your updated data to some channel
Your clients should subscribe to that channel to get the updated feed in real time.
Tools
Use an in-house setup of RabbitMQ, ActiveMQ, Kafka or another open-source tool, and implement WebSocket for front-end applications
Use a commercial service like Google Cloud Pub/Sub
Use a ready-made, fully packaged solution with supported SDKs for backend and frontend, such as https://www.pubnub.com/
For this you can use any of:
Spring Integration
Web Sockets
JMS
Spring Integration is an implementation of the Enterprise Integration Patterns and is ideal for asynchronously processing data in real time.
However, looking at your scope, it is only about the publisher-subscriber pattern, so it can be solved with JMS.
With JMS, subscribers/consumers can register and de-register dynamically, and it also provides ways to implement fallbacks and tracking.
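As a minimal sketch under those assumptions: a Spring controller that keeps SSE connections open and fans out updates arriving on a JMS topic. The product.updates destination is a hypothetical name, and @EnableJms is assumed to be configured elsewhere:

```java
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class ProductFeedController {

    private final List<SseEmitter> emitters = new CopyOnWriteArrayList<>();

    // Browsers connect here, e.g. new EventSource("/products/stream").
    @GetMapping("/products/stream")
    public SseEmitter stream() {
        SseEmitter emitter = new SseEmitter(Long.MAX_VALUE); // effectively no timeout
        emitters.add(emitter);
        emitter.onCompletion(() -> emitters.remove(emitter));
        return emitter;
    }

    // Fan each update received from the JMS topic out to all connected clients.
    @JmsListener(destination = "product.updates")
    public void onProductUpdate(String updateJson) {
        for (SseEmitter emitter : emitters) {
            try {
                emitter.send(updateJson);
            } catch (IOException e) {
                emitters.remove(emitter); // client disconnected
            }
        }
    }
}
```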

Azure alternative to spring cloud dataflow process

I'm looking for the Azure alternative to the Spring Cloud Data Flow model of source-processor-sink.
I want the three entities to be separate microservices. I want to use messaging as a link between these three.
Basically, the source app takes the data from another service and sends it to the processor, while the processor app acts on it and sends a relevant notification/alert to the sink.
I'm aware I can use RabbitMQ for the messaging, but I need to know which one will be better on Azure: Service Bus topics or Event Hubs? And how can I use them?
At the moment, there isn't a Spring Cloud Stream binder implementation for Azure Event Hubs.
Unless we have this, neither the out-of-the-box nor custom apps can be built as messaging microservices, where Spring Cloud Stream provides the programming model and Spring Cloud Data Flow lets you orchestrate the individual microservices into a data pipeline (i.e., source-processor-sink) via the DSL or the drag-and-drop GUI.
Microsoft was exploring a binder implementation in the past; possibly it will end up in the Azure Spring Boot project. Feel free to drop an issue on their backlog.
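For context, a minimal sketch of what a Spring Cloud Stream processor looks like in the source-processor-sink model; the broker behind the channels is chosen purely by the binder on the classpath (RabbitMQ, Kafka, or a future Event Hubs binder), and the transformation here is a hypothetical placeholder:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// A processor app: consumes from the input channel and publishes to the
// output channel; the binder maps those channels onto the actual broker.
@SpringBootApplication
@EnableBinding(Processor.class)
public class AlertProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(AlertProcessorApplication.class, args);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String process(String reading) {
        // Hypothetical transformation from an incoming reading to an alert.
        return "ALERT: " + reading;
    }
}
```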
