How do microservices with independent databases communicate with each other? - microservices

For example, Microservice A has DB A and Microservice B has DB B. Now Microservice B wants some shared data from A. How can we handle such a scenario?

Microservice B can make a synchronous call (like a REST call) to get the data it needs. If Microservice B needs this data frequently, it is good to avoid such synchronous calls to save the network cost.
For such cases it is recommended to replicate the data with an event-driven architecture. With this architecture, data changes in Microservice A are published to a message broker like Kafka and then consumed by Microservice B. Microservice B updates its own database with the information from Microservice A's events. This way, coupling is also avoided.
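A minimal sketch of the publishing side of that, assuming Spring Kafka's KafkaTemplate; the topic name and the CustomerChangedEvent payload are made up for illustration, not taken from the question:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Microservice A: publish a change event right after committing its own write.
@Service
public class CustomerChangePublisher {

    private final KafkaTemplate<String, CustomerChangedEvent> kafkaTemplate;

    public CustomerChangePublisher(KafkaTemplate<String, CustomerChangedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(CustomerChangedEvent event) {
        // Key by the entity id so all changes to one record stay on one partition (preserves ordering).
        kafkaTemplate.send("customer-events", event.getCustomerId(), event);
    }
}

Microservice B then subscribes to the same topic and applies each event to its own database.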

They use a communication protocol, like HTTP.

Each microservice needs to own its database schema and should only query its own schema. In this case, since Microservice B needs data from DB-A, Microservice A needs to expose this data, possibly through a REST API, and Microservice B then needs to call that REST API to get the data from DB-A.
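A minimal sketch of that exposure, assuming Spring MVC; the path, DTO and repository method are illustrative names, not from the answer:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Microservice A: expose the data it owns in DB-A over REST.
@RestController
public class CustomerController {

    private final CustomerRepository repository; // backed by DB-A only

    public CustomerController(CustomerRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/customers/{id}")
    public CustomerDto get(@PathVariable String id) {
        return repository.findDtoById(id); // hypothetical query against A's own schema
    }
}

Microservice B would then call something like restTemplate.getForObject("http://service-a/customers/{id}", CustomerDto.class, id) instead of querying DB-A directly.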

Related

Should microservices connected with axon share the axon framework related tables?

I am starting a project where I want to have multiple services that communicate with each other using the axon server.
I have more than one service with the following stack:
Spring Boot 2.3.0.RELEASE (with starters: Data, JPA, web, mysql)
Axon
Spring Boot Starter - 4.2.1
Each one of the services uses a different schema in the MySQL server.
When I start the Spring Boot service with the Axon framework activated, some tables for tokens, sagas, etc. are created in the database schema of each application.
I have two questions:
In the architecture that I am trying to build, should I have only one database for all the 'axon enabled' services, so the sagas, tokens, events, etc. are only in one place?
If so, can anyone provide an example of how to configure a custom EntityManagerProvider to have the database of the service separated from the database of Axon?
I assume each of your microservices models a sub-domain. Since the events do model a (sub)domain, along with aggregates, entities and value objects, I very much favor keeping the Axon-related schemas separated, most likely along with the databases/schemas corresponding to each service. I would, thus, prefer a modeling-first approach when considering such technical options.
This is what we're currently doing in our microservices ecosystem.
There is at least one more technical reason to go with the same schema (one per sub-domain, that is) both for Axon assets and application-specific assets. It was pointed out to me by my colleague Marian. If you (will) use Event Sourcing (thus reconstructing the state of an aggregate by fetching and applying all past events that resulted from handling the commands), then you will most likely need transactions which encompass this fetching as well as the command handling code, which might in turn trigger (through events) writes to your microservice-specific database.
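For context on that last point, a bare-bones event-sourced aggregate in Axon looks roughly like the sketch below (the command and event classes are invented for illustration); handling a command first requires loading the aggregate by replaying its past events, which is exactly the fetching that the answer wants covered by the same transaction as the resulting writes:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

// Minimal event-sourced aggregate; state is rebuilt from past events on every load.
@Aggregate
public class Enrollment {

    @AggregateIdentifier
    private String enrollmentId;

    protected Enrollment() {
        // required by Axon for event-sourced reconstruction
    }

    @CommandHandler
    public Enrollment(CreateEnrollmentCommand command) {
        // Do not mutate state here; publish an event and let the handler below apply it.
        AggregateLifecycle.apply(new EnrollmentCreatedEvent(command.getEnrollmentId()));
    }

    @EventSourcingHandler
    public void on(EnrollmentCreatedEvent event) {
        // Replayed from the event store whenever the aggregate is loaded.
        this.enrollmentId = event.getEnrollmentId();
    }
}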
Axon can require five tables, depending on your usage of Axon, of course.
These are:
1. The Event table.
2. The Snapshot Event table.
3. The Token table.
4. The Saga table.
5. The Association Value Entry table.
When using Axon Server, tables 1 and 2 will not be created since Axon Server is the storage solution for events and snapshots.
When not using Axon Server, I would indeed suggest having a dedicated datasource for these.
Table 3, which services the TokenStore, should be as close as possible to your query models. The tokens portray how far a given EventProcessor is with handling events. As these EventProcessors typically service projectors which create your query models, keeping them together is sensible from a transactional perspective.
Tables 4 and 5 are both required for Sagas. The "Saga table" stores the serialized sagas, whereas the "Association Value Entry table" carries the association values between events and sagas so that the framework can load the right sagas. I'd store these either in a dedicated database or along with the other tables of the given (micro)service.
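For the second part of the question, here is a rough sketch of what pointing Axon at a dedicated persistence unit could look like; the axonEntityManagerFactory bean name is an assumption, and the wiring of that second datasource/persistence unit is left out:

import javax.persistence.EntityManagerFactory;
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.jpa.SimpleEntityManagerProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.SharedEntityManagerCreator;

// Sketch: give Axon its own EntityManagerProvider backed by a separate persistence unit.
@Configuration
public class AxonPersistenceConfig {

    @Bean
    public EntityManagerProvider axonEntityManagerProvider(
            @Qualifier("axonEntityManagerFactory") EntityManagerFactory axonEmf) {
        // SharedEntityManagerCreator yields a transaction-aware EntityManager proxy.
        return new SimpleEntityManagerProvider(
                SharedEntityManagerCreator.createSharedEntityManager(axonEmf));
    }
}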

Why do most diagrams of stateless microservices show the database inside the service, while stateful microservices have an external database?

Because, as I understand it, a stateless microservice does not rely on state. So why does it need the database inside the microservice? I thought it should be the other way around.
I would hope the location of the database does not matter, as long as the idea of stateless is that the server will not store any session or other state itself but keeps it in a database, while stateful services do store sessions and other state.
Most of the diagrams related to microservices architecture show a database associated with each service. This is to display the fact that independent microservices have independent databases. In a traditional monolith, the app would be connected to a single database. When we break a monolith into multiple microservices along domain boundaries, the ideal way is for each microservice to have a different database so that services can run and evolve independently. This is a true microservices architecture.
So, to answer your question, the database in a microservice block in a diagram just shows the independence of the service, with its own data model and logic.

How to ensure application availability when one or more microservices fail?

If a microservice is not responding due to any of the following reasons, how do we ensure the overall application availability?
Microservice crashes
Network partition happens or other transient error happens
Service is overloaded
other microservice calling the same microservice
If you have services calling one another directly, then it doesn't sound like they are using Kafka.
If you have applications sending to Kafka, then those messages are persisted to the broker logs. Any downstream consumer can stay offline for as long as the messages are (configurably) retained in the Kafka cluster.
Ultimately, when using Kafka (or any persistent message queue), services do not know about one another, only about the brokers.
You should avoid coupling in a microservices architecture as much as possible.
In your case, I guess you are sending a read-only request to a microservice to get some data, but the called microservice is not up, so the caller microservice can't do its job.
To avoid this kind of situation you can use the data duplication technique. In this technique, the microservice which is the source of the data sends insert, update and delete information about the data as events, using a broker like Kafka. Then the other microservices which also need this data consume it from the corresponding topic and keep their own copy. This way, you don't need to make a read-only request to get the data, and you avoid coupling between the microservices.
What will happen in that case?
In this case, if there is no redundancy for the microservice which is called, the caller microservice will get an exception like "No instances available for CalledMicroservice".
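Coming back to the data duplication technique described above, a minimal sketch of the consuming side, assuming Spring Kafka's @KafkaListener; the topic, group id and the CustomerChangedEvent/CustomerReplica types are invented for illustration:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Caller microservice: keep a local read copy so reads never depend on the source service being up.
@Component
public class CustomerChangeListener {

    private final CustomerReplicaRepository replicaRepository; // this service's own database

    public CustomerChangeListener(CustomerReplicaRepository replicaRepository) {
        this.replicaRepository = replicaRepository;
    }

    @KafkaListener(topics = "customer-events", groupId = "caller-service")
    public void on(CustomerChangedEvent event) {
        // Upsert the local copy; reads are now served from this service's own DB, no sync call needed.
        replicaRepository.save(CustomerReplica.from(event));
    }
}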

Spring Boot distributed transactions

We need to find the best way to address distributed transaction management in our microservices architecture.
Here is the Problem Statement.
We have one composite microservice which interacts with 2 underlying atomic microservices (which are each meant for a specific purpose, obviously), and each has a separate database. We can consider these 2 microservices as
STUDENT_SERVICE (STU_DB)
TEACHER_SERVICE (TEACHR_DB)
The composite service use case is that a user (administrator) can assign a teacher to a student for a specific course, etc.
I wonder how we can address this problem in one transaction, as each service (STUDENT_SERVICE and TEACHER_SERVICE) has a separate DB and everything should happen in one transaction: either commit or rollback.
Since those 2 services are separate, I see JTA would not be of help, as it is meant for having these 2 applications (services) deployed on the same application server.
I have opted out of JTA as mentioned above.
// Pseudo code
class CompositeService {

    void assignStaff(AssignStaffRequest request) {
        // txn start
        updateStudentServiceAPI(request);
        updateTeacherServiceAPI(request);
        // txn end
    }
}
The system should be in a consistent state after the API execution.
This is a tricky question, even if it's not obvious at first sight.
The functionality you are asking for is generally understood to be an anti-pattern for microservice architecture.
A microservice architecture is, in general, a distributed system. Transactions in distributed systems are hard (see https://martin.kleppmann.com/2015/09/26/transactions-at-strange-loop.html). Your application consists of two services.
JTA is a Java API for ACID-style transactions. ACID transactions usually require locks to be established in databases. As the transaction spans multiple services (in your case there are two), a failure of one service can block processing of the other service. In such a case you are losing the advantages of the microservice architecture: loose coupling and independence of the services. You can end up building a distributed monolith (see this nice article: https://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/).
Btw. there are several discussions on the topic of transactions in microservices here on Stack Overflow. Just search, or check e.g.
Distributed transactions in microservices
Transactions in microservices
Transactions across REST microservices?
What are your options
(Disclaimer: I'm a developer for http://narayana.io and the presented options are from the perspective of Java EE and Narayana. There could be other projects providing similar functionality. Plus, even though Narayana integrates nicely with Spring, you will possibly need to handle some integration issues.)
1. You really need to run ACID-style transactions in your project, i.e. you insist you need the transactional behaviour in the way you describe. Then you need to span the transaction over the services. If the services communicate over REST you can consider, for example, Narayana REST-AT (http://jbossts.blogspot.com/2011/03/rest-cloud-and-transactions.html; start looking into the quickstart here: https://github.com/jbosstm/quickstart/tree/master/rts).
2. You relax your requirements for atomicity, and then you can consider a transaction model that relaxes consistency (you are fine with being eventually consistent). You can consider, for example, LRA (https://github.com/eclipse/microprofile-lra/blob/master/spec/src/main/asciidoc/microprofile-lra-spec.adoc). (Unfortunately the spec and implementation are still not ready, but a PoC could be run on the current state.)
3. You use a completely different approach to transaction processing. Then you can investigate event sourcing. You would deploy e.g. Apache Kafka and send events for updates to the event store. Each service reads those events and updates its own DB independently.
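To make the second and third options concrete against the pseudo code from the question, here is a hedged sketch of an eventually consistent orchestration in which the composite service compensates on failure; the client interfaces and the unassign endpoint are invented for illustration:

// No global transaction: each service commits locally; a failure triggers a compensating call.
class CompositeService {

    private final StudentClient studentClient;   // hypothetical REST client for STUDENT_SERVICE
    private final TeacherClient teacherClient;   // hypothetical REST client for TEACHER_SERVICE

    CompositeService(StudentClient studentClient, TeacherClient teacherClient) {
        this.studentClient = studentClient;
        this.teacherClient = teacherClient;
    }

    void assignStaff(AssignStaffRequest request) {
        studentClient.assignTeacher(request);        // commits locally in STU_DB
        try {
            teacherClient.assignStudent(request);    // commits locally in TEACHR_DB
        } catch (Exception e) {
            // Compensate the first step so the system converges back to a consistent state.
            studentClient.unassignTeacher(request);
            throw e;
        }
    }
}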

Understanding Microservice Architecture

Since I am trying hard to understand the microservice architecture pattern for some work, I came across the following question:
It's always said that a microservice usually has its own database. But does this mean that it always has to be on the same server or container (for example, having one Docker container that runs a MongoDB and my JAR)? Or can this also mean that my JAR runs on one server while my MongoDB is located somewhere else (so two containers, for example)?
If the first one is correct (JAR and database within one container), how can I prevent that, after some changes to my application and a new deployment of my JAR, my MongoDB data is reset (since a whole new container is now running)?
Thanks a lot already :-)
Alternative opinion:
In 99% of real-life cases you must not have a single container that runs both the database and the application; those should be separated, since one (the DB) keeps state while the other (the app) should be stateless.
You don't need a separate database per microservice; very often a separate schema is more than enough (e.g. you don't want to deploy a separate Exadata for each microservice :)). What is important is that only this microservice can read, write and modify the given tables; others can operate on those tables only through interfaces exposed by the microservice.
First of all, each microservice should have its own database.
Secondly, it's not necessary, and also not recommended, to have the microservice and its database in the same container.
Generally a single microservice will have multiple deployments for scaling, and they all connect to a single database instance, which should be a different container; if you are using things like NoSQL DBs, it is a database cluster.
Yes, each microservice should have its own database, and if any other microservice needs data owned by another microservice, it gets that data through an API exposed by the owning microservice. No, it's not at all necessary to have the microservice and its database hosted on the same server. For example, a microservice can be hosted on-premise while its database lives in the cloud, like AWS DynamoDB or RDS.

Resources