How should I design my Spring Microservice? - spring

I am trying to create a microservice architecture for a hobby project and I am confused about some decisions. Can you please help me, as I have never worked with microservices before?
One of my requirements is that my AngularJS GUI will need to show some drop-downs or lists of values (for example, a list of countries). These can be fetched with a microservice REST call, but where should the values come from? Can I fetch them from my Config Server, or should they come from a database? If the latter, should each microservice have its own database for lookup values, or can it be a common one?
How would server-side validation work in this case? There will certainly be a microservice call the GUI makes for validation, but should the validation service be a single microservice shared by all use cases/screens, should it be one per GUI page, or should the CRUD microservice be reused for validation as well?
How do I deal with a use case where the back-end is not a database but a web-service call? Will I still need some local DB to maintain state between these calls (especially to handle the scenario where the web-service call fails) and finally pass the status on to the GUI?

First of all, there is no single way to design a microservice; you have to choose according to the use case and project requirements.
Can I keep these in a Config Server? Or should they come from a database?
Again, it depends upon the use case and requirements. Since every microservice should have its own DB anyway, you can keep the values there even if the countries are just a list of names; if they have relationships with cities/states, then a database is the only sensible option.
If DB, should each of the microservices have their own DB for lookup values or can it be a common one?
No, IMO multiple microservices should not depend on a single DB, because if that DB fails then all of those microservices fail with it, which should not happen. Each microservice should be able to work on its own without depending on another DB or microservice.
should the validation service be a common microservice for all Use Cases/Screens
Same as point 2
How do I deal with a use-case where the backend is not a Database call but another Web-service call? Will I need some local DB still to maintain some state in between these calls and finally pass on the status to GUI?
If you are using HTTP then you should not save the state of any request. If you need to forward the request to another microservice you can use a Feign client, which gives you a very clean way to call REST APIs along with other important features such as client-side load balancing.
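For illustration, here is a minimal Feign client sketch, assuming Spring Cloud OpenFeign is on the classpath, @EnableFeignClients is set on the application class, and service discovery/load balancing is in place; the "country-service" name and the /countries endpoint are made up for the example:

```java
import java.util.List;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

// Hypothetical client for a downstream "country-service"; Feign generates the
// implementation and, with a discovery client, load-balances across instances.
@FeignClient(name = "country-service")
public interface CountryClient {

    // Calls GET /countries on the downstream service and maps the JSON array to a list.
    @GetMapping("/countries")
    List<String> getCountries();
}
```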

Microservice architecture is simple: you divide each task into a separate service (for example, a Spring Boot application).
For example, every application has a login function, a registration function and so on; each of these becomes a separate service in a microservice architecture.
1. You can store the values in a database, since it will be easy to add more values later. You can maintain a separate DB per microservice or a single DB with a separate collection or table for each microservice.
2. By validation, are you asking about who can use which microservice (role-based access)?
3. I think you will still need a local DB.

Microservices are a collection of loosely coupled services. For example, if you are creating an e-commerce application, user management can be a service, order management can be a service, and refund & chargeback management can be another service. Now each of these services can be further divided into smaller units; let's call them API endpoints. For example, user management can have login as one endpoint and signup as another.
If you want to leverage the power of microservice architecture in its true sense, here is what I would suggest. For the above example, create 3 Spring Boot applications, one for each service. The first thing you should do after that is establish trust between those applications; I would prefer JWTs for trust establishment. After that everything is a piece of cake. Here are the answers you are looking for:
1. You should ideally use a database, as opposed to keeping the values in the config server, for fetching the list of countries, so that you do not have to change configuration and refresh your services every time a new country is added.
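As a rough sketch of that approach (the Country entity, repository and endpoint names are made up for illustration, assuming Spring Data JPA on Spring Boot 3):

```java
import java.util.List;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Simple lookup entity holding the drop-down values.
@Entity
class Country {
    @Id
    @GeneratedValue
    private Long id;
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
}

interface CountryRepository extends JpaRepository<Country, Long> {
}

@RestController
class CountryController {

    private final CountryRepository repository;

    CountryController(CountryRepository repository) {
        this.repository = repository;
    }

    // The AngularJS GUI populates its drop-down from this endpoint;
    // adding a new country is a data change, not a redeploy.
    @GetMapping("/countries")
    List<Country> countries() {
        return repository.findAll();
    }
}
```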
2. You can easily restrict access using @PreAuthorize if role-based access is what you are referring to.
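A minimal sketch of that, assuming Spring Security with method security enabled (@EnableMethodSecurity) and roles carried in the JWT; the endpoint and role names are illustrative:

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class RefundController {

    // Only callers whose token resolves to ROLE_ADMIN may trigger a refund.
    @PreAuthorize("hasRole('ADMIN')")
    @PostMapping("/refunds")
    public String issueRefund() {
        return "refund issued";
    }
}
```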
3. You can use OkHttp or any other HTTP client in this use case. And you certainly do not need to maintain a local DB; however, you can cache the output of the web-service call if that is a requirement.
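For the caching part, a sketch using Spring's cache abstraction (assuming @EnableCaching is set; the remote URL, cache name and service are invented for the example):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
class ExchangeRateService {

    private final RestTemplate restTemplate = new RestTemplate();

    // The response is cached per currency, so repeated GUI requests
    // do not hit the remote web service every time.
    @Cacheable("exchange-rates")
    public String fetchRates(String currency) {
        return restTemplate.getForObject(
                "https://rates.example.com/api/{currency}", String.class, currency);
    }
}
```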
P.S.: Establishing trust between microservices can be a complex task if you don't understand all the subtleties. In that case, I would recommend going ahead with a single Spring Boot application, i.e. a monolithic architecture. I would still recommend JWTs though.

Related

How do I access data that my microservice does not own?

I have a microservice that needs some data it does not own. It needs a read-only cache of data that is owned by another service. I am looking for guidance on how to implement this.
I don't want my microservice to call another microservice. There is too much data involved in a join for this to be successful. In addition, I don't want my service to be dependent on another service (which may be dependent on another ...).
Currently, I am publishing an event to a queue. Then my service subscribes and maintains a copy of the data. I am having problems staying in sync with the source system. Plus, our DBAs are complaining about data duplication. I don't see a lot of information on this topic.
Is there a pattern for this? What is it called?
First of all, there are a couple of ways to share data, and you have already mentioned two of them.
One service calls another service to get the data when it is required. This is good because you always get up-to-date data and there is no extra management required in the consuming service. The problem is that if you call it too often, the other service's performance may be impacted.
Another solution is to maintain a local copy of that data in the consuming service using a pub/sub mechanism (a sketch is shown below).
Depending on your requirements and architecture, you can keep this copy in the consuming service's own DB or in some kind of (persistent) cache.
The downside here is consistency. When working with a distributed architecture you will not get strong consistency; you have to rely on eventual consistency.
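As an illustration of the pub/sub copy, here is a minimal sketch assuming Spring for Apache Kafka, a hypothetical "customer-updated" topic published by the owning service, and hypothetical CustomerEvent/CustomerReplica types with JSON deserialization configured:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class CustomerReplicaListener {

    private final CustomerReplicaRepository repository; // hypothetical local-replica repository

    CustomerReplicaListener(CustomerReplicaRepository repository) {
        this.repository = repository;
    }

    // Every change published by the owning service is applied to the local
    // read-only replica, so lookups and joins stay local; consistency is eventual.
    @KafkaListener(topics = "customer-updated", groupId = "order-service")
    public void onCustomerUpdated(CustomerEvent event) {
        repository.save(new CustomerReplica(event.getId(), event.getName()));
    }
}
```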
Another option, depending on your requirements, is to move the tables that need to be joined into a separate service of their own. Whether that makes sense depends on your use case.
If you still want consistency, then instead of having the first service update the data and then publish an event, you can introduce a mediator component that calls the two services synchronously. Here things get complicated, because you are now trying to implement a transaction over a distributed system.
One other point: when a product is built around a microservice architecture, it is not only a technical move. As an organization and as a team you need to understand that things that work in a monolith do not work the same way in microservices. The DBAs need to understand that part too: in microservices, duplication of data across schemas (and of other things, like code) is preferred over reusability.
Last but not least, if it is always necessary to call another service to get data, it is worth checking the service boundaries as well. It may be that some services need to be merged because the business functionality belongs together.

What approach should we follow to create a relationship between two microservices without duplicating data?

The microservice architecture is Docker-based; one microservice (a transaction database with a userId) is in Node JS, and the other (the user database) is in Rust. We need to create a common API or function to retrieve data from both microservices. MongoDB is used as the database for both microservices.
There are several approaches to do that.
One possible solution is that one of the microservices will be responsible of aggregate this data so this microservice will call the other to obtain the data and then combine it with its own data and return it to the caller. This makes sense when the operation to be done is part of the domain of one of the microservices. For example, if the consumer needs user information it is normal to call the user service and this service makes whatever calls are needed to other services to return all the information.
Another possibility is to use the BFF (Backend For Frontend) pattern. This makes sense when the consumer (for example a frontend) needs different information from different domains to populate the UI. In this case, you create an additional service that exposes an API with all the information needed by the consumer, and this service does the aggregation of the information. In certain cases, this can be done directly in the API gateway if you are using one. (A sketch of such an aggregating endpoint follows.)
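A minimal aggregation sketch using Spring WebFlux's WebClient; the base URLs, paths and the User/Transactions/UserOverview types are assumptions about how the Node and Rust services expose their data:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@RestController
class UserOverviewController {

    private final WebClient userClient = WebClient.create("http://user-service:8080");
    private final WebClient txClient = WebClient.create("http://transaction-service:8081");

    @GetMapping("/users/{id}/overview")
    Mono<UserOverview> overview(@PathVariable String id) {
        Mono<User> user = userClient.get().uri("/users/{id}", id)
                .retrieve().bodyToMono(User.class);
        Mono<Transactions> transactions = txClient.get().uri("/transactions?userId={id}", id)
                .retrieve().bodyToMono(Transactions.class);
        // Both downstream calls run in parallel; the results are combined
        // into a single response for the consumer (UserOverview is a hypothetical DTO).
        return Mono.zip(user, transactions, UserOverview::new);
    }
}
```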
The third way is similar to the first one but it requires duplicating data, so I don't know if it will be suitable for you. It consists of keeping a read-only copy of the data owned by one of the services inside the other service and updating it asynchronously through events when the data is modified. The benefit of this approach is better performance, because you don't need any communication between the services at read time. The disadvantage is eventual consistency.

Why do microservices need to communicate with each other?

I'm new to this, but if we have a frontend plus a few different microservices, I just don't get why any of them need to communicate with each other when we can manipulate their data via axios on the frontend. What is the purpose of an event bus and event-driven architecture when we use both a frontend and backend microservices?
Okay, for my example I'm using 5 microservices. Here are 2 of them:
Shopping cart
Posts
And I want to access the posts microservice directly and pass its data through the event bus, so that the shopping cart microservice would have that information. The reason is that posts and shopping cart both have different databases. So is doing it that way a good approach, or should it just go through the frontend with an axios service?
What you are suggesting could be true for a very simple application, one that hardly even needs an architecture such as microservices. It is clear why services need to communicate:
some services are not even accessible from the client (for various reasons such as security), so a change in them must be initiated by another backend service with that privilege
some changes are triggered by backend services and not by the client, e.g. a cron job performing some task
it would hurt reusability, as the service must be usable not only by the client but in any environment
what would happen if you want your services to be used by the public? what if clients do not implement part of the needed logic, intentionally or by mistake?
making the client do everything would be complex and would reduce flexibility
some services, such as authentication, act as a supporting mechanism to ensure safety (or anything else beyond the main logic); these should be called directly by the service that needs them
As for the second part of your question, it depends on several factors like your business needs & models, desired scalability, performance, availability, etc., so the answer that fits will be different for each case.
For your problem, using an event bus, which is asynchronous, would not be a good solution as it would hurt consistency across your services. Instead, a synchronous approach such as a simple API call to your posts service would be a better idea (a sketch follows).
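For illustration, a minimal synchronous call from the shopping cart service to the posts service; the URL and the Post type are assumptions:

```java
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
class CartService {

    private final RestTemplate restTemplate = new RestTemplate();

    // A direct, synchronous call returns the current state of the post,
    // avoiding the stale-copy problem an asynchronous event bus would introduce.
    public Post loadPost(String postId) {
        return restTemplate.getForObject(
                "http://posts-service/posts/{id}", Post.class, postId);
    }
}
```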

Is it considered a good practice to connect to two different databases in one microservice?

Is it considered a good practice to connect to two different databases in one microservice API? Or do I need to implement another microservice for working with the second database and call the new microservice's API inside the first one?
The main thing is that you have only one microservice per database, but it is ok to have multiple databases per microservice if the business case requires it.
Your microservice can abstract multiple data sources, connect them, etc., and then just provide a consistent API to whoever is using it. And whoever is using it doesn't care how many data sources there actually are (a configuration sketch is shown below).
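A minimal Spring Boot sketch of one service owning two databases; the "orders" and "reporting" property prefixes are invented, and each prefix would carry the usual url/username/password settings:

```java
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
class DataSourceConfig {

    // Connection settings for the service's primary ("orders") database.
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.orders")
    DataSourceProperties ordersProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    DataSource ordersDataSource() {
        return ordersProperties().initializeDataSourceBuilder().build();
    }

    // Connection settings for the second ("reporting") database, owned by the same service.
    @Bean
    @ConfigurationProperties("app.datasource.reporting")
    DataSourceProperties reportingProperties() {
        return new DataSourceProperties();
    }

    @Bean
    DataSource reportingDataSource() {
        return reportingProperties().initializeDataSourceBuilder().build();
    }
}
```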
It becomes an issue if you have the same database abstracted by multiple microservices. Then your microservice is no longer isolated and can break, because the data source it relies on was changed by another team that uses the same data source.

What is the role of falcor in a microservice architecture?

Say we have the following taxi-hailing application that is composed of loosely coupled microservices:
The example is taken from https://www.nginx.com/blog/introduction-to-microservices/
Each service has its own REST API and all services sit behind a single API gateway. The client does not talk to a single service but to the gateway. The gateway requests information from several services and combines it into a single response. To the client it looks like it is talking to a monolithic application.
I am trying to understand: where could we incorporate falcor into this application?
One Model Everywhere from http://netflix.github.io/falcor/
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.
In this taxi-hailing application each microservice already represents a single domain model. Can you think of any benefit we could derive from wrapping each microservice with Falcor? I cannot.
However, I think it is very convenient to incorporate Falcor into the API gateway, because we can abstract the different domain models created by the microservices into one single model, or at least a few models.
What is your opinion?
You are right. This is how Netflix uses Falcor and what the Falcor router is designed for.
From the documentation:
The Router is appropriate as an abstraction over a service layer or REST API. Using a Router over these types of APIs provides just enough flexibility to avoid client round-trips without introducing heavy-weight abstractions. Service-oriented architectures are common in systems that are designed for scalability. These systems typically store data in different data sources and expose them through a variety of different services. For example, Netflix uses a Router in front of its Microservice architecture.
It is rarely ideal to use a Router to directly access a single SQL Database. Applications that use a single SQL store often attempt to build one SQL Query for every server request. Routers work by splitting up requests for different sections of the JSON Graph into separate handlers and sending individual requests to services to retrieve the requested data. As a consequence, individual Router handlers rarely have sufficient context to produce a single optimized SQL query. We are currently exploring different options for supporting this type of data access pattern with Falcor in future.
Falcor is really a great API if it is used in the correct way, for very relevant use cases like:
If your page has to make multiple REST endpoint calls
These calls don't depend on each other
All the REST calls happen on initial page load
Performance: if you want to cache the REST responses (for example, if the microservice already uses Gemfire caching you may not need the Falcor cache, though you could still use Falcor caching to reduce network latency)
Server request batching: when running Falcor in a Node environment, you may want to cut down the number of calls made from the client side to the Node server
Easier response parsing: if you don't want the client code to worry about extracting the data points from the REST response (including error handling)
and so on ..
However, there are plenty of situations where Falcor does not serve the purpose as well, and you are better off calling the endpoint directly:
If REST calls are dependent on one another
If you want to pass a lot of parameters when calling the endpoint
If you don't intend to cache the response(s)
If you want to share some secure cookies (e.g. XSRF tokens) with the REST web service
