Circular dependency of client libraries - microservices

I am currently working on 2 microservices. Microservice A includes microservice B's SDK to call B's API and access some of B's entity classes. Similarly, microservice B includes microservice A's SDK and accesses microservice A's entity classes.
Now I am facing an issue whenever I need to bump the version of microservice B's SDK in microservice A, and vice versa.
How should I solve this problem?

You've already broken the cardinal rule of microservices by tightly coupling the two services.
The right answer here is going to be to refactor these services such that they properly and completely encapsulate functionality. That could involve combining them (if B is always and completely dependent on A, they may not really be separate services), splitting them out into more services, or just shifting responsibility.
But the path you're on, with tightly coupled microservices, leads to a distributed monolith, which is unlikely to provide the benefits you're after (specifically including the co-dependent revision concern you mention here).
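As a concrete illustration of "shifting responsibility", one common way to break the compile-time cycle is for each service to stop importing the other's entity classes and instead keep its own local representation, mapping at the boundary. A minimal sketch, assuming plain HTTP between the services; all names here (OrderSummary, BClient, the URL) are invented for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A's own view of the data it needs from B -- owned and versioned by A,
// so a new release of B no longer forces a rebuild of A.
record OrderSummary(String orderId, long totalCents) { }

// Thin client owned by A; only this class knows B's wire format.
public class BClient {
    private final HttpClient http = HttpClient.newHttpClient();

    public OrderSummary fetchOrder(String id) throws Exception {
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://service-b/orders/" + id))
                .GET().build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        // Map B's JSON into A's own type here (JSON parsing omitted);
        // B's SDK and entity classes are no longer on A's classpath.
        return toOrderSummary(response.body());
    }

    private OrderSummary toOrderSummary(String json) {
        /* mapping code goes here */
        return new OrderSummary("todo", 0L);
    }
}
```

With both compile-time edges removed, each service can release on its own schedule; the contract is the HTTP API, not a jar version.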
Here's a good answer to a related question that might offer some more insight.

Related

Separate microservice just for microservices orchestration?

I have a few microservices where each microservice has REST endpoints for CRUD operations.
I have to create a workflow that will start from one microservice with some initial input, but later outputs from a microservice can be used as input to other microservices. There can be some synchronous and asynchronous calls to these REST APIs.
I have looked at some workflow engines, but I do not think I can create my workflow without writing any Java code.
Should I write a separate microservice just for microservices orchestration? This orchestration microservice would know the exact workflow, could be configured with the inputs required to start the workflow, and could also use a third-party workflow engine like Camunda to store the workflow definition.
Is it correct thinking to have a separate microservice just for microservices orchestration? So far, the existing microservices have no knowledge of one another. There is also a chance that the output from one microservice needs to be massaged before being used as input for another microservice.
I have looked at some workflow engines, but I do not think I can create my workflow without writing any Java code.
This depends on your business processes and the complexity of your workflow. Usually, yes, you will need to write some code to achieve it.
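To give a feel for how little code that usually is: with an engine like Camunda, the custom Java typically amounts to small delegate classes attached to service tasks in the BPMN model. A minimal sketch (the class name, variable names, and the stubbed service call are all invented for illustration):

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Attached to a service task in the BPMN model; the engine calls
// execute() when the process instance reaches that task.
public class ReserveStockDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Read a process variable produced by an earlier step ...
        String orderId = (String) execution.getVariable("orderId");

        // ... call the downstream microservice (client code stubbed out) ...
        String reservationId = callStockService(orderId);

        // ... and put the (massaged) result back for the next step to use.
        execution.setVariable("reservationId", reservationId);
    }

    private String callStockService(String orderId) {
        return "res-" + orderId; // stand-in for a real REST call
    }
}
```

The engine takes care of persistence and routing between steps; the delegate only does the call and the data mapping.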
Should I write a separate microservice just for microservices orchestration? This orchestration microservice would know the exact workflow, could be configured with the inputs required to start the workflow, and could also use a third-party workflow engine like Camunda to store the workflow definition.
Yes, you can do that. I did something similar on a system using microservices. This would be a very good idea in the long run, as you could also configure your workflow per environment. For example, on your development machine you would have a slightly different workflow/configuration, which is practical for developers or QAs testing their solutions. On staging/production, on the other hand, you can pre-define customer setups/orchestrations which you can reuse any time you get new customers or users.
Is it correct thinking to have a separate microservice just for microservices orchestration? So far, the existing microservices have no knowledge of one another. There is also a chance that the output from one microservice needs to be massaged before being used as input for another microservice.
Yes, you can do that without problems, although I would be careful with the name "orchestration", as that word has another meaning in the context of microservice architecture (Docker, Docker Swarm, Kubernetes).

Similar examples would be some kind of end-to-end-test or cross-microservice testing service, which tests cross-microservice business operations and asserts the results. Business operations usually involve more than one microservice, so in order to test them you can use this approach: the testing microservice calls APIs from multiple microservices and verifies the results and scenarios against your business rules.

Another example would be something like a seeder microservice (which seems very similar to what you are trying to do here). This seeder microservice would be responsible for seeding (creating) test data in your microservices. The test data is the basic setup/configuration data you need in order for your microservices' business processes to work. This is very handy for development machines or test environments where you need to set up an environment quickly: using the seeder you can easily set up your environment, do your work or tests, and dispose of the environment (data) as needed. It is especially useful for development machine setups, but it can also be used on shared test environments.

Both of these examples are microservices that serve your needs and make it easier to work with your system.
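A minimal sketch of such a seeder, assuming the services expose plain REST endpoints (all URLs and payloads below are invented for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Posts a fixed set of baseline records to each service's public REST API,
// exactly as an external client would -- no access to internal databases.
public class Seeder {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        seed("http://user-service/api/users", "{\"name\":\"test-user\"}");
        seed("http://catalog-service/api/products", "{\"sku\":\"demo-1\"}");
    }

    private static void seed(String url, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response =
                HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> " + response.statusCode());
    }
}
```

Because it goes through the public APIs, the seeder exercises the same code paths as real clients, which keeps the seeded data consistent with the services' own validation rules.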
One final note regarding this:
So far, the existing microservices have no knowledge of one another.
They should be abstracted from each other in the sense that they are not aware of each other's internal implementation or data (separate databases), but they do need to communicate with each other in order to perform business operations, which are sometimes cross-microservice, like the typical example of a payment microservice and an order microservice in an online shop. So it is fine that they know about each other and communicate, but this communication has to be designed very carefully in order to avoid some common pitfalls.
They usually communicate with direct calls over HTTP (or some other protocol), or through a message queue like Apache Kafka or RabbitMQ. You can read more about it in this answer.
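For the message-queue style, the key property is that the publisher does not know who consumes. A minimal sketch using the Apache Kafka Java client (broker address, topic name, and payload are invented for illustration):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // The publishing service only knows the topic and the event schema,
        // not which services will consume the event.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("order-events", "order-42",
                    "{\"type\":\"ORDER_PLACED\",\"orderId\":\"order-42\"}"));
        }
    }
}
```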
Yes, you should cover the orchestration part in a separate service. And yes, go with a BPMN 2 process engine as the orchestrator, as you already suspected. And yes, this may involve writing a little code, mostly for data mapping or connectors.
Benefits include, for instance, out-of-the-box support for:
state management and long-running processes / persistence for data
versioning (!)
retries and error handling
tooling to modify state and data in case something went wrong
timeouts, parallel execution (if necessary)
scalability
graphical process model
audit trail
end-to-end visibility in monitoring tools based on the BPMN 2 model
ability to include business rules tasks (DMN) for more complex rules
combination of push and pull communication patterns, and sync/async communication
business-IT alignment via BPMN 2
support for the various BPMN 2 events
standardization (skills, security, software quality, features)
...
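To make the "little code" tangible: bootstrapping an embedded Camunda engine and starting a process instance takes only a few lines. A minimal sketch, assuming a BPMN file named order-fulfillment.bpmn with process id orderFulfillment (both names invented):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class OrchestratorBootstrap {
    public static void main(String[] args) {
        // Embedded, in-memory engine -- enough to see deployment, state
        // persistence, and process start working end to end.
        ProcessEngine engine = ProcessEngineConfiguration
                .createStandaloneInMemProcessEngineConfiguration()
                .buildProcessEngine();

        // Deploy a BPMN 2 model from the classpath and start one instance.
        engine.getRepositoryService().createDeployment()
                .addClasspathResource("order-fulfillment.bpmn")
                .deploy();
        engine.getRuntimeService()
                .startProcessInstanceByKey("orderFulfillment");
    }
}
```

From here, the benefits listed above (versioning, retries, audit trail) come from the engine rather than hand-written plumbing.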
This is a great related article about the WHY, using an airline ticket booking as an example:
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
This is about the important design considerations in case you go with a process engine:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
I faced a similar problem to the one in the original question. I had a few microservices with simple dependencies that needed to be managed, and I went down the path of writing my own microservice, https://github.com/pedro-r-marques/workflow, to manage the dependencies and do the orchestration. It uses a YAML definition file to describe the dependencies and RabbitMQ for message passing. One could also replace RabbitMQ with REST API calls fronted by a load balancer.

How to decide on the number of microservices and whether to have common jars

I have 3 microservices: one that serves requests from the UI, one that serves requests from the public APIs, and a third that does some data processing and stores the data provided by the UI/public APIs via a Kafka topic.
I have written a common service and DAO jar for the services, as the data comes from a common data source.
If I don't have a common service/DAO, a lot of code will be duplicated.
I now feel that this is causing coupling between the services.
Is it the right design?
Using a common DAO across microservices is right if it is making development faster and easier to understand for everyone, and wrong if it's not. You are right that this is creating some coupling between the services, but it's coupling that you could easily do away with if the DAOs for the services began to diverge. Since the final shared package will be inside each service's runtime, there would be zero issues introduced if one of the other services decided to stop using the DAO and use a different one.
That being said, you may have a larger coupling issue if all three services are using this DAO to connect to a shared database. If each is dependent on the same tables/schema, it makes it very hard for one service to diverge from the others and make independent schema changes without impacting the others.
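For concreteness, the shared jar might look like the sketch below (all names invented). The important property is that it is a build-time dependency compiled into each service's artifact, so any one service can drop it and supply its own DAO without affecting the others:

```java
// Hypothetical contents of the shared DAO jar: an interface plus plain
// data holders. Each service embeds a copy at build time, so there is no
// runtime coupling -- only the schema of the shared database couples them.
public interface CustomerDao {
    Customer findById(long id);

    void save(Customer customer);
}

// Plain data holder shared alongside the DAO (fields illustrative).
class Customer {
    long id;
    String name;
}
```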

Using microservices architecture in spring

I'm building a project based on a microservices architecture in Spring Boot. The project is divided into multiple modules, and I use Maven dependency management.
Now I want to use services from one module in another module. I have many Spring applications. For example, I have 2 applications named A and B. I want to use classes from A in B and classes from B in A. I used Maven dependencies for this, but it is not a workable approach, because I ran into a circular dependency.
What should I do to solve this problem?
It is not a good idea to share classes between microservices: if you want to replace microservice A, you'll have to adapt microservice B.
Every service must implement its own data classes, which hold exactly the fields that service needs.
Microservice A and microservice B can both contain a class Foo, but the two classes can differ in their fields. Perhaps both contain the fields 'id' and 'name', but only microservice A also needs a field 'date' to do its work.
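Spelled out as code, the duplication is deliberate and tiny (field names follow the example above; everything else is illustrative, and the two classes live in two separate codebases):

```java
// --- source file in microservice A's codebase ---
class Foo {
    String id;
    String name;
    java.time.LocalDate date; // only A needs the date to do its work
}

// --- separate source file in microservice B's codebase ---
class Foo {
    String id;
    String name; // same concept, but only the fields B actually uses
}
```

Because each service owns its Foo, A can add or rename fields without a coordinated release of B.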
If you have classes that need to exist in several of your microservices, I think it's better to make a shared library, put the shared classes in it, and then use that library in your microservices.
Actually, I think it's a good idea to put classes that are needed in most of your microservices into a shared library and use that library. But you should be careful, because it can lead to tight coupling, which isn't a good thing in a microservices architecture.
Personally, I think some configuration classes and some event models that most of your microservices use are good candidates. But I don't think sharing your service classes between your microservices is a good idea; instead, the microservices should use each other's services as if they were completely independent, external services.
Create one common entity application and add that entity application as a dependency. For example, assume you have stored user data in microservice 1 (MC1) and need this class (User) in other microservices (MC2, MC3, MC4, and so on); then you can create one entity application, like a util module, and add it as a dependency in the microservices that require it.
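If you do go the shared-library route, keeping it to stable, cross-cutting types (as the previous answer suggests) limits the coupling. A sketch of what might live in such a hypothetical common-model jar, added to each service's pom.xml as a normal Maven dependency (all names invented):

```java
package com.example.common.events;

import java.time.Instant;

// An event payload shared between publisher and consumers. Event models
// like this change rarely, which is why they are safer to share than
// service or DAO classes.
public class UserCreatedEvent {
    private String userId;
    private String email;
    private Instant occurredAt;
    // getters/setters omitted for brevity
}
```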

Difference between OSGi services and REST microservices [closed]

OSGi talks about microservices and the press talks about microservices. However, they do not seem to be the same. What is the difference between these microservices?
OSGi and microservices share the same architectural style but differ in their granularity. We actually used to call OSGi services microservices until the web stole that name. We now sometimes call them nanoservices.
The principle of (micro|nano)services is to tunnel the communication between modules through a gate with a well-defined API. Since an API is, or at least should be, independent of its implementations, you can change one module without affecting the other modules. One of the most important benefits is that the design of even a large system can remain understandable when looking at the service diagram. In a way, a service-based design captures the essence of the system, leaving the details to the modules.
With web/micro services the gate is a communication endpoint (host:port for example) and protocol (REST for example). The API is defined informally or with something like Swagger/OpenAPI or SOAP.
OSGi defines a (nano) service as an object that is made available to other modules (bundles) to use. Java is used to define the API.
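In code, defining and publishing such a nanoservice is ordinary Java against the OSGi framework API. A minimal sketch (the Greeter interface and the greeting are invented; in practice the API interface would live in its own exported package):

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// The service API -- plain Java, shared between bundles.
interface Greeter {
    String greet(String name);
}

// The providing bundle registers an implementation object in the service
// registry; consuming bundles look it up by the interface alone.
public class GreeterActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        context.registerService(Greeter.class, name -> "Hello, " + name, null);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically.
    }
}
```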
Since nanoservices are OSGi's most important design primitive, there is a lot of support for making them easy to work with. Interestingly, since the service registry is dynamic and reflective, it is quite straightforward to map a nanoservice to a microservice and vice versa. The OSGi Alliance standardized this in its model for distributed OSGi, the "Remote Service Admin". This specification allows you to take an OSGi nanoservice and map it to REST, SOAP, or other protocols.
Therefore, choosing OSGi allows you not only to defer the decision to support microservices, but also to add microservices to your system afterwards. Having a unified architectural style for the most basic functions as well as the highest-level functions makes systems easier to understand as well as to scale.
I don't think you're comparing apples to apples here. OSGi is an application architecture, while microservices are a distributed-systems concept.
In my experience, microservices offer a number of benefits:
Individual microservices are easy to deploy, test, and maintain.
Microservices are language agnostic. That means you could write one microservice in Python, another in JavaScript, a third in Go, and yet another in Java.
Microservices are easy to scale individually. That means that if one type of request is made more often than others, you could scale the microservice you need to, without scaling anything else in the system.
Each microservice in your system owns its own data. This ensures clear boundaries and separation of concerns.
However, they also have some drawbacks:
There are more infrastructure concerns when deploying.
It's difficult to keep messaging between microservices clean and efficient.
It's harder to do end-to-end testing on a system with many moving parts.
There's more overhead in messaging. Instead of a call to another service being a direct method call, it needs to use HTTP or some other form of network communication.
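To illustrate that last point: what would be a direct call such as inventoryService.reserve(itemId) inside one process becomes a network round trip, with serialization, timeouts, and failures to handle. A minimal sketch using Java's built-in HTTP client (service URL and endpoint are invented):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReserveCall {
    // The same logical operation as inventoryService.reserve(itemId),
    // now expressed as an HTTP request to another process.
    static String reserve(String itemId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://inventory-service/reserve/" + itemId))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```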
There's a good article describing some of the differences here.

What is a good reason for separating intelligence and dao layers in a microservice?

I am having a long-running debate with my architect about architecture choices. The enterprise where I work is migrating from a monolithic architecture to a microservices one.
The debate centers on the right approach to handling database access. One of us maintains that there is no need to separate the DAO from the service (database access is handled directly by the service class), and the other maintains the contrary.
We have been discussing it for days; I can't find a good argument to convince him, and he cannot convince me either.
The question is actually really simple: in an atomic REST microservice (a microservice handling only one REST method), should we have a separate DAO class or not? What arguments would you give for separating the two, or for keeping everything together?
This is an OOP-oriented project (Java), if it matters.
Edit: This question has been reposted on Software Engineering.
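To make one of the classic pro-separation arguments concrete: with a separate DAO interface, the service logic can be unit-tested against a fake, and the persistence technology can change behind the interface. A minimal sketch (all names invented):

```java
// DAO boundary: the only place that knows how accounts are persisted.
interface AccountDao {
    Account findById(long id);
}

// Plain data holder for the example.
record Account(long id, long balanceCents) { }

// Service logic depends only on the interface, so a test can pass in a
// fake DAO, and a JDBC/JPA implementation can be swapped in for production.
public class AccountService {
    private final AccountDao dao;

    public AccountService(AccountDao dao) {
        this.dao = dao;
    }

    public boolean isOverdrawn(long id) {
        return dao.findById(id).balanceCents() < 0;
    }
}
```

The counter-argument in a one-endpoint microservice is that the whole service may be small enough to rewrite wholesale, so the indirection might not pay for itself; the sketch only shows what the separation buys when it does.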
