My product is migrating to microservices, and the team has presented an architecture with two parts:
Micro App: the UI plus an orchestration layer.
Microservices: the individual microservices that the micro app interacts with.
Now, in this architecture, they said that the individual microservices can interact with each other directly despite the presence of the orchestration layer. This is contrary to what I read (and understood). My understanding is that individual microservices don't interact with each other directly if there is an orchestrator. Is my understanding correct?
Yes, you are correct.
In orchestration, by definition, there is a central brain that does all the communication between the microservices. The idea is that the individual microservices do not know about each other, so how could they interact with each other directly?
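To make that concrete, here is a minimal sketch of an orchestrator sequencing two services; all names (CheckoutOrchestrator, OrderServiceClient, PaymentServiceClient) are hypothetical, not taken from the architecture described above:

```java
// The orchestrator is the only component that knows both services; the
// services themselves never call each other.
class CheckoutOrchestrator {
    private final OrderServiceClient orders;     // client for the order microservice
    private final PaymentServiceClient payments; // client for the payment microservice

    CheckoutOrchestrator(OrderServiceClient orders, PaymentServiceClient payments) {
        this.orders = orders;
        this.payments = payments;
    }

    // The orchestrator sequences the calls; neither service is aware
    // that the other exists.
    void checkout(String orderId) {
        Order order = orders.load(orderId);
        payments.charge(order.customerId(), order.totalCents());
        orders.markPaid(orderId);
    }
}

interface OrderServiceClient {
    Order load(String orderId);
    void markPaid(String orderId);
}

interface PaymentServiceClient {
    void charge(String customerId, long amountCents);
}

record Order(String customerId, long totalCents) {}
```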
For more information, you can read this book, page 43.
I have an application separated into various OSGi bundles which run on a single Apache Karaf instance. However, I want to migrate to a microservice framework because
Apache Karaf is pretty tough to set up due to its dependency mechanism, and
I want to be able to bring the application later to the cloud (AWS, GCloud, whatever)
I did some research, had a look at various frameworks, and concluded that Quarkus might be the right choice due to its container-based approach, its performance, and its cloud integration opportunities.
Now, I am struggling at one point and haven't found a solution so far, though I may also have a misunderstanding here: my plan is to migrate almost every OSGi bundle of my application into a separate microservice. That way, I would be able to scale horizontally only the services for which this is necessary, and I could also update/deploy them separately without having to restart the whole application. Thus, I assume that every service needs to run in a separate Quarkus instance. However, Quarkus does not seem to support this out of the box?!? Instead, I would need to create a separate configuration for each Quarkus instance.
Is this really the way to go? How can the services discover each other? And is there a way for a service A to communicate with a service B not only via REST calls, but also to use service B's classes and methods directly by declaring a dependency on service B in service A?
Thanks a lot for any ideas on this!
I think you are mixing up some points between microservices and OSGi-based applications. With microservices you usually have an independent process running each microservice, which can be deployed on the same or on other machines. Because of that you can scale as you said and gain the benefits. But the communication is no longer in-process: it has to use a different approach, and it is highly recommended that you use a standard integration mechanism. You can use REST, JSON-RPC, or SOAP, or queues or topics for event-driven communication. Through these mechanisms you invoke the 'other' service's operations just as you do in OSGi, but through a different interface: instead of a local invocation you do a remote invocation.
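As a minimal sketch of such a remote invocation in Quarkus, assuming the MicroProfile REST Client extension is on the classpath (the interface name, path, and config key here are hypothetical, and the jakarta.* packages assume a recent Quarkus version):

```java
package org.example.client;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// A type-safe client for a remote service: the call looks like a local
// method invocation, but is sent as an HTTP GET under the hood.
@Path("/greetings")
@RegisterRestClient(configKey = "greeting-service") // base URL comes from configuration
public interface GreetingClient {

    @GET
    String getGreeting();
}

// Usage elsewhere, via CDI injection:
//   @Inject
//   @org.eclipse.microprofile.rest.client.inject.RestClient
//   GreetingClient greetingClient;
//   String message = greetingClient.getGreeting();
```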
Service discovery is something you can do with just virtual IPs, accessing other services through a common DNS name and a load balancer, or using Kubernetes DNS if you go with Kubernetes as your platform. You could also use a central configuration service, or let each service register itself in a central registry. There are already plenty of different flavours of solutions to tackle this complexity.
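A minimal sketch of the DNS-based variant, assuming a Kubernetes Service named greeting-service (the names and environment variable are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class GreetingCaller {
    public static void main(String[] args) throws Exception {
        // In Kubernetes, "greeting-service" resolves via cluster DNS to a
        // load-balanced Service IP, so no pod addresses are hard-coded.
        String base = System.getenv().getOrDefault(
                "GREETING_SERVICE_URL", "http://greeting-service:8080");

        HttpRequest request = HttpRequest
                .newBuilder(URI.create(base + "/greetings"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```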
More importantly, you will have to be aware of the new complexities you are taking on, although some of them you already have:
Contract versioning and design
Synchronous or asynchronous communication between services.
How to deal with security at the boundary of the services / Do I even need security in most of my services, or do I just need information about the user's identity?
Increased maintenance cost and redundant supporting code for common features (here Quarkus helps you a lot with its extensions, and you also have MicroProfile compatibility).
...
Deciding to go with microservices is not an easy decision, and not one that should be taken in a single step. My recommendation is that you analyse your application domain and check whether your design is fit for microservices (in terms of separation of concerns and model cohesion), and then extract small parts of your OSGi platform into microservices. Otherwise you will mostly be forced to make changes to your service interfaces, which is more difficult to do than changing a method and some invocations, because of the service-to-service contract dependency.
Because, as I understand it, stateless microservices do not rely on state. So why does the database need to be inside the microservice? I thought it should be the other way around.
I assume the location of the database does not matter, as long as the idea of statelessness holds: the server will not store any session or other state itself, but will keep it in a database, while stateful services do store sessions and other data.
Most diagrams of microservices architectures show a database associated with each service. This is to display the fact that independent microservices have independent databases. In a traditional monolith, the app would be connected to a single database. When we break a monolith into multiple microservices along domain boundaries, the ideal way is for each microservice to have its own database so that services can run and evolve independently. This is a true microservices architecture.
So, to answer your question, the database in a microservice block in a diagram just shows the independence of the service, with its own data model and logic.
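A minimal sketch of how the two ideas fit together: the service itself keeps no state between requests (so any replica can serve any request), while the state lives in the service's own database. All names here (CartService, CartRepository) are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Stateless service: no per-session state in fields, so instances are
// interchangeable and can be scaled horizontally.
class CartService {
    private final CartRepository repository; // this service's own database

    CartService(CartRepository repository) {
        this.repository = repository;
    }

    // Each request loads state from the database and writes it back;
    // nothing is remembered in the service instance itself.
    void addItem(String cartId, String productId) {
        Cart cart = repository.find(cartId);
        cart.add(productId);
        repository.save(cart);
    }
}

interface CartRepository {
    Cart find(String cartId);
    void save(Cart cart);
}

class Cart {
    private final List<String> items = new ArrayList<>();

    void add(String productId) {
        items.add(productId);
    }
}
```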
Going through literature on microservices, a common concept is that if a microservice relies on another service to service a direct request, it is not truly autonomous.
Does it mean truly autonomous microservices don't interact at all? How are systems supposed to work then?
In fact, when you decide to go with a microservices architecture, you need to decouple the business logic between your "services" as much as possible. Otherwise, you should combine them into one service.
I am applying DDD for the M part of my MVC and after some research (studying up!), I have come to the realization that I need my controller to be interacting with domain services (in the model). This would make my controller the consumer of the domain services and therefore an application service (in DDD terms). Is this accurate? Is there a difference between a controller and what DD defines as an application service?
The application layer sits between the domain layer and the presentation layer. The controller is part of the presentation layer; it sends commands or queries to the application layer, where the application services execute them using the services and objects of the domain model. So controllers are different from application services, and they are usually bound to the actual communication form, e.g. HTTP. You should not call domain services from controllers directly; that might be a sign of misplaced code.
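A minimal sketch of that separation, with all names (OrderController, PlaceOrderService, PlaceOrderCommand) hypothetical:

```java
// Presentation layer: bound to HTTP, translates the request into a command
// and delegates; it knows nothing about domain rules.
class OrderController {
    private final PlaceOrderService placeOrderService; // application service

    OrderController(PlaceOrderService placeOrderService) {
        this.placeOrderService = placeOrderService;
    }

    String placeOrder(String customerId, String productId) {
        placeOrderService.placeOrder(new PlaceOrderCommand(customerId, productId));
        return "redirect:/orders"; // presentation concern stays here
    }
}

// Application layer: executes the use case with domain services and objects.
class PlaceOrderService {
    void placeOrder(PlaceOrderCommand command) {
        // load aggregates, invoke domain services, persist, publish events...
    }
}

record PlaceOrderCommand(String customerId, String productId) {}
```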
Domain Driven Design: Domain Service, Application Service
Domain Services: Encapsulate business logic that doesn't naturally fit within a domain object, and are NOT typical CRUD operations - those would belong to a Repository.
Application Services: Used by external consumers to talk to your system (think Web Services). If consumers need access to CRUD operations, they would be exposed here.
So your service is probably an application service and not a domain service, or partly an application service and partly a domain service. You should check and refactor your code. I guess after 4 years this does not matter, but I had the same thoughts about an application I am currently developing. That app might be too small to use DDD on, so confusing controllers with application services is a sign of overengineering there.
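A minimal sketch of the distinction, with all names (TransferService, TransferAppService, AccountRepository) hypothetical:

```java
// Domain service: business logic that spans two aggregates and fits
// naturally in neither of them. Not a CRUD operation.
class TransferService {
    void transfer(Account from, Account to, long amountCents) {
        from.withdraw(amountCents); // the rules themselves live in the entities
        to.deposit(amountCents);
    }
}

// Application service: the entry point external consumers talk to; it
// loads aggregates, invokes the domain service, and persists the result.
class TransferAppService {
    private final AccountRepository accounts;
    private final TransferService transferService = new TransferService();

    TransferAppService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    void transfer(String fromId, String toId, long amountCents) {
        Account from = accounts.find(fromId);
        Account to = accounts.find(toId);
        transferService.transfer(from, to, amountCents);
        accounts.save(from);
        accounts.save(to);
    }
}

interface AccountRepository {
    Account find(String id);
    void save(Account account);
}

class Account {
    private long balanceCents;

    void withdraw(long amountCents) { balanceCents -= amountCents; }
    void deposit(long amountCents) { balanceCents += amountCents; }
}
```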
It is an interesting question when to start adding more layers. I think every app should start with some kind of domain model and adapters to connect to that domain model. So if the app is simple enough, adding more than two layers might not be necessary. But that's just a thought; I am not that experienced with DDD.
The controller is not considered a service in DDD. Controllers operate in the UI tier. The application services get data from the DB, validate data, pass data to the client (MVC could be a client, but so could a request coming from a WinForms app), etc.
All the controller is doing is servicing requests from the UI. It's not part of the application domain.
In the DDD Reference, controllers are not mentioned at all. So I think from a DDD point of view the answer is undefined, and I would consider the more practical question: "Do we need to separate controllers and application services?"
Pros:
Separating input processing from use case implementation.
Cons:
Extra layer of indirection that complicates understanding of simpler cases. (It is also annoying to implement some trivial thing by pulling data through many layers without anything actually being done.)
So I would choose to separate the controller and the application service if input processing obfuscates the use case logic. I would keep the use case logic in the controller if both are simple enough that you can easily see the use cases in the code separately from the input processing.
A layered architecture splits the application up into a UI layer, application layer, domain layer, and infrastructure layer (Vaughn Vernon's Implementing Domain-Driven Design, location 2901). The controller falls in the application layer of this broader architecture and will therefore interact directly with the domain services in the model, and it is considered an application service. Not only that, it will obviously also use the entities and aggregates as available.
I want to adopt an EIP solution for the cloud deployment of a web application:
The application will be developed in such a way that each layer (e.g. data, service, web) comes out as a separate module and artifact.
Each layer can be deployed on a different virtual resource in the cloud. In this regard, web nodes need to find the related service nodes, and likewise service nodes are connected to data nodes.
Objects in the service layer provide REST access to the services in the application. The web layer is supposed to use the REST services from the service layer to complete requests for users of the application.
For the above requirement to deliver a "highly scalable" application on the cloud, it seems that solutions such as Apache Camel, Spring Integration, and Mule ESB are significant options.
There seem to be other discussions, such as a question or a blog post on this topic, but I was wondering if anybody has specific experience with such a deployment scheme on "the cloud"? I'd be thankful for any ideas and shared experiences. TIA.
To me this looks a bit like overengineering. Is there a real reason that you need to separate all those layers? What you describe looks a lot like the J2EE applications from some years ago.
How about deploying all layers of the application onto each node and just using simple Java calls or OSGi services to communicate?
This approach has several advantages:
Less complexity
No serialization or DTOs
Transactions are easy / no distributed transactions necessary
Load balancing and failover are much easier, as you can do them on the web layer only
Performance is probably a lot higher
You can implement such an application using Spring or Blueprint (on OSGi).
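A minimal sketch of this co-located deployment, where all layers live in one JVM and communicate via plain Java calls (all class names hypothetical):

```java
// Data layer: stands in for a real repository/DAO.
class DataLayer {
    String loadCustomerName(String id) {
        return "customer-" + id;
    }
}

// Service layer: plain method call into the data layer; no serialization,
// no DTOs, and local transactions suffice.
class ServiceLayer {
    private final DataLayer data;

    ServiceLayer(DataLayer data) { this.data = data; }

    String greet(String customerId) {
        return "Hello, " + data.loadCustomerName(customerId);
    }
}

// Web layer: the only tier you need to load-balance; scaling means adding
// identical nodes behind the balancer.
class WebLayer {
    private final ServiceLayer service;

    WebLayer(ServiceLayer service) { this.service = service; }

    String handleRequest(String customerId) {
        return service.greet(customerId);
    }
}

class Application {
    public static void main(String[] args) {
        WebLayer web = new WebLayer(new ServiceLayer(new DataLayer()));
        System.out.println(web.handleRequest("42"));
    }
}
```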
Another option is to use a modern Java EE server. If this interests you, take a look at some of Adam Bien's courses. He shows how to use Java EE in a really lean way.
For communication between nodes I have had good experiences with Camel and CXF, but you should try to avoid remoting as much as possible.