Microservices Architecture Best Practice [closed] - spring-boot

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
This post was edited and submitted for review 1 year ago and failed to reopen the post:
Original close reason(s) were not resolved
I would appreciate it if anyone could answer the question below.
How would a system with hundreds of services be designed if each and every service has to be independent, with a dedicated port, as per microservices architecture? I mean, is it good practice to open hundreds of ports on the OS, for example?
Best Regards.

For security reasons, microservices are hosted in a private VPC, i.e. the nodes (where the microservices run) do not have public IPs, and the only way to access them is via an API gateway (see below). Also, "each and every service has to be independent" should be understood in terms of domains link1 link2.
To expose services, use the API Gateway pattern: "a service that provides a single-entry point for certain groups of microservices" link1 link2. Note that an API gateway is for a group of microservices, i.e. there may be several gateways for different groups of services (one for the public API, one for the mobile API, etc.).
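As a rough illustration of the single-entry-point idea, here is a minimal gateway sketch in Go that routes path prefixes to internal backends. The service names (`user-service`, `order-service`) and ports are hypothetical, and a production setup would use a dedicated gateway product rather than hand-rolled routing:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// routes maps public path prefixes to internal backends. The backend
// names are hypothetical; in a private VPC they would resolve via
// internal DNS, and only the gateway has a public address.
var routes = map[string]string{
	"/users/":  "http://user-service:8080",
	"/orders/": "http://order-service:8080",
}

// resolve picks the backend for a request path ("" if none matches).
func resolve(path string) string {
	for prefix, backend := range routes {
		if strings.HasPrefix(path, prefix) {
			return backend
		}
	}
	return ""
}

// gatewayHandler forwards each request to its backend as a reverse proxy.
func gatewayHandler(w http.ResponseWriter, r *http.Request) {
	backend := resolve(r.URL.Path)
	if backend == "" {
		http.NotFound(w, r)
		return
	}
	target, _ := url.Parse(backend)
	httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}

func main() {
	fmt.Println(resolve("/users/42")) // prints "http://user-service:8080"
	// A real gateway would run:
	// http.ListenAndServe(":443", http.HandlerFunc(gatewayHandler))
}
```

Note that only one port (the gateway's) needs to be reachable from outside, which addresses the "hundreds of open ports" concern directly.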
Only you can answer this question, because only you know what problem you are trying to solve. Before deciding, I recommend reading about the MonolithFirst approach.

Microservices architecture is in some ways the next generation of ESB products, but in this case, due to the high number of services, I am not sure it is a solution!

Related

How two microservices can communicate when they are hosted on same machine vs distributed? [closed]

Closed 2 years ago.
I am new to microservices. How can I communicate between my microservices? I have four microservices, and the parent pom.xml has dependencies for all of them. Now, as they are not hosted on different machines, I don't need to call a REST API to communicate. How does communication happen between these services? I am a little confused: are microservices designed as different modules on the same machine, or as separate projects on different machines that then call each other via REST APIs?
The microservice design is deployment agnostic. Services are not supposed to know about each other's deployment. They can be deployed together or separately. They communicate via REST/SOAP, AMQP, a common resource (like a file or DB), etc.
So in your case you need to pick one of the standard ways of communicating.

gRPC based microservice architecture for inter-service communication [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I am trying to comprehend how inter-service communication works when implementing these services using gRPC. While there are lots of articles out there covering the basics of getting started with gRPC and how it can be compiled to multiple different languages, I am still missing some guidelines on how each service in a microservice architecture should best communicate.
My general understanding, after following through something like this: https://www.oreilly.com/library/view/practical-grpc/9781939902580/
is that if I use gRPC in a microservice architecture where inter-service communication is gRPC based, each service will essentially (and as needed) have to implement both a server and client stubs to talk to other services.
So for me it would look something like this
And if the above is the case, then getting each of these services deployed in a K8s environment seems like quite some effort, especially making each service discoverable in the entire cluster.
Some additional notes
I am developing primarily in Go and using protobuf for defining my proto files.
It'll be greatly helpful if someone could comment on this or has a resource that I can go through to better my understanding.
Thanks!
Your understanding is correct. If you want to have separate services that talk to each other remotely in Kubernetes, you will have to deploy them (Helm could help here), and you have to create Kubernetes Services for cluster discovery.
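To illustrate the "each service is both server and client" structure without the protoc toolchain, here is a Go sketch using plain interfaces as stand-ins for generated gRPC stubs. All names are hypothetical; real code would use the stubs generated from your .proto files and dial a Kubernetes Service DNS name:

```go
package main

import "fmt"

// UserClient is a stand-in for a protoc-generated gRPC client stub
// (a real one would come from your .proto files via protoc-gen-go-grpc).
type UserClient interface {
	GetUserName(id int) (string, error)
}

// OrderService plays both roles, as the question describes: it serves
// its own RPCs and holds a client stub for the user service. In
// Kubernetes the stub would dial a Service DNS name such as
// "user-service:50051", which is how cluster discovery works.
type OrderService struct {
	users UserClient
}

func (s *OrderService) DescribeOrder(orderID, userID int) (string, error) {
	name, err := s.users.GetUserName(userID)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("order %d for %s", orderID, name), nil
}

// fakeUserClient is a local stand-in for the remote user service,
// useful for testing a service without deploying its dependencies.
type fakeUserClient struct{}

func (fakeUserClient) GetUserName(id int) (string, error) { return "Ada", nil }

func main() {
	svc := &OrderService{users: fakeUserClient{}}
	desc, _ := svc.DescribeOrder(7, 1)
	fmt.Println(desc) // prints "order 7 for Ada"
}
```

Because the dependency is an interface, the same service code works against the real generated stub in the cluster and a fake in unit tests.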

Microservices calling other external services [closed]

Closed 4 years ago.
My apologies, as the realm of this question is very broad. We are starting on a new journey of defining microservices, beginning with DDD. (We are on a .NET tech stack, but I reckon for the purposes of this discussion the topic is independent of the stack.)
At this moment, we have roughly identified the domains and defined layers such as a Domain Layer, Infrastructure Layer, and Application Layer; for example, for a customer / client we have defined those layers accordingly. The point where we are really getting confused is how this microservice communicates with other services which are not microservices per se. Say, for example, there is a rule that a CreateCustomer command, as a part of its creation, needs CreditScore verification, and this service is provided by some external provider via a facade that could be written in house; how should a microservice communicate with such a service?
Are there any patterns or recommendations for how such microservice-to-other-service communication should be defined? Any recommendations / suggestions are welcome.
Just like any other micro/service would:
+ a services dependency manager
+ a protocol (REST/RPC/native APIs)
+ circuit breakers
I don't understand why an (external) micro/service is not just another micro/service for your system.

Differences between the service Mesh projects Istio and Conduit [closed]

Closed 4 years ago.
"2018: The Year of Service Mesh"
Recently, I did some research on service meshes for handling service-to-service communication and on the implementations created in the last two years. But I'm still confused about the real differences between Istio and Conduit, which already offer the same features.
So it is obvious that they are competitors, but based on what can we, as clients, choose the project we should take?
Both technologies provide service mesh communication and are developed with a plugin architecture; the plugin list includes Prometheus, Zipkin, etc.
As far as features are concerned, they will be at feature parity, if not now then at some point in the future. Especially with Kubernetes, you can always replace one with the other without affecting your services much.
I would go with Istio if your services are based on Kubernetes, as both are backed by Google, and I expect them to work much better together than Conduit going forward.
Last I read, Istio will be supported on Docker Swarm as well, so it all depends on where you are running your services and which mesh framework is most suited for it.

WCF REST to web API [closed]

Closed 5 years ago.
I want to migrate from WCF REST services to Web API (around 30 endpoints to be created, with 6 complex methods). I just want to decide, based on the budget available (one month with one resource), which of the options below would be the better solution.
1. Writing whole new code for the Web API, just reusing the logic already present in the WCF REST services.
2. Creating the API endpoints and calling the WCF services inside them.
There is no real way to tell for sure without knowing more details (or maybe the entire project).
If you're not sure the time will be enough, one thing you can do is start with option 2 and then replace each endpoint with the actual code from the WCF service. If one month proves not to be enough, you may end up with a mixed solution (where some methods are implemented in the Web API and some are wrappers calling the WCF service). However, you will be able to keep slowly moving the methods to the Web API and finish eventually.
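The incremental path described above is language-agnostic; here is its shape sketched in Go (the question's stack is .NET, so treat this purely as structure): a new endpoint starts as a thin wrapper over the legacy call and is later repointed at a native implementation without changing the public contract. All names are hypothetical:

```go
package main

import "fmt"

// legacyGetCustomer stands in for a call to the existing WCF REST
// service (hypothetical; under option 2 the new API just forwards here).
func legacyGetCustomer(id int) string {
	return fmt.Sprintf("customer-%d (from WCF)", id)
}

// nativeGetCustomer is the eventual reimplementation of the same logic.
func nativeGetCustomer(id int) string {
	return fmt.Sprintf("customer-%d (native)", id)
}

// getCustomer is the new public endpoint. It starts as a thin wrapper
// (option 2) and can later be swapped for the native implementation
// (option 1) one endpoint at a time, without changing callers.
var getCustomer = legacyGetCustomer

func main() {
	fmt.Println(getCustomer(3)) // still delegating to the legacy service

	// One endpoint migrated: repoint the handler, contract unchanged.
	getCustomer = nativeGetCustomer
	fmt.Println(getCustomer(3))
}
```

The mixed state the answer describes is exactly this: some endpoints already repointed, the rest still delegating, all behind one stable API.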
