Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
I am trying to understand how inter-service communication works when the services are implemented with gRPC. There are a lot of articles out there covering the basics of getting started with gRPC and how it can be compiled to many different languages, but I am still missing guidelines on how the services in a microservice architecture should best communicate with each other.
My general understanding, after working through something like this: https://www.oreilly.com/library/view/practical-grpc/9781939902580/
is that if I use gRPC in a microservice architecture where inter-service communication is gRPC based, each service will essentially (and as needed) have to implement both a server and a client stub to talk to other services.
So for me it would look something like this
And if the above is the case, then getting each of these services deployed in a K8s environment seems like quite some effort, especially making each service discoverable in the entire cluster.
Some additional notes
I am developing with Go primarily and using protobuf for defining proto files.
It would be greatly helpful if someone could comment on this or point me to a resource I can go through to better my understanding.
Thanks!
Your understanding is correct. If you want separate services that talk to each other remotely in Kubernetes, you will have to deploy each of them (Helm can help here), and you have to create Kubernetes Services so they can be discovered inside the cluster.
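As a sketch of the discovery part (all names here are illustrative, not taken from the question): each gRPC server gets a Deployment plus a Kubernetes Service, and other services dial it by its cluster DNS name.

```yaml
# Illustrative: an "orders" gRPC service exposed inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  clusterIP: None        # headless: DNS returns pod IPs, useful for gRPC client-side LB
  selector:
    app: orders
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
```

Other services can then dial `orders.default.svc.cluster.local:50051`. Note that because gRPC multiplexes calls over long-lived HTTP/2 connections, a plain ClusterIP Service balances per connection, not per call; a headless Service with gRPC client-side load balancing, or a service mesh, is a common way around that.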
I would appreciate it if anyone could answer the question below.
How should a system with hundreds of services be designed if, per microservices architecture, each and every service has to be independent, with a dedicated port? I mean, is it good practice to open hundreds of ports on the OS, for example?
Best Regards.
For security reasons, microservices are hosted in a private VPC, i.e. the nodes (where the microservices run) do not have public IPs, and the only way to access them is via an API gateway (see below). Also, "each and every service has to be independent" should be understood in terms of the domain link1 link2.
To expose services, use the API gateway pattern: "a service that provides a single-entry point for certain groups of microservices" link1 link2. Note that an API gateway is for a group of microservices, i.e. there may be several gateways for different groups of services (one for the public API, one for the mobile API, etc.).
Only you can answer this question, because only you know what problem you are trying to solve. Before deciding, I recommend reading about the MonolithFirst approach.
Microservices architecture is in some ways the next generation of ESB products, but in this case, due to the high number of services, I am not sure it is a solution!
So at first I built a simple monolith application and deployed it using Docker and Nginx (for reverse proxy only). Now I plan to separate out each service, because some services require a lot of time and IO to do their jobs. I have researched this and I know some of the components I'll need, like Spring Cloud Eureka for service discovery, etc. I'm a bit confused because I currently only use Docker and Nginx: if I add these components, do I still need Nginx on top? Can you give me an example of a structure that I should know about or apply to my project?
In your first iteration of the refactoring you can do without Service Discovery:
create a Spring Boot app for each microservice
services talk to each other directly (no need for Nginx); without Service Discovery this also means that you hardcode (or store in a property file) the URLs of the endpoints
deploy Nginx in front of the application/service that serves the end users (i.e. a web application)
Once you have validated your new architecture (splitting the responsibilities across the microservices) you can introduce Service Discovery (Eureka) so the endpoints are no longer hardcoded.
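As a sketch of those two stages in `application.properties` (the Eureka keys are standard Spring Cloud Netflix properties; the host names and the billing example are made up for illustration):

```properties
# Stage 1: no Service Discovery; peer endpoints hardcoded in a property file
billing.service.url=http://billing-host:8081

# Stage 2: with Eureka; each app registers under a logical name
# and resolves peers through the registry instead of fixed URLs
spring.application.name=billing-service
eureka.client.service-url.defaultZone=http://eureka-host:8761/eureka/
```

In stage 2, callers look services up by the logical name (`billing-service`), so moving or scaling a service no longer requires editing anyone's property files.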
Nginx is pretty light, so it can also be used for handling internal traffic if you like, but at this point your architecture should start considering volume of traffic and number of components to decide what works best.
My apologies, as the realm of this question is very broad. We are starting on a new journey of defining microservices, beginning with DDD. (We are on the .NET tech stack, but I reckon for the purposes of this discussion the topic is independent of the stack.)
At this moment, we have roughly identified the domains and defined layers such as a Domain layer, an Infrastructure layer, and an Application layer; for example, for a customer/client we have defined those layers like so. The point where we are really getting confused is how this microservice communicates with other services that are not microservices per se. Say, for example, there is a rule that a CreateCustomer command, as part of its creation, needs CreditScore verification, and this service is provided by some external provider via a facade that could be written in house; how should a microservice communicate with such a service?
Are there any patterns or recommendations for how such microservice-to-other-service communication should be defined? Any recommendations/suggestions are welcome.
A services dependency manager
+ a protocol (REST/RPC/native APIs)
+ circuit breakers
just like any other micro/service would use.
I don't understand why an (external) micro/service is not just another micro/service from your system's point of view.
" 2018: The Year of Service Mesh "
Recently I did some research on service meshes for handling service-to-service communication, and on the implementations created in the last two years. But I'm still confused about the real differences between Istio and Conduit, which already offer the same features.
So it is obvious that they are competitors, but on what basis can we, as clients, choose which project to adopt?
Both technologies provide service mesh communication and are built with a plugin architecture; the list of integrations includes Prometheus, Zipkin, etc.
As far as features are concerned, they will be at feature parity, if not now then at some point in the future. Especially with Kubernetes etc., you can always replace one with the other without affecting your services much.
I would go with Istio if your services run on Kubernetes, as both are backed by Google, and I would expect them to work much better together compared to Conduit going forward.
Last I read, Istio will be supported on Docker Swarm as well, so it all depends on where you are running your services and which mesh framework is best suited for it.
Should we use ServiceMix ESB as a bus (i.e. communication channels) or as a container to host services?
My current company hosts services (JMS/SOAP/RESTful etc., built in Java) in their own separate containers/servers, and each of these communicates with the others via the ServiceMix ESB, by adding extra bindings.
Is this a correct approach?
Should we migrate all existing services to become OSGi bundles, then host them on ServiceMix?
I'd say it depends more on your current system landscape: how do you handle failover and such? I personally would have all my services on that machine and, if routing is needed, would try to do "in-memory" routing instead of making external service calls, which would be much faster. On the other hand, this again depends purely on how your application stack works and whether you have time-critical service calls that would perform better if run inside the same JVM. So there can't be a "silver-bullet" approach to this. As usual, it depends...