How two microservices can communicate when they are hosted on same machine vs distributed? [closed] - microservices

I am new to microservices. How can my microservices communicate with each other? I have four microservices, and the parent pom.xml has dependencies for all of them. Since they are not hosted on different machines, do I even need to call a REST API to communicate? How does communication happen between these services? I am a little confused about how microservices are designed: as different modules on the same machine, or as separate projects on different machines that call each other via REST APIs?

The microservice design is deployment agnostic. Services are not supposed to know about each other's deployment: they can be deployed together or separately. They communicate via REST/SOAP, AMQP, a common resource (such as a file or a database), etc.
So in your case you need to pick one of the standard ways of communicating.
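To make that concrete, here is a minimal sketch of one service calling another over REST. Go is used purely for illustration (your project is Java/Maven), and the endpoint http://localhost:8081/customers/42 and the Customer payload are made-up placeholders for one of your four services. The only thing that changes between "same machine" and "distributed" is the host name the client dials, which is exactly why the design stays deployment agnostic:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Customer is a hypothetical payload returned by the customer service.
type Customer struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// The host name is whatever the customer service is reachable at:
	// localhost when both services run on one machine, a DNS name or a
	// service-registry lookup when they are distributed.
	resp, err := client.Get("http://localhost:8081/customers/42")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var c Customer
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		panic(err)
	}
	fmt.Printf("got customer %s (%s)\n", c.ID, c.Name)
}
```

In a Java/Maven setup the equivalent would typically be a RestTemplate or WebClient call; sharing a parent pom.xml does not mean the services should call each other's classes directly.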

Related

gRPC based microservice architecture for inter-service communication [closed]

I am trying to understand how inter-service communication works when implementing these services using gRPC. While there are lots of articles out there covering the basics of getting started with gRPC and how it can be compiled to multiple languages, I am still missing some guidelines on how each service in a microservice architecture should best communicate.
My general understanding, after following something like this: https://www.oreilly.com/library/view/practical-grpc/9781939902580/
is that if I use gRPC in a microservice architecture where inter-service communication is gRPC based, each service will essentially (and as needed) have to implement both server and client stubs to talk to the other services.
So for me it would look something like this
And if the above is the case, then getting each of these services deployed in a K8s environment seems like quite some effort, especially making each service discoverable across the entire cluster.
Some additional notes
I am developing primarily in Go and using protobuf to define the proto files.
It would be greatly helpful if someone could comment on this or share a resource that I can go through to better my understanding.
Thanks!
Your understanding is correct. If you want separate services that talk to each other remotely in Kubernetes, you will have to deploy them (Helm could help here) and create Kubernetes Services for cluster discovery.
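As a rough sketch of what the client side can look like (assuming a made-up OrderService proto compiled with protoc-gen-go-grpc, and a Kubernetes Service named orders in the default namespace; adjust the names to your own setup):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Hypothetical generated package; in practice this comes from your
	// own .proto file compiled with protoc-gen-go-grpc.
	pb "example.com/orders/gen/ordersv1"
)

func main() {
	// "orders" is assumed to be the Kubernetes Service fronting the
	// orders deployment; cluster DNS resolves it to the Service IP.
	conn, err := grpc.Dial("orders.default.svc.cluster.local:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	client := pb.NewOrderServiceClient(conn)
	resp, err := client.GetOrder(ctx, &pb.GetOrderRequest{Id: "42"})
	if err != nil {
		log.Fatalf("GetOrder: %v", err)
	}
	log.Printf("order status: %s", resp.GetStatus())
}
```

The key point is that the client dials the Kubernetes Service DNS name rather than a pod IP, so cluster DNS and the Service handle discovery and load balancing for you.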

Microservices calling other external services [closed]

My apologies, as the realm of this question is very broad. We are starting on a new journey of defining microservices, beginning with DDD. (We are on a .NET tech stack, but I reckon that for the purposes of this discussion the topic is independent of the stack.)
At this moment we have roughly identified the domains and defined layers such as a Domain Layer, an Infrastructure Layer, and Application Layers; for a customer/client, for example, we have defined the layers like so. The point where we are really getting confused is how such a microservice communicates with other services that are not microservices per se. Say, for example, there is a rule that a CreateCustomer command, as part of its processing, needs CreditScore verification, and this service is provided by an external provider via a facade that could be written in house. How should a microservice communicate with such a service?
Are there any patterns or recommendations for how such microservice-to-other-service communication should be defined? Any recommendations/suggestions are welcome.
A services dependency manager
+ a protocol (REST/RPC/native APIs)
+ circuit breakers,
just like any other micro/service would use.
I don't understand why an (external) micro/service is not just another micro/service as far as your system is concerned.
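For example, here is a sketch of the "facade + circuit breaker" idea in Go, using the sony/gobreaker package as one possible breaker implementation. The creditcheck.example.com endpoint and the thresholds are made-up placeholders:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"github.com/sony/gobreaker"
)

// creditScoreURL is a made-up endpoint standing in for the external
// credit-score provider (or your in-house facade in front of it).
const creditScoreURL = "https://creditcheck.example.com/score/42"

func main() {
	cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
		Name:    "credit-score",
		Timeout: 30 * time.Second, // how long the breaker stays open
		ReadyToTrip: func(c gobreaker.Counts) bool {
			return c.ConsecutiveFailures >= 5 // trip after 5 straight failures
		},
	})

	client := &http.Client{Timeout: 3 * time.Second}

	body, err := cb.Execute(func() (interface{}, error) {
		resp, err := client.Get(creditScoreURL)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("unexpected status %d", resp.StatusCode)
		}
		return io.ReadAll(resp.Body)
	})
	if err != nil {
		// Breaker open or call failed: fall back, queue a retry, or reject
		// the CreateCustomer command, depending on your business rule.
		fmt.Println("credit check unavailable:", err)
		return
	}
	fmt.Println("credit score response:", string(body.([]byte)))
}
```

Whether the call goes through an in-house facade or straight to the external provider, the calling microservice treats it the same way: a remote dependency behind a timeout and a breaker.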

Differences between the service Mesh projects Istio and Conduit [closed]

" 2018: The Year of Service Mesh "
Recently, I did some research on the service Mesh for handling service-to-service communication, and the implementations created in the last two years. But I'm still so confused about the real differences between Istio and Conduit that already give the same features.
So it is obvious that they are competitors but, based on what, we can choose, as clients, the project that we should take?
Both technologies provide service-mesh communication and are built with a plugin architecture; the plugin list includes Prometheus, Zipkin, etc.
As far as features are concerned, they will reach feature parity, if not now then at some point in the future. Especially on Kubernetes, you can always replace one with the other without affecting your services much.
I would go with Istio if your services run on Kubernetes, as both are backed by Google, and I expect them to work much better together than Conduit going forward.
Last I read, Istio will be supported on Docker Swarm as well, so it all depends on where you are running your services and which mesh framework is best suited for that.

Scripting access to a website using different ips [closed]

I would like to automatically test my website from different locations in order to check the localized presentation of its content. I think I have to write a bash script that accesses the website with wget, using an IP from a list. Is there an established solution to this kind of problem?
There are many solutions. I can think of these:
IP spoofing. But it's not easy, in particular if you want to orchestrate these tests to automate them...
Another solution is to use a reverse proxy. An example: your application is hosted by Tomcat and you use Apache as a reverse proxy. In this case you can easily configure several endpoints in Apache where you lie about the X-Forwarded-For (XFF) header (a test-client sketch follows after this list).
Another solution: you can rent VMs in the cloud. This is a good approach if you want to perform real performance tests from a remote client or check the behavior of Internet caches...
Some companies sell services that check the availability of your web stack from different sites.
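For the reverse-proxy option, here is a quick sketch of the test-client side in Go. The target URL and IP addresses are placeholders, and this only helps if your Apache/Tomcat setup actually trusts and uses the X-Forwarded-For header for geolocation:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Made-up test IPs; replace with addresses from the regions you care about.
	ips := []string{"81.2.69.142", "203.0.113.10", "198.51.100.7"}

	// Assumes your reverse proxy is configured to trust X-Forwarded-For
	// coming from this test client.
	const target = "https://staging.example.com/"

	client := &http.Client{Timeout: 10 * time.Second}

	for _, ip := range ips {
		req, err := http.NewRequest(http.MethodGet, target, nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("X-Forwarded-For", ip)

		resp, err := client.Do(req)
		if err != nil {
			fmt.Printf("%s: request failed: %v\n", ip, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d, %d bytes (inspect body for localized content)\n",
			ip, resp.StatusCode, len(body))
	}
}
```

With wget, the equivalent in a bash loop is passing --header "X-Forwarded-For: <ip>" on each request.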

ServiceMix ESB as Bus or Container? [closed]

Should we use ServiceMix ESB as a bus (i.e., communication channels) or as a container to host services?
My current company hosts services (JMS/SOAP/RESTful etc., built in Java) in their own separate containers/servers, and each of these communicates with the others via the ServiceMix ESB by adding extra bindings.
Is this a correct approach?
Should we migrate all existing services to OSGi bundles and host them on ServiceMix?
I'd say it depends more on your current system landscape: how do you handle failover and so on? I personally would have all my services on that machine and, if routing is needed, would try to do "in-memory" routing instead of making external service calls, which would be much faster. On the other hand, this again depends purely on how your application stack works and whether you have time-critical service calls that would perform better when run inside the same JVM. So there can't be a "silver-bullet" approach to this. As usual, it depends...
