We're planning to develop some microservices based on the Play Framework. They will provide REST APIs, and many of them will use Akka Cluster / Cluster Sharding under the hood.
We would like to have an API gateway that exposes the APIs of our internal services, but we're facing one big issue:
- Multiple instances of each service will be running on some IP and port.
- How will the API gateway know where the service instances are running?
- Is there something load-balancer-like for Play that keeps track of all running service instances?
Which solution(s) could fill the role of the "API Gateway"/"Load Balancer"?
The question you're asking is not really related to the Play Framework, and there is no single answer that will solve what you need.
You could start by reading about Akka Service Discovery and then make your choice based on what fits you best.
We're building services with akka-http and use akka-cluster, but we use unrelated technologies to expose and run the services.
Check out:
Kong for the API gateway
Consul for DNS-based service discovery
Docker Swarm for running containers, with its mesh network for load balancing
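To make the Consul option concrete, here is a minimal sketch using Consul's official Go API client (github.com/hashicorp/consul/api). The service name, instance ID, port, and health endpoint are hypothetical placeholders; this only illustrates the register-then-resolve flow.

```go
package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default: 127.0.0.1:8500).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register this instance, including an HTTP health check that Consul
	// polls; failing instances drop out of DNS and query results.
	err = client.Agent().ServiceRegister(&consul.AgentServiceRegistration{
		ID:   "orders-1", // hypothetical instance ID
		Name: "orders",   // hypothetical service name
		Port: 9000,
		Check: &consul.AgentServiceCheck{
			HTTP:     "http://localhost:9000/health", // hypothetical endpoint
			Interval: "10s",
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Resolve the healthy instances of the service (what a gateway would do).
	entries, _, err := client.Health().Service("orders", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("%s:%d\n", e.Service.Address, e.Service.Port)
	}
}
```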
You are looking for the following components:
Service Registry: the whole point of this component is to keep track of which services are running at which addresses. It can be as simple as a database that keeps entries for all running services and their instances. Generally the orchestration service is responsible for registering new service instances with the service registry; the other choice is to have the instances themselves notify the service registry of their existence.
Service Health Checker: this component is mostly responsible for running periodic checks on the registered service instances and telling the service registry when any of them stops working. The service registry can then mark those instances as "inactive" until the health checker finds them working again (if ever).
Service Resolution: this is the conceptual component responsible for enabling a client to somehow get to the running service instances.
Together, these components are called Service Discovery.
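To make the three roles concrete, here is a minimal in-memory sketch in Go (all names are illustrative, not from any real library): Register covers the registry role, Sweep the health-checker role, and Resolve the resolution role.

```go
package registry

import (
	"errors"
	"sync"
	"time"
)

// Instance is one running copy of a service.
type Instance struct {
	Addr          string
	LastHeartbeat time.Time
	Active        bool
}

// Registry maps service names to their known instances.
type Registry struct {
	mu       sync.Mutex
	services map[string][]*Instance
}

func New() *Registry {
	return &Registry{services: make(map[string][]*Instance)}
}

// Register adds an instance (the "service registry" role).
func (r *Registry) Register(name, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.services[name] = append(r.services[name],
		&Instance{Addr: addr, LastHeartbeat: time.Now(), Active: true})
}

// Heartbeat records that an instance is still alive.
func (r *Registry) Heartbeat(name, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, in := range r.services[name] {
		if in.Addr == addr {
			in.LastHeartbeat = time.Now()
			in.Active = true
		}
	}
}

// Sweep marks instances inactive if they missed a heartbeat
// (the "service health checker" role).
func (r *Registry) Sweep(timeout time.Duration) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, instances := range r.services {
		for _, in := range instances {
			in.Active = time.Since(in.LastHeartbeat) < timeout
		}
	}
}

// Resolve returns the address of an active instance
// (the "service resolution" role).
func (r *Registry) Resolve(name string) (string, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, in := range r.services[name] {
		if in.Active {
			return in.Addr, nil
		}
	}
	return "", errors.New("no active instance for " + name)
}
```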
In your case, you have load balancers which can act as a form of service discovery.
I don't think your load balancers are going to change much over time unless you require a very advanced architecture, so your API gateway can simply "know" the URLs of the load balancers for all your services. You don't really need the service registry layer.
Your load balancers also inherently provide a health-check and quarantine mechanism for instances, so you don't need an extra health-check layer either.
That leaves only one missing piece: registering your instances with the load balancer. This part you will have to figure out based on what your load balancers are and what ecosystem they live in (a sketch follows below).
If you live in the AWS ecosystem and your load balancers are ELBs, then you should have this sorted out already.
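If you ever had to automate that registration yourself, it could look roughly like this sketch using the AWS SDK for Go v2. The target group ARN and instance ID are hypothetical placeholders, and note that in practice an Auto Scaling group attached to the target group does this for you automatically.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := elbv2.NewFromConfig(cfg)

	// Register a new instance with the load balancer's target group.
	// The ARN and instance ID below are hypothetical placeholders.
	_, err = client.RegisterTargets(context.TODO(), &elbv2.RegisterTargetsInput{
		TargetGroupArn: aws.String("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-service/abc123"),
		Targets: []types.TargetDescription{
			{Id: aws.String("i-0123456789abcdef0")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```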
Based on Ivan's and Sarvesh's answers we did some research and discovered the Netflix OSS projects.
Eureka can be used as a service locator that integrates well with the Zuul API gateway. Sadly there's not much documentation on the configuration, so we looked further...
We've now finally chosen Kubernetes as the orchestrator.
Kubernetes knows about all running containers, so there's no need for an external service locator like Eureka.
Traefik is an API gateway that uses the Kubernetes API to discover all running microservice instances and load-balances across them.
Akka Management finds all nodes via the Kubernetes API and does the bootstrapping of the cluster for us.
Related
I have a Service Fabric cluster with multiple microservices, and I want to set up Azure Front Door. However, it asks for a health-check endpoint in the backend, and I don't know how I'm supposed to set that up, since the cluster doesn't have an endpoint for it.
Could anyone point me in the right direction?
You could implement a health check on your service by introducing a watchdog service, optionally tapping into the built-in health system of SF. It could look like this:
Create an ASP.NET Core Web API and implement some health checks, for example a custom check that your SF service is alive (and well). Here's how to get started. Return 200 OK from the API if the watched SF service is running correctly.
Run this Web API as an SF Service. Expose it through the Load Balancer.
Use its URL as the health endpoint for your Service(s).
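The answer above proposes ASP.NET Core; purely to illustrate the watchdog idea in compact form, here is an equivalent sketch in Go. The watched address, the /health path, and the ports are hypothetical placeholders.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Address of the watched service (hypothetical placeholder).
	watched := "http://localhost:8080/"
	client := &http.Client{Timeout: 2 * time.Second}

	// Front Door probes /health; we return 200 only if the watched
	// service answers, mirroring the watchdog idea from the answer.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		resp, err := client.Get(watched)
		if err != nil {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		resp.Body.Close()
		if resp.StatusCode >= 400 {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8081", nil))
}
```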
Currently I'm working on a real-time online game. First I implemented a Go server with socket.io for handling messages between the client and my game world, and it works fine. Now, for managing user data, I need an HTTP API for functionality like login. I want to use the awesome net/http package for that purpose. Should I serve the HTTP server on a different port?
My next question is about deployment: I want to use Google Container Engine. Can I use pods with two open ports?
As far as I understood from your explanation, you need two open ports for two different APIs running in your application. Regarding exposing two ports in Google Container Engine, you can read the discussion here, which describes ways to expose ports in a pod.
Moreover, I invite you to read this tutorial, which involves deploying an API in a GKE cluster with a containerPort in a pod, creating a Kubernetes Service to allow internal cluster traffic to your pods (routing requests on an incoming port to your API's targetPort), and creating an Ingress to define what traffic is allowed into your cluster and where it goes. You can define different APIs with different targetPorts and run them on different pods, and try that as an alternative. For more documentation on exposing applications using Services, you can read this GKE doc.
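On the first question (serving the HTTP API on a different port): yes, the usual approach is to run two listeners in one process, which then map to two containerPorts in the pod spec. A minimal sketch, with hypothetical ports and handlers standing in for the real socket.io and login logic:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Game/socket traffic on one port (placeholder handler here;
	// in the real server this would be the socket.io endpoint).
	game := http.NewServeMux()
	game.HandleFunc("/socket.io/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "game traffic")
	})

	// User-data HTTP API (e.g. login) on a second port.
	api := http.NewServeMux()
	api.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "login ok")
	})

	// Serve the game listener in a goroutine so both run concurrently.
	go func() {
		log.Fatal(http.ListenAndServe(":3000", game))
	}()
	log.Fatal(http.ListenAndServe(":8080", api))
}
```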
Our app is based on Java Spring Boot, and we are entirely on Google Cloud, where we have dynamic IPs and our server instances run behind an elastic load balancer; an instance may get spawned or killed based on server resource consumption.
None of these server instances can be assumed to have a static IP.
We're looking for a solution to connect the different server instances with dynamic IPs on Google Cloud.
Since 3.6, Hazelcast offers a Discovery SPI to integrate external discovery mechanisms into the system. As a result there are many discovery plugins, and you can also implement your own. See the list of your options here. The Kubernetes plugin might be helpful in your case.
Some additional info on top of what Sertug said:
There is also a Google Compute SPI that might be helpful; you can check it out here:
https://github.com/hazelcast/hazelcast-gcp
Also, here's a blog post (a little old but still valid):
https://blog.hazelcast.com/hazelcast-discovery-spi
I am new to microservices. I came across the terms "service registry" and "service discovery".
What I understood is that when a new service (or service instance) comes up, it registers itself with the service registry. It is also mentioned that a client can contact the service registry and get the list of IPs and ports where that service is available.
In that case, what is the role of "service discovery"?
Edit
Accepted an answer. Also, more theoretical details can be found at https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
The end-to-end process of registering services in a central place and then reaching a target service through the service registry is service discovery.
This is pretty much like using DNS to find the IP address of a site and then reaching that site using the IP address.
I am a user of Kubernetes and it also talks on similar lines:
https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
In short, service discovery is not a module with a specific role but the end-to-end steps involved in connecting from serviceA to serviceB.
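To ground the DNS analogy: in DNS-based systems (Consul DNS, or Kubernetes headless Services) resolution is literally a DNS query. A sketch in Go, where "orders.service.consul" is a hypothetical Consul-style SRV name:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Look up the SRV records for a service; Kubernetes headless
	// Services expose similar records per pod.
	_, addrs, err := net.LookupSRV("", "", "orders.service.consul")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		// Each SRV record carries the host and port of one healthy instance.
		fmt.Printf("%s:%d\n", a.Target, a.Port)
	}
}
```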
tl;dr: Service Discovery is used when the client doesn't know what service they want at first, so they start by asking for a list of services that are available.
Disclaimer: I suspect that the term is used in different ways by different systems. So take the textbook answer I give here with a grain of salt.
In general, service registry systems follow a Broker Pattern (or something similar), and fall into two categories:
White-pages brokering: clients know exactly what service they're looking for and ask for it by name
Yellow-pages brokering: clients know what kind of service they need performed, but they don't know the exact service that they want
Both systems connect clients to services, and both involve services that use a Register Pattern to enter themselves into the registry.
But yellow-pages systems require a preliminary Service Discovery step. In the Service Discovery pattern,
The client first asks for a list of services from the broker.
The client selects a service from the list.
The client requests a connection to a service from the list.
Image source: Hasan Gomaa, Software Modeling & Design (Cambridge University Press, 2011), p. 283.
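A minimal sketch of the two flavors as Go interfaces (the names are mine, not from any particular system): white pages resolve a known name directly, while yellow pages add the preliminary discovery step described above.

```go
package broker

import "fmt"

// Service describes one registered service.
type Service struct {
	Name string
	Kind string // e.g. "payment", "storage"
	Addr string
}

// WhitePages: the client already knows the exact service name.
type WhitePages interface {
	Lookup(name string) (Service, error)
}

// YellowPages: the client only knows what kind of service it needs,
// so it must discover the candidates first.
type YellowPages interface {
	ListServices(kind string) ([]Service, error)
}

// Discover performs the yellow-pages flow: list, select, connect.
func Discover(yp YellowPages, kind string) (Service, error) {
	// Step 1: ask the broker for the list of services of this kind.
	candidates, err := yp.ListServices(kind)
	if err != nil {
		return Service{}, err
	}
	if len(candidates) == 0 {
		return Service{}, fmt.Errorf("no %q services registered", kind)
	}
	// Step 2: select one from the list (here simply the first; a real
	// client might pick by load, locality, or capabilities).
	chosen := candidates[0]
	// Step 3: the caller then requests a connection to chosen.Addr.
	return chosen, nil
}
```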
I'm interested in knowing if I can use Consul to solve the following issues:
1) Can Consul be used to load-balance microservices? For instance, if I put Consul on the server that hosts my API gateway, can it be used to monitor all the microservices it has discovered and load-balance when I have two instances of the same microservice?
2) Can Consul be used at the microservice level to spin up instances as needed? Essentially, I'd like to avoid IIS and find an alternative.
3) If for whatever reason Consul observes a microservice as offline, can it attempt to start it up again? Or force a shutdown of a microservice for whatever reason?
If Consul can't solve these issues, are there alternatives?
Thank you.
Consul DNS can provide a simple way for you to load balance services. It's especially powerful if you combine it with Consul Prepared Queries and health checks.
Consul is best suited for monitoring services (via health checks) but you can use consul watch to trigger events if a service suddenly becomes unavailable.
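To illustrate point 1), a hedged sketch using Consul's Go API client: executing a prepared query returns only instances that pass their health checks, which gives you simple client-side load balancing across the survivors. The query name here is a hypothetical placeholder for one created ahead of time.

```go
package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Execute a prepared query by name ("web-query" is hypothetical).
	// Consul drops instances with failing health checks from the result,
	// so iterating the nodes spreads traffic over healthy instances only.
	resp, _, err := client.PreparedQuery().Execute("web-query", nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range resp.Nodes {
		fmt.Printf("%s:%d\n", entry.Service.Address, entry.Service.Port)
	}
}
```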
HashiCorp (the company behind Consul) offers another tool called Nomad.
Unlike Consul, Nomad is designed to run services (called jobs) and restart them if necessary.
Nomad works best if you tell it where to find Consul. This enables automatic service registration for any task Nomad launches, including deregistering it if you instruct Nomad to stop running that task. Health checks are supported as well.