Spring Cloud Consul cluster: connecting with Spring Boot

I'm working with Spring Cloud Consul as a config server and service discovery tool in a test environment. According to the Consul documentation, a Consul cluster needs at least 3 servers (5 recommended), and Consul automatically elects a leader server through an internal election. The problem is: in a Consul cluster, which server should my Spring Boot service connect to? If the leader server changes, what will happen to my service? And what is the best practice for using Consul in a real production environment?
I should mention that I have already studied the Consul website documentation, but what I need is a real example of an architecture and best practices for a production environment. Note that my services have high TPS due to USSD transactions in a telecom company. Thanks in advance.
I have tried connecting to the Consul server and I'm using it in a test environment as a config and service discovery tool. I have nearly 100 services that use Consul as their config and discovery server within an internal network of about 100 VMs and containers.
I expect an architecture for using Consul in a real production environment, with pros and cons.

Related

Spring Boot application registers the same instance multiple times in Consul cluster

I am trying to register a Spring Boot app to a Consul cluster.
I have a 3-node Consul cluster: 1 master and 2 agents.
I have a load balancer in front of the 2 Consul agents, so that it is HA.
In my application.yml I ask the services to join via the load balancer:
spring:
  cloud:
    consul:
      enabled: true
      port: loadbalancer_port
      host: http://loadbalancer
      discovery:
        instance-id: ${info.app.environment}:${spring.application.name}
        tags:
          - ${spring.profiles.active}
Now, when my service restarts it creates a duplicate entry in Consul.
I figured out that this happens because the service is being registered through 2 different agents.
Does this mean I can't have HA Consul with a load balancer? Or should I ask the services to register with particular agents, without the load balancer?
Please help!!
Consul is designed to have a Consul client agent deployed on each server in your data center (see Consul Reference Architecture). Instead of registering services centrally, services running on a machine are registered with the local/co-located Consul agent. The agents then submit the list of services registered against them to the Consul servers, which aggregate this info from each agent to form the service catalog. The catalog maintains the high-level view of the cluster, including which services are available, which nodes run those services, health information, etc.
TLDR; Remove the load balancer and register the services directly with the agents in order to avoid this issue where service registrations are duplicated across hosts.
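A minimal sketch of that setup, assuming a Consul client agent runs on every host and listens on its default HTTP port 8500 (the service name, instance-id format and port here are assumptions for illustration, not taken from the question):

spring:
  application:
    name: my-service                 # assumed service name
  cloud:
    consul:
      enabled: true
      host: localhost                # the co-located Consul client agent, not a load balancer
      port: 8500                     # Consul's default HTTP API port
      discovery:
        instance-id: ${spring.application.name}:${spring.cloud.client.hostname}
        prefer-ip-address: true      # register the host IP so other services can reach it

With this, each instance registers exactly once, against its own local agent, and the agents forward the registrations to the Consul servers.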

Can Spring Cloud Consul replace consul-client?

I'm learning Consul and got confused by the relation between Spring Cloud Consul and consul-client.
I found that in Spring Boot applications we can use @EnableDiscoveryClient to contact a Consul agent. But if I want to elect a leader among instances of my own service, does Spring Cloud Consul provide such an interface, or do I need to rely on consul-client?
The election of the leader in the Consul cluster is handled by Consul's internal consensus algorithm, which is based on Raft. For more information see:
https://www.consul.io/docs/internals/consensus.html
The Consul clients are wrappers around the REST API exposed by Consul. Normally you put your Consul servers behind a load balancer, and that URL is what you configure in your clients, for example in Spring Boot:
spring.cloud.consul.host = url_load_balancer
spring.cloud.consul.port = port_load_balancer
In conclusion, the client can't manage the leader election of the Consul cluster itself, but the closest approach I have found for application-level leader election is:
https://www.consul.io/docs/guides/leader-election.html
I hope that helps.

Does Netflix's Eureka provide any benefit when running Docker containers within Rancher?

We have a collection of microservices built with Spring Boot, using Spring Cloud Netflix. Up until now, they've been packaged as RPMs and deployed to VMs. Using Eureka has allowed for service registration/discovery (obviously), and our cross-microservice interaction is done using Spring's RestTemplate with a Virtual IP (VIP), like the following:
http://foo-service/<PATH_TO_RESOURCE>
Client-side load-balancing was another benefit.
Now we are looking to use Docker and run within Rancher. I'm wondering whether using Eureka still makes sense in this environment.
Within Rancher, if the Service is named 'foo-service', that name is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
Plus, it seems Rancher will know about Containers coming and going sooner than Eureka would.
I'm struggling to find a solid reason to keep Eureka.
I'm not very familiar with Rancher; AFAIK it enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos or Kubernetes to manage their containers.
So it finally comes down to whether your infrastructure platform provides service discovery functionality or not (I know Docker Swarm and Kubernetes provide service discovery; I'm not sure about the others). If you get service discovery out of the box from your platform and you don't need client-side load balancing, Eureka is overkill.
Here is an answer to the same question in the context of Kubernetes:
https://stackoverflow.com/a/40568412/6785908
Quoting the relevant parts
In the Kubernetes platform, using Eureka (or Consul/ZooKeeper or any other service registry) for service discovery is overkill; you can achieve the same (arguably) functionality with Kubernetes Services (+ the kube-dns add-on), which will act as a referable IP address and a load balancer (not client side) for the ephemeral Pods. Read this article by Christian Posta. If you want to refer to your service by its name instead of its IP address, add KubeDNS (a Kubernetes add-on) to your cluster.
http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/
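For illustration only, a minimal sketch of the kind of Kubernetes Service mentioned in that quote (the service name, labels and ports are assumptions, not from the original answer). Pods labelled app: foo-service become reachable in-cluster at http://foo-service, with the Service load-balancing across them:

apiVersion: v1
kind: Service
metadata:
  name: foo-service            # resolvable via cluster DNS as http://foo-service
spec:
  selector:
    app: foo-service           # assumed label on the Spring Boot Pods
  ports:
    - port: 80                 # port the Service exposes inside the cluster
      targetPort: 8080         # assumed container port of the Spring Boot app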
Edit
Since you said,
Within Rancher, if the Service is named 'foo-service', it is used as a VIP within the Rancher internal network so the same URL shown above can also work, sans Eureka.
Also, if there are multiple Containers backing a Service, Rancher will round-robin load-balance traffic amongst them.
you are getting both service discovery and (server-side) load balancing from your platform for free. So if you don't have a compelling reason to do client-side load balancing, forget about Eureka.

How to run Spring Cloud Config server in Fault Tolerance mode?

In my project we have a requirement to run two instances of Spring Cloud Config Server, so that if one instance goes down, the other will take over the config server responsibilities.
Currently, you would need to put the config server behind a load balancer. It is stateless, so that wouldn't hurt. There is an open issue to allow configuring multiple config server URLs in the client, so that it could do failover there.
If you are running multiple instances of the config server, you can have them all register themselves in Eureka, and maybe do a lookup of the config server by its application name via Eureka in all the other microservices. This way, Zuul (and Ribbon) will take care of the load balancing.
Edit:
I guess spencergibb is right. It's best to use a load balancer, e.g. ELB if you're going to deploy on AWS.
Consider multiple spring-cloud-config-uris for high availability
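A minimal sketch of the discovery-first variant mentioned above, where the config server registers in Eureka and each client looks it up by application name instead of a fixed URL (the service id and Eureka address below are assumptions for illustration); this would go in the client's bootstrap.yml:

spring:
  cloud:
    config:
      discovery:
        enabled: true                # resolve the config server via the discovery client
        service-id: config-server    # assumed name the config server registers under
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-host:8761/eureka/   # assumed Eureka server address

With two or more config server instances registered under the same name, a failed instance simply drops out of the registry and clients fail over to the remaining ones.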

RESTful Microservice failover & load balancing

At the moment we have some monolithic web applications and are trying to move the projects to a microservices infrastructure.
For the monolithic applications we use HAProxy and session replication to get failover and load balancing.
Now we are building some RESTful microservices with Spring Boot, but it's not clear to me what the best way to build the production environment is.
Of course we can run all applications as Unix services and still have a reverse proxy for load balancing and failover. This solution seems very heavy to me and involves a lot of configuration and maintenance. Resource management and scaling servers up or down will always be a manual process.
What are the best options for setting up a production environment with 2-3 servers and easy resource management?
Is there a solution that also supports continuous deployment?
I'd recommend looking into service discovery. Netflix describes this as:
A Service Discovery system provides a mechanism for:
Services to register their availability
Locating a single instance of a particular service
Notifying when the instances of a service change
Packages such as Netflix's Eureka could be of help. (EDIT - actually this looks like it might be AWS specific)
This should work well with continuous delivery as the services can make themselves unavailable, be updated and then register availability again.
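As a hedged illustration of that registration/discovery idea with Eureka (the application name and Eureka address below are assumptions), a service's application.yml might look roughly like this:

spring:
  application:
    name: foo-service                  # name other services use to look this instance up
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-host:8761/eureka/   # assumed Eureka server address
  instance:
    prefer-ip-address: true            # register the instance's IP rather than its hostname

Each instance registers itself on startup and drops out of the registry on shutdown (or is evicted), so consumers resolving http://foo-service/... through the registry see rolling updates and failover transparently.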
