I have a Spring Boot app which uses the spring-cloud-kubernetes dependencies. It is deployed in K8s. I have implemented service discovery, and I have a DiscoveryClient which gives me the service ids in the K8s namespace. My problem is that I want to make a REST call to one of these discovered services (which has multiple pods running). How do I do this? Do I have to use the Ribbon client?
My code is:
import java.util.Date;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    private static final Logger log = LoggerFactory.getLogger(HelloController.class);

    @Autowired
    private DiscoveryClient discoveryClient;

    @RequestMapping("/services")
    public List<String> services() {
        log.info("/services - Request Received " + new Date());
        List<String> services = this.discoveryClient.getServices();
        log.info("Found services " + services.toString());
        for (String service : services) {
            // TODO call to this service
            List<ServiceInstance> instances = discoveryClient.getInstances(service);
            for (ServiceInstance instance : instances) {
                // getStringVal is a helper of mine (not shown)
                log.info("Service ID >> " + service + " : Instance >> " + getStringVal(instance));
            }
        }
        return services;
    }
}
In the service instances I can find the host and port to call, but I want to call the service by name so that some load-balancing mechanism routes the call to an actual pod instance.
You need to use Spring Cloud Kubernetes Ribbon to make the call; it resolves the host and port of the Kubernetes service, so you get the load balancing provided by the Kubernetes Service and kube-proxy.
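For illustration, here is a minimal sketch of the calling side, assuming the spring-cloud-starter-kubernetes-ribbon dependency is on the classpath (the service name my-service below is a placeholder for one of the ids returned by the DiscoveryClient). Marking a RestTemplate as @LoadBalanced lets you use the logical service name as the host:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Calls made through this RestTemplate use the logical service name
    // as the host; the client-side load balancer picks an actual instance.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

It can then be called with the service id instead of a concrete host and port:

// "my-service" and "/hello" are placeholders
String response = restTemplate.getForObject("http://my-service/hello", String.class);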
Related
We have Spring Boot services running in Kubernetes and are using the Spring Cloud Kubernetes load balancer functionality with RestTemplate to make calls to other Spring Boot services. One of the main reasons we have this in place is historical: we previously ran our services in EC2 using Eureka for service discovery, and after the migration we kept the Spring discovery client / client-side load balancing in place (updating dependencies etc. for it to work with the Spring Cloud Kubernetes project).
We have a problem that when one of the target pods goes down, we get multiple request failures with java.net.NoRouteToHostException for a period of time, i.e. the Spring load balancer is still trying to send to that pod.
So I have a few questions on this:
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that, the target pod list will be repaired?
Or if not, is there some other configuration we need to add to handle this, e.g. retry / circuit breaker, etc.?
A more general question is what benefit does Spring's client-side load balancing bring with Kubernetes? Without it, our service would still be able to call other services using Kubernetes built-in service / load-balancing functionality and this should handle the issue of pods going down automatically. The Spring documentation also talks about being able to switch from POD mode to SERVICE mode (https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#loadbalancer-for-kubernetes). But isn't this service mode just what Kubernetes does automatically? I'm wondering if the simplest solution here isn't to remove the Spring Load Balancer altogether? What would we lose then?
An update on this: we had the spring-retry dependency in place, but the retry was not working because by default it only applies to GETs, and most of our calls are POSTs (but safe to call again). Adding the configuration spring.cloud.loadbalancer.retry.retryOnAllOperations: true fixed this, so most of these failures should now be avoided by the retry using an alternative instance on the second attempt.
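For reference, the load balancer retry settings end up looking roughly like this in application.yml (the max-retries values are illustrative, not a recommendation):

spring:
  cloud:
    loadbalancer:
      retry:
        enabled: true
        # By default only GETs are retried; this also retries POSTs etc.
        retryOnAllOperations: true
        # Illustrative: one extra try on the same instance,
        # then one try on a different instance
        max-retries-on-same-service-instance: 1
        max-retries-on-next-service-instance: 1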
We have also added a RetryListener that clears the load balancer cache for the service on certain connection exceptions:
import java.net.NoRouteToHostException;

import org.apache.http.conn.ConnectTimeoutException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.loadbalancer.reactive.ReactiveLoadBalancer;
import org.springframework.cloud.loadbalancer.blocking.retry.BlockingLoadBalancedRetryFactory;
import org.springframework.cloud.loadbalancer.cache.LoadBalancerCacheManager;
import org.springframework.cloud.loadbalancer.core.CachingServiceInstanceListSupplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.RetryContext;
import org.springframework.retry.RetryListener;

@Configuration
public class RetryConfig {

    private static final Logger logger = LoggerFactory.getLogger(RetryConfig.class);

    // Need to use the bean factory here as we can't autowire LoadBalancerCacheManager -
    // it's set to 'autowireCandidate = false' in LoadBalancerCacheAutoConfiguration
    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public CacheClearingLoadBalancedRetryFactory cacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
        return new CacheClearingLoadBalancedRetryFactory(loadBalancerFactory);
    }

    // Extension of the default bean that defines a retry listener
    public class CacheClearingLoadBalancedRetryFactory extends BlockingLoadBalancedRetryFactory {

        public CacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
            super(loadBalancerFactory);
        }

        @Override
        public RetryListener[] createRetryListeners(String service) {
            RetryListener cacheClearingRetryListener = new RetryListener() {
                @Override
                public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) { return true; }

                @Override
                public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {}

                @Override
                public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                    logger.warn("Retry for service {} picked up exception: context {}, throwable class {}", service, context, throwable.getClass());
                    if (throwable instanceof ConnectTimeoutException || throwable instanceof NoRouteToHostException) {
                        try {
                            // Evict the cached instance list for this service so the next
                            // attempt re-resolves instances instead of reusing the dead one
                            LoadBalancerCacheManager loadBalancerCacheManager = beanFactory.getBean(LoadBalancerCacheManager.class);
                            Cache loadBalancerCache = loadBalancerCacheManager.getCache(CachingServiceInstanceListSupplier.SERVICE_INSTANCE_CACHE_NAME);
                            if (loadBalancerCache != null) {
                                boolean result = loadBalancerCache.evictIfPresent(service);
                                logger.warn("Load Balancer Cache evictIfPresent result for service {} is {}", service, result);
                            }
                        } catch (Exception e) {
                            logger.error("Failed to clear load balancer cache", e);
                        }
                    }
                }
            };
            return new RetryListener[] { cacheClearingRetryListener };
        }
    }
}
Are there any issues with this approach? Could something like this be added to the built in functionality?
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that the target pod list will be repaired?
To resolve this issue you have to use readiness and liveness probes in Kubernetes.
A readiness probe checks the health of an endpoint exposed by your application at a regular interval. If the check fails, Kubernetes marks the pod as Unready to accept traffic, so no traffic goes to that pod (replica).
A liveness probe restarts your application if it fails, so the container (i.e. the pod) comes up again; once Kubernetes gets a 200 response from the app, it marks the pod as Ready to accept traffic again.
You can create a simple endpoint in the application that returns 200 or 204 as needed.
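A minimal sketch of the probe configuration in the pod spec could look like this (the path, port and timings are illustrative; Spring Boot 2.3+ exposes dedicated liveness/readiness health groups under /actuator/health):

# Illustrative values - tune the port, delays and periods for your app
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5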
Read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Make sure your applications use the Kubernetes Service to talk to each other:
Application 1 > Kubernetes service of App 2 > Application 2 PODs
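That is, App 1 calls a Service like the following (names and ports are placeholders), not individual pod IPs:

apiVersion: v1
kind: Service
metadata:
  name: app-2            # placeholder - the name App 1 uses to reach App 2
spec:
  selector:
    app: app-2           # must match the labels on the App 2 pods
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the App 2 container listens on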
To enable load balancing based on the Kubernetes Service name, use the following property. The load balancer will then call the application using an address such as service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
The most typical way to use Spring Cloud LoadBalancer on Kubernetes is with service discovery. If you have any DiscoveryClient on your classpath, the default Spring Cloud LoadBalancer configuration uses it to check for service instances. As a result, it only chooses from instances that are up and running. All that is needed is to annotate your Spring Boot application with @EnableDiscoveryClient to enable K8s-native service discovery.
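In code that is just (a minimal sketch):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Enables K8s-native service discovery; Spring Cloud LoadBalancer then
// only chooses from instances that are up and running.
@SpringBootApplication
@EnableDiscoveryClient
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}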
Reference: https://stackoverflow.com/a/68536834/5525824
There are two microservices deployed with Docker Compose. A dependency between the services is defined in the docker-compose file by the depends_on property. Is it possible to achieve the same effect implicitly, inside the Spring Boot application?
Let's say microservice 1 depends on microservice 2. That means microservice 1 doesn't boot up before microservice 2 is healthy or registered on the Eureka server.
By doing some research, I found a solution to the problem.
Spring Retry resolves the dependency on the Spring Cloud Config Server. The Maven dependency spring-retry should be added to pom.xml, and the properties below to the .properties file:
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-interval=2000
spring.cloud.config.retry.max-attempts=10
The following configuration class is used to resolve the dependency on other microservices.
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.netflix.discovery.EurekaClient;

import lombok.Data;
import lombok.extern.java.Log;

@Configuration
@ConfigurationProperties(prefix = "depends-on")
@Data
@Log
public class DependsOnConfig {

    private List<String> services;
    private Integer periodMs = 2000;
    private Integer maxAttempts = 20;

    @Autowired
    private EurekaClient eurekaClient;

    // Blocks context startup until all listed services are registered
    @Bean
    public void dependentServicesRegisteredToEureka() throws Exception {
        if (services == null || services.isEmpty()) {
            log.info("No dependent services defined.");
            return;
        }
        log.info("Checking if dependent services are registered to eureka.");
        int attempts = 0;
        while (!services.isEmpty()) {
            // Drop every service that is already registered, then wait and retry
            services.removeIf(this::checkIfServiceIsRegistered);
            TimeUnit.MILLISECONDS.sleep(periodMs);
            if (maxAttempts.intValue() == ++attempts)
                throw new Exception("Max attempts exceeded.");
        }
    }

    private boolean checkIfServiceIsRegistered(String service) {
        try {
            // Throws if Eureka has no instance of the service yet
            eurekaClient.getNextServerFromEureka(service, false);
            log.info(service + " - registered.");
            return true;
        } catch (Exception e) {
            log.info(service + " - not registered yet.");
            return false;
        }
    }
}
The list of services that the current microservice depends on is defined in the .properties file:
depends-on.services[0]=service-id-1
depends-on.services[1]=service-id-2
The bean dependentServicesRegisteredToEureka is not initialized until all services from the list register with Eureka. If needed, the annotation @DependsOn("dependentServicesRegisteredToEureka") can be added to beans or components to prevent them from initializing before dependentServicesRegisteredToEureka does.
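For example (MyServiceClient is a hypothetical bean of your own):

import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;

// Hypothetical bean that must not start before the dependent
// services have registered with Eureka
@Component
@DependsOn("dependentServicesRegisteredToEureka")
public class MyServiceClient {
    // ...
}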
It seems this problem doesn't have a common solution, so I tried to approach it from another side. The microservices infrastructure consists of Spring Boot microservices with Eureka, Zuul, Config and Admin servers as the service mesh. All microservices run inside Docker containers on the Kubernetes platform. Kubernetes monitors the application health check (liveness/readiness probes) and redeploys an application when its health check stays down longer than the liveness probe timeout.
The problem is the following: sometimes a microservice doesn't get the correct Eureka server address after redeployment. Service discovery registration fails, but the microservice keeps working with health check 'UP', and dependent microservices miss it.
The microservices are interdependent, and the failure of one microservice causes a cascade failure of all dependent microservices. I don't use Hystrix for various reasons, and it actually would not resolve my problem: the missing data from the failed microservice simply disables the entire functionality related to the set of dependent microservices.
Question: is it possible to configure something like 'fail-fast' behavior for the Eureka client without writing a custom HealthIndicator? The actuator health check should stay in the 'DOWN' state until the Eureka client gets a 204 successful registration response from Eureka.
Here is an example of how I fixed it in code. The behavior is pretty simple: the health check goes down 'forever' once the timeout for successful registration with Eureka is exceeded, on start and/or during runtime. The main goal is that Kubernetes will redeploy the microservice when the liveness probe timeout is exceeded.
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;

@Component
public class CustomHealthIndicator implements HealthIndicator {

    private static final Logger logger = LoggerFactory.getLogger(CustomHealthIndicator.class);

    private static final int HEALTH_CHECK_DOWN_LIMIT_MIN = 15;

    @Autowired
    @Qualifier("eurekaClient")
    private EurekaClient eurekaClient;

    private LocalDateTime healthCheckDownTimeLimit = getHealthCheckDownLimit();

    @Override
    public Health health() {
        int errCode = registeredInEureka();
        return errCode != 0
                ? Health.down().withDetail("Eureka registration fails", errCode).build()
                : Health.up().build();
    }

    private int registeredInEureka() {
        int status = 0;
        if (isStatusUp()) {
            // Registered: push the deadline forward
            healthCheckDownTimeLimit = getHealthCheckDownLimit();
        } else if (LocalDateTime.now().isAfter(healthCheckDownTimeLimit)) {
            // Not registered for too long: report DOWN from now on
            logger.error("Exceeded {} min. limit for getting 'UP' state in Eureka", HEALTH_CHECK_DOWN_LIMIT_MIN);
            status = HttpStatus.GONE.value();
        }
        return status;
    }

    private boolean isStatusUp() {
        return eurekaClient.getInstanceRemoteStatus().compareTo(InstanceInfo.InstanceStatus.UP) == 0;
    }

    private LocalDateTime getHealthCheckDownLimit() {
        return LocalDateTime.now().plus(HEALTH_CHECK_DOWN_LIMIT_MIN, ChronoUnit.MINUTES);
    }
}
Is it possible to do the same by just configuring Spring components?
I was trying to set up Feign to work with RibbonClient, something like MyService api = Feign.builder().client(RibbonClient.create()).target(MyService.class, "https://myAppProd");, where myAppProd is an application which I can see in Consul. Now, if I use the Spring annotations for the Feign client (@FeignClient("myAppProd"), @RequestMapping), everything works, as the Spring Cloud module takes care of everything.
If I want to use Feign.builder() and #RequestLine, I get the error:
com.netflix.client.ClientException: Load balancer does not have available server for client: myAppProd.
My first thought was that Feign was built to work with Eureka and only Spring Cloud makes the integration with Consul, but I am unsure about this.
So, is there a way to make Feign work with Consul without Spring Cloud?
Thanks in advance.
In my opinion, it's not Feign talking to Consul directly; it's feign -> ribbon -> consul.
RibbonClient needs to find myAppProd's server list through its load balancer.
Without a ServerList, you get the error 'does not have available server for client'.
This job is done by the Spring Cloud Consul and Spring Cloud Ribbon projects; of course you can write another adapter, it's just some glue code. IMHO, you can import these Spring dependencies into your project but use them in a non-Spring way. Demo code:
Just write a new feign.ribbon.LBClientFactory that generates an LBClient with a ConsulServerList (Spring's class):
import org.springframework.cloud.consul.discovery.ConsulDiscoveryProperties;
import org.springframework.cloud.consul.discovery.ConsulServer;
import org.springframework.cloud.consul.discovery.ConsulServerList;

import com.ecwid.consul.v1.ConsulClient;
import com.netflix.client.ClientFactory;
import com.netflix.client.config.IClientConfig;
import com.netflix.loadbalancer.ZoneAwareLoadBalancer;

import feign.ribbon.LBClient;
import feign.ribbon.LBClientFactory;

public class ConsulLBFactory implements LBClientFactory {

    private ConsulClient client;
    private ConsulDiscoveryProperties properties;

    public ConsulLBFactory(ConsulClient client, ConsulDiscoveryProperties consulDiscoveryProperties) {
        this.client = client;
        this.properties = consulDiscoveryProperties;
    }

    @Override
    public LBClient create(String clientName) {
        // DisableAutoRetriesByDefaultClientConfig is a custom IClientConfig
        // from the linked demo (not shown here)
        IClientConfig config =
                ClientFactory.getNamedConfig(clientName, DisableAutoRetriesByDefaultClientConfig.class);
        // Back the Ribbon load balancer with the server list that
        // Spring Cloud Consul builds from the Consul catalog
        ConsulServerList consulServerList = new ConsulServerList(this.client, properties);
        consulServerList.initWithNiwsConfig(config);
        ZoneAwareLoadBalancer<ConsulServer> lb = new ZoneAwareLoadBalancer<>(config);
        lb.setServersList(consulServerList.getInitialListOfServers());
        lb.setServerListImpl(consulServerList);
        return LBClient.create(lb, config);
    }
}
and then use it in Feign:
import java.util.List;

import org.springframework.cloud.commons.util.InetUtils;
import org.springframework.cloud.commons.util.InetUtilsProperties;
import org.springframework.cloud.consul.discovery.ConsulDiscoveryProperties;

import com.ecwid.consul.v1.ConsulClient;

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.gson.GsonDecoder;
import feign.ribbon.RibbonClient;

public class Demo {

    public static void main(String[] args) {
        ConsulLBFactory consulLBFactory = new ConsulLBFactory(
                new ConsulClient(),
                new ConsulDiscoveryProperties(new InetUtils(new InetUtilsProperties()))
        );
        RibbonClient ribbonClient = RibbonClient.builder()
                .lbClientFactory(consulLBFactory)
                .build();
        GitHub github = Feign.builder()
                .client(ribbonClient)
                .decoder(new GsonDecoder())
                .target(GitHub.class, "https://api.github.com");
        List<Contributor> contributors = github.contributors("OpenFeign", "feign");
        for (Contributor contributor : contributors) {
            System.out.println(contributor.login + " (" + contributor.contributions + ")");
        }
    }

    interface GitHub {
        @RequestLine("GET /repos/{owner}/{repo}/contributors")
        List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
    }

    public static class Contributor {
        String login;
        int contributions;
    }
}
You can find this demo code here; register api.github.com in your local Consul before running the demo.
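One way to do that registration, as a sketch (assuming a local Consul agent; the JSON service definition is the standard Consul format), is to save a file such as github.json:

{
  "service": {
    "name": "api.github.com",
    "address": "api.github.com",
    "port": 443
  }
}

and register it with consul services register github.json.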
I have two projects: a service project and a consumer project.
The consumer project consumes the services of the other project, and the call should be async using JMS.
I installed the JMS plugin in both projects.
I have defined the JMS connection factory in both projects as below in resources.groovy:
import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'vm://localhost'
    }
}
Note: both projects are for now on the same machine (i.e. localhost).
Now from the consumer's controller I am making a call to the service from the ServiceProvider project:
jmsService.send(service:'serviceProvider', params.body)
In ServiceProvider the service is defined as follows:
import grails.plugin.jms.*

class ServiceProviderService {

    def jmsService

    static transactional = true
    static exposes = ['jms1']

    def createMessage(msg) {
        print "Called1"
        sleep(2000) // slow it down
        return null
    }
}
Now when the controller submits the call to the service, the call is submitted successfully but never reaches the actual service.
I also tried
jmsService.send(app: "ServiceProvider", service: "serviceProvider", method: "createMessage", msg, "standard", null)
Update
Now I have installed the ActiveMQ plugin in the service provider to make it an embedded broker (the JMS plugin is already there), and created a service:
package serviceprovider

class HelloService {

    boolean transactional = false

    static exposes = ['jms']
    static destination = "queue.notification"

    def onMessage(it) {
        println "GOT MESSAGE: $it"
    }

    def sayHello(String message) {
        println "hello" + message
    }
}
resources.groovy in both projects is now:
import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'tcp://127.0.0.1:61616'
    }
}
From the consumer's controller I am calling this service as below:
jmsService.send(app:'queue.notification',service:'hello',method: 'sayHello', params.body)
The call gets submitted, but the method is never actually invoked!
The in-VM ActiveMQ config (vm://localhost) works only within a single VM. If your two projects run in separate VMs, try setting up an external AMQ broker.
If you are using separate processes, then you need to use a different transport than VM (it's for a single VM only). Also, is one of your processes starting a broker? If not, then one of them should embed the broker (or run it externally) and expose it over a transport (like TCP)...