Spring Boot custom Kubernetes readiness probe

I want to implement custom logic to determine readiness for my pod, and I went over this: https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state and they mention an example property:
management.endpoint.health.group.readiness.include=readinessState,customCheck
Question is - how do I override customCheck?
In my case I want to use HTTP probes, so the yaml looks like:
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health
    port: 12345
So then again: where and how should I apply the logic that determines when the app is ready? (Just like the link above, I'd like it to rely on an external service in order to be ready.)

customCheck is the key of your custom HealthIndicator. The key for a given HealthIndicator is the name of the bean without the HealthIndicator suffix.
You can read:
https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators
You are defining a readinessProbe, so hitting /actuator/health/readiness is probably a better choice.
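For example, the probe from the question could point at the readiness group instead (a sketch that keeps the question's port and timings):
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health/readiness
    port: 12345
The custom check itself can then be implemented as an availability-state health indicator: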
@Component
public class CustomCheckHealthIndicator extends AvailabilityStateHealthIndicator {

    private final YourService yourService;

    public CustomCheckHealthIndicator(ApplicationAvailability availability, YourService yourService) {
        super(availability, ReadinessState.class, (statusMappings) -> {
            statusMappings.add(ReadinessState.ACCEPTING_TRAFFIC, Status.UP);
            statusMappings.add(ReadinessState.REFUSING_TRAFFIC, Status.OUT_OF_SERVICE);
        });
        this.yourService = yourService;
    }

    @Override
    protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
        if (yourService.isInitCompleted()) {
            return ReadinessState.ACCEPTING_TRAFFIC;
        } else {
            return ReadinessState.REFUSING_TRAFFIC;
        }
    }
}
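To tie it together: the customCheck key in the readiness group comes from the bean name, customCheckHealthIndicator, minus the HealthIndicator suffix. An application.yml sketch of the properties already mentioned above (management.endpoint.health.probes.enabled is only needed where Spring Boot does not auto-detect Kubernetes; setting it explicitly does no harm):
management:
  endpoint:
    health:
      probes:
        enabled: true
      group:
        readiness:
          include: readinessState,customCheck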

Related

How to intercept message republished to DLQ in Spring Cloud RabbitMQ?

I want to intercept messages that are republished to the DLQ after the retry limit is exhausted; my ultimate goal is to eliminate the x-exception-stacktrace header from those messages.
Config:
spring:
  application:
    name: sandbox
  cloud:
    function:
      definition: rabbitTest1Input
    stream:
      binders:
        rabbitTestBinder1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost:55015
                username: guest
                password: guest
                virtual-host: test
      bindings:
        rabbitTest1Input-in-0:
          binder: rabbitTestBinder1
          consumer:
            max-attempts: 3
          destination: ex1
          group: q1
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              autoBindDlq: true
              bind-queue: true
              binding-routing-key: q1key
              deadLetterExchange: ex1-DLX
              dlqDeadLetterExchange: ex1
              dlqDeadLetterRoutingKey: q1key_dlq
              dlqTtl: 180000
              prefetch: 5
              queue-name-group-only: true
              republishToDlq: true
              requeueRejected: false
              ttl: 86400000
@Configuration
class ConsumerConfig {

    companion object : KLogging()

    @Bean
    fun rabbitTest1Input(): Consumer<Message<String>> {
        return Consumer {
            logger.info("Received from test1 queue: ${it.payload}")
            throw AmqpRejectAndDontRequeueException("FAILED") // force republishing to DLQ after N retries
        }
    }
}
First I tried to register a @GlobalChannelInterceptor (like here), but since RabbitMessageChannelBinder uses its own private RabbitTemplate instance (not autowired) for republishing (see #getErrorMessageHandler), it doesn't get intercepted.
Then I tried to extend the RabbitMessageChannelBinder class, throwing away the code related to x-exception-stacktrace, and declare this extension as a bean:
/**
 * Forked from {@link org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder} with the goal
 * to eliminate the {@link RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE} header from messages republished to DLQ
 */
class RabbitMessageChannelBinderWithNoStacktraceRepublished
    : RabbitMessageChannelBinder(...)

// and then

@Configuration
@Import(
    RabbitAutoConfiguration::class,
    RabbitServiceAutoConfiguration::class,
    RabbitMessageChannelBinderConfiguration::class,
    PropertyPlaceholderAutoConfiguration::class,
)
@EnableConfigurationProperties(
    RabbitProperties::class,
    RabbitBinderConfigurationProperties::class,
    RabbitExtendedBindingProperties::class
)
class RabbitConfig {

    @Bean
    @Primary
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    @Order(Ordered.HIGHEST_PRECEDENCE)
    fun customRabbitMessageChannelBinder(
        appCtx: ConfigurableApplicationContext,
        ... // required injections
    ): RabbitMessageChannelBinder {
        // remove the original (auto-configured) bean. Explanation is after the code snippet
        val registry = appCtx.autowireCapableBeanFactory as BeanDefinitionRegistry
        registry.removeBeanDefinition("rabbitMessageChannelBinder")
        // ... and replace it with custom binder. It's initialized absolutely the same way as original bean, but is of forked class
        return RabbitMessageChannelBinderWithNoStacktraceRepublished(...)
    }
}
But in this case my channel binder doesn't respect the YAML properties (e.g. addresses: localhost:55015) and uses default values (e.g. localhost:5672)
INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: [localhost:5672]
INFO o.s.a.r.l.SimpleMessageListenerContainer - Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
On the other hand, if I don't remove the original binder from the Spring context, I get the following error:
Caused by: java.lang.IllegalStateException: Multiple binders are available, however neither default nor per-destination binder name is provided. Available binders are [rabbitMessageChannelBinder, customRabbitMessageChannelBinder]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:145)
Could anyone give me a hint how to solve this problem?
P.S. I use Spring Cloud Stream 3.1.6 and Spring Boot 2.6.6
Disable the binder retry/DLQ configuration (maxAttempts=1, republishToDlq=false, and the other DLQ-related properties); see the YAML sketch after this list.
Add a ListenerContainerCustomizer to add a custom retry advice to the advice chain, with a customized dead letter publishing recoverer.
Manually provision the DLQ using a Queue @Bean.
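A sketch of the first step, applied to the binding from the question (only the retry/DLQ-related properties change; everything else stays as in the original config):
spring:
  cloud:
    stream:
      bindings:
        rabbitTest1Input-in-0:
          consumer:
            max-attempts: 1          # retries are handled by the interceptor below
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              republishToDlq: false
              autoBindDlq: false     # the DLQ is provisioned manually via a Queue bean
              requeueRejected: false
The listener customizer, retry interceptor, and recoverer for steps 2 and 3: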
@SpringBootApplication
public class So72871662Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871662Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return str -> {
            System.out.println();
            throw new RuntimeException("test");
        };
    }

    @Bean
    ListenerContainerCustomizer<MessageListenerContainer> customizer(RetryOperationsInterceptor retry) {
        return (cont, dest, grp) -> {
            ((AbstractMessageListenerContainer) cont).setAdviceChain(retry);
        };
    }

    @Bean
    RetryOperationsInterceptor interceptor(MessageRecoverer recoverer) {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(3_000L, 2.0, 10_000L)
                .recoverer(recoverer)
                .build();
    }

    @Bean
    MessageRecoverer recoverer(RabbitTemplate template) {
        return new RepublishMessageRecoverer(template, "DLX", "errors") {

            @Override
            protected void doSend(@Nullable String exchange, String routingKey, Message message) {
                message.getMessageProperties().getHeaders().remove(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
                super.doSend(exchange, routingKey, message);
            }
        };
    }

    @Bean
    FanoutExchange dlx() {
        return new FanoutExchange("DLX");
    }

    @Bean
    Queue dlq() {
        return new Queue("errors");
    }

    @Bean
    Binding dlqb() {
        return BindingBuilder.bind(dlq()).to(dlx());
    }
}

Fail-fast behavior for Eureka client

It seems the following problem has no common solution, so I am trying to approach it from another side. The infrastructure consists of Spring Boot microservices with Eureka, Zuul, Config, and Admin servers as the service mesh. All microservices run inside Docker containers on Kubernetes. Kubernetes monitors the application health checks (liveness/readiness probes) and redeploys a pod when its health check stays down longer than the liveness probe timeout.
The problem is this: sometimes a microservice doesn't get the correct Eureka server address after redeployment. Service discovery registration fails, but the microservice keeps running with its health check 'UP', so dependent microservices miss it.
The microservices are interdependent, and the failure of one causes a cascading failure of all dependent microservices. I don't use Hystrix for various reasons, and it would not solve my problem anyway: missing data from the failed microservice simply disables all functionality related to the set of dependent microservices.
Question: is it possible to configure something like 'fail-fast' behavior for the Eureka client without writing a custom HealthIndicator? The actuator health check should stay 'DOWN' as long as the Eureka client has not received a 204 (successful registration) response from Eureka.
Here is an example of how I fixed it in code. The behavior is pretty simple: the health check goes down 'forever' once the timeout for a successful registration in Eureka is exceeded, on startup and/or at runtime. The main goal is that Kubernetes redeploys the microservice when the liveness probe timeout is exceeded.
@Component
public class CustomHealthIndicator implements HealthIndicator {

    private static final Logger logger = LoggerFactory.getLogger(CustomHealthIndicator.class);

    @Autowired
    @Qualifier("eurekaClient")
    private EurekaClient eurekaClient;

    private static final int HEALTH_CHECK_DOWN_LIMIT_MIN = 15;
    private LocalDateTime healthCheckDownTimeLimit = getHealthCheckDownLimit();

    @Override
    public Health health() {
        int errCode = registeredInEureka();
        return errCode != 0
                ? Health.down().withDetail("Eureka registration fails", errCode).build()
                : Health.up().build();
    }

    private int registeredInEureka() {
        int status = 0;
        if (isStatusUp()) {
            healthCheckDownTimeLimit = getHealthCheckDownLimit();
        } else if (LocalDateTime.now().isAfter(healthCheckDownTimeLimit)) {
            logger.error("Exceeded {} min. limit for getting 'UP' state in Eureka", HEALTH_CHECK_DOWN_LIMIT_MIN);
            status = HttpStatus.GONE.value();
        }
        return status;
    }

    private boolean isStatusUp() {
        return eurekaClient.getInstanceRemoteStatus().compareTo(InstanceInfo.InstanceStatus.UP) == 0;
    }

    private LocalDateTime getHealthCheckDownLimit() {
        return LocalDateTime.now().plus(HEALTH_CHECK_DOWN_LIMIT_MIN, ChronoUnit.MINUTES);
    }
}
Is it possible to do the same by just configuring Spring components?
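Not entirely, as far as I can tell: the registration-timeout logic above still has to live in code. What configuration alone can do is wire that indicator into the liveness group that Kubernetes probes, so the pod gets restarted once the indicator reports DOWN. A sketch (the health key custom is derived from the customHealthIndicator bean name; the probes flag is only needed where Spring Boot does not auto-detect Kubernetes):
management:
  endpoint:
    health:
      probes:
        enabled: true
      group:
        liveness:
          include: livenessState,custom
The Kubernetes livenessProbe would then point at /actuator/health/liveness instead of /actuator/health.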

Missing metrics when programmatically creating a circuitbreaker

I want to define a circuit breaker programmatically, so I did:
@Configuration
public class MyCircuitBreakerConfig {

    @Bean
    public CircuitBreakerRegistry myRegistry() {
        CircuitBreakerRegistry registry = CircuitBreakerRegistry.ofDefaults();
        registry.circuitBreaker("mycircuit", circuitConfig());
        return registry;
    }
    // circuitConfig() is not shown in the question
}
The problem is that, even though it works correctly, I get the following in the actuator output:
"components" : {
"circuitBreakers" : {
"status" : "UNKNOWN"
}
While, if I define it in my properties file:
resilience4j:
  circuitbreaker:
    configs:
      myconfig:
        ...
    instances:
      mycircuit:
        base-config: myconfig
I can see it. What could the problem be?
I'm using the resilience4j-spring-boot2 dependency.
You must not create your own CircuitBreakerRegistry.
The Spring Boot AutoConfiguration creates an instance which you should use. If you need it, just inject (autowire) the existing CircuitBreakerRegistry into your code.
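For example, a minimal sketch of the programmatic approach on top of the auto-configured registry (MyService and the backend call are placeholders; mycircuit is the instance name from the question):
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    private final CircuitBreaker circuitBreaker;

    public MyService(CircuitBreakerRegistry registry) {
        // Use the registry created by the Spring Boot auto-configuration
        // instead of building a separate one.
        this.circuitBreaker = registry.circuitBreaker("mycircuit");
    }

    public String call() {
        // Failures thrown inside the supplier are recorded against "mycircuit".
        return circuitBreaker.executeSupplier(() -> "result from the backend");
    }
}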
You can override the defaults as follows
resilience4j.circuitbreaker:
  configs:
    default:
      slidingWindowSize: 100
      permittedNumberOfCallsInHalfOpenState: 10
      waitDurationInOpenState: 10000
      failureRateThreshold: 60
      eventConsumerBufferSize: 10

Spring Boot not registering on Prometheus endpoint

I am trying to configure Prometheus and Grafana with spring boot.
@Configuration
@EnableSpringBootMetricsCollector
public class MetricsConfiguration {

    @Value("${spring.application.name}")
    private String applicationName;

    @Value("${spring.profiles.active}")
    private String environment;

    /**
     * Register the common tag "application" instead of "job". This application tag is
     * needed for the Grafana dashboard.
     *
     * @return registry customizer with the registered tags.
     */
    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> {
            registry.config().commonTags("application", applicationName, "environment", environment)
                    .meterFilter(getDefaultConfig());
        };
    }

    private MeterFilter getDefaultConfig() {
        return new MeterFilter() {
            @Override
            public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
                return DistributionStatisticConfig.builder()
                        .percentilesHistogram(true)
                        .percentiles(0.95, 0.99)
                        .build()
                        .merge(config);
            }
        };
    }
}
While running the application I am able to see the metrics at the localhost:8080/prometheus URL, but I am not able to see the same at the localhost:9090/metrics URL, which is the Prometheus URL.
I have added the configuration to prometheus.yml and restarted Prometheus.
- job_name: 'my-api'
  scrape_interval: 10s
  metrics_path: '/prometheus'
  target_groups:
    - targets: ['localhost:8080']
After spending 2 hours I found the solution: we were using basic auth for all the health endpoints as well. The issue was that I was not setting up basic auth in my prometheus.yml:
- job_name: 'my-api'
  scrape_interval: 10s
  metrics_path: '/prometheus'
  target_groups:
    - targets: ['localhost:8080']
  basic_auth:
    username: test
    password: test
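Side note for anyone copying this into a recent Prometheus setup: newer Prometheus releases name the targets block static_configs rather than target_groups, so the equivalent scrape config would be:
- job_name: 'my-api'
  scrape_interval: 10s
  metrics_path: '/prometheus'
  basic_auth:
    username: test
    password: test
  static_configs:
    - targets: ['localhost:8080']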

Spring Cloud Gateway API - Context-path on routes not working

I have set up the context-path in application.yml:
server:
  port: 4177
  max-http-header-size: 65536
  tomcat.accesslog:
    enabled: true
  servlet:
    context-path: /gb-integration
And I have configured some routes
@Bean
public RouteLocator routeLocator(RouteLocatorBuilder builder) {
    final String sbl = "http://localhost:4178";
    return builder.routes()
            // gb-sbl-rest
            .route("sbl", r -> r
                    .path("/sbl/**")
                    .filters(f -> f.rewritePath("/sbl/(?<segment>.*)", "/gb-sbl/${segment}"))
                    .uri(sbl))
            .build();
}
I want the API gateway to be reached using localhost:4177/gb-integration/sbl/**
However, it only works on localhost:4177/sbl/**
It seems my context-path is ignored.
Any ideas on how I can get my context-path to work on all my routes?
You probably already figured it out by yourself, but here is what is working for me:
After reading the Spring Cloud documentation and having tried many things on my own, I eventually opted for a route-by-route configuration. In your case, it would look something like this:
.path("/gb-integration/sbl/**")
and repeat the same pattern for every route:
.path("/gb-integration/abc/**")
...
.path("/gb-integration/def/**")
You can actually see this in the Spring Cloud documentation.
The Spring Cloud documentation seems to be a work in progress. Hopefully, we shall find a better solution.
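Applied to the routeLocator bean from the question, the route-by-route approach might look like this sketch (the adjusted rewritePath pattern is my assumption):
@Bean
public RouteLocator routeLocator(RouteLocatorBuilder builder) {
    final String sbl = "http://localhost:4178";
    return builder.routes()
            .route("sbl", r -> r
                    // the gateway prefix is part of the predicate itself
                    .path("/gb-integration/sbl/**")
                    // strip the prefix again before forwarding downstream
                    .filters(f -> f.rewritePath("/gb-integration/sbl/(?<segment>.*)", "/gb-sbl/${segment}"))
                    .uri(sbl))
            .build();
}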
Expanding on @sendon1982's answer:
If your service is exposed at localhost:8080/color/red and you want it to be accessible through the gateway as localhost:9090/gateway/color/red, prepend /gateway in the Path predicate and add StripPrefix=1 in the filters. This basically translates to:
take the requested path that matches Path, strip the given number of prefix segments, and route to the given uri using the stripped path.
my-app-gateway: /gateway
spring:
  cloud:
    gateway:
      routes:
        - id: color-service
          uri: http://localhost:8080
          predicates:
            - Path=${my-app-gateway}/color/**
          filters:
            - StripPrefix=1
Using a yaml file like this:
spring:
  cloud:
    gateway:
      routes:
        - id: property-search-service-route
          uri: http://localhost:4178
          predicates:
            - Path=/gb-integration/sbl/**
Fixed:
application.yaml:
gateway:
  discovery:
    locator:
      enabled: true
      lower-case-service-id: true
      filters:
        # strip the /ierp/[serviceId] prefix before forwarding
        - StripPath=2
      predicates:
        - name: Path
          # route match: /ierp/[serviceId]
          # org.springframework.cloud.gateway.discovery.DiscoveryClientRouteDefinitionLocator#getRouteDefinitions
          args[pattern]: "'/ierp/'+serviceId+'/**'"
filter:
@Component
public class StripPathGatewayFilterFactory extends
        AbstractGatewayFilterFactory<StripPathGatewayFilterFactory.Config> {

    /**
     * Parts key.
     */
    public static final String PARTS_KEY = "parts";

    public StripPathGatewayFilterFactory() {
        super(StripPathGatewayFilterFactory.Config.class);
    }

    @Override
    public List<String> shortcutFieldOrder() {
        return Arrays.asList(PARTS_KEY);
    }

    @Override
    public GatewayFilter apply(Config config) {
        return (exchange, chain) -> {
            ServerHttpRequest request = exchange.getRequest();
            ServerWebExchangeUtils.addOriginalRequestUrl(exchange, request.getURI());
            String path = request.getURI().getRawPath();
            String[] originalParts = StringUtils.tokenizeToStringArray(path, "/");
            // all new paths start with /
            StringBuilder newPath = new StringBuilder("/");
            for (int i = 0; i < originalParts.length; i++) {
                if (i >= config.getParts()) {
                    // only append slash if this is the second part or greater
                    if (newPath.length() > 1) {
                        newPath.append('/');
                    }
                    newPath.append(originalParts[i]);
                }
            }
            if (newPath.length() > 1 && path.endsWith("/")) {
                newPath.append('/');
            }
            ServerHttpRequest newRequest = request.mutate().path(newPath.toString()).contextPath(null).build();
            exchange.getAttributes().put(ServerWebExchangeUtils.GATEWAY_REQUEST_URL_ATTR, newRequest.getURI());
            return chain.filter(exchange.mutate().request(newRequest).build());
        };
    }

    public static class Config {

        private int parts;

        public int getParts() {
            return parts;
        }

        public void setParts(int parts) {
            this.parts = parts;
        }
    }
}
