I have an external requirement that I provide an endpoint to tell the load balancer to send traffic to my app. Much like the Kubernetes "readiness" probe, but it has to be a certain format and path, so I can just give them the actuator health endpoint.
In the past I've used the HealthEndpoint and called health(), but that doesn't work for reactive apps. Is there a more flexible way to see if the app is "UP"? At this level I don't care if it's reactive or servlet, I just want to know what Spring Boot says about the app.
I haven't found anything like this, most articles talk about calling /actuator/health, but that isn't what I need.
Edit:
Just a bit more detail: I have to return a certain string, "NS_ENABLE", if it's good. There are certain conditions where I return "NS_DISABLE", so I can't simply return nothing, which would normally make sense.
Also, I really like how Spring Boot does the checking for me. I'd rather not re-implement all those checks.
Edit 2: My final solution
The answers below got me very far along even though they weren't my final solution, so I wanted to give a hint about my final understanding.
It turns out that the HealthEndpoint works for reactive apps just as well as for servlet apps; you just have to wrap the call in a Mono.
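A minimal sketch of that wrapping, assuming a WebFlux app on a reasonably recent Spring Boot/Reactor; the /lb-probe path, the controller name, and the offload to boundedElastic() are my own choices, not part of the original:

import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.boot.actuate.health.Status;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@RestController
public class LoadBalancerProbeController {

    private final HealthEndpoint healthEndpoint;

    public LoadBalancerProbeController(HealthEndpoint healthEndpoint) {
        this.healthEndpoint = healthEndpoint;
    }

    @GetMapping("/lb-probe")
    public Mono<String> probe() {
        // HealthEndpoint#health() is blocking, so run it off the event loop
        return Mono.fromCallable(healthEndpoint::health)
                .subscribeOn(Schedulers.boundedElastic())
                .map(health -> Status.UP.equals(health.getStatus()) ? "NS_ENABLE" : "NS_DISABLE");
    }
}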
How do we define the health of any web server?
We look at how our dependent services are doing: we check the status of Redis, MySQL, MongoDB, Elasticsearch, and other databases. This is what Actuator does internally.
Actuator checks the status of these dependencies and, based on that, reports Up/Down.
You can implement your own methods to check the health of dependent services.
Whether Redis is healthy can be checked with the PING command.
MySQL can be verified with a SELECT 1 query, or by running some query that should always succeed, such as SHOW TABLES.
Similarly, you can implement a health check for any other service. If you find that all required services are up you declare UP, otherwise DOWN (see the sketch below).
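For example, a minimal sketch of such a check as a Spring Boot HealthIndicator; it assumes a JDBC DataSource is configured, and the class name and query are just illustrations:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class MySqlHealthIndicator implements HealthIndicator {

    private final JdbcTemplate jdbcTemplate;

    public MySqlHealthIndicator(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public Health health() {
        try {
            // "SELECT 1" should always succeed if the database is reachable
            jdbcTemplate.queryForObject("SELECT 1", Integer.class);
            return Health.up().build();
        } catch (DataAccessException ex) {
            return Health.down(ex).build();
        }
    }
}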
What about shutdown triggers? Whenever your server receives a shutdown signal then, no matter what the state of your dependent services is, you should always report DOWN, so that upstream won't send calls to this instance.
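One hedged way to model that shutdown rule is an indicator backed by a flag that flips when the context starts closing. The class name and detail text are mine, it uses javax.annotation.PreDestroy (jakarta on newer stacks), and real connection-draining setups usually need more care than this sketch:

import java.util.concurrent.atomic.AtomicBoolean;
import javax.annotation.PreDestroy;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ShutdownHealthIndicator implements HealthIndicator {

    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    @PreDestroy
    public void markShuttingDown() {
        // called when the application context begins closing
        shuttingDown.set(true);
    }

    @Override
    public Health health() {
        return shuttingDown.get()
                ? Health.down().withDetail("reason", "shutting down").build()
                : Health.up().build();
    }
}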
Edit
The health of the entire Spring app can be checked programmatically by autowiring one or more beans from the Actuator module.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// @RestController so the Health object is written to the response body as JSON
@RestController
public class MyHealthController {

    @Autowired private HealthEndpoint healthEndpoint;

    @GetMapping("health")
    public Health health() {
        return healthEndpoint.health();
    }
}
There are other beans related to health checks that we can autowire as needed. Some of them provide the health of an individual component; we can combine the health of each component using HealthAggregator to get the final Health. All registered health indicators can be accessed via HealthIndicatorRegistry.
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthAggregator;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.HealthIndicatorRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyHealthController {

    @Autowired private HealthAggregator healthAggregator;
    @Autowired private HealthIndicatorRegistry healthIndicatorRegistry;

    @GetMapping("health")
    public Health health() {
        Map<String, Health> health = new HashMap<>();
        for (Map.Entry<String, HealthIndicator> entry : healthIndicatorRegistry.getAll().entrySet()) {
            health.put(entry.getKey(), entry.getValue().health());
        }
        return healthAggregator.aggregate(health);
    }
}
NOTE: Reactive components have their own health indicators. Useful classes are ReactiveHealthIndicatorRegistry, ReactiveHealthIndicator, etc. A reactive counterpart of the controller above is sketched below.
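A rough sketch of that reactive counterpart, assuming a Spring Boot 2.1.x-era Actuator where ReactiveHealthIndicatorRegistry and HealthAggregator are available (both were reworked in later versions); the controller name and path are mine:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthAggregator;
import org.springframework.boot.actuate.health.ReactiveHealthIndicatorRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;
import reactor.util.function.Tuples;

@RestController
public class MyReactiveHealthController {

    @Autowired private HealthAggregator healthAggregator;
    @Autowired private ReactiveHealthIndicatorRegistry reactiveRegistry;

    @GetMapping("reactive-health")
    public Mono<Health> health() {
        // run every registered reactive indicator, then aggregate the results
        return Flux.fromIterable(reactiveRegistry.getAll().entrySet())
                .flatMap(entry -> entry.getValue().health()
                        .map(health -> Tuples.of(entry.getKey(), health)))
                .collectMap(Tuple2::getT1, Tuple2::getT2)
                .map(healthAggregator::aggregate);
    }
}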
A simple solution is to write your own health endpoint instead of depending on Spring.
Spring Boot provides production-ready endpoints, but if they don't satisfy your purpose, write your own endpoint. It would just return "UP" in the response; if the service is down, it would not return anything.
Here's the Spring Boot documentation on writing reactive health indicators. Follow the guide and it should be enough for your use case.
They also document how to implement liveness and readiness probes for your application.
https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html#reactive-health-indicators
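For reference, a custom reactive indicator along the lines of that documentation might look roughly like this; the indicator name and the downstream check are placeholders of mine:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.ReactiveHealthIndicator;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;

@Component
public class DownstreamHealthIndicator implements ReactiveHealthIndicator {

    @Override
    public Mono<Health> health() {
        return checkDownstream()
                .map(ok -> ok ? Health.up().build() : Health.down().build())
                .onErrorResume(ex -> Mono.just(Health.down().withDetail("error", ex.getMessage()).build()));
    }

    // placeholder: replace with a real non-blocking ping of the dependency
    private Mono<Boolean> checkDownstream() {
        return Mono.just(true);
    }
}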
Related
I would like to record the startup information about my application during a Spring Boot test. I have the startup actuator configured and working in Spring Boot 'bootrun' mode. However, when I try to access that actuator during a test using a TestRestTemplate, I get a 404 error.
I have written an example program that demonstrates the problem. The issue isn't with actuators overall, as I have the metrics and health actuators working in the same test. Just the startup actuator.
The example code is on GitHub
I have a solution for this so I thought I would post it. For complete details, see the original repo in GitHub and check the solution branch.
One possible way to enable ApplicationStartup data collection during a Spring Boot Test is to create a ContextCustomizer. This allows you to get into the testing context early enough to record all of the data that you are looking for. The ContextCustomizer should have a single static BufferingApplicationStartup that it registers as a singleton bean into the test context's bean factory. It also needs to set the bean factory's ApplicationStartup because that will be passed to the SpringApplication just before it is run.
Here is the snippet of the customizer that holds the key:
@Override
public void customizeContext(ConfigurableApplicationContext context, MergedContextConfiguration mergedConfig) {
    ConfigurableListableBeanFactory beanFactory = context.getBeanFactory();
    Object possibleSingleton = beanFactory.getSingleton(BEAN_NAME);
    // The only way it wouldn't be an instance of a BufferingApplicationStartup is if it is null or we haven't
    // run yet (and it is the DefaultApplicationStartup). In either case, jam our BufferingApplicationStartup
    // in here.
    if (!(possibleSingleton instanceof BufferingApplicationStartup)) {
        beanFactory.registerSingleton(BEAN_NAME, APPLICATION_STARTUP);
        beanFactory.setApplicationStartup(APPLICATION_STARTUP);
    }
}
When you do this, make sure you implement a good equals and hashCode for your customizer, or else you will break the test context caching and you will refresh your test context with every test class. Since the only relevant part of the customizer is the static BufferingApplicationStartup, I chose to return its hash code (sketched below).
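A minimal sketch of that equality contract, written against the same hypothetical customizer class that holds the static BEAN_NAME and APPLICATION_STARTUP fields used in the snippet above:

@Override
public boolean equals(Object obj) {
    // every instance shares the same static BufferingApplicationStartup,
    // so any two customizers of this type are interchangeable for context caching
    return obj != null && obj.getClass() == getClass();
}

@Override
public int hashCode() {
    return APPLICATION_STARTUP.hashCode();
}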
Finally, don't forget to add your ContextCustomizerFactory to the src/test/resources/META-INF/spring.factories or else the rest of the Spring Boot testing support won't see your customizer.
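The registration line in src/test/resources/META-INF/spring.factories looks roughly like this (the factory class name here is hypothetical):

org.springframework.test.context.ContextCustomizerFactory=com.example.test.StartupContextCustomizerFactory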
Once this is all set up, you can access the Startup Actuator endpoint just like you would any other actuator.
Context: I need to provide a way to change parameter values during production with as low a performance cost as possible.
Goal: I want to change annotation values on the fly and apply the change at once to all microservice instances.
Personal background and limitations: I know I can use Spring Cloud Config to change parameters on the fly as explained in this article, and I know there are some challenges and pitfalls involved in changing annotations on the fly, as also discussed in a Stack Overflow question.
I know that Spring Cloud Config can be used for setting up a centralized configuration applied to all microservice instances during boot/start. I have used it a bit. I am wondering if I can use it for centralizing parameters that can affect customized annotations on the fly.
An imagined solution is:
... whenever I need somePropertyValue
@Value("${config.somePropertyValue}")
private String somePropertyValue;

@Bean
public String somePropertyValue() {
    return somePropertyValue;
}
A config client in every microservice exposes an endpoint that must be called not only when the application starts but whenever somePropertyValue, managed in the Spring Cloud Config Server's bootstrap.properties, is updated:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@SpringBootApplication
public class SpringConfigClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringConfigClientApplication.class, args);
    }
}

@RefreshScope
@RestController
class MessageRestController {

    @Value("${server.somePropertyValue:Unable to connect to config server}")
    private String somePropertyValue;

    @RequestMapping("/server/somePropertyValue")
    String getSomePropertyValue() {
        return this.somePropertyValue;
    }
}
And somehow somePropertyValue is maintained in Spring Cloud Config, and if it changes during production it takes effect on demand everywhere somePropertyValue is annotated, in all microservice instances.
I am currently achieving this behaviour by adding a Kafka consumer to all Spring Boot microservices; it listens to a topic and, when it receives a new message, changes the parameter value on the fly. It seems odd that I created a Kafka dependency in all company microservices. Since I have used Spring Cloud Config for a somewhat similar scenario, I am wondering if there is a better alternative using some out-of-the-box Spring approach. Performance is highly important in my case, but a slight delay in synchronizing all parameters isn't an issue; by delay I mean that two or three seconds to update parameters in all microservices is acceptable.
There are two ways to do that:
i- There's a refresh endpoint, and you can actually call that for a service, and it'll refresh its configuration without restarting itself, which is pretty neat. E.g. if MS-A is listening on 8080, then do a POST request to this endpoint:
localhost:8080/refresh.
NOTE: Spring Actuator actually adds a RefreshEndpoint to the app automatically when we annotate a controller in MS-A with @RefreshScope.
ii- What you can also do is use Spring Cloud Bus and broadcast an event, and then every service listens for it and refreshes itself. That's handy if you have dozens of services all using the Config Server and you don't want to go one by one and hit a /refresh endpoint as we did in the first approach. You just want to broadcast a message to a bus and have all these services automatically pick it up.
Reference: I learnt both concepts while taking a course on Pluralsight.
I have Kubernetes running on two nodes and one application deployed on the two nodes (two pods, one per node).
It's a Spring Boot application. It uses OpenFeign for service discoverability. In the app I have a RestController defined; it has a few APIs and an @Autowired @Service which is called from inside the APIs.
Whenever I make a request to one of the APIs, Kubernetes uses some sort of load balancing to route the traffic to one of the pods, and the app's RestController is called. This is fine and I want this to be load-balanced.
The problem happens once that API is called and it calls the @Autowired @Service. Somehow this too gets load-balanced, and the call to the @Service might end up on the other node.
Here's an example:
we have two nodes: node1, node2
we make a request to node1's IP address.
this might get load-balanced to node2 (this is fine)
node1 gets the request and calls the @Autowired @Service
the call jumps to node2 (this is where the problem happens)
And in code:
Controller:
@Autowired
private lateinit var userService: UserService

@PostMapping("/getUser")
fun uploadNewPC(@RequestParam("userId") userId: String): User {
    println(System.getenv("hostIP")) // 123.45.67.01
    return userService.getUser(userId)
}
Service:
@Service
class UserService {
    fun getUser(userId: String): User {
        println(System.getenv("hostIP")) // 123.45.67.02
        ...
    }
}
I want the load balancing to happen only on the REST requests, not on the app's internal calls to its @Service components. How would I achieve this? Is there any configuration for the way Spring Boot's @Service components operate in Kubernetes clusters? Can I change this?
Thanks in advance.
Edit:
After some debugging I found that it wasn't the Service that was load-balanced to another node but the initial HTTP request, even though the request was specifically sent to the URL of node1... And since I was debugging both nodes at the same time, I didn't notice this.
Well, I haven't used OpenFeign, but to my understanding it can indeed only load-balance REST requests.
If I've got your question right, you're saying that when the REST controller calls the service component (UserService in this case) a network call is issued, and this is undesirable.
In this case, I believe the following points are worth considering:
Spring Boot has nothing to do with load balancing at this level by default; it would have to be configured in the Spring Boot application somehow.
This also has nothing to do with the fact that the application runs in a Kubernetes environment; again, it is only a Spring Boot configuration.
Assuming you have a UserService interface that obviously doesn't have any load-balancing logic, Spring Boot would have to wrap it in some kind of proxy that adds these capabilities. So try to debug the application startup, place a breakpoint in the controller method, and check what the actual type of the user service is; again, it would have to be some sort of proxy.
If the assumption in point 3 is correct, there must be some kind of bean post processor class (possibly registered via the spring.factories file of some dependency) that gets registered within the application context. If you create some custom method that prints all beans (a BeanPostProcessor is also a bean), you'll probably see the suspicious bean; a rough way to do that is sketched below.
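A hedged sketch of that kind of bean dump, in Java for brevity even though the question's code is Kotlin; the class name is mine, and printing the runtime class is just one simple way to spot a CGLIB/JDK proxy around UserService:

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class BeanTypeLogger implements CommandLineRunner {

    private final ApplicationContext context;

    public BeanTypeLogger(ApplicationContext context) {
        this.context = context;
    }

    @Override
    public void run(String... args) {
        for (String name : context.getBeanDefinitionNames()) {
            // a proxied service typically shows up here with a CGLIB or $Proxy class name
            Object bean = context.getBean(name);
            System.out.println(name + " -> " + bean.getClass().getName());
        }
    }
}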
I am working on a project that has a requirement of Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I only need the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster) as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorial" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem; let me share what worked for me. In this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire
Read the 15 minutes quick start guide
Create a locator (let's call it locator1), a server (server1) and a region (region1)
Go to the folder where you started the 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's see the Spring boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation
In the method (wherever it is) whose result you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// this is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), i.e. the class your cached method returns.
The Spring Cloud Config client helps to change properties at runtime. Below are two ways to do that:
Update GIT repository and hit /refresh in the client application to get the latest values
Update the client directly by posting the update to /env and then /refresh
The problem with both approaches is that there could be multiple instances of the client application running in Cloud Foundry, and the REST calls above will reach only one of those instances, leaving the application in an inconsistent state.
E.g. a POST to /env could hit instance 1 and leave instance 2 with old data.
One solution I could think of is to continuously hit these endpoints "n" times in a loop just to make sure all instances are updated, but that is a crude solution. Does anybody have a better solution for this?
Note: We are deploying our application in private PCF environment.
The canonical solution for that problem is the Spring Cloud Bus. If your apps are bound to a RabbitMQ service and they have the bus on the classpath there will be additional endpoints /bus/env and /bus/refresh that broadcast the messages to all instances. See docs for more details.
Spring Cloud Config Server Not Refreshing
see the org.springframework.cloud.bootstrap.config.RefreshEndpoint code here:
public synchronized String[] refresh() {
Map<String, Object> before = extract(context.getEnvironment()
.getPropertySources());
addConfigFilesToEnvironment();
Set<String> keys = changes(before,
extract(context.getEnvironment().getPropertySources())).keySet();
scope.refreshAll();
if (keys.isEmpty()) {
return new String[0];
}
context.publishEvent(new EnvironmentChangeEvent(keys));
return keys.toArray(new String[keys.size()]);
}
That means the /refresh endpoint pulls from Git first, then refreshes the scope, and publishes an EnvironmentChangeEvent, so we can hook custom code into this, for example:
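A hedged sketch of such a hook: a listener on the EnvironmentChangeEvent published by /refresh (the class name and the log line are mine):

import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class PropertyChangeListener {

    @EventListener
    public void onEnvironmentChange(EnvironmentChangeEvent event) {
        // event.getKeys() lists the property names whose values just changed
        event.getKeys().forEach(key -> System.out.println("Property changed: " + key));
    }
}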