Kubernetes and Spring Boot @Service load balancing

I have Kubernetes running on two nodes and one application deployed on the two nodes (two pods, one per node).
It's a Spring Boot application. It uses OpenFeign for service discovery. In the app I have a RestController defined with a few APIs and an @Autowired @Service which is called from inside the APIs.
Whenever I make a request to one of the APIs, Kubernetes uses some sort of load balancing to route the traffic to one of the pods, and the app's RestController is called. This is fine and I want this to be load-balanced.
The problem happens once that API is called and it calls the @Autowired @Service. Somehow this too gets load-balanced and the call to the @Service might end up on the other node.
Here's an example:
we have two nodes: node1, node2
we make a request to node1's IP address.
this might get load-balanced to node2 (this is fine)
node1 gets the request and calls the @Autowired @Service
the call jumps to node2 (this is where the problem happens)
And in code:
Controller:
@Autowired
private lateinit var userService: UserService

@PostMapping("/getUser")
fun uploadNewPC(@RequestParam("userId") userId: String): User {
    println(System.getenv("hostIP")) // 123.45.67.01
    return userService.getUser(userId)
}
Service:
@Service
class UserService {
    fun getUser(userId: String): User {
        println(System.getenv("hostIP")) // 123.45.67.02
        ...
    }
}
I want the load balancing to happen only on the REST requests, not on the internal calls of the app to its @Service components. How would I achieve this? Is there any configuration for the way Spring Boot's @Service components operate in Kubernetes clusters? Can I change this?
Thanks in advance.
Edit:
After some debugging I found that it wasn't the @Service call that was load-balanced to another node but the initial HTTP request, even though the request was specifically sent to the URL of node1. Since I was debugging both nodes at the same time, I didn't notice this.

Well, I haven't used OpenFeign, but to my understanding it can indeed load-balance only REST requests.
If I've got your question right, you say that when the REST controller calls the service component (UserService in this case) a network call is issued, and this is undesirable.
In this case, I believe the following points are worth considering:
1. Spring Boot has nothing to do with load balancing at this level by default; it must have been configured in the Spring Boot application somehow.
2. This also has nothing to do with the fact that the application runs in a Kubernetes environment; again, it's only a Spring Boot configuration.
3. Assuming you have a UserService interface that obviously doesn't contain any load-balancing logic, Spring Boot must be wrapping it in some kind of proxy that adds these capabilities. So debug the application startup, place a breakpoint in the controller method and check what the actual type of the user service is; again, it must be some sort of proxy.
4. If the assumption in 3 is correct, there must be some kind of bean post processor (possibly registered in the spring.factories file of some dependency) that gets registered within the application context. If you create a custom method that prints all beans (a BeanPostProcessor is also a bean), you'll probably spot the suspicious one; see the sketch below.
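A rough sketch of such a check (shown in Java for brevity; the asker's app is Kotlin, but the idea is identical, and the class and bean names below are only illustrative):

import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProxyDebugConfig {

    @Bean
    public CommandLineRunner printProxyInfo(ApplicationContext context, UserService userService) {
        return args -> {
            // A JDK proxy ($Proxy...) or a CGLIB class here means something wrapped the service.
            System.out.println("UserService runtime type: " + userService.getClass());
            // BeanPostProcessors are regular beans, so they show up here as well.
            context.getBeansOfType(BeanPostProcessor.class)
                    .forEach((name, bpp) -> System.out.println(name + " -> " + bpp.getClass().getName()));
        };
    }
}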


Proper way to get Spring Boot health status from inside

I have an external requirement that I provide an endpoint to tell the load balancer whether to send traffic to my app. Much like the Kubernetes "readiness" probe, but it has to be a certain format and path, so I can't just give them the actuator health endpoint.
In the past I've used the HealthEndpoint and called health(), but that doesn't work for reactive apps. Is there a more flexible way to see if the app is "UP"? At this level I don't care if it's reactive or servlet, I just want to know what Spring Boot says about the app.
I haven't found anything like this, most articles talk about calling /actuator/health, but that isn't what I need.
Edit:
Just a bit more detail, I have to return a certain string "NS_ENABLE" if it's good. There are certain conditions where I return "NS_DISABLE", so I can't just not return anything, which would normally make sense.
Also, I really like how Spring Boot does the checking for me. I'd rather not re-implement all those checks.
Edit 2: My final solution
The answers below got me very far along even though it wasn't my final solution, so I wanted to give a hint to my final understanding.
It turns out that the HealthEndpoint works for reactive apps just as well as for servlet apps; you just have to wrap the result in a Mono.
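As a minimal sketch of that final solution (assuming a WebFlux app with Actuator on the classpath; the /lb-probe path is made up, and the NS_ENABLE/NS_DISABLE mapping just mirrors the requirement described above):

import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.boot.actuate.health.Status;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@RestController
public class LoadBalancerProbeController {

    private final HealthEndpoint healthEndpoint;

    public LoadBalancerProbeController(HealthEndpoint healthEndpoint) {
        this.healthEndpoint = healthEndpoint;
    }

    @GetMapping("/lb-probe")
    public Mono<String> probe() {
        // health() is a plain blocking call, so defer it and run it off the event loop
        return Mono.fromCallable(healthEndpoint::health)
                .subscribeOn(Schedulers.boundedElastic())
                .map(health -> Status.UP.equals(health.getStatus()) ? "NS_ENABLE" : "NS_DISABLE");
    }
}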
How do we define the health of any web server?
We look at how our dependent services are doing: we check the status of Redis, MySQL, MongoDB, Elasticsearch, and other databases. This is what Actuator does internally.
Actuator checks the status of the different databases and, based on that, returns Up/Down.
You can implement your own methods that check the health of dependent services.
Whether Redis is healthy can be checked using the PING command.
MySQL can be verified using a SELECT 1, or by running some query that should always succeed, such as SHOW TABLES.
Similarly, you can implement a health check for other services. If you find that all required services are up, then you can declare the app up, otherwise down.
What about shutdown triggers? Whenever your server receives a shutdown signal then, no matter what the state of your dependent services is, you should always report down, so that upstream won't send calls to this instance.
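As a sketch of that idea, a custom indicator for a dependency that Actuator doesn't already cover could look like this (the "billing service" and its ping method are made-up placeholders):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class BillingServiceHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        try {
            // Replace with a real check, e.g. a lightweight ping request or a SELECT 1.
            boolean reachable = pingBillingService();
            return reachable
                    ? Health.up().build()
                    : Health.down().withDetail("reason", "ping failed").build();
        } catch (Exception ex) {
            return Health.down().withDetail("error", ex.getMessage()).build();
        }
    }

    private boolean pingBillingService() {
        return true; // placeholder for the actual connectivity check
    }
}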
Edit
The health of the entire Spring app can be checked programmatically by autowiring one or more beans from the Actuator module.
@RestController
public class MyHealthController {

    @Autowired
    private HealthEndpoint healthEndpoint;

    @GetMapping("health")
    public Health health() {
        return healthEndpoint.health();
    }
}
There are other beans related to health checks; we can autowire whichever we need. Some beans provide the health of a single component; we can combine the health of each component using a HealthAggregator to get the final Health. All registered health indicators can be accessed via the HealthIndicatorRegistry.
@RestController
public class MyHealthController {

    @Autowired
    private HealthAggregator healthAggregator;

    @Autowired
    private HealthIndicatorRegistry healthIndicatorRegistry;

    @GetMapping("health")
    public Health health() {
        Map<String, Health> health = new HashMap<>();
        for (Map.Entry<String, HealthIndicator> entry : healthIndicatorRegistry.getAll().entrySet()) {
            health.put(entry.getKey(), entry.getValue().health());
        }
        return healthAggregator.aggregate(health);
    }
}
NOTE: Reactive components have their own health indicators. Useful classes are ReactiveHealthIndicatorRegistry, ReactiveHealthIndicator, etc.
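For the reactive side, a hand-rolled indicator is just a bean implementing ReactiveHealthIndicator (the downstream check below is a placeholder, not a real API):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.ReactiveHealthIndicator;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;

@Component
public class DownstreamApiHealthIndicator implements ReactiveHealthIndicator {

    @Override
    public Mono<Health> health() {
        // Replace with a real non-blocking probe, e.g. a WebClient call to the dependency.
        return checkDownstream()
                .map(ok -> ok ? Health.up().build() : Health.down().build())
                .onErrorResume(ex -> Mono.just(Health.down().withDetail("error", ex.getMessage()).build()));
    }

    private Mono<Boolean> checkDownstream() {
        return Mono.just(true); // placeholder for the actual asynchronous check
    }
}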
A simple solution is to write your own health endpoint instead of depending on Spring.
Spring Boot provides production-ready endpoints, but if they don't satisfy your purpose, write your own endpoint. It can just return "UP" in the response; if the service is down, it will not return anything.
Here's the Spring Boot documentation on writing reactive health indicators. Follow the guide and it should be enough for your use case.
It also documents how to write liveness and readiness probes for your application.
https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html#reactive-health-indicators

Changing custom annotations on the fly from Spring Cloud Config Server. Is it possible?

Context: I need to provide a way to change parameter values during production at as low a performance cost as possible.
Goal: I want to change annotation values on the fly and apply them at once to all microservice instances.
Personal background and limitations: I know I can use Spring Cloud Config to change parameters on the fly as explained in this article, and I know there are some challenges and pitfalls involved in changing annotations on the fly, as discussed in this Stack Overflow question.
I know that Spring Cloud Config can be used for setting up a centralized configuration applied to all microservice instances during boot/start. I have used it a bit. I am wondering if I can use it for centralizing parameters that can affect custom annotations on the fly.
An imagined solution is:
... wherever I need somePropertyValue:

@Value("${config.somePropertyValue}")
private String somePropertyValue;

@Bean
public String somePropertyValue() {
    return somePropertyValue;
}
And a config client in all microservices, with an endpoint that must be called not only when the application starts but whenever somePropertyValue, managed in the Spring Cloud Config Server's bootstrap.properties, is updated:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class SpringConfigClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringConfigClientApplication.class, args);
    }
}

@RefreshScope
@RestController
class MessageRestController {

    @Value("${server.somePropertyValue:Unable to connect to config server}")
    private String somePropertyValue;

    @RequestMapping("/server/somePropertyValue")
    String getSomePropertyValue() {
        return this.somePropertyValue;
    }
}
And somehow somePropertyValue is maintained in Spring Cloud Config, and if it is changed during production it takes effect on demand everywhere somePropertyValue is referenced in all microservice instances.
I am currently achieving this behaviour by adding a Kafka consumer to all Spring Boot microservices; it listens to a topic and, when it receives a new message, changes the parameter value on the fly. It seems odd to have introduced a Kafka dependency into all of the company's microservices. Since I have used Spring Cloud Config for a somewhat similar scenario, I am wondering if there is a better alternative using some out-of-the-box Spring approach. Performance is highly important in my case, but a small delay in synchronizing all parameters isn't an issue; by delay I mean that two or three seconds to update parameters in all microservices is acceptable.
There are two ways to do that:
i- There's a refresh endpoint, and you can call it for a service; the service will refresh its configuration without restarting itself, which is pretty neat. E.g. if MS-A is listening on 8080, do a POST request to this endpoint:
localhost:8080/refresh.
NOTE: Spring Actuator actually adds a RefreshEndpoint to the app automatically when we annotate a controller in MS-A with @RefreshScope.
ii- What you can also do is use Spring Cloud Bus and broadcast an event; every service then listens for it and refreshes itself. That's handy if you have dozens of services all using the Config Server and you don't want to go one by one and hit a /refresh endpoint as we did in the first approach. You just broadcast a message to the bus and have all these services automatically pick it up.
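For reference, a rough sketch of the wiring on a Spring Boot 2.x setup (exact endpoint paths differ slightly between Spring Cloud versions; on Boot 1.x the endpoints sit directly under the root as shown above):

# application.properties – expose the refresh endpoint over HTTP
management.endpoints.web.exposure.include=health,refresh

# refresh a single instance (approach i):
#   curl -X POST http://localhost:8080/actuator/refresh

# with spring-cloud-starter-bus-kafka (or -amqp) on the classpath, one POST
# broadcasts the refresh to all instances (approach ii); depending on the
# Spring Cloud version the path is /actuator/bus-refresh or /actuator/busrefresh:
#   curl -X POST http://localhost:8080/actuator/bus-refresh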
Reference: I learnt both concepts while taking a course on Pluralsight.

Are services in AEM really singletons?

I have an interface which I have implemented. I have annotated the impl with @Component and @Service from the package org.apache.felix.scr.annotations.
I wrote a simple constructor for my impl
public MyImpl() {
    LOG.info("New instance created!!");
}
I also added log statements in the @Activate and @Deactivate methods.
I expected to see "New instance created!!" only once, BUT I can see the activate and deactivate methods being called for every request I make on a page (this service is invoked by a Sling Model which is used on that page).
What I saw was "New instance created!!" logged several times.
This means the OSGi container creates multiple instances of my service and calls the activate and deactivate methods every time.
This shows that this is not a Singleton.
The Object should be discarded only when I uninstall my bundle.
Please help me understand what is going on here.
I WANT TO IMPLEMENT A TRUE SINGLETON IN AEM
I have implemented this in AEM 6.5 instance which uses Apache Felix.
Edit:
Adding Service properties:
aemRootUrl http://localhost:8080
api.http.connections_manager.timeout 60000
api.http.cookie_max.age 18000
api.http.max_connections 200
api.http.max_connections_per_host 20
api.http.timeout.connection 300000
api.http.timeout.socket 300000
api.server.ssl.trust_all_certs true
api.server.url https://10asdasdsad
api.server.username admin
component.id 3925
component.name com.example.foundation.core.connection.impl.HybrisConnectionImpl
non_akamai.api.server.url hadasdadasd
service.bundleid 585
Service PID com.example.foundation.core.connection.impl.HybrisConnectionImpl
service.scope bundle
Using Bundles com.example.dumb-foundation.core (585)
Values altered to hide client specific information
EDIT:
I've removed the SCR annotations and replaced them with OSGi annotations, where I've explicitly specified
@Component(service = HybrisConnection.class, immediate = true, scope = ServiceScope.SINGLETON)
But it still shows as scope=bundle.
Should I enforce singleton and OSGi annotations on its dependencies as well for this to be a proper singleton?
In declarative services (which is what you use behind the scenes) there are some cases when a component (and its service) is unpublished.
By default a simple component with immediate=true will come up when the bundle starts and go down when it stops.
If your component has any mandatory service dependencies (@Reference) then it will only be active while all dependencies are present. So if at least one dependent service goes away, the component will be deactivated.
In addition, the component might get restarted when its config is not present at start but added later. If you want to avoid this, make the config required; see the sketch below.
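A small sketch of both points with current OSGi DS annotations (HttpClientFactory is a made-up dependency; the component name is borrowed from the question):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

@Component(
        service = HybrisConnection.class,
        immediate = true,
        configurationPolicy = ConfigurationPolicy.REQUIRE) // avoids a restart when config arrives late
public class HybrisConnectionImpl implements HybrisConnection {

    @Reference // mandatory by default: the component deactivates if this service goes away
    private HttpClientFactory httpClientFactory;

    @Activate
    protected void activate() {
        // runs every time the component is (re)activated, not only once per bundle lifetime
    }

    @Deactivate
    protected void deactivate() {
    }
}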
Everything @Christian Schneider said is true.
AEM services are singletons but are deactivated/unpublished at times. This can happen for various reasons.
I faced a horrible issue because of the ConfigurationAdmin service. Using this service caused our OSGi config files to be bound to the wrong bundle, i.e. the Sling Models bundle within AEM.
The only way to fix this is to get the configuration using configAdmin.getConfiguration(PID).setBundleLocation(null);
BUT doing this causes the service that is linked to this configuration to restart.
So every time I did config.setBundleLocation(null) the service restarted.
The best way to resolve this is to use an OCD (ObjectClassDefinition) to define configuration for OSGi services linked to OSGi config XMLs,
AND NEVER EVER use ConfigurationAdmin.
If you want to access properties of another service, say ServiceA wants to read ServiceB's title property set in com.example.serivce.impl.ServiceB.xml,
then in ServiceB's @Activate method read the props from the OCD config, keep them at instance level, and have ServiceA inject ServiceB as its dependency and use the property it needs.
e.g.
class ServiceA {

    @Reference
    private ServiceB serviceB;

    public void someMethod() {
        // Successfully reads a property of another service (ServiceB)
        // without using ConfigurationAdmin.
        serviceB.getTitle();
    }
}
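A sketch of the OCD approach described above, assuming ServiceB is an interface with a getTitle() method as in the snippet just shown (all names are illustrative):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.Designate;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

@Component(service = ServiceB.class, immediate = true)
@Designate(ocd = ServiceBImpl.Config.class)
public class ServiceBImpl implements ServiceB {

    @ObjectClassDefinition(name = "Service B Configuration")
    public @interface Config {
        @AttributeDefinition(name = "Title")
        String title() default "Default title";
    }

    private String title;

    @Activate
    protected void activate(Config config) {
        // keep the value at instance level; no ConfigurationAdmin lookup needed
        this.title = config.title();
    }

    @Override
    public String getTitle() {
        return title;
    }
}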

Spring Boot: Retrieve config via REST call upon application startup

I'd like to make a REST call once on application startup to retrieve some configuration parameters.
For example, we need to retrieve an entity called FleetConfiguration from another server. I'd like to do a GET once and keep the data in memory for the rest of the runtime.
What's the best way of doing this in Spring? Using @Bean or @Configuration annotations?
I found this, for example: https://stackoverflow.com/a/44923402/494659
I might as well use POJOs and handle the lifecycle of it myself, but I am sure there's a way to do it in Spring without re-inventing the wheel.
Thanks in advance.
The following method will run once the application starts, call the remote server and return a FleetConfiguration object which will be available throughout your app. The FleetConfiguration object will be a singleton and won't change.
@Bean
@EventListener(ApplicationReadyEvent.class)
public FleetConfiguration getFleetConfiguration() {
    RestTemplate rest = new RestTemplate();
    String url = "http://remoteserver/fleetConfiguration";
    return rest.getForObject(url, FleetConfiguration.class);
}
The method should be declared in a @Configuration class or a @Service class.
Ideally the call should check the response code from the remote server and act accordingly; see the sketch below.
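A hedged variant of the snippet above that checks the response before exposing the bean (same assumed URL and FleetConfiguration type as above):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

@Configuration
public class FleetConfigurationLoader {

    @Bean
    public FleetConfiguration fleetConfiguration() {
        RestTemplate rest = new RestTemplate();
        ResponseEntity<FleetConfiguration> response =
                rest.getForEntity("http://remoteserver/fleetConfiguration", FleetConfiguration.class);
        if (!response.getStatusCode().is2xxSuccessful() || response.getBody() == null) {
            // failing fast keeps the application from starting with a missing configuration
            throw new IllegalStateException("Could not load FleetConfiguration: " + response.getStatusCode());
        }
        return response.getBody();
    }
}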
A better approach is to use Spring Cloud Config to externalize every application's configuration; it can be updated at runtime for any config change, so there is no downtime around it either.

Spring Boot: specify port at the mapping level

Spring Boot: I want to achieve the following: some URL paths are mapped to one port, some to another.
In other words I'd like something like:
public class Controller1 {
    @RequestMapping(value = "/path1", port = "8080") public...
    @RequestMapping(value = "/path2", port = "8081") public...
}
So that my app responds to both localhost:8080/path1 and localhost:8081/path2
It's acceptable to have 2 separate controllers within the app.
I have managed to partially succeed by implementing an EmbeddedServletContainerCustomizer for Tomcat, but it would be nice to be able to achieve this inside the controller if possible.
Is it possible?
What you are trying to do would imply that the application is listening on multiple ports. This would in turn mean that you start multiple Tomcat instances, since Spring Boot packages one embedded container started on a single port.
What you can do
You can launch the same application twice, using different Spring profiles. Each profile would configure a different port.
2 properties:
application-one.properties: server.port=8080
application-two.properties: server.port=8081
2 controllers
#Profile("one")
public class Controller1 {
#RequestMapping(value="/path1") public...
}
#Profile("two")
public class Controller2 {
#RequestMapping(value="/path2") public...
}
Each controller is activated when the specified spring profile is provided.
Launch twice
$ java -jar -Dspring.profiles.active=one YourApp.jar
$ java -jar -Dspring.profiles.active=two YourApp.jar
While you cannot prevent calls on the undesired port, you can add HttpServletRequest among the parameters of the controller method and then use HttpServletRequest.getLocalPort() to obtain the port the call was made on.
Then you can manually return an HTTP error code if the request is made on the wrong port, or forward to another controller if the design requires that the same path be processed differently on different ports. A rough sketch:
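(Boot 1.x style javax imports are used here, since the question mentions EmbeddedServletContainerCustomizer; ports and paths are illustrative.)

import javax.servlet.http.HttpServletRequest;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PortAwareController {

    @GetMapping("/path1")
    public ResponseEntity<String> path1(HttpServletRequest request) {
        if (request.getLocalPort() != 8080) {
            // wrong connector: pretend the mapping does not exist here
            return ResponseEntity.status(HttpStatus.NOT_FOUND).build();
        }
        return ResponseEntity.ok("handled on 8080");
    }
}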
