Spring Cache + Hibernate L2 cache via Hazelcast

I have a Spring Boot application with the Hibernate L2 cache enabled, using Hazelcast as the cache provider.
I also want to add Spring Cache, using the @Cacheable annotation.
I need to distribute both caches (the Spring cache and the Hibernate L2 cache) between several pods in Kubernetes, using the embedded distributed cache pattern.
So far I have successfully distributed the Hibernate L2 cache between pods using the following configuration:
hazelcast.yaml
hazelcast:
  instance-name: my-instance
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: dev
application.properties
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.url=jdbc:postgresql://host.docker.internal/postgres
spring.datasource.username=postgres
spring.datasource.password=pass
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.region.factory_class=com.hazelcast.hibernate.HazelcastCacheRegionFactory
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.properties.hibernate.show_sql=true
hibernate.cache.hazelcast.instance_name=my-instance
But I also need to share the Spring cache via Hazelcast.
For example, I have the following service, and I want the data cached in it to be distributed between the k8s pods as well:
@Service
public class BookService {

    @Autowired
    private BookRepo bookRepo;

    @Cacheable("books")
    public Optional<Book> getBookById(int id) {
        try {
            Thread.sleep(15000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
        System.out.println("Book service triggered");
        return bookRepo.findById(id);
    }
}
I have no idea how to correctly configure my application to share both the Spring cache and the Hibernate L2 cache between k8s pods.
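For what it's worth, a minimal sketch of the Spring Cache side, assuming the hazelcast-spring module is on the classpath: reusing the auto-configured HazelcastInstance means the @Cacheable entries live in the same embedded cluster member as the L2 regions.

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;

@Configuration
@EnableCaching
public class SpringCacheConfig {

    // Reuse the HazelcastInstance that already backs the Hibernate L2 cache,
    // so both caches are distributed by the same embedded cluster.
    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new HazelcastCacheManager(hazelcastInstance);
    }
}
```

If I understand the auto-configuration correctly, Spring Boot will wire this up for you when spring-boot-starter-cache is present and a single HazelcastInstance bean exists, in which case the explicit bean above is redundant.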

Related

Spring cloud load-balancer drops instances after cache refresh

I need to save Spring Cloud Gateway routes in a database, and to do this we send a web request using the WebClient to another microservice.
I'm using Eureka for service discovery and want the WebClient to use discovery instance names instead of explicit URLs, so I've applied the @LoadBalanced annotation to the bean method:
@Bean
public WebClient loadBalancedWebClientBuilder(WebClient.Builder builder) {
    return builder
            .exchangeStrategies(exchangeStrategies())
            .build();
}

@Bean
@LoadBalanced
WebClient.Builder builder() {
    return WebClient.builder();
}

private ExchangeStrategies exchangeStrategies() {
    return ExchangeStrategies.builder()
            .codecs(clientCodecConfigurer -> {
                clientCodecConfigurer.defaultCodecs().jackson2JsonEncoder(getEncoder());
                clientCodecConfigurer.defaultCodecs().jackson2JsonDecoder(getDecoder());
            }).build();
}
This all works on start-up and for the default 35s cache time, i.e. the WebClient discovers the required 'saveToDatabase' service instance and sends the request.
On each eventPublisher.publishEvent(new RefreshRoutesEvent(this)), a call is made to the same downstream microservice (via the WebClient) to retrieve all saved routes.
Again this works initially, but after the default 35 seconds the load balancer cache seems to be cleared and the downstream service id can no longer be found:
WARN o.s.c.l.core.RoundRobinLoadBalancer - No servers available for service: QUERY-SERVICE
I have confirmed it is the cache refresh purging the cache and not re-acquiring the instances by setting:
spring:
  application:
    name: my-gateway
  cloud:
    loadbalancer:
      cache:
        enabled: true
        ttl: 240s
      health-check:
        refetch-instances: true
      ribbon:
        enabled: false
    gateway:
      ...
I've struggled with this for days now and cannot see where or why the cache is only being purged and never updated. Adding specific @LoadBalancerClient() configuration as below makes no difference.
@Bean
public ServiceInstanceListSupplier instanceSupplier(ConfigurableApplicationContext context) {
    return ServiceInstanceListSupplier.builder()
            .withDiscoveryClient()
            .withHealthChecks()
            .withCaching()
            .withRetryAwareness()
            .build(context);
}
Clearly this must work for other people, so what am I missing?!
Thanks.
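One thing that is easy to miss here (an assumption about the setup above, not a confirmed diagnosis): a custom ServiceInstanceListSupplier bean only takes effect when it lives in a configuration class registered as load-balancer client configuration, for example via @LoadBalancerClients. A sketch of that wiring, with the health-check step dropped since an unreachable per-instance health endpoint can itself make every instance appear down:

```java
import org.springframework.cloud.loadbalancer.annotation.LoadBalancerClients;
import org.springframework.cloud.loadbalancer.core.ServiceInstanceListSupplier;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@LoadBalancerClients(defaultConfiguration = CustomLoadBalancerConfig.class)
public class LoadBalancerSetup {
}

// Deliberately NOT annotated with @Configuration: this class is
// instantiated by Spring Cloud in a child context per client.
class CustomLoadBalancerConfig {

    @Bean
    public ServiceInstanceListSupplier instanceSupplier(ConfigurableApplicationContext context) {
        return ServiceInstanceListSupplier.builder()
                .withDiscoveryClient()
                .withCaching()
                .build(context);
    }
}
```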

How to disable interceptor call for Actuators in Springboot application

I am trying to integrate Prometheus into my microservices-based Spring Boot application, deployed on a WebLogic server. As part of a POC, I have included the configuration in one WAR. To enable it, I have set the config below.
application.yml
management:
  endpoint:
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include: "*"
Gradle -
implementation 'io.micrometer:micrometer-registry-prometheus'
But the actuator request is getting blocked by existing interceptors, which require project-specific header values. Through Postman (http://localhost:8080/abc/actuator/prometheus) I am able to test my POC (with the required headers), and it returns the time-series data expected by Prometheus. But Prometheus is not able to scrape the data on its own (pull approach), because its calls lack those headers.
I tried the following links (link1, link2) to bypass it, but my request still gets intercepted by the existing interceptors.
The interceptors blocking the request are part of dependency jars.
Edit:
I have used the following way to exclude all calls to the interceptor:
@Configuration
public class MyConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new MyCustomInterceptor()).addPathPatterns("**/actuator/**");
    }
}
MyCustomInterceptor
@Component
public class MyCustomInterceptor implements HandlerInterceptor {
}
I have not implemented anything custom in MyCustomInterceptor (I only want to exclude all calls to the 'actuator' endpoints from the other interceptors).
@Configuration
public class ActuatorConfig extends WebMvcEndpointManagementContextConfiguration {

    public WebMvcEndpointHandlerMapping webEndpointServletHandlerMapping(WebAnnotationEndpointDiscoverer endpointDiscoverer,
            EndpointMediaTypes endpointMediaTypes,
            CorsEndpointProperties corsProperties,
            WebEndpointProperties webEndpointProperties) {
        WebMvcEndpointHandlerMapping mapping = super.webEndpointServletHandlerMapping(
                endpointDiscoverer,
                endpointMediaTypes,
                corsProperties,
                webEndpointProperties);
        mapping.setInterceptors(null);
        return mapping;
    }
}
Maybe you can override it by setting the interceptors to null. I got the code from https://github.com/spring-projects/spring-boot/issues/11234
AFAIK, Spring HandlerInterceptors do not intercept actuator endpoints by default.
Spring Boot can't intercept actuator access
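If the blocking interceptor's registration is under your control (which may not be the case here, since it ships in a dependency jar), the usual pattern is to exclude the actuator paths at registration time instead of adding another interceptor. A sketch, where HeaderCheckInterceptor is a hypothetical stand-in for the project-specific interceptor:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Hypothetical stand-in for the interceptor that rejects
// requests missing the required project-specific headers.
class HeaderCheckInterceptor implements HandlerInterceptor {
}

@Configuration
public class HeaderInterceptorConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new HeaderCheckInterceptor())
                .addPathPatterns("/**")
                .excludePathPatterns("/actuator/**"); // let scrapes through unchecked
    }
}
```

Note that Ant-style patterns are matched against the servlet-relative path, so they should start with a slash; a pattern like "**/actuator/**" may not match at all.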

Spring GCP Cloud SQL and GCP Runtime Config processed out of order

Spring Cloud SQL is initialized with an EnvironmentPostProcessor factory and GCP Runtime Config is initialized with a BootstrapConfiguration factory, in that order. But I have Cloud SQL properties configured in the Runtime Config, so it's a little awkward. I see that there is (or used to be) a Spring Boot starter for this (spring-cloud-gcp-starter-config), but it doesn't seem to be maintained anymore. Does anyone even use the Runtime Config service? Here's how I've worked around this:
public class RuntimeConfigEnvironmentPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        try {
            if (Arrays.stream(environment.getActiveProfiles()).noneMatch("local"::equals)) {
                GcpConfigProperties gcpConfigProperties = new GcpConfigProperties();
                gcpConfigProperties.setEnabled(true);
                gcpConfigProperties.setName(environment.getProperty("spring.application.name"));
                gcpConfigProperties.setProfile(environment.getActiveProfiles()[
                        environment.getActiveProfiles().length - 1]);
                gcpConfigProperties.setProjectId(environment.getProperty("spring.cloud.gcp.config.project-id"));
                PropertySourceLocator locator = new GoogleConfigPropertySourceLocator(
                        new DefaultGcpProjectIdProvider(),
                        GoogleCredentials::getApplicationDefault,
                        gcpConfigProperties);
                environment.getPropertySources().addLast(locator.locate(environment));
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
And I registered it in a spring.factories file, which is something I've never had to do before. I found a similar issue where someone worked around it in a similar manner: Spring Cloud Config and Spring Cloud Vault order of initialization
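For reference, the spring.factories registration mentioned above typically looks like this (the package name is a placeholder for wherever the post-processor actually lives):

```properties
# src/main/resources/META-INF/spring.factories
org.springframework.boot.env.EnvironmentPostProcessor=\
  com.example.config.RuntimeConfigEnvironmentPostProcessor
```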
Is there a more elegant way?

springboot spring datasource tomcat properties not working

I am working on a Spring Boot application with Spring Data JPA, using Spring Boot starter version 2.2.4.RELEASE.
I have defined the properties below for the Tomcat pool and also excluded HikariCP. NOTE: HikariCP is not working either.
application.properties
spring.datasource.type=org.apache.tomcat.jdbc.pool.DataSource
spring.datasource.tomcat.initial-size=30
spring.datasource.tomcat.max-wait=60000
spring.datasource.tomcat.max-active=300
spring.datasource.tomcat.min-idle=30
spring.datasource.tomcat.default-auto-commit=true
I've tried all combinations and also used the defaults, but I am getting the error below after 2-3 API calls:
o.h.engine.jdbc.spi.SqlExceptionHelper : [http-nio-8080-exec-5] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:4; busy:4; idle:0; lastwait:30000].
The problem was with the deployment. I am deploying the app to Cloud Foundry, which by default adds a profile called cloud. So I created a DataSource bean for the "cloud" profile like below:
@Configuration
@Profile("cloud")
public class CloudConfig extends AbstractCloudConfig {

    @Bean
    public DataSource dataSource() {
        PooledServiceConnectorConfig.PoolConfig poolConfig =
                new PooledServiceConnectorConfig.PoolConfig(20, 300, 30000);
        DataSourceConfig dbConfig = new DataSourceConfig(poolConfig, null);
        return connectionFactory().dataSource(dbConfig);
    }
}

spring boot multiple mongodb datasource

We are using Spring Boot and have multiple MongoDB databases within the system. We are able to configure one MongoDB instance in the application.properties file, as per the Spring Boot documentation. Now we need to write to multiple MongoDB databases. How can we configure this?
Hope someone can help; any code examples would be helpful.
Thanks
GM
Use multiple @Bean methods, where you create and configure your datasources, and specify the bean name to distinguish them.
Example:
@Bean("primary")
public Mongo primaryMongo() throws UnknownHostException {
    Mongo mongo = new Mongo();
    // configure the client ...
    return mongo;
}

@Bean("secondary")
public Mongo secondaryMongo() throws UnknownHostException {
    Mongo mongo = new Mongo();
    // configure the client ...
    return mongo;
}
When you want to access a datasource, use the @Qualifier annotation on the field to select it by bean name:
@Autowired
@Qualifier("primary")
private Mongo mongo;
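On current driver versions, where the Mongo class is deprecated, a sketch of the same idea with MongoClient and one MongoTemplate per database might look like this (the connection strings and database names are made up for illustration):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

@Configuration
public class MultiMongoConfig {

    // One client + template per MongoDB deployment; inject the one you
    // need with @Qualifier("primaryTemplate") / @Qualifier("secondaryTemplate").
    @Bean("primaryTemplate")
    public MongoTemplate primaryTemplate() {
        MongoClient client = MongoClients.create("mongodb://primary-host:27017");
        return new MongoTemplate(client, "primarydb");
    }

    @Bean("secondaryTemplate")
    public MongoTemplate secondaryTemplate() {
        MongoClient client = MongoClients.create("mongodb://secondary-host:27017");
        return new MongoTemplate(client, "secondarydb");
    }
}
```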