Unable to Evict Cache every 60 seconds using TTL config in Hazelcast - spring-boot

@Bean
public Config cacheConfig() {
    Map<String, MapConfig> mapConfigs = new HashMap<>();

    EvictionConfig evictionConfig = new EvictionConfig();
    evictionConfig.setEvictionPolicy(EvictionPolicy.LFU);
    evictionConfig.setSize(10);

    MapConfig widgetMapConfig = new MapConfig();
    widgetMapConfig.setBackupCount(1);
    widgetMapConfig.setName("widget");
    widgetMapConfig.setMaxIdleSeconds(60);
    widgetMapConfig.setTimeToLiveSeconds(60);
    widgetMapConfig.setInMemoryFormat(InMemoryFormat.BINARY);
    widgetMapConfig.setEvictionConfig(evictionConfig);
    mapConfigs.put("widget-Map", widgetMapConfig);

    Config programmaticConfig = new Config();
    programmaticConfig.setMapConfigs(mapConfigs);
    return programmaticConfig;
}
Above is the config. The cache is not getting evicted when it is not used for 60 seconds. Can someone help me with this? I want to evict based on the TTL config.

I changed the core Hazelcast module version from 4.0.2 to 4.2.4 and it started working for me. I would suggest that anyone integrating Hazelcast with Spring Boot check the release dates and choose versions accordingly, once you know your configuration matches what the official Hazelcast documentation suggests.
<!-- https://mvnrepository.com/artifact/com.hazelcast/hazelcast -->
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>4.2.4</version>
</dependency>

The version number should not have any implication on eviction. Once entries are expired, they may remain in the JVM until they are garbage collected. The best way to check whether entries are correctly expired and evicted is either to call map.size() or to use Management Center.
Do note that map.size() is a very expensive distributed operation.
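As a minimal sketch of that check, assuming the cacheConfig() method from the question is reachable (the wrapping class here is hypothetical, not part of the original post):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ExpiryCheck {
    public static void main(String[] args) throws InterruptedException {
        Config config = new CacheConfiguration().cacheConfig(); // hypothetical holder of the @Bean method above
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        // the config above registers the MapConfig under the key "widget-Map",
        // which is the name Hazelcast matches map names against
        IMap<String, String> widgets = hz.getMap("widget-Map");
        widgets.put("w1", "value");
        Thread.sleep(61_000);                  // wait past the 60-second TTL / max-idle window
        System.out.println(widgets.get("w1")); // expect null once the entry has expired
        System.out.println(widgets.size());    // expect 0; remember size() is an expensive distributed call
        hz.shutdown();
    }
}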

Related

Redisson doesn't set TTL or cache name correctly

I'm creating a Spring application that uses Redis cache via the Redisson client.
@Bean
public CacheManager cacheManager(RedissonClient redissonClient) throws IOException {
    Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
    config.put("employeesCache", new CacheConfig(24 * 60 * 1000, 12 * 60 * 1000));
    RedissonSpringCacheManager manager = new RedissonSpringCacheManager(redissonClient, config);
    return manager;
}
However, when running this application the cache name created in Redis is {employeesCache}:redisson_options instead of just employeesCache.
Also, when I check the TTL in the Redis CLI it returns (integer) -1, meaning it has not been set.
So the RedissonSpringCacheManager is only partially functioning: it creates the cache but without any configuration. Can you help me fix it?
I'm using the following Maven dependencies:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson-spring-boot-starter</artifactId>
    <version>3.13.1</version>
</dependency>
Redisson uses a Redis hash (map) to store your data, but entries in a hash do not support a Redis TTL, so Redisson maintains {employeesCache}:redisson_options to store the configuration. Your employeesCache entries are expired and deleted by Redisson, NOT by Redis.
So your data is saved in a map called employeesCache, not in {employeesCache}:redisson_options; just leave it alone.
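A minimal sketch of how to verify the per-entry expiry under that model (the injected cacheManager and the employee value are illustrative): read back through the Spring cache abstraction instead of checking a key-level TTL in redis-cli.

// cacheManager is the RedissonSpringCacheManager bean from the question
org.springframework.cache.Cache cache = cacheManager.getCache("employeesCache");
cache.put("e1", employee);
// redis-cli `TTL employeesCache` returns -1 by design: Redisson expires individual
// hash entries itself instead of putting a key-level TTL on the hash.
Thread.sleep(13 * 60 * 1000);   // wait past the 12-minute max-idle time from CacheConfig
assert cache.get("e1") == null; // the entry should now be gone, evicted by Redisson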

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create a Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh the Regions are showing, but the entry time-to-live expiration (TTL) is not being set by any of the ways below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean creation approach is also not working; I get the error "operation is not supported on a client cache".
@EnableEntityDefinedRegions
// @PeerCacheApplication for a peer cache; the Region is not being created in PCC GemFire
public class BaseApplication {
---
}
@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> peopleRegionAttributes) {
    PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<String, Employee>();
    getEmployee.setCache(cache);
    getEmployee.setAttributes(peopleRegionAttributes);
    getEmployee.setClose(false);
    getEmployee.setName("Employee");
    getEmployee.setPersistent(false);
    getEmployee.setDataPolicy(DataPolicy.PARTITION);
    getEmployee.setStatisticsEnabled(true);
    getEmployee.setEntryTimeToLive(new ExpirationAttributes(60));
    return getEmployee;
}
@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean EmployeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We do not support mismatched dependency versions: while they may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you may implicitly become responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is to truly enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions separately (e.g. using Gfsh).
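For illustration, a minimal sketch of that local-only arrangement (the class name is made up; the annotations are the SDG ones discussed in the question):

@SpringBootApplication
@EnableEntityDefinedRegions(clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
class ClientCachingProxyApplication {
    // This expires entries in the client's local cache only; the matching
    // server/cluster Region still needs its own expiration, e.g. in Gfsh:
    // create region --name=Employee --type=PARTITION --enable-statistics=true --entry-time-to-live-expiration=60
}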
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not really clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Redis cache metrics with Prometheus(Spring boot)

I am using RedisTemplate for caching in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see any cache hit/miss data for the cache.
The code I have written is something like below:
@EnableCaching
@Configuration
public class CachingConfiguration {
    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper) {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
Now I am doing the following to get and save into the cache.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly: I am able to save into the cache and read proper data back.
But I don't see any cache hit/miss data in actuator/prometheus.
In my application.yml file I have added the following:
cache:
  redis:
    enable-statistics: 'true'
I would assume that for Spring Boot cache monitoring to apply (including hits/misses), you need to rely on auto-configuration.
In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction? That way any supported cache library will work out of the box, without you having to create a @Bean and configure it manually.
Otherwise, if you want to enable statistics on a cache manager manually, you need to call RedisCacheManager.RedisCacheManagerBuilder's enableStatistics() method:
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
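A minimal sketch of that manual route, assuming Spring Boot supplies the RedisConnectionFactory:

@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .enableStatistics() // hit/miss counts then surface via RedisCache.getStatistics()
            .build();
}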
For Reference:
Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available. Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name.
Only caches that are configured on startup are bound to the registry. For caches not defined in the cache's configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier.
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that only the RedisCache class collects and exposes CacheStatistics through its getStatistics() method. As far as I can see there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and the @Cacheable annotation, or implement your own metrics collection.
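For the first option, a minimal sketch (the service, cache name, and loader method are illustrative):

@Service
public class SomeDataService {

    // Reads go through RedisCache, so hits and misses are counted automatically
    @Cacheable(cacheNames = "someData", key = "#id")
    public SomeData find(String id) {
        return loadFromBackend(id); // only invoked on a cache miss
    }
}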

Configuring near cache with Hazelcast

Good day everyone.
I'm trying to use Hazelcast as a local cache in a Spring Boot app, following this tutorial:
https://hazelcast.com/blog/non-stop-client-with-near-cache/
Hazelcast Gradle version:
com.hazelcast:hazelcast-client:3.12.11 -> 3.12.10
In my version I see no method to set a maximum timeout for connecting to the cluster (setClusterConnectTimeoutMillis()):
HazelcastInstance member = Hazelcast.newHazelcastInstance();
ClientConfig config = new ClientConfig();
config.getConnectionStrategyConfig().setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
config.getConnectionStrategyConfig().getConnectionRetryConfig().setClusterConnectTimeoutMillis(Integer.MAX_VALUE);
NearCacheConfig nearCacheConfig = new NearCacheConfig("map");
config.addNearCacheConfig(nearCacheConfig);
HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
... and without it, the client fails with
com.hazelcast.client.HazelcastClientOfflineException: Client connecting to cluster
Is there any alternative way to set this via Java config in my Hazelcast version?
An alternative to programmatic client Near Cache setup is declarative configuration in hazelcast-client.xml or YAML files. A Near Cache can also be configured for members using their XML or YAML files.
See...
https://github.com/hazelcast/hazelcast-code-samples/blob/master/clients/client-near-cache/src/main/resources/hazelcast-client.xml
Also docs...
https://docs.hazelcast.com/imdg/latest/performance/near-cache.html#near-cache-example-for-imap
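A minimal declarative sketch along the lines of that sample (the map name "map" mirrors the question's NearCacheConfig; the element values are illustrative):

<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
                                      http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.12.xsd">
    <near-cache name="map">
        <in-memory-format>OBJECT</in-memory-format>
        <invalidate-on-change>true</invalidate-on-change>
    </near-cache>
</hazelcast-client>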

Unable to set HikariCP max-pool-size

I'm using the Spring Boot 2.0.3 release. I want to increase the maximum pool size of HikariCP, which is 10 by default.
I tried changing it in application.properties with
spring.datasource.hikari.maximum-pool-size=200
but it is not working: the logs still show that the max pool size is 10.
The reason I want to change it is that I am somehow reaching that limit in staging and I have no idea what's causing it.
I faced a similar issue recently (Spring Boot v2.1.3). Posting it here in case people bump into the same scenario.
Long story short: if you're initializing the DataSource using @ConfigurationProperties, those properties don't seem to require the hikari prefix for maximum-pool-size, unless I'm missing something. So spring.datasource.maximum-pool-size should work if you use @ConfigurationProperties(prefix = "spring.datasource").
Details: in my case I'm initializing the DataSource myself using org.springframework.boot.jdbc.DataSourceBuilder, so that I can later use it in org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:
@Bean
@Primary
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource primaryDataSource() {
    return DataSourceBuilder.create().build();
}
In this case the property spring.datasource.hikari.maximum-pool-size taken from the Common Application Properties section of the Spring Boot docs did not help. Neither did the maximumPoolSize suggested above.
So I went and debugged the Spring Boot code, which led me to org.springframework.boot.context.properties.bind.JavaBeanBinder and its bind method. The property name bound to the HikariDataSource setter setMaximumPoolSize was "maximum-pool-size", so just for the sake of testing I renamed my property to spring.datasource.maximum-pool-size and it finally worked.
Hope it helps. Please let me know in the comments if I missed something or took the wrong path. Thanks!
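So under that assumption, the working setup looks like this (relaxed binding maps the property name to the setter):

# bound via @ConfigurationProperties(prefix = "spring.datasource") to HikariDataSource.setMaximumPoolSize
spring.datasource.maximum-pool-size=200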
As per the HikariCP GitHub README it's maximumPoolSize, so try using:
spring.datasource.hikari.maximumPoolSize = 200
But this will work only if you let Spring Boot create the DataSource. If you create the DataSource yourself, Spring Boot properties have no effect.
Do note that 200 is a very high value that may negatively impact your database, as each physical connection requires server resources. In most cases a lower value will yield better performance; see the HikariCP wiki: About Pool Sizing.
