Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations? - spring-boot

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh the Regions show up, but the entry time-to-live expiration (TTL) is not being set by any of the approaches below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean-definition approach below also does not work; I get the error "operation is not supported on a client cache".
@EnableEntityDefinedRegions
// @PeerCacheApplication for a peer cache; the Region is not creating in PCC GemFire
public class BaseApplication {
---
}

@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> employeeRegionAttributes) {
    PartitionedRegionFactoryBean<String, Employee> employeeRegion = new PartitionedRegionFactoryBean<>();
    employeeRegion.setCache(cache);
    employeeRegion.setAttributes(employeeRegionAttributes);
    employeeRegion.setClose(false);
    employeeRegion.setName("Employee");
    employeeRegion.setPersistent(false);
    employeeRegion.setDataPolicy(DataPolicy.PARTITION);
    employeeRegion.setStatisticsEnabled(true);
    employeeRegion.setEntryTimeToLive(new ExpirationAttributes(60));
    return employeeRegion;
}

@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean employeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}

First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We do not support mismatched dependency versions. While mismatched dependency versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, then you may necessarily and implicitly be responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is truly to enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally. All data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions, separately (e.g. using Gfsh).
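To illustrate, here is a minimal sketch (not your exact configuration) of a client-side setup that at least applies the TTL locally, assuming CACHING_PROXY client Regions and SDG's annotation-based expiration; the matching server/cluster Regions and their expiration policies still have to be configured separately (e.g. with Gfsh):

// Sketch only: client CACHING_PROXY Regions with locally applied, annotation-based TTL expiration.
@SpringBootApplication
@EnableStatistics
@EnableExpiration
@EnableEntityDefinedRegions(basePackageClasses = Employee.class,
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
public class BaseApplication {
}

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
public class Employee {
}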
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not really clear on the matter of expiration, so I have filed an Issue ticket in SBDG to make this clearer.

Related

Is it possible to set Redis key expiration time when using Spring Integration RedisMessageStore

I'd like to auto-expire Redis keys when using the org.springframework.integration.redis.store.RedisMessageStore class in Spring Integration. I see some methods related to an "expiry callback", but I could not find any documentation or examples yet. Any advice will be much appreciated.
@Bean
public MessageStore redisMessageStore(LettuceConnectionFactory redisConnectionFactory) {
    RedisMessageStore store = new RedisMessageStore(redisConnectionFactory, "TEST_");
    return store;
}
Spring Boot 2.6.3, with spring-integration and spring-boot-starter-data-redis.
The RedisMessageStore does not have any expiration features. And technically it must not: the point of this kind of store is to keep data until it is used. Look at it as persistent storage. The RedisMetadataStore is based on the RedisProperties object, so it also cannot use an expiration feature for a particular entry.
You are probably thinking of a MessageGroupStoreReaper, which really does call MessageGroupStore.expireMessageGroups(long timeout), but that is already an artificial, cross-store implementation provided by the framework. The logic relies on group.getTimestamp() and group.getLastModified(). So, it is still not that Redis auto-expiration feature.
The MessageGroupStoreReaper is a process that needs to be run in your application: nothing Redis-specific.
See more info in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#reaper
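For completeness, a rough sketch of wiring a MessageGroupStoreReaper yourself (the 30-second timeout and 10-second schedule below are arbitrary example values):

// Sketch only: periodically expires message groups older than 30 seconds
// (timeout and schedule values are arbitrary examples).
@Configuration
@EnableScheduling
public class ReaperConfiguration {

    private final MessageGroupStoreReaper reaper;

    // assumes your RedisMessageStore bean is resolvable by this concrete type
    public ReaperConfiguration(RedisMessageStore redisMessageStore) {
        this.reaper = new MessageGroupStoreReaper(redisMessageStore);
        this.reaper.setTimeout(30_000); // expire groups not modified for 30 seconds
    }

    @Bean
    public MessageGroupStoreReaper reaper() {
        return this.reaper;
    }

    @Scheduled(fixedRate = 10_000)
    public void purgeExpiredGroups() {
        this.reaper.run(); // calls MessageGroupStore.expireMessageGroups(timeout)
    }
}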

Redis cache metrics with Prometheus (Spring Boot)

I am using RedisTemplate for caching purposes in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see cache hit/miss data for the cache.
The code I have written is something like below
@EnableCaching
@Configuration
public class CachingConfiguration {
    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper)
    {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
Now I am doing the following to get from and save into the cache.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly; I am able to save into the cache and retrieve the proper data.
But I don't see cache hit/miss related data in actuator/prometheus.
In my application.yml file I have added below
cache:
  redis:
    enable-statistics: 'true'
I would assume that in order for Spring Boot cache monitoring (including hits/misses) to apply, you would need to rely on auto-configuration.
In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction? That way any supported cache library will work out of the box, without you having to create the @Bean and configure it manually.
Otherwise, if you generally want to enable statistics on a cache manager manually, you will need to call RedisCacheManager.RedisCacheManagerBuilder's enableStatistics():
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
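For example, a minimal sketch of a manually built RedisCacheManager with statistics enabled (the cache defaults and TTL below are illustrative):

// Sketch only: caches created by this manager expose hit/miss statistics to Micrometer.
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                    .entryTtl(Duration.ofMinutes(10))) // illustrative TTL
            .enableStatistics()
            .build();
}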
For Reference:
Auto-configuration enables the instrumentation of all available Cache
instances on startup, with metrics prefixed with cache. Cache
instrumentation is standardized for a basic set of metrics.
Additional, cache-specific metrics are also available.
Metrics are tagged by the name of the cache and by the name of the
CacheManager, which is derived from the bean name.
Only caches that are configured on startup are bound to the registry. For caches not
defined in the cache’s configuration, such as caches created on the
fly or programmatically after the startup phase, an explicit
registration is required. A CacheMetricsRegistrar bean is made
available to make that process easier.
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that it's only RedisCache class which collects and exposes CacheStatistics through getStatistics() method. As far as I can see there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and #Cacheable annotation or implement your custom metrics collection.
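If you do create caches programmatically, a rough sketch of registering one with the actuator metrics via Spring Boot's CacheMetricsRegistrar (the class and cache names here are illustrative) could look like this:

// Sketch only: binds a programmatically created cache to the Micrometer registry.
@Component
public class DynamicCacheMetricsBinder {

    private final CacheMetricsRegistrar registrar;
    private final CacheManager cacheManager;

    public DynamicCacheMetricsBinder(CacheMetricsRegistrar registrar, CacheManager cacheManager) {
        this.registrar = registrar;
        this.cacheManager = cacheManager;
    }

    public void bind(String cacheName) {
        Cache cache = cacheManager.getCache(cacheName); // looks up (or creates) the cache
        registrar.bindCacheToRegistry(cache);
    }
}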

Implement multi-tenanted application with Keycloak and springboot

We use 'KeycloakSpringBootConfigResolver' for reading the Keycloak configuration from the Spring Boot properties file instead of keycloak.json.
Now there are guidelines to implement a multi-tenant application using keycloak by overriding 'KeycloakConfigResolver' as specified in http://www.keycloak.org/docs/2.3/securing_apps_guide/topics/oidc/java/multi-tenancy.html.
The steps defined here can only be used with keycloak.json.
How can we adapt this to a Spring Boot application such that the Keycloak properties are read from the Spring Boot properties file and multi-tenancy is achieved?
You can access the Keycloak config you specified in your application.yaml (or application.properties) if you inject org.keycloak.representations.adapters.config.AdapterConfig into your component.
@Component
public class MyKeycloakConfigResolver implements KeycloakConfigResolver {

    private final AdapterConfig keycloakConfig;

    public MyKeycloakConfigResolver(org.keycloak.representations.adapters.config.AdapterConfig keycloakConfig) {
        this.keycloakConfig = keycloakConfig;
    }

    @Override
    public KeycloakDeployment resolve(OIDCHttpFacade.Request request) {
        // make a defensive copy before changing the config
        AdapterConfig currentConfig = new AdapterConfig();
        BeanUtils.copyProperties(keycloakConfig, currentConfig);
        // change things here, for example compute the realm from the request
        return KeycloakDeploymentBuilder.build(currentConfig);
    }
}
After several trials, the only feasible option for Spring Boot is to have multiple instances of the Spring Boot application running under different Spring 'profiles'.
Each application instance can then have its own Keycloak properties (as it is under a different profile), including the realm; see the example properties below.
The challenge is to have an upgrade path for all instances for version upgrades/bug fixes, but I guess there are multiple strategies already implemented (not part of this discussion).
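As an illustration, each profile could then carry its own adapter properties, e.g. an application-tenant1.properties along these lines (all values below are placeholders):

# application-tenant1.properties (placeholder values)
keycloak.realm=tenant1-realm
keycloak.auth-server-url=https://keycloak.example.com/auth
keycloak.resource=my-app
keycloak.credentials.secret=change-me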
there is a ticket regarding this problem: https://issues.jboss.org/browse/KEYCLOAK-4139?_sscc=t
Comments for that ticket also talk about possible workarounds intervening in servlet setup of the service used (Tomcat/Undertow/Jetty), which you could try.
Note that the documentation you linked in your first comment is super outdated!

GemFire - Spring Boot Configuration

I am working on a project that has a requirement of Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I need only server-side configuration as client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster) as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
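As an aside, a rough sketch of a comparable server setup using SDG's newer annotation-based configuration model (the application name and ports below are illustrative, not taken from the linked example):

// Sketch only: a Spring Boot application bootstrapped as a GemFire server
// with an embedded Locator and JMX Manager (illustrative name/ports).
@SpringBootApplication
@CacheServerApplication(name = "SpringBootGemFireServer")
@EnableLocator(port = 10334)
@EnableManager(start = true, port = 1099)
public class SpringBootGemFireServer {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootGemFireServer.class, args);
    }
}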
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem; let me share what worked for me. In this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire
Read the 15 minutes quick start guide
Create a locator (let's call it locator1), a server (server1) and a region (region1); see the example gfsh commands after these steps.
Go to the folder where you started 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
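For step 3, the gfsh commands look roughly like this (using the names suggested above):

gfsh> start locator --name=locator1
gfsh> start server --name=server1
gfsh> create region --name=region1 --type=PARTITION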
Now let's see the Spring boot side:
In your Application class with the main method, add the @EnableGemfireCaching annotation.
In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// this is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), i.e. the class your cached method is returning.

Can I use the JCache API for distributed caches in Apache Ignite?

I would like to configure a distributed cache with Apache Ignite using the JCache API (JSR107, javax.cache). Is this possible?
The examples I have found either create a local cache with the JCache API or create a distributed cache (or datagrid) using the Apache Ignite API.
JCache allows you to provide provider-specific configuration when creating a cache. I.e., you can do this:
// Get or create a cache manager.
CacheManager cacheMgr = Caching.getCachingProvider().getCacheManager();
// This is an Ignite configuration object (org.apache.ignite.configuration.CacheConfiguration).
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
// Specify cache mode and/or any other Ignite-specific configuration properties.
cfg.setCacheMode(CacheMode.PARTITIONED);
// Create a cache based on the configuration created above.
Cache<Integer, String> cache = cacheMgr.createCache("a", cfg);
Also note that partitioned mode is actually the default one in Ignite, so you are not required to specify it explicitly.
UPD. In addition, CachingProvider.getCacheManager(..) method accepts a provider-specific URI that in case of Ignite should point to XML configuration file. Discovery, communication and other parameters can be provided there.
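For example (the configuration file location below is just a placeholder):

// Sketch only: obtain a CacheManager backed by an Ignite XML configuration file
// (the URI below is an illustrative placeholder).
CachingProvider provider = Caching.getCachingProvider();
CacheManager cacheMgr = provider.getCacheManager(
        URI.create("file:///path/to/example-ignite-config.xml"),
        provider.getDefaultClassLoader());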
Please note that the JCache specification does not cover all of the configuration that applies to individual cache providers when it comes to configuring a grid via the CacheManager. The requirement for creating a CacheManager is standard, but not everything relevant to how the manager itself is configured.
The following code demonstrates how to create a grid using Apache Ignite in Spring Boot:
@Bean
@SuppressWarnings("unchecked")
public org.apache.ignite.cache.spring.SpringCacheManager cacheManager() {
    IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
    igniteConfiguration.setGridName("petclinic-ignite-grid");
    //igniteConfiguration.setClassLoader(dynamicClassLoaderWrapper());
    igniteConfiguration.setCacheConfiguration(this.createDefaultCache("petclinic"),
            this.createDefaultCache("org.hibernate.cache.spi.UpdateTimestampsCache"),
            this.createDefaultCache("org.hibernate.cache.internal.StandardQueryCache"));

    SpringCacheManager springCacheManager = new SpringCacheManager();
    springCacheManager.setConfiguration(igniteConfiguration);
    springCacheManager.setDynamicCacheConfiguration(this.createDefaultCache(null));
    return springCacheManager;
}

private org.apache.ignite.configuration.CacheConfiguration createDefaultCache(String name) {
    org.apache.ignite.configuration.CacheConfiguration cacheConfiguration = new org.apache.ignite.configuration.CacheConfiguration();
    cacheConfiguration.setName(name);
    cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
    cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    cacheConfiguration.setStatisticsEnabled(true);
    cacheConfiguration.setEvictSynchronized(true);
    return cacheConfiguration;
}
If we were to create another instance of this service and have it register to the same grid via igniteConfiguration.setGridName("petclinic-ignite-grid"), an IMDG (in-memory data grid) would be formed. Please note that the two service instances running this kind of partitioned, embedded, distributed cache need to be able to talk to each other over the required ports. Please refer to Apache Ignite - Data Grid for more details.
Hope this helps.
