Using Redis as the Spring cache manager to cache custom Java objects

I would like to use Redis as a cache manager in order to cache JPA entities coming from a MySQL database.
I am new to Redis and it seems Redis is only able to cache the basic types/structures it knows (strings, hashes, etc.)
My question is: can I use Redis (together with Spring cache abstraction) as a spring cache manager to cache my custom objects (say a Person, Order, Customer, etc...)?

You can start by looking at Spring Data Redis, but unlike Spring Data JPA, it doesn't offer a repository abstraction; instead it uses Spring templates with accessor methods specific to Redis. Since Redis does not support relations, you'll have to design and implement those relations yourself by overriding JPA's standard CRUD operations.
Here's a great article that details something up your alley...
http://www.packtpub.com/article/building-applications-spring-data-redis
I am new to Redis and it seems Redis is only able to cache the basic types/structures it knows (strings, hashes, etc.)
Redis can store anything: text, JSON, binary data; it doesn't matter.
By default, RedisTemplate (part of Spring Data Redis) uses Java serialization to marshal/unmarshal objects to and from Redis, but based on my tests it uses more space in Redis than something like MessagePack.
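Since the question is specifically about the Spring cache abstraction, here is a minimal sketch of a Redis-backed cache manager that stores custom objects as JSON, assuming Spring Data Redis 2.x and an auto-configured RedisConnectionFactory; the class name and the 30-minute TTL are illustrative choices, not from the answer above:
import java.time.Duration;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

@Configuration
@EnableCaching
public class RedisCacheConfig {

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Serialize cached values as JSON instead of JDK serialization
        // and expire entries after 30 minutes.
        RedisCacheConfiguration cacheConfig = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(cacheConfig)
                .build();
    }
}
With this in place, annotating a service method with @Cacheable caches any Jackson-serializable custom object (Person, Order, Customer, etc.) in Redis.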

Redisson offers a Redis-based Spring Cache provider. It supports important cache settings such as ttl and maxIdleTime for entries stored in Redis, and it supports many popular codecs: Jackson JSON, Avro, Smile, CBOR, MsgPack, Kryo, FST, LZ4, Snappy and JDK Serialization.
A config example is below:
@Configuration
@ComponentScan
@EnableCaching
public static class Application {

    @Bean(destroyMethod = "shutdown")
    RedissonClient redisson() {
        Config config = ...
        return Redisson.create(config);
    }

    @Bean
    CacheManager cacheManager(RedissonClient redissonClient) throws IOException {
        Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
        // ttl = 24 minutes, maxIdleTime = 12 minutes
        config.put("testCache", new CacheConfig(24 * 60 * 1000, 12 * 60 * 1000));
        return new RedissonSpringCacheManager(redissonClient, config);
    }
}
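Once the RedissonSpringCacheManager is registered, usage goes through the standard Spring cache annotations. A hypothetical example follows; PersonService, Person and PersonRepository are illustrative names, not part of the answer above:
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class PersonService {

    private final PersonRepository personRepository; // assumed JPA repository

    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    // First call hits MySQL; subsequent calls are served from the
    // "testCache" entry in Redis until its ttl or maxIdleTime expires.
    @Cacheable(value = "testCache", key = "#id")
    public Person findById(Long id) {
        return personRepository.findById(id).orElseThrow();
    }

    // Evict the cached entry when the entity changes.
    @CacheEvict(value = "testCache", key = "#person.id")
    public void update(Person person) {
        personRepository.save(person);
    }
}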

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh, the Regions show up, but the entry time-to-live (TTL) expiration is not set by any of the approaches below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean creation approach also does not work; I get the error "operation is not supported on a client cache":
@EnableEntityDefinedRegions
// @PeerCacheApplication for peer cache; the Region is not created in PCC GemFire
public class BaseApplication {
---
}

@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> employeeAttributes) {
    PartitionedRegionFactoryBean<String, Employee> employeeRegion = new PartitionedRegionFactoryBean<String, Employee>();
    employeeRegion.setCache(cache);
    employeeRegion.setAttributes(employeeAttributes);
    employeeRegion.setClose(false);
    employeeRegion.setName("Employee");
    employeeRegion.setPersistent(false);
    employeeRegion.setDataPolicy(DataPolicy.PARTITION);
    employeeRegion.setStatisticsEnabled(true);
    employeeRegion.setEntryTimeToLive(new ExpirationAttributes(60));
    return employeeRegion;
}

@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean employeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We will not support mismatched dependency versions. While mismatched dependency versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you implicitly become responsible for other aspects of Apache Geode's configuration, e.g. security! Heed the warning.
On the other hand, if your intent is to truly enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions separately (e.g. using Gfsh).
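For completeness, a sketch of the client-side CACHING_PROXY variant; clientRegionShortcut is a real attribute of SDG's @EnableEntityDefinedRegions, while the class name is illustrative:
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;

// CACHING_PROXY Regions hold a local copy of entries in addition to the
// server-side data, so locally configured expiration applies only to that
// local copy; the server/cluster Regions still need their own expiration
// configuration (e.g. via Gfsh).
@SpringBootApplication
@EnableEntityDefinedRegions(clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
class MySpringBootApacheGeodeClientCacheApplication {
    // ...
}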
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not entirely clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Redis cache metrics with Prometheus (Spring Boot)

I am using RedisTemplate for caching purposes in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see any hit/miss statistics for the cache.
The code I have written is something like this:
@EnableCaching
@Configuration
public class CachingConfiguration {

    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper) {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
To read from and write to the cache I do the following.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly; I am able to save into it and read the correct data back. But I don't see any cache hit/miss data in actuator/prometheus.
In my application.yml file I have added the following:
cache:
  redis:
    enable-statistics: 'true'
I would assume that for Spring Boot cache monitoring (including hits/misses) to apply, you need to rely on the auto-configuration.
In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction instead? That way any supported cache library will work out of the box, without you having to create a @Bean and configure it manually.
Otherwise, if you want to enable statistics on a cache manager manually, you will need to call RedisCacheManager.RedisCacheManagerBuilder's enableStatistics():
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
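As a rough sketch of that approach, assuming an auto-configured RedisConnectionFactory and Spring Data Redis 2.4+ (where enableStatistics() was introduced):
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
public class CacheMetricsConfiguration {

    // Building the manager with enableStatistics() makes each RedisCache
    // collect hit/miss counts, which the actuator can then expose under
    // the "cache." metric names.
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        return RedisCacheManager.builder(connectionFactory)
                .enableStatistics()
                .build();
    }
}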
For reference:
Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available.
Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name.
Only caches that are configured on startup are bound to the registry. For caches not defined in the cache's configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier.
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that it is only the RedisCache class that collects and exposes CacheStatistics through its getStatistics() method. As far as I can see there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and the @Cacheable annotation, or implement your own custom metrics collection.

Apache Ignite, Spring Data and MySQL do not work together

I have published the project
https://github.com/armdev/ignite-spring-boot
with a Spring Data JPA, MySQL and Apache Ignite configuration.
This is the Ignite cache configuration:
@Bean
public Ignite igniteInstance() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    // Set a custom name for the node.
    cfg.setIgniteInstanceName("springDataNode");
    // Enable the peer class loading feature.
    cfg.setPeerClassLoadingEnabled(true);
    // Define and create a new cache to be used by the Ignite Spring Data repository.
    CacheConfiguration ccfg = new CacheConfiguration("FlightCache");
    // Set the SQL schema for the cache.
    ccfg.setIndexedTypes(Long.class, Flight.class);
    cfg.setActiveOnStart(true);
    cfg.setCacheConfiguration(ccfg);
    return Ignition.start(cfg);
}
The project has two APIs: one works without Ignite, but the repository configured with Ignite does not work. I do not understand the reason.
You need to configure a CacheStore that operates on top of the MySQL data source, and you need to enable write-through and read-through behavior as well, as sketched below.
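A minimal sketch of that wiring with Ignite's built-in CacheJdbcPojoStoreFactory; the data source bean name, table name and column mappings are assumptions that would have to match the actual MySQL schema:
import java.sql.Types;

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;

// Inside igniteInstance(), back the cache with MySQL so that misses are
// loaded from the database and writes are propagated back to it.
CacheConfiguration<Long, Flight> ccfg = new CacheConfiguration<>("FlightCache");
ccfg.setIndexedTypes(Long.class, Flight.class);

CacheJdbcPojoStoreFactory<Long, Flight> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("mysqlDataSource"); // assumed DataSource bean name

JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName("FlightCache");
jdbcType.setDatabaseTable("flights"); // assumed table name
jdbcType.setKeyType(Long.class);
jdbcType.setKeyFields(new JdbcTypeField(Types.BIGINT, "id", Long.class, "id"));
jdbcType.setValueType(Flight.class);
// ... map the remaining columns to Flight fields with setValueFields(...).
storeFactory.setTypes(jdbcType);

ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);   // cache misses fall through to MySQL
ccfg.setWriteThrough(true);  // cache writes are written to MySQL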

Spring Data MongoDB: eliminate POJOs

My system is a dynamic telemetry system. We have hundreds of different spiders all sending telemetry back to the Spring Boot server. Everything is dynamic, driven by JSON files in Mongo, including the UI. We don't build the UI; individual teams configure their own UI for their needs, all by editing JSON docs.
We have the majority of the UI running, and I have begun the middleware piece. We are using Spring Boot for the first time, along with Spring Data Mongo and several MQ listeners for events. The problem is Spring Data. I started reading the docs on it and realized they do not address using it without POJOs. I have this wonderfully dynamic model that changes per user per minute if the telemetry spiders dictate; I couldn't shackle it to a POJO if I tried. Is there a way to use Spring Data with a Map?
It seems from my experiments that the big issue is that there is no way to tell the CRUD routines of the repository class which collection to query without a POJO.
Are my suspicions correct that this won't work, and am I better off ditching Spring Data and using the Mongo driver directly?
I don't think you can do it without a POJO when using Spring Data. The least you could do is this:
public interface NoPojoRepository extends MongoRepository<DummyPojo, String> {
}
and create a dummy POJO with just an id and a Map.
@Data
public class DummyPojo {

    @Id
    private String id;

    private Map<String, Object> value;
}
Since this value field is a map, you can store pretty much anything.
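A hypothetical usage of that repository (the id, the payload map and the injected noPojoRepository are illustrative; the payload stands in for whatever dynamic JSON the spiders send):
import java.util.HashMap;
import java.util.Map;

// Store an arbitrary, schema-less payload under a known id.
Map<String, Object> payload = new HashMap<>();
payload.put("spider", "spider-42");
payload.put("metrics", Map.of("latencyMs", 12, "errors", 0));

DummyPojo doc = new DummyPojo();
doc.setId("telemetry-2024-01-01");
doc.setValue(payload);
noPojoRepository.save(doc);

// Read it back; the Map mirrors the stored BSON document.
Map<String, Object> stored = noPojoRepository.findById("telemetry-2024-01-01")
        .map(DummyPojo::getValue)
        .orElse(Map.of());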

Can I use the JCache API for distributed caches in Apache Ignite?

I would like to configure a distributed cache with Apache Ignite using the JCache API (JSR107, javax.cache). Is this possible?
The examples I have found either create a local cache with the JCache API or create a distributed cache (or datagrid) using the Apache Ignite API.
JCache allows you to provide provider-specific configuration when creating a cache, i.e., you can do this:
// Get or create a cache manager.
CacheManager cacheMgr = Caching.getCachingProvider().getCacheManager();

// This is an Ignite configuration object (org.apache.ignite.configuration.CacheConfiguration).
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

// Specify cache mode and/or any other Ignite-specific configuration properties.
cfg.setCacheMode(CacheMode.PARTITIONED);

// Create a cache based on the configuration created above.
Cache<Integer, String> cache = cacheMgr.createCache("a", cfg);
Also note that partitioned mode is actually the default one in Ignite, so you are not required to specify it explicitly.
Update: in addition, the CachingProvider.getCacheManager(..) method accepts a provider-specific URI which, in the case of Ignite, should point to an XML configuration file. Discovery, communication and other parameters can be provided there.
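A small sketch of that URI-based bootstrap; the configuration file path is an assumption and must point to a valid Ignite XML configuration:
import java.net.URI;

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

// Point the JCache provider at an Ignite XML configuration; discovery,
// communication and other cluster parameters come from that file.
CacheManager cacheMgr = Caching.getCachingProvider()
        .getCacheManager(URI.create("file:/path/to/ignite-config.xml"), null);

Cache<Integer, String> cache = cacheMgr.getCache("a", Integer.class, String.class);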
Please note that the JCache specification does not standardize every provider-specific concern: creating a CacheManager is covered by the spec, but how the underlying provider (here, the Ignite grid) is configured through it is not.
The following code demonstrates how to create a grid using Apache Ignite in Spring Boot:
@Bean
@SuppressWarnings("unchecked")
public org.apache.ignite.cache.spring.SpringCacheManager cacheManager() {
    IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
    igniteConfiguration.setGridName("petclinic-ignite-grid");
    // igniteConfiguration.setClassLoader(dynamicClassLoaderWrapper());
    igniteConfiguration.setCacheConfiguration(this.createDefaultCache("petclinic"),
            this.createDefaultCache("org.hibernate.cache.spi.UpdateTimestampsCache"),
            this.createDefaultCache("org.hibernate.cache.internal.StandardQueryCache"));

    SpringCacheManager springCacheManager = new SpringCacheManager();
    springCacheManager.setConfiguration(igniteConfiguration);
    springCacheManager.setDynamicCacheConfiguration(this.createDefaultCache(null));
    return springCacheManager;
}

private org.apache.ignite.configuration.CacheConfiguration createDefaultCache(String name) {
    org.apache.ignite.configuration.CacheConfiguration cacheConfiguration = new org.apache.ignite.configuration.CacheConfiguration();
    cacheConfiguration.setName(name);
    cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
    cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    cacheConfiguration.setStatisticsEnabled(true);
    cacheConfiguration.setEvictSynchronized(true);
    return cacheConfiguration;
}
If we were to create another instance of this service and have it register to the same grid via igniteConfiguration.setGridName("petclinic-ignite-grid"), an IMDG (in-memory data grid) would be formed. Please note that the two service instances running this embedded, partitioned, distributed cache must be able to reach each other on the required ports. Please refer to Apache Ignite - Data Grid for more details.
Hope this helps.
