Spring @Cacheable: how to deal with Redis read and read/write replicas?

I have a simple method that connects to an external service, and I want to cache its result:
@Cacheable(cacheNames = "myCache", key = "#token")
public String getUserByToken(String token) {
    return externalService.getUserFromToken(token);
}
Right now this is my custom CacheManager config, because I need a different TTL for each cache:
@Configuration
@EnableCaching
public class CacheConfiguration extends CachingConfig {

    @Autowired
    private LettuceConnectionFactory lettuceConnectionFactory;

    @Primary
    @Bean
    public CacheManager cacheManager() {
        RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
        CacheManager cacheManager = RedisCacheManager.RedisCacheManagerBuilder.fromConnectionFactory(lettuceConnectionFactory)
                .cacheDefaults(redisCacheConfiguration)
                .withCacheConfiguration("myCache",
                        redisCacheConfiguration.entryTtl(Duration.ofMinutes(5)))
                .withCacheConfiguration("someOtherCache",
                        redisCacheConfiguration.entryTtl(Duration.ofMinutes(10)))
                .build();
        return cacheManager;
    }
}
Right now I have a single lettuceConnectionFactory, which is simply the bean created by Spring Boot's auto-configuration and uses my Redis primary read/write endpoint.
However, I would like to improve throughput, so I am planning to create two separate Lettuce connection factory beans: one that uses my Redis primary read/write endpoint, and another, lettuceReadConnectionFactory, that uses my Redis read-only endpoint.
I would like to use lettuceReadConnectionFactory exclusively for read operations.
Is there a way to still use @Cacheable and have it use a different CacheManager for reads and writes?

The @Cacheable annotation takes a cacheManager attribute; you could create two differently named CacheManager beans and pass whichever customized manager you want to each caching operation.
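For example, a minimal sketch (assuming a lettuceReadConnectionFactory bean that points at the read-only endpoint, as described above):
@Bean
@Primary
public CacheManager cacheManager(LettuceConnectionFactory lettuceConnectionFactory) {
    // Backed by the primary read/write endpoint
    return RedisCacheManager.RedisCacheManagerBuilder
            .fromConnectionFactory(lettuceConnectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofMinutes(5)))
            .build();
}

@Bean
public CacheManager readCacheManager(LettuceConnectionFactory lettuceReadConnectionFactory) {
    // Backed by the read-only replica endpoint
    return RedisCacheManager.RedisCacheManagerBuilder
            .fromConnectionFactory(lettuceReadConnectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofMinutes(5)))
            .build();
}

@Cacheable(cacheNames = "myCache", key = "#token", cacheManager = "readCacheManager")
public String getUserByToken(String token) {
    return externalService.getUserFromToken(token);
}
Be aware, though, that a single @Cacheable operation performs both the lookup and, on a cache miss, the subsequent put through the same manager, so whichever manager you name must still be able to write; the annotation cannot split one operation's reads and writes across two managers.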

Related

How to set the expiry time for Redis cache in Spring Boot

Hi, I want to add an expiry time to the Redis cache.
I have added spring.cache.redis.time-to-live=1m, but it's not working.
I am using a Jedis connection factory for the connection.
Please suggest something; I am new to Redis.
I tried creating some beans based on examples I found on Google, but it's still not working.
To add an expiry time to the Redis cache in Spring Boot, you can set the spring.cache.redis.time-to-live property in the application.properties file, as you have done. This property specifies the default time-to-live for cache entries.
However, there are a few things to check if it's not working as expected:
Ensure that you have the Spring Boot Redis starter (spring-boot-starter-data-redis) in your pom.xml or build.gradle file.
Check that caching is enabled by adding the @EnableCaching annotation to your Spring Boot application class.
Check that the RedisConnectionFactory is properly configured. In your case you are using a JedisConnectionFactory. You can create a bean for the RedisConnectionFactory as follows:
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    // Set the Redis server host and port
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration("localhost", 6379);
    return new JedisConnectionFactory(config);
}
Ensure that the RedisTemplate is properly configured. Note that the RedisTemplate is used for direct Redis access; the cache abstraction itself goes through the CacheManager. You can create a bean for RedisTemplate as follows:
@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    // Set the key and value serializers
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.setValueSerializer(new GenericJackson2JsonRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate;
}
Note that in the above example, I'm using the GenericJackson2JsonRedisSerializer for serializing the values. You can change this to any other serializer that suits your needs.
Once the RedisConnectionFactory is configured, Spring Boot's auto-configured RedisCacheManager applies the time-to-live property to cached values. Be aware that this property only affects the auto-configured cache manager: if you define your own CacheManager bean, the spring.cache.redis.* properties are ignored, which is a common reason the TTL appears not to work.
If the issue still persists, you can scope caching to a specific cache with the @Cacheable annotation:
@Cacheable(value = "myCache", key = "#key", unless = "#result == null")
public Object getCachedData(String key) {
    // your code to retrieve the data from the database or any other source
}
Note that @Cacheable itself has no TTL attribute; the expiry always comes from the cache configuration for the cache name you use.
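If you want one cache to have its own TTL while keeping the auto-configured cache manager (and the default time-to-live for everything else), a minimal sketch is to register a RedisCacheManagerBuilderCustomizer bean; the cache name and duration here are just examples:
@Bean
public RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
    return builder -> builder.withCacheConfiguration("myCache",
            RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofMinutes(1)));
}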
I hope this helps you to add an expiry time to Redis cache in your Spring Boot application.

Need to configure my JPA layer to use a TransactionManager (Spring Cloud Task + Batch register a PlatformTransactionManager unexpectedly)

I am using Spring Cloud Task + Batch in a project.
I plan to use different datasources for business data and Spring audit data on the task. So I configured something like:
@Bean
public TaskConfigurer taskConfigurer() {
    return new DefaultTaskConfigurer(this.singletonNotExposedSpringDatasource());
}

@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer(this.singletonNotExposedSpringDatasource());
}
whereas the main datasource is auto-configured through JpaBaseConfiguration.
The problem comes when SimpleBatchConfiguration + DefaultBatchConfigurer expose a PlatformTransactionManager bean, since JpaBaseConfiguration has a @ConditionalOnMissingBean on PlatformTransactionManager. Therefore Batch's PlatformTransactionManager, bound to spring.datasource, takes precedence.
So far, this seems to be caused by this bug (BATCH-2788).
So I tried to emulate what JpaBaseConfiguration does, defining my own PlatformTransactionManager over my business datasource/entity manager:
@Primary
@Bean
public PlatformTransactionManager appTransactionManager(final LocalContainerEntityManagerFactoryBean appEntityManager) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(appEntityManager.getObject());
    this.appTransactionManager = transactionManager;
    return transactionManager;
}
Note I had to define it with a name other than transactionManager; otherwise Spring finds two beans and complains (regardless of @Primary!).
But now comes the funny part. When running the tests everything runs smoothly: the tests finish, the DDL is properly created for both the business and the Batch/Task databases, and database reads work flawlessly. But business data is not persisted in my test database, so the final assertThats fail when counting. If I @Autowire the PlatformTransactionManager or EntityManager into my test, everything indicates they are the proper ones. But if I debug inside entityRepository.save and execute org.springframework.transaction.interceptor.TransactionAspectSupport.currentTransactionStatus(), it seems the DataSourceTransactionManager from Batch's configuration wins, so my custom PlatformTransactionManager is not being used.
So I guess it is not a problem of my PlatformTransactionManager not being primary, but that something is configuring my JPA layer's TransactionInterceptor to use Batch's bean named transactionManager rather than the primary one.
I also tried making my @Configuration implement TransactionManagementConfigurer and overriding PlatformTransactionManager annotationDrivenTransactionManager(), but still no luck.
Thus, I guess what I am asking is whether there is a way to configure the primary TransactionManager for the JPA layer.
The problem comes when SimpleBatchConfiguration+DefaultBatchConfigurer expose a PlatformTransactionManager bean,
As you mentioned, this is indeed what was reported in BATCH-2788. The solution we are exploring is to expose the transaction manager bean only if Spring Batch creates it.
In the meantime you can set the property spring.main.allow-bean-definition-overriding=true to allow bean definition overriding and set the transaction manager you want Spring Batch to use with BatchConfigurer#getTransactionManager. In your case, it would be something like:
@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer(this.singletonNotExposedSpringDatasource()) {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new MyTransactionManager();
        }
    };
}
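For instance, a concrete version of that override could bind Batch's metadata transactions to the same non-exposed datasource, so it no longer competes with your JPA transaction manager (a sketch; MyTransactionManager above is just a placeholder):
@Bean
public BatchConfigurer batchConfigurer() {
    DataSource batchDataSource = this.singletonNotExposedSpringDatasource();
    return new DefaultBatchConfigurer(batchDataSource) {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            // DataSourceTransactionManager bound to the Batch/Task datasource
            return new DataSourceTransactionManager(batchDataSource);
        }
    };
}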
Hope this helps.

Use multiple ClientAuthentication implementations with spring-vault

We have an application using spring-vault. It authenticates to Vault using an AppRole. We use the token we get from that operation to read and write secrets. The configuration for the VaultEndpoint and AppRoleAuthentication are auto-configured from a properties file.
Code looks like this:
@Autowired
private ApplicationContext context;

@Autowired
private VaultOperations vault;

private Logger logger = LoggerFactory.getLogger(VaultFacade.class);

public VaultFacadeImpl() {
    logger.debug("Creating VaultFacade with autowired context");
    context = new AnnotationConfigApplicationContext(VaultConfig.class);
    vault = context.getBean(VaultTemplate.class);
    // vault variable ready to use with vault.read or vault.write
    // in our VaultFacadeImpl
}
I would like to keep the autowiring capabilities, but also support two other ClientAuthentication implementations:
The existing TokenAuthentication
A custom ClientAuthentication implementation (LDAP auth backend)
The end result would be having two authentication mechanism available at the same time. Some operations would be carried out with the application's credentials (AppRole in Vault), others with the user's credentials (LDAP in Vault).
I think I can create multiple AbstractVaultConfiguration classes, each returning a different ClientAuthentication derivative. But how can I create a VaultTemplate for each configuration class?
If you want to have an additional VaultTemplate bean, then you need to configure and declare the bean yourself. You can keep the foundation provided by AbstractVaultConfiguration. Your config could look like:
@Configuration
public class CustomConfiguration {

    @Bean
    public VaultTemplate ldapAuthVaultTemplate(ClientFactoryWrapper clientHttpRequestFactoryWrapper,
                                               ThreadPoolTaskScheduler threadPoolTaskScheduler) {
        return new VaultTemplate(…,
                clientHttpRequestFactoryWrapper.getClientHttpRequestFactory(),
                ldapSessionManager(threadPoolTaskScheduler));
    }

    @Bean
    public SessionManager ldapSessionManager(ThreadPoolTaskScheduler threadPoolTaskScheduler) {
        ClientAuthentication clientAuthentication = new MyLdapClientAuthentication(…);
        return new LifecycleAwareSessionManager(clientAuthentication,
                threadPoolTaskScheduler,
                …);
    }
}
On the client side (when using the second VaultTemplate) you need to make sure to look up the appropriate instance. Spring doesn't limit you to one bean per type; it allows registration of multiple beans of the same type.
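For example, the LDAP-backed template could be injected by bean name (a sketch using the bean name declared above):
@Autowired
@Qualifier("ldapAuthVaultTemplate")
private VaultOperations ldapVault;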

Spring - Data persisted by JdbcTemplate unable to be seen by JpaRepository

I am building a Spring Boot application which requires persistence via JDBC and selecting/reading via JPA/Hibernate. I have implemented both types of operation using Spring's JdbcTemplate and Spring Data's JpaRepository.
After I persist using JdbcTemplate, I am unable to see the data via JpaRepository, even though they share the same datasource. I am able to read the data if I use JdbcTemplate.
NOTE: I am using two datasources. One is configured in another class without the @Primary annotation, using its own entity manager factory and transaction manager, which is why I've needed to explicitly define the beans below using Spring Boot's default bean names, transactionManager and entityManagerFactory.
The following is my embedded database configuration for the primary beans:
@Configuration
@EnableJpaRepositories(basePackages = {"com.repository"})
public class H2DataSourceConfiguration {

    private static final Logger log = LoggerFactory.getLogger(H2DataSourceConfiguration.class);

    @Bean(destroyMethod = "shutdown")
    @Primary
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .setName("dataSource")
                .build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.my.domain", "org.springframework.data.jpa.convert.threeten")
                .build();
    }

    @Bean
    @Primary
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager(entityManagerFactory);
        return jpaTransactionManager;
    }
}
The persistence happens in a different transaction from the reading of the data; however, they share the same service.
Both operations happen within @Transactional boundaries. Both repository beans are injected into the same service, which also carries the @Transactional annotation. The service looks as follows:
@Service
@Transactional
public class MyServiceImpl implements MyService {

    private static final Logger log = LoggerFactory.getLogger(MyServiceImpl.class);

    @Autowired
    private MyJpaRepository myJpaRepository;

    @Autowired
    private MyJdbcRepository myJdbcRepository;

    ...
}
MyJdbcRepositoryImpl.java:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class MyJdbcRepositoryImpl implements MyJdbcRepository {

    @Autowired
    private NamedParameterJdbcTemplate jdbcTemplate;

    // methods within here all use jdbcTemplate.query(...)
}
AcquisitionJpaRepository.java:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public interface AcquisitionJpaRepository extends JpaRepository<AcquisitionEntity, Long> {
}
Is it at all possible that the JdbcTemplate calls are saving to a different H2 database?
The above configuration is correct!
The problem was that the JdbcTemplate calls had the schema owner as a prefix.
For example:
select * from I_AM_SCHEMA.KILL_ME
However, on the entity object I had both the @Entity annotation and the @Table annotation, and had only specified the table name:
@Entity
@Table(name = "KILL_ME")
So we were writing to one table with JdbcTemplate but reading from a completely different table via JPA/Hibernate, because the schema prefix was missing on the JPA side.
The correct fix was to declare the schema in the @Table annotation:
@Entity
@Table(name = "KILL_ME", schema = "I_AM_SCHEMA")
DONE!
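For clarity, the two access paths end up aligned like this (schema and table names as above; the entity class and row mapper are illustrative):
// JDBC side: schema-qualified query
jdbcTemplate.query("select * from I_AM_SCHEMA.KILL_ME", rowMapper);

// JPA side: the same schema and table declared on the mapping
@Entity
@Table(name = "KILL_ME", schema = "I_AM_SCHEMA")
public class KillMe { ... }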

Spring Java Config - Is it possible to create a @Bean dynamically?

Given this configuration class, I want to dynamically create a DataSourceTransactionManager bean for each one of the DataSource objects. Is it possible?
@Configuration
public class SomeConfig {

    @Autowired
    private DataSource[] dataSources;
}
That is to say, I want to loop over the dataSources array and create a @Bean that returns new DataSourceTransactionManager(dataSources[i]) for each entry.
In this case I don't want to create a @Bean List<DataSourceTransactionManager> as answered here, but a number of separate @Bean DataSourceTransactionManager instances.
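For illustration, one approach I am considering is a BeanFactoryPostProcessor that registers one DataSourceTransactionManager per data source (a sketch; note it forces early initialization of the DataSource beans):
public class TransactionManagerRegistrar implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        for (String name : beanFactory.getBeanNamesForType(DataSource.class)) {
            // Resolve each DataSource and register a matching transaction manager singleton
            DataSource dataSource = beanFactory.getBean(name, DataSource.class);
            beanFactory.registerSingleton(name + "TransactionManager",
                    new DataSourceTransactionManager(dataSource));
        }
    }
}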
