Redisson doesn't set TTL or cache name correctly - spring

I'm creating a Spring application that uses a Redis cache via the Redisson client.
@Bean
public CacheManager cacheManager(RedissonClient redissonClient) throws IOException {
    Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
    // ttl and maxIdleTime are given in milliseconds
    config.put("employeesCache", new CacheConfig(24 * 60 * 1000, 12 * 60 * 1000));
    return new RedissonSpringCacheManager(redissonClient, config);
}
However, when I run the application, the key created in Redis is {employeesCache}:redisson_options instead of just employeesCache.
Also, when I check the TTL in the Redis CLI it returns (integer) -1, meaning no TTL has been set.
So the RedissonSpringCacheManager is only partially functioning: it creates the cache but without any of the configuration. Can you help me fix it?
I'm using the following Maven dependencies:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson-spring-boot-starter</artifactId>
    <version>3.13.1</version>
</dependency>

Redisson stores your data in a Redis hash, and entries of a hash don't support per-entry TTL in Redis itself. So Redisson maintains {employeesCache}:redisson_options to store the cache configuration, and your employeesCache entries are expired and deleted by Redisson, NOT by Redis. That's why the Redis CLI reports a TTL of -1.
Your data is saved in a map called employeesCache, not in {employeesCache}:redisson_options; just leave that key alone.
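If you want to confirm that per-entry expiry is actually in effect, a minimal sketch (assuming the RedissonClient from the bean above, and a hypothetical cached key "someKey" used only for illustration) is to read the remaining TTL through Redisson's RMapCache API rather than the Redis CLI:
// Per-entry TTL lives in Redisson's metadata, not in the Redis key TTL,
// so check it via the map cache rather than the TTL command.
RMapCache<Object, Object> cache = redissonClient.getMapCache("employeesCache");
long ttlMillis = cache.remainingTimeToLive("someKey"); // "someKey" is hypothetical
System.out.println("remaining TTL in ms: " + ttlMillis);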

Related

I am not able to connect to Google Cloud Memory Store from Spring Boot

I am developing a module with Spring Boot in my backend where I need to use Redis through GCP Memorystore. I have been searching forums and even the official documentation about Memorystore, but I cannot understand how to connect to Memorystore from my Spring Boot app.
I found a Google codelab, but they use a Compute Engine VM to install Spring Boot and then save and retrieve information from Memorystore. So I tried to do it like that with my local Spring Boot app, but it didn't work; it throws an error saying:
Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to 10.1.3.4
The codelab I mentioned earlier says you only have to add this line to your application.properties:
spring.redis.host=10.1.3.4
as well as the @EnableCaching annotation in the main class and the @Cacheable annotation on the controller method where you work with Redis.
The method looks like this:
@RequestMapping("/hello/{name}")
@Cacheable("hello")
public String hello(@PathVariable String name) throws InterruptedException {
    Thread.sleep(5000);
    return "Hello " + name;
}
I don't know what else to do. Note that I am new to Redis and Memorystore.
Can anyone give me some guidance on this, please?
Thanks in advance.
Codelab URL: https://codelabs.developers.google.com/codelabs/cloud-spring-cache-memorystore#0
See this documentation on how to set up a Memorystore Redis instance.
Included in the documentation is how you can connect to and test your Memorystore instance from different computing environments.
There's also a step-by-step guide on how Spring Boot can use Redis to cache with annotations.
Add the cache and Spring Data Redis starters to your pom.xml if you're using Maven for your project setup:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Add this configuration in your application.properties file:
spring.redis.host=<MEMORYSTORE_REDIS_IP>
# Configure default TTL, e.g., 10 minutes
spring.cache.redis.time-to-live=600000
Turn on caching capability explicitly with the @EnableCaching annotation:
@SpringBootApplication
@EnableCaching
public class DemoApplication {
    ...
}
Once you've configured Spring Boot with Redis and enabled caching, you can use the @Cacheable annotation to cache return values:
@Service
class OrderService {
    private final OrderRepository orderRepository;

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Cacheable("order")
    public Order getOrder(Long id) {
        // findById returns an Optional, so unwrap it (or handle absence as you prefer)
        return orderRepository.findById(id).orElse(null);
    }
}

Unable to Evict Cache every 60 seconds using TTL config in Hazelcast

@Bean
public Config cacheConfig() {
    Map<String, MapConfig> mapConfigs = new HashMap<>();

    EvictionConfig evictionConfig = new EvictionConfig();
    evictionConfig.setEvictionPolicy(EvictionPolicy.LFU);
    evictionConfig.setSize(10);

    MapConfig widgetMapConfig = new MapConfig();
    widgetMapConfig.setBackupCount(1);
    widgetMapConfig.setName("widget");
    widgetMapConfig.setMaxIdleSeconds(60);
    widgetMapConfig.setTimeToLiveSeconds(60);
    widgetMapConfig.setInMemoryFormat(InMemoryFormat.BINARY);
    widgetMapConfig.setEvictionConfig(evictionConfig);

    // Note: the key should match the configured map name ("widget"),
    // otherwise the config won't apply to that map
    mapConfigs.put("widget", widgetMapConfig);

    Config programmaticConfig = new Config();
    programmaticConfig.setMapConfigs(mapConfigs);
    return programmaticConfig;
}
Above is the config, and the cache is not getting evicted when it hasn't been used for 60 seconds. Can someone help me with this? I want to evict based on the TTL config.
I changed the core Hazelcast module version from 4.0.2 to 4.2.4 and it started working for me. I would suggest that anyone integrating Hazelcast with Spring Boot check the release dates and choose versions accordingly, once you know your configuration matches what the official Hazelcast documentation suggests.
<!-- https://mvnrepository.com/artifact/com.hazelcast/hazelcast -->
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>4.2.4</version>
</dependency>
The version number should not have any implication on eviction. Once entries are expired, they may remain in the JVM until they are garbage collected. The best way to check whether entries are correctly expired and evicted is either to call map.size() or to use Management Center.
Do note that map.size() is a very expensive distributed operation.
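For example, a minimal sketch of that check (assuming a HazelcastInstance named hazelcastInstance and the "widget" map from the question):
// size() is a distributed operation that reflects expired entries being
// purged, so it's a reasonable spot check; just don't call it in a hot path.
IMap<String, Object> widgets = hazelcastInstance.getMap("widget");
System.out.println("live entries: " + widgets.size());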

Spring cloud bus with AWS Kinesis stream @RefreshScope

I read everywhere that @RefreshScope for Cloud Bus applications works with RabbitMQ and Kafka. But in my case I am using the AWS Parameter Store. I want all my client instances to be refreshed automatically, without rebuilding servers in the AWS Console.
I created an AWS EventBridge rule from Parameter Store to notify a Kinesis stream, but I cannot figure out how to make it notify all my client nodes, instead of the load balancer refreshing only one node (instance).
Thank you for responding in advance.
I've never worked with AWS EventBridge / Kinesis, however:
@RefreshScope is something that belongs to Spring Cloud, not AWS.
More precisely, beans defined with this scope will be reloaded by Spring "dynamically", without reloading the whole application context, when configuration changes in the Spring Cloud Config service. Usually this means you don't have to restart the application.
Now, a Spring Boot microservice should be deployed with the actuator, which exposes the refresh endpoint. Calling this endpoint manually will cause all the @RefreshScope beans to reload.
Here is the source code of the RefreshEndpoint:
@Endpoint(id = "refresh")
public class RefreshEndpoint {

    private ContextRefresher contextRefresher;

    public RefreshEndpoint(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    @WriteOperation
    public Collection<String> refresh() {
        Set<String> keys = this.contextRefresher.refresh();
        return keys;
    }
}
As you see, it merely invokes contextRefresher.refresh(). ContextRefresher is a bean that you can inject into your own code that listens for changes coming from the AWS Parameter store (that code should invoke it directly, or maybe send some message that you could consume, or something similar).
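A minimal sketch of that idea, assuming a hypothetical onParameterChange() hook that your EventBridge/Kinesis plumbing invokes (the class and hook name are invented for illustration):
@Component
public class ParameterStoreRefreshListener {

    private final ContextRefresher contextRefresher;

    public ParameterStoreRefreshListener(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    // Call this from whatever consumes your Parameter Store change events
    public void onParameterChange() {
        // Re-binds @RefreshScope beans; returns the property keys that changed
        Set<String> changedKeys = contextRefresher.refresh();
    }
}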
If you're using spring-cloud-bus (disclaimer: I've never worked with it), it exposes the bus-refresh endpoint as well (a pretty similar mechanism to what I've described); read the spring-cloud-bus documentation for more details.
Thank you, team, for sharing the info.
Here is what I did to make it work. I added these two libraries to my project:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kinesis</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-bus</artifactId>
</dependency>
And added these two entries to bootstrap.properties:
cloud.aws.region.static=us-east-1
cloud.aws.stack.auto=false
And I'm refreshing using the /bus-refresh endpoint.

How to pass additional parameters to MariaDB connect string to fix timezone issue (e.g. useLegacyDatetimeCode)

I'm deploying a Spring Boot app to the Swisscom AppCloud, which uses a MariaDB service. The service gets automatically configured in my app using the Cloud Foundry connectors, and the connection works fine.
However: since I heavily use ZonedDateTime objects in my Java code, I also included the following in the pom.xml...
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-java8</artifactId>
</dependency>
...to properly preserve the ZonedDateTimes in the database.
This works fine on my local MariaDB when I add...
...?useLegacyDatetimeCode=false
...to the connect string (as described here: https://moelholm.com/2016/11/09/spring-boot-controlling-timezones-with-hibernate/ -> "BONUS TIP: Getting the Hibernate configuration to work with MariaDB / MySQL").
How can I add this flag (and maybe others, too) to the connection to the MariaDB service on Swisscom AppCloud?
If you use Spring on Cloud Foundry in conjunction with a MariaDB binding, the datasource is automatically configured by this mechanism: https://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html
This is powered by the Spring Cloud Connectors project, which you can customize to your needs.
I did not test it, but you should be able to set the driver properties as follows:
// In a configuration class extending AbstractCloudConfig,
// which provides the connectionFactory() helper
@Bean
public DataSource dataSource() {
    PoolConfig poolConfig = new PoolConfig(5, 30, 3000);
    ConnectionConfig connConfig = new ConnectionConfig("useLegacyDatetimeCode=false");
    DataSourceConfig dbConfig = new DataSourceConfig(poolConfig, connConfig);
    return connectionFactory().dataSource(dbConfig);
}

Spring Boot / Spring Data import.sql doesn't run in Spring-Boot-1.0.0.RC1

I've been following the development of Spring Boot, and sometime between the initial version 0.0.5-BUILD-SNAPSHOT and the version I'm currently using, 1.0.0.RC1, my import.sql script stopped running.
Here is my configuration for the LocalContainerEntityManagerFactoryBean and JpaVendorAdapter:
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(
        DataSource dataSource, JpaVendorAdapter jpaVendorAdapter) {
    LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
    lef.setDataSource(dataSource);
    lef.setJpaVendorAdapter(jpaVendorAdapter);
    lef.setPackagesToScan("foo.*");
    return lef;
}

@Bean
public JpaVendorAdapter jpaVendorAdapter() {
    HibernateJpaVendorAdapter hibernateJpaVendorAdapter = new HibernateJpaVendorAdapter();
    hibernateJpaVendorAdapter.setShowSql(true);
    hibernateJpaVendorAdapter.setGenerateDdl(true);
    hibernateJpaVendorAdapter.setDatabase(Database.POSTGRESQL);
    return hibernateJpaVendorAdapter;
}
Interestingly, hibernate.hbm2ddl.auto still seems to run, which I think is part of the definition of my SpringBootServletInitializer:
@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application extends SpringBootServletInitializer {
However, I also noticed that the generated tables no longer have underscores and have changed their shape.
That could be the result of updating my org.postgresql version, like so:
Previously:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.2-1004-jdbc41</version>
</dependency>
Now:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.3-1100-jdbc41</version>
</dependency>
I also had to change pggetserialsequence to pg_get_serial_sequence to get the script to run at all from pgAdmin.
I guess I'm confused about what's going on, but most importantly I want to get back to having my import.sql run.
I have been following the sample project: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples/spring-boot-sample-data-jpa
And their import.sql isn't running either on 1.0.0-BUILD-SNAPSHOT.
The import.sql script is a Hibernate feature, I think (not Spring or Spring Boot). It must be running in the sample, otherwise the tests would fail, but in any case it only runs if ddl-auto is set to create the tables. With Spring Boot you should ensure that spring.jpa.hibernate.ddl-auto is set to "create" or "create-drop" (the latter is the default in Boot for an embedded database, but not for others, e.g. Postgres).
If you want to run a SQL script unconditionally, Spring Boot will by default run one independently of the Hibernate settings if you put it in classpath:schema.sql (or classpath:schema-<platform>.sql, where <platform> is "postgres" in your case).
I think you can probably delete the JpaVendorAdapter and also the LocalContainerEntityManagerFactoryBean (unless you are using persistence.xml) and let Boot take control. The packages to scan can be set using an @EntityScan annotation (new in Spring Boot), as sketched below.
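For instance, a minimal sketch of that simplification (the "foo" package is carried over from the question's packagesToScan):
@Configuration
@ComponentScan
@EnableAutoConfiguration
@EntityScan("foo")
public class Application extends SpringBootServletInitializer {
    // no explicit LocalContainerEntityManagerFactoryBean or JpaVendorAdapter needed;
    // Boot auto-configures them
}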
The default table naming scheme was changed in Boot 1.0.0.RC1 (so it's nothing to do with your Postgres dependency). I'm not sure that will still be the case in RC2, but in any case you can go back to the old Hibernate defaults by setting spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy.
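Putting those two properties together in application.properties might look like this (a sketch; adjust the ddl-auto value to your needs):
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy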
Hey, I came across a similar issue. My SQL script was not getting invoked initially. Then I tried renaming the file from "import.sql" to "schema.sql", and it worked. Maybe give this a shot. My code can be found here: https://github.com/sidnan/spring-batch-example
In addition to what was already said, it's worth noting you can use the data.sql file to import/initialize data into your tables. Just put your data.sql in the root of the classpath (e.g., if you're running a Spring Boot app, put it in the src/main/resources path).
As was said before, use it together with the property ddl-auto=create-drop, so that it won't crash trying to insert existing data.
You can also set which specific file to execute using the spring.datasource.data property, as in the sketch below. Check out more info here: http://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html
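For example, in application.properties (a sketch with a hypothetical script name):
# my-seed-data.sql is a hypothetical script name, used only for illustration
spring.datasource.data=classpath:my-seed-data.sql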
Note: the schema.sql mentioned before would contain the whole DB definition. If you want to use this, ensure that Hibernate doesn't try to construct the DB for you based on the Java entities in your project. This is what the doc says:
If you want to use the schema.sql initialization in a JPA app (with Hibernate), then ddl-auto=create-drop will lead to errors if Hibernate tries to create the same tables. To avoid those errors, set ddl-auto explicitly to "" (preferable) or "none".
