Dynamically extend TTL in Redis cache - Spring Boot

I have been trying to learn about Spring Boot with Redis and I came across this excellent article which pretty much explains everything. But I have been trying to find out whether it is possible to dynamically reset the TTL of a cache entry by putting the entry back into the cache every time it is fetched. For example, if the TTL for my cache entry is 1 hour, can I keep extending it and prevent its eviction for as long as it is being actively accessed?
@Service
public class ControlledCacheService {

    @Cacheable(cacheNames = "myControlledCache")
    public String getFromCache() {
        return null;
    }

    @CachePut(cacheNames = "myControlledCache")
    public String populateCache(String value) {
        return value;
    }
}
...
@Autowired
ControlledCacheService controlledCacheService;

private String getFromControlledCache() {
    String fromCache = controlledCacheService.getFromCache();
    if (fromCache == null) {
        log.info("Oups - Cache was empty. Going to populate it");
        String myValue = "valueToPutInCache";
        String newValue = controlledCacheService.populateCache(myValue);
        log.info("Populated Cache with: {}", newValue);
        return newValue;
    }
    log.info("Returning from Cache: {}", fromCache);
    controlledCacheService.populateCache(fromCache); // will calling this reset the TTL?
    return fromCache;
}
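For reference, the 1-hour TTL assumed above would typically be configured with Spring Data Redis along these lines (a minimal sketch; the configuration class and bean names are illustrative). With such a setup, every cache write, including one issued by @CachePut, stores the entry with the configured TTL again, while a plain @Cacheable read does not touch the expiry:

import java.time.Duration;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
public class ControlledCacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Entries in every cache default to a 1-hour time to live.
        RedisCacheConfiguration oneHour = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(1));
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(oneHour)
                .build();
    }
}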

Related

Spring Cache - Clear cache only when API response is success

I am using Spring Cache @CacheEvict & @Cacheable.
Currently I am running a scheduler every hour to clear the cache, so that the next time fetchUser() is called it fetches data from the external API and adds it to the cache.

@Scheduled(cron = "0 0 * * * *")
@CacheEvict(value = "some-unique-value", allEntries = true)
public void clearUserCache() {
    log.info("Cache cleared");
}

@Cacheable(value = "some-unique-value", unless = "#result.isFailure()")
@Override
public Result<UserResponse> fetchUser() {
    try {
        UserResponse userResponse = api.fetchUserDetail();
        return Result.success(userResponse);
    } catch (Exception e) {
        return Result.failure(INTERNAL_SERVER_ERROR);
    }
}
Now what we need is to clear the cache only when the User API call is a success. Is there a way to do that?
As it stands, the cache is cleared on a schedule, and if the external API call then fails, the main API will return an error response. In that case I should be able to keep using the existing cache.
If I understood correctly, why don't you call clearUserCache() as a normal method from the caller, after checking whether the API call succeeded?
With your code, something along the lines of
// we just leave @Scheduled here as you need it
@Scheduled(cron = "0 0 * * * *")
@CacheEvict(value = "some-unique-value", allEntries = true)
public void clearUserCache() {
    log.info("Cache cleared");
}

@Cacheable(value = "some-unique-value", unless = "#result.isFailure()")
@Override
public Result<UserResponse> fetchUser() {
    try {
        UserResponse userResponse = api.fetchUserDetail();
        return Result.success(userResponse);
    } catch (Exception e) {
        return Result.failure(INTERNAL_SERVER_ERROR);
    }
}

public void parentMethod() {
    Result<UserResponse> userResult = this.fetchUser();
    if (userResult.isFailure()) {
        this.clearUserCache();
    }
}
This way, if any exception is thrown the call returns with a failure status and you are able to check it. So the cache will be cleared either every hour or whenever the call failed.
The next time around, since it was a failure and there is no cache, it will try again.
I didn't find any direct implementation, but with a workaround I was able to do it.
Use Case
The User API response should be refreshed only when the next service call that uses the User API is triggered; it should not be refreshed by the scheduler, because we need to pass the header information coming in from the external system on to the User API as well.
The cache must be cleared only when the User API response is a success.
Steps:
Added a flag in the scheduler; it is turned ON at the scheduled time and OFF when the cache is updated.
This flag is used in the UserService class to check whether the scheduler has been triggered.
If not, use the cache. If it has, trigger the User API call and check the response; if it succeeded, trigger the @CacheEvict method and update the cache.
Sample Code:
SchedulerConfig
private boolean updateUserCache;

@Scheduled(cron = "0 0 * * * *") // runs every hour
public void userScheduler() {
    updateUserCache = true;
    log.info("Scheduler triggered for User");
}

@CacheEvict(value = "USER_CACHE", allEntries = true)
public void clearUserCache() {
    updateUserCache = false;
    log.info("User cache cleared");
}

public boolean isUserCacheUpdateRequired() {
    return updateUserCache;
}
UserService
UserResponse userResponse = null;
if (schedulerConfig.isUserCacheUpdateRequired()) {
    userResponse = userCache.fetchUserDetail();
    if (userResponse != null) {
        // clears the cache; userResponse is stored in the cache automatically when getUserDetail is called below
        schedulerConfig.clearUserCache();
    }
}
return userCache.getUserDetail(userResponse);
UserCache
#Cacheable(value = "USER_CACHE", key = "#root.targetClass", unless = "#result.isFailure()")
public Result<User> getUserDetail(UserResponse userResponse) {
try {
if (userResponse == null) { // handle first time trigger when cache is not available
userResponse = fetchUserDetail(); // actual API call
}
return Result.success(mapToUser(userResponse));
} catch (Exception e) {
return Result.failure("Error Response");
}
}
Note:
Result is a custom wrapper; think of it as an object with success/failure attributes.
I had to put the @Cacheable method in a separate bean, because Spring caching only works through proxy objects. If I keep getUserDetail inside UserService and call it directly, the call is not intercepted by the proxy, the cache logic does not run, and the API call is triggered every time (see the sketch below).
Most important: This is not the best solution and has scope for improvement.
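To illustrate the proxy point, here is a minimal sketch of the calling side (UserService and getUser are illustrative names; UserCache and getUserDetail are from the snippets above). Injecting the separate bean means every call crosses the Spring proxy, so @Cacheable is honored:

import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserCache userCache; // injected Spring proxy around the @Cacheable bean

    public UserService(UserCache userCache) {
        this.userCache = userCache;
    }

    public Result<User> getUser(UserResponse userResponse) {
        // This call goes through the proxy, so the result is cached.
        // Calling a @Cacheable method on `this` from inside the same class would bypass
        // the proxy and hit the API every time.
        return userCache.getUserDetail(userResponse);
    }
}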

Orchard use ICacheManager to cache authenticated user data

I am using Orchard and would like to cache data specific to an authenticated user.
When a new user logs in, or after a period of time, the database should be queried again.
I've accomplished half of this below (after 30 minutes it will query the database again):
private UserData SomeUserSpecificData()
{
    var data = _cacheManager.Get("userdata",
        ctx => {
            ctx.Monitor(_clock.When(TimeSpan.FromMinutes(30)));
            return GetDatabaseData();
        });
    return data;
}
But how would I force the cache to re-query the database when a new user has logged in?
I have a feeling it might involve ISignals, but I'm not sure how to implement this.
Thanks.
You're absolutely right with ISignals!
You just need to inject ISignals and use it like this:
private readonly Orchard.Caching.ISignals _signals;

private UserData SomeUserSpecificData()
{
    var data = _cacheManager.Get("userdata",
        ctx => {
            ctx.Monitor(_clock.When(TimeSpan.FromMinutes(30)));
            ctx.Monitor(_signals.When("SOMETHING_HAPPENED_FOR_xyz"));
            return GetDatabaseData();
        });
    return data;
}
What's also nice to know: You can monitor as many signals as you want.
To keep things clean I'll typically create a static class like this:
public static class CacheSignals
{
    public const string SomethingHappened = "SOMETHING_HAPPENED";
    public static string SomethingHappenedForUser(int userId) => string.Format("SOMETHING_HAPPENED_FOR_{0}", userId);
}
Now you can simply implement a custom event handler for Orchard.Users.Events.IUserEventHandler
and invalidate the cache on LoggedIn:
private readonly Orchard.Caching.ISignals _signals;

public void LoggedIn(Security.IUser user)
{
    _signals.Trigger(CacheSignals.SomethingHappenedForUser(user.Id));
}

ServiceStack caching strategy

I'm learning ServiceStack and have a question about how to use the [Route] tag with caching. Here's my code:
[Route("/applicationusers")]
[Route("/applicationusers/{Id}")]
public class ApplicationUsers : IReturn<ApplicationUserResponse>
{
    public int Id { get; set; }
}

public object Get(ApplicationUsers request)
{
    //var cacheKey = UrnId.Create<ApplicationUsers>("users");
    //return RequestContext.ToOptimizedResultUsingCache(base.Cache, cacheKey, () =>
    return new ApplicationUserResponse
    {
        ApplicationUsers = (request.Id == 0)
            ? Db.Select<ApplicationUser>()
            : Db.Select<ApplicationUser>("Id = {0}", request.Id)
    };
}
What I want is for the "ApplicationUsers" collection to be cached, and, when I pass in an Id, for the individual object to be pulled out of that main cached collection.
If I uncomment the code above, the main collection is cached under the "users" key, but any specific query I submit hits the Db again. Am I just thinking about the cache wrong?
Thanks in advance,
Mike
This line:
var cacheKey = UrnId.Create<ApplicationUsers>("users");
is creating the same cache key for every request; you need to use some of the request parameters to build a unique key for each different response.
var cacheKey = UrnId.Create<ApplicationUsers>(request.Id.ToString());
This will give you the "urn:ApplicationUsers:0" key for the get-all request and "urn:ApplicationUsers:9" for a request with Id = 9.
Now you can use the extension method this way:
return RequestContext.ToOptimizedResultUsingCache(Cache, cacheKey, () => {
    if (request.Id == 0) return GetAll();
    else return GetOne(request.Id);
});
I hope this helps, regards.

Hibernate criteria queries - Query Conditions

I am using Spring 3.0 and the jqGrid plugin. I am working on the search feature, which sends a JSON string with all the search criteria. Here is what the string can look like:
{"groupOp":"AND","rules":[{"field":"firstName","op":"bw","data":"John"},{"field":"lastName","op":"cn","data":"Doe"},{"field":"gender","op":"eq","data":"Male"}]}
If you look at the "op" property inside the rules array, you will see the operation which must be executed. The Jq-grid has the following operations
['eq','ne','lt','le','gt','ge','bw','bn','in','ni','ew','en','cn','nc']
which correspond to
['equal','not equal', 'less', 'less or equal','greater','greater or equal', 'begins with','does not begin with','is in','is not in','ends with','does not end with','contains','does not contain']
I plan to use Hibernate criteria searching to enable the search feature. For this I am using Jackson's ObjectMapper to convert the incoming JSON into Java. This is all well and good. Here is my code that converts the JSON:
public class JsonJqgridSearchModel {
    public String groupOp;
    public ArrayList<JqgridSearchCriteria> rules;
}

public class JqgridSearchCriteria {
    public String field;
    public String op;
    public String data;

    public SimpleExpression getRestriction() {
        if (op.equals("cn")) {
            return Restrictions.like(field, data);
        } else if (op.equals("eq")) {
            return Restrictions.eq(field, data);
        } else if (op.equals("ne")) {
            return Restrictions.ne(field, data);
        } else if (op.equals("lt")) {
            return Restrictions.lt(field, data);
        } else if (op.equals("le")) {
            return Restrictions.le(field, data);
        } else if (op.equals("gt")) {
            return Restrictions.gt(field, data);
        } else if (op.equals("ge")) {
            return Restrictions.ge(field, data);
        } else {
            return null;
        }
    }
}
@RequestMapping(value = "studentjsondata", method = RequestMethod.GET)
public @ResponseBody String studentjsondata(@RequestParam("_search") Boolean search, HttpServletRequest httpServletRequest) {
    StringBuilder sb = new StringBuilder();
    Format formatter = new SimpleDateFormat("MMMM dd, yyyy");
    if (search) {
        ObjectMapper mapper = new ObjectMapper();
        try {
            JsonJqgridSearchModel searchModel = mapper.readValue(httpServletRequest.getParameter("filters"), JsonJqgridSearchModel.class);
            SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
            Session session = sessionFactory.openSession();
            session.beginTransaction();
            Criteria criteria = session.createCriteria(Person.class);
            Iterator<JqgridSearchCriteria> iterator = searchModel.rules.iterator();
            while (iterator.hasNext()) {
                System.out.println("before");
                criteria.add(iterator.next().getRestriction());
                System.out.println("after");
            }
        } catch (JsonParseException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    } else {
        // do other stuff here
    }
This is where the problem comes in. How do I translate the jqGrid operation into the equivalent Hibernate restriction? For example,
"cn" should correspond with
criteria.add(Restrictions.like("firstName", myJsonJqgridSearchModel.data));
Interestingly, I've just written almost identical code to what you have above (mine doesn't use JqGrid however). I'm wondering if your problem is specifically related to the "cn" - LIKE condition? I had problems with this - I had to specify the MatchMode to get the "contains" like I wanted:
return Restrictions.ilike(
searchCriterion.getPropertyName(),
searchCriterion.getValue().toString(),
MatchMode.ANYWHERE);
I found that without specifying the MatchMode, it was generating SQL as:
WHERE property LIKE 'value'
By specifying the MatchMode.ANYWHERE, it generated SQL as:
WHERE property LIKE '%value%'
which is the "contains" operation that I was expecting. Perhaps this is your issue as well?

Play Framework: Image Display question

ref:
http://www.lunatech-research.com/playframework-file-upload-blob
I'm uneasy about one point in this example
#{list items:models.User.findAll(), as:'user'}
<img src="#{userPhoto(user.id)}">
#{/list}
At this point I'm already holding the user object (including the image blob). Yet the userPhoto() method makes another dip into the backend to get the image, user.photo:
public static void userPhoto(long id) {
    final User user = User.findById(id);
    notFoundIfNull(user);
    response.setContentTypeIfNotSet(user.photo.type());
    renderBinary(user.photo.get());
}
Any way to avoid this unnecessary findById call?
You're not actually holding the user object any more though, because the userPhoto action is invoked in a separate request that's sent when the browser tries to load the image from the URL generated by #{userPhoto(user.id)}.
Of course, you could use the cache to store data from each user's photo Blob, which would reduce the likelihood that you had to go to the database on the image request. It's more trouble than it's worth in this case though since you're just doing a simple primary key lookup for the user object, and that should be relatively inexpensive. Plus Blobs aren't serializable, so you have to pull out each piece of information separately.
Still, if you were to try that it might look something like this:
// The action that renders your list of images
public static void index() {
    List<User> users = User.findAll();
    for (User user : users) {
        cachePhoto(user.id, user.photo);
    }
    render(users);
}

// The action that returns the image data to display
public static void userPhoto(long id) {
    InputStream photoStream;
    String path = (String) Cache.get("image_path_user_" + id);
    String type = (String) Cache.get("image_type_user_" + id);
    // Was the data we needed in the cache?
    if (path == null || type == null) {
        // No, we'll have to go to the database anyway
        User user = User.findById(id);
        notFoundIfNull(user);
        cachePhoto(user.id, user.photo);
        photoStream = user.photo.get();
        type = user.photo.type();
    } else {
        // Yes, just generate the stream directly
        try {
            photoStream = new FileInputStream(new File(path));
        } catch (Exception ex) {
            throw new UnexpectedException(ex);
        }
    }
    response.setContentTypeIfNotSet(type);
    renderBinary(photoStream);
}

// Convenience method for caching the photo information
private static void cachePhoto(long userId, Blob photo) {
    if (photo == null) {
        return;
    }
    Cache.set("image_path_user_" + userId, photo.getFile().getAbsolutePath());
    Cache.set("image_type_user_" + userId, photo.type());
}
Then you'd still have to worry about appropriately populating/invalidating the cache in your add, update, and delete actions too. Otherwise your cache would be polluted with stale data.
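On that last point, invalidation can be as simple as deleting the two cache keys whenever a user's photo is added, changed, or removed. A minimal sketch, assuming the same key scheme as above (evictPhotoCache is a hypothetical helper you would call from those actions):

// Hypothetical helper: evict the cached photo info so the next image request repopulates it.
private static void evictPhotoCache(long userId) {
    Cache.delete("image_path_user_" + userId);
    Cache.delete("image_type_user_" + userId);
}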
