How to use Cache.getOrElse(java.lang.String key, java.util.concurrent.Callable<T> block, int expiration) - caching

How to use Cache.getOrElse(java.lang.String key, java.util.concurrent.Callable block, int expiration)
Could someone give me an example?
My question is mainly how to use "expiration"; I know it means the expiry time.
By the way:
I want to save some objects to the cache and set an expiry time.
When the time expires, I can put the object back into the cache.
Thanks.

Let's assume you want to store a User object in the cache, using userId as the key and the user object as the value. If you need to set an expiration time, for this example let's use 30 seconds:
cache.set(userId, userObject, 30);
At some point, when you want to get back the user object you stored earlier under userId, you can fetch it from the cache like this:
User user = cache.get(userId);
The above returns the user object if you access it within 30 seconds; otherwise it returns null. This is a good fit for cases like validating a session.
When you frequently need to retrieve a value from the cache, the following is the best approach:
User user = cache.getOrElse(userId, () -> User.get(userId), 30);
The cache checks whether it already holds an entry for the given userId; if so, it returns the cached user object straight away.
If there is no entry for the given userId, the Callable block is invoked, and the user object it fetches from the database is stored under userId with an expiration of 30 seconds.
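For completeness, here is a minimal sketch that pulls these calls together in a controller, assuming Play 2.x's static play.cache.Cache helper (as in the question title) and a hypothetical User.get(userId) database lookup:

import play.cache.Cache;
import play.mvc.Controller;
import play.mvc.Result;

public class UserController extends Controller {

    public Result show(String userId) {
        try {
            // Return the cached user if present; otherwise run the Callable,
            // store its result under userId for 30 seconds, and return it.
            User user = Cache.getOrElse(userId, () -> User.get(userId), 30);
            return ok("Hello " + user.name);
        } catch (Exception e) {
            // getOrElse declares "throws Exception" because the Callable may throw.
            return internalServerError(e.getMessage());
        }
    }
}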

Expiration is the number of seconds the object is held in the cache. If you pass 0 as the expiration, the entry never expires and you have to remove it by hand.
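For example, with an expiration of 0 the entry stays until you evict it yourself (same static play.cache.Cache API; "site-config" and config are just placeholder names):

Cache.set("site-config", config, 0);   // 0 = never expires
// ... later, when the value becomes stale, remove it by hand:
Cache.remove("site-config");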
What getOrElse does is check the cache: if the object is not there, it calls the Callable block you pass in and adds the result to the cache for the number of seconds you pass as the expiration time.
I based my answer on the Play Framework Cache Javadoc.

I use getOrElse in controllers when I have both dynamic and static content to display. I cache the static part and then render it together with the dynamic part:
try {
    Html staticHtml = Cache.getOrElse("static-content", () -> staticView.render(), 60 * 60);
    Html rendered = dynamicPage.render(arg1, arg2, staticHtml);
    return ok(rendered);
} catch (Exception e) {
    e.printStackTrace();
    return internalServerError();
}
staticView.render() returns some HTML from a view. This view should not include any other pages that are dynamic, or you will end up caching something you do not really want to cache.
60 * 60 means I want to store it for one hour (60 seconds times 60 minutes; you can of course just write 3600).
I should add that getOrElse gets the object from the cache under the specified key (in this example the key is static-content); if it cannot find it, it calls the function, and the object the function returns is stored in the cache under that key for the specified amount of time. Pretty neat.
Then you can call some other (dynamic) page and pass the html to it.
The dynamic stuff will stay dynamic :)

Related

Count views with Session

I'm new to Laravel. I found a way here to count article views; I used it in my own project and it works as it should:
$viewed = Session::get('viewed_article', []);
if (!in_array($article->id, $viewed)) {
    $article->increment('views');
    Session::push('viewed_article', $article->id);
}
But the only thing I do not fully understand is how it works and what it does, which makes me feel a little uneasy.
If it's not too much trouble, could someone explain how this code works?
The first line:
$viewed = Session::get('viewed_article', []);
uses the Session facade to get the data stored under the key viewed_article from the session, or, if nothing exists for that key, sets $viewed to an empty array instead (the second argument is the default value).
The next line, the if statement:
if (!in_array($article->id, $viewed)) {
makes sure that the current article id is not in the $viewed array.
If this condition is true (i.e. the article is not in the array), then the views are incremented (i.e. increased by one) on the article:
$article->increment('views');
Lastly, the article id is added into the viewed_article session data, so the next time the code runs, it won't count the view again:
Session::push('viewed_article', $article->id);
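For comparison only, here is the same "count a view once per session" pattern sketched in Java servlet terms (ViewCounter, articleId and incrementViews are hypothetical names, not Laravel API):

import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpSession;
import java.util.HashSet;
import java.util.Set;

public class ViewCounter {

    // Increment the article's view counter only the first time this session sees it.
    @SuppressWarnings("unchecked")
    public void countView(HttpServletRequest request, long articleId) {
        HttpSession session = request.getSession();
        Set<Long> viewed = (Set<Long>) session.getAttribute("viewed_article");
        if (viewed == null) {
            viewed = new HashSet<>();                        // same role as the [] default above
            session.setAttribute("viewed_article", viewed);
        }
        if (viewed.add(articleId)) {                         // add() is true only for a first view
            incrementViews(articleId);                       // e.g. UPDATE articles SET views = views + 1
        }
    }

    private void incrementViews(long articleId) {
        // database update omitted in this sketch
    }
}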

Session Expiry in Vapor 4

I am developing a web application with Vapor 4. It would be useful to persist client-made data on the server side for a few minutes at a time between requests. I want to use sessions to do this. However, I am a bit unsure about the best way to automatically destroy this data after a set time. Should I create a job and have it check periodically? Or is there an easy way to set an expiry time on session creation?
I have used a bit of Middleware to achieve this for some months and it is very reliable.
It compares the timestamp now to the value from the immediate previous request. If the difference is greater than the allowed session timeout, it forces a logout.
I had to give a bit of thought to initialising the timestamp: "BAD" ensures that trying to initialise a Double returns nil, so the current timestamp is used instead to start the session 'timer'. I think this is safe, as the user can't log in without having made at least one route call beforehand, and I have other middleware that checks the user is logged in. Try this:
struct SessionTimeoutMiddleware: Middleware
{
    func respond(to request: Request, chainingTo next: Responder) -> EventLoopFuture<Response>
    {
        let lastRequestTimeStamp = Double(request.session.data["lastRequest"] ?? "BAD") ?? Date().timeIntervalSince1970
        request.session.data["lastRequest"] = String(Date().timeIntervalSince1970)
        if Date().timeIntervalSince1970 - lastRequestTimeStamp > 300.0 // seconds
        {
            request.auth.logout(User.self)
            return request.eventLoop.makeSucceededFuture(request.redirect(to: "/somewhere/safe"))
        }
        return next.respond(to: request)
    }
}
Then, register in configure.swift using:
let userAuthSessionsMW = User.authenticator()
let sessionTimeoutMW = SessionTimeoutMiddleware()
let timed = app.grouped(C.URI.Users).grouped(userAuthSessionsMW, sessionTimeoutMW)
try SecureRoutes(timed)
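If it helps to see the same idea outside Vapor, here is a rough Java servlet-filter version of the pattern (purely illustrative, not Vapor API): compare now against the previous request's timestamp stored in the session and force a logout when the gap exceeds the allowed timeout.

import jakarta.servlet.*;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import jakarta.servlet.http.HttpSession;
import java.io.IOException;

public class SessionTimeoutFilter implements Filter {
    private static final long TIMEOUT_SECONDS = 300;   // same 5 minutes as above

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession();
        long now = System.currentTimeMillis() / 1000;
        Long last = (Long) session.getAttribute("lastRequest");
        session.setAttribute("lastRequest", now);
        if (last != null && now - last > TIMEOUT_SECONDS) {
            session.invalidate();                                       // force the "logout"
            ((HttpServletResponse) res).sendRedirect("/somewhere/safe");
            return;
        }
        chain.doFilter(req, res);
    }
}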

MemoryCacheClient works differently than others - reference retained

I have a service that pulls statistics for a sales region. The service computes the stats for ALL regions and then caches that collection, then returns only the region requested.
public object Any(RegionTotals request)
{
    string cacheKey = "urn:RegionTotals";
    //make sure master list is in the cache...
    base.Request.ToOptimizedResultUsingCache<RegionTotals>(
        base.Cache, cacheKey, CacheExpiryTime.DailyLoad(), () =>
        {
            return RegionTotalsFactory.GetObject();
        });
    //then retrieve them. This is all teams
    RegionTotals tots = base.Cache.Get<RegionTotals>(cacheKey);
    //remove all except requested
    tots.Records.RemoveAll(o => o.RegionID != request.RegionID);
    return tots;
}
What I'm finding is that when I use a MemoryCacheClient (as part of a StaticAppHost that I use for Unit Tests), the line tots.Records.RemoveAll(...) actually affects the object in the cache. This means that I get the cached object, delete rows, and then the cache no longer contains all regions. Therefore, subsequent calls to this service for any other region return no records. If I use my normal Cache, of course the Cache.Get() makes a new copy of the object in the cache, and removing records from that object doesn't affect the cache.
This is because an in-memory cache doesn't add any serialization overhead and just stores your object instances in memory. When you use any of the other caching providers, your values are serialized first and sent to the remote caching provider; when a value is retrieved, it is deserialized back, so the same object instance is never reused.
If you plan on mutating cached values, you'll need to clone the instances before mutating them. If you don't want to implement ICloneable manually, you can serialize and deserialize them with:
var clone = TypeSerializer.Clone(obj);
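The same reference-retention behaviour is easy to reproduce in plain Java (this is only an illustration of the point above, not ServiceStack code): an in-memory cache hands back the same instance it stores, so mutating the returned value mutates the cached entry, while cloning first leaves the entry intact.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReferenceRetentionDemo {
    public static void main(String[] args) {
        Map<String, List<String>> memoryCache = new HashMap<>();
        memoryCache.put("urn:RegionTotals", new ArrayList<>(List.of("North", "South", "East")));

        // "Get" from the in-memory cache: this is the SAME list instance, not a copy.
        List<String> totals = memoryCache.get("urn:RegionTotals");
        totals.removeIf(region -> !region.equals("North"));      // mutate the "result"

        // The cached entry was mutated too, so later callers see only one region.
        System.out.println(memoryCache.get("urn:RegionTotals")); // prints [North]

        // Fix: clone before mutating, e.g. new ArrayList<>(cachedList),
        // so the cached entry stays complete.
    }
}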

StackExchange.Redis LockTake & Lock Release

I am using the following code for Redis lock take and release:
var key = "test-x";
RedisValue token = (RedisValue) Guid.NewGuid().ToString();
if (db.LockTake(key, token, duration)) {
    try {
        // you have the lock, do work
    } finally {
        db.LockRelease(key, token);
    }
}
My problem:
In a unit test I am calling this method twice. The first call always works, but when I try to obtain the lock on this specific key the second time, it does not work. From my understanding, db.LockRelease should release the lock, making it available for the second request. I did notice that db.LockRelease returns false.
Any idea what might be happening?
The lock key needs to be unique. You are probably using the same lock key as the cache key in your code. From https://stackoverflow.com/a/25138164:
the key (the unique name of the lock in the database)
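For reference, the take/release pattern behind these calls is the standard Redis one: acquire with SET NX plus a random token, and release only if the key still holds your token, which is why a release against a key holding some other value reports false. A rough Java/Jedis sketch of that pattern (an illustration of the idea, not StackExchange.Redis internals):

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockSketch {
    // Delete the key only if it still holds our token.
    private static final String RELEASE_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";

    public static void main(String[] args) {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            String lockKey = "lock:test-x";                 // keep this distinct from any cache key
            String token = UUID.randomUUID().toString();

            // "LockTake": succeeds only if the key does not already exist.
            boolean taken = "OK".equals(redis.set(lockKey, token, SetParams.setParams().nx().ex(30)));
            if (taken) {
                try {
                    // ... do work while holding the lock ...
                } finally {
                    // "LockRelease": the script returns 0 (false) if the token no longer matches.
                    redis.eval(RELEASE_SCRIPT, Collections.singletonList(lockKey),
                               Collections.singletonList(token));
                }
            }
        }
    }
}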

How to cache IQueryable result for paging

What is the best way to cache an IQueryable result when every call needs to do a lot of calculation before returning the result to the client?
Code Sample
[Queryable]
public IQueryable<Car> Get()
{
    var result = GetCarList();
    //GetCarList() calculation is taking around 1 min
    return result.AsQueryable();
}
private List<Car> GetCarList()
{
    var query = from car in db.CarDetail
                where car.color == "white"
                select car;
    //10k records of white cars are selected without considering makers
    //white is mandatory
    var cars = query.ToList();
    foreach (var car in cars)
    {
        //Processing each record in every call
    }
    return cars;
}
Query sample
First Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10
Second Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=10
Third Page
localhost/api/Car?$filter=(make eq 'ford')&$orderby=carid desc&$top=10&$skip=20
Each call takes around 1 minute, even though the calculation is the same for the current filter. What is the best way to cache this kind of API call?
As the OP explains in his comment, the object to cache is the list returned by the call to GetCarList(); and the result is always the same.
You can simply store this in Cache, see docs: Cache Class.
When you need it, check whether it's in the cache. If not, create it and store it in the cache before using it (anywhere you want to use it). As the Cache is thread safe, you will not have concurrency problems when accessing it from different requests.
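A minimal sketch of that cache-aside idea expressed in Java (the question itself is ASP.NET Web API; Car and CarListFactory are hypothetical stand-ins): compute the expensive list once, keep it for a fixed time, and let every page request filter, sort and skip the cached copy.

import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class CarListCache {
    private static final Duration TTL = Duration.ofHours(1);
    private static List<Car> cached;                 // the expensive result
    private static Instant loadedAt = Instant.MIN;

    // synchronized so only one caller recomputes when the entry is missing or stale
    public static synchronized List<Car> get() {
        if (cached == null || Instant.now().isAfter(loadedAt.plus(TTL))) {
            cached = CarListFactory.loadWhiteCars(); // the ~1 minute computation
            loadedAt = Instant.now();
        }
        return cached;
    }
}

// Paging then works against the cached list, e.g.:
// List<Car> page = CarListCache.get().stream()
//         .filter(c -> "ford".equals(c.getMake()))
//         .sorted(Comparator.comparing(Car::getId).reversed())
//         .skip(10).limit(10)
//         .collect(Collectors.toList());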
