Cache eviction at particular time of the day and not through TTL - spring

Please let me know if cache eviction can be done at a particular time of the day instead of via TTL. I am using the Spring framework, so if any API provides this feature I can use it by plugging it into Spring.
I searched for similar questions but failed to find any prior question.
If a similar question has already been asked, please let me know the link.
Thanks, Amitabh

According to GemFire docs:
You configure for eviction based on entry count, percentage of
available heap, and absolute memory usage. You also configure what to
do when you need to evict: destroy entries or overflow them to disk.
See Persistence and Overflow.
http://gemfire.docs.pivotal.io/latest/userguide/index.html#developing/eviction/configuring_data_eviction.html
But you may be able to get something closer to what you need through Custom expiration. Please check the following link:
http://gemfire.docs.pivotal.io/latest/userguide/index.html#developing/expiration/configuring_data_expiration.html

Ehcache expiration does not offer such a feature out of the box.
You still have some options:
Configure the TTL when creating the Element, using a computed value.
Use refresh-ahead, or even better, scheduled refresh-ahead.
Have a look at the following question. Note that this may not work with all configurations, as sometimes the Element gets re-created internally.
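The first option can be sketched in plain Java: compute the number of seconds until the next occurrence of the desired wall-clock time and use that as the Element's TTL (via Element.setTimeToLive(int) in Ehcache 2.x). The 02:00 eviction time below is just an illustrative assumption.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

public class EvictionTimeTtl {

    // Seconds from 'now' until the next occurrence of 'evictionTime'.
    // The result is what you would pass to Element.setTimeToLive(int)
    // when putting the Element into the cache.
    static int secondsUntil(LocalDateTime now, LocalTime evictionTime) {
        LocalDateTime next = now.toLocalDate().atTime(evictionTime);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // today's slot already passed, use tomorrow's
        }
        return (int) Duration.between(now, next).getSeconds();
    }

    public static void main(String[] args) {
        // Example: an entry cached at 23:00 should expire at 02:00 next day.
        LocalDateTime now = LocalDateTime.of(2016, 1, 1, 23, 0);
        int ttl = secondsUntil(now, LocalTime.of(2, 0));
        System.out.println(ttl); // 3 hours = 10800 seconds
    }
}
```

Re-computing the value on every put makes all entries expire at the same time of day regardless of when they were cached.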


rate limiting and throttling in java

I need to implement a rate limiter / throttling in one of my microservices.
For example, I have one User microservice that handles login and returns user data based on role (Admin or normal user), implemented using
JWT tokens and the @Secured annotation. So my ask is to throttle based on which API is being called. And I should be able to modify the throttle limit at runtime too.
I don't want to reinvent the wheel, so any ideas, please?
Technology stack: Java, Spring Boot
The answer to this surely depends on what you mean by throttling.
If you want to throttle the data returned by an API based on role for some time, you can achieve this simply by using Spring Boot caching. You can control the cache eviction time in a Spring Boot app (even if you want to externalize the configuration).
Please have a look at https://spring.io/guides/gs/caching/. Also have a look at the demonstration at https://www.youtube.com/watch?v=nfZxXGjXVfc if required.
I am not putting in the details of how caching is done, as it is very well explained in the Spring Boot docs. You might have to tune it to your needs, but this is the first answer for your controlled throttling.
If you want to throttle the API endpoint itself, or throttle the amount of data it can serve, i.e. control the number of requests it can serve per second, then you could use RateLimiter from Guava.
Also, I managed to find another one that is probably more relevant if you are using Spring Boot: weddini/spring-boot-throttling.
It seems like the second approach fits better with what you need.
Hope it helps!
I have implemented a rate limiter based on the token-bucket algorithm. Other related technologies are Spring Boot, Spring Data Redis and Lua.
Hope it can be helpful.
https://github.com/AllstarVirgo/rateLimit
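For reference, the core of a token-bucket limiter looks roughly like the in-memory sketch below (the project linked above keeps this state in Redis and updates it atomically with a Lua script; the capacity and rate here are illustrative).

```java
public class TokenBucket {
    private final long capacity;        // max tokens the bucket can hold
    private final double refillPerNano; // tokens added per elapsed nanosecond
    private double tokens;              // current token count
    private long lastRefill;            // timestamp of the last refill

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;          // start full
        this.lastRefill = System.nanoTime();
    }

    // Try to consume one token; returns false when the caller is throttled.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```

The same two steps (refill by elapsed time, then try to take a token) are what the Redis/Lua version performs atomically on the server side.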

What is a best practice for ehcache clearStatistics() method replacement?

I am trying to find the new best practice for clearing statistics on an Ehcache cache. Previously, you could call clearStatistics() and then reset your hit/miss statistics in real time.
Somewhere between Ehcache 2.6 and 2.10, this went away. However, instead of a release where it was deprecated, with hints as to the new philosophy or a suggested implementation strategy, the method is simply gone from the API documentation: it is not shown in http://www.ehcache.org/apidocs/2.10/deprecated-list.html#method nor http://www.ehcache.org/apidocs/2.9/deprecated-list.html#method, and any previous versions are lost to refactoring on the site.
Cache.clearStatistics has been removed in Ehcache 2.7.0. This release included a large rework of the Ehcache statistics to make them low overhead and to ensure you pay the price only for the statistics you query, and only for a limited period of time.
You can't clear statistics anymore inside Ehcache. If you need that feature, you have to use an external object that can handle the baselining for your application.
You can find the API documentation for each <major>.<minor> on http://www.ehcache.org/documentation/. And for the most recent versions, you can navigate to different fix versions even though there is no explicit link.
For example, see http://www.ehcache.org/apidocs/2.9.1/index.html
Disclaimer: I work on Ehcache.
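A minimal sketch of such an external baselining object, assuming you can read cumulative hit/miss counters from the cache's statistics (the counter suppliers below are stand-ins for those reads):

```java
import java.util.function.LongSupplier;

// Reports hit/miss counts relative to the last reset, on top of
// cumulative counters that can no longer be cleared (for example,
// the counters exposed by the cache's statistics object).
public class StatisticsBaseline {
    private final LongSupplier hits;
    private final LongSupplier misses;
    private long hitBaseline;
    private long missBaseline;

    public StatisticsBaseline(LongSupplier hits, LongSupplier misses) {
        this.hits = hits;
        this.misses = misses;
        reset();
    }

    // Take the current cumulative values as the new zero point.
    public final void reset() {
        hitBaseline = hits.getAsLong();
        missBaseline = misses.getAsLong();
    }

    public long hitsSinceReset()   { return hits.getAsLong() - hitBaseline; }
    public long missesSinceReset() { return misses.getAsLong() - missBaseline; }
}
```

Calling reset() replaces the old clearStatistics() semantics without touching the cache itself.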

How does the Mule caching strategy know when the data has changed?

I'm looking at the cache scope for Mule 3.8.1 in Anypoint Studio 6.1 and wanted to know if/how the caching detects changes to the data?
I see there are Time to Live and Expiration times, which are useful so it is not checking all the time. But how can Mule caching be set up to detect changes? For example, if the data is incorrect in the database and is then fixed, I wouldn't want to wait an hour or have to redeploy the application to see the change if I can help it.
Thanks
As far as I can tell, you need to invalidate your cache yourself if you want the data to be loaded again. Since it's your data, you know when it changes and you can trigger the invalidation.
https://docs.mulesoft.com/mule-user-guide/v/3.8/cache-scope#invalidating-a-cache

Rate-Limit an API (spring MVC)

I'm looking for the best, most efficient way to implement (or use an already set up) rate limiter that would protect all my REST API URLs. The protection I'm looking for is a "calls per second per user" limiter.
I had a look on the net, and what came up was the use of either Redis or Guava's RateLimiter.
To be honest, I have never used Redis and I'm really not familiar with it. But from looking at its docs, it seems to have a quite robust rate-limiting system.
I have also had a look at Guava's RateLimiter, and it looks a bit easier to use (no need for a Redis installation, etc.).
So I would like some suggestions on what would be, in my case, the best solution. Is using Redis "too much"?
Has any of you already tried RateLimiter? Is it a good solution? Is it scalable?
PS: I am also open to solutions other than the two I mentioned if you think there are better choices.
Thank you!
If you are trying to limit access to your Spring-based REST API, you should use the token-bucket algorithm.
There is the bucket4j-spring-boot-starter project, which uses the bucket4j library to rate-limit access to the REST API. You can configure it via the application properties file. There is an option to limit access based on IP address or username.
If you are using Netflix Zuul you could use Spring Cloud Zuul RateLimit which uses different storage options: Consul, Redis, Spring Data and Bucket4j.
Guava's RateLimiter blocks the current thread, so if there is a burst of asynchronous calls against the throttled service, lots of threads will be blocked and this might exhaust the pool of free threads.
Perhaps Spring-based library Kite meets your needs. Kite's "rate-limiting throttle" rejects requests after the principal reaches a configurable limit on the number of requests in some time period. The rate limiter uses Spring Security to determine the principal involved.
But Kite is still a single-JVM approach. If you do need a cluster-aware approach Redis is a way to go.
There is no hard rule; it totally depends on your specific situation. Given that you have never used Redis, I would recommend Guava's RateLimiter. Compared to Redis, a completely new NoSQL system for you, Guava's RateLimiter is much easier to get started with. By adding a few lines of code, you are able to distribute permits at a configurable rate. What is left to do is to adapt it to fit your needs, like providing a rate limit on a per-user basis.
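The per-user adaptation can be sketched as one limiter per principal in a concurrent map; below, a simple fixed-window counter stands in for the limiter (with Guava you would store a RateLimiter.create(permitsPerSecond) per user in the map instead). The class and window handling are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// One counter per user, reset once per window (e.g. every second by a
// scheduled task). With Guava, the map value would instead be a
// RateLimiter created lazily per user.
public class PerUserRateLimiter {
    private final int permitsPerWindow;
    private final Map<String, AtomicInteger> used = new ConcurrentHashMap<>();

    public PerUserRateLimiter(int permitsPerWindow) {
        this.permitsPerWindow = permitsPerWindow;
    }

    // True while the user still has permits left in the current window.
    public boolean tryAcquire(String user) {
        AtomicInteger count = used.computeIfAbsent(user, u -> new AtomicInteger());
        return count.incrementAndGet() <= permitsPerWindow;
    }

    // Call at the start of each window to reset all counters.
    public void newWindow() {
        used.clear();
    }
}
```

A servlet filter or Spring interceptor would call tryAcquire with the authenticated principal's name and return HTTP 429 when it yields false.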

Can/Should I disable the cache expiry when backing data store is unavailable?

I've just started out with Ehcache, and it seems pretty good so far. I'm using it in a simple fashion to speed up reads against a database, but I wonder whether I can also use it to keep the application up if the database is unavailable for short periods. (Update: my context is an application with high-availability modules that only read from the database.)
It seems like I could do that by disabling expiry in the event of a database read problem, and re-enabling it when a read works again.
What do you think? Is that a reasonable approach or have I missed something? If it's a fair approach, any tips for how best to implement appreciated.
Update: Ehcache supports a dynamically configurable option to set/unset the cache as 'eternal'. This seems to do what I need.
Interesting question - usually, the answer would be "it depends".
Firstly, if you have database reliability problems, I'd invest time and energy in fixing them, rather than applying a bandaid solution.
Secondly, most applications need both reading and writing to work - it doesn't seem to make sense to keep your app up for reads only.
However, if your app has a genuine "read only" function, and there's a known and controlled reason for database down time (e.g. backups), then yes, you can use your cache to keep the application up and running while the database is down. I would do this by extending the cache periods, rather than trying to code specific edge cases. For instance, you might have a background process which checks whether the database is available and swaps in a different configuration file when there's trouble.
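The background-process idea can be sketched as below; the database probe and the way the 'eternal' flag reaches the cache (in Ehcache 2.x, something like cache.getCacheConfiguration().setEternal(flag)) are assumptions for illustration, not the asker's actual setup.

```java
import java.util.function.BooleanSupplier;

// Tracks database availability and decides whether cached entries
// should be kept forever. Wire the 'eternal' flag through to the
// cache, e.g. cache.getCacheConfiguration().setEternal(flag) in
// Ehcache 2.x, from a scheduled task that calls check().
public class CacheFailoverSwitch {
    private final BooleanSupplier databaseUp; // probe, e.g. a cheap test query
    private boolean eternal;

    public CacheFailoverSwitch(BooleanSupplier databaseUp) {
        this.databaseUp = databaseUp;
    }

    // Run periodically (e.g. every 30 seconds): when the database is
    // unreachable, hold cached entries forever; restore expiry once
    // it is back.
    public void check() {
        eternal = !databaseUp.getAsBoolean();
    }

    public boolean isEternal() {
        return eternal;
    }
}
```

Keeping the toggle in one place avoids scattering "is the database up?" edge cases through the read path.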
