Vault configuration in a Spring Boot application

I'm working on a microservice using Spring Boot. I have three questions here; answers to any or all are much appreciated. Thanks in advance.
Background: We need to read a key from Vault during application startup and save it in a variable for later use (to avoid repeated hits on Vault). There will be a TTL for this value, so the application should refresh it and pick up whatever new value is configured in Vault.
Q1: How to load the values and ensure they are loaded only once (i.e. Vault is hit only once)?
Q2: How to get the new values whenever there is a change?
Q3: How to test locally?

Use a Guava cache to store the values (assuming they are strings, but you can change it to any type) like this:
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

LoadingCache<String, String> vaultData = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                // Invoked once per key on a cache miss; you write this method.
                return actuallyLoadFromVault(key);
            }
        });
This way, when your code reads a key from vaultData for the first time, it is loaded using actuallyLoadFromVault (which you need to write, of course), and any subsequent access to that key via vaultData hits the cached value stored in memory.
With proper configuration, after 10 minutes the value will be wiped from the cache (please read https://github.com/google/guava/wiki/CachesExplained#when-does-cleanup-happen and the question "How does Guava expire entries in its CacheBuilder?" to configure that correctly).
You might need to set a maximum cache size to limit memory consumption; see the documentation for details.
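For illustration, usage would then look like this (the key name "api-key" is made up; note that get() throws a checked ExecutionException you will need to handle):
String apiKey = vaultData.get("api-key");   // first call hits Vault via the loader
// Subsequent calls within the TTL are served from memory; after expiry the
// next get() transparently reloads the value from Vault, which covers Q2.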

Related

Clear remote Redis cache in Spring Boot application

I'm using Spring Boot 2.3 with the default cache mechanism configured through application.properties.
I defined all values:
spring.cache.type = redis
spring.redis.host = host
spring.redis.port = port
spring.redis.timeout = 4000
spring.redis.password = psw
spring.cache.redis.time-to-live = 28800000
I take advantage of the cache in Spring Repository for example:
@Cacheable(cacheNames = "contacts")
@Override
Page<Contact> findAll(Specification specification, Pageable pageable);
It works as expected. Redis, however, is a cluster shared by a couple of my applications, and I need the second application to be able to remove some or all keys in Redis.
Application A1 takes advantage of the cache and puts keys into it. App A2 needs to clear some or all of those keys.
In A2 I did:
cacheManager.getCacheNames().forEach(cacheName -> cacheManager.getCache(cacheName).clear());
but of course the list of cache names is empty, because this app doesn't add keys to the cache and, in any case, doesn't have the same keys as A1.
I need to list the remote keys and then clear them. Is there a simple way without using the Spring Data Redis library?
You could define a separate prefix for your entire cache in Redis, something like a namespace for your cache entries.
Afterwards you can flush all keys in that namespace.
Note: make sure the CacheManager only contains the Redis cache and doesn't have an in-memory cache (L1) in front of it.
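For example, if A1 sets a shared prefix via the spring.cache.redis.key-prefix property, A2 can flush that namespace with plain Jedis. A sketch; the prefix value "a1-cache:" and the class name are made up, and the method names follow Jedis 3.x:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class RemoteCacheFlusher {
    // Deletes every key under the shared prefix, using SCAN so the server
    // is never blocked the way a KEYS-based approach would block it.
    public static void flushPrefix(Jedis jedis, String prefix) {
        ScanParams params = new ScanParams().match(prefix + "*").count(500);
        String cursor = ScanParams.SCAN_POINTER_START;
        do {
            ScanResult<String> page = jedis.scan(cursor, params);
            for (String key : page.getResult()) {
                jedis.del(key);
            }
            cursor = page.getCursor();
        } while (!"0".equals(cursor));
    }
}
A2 would then call something like RemoteCacheFlusher.flushPrefix(new Jedis("host"), "a1-cache:").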

Hazelcast distributed map: what is the default eviction policy?

I have a distributed map in Hazelcast, something like this:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("clusterName").setPassword("clusterPWD");
clientConfig.getNetworkConfig().addAddress("X.X.X.X");
clientConfig.setInstanceName(InstanceName);
HazelcastInstance instance = HazelcastClient.newHazelcastClient(clientConfig);
[...]
map = instance.getMap("MAP_NAME");
[...]
// a lot of map.put();
[...]
// a lot of map.get();
I need to avoid OOM problems, so the cache has to be cleaned up regularly.
EDIT: It seems that the default policy is no eviction at all, so it's necessary to configure an eviction policy explicitly.
I tried adding a hazelcast-client.xml to the classpath with this configuration:
<near-cache name="wm_info">
    <max-size>3</max-size>
    <time-to-live-seconds>5</time-to-live-seconds>
    <max-idle-seconds>5</max-idle-seconds>
    <eviction-policy>LRU</eviction-policy>
    <invalidate-on-change>true</invalidate-on-change>
    <in-memory-format>OBJECT</in-memory-format>
</near-cache>
as well as adding this code:
EvictionConfig evictionConfig = new EvictionConfig()
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setSize(2);
NearCacheConfig nearCacheConfig = new NearCacheConfig()
        .setName(WM_MAP_NAME)
        .setInMemoryFormat(InMemoryFormat.BINARY)
        .setInvalidateOnChange(true)
        .setTimeToLiveSeconds(5)
        .setEvictionConfig(evictionConfig);
clientConfig.addNearCacheConfig(nearCacheConfig);
but it doesn't work: cached items are still in the cache even after several minutes.
EDIT2: The only way that seems to work is:
map.put(code, json, 5, TimeUnit.SECONDS);
Any alternatives?
Thanks
Andrea
By default, eviction/expiration is not configured for a map; you have to configure it explicitly if you don't want your map to exceed a threshold. If you keep putting entries into a map with the default configuration, you'll eventually get an OOM.
Below is a map configuration which enables eviction with the least-recently-used policy. When the map size reaches the configured threshold, some of the entries will get evicted.
If you also want the entries to expire, configure time-to-live-seconds and max-idle-seconds as well.
<map name="default">
    ...
    <time-to-live-seconds>0</time-to-live-seconds>
    <max-idle-seconds>0</max-idle-seconds>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="PER_NODE">5000</max-size>
    ...
</map>
Take a look at the Map Eviction section of the documentation
https://docs.hazelcast.org/docs/3.11.2/manual/html-single/index.html#map-eviction
To add to what Ali commented, you have to add a size restriction of some sort on the cluster map side (number of entries, memory size, etc.). Then you add an eviction policy to tell Hazelcast which entries to evict when it hits the threshold and needs to put in new values.
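For reference, here's what the equivalent member-side configuration might look like programmatically. This is a sketch against the Hazelcast 3.x API matching the 3.11.2 docs linked above; the map name is taken from the question and the thresholds are illustrative:
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

Config config = new Config();
config.getMapConfig("wm_info")
        .setTimeToLiveSeconds(300)                    // expire entries after 5 minutes
        .setEvictionPolicy(EvictionPolicy.LRU)        // evict least-recently-used first
        .setMaxSizeConfig(new MaxSizeConfig(5000,     // start evicting at 5000 entries per node
                MaxSizeConfig.MaxSizePolicy.PER_NODE));
HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
Note that this is member-side map configuration; the near-cache settings in the question only control the client-side copy.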

How to get Object idle time using Spring boot Redis

I am trying to implement session management, where we store a JWT token in Redis. Now I want to remove the key if the object idle time is more than 8 hours. Please help.
There is no good reason I can think of for using IDLETIME instead of the much simpler pattern of issuing a GET followed by an EXPIRE, apart from the very small memory overhead that key expiry adds.
Recommended way: GET and EXPIRE (a sketch follows below):
1. GET the key you want.
2. Issue EXPIRE <key> 28800.
Way using OBJECT IDLETIME, DEL, and some application logic:
1. GET the key you want.
2. Call OBJECT IDLETIME <key>.
3. Check in your application code whether the idle time is greater than 8 hours.
4. If the check in step 3 passes, issue a DEL command.
The second way is more cumbersome and introduces network latency, since you need three round trips to your Redis server, while the first solution does it in one round trip if you use a pipeline, or at worst two round trips without any app-server processing in between.
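Here's a minimal sketch of the recommended pattern using Jedis (the host and the key name session:jwt:123 are made up for illustration):
import redis.clients.jedis.Jedis;

Jedis jedis = new Jedis("localhost");
// Read the token and push its expiry 8 hours into the future on each access.
String token = jedis.get("session:jwt:123");        // hypothetical key name
if (token != null) {
    jedis.expire("session:jwt:123", 8 * 60 * 60);   // 28800 seconds = 8 hours
}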
This is what I did using Jedis. I am fetching 1000 records at a time; you can add a loop to fetch all records in batches.
Jedis jedis = new Jedis("addURLHere");
ScanParams scanParams = new ScanParams().count(1000);
ScanResult<String> scanResult = jedis.scan(ScanParams.SCAN_POINTER_START, scanParams);
List<String> result = scanResult.getResult();
result.stream().forEach((key) -> {
    if (jedis.objectIdletime(key) > 8 * 60 * 60) { // idle for more than 8 hours
        // your functionality here
    }
});

How to do multi-threaded writing in ehcache?

Here's my code using Ehcache for multi-threaded reading and writing:
Writing code:
try {
    targetCache.acquireWriteLockOnKey(key);
    targetCache.putIfAbsent(new Element(key, value));
} finally {
    targetCache.releaseWriteLockOnKey(key);
}
Reading code:
try {
    cache.acquireReadLockOnKey(key);
    Element ele = cache.get(key);
    cacheCarId = (String) ele.getObjectValue();
} finally {
    cache.releaseReadLockOnKey(key);
}
key and value are both String.
My config is as follows:
CacheConfiguration config = new CacheConfiguration();
config.name("carCache");
config.maxBytesLocalHeap(128, MemoryUnit.parseUnit("M"));
config.eternal(false);
config.timeToLiveSeconds(60);
config.setTimeToIdleSeconds(60);
SizeOfPolicyConfiguration sizeOfPolicyConfiguration = new SizeOfPolicyConfiguration();
sizeOfPolicyConfiguration.maxDepth(10000);
sizeOfPolicyConfiguration.maxDepthExceededBehavior("abort");
config.addSizeOfPolicy(sizeOfPolicyConfiguration);
Cache memoryOnlyCache = new Cache(config);
CacheManager.getInstance().addCache(memoryOnlyCache);
Values expire within 60s and are written by multiple threads. The total number of keys is less than 25,000.
Reading and writing were fine at the beginning, but after a couple of hours I get inconsistencies between reads and writes...
Could anybody help me with this problem? Thanks a lot.
A Cache is already a thread-safe data structure, so you should not need the explicit locking you are using.
Also, the method Cache.putIfAbsent is already an atomic operation that guarantees that only one thread will succeed with the put.
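In other words, the locking can simply be dropped. A sketch using the same names as in the question:
// putIfAbsent is atomic: exactly one thread wins the insert for a given key.
targetCache.putIfAbsent(new Element(key, value));

// Reads need no lock either; get() returns null if the key is absent or expired.
Element ele = cache.get(key);
String cacheCarId = (ele != null) ? (String) ele.getObjectValue() : null;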
Note that eviction and expiry are two different things. With your configuration, eviction happens when the cache size grows beyond 128MB and expiry indeed happens after 60 seconds. However Ehcache does expiry in-line, so it is triggered when you read or write the mapping.
As for your remark about inconsistency, you will need to describe in more detail what you mean by that.

Changing hazelcast configuration at runtime

Is it possible to change the Hazelcast configuration at runtime, and if so, which parameters are modifiable?
It seems to be possible using Hazelcast Management Center, but I can't find any examples/references in the official docs/forums.
Might be a bit late to answer your question but better late than never :)
You can modify some of the map config properties after the map has been created using the MapService:
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
// create map
IMap<String, Integer> myMap = instance.getMap("myMap");
// create a new map config
MapConfig newMapConfig = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(1);
// submit the new map config to the map service
MapService mapService = (MapService)(((AbstractDistributedObject)instance.getDistributedObject(MapService.SERVICE_NAME, "")).getService());
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(newMapConfig);
Note that this API is not visible/documented so it might not work in future versions.
We are using this in our application when we need to insert several million entries in a distributed map at startup. Disabling the backup cut the insertion time by 30%. After the data are inserted, we enable the backup.
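For illustration, that startup sequence might look like the sketch below, where applyMapConfig is a hypothetical helper wrapping the MapService calls shown above (not a Hazelcast API) and bulkLoad stands in for the insertion logic:
// Hypothetical bulk-load sequence: run without backups, then restore them.
MapConfig noBackups = new MapConfig(instance.getConfig().getMapConfig("myMap"))
        .setBackupCount(0)
        .setAsyncBackupCount(0);
applyMapConfig(instance, "myMap", noBackups);    // hypothetical helper, see above

bulkLoad(myMap);                                 // hypothetical loader for the initial entries

MapConfig withBackups = new MapConfig(noBackups).setBackupCount(1);
applyMapConfig(instance, "myMap", withBackups);  // re-enable one sync backup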
The Hazelcast internals are not really designed to be modifiable. What do you want to modify?
