The right way to reload and restart State Machine with different data set - spring-statemachine

I need the same SM to serve various records from the same DB table (I can't create an SM per record). Would this be an appropriate way to reinitialize the SM with a new state from another record, or can you advise a better approach, please?
public static <S, E> void reinitStateMachine(Integer key, IStateMachineEnabledService sees,
        StateMachine<S, E> stateMachine, Class<? extends Enum> statesClazz) {
    String dbReadState;
    try {
        dbReadState = sees.findStateById(key);
    } catch (Exception e) {
        throw new MissingResourceException("Error while trying to load record with state, no record found: "
                + key + ". Extra info: " + e.getMessage(), sees.toString(), String.valueOf(key));
    }
    S currentState = (S) Enum.valueOf(statesClazz, dbReadState);
    stateMachine.stop();
    ((AbstractStateMachine<S, E>) stateMachine).resetStateMachine(
            new DefaultStateMachineContext<S, E>(currentState, null, null, null));
    stateMachine.start();
}
Thanks!
PS: I am aware of the persist and restore interfaces in 1.1.0, but persisting the StateMachineContext works only for a String-based state machine, while I use enums.

Related

Handle Hibernate optimistic locking with Spring

I am using Hibernate and Spring Data. Hibernate performs optimistic locking when inserting or updating an entity; if the version in the database doesn't match the one being persisted, it throws a StaleObjectStateException, which in Spring you need to catch as ObjectOptimisticLockingFailureException.
What I want to do is catch the exception and ask the user to refresh the page in order to get the latest data from the database, like below:
public void cancelRequest() {
    try {
        request.setStatus(StatusEnum.CANCELLED);
        this.request = topUpRequestService.insertOrUpdate(request);
        loadRequests();
        // perform other tasks...
    } catch (ObjectOptimisticLockingFailureException ex) {
        FacesUtils.showErrorMessage(null, "Action Failed.", FacesUtils.getMessage("message.pleaseReload"));
    }
}
I assume it will also work with the code below but I have not tested it yet.
public void cancelRequest() {
    RequestModel latestModel = requestService.findOne(request.getId());
    if (latestModel.getVersion() != request.getVersion()) {
        FacesUtils.showErrorMessage(null, "Action Failed.", FacesUtils.getMessage("message.pleaseReload"));
    } else {
        request.setStatus(StatusEnum.CANCELLED);
        this.request = requestService.insertOrUpdate(request);
        loadRequests();
        // perform other tasks...
    }
}
I need to apply this check everywhere I call requestService.insertOrUpdate(request), and I don't want to apply it one call site at a time. Therefore, I decided to place the checking code inside the insertOrUpdate(entity) method itself.
@Transactional
public abstract class BaseServiceImpl<M extends Serializable, ID extends Serializable, R extends JpaRepository<M, ID>>
        implements BaseService<M, ID, R> {

    protected R repository;
    protected ID id;

    @Override
    public synchronized M insertOrUpdate(M entity) {
        try {
            return repository.save(entity);
        } catch (ObjectOptimisticLockingFailureException ex) {
            FacesUtils.showErrorMessage(null, FacesUtils.getMessage("message.actionFailed"),
                    FacesUtils.getMessage("message.pleaseReload"));
            return entity;
        }
    }
}
My main question: there is one problem with this approach. The caller will not know whether the entity was persisted successfully, since the exception is caught and handled inside the function, so the caller will always assume the persist succeeded and continue doing the other tasks, which I don't want. I want it to stop performing tasks if the persist fails:
public void cancelRequest() {
    try {
        request.setStatus(StatusEnum.CANCELLED);
        this.request = topUpRequestService.insertOrUpdate(request);
        // I want it to stop here if the persist fails: don't load the requests or perform other tasks.
        loadRequests();
        // perform other tasks...
    } catch (ObjectOptimisticLockingFailureException ex) {
        FacesUtils.showErrorMessage(null, "Action Failed.", FacesUtils.getMessage("message.pleaseReload"));
    }
}
I know that when calling insertOrUpdate I can capture the returned entity in a new model variable and compare its version to the original one; if the version is the same, the persistence failed. But if I do it this way, I have to write the version-checking code everywhere I call insertOrUpdate. Any better approach than this?
The closest way to do this without making significant code changes at all the invocation points would be to look into some type of Spring AOP advice that works similarly to Spring's @Transactional annotation.
@FacesReloadOnException( ObjectOptimisticLockingFailureException.class )
public void theRequestHandlerMethod() {
    // call your service here
}
The idea is that the @FacesReloadOnException annotation triggers an around advice that catches any exception listed in the annotation value and handles the call to FacesUtils should any of those exception classes be thrown.
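A minimal sketch of that idea, assuming plain Spring AOP with @EnableAspectJAutoProxy; the annotation and aspect here are hypothetical, and FacesUtils is the utility from the question:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Hypothetical annotation listing the exception types to convert into a Faces message.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FacesReloadOnException {
    Class<? extends Throwable>[] value();
}

@Aspect
@Component
class FacesReloadOnExceptionAspect {

    @Around("@annotation(reloadOn)")
    public Object around(ProceedingJoinPoint pjp, FacesReloadOnException reloadOn) throws Throwable {
        try {
            return pjp.proceed();
        } catch (Throwable t) {
            for (Class<? extends Throwable> type : reloadOn.value()) {
                if (type.isInstance(t)) {
                    // swallow the exception and show the "please reload" message instead
                    FacesUtils.showErrorMessage(null, FacesUtils.getMessage("message.actionFailed"),
                            FacesUtils.getMessage("message.pleaseReload"));
                    return null;
                }
            }
            throw t; // not a configured type: rethrow
        }
    }
}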
The other options you have available aren't going to be nearly as straightforward and will require that you touch all your usage points in some fashion; it's just inevitable.
But I certainly would not consider putting the try/catch block in the service tier if you don't want to alter your service tier's method return types, because the controllers are going to need more context, as you've pointed out. The only way to push that try/catch block downstream would be to return some type of Result object that your controller could then inspect, like this:
public void someControllerRequestMethod() {
    InsertOrUpdateResult result = yourService.insertOrUpdate( theObject );
    if ( result.isSuccess() ) {
        loadRequests();
    }
    else {
        FacesUtils.showErrorMessage( ... );
    }
}
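Such a wrapper is trivial to sketch; the class name and factory methods below are illustrative only:

public class InsertOrUpdateResult<M> {
    private final M entity;
    private final boolean success;

    private InsertOrUpdateResult(M entity, boolean success) {
        this.entity = entity;
        this.success = success;
    }

    // the service returns success(saved) on a clean save...
    public static <M> InsertOrUpdateResult<M> success(M entity) {
        return new InsertOrUpdateResult<>(entity, true);
    }

    // ...and conflict(entity) when it catches ObjectOptimisticLockingFailureException
    public static <M> InsertOrUpdateResult<M> conflict(M entity) {
        return new InsertOrUpdateResult<>(entity, false);
    }

    public boolean isSuccess() { return success; }
    public M getEntity() { return entity; }
}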
Otherwise you'd need to get creative if you want to somehow centralize this in your web tier. Perhaps a web tier utility class that mimics your BaseService interface like the following:
public <T extends BaseService, U> U insertOrUpdate(T service, U object, Consumer<U> f) {
    try {
        U result = service.insertOrUpdate( object );
        f.accept( result );
        return result;
    }
    catch ( ObjectOptimisticLockingFailureException e ) {
        FacesUtils.showErrorMessage( ... );
        return null; // signal failure to the caller
    }
}
But frankly, unless you have a lot of call sites that are similar enough that such a generalization with a consumer makes sense, you may find it's more effort to generalize it than to just place the try/catch block in the controller itself.

How to retrieve and find current state on Spring State Machine?

I'm creating a project that needs an FSM, and I chose Spring State Machine to solve the problem. I'm using JPA and trying to figure out how to start the state machine based on my current state, retrieved from a JPA repository. I found this approach in the documentation:
state machine persist
But I'm confused about this approach too: persisting state machine
I'm not trying to persist the whole state machine configuration; I only want to start the machine and send events based on my entity's status. In both cases, though, I don't know where to plug in a JPA repository to find my current state.
Now I'm trying this approach:
class StateMachineAdapter<S, E, T> {

    lateinit var stateMachineFactory: StateMachineFactory<S, E>
    lateinit var persister: StateMachinePersister<S, E, T>

    fun stateMachineRestore(contextObject: T): StateMachine<S, E> {
        val stateMachine: StateMachine<S, E> = stateMachineFactory.getStateMachine()
        return persister.restore(stateMachine, contextObject)
    }

    fun persist(stateMachine: StateMachine<S, E>, contestation: T) {
        persister.persist(stateMachine, contestation)
    }

    fun create(): StateMachine<S, E> {
        val stateMachine: StateMachine<S, E> = stateMachineFactory.getStateMachine()
        stateMachine.start()
        return stateMachine
    }
}
I found this piece of code in the Spring documentation, and I thought it could be replaced with a JpaRepository:
public void change(int order, String event) {
    Order o = jdbcTemplate.queryForObject("select id, state from orders where id = ?", new Object[] { order },
            new RowMapper<Order>() {
                public Order mapRow(ResultSet rs, int rowNum) throws SQLException {
                    return new Order(rs.getInt("id"), rs.getString("state"));
                }
            });
    handler.handleEventWithState(MessageBuilder.withPayload(event).setHeader("order", order).build(), o.state);
}
This indeed has been quite awkward to do using existing functionality, as there are a lot of moving parts, as you've probably seen from the samples and docs.
I'm currently working to overhaul things around this in the next 1.2.8 release to make persisting easier. If you're willing to use snapshots (in the 1.2.x branch) until 1.2.8 is out, start by checking the new datajpapersist sample. It is based on the same concepts as storing configs, but with new persist classes in spring-statemachine-data. There are also GitHub issues around this targeted at 1.2.8.
It'd be good to get some feedback on this.
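In the meantime, a workable pattern on current releases is to implement StateMachinePersist against your own JPA entity and wrap it in a DefaultStateMachinePersister. A minimal sketch, where the Order entity and its state accessors are assumptions, and States/Events are your own enums:

import org.springframework.statemachine.StateMachineContext;
import org.springframework.statemachine.StateMachinePersist;
import org.springframework.statemachine.support.DefaultStateMachineContext;

public class OrderStateMachinePersist implements StateMachinePersist<States, Events, Order> {

    @Override
    public void write(StateMachineContext<States, Events> context, Order order) throws Exception {
        // keep only the current state on the entity; save it via your repository afterwards
        order.setState(context.getState());
    }

    @Override
    public StateMachineContext<States, Events> read(Order order) throws Exception {
        // rebuild a minimal context from the persisted state
        return new DefaultStateMachineContext<>(order.getState(), null, null, null);
    }
}

Wrapped as new DefaultStateMachinePersister<>(new OrderStateMachinePersist()), this plugs straight into the persister.restore(...) call in the adapter above.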

Kafka Streams state store exception when putting a value

I am using the low-level Processor API with state stores. Up to 0.10.0.1 it was working fine, but I upgraded Kafka Streams and now I am getting the error below. I figured out that this is due to the changelog, which looks at the record context:
java.lang.IllegalStateException: This should not happen as timestamp() should only be called while a record is processed
! at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.timestamp(AbstractProcessorContext.java:150)
! at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:60)
! at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:47)
! at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueStore.put(ChangeLoggingKeyValueStore.java:66)
! at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$2.run(MeteredKeyValueStore.java:67)
@Override
public void process(String arg0, List<Data> data) {
    data.forEach((x) -> {
        String rawKey = x.getId();
        Data existing = kvStore.get(rawKey);
        long bytesize = existing == null ? 0 : existing.getVolume();
        x.addVolume(bytesize);
        kvStore.put(rawKey, x);
    });
}
public void start() {
    builder = new KStreamBuilder();
    storeSupplier = Stores.create(getKVStoreName())
            .withKeys(getProcessorKeySerde())
            .withValues(getProcessorValueSerde())
            .persistent()
            .build();
    builder.addStateStore(storeSupplier);
    stream = builder.stream(Serdes.String(), serde(), getTopicName());
    processStream(stream);
    streams = new KafkaStreams(builder, props);
    streams.cleanUp();
    streams.start();
}
@Override
public void init(ProcessorContext context) {
    super.init(context);
    this.context = context;
    this.context.schedule(timeinterval);
    this.kvStore = (KeyValueStore) context.getStateStore(getKVStoreName());
}
Exceptions like this come up when the same Processor instance is used across multiple stream threads or partitions.
Ensure that you are returning a new instance from the ProcessorSupplier:
// the supplier's get() must hand out a fresh Processor each time
// (MyProcessor stands for your Processor implementation)
ProcessorSupplier<String, List<Data>> supplier = () -> new MyProcessor();
The same applies to Transformer and TransformerSupplier as well.
To broadly quote the documentation:
Creating a single Processor/Transformer object and returning the same object reference from ProcessorSupplier/TransformerSupplier#get() would be a violation of the supplier pattern and leads to runtime exceptions.
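As a concrete illustration, assuming MyProcessor is the Processor implementation from the question and getKVStoreName() names its store:

// Broken: one shared instance is handed to every stream task.
MyProcessor shared = new MyProcessor();
stream.process(() -> shared, getKVStoreName()); // violates the supplier contract

// Correct: the supplier returns a fresh instance on each get() call.
stream.process(MyProcessor::new, getKVStoreName());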

java application crashed by suspicious jdbc memory leak

I have been working on a Java application which crawls pages from the Internet with http-client (version 4.3.3). It uses one fixedThreadPool with 5 threads, each running a loop. The pseudocode follows.
public class Spiderling implements Runnable {
    @Override
    public void run() {
        while (true) {
            T task = null;
            try {
                task = scheduler.poll();
                if (task != null) {
                    if (/* Ehcache contains task's config */) {
                        taskConfig = /* Ehcache.getConfig */;
                    } else {
                        taskConfig = /* query task config from db */; // close the conn every time
                        /* put taskConfig into Ehcache */
                    }
                    spider(task, taskConfig);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        LOG.error("spiderling is DEAD");
    }
}
I am running it with the following arguments: -Duser.timezone=GMT+8 -server -Xms1536m -Xmx1536m -Xloggc:/home/datalord/logs/gc-2016-07-23-10-28-24.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC on a server (2 CPUs, 2 GB memory), and it crashes pretty regularly, about once every two or three days, with no OutOfMemoryError and no JVM error log.
Here is my analysis.
I analyzed the GC log with GC-EASY; the report is here. The weird thing is that the Old Gen increases slowly until it reaches the allocated max heap size, but a Full GC has never happened, not even once.
I suspected a memory leak, so I dumped the heap using jmap -dump:format=b,file=soldier.bin and analyzed the dump file with Eclipse MAT. Here is the problem suspect, which occupies 280+ MB:
The class "com.mysql.jdbc.NonRegisteringDriver",
loaded by "sun.misc.Launcher$AppClassLoader # 0xa0018490", occupies 281,118,144
(68.91%) bytes. The memory is accumulated in one instance of
"java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "".
Keywords
com.mysql.jdbc.NonRegisteringDriver
java.util.concurrent.ConcurrentHashMap$Segment[]
sun.misc.Launcher$AppClassLoader # 0xa0018490.
I use c3p0-0.9.1.2 as the MySQL connection pool, mysql-connector-java-5.1.34 as the JDBC connector, and Ehcache-2.6.10 as the memory cache. I have seen all the posts about the 'com.mysql.jdbc.NonRegisteringDriver memory leak' and still have no clue.
This problem has driven me crazy for several days; any advice or help will be appreciated!
**********************Supplementary description on 07-24****************
I use a Java web + ORM framework called JFinal (github.com/jfinal/jfinal), which is open source on GitHub.
Here is some core code for further description of the problem.
/**
 * CacheKit. Useful tool box for EhCache.
 */
public class CacheKit {

    private static CacheManager cacheManager;
    private static final Logger log = Logger.getLogger(CacheKit.class);

    static void init(CacheManager cacheManager) {
        CacheKit.cacheManager = cacheManager;
    }

    public static CacheManager getCacheManager() {
        return cacheManager;
    }

    static Cache getOrAddCache(String cacheName) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache == null) {
            synchronized (cacheManager) {
                cache = cacheManager.getCache(cacheName);
                if (cache == null) {
                    log.warn("Could not find cache config [" + cacheName + "], using default.");
                    cacheManager.addCacheIfAbsent(cacheName);
                    cache = cacheManager.getCache(cacheName);
                    log.debug("Cache [" + cacheName + "] started.");
                }
            }
        }
        return cache;
    }

    public static void put(String cacheName, Object key, Object value) {
        getOrAddCache(cacheName).put(new Element(key, value));
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key) {
        Element element = getOrAddCache(cacheName).get(key);
        return element != null ? (T) element.getObjectValue() : null;
    }

    @SuppressWarnings("rawtypes")
    public static List getKeys(String cacheName) {
        return getOrAddCache(cacheName).getKeys();
    }

    public static void remove(String cacheName, Object key) {
        getOrAddCache(cacheName).remove(key);
    }

    public static void removeAll(String cacheName) {
        getOrAddCache(cacheName).removeAll();
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key, IDataLoader dataLoader) {
        Object data = get(cacheName, key);
        if (data == null) {
            data = dataLoader.load();
            put(cacheName, key, data);
        }
        return (T) data;
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key, Class<? extends IDataLoader> dataLoaderClass) {
        Object data = get(cacheName, key);
        if (data == null) {
            try {
                IDataLoader dataLoader = dataLoaderClass.newInstance();
                data = dataLoader.load();
                put(cacheName, key, data);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return (T) data;
    }
}
I use CacheKit like CacheKit.get("cfg_extract_rule_tree", extractRootId, new ExtractRuleTreeDataloader(extractRootId)), and ExtractRuleTreeDataloader is called when nothing is found in the cache for extractRootId.
public class ExtractRuleTreeDataloader implements IDataLoader {

    public static final Logger LOG = LoggerFactory.getLogger(ExtractRuleTreeDataloader.class);

    private int ruleTreeId;

    public ExtractRuleTreeDataloader(int ruleTreeId) {
        super();
        this.ruleTreeId = ruleTreeId;
    }

    @Override
    public Object load() {
        List<Record> ruleTreeList = Db.find("SELECT * FROM cfg_extract_fule WHERE root_id=?", ruleTreeId);
        TreeHelper<ExtractRuleNode> treeHelper = ExtractUtil.batchRecordConvertTree(ruleTreeList); // convert List<Record> to a tree
        if (treeHelper.isValidTree()) {
            return treeHelper.getRoot();
        } else {
            LOG.warn("rule tree id :{} is an error tree #end#", ruleTreeId);
            return null;
        }
    }
}
As I said before, I use the JFinal ORM. The Db.find method code is:
public List<Record> find(String sql, Object... paras) {
    Connection conn = null;
    try {
        conn = config.getConnection();
        return find(config, conn, sql, paras);
    } catch (Exception e) {
        throw new ActiveRecordException(e);
    } finally {
        config.close(conn);
    }
}
and the config's close method code is:
public final void close(Connection conn) {
    if (threadLocal.get() == null)  // in a transaction if conn is in the threadlocal
        if (conn != null)
            try { conn.close(); } catch (SQLException e) { throw new ActiveRecordException(e); }
}
There is no transaction in my code, so I am pretty sure conn.close() is called every time.
**********************more description on 07-28****************
First, I use Ehcache to store the taskConfigs in memory. The taskConfigs almost never change, so I want to store them in memory eternally and spill them to disk if memory cannot hold them all.
I used MAT to find the GC roots of NonRegisteringDriver; the result is shown in the following picture.
[Image: the GC roots of NonRegisteringDriver]
But I still don't understand why the default behavior of Ehcache leads to a memory leak. TaskConfig is a class that extends the Model class:
public class TaskConfig extends Model<TaskConfig> {
    private static final long serialVersionUID = 5000070716569861947L;
    public static TaskConfig DAO = new TaskConfig();
}
The source code of Model is on this page (github.com/jfinal/jfinal/blob/jfinal-2.0/src/com/jfinal/plugin/activerecord/Model.java), and I can't find any reference (either direct or indirect) to the connection object, as @Jeremiah guessed.
Then I read the source code of NonRegisteringDriver, and I don't understand why the connectionPhantomRefs map field of NonRegisteringDriver holds more than 5000 entries of <ConnectionPhantomReference, ConnectionPhantomReference>, while I find no ConnectionImpl in the refQueue field of NonRegisteringDriver. I can see the cleanup code in the AbandonedConnectionCleanupThread class, which removes the ref from NonRegisteringDriver.connectionPhantomRefs as it takes abandoned connection refs from NonRegisteringDriver.refQueue:
@Override
public void run() {
    threadRef = this;
    while (running) {
        try {
            Reference<? extends ConnectionImpl> ref = NonRegisteringDriver.refQueue.remove(100);
            if (ref != null) {
                try {
                    ((ConnectionPhantomReference) ref).cleanup();
                } finally {
                    NonRegisteringDriver.connectionPhantomRefs.remove(ref);
                }
            }
        } catch (Exception ex) {
            // nowhere to really log this if we're static
        }
    }
}
I appreciate the help offered by @Jeremiah!
From the comments above I'm almost certain your memory leak is actually memory usage from EhCache. The ConcurrentHashMap you're seeing is the one backing the MemoryStore, and I'm guessing that the taskConfig holds a reference (either directly or indirectly) to the connection object, which is why it's showing in your stack.
Having eternal="true" in the default cache makes it so the inserted objects are never allowed to expire. Even without that, the timeToLive and timeToIdle values default to an infinite lifetime!
Combine that with Ehcache's default behavior of copying elements on retrieval (last I checked) through serialization, and you're just stacking up new object references each time the taskConfig is extracted and put back into Ehcache.
The best way to test this (in my opinion) is to change your default cache configuration. Change eternal to false, and implement a timeToIdle value. timeToIdle is a time (in seconds) that a value may exist in the cache without being accessed.
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdle="120"
        overflowToDisk="true"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
If that works, then you may want to look into further tweaking your ehcache configuration settings, or providing a more customized cache reference other than default for your class.
There are multiple performance considerations when tweaking the ehcache. I'm sure that there is a better configuration for your business model. The Ehcache documentation is good, but I found the site to be a bit scattered when I was trying to figure it out. I've listed some links that I found useful below.
http://www.ehcache.org/documentation/2.8/configuration/cache-size.html
http://www.ehcache.org/documentation/2.8/configuration/configuration.html
http://www.ehcache.org/documentation/2.8/apis/cache-eviction-algorithms.html#provided-memorystore-eviction-algorithms
Good luck!
To test your memory leak, try the following:
1. Insert a TaskConfig into Ehcache.
2. Immediately retrieve it back out of the cache.
3. Output the value of taskConfig1.equals(taskConfig2).
If it returns false, that is your memory leak. Override equals and hashCode in your TaskConfig object and rerun the test.
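If you need a starting point, a minimal identity-based override might look like this; the getId() accessor is an assumption, not shown in the question:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof TaskConfig)) return false;
    // getId() is hypothetical; compare whatever uniquely identifies a config
    return java.util.Objects.equals(getId(), ((TaskConfig) o).getId());
}

@Override
public int hashCode() {
    return java.util.Objects.hashCode(getId());
}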
The root cause is that the Linux OS runs out of memory and the OOM killer kills the process.
I found the following log in /var/log/messages:
Aug 3 07:24:03 iZ233tupyzzZ kernel: Out of memory: Kill process 17308 (java) score 890 or sacrifice child
Aug 3 07:24:03 iZ233tupyzzZ kernel: Killed process 17308, UID 0, (java) total-vm:2925160kB, anon-rss:1764648kB, file-rss:248kB
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) cpuset=/ mems_allowed=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Pid: 6721, comm: Thread (pooled) Not tainted 2.6.32-431.23.3.el6.x86_64 #1
I also found that the default value of maxIdleTime is 20 seconds in C3p0Plugin, which is a c3p0 plugin in JFinal. I think this is why the NonRegisteringDriver object occupies 280+ MB, as shown in the MAT report. So I set maxIdleTime to 3600 seconds, and NonRegisteringDriver is no longer suspicious in the MAT report.
I also reset the JVM arguments to -Xms512m -Xmx512m, and the Java program has been running well for several days now. Full GC is triggered as expected when the Old Gen is full.

How to avoid caching when values are null?

I am using Guava to cache hot data. When the data does not exist in the cache, I have to get it from the database:
public final static LoadingCache<ObjectId, User> UID2UCache = CacheBuilder.newBuilder()
        //.maximumSize(2000)
        .weakKeys()
        .weakValues()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<ObjectId, User>() {
            @Override
            public User load(ObjectId k) throws Exception {
                User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
                return u;
            }
        });
My problem is that when the data does not exist in the database, I want it to return null and not do any caching. But Guava saves null with the key in the cache and throws an exception when I get it:
com.google.common.cache.CacheLoader$InvalidCacheLoadException:
CacheLoader returned null for key shisoft.
How do we avoid caching null values?
Just throw an exception if the user is not found, and catch it in the client code when using the get(key) method.
new CacheLoader<ObjectId, User>() {
    @Override
    public User load(ObjectId k) throws Exception {
        User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
        if (u != null) {
            return u;
        } else {
            throw new UserNotFoundException();
        }
    }
}
From CacheLoader.load(K) Javadoc:
Returns:
the value associated with key; must not be null
Throws:
Exception - if unable to load the result
Answering your doubts about caching null values:
Returns the value associated with key in this cache, first loading
that value if necessary. No observable state associated with this
cache is modified until loading completes.
(from LoadingCache.get(K) Javadoc)
If you throw an exception, load is not considered complete, so no new value is cached.
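On the caller's side, a checked exception thrown from load surfaces wrapped in an ExecutionException (an unchecked one in UncheckedExecutionException), so a lookup against the question's cache might look like this, assuming UserNotFoundException is checked:

import java.util.concurrent.ExecutionException;

try {
    User user = UID2UCache.get(id);
    // ... use user
} catch (ExecutionException e) {
    if (e.getCause() instanceof UserNotFoundException) {
        // treat as "not found"; nothing was cached for this key
    }
}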
EDIT:
Note that in Caffeine, which is sort of Guava cache 2.0 and "provides an in-memory cache using a Google Guava inspired API" you can return null from load method:
Returns:
the value associated with key or null if not found
If you consider migrating, your data loader could simply return null when the user is not found.
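For illustration, a Caffeine loader can simply return null for missing users and nothing gets cached; findUserOrNull is a hypothetical lookup:

import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

LoadingCache<ObjectId, User> cache = Caffeine.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(key -> findUserOrNull(key)); // returning null means "absent": nothing is cached

User u = cache.get(someId); // null when not found, no InvalidCacheLoadException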
Simple solution: use com.google.common.base.Optional<User> instead of User as the value type.
public final static LoadingCache<ObjectId, Optional<User>> UID2UCache = CacheBuilder.newBuilder()
        ...
        .build(new CacheLoader<ObjectId, Optional<User>>() {
            @Override
            public Optional<User> load(ObjectId k) throws Exception {
                return Optional.fromNullable(DataLoader.datastore.find(User.class).field("_id").equal(k).get());
            }
        });
EDIT: I think @Xaerxess' answer is better.
I faced the same issue, because missing values in the source were part of the normal workflow. I haven't found anything better than writing some code myself using the getIfPresent, get, and put methods. See the method below, where local is a Cache<Object, Object>:
private <K, V> V getFromLocalCache(K key, Supplier<V> fallback) {
    @SuppressWarnings("unchecked")
    V s = (V) local.getIfPresent(key);
    if (s != null) {
        return s;
    } else {
        V value = fallback.get();
        if (value != null) {
            local.put(key, value);
        }
        return value;
    }
}
When you want to cache NULL values, you can use other values that stand in for NULL.
Before giving the solution, I would suggest not exposing the LoadingCache to the outside. Instead, use a method to restrict the scope of the cache.
For example, you could use LoadingCache<ObjectId, List<User>> as the return type and return an empty list when you can't retrieve values from the database. You could use -1 as an Integer or Long NULL value, "" as a String NULL value, and so on. After this, you should provide a method that handles the NULL stand-in:
if (value.equals(NULL_SENTINEL)) { // e.g. -1 for numbers or "" for strings
    return null;
}
I use getIfPresent:
@Test
public void cache() throws Exception {
    System.out.println("3-------" + totalCache.get("k2"));
    System.out.println("4-------" + totalCache.getIfPresent("k3"));
}

private LoadingCache<String, Date> totalCache = CacheBuilder
        .newBuilder()
        .maximumSize(500)
        .refreshAfterWrite(6, TimeUnit.HOURS)
        .build(new CacheLoader<String, Date>() {
            @Override
            @ParametersAreNonnullByDefault
            public Date load(String key) {
                Map<String, Date> map = ImmutableMap.of("k1", new Date(), "k2", new Date());
                return map.get(key);
            }
        });
