NPE in Ignite 2.9 SpringCacheManager - spring

We are catching an NPE in SpringCacheManager during application startup:
java.lang.NullPointerException: null
at org.apache.ignite.cache.spring.SpringCacheManager.getCache(SpringCacheManager.java:357)
at org.springframework.cache.interceptor.AbstractCacheResolver.resolveCaches(AbstractCacheResolver.java:89)
at org.springframework.cache.interceptor.CacheAspectSupport.getCaches(CacheAspectSupport.java:253)
at org.springframework.cache.interceptor.CacheAspectSupport$CacheOperationContext.<init>(CacheAspectSupport.java:724)
at org.springframework.cache.interceptor.CacheAspectSupport.getOperationContext(CacheAspectSupport.java:266)
at org.springframework.cache.interceptor.CacheAspectSupport$CacheOperationContexts.<init>(CacheAspectSupport.java:615)
at org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:346)
at org.springframework.cache.interceptor.CacheInterceptor.invoke(CacheInterceptor.java:61)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
It seems that the Spring Boot application starts before the cache manager is ready to work.
If we look into the Ignite code:
/** {@inheritDoc} */
@Override public Cache getCache(String name) {
    assert ignite != null;

    SpringCache cache = caches.get(name);

    if (cache == null) {
        CacheConfiguration<Object, Object> cacheCfg = dynamicCacheCfg != null ?
            new CacheConfiguration<>(dynamicCacheCfg) : new CacheConfiguration<>();

        NearCacheConfiguration<Object, Object> nearCacheCfg = dynamicNearCacheCfg != null ?
            new NearCacheConfiguration<>(dynamicNearCacheCfg) : null;

        cacheCfg.setName(name);

        cache = new SpringCache(nearCacheCfg != null ? ignite.getOrCreateCache(cacheCfg, nearCacheCfg) :
            ignite.getOrCreateCache(cacheCfg), this); // (SpringCacheManager.java:357)

        SpringCache old = caches.putIfAbsent(name, cache);

        if (old != null)
            cache = old;
    }

    return cache;
}
I have no idea what causes the NPE or how to avoid it.
We are using Ignite 2.9 with Spring cache.
Could you help?
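For illustration only: if the cause really is that a @Cacheable method is invoked while the application context is still starting (i.e. before SpringCacheManager has obtained its Ignite instance), one hedged workaround is to defer the first cache access until the context has fully refreshed. The CacheWarmup class and UserService below are hypothetical names, not part of the original application:
// Hypothetical sketch: move any startup-time cache access into an ApplicationRunner,
// which Spring Boot invokes only after the application context has been refreshed,
// so the Ignite-backed cache manager has had a chance to initialize.
@Component
public class CacheWarmup implements ApplicationRunner {

    private final UserService userService; // hypothetical service with @Cacheable methods

    public CacheWarmup(UserService userService) {
        this.userService = userService;
    }

    @Override
    public void run(ApplicationArguments args) {
        // First cache-backed call happens here, not in a constructor or @PostConstruct.
        userService.findById(1L);
    }
}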

Related

Leverage Spring Boot Redis auto-configure logic for RedisConnectionFactory

Spring Boot auto-configures a RedisConnectionFactory if spring-data-redis is on the classpath, and the RedisConnectionFactory is initialized in LettuceConnectionConfiguration if lettuce-core is also on the classpath.
Until now I have had only one Redis store, so I have been leveraging Spring Boot auto-configuration.
Now I am adding a second Redis store: one store is the default, and the other is used when cacheManager = "secondaryCacheManager" is specified on the @Cacheable annotation, so the application must be able to cache and read from both Redis stores.
To configure both Redis stores, we have to configure both the primary and secondary RedisConnectionFactory and CacheManager in a custom configuration (Spring does not auto-configure a RedisConnectionFactory if one already exists in any custom configuration).
This custom configuration, however, is missing a lot of the logic that happens when LettuceConnectionConfiguration configures the RedisConnectionFactory.
The auto-configure logic in LettuceConnectionConfiguration is package-private, so it cannot be called directly from a custom configuration.
We would like to leverage the auto-configure logic in LettuceConnectionConfiguration while configuring the custom RedisConnectionFactory for both the primary and secondary Redis caches.
Is there a way to achieve this?
The reason is that we would like to keep the Redis connection configuration exactly as Spring Boot auto-configuration produces it.
Currently we use the code below to configure both the primary and secondary RedisConnectionFactory with pool configuration, with some code copy-pasted from the LettuceConnectionConfiguration class.
public static LettuceConnectionFactory buildLettuceConnectionFactory(RedisProperties properties, ClientResources clientResources) {
    RedisStandaloneConfiguration standaloneConfiguration = new RedisStandaloneConfiguration(properties.getHost(), properties.getPort());
    standaloneConfiguration.setDatabase(properties.getDatabase());
    if (properties.getPassword() != null) {
        standaloneConfiguration.setPassword(RedisPassword.of(properties.getPassword()));
    }
    if (properties.getUsername() != null) {
        standaloneConfiguration.setUsername(properties.getUsername());
    }
    LettucePoolingClientConfiguration poolingClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(buildGenericObjectPoolConfig(properties))
            .shutdownTimeout(properties.getLettuce().getShutdownTimeout())
            .clientOptions(createClientOptions(properties))
            .clientResources(clientResources)
            .build();
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(
            standaloneConfiguration, poolingClientConfiguration);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

private static GenericObjectPoolConfig buildGenericObjectPoolConfig(RedisProperties properties) {
    RedisProperties.Pool pool = properties.getLettuce().getPool();
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    if (Objects.nonNull(pool)) {
        poolConfig.setMaxIdle(pool.getMaxIdle());
        poolConfig.setMinIdle(pool.getMinIdle());
        poolConfig.setMaxTotal(pool.getMaxActive());
        poolConfig.setMaxWaitMillis(pool.getMaxWait().toMillis());
    }
    return poolConfig;
}

private static ClientOptions createClientOptions(RedisProperties properties) {
    ClientOptions.Builder builder = initializeClientOptionsBuilder(properties);
    Duration connectTimeout = properties.getConnectTimeout();
    if (connectTimeout != null) {
        builder.socketOptions(SocketOptions.builder().connectTimeout(connectTimeout).build());
    }
    return builder.timeoutOptions(TimeoutOptions.enabled()).build();
}

private static ClientOptions.Builder initializeClientOptionsBuilder(RedisProperties properties) {
    if (properties.getCluster() != null) {
        ClusterClientOptions.Builder builder = ClusterClientOptions.builder();
        Refresh refreshProperties = properties.getLettuce().getCluster().getRefresh();
        Builder refreshBuilder = ClusterTopologyRefreshOptions.builder()
                .dynamicRefreshSources(refreshProperties.isDynamicRefreshSources());
        if (refreshProperties.getPeriod() != null) {
            refreshBuilder.enablePeriodicRefresh(refreshProperties.getPeriod());
        }
        if (refreshProperties.isAdaptive()) {
            refreshBuilder.enableAllAdaptiveRefreshTriggers();
        }
        return builder.topologyRefreshOptions(refreshBuilder.build());
    }
    return ClientOptions.builder();
}
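This is not Spring Boot's own auto-configuration logic, but for context, here is a rough sketch of how a helper like buildLettuceConnectionFactory above might be wired into a primary and a secondary connection factory and cache manager. The app.redis.secondary prefix, the bean names, and the assumption that a ClientResources bean is still available from Boot are illustrative; imports are omitted as in the snippets above.
@Configuration
public class RedisCacheConfig {

    // Primary store settings, re-bound from spring.redis.* so the bean has a simple name.
    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.redis")
    public RedisProperties primaryRedisProperties() {
        return new RedisProperties();
    }

    // Secondary store settings bound from an assumed custom prefix (app.redis.secondary.*).
    @Bean
    @ConfigurationProperties(prefix = "app.redis.secondary")
    public RedisProperties secondaryRedisProperties() {
        return new RedisProperties();
    }

    // ClientResources is assumed to be the bean LettuceConnectionConfiguration registers
    // even when a custom RedisConnectionFactory is present.
    @Bean
    @Primary
    public LettuceConnectionFactory redisConnectionFactory(
            @Qualifier("primaryRedisProperties") RedisProperties primaryRedisProperties,
            ClientResources clientResources) {
        return buildLettuceConnectionFactory(primaryRedisProperties, clientResources);
    }

    @Bean
    public LettuceConnectionFactory secondaryRedisConnectionFactory(
            @Qualifier("secondaryRedisProperties") RedisProperties secondaryRedisProperties,
            ClientResources clientResources) {
        return buildLettuceConnectionFactory(secondaryRedisProperties, clientResources);
    }

    // Default cache manager used by @Cacheable when no cacheManager is specified.
    @Bean
    @Primary
    public CacheManager cacheManager(LettuceConnectionFactory redisConnectionFactory) {
        return RedisCacheManager.create(redisConnectionFactory);
    }

    // Referenced from @Cacheable(cacheManager = "secondaryCacheManager").
    @Bean("secondaryCacheManager")
    public CacheManager secondaryCacheManager(
            @Qualifier("secondaryRedisConnectionFactory") LettuceConnectionFactory secondaryRedisConnectionFactory) {
        return RedisCacheManager.create(secondaryRedisConnectionFactory);
    }
}
With something like this in place, a plain @Cacheable uses the primary store, while @Cacheable(cacheManager = "secondaryCacheManager") targets the second one.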

Caused by io.lettuce.core.RedisCommandExecutionException: MOVED 15596 XX.X.XXX.XX:6379 (Java Spring Boot)

We have a Spring Boot application which is deployed to AWS Lambda.
Code
public AbstractRedisClient getClient(String host, String port) {
    LOG.info("redis-uri" + "redis://" + host + ":" + port);
    return RedisClient.create("redis://" + host + ":" + port);
}

/**
 * Returns the Redis connection using the Lettuce-Redis-Client
 *
 * @return RedisClient
 */
public RedisClient getConnection(String host, String port) {
    LOG.info("redis-Host " + host);
    LOG.info("redis-Port " + port);
    RedisClient redisClient = (RedisClient) getClient(host, port);
    redisClient.setDefaultTimeout(Duration.ofSeconds(10));
    return redisClient;
}

private RedisCommands<String, String> getRedisCommands() {
    StatefulRedisConnection<String, String> statefulConnection = openConnection();
    if (statefulConnection != null)
        return statefulConnection.sync();
    else
        return null;
}

public StatefulRedisConnection<String, String> openConnection() {
    if (connection != null && connection.isOpen()) {
        return connection;
    }
    String redisPort = "6379";
    String redisHost = environment.getProperty("REDIS_HOST");
    //String redisPort = environment.getProperty("REDIS_PORT");
    LOG.info("Host: {}", redisHost);
    LOG.info("Port: {}", redisPort);
    UnifiedReservationRedisConfig lettuceRedisConfig = new UnifiedReservationRedisConfig();
    String redisUri = "redis://" + redisHost + ":" + redisPort;
    redisClient = lettuceRedisConfig.getConnection(redisHost, redisPort);
    ConnectionFuture<StatefulRedisConnection<String, String>> future = redisClient
            .connectAsync(StringCodec.UTF8, RedisURI.create(redisUri));
    try {
        connection = future.get();
    } catch (InterruptedException | ExecutionException exception) {
        LOG.info(exception.getMessage());
        closeConnectionsAsync();
        connection = null;
        Thread.currentThread().interrupt();
    }
    return connection;
}

private void closeConnectionsAsync() {
    LOG.info("Close redis connection");
    if (connection != null && connection.isOpen()) {
        connection.closeAsync();
    }
    if (redisClient != null) {
        redisClient.shutdownAsync();
    }
}
The issue is not happening all the time, but we frequently get errors like Caused by io.lettuce.core.RedisCommandExecutionException: MOVED 15596 XX.X.XXX.XX:6379. Can anyone help to solve this issue?
As far as I know, you are doing port-forwarding to your Redis cluster/instance using port 15596, but the actual Redis ports like 6379 are not accessible from your application's network.
When Redis's Java client gets access to Redis, it then tries to connect to the actual ports like 6379.
Try using RedisClusterClient instead of RedisClient. The unhandled MOVED response indicates that you are trying to use a non-cluster-aware client with Redis deployed in cluster mode.
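If the target really is a Redis Cluster, a minimal sketch of the cluster-aware Lettuce client looks roughly like this; host, port and key are placeholders:
// RedisClusterClient follows MOVED/ASK redirections itself instead of surfacing them
// as RedisCommandExecutionException the way the single-node RedisClient does.
RedisURI uri = RedisURI.create("redis://" + redisHost + ":" + redisPort);
RedisClusterClient clusterClient = RedisClusterClient.create(uri);
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
RedisAdvancedClusterCommands<String, String> commands = connection.sync();
String value = commands.get("some-key");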

Hibernate session closed for no apparent reason

I've been struggling with this for days and I really don't know what's happening here.
I have a few processes that are triggered by a user and run on a separate thread; each does some operations and updates an entry in the DB about its progress, which the user can retrieve.
This has all been working fine until recently, when suddenly, sometimes, seemingly uncorrelated with anything, these processes fail on their first attempt to lazy-load an entity. They fail with one of a few different errors, all of which eventually stem from the Hibernate session being closed somehow:
org.hibernate.SessionException: Session is closed. The read-only/modifiable setting is only accessible when the proxy is associated with an open session.
-
org.hibernate.SessionException: Session is closed!
-
org.hibernate.LazyInitializationException: could not initialize proxy - the owning Session was closed
-
org.hibernate.exception.GenericJDBCException: Could not read entity state from ResultSet : EntityKey[com.rdthree.plenty.domain.crops.CropType#1]
-
java.lang.NullPointerException
at com.mysql.jdbc.ResultSetImpl.checkColumnBounds(ResultSetImpl.java:766)
I'm using Spring @Transactional to manage my transactions.
Here's my config:
@Bean
public javax.sql.DataSource dataSource() {
    // update TTL so that the datasource will pick up DB failover - new IP
    java.security.Security.setProperty("networkaddress.cache.ttl", "30");
    HikariDataSource ds = new HikariDataSource();
    String jdbcUrl = "jdbc:mysql://" + plentyConfig.getHostname() + ":" + plentyConfig.getDbport() + "/"
            + plentyConfig.getDbname() + "?useSSL=false";
    ds.setJdbcUrl(jdbcUrl);
    ds.setUsername(plentyConfig.getUsername());
    ds.setPassword(plentyConfig.getPassword());
    ds.setConnectionTimeout(120000);
    ds.setMaximumPoolSize(20);
    return ds;
}

@Bean
public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    JpaTransactionManager manager = new JpaTransactionManager();
    manager.setEntityManagerFactory(entityManagerFactory);
    return manager;
}

@Bean
@Autowired
public LocalContainerEntityManagerFactoryBean entityManagerFactory(javax.sql.DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
    factory.setDataSource(dataSource);
    factory.setPackagesToScan("com.rdthree.plenty.domain");
    Properties properties = new Properties();
    properties.put("org.hibernate.flushMode", "ALWAYS");
    properties.put("hibernate.cache.use_second_level_cache", "true");
    properties.put("hibernate.cache.use_query_cache", "true");
    properties.put("hibernate.cache.region.factory_class",
            "com.rdthree.plenty.config.PlentyInfinispanRegionFactory");
    properties.put("hibernate.cache.infinispan.statistics", "true");
    properties.put("hibernate.cache.infinispan.query", "distributed-query");
    properties.put("hibernate.enable_lazy_load_no_trans", "true");
    if (plentyConfig.getProfile().equals(PlentyConfig.UNIT_TEST)
            || plentyConfig.getProfile().equals(PlentyConfig.PRODUCTION_INIT)) {
        properties.put("hibernate.cache.infinispan.cfg", "infinispan-local.xml");
    } else {
        properties.put("hibernate.cache.infinispan.cfg", "infinispan.xml");
    }
    factory.setJpaProperties(properties);
    HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
    adapter.setShowSql(false);
    adapter.setDatabasePlatform("org.hibernate.dialect.MySQLDialect");
    factory.setJpaVendorAdapter(adapter);
    return factory;
}
The way it works is that the thread fired by the user iterates over a collection of plans and applies each of them; the process that applies each plan also updates the progress entity in the DB.
The whole thread bean is marked as transactional:
@Component
@Scope("prototype")
@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED)
public class TemplatePlanApplicationThreadBean extends AbstractPlentyThread implements TemplatePlanApplicationThread {
    ...
    @Override
    public void run() {
        startProcessing();
        try {
            logger.trace(
                    "---Starting plan manifestation for " + fieldCropReplaceDatePlantationDates.size() + " fields---");
            List<Plan> plans = new ArrayList<>();
            for (FieldCropReplaceDatePlantationDates obj : fieldCropReplaceDatePlantationDates) {
                for (TemplatePlan templatePlan : obj.getTemplatePlans()) {
                    try {
                        plans.add(planService.findActivePlanAndManifestTemplatePlan(templatePlan, organization,
                                obj.getPlantationDate(), obj.getReplacementStartDate(), obj.getFieldCrop(),
                                autoSchedule, schedulerRequestArguments, planApplicationProgress, false));
                    } catch (ActivityException e) {
                        throw new IllegalArgumentException(e);
                    }
                }
                Plan plan = plans.get(plans.size() - 1);
                plan = planService.getEntityById(plan.getId());
                if (plan != null) {
                    planService.setUnscheduledPlanAsSelected(plan);
                }
                plans.clear();
            }
            if (planApplicationProgressService.getEntityById(planApplicationProgress.getId()) != null) {
                planApplicationProgressService.deleteEntity(planApplicationProgress.getId());
            }
        } catch (Exception e) {
            logger.error(PlentyUtils.extrapolateStackTrace(e));
            connector.createIssue("RdThreeLLC", "plenty-web",
                    new GitHubIssueRequest("Template plan application failure",
                            "```\n" + PlentyUtils.extrapolateStackTrace(e) + "\n```", 1, new ArrayList<>(),
                            Lists.newArrayList("plentytickets")));
            planApplicationProgress.setFailed(true);
            planApplicationProgressService.saveEntity(planApplicationProgress);
        } finally {
            endProcessing();
        }
    }
Here is the method called by the thread:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public synchronized Plan findActivePlanAndManifestTemplatePlan(TemplatePlan templatePlan,
        ServiceProviderOrganization organization, Date plantationDate, Date replacementDate, FieldCrop fieldCrop,
        boolean autoSchedule, SchedulerRequestArguments schedulerRequestArguments, Progress planProgress,
        boolean commit) throws ActivityException {
    Plan oldPlan = getLatestByFieldCrop(fieldCrop);
    List<Activity> activitiesToRemove = oldPlan != null
            ? findActivitiesToRemoveAfterDate(oldPlan, replacementDate != null ? replacementDate : new Date())
            : new ArrayList<>();
    List<PlanExpense> planExpensesToRemove = oldPlan != null
            ? findPlanExpensesToRemoveAfterDate(oldPlan, replacementDate != null ? replacementDate : new Date())
            : new ArrayList<>();
    Date oldPlanPlantationDate = oldPlan != null ? inferPlanDates(oldPlan).getPlantationDate() : null;
    if (oldPlan != null) {
        if (commit) {
            oldPlan.setReplaced(true);
        }
        buildPlanProfitProjectionForPlanAndField(oldPlan, Sets.newHashSet(activitiesToRemove),
                Sets.newHashSet(planExpensesToRemove));
    }
    if (commit) {
        for (Activity activity : activitiesToRemove) {
            activityService.deleteEntity(activity.getId());
        }
        for (PlanExpense planExpense : planExpensesToRemove) {
            planExpenseService.deleteEntity(planExpense.getId());
        }
    }
    oldPlan = oldPlan != null ? getEntityById(oldPlan.getId()) : null;
    Plan plan = manifestTemplatePlan(templatePlan, oldPlan, organization,
            plantationDate != null ? plantationDate : oldPlanPlantationDate, replacementDate, fieldCrop,
            autoSchedule, schedulerRequestArguments, planProgress, commit);
    if (!commit) {
        setPlanAllocationsUnscheduled(plan);
    }
    return plan;
}
The thing that kills me is that the error happens only sometimes, so I can't really debug it and can't correlate it with anything.
Any ideas about what could cause the session to close?
I tried turning off all other threads, so this was basically the only one running; it didn't help.
Thanks

CloseableHttpClient.execute freezes once every few weeks despite timeouts

We have a Groovy singleton that uses PoolingHttpClientConnectionManager (httpclient 4.3.6) with a pool size of 200 to handle very high numbers of concurrent connections to a search service and process the XML responses.
Despite the specified timeouts, it freezes about once a month but runs perfectly fine the rest of the time.
The Groovy singleton is below. The method retrieveInputFromURL seems to block on client.execute(get).
@Singleton(strict=false)
class StreamManagerUtil {
    // Instantiate once and cache for lifetime of Singleton class
    private static PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
    private static CloseableHttpClient client;
    private static final IdleConnectionMonitorThread staleMonitor = new IdleConnectionMonitorThread(connManager);
    private int warningLimit;
    private int readTimeout;
    private int connectionTimeout;
    private int connectionFetchTimeout;
    private int poolSize;
    private int routeSize;
    PropertyManager propertyManager = PropertyManagerFactory.getInstance().getPropertyManager("sebe.properties")

    StreamManagerUtil() {
        // Initialize all instance variables in singleton from properties file
        readTimeout = 6
        connectionTimeout = 6
        connectionFetchTimeout = 6
        // Pooling
        poolSize = 200
        routeSize = 50
        // Connection pool size and number of routes to cache
        connManager.setMaxTotal(poolSize);
        connManager.setDefaultMaxPerRoute(routeSize);
        // ConnectTimeout : time to establish connection with GSA
        // ConnectionRequestTimeout : time to get connection from pool
        // SocketTimeout : waiting for packets from GSA
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(connectionTimeout * 1000)
                .setConnectionRequestTimeout(connectionFetchTimeout * 1000)
                .setSocketTimeout(readTimeout * 1000).build();
        // Keep alive for 5 seconds if server does not have keep alive header
        ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
            @Override
            public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
                HeaderElementIterator it = new BasicHeaderElementIterator(response.headerIterator(HTTP.CONN_KEEP_ALIVE));
                while (it.hasNext()) {
                    HeaderElement he = it.nextElement();
                    String param = he.getName();
                    String value = he.getValue();
                    if (value != null && param.equalsIgnoreCase("timeout")) {
                        return Long.parseLong(value) * 1000;
                    }
                }
                return 5 * 1000;
            }
        };
        // Close all connections older than 5 seconds. Run as separate thread.
        staleMonitor.start();
        staleMonitor.join(1000);
        client = HttpClients.custom().setDefaultRequestConfig(config).setKeepAliveStrategy(myStrategy).setConnectionManager(connManager).build();
    }

    private retrieveInputFromURL(String categoryUrl, String xForwFor, boolean isXml) throws Exception {
        URL url = new URL(categoryUrl);
        GPathResult searchResponse = null
        InputStream inputStream = null
        HttpResponse response;
        HttpGet get;
        try {
            long startTime = System.nanoTime();
            get = new HttpGet(categoryUrl);
            response = client.execute(get);
            int resCode = response.getStatusLine().getStatusCode();
            if (xForwFor != null) {
                get.setHeader("X-Forwarded-For", xForwFor)
            }
            if (resCode == HttpStatus.SC_OK) {
                if (isXml) {
                    extractXmlString(response)
                } else {
                    StringBuffer buffer = buildStringFromResponse(response)
                    return buffer.toString();
                }
            }
        }
        catch (Exception e) {
            throw e;
        }
        finally {
            // Release connection back to pool
            if (response != null) {
                EntityUtils.consume(response.getEntity());
            }
        }
    }

    private extractXmlString(HttpResponse response) {
        InputStream inputStream = response.getEntity().getContent()
        XmlSlurper slurper = new XmlSlurper()
        slurper.setFeature("http://xml.org/sax/features/validation", false)
        slurper.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false)
        slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false)
        slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
        return slurper.parse(inputStream)
    }

    private StringBuffer buildStringFromResponse(HttpResponse response) {
        StringBuffer buffer = new StringBuffer();
        BufferedReader rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
        String line = "";
        while ((line = rd.readLine()) != null) {
            buffer.append(line);
            System.out.println(line);
        }
        return buffer
    }

    public class IdleConnectionMonitorThread extends Thread {
        private final HttpClientConnectionManager connMgr;
        private volatile boolean shutdown;

        public IdleConnectionMonitorThread(PoolingHttpClientConnectionManager connMgr) {
            super();
            this.connMgr = connMgr;
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(5000);
                        connMgr.closeExpiredConnections();
                        connMgr.closeIdleConnections(10, TimeUnit.SECONDS);
                    }
                }
            } catch (InterruptedException ex) {
                // Ignore
            }
        }

        public void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }
    }
I also found this in the log, leading me to believe it happened while waiting for response data:
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
Findings thus far:
We are using Java 1.8u25. There is an open issue on a similar scenario:
https://bugs.openjdk.java.net/browse/JDK-8075484
HttpClient had a similar report (https://issues.apache.org/jira/browse/HTTPCLIENT-1589), but that was fixed in the 4.3.6 version we are using.
Questions:
Can this be a synchronisation issue? From my understanding, even though the singleton is accessed by multiple threads, the only shared data is the cached CloseableHttpClient.
Is there anything else fundamentally wrong with this code or approach that may be causing this behaviour?
I do not see anything obviously wrong with your code. I would strongly recommend setting the SO_TIMEOUT parameter on the connection manager, though, to make sure it applies to all new sockets at creation time, not at request execution time.
It would also help to know what exactly 'freezing' means. Are worker threads getting blocked waiting to acquire connections from the pool, or waiting for response data?
Please also note that worker threads can appear 'frozen' if the server keeps sending bits of chunk-coded data. As usual, a wire/context log of the client session would help a lot:
http://hc.apache.org/httpcomponents-client-4.3.x/logging.html
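A minimal sketch of that SO_TIMEOUT suggestion, assuming the same connManager and readTimeout fields as in the singleton above (SocketConfig is org.apache.http.config.SocketConfig from HttpClient 4.3.x):
// A default SocketConfig on the pooling connection manager applies the read timeout to
// every socket when it is created, independent of the per-request RequestConfig.
SocketConfig socketConfig = SocketConfig.custom()
        .setSoTimeout(readTimeout * 1000) // e.g. 6 seconds, matching the singleton above
        .build();
connManager.setDefaultSocketConfig(socketConfig);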

How to avoid caching when values are null?

I am using Guava to cache hot data. When the data does not exist in the cache, I have to get it from the database:
public final static LoadingCache<ObjectId, User> UID2UCache = CacheBuilder.newBuilder()
        //.maximumSize(2000)
        .weakKeys()
        .weakValues()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(
                new CacheLoader<ObjectId, User>() {
                    @Override
                    public User load(ObjectId k) throws Exception {
                        User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
                        return u;
                    }
                });
My problem is that when the data does not exist in the database, I want it to return null and not do any caching. But Guava saves null with the key in the cache and throws an exception when I get it:
com.google.common.cache.CacheLoader$InvalidCacheLoadException:
CacheLoader returned null for key shisoft.
How do we avoid caching null values?
Just throw an exception if the user is not found and catch it in the client code when calling the get(key) method.
new CacheLoader<ObjectId, User>() {
    @Override
    public User load(ObjectId k) throws Exception {
        User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
        if (u != null) {
            return u;
        } else {
            throw new UserNotFoundException();
        }
    }
}
From CacheLoader.load(K) Javadoc:
Returns:
the value associated with key; must not be null
Throws:
Exception - if unable to load the result
Answering your doubts about caching null values:
Returns the value associated with key in this cache, first loading
that value if necessary. No observable state associated with this
cache is modified until loading completes.
(from LoadingCache.get(K) Javadoc)
If you throw an exception, the load is not considered complete, so no new value is cached.
EDIT:
Note that Caffeine, which is sort of a Guava cache 2.0 and "provides an in-memory cache using a Google Guava inspired API", lets you return null from the load method:
Returns:
the value associated with key or null if not found
If you consider migrating, your data loader can simply return null when the user is not found.
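A rough sketch of that Caffeine variant, reusing the types from the question; the builder settings are illustrative:
// com.github.benmanes.caffeine.cache.Caffeine / LoadingCache: the loader may return null,
// and get() then returns null without caching an entry for the key.
LoadingCache<ObjectId, User> uid2uCache = Caffeine.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(k -> DataLoader.datastore.find(User.class).field("_id").equal(k).get());

User u = uid2uCache.get(someId); // null when the user does not exist in the database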
Simple solution: use com.google.common.base.Optional<User> instead of User as value.
public final static LoadingCache<ObjectId, Optional<User>> UID2UCache = CacheBuilder.newBuilder()
        ...
        .build(
                new CacheLoader<ObjectId, Optional<User>>() {
                    @Override
                    public Optional<User> load(ObjectId k) throws Exception {
                        return Optional.fromNullable(DataLoader.datastore.find(User.class).field("_id").equal(k).get());
                    }
                });
EDIT: I think @Xaerxess' answer is better.
I faced the same issue, because missing values in the source were part of the normal workflow. I haven't found anything better than writing some code myself using the getIfPresent, get and put methods. See the method below, where local is a Cache<Object, Object>:
private <K, V> V getFromLocalCache(K key, Supplier<V> fallback) {
    @SuppressWarnings("unchecked")
    V s = (V) local.getIfPresent(key);
    if (s != null) {
        return s;
    } else {
        V value = fallback.get();
        if (value != null) {
            local.put(key, value);
        }
        return value;
    }
}
When you want to cache NULL values, you can use sentinel values that behave as NULL.
Before giving the solution, I would suggest that you do not expose the LoadingCache to the outside; instead, wrap it in a method to restrict the scope of the cache.
For example, you could use LoadingCache<ObjectId, List<User>> as the return type and return an empty list when you couldn't retrieve values from the database (see the sketch below). Similarly, you could use -1 as an Integer or Long NULL value, "" as a String NULL value, and so on. After this, you should provide a method that translates the sentinel back into null:
if (NULL_SENTINEL.equals(value)) { // e.g. -1 or ""
    return null;
}
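A rough sketch of the empty-list variant described above, assuming the same User and DataLoader types as in the question; the wrapper method name findUser is illustrative:
// The cache itself stays private to the class; callers only see the wrapper method.
private static final LoadingCache<ObjectId, List<User>> UID2UCache = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<ObjectId, List<User>>() {
            @Override
            public List<User> load(ObjectId k) throws Exception {
                User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
                // never return null: an empty list is the "not found" sentinel
                return u != null ? Collections.singletonList(u) : Collections.<User>emptyList();
            }
        });

public static User findUser(ObjectId id) throws ExecutionException {
    List<User> users = UID2UCache.get(id);
    return users.isEmpty() ? null : users.get(0);
}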
I use getIfPresent:
@Test
public void cache() throws Exception {
    System.out.println("3-------" + totalCache.get("k2"));
    System.out.println("4-------" + totalCache.getIfPresent("k3"));
}

private LoadingCache<String, Date> totalCache = CacheBuilder
        .newBuilder()
        .maximumSize(500)
        .refreshAfterWrite(6, TimeUnit.HOURS)
        .build(new CacheLoader<String, Date>() {
            @Override
            @ParametersAreNonnullByDefault
            public Date load(String key) {
                Map<String, Date> map = ImmutableMap.of("k1", new Date(), "k2", new Date());
                return map.get(key);
            }
        });
