I am working with Spring and EhCache
I have the following method
@Override
@Cacheable(value="products", key="#root.target.PRODUCTS")
public Set<Product> findAll() {
    return new LinkedHashSet<>(this.productRepository.findAll());
}
I have other methods working with @Cacheable, @CachePut and @CacheEvict.
Now, imagine the database returns 100 products and they are cached under key="#root.target.PRODUCTS". Then another method inserts, updates, or deletes an item in the database. As a result, the products cached under key="#root.target.PRODUCTS" no longer match the database.
For example, check the two following methods: they update/delete an item, and that same item also lives inside the collection cached under key="#root.target.PRODUCTS":
@Override
@CachePut(value="products", key="#product.id")
public Product update(Product product) {
    return this.productRepository.save(product);
}

@Override
@CacheEvict(value="products", key="#id")
public void delete(Integer id) {
    this.productRepository.delete(id);
}
I want to know if it is possible to update/delete the item inside the collection cached under key="#root.target.PRODUCTS", so that the cache would still hold 100 products with the updated Product, or 99 if a Product was deleted.
My point is, I want to avoid the following:
@Override
@CachePut(value="products", key="#product.id")
@CacheEvict(value="products", key="#root.target.PRODUCTS")
public Product update(Product product) {
    return this.productRepository.save(product);
}

@Override
@Caching(evict={
    @CacheEvict(value="products", key="#id"),
    @CacheEvict(value="products", key="#root.target.PRODUCTS")
})
public void delete(Integer id) {
    this.productRepository.delete(id);
}
I don't want to fetch all 100 (or 99) products again just to re-populate the entry cached under key="#root.target.PRODUCTS".
Is it possible to do this? How?
Thanks in advance.
Caching the collection using the caching abstraction is a duplicate of what the underlying caching system is doing. And because this is a duplicate, it turns out that you have to resort to some kind of duplication in your own code in one way or the other (the duplicate key for the set is the obvious representation of that). And because there is duplication, you have to sync state somehow.
If you really need to access both the whole set and individual elements, then you should probably use a shortcut for the easiest leg. First, you should make sure your cache contains all elements, which is not something that is obvious. Far from it, actually. Considering you have that:
//EhCacheCache cache = (EhCacheCache) cacheManager.getCache("products");

@Override
public Set<Product> findAll() {
    Ehcache nativeCache = cache.getNativeCache();
    Map<Object, Element> elements = nativeCache.getAll(nativeCache.getKeys());
    Set<Product> result = new HashSet<Product>();
    for (Element element : elements.values()) {
        result.add((Product) element.getObjectValue());
    }
    return Collections.unmodifiableSet(result);
}
The elements result is actually a lazily loaded map, so a call to values() may throw an exception. You may want to loop over the keys instead, as sketched below.
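For instance, a minimal sketch that loops over the keys one by one (assuming the same cache field as in the snippet above) could look like this:
@Override
public Set<Product> findAll() {
    Ehcache nativeCache = cache.getNativeCache();
    Set<Product> result = new HashSet<Product>();
    // Fetch each element individually instead of relying on the lazily loaded map
    for (Object key : nativeCache.getKeys()) {
        Element element = nativeCache.get(key);
        if (element != null) { // the entry may have expired or been evicted in the meantime
            result.add((Product) element.getObjectValue());
        }
    }
    return Collections.unmodifiableSet(result);
}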
You have to remember that the caching abstraction eases the access to the underlying caching infrastructure and in no way replaces it: if you had to use the API directly, this is more or less what you would have to do.
Now, we can keep the conversation going on SPR-12036 if you believe we can improve the caching abstraction in that area. Thanks!
I think something like this should work... Actually it's only a variation of Stéphane Nicoll's answer, of course, but it may be useful for someone. I'm writing it right here and haven't checked it in an IDE, but something similar works in my project.
Override the CacheResolver:
#Cacheable(value="products", key="#root.target.PRODUCTS", cacheResolver = "customCacheResolver")
Implement your own cache resolver, which searches "inside" your cached items and does the work there:
public class CustomCacheResolver implements CacheResolver {
    private static final String CACHE_NAME = "products";
    @Autowired(required = true) private CacheManager cacheManager;

    @SuppressWarnings("unchecked")
    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> cacheOperationInvocationContext) {
        // 1. Take the arguments from the invocation and create a new simple key
        SimpleKey newKey;
        if (cacheOperationInvocationContext.getArgs().length > 0) { //optional
            newKey = new SimpleKey(cacheOperationInvocationContext.getArgs()); //it's the key of the cached object which your @Cacheable searches for
        } else {
            //Should never happen... fall back to the default cache if something is wrong with the arguments
            return new ArrayList<>(Arrays.asList(cacheManager.getCache(CACHE_NAME)));
        }
        // 2. Take the cache
        EhCacheCache ehCache = (EhCacheCache) cacheManager.getCache(CACHE_NAME); //this one we bring back
        Ehcache cache = (Ehcache) ehCache.getNativeCache(); //and this one we work with
        // 3. Modify the existing entry if we have it
        if (cache.getKeys().contains(newKey) && youWantToModifyIt) { //youWantToModifyIt is your own condition
            Element element = cache.get(newKey);
            if (element != null && !((List<Products>) element.getObjectValue()).isEmpty()) {
                List<Products> productsList = (List<Products>) element.getObjectValue();
                // ---**--- Modify "productsList" here as you want. You may now change a single element in this list.
                ehCache.put(newKey, productsList); //this does NOT add a new entry, it OVERWRITES the existing one
            }
        // 4. Maybe "create" a new entry with this key if we don't have it
        } else {
            ehCache.put(newKey, YOUR_ELEMENTS);
        }
        return new ArrayList<>(Arrays.asList(ehCache)); //bring it all back - our "new" or "modified" cache is in there now...
    }
}
Read more about CRUD of EhCache: EhCache code samples
Hope it helps. And sorry for my English:(
I think there is a way to read the collection from the underlying cache structure of Spring. You can retrieve the collection from the underlying ConcurrentHashMap as key-value pairs without using EhCache or anything else. Then you can update or remove an entry in that collection, and update the cache too. Here is an example that may help:
import com.crud.model.Post;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;
import org.springframework.cache.interceptor.SimpleKey;
import org.springframework.stereotype.Component;
import java.util.*;
@Component
@Slf4j
public class CustomCacheResolver implements CacheResolver {
private static final String CACHE_NAME = "allPost";
@Autowired
private CacheManager cacheManager;
@SuppressWarnings("unchecked")
@Override
public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> cacheOperationInvocationContext) {
log.info(Arrays.toString(cacheOperationInvocationContext.getArgs()));
String method = cacheOperationInvocationContext.getMethod().toString();
Post post = null;
Long postId = null;
if(method.contains("update")) {
//get the updated post
Object[] args = cacheOperationInvocationContext.getArgs();
post = (Post) args[0];
}
else if(method.contains("delete")){
//get the post Id to delete
Object[] args = cacheOperationInvocationContext.getArgs();
postId = (Long) args[0];
}
//read the cache
Cache cache = cacheManager.getCache(CACHE_NAME);
//get the concurrent cache map in key-value pair
assert cache != null;
Map<SimpleKey, List<Post>> map = (Map<SimpleKey, List<Post>>) cache.getNativeCache();
//Convert to set to iterate
Set<Map.Entry<SimpleKey, List<Post>>> entrySet = map.entrySet();
Iterator<Map.Entry<SimpleKey, List<Post>>> itr = entrySet.iterator();
//if an iterated entry has a List as its value then it is our desired data list!!! Yayyy
Map.Entry<SimpleKey, List<Post>> entry = null;
while (itr.hasNext()){
entry = itr.next();
if(entry.getValue() instanceof List) break;
}
//get the list
assert entry != null;
List<Post> postList = entry.getValue();
if(method.contains("update")) {
//update it
for (Post temp : postList) {
assert post != null;
if (temp.getId().equals(post.getId())) {
postList.remove(temp);
break;
}
}
postList.add(post);
}
else if(method.contains("delete")){
//delete it
for (Post temp : postList) {
if (temp.getId().equals(postId)) {
postList.remove(temp);
break;
}
}
}
//update the cache!! :D
cache.put(entry.getKey(),postList);
return new ArrayList<>(Collections.singletonList(cacheManager.getCache(CACHE_NAME)));
}
}
Here are the methods that use the CustomCacheResolver:
@Cacheable(key = "{#pageNo,#pageSize}")
public List<Post> retrieveAllPost(int pageNo,int pageSize){ // return list}
@CachePut(key = "#post.id",cacheResolver = "customCacheResolver")
public Boolean updatePost(Post post, UserDetails userDetails){ //your logic}
@CachePut(key = "#postId",cacheResolver = "customCacheResolver")
public Boolean deletePost(Long postId,UserDetails userDetails){ // your logic}
@CacheEvict(allEntries = true)
public Boolean createPost(String userId, Post post){//your logic}
Hope it helps you manipulate your Spring application cache manually!
I don't see any easy way, but you can override Ehcache's cache functionality by supplying a cache decorator. Most probably you'd want to use EhcacheDecoratorAdapter, to enhance the functions used by EhCacheCache's put and evict methods.
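For illustration only, a rough sketch of such a decorator (assuming Ehcache 2.x; the class name and the hooks are hypothetical) might look like this:
import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.constructs.EhcacheDecoratorAdapter;

public class ProductsCacheDecorator extends EhcacheDecoratorAdapter {

    public ProductsCacheDecorator(Ehcache underlyingCache) {
        super(underlyingCache);
    }

    @Override
    public void put(Element element) throws IllegalArgumentException, IllegalStateException, CacheException {
        super.put(element);
        // Hypothetical hook: when a single Product is put, also patch the Set
        // cached under the PRODUCTS key instead of evicting and reloading it.
    }

    @Override
    public boolean remove(Object key) throws IllegalStateException {
        // Hypothetical hook: drop the Product from the cached Set as well.
        return super.remove(key);
    }
}
The decorator would then need to be registered for the "products" cache (for example via a cacheDecoratorFactory in ehcache.xml) so that the operations issued through Spring go through it.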
A simple and crude solution is:
@Cacheable(key = "{#pageNo,#pageSize}")
public List<Post> retrieveAllPost(int pageNo,int pageSize){ // return list}
@CacheEvict(allEntries = true)
public Boolean updatePost(Post post, UserDetails userDetails){ //your logic}
@CacheEvict(allEntries = true)
public Boolean deletePost(Long postId,UserDetails userDetails){ // your logic}
@CacheEvict(allEntries = true)
public Boolean createPost(String userId, Post post){//your logic}
I'm creating a Spring Boot HATEOAS REST application. The code below shows how I add links when a GET request is sent for a specific Employee. I'm using the RepresentationModelAssembler toModel function. There's also a toCollectionModel function to override, which I would like to use to convert a List<Employee> to a CollectionModel. This will be returned by the /Employees/all endpoint.
I don't know how to do that. What I need is to pass a List<Employee>, have all list elements processed by the toModel function, and then, as in toModel, be able to add more links to it: links for the whole new collection (not individual items).
Looking forward to your answers!
@Component
public class EmployeeModelAssembler implements RepresentationModelAssembler<Employee, EntityModel<Employee>> {
@Override
public EntityModel<Employee> toModel(Employee employee) {
EntityModel<Employee> employeeEntityModel = EntityModel.of(employee);
Link selfLink = linkTo(methodOn(EmployeeController.class).getEmployeeById(employee.getId())).withSelfRel();
employeeEntityModel.add(selfLink);
return employeeEntityModel;
}
@Override
public CollectionModel<EntityModel<Employee>> toCollectionModel(Iterable<? extends Employee> entities) {
?? ?? ??
}
}
You can use something like this:
@GetMapping(produces = { "application/hal+json" })
public CollectionModel<Customer> getAllCustomers() {
List<Customer> allCustomers = customerService.allCustomers();
for (Customer customer : allCustomers) {
String customerId = customer.getCustomerId();
Link selfLink = linkTo(CustomerController.class).slash(customerId).withSelfRel();
customer.add(selfLink);
if (orderService.getAllOrdersForCustomer(customerId).size() > 0) {
Link ordersLink = linkTo(methodOn(CustomerController.class)
.getOrdersForCustomer(customerId)).withRel("allOrders");
customer.add(ordersLink);
}
}
Link link = linkTo(CustomerController.class).withSelfRel();
CollectionModel<Customer> result = CollectionModel.of(allCustomers, link);
return result;
}
Visit https://www.baeldung.com/spring-hateoas-tutorial#springhateoasinaction for a detailed explanation.
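If you prefer to keep this logic in the assembler itself, a minimal sketch of toCollectionModel could look like the following (assuming EmployeeController exposes a getAllEmployees() endpoint; that method name is hypothetical):
@Override
public CollectionModel<EntityModel<Employee>> toCollectionModel(Iterable<? extends Employee> entities) {
    // The default implementation runs each entity through toModel(...)
    CollectionModel<EntityModel<Employee>> collection =
            RepresentationModelAssembler.super.toCollectionModel(entities);
    // Add links that belong to the whole collection rather than to individual items
    collection.add(linkTo(methodOn(EmployeeController.class).getAllEmployees()).withSelfRel());
    return collection;
}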
I am trying to cache Kafka records for an interval of 3 minutes, after which they should expire and be removed from the cache.
Each incoming record, fetched by a Kafka consumer written in Spring Boot, needs to be put into the cache first; then, if an identical record is already present in the cache, the duplicate needs to be discarded.
I have tried using a Caffeine cache as below:
@EnableCaching
public class AppCacheManagerConfig {
@Bean
public CacheManager cacheManager(Ticker ticker) {
CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
SimpleCacheManager cacheManager = new SimpleCacheManager();
cacheManager.setCaches(Collections.singletonList(bookCache));
return cacheManager;
}
private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
return new CaffeineCache(name, Caffeine.newBuilder().expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
.maximumSize(100).ticker(ticker).build());
}
@Bean
public Ticker ticker() {
return Ticker.systemTicker();
}
}
and my Kafka Consumer is as below,
@Autowired
CachingServiceImpl cachingService;
@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
@Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
logger.info("Recieved Message: " + message.getPayload());
try {
boolean approveTopic = false;
boolean duplicateRecord = false;
if (cachingService.isDuplicateCheck(declineRecord)) {
//do something with records
}
else
{
//do something with records
}
cachingService.putInCache(xmlJSONObj, declineRecord, time);
and my caching service is as below,
@Component
public class CachingServiceImpl {
private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);
@Autowired
CacheManager cacheManager;
#Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
logger.info("Record is Cached for 3 minutes interval check", declineRecord);
cacheManager.getCache("declineRecords").put(declineRecord, time);
return declineRecord;
}
public boolean isDuplicateCheck(String declineRecord) {
if (null != cacheManager.getCache("declineRecords").get(declineRecord)) {
return true;
}
return false;
}
}
But each time a record arrives at the consumer, my cache is empty. It is not holding the records.
Modifications Done:
After going through the suggestions and some more R&D, I added the configuration file below and removed some of the earlier logic. The caching now works as expected, but the duplicate check fails when all three consumers receive the same records.
@Configuration
public class AppCacheManagerConfig {
public static Cache<String, Object> jsonCache =
Caffeine.newBuilder().expireAfterWrite(3, TimeUnit.MINUTES)
.maximumSize(10000).recordStats().build();
@Bean
public CacheLoader<Object, Object> cacheLoader() {
CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
@Override
public Object load(Object key) throws Exception {
return null;
}
@Override
public Object reload(Object key, Object oldValue) throws Exception {
return oldValue;
}
};
return cacheLoader;
}
Now I am using the above cache with manual put and get, as sketched below.
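For reference, the manual interaction with that static Caffeine cache is just a sketch along these lines (reusing the declineRecord and xmlJSONObj variables from the consumer above):
// store the record under its identifier
AppCacheManagerConfig.jsonCache.put(declineRecord, xmlJSONObj);
// duplicate check: getIfPresent returns null once the entry has expired or was never written
boolean duplicate = AppCacheManagerConfig.jsonCache.getIfPresent(declineRecord) != null;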
I guess you're trying to implement records deduplication for Kafka.
Here is the similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the @Cacheable annotation marks data getters and setters so that caching is added through AOP, while in your code you clearly implement some low-level cache-updating logic of your own.
At least the following changes may help you:
Remove @Cacheable. You don't need it because you work with the cache manually, so it may be a source of conflicts (especially since you use sync = true); see the sketch after this list. If it helps, remove @EnableCaching as well - it enables support for cache-related Spring annotations which you don't need here.
Try removing the Ticker bean (and the corresponding parameters from the other beans). It should not be harmful as per your configuration, but usually it's only helpful for tests, so there is no need to define it otherwise.
Double-check what declineRecord is. If it's a serialized object, ensure that serialization works properly.
Add recordStats() to the cache and output stats() to the log for further analysis.
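Put together, a stripped-down caching service along those lines might look like this (a sketch, not a drop-in replacement):
@Component
public class CachingServiceImpl {

    @Autowired
    private CacheManager cacheManager;

    // Manual put: no @Cacheable, so AOP does not interfere with the explicit call
    public void putInCache(String declineRecord, String time) {
        cacheManager.getCache("declineRecords").put(declineRecord, time);
    }

    // Manual lookup: true while the record is still within the expiry window
    public boolean isDuplicateCheck(String declineRecord) {
        return cacheManager.getCache("declineRecords").get(declineRecord) != null;
    }
}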
I'm using the Paging Library to load data from the network using ItemKeyedDataSource. After fetching items the user can edit them; these updates are done inside an in-memory cache (no database like Room is used).
Now, since the PagedList itself cannot be updated (discussed here), I have to recreate the PagedList and pass it to the PagedListAdapter.
The update itself is no problem, but after updating the RecyclerView with the new PagedList, the list jumps back to the beginning, destroying the previous scroll position. Is there any way to update a PagedList while keeping the scroll position (like how it works with Room)?
DataSource is implemented this way:
public class MentionKeyedDataSource extends ItemKeyedDataSource<Long, Mention> {
private Repository repository;
...
private List<Mention> cachedItems;
public MentionKeyedDataSource(Repository repository, ..., List<Mention> cachedItems){
super();
this.repository = repository;
this.teamId = teamId;
this.inboxId = inboxId;
this.filter = filter;
this.cachedItems = new ArrayList<>(cachedItems);
}
@Override
public void loadInitial(@NonNull LoadInitialParams<Long> params, final @NonNull ItemKeyedDataSource.LoadInitialCallback<Mention> callback) {
Observable.just(cachedItems)
.filter(items -> items != null && !items.isEmpty())
.switchIfEmpty(repository.getItems(..., params.requestedLoadSize).map(...))
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(response -> callback.onResult(response.data.list));
}
@Override
public void loadAfter(@NonNull LoadParams<Long> params, final @NonNull ItemKeyedDataSource.LoadCallback<Mention> callback) {
repository.getOlderItems(..., params.key, params.requestedLoadSize)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(response -> callback.onResult(response.data.list));
}
@Override
public void loadBefore(@NonNull LoadParams<Long> params, final @NonNull ItemKeyedDataSource.LoadCallback<Mention> callback) {
repository.getNewerItems(..., params.key, params.requestedLoadSize)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(response -> callback.onResult(response.data.list));
}
@NonNull
@Override
public Long getKey(@NonNull Mention item) {
return item.id;
}
}
The PagedList created like this:
PagedList.Config config = new PagedList.Config.Builder()
.setPageSize(PAGE_SIZE)
.setInitialLoadSizeHint(preFetchedItems != null && !preFetchedItems.isEmpty()
? preFetchedItems.size()
: PAGE_SIZE * 2
).build();
pagedMentionsList = new PagedList.Builder<>(new MentionKeyedDataSource(mRepository, team.id, inbox.id, mCurrentFilter, preFetchedItems)
, config)
.setFetchExecutor(ApplicationThreadPool.getBackgroundThreadExecutor())
.setNotifyExecutor(ApplicationThreadPool.getUIThreadExecutor())
.build();
The PagedListAdapter is created like this:
public class ItemAdapter extends PagedListAdapter<Item, ItemAdapter.ItemHolder> { //Adapter from google guide, Nothing special here.. }
mAdapter = new ItemAdapter(new DiffUtil.ItemCallback<Mention>() {
@Override
public boolean areItemsTheSame(Item oldItem, Item newItem) {
return oldItem.id == newItem.id;
}
@Override
public boolean areContentsTheSame(Item oldItem, Item newItem) {
return oldItem.equals(newItem);
}
});
and updated like this:
mAdapter.submitList(pagedList);
P.S: If there is a better way to update list items without using Room please share.
All you need to do is invalidate the current DataSource each time you update your data.
What I would do:
move the networking logic from MentionKeyedDataSource to a new class that extends PagedList.BoundaryCallback and set it when building your PagedList.
move all your data to some DataProvider that holds all your downloaded data and has a reference to the DataSource. Each time data is updated in the DataProvider, invalidate the current DataSource.
Something like that maybe
val dataProvider = PagedDataProvider()
val dataSourceFactory = InMemoryDataSourceFactory(dataProvider = dataProvider)
where
class PagedDataProvider : DataProvider<Int, DataRow> {
private val dataCache = mutableMapOf<Int, List<DataRow>>()
override val sourceLiveData = MutableLiveData<DataSource<Int, DataRow>>()
//implement data add/remove/update here
//and after each modification call
//sourceLiveData.value?.invalidate()
}
and
class InMemoryDataSourceFactory<Key, Value>(
private val dataProvider: DataProvider<Key, Value>
) : DataSource.Factory<Key, Value>() {
override fun create(): DataSource<Key, Value> {
val source = InMemoryDataSource(dataProvider = dataProvider)
dataProvider.sourceLiveData.postValue(source)
return source
}
}
This approach is very similar to what Room does - every time table row is updated - current DataSource is invalidated and DataSourceFactory creates new data source.
You can modify items directly in your adapter via currentList, like this:
class ItemsAdapter(): PagedListAdapter<Item, ItemsAdapter.ViewHolder>(ITEMS_COMPARATOR) {
...
fun changeItem(position: Int,newData:String) {
currentList?.get(position)?.data = newData
notifyItemChanged(position)
}
}
I'm using the gemfire-json-server module in SpringXD to populate a GemFire grid with json representation of “Order” objects. I understand the gemfire-json-server module saves data in Pdx form in GemFire. I’d like to read the contents of the GemFire grid into an “Order” object in my application. I get a ClassCastException that reads:
java.lang.ClassCastException: com.gemstone.gemfire.pdx.internal.PdxInstanceImpl cannot be cast to org.apache.geode.demo.cc.model.Order
I’m using the Spring Data GemFire libraries to read contents of the cluster. The code snippet to read the contents of the Grid follows:
public interface OrderRepository extends GemfireRepository<Order, String>{
Order findByTransactionId(String transactionId);
}
How can I use Spring Data GemFire to convert data read from the GemFire cluster into an Order object?
Note: The data was initially stored in GemFire using SpringXD's gemfire-json-server-module
Still waiting to hear back from the GemFire PDX engineering team, specifically on Region.get(key), but, interestingly enough if you annotate your application domain object with...
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public class Order ... {
...
}
This works!
Under-the-hood I knew the GemFire JSONFormatter class (see here) used Jackson's API to un/marshal (de/serialize) JSON data to and from PDX.
However, the orderRepository.findOne(ID) and ordersRegion.get(key) still do not function as I would expect. See updated test class below for more details.
Will report back again when I have more information.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = GemFireConfiguration.class)
@SuppressWarnings("unused")
public class JsonToPdxToObjectDataAccessIntegrationTest {
protected static final AtomicLong ID_SEQUENCE = new AtomicLong(0l);
private Order amazon;
private Order bestBuy;
private Order target;
private Order walmart;
@Autowired
private OrderRepository orderRepository;
@Resource(name = "Orders")
private com.gemstone.gemfire.cache.Region<Long, Object> orders;
protected Order createOrder(String name) {
return createOrder(ID_SEQUENCE.incrementAndGet(), name);
}
protected Order createOrder(Long id, String name) {
return new Order(id, name);
}
protected <T> T fromPdx(Object pdxInstance, Class<T> toType) {
try {
if (pdxInstance == null) {
return null;
}
else if (toType.isInstance(pdxInstance)) {
return toType.cast(pdxInstance);
}
else if (pdxInstance instanceof PdxInstance) {
return new ObjectMapper().readValue(JSONFormatter.toJSON(((PdxInstance) pdxInstance)), toType);
}
else {
throw new IllegalArgumentException(String.format("Expected object of type PdxInstance; but was (%1$s)",
pdxInstance.getClass().getName()));
}
}
catch (IOException e) {
throw new RuntimeException(String.format("Failed to convert PDX to object of type (%1$s)", toType), e);
}
}
protected void log(Object value) {
System.out.printf("Object of Type (%1$s) has Value (%2$s)", ObjectUtils.nullSafeClassName(value), value);
}
protected Order put(Order order) {
Object existingOrder = orders.putIfAbsent(order.getTransactionId(), toPdx(order));
return (existingOrder != null ? fromPdx(existingOrder, Order.class) : order);
}
protected PdxInstance toPdx(Object obj) {
try {
return JSONFormatter.fromJSON(new ObjectMapper().writeValueAsString(obj));
}
catch (JsonProcessingException e) {
throw new RuntimeException(String.format("Failed to convert object (%1$s) to JSON", obj), e);
}
}
@Before
public void setup() {
amazon = put(createOrder("Amazon Order"));
bestBuy = put(createOrder("BestBuy Order"));
target = put(createOrder("Target Order"));
walmart = put(createOrder("Wal-Mart Order"));
}
@Test
public void regionGet() {
assertThat((Order) orders.get(amazon.getTransactionId()), is(equalTo(amazon)));
}
@Test
public void repositoryFindOneMethod() {
log(orderRepository.findOne(target.getTransactionId()));
assertThat(orderRepository.findOne(target.getTransactionId()), is(equalTo(target)));
}
@Test
public void repositoryQueryMethod() {
assertThat(orderRepository.findByTransactionId(amazon.getTransactionId()), is(equalTo(amazon)));
assertThat(orderRepository.findByTransactionId(bestBuy.getTransactionId()), is(equalTo(bestBuy)));
assertThat(orderRepository.findByTransactionId(target.getTransactionId()), is(equalTo(target)));
assertThat(orderRepository.findByTransactionId(walmart.getTransactionId()), is(equalTo(walmart)));
}
@Region("Orders")
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")
public static class Order implements PdxSerializable {
protected static final OrderPdxSerializer pdxSerializer = new OrderPdxSerializer();
@Id
private Long transactionId;
private String name;
public Order() {
}
public Order(Long transactionId) {
this.transactionId = transactionId;
}
public Order(Long transactionId, String name) {
this.transactionId = transactionId;
this.name = name;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public Long getTransactionId() {
return transactionId;
}
public void setTransactionId(final Long transactionId) {
this.transactionId = transactionId;
}
@Override
public void fromData(PdxReader reader) {
Order order = (Order) pdxSerializer.fromData(Order.class, reader);
if (order != null) {
this.transactionId = order.getTransactionId();
this.name = order.getName();
}
}
@Override
public void toData(PdxWriter writer) {
pdxSerializer.toData(this, writer);
}
@Override
public boolean equals(Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof Order)) {
return false;
}
Order that = (Order) obj;
return ObjectUtils.nullSafeEquals(this.getTransactionId(), that.getTransactionId());
}
@Override
public int hashCode() {
int hashValue = 17;
hashValue = 37 * hashValue + ObjectUtils.nullSafeHashCode(getTransactionId());
return hashValue;
}
@Override
public String toString() {
return String.format("{ #type = %1$s, id = %2$d, name = %3$s }",
getClass().getName(), getTransactionId(), getName());
}
}
public static class OrderPdxSerializer implements PdxSerializer {
@Override
public Object fromData(Class<?> type, PdxReader in) {
if (Order.class.equals(type)) {
return new Order(in.readLong("transactionId"), in.readString("name"));
}
return null;
}
@Override
public boolean toData(Object obj, PdxWriter out) {
if (obj instanceof Order) {
Order order = (Order) obj;
out.writeLong("transactionId", order.getTransactionId());
out.writeString("name", order.getName());
return true;
}
return false;
}
}
public interface OrderRepository extends GemfireRepository<Order, Long> {
Order findByTransactionId(Long transactionId);
}
@Configuration
protected static class GemFireConfiguration {
@Bean
public Properties gemfireProperties() {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty("name", JsonToPdxToObjectDataAccessIntegrationTest.class.getSimpleName());
gemfireProperties.setProperty("mcast-port", "0");
gemfireProperties.setProperty("log-level", "warning");
return gemfireProperties;
}
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
//cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxSerializer(new OrderPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
@Bean(name = "Orders")
public PartitionedRegionFactoryBean ordersRegion(Cache gemfireCache) {
PartitionedRegionFactoryBean regionFactoryBean = new PartitionedRegionFactoryBean();
regionFactoryBean.setCache(gemfireCache);
regionFactoryBean.setName("Orders");
regionFactoryBean.setPersistent(false);
return regionFactoryBean;
}
@Bean
public GemfireRepositoryFactoryBean orderRepository() {
GemfireRepositoryFactoryBean<OrderRepository, Order, Long> repositoryFactoryBean =
new GemfireRepositoryFactoryBean<>();
repositoryFactoryBean.setRepositoryInterface(OrderRepository.class);
return repositoryFactoryBean;
}
}
}
So, as you are aware, GemFire (and by extension, Apache Geode) stores JSON in PDX format (as a PdxInstance). This is so GemFire can interoperate with many different language-based clients (native C++/C#, web-oriented (JavaScript, Python, Ruby, etc.) using the Developer REST API, in addition to Java) and also to be able to use OQL to query the JSON data.
After a bit of experimentation, I am surprised GemFire is not behaving as I would expect. I created an example, self-contained test class (i.e. no Spring XD, of course) that simulates your use case... essentially storing JSON data in GemFire as PDX and then attempting to read the data back out as the Order application domain object type using the Repository abstraction, logical enough.
Given the use of the Repository abstraction and implementation from Spring Data GemFire, the infrastructure will attempt to access the application domain object based on the Repository generic type parameter (in this case "Order" from the "OrderRepository" definition).
However, the data is stored in PDX, so now what?
No matter, Spring Data GemFire provides the MappingPdxSerializer class to convert PDX instances back to application domain objects using the same "mapping meta-data" that the Repository infrastructure uses. Cool, so I plug that in...
@Bean
public CacheFactoryBean gemfireCache(Properties gemfireProperties) {
CacheFactoryBean cacheFactoryBean = new CacheFactoryBean();
cacheFactoryBean.setProperties(gemfireProperties);
cacheFactoryBean.setPdxSerializer(new MappingPdxSerializer());
cacheFactoryBean.setPdxReadSerialized(false);
return cacheFactoryBean;
}
You will also notice, I set the PDX 'read-serialized' property (cacheFactoryBean.setPdxReadSerialized(false);) to false in order to ensure data access operations return the domain object and not the PDX instance.
However, this had no effect on the query method. In fact, it had no effect on the following operations either...
orderRepository.findOne(amazonOrder.getTransactionId());
ordersRegion.get(amazonOrder.getTransactionId());
Both calls returned a PdxInstance. Note, the implementation of OrderRepository.findOne(..) is based on SimpleGemfireRepository.findOne(key), which uses GemfireTemplate.get(key), which just performs Region.get(key), and so is effectively the same as ordersRegion.get(amazonOrder.getTransactionId()). The outcome should not be a PdxInstance, especially with Region.get() and read-serialized set to false.
With the OQL query (SELECT * FROM /Orders WHERE transactionId = $1) generated from the findByTransactionId(String id), the Repository infrastructure has a bit less control over what the GemFire query engine will return based on what the caller (OrderRepository) expects (based on the generic type parameter), so running OQL statements could potentially behave differently than direct Region access using get.
Next, I went on to try modifying the Order type to implement PdxSerializable, to handle the conversion during data access operations (direct Region access with get, OQL, or otherwise). This had no effect.
So, I tried to implement a custom PdxSerializer for Order objects. This had no effect either.
The only thing I can conclude at this point is something is getting lost in translation between Order -> JSON -> PDX and then from PDX -> Order. Seemingly, GemFire needs additional type meta-data required by PDX (something like @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@type")) in the JSON data that the PDXFormatter recognizes, though I am not certain it does.
Note, in my test class, I used Jackson's ObjectMapper to serialize the Order to JSON and then GemFire's JSONFormatter to serialize the JSON to PDX, which I suspect Spring XD is doing similarly under-the-hood. In fact, Spring XD uses Spring Data GemFire and is most likely using the JSON Region Auto Proxy support. That is exactly what SDG's JSONRegionAdvice object does (see here).
Anyway, I have an inquiry out to the rest of the GemFire engineering team. There are also things that could be done in Spring Data GemFire to ensure the PDX data is converted, such as making use of the MappingPdxSerializer directly to convert the data automatically on behalf of the caller if the data is indeed of type PdxInstance. Similar to how JSON Region Auto Proxying works, you could write an AOP interceptor for the Orders Region to automagically convert PDX to an Order.
Though, I don't think any of this should be necessary as GemFire should be doing the right thing in this case. Sorry I don't have a better answer right now. Let's see what I find out.
Cheers and stay tuned!
See subsequent post for test code.
I am using Guava to cache hot data. When the data does not exist in the cache, I have to get it from database:
public final static LoadingCache<ObjectId, User> UID2UCache = CacheBuilder.newBuilder()
//.maximumSize(2000)
.weakKeys()
.weakValues()
.expireAfterAccess(10, TimeUnit.MINUTES)
.build(
new CacheLoader<ObjectId, User>() {
@Override
public User load(ObjectId k) throws Exception {
User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
return u;
}
});
My problem is when the data does not exists in database, I want it to return null and to not do any caching. But Guava saves null with the key in the cache and throws an exception when I get it:
com.google.common.cache.CacheLoader$InvalidCacheLoadException:
CacheLoader returned null for key shisoft.
How do we avoid caching null values?
Just throw some exception if the user is not found and catch it in client code when using the get(key) method.
new CacheLoader<ObjectId, User>() {
@Override
public User load(ObjectId k) throws Exception {
User u = DataLoader.datastore.find(User.class).field("_id").equal(k).get();
if (u != null) {
return u;
} else {
throw new UserNotFoundException();
}
}
}
From CacheLoader.load(K) Javadoc:
Returns:
the value associated with key; must not be null
Throws:
Exception - if unable to load the result
Answering your doubts about caching null values:
Returns the value associated with key in this cache, first loading
that value if necessary. No observable state associated with this
cache is modified until loading completes.
(from LoadingCache.get(K) Javadoc)
If you throw an exception, load is not considered as complete, so no new value is cached.
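On the calling side that could look roughly like this (UserNotFoundException is the hypothetical exception from the loader above; Guava wraps checked loader exceptions in java.util.concurrent.ExecutionException and unchecked ones in com.google.common.util.concurrent.UncheckedExecutionException):
public User findUser(ObjectId id) {
    try {
        return UID2UCache.get(id);
    } catch (ExecutionException | UncheckedExecutionException e) {
        // the user was not found; nothing has been cached for this key
        return null;
    }
}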
EDIT:
Note that in Caffeine, which is sort of a Guava cache 2.0 and "provides an in-memory cache using a Google Guava inspired API", you can return null from the load method:
Returns:
the value associated with key or null if not found
If you consider migrating, your data loader could freely return null when the user is not found.
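For example, with Caffeine the original loader could stay almost as-is (a sketch, assuming the same DataLoader as above):
LoadingCache<ObjectId, User> UID2UCache = Caffeine.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(k -> DataLoader.datastore.find(User.class).field("_id").equal(k).get());

// get() simply returns null when the loader finds nothing, and no entry is stored
User u = UID2UCache.get(someObjectId);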
Simple solution: use com.google.common.base.Optional<User> instead of User as value.
public final static LoadingCache<ObjectId, Optional<User>> UID2UCache = CacheBuilder.newBuilder()
...
.build(
new CacheLoader<ObjectId, Optional<User>>() {
@Override
public Optional<User> load(ObjectId k) throws Exception {
return Optional.fromNullable(DataLoader.datastore.find(User.class).field("_id").equal(k).get());
}
});
EDIT: I think @Xaerxess' answer is better.
Faced the same issue, because missing values in the source were part of the normal workflow. I haven't found anything better than writing some code myself using the getIfPresent, get and put methods. See the method below, where local is a Cache<Object, Object>:
private <K, V> V getFromLocalCache(K key, Supplier<V> fallback) {
@SuppressWarnings("unchecked")
V s = (V) local.getIfPresent(key);
if (s != null) {
return s;
} else {
V value = fallback.get();
if (value != null) {
local.put(key, value);
}
return value;
}
}
When you want to cache NULL values, you can use some other value that effectively behaves as NULL (a sentinel).
Before giving the solution, I would suggest not exposing the LoadingCache to the outside. Instead, you should use a method to restrict the scope of the cache.
For example, you could use LoadingCache<ObjectId, List<User>> as the cache type and return an empty list when you couldn't retrieve values from the database. You could use -1 as an Integer or Long NULL value, "" as a String NULL value, and so on. After this, you should provide a method to handle the NULL value:
if (value.equals(NULL_VALUE)) { // NULL_VALUE being your sentinel: -1, "" or an empty list
    return null;
}
I use getIfPresent:
@Test
public void cache() throws Exception {
System.out.println("3-------" + totalCache.get("k2"));
System.out.println("4-------" + totalCache.getIfPresent("k3"));
}
private LoadingCache<String, Date> totalCache = CacheBuilder
.newBuilder()
.maximumSize(500)
.refreshAfterWrite(6, TimeUnit.HOURS)
.build(new CacheLoader<String, Date>() {
@Override
@ParametersAreNonnullByDefault
public Date load(String key) {
Map<String, Date> map = ImmutableMap.of("k1", new Date(), "k2", new Date());
return map.get(key);
}
});