Delete multiple values from Redis Cache using @CacheEvict in Spring Boot

This is my first time using Redis (for caching purposes). I have the cache working for getting all values and for updating them, but not for deletes: I want to allow a user to delete multiple values at once instead of one at a time (a user can select multiple records and delete all of them, or select a single record and delete just that one).
When I delete a single record, it is removed from the cache, but when I attempt to delete multiple records, those values still remain in the cache.
Delete Endpoint
// In the request, a list of Ids can be passed or even just a single Id, and then those records will be deleted appropriately
@CacheEvict(value = "Post", key = "#postId")
@DeleteMapping("/posts/{postId}")
public ResponseEntity<?> deletePosts(@PathVariable List<Long> postId, Principal principal) {
    postService.deleteMultiplePosts(postId, principal.getName());
    // When a record is deleted, just return all the remaining records as a response
    return new ResponseEntity<>(createMapForPageContents(postService.findAllActivePostsByPostCreator(principal.getName(), PageRequest.of(0, 5))), HttpStatus.OK);
}
Service method
public void deleteMultiplePosts(List<Long> postIdList, String emailAddress) {
    List<Post> posts = new ArrayList<>();
    for (Long postId : postIdList) {
        posts.add(findPostById(postId, emailAddress));
    }
    postRepository.deleteAll(posts);
}
The delete works and is reflected in MySQL, but not in the cache (and only when deleting multiple values; deleting a single value evicts correctly).
I'm using Postman to test the requests.
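For reference, here is a minimal sketch of one way to evict several entries in one call: inject Spring's CacheManager and evict each id explicitly. It assumes the cache is named "Post" and that its entries are keyed by the individual post id.
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

// Sketch only: evicts each post id from the "Post" cache explicitly.
@Component
public class PostCacheEvictor {

    @Autowired
    private CacheManager cacheManager;

    public void evictPosts(List<Long> postIds) {
        Cache postCache = cacheManager.getCache("Post");
        if (postCache == null) {
            return; // cache not configured, nothing to evict
        }
        for (Long postId : postIds) {
            postCache.evict(postId);
        }
    }
}
Calling something like this from the service after postRepository.deleteAll(posts) would keep the cache in step with MySQL regardless of how many ids are passed.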

Related

Using findOne() / findAll() in spring boot for Cassandra DB

During code optimization I found a few areas where I was using findOne() inside a for loop:
public List<User> validateUsers(List<String> userIds) {
    List<User> validUsers = new ArrayList<>();
    for (String userId : userIds) {
        User user = userRepository.findOne(userId); // Network hit :: expensive call
        // Perform validations
        ...
        // Add valid users to validUsers list
        ...
    }
    return validUsers;
}
The above method takes a long time if I pass a huge list of users to validate (around 5 seconds for 300 users).
I then changed the method to use findAll() and perform the validations on the result collection:
public List<User> validateUsers(List<String> userIds) {
    List<User> validUsers = new ArrayList<>();
    Iterable<User> itr = userRepository.findAll(userIds); // Only one network hit
    for (User user : itr) {
        // Perform validations
        ...
        // Add valid users to validUsers list
        ...
    }
    return validUsers;
}
Now, for 300 users, the results come back in about 100 ms.
The question is: are there any side effects of using findAll(), considering the underlying structure of Cassandra? Also, I am using CrudRepository; should I use CassandraRepository instead?
The following are the parameters to think about when attempting this:
How big the users table is, if you are using findAll.
The partition keys of the user table.
Since Cassandra queries are faster on primary key fields, findOne might perform better with a large amount of data.
However, you could try
List<T> findAllById(Iterable<ID> ids);
from org.springframework.data.cassandra.repository.CassandraRepository
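For illustration, here is a minimal sketch of such a repository, assuming a User entity whose primary key is a String (the entity and key type are assumptions, not taken from the question):
import java.util.List;

import org.springframework.data.cassandra.repository.CassandraRepository;

// Extending CassandraRepository exposes findAllById(Iterable<ID>), which returns a List.
public interface UserRepository extends CassandraRepository<User, String> {
}
validateUsers(...) could then call userRepository.findAllById(userIds) to load all requested users by primary key in a single repository call.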

GraphQL Java: Using #Batched DataFetcher

I know how to retrieve a bean from a service in a datafetcher:
public class MyDataFetcher implements DataFetcher {
    ...
    @Override
    public Object get(DataFetchingEnvironment environment) {
        return myService.getData();
    }
}
But schemas with nested lists should use a BatchedExecutionStrategy and create batched DataFetchers with get() methods annotated @Batched (see the graphql-java docs).
But where do I put my getData() call then?
///// Where to put this code?
List list = myService.getData();
/////

public class MyDataFetcher implements DataFetcher {
    @Batched
    public Object get(DataFetchingEnvironment environment) {
        return list.get(environment.getIndex()); // where to get the index?
    }
}
WARNING: The original BatchedExecutionStrategy has been deprecated and will get removed. The current preferred solution is the Data Loader library. Also, the entire execution engine is getting replaced in the future, and the new one will again support batching "natively". You can already use the new engine and the new BatchedExecutionStrategy (both in nextgen packages) but they have limited support for instrumentations. The answer below applies equally to both the legacy and the nextgen execution engine.
Look at it like this. Normal DataFetchers receive a single object as the source (DataFetchingEnvironment#getSource) and return a single object as a result. For example, if you had a query like:
{
    user(name: "John") {
        company {
            revenue
        }
    }
}
Your company resolver (fetcher) would get a User object as its source, and would be expected to somehow return a Company based on it, e.g.
User owner = (User) environment.getSource();
Company company = companyService.findByOwner(owner);
return company;
Now, in the exact same scenario, if your DataFetcher was batched, and you used BatchedExecutionStrategy, instead of receiving a User and returning a Company, you'd receive a List<User> and would return a List<Company> instead.
E.g.
List<User> owners = (List<User>) environment.getSource();
List<Company> companies = companyService.findByOwners(owners);
return companies;
Notice that this means your underlying logic must have a way to fetch multiple things at once, otherwise it wouldn't really be batched. So your myService.getData call would need to change, unless it can already fetch data for multiple source objects in one go.
Also notice that batched resolution only makes sense for nested queries, as the top-level resolver can already fetch a list of objects without the need for batching.
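Put together, a batched version of the company fetcher could look like the sketch below. CompanyService.findByOwners is an assumed batch-capable service method that returns companies in the same order as the given owners.
import java.util.List;

import graphql.execution.batched.Batched;
import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;

public class CompanyDataFetcher implements DataFetcher<List<Company>> {

    private final CompanyService companyService;

    public CompanyDataFetcher(CompanyService companyService) {
        this.companyService = companyService;
    }

    @Batched
    @Override
    public List<Company> get(DataFetchingEnvironment environment) {
        // Under BatchedExecutionStrategy the source is the whole list of parent objects.
        List<User> owners = environment.getSource();
        // One service call resolves the companies for all owners at once.
        return companyService.findByOwners(owners);
    }
}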

How can I cache a database query with "IN" operator?

I'm using Spring Boot with Spring Cache. I have a method that, given a list of ids, returns a list of Food matching those ids:
public List<Food> get(List<Integer> ids) {
    return "select * from FOOD where FOOD_ID in ids"; // << pseudo-code
}
I want to cache the results by id. Imagine that I do:
List<Food> foods = get(asList(1, 5, 7));
and then:
List<Food> foods = get(asList(1, 5));
I want the Food with id 1 and the Food with id 5 to be retrieved from the cache. Is that possible?
I know I can write a method like:
@Cacheable(key = "#id")
public Food getById(Integer id) {
    ...
}
and iterate over the ids list, calling it for each id, but in that case I don't take advantage of the SQL IN operator, right? Thanks.
The key attribute of Cacheable takes a SpEL expression to calculate the cache key. So you should be able to do something like
@Cacheable(key = "#ids.stream().map(b -> Integer.toString(b)).collect(Collectors.joining(','))")
This would require the ids to always be passed in the same order.
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html#cache-annotations-cacheable-key
A better option would be to create a class to wrap around your ids that would be able to generate the cache key for you, or some kind of utility class function.
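As a sketch of that wrapper idea (the class name FoodIds is an assumption for illustration), a small value object can normalize the id list so the generated key is order-independent:
import java.util.List;
import java.util.stream.Collectors;

// Wraps the requested ids and produces a stable, order-independent cache key.
public final class FoodIds {

    private final List<Integer> ids;

    public FoodIds(List<Integer> ids) {
        // Sort and de-duplicate so that get([5, 1]) and get([1, 5]) share a key.
        this.ids = ids.stream().distinct().sorted().collect(Collectors.toList());
    }

    public List<Integer> getIds() {
        return ids;
    }

    public String toCacheKey() {
        return ids.stream().map(String::valueOf).collect(Collectors.joining(","));
    }
}
The lookup method could then be annotated with something like @Cacheable(value = "foods", key = "#foodIds.toCacheKey()"), which caches the whole result list under one normalized key; it still does not give per-id caching.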
Another possible solution, without @Cacheable, would be to inject the CacheManager into the class:
@Autowired
private CacheManager cacheManager;
You can then retrieve the food cache from the cache manager by name:
Cache cache = cacheManager.getCache("cacheName");
Then you could adjust your method to take the list of ids and manually get and put values from the cache:
cache.get(id);
cache.put(id, food);
You will most likely still not be able to use the SQL IN clause, but at least you are handling the iteration inside the method rather than everywhere it is called, and you are leveraging the cache whenever possible.
public List<Food> get(List<Integer> ids) {
    List<Food> result = new ArrayList<>();
    for (Integer id : ids) {
        // Attempt to fetch from the cache first (typed variant of cache.get(id))
        Food food = cache.get(id, Food.class);
        if (food == null) {
            // Cache miss: fetch from the DB, then cache it
            food = fetchFromDatabase(id); // hypothetical DB lookup, elided in the original answer
            cache.put(id, food);
        }
        result.add(food);
    }
    return result;
}
Relevant Javadocs:
http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/CacheManager.html
http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/Cache.html

spring-data: cache a queries total count

I'm using Spring Data JPA with Querydsl. I have a method that returns query results in pages containing the total count. Getting the total count is expensive and I would like to cache it. How is that possible?
My naive approach
#Cacheable("queryCount")
private long getCount(JPAQuery query){
return query.count();
}
does not work (to make it work the way I wanted, the cache key should not be the whole query, just the criteria). Anyway, I tested it, it did not work, and then I found this: Spring 3.1 @Cacheable - method still executed
The way I understand this, I can only cache public interface methods. However, in said method I would need to cache a property of the return value; e.g. for
Page<T> findByComplexProperty(...)
I would need to cache
page.getTotalElements();
Annotating the whole method works (it is cached), but not the way I would like. Assume getting the total count takes 30 seconds. Then for every new page request the user has to wait 30 seconds; only if he goes back to a previous page is the cache used. I want the count query to run exactly once and the count to be fetched from the cache afterwards.
How can I do that?
My solution was to autowire the cache manager in the class creating the complex query:
@Autowired
private CacheManager cacheManager;
and then create a simple private getCount method:
private long getCount(JPAQuery query) {
    Predicate whereClause = query.getMetadata().getWhere();
    String key = whereClause.toString();
    Cache cache = this.cacheManager.getCache(QUERY_CACHE);
    Cache.ValueWrapper value = cache.get(key);
    if (value == null) {
        Long result = query.count();
        cache.put(key, result);
        return result;
    }
    return (Long) value.get();
}

MVC3 Entity Framework Code First Updating Subset Related List of Items

I have a table of data with a list of key/value pairs in it:
Key          Value
-----------  ------------------------------
ElementName  PrimaryEmail
Email        someemail@gmail.ca
Value        Content/Images/logo-here.jpg
I am able to generate new items on my client web page. When I create a new row on the client and save it to the server by executing the following code, the item saves to the database as expected.
public ActionResult Add(CardElement cardElement)
{
    db.Entry(cardElement).State = EntityState.Added;
    db.SaveChanges();
    return Json(cardElement);
}
Now, when I want to delete my objects by sending another ajax request, I get a failure.
public void Delete(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        db.Entry(cardElement).State = EntityState.Deleted;
    }
    db.SaveChanges();
}
This results in the following error.
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.
I have tried other ways of deleting, including find-by-id-then-remove and attach-and-delete, but obviously I am not approaching it in the right fashion.
I am not sure what is causing your issue, but I tend to structure my deletes as follows:
public void Delete(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        var element = db.Table.Where(x => x.ID == cardElement.ID).FirstOrDefault();
        if (element != null)
            db.DeleteObject(element);
    }
    db.SaveChanges();
}
although I tend to do database first development, which may change things slightly.
EDIT: The error you are receiving states that no rows were updated. When you pass an object to a view and then pass it back to the controller, this tends to break the link between the object and the data store. That is why I prefer to look up the object first based on its ID, so that I have an object that is still linked to the data store.
