How to cache exceptions using Spring Boot cache and Caffeine - spring-boot

Very simple and straightforward question: is there any way to cache exceptions using Caffeine/Spring Boot?
Some specific exceptions in my method can be very time-consuming (a 404, for example). I wish I could cache them and avoid the long processing.

A simple way to cache exceptions is to encapsulate the call, catch the exception, and represent it as a value. I am just adding more details based on @ben-manes' comment.
Approach 1: encapsulate the exception as a business object
Approach 2: return null or an Optional on exception. To cache null values, you need to enable this explicitly (refer here - Spring Boot Cacheable - Cache null values)
Here is an example based on Spring caching (it can be extended to Caffeine). The following class loads a Book entity, which may result in an exception. The exception is handled and the fallback value cached, so the next time the same ISBN code (argument) is passed, the value is returned from the cache.
@Component
public class SimpleBookRepository implements BookRepository {

    @Override
    @Cacheable("books")
    public Book getByIsbn(String isbn) {
        return loadBook(isbn);
    }

    // Don't do this at home
    private Book loadBook(String isbn) {
        Book book;
        try {
            book = loadBookFromDb(isbn); // DB call; can throw an exception
        } catch (Exception e) {
            book = new Book("None found"); // Approach 1: encapsulate the error as an entity
            // book = null;                // Approach 2: set to null instead
        }
        return book;
    }
}
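The pattern above can be sketched without Spring at all. The plain-JDK example below (class and method names are my own, illustrative choices, not from the question) caches the failure as a sentinel value, so a slow failing lookup is only paid for once:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plain-JDK sketch of "cache the failure as a value" (no Spring needed).
class FailureCachingRepo {
    // The cached value is either a real book title or a sentinel for "not found".
    static final String NOT_FOUND = "None found";
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int dbCalls = 0; // visible for the demo only

    public String getByIsbn(String isbn) {
        return cache.computeIfAbsent(isbn, key -> {
            dbCalls++;
            try {
                return loadFromDb(key); // may throw
            } catch (RuntimeException e) {
                return NOT_FOUND;       // Approach 1: encode the error as a value
            }
        });
    }

    // Stand-in for the slow DB call that fails.
    private String loadFromDb(String isbn) {
        throw new RuntimeException("404: no book for " + isbn);
    }

    public int dbCalls() { return dbCalls; }
}
```

Note that ConcurrentHashMap cannot store null, which mirrors why Approach 2 (caching null) needs explicit support from the cache implementation.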

1. First, add these dependencies in pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>2.7.0</version>
</dependency>
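With the Caffeine starter on the classpath, Spring Boot auto-configures a CaffeineCacheManager; a cache specification can then be supplied in application.properties (the size and expiry values here are illustrative, not recommendations):

```properties
spring.cache.type=caffeine
spring.cache.caffeine.spec=maximumSize=500,expireAfterWrite=10m
```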
2. Add @CacheConfig(cacheNames = {"customer"}) and @Slf4j under the @Service annotation.
3. Add @Cacheable on the method you want cached, and add a log statement in the method:
public Customer saveCustomer(Customer customer) {
    log.info("Inside saveCustomer method of CustomerService");
    return customerRepository.save(customer);
}
There are a few important points:
@CacheConfig is a class-level annotation that helps streamline caching configuration.
@Cacheable is used to demarcate methods that are cacheable. In simple words, this annotation tells the caching API that we want to store the result of this method in the cache, so that on subsequent invocations the cached value is returned without executing the method.

Related

Spring Boot @Cacheable does not serve requests from in-memory cache

Given the following method with the Cacheable annotation:
@Cacheable(value = "cachekey", key = "#taskId")
public Task getTask(Long taskId) {
log.info("called");
Task task = ...;
return task;
}
And the following configuration:
@EnableCaching
@Configuration
public class CacheConfiguration {

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("cachekey");
    }

    @Scheduled(fixedDelay = 30000)
    @CacheEvict(allEntries = true, cacheNames = {"cachekey"})
    public void cacheEvict() {
    }
}
And the following pom parts:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.6.7</version>
<relativePath/>
</parent>
...
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
By calling getTask, I always hit the method and no request is served from the cache itself.
What am I doing wrong?
I've recreated an application based off the code you provided and tried to keep it as similar as possible. The program yields the output:
Calling getTask
getTask called
Calling getTask again
true
Which indicates that getTask is only executed once and that the caching is working. So we'll have to assume it is something not listed in the code you provided. Two thoughts come to mind:
Are you sure that you are providing the same Long to getTask(Long) when calling it multiple times? As it's currently configured, different Longs will result in different cache entries. You'll have to call it at least twice with the same Long in order to see caching take effect.
Are you calling getTask(Long) from a different bean than the one which contains getTask(Long)? The functionality provided by @Cacheable is implemented using Spring AOP. Spring AOP wraps the bean (CachedService in my example) with some generated code that handles the cache retrieval and put. However, this wrapping does not affect method calls internal to that bean.
For example, if I added a getTask99() method to CachedService like in the code snippet below:
@Component
public static class CachedService {

    public Task getTask99() {
        return getTask(99L);
    }

    @Cacheable(value = "cachekey", key = "#taskId")
    public Task getTask(Long taskId) {
        System.out.println("getTask called");
        Task task = new Task(System.out::println);
        return task;
    }
}
Calling getTask99() produces:
Calling getTask99
getTask called
Calling getTask99 again
getTask called
false
Indicating that nothing was cached, as the bean wrapping has been circumvented. Instead, call getTask(99L) from a different bean, like CacheTestRunner in my example.
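The self-invocation pitfall can be reproduced with nothing but a JDK dynamic proxy, which plays the role of the generated wrapper Spring AOP puts around the bean (all names below are illustrative, not Spring API):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// A stand-in for the bean: one "cacheable" method, one method that calls it internally.
interface TaskService {
    String getTask(long id);
    String getTaskViaSelf(long id);
}

class TaskServiceImpl implements TaskService {
    int executions = 0; // counts real executions, like the log.info in the question
    public String getTask(long id) { executions++; return "task-" + id; }
    public String getTaskViaSelf(long id) { return getTask(id); } // internal call: no proxy
}

class ProxyDemo {
    // Wraps the target the way Spring AOP wraps a @Cacheable bean.
    static TaskService cached(TaskServiceImpl target) {
        Map<Object, Object> cache = new HashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("getTask")) {          // the "cacheable" method
                if (!cache.containsKey(args[0])) {
                    cache.put(args[0], method.invoke(target, args));
                }
                return cache.get(args[0]);
            }
            return method.invoke(target, args);                // everything else passes through
        };
        return (TaskService) Proxy.newProxyInstance(
                TaskService.class.getClassLoader(),
                new Class<?>[] { TaskService.class }, handler);
    }
}
```

Calling getTask through the proxy hits the cache, but getTaskViaSelf invokes getTask directly on the target (this.getTask(...)), so the caching wrapper never runs, which is exactly the @Cacheable self-invocation behavior described above.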

ProxyingHandlerMethodArgumentResolver interfering with data binding

I have a web handler working with validation. When I add data-jpa dependencies, the validations stop working.
The problem is with the ProxyingHandlerMethodArgumentResolver. The data-jpa starter adds the resolver to the head of the resolver list and again later in the list. A proxy is created that does not update the model attribute object referenced in the model attribute annotation on the parameter.
My solution is to remove the resolver from the head of the resolver list, but keep it later in the list. The resolver can still be referenced, but after my custom resolvers.
I assume that this solution will cause problems later when I use more features from data-jpa. Can you suggest another way to get the original code working?
Details:
The following code works before adding the data-dependencies. I use an interface for the model attribute. As I understand, the model attribute parameter is used to bind to a model property with that name, if it exists, and create a new instance if the name does not exist in the model. Since "dataBad" is in the model, I do not expect the data binding to create a new instance, so I am able to use an interface.
@Controller
@RequestMapping("/ControllerBad")
@SessionAttributes("dataBad")
public class ControllerBad {

    @ModelAttribute("dataBad")
    public RequestDataRequired modelData() {
        return new RequestDataRequiredSingle();
    }

    @PostMapping(params = "confirmButton")
    public String confirmMethod(
            @Valid @ModelAttribute("dataBad") RequestDataRequired dataBad,
            BindingResult errors)
    {
        if (errors.hasErrors()) {
            return "edit";
        }
        return "redirect:ControllerBad?confirmButton=Confirm";
    }
}
This worked correctly. The request parameters were copied into the model attribute "dataBad".
Next, I wanted to add persistence, so I added spring-boot-starter-data-jpa and mysql-connector-java to the pom file
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
I added properties for the database to application properties
spring.datasource.url=jdbc:mysql://localhost:3306/baz
spring.datasource.username=foo
spring.datasource.password=bar
I have not created any entity classes. I have the class that binds to the form, but I have not added the annotations for an entity. At this point, I just want to get the data from the form into my bean that is in the model. Here is the interface for the form data object.
public interface RequestDataRequired {
    @NotNull(message = "cannot be empty")
    @Pattern(regexp = "(?i)red|green|blue",
             message = "must be red, green, or blue")
    public String getColor();
    public void setColor(String color);
}
Nothing else was changed. When I ran the new version the validation failed, because the color property was null.
If I use an implementation of the interface, then it works. I would like to make it work with an interface, as the name of the implementation class would otherwise appear in many locations in the controller, not just in the model attribute method.
@Valid @ModelAttribute("dataBad") RequestDataRequiredSingle dataBad
I can get it working with a session attribute interface and a model attribute interface, but this entails duplicate work for copying request parameters and errors.
@PostMapping(params = "confirmSessionModelButton")
public String confirmSessionModelMethod(
        Model model,
        @SessionAttribute RequestDataRequired dataBad,
        @Valid @ModelAttribute RequestDataRequired dataModel,
        BindingResult errors)
{
    BeanUtils.copyProperties(dataModel, dataBad);
    if (errors.hasErrors()) {
        model.addAttribute(BindingResult.class.getName() + ".dataBad", errors);
        return viewLocation("edit");
    }
    return "redirect:ControllerBad?confirmButton=Confirm";
}
After some experimenting, I found that data-jpa added four new argument resolvers. The ProxyingHandlerMethodArgumentResolver was included twice: once at the head of the resolver list and again after my own custom resolvers.
A proxy object is created for an interface and the request parameters are copied into the proxy. The proxy will not update the model attribute object referenced in the model attribute annotation on the parameter. The proxied object is available in the request handler with the request data, but the session attribute is not updated.
Since the proxying resolver is first in the list, any custom resolvers are not called.
If I remove the proxying resolver from the head of the argument resolver list, but leave it later in the list, I can get the code running as it did before.
@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Autowired
    private RequestMappingHandlerAdapter requestMappingHandlerAdapter;

    @PostConstruct
    public void init() {
        List<HandlerMethodArgumentResolver> argumentResolvers =
                requestMappingHandlerAdapter.getArgumentResolvers();
        // Drop the resolver at the head of the list, keeping the later occurrence.
        List<HandlerMethodArgumentResolver> newList =
                argumentResolvers.subList(1, argumentResolvers.size());
        requestMappingHandlerAdapter.setArgumentResolvers(newList);
    }
}
I am content with this solution for now but I assume that I will break something in the data-jpa that I will need later on.
Can anyone suggest a different way to get the former behavior of updating the model attribute with the request data and only creating a new instance of the model attribute when it is not already in the model?
I have found a simple solution for the problem. Data-jpa uses projections that create proxies for interfaces, and that is the problem I have. However, data-jpa also supports DTOs, which are classes that look like the interface.
@Component
public class RequestDataRequiredDTO implements RequestDataRequired {
    private String color;

    @Override
    public String getColor() {
        return color;
    }

    @Override
    public void setColor(String color) {
        this.color = color;
    }
}
I have to do two things. First, I have to use a reference to the DTO in the model attribute parameter, and data-jpa will project into it without using a proxy. I still use the normal interface everywhere else; the DTO implements the interface.
@PostMapping(params = "confirmButton")
public String confirmMethod(
        @Valid @ModelAttribute("dataBad") RequestDataRequiredDTO dataBad,
        BindingResult errors)
{
    if (errors.hasErrors()) {
        return viewLocation("edit");
    }
    return "redirect:ControllerBad?confirmButton=Confirm";
}
Second, I have to define a converter from the actual class in the model to the DTO type.
@Component
public class ClassToDTOConverter
        implements Converter<RequestDataRequiredSingle, RequestDataRequiredDTO> {

    @Override
    public RequestDataRequiredDTO convert(RequestDataRequiredSingle source) {
        RequestDataRequiredDTO target = new RequestDataRequiredDTO();
        target.setColor(source.getColor());
        return target;
    }
}

@Valid not working for spring rest controller

I have defined a rest endpoint method as:
@GetMapping("/get")
public ResponseEntity getObject(@Valid MyObject myObject) {....}
This maps request parameters to MyObject.
MyObject is defined as (with Lombok and javax.validation annotations):
@Value
@AllArgsConstructor
public class MyObject {
    @Min(-180) @Max(180)
    private double x;
    @Min(-90) @Max(90)
    private double y;
}
But the validations are not working. Even with values out of the prescribed range, the request doesn't throw an error and goes through fine.
If you are on a version of Spring Boot >= 2.3, the release notes state:
Validation Starter no longer included in web starters
... you’ll need to add the starter yourself.
i.e.
For Maven builds, you can do that with the following:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
For Gradle, you will need to add something like this:
dependencies {
...
implementation 'org.springframework.boot:spring-boot-starter-validation'
}
Please refer to https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.3-Release-Notes#validation-starter-no-longer-included-in-web-starters
Annotate your controller with org.springframework.validation.annotation.Validated
I see a couple of things here that you should fix. Let's start with the REST standard: the first rule is to think of endpoints as representations of resources, not operations. For example, in your code I presume the MyObject class represents a point (you should refactor the class to have a proper name), so the path value for getObject can be "/point". The operations are mapped onto the HTTP methods, accordingly:
GET: obtain info about a resource.
POST: create a resource.
PUT: update a resource.
DELETE: delete a resource.
In getObject you're expecting to receive an object. According to the REST standards, the GET method means you want to retrieve some data, and usually any identifying data is included in the URL, like ../app-context/get/{id}; here the id is a parameter that tells your controller you want info belonging to that id, so you would invoke the endpoint as ../app-context/get/1 to get info about the domain object identified by the number 1.
If you want to send data to the server, the most common HTTP method is POST.
According to this, at the design level you should:
Give a meaningful name to the MyObject class.
Check the operation you want to perform in getObject.
Assign a path to getObject representing a resource.
At code level, with the above comments, you could change this as:
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MyObject {
    @Min(-180) @Max(180)
    private double x;
    @Min(-90) @Max(90)
    private double y;
}
@PostMapping("/point")
public ResponseEntity savePoint(@RequestBody @Valid MyObject myObject) {...}
I will explain the changes:
Add @PostMapping to fulfill the REST standard.
Add @RequestBody; this annotation takes the info sent to the server and uses it to create a MyObject instance.
Add @NoArgsConstructor to MyObject: by default, deserialisation uses a default (no-args) constructor. You could write some specialised code to make things work without the default constructor, but that's up to you.
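The need for a no-args constructor can be demonstrated with plain reflection, which is roughly what deserialisers do under the hood (the two classes below are illustrative stand-ins for MyObject with and without @NoArgsConstructor):

```java
import java.lang.reflect.Constructor;

// Stand-in for MyObject with @NoArgsConstructor generated.
class WithNoArgs {
    double x;
    WithNoArgs() {}
    WithNoArgs(double x) { this.x = x; }
}

// Stand-in for MyObject with only the @AllArgsConstructor.
class WithoutNoArgs {
    double x;
    WithoutNoArgs(double x) { this.x = x; }
}

class ReflectionDemo {
    // Tries to instantiate a class through its no-args constructor, as a
    // deserialiser typically would before populating fields.
    static boolean canInstantiate(Class<?> cls) {
        try {
            Constructor<?> c = cls.getDeclaredConstructor(); // looks for ()
            c.setAccessible(true);
            c.newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // NoSuchMethodException: no default constructor
        }
    }
}
```

WithNoArgs can be instantiated reflectively and have its fields populated afterwards; WithoutNoArgs fails with NoSuchMethodException, which is roughly the failure a deserialiser reports.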
I just had to add the following dependency to get the validations working.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>

Spring Data Rest and collections with unique constraints

I'm evaluating spring-data-rest and am running into a situation where the magic no longer appears to be working in my favor.
Say I have a collection of items.
Parent - 1:M - Child
Parent
    Long id
    String foo
    String bar

    @OneToMany(...)
    @JoinColumn(name = "parent_id", referencedColumnName = "id", nullable = false)
    Collection<Child> items

    setItems(items) {
        this.items.clear();
        this.items.addAll(items);
    }

@Table(name = "items", uniqueConstraints = {@UniqueConstraint(columnNames = {"parent_id", "ordinal"})})
Child
    Long id
    String foo
    Integer ordinal
The database has a constraint that children of the same parent can't have conflicting values in one particular field, 'ordinal'.
I want to PATCH the parent entity, overwriting the collection of children. The problem comes from the default behavior of Hibernate: it doesn't flush the deletes from clearing the collection until after the new items are added. This violates the constraint, even though the eventual state would not.
Cannot insert duplicate key row in object 'schema.parent_items' with unique index 'ix_parent_items_id_ordinal'
I have tried mapping this constraint onto the child entity using @UniqueConstraint, but this doesn't appear to change the behavior.
I am currently working around this by manually looking at the current items and updating the ones that would cause the constraint violation with the new values.
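The manual workaround described above (update existing rows in place rather than delete-then-insert) can be sketched with plain collections; Child here is an illustrative stand-in for the entity, with no Hibernate involved:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the child entity; only the unique-key field and one payload field.
class Child {
    int ordinal;
    String foo;
    Child(int ordinal, String foo) { this.ordinal = ordinal; this.foo = foo; }
}

class ChildMerger {
    // Mutates 'current' so rows sharing an ordinal are updated, not re-inserted.
    static void merge(List<Child> current, List<Child> incoming) {
        Map<Integer, Child> byOrdinal = new HashMap<>();
        for (Child c : current) byOrdinal.put(c.ordinal, c);

        List<Child> result = new ArrayList<>();
        for (Child in : incoming) {
            Child existing = byOrdinal.remove(in.ordinal);
            if (existing != null) {
                existing.foo = in.foo;   // reuse the existing row: becomes an UPDATE
                result.add(existing);
            } else {
                result.add(in);          // genuinely new ordinal: INSERT is safe
            }
        }
        current.clear();                 // whatever is left in byOrdinal is orphaned
        current.addAll(result);
    }
}
```

The idea is that rows which share an ordinal keep their identity (and primary key), so Hibernate can issue UPDATEs for them instead of a DELETE followed by an INSERT on the same unique key.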
Am I missing something? This seems like a fairly common use case, but maybe I'm trying too hard to shoe-horn hibernate into a legacy database design. I'd love to be able to make things work against our current data without having to modify the schema.
I see that I can write a custom controller and service, à la https://github.com/olivergierke/spring-restbucks, and this would let me handle the entityManager and flush in between. The problem I see going that way is that it seems that I lose the entire benefit of using spring-data-rest in the first place, which solves 99% of my problems with almost no code. Is there somewhere that I can shim in a custom handler for this operation without rewriting all the other operations I get for free?
Here is a way to customize Spring Data REST (my own approach; I should raise it with the Spring Data REST guys):
Consider an exposed repository UserRepository on /users/; you should have at least the following API:
...
/users/{id} GET
/users/{id} DELETE
...
Now you want to override /users/{id} DELETE but keep the other endpoints handled by Spring Data REST.
The natural approach (again, in my opinion) is to write your own UserController (and your custom UserService) like the following:
@RestController
@RequestMapping("/users")
public class UserController {

    @Inject
    private UserService userService;

    @ResponseStatus(value = HttpStatus.NO_CONTENT)
    @RequestMapping(method = RequestMethod.DELETE, value = "/{user}")
    public void delete(@Valid @PathVariable("user") User user) {
        if (!user.isActive()) {
            throw new UserNotFoundException(user);
        }
        user.setActive(false);
        userService.save(user);
    }
}
But by doing this, the mapping /users will now be handled by org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping instead of org.springframework.data.rest.webmvc.RepositoryRestHandlerMapping.
And if you look at the method handleNoMatch of org.springframework.web.servlet.mvc.method.RequestMappingInfoHandlerMapping (the parent of org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping), you can see the following:
else if (patternAndMethodMatches.isEmpty() && !allowedMethods.isEmpty()) {
throw new HttpRequestMethodNotSupportedException(request.getMethod(), allowedMethods);
}
patternAndMethodMatches.isEmpty(): returns TRUE if no handler matches both the URL and the method (GET, POST, ...).
So if you are asking for /users/{id} GET it will be TRUE, because GET only exists on the controller exposed by the Spring Data REST repository.
!allowedMethods.isEmpty(): returns TRUE if at least one method (GET, POST, or something else) matches the given URL.
And again this is TRUE for /users/{id} GET, because /users/{id} DELETE exists.
So Spring will throw an HttpRequestMethodNotSupportedException.
In order to bypass this problem I created my own HandlerMapping with the following logic:
The HandlerMapping has a list of HandlerMappings (here RequestMappingInfoHandlerMapping and RepositoryRestHandlerMapping).
The HandlerMapping loops over this list and delegates the request. If an exception occurs, we keep it (only the first exception, in fact) and continue with the other handlers. At the end, if every handler in the list threw an exception, we rethrow the first exception (previously kept).
Moreover, we implement org.springframework.core.Ordered in order to place this handler before org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.
import org.springframework.core.Ordered;
import org.springframework.util.Assert;
import org.springframework.web.servlet.HandlerExecutionChain;
import org.springframework.web.servlet.HandlerMapping;
import javax.servlet.http.HttpServletRequest;
import java.util.List;
/**
 * @author Thibaud Lepretre
 */
public class OrderedOverridingHandlerMapping implements HandlerMapping, Ordered {
private List<HandlerMapping> handlers;
public OrderedOverridingHandlerMapping(List<HandlerMapping> handlers) {
Assert.notNull(handlers);
this.handlers = handlers;
}
@Override
public HandlerExecutionChain getHandler(HttpServletRequest request) throws Exception {
Exception firstException = null;
for (HandlerMapping handler : handlers) {
try {
return handler.getHandler(request);
} catch (Exception e) {
if (firstException == null) {
firstException = e;
}
}
}
if (firstException != null) {
throw firstException;
}
return null;
}
@Override
public int getOrder() {
return -1;
}
}
Now let's create our bean
@Inject
@Bean
@ConditionalOnWebApplication
public HandlerMapping orderedOverridingHandlerMapping(HandlerMapping requestMappingHandlerMapping,
        HandlerMapping repositoryExporterHandlerMapping) {
    List<HandlerMapping> handlers = Arrays.asList(requestMappingHandlerMapping, repositoryExporterHandlerMapping);
    return new OrderedOverridingHandlerMapping(handlers);
}
Et voilà.

Adding @Transactional causes "collection with cascade="all-delete-orphan" was no longer referenced"

I am upgrading a working project from Spring 2 + Hibernate 3 to Spring 3 + Hibernate 4. Since HibernateTemplate and HibernateDaoSupport have been retired, I did the following.
Before (simplified)
public List<Object> loadTable(final Class<?> cls)
{
Session s = getSession(); // was calling the old Spring getSession
Criteria c = s.createCriteria(cls);
List<Object> objects = c.list();
if (objects == null)
{
objects = new ArrayList<Object>();
}
closeSession(s);
return objects;
}
Now (simplified)
@Transactional(propagation = Propagation.REQUIRED)
public List<Object> loadTable(final Class<?> cls)
{
Session s = sessionFactory.getCurrentSession();
Criteria c = s.createCriteria(cls);
List<Object> objects = c.list();
if (objects == null)
{
objects = new ArrayList<Object>();
}
return objects;
}
I also added the transaction annotation declaration to Spring XML and removed this from Hibernate properties
"hibernate.current_session_context_class", "org.hibernate.context.ThreadLocalSessionContext"
The #Transactional annotation seems to have worked as I see this in the stacktrace
at com.database.spring.DatabaseDAOImpl$$EnhancerByCGLIB$$7d20ef95.loadTable(<generated>)
During initialization, the changes outlined above seem to work for a few calls to the loadTable function, but when it gets around to loading an entity with a parent, I get the "collection with cascade="all-delete-orphan" was no longer referenced" error. Since I have not touched any other code that sets/gets parents or children, am only trying to fix the DAO method, and the query only performs a SQL SELECT, can anyone see why the code broke?
The problem seems similar to Spring transaction management breaks hibernate cascade
This is unlikely to be a problem with Spring; it is more likely an issue with your entity handling/definition. When you use delete-orphan on a relation, the underlying PersistentCollection MUST NOT be dereferenced from the entity itself; you are only allowed to modify the collection instance in place. So if you are trying to do anything clever within your entity setters, that is the cause.
Also, as far as I remember, there are some issues when you have delete-orphan on both sides of the relation and/or load/manipulate both sides within one session.
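The rule can be illustrated with a plain-Java sketch of the two setter styles (Parent here is illustrative; in a real entity the collection field is replaced by Hibernate's PersistentCollection at load time):

```java
import java.util.ArrayList;
import java.util.List;

// With orphan removal, keep the collection instance Hibernate handed you
// and mutate it; never swap it out for a new one.
class Parent {
    private List<String> items = new ArrayList<>();

    // WRONG with cascade=all-delete-orphan: dereferences the managed collection
    void setItemsWrong(List<String> items) {
        this.items = items;
    }

    // RIGHT: the managed instance stays referenced, only its contents change
    void setItemsRight(List<String> items) {
        this.items.clear();
        this.items.addAll(items);
    }

    List<String> items() { return items; }
}
```

The wrong setter is what produces "collection with cascade="all-delete-orphan" was no longer referenced": Hibernate notices at flush time that the collection it was tracking is no longer reachable from the entity.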
Btw. I don't think "hibernate.current_session_context_class", "org.hibernate.context.ThreadLocalSessionContext" is necessary. In our project, this is the only configuration we have:
@Bean
public LocalSessionFactoryBuilder sessionFactoryBuilder() {
    return ((LocalSessionFactoryBuilder) new LocalSessionFactoryBuilder(
            dataSourceConfig.dataSource()).scanPackages(ENTITY_PACKAGES).
            setProperty("hibernate.id.new_generator_mappings", "true").
            setProperty("hibernate.dialect", dataSourceConfig.dialect()).
            setProperty("javax.persistence.validation.mode", "none"));
}

@Bean
public SessionFactory sessionFactory() {
    return sessionFactoryBuilder().buildSessionFactory();
}
The issue was with Session Management. The same block of transactional code was being called by other modules that were doing their own session handling. To add to our woes, some of the calling modules were Spring beans while others were written in direct Hibernate API style. This disorganization was sufficient work to keep us away from moving up to Hibernate 4 immediately.
Moral of the lesson (how do you like that English?): Use a consistent DAO implementation across the entire project and stick to a clearly defined session and transaction management strategy.
