Spring Constraint Validation Context - Database Request Caching

I've written a custom Validation Annotation and a ConstraintValidator implementation, which uses a Spring Service (and executes a Database Query):
public class MyValidator implements ConstraintValidator<MyValidationAnnotation, String> {

    private final MyService service;

    public MyValidator(MyService service) {
        this.service = service;
    }

    @Override
    public void initialize(MyValidationAnnotation constraintAnnotation) {}

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        return service.exists(value);
    }
}
It's used like this:
public class MyEntity {
    @Valid
    List<Foo> list;
}

public class Foo {
    @MyValidationAnnotation
    String id;
}
This works quite nicely, but service.exists(value) is called for every item in the list, which is correct, but could/should be optimized.
Question:
When validating an instance of MyEntity, I'd like to cache the results of the service.exists(value) calls.
I don't want to use a static HashMap<String, Boolean>, because this would cache the results for the entire application lifetime.
Is it possible to access some kind of Constraint Validation Context, which only exists while this particular validation is running, so I can put the cached results there?
Or do you have some other solution?
Thanks in advance!

You can use Spring's cache support. Other parts of the application might need caching as well, so it can be reused, the setup is very simple, and it keeps your code neat and readable.
You can cache your service calls: put the caching annotation on your service methods and add a little bit of configuration.
As a cache provider you can use Ehcache. If needed, you get options such as a TTL, a maximum number of cached elements, eviction policies, and so on.
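A minimal sketch of what that could look like, assuming a MyService implementation backed by a Spring Data repository (the MyRepository interface, its existsById call, and the cache name "existsCache" are illustrative, not from the question):

@Configuration
@EnableCaching
class CachingConfig {
    // declare a CacheManager bean here (or rely on Spring Boot's auto-configuration)
}

@Service
class MyServiceImpl implements MyService {

    private final MyRepository repository; // hypothetical Spring Data repository

    MyServiceImpl(MyRepository repository) {
        this.repository = repository;
    }

    // Spring caches the result per value, so repeated isValid() calls with the
    // same value hit the cache instead of the database.
    @Cacheable("existsCache")
    @Override
    public boolean exists(String value) {
        return repository.existsById(value);
    }
}

Note that, unlike a per-validation context, entries stay cached until the provider's TTL/eviction policy removes them.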
Or you can implement your own cache provider if your needs are simple. If the validation runs inside a web request, a ThreadLocal can be useful: do all the caching for the current thread in a ThreadLocal and clear it once the request has been processed.
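If you go the ThreadLocal route, a rough sketch could look like this (the ValidationCache holder and the filter are assumptions, not existing Spring classes; use the javax or jakarta servlet types matching your stack):

public final class ValidationCache {

    private static final ThreadLocal<Map<String, Boolean>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    private ValidationCache() {
    }

    // The validator calls this instead of hitting the service directly:
    // ValidationCache.exists(value, service::exists)
    public static boolean exists(String value, Function<String, Boolean> loader) {
        return CACHE.get().computeIfAbsent(value, loader);
    }

    // Must be called when the request is done, otherwise stale entries
    // survive on pooled threads.
    public static void clear() {
        CACHE.remove();
    }
}

@Component
public class ValidationCacheFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        try {
            filterChain.doFilter(request, response);
        } finally {
            ValidationCache.clear();
        }
    }
}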

Related

Where to validate uniqueness of field in Spring/Hibernate

I am building a REST API using Spring and Hibernate. I have come across an issue where I want to create a user and want to know the best practice for validating that the user can be created.
My controller has the @Valid annotation on the User object that gets passed into the method, and this checks for valid structure, however there is no @Unique constraint that gets picked up by @Valid.
I am using @Column(unique = true), but this throws an error at the persistence level, which feels quite low level and makes it difficult to throw a custom UsernameAlreadyExistsException().
My question is what the best practice is for performing this type of validation. I thought about creating a custom annotation, but that seems quite messy, especially because as the project grows I would need multiple validators for different fields, and it also ends up tying the service layer to the annotation.
In my opinion, using a custom annotation is the best approach for things like this: you can inject a bean into the ConstraintValidator and perform the validation there. However, you can also try one of the less usual approaches below; maybe one of them fits your requirements.
Spring AOP
Spring Handler Interceptor
JPA Event Listeners
This is just my opinion; in most cases I would create custom annotations to handle it.
A good practice would be to put validation both on the database side (which we know nothing about here, but it is really not complicated) and on the Spring side.
As @kamil-w already said, a good approach is to write a custom constraint validator; see here for an example.
Keep in mind that you can always pass parameters to the constraint annotation and then access them in your ConstraintValidator, for example:
@Entity
public class Member {
    // ...
    @UniqueField(fieldName = "login", validationContext = Member.class)
    private String login;
}

@Component
public class UniqueFieldValidator implements ConstraintValidator<UniqueField, Object> {

    @PersistenceUnit
    private EntityManagerFactory emf;

    private Class<?> validationContext;
    private String fieldName;

    @Override
    public void initialize(UniqueField uniqueField) {
        this.validationContext = uniqueField.validationContext();
        this.fieldName = uniqueField.fieldName();
    }

    @Override
    public boolean isValid(Object value, ConstraintValidatorContext cxt) {
        // use value, this.validationContext, this.fieldName and an EntityManager to check uniqueness
    }
}
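A minimal definition of the @UniqueField annotation used above could look like this (standard Bean Validation boilerplate; the attribute names match the validator, the message text is arbitrary):

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = UniqueFieldValidator.class)
public @interface UniqueField {

    String message() default "field value must be unique";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

    // custom attributes read in UniqueFieldValidator#initialize
    String fieldName();

    Class<?> validationContext();
}

And the isValid body could be implemented with a simple count query; this is only a sketch, it does not handle the "updating the same entity" case and assumes the JPA entity name equals the simple class name:

@Override
public boolean isValid(Object value, ConstraintValidatorContext cxt) {
    EntityManager em = emf.createEntityManager();
    try {
        Long count = em.createQuery(
                "select count(e) from " + validationContext.getSimpleName()
                        + " e where e." + fieldName + " = :value", Long.class)
                .setParameter("value", value)
                .getSingleResult();
        return count == 0;
    } finally {
        em.close();
    }
}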

Common shared data objects for entire application

I have some data objects that are common across a Spring Boot application: one is the logged-in employee object and the other is a category. I have created a @Component class which contains these as static variables. This way I do not even have to autowire them; they can be used directly, like CurrentContext.employee, in controllers.
@Component
public final class CurrentContext {

    public static Category currentCategory;
    public static Employee employee;

    @Autowired
    private CategoryService categoryService;

    @Autowired
    private EmployeeService employeeService;

    @EventListener
    public void onApplicationEvent(ContextRefreshedEvent event) {
        currentCategory = categoryService.getCategory();
    }

    @EventListener
    public void onLoginSuccess(InteractiveAuthenticationSuccessEvent event) {
        employee = employeeService.getEmployeeByUserId(
                ((MyUserDetails) event.getAuthentication().getPrincipal()).getUserId());
    }
}
Is this the right way? Please suggest if there is a better way to handle shared data.
Edit
Some background: I need the currently logged-in employee and a category which is common to all employees. So I autowired employeeService and categoryService in my controllers and used them to get the data. They are required in almost all my controller methods, so I wanted to create a bean for these so that I can use them directly in my controllers and also save frequent database calls.
Normally, we only put dependencies related to cross-cutting concerns (i.e. dependencies used across the whole application, such as security, logging, transaction handling, a time provider, etc.) in static fields.
By accessing these kinds of dependencies in a static way, we don't need to pass them through method parameters or constructors from object to object, which keeps the API much cleaner without such noise (by the way, this is called the Ambient Context pattern in the .NET world).
Your Employee object most probably belongs to this type, so it is OK to access it in a static way. But as its scope is per session, you cannot simply put it in a static field of a class; if you did, you would always get the same employee for all sessions. Instead, you have to store it in an object that is session scoped (e.g. the HttpSession). Then, at the beginning of handling a web request, you get it from the session and put it in a ThreadLocal which is encapsulated inside a "ContextHolder" object. You then access that "ContextHolder" in a static way.
Sounds very complicated and scary? Don't worry, Spring Security has already implemented this for you. What you need to do is customize Authentication#getPrincipal() or extend the default Authentication to contain your Employee, then get it using SecurityContextHolder.getContext().getAuthentication().
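A sketch of what the static access could then look like, assuming MyUserDetails (from the question) is extended with a getEmployee() accessor; the helper class itself is just an illustration, not a Spring Security class:

public final class CurrentEmployee {

    private CurrentEmployee() {
    }

    // Reads the Employee of the currently authenticated user from the
    // thread-bound SecurityContext that Spring Security populates per request.
    public static Employee get() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        if (authentication == null || !(authentication.getPrincipal() instanceof MyUserDetails)) {
            return null;
        }
        return ((MyUserDetails) authentication.getPrincipal()).getEmployee();
    }
}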
For your currentCategory, if it is not a cross-cutting concern and is application scoped, making a singleton bean that provides its value is a much better OO design.
@Component
public final class CurrentCategoryProvider {

    @Autowired
    private CategoryService categoryService;

    public Category getCurrentCategory() {
        // or cache the value in an internal field, depending on your requirements
        return categoryService.getCategory();
    }
}
You then inject CurrentCategoryProvider into the beans that need to access the current category.
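For example (a hypothetical controller, purely to illustrate the injection):

@RestController
public class SomeController {

    private final CurrentCategoryProvider categoryProvider;

    public SomeController(CurrentCategoryProvider categoryProvider) {
        this.categoryProvider = categoryProvider;
    }

    @GetMapping("/current-category")
    public Category currentCategory() {
        return categoryProvider.getCurrentCategory();
    }
}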

Sling / Jackrabbit - resolver / session lifetime and its concurrent consistency

I have an OSGi component which works with JCR (for example, CRUD operations).
@Component
@Service
public class SomeServiceImpl implements SomeService {

    @Reference
    private ResourceResolverFactory resourceResolverFactory;

    private ResourceResolver resourceResolver;

    @Activate
    private void init() throws LoginException {
        resourceResolver = resourceResolverFactory.getServiceResourceResolver(
                Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, "myService"));
    }

    @Override
    public void serve() {
        // does something with resourceResolver
    }

    @Deactivate
    private void dispose() {
        resourceResolver.close();
    }
}
It creates a new instance of resourceResolver and keeps it as long as this service is alive. From time to time this service is invoked from outside.
My questions are:
Is it a correct approach to create the resourceResolver once and reuse it constantly?
Do I have any guarantee that the underlying session will not expire?
By the way, how long do the resourceResolver and its session live, and where can I see that?
What about concurrency? Imagine this service is invoked from several places in parallel; does Jackrabbit guarantee consistency?
@Component
@Service
public class SomeServiceImpl implements SomeService {

    @Reference
    private SlingRepository slingRepository;

    private Session session;

    @Activate
    private void init() throws RepositoryException {
        session = slingRepository.login();
    }

    @Override
    public void serve() {
        // does something with session
    }

    @Deactivate
    private void dispose() {
        session.logout();
    }
}
The same questions apply to the other service (the Session-based implementation).
It would be nice to see some proof if possible, maybe documentation.
Thanks.
Is it a correct approach to create the resourceResolver once and reuse it constantly?
No, it is not. It is a perfect example of a bad practice. Creating a resourceResolver is lightweight; you can create as many as you need.
Note: you always have to close the resourceResolver after usage, but be careful not to close it too early.
Do I have any guarantee that the underlying session will not expire?
No, you don't. AEM collects unclosed sessions after some time.
By the way, how long do the resourceResolver and its session live, and where can I see that?
The session will become invalid after the first concurrent write to the same resource. In real life, a large number of changes can fail on save even without a conflict.
What about concurrency? Imagine this service is invoked from several places in parallel; does Jackrabbit guarantee consistency?
A JCR session supports concurrency only within the scope of that one session. The main assumption is that you always create a new session per update request.
The same questions apply to the other service (the Session-based implementation).
ResourceResolver works on top of the Session; it is just a higher-level API.
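A sketch of the per-call pattern recommended above (the subservice name is taken from the question; error handling is simplified):

@Component
@Service
public class SomeServiceImpl implements SomeService {

    @Reference
    private ResourceResolverFactory resourceResolverFactory;

    @Override
    public void serve() {
        ResourceResolver resolver = null;
        try {
            // open a fresh, short-lived resolver for this call
            resolver = resourceResolverFactory.getServiceResourceResolver(
                    Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, "myService"));
            // ... do the work with resolver
        } catch (LoginException e) {
            throw new RuntimeException("Could not obtain service resource resolver", e);
        } finally {
            if (resolver != null && resolver.isLive()) {
                resolver.close();
            }
        }
    }
}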

Spring Cache Abstraction: How to Deal With java.util.Optional<T>

We have a lot of code in our code base that's similar to the following interface:
public interface SomethingService {

    @Cacheable(value = "singleSomething")
    Optional<Something> fetchSingle(int somethingId);

    // more methods...
}
This works fine as long as we're only using local caches. But as soon as we're using a distributed cache like Hazelcast, things start to break because java.util.Optional<T> is not serializable and thus cannot be cached.
What I've come up with so far to solve this problem:
Removing java.util.Optional<T> from the method definitions and instead checking for the trusty null.
Unwrapping java.util.Optional<T> before caching the actual value.
I want to avoid (1) because it would involve a lot of refactoring. And I have no idea how to accomplish (2) without implementing my own org.springframework.cache.Cache.
What other options do I have? I would prefer a generic (Spring) solution that would work with most distributed caches (Hazelcast, Infinispan, ...) but I would accept a Hazelcast-only option too.
A potential solution would be to register a serializer for the Optional type. Hazelcast has a flexible serialization API and you can register a serializer for any type.
For more information see the following example:
https://github.com/hazelcast/hazelcast-code-samples/tree/master/serialization/stream-serializer
So something like this:
public class OptionalSerializer implements StreamSerializer<Optional> {

    @Override
    public void write(ObjectDataOutput out, Optional object) throws IOException {
        if (object.isPresent()) {
            out.writeObject(object.get());
        } else {
            out.writeObject(null);
        }
    }

    @Override
    public Optional read(ObjectDataInput in) throws IOException {
        Object result = in.readObject();
        return result == null ? Optional.empty() : Optional.of(result);
    }

    @Override
    public int getTypeId() {
        return 0; // TODO: assign a unique type id
    }

    @Override
    public void destroy() {
    }
}
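To actually use the serializer it has to be registered with Hazelcast; a sketch of the programmatic configuration using the standard SerializerConfig API (how you obtain the Config depends on your setup):

Config config = new Config();
config.getSerializationConfig().addSerializerConfig(
        new SerializerConfig()
                .setTypeClass(Optional.class)
                .setImplementation(new OptionalSerializer()));
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);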
However, the solution isn't perfect, because the Optional wrapper becomes part of what is actually stored: internally the Optional is serialized along with its value, and this can lead to problems with e.g. queries.

How does delete operation work with Rest in Spring Data

Currently we have exposed our methods like this
@RestController
@RequestMapping("/app/person")
public class PersonResource {

    @Timed
    @RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
    public void delete(@PathVariable Long id) {
        log.debug("REST request to delete Person: {}", id);
        personRepository.delete(id);
    }
}
The operations of this method, in terms of input and output, are very clear to the developer using it.
This article http://spring.io/guides/gs/accessing-data-rest/ shows how to expose JpaRepository interfaces directly, obviating the need for a service layer.
#RepositoryRestResource(collectionResourceRel="people", path="people")
public interface PersonRepository extends JpaRepository<PersonEntity, Long> {
}
It is not obvious to me how I can make a "delete operation" available with PathVariable Long id.
There is an excellent article on this topic: https://github.com/spring-projects/spring-data-rest/wiki/Configuring-the-REST-URL-path
But it actually shows how to suppress the export of a delete operation.
As documented here, Spring Data REST will expose item resources for the repository you declare. Thus, all you need to do is discover the URI of the resource to delete and issue a DELETE request to it.
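So, with the PersonRepository exported under people as above, the item resource for the person with id 1 lives at /people/1, and deleting it is just an HTTP DELETE against that URI (the equivalent of curl -X DELETE http://localhost:8080/people/1; host and port are assumptions). From Java code it could look like this:

RestTemplate restTemplate = new RestTemplate();
restTemplate.delete("http://localhost:8080/people/{id}", 1L);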
