How can I do bean validation with Spring repositories?

I'm trying to use bean validation on my repository interface, which looks like this:
interface SomeRepository extends JpaRepository<Some, Long> {
    @org.springframework.lang.Nullable
    Some findByKey(
            @org.springframework.lang.NonNull
            @javax.validation.constraints.NotNull
            final String key);
}
I found that those constraints don't work as expected:
@Test
void findByKeyWithNullKey() {
    repository.findByKey(null);
}
The test case simply passes.
How can I make it work?

According to the Spring Data JPA documentation:
To enable runtime checking of nullability constraints for query methods, you need to activate non-nullability on the package level by using Spring's @NonNullApi.
You can add package-level annotations by creating a package-info.java file that contains the declaration of the package it applies to. Then add the annotation to your package like so:
@org.springframework.lang.NonNullApi
package com.example;
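With @NonNullApi in place, Spring Data rejects null arguments to parameters that are not marked @Nullable. A minimal sketch of what the test from the question could then assert, assuming JUnit 5 (Spring Data typically reports the violation as an IllegalArgumentException, but verify against your version):
import static org.junit.jupiter.api.Assertions.assertThrows;

@Test
void findByKeyWithNullKey() {
    // with @NonNullApi active, Spring Data rejects the null argument before the query is executed
    assertThrows(IllegalArgumentException.class, () -> repository.findByKey(null));
}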

I would suggest using javax (Bean) validation in your Spring application. If you are using Maven, you just have to include the dependency below:
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</dependency>
After that, try the code below:
Some findByKey(@NotNull final String key);
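Note that @NotNull on a parameter is only enforced when method-level validation is active for the bean. The following is just a sketch of one way to enable it, assuming Spring Boot with hibernate-validator on the classpath (Boot auto-registers a MethodValidationPostProcessor); putting @Validated on the repository interface is an assumption not shown in the original answer, so verify it with a test:
import javax.validation.constraints.NotNull;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.validation.annotation.Validated;

@Validated // switches on method-level constraint checking for this Spring bean
interface SomeRepository extends JpaRepository<Some, Long> {
    // a violation is reported as a ConstraintViolationException at call time
    Some findByKey(@NotNull final String key);
}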

It works with the code as you pasted it. Of course, you need to use @Repository on the repository and remove @javax.validation.constraints.NotNull since that's not what you want. Furthermore, you need to make sure you have the proper dependencies in the pom.
I'd recommend doing the reverse: add @NonNullApi at the package level, then:
Rule findOneByExpression(@Nullable String expression);
ruleRepository.findOneByExpression(null);
and see it fail if it returns null. Then change it like so:
@Nullable
Rule findOneByExpression(@Nullable String expression);
And it will pass.
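A short sketch of what such a test could look like, assuming JUnit 5 and the Rule/ruleRepository names from the answer:
import static org.junit.jupiter.api.Assertions.assertNull;

@Test
void findOneByExpressionAllowsNullArgumentAndNullResult() {
    // @Nullable on the parameter permits the null argument,
    // @Nullable on the method permits the missing result to be returned as null
    Rule rule = ruleRepository.findOneByExpression(null);
    assertNull(rule);
}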

Related

Forcing validation annotation to provide a message

I am using Hibernate Validator to do POJO validation, and I have also created some custom constraints. Here is an example:
// lombok annotations
public class Address {
    @NotNull // standard
    @State   // custom
    String country;
}
We have a requirement to represent all validation errors with specific codes rather than messages. To achieve this, we have decided to specify codes in every annotation that we use. The above example now looks like this:
// lombok annotations
public class Address {
    @NotNull(message = "ERR_001")
    @State(message = "ERR_002")
    String country;
}
But we have a problem with this approach: we cannot enforce that a message (an error code in our case) is provided every time an annotation is used. For our custom annotations it is still OK because we do not provide a default message, but for the standard ones there is a chance to miss it, and a default string message will be silently generated if we accidentally forget to provide a custom one.
Is there a way to enforce providing a message in the annotation every time? It would help us keep things consistent.
To my knowledge, no, there is no way to do that. Maybe your best option is to create your own annotation and make the attribute mandatory.
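A minimal sketch of that idea: a composed constraint whose message element has no default value, so the compiler forces every usage to supply an error code. The annotation name is hypothetical, and @ReportAsSingleViolation is added so the supplied code (rather than @NotNull's default message) is reported; check this against your Hibernate Validator version:
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.Payload;
import javax.validation.ReportAsSingleViolation;
import javax.validation.constraints.NotNull;

// Hypothetical replacement for @NotNull: message has no default, so only
// usages like @RequiredWithCode(message = "ERR_001") compile.
@Documented
@NotNull
@ReportAsSingleViolation
@Constraint(validatedBy = {}) // purely composed constraint, no own validator
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface RequiredWithCode {
    String message(); // no default -> mandatory
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}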
Sevntu-Checkstyle provides additional checks for Checkstyle, including one that verifies an annotation is used with all of its required parameters:
<module name="RequiredParameterForAnnotation">
<property name="annotationName" value="NotNull"/>
<property name="requiredParameters" value="message"/>
</module>
I could not find a good way to handle it, but for now I have implemented a test which gives us some control over it. It's not the best solution, but it solves the issue for now.
I am using ClassGraph to read all the annotations on the POJO classes inside a package, filtering on the javax validation constraints; if the default message appears to come from javax.validation, I add the annotation to a list.
Later, in a unit test, I check whether this list is empty.
// names of the javax.validation constraints in the given annotation list that still carry the default message
private List<String> getAnnotationProperties(String appliedOn, AnnotationInfoList annotationInfos) {
    return annotationInfos.stream()
            .filter(annotationInfo -> annotationInfo.getName().contains("javax.validation.constraints"))
            .filter(annotationInfo -> ((String) annotationInfo.getParameterValues().getValue("message")).contains("javax.validation.constraints"))
            .map(annotationInfo -> annotationInfo.getName())
            .collect(Collectors.toList());
}
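For context, a sketch of how that helper might be driven from a test with ClassGraph; the package name is an assumption, and acceptPackages/getFieldInfo are the ClassGraph calls I would expect here, so double-check them against your ClassGraph version:
import static org.junit.jupiter.api.Assertions.assertTrue;

import io.github.classgraph.ClassGraph;
import io.github.classgraph.ClassInfo;
import io.github.classgraph.FieldInfo;
import io.github.classgraph.ScanResult;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

@Test
void allValidationConstraintsDeclareAnErrorCode() {
    try (ScanResult scanResult = new ClassGraph()
            .enableAllInfo()
            .acceptPackages("com.example.model") // hypothetical package containing the POJOs
            .scan()) {
        List<String> offenders = new ArrayList<>();
        for (ClassInfo classInfo : scanResult.getAllClasses()) {
            for (FieldInfo fieldInfo : classInfo.getFieldInfo()) {
                offenders.addAll(getAnnotationProperties(
                        classInfo.getName() + "." + fieldInfo.getName(),
                        fieldInfo.getAnnotationInfo()));
            }
        }
        assertTrue(offenders.isEmpty(), "Constraints still using the default message: " + offenders);
    }
}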

QueryException when using Spring Data Rest with EclipseLink on Multi-Tenant System

I am using Spring Data REST and EclipseLink to create a multi-tenant, single-table application, but I am not able to create a repository that I can query with custom query parameters.
My Kid class:
@Entity
@Table(name = "kid")
@Multitenant
public class Kid {
    @Id
    private Long id;
    @Column(name = "tenant_id")
    private String tenant_id;
    @Column(name = "mother_id")
    private Long motherId;
    // more attributes, constructor, getters and setters
}
My KidRepository:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {}
When I call localhost/kids I get the following exception:
Exception [EclipseLink-6174] (Eclipse Persistence Services - 2.7.4.v20190115-ad5b7c6b2a): org.eclipse.persistence.exceptions.QueryException
Exception Description: No value was provided for the session property [eclipselink.tenant-id].
This exception is possible when using additional criteria or tenant discriminator columns without specifying the associated contextual property.
These properties must be set through EntityManager, EntityManagerFactory or persistence unit properties.
If using native EclipseLink, these properties should be set directly on the session.
When I remove the @Multitenant annotation from my entity, everything works fine, so it definitely has something to do with EclipseLink.
When I don't extend QuerydslPredicateExecutor it works too, but then I have to implement all findBy* methods myself. And even doing so, it breaks again. Changing my KidRepository to:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}
When I now call localhost/kids/search/findByMotherId?motherId=1 I get the same exception as above.
I used this tutorial to set up EclipseLink with JPA: https://blog.marcnuri.com/spring-data-jpa-eclipselink-configuring-spring-boot-to-use-eclipselink-as-the-jpa-provider/, meaning the PlatformTransactionManager, createJpaVendorAdapter and getVendorProperties are overridden.
The tenant-id comes with a JWT, and everything works fine as long as I don't use QuerydslPredicateExecutor, which is mandatory for the use case.
It turns out that the wrong JpaTransactionManager is used when I rely on the QuerydslPredicateExecutor. I couldn't find out which one is created, but with multiple breakpoints set inside the EclipseLink framework code, none of them were hit. This is true both when using the QuerydslPredicateExecutor and when using the custom findBy method.
I have googled a lot and tried to override some of the basic EclipseLink methods, but none of that worked. I am running out of options.
Does anyone have any idea how to fix or work around this?
I was looking for a solution to the same issue; what finally helped was adding Spring's @Transactional annotation either to the repository or to any place from which the custom query is called. (It even works with javax.transaction.Transactional.) We had the @Transactional annotation on most of our services, so the issue was not obvious and its occurrence seemed rather accidental.
A more detailed explanation of using @Transactional on repositories is here: How to use @Transactional with Spring Data?.
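A minimal sketch of the first option, reusing the KidRepository from the question; the point is simply that repository calls now run inside a Spring-managed transaction, which is what made the tenant property available in the setup described above:
import org.springframework.data.querydsl.QuerydslPredicateExecutor;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.transaction.annotation.Transactional;

@Transactional // Spring's annotation; javax.transaction.Transactional reportedly works as well
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {
}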

Java Configuration vs Component Scan Annotations

Java configuration allows us to manage bean creation within a configuration file. Annotated @Component and @Service classes used with component scanning do the same. However, I'm concerned about using these two mechanisms at the same time.
Should Java configuration and annotated component scans be avoided in the same project? I ask because the result is unclear in the following scenario:
@Configuration
public class MyConfig {
    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}
...
@Component
public class Foo {
    private int value;

    public Foo() {
    }

    public Foo(int value) {
        this.value = value;
    }
}
...
public class Consumer {
    @Autowired
    Foo foo;
    ...
}
So, in the above situation, will the Consumer get a Foo instance with a 500 value or 0 value? I've tested locally and it appears that the Java configured Foo (with value 500) is created consistently. However, I'm concerned that my testing isn't thorough enough to be conclusive.
What is the real answer? Using both Java config and component scanning on @Component beans of the same type seems like a bad thing.
I think your concern arises from a use case like the following:
You have a custom spring-starter library that has its own @Configuration classes and @Bean definitions. If you also have @Component/@Service classes in this library, you will need to explicitly @ComponentScan those packages from your service, since the default @ComponentScan (see @SpringBootApplication) scans from the main class down through your app's sub-packages, but not the packages inside the external library. For that reason, you should only have @Bean definitions in your external library and inject those external configurations via an @EnableSomething annotation on your app's main class (using @Import(YourConfigurationAnnotatedClass.class)), or via spring.factories in case the external configuration should always be applied.
Of course, you CAN have @Components in this library, but the explicit use of the @ComponentScan annotation may lead to unintended behaviour in some cases, so I would recommend avoiding that.
So, to answer your question: you can use both approaches to define beans inside your app, but bean definitions outside your app (e.g. in a library) should be declared explicitly with @Bean inside a @Configuration class.
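A rough sketch of the library-plus-@Import arrangement described above; all class names here are illustrative, not taken from the question:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// shipped inside the library: explicit bean definitions only, no @Component scanning needed
@Configuration
public class LibraryConfig {
    @Bean
    public LibraryClient libraryClient() {
        return new LibraryClient();
    }
}

// in the consuming application: pull the library configuration in explicitly
@SpringBootApplication
@Import(LibraryConfig.class)
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}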
It is perfectly valid to have Java configuration and annotated component scans in the same project because they serve different purposes.
@Component (@Service, @Repository, etc.) is used to auto-detect and auto-configure beans.
The @Bean annotation is used to explicitly declare a single bean, instead of letting Spring do it automatically.
You can do the following with @Bean, which is not possible with @Component:
@Bean
public MyService myService(boolean someCondition) {
    if (someCondition) {
        return new MyServiceImpl1();
    } else {
        return new MyServiceImpl2();
    }
}
I haven't really faced a situation where both Java config and component scanning for a bean of the same type were required.
As per the Spring documentation:
To declare a bean, simply annotate a method with the @Bean annotation. When JavaConfig encounters such a method, it will execute that method and register the return value as a bean within a BeanFactory. By default, the bean name will be the same as the method name.
So, as per this, the correct Foo (with value 500) is returned.
In general, there is nothing wrong with component scanning and explicit bean definitions in the same application context. I tend to use component scanning where possible, and create the few beans that need more setup with @Bean methods.
There is no upside to including classes in the component scan when you create beans of their type explicitly. Component scanning can easily be targeted at certain classes and packages; if you design your packages accordingly, you can scan only the packages without "special" bean classes (or use more advanced filters on scanning), as in the sketch below.
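A minimal sketch of targeting the scan, assuming the "special" beans live in a package named com.example.special (both package names are illustrative):
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;

@Configuration
// scan the application packages, but leave out the package whose beans are declared via @Bean methods
@ComponentScan(
        basePackages = "com.example",
        excludeFilters = @ComponentScan.Filter(
                type = FilterType.REGEX,
                pattern = "com\\.example\\.special\\..*"))
public class ScanConfig {
}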
In a quick look I didn't find any clear information about bean definition precedence in such a case. Typically there is a deterministic and fairly stable order in which these are processed, but if it is not documented, it might change in some future Spring version.

@Reference from Servlet Filter

I am writing a Servlet Filter and would like to use one of my Liferay components in it via @Reference:
package my.filter;

import my.Compo;
import org.osgi.service.component.annotations.Reference;

public class MyFilter implements Filter {
    @Override
    public void doFilter(...) {
        compo.doTheThing();
    }

    @Reference(unbind = "-")
    protected my.Compo compo;
}
I get this Java compilation error:
annotation type not applicable to this kind of declaration
What am I doing wrong?
Is it maybe impossible to achieve this?
As Miroslav hinted, @Reference can only be used in an OSGi component, and a servlet filter is not one.
The solution in Liferay 7 is to develop a filter component.
The procedure to do so is explained at http://www.javasavvy.com/liferay-dxp-filter-tutorial/
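For orientation, a filter registered as an OSGi component typically has roughly the shape sketched below. The component properties (servlet-context-name, servlet-filter-name, url-pattern) and the overall wiring are assumptions based on the linked tutorials, so verify them for your Liferay version:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    immediate = true,
    property = {
        "servlet-context-name=",        // register against the portal's servlet context
        "servlet-filter-name=My Filter",
        "url-pattern=/*"
    },
    service = Filter.class
)
public class MyFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        compo.doTheThing();
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }

    // now resolvable, because MyFilter is itself an OSGi component
    @Reference(unbind = "-")
    protected my.Compo compo;
}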
You can make a simple filter like: https://www.e-systems.tech/blog/-/blogs/filters-in-liferay-7 and http://www.javasavvy.com/liferay-dxp-filter-tutorial/
But you can also use regular filters, as long as you configure your Liferay webapp for that. There are two consequences if you use regular filters though: you will be outside the OSGi application, and you will have to keep track of this whenever you update your bundle. That is why you should not go with the regular implementation. (Just complementing the OP's answer with the underlying reason to avoid that route.)

How to write a Predicate for QueryDslPredicateExecutor

When I try to use Querydsl as shown in the Spring Data 1.10.4.RELEASE reference, I get some errors from my IDE:
Cannot resolve method findAll(predicate). I changed the import to com.mysema.query.types.Predicate, and now the method looks fine.
But I can't resolve the problem with:
Predicate predicate = user.getUsername().equalsIgnoreCase(username).and((user.getId().equals(userid)).not);
I get these errors: cannot resolve method and, cannot resolve method not.
Some excerpts from the reference:
Example 32. Querydsl integration on repositories
interface UserRepository extends CrudRepository<User, Long>, QueryDslPredicateExecutor<User> {
}
The above enables writing type-safe queries using Querydsl Predicates.
Predicate predicate = user.firstname.equalsIgnoreCase("dave")
.and(user.lastname.startsWithIgnoreCase("mathews"));
userRepository.findAll(predicate);
But the example is incorrect.
Does anybody know how to use this?
You are probably using the wrong user object to start with. I assume you are currently using your domain class User, but you need to use the class generated by Querydsl, normally named QUser.
See https://github.com/querydsl/querydsl/tree/master/querydsl-jpa for example code.
See QueryDsl - How to create Q classes with maven? for how to generate the necessary classes with Querydsl.
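A short sketch of what the predicate from the question might look like with the generated type; QUser and the property names are assumptions based on the quoted reference example and the question's domain class:
// Q-types expose properties as fields (user.username), not via getters
QUser user = QUser.user;
Predicate predicate = user.username.equalsIgnoreCase(username)
        .and(user.id.eq(userid).not());
Iterable<User> result = userRepository.findAll(predicate);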
