Specify query hints for SimpleJpaRepository find methods - spring

Goal
I'm implementing a custom base repository with the goal of being able to specify attribute nodes of entity graphs by method argument instead of method annotation. So instead of having annotated methods like
@EntityGraph(attributePaths = ["enrollments"])
fun findByIdFetchingEnrollments(id: Long): Optional<Student>
in my repositories, I just have a base repository with a method like
fun findById(id: ID, vararg attributeNodes: String): Optional<T>
which gives a lot of flexibility because with only one method I can, for example, call
studentRepository.findById(1L, "enrollments")
// or
studentRepository.findById(1L, "favouriteSubjects", "favouriteTeachers")
// and others...
where "enrollments", "favouriteSubjects", and "favouriteTeachers" are lazily fetched entity associations by default but sometimes need to be fetched eagerly (to avoid a LazyInitializationException).
What I have
These are my classes:
// Base repository interface. All my repositories will extend this interface
@NoRepositoryBean
interface Repository<T, ID> : JpaRepository<T, ID> {
    fun findById(id: ID, vararg attributeNodes: String): Optional<T>
    // other find methods
}
// Custom base class. Inherits SimpleJpaRepository
class RepositoryImpl<T, ID>(
    val entityInformation: JpaEntityInformation<T, Any?>,
    val em: EntityManager
) : SimpleJpaRepository<T, ID>(entityInformation, em), Repository<T, ID> {

    // This is basically a copy-paste of the SimpleJpaRepository findById implementation...
    override fun findById(id: ID, vararg attributeNodes: String): Optional<T> {
        Assert.notNull(id, "The given id must not be null!")

        /*
        Because 'repositoryMethodMetadata.queryHints' is read-only, I have to create a new Map,
        put all the queryHints entries into it and then add the 'javax.persistence.loadgraph' hint.
        */
        val graph = em.createEntityGraph(entityInformation.javaType)
        graph.addAttributeNodes(*attributeNodes)
        val hints: MutableMap<String, Any?> = HashMap()
        // repositoryMethodMetadata may be null, so only copy its hints when it is present
        repositoryMethodMetadata?.let { hints.putAll(it.queryHints) }
        hints["javax.persistence.loadgraph"] = graph

        if (repositoryMethodMetadata == null) {
            return Optional.ofNullable(em.find(domainClass, id, hints))
        }
        val type = repositoryMethodMetadata!!.lockModeType
        return Optional.ofNullable(
            if (type == null) em.find(domainClass, id, hints)
            else em.find(domainClass, id, type, hints)
        )
    }

    /*
    In order to implement the other find methods, the same kind of copy-paste implementation
    needs to be done, which shouldn't be necessary, but I'm not seeing how to avoid it.
    */
}
@Configuration
@EnableJpaRepositories(
    "com.domain.project.repository",
    repositoryBaseClass = RepositoryImpl::class)
class ApplicationConfiguration
Problem
The problem is that I'm not seeing a simple way of adding the javax.persistence.loadgraph hint to the query hints that SimpleJpaRepository passes to EntityManager.find.
I think the best way to solve this would be to override the SimpleJpaRepository getQueryHints() method, which is used by all find methods in SimpleJpaRepository. Then my code in RepositoryImpl would become much simpler (override all find methods, and for each one just add the hint and call the super method). But I can't override it because the QueryHints class is package-private.
If the SimpleJpaRepository metadata property were mutable, I could also add query hints to it (which in turn are considered in the getQueryHints() method), but all its properties are read-only, and I think metadata is sometimes null (not sure about this, but it is marked as @Nullable).
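One workaround that at least cuts down on the duplication, without touching the package-private QueryHints class, is to move the hint building into a private helper and keep each overridden find method thin. A minimal sketch under that idea, reusing the fields already declared in RepositoryImpl above (buildHints is just an illustrative helper name):

    private fun buildHints(vararg attributeNodes: String): Map<String, Any?> {
        val hints: MutableMap<String, Any?> = HashMap()
        // copy the metadata hints defensively, since repositoryMethodMetadata may be null
        repositoryMethodMetadata?.let { hints.putAll(it.queryHints) }
        if (attributeNodes.isNotEmpty()) {
            val graph = em.createEntityGraph(entityInformation.javaType)
            graph.addAttributeNodes(*attributeNodes)
            hints["javax.persistence.loadgraph"] = graph
        }
        return hints
    }

    override fun findById(id: ID, vararg attributeNodes: String): Optional<T> {
        Assert.notNull(id, "The given id must not be null!")
        val hints = buildHints(*attributeNodes)
        val type = repositoryMethodMetadata?.lockModeType
        return Optional.ofNullable(
            if (type == null) em.find(domainClass, id, hints)
            else em.find(domainClass, id, type, hints)
        )
    }

This still doesn't hook into getQueryHints(), so the query-based find methods (findAll and friends) would still need the same map applied by hand, e.g. via TypedQuery.setHint, if those implementations get copied as well.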

Related

How to avoid Spring Repository<T, ID> to leak persistence information into service tier

I'm using spring-data-mongodb at the moment so this question is primarily in context of MongoDB but I suspect my question applies to repository code in general.
Out of the box when using a MongoRepository<T, ID> interface (or any other Repository<T, ID> descendent) the entity type T is expected to be the document type (the type that defines the document schema).
As a result, injecting such a repository into a service component means the repository leaks database schema information into the service tier (highly simplified pseudo-code):
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

interface MyRepository extends MongoRepository<MyDocument, String> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        var documentId = convert(id, ...);
        var matchingDocument = repository.findById(documentId).orElse(...);
        var model = convert(matchingDocument, ...);
        return model;
    }
}
Whilst ideally I'd want to do this :
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

@Configuration
class MyMagicConversionConfig {
    ...
}

class MyDocumentToModelConverter implements Converter<MyDocument, MyModel> {
    ...
}

class MyModelToDocumentConverter implements Converter<MyModel, MyDocument> {
    ...
}

// Note that the model and the model's ID type are used in the repository declaration
interface MyRepository extends MongoRepository<MyModel, UUID> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        // Repository now returns the model because it was converted upstream
        // by the mongo persistence layer.
        var matchingModel = repository.findById(id).orElse(...);
        return matchingModel;
    }
}
Defining this conversion once seems significantly more practical than having to consistently do it throughout your service code so I suspect I'm just missing something.
But of course this requires some way to inform the Mongo mapping layer of what conversion has to be applied to move between MyModel and MyDocument, and to use the latter as its actual source of mapping metadata (e.g. @Document, @Id, etc.).
I've been fiddling with custom converters but I just can't seem to make the MongoDB mapping component do the above.
My two questions are:
1. Is it currently possible to define custom converters or implement callbacks that allow me to define and implement this model <-> document conversion once and abstract it away from my service tier?
2. If not, what is the idiomatic way to approach cleaning this up so that the service layer can stay blissfully unaware of how, or with what schema, an entity is persisted? A lot of Spring Boot codebases appear to be fine with using the type that defines the database schema as their model, but that seems suboptimal. Suggestions welcome!
Thanks!
I think you're blowing things a bit out of proportion. The service layer is not aware of the schema. It is aware of the types returned by the repository. How the properties of those are mapped onto the schema depends on the object-document mapping. This, by default, uses the property name, as that's the most straightforward thing to do. That translation can be customized either with annotations on the document type or by registering a FieldNamingStrategy with Spring Data MongoDB.
Spring Data MongoDB's object-document mapping subsystem provides a lot of customization hooks that allow transforming arbitrary MongoDB documents into entities. The types which the repositories return are your domain objects that - again, only by default - are mapped onto a MongoDB document 1:1, simply because that's the most reasonable thing to do in the first place.
If really in doubt, you can manually implement individual repository methods and use the MongoTemplate API, which lets you explicitly define the type the data should be projected into.
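As an illustration of that last point, a manually implemented repository fragment might look roughly like this (a sketch in Kotlin; MyModelQueries, findModelById and the assumption that MyModel can be built directly from MyDocument's fields are all illustrative, not an existing API):

import java.util.UUID
import org.springframework.data.mongodb.core.MongoTemplate
import org.springframework.data.mongodb.core.query.Criteria
import org.springframework.data.mongodb.core.query.Query
import org.springframework.data.mongodb.repository.MongoRepository

// Hand-written fragment: the document-to-model conversion lives here once,
// so the service tier only ever sees MyModel.
interface MyModelQueries {
    fun findModelById(id: UUID): MyModel?
}

// Spring Data picks this implementation up by naming convention (interface name + "Impl").
class MyModelQueriesImpl(private val mongoTemplate: MongoTemplate) : MyModelQueries {

    override fun findModelById(id: UUID): MyModel? {
        val query = Query(Criteria.where("_id").`is`(id.toString()))
        // Query with the document type, which carries the mapping metadata...
        val document = mongoTemplate.findOne(query, MyDocument::class.java)
        // ...and project it into the model type before it leaves the repository.
        return document?.let { MyModel(UUID.fromString(it.id)) }
    }
}

// The derived repository extends the fragment, so both live behind one interface.
interface MyRepository : MongoRepository<MyDocument, String>, MyModelQueries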
You can use something like MapStruct or write your own Singleton Mapper.
Then create default methods in your repository:
interface DogRepository extends MongoRepository<DogDocument, String> {

    default DogModel dogById(String id) {
        // findById is inherited from MongoRepository and returns Optional<DogDocument>
        return findById(id)
                .map(DogMapper.INSTANCE::toModel)
                .orElse(null);
    }
}
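For completeness, the DogMapper referenced above could be a MapStruct mapper along these lines (a sketch, shown here in Kotlin; in plain Java, INSTANCE would typically be a static field initialised with Mappers.getMapper(DogMapper.class), and MapStruct must be on the annotation-processor path):

import org.mapstruct.Mapper
import org.mapstruct.factory.Mappers

// Field mapping is left to MapStruct's defaults, i.e. matching property names.
@Mapper
interface DogMapper {
    fun toModel(document: DogDocument): DogModel
    fun toDocument(model: DogModel): DogDocument

    companion object {
        val INSTANCE: DogMapper = Mappers.getMapper(DogMapper::class.java)
    }
}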

How to link a Vaadin Grid with the result of Spring Mono WebClient data

This seems to be a missing part in the documentation of Vaadin...
I call an API to get data in my UI like this:
@Override
public URI getUri(String url, PageRequest page) {
    return UriComponentsBuilder.fromUriString(url)
            .queryParam("page", page.getPageNumber())
            .queryParam("size", page.getPageSize())
            .queryParam("sort", (page.getSort().isSorted() ? page.getSort() : ""))
            .build()
            .toUri();
}

@Override
public Mono<Page<SomeDto>> getDataByPage(PageRequest pageRequest) {
    return webClient.get()
            .uri(getUri(URL_API + "/page", pageRequest))
            .retrieve()
            .bodyToMono(new ParameterizedTypeReference<>() {
            });
}
In the Vaadin documentation (https://vaadin.com/docs/v10/flow/binding-data/tutorial-flow-data-provider), I found an example with DataProvider.fromCallbacks but this expects streams and that doesn't feel like the correct approach as I need to block on the requests to get the streams...
DataProvider<SomeDto, Void> lazyProvider = DataProvider.fromCallbacks(
        q -> service.getData(PageRequest.of(q.getOffset(), q.getLimit())).block().stream(),
        q -> service.getDataCount().block().intValue()
);
When trying this implementation, I get the following error:
org.springframework.core.codec.CodecException: Type definition error: [simple type, class org.springframework.data.domain.Page]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `org.springframework.data.domain.Page` (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information
at [Source: (io.netty.buffer.ByteBufInputStream); line: 1, column: 1]
grid.setItems(lazyProvider);
I don't have experience with Vaadin, so I'll talk about the deserialization problem.
Jackson needs a Creator when deserializing. That's either:
the default no-arg constructor,
another constructor annotated with @JsonCreator, or
a static factory method annotated with @JsonCreator.
If we take a look at Spring's implementations of Page - PageImpl and GeoPage - they have neither of those. So you have two options:
Option 1: Write your custom deserializer and register it with the ObjectMapper instance.
The deserializer:
public class PageDeserializer<T> extends StdDeserializer<Page<T>> {

    public PageDeserializer() {
        super(Page.class);
    }

    @Override
    public Page<T> deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JacksonException {
        //TODO implement for your case
        return null;
    }
}
And registration:
SimpleModule module = new SimpleModule();
module.addDeserializer(Page.class, new PageDeserializer<>());
objectMapper.registerModule(module);
Option 2: Make your own classes extending PageImpl, PageRequest, etc., and annotate their constructors with @JsonCreator and the constructor arguments with @JsonProperty.
Your page:
public class MyPage<T> extends PageImpl<T> {

    @JsonCreator
    public MyPage(@JsonProperty("content_prop_from_json") List<T> content,
                  @JsonProperty("pageable_obj_from_json") MyPageable pageable,
                  @JsonProperty("total_from_json") long total) {
        super(content, pageable, total);
    }
}
Your pageable:
public class MyPageable extends PageRequest {

    @JsonCreator
    public MyPageable(@JsonProperty("page_from_json") int page,
                      @JsonProperty("size_from_json") int size,
                      @JsonProperty("sort_object_from_json") Sort sort) {
        super(page, size, sort);
    }
}
Depending on your needs for the Sort object, you might need to create a MySort as well, or you can remove it from the constructor and supply, for example, an unsorted Sort to the super constructor. If you are deserializing the input manually, you need to provide the type parameters like this:
JavaType javaType = TypeFactory.defaultInstance().constructParametricType(MyPage.class, MyModel.class);
Page<MyModel> deserialized = objectMapper.readValue(pageString, javaType);
If the input is from request body, for example, just declaring the generic type in the variable is enough for object mapper to pick it up.
@PostMapping("/deserialize")
public ResponseEntity<String> deserialize(@RequestBody MyPage<MyModel> page) {
    return ResponseEntity.ok("OK");
}
Personally I would go for the second option; even though you have to create more classes, it spares you the tedium of extracting properties and creating instances manually when writing deserializers.
There are two parts to this question.
The first one is about asynchronously loading data for a DataProvider in Vaadin. This isn't supported since Vaadin has prioritized the typical case with fetching data straight through JDBC. This means that you end up blocking a thread while the data is loading. Vaadin 23 will add support for doing that blocking on a separate thread instead of keeping the UI thread blocked, but it will still be blocking.
The other half of your problem doesn't seem to be directly related to Vaadin. The exception message says that the Jackson instance used by the REST client isn't configured to support creating instances of org.springframework.data.domain.Page. I don't have direct experience with this part of the problem, so I cannot give any advice on exactly how to fix it.
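If you go with the custom deserializer route from the other answer, note that the module also has to reach the ObjectMapper that WebClient's codecs use, since that is the "Jackson instance used by the REST client". A sketch of one way to wire that up (shown in Kotlin; pageAwareWebClient is just an illustrative factory name):

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.module.SimpleModule
import org.springframework.data.domain.Page
import org.springframework.http.MediaType
import org.springframework.http.codec.json.Jackson2JsonDecoder
import org.springframework.web.reactive.function.client.WebClient

fun pageAwareWebClient(baseMapper: ObjectMapper): WebClient {
    // Copy the mapper so the Page customization stays local to this client.
    val mapper = baseMapper.copy()
    mapper.registerModule(
        SimpleModule().addDeserializer(Page::class.java, PageDeserializer<Any>())
    )

    // Replace the default JSON decoder with one backed by the customized mapper.
    return WebClient.builder()
        .codecs { configurer ->
            configurer.defaultCodecs().jackson2JsonDecoder(Jackson2JsonDecoder(mapper, MediaType.APPLICATION_JSON))
        }
        .build()
}

If the WebClient comes from Spring Boot's auto-configured WebClient.Builder, registering the module on the application's ObjectMapper may already be enough, since Boot wires that mapper into the builder's codecs; either way, the decoder has to see the module.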

findById with Quarkus, Mongodb and Panache raises the error: "This method is normally automatically overridden in subclasses"

I began with the Quarkus tutorial (https://quarkus.io/guides/mongodb-panache) to learn how to use MongoDB in Quarkus. My entities are being stored correctly in the database, but when I try to use findByIdOptional from PanacheMongoRepository the console displays this error:
Caused by: java.lang.IllegalStateException: This method is normally automatically overridden in subclasses
at io.quarkus.mongodb.panache.runtime.MongoOperations.implementationInjectionMissing(MongoOperations.java:633)
at io.quarkus.mongodb.panache.PanacheMongoRepositoryBase.findByIdOptional(PanacheMongoRepositoryBase.java:102)
at com.basketmaster.backend.common.infraestructure.CrudMongoRepository.findById(CrudMongoRepository.kt:25)
interface CrudRepository<M : Model<*>> {
    fun save(model: M): M
    fun findById(id: String): M
}

@ApplicationScoped
class CrudMongoRepository<M : Model<*>> : CrudRepository<M>, PanacheMongoRepository<M> {

    override fun save(model: M): M {
        persistOrUpdate(model)
        return model
    }

    override fun findById(id: String): M {
        val optional = super.findByIdOptional(ObjectId(id))
        return optional.orElseThrow { NotFoundException() }
    }
}
I noticed that the interface PanacheMongoRepository inherits from PanacheMongoRepositoryBase, but findByIdOptional is not implemented:
public interface PanacheMongoRepositoryBase<Entity, Id> {
    // ...

    @GenerateBridge(targetReturnTypeErased = true)
    default Entity findById(Id id) {
        throw INSTANCE.implementationInjectionMissing();
    }

    @GenerateBridge
    default Optional<Entity> findByIdOptional(Id id) {
        throw INSTANCE.implementationInjectionMissing();
    }
}
How should I use Panache for finding the entities by id? I don't understand why these methods are unimplemented but the tutorial uses them and it works properly.
The methods of MongoDB with Panache entities/repositories are implemented at build time via bytecode generation.
For this to happen, you need an entity / a repository with a concrete type parameter, not a generic one, as the extension needs to know the concrete type at build time to be able to use the right parameterized MongoDB collection.
So here, the issue is that you cannot directly use your CrudMongoRepository; you need to subclass it and provide a concrete type parameter.
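Concretely, that could look something like the following sketch, where the generic base stays abstract and each entity gets its own small repository bean (Player is just a hypothetical stand-in for one of your Model implementations):

// The generic base is no longer a bean itself; it only holds the shared logic.
abstract class CrudMongoRepository<M : Model<*>> : CrudRepository<M>, PanacheMongoRepository<M> {

    override fun save(model: M): M {
        persistOrUpdate(model)
        return model
    }

    override fun findById(id: String): M =
        findByIdOptional(ObjectId(id)).orElseThrow { NotFoundException() }
}

// One concrete subclass per entity, so Panache can generate the implementation
// for the concrete type at build time.
@ApplicationScoped
class PlayerMongoRepository : CrudMongoRepository<Player>()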

How can I properly override a method declared in an abstract generic restcontroller?

I'm having some trouble implementing a feature on top of some pre-existing code.
Other programmers working on this project previously defined a generic abstract "rest controller" (it's not actually annotated with @RestController, but it's meant to be extended by classes with that annotation):
public abstract class AbstractController<T extends AbstractEntity, R extends JpaRepository<T, Integer>> {

    @GetMapping(value = "/getall")
    public Paging<T> getAll(@RequestParam Integer itemsPerPage,
                            @RequestParam Integer pageIndex,
                            @RequestParam Map<String, String> filters,
                            @Autowired Consumer consumer) {
        //Fetch entities of type T from repository R and return them
    }

    //other generic crud operations
}
This class is usually extended by concrete controllers that simply define other operations on their specific types, but do not alter the generic CRUD operations.
What I want to do is extend this class but override the getAll method, like this:
@RestController
@RequestMapping("/api/tasks")
public class TaskController extends AbstractController<Task, TaskRepository> {

    @Override
    public Paging<Task> getAll(Integer itemsPerPage, Integer pageIndex, Map<String, String> filters, Consumer consumer) {
        LoggerFactory.getLogger(TaskController.class).info("function called successfully!");
        Paging<Task> paging = super.getAll(itemsPerPage, pageIndex, filters, consumer);
        //do things with return value before returning
        return paging;
    }
}
If I call BASEURL/api/tasks/getall?itemsPerPage=25&pageIndex=0 without overriding the getAll method, the parameters are wired correctly (the Map contains two values, itemsPerPage and pageIndex, as expected, and consumer contains a concrete implementation of the interface Consumer).
However, if I do override it, the Map for some reason contains two entries, one with key "consumer" and a value of type Proxy, and another with key "org.springframework.validation.BindingResult.consumer" and a value of type BeanPropertyBindingResult; and consumer contains a Proxy.
I suppose the @Override interferes with the autowiring of Consumer, but I can't figure out how to properly achieve what I have in mind (manipulating the result of getAll before returning it).
Thank you in advance
Nevermind, I solved it.
The problem with the Map was solved by adding @RequestParam and @Autowired annotations to the overridden method parameters as well.
The problem with the Consumer concrete type was somehow solved by applying a custom annotation that I found on another class in the codebase, I'm still not sure about what that annotation does but at least I know what to look for now.
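For anyone hitting the same thing, the shape of the fix for the Map binding is to repeat the parameter annotations on the overriding method, since parameter annotations are not inherited. A sketch (written in Kotlin here; the Java version is analogous, and the codebase-specific annotation mentioned above for the Consumer proxy issue is omitted):

@RestController
@RequestMapping("/api/tasks")
class TaskController : AbstractController<Task, TaskRepository>() {

    // Integer parameters on the Java base method map to Int? in a Kotlin override
    override fun getAll(
        @RequestParam itemsPerPage: Int?,
        @RequestParam pageIndex: Int?,
        @RequestParam filters: Map<String, String>,
        @Autowired consumer: Consumer
    ): Paging<Task> {
        val paging = super.getAll(itemsPerPage, pageIndex, filters, consumer)
        // post-process the result here before returning it
        return paging
    }
}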

Return custom-typed object from JpaRepository

I have the following repository:
public interface UserRepository extends BaseDAO<User> {

    Collection<User> findByEmail(@Param("email") String email);

    @Query("select new com.data.CustomUser(upper(substring(u.lastName, 1, 1)) as initial, count(*)) from User u join u.chats c where c.business=:business group by upper(substring(u.lastName, 1, 1)) order by initial")
    List<CustomUser> getContactsIndex(@Param("email") String email);
}
which is exposed with Spring Data REST. The User object is a managed entity, while CustomUser is not and, as you can see, is built on the fly by a custom query.
When I call that method, it fails with a Persistent entity must not be a null! exception. Is there any way to implement this behavior?
P.S. Exposing CustomUser through a separate repository is impossible because it is not a managed entity.
One challenge with using Spring Data Rest is when you hit an edge case and you don't know whether you've hit a bug or whether you're just outside the scope of what the library is intended for. In this case I think you are at the edge of what SDR will easily do for you, and it's time to implement your own controller.
Spring Data Rest is looking for an Entity - in your case a User - as the return type for ALL methods in the repository to expose under /entities/search, and breaks when it doesn't find that entity type. The User it wants to serialize isn't there, hence the "Persistent entity must not be null".
The way around this is to write a simple #Controller that has a #RequestMapping for the exact same url exposed by the repository method. This will override the SDR generated implementation for that url, and from that you can return whatever you want.
Your implementation might look something like this:
@Controller
public class CustomUserController {

    private final UserRepository repository;

    @Inject
    public CustomUserController(UserRepository repo) {
        repository = repo;
    }

    @RequestMapping(value = "/users/search/getContactsIndex", method = GET, produces = {MediaType.APPLICATION_JSON_VALUE})
    public @ResponseBody List<CustomUser> getContactsIndex(@RequestParam String email) {
        return repository.getContactsIndex(email);
    }
}
Be aware that there is a "recommended" way to override functionality this way. There is an open issue to document the best way to do this.
