Modify argument value with the Spring Data JPA Repository

I have a Spring Data JPA repository with a derived query method:
@Repository
public interface EntityRepository extends PagingAndSortingRepository<Entity, Long> {
    List<Entity> findByNameIgnoreCase(String name);
}
I would like to modify the name value (e.g. escape the % and _ characters, see https://jira.spring.io/browse/DATAJPA-216) before the method is executed.
My proposed solution was to create a CustomString type and a Converter<CustomString, String> containing the required logic. But even if I change the signature to findByNameIgnoreCase(CustomString name), the converter is not used and the original CustomString is passed on to SimpleJpaRepository.
Is there any other way to do this without creating an extra service that wraps the repository call?

Can't you just convert the string before invoking the repository method?
Something like this:
entityRepository.findByNameIgnoreCase(transformNameIntoSomethingElse(name));
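If all you need is the escaping from DATAJPA-216, a small helper at the call site is usually enough. A minimal sketch, assuming a hypothetical LikeEscaper helper (not part of Spring Data), mainly relevant when the derived query uses Like/Containing:
// Hypothetical helper: escapes backslash and the LIKE wildcards % and _
// before the value reaches the repository method.
public final class LikeEscaper {

    private LikeEscaper() {
    }

    public static String escape(String input) {
        return input
                .replace("\\", "\\\\")
                .replace("%", "\\%")
                .replace("_", "\\_");
    }
}

// Call site:
List<Entity> result = entityRepository.findByNameIgnoreCase(LikeEscaper.escape(name));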

Related

How to avoid Spring Repository<T, ID> to leak persistence information into service tier

I'm using spring-data-mongodb at the moment, so this question is primarily in the context of MongoDB, but I suspect it applies to repository code in general.
Out of the box, when using a MongoRepository<T, ID> interface (or any other Repository<T, ID> descendant), the entity type T is expected to be the document type (the type that defines the document schema).
As a result, injecting such a repository into a service component means the repository leaks database schema information into the service tier (highly pseudo):
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

interface MyRepository extends MongoRepository<MyDocument, String> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        var documentId = convert(id, ...);
        var matchingDocument = repository.findById(documentId).orElse(...);
        var model = convert(matchingDocument, ...);
        return model;
    }
}
Whilst ideally I'd want to do this:
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

@Configuration
class MyMagicConversionConfig {
    ...
}

class MyDocumentToModelConverter implements Converter<MyDocument, MyModel> {
    ...
}

class MyModelToDocumentConverter implements Converter<MyModel, MyDocument> {
    ...
}

// Note that the model and the model's ID type are used in the repository declaration
interface MyRepository extends MongoRepository<MyModel, UUID> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        // Repository now returns the model because it was converted upstream
        // by the mongo persistence layer.
        var matchingModel = repository.findById(id).orElse(...);
        return matchingModel;
    }
}
Defining this conversion once seems significantly more practical than having to do it consistently throughout the service code, so I suspect I'm just missing something.
But of course this requires some way to inform the Mongo mapping layer about the conversion that has to be applied to move between MyModel and MyDocument, and to use the latter as its actual source of mapping metadata (e.g. @Document, @Id, etc.).
I've been fiddling with custom converters but I just can't seem to make the MongoDB mapping component do the above.
My two questions are:
Is it currently possible to define custom converters or implement callbacks that allow me to define and implement this model <-> document conversion once and abstract it away from my service tier?
If not, what is the idiomatic way to approach cleaning this up so that the service layer can stay blissfully unaware of how or with what schema an entity is persisted? A lot of Spring Boot codebases appear to be fine with using the type that defines the database schema as their model, but that seems suboptimal. Suggestions welcome!
Thanks!
I think you're blowing things a bit out of proportion. The service layer is not aware of the schema. It is aware of the types returned by the repository. How the properties of those are mapped onto the schema depends on the object-document mapping. This, by default, uses the property name, as that's the most straightforward thing to do. That translation can be customized either with annotations on the document type or by registering a FieldNamingStrategy with Spring Data MongoDB.
Spring Data MongoDB's object-document mapping subsystem provides a lot of customization hooks that allow transforming arbitrary MongoDB documents into entities. The types which the repositories return are your domain objects that - again, only by default - are mapped onto a MongoDB document 1:1, simply because that's the most reasonable thing to do in the first place.
If really in doubt, you can manually implement individual repository methods using the MongoTemplate API, which lets you explicitly define the type the data should be projected into.
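For the "manually implement repository methods" route, one possible shape is a custom repository fragment. A rough sketch, assuming the MyDocument/MyModel types from the question; the fragment names and the hand-written toModel conversion are my own, not something Spring Data prescribes:
// Fragment interface; MyRepository would extend both
// MongoRepository<MyDocument, String> and MyRepositoryCustom.
interface MyRepositoryCustom {
    Optional<MyModel> findModelById(UUID id);
}

// Picked up by Spring Data because of the *Impl suffix. Keeps the
// document-to-model conversion out of the service tier.
class MyRepositoryCustomImpl implements MyRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    MyRepositoryCustomImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public Optional<MyModel> findModelById(UUID id) {
        MyDocument document = mongoTemplate.findById(id.toString(), MyDocument.class);
        return Optional.ofNullable(document).map(this::toModel);
    }

    private MyModel toModel(MyDocument document) {
        MyModel model = new MyModel();
        model.id = UUID.fromString(document.id);
        return model;
    }
}
With this arrangement the service only ever sees MyModel from findModelById, while MyDocument stays an implementation detail of the repository.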
You can use something like MapStruct or write your own singleton mapper.
Then create default methods in your repository:
interface DogRepository extends MongoRepository<DogDocument, String> {

    default DogModel dogById(String id) {
        return findById(id)
                .map(DogMapper.INSTANCE::toModel)
                .orElse(null);
    }
}
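For completeness, a minimal MapStruct sketch of the DogMapper assumed above (the type names mirror the answer; the mapping itself is just field-by-field):
import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

// MapStruct generates the implementation at compile time; INSTANCE is the
// singleton used in the default repository method above.
@Mapper
public interface DogMapper {

    DogMapper INSTANCE = Mappers.getMapper(DogMapper.class);

    DogModel toModel(DogDocument document);

    DogDocument toDocument(DogModel model);
}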

Spring Data - Build where clause at runtime

In Spring Data, how can I append more conditions to an existing query?
For example, I have the CrudRepository below:
@RepositoryRestResource
public interface MyRepo extends CrudRepository<MyObject, Long> {
    @Query("from MyObject mo where mo.attrib1 = :attrib1")
    List<MyObject> findMyObjects(String attrib1, String conditions);
}
At runtime, I will need to call findMyObjects with two params. The first param is obviously the value of attrib1. The second param will be a where clause determined at runtime, for example "attrib2 like '%xx%' and attrib3 between 'that' and 'this' and ...". I know this extra where condition will be valid, but I don't know which attributes and conditions will be in it. Is there any way to append this where clause to the query defined in the @Query annotation?
Unfortunately, no. There is no straightforward way to achieve that.
You'll want to use custom repository methods, where you'll be able to inject an EntityManager and interact with EntityManager.createQuery(...) directly.
Alternatively, you can build dynamic queries using Specifications or QueryDsl.
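As a rough illustration of the Specification route (the repository name, the attrib1/attrib2 attributes and the filterByAttrib2 flag are placeholders taken from the question, not a fixed API):
// The repository additionally extends JpaSpecificationExecutor so it can
// accept Specifications built at runtime.
public interface MyObjectSpecRepository
        extends CrudRepository<MyObject, Long>, JpaSpecificationExecutor<MyObject> {
}

// Somewhere in a service: compose the where clause from whatever conditions
// apply at runtime instead of concatenating SQL fragments.
Specification<MyObject> spec =
        (root, query, cb) -> cb.equal(root.get("attrib1"), attrib1);

if (filterByAttrib2) {
    spec = spec.and((root, query, cb) -> cb.like(root.get("attrib2"), "%xx%"));
}

List<MyObject> result = myObjectSpecRepository.findAll(spec);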
I ended up injecting an EntityManager that I obtained in the rest controller. Posting what I did here for criticism:
The repository code:
@RepositoryRestResource
public interface MyRepo extends CrudRepository<MyObject, Long> {

    default List<MyObject> findByRuntimeConditions(EntityManager em, String runtimeConditions) {
        String mySql = "<built my sql here. Watch for sql injection.>";
        List<MyObject> list = em.createQuery(mySql, MyObject.class).getResultList();
        return list;
    }
}
The Rest controller code:
@RestController
public class DataController {

    @Autowired
    EntityManager em;

    @Autowired
    MyRepo myRepo;

    // of course watch for sql injection
    @RequestMapping("myobjects/{runtimeConditions}")
    public List<MyObject> getMyObjects(@PathVariable String runtimeConditions) {
        List<MyObject> list = myRepo.findByRuntimeConditions(em, runtimeConditions);
        return list;
    }
}

Jackson deserializer priority?

I have a Spring Boot app that models ActivityStreams objects, and for the most part Jackson's polymorphic deserialization works well.
There are 'objects' in the JSON which are references (links) rather than JSON objects with type information. For instance
"actor":"https://some.actors.href/"
rather than
"actor":{
    "type":"Actor",
    "name":"SomeActor"
}
I've written custom deserializers and placed them on the fields to deal with this:
@JsonDeserialize(using = ActorOrLinkDeserializer.class)
private Actor actor;
However, my ActorOrLinkDeserializer is instantiated but never called, and Jackson complains with Missing type id when trying to resolve subtype of [simple type, class org.w3.activity.streams.Actor]: missing type id property 'type' (for POJO property 'actor'), which comes from the polymorphic deserializer.
It appears that the polymorphic deserialization code takes precedence over my local @JsonDeserialize annotation, and I need a way to force my code to run first.
I've tried using my own ObjectMapper rather than Boot's and there's no difference.
I'd appreciate pointers and suggestions.
It turns out there's a fairly simple solution to this problem using a DeserializationProblemHandler.
What I've implemented, which works for all test cases so far, is:
1. Register the handler with the ObjectMapper (or register it via Spring Boot):
objectMapper.addHandler(new DeserProblemHandler());
2. Implement the handler so that a missing type id falls back to the base type:
public class DeserProblemHandler extends DeserializationProblemHandler {

    @Override
    public JavaType handleMissingTypeId(DeserializationContext ctxt, JavaType baseType, TypeIdResolver idResolver, String failureMsg) {
        return TypeFactory.defaultInstance().constructType(baseType.getRawClass());
    }
}
3. Add a constructor to each of the polymorphic classes that takes a string argument, which is the href.
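A hedged sketch of step 3, assuming an Actor class shaped like the question's example; a delegating creator is one way to let Jackson build the object once the handler has fallen back to the base type and the JSON value is just an href string:
import com.fasterxml.jackson.annotation.JsonCreator;

public class Actor {

    private String href;
    private String name;
    private String type;

    public Actor() {
    }

    // Delegating creator: invoked when the incoming JSON value is a bare
    // string such as "https://some.actors.href/".
    @JsonCreator(mode = JsonCreator.Mode.DELEGATING)
    public Actor(String href) {
        this.href = href;
    }
}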

How to create custom abstract repository like PagingAndSortingRepository, etc?

For example, in my persistence layer:
public interface SomePersistence extends JpaRepository<SomeClass, String> {}
I can write a method like:
@Query("some query")
List<SomeClass> getAllWithSomeParam();
and Spring knows to use the SimpleJpaRepository class, the implementation of JpaRepository.
When I write:
@Query("some query")
Page<SomeClass> getAllWithSomeParam(Pageable page);
Spring knows to use the implementation of PagingAndSortingRepository.
But now I want to add my own return type, Cursor<T>. That means I want to write:
@Query("some query")
Cursor<SomeClass> anyMethodName();
I then want to give Spring my own repository interface, CursorRepository, with its own CursorRepositoryImpl, containing just one method: Cursor<T> findAll().
Is this possible?

Return custom-typed object from JpaRepository

I have the following repository:
public interface UserRepository extends BaseDAO<User> {
    Collection<User> findByEmail(@Param("email") String email);

    @Query("select new com.data.CustomUser(upper(substring(u.lastName, 1, 1)) as initial, count(*)) from User u join u.chats c where c.business=:business group by upper(substring(u.lastName, 1, 1)) order by initial")
    List<CustomUser> getContactsIndex(@Param("email") String email);
}
which is exposed with Spring Data REST. The User object is a managed entity, while CustomUser is not; as you can see, it's built on the fly by a custom query.
When I call that method, it fails with a Persistent entity must not be a null! exception. Is there any way to implement this behavior?
P.S. Exposing CustomUser with a separate repository is impossible because it is not a managed entity.
One challenge with using Spring Data REST is that when you hit an edge case, you don't know whether you've hit a bug or whether you're just outside the scope of what the library is intended for. In this case I think you are at the edge of what SDR will easily do for you, and it's time to implement your own controller.
Spring Data REST expects an entity - in your case a User - as the return type for ALL repository methods it exposes under /entities/search, and breaks when it doesn't find that entity type. The User it wants to serialize isn't there, hence the "Persistent entity must not be null".
The way around this is to write a simple @Controller with a @RequestMapping for the exact same URL exposed by the repository method. This will override the SDR-generated implementation for that URL, and from it you can return whatever you want.
Your implementation might look something like this:
@Controller
public class CustomUserController {

    private final UserRepository repository;

    @Inject
    public CustomUserController(UserRepository repo) {
        repository = repo;
    }

    @RequestMapping(value = "/users/search/getContactsIndex", method = GET, produces = {MediaType.APPLICATION_JSON_VALUE})
    public @ResponseBody List<CustomUser> getContactsIndex(@RequestParam String email) {
        return repository.getContactsIndex(email);
    }
}
Be aware that there is a "recommended" way to override functionality this way. There is an open issue to document the best way to do this.
