I'm facing an issue where using @JsonManagedReference and @JsonBackReference doesn't break the infinite recursion loop when marshalling the objects to JSON.
I know I could try to avoid bidirectional relationships, but that would not serve me well here. Another undesirable solution would be to drop the @RestResource(exported=false) annotation and follow the provided link just once.
One example would be:
@JsonManagedReference
@ManyToOne(cascade = CascadeType.ALL)
@RestResource(exported = false)
@JoinColumn(name = "organisation_id")
private Organisation organisation;
with its counterpart in another class:
@JsonBackReference
@OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@JoinColumn(name = "organisation_id")
@RestResource(exported = false)
@Fetch(value = FetchMode.SUBSELECT)
private Set<OrganisationUnit> organisationUnits;
Both classes have a @RepositoryRestResource with nothing special in it.
@RepositoryRestResource
public interface OrganisationRepository extends JpaRepository<Organisation, Long> {

    public final static String ID_QUERY = "SELECT u FROM Organisation u WHERE u.id = :objectId";

    @Query(ID_QUERY)
    public Organisation findObjectById(@Param("objectId") Long objectId);
}
I know that Jackson 2 should be able to handle these kinds of situations, but in this case it does not resolve the issue. Is this known behavior that I'm not aware of?
Please let me know of any obvious flaws, as I'm not very experienced with JPA, Hibernate, or Spring.
The error message provided is:
There was an unexpected error (type=Internal Server Error, status=500). Could not write content: Infinite recursion (StackOverflowError) (through reference chain: org.springframework.hateoas.PagedResources["_embedded"]);
nested exception is com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion
I'd be happy about any pointers.
Which Jackson version are you using? Jackson 2.2.3 does not have this issue; it's taken care of automatically.
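For reference, Jackson documents @JsonManagedReference as belonging on the forward (collection) side and @JsonBackReference on the single-valued back side, and the back reference must not be a collection; the snippets above have the pair the other way around. A minimal sketch of that placement, using the question's classes with the JPA annotations kept as shown:

// Sketch: forward reference on the collection, back reference on the
// single-valued side (Jackson does not support @JsonBackReference on collections).
public class Organisation {
    @JsonManagedReference
    @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
    private Set<OrganisationUnit> organisationUnits;
}

public class OrganisationUnit {
    @JsonBackReference
    @ManyToOne(cascade = CascadeType.ALL)
    private Organisation organisation;
}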
I have an old-ish (3 years) service I need to maintain and just joined the team. It is a Spring WebFlux app with MongoDB (4). All models are defined with Lombok, and where there are relations between them, the @DBRef annotation was used. It works fine, but now I have to bump up Spring (and the rest of the dependencies), and I just realized @DBRef is no longer supported.
Somehow, I understand the decision, but I couldn't find any straightforward alternative other than doing all the cascade operations myself.
So, is there any easier way to approach this?
Thanks.
@DBRef was replaced by @DocumentReference some time ago.
@DocumentReference: Applied at the field to indicate it is to be stored as a pointer to another document. This can be a single value (the id by default), or a Document provided via a converter.
This very simple example shows how it works:
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.DocumentReference;

@Document
public class Account {
    @Id private String id;
    private Float total;
}

@Document
public class Person {
    @Id private String id;
    @DocumentReference
    private List<Account> accounts;
}
For more details, have a look at the official docs:
https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mapping-usage-annotations
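To illustrate the behavior, a hypothetical usage sketch; the repository interfaces, the service class, and the setAccounts setter are assumptions, not from the original code. Note that @DocumentReference does not cascade saves, so the referenced documents must be persisted separately:

import java.util.List;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import reactor.core.publisher.Mono;

// Hypothetical repositories for the classes above.
interface AccountRepository extends ReactiveMongoRepository<Account, String> {}
interface PersonRepository extends ReactiveMongoRepository<Person, String> {}

class PersonService {
    private final AccountRepository accounts;
    private final PersonRepository people;

    PersonService(AccountRepository accounts, PersonRepository people) {
        this.accounts = accounts;
        this.people = people;
    }

    // References are not cascaded, so the Account documents
    // must be saved before the Person that points to them.
    Mono<Person> savePersonWithAccounts(Person person, List<Account> personAccounts) {
        return accounts.saveAll(personAccounts)
                .collectList()
                .flatMap(saved -> {
                    person.setAccounts(saved); // setter assumed on Person
                    return people.save(person);
                });
    }
}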
Hello there, I have a question concerning the right way of modelling immutable entities:
Consider this entity (edited as per the suggestion by Jens Schauder):
@Getter
@RequiredArgsConstructor(staticName = "of", access = AccessLevel.PACKAGE)
public final class Student {

    @Id @Wither
    private final long studentId;

    @NotNull
    @Size(min = 4, max = 20)
    private final String userId;

    @NotNull
    @Min(0)
    private final int matriculationNumber;

    @NotNull
    @Email
    private final String eMail;
}
So this entity should be immutable and offers a static of() creation method. Also, @RequiredArgsConstructor builds a private constructor, although by its definition it should create a package-visible one for all final/non-null fields. Since all fields are final, this amounts to an all-args constructor, so to speak.
This document, https://docs.spring.io/spring-data/jdbc/docs/current/reference/html/#mapping.fundamentals, specifically the section about "Object creation internals", states four aspects for improved handling, amongst them that "the constructor to be used by Spring Data must not be private", which are fulfilled in my opinion.
So my question:
Is the entity pictured above done right, both concerning immutability and Spring Data JDBC's internal mapping optimizations?
EDIT:
There seems to be a bug in the Lombok plugin for IntelliJ that keeps access = AccessLevel.PACKAGE from doing the right thing. See here:
https://github.com/mplushnikov/lombok-intellij-plugin/issues/584
Although the issue is already closed, a new version of the plugin is not available yet...
This depends on your definition of "optimum mapping".
It should work, so this is already something.
But the optimization described in the docs cannot be applied because your constructor is private.
Therefore you lose the roughly 10% performance boost from it, which probably means it is not "optimal".
But the 10% boost is about the object instantiation.
It is not about the roundtrip to the database, which involves:
- extraction of data from your entities
- construction (or lookup) of the SQL to use
- sending both to the database
- performing the query in the database
- returning the result
This makes it very likely that the gain from that optimization is well below 10% and in most cases nothing to worry about.
Of course, you will never really know until you make your own benchmarks with real data.
For this, you would need to create an all-args constructor which has at least package scope.
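For illustration, a minimal Lombok-based sketch of that: the static of() factory is written by hand so the constructor itself can stay package-visible (validation annotations omitted for brevity; untested, treat it as an assumption about Lombok's interplay with Spring Data JDBC):

import lombok.AccessLevel;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.experimental.Wither;
import org.springframework.data.annotation.Id;

// All fields are final, so @RequiredArgsConstructor covers them all,
// giving a package-visible all-args constructor.
@Getter
@RequiredArgsConstructor(access = AccessLevel.PACKAGE)
public final class Student {

    @Id @Wither
    private final long studentId;
    private final String userId;
    private final int matriculationNumber;
    private final String eMail;

    public static Student of(long studentId, String userId,
                             int matriculationNumber, String eMail) {
        return new Student(studentId, userId, matriculationNumber, eMail);
    }
}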
I am using Spring Data REST and EclipseLink to create a multi-tenant single-table application, but I am not able to create a repository on which I can call custom query parameters.
My Kid class:
@Entity
@Table(name = "kid")
@Multitenant
public class Kid {

    @Id
    private Long id;

    @Column(name = "tenant_id")
    private String tenant_id;

    @Column(name = "mother_id")
    private Long motherId;

    // more attributes, constructor, getters and setters
}
My KidRepository:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {}
When I call localhost/kids I get the following exception:
Exception [EclipseLink-6174] (Eclipse Persistence Services - 2.7.4.v20190115-ad5b7c6b2a): org.eclipse.persistence.exceptions.QueryException
Exception Description: No value was provided for the session property [eclipselink.tenant-id].
This exception is possible when using additional criteria or tenant discriminator columns without specifying the associated contextual property.
These properties must be set through EntityManager, EntityManagerFactory or persistence unit properties.
If using native EclipseLink, these properties should be set directly on the session.
When I remove the @Multitenant annotation on my entity, everything works fine, so it definitely has something to do with EclipseLink.
When I don't extend QuerydslPredicateExecutor it works too, but then I have to implement all findBy* methods myself. And even doing so, it breaks again when changing my KidRepository to:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}
When I now call localhost/kids/search/findByMotherId?motherId=1 I get the same exception as above.
I used this tutorial to set up EclipseLink with JPA: https://blog.marcnuri.com/spring-data-jpa-eclipselink-configuring-spring-boot-to-use-eclipselink-as-the-jpa-provider/, meaning the PlatformTransactionManager, createJpaVendorAdapter and getVendorProperties are overridden.
The tenant-id comes with a JWT, and everything works fine as long as I don't use the QuerydslPredicateExecutor, which is mandatory for the use case.
It turns out that the wrong JpaTransactionManager is used when I rely on the QuerydslPredicateExecutor. I couldn't find out which one is created, but with multiple breakpoints inside the EclipseLink framework code, none of them were hit. This is true both when using the QuerydslPredicateExecutor and when using the custom findBy method.
I have googled a lot and tried to override some of the basic EclipseLink methods, but none of that worked. I am running out of options.
Does anyone have any idea how to fix or work around this?
I was looking for a solution to the same issue; what finally helped was adding Spring's @Transactional annotation to either the repository or any place from where the custom query is called. (It even works with javax.transaction.Transactional.) We had the @Transactional annotation on most of our services, so the issue was not obvious and its occurrence seemed rather accidental.
A more detailed explanation of using @Transactional on a repository is here: How to use @Transactional with Spring Data?
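For illustration, a minimal sketch of the repository from the question with the annotation applied at the interface level (a service method calling the query would be the other option):

import java.util.Collection;
import org.springframework.data.querydsl.QuerydslPredicateExecutor;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.transaction.annotation.Transactional;

@RepositoryRestResource
@Transactional
public interface KidRepository
        extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {

    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}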
I'm wondering: what is the point of FetchType.LAZY in one(many)-to-many relations when using the DAO pattern?
Is it basically useless? As soon as you are outside of your DAO (e.g. where the actual work is done), you can't fetch the related data, as you are not in a Hibernate session anymore.
Let's make an example:
Student and Class. A student takes many classes. He now logs into the system and his Student entity object is retrieved from the system.
application layer -> Service Layer -> DAO
Now the student wants to see which classes he takes, and oops, a LazyInitializationException occurs, as we are outside the DAO.
What are the options to prevent this? I've googled for hours and not found a solution except to actually fetch everything before leaving the DAO, which defeats the purpose of lazy loading in the first place. (I have read about OpenSessionInViewFilter, but this should work independently of the application layer.)
How do you solve this issue in a good way? What are alternative patterns that don't suffer from this?
EDIT:
I get no LazyInitializationException with the following settings:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "pk.compound",
        cascade = CascadeType.ALL, orphanRemoval = true)
@Fetch(FetchMode.JOIN)
The funny thing is, it must be exactly like this: removing @Fetch leads to a LazyInitializationException.
Even stranger, if I remove orphanRemoval = true, the LazyInitializationException also occurs, even with @Fetch. So both of those are required.
Maybe someone could enlighten me as to why this is the case. Currently I'm tending towards ditching Hibernate completely, as with pure JDBC I would have reached the desired behavior hours ago...
You can always fetch foreign-key relation data without the same session. Since your session does not exist outside your application layer, you just fetch the data manually in the method where you retrieve it, and set it.
Application Layer:
public List<SchoolClass> getSchoolClassesByStudent(Serializable identifier)
{
    // get classes by student using Criteria or HQL, e.g. (assumes a mapped
    // "students" collection and a session from the DAO's SessionFactory):
    List<SchoolClass> schoolClasses = session
            .createQuery("select c from SchoolClass c join c.students s where s.id = :id")
            .setParameter("id", identifier)
            .list();
    return schoolClasses;
}
Client Layer:
public void loadSchoolClassesByStudent(Student student)
{
    student.setSchoolClasses(application.getSchoolClassesByStudent(student.getId()));
}
I myself chose not to support any collections in my Hibernate entities.
I fetch all child relations with very generic methods that my server provides to my client, similar to the one above.
Edit: Or create some logic (an interceptor?) that can check outside the DAO whether data is uninitialized before accessing it, and initialize it using a generic method; see the sketch below.
This would also assume that the Hibernate jars are present on the client level, which may or may not be a good idea (the same applies if the uninitialized data is not set to null).
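A minimal sketch of such a check using Hibernate's static helper; the application reference and the accessors are assumed from the example above:

import org.hibernate.Hibernate;

// Sketch: outside the DAO, detect an uninitialized lazy collection and
// fall back to the generic fetch method shown earlier.
public void ensureSchoolClassesLoaded(Student student)
{
    if (!Hibernate.isInitialized(student.getSchoolClasses())) {
        student.setSchoolClasses(application.getSchoolClassesByStudent(student.getId()));
    }
}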
One way to solve the problem is to use the OpenSessionInViewFilter filter.
<filter>
<filter-name>hibernateSessionFilter</filter-name>
<filter-class>
org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
</filter-class>
</filter>
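Note that for the filter to actually run, it also needs a filter-mapping in web.xml; a sketch, with the URL pattern as an assumption:
<filter-mapping>
    <filter-name>hibernateSessionFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>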
My team has two classes, User and Store, related by JPA #ManyToMany annotations. The relevant code is below.
When creating a new User object and setting its stores, life is good. But we're seeing some unexpected behavior when trying to update User objects through our Struts 2 / Spring web UI. (These problems do not occur when running simple automated integration tests through JUnit).
Simply put, when saving a User, its stores collection is not updated -- the database shows no changes, and reloads of the User object in question shows it to have an unchanged set of stores.
The only solution that we have found to this problem -- and we don't like this solution, it makes all of the developers on our team a little queasy -- is to create a new Set of Stores, then do user.setStores(null), then do user.setStores(stores).
We are using the OpenEntityManagerInViewFilter; we are using Hibernate as our JPA provider; we are using Spring's JpaTransactionManager. We don't have any @Transactional annotations in our code -- and adding them breaks the existing code due to proxy behavior described elsewhere.
Any insight anyone might provide as to how we might solve this problem in a non-queasy manner is welcome.
Relevant part of User.java:
@ManyToMany
@JoinTable(name = "company.user_store_access",
        joinColumns = @JoinColumn(name = "userid"),
        inverseJoinColumns = @JoinColumn(name = "storeid"))
public Set<Store> getStores() {
    return stores;
}
Relevant part of Store.java:
@ManyToMany(mappedBy = "stores")
public List<User> getUsers() {
    return users;
}
Relevant parts of UserDetailAction.java:
(pass-throughs down a layer or two, and then:)
entity = getEntityManager().merge(entity);
The problem is that you have a bidirectional relation, and in such a case one entity controls the other. From your mapping, the controlling one is the Store entity and not the User, so adding a user to a store would work (since Store is the one that controls), but adding a store to a user will not.
Try inverting the situation by making the User object the one that controls the relation: put the @ManyToMany(mappedBy = "users") annotation on the getStores() method and change the getUsers() one accordingly.
Hope it helped.
P.S. The entity that controls the relation is always the one that doesn't have the mappedBy attribute.
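Also, independent of which side ends up owning the relation, a common pattern is to update both sides of a bidirectional association together. A minimal sketch of such helpers on the User class (method names are assumptions, not from the question):

// Sketch: keep both sides of the bidirectional @ManyToMany in sync,
// so the owning side always sees the change that is meant to be persisted.
public void addStore(Store store) {
    getStores().add(store);
    store.getUsers().add(this);
}

public void removeStore(Store store) {
    getStores().remove(store);
    store.getUsers().remove(this);
}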
I just got stuck on the same problem and found a solution on another page.
Here is what worked for me:
calling getJpaTemplate().flush() before calling merge.
I don't know the reason why it worked.
Please try the following annotation in your User object:
@ManyToMany(cascade = {CascadeType.ALL})