I'm still trying to figure out exactly what I'm asking; this is fallout from a discussion in the office. The dilemma: with a mapping set to eager fetch, if a repository is defined for the entity the mapping points to, a link is produced. Some of the time that is fine, but some of the time I'd rather have the object itself fetched and embedded. If there is no repository defined for that entity, that is exactly what happens with the eager fetch strategy. Ideally I could pass in a parameter and have the effect of that repository's existence switched off or on.
Not totally following, but either the repo exists or it doesn't. If you want to be able to access entities of type X independently of other entity types, then you have to define a repo for type X.
I think you could achieve something similar using projections.
So you define a repository for your association entity. By default Spring Data REST will just render a link to this entity and not embed it in the response.
Then you define a projection with a getter for your associated entity. On the client side you can choose whether you want the projection by adding the projection query parameter.
So let's say you have a Person with an Address, and an exported repository exists for both Person and Address:
@Entity
public class Person {

    @Id @GeneratedValue
    private Long id;
    private String firstName, lastName;

    @OneToOne
    private Address address;
    …
}

interface PersonRepository extends CrudRepository<Person, Long> {}
interface AddressRepository extends CrudRepository<Address, Long> {}
Your projection could look like this:
@Projection(name = "inlineAddress", types = { Person.class })
interface InlineAddress {
    String getFirstName();
    String getLastName();
    Address getAddress();
}
And if you call http://localhost/persons/1?projection=inlineAddress you get the address embedded; by default it is just linked.
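If you want the embedded form without the query parameter, you could also register the projection as an excerpt on the repository. A rough sketch (note that excerpts apply to collection resources and embedded associations, not to the single-item resource itself):

    import org.springframework.data.repository.CrudRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;

    // Optional: use the projection as the default view when Person appears
    // in the /persons collection resource or as an embedded association.
    @RepositoryRestResource(excerptProjection = InlineAddress.class)
    interface PersonRepository extends CrudRepository<Person, Long> {}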
I have a Spring Boot application using JPA/Hibernate in its persistence layer. The application has read-only access to a database and basically has three entities Article, Category, and Field, which have the following relationships.
Article (*) -> (1) Category (*) <-> (1) Field
That is, an Article has a Category, and a Category always belongs to a single Field, however, multiple Category instances can belong to the same Field.
The application provides two REST endpoints, which give a single Article and a single Field by their IDs, respectively. Of course, this cannot work when using Jackson for serialization due to the cyclic dependency Category <-> Field.
What I want is: when I retrieve an Article, it should give me its Category including that category's Field, but not all the other Category instances that belong to the same Field. On the other hand, when I retrieve a Field, it should give me the Field including all Category instances that belong to it.
How can I achieve this?
Edit:
I basically have a question similar to Jackson infinite loops many-to-one one-to-many relation
You can use interface-based projections to retrieve only the properties you need, since Spring Data allows modeling dedicated return types to selectively retrieve partial views of the managed aggregates.
Let's assume the entities are declared as shown below. For simplicity, only the id attribute is defined alongside association-mapping attributes.
@Entity
public class Article {

    @Id
    private Long id;

    @ManyToOne
    private Category category;
}

@Entity
public class Category {

    @Id
    private Long id;

    @OneToMany
    private Set<Article> articles;

    @ManyToOne
    private Field field;
}

@Entity
public class Field {

    @Id
    private Long id;

    @OneToMany
    private Set<Category> categories;
}
For the first endpoint where the Article is fetched by id, the projections should be declared as follows:
public interface ArticleDto {

    Long getId();
    CategoryDto1 getCategory();

    interface CategoryDto1 {
        Long getId();
        FieldDto1 getField();
    }

    interface FieldDto1 {
        Long getId();
    }
}
The important bit here is that the properties defined here exactly match properties in the aggregate root.
Then, the additional query method should be defined in ArticleRepository:
interface ArticleRepository extends JpaRepository<Article, Long> {
    Optional<ArticleDto> findDtoById(Long id);
}
The query execution engine creates proxy instances of that interface at runtime for each element returned and forwards calls to the exposed methods to the target object.
Declare additional projections to retrieve properties needed for the second case:
public interface FieldDto2 {

    Long getId();
    Set<CategoryDto2> getCategories();

    interface CategoryDto2 {
        Long getId();
    }
}
Lastly, define the following query method in FieldRepository:
interface FieldRepository extends JpaRepository<Field, Long> {
    Optional<FieldDto2> findDtoById(Long id);
}
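For completeness, a thin controller could expose the two endpoints by delegating to these query methods. This is only a sketch; the controller name and paths are assumptions:

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class LookupController {

        private final ArticleRepository articleRepository;
        private final FieldRepository fieldRepository;

        LookupController(ArticleRepository articleRepository, FieldRepository fieldRepository) {
            this.articleRepository = articleRepository;
            this.fieldRepository = fieldRepository;
        }

        // Serializes ArticleDto: article -> category -> field, with no back-reference to categories.
        @GetMapping("/articles/{id}")
        ResponseEntity<ArticleDto> article(@PathVariable Long id) {
            return ResponseEntity.of(articleRepository.findDtoById(id));
        }

        // Serializes FieldDto2: field -> categories, again without the cyclic back-reference.
        @GetMapping("/fields/{id}")
        ResponseEntity<FieldDto2> field(@PathVariable Long id) {
            return ResponseEntity.of(fieldRepository.findDtoById(id));
        }
    }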
With this approach, the infinite recursion exception would never appear, as long as projections don't contain attributes causing recursion.
My schema will be something similar to the above picture.
I am planning to use Spring Data JDBC and found that
If multiple aggregates reference the same entity, that entity can’t be part of those aggregates referencing it since it only can be part of exactly one aggregate.
Following are my questions:
How to create two different aggregates for the above without changing the DB design?
How to retrieve the Order / Vendor list alone? i.e. I don't want to traverse through the aggregate root.
How to create two different aggregates for the above without changing the DB design?
I think you simply have three Aggregates here: Order, Vendor and ProductType. A mental test that I always use is:
If A has a reference to B and I delete an A, should I automatically and without exception delete all Bs referenced by that A? If so B is part of the A Aggregate.
This doesn't seem to be true for any of the relationships in your diagram, so let's go with separate Aggregates for each entity.
This in turn makes each reference in the diagram one between different Aggregates.
As described in "Spring Data JDBC, References, and Aggregates" these must be modelled as ids in your Java code, not as Java references.
class Order {

    @Id
    Long orderid;
    String name;
    String description;
    Instant created;
    Long productTypeId;
}

class Vendor {

    @Id
    Long vid;
    String name;
    String description;
    Instant created;
    Long productTypeId;
}

class ProductType {

    @Id
    Long pid;
    String name;
    String description;
    Instant created;
}
Since they are separate Aggregates, each gets its own Repository.
interface Orders extends CrudRepository<Order, Long>{
}
interface Vendors extends CrudRepository<Vendor, Long>{}
interface ProductTypes extends CrudRepository<ProductType, Long>{}
At this point I think we fulfilled your requirements. You might have to add some @Column and @Table annotations to get the exact names you want, or provide a NamingStrategy.
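For example, a couple of explicit names on the Order aggregate might look like this; the table and column strings are made up, and Spring Data JDBC's own @Table/@Column annotations are meant here, not JPA's:

    import org.springframework.data.annotation.Id;
    import org.springframework.data.relational.core.mapping.Column;
    import org.springframework.data.relational.core.mapping.Table;

    // Hypothetical names: adjust the table/column strings to the actual schema.
    @Table("orders")
    class Order {

        @Id
        @Column("orderid")
        Long orderid;

        @Column("product_type_id")
        Long productTypeId;
    }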
You probably also want some kind of caching for the product types, since I'd expect them to see lots of reads and only a few writes.
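One way to get that caching, assuming Spring's caching abstraction is enabled with @EnableCaching, is a small read-through lookup service in front of the ProductTypes repository (the class and cache names below are made up):

    import java.util.Optional;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    class ProductTypeLookup {

        private final ProductTypes productTypes;

        ProductTypeLookup(ProductTypes productTypes) {
            this.productTypes = productTypes;
        }

        // Reads go to the "productTypes" cache first; misses hit the database.
        @Cacheable("productTypes")
        public Optional<ProductType> byId(Long id) {
            return productTypes.findById(id);
        }
    }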
And of course you can add additional methods to the repositories, for example:
interface Orders extends CrudRepository<Order, Long> {
    List<Order> findByProductTypeId(Long productTypeId);
}
I'm quite new to Spring Data JPA and am currently facing a task I can't deal with; I am looking for the best practice for such cases.
In my Postgres database I have two tables connected by a one-to-many relation. Table 'account' has a field 'type_id' which is a foreign key referencing field 'id' of table 'account_type':
So the 'account_type' table only plays the role of a dictionary. Accordingly, I've created two JPA entities (Kotlin code):
@Entity
class Account(
    @Id @GeneratedValue var id: Long? = null,
    var amount: Int,
    @ManyToOne var accountType: AccountType
)

@Entity
class AccountType(
    @Id @GeneratedValue var id: Long? = null,
    var type: String
)
In my Spring Boot application I'd like to have a RestController responsible for returning all accounts in JSON format. To do that I made the entity classes serializable and wrote a simple REST controller:
@GetMapping("/getAllAccounts", produces = [APPLICATION_JSON_VALUE])
fun getAccountsData(): String {
    val accountsList = accountRepository.findAll().toMutableList()
    return json.stringify(Account.serializer().list, accountsList)
}
where accountRepository is just an interface which extends CrudRepository<Account, Long>.
And now if I go to :8080/getAllAccounts, I get JSON in the following format:
[
    {
        "id": 1,
        "amount": 0,
        "accountType": {
            "id": 1,
            "type": "DBT"
        }
    },
    {
        "id": 2,
        "amount": 0,
        "accountType": {
            "id": 2,
            "type": "CRD"
        }
    }
]
But what I really want from that controller is just
[
    {
        "id": 1,
        "amount": 0,
        "type": "DBT"
    },
    {
        "id": 2,
        "amount": 0,
        "type": "CRD"
    }
]
Of course I can create a new serializable class for accounts which has a String field instead of the AccountType field, and map the JPA Account class to it by extracting the account type string from the AccountType field. But to me that looks like unnecessary overhead, and I believe there should be a better pattern for such cases.
For example, what I have in mind is that I could somehow create one JPA entity class (with a String field representing the account type) which is based on the two database tables, so that the unnecessary complexity of the inner object is reduced automagically each time I call repository methods :) Moreover, I would be able to use this entity class in my business logic without any additional 'wrappers'.
P.s. I read about the @SecondaryTable annotation, but it looks like it only works when there is a one-to-one relation between the two tables, which is not my case.
There are a couple of options which allow clean separation without a DTO.
Firstly, you could look at using a projection, which is kind of like the DTO mentioned in other answers but without many of the drawbacks:
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projections
@Projection(name = "accountSummary", types = { Account.class })
public interface AccountSummaryProjection {

    Long getId();

    Integer getAmount();

    @Value("#{target.accountType.type}")
    String getType();
}
You then simply need to update your controller to call either a query method with a List return type, or a method which takes the projection class as an argument.
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projection.dynamic
@GetMapping("/getAllAccounts", produces = [APPLICATION_JSON_VALUE])
@ResponseBody
fun getAccountsData(): List<AccountSummaryProjection> {
    return accountRepository.findAllAsSummary()
}
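The repository side isn't shown in the original, but the findAllAsSummary() call could be backed by something like the following sketch. The explicit JPQL is an assumption, used because "AsSummary" is not a property Spring Data could derive a query from:

    import java.util.List;
    import org.springframework.data.jpa.repository.Query;
    import org.springframework.data.repository.CrudRepository;

    public interface AccountRepository extends CrudRepository<Account, Long> {

        // Returns projection proxies instead of full Account entities.
        @Query("select a from Account a")
        List<AccountSummaryProjection> findAllAsSummary();
    }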
An alternative approach is to use Jackson annotations. I note that in your question you are manually transforming the result to a JSON string and returning a String from your controller. You don't need to do that if the Jackson JSON library is on the classpath; see my controller above.
So if you leave the serialization to Jackson, you can separate the view from the entity using a couple of annotations. Note that I would apply these via a Jackson mixin rather than polluting the entity model with JSON processing instructions, but you can look that up:
@Entity
class Account(
    @Id @GeneratedValue var id: Long? = null,
    var amount: Int,
    // In real life I would apply these using a Jackson mixin
    // to prevent polluting the domain model with view concerns.
    @JsonSerialize(converter = AccountTypeToStringConverter::class)
    @JsonDeserialize(converter = StringToAccountTypeConverter::class)
    @ManyToOne var accountType: AccountType
)
You then simply create the necessary converters:
public class StringToAccountTypeConverter extends StdConverter<String, AccountType>
        implements org.springframework.core.convert.converter.Converter<String, AccountType> {

    @Autowired
    private AccountTypeRepository repo;

    @Override
    public AccountType convert(String value) {
        // look up in repo and return
    }
}
and vice versa:
public class AccountTypeToStringConverter extends StdConverter<AccountType, String>
        implements org.springframework.core.convert.converter.Converter<AccountType, String> {

    @Override
    public String convert(AccountType value) {
        return value.getType();
    }
}
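As for applying these through a Jackson mixin instead of annotating the entity, a rough sketch could look like this; the mixin and config class names are made up, and it assumes Spring Boot's auto-configured ObjectMapper:

    import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
    import com.fasterxml.jackson.databind.annotation.JsonSerialize;
    import org.springframework.boot.autoconfigure.jackson.Jackson2ObjectMapperBuilderCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // The Jackson instructions live here, so the Account entity stays clean.
    abstract class AccountMixin {

        @JsonSerialize(converter = AccountTypeToStringConverter.class)
        @JsonDeserialize(converter = StringToAccountTypeConverter.class)
        abstract AccountType getAccountType();
    }

    @Configuration
    class JacksonMixinConfig {

        @Bean
        Jackson2ObjectMapperBuilderCustomizer accountMixinCustomizer() {
            // Registers the mixin on the auto-configured ObjectMapper.
            return builder -> builder.mixIn(Account.class, AccountMixin.class);
        }
    }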
One of the least complicated ways to achieve what you are aiming for - from the external clients' point of view, at least - is custom serialisation, which you seem to be aware of and which @YoManTaMero has expanded upon.
Obtaining the desired class structure might not be possible. The closest thing I've managed to find is the @SecondaryTable annotation, but the caveat is that it only works for @OneToOne relationships.
In general, I'd pinpoint your problem to the issue of DTOs and Entities. The idea behind JPA is to map the schema and content of your database to code in an accessible but accurate way. It takes away the heavy-lifting of managing SQL queries, but it is designed mostly to reflect your DB's structure, not to map it to a different set of domains.
If the organisation of your DB schema does not exactly match the needs of your system's I/O communication, this might be a sign that:
Your DB has not been designed correctly;
Your DB is fine, but the manageable entities (tables) in it simply do not match directly to the business entities (models) in your external communication.
Should the second be the case, Entities should be mapped to DTOs which can then be passed around. A single Entity may map to a few different DTOs; a single DTO might take more than one (related!) entity to create. This is a good practice for medium-to-large systems in the first place - handing out references to the object that is the direct access point to your database is a risk.
Mind that simply because the id of the accountType is not taking part in your external communication does not mean it will never be a part of your business logic.
To sum up: JPA is designed with ease of database access in mind, not for smoothing out external communication. For that, other tools (such as the Jackson serializer) or design patterns (like DTO) are employed.
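To make the DTO idea concrete for the account example, a hand-rolled mapping might look like the sketch below; the class and method names are invented, and any mapping library (MapStruct, ModelMapper, ...) could do the same job:

    // Exposes exactly the shape the endpoint should return: a flat "type" string
    // instead of the nested AccountType entity.
    public class AccountSummaryDto {

        private final Long id;
        private final int amount;
        private final String type;

        public AccountSummaryDto(Long id, int amount, String type) {
            this.id = id;
            this.amount = amount;
            this.type = type;
        }

        public static AccountSummaryDto fromEntity(Account account) {
            return new AccountSummaryDto(account.getId(), account.getAmount(), account.getAccountType().getType());
        }

        public Long getId() { return id; }
        public int getAmount() { return amount; }
        public String getType() { return type; }
    }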
One approach to solve this is to @JsonIgnore accountType and create a getType method like
@JsonProperty("type")
fun getType(): String {
    return accountType.type
}
Is it possible to publish two different repositories for the same JPA entity with Spring Data Rest?
I gave the two repositories different paths and rel-names, but only one of the two is available as REST endpoint.
The point why I'm having two repositories is, that one of them is an excerpt, showing only the basic fields of an entity.
The terrible part is not only that you can have just one Spring Data REST repository (@RepositoryRestResource) per entity, but also that if you have a regular JPA @Repository (like CrudRepository or PagingAndSortingRepository) it will also interact with the Spring Data REST one (as the key in the map is the entity itself).
I lost quite a few hours debugging the seemingly random loading of one or the other. I guess that if this is a hard limitation of Spring Data REST, at least an exception could be thrown if the key of the map is already there when trying to override the value.
The answer seems to be: there is only one repository possible per entity.
I ended up using @Subselect to create a second, immutable entity and bound that to a second JpaRepository with @RestResource(exported = false); that also encourages a separation of concerns.
Employee Example
@Entity
@Table(name = "employee")
public class Employee {

    @Id
    Long id;
    String name;
    ...
}

@RestResource
public interface EmployeeRepository extends PagingAndSortingRepository<Employee, Long> {
}

@Entity
@Immutable
@Subselect("select id, name, salary from employee")
public class VEmployeeSummary {

    @Id
    Long id;
    ...
}

@RestResource(exported = false)
public interface VEmployeeRepository extends JpaRepository<VEmployeeSummary, Long> {
}
Context
Two packages in the monolithic application had different requirements. One needed to expose the entities for the UI in a PagingAndSortingRepository including CRUD functions. The other was for an aggregating backend report component without paging but with sorting.
I know I could have filtered the results from the PagingAndSortingRepository after requesting Pageable.unpaged(), but I just wanted a basic JPA repository that returns a List for some filters.
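The report-side repository can then grow plain list-returning finders; for instance (the name filter below is just an assumed example, and it presumes VEmployeeSummary exposes a name property):

    import java.util.List;
    import org.springframework.data.domain.Sort;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.rest.core.annotation.RestResource;

    @RestResource(exported = false)
    public interface VEmployeeRepository extends JpaRepository<VEmployeeSummary, Long> {

        // Unpaged but sorted list for the aggregating report component.
        List<VEmployeeSummary> findByNameContaining(String namePart, Sort sort);
    }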
So, this does not directly answer the question, but may help solve the underlying issue.
You can only have one repository per entity... however, you can have multiple entities per table; thus, having multiple repositories per table.
In a bit of code I wrote, I had to create two entities... one with an auto-generated id and another with a preset id, but both pointing to the same table:
@Entity
@Table(name = "line_item")
public class LineItemWithAutoId {

    @Id
    @GeneratedValue(generator = "system-uuid")
    @GenericGenerator(name = "system-uuid", strategy = "uuid")
    private String id;
    ...
}

@Entity
@Table(name = "line_item")
public class LineItemWithPredefinedId {

    @Id
    private String id;
    ...
}
Then, I had a repository for each:
public interface LineItemWithoutId extends Repository<LineItemWithAutoId,String> {
...
}
public interface LineItemWithId extends Repository<LineItemWithPredefinedId,String> {
...
}
For the posted issue, you could have two entities. One would be the full entity, with getters and setters for everything. The other would have setters for everything, but getters only for the fields you want to make public. Does this make sense?
In our RESTful application we decided to use DTOs to shield the Hibernate domain model, for several reasons.
We map Hibernate entities to DTO and vice versa manually using DTOMappers in the Service Layer.
Example in Service Layer:
@Transactional(readOnly = true)
public PersonDTO findPersonWithInvoicesById(Long id) {
    Person person = personRepository.findById(id);
    return PersonMapperDTOFactory.getInstance().toDTO(person);
}
The main concept could be explained like this:
JSON (Jackson parser) <-> Controller <-> Service Layer (uses Mapping Layer) <-> Repository
We agreed to retrieve associations by performing an HQL (or Criteria) query using a left join.
This is mostly a performant way to retrieve relations and avoids the N+1 select issue.
However, it's still possible to hit the N+1 select issue when a developer mistakenly forgets to add the left join. The relations are then still fetched, because the PersonDTOMapper iterates over the Invoices of a Person to convert them to InvoiceDTOs, and the DTOMapper is executed while a Hibernate Session is still active (managed by Spring).
Is there some way to make the Hibernate Session 'not active' in our DTOMappers? We would then face a LazyInitializationException, which should alert the developer that he didn't fetch the data like he should have.
I've read about @Transactional(propagation = Propagation.NOT_SUPPORTED), which suspends the transaction. However, I don't know whether it was intended for such purposes.
What is a clean solution to achieve this? Alternatives are also very welcome!
Usually I use the mapper in the controller layer. From my perspective, the service layer manages the application's business logic; DTOs are very useful if you want to represent data to the external world in a different way. This way you may get the lazy initialization exception you are looking for.
I have one more reason to prefer this solution: just imagine you need to invoke a public method from inside another public method of the service class; in that case you might end up calling the mapper several times.
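A rough sketch of that layering (the controller and service names are assumptions, and it presumes spring.jpa.open-in-view is disabled, otherwise the session would still be open in the controller and lazy loading would silently succeed):

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class PersonController {

        private final PersonService personService;

        PersonController(PersonService personService) {
            this.personService = personService;
        }

        @GetMapping("/persons/{id}")
        PersonDTO findPerson(@PathVariable Long id) {
            // The service now returns the entity; its transaction has already ended here.
            Person person = personService.findPersonWithInvoicesById(id);
            // Mapping outside the transaction: touching an association the service
            // did not fetch fails fast with a LazyInitializationException.
            return PersonMapperDTOFactory.getInstance().toDTO(person);
        }
    }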
If you are using Hibernate, then there are specific ways that you can determine if an associated object has been lazy-loaded.
For example, let's say you have an entity class Foo that contains a #ManyToOne 'foreign' association to entity class Bar which is represented by a field in Foo called bar.
In your DTO mapping code you can check whether the associated bar has been lazy-loaded using the following code:
if (!(bar instanceof HibernateProxy) ||
        !((HibernateProxy) bar).getHibernateLazyInitializer().isUninitialized()) {
    // bar has already been lazy-loaded, so we can
    // recursively load a BarDTO for the associated Bar object
}
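As a shorthand, Hibernate also ships a utility that covers the same proxy case (and uninitialized persistent collections as well), which might read a bit clearer in mapping code:

    import org.hibernate.Hibernate;

    final class DtoMappingSupport {

        // True when the association has actually been loaded and is safe to map.
        static boolean isSafeToMap(Object association) {
            return Hibernate.isInitialized(association);
        }
    }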
The simplest solution to achieve what you desire is to clear the entity manager after querying and before invoking the DTO mapper. That way, the objects will be detached and access to uninitialized associations will trigger a LazyInitializationException instead.
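A minimal sketch of that idea, based on the service method from the question and assuming the EntityManager is injected next to the repository:

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import org.springframework.transaction.annotation.Transactional;

    public class PersonService {

        private final PersonRepository personRepository;

        @PersistenceContext
        private EntityManager entityManager;

        public PersonService(PersonRepository personRepository) {
            this.personRepository = personRepository;
        }

        @Transactional(readOnly = true)
        public PersonDTO findPersonWithInvoicesById(Long id) {
            Person person = personRepository.findById(id);
            // Detach everything loaded so far; from here on, touching an unfetched
            // association fails with a LazyInitializationException instead of
            // silently issuing extra selects.
            entityManager.clear();
            return PersonMapperDTOFactory.getInstance().toDTO(person);
        }
    }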
I felt your pain as well, which drove me to develop Blaze-Persistence Entity Views. It allows you to define DTOs as interfaces and map them to the entity model, using the attribute name as the default mapping, which allows for very simple-looking mappings.
Here is a little example:
@Entity
class Person {
    @Id Long id;
    String name;
    String lastName;
    String address;
    String city;
    String zipCode;
}

@EntityView(Person.class)
interface PersonDTO {
    @IdMapping Long getId();
    String getName();
}
Querying would be as simple as
@Transactional(readOnly = true)
public PersonDTO findPersonWithInvoicesById(Long id) {
    return personRepository.findById(id);
}

interface PersonRepository extends EntityViewRepository<PersonDTO, Long> {
    PersonDTO findById(Long id);
}
Since you seem to be using Spring Data, you will enjoy the Spring Data integration.