Same entity for two different aggregates - spring-data-jdbc

My schema will be something similar to the picture above (Order and Vendor each referencing a ProductType).
I am planning to use Spring Data JDBC and found the following:
If multiple aggregates reference the same entity, that entity can't be part of those aggregates referencing it, since it can only be part of exactly one aggregate.
Following are my questions:
How to create two different aggregates for the above without changing the DB design?
How to retrieve the Order / Vendor list alone? i.e. I don't want to traverse through the aggregate root.

How to create two different aggregates for the above without changing the DB design?
I think you simply have three Aggregates here: Order, Vendor and ProductType. A mental test that I always use is:
If A has a reference to B and I delete an A, should I automatically and without exception delete all Bs referenced by that A? If so B is part of the A Aggregate.
This doesn't seem to be true for any of the relationships in your diagram, so let's go with separate Aggregates for each entity.
This in turn makes each reference in the diagram one between different Aggregates.
As described in "Spring Data JDBC, References, and Aggregates", these must be modelled as ids in your Java code, not as Java references.
class Order {
    @Id
    Long orderid;
    String name;
    String description;
    Instant created;
    Long productTypeId;
}

class Vendor {
    @Id
    Long vid;
    String name;
    String description;
    Instant created;
    Long productTypeId;
}

class ProductType {
    @Id
    Long pid;
    String name;
    String description;
    Instant created;
}
Since they are separate Aggregates, each gets its own Repository.
interface Orders extends CrudRepository<Order, Long> {}
interface Vendors extends CrudRepository<Vendor, Long>{}
interface ProductTypes extends CrudRepository<ProductType, Long>{}
At this point I think we've fulfilled your requirements. You might have to add some @Column and @Table annotations to get the exact names you want, or provide a NamingStrategy.
You probably also want some kind of caching for the product types, since I'd expect they see lots of reads and only few writes.
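One possible way to do that (not part of the original answer, just a sketch assuming Spring's cache abstraction is enabled via @EnableCaching; the cache name is made up):
import java.util.Optional;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.repository.CrudRepository;

interface ProductTypes extends CrudRepository<ProductType, Long> {

    // Cache reads of individual product types; the cache name "productTypes" is illustrative.
    @Override
    @Cacheable("productTypes")
    Optional<ProductType> findById(Long id);
}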
And of course you can add additional methods to the repositories, for example:
interface Orders extends CrudRepository<Order, Long> {
    List<Order> findByProductTypeId(Long productTypeId);
}
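To illustrate how a cross-aggregate reference is resolved, here is a hypothetical service (class and method names are made up) that loads an Order and then looks up its ProductType by id:
import org.springframework.stereotype.Service;

// Hypothetical service; assumes it lives in the same package as Order,
// so the package-private fields are visible.
@Service
class OrderService {

    private final Orders orders;
    private final ProductTypes productTypes;

    OrderService(Orders orders, ProductTypes productTypes) {
        this.orders = orders;
        this.productTypes = productTypes;
    }

    ProductType productTypeOf(Long orderId) {
        Order order = orders.findById(orderId).orElseThrow();
        // The reference is only an id, so the other aggregate is loaded explicitly.
        return productTypes.findById(order.productTypeId).orElseThrow();
    }
}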

Related

Cyclic dependency with JPA/Hibernate and Jackson [duplicate]

This question already has answers here: Infinite Recursion with Jackson JSON and Hibernate JPA issue (29 answers). Closed 3 months ago.
I have a Spring Boot application using JPA/Hibernate in its persistence layer. The application has read-only access to a database and basically has three entities, Article, Category, and Field, which have the following relationships:
Article (*) -> (1) Category (*) <-> (1) Field
That is, an Article has a Category, and a Category always belongs to a single Field, however, multiple Category instances can belong to the same Field.
The application provides two REST endpoints, which give a single Article and a single Field by their IDs, respectively. Of course, this cannot work when using Jackson for serialization due to the cyclic dependency Category <-> Field.
What I want is that when I retrieve an Article, it should give me its Category including the category's Field, but not all the other Category instances that belong to this same Field. On the other hand, when I retrieve a Field, it should give me the Field including all Category instances that belong to this Field.
How can I achieve this?
Edit:
I basically have a similar question as Jackson infinite loops many-to-one one-to-many relation
You can use interface-based projections to retrieve only the needed properties, since Spring Data allows modeling dedicated return types to selectively retrieve partial views of the managed aggregates.
Let's assume the entities are declared as shown below. For simplicity, only the id attribute is defined alongside association-mapping attributes.
@Entity
public class Article {
    @Id
    private Long id;

    @ManyToOne
    private Category category;
}

@Entity
public class Category {
    @Id
    private Long id;

    @OneToMany(mappedBy = "category")
    private Set<Article> articles;

    @ManyToOne
    private Field field;
}

@Entity
public class Field {
    @Id
    private Long id;

    @OneToMany(mappedBy = "field")
    private Set<Category> categories;
}
For the first endpoint where the Article is fetched by id, the projections should be declared as follows:
public interface ArticleDto {
    Long getId();
    CategoryDto1 getCategory();

    interface CategoryDto1 {
        Long getId();
        FieldDto1 getField();
    }

    interface FieldDto1 {
        Long getId();
    }
}
The important bit here is that the properties defined here exactly match properties in the aggregate root.
Then, the additional query method should be defined in ArticleRepository:
interface ArticleRepository extends JpaRepository<Article, Long> {
    Optional<ArticleDto> findDtoById(Long id);
}
The query execution engine creates proxy instances of that interface at runtime for each element returned and forwards calls to the exposed methods to the target object.
Declare additional projections to retrieve properties needed for the second case:
public interface FieldDto2 {
    Long getId();
    Set<CategoryDto2> getCategories();

    interface CategoryDto2 {
        Long getId();
    }
}
Lastly, define the following query method in FieldRepository:
interface FieldRepository extends JpaRepository<Field, Long> {
    Optional<FieldDto2> findDtoById(Long id);
}
With this approach, the infinite recursion exception would never appear, as long as projections don't contain attributes causing recursion.
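For completeness, a hypothetical controller wiring for the two endpoints could look like this (paths and method names are made up):
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Jackson serializes the projection proxies, from which no cyclic reference is reachable.
@RestController
class ArticleAndFieldController {

    private final ArticleRepository articles;
    private final FieldRepository fields;

    ArticleAndFieldController(ArticleRepository articles, FieldRepository fields) {
        this.articles = articles;
        this.fields = fields;
    }

    @GetMapping("/articles/{id}")
    ArticleDto article(@PathVariable Long id) {
        return articles.findDtoById(id).orElseThrow();
    }

    @GetMapping("/fields/{id}")
    FieldDto2 field(@PathVariable Long id) {
        return fields.findDtoById(id).orElseThrow();
    }
}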

Spring Data JPA DistinctBy projections

Good day fellow hibernators!
I have a question on how the DistinctBy clause works in conjunction with Spring Data's projections.
Assume I have 3 classes:
@Entity
public class Task {
    @Id
    Long id;

    @ManyToOne(fetch = LAZY)
    @JoinColumn(name = "project_id")
    private Project project;

    @OneToOne
    @JoinColumn(name = "contact_id")
    private Contact assigned;

    Boolean deleted;
    // ...
}

@Entity
public class Contact {
    @Id
    Long id;
    // ...
}

@Entity
public class Project {
    @Id
    Long id;

    @OneToMany(fetch = LAZY, mappedBy = "project")
    private Set<Task> tasks;
    // ...
}
These would be my domain classes. Notice that Project has a one-to-many to Task, while Contact does not. Now, I have 2 interfaces for my projections and the basic TaskRepo with 2 methods:
public interface JustProject {
    Project getProject();
}

public interface JustAssignee {
    Contact getContact();
}

public interface TaskRepo extends CrudRepository<Task, Long>, JpaSpecificationExecutor<Task> {
    List<JustAssignee> findDistinctByDeletedFalse();
    List<JustProject> findDistinctByDeletedFalseAndDeletedFalse();
}
The way it works for me right now is that findDistinctByDeletedFalse returns as many instances as there are distinct contacts for tasks (e.g. if there are 10 tasks but only 3 contacts, the method returns just 3 objects containing the 3 distinct contacts). The same goes for findDistinctByDeletedFalseAndDeletedFalse, but at the project level.
Now I have a few questions here and would love to get some help in understanding how this works exactly.
Is the distinct clause applied after the search is done?
My initial assumption was that this behavior would not work as it does now. I assumed that the distinct clause is applied before the result is fetched, meaning it would be DISTINCT based on the underlying Task model, not on the returned JustAssignee or JustProject model.
Is there any way I could avoid abusing the redundant ...AndDeletedFalse appendix? I need both methods in the repo, but I feel like I had to cheat just to obtain that result...
... and am I doing something wrong? I wanted to get "all distinct contacts/projects assigned to all tasks" in as elegant a way as possible. I ended up trying DistinctBy exactly because I was unsure how it works and wanted to try my luck. I really didn't think it would work this way, but now that it does I would really like to understand why!
Many thanks <3
The DISTINCT keyword is applied to the query, and therefore its effect depends on the select list, which in turn is controlled by the projection. Therefore, if you have only the project or only the contact in your projection, the DISTINCT will be applied to those values only. Note, though, that this relies somewhat on the boundaries of the JPA specification, and I wouldn't be surprised if you saw different behaviour with different implementations. See https://github.com/eclipse-ee4j/jpa-api/issues/189 and https://github.com/eclipse-ee4j/jpa-api/issues/124 for somewhat related issues raised against the specification.
In order to differentiate methods that otherwise only differ in the return value, you may add any additional string between find and By in the method name. For example, you might rename your methods to findDistinctContactsByDeletedFalse and findDistinctProjectsByDeletedFalse.
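A minimal sketch of that renaming, based on the repository from the question (the words Contacts and Projects are purely descriptive and are ignored when the query is derived):
import java.util.List;

import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.springframework.data.repository.CrudRepository;

public interface TaskRepo extends CrudRepository<Task, Long>, JpaSpecificationExecutor<Task> {

    // Both methods derive the same "deleted = false" condition; only the names differ.
    List<JustAssignee> findDistinctContactsByDeletedFalse();

    List<JustProject> findDistinctProjectsByDeletedFalse();
}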
I guess this is the best that you can get with Spring Data JPA. You might be able to use just a single method by using the dynamic projections approach, but I think this is a perfect use case for Blaze-Persistence Entity Views.
I created the library to allow easy mapping between JPA models and custom interface or abstract class defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure (domain model) the way you like and map attributes (getters) via JPQL expressions to the entity model.
A DTO model for your use case could look like the following with Blaze-Persistence Entity-Views:
@EntityView(Task.class)
public interface TaskAggregateDto {
    // A synthetic "id" to get a grouping context on object level
    @IdMapping("1")
    int getGroupKey();

    Set<ProjectDto> getProjects();
    Set<ContactDto> getContacts();

    @EntityView(Project.class)
    interface ProjectDto {
        @IdMapping
        Long getId();
        String getName();
    }

    @EntityView(Contact.class)
    interface ContactDto {
        @IdMapping
        Long getId();
        String getName();
    }
}
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features
public interface TaskRepo extends CrudRepository<Task, Long>, JpaSpecificationExecutor<Task> {
    TaskAggregateDto findOneByDeletedFalse();
}

Multiple Repositories for the Same Entity in Spring Data Rest

Is it possible to publish two different repositories for the same JPA entity with Spring Data Rest?
I gave the two repositories different paths and rel-names, but only one of the two is available as a REST endpoint.
The point why I'm having two repositories is, that one of them is an excerpt, showing only the basic fields of an entity.
The terrible part is not only that you can have just one Spring Data REST repository (@RepositoryRestResource) per entity, but also that a regular JPA @Repository (like CrudRepository or PagingAndSortingRepository) will interact with the Spring Data REST one, because the key in the map is the entity itself.
I lost quite a few hours debugging the seemingly random loading of one or the other. I guess that if this is a hard limitation of Spring Data REST, at least an exception could be thrown when the key of the map is already present and its value is about to be overridden.
The answer seems to be: There is only one repository possible per entity.
I ended up using @Subselect to create a second, immutable entity, bound it to a second JpaRepository, and marked that with @RestResource(exported = false), which also encourages a separation of concerns.
Employee Example
@Entity
@Table(name = "employee")
public class Employee {
    @Id
    Long id;
    String name;
    ...
}

@RestResource
public interface EmployeeRepository extends PagingAndSortingRepository<Employee, Long> {
}

@Entity
@Immutable
@Subselect(value = "select id, name, salary from employee")
public class VEmployeeSummary {
    @Id
    Long id;
    ...
}

@RestResource(exported = false)
public interface VEmployeeRepository extends JpaRepository<VEmployeeSummary, Long> {
}
Context
Two packages in the monolithic application had different requirements. One needed to expose the entities for the UI in a PagingAndSortingRepository including CRUD functions. The other was for an aggregating backend report component without paging but with sorting.
I know I could have filtered the results from the PagingAndSortingRepository after requesting Pageable.unpaged(), but I just wanted a basic JPA repository that returns a List for some filters.
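For illustration, such a filter could be a derived query on the read-only summary repository that returns a plain, sorted List; the method name and filter below are made up:
import java.util.List;

import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RestResource;

@RestResource(exported = false)
public interface VEmployeeRepository extends JpaRepository<VEmployeeSummary, Long> {

    // Hypothetical filter: a List result, sortable but not paged.
    List<VEmployeeSummary> findByNameContaining(String namePart, Sort sort);
}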
So, this does not directly answer the question, but may help solve the underlying issue.
You can only have one repository per entity... however, you can have multiple entities per table; thus, having multiple repositories per table.
In a bit of code I wrote, I had to create two entities... one with an auto-generated id and another with a preset id, but both pointing to the same table:
@Entity
@Table(name = "line_item")
public class LineItemWithAutoId {
    @Id
    @GeneratedValue(generator = "system-uuid")
    @GenericGenerator(name = "system-uuid", strategy = "uuid")
    private String id;
    ...
}

@Entity
@Table(name = "line_item")
public class LineItemWithPredefinedId {
    @Id
    private String id;
    ...
}
Then, I had a repository for each:
public interface LineItemWithoutId extends Repository<LineItemWithAutoId, String> {
    ...
}

public interface LineItemWithId extends Repository<LineItemWithPredefinedId, String> {
    ...
}
For the posted issue, you could have two entities. One would be the full entity, with getters and setters for everything. The other would be an entity with setters for everything, but getters only for the fields you want to make public. Does this make sense?
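A hypothetical sketch of that idea, reusing the employee table from the earlier answer (class and field names are illustrative, and each entity would live in its own file):
import java.math.BigDecimal;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Full entity: exposes everything.
@Entity
@Table(name = "employee")
public class EmployeeFull {
    @Id
    private Long id;
    private String name;
    private BigDecimal salary;

    public Long getId() { return id; }
    public String getName() { return name; }
    public BigDecimal getSalary() { return salary; }
    public void setSalary(BigDecimal salary) { this.salary = salary; }
}

// Restricted view of the same table: the salary column is mapped,
// but it has no getter, so it is never exposed through this entity.
@Entity
@Table(name = "employee")
public class EmployeePublicView {
    @Id
    private Long id;
    private String name;
    private BigDecimal salary;

    public Long getId() { return id; }
    public String getName() { return name; }
}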

spring data rest hateoas dynamically hide repository

I'm still trying to figure out what exactly it is I am asking, but this is fallout from a discussion in the office. The dilemma is that, on a mapping set to eager, with a repository defined for the entity the mapping points to, a link is produced. Some of the time that is fine, but some of the time I'd rather have the object fetched itself. If there is no repository defined for that entity, then that is what happens with the eager fetch strategy. What would be ideal is if I could pass in a parameter and have the existence of that repository disappear or reappear.
Not totally following, but either the repo exists or not. If you want to be able to access entities of type X independently of other entity types, then you have to define a repo for type X.
I think you could achieve something similar using projections.
So you define a repository for your association entity. By default Spring Data REST will just render a link to this entity and not embed it in the response.
Then you define a projection with a getter for your associated entity. You can choose on the client side if you want the projection by adding the projection query parameter.
So let's say you have a person with an address - an exported repository exists for Person and Address:
@Entity
public class Person {
    @Id @GeneratedValue
    private Long id;
    private String firstName, lastName;

    @OneToOne
    private Address address;
    …
}
interface PersonRepository extends CrudRepository<Person, Long> {}
interface AddressRepository extends CrudRepository<Address, Long> {}
Your projection could look like this:
@Projection(name = "inlineAddress", types = { Person.class })
interface InlineAddress {
    String getFirstName();
    String getLastName();
    Address getAddress();
}
And if you call http://localhost/persons/1?projection=inlineAddress you get the address embedded; without the projection parameter it is just linked.

Multiple Relationship classes with the same type

Using spring-data-neo4j, I want to create two classes that use @RelationshipEntity(type="OWNS") to link a Person class to both a Pet and a Car.
#RelationshipEntity(type="OWNS")
public class OwnsCar {
#Indexed
private String name;
#StartNode
private Person person;
#EndNode
private Car car;
}
#RelationshipEntity(type="OWNS")
public class OwnsPet {
#Indexed
private String name;
#EndNode
private Person person;
#StartNode
private Pet pet;
}
This saves to the graph database properly with no problems, as I can query the actual node and relationship and see the type, etc.
But when I attempt to use @RelatedTo(type="OWNS", elementClass=Pet.class) I either get a class cast exception, or, when using lazy initialization, I get incorrect results.
@NodeEntity
public class Person {
    @Indexed
    private String name;

    @RelatedTo(type="OWNS", direction=Direction.OUTGOING, elementClass=Pet.class)
    private Set<Pet> pets;

    @RelatedTo(type="OWNS", direction=Direction.OUTGOING, elementClass=Car.class)
    private Set<Car> cars;
}
The result I get when I attempt to print out my person (my toString() has been omitted, but it simply calls toString() on each field) is this:
Person [nodeId=1, name=Nick, pets=[Car [nodeId=3, name=Thunderbird]], cars=[Car [nodeId=3, name=Thunderbird]]]
Does anyone know if this can be done, should be done, is just a bug or a feature that is missing?
It seems like the problem is that the annotation causes Spring Data Neo4j to prioritize the relationship name. I tried the same on another sample I created: if both annotations contain type="OWNS", it mixes both 'objects'. When I omit this information and only use the direction and element class, it works for me.
Unfortunately this will lead to a problem if you are using another @RelatedTo annotation with more Pets or Cars related through some other relationship. As it would not differentiate between "OWNS" and any other relation to a Pet type, the set returns all related pets (example: peter ->(HATES relationship)-> dogs).
Whether it's a bug or not, I can't tell... But as for the database: there are only nodes and relationships, and neither carries your Java types, so Neo4j does not know anything about your 'Pet' or 'Car' class. Spring Data Neo4j handles this either by indexing all nodes per type and setting a type attribute, or by using a specific graph layout (with subreferences). Even if you wanted to fetch all pets of a person with a traversal description, you would have much more code to write, since the outgoing relationships named 'OWNS' contain two types of objects.
I would recommend using two different names. It's easier to write your custom traversals/queries later on, and it's probably even faster as well, because no class-type comparison is needed. Is there any reason why you would need these specific names?
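A minimal sketch of that suggestion, reusing the classes from the question (the relationship type names are just examples, the imports assume the Spring Data Neo4j 2.x/3.x annotations used above, and each class would live in its own file):
import java.util.Set;

import org.neo4j.graphdb.Direction;
import org.springframework.data.neo4j.annotation.EndNode;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.annotation.RelatedTo;
import org.springframework.data.neo4j.annotation.RelationshipEntity;
import org.springframework.data.neo4j.annotation.StartNode;

// Distinct relationship types, so the two sets no longer overlap.
@RelationshipEntity(type = "OWNS_CAR")
public class OwnsCar {
    @StartNode
    private Person person;
    @EndNode
    private Car car;
}

@RelationshipEntity(type = "OWNS_PET")
public class OwnsPet {
    @StartNode
    private Person person;
    @EndNode
    private Pet pet;
}

@NodeEntity
public class Person {
    @RelatedTo(type = "OWNS_PET", direction = Direction.OUTGOING, elementClass = Pet.class)
    private Set<Pet> pets;

    @RelatedTo(type = "OWNS_CAR", direction = Direction.OUTGOING, elementClass = Car.class)
    private Set<Car> cars;
}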
PS: It is possible that not everything here is 100% accurate. I don't know Spring Data Neo4j in detail, but that's what I have figured out so far.
