Spring Data REST POST to sub-resource

Let's say I have the following structure:
@Entity
class Person extends AbstractPersistable<Long> {
    String name;
    String surname;
}
@Entity
class Task extends AbstractPersistable<Long> {
    String description;
    @ManyToOne
    Person person;
}
If I follow proper HAL guidelines I'm not supposed to expose entity ids. Since I don't have a bidirectional relationship, I can't PUT or PATCH to http://localhost:8080/persons.
Even if I did create the relation, I probably wouldn't want to first POST the Task to /tasks and then PUT to /persons (mobile clients are going to kill me). But even then I don't have the Task ID, even from the returned entity, so I can't PUT to the Person entity. (I obviously could string-parse it out of the links, but I don't think that's appropriate.)
I probably wouldn't want a list of 1000 tasks in the Person entity either, so not exporting the Task entity is not really an option (and this means PATCH will not work).
So how am I supposed to associate the Person with the Task if I cannot get its id? What is the correct approach?

If you want to associate a Task with a Person you need the link to the person.
Let's say the person URI is http://localhost/persons/1.
Then you can assign the person to a task by just passing that URI in the person attribute.
So a POST to /tasks could look like this:
{
"description": "some text",
"person": "http://localhost/persons/1"
}
Spring Data REST will look up the person and take care of the rest.
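If the Task already exists, the association can also be set afterwards by PUTting a URI list to the association resource, with Content-Type text/uri-list. A sketch, assuming Spring Data REST's default paths (the task id 42 is made up):

```http
PUT /tasks/42/person HTTP/1.1
Content-Type: text/uri-list

http://localhost/persons/1
```

A DELETE to the same /tasks/42/person URI should remove the association again, provided the relation is optional.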

In HAL, links are used to reference related resources rather than ids, and a HAL resource should always return a link to itself, which serves as its unique identifier.
If I'm not mistaken, you can also annotate fields with @DBRef and links will be generated for you.
If you want related resources to actually show up inline with the data, you'll have to define a Projection. See more here:
Spring Boot REST Resource not showing linked objects (sets)
Last but not least, if you want Projections to also contain links, you'll have to write ResourceProcessors for them, see here:
How to add links to Spring Data REST projections?

Related

What name should gRPC messages have to avoid conflicts with internal classes?

I have a class Book whose information needs to be passed over gRPC.
message Book {
...
}
But if I use this name there will be conflicts between one class and the other. Is there a convention for this? What name should I use for the gRPC equivalents?
Any meaningful and consistent name will be fine. This problem is not specific to protobuf/gRPC. Oftentimes we have an entity class called Book and a corresponding DTO (data transfer object) BookDto with more or less the same fields; we add the Dto suffix to the entity class name to create BookDto.
These protobuf messages are basically DTOs, so you can follow the same convention.
You could also keep the name Book and refer to one of the classes by its fully qualified name to avoid the conflict. You know this option already, and I suspect you don't like it.
Is it really a Book object, though? It might be a BookSearchRequest to query some books, and you might expect a BookSearchResponse from your gRPC service.
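To sketch that last suggestion as a .proto fragment (all names here are illustrative, not a fixed convention):

```proto
syntax = "proto3";

package library;

// Request/response pairs are named after the operation rather than
// the entity, so they never collide with the domain class Book.
message BookSearchRequest {
  string title_query = 1;
}

message BookSearchResponse {
  repeated BookDto books = 1;
}

message BookDto {
  string title = 1;
  string author = 2;
}

service BookService {
  rpc SearchBooks(BookSearchRequest) returns (BookSearchResponse);
}
```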

Spring Boot REST - how to POST/PATCH in many-to-many relationship

I have a problem with adding a many-to-many relationship in my REST API.
Let's say we have two entities with a many-to-many relationship, Employee and Task (Employee has a Set<Task> and Task has a Set<Employee>).
Some specific task is accessible via this endpoint:
http://localhost:8080/api/tasks/2
Tasks assigned to Employee with id 88 is accessible via:
http://localhost:8080/api/employees/88/tasks
The goal is to POST/PATCH this link to the endpoint.
Could you give me a hint, how this endpoint should look like in controller?
I tried something like this, but it's not working.
@PatchMapping("/{employeeId}/tasks")
public Task addTask(@RequestBody Task task, @PathVariable Long taskId) { ... }
Second problem: I would like to use Postman. Could you tell me which Content-Type I should choose? And how should this link be formatted?
Looking forward to your answers!
EDIT
Do I have to add another constructor that takes a URI?
By definition the PATCH method applies a partial update to a given resource, while the PUT method is used to replace a given resource entirely. The keyword here is that both PATCH and PUT are specific to a given resource.
The POST method is used to create a new resource.
Therefore, if you want to update just a few fields of your resource rather than replace it entirely, it makes sense to use the PATCH method instead of the PUT method.
The PATCH request body describes how the resource shall be updated, with a series of operations. One format that you can use to describe these operations is the JSON Patch.
Since the PATCH operation is specific to a given resource, and making use of the json-patch library, your controller method should be something like:
@PatchMapping("/{employeeId}/tasks/{taskId}")
public Task updateTask(@RequestBody JsonPatch taskPatch, @PathVariable Long employeeId, @PathVariable Long taskId) { ... }
Note that this is a different thing from POST, with a different method (updateTask). For example, if you want to update one field of your task resource (given by taskId), the JSON Patch sent in the request body from your client (which can be Postman, with Content-Type: application/json-patch+json) would be something like
[{
"op":"replace",
"path":"/field",
"value":"newValue"
}]
There are different operations, such as add, remove, replace, copy and test.
Now in your code you will need to apply this patch to the existing resource. This reference shows how you can do that:
https://www.baeldung.com/spring-rest-json-patch
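To make the mechanics concrete, here is a minimal plain-Java sketch of what applying a single replace operation amounts to. The class and method names are made up for illustration; the real json-patch library additionally handles nested paths, arrays, and the other five operations:

```java
import java.util.HashMap;
import java.util.Map;

class PatchSketch {
    // Apply a single "replace" operation: a path like "/field" targets key "field".
    static Map<String, Object> applyReplace(Map<String, Object> resource,
                                            String path, Object value) {
        String key = path.startsWith("/") ? path.substring(1) : path;
        if (!resource.containsKey(key)) {
            // Per RFC 6902, "replace" requires the target location to exist.
            throw new IllegalArgumentException("replace target does not exist: " + path);
        }
        Map<String, Object> patched = new HashMap<>(resource);
        patched.put(key, value);
        return patched;
    }

    public static void main(String[] args) {
        Map<String, Object> task = new HashMap<>();
        task.put("description", "old text");
        Map<String, Object> patched = applyReplace(task, "/description", "new text");
        System.out.println(patched.get("description")); // new text
    }
}
```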
I hope this helps.

DDD configurable contextual validation

We have an aggregate root named Document and a value object named Metadata. There are two types of domain users, "data entry operators" and "regular users". When a data entry operator creates a Document, 4 of the 20 fields in Metadata are mandatory, but when a regular user creates a Document, 10 of the 20 fields in Metadata are mandatory.
class Document {
    Metadata metadata;
}
We could think of two solutions.
Create two separate value objects for data entry operators and regular users, and let the value objects do the validation themselves:
class OperatorMetadata extends Metadata {}
class UserMetadata extends Metadata {}
Or, we could create a factory in the aggregate root Document and let it do the validation.
Is there any better way of doing this?
In addition to this, users may want the set of mandatory fields to be configurable, in which case the configuration comes from the DB. How should this scenario be handled as well?
You may be over-complicating it with the inheritance approach. It also looks like some application-specific concepts are leaking into your domain: is there really a concept of 'regular users' in your business domain for this system? (There could be, but probably not, as it doesn't sound like typical business language or concepts.)
Think in terms of the business and not your program, and ask "How does this entity come into existence?" For example, if the document belongs in some sort of auditable document management registry, you might use the term 'Lodge' for when someone (who? anyone? employees? specialists?) decides to place a document under the control of the registry. This tells us that we need at least two things: information about the person lodging the document, and information about the document itself.
If you are just new Document(params) them into existence, you're missing out on the good bits of DDD. Expanding on the assumptions, you can align your code to the business as follows.
class Document {
    private Document() {}
    public DocumentId Id { get; private set; }
    public Metadata MetaData { get; private set; }

    // Even with no comments, the types and parameter names tell me a lot about what this does.
    public static Document Lodge(Person lodgedBy, Metadata lodgementDetails) {
        if (ValidateMetaData(lodgedBy, lodgementDetails)) {
            return new Document {
                Id = DocumentId.Generate(),
                MetaData = lodgementDetails
            };
        } else {
            throw new DomainExceptions.InvalidLodgementException();
        }
    }
}
If the validation rules are dynamic ("may want" is a massive warning flag that you have not completed your preliminary analysis: stop coding now), then you have a new domain entity, 'LodgementValidation'. You could include this in the Lodge() parameters, move the Lodge() routine to a domain service class which orchestrates the now quite complicated process of document creation, or something else more in line with what is actually happening. (It might be a purely application-layer requirement for a specific UI; maybe the business rules actually don't care about the metadata.)
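As a rough sketch of such a 'LodgementValidation' concept (all names are illustrative, and metadata is flattened to a map of field names to values; in practice the mandatory-field set would be loaded from the DB per user type):

```java
import java.util.Map;
import java.util.Set;

class LodgementValidation {
    private final Set<String> mandatoryFields;

    // In a real system this set would come from persisted configuration,
    // looked up for the kind of person doing the lodging.
    LodgementValidation(Set<String> mandatoryFields) {
        this.mandatoryFields = mandatoryFields;
    }

    // Metadata is modelled as field name -> value for the sketch.
    boolean isValid(Map<String, String> metadata) {
        for (String field : mandatoryFields) {
            String value = metadata.get(field);
            if (value == null || value.isEmpty()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        LodgementValidation operatorRules =
                new LodgementValidation(Set.of("title", "author", "date", "type"));
        System.out.println(operatorRules.isValid(
                Map.of("title", "T", "author", "A", "date", "D", "type", "X"))); // true
    }
}
```

The validation object can then be handed to Lodge() or to a domain service, keeping the rule set out of the Document itself.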
I would recommend re-reading the published literature about DDD. It is more a high-level methodology and set of concepts than procedural instructions; implementation details can and do vary greatly. Sort out your Ubiquitous Language and your Models, then go from there, and remember: Data Model != Domain Model != Application View Model.

Objectify, efficient relationships. Ref<> vs storing id and duplicating fields

I'm having a hard time understanding Objectify entity relationship concepts. Let's say that I have the entities User and UsersAction.
class User{
String nick;
}
class UsersAction{
Date actionDate;
}
Now in the front-end app I want to load many UsersActions and display them, along with the corresponding user's nick. I'm familiar with two ways of dealing with this:
Use Ref<>:
I can put a @Load Ref in UsersAction, which creates a link between these entities. Later, while loading a UsersAction, Objectify will load the proper User.
class User{
String nick;
}
class UsersAction{
@Load Ref<User> user;
Date actionDate;
}
Store the id and duplicate nick in UsersAction:
I can also store the User's id in UsersAction and duplicate the User's nick while saving the UsersAction.
class User{
String nick;
}
class UsersAction{
Long usersId;
String usersNick;
Date actionDate;
}
When using Ref<>, as far as I understand, Objectify will load all needed UsersActions, then all corresponding Users. When using duplication, Objectify will only need to load the UsersActions, and all the data will be there. Now, my question is: is there a significant difference in performance between these approaches? Efficiency is my priority, but the second solution seems ugly and dangerous to me, since it causes data duplication, and when a User changes his nick I need to update his Actions too.
You're asking whether it is better to denormalize the nickname. It's hard to say without knowing what kinds of queries you plan to run, but generally speaking the answer is probably no. It sounds like premature optimization.
One thing you might consider is making User a @Parent Ref<?> of UsersAction. That way the parent will be fetched at the same time as the action, in the same bulk get. As long as it fits your required transaction throughput (no more than one change per second for the whole User entity group), it should be fine.
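A sketch of what that could look like, assuming Objectify's standard annotations (not compiled here, so treat the exact imports as an assumption):

```java
import java.util.Date;

import com.googlecode.objectify.Ref;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Load;
import com.googlecode.objectify.annotation.Parent;

@Entity
class User {
    @Id Long id;
    String nick;
}

@Entity
class UsersAction {
    @Id Long id;
    // The action now lives in the User's entity group, so Objectify can
    // fetch the user in the same bulk get as the action itself.
    @Parent @Load Ref<User> user;
    Date actionDate;
}
```

Note that a parent key is part of the action's identity, so an action cannot be moved to another user afterwards.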

Cyclic Dependency when using RelationshipEntity

Let's say I have a User and a Product.
The user should be able to rate products, and a rating can have a lot of properties, for example a number of stars from 1 to 5.
I'd like to have the Product and the User in different Maven modules.
However, Product should know its owner, so there is a dependency on the module holding User.
I would also like to have a Rating module that contains everything related to ratings.
I constructed the Rating like this:
@RelationshipEntity(type = "RATES")
public class Rating {
    private Long id;
    @StartNode
    private User rater;
    @EndNode
    private Product ratee;
    @Property
    private RatingProperty property;
    // Getters/Setters
}
Where the RatingProperty contains the int representing the 1 to 5 star rating.
Now I understand from the documentation that I need to have the Rating as an attribute inside some node, because SDN4 doesn't accept it otherwise.
Indeed, when I did not use it as an attribute and tried to save it, I got a null id and no element appeared in the DB.
Since the Rating needs to know both User and Product, I get a cyclic dependency when I try to put the Rating into the User class.
The same happens when I put it into the Product class.
As far as I understand at the moment, using a RelationshipEntity seems not to be possible when the start- and end-node entities are in different Maven modules, because the relationship needs to know both nodes and one of the nodes needs to know the relationship.
This doesn't seem right, so I think I'm misunderstanding something fundamental.
I also tried creating a new NodeEntity inside the Rating module just to hold the Rating. It looked like this:
@NodeEntity
public class RatingWrapper {
    private Long id;
    @Relationship(type = "RATES")
    private Rating rating;
    // Getters/Setters
}
but this way I got the same behavior as when I didn't use the RelationshipEntity as an attribute somewhere.
Do you see a way to do this better?
A RelationshipEntity represents an edge within the graph on which you can set and retrieve properties via your domain objects. However, because Neo4j does not support hyper-edges (edges attached to other edges), those properties must be simple Java properties, not other objects in your domain like RatingProperty.
Try replacing RatingProperty with a simple Integer first and see if that solves your problem. If so, you can then use a custom converter to convert between the Integer property in the graph and your RatingProperty object.
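The conversion logic itself is trivial. A sketch (with Neo4j OGM on the classpath, this class would implement AttributeConverter<RatingProperty, Integer> from org.neo4j.ogm.typeconversion and be wired onto the field with @Convert; treat those names as an assumption):

```java
class RatingPropertyConverter {
    // Minimal stand-in for the domain value object.
    static class RatingProperty {
        final int stars;
        RatingProperty(int stars) {
            if (stars < 1 || stars > 5)
                throw new IllegalArgumentException("stars must be 1..5");
            this.stars = stars;
        }
    }

    // With Neo4j OGM, these two methods would override the
    // AttributeConverter interface (graph side: Integer, entity side: RatingProperty).
    public Integer toGraphProperty(RatingProperty value) {
        return value == null ? null : value.stars;
    }

    public RatingProperty toEntityAttribute(Integer value) {
        return value == null ? null : new RatingProperty(value);
    }

    public static void main(String[] args) {
        RatingPropertyConverter c = new RatingPropertyConverter();
        System.out.println(c.toGraphProperty(c.toEntityAttribute(4))); // 4
    }
}
```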
If your domain objects are in different modules this should not cause any problems: just ensure that all of the relevant packages are enumerated in the argument to the SessionFactory constructor:
new SessionFactory("app.module1.domain", "app.module2.domain", ...)
When testing Vince's advice, I changed the process of creating the relationship. Until now, I persisted the start node, then the end node, and then tried to use a repository (extends GraphRepository<Rating>) to save the RelationshipEntity. Whenever I did this, I got a rating with a null id. When I added the rating as an attribute to the start node and saved the start node instead of saving the new relationship, it worked.
I'm not sure if this is the proposed way, but I got my relationship, so it works for me.
