I've got this on the parent object:
@OneToMany(mappedBy = "idUser", cascade = CascadeType.MERGE)
public List<Directions> directions;
And in my controller I've got this:
public static void userUpdate(String apikey, JsonObject body) {
    if (validate(apikey)) {
        Long idUser = decode(apikey);
        User oldUser = User.findById(idUser);
        Map<String, User> userMap = new HashMap<String, User>();
        Type arrayListType = new TypeToken<Map<String, User>>(){}.getType();
        userMap = gson().fromJson(body, arrayListType);
        User user = userMap.get("user");
        oldUser.em().merge(user);
        oldUser.save();
    } else {
        forbidden();
    }
}
It updates the parent object, but when I change something on the child objects it doesn't update them, and neither Hibernate nor Oracle reports any problem.
Does anyone know why it doesn't update the child objects?
Thanks all!
Updated with solution!
This is how it works for me; as @JB Nizet said, you've got to save the child objects too.
oldUser.em().merge(user);
oldUser.save();
for (Direction direction : oldUser.directions) {
    direction.save();
}
Another approach!
With this
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(name = "SASHNBR", insertable = true, updatable = true)
public List<Direction> directions;
I've been able to just call oldUser.save() and get the child objects saved.
AFAIK, Play requires a call to save() on all the modified entities. So you probably need to iterate through the user's directions and save them as well:
for (Direction direction : user.getDirections()) {
    direction.save();
}
I have once again joined a project which uses Hibernate (Spring/Hibernate/Kotlin, to be exact) and have read through a number of Vlad Mihalcea's wonderful articles to refresh my knowledge of this ORM (this article is of my current interest).
What I'm trying to understand is how I should treat add/update/delete operations for nested entities (bidirectional @OneToMany). Here is what I don't understand.
Say we have a Post entity:
@Entity(name = "Post")
@Table(name = "post")
class Post(
    @Id
    @GeneratedValue
    var id: Long? = null,

    val title: String,

    @OneToMany(mappedBy = "post", cascade = [CascadeType.ALL], orphanRemoval = true)
    private val comments: MutableList<PostComment> = mutableListOf()
) {
    fun addComment(comment: PostComment) {
        comments.add(comment)
        comment.post = this
    }

    fun removeComment(comment: PostComment) {
        comments.remove(comment)
        comment.post = null
    }
}
And a PostComment entity:
@Entity(name = "PostComment")
@Table(name = "post_comment")
class PostComment(
    @Id
    @GeneratedValue
    var id: Long? = null,

    val review: String,

    @ManyToOne(fetch = FetchType.LAZY)
    var post: Post
) {
    override fun equals(o: Any?): Boolean {
        if (this === o) return true
        if (o !is PostComment) return false
        return id != null && id == o.id
    }

    override fun hashCode(): Int {
        return javaClass.hashCode()
    }
}
All in all everything is good, but here are a couple of things I don't know how to cover:
In fact, the Post class won't compile, since I set the post field of PostComment to null while it is not nullable. What is a good practice to handle this? Should I make all relations nullable in Kotlin just because Hibernate requires it to be so, even though it contradicts the business logic?
It is more or less clear how to add and delete nested entities, but what should we do if we need to update an already existing nested entity? Let's imagine we have a Post(id=1, title="lovely post", comments=[PostComment(id=15, review="good", post=this)]) and we get an update action with the following PostDto(id=1, title="not that nice post", comments=[PostComment(id=15, review="bad", post=this)]). As you can see, we need to update title for Post and review for PostComment. If we take a look at Vlad's article I linked above, we do not see any update methods. I think it was just omitted since it is not related to the article's topic.
But I wonder what is the good practice to handle such an update? Something like the following two approaches comes to mind, but I'm not sure these are the best things to do:
@Entity(name = "Post")
@Table(name = "post")
class Post(
    // fields...
) {
    fun addComment(comment: PostComment) {
        comments.add(comment)
        comment.post = this
    }

    fun removeComment(comment: PostComment) {
        comments.remove(comment)
        comment.post = null
    }

    // not effective, since it issues delete/insert queries, but clean
    fun updateComment(comment: PostComment) {
        val commentId = comment.id!!
        comments.removeIf { it.id == commentId }
        comment.post = this
        comments.add(comment)
    }

    // effective, since it issues only an update query, but dirty as hell
    fun updateComment(commentId: Long, review: String) {
        val comment = comments.find { it.id == commentId }!!
        comment.review = review
    }
}
No longer relevant. Refer to the good explanation by @Chris in the comments section.
Imagine we need an endpoint to update a comment only. What is the best way to organise our code base for such a scenario? Should we always update it like this (always fetching the old post, which looks inefficient), or is there a better/more efficient approach?
@Transactional
fun reassignComment(newPostId: Long, commentDto: CommentDto) {
    val comment = commentRepo.findByIdOrNull(commentDto.id)!!
    val oldPost = comment.post
    val newPost = postRepo.findByIdOrNull(newPostId)!!
    oldPost.removeComment(comment)
    newPost.addComment(comment)
}
Thanks to everyone for your time and input!
Posting for a colleague: we've encountered unexpected data-fetching behaviour and would like to ask for any ideas on how and why it could happen this way. The problem is that we have managed entities with the same id but in a different state. Unfortunately, it's not reproducible locally. We can see that behaviour on an AWS-hosted EC2 Intel instance and are only able to verify it from logs.
Used classes:
class History {
    private Status status;

    @OneToMany(mappedBy = HistoryEntry_.HISTORY)
    @OrderBy("id ASC")
    @BatchSize(size = 10)
    private List<HistoryEntry> entries = new LinkedList<>();
}
// binds 2 parts of the system
class HistoryEntry {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "history_id", updatable = false, nullable = false)
    private History history;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "process_id", updatable = false, nullable = false)
    private ProcessInfo processInfo;
}
class ProcessInfo {
    @OneToMany(mappedBy = HistoryEntry_.PROCESS_INFO)
    // just for deterministic ordering within the transaction
    @OrderBy("id ASC")
    @BatchSize(size = 10)
    private List<HistoryEntry> entries = new LinkedList<>();
}
Initial State
History history1 = new History(Status.PENDING);
ProcessInfo processInfo1 = new ProcessInfo();
ProcessInfo processInfo2 = new ProcessInfo();
HistoryEntry entryA = new HistoryEntry(history1, processInfo1);
HistoryEntry entryB = new HistoryEntry(history1, processInfo1);
History1 in PENDING status containing a list of [A, B] entries
ProcessInfo1 containing a list of [A, B] entries
ProcessInfo2 without entries
New Entry linking to History1
The execution runs in a single transaction
// returns a list with 2 entries, A and B
List<HistoryEntry> entries = historyEntryRepo.findAllByProcessInfoAndNotArchived(processInfo1);

// a set with a single history1; comparison is done by id
Set<History> histories = entries.stream()
        .map(HistoryEntry::getHistory)
        .collect(Collectors.toSet());

// expecting a single element
if (histories.size() > 1) {
    throw new IllegalStateException();
}
History history1 = histories.iterator().next();

entityManager.lock(history1, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

HistoryEntry entryC = new HistoryEntry();
entryC.setHistory(history1);
history1.getEntries().add(entryC);
history1.setStatus(Status.COMPLETED);

entryC.setProcessInfo(processInfo2);
processInfo2.getEntries().add(entryC);

historyEntryRepo.save(entryC);
Inconsistent Data
Within the same transaction, I want to verify the status of each history entity.
Set<History> histories1 = processInfo1.getEntries().stream()
        .map(HistoryEntry::getHistory)
        .collect(Collectors.toSet());

Set<History> histories2 = processInfo2.getEntries().stream()
        .map(HistoryEntry::getHistory)
        .collect(Collectors.toSet());
Both return a single result - History1 - with the same id but with a different status field value. This is unexpected, to say the least.
History(id=1, status=PENDING) unexpected
History(id=1, status=COMPLETED)
Reloading for Consistency
It’s “fixed” by doing a fetch from the repo anew.
historyEntryRepo.findAllByProcessInfo(processInfo1).stream()
        .map(HistoryEntry::getHistory)
        .collect(Collectors.toSet());

historyEntryRepo.findAllByProcessInfo(processInfo2).stream()
        .map(HistoryEntry::getHistory)
        .collect(Collectors.toSet());
Now, the outcome is the same and expected: History(id=1, status=COMPLETED)
Is this some caching or rather cache-invalidation gone wrong?
I've run into a problem while developing a Spring Boot application with the Criteria API.
I have a simple Employer entity which contains a set of Job IDs (not entities; they're pulled out using a repository when needed). Employer and Job are in a many-to-many relationship. This mapping is only used for the purpose of finding Employers with no jobs.
public class Employer {

    @ElementCollection
    @CollectionTable(
        name = "EMPLOYEE_JOBS",
        joinColumns = @JoinColumn(name = "EMP_ID")
    )
    @Column(name = "JOB_ID")
    private final Set<String> jobs = new HashSet<>(); // ids of jobs for an employee
}
Then I have a generic function which returns a predicate (Specification) for a given attributePath and command, for any IEntity implementation.
public <E extends IEntity> Specification<E> createPredicate(String attributePath, String command) {
    return (r, q, b) -> {
        Path<?> currentPath = r;
        for (String attr : attributePath.split("\\.")) {
            currentPath = currentPath.get(attr);
        }
        if (Collection.class.isAssignableFrom(currentPath.getJavaType())) {
            // currentPath points to a PluralAttribute
            if (command.equalsIgnoreCase("empty")) {
                return b.isEmpty((Expression<Collection<?>>) currentPath);
            }
        }
        return null; // no restriction for unsupported commands
    };
}
If I want to get a list of all employers who currently have no job, I wish I could create the predicate as follows:
Specification<Employer> spec = createPredicate("jobs", "empty");
// or, if I want only Works that were done by an employer with no job at this moment
Specification<Work> spec = createPredicate("employerFinished.jobs", "empty");
This unfortunately does not work and throws the following exception:
org.hibernate.hql.internal.ast.QuerySyntaxException:
unexpected end of subtree
[select generatedAlias0 from Employer as generatedAlias0
where generatedAlias0.jobs is empty]
Is there a workaround to make this work?
This bug in Hibernate has been known since September 2011 but sadly hasn't been fixed yet. (Update: this bug is fixed as of 5.4.11.)
https://hibernate.atlassian.net/browse/HHH-6686
Luckily there is a very easy workaround. Instead of:
"where generatedAlias0.jobs is empty"
you can use
"where size(generatedAlias0.jobs) = 0"
This way the query will work as expected.
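Applied to the generic createPredicate() shown in the question, the same rewrite can be expressed through the Criteria API with CriteriaBuilder.size(...) compared against 0. A minimal sketch under that assumption (the method name and the unchecked cast are illustrative, not from the original answer):
// Sketch: the "empty" branch using size(collection) = 0 instead of isEmpty(collection),
// which sidesteps the "unexpected end of subtree" error from HHH-6686.
public <E extends IEntity> Specification<E> createIsEmptyPredicate(String attributePath) {
    return (r, q, b) -> {
        Path<?> currentPath = r;
        for (String attr : attributePath.split("\\.")) {
            currentPath = currentPath.get(attr);
        }
        @SuppressWarnings("unchecked")
        Expression<Collection<?>> collection = (Expression<Collection<?>>) currentPath;
        return b.equal(b.size(collection), 0);
    };
}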
I have a parent which stores a list of children. When I update the children (add/edit/remove), is there a way to automatically decide which child to remove or edit based on the foreign key? Or do I have to manually check through all the children to see which are new or modified?
Parent Class
@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "permit")
    private List<Coordinate> coordinates;
}
Child Class
@Entity
public class Coordinate extends Identifiable {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "permit_id", referencedColumnName = "id")
    private Permit permit;

    private double lat;
    private double lon;
}
Parent's Controller
#PutMapping("")
public #ResponseBody ResponseEntity<?> update(#RequestBody Permit permit) {
logger.debug("update() with body {} of id {}", permit, permit.getId());
if (!repository.findById(permit.getId()).isPresent()) {
return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
}
Permit returnedEntity = repository.save(permit);
repository.flush();
return ResponseEntity.ok(returnedEntity);
}
=EDIT=
Controller Create
@Override
@PostMapping("")
public @ResponseBody ResponseEntity<?> create(@RequestBody Permit permit) {
    logger.debug("create() with body {}", permit);
    if (permit == null || permit.getId() != null) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
    }
    List<Coordinate> coordinates = permit.getCoordinates();
    if (coordinates != null) {
        for (int x = 0; x < coordinates.size(); ++x) {
            Coordinate coordinate = coordinates.get(x);
            coordinate.setPermit(permit);
        }
    }
    Permit returnedEntity = repository.save(permit);
    repository.flush();
    return ResponseEntity.ok(returnedEntity);
}
Controller Update
#PutMapping("")
public #ResponseBody ResponseEntity<?> update(#RequestBody Permit permit) {
logger.debug("update() with body {} of id {}", permit, permit.getId());
if (!repository.findById(permit.getId()).isPresent()) {
return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
}
List<Coordinate> repoCoordinate = coordinateRepository.findByPermitId(permit.getId());
List<Long> coordinateIds = new ArrayList<Long>();
for (Coordinate coordinate : permit.getCoordinates()) {
coordinate.setPermit(permit);
//if existing coordinate, save the ID in coordinateIds
if (coordinate.getId() != null) {
coordinateIds.add(coordinate.getId());
}
}
//loop through coordinate in repository to find which coordinate to remove
for (Coordinate coordinate : repoCoordinate) {
if (!(coordinateIds.contains(coordinate.getId()))) {
coordinateRepository.deleteById(coordinate.getId());
}
}
Permit returnedEntity = repository.save(permit);
repository.flush();
return ResponseEntity.ok(returnedEntity);
}
I have tested this and it is working. Is there no simpler way of doing this?
You were close to the solution. The only thing you're missing is orphanRemoval = true on your one-to-many mapping:
@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    @OneToMany(mappedBy = "permit", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Coordinate> coordinates;
}
Flagging the mapping for orphan removal will tell the underlying ORM to delete any entities that no longer belong to any parent entity. Since you removed a child element from the list, it will be deleted when you save the parent element.
Creating new elements and updating old is based on the CascadeType. Since you have CascadeType.ALL all elements in the list without an ID will be saved to the database and assigned a new ID when you save the parent entity, and all elements that are already in the list and have an ID will be updated.
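As a small illustration of that behaviour (a sketch, assuming the Permit/Coordinate classes above with simple getters and setters; not part of the original answer):
// existing permit, deserialized from the request with its id set
Permit permit = new Permit();
permit.setId(42L);

Coordinate existing = new Coordinate();   // already persisted earlier, so it has an id
existing.setId(7L);
existing.setPermit(permit);

Coordinate fresh = new Coordinate();      // no id yet
fresh.setPermit(permit);

permit.setCoordinates(Arrays.asList(existing, fresh));

// with CascadeType.ALL + orphanRemoval = true, per the explanation above:
// - "existing" (id 7) is updated
// - "fresh" (no id) is inserted and gets a generated id
// - any coordinate previously attached to permit 42 but missing from the list is deleted
repository.save(permit);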
On a side note, you might need to update the setter method for List<Coordinate> coordinates to look something like:
public void setCoordinates(List<Coordinate> coordinates) {
    this.coordinates = coordinates;
    this.coordinates.forEach(coordinate -> coordinate.setPermit(this));
}
Or simply use @JsonManagedReference and @JsonBackReference if you're working with JSON.
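For illustration, a minimal sketch of how those Jackson annotations could look on the Permit/Coordinate pair from the question (only the relevant fields are shown; this is not part of the original answer):
@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    // forward side of the reference: serialized normally
    @JsonManagedReference
    @OneToMany(mappedBy = "permit", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Coordinate> coordinates;
}

@Entity
public class Coordinate extends Identifiable {
    // back side of the reference: omitted during serialization and re-linked
    // during deserialization, which breaks the Permit <-> Coordinate JSON cycle
    @JsonBackReference
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "permit_id", referencedColumnName = "id")
    private Permit permit;

    private double lat;
    private double lon;
}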
I have a parent which stores a list of children.
Let's write the DDL for it.
TABLE parent (
    id integer pk
)

TABLE child (
    id integer pk,
    parent_id integer FOREIGN KEY (parent.id)
)
When i update the children(add/edit/remove), is there a way to automatically decide which child to remove or edit based on the foreign key?
Assuming you have a new child #5 bound to the parent #2 and:
The FK in the DDL is correct
The entities know the FK
You are using the same JPA context
The transaction is executed correctly
Then every call to parent.getChilds() must(!) return all the entities that existed before your transaction was executed, plus the same instance of the entity that you have just committed to the database.
Then, if you remove child #5 of parent #2 and the transaction executed successfully, parent.getChilds() must return all the entities without child #5.
Special case:
If you remove parent #2 and you have cascade-delete in the DDL as well as in the Java code, all children must be removed from the database along with the parent #2 you just removed. In this case parent #2 is no longer bound to the JPA context, and neither are any of the children of parent #2.
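A minimal sketch of what that cascade-delete could look like on the Java side for the parent/child tables above (the entity and field names mirror the DDL and are illustrative; the DDL side would typically add ON DELETE CASCADE to the foreign key):
@Entity
public class Parent {

    @Id
    @GeneratedValue
    private Long id;

    // removing a Parent cascades the removal to its children, so they are
    // deleted from the database and detached from the persistence context
    @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE, orphanRemoval = true)
    private List<Child> children = new ArrayList<>();
}

@Entity
public class Child {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id", nullable = false)
    private Parent parent;
}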
=Edit=
You could use merge. This will work for constructs like this:
POST {
    "coordinates": [{
        "lat": "51.33",
        "lon": "22.44"
    }, {
        "lat": "50.22",
        "lon": "22.33"
    }]
}
It will create one row in table "permit" and two rows in table "coordinate"; both coordinates are bound to the permit row. The result will include the generated ids.
But: you will have to do the validation work yourself (check that the id is null, check that the coordinates do not refer to a different permit, ...)!
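A rough sketch of what that validation could look like before saving, assuming the Permit/Coordinate model from the question (the exact checks and the method name are illustrative, not from the original answer):
// illustrative pre-save validation; throws if the payload is not a valid new permit
private void validateNewPermit(Permit permit) {
    if (permit.getId() != null) {
        throw new IllegalArgumentException("id must be null for a new permit");
    }
    List<Coordinate> coordinates = permit.getCoordinates();
    if (coordinates != null) {
        for (Coordinate coordinate : coordinates) {
            // a posted coordinate must not already belong to a different permit
            if (coordinate.getPermit() != null && coordinate.getPermit() != permit) {
                throw new IllegalArgumentException("coordinate refers to a different permit");
            }
            coordinate.setPermit(permit);
        }
    }
}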
The removal of coordinates must be done using the DELETE method:
DELETE /permit/972/coordinate/3826648305
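For completeness, a sketch of what such a DELETE endpoint could look like in the same Spring MVC style as the controllers above (the path and the ownership check are assumptions, not from the original answer):
// hypothetical endpoint matching DELETE /permit/{permitId}/coordinate/{coordinateId}
@DeleteMapping("/permit/{permitId}/coordinate/{coordinateId}")
public @ResponseBody ResponseEntity<?> deleteCoordinate(@PathVariable Long permitId,
                                                        @PathVariable Long coordinateId) {
    Optional<Coordinate> found = coordinateRepository.findById(coordinateId);
    // only delete the coordinate if it exists and belongs to the given permit
    if (!found.isPresent() || found.get().getPermit() == null
            || !permitId.equals(found.get().getPermit().getId())) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(null);
    }
    coordinateRepository.deleteById(coordinateId);
    return ResponseEntity.ok().build();
}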
I am new to MongoDB and struggling to understand how document updates work.
I have a document called 'menu':
{
    "someId": "id123",
    "someProperty": "property123",
    "list": [{
        "innerProperty": "property423"
    }]
}
which maps to my entity:
@Document(collection = "menu")
public class Menu {
    @Id
    private String id;
    private String someId;
    private String someProperty;
    private List<SomeClass> list;
    // accessors
}
When I try to find and update this document like this, it does not update the document. It does find the menu, as it returns the original entity with an Id:
@Override
public Menu update(Menu menu) {
    Query query = new Query(
            Criteria.where("someId").is(menu.getSomeId()));
    Update update = Update.update("menu", menu);
    return mongoOperations.findAndModify(query, update,
            FindAndModifyOptions.options().returnNew(true), Menu.class);
}
But if I change it to this, it works:
@Override
public Menu update(Menu menu) {
    Query query = new Query(
            Criteria.where("someId").is(menu.getSomeId()));
    Update update = new Update().set("someProperty", menu.getSomeProperty())
            .set("list", menu.getList());
    return mongoOperations.findAndModify(query, update,
            FindAndModifyOptions.options().returnNew(true), Menu.class);
}
I don't really like this second method, where each element of the document is individually set; as you might imagine, I have a rather large document and this is prone to errors.
Why does the first method not work? And what could be a better approach to update the document?
Check out the docs for findAndModify - it returns the version of the document before the fields were modified. If you do a new find() straight after, you will see that your changes were actually saved to MongoDB.