I have a repository
public interface PersonRepository extends JpaRepository<Person, Long> {}
and the Entity looks like this:
@Data
@Entity
@NoArgsConstructor
@AllArgsConstructor
public class Person {
    @Id
    private Long id;
    @NotBlank
    private String name;
}
I want a method that checks whether all "persons" exist in the database table by id; this is what I have so far:
void checkIfAllPersonsExist(List<Long> personIds) {
    var persons = personRepository.findAllById(personIds);
    if (personIds.size() != persons.size()) {
        personIds.removeAll(persons.stream().map(Person::getId).collect(toList()));
        throw new NotFoundException("Persons with ids [id:%s] do not exist", personIds);
    }
}
I wonder if a Spring Data JPA repository can provide anything more elegant, like a specific named query that returns the ids that do not exist?
If you just want to know whether some of the ids do not exist, you can count them:
@Query("select COUNT(p.id) from Person p where p.id in :ids")
Long countIds(List<Long> ids);
Or the equivalent derived query:
long countByIdIn(Collection<Long> ids);
Or return the list of ids that exist:
@Query("select p.id from Person p where p.id in :ids")
List<Long> getExistingIds(List<Long> ids);
And then filter out what you need.
personIds.removeAll(personRepository.getExistingIds(personIds));
if (!personIds.isEmpty()) {
    throw new NotFoundException("Persons with ids [id:%s] do not exist", personIds);
}
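For the count-based variants above, a minimal sketch of the check could look like this (assuming the ids in personIds are distinct and reusing the NotFoundException from the question):

void checkIfAllPersonsExist(List<Long> personIds) {
    // Compare the number of matching rows with the number of requested ids.
    long existing = personRepository.countByIdIn(personIds);
    if (existing != personIds.size()) {
        throw new NotFoundException("Persons with ids [id:%s] do not exist", personIds);
    }
}

The trade-off is that the count only tells you that something is missing, not which ids are missing.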
First of all, your repository should extend JpaRepository<Person, Long> instead of JpaRepository<Person, String>, because your entity's id type is Long.
The In and NotIn keywords can help you achieve your goal. Please check them out in this document: Query Creation - Spring Data JPA - Reference Documentation
I modified your code a little bit and it works for me.
Repository class:
public interface PersonRepository extends JpaRepository<Person, Long> {
    List<Person> findByIdIn(Collection<Long> ids);
}
And sample snippet:
@Component
public class Bootstrap implements CommandLineRunner {

    @Autowired
    private PersonRepository repository;

    @Override
    public void run(String... args) throws Exception {
        savePersons();
        testFindMethod();
    }

    private void savePersons() {
        Person person1 = Person.builder().id(1L).name("Name 1").build();
        Person person2 = Person.builder().id(2L).name("Name 2").build();
        Person person3 = Person.builder().id(3L).name("Name 3").build();
        Person person4 = Person.builder().id(4L).name("Name 4").build();
        repository.save(person1);
        repository.save(person2);
        repository.save(person3);
        repository.save(person4);
    }

    private void testFindMethod() {
        List<Long> toFind = new ArrayList<>();
        toFind.add(1L);
        toFind.add(2L);
        toFind.add(3L);
        checkIfAllPersonsExist(toFind);
        toFind.add(7L);
        checkIfAllPersonsExist(toFind);
    }

    void checkIfAllPersonsExist(List<Long> personIds) {
        List<Person> persons = repository.findByIdIn(personIds);
        if (personIds.size() != persons.size()) {
            System.out.println("Sizes are different");
        } else {
            System.out.println("Sizes are same!");
        }
    }
}
And this is console output:
Sizes are same!
Sizes are different
I hope this will help you.
With this JPA repository method you can get the elements whose ids are not in the given list:
List<Person> findByIdNotIn(List<Long> personIds);
If you want to remove them like in your example, you can use this one:
List<Person> deleteByIdNotIn(List<Long> personIds);
I hope it helps!
According to the Quarkus docs, including the line below in application.properties should result in delete statements being batched.
quarkus.hibernate-orm.jdbc.statement-batch-size=1000
However, I can't get this to work. Regardless of this property all delete statements are sent to the database individually instead of in batches.
Is there anything else I need to do?
To reproduce, use a simple entity like this:
@Entity
@Table(name = "book")
public class Book {

    @GeneratedValue(strategy = IDENTITY)
    @Id
    private Long id;

    private String title;

    public Book() {
    }

    public Long getId() {
        return id;
    }
}
insert records into the database like this (on PostgreSQL):
INSERT INTO book (id, title)
VALUES(generate_series(1, 200), 'a title');
and a simple integration test like this:
@QuarkusTest
class BookDeleteIT {

    @Inject
    EntityManager em;

    @Test
    void deletes_records_in_batches() {
        List<Book> books = getBooks();
        deleteBooks(books);
    }

    @Transactional
    List<Book> getBooks() {
        return em.createQuery("SELECT b FROM Book b").getResultList();
    }

    @Transactional
    void deleteBooks(List<Book> books) {
        books.forEach(book -> delete(book));
    }

    private int delete(Book book) {
        return em.createQuery("DELETE FROM Book b WHERE b.id = :id")
                .setParameter("id", book.getId())
                .executeUpdate();
    }
}
When I run this test, the deletes are sent to the database individually instead of in batches.
I suppose the problem is the way you're deleting a book: use em.remove(book) instead of the query, and Hibernate will accumulate the deletions.
Deleting entities with a query instead of through the EntityManager prevents your JPA provider (Hibernate) from managing the entity lifecycle and applying optimizations such as batched deletes.
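A sketch of the suggested change, assuming the books may already be detached by the time deleteBooks runs (so they are re-attached with merge first):

@Transactional
void deleteBooks(List<Book> books) {
    for (Book book : books) {
        // Re-attach the entity if it is detached, then let Hibernate queue the
        // deletion; queued deletes are flushed in JDBC batches according to
        // quarkus.hibernate-orm.jdbc.statement-batch-size.
        Book managed = em.contains(book) ? book : em.merge(book);
        em.remove(managed);
    }
}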
I have a UserAssignmentRole class like this:
@Data
@Entity
public class UserAssignmentRole {
    ...
    @Enumerated(EnumType.STRING)
    public Role role;
}
And Role is an enum that looks like this:
public enum Role {
    admin,
    member,
    pending
}
Now, when I try a query in my repository to select all rows with the admin role, it gives me an error:
@Query("select uar from UserAssignmentRole uar where uar.role = Role.admin")
public List<UserAssignmentRole> listAdmin(Long userID, Long assignmentID);
How can this be solved?
Error : org.hibernate.hql.internal.ast.QuerySyntaxException: Invalid path: 'Role.admin'
Full error : https://pastebin.com/tk9r3wDg
This is strange but intended behaviour of Hibernate since 5.2.x.
An enum value is a constant, and you're using non-conventional naming (lowercase).
Take a look at this issue and Vlad Mihalcea's long explanation of the performance penalty.
If you’re using non-conventional Java constants, then you’ll have to set the hibernate.query.conventional_java_constants configuration property to false. This way, Hibernate will fall back to the previous behavior, treating any expression as a possible candidate for a Java constant.
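In a Spring Boot application, this Hibernate property can typically be passed through application.properties; a sketch, assuming the standard spring.jpa.properties.* pass-through to Hibernate:

# Fall back to the pre-5.2 behaviour for JPQL constants with
# non-conventional (e.g. lowercase) names.
spring.jpa.properties.hibernate.query.conventional_java_constants=false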
Instead of writing this query yourself, you can let the repository derive it. Create code like this:
@Repository
public interface UserAssignmentRoleRepository extends JpaRepository<UserAssignmentRole, Long> {
    public List<UserAssignmentRole> findByRole(Role role);
}
And then:
@Autowired
UserAssignmentRoleRepository repository;

public void someMethod() {
    List<UserAssignmentRole> userAssignmentRoles = repository.findByRole(Role.admin);
}
UPDATE 1
As was pointed out in the other answer, the problem is the non-conventional naming. You can change the labels in your enum to uppercase:
public enum Role {
    Admin,
    Member,
    Pending
}
and then:
@Query("select uar from UserAssignmentRole uar where uar.role = com.example.package.Role.Admin")
public List<UserAssignmentRole> listAdmin(Long userID, Long assignmentID);
UPDATE 2
But if you really want to keep lowercase values in the DB, it requires a few more changes. Change the enum to:
public enum Role {
    Admin("admin"),
    Member("member"),
    Pending("pending");

    private String name;

    Role(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public static Role parse(String id) {
        Role role = null; // default
        for (Role item : Role.values()) {
            if (item.name.equals(id)) {
                role = item;
                break;
            }
        }
        return role;
    }
}
In UserAssignmentRole:

// @Enumerated(EnumType.STRING)
@Convert(converter = RoleConverter.class)
private Role role;
And an additional converter class:
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

@Converter(autoApply = true)
public class RoleConverter implements AttributeConverter<Role, String> {

    @Override
    public String convertToDatabaseColumn(Role role) {
        return role.getName();
    }

    @Override
    public Role convertToEntityAttribute(String dbData) {
        return Role.parse(dbData);
    }
}
I want to insert entities into a database from a scalable microservice. I tried @Lock(LockModeType.PESSIMISTIC_WRITE) to prevent duplicate entries. The problem is that my entities have dependencies.
A basic example is:
TestEntity.java
public class TestEntity {

    @GeneratedValue()
    @Id
    private Long id;

    private String string;

    @ManyToOne
    private TestEntityParent testEntityParent;
}
TestEntityParent.java
public class TestEntityParent {

    @GeneratedValue()
    @Id
    private Long id;

    private String stringTwo;

    @OneToMany(mappedBy = "testEntityParent")
    private List<TestEntity> testEntities;
}
TestEnityRepository.java
public interface TestEnityRepository extends JpaRepository<TestEntity, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    TestEntity saveAndFlush(TestEntity testEntity);

    Optional<TestEntity> findByStringAndTestEntityParentStringTwo(String string, String stringTwo);
}
TestEntityParentRepository.java
public interface TestEntityParentRepository extends JpaRepository<TestEntityParent, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    TestEntityParent save(TestEntityParent testEntityParent);

    Optional<TestEntityParent> findByStringTwo(String stringTwo);
}
AtomicDbService.java
@Service
public class AtomicDbService {

    @Autowired
    TestEnityRepository testEnityRepository;

    @Autowired
    TestEntityParentRepository testEntityParentRepository;

    @Transactional
    public TestEntity atomicInsert(TestEntity testEntity) {
        TestEntityParent testEntityParent = testEntityParentRepository
                .findByStringTwo(testEntity.getTestEntityParent().getStringTwo())
                .orElse(testEntityParentRepository.save(testEntity.getTestEntityParent()));

        return testEnityRepository.findByStringAndTestEntityParentStringTwo(
                testEntity.getString(), testEntity.getTestEntityParent().getStringTwo()
        ).orElse(testEnityRepository.save(
                TestEntity.builder()
                        .string(testEntity.getString())
                        .testEntityParent(testEntityParent)
                        .build()
        ));
    }
}
My test case:
@Test
@Transactional
public void testAtomicInsert() {
    TestEntityParent testEntityParent = TestEntityParent.builder().stringTwo("testTwo").build();
    TestEntity testEntity = TestEntity.builder().string("test").testEntityParent(testEntityParent).build();

    atomicDbService.atomicInsert(testEntity);
    System.out.println(testEnityRepository.findAll());
    atomicDbService.atomicInsert(testEntity);
    System.out.println(testEnityRepository.findAll());
    atomicDbService.atomicInsert(testEntity);
    System.out.println(testEnityRepository.findAll());
    System.out.println(testEnityRepository.findAll());
}
I get the following output:
[TestEntity(id=2, string=test, testEntityParent=TestEntityParent(id=1, stringTwo=testTwo, testEntities=null))]
[TestEntity(id=2, string=test, testEntityParent=TestEntityParent(id=1, stringTwo=testTwo, testEntities=null)), TestEntity(id=3, string=test, testEntityParent=TestEntityParent(id=1, stringTwo=testTwo, testEntities=null))]
and an error:
query did not return a unique result: 2;
Without dependencies everything works fine.
UPDATE:
Adding @Lock(LockModeType.PESSIMISTIC_WRITE) to the find method leads to
Feature not supported: "MVCC=TRUE && FOR UPDATE && JOIN"; SQL statement:
... same applies to
@Lock(LockModeType.PESSIMISTIC_WRITE)
@Query("SELECT e from TestEntity e join e.testEntityParent p where e.string = :string and p.stringTwo = :stringTwo")
Optional<TestEntity> findWhatever(@Param("string") String string, @Param("stringTwo") String stringTwo);
... since FOR UPDATE is always generated.
Apparently it was a simple mistake: I needed to replace orElse with orElseGet and a lambda, and everything worked, even without all those @Lock tricks.
Still, I don't understand what exactly went wrong with the transactions and why.
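For reference: orElse evaluates its argument eagerly, so the save(...) inside the parentheses ran on every call even when the lookup already found a row, which is what produced the duplicate rows. With orElseGet, the supplier runs only when the Optional is empty. A sketch of the corrected service method, based on the code above:

@Transactional
public TestEntity atomicInsert(TestEntity testEntity) {
    // orElseGet defers the save: it only runs when no existing row was found.
    TestEntityParent testEntityParent = testEntityParentRepository
            .findByStringTwo(testEntity.getTestEntityParent().getStringTwo())
            .orElseGet(() -> testEntityParentRepository.save(testEntity.getTestEntityParent()));

    return testEnityRepository.findByStringAndTestEntityParentStringTwo(
                    testEntity.getString(), testEntity.getTestEntityParent().getStringTwo())
            .orElseGet(() -> testEnityRepository.save(
                    TestEntity.builder()
                            .string(testEntity.getString())
                            .testEntityParent(testEntityParent)
                            .build()));
}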
I have a class that represents a user's date of birth in two separate fields:
public class User {
    private int yearOfBirth;
    private int monthOfBirth;
}
Is it possible to make a projection that exposes the user's age? I know we can concatenate fields using @Value.
The easiest way to resolve the problem (if you can add code to the domain class) is to add a method in the user class like the one below:
@JsonIgnore
public int getAge() {
    return Period.between(
            LocalDate.of(yearOfBirth, monthOfBirth, 1),
            LocalDate.now()
    ).getYears();
}
You can add @JsonIgnore to stop Spring from exporting an "age" field when your entity is serialized. After adding that method you can create a projection like the one below:
@Projection(name = "userAge", types = {User.class})
public interface UserAge {

    @Value("#{target.getAge()}")
    Integer getAge();
}
Something like this, for example:
public class UserAgeDto {

    private int yearOfBirth;
    private int monthOfBirth;

    public UserAgeDto(int yearOfBirth, int monthOfBirth) {
        this.yearOfBirth = yearOfBirth;
        this.monthOfBirth = monthOfBirth;
    }

    public int getAge() {
        // Only year and month are stored, so the first day of the month is assumed.
        return Period.between(LocalDate.of(yearOfBirth, monthOfBirth, 1), LocalDate.now()).getYears();
    }
}
public interface UserRepo extends JpaRepository<User, Long> {

    @Query("select new com.example.myapp.dto.UserAgeDto(u.yearOfBirth, u.monthOfBirth) from User u where u = ?1")
    UserAgeDto getUserAgeDto(User user);
}
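Usage is then a single call; a small sketch (userRepo assumed to be an injected UserRepo):

// Only the two fields are fetched; the age is computed in the DTO.
UserAgeDto dto = userRepo.getUserAgeDto(user);
int age = dto.getAge();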
Some info
Imagine that we have an entity:
@Entity
public class Person implements Serializable {
    @Id
    private String name;
    private Long age;
    private Boolean isMad;
    ...
}
And a repository with a trivial (and unnecessary) example for a custom query:
@Repository
public interface PersonRepository extends PagingAndSortingRepository<Person, String> {

    @Query("select p.isMad, count(*) from Person p group by p.isMad")
    List<Object> aggregateByMadness();
}
Now to parse this List we need to do something like this:
for (Object element : list) {
    Object[] result = (Object[]) element;
    Boolean isMad = (Boolean) result[0];
    Long count = (Long) result[1];
}
which is a pain. Can we map the result of the query directly to a List of a POJO?
Yes, you can use a JPQL constructor expression:
package com.foo;

public class Madness {
    public Madness(boolean isMad, Number count) { /* ... */ }
}
And in your repository:
@Query("select new com.foo.Madness(p.isMad, count(*)) from Person p group by p.isMad")
List<Madness> aggregateByMadness();
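The result then comes back already typed; a small usage sketch (the isMad()/getCount() getters on Madness are assumed for illustration):

// No Object[] casting needed: each element is a Madness instance.
for (Madness m : personRepository.aggregateByMadness()) {
    System.out.println(m.isMad() + " -> " + m.getCount());
}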