With the reactive version of Panache, I am unable to select a specific column from a table using project().
@Entity
class Test : PanacheEntity() {
    @Column(name = "amount")
    var amount: Double = 0.0

    @Column(name = "name")
    lateinit var name: String
}
@ApplicationScoped
class TestRepository : PanacheRepository<Test> {
    fun getSum(name: String) =
        find("select sum(l.amount) as amount from Test l where l.name = :name", Parameters.with("name", name))
            .project(Result::class.java)
            .singleResult()
}
data class Result(val amount: Double)
For some reason this is generating an incorrect SQL statement, i.e.:
SELECT new org.package.Result(amount) select sum(l.amount) as amount from org.package.Test l where l.name = $1
It never uses the projection. Is there another way to get a single value from the query when it is not the entity being used? Is there any workaround for this?
UPDATE: The issue has been fixed and is included in Quarkus 2.12.CR1.
I've reported the issue.
As a workaround, you can remove the .project(Result::class.java) call and run the following query:
select new org.package.Result(sum(l.amount) as amount) from Test l where l.name = :name
The method will look like this:
@ApplicationScoped
class TestRepository : PanacheRepository<Test> {
    fun getSum(name: String) =
        find("select new org.package.Result(sum(l.amount) as amount) from Test l where l.name = :name", Parameters.with("name", name))
            .singleResult()
}
As stated in the official documentation, it's preferable to use the multimap return type for the Android Room database.
With the following very simple example, it's not working correctly!
@Entity
data class User(@PrimaryKey(autoGenerate = true) val _id: Long = 0, val name: String)

@Entity
data class Book(@PrimaryKey(autoGenerate = true) val _id: Long = 0, val bookName: String, val userId: Long)
(I believe a lot of developers have an _id primary key in their tables.)
Now, in the Dao class:
@Query(
    "SELECT * FROM user " +
    "JOIN book ON user._id = book.userId"
)
fun allUserBooks(): Flow<Map<User, List<Book>>>
The database tables:
Finally, when I run the above query, here is what I get:
While it should have 2 entries, as there are 2 users in the corresponding table.
PS. I'm using the latest Room version at this point, Version 2.4.0-beta02.
PPS. The issue is in how UserDao_Impl.java is being generated:
all the _id columns have the same index there.
Is there a chance to do something here? (instead of switching to the intermediate data classes).
all the _id columns have the same index there.
Is there a chance to do something here?
Yes, use unique column names e.g.
@Entity
data class User(@PrimaryKey(autoGenerate = true) val userid: Long = 0, val name: String)

@Entity
data class Book(@PrimaryKey(autoGenerate = true) val bookid: Long = 0, val bookName: String, val useridmap: Long)
as used in the example below.
or
@Entity
data class User(@PrimaryKey(autoGenerate = true) @ColumnInfo(name = "userid") val _id: Long = 0, val name: String)

@Entity
data class Book(@PrimaryKey(autoGenerate = true) @ColumnInfo(name = "bookid") val _id: Long = 0, val bookName: String, @ColumnInfo(name = "userid_map") val userId: Long)
Otherwise, as you may have noticed, Room uses the value of the last column found with the duplicated name, so the User's _id ends up holding the value of the Book's _id column.
Using the above and replicating your data using :-
db = TheDatabase.getInstance(this)
dao = db.getAllDao()
var currentUserId = dao.insert(User(name = "Eugene"))
dao.insert(Book(bookName = "Eugene's book #1", useridmap = currentUserId))
dao.insert(Book(bookName = "Eugene's book #2", useridmap = currentUserId))
dao.insert(Book(bookName = "Eugene's book #3", useridmap = currentUserId))
currentUserId = dao.insert(User(name = "notEugene"))
dao.insert(Book(bookName = "not Eugene's book #4", useridmap = currentUserId))
dao.insert(Book(bookName = "not Eugene's book #5", useridmap = currentUserId))
var mapping = dao.allUserBooks() //<<<<<<<<<< BREAKPOINT HERE
for(m: Map.Entry<User,List<Book>> in mapping) {
}
For convenience and brevity, a Flow hasn't been used and the above was run on the main thread.
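For reference, here is a minimal sketch of the Dao used in this demo (an assumption on my part: the unique column names from the first option, a plain Map return type instead of Flow, and @Insert returning the generated row id):

import androidx.room.Dao
import androidx.room.Insert
import androidx.room.Query

@Dao
interface AllDao {

    // returns the generated row id, used as currentUserId above
    @Insert
    fun insert(user: User): Long

    @Insert
    fun insert(book: Book): Long

    // multimap return type; no Flow, so it can be called directly in the demo
    @Query("SELECT * FROM user JOIN book ON user.userid = book.useridmap")
    fun allUserBooks(): Map<User, List<Book>>
}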
Then the result is what I believe you are expecting :-
Additional
What if we already have the database structure with a lot of "_id" fields?
Then you have some decisions to make.
You could
do a migration to rename columns to avoid the ambiguous/duplicate column names.
use alternative POJOs in conjunction with changing the output column names of the query accordingly
e.g. have :-
data class Alt_User(val userId: Long, val name: String)
and
data class Alt_Book (val bookId: Long, val bookName: String, val user_id: Long)
along with :-
#Query("SELECT user._id AS userId, user.name, book._id AS bookId, bookName, user_id " +
"FROM user JOIN book ON user._id = book.user_id")
fun allUserBooksAlt(): Map<Alt_User, List<Alt_Book>>
so user._id is output with the name as per the Alt_User POJO
other columns are output explicitly (although you could use * as per allUserBooksAlt2)
:-
#Query("SELECT *, user._id AS userId, book._id AS bookId " +
"FROM user JOIN book ON user._id = book.user_id")
fun allUserBooksAlt2(): Map<Alt_User, List<Alt_Book>>
same as allUserBooksAlt but also has the extra columns
you would get a warning: The query returns some columns [_id, _id] which are not used by any of [a.a.so70190116kotlinroomambiguouscolumnsfromdocs.Alt_User, a.a.so70190116kotlinroomambiguouscolumnsfromdocs.Alt_Book]. You can use @ColumnInfo annotation on the fields to specify the mapping. You can annotate the method with @RewriteQueriesToDropUnusedColumns to direct Room to rewrite your query to avoid fetching unused columns. You can suppress this warning by annotating the method with @SuppressWarnings(RoomWarnings.CURSOR_MISMATCH). Columns returned by the query: _id, name, _id, bookName, user_id, userId, bookId. public abstract java.util.Map<a.a.so70190116kotlinroomambiguouscolumnsfromdocs.Alt_User, java.util.List<a.a.so70190116kotlinroomambiguouscolumnsfromdocs.Alt_Book>> allUserBooksAlt2();
Due to the note "Room will not rewrite the query if it has multiple columns that have the same name as it does not yet have a way to distinguish which one is necessary", the @RewriteQueriesToDropUnusedColumns annotation doesn't do away with the warning.
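If you just want to silence it, the suppression route suggested by the warning message itself looks like this (a sketch, reusing the same query as allUserBooksAlt2):

@SuppressWarnings(RoomWarnings.CURSOR_MISMATCH)
@Query("SELECT *, user._id AS userId, book._id AS bookId " +
       "FROM user JOIN book ON user._id = book.user_id")
fun allUserBooksAlt2(): Map<Alt_User, List<Alt_Book>>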
if using :-
var mapping = dao.allUserBooksAlt() //<<<<<<<<<< BREAKPOINT HERE
for(m: Map.Entry<Alt_User,List<Alt_Book>> in mapping) {
}
Would result in :-
possibly other options.
However, I'd suggest fixing the issue once and for all by using a migration to rename the columns so that they all have unique names, e.g.:
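Here is a minimal sketch of such a migration. It assumes a version bump from 1 to 2, the new column names used earlier (userid, bookid, useridmap), and a device SQLite new enough for ALTER TABLE ... RENAME COLUMN; on older API levels you would recreate the tables and copy the data instead.

import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        // rename the ambiguous _id columns so that every column in the join is unique
        database.execSQL("ALTER TABLE user RENAME COLUMN _id TO userid")
        database.execSQL("ALTER TABLE book RENAME COLUMN _id TO bookid")
        database.execSQL("ALTER TABLE book RENAME COLUMN userId TO useridmap")
    }
}

The entities (and queries) then need to use the new names, and the migration is registered via Room.databaseBuilder(...).addMigrations(MIGRATION_1_2).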
I wanted to update a nested list, but I experienced strange behavior where I have to call the method twice to get it done...
Here is my POJO:
@Document(collection = "company")
data class Company(
    val id: ObjectId,
    @Indexed(unique = true)
    val name: String,
    val customers: MutableList<Customer> = mutableListOf()
    // other fields
)
Below is my function from the custom repository that does the job, which I based on this tutorial:
override fun addCustomer(customer: Customer): Mono<Company> {
    val query = Query(Criteria.where("employees.keycloakId").`is`(customer.createdBy))
    val update = Update().addToSet("customers", customer)
    val upsertOption = FindAndModifyOptions.options().upsert(true)
    // if I uncomment the line below, this will work...
    // mongoTemplate.findAndModify(query, update, upsertOption, Company::class.java).block()
    return mongoTemplate.findAndModify(query, update, upsertOption, Company::class.java)
}
In order to actually add this customer, I have to either uncomment the blocking call above or call the method twice in the debugger while running integration tests, which is quite confusing to me.
Here is the failing test:
@Test
fun addCustomer() {
    // given
    val company = fixture.company
    val initialCustomerSize = company.customers.size
    companyRepository.save(company).block()
    val customerToAdd = CustomerReference(
        id = ObjectId.get(),
        keycloakId = "dummy",
        username = "customerName",
        email = "email",
        createdBy = company.employees[0].keycloakId)
    // when, then
    StepVerifier.create(companyCustomRepositoryImpl.addCustomer(customerToAdd))
        .assertNext { updatedCompany -> assertThat(updatedCompany.customers).hasSize(initialCustomerSize + 1) }
        .verifyComplete()
}
java.lang.AssertionError:
Expected size:<3> but was:<2> in:
I found the issue.
By default, Mongo returns the entity in the state it was in before the update. To override this, I had to add:
val upsertOption = FindAndModifyOptions.options()
    .returnNew(true)
    .upsert(true)
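For completeness, here is a sketch of the repository method from the question with the corrected options applied (names as in the question; no extra block() call needed):

override fun addCustomer(customer: Customer): Mono<Company> {
    val query = Query(Criteria.where("employees.keycloakId").`is`(customer.createdBy))
    val update = Update().addToSet("customers", customer)
    val options = FindAndModifyOptions.options()
        .returnNew(true) // return the document as it is after the update
        .upsert(true)
    return mongoTemplate.findAndModify(query, update, options, Company::class.java)
}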
I've run into a problem while developing a Spring Boot application with the Criteria API.
I have a simple Employer entity, which contains a set of Job IDs (not entities; they're pulled out using a repository when needed). Employer and Job are in a many-to-many relationship. This mapping is only used for the purpose of finding Employers with no jobs.
public class Employer {

    @ElementCollection
    @CollectionTable(
            name = "EMPLOYEE_JOBS",
            joinColumns = @JoinColumn(name = "EMP_ID"))
    @Column(name = "JOB_ID")
    private final Set<String> jobs = new HashSet<>(); // list of ids of jobs for an employee
}
Then I have a generic function which returns a predicate (Specification) for a given attributePath and command, for any IEntity implementation.
public <E extends IEntity> Specification<E> createPredicate(String attributePath, String command) {
    return (r, q, b) -> {
        Path<?> currentPath = r;
        for (String attr : attributePath.split("\\.")) {
            currentPath = currentPath.get(attr);
        }
        if (Collection.class.isAssignableFrom(currentPath.getJavaType())) {
            // currentPath points to a PluralAttribute
            if (command.equalsIgnoreCase("empty")) {
                return b.isEmpty((Expression<Collection<?>>) currentPath);
            }
        }
        return null; // other commands omitted for brevity
    };
}
If I want to get a list of all employers who currently have no job, I wish I could create the predicate as follows:
Specification<Employer> spec = createPredicate("jobs", "empty");
// or, if I want only `Work`s that were done by an employer with no job at the moment
Specification<Work> spec = createPredicate("employerFinished.jobs", "empty");
This unfortunately does not work and throws the following exception:
org.hibernate.hql.internal.ast.QuerySyntaxException:
unexpected end of subtree
[select generatedAlias0 from Employer as generatedAlias0
where generatedAlias0.jobs is empty]
Is there a workaround to make this work?
This bug in Hibernate has been known since September 2011, but sadly hasn't been fixed yet. (Update: this bug is fixed as of 5.4.11.)
https://hibernate.atlassian.net/browse/HHH-6686
Luckily, there is a very easy workaround. Instead of:
"where generatedAlias0.jobs is empty"
you can use
"where size(generatedAlias0.jobs) = 0"
This way the query will work as expected.
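Applied to the generic Criteria builder from the question, the same idea means building size(collection) = 0 instead of isEmpty(collection); in createPredicate the equivalent change is returning b.equal(b.size(...), 0) rather than b.isEmpty(...). Here is a small Kotlin sketch, assuming the Employer entity from the question:

import org.springframework.data.jpa.domain.Specification

// Employers whose "jobs" element collection is empty, expressed without "is empty"
val employersWithNoJobs = Specification<Employer> { root, _, cb ->
    // size(jobs) = 0 avoids the "is empty" rendering that triggers HHH-6686
    cb.equal(cb.size(root.get<Collection<String>>("jobs")), 0)
}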
I'm importing historical football (or soccer, if you're from the US) data into a Neo4j database using a Spring Boot application (2.1.6.RELEASE) with the spring-boot-starter-data-neo4j dependency and a standalone, locally running Neo4j 3.5.6 database server.
But for some reason, searching for an entity by a simple property and an attached, referenced entity does not work, although the relation is present in the database.
This is the part of the model that is currently giving me a headache:
#NodeEntity(label = "Season")
open class Season(
#Id
#GeneratedValue
var id: Long? = null,
#Index(unique = true)
var name: String,
var seasonNumber: Long,
#Relationship(type = "IN_LEAGUE", direction = Relationship.OUTGOING)
var league: League?,
var start: LocalDate,
var end: LocalDate
)
#NodeEntity(label = "League")
open class League(
#Id
#GeneratedValue
var id: Long? = null,
#Index(unique = true)
var name: String,
#Relationship(type = "BELONGS_TO", direction = Relationship.OUTGOING)
var country: Country?
)
(I left out the Country class, as I'm pretty sure that it is not part of the problem)
To allow running the import more than once, I want to check whether the corresponding entity is already present in the database and only import newer ones. So I added the following method to the SeasonRepository:
interface SeasonRepository : CrudRepository<Season, Long> {
    fun findBySeasonNumberAndLeague(number: Long, league: League): Season?
}
But it is giving me a null result instead of the existing entity on consecutive runs, hence I get duplicates in my database.
I would have expected spring-data-neo4j to reduce the passed League to its Id and then have a generated query that looks somewhat like this:
MATCH (s:Season)-[:IN_LEAGUE]->(l:League) WHERE id(l) = {leagueId} AND s.seasonNumber = {seasonNumber} WITH s MATCH (s)-[r]->(o) RETURN s,r,o
but when I turn on finer logging on the neo4j package I see this output in the log file:
MATCH (n:`Season`) WHERE n.`seasonNumber` = { `seasonNumber_0` } AND n.`league` = { `league_1` } WITH n RETURN n,[ [ (n)-[r_i1:`IN_LEAGUE`]->(l1:`League`) | [ r_i1, l1 ] ] ], ID(n) with params {league_1={id=30228, name=1. Bundesliga, country={id=29773, name=Deutschland}}, seasonNumber_0=1}
So for some reason, Spring Data seems to think that the league property is a simple/primitive property and not a full relation that needs to be resolved by its id (n.league = {league_1}).
I only got it to work by passing the id of the league and providing a custom query using the @Query annotation, but I actually thought that it would work with spring-data-neo4j out of the box.
Any help appreciated. Let me know if you need more details.
Spring Data Neo4j does not support objects as parameters at the moment. It is possible to query for properties on related entities/nodes, e.g. findBySeasonNumberAndLeagueName, if this is a suitable solution; a sketch of that follows.
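A minimal sketch, assuming the Season and League entities from the question (the derived query traverses the IN_LEAGUE relationship and matches on the league's name property):

import org.springframework.data.repository.CrudRepository

interface SeasonRepository : CrudRepository<Season, Long> {
    // matches a Season by its seasonNumber and the name of the related League
    fun findBySeasonNumberAndLeagueName(seasonNumber: Long, leagueName: String): Season?
}

The import check then becomes something like findBySeasonNumberAndLeagueName(seasonNumber, league.name) != null instead of passing the whole League object.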
I'm setting up a Spring Data JPA repository to work with sequences in a PostgreSQL database. I was assuming that this would be pretty simple:
@Query(nativeQuery = true, value = "CREATE SEQUENCE IF NOT EXISTS ':seq_name' START WITH :startAt")
fun createSequence(@Param("seq_name") seq_name: String, @Param("startAt") startAt: Long = 0)

@Query(nativeQuery = true, value = "SELECT nextval(':seq_name')")
fun nextSerial(@Param("seq_name") seq_name: String): Long

@Query(nativeQuery = true, value = "DROP SEQUENCE IF EXISTS ':seq_name'")
fun dropSequence(@Param("seq_name") seq_name: String)

@Query(nativeQuery = true, value = "SELECT setval(':seq_name', :set_to, false)")
fun setSequence(@Param("seq_name") seq_name: String, @Param("set_to") setTo: Long)
But for some reason I get
org.springframework.dao.InvalidDataAccessApiUsageException: Parameter with that name [seq_name] did not exist
whenever I try to call one of the methods. Any idea why this might happen?
OK, based on the answer from @StanislavL and after some debugging I now have a working solution. As @posz pointed out, I cannot bind identifiers, which means I have to hard-code the queries. I moved the code from a JPA interface to an implemented service, which is not as nice but works.
@Service
open class SequenceService(val entityManager: EntityManager) {

    @Transactional
    fun createSequence(seq_name: String, startAt: Long = 0) {
        val query = entityManager.createNativeQuery("CREATE SEQUENCE IF NOT EXISTS ${seq_name} START ${startAt}")
        with(query) {
            executeUpdate()
        }
    }

    @Transactional
    fun nextSerial(seq_name: String): Long {
        val query = entityManager.createNativeQuery("SELECT nextval(:seq_name)")
        with(query) {
            setParameter("seq_name", seq_name)
            val result = singleResult as BigInteger
            return result.toLong()
        }
    }

    @Transactional
    fun dropSequence(seq_name: String) {
        val query = entityManager.createNativeQuery("DROP SEQUENCE IF EXISTS ${seq_name}")
        with(query) {
            executeUpdate()
        }
    }

    @Transactional
    fun setSequence(seq_name: String, setTo: Long) {
        val query = entityManager.createNativeQuery("SELECT setval(:seq_name, :set_to, false)")
        with(query) {
            setParameter("seq_name", seq_name)
            setParameter("set_to", setTo)
            singleResult
        }
    }
}
I hope this is helpful for the next person trying to directly work with sequences when using @SequenceGenerator is not an option.
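For example, a short usage sketch (the sequence name "invoice_seq" and the surrounding service are purely illustrative, not part of the original code):

@Service
open class InvoiceNumberService(val sequenceService: SequenceService) {

    fun nextInvoiceNumber(): Long {
        // CREATE SEQUENCE IF NOT EXISTS makes this safe to call repeatedly
        sequenceService.createSequence("invoice_seq", startAt = 1000)
        return sequenceService.nextSerial("invoice_seq")
    }
}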