I have an Oracle XMLType column that stores various language-specific strings. I need to construct a Hibernate criteria that orders on this column. In order to do this, I need to extract the value with an Oracle function. This criteria is generated automatically by code I have written, but I cannot, for the life of me, figure out how to extract the value and order on it via the Criteria API. Basically, the generated SQL should look something like:
SELECT EXTRACTVALUE(title, '//value[@lang="EN"]') AS enTitle
FROM domain_object
ORDER BY enTitle
I fiddled with projections momentarily, but they appear to execute a second select, which I assume would cause Hibernate to select ALL values and sort them in memory based on the projection? That would be very undesirable =\
Ok, I found a solution. Not sure this is the best, so I will leave it open for a little while if some one wants to provide a better answer / refine my solution.
What I did was extend org.hibernate.criterion.Order thusly:
package com.mycorp.common.hibernate;

import org.hibernate.Criteria;
import org.hibernate.HibernateException;
import org.hibernate.criterion.CriteriaQuery;
import org.hibernate.criterion.Order;
import org.hibernate.engine.SessionFactoryImplementor;

import com.mycorp.LocalizationUtil;

public class LocalStringOrder extends Order {

    private static final long serialVersionUID = 1L;

    private boolean ascending;
    private String propName;

    public LocalStringOrder(String prop, boolean asc) {
        super(prop, asc);
        ascending = asc;
        propName = prop;
    }

    public String toSqlString(Criteria criteria, CriteriaQuery criteriaQuery) throws HibernateException {
        String[] columns = criteriaQuery.getColumnsUsingProjection(criteria, propName);
        StringBuffer fragment = new StringBuffer();
        for (int i = 0; i < columns.length; i++) {
            SessionFactoryImplementor factory = criteriaQuery.getFactory();
            // wrap in lower() so the ordering is case-insensitive
            fragment.append(factory.getDialect().getLowercaseFunction()).append('(');
            fragment.append("EXTRACTVALUE(");
            fragment.append(columns[i]);
            // note the closing ] of the XPath predicate
            fragment.append(", '//value[@lang=\"" +
                    LocalizationUtil.getPreferedLanguage().name() +
                    "\"]')");
            fragment.append(')');
            fragment.append(ascending ? " asc" : " desc");
            if (i < columns.length - 1) fragment.append(", ");
        }
        return fragment.toString();
    }

    public static Order asc(String propertyName) {
        return new LocalStringOrder(propertyName, true);
    }

    public static Order desc(String propertyName) {
        return new LocalStringOrder(propertyName, false);
    }
}
Then it was just a matter of saying criteria.addOrder(LocalStringOrder.asc("prop")).
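For reference, a minimal usage sketch (session and DomainObject are illustrative names, not from my actual code):
Criteria criteria = session.createCriteria(DomainObject.class);
criteria.addOrder(LocalStringOrder.asc("title"));
List<DomainObject> results = criteria.list();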
Another general solution is NativeSQLOrder, see http://opensource.atlassian.com/projects/hibernate/browse/HHH-2381. I do not understand why this feature is not in Hibernate yet.
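For illustration, the core idea of such a NativeSQLOrder is roughly this (a sketch, not the exact code from the HHH-2381 patch):
public class NativeSQLOrder extends Order {

    private final String sql;

    public NativeSQLOrder(String sql) {
        super(sql, true);
        this.sql = sql;
    }

    public String toSqlString(Criteria criteria, CriteriaQuery criteriaQuery) {
        // emit the given SQL fragment verbatim as the ORDER BY expression
        return sql;
    }
}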
I have a JPA Entity (Terminal) which uses an AttributeConverter to convert a database String into a list of objects (ProgramRegistration). The converter just uses a JSON ObjectMapper to turn the JSON String into POJO objects.
Entity Object
@Entity
@Data
public class Terminal {

    @Id
    private String terminalId;

    @NotEmpty
    @Convert(converter = ProgramRegistrationConverter.class)
    private List<ProgramRegistration> programRegistrations;

    @Data
    public static class ProgramRegistration {
        private String program;
        private boolean online;
    }
}
The Terminal uses the following JPA AttributeConverter to serialize the Objects from and to JSON
JPA AttributeConverter
public class ProgramRegistrationConverter implements AttributeConverter<List<Terminal.ProgramRegistration>, String> {

    // LOG was not declared in the original snippet; assuming SLF4J
    private static final Logger LOG = LoggerFactory.getLogger(ProgramRegistrationConverter.class);

    private final ObjectMapper objectMapper;
    private final CollectionType programRegistrationCollectionType;

    public ProgramRegistrationConverter() {
        this.objectMapper = new ObjectMapper().setSerializationInclusion(JsonInclude.Include.NON_EMPTY);
        this.programRegistrationCollectionType =
                objectMapper.getTypeFactory().constructCollectionType(List.class, Terminal.ProgramRegistration.class);
    }

    @Override
    public String convertToDatabaseColumn(List<Terminal.ProgramRegistration> attribute) {
        if (attribute == null) {
            return null;
        }
        String json = null;
        try {
            json = objectMapper.writeValueAsString(attribute);
        } catch (final JsonProcessingException e) {
            LOG.error("JSON writing error", e);
        }
        return json;
    }

    @Override
    public List<Terminal.ProgramRegistration> convertToEntityAttribute(String dbData) {
        if (dbData == null) {
            return Collections.emptyList();
        }
        List<Terminal.ProgramRegistration> list = null;
        try {
            list = objectMapper.readValue(dbData, programRegistrationCollectionType);
        } catch (final IOException e) {
            LOG.error("JSON reading error", e);
        }
        return list;
    }
}
I am using Spring Boot and a JPARepository to fetch a Page of Terminal results from the Database.
To filter the results I am using a BooleanExpression as the Predicate. For all the filter values on the Entity it works well, but the List of objects converted from the JSON string does not allow me to easily write an Expression that will filter the Objects in the list.
REST API that is trying to filter the Entity Objects using QueryDSL
@GetMapping(path = "/filtered/page", produces = MediaType.APPLICATION_JSON_VALUE)
public Page<Terminal> findFilteredWithPage(
        @RequestParam(required = false) String terminalId,
        @RequestParam(required = false) String programName,
        @PageableDefault(size = 20) @SortDefault.SortDefaults({ @SortDefault(sort = "terminalId") }) Pageable p) {

    BooleanBuilder builder = new BooleanBuilder();
    if (StringUtils.isNotBlank(terminalId))
        builder.and(QTerminal.terminal.terminalId.upper()
                .contains(StringUtils.upperCase(terminalId)));
    // TODO: Figure out how to use QueryDsl to get the converted List as a predicate
    // The code below to find the programRegistrations does not allow a call to any(),
    // expects a CollectionExpression or a SubqueryExpression for calls to eqAny() or in()
    if (StringUtils.isNotBlank(programName))
        builder.and(QTerminal.terminal.programRegistrations.any().name()
                .contains(StringUtils.upperCase(programName)));
    return terminalRepository.findAll(builder.getValue(), p);
}
I am wanting to get any Terminals that have a ProgramRegistration object with the program name equal to the parameter passed into the REST service.
I have been trying to get CollectionExpression or SubQueryExpression working without success, since they all seem to want to perform a join between two Entity objects. I do not know how to create the path and query so that it can iterate over the programRegistrations checking the "program" field for a match. I do not have a QProgramRegistration object to join with, since it is just a list of POJOs.
How can I get the predicate to match only the Terminals that have programs with the name I am searching for?
This is the line that is not working:
builder.and(QTerminal.terminal.programRegistrations.any().name()
        .contains(StringUtils.upperCase(programName)));
AttributeConverters have issues in Querydsl because they have issues in JPQL - the query language of JPA - itself. It is unclear what the underlying query type of the attribute actually is, and whether a parameter should be a basic type of that query type or should be converted using the conversion. Such conversion, whilst it appears logical, is not defined in the JPA specification. Thus a basic type of the query type needs to be used instead, which leads to new difficulties, because Querydsl can't know what type that needs to be; it only knows the Java type of the attribute.
A workaround can be to force the field to result in a StringPath by annotating the field with @QueryType(PropertyType.STRING). Whilst this fixes the issue for some queries, you will run into different issues in other scenarios. For more information, see this thread.
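For example (an illustrative sketch of that annotation on the entity above; not code from the original posts):
@Convert(converter = ProgramRegistrationConverter.class)
@QueryType(PropertyType.STRING)
private List<ProgramRegistration> programRegistrations;
After regenerating the Q-types, the generated path becomes a StringPath, so string predicates such as contains(...) compile against the raw column value.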
Although the following desired QueryDsl looks like it should work
QTerminal.terminal.programRegistrations.any().name().contains(programName);
In reality JPA would never be able to convert it into something that makes sense in terms of SQL. The only SQL that JPA could convert it into would be as follows:
SELECT t.terminal_id FROM terminal t where t.terminal_id LIKE '%00%' and t.program_registrations like '%"program":"MY_PROGRAM_NAME"%';
This would work in this use case, but it would be semantically wrong, and therefore it is correct that it does not work: trying to select unstructured data with a structured query language makes no sense.
The only solution is to treat the data as characters for the DB search criteria, treat it as a list of objects after the query completes, and then filter the rows in Java. This, however, makes the paging feature rather useless.
One possible solution is to have a secondary, read-only String version of the column that is used for the DB search criteria and is not run through the AttributeConverter.
@JsonIgnore
@Column(name = "programRegistrations", insertable = false, updatable = false)
private String programRegistrationsStr;
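A predicate can then target that string column directly (a sketch; matching on the raw JSON fragment is an assumption about the stored format):
if (StringUtils.isNotBlank(programName))
    builder.and(QTerminal.terminal.programRegistrationsStr
            .contains("\"program\":\"" + programName + "\""));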
The real solution is: do not use unstructured data when you want structured queries on that data. Either move the data to a database that supports JSON natively for queries, or model the data correctly in DDL.
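For example, on a database with native JSON support a native query can filter inside the document. An illustrative sketch using MySQL's JSON_SEARCH (the column name program_registrations and the repository method are assumptions):
public interface TerminalRepository extends JpaRepository<Terminal, String> {

    @Query(value = "SELECT * FROM terminal t WHERE JSON_SEARCH(t.program_registrations, 'one', :program, NULL, '$[*].program') IS NOT NULL",
            nativeQuery = true)
    List<Terminal> findByProgramName(@Param("program") String program);
}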
To give a short answer: the parameter used in the predicate on an attribute with @QueryType must also be used in another predicate on an attribute of type String.
It's a known issue, described in this thread: https://github.com/querydsl/querydsl/issues/2652
I simply want to share my experience about this bug.
Model
I have an entity like
@Entity
public class JobLog {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private String id;

    @QueryType(PropertyType.STRING)
    private LocalizedString message;
}
Issue
I want to apply some predicates to message. Unfortunately, with this configuration, I can't do this:
predicates.and(jobLog.message.likeIgnoreCase(escapedTextFilter));
because I run into the same issues as everyone else!
Solution
But I found a way to work around it :)
predicates.and(
    (jobLog.id.likeIgnoreCase(escapedTextFilter).and(jobLog.id.isNull()))
        .or(jobLog.message.likeIgnoreCase(escapedTextFilter)));
Why does this work around the bug?
It's important that escapedTextFilter is the same in both predicates!
Indeed, in this case the constant is converted to SQL in the first predicate (which is of String type), and the second predicate then uses the converted value.
The downside?
It adds a performance overhead because of the OR in the predicate.
Hope this can help someone :)
I've found one way to solve this problem; my main idea is to use the MySQL function cast(xx as char) to cheat Hibernate. Below is my setup. My real code is from work, so I've made an example.
// StudentRepo.java
public interface StudentRepo extends JpaRepository<Student, Long>, QuerydslPredicateExecutor<Student>, JpaSpecificationExecutor<Student> {
}
// Student.java
@Data
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode(of = "id")
@Entity
@Builder
@Table(name = "student")
public class Student {

    @Convert(converter = ClassIdsConvert.class)
    private List<String> classIds;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}
// ClassIdsConvert.java
public class ClassIdsConvert implements AttributeConverter<List<String>, String> {

    @Override
    public String convertToDatabaseColumn(List<String> ips) {
        // classid23,classid24,classid25
        return String.join(",", ips);
    }

    @Override
    public List<String> convertToEntityAttribute(String dbData) {
        if (StringUtils.isEmpty(dbData)) {
            return null;
        } else {
            return Stream.of(dbData.split(",")).collect(Collectors.toList());
        }
    }
}
My DB data is below:

| id | classIds    | name | address |
|----|-------------|------|---------|
| 1  | 2,3,4,11    | join | 北京市  |
| 2  | 2,31,14,11  | hell | 福建省  |
| 3  | 2,12,22,33  | work | 福建省  |
| 4  | 1,4,5,6     | ouy  | 广东省  |
| 5  | 11,31,34,22 | yup  | 上海市  |
-- ----------------------------
-- Table structure for student
-- ----------------------------
DROP TABLE IF EXISTS `student`;
CREATE TABLE `student` (
`id` int(11) NOT NULL,
`classIds` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
`name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
`address` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;
Using JpaSpecificationExecutor solves the problem:
Specification<Student> specification = (root, query, criteriaBuilder) -> {
    String classId = "classid24";
    String classIdStr = StringUtils.wrap(classId, "%");
    var predicate = criteriaBuilder.like(root.get("classIds").as(String.class), classIdStr);
    return criteriaBuilder.or(predicate);
};
var students = studentRepo.findAll(specification);
log.info(new Gson().toJson(students));
Note the code root.get("classIds").as(String.class).
In my opinion, if I don't add .as(String.class), Hibernate will treat the type of student.classIds as a List and throw the exception below. The SQL shown after it runs correctly in MySQL, but Hibernate can't work with it:
org.springframework.dao.InvalidDataAccessApiUsageException: Parameter value [%classid24%] did not match expected type [java.util.List (n/a)]; nested exception is java.lang.IllegalArgumentException: Parameter value [%classid24%] did not match expected type [java.util.List (n/a)]
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
student0_.class_ids LIKE '%classid24%' ESCAPE '!'
If you add .as(String.class), Hibernate will treat the type of student.classIds as a String and won't check it at all.
The SQL will be as below, which runs correctly in MySQL and also via JPA:
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
cast( student0_.class_ids AS CHAR ) LIKE '%classid24%' ESCAPE '!'
Since the problem can be solved with JpaSpecificationExecutor, I thought it could also be solved with Querydsl. In the end I found the template feature in Querydsl.
String classId = "classid24";
StringTemplate st = Expressions.stringTemplate("cast({0} as string)", qStudent.classIds);
var students = Lists.newArrayList(studentRepo.findAll(st.like(StringUtils.wrap(classId, "%"))));
log.info(new Gson().toJson(students));
Its SQL is as below:
SELECT
student0_.id AS id1_0_,
student0_.class_ids AS class_ids2_0_
FROM
student student0_
WHERE
cast( student0_.class_ids AS CHAR ) LIKE '%classid24%' ESCAPE '!'
I'm trying to persist a SparseArray in a Room database and cannot get it to compile. I keep getting the "Not sure how to convert a Cursor to this method's return type" error message, along with "The query returns some columns [plannerLineData] which are not used by android.util.SparseArray."
I have tried using a single field in the PlannerLine Entity along with a separate PlannerLineData class.
I have data converters to convert SparseArray to String and to convert String back to SparseArray.
I have checked several questions on stackoverflow and have successfully used the Date to Long and the Long to Date converters in other projects, but I seem to be missing something somewhere.
Data Files:
@Entity
public class PlannerLine implements Serializable {

    private static final long serialVersionUID = 1L;

    @TypeConverters(Converters.class)
    @PrimaryKey
    @SerializedName("planner_line")
    @NonNull
    public SparseArray plannerLineData;

    public SparseArray getPlannerLineData() {
        return plannerLineData;
    }

    public void setPlannerLineData(SparseArray plannerLineData) {
        this.plannerLineData = plannerLineData;
    }
}

public class PlannerLineData implements Serializable {

    @SerializedName("lineId")
    public int lineId;

    @SerializedName("plan_text")
    public String planText;

    public int getLineId() {
        return lineId;
    }

    public void setLineId(int lineId) {
        this.lineId = lineId;
    }

    public String getPlanText() {
        return planText;
    }

    public void setPlanText(String planText) {
        this.planText = planText;
    }
}
DAO problem area:
@Dao
public interface PlannerDao {

    @Query("SELECT * from PlannerLine")
    public SparseArray getPlannerLine(); // <--- Doesn't like this line
I have also tried returning SparseArray<PlannerLine> and SparseArray<PlannerLineData>, but no joy.
Converters class:
public class Converters {

    @TypeConverter
    public static String sparseArrayToString(SparseArray sparseArray) {
        if (sparseArray == null) {
            return null;
        }
        int size = sparseArray.size();
        if (size <= 0) {
            return "{}";
        }
        StringBuilder buffer = new StringBuilder(size * 28);
        buffer.append('{');
        for (int i = 0; i < size; i++) {
            if (i > 0) {
                buffer.append("-,- ");
            }
            int key = sparseArray.keyAt(i);
            buffer.append(key);
            buffer.append("-=-");
            Object value = sparseArray.valueAt(i);
            buffer.append(value);
        }
        buffer.append('}');
        return buffer.toString();
    }

    @TypeConverter
    public static SparseArray stringToSparseArray(String string) {
        if (string == null) {
            return null;
        }
        String entrySeparator = "-=-";
        String elementSeparator = "-,-";
        SparseArray sparseArray = new SparseArray();
        // strip the surrounding { and } written by sparseArrayToString
        String body = string.substring(1, string.length() - 1);
        if (body.isEmpty()) {
            return sparseArray;
        }
        String[] entries = StringUtils.splitByWholeSeparator(body, elementSeparator);
        for (int i = 0; i < entries.length; i++) {
            // trim the space left over from the "-,- " element separator
            String[] parts = StringUtils.splitByWholeSeparator(entries[i].trim(), entrySeparator);
            int key = Integer.parseInt(parts[0]);
            String text = parts[1];
            sparseArray.append(key, text);
        }
        return sparseArray;
    }
}
Suggestions would be appreciated. Thanks
Edit:
My original vision for this app was to store all the plan lines in a single SparseArray, along with two additional SparseIntArrays (which I did not mention before because the solution would be similar to the SparseArray) to hold info on how the plan lines interact with each other.
After reading through @dglozano's helpful responses, I have decided to re-design the app to just store regular DB records in Room, load the data into the SparseArray (and the two SparseIntArrays) at startup, use only the in-memory SparseArray and SparseIntArrays while the app is active, then write changes in the sparse arrays back to the DB during onStop(). I am also considering updating the DB in the background as I work through the app.
Because the answers and suggestions provided by @dglozano led me to the re-design decision, I am accepting his answer as the solution.
Thanks for the help.
It seems that you are doing the Conversion properly. However, the problem is in your DAO Query:
#Query("SELECT * from PlannerLine") // This returns a List of PlannerLine, not a SparseArray
public SparseArray getPlannerLine(); // The return type is SparseArray, not a List of PlannerLine
Therefore, you can try two different things:
1 - Change the Query to @Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId"), so that the query returns the SparseArray inside the PlannerLine with id lineId. You should change the method signature so it accepts the parameter lineId:
#Query("SELECT plannerLineData FROM PlannerLine WHERE lineId == :lineId")
public SparseArray getPlannerLine(int lineId);
2 - If you want to return the full PlannerLine object and then access its SparseArray field, then you should change the return type. You should also add the lineId parameter to return just one record and not a list of all the PlannerLine stored in the database table.
#Query("SELECT * FROM PlannerLine WHERE lineId == :lineId")
public PlannerLine getPlannerLine(int lineId);
UPDATE
If you want to get a List<PlannerLine> with all the PlannerLine stored in the database, use the following query in your Dao.
#Query("SELECT * FROM PlannerLine")
public List<PlannerLine> getAllPlannerLines();
Then you can access the SparseArray of each PlannerLine in the list as usual.
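For example (illustrative usage, assuming the DAO above is available as plannerDao):
List<PlannerLine> lines = plannerDao.getAllPlannerLines();
for (PlannerLine line : lines) {
    SparseArray data = line.getPlannerLineData();
    // work with each line's SparseArray here
}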
Is there any existing utility to do a DB insert in a better/faster way?
This is what I'm using now (there are a lot of fields; I truncated the field list):
public void insert(Ing ing) {
    String[] fields = new String[]{"field1", "field2", "field3"};
    Object[] params = new Object[]{ing.getField1(), ing.getField2(), ing.getField3()};
    String[] paramsPH = new String[fields.length];
    for (int i = 0; i < paramsPH.length; i++) paramsPH[i] = "?";
    String sql = "INSERT INTO ing(" + StringUtils.join(fields, ",") + ") VALUES (" + StringUtils.join(paramsPH, ",") + ");";
    getJdbcTemplate().update(sql, params);
}
Check this:
import java.util.Date;
import java.util.LinkedHashMap;

import org.apache.commons.lang3.StringUtils;
import org.springframework.jdbc.core.JdbcTemplate;

JdbcTemplate jt = new JdbcTemplate(...); // some instance...
String tableName = "nameDateTable"; // your happy table

LinkedHashMap<String, Object> map = new LinkedHashMap<String, Object>();
map.put("col1Name", "blabla");  // column name and value
map.put("dateAdd", new Date()); // column name and value
// etc..

// You can place any map here (LinkedHashMap!). Here is the magical query:
String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());
The most important parts of this solution are the INSERT-building line above and the use of LinkedHashMap, which preserves insertion order so that the column names and their values stay aligned.
In my Spring JdbcTemplate projects, I usually create a generic BaseDao<T> class that has a method saveObject(T obj).
To achieve this, I use SimpleJdbcInsert like this:
// Constants, from the BaseDAO interface that this method implements
String TABLE_NAME = "tableName";
String GENERATED_KEY = "generatedKey";

/**
 * Save an object using a {@link BaseObjectMapper} returned from the method {@link #getObjectMapper()}
 * Returns the generated key if the map generated by the {@link BaseObjectMapper} contains an entry for {@value #GENERATED_KEY}
 * @param obj the object to be saved
 */
@Override
public int saveObject(T obj) {
    MapSqlParameterSource params = new MapSqlParameterSource();
    // the mapper must transform an object to a map
    // and add the table name where to insert, and, if any, a generated key
    Map<String, Object> paramsMap = getObjectMapper().mapObject(obj);
    String table = (String) paramsMap.remove(TABLE_NAME);
    if (table == null) {
        throw new IllegalArgumentException("The ObjectMapper of " + obj.getClass() + " must return the table name among the result map of mapObject method");
    }
    String generatedKey = (String) paramsMap.remove(GENERATED_KEY);
    String[] colNames = paramsMap.keySet().toArray(new String[paramsMap.keySet().size()]);
    for (String col : colNames) {
        params.addValue(col, paramsMap.get(col));
    }
    // You can have it as a class attribute and create it once the DAO is being instantiated
    SimpleJdbcInsert genericJdbcInsert = new SimpleJdbcInsert(jdbcInsert.getJdbcTemplate().getDataSource())
            .withSchemaName(currentSchema).withTableName(table)
            .usingColumns(colNames);
    if (generatedKey != null) {
        genericJdbcInsert = genericJdbcInsert.usingGeneratedKeyColumns(generatedKey);
        return genericJdbcInsert.executeAndReturnKey(paramsMap).intValue();
    } else {
        genericJdbcInsert.execute(params);
    }
    return 0;
}

protected BaseObjectMapper<T> getObjectMapper() {
    // Implement it in your concrete DAO classes
    throw new UnsupportedOperationException("You must implement this method in your concrete DAO implementation");
}
Here is the BaseObjectMapper interface:
import java.util.Map;

import org.springframework.jdbc.core.RowMapper;

import com.atlasaas.ws.dao.BaseDao;
import com.atlasaas.ws.entities.BaseEntity;

public interface BaseObjectMapper<T extends BaseEntity> extends RowMapper<T> {
    /**
     * Method to transform an object into a {@link Map}
     * The result map must contain all columns to be inserted as keys
     * It also must contain the table name corresponding to the given object
     * The table name must be associated to the key of value: {@link BaseDao#TABLE_NAME}
     * Optionally, if you want your save methods to return a generated primary key value,
     * you should include an entry referencing the generated column name. This entry
     * must then be associated to the key of value: {@link BaseDao#GENERATED_KEY}
     * @param obj The object to be transformed
     * @return the result of this object transformation
     */
    Map<String, Object> mapObject(T obj);
}
If you really want to use SQL in your code, you can use:
org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations#update(String sql, SqlParameterSource paramSource)
where your SQL string would be something like this:
insert into SOME_TABLE(COL1,COL2,COL3) values (:col1Val,:col2Val,:col3Val)
and your SqlParameterSource is built this way:
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("col1Val", val1);
params.addValue("col2Val", val2);
params.addValue("col3Val", val3);
I hope this helps
You can use parameterized SQL to make it a bit simpler
Your code would look something like this
String sql = "INSERT INTO ing(field1, field2, field3) values(?, ?, ?)";
Object[] params=new Object[]{ing.getField1(),ing.getField2(),ing.getField3()};
getJdbcTemplate().update(sql,params);
First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {

    @Id
    private String id;

    private List<Section> sections;
}

public class Section {

    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As documents are big (200K) and sometimes only sub-parts are needed by the application: Is there a possibility to query for a sub-document (section), modify and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can SpringData be used for object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great. For anyone looking for a generic version of his answer, see the code below:
@Service
public class MongoUtils {

    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }

        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could for example be called using
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
    UUID getId();
}
Notice: if the nested document's constructor has parameters of primitive type, it is important for the nested document to also have a default (empty) constructor, which may be protected, so that the class can be instantiated with null arguments.
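For example (an illustrative nested document, not taken from the original code):
public class Item implements Domain {

    private UUID id;
    private int quantity; // primitive-typed field

    protected Item() {
        // default constructor so the mapper can instantiate the class with null arguments
    }

    public Item(UUID id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }

    @Override
    public UUID getId() {
        return id;
    }
}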
Solution
That's my solution for this problem.
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {

    @Id
    private String _id;

    private String name;
    private String code;

    @Field("desc")
    private String description;

    private String startDate;
    private String endDate;

    @Field("cost")
    private long estimatedCost;

    private List<String> countryList;
    private List<Task> tasks;

    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> updateCritTemplChild(
        String id, String idch, String ownername) {

    Query query = new Query();
    query.addCriteria(Criteria.where("_id")
            .is(id)); // find the parent
    query.addCriteria(Criteria.where("tasks._id")
            .is(idch)); // find the child which will be changed

    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated

    return template
            // findAndModify:
            // Find/modify/get the "new object" from a single operation.
            .findAndModify(
                    query, update,
                    new FindAndModifyOptions().returnNew(true), ProjectChild.class
            );
}
What would the equivalent of Oracle's DECODE() function be in the Hibernate Criteria API?
An SQL example of what I need to do:
SELECT DECODE(FIRST_NAME, NULL, LAST_NAME, FIRST_NAME) as NAME ORDER BY NAME;
Which returns LAST_NAME as NAME in the event that FIRST_NAME is NULL.
I would prefer to use the Criteria API but could use HQL if there's no other way.
Check out org.hibernate.criterion.Projections.sqlProjection(...).
Similar to this answer.
For the example you give, you could use COALESCE().
How to simulate NVL in HQL
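For example, an HQL sketch using coalesce (Person, firstName, and lastName are illustrative names; this assumes the dialect accepts coalesce in the order by clause):
List<?> names = session.createQuery(
        "select coalesce(p.firstName, p.lastName) from Person p " +
        "order by coalesce(p.firstName, p.lastName)")
    .list();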
You can use sqlRestriction to call the native decode function.
session.createCriteria(Table.class)
    .add(Restrictions.sqlRestriction(
        "decode({alias}.firstName, null, {alias}.lastName, {alias}.firstName)"));
With HQL, the Oracle dialect already has coalesce and nvl functions, or if you really need decode, you could subclass the dialect and add it as a custom function, as sketched below. I don't know if Hibernate supports a variable number of arguments the way decode does, but worst case you could create decode1, decode2, etc. to support different numbers of arguments.
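A sketch of that dialect subclass (registerFunction and StandardSQLFunction are standard Hibernate APIs; the particular dialect version is an assumption):
public class MyOracleDialect extends Oracle10gDialect {
    public MyOracleDialect() {
        super();
        // expose Oracle's decode to HQL; StandardSQLFunction does not enforce an arity
        registerFunction("decode", new StandardSQLFunction("decode"));
    }
}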
Or, if you aren't using the column in a where or group by, you could just bring both attributes back and do the check in Java.
Ended up adding a formula for it:
<property name="name" formula="coalesce(first_name, last_name)"/>
I'm concerned about cross-database problems and possibly efficiency problems with this approach so I'm willing to change the accepted answer.
You can use Hibernate's @Type attribute. Based on your requirement you can customize the annotation and apply it on top of the field, like:
public class PhoneNumberType implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[]{Types.INTEGER, Types.INTEGER, Types.INTEGER};
    }

    @Override
    public Class returnedClass() {
        return PhoneNumber.class;
    }

    // other methods
}
First, the nullSafeGet method:
@Override
public Object nullSafeGet(ResultSet rs, String[] names,
        SharedSessionContractImplementor session, Object owner)
        throws HibernateException, SQLException {
    int countryCode = rs.getInt(names[0]);
    if (rs.wasNull())
        return null;
    int cityCode = rs.getInt(names[1]);
    int number = rs.getInt(names[2]);
    PhoneNumber employeeNumber = new PhoneNumber(countryCode, cityCode, number);
    return employeeNumber;
}
Next, the nullSafeSet method:
@Override
public void nullSafeSet(PreparedStatement st, Object value,
        int index, SharedSessionContractImplementor session)
        throws HibernateException, SQLException {
    if (Objects.isNull(value)) {
        st.setNull(index, Types.INTEGER);
    } else {
        PhoneNumber employeeNumber = (PhoneNumber) value;
        st.setInt(index, employeeNumber.getCountryCode());
        st.setInt(index + 1, employeeNumber.getCityCode());
        st.setInt(index + 2, employeeNumber.getNumber());
    }
}
Finally, we can declare our custom PhoneNumberType in our OfficeEmployee entity class:
@Entity
@Table(name = "OfficeEmployee")
public class OfficeEmployee {

    @Columns(columns = { @Column(name = "country_code"),
            @Column(name = "city_code"), @Column(name = "number") })
    @Type(type = "com.baeldung.hibernate.customtypes.PhoneNumberType")
    private PhoneNumber employeeNumber;

    // other fields and methods
}
This might solve your problem, and it will work for all databases. If you want more info, refer to https://www.baeldung.com/hibernate-custom-types
If you can use HQL then you can replace DECODE with CASE.
You can update your query from,
SELECT DECODE(FIRST_NAME, NULL, LAST_NAME, FIRST_NAME) as NAME ORDER BY NAME;
to,
SELECT CASE WHEN FIRST_NAME IS NULL THEN LAST_NAME ELSE FIRST_NAME END as NAME ORDER BY NAME;