I have a test case that tries to query child entity changes by instanceId, and it throws an exception:
#TypeName("EntityOne")
class EntityOne {
#Id int id
String name
List<EntityTwo> entityTwos
EntityOne(int id, String name, List<EntityTwo> entityTwos) {
this.id = id
this.name = name
this.entityTwos = entityTwos
}
}
#TypeName("EntityTwo")
class EntityTwo {
#Id int id
String name
#Id int entityOneId
EntityTwo(int id, String name, entityOneId) {
this.id = id
this.name = name
this.entityOneId = entityOneId
}
}
This is the audited data:
oldOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo", 1)])
newOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo", 1),
                                        new EntityTwo(2, "EntityTwoOne", 1)])
This is the query that throws the exception:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId(1, EntityTwo) // Error is thrown
        .withNewObjectChanges()
        .withChildValueObjects()
        .build())
Exception:
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.Map
at org.javers.core.metamodel.type.InstanceIdFactory.dehydratedLocalId(InstanceIdFactory.java:48)
at org.javers.core.metamodel.type.InstanceIdFactory.create(InstanceIdFactory.java:22)
at org.javers.core.metamodel.type.EntityType.createIdFromInstanceId(EntityType.java:127)
at org.javers.core.metamodel.object.GlobalIdFactory.createInstanceId(GlobalIdFactory.java:115)
at org.javers.core.metamodel.object.GlobalIdFactory.createFromDto(GlobalIdFactory.java:127)
at org.javers.repository.jql.FilterDefinition$IdFilterDefinition.compile(FilterDefinition.java:27)
at org.javers.repository.jql.JqlQuery.compile(JqlQuery.java:120)
at org.javers.repository.jql.QueryCompiler.compile(QueryCompiler.java:16)
at org.javers.repository.jql.ChangesQueryRunner.queryForChanges(ChangesQueryRunner.java:20)
at org.javers.repository.jql.QueryRunner.queryForChanges(QueryRunner.java:48)
at org.javers.core.JaversCore.findChanges(JaversCore.java:196)
at com.example.CaseQueryByCompositeKey.should able to query audit changes by composite key(CaseQueryByCompositeKey.groovy:60)
Also, is there a way to query by composite key in JaVers?
It worked after passing the instanceId as a Map:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId([id: 1, entityOneId: 1], EntityTwo)
        .withNewObjectChanges()
        .withChildValueObjects()
        .build())
It's explicitly shown in the Javadoc:
/**
 * Query for selecting Changes, Snapshots or Shadows for a given Entity instance.
 * <br/><br/>
 *
 * For example, last Changes on "bob" Person:
 * <pre>
 * javers.findChanges( QueryBuilder.byInstanceId("bob", Person.class).build() );
 * </pre>
 *
 * @param localId Value of an Id-property. When an Entity has Composite-Id (more than one Id-property),
 *                <code>localId</code> should be <code>Map<String, Object></code> with
 *                Id-property name to value pairs.
 * @see CompositeIdExample.groovy
 */
public static QueryBuilder byInstanceId(Object localId, Class entityClass){
Validate.argumentsAreNotNull(localId, entityClass);
return new QueryBuilder(new IdFilterDefinition(instanceId(localId, entityClass)));
}
Try to read the Javadoc first before asking questions about a method.
I'm a beginner with Spring. I want to create the client/server for my Android app.
My Beans:
@Entity
data class Aliment(
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    val numAliment: Long,
    val nomFrAliment: String,
    val nomAnAliment: String,
    val numGenre: Float,
    @ManyToOne
    @JoinColumn(name = "num_genre")
    val genre: Genre
) {
    constructor() : this(0, "inconnu", "inconnu", 0f, Genre())
}

@Entity
data class Genre(
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    val numGenre: Float,
    val nomAnGenre: String,
    val nomFrGenre: String
) {
    constructor() : this(0.0f, "", "")
}
My MySQL tables:
CREATE TABLE `aliment` (
`num_aliment` integer NOT NULL AUTO_INCREMENT,
`nom_fr_aliment` varchar(255) NOT NULL,
`nom_an_aliment` varchar(255) NOT NULL,
`num_genre` varchar(4) NOT NULL,
PRIMARY KEY (`num_aliment`),
FOREIGN KEY (`num_genre`) REFERENCES `genre`(`num_genre`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `genre` (
`num_genre` varchar(4) NOT NULL,
`nom_an_genre` varchar(128) NOT NULL,
`nom_fr_genre` varchar(128) NOT NULL,
PRIMARY KEY (`num_genre`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
What I want: when I request an Aliment, the genre: Genre value should be populated automatically from its foreign key reference.
How can I do that?
More details
From my mobile app I make the HTTP request and the client returns the JSON object, but the genre value is null (I want the client to look up the corresponding num_genre and set it on the genre field).
So for the moment I make another request to fetch the corresponding Genre and replace the null genre value with it.
On the client I have:
Repository
@Repository
interface AlimRepository : JpaRepository<Aliment, Long> {
    fun findAlimentsByNomFrAlimentStartingWith(nomFrAliment: String): List<Aliment>
    fun findAlimentsByNomFrAlimentContains(nomFrAliment: String): List<Aliment>
    fun findAlimentsByNomAnAlimentStartsWith(nomAnAliment: String): List<Aliment>
    fun findAlimentsByNumGenre(numAliment: Float): List<Aliment>
}

@Repository
interface GenreRepository : JpaRepository<Genre, Float>
And I have my Controllers:
@RestController
@RequestMapping("/alim")
class AlimController(private val alimRepository: AlimRepository) {

    /**
     * RETURN full list Aliments
     *
     * @link : http://localhost:8080/alim/all
     */
    @GetMapping("/all")
    fun getAllAliments(): List<Aliment> {
        println("/alim/all")
        return alimRepository.findAll()
    }

    /**
     * RETURN alim by num aliment (works like Id)
     *
     * @link : http://localhost:8080/alim/num/2000
     */
    @GetMapping("/num/{numAliment}")
    fun getAlimByNum(@PathVariable(value = "numAliment") alimId: Long): ResponseEntity<Aliment> {
        println("/alim/num/$alimId")
        return alimRepository.findById(alimId).map { aliment ->
            ResponseEntity.ok(aliment)
        }.orElse(ResponseEntity.notFound().build())
    }

    /**
     * RETURN ListAlims by StartingWith nameFr
     *
     * @link : http://localhost:8080/alim/name_fr_start/pat
     */
    @GetMapping("/name_fr_start/{nomFrAliment}")
    fun getAlimsByNameFrStartingWith(@PathVariable("nomFrAliment") nomFrAliment: String): List<Aliment> {
        println("/alim/name_fr_start/$nomFrAliment")
        return alimRepository.findAlimentsByNomFrAlimentStartingWith(nomFrAliment)
    }

    /**
     * RETURN ListAlims by containing nameFr
     *
     * @link : http://localhost:8080/alim/name_fr_contain/pat
     */
    @GetMapping("/name_fr_contain/{nomFrAliment}")
    fun getAlimsByNameFrContains(@PathVariable("nomFrAliment") nomFrAliment: String): List<Aliment> {
        println("/alim/name_fr_contain/$nomFrAliment")
        return alimRepository.findAlimentsByNomFrAlimentContains(nomFrAliment)
    }

    /**
     * RETURN ListAlims by containing nameEn
     *
     * @link : http://localhost:8080/alim/name_en/pota
     */
    @GetMapping("/name_en/{nomAnAliment}")
    fun getAlimsByNameEn(@PathVariable("nomAnAliment") nomAnAliment: String): List<Aliment> {
        println("/alim/name_en/$nomAnAliment")
        return alimRepository.findAlimentsByNomAnAlimentStartsWith(nomAnAliment)
    }

    /**
     * RETURN listAlims found by numGenre
     *
     * @link : http://localhost:8080/alim/num_genre/20
     */
    @GetMapping("/num_genre/{numGenre}")
    fun getAlimByNumGenre(@PathVariable("numGenre") numGenre: Float): List<Aliment> {
        println("/alim/num_genre/$numGenre")
        return alimRepository.findAlimentsByNumGenre(numGenre)
    }
}
@RestController
@RequestMapping("/genre")
class GenreController(private val genreRepository: GenreRepository) {

    /**
     * RETURN full list Genre
     *
     * @link : http://localhost:8080/genre/all
     */
    @GetMapping("/all")
    fun getAllGenre(): List<Genre> {
        println("/genre/all")
        return genreRepository.findAll()
    }

    /**
     * RETURN genre by numGenre (works like Id)
     *
     * @link : http://localhost:8080/genre/num_genre/23.5
     */
    @GetMapping("/num_genre/{numGenre}")
    fun getGenreByNumGenre(@PathVariable(value = "numGenre") numId: Float): ResponseEntity<Genre> {
        println("/genre/num_genre/$numId")
        return genreRepository.findById(numId).map { genre ->
            ResponseEntity.ok(genre)
        }.orElse(ResponseEntity.notFound().build())
    }
}
Of course when I try this code I get this error:
2020-12-03 15:17:06.143 ERROR 14044 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.DuplicateMappingException: Table [aliment] contains physical column name [num_genre] referred to by multiple logical column names: [num_genre], [numGenre]
The @ManyToOne on genre means that many Aliments can reference one Genre, so the foreign key is stored on the Aliment side, in the aliment table.
The aliment table's num_genre column is already mapped by the plain numGenre property; with @JoinColumn(name = "num_genre") you instruct Hibernate to store the foreign key to Genre in that same column, which is exactly the duplicate mapping the exception complains about.
Long story short: remove the scalar numGenre property, since the associated Genre is already represented by the genre property, or keep it but map it read-only with @Column(name = "num_genre", insertable = false, updatable = false) so that only the association writes to the column. If you drop @JoinColumn instead, Hibernate will generate its own join column (genre_num_genre by default) in the aliment table.
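For illustration, a minimal sketch of those two options (shown with Java field syntax; the same annotations apply to the Kotlin data class, and only the fields involved in the error are included):
@Entity
public class Aliment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long numAliment;

    private String nomFrAliment;
    private String nomAnAliment;

    // Option A: the association is the only mapping of the num_genre column
    @ManyToOne
    @JoinColumn(name = "num_genre")
    private Genre genre;

    // Option B (alternative): keep the scalar, but read-only, so it no longer
    // competes with the association for writes to num_genre
    // @Column(name = "num_genre", insertable = false, updatable = false)
    // private Float numGenre;
}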
Hello, I am using Cassandra to save user data. I want to store a user's data for only 24 hours, so I am setting a TTL of 24 hours. There are multiple entries per user, so I want to batch-insert the data for each user instead of making multiple calls to the database. I am using CassandraOperations to set the TTL, and I am able to set it for a single record. How do I provide a TTL when inserting data in batches?
public class CustomizedUserFeedRepositoryImpl<T> implements CustomizedUserFeedRepository<T> {

    private CassandraOperations cassandraOperations;

    @Autowired
    CustomizedUserFeedRepositoryImpl(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Override
    public <S extends T> S save(S entity, int ttl) {
        InsertOptions insertOptions;
        if (ttl == 0) {
            insertOptions = InsertOptions.builder().ttl(Duration.ofHours(24)).build();
        } else {
            insertOptions = InsertOptions.builder().ttl(ttl).build();
        }
        cassandraOperations.insert(entity, insertOptions);
        return entity;
    }

    @Override
    public void saveAllWithTtl(java.lang.Iterable<T> entities, int ttl) {
        entities.forEach(entity -> save(entity, ttl));
    }
}
As you can see, I have to iterate over the list and make a database call for each record. The batch operation cassandraOperations.batchOps().insert() only takes a list of objects. How do I set the TTL for each record when using the batchOps() function?
/**
 * Add a collection of inserts with given {@link WriteOptions} to the batch.
 *
 * @param entities the entities to insert; must not be {@literal null}.
 * @param options the WriteOptions to apply; must not be {@literal null}.
 * @return {@code this} {@link CassandraBatchOperations}.
 * @throws IllegalStateException if the batch was already executed.
 * @since 2.0
 */
CassandraBatchOperations insert(Iterable<?> entities, WriteOptions options);
You can use the insert(Iterable<?> entities, WriteOptions options) method:
@EqualsAndHashCode(callSuper = true)
public class WriteOptions extends QueryOptions {

    private static final WriteOptions EMPTY = new WriteOptionsBuilder().build();

    private final Duration ttl;
    private final @Nullable Long timestamp;

batchOperations.insert(entity, WriteOptions.builder().ttl(20).build());
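Putting it together, a minimal sketch of saveAllWithTtl rewritten to use a single batch statement instead of one insert per entity (assuming Spring Data Cassandra 2.x, where this insert overload exists; the 24-hour fallback mirrors the save method above):
@Override
public void saveAllWithTtl(Iterable<T> entities, int ttl) {
    WriteOptions options = (ttl == 0)
            ? WriteOptions.builder().ttl(Duration.ofHours(24)).build()
            : WriteOptions.builder().ttl(ttl).build();

    cassandraOperations.batchOps()
            .insert(entities, options) // every entity in the batch gets the same TTL
            .execute();                // one round trip instead of one call per record
}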
Link.java
@Entity
@Table(name = "LINK")
@AttributeOverride(name = "id", column = @Column(name = "LINK_ID"))
public class Link extends AbstractAuditableEntity<Integer> {

    private static final long serialVersionUID = 3825555385014396995L;

    @Column(name = "NAME")
    private String name;

    @Column(name = "UI_SREF")
    private String uiSref;

    @ManyToOne
    @JoinColumn(name = "PARENT_LINK_ID")
    private Link parentLink;

    @OneToMany(mappedBy = "parentLink", fetch = FetchType.EAGER)
    private List<Link> childLinks;

    /**
     * @return the name
     */
    public String getName() {
        return name;
    }

    /**
     * @param name the name to set
     */
    public void setName(String name) {
        this.name = name;
    }

    /**
     * @return the uiSref
     */
    public String getUiSref() {
        return uiSref;
    }

    /**
     * @param uiSref the uiSref to set
     */
    public void setUiSref(String uiSref) {
        this.uiSref = uiSref;
    }

    /**
     * @return the parentLink
     */
    public Link getParentLink() {
        return parentLink;
    }

    /**
     * @param parentLink the parentLink to set
     */
    public void setParentLink(Link parentLink) {
        this.parentLink = parentLink;
    }

    /**
     * @return the childLinks
     */
    public List<Link> getChildLinks() {
        return childLinks;
    }

    /**
     * @param childLinks the childLinks to set
     */
    public void setChildLinks(List<Link> childLinks) {
        this.childLinks = childLinks;
    }
}
LinkRepository.java
public interface LinkRepository extends BaseRepository<Integer, Link> {

    @Query("select distinct p from Link l JOIN fetch l.parentLink p where l.id in (select lar.link.id from LinkAccessRole lar where lar.accessRoleLu in ?1) and p.id in (select lar.link.id from LinkAccessRole lar where lar.accessRoleLu in ?1)")
    public List<Link> getNavigationByaccessRoleLuList(List<AccessRoleLu> accessRoleLu);
}
Link_Table
Link_Access_Role Table
generated Queries:
SELECT DISTINCT t0.LINK_ID, t0.CREATED_BY_ID, t0.CREATED_DATE, t0.LAST_MODIFIED_BY_ID, t0.LAST_MODIFIED_DATE, t0.NAME, t0.UI_SREF, t0.PARENT_LINK_ID FROM LINK t0, LINK t1 WHERE ((t1.LINK_ID IN (SELECT t2.LINK_ID FROM LINK_ACCESS_ROLE t3, LINK t2 WHERE ((t3.ACCESS_ROLE_ID IN (?,?)) AND (t2.LINK_ID = t3.LINK_ID))) AND t0.LINK_ID IN (SELECT t4.LINK_ID FROM LINK_ACCESS_ROLE t5, LINK t4 WHERE ((t5.ACCESS_ROLE_ID IN (?,?)) AND (t4.LINK_ID = t5.LINK_ID)))) AND (t0.LINK_ID = t1.PARENT_LINK_ID))
bind => [4 parameters bound]
SELECT LINK_ID, CREATED_BY_ID, CREATED_DATE, LAST_MODIFIED_BY_ID, LAST_MODIFIED_DATE, NAME, UI_SREF, PARENT_LINK_ID FROM LINK WHERE (PARENT_LINK_ID = ?)
bind => [1 parameter bound]
SELECT LINK_ID, CREATED_BY_ID, CREATED_DATE, LAST_MODIFIED_BY_ID, LAST_MODIFIED_DATE, NAME, UI_SREF, PARENT_LINK_ID FROM LINK WHERE (PARENT_LINK_ID = ?)
bind => [1 parameter bound]
I get one query for each child related to the fetched parent, regardless of whether it has the access role or not.
I want to fetch the parents and only those children that have the access role, not all children related to that parent.
The only way that you can fetch a parent entity and have one of its collections populated with a subset of entries based on some criteria is by using Hibernate's proprietary filters.
I'm not certain whether other JPA providers offer a similar proprietary solution, but JPA itself doesn't support this directly.
You first need to register a filter definition using @FilterDef, and then you reference that definition with @Filter on your collection property.
The hard part is that you can't rely on Spring Data's @Query or its repository implementation generation process to help here. You will need a real repository implementation so that you can manually enable this Hibernate filter before you query the parent entity.
Filter filter = session.enableFilter( "link-with-restrictions-by-roles" );
filter.setParameterList( "roles", yourRolesList );
return session.createQuery( ... ).getResultList();
The documentation describes the use of @Filter and @FilterDef in detail. You can also find another post of mine where I give slightly more implementation details here.
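As a rough illustration (Hibernate 5 style; the filter name matches the snippet above, while the condition and the parameter type are assumptions based on the tables shown), the mapping side could look like this:
@Entity
@Table(name = "LINK")
@FilterDef(name = "link-with-restrictions-by-roles",
           parameters = @ParamDef(name = "roles", type = "integer"))
public class Link extends AbstractAuditableEntity<Integer> {

    @OneToMany(mappedBy = "parentLink", fetch = FetchType.EAGER)
    @Filter(name = "link-with-restrictions-by-roles",
            condition = "LINK_ID in (select lar.LINK_ID from LINK_ACCESS_ROLE lar "
                      + "where lar.ACCESS_ROLE_ID in (:roles))")
    private List<Link> childLinks;

    // other fields and accessors as in the original entity
}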
Is there any existing utility to do a DB insert in a better/faster way?
This is what I'm using now (there are a lot of fields; I truncated the field list):
public void insert(Ing ing) {
    String[] fields = new String[]{"field1", "field2", "field3"};
    Object[] params = new Object[]{ing.getField1(), ing.getField2(), ing.getField3()};
    String[] paramsPH = new String[fields.length];
    for (int i = 0; i < paramsPH.length; i++) paramsPH[i] = "?";
    String sql = "INSERT INTO ing(" + StringUtils.join(fields, ",") + ") VALUES (" + StringUtils.join(paramsPH, ",") + ");";
    getJdbcTemplate().update(sql, params);
}
Check this:
import java.util.Date;
import java.util.LinkedHashMap;

import org.apache.commons.lang3.StringUtils;
import org.springframework.jdbc.core.JdbcTemplate;

JdbcTemplate jt = new JdbcTemplate(...); // some instance...

String tableName = "nameDateTable"; // your happy table
LinkedHashMap<String, Object> map = new LinkedHashMap<String, Object>();
map.put("col1Name", "blabla");  // column name and value
map.put("dateAdd", new Date()); // column name and value
// etc.

// You can place any map here (LinkedHashMap!). Here is a magical query:
String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());
The most important parts of this solution are the LinkedHashMap (it preserves insertion order, so the column names and the values array stay aligned) and this query-building line:
String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());
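If you need this in more than one place, the same idea can be wrapped in a small helper method. A sketch (the method and parameter names are mine, not an existing utility):
public int insert(JdbcTemplate jdbcTemplate, String tableName, LinkedHashMap<String, Object> columns) {
    // LinkedHashMap keeps insertion order, so keySet() and values() stay aligned
    String sql = "INSERT INTO " + tableName
            + " (\"" + StringUtils.join(columns.keySet(), "\",\"") + "\")"
            + " VALUES (" + StringUtils.repeat("?", ",", columns.size()) + ")";
    return jdbcTemplate.update(sql, columns.values().toArray());
}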
In my Spring JdbcTemplate projects, I usually create a generic BaseDao<T> class that has a method saveObject(T obj).
To achieve this, I use SimpleJdbcInsert like this:
// Constants, from the BaseDao interface that this method implements
String TABLE_NAME = "tableName";
String GENERATED_KEY = "generatedKey";

/**
 * Save an object using a {@link BaseObjectMapper} returned from the method {@link #getObjectMapper()}.
 * Returns the generated key if the map produced by the {@link BaseObjectMapper} contains an entry for {@value #GENERATED_KEY}.
 * @param obj the object to be saved
 */
@Override
public int saveObject(T obj) {
    MapSqlParameterSource params = new MapSqlParameterSource();
    // the mapper must transform an object to a map
    // and add the table name where to insert, and, if any, a generated key
    Map<String, Object> paramsMap = getObjectMapper().mapObject(obj);
    String table = (String) paramsMap.remove(TABLE_NAME);
    if (table == null) {
        throw new IllegalArgumentException("The ObjectMapper of " + obj.getClass() + " must return the table name among the result map of the mapObject method");
    }
    String generatedKey = (String) paramsMap.remove(GENERATED_KEY);
    String[] colNames = paramsMap.keySet().toArray(new String[paramsMap.keySet().size()]);
    for (String col : colNames) {
        params.addValue(col, paramsMap.get(col));
    }
    // You can have it as a class attribute and create it once the DAO is instantiated
    SimpleJdbcInsert genericJdbcInsert = new SimpleJdbcInsert(jdbcInsert.getJdbcTemplate().getDataSource())
            .withSchemaName(currentSchema).withTableName(table)
            .usingColumns(colNames);
    if (generatedKey != null) {
        genericJdbcInsert = genericJdbcInsert.usingGeneratedKeyColumns(generatedKey);
        return genericJdbcInsert.executeAndReturnKey(paramsMap).intValue();
    } else {
        genericJdbcInsert.execute(params);
    }
    return 0;
}

protected BaseObjectMapper<T> getObjectMapper() {
    // Implement it in your concrete DAO classes
    throw new UnsupportedOperationException("You must implement this method in your concrete DAO implementation");
}
Here is the BaseObjectMapper interface:
import java.util.Map;

import org.springframework.jdbc.core.RowMapper;

import com.atlasaas.ws.dao.BaseDao;
import com.atlasaas.ws.entities.BaseEntity;

public interface BaseObjectMapper<T extends BaseEntity> extends RowMapper<T> {

    /**
     * Method to transform an object into a {@link Map}.
     * The result map must contain all columns to be inserted as keys.
     * It also must contain the table name corresponding to the given object.
     * The table name must be associated to the key of value: {@link BaseDao#TABLE_NAME}
     * Optionally, if you want your save methods to return a generated primary key value,
     * you should include an entry referencing the generated column name. This entry
     * must then be associated to the key of value: {@link BaseDao#GENERATED_KEY}
     * @param obj the object to be transformed
     * @return the result of this object transformation
     */
    Map<String, Object> mapObject(T obj);
}
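For illustration, a concrete mapper for the Ing class from the question might look like the sketch below (assuming Ing extends BaseEntity, that its three fields are strings stored in an ING table with a generated ING_ID key, and that matching setters exist; all of those names are assumptions):
public class IngMapper implements BaseObjectMapper<Ing> {

    @Override
    public Map<String, Object> mapObject(Ing ing) {
        Map<String, Object> map = new LinkedHashMap<>();
        map.put(BaseDao.TABLE_NAME, "ING");        // table to insert into
        map.put(BaseDao.GENERATED_KEY, "ING_ID");  // ask saveObject to return the generated key
        map.put("FIELD1", ing.getField1());
        map.put("FIELD2", ing.getField2());
        map.put("FIELD3", ing.getField3());
        return map;
    }

    @Override
    public Ing mapRow(ResultSet rs, int rowNum) throws SQLException {
        Ing ing = new Ing();
        ing.setField1(rs.getString("FIELD1"));
        ing.setField2(rs.getString("FIELD2"));
        ing.setField3(rs.getString("FIELD3"));
        return ing;
    }
}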
If you really want to use SQL in your code, you can use:
org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations#update(String sql, SqlParameterSource paramSource)
where your SQL string would be something like this:
insert into SOME_TABLE(COL1,COL2,COL3) values (:col1Val,:col2Val,:col3Val)
and your SqlParameterSource is built this way:
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("col1Val", val1);
params.addValue("col2Val", val2);
params.addValue("col3Val", val3);
I hope this helps
You can use parameterized SQL to make it a bit simpler.
Your code would look something like this:
String sql = "INSERT INTO ing(field1, field2, field3) values(?, ?, ?)";
Object[] params=new Object[]{ing.getField1(),ing.getField2(),ing.getField3()};
getJdbcTemplate().update(sql,params);
First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As the documents are big (200K) and sometimes only sub-parts are needed by the application: is there a way to query for a sub-document (section), modify it and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can Spring Data be used for the object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
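Following the same pattern, a hedged sketch of two of the operations listed earlier, addLetterSection and deleteLetterSection, using $push and $pull on the sections array (the method names follow the question; they are not an existing API):
public void addLetterSection(String letterId, LetterSection newSection) {
    DBObject sectionRec = (DBObject) mongoTemplate.getConverter().convertToMongoType(newSection);
    Update update = new Update().push("sections", sectionRec); // append after the last section
    mongoTemplate.updateFirst(Query.query(Criteria.where("_id").is(letterId)), update, LetterInstance.class);
}

public void deleteLetterSection(String letterId, String sectionId) {
    Update update = new Update().pull("sections", new BasicDBObject("_id", new ObjectId(sectionId))); // remove the matching section
    mongoTemplate.updateFirst(Query.query(Criteria.where("_id").is(letterId)), update, LetterInstance.class);
}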
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great. For anyone looking for a generic version of his answer, see the code below:
@Service
public class MongoUtils {

    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }

        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }

        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could for example be called using
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
UUID getId();
}
Note: if the nested document's constructor has parameters of primitive types, it is important for the nested document to also have a default (empty) constructor, which may be protected, so that the class can be instantiated with null arguments.
Solution
That's my solution for this problem.
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {

    @Id
    private String _id;

    private String name;
    private String code;

    @Field("desc")
    private String description;

    private String startDate;
    private String endDate;

    @Field("cost")
    private long estimatedCost;

    private List<String> countryList;
    private List<Task> tasks;

    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> UpdateCritTemplChild(String id, String idch, String ownername) {
    Query query = new Query();
    query.addCriteria(Criteria.where("_id").is(id));          // find the parent
    query.addCriteria(Criteria.where("tasks._id").is(idch));  // find the child which will be changed

    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated

    return template
            // findAndModify:
            // find, modify and get the "new object" in a single operation
            .findAndModify(
                    query, update,
                    new FindAndModifyOptions().returnNew(true), ProjectChild.class
            );
}
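An example call might look like this (a sketch; projectService and the two ids are placeholders for wherever the method above lives):
projectService.UpdateCritTemplChild(projectId, taskId, "newOwner")
        .subscribe(updated -> System.out.println("tasks after update: " + updated.getTasks()));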