Update data in MongoDB using Spring (MongoTemplate)

I have a collection (A) in MongoDB which has two fields (String, Integer). I want to update a document by adding some value to the integer field.
For example, let's say I have a document A[field1: ABC, field2: 25]. I want to update it by adding, say, 5 to field2, so it looks like A[field1: ABC, field2: 30] after the update.
The code I have used for this is as follows:
Query query = new Query();
query.addCriteria(Criteria.where("field1").is("ABC"));
BeanName beanName = template.findOne(query, BeanName.class, collectionName);
if (null != beanName) {
    Update update = new Update();
    update.set("field1", "ABC");
    update.set("field2", beanName.getField2() + 5);
    template.updateFirst(query, update, BeanName.class, collectionName);
} else {
    template.save(beanName, collectionName); // the values of field1 and field2 are populated in a bean instance 'beanName'
}
The code is working fine with the expected results, but the performance is very slow. Is there a more efficient way to do this?
I am updating a large amount of data.

I would suggest using the findAndModify() method in combination with the upsert = true option. Please find the official documentation below:
https://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/
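As a minimal sketch with Spring Data MongoDB's MongoTemplate (reusing the BeanName and collectionName from the question), the increment can be pushed to the server with $inc, so the preliminary findOne() round trip is no longer needed:
Query query = new Query(Criteria.where("field1").is("ABC"));
Update update = new Update().inc("field2", 5); // server-side addition, no read required
FindAndModifyOptions options = new FindAndModifyOptions().upsert(true).returnNew(true);
// inserts the document if it does not exist, otherwise increments field2 atomically
BeanName result = template.findAndModify(query, update, options, BeanName.class, collectionName);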

Related

Spring Data Elasticsearch without entity fields

I'm using Spring Data Elasticsearch. My documents do not have any static fields, and the data is accumulated per quarter; I will be getting ~6GB per quarter (we call these versions). Let's say we get 5GB of data in Jan 2021 with 140 columns; in the next version I may get 130 or 120 columns, which we do not know in advance. The end-user requirement is to get the information from the database and show it in a tabular format that can be filtered. In MongoDB we have BasicDBObject; do we have anything like it in Spring Boot for Elasticsearch?
I can provide, let's say, 4-5 columns which are common in every version record; apart from that, I need to retrieve the data without mentioning the column names in the POJO, and I need to use filters on them, just like I can do in MongoDB:
List<BaseClass> getMultiSearch(@RequestBody Map<String, Object>[] attributes) {
    Query orQuery = new Query();
    Criteria orCriteria = new Criteria();
    List<Criteria> orExpression = new ArrayList<>();
    for (Map<String, Object> accounts : attributes) {
        Criteria expression = new Criteria();
        accounts.forEach((key, value) -> expression.and(key).is(value));
        orExpression.add(expression);
    }
    orQuery.addCriteria(orCriteria.orOperator(orExpression.toArray(new Criteria[orExpression.size()])));
    return mongoOperations.find(orQuery, BaseClass.class);
}
You can define an entity class for example like this:
public class GenericEntity extends LinkedHashMap<String, Object> {
}
and have that returned at your call site:
public SearchHits<GenericEntity> allGeneric() {
    var criteria = Criteria.where("fieldname").is("value");
    Query query = new CriteriaQuery(criteria);
    return operations.search(query, GenericEntity.class, IndexCoordinates.of("indexname"));
}
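Since GenericEntity is just a LinkedHashMap, the dynamic columns come back as plain map entries. A small usage sketch (the column name below is hypothetical):
SearchHits<GenericEntity> hits = allGeneric();
hits.forEach(hit -> {
    GenericEntity doc = hit.getContent();        // the document as a map
    Object dynamicValue = doc.get("someColumn"); // any column, no POJO field required
});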
But note: when writing data into Elasticsearch, the mapping for new fields/properties in that index will be updated dynamically, and there is a limit on how many entries a mapping can have (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-settings-limit.html). So take care not to run into that limit.

Neo4j 6 Spring boot 2.4 migrate driver session query

I am trying to migrate to Neo4j 6. What's the equivalent of this method in Neo4j 6?
The Result here contains {ref=Employee...} etc., i.e. the actual Java objects.
// org.neo4j.ogm.session.Session is autowired
@GetMapping("/companies/{companyId}/refs")
public Result getCompanyRefs(@PathVariable final String companyId)
{
    String query = "MATCH (company:Company)-[ref]-(refObj) where company.id=\"" + companyId + "\" RETURN company,ref,refObj";
    return this.session.query(query, Collections.emptyMap());
}
I tried with the new neo4j driver like so:
// org.neo4j.driver.Driver is autowired
@GetMapping("/persons/{personId}/refs")
public Result getPersonRefs(@PathVariable final String personId)
{
    String query = "MATCH (person:Person)-[ref]-(refObj) where person.id=\"" + personId + "\" RETURN person,ref,refObj";
    return this.driver.session().run(query, Collections.emptyMap());
}
but this gives a Result which is not convertible to my @Node (entity) classes. The previous version gave a Result which contained the actual Java objects (mapped to classes).
The result here is: Record<{person: node<7>, ref: relationship<8>, refObj: node<0>}>
Basically the main thing is: I need the nodes mapped to Java objects, but I need them via a Cypher query, because I need to do some things on the (Result) nodes before deleting the relationships between them.
So it turns out it does give back the things I need:
Result result = this.getPersonRefs(id);
result.list().forEach((entry) -> { ... });
The problem was that, for example, neither entry.get("refObj").asObject() nor asNode() actually gave back what I thought it would.
The Solution:
entry.get("refObj").asMap()
This gives back the actual properties of the object. Then you just need to convert it to MyClass.class with an ObjectMapper.
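A rough sketch of that conversion (assuming Jackson's ObjectMapper and the MyClass entity mentioned above; the names are illustrative):
ObjectMapper mapper = new ObjectMapper();
Result result = this.getPersonRefs(id);
result.list().forEach(entry -> {
    Map<String, Object> props = entry.get("refObj").asMap();    // raw node properties
    MyClass refObj = mapper.convertValue(props, MyClass.class); // map -> entity
    // work with refObj here before deleting the relationships
});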

How to query data via Spring data JPA with user defined offset and limit (Range)

Is it possible to fetch data in a user-defined range [int first record - int last record]?
In my case the user will define in the query string in which range he wants to fetch data.
I have tried something like this:
Pageable pageable = new PageRequest(0, 10);
Page<Project> list = projectRepository.findAll(spec, pageable);
Where spec is my defined Specification, but unfortunately this does not help. Maybe I am doing something wrong here.
I have seen other Spring Data JPA methods, but none of them are of much help.
The user can enter something like this: localhost:8080/Section/employee?range={"columnName":"name","from":6,"to":20}
So this says to fetch employee data; it will fetch 15 records (sorted by columnName, which does not matter as of now).
If you can suggest something better, that would be great. If you think I have not provided enough information, please let me know and I will provide it.
Update: I do not want to use native or createQuery statements (unless I have no other option).
Maybe something like this:
Pageable pageable = new PageRequest(0, 10);
Page<Project> list = projectRepository.findAll(spec, new pageable(int startIndex,int endIndex){
// here my logic.
});
If you have better options, you can suggest those as well.
Thanks.
Your approach didn't work because new PageRequest(0, 10) doesn't do what you think. As stated in the docs, the input arguments are page and size, not limit and offset.
As far as I know (and somebody correct me if I'm wrong), there is no out-of-the-box support for what you need in the default Spring Data repositories. But you can create a custom implementation of Pageable that takes limit/offset parameters. Here is a basic example - Spring data Pageable and LIMIT/OFFSET
We can do this with pagination, passing the column name, the value to match, and the row bounds, as below:
@Transactional(readOnly = true)
public List<Employee> queryEmployeeDetails(String columnName, String columnData, int startRecord, int endRecord) {
    // columnName is concatenated into the HQL, so it must be validated/whitelisted by the caller
    Query query = sessionFactory.getCurrentSession()
            .createQuery("from Employee emp where emp." + columnName + " = :columnValue");
    query.setParameter("columnValue", columnData);
    query.setFirstResult(startRecord);              // zero-based offset of the first row
    query.setMaxResults(endRecord - startRecord);   // number of rows in the range
    return (List<Employee>) query.list();
}
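For example, for the range from the question (the column value "John" is just illustrative):
List<Employee> employees = queryEmployeeDetails("name", "John", 6, 20); // 14 rows starting at offset 6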
If I am understanding your problem correctly, you want your repository to allow the user to:
1. Provide criteria for the query (through a Specification)
2. Provide the column to sort by
3. Provide the range of results to retrieve
If my understanding is correct, then:
In order to achieve 1, you can make use of JpaSpecificationExecutor from Spring Data JPA, which allows you to pass in a Specification for the query.
Both 2 and 3 are achievable in JpaSpecificationExecutor through Pageable. Pageable allows you to provide the starting index, number of records, and sorting columns for your query. You will need to implement your own range-based Pageable; PageRequest is a good reference for what you need to implement (or you can extend it, I believe).
So I got this working as one of the answers suggested: I implemented my own Pageable and overrode getPageSize(), getOffset(), and getSort(); that's it (in my case I did not need more).
public class Range implements Pageable {

    private final int startIndex;
    private final int endIndex;
    private final String sortBy;

    public Range(int startIndex, int endIndex, String sortBy) {
        this.startIndex = startIndex;
        this.endIndex = endIndex;
        this.sortBy = sortBy;
    }

    @Override
    public int getPageSize() {
        if (endIndex == 0)
            return 0;
        return endIndex - startIndex;
    }

    @Override
    public int getOffset() {
        return startIndex;
    }

    @Override
    public Sort getSort() {
        if (sortBy != null && !sortBy.equalsIgnoreCase(""))
            return new Sort(Direction.ASC, sortBy);
        return new Sort(Direction.ASC, "id");
    }

    // the remaining Pageable methods (getPageNumber(), next(), previousOrFirst(), first(), hasPrevious())
    // still need implementations; they were not required for this use case
}
where startIndex and endIndex are the indexes of the first and last record.
To use it:
repository.findAll(spec, new Range(0, 20, "id"));
There is no offset parameter you can simply pass. However, there is a very simple solution for this:
int pageNumber = offset / limit; // integer division; this assumes offset is a multiple of limit
PageRequest pReq = PageRequest.of(pageNumber, limit);
The client just has to keep track of the offset instead of the page number. By this I mean your controller would receive the offset instead of the page number.
Hope this helps!
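A hypothetical controller sketch of that idea (the endpoint and repository names are made up):
@GetMapping("/employees")
public Page<Employee> list(@RequestParam int offset, @RequestParam int limit) {
    int pageNumber = offset / limit; // assumes offset is a multiple of limit
    return employeeRepository.findAll(PageRequest.of(pageNumber, limit));
}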

saving & updating full json document with Spring data MongoTemplate

I'm using Spring Data MongoTemplate to manage Mongo operations. I'm trying to save & update full JSON documents (using String.class in Java).
Example:
String content = "{\"MyId\":\"1\",\"code\":\"UG\",\"variables\":[1,2,3,4,5]}";
String updatedContent = "{\"MyId\":\"1\",\"code\":\"XX\",\"variables\":[6,7,8,9,10]}";
I know that I can update code & variables independently using:
Query query = new Query(where("MyId").is("1"));
Update update1 = new Update().set("code", "XX");
getMongoTemplate().upsert(query, update1, collectionId);
Update update2 = new Update().set("variables", "[6,7,8,9,10]");
getMongoTemplate().upsert(query, update2, collectionId);
But due to our application architecture, it would be more useful for us to replace the full document directly. As I understand it:
getMongoTemplate().save(content, collectionId);
getMongoTemplate().save(updatedContent, collectionId);
implements save-or-update functionality, but this creates two documents and does not update anything.
Am I missing something? Is there another approach? Thanks
You can use the following code:
Query query = new Query();
query.addCriteria(Criteria.where("MyId").is("1"));
Update update = new Update();
// json is the parsed document, e.g. an org.json.JSONObject built from updatedContent
Iterator<String> iterator = json.keys();
while (iterator.hasNext()) {
    String key = iterator.next();
    if (!key.equals("MyId")) {
        Object value = json.get(key);
        update.set(key, value);
    }
}
mongoTemplate.updateFirst(query, update, entityClass);
There may be other ways to get the key set from the JSON; use whichever is convenient for you.
You can also use BasicDBObject to get the key set; you can get a BasicDBObject using mongoTemplate.getConverter().
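As a rough sketch of that idea (reusing the updatedContent string and collectionId from the question), the raw JSON can also be parsed with the driver's org.bson.Document, which is itself a Map:
Document doc = Document.parse(updatedContent); // parse the raw JSON string
Query query = new Query(Criteria.where("MyId").is(doc.get("MyId")));
Update update = new Update();
for (Map.Entry<String, Object> entry : doc.entrySet()) {
    if (!entry.getKey().equals("MyId")) {
        update.set(entry.getKey(), entry.getValue()); // copy every field except the id
    }
}
getMongoTemplate().upsert(query, update, collectionId); // updates if present, inserts otherwise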

Elasticsearch and Spark: Updating existing entities

What is the correct way, when using Elasticsearch with Spark, to update existing entities?
I wanted to do something like the following:
Get existing data as a map.
Create a new map, and populate it with the updated fields.
Persist the new map.
However, there are several issues:
The list of returned fields cannot contain the _id, as it is not part of the source.
If, for testing, I hardcode an existing _id in the map of new values, the following exception is thrown:
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest
How should the _id be retrieved, and how should it be passed back to Spark?
I include the following code below to better illustrate what I was trying to do:
JavaRDD<Map<String, Object>> esRDD = JavaEsSpark.esRDD(jsc, INDEX_NAME + "/" + TYPE_NAME,
        "?source=field1,field2").values();
Iterator<Map<String, Object>> iter = esRDD.toLocalIterator();
List<Map<String, Object>> listToPersist = new ArrayList<Map<String, Object>>();
while(iter.hasNext()){
Map<String, Object> map = iter.next();
// Get existing values, and do transformation logic
Map<String, Object> newMap = new HashMap<String, Object>();
newMap.put("_id", ??????);
newMap.put("field1", new_value);
listToPersist.add(newMap);
}
JavaRDD javaRDD = jsc.parallelize(ImmutableList.copyOf(listToPersist));
JavaEsSpark.saveToEs(javaRDD, INDEX_NAME+"/"+TYPE_NAME);
Ideally, I would want to update the existing map in place, rather than create a new one.
Does anyone have any example code to show, when using Spark, the correct way to update existing entities in elasticsearch?
Thanks
This is how I've done it (Scala/Spark 2.3/Elastic-Hadoop v6.5).
To read (id or other metadata):
spark
  .read
  .format("org.elasticsearch.spark.sql")
  .option("es.read.metadata", true) // allow reading metadata
  .load("yourindex/yourtype")
  .select(col("_metadata._id").as("myId"), ...)
To update particular columns in ES:
myDataFrame
  .select("myId", "columnToUpdate")
  .saveToEs(
    "yourindex/yourtype",
    Map(
      "es.mapping.id" -> "myId",
      "es.write.operation" -> "update", // important: change the operation to partial update
      "es.mapping.exclude" -> "myId"
    )
  )
Try adding this upsert option to your Spark config:
.config("es.write.operation", "upsert")
That will let you add new fields to existing documents.
According to the Elasticsearch configuration docs, you can get document metadata like _id by setting the read metadata option to true:
.config("es.read.metadata", "true")
And I think you cannot use '_id' as a field name.
But you can create a new field with a different name, like:
newMap.put("idfield", yourId);
Then set the name of the new field as the value of the mapping id option to tell Elasticsearch that this field holds the document id:
.config("es.mapping.id", "idfield")
BTW, don't forget to set the write operation to update:
.config("es.write.operation", "update")
