Solr partial update is not working - Spring

I'm trying to get Solr partial updates working with Spring Data:
SolrTemplate solr; // autowired

public void updateEntry(final Entry entry) {
    PartialUpdate update = new PartialUpdate("id", entry.getId());
    update.add("foo_field", "hohoho");
    solr.saveBean(update);
    solr.commit();
}
However, nothing happens to the document with that id: the field doesn't change. Deletion and creation work fine. Any ideas would be appreciated.

I'm doing partial updates in my current project with Spring Data Solr and it's working fine. Here's the code I have:
PartialUpdate partialUpdate = new PartialUpdate("id", "yourId");
partialUpdate.setValueOfField("fieldToUpdate", "value");
solrTemplate.saveBean(partialUpdate);
solrTemplate.commit();
Be sure to have the proper solrconfig for your core (see https://wiki.apache.org/solr/Atomic_Updates):
you will need to enable the updateLog in your solrconfig.xml, and define the _version_ field in your schema.xml.
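For reference, here is a minimal sketch of those two pieces of configuration, assuming a standard Solr 4.x core layout (element names are taken from the Atomic Updates wiki page above; adjust the update-log directory to your setup):

<!-- solrconfig.xml: enable the transaction log inside the update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>

<!-- schema.xml: the _version_ field required by atomic updates -->
<field name="_version_" type="long" indexed="true" stored="true"/>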

Related

Java MongoDb driver serialization works in updateOne but not when I call toJson

I am updating a record in my MongoDB database.
The update is performed in a wrapper for a number of reasons; it goes like:
Contact contact = generateContact();
UpdateResult updateResult = performUpdateOne("collection",
        new Document("field", fieldValue),
        new Document("$push", new Document("contacts", contact)),
        updateOptions);
Please keep in mind the new Document("$push", new Document("contacts", contact)) parameter, because it is the only MongoDB document that contains a reference to a POJO; this parameter is used in the method performUpdateOne:
private UpdateResult performUpdateOne(String collectionName, Document filter, Document set, UpdateOptions updateOptions) {
    ...
    ...
    LOG.debugf("Performing MongoDb UpdateOne command, collection[%s] query[%s] update[%s].",
            collectionName, filter.toJson(), set.toJson());
    UpdateResult updateResult = collection.updateOne(filter, set, updateOptions);
The set.toJson() call gives me the exception:
org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class ...Contact.
    at org.bson.internal.CodecCache.lambda$getOrThrow$1(CodecCache.java:52)
If I comment out the LOG.debugf line, the collection.updateOne(filter, set, updateOptions) call gives me no problem.
Why is the set document serialized correctly in the updateOne method while giving an error when I call .toJson() on it?
Thanks a lot
I am using the MongoDB sync Java driver 4.3.4.
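For what it's worth, the asymmetry is explainable: collection.updateOne encodes the document through the collection's codec registry, which evidently has a codec registered for Contact, whereas set.toJson() with no arguments falls back to the driver's default Document codec, which knows nothing about your POJOs. Below is a hedged sketch of logging with an explicit encoder instead; the PojoCodecProvider.automatic(true) registration is an assumption about how Contact is handled on your collection:

import com.mongodb.MongoClientSettings;
import org.bson.Document;
import org.bson.codecs.configuration.CodecRegistries;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;

// Combine the default codecs with a POJO codec provider so that nested
// POJO values (like Contact) can be encoded during toJson().
CodecRegistry pojoAwareRegistry = CodecRegistries.fromRegistries(
        MongoClientSettings.getDefaultCodecRegistry(),
        CodecRegistries.fromProviders(PojoCodecProvider.builder().automatic(true).build()));

// Hand toJson() an encoder built from that registry instead of the default one.
LOG.debugf("Performing MongoDb UpdateOne command, collection[%s] query[%s] update[%s].",
        collectionName, filter.toJson(), set.toJson(pojoAwareRegistry.get(Document.class)));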

Update builder gives a late response when multiple versions exist in Elasticsearch

Project: Spring Boot
I'm updating my Elasticsearch document the following way:
@Override
public Document update(DocumentDTO document) {
    try {
        Document doc = documentMapper.documentDTOToDocument(document);
        Optional<Document> fetchDocument = documentRepository.findById(document.getId());
        if (fetchDocument.isPresent()) {
            fetchDocument.get().setTag(doc.getTag());
            Document result = documentRepository.save(fetchDocument.get());
            final UpdateRequest updateRequest = new UpdateRequest(Constants.INDEX_NAME,
                    Constants.INDEX_TYPE, document.getId().toString());
            updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
            updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
            UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
            log.info("ES result : " + updateResponse.status());
            return result;
        }
    } catch (Exception ex) {
        log.info(ex.getMessage());
    }
    return null;
}
Using this, my document updates successfully and its version increments, but once the version goes past 20 or so it takes a long time to retrieve the data (around 14 seconds).
I'm still confused about how versioning works. What happens in the update and delete scenarios? At search time, does it process all the versions of the data and return the latest one? Is that so?
Elasticsearch internally uses Lucene, which stores data in immutable segments. Because these segments are immutable, every update in Elasticsearch internally marks the old document as deleted (a soft delete) and inserts a new document (with a new version).
The old documents are cleaned up later during a background segment-merge process.
A newly updated document should be available within 1 second (the default refresh interval), but that interval can be disabled or changed, so please check this setting on your index. I can see you are using the WAIT_UNTIL refresh policy in your code; please remove it and you should see updated documents quickly, assuming you have not changed the default refresh_interval.
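As a concrete illustration, here is a hedged sketch of the update call from the question with that blocking refresh policy dropped (all names are taken from the question's own code):

// Same update as in the question, but without
// setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL): the call returns as
// soon as the update is applied, and the new version becomes searchable after
// the index's refresh_interval (1 second by default).
final UpdateRequest updateRequest = new UpdateRequest(Constants.INDEX_NAME,
        Constants.INDEX_TYPE, document.getId().toString());
updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
log.info("ES result : " + updateResponse.status());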
Note: update and delete operations work similarly here; the only difference is that a delete does not create a new document: the old document is marked as soft-deleted and is permanently removed later, during a segment merge.

scan and scroll in spring data elasticsearch 3

I am trying to migrate from ES 2.2.1 with Spring Data Elasticsearch 2 to ES 5.6.1 with Spring Data Elasticsearch 3, but when I try to use the scan/scroll methods for a large dataset I can't access them; it looks like the ElasticsearchTemplate class no longer has these methods in the newer version. Can you please let me know what the substitute for scan and scroll is in Spring Data Elasticsearch 3.x?
ElasticsearchTemplate rep = null;
String scrollId = rep.scan(searchQuery, (long) 25000, false);
// List<SampleEntity> sampleEntities = new ArrayList<SampleEntity>();
boolean hasRecords = true;
You can now use the startScroll method instead of scan() and scroll().
It's not mentioned in the current docs.
Here is an example: ElasticsearchTemplate retrieve big data sets
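Below is a hedged sketch of that startScroll/continueScroll loop, assuming the SampleEntity mapped class from the question's commented-out line and the 3.0.x ElasticsearchTemplate signatures (startScroll is declared to return Page, hence the casts to ScrolledPage):

public List<SampleEntity> fetchAll(ElasticsearchTemplate template) {
    long scrollTimeInMillis = 25000; // keep the scroll context alive between calls
    SearchQuery searchQuery = new NativeSearchQueryBuilder()
            .withQuery(QueryBuilders.matchAllQuery())
            .withPageable(PageRequest.of(0, 500))
            .build();

    List<SampleEntity> sampleEntities = new ArrayList<>();
    ScrolledPage<SampleEntity> page = (ScrolledPage<SampleEntity>)
            template.startScroll(scrollTimeInMillis, searchQuery, SampleEntity.class);
    while (page.hasContent()) {
        sampleEntities.addAll(page.getContent());
        page = (ScrolledPage<SampleEntity>)
                template.continueScroll(page.getScrollId(), scrollTimeInMillis, SampleEntity.class);
    }
    template.clearScroll(page.getScrollId());
    return sampleEntities;
}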

Breeze entity state doesn't change after saving

My application uses BreezeJS, ASP.NET Web API and EF.
I'm trying to save an object using breeze, as follows:
var saveOptions = this.manager.saveOptions.using({
    resourceName: "SaveLocationSettings",
    tag: clientId,
    allowConcurrentSaves: true
});
var obj = self.manager.saveChanges(null, saveOptions).then(saveSucceeded, saveFailed);
I'm using a custom save method on the server side, which returns a SaveResult object. However, on the client side, the entity manager still maintains the modified state.
My controller on the Web API is a BreezeController.
According to the Breeze documentation, if your custom method has a signature similar to the Breeze SaveChanges() method, it should work like SaveChanges(). Indeed, if I use the standard Breeze SaveChanges(), the entity state gets updated properly, but my custom save endpoint does not update the entity state, even though the data is saved in the database.
UPDATE:
After some investigation, I figured that this happens only with one entity type that goes to this particular save endpoint. Say, I have a 'location' object, with a collection of 'availability' associated with it, as follows:
public class Location
{
    public Location()
    {
        this.Availabilities = new HashSet<Availability>();
    }

    // Collection navigation property implied by the constructor above
    public virtual ICollection<Availability> Availabilities { get; set; }
}
Now, from the client side, if I only change some property of the Location object itself, the hasChanges property is handled correctly. But if I change only an Availability, or an Availability along with another property of the Location, then hasChanges is not updated properly on the client side.
This is my server side code that's called from the WebAPI controller:
public SaveResult SaveLocation(Location l, List<MaxAvailability> maxAvailability, int changedBy)
{
    // Create a SaveResult object
    // we need to return a SaveResult object for breeze
    var keyMappings = new List<KeyMapping>();
    var entities = new List<object> { l, maxAvailability };
    var saveResult = new SaveResult() { Entities = entities, KeyMappings = keyMappings, Errors = null };
    try
    {
        if (l.LocationId == -1)
        {
            // add new location
            l.LocationId = this.AddNewLocationWithItsAssociatedData(l, maxAvailability, changedBy);
        }
        else
        {
            // do changes to the existing location
            this.UpdateExistingLocationWithItsAssociatedData(l, maxAvailability, changedBy);
        }
    }
    catch (DbEntityValidationException ex)
    {
        // Log the error and add the errors list to SaveResult.
        // Retrieve the error messages as a list of strings.
        saveResult.Errors = this.GetErrors(ex);
    }
    return saveResult;
}
I think I figured out the answer. It was due to a bad practice in my code: when modifying the availability of an existing location, instead of updating the existing record, I was deleting it and adding a new one. This caused the client-side availability object and the database object to have two different states. Once that was resolved, the hasChanges() state behaved as expected.

How to provide multiple mapping managers for multiple Solr cores in SolrNet

I have configured 2 Solr cores and am trying to map them to 2 different classes through SolrNet. I'm currently using Ninject, but I'm willing to change to, say, Windsor if this isn't possible in Ninject. I'm trying to use the AllPropertiesMappingManager for mapping, but since I need to set 2 different unique keys for the 2 different cores, I don't know how to do that with AllPropertiesMappingManager.
Currently, without using the mapping manager, I'm getting the error: Document is missing mandatory uniqueKey field: TranscriptId
EDIT: Error disappears after using attribute based mapping
var solrServers = new SolrServers {
    new SolrServerElement {
        Id = "markup",
        Url = solrMarkupUrl,
        DocumentType = typeof(SolrMarkup).AssemblyQualifiedName,
    },
    new SolrServerElement {
        Id = "transcript",
        Url = solrTranscriptUrl,
        DocumentType = typeof(SolrTranscript).AssemblyQualifiedName,
    }
};
kernel = new StandardKernel();
kernel.Load(new SolrNetModule(solrServers));
SolrMarkupCore = kernel.Get<ISolrOperations<SolrMarkup>>("markup");
SolrTranscriptCore = kernel.Get<ISolrOperations<SolrTranscript>>("transcript");
You can look at the SolrNet unit tests for Ninject with multiple cores (NinjectMultiCoreFixtures.cs) for a working example.
Also, if you are not using the mapping manager, are you using attribute-based mapping to get things set up? You will still need to set up the mapping for your SolrMarkup and SolrTranscript classes for things to work properly.