Is there a way to persist a field in Elasticsearch but not in mongodb? - spring

I am using both Elasticsearch and MongoDB in my project. I am looking for functionality similar to @Transient, in the sense that the field should not be persisted in MongoDB, but can be in Elasticsearch.
public class Pojo {
    @PersistInBothESAndMongo
    String field1;
    @PersistOnlyInES
    String field2;
}

Related

How to make a custom converter for a String field using Spring Data Elasticsearch?

I am using Spring Data Elasticsearch in my project. I have a class like this:
@Document(indexName = "example")
class Example {
    @Id
    private String id;
    private String name;
    private String game;
    private String restField;
}
What I need is: whenever I save an Example object to Elasticsearch, a marker value is removed; and whenever I read the data back from Elasticsearch, that removed value has to be appended again.
void save(Example example) {
    example.setRestField(example.getRestField().replace("***", ""));
    exampleRepository.save(example);
}

Example get(String id) {
    Example example = exampleRepository.findById(id).orElseThrow();
    // concat() returns a new String, so the result must be assigned back
    example.setRestField(example.getRestField().concat("***"));
    return example;
}
Right now I am doing it the above way. But can we use a custom converter for this? I checked the converter examples for Spring Data Elasticsearch, but those convert whole objects of different types. How can I create a custom converter only for this particular String field restField? I don't want the converter applied to other String fields.
Currently there is no better solution. The converters registered with Spring Data Elasticsearch convert from a class to a String and back; a converter registered for your case would be applied to every String property of every entity. I had thought about per-property custom converters before, so I have created a ticket for this.
Edit 05.11.2021:
Implemented with https://github.com/spring-projects/spring-data-elasticsearch/pull/1953 and will be available from 4.3.RC1 on.
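The feature referenced above attaches a converter to a single property via an annotation on the field. Below is a plain-Java sketch of the converter logic only; the attachment mechanism and the exact interface (`PropertyValueConverter` with `write`/`read`, wired via `@ValueConverter`) should be checked against the 4.3 documentation, so treat the names here as assumptions:

```java
// Sketch of a per-property converter for restField. In the real code this
// would implement Spring Data Elasticsearch's PropertyValueConverter and be
// attached to the single field it should apply to, e.g.:
//   @ValueConverter(RestFieldConverter.class)
//   private String restField;
class RestFieldConverter {
    // Applied when the entity is written to Elasticsearch: drop the marker.
    Object write(Object value) {
        return value.toString().replace("***", "");
    }
    // Applied when the entity is read back: re-append the marker.
    Object read(Object value) {
        return value.toString().concat("***");
    }
}
```

Because the converter is bound to one property, other String fields are left untouched, which is exactly what the question asks for.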

Recreating Elastic Index with different Field Type

I'm new to ES and currently attempting to use spring-data-elasticsearch 3.2.1.RELEASE in my service.
The design is still in an early phase, so I've had to change/update fields in the Elasticsearch document class, which is annotated with @Document.
It looks somewhat like:
@Document(...)
public class MyDocument {
    @Id
    private String id;
    ...
    @Field(type = FieldType.Text, name = "myField")
    private String myField;
}
I had to change this field to an object, for which I simply changed the Java type and the @Field type attribute to Object:
@Document(...)
public class MyDocument {
    @Id
    private String id;
    ...
    @Field(type = FieldType.Object, name = "myField")
    private Object myField;
}
I deleted all documents from the index on the cluster and attempted to save a document with the new field type, but it looks like it's still giving errors because the previous mapping type was text:
org.springframework.data.elasticsearch.ElasticsearchException: Bulk indexing has failures.
Use ElasticsearchException.getFailedDocuments() for detailed messages
[
{
XYZ=ElasticsearchException[
Elasticsearch exception [
type=mapper_parsing_exception,
reason=failed to parse field [myField] of type [text] in document with id 'XYZ']
];
nested: ElasticsearchException[
Elasticsearch exception [
type=illegal_state_exception,
reason=Can't get text on a START_OBJECT at 1:296 ]
];
}
]
I'm pretty sure changing field types like this is not best practice, but I have tried the same change with a different index name and that worked.
As another attempt, deleting this particular index manually and letting Spring Data Elasticsearch create it during bulk indexing does not help; I see the same error.
Could it be because I have more (non-local) instances connected to Elasticsearch, even though they are not doing any operations on this index at the moment?
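Deleting the documents does not delete the index mapping, and an existing mapping cannot be changed from text to object in place; the index itself has to be dropped and recreated with the new mapping before any instance writes to it again. A sketch of that sequence against a minimal local interface (the real calls would go through ElasticsearchOperations in 3.2.x; the method names here are assumptions modeled on that API):

```java
// Minimal stand-in for the index-management part of ElasticsearchOperations.
interface IndexOps {
    boolean deleteIndex(Class<?> clazz);
    boolean createIndex(Class<?> clazz);
    boolean putMapping(Class<?> clazz);
}

class Reindexer {
    // Order matters: the old mapping only disappears with the index itself,
    // and the new mapping should be written before any document is indexed
    // (otherwise dynamic mapping may reintroduce a conflicting type).
    static void rebuild(IndexOps ops, Class<?> documentClass) {
        ops.deleteIndex(documentClass);   // drops index AND its old mapping
        ops.createIndex(documentClass);   // recreate from @Document settings
        ops.putMapping(documentClass);    // write the new @Field mapping explicitly
    }
}
```

If other service instances keep bulk-indexing while this runs, their first write can recreate the index with a dynamically inferred mapping, which would explain seeing the same error after a manual delete.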

Spring Boot and Mongo - findById only works with ObjectID and not Id of String

I have a Spring Boot GraphQL service which reads from Mongo.
I've noticed that if my MongoDB document's _id is an ObjectId, e.g. "_id": ObjectId("5e5605150"), I'm able to get results from myRepository.findById(myId).
However, if that id is just a string, like "_id": "5e5605150", then findById returns nothing.
The repository looks like this:
@Repository
interface MyRepository : MongoRepository<Song, String>
And that Song looks like this:
@Document(collection = Song.COLLECTION)
data class Song(
    @Id
    val id: String,
    val title: String
) {
    companion object {
        const val COLLECTION: String = "song"
    }
}
Any ideas?
Thanks.
See the Spring Data MongoDB reference on mapping - the _id field is treated specially. If it's a String or BigInteger type, it will automatically be converted to/from an ObjectId. MongoDB supports plain strings for the special _id field though, so if you're inserting documents that don't go through Spring Data (or inserting documents as a raw JSON string for the MongoDB driver to insert) then MongoDB won't automatically convert strings to ObjectId - it will store them as strings.
So, if you insert the document {"_id": "5e5605150"} into your DB, the Spring Data MongoDB mapper won't be able to find it, because it will be searching by ObjectId only.
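The conversion only kicks in when the string actually is a valid ObjectId, i.e. 24 hexadecimal characters (this is the rule the MongoDB driver's `ObjectId.isValid` applies; the helper below is a local stand-in for illustration):

```java
class ObjectIdCheck {
    // A string is convertible to an ObjectId only if it is exactly
    // 24 hex characters (the driver's ObjectId.isValid rule).
    static boolean looksLikeObjectId(String s) {
        return s != null && s.matches("[0-9a-fA-F]{24}");
    }
}
```

A short id like "5e5605150" fails this check, so a query built for an ObjectId _id will never match a document whose _id was stored as that plain string.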

Mapping a MongoDB document's _id with Vert.x

I'm using Vert.x MongoDB client as well as Vert.x CodeGen (https://github.com/vert-x3/vertx-codegen).
This is an example of my document:
@DataObject
public class Instrument {
    public static final String DB_TABLE = "instruments";
    private String id;
    private String instrumentId;
    private String name;
}
Since the model of my document (or entity) is annotated with @DataObject, how can I tell Vert.x which field I want to use as Mongo's _id?
When using Spring (Data), one can annotate an entity with @Document and mark the field to be used as Mongo's _id with @Id. Is this possible in Vert.x?
If not, is there any workaround?
Thanks!
For MongoDB you can use the Vert.x Mongo client (http://vertx.io/docs/vertx-mongo-client/java), together with io.vertx.core.json.Json (http://vertx.io/docs/apidocs/io/vertx/core/json/Json.html) to read/write a JsonObject or String.
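The Vert.x Mongo client has no @Id-style annotation: whatever sits under the "_id" key of the JsonObject you pass to save() is used as the document id. So the usual workaround is to do the renaming yourself in the data object's toJson/fromJson methods. A stdlib sketch of that mapping (a Map stands in for io.vertx.core.json.JsonObject; the field names come from the Instrument class above):

```java
import java.util.HashMap;
import java.util.Map;

class InstrumentJson {
    // Write side: expose the POJO's id under the "_id" key Mongo expects.
    static Map<String, Object> toJson(String id, String name) {
        Map<String, Object> json = new HashMap<>();
        json.put("_id", id);      // becomes the Mongo document id
        json.put("name", name);
        return json;
    }

    // Read side: map the stored "_id" back onto the POJO's id field.
    static String idFrom(Map<String, Object> json) {
        return (String) json.get("_id");
    }
}
```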

How to implement autocomplete using spring data solr

I am using spring data solr for my application and trying to implement autocomplete feature.
@SolrDocument
public class UserEntity {
    @Field
    private String id;
    @Field
    private String fullName;
    @Field
    private Date dob;
    @Field
    private String address;
    @Field
    private String phoneNumber;
}
I managed to implement autocomplete for single fields using Spring Data Solr Criteria.
But this has a drawback, because it returns a list of UserEntity objects; I have to iterate over the list and extract the names.
1. Is there any way to get only the matching names using Spring Data Solr?
2. If I need to implement autocomplete across multiple fields (name and address together), is that possible with Spring Data Solr?
Please have a look at the Solr example using facets for autocomplete.
You may also use a terms vector, which requires SolrTemplate.execute(SolrCallback) to access the native Solr API.
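To illustrate why a facet (or terms) based approach answers both questions: the engine returns just the matching terms across the faceted fields, not whole entities. The snippet below is only a plain-Java model of that result shape (a facet.prefix query over name and address), not the Solr or Spring Data Solr API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class AutocompleteModel {
    // Models facet.prefix over two fields: collect the distinct terms that
    // start with the typed prefix, without returning whole entities.
    static List<String> suggest(List<String[]> nameAddressRows, String prefix) {
        String p = prefix.toLowerCase();
        return nameAddressRows.stream()
                .flatMap(Arrays::stream)           // flatten name + address terms
                .filter(t -> t.toLowerCase().startsWith(p))
                .distinct()
                .sorted()
                .collect(Collectors.toList());
    }
}
```

With facets the Solr server does this aggregation itself, so the client never has to iterate over UserEntity results.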
