How to implement autocomplete using Spring Data Solr

I am using Spring Data Solr for my application and am trying to implement an autocomplete feature.
@SolrDocument
public class UserEntity {
    @Field
    private String id;
    @Field
    private String fullName;
    @Field
    private Date dob;
    @Field
    private String address;
    @Field
    private String phoneNumber;
}
I managed to implement autocomplete for single fields by using the Spring Data Solr Criteria API. But this approach has a drawback: it returns a list of UserEntity objects, so I have to iterate over the list to extract the names.
1. Is there any way to get only the matching names using Spring Data Solr?
2. If I need to implement autocomplete across multiple fields (name and address together), is it possible with Spring Data Solr?

Please have a look at the Solr example here using facets for autocomplete.
You may also use a terms vector, which requires you to use SolrTemplate.execute(SolrCallback) to access the native Solr API.
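As a rough sketch of the facet-based approach (assuming an injected SolrTemplate and the fullName field from the question; the collection name "users" and the limit of 10 are assumptions):

```java
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.data.solr.core.SolrTemplate;
import org.springframework.data.solr.core.query.Criteria;
import org.springframework.data.solr.core.query.FacetOptions;
import org.springframework.data.solr.core.query.FacetQuery;
import org.springframework.data.solr.core.query.SimpleFacetQuery;
import org.springframework.data.solr.core.query.result.FacetFieldEntry;
import org.springframework.data.solr.core.query.result.FacetPage;

public class AutocompleteService {

    private final SolrTemplate solrTemplate;

    public AutocompleteService(SolrTemplate solrTemplate) {
        this.solrTemplate = solrTemplate;
    }

    // Returns only the matching names, not full UserEntity documents:
    // facet on fullName, restricted to values starting with the typed prefix.
    public List<String> suggestNames(String prefix) {
        FacetQuery query = new SimpleFacetQuery(
                new Criteria(Criteria.WILDCARD).expression(Criteria.WILDCARD))
                .setFacetOptions(new FacetOptions("fullName")
                        .setFacetPrefix(prefix)
                        .setFacetLimit(10));
        query.setRows(0); // only the facet counts are needed, not documents

        FacetPage<UserEntity> page =
                solrTemplate.queryForFacetPage("users", query, UserEntity.class);
        return page.getFacetResultPage("fullName").getContent().stream()
                .map(FacetFieldEntry::getValue)
                .collect(Collectors.toList());
    }
}
```

Because faceting returns field values rather than documents, this also answers question 1; for question 2, faceting on multiple fields (e.g. fullName and address) can be requested in the same FacetOptions.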

Related

Not able to search data in a Redis cache using a Spring CRUD repository by passing a list of values for a property of the model saved in the cache

We have a model class saved in Redis, as shown below:
@Data
@NoArgsConstructor
@AllArgsConstructor
@RedisHash("book")
public class Book implements Serializable {
    private static final long serialVersionUID = 2208852329346517265L;
    @Id
    private Integer bookID;
    @Indexed
    private String title;
    @Indexed
    private String authors;
    private String averageRating;
    private String isbn;
    private String languageCode;
    private String ratingsCount;
    private BigDecimal price;
}
We have title and authors as our indexed properties.
Now we want to search all the records in Redis by passing a title and a list of authors through the Spring CRUD repository, as shown below.
public interface BookSpringRepository extends CrudRepository<Book, String> {
    List<Book> findAllByTitleAndAuthors(String title, List<String> authors);
}
Service layer:
@Override
public Optional<List<Book>> searchBooksByTitleAndAuthorNames(String title, List<String> authorNames) {
    return Optional.ofNullable(bookSpringRepository.findAllByTitleAndAuthors(title, authorNames));
}
Here we are getting the exception below. We are unable to fetch data from the Spring Data Redis cache using a List of Integer or String:
"Resolved [org.springframework.core.convert.ConversionFailedException: Failed to convert from type [java.lang.String] to type [byte] for value 'Ronak'; nested exception is java.lang.NumberFormatException: For input string: "Ronak"]."
We would not want to convert the list of strings/integers to bytes, as that is a time-consuming process (it took a long time when we tried), and when the results are retrieved we would again have to convert them back to plain integer or string values.
The other option is to loop through the list and pass a single value at a time to the Redis CRUD repository. The repository is happy with that, but it means a loop of calls to Redis and network latency on each.
We cannot put an @Id attribute on the authors property, as these can be duplicate records.
Does the Spring CRUD repository support LIKE queries? That way we could derive a unique ID containing these author names, put the @Id annotation on that derived property, and search the records through the repository with a LIKE or contains-style query.
Any suggestions here are highly appreciated!
Try adding explicit serialization for your Redis key and value. This might help:
https://medium.com/@betul5634/redis-serialization-with-spring-redis-data-lettuce-codec-1a1d2bc73d26
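As a minimal configuration sketch of that idea (the bean name and the choice of GenericJackson2JsonRedisSerializer are assumptions; adjust to your project): keys are stored as plain strings and values as JSON, instead of JDK-serialized bytes.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        // String keys, JSON values - avoids the byte[] conversion errors
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
```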

How to make a custom converter for a String field using Spring Data Elasticsearch?

I am using Spring Data Elasticsearch in my project. I have a class like this:
@Document(indexName = "example")
class Example {
    @Id
    private String id;
    private String name;
    private String game;
    private String restField;
}
What I need is: whenever I save an Example object to Elasticsearch, I remove a value; but when I get the data back from Elasticsearch, I need that removed value appended again.
void save(Example example) {
    example.setRestField(example.getRestField().replace("***", ""));
    exampleRepository.save(example);
}

Example get(String id) {
    Example example = exampleRepository.findById(id).orElseThrow();
    // String.concat returns a new string, so the result must be set back
    example.setRestField(example.getRestField().concat("***"));
    return example;
}
Right now I am doing it this way, but can a custom converter be used for this? I checked the converter examples for Spring Data Elasticsearch, but those are for whole objects of various types. How can I create a custom converter only for this particular String field, restField? I don't want to apply the converter to other String fields.
Currently there is no better solution. The converters registered by Spring Data Elasticsearch convert from a class to a String and back; registering a converter for your case would affect every String property of every entity.
I had thought about custom converters for single properties before, and I have created a ticket for this.
Edit 05.11.2021:
Implemented with https://github.com/spring-projects/spring-data-elasticsearch/pull/1953 and will be available from 4.3.RC1 on.
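With that change, a per-property converter can be declared roughly like this (a sketch based on the @ValueConverter / PropertyValueConverter API introduced by that PR; verify the names against the 4.3 reference documentation):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.ValueConverter;
import org.springframework.data.elasticsearch.core.mapping.PropertyValueConverter;

@Document(indexName = "example")
class Example {
    @Id
    private String id;
    private String name;
    private String game;

    // the converter applies to this one property only
    @ValueConverter(RestFieldConverter.class)
    private String restField;
}

class RestFieldConverter implements PropertyValueConverter {

    @Override
    public Object write(Object value) { // entity -> Elasticsearch
        return value.toString().replace("***", "");
    }

    @Override
    public Object read(Object value) {  // Elasticsearch -> entity
        return value.toString() + "***";
    }
}
```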

How to use generic annotations like @Transient in an entity shared between Mongo and Elasticsearch in Spring?

I am using Spring Boot and sharing the same entity between an Elasticsearch database and a MongoDB database. The entity is declared this way:
@Document
@org.springframework.data.elasticsearch.annotations.Document(indexName = "...", type = "...", createIndex = true)
public class ProcedureStep {
    ...
}
Here @Document is from the package org.springframework.data.mongodb.core.mapping.
This works without any issue, but I am not able to use generic annotations to target Elasticsearch only. For example:
@Transient
private List<Point3d> c1s, c2s, c3s, c4s;
This excludes the field from both databases, Mongo and Elastic, whereas my intent was to apply it to Elasticsearch only.
I have no issue using Elastic-specific annotations like this:
@Field(type = FieldType.Keyword)
private String studyDescription;
My question is: what annotation can I use to exclude a field from Elasticsearch only and keep it in Mongo?
I don't want to rewrite the class, as I don't have a "flat" structure to store (the main class is composed of fields from other classes, which themselves have fields I want to exclude from Elastic).
Many thanks
Assumption: ObjectMapper is used for Serialization/Deserialization
Please understand this is a problem of selective serialization.
It can be achieved using JsonViews.
Example:
Step 1: define two views, one ES-specific and one Mongo-specific.
class Views {
    public static class MONGO {}
    public static class ES {}
}
Step 2: annotate the fields as below (descriptions in the comments):
@Data
class Product {
    private int id; // <======= serialized in both the DB and ES contexts
    @JsonView(Views.ES.class) // <======= serialized in the ES context only
    private float price;
    @JsonView(Views.MONGO.class) // <======= serialized in the MONGO context only
    private String desc;
}
Step 3: configure different ObjectMappers for Spring Data ES and Mongo.
// ObjectMapper for MONGO
ObjectMapper mongoMapper = new ObjectMapper();
mongoMapper.setConfig(mongoMapper.getSerializationConfig().withView(Views.MONGO.class));
// ObjectMapper for ES
ObjectMapper esMapper = new ObjectMapper();
esMapper.setConfig(esMapper.getSerializationConfig().withView(Views.ES.class));
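A quick usage sketch of the same idea with Jackson's writerWithView (an alternative to reconfiguring the whole mapper; note that fields without any @JsonView annotation, like id above, are included in every view unless MapperFeature.DEFAULT_VIEW_INCLUSION is disabled):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class ViewsDemo {
    public static void main(String[] args) throws Exception {
        Product product = new Product();
        ObjectMapper mapper = new ObjectMapper();

        // Mongo context: id and desc are written, price is skipped
        String forMongo = mapper.writerWithView(Views.MONGO.class)
                                .writeValueAsString(product);

        // ES context: id and price are written, desc is skipped
        String forEs = mapper.writerWithView(Views.ES.class)
                             .writeValueAsString(product);
    }
}
```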

Mapping a MongoDB document's _id with Vert.x

I'm using the Vert.x MongoDB client as well as Vert.x CodeGen (https://github.com/vert-x3/vertx-codegen).
This is an example of my document:
@DataObject
public class Instrument {
    public static final String DB_TABLE = "instruments";
    private String id;
    private String instrumentId;
    private String name;
}
Since the model of my document (or entity) is annotated with @DataObject, how can I tell Vert.x which field I want to use as Mongo's _id?
When using Spring (Data), one can annotate an entity with @Document and mark the field to be used as Mongo's _id with @Id. Is this possible in Vert.x?
If not, is there any workaround?
Thanks!
For MongoDB you can use the Vert.x Mongo client (http://vertx.io/docs/vertx-mongo-client/java) together with io.vertx.core.json.Json (http://vertx.io/docs/apidocs/io/vertx/core/json/Json.html) to read and write a JsonObject or a String.
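Since the Vert.x Mongo client works with JsonObject and stores the document key under "_id", one workaround is to rename the field yourself when converting to and from JSON (a sketch; it assumes the class has getters, setters, and a default constructor so that JsonObject.mapFrom/mapTo can use it):

```java
import io.vertx.core.json.JsonObject;

public class Instrument {
    public static final String DB_TABLE = "instruments";

    private String id;
    private String instrumentId;
    private String name;

    // Entity -> Mongo document: expose "id" under Mongo's "_id" key.
    public JsonObject toDocument() {
        JsonObject json = JsonObject.mapFrom(this);
        json.put("_id", json.remove("id"));
        return json;
    }

    // Mongo document -> entity: map "_id" back to "id".
    public static Instrument fromDocument(JsonObject json) {
        JsonObject copy = json.copy();
        copy.put("id", copy.remove("_id"));
        return copy.mapTo(Instrument.class);
    }

    // getters/setters omitted for brevity
}
```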

Spring Data MongoDB: adding arrays to an existing document

Say I have the following collections:
public @Data class Customer {
    @Id
    private String id;
    private String firstName;
    private String lastName;
    @DBRef
    private List<Address> addressList = new ArrayList<Address>();
}
and
public @Data class Address {
    @Id
    private String id;
    private String address;
    private String type;
    private String customerID;
}
Each Customer has multiple addresses, and I have implemented MongoRepository. Saving a customer for the first time works well with customerRepo.save(customerObject); before calling save I persist the Address objects and then set them on addressList.
The next time I update the same document and want to add a new set of addresses to the existing list, the whole addressList array is overwritten. So at the moment I have to merge the lists myself with existingCustomerObject.getAddressList().addAll(myNewAddressList), which is not a good approach when the array holds a thousand or more elements, or when I need to slice addressList. My question is: what is the best way to achieve this, assuming I don't want to use MongoTemplate? Is it possible using just MongoRepository?
I don't think you can do it that way. I had the same situation previously, and I tried the following:
1. org.springframework.core.convert.converter.Converter: I managed to manipulate the DBObject, but operators like $push or $set (wrapping them under a key) do not work there.
2. AbstractMongoEventListener, overriding onBeforeSave: but the object manipulation did not take place during save.
However, you can try altering the above, or try overriding the MongoRepository save method; it would be better if someone pointed out the right direction.
Otherwise, for my scenario I had to create a custom repository (to update and delete documents) working in parallel with MongoRepository (to insert and retrieve documents), but I believe that's an ugly workaround. There has to be a cleaner way to do it.
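The custom-repository route can be packaged as a repository fragment, so callers still see a single repository interface while the implementation uses MongoTemplate's $push under the hood (a sketch; the method names are illustrative):

```java
import java.util.List;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.data.mongodb.repository.MongoRepository;

// Fragment interface, mixed into the main repository below.
interface CustomerRepositoryCustom {
    void addAddresses(String customerId, List<Address> newAddresses);
}

// Picked up by Spring Data via the "Impl" naming convention.
class CustomerRepositoryCustomImpl implements CustomerRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    CustomerRepositoryCustomImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public void addAddresses(String customerId, List<Address> newAddresses) {
        // $push with $each appends to the array instead of rewriting it.
        Query query = new Query(Criteria.where("_id").is(customerId));
        Update update = new Update().push("addressList").each(newAddresses.toArray());
        mongoTemplate.updateFirst(query, update, Customer.class);
    }
}

// Callers keep using one repository type.
interface CustomerRepository extends MongoRepository<Customer, String>, CustomerRepositoryCustom {
}
```

Strictly speaking this still uses MongoTemplate internally, but only inside the fragment; derived query methods alone cannot express a $push update.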
