How can I parameterize a SpringData ElasticSearch index at runtime?
For example, the data model:
@Document(indexName = "myIndex")
public class Asset {
    @Id
    public String id;
    // ...
}
and the repository:
public interface AssetRepository extends ElasticsearchCrudRepository<Asset, String> {
    Asset getAssetById(String assetId);
}
I know I can replace myIndex with a parameter, but that parameter is resolved at instantiation / boot time. We have the same Asset structure for multiple clients / tenants, each of which has its own index. What I need is something like this:
public interface AssetRepository extends ElasticsearchCrudRepository<Asset, String> {
    Asset getAssetByIdFromIndex(String assetId, String index);
}
or this
repoInstance.forIndex("myOtherIndex").getAssetById("123");
I know this does not work out of the box, but is there any way to programmatically 'hack' it?
Even though the bean is initialized at boot time, you can still achieve this with the Spring Expression Language (SpEL):
@Bean
Name name() {
    return new Name();
}

@Document(indexName = "#{@name.getName()}")
public class Asset {}
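The Name bean referenced in the expression is not shown above; a minimal sketch of what it could look like (this class is an assumption, not part of the original answer):
// Assumed implementation: a simple mutable holder for the current index name.
public class Name {
    private String name = "myIndex";

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}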
You can change the bean's property to switch the index you save to / search in:
assetRepo.save(new Asset(...));
name.setName("newName");
assetRepo.save(new Asset(...));
Note that this bean must not be shared across multiple threads, or you may end up writing to the wrong index.
Here is a working example.
org.springframework.data.elasticsearch.repository.ElasticsearchRepository has a method
FacetedPage<T> search(SearchQuery searchQuery);
where SearchQuery can take multiple indices to be used for searching.
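For example, with the legacy NativeSearchQueryBuilder API (a sketch; the repository variable and query are placeholders), you could target another index explicitly:
// Sketch: build a SearchQuery against an explicitly named index
// using the legacy NativeSearchQueryBuilder API.
SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withIndices("myOtherIndex")
        .withQuery(QueryBuilders.termQuery("id", "123"))
        .build();
FacetedPage<Asset> assets = assetRepository.search(searchQuery);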
I hope this answers your question.
I'm using spring-data-mongodb at the moment, so this question is primarily in the context of MongoDB, but I suspect it applies to repository code in general.
Out of the box, when using a MongoRepository<T, ID> interface (or any other Repository<T, ID> descendant), the entity type T is expected to be the document type (the type that defines the document schema).
As a result, injecting such a repository into a service component means the repository leaks database schema information into the service tier (highly pseudo-code):
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}
interface MyRepository extends MongoRepository<MyDocument, String> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        var documentId = convert(id, ...);
        var matchingDocument = repository.findById(documentId).orElse(...);
        var model = convert(matchingDocument, ...);
        return model;
    }
}
Whilst ideally I'd want to do this:
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}
@Configuration
class MyMagicConversionConfig {
    ...
}

class MyDocumentToModelConverter implements Converter<MyDocument, MyModel> {
    ...
}

class MyModelToDocumentConverter implements Converter<MyModel, MyDocument> {
    ...
}

// Note that the model and the model's ID type are used in the repository declaration
interface MyRepository extends MongoRepository<MyModel, UUID> {
}
class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        // Repository now returns the model because it was converted upstream
        // by the mongo persistence layer.
        var matchingModel = repository.findById(id).orElse(...);
        return matchingModel;
    }
}
Defining this conversion once seems significantly more practical than having to do it consistently throughout your service code, so I suspect I'm just missing something.
But of course this requires some way to inform the mongo mapping layer of the conversion that has to be applied to move between MyModel and MyDocument, and to use the latter as its actual source of mapping metadata (e.g. @Document, @Id, etc.).
I've been fiddling with custom converters, but I just can't seem to make the MongoDB mapping component do the above.
My two questions are:
Is it currently possible to define custom converters or implement callbacks that allow me to define and implement this model <-> document conversion once and abstract it away from my service tier?
If not, what is the idiomatic way to clean this up so that the service layer can stay blissfully unaware of how, or with what schema, an entity is persisted? A lot of Spring Boot codebases appear to be fine with using the type that defines the database schema as their model, but that seems suboptimal. Suggestions welcome!
Thanks!
I think you're blowing things a bit out of proportion. The service layer is not aware of the schema; it is aware of the types returned by the repository. How the properties of those types are mapped onto the schema depends on the object-document mapping, which by default uses the property name, as that's the most straightforward thing to do. That translation can be customized either with annotations on the document type or by registering a FieldNamingStrategy with Spring Data MongoDB.
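For instance, a FieldNamingStrategy can be registered by overriding the corresponding hook in a JavaConfig class; a minimal sketch (the database name is a placeholder):
@Configuration
class MongoConfig extends AbstractMongoClientConfiguration {

    @Override
    protected String getDatabaseName() {
        return "mydb"; // placeholder
    }

    // Map camelCase Java properties to snake_case document fields.
    @Override
    protected FieldNamingStrategy fieldNamingStrategy() {
        return new SnakeCaseFieldNamingStrategy();
    }
}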
Spring Data MongoDB's object-document mapping subsystem provides a lot of customization hooks that allow transforming arbitrary MongoDB documents into entities. The types the repositories return are your domain objects, which are, again only by default, mapped onto a MongoDB document 1:1, simply because that's the most reasonable thing to do in the first place.
If really in doubt, you can manually implement individual repository methods and use the MongoTemplate API, which lets you explicitly define the type the data should be projected into.
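A minimal sketch of such a manually implemented method (the custom fragment interface, the method name, and the collection name are assumptions):
class MyRepositoryImpl implements MyRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    MyRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public MyModel findModelById(String id) {
        Query query = new Query(Criteria.where("_id").is(id));
        // Query the "myDocuments" collection but project the result
        // directly into MyModel instead of MyDocument.
        return mongoTemplate.findOne(query, MyModel.class, "myDocuments");
    }
}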
You can use something like MapStruct or write your own Singleton Mapper.
Then create default methods in your repository:
interface DogRepository extends MongoRepository<DogDocument, String> {

    default DogModel dogById(String id) {
        // Map the document to the model in one place instead of in every caller.
        return findById(id)
                .map(DogMapper.INSTANCE::toModel)
                .orElse(null);
    }
}
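The DogMapper referenced above is not defined in the answer; with MapStruct it could be sketched like this (the names are assumptions):
// Assumed MapStruct mapper; the implementation is generated at compile time.
@Mapper
public interface DogMapper {

    DogMapper INSTANCE = Mappers.getMapper(DogMapper.class);

    DogModel toModel(DogDocument document);
}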
I'm using Redis OM for Spring Boot, and I'm having trouble querying objects because queries only return the first 10 records.
Repository Class:
public interface RedisBillerRepository extends RedisDocumentRepository<Biller, Long> {
    List<Biller> findByClientIds(String clientId);
}
Is there a way to return ALL the objects with the specific clientId, not only the first 10?
The only way I found was with the Page interface. For example, your repository would look like this:
public interface RedisBillerRepository extends RedisDocumentRepository<Biller, Long> {
    Page<Biller> findByClientIds(String clientId, Pageable pageable);
}
And your class could look like this:
public class BillerService {

    @Autowired
    RedisBillerRepository redisBillerRepository;

    public List<Biller> getAllClientsById(String clientId) {
        Pageable pageRequest = PageRequest.of(0, 500000);
        Page<Biller> foundBillers = redisBillerRepository.findByClientIds(clientId, pageRequest);
        List<Biller> billersAsList = foundBillers.getContent();
        return billersAsList;
    }
}
You have to set the limit for now.
I'm the author of the library... @member2 is correct. RediSearch currently has a default for the underlying FT.SEARCH (https://redis.io/commands/ft.search/) command of returning the first 10 records found. Currently, the only way to override that is to use the pagination constructs in Spring.
I will expose a configuration parameter in upcoming versions to set the MAX globally.
Let me explain my problem with Spring Data Mongo. I have the following interface, in which I declared a custom query with a projection that ignores the index field. This example is only for illustration; in real life I will ignore a bunch of fields.
public interface MyDomainRepo extends MongoRepository<MyDomain, String> {

    @Query(fields = "{ index: 0 }")
    MyDomain findByCode(String code);
}
In my MongoDB instance, MyDomain has the following data: MyDomain(code="mycode", info=null, index=19). When I use findByCode from MyDomainRepo, I get MyDomain(code="mycode", info=null, index=null). So far so good, because this is the expected behaviour. The problem happens when I decide to save the findByCode return value.
For instance, in the following example, I take the findByCode return value and set the info property to myinfo, which gives me the object below.
MyDomain(code="mycode", info="myinfo", index=null)
So I used save from MyDomainRepo. The index field was ignored on the read, as expected from the projection, but when I save the object back, with or without an update, Spring Data Mongo overrides the index property with null, and consequently my record in the MongoDB instance is overridden too. The following is my MongoDB JSON afterwards:
{
"_id": "5f061f9011b7cb497d4d2708",
"info": "myinfo",
"_class": "io.springmongo.models.MyDomain"
}
Is there a way to tell Spring Data Mongo to simply ignore null fields when saving?
Save is a replace operation, and you won't be able to signal it to patch some fields; it will replace the document with whatever you send.
Your option is to use the custom-repository extension mechanism provided by Spring Data to define custom repository methods:
public interface MyDomainRepositoryCustom {
    void updateNonNull(MyDomain myDomain);
}

public class MyDomainRepositoryImpl implements MyDomainRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public MyDomainRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public void updateNonNull(MyDomain myDomain) {
        // Populate only the fields you want to patch.
        Update update = Update.update("key1", "value1")
                .set("key2", "value2");
        // You can also use Update.fromDocument(Document object, String... exclude)
        // to create the Update, but then you need a MongoConverter
        // to convert your domain object to a Document first.
        // Build a query that matches the id (assumes MyDomain exposes getId()).
        Query queryToMatchId = new Query(Criteria.where("_id").is(myDomain.getId()));
        mongoTemplate.updateFirst(queryToMatchId, update, MyDomain.class);
    }
}

public interface MyDomainRepository extends MongoRepository<MyDomain, String>,
        MyDomainRepositoryCustom {
}
Say I have a class structure as follows; it is pretty basic inheritance:
class Manager extends Person {
    private String name;

    Manager() {
    }
}

class Clerk extends Person {
    private String salary;
}
In Spring Data, if I store these in Mongo, is it possible to configure it to map to the correct class when I do a getById? I assume I will have to store some class info.
What I don't want to do is create separate repository classes, if I can avoid it; also, I don't know what the object will be when I do a getById.
If you are using the spring-data-mongodb MongoRepository to write data to your database according to your entity model, a _class field will be added to document roots and to complex property types (see this section). This field stores the fully qualified name of the Java class and allows disambiguation when mapping from a MongoDB Document to the Spring Data model.
However, if you only use MongoRepository to read from your database, you need to tell Spring-data how to map your entities explicitly. You will need to Override Mapping with Explicit Converters.
PersonReadConverter.class
public class PersonReadConverter implements Converter<Document, Person> {

    @Override
    public Person convert(Document source) {
        if (source.get("attribute_specific_to_Clerk") != null) {
            Clerk clerk = new Clerk();
            // Set attributes using setters or a defined constructor
            return clerk;
        } else {
            Manager manager = new Manager();
            // Set attributes using setters or a defined constructor
            return manager;
        }
    }
}
Then, you have to Register Spring Converters with the MongoConverter.
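A minimal sketch of that registration, assuming a Spring Boot setup where a MongoCustomConversions bean is picked up automatically:
@Configuration
public class MongoConverterConfig {

    // Register the custom read converter so it is applied when mapping
    // MongoDB Documents back to Person subtypes.
    @Bean
    public MongoCustomConversions mongoCustomConversions() {
        return new MongoCustomConversions(List.of(new PersonReadConverter()));
    }
}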
You can find an example of my own at: Spring Data Mongo - How to map inherited POJO entities?
I have three text files; they all contain data of the same type, but the data is stored differently in each file.
I want to have one interface:
public interface ItemRepository {
    List<Item> getItems();
}
And instead of creating three implementations, I want to create one implementation and use dependency injection to inject a path to the text file and an analyser class for each text file:
public class ItemRepositoryImpl implements ItemRepository {

    Analyser analyser;
    String path;

    public ItemRepositoryImpl(Analyser analyser, String path) {
        this.analyser = analyser;
        this.path = path;
    }

    public List<Item> getItems() {
        // Use the injected analyser and the path to the text file to extract the data
    }
}
How do I wire everything and inject the ItemRepositoryImpl into my controller?
I know I could simply do:
@Controller
public class ItemController {

    @RequestMapping("/items1")
    public List<Item> getItems1() {
        ItemRepository itemRepository = new ItemRepositoryImpl(new Analyser1(), "file1.txt");
        return itemRepository.getItems();
    }

    @RequestMapping("/items2")
    public List<Item> getItems2() {
        ItemRepository itemRepository = new ItemRepositoryImpl(new Analyser2(), "file2.txt");
        return itemRepository.getItems();
    }

    @RequestMapping("/items3")
    public List<Item> getItems3() {
        ItemRepository itemRepository = new ItemRepositoryImpl(new Analyser3(), "file3.txt");
        return itemRepository.getItems();
    }
}
But I don't know how to configure Spring to autowire it.
You can achieve this in many different ways, and the right one probably depends on your design.
One of them is initialising three different analysers in the Spring context and wiring all three into ItemRepositoryImpl using the @Qualifier annotation; with the help of an extra method parameter, ItemRepositoryImpl can decide which analyser to route each request to. A closely related variant is sketched below.
For the path variable you can follow a similar approach.
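A minimal sketch of that variant, declaring one ItemRepositoryImpl bean per analyser/path pair and selecting one with @Qualifier at the injection point (the configuration class itself is an assumption; bean and class names follow the question):
@Configuration
public class ItemRepositoryConfig {

    @Bean
    public ItemRepository itemRepository1() {
        return new ItemRepositoryImpl(new Analyser1(), "file1.txt");
    }

    @Bean
    public ItemRepository itemRepository2() {
        return new ItemRepositoryImpl(new Analyser2(), "file2.txt");
    }

    @Bean
    public ItemRepository itemRepository3() {
        return new ItemRepositoryImpl(new Analyser3(), "file3.txt");
    }
}

@Controller
public class ItemController {

    private final ItemRepository itemRepository1;

    // @Qualifier picks which of the three ItemRepository beans to inject.
    public ItemController(@Qualifier("itemRepository1") ItemRepository itemRepository1) {
        this.itemRepository1 = itemRepository1;
    }

    @RequestMapping("/items1")
    public List<Item> getItems1() {
        return itemRepository1.getItems();
    }
}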
If your question is specific to how to wire a primitive type into a bean, check this post. It describes how to initialise a String variable in the Spring context.