I am using spring-data with reactive support for Elasticsearch:
@Repository
public interface UserDocumentRepository extends ReactiveCrudRepository<UserDocument, UUID> {
}
For now, my UserDocument is annotated with @Document(indexName = "user-*"). It is working properly for searching (as the Elasticsearch data is supplied by Kafka Connect, my service will not create new documents).
My problem is that I have multiple environments (dev/test) for which I need to parametrize the index name per cluster (they use the same Elasticsearch, with different index names).
So for dev I need dev-user-* and for test I need test-user-*. I can use ReactiveElasticsearchTemplate, where you can supply the index name, but how do I do that with ReactiveCrudRepository?
You can use a SpEL expression in your @Document annotation. Check this question and my answer there about the syntax.
Edit:
Just a couple of examples of how to build the index name dynamically:
If you have a property in application.properties named env-name:
@Document(indexName = "#{@environment.getProperty('env-name')}-index-*")
If you have a bean named environmentProvider with a getEnv() method:
@Document(indexName = "#{@environmentProvider.getEnv()}-index-*")
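For the second variant, the referenced bean could be as simple as the following sketch (the bean name, the env-name property key, and the getEnv() method are just the assumptions from the example above):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Minimal sketch of the environmentProvider bean referenced above.
@Component("environmentProvider")
public class EnvironmentProvider {

    // Assumes each environment sets its own env-name property (e.g. dev, test).
    @Value("${env-name}")
    private String envName;

    public String getEnv() {
        return envName;
    }
}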
I'm migrating a legacy application from Spring Core 4 to Spring Boot 2.5.2.
The application is using spring-data-rest (SDR) alongside spring-data-mongodb to handle our entities.
The legacy code was overriding the SDR configuration by extending RepositoryRestMvcConfiguration and overriding the bean definition for persistentEntityJackson2Module to remove the serializerModifier and deserializerModifier.
@EnableWebMvc
@EnableSpringDataWebSupport
@Configuration
class RepositoryConfiguration extends RepositoryRestMvcConfiguration {
    ...
    ...
    @Bean
    @Override
    protected Module persistentEntityJackson2Module() {
        // Remove the existing serializer/deserializer modifiers because Spring Data REST
        // expects linked resources to be in href form. Our platform is not tailored for it yet.
        return ConverterHelper.configureSimpleModule((SimpleModule) super.persistentEntityJackson2Module())
                .setDeserializerModifier(null)
                .setSerializerModifier(null);
    }
}
This was to avoid having to process a DBRef as an href link when posting entities: we pass the plain POJO instead of the href and persist it manually before the owning entity.
Following the migration, there is no way to set the same overridden configuration, but to avoid altering all our creation processes we would like to keep passing the POJO, even for DBRef properties.
I will add an example of what was working before.
We have the entity we want to persist:
public class EntityWithDbRefRelation {
    ....

    @Valid
    @CreateOnTheFly // Custom annotation to create the dbRefEntity before persisting the current entity
    @DBRef
    private MyDbRefEntity myDbRefEntity;
}
the DbRefEntity:
public class MyDbRefEntity {
    ...
    private String name;
}
and the JSON POST request we are doing:
POST base-api/entityWithDbRefRelations
{
    ...
    "myDbRefEntity": {
        "name": "My own dbRef entity"
    }
}
In our database, this request creates our myDbRefEntity and then creates the target entityWithDbRefRelation with a DBRef linking to the other entity.
Following the migration, the DBRef is never created, because when deserializing the JSON into a persistent entity the myDbRefEntity is ignored: an href is expected instead of a complex object.
I see 3 solutions:

1. Modify all our processes to first create the DBRef through one request, then create our entity with the link to that DBRef.
   - Very costly, as we have a lot of services creating entities through this backend.
   - Compliant with SDR.
2. Define our own REST MVC controllers for these operations, bypassing the SDR mapping mechanism.
3. Add AOP into RepositoryRestMvcConfiguration around persistentEntityJackson2Module to set the serializerModifier and deserializerModifier to null (a rough sketch of this idea follows below). I would really prefer to avoid this solution, as Spring Boot must have removed the way to configure it on purpose, and it could break when migrating to a newer version.
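For illustration, here is roughly what that third option could look like as a BeanPostProcessor rather than full AOP (a hedged sketch: it depends on SDR internals, namely the persistentEntityJackson2Module bean name and the module being a SimpleModule, so it may well break on upgrade):

import com.fasterxml.jackson.databind.module.SimpleModule;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

// Strips the serializer/deserializer modifiers from the SDR-provided Jackson
// module, mirroring what the legacy persistentEntityJackson2Module override did.
@Component
public class StripSdrModifiersPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if ("persistentEntityJackson2Module".equals(beanName) && bean instanceof SimpleModule) {
            ((SimpleModule) bean)
                    .setSerializerModifier(null)
                    .setDeserializerModifier(null);
        }
        return bean;
    }
}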
Does anyone know a way to keep treating the property as a complex object instead of an href link, other than my 3 previous points?
Tell me if you need more information and thanks in advance for your help!
I've been struggling for the past week to successfully integrate Spring Data MongoDB into our application. We use the fairly common practice of having separate databases for each collection that we rely on. For instance, TenantConfiguration database contains only the TenantConfigurations collection.
I've read through the documentation several times and trawled through the code for a solution but have turned up nothing. Surely such a widely adopted project has some solution for this issue? My current attempt looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = TenantConfigurationRepository.class,
        mongoTemplateRef = "tenantConfigurationTemplate")
public class TenantConfigurationRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate tenantConfigurationTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "TenantConfiguration");
    }
}
Here is one of the other individual repository configurations:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = RegisteredCardRepository.class,
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate registeredCardTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "RegisteredCard");
    }
}
Now here is the actual repository definition for the RegisteredCard repository:
@Repository
public interface RegisteredCardRepository extends MongoRepository<RegisteredCard, Guid>,
        QueryDslPredicateExecutor<RegisteredCard> { }
This all makes perfect sense to me: the individual configurations uniquely identify the specific repository interfaces they configure and the specific template bean to use with that repository, via the mongoTemplateRef parameter of the annotation. At least, this is how the documentation seems to imply it should work.
In reality, when I start up the application, the RegisteredCard repository resolves to a MongoDB repository instance with an associated MongoDbFactory that is bound to the TenantConfiguration database. In fact, every single repository receives the same, incorrect MongoOperations object. Despite each repository having its own unique configuration, it appears that whatever database is accessed first remains the target database for every repository.
Are there any solutions available to this problem?
It's taken me almost a week, but I've actually found a passable solution. Here's a quick run-down of facts I picked up while researching this issue:
@EnableMongoRepositories(basePackageClasses = Whatever.class) simply uses the qualified class name to indicate what package it should scan for your repositories and data models. This is entirely equivalent to doing @EnableMongoRepositories(basePackages = "com.mypackage.whatevers") if Whatever.class resides in that package.
@EnableMongoRepositories is not repeatable but can be used to annotate several classes. This has been covered in other SO conversations but bears repeating here: you will need to define several repository configuration classes, one for each database you intend to interact with.
Each of your individual repository configurations must specify its own MongoTemplate instance in the @EnableMongoRepositories annotation. You can get away with providing only a single Mongo bean, but the MongoTemplate relies on a specific MongoMappingContext.
The @EnableMongoRepositories annotation helps define your mapping context, which understands the structure of your data models and how to serialize them; it also understands the @Document and @Field annotations and does the heavy lifting of persisting your objects. The MongoTemplate instances are where you specify which database you want to interact with. So by providing the @EnableMongoRepositories annotation with both a basePackages attribute and a mongoTemplateRef attribute, you can tell Spring Data MongoDB to "take these models and persist them in this specific database".
The unfortunate requirement for this solution is that you must organize your data models into separate packages depending on which database they belong in. If, like me, you are using a MongoDB structure that allocates a single collection to each database (this is fairly common for heavily accessed collections), this means that each of your data models must reside in its own package. Each of these packages must be pointed to by an @EnableMongoRepositories annotation that also contains a mongoTemplateRef attribute referencing a unique MongoTemplate bean, as sketched below.
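To make that concrete, here is a hedged sketch of what one such per-database configuration might look like (the per-database package name is hypothetical, and the dedicated mapping context is one way to keep entity metadata separate; API names match the pre-MongoClientSettings generation of Spring Data MongoDB used in the question):

import com.mongodb.MongoClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;
import org.springframework.data.mongodb.core.convert.DefaultDbRefResolver;
import org.springframework.data.mongodb.core.convert.MappingMongoConverter;
import org.springframework.data.mongodb.core.mapping.MongoMappingContext;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;

@Configuration
@EnableMongoRepositories(
        basePackages = "com.whatever.service.repository.registeredcard", // hypothetical per-database package
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Bean
    public MongoTemplate registeredCardTemplate(MongoClient client) {
        // Bind this template (and only this template) to the RegisteredCard database.
        SimpleMongoDbFactory factory = new SimpleMongoDbFactory(client, "RegisteredCard");

        // Give the template its own mapping context so each database keeps
        // independent entity metadata instead of sharing a single context.
        MongoMappingContext mappingContext = new MongoMappingContext();
        mappingContext.afterPropertiesSet();

        MappingMongoConverter converter =
                new MappingMongoConverter(new DefaultDbRefResolver(factory), mappingContext);
        converter.afterPropertiesSet();

        return new MongoTemplate(factory, converter);
    }
}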
I hope this helps someone avoid the trouble I've gone through trying to accomplish what should be a fairly run-of-the-mill Mongo integration.
PS: Abandon all hope, those who seek to combine auditing with this configuration.
I know this is old, but for those who are looking for a short solution like me:
@Autowired
@Qualifier("registeredCardTemplate")
private MongoTemplate template;
The qualifier name is the value of your mongoTemplateRef attribute.
I am implementing a Spring Data repository and having my repository extend MongoRepository. I am looking for a way to specify a hint on my findBy methods so I can be in control; I have seen several cases where a non-optimal index was picked as the winning plan.
This is what my repository looks like right now:
public interface AccountRepository extends MongoRepository<Account, ObjectId> {

    @Meta(maxExcecutionTime = 60000L, comment = "Comment")
    public List<Account> findByUserIdAndBrandId(Long userId, Long brandId);
}
I researched a bunch and found that the JpaRepository from Spring Data supports the @QueryHint annotation, but I do not believe that annotation is supported for MongoDB. Is there a similar annotation I can specify on top of my findBy method to provide the hint?
MongoTemplate allows specifying a hint; however, I have a ton of findBy methods and I would hate to add an implementation underneath just to specify a hint.
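For context, the per-method fallback I am trying to avoid would look roughly like this (a sketch; the fragment and index names are illustrative) using a custom repository fragment and Query.withHint:

import java.util.List;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Hypothetical custom fragment; Spring Data wires in AccountRepositoryImpl
// when AccountRepository also extends AccountRepositoryCustom.
interface AccountRepositoryCustom {
    List<Account> findByUserIdAndBrandIdHinted(Long userId, Long brandId);
}

class AccountRepositoryImpl implements AccountRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    AccountRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public List<Account> findByUserIdAndBrandIdHinted(Long userId, Long brandId) {
        Query query = new Query(Criteria.where("userId").is(userId).and("brandId").is(brandId));
        query.withHint("userId_1_brandId_1"); // illustrative index name
        return mongoTemplate.find(query, Account.class);
    }
}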
I have started to use SDN 3.0.0 M1 with Neo4j 2.0 (via the REST interface) and I want to use an existing graph.db with existing data.
I have no problem finding nodes created through SDN via hrRepository.save(myObject);, but I can't fetch any existing node (not created through SDN) via hrRepository.findAll(); or any other method, despite having manually added a __type__ property to these existing nodes.
I use a very simple repository to test that:
@Component
public interface HrRepository extends GraphRepository<Hr> {

    Hr findByName(String name);

    @Query("match (hr:hr) return hr")
    EndResult<Hr> GetAllHrByLabels();
}
And the annotated query GetAllHrByLabels works perfectly.
Is there a way to use the standard methods (findAll(), findByName()) on existing data without redefining the Cypher queries?
I recently ran into the same problem when upgrading from SDN 2.x to 3.0. I was able to get it working by first following the steps in this article: http://maxdemarzi.com/2013/06/26/neo4j-2-0-is-coming/ to create and enable Neo4j Labels on the existing data.
From there, though, I had to get things working for SDN 3. As you encountered, to do this, you need to set the metadata correctly. Here's how to do that:
Consider a @NodeEntity called Person that inherits from AbstractNodeEntity (imports and extraneous code removed for brevity):
AbstractNodeEntity:
@NodeEntity
public abstract class AbstractNodeEntity {
    @GraphId private Long id;
}
Person:
@NodeEntity
@TypeAlias("Person") // <== This line added for SDN 3.0
public class Person extends AbstractNodeEntity {
    public String name;
}
As you know, in SDN 2.x a __type__ property is created automatically to store the class name SDN uses to instantiate the node entity when it's read from Neo4j. This is still true, although in SDN 3.0 the value is now specified using the @TypeAlias annotation, as seen in the example above. SDN 3.0 also adds new metadata in the form of Neo4j labels representing the class hierarchy, where the node's class is prepended with an underscore (_).
For existing data, you can add these labels in Cypher (I just used the new web-based Browser utility in Neo4j 2.0.1) like this:
MATCH (n {__type__:'Person'}) SET n:`_Person`:`AbstractNodeEntity`;
Just wash/rinse/repeat for the other @NodeEntity types you have.
There is also a Neo4j label that gets created called SDN_LABEL_STRATEGY, but it isn't applied to any nodes, at least in my data. SDN 3 must have created it automatically, as I didn't do so manually.
Hope this helps...
-Chris
Using SDN over REST is probably not the best idea performance-wise. Just so you know.
Data not created with SDN won't have the necessary meta information.
You will have to iterate over the nodes manually and use
template.postEntityCreation(Node,Class);
on each of them to add the type information, where the class argument is your SDN-annotated entity class.
something like:
for (Node n : template.query("match (n) where n.__type__ = 'Hr' return n", null).to(Node.class)) {
    template.postEntityCreation(n, Hr.class);
}
Let's say I have the following data model:
public class A {
    @Indexed(indexType = IndexType.FULLTEXT, indexName = "property1")
    String property1;
}

public class B extends A {
    @Indexed(indexType = IndexType.FULLTEXT, indexName = "property2")
    String property2;
}
Can I tell the Spring framework to index property1 of class B under a different index name?
If not, what would you do in such a case? I mean, what would you do if you had a few classes that all extend the same base class while all the properties those classes inherit from the base class should be indexed? I can annotate those properties for indexing only in the base class, and that is very limiting. What can I do?
Thanks.
The level attribute in the index definition annotation can be set to Level.INSTANCE. For more help, please refer to the spring-data-neo4j documentation here.
Here is an excerpt from the doc:
If a field is declared in a superclass but different indexes for subclasses are needed, the level attribute declares what will be used as index. Level.CLASS uses the class where the field was declared and Level.INSTANCE uses the class that is provided or of the actual entity instance.
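Applied to the model from the question, that might look like the following sketch (assuming the Level enum nested in the @Indexed annotation and the IndexType import from org.springframework.data.neo4j.support.index in this SDN version):

import org.springframework.data.neo4j.annotation.Indexed;
import org.springframework.data.neo4j.annotation.Indexed.Level;
import org.springframework.data.neo4j.support.index.IndexType;

public class A {
    // With Level.INSTANCE the index used is derived from the concrete
    // entity class (e.g. B for instances of B), not from A where the
    // field is declared, so each subclass gets its own index.
    @Indexed(indexType = IndexType.FULLTEXT, level = Level.INSTANCE)
    String property1;
}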
I don't think that's possible; your property1 will always be indexed in the property1 index. Being able to specify multiple indexes on a single field would probably fix your issue, but it's currently not possible. A while ago I raised an issue for this, but it's not yet implemented.
If you really want a domain (entity) object approach, you could opt to manage your entities manually. It's not related to Spring or Spring Data Neo4j, but it also does the trick. By handling your entities yourself this way, you could also manage the indexes yourself, thus gaining all the flexibility you want.
Just a question, why would you want to specify a different index per subclass?