Can Spring Data MongoDB be configured to support a different database for each repository?

I've been struggling for the past week to successfully integrate Spring Data MongoDB into our application. We use the fairly common practice of having a separate database for each collection that we rely on. For instance, the TenantConfiguration database contains only the TenantConfigurations collection.
I've read through the documentation several times and trawled through the code for a solution but have turned up nothing. Surely such a widely adopted project has some solution for this issue? My current attempt looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = TenantConfigurationRepository.class,
        mongoTemplateRef = "tenantConfigurationTemplate")
public class TenantConfigurationRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate tenantConfigurationTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }
        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();
        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "TenantConfiguration");
    }
}
Here is one of the other individual repository configurations:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = RegisteredCardRepository.class,
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate registeredCardTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }
        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();
        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "RegisteredCard");
    }
}
Now here is the actual repository definition for the RegisteredCard repository:
@Repository
public interface RegisteredCardRepository extends MongoRepository<RegisteredCard, Guid>,
        QueryDslPredicateExecutor<RegisteredCard> { }
This all makes perfect sense to me: the individual configurations uniquely identify the specific repository interfaces they configure and the specific template bean to use with that repository via the mongoTemplateRef parameter of the annotation. At least, this is how the documentation seems to imply it should work.
In reality, when I start up the application, the RegisteredCard repository resolves to a MongoDB repository instance with an associated MongoDbFactory that is bound to the TenantConfiguration database. In fact, every single repository receives the same, incorrect MongoOperations object. Despite each repository having its own unique configuration, it appears that whatever database is accessed first remains the target database for every repository.
Are there any solutions available to this problem?

It's taken me almost a week, but I've actually found a passable solution to this issue. Here's a quick run-down of facts I've picked up while researching this issue:
@EnableMongoRepositories(basePackageClasses = Whatever.class) simply uses a qualified class name to indicate which package it should scan for all of your defined data models. This is entirely equivalent to doing @EnableMongoRepositories(basePackages = "com.mypackage.whatevers") if Whatever.class resides in that package.
@EnableMongoRepositories is not repeatable but can be used to annotate several classes. This has been covered in other SO conversations but bears repeating here. You will need to define several repository configuration classes; one for each database you intend to interact with.
Each of your individual repository configurations must specify its own MongoTemplate instance in the @EnableMongoRepositories annotation. You can get away with providing only a single Mongo bean, but each MongoTemplate relies on a specific MongoMappingContext.
The @EnableMongoRepositories annotation helps define your mapping context, which understands the structure of your data models and how to serialize them. It also understands the @Document and @Field annotations and does the heavy lifting of persisting your objects. The MongoTemplate instances are where you specify which database you want to interact with. So by providing the @EnableMongoRepositories annotation with both a basePackages attribute and a mongoTemplateRef attribute, you can tell Spring Data MongoDB to "take these models and persist them in this specific database".
The unfortunate requirement of this solution is that you must organize your data models into separate packages according to which database they belong in. If, like me, you are using a Mongo database structure that allocates a single collection to each database (fairly common for heavily accessed collections), this means that each of your data models must reside in its own package. Each of these packages must be pointed to by an @EnableMongoRepositories annotation that also contains a mongoTemplateRef attribute referencing a unique MongoTemplate bean, as sketched below.
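To make the package split concrete, here is a minimal sketch with hypothetical sub-package names; the key difference from the configuration in the question is that each class scans only its own package rather than the shared parent package:
// Hypothetical layout: one repository package per database.
// com.whatever.service.repository.tenantconfiguration -> TenantConfigurationRepository
// com.whatever.service.repository.registeredcard      -> RegisteredCardRepository

@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository.registeredcard",
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {
    // registeredCardTemplate bean defined exactly as in the question
}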
I hope this helps someone avoid the trouble I've gone through trying to accomplish what should be a fairly run-of-the-mill Mongo integration.
PS: Abandon all hope, those who seek to combine auditing with this configuration.

I know this is old but for those who are looking for a short solution like me:
@Autowired
@Qualifier("registeredCardTemplate")
private MongoTemplate template;
The qualifier name is the value you gave as mongoTemplateRef = "XXX".
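With the qualified template injected, operations run against the database that template was created with; for example (reusing the RegisteredCard model from the question):
List<RegisteredCard> cards = template.findAll(RegisteredCard.class);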

Related

Spring Batch/Data JPA application not persisting data to db

I'm having a really weird issue. I need to say that my code works perfectly fine locally but does not persist some data in our pod (Kubernetes environment).
I have different datasources to work with in this batch. Everything runs fine. The job repository is map-based, using a ResourcelessTransactionManager. I configured it like this:
@Configuration
@EnableBatchProcessing
public class BatchConfigurer extends DefaultBatchConfigurer {
    @Override
    public void setDataSource(DataSource dataSource) {
    }
}
I also use a different PlatformTransactionManager than Spring Batch (issue), so I set spring.main.allow-bean-definition-overriding=true in my properties. The platform transaction manager in my configurer is the correctly bound one; I verified it in the debugger.
I have a custom writer for one of my steps. It updates records in multiple tables that live in multiple databases (different datasources, in brief):
public class MyWriter implements ItemWriter<MyDTO> {

    @Autowired
    private MyFirstRepo myfirstRepo; // table in first datasource

    @Autowired
    private MySecondRepo mySecondRepo; // table in second datasource

    @Override
    public void write(List<? extends MyDTO> myDtoList) throws Exception {
        // some logic
        mySecondRepo.delete(deletableEntity);
        // some logic
        mySecondRepo.saveAll(updatableEntities);
        // some logic
        myfirstRepo.saveAll(updatableEntities);
    }
}
Since I have multiple datasources, I defined multiple transaction managers, and to give a transaction manager to my step I defined a chained transaction manager that includes those managers:
@Bean
public Step myStep(@Qualifier("chainedTransactionManager") ChainedTransactionManager chainedTransactionManager) {
    return getCommonStepBuilder("myStep")
            .transactionManager(chainedTransactionManager)
            .<MyDTO, MyDTO>chunk(200)
            .reader(myPaginingReader())
            .writer(myWriter())
            .taskExecutor(myTaskExecutor())
            .throttleLimit(15)
            .build();
}
The chained transaction manager config (both of these transaction managers are JpaTransactionManagers):
@Configuration
public class TransactionManagerConfig {

    @Primary
    @Bean(name = "chainedTransactionManager")
    public ChainedTransactionManager transactionManager(
            @Qualifier("firstTransactionManager") PlatformTransactionManager firstTransactionManager,
            @Qualifier("secondTransactionManager") PlatformTransactionManager secondTransactionManager) {
        return new ChainedTransactionManager(firstTransactionManager, secondTransactionManager);
    }
}
My first two JPA operations in the writer work just fine (the operations made through MySecondRepo), but the last operation does not persist the data. It doesn't throw any errors and the job completes successfully, but my records in the table are never updated.
I must mention a second time that it does update locally, just not in our app that lives in the Kubernetes environment (a dockerized microservice), which makes it so confusing. Any idea why this is happening?
Edit: I created another writer bean for myfirstRepo.saveAll(updatableEntities) (a JdbcBatchItemWriter executing the same logic) and added both writers to a composite one, as sketched below. Now it works as expected. But I have a lot of concerns, since I don't know what caused this. Any idea?
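For reference, the composite arrangement described in the edit might look roughly like this; the bean names are hypothetical and the JdbcBatchItemWriter configuration is omitted:
@Bean
public CompositeItemWriter<MyDTO> compositeWriter(ItemWriter<MyDTO> myWriter,
        JdbcBatchItemWriter<MyDTO> firstRepoJdbcWriter) {
    // Delegates are invoked in order, inside the single chunk transaction.
    CompositeItemWriter<MyDTO> composite = new CompositeItemWriter<>();
    composite.setDelegates(Arrays.asList(myWriter, firstRepoJdbcWriter));
    return composite;
}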
Edit 2: I came across this thread. I was using a JdbcPagingItemReader; are entities fetched with that component in a managed state? The entities in mySecondRepo.delete(deletableEntity) and
mySecondRepo.saveAll(updatableEntities) are fetched inside the writer using Hibernate, but the myfirstRepo.saveAll(updatableEntities) entities are the ones that came from the reader.
It all makes sense if that is the case, but if so, why was it working fine locally?
mySecondRepo.saveAll(updatableEntities) are fetched inside writer by using hibernate but myfirstRepo.saveAll(updatableEntities) entities are the ones that came from reader.
Fetching items in the item writer is the cause of your issue. That is incorrect: an item writer is not expected to read items. That's why the items coming from the reader are saved, but not the ones fetched inside the writer.
What you should know is that all writers in the composite run in the scope of a single transaction, driven by the transaction manager of the step. So if you are writing data to multiple datasources, you need to make sure the transaction manager coordinates the transaction across all datasources. ChainedTransactionManager is deprecated; you can use a JtaTransactionManager for your case.
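A hedged sketch of that suggestion, assuming a JTA implementation is on the classpath (for example Atomikos via spring-boot-starter-jta-atomikos on Boot 2.x) and both datasources are XA-capable:
@Bean
public Step myStep(JtaTransactionManager jtaTransactionManager) {
    return getCommonStepBuilder("myStep")
            // one coordinated transaction spanning both datasources
            .transactionManager(jtaTransactionManager)
            .<MyDTO, MyDTO>chunk(200)
            .reader(myPaginingReader())
            .writer(myWriter())
            .build();
}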

How to use spring-data-rest without href for relation

I'm migrating a legacy application from Spring Core 4 to Spring Boot 2.5.2.
The application is using spring-data-rest (SDR) alongside spring-data-mongodb to handle our entities.
The legacy code overrode the SDR configuration by extending RepositoryRestMvcConfiguration and overriding the bean definition for persistentEntityJackson2Module to remove the serializerModifier and deserializerModifier:
@EnableWebMvc
@EnableSpringDataWebSupport
@Configuration
class RepositoryConfiguration extends RepositoryRestMvcConfiguration {
    ...

    @Bean
    @Override
    protected Module persistentEntityJackson2Module() {
        // Remove the existing Ser/DeserializerModifier because Spring Data REST expects
        // linked resources to be in href form. Our platform is not tailored for it yet.
        return ConverterHelper.configureSimpleModule((SimpleModule) super.persistentEntityJackson2Module())
                .setDeserializerModifier(null)
                .setSerializerModifier(null);
    }
}
This was to avoid having to process DBRefs as href links when posting entities: we pass the plain POJO instead of the href, and we persist it manually before the entity.
Following the migration, there is no way to set the same overridden configuration, but to avoid altering all our creation processes we would like to keep passing the POJO, even for a DBRef.
Here is an example of what was working before.
We have the entity we want to persist :
public class EntityWithDbRefRelation {
    ....

    @Valid
    @CreateOnTheFly // Custom annotation to create the dbrefEntity before persisting the current entity
    @DBRef
    private MyDbRefEntity myDbRefEntity;
}
the DbRefEntity
public class MyDbRefEntity {
...
private String name;
}
and the JSON Post request we are doing:
POST base-api/entityWithDbRefRelations
{
...
"myDbRefEntity": {
"name": "My own dbRef entity"
}
}
In our database this request creates our myDbRefEntity and then creates the target entityWithDbRefRelation with a DBRef linked to the other entity.
Following the migration, the DBRef is never created, because when deserializing the JSON into a persistent entity the myDbRefEntity is ignored: an href is expected instead of a complex object.
I see three solutions:
1. Modify all our processes to first create the DBRef through one request, then create our entity with the link to that DBRef.
   - Very costly, as we have a lot of services creating entities through this backend.
   - Compliant with SDR.
2. Define our own REST MVC controllers for these operations, bypassing the SDR mapping mechanism.
3. Add AOP into RepositoryRestMvcConfiguration around persistentEntityJackson2Module to set the serializerModifier and deserializerModifier to null.
   - I would really prefer to avoid this solution, as Spring Boot must have removed this configuration hook on purpose, and it could break when migrating to a newer version.
Does anyone know a way to keep treating the property as a complex object instead of an href link, other than my three points above?
Tell me if you need more information and thanks in advance for your help!

Spring mongodb compass missing created data/collections

I save my data to the database using Spring:
@RepositoryRestResource(collectionResourceRel = "operators", path = "operators")
public interface OperatorsRepository extends MongoRepository<Operator, String> {
}
and I have this in main\resources\application.properties:
spring.data.mongodb.uri=mongodb://admin:password@myclusterurl/test?retryWrites=true&w=majority
In my config class I use:
@Bean
CommandLineRunner commandLineRunner(OperatorsRepository operatorsRepository) {
    return args -> operatorsRepository.save(myobjToSave);
}
Everything works fine; I get the saved data using REST. But my problem is that in MongoDB Compass I don't see the created collections and data. Why? It is the same using the mongo shell and MongoDB Atlas.
Declaring a bean doesn't mean it is automatically executed. If you want to create a new collection from, let's say, a JSON file in src/main/resources (or test), then you have to trigger the call of this method somehow.
I suggest using the @PostConstruct annotation, which triggers once upon object creation. Since you want to create data using the OperatorsRepository, I'd use it in a @Service class injecting that object:
@PostConstruct
void createData() {
    this.operatorsRepository.save(myobjToSave);
}
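Putting it together, the enclosing service might look like this (the class name is hypothetical):
@Service
public class OperatorsDataLoader {

    @Autowired
    private OperatorsRepository operatorsRepository;

    @PostConstruct
    void createData() {
        this.operatorsRepository.save(myobjToSave);
    }
}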
Try this
spring.data.mongodb.uri=mongodb://xxxxxxxxxxxxxxxxxxx
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration

What are possible causes for Spring @ComponentScan being unable to auto-create a class annotated with @Repository

I came across a tutorial which seemed to fit my use case and tried implementing it. I failed but wasn't sure why, so I tried to find another example with similar code and looked at the book "Spring in Action, Fourth Edition" by Craig Walls.
The book describes on page 300 the same basic approach. Define a JdbcTemplate bean first:
@Bean
NamedParameterJdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new NamedParameterJdbcTemplate(dataSource);
}
Then a repository implementing an interface:
@Repository
public class CustomRepositoryImpl implements CustomRepository {

    private final NamedParameterJdbcOperations jdbcOperations;
    private static final String TEST_STRING = "";

    @Autowired
    public CustomRepositoryImpl(NamedParameterJdbcOperations jdbcOperations) {
        this.jdbcOperations = jdbcOperations;
    }
}
So I did as the example in the book suggests and wrote a test, but got this error message:
Error creating bean with name 'de.myproject.config.SpringJPAPerformanceConfigTest': Unsatisfied dependency expressed through field 'abc'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'de.myproject.CustomRepository' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
To my understanding, as book and tutorial describe, the repository should be recognized as a bean definition by the component scan.
To test this I created a context and asked for all registered beans:
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
context.getBeanDefinitionNames();
As assumed, my repository wasn't among them. So, for test purposes only, I increased the scope of the search in my project and set it to the base package. Every other bean was shown except the repository.
As an alternative to component scanning and autowiring, the book describes the possibility of simply declaring the repository as a bean, which I did:
@Bean
public CustomRepository customRepository(NamedParameterJdbcOperations jdbcOperations) {
    return new CustomRepositoryImpl(jdbcOperations);
}
After that, Spring was able to wire the repository. I looked at the GitHub code of the book in hope of a better understanding, but unfortunately only the bean solution, which runs, is implemented there.
So here are my questions:
1.) What possible reasons are there for a bean definition, in a scenario like this one, not to be recognized by the component scan?
2.) This project already uses Spring Data JPA repositories; are there any reasons not to use both approaches at the same time?
The problem is the naming of your classes. There are a few things to understand here.
You define a repository interface; @Repository is optional provided it extends CrudRepository or one of the repositories provided by spring-data. In this interface you can declare query methods (findBy...), and spring-data will formulate the query based on the underlying database. You can also specify your own query using @Query.
Suppose you have a method that involves a complex query, or something spring-data cannot do out of the box; in such a case we can use the underlying template class, for example JdbcTemplate or MongoTemplate.
The procedure is to create another interface and an Impl class, and the naming must follow the convention exactly: the extra interface is named after your repository with a Custom suffix, the Impl class with an Impl suffix, and all of them must live in the same package.
For example, if your repository name is AbcRepository, then your custom interface should be named AbcRepositoryCustom and the implementation should be named AbcRepositoryImpl. AbcRepository extends AbcRepositoryCustom (and also the other spring-data repositories), and AbcRepositoryImpl implements AbcRepositoryCustom, as sketched below.
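A minimal sketch of the convention, using placeholder names (Abc as the entity) and the question's NamedParameterJdbcOperations underneath:
// Fragment interface: must be named <RepositoryName>Custom.
public interface AbcRepositoryCustom {
    List<Abc> findByComplexCriteria(String param);
}

// Implementation: must be named <RepositoryName>Impl and live in the same package.
public class AbcRepositoryImpl implements AbcRepositoryCustom {

    @Autowired
    private NamedParameterJdbcOperations jdbcOperations;

    @Override
    public List<Abc> findByComplexCriteria(String param) {
        // hand-written query via the underlying template
        return jdbcOperations.query("SELECT * FROM abc WHERE col = :param",
                Collections.singletonMap("param", param),
                (rs, rowNum) -> new Abc(/* map columns */));
    }
}

// The main repository ties the two together:
public interface AbcRepository extends CrudRepository<Abc, Long>, AbcRepositoryCustom {
}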
I was able to "solve" the problem myself.
As we also have a front-end class annotated with the same basePackages for the @ComponentScan,
@EnableWebMvc
@Configuration
@ComponentScan(basePackages = {"de.myproject.*"})
there were actually two identical @ComponentScan annotations, which I wasn't aware of, and this led to a conflict. It seems the ordering of how the whole application had to be loaded changed, but that's only me guessing.
I simply moved my repository and its Impl to a subpackage and changed the annotation to
@ComponentScan(basePackages = {"de.myproject.subpackage.*"})
and now everything works fine, though the exact reason behind this behavior escapes me.

Spring Data MongoDB: Specifying a hint on a Spring Data Repository find method

I am implementing a Spring Data repository and having my repository extend MongoRepository. I am looking for a way to specify a hint on my findBy methods so I can stay in control; I have seen several cases where a non-optimal index was picked as the winning plan.
This is what my repository looks like right now:
public interface AccountRepository extends MongoRepository<Account, ObjectId> {

    @Meta(maxExecutionTimeMs = 60000L, comment = "Comment")
    public List<Account> findByUserIdAndBrandId(Long userId, Long brandId);
}
I researched a bunch and found that the JpaRepository from Spring Data supports the @QueryHint annotation, but I do not believe that annotation is supported for MongoDB. Is there a similar annotation I can put on top of my findBy method to specify the hint?
MongoTemplate allows specifying a hint; however, I have a ton of findBy methods, and I would hate to add an implementation underneath just to specify a hint.
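For reference, the MongoTemplate route mentioned above looks roughly like this; the index name is hypothetical:
Query query = new Query(Criteria.where("userId").is(userId).and("brandId").is(brandId))
        .withHint("userId_1_brandId_1"); // force the compound index
List<Account> accounts = mongoTemplate.find(query, Account.class);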
