Override entity manager factory mappings conditionally at runtime - Spring

I have a multi-tenant/multi-DB application implemented using an AbstractRoutingDataSource. Some of the DBs have a slightly different schema and are missing some columns in some tables (this is static and known for each DB). Instead of duplicating all repositories and entities for each DB schema, I would like to just mark missing columns as transient if the DB does not have them (but still save all information where the columns are available).
I was able to override the annotation-based mappings in the entity manager factory using an XML-based mapping file, which I could create for all possible schemas. My idea is to create an entity manager factory for each tenant with the appropriate XML mapping override. Ideally, on the first request of a tenant it would instantiate the entity manager factory and then check which mapping override to apply. Pseudo code:
@Configuration
@EnableTransactionManagement
class JPAconfig {

    @Autowired
    private Environment env;

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            DataSource myAbstractRoutingDataSource, TenantService tenantService) {
        final LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(myAbstractRoutingDataSource);
        em.setPackagesToScan("myPackagesToScan");
        final JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        em.setJpaProperties(additionalProperties());
        String schemaVer = env.getProperty("db.schema.version");
        // At runtime, based on the tenant, override the annotation-based mappings
        em.setMappingResources(tenantService.getMappingsForTenant());
        return em;
    }
}
Unfortunately LocalContainerEntityManagerFactoryBean is a factory bean and only seems to allow singleton or prototype scope. It would also be okay to define all possible LocalContainerEntityManagerFactoryBean beans at compile time and somehow select the correct factory based on the tenant at runtime.
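The "define all factories up front and select per tenant at runtime" idea can be sketched independently of Spring's bean scoping as a small router keyed by the current tenant, mirroring how AbstractRoutingDataSource resolves a lookup key from a ThreadLocal. This is a hypothetical sketch: the names TenantContext and EntityManagerFactoryRouter are invented here, and plain strings stand in for the per-tenant EntityManagerFactory instances.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: holds the current tenant id per thread, the same
// pattern AbstractRoutingDataSource implementations typically use.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenant) { CURRENT.set(tenant); }
    static String get() { return CURRENT.get(); }
}

class EntityManagerFactoryRouter {
    // In a real application the values would be EntityManagerFactory instances,
    // each built once with the XML mapping override for that tenant's schema.
    private final Map<String, String> factoriesByTenant = new HashMap<>();

    void register(String tenant, String factory) {
        factoriesByTenant.put(tenant, factory);
    }

    // Resolve the factory for the tenant bound to the current request/thread.
    String resolveCurrent() {
        String factory = factoriesByTenant.get(TenantContext.get());
        if (factory == null) {
            throw new IllegalStateException("No factory for tenant " + TenantContext.get());
        }
        return factory;
    }
}
```

A repository layer would then obtain its EntityManager through whatever `resolveCurrent()` returns, instead of a single autowired factory.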

Try reloading the application context after changing your Hibernate properties/config:
ApplicationContextProvider.getApplicationContext().refresh();
Reference: Spring Hibernate: reload entity mappings

Related

Spring Boot: How to load only specified @Repository components

I have a project containing many DAOs, each annotated with @Repository.
I also have several Spring Boot projects, each with its own Spring context, that can be run independently and reference the project containing the DAOs.
The thing is, I don't want to load every DAO into the Spring context in each project. Only some specified DAOs are required for each Spring Boot project.
I used to specify Dao classes by defining them as beans in an XML configuration for each project.
Now we are moving to java and annotation based configuration.
Is there a way to tell the Spring context to load only the @Repository beans that I specify?
I know I can make a @Configuration class and define @Bean methods, but I still need them to be treated as @Repository and not as normal beans. Any idea if this is supported and how to implement it?
You can use @Conditional on each of those DAO classes.
The class will be loaded into the context only when the condition specified with the @Conditional annotation is fulfilled. You can have a condition like:
@ConditionalOnProperty(
        value = "module.name",
        havingValue = "module1",
        matchIfMissing = false)
class DaoForModule1 {
This will load DaoForModule1 if and only if the property module.name has the value module1. If you want to load DaoForModule1 when the property is not set, you can change matchIfMissing to true.
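For example, assuming the property is set in the project's application.properties, the Spring Boot project that needs this DAO would contain an entry like:

```properties
# Hypothetical entry: activates DaoForModule1 via its @ConditionalOnProperty
module.name=module1
```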
You can also use the @Profile annotation to limit the classes loaded based on the active profile:
@Profile("module2")
class DaoForModule2 {
This would load DaoForModule2 only when you have module2 in the list of active profiles. But I would not prefer profiles here, as their use case is different: we generally use profiles to select environment-specific resources.
@SpringBootApplication just combines @EnableAutoConfiguration, @SpringBootConfiguration and @ComponentScan.
@ComponentScan is the one that causes all @Repository beans under the scanned packages to be registered automatically, which is the thing you don't want to happen.
So you can use these annotations separately, excluding @ComponentScan, and use @Import to explicitly define the beans that you want to register.
The main application class will look like :
@SpringBootConfiguration
@EnableAutoConfiguration
@Import(value = {FooRepository.class, BarRepository.class,.......})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
From your question, I assume you want to reuse a Spring DAO project with multiple repositories and JPA Entity objects, maybe belonging to different datasources, in several other Spring projects. You prefer to load only a specific set of the JPA entities/repos. The first step is to organize the related entities and repositories into distinct packages and include this project in the path of the other projects.
This is one way to handle this, assuming you have separated the repositories and entities into different packages. Create your own configuration bean that will instantiate a JPA EntityManagerFactory bean with the specific packages and datasource it needs. In the code below, the EntityManagerFactory will load the entities from MODEL_PACKAGE and the repositories from REPOSITORIES_PACKAGE.
@Configuration
@ComponentScan(basePackages = PersistenceConfig.MODEL_PACKAGE)
@EnableJpaRepositories(basePackages = PersistenceConfig.REPOSITORIES_PACKAGE,
        entityManagerFactoryRef = PersistenceConfig.ENTITY_MANAGER_FACTORY)
@EnableTransactionManagement
public class PersistenceConfig {

    public static final String MODEL_PACKAGE = "Your model package";
    public static final String REPOSITORIES_PACKAGE = "Your repository package";
    public static final String DATA_SOURCE = "data_source";
    public static final String ENTITY_MANAGER_FACTORY = "entity_manager_factory";
    public static final String TRANSACTION_MANAGER = "transaction_manager";

    @Autowired // This is to get your property file entries (DB connection, etc.).
    private Environment environment;

    @Bean(DATA_SOURCE)
    public DataSource dataSource() {
        // Create your datasource from environment properties.
        // Example: org.apache.tomcat.jdbc.pool.DataSource
    }

    @Bean(ENTITY_MANAGER_FACTORY)
    @Autowired
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            @Qualifier(DATA_SOURCE) DataSource dataSource) throws IllegalStateException {
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setDataSource(dataSource);
        Properties jpaProperties = new Properties();
        // Set properties for your JPA provider, e.g. hibernate.dialect, hibernate.format_sql, etc.
        entityManagerFactoryBean.setJpaProperties(jpaProperties);
        entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        entityManagerFactoryBean.setPackagesToScan(MODEL_PACKAGE);
        return entityManagerFactoryBean;
    }

    @Bean(TRANSACTION_MANAGER)
    @Autowired
    @Primary
    public JpaTransactionManager transactionManager(
            @Qualifier(ENTITY_MANAGER_FACTORY) EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory);
        return transactionManager;
    }
}

Need to configure my JPA layer to use a TransactionManager (Spring Cloud Task + Batch register a PlatformTransactionManager unexpectedly)

I am using Spring Cloud Task + Batch in a project.
I plan to use different datasources for business data and Spring audit data on the task. So I configured something like:
@Bean
public TaskConfigurer taskConfigurer() {
    return new DefaultTaskConfigurer(this.singletonNotExposedSpringDatasource());
}

@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer(this.singletonNotExposedSpringDatasource());
}
whereas the main datasource is autoconfigured through JpaBaseConfiguration.
The problem comes when SimpleBatchConfiguration+DefaultBatchConfigurer exposes a PlatformTransactionManager bean, since JpaBaseConfiguration has a @ConditionalOnMissingBean on PlatformTransactionManager. Therefore Batch's PlatformTransactionManager, bound to spring.datasource, takes precedence.
So far, this seems to be caused by this bug.
So I tried to emulate what JpaBaseConfiguration does, defining my own PlatformTransactionManager over my biz datasource/entityManager.
@Primary
@Bean
public PlatformTransactionManager appTransactionManager(final LocalContainerEntityManagerFactoryBean appEntityManager) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(appEntityManager.getObject());
    this.appTransactionManager = transactionManager;
    return transactionManager;
}
Note I have to define it with a name other than transactionManager, otherwise Spring finds 2 beans and complains (regardless of @Primary!).
But now here comes the funny part. When running the tests, everything runs smoothly, tests finish, and DDLs are properly created for both the business and Batch/Task databases; database reads work flawlessly, but business data is not persisted in my testing database, so the final assertThats fail when counting. If I @Autowire the PlatformTransactionManager or EntityManager in my test, everything indicates they are the proper ones. But if I debug within entityRepository.save and execute org.springframework.transaction.interceptor.TransactionAspectSupport.currentTransactionStatus(), it seems the DatasourceTransactionManager from Batch's configuration is taking over, so my custom exposed PlatformTransactionManager is not being used.
So I guess it is not a problem of my PlatformTransactionManager being the primary one, but that something is configuring my JPA layer's TransactionInterceptor to use not the primary bean but Batch's bean named transactionManager.
I also tried making my @Configuration implement TransactionManagementConfigurer and overriding PlatformTransactionManager annotationDrivenTransactionManager(), but still no luck.
Thus, I guess what I am asking is whether there is a way to configure the primary TransactionManager for the JPA Layer.
The problem comes when SimpleBatchConfiguration+DefaultBatchConfigurer expose a PlatformTransactionManager bean,
As you mentioned, this is indeed what was reported in BATCH-2788. The solution we are exploring is to expose the transaction manager bean only if Spring Batch creates it.
In the meantime you can set the property spring.main.allow-bean-definition-overriding=true to allow bean definition overriding and set the transaction manager you want Spring Batch to use with BatchConfigurer#getTransactionManager. In your case, it would be something like:
@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer(this.singletonNotExposedSpringDatasource()) {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new MyTransactionManager();
        }
    };
}
Hope this helps.

Can't set JPA naming strategy after configuring multiple data sources (Spring 1.4.1 / Hibernate 5.x)

I am using Spring Boot 1.4.1 which uses Hibernate 5.0.11. Initially I configured a data source using application.properties like this:
spring.datasource.uncle.url=jdbc:jtds:sqlserver://hostname:port/db
spring.datasource.uncle.username=user
spring.datasource.uncle.password=password
spring.datasource.uncle.dialect=org.hibernate.dialect.SQLServer2012Dialect
spring.datasource.uncle.driverClassName=net.sourceforge.jtds.jdbc.Driver
I configured it with "uncle" because that will be the name of one of multiple data sources that I'll configure. I configured this data source like this, according to the Spring docs:
@Bean
@Primary
@ConfigurationProperties(prefix = "spring.datasource.uncle")
public DataSource uncleDataSource() {
    return DataSourceBuilder.create().build();
}
At this point everything worked fine.
I created an @Entity class without any @Column annotations and let Hibernate figure out the column names; for example, if I have a Java property named idBank, Hibernate will automatically assume the column name is id_bank. This is used when generating DDL, running SQL statements, etc. I want to utilize this feature because I'm going to have a lot of entity classes and don't want to have to create and maintain all of the @Column annotations. At this point, this worked fine.
I then added another data source like this:
spring.datasource.aunt.url=jdbc:sybase:Tds:host2:port/db2
spring.datasource.aunt.username=user2
spring.datasource.aunt.password=password2
spring.datasource.aunt.dialect=org.hibernate.dialect.SybaseDialect
spring.datasource.aunt.driverClassName=com.sybase.jdbc4.jdbc.SybDriver
... and also this, following the Spring docs for setting up multiple data sources. Apparently once you define a 2nd data source, it can't configure the default beans and you have to define your own EntityManager and TransactionManager. So in addition to the data source configured above, I added these configurations:
@Bean
@Primary
PlatformTransactionManager uncleTransactionManager(@Qualifier("uncleEntityManagerFactory") final EntityManagerFactory factory) {
    return new JpaTransactionManager(factory);
}

@Bean
@Primary
LocalContainerEntityManagerFactoryBean uncleEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(uncleDataSource())
            .packages(Uncle.class)
            .persistenceUnit("uncle")
            .build();
}

@Bean
@ConfigurationProperties(prefix = "spring.datasource.aunt")
public DataSource auntDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
PlatformTransactionManager auntTransactionManager(@Qualifier("auntEntityManagerFactory") final EntityManagerFactory factory) {
    return new JpaTransactionManager(factory);
}

@Bean
LocalContainerEntityManagerFactoryBean auntEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(auntDataSource())
            .packages(Aunt.class)
            .persistenceUnit("aunt")
            .build();
}
This works in terms of connecting to the database and trying to fetch data.
HOWEVER (and here's the problem, thanks for reading this far). After these configurations I have lost the implied naming strategy that translates Java column names to snake case names, so now if I have a Java property idBank it incorrectly uses column name idBank instead of id_bank. I would really like to get that functionality back.
There is a JPA property for this spring.jpa.hibernate.naming-strategy, and there are various naming strategy classes in Spring and Hibernate such as org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy. So I tried setting it like this:
spring.jpa.hibernate.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
But it did not work. I tried some variations such as:
spring.datasource.uncle.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
and
spring.datasource.uncle.hibernate.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
but this did not have any effect.
Then I read that in Hibernate 5, the naming strategy was broken up into two parts, "physical" and "implicit" and there are different settings for each. So I tried this, with a few variations:
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
and
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
and
spring.datasource.uncle.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
and
spring.datasource.uncle.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
But none of these worked.
It seems like there should be a way for me to set this configuration in the beans directly, such as on the SessionFactory, but I could not find that API. The documentation around this seems to have some gaps.
I'd really like to avoid setting up a persistence.xml also, which I have not needed up to this point.
So here is where I'm stuck and I'm hoping someone can help out. Really what I would like is a way to debug these property settings, I turned on trace logging in both org.springframework and org.hibernate but there was nothing useful there. I tried stepping through the code when these beans were configured but couldn't find the place where this happens. If anyone has that info and could share it I'd be really grateful.
I had the same problem and fixed it with the following code (adapted to the code in the question - for a single entity manager):
protected Map<String, Object> jpaProperties() {
    Map<String, Object> props = new HashMap<>();
    props.put("hibernate.physical_naming_strategy", SpringPhysicalNamingStrategy.class.getName());
    props.put("hibernate.implicit_naming_strategy", SpringImplicitNamingStrategy.class.getName());
    return props;
}

@Primary
@Bean(name = "defaultEntityManager")
public LocalContainerEntityManagerFactoryBean defaultEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(auntDataSource())
            .packages(Aunt.class)
            .persistenceUnit("aunt")
            .properties(jpaProperties())
            .build();
}
The same as @ewert's answer can be achieved using properties:
# this works
spring.jpa.properties.hibernate.implicit_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
spring.jpa.properties.hibernate.physical_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
# but that doesn't work
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
I think I can explain why the default behaviour disappears, as per your latest question.
As of Spring Boot 2.4.2, the default configuration kicks in in this method of JpaBaseConfiguration:
@Bean
@Primary
@ConditionalOnMissingBean({ LocalContainerEntityManagerFactoryBean.class, EntityManagerFactory.class })
public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder factoryBuilder) {
    Map<String, Object> vendorProperties = getVendorProperties();
    customizeVendorProperties(vendorProperties);
    return factoryBuilder.dataSource(this.dataSource).packages(getPackagesToScan()).properties(vendorProperties)
            .mappingResources(getMappingResources()).jta(isJta()).build();
}
It happens within the customizeVendorProperties method call.
By creating your own LocalContainerEntityManagerFactoryBean bean (two of them, actually), this customization is no longer performed.
If you are using a SessionFactory, you can use the following lines to set the naming strategies:
sessionFactory.setImplicitNamingStrategy(SpringImplicitNamingStrategy.INSTANCE);
sessionFactory.setPhysicalNamingStrategy(new SpringPhysicalNamingStrategy());
The only way I got this running properly with Spring Boot 2+ was by setting the following manually:
@Bean(name = "myEmf")
public LocalContainerEntityManagerFactoryBean sapEntityManagerFactory(
        EntityManagerFactoryBuilder builder, @Qualifier("myDataSource") DataSource dataSource) {
    return builder
            .dataSource(dataSource)
            .packages("my.custom.package")
            .persistenceUnit("myPu")
            .properties(getProperties())
            .build();
}

public Map<String, String> getProperties() {
    val props = new HashMap<String, String>();
    if (isTest()) {
        props.put("hibernate.hbm2ddl.auto", "create");
    } else {
        props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQL95Dialect");
    }
    return props;
}

Spring + multiple H2 instances in memory

Two different H2 instances are to be created in-memory. To make sure this happens, I initialized both instances with the same schema and different data, so that when I query using the DAO, a different set of data is picked from each DataSource. However, this is not happening. What am I doing wrong? How do I name the H2 instances correctly?
@Bean(name = "DS1")
@Primary
public EmbeddedDatabase dataSource1() {
    return new EmbeddedDatabaseBuilder().
            setType(EmbeddedDatabaseType.H2).
            setName("DB1").
            addScript("schema.sql").
            addScript("data-1.sql").
            build();
}

@Bean(name = "DS2")
public EmbeddedDatabase dataSource2() {
    return new EmbeddedDatabaseBuilder().
            setType(EmbeddedDatabaseType.H2).
            setName("DB2").
            addScript("schema.sql").
            addScript("data-2.sql").
            build();
}
You have created two DataSources and have marked one as @Primary -- this is the one which will be used when your EntityManagerFactories and repositories are autoconfigured. That's why both DAOs are accessing the same database.
In order to get around this, you need to declare two separate EntityManagerFactories, as described in the Spring Boot documentation:
http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#howto-use-two-entity-managers
After that, you'll need to declare two separate repositories and tell each repository which EntityManagerFactory to use. To do this, in your @EnableJpaRepositories annotation you'll have to specify the correct EntityManagerFactory. This article describes very nicely how to do that:
http://scattercode.co.uk/2013/11/18/spring-data-multiple-databases/
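A minimal sketch of that wiring, with hypothetical package and bean names (com.example.repo.ds1, ds1EntityManagerFactory, ds1TransactionManager) that you would adapt to your own configuration; a second @Configuration class would repeat this for the other repository package and factory:

```java
// Hypothetical sketch: binds one repository package to one EntityManagerFactory.
@Configuration
@EnableJpaRepositories(
        basePackages = "com.example.repo.ds1",
        entityManagerFactoryRef = "ds1EntityManagerFactory",
        transactionManagerRef = "ds1TransactionManager")
public class Ds1RepositoryConfig {
}
```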
It would be nice if Spring Boot supported autoconfiguration with two DataSources, but I don't think it's going to happen soon:
https://github.com/spring-projects/spring-boot/issues/808
UPDATE
The author of the above article has published an updated approach:
https://scattercode.co.uk/2016/01/05/multiple-databases-with-spring-boot-and-spring-data-jpa/
The issue was not with multiple instances of H2, but with DataSource injection.
I solved it by passing the qualifier in the method argument:
@Autowired
@Bean(name = "jdbcTemplate")
public JdbcTemplate getJdbcTemplate(@Qualifier("myDataSource") DataSource dataSource) {
    return new JdbcTemplate(dataSource);
}

Spring Bean Extensions

I have a Spring MVC project with a generic application context XML file. This file defines the generic configuration of my application, such as the base property file for i18n, the data source to connect to the database, and so on. In this context file I also want to define the session factory, which will have the base configuration such as the data source to use, second-level caching (ehcache), and so on. But it will not contain the list of entity beans that my application would load. I want to keep the mapping of the entity beans only in separate files and load them based on need.
Is there a possibility to extend the session factory that I have defined in the base file and only add the additional entity beans? I will eventually have several Spring configuration files, each loading a separate set of entities. Can this be achieved?
There are several possibilities.
You can use PropertyPlaceholderConfigurer to externalize the entity list to a property file. (You can use SpEL in the property file.)
You can use an abstract bean definition and use it as parent in other sessionFactory beans, then import them based on an Environment PropertySource.
Note that Hibernate's SessionFactory is immutable after it is built, and SessionFactoryBean builds the SessionFactory in its afterPropertiesSet method, so the work of setting up the SessionFactoryBean that you want must be done by some BeanFactoryPostProcessor.
EDIT
After reading your comment, I think that you could declare EntityClassHolder beans and use the autowired-collections facility to gather all entities in an EntityClassFactoryBean that you can inject into a single SessionFactoryBean. But I'm not sure if that is what you want to do:
public class EntityClassHolder {
    List<Class<?>> entityClasses;

    public List<Class<?>> getEntityClasses() {
        return entityClasses;
    }

    public void setEntityClasses(List<Class<?>> entityClasses) {
        this.entityClasses = entityClasses;
    }
}

public class EntityClassFactoryBean extends AbstractFactoryBean<List<Class<?>>> {

    @Autowired
    List<EntityClassHolder> list;

    @Override
    public Class<?> getObjectType() {
        return List.class;
    }

    @Override
    protected List<Class<?>> createInstance() throws Exception {
        ArrayList<Class<?>> classList = new ArrayList<Class<?>>();
        for (EntityClassHolder ech : list) {
            classList.addAll(ech.getEntityClasses());
        }
        return classList;
    }
}
Now, if you have several applicationContext-xxx.xml files, for example, the SessionFactory will be configured with the entity classes defined in the EntityClassHolder beans of whichever file you load.
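The XML wiring for this could look roughly like the following sketch; the bean names, the com.example package, and the choice of Hibernate version package (orm.hibernate5 here) are assumptions to adapt to your project:

```xml
<!-- Hypothetical wiring: the aggregated class list feeds one shared SessionFactory -->
<bean id="entityClasses" class="com.example.EntityClassFactoryBean"/>

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- Spring converts the List<Class<?>> from the factory bean into the array property -->
    <property name="annotatedClasses" ref="entityClasses"/>
</bean>
```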
