Hibernate session initialization very slow in production with many entity definitions - performance

I have a Spring Boot app with 296 persisted objects annotated with @Entity, using Hibernate as the persistence implementation. In development, using Eclipse on a Core i7-6700 (4x 3.4 GHz - 4 GHz Turbo, SSD), the SessionFactory is initialized in almost 6 seconds.
My production server is a virtualized Windows server on a Xeon E5-2620v2 (6x 2.1 GHz - 2.6 GHz, HDD with SSD cache). I have no control over the resources allocated by the virtualization layer.
When I run my app on this server, it takes about 25 seconds to instantiate the SessionFactory.
There is no speed problem once the app is fully initialized: the execution speed of all web service/database queries is normal.
Naively, I assume Hibernate generates the SQL queries for the basic CRUD operations when the SessionFactory is instantiated.
I tried adding @DynamicInsert and @DynamicUpdate on all @Entity classes, but it did not improve the startup time; the SessionFactory instantiation time is the same.
I use the following Spring / Hibernate / other relevant library versions (from my pom.xml):
<hibernate.version>5.4.5.Final</hibernate.version>
<spring.version>5.1.9.RELEASE</spring.version>
<springSecurity.version>5.1.6.RELEASE</springSecurity.version>
<springBoot.version>2.1.8.RELEASE</springBoot.version>
<mssql-jdbc.version>7.4.1.jre11</mssql-jdbc.version>
<apache-commons-dbcp2.version>2.5.0</apache-commons-dbcp2.version>
<eh-cache.version>2.10.6</eh-cache.version>
<javaassist.version>3.25.0-GA</javaassist.version>
This is my Hibernate SessionFactory configuration (part of my application.mssql.properties file):
hibernate.dialect = org.hibernate.dialect.SQLServer2012Dialect
# useless because I use GenerationType.IDENTITY
# for the primary key (id) of all entities
hibernate.jdbc.batch_size = 20
hibernate.cache.use_query_cache = false
hibernate.cache.use_second_level_cache = true
hibernate.cache.provider_class = org.hibernate.cache.EhCacheProvider
hibernate.cache.region.factory_class = org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
hibernate.connection.driver_class = com.microsoft.sqlserver.jdbc.SQLServerDriver
hibernate.enable_lazy_load_no_trans = true
hibernate.cache.use_reference_entries = true
hibernate.connection.url = jdbc:sqlserver://localhost:1433;databaseName=master;user=usr;password=pwd;
hibernate.connection.username = usr
hibernate.connection.password = pwd
# Multitenancy configuration
hibernate.multiTenancy = SCHEMA
hibernate.multi_tenant_connection_provider = MultiTenantConnectionProvider
hibernate.tenant_identifier_resolver = CurrentTenantIdentifierResolver
This is how I define my Hibernate SessionFactory (I know I could use a standard JPA EntityManagerFactory, but the app was built this way and I don't want to change it; I have also simplified the class content for readability):
@SpringBootApplication()
@EnableAutoConfiguration(
excludeName = {
"org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration"
}
)
@ComponentScan(
basePackages = {
// my packages list
}
)
@Configuration
@PropertySource({ "classpath:application.mssql.properties" })
@EnableCaching
@EnableTransactionManagement
public class ApplicationConfiguration {
@Autowired
private Environment env;
/**
* @return SQL Server pooled connection using Apache DBCP2
*/
@Primary
@Bean
public DataSource dataSource() {
BasicDataSource ds = new BasicDataSource();
ds.setUrl(this.env.getProperty("hibernate.connection.url"));
ds.setUsername(this.env.getProperty("hibernate.connection.username"));
ds.setPassword(this.env.getProperty("hibernate.connection.password"));
ds.setDriverClassName(this.env.getProperty("hibernate.connection.driver_class"));
ds.setMaxIdle(10);
ds.setMaxOpenPreparedStatements(100);
return ds;
}
/**
* @return main sessionFactory (the most used sessionFactory in the app)
*/
@Primary
@Autowired
@Bean
public LocalSessionFactoryBean sessionFactory(@Qualifier("dataSource") DataSource dataSource,
MultiTenantConnectionProvider multiTenantConnectionProviderImpl,
CurrentTenantIdentifierResolver currentTenantIdentifierResolverImpl) {
LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
sessionFactory.setPackagesToScan("my.root.pojo.package");
// get hibernate properties from this.env (application.mssql.properties file)
Properties hibernateProperties = getHibernateProperties();
hibernateProperties.put(AvailableSettings.MULTI_TENANT, "SCHEMA");
hibernateProperties.put(AvailableSettings.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProviderImpl);
hibernateProperties.put(AvailableSettings.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolverImpl);
sessionFactory.setHibernateProperties(hibernateProperties);
sessionFactory.setDataSource(dataSource);
return sessionFactory;
// FROM THIS POINT --->
}
/**
* @return transactionManager for the main sessionFactory
*/
@Bean
@Autowired
@Primary
public HibernateTransactionManager transactionManager(@Qualifier("sessionFactory") SessionFactory sessionFactory) {
// ----> TO THIS POINT took 6 sec in development, 25 sec in production
HibernateTransactionManager txManager = new HibernateTransactionManager();
txManager.setNestedTransactionAllowed(true);
txManager.setSessionFactory(sessionFactory);
return txManager;
}
// some other stuff:
// - sessionFactory beans for other databases,
// - getHibernateProperties() implementation,
// - initServer() method annotated with @Bean(initMethod = "init")
// - etc ...
}
Is it normal to have this speed difference?
Have I forgotten something in the Hibernate configuration?
Any help will be welcome.

Related

SpringBoot - Websphere - Java Config - web.xml - resource-ref

I have a web application with 3 JNDI resources defined in web.xml: one for the database and two for DynaCache.
How can I convert them into Java configuration in Spring Boot?
The following is the sample resource-ref configuration in the application:
<resource-ref>
<description>Resource reference for database</description>
<res-ref-name>jdbc/dbname</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
<resource-ref id="cache1">
<description>cache1 description</description>
<res-ref-name>cache/cache1</res-ref-name>
<res-type>com.ibm.websphere.cache.DistributedMap</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
<resource-ref id="cache2">
<description>cache2 description</description>
<res-ref-name>cache/cache2</res-ref-name>
<res-type>com.ibm.websphere.cache.DistributedMap</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
Thanks
Multiple Databases in Spring Boot
Source: Baeldung
Spring Boot can simplify the configuration above.
By default, Spring Boot will instantiate its default DataSource with the configuration properties prefixed by spring.datasource.*:
spring.datasource.jdbcUrl = [url]
spring.datasource.username = [username]
spring.datasource.password = [password]
We now want to keep using the same way to configure the second DataSource, but with a different property namespace:
spring.second-datasource.jdbcUrl = [url]
spring.second-datasource.username = [username]
spring.second-datasource.password = [password]
Because we want the Spring Boot autoconfiguration to pick up those different properties (and instantiate two different DataSources), we'll define two configuration classes similar to the previous sections:
@Configuration
@PropertySource({"classpath:persistence-multiple-db-boot.properties"})
@EnableJpaRepositories(
basePackages = "com.baeldung.multipledb.dao.user",
entityManagerFactoryRef = "userEntityManager",
transactionManagerRef = "userTransactionManager")
public class PersistenceUserAutoConfiguration {
@Primary
@Bean
@ConfigurationProperties(prefix="spring.datasource")
public DataSource userDataSource() {
return DataSourceBuilder.create().build();
}
// userEntityManager bean
// userTransactionManager bean
}
@Configuration
@PropertySource({"classpath:persistence-multiple-db-boot.properties"})
@EnableJpaRepositories(
basePackages = "com.baeldung.multipledb.dao.product",
entityManagerFactoryRef = "productEntityManager",
transactionManagerRef = "productTransactionManager")
public class PersistenceProductAutoConfiguration {
@Bean
@ConfigurationProperties(prefix="spring.second-datasource")
public DataSource productDataSource() {
return DataSourceBuilder.create().build();
}
// productEntityManager bean
// productTransactionManager bean
}
Now we have defined the data source properties inside persistence-multiple-db-boot.properties according to the Boot autoconfiguration convention.
The interesting part is annotating the data source bean creation method with @ConfigurationProperties. We just need to specify the corresponding config prefix. Inside this method, we're using a DataSourceBuilder, and Spring Boot will automatically take care of the rest.
But how do the configured properties get injected into the DataSource configuration?
When calling the build() method on the DataSourceBuilder, it'll call its private bind() method:
public T build() {
Class<? extends DataSource> type = getType();
DataSource result = BeanUtils.instantiateClass(type);
maybeGetDriverClassName();
bind(result);
return (T) result;
}
This private method performs much of the autoconfiguration magic, binding the resolved configuration to the actual DataSource instance:
private void bind(DataSource result) {
ConfigurationPropertySource source = new
MapConfigurationPropertySource(this.properties);
ConfigurationPropertyNameAliases aliases = new
ConfigurationPropertyNameAliases();
aliases.addAliases("url", "jdbc-url");
aliases.addAliases("username", "user");
Binder binder = new Binder(source.withAliases(aliases));
binder.bind(ConfigurationPropertyName.EMPTY, Bindable.ofInstance(result));
}
Although we don't have to touch any of this code ourselves, it's still useful to know what's happening under the hood of the Spring Boot autoconfiguration.
Besides this, the transaction manager and entity manager bean configuration is the same as in a standard Spring application.
@Configuration
@EnableTransactionManagement
@ComponentScan("org.example")
@EnableJpaRepositories(basePackages = "org.yours.persistence.dao")
public class PersistenceJNDIConfig {
@Autowired
private Environment env;
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory()
throws NamingException {
LocalContainerEntityManagerFactoryBean em
= new LocalContainerEntityManagerFactoryBean();
em.setDataSource(dataSource());
// rest of entity manager configuration
return em;
}
@Bean
public DataSource dataSource() throws NamingException {
return (DataSource) new JndiTemplate().lookup("jdbc/dbname");
}
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
JpaTransactionManager transactionManager = new JpaTransactionManager();
transactionManager.setEntityManagerFactory(emf);
return transactionManager;
}
// rest of persistence configuration
}
We’re going to use a simple model with the @Entity annotation, a generated id and a name:
@Entity
public class Foo {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "ID")
private Long id;
@Column(name = "NAME")
private String name;
// default getters and setters
}
Let’s define a simple repository:
@Repository
public class FooDao {
@PersistenceContext
private EntityManager entityManager;
public List<Foo> findAll() {
return entityManager
.createQuery("from " + Foo.class.getName()).getResultList();
}
}
And lastly, let’s create a simple service:
@Service
@Transactional
public class FooService {
@Autowired
private FooDao dao;
public List<Foo> findAll() {
return dao.findAll();
}
}
With this, you have everything you need in order to use your JNDI datasource in your Spring application.
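The original question also asked about the two DynaCache resource-refs. A hedged sketch (not part of the answer above; it assumes the WebSphere runtime API with com.ibm.websphere.cache.DistributedMap is on the classpath) of exposing those as beans with the same JndiTemplate lookup pattern used for the DataSource:
@Bean
public com.ibm.websphere.cache.DistributedMap cache1() throws NamingException {
// same JNDI lookup pattern as the dataSource() bean; "cache/cache1" is the res-ref-name from web.xml
return (com.ibm.websphere.cache.DistributedMap) new JndiTemplate().lookup("cache/cache1");
}
@Bean
public com.ibm.websphere.cache.DistributedMap cache2() throws NamingException {
return (com.ibm.websphere.cache.DistributedMap) new JndiTemplate().lookup("cache/cache2");
}
These beans can then be injected wherever the application previously looked up the caches through the container.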

Spring Cloud Task - specify database config

I have Spring Cloud Task that loads data from SQL Server to Cassandra DB which will be run on Spring Cloud Data Flow.
One of the requirements of Spring Cloud Task is to provide a relational database to persist metadata like the task execution state. But I don't want to use either of the above databases for that. Instead, I have to specify a third database for this persistence. However, Spring Cloud Task seems to automatically pick up the SQL Server data source properties from application.properties. How can I specify another DB for task state persistence?
My Current properties:
spring.datasource.url=jdbc:sqlserver://iphost;databaseName=dbname
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.show-sql=false
#spring.jpa.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.hibernate.ddl-auto=none
spring.data.cassandra.contact-points=ip
spring.data.cassandra.port=9042
spring.data.cassandra.username=username
spring.data.cassandra.password=password
spring.data.cassandra.keyspace-name=mykeyspace
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
Update 1:
I added the code below to point to a third database, as suggested by Michael Minella. Now Spring Cloud Task is able to connect to this DB and persist its state. But now my batch job source queries are also connecting to this database. The only thing I changed was adding a data source for the task.
spring.task.datasource.url=jdbc:postgresql://host:5432/testdb?stringtype=unspecified
spring.task.datasource.username=user
spring.task.datasource.password=password
spring.task.datasource.driverClassName=org.postgresql.Driver
@Configuration
public class DataSourceConfigs {
@Bean(name = "taskDataSource")
@ConfigurationProperties(prefix="spring.task.datasource")
public DataSource getDataSource() {
return DataSourceBuilder.create().build();
}
}
@Configuration
public class DDTaskConfigurer extends DefaultTaskConfigurer {
@Autowired
public DDTaskConfigurer(@Qualifier("taskDataSource") DataSource dataSource) {
super(dataSource);
}
}
Update #2:
@Component
@StepScope
public class MyItemReader extends RepositoryItemReader<Scan> implements InitializingBean {
@Autowired
private ScanRepository repository;
private Integer lastScanIdPulled = null;
public MyItemReader(Integer _lastIdPulled) {
super();
if(_lastIdPulled == null || _lastIdPulled <= 0){
lastScanIdPulled = 0;
} else {
lastScanIdPulled = _lastIdPulled;
}
}
@PostConstruct
protected void setUpRepo() {
final Map<String, Sort.Direction> sorts = new HashMap<>();
sorts.put("id", Direction.ASC);
this.setRepository(this.repository);
this.setSort(sorts);
this.setMethodName("findByScanGreaterThanId");
List<Object> methodArgs = new ArrayList<Object>();
System.out.println("lastScanIdpulled >>> " + lastScanIdPulled);
if(lastScanIdPulled == null || lastScanIdPulled <= 0){
lastScanIdPulled = 0;
}
methodArgs.add(lastScanIdPulled);
this.setArguments(methodArgs);
}
}
@Repository
public interface ScanRepository extends JpaRepository<Scan, Integer> {
@Query("...")
Page<Scan> findAllScan(final Pageable pageable);
@Query("...")
Page<Scan> findByScanGreaterThanId(int id, final Pageable pageable);
}
Update #3:
If I add a data source config for the repository, I now get the exception below. Before you mention that one of the data sources needs to be declared @Primary: I already tried that.
Caused by: java.lang.IllegalStateException: Expected one datasource and found 2
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration.taskBatchExecutionListener(TaskBatchAutoConfiguration.java:65) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration$$EnhancerBySpringCGLIB$$baeae6b9.CGLIB$taskBatchExecutionListener$0(<generated>) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration$$EnhancerBySpringCGLIB$$baeae6b9$$FastClassBySpringCGLIB$$5a898c9.invoke(<generated>) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) ~[spring-core-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) ~[spring-context-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfigu
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
entityManagerFactoryRef = "myEntityManagerFactory",
basePackages = { "com.company.dd.collector.tool" },
transactionManagerRef = "TransactionManager"
)
public class ToolDbConfig {
@Bean(name = "myEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean
myEntityManagerFactory(
EntityManagerFactoryBuilder builder,
@Qualifier("ToolDataSource") DataSource dataSource
) {
return builder
.dataSource(dataSource)
.packages("com.company.dd.collector.tool")
.persistenceUnit("tooldatasource")
.build();
}
@Bean(name = "myTransactionManager")
public PlatformTransactionManager transactionManager(
@Qualifier("myEntityManagerFactory") EntityManagerFactory
entityManagerFactory
) {
return new JpaTransactionManager(entityManagerFactory);
}
}
@Configuration
public class DataSourceConfigs {
@Bean(name = "taskDataSource")
@ConfigurationProperties(prefix="spring.task.datasource")
public DataSource getDataSource() {
return DataSourceBuilder.create().build();
}
@Primary
@Bean(name = "ToolDataSource")
@ConfigurationProperties(prefix = "tool.datasource")
public DataSource dataSource() {
return DataSourceBuilder.create().build();
}
}
You need to create a TaskConfigurer to specify the DataSource to be used. You can read about this interface in the documentation here: https://docs.spring.io/spring-cloud-task/1.1.1.RELEASE/reference/htmlsingle/#features-task-configurer
The javadoc can be found here: https://docs.spring.io/spring-cloud-task/docs/current/apidocs/org/springframework/cloud/task/configuration/TaskConfigurer.html
UPDATE 1:
When using more than one DataSource, both Spring Batch and Spring Cloud Task follow the same paradigm in that they both have *Configurer interfaces that need to be used to specify what DataSource to use. For Spring Batch, you use the BatchConfigurer (typically by just extending the DefaultBatchConfigurer) and as noted above, the TaskConfigurer is used in Spring Cloud Task. This is because when there is more than one DataSource, the framework has no way of knowing which one to use.
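A minimal sketch of what extending DefaultBatchConfigurer could look like in this setup (class name assumed; it mirrors the DDTaskConfigurer from Update 1 and assumes spring-batch-core is on the classpath), so that Spring Batch's own metadata tables also use the dedicated task data source rather than the auto-configured one:
@Configuration
public class DDBatchConfigurer extends DefaultBatchConfigurer {
@Autowired
public DDBatchConfigurer(@Qualifier("taskDataSource") DataSource dataSource) {
// Spring Batch will build its JobRepository against this DataSource
super(dataSource);
}
}
The business repositories keep their own entity manager and data source (as in Update #3); only the batch and task metadata are redirected.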

Quartz JDBCJobStore with RoutingDataSource

For my application, we are using Spring's
org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource
The target data sources are configured and chosen based on the request's domain URL.
Eg:
qa.example.com ==> target datasource = DB1
qa-test.example.com ==> target datasource = DB2
Following is the configuration for the same
#Bean(name = "dataSource")
public DataSource dataSource() throws PropertyVetoException, ConfigurationException {
EERoutingDatabase routingDB = new EERoutingDatabase();
Map<Object, Object> targetDataSources = datasourceList();
routingDB.setTargetDataSources(targetDataSources);
return routingDB;
}
public class EERoutingDatabase extends AbstractRoutingDataSource {
#Override
protected Object determineCurrentLookupKey() {
// This is derived from the request's URL/Domain
return SessionUtil.getDataSourceHolder();
}
}
The task now is to use Quartz's JDBCJobStore to store the Quartz jobs/triggers.
The preferred option is JobStoreCMT.
We used the following config:
@Configuration
public class QuartzConfig {
private static final Logger LOG = LoggerFactory.getLogger(QuartzConfig.class);
private static final String QUARTZ_CONFIG_FILE = "ee-quartz.properties";
@Autowired
private DataSource dataSource;
@Autowired
private PlatformTransactionManager transactionManager;
@Autowired
private ApplicationContext applicationContext;
/**
* Spring wrapper over Quartz Scheduler bean
*/
@Bean(name="quartzRealTimeScheduler")
SchedulerFactoryBean schedulerFactoryBean() {
LOG.info("Creating QUARTZ Scheduler for real time Job invocation");
SchedulerFactoryBean factory = new SchedulerFactoryBean();
factory.setConfigLocation(new ClassPathResource(QUARTZ_CONFIG_FILE));
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.setJobFactory(springBeanJobFactory());
factory.setWaitForJobsToCompleteOnShutdown(true);
factory.setApplicationContextSchedulerContextKey("applicationContext");
return factory;
}
@Bean
public SpringBeanJobFactory springBeanJobFactory() {
AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
jobFactory.setApplicationContext(applicationContext);
jobFactory.setIgnoredUnknownProperties("applicationContext");
return jobFactory;
}
}
and following is the config in quartz properties file (ee-quartz.properties)
org.quartz.scheduler.instanceId=AUTO
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.misfireThreshold: 60000
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
On starting the application, following exception occurs
Caused by: java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]
at org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.determineTargetDataSource(AbstractRoutingDataSource.java:202) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at com.expertly.config.EERoutingDatabase.determineTargetDataSource(EERoutingDatabase.java:60) ~[EERoutingDatabase.class:na]
at org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.getConnection(AbstractRoutingDataSource.java:164) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:289) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:329) ~[spring-jdbc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.scheduling.quartz.LocalDataSourceJobStore.initialize(LocalDataSourceJobStore.java:149) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
at org.quartz.impl.StdSchedulerFactory.instantiate(StdSchedulerFactory.java:1321) ~[quartz-2.2.2.jar:na]
at org.quartz.impl.StdSchedulerFactory.getScheduler(StdSchedulerFactory.java:1525) ~[quartz-2.2.2.jar:na]
at org.springframework.scheduling.quartz.SchedulerFactoryBean.createScheduler(SchedulerFactoryBean.java:599) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
at org.springframework.scheduling.quartz.SchedulerFactoryBean.afterPropertiesSet(SchedulerFactoryBean.java:482) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1612) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
It seems that:
Quartz is trying to create connections with my data source upfront.
Since my data source isn't a concrete one (it's a routing data source) and, in addition, doesn't know which target DB to connect to at configuration time, it fails.
Is there any provision for using Quartz with a RoutingDataSource? If not, what would be the next best thing?
Ideally you could try making the SchedulerFactoryBean @Lazy.
But it seems lazy initialization will not work (there is a known bug; a workaround is also listed in its comments):
Create the schedulerFactory bean dynamically after the ContextRefreshedEvent is received on the root context.
Let us know if this works.
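A rough sketch of that workaround (class name and wiring are assumptions, not from the original answer): listen for ContextRefreshedEvent, make sure a lookup key or a default target DataSource can be resolved at that point, and only then build and start the SchedulerFactoryBean manually.
@Component
public class QuartzStartupListener implements ApplicationListener<ContextRefreshedEvent> {
@Autowired
private DataSource dataSource;
@Autowired
private PlatformTransactionManager transactionManager;
private SchedulerFactoryBean factory;
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
if (factory != null) {
return; // guard: the event can fire more than once (child contexts, refreshes)
}
try {
// A routing key must be resolvable here (e.g. a default/admin DB set in SessionUtil),
// otherwise the lookup fails exactly as it did at startup.
factory = new SchedulerFactoryBean();
factory.setConfigLocation(new ClassPathResource("ee-quartz.properties"));
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.setApplicationContext(event.getApplicationContext());
factory.setApplicationContextSchedulerContextKey("applicationContext");
factory.afterPropertiesSet();
factory.start();
} catch (Exception e) {
throw new IllegalStateException("Could not start Quartz scheduler", e);
}
}
}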

Spring/Hibernate @Transactional on method prevents update SQL

Spring v4.2.5 Release
Hibernate v5.1.0.Final
I have a JUnit test method which performs a load, updates a property and calls saveOrUpdate(bean).
It behaves oddly in that adding @Transactional to the method signature prevents the update SQL from being performed (no SQL is generated in the log).
Remove the @Transactional and the update SQL is generated and the database is updated.
@Configuration
@EnableTransactionManagement
@PropertySource(
{
"classpath:jdbc.properties",
"classpath:hibernate.properties"
})
@ComponentScan(value = "com.savant.test.spring.donorservice.core.dao")
public class ApplicationContext {
@Bean(destroyMethod = "close")
@Autowired
public DataSource dataSource() {
// Hikari is a connection pool manager.
HikariDataSource dataSource = new HikariDataSource();
dataSource.setUsername(env.getProperty("jdbc.username"));
dataSource.setPassword(env.getProperty("jdbc.password"));
dataSource.setJdbcUrl(env.getProperty("jdbc.url"));
dataSource.setDriverClassName(env.getProperty("jdbc.driverClassName"));
dataSource.setIsolateInternalQueries(true);
System.out.println(dataSource);
dataSource.setConnectionTestQuery("SELECT count(*) from system.onerow");
dataSource.setMaximumPoolSize(3);
dataSource.setAutoCommit(false);
return dataSource;
}
@Bean
@Autowired
public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
sessionFactory.setDataSource(dataSource);
sessionFactory.setPackagesToScan(package_to_scan);
sessionFactory.setHibernateProperties(hibernateProperties());
return sessionFactory;
}
private Properties hibernateProperties() {
Properties hibernateProperties = new Properties();
hibernateProperties.put(hibernate_dialect, env.getProperty(hibernate_dialect));
hibernateProperties.put(hibernate_current_session_context_class, env.getProperty(hibernate_current_session_context_class));
hibernateProperties.put(hibernate_connection_autocommit, env.getProperty(hibernate_connection_autocommit));
hibernateProperties.put(hibernate_format_sql, env.getProperty(hibernate_format_sql));
hibernateProperties.put(hibernate_hbm2ddl_auto, env.getProperty(hibernate_hbm2ddl_auto));
hibernateProperties.put(hibernate_show_sql, env.getProperty(hibernate_show_sql));
// hibernateProperties.put(hibernate_connection_provider_class, env.getProperty(hibernate_connection_provider_class));
return hibernateProperties;
}
@Bean
@Autowired
public HibernateTransactionManager transactionManager(SessionFactory sessionFactory) {
HibernateTransactionManager txManager = new HibernateTransactionManager(sessionFactory);
return txManager;
}
The entities have been auto-generated using Netbeans 'Entity classes from Database'.
The main Entity has
A one-to-one relationship with FetchType.EAGER
A one-to-many relationship with FetchType.EAGER (it was LAZY - read below).
The test method looks like this.
@Test
@Transactional
public void c_testUpdateAddress1() {
System.out.println("findById");
String id = donorId;
Donor donor = donorDao.findById(id);
donor.setAbogrp(" O");
for (DonorAddress da : donor.getDonorAddressCollection()) {
da.setAddr1("Updated line");
System.out.println(da.getDonorAddressPK().getAddrtype() + " " + da.getAddr1());
}
System.out.println("Update");
Donor savedDonor = donorDao.save(donor);
}
Without @Transactional the update SQL is generated and the database is updated.
With @Transactional the update SQL is not generated and does not appear in the log. There are no exceptions; stepping over the save method in my DAO implementation, everything appears fine. The bean passed in has the correct (updated) field values, and the bean returned has the updated field values - just no SQL is generated.
@Override
public Donor save(Donor bean) {
getSession().saveOrUpdate(bean);
return bean;
}
The reason I need @Transactional is to allow the address collection to be LAZY.
Without @Transactional I can't access the addresses as LAZY because of the exception "failed to lazily initialize a collection of role: could not initialize proxy - no Session",
which is as expected.
A transaction is started as soon as a @Transactional method is detected and committed as soon as that method call ends, which in the case of a test is after the end of the test method. So during your tests you will not see the SQL.
Also, when using @Transactional on a Spring-based test it will by default do a rollback instead of a commit. See the reference guide for the default behavior and how to change it.
Answer was provided by M Deinum as a comment.
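A minimal sketch of the two usual ways to actually see the UPDATE in such a test (assuming spring-test is available and the test can reach the SessionFactory; @Commit is available since Spring 4.2, otherwise @Rollback(false) does the same):
@Test
@Transactional
@Commit // let the test transaction commit, so the UPDATE really reaches the database
public void c_testUpdateAddress1_visibleSql() {
Donor donor = donorDao.findById(donorId);
donor.setAbogrp(" O");
donorDao.save(donor);
// Alternatively, keep the default rollback but force a flush: Hibernate then
// generates and logs the UPDATE inside the still-open test transaction.
sessionFactory.getCurrentSession().flush();
}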

How to define HSQLDB properties in Spring JPA using annotations

I am running HSQLDB as an in-memory database, started via the Swing run manager. I have to connect to the HSQLDB server from a Spring JPA repository using annotations.
My repository class:
@RepositoryRestResource
public interface Vehicle extends JpaRepository<Vehicle, BigInteger> {
public List<Vehicle> findAll(Sort sort);
}
Service method:
@Service
public class LocationService {
@Autowired
VehicleRepository vehicleRepository = null;
/**
* This method is to get all the locations from the repository
*/
public List<Vehicle> getVehicles() {
Order order = new Order(Direction.ASC, "vehicleCode");
Sort sort = new Sort(order);
List<Vehicle> vehicles = vehicleRepository.findAll(sort);
System.out.println("inside service");
return vehicles;
}
}
Can anyone help me achieve a Spring JPA connection to HSQLDB using annotations?
I assume you don't use Spring Boot:
You need a @Configuration class (it basically is the new way to configure Spring applications in Java) with @EnableJpaRepositories, which turns on Spring Data JPA / Spring Data REST for you. You will also have to specify your entity manager, transaction manager and data source beans. Example below:
@Configuration
@EnableJpaRepositories("your.package.with.repositories")
public class DBConfig {
@Bean
public JpaTransactionManager transactionManager() {
JpaTransactionManager transactionManager = new JpaTransactionManager();
transactionManager.setEntityManagerFactory(entityManagerFactory().getObject());
return transactionManager;
}
@Bean
public DataSource dataSource() {
BasicDataSource dataSource = new BasicDataSource();
DBConfigurationCommon.configureDB(dataSource, your_jdbc_url_here, db_username_here, db_password_here);
return dataSource;
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
entityManagerFactoryBean.setDataSource(dataSource());
entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
//you may need to define new Properties(); here with hibernate dialect and add it to entity manager factory
entityManagerFactoryBean.setPackagesToScan("your_package_with_domain_classes_here");
return entityManagerFactoryBean;
}
}
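A small sketch of the JPA properties the comment in entityManagerFactory() refers to (the values are assumptions for an HSQLDB setup; adjust them to your environment):
private Properties jpaProperties() {
Properties properties = new Properties();
// HSQLDB dialect so Hibernate generates compatible SQL
properties.put("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
// create/update the schema automatically; prefer "validate" or "none" in production
properties.put("hibernate.hbm2ddl.auto", "update");
return properties;
}
These can then be attached in entityManagerFactory() with entityManagerFactoryBean.setJpaProperties(jpaProperties()) before returning the bean.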
