Spring Boot web services with multiple data sources, only one working

I have developed two webservices using the Spring Boot framework and I have them in the same project. Each webservice uses a different DB, say ws1 uses Oracle1 and ws2 uses Oracle2. I have defined a DataBaseConfig with the bean definitions, but when I run the app only one webservice works (and it's always the same one).
DataBaseConfig
@Configuration
public class DataBaseConfig {

    @Bean(name = "ora1")
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource mysqlDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "ora2")
    @ConfigurationProperties(prefix = "spring.secondDatasource")
    public DataSource sqliteDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "clients")
    @Autowired
    @ConfigurationProperties(prefix = "spring.datasource")
    @Qualifier("datasource")
    public JdbcTemplate slaveJdbcTemplate(DataSource datasource) {
        return new JdbcTemplate(datasource);
    }

    @Bean(name = "places")
    @Autowired
    @Primary
    @ConfigurationProperties(prefix = "spring.secondDatasource")
    @Qualifier("secondDatasource")
    public JdbcTemplate masterJdbcTemplate(DataSource secondDatasource) {
        return new JdbcTemplate(secondDatasource);
    }
}
I have the service definitions with the SQL statements:
@Service
public class ClientsService {

    @Autowired
    @Qualifier("clients")
    private JdbcTemplate jdbcTemplate;
and the other service
@Service
public class PlacesService {

    @Autowired
    @Qualifier("places")
    private JdbcTemplate jdbcTemplate;
Then in each controller I have the @RequestMapping mapping. When I run the app I get no connection-related errors, and if I separate the webservices into two projects, each works fine.

You have a few things going wrong here, including some unnecessary annotations. See below; note the location of @Qualifier and the qualifier names:
#Bean(name = "clients")
public JdbcTemplate slaveJdbcTemplate(#Qualifier("ora1") DataSource datasource) {
return new JdbcTemplate(datasource);
}
#Bean(name = "places")
#Primary
public JdbcTemplate masterJdbcTemplate(#Qualifier("ora2") DataSource secondDatasource) {
return new JdbcTemplate(secondDatasource);
}

Instead of resolving by bean name, which is a bad idea IMO because it's not typesafe, why don't you use constructor injection and create the services in the configuration class (ditch the @Service annotation)? Create the DataSource and JdbcTemplate beans as usual, don't give them any names (the default is the method name), and also create new PlacesService(placesJdbcTemplate()). The result is a lot simpler code, as in the sketch below.
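A minimal sketch of that approach, assuming ClientsService and PlacesService are rewritten to take their JdbcTemplate as a constructor argument (method names here are illustrative, not from the question):

@Configuration
public class DataBaseConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource oracle1DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.secondDatasource")
    public DataSource oracle2DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public ClientsService clientsService() {
        // the service keeps its own JdbcTemplate, wired explicitly to the first datasource
        return new ClientsService(new JdbcTemplate(oracle1DataSource()));
    }

    @Bean
    public PlacesService placesService() {
        // and this one is wired explicitly to the second datasource
        return new PlacesService(new JdbcTemplate(oracle2DataSource()));
    }
}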
This is assuming you want both databases active at runtime. If not, use @Profile.
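A brief sketch of the @Profile variant (the profile name is illustrative, not from the question):

@Configuration
@Profile("oracle1")
public class Oracle1Config {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }
}

Started with --spring.profiles.active=oracle1, only the matching configuration (and therefore only one database) is registered.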

Related

Spring Disable @Transactional from Configuration java file

I have a code base which is used by two different applications. Some of my Spring service classes have the @Transactional annotation. On server start I would like to disable @Transactional based on some configuration.
Below is my configuration class.
@Configuration
@EnableTransactionManagement
@PropertySource("classpath:application.properties")
public class WebAppConfig {

    private static final String PROPERTY_NAME_DATABASE_DRIVER = "db.driver";

    @Resource
    private Environment env;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty(PROPERTY_NAME_DATABASE_DRIVER));
        dataSource.setUrl(url);
        dataSource.setUsername(userId);
        dataSource.setPassword(password);
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager txManager() {
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setIsolationLevel(TransactionDefinition.ISOLATION_DEFAULT);
        if (appName.equals("ABC")) {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_NEVER);
        } else {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
        }
        CustomDataSourceTransactionManager txM = new CustomDataSourceTransactionManager(def);
        txM.setDataSource(dataSource());
        return txM;
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        JdbcTemplate jdbcTemplate = new JdbcTemplate();
        jdbcTemplate.setDataSource(dataSource());
        return jdbcTemplate;
    }
}
I am trying to override methods in DataSourceTransactionManager to achieve this, but it still tries to commit/roll back the transaction at the end of the transaction. Since there is no database connection available, it throws an exception.
If I use @Transactional(propagation=Propagation.NEVER), everything works perfectly, but I cannot modify it, as another app uses the same code base and needs transactions in that case.
I would like to know if there is a way to fully disable transactions from configuration without modifying the @Transactional annotations.
I'm not sure if it will work, but you can try implementing a custom TransactionInterceptor and overriding the method that wraps the invocation in a transaction, dropping the transactional behaviour. Something like this:
public class NoOpTransactionInterceptor extends TransactionInterceptor {

    @Override
    protected Object invokeWithinTransaction(
            Method method,
            Class<?> targetClass,
            InvocationCallback invocation
    ) throws Throwable {
        // Simply invoke the original unwrapped code
        return invocation.proceedWithInvocation();
    }
}
Then you declare a conditional bean in one of the @Configuration classes:
// assuming this property is stored in the Spring application properties file
@ConditionalOnProperty(name = "turnOffTransactions", havingValue = "true")
@Bean
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public TransactionInterceptor transactionInterceptor(
        /* the default bean would be injected here */
        TransactionAttributeSource transactionAttributeSource
) {
    TransactionInterceptor interceptor = new NoOpTransactionInterceptor();
    interceptor.setTransactionAttributeSource(transactionAttributeSource);
    return interceptor;
}
You will probably need additional configuration; I can't verify that right now.

Non-primary embedded database schema not being initialized

I'm currently working on a Spring Boot (1.5.10.RELEASE) application that uses two embedded databases. These two databases are defined identically, but one is defined in the application itself and the other in a custom autoconfiguration class that lives in an imported JAR; yet only the one flagged as @Primary (the one in the application) gets its schema initialized.
This is the current definition for both datasources:
Primary, on the application:
@Configuration
public class DataSourceConfiguration {

    @Bean
    @ConfigurationProperties(prefix = "first.datasource")
    @Primary
    public DataSourceProperties firstProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource firstDataSource() {
        return firstProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    @Primary
    public JdbcTemplate firstTemplate() {
        return new JdbcTemplate(firstDataSource());
    }
}
On the autoconfiguration:
@Configuration
@ConditionalOnProperty(name = "second.datasource.url")
public class SecondDataSourceAutoconfiguration {

    @Bean
    @ConfigurationProperties(prefix = "second.datasource")
    public DataSourceProperties secondProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource secondDataSource() {
        return secondProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public JdbcTemplate secondJdbcTemplate(@Qualifier("secondDataSource") DataSource datasource) {
        return new JdbcTemplate(datasource);
    }
}
And my application.yml fills the properties:
first:
  datasource:
    url: jdbc:h2:firstdb;DB_CLOSE_ON_EXIT=FALSE
second:
  datasource:
    url: jdbc:h2:seconddb
    platform: h2
My resources folder contains both a schema.sql that is executed on firstdb and a schema-h2.sql which should get executed on seconddb but does not. I tried playing around with the datasource.schema and datasource.initialize properties, switching the script names and the platform property to the first datasource (in that case, schema-h2.sql gets executed on firstdb, but nothing on seconddb), and changing the embedded database provider to HSQLDB, but I can't get the schema for the non-primary in-memory DB initialized.
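Spring Boot's script-based initialization only targets the primary/auto-configured DataSource, so a secondary DataSource has to be populated explicitly. A minimal sketch of one way to do that, assuming a bean like this is added to SecondDataSourceAutoconfiguration and that schema-h2.sql is on the classpath:

@Bean
public DataSourceInitializer secondDataSourceInitializer(@Qualifier("secondDataSource") DataSource dataSource) {
    // explicitly run the schema script against the non-primary datasource
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator(new ClassPathResource("schema-h2.sql"));
    DataSourceInitializer initializer = new DataSourceInitializer();
    initializer.setDataSource(dataSource);
    initializer.setDatabasePopulator(populator);
    return initializer;
}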

Spring Cloud Task - specify database config

I have Spring Cloud Task that loads data from SQL Server to Cassandra DB which will be run on Spring Cloud Data Flow.
One of the requirements of Spring Cloud Task is to provide a relational database to persist metadata like task execution state. But I don't want to use either of the above databases for that. Instead, I have to specify a third database for persistence. It seems like Spring Cloud Task automatically picks up the SQL Server data source properties from application.properties. How can I specify another DB for task state persistence?
My Current properties:
spring.datasource.url=jdbc:sqlserver://iphost;databaseName=dbname
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.show-sql=false
#spring.jpa.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.hibernate.ddl-auto=none
spring.data.cassandra.contact-points=ip
spring.data.cassandra.port=9042
spring.data.cassandra.username=username
spring.data.cassandra.password=password
spring.data.cassandra.keyspace-name=mykeyspace
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
Update: 1
I added the code below to point to a third database, as suggested by Michael Minella. Now Spring Cloud Task is able to connect to this DB and persist state. But now my batch job source queries are also connecting to this database. The only thing I changed was to add a datasource for the task.
spring.task.datasource.url=jdbc:postgresql://host:5432/testdb?stringtype=unspecified
spring.task.datasource.username=user
spring.task.datasource.password=password
spring.task.datasource.driverClassName=org.postgresql.Driver
@Configuration
public class DataSourceConfigs {

    @Bean(name = "taskDataSource")
    @ConfigurationProperties(prefix = "spring.task.datasource")
    public DataSource getDataSource() {
        return DataSourceBuilder.create().build();
    }
}
@Configuration
public class DDTaskConfigurer extends DefaultTaskConfigurer {

    @Autowired
    public DDTaskConfigurer(@Qualifier("taskDataSource") DataSource dataSource) {
        super(dataSource);
    }
}
Update #2:
@Component
@StepScope
public class MyItemReader extends RepositoryItemReader<Scan> implements InitializingBean {

    @Autowired
    private ScanRepository repository;

    private Integer lastScanIdPulled = null;

    public MyItemReader(Integer _lastIdPulled) {
        super();
        if (_lastIdPulled == null || _lastIdPulled <= 0) {
            lastScanIdPulled = 0;
        } else {
            lastScanIdPulled = _lastIdPulled;
        }
    }

    @PostConstruct
    protected void setUpRepo() {
        final Map<String, Sort.Direction> sorts = new HashMap<>();
        sorts.put("id", Direction.ASC);
        this.setRepository(this.repository);
        this.setSort(sorts);
        this.setMethodName("findByScanGreaterThanId");
        List<Object> methodArgs = new ArrayList<Object>();
        System.out.println("lastScanIdpulled >>> " + lastScanIdPulled);
        if (lastScanIdPulled == null || lastScanIdPulled <= 0) {
            lastScanIdPulled = 0;
        }
        methodArgs.add(lastScanIdPulled);
        this.setArguments(methodArgs);
    }
}
@Repository
public interface ScanRepository extends JpaRepository<Scan, Integer> {

    @Query("...")
    Page<Scan> findAllScan(final Pageable pageable);

    @Query("...")
    Page<Scan> findByScanGreaterThanId(int id, final Pageable pageable);
}
Update #3:
If I add a datasource config for the repository, I now get the exception below. Before you mention that one of the datasources needs to be declared @Primary: I already tried that.
Caused by: java.lang.IllegalStateException: Expected one datasource and found 2
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration.taskBatchExecutionListener(TaskBatchAutoConfiguration.java:65) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration$$EnhancerBySpringCGLIB$$baeae6b9.CGLIB$taskBatchExecutionListener$0(<generated>) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfiguration$TaskBatchExecutionListenerAutoconfiguration$$EnhancerBySpringCGLIB$$baeae6b9$$FastClassBySpringCGLIB$$5a898c9.invoke(<generated>) ~[spring-cloud-task-batch-1.0.3.RELEASE.jar:1.0.3.RELEASE]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) ~[spring-core-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) ~[spring-context-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.cloud.task.batch.configuration.TaskBatchAutoConfigu
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "myEntityManagerFactory",
        basePackages = { "com.company.dd.collector.tool" },
        transactionManagerRef = "TransactionManager"
)
public class ToolDbConfig {

    @Bean(name = "myEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean myEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("ToolDataSource") DataSource dataSource
    ) {
        return builder
                .dataSource(dataSource)
                .packages("com.company.dd.collector.tool")
                .persistenceUnit("tooldatasource")
                .build();
    }

    @Bean(name = "myTransactionManager")
    public PlatformTransactionManager transactionManager(
            @Qualifier("myEntityManagerFactory") EntityManagerFactory entityManagerFactory
    ) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
@Configuration
public class DataSourceConfigs {

    @Bean(name = "taskDataSource")
    @ConfigurationProperties(prefix = "spring.task.datasource")
    public DataSource getDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Primary
    @Bean(name = "ToolDataSource")
    @ConfigurationProperties(prefix = "tool.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }
}
You need to create a TaskConfigurer to specify the DataSource to be used. You can read about this interface in the documentation here: https://docs.spring.io/spring-cloud-task/1.1.1.RELEASE/reference/htmlsingle/#features-task-configurer
The javadoc can be found here: https://docs.spring.io/spring-cloud-task/docs/current/apidocs/org/springframework/cloud/task/configuration/TaskConfigurer.html
UPDATE 1:
When using more than one DataSource, both Spring Batch and Spring Cloud Task follow the same paradigm in that they both have *Configurer interfaces that need to be used to specify what DataSource to use. For Spring Batch, you use the BatchConfigurer (typically by just extending the DefaultBatchConfigurer) and as noted above, the TaskConfigurer is used in Spring Cloud Task. This is because when there is more than one DataSource, the framework has no way of knowing which one to use.
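A minimal sketch of the BatchConfigurer side, assuming the batch metadata should live in the same third database as the task metadata (the taskDataSource qualifier comes from the question's config; the class name is illustrative):

@Configuration
public class MyBatchConfigurer extends DefaultBatchConfigurer {

    @Autowired
    public MyBatchConfigurer(@Qualifier("taskDataSource") DataSource dataSource) {
        // tells Spring Batch which DataSource to use for its job repository metadata
        super(dataSource);
    }
}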

Spring - Data persisted by JdbcTemplate unable to be seen by JpaRepository

I am building a Spring Boot application which requires persistence via JDBC and selecting/reading via JPA/Hibernate. I have implemented both types of operations using Spring's JdbcTemplate and Spring Data's JpaRepository.
After I persist using JdbcTemplate I am unable to see the data via JpaRepository even though they share the same datasource. I am able to read the data if I use JdbcTemplate.
NOTE: I am using two data sources. One is configured in another class without the @Primary annotation, using its own entity manager factory and transaction manager, which is why I've needed to explicitly define the beans below using Spring Boot's default bean names, "transactionManager" and "entityManagerFactory".
The following is my embedded database configuration for the primary beans:
@Configuration
@EnableJpaRepositories(basePackages = {"com.repository"})
public class H2DataSourceConfiguration {

    private static final Logger log = LoggerFactory.getLogger(H2DataSourceConfiguration.class);

    @Bean(destroyMethod = "shutdown")
    @Primary
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .setName("dataSource")
                .build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.my.domain", "org.springframework.data.jpa.convert.threeten")
                .build();
    }

    @Bean
    @Primary
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager(entityManagerFactory);
        return jpaTransactionManager;
    }
}
The persistence happens in a different transaction from the reading of the data; however, they share the same service.
Both operations happen within @Transactional methods. Both repository beans are injected into the same service, which also carries the @Transactional annotation. The service looks as follows:
@Service
@Transactional
public class MyServiceImpl implements MyService {

    private static final Logger log = LoggerFactory.getLogger(MyServiceImpl.class);

    @Autowired
    private MyJpaRepository myJpaRepository;

    @Autowired
    private MyJdbcRepository myJdbcRepository;

    ...
}
MyJdbcRepositoryImpl.java:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class MyJdbcRepositoryImpl implements MyJdbcRepository {

    @Autowired
    private NamedParameterJdbcTemplate jdbcTemplate;

    // methods within here all use jdbcTemplate.query(...)
}
MyJpaRepository.java:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public interface AcquisitionJpaRepository extends JpaRepository<AcquisitionEntity, Long> {
}
Is it at all possible that the jdbctemplate calls are saving to a different h2 database?
The above configuration is correct!
The problem was that the JdbcTemplate calls had the schema owner as a prefix.
For example:
select * from I_AM_SCHEMA.KILL_ME
However, I had both the @Entity annotation and the @Table annotation on the entity object and only specified the table name!
Example:
@Entity
@Table(name = "KILL_ME")
So, we were writing to one table with JdbcTemplate but reading from a completely different table via JPA/Hibernate because the schema prefix was missing from the mapping.
The correct fix was to add the schema to the @Table annotation:
@Table(name = "KILL_ME", schema = "I_AM_SCHEMA")
DONE!

Spring Java-based config "chicken and egg situation"

I just recently started looking into Spring and specifically its latest features, like Java config etc.
I have this somewhat strange issue:
Java config Snippet:
@Configuration
@ImportResource({"classpath*:application-context.xml", "classpath:ApplicationContext_Output.xml"})
@Import(SpringJavaConfig.class)
@ComponentScan(excludeFilters = {@ComponentScan.Filter(org.springframework.stereotype.Controller.class)}, basePackages = " com.xx.xx.x2.beans")
public class ApplicationContextConfig extends WebMvcConfigurationSupport {

    private static final Log log = LogFactory.getLog(ApplicationContextConfig.class);

    @Autowired
    private Environment env;

    @Autowired
    private IExtendedDataSourceConfig dsconfig;

    @PostConstruct
    public void initApp() {
        ...
    }

    @Bean(name = "transactionManagerOracle")
    @Lazy
    public DataSourceTransactionManager transactionManagerOracle() {
        return new DataSourceTransactionManager(dsconfig.oracleDataSource());
    }
IExtendedDataSourceConfig has two implementations; based on the active Spring profile, one or the other is instantiated. For this example, let's say this is the implementation:
@Configuration
@PropertySources(value = {
        @PropertySource("classpath:MYUI.properties")})
@Profile("dev")
public class MYDataSourceConfig implements IExtendedDataSourceConfig {

    private static final Log log = LogFactory.getLog(MYDataSourceConfig.class);

    @Resource
    @Autowired
    private Environment env;

    public MYDataSourceConfig() {
        log.info("creating dev datasource");
    }

    @Bean
    public DataSource oracleDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        dataSource.setUrl(env.getProperty("oracle.url"));
        dataSource.setUsername(env.getProperty("oracle.user"));
        dataSource.setPassword(env.getProperty("oracle.pass"));
        return dataSource;
    }
}
The problem is that when the transactionManagerOracle bean is created (even if I try to mark it as lazy), the dsconfig variable appears to be null.
I guess the @Bean methods are processed first and only then is autowiring performed; is there a fix for this? How do I either tell Spring to inject the dsconfig variable before creating the beans, or somehow create the @Bean definitions after dsconfig is injected?
You can just specify the DataSource as a method parameter for the transaction manager bean. Spring will then automatically inject the datasource that is configured in the active profile:
#Bean(name="transactionManagerOracle")
#Lazy
public DataSourceTransactionManager transactionManagerOracle(DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
If you still want to do this through the configuration class, specify that as the parameter:
public DataSourceTransactionManager transactionManagerOracle(IExtendedDataSourceConfig dsconfig) {}
In both cases you declare a direct dependency on another bean, and Spring will make sure that the dependency exists and is injected.
