How to configure Spring Boot to wrap DataSource during integration tests?

My goal is to have integration tests that ensure there aren't too many database queries happening during lookups. (This helps us catch N+1 queries due to incorrect JPA configuration.)
I know that the database connection is correct because there are no configuration problems during the test run whenever MyDataSourceWrapperConfiguration is not included in the test. However, once it is added, the circular dependency happens (see error below). I believe @Primary is necessary in order for the JPA/JDBC code to use the correct DataSource instance.
MyDataSourceWrapper is a custom class that tracks the number of queries that have happened for a given transaction, but it delegates the real database work to the DataSource passed in via its constructor.
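For reference, here is a minimal sketch of what such a wrapper might look like; this is an assumption on my part (the actual MyDataSourceWrapper is not shown in the question), built on Spring's DelegatingDataSource:
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

// Hypothetical sketch of the wrapper described above.
public class MyDataSourceWrapper extends DelegatingDataSource {

    private final AtomicInteger connectionCount = new AtomicInteger();

    public MyDataSourceWrapper(DataSource delegate) {
        super(delegate);
    }

    @Override
    public Connection getConnection() throws SQLException {
        connectionCount.incrementAndGet();
        // The real class would wrap the returned Connection to count individual statements.
        return super.getConnection();
    }
}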
Error:
The dependencies of some of the beans in the application context form a cycle:
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
┌─────┐
| databaseQueryCounterProxyDataSource defined in me.testsupport.database.MyDataSourceWrapperConfiguration
↑ ↓
| dataSource defined in org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat
↑ ↓
| dataSourceInitializer
└─────┘
My Configuration:
@Configuration
public class MyDataSourceWrapperConfiguration {

    @Primary
    @Bean
    DataSource databaseQueryCounterProxyDataSource(final DataSource delegate) {
        return new MyDataSourceWrapper(delegate);
    }
}
My Test:
@ActiveProfiles({ "it" })
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration({ DatabaseConnectionConfiguration.class, DatabaseQueryCounterConfiguration.class })
@EnableAutoConfiguration
public class EngApplicationRepositoryIT {

    @Rule
    public MyDatabaseQueryCounter databaseQueryCounter = new MyDatabaseQueryCounter();

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Autowired
    MyRepository repository;

    @Test
    public void test() {
        this.repository.loadData();
        this.errorCollector.checkThat(this.databaseQueryCounter.getSelectCounts(), is(lessThan(10)));
    }
}
UPDATE: This original question was for Spring Boot 1.5, and the accepted answer reflects that; however, the answer from @rajadilipkolli works for Spring Boot 2.x.

In your case you will get two DataSource instances, which is probably not what you want. Instead, use a BeanPostProcessor, which is the component actually designed for this. See also the Spring Reference Guide.
Create and register a BeanPostProcessor which does the wrapping.
import javax.sql.DataSource;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class DataSourceWrapper implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            return new MyDataSourceWrapper((DataSource) bean);
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Then just register that as a @Bean instead of your MyDataSourceWrapper.
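A minimal registration sketch (note that BeanPostProcessor @Bean methods are typically declared static, so the post-processor is created before regular beans are instantiated):
@Configuration
public class DataSourceWrapperConfiguration {

    // static so the post-processor is registered before ordinary beans are created
    @Bean
    public static DataSourceWrapper dataSourceWrapper() {
        return new DataSourceWrapper();
    }
}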
Tip: instead of rolling your own wrapping DataSource, you might be interested in datasource-proxy combined with datasource-assert, which already has counter support etc. (saving you from maintaining your own components).
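For illustration, a hedged sketch of what query counting with datasource-proxy might look like; the class and method names come from net.ttddyy:datasource-proxy and should be verified against the version you use:
import javax.sql.DataSource;
import net.ttddyy.dsproxy.QueryCountHolder;
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;

public class QueryCountingSupport {

    // Wrap the real DataSource with a counting proxy.
    public static DataSource countingProxy(DataSource realDataSource) {
        return ProxyDataSourceBuilder.create(realDataSource)
                .name("MyDS")
                .countQuery() // enables QueryCountHolder-based counting
                .build();
    }

    // After exercising the code under test, read the SELECT count.
    public static long selectCount() {
        return QueryCountHolder.getGrandTotal().getSelect();
    }
}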

Starting from Spring Boot 2.0.0.M3, using a BeanPostProcessor won't work.
As a workaround, create your own bean like below:
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    log.info("Inside Proxy Creation");
    final HikariDataSource dataSource = (HikariDataSource) properties
            .initializeDataSourceBuilder().type(HikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return ProxyDataSourceBuilder.create(dataSource).countQuery().name("MyDS")
            .logSlowQueryToSysOut(1, TimeUnit.MINUTES).build();
}
Another way is to use the datasource-proxy version of the datasource-decorator starter.

The following solution works for me using Spring Boot 2.0.6.
It uses explicit binding instead of the annotation @ConfigurationProperties(prefix = "spring.datasource.hikari").
@Configuration
public class DataSourceConfig {

    private final Environment env;

    @Autowired
    public DataSourceConfig(Environment env) {
        this.env = env;
    }

    @Primary
    @Bean
    public MyDataSourceWrapper primaryDataSource(DataSourceProperties properties) {
        DataSource dataSource = properties.initializeDataSourceBuilder().build();
        Binder binder = Binder.get(env);
        binder.bind("spring.datasource.hikari", Bindable.ofInstance(dataSource).withExistingValue(dataSource));
        return new MyDataSourceWrapper(dataSource);
    }
}

You can actually still use a BeanPostProcessor in Spring Boot 2, but it needs to return an object of the correct type (the actual type of the declared bean). To do this you need to create a proxy of the correct type which redirects DataSource methods to your interceptor and all other methods to the original bean.
For example code see the Spring Boot issue and discussion at https://github.com/spring-projects/spring-boot/issues/12592.
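A hedged sketch of that idea (an illustration, not the exact code from the linked issue), using Spring AOP's ProxyFactory with proxyTargetClass so the proxy keeps the bean's concrete type, e.g. HikariDataSource:
import javax.sql.DataSource;

import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class TypePreservingDataSourceWrapper implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (bean instanceof DataSource) {
            ProxyFactory proxyFactory = new ProxyFactory(bean);
            // CGLIB subclass proxy, so the proxy keeps the concrete type of the bean
            proxyFactory.setProxyTargetClass(true);
            proxyFactory.addAdvice((MethodInterceptor) invocation -> {
                // Intercept DataSource methods here (e.g. count getConnection() calls)
                return invocation.proceed();
            });
            return proxyFactory.getProxy();
        }
        return bean;
    }
}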

Related

Spring Disable @Transactional from Configuration java file

I have a code base which is used by two different applications. Some of my Spring service classes have the annotation @Transactional. On server start I would like to disable @Transactional based on some configuration.
Below is my configuration class.
@Configuration
@EnableTransactionManagement
@PropertySource("classpath:application.properties")
public class WebAppConfig {

    private static final String PROPERTY_NAME_DATABASE_DRIVER = "db.driver";

    @Resource
    private Environment env;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty(PROPERTY_NAME_DATABASE_DRIVER));
        // url, userId, password and appName are defined elsewhere in the original code base
        dataSource.setUrl(url);
        dataSource.setUsername(userId);
        dataSource.setPassword(password);
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager txManager() {
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setIsolationLevel(TransactionDefinition.ISOLATION_DEFAULT);
        if (appName.equals("ABC")) {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_NEVER);
        } else {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
        }
        CustomDataSourceTransactionManager txM = new CustomDataSourceTransactionManager(def);
        txM.setDataSource(dataSource());
        return txM;
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        JdbcTemplate jdbcTemplate = new JdbcTemplate();
        jdbcTemplate.setDataSource(dataSource());
        return jdbcTemplate;
    }
}
I am trying to override methods in DataSourceTransactionManager to achieve this, but it still tries to commit/roll back the transaction at the end of the transaction. Since there is no database connection available, it throws an exception.
If I keep @Transactional(propagation=Propagation.NEVER), everything works perfectly, but I cannot modify it, as another app using the same code base requires it.
I would like to know if there is a way to fully disable transactions from configuration without modifying the @Transactional annotations.
I'm not sure if it would work, but you can try to implement a custom TransactionInterceptor and override the method that wraps the invocation in a transaction, removing the transactional logic. Something like this:
public class NoOpTransactionInterceptor extends TransactionInterceptor {

    @Override
    protected Object invokeWithinTransaction(
            Method method,
            Class<?> targetClass,
            InvocationCallback invocation
    ) throws Throwable {
        // Simply invoke the original unwrapped code
        return invocation.proceedWithInvocation();
    }
}
Then you declare a conditional bean in one of your @Configuration classes:
// assuming this property is stored in the Spring application properties file
@ConditionalOnProperty(name = "turnOffTransactions", havingValue = "true")
@Bean
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public TransactionInterceptor transactionInterceptor(
        /* the default bean would be injected here */
        TransactionAttributeSource transactionAttributeSource
) {
    TransactionInterceptor interceptor = new NoOpTransactionInterceptor();
    interceptor.setTransactionAttributeSource(transactionAttributeSource);
    return interceptor;
}
You'll probably need additional configuration; I can't verify that right now.
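For completeness, enabling the no-op interceptor would then just be a matter of setting the property named in the @ConditionalOnProperty above in application.properties:
turnOffTransactions=true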

spring aspect not getting fired on getConnection

I am trying to intercept the getConnection call in Spring 3.2.3:
@Component
@Aspect
@Order(value = 1)
public class ConnectionAspect {

    // @AfterReturning(pointcut = "execution(java.sql.Connection javax.sql.DataSource.getConnection(..))", returning = "connection")
    @Around("execution(java.sql.Connection javax.sql.DataSource.getConnection(..))")
    public Connection prepare(ProceedingJoinPoint pjp) throws Throwable {
        return MyConnectionProxy.newInstance((Connection) pjp.proceed(pjp.getArgs()));
    }
}
This aspect is not invoked when getConnection is called.
Is there any mistake in the pointcut definition execution(java.sql.Connection javax.sql.DataSource.getConnection(..))?
Spring AOP can only advise Spring-managed beans. If your DataSource instances are not Spring-managed beans, you won't be able to achieve your goal with Spring AOP.
I would try to solve this by creating some kind of delegating proxy around the container-provided DataSource and making it a Spring-managed bean. It turns out there's actually a class in Spring intended specifically for this purpose: DelegatingDataSource. You only need to subclass it, override the getConnection() method (or whichever other method's behavior you need to affect), set it up to delegate to the container-provided DataSource, make it a Spring-managed bean, and you're good to go.
Something along the lines of this example should do it:
@Configuration
public class DataSourceConfiguration {

    public static class MySpecialDataSource extends DelegatingDataSource {

        public MySpecialDataSource(DataSource delegate) {
            super(delegate);
        }

        @Override
        public Connection getConnection() throws SQLException {
            return super.getConnection();
        }
    }

    @Bean
    public DataSource dataSource(@Autowired DataSource containerDataSource) {
        return new MySpecialDataSource(containerDataSource);
    }

    @Bean(name = "containerDataSource")
    public JndiObjectFactoryBean containerDataSource() {
        JndiObjectFactoryBean factoryBean = new JndiObjectFactoryBean();
        factoryBean.setJndiName("jdbc/MyDataSource");
        return factoryBean;
    }
}
The best thing is that you don't even need Spring AOP or AspectJ for that.

Null pointer exception using Autowired annotation - GemFire Listener

I have moved all the Cassandra configuration into a single class. When I tried to create an instance of CassandraOperations in the GemFire cache listener, I got a null pointer exception. Can you please assist me with this error?
I did not get any null pointer exception using Spring and Cassandra alone, but I do when integrating with GemFire.
@Component
public class CacheListener<K, V> extends CacheListenerAdapter<K, V> implements Declarable {

    @Autowired
    private CassandraOperations cassandraOperations;

    @Override
    public void init(Properties props) {
    }

    public void afterCreate(EntryEvent e) {
        cassandraOperations.insert(e.getNewValue());
    }

    @Override
    public void close() {
    }
}
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
        mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), environment.getProperty("cassandra.keyspace")));
        return mappingContext;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
Exception
[error 2017/05/05 11:16:04.874 CDT <http-nio-7878-exec-1> tid=0x5b] Exception occurred in CacheListener
java.lang.NullPointerException
at CacheListener.afterCreate(CacheListener.java:27)
at com.gemstone.gemfire.internal.cache.EnumListenerEvent$AFTER_CREATE.dispatchEvent(EnumListenerEvent.java:97)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchEvent(LocalRegion.java:8897)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchListenerEvent(LocalRegion.java:7376)
at com.gemstone.gemfire.internal.cache.LocalRegion.invokePutCallbacks(LocalRegion.java:6158)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.invokeCallbacks(EntryEventImpl.java:1919)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap$ProxyRegionEntry.dispatchListenerEvents(ProxyRegionMap.java:548)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPutPart2(LocalRegion.java:6012)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:232)
at com.gemstone.gemfire.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5824)
at com.gemstone.gemfire.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:118)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPut(LocalRegion.java:5214)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1597)
at com.gemstone.gemfire.internal.cache.LocalRegion.put(LocalRegion.java:1580)
at com.gemstone.gemfire.internal.cache.AbstractRegion.put(AbstractRegion.java:327)
at org.springframework.data.gemfire.GemfireTemplate.put(GemfireTemplate.java:189)
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.save(SimpleGemfireRepository.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
What is not apparent in your code/configuration above is how you configured your application-specific, GemFire CacheListener using Spring (Data GemFire).
I see you annotated your application CacheListener using Spring's @Component stereotype annotation, but this does nothing without help.
Are you using Spring's classpath component scanning functionality, or perhaps Spring's annotation-based container configuration support? If you are using the latter, you know you still have to explicitly define your application CacheListener in config (JavaConfig or XML), right?
Whenever you encounter a NullPointerException on an @Autowired component/collaborator field, it is a good indication you have a configuration problem, particularly since the @Autowired annotation implies that the "dependency" (e.g. CassandraOperations) is "required" (unless you explicitly set the required attribute of the @Autowired annotation to false, which you did not; required defaults to true).
Therefore, if the CacheListener component were picked up in the scan and a dependency could not be injected (auto-wired) because no (other) bean of the specified type (e.g. CassandraOperations) was defined in the Spring application context (which it is), then Spring would throw an Exception when evaluating your configuration class(es).
Also, even your CassandraConfig class must be annotated with Spring's @Configuration annotation or with the @Component annotation when using either Spring classpath component scanning or annotation-based container config. Or, it must be explicitly defined as a bean in the Spring application context if using neither.
NOTE: the naming convention (i.e. CacheListener) is not very good since it clashes with GemFire's own CacheListener interface. It would be better to name your application-specific extension/implementation something like "GemFireToCassandraCacheListener".
By way of example...
import ...;

@Configuration
class GemFireConfiguration {

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean("CassandraCache")
    PartitionedRegionFactoryBean cassandraCacheRegion() {
        PartitionedRegionFactoryBean cassandraCacheRegion =
            new PartitionedRegionFactoryBean();
        cassandraCacheRegion.setCache(gemfireCache());
        cassandraCacheRegion.setClose(false);
        cassandraCacheRegion.setCacheListeners(
            new CacheListener[] { gemfireToCassandraCacheListener() });
        return cassandraCacheRegion;
    }

    @Bean
    GemFireToCassandraCacheListener gemfireToCassandraCacheListener() {
        return new GemFireToCassandraCacheListener();
    }
}
import ...;

@Configuration
class CassandraConfig {
    // what you have above
}
I have plenty of GemFire configuration examples here, showing GemFire native config alongside Spring (Data GemFire) config, XML vs. JavaConfig vs. annotations, etc.
Finally...
Technically, it might be better to use a GemFire CacheWriter, attached to the Region, rather than a CacheListener, since what you are doing (updating Cassandra on a cache create) is the intended purpose of a CacheWriter.
Of course, the CacheListener is called "after" create vs. the CacheWriter which is "before" create. However, I would say it is always better to update the "primary" data source (or "source of truth") before updating the "cache" to reflect the data source. This is applicable especially if there are constraints in the primary data source that might cause an update to fail. You would not want the cache to be updated if the primary data source could not be.
A CacheWriter is configured similarly to a CacheListener, like so...
#Bean("CassandraCache")
PartitionedRegionFactoryBean cassandraCacheRegion() {
PartitionedRegionFactoryBean cassandraCacheRegion =
new PartitionedRegionFactoryBean();
cassandraCacheRegion.setCache(gemfireCache());
cassandraCacheRegion.setClose(false);
cassandraCacheRegion.setCacheWriter(gemfireToCassandraCacheWriter());
return cassandraCacheRegion;
}
#Bean
GemFireToCassandraCacheWriter gemfireToCassandraCacheWriter(
CassandraOperations cassandraOperations) {
return new GemFireToCassandraCacheWriter(cassandraOperations);
}
Where the GemFireToCassandraCacheWriter would be defined as...
class GemFireToCassandraCacheWriter extends CacheWriterAdapter {

    private CassandraOperations cassandraOperations;

    // Using constructor injection is better than field injection
    GemFireToCassandraCacheWriter(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    public void beforeCreate(EntryEvent<?, ?> event) {
        cassandraOperations.insert(event.getNewValue());
    }
}
NOTE: a Region can only have 1 CacheWriter. FYI, functionally the CacheWriter is the counterpart to a CacheLoader. See the GemFire User Guide for more details. In particular, see here, here and here.
Additionally, if you are just using GemFire as a cache for state that is primarily managed in Cassandra, then you might also consider Spring's Cache Abstraction, for which Spring Data GemFire positions GemFire as a "provider" in the abstraction.
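As a hedged illustration of that approach (the service and entity names here are hypothetical, not from the question), the Cache Abstraction would let the GemFire region cache Cassandra-backed lookups declaratively:
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.stereotype.Service;

// Hypothetical service; "CassandraCache" refers to a GemFire region like the one defined above.
@Service
class PersonService {

    private final CassandraOperations cassandraOperations;

    PersonService(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Cacheable("CassandraCache")
    public Person findById(String id) {
        // Hits Cassandra only on a cache miss; the result is stored in the GemFire region.
        return cassandraOperations.selectOneById(id, Person.class);
    }
}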
Not sure what your GemFire to Cassandra UC is all about, but food for thought.
Hope this helps!
-John

Spring Java-based config "chicken and egg" situation

Just recently started looking into Spring and specifically its latest features, like Java config etc.
I have this somewhat strange issue:
Java config Snippet:
@Configuration
@ImportResource({"classpath*:application-context.xml", "classpath:ApplicationContext_Output.xml"})
@Import(SpringJavaConfig.class)
@ComponentScan(excludeFilters = {@ComponentScan.Filter(org.springframework.stereotype.Controller.class)}, basePackages = "com.xx.xx.x2.beans")
public class ApplicationContextConfig extends WebMvcConfigurationSupport {

    private static final Log log = LogFactory.getLog(ApplicationContextConfig.class);

    @Autowired
    private Environment env;

    @Autowired
    private IExtendedDataSourceConfig dsconfig;

    @PostConstruct
    public void initApp() {
        ...
    }

    @Bean(name = "transactionManagerOracle")
    @Lazy
    public DataSourceTransactionManager transactionManagerOracle() {
        return new DataSourceTransactionManager(dsconfig.oracleDataSource());
    }
IExtendedDataSourceConfig has two implementations; based on the active Spring profile, one or the other is instantiated. For this example, let's say this is the implementation:
@Configuration
@PropertySources(value = {
    @PropertySource("classpath:MYUI.properties")})
@Profile("dev")
public class MYDataSourceConfig implements IExtendedDataSourceConfig {

    private static final Log log = LogFactory.getLog(MYDataSourceConfig.class);

    @Resource
    private Environment env;

    public MYDataSourceConfig() {
        log.info("creating dev datasource");
    }

    @Bean
    public DataSource oracleDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        dataSource.setUrl(env.getProperty("oracle.url"));
        dataSource.setUsername(env.getProperty("oracle.user"));
        dataSource.setPassword(env.getProperty("oracle.pass"));
        return dataSource;
    }
The problem is that when the transactionManagerOracle bean is created (even if I try to mark it as lazy), the dsconfig field appears to be null.
I guess @Bean methods are processed first and then all autowiring happens; is there a fix for this? How do I either tell Spring to inject the dsconfig field before creating the beans, or somehow create the @Beans after dsconfig is injected?
You can just specify DataSource as a method parameter for the transaction manager bean. Spring will then automatically inject the datasource which is configured in the active profile:
@Bean(name = "transactionManagerOracle")
@Lazy
public DataSourceTransactionManager transactionManagerOracle(DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
}
If you still want to do this through the configuration class, specify it as a parameter:
public DataSourceTransactionManager transactionManagerOracle(IExtendedDataSourceConfig dsconfig) {
    return new DataSourceTransactionManager(dsconfig.oracleDataSource());
}
Either way, you declare a direct dependency on another bean, and Spring will make sure that the dependent bean exists and is injected.

Using Quartz with Spring Boot - injection order changes based upon return type of method

I am trying to get Quartz working with Spring Boot, and am not managing to get the injection working correctly. I am basing myself on the example shown here
Here is my boot class:
@ComponentScan
@EnableAutoConfiguration
public class MyApp {

    @Autowired
    private DataSource dataSource;

    @Bean
    public JobFactory jobFactory() {
        return new SpringBeanJobFactory();
    }

    @Bean
    public SchedulerFactoryBean quartz() {
        final SchedulerFactoryBean bean = new SchedulerFactoryBean();
        bean.setJobFactory(jobFactory());
        bean.setDataSource(dataSource);
        bean.setConfigLocation(new ClassPathResource("quartz.properties"));
        ...
        return bean;
    }

    public static void main(String[] args) {
        SpringApplication.run(MyApp.class, args);
    }
}
When the quartz() method is invoked by Spring, dataSource is null. However, if I change the return type of the quartz() method to Object, dataSource is correctly injected with the datasource created by reading application.properties, the bean is built, everything works and I get a subsequent error saying that Quartz has been unable to retrieve any jobs from the database, which is normal as I haven't put the schema in place yet.
I have tried adding a @DependsOn("dataSource") annotation on the quartz() method, but that doesn't make any difference.
This class is the only class annotated with @Configuration.
Here are my dependencies (I'm using Maven but present them like this for space reasons):
org.springframework.boot:spring-boot-starter-actuator:1.0.0.RC4
org.springframework.boot:spring-boot-starter-jdbc:1.0.0.RC4
org.springframework.boot:spring-boot-starter-web:1.0.0.RC4
org.quartz-scheduler:quartz:2.2.1
org.springframework:spring-support:2.0.8
And the parent:
org.springframework.boot:spring-boot-starter-parent:1.0.0.RC4
Finally the content of quartz.properties:
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class=org.springframework.scheduling.quartz.LocalDataSourceJobStore
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
What am I doing wrong?
(I have seen this question, but that question initialises the datasource in the @Configuration class)
Your app starts up (with a schema error, which is expected) if I use "org.springframework:spring-context-support:4.0.2.RELEASE" ("org.springframework:spring-support:2.0.8", if it ever existed, must be nearly 10 years old now and certainly isn't compatible with Boot or Quartz 2).
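As a side note (my own suggestion echoing the method-parameter injection advice from the earlier answers, not part of the accepted fix): injecting the DataSource as a @Bean method parameter avoids depending on field injection timing altogether:
// Sketch: method-parameter injection instead of the @Autowired field
@Bean
public SchedulerFactoryBean quartz(DataSource dataSource, JobFactory jobFactory) {
    final SchedulerFactoryBean bean = new SchedulerFactoryBean();
    bean.setJobFactory(jobFactory);
    bean.setDataSource(dataSource);
    bean.setConfigLocation(new ClassPathResource("quartz.properties"));
    return bean;
}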
