@Resource becomes null when Java Config implements AsyncConfigurer - spring

I have a very strange problem. I'm using Spring 3.2.4 and have a Spring Java Config class which is part of a larger mix of Java and XML configuration files.
The file uses 6 resources defined in other config files to construct an @Async bean. When I add "implements AsyncConfigurer" to the class, some of the @Resource fields appear as null, but when I leave the implementation off, they are populated. This is confusing, since AsyncConfigurer is just supposed to be used for configuring your asynchronous executor; it shouldn't do anything funky like cause your configuration to be loaded asynchronously.
When I set a breakpoint in the construction method, I can see that the beans are in fact null. It seems to be a race condition of some kind, because out of the 6 beans, the 4th one was once null and then populated the next time.
My file looks like:
@Configuration
public class UserBackgroundProcessorConfiguration implements AsyncConfigurer {

    @Resource(name="bean1")
    MyBean bean1;

    // in reality there are 6 @Resources defined...

    @Resource(name="bean6")
    MyBean bean6;

    @Bean(name="backgroundProcessor")
    public BackgroundProcessor getBackgroundProcessor() {
        BackgroundProcessor backgroundProcess = new BackgroundProcessor();
        backgroundProcess.setBean1(bean1);
        // beans 1-3 are always populated
        // bean 4 seems to sometimes be populated, sometimes null
        // beans 5&6 are always null
        backgroundProcess.setBean6(bean6);
        return backgroundProcess;
    }

    @Override
    @Bean(name="executor")
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(40);
        executor.setQueueCapacity(25);
        executor.setThreadNamePrefix("BackgroundProcessor-");
        return executor;
    }
}
Again, when I remove "implements AsyncConfigurer" and comment out getAsyncExecutor, the problem goes away.
According to the documentation:
Interface to be implemented by @Configuration classes annotated with @EnableAsync that wish to customize the Executor instance used when processing async method invocations.
So I don't see how that causes the behavior I am seeing.

It appears that your backgroundProcessor bean depends on your 6 resource beans.
Try annotating the method signature with:
@Bean(name="backgroundProcessor")
@DependsOn({"bean1","bean2","bean3","bean4","bean5","bean6"})
public BackgroundProcessor getBackgroundProcessor() {..}
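If it helps, here is the same method spelled out with the body from the question. Note that @DependsOn only forces the named beans to be created before this one, so this is a sketch of the suggestion rather than a guaranteed fix for the injection timing:
@Bean(name="backgroundProcessor")
@DependsOn({"bean1","bean2","bean3","bean4","bean5","bean6"})
public BackgroundProcessor getBackgroundProcessor() {
    // @DependsOn controls bean creation order only; the @Resource fields
    // are still populated separately by annotation post-processing
    BackgroundProcessor backgroundProcess = new BackgroundProcessor();
    backgroundProcess.setBean1(bean1);
    backgroundProcess.setBean6(bean6);
    return backgroundProcess;
}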

I solved this problem by removing the @Resource annotations from the Java configuration file:
@Configuration
public class UserBackgroundProcessorConfiguration implements AsyncConfigurer {

    @Bean(name="backgroundProcessor")
    public BackgroundProcessor getBackgroundProcessor() {
        BackgroundProcessor backgroundProcess = new BackgroundProcessor();
        return backgroundProcess;
    }

    @Override
    @Bean(name="executor")
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(40);
        executor.setQueueCapacity(25);
        executor.setThreadNamePrefix("BackgroundProcessor-");
        return executor;
    }
}
And then adding @Resource annotations to the BackgroundProcessor class:
public class BackgroundProcessor {

    @Resource private MyBean bean1;
    // 4 more
    @Resource private MyBean bean6;
}
For some reason this works fine. I don't like this as much, because I would prefer my classes not to have a dependency on the IoC annotations, but I'll go with this solution for now.
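For what it's worth, another sketch that avoids field injection in the configuration class altogether is to declare the dependencies as parameters of the @Bean method; Spring resolves @Bean method parameters from the context at the moment it creates the bean, which sidesteps the field-injection timing issue (the names here assume the beans from the question):
@Bean(name="backgroundProcessor")
public BackgroundProcessor getBackgroundProcessor(
        @Qualifier("bean1") MyBean bean1,
        @Qualifier("bean6") MyBean bean6) {
    // Parameters are resolved from the context when this method is
    // invoked, so they cannot be null the way injected fields can
    BackgroundProcessor backgroundProcess = new BackgroundProcessor();
    backgroundProcess.setBean1(bean1);
    backgroundProcess.setBean6(bean6);
    return backgroundProcess;
}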

Related

Mockito mock does not work as expected in Spring MockMvc test

In a Spring MockMvc test I want to replace a bean by a mock implementation which is configured using Mockito.when() definitions. The definitions are indeed respected at the time the mock is configured, as well as at the time the mock is injected into a depending bean (a controller advice in my case) during application context startup. However, when the mock is used during a certain test, all when() definitions are gone.
Why?
Some remarks:
The mock is completely new code, so there cannot be a call to Mockito.reset() that I am unaware of.
The mock at the time of usage is the same instance as at the time of creation.
A workaround is to configure the mock in a @BeforeEach method in AbstractTest. However, I want to understand why it does not work without that.
Here is a simplified and anonymized example:
@Component
public class MyBean {

    private String property;
    ...

    public String getProperty() {
        return property;
    }
}

@ControllerAdvice
public class MyControllerAdvice {

    private MyBean myBean;

    @Autowired
    public MyControllerAdvice(MyBean myBean) {
        this.myBean = myBean;
        System.out.println(this.myBean.getProperty()); // --> outputs "FOOBAR"
    }

    @ModelAttribute
    public String getMyBeanProperty() {
        return myBean.getProperty(); // --> returns null
    }
}

public class AbstractTest {

    @Configuration
    static class Config {

        @Bean
        public MyBean myBean() {
            MyBean myBean = Mockito.mock(MyBean.class, "I am a mock of MyBean");
            when(myBean.getProperty()).thenReturn("FOOBAR");
            return myBean;
        }
    }
}
That's not a problem with Mockito. I think you simplified the example a lot and we don't see the full picture, but I can say that the main cause is that there are 2 different MyBean beans: one is created from Spring's @Component scanning, the second in the configuration class with @Bean.
Why do you use @Component for a POJO/DO?
The @Bean in the configuration class is initialized lazily, so a better way is to use @PostConstruct.
If you want to keep both beans, mark the MyBean in the configuration class as @Primary, as sketched below.
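A minimal sketch of that last suggestion, reusing the test configuration from the question:
@Configuration
static class Config {

    @Bean
    @Primary // prefer this mock over the component-scanned MyBean
    public MyBean myBean() {
        MyBean myBean = Mockito.mock(MyBean.class, "I am a mock of MyBean");
        when(myBean.getProperty()).thenReturn("FOOBAR");
        return myBean;
    }
}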

Null pointer exception using Autowired annotation - GemFire listener

I have moved all the Cassandra logic into a single class. When I tried to create an instance of CassandraOperations in the GemFire cache listener, I was getting a null pointer exception. Can you please assist me with this error?
I did not get any null pointer exception using Spring and Cassandra alone, but I am getting one while integrating with GemFire.
@Component
public class CacheListener<K, V> extends CacheListenerAdapter<K, V> implements Declarable {

    @Autowired
    private CassandraOperations cassandraOperations;

    @Override
    public void init(Properties props) {
    }

    public void afterCreate(EntryEvent e) {
        cassandraOperations.insert(e.getNewValue());
    }

    @Override
    public void close() {
    }
}
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
        mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(),
            environment.getProperty("cassandra.keyspace")));
        return mappingContext;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
Exception
[error 2017/05/05 11:16:04.874 CDT <http-nio-7878-exec-1> tid=0x5b] Exception occurred in CacheListener
java.lang.NullPointerException
at CacheListener.afterCreate(CacheListener.java:27)
at com.gemstone.gemfire.internal.cache.EnumListenerEvent$AFTER_CREATE.dispatchEvent(EnumListenerEvent.java:97)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchEvent(LocalRegion.java:8897)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchListenerEvent(LocalRegion.java:7376)
at com.gemstone.gemfire.internal.cache.LocalRegion.invokePutCallbacks(LocalRegion.java:6158)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.invokeCallbacks(EntryEventImpl.java:1919)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap$ProxyRegionEntry.dispatchListenerEvents(ProxyRegionMap.java:548)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPutPart2(LocalRegion.java:6012)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:232)
at com.gemstone.gemfire.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5824)
at com.gemstone.gemfire.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:118)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPut(LocalRegion.java:5214)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1597)
at com.gemstone.gemfire.internal.cache.LocalRegion.put(LocalRegion.java:1580)
at com.gemstone.gemfire.internal.cache.AbstractRegion.put(AbstractRegion.java:327)
at org.springframework.data.gemfire.GemfireTemplate.put(GemfireTemplate.java:189)
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.save(SimpleGemfireRepository.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
What is not apparent in your code/configuration above is how you configured your application-specific GemFire CacheListener using Spring (Data GemFire).
I see you annotated your application CacheListener using Spring's @Component stereotype annotation, but this does nothing without help.
Are you using Spring's classpath component scanning functionality, or perhaps Spring's annotation-based container configuration support? If you are using the latter, you know you still have to explicitly define your application CacheListener in config (JavaConfig or XML), right?
Whenever you encounter a NullPointerException on an @Autowired component/collaborator field, it is a good indication you have a configuration problem, particularly since the @Autowired annotation implies that the "dependency" (e.g. CassandraOperations) is "required" (unless you explicitly set the required attribute of the @Autowired annotation to false, which you did not; required defaults to true).
Therefore, if the CacheListener component were picked up in the scan and a dependency could not be injected (auto-wired) because no (other) bean of the specified type (e.g. CassandraOperations) was defined in the Spring application context (which it is), then Spring would throw an Exception when evaluating your configuration class(es).
Although, even your CassandraConfig class must also be annotated with Spring's @Configuration annotation or with the @Component annotation when using either Spring classpath component scanning or annotation-based container config. Or, it must be explicitly defined as a bean in the Spring application context if using neither; a sketch of the scanning route follows.
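For instance, a minimal sketch of the scanning route (the package name is hypothetical):
@Configuration
@ComponentScan(basePackages = "com.example.app") // hypothetical application package
public class ApplicationConfig {
    // With scanning enabled, the @Component-annotated CacheListener and a
    // @Configuration-annotated CassandraConfig would both be picked up
}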
NOTE: the naming convention (i.e. CacheListener) is not very good since it clashes with GemFire's own CacheListener interface. It would be better to call your application-specific extension/implementation perhaps "GemFireToCassandraCacheListener".
By way of example...
import ...;

@Configuration
class GemFireConfiguration {

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean("CassandraCache")
    PartitionedRegionFactoryBean cassandraCacheRegion() {
        PartitionedRegionFactoryBean cassandraCacheRegion =
            new PartitionedRegionFactoryBean();
        cassandraCacheRegion.setCache(gemfireCache());
        cassandraCacheRegion.setClose(false);
        cassandraCacheRegion.setCacheListeners(
            new CacheListener[] { gemfireToCassandraCacheListener() });
        return cassandraCacheRegion;
    }

    @Bean
    GemFireToCassandraCacheListener gemfireToCassandraCacheListener() {
        return new GemFireToCassandraCacheListener();
    }
}

import ...;

@Configuration
class CassandraConfig {
    // what you have above
}
I have plenty of GemFire configuration examples here that show GemFire native config alongside Spring (Data GemFire) config, XML vs. JavaConfig vs. annotations, etc.
Finally...
Technically, it might be better to use a GemFire CacheWriter, attached to the Region, rather than a CacheListener, since what you are doing (updating Cassandra on a cache create) is the intended purpose of a CacheWriter.
Of course, the CacheListener is called "after" create vs. the CacheWriter which is "before" create. However, I would say it is always better to update the "primary" data source (or "source of truth") before updating the "cache" to reflect the data source. This is applicable especially if there are constraints in the primary data source that might cause an update to fail. You would not want the cache to be updated if the primary data source could not be.
A CacheWriter is configured similarly to a CacheListener, like so...
#Bean("CassandraCache")
PartitionedRegionFactoryBean cassandraCacheRegion() {
PartitionedRegionFactoryBean cassandraCacheRegion =
new PartitionedRegionFactoryBean();
cassandraCacheRegion.setCache(gemfireCache());
cassandraCacheRegion.setClose(false);
cassandraCacheRegion.setCacheWriter(gemfireToCassandraCacheWriter());
return cassandraCacheRegion;
}
#Bean
GemFireToCassandraCacheWriter gemfireToCassandraCacheWriter(
CassandraOperations cassandraOperations) {
return new GemFireToCassandraCacheWriter(cassandraOperations);
}
Where the GemFireToCassandraCacheWriter would be defined as...
class GemFireToCassandraCacheWriter extends CacheWriterAdapter {

    private CassandraOperations cassandraOperations;

    // Using constructor injection is better than field injection
    GemFireToCassandraCacheWriter(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Override
    public void beforeCreate(EntryEvent<?, ?> event) {
        cassandraOperations.insert(event.getNewValue());
    }
}
NOTE: a Region can only have 1 CacheWriter. FYI, functionally the CacheWriter is the counterpart to a CacheLoader. See the GemFire User Guide for more details. In particular, see here, here and here.
Additionally, if you are just using GemFire as a cache for state that is primarily managed in Cassandra, then you might also consider Spring's Cache Abstraction, for which Spring Data GemFire positions GemFire as a "provider" in the abstraction.
Not sure what your GemFire to Cassandra use case is all about, but food for thought.
Hope this helps!
-John

How to configure Spring Boot to wrap the DataSource during integration tests?

My goal is to have integration tests that ensure there aren't too many database queries happening during lookups. (This helps us catch n+1 queries due to incorrect JPA configuration.)
I know that the database connection is correct because there are no configuration problems during the test run whenever MyDataSourceWrapperConfiguration is not included in the test. However, once it is added, the circular dependency happens (see error below). I believe @Primary is necessary in order for the JPA/JDBC code to use the correct DataSource instance.
MyDataSourceWrapper is a custom class that tracks the number of queries that have happened for a given transaction, but it delegates the real database work to the DataSource passed in via its constructor.
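For context, a bare-bones sketch of what such a wrapper might look like. The real class is not shown in the question; this version builds on Spring's DelegatingDataSource and, for brevity, counts connection checkouts rather than individual queries:
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class MyDataSourceWrapper extends DelegatingDataSource {

    private final AtomicInteger connectionCount = new AtomicInteger();

    public MyDataSourceWrapper(DataSource delegate) {
        super(delegate);
    }

    @Override
    public Connection getConnection() throws SQLException {
        // A real query counter would wrap the Connection/Statement instead;
        // this simplified version only counts connection checkouts
        connectionCount.incrementAndGet();
        return super.getConnection();
    }

    public int getConnectionCount() {
        return connectionCount.get();
    }
}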
Error:
The dependencies of some of the beans in the application context form a cycle:
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
┌─────┐
| databaseQueryCounterProxyDataSource defined in me.testsupport.database.MyDataSourceWrapperConfiguration
↑ ↓
| dataSource defined in org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat
↑ ↓
| dataSourceInitializer
└─────┘
My Configuration:
@Configuration
public class MyDataSourceWrapperConfiguration {

    @Primary
    @Bean
    DataSource databaseQueryCounterProxyDataSource(final DataSource delegate) {
        return new MyDataSourceWrapper(delegate);
    }
}
My Test:
@ActiveProfiles({ "it" })
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration({ DatabaseConnectionConfiguration.class, DatabaseQueryCounterConfiguration.class })
@EnableAutoConfiguration
public class EngApplicationRepositoryIT {

    @Rule
    public MyDatabaseQueryCounter databaseQueryCounter = new MyDatabaseQueryCounter();

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Autowired
    MyRepository repository;

    @Test
    public void test() {
        this.repository.loadData();
        this.errorCollector.checkThat(this.databaseQueryCounter.getSelectCounts(), is(lessThan(10)));
    }
}
UPDATE: This original question was for Spring Boot 1.5. The accepted answer reflects that; however, the answer from @rajadilipkolli works for Spring Boot 2.x.
In your case you will get 2 DataSource instances, which is probably not what you want. Instead, use a BeanPostProcessor, which is the component actually designed for this. See also the Spring Reference Guide.
Create and register a BeanPostProcessor which does the wrapping.
public class DataSourceWrapper implements BeanPostProcessor {

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            return new MyDataSourceWrapper((DataSource) bean);
        }
        return bean;
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Then just register that as a @Bean instead of your MyDataSourceWrapper.
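For example, a minimal sketch of that registration. Making the method static follows the general guidance (echoed in the last question below) that post-processor beans should be instantiated early:
@Configuration
public class DataSourceWrapperConfiguration {

    // static, so the BeanPostProcessor is created before the regular
    // beans (such as the DataSource) that it has to post-process
    @Bean
    public static DataSourceWrapper dataSourceWrapper() {
        return new DataSourceWrapper();
    }
}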
Tip: instead of rolling your own wrapping DataSource, you might be interested in datasource-proxy combined with datasource-assert, which already has counter support etc. (saves you maintaining your own components).
Starting from Spring Boot 2.0.0.M3, using a BeanPostProcessor won't work.
As a workaround, create your own bean like below:
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    log.info("Inside Proxy Creation");
    final HikariDataSource dataSource = (HikariDataSource) properties
        .initializeDataSourceBuilder().type(HikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return ProxyDataSourceBuilder.create(dataSource).countQuery().name("MyDS")
        .logSlowQueryToSysOut(1, TimeUnit.MINUTES).build();
}
Another way is to use the datasource-proxy version of the datasource-decorator starter.
The following solution works for me using Spring Boot 2.0.6.
It uses explicit binding instead of the annotation @ConfigurationProperties(prefix = "spring.datasource.hikari").
@Configuration
public class DataSourceConfig {

    private final Environment env;

    @Autowired
    public DataSourceConfig(Environment env) {
        this.env = env;
    }

    @Primary
    @Bean
    public MyDataSourceWrapper primaryDataSource(DataSourceProperties properties) {
        DataSource dataSource = properties.initializeDataSourceBuilder().build();
        Binder binder = Binder.get(env);
        binder.bind("spring.datasource.hikari", Bindable.ofInstance(dataSource).withExistingValue(dataSource));
        return new MyDataSourceWrapper(dataSource);
    }
}
You can actually still use BeanPostProcessor in Spring Boot 2, but it needs to return the correct type (the actual type of the declared Bean). To do this you need to create a proxy of the correct type which redirects DataSource methods to your interceptor and all the other methods to the original bean.
For example code see the Spring Boot issue and discussion at https://github.com/spring-projects/spring-boot/issues/12592.

Spring JavaConfig + WebMvcConfigurerAdapter + @Autowired => NPE

I have an application with 2 contexts: a parent for web-agnostic business logic and a child context (implicitly created by the dispatcher servlet) for web logic.
My setup looks like:
@Configuration
public class BusinessConfig {

    @Bean
    public ObjectMapper jacksonMapper() { return new ObjectMapper(); }
}
and
@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Autowired
    private ObjectMapper objectMapper; // <- is null for some reason

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
        converter.setObjectMapper(objectMapper); // <- bang!
        converters.add(converter);
    }
}
I need the object mapper in the parent context, as I also use it in the security configuration. But can someone explain to me why the @Autowired objectMapper is null? It's created in the parent context (the fact that the parent exists is even logged by Spring at startup). Also, @Autowired has required=true by default, so it should not blow up in the configure method (it should have blown up during construction of the context if the bean wasn't there for some reason).
It seems to me that there might be some lifecycle problem in Spring, in the sense that it calls the overridden methods first and then autowires the dependencies... I have also tried to @Autowired the BusinessConfig itself (which should be perfectly legal according to the documentation); the result was the same (null).
What should I do to make this work?
Thanks in advance!
EDIT - ISSUE FOUND
I found the issue. Unfortunately it had nothing to do with WebMvcConfigurerAdapter or @Configuration. It was caused by premature initialization of the context, triggered by a missing static modifier on the propertyPlaceholderConfigurer bean method... I have created an issue in the Spring core JIRA (https://jira.spring.io/browse/SPR-14382).
What about simply renaming the bean declaration method to match the autowired bean?
@Configuration
public class BusinessConfig {

    @Bean
    public ObjectMapper objectMapper() { return new ObjectMapper(); }
}

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Autowired
    private ObjectMapper objectMapper;

    [...]
}
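If renaming the method is not an option, an alternative sketch is to keep the original method name and qualify the injection point by bean name:
@Configuration
public class BusinessConfig {

    @Bean(name = "jacksonMapper")
    public ObjectMapper jacksonMapper() { return new ObjectMapper(); }
}

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Autowired
    @Qualifier("jacksonMapper") // match the bean by its declared name
    private ObjectMapper objectMapper;
}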

@Bean annotation on a static method

Can anyone explain to me why @Bean on a static method returns 2 different instances?
I can understand that @Bean on a non-static method, like for class A, returns the same instance, because the default scope is singleton.
And if I try to inject class B with @Autowired in a Service it won't work, so it looks like it's not loaded by the Spring App Context. So using a class like D would be similar!?
I think not, because for @PropertySource we additionally need to use (for the placeholder):
@Bean
public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    return new PropertySourcesPlaceholderConfigurer();
}
and if we remove @Bean from this, it won't work.
Is there any other use case where it would be useful to use @Bean on a static method?
EXAMPLE:
when I run:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {Conf.class})
public class Test {

    @org.junit.Test
    public void test() {
    }
}
for
@Configuration
@ComponentScan
public class Conf {

    @Bean
    public A aaa() {
        return new A();
    }

    @Bean
    public static B bbb() {
        return new B();
    }

    @Bean
    @Scope("prototype")
    public C ccc() {
        return new C();
    }

    public static D ddd() {
        return new D();
    }

    @PostConstruct
    public void post() {
        System.out.println(aaa());
        System.out.println(aaa());
        System.out.println(bbb());
        System.out.println(bbb());
        System.out.println(ccc());
        System.out.println(ccc());
        System.out.println(ddd());
        System.out.println(ddd());
    }
}

public class A {
}

public class B {
}

public class C {
}

public class D {
}
I get:
uk.co.xxx.unit.A#6caf0677
uk.co.xxx.unit.A#6caf0677
uk.co.xxx.unit.B#413d1baf
uk.co.xxx.unit.B#16eb3ea3
uk.co.xxx.unit.C#353352b6
uk.co.xxx.unit.C#4681c175
uk.co.xxx.unit.D#57a78e3
uk.co.xxx.unit.D#402c4085
Because you create a new object on every method call to bbb(). Inter-bean dependencies (where you just call the bean-producing method) work because a proxy is created for your configuration class, and the proxy intercepts method calls to the bean methods to deliver the correct bean (singleton, prototype, etc.). However, static methods are not proxied, so when you call the static method, Spring doesn't know about it and you just get a regular Java object. With the PropertySourcesPlaceholderConfigurer it is different, because that method isn't called directly in that class; the bean will only be injected where it is used.
@Bean-annotated methods get proxied in order to provide the correct bean instance. Static methods do not get proxied. Hence, in your case, each bbb() invocation gives a new instance of B.
PropertySourcesPlaceholderConfigurer is a special kind of bean since it implements BeanFactoryPostProcessor. In the container lifecycle, a BeanFactoryPostProcessor object must be instantiated earlier than an object of a @Configuration-annotated class. Also, you don't need to call this static method.
See the Bootstrapping section in the javadoc: http://docs.spring.io/spring/docs/4.2.x/javadoc-api/org/springframework/context/annotation/Bean.html
Special consideration must be taken for @Bean methods that return Spring BeanFactoryPostProcessor (BFPP) types. Because BFPP objects must be instantiated very early in the container lifecycle, they can interfere with processing of annotations such as @Autowired, @Value, and @PostConstruct within @Configuration classes. To avoid these lifecycle issues, mark BFPP-returning @Bean methods as static.
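To illustrate the lifecycle point from the quote, a minimal sketch (the property file and property name are hypothetical):
@Configuration
@PropertySource("classpath:app.properties") // hypothetical properties file
public class PlaceholderConfig {

    // Must be static: the BFPP is instantiated before the configuration
    // class itself is processed; an instance method would force the config
    // class to be created too early, before @Value/@Autowired processing
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Value("${app.name}") // hypothetical property, resolved by the BFPP above
    private String appName;
}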
