Spring Boot: Cannot enhance @Configuration bean definition 'beanNamePlaceholderRegistryPostProcessor'

I recently started getting this warning on start up of my Spring Boot application:
o.s.c.a.ConfigurationClassPostProcessor - Cannot enhance
@Configuration bean definition
'beanNamePlaceholderRegistryPostProcessor' since its singleton
instance has been created too early. The typical cause is a non-static
@Bean method with a BeanDefinitionRegistryPostProcessor return type:
Consider declaring such methods as 'static'.
I cannot figure out where it is coming from. I have no such classes ('beanNamePlaceholderRegistryPostProcessor', 'BeanDefinitionRegistryPostProcessor') in my app that I can find, so I'm not sure how to prevent this from happening.
Anyone have any ideas?
This question is slightly different from this one, as that one seems to involve a class that the user created themselves.

I finally discovered that beanNamePlaceholderRegistryPostProcessor is part of the Jasypt Spring Boot starter package.
I raised a ticket about it and the author replied immediately, indicating that it is nothing to worry about.
https://github.com/ulisesbocchio/jasypt-spring-boot/issues/45
You can silence the warning, if you want, by adding the following to your Logback configuration (if you use Logback):
<logger name="org.springframework.context.annotation.ConfigurationClassPostProcessor" level="ERROR"/>
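If the warning ever points at a @Configuration class you do control, the fix it hints at is to declare the offending @Bean method as static, so the post-processor can be created before the enclosing configuration class is instantiated. A minimal sketch (class and bean names are illustrative, not from jasypt-spring-boot):
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.BeanDefinitionRegistryPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyRegistryConfig {

    // static: the post-processor can be created without instantiating
    // MyRegistryConfig itself, so the configuration class can still be enhanced.
    @Bean
    public static BeanDefinitionRegistryPostProcessor myRegistryPostProcessor() {
        return new BeanDefinitionRegistryPostProcessor() {
            @Override
            public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) {
                // register or adjust bean definitions here
            }

            @Override
            public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
                // no-op
            }
        };
    }
}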

Related

Implementation provided for BatchConfigurer is not considered when using @EnableBatchProcessing(modular=true)

I am developing a sample application that uses Spring Batch with Spring Boot. My requirements are:
Have my own implementation of BasicBatchConfigurer so that I can configure an AsyncTaskExecutor and my own dataSource, as I am using SAP HANA as the database, for which databaseType is not supported.
I want to use @EnableBatchProcessing(modular=true) so that I can register multiple jobs and launch them with separate child contexts.
I have added all the required configurations. Without setting modular=true, the job is launched and works as expected. It initializes the beans defined in my implementation of BasicBatchConfigurer.
However, once modular=true is set, the beans from my implementation are not initialized.
The code is hosted here: https://github.com/VKJEY/spring-framework-evaluation
I debugged further into the issue:
It looks like, when we set modular=true, BatchConfigurationSelector uses ModularBatchConfiguration.
In ModularBatchConfiguration, there is a field Collection<BatchConfigurer> configurers, which is annotated with @Autowired.
I assume that this field is auto-initialized if I provide an implementation
of BatchConfigurer, as mentioned in the comments of the ModularBatchConfiguration class as well.
However, while debugging I realized that the above field is still null, because of which it loads DefaultBatchConfigurer and follows the default flow.
My question is why is that field configurers not being initialized in ModularBatchConfiguration? Am I missing something?
I am using Spring Boot 2.1.2.
My question is why is that field configurers not being initialized in ModularBatchConfiguration? Am I missing something?
You are hitting a lifecycle issue between the Spring Boot custom auto-configuration that you defined in the META-INF/spring.factories file and the Spring Batch configuration.
I debugged your code and here is how to fix the issue:
Remove
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.job.data.persistence.config.AsyncBatchConfigurer
from the META-INF/spring.factories file. This is not needed, as Spring Batch
will detect the AsyncBatchConfigurer when it is declared as a bean.
You can even remove the spring.factories file entirely.
Remove @ConditionalOnMissingBean(BatchConfigurer.class) from AsyncBatchConfigurer:
since you declared this class as a @Configuration class, it is also defined as a bean of type BatchConfigurer and will be detected by ModularBatchConfiguration.
With these two changes, the field configurers in ModularBatchConfiguration is correctly autowired with your AsyncBatchConfigurer.
As a side note, you don't need the AsyncBatchConfigurer#configurers method as Spring will do the work of injecting all BatchConfigurer beans in ModularBatchConfiguration.
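For reference, here is a minimal sketch of what such a configurer could look like after those two changes. It is only an illustration: it extends Spring Batch's DefaultBatchConfigurer rather than Boot's BasicBatchConfigurer, and the class name is taken from the question, not from the actual repository code.
import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

// No @ConditionalOnMissingBean and no spring.factories entry needed:
// being a @Configuration class makes this a BatchConfigurer bean, which
// ModularBatchConfiguration autowires into its 'configurers' collection.
@Configuration
public class AsyncBatchConfigurer extends DefaultBatchConfigurer {

    public AsyncBatchConfigurer(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    protected JobLauncher createJobLauncher() throws Exception {
        // Launch jobs asynchronously instead of on the calling thread.
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(getJobRepository());
        jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }
}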
Hope this helps.

Upgrade to Spring Boot 2.1.2 from 2.0.6 causes repository errors

I tried to upgrade a working application from Spring Boot 2.0.6 to 2.1.2. I had some troubles with tests after this change, but eventually got around that. I can successfully build the application from NetBeans (mvn clean install). However, when I try to run from a command line using mvn spring-boot:run, here is what I get:
APPLICATION FAILED TO START
Description:
The bean 'xxxRepository', defined in null, could not be
registered. A bean with that name has already been defined in null and
overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting
spring.main.allow-bean-definition-overriding=true
The interesting part is that every time I try to run it, the error is on a different repository, but always with the same message.
It would seem that this has to do with this change:
Bean Overriding
Bean overriding has been disabled by default to prevent a bean being
accidentally overridden. If you are relying on overriding, you will
need to set spring.main.allow-bean-definition-overriding to true.
Given that it is apparently affecting all my repositories, my guess is that there is a configuration problem somewhere. I followed the recommended action, but it actually made no difference. The problem is that I don't know what to change in the configuration to get this working again. I'm not even sure what to post that is pertinent to the issue. Any ideas on how to figure this out?
We ran into this issue upgrading from Spring Boot 2.0.x to 2.1.x.
I could "solve" this issue by allowing bean definition override with spring.main.allow-bean-definition-overriding: true but it felt like hiding the root cause.
In fact bean definition overriding used to hide poor configuration on our side.
After inspecting our #Configuration classes we were scanning packages containing our repositories twice, using #ComponentScan and #EnableJpaRepository on the same packages from different classes : once with filters #ComponentScan.Filter, once without.
Removing the second component scan fixed the issue.
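As an illustration of the shape of the problem (package and class names here are made up, not from our code base): two configuration classes each defining repository beans for the same packages, which Spring Boot 2.1 now rejects.
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

// First configuration class: scans with a filter.
@Configuration
@ComponentScan(basePackages = "com.example.app",
        excludeFilters = @ComponentScan.Filter(Configuration.class))
@EnableJpaRepositories(basePackages = "com.example.app.repository")
class ScanConfigA {
}

// Second configuration class: scans the very same packages again without the filter,
// so every repository bean gets a second, conflicting definition.
@Configuration
@ComponentScan(basePackages = "com.example.app")
@EnableJpaRepositories(basePackages = "com.example.app.repository")
class ScanConfigB {
}

// Fix: keep a single @ComponentScan/@EnableJpaRepositories pair for those packages.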
I have seen this error before when I had a class BOTH annotated with @Component or @Repository or @Service AND also registered as a @Bean in a config class. Is that your case too, by any chance?
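For example (hypothetical names), the same bean ends up defined twice like this, and with Spring Boot 2.1 the second definition is rejected unless overriding is re-enabled:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Repository;

// Component scanning registers this class as bean "myRepository"...
@Repository
class MyRepository {
}

// ...and this @Bean method registers a bean with the same name "myRepository" again.
@Configuration
class RepositoryConfig {

    @Bean
    public MyRepository myRepository() {
        return new MyRepository();
    }
}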
I got a similar problem, but it was only with the @NotNull annotation. When I upgraded Spring it stopped working. I tried a lot of things that I found here on SO, but the only thing that worked for me was to drop the database and run Spring again. I know that it sucks, but I didn't find another solution.

Injected bean reset to NULL in the Aspect

I am new to Spring AOP and AspectJ. I have seen various posts about an injected bean in an aspect being null, and I have run into a similar problem. I am still not clear how I should proceed to get past the problem I am currently encountering.
Issue: Currently we are using Spring 3.2.3 and all injection is through annotations. In my case, the dependent bean is injected properly by Spring, but at the point of execution the injected bean is NULL. BTW, this doesn't happen all the time, but what I can say is that the stack trace when it fails and when it succeeds is slightly different. When the injected bean is not null (I can successfully use the injected bean's service), the call to the before advice (in the aspect) always happens before the target method is called, as it should. When the injected bean is NULL, the call to the aspect is from the first statement of the target method. At this point, I think another aspect is instantiated and has no reference to the injected bean. Here is the aspect I have created:
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class Enable {

    private NameService nameService;

    @Autowired
    public void setNameService(NameService service) {
        // service is injected properly
        this.nameService = service;
    }

    @Before("execution(* *.*(..))")
    public void callBefore(JoinPoint jp) {
        // sometimes nameService is null and sometimes it is not
        this.nameService.lookup(...);
    }
}
Examining the various posts, one way to get around this (as suggested in the posts) is to configure the aspect in the XML configuration file, use factory-method="aspectOf", and in that configuration inject the reference to the NameService bean as a property. Our whole project uses annotation-based injection (as stated earlier). Assuming I can still configure the above aspect in an XML configuration file, how can I get the NameService bean id so that I can add it to the configuration? I also saw a post about the @Configurable annotation, but I assume that is for objects created outside the Spring IoC container.
Currently, the aspects are woven using the AspectJ compile option in pom.xml. Our root-context.xml contains the entry <context:annotation-config/>, and the aspect is registered in the Spring IoC container because component scanning is turned on for the package where the aspect resides. Any help will be appreciated.
This is a common error when using aspects in Spring. You should add
<context:spring-configured/>
and
<aop:aspectj-autoproxy />
to your appContext.xml, and also annotate the aspect class:
@Configurable
@Aspect
public class Enable
aspectOf is another way to do the above, but I prefer to use the context support.
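Put together, the context file would contain something like the following sketch (it assumes only the aop and context namespaces are needed; adjust to your existing appContext.xml):
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- enables @Autowired/@Resource processing and @Configurable support -->
    <context:annotation-config/>
    <context:spring-configured/>
    <aop:aspectj-autoproxy/>

    <!-- component scanning for the aspect package stays as before -->

</beans>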
It might be too late to answer this question, but I have come across the same situation and I fixed it as below:
1) Have a setter and getter for "NameService" in your aspect class.
2) Mark "NameService" with @Component("nameService").
3) Configure "nameService" in the XML configuration using setter injection.
4) Restart your server after making the changes.
This should resolve the problem of "NameService" being null in the aspect.
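A sketch of what step 3 could look like. Bean ids, packages, and the NameService implementation class are assumptions; factory-method="aspectOf" is the approach already mentioned in the question, needed because the AspectJ-woven aspect instance is not created by Spring itself:
<!-- the service to inject (implementation class name is illustrative) -->
<bean id="nameService" class="com.example.DefaultNameService"/>

<!-- obtain the woven aspect instance via its static aspectOf() method and inject the service -->
<bean id="enableAspect" class="com.example.Enable" factory-method="aspectOf">
    <property name="nameService" ref="nameService"/>
</bean>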

@Qualifier and @Resource don't work when running a test case under the Spring test framework

I have a test case which has a dependency on 'ticketDao', like below:
import javax.annotation.Resource;
import org.springframework.beans.factory.annotation.Qualifier;

public class LfnSaleCancellationIntegrationTest extends BaseIntegrationTest {
    //@Resource(name = "baseTicketDao")
    private BaseTicketDao ticketDao;
    ....
    public void setTicketDao(@Qualifier("baseTicketDao") BaseTicketDao ticketDao) {
        this.ticketDao = ticketDao;
    }
}
and BaseIntegrationTest extends the Spring test framework's AbstractJpaTests; Spring is v3.0.5.
When I run this test case, I get an exception similar to this:
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException:
No unique bean of type [com.mpos.lottery.te.gamespec.sale.dao.BaseTicketDao]
is defined: expected single matching bean but found 2:
[baseTicketDao, extraballTicketDao]
My project has evolved over a long time; in fact, when I encountered this exception the first time, @Qualifier solved it. The project has changed a lot since then, but I really have no idea why @Qualifier and @Resource don't work any more.
If I remove the 'ticketDao' dependency, the test case passes. I am wondering whether some change in the Spring configuration could cause this exception, or ... I have googled a lot, but it seems no one else has faced such a problem. Please give your comments, thanks very much!
You are using AbstractJpaTests, which is part of the old Spring test framework and an (indirect) subclass of AbstractDependencyInjectionSpringContextTests. By default the injection is not annotation-based; instead it discovers setters and fields and attempts injection by type. It would be recommended to switch to the newer annotation-based tests; refer to the Spring documentation for details.
As a workaround, try changing the autowire mode: call this.setAutowireMode(AutowireCapableBeanFactory.AUTOWIRE_BY_NAME) in the test constructor, rename your field to baseTicketDao, and remove the setter.
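A minimal sketch of that workaround, under the assumption that by-name injection in this old framework goes through bean properties (setters), so the property name has to match the bean id baseTicketDao:
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;

import com.mpos.lottery.te.gamespec.sale.dao.BaseTicketDao;

public class LfnSaleCancellationIntegrationTest extends BaseIntegrationTest {

    private BaseTicketDao baseTicketDao;

    public LfnSaleCancellationIntegrationTest() {
        // The old test framework wires by type by default; switch to by-name
        // so the two BaseTicketDao beans are no longer ambiguous.
        setAutowireMode(AutowireCapableBeanFactory.AUTOWIRE_BY_NAME);
    }

    // Property name matches the bean id "baseTicketDao", so by-name wiring picks that bean.
    public void setBaseTicketDao(BaseTicketDao baseTicketDao) {
        this.baseTicketDao = baseTicketDao;
    }
}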
I found out the reason. In my new project there is a <context:component-scan> statement in the Spring configuration file, which registers 4 BeanPostProcessors by default:
AutowiredAnnotationBeanPostProcessor (@Autowired)
RequiredAnnotationBeanPostProcessor (@Required)
CommonAnnotationBeanPostProcessor (JSR-250 annotations: @Resource, @PostConstruct, etc., and @WebServiceRef)
PersistenceAnnotationBeanPostProcessor (@PersistenceUnit and @PersistenceContext)
In my old project, on the other hand, only the default BeanPostProcessor (internalAutoProxyCreator) had been registered. My understanding is that AutowiredAnnotationBeanPostProcessor will always wire by type. Anyway, if I remove <context:component-scan>, my test case passes now.
In fact, I have migrated all my test cases to the Spring TestContext framework now, and <context:component-scan> must be declared, otherwise @Autowired, @Resource, etc. annotations will be ignored and you will get a great many NullPointerExceptions from those automatically injected dependencies.
NOTE: <context:annotation-config/> will register those 4 BeanPostProcessors too.
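In a test application context, that means either of these declarations is enough to get the annotation BeanPostProcessors registered (sketch; the base package is taken from the exception message and the namespace declarations are assumed to be in place):
<!-- registers the four annotation-handling BeanPostProcessors, no scanning -->
<context:annotation-config/>

<!-- or: registers the same post-processors and also scans the given package for components -->
<context:component-scan base-package="com.mpos.lottery.te"/>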

@Resource not injected sometimes in Spring

I have a Spring/Struts2 problem, and I have asked on the Spring forum but got no response:
@Resource not injected sometimes
For ease of reading I will repeat the question here; hopefully it's not considered spamming.
I have a very strange issue when I use @Resource to inject beans.
I am using Struts2 2.2.3.1 with the provided Spring plugin and Spring 3.0.0. (I am not able to upgrade to the newest version of Spring without knowing the proper cause, because all the programs are in production.)
The issue, or observed problem, is that when a Struts2 Action is created, the fields annotated with @Resource are supposed to have resources injected by Spring. However, sometimes, and only sometimes, one of the annotated resources is not injected, or the value is simply null, and this causes a NullPointerException. The point where the problem occurs is undetermined, which means the same set of programs running in different environments will result in different behaviors. Also, the resource that is not injected is not always the same.
For example, if there are actions A, B, C and environments E1 and E2, in E1 the A action might have this problem sometimes, and in E2 it might be action B that's having the problem. One thing that's certain is that if B is having the problem in E2, it will happen from time to time, and A and C won't have the problem, or at least the problem is just not observed on A and C. Moreover, if A has 5 @Resource fields, when the problem occurs the NPE may be thrown when accessing the first resource, but the next time it may be the second resource.
Here is what I mean by the problem only happening "sometimes". Suppose A is having this kind of problem, and I start the web server (Tomcat or WAS) and access A for the first time. If the problem occurs that first time, it will occur throughout this server start-up. If the problem doesn't occur the first time I access A, then throughout this server start-up the problem won't occur. Also, if this time it's the first resource that is not injected, it will be the same for this start-up.
Here is a bit of my application setup:
I use XML intermixed with annotation scanning. Basically, all Action, Service, and Dao classes are defined in XML, but all property definitions are left out; Spring processes the annotations on the actual class instead.
Sample definition:
Code:
<!-- have this in all XML files -->
<context:annotation-config/>
<!-- an action definition; all actions are scoped prototype. It will use adm.common.admBranchesManager in the action via a field annotated with @Resource -->
<bean id="adm.common.chooseBranchAction" class="com.bi.wms.adm.common.web.ChooseBranchAction" scope="prototype"></bean>
<!-- all services and daos are singletons and do not have any problem; all services/Managers are annotated with @Transactional. In the Actions we only code against the interface, not the actual concrete class -->
<bean id="adm.common.admBranchesManager" class="com.bi.wms.adm.common.service.impl.AdmBranchesManagerImpl"/>
<bean id="adm.common.admBranchesDao" class="com.bi.wms.adm.common.dao.jdbc.AdmBranchesDaoImpl"/>
Also, all actions extend an abstract action that has a resource field that's session-scoped.
Code:
<bean id="base.wms.login" class="com.bi.wms.common.model.WmsLogin" destroy-method="logout" scope="session">
<aop:scoped-proxy />
<property name="admUserSessionsManager" ref="adm.operation.admUserSessionsManager"/>
</bean>
Here is a part of a sample action:
Code:
// this class is just a sample, not the actual one that's having the problem; AbstractWmsAction is the class that has the session-scoped bean
public class AdmWmsControlAction extends AbstractWmsAction
{
    @Resource(name = "adm.operation.admWmsBatchGroupsManager")
    private AdmWmsBatchGroupsManager admWmsBatchGroupsManager;

    @Resource(name = "adm.operation.admWmsControlManager")
    private AdmWmsControlManager admWmsControlManager;

    // sometimes we use setters for injecting, but that doesn't stop the problem from happening
    //....omit
}
Don't know if anyone had this kind of issue.
If additional information is needed, I will do my best to provide.
Thanks
I've seen similar issues before with Spring using annotations, which is why I prefer to use @Transactional only in limited situations. Which classes are annotated with @Transactional: the implementation or the interface?
I've only looked at this quickly through the debugger, but I think you may be ending up with multiple beans: one Spring proxy created for the class annotated with @Transactional, and the other defined in your application context. I would suggest that if you are defining the service in the Spring configuration file, you also define the transactional proxy there as well and inject the proxy by name with your @Resource annotation, OR remove the configuration in XML and inject by type in your annotation. This way, you'll be notified by Spring if you have duplicate matches by type that cannot be resolved.
Just to make a note here:
after all this time I have not found the real cause and solution.
However, time has proven that upgrading to Spring 2.3.3 or later solves the problem, or at least the problem has not appeared again yet.
