I'm new to Spring.
I'm working on a library project which depends on spring-context.
@Scope(value = "##?")
@Service
public class MyService {

    @PostConstruct
    private void constructed() {
    }

    @PreDestroy
    private void destroying() {
        resource.clear();
    }

    public void doSome() throws IOException {
        // try{}finally{} is not the case
        resource = getSome();
        doSome(resource); // may throw an IOException
        resource.clear();
    }

    private transient MyResource resource;
}
I want to free the resource every time this instance is destroyed.
According to @Scope, there are four options I can choose from.
ConfigurableBeanFactory.SCOPE_SINGLETON
ConfigurableBeanFactory.SCOPE_PROTOTYPE
WebApplicationContext.SCOPE_REQUEST
WebApplicationContext.SCOPE_SESSION
I found that WebApplicationContext is not available in my dependency tree (I don't depend on spring-webmvc).
I'm planning to choose ConfigurableBeanFactory.SCOPE_PROTOTYPE.
Is it true that the scope I choose will make MyService safe? I mean, is it guaranteed that no two clients can be injected with the same service instance? Will the Spring container take care of it?
Indeed, the Request, Session, Global-session and Application scopes are only available within a web-aware application context.
Singleton (a single instance per Spring container) is the default scope in Spring, so using the prototype scope guarantees that a new instance is created and returned to each client. So yes, prototype is what you need in this case.
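For illustration, a minimal sketch of how the prototype scope could be declared on the service from the question (only the annotations are the point here; the body is elided):

import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;

@Service
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) // a new instance per injection point / getBean() call
public class MyService {
    // ... fields and methods as in the question
}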
Related
I'm working on a legacy Spring Boot application where I would like to use dependency injection with some code that exists outside the application context. One part of the application comes as a separate JAR file and cannot be modified, but I am able to modify some classes that are instantiated in that part. Here is how I'm planning to do this:
class ServiceHolder {
    private static FooService fooService;

    public static FooService getFooService() { return fooService; }
    public static void setFooService(FooService service) { fooService = service; }
}
@Bean
@Profile("production")
FooService fooService() {
    var service = new ProductionFooService();
    ServiceHolder.setFooService(service);
    return service;
}
public class LegacyPojo {
    private final FooService fooService;

    public LegacyPojo() {
        fooService = ServiceHolder.getFooService();
    }

    // ... some business logic
}
I'm worried about possible visibility problems when different requests running on separate threads call new LegacyPojo() and read the FooService instance.
So my question is: should I declare ServiceHolder#getFooService and ServiceHolder#setFooService synchronized or not?
There are a lot of other things you can do here to make this safer. Don't you think you could pass the FooService instance into LegacyPojo through its constructor? It would be less coupled.
Another thing you can do is control the instances of FooService: treat it as a singleton by declaring it as a static property on ServiceHolder and not exposing a setter at all. From the way you describe it, I think you want a single instance of FooService.
Even though LegacyPojo is a POJO, you don't need to create a getter for FooService.
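For illustration, a minimal sketch of the constructor-based variant suggested above (a hypothetical rewrite; it only works if the code that instantiates LegacyPojo can pass the dependency in):

public class LegacyPojo {
    private final FooService fooService;

    // constructor injection keeps LegacyPojo decoupled from the static ServiceHolder
    public LegacyPojo(FooService fooService) {
        this.fooService = fooService;
    }

    // ... some business logic
}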
Once you use ServiceHolder.setFooService(service); you may do an implementation like this:
class ServiceHolder {
    private static FooService fooService;

    public static void setFooService(FooService newFooService) {
        if (fooService == null) {
            fooService = newFooService;
        }
    }
}
That way, only the first instance of FooService is set and it will not be changed afterwards; of course, you can put any condition you like into setFooService in ServiceHolder.
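If the visibility concern from the question should also be covered inside the holder itself, a thread-safe set-once variant could use AtomicReference (my own sketch, not part of the original answer):

import java.util.concurrent.atomic.AtomicReference;

class ServiceHolder {
    private static final AtomicReference<FooService> FOO_SERVICE = new AtomicReference<>();

    public static void setFooService(FooService newFooService) {
        // only the first non-null value wins, and it is safely published to all threads
        FOO_SERVICE.compareAndSet(null, newFooService);
    }

    public static FooService getFooService() {
        return FOO_SERVICE.get();
    }
}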
It would work without any synchronization, because the singleton bean is instantiated in a critical section, inside a synchronized block. In the DefaultSingletonBeanRegistry class there is a method getSingleton which, according to its Javadoc:
/**
* Return the (raw) singleton object registered under the given name,
* creating and registering a new one if none registered yet.
* ...
*/
At the very beginning of this method, the critical section starts with synchronized (this.singletonObjects). So the effect of the ServiceHolder.setFooService(service) call will be visible to all threads after the critical section is left.
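Paraphrased, and not the actual Spring source, the guarded creation that answer refers to looks roughly like this self-contained sketch (Supplier stands in for Spring's ObjectFactory):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// paraphrased sketch of the idea behind DefaultSingletonBeanRegistry#getSingleton;
// the @Bean method that calls ServiceHolder.setFooService(service) runs inside the synchronized block
public class SingletonRegistrySketch {
    private final Map<String, Object> singletonObjects = new HashMap<>();

    public Object getSingleton(String beanName, Supplier<Object> singletonFactory) {
        synchronized (this.singletonObjects) {
            Object singletonObject = this.singletonObjects.get(beanName);
            if (singletonObject == null) {
                singletonObject = singletonFactory.get();
                this.singletonObjects.put(beanName, singletonObject);
            }
            return singletonObject;
        }
    }
}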
I have a Hibernate Search ClassBridge where I want to use @Inject to inject a Spring 4.1 managed DAO/service class. I have annotated the ClassBridge with @Configurable. I noticed that Spring 4.2 adds some additional lifecycle methods that might do the trick, but I'm on Spring 4.1.
The goal of this is to store a custom field into the index document based on a query result.
However, since the DAO depends on the SessionFactory being initialized, it doesn't get injected, because it doesn't exist yet when the @Configurable bean gets processed.
Any suggestions on how to achieve this?
You might try to create a custom field bridge provider, which could get hold of the Spring application context through some static method. When provideFieldBridge() is called, you may return a Spring-ified instance from the application context, assuming the timing is better and the DAO bean is available by then.
Not sure whether it'd fly, but it may be worth trying.
Hibernate Search 5.8.0 includes support for bean injection; see the issue https://hibernate.atlassian.net/browse/HSEARCH-1316.
However, I couldn't make it work in my application, so I implemented a workaround.
I created an application context provider to obtain the Spring application context.
public class ApplicationContextProvider implements ApplicationContextAware {

    private static ApplicationContext context;

    public static ApplicationContext getApplicationContext() {
        return context;
    }

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        ApplicationContextProvider.context = context;
    }
}
I have added it to the configuration class.
@Configuration
public class RootConfig {

    @Bean
    public ApplicationContextProvider applicationContextProvider() {
        return new ApplicationContextProvider();
    }
}
Finally, I have used it in a bridge to retrieve the Spring beans.
public class AttachmentTikaBridge extends TikaBridge {

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        // get the service bean from the application context provider
        // (to be replaced once Hibernate Search bridges support bean injection)
        ApplicationContext applicationContext = ApplicationContextProvider.getApplicationContext();
        ExampleService exampleService = applicationContext.getBean(ExampleService.class);
        // use exampleService ...
        super.set(name, value, document, luceneOptions);
    }
}
I think this workaround is quite simple in comparison with other solutions, and it doesn't have any big side effect except that the bean lookup happens at runtime.
After spending 2 days on this issue I really can't make any more progress on my own. I am working on a standard web application with Spring for dependency injection and the like. I am also using Spring to cache several expensive methods I use a lot.
After I introduced Apache Shiro for the security layer, I experienced a strange issue where @Cacheable methods in a certain service no longer got cached. I have been able to strip the problem down to its core, but there's still a lot of code for you to look at - sorry for that...
First, I configure all relevant packages (all classes shown in the following are in one of those).
@Configuration
@ComponentScan(basePackages = {
        "my.package.config",
        "my.package.controllers",
        "my.package.security",
        "my.package.services",
})
public class AppConfiguration {
}
Here is the configuration file for caching.
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean(name = "cacheManager")
    public SimpleCacheManager cacheManager() {
        SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
        simpleCacheManager.setCaches(Arrays.asList(
                new ConcurrentMapCache("datetime")
        ));
        return simpleCacheManager;
    }
}
For my minimal example, I am using a very simple service that only returns the current timestamp. The Impl class is as simple as you would imagine.
public interface DateService {

    @Cacheable("datetime")
    LocalDateTime getCurrent();
}
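The Impl class itself is not shown in the question; a minimal sketch of what it could look like, consistent with the description above:

@Service
public class DateServiceImpl implements DateService {

    @Override
    public LocalDateTime getCurrent() {
        // no caching logic here; the caching proxy is supposed to wrap this call
        return LocalDateTime.now();
    }
}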
I inject this service into a controller.
@Controller
@RequestMapping("/v1/date")
public class DateController {

    @Autowired
    DateService dateService;

    @RequestMapping(value = "/current", method = RequestMethod.GET)
    @ResponseBody
    public ResponseEntity<String> getCurrent() {
        Subject s = SecurityUtils.getSubject();
        s.login(new MyToken());
        return new ResponseEntity<>(dateService.getCurrent().toString(), HttpStatus.OK);
    }
}
The application is set up and started via Jetty, and everything works as expected so far: when calling <api-url>/v1/date/current for the first time, the current timestamp is returned, and afterwards one always receives the cached result.
Now, I introduce Shiro with yet another config file.
@Configuration
public class ShiroSecurityConfiguration {

    @Bean
    @Autowired
    public DefaultSecurityManager securityManager(MyRealm realm) {
        List<Realm> realms = new ArrayList<>();
        // MyToken is a static stub for this example
        realm.setAuthenticationTokenClass(MyToken.class);
        realms.add(realm);
        DefaultSecurityManager manager = new DefaultSecurityManager(realms);
        SecurityUtils.setSecurityManager(manager);
        return manager;
    }

    // other Shiro related beans that are - at least to me - irrelevant here

    // EDIT 2: I figured out that the described problem only occurs with this bean
    // (transitively depending on DateService) in the application.
    // The bean is required for annotations such as @RequiresAuthentication to work.
    @Bean
    @Autowired
    public AuthorizationAttributeSourceAdvisor authorizationAttributeSourceAdvisor(DefaultSecurityManager securityManager) {
        AuthorizationAttributeSourceAdvisor advisor = new AuthorizationAttributeSourceAdvisor();
        advisor.setSecurityManager(securityManager);
        return advisor;
    }
}
Finally, here comes the realm which also depends on my service.
@Component
public class MyRealm extends AuthenticatingRealm {

    private static final String REALM_NAME = "MyRealm";

    @Autowired
    private DateService dateService;

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
        System.out.println("User authenticated at " + dateService.getCurrent());
        return new SimpleAuthenticationInfo("", token.getCredentials(), REALM_NAME);
    }
}
With that, the caching is broken in my entire application. There is no error message; it just doesn't use the cache anymore. I was able to implement a workaround, but I am now looking for a better solution and maybe also some advice to better understand the essence of my issue. So, here comes the workaround.
@Component
public class MyRealm extends AuthenticatingRealm {

    private static final String REALM_NAME = "MyRealm";

    private DateService dateService;

    @Autowired
    private ApplicationContext applicationContext;

    private void wireManually() {
        if (dateService == null) {
            dateService = applicationContext.getBean(DateService.class);
        }
    }

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
        wireManually();
        System.out.println("User authenticated at " + dateService.getCurrent());
        return new SimpleAuthenticationInfo("", token.getCredentials(), REALM_NAME);
    }
}
Now it's back to working, and I was able to debug the reason for that. Shiro, and hence MyRealm, gets initialized very early, even before the whole caching setup with my SimpleCacheManager and all the related beans (cacheInterceptor etc.) is loaded. Therefore, when the service is injected into the realm via @Autowired, there is no proxy yet to wrap around it. With the workaround shown above, the service is not looked up until everything is set up properly and the first request is being served, and then there is no problem.
Simply put, as soon as I make MyRealm depend on DateService (annotating the last version of MyRealm with @DependsOn("dateServiceImpl") is enough to break the application), it gets initialized too early, i.e. before caching is set up.
So I would need to either postpone the initialization of MyRealm (but I don't know how to do that; I tried @DependsOn("cacheManager"), but that doesn't help, as the other beans required for caching are still loaded later), or, which is the same thing from another perspective, make sure the whole caching infrastructure (I am not enough of an expert to describe it in detail) is initialized earlier. Unfortunately, I don't know how to do that either...
Thanks in advance to everyone who made it to this point. Looking forward to any input, no matter if it's an idea to fix the code in a better way or an explanation why exactly Spring can't get this right on its own.
I finally figured out what the problem is and can at least explain its cause in more detail, even though my proposed solution is still a bit hacky.
Enabling the caching aspect in Spring introduces an org.springframework.cache.interceptor.CacheInterceptor, which is essentially an org.aopalliance.aop.Advice used by an org.springframework.cache.interceptor.BeanFactoryCacheOperationSourceAdvisor that implements org.springframework.aop.Advisor.
The org.apache.shiro.spring.security.interceptor.AuthorizationAttributeSourceAdvisor I introduced for Shiro is another Advisor which transitively depends on the DateService via DefaultSecurityManager and MyRealm.
So I have two Advisors for two different aspects - caching and security - of which the one for security is initialized first. In fact, whenever I introduce any Advisor that depends on DateService - even if it's only a dummy implementation as in the following example - the caching doesn't work anymore, for the same reason it broke when adding Shiro: the DateService is loaded before the caching aspect is ready, so the aspect cannot be applied to it.
@Bean
@Autowired
public Advisor testAdvisor(DateService dateService) {
    // the injected DateService is never used; the dependency alone is enough to break caching
    return new StaticMethodMatcherPointcutAdvisor() {
        @Override
        public boolean matches(Method method, Class<?> targetClass) {
            return false;
        }
    };
}
Hence, the only proper fix is to change the order of aspect initialization. I am aware of the @Order(Ordered.LOWEST_PRECEDENCE) and @Order(Ordered.HIGHEST_PRECEDENCE) annotations for the case where multiple Advisors are applicable at a specific join point, but that is not the case for me, so this doesn't help; the order of initialization matters for other reasons.
Adding the following code in DateServiceImpl actually solves the problem:
@Autowired
BeanFactoryCacheOperationSourceAdvisor waitForCachingAspect;
With that, the service always waits for the cache infrastructure before it can be initialized, even though this dependency is not used anywhere in the implementation. Now everything works as it should, because the dependency chain includes Shiro --> DateService --> Cache, which makes the Shiro Advisor wait long enough.
It is still not as nice and clean as I would like it to be, but nevertheless, I think this explanation helps to understand the core of the problem. "How can I change the order in which Advisors are initialized in Spring?" is a separate question I posted here.
Since Spring 4, @Lazy can be used to achieve the same behavior as in the original question in a more declarative way (see the Spring 4 JavaDoc and compare it with earlier versions).
I tested this and it works.
@Component
public class MyRealm extends AuthenticatingRealm {

    private static final String REALM_NAME = "MyRealm";

    @Autowired
    @Lazy
    private DateService dateService;

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
        System.out.println("User authenticated at " + dateService.getCurrent());
        return new SimpleAuthenticationInfo("", token.getCredentials(), REALM_NAME);
    }
}
I'm trying to initialize some components in my Jersey application in the Application constructor (the class that inherits from ResourceConfig). It looks like this:
public Application(@Context ServletContext context,
                   @Context ServiceLocator locator)...
When I try to use the locator at any point, I still can't create instances of things that I have registered in an AbstractBinder using the locator.create(MyThing.class) method.
I'm certain that they are bound correctly, because they are injected properly into my resource classes via the @Inject field annotation.
The difference is that the Jersey/HK2 framework is instantiating my resource classes (as expected, since they're in my package scan path), but I cannot seem to leverage the ServiceLocator through code.
My ultimate goal is to have other non-Jersey classes injected when they have the @Inject annotation, e.g. I have a worker class that needs to be injected with the configured database access layer. I want to say
locator.create(AWorker.class)
and have it injected.
How do I get the real ServiceLocator that will inject everything I've already registered/bound with my Binder? (Or should I be using something other than ServiceLocator?)
I am going to assume you are starting up a servlet, have a class extending org.glassfish.jersey.server.ResourceConfig, and your bindings are correctly registered (e.g. using a Binder and registerInstances). If you then want to access the ServiceLocator in order to perform additional initialization, you have two choices.
One approach is to register a ContainerLifecycleListener (as seen in this post):
// In the Application extends ResourceConfig constructor
register(new ContainerLifecycleListener() {
    @Override
    public void onStartup(final Container container) {
        // access the ServiceLocator here
        final ServiceLocator serviceLocator = container.getApplicationHandler()
                .getInjectionManager().getInstance(ServiceLocator.class);
        // Perform whatever with serviceLocator
    }

    @Override
    public void onReload(final Container container) { /* ... */ }

    @Override
    public void onShutdown(final Container container) { /* ... */ }
});
The second approach is to use a Feature, which can also be auto-discovered using @Provider:
@Provider
public final class StartupListener implements Feature {

    private final ServiceLocator sl;

    @Inject
    public StartupListener(final ServiceLocator sl) {
        this.sl = sl;
    }

    @Override
    public boolean configure(final FeatureContext context) {
        // Perform whatever action with the injected ServiceLocator
        return true;
    }
}
How are you starting up your container? If you are using ApplicationHandler, you can just call handler.getServiceLocator(). The ServiceLocator is, indeed, what you want to be using to access your dependencies.
If you are starting up a servlet, I found that the best way to get access to the service locator was to have a Jersey feature set it on my startup class:
private static final class LocatorSetFeature implements Feature {

    private final ServiceLocator scopedLocator;

    @Inject
    private LocatorSetFeature(ServiceLocator scopedLocator) {
        this.scopedLocator = scopedLocator;
    }

    @Override
    public boolean configure(FeatureContext context) {
        locator = this.scopedLocator; // this would set our member locator variable
        return true;
    }
}
The feature would just be registered with our resource config via config.register(new LocatorSetFeature()).
It would be important to tie the startup of other components to the lifecycle of your container, so this still feels a bit hacky. You might consider adding those classes as first-class dependencies in the HK2 container and simply injecting the appropriate dependencies into your third-party classes (using a Binder, for example).
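For illustration, a minimal sketch of that Binder-based approach, using AWorker and a database-access class from the question as hypothetical names:

import javax.inject.Singleton;
import org.glassfish.hk2.utilities.binding.AbstractBinder;

// hypothetical binder: makes the worker and its dependency first-class HK2 services
public class AppBinder extends AbstractBinder {
    @Override
    protected void configure() {
        bindAsContract(DatabaseAccessLayer.class).in(Singleton.class);
        bindAsContract(AWorker.class);
    }
}

Registered with register(new AppBinder()) in the ResourceConfig constructor, AWorker can then be obtained fully injected via serviceLocator.getService(AWorker.class) instead of create().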
I have a web application with Spring 3.0. I need to run a class with a main method from a cron job; the class uses beans defined in the application context XML (using component-scan annotations). I have my main class in the same src directory.
How can I inject beans from the web context into the main method? I tried to do it using
ApplicationContext context = new ClassPathXmlApplicationContext("appservlet.xml");
I tried to use @Autowired and it returns a null bean. So I used an ApplicationContext, and this creates a new context (as expected) when I run the main method. But is it possible to use the existing beans from the container?
@Autowired
static DAO dao;

public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("xman-servlet.xml");
    TableClient client = context.getBean(TableClient.class);
    client.start(context);
}
You cannot inject a Spring bean into any object that was not created by Spring. Another way to say that is: Spring can only inject into objects that it manages.
Since you are creating the context yourself, you will need to call getBean for your DAO object.
Check out Spring Batch; it may be useful to you.
You may use a Spring context for your main application and reuse the same bean definitions as the webapp. You could even reuse some Spring XML configuration files, provided they don't define beans which only make sense in a webapp context (request scope, web controllers, etc.).
But you'll get different instances, since you'll have two JVMs running. If you really want to reuse the same bean instances, then your main class should remotely call some method of a bean in your webapp, using a web service or HttpInvoker.
Try with this Main:
public class Main {

    public static void main(String[] args) {
        Main p = new Main();
        p.start(args);
    }

    @Autowired
    private MyBean myBean;

    private void start(String[] args) {
        ApplicationContext context =
                new ClassPathXmlApplicationContext("classpath*:/META-INF/spring/applicationContext*.xml");
        // Main itself is not managed by Spring, so ask the context to autowire this instance
        context.getAutowireCapableBeanFactory().autowireBean(this);
        System.out.println("The method of my Bean: " + myBean.getStr());
    }
}
And this Bean:
@Service
public class MyBean {

    public String getStr() {
        return "mybean!";
    }
}
In order to address this issue, I have created https://jira.springsource.org/browse/SPR-9044. If you like the proposed approach, please vote for it.
Spring Boot provides an official solution for this. Download a skeleton from
https://start.spring.io/
and make sure the packaging in pom.xml is set to jar. As long as you don't include any web dependency, the application will remain a console app.
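A minimal sketch of what such a console entry point could look like, reusing the MyBean name from the earlier answer as an assumption (CommandLineRunner is Spring Boot's hook for code that runs once the context is ready):

@SpringBootApplication
public class ConsoleApplication implements CommandLineRunner {

    @Autowired
    private MyBean myBean; // injected by the container, unlike in a plain main class

    public static void main(String[] args) {
        SpringApplication.run(ConsoleApplication.class, args);
    }

    @Override
    public void run(String... args) {
        System.out.println("The method of my Bean: " + myBean.getStr());
    }
}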