In the Spring Batch framework, what is the difference between 'lazy-init=true' and 'scope=step'?

When I defined a 'MethodInvokingFactory' bean with 'scope=step', I got an error saying the type of the bean could not be determined. It worked fine when I replaced 'scope=step' with 'lazy-init=true'. As far as I know, both are used for late binding of beans, apart from that one difference. Are there any other differences between these two approaches? Also, is my usage correct?
Please let me know your thoughts about this.

To answer your question from a low-level perspective:
lazy-init="true" means that the bean will not be instantiated when the context is created, but will be created when it is referred to, e.g. by another bean. I think this is clear, also from @AravindA's comment.
A scoped bean works in a different manner. When the context is created, the bean is wrapped in an additional proxy object (by default created by CGLIB), and it is this proxy that is passed to the beans that refer to it (the proxy itself is by default a shared singleton). Each time a method is invoked on the proxy at runtime, Spring intercepts the call, asks the scope's factory to return the instance of the bean, and invokes the method on that instance. The factory in its turn may look up the "real" bean instance, e.g. in the HTTP request ("request" scope) or HTTP session ("session" scope), and/or create a new instance if necessary.
This late instantiation allows the scoped bean to be initialized with "runtime" (scope) values, e.g. values from the HTTP request/session, which are obviously undefined when the context is created. In particular, "step"-scoped beans are bound to a thread local (remember that steps run in parallel when partitioning). So scoped beans are dereferenced only when you call a method on them. Finally, one can easily break this elegant Spring "ideology" by calling a method on a scoped bean right after it is set on another bean (e.g. in the setter) :)
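For illustration, here is a minimal Java-config sketch of the two declarations side by side (ReportWriter and the bean names are made up, and @StepScope assumes Spring Batch's Java configuration is available):

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

@Configuration
public class BatchBeansConfig {

    // Stand-in for any bean you might declare; purely illustrative.
    public static class ReportWriter {
    }

    // lazy-init="true": still a plain singleton, just not created until the first
    // time another bean refers to it.
    @Bean
    @Lazy
    public ReportWriter lazyReportWriter() {
        return new ReportWriter();
    }

    // scope="step": referring beans are handed a proxy; the real ReportWriter is
    // created only when the step starts (once per step execution), which is what
    // makes late binding of job/step attributes possible.
    @Bean
    @StepScope
    public ReportWriter stepScopedReportWriter() {
        return new ReportWriter();
    }
}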

One thing to understand about lazy initialization is that even though a bean definition may be marked as lazy-initialized, if the lazy-initialized bean is a dependency of a singleton bean that is not lazy-initialized, then when the ApplicationContext eagerly pre-instantiates the singletons it has to satisfy all of the singleton's dependencies, one of which will be the lazy-initialized bean!
Using a scope of Step is required in order to use late binding, since the bean cannot actually be instantiated until the Step starts, which allows the attributes to be found. Because it is not part of the Spring container by default, the scope must be added explicitly, either by using the batch namespace or by including a bean definition explicitly for the StepScope (but not both):
<bean class="org.springframework.batch.core.scope.StepScope" />
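To illustrate the first point about lazy initialization, a small sketch (the class names are invented): even though ExpensiveResource is marked lazy, it is still created at startup, because an eager singleton depends on it.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

@Configuration
public class LazyDependencyConfig {

    public static class ExpensiveResource {
    }

    public static class ResourceClient {
        private final ExpensiveResource resource;
        public ResourceClient(ExpensiveResource resource) {
            this.resource = resource;
        }
    }

    // Marked lazy-init, but see the eager singleton below.
    @Bean
    @Lazy
    public ExpensiveResource expensiveResource() {
        System.out.println("Creating ExpensiveResource");
        return new ExpensiveResource();
    }

    // This non-lazy singleton depends on the lazy bean, so the container must still
    // create ExpensiveResource while pre-instantiating singletons at startup.
    @Bean
    public ResourceClient resourceClient(ExpensiveResource resource) {
        return new ResourceClient(resource);
    }
}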
The scope="step" has nothing to do with lazy initialization. It is used for late binding of parameters inside a "Step".

The step scope is specifically for late binding of job/step attributes, not really for late binding of beans in general, meaning the Spring bean context/factory will enhance step-scoped beans and look for attributes to set, e.g.
value="#{jobParameters['input.file.name']}"
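In Java config, the same late binding looks roughly like this (a sketch using Spring Batch's FlatFileItemReader; the bean and parameter names are arbitrary):

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class StepScopedReaderConfig {

    // The real reader cannot be created until the step runs, because only then is
    // jobParameters['input.file.name'] known; callers are wired with a step-scoped proxy.
    @Bean
    @StepScope
    public FlatFileItemReader<String> itemReader(
            @Value("#{jobParameters['input.file.name']}") String inputFile) {
        FlatFileItemReader<String> reader = new FlatFileItemReader<>();
        reader.setResource(new FileSystemResource(inputFile));
        reader.setLineMapper(new PassThroughLineMapper());
        return reader;
    }
}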

Related

Where does the dependency injection take place?

As far as I know:
The BeanFactory creates beans according to their definitions.
Then it passes each bean to a BeanPostProcessor, which invokes postProcessBeforeInitialization().
It receives the bean back and calls its init() method.
The BeanPostProcessor then invokes postProcessAfterInitialization().
The bean is finally configured and placed into the IoC container.
My question is: "At what stage does dependency injection take place?".
Really curious on how it works under the hood.
Dependency injection takes place after the bean has been instantiated by the BeanFactory, but before the bean's initialization callbacks run and before it is ready to be used by the application.
The process typically works as follows:
1. The BeanFactory creates an instance of the bean according to its definition. Constructor injection happens here, as part of instantiation.
2. The BeanFactory then populates the bean's properties and dependencies using setter (or field) injection. This step is where the actual injection of the remaining dependencies takes place.
3. Each registered BeanPostProcessor's postProcessBeforeInitialization() method is invoked, which allows pre-initialization processing such as setting up initial values or performing other setup tasks.
4. The bean's initialization callbacks run (afterPropertiesSet(), @PostConstruct or a custom init method).
5. Each BeanPostProcessor's postProcessAfterInitialization() method is invoked, which allows post-initialization processing such as validation, or wrapping the fully initialized bean in a proxy.
6. The bean is finally configured and made available in the IoC container.
Note that steps 3 and 5 only run if BeanPostProcessors are registered; they are optional, and dependency injection does not depend on them.
BeanPostProcessor is an interface that allows for additional processing to be done on bean instances before and after their initialization. It is useful for performing custom logic on beans during the initialization process, but it is not necessary for dependency injection to take place.
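To make the order visible, here is a small, self-contained sketch (all class names are invented) that logs each phase; running it prints the steps in the order described above:

import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LifecycleDemo {

    public static class Repository {
    }

    public static class Service implements InitializingBean {
        private Repository repository;

        public Service() {
            System.out.println("1. constructor (instantiation)");
        }

        @Autowired
        public void setRepository(Repository repository) {
            System.out.println("2. setter injection (dependencies populated)");
            this.repository = repository;
        }

        @Override
        public void afterPropertiesSet() {
            System.out.println("4. init callback (dependency present: " + (repository != null) + ")");
        }
    }

    @Bean
    public Repository repository() {
        return new Repository();
    }

    @Bean
    public Service service() {
        return new Service();
    }

    // Registered early by the container; only logs for the Service bean.
    @Bean
    public static BeanPostProcessor loggingPostProcessor() {
        return new BeanPostProcessor() {
            @Override
            public Object postProcessBeforeInitialization(Object bean, String beanName) {
                if (bean instanceof Service) {
                    System.out.println("3. postProcessBeforeInitialization");
                }
                return bean;
            }

            @Override
            public Object postProcessAfterInitialization(Object bean, String beanName) {
                if (bean instanceof Service) {
                    System.out.println("5. postProcessAfterInitialization");
                }
                return bean;
            }
        };
    }

    public static void main(String[] args) {
        new AnnotationConfigApplicationContext(LifecycleDemo.class).close();
    }
}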

Spring Beans Configurations

if we have
1- a case where class A is configured as a singleton and has a member of class B, which is configured as a prototype.
2- the opposite case, where class A is defined as a prototype and class B as a singleton.
How is the Spring container going to initialize and deal with these two situations when a request is made for beans A and B?
Please take a look at this answer - Spring session-scoped beans as dependencies in prototype beans?
You can always inject a bean of wider scope (e.g. a singleton) into a bean of narrower scope (e.g. a session-scoped bean), but to do it the other way around, you need a scoped-proxy.
This applies to your questions.
1. You are injecting a narrower-scoped bean into a wider-scoped bean (a prototype into a singleton). The singleton A receives a single instance of the prototype B when A is created and keeps it; if you need a fresh B on every use, you need a scoped proxy (or an ObjectFactory/lookup method) for B, as sketched below.
2. You are injecting a wider-scoped bean into a narrower-scoped bean (a singleton into a prototype). This simply works: every new prototype instance of A receives a reference to the same shared singleton B.
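A hedged sketch of case 1 in Java config (the class names A and B follow the question; ObjectProvider is one way to defer the lookup, a scoped proxy via @Scope(proxyMode = ScopedProxyMode.TARGET_CLASS) is the other):

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class ScopesConfig {

    // Case 1: prototype B used inside singleton A.
    public static class B {
    }

    public static class A {
        private final ObjectProvider<B> bProvider;

        public A(ObjectProvider<B> bProvider) {
            this.bProvider = bProvider;
        }

        public B freshB() {
            // A plain injection would hand A exactly one B, fixed at A's creation time.
            // Deferring the lookup (provider or scoped proxy) yields a new prototype
            // instance on each use.
            return bProvider.getObject();
        }
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public B b() {
        return new B();
    }

    @Bean
    public A a(ObjectProvider<B> bProvider) {
        return new A(bProvider);
    }

    // Case 2 needs nothing special: a singleton injected into a prototype is just a
    // shared reference handed to every new prototype instance.
}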

Inconvenience of @Value Annotation in Spring 3.0

I'm wondering: if I add a @Value annotation on a property, can the class that contains this property still be used by another class with a different value? Example:
MyClassUtil.java has
@Value("${some.value}")
private int _myProperty;
And of course there is a module.properties file that contains:
some.value=10
Another class, ClassA.java, wants to use this class with the value 10. OK, no problem.
But another class, ClassB.java, wants to use this class but with another value: 20. I cannot do this, if I'm not mistaken.
Before the @Value era, I could declare two beans in moduleContext.xml without any problem.
So, does @Value push you toward tight coupling?
You are right that the annotation configuration cannot be instance-specific. It is important to understand the concept of bean definitions in the bean factory.
Manual bean definition:
A single <bean> element in your XML config leads to a single bean definition. Multiple <bean> elements mean multiple definitions (regardless of bean type).
A single @Bean method within a @Configuration class leads to a single bean definition. Multiple @Bean methods mean multiple definitions (regardless of bean type).
However, when using component scan, classes annotated with @Component-like annotations are auto-registered as a single bean definition. There is no way to register a bean multiple times via component scanning.
Similarly, annotation configuration (@Value, @Autowired, etc.) is type-wide. Your bean instances are always augmented and processed with the same effect (e.g. injecting the same value). There is no way to alter the annotation processing behaviour from instance to instance.
Is this tight coupling? Not in the general understanding of the term - the bean factory (Spring) is still free to inject whatever it thinks is suitable. It is, however, more of a service-lookup pattern. This simplifies your life when working with domain-specific singletons, and most beans in an application context tend to be singletons, many of them domain-specific (controllers, services, DAOs). Framework singletons (non-project-specific reusable classes) should never use annotation-based configuration - in that scope, it is unwanted tight coupling.
If you need different bean instances, you should not use annotation configuration; define your beans manually instead.
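For example, instead of a type-wide @Value field, the value can become a constructor argument and two definitions can be registered manually (the names are illustrative; this assumes module.properties is registered as a property source, as in the question):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyClassUtilConfig {

    // The property is now per-instance state instead of a type-wide annotation.
    public static class MyClassUtil {
        private final int myProperty;
        public MyClassUtil(int myProperty) {
            this.myProperty = myProperty;
        }
        public int getMyProperty() {
            return myProperty;
        }
    }

    // Instance for ClassA: value comes from module.properties (some.value=10).
    @Bean
    public MyClassUtil myClassUtilForA(@Value("${some.value}") int someValue) {
        return new MyClassUtil(someValue);
    }

    // Instance for ClassB: same type, different value - a second bean definition.
    @Bean
    public MyClassUtil myClassUtilForB() {
        return new MyClassUtil(20);
    }
}

ClassA and ClassB can then select the instance they need, e.g. with @Qualifier("myClassUtilForA") or @Qualifier("myClassUtilForB").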

Spring - why initialization is called after setter method

Just curious.
Why is bean initialization done after the setter methods? I thought initialization would best be done before the setters - for example, doing validation to make sure a value is good before setting it on the instance member.
Why is BeanPostProcessor considered an after-initialization step when it also has a beforeInitialization method?
From my understanding, the calls to setters etc. are considered the actions that set up the initial state of the bean. You cannot do any meaningful initialization without the initial state of the bean being set. Just think what would happen if initialization were done before the setters (assume we are using setter injection, not constructor injection): the bean is created by calling the default constructor, and then initialization is called - what can you initialize at that point? The bean is simply a blank object without its dependencies properly injected. If you could do initialization in such a case, that kind of initialization could simply be put in your constructor.
For BeanPostProcessor, I believe the "post" does not refer to post-initialization. It simply lets you do post-processing after the bean is created (i.e. post-creation). Since it is common to do such post-processing at two different points - before and after the bean is initialized - there are two methods.
So initialization can use the values set on the bean.
Because it's a post-processor.
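For example (a made-up bean using the InitializingBean callback): the container calls the setter first, so the init step can validate what was injected before the bean is used:

import org.springframework.beans.factory.InitializingBean;

// Setter injection populates dataSourceUrl, then afterPropertiesSet() runs and can
// validate the injected state and do real initialization work with it.
public class ConnectionPool implements InitializingBean {

    private String dataSourceUrl;

    public void setDataSourceUrl(String dataSourceUrl) {
        this.dataSourceUrl = dataSourceUrl;
    }

    @Override
    public void afterPropertiesSet() {
        if (dataSourceUrl == null || dataSourceUrl.isEmpty()) {
            throw new IllegalStateException("dataSourceUrl must be set before initialization");
        }
        // ... open connections using dataSourceUrl ...
    }
}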

Overriding the bean defined in parent context in a child context

Our app has a requirement to support multi-tenancy. Each onboarded customer might potentially override one or more beans, or some properties of a bean, defined at the core platform level (common code/definitions). I am wondering what the best way to handle this is.
Spring allows you to redefine the same bean name multiple times, and takes the last bean definition processed for a given name to be the one that wins. So, for example, you could have an XML file defining your core beans and import it into a client-specific XML file that redefines some of those beans. It's a bit fragile, though, since there is no mechanism to specifically say "this bean definition is an override".
I've found that the cleanest way to handle this is using the @Bean syntax introduced in Spring 3. Rather than defining beans in XML, you define them in Java. So your core beans would be defined in one @Bean-annotated @Configuration class, and your client configs would subclass that and override the appropriate beans. This allows you to use the standard Java @Override annotation, explicitly indicating that a given bean definition is being overridden.
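A minimal sketch of that approach (class, bean, and message names are invented):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// CoreConfig.java - core platform beans shared by all tenants.
@Configuration
public class CoreConfig {

    public static class GreetingService {
        private final String message;
        public GreetingService(String message) {
            this.message = message;
        }
        public String greet() {
            return message;
        }
    }

    @Bean
    public GreetingService greetingService() {
        return new GreetingService("Hello from the core platform");
    }
}

// ClientAConfig.java - tenant-specific configuration; bootstrapping the context with
// this class registers all inherited core beans plus the overridden definition.
@Configuration
public class ClientAConfig extends CoreConfig {

    // @Override makes it explicit that this bean definition replaces the core one.
    @Override
    @Bean
    public GreetingService greetingService() {
        return new GreetingService("Hello from client A");
    }
}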
