Spring Boot: how to configure a set of implementations to inject?

In my Spring Boot app, I have events that should be reported in various ways, depending on the environment.
I defined an interface IReporter and several implementations, namely LogReporter, EmailReporter, and RpcReporter. New implementations may also appear in the future.
Now I want my events to be reported using a subset of the reporters. The subset must be specified in the app configuration, for example:
events.reporters=LogReporter,EmailReporter
or
events.reporters=${EVENT_REPORTERS}
or something similar.
As a result, I would like the component that handles events (say, class EventHandler) to have a List<IReporter> myReporters magically injected, containing only the necessary implementations.
And, of course, if a new implementation is created, the only thing that should need updating is the app configuration (not the code of EventHandler).
I'm familiar with injecting all available implementations, only a @Primary implementation, or a specific one picked with @Qualifier. But it looks like none of these solves my problem.

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;

class EventHandler {

    @Value("${events.reporters}")
    private List<String> requiredImplementations;

    @Autowired
    private List<IReporter> myReporters;

    // Keep only the reporters whose simple class name is listed in the configuration.
    public List<IReporter> getRequiredImplementations() {
        return myReporters.stream()
                .filter(reporter -> requiredImplementations.contains(reporter.getClass().getSimpleName()))
                .collect(Collectors.toList());
    }
}
EDIT:
One option I could think of, if we want the framework to autowire only the required beans, is to explore @ConditionalOnExpression.
For instance:
@Component
@ConditionalOnExpression("#{'${events.reporters}'.contains('LogReporter')}")
class LogReporter implements IReporter {
}

@Component
@ConditionalOnExpression("#{'${events.reporters}'.contains('EmailReporter')}")
class EmailReporter implements IReporter {
}
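With the beans registered conditionally like this, the handler needs no filtering logic of its own; a minimal sketch of what EventHandler could then look like (the constructor injection is my assumption, not from the original post):

@Component
class EventHandler {

    private final List<IReporter> reporters;

    // Only implementations whose conditions matched were registered,
    // so this list already contains exactly the configured reporters.
    EventHandler(List<IReporter> reporters) {
        this.reporters = reporters;
    }
}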

Related

Hack a @Component bean in the context at runtime and override one of its particular field-injected dependencies (no test-scope)

I have a case where a Spring AutoConfiguration class gets its dependencies through field injection and creates and exposes certain beans after interacting with them.
I would like to override one of those dependencies so that the exposed beans are initialized the way I expect.
Obviously I could disable the autoconfiguration class and duplicate it completely locally with my desired dependency, but that would not be a maintainable solution, since the amount of behaviour to reproduce is huge and it might break on each Spring update.
Is there any easy mechanism to let the autoconfiguration be loaded, and later on use the BeanFactory or something to re-inject a particular instance into a particular bean?
I cannot guarantee that this is the ideal solution, since it works at the level of whole autoconfiguration topics instead of single classes, but for most cases it will do the trick.
The autoconfiguration can be disabled for one topic, and any bean in that topic can then be initialized using a regular method in a @Configuration class (as usual).
See the list of autoconfiguration classes (= topics).
Syntax (to exclude a class from autoconfiguration):
@Configuration
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class})
public class MyConfiguration {

    @Bean
    public SpecificClass getSpecificClass() {
        // init the instance as you want
        return new SpecificClass();
    }
}

Factory design pattern and Spring

I am wondering what the current best practice is for using the factory pattern within the context of the Spring framework and its dependency injection. Specifically, I wonder whether the factory pattern is still relevant nowadays in light of Spring dependency injection. I did some searching and found some past discussion (Dependency Injection vs Factory Pattern), but there seem to be different views.
I have seen real-life projects use a Map to hold all the beans, relying on autowiring to create them. When a bean is needed, it is fetched from the map by its key.
public abstract class Service {
    // some methods
}

@Component
public class ServiceA extends Service {
    // implementation
}

@Component
public class ServiceB extends Service {
    // implementation
}

@Autowired
Map<String, Service> services;
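The map keys are the bean names, which by default are the uncapitalized class names; a short usage sketch:

// Pick an implementation by its bean name at runtime.
Service service = services.get("serviceA");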
But I see some differences between the two approaches.
With the map approach, all beans are created on application startup, and object creation is handled by the framework. It also implies there is only one bean for each type.
With the factory pattern, the factory class creates the object on request, and it can create a new object for each request.
I think a deeper question may be: when the Spring framework is used in a project, should we strive not to create any objects inside a class? In other words, should the factory pattern (or any creational design pattern) not be used, since Spring is supposed to be the central handler of object dependencies?
The answer to this question can be really deep and broad; I'll try to provide some points that hopefully will help.
First off, Spring stores its beans (singletons) in the ApplicationContext. Essentially this is the map you're talking about. In a nutshell, it allows getting a bean by name, type, etc.
The ApplicationContext, while being a really important concept, is not the whole of Spring; in fact, the Spring framework allows much more flexibility.
You say that using a map implies all the beans are created at the beginning of the application and that there is one instance per bean.
Spring, however, has a concept of lazy beans: beans that are actually created only when they're required for the first time. So Spring supports "delayed" bean initialization.
Spring also allows more than one instance of a bean per type, so this map is more "advanced". For example, you can create more than one implementation of an interface and declare both as beans. As long as you provide enough information about which bean should be injected into the class that uses them (for example with the help of qualifiers, which Spring supports), you're good to go. In addition, the Spring IoC container can inject all registered implementations of an interface into a list:
interface Foo {}

@Component
class FooImpl1 implements Foo {}

@Component
class FooImpl2 implements Foo {}

class Client {
    @Autowired
    List<Foo> allFoos;
}
Now you say:
With the factory pattern, the factory class creates the object on request, and it can create a new object for each request.
Actually, Spring can create objects per request. Not all beans have to be singletons; in general, Spring has the concept of scopes for this purpose.
For example, the prototype scope means that Spring will create a new bean instance upon each usage. One particularly interesting usage that Spring supports in a variety of ways is injecting a prototype bean into a singleton. Some solutions work exactly like a factory (read about the @Lookup annotation); others rely on a proxy auto-generated at runtime (like javax.inject.Provider). Prototype-scoped beans are not held in the application context, so here again Spring goes beyond a simple map abstraction.
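A minimal sketch of the @Lookup flavor (the class names are made up for illustration):

@Component
@Scope("prototype")
class Task {
}

@Component
abstract class TaskRunner {

    void run() {
        Task task = createTask(); // a fresh Task instance on every call
        // ... use the task ...
    }

    // Spring overrides this method at runtime so each call fetches
    // a new prototype-scoped Task from the container.
    @Lookup
    protected abstract Task createTask();
}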
The last feature you haven't mentioned is that sometimes, even for singletons, initialization can be a little more complicated than calling a constructor with parameters. Spring can address that with Java configuration:
@Configuration
public class MyConfig {

    @Bean
    public SomeComplicatedObject foo(@Value("...") String config, Bar bar) {
        // Let's pretend this object comes from some third party: it only has
        // a no-op constructor and you can't place Spring annotations on it
        // (basically you can't change it).
        SomeComplicatedObject obj = new SomeComplicatedObject();
        obj.setConfig(config);
        obj.setBar(bar);
        return obj;
    }
}
The method foo here initializes the SomeComplicatedObject and returns it. This can be used instead of a factory to integrate "legacy" code (Java configuration goes way beyond this, but that is out of scope for this question).
So, bottom line: Spring as an IoC container provides many different ways to deal with object creation; in particular, it can do everything that the factory design pattern offers.
Now I would also like to refer to your last sentence:
I think a deeper question may be: when the Spring framework is used in a project, should we strive not to create any objects inside a class? In other words, should the factory pattern (or any creational design pattern) not be used, since Spring is supposed to be the central handler of object dependencies?
Indeed, you don't have to use the factory pattern when using Spring, since (as I hope I have convinced you) it provides everything a factory can do and more.
I also agree that Spring is supposed to be the central handler of object dependencies (unless parts of the application are written in a different manner, so that you have to support both :) ).
I don't think we should avoid using "new" altogether; not everything should or can be a bean. But I do see (from my subjective experience, so this is arguable) that you end up using it much less, leaving the creation of most objects to Spring.
Should we avoid any creational design pattern? I don't think so. Sometimes you may opt to implement the builder pattern, for example: it is also a creational pattern, but Spring doesn't provide a similar abstraction.
I think if your project uses the Spring framework you should use it, although it depends on your project design. For example, you may use creational patterns alongside Spring IoC when you have abstraction layers that are not framework-dependent (framework-agnostic code):
interface ServiceFactory {
    Service create(String type);
}

@Component
class SpringServiceFactory implements ServiceFactory {

    @Autowired
    private ApplicationContext context;

    @Override
    public Service create(String type) {
        return context.getBean(type, Service.class);
    }
}
I also use the factory pattern when refactoring legacy, non-unit-testable code that uses the Spring framework, in order to make it unit testable.
// legacy service, impossible to mock
class LegacyApiClient implements Closeable {...}

@Component
class LegacyApiClientFactory {
    LegacyApiClient create(String endpoint) {
        return new LegacyApiClient(endpoint);
    }
}

@Component
class OtherService {
    private final String endpoint;
    private final LegacyApiClientFactory factory;

    OtherService(@Value("${post.endpoint}") String endpoint,
                 LegacyApiClientFactory factory) {
        this.endpoint = endpoint;
        this.factory = factory;
    }

    void doCall() throws IOException {
        try (LegacyApiClient client = factory.create(endpoint)) {
            client.postSomething();
        }
    }
}
....
// a random unit test
LegacyApiClient client = mock(LegacyApiClient.class);
LegacyApiClientFactory factory = mock(LegacyApiClientFactory.class);
when(factory.create(any())).thenReturn(client);

OtherService service = new OtherService("http://scxsc", factory);
service.doCall();
....

Spring 5 State Based Bean Injection - possible?

I am trying to find the 'controller' (not @Controller) within Spring 5.0 that is responsible for resolving which instance of an implementation to inject in Spring. I want to provide my own implementation of that controller (or extend it), so that I can add my own logic for state-based bean resolution.
For example, take some interface Foo with implementations FooImpl1 and FooImpl2, and some state Baz.
Then, when Baz = 1, I want my own logic to kick in and decide to provide FooImpl1 instead of FooImpl2 for a required injection of a Foo implementation.
Spring does this today; the logic seems to be:
Given the need to inject class X, find its implementations.
If only one X is found, use that.
If more than one X is found, use the @Primary one.
If more than one X is found and there is no @Primary, look for a @Qualifier.
If more than one X is found and there is no @Primary and no @Qualifier, attempt to match X with the property or parameter by name (i.e., don't inject Y if the parameter or property is named x and not y).
What I want to do is, at some point in the logic above, invoke my own disambiguation/resolution of the required implementation to be injected, based on my own logic and state.
So, before I go and dig into Spring to locate where that logic is implemented, I am hoping to find that it lives in some controller/service that I can extend; better still if it is backed by some configuration...
You can implement your own @Configuration that returns a Spring @Bean:
@Configuration
public class Config {

    private final Baz baz;

    @Autowired
    Config(Baz baz) {
        this.baz = baz;
    }

    @Bean
    public Foo getFoo() {
        // Assuming Baz exposes its state as an int (the question only says
        // "when Baz = 1"); switching on an arbitrary object would not compile.
        switch (baz.getValue()) {
            case 1:
                return new FooImpl1();
            default:
                return new FooImpl2();
        }
    }
}
Please also read the paragraph about Full @Configuration vs “lite” @Bean mode? The last paragraph states that:
In common scenarios, @Bean methods are to be declared within @Configuration classes, ensuring that “full” mode is always used and that cross-method references therefore get redirected to the container’s lifecycle management. This prevents the same @Bean method from accidentally being invoked through a regular Java call, which helps to reduce subtle bugs that can be hard to track down when operating in “lite” mode.
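To illustrate why that matters, a small sketch (Foo, Bar, and FullConfig are placeholders, not from the question):

class Foo {}

class Bar {
    Bar(Foo foo) {}
}

@Configuration
public class FullConfig {

    @Bean
    public Foo foo() {
        return new Foo();
    }

    @Bean
    public Bar bar() {
        // In "full" mode this call is intercepted and returns the container's
        // singleton Foo; in "lite" mode (@Bean methods on a plain @Component)
        // it would be an ordinary Java call creating a second Foo instance.
        return new Bar(foo());
    }
}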
It seems what I was looking for are the interfaces BeanFactory and, most likely, ConfigurableBeanFactory. There are still some things to work through, of course, but this is definitely the right direction.
For those immediately curious as to what I am talking about, see https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/beans/factory/config/ConfigurableBeanFactory.html
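As an illustration of the kind of hook this opens up, here is a hedged sketch using a BeanFactoryPostProcessor to steer resolution before any injection happens (the bean names fooImpl1/fooImpl2 and the baz stand-in are illustrative assumptions, not from the original post):

import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.stereotype.Component;

@Component
class FooResolver implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        int baz = 1; // stand-in for whatever state drives the decision
        String winner = (baz == 1) ? "fooImpl1" : "fooImpl2";
        // Mark the chosen bean definition as primary before injection happens,
        // so the normal resolution logic picks it everywhere Foo is required.
        beanFactory.getBeanDefinition(winner).setPrimary(true);
    }
}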

Java Configuration vs Component Scan Annotations

Java configuration allows us to manage bean creation within a configuration file. Classes annotated with @Component or @Service and picked up by component scanning do the same. However, I'm concerned about using these two mechanisms at the same time.
Should Java configuration and annotated component scans be avoided in the same project? I ask because the result is unclear in the following scenario:
@Configuration
public class MyConfig {

    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}
...
@Component
public class Foo {

    private int value;

    public Foo() {
    }

    public Foo(int value) {
        this.value = value;
    }
}
...
public class Consumer {

    @Autowired
    Foo foo;
    ...
}
So, in the above situation, will the Consumer get a Foo instance with value 500 or value 0? I've tested locally, and the Java-configured Foo (with value 500) appears to be created consistently. However, I'm concerned that my testing isn't thorough enough to be conclusive.
What is the real answer? Using both Java config and component scanning for @Component beans of the same type seems like a bad thing.
I think your concern is more like the one raised by the following use case:
You have a custom spring-starter library that has its own @Configuration classes and @Bean definitions. But if you have @Component/@Service classes in this library, you will need to explicitly @ComponentScan those packages from your service, since the default @ComponentScan (see @SpringBootApplication) scans from the main class down through all sub-packages of your app, but not the packages inside the external library. For that purpose, you only need @Bean definitions in your external library, and you inject these external configurations via an @EnableSomething annotation used on your app's main class (using @Import(YourConfigurationAnnotatedClass.class)), or via spring.factories in case you always need the external configuration to be used.
Of course you CAN have @Components in this library, but the required explicit @ComponentScan may lead to unintended behaviour in some cases, so I would recommend avoiding that.
So, to answer your question: you can use both approaches to define beans inside your app, but bean definitions outside your app (e.g. in a library) should be explicitly defined with @Bean inside a @Configuration class. A sketch of the import route is shown below.
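A sketch of that explicit-import route (the class names are illustrative):

// In the external library:
@Configuration
public class LibraryConfig {

    @Bean
    public LibraryService libraryService() {
        return new LibraryService();
    }
}

// In the application; no component scan of the library packages is needed:
@SpringBootApplication
@Import(LibraryConfig.class)
public class MyApplication {
}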
It is perfectly valid to have Java configuration and annotated component scanning in the same project, because they serve different purposes.
@Component (@Service, @Repository, etc.) is used to auto-detect and auto-configure beans.
The @Bean annotation is used to explicitly declare a single bean instead of letting Spring do it automatically.
You can do the following with @Bean, but it is not possible with @Component:
@Bean
public MyService myService(boolean someCondition) {
    if (someCondition) {
        return new MyServiceImpl1();
    } else {
        return new MyServiceImpl2();
    }
}
I haven't really faced a situation where both Java config and component scanning of a bean of the same type were required.
As per the Spring documentation:
To declare a bean, simply annotate a method with the @Bean annotation. When JavaConfig encounters such a method, it will execute that method and register the return value as a bean within a BeanFactory. By default, the bean name will be the same as the method name.
So, as per this, it is returning the correct Foo (with value 500).
In general, there is nothing wrong with combining component scanning and explicit bean definitions in the same application context. I tend to use component scanning where possible, and create the few beans that need more setup with @Bean methods.
There is no upside to including classes in the component scan when you create beans of their type explicitly. Component scanning can easily be targeted at certain classes and packages; if you design your packages accordingly, you can scan only the packages without "special" bean classes, or else use more advanced filters on scanning, as sketched below.
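A sketch of such a targeted scan (the package and class names are made up):

@Configuration
@ComponentScan(
        basePackages = "com.example.app",
        excludeFilters = @ComponentScan.Filter(
                type = FilterType.ASSIGNABLE_TYPE,
                classes = Foo.class)) // Foo is declared explicitly via @Bean instead
public class ScanConfig {
}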
In a quick look I didn't find any clear information about bean definition precedence in such a case. Typically there is a deterministic and fairly stable order in which these are processed, but since it is not documented, it could change in some future Spring version.

Spring autowiring based on service availability

I need to conditionally create one of three possible implementations of a service, depending on the environment detected by a Spring application at runtime. If Service A is available, I want to create a concrete implementation class that uses Service A as a dependency. If Service A is not available, I want to create an implementation using Service B as a dependency, and so on.
Classes which depend on the implementation will autowire the interface and not care which underlying service got selected for the particular environment.
My first stab at this was to implement multiple @Bean methods which either return a bean or null, depending on whether the service is available, and to then have a separate @Configuration class which applies @Autowired(required = false) to the two possible services, conditionally creating the implementation depending on which of the autowired fields is not null.
The problem is that with required = false, Spring doesn't appear to wait around for the candidates to be constructed; that is to say, the class which tries to pick the implementation might be constructed before one or both of the required = false beans, ensuring that one or both might always be null, regardless of whether they would have initialized correctly.
It kind of feels like I'm going against the grain at this point, so I'm looking for advice on the "right" way to do this sort of thing, where a whole set of beans might get switched out based on the availability of some outside service or environment.
Profiles don't look like the right answer, because I won't know which implementation to choose until after my service beans try to initialize; I certainly won't know at the time I create the context.
@Order doesn't achieve the goal either. Nor does @Conditional with a test for the existence of the bean (because it still might not be constructed yet). Same problem with FactoryBean: it does no good to check for the existence of beans that might not have been constructed at the time the FactoryBean is asked to create an instance.
What I really need is to create a bean based on the availability of other beans, but only AFTER those beans have at least had a chance to try to initialize.
Spring profiles are your friend. You can set the current profile via an environment variable, a command-line argument, and other methods. You can annotate a Spring-managed component so that it is only created for a certain profile; see the sketch below.
See Spring Profiles in the Spring documentation.
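For example (the profile and class names are illustrative):

// Created only when the "aws" profile is active.
@Component
@Profile("aws")
class AwsStorageService implements StorageService {
}

// Created for every profile except "aws".
@Component
@Profile("!aws")
class LocalStorageService implements StorageService {
}

The profile can then be activated with, for example, --spring.profiles.active=aws on the command line.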
Well, in this case it turned out that a tangential mistake caused the whole wrong behavior.
To give some background, my first, naive (but workable) approach looked like this:
@Autowired(required = false)
@Qualifier(RedisConfig.HISTORY)
private RLocalCachedMap<String, History> redisHistoryMap;

@Autowired(required = false)
@Qualifier(HazelcastConfig.HISTORY)
private IMap<String, History> hazelcastHistoryMap;

// RequestHistory is an interface
@Bean
public RequestHistory requestHistory() {
    if (redisHistoryMap != null) {
        return new RedisClusteredHistory(redisHistoryMap);
    } else if (hazelcastHistoryMap != null) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    } else {
        return new LocalRequestHistory(); // dumb hashmap
    }
}
In other @Configuration classes, if the beans that get @Autowired here aren't available (due to missing configuration, exceptions during initialization, etc.), the @Bean methods that create them return null.
The observed behavior was that this @Bean method was called after the RLocalCachedMap<> @Bean method, but before Spring attempted to create the IMap<> by calling its @Bean method. I had incorrectly thought this had something to do with required = false, but in fact that had nothing to do with it.
What actually happened was that I accidentally used the same constant for both @Bean names (and consequently both @Qualifiers), so presumably Spring couldn't tell the difference when calculating its dependency graph for this @Configuration class, because the two @Autowired beans appeared to be the same thing (they had the same name).
(There's a secondary reason for using @Qualifier in this case, which I won't go into here; suffice it to say it's possible to have many maps of the same type.)
Once I qualified the names properly, the code did exactly what I wanted, albeit in a way that's somewhat inelegant/ugly.
At some point I'll go back and see whether using @Conditional and @Primary instead of the if/else foulness looks more elegant and works just as well; a hedged sketch of that direction follows.
The lesson here: if you explicitly name beans, make absolutely sure your names are unique across your application, even if you plan to swap things around like this.
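For reference, a hedged sketch of that @Conditional/@Primary direction, reusing the class names from the example above (untested, and note that @ConditionalOnBean evaluation is sensitive to bean definition order):

@Bean
@Primary
@ConditionalOnBean(name = RedisConfig.HISTORY)
public RequestHistory redisRequestHistory(
        @Qualifier(RedisConfig.HISTORY) RLocalCachedMap<String, History> map) {
    return new RedisClusteredHistory(map);
}

// (A Hazelcast variant would follow the same pattern.)

@Bean
@ConditionalOnMissingBean(RequestHistory.class)
public RequestHistory localRequestHistory() {
    return new LocalRequestHistory(); // fallback: plain in-memory history
}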
