In Spring Java configuration, suppose I want to reuse a @Bean in another @Bean definition. I can do this either in one file:
@Bean
public A buildA() {
    return new A();
}

@Bean
public B buildB() {
    return new B(buildA());
}
or I can configure A in one file and autowire it in another file, like this (field injection for brevity):
@Autowired
private A a;

@Bean
public B buildB() {
    return new B(a);
}
I wonder if the two possibilities are exactly the same. To me it looks as if the first version might instantiate A twice, while the second doesn't.
I am asking because in my particular use case, A establishes a connection to a messaging broker and I have several Bs that consume the stream (I use .toReactivePublisher() from Spring Integration in A), and I don't want to connect to the broker twice or more.
Yes, they're exactly the same. Multiple calls to a @Bean-annotated method will not create multiple instances of the same bean.
For an explanation on why it doesn't happen, please see this answer.
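In short: Spring enhances @Configuration classes with CGLIB, so a call to a @Bean method from inside the same class is intercepted and routed through the container instead of executing the method body again. A minimal sketch to convince yourself (the getA() accessor on B is made up for the illustration):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    @Bean
    public A buildA() {
        return new A();
    }

    @Bean
    public B buildB() {
        // looks like a plain method call, but the CGLIB subclass intercepts it
        // and returns the one A singleton from the container
        return new B(buildA());
    }
}

// quick check, e.g. in a main method:
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
System.out.println(ctx.getBean(A.class) == ctx.getBean(B.class).getA()); // prints true

Note that this interception only applies in full @Configuration mode (proxyBeanMethods = true, the default); @Bean methods declared in a plain @Component class are not proxied, and there a direct call to buildA() really would create a second instance.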
I'm able to make Spring + RabbitMQ work the non-functional way (prior to 2.0?), but I'm trying to use the functional pattern, as the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
The queue (consumer) is not being created in Rabbit with the new method. I can see the connection being created but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class:

@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but to use Function instead:
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing:

// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {

    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {

    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
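In application.properties, that is:

spring.cloud.function.definition=approved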
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to use this property to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could be there for purposes other than handling messages, yet being the only one it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
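That opt-out, in property form:

spring.cloud.stream.function.autodetect=false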
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue, I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional, with links to repos with more in-depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x
I have a Java web application developed on the Spring framework which uses MyBatis. I see that the datasource is defined in beans.xml. Now I want to add a secondary datasource as a backup: if the application cannot connect to the DB (it gets an error, or the server is down), it should connect to a different datasource. Is there a configuration in Spring to do this, or will we have to code it manually in the application?
I have seen primary and secondary notations in Spring Boot but nothing in Spring. I could achieve this in my code where the connection is created/retrieved, by connecting to the secondary datasource if the connection to the primary fails or times out, but I wanted to know if this can be achieved by making changes just in the Spring configuration.
Let me clarify things one by one:
Spring Boot has a @Primary annotation, but there is no @Secondary annotation.
The purpose of the @Primary annotation is not what you have described. Spring does not automatically switch data sources in any way. @Primary merely tells Spring which data source to use when we don't specify one for a transaction. For more detail on this, see https://www.baeldung.com/spring-data-jpa-multiple-databases
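As a minimal sketch of what @Primary actually does (bean names and JDBC URLs are made up; DataSourceBuilder is Spring Boot's convenience builder):

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        // injected whenever a DataSource is requested without a qualifier
        return DataSourceBuilder.create().url("jdbc:mysql://primary-host/mydb").build();
    }

    @Bean
    public DataSource secondaryDataSource() {
        // only injected when asked for explicitly, e.g. @Qualifier("secondaryDataSource")
        return DataSourceBuilder.create().url("jdbc:mysql://secondary-host/mydb").build();
    }
}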
Now, how do we actually switch datasources when one goes down?
Most people don't manage this kind of high availability in code. People usually prefer to run two master database instances in an active-passive mode, kept in sync. For auto-failover, something like keepalived can be used. This is also a highly subjective and contentious topic, and there are a lot of things to consider here, like: can we afford replication lag? Are there slaves running for each master (because then we have to switch the slaves too, as the old master's slaves would now be out of sync)? If you have databases spread across regions, this becomes even more difficult (read: awesome) and requires yet more engineering, planning, and design.
Now, since the question specifically mentions using application code for this, there is one thing you can do. I don't advise using it in production though. EVER. You can create an AspectJ advice around all your primary transactional methods using your own custom annotation. Let's call this annotation @SmartTransactional for our demo.
Sample code (I did not test it, though):
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SmartTransactional {}

public class SomeServiceImpl implements SomeService {

    @SmartTransactional
    @Transactional("primaryTransactionManager")
    public boolean someMethod() {
        // call a common method here for code reusability, or create an abstract class
    }
}

public class SomeServiceSecondaryTransactionImpl implements SomeService {

    @Transactional("secondaryTransactionManager")
    public boolean usingTransactionManager2() {
        // call a common method here for code reusability, or create an abstract class
    }
}

@Component
@Aspect
public class SmartTransactionalAspect {

    @Autowired
    private ApplicationContext context;

    @Pointcut("@annotation(...SmartTransactional)")
    public void smartTransactionalAnnotationPointcut() {
    }

    @Around("smartTransactionalAnnotationPointcut()")
    public Object methodsAnnotatedWithSmartTransactional(final ProceedingJoinPoint joinPoint) throws Throwable {
        Method method = getMethodFromTarget(joinPoint);
        Object result = joinPoint.proceed();
        boolean failure = Boolean.TRUE; // check whether the result indicates failure
        if (failure) {
            String secondaryTransactionManagerBeanName = ""; // get class name from joinPoint and append 'SecondaryTransactionImpl' instead of 'Impl' in the class name
            Object bean = context.getBean(secondaryTransactionManagerBeanName);
            result = bean.getClass().getMethod(method.getName()).invoke(bean);
        }
        return result;
    }
}
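One gap in the sketch above: getMethodFromTarget() is not shown anywhere. A minimal version, assuming the advised method is public, could look like this:

private Method getMethodFromTarget(ProceedingJoinPoint joinPoint) throws NoSuchMethodException {
    // resolve the concrete method on the target class from the join point's signature
    // (MethodSignature is org.aspectj.lang.reflect.MethodSignature)
    MethodSignature signature = (MethodSignature) joinPoint.getSignature();
    return joinPoint.getTarget().getClass()
            .getMethod(signature.getName(), signature.getParameterTypes());
}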
The MQ starter has:

@ConfigurationProperties(prefix = "ibm.mq")
public class MQConfigurationProperties {
I want to change the config prefix to infrastructure.ibm.mq; the rest of the config hierarchy stays the same.
I want to avoid changing the MQConfiguration.java file and recompiling. I just want to use the starter as is, but with a slightly different config prefix.
This is one way I was able to override it. The @Primary means that this bean takes precedence; otherwise you get errors about finding 2 beans where only a single one is accepted.
@Bean
@Primary
@ConfigurationProperties(prefix = "my.local.prefix")
public MQConfigurationProperties localConfigurationProperties() {
    return new MQConfigurationProperties();
}
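With the prefix from the question, the bean would declare prefix = "infrastructure.ibm.mq", and the starter's usual keys simply move under the new prefix, e.g. (values are illustrative):

infrastructure.ibm.mq.queueManager=QM1
infrastructure.ibm.mq.channel=DEV.APP.SVRCONN
infrastructure.ibm.mq.connName=localhost(1414)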
I have multiple RouterFunctions which I register as beans (one per section of code).
One of them is /** for dynamic routing for React. Basically, if no other route matches, I want it to go to that one.
The problem is that sometimes, depending on the whims of whatever order they happen to be applied in, the /** route will block another endpoint.
Is there a way to order the separate RouterFunctions, or a better way to have everything that doesn't match anything else go to a specific route?
Spring WebFlux gathers all RouterFunction beans and reduces them into one using RouterFunction::andOther (see RouterFunctionMapping).
So you can just order your RouterFunction beans as regular beans and Spring WebFlux will do the rest.
@Bean
@Order(1)
public RouterFunction<ServerResponse> first() {
    // ...
}

@Bean
@Order(2)
public RouterFunction<ServerResponse> second() {
    // ...
}

@Bean
@Order(3)
public RouterFunction<ServerResponse> third() {
    // ...
}
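Applied to the question: give the greedy /** route the largest order value so it is consulted last. A sketch, assuming the WebFlux functional API (RouterFunctions, RequestPredicates, and ServerResponse from org.springframework.web.reactive.function.server; the handler body is a placeholder):

@Bean
@Order(100) // larger than every other router bean, so it matches last
public RouterFunction<ServerResponse> reactFallback() {
    return RouterFunctions.route(RequestPredicates.path("/**"),
            request -> ServerResponse.ok().bodyValue("index.html placeholder"));
}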
I've worked out a solution that takes advantage of the fact that RouterFunction has an add() function to combine them together.
First, I had RouterFunction beans that looked something like this:
@Configuration
class MyRouter {

    @Bean
    fun myRoutes(myHandler: MyHandler): RouterFunction<ServerResponse> = router {
        GET("/path", myHandler::handlePath)
    }
}
I had multiple of these, and if there were some conflicting paths (like /**), which one ran was kind of a question mark.
I decided to merge these into one RouterFunction where I could control the order, since I didn't want to manually manage them somewhere (i.e., if I made a new Router class, I just wanted it picked up automatically).
First, I had to make my normal routes no longer beans. I also needed an easy way to let Spring find them all, so I decided to create an abstract class for them to extend.
That looked like this:
abstract class RouterConfig {
    open val priority: Int = 0 // higher number = added later in the list
    abstract val routes: RouterFunction<ServerResponse>
}
The priority lets me override the order in which they are added (the default is fine if the order doesn't matter). A larger number means it is loaded later.
After that, I changed my Router classes to components extending RouterConfig, and made them not emit a bean anymore. Something like this:
@Component
class MyRouter(
    private val myHandler: MyHandler
) : RouterConfig() {

    override val routes: RouterFunction<ServerResponse>
        get() = router {
            GET("/path", myHandler::handlePath)
        }
}
Finally, I created a new class and bean to gather them all up:
@Configuration
class AppRouter {

    @Bean
    fun appRoutes(routerConfigs: List<RouterConfig>): RouterFunction<ServerResponse> =
        routerConfigs.sortedBy { it.priority }
            .map { it.routes }
            .reduce { r, c -> r.add(c) }
}
And that seemed to do the trick. Now the routes are added in priority order, so for the one that might be a bit greedy (/**) I simply set that class's priority to 100 to make it come last.
I have to mock a Jersey client that is created in the constructor of the service under test. The service under test is injected via Spring's @Autowired.
In the constructor of the service, client = Client.create() is called. We can't change this code (although it is a code smell). I want to mock the Jersey client, but since it is created in the constructor of the service, I am not able to mock it.
So... long story short: assuming you use Mockito, in your test sources you should have an application context for your tests. Usually we define one programmatically, so something along those lines...
Import the .xml files you use for test purposes (in my case I imported the ones for the mail server, the connection, and the authentication) instead of the ones I use for the "local" environment. Then define a method to set up each of your services.
You might need to add a mock for your template resolver as well, but ultimately this all depends on your stack...
So, based on your approach the final thing might be a bit different, but ultimately you're going to do something along the lines of what I outline below:
@Configuration
@ImportResource(value = {
        "classpath:applicationContext-jdbc-test.xml",
        "classpath:applicationContext-ldap-test.xml",
        "classpath:applicationContext-mail-test.xml"})
public class ApplicationTestContext {

    @Bean
    public ObjectMapperWrapper objectMapperWrapper() {
        return Mockito.mock(ObjectMapperWrapper.class);
    }

    @Bean
    public YourService yourService() {
        return new YourServiceImpl();
    }
}
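A test wired against that context might then look something like this (JUnit 4 shown; the class names reuse the placeholders above):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationTestContext.class)
public class YourServiceTest {

    @Autowired
    private YourService yourService;

    @Autowired
    private ObjectMapperWrapper objectMapperWrapper; // the Mockito mock from the context

    @Test
    public void serviceWorksWithMockedCollaborators() {
        // stub the mock as needed, e.g. Mockito.when(objectMapperWrapper.someCall()).thenReturn(...)
        // (someCall() is hypothetical), then exercise yourService and assert on the outcome
    }
}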