I'm able to make Spring+Rabbit work the non-functional way (prior to 2.0?), but I'm trying to use the functional pattern, as the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
The queue (consumer) is not being created in Rabbit with the new method. I can see the connection being created but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class
@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but use Function instead:
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing
// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {

    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {

    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to set the property anyway to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could be there for purposes other than handling messages, yet, being the only one, it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
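For this example, that would be the following line in application.properties:
spring.cloud.function.definition=approved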
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue, I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional with links to repos with more in depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x
I have a Java web application developed on the Spring framework which uses MyBatis. I see that the datasource is defined in beans.xml. Now I want to add a secondary datasource too, as a backup. For example, if the application is not able to connect to the DB and gets some error, or if the server is down, then it should be able to connect to a different datasource. Is there a configuration in Spring to do this, or will we have to manually code this in the application?
I have seen primary and secondary annotations in Spring Boot but nothing in Spring. I could achieve this in my code where the connection is created/retrieved, by connecting to the secondary datasource if the connection to the primary datasource fails/times out. But I wanted to know if this can be achieved by making changes just in the Spring configuration.
Let me clarify things one by one:
Spring (not just Spring Boot) has a @Primary annotation, but there is no @Secondary annotation.
The purpose of the @Primary annotation is not what you have described. Spring does not automatically switch data sources in any way. @Primary merely tells Spring which data source to use in case we don't specify one in a transaction. For more detail on this, see https://www.baeldung.com/spring-data-jpa-multiple-databases
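For illustration, a minimal sketch of what @Primary actually controls (bean names, URLs and credentials are made up; DriverManagerDataSource comes from spring-jdbc):
@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        // injected wherever a DataSource is requested without a qualifier
        return new DriverManagerDataSource("jdbc:mysql://primary-host/db", "user", "pass");
    }

    @Bean
    public DataSource backupDataSource() {
        // only used when explicitly requested, e.g. with @Qualifier("backupDataSource")
        return new DriverManagerDataSource("jdbc:mysql://backup-host/db", "user", "pass");
    }
}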
Now, how do we actually switch data sources when one goes down?
Most people don't manage this kind of high availability in code. People usually prefer to run two master database instances in active-passive mode, kept in sync. For auto-failover, something like keepalived can be used. This is also a highly subjective and contentious topic, and there are a lot of things to consider here: can we afford replication lag; are there slaves running for each master (because then we have to switch the slaves too, as the old master's slaves would now become out of sync); etc. If you have databases spread across regions, this becomes even more difficult (read: awesome) and requires yet more engineering, planning, and design.
Now, since the question specifically mentions using application code for this, there is one thing you can do. I don't advise using it in production though. EVER. You can create an AspectJ advice around all your primary transactional methods using your own custom annotation. Let's call this annotation @SmartTransactional for our demo.
Sample code (I did not test it though):
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SmartTransactional {}
public class SomeServiceImpl implements SomeService {

    @SmartTransactional
    @Transactional("primaryTransactionManager")
    public boolean someMethod() {
        // call a common method here for code reusability, or create an abstract class
    }
}

public class SomeServiceSecondaryTransactionImpl implements SomeService {

    @Transactional("secondaryTransactionManager")
    public boolean usingTransactionManager2() {
        // call a common method here for code reusability, or create an abstract class
    }
}
@Component
@Aspect
public class SmartTransactionalAspect {

    @Autowired
    private ApplicationContext context;

    @Pointcut("@annotation(...SmartTransactional)")
    public void smartTransactionalAnnotationPointcut() {
    }

    @Around("smartTransactionalAnnotationPointcut()")
    public Object methodsAnnotatedWithSmartTransactional(final ProceedingJoinPoint joinPoint) throws Throwable {
        Method method = getMethodFromTarget(joinPoint); // helper (not shown): resolve the invoked Method from the join point
        Object result = joinPoint.proceed();
        boolean failure = Boolean.TRUE; // check if result is a failure
        if (failure) {
            // get the class name from the joinPoint and append 'SecondaryTransactionImpl' instead of 'Impl'
            String secondaryTransactionBeanName = "";
            Object bean = context.getBean(secondaryTransactionBeanName);
            result = bean.getClass().getMethod(method.getName()).invoke(bean);
        }
        return result;
    }
}
I would like to propagate JTA state (= the transaction) from a transactional REST endpoint that emits a message to a reactive-messaging connector.
@Inject
@Channel("test")
Emitter<String> emitter;

@POST
@Transactional
public Response test() {
    emitter.send("test");
    return Response.ok().build();
}
and
@ApplicationScoped
@Connector("test")
public class TestConnector implements OutgoingConnectorFactory {

    @Inject
    TransactionManager tm;

    @Override
    public SubscriberBuilder<? extends Message<?>, Void> getSubscriberBuilder(Config config) {
        return ReactiveStreams.<Message<?>>builder()
                .flatMapCompletionStage(message -> {
                    tm.getTransaction(); // = null
                    return message.ack();
                })
                .ignore();
    }
}
As I understand it, context propagation is responsible for making the transaction available (see io.smallrye.context.jta.context.propagation.JtaContextProvider#currentContext). The problem seems to be that currentContext gets created on subscription, which happens when the injection point (Emitter<String> emitter) gets its instance. That is too early to properly capture the transaction.
What am I missing?
By the way, I am having the same problem when using @Incoming / @Outgoing instead of the emitter. I have decided to give you this example because it is easy to understand and reproduce.
At the moment, you need to pass the current Transaction in the message metadata. Thus, it will be propagated to your different downstream components (as well as the connector).
Note that the Transaction tends to be attached to the request scope, which means that in your connector it may already be too late to use it. So, make sure your endpoint is asynchronous and only returns once the emitted message is acknowledged.
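A minimal sketch of such an asynchronous endpoint (untested; it relies on Emitter#send(payload) returning a CompletionStage that completes when the message is acknowledged):
@POST
@Transactional
public CompletionStage<Response> test() {
    // the HTTP response is only completed once the message has been acknowledged
    return emitter.send("test")
            .thenApply(ignored -> Response.ok().build());
}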
Context propagation is not going to help in this case, as the underlying streams are built at startup time (at build time in Quarkus), so there are no captured contexts.
I have a use case where I need to produce to multiple Kafka topics/destinations determined at runtime. I tried to combine functions with multiple input and output arguments by returning a Flux<Message<T>> from a functional bean of type Function, setting the header spring.cloud.stream.sendto.destination for each Message as described here. I came up with the following implementation:
@Bean
public Function<Person, Flux<Message<Person>>> route() {
    return person -> Flux.fromIterable(Stream.of(person.getEvents())
            .map(e -> MessageBuilder.withPayload(person)
                    .setHeader("spring.cloud.stream.sendto.destination", e).build())
            .collect(Collectors.toList()));
}
and I also have this in my config:
spring.cloud.stream.dynamic-destinations=
This is my Person:
@AllArgsConstructor
@NoArgsConstructor
@Data
public class Person {
    private String[] events;
    private String name;
}
events contains the list of Kafka topic names.
However, it doesn't work. What am I missing?
spring.cloud.stream.sendto.destination uses BinderAwareChannelResolver internally, which is deprecated in favor of StreamBridge. I think you can rewrite your code as below. I haven't tested it, but here is the template:
@Autowired
StreamBridge streamBridge;

@Bean
public Consumer<Person> route() {
    return person -> streamBridge.send(person.getName(), person);
}
Behind the scenes, Spring Cloud Stream will create a binding for Person dynamically.
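Since your use case routes each Person to every topic in its events array rather than to person.getName(), a closer variant of the same template (equally untested) would iterate the events:
@Autowired
StreamBridge streamBridge;

@Bean
public Consumer<Person> route() {
    return person -> Stream.of(person.getEvents())
            .forEach(topic -> streamBridge.send(topic, person));
}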
If you know your destinations in advance at deployment time, you can also set them through configuration, e.g. spring.cloud.stream.source=foo;bar;.... Then the framework creates output bindings in the form of foo-out-0, bar-out-0, etc. Then you need to set the destinations, e.g. spring.cloud.stream.bindings.foo-out-0.destination=foo. But since your use case is strictly about dynamic destinations, you can't go with this approach; rather, try using what I suggested above.
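For completeness, the static variant in properties form would look something like this (topic names are illustrative):
spring.cloud.stream.source=foo;bar
spring.cloud.stream.bindings.foo-out-0.destination=fooTopic
spring.cloud.stream.bindings.bar-out-0.destination=barTopic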
One solution that works uses BinderAwareChannelResolver. However, it's deprecated in favor of providing the spring.cloud.stream.sendto.destination property in 3.0.
@Autowired
private BinderAwareChannelResolver binderAwareChannelResolver;

@Bean
public Consumer<Person> route() {
    return person ->
            Stream.of(person.getEvents())
                    .forEach(e -> binderAwareChannelResolver.resolveDestination(e)
                            .send(MessageBuilder.withPayload(person).build()));
}
I don't like this solution because it combines the function-based programming model with the "legacy-style" programming model. If anyone has a better solution, please feel free to comment/answer.
In Spring Java configuration, suppose I want to re-use a @Bean in another @Bean definition. I can do this either in one file:
@Bean
public A buildA() {
    return new A();
}

@Bean
public B buildB() {
    return new B(buildA());
}
or I can configure A in one file and autowire it in another file like this (field injection for brevity):
@Autowired
private A a;

@Bean
public B buildB() {
    return new B(a);
}
I wonder if the two possibilities are exactly the same? To me it looks as if the first version might instantiate A twice, while the second doesn't.
I am asking this since, in my particular use case, A establishes a connection to a messaging broker and I have several Bs that consume the stream (I use .toReactivePublisher() from Spring Integration in A), and I don't want to connect to the broker twice or more.
Yes, they're exactly the same. Multiple calls to a @Bean annotated method will not create multiple instances of the same bean: Spring proxies @Configuration classes (via CGLIB) and intercepts @Bean method calls, returning the existing singleton instead of running the method body again.
For an explanation on why it doesn't happen, please see this answer.
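A quick way to convince yourself (a sketch; AppConfig stands for the configuration class above, and a getA() accessor on B is assumed):
ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
A a = ctx.getBean(A.class);
B b = ctx.getBean(B.class);
// prints true: the buildA() call inside buildB() was intercepted
// by Spring's CGLIB proxy and returned the existing singleton
System.out.println(a == b.getA());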
I would like to build a Spring application, where new components can be added easily and without much configuration. For example: You have different kinds of documents. These documents should be able to get exported into different fileformats.
To make this functionality easy to maintain, it should (basically) work the following way:
Someone programs the file format exporter
He/she writes a component which checks whether the file format exporter is licensed (based on Spring Conditions). If the exporter is licensed, a specialized bean is injected into the application context.
The "whole rest" works dynamically based on the injected beans. Nothing needs to be touched in order to display it on the GUI, etc.
I pictured it the following way:
@Component
public class ExcelExporter implements Condition {

    @PostConstruct
    public void init() {
        excelExporter();
    }

    @Bean
    public Exporter excelExporter() {
        Exporter exporter = new ExcelExporter();
        return exporter;
    }

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return true;
    }
}
In order to work with those exporters (display them, etc.), I need to get all of them. I tried this:
Map<String, Exporter> exporter = BeanFactoryUtils.beansOfTypeIncludingAncestors(appContext, Exporter.class, true, true);
Unfortunately, this does not work (0 beans returned). I am fairly new to this; would anyone mind telling me how this is properly done in Spring? Maybe there is a better solution for my problem than my approach?
You can get all instances of a given type of bean in a Map effortlessly, since it's a built-in Spring feature.
Simply autowire your map, and all those beans will be injected, using the bean ID as the key.
@Autowired
Map<String, Exporter> exportersMap;
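For instance, a hypothetical consumer could then iterate over all licensed exporters (the Document type and the export method on Exporter are assumed here):
@Service
public class ExportService {

    @Autowired
    private Map<String, Exporter> exporters;

    public void exportAll(Document document) {
        // key = bean ID (e.g. "excelExporter"), value = the matching Exporter bean
        exporters.forEach((beanId, exporter) -> exporter.export(document));
    }
}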
If you need something more sophisticated, such as a specific Map implementation or a custom key, consider defining your own ExporterMap, as follows:
@Component
class ExporterMap implements Map {

    @Autowired
    private Set<Exporter> availableExporters;

    // your stuff here, including init with @PostConstruct if required
}