I have a use case where I need to produce to multiple Kafka topics/destinations that are determined at runtime. I tried to combine functions with multiple input and output arguments by returning a Flux<Message<T>> from a functional bean of type Function and setting the spring.cloud.stream.sendto.destination header for each Message, as described here. I came up with the following implementation:
@Bean
public Function<Person, Flux<Message<Person>>> route() {
    return person -> Flux.fromIterable(Stream.of(person.getEvents())
            .map(e -> MessageBuilder.withPayload(person)
                    .setHeader("spring.cloud.stream.sendto.destination", e).build())
            .collect(Collectors.toList()));
}
and I also have this in my config:
spring.cloud.stream.dynamic-destinations=
This is my Person:
@AllArgsConstructor
@NoArgsConstructor
@Data
public class Person {
    private String[] events;
    private String name;
}
events contains the list of Kafka topic names.
However, it doesn't work. What am I missing?
spring.cloud.stream.sendto.destination uses BinderAwareChannelResolver internally, which is deprecated in favor of StreamBridge. I think you can rewrite your code as below. I haven't tested it, but here is the template:
@Autowired
StreamBridge streamBridge;

@Bean
public Consumer<Person> route() {
    // send the person to each topic named in its events, resolving destinations at runtime
    return person -> Stream.of(person.getEvents())
            .forEach(e -> streamBridge.send(e, person));
}
Behind the scenes, Spring Cloud Stream creates an output binding for each destination dynamically the first time it is used.
If you know your destinations in advance at deployment time, you can also set them through configuration, e.g. spring.cloud.stream.source=foo;bar. The framework then creates output bindings in the form foo-out-0, bar-out-0, etc., and you need to set the destinations, e.g. spring.cloud.stream.bindings.foo-out-0.destination=foo. But since your use case is strictly about dynamic destinations, you can't go with this approach; rather, try what I suggested above.
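For illustration only, the static variant could look like this in application.properties (foo and bar are placeholder binding names, not from your question; the destination values are the actual topic names):

spring.cloud.stream.source=foo;bar
spring.cloud.stream.bindings.foo-out-0.destination=fooTopic
spring.cloud.stream.bindings.bar-out-0.destination=barTopic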
One solution that works uses BinderAwareChannelResolver. However, it is deprecated in 3.0 in favor of providing the spring.cloud.stream.sendto.destination property.
@Autowired
private BinderAwareChannelResolver binderAwareChannelResolver;

@Bean
public Consumer<Person> route() {
    return person -> Stream.of(person.getEvents())
            .forEach(e -> binderAwareChannelResolver.resolveDestination(e)
                    .send(MessageBuilder.withPayload(person).build()));
}
I don't like this solution because it combines the function-based programming model with the "legacy-style" programming model. If anyone has a better solution, please feel free to comment/answer.
Related
I'm able to make Spring+Rabbit work the non-functional way (prior to 2.0?), but now I'm trying to use the functional pattern, since the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
The queue (consumer) is not being created in Rabbit with the new method. I can see the connection being created but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class
@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but use Function instead
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing
// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {

    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {

    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third-party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to set the property anyway to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could exist for purposes other than handling messages, yet, being the only one, it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
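For reference, the two properties mentioned above would look like this in application.properties (approved being the function bean name from the question):

spring.cloud.function.definition=approved
# only if the auto-discovery described above gets in the way:
spring.cloud.stream.function.autodetect=false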
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue, I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional, with links to repos with more in-depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x
In Spring Java configuration, suppose I want to re-use a @Bean in another @Bean definition. I can do this either in one file:
@Bean
public A buildA() {
    return new A();
}

@Bean
public B buildB() {
    return new B(buildA());
}
or I can configure A in one file and autowire it in another file (field injection for brevity):
@Autowired
private A a;

@Bean
public B buildB() {
    return new B(a);
}
I wonder if the two possibilities are exactly the same. To me it looks as if the first version might instantiate A twice, while the second doesn't.
I am asking because, in my particular use case, A establishes a connection to a message broker and I have several Bs that consume the stream (I use .toReactivePublisher() from Spring Integration in A), and I don't want to connect to the broker twice or more.
Yes, they're exactly the same. Multiple calls to a @Bean annotated method will not create multiple instances of the same bean: Spring enhances @Configuration classes with CGLIB, so the second invocation of buildA() is intercepted and returns the already-created singleton. (Note that this interception only happens in full @Configuration classes, not for @Bean methods declared in "lite" mode, e.g. on a plain @Component.)
For an explanation of why it doesn't happen, please see this answer.
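If you want to convince yourself, here is a minimal (hypothetical) check; it assumes B exposes its injected A via a getA() accessor:

@Configuration
public class AppConfig {

    @Bean
    public A buildA() {
        return new A();
    }

    @Bean
    public B buildB() {
        return new B(buildA()); // intercepted by the CGLIB proxy, returns the cached singleton
    }
}

// with an ApplicationContext built from AppConfig:
A a = context.getBean(A.class);
B b = context.getBean(B.class);
System.out.println(a == b.getA()); // prints true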
My question is: what is the best way to inhibit an endpoint that is automatically provided by Olingo?
I am playing with a simple app based on Spring Boot and using Apache Olingo. In short, this is my servlet registration:
@Configuration
public class CxfServletUtil {

    @Bean
    public ServletRegistrationBean getODataServletRegistrationBean() {
        ServletRegistrationBean odataServletRegistrationBean =
                new ServletRegistrationBean(new CXFNonSpringJaxrsServlet(), "/user.svc/*");
        Map<String, String> initParameters = new HashMap<String, String>();
        initParameters.put("javax.ws.rs.Application",
                "org.apache.olingo.odata2.core.rest.app.ODataApplication");
        initParameters.put("org.apache.olingo.odata2.service.factory",
                "com.olingotest.core.CustomODataJPAServiceFactory");
        odataServletRegistrationBean.setInitParameters(initParameters);
        return odataServletRegistrationBean;
    } ...
where my ODataJPAServiceFactory is
@Component
public class CustomODataJPAServiceFactory extends ODataJPAServiceFactory implements ApplicationContextAware {

    private static ApplicationContext context;
    private static final String PERSISTENCE_UNIT_NAME = "myPersistenceUnit";
    private static final String ENTITY_MANAGER_FACTORY_ID = "entityManagerFactory";

    @Override
    public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
        ODataJPAContext oDataJPAContext = this.getODataJPAContext();
        try {
            EntityManagerFactory emf = (EntityManagerFactory) context.getBean(ENTITY_MANAGER_FACTORY_ID);
            oDataJPAContext.setEntityManagerFactory(emf);
            oDataJPAContext.setPersistenceUnitName(PERSISTENCE_UNIT_NAME);
            return oDataJPAContext;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    ...
My entity is quite simple ...
@Entity
public class User {

    @Id
    private String id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;
    ....
Olingo is doing its job perfectly, and it helps me with the generation of all the endpoints around CRUD operations for my entity.
My question is: how can I "inhibit" some of them? Let's say, for example, that I don't want to enable deleting my entity.
I could try to use a Filter, but this seems a bit harsh. Are there any other, better ways to solve my problem?
Thanks for the help.
As you have said, you could use a filter, but then you are really coupled to the URI schema used by Olingo. Also, things become complicated when you have multiple related entity sets (because you could navigate from one to the other, making the URIs more complex).
There are two things that you can do, depending on what you want to achieve:
If you want to have fine-grained control over which operations are allowed, you can create a wrapper for the ODataSingleProcessor and throw ODataExceptions where you want to disallow an operation. You can either always throw exceptions (i.e., completely disable an operation type) or use the URI info parameters to obtain the target entity set and decide whether to throw an exception or call the standard single processor. I have used this approach to create a read-only OData service here (basically, I just created an ODataSingleProcessor which delegates some calls to the standard one, and overrode a method in the service factory to wrap the standard single processor in my wrapper); a sketch follows this list.
If you want to completely un-expose / ignore a given entity or some of its properties, you can use a JPA-EDM mapping model and exclude the desired components. You can find an example of such a mapping here: github. The mapping model is just an XML file which maps the JPA entities / properties to EDM entity types / properties. In order for Olingo to pick it up, you can pass the name of the file to the setJPAEdmMappingModel method of the ODataJPAContext in your initialize method.
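To make the first option concrete, here is a rough, untested sketch of such a wrapper (method, class and package names are from the Olingo V2 API as I recall it; ReadOnlySingleProcessor is a made-up name):

import org.apache.olingo.odata2.api.exception.ODataException;
import org.apache.olingo.odata2.api.exception.ODataNotImplementedException;
import org.apache.olingo.odata2.api.processor.ODataResponse;
import org.apache.olingo.odata2.api.processor.ODataSingleProcessor;
import org.apache.olingo.odata2.api.uri.info.DeleteUriInfo;
import org.apache.olingo.odata2.api.uri.info.GetEntityUriInfo;

public class ReadOnlySingleProcessor extends ODataSingleProcessor {

    private final ODataSingleProcessor delegate;

    public ReadOnlySingleProcessor(ODataSingleProcessor delegate) {
        this.delegate = delegate;
    }

    // disallow DELETE for every entity set (inspect uriInfo to disallow it selectively)
    @Override
    public ODataResponse deleteEntity(DeleteUriInfo uriInfo, String contentType) throws ODataException {
        throw new ODataNotImplementedException();
    }

    // forward allowed operations to the standard processor
    @Override
    public ODataResponse readEntity(GetEntityUriInfo uriInfo, String contentType) throws ODataException {
        return delegate.readEntity(uriInfo, contentType);
    }

    // ...delegate the remaining ODataSingleProcessor methods the same way
}

The service factory then wraps the standard processor in this class, as described above.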
I am trying to add an aggregator to my code. A couple of problems I am facing:
1. How do I set up a message store using annotations only?
2. Is there any design documentation on how the aggregator works? Basically, some picture explaining it.
@MessageEndpoint
public class Aggregator {

    @Aggregator(inputChannel = "abcCH", outputChannel = "reply", sendPartialResultsOnExpiry = "true")
    public APayload aggregatingMethod(List<APayload> items) {
        return items.get(0);
    }

    @ReleaseStrategy
    public boolean canRelease(List<Message<?>> messages) {
        return messages.size() > 2;
    }

    @CorrelationStrategy
    public String correlateBy(Message<AbcPayload> message) {
        return (String) message.getHeaders().get(RECEIVED_MESSAGE_KEY);
    }
}
In the Reference Manual we have a note:
Annotation configuration (@Aggregator and others) for the Aggregator component covers only simple use cases, where most default options are sufficient. If you need more control over those options using Annotation configuration, consider using a @Bean definition for the AggregatingMessageHandler and mark its @Bean method with @ServiceActivator:
And a bit below:
Starting with version 4.2, the AggregatorFactoryBean is available to simplify Java configuration for the AggregatingMessageHandler.
So, you should actually configure an AggregatorFactoryBean as a @Bean and mark it with @ServiceActivator(inputChannel = "abcCH", outputChannel = "reply"), as sketched below.
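A minimal sketch of that (untested; setter names are from the AggregatorFactoryBean API as I recall them, and it assumes you remove the @Aggregator/@ReleaseStrategy/@CorrelationStrategy annotations from the POJO so the endpoint is not registered twice). It also answers the message-store question, since the store is plugged in right here:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.config.AggregatorFactoryBean;
import org.springframework.integration.store.SimpleMessageStore;

@Bean
@ServiceActivator(inputChannel = "abcCH")
public AggregatorFactoryBean aggregator(Aggregator aggregatorPojo) {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(aggregatorPojo);          // the POJO with aggregatingMethod(...)
    aggregator.setMethodName("aggregatingMethod");
    aggregator.setOutputChannelName("reply");
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setReleaseStrategy(group -> group.size() > 2);
    aggregator.setCorrelationStrategy(message -> message.getHeaders().get(RECEIVED_MESSAGE_KEY));
    aggregator.setMessageStore(new SimpleMessageStore()); // in-memory store; swap for a persistent one if needed
    return aggregator;
}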
Also, consider using the Spring Integration Java DSL to simplify your life with the Java configuration.
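For example, a roughly equivalent flow in the Java DSL might look like this (again a sketch, not tested; the channel names and the RECEIVED_MESSAGE_KEY header key are taken from your snippet):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.store.SimpleMessageStore;

@Bean
public IntegrationFlow aggregatingFlow() {
    return IntegrationFlows.from("abcCH")
            .aggregate(a -> a
                    .outputProcessor(g -> g.getMessages().iterator().next().getPayload()) // like items.get(0)
                    .releaseStrategy(g -> g.size() > 2)
                    .correlationStrategy(m -> m.getHeaders().get(RECEIVED_MESSAGE_KEY))
                    .sendPartialResultOnExpiry(true)
                    .messageStore(new SimpleMessageStore()))
            .channel("reply")
            .get();
}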
I would like to build a Spring application where new components can be added easily and without much configuration. For example: you have different kinds of documents, and these documents should be exportable into different file formats.
To make this functionality easy to maintain, it should (basically) work the following way:
Someone programs the file format exporter
He/she writes a component which checks whether the file format exporter is licensed (based on Spring Conditions). If the exporter is licensed, a specialized bean is injected into the application context.
The "whole rest" works dynamically based on the injected beans; nothing needs to be touched in order to display it on the GUI, etc.
I pictured it the following way:
@Component
public class ExcelExporter implements Condition {

    @PostConstruct
    public void init() {
        excelExporter();
    }

    @Bean
    public Exporter excelExporter() {
        Exporter exporter = new ExcelExporter();
        return exporter;
    }

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return true;
    }
}
In order to work with those exporters (display them, etc.) I need to get all of them. I tried this:
Map<String, Exporter> exporter = BeanFactoryUtils.beansOfTypeIncludingAncestors(appContext, Exporter.class, true, true);
Unfortunately, this does not work (0 beans returned). I am fairly new to this; would anyone mind telling me how this is properly done in Spring? Maybe there is a better solution to my problem than my approach?
You can get all instances of a given type of bean in a Map effortlessly, since it's a built-in Spring feature.
Simply autowire your map, and all those beans will be injected, using the bean ID as the key.
@Autowired
Map<String, Exporter> exportersMap;
If you need something more sophisticated, such as a specific Map implementation or a custom key, consider defining your own ExporterMap, as follows:
@Component
class ExporterMap implements Map {

    @Autowired
    private Set<Exporter> availableExporters;

    //your stuff here, including init if required with @PostConstruct
}
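One hedged way to finish that sketch: extend AbstractMap instead of implementing Map directly (then only entrySet() is mandatory) and build the backing map after injection. Keying by simple class name here is just an example choice:

import java.util.AbstractMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import javax.annotation.PostConstruct;

@Component
class ExporterMap extends AbstractMap<String, Exporter> {

    @Autowired
    private Set<Exporter> availableExporters;

    private Map<String, Exporter> delegate;

    @PostConstruct
    void init() {
        // build the lookup once all Exporter beans have been injected
        delegate = availableExporters.stream()
                .collect(Collectors.toMap(e -> e.getClass().getSimpleName(), e -> e));
    }

    @Override
    public Set<Entry<String, Exporter>> entrySet() {
        return delegate.entrySet();
    }
}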