How do I post and consume a Kafka message in the same IntegrationFlows builder? - Spring

I want to build a simple Kafka producer/consumer using Spring Integration. The way I did it is split into two builders, each one a separate bean, but I would like to do this in just one @Bean:
@Bean
fun myProducerFlow(kafkaTemplate: KafkaTemplate<*, *>): IntegrationFlow {
    return IntegrationFlows.from("testChannel")
            .handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                    .topic("channel1"))
            .get()
}
@Bean
fun myConsumerFlow(consumerFactory: ConsumerFactory<*, *>): IntegrationFlow {
    return IntegrationFlows.from(Kafka.messageDrivenChannelAdapter(consumerFactory, "channel1"))
            .handle { message -> println(message) }
            .get()
}
I would like something like this:
@Bean
fun myFlow(kafkaTemplate: KafkaTemplate<*, *>): IntegrationFlow {
    return IntegrationFlows.from("testChannel")
            .handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                    .topic("channel1"))
            .channel(Kafka.messageDrivenChannelAdapter(consumerFactory, "channel1"))
            .handle { message -> println(message) }
            .get()
}

There's no way to do that; the message-driven adapter always starts a flow.

These are two different flows; they have to be defined separately.
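A minimal Java sketch of that arrangement, keeping both flows side by side in one configuration class (the class name and the String generics are only illustrative; the channel, topic and handlers mirror the Kotlin beans above):
@Configuration
public class KafkaFlowsConfig {
    // Outbound flow: anything sent to "testChannel" is published to the "channel1" topic
    @Bean
    public IntegrationFlow producerFlow(KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows.from("testChannel")
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                        .topic("channel1"))
                .get();
    }
    // Inbound flow: the message-driven adapter can only start a flow, so it lives in its own bean
    @Bean
    public IntegrationFlow consumerFlow(ConsumerFactory<String, String> consumerFactory) {
        return IntegrationFlows.from(Kafka.messageDrivenChannelAdapter(consumerFactory, "channel1"))
                .handle(message -> System.out.println(message))
                .get();
    }
}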

Related

Method annotated with @Bean is called directly. Use dependency injection instead

I'm following a tutorial for Spring Batch, and when I write the following code, IntelliJ complains that the tasklet(null) call in the job function is called directly:
Method annotated with @Bean is called directly. Use dependency injection instead.
I can get the error to go away if I remove the @Bean annotation from the job, but I want to know what's going on. How can I inject the bean there? Simply writing tasklet(Tasklet tasklet(null)) gives the same error.
@Bean
@StepScope
public Tasklet tasklet(@Value("#{jobParameters['name']}") String name) {
    return ((contribution, chunkContext) -> {
        System.out.println(String.format("This is %s", name));
        return RepeatStatus.FINISHED;
    });
}
@Bean
public Job job() {
    return jobBuilderFactory.get("job")
            .start(stepBuilderFactory.get("step1")
                    .tasklet(tasklet(null)) // tasklet(null) = problem
                    .build())
            .build();
}
@Bean
@StepScope
public Tasklet tasklet(@Value("#{jobParameters['name']}") String name) {
    return ((contribution, chunkContext) -> {
        System.out.println(String.format("This is %s", name));
        return RepeatStatus.FINISHED;
    });
}
@Bean
public Job job(Tasklet tasklet) {
    return jobBuilderFactory.get("job")
            .start(stepBuilderFactory.get("step1")
                    .tasklet(tasklet)
                    .build())
            .build();
}
Spring bean creation and AOP proxying are picky, so you need to be careful with how beans are referenced.
In this case you can rely on bean dependency injection, as shown above, to solve the Tasklet name being null.

Spring integration: discardChannel doesn't work for filter of integration flow

I am facing a problem when I create an IntegrationFlow dynamically using the DSL.
If the discardChannel is defined as a message channel object and the filter returns false, nothing happens (the message is not sent to the specified discard channel).
The source is:
@Autowired
@Qualifier("SIMPLE_CHANNEL")
private MessageChannel simpleChannel;
IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
        .filter(simpleMessageSelectorImpl, e -> e.discardChannel(simpleChannel))
        .get();
...
@Autowired
@Qualifier("SIMPLE_CHANNEL")
private MessageChannel simpleChannel;
@Bean
public IntegrationFlow simpleFlow() {
    return IntegrationFlows.from(simpleChannel)
            .handle(m -> System.out.println("Hello world"))
            .get();
}
@Bean(name = "SIMPLE_CHANNEL")
public MessageChannel simpleChannel() {
    return new DirectChannel();
}
But if the discard channel is defined by the name of the channel, everything works.
Debugging, I found that the part of the code mentioned above:
IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
.filter(simpleMessageSelectorImpl, e -> e.discardChannel(simpleChannel))
.get();
returns a flow object that has a map of integrationComponents; one of the components, a FilterEndpointSpec, has a "handler" field of type MessageFilter with discardChannel = null and discardChannelName = null.
But if the discard channel is defined by its name, the same "handler" field has discardChannel = null but discardChannelName = "SIMPLE_CHANNEL", and as a result everything works.
This is the behavior of my running application. I also wrote a test, and in the test everything works for both cases (the test doesn't start the whole Spring context, so maybe it is related to some conflict there).
Maybe someone has an idea what it can be.
The Spring Boot version is 2.1.8.RELEASE and Spring Integration is 5.1.7.RELEASE.
Thanks
The behaviour you describe would indeed be incorrect and made me wonder, but after testing it out I can't seem to reproduce it, so perhaps there is something missing from the information you provided. In any event, here is the complete app that I've modeled after yours, which works as expected. Perhaps you can compare and see if something jumps out:
@SpringBootApplication
public class IntegrationBootApp {
    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(IntegrationBootApp.class, args);
        MessageChannel channel = context.getBean("channelName", MessageChannel.class);
        PollableChannel resultChannel = context.getBean("resultChannel", PollableChannel.class);
        PollableChannel discardChannel = context.getBean("SIMPLE_CHANNEL", PollableChannel.class);
        channel.send(MessageBuilder.withPayload("foo").build());
        System.out.println("SUCCESS: " + resultChannel.receive());
        channel.send(MessageBuilder.withPayload("bar").build());
        System.out.println("DISCARD: " + discardChannel.receive());
    }
    @Autowired
    @Qualifier("SIMPLE_CHANNEL")
    private PollableChannel simpleChannel;
    @Bean
    public IntegrationFlow integrationFlow() {
        IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
                .filter(v -> v.equals("foo"), e -> e.discardChannel(simpleChannel))
                .channel("resultChannel")
                .get();
        return integrationFlow;
    }
    @Bean(name = "SIMPLE_CHANNEL")
    public PollableChannel simpleChannel() {
        return new QueueChannel();
    }
    @Bean
    public PollableChannel resultChannel() {
        return new QueueChannel(10);
    }
}
with output
SUCCESS: GenericMessage [payload=foo, headers={id=cf7e2ef1-e49d-1ecb-9c92-45224d0d91c1, timestamp=1576219339077}]
DISCARD: GenericMessage [payload=bar, headers={id=bf209500-c3cd-9a7c-0216-7d6f51cd5f40, timestamp=1576219339078}]

Spring Boot RSocketRequester deal with server restart

I have a question about Spring's RSocketRequester. I have an RSocket server and client. The client connects to this server and calls a @MessageMapping endpoint. It works as expected.
But what if I restart the server? How do I reconnect automatically to the RSocket server from the client? Thanks.
Server:
@Controller
class RSC {
    @MessageMapping("pong")
    public Mono<String> pong(String m) {
        return Mono.just("PONG " + m);
    }
}
Client:
@Bean
public RSocketRequester rSocketRequester() {
    return RSocketRequester
            .builder()
            .connectTcp("localhost", 7000)
            .block();
}
@RestController
class RST {
    @Autowired
    private RSocketRequester requester;
    @GetMapping(path = "/ping")
    public Mono<String> ping() {
        return this.requester
                .route("pong")
                .data("TEST")
                .retrieveMono(String.class)
                .doOnNext(System.out::println);
    }
}
Updated for Spring Framework 5.2.6+
You could achieve it with io.rsocket.core.RSocketConnector#reconnect.
@Bean
Mono<RSocketRequester> rSocketRequester(RSocketRequester.Builder rSocketRequesterBuilder) {
    return rSocketRequesterBuilder
            .rsocketConnector(connector -> connector
                    .reconnect(Retry.fixedDelay(Integer.MAX_VALUE, Duration.ofSeconds(1))))
            .connectTcp("localhost", 7000);
}
@RestController
public class RST {
    @Autowired
    private Mono<RSocketRequester> rSocketRequesterMono;
    @GetMapping(path = "/ping")
    public Mono<String> ping() {
        return rSocketRequesterMono.flatMap(rSocketRequester ->
                rSocketRequester.route("pong")
                        .data("TEST")
                        .retrieveMono(String.class)
                        .doOnNext(System.out::println));
    }
}
I don't think I would create a RSocketRequester bean in an application. Unlike WebClient (which has a pool of reusable connections), the RSocket requester wraps a single RSocket, i.e. a single network connection.
I think it's best to store a Mono<RSocketRequester> and subscribe to that to get an actual requester when needed. Because you don't want to create a new connection for each call, you can cache the result. Thanks to Mono retryXYZ operators, there are many ways you can refine the reconnection behavior.
You could try something like the following:
@Service
public class RSocketPingService {
    private final Mono<RSocketRequester> requesterMono;
    // Spring Boot is creating an auto-configured RSocketRequester.Builder bean
    public RSocketPingService(RSocketRequester.Builder builder) {
        this.requesterMono = builder
                .dataMimeType(MediaType.APPLICATION_CBOR)
                .connectTcp("localhost", 7000).retry(5).cache();
    }
    public Mono<String> ping() {
        return this.requesterMono.flatMap(requester -> requester.route("pong")
                .data("TEST")
                .retrieveMono(String.class));
    }
}
The answer here https://stackoverflow.com/a/58890649/2852528 is the right one. The only thing I would like to add is that reactor.util.retry.Retry has many options for configuring your retry logic, including logging.
So I would slightly improve the original answer: increase the time between retries until reaching the max value (16 sec), and before each retry log the failure, so we can monitor the activity of the connector:
@Bean
Mono<RSocketRequester> rSocketRequester(RSocketRequester.Builder builder) {
    return builder.rsocketConnector(connector -> connector.reconnect(Retry.backoff(Integer.MAX_VALUE, Duration.ofSeconds(1L))
                    .maxBackoff(Duration.ofSeconds(16L))
                    .jitter(1.0D)
                    .doBeforeRetry((signal) -> log.error("connection error", signal.failure()))))
            .connectTcp("localhost", 7000);
}

Message headers not included in error handling with Spring Integration DSL

I am trying to track all transactions by adding extra headers on each operation. These extra headers work fine on request and response, but in the error case no headers are included.
This is my configuration (with Spring Integration DSL and Java 1.7)
@Bean
public IntegrationFlow inboundFlow() {
    return IntegrationFlows.from(Amqp.inboundGateway(simpleMessageListenerContainer())
            .mappedReplyHeaders(AMQPConstants.AMQP_CUSTOM_HEADER_FIELD_NAME_MATCH_PATTERN)
            .mappedRequestHeaders(AMQPConstants.AMQP_CUSTOM_HEADER_FIELD_NAME_MATCH_PATTERN)
            .errorChannel(gatewayErrorChannel())
            .requestChannel(gatewayRequestChannel())
            .replyChannel(gatewayResponseChannel())
    )
    .transform(getCustomFromJsonTransformer())
    .route(new HeaderValueRouter(AMQPConstants.OPERATION_ROUTING_KEY))
    .get();
}
@Bean
public MessageChannel gatewayRequestChannel() {
    return MessageChannels.direct().get();
}
@Bean
public MessageChannel gatewayResponseChannel() {
    return MessageChannels.publishSubscribe().get();
}
@Bean
public MessageChannel gatewayErrorChannel() {
    return MessageChannels.publishSubscribe().get();
}
@Bean
public IntegrationFlow responseTrackerOutboundFlow() {
    return trackerOutboundFlowTemplate(gatewayResponseChannel());
}
@Bean
public IntegrationFlow errorTrackerOutboundFlow() {
    return trackerOutboundFlowTemplate(gatewayErrorChannel());
}
private IntegrationFlow trackerOutboundFlowTemplate(MessageChannel fromMessageChannel) {
    return IntegrationFlows.from(fromMessageChannel)
            .handle(Amqp.outboundAdapter(new RabbitTemplate(getConnectionFactory()))
                    .exchangeName(LOGGER_EXCHANGE_NAME)
                    .routingKey(LOGGER_EXCHANGE_ROUTING_KEY)
                    .mappedRequestHeaders("*"))
            .get();
}
I am using errorChannel on the inboundGateway and also using mappedReplyHeaders and mappedRequestHeaders. Is it possible to have the headers on the errorChannel? Is there a way to configure mapped error headers or something like that?
The mappedReplyHeaders work only when you receive a good reply from the downstream flow. They are applied exactly before sending the reply message back to AMQP.
The errorChannel is a part of integration messaging, so there is no access to the mappedReplyHeaders at all. Forget them here!
On the other side, the errorChannel receives the Exception wrapped into a new ErrorMessage. That's why you don't see your headers there directly.
But you should bear in mind that in most cases the exception in integration messaging is a MessagingException with a failedMessage property. That failedMessage is the message that is "guilty" of the exception.
And if the normal header population process is done everywhere, you can get access to your headers from that failedMessage of the MessagingException payload of the ErrorMessage in the errorChannel flow.
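For illustration only, a minimal sketch (written with a Java 8 lambda for brevity; the flow bean and the "x-tracking-id" header name are assumptions, not part of the original answer) of an error flow that digs the original headers out of the failedMessage:
@Bean
public IntegrationFlow errorTrackingFlow() {
    return IntegrationFlows.from(gatewayErrorChannel())
            // the payload of the ErrorMessage arriving here is normally a MessagingException
            .handle(MessagingException.class, (exception, headers) -> {
                Message<?> failedMessage = exception.getFailedMessage();
                if (failedMessage != null) {
                    // "x-tracking-id" is a hypothetical custom header added earlier in the flow
                    Object trackingId = failedMessage.getHeaders().get("x-tracking-id");
                    System.out.println("Error for tracking id: " + trackingId);
                }
                return null; // no reply: this ends the error flow
            })
            .get();
}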

Spring Integration - @Filter discardChannel and/or throwExceptionOnRejection being ignored?

I have a Java DSL-based Spring Integration (spring-integration-java-dsl:1.0.1.RELEASE) flow which puts messages through a Filter to filter out certain messages. The Filter component works okay in terms of filtering out unwanted messages.
Now, I would like to set a discardChannel="discard.ch", but when I set the discard channel, the filtered-out messages never seem to actually go to the specified discardChannel. Any ideas why this might be?
My @Filter annotated class/method:
@Component
public class MessageFilter {
    @Filter(discardChannel = "discard.ch")
    public boolean filter(String payload) {
        // force all messages to be discarded to test discardChannel
        return false;
    }
}
My Integration Flow class:
@Configuration
@EnableIntegration
public class IntegrationConfig {
    @Autowired
    private MessageFilter messageFilter;
    @Bean(name = "discard.ch")
    public DirectChannel discardCh() {
        return new DirectChannel();
    }
    @Bean
    public IntegrationFlow inFlow() {
        return IntegrationFlows
                .from(Jms.messageDriverChannelAdapter(mlc))
                .filter("@messageFilter.filter('payload')")
                ...
                .get();
    }
    @Bean
    public IntegrationFlow discardFlow() {
        return IntegrationFlows
                .from("discard.ch")
                ...
                .get();
    }
}
I have turned Spring debug logging on, and I can't see where the discarded messages are actually going. It is as though the discardChannel I have set on the @Filter is not being picked up at all. Any ideas why this might be?
The annotation configuration is for when you are using annotation-based configuration.
When using the DSL, the annotation is not relevant; you need to configure the .filter within the DSL itself:
.filter("@messageFilter.filter('payload')", e -> e.discardChannel(discardCh()))
