Can record processor be spring singleton bean? - apache-kafka-streams

I am using spring-kafka to implement a topology that converts lower-case to upper-case, like this:
@Bean
public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
    KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
    // A new processor object is created here per record
    sourceStream.process(() -> new CapitalCaseProcessor());
    ...
}
The processor is not a spring singleton bean and is declared as follows:
public class CapitalCaseProcessor implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.headers().forEach(System.out::println);
    }
}
The above processor is stateful: it holds the ProcessorContext as state.
Now, what would happen if we converted the stateful CapitalCaseProcessor to a Spring singleton bean?
@Component
public class CapitalCaseProcessor implements Processor<String, String> {

    // Is the ProcessorContext going to have a thread-safety issue now?
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.headers().forEach(System.out::println);
    }
}
and try to inject it into the main topology as a Spring bean:
@Configuration
public class UppercaseTopologyProcessor {

    @Autowired
    CapitalCaseProcessor capitalCaseProcessor;

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
        KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
        // A singleton spring bean processor is now used for all the records
        sourceStream.process(() -> capitalCaseProcessor);
        ...
    }
Is this going to cause a thread-safety issue with CapitalCaseProcessor now, since it holds the ProcessorContext as state?
Or is it better to declare it as a prototype bean, like this?
@Configuration
public class UppercaseTopologyProcessor {

    @Lookup
    public CapitalCaseProcessor getCapitalCaseProcessor() { return null; }

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
        KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
        // A new prototype-scoped processor bean is obtained each time the supplier is called
        sourceStream.process(() -> getCapitalCaseProcessor());
        ...
    }
Update: I essentially would like to know two things:
Should the processor instance be created per stream record, like the Akka actor model where actors are stateful and work per request, or can it be a singleton object?
Is the ProcessorContext thread-safe?

I just ran a test and the ProcessorContext is NOT thread-safe; what makes the stream thread-safe is that you use a ProcessorSupplier (in your first example) to create a new processor instance each time one is requested.
You must certainly not replace this with a Spring singleton.
Here is my test, using the MessagingTransformer provided by Spring for Apache Kafka:
@SpringBootApplication
@EnableKafkaStreams
public class So66200448Application {

    private static final Logger log = LoggerFactory.getLogger(So66200448Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So66200448Application.class, args);
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder sb) {
        KStream<String, String> stream = sb.stream("so66200448");
        stream.transform(() -> new MessagingTransformer(msg -> {
            log.info(msg.toString());
            log.info(new String(msg.getHeaders().get("foo", byte[].class)));
            return msg;
        }, new MessagingMessageConverter()) {

            @Override
            public KeyValue transform(Object key, Object value) {
                try {
                    Thread.sleep(5000);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return super.transform(key, value);
            }
        })
        .to("so66200448out");
        return stream;
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so66200448").partitions(2).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so66200448out").partitions(2).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            Headers headers = new RecordHeaders();
            headers.add(new RecordHeader("foo", "bar".getBytes()));
            ProducerRecord<String, String> record = new ProducerRecord<>("so66200448", 0, null, "foo", headers);
            template.send(record);
            headers.remove("foo");
            headers.add(new RecordHeader("foo", "baz".getBytes()));
            record = new ProducerRecord<>("so66200448", 1, null, "bar", headers);
            template.send(record);
        };
    }

    @KafkaListener(id = "so66200448out", topics = "so66200448out")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.streams.application-id=so66200448
spring.kafka.streams.properties.num.stream.threads=2
spring.kafka.consumer.auto-offset-reset=earliest
2021-02-16 15:57:34.322 INFO 17133 --- [-StreamThread-1] com.example.demo.So66200448Application : bar
2021-02-16 15:57:34.322 INFO 17133 --- [-StreamThread-2] com.example.demo.So66200448Application : baz
Changing the supplier to return the same instance each time definitely breaks it.
@Bean
KStream<String, String> stream(StreamsBuilder sb) {
    KStream<String, String> stream = sb.stream("so66200448");
    MessagingTransformer transformer = new MessagingTransformer(msg -> {
        log.info(msg.toString());
        log.info(new String(msg.getHeaders().get("foo", byte[].class)));
        return msg;
    }, new MessagingMessageConverter()) {

        @Override
        public KeyValue transform(Object key, Object value) {
            try {
                Thread.sleep(5000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return super.transform(key, value);
        }
    };
    stream.transform(() -> transformer)
            .to("so66200448out");
    return stream;
}
2021-02-16 15:54:28.975 INFO 16406 --- [-StreamThread-1] com.example.demo.So66200448Application : baz
2021-02-16 15:54:28.975 INFO 16406 --- [-StreamThread-2] com.example.demo.So66200448Application : baz
So, streams relies on getting a new instance each time for thread-safety.
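If you still want Spring to manage the processor while preserving that one-instance-per-supplier-call behavior, one option (a sketch of an alternative to the @Lookup approach, not something taken from the test above) is to make CapitalCaseProcessor prototype-scoped and resolve it through an ObjectProvider; the topic name "input-topic" is just a placeholder:
@Component
@Scope("prototype") // a fresh CapitalCaseProcessor is created on every getObject() call
public class CapitalCaseProcessor implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.headers().forEach(System.out::println);
    }
}

@Configuration
public class UppercaseTopologyProcessor {

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder,
            ObjectProvider<CapitalCaseProcessor> processorProvider) {
        KStream<String, String> sourceStream =
                builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()));
        // The ProcessorSupplier delegates to the ObjectProvider, so Kafka Streams still
        // receives a new, Spring-configured processor each time it asks for one.
        sourceStream.process(processorProvider::getObject);
        return sourceStream;
    }
}
Kafka Streams still calls the supplier itself, so each call hands back a brand-new, Spring-managed instance instead of a shared singleton.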

Related

Can MockProducer and MockConsumer be used with kafkaTemplate producer

I have a service which uses KafkaTemplate to send Kafka messages.
@Service
class KafkaProducer(@Autowired val kafkaTemplate: KafkaTemplate<String, String>) {

    final fun sendMessage(msg: String) {
        val abc = kafkaTemplate.send(AppConstants.TOPIC_NAME, "abc", msg)
        abc.whenComplete { result, ex ->
            if (ex != null) {
                print("ex occured")
                print(ex.message)
            } else {
                print("sent successfully")
                print(result.producerRecord.value())
            }
        }
    }
}
For the unit test, I am trying to mock this KafkaProducer.
Can I use MockProducer (import org.apache.kafka.clients.producer.MockProducer) for this?
@Test
fun verify_test() {
    val mockProducer = MockProducer(true, StringSerializer(), StringSerializer())
    val kafkaProducer = KafkaProducer(mockProducer)
}
When I try the above code, I get an error because KafkaProducer takes a KafkaTemplate argument and I am providing a MockProducer.
Here is an example using a mock ProducerFactory...
@SpringJUnitConfig
class So74993413ApplicationTests {

    @Test
    void test(@Autowired KafkaTemplate<String, String> template,
            @Autowired MockProducer<String, String> producer) throws Exception {

        CompletableFuture<SendResult<String, String>> future = template.send("foo", "bar");
        SendResult<String, String> sendResult = future.get();
        // System.out.println(sendResult);
        List<ProducerRecord<String, String>> history = producer.history();
        assertThat(history).hasSize(1);
        ProducerRecord<String, String> record = history.get(0);
        assertThat(record.topic()).isEqualTo("foo");
        assertThat(record.value()).isEqualTo("bar");
    }

    @Configuration
    public static class Config {

        @Bean
        MockProducer<String, String> producer() {
            return new MockProducer<>(true, new StringSerializer(), new StringSerializer());
        }

        @Bean
        ProducerFactory<String, String> pf(MockProducer producer) {
            return new ProducerFactory<>() {

                @Override
                public Producer<String, String> createProducer() {
                    return producer;
                }
            };
        }

        @Bean
        KafkaTemplate<String, String> template(ProducerFactory<String, String> pf) {
            return new KafkaTemplate<>(pf);
        }
    }
}
KafkaTemplate is not a type of Producer. Native Kafka classes cannot be used in place of Spring's - that'd be a reverse dependency.
You would have to mock the template instead, since that is what you are actually using. Doing so bypasses any KafkaProducer instance the template would have used, mock or not.
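As a rough illustration of mocking the template directly (a sketch assuming spring-kafka 3.x, where send() returns a CompletableFuture; the topic name "topic-name" and key "abc" stand in for the constants in the question's service):
@Test
void sendMessage_usesTemplate() {
    @SuppressWarnings("unchecked")
    KafkaTemplate<String, String> template = mock(KafkaTemplate.class);
    // Stub send() to return an already-completed future so whenComplete runs synchronously
    ProducerRecord<String, String> record = new ProducerRecord<>("topic-name", "abc", "msg");
    given(template.send(anyString(), anyString(), anyString()))
            .willReturn(CompletableFuture.completedFuture(new SendResult<>(record, null)));

    KafkaProducer service = new KafkaProducer(template);
    service.sendMessage("msg");

    // Verify the service delegated to the template with the expected key and payload
    verify(template).send(anyString(), eq("abc"), eq("msg"));
}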

Test onFailure of spring-kafka sending message

I am trying to test the onFailure case when I send a Kafka message with the producer, but the onFailure method never fires.
Here is my code where I send a message :
@Component
public class MessageSending {

    @Autowired
    Map<String, KafkaTemplate<String, String>> producerByCountry;

    String topicName = "countryTopic";

    public void sendMessage(String data) {
        producerByCountry.get("countryName").send(topicName, data).addCallback(
                onSuccess -> {},
                onFailure -> log.error("failed")
        );
    }
}
Here is the test class, but it is still a success case and I have no idea how to test the failure case (I want to add some processing inside the onFailure block, but I would first like to know how to trigger onFailure from a test).
@EmbeddedKafka
@SpringBootTest
public class MessageSendingTest {

    @MockBean
    Map<Country, KafkaTemplate<String, String>> producerByCountry;

    @Autowired
    EmbeddedKafkaBroker embeddedKafka;

    @Autowired
    MessageSending messageSending;

    @Test
    void failTest(CapturedOutput capturedOutput) {
        var props = KafkaTestUtils.producerProps(embeddedKafka);
        var producerFactory = new DefaultKafkaProducerFactory<String, String>(props);
        var template = new KafkaTemplate<>(producerFactory);
        given(producerByCountry.get("USA")).willReturn(template);
        messageSending.sendMessage("data");
        assertThat(capturedOutput).contains("failed");
    }
}
I also tried the idea of this topic How to test Kafka OnFailure callback with Junit? by doing
doAnswer(invocationOnMock -> {
    ListenableFutureCallback<SendResult<String, String>> listenableFutureCallback = invocationOnMock.getArgument(0);
    KafkaProducerException value = new KafkaProducerException(new ProducerRecord<String, String>("myTopic", "myMessage"), "error", ex);
    listenableFutureCallback.onFailure(value);
    return null;
}).when(mock(ListenableFuture.class)).addCallback(any(ListenableFutureCallback.class));
But I got the Mockito exception org.mockito.exceptions.misusing.UnnecessaryStubbingException, caused by the when().addCallback stubbing.
Can someone help? Thanks.
You can use a mock template; see this answer for an example:
How to mock result from KafkaTemplate
EDIT
You can also mock the underlying Producer object - here is an example that is closer to your use case...
@SpringBootApplication
public class So75074961Application {

    public static void main(String[] args) {
        SpringApplication.run(So75074961Application.class, args);
    }

    @Bean
    KafkaTemplate<String, String> france(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf, Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "france:9092"));
    }

    @Bean
    KafkaTemplate<String, String> germany(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf, Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "germany:9092"));
    }
}

@Component
class MessageSending {

    private static final Logger log = LoggerFactory.getLogger(MessageSending.class);

    @Autowired
    Map<String, KafkaTemplate<String, String>> producerByCountry;

    String topicName = "countryTopic";

    public void sendMessage(String country, String data) {
        producerByCountry.get(country).send(topicName, data).addCallback(
                onSuccess -> log.info(onSuccess.getRecordMetadata().toString()),
                onFailure -> log.error("failed: " + onFailure.getMessage()));
    }
}
@SpringBootTest
@ExtendWith(OutputCaptureExtension.class)
class So75074961ApplicationTests {

    @Test
    void test(@Autowired MessageSending sending, CapturedOutput capture) {
        ProducerFactory<String, String> pf = mock(ProducerFactory.class);
        Producer<String, String> prod = mock(Producer.class);
        given(pf.createProducer()).willReturn(prod);
        willAnswer(inv -> {
            Callback callback = inv.getArgument(1);
            callback.onCompletion(null, new RuntimeException("test"));
            return mock(Future.class);
        }).given(prod).send(any(), any());

        // inject the mock pf into the "france" template
        Map<?, ?> producers = KafkaTestUtils.getPropertyValue(sending, "producerByCountry", Map.class);
        new DirectFieldAccessor(producers.get("france")).setPropertyValue("producerFactory", pf);

        sending.sendMessage("france", "foo");
        assertThat(capture)
                .contains("failed: Failed to send; nested exception is java.lang.RuntimeException: test");
    }
}
Use CompletableFuture instead of ListenableFuture for versions 3.0 or later.
public void sendMessage(String country, String data) {
    producerByCountry.get(country).send(topicName, data).whenComplete(
            (res, ex) -> {
                if (ex == null) {
                    log.info(res.getRecordMetadata().toString());
                }
                else {
                    log.error("failed: " + ex.getMessage());
                }
            });
}
and
assertThat(capture)
.contains("failed: Failed to send");
(the latter because Spring Framework 6.0+ no longer merges nested exception messages; the top level exception is a KafkaProducerException, with the actual exception as its cause).

Spring Boot Kafka Configure DefaultErrorHandler?

I created a batch-consumer following the Spring Kafka docs:
@SpringBootApplication
public class ApplicationConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(ApplicationConsumer.class);

    private static final String TOPIC = "foo";

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(ApplicationConsumer.class, args);
    }

    @Bean
    public RecordMessageConverter converter() {
        return new JsonMessageConverter();
    }

    @Bean
    public BatchMessagingMessageConverter batchConverter() {
        return new BatchMessagingMessageConverter(converter());
    }

    @KafkaListener(topics = TOPIC)
    public void listen(List<Name> ps) {
        LOGGER.info("received name beans: {}", Arrays.toString(ps.toArray()));
    }
}
I was able to successfully get the consumer running by defining the following additional configuration env variables, that Spring automatically picks up:
export SPRING_KAFKA_BOOTSTRAP-SERVERS=...
export SPRING_KAFKA_CONSUMER_GROUP-ID=...
So the above code works. But now I want to customize the default error handler to use exponential backoff. From the ref docs I tried adding the following to ApplicationConsumer class:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new DefaultErrorHandler(new ExponentialBackOffWithMaxRetries(10)));
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    return props;
}
But now I get errors saying that it can't find some of the configuration. It looks like I'm stuck having to redefine all of the properties in consumerConfigs() that were already being automatically defined before. This includes everything from bootstrap server uris to the json-deserialization config.
Is there a good way to update my first version of the code to just override the default-error handler?
Just define the error handler as a @Bean and Boot will automatically wire it into its auto-configured container factory.
EDIT
This works as expected for me:
@SpringBootApplication
public class So70884203Application {

    public static void main(String[] args) {
        SpringApplication.run(So70884203Application.class, args);
    }

    @Bean
    DefaultErrorHandler eh() {
        return new DefaultErrorHandler((rec, ex) -> {
            System.out.println("Recovered: " + rec);
        }, new FixedBackOff(0L, 0L));
    }

    @KafkaListener(id = "so70884203", topics = "so70884203")
    void listen(String in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so70884203").partitions(1).replicas(1).build();
    }
}
foo
Recovered: ConsumerRecord(topic = so70884203, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1643316625291, serialized key size = -1, serialized value size = 3, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = foo)
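Since the original question asked for exponential backoff rather than the FixedBackOff used above, the same bean could be declared like this (a sketch; the intervals and retry count are illustrative):
@Bean
DefaultErrorHandler errorHandler() {
    // Retry up to 10 times, doubling the delay each attempt, capped at 10 seconds
    ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(10);
    backOff.setInitialInterval(1_000L);
    backOff.setMultiplier(2.0);
    backOff.setMaxInterval(10_000L);
    // After retries are exhausted, just log the failed record
    return new DefaultErrorHandler((rec, ex) ->
            System.out.println("Recovered after retries: " + rec), backOff);
}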

Loading a custom ApplicationContextInitializer in AWS Lambda Spring boot

How do I load a custom ApplicationContextInitializer in a Spring Boot AWS Lambda?
I have an AWS Lambda application using Spring Boot, and I would like to write an ApplicationContextInitializer for decrypting database passwords. I have the following code that works while running it as a Spring Boot application locally, but when I deploy it to AWS as a Lambda it doesn't work.
Here is my code
1. application.properties
spring.datasource.url=url
spring.datasource.username=testuser
CIPHER.spring.datasource.password=encryptedpassword
The following code is the ApplicationContextInitializer. Assume the password is Base64 encoded, for testing only (in the actual case it will be encrypted with AWS KMS). The idea is that if a key starts with 'CIPHER.' (as in CIPHER.spring.datasource.password), its value needs to be decrypted, and another key/value pair with the actual key (here spring.datasource.password) and the decrypted value is added at context initialization.
The result will be like spring.datasource.password=decrypted password
@Component
public class DecryptedPropertyContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final String CIPHER = "CIPHER.";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();
        for (PropertySource<?> propertySource : environment.getPropertySources()) {
            Map<String, Object> propertyOverrides = new LinkedHashMap<>();
            decodePasswords(propertySource, propertyOverrides);
            if (!propertyOverrides.isEmpty()) {
                PropertySource<?> decodedProperties = new MapPropertySource("decoded " + propertySource.getName(), propertyOverrides);
                environment.getPropertySources().addBefore(propertySource.getName(), decodedProperties);
            }
        }
    }

    private void decodePasswords(PropertySource<?> source, Map<String, Object> propertyOverrides) {
        if (source instanceof EnumerablePropertySource) {
            EnumerablePropertySource<?> enumerablePropertySource = (EnumerablePropertySource<?>) source;
            for (String key : enumerablePropertySource.getPropertyNames()) {
                Object rawValue = source.getProperty(key);
                if (rawValue instanceof String && key.startsWith(CIPHER)) {
                    String cipherRemovedKey = key.substring(CIPHER.length());
                    String decodedValue = decode((String) rawValue);
                    propertyOverrides.put(cipherRemovedKey, decodedValue);
                }
            }
        }
    }

    public String decode(String encodedString) {
        byte[] valueDecoded = org.apache.commons.codec.binary.Base64.decodeBase64(encodedString);
        return new String(valueDecoded);
    }
}
Here is the Spring boot initializer
@SpringBootApplication
@ComponentScan(basePackages = "com.amazonaws.serverless.sample.springboot.controller")
public class Application extends SpringBootServletInitializer {

    @Bean
    public HandlerMapping handlerMapping() {
        return new RequestMappingHandlerMapping();
    }

    @Bean
    public HandlerAdapter handlerAdapter() {
        return new RequestMappingHandlerAdapter();
    }

    @Bean
    public HandlerExceptionResolver handlerExceptionResolver() {
        return new HandlerExceptionResolver() {

            @Override
            public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
                return null;
            }
        };
    }

    // loading the initializer here
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.addInitializers(new DecryptedPropertyContextInitializer());
        application.run(args);
    }
}
This is working when run as a Spring Boot application, but when it is deployed as a Lambda into AWS, the main() method in my SpringBootServletInitializer is never called by Lambda. Here is my Lambda handler.
public class StreamLambdaHandler implements RequestStreamHandler {

    private static Logger LOGGER = LoggerFactory.getLogger(StreamLambdaHandler.class);

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
            handler.onStartup(servletContext -> {
                FilterRegistration.Dynamic registration = servletContext.addFilter("CognitoIdentityFilter", CognitoIdentityFilter.class);
                registration.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
            });
        } catch (ContainerInitializationException e) {
            e.printStackTrace();
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        outputStream.close();
    }
}
What change needs to be made so that the ApplicationContextInitializer is loaded by the Lambda? Any help will be highly appreciated.
I was able to nail it in the following way.
First, change the property value to a placeholder with a prefix, where the prefix denotes that the value needs to be decrypted, e.g.
spring.datasource.password=${MY_PREFIX_placeHolder}
The AWS Lambda environment variable name should match the placeholder ('MY_PREFIX_placeHolder') and its value is encrypted using AWS KMS (this sample uses Base64 decoding instead).
Then create an ApplicationContextInitializer which will decrypt the property value:
public class DecryptedPropertyContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final String CIPHER = "MY_PREFIX_";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();
        for (PropertySource<?> propertySource : environment.getPropertySources()) {
            Map<String, Object> propertyOverrides = new LinkedHashMap<>();
            decodePasswords(propertySource, propertyOverrides);
            if (!propertyOverrides.isEmpty()) {
                PropertySource<?> decodedProperties = new MapPropertySource("decoded " + propertySource.getName(), propertyOverrides);
                environment.getPropertySources().addBefore(propertySource.getName(), decodedProperties);
            }
        }
    }

    private void decodePasswords(PropertySource<?> source, Map<String, Object> propertyOverrides) {
        if (source instanceof EnumerablePropertySource) {
            EnumerablePropertySource<?> enumerablePropertySource = (EnumerablePropertySource<?>) source;
            for (String key : enumerablePropertySource.getPropertyNames()) {
                Object rawValue = source.getProperty(key);
                if (rawValue instanceof String && key.startsWith(CIPHER)) {
                    String decodedValue = decode((String) rawValue);
                    propertyOverrides.put(key, decodedValue);
                }
            }
        }
    }

    public String decode(String encodedString) {
        byte[] valueDecoded = org.apache.commons.codec.binary.Base64.decodeBase64(encodedString);
        return new String(valueDecoded);
    }
}
The above code will decrypt all the values with prefix MY_PREFIX_ and add them at the top of the property source.
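For the real KMS case, decode() would call KMS instead of just Base64-decoding. A rough sketch with the AWS SDK for Java v2, assuming the environment variable holds the Base64-encoded KMS ciphertext and the Lambda's execution role has kms:Decrypt permission:
// imports: software.amazon.awssdk.core.SdkBytes,
//          software.amazon.awssdk.services.kms.KmsClient,
//          software.amazon.awssdk.services.kms.model.DecryptRequest
public String decode(String encodedString) {
    // The KMS ciphertext is binary, so it is assumed to be stored Base64-encoded
    // in the Lambda environment variable.
    byte[] ciphertext = java.util.Base64.getDecoder().decode(encodedString);
    try (KmsClient kms = KmsClient.create()) {
        DecryptRequest request = DecryptRequest.builder()
                .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                .build();
        return kms.decrypt(request).plaintext().asUtf8String();
    }
}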
As the Spring Boot application is deployed into AWS Lambda, Lambda will not invoke the main() function, so an ApplicationContextInitializer registered in main() is not going to work. To make it work, override the createSpringApplicationBuilder() method of SpringBootServletInitializer, so the SpringBootServletInitializer will look like:
@SpringBootApplication
@ComponentScan(basePackages = "com.amazonaws.serverless.sample.springboot.controller")
public class Application extends SpringBootServletInitializer {

    @Bean
    public HandlerMapping handlerMapping() {
        return new RequestMappingHandlerMapping();
    }

    @Bean
    public HandlerAdapter handlerAdapter() {
        return new RequestMappingHandlerAdapter();
    }

    @Bean
    public HandlerExceptionResolver handlerExceptionResolver() {
        return new HandlerExceptionResolver() {

            @Override
            public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
                return null;
            }
        };
    }

    @Override
    protected SpringApplicationBuilder createSpringApplicationBuilder() {
        SpringApplicationBuilder builder = new SpringApplicationBuilder();
        builder.initializers(new DecryptedPropertyContextInitializer());
        return builder;
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
No changes need to be made to the Lambda handler.

PublishSubscribeChannel using TaskExecutor - Thread behaviour

I have a simple Spring Integration Java DSL flow as follows:
@Configuration
public class OrderFlow {

    private static final Logger logger = LoggerFactory.getLogger(OrderFlow.class);

    @Autowired
    private OrderSubFlow orderSubFlow;

    @Autowired
    private ThreadPoolTaskExecutor threadPoolTaskExecutor;

    @Bean
    public IntegrationFlow orders() {
        return IntegrationFlows.from(MessageChannels.direct("order_input").get()).handle(new GenericHandler<Order>() {

            @Override
            public Object handle(Order order, Map<String, Object> headers) {
                logger.info("Pre-Processing order with id: {}", order.getId());
                return MessageBuilder.withPayload(order).copyHeaders(headers).build();
            }
        }).publishSubscribeChannel(threadPoolTaskExecutor, new Consumer<PublishSubscribeSpec>() {

            @Override
            public void accept(PublishSubscribeSpec t) {
                t.subscribe(orderSubFlow);
            }
        }).handle(new GenericHandler<Order>() {

            @Override
            public Object handle(Order order, Map<String, Object> headers) {
                logger.info("Post-Processing order with id: {}", order.getId());
                return MessageBuilder.withPayload(order).copyHeaders(headers).build();
            }
        }).get();
    }

    @Bean
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setMaxPoolSize(2);
        threadPoolTaskExecutor.setCorePoolSize(2);
        threadPoolTaskExecutor.setQueueCapacity(10);
        return threadPoolTaskExecutor;
    }
}
And the OrderSubFlow is
@Configuration
public class OrderSubFlow implements IntegrationFlow {

    private static final Logger logger = LoggerFactory.getLogger(OrderSubFlow.class);

    @Override
    public void configure(IntegrationFlowDefinition<?> flow) {
        flow.handle(new GenericHandler<Order>() {

            @Override
            public Object handle(Order order, Map<String, Object> headers) {
                logger.info("Processing order with id: {}", order.getId());
                return null;
            }
        });
    }
}
When I put a message into the "order_input" channel, the first OrderFlow handler is executed on the main thread and the OrderSubFlow handler on a TaskExecutor thread, which is expected. But the second OrderFlow handler is also executed on a TaskExecutor thread. Is this expected behaviour? Shouldn't the second OrderFlow handler be executed on the main thread itself?
Please see the logs below.
INFO 9648 --- [ main] com.example.flows.OrderFlow : Pre-Processing order with id: 10
INFO 9648 --- [lTaskExecutor-1] com.example.flows.OrderSubFlow : Processing order with id: 10
INFO 9648 --- [lTaskExecutor-2] com.example.flows.OrderFlow : Post-Processing order with id: 10
Here is the gateway I'm using
@MessagingGateway
public interface OrderService {

    @Gateway(requestChannel = "order_input")
    Order processOrder(Order order);
}
Please read the discussion in https://jira.spring.io/browse/INT-4264. That is really the expected behavior, because that handler is just one more subscriber to the publishSubscribeChannel.
To make what you want possible, use .routeToRecipients(), where one of the recipients is a pub-sub channel with an Executor, and another is a DirectChannel to continue in the main thread.
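A rough sketch of that shape, reusing the question's handlers and executor inside the existing OrderFlow configuration (the sub-flow wiring here is illustrative, not taken verbatim from the JIRA discussion):
@Bean
public IntegrationFlow orders() {
    return IntegrationFlows.from(MessageChannels.direct("order_input").get())
            .handle((GenericHandler<Order>) (order, headers) -> {
                logger.info("Pre-Processing order with id: {}", order.getId());
                return order;
            })
            .routeToRecipients(r -> r
                    // recipient 1: hand off to the task executor for the sub-flow
                    .recipientFlow(sf -> sf
                            .channel(MessageChannels.executor(threadPoolTaskExecutor))
                            .handle((GenericHandler<Order>) (order, headers) -> {
                                logger.info("Processing order with id: {}", order.getId());
                                return null;
                            }))
                    // recipient 2: continue post-processing on the calling (main) thread
                    .recipientFlow(sf -> sf
                            .handle((GenericHandler<Order>) (order, headers) -> {
                                logger.info("Post-Processing order with id: {}", order.getId());
                                return order;
                            })))
            .get();
}
Here the post-processing recipient never passes through the executor channel, so it stays on the calling thread, and its return value becomes the gateway reply.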
