PublishSubscribeChannel using TaskExecutor - Thread behaviour - Spring

I have a simple Spring Integration Java DSL flow as follows:
@Configuration
public class OrderFlow {

    private static final Logger logger = LoggerFactory.getLogger(OrderFlow.class);

    @Autowired
    private OrderSubFlow orderSubFlow;

    @Autowired
    private ThreadPoolTaskExecutor threadPoolTaskExecutor;

    @Bean
    public IntegrationFlow orders() {
        return IntegrationFlows.from(MessageChannels.direct("order_input").get())
                .handle(new GenericHandler<Order>() {

                    @Override
                    public Object handle(Order order, Map<String, Object> headers) {
                        logger.info("Pre-Processing order with id: {}", order.getId());
                        return MessageBuilder.withPayload(order).copyHeaders(headers).build();
                    }
                })
                .publishSubscribeChannel(threadPoolTaskExecutor, new Consumer<PublishSubscribeSpec>() {

                    @Override
                    public void accept(PublishSubscribeSpec t) {
                        t.subscribe(orderSubFlow);
                    }
                })
                .handle(new GenericHandler<Order>() {

                    @Override
                    public Object handle(Order order, Map<String, Object> headers) {
                        logger.info("Post-Processing order with id: {}", order.getId());
                        return MessageBuilder.withPayload(order).copyHeaders(headers).build();
                    }
                })
                .get();
    }

    @Bean
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setMaxPoolSize(2);
        threadPoolTaskExecutor.setCorePoolSize(2);
        threadPoolTaskExecutor.setQueueCapacity(10);
        return threadPoolTaskExecutor;
    }
}
And the OrderSubFlow is
@Configuration
public class OrderSubFlow implements IntegrationFlow {

    private static final Logger logger = LoggerFactory.getLogger(OrderSubFlow.class);

    @Override
    public void configure(IntegrationFlowDefinition<?> flow) {
        flow.handle(new GenericHandler<Order>() {

            @Override
            public Object handle(Order order, Map<String, Object> headers) {
                logger.info("Processing order with id: {}", order.getId());
                return null;
            }
        });
    }
}
When I put a message into the "order_input" channel, the first OrderFlow handler is executed in the main thread and the OrderSubFlow handler in a TaskExecutor thread, which is expected. But the second OrderFlow handler is also executed in a TaskExecutor thread. Is this expected behaviour? Shouldn't the second OrderFlow handler be executed in the main thread?
Please see the logs below.
INFO 9648 --- [ main] com.example.flows.OrderFlow : Pre-Processing order with id: 10
INFO 9648 --- [lTaskExecutor-1] com.example.flows.OrderSubFlow : Processing order with id: 10
INFO 9648 --- [lTaskExecutor-2] com.example.flows.OrderFlow : Post-Processing order with id: 10
Here is the gateway I'm using
@MessagingGateway
public interface OrderService {

    @Gateway(requestChannel = "order_input")
    Order processOrder(Order order);
}

Please read the discussion in https://jira.spring.io/browse/INT-4264. This is indeed expected behavior: that second handler is just one more subscriber to the publishSubscribeChannel.
What you want can be achieved with .routeToRecipients(), where one recipient is a pub-sub channel with an Executor and another is a DirectChannel that continues on the main thread.
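A rough sketch of that approach (assuming the same OrderFlow configuration class and beans as above; the channel names asyncOrders and postProcess are made up for illustration, and the handle(Class, GenericHandler) lambda form is just shorthand for the anonymous classes in the question):
@Bean
public IntegrationFlow orders() {
    return IntegrationFlows.from(MessageChannels.direct("order_input").get())
            .handle(Order.class, (order, headers) -> {
                logger.info("Pre-Processing order with id: {}", order.getId());
                return order;
            })
            // Fork here: the first recipient is an executor-backed pub-sub channel,
            // the second is a DirectChannel that keeps running on the calling thread.
            .routeToRecipients(r -> r
                    .recipient("asyncOrders")
                    .recipient("postProcess"))
            .get();
}

@Bean
public IntegrationFlow asyncOrders() {
    return IntegrationFlows
            .from(MessageChannels.publishSubscribe("asyncOrders", threadPoolTaskExecutor).get())
            .handle(Order.class, (order, headers) -> {
                logger.info("Processing order with id: {}", order.getId()); // TaskExecutor thread
                return null;
            })
            .get();
}

@Bean
public IntegrationFlow postProcess() {
    return IntegrationFlows.from(MessageChannels.direct("postProcess").get())
            .handle(Order.class, (order, headers) -> {
                logger.info("Post-Processing order with id: {}", order.getId()); // main thread
                return order;
            })
            .get();
}
The executor-backed pub-sub recipient hands the message off to the pool, while the DirectChannel recipient keeps the post-processing (and the gateway reply) on the caller's thread.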

Related

Spring Boot 3 context propagation in micrometer tracing

Spring Boot 3 has changed context propagation in tracing.
https://github.com/micrometer-metrics/tracing/wiki/Spring-Cloud-Sleuth-3.1-Migration-Guide#async-instrumentation
They now provide a library for this, but I guess I don't quite understand how it works.
I have created a taskExecutor as in the guide:
#Bean(name = "taskExecutor")
ThreadPoolTaskExecutor threadPoolTaskScheduler() {
ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor() {
#Override
protected ExecutorService initializeExecutor(ThreadFactory threadFactory, RejectedExecutionHandler rejectedExecutionHandler) {
ExecutorService executorService = super.initializeExecutor(threadFactory, rejectedExecutionHandler);
return ContextExecutorService.wrap(executorService, ContextSnapshot::captureAll);
}
};
threadPoolTaskExecutor.initialize();
return threadPoolTaskExecutor;
}
And I have annotated the method with @Async like this:
@Async("taskExecutor")
public void run() {
    // invoke some service
}
But the context is not propagated to the child context in the taskExecutor thread.
I was facing the same problem. Please add this code to the configuration and everything works as expected.
@Configuration(proxyBeanMethods = false)
static class AsyncConfig implements AsyncConfigurer, WebMvcConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        return ContextExecutorService.wrap(Executors.newCachedThreadPool(), ContextSnapshot::captureAll);
    }

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.setTaskExecutor(new SimpleAsyncTaskExecutor(r -> new Thread(ContextSnapshot.captureAll().wrap(r))));
    }
}

How to use Async in Spring Boot?

Below is my code. With it, different thread IDs are not being created; the output shows the same thread ID.
@Controller
@RequestMapping(value = "/Main")
public class MyController {

    @Autowired
    private MyService myService;

    @PostMapping("/Sub")
    @ResponseBody
    public String readInput(@RequestBody String name) {
        for (int i = 0; i < 5; i++) {
            myService.asyncMethod();
        }
        return "Success";
    }
}
With my code below, different thread IDs are not being created.
@Repository
@Configuration
@EnableAsync
public class MyService {

    @Bean(name = "threadPoolTaskExecutor")
    public Executor threadPoolTaskExecutor() {
        return new ThreadPoolTaskExecutor();
    }

    @Async("threadPoolTaskExecutor")
    public void asyncMethod() {
        System.out.println("Thread " + Thread.currentThread().getId() + " is running");
    }
}
First of all, you cannot tell whether the thread pool is being used just from the thread ID. Set a thread name prefix and check it in the log instead.
Configure the thread pool:
@Slf4j
@Configuration
public class ThreadExecutorConfig {

    @Autowired
    private ThreadPoolProperties threadPoolProperties;

    @Bean(name = "taskExecutor")
    public ExecutorService executorService() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(threadPoolProperties.getCorePoolSize());
        executor.setMaxPoolSize(threadPoolProperties.getMaxPoolSize());
        executor.setQueueCapacity(threadPoolProperties.getQueueSize());
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.setThreadNamePrefix("myThread-");
        executor.initialize();
        log.info("threadPoolConfig;corePoolSize:[{}];maxPoolSize:[{}];queueSize:[{}]",
                threadPoolProperties.getCorePoolSize(),
                threadPoolProperties.getMaxPoolSize(),
                threadPoolProperties.getQueueSize());
        return executor.getThreadPoolExecutor();
    }
}
Use the @Async annotation on methods:
#Async(value = "taskExecutor")
#Override
public void asyncSave(OperationLogModel entity) {
if (log.isDebugEnabled()) {
log.debug("thread:{};entity:{}", Thread.currentThread().getName(), entity.toString());
}
entity.setCreateTime(LocalDateTime.now());
super.save(entity);
}
Then check the thread names in the log output.
Good question! The answer is in ThreadPoolTaskExecutor. Its default corePoolSize is one.
#Bean(name = "threadPoolTaskExecutor")
public Executor threadPoolTaskExecutor() {
ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
threadPoolTaskExecutor.setCorePoolSize(3);//or any (positive) integer that suits you.
return threadPoolTaskExecutor;
}
..will behave more as we expect:
Thread 127 is running
Thread 128 is running
Thread 128 is running
Thread 129 is running
Thread 127 is running

Can a record processor be a Spring singleton bean?

I am using spring-kafka to implement a topology that converts lower-case to upper-case, like this:
@Bean
public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
    KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
    // A new processor object is created here per record
    sourceStream.process(() -> new CapitalCaseProcessor());
    ...
}
The processor is not a Spring singleton bean and is declared as follows:
public class CapitalCaseProcessor implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.headers().forEach(System.out::println);
    }
}
The above processor is stateful and holds the processor context as state.
Now, what would happen if we converted the stateful CapitalCaseProcessor to a Spring singleton bean?
@Component
public class CapitalCaseProcessor implements Processor<String, String> {

    // Is the ProcessorContext going to have a thread-safety issue now?
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.headers().forEach(System.out::println);
    }
}
and try to inject it into the main topology as a Spring bean:
@Configuration
public class UppercaseTopologyProcessor {

    @Autowired
    CapitalCaseProcessor capitalCaseProcessor;

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
        KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
        // A singleton Spring bean processor is now used for all the records
        sourceStream.process(() -> capitalCaseProcessor);
        ...
    }
Is this going to cause a thread-safety issue with the CapitalCaseProcessor, now that it holds the ProcessorContext as state?
Or is it better to declare it as a prototype bean like this?
@Configuration
public class UppercaseTopologyProcessor {

    @Lookup
    public CapitalCaseProcessor getCapitalCaseProcessor() { return null; }

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder) {
        KStream<String, String> sourceStream = builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
        // The processor is looked up from the context each time the supplier is called
        sourceStream.process(() -> getCapitalCaseProcessor());
        ...
    }
Update: I essentially would like to know two things:
Should a processor instance be associated with each stream record, like the Akka actor model where actors are stateful and work per request, or can it be a singleton object?
Is ProcessorContext thread-safe?
I just ran a test: the processor context is NOT thread-safe. What makes the stream thread-safe is that you use a ProcessorSupplier (in your first example) to create a new processor instance for each record.
You must certainly not replace this with a Spring singleton.
Here is my test, using the MessagingTransformer provided by Spring for Apache Kafka:
@SpringBootApplication
@EnableKafkaStreams
public class So66200448Application {

    private static final Logger log = LoggerFactory.getLogger(So66200448Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So66200448Application.class, args);
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder sb) {
        KStream<String, String> stream = sb.stream("so66200448");
        stream.transform(() -> new MessagingTransformer(msg -> {
            log.info(msg.toString());
            log.info(new String(msg.getHeaders().get("foo", byte[].class)));
            return msg;
        }, new MessagingMessageConverter()) {

            @Override
            public KeyValue transform(Object key, Object value) {
                try {
                    Thread.sleep(5000);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return super.transform(key, value);
            }
        })
        .to("so66200448out");
        return stream;
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so66200448").partitions(2).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so66200448out").partitions(2).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            Headers headers = new RecordHeaders();
            headers.add(new RecordHeader("foo", "bar".getBytes()));
            ProducerRecord<String, String> record = new ProducerRecord<>("so66200448", 0, null, "foo", headers);
            template.send(record);
            headers.remove("foo");
            headers.add(new RecordHeader("foo", "baz".getBytes()));
            record = new ProducerRecord<>("so66200448", 1, null, "bar", headers);
            template.send(record);
        };
    }

    @KafkaListener(id = "so66200448out", topics = "so66200448out")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.streams.application-id=so66200448
spring.kafka.streams.properties.num.stream.threads=2
spring.kafka.consumer.auto-offset-reset=earliest
2021-02-16 15:57:34.322 INFO 17133 --- [-StreamThread-1] com.example.demo.So66200448Application : bar
2021-02-16 15:57:34.322 INFO 17133 --- [-StreamThread-2] com.example.demo.So66200448Application : baz
Changing the supplier to return the same instance each time definitely breaks it.
@Bean
KStream<String, String> stream(StreamsBuilder sb) {
    KStream<String, String> stream = sb.stream("so66200448");
    MessagingTransformer transformer = new MessagingTransformer(msg -> {
        log.info(msg.toString());
        log.info(new String(msg.getHeaders().get("foo", byte[].class)));
        return msg;
    }, new MessagingMessageConverter()) {

        @Override
        public KeyValue transform(Object key, Object value) {
            try {
                Thread.sleep(5000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return super.transform(key, value);
        }
    };
    stream.transform(() -> transformer)
            .to("so66200448out");
    return stream;
}
2021-02-16 15:54:28.975 INFO 16406 --- [-StreamThread-1] com.example.demo.So66200448Application : baz
2021-02-16 15:54:28.975 INFO 16406 --- [-StreamThread-2] com.example.demo.So66200448Application : baz
So, Kafka Streams relies on getting a new instance each time for thread safety.
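If you do want the processor to stay a Spring-managed bean, one option (a sketch I'm adding for illustration, not part of the original answer) is to make it prototype-scoped and have the ProcessorSupplier ask the container for a fresh instance each time it is called, for example via an ObjectProvider:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) // a new instance for every getObject() call
public class CapitalCaseProcessor implements Processor<String, String> {
    // same stateful implementation as above
}

@Configuration
public class UppercaseTopologyProcessor {

    @Bean
    public KStream<String, String> kStreamPromoToUppercase(StreamsBuilder builder,
            ObjectProvider<CapitalCaseProcessor> processorProvider) {
        KStream<String, String> sourceStream =
                builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()));
        // The supplier obtains a fresh prototype bean every time Kafka Streams asks for a processor
        sourceStream.process(() -> processorProvider.getObject());
        return sourceStream;
    }
}
The @Lookup approach from the question would behave the same way only if CapitalCaseProcessor is declared with prototype scope; a plain singleton @Component still hands the same instance to every stream thread.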

Spring Boot: how to use FilteringMessageListenerAdapter

I have a Spring Boot application which listens to messages on a Kafka topic. To filter those messages, I have the following two classes:
@Component
public class Listener implements MessageListener {

    private final CountDownLatch latch1 = new CountDownLatch(1);

    @Override
    @KafkaListener(topics = "${spring.kafka.topic.boot}")
    public void onMessage(Object o) {
        System.out.println("LISTENER received payload *****");
        this.latch1.countDown();
    }
}

@Configuration
@EnableKafka
public class KafkaConfig {

    @Autowired
    private Listener listener;

    @Bean
    public FilteringMessageListenerAdapter filteringReceiver() {
        return new FilteringMessageListenerAdapter(listener, recordFilterStrategy());
    }

    public RecordFilterStrategy recordFilterStrategy() {
        return new RecordFilterStrategy() {

            @Override
            public boolean filter(ConsumerRecord consumerRecord) {
                System.out.println("IN FILTER");
                return false;
            }
        };
    }
}
While messages are being processed by the Listener class, the RecordFilterStrategy implementation is not being invoked. What is the correct way to use FilteringMessageListenerAdapter?
Thanks
The solution was as follows:
There is no need for the FilteringMessageListenerAdapter class.
Instead, create a ConcurrentKafkaListenerContainerFactory rather than relying on the one Spring Boot provides out of the box, and set the RecordFilterStrategy implementation on that factory.
@Bean
ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setRecordFilterStrategy(recordFilterStrategy());
    return factory;
}
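The recordFilterStrategy() and consumerFactory() methods referenced above are not shown in the original answer; a minimal sketch might look like this (the broker address, group id, and the "ignore" filter condition are just example values):
@Bean
public RecordFilterStrategy<Integer, String> recordFilterStrategy() {
    // returning true means "filter this record out"; false lets it reach the @KafkaListener
    return consumerRecord -> consumerRecord.value().contains("ignore");
}

@Bean
public ConsumerFactory<Integer, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "filter-demo");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
With the filter set on the container factory, every record passes through it before it can reach the @KafkaListener method.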

@EventListener with @Async in Spring

I am trying to enable asynchronous event handling by combining the @Async and @EventListener annotations, but I still see that the listener is running in the publishing thread.
You can find the example here:
@SpringBootApplication
@EnableAsync
class AsyncEventListenerExample {

    static final Logger logger = LoggerFactory.getLogger(AsyncEventListenerExample.class);

    @Bean
    TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    static class MedicalRecordUpdatedEvent {

        private String id;

        public MedicalRecordUpdatedEvent(String id) {
            this.id = id;
        }

        @Override
        public String toString() {
            return "MedicalRecordUpdatedEvent{" +
                    "id='" + id + '\'' +
                    '}';
        }
    }

    @Component
    static class Receiver {

        @EventListener
        void handleSync(MedicalRecordUpdatedEvent event) {
            logger.info("thread '{}' handling '{}' event", Thread.currentThread(), event);
        }

        @Async
        @EventListener
        void handleAsync(MedicalRecordUpdatedEvent event) {
            logger.info("thread '{}' handling '{}' event", Thread.currentThread(), event);
        }
    }

    @Component
    static class Producer {

        private final ApplicationEventPublisher publisher;

        public Producer(ApplicationEventPublisher publisher) {
            this.publisher = publisher;
        }

        public void create(String id) {
            publisher.publishEvent(new MedicalRecordUpdatedEvent(id));
        }

        @Async
        public void asynMethod() {
            logger.info("running async method with thread '{}'", Thread.currentThread());
        }
    }
}
and my test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = AsyncEventListenerExample.class)
public class AsyncEventListenerExampleTests {

    @Autowired
    Producer producer;

    @Test
    public void createEvent() throws InterruptedException {
        producer.create("foo");
        //producer.asynMethod();
        // A chance to see the logging messages before the JVM exits.
        Thread.sleep(2000);
    }
}
However, in the logs I see that both @EventListener methods run in the main thread.
2016-05-12 08:52:43.184 INFO 18671 --- [ main] c.z.e.async2.AsyncEventListenerExample : thread 'Thread[main,5,main]' handling 'MedicalRecordUpdatedEvent{id='foo'}' event
2016-05-12 08:52:43.186 INFO 18671 --- [ main] c.z.e.async2.AsyncEventListenerExample : thread 'Thread[main,5,main]' handling 'MedicalRecordUpdatedEvent{id='foo'}' event
The async infrastructure is initialised with @EnableAsync and an asynchronous TaskExecutor.
I'm not sure what I am doing wrong. Could you help?
Thanks.
I'm using Spring Boot 1.4.2.M2, so Spring 4.3.0.RC1.
There was a regression in Spring Framework 4.3.0.RC1 that leads to that very issue you're having. If you use the SNAPSHOT, your project runs fine.
onTicketUpdatedEvent also runs in the main thread with Spring Framework 4.2.4.RELEASE, as follows.
But it runs in a SimpleAsyncTaskExecutor if AsyncConfigurer is not implemented.
@EnableAsync(proxyTargetClass = true)
@Component
@Slf4j
public class ExampleEventListener implements AsyncConfigurer {

    @Async
    @EventListener
    public void onTicketUpdatedEvent(TicketEvent ticketEvent) {
        log.debug("received ticket updated event");
    }

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setMaxPoolSize(100);
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return null;
    }
}
I resolved my issue by configuring the task-executor bean as follows.
#Bean(name = "threadPoolTaskExecutor")
public Executor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setMaxPoolSize(100);
executor.initialize();
return executor;
}
#Async("threadPoolTaskExecutor")
#EventListener
public void onTicketUpdatedEvent(TicketEvent ticketEvent) {
log.debug("received ticket updated event");
}
