Spring Integration Reactor configuration

I'm running an application that processes tasks using Spring Integration.
I'd like to make it process multiple tasks concurrently, but every attempt has failed so far.
My configuration is:
ReactorConfiguration.java
@Configuration
@EnableAutoConfiguration
public class ReactorConfiguration {

    @Bean
    Environment reactorEnv() {
        return new Environment();
    }

    @Bean
    Reactor createReactor(Environment env) {
        return Reactors.reactor()
                .env(env)
                .dispatcher(Environment.THREAD_POOL)
                .get();
    }
}
TaskProcessor.java
@MessagingGateway(reactorEnvironment = "reactorEnv")
public interface TaskProcessor {

    @Gateway(requestChannel = "routeTaskByType", replyChannel = "")
    Promise<Result> processTask(Task task);
}
IntegrationConfiguration.java (simplified)
@Bean
public IntegrationFlow routeFlow() {
    return IntegrationFlows.from(MessageChannels.executor("routeTaskByType", Executors.newFixedThreadPool(10)))
            .handle(Task.class, (payload, headers) -> {
                logger.info("Task submitted!" + payload);
                payload.setRunning(true);
                // try-catch omitted
                Thread.sleep(999999);
                return payload;
            })
            .route(/*...*/)
            .get();
}
My testing code can be simplified like this:
Task task1 = new Task();
Task task2 = new Task();
Promise<Result> resultPromise1 = taskProcessor.processTask(task1).flush();
Promise<Result> resultPromise2 = taskProcessor.processTask(task2).flush();
while (!task1.isRunning() || !task2.isRunning()) {
    logger.info("Task1: {}, Task2: {}", task1, task2);
    Thread.sleep(1000);
}
logger.info("Yes! your tasks are running in parallel!");
But unfortunately, the last log line never gets executed!
Any ideas?
Thanks a lot.

Well, I've reproduced it with just a simple Reactor test case:
@Test
public void testParallelPromises() throws InterruptedException {
    Environment environment = new Environment();
    final AtomicBoolean first = new AtomicBoolean(true);
    for (int i = 0; i < 10; i++) {
        final Promise<String> promise = Promises.task(environment, () -> {
            if (!first.getAndSet(false)) {
                try {
                    Thread.sleep(1000);
                }
                catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            return "foo";
        });
        String result = promise.await(10, TimeUnit.SECONDS);
        System.out.println(result);
        assertNotNull(result);
    }
}
(It is with Reactor-2.0.6).
The problem is caused by:
public static <T> Promise<T> task(Environment env, Supplier<T> supplier) {
    return task(env, env.getDefaultDispatcher(), supplier);
}
where the DefaultDispatcher is a RingBufferDispatcher, which extends SingleThreadDispatcher.
Since the @MessagingGateway is based on the request/reply scenario, we are waiting for the reply within that RingBufferDispatcher's thread. Since you don't return a reply there (Thread.sleep(999999);), we aren't able to accept the next event within the RingBuffer.
Your dispatcher(Environment.THREAD_POOL) doesn't help here because it doesn't affect the Environment. You should consider using the reactor.dispatchers.default = threadPoolExecutor property, something like this file: https://github.com/reactor/reactor/blob/2.0.x/reactor-net/src/test/resources/META-INF/reactor/reactor-environment.properties#L46.
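For reference, a minimal sketch of what such a META-INF/reactor/reactor-environment.properties could look like; the exact keys and sizes below are illustrative and should be checked against the linked file, they are not taken from your project:

reactor.dispatchers.default = threadPoolExecutor
reactor.dispatchers.threadPoolExecutor.type = threadPoolExecutor
# size/backlog values are illustrative; see the linked properties file for the shipped defaults
reactor.dispatchers.threadPoolExecutor.size = 0
reactor.dispatchers.threadPoolExecutor.backlog = 2048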
And yes: please upgrade to the latest Reactor.

Can anyone help me with this Spring Batch issue? (Unintended Spring Batch schedule)

The implemented function sends an LMS to the user at the alarm time.
A total of 4 alarms are sent per day (9:00, 13:00, 19:00, 21:00).
A log entry is written regardless of success or failure.
Nothing was recorded in the log, but when I looked at the batch data in the DB, I found unintended COMPLETED executions.
Issue:
The batch was successfully executed at 9:00 and 13:00 on the 18th.
But at 13:37, which is not even a scheduled time, it was executed (and FAILED).
Subsequently, it was executed at 13:38, 40, 42, and 44 minutes (all COMPLETED).
Q1. Why was it executed when it wasn't even the batch execution time?
Q2. I write a log entry whenever the batch executes and the SMS is sent, and the log was printed normally at 9 and 13 o'clock. But no log is saved for the non-scheduled runs (13:37, 38, 40, 42, 44).
The Spring Boot service and Tomcat service run on a single server; CPU and memory usage are normal.
Batch environment:
Spring Boot (2.2.6.RELEASE)
Spring Boot - Embedded Tomcat
===== Start Scheduler =====
@Component
public class DosageAlarmScheduler {

    public static final int MORNING_HOUR = 9;
    public static final int LUNCH_HOUR = 13;
    public static final int DINNER_HOUR = 19;
    public static final int BEFORE_SLEEP_HOUR = 21;

    // injected dependencies (constructor/@Autowired wiring not shown)
    private final JobLauncher jobLauncher;
    private final Job alarmJob;

    @Scheduled(cron = "0 0 */1 * * *") // every hour
    public void executeDosageAlarmJob() {
        LocalDateTime nowDateTime = LocalDateTime.now();
        try {
            if (isExecuteTime(nowDateTime)) {
                log.info("[Send LMS], {}", nowDateTime);
                EatFixCd eatFixCd = currentEatFixCd(nowDateTime);
                jobLauncher.run(
                        alarmJob,
                        new JobParametersBuilder()
                                .addString("currentDate", nowDateTime.toString())
                                .addString("eatFixCodeValue", eatFixCd.getCodeValue())
                                .toJobParameters()
                );
            } else {
                log.info("[Not Send LMS], {}", nowDateTime);
            }
        } catch (JobExecutionAlreadyRunningException e) {
            log.error("[JobExecutionAlreadyRunningException]", e);
        } catch (JobRestartException e) {
            log.error("[JobRestartException]", e);
        } catch (JobInstanceAlreadyCompleteException e) {
            log.error("[JobInstanceAlreadyCompleteException]", e);
        } catch (JobParametersInvalidException e) {
            log.error("[JobParametersInvalidException]", e);
        } catch (Exception e) {
            log.error("[Exception]", e);
        }
    }

    /* Start private methods */
    private boolean isExecuteTime(LocalDateTime nowDateTime) {
        return nowDateTime.getHour() == MORNING_HOUR
                || nowDateTime.getHour() == LUNCH_HOUR
                || nowDateTime.getHour() == DINNER_HOUR
                || nowDateTime.getHour() == BEFORE_SLEEP_HOUR;
    }

    private EatFixCd currentEatFixCd(LocalDateTime nowDateTime) {
        switch (nowDateTime.getHour()) {
            case MORNING_HOUR:
                return EatFixCd.MORNING;
            case LUNCH_HOUR:
                return EatFixCd.LUNCH;
            case DINNER_HOUR:
                return EatFixCd.DINNER;
            case BEFORE_SLEEP_HOUR:
                return EatFixCd.BEFORE_SLEEP;
            default:
                throw new RuntimeException("Not Dosage Time");
        }
    }
    /* End private methods */
}
===== End Scheduler =====
===== Start Job =====
@Configuration
public class DosageAlarmConfiguration {

    private final int chunkSize = 20;
    private final JobBuilderFactory jobBuilderFactory;
    private final StepBuilderFactory stepBuilderFactory;
    private final EntityManagerFactory entityManagerFactory;

    @Bean
    public Job dosageAlarmJob() {
        log.info("[dosageAlarmJob execute]");
        return jobBuilderFactory.get("dosageAlarmJob")
                .start(dosageAlarmStep(null, null)).build();
    }

    @Bean
    @JobScope
    public Step dosageAlarmStep(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Step execute]");
        return stepBuilderFactory.get("dosageAlarmStep")
                .<Object[], DosageReceiverInfoDto>chunk(chunkSize)
                .reader(dosageAlarmReader(currentDate, eatFixCodeValue))
                .processor(dosageAlarmProcessor(currentDate, eatFixCodeValue))
                .writer(dosageAlarmWriter(currentDate, eatFixCodeValue))
                .build();
    }

    @Bean
    @StepScope
    public JpaPagingItemReader<Object[]> dosageAlarmReader(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Reader execute : {}, {}]", currentDate, eatFixCodeValue);
        if (currentDate == null) {
            return null;
        } else {
            JpaPagingItemReader<Object[]> jpaPagingItemReader = new JpaPagingItemReader<>();
            jpaPagingItemReader.setName("dosageAlarmReader");
            jpaPagingItemReader.setEntityManagerFactory(entityManagerFactory);
            jpaPagingItemReader.setPageSize(chunkSize);
            jpaPagingItemReader.setQueryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate ");
            HashMap<String, Object> parameterValues = new HashMap<>();
            parameterValues.put("currentDate", LocalDateTime.parse(currentDate).toLocalDate());
            jpaPagingItemReader.setParameterValues(parameterValues);
            return jpaPagingItemReader;
        }
    }

    @Bean
    @StepScope
    public ItemProcessor<Object[], DosageReceiverInfoDto> dosageAlarmProcessor(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Processor execute : {}, {}]", currentDate, eatFixCodeValue);
        ...
        convert to DosageReceiverInfoDto
        ...
    }

    @Bean
    @StepScope
    public ItemWriter<DosageReceiverInfoDto> dosageAlarmWriter(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Writer execute : {}, {}]", currentDate, eatFixCodeValue);
        ...
        make List
        ...
        if (reqMessageDtoList != null) {
            sendMessages(reqMessageDtoList);
        } else {
            log.info("[reqMessageDtoList not Exist]");
        }
    }

    public SmsExternalSendResDto sendMessages(List<reqMessagesDto> reqMessageDtoList) {
        log.info("[receiveList] smsTypeCd : {}, contentTypeCd : {}, messages : {}", smsTypeCd.LMS, contentTypeCd.COMM, reqMessageDtoList);
        ...
        send Messages
    }
}
===== End Job =====
Thank you. I want to fix my problem, and I hope this question helps other people.

Spring Integration: how to unit test a poller advice

I'm trying to unit test an advice on the poller which blocks execution of the Mongo channel adapter until a certain condition is met (all messages from this batch are processed).
The flow looks as follow:
IntegrationFlows.from(MongoDb.reactiveInboundChannelAdapter(mongoDbFactory,
new Query().with(Sort.by(Sort.Direction.DESC, "modifiedDate")).limit(1))
.collectionName("metadata")
.entityClass(Metadata.class)
.expectSingleResult(true),
e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(pollingIntervalSeconds))
.advice(this.advices.waitUntilCompletedAdvice())))
.handle((p, h) -> {
this.advices.waitUntilCompletedAdvice().setWait(true);
return p;
})
.handle(doSomething())
.channel(Channels.DOCUMENT_HEADER.name())
.get();
And the following advice bean:
@Bean
public WaitUntilCompletedAdvice waitUntilCompletedAdvice() {
    DynamicPeriodicTrigger trigger = new DynamicPeriodicTrigger(Duration.ofSeconds(1));
    return new WaitUntilCompletedAdvice(trigger);
}
And the advice itself:
public class WaitUntilCompletedAdvice extends SimpleActiveIdleMessageSourceAdvice {

    AtomicBoolean wait = new AtomicBoolean(false);

    public WaitUntilCompletedAdvice(DynamicPeriodicTrigger trigger) {
        super(trigger);
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        if (getWait()) {
            return false;
        }
        return true;
    }

    public boolean getWait() {
        return wait.get();
    }

    public void setWait(boolean newWait) {
        if (getWait() == newWait) {
            return;
        }
        while (true) {
            if (wait.compareAndSet(!newWait, newWait)) {
                return;
            }
        }
    }
}
I'm using the following test for testing the flow:
@Test
public void testClaimPoollingAdapterFlow() throws Exception {
    // given
    ArgumentCaptor<Message<?>> captor = messageArgumentCaptor();
    CountDownLatch receiveLatch = new CountDownLatch(1);
    MessageHandler mockMessageHandler = mockMessageHandler(captor).handleNext(m -> receiveLatch.countDown());
    this.mockIntegrationContext.substituteMessageHandlerFor("retrieveDocumentHeader", mockMessageHandler);
    LocalDateTime modifiedDate = LocalDateTime.now();
    ProcessingMetadata data = Metadata.builder()
            .modifiedDate(modifiedDate)
            .build();
    assert !this.advices.waitUntilCompletedAdvice().getWait();
    // when
    itf.getInputChannel().send(new GenericMessage<>(Mono.just(data)));
    // then
    assertThat(receiveLatch.await(10, TimeUnit.SECONDS)).isTrue();
    verify(mockMessageHandler).handleMessage(any());
    assertThat(captor.getValue().getPayload()).isEqualTo(modifiedDate);
    assert this.advices.waitUntilCompletedAdvice().getWait();
}
This works fine, but when I send another message to the input channel, it still processes the message without respecting the advice.
Is this intended behaviour? If so, how can I verify with a unit test that the poller is really waiting for this advice?
itf.getInputChannel().send(new GenericMessage<>(Mono.just(data)));
That bypasses the poller and sends the message directly.
You can unit test that the advice has been configured by calling beforeReceive() directly from your test.
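For example, a minimal sketch of that direct check (constructing the advice standalone here rather than pulling it from your advices bean; the assertions use AssertJ, which your tests already use):

@Test
public void testAdviceHonorsWaitFlag() {
    WaitUntilCompletedAdvice advice =
            new WaitUntilCompletedAdvice(new DynamicPeriodicTrigger(Duration.ofSeconds(1)));

    advice.setWait(true);
    // beforeReceive() returning false means the poller skips the receive cycle
    assertThat(advice.beforeReceive(null)).isFalse();

    advice.setWait(false);
    assertThat(advice.beforeReceive(null)).isTrue();
}

Since the advice ignores its MessageSource argument, passing null is enough for this check.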
Or you can create a dummy test flow with the same advice:
IntegrationFlows.from(() -> "foo", e -> e.poller(...))
...
And verify that just one message is sent.
Example implementation:
@Test
public void testWaitingActivate() {
    // given
    this.advices.waitUntilCompletedAdvice().setWait(true);
    // when
    Message<ProcessingMetadata> receive = (Message<ProcessingMetadata>) testChannel.receive(3000);
    // then
    assertThat(receive).isNull();
}

@Test
public void testWaitingInactive() {
    // given
    this.advices.waitUntilCompletedAdvice().setWait(false);
    // when
    Message<ProcessingMetadata> receive = (Message<ProcessingMetadata>) testChannel.receive(3000);
    // then
    assertThat(receive).isNotNull();
}
@Configuration
@EnableIntegration
public static class Config {

    @Autowired
    private Advices advices;

    @Bean
    public PollableChannel testChannel() {
        return new QueueChannel();
    }

    @Bean
    public IntegrationFlow fakeFlow() {
        this.advices.waitUntilCompletedAdvice().setWait(true);
        return IntegrationFlows.from(() -> "foo", e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(1))
                .advice(this.advices.waitUntilCompletedAdvice()))).channel("testChannel").get();
    }
}

Usage of exceptionExpression in Spring Retry

According to the documentation, I can use something like this in exceptionExpression: @Retryable(exceptionExpression="message.contains('this can be retried')")
But I want to get the response body and check a message inside it (from RestClientResponseException), something similar to this: exceptionExpression = "getResponseBodyAsString().contains('important message')"
I tried it like that, but it doesn't work. So, is it possible to do something similar and check info from the response body?
Edit: Adding the whole @Retryable annotation parameters with Gary Russell's suggestion:
@Retryable(value = HttpClientErrorException.class, exceptionExpression = "#{#root instanceof T(org.springframework.web.client.HttpClientErrorException) AND responseBodyAsString.contains('important message')}")
I'm using the actual RestClientResponseException subclass that I'm catching, but it is still not triggering a retry.
With the current release, the expression incorrectly requires static template markers; they will not be needed in 1.3.
@Retryable(exceptionExpression = "#{responseBodyAsString.contains('foo')}")
However, you can't use this expression if there are include or exclude properties, so the expression should check the type:
@Retryable(exceptionExpression =
        "#{#root instanceof T(org.springframework.web.client.RestClientResponseException) "
                + "AND responseBodyAsString.contains('foo')}")
EDIT
@SpringBootApplication
@EnableRetry
public class So61488237Application {

    public static void main(String[] args) {
        SpringApplication.run(So61488237Application.class, args).close();
    }

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            try {
                foo.test(1, "foo.");
            }
            catch (Exception e) {
            }
        };
    }
}

@Component
class Foo {

    @Retryable(exceptionExpression =
            "#{#root instanceof T(org.springframework.web.client.RestClientException) "
                    + "AND responseBodyAsString.contains('foo')}")
    public void test(int val, String str) {
        System.out.println(val + ":" + str);
        throw new RestClientResponseException("foo", 500, "bar", new HttpHeaders(), "foo".getBytes(),
                StandardCharsets.UTF_8);
    }
}
1:foo.
1:foo.
1:foo.
I've implemented the following approach, which in my opinion is much more convenient.
@Retryable(value = WebClientException.class,
        exceptionExpression = RetryCheckerService.EXPRESSION,
        maxAttempts = 5,
        backoff = @Backoff(delay = 500))
public List<ResultDto> getSomeResource() {}
Here the RetryCheckerService encapsulates all needed logic.
@Service
public class RetryCheckerService {

    public static final String EXPRESSION = "#retryCheckerService.shouldRetry(#root)";

    public boolean shouldRetry(WebClientException ex) {
        if (ex instanceof WebClientResponseException responseException) {
            return responseException.getStatusCode().is5xxServerError()
                    || responseException.getStatusCode().equals(HttpStatus.NOT_FOUND);
        }
        if (ex instanceof WebClientRequestException requestException) {
            String message = requestException.getMessage();
            if (message == null) {
                return false;
            }
            return message.contains("HttpConnectionOverHTTP");
        }
        return false;
    }
}

How can I create many kafka topics during spring-boot application start up?

I have this configuration:
@Configuration
public class KafkaTopicConfig {

    private final TopicProperties topics;

    public KafkaTopicConfig(TopicProperties topics) {
        this.topics = topics;
    }

    @Bean
    public NewTopic newTopicImportCharge() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CHARGES.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportPayment() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_PAYMENTS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportCatalog() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CATALOGS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }
}
I can add 10 different topics to TopicProperties, and I don't want to create each similar bean manually. Does a way exist to create all the topics in spring-kafka, or only in plain Spring?
Use an admin client directly; you can get a pre-built properties map from Boot's KafkaAdmin.
@SpringBootApplication
public class So55336461Application {

    public static void main(String[] args) {
        SpringApplication.run(So55336461Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaAdmin kafkaAdmin) {
        return args -> {
            AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties());
            List<NewTopic> topics = new ArrayList<>();
            // build list
            admin.createTopics(topics).all().get();
        };
    }
}
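As a sketch of the "// build list" step above, the loop could be driven by your existing TopicProperties; the getTopics() accessor used here is hypothetical (your posted code only shows getTopicNameByType()):

@Bean
public ApplicationRunner runner(KafkaAdmin kafkaAdmin, TopicProperties topicProperties) {
    return args -> {
        try (AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
            List<NewTopic> topics = new ArrayList<>();
            // getTopics() is a hypothetical accessor returning all configured entries
            for (TopicProperties.Topic topic : topicProperties.getTopics()) {
                topics.add(new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor()));
            }
            admin.createTopics(topics).all().get();
        }
    };
}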
EDIT
To check if they already exist, or if the partitions need to be increased, the KafkaAdmin has this logic...
private void addTopicsIfNeeded(AdminClient adminClient, Collection<NewTopic> topics) {
    if (topics.size() > 0) {
        Map<String, NewTopic> topicNameToTopic = new HashMap<>();
        topics.forEach(t -> topicNameToTopic.compute(t.name(), (k, v) -> t));
        DescribeTopicsResult topicInfo = adminClient
                .describeTopics(topics.stream()
                        .map(NewTopic::name)
                        .collect(Collectors.toList()));
        List<NewTopic> topicsToAdd = new ArrayList<>();
        Map<String, NewPartitions> topicsToModify = checkPartitions(topicNameToTopic, topicInfo, topicsToAdd);
        if (topicsToAdd.size() > 0) {
            addTopics(adminClient, topicsToAdd);
        }
        if (topicsToModify.size() > 0) {
            modifyTopics(adminClient, topicsToModify);
        }
    }
}

private Map<String, NewPartitions> checkPartitions(Map<String, NewTopic> topicNameToTopic,
        DescribeTopicsResult topicInfo, List<NewTopic> topicsToAdd) {

    Map<String, NewPartitions> topicsToModify = new HashMap<>();
    topicInfo.values().forEach((n, f) -> {
        NewTopic topic = topicNameToTopic.get(n);
        try {
            TopicDescription topicDescription = f.get(this.operationTimeout, TimeUnit.SECONDS);
            if (topic.numPartitions() < topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                            "Topic '%s' exists but has a different partition count: %d not %d", n,
                            topicDescription.partitions().size(), topic.numPartitions()));
                }
            }
            else if (topic.numPartitions() > topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                            "Topic '%s' exists but has a different partition count: %d not %d, increasing "
                                    + "if the broker supports it", n,
                            topicDescription.partitions().size(), topic.numPartitions()));
                }
                topicsToModify.put(n, NewPartitions.increaseTo(topic.numPartitions()));
            }
        }
        catch (@SuppressWarnings("unused") InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        catch (TimeoutException e) {
            throw new KafkaException("Timed out waiting to get existing topics", e);
        }
        catch (@SuppressWarnings("unused") ExecutionException e) {
            topicsToAdd.add(topic);
        }
    });
    return topicsToModify;
}
Currently we can just use KafkaAdmin.NewTopics (see the Spring for Apache Kafka documentation).
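A minimal sketch of that approach (the topic names and settings below are illustrative; KafkaAdmin.NewTopics requires spring-kafka 2.7 or later, and TopicBuilder is the spring-kafka helper for building NewTopic instances):

@Bean
public KafkaAdmin.NewTopics importTopics() {
    return new KafkaAdmin.NewTopics(
            TopicBuilder.name("import-charges").partitions(3).replicas(1).build(),
            TopicBuilder.name("import-payments").partitions(3).replicas(1).build(),
            TopicBuilder.name("import-catalogs").partitions(3).replicas(1).build());
}

All topics declared in this single bean are created (or their partitions adjusted) by the same KafkaAdmin on startup.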

Subscriber's onNext does not contain the complete item

We are working with Project Reactor and have a huge problem right now. This is how we produce (publish) our data:
public Flux<String> getAllFlux() {
    return Flux.<String>create(sink -> {
        new Thread() {
            public void run() {
                Iterator<Cache.Entry<String, MyObject>> iterator = getAllIterator();
                ObjectMapper mapper = new ObjectMapper();
                while (iterator.hasNext()) {
                    try {
                        sink.next(mapper.writeValueAsString(iterator.next().getValue()));
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                sink.complete();
            }
        }.start();
    });
}
As you can see, we take data from an iterator and publish each item in that iterator as a JSON string. Our subscriber does the following:
flux.subscribe(new Subscriber<String>() {

    private Subscription s;
    int amount = 1; // the amount of received flux payload at a time
    int onNextAmount;
    ObjectMapper mapper = new ObjectMapper();

    @Override
    public void onSubscribe(Subscription s) {
        System.out.println("subscribe");
        this.s = s;
        this.s.request(amount);
    }

    @Override
    public void onNext(String item) {
        MyObject myObject = null;
        try {
            System.out.println(item);
            myObject = mapper.readValue(item, MyObject.class);
            System.out.println(myObject.toString());
        } catch (IOException e) {
            System.out.println(item);
            System.out.println("failed: " + e.getLocalizedMessage());
        }
        onNextAmount++;
        if (onNextAmount % amount == 0) {
            this.s.request(amount);
        }
    }

    @Override
    public void onError(Throwable t) {
        System.out.println(t.getLocalizedMessage());
    }

    @Override
    public void onComplete() {
        System.out.println("completed");
    }
});
As you can see, we simply print the String item we receive and parse it into an object with the Jackson mapper. The problem is that for most of our items everything works fine:
{"itemId": "someId", "itemDesc": "some description"}
But for some items the String is cut off, for example like this:
{"itemId": "some
And the next item after that would be:
Id", "itemDesc": "some description"}
There is no pattern to those cuts. It is completely random and different every time we run the code. Of course Jackson then reports an "Unexpected end of input" error with that behaviour.
So what is causing such a behaviour and how can we solve it?
Solution:
Send the Object inside the flux instead of the String:
public Flux<ItemIgnite> getAllFlux() {
    return Flux.create(sink -> {
        new Thread() {
            public void run() {
                Iterator<Cache.Entry<String, ItemIgnite>> iterator = getAllIterator();
                while (iterator.hasNext()) {
                    sink.next(iterator.next().getValue());
                }
            }
        }.start();
    });
}
and use the following produces type:
@RequestMapping(value = "/allFlux", method = RequestMethod.GET, produces = "application/stream+json")
The key here is to use stream+json and not only json.
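For completeness, a minimal sketch of the controller method that mapping could sit on (the ItemController and ItemService names and wiring are assumed for illustration, not taken from the question):

@RestController
public class ItemController {

    private final ItemService itemService; // hypothetical service exposing the getAllFlux() shown above

    public ItemController(ItemService itemService) {
        this.itemService = itemService;
    }

    @RequestMapping(value = "/allFlux", method = RequestMethod.GET, produces = "application/stream+json")
    public Flux<ItemIgnite> allFlux() {
        return itemService.getAllFlux();
    }
}

With application/stream+json, WebFlux serializes each emitted element as its own JSON document instead of concatenating raw String chunks, which is what caused the cut-off items.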
