How can I create many Kafka topics during Spring Boot application startup?

I have this configuration:
@Configuration
public class KafkaTopicConfig {
private final TopicProperties topics;
public KafkaTopicConfig(TopicProperties topics) {
this.topics = topics;
}
@Bean
public NewTopic newTopicImportCharge() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CHARGES.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
@Bean
public NewTopic newTopicImportPayment() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_PAYMENTS.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
@Bean
public NewTopic newTopicImportCatalog() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CATALOGS.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
}
I can add 10 different topics to TopicProperties, and I don't want to create each similar bean manually. Is there a way to create all the topics with spring-kafka, or with plain Spring?

Use an admin client directly; you can get a pre-built properties map from Boot's KafkaAdmin.
@SpringBootApplication
public class So55336461Application {
public static void main(String[] args) {
SpringApplication.run(So55336461Application.class, args);
}
@Bean
public ApplicationRunner runner(KafkaAdmin kafkaAdmin) {
return args -> {
AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties());
List<NewTopic> topics = new ArrayList<>();
// build list
admin.createTopics(topics).all().get();
};
}
}
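For reference, the "// build list" part could be filled in from the same TopicProperties used in the question. A rough, untested sketch (getTopics() is a hypothetical accessor returning every configured entry; adjust it to whatever your properties class actually exposes):
@Bean
public ApplicationRunner topicRunner(KafkaAdmin kafkaAdmin, TopicProperties topicProperties) {
    return args -> {
        // map every configured entry to a NewTopic (getTopics() is assumed, not part of the question)
        List<NewTopic> topics = topicProperties.getTopics().stream()
                .map(t -> new NewTopic(t.getTopicName(), t.getNumPartitions(), t.getReplicationFactor()))
                .collect(Collectors.toList());
        // try-with-resources so the AdminClient is closed once the topics are created
        try (AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
            admin.createTopics(topics).all().get();
        }
    };
}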
EDIT
To check if they already exist, or if the partitions need to be increased, the KafkaAdmin has this logic...
private void addTopicsIfNeeded(AdminClient adminClient, Collection<NewTopic> topics) {
if (topics.size() > 0) {
Map<String, NewTopic> topicNameToTopic = new HashMap<>();
topics.forEach(t -> topicNameToTopic.compute(t.name(), (k, v) -> t));
DescribeTopicsResult topicInfo = adminClient
.describeTopics(topics.stream()
.map(NewTopic::name)
.collect(Collectors.toList()));
List<NewTopic> topicsToAdd = new ArrayList<>();
Map<String, NewPartitions> topicsToModify = checkPartitions(topicNameToTopic, topicInfo, topicsToAdd);
if (topicsToAdd.size() > 0) {
addTopics(adminClient, topicsToAdd);
}
if (topicsToModify.size() > 0) {
modifyTopics(adminClient, topicsToModify);
}
}
}
private Map<String, NewPartitions> checkPartitions(Map<String, NewTopic> topicNameToTopic,
DescribeTopicsResult topicInfo, List<NewTopic> topicsToAdd) {
Map<String, NewPartitions> topicsToModify = new HashMap<>();
topicInfo.values().forEach((n, f) -> {
NewTopic topic = topicNameToTopic.get(n);
try {
TopicDescription topicDescription = f.get(this.operationTimeout, TimeUnit.SECONDS);
if (topic.numPartitions() < topicDescription.partitions().size()) {
if (LOGGER.isInfoEnabled()) {
LOGGER.info(String.format(
"Topic '%s' exists but has a different partition count: %d not %d", n,
topicDescription.partitions().size(), topic.numPartitions()));
}
}
else if (topic.numPartitions() > topicDescription.partitions().size()) {
if (LOGGER.isInfoEnabled()) {
LOGGER.info(String.format(
"Topic '%s' exists but has a different partition count: %d not %d, increasing "
+ "if the broker supports it", n,
topicDescription.partitions().size(), topic.numPartitions()));
}
topicsToModify.put(n, NewPartitions.increaseTo(topic.numPartitions()));
}
}
catch (@SuppressWarnings("unused") InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (TimeoutException e) {
throw new KafkaException("Timed out waiting to get existing topics", e);
}
catch (@SuppressWarnings("unused") ExecutionException e) {
topicsToAdd.add(topic);
}
});
return topicsToModify;
}

Nowadays we can just use KafkaAdmin.NewTopics (see the Spring documentation).
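With newer spring-kafka versions (2.7 and later), all of the topics can be declared through that single wrapper bean. A minimal sketch, again assuming a hypothetical getTopics() accessor on the TopicProperties class from the question:
@Bean
public KafkaAdmin.NewTopics allTopics(TopicProperties topicProperties) {
    // one bean wrapping every NewTopic instead of one @Bean method per topic
    return new KafkaAdmin.NewTopics(
            topicProperties.getTopics().stream()
                    .map(t -> TopicBuilder.name(t.getTopicName())
                            .partitions(t.getNumPartitions())
                            .replicas(t.getReplicationFactor())
                            .build())
                    .toArray(NewTopic[]::new));
}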

Related

Can anyone help me with a Spring Batch issue? (Unintended Spring Batch schedule)

The implemented function sends an LMS message to the user at the alarm time.
A total of 4 alarms are sent each day (9:00, 13:00, 19:00, 21:00).
A log entry is written regardless of success or failure.
These extra runs were not recorded in the log, but when I looked at the batch metadata in the DB, I found unintended COMPLETED executions.
Issue:
The batch executed successfully at 9:00 and 13:00 on the 18th.
But it also executed at 13:37, which is not a scheduled time (and FAILED).
Subsequently it executed at 13:38, 13:40, 13:42 and 13:44 (all COMPLETED).
Q1. Why was it executed when it wasn't even the batch execution time?
Q2. I write a log entry whenever the batch runs and an SMS is sent. The log was printed normally at 9:00 and 13:00, but nothing was logged for the unscheduled runs (13:37, 38, 40, 42, 44).
The Spring Boot service (with embedded Tomcat) was checked as a single service on one server; CPU and memory usage are normal.
Environment: Spring Boot 2.2.6.RELEASE with embedded Tomcat.
===== Start Scheduler =====
@Component
public class DosageAlarmScheduler {
public static final int MORNING_HOUR = 9;
public static final int LUNCH_HOUR = 13;
public static final int DINNER_HOUR = 19;
public static final int BEFORE_SLEEP_HOUR = 21;
@Scheduled(cron = "0 0 */1 * * *") // every hour
public void executeDosageAlarmJob() {
LocalDateTime nowDateTime = LocalDateTime.now();
try {
if(isExecuteTime(nowDateTime)) {
log.info("[Send LMS], {}", nowDateTime);
EatFixCd eatFixCd = currentEatFixCd(nowDateTime);
jobLauncher.run(
alarmJob,
new JobParametersBuilder()
.addString("currentDate", nowDateTime.toString())
.addString("eatFixCodeValue", eatFixCd.getCodeValue())
.toJobParameters()
);
} else {
log.info("[Not Send LMS], {}", nowDateTime);
}
} catch (JobExecutionAlreadyRunningException e) {
log.error("[JobExecutionAlreadyRunningException]", e);
} catch (JobRestartException e) {
log.error("[JobRestartException]", e);
} catch (JobInstanceAlreadyCompleteException e) {
log.error("[JobInstanceAlreadyCompleteException]", e);
} catch (JobParametersInvalidException e) {
log.error("[JobParametersInvalidException]", e);
} catch(Exception e) {
log.error("[Exception]", e);
}
}
/* Start private method */
private boolean isExecuteTime(LocalDateTime nowDateTime) {
return nowDateTime.getHour() == MORNING_HOUR
|| nowDateTime.getHour() == LUNCH_HOUR
|| nowDateTime.getHour() == DINNER_HOUR
|| nowDateTime.getHour() == BEFORE_SLEEP_HOUR;
}
private EatFixCd currentEatFixCd(LocalDateTime nowDateTime) {
switch(nowDateTime.getHour()) {
case MORNING_HOUR:
return EatFixCd.MORNING;
case LUNCH_HOUR:
return EatFixCd.LUNCH;
case DINNER_HOUR:
return EatFixCd.DINNER;
case BEFORE_SLEEP_HOUR:
return EatFixCd.BEFORE_SLEEP;
default:
throw new RuntimeException("Not Dosage Time");
}
}
/* End private method */
}
===== End Scheduler =====
===== Start Job =====
@Configuration
public class DosageAlarmConfiguration {
private final int chunkSize = 20;
private final JobBuilderFactory jobBuilderFactory;
private final StepBuilderFactory stepBuilderFactory;
private final EntityManagerFactory entityManagerFactory;
@Bean
public Job dosageAlarmJob() {
log.info("[dosageAlarmJob excute]");
return jobBuilderFactory.get("dosageAlarmJob")
.start(dosageAlarmStep(null, null)).build();
}
@Bean
@JobScope
public Step dosageAlarmStep(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Step excute]");
return stepBuilderFactory.get("dosageAlarmStep")
.<Object[], DosageReceiverInfoDto>chunk(chunkSize)
.reader(dosageAlarmReader(currentDate, eatFixCodeValue))
.processor(dosageAlarmProcessor(currentDate, eatFixCodeValue))
.writer(dosageAlarmWriter(currentDate, eatFixCodeValue))
.build();
}
@Bean
@StepScope
public JpaPagingItemReader<Object[]> dosageAlarmReader(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Reader excute : {}, {}]", currentDate, eatFixCodeValue);
if(currentDate == null) {
return null;
} else {
JpaPagingItemReader<Object[]> jpaPagingItemReader = new JpaPagingItemReader<>();
jpaPagingItemReader.setName("dosageAlarmReader");
jpaPagingItemReader.setEntityManagerFactory(entityManagerFactory);
jpaPagingItemReader.setPageSize(chunkSize);
jpaPagingItemReader.setQueryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate ");
HashMap<String, Object> parameterValues = new HashMap<>();
parameterValues.put("currentDate", LocalDateTime.parse(currentDate).toLocalDate());
jpaPagingItemReader.setParameterValues(parameterValues);
return jpaPagingItemReader;
}
}
@Bean
@StepScope
public ItemProcessor<Object[], DosageReceiverInfoDto> dosageAlarmProcessor(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Processor excute : {}, {}]", currentDate, eatFixCodeValue);
...
convert to DosageReceiverInfoDto
...
}
@Bean
@StepScope
public ItemWriter<DosageReceiverInfoDto> dosageAlarmWriter(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Writer excute : {}, {}]", currentDate, eatFixCodeValue);
...
make List
...
if(reqMessageDtoList != null) {
sendMessages(reqMessageDtoList);
} else {
log.info("[reqMessageDtoList not Exist]");
}
}
public SmsExternalSendResDto sendMessages(List<reqMessagesDto> reqMessageDtoList) {
log.info("[receiveList] smsTypeCd : {}, contentTypeCd : {}, messages : {}", smsTypeCd.LMS, contentTypeCd.COMM, reqMessageDtoList);
...
send Messages
}
}
===== End Job =====
Thank you. I want to fix my problem, and I hope this question helps other people too.

How can I use @Autowired in a Runnable with Spring Boot?

I have a few MongoTemplate and repository beans, and I need to call them using @Autowired in my Runnable class, which is executed by an executor class using multithreading. The problem is that when I run the application, the @Autowired MongoTemplate and repositories throw a NullPointerException (they are null).
Executor class:
@Component
public class MessageConsumer implements ConsumerSeekAware {
@Autowired
AlarmDataRepository alarmDataRepository;
int assignableCores = ((Runtime.getRuntime().availableProcessors()));
ExecutorService executor = Executors.newFixedThreadPool(
assignableCores > 1 ? assignableCores : 1
);
int counter = 0;
List<String> uniqueRecords = new ArrayList<String>();
@KafkaListener(topics = "teltonikaTest", groupId = "xyz")
public void processMessages(@Payload List<String> payload, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions, @Header(KafkaHeaders.OFFSET) List<Long> offsets) throws UnsupportedEncodingException, DecodeException {
System.out.println("assignable resources are: " + assignableCores);
log.info("Batch Size is: {}", payload.size());
if(counter==0){
log.info("Teletonica Packets Received!");
}
for (int i = 0; i < payload.size(); i++) {
log.info("processing message='{}' with partition off-set='{}'", payload.get(i), partitions.get(i) + " _" + offsets.get(i));
}
uniqueRecords = payload.stream().distinct().collect(Collectors.toList());
Runnable worker = new TeltonikaWorkerThread(uniqueRecords);
executor.execute(worker);
counter++;
}
}
public class TeltonikaWorkerThread implements Runnable{
List<String> records;
List<CurrentDevice> currentDevices = new ArrayList<>();
@Autowired
CurrentDeviceRepository currentDeviceRepository;
@Autowired
MongoTemplate mongoTemplate;
public TeltonikaWorkerThread(List<String> records) {
this.records = records;
}
public void run() {
try {
processMessage();
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
} catch (DecodeException e) {
e.printStackTrace();
}
}
public void processMessage() throws UnsupportedEncodingException,DecodeException {
for(Object record : records){
if(!"0".equals(record)){
try{
int IMEILength = record.toString().indexOf("FF");
String IMEI = record.toString().substring(0,IMEILength);
}
catch (Exception e){
e.printStackTrace();
}
}
}
}
}
If I understand correctly, your problem is about multiple beans and Spring doesn't know which one should be injected. There are several options here.
For example, you can use the @Qualifier annotation based on the bean name, or the @Primary annotation.
If your problem is something else, please add an example to your question.
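As a rough illustration of that suggestion (all bean names and database names below are hypothetical, not taken from the question):
@Configuration
public class MongoConfig {
    // used whenever no qualifier is given
    @Bean
    @Primary
    public MongoTemplate mainMongoTemplate(MongoClient client) {
        return new MongoTemplate(client, "mainDb");
    }

    // selected explicitly by name
    @Bean("archiveMongoTemplate")
    public MongoTemplate archiveMongoTemplate(MongoClient client) {
        return new MongoTemplate(client, "archiveDb");
    }
}

// at the injection point:
@Autowired
@Qualifier("archiveMongoTemplate")
private MongoTemplate mongoTemplate;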

How to use Netty's channel pool map as a ConnectorProvider for a Jax RS client

I have wasted several hours trying to solve an issue with the use of Netty's channel pool map and a JAX-RS client.
I used Jersey's own Netty connector as an inspiration, but exchanged Netty's channel for Netty's channel pool map.
https://jersey.github.io/apidocs/2.27/jersey/org/glassfish/jersey/netty/connector/NettyConnectorProvider.html
My problem is that I have references that I need inside my custom SimpleChannelInboundHandler. However, by the design of Netty's channel pool map, I cannot pass the references through my custom ChannelPoolHandler, because once the pool map has created a pool, the constructor of the channel pool handler never runs again.
This is the method that acquires a pool and checks out a channel to make an HTTP request.
@Override
public Future<?> apply(ClientRequest request, AsyncConnectorCallback callback) {
final CompletableFuture<Object> completableFuture = new CompletableFuture<>();
try{
HttpRequest httpRequest = buildHttpRequest(request);
// guard against prematurely closed channel
final GenericFutureListener<io.netty.util.concurrent.Future<? super Void>> closeListener =
future -> {
if (!completableFuture.isDone()) {
completableFuture.completeExceptionally(new IOException("Channel closed."));
}
};
try {
ClientRequestDTO clientRequestDTO = new ClientRequestDTO(NettyChannelPoolConnector.this, request, completableFuture, callback);
dtoMap.putIfAbsent(request.getUri(), clientRequestDTO);
// Retrieves a channel pool for the given host
FixedChannelPool pool = this.poolMap.get(clientRequestDTO);
// Acquire a new channel from the pool
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) futureWrite -> {
//Succeeded with acquiring a channel
if (futureWrite.isSuccess()) {
Channel channel = futureWrite.getNow();
channel.closeFuture().addListener(closeListener);
try {
if(request.hasEntity()) {
channel.writeAndFlush(httpRequest);
final JerseyChunkedInput jerseyChunkedInput = new JerseyChunkedInput(channel);
request.setStreamProvider(contentLength -> jerseyChunkedInput);
if(HttpUtil.isTransferEncodingChunked(httpRequest)) {
channel.write(jerseyChunkedInput);
} else {
channel.write(jerseyChunkedInput);
}
executorService.execute(() -> {
channel.closeFuture().removeListener(closeListener);
try {
request.writeEntity();
} catch (IOException ex) {
callback.failure(ex);
completableFuture.completeExceptionally(ex);
}
});
channel.flush();
} else {
channel.closeFuture().removeListener(closeListener);
channel.writeAndFlush(httpRequest);
}
} catch (Exception ex) {
System.err.println("Failed to sync and flush http request" + ex.getLocalizedMessage());
}
pool.release(channel);
}
});
} catch (NullPointerException ex) {
System.err.println("Failed to acquire socket from pool " + ex.getLocalizedMessage());
}
} catch (Exception ex) {
completableFuture.completeExceptionally(ex);
return completableFuture;
}
return completableFuture;
}
This is my ChannelPoolHandler
public class SimpleChannelPoolHandler implements ChannelPoolHandler {
private ClientRequestDTO clientRequestDTO;
private boolean ssl;
private URI uri;
private int port;
SimpleChannelPoolHandler(URI uri) {
this.uri = uri;
if(uri != null) {
this.port = uri.getPort() != -1 ? uri.getPort() : "https".equals(uri.getScheme()) ? 443 : 80;
ssl = "https".equalsIgnoreCase(uri.getScheme());
}
}
@Override
public void channelReleased(Channel ch) throws Exception {
System.out.println("Channel released: " + ch.toString());
}
@Override
public void channelAcquired(Channel ch) throws Exception {
System.out.println("Channel acquired: " + ch.toString());
}
@Override
public void channelCreated(Channel ch) throws Exception {
System.out.println("Channel created: " + ch.toString());
int readTimeout = Integer.parseInt(ApplicationEnvironment.getInstance().get("READ_TIMEOUT"));
SocketChannelConfig channelConfig = (SocketChannelConfig) ch.config();
channelConfig.setConnectTimeoutMillis(2000);
ChannelPipeline channelPipeline = ch.pipeline();
if(ssl) {
SslContext sslContext = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
channelPipeline.addLast("ssl", sslContext.newHandler(ch.alloc(), uri.getHost(), this.port));
}
channelPipeline.addLast("client codec", new HttpClientCodec());
channelPipeline.addLast("chunked content writer",new ChunkedWriteHandler());
channelPipeline.addLast("content decompressor", new HttpContentDecompressor());
channelPipeline.addLast("read timeout", new ReadTimeoutHandler(readTimeout, TimeUnit.MILLISECONDS));
channelPipeline.addLast("business logic", new JerseyNettyClientHandler(this.uri));
}
}
And this is my SimpleInboundHandler
public class JerseyNettyClientHandler extends SimpleChannelInboundHandler<HttpObject> {
private final NettyChannelPoolConnector nettyChannelPoolConnector;
private final LinkedBlockingDeque<InputStream> isList = new LinkedBlockingDeque<>();
private final AsyncConnectorCallback asyncConnectorCallback;
private final ClientRequest jerseyRequest;
private final CompletableFuture future;
public JerseyNettyClientHandler(ClientRequestDTO clientRequestDTO) {
this.nettyChannelPoolConnector = clientRequestDTO.getNettyChannelPoolConnector();
ClientRequestDTO cdto = clientRequestDTO.getNettyChannelPoolConnector().getDtoMap().get(clientRequestDTO.getClientRequest());
this.asyncConnectorCallback = cdto.getCallback();
this.jerseyRequest = cdto.getClientRequest();
this.future = cdto.getFuture();
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if(msg instanceof HttpResponse) {
final HttpResponse httpResponse = (HttpResponse) msg;
final ClientResponse response = new ClientResponse(new Response.StatusType() {
@Override
public int getStatusCode() {
return httpResponse.status().code();
}
@Override
public Response.Status.Family getFamily() {
return Response.Status.Family.familyOf(httpResponse.status().code());
}
@Override
public String getReasonPhrase() {
return httpResponse.status().reasonPhrase();
}
}, jerseyRequest);
for (Map.Entry<String, String> entry : httpResponse.headers().entries()) {
response.getHeaders().add(entry.getKey(), entry.getValue());
}
if((httpResponse.headers().contains(HttpHeaderNames.CONTENT_LENGTH) && HttpUtil.getContentLength(httpResponse) > 0) || HttpUtil.isTransferEncodingChunked(httpResponse)) {
ctx.channel().closeFuture().addListener(future -> isList.add(NettyInputStream.END_OF_INPUT_ERROR));
response.setEntityStream(new NettyInputStream(isList));
} else {
response.setEntityStream(new InputStream() {
@Override
public int read() {
return -1;
}
});
}
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> {
asyncConnectorCallback.response(response);
future.complete(response);
});
}
}
if(msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
ByteBuf byteContent = content.content();
if(byteContent.isReadable()) {
byte[] bytes = new byte[byteContent.readableBytes()];
byteContent.getBytes(byteContent.readerIndex(), bytes);
isList.add(new ByteArrayInputStream(bytes));
}
}
if(msg instanceof LastHttpContent) {
isList.add(NettyInputStream.END_OF_INPUT);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> asyncConnectorCallback.failure(cause));
}
future.completeExceptionally(cause);
isList.add(NettyInputStream.END_OF_INPUT_ERROR);
}
}
The references needed to be passed to the SimpleChannelInboundHandler is what is packed into the ClientRequestDTO as seen in the first code block.
I am not sure, as this is untested code, but it could be achieved with something like the following:
SimpleChannelPool sPool = poolMap.get(Req.getAddress());
Future<Channel> f = sPool.acquire();
f.get().pipeline().addLast("inbound", new NettyClientInBoundHandler(Req, jbContext, ReportData));
f.addListener(new NettyClientFutureListener(this.Req, sPool));
where Req, jbContext and ReportData could be the input data for the InboundHandler.
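Transferring that idea to the code in the question, the per-request handler could be added (or replaced) right after the channel is acquired, so the ClientRequestDTO is available without going through the ChannelPoolHandler. A rough, untested sketch using the names from the question:
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) acquired -> {
    if (acquired.isSuccess()) {
        Channel channel = acquired.getNow();
        // drop any handler left over from a previous request on this pooled channel
        if (channel.pipeline().get("business logic") != null) {
            channel.pipeline().remove("business logic");
        }
        // the handler can now be built with the per-request references
        channel.pipeline().addLast("business logic", new JerseyNettyClientHandler(clientRequestDTO));
        // ... write the request as before, then pool.release(channel)
    }
});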

Subscriber's onNext does not contain the complete item

We are working with Project Reactor and are having a huge problem right now. This is how we produce (publish) our data:
public Flux<String> getAllFlux() {
return Flux.<String>create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, MyObject>> iterator = getAllIterator();
ObjectMapper mapper = new ObjectMapper();
while(iterator.hasNext()) {
try {
sink.next(mapper.writeValueAsString(iterator.next().getValue()));
} catch (IOException e) {
e.printStackTrace();
}
}
sink.complete();
}
} .start();
});
}
As you can see, we take data from an iterator and publish each item of that iterator as a JSON string. Our subscriber does the following:
flux.subscribe(new Subscriber<String>() {
private Subscription s;
int amount = 1; // the amount of received flux payload at a time
int onNextAmount;
String completeItem="";
ObjectMapper mapper = new ObjectMapper();
@Override
public void onSubscribe(Subscription s) {
System.out.println("subscribe");
this.s = s;
this.s.request(amount);
}
@Override
public void onNext(String item) {
MyObject myObject = null;
try {
System.out.println(item);
myObject = mapper.readValue(item, MyObject.class);
System.out.println(myObject.toString());
} catch (IOException e) {
System.out.println(item);
System.out.println("failed: " + e.getLocalizedMessage());
}
onNextAmount++;
if (onNextAmount % amount == 0) {
this.s.request(amount);
}
}
@Override
public void onError(Throwable t) {
System.out.println(t.getLocalizedMessage());
}
@Override
public void onComplete() {
System.out.println("completed");
}
});
As you can see, we simply print the String item we receive and parse it into an object using Jackson. The problem we have now is that for most of our items everything works fine:
{"itemId": "someId", "itemDesc": "some description"}
But for some items the String is cut off, like this for example:
{"itemId": "some
And the next item after that would be
"Id", "itemDesc": "some description"}
There is no pattern to those cuts. It is completely random and different every time we run the code. Of course Jackson throws an "Unexpected end of input" error with this behaviour.
So what is causing such a behaviour and how can we solve it?
Solution:
Send the Object inside the flux instead of the String:
public Flux<ItemIgnite> getAllFlux() {
return Flux.create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, ItemIgnite>> iterator = getAllIterator();
while(iterator.hasNext()) {
sink.next(iterator.next().getValue());
}
}
} .start();
});
}
and use the following produces type:
@RequestMapping(value="/allFlux", method=RequestMethod.GET, produces="application/stream+json")
The key here is to use stream+json and not only json.
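A minimal controller sketch around that mapping (the controller, service and method names are assumptions, not taken from the question):
@RestController
public class ItemStreamController {

    private final ItemService itemService; // hypothetical service exposing getAllFlux()

    public ItemStreamController(ItemService itemService) {
        this.itemService = itemService;
    }

    @RequestMapping(value = "/allFlux", method = RequestMethod.GET, produces = "application/stream+json")
    public Flux<ItemIgnite> allFlux() {
        // each emitted ItemIgnite is serialized as its own JSON document in the stream
        return itemService.getAllFlux();
    }
}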

Spring Integration Reactor configuration

I'm running an application that processes tasks using Spring Integration.
I'd like to make it process multiple tasks concurrently, but every attempt has failed so far.
My configuration is:
ReactorConfiguration.java
@Configuration
@EnableAutoConfiguration
public class ReactorConfiguration {
@Bean
Environment reactorEnv() {
return new Environment();
}
@Bean
Reactor createReactor(Environment env) {
return Reactors.reactor()
.env(env)
.dispatcher(Environment.THREAD_POOL)
.get();
}
}
TaskProcessor.java
@MessagingGateway(reactorEnvironment = "reactorEnv")
public interface TaskProcessor {
@Gateway(requestChannel = "routeTaskByType", replyChannel = "")
Promise<Result> processTask(Task task);
}
IntegrationConfiguration.java (simplified)
@Bean
public IntegrationFlow routeFlow() {
return IntegrationFlows.from(MessageChannels.executor("routeTaskByType", Executors.newFixedThreadPool(10)))
.handle(Task.class, (payload, headers) -> {
logger.info("Task submitted!" + payload);
payload.setRunning(true);
//Try-catch
Thread.sleep(999999);
return payload;
})
.route(/*...*/)
.get();
}
My testing code can be simplified like this:
Task task1 = new Task();
Task task2 = new Task();
Promise<Result> resultPromise1 = taskProcessor.processTask(task1).flush();
Promise<Result> resultPromise2 = taskProcessor.processTask(task2).flush();
while( !task1.isRunning() || !task2.isRunning() ){
logger.info("Task1: {}, Task2: {}", task1, task2);
Thread.sleep(1000);
}
logger.info("Yes! your tasks are running in parallel!");
But unfortunately, the last log line will never get executed!
Any ideas?
Thanks a lot
Well, I've reproduced it with a simple Reactor test case:
@Test
public void testParallelPromises() throws InterruptedException {
Environment environment = new Environment();
final AtomicBoolean first = new AtomicBoolean(true);
for (int i = 0; i < 10; i++) {
final Promise<String> promise = Promises.task(environment, () -> {
if (!first.getAndSet(false)) {
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
return "foo";
}
);
String result = promise.await(10, TimeUnit.SECONDS);
System.out.println(result);
assertNotNull(result);
}
}
(It is with Reactor-2.0.6).
The problem is because of:
public static <T> Promise<T> task(Environment env, Supplier<T> supplier) {
return task(env, env.getDefaultDispatcher(), supplier);
}
where the default dispatcher is a RingBufferDispatcher, which extends SingleThreadDispatcher.
Since the @MessagingGateway is based on a request/reply scenario, we wait for the reply on that RingBufferDispatcher's thread. Since you don't return a reply there (Thread.sleep(999999);), we aren't able to accept the next event from the RingBuffer.
Your dispatcher(Environment.THREAD_POOL) doesn't help here because it doesn't affect the Environment. You should consider using the reactor.dispatchers.default = threadPoolExecutor property. Something like this file: https://github.com/reactor/reactor/blob/2.0.x/reactor-net/src/test/resources/META-INF/reactor/reactor-environment.properties#L46.
And yes: please upgrade to the latest Reactor.
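For reference, the override would go into a META-INF/reactor/reactor-environment.properties file on the classpath; only the property line below is taken from this answer, the remaining dispatcher definitions should be copied from the linked example file:
# META-INF/reactor/reactor-environment.properties
# switch the default dispatcher from the ring buffer to a thread pool
reactor.dispatchers.default = threadPoolExecutor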
