Spring file inbound adapters - prioritizing directories with a single channel - Spring

Using a single inbound channel, I need to process two directories, lowpriority and highpriority, but the lowpriority files should be picked up only after the highpriority ones.
Does anyone know how to handle multiple directories in a file inbound adapter with a single channel?
@Bean
public IntegrationFlow processFileFlow() {
    return pushFileForProcess(lowDirectory, "processFile", "fileInputChannel");
}

private IntegrationFlow pushFileForProcess(String processDir, String methodName, String adapterName) {
    String fileProcessor = "fileProcessor";
    return IntegrationFlows
            .from(Files.inboundAdapter(Paths.get(processDir).toFile())
                            .regexFilter(FILE_PATTERN_REGEX)
                            .preventDuplicates(false),
                    e -> e.poller(Pollers.fixedDelay(j.getPollerIntervalMs())
                                    .maxMessagesPerPoll(j.getPollerMaxMsgs())
                                    .errorChannel("errorChannel")) // moves processed files
                            .id(adapterName))
            .handle(fileProcessor, methodName)
            .get();
}

Use a smart poller advice to reconfigure the FileReadingMessageSource when a poll of the high-priority directory returns no files.
Presumably you should reconfigure it back on each low-priority poll (successful or not).
EDIT
Example:
@SpringBootApplication
public class So53868122Application {

    private static final File HIGH = new File("/tmp/high");

    private static final File LOW = new File("/tmp/low");

    public static void main(String[] args) {
        HIGH.mkdir();
        LOW.mkdir();
        SpringApplication.run(So53868122Application.class, args);
    }

    @Bean
    public IntegrationFlow flow() {
        return IntegrationFlows.from(Files.inboundAdapter(HIGH),
                        e -> e.poller(Pollers.fixedDelay(5_000)
                                .advice(dirSwitcher())))
                .handle(System.out::println)
                .get();
    }

    @Bean
    public Advice dirSwitcher() {
        return new HighLowPollerAdvice();
    }

    public static class HighLowPollerAdvice extends AbstractMessageSourceAdvice {

        private boolean isHigh = true;

        @Override
        public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
            if (this.isHigh && result == null) {
                System.out.println("No results from " + HIGH + ", switching to " + LOW);
                this.isHigh = false;
                ((FileReadingMessageSource) source).setDirectory(LOW);
            }
            else if (!this.isHigh) {
                System.out.println("After one poll of " + LOW + " that returned "
                        + (result == null ? "no file" : result.getPayload()) + ", switching to " + HIGH);
                this.isHigh = true;
                ((FileReadingMessageSource) source).setDirectory(HIGH);
            }
            return result;
        }

    }

}

Related

Can anyone help me with a Spring Batch issue? (Unintended Spring Batch schedule)

The implemented function sends an LMS message to the user at each alarm time.
A total of four alarms are sent per day (9:00, 13:00, 19:00, 21:00).
A log entry is recorded regardless of success.
Nothing was recorded in the log, but when I looked at the batch data in the DB, I found unintended COMPLETED executions.
Issue:
The batch executed successfully at 9:00 and 13:00 on the 18th.
But at 13:37, which is not even a scheduled time, it executed (and FAILED).
Subsequently it executed at 13:38, 13:40, 13:42, and 13:44 (all COMPLETED).
Q1. Why was the job executed when it wasn't even the batch execution time?
Q2. I save a log entry both when executing the batch and when sending the SMS. The log was printed normally at 9:00 and 13:00.
But no log was saved for the unscheduled runs (13:37, 38, 40, 42, 44).
The Spring Boot service and Tomcat run together as one service; server CPU and memory usage are normal.
Batch problem environment:
Spring Boot (2.2.6.RELEASE)
Spring Boot - embedded Tomcat
===== Start Scheduler =====
@Component
public class DosageAlarmScheduler {

    public static final int MORNING_HOUR = 9;
    public static final int LUNCH_HOUR = 13;
    public static final int DINNER_HOUR = 19;
    public static final int BEFORE_SLEEP_HOUR = 21;

    @Scheduled(cron = "0 0 */1 * * *") // every hour
    public void executeDosageAlarmJob() {
        LocalDateTime nowDateTime = LocalDateTime.now();
        try {
            if (isExecuteTime(nowDateTime)) {
                log.info("[Send LMS], {}", nowDateTime);
                EatFixCd eatFixCd = currentEatFixCd(nowDateTime);
                jobLauncher.run(
                        alarmJob,
                        new JobParametersBuilder()
                                .addString("currentDate", nowDateTime.toString())
                                .addString("eatFixCodeValue", eatFixCd.getCodeValue())
                                .toJobParameters()
                );
            } else {
                log.info("[Not Send LMS], {}", nowDateTime);
            }
        } catch (JobExecutionAlreadyRunningException e) {
            log.error("[JobExecutionAlreadyRunningException]", e);
        } catch (JobRestartException e) {
            log.error("[JobRestartException]", e);
        } catch (JobInstanceAlreadyCompleteException e) {
            log.error("[JobInstanceAlreadyCompleteException]", e);
        } catch (JobParametersInvalidException e) {
            log.error("[JobParametersInvalidException]", e);
        } catch (Exception e) {
            log.error("[Exception]", e);
        }
    }

    /* Start private method */
    private boolean isExecuteTime(LocalDateTime nowDateTime) {
        return nowDateTime.getHour() == MORNING_HOUR
                || nowDateTime.getHour() == LUNCH_HOUR
                || nowDateTime.getHour() == DINNER_HOUR
                || nowDateTime.getHour() == BEFORE_SLEEP_HOUR;
    }

    private EatFixCd currentEatFixCd(LocalDateTime nowDateTime) {
        switch (nowDateTime.getHour()) {
            case MORNING_HOUR:
                return EatFixCd.MORNING;
            case LUNCH_HOUR:
                return EatFixCd.LUNCH;
            case DINNER_HOUR:
                return EatFixCd.DINNER;
            case BEFORE_SLEEP_HOUR:
                return EatFixCd.BEFORE_SLEEP;
            default:
                throw new RuntimeException("Not Dosage Time");
        }
    }
    /* End private method */
}
===== End Scheduler =====
===== Start Job =====
@Configuration
public class DosageAlarmConfiguration {

    private final int chunkSize = 20;
    private final JobBuilderFactory jobBuilderFactory;
    private final StepBuilderFactory stepBuilderFactory;
    private final EntityManagerFactory entityManagerFactory;

    @Bean
    public Job dosageAlarmJob() {
        log.info("[dosageAlarmJob execute]");
        return jobBuilderFactory.get("dosageAlarmJob")
                .start(dosageAlarmStep(null, null)).build();
    }

    @Bean
    @JobScope
    public Step dosageAlarmStep(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Step execute]");
        return stepBuilderFactory.get("dosageAlarmStep")
                .<Object[], DosageReceiverInfoDto>chunk(chunkSize)
                .reader(dosageAlarmReader(currentDate, eatFixCodeValue))
                .processor(dosageAlarmProcessor(currentDate, eatFixCodeValue))
                .writer(dosageAlarmWriter(currentDate, eatFixCodeValue))
                .build();
    }

    @Bean
    @StepScope
    public JpaPagingItemReader<Object[]> dosageAlarmReader(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Reader execute : {}, {}]", currentDate, eatFixCodeValue);
        if (currentDate == null) {
            return null;
        } else {
            JpaPagingItemReader<Object[]> jpaPagingItemReader = new JpaPagingItemReader<>();
            jpaPagingItemReader.setName("dosageAlarmReader");
            jpaPagingItemReader.setEntityManagerFactory(entityManagerFactory);
            jpaPagingItemReader.setPageSize(chunkSize);
            jpaPagingItemReader.setQueryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate ");
            HashMap<String, Object> parameterValues = new HashMap<>();
            parameterValues.put("currentDate", LocalDateTime.parse(currentDate).toLocalDate());
            jpaPagingItemReader.setParameterValues(parameterValues);
            return jpaPagingItemReader;
        }
    }

    @Bean
    @StepScope
    public ItemProcessor<Object[], DosageReceiverInfoDto> dosageAlarmProcessor(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Processor execute : {}, {}]", currentDate, eatFixCodeValue);
        // ... convert to DosageReceiverInfoDto ...
    }

    @Bean
    @StepScope
    public ItemWriter<DosageReceiverInfoDto> dosageAlarmWriter(
            @Value("#{jobParameters[currentDate]}") String currentDate,
            @Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
    ) {
        log.info("[dosageAlarm Writer execute : {}, {}]", currentDate, eatFixCodeValue);
        // ... make List ...
        if (reqMessageDtoList != null) {
            sendMessages(reqMessageDtoList);
        } else {
            log.info("[reqMessageDtoList not Exist]");
        }
    }

    public SmsExternalSendResDto sendMessages(List<reqMessagesDto> reqMessageDtoList) {
        log.info("[receiveList] smsTypeCd : {}, contentTypeCd : {}, messages : {}", smsTypeCd.LMS, contentTypeCd.COMM, reqMessageDtoList);
        // ... send messages ...
    }
}
===== End Job =====
Thank you. I want to fix my problem, and I hope this question helps other people as well.

Spring Boot WebSocket URL Not Responding and RxJS Call Repetition?

I'm trying to follow a guide to WebSockets at https://www.devglan.com/spring-boot/spring-boot-angular-websocket
I'd like it to respond to ws://localhost:8448/wsb/softlayer-cost-file, but I'm sure I misunderstood something. I'd like to get it to receive a binary file and issue periodic updates as the file is being processed.
Questions are:
Why does Spring not respond to my requests, despite all the URLs I try (see below)?
Does my RxJS call run once and then conclude, or does it keep running until some closure happens? Sorry to ask what might be obvious to others.
When my Spring Boot server starts, I see no errors. After about 5-7 minutes of running, I saw the following log message:
INFO o.s.w.s.c.WebSocketMessageBrokerStats - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 6, active threads = 1, queued tasks = 0, completed tasks = 5]
I've pointed my browser at these URLs and can't get the Spring Boot server to show any reaction:
ws://localhost:8448/app/message
ws://localhost:8448/greeting/app/message
ws://localhost:8448/topic
ws://localhost:8448/queue
(I got the initial request formed in Firefox, then clicked edit/resend to try again).
WebSocketConfig.java
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Autowired
    CostFileUploadWebSocketHandler costFileUploadWebSocketHandler;

    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new SocketTextHandler(), "/wst");
        registry.addHandler(costFileUploadWebSocketHandler, "/wsb/softlayer-cost-file");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic/", "/queue/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/greeting").setAllowedOrigins("*");
        // .withSockJS();
    }
}
CostFileUploadWebSocketHandler.java
@Component
public class CostFileUploadWebSocketHandler extends BinaryWebSocketHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private SoftLayerJobService softLayerJobService;
    private SoftLayerService softLayerService;
    private AuthenticationFacade authenticationFacade;

    @Autowired
    CostFileUploadWebSocketHandler(SoftLayerJobService softLayerJobService, SoftLayerService softLayerService,
            AuthenticationFacade authenticationFacade) {
        this.softLayerJobService = softLayerJobService;
        this.softLayerService = softLayerService;
        this.authenticationFacade = authenticationFacade;
    }

    Map<WebSocketSession, FileUploadInFlight> sessionToFileMap = new WeakHashMap<>();

    @Override
    public boolean supportsPartialMessages() {
        return true;
    }

    class WebSocketProgressReporter implements ProgressReporter {

        private WebSocketSession session;

        public WebSocketProgressReporter(WebSocketSession session) {
            this.session = session;
        }

        @Override
        public void reportCurrentProgress(BatchStatus currentBatchStatus, long currentPercentage) {
            try {
                session.sendMessage(new TextMessage("BatchStatus " + currentBatchStatus));
                session.sendMessage(new TextMessage("Percentage Complete " + currentPercentage));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    @Override
    protected void handleBinaryMessage(WebSocketSession session, BinaryMessage message) throws Exception {
        ByteBuffer payload = message.getPayload();
        FileUploadInFlight inflightUpload = sessionToFileMap.get(session);
        if (inflightUpload == null) {
            throw new IllegalStateException("This is not expected");
        }
        inflightUpload.append(payload);
        if (message.isLast()) {
            File fileNameSaved = save(inflightUpload.name, "websocket", inflightUpload.bos.toByteArray());
            BatchStatus currentBatchStatus = BatchStatus.UNKNOWN;
            long percentageComplete;
            ProgressReporter progressReporter = new WebSocketProgressReporter(session);
            SoftLayerCostFileJobExecutionThread softLayerCostFileJobExecutionThread =
                    new SoftLayerCostFileJobExecutionThread(softLayerService, softLayerJobService, fileNameSaved, progressReporter);
            logger.info("In main thread about to begin separate thread");
            ForkJoinPool.commonPool().submit(softLayerCostFileJobExecutionThread);
            while (!softLayerCostFileJobExecutionThread.jobDone());
            // softLayerCostFileJobExecutionThread.run();
            // Wait for above to complete somehow
            // StepExecution foundStepExecution = jobExplorer.getJobExecution(
            //     jobExecutionThread.getJobExecutionResult().getJobExecution().getId()
            // ).getStepExecutions().stream().filter(stepExecution->stepExecution.getStepName().equals("softlayerUploadFile")).findFirst().orElseGet(null);
            // if (!"COMPLETED".equals(jobExecutionResult.getExitStatus())) {
            //     throw new UploadFileException(file.getOriginalFilename() + " exit status: " + jobExecutionResult.getExitStatus());
            // }
            logger.info("In main thread after separate thread submitted");
            session.sendMessage(new TextMessage("UPLOAD " + inflightUpload.name));
            session.close();
            sessionToFileMap.remove(session);
            logger.info("Uploaded " + inflightUpload.name);
        }
        String response = "Upload Chunk: size " + payload.array().length;
        logger.debug(response);
    }

    private File save(String fileName, String prefix, byte[] data) throws IOException {
        Path basePath = Paths.get(".", "uploads", prefix, UUID.randomUUID().toString());
        logger.info("Saving incoming cost file " + fileName + " to " + basePath);
        Files.createDirectories(basePath);
        FileChannel channel = new FileOutputStream(Paths.get(basePath.toString(), fileName).toFile(), false).getChannel();
        channel.write(ByteBuffer.wrap(data));
        channel.close();
        return new File(basePath.getFileName().toString());
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        sessionToFileMap.put(session, new FileUploadInFlight(session));
    }

    static class FileUploadInFlight {

        private final Logger logger = LoggerFactory.getLogger(this.getClass());

        String name;
        String uniqueUploadId;
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        /**
         * Fragile constructor - beware not prod ready
         * @param session
         */
        FileUploadInFlight(WebSocketSession session) {
            String query = session.getUri().getQuery();
            String uploadSessionIdBase64 = query.split("=")[1];
            String uploadSessionId = new String(Base64Utils.decodeUrlSafe(uploadSessionIdBase64.getBytes()));
            List<String> sessionIdentifiers = Splitter.on("\\").splitToList(uploadSessionId);
            String uniqueUploadId = session.getRemoteAddress().toString() + sessionIdentifiers.get(0);
            String fileName = sessionIdentifiers.get(1);
            this.name = fileName;
            this.uniqueUploadId = uniqueUploadId;
            logger.info("Preparing upload for " + this.name + " uploadSessionId " + uploadSessionId);
        }

        public void append(ByteBuffer byteBuffer) throws IOException {
            bos.write(byteBuffer.array());
        }
    }
}
Below is a snippet of Angular code where I make the call to the websocket. The service is intended to receive a file, then provide regular updates of percentage complete until the service is completed. Does this call need to be in a loop, or does the socket run until it's closed?
Angular Snippet of call to WebSocket:
this.softlayerService.uploadBlueReportFile(this.blueReportFile)
    .subscribe(data => {
        this.showLoaderBlueReport = false;
        this.successBlueReport = true;
        this.blueReportFileName = "No file selected";
        this.responseBlueReport = 'File '.concat(data.fileName).concat(' ').concat('is ').concat(data.exitStatus);
        this.blueReportSelected = false;
        this.getCurrentUserFiles();
    },
    (error) => {
        if (error.status === 504) {
            this.showLoaderBlueReport = false;
            this.stillProcessing = true;
        } else {
            this.showLoaderBlueReport = false;
            this.displayUploadBlueReportsError(error, 'File upload failed');
        }
    });

Spring Integration TCP server: more than 5 concurrent connections

I'm using the following versions of Spring Boot and Spring Integration:
spring.boot.version 2.3.4.RELEASE
spring-integration 5.3.2.RELEASE
My requirement is to create TCP client-server communication, and I'm using Spring Integration for it. The spike works fine for a single exchange between client and server, and also for exactly 5 concurrent client connections.
The moment I increase the concurrent client connections beyond 5 to an arbitrary number, it stops working: the TCP server accepts only 5 connections.
I have used the ThreadAffinityClientConnectionFactory mentioned by @Gary Russell in one of his earlier answers (for similar requirements), but it still doesn't work.
Below is the code I have at the moment.
@Slf4j
@Configuration
@EnableIntegration
@IntegrationComponentScan
public class SocketConfig {

    @Value("${socket.host}")
    private String clientSocketHost;

    @Value("${socket.port}")
    private Integer clientSocketPort;

    @Bean
    public TcpOutboundGateway tcpOutGate(AbstractClientConnectionFactory connectionFactory) {
        TcpOutboundGateway gate = new TcpOutboundGateway();
        //connectionFactory.setTaskExecutor(taskExecutor());
        gate.setConnectionFactory(clientCF());
        return gate;
    }

    @Bean
    public TcpInboundGateway tcpInGate(AbstractServerConnectionFactory connectionFactory) {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(connectionFactory);
        inGate.setRequestChannel(fromTcp());
        return inGate;
    }

    @Bean
    public MessageChannel fromTcp() {
        return new DirectChannel();
    }

    // Outgoing requests
    @Bean
    public ThreadAffinityClientConnectionFactory clientCF() {
        TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(clientSocketHost, serverCF().getPort());
        tcpNetClientConnectionFactory.setSingleUse(true);
        ThreadAffinityClientConnectionFactory threadAffinityClientConnectionFactory = new ThreadAffinityClientConnectionFactory(
                tcpNetClientConnectionFactory);
        // Tested with the below too.
        // threadAffinityClientConnectionFactory.setTaskExecutor(taskExecutor());
        return threadAffinityClientConnectionFactory;
    }

    // Incoming requests
    @Bean
    public AbstractServerConnectionFactory serverCF() {
        log.info("Server Connection Factory");
        TcpNetServerConnectionFactory tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(clientSocketPort);
        tcpNetServerConnectionFactory.setSerializer(new CustomSerializer());
        tcpNetServerConnectionFactory.setDeserializer(new CustomDeserializer());
        tcpNetServerConnectionFactory.setSingleUse(true);
        return tcpNetServerConnectionFactory;
    }

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(50);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(50);
        executor.setAllowCoreThreadTimeOut(true);
        executor.setKeepAliveSeconds(120);
        return executor;
    }
}
Has anyone had the same issue with more than 5 concurrent TCP client connections?
Thanks
Client Code:
@Component
@Slf4j
@RequiredArgsConstructor
public class ScheduledTaskService {

    // Timeout in milliseconds
    private static final int SOCKET_TIME_OUT = 18000;
    private static final int BUFFER_SIZE = 32000;
    private static final int ETX = 0x03;
    private static final String HEADER = "ABCDEF ";
    private static final String data = "FIXED DATA";

    private final AtomicInteger atomicInteger = new AtomicInteger();

    @Async
    @Scheduled(fixedDelay = 100000)
    public void sendDataMessage() throws IOException, InterruptedException {
        int numberOfRequests = 10;
        Callable<String> executeMultipleSuccessfulRequestTask = () -> socketSendNReceive();
        final Collection<Callable<String>> callables = new ArrayList<>();
        IntStream.rangeClosed(1, numberOfRequests).forEach(i -> {
            callables.add(executeMultipleSuccessfulRequestTask);
        });
        ExecutorService executorService = Executors.newFixedThreadPool(numberOfRequests);
        List<Future<String>> taskFutureList = executorService.invokeAll(callables);
        List<String> strings = taskFutureList.stream().map(future -> {
            try {
                return future.get(20000, TimeUnit.MILLISECONDS);
            } catch (InterruptedException | ExecutionException | TimeoutException e) {
                e.printStackTrace();
            }
            return "";
        }).collect(Collectors.toList());
        strings.forEach(string -> log.info("Message received from the server: {} ", string));
    }

    public String socketSendNReceive() throws IOException {
        int requestCounter = atomicInteger.incrementAndGet();
        String host = "localhost";
        int port = 8000;
        Socket socket = new Socket();
        InetSocketAddress address = new InetSocketAddress(host, port);
        socket.connect(address, SOCKET_TIME_OUT);
        socket.setSoTimeout(SOCKET_TIME_OUT);
        // Send the message to the server
        OutputStream os = socket.getOutputStream();
        BufferedOutputStream bos = new BufferedOutputStream(os);
        bos.write(HEADER.getBytes());
        bos.write(data.getBytes());
        bos.write(ETX);
        bos.flush();
        // log.info("Message sent to the server : {} ", envio);
        // Get the return message from the server
        InputStream is = socket.getInputStream();
        String response = receber(is);
        log.info("Received response");
        return response;
    }

    private String receber(InputStream in) throws IOException {
        final StringBuffer stringBuffer = new StringBuffer();
        int readLength;
        byte[] buffer = new byte[BUFFER_SIZE];
        do {
            if (Objects.nonNull(in)) {
                log.info("Input Stream not null");
            }
            readLength = in.read(buffer);
            log.info("readLength : {} ", readLength);
            if (readLength > 0) {
                stringBuffer.append(new String(buffer, 0, readLength));
                log.info("String ******");
            }
        } while (readLength > 0 && buffer[readLength - 1] != ETX); // stop on EOF as well as ETX
        buffer = null;
        stringBuffer.deleteCharAt(stringBuffer.length() - 1); // strip the trailing ETX
        return stringBuffer.toString();
    }
}
Since you are opening the connections all at the same time, you need to increase the backlog property on the server connection factory.
It defaults to 5.
/**
 * The number of sockets in the connection backlog. Default 5;
 * increase if you expect high connection rates.
 * @param backlog The backlog to set.
 */
public void setBacklog(int backlog) {
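For example, in the serverCF() bean from the question (100 here is an arbitrary value; size it to the largest burst of simultaneous connects you expect, and set it before the factory starts):

tcpNetServerConnectionFactory.setBacklog(100); // default is 5; raise it for bursts of concurrent connections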

How can I create many Kafka topics during Spring Boot application startup?

I have this configuration:
@Configuration
public class KafkaTopicConfig {

    private final TopicProperties topics;

    public KafkaTopicConfig(TopicProperties topics) {
        this.topics = topics;
    }

    @Bean
    public NewTopic newTopicImportCharge() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CHARGES.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportPayment() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_PAYMENTS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportCatalog() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CATALOGS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }
}
I can add 10 different topics to TopicProperties, and I don't want to create each similar bean manually. Is there a way to create all the topics in spring-kafka, or in plain Spring?
Use an admin client directly; you can get a pre-built properties map from Boot's KafkaAdmin.
@SpringBootApplication
public class So55336461Application {

    public static void main(String[] args) {
        SpringApplication.run(So55336461Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaAdmin kafkaAdmin) {
        return args -> {
            AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties());
            List<NewTopic> topics = new ArrayList<>();
            // build list
            admin.createTopics(topics).all().get();
        };
    }
}
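To fill in the // build list placeholder, one possibility is to loop over the question's TopicProperties (assuming it is also injected into the runner and exposes its entries via a getTopics() accessor, which is an assumption about that class):

for (TopicProperties.Topic topic : topicProperties.getTopics()) { // getTopics() is hypothetical
    topics.add(new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor()));
}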
EDIT
To check if they already exist, or if the partitions need to be increased, the KafkaAdmin has this logic...
private void addTopicsIfNeeded(AdminClient adminClient, Collection<NewTopic> topics) {
    if (topics.size() > 0) {
        Map<String, NewTopic> topicNameToTopic = new HashMap<>();
        topics.forEach(t -> topicNameToTopic.compute(t.name(), (k, v) -> t));
        DescribeTopicsResult topicInfo = adminClient
                .describeTopics(topics.stream()
                        .map(NewTopic::name)
                        .collect(Collectors.toList()));
        List<NewTopic> topicsToAdd = new ArrayList<>();
        Map<String, NewPartitions> topicsToModify = checkPartitions(topicNameToTopic, topicInfo, topicsToAdd);
        if (topicsToAdd.size() > 0) {
            addTopics(adminClient, topicsToAdd);
        }
        if (topicsToModify.size() > 0) {
            modifyTopics(adminClient, topicsToModify);
        }
    }
}

private Map<String, NewPartitions> checkPartitions(Map<String, NewTopic> topicNameToTopic,
        DescribeTopicsResult topicInfo, List<NewTopic> topicsToAdd) {
    Map<String, NewPartitions> topicsToModify = new HashMap<>();
    topicInfo.values().forEach((n, f) -> {
        NewTopic topic = topicNameToTopic.get(n);
        try {
            TopicDescription topicDescription = f.get(this.operationTimeout, TimeUnit.SECONDS);
            if (topic.numPartitions() < topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                            "Topic '%s' exists but has a different partition count: %d not %d", n,
                            topicDescription.partitions().size(), topic.numPartitions()));
                }
            }
            else if (topic.numPartitions() > topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                            "Topic '%s' exists but has a different partition count: %d not %d, increasing "
                                    + "if the broker supports it", n,
                            topicDescription.partitions().size(), topic.numPartitions()));
                }
                topicsToModify.put(n, NewPartitions.increaseTo(topic.numPartitions()));
            }
        }
        catch (@SuppressWarnings("unused") InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        catch (TimeoutException e) {
            throw new KafkaException("Timed out waiting to get existing topics", e);
        }
        catch (@SuppressWarnings("unused") ExecutionException e) {
            topicsToAdd.add(topic);
        }
    });
    return topicsToModify;
}
Currently we can just use KafkaAdmin.NewTopics (see the Spring for Apache Kafka documentation).
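A minimal sketch of that bean, assuming spring-kafka 2.7 or later and a hypothetical getAll() accessor on the question's TopicProperties:

@Bean
public KafkaAdmin.NewTopics allTopics(TopicProperties topics) {
    // One NewTopic per configured entry; Boot's KafkaAdmin declares them all at startup.
    return new KafkaAdmin.NewTopics(
            topics.getAll().stream() // getAll() is assumed, not part of the original question
                    .map(t -> new NewTopic(t.getTopicName(), t.getNumPartitions(), t.getReplicationFactor()))
                    .toArray(NewTopic[]::new));
}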

SftpInboundFileSynchronizer not synchronizing

I have the following SFTP file synchronizer:
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory(applicationProperties.getSftpDirectory());
    CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<ChannelSftp.LsEntry>();
    compositeFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(store, "sftp"));
    compositeFileListFilter.addFilter(new SftpSimplePatternFileListFilter(applicationProperties.getLoadFileNamePattern()));
    fileSynchronizer.setFilter(compositeFileListFilter);
    fileSynchronizer.setPreserveTimestamp(true);
    return fileSynchronizer;
}
When the application first runs, it synchronizes the remote SFTP site directory to the local directory. However, it fails to pick up any subsequent changes to the files in the remote SFTP directory.
It is scheduled to poll as follows:
@Bean
@InboundChannelAdapter(autoStartup = "true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(applicationProperties.getScheduledLoadDirectory());
    source.setAutoCreateLocalDirectory(true);
    ChainFileListFilter<File> chainFileFilter = new ChainFileListFilter<File>();
    chainFileFilter.addFilter(new LastModifiedFileListFilter());
    FileSystemPersistentAcceptOnceFileListFilter fs = new FileSystemPersistentAcceptOnceFileListFilter(store, "dailyfilesystem");
    fs.setFlushOnUpdate(true);
    chainFileFilter.addFilter(fs);
    source.setLocalFilter(chainFileFilter);
    source.setCountsEnabled(true);
    return source;
}

@Bean
public PollerMetadata pollerMetadata(RetryCompoundTriggerAdvice retryCompoundTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    List<Advice> adviceChain = new ArrayList<Advice>();
    adviceChain.add(retryCompoundTriggerAdvice);
    pollerMetadata.setAdviceChain(adviceChain);
    pollerMetadata.setTrigger(compoundTrigger());
    pollerMetadata.setMaxMessagesPerPoll(1);
    return pollerMetadata;
}

@Bean
public CompoundTrigger compoundTrigger() {
    CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
    return compoundTrigger;
}

@Bean
public CronTrigger primaryTrigger() {
    return new CronTrigger(applicationProperties.getSchedule());
}

@Bean
public PeriodicTrigger secondaryTrigger() {
    return new PeriodicTrigger(applicationProperties.getRetryInterval());
}
In the afterReceive method of RetryCompoundTriggerAdvice (which extends AbstractMessageSourceAdvice), I get a null result after the first run.
How can I configure the synchronizer so that it synchronizes periodically (rather than just once at app startup)?
Update
I have found that when the SFTP site has no file in its directory at my application's startup, the SftpInboundFileSynchronizer syncs at every polling interval, and I can see com.jcraft.jsch log statements at every poll. But as soon as a file is found on the SFTP site, it syncs once to fetch that file locally and then syncs no more.
Update 2
My apologies... here's the custom code:
@Component
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {

    private final static Logger logger = LoggerFactory.getLogger(RetryCompoundTriggerAdvice.class);

    private final CompoundTrigger compoundTrigger;
    private final Trigger override;
    private final ApplicationProperties applicationProperties;
    private final Mail mail;

    private int attempts = 0;
    private boolean expectedMessage;
    private boolean inProcess;

    public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger,
            @Qualifier("secondaryTrigger") Trigger override,
            ApplicationProperties applicationProperties,
            Mail mail) {
        this.compoundTrigger = compoundTrigger;
        this.override = override;
        this.applicationProperties = applicationProperties;
        this.mail = mail;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        logger.debug("!inProcess is " + !inProcess);
        return !inProcess;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (expectedMessage) {
            logger.info("Received expected load file. Setting cron trigger.");
            this.compoundTrigger.setOverride(null);
            expectedMessage = false;
            return result;
        }
        final int maxOverrideAttempts = applicationProperties.getMaxFileRetry();
        attempts++;
        if (result == null && attempts < maxOverrideAttempts) {
            logger.info("Unable to find file after " + attempts + " attempt(s). Will reattempt");
            this.compoundTrigger.setOverride(this.override);
        } else if (result == null && attempts >= maxOverrideAttempts) {
            String message = "Unable to find daily file" +
                    " after " + attempts +
                    " attempt(s). Will not reattempt since max number of attempts is set at " +
                    maxOverrideAttempts + ".";
            logger.warn(message);
            mail.sendAdminsEmail("Missing Load File", message);
            attempts = 0;
            this.compoundTrigger.setOverride(null);
        } else {
            attempts = 0;
            // keep periodically checking until we are certain
            // that this message is the expected message
            this.compoundTrigger.setOverride(this.override);
            inProcess = true;
            logger.info("Found load file");
        }
        return result;
    }

    public void foundExpectedMessage(boolean found) {
        logger.debug("Expected message was found? " + found);
        this.expectedMessage = found;
        inProcess = false;
    }
}
You have the logic:
@Override
public boolean beforeReceive(MessageSource<?> source) {
    logger.debug("!inProcess is " + !inProcess);
    return !inProcess;
}
Let's study its JavaDoc:
/**
 * Subclasses can decide whether to proceed with this poll.
 * @param source the message source.
 * @return true to proceed.
 */
public abstract boolean beforeReceive(MessageSource<?> source);
And the logic around this method:
Message<?> result = null;
if (beforeReceive((MessageSource<?>) target)) {
    result = (Message<?>) invocation.proceed();
}
return afterReceive(result, (MessageSource<?>) target);
So, we call invocation.proceed() (the SFTP synchronization) only if beforeReceive() returns true - in your case, only while !inProcess.
In your afterReceive() implementation you set inProcess = true when there is a result - on the first successful attempt. And it looks like you reset it back to false only when someone calls foundExpectedMessage().
So the answer to your problem is really in your custom code, not in the Framework. Sorry...
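If the intent is that polling should eventually resume, the fix belongs in that custom code; for example (a sketch of the general idea, not from the original answer; processLoadFile() is a hypothetical downstream method):

boolean expected = false;
try {
    expected = processLoadFile(file); // hypothetical: whatever consumes the synchronized file
} finally {
    // Always reset the advice, even on failure, so beforeReceive()
    // returns true on the next poll instead of leaving inProcess latched at true.
    retryCompoundTriggerAdvice.foundExpectedMessage(expected);
}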
