How to set the override on a compound trigger? - spring

I have a Spring Integration application that normally polls daily for a file via SFTP using a cron trigger. But if it doesn't find the file it expects, it should poll every x minutes via a periodic trigger, up to y attempts. To do this I use the following component:
@Component
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {

    private final static Logger logger = LoggerFactory.getLogger(RetryCompoundTriggerAdvice.class);

    private final CompoundTrigger compoundTrigger;
    private final Trigger override;
    private final ApplicationProperties applicationProperties;
    private final Mail mail;
    private int attempts = 0;

    public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger,
            @Qualifier("secondaryTrigger") Trigger override,
            ApplicationProperties applicationProperties,
            Mail mail) {
        this.compoundTrigger = compoundTrigger;
        this.override = override;
        this.applicationProperties = applicationProperties;
        this.mail = mail;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        final int maxOverrideAttempts = applicationProperties.getMaxFileRetry();
        attempts++;
        if (result == null && attempts < maxOverrideAttempts) {
            logger.info("Unable to find load file after " + attempts + " attempt(s). Will reattempt");
            this.compoundTrigger.setOverride(this.override);
        } else if (result == null && attempts >= maxOverrideAttempts) {
            mail.sendAdminsEmail("Missing File");
            attempts = 0;
            this.compoundTrigger.setOverride(null);
        } else {
            attempts = 0;
            this.compoundTrigger.setOverride(null);
            logger.info("Found load file");
        }
        return result;
    }

    public void setOverrideTrigger() {
        this.compoundTrigger.setOverride(this.override);
    }

    public CompoundTrigger getCompoundTrigger() {
        return compoundTrigger;
    }
}
If a file doesn't exist, this works great. That is, the override (i.e. the periodic trigger) takes effect and polls every x minutes, up to y attempts.
However, if a file does exist but it's not the expected file (e.g. the data is for the wrong date), another class (the one that reads the file) calls setOverrideTrigger on the RetryCompoundTriggerAdvice. But afterReceive is not subsequently called every x minutes. Why would this be?
Here's more of the application code:
SftpInboundFileSynchronizer:
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory(applicationProperties.getSftpDirectory());
    CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<ChannelSftp.LsEntry>();
    compositeFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(store, "sftp"));
    compositeFileListFilter.addFilter(new SftpSimplePatternFileListFilter(applicationProperties.getLoadFileNamePattern()));
    fileSynchronizer.setFilter(compositeFileListFilter);
    fileSynchronizer.setPreserveTimestamp(true);
    return fileSynchronizer;
}
Session factory is:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory sftpSessionFactory = new DefaultSftpSessionFactory();
    sftpSessionFactory.setHost(applicationProperties.getSftpHost());
    sftpSessionFactory.setPort(applicationProperties.getSftpPort());
    sftpSessionFactory.setUser(applicationProperties.getSftpUser());
    sftpSessionFactory.setPassword(applicationProperties.getSftpPassword());
    sftpSessionFactory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<LsEntry>(sftpSessionFactory);
}
The SftpInboundFileSynchronizingMessageSource is set to poll using the compound trigger.
@Bean
@InboundChannelAdapter(autoStartup = "true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(applicationProperties.getScheduledLoadDirectory());
    source.setAutoCreateLocalDirectory(true);
    CompositeFileListFilter<File> compositeFileFilter = new CompositeFileListFilter<File>();
    compositeFileFilter.addFilter(new LastModifiedFileListFilter());
    compositeFileFilter.addFilter(new FileSystemPersistentAcceptOnceFileListFilter(store, "dailyfilesystem"));
    source.setLocalFilter(compositeFileFilter);
    source.setCountsEnabled(true);
    return source;
}

@Bean
public PollerMetadata pollerMetadata(RetryCompoundTriggerAdvice retryCompoundTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    List<Advice> adviceChain = new ArrayList<Advice>();
    adviceChain.add(retryCompoundTriggerAdvice);
    pollerMetadata.setAdviceChain(adviceChain);
    pollerMetadata.setTrigger(compoundTrigger());
    pollerMetadata.setMaxMessagesPerPoll(1);
    return pollerMetadata;
}

@Bean
public CompoundTrigger compoundTrigger() {
    CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
    return compoundTrigger;
}

@Bean
public CronTrigger primaryTrigger() {
    return new CronTrigger(applicationProperties.getSchedule());
}

@Bean
public PeriodicTrigger secondaryTrigger() {
    return new PeriodicTrigger(applicationProperties.getRetryInterval());
}
Update
Here's the message handler:
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler dailyHandler(SimpleJobLauncher jobLauncher, Job job, Mail mail) {
    JobRunner jobRunner = new JobRunner(jobLauncher, job, store, mail);
    jobRunner.setDaily("true");
    jobRunner.setOverwrite("false");
    return jobRunner;
}
JobRunner kicks off a Spring Batch job. After the job has run, my application checks whether the file contained the data it expected for the day. If not, it calls setOverrideTrigger on the RetryCompoundTriggerAdvice.
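For illustration, that post-job check amounts to something like the following sketch, where everything except setOverrideTrigger() is a hypothetical name:

if (!loadFileHasExpectedData) { // hypothetical check performed after the batch job
    retryCompoundTriggerAdvice.setOverrideTrigger(); // re-arm the periodic trigger
}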

That's the way triggers work - you only get an opportunity to change the trigger when the trigger fires.
Since you reset to the cron trigger, the next opportunity for change is when that trigger fires (if the poller thread is released by the downstream flow before changing the trigger).
Are you handing off the file to another thread (queue channel or executor)? If not, I would expect any changes to the trigger to be applied, because nextExecutionTime() will not be called until the downstream flow returns.
If there's a thread handoff, you have no opportunity to change the trigger.
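In other words, the override is only seen if it is set before the poller computes the next execution time, i.e. while the downstream flow still holds the poller thread. A minimal sketch of the no-handoff setup, assuming sftpChannel can be declared explicitly (this bean does not appear in the question and is purely illustrative):

@Bean
public MessageChannel sftpChannel() {
    // DirectChannel dispatches on the calling (poller) thread, so the
    // JobRunner can call setOverrideTrigger() before nextExecutionTime()
    // is evaluated for the next poll.
    return new DirectChannel();
}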

Related

Recursively read files from remote server present in subdirectories with Spring Integration

I have a working flow that gets files from a single folder on a remote server using the inbound adapter, but I want to get files from all subfolders under a parent folder on the remote server.
I have code like this:
@Bean
public SessionFactory<SftpClient.DirEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost("localhost");
    factory.setPort(port);
    factory.setUser("foo");
    factory.setPassword("foo");
    factory.setAllowUnknownKeys(true);
    factory.setTestSession(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory("foo");
    fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter("*.xml"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File("sftp-inbound"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    source.setMaxFetchSize(1);
    return source;
}

@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            System.out.println(message.getPayload());
        }

    };
}
But instead of a single folder, I want to get files from all subfolders of the foo directory.
If possible, please help with the full code.
@GaryRussell
Thank you so much for your early response. I have made some changes according to your suggested code; the app starts, but files are not getting picked up by the application.
CompositeFileListFilter<LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
SftpPersistentAcceptOnceFileListFilter fileListFilter =
        new SftpPersistentAcceptOnceFileListFilter(
                (JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
if (Constants.APP1.equals(appName) || Constants.APP2.equals(appName)) {
    SftpRegexPatternFileListFilter regexPatternFileListFilter =
            new SftpRegexPatternFileListFilter(Pattern.compile("^IL.*"));
    compositeFileListFilter.addFilter(regexPatternFileListFilter);
}
compositeFileListFilter.addFilter(fileListFilter);
return IntegrationFlows.fromSupplier(
        () -> sftpEnvironment.getSftpGLSIncomingDir(), // remote dir
        e -> e.autoStartup(true).poller(pollerMetada()))
        .handle(
                Sftp.outboundGateway(sftpSessionFactory(), Command.MGET, "payload")
                        .options(Option.RECURSIVE)
                        .filter(compositeFileListFilter)
                        .fileExistsMode(FileExistsMode.IGNORE)
                        .localDirectoryExpression("'/tmp/' + #remoteDirectory")) // re-create tree locally
        .split()
        .log()
        .get();
@GaryRussell
I have changed my code in this new way; now it partially processes files, meaning that out of 10 files it only processes 5 or 6, and I am not able to figure out the main issue. I also have some open challenges, which I mention below:
It is able to read files from remote subdirectories and store them in a local directory, but I want to process these files on some other sftpChannel, if possible without storing them locally.
I also want to apply a deduplication technique using a database, which will help me avoid duplicate file processing.
public class SFTPPollerService {

    @Bean
    public SessionFactory<LsEntry> sftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        //code
        return factory;
    }

    //OLD code
    // @Bean
    // public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    //     SftpInboundFileSynchronizer fileSynchronizer =
    //             new SftpInboundFileSynchronizer(sftpSessionFactory());
    //     fileSynchronizer.setDeleteRemoteFiles(sftpEnvironment.isDeleteRemoteFiles());
    //     fileSynchronizer.setRemoteDirectory(sftpEnvironment.getSftpGLSIncomingDir());
    //     fileSynchronizer.setPreserveTimestamp(true);
    //     CompositeFileListFilter<LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
    //     SftpPersistentAcceptOnceFileListFilter fileListFilter =
    //             new SftpPersistentAcceptOnceFileListFilter(
    //                     (JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
    //     if (Constants.app2.equals(appName)
    //             || Constants.app1.equals(appName)) {
    //         SftpRegexPatternFileListFilter regexPatternFileListFilter =
    //                 new SftpRegexPatternFileListFilter(Pattern.compile("*.txt"));
    //         compositeFileListFilter.addFilter(regexPatternFileListFilter);
    //     }
    //     compositeFileListFilter.addFilter(fileListFilter);
    //     fileSynchronizer.setFilter(compositeFileListFilter);
    //     return fileSynchronizer;
    // }
    //
    // @Bean
    // @InboundChannelAdapter(channel = "sftpChannel", poller = @Poller("pollerMetada"))
    // public MessageSource<File> sftpMessageSource() {
    //     SftpInboundFileSynchronizingMessageSource source =
    //             new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    //     source.setLocalDirectory(new File(sftpEnvironment.getSftpLocalDir()));
    //     source.setAutoCreateLocalDirectory(true);
    //     try {
    //         source.setLocalFilter(
    //                 (FileSystemPersistentAcceptOnceFileListFilter)
    //                         context.getBean("filelistFilter"));
    //     } catch (Exception e) {
    //         LOG.error(
    //                 "Exception caught while setting local filter on SftpInboundFileSynchronizingMessageSource",
    //                 e);
    //     }
    //     source.setMaxFetchSize(sftpEnvironment.getMaxFetchFileSize());
    //     return source;
    // }
    //new Code
    @Bean
    public IntegrationFlow sftpInboundFlow() {
        CompositeFileListFilter<LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
        SftpPersistentAcceptOnceFileListFilter fileListFilter =
                new SftpPersistentAcceptOnceFileListFilter(
                        (JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
        if (Constants.app2.equals(appName) || Constants.app1.equals(appName)) {
            SftpRegexPatternFileListFilter regexPatternFileListFilter =
                    new SftpRegexPatternFileListFilter(Pattern.compile("(subDir | *.txt)"));
            compositeFileListFilter.addFilter(regexPatternFileListFilter);
        }
        fileListFilter.setForRecursion(true);
        FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter =
                (FileSystemPersistentAcceptOnceFileListFilter) context.getBean("filelistFilter");
        compositeFileListFilter.addFilter(fileListFilter);
        // IntegrationFlow ir =
        //         IntegrationFlows.from(
        //                 Sftp.inboundAdapter(sftpSessionFactory())
        //                         .preserveTimestamp(true)
        //                         .remoteDirectory(sftpEnvironment.getSftpGLSIncomingDir())
        //                         .deleteRemoteFiles(sftpEnvironment.isDeleteRemoteFiles())
        //                         .filter(compositeFileListFilter)
        //                         .autoCreateLocalDirectory(true)
        //                         .localDirectory(new File(sftpEnvironment.getSftpLocalDir())),
        //                 e -> e.autoStartup(true).poller(pollerMetada()))
        //                 .handle(handler())
        //                 .get();
        return IntegrationFlows.fromSupplier(
                () -> sftpEnvironment.getSftpGLSIncomingDir(), // remote dir
                e -> e.autoStartup(true).poller(pollerMetada()))
                .handle(
                        Sftp.outboundGateway(sftpSessionFactory(), Command.MGET, "payload")
                                .options(Option.RECURSIVE)
                                .fileExistsMode(FileExistsMode.IGNORE)
                                .regexFileNameFilter("(dsv[0-9]|.*.xml)")
                                // .filter(compositeFileListFilter)
                                .localDirectoryExpression("'user/localDir/test/'"))
                // .handle(handler())
                // .patternFileNameFilter(".*\\.xml")) // re-create tree locally
                .split()
                .channel("sftpChannel")
                // .handle(handler())
                .log()
                .get();
    }
    @Bean
    public PollerMetadata pollerMetada() {
        PollerMetadata pm = new PollerMetadata();
        ExpressionEvaluatingTransactionSynchronizationProcessor processor =
                new ExpressionEvaluatingTransactionSynchronizationProcessor();
        ExpressionParser parser = new SpelExpressionParser();
        Expression exp = parser.parseExpression("payload.delete()");
        processor.setAfterRollbackExpression(exp);
        TransactionSynchronizationFactory tsf = new DefaultTransactionSynchronizationFactory(processor);
        pm.setTransactionSynchronizationFactory(tsf);
        List<Advice> advices = new ArrayList<>();
        advices.add(compoundTriggerAdvice());
        pm.setAdviceChain(advices);
        pm.setTrigger(compoundTrigger());
        pm.setMaxMessagesPerPoll(sftpEnvironment.getMaxMessagesPerPoll());
        return pm;
    }

    @Bean
    public CronTrigger cronTrigger() {
        if (LOG.isDebugEnabled()) {
            return new CronTrigger(sftpEnvironment.getPollerCronExpressionWhenDebugModeIsEnabled());
        } else {
            return new CronTrigger(sftpEnvironment.getPollerCronExpression());
        }
    }

    @Bean
    public PeriodicTrigger periodicTrigger() {
        return new PeriodicTrigger(sftpEnvironment.getPeriodicTriggerInMillis());
    }

    @Bean
    public CompoundTrigger compoundTrigger() {
        return new CompoundTrigger(cronTrigger());
    }

    @Bean
    public CompoundTriggerAdvice compoundTriggerAdvice() {
        return new CompoundTriggerAdvice(compoundTrigger(), periodicTrigger());
    }

    @Bean
    public FileSystemPersistentAcceptOnceFileListFilter filelistFilter(MetadataStore datastore) {
        return new FileSystemPersistentAcceptOnceFileListFilter((JdbcMetadataStore) datastore, "INT");
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new org.springframework.integration.transaction.PseudoTransactionManager();
    }

    @Bean
    DataSource dataSource() throws SQLException {
        OracleDataSource dataSource = new OracleDataSource();
        dataSource.setUser(databaseProperties.getOracleUsername());
        dataSource.setPassword(databaseProperties.getOraclePassword());
        dataSource.setURL(databaseProperties.getOracleUrl());
        dataSource.setImplicitCachingEnabled(true);
        dataSource.setFastConnectionFailoverEnabled(true);
        return dataSource;
    }
    /**
     * Creates a {@link JdbcMetadataStore} for the de-duplication logic.
     *
     * <p>This method uses the "REGION" column of the metadatastore table to differentiate between
     * multiple apps. The value of the "REGION" column is set equal to the app-name.
     *
     * @return a JDBC metadata store
     * @throws SQLException in case an exception occurs during connection to SQL database
     */
    @Bean
    public MetadataStore metadataStore() throws SQLException {
        JdbcMetadataStore jdbcMetadataStore = new JdbcMetadataStore(dataSource());
        if (!Constants.app2.equals(appName)) {
            jdbcMetadataStore.setRegion(appName);
        }
        return jdbcMetadataStore;
    }

    @Bean
    @ServiceActivator(inputChannel = "sftpChannel")
    public MessageHandler handler() {
        return message -> {
            File file = (File) message.getPayload();
            FileDto fileDto = new FileDto(file);
            fileHandler.handle(fileDto);
            LOG.info("controller is here ");
            try {
                if (sftpEnvironment.isDeleteLocalFiles()) {
                    Files.deleteIfExists(Paths.get(file.toString()));
                }
            } catch (IOException e) {
                // TODO retry/report/handle gracefully
                LOG.error(String.format("MessageHandler had error message=%s", message), e);
            }
        };
    }
}
The synchronizer doesn't support recursion. Use the outbound gateway with a recursive mget command instead.
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-outbound-gateway
Using the mget Command
mget retrieves multiple remote files based on a pattern and supports the following options:
-P: Preserve the timestamps of the remote files.
-R: Retrieve the entire directory tree recursively.
-x: Throw an exception if no files match the pattern (otherwise, an empty list is returned).
-D: Delete each remote file after successful transfer. If the transfer is ignored, the remote file is not deleted, because the FileExistsMode is IGNORE and the local file already exists.
The message payload resulting from an mget operation is a List<File> object (that is, a List of File objects, each representing a retrieved file).
EDIT
Here is an example using the Java DSL:
@SpringBootApplication
public class So75180789Application {

    public static void main(String[] args) {
        SpringApplication.run(So75180789Application.class, args);
    }

    @Bean
    IntegrationFlow flow(DefaultSftpSessionFactory sf) {
        return IntegrationFlows.fromSupplier(() -> "foo/*", // remote dir
                e -> e.poller(Pollers.fixedDelay(5000)))
                .handle(Sftp.outboundGateway(sf, Command.MGET, "payload")
                        .options(Option.RECURSIVE)
                        .fileExistsMode(FileExistsMode.IGNORE)
                        .localDirectoryExpression("'/tmp/' + #remoteDirectory")) // re-create tree locally
                .split()
                .log()
                .get();
    }

    @Bean
    DefaultSftpSessionFactory sf(@Value("${host}") String host,
            @Value("${username}") String user, @Value("${pw}") String pw) {
        DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
        sf.setHost(host);
        sf.setUser(user);
        sf.setPassword(pw);
        sf.setAllowUnknownKeys(true);
        return sf;
    }
}

Read New File While Doing Processing For A Field In Spring Batch

I have a fixed-length input file that I am reading using Spring Batch.
I have already implemented the Job, Step, Processor, etc.
Here is the sample code:
@Configuration
public class BatchConfig {

    private JobBuilderFactory jobBuilderFactory;
    private StepBuilderFactory stepBuilderFactory;

    @Value("${inputFile}")
    private Resource resource;

    @Autowired
    public BatchConfig(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
        this.jobBuilderFactory = jobBuilderFactory;
        this.stepBuilderFactory = stepBuilderFactory;
    }

    @Bean
    public Job job() {
        return this.jobBuilderFactory.get("JOB-Load")
                .start(fileReadingStep())
                .build();
    }

    @Bean
    public Step fileReadingStep() {
        return stepBuilderFactory.get("File-Read-Step1")
                .<Employee, EmpOutput>chunk(1000)
                .reader(itemReader())
                .processor(new CustomFileProcesser())
                .writer(new CustomFileWriter())
                .faultTolerant()
                .skipPolicy(skipPolicy())
                .build();
    }

    @Bean
    public FlatFileItemReader<Employee> itemReader() {
        FlatFileItemReader<Employee> flatFileItemReader = new FlatFileItemReader<Employee>();
        flatFileItemReader.setResource(resource);
        flatFileItemReader.setName("File-Reader");
        flatFileItemReader.setLineMapper(LineMapper());
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<Employee> LineMapper() {
        DefaultLineMapper<Employee> defaultLineMapper = new DefaultLineMapper<Employee>();
        FixedLengthTokenizer fixedLengthTokenizer = new FixedLengthTokenizer();
        fixedLengthTokenizer.setNames(new String[] { "employeeId", "employeeName", "employeeSalary" });
        fixedLengthTokenizer.setColumns(new Range[] { new Range(1, 9), new Range(10, 20), new Range(20, 30) });
        fixedLengthTokenizer.setStrict(false);
        defaultLineMapper.setLineTokenizer(fixedLengthTokenizer);
        defaultLineMapper.setFieldSetMapper(new CustomFieldSetMapper());
        return defaultLineMapper;
    }

    @Bean
    public JobSkipPolicy skipPolicy() {
        return new JobSkipPolicy();
    }
}
For processing, I have added some sample code of what I need. But if I add a BufferedReader here, the job takes much longer to run.
@Component
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput> {

    @Override
    public EmpOutput process(Employee item) throws Exception {
        EmpOutput emp = new EmpOutput();
        emp.setEmployeeSalary(checkSal(item.getEmployeeSalary()));
        return emp;
    }

    public String checkSal(String sal) {
        // need to read the other file
        // required to do some kind of validation
        // after that the final result needs to be returned
        File f1 = new File("C:\\Users\\John\\New\\salary.txt");
        FileReader fr;
        try {
            fr = new FileReader(f1);
            BufferedReader br = new BufferedReader(fr);
            String s = br.readLine();
            while (s != null) {
                String value = s.substring(5, 7);
                if (value.equals(sal))
                    sal = value;
                else
                    sal = "5000";
                s = br.readLine();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return sal;
    }

    // other fields need to be checked by reading several different files.
    // These new files contain more than 30k records.
    // All are fixed-length files.
    // I need to get the field by giving the index.
}
While processing one or more of the fields, I need to check against another file by reading it (a file I will read from the file system or cloud).
While processing the data for 5 fields, I need to read 5 different files; I will check the field details inside those files and then generate the result, which will be processed further.
You can cache the content of the file in memory and do your check against the cache instead of re-reading the entire file from disk for each item.
You can find an example here: Spring Batch With Annotation and Caching.
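As a minimal sketch of that idea (the class name is illustrative, and the substring(5, 7) field extraction is modeled on the question's checkSal(), not a prescribed API), the lookup file can be loaded once and queried per item:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Loads the lookup file once; process() can then check the in-memory set
// instead of re-reading the whole file from disk for every item.
public class SalaryCache {

    private final Set<String> values = new HashSet<>();

    public SalaryCache(String path) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = br.readLine()) != null) {
                values.add(line.substring(5, 7)); // same field checkSal() extracts
            }
        }
    }

    public boolean contains(String sal) {
        return values.contains(sal);
    }
}

The processor would then hold a SalaryCache field, built once (for example in its constructor or a StepExecutionListener), and checkSal() becomes a hash lookup instead of a full file scan per item.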

Spring Boot WebSocket URL Not Responding and RxJS Call Repetition?

I'm trying to follow a guide to WebSockets at https://www.devglan.com/spring-boot/spring-boot-angular-websocket
I'd like it to respond to ws://localhost:8448/wsb/softlayer-cost-file, but I'm sure I misunderstood something. I'd like to get it to receive a binary file and issue periodic updates as the file is being processed.
Questions are:
Why does Spring not respond to my requests, despite all the URLs I have tried (see below)?
Does my RxJS call run once and then conclude, or does it keep running until some closure has happened? Sorry to ask what might be obvious to others.
On my Spring Boot Server start, I see no errors. After about 5-7 minutes of running, I saw the following log message:
INFO o.s.w.s.c.WebSocketMessageBrokerStats - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 6, active threads = 1, queued tasks = 0, completed tasks = 5]
I've pointed my browser at these URLs and can't get the Spring Boot server to show any reaction:
ws://localhost:8448/app/message
ws://localhost:8448/greeting/app/message
ws://localhost:8448/topic
ws://localhost:8448/queue
(I got the initial request formed in Firefox, then clicked edit/resend to try again).
WebSocketConfig.java
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Autowired
    CostFileUploadWebSocketHandler costFileUploadWebSocketHandler;

    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new SocketTextHandler(), "/wst");
        registry.addHandler(costFileUploadWebSocketHandler, "/wsb/softlayer-cost-file");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic/", "/queue/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/greeting").setAllowedOrigins("*");
        // .withSockJS();
    }
}
CostFileUploadWebSocketHandler.java
@Component
public class CostFileUploadWebSocketHandler extends BinaryWebSocketHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private SoftLayerJobService softLayerJobService;
    private SoftLayerService softLayerService;
    private AuthenticationFacade authenticationFacade;

    @Autowired
    CostFileUploadWebSocketHandler(SoftLayerJobService softLayerJobService, SoftLayerService softLayerService,
            AuthenticationFacade authenticationFacade) {
        this.softLayerJobService = softLayerJobService;
        this.softLayerService = softLayerService;
        this.authenticationFacade = authenticationFacade;
    }

    Map<WebSocketSession, FileUploadInFlight> sessionToFileMap = new WeakHashMap<>();

    @Override
    public boolean supportsPartialMessages() {
        return true;
    }

    class WebSocketProgressReporter implements ProgressReporter {

        private WebSocketSession session;

        public WebSocketProgressReporter(WebSocketSession session) {
            this.session = session;
        }

        @Override
        public void reportCurrentProgress(BatchStatus currentBatchStatus, long currentPercentage) {
            try {
                session.sendMessage(new TextMessage("BatchStatus " + currentBatchStatus));
                session.sendMessage(new TextMessage("Percentage Complete " + currentPercentage));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    @Override
    protected void handleBinaryMessage(WebSocketSession session, BinaryMessage message) throws Exception {
        ByteBuffer payload = message.getPayload();
        FileUploadInFlight inflightUpload = sessionToFileMap.get(session);
        if (inflightUpload == null) {
            throw new IllegalStateException("This is not expected");
        }
        inflightUpload.append(payload);
        if (message.isLast()) {
            File fileNameSaved = save(inflightUpload.name, "websocket", inflightUpload.bos.toByteArray());
            BatchStatus currentBatchStatus = BatchStatus.UNKNOWN;
            long percentageComplete;
            ProgressReporter progressReporter = new WebSocketProgressReporter(session);
            SoftLayerCostFileJobExecutionThread softLayerCostFileJobExecutionThread =
                    new SoftLayerCostFileJobExecutionThread(softLayerService, softLayerJobService, fileNameSaved, progressReporter);
            logger.info("In main thread about to begin separate thread");
            ForkJoinPool.commonPool().submit(softLayerCostFileJobExecutionThread);
            while (!softLayerCostFileJobExecutionThread.jobDone());
            // softLayerCostFileJobExecutionThread.run();
            // Wait for above to complete somehow
            // StepExecution foundStepExecution = jobExplorer.getJobExecution(
            //         jobExecutionThread.getJobExecutionResult().getJobExecution().getId()
            // ).getStepExecutions().stream().filter(stepExecution -> stepExecution.getStepName().equals("softlayerUploadFile")).findFirst().orElseGet(null);
            // if (!"COMPLETED".equals(jobExecutionResult.getExitStatus())) {
            //     throw new UploadFileException(file.getOriginalFilename() + " exit status: " + jobExecutionResult.getExitStatus());
            // }
            logger.info("In main thread after separate thread submitted");
            session.sendMessage(new TextMessage("UPLOAD " + inflightUpload.name));
            session.close();
            sessionToFileMap.remove(session);
            logger.info("Uploaded " + inflightUpload.name);
        }
        String response = "Upload Chunk: size " + payload.array().length;
        logger.debug(response);
    }

    private File save(String fileName, String prefix, byte[] data) throws IOException {
        Path basePath = Paths.get(".", "uploads", prefix, UUID.randomUUID().toString());
        logger.info("Saving incoming cost file " + fileName + " to " + basePath);
        Files.createDirectories(basePath);
        FileChannel channel = new FileOutputStream(Paths.get(basePath.toString(), fileName).toFile(), false).getChannel();
        channel.write(ByteBuffer.wrap(data));
        channel.close();
        return new File(basePath.getFileName().toString());
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        sessionToFileMap.put(session, new FileUploadInFlight(session));
    }

    static class FileUploadInFlight {

        private final Logger logger = LoggerFactory.getLogger(this.getClass());

        String name;
        String uniqueUploadId;
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        /**
         * Fragile constructor - beware not prod ready
         * @param session
         */
        FileUploadInFlight(WebSocketSession session) {
            String query = session.getUri().getQuery();
            String uploadSessionIdBase64 = query.split("=")[1];
            String uploadSessionId = new String(Base64Utils.decodeUrlSafe(uploadSessionIdBase64.getBytes()));
            List<String> sessionIdentifiers = Splitter.on("\\").splitToList(uploadSessionId);
            String uniqueUploadId = session.getRemoteAddress().toString() + sessionIdentifiers.get(0);
            String fileName = sessionIdentifiers.get(1);
            this.name = fileName;
            this.uniqueUploadId = uniqueUploadId;
            logger.info("Preparing upload for " + this.name + " uploadSessionId " + uploadSessionId);
        }

        public void append(ByteBuffer byteBuffer) throws IOException {
            bos.write(byteBuffer.array());
        }
    }
}
Below is a snippet of the Angular code where I make the call to the WebSocket. The service is intended to receive a file, then provide regular updates of the percentage complete until the processing is done. Does this call need to be in a loop, or does the socket run until it's closed?
Angular Snippet of call to WebSocket:
this.softlayerService.uploadBlueReportFile(this.blueReportFile)
    .subscribe(data => {
        this.showLoaderBlueReport = false;
        this.successBlueReport = true;
        this.blueReportFileName = "No file selected";
        this.responseBlueReport = 'File '.concat(data.fileName).concat(' ').concat('is ').concat(data.exitStatus);
        this.blueReportSelected = false;
        this.getCurrentUserFiles();
    },
    (error) => {
        if (error.status === 504) {
            this.showLoaderBlueReport = false;
            this.stillProcessing = true;
        } else {
            this.showLoaderBlueReport = false;
            this.displayUploadBlueReportsError(error, 'File upload failed');
        }
    });

Spring Integration: how to unit test a poller advice

I'm trying to unit test an advice on the poller which blocks execution of the Mongo channel adapter until a certain condition is met (i.e. all messages from this batch have been processed).
The flow looks as follow:
IntegrationFlows.from(MongoDb.reactiveInboundChannelAdapter(mongoDbFactory,
        new Query().with(Sort.by(Sort.Direction.DESC, "modifiedDate")).limit(1))
        .collectionName("metadata")
        .entityClass(Metadata.class)
        .expectSingleResult(true),
        e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(pollingIntervalSeconds))
                .advice(this.advices.waitUntilCompletedAdvice())))
        .handle((p, h) -> {
            this.advices.waitUntilCompletedAdvice().setWait(true);
            return p;
        })
        .handle(doSomething())
        .channel(Channels.DOCUMENT_HEADER.name())
        .get();
And the following advice bean:
@Bean
public WaitUntilCompletedAdvice waitUntilCompletedAdvice() {
    DynamicPeriodicTrigger trigger = new DynamicPeriodicTrigger(Duration.ofSeconds(1));
    return new WaitUntilCompletedAdvice(trigger);
}
And the advice itself:
public class WaitUntilCompletedAdvice extends SimpleActiveIdleMessageSourceAdvice {

    AtomicBoolean wait = new AtomicBoolean(false);

    public WaitUntilCompletedAdvice(DynamicPeriodicTrigger trigger) {
        super(trigger);
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        if (getWait())
            return false;
        return true;
    }

    public boolean getWait() {
        return wait.get();
    }

    public void setWait(boolean newWait) {
        if (getWait() == newWait)
            return;
        while (true) {
            if (wait.compareAndSet(!newWait, newWait)) {
                return;
            }
        }
    }
}
I'm using the following test for testing the flow:
@Test
public void testClaimPoollingAdapterFlow() throws Exception {
    // given
    ArgumentCaptor<Message<?>> captor = messageArgumentCaptor();
    CountDownLatch receiveLatch = new CountDownLatch(1);
    MessageHandler mockMessageHandler = mockMessageHandler(captor).handleNext(m -> receiveLatch.countDown());
    this.mockIntegrationContext.substituteMessageHandlerFor("retrieveDocumentHeader", mockMessageHandler);
    LocalDateTime modifiedDate = LocalDateTime.now();
    ProcessingMetadata data = Metadata.builder()
            .modifiedDate(modifiedDate)
            .build();
    assert !this.advices.waitUntilCompletedAdvice().getWait();
    // when
    itf.getInputChannel().send(new GenericMessage<>(Mono.just(data)));
    // then
    assertThat(receiveLatch.await(10, TimeUnit.SECONDS)).isTrue();
    verify(mockMessageHandler).handleMessage(any());
    assertThat(captor.getValue().getPayload()).isEqualTo(modifiedDate);
    assert this.advices.waitUntilCompletedAdvice().getWait();
}
This works fine, but when I send another message to the input channel, it is still processed without respecting the advice.
Is this intended behaviour? If so, how can I verify in a unit test that the poller really honours the advice?
itf.getInputChannel().send(new GenericMessage<>(Mono.just(data)));
That bypasses the poller and sends the message directly.
You can unit test that the advice has been configured correctly by calling beforeReceive() directly from your test, for example:
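A minimal sketch of that first option, assuming the advice bean is reachable from the test (this advice ignores its MessageSource argument, so null suffices here):

@Test
public void adviceBlocksPollWhileWaiting() {
    WaitUntilCompletedAdvice advice = this.advices.waitUntilCompletedAdvice();
    advice.setWait(true);
    assertThat(advice.beforeReceive(null)).isFalse(); // poll would be skipped
    advice.setWait(false);
    assertThat(advice.beforeReceive(null)).isTrue(); // poll would proceed
}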
Or you can create a dummy test flow with the same advice:
IntegrationFlows.from(() -> "foo", e -> e.poller(...))
        ...
And verify that just one message is sent.
Example implementation:
@Test
public void testWaitingActivate() {
    // given
    this.advices.waitUntilCompletedAdvice().setWait(true);
    // when
    Message<ProcessingMetadata> receive = (Message<ProcessingMetadata>) testChannel.receive(3000);
    // then
    assertThat(receive).isNull();
}

@Test
public void testWaitingInactive() {
    // given
    this.advices.waitUntilCompletedAdvice().setWait(false);
    // when
    Message<ProcessingMetadata> receive = (Message<ProcessingMetadata>) testChannel.receive(3000);
    // then
    assertThat(receive).isNotNull();
}

@Configuration
@EnableIntegration
public static class Config {

    @Autowired
    private Advices advices;

    @Bean
    public PollableChannel testChannel() {
        return new QueueChannel();
    }

    @Bean
    public IntegrationFlow fakeFlow() {
        this.advices.waitUntilCompletedAdvice().setWait(true);
        return IntegrationFlows.from(() -> "foo", e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(1))
                .advice(this.advices.waitUntilCompletedAdvice()))).channel("testChannel").get();
    }
}

SftpInboundFileSynchronizer not synchronizing

I have the following SFTP file synchronizer:
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory(applicationProperties.getSftpDirectory());
    CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<ChannelSftp.LsEntry>();
    compositeFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(store, "sftp"));
    compositeFileListFilter.addFilter(new SftpSimplePatternFileListFilter(applicationProperties.getLoadFileNamePattern()));
    fileSynchronizer.setFilter(compositeFileListFilter);
    fileSynchronizer.setPreserveTimestamp(true);
    return fileSynchronizer;
}
When the application first runs, it synchronizes to the local directory with the remote SFTP site directory. However, it fails to pick up any subsequent changes in the remote SFTP directory files.
It is scheduled to poll as follows:
@Bean
@InboundChannelAdapter(autoStartup = "true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(applicationProperties.getScheduledLoadDirectory());
    source.setAutoCreateLocalDirectory(true);
    ChainFileListFilter<File> chainFileFilter = new ChainFileListFilter<File>();
    chainFileFilter.addFilter(new LastModifiedFileListFilter());
    FileSystemPersistentAcceptOnceFileListFilter fs = new FileSystemPersistentAcceptOnceFileListFilter(store, "dailyfilesystem");
    fs.setFlushOnUpdate(true);
    chainFileFilter.addFilter(fs);
    source.setLocalFilter(chainFileFilter);
    source.setCountsEnabled(true);
    return source;
}

@Bean
public PollerMetadata pollerMetadata(RetryCompoundTriggerAdvice retryCompoundTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    List<Advice> adviceChain = new ArrayList<Advice>();
    adviceChain.add(retryCompoundTriggerAdvice);
    pollerMetadata.setAdviceChain(adviceChain);
    pollerMetadata.setTrigger(compoundTrigger());
    pollerMetadata.setMaxMessagesPerPoll(1);
    return pollerMetadata;
}

@Bean
public CompoundTrigger compoundTrigger() {
    CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
    return compoundTrigger;
}

@Bean
public CronTrigger primaryTrigger() {
    return new CronTrigger(applicationProperties.getSchedule());
}

@Bean
public PeriodicTrigger secondaryTrigger() {
    return new PeriodicTrigger(applicationProperties.getRetryInterval());
}
In the afterReceive method of RetryCompoundTriggerAdvice, which extends AbstractMessageSourceAdvice, I get a null result after the first run.
How can I configure the synchronizer such that it synchronizes periodically (rather than just once at app startup)?
Update
I have found that when the SFTP site has no file in its directory at application startup, the SftpInboundFileSynchronizer syncs at every polling interval; I can see com.jcraft.jsch log statements at every poll. But as soon as a file is found on the SFTP site, it syncs to get that file locally and then never syncs again.
Update 2
My apologies... here's the custom code:
@Component
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {

    private final static Logger logger = LoggerFactory.getLogger(RetryCompoundTriggerAdvice.class);

    private final CompoundTrigger compoundTrigger;
    private final Trigger override;
    private final ApplicationProperties applicationProperties;
    private final Mail mail;
    private int attempts = 0;
    private boolean expectedMessage;
    private boolean inProcess;

    public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger,
            @Qualifier("secondaryTrigger") Trigger override,
            ApplicationProperties applicationProperties,
            Mail mail) {
        this.compoundTrigger = compoundTrigger;
        this.override = override;
        this.applicationProperties = applicationProperties;
        this.mail = mail;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        logger.debug("!inProcess is " + !inProcess);
        return !inProcess;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (expectedMessage) {
            logger.info("Received expected load file. Setting cron trigger.");
            this.compoundTrigger.setOverride(null);
            expectedMessage = false;
            return result;
        }
        final int maxOverrideAttempts = applicationProperties.getMaxFileRetry();
        attempts++;
        if (result == null && attempts < maxOverrideAttempts) {
            logger.info("Unable to find file after " + attempts + " attempt(s). Will reattempt");
            this.compoundTrigger.setOverride(this.override);
        } else if (result == null && attempts >= maxOverrideAttempts) {
            String message = "Unable to find daily file" +
                    " after " + attempts +
                    " attempt(s). Will not reattempt since max number of attempts is set at " +
                    maxOverrideAttempts + ".";
            logger.warn(message);
            mail.sendAdminsEmail("Missing Load File", message);
            attempts = 0;
            this.compoundTrigger.setOverride(null);
        } else {
            attempts = 0;
            // keep periodically checking until we are certain
            // that this message is the expected message
            this.compoundTrigger.setOverride(this.override);
            inProcess = true;
            logger.info("Found load file");
        }
        return result;
    }

    public void foundExpectedMessage(boolean found) {
        logger.debug("Expected message was found? " + found);
        this.expectedMessage = found;
        inProcess = false;
    }
}
You have the logic:
@Override
public boolean beforeReceive(MessageSource<?> source) {
    logger.debug("!inProcess is " + !inProcess);
    return !inProcess;
}
Let's study its JavaDoc:
/**
 * Subclasses can decide whether to proceed with this poll.
 * @param source the message source.
 * @return true to proceed.
 */
public abstract boolean beforeReceive(MessageSource<?> source);
And the logic around this method:
Message<?> result = null;
if (beforeReceive((MessageSource<?>) target)) {
    result = (Message<?>) invocation.proceed();
}
return afterReceive(result, (MessageSource<?>) target);
So, we call invocation.proceed() (the SFTP synchronization) only if beforeReceive() returns true. In your case that happens only while !inProcess.
In your afterReceive() implementation you set inProcess = true whenever you have a result, i.e. on the first successful attempt. And it looks like you only reset it back to false when someone calls foundExpectedMessage().
So, what do you expect from us as an answer to your problem? It is really in your custom code and not related to the Framework. Sorry...
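For what it's worth, a purely illustrative sketch of one way to guarantee the flag is always reset, assuming the downstream component (the JobRunner, for example) can report back in all outcomes:

try {
    boolean expected = checkFileHasExpectedData(); // hypothetical downstream check
    retryCompoundTriggerAdvice.foundExpectedMessage(expected);
} catch (Exception e) {
    retryCompoundTriggerAdvice.foundExpectedMessage(false); // never leave inProcess stuck at true
    throw e;
}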
