Writing to multiple unrelated tables in a Spring Batch writer - spring

Can we have a writer which writes to 2 different, unrelated tables simultaneously in Spring Batch? Along with the main data, I need to store some metadata in a different table. How can I go about it?

Please find an example below. Let's say you have 3 tables to write to:
@Bean
public CompositeItemWriter compositeWriter() throws Exception {
CompositeItemWriter compositeItemWriter = new CompositeItemWriter();
List<ItemWriter> writers = new ArrayList<ItemWriter>();
writers.add(firstTableWriter());
writers.add(secondTableWriter());
writers.add(thirdTableWriter());
compositeItemWriter.setDelegates(writers);
return compositeItemWriter;
}
@Bean
public JdbcBatchItemWriter<YourDTO> firstTableWriter() {
JdbcBatchItemWriter<YourDTO> databaseItemWriter = new JdbcBatchItemWriter<>();
databaseItemWriter.setDataSource(dataSource);
databaseItemWriter.setSql("INSERT INTO FIRSTTABLE (MY_VALUE) VALUES (?)"); // adapt table/column names to your schema
ItemPreparedStatementSetter<YourDTO> invoicePreparedStatementSetter = new FirstTableSetter();
databaseItemWriter.setItemPreparedStatementSetter(invoicePreparedStatementSetter);
return databaseItemWriter;
}
@Bean
public JdbcBatchItemWriter<YourDTO> secondTableWriter() {
JdbcBatchItemWriter<YourDTO> databaseItemWriter = new JdbcBatchItemWriter<>();
databaseItemWriter.setDataSource(dataSource);
databaseItemWriter.setSql("INSERT INTO SECONDTABLE (MY_VALUE) VALUES (?)");
ItemPreparedStatementSetter<YourDTO> invoicePreparedStatementSetter = new SecondTableSetter();
databaseItemWriter.setItemPreparedStatementSetter(invoicePreparedStatementSetter);
return databaseItemWriter;
}
@Bean
public JdbcBatchItemWriter<YourDTO> thirdTableWriter() {
JdbcBatchItemWriter<YourDTO> databaseItemWriter = new JdbcBatchItemWriter<>();
databaseItemWriter.setDataSource(dataSource);
databaseItemWriter.setSql("INSERT INTO THIRDTABLE (MY_VALUE) VALUES (?)");
ItemPreparedStatementSetter<YourDTO> invoicePreparedStatementSetter = new ThirdTableSetter();
databaseItemWriter.setItemPreparedStatementSetter(invoicePreparedStatementSetter);
return databaseItemWriter;
}
// Setter class example
public class FirstTableSetter implements ItemPreparedStatementSetter<YourDTO> {
@Override
public void setValues(YourDTO yourDTO, PreparedStatement preparedStatement) throws SQLException {
preparedStatement.setString(1, yourDTO.getMyValue());
}
}
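For completeness, the composite writer is then used like any other writer in a chunk-oriented step. A minimal sketch, assuming a reader bean for YourDTO already exists (the reader bean name and chunk size here are illustrative, not from the question):
@Bean
public Step multiTableStep(StepBuilderFactory stepBuilderFactory, ItemReader<YourDTO> yourDtoReader) throws Exception {
    // Every chunk is handed to each delegate writer in turn, so the same items are
    // written to all three tables within the same chunk transaction.
    return stepBuilderFactory.get("multiTableStep")
            .<YourDTO, YourDTO>chunk(100)
            .reader(yourDtoReader)
            .writer(compositeWriter())
            .build();
}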

Related

Recursively read files from remote server present in subdirectories with Spring Integration

I have a working flow for getting files from a single folder on a remote server using an inbound adapter, but I want to get files from all subfolders of the parent folder on the remote server.
I have code like this:
@Bean
public SessionFactory<SftpClient.DirEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost("localhost");
factory.setPort(port);
factory.setUser("foo");
factory.setPassword("foo");
factory.setAllowUnknownKeys(true);
factory.setTestSession(true);
return new CachingSessionFactory<>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory("foo");
fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter("*.xml"));
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> sftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
source.setLocalDirectory(new File("sftp-inbound"));
source.setAutoCreateLocalDirectory(true);
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
source.setMaxFetchSize(1);
return source;
}
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload());
}
};
}`
But instead of a single folder, I want to get files from all subfolders present in the foo directory.
If possible, please help with the full code.
@GaryRussell
Thank you so much for your early response. I have made some changes according to your suggested code; the app starts but files are not getting picked up by the application.
CompositeFileListFilter<LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
SftpPersistentAcceptOnceFileListFilter fileListFilter =
new SftpPersistentAcceptOnceFileListFilter(
(JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
if (Constants.APP1.equals(appName) || Constants.APP2.equals(appName)) {
SftpRegexPatternFileListFilter regexPatternFileListFilter =
new SftpRegexPatternFileListFilter(Pattern.compile("^IL.*"));
compositeFileListFilter.addFilter(regexPatternFileListFilter);
}
compositeFileListFilter.addFilter(fileListFilter);
return IntegrationFlows.fromSupplier(
() -> sftpEnvironment.getSftpGLSIncomingDir(), // remote dir
e -> e.autoStartup(true).poller(pollerMetada()))
.handle(
Sftp.outboundGateway(sftpSessionFactory(), Command.MGET, "payload")
.options(Option.RECURSIVE)
.filter(compositeFileListFilter)
.fileExistsMode(FileExistsMode.IGNORE)
.localDirectoryExpression("'/tmp/' + #remoteDirectory")) // re-create tree locally
.split()
.log()
.get();
@GaryRussell
I have changed my code this new way; now it is only partially processing files, meaning that out of 10 files, for example, it only processes 5 or 6. I am not able to figure out the main issue. I also have some open challenges, which I mention below:
It is able to read files from remote subdirectories and store them in a local directory, but I want to process these files on some other sftpChannel, if possible without storing them locally.
I also want to apply some deduplication technique using a database, which will help me avoid duplicate file processing.
public class SFTPPollerService {
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
//code
return factory;
}
//OLD code
// @Bean
// public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
// SftpInboundFileSynchronizer fileSynchronizer =
// new SftpInboundFileSynchronizer(sftpSessionFactory());
// fileSynchronizer.setDeleteRemoteFiles(sftpEnvironment.isDeleteRemoteFiles());
// fileSynchronizer.setRemoteDirectory(sftpEnvironment.getSftpGLSIncomingDir());
// fileSynchronizer.setPreserveTimestamp(true);
// CompositeFileListFilter<LsEntry> compositeFileListFilter = new
// CompositeFileListFilter<>();
// SftpPersistentAcceptOnceFileListFilter fileListFilter =
// new SftpPersistentAcceptOnceFileListFilter(
// (JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
// if (Constants.app2.equals(appName)
// || Constants.app1.equals(appName)) {
// SftpRegexPatternFileListFilter regexPatternFileListFilter =
// new SftpRegexPatternFileListFilter(Pattern.compile("*.txt"));
// compositeFileListFilter.addFilter(regexPatternFileListFilter);
// }
// compositeFileListFilter.addFilter(fileListFilter);
// fileSynchronizer.setFilter(compositeFileListFilter);
// return fileSynchronizer;
// }
//
// @Bean
// @InboundChannelAdapter(channel = "sftpChannel", poller = @Poller("pollerMetada"))
// public MessageSource<File> sftpMessageSource() {
// SftpInboundFileSynchronizingMessageSource source =
// new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
// source.setLocalDirectory(new File(sftpEnvironment.getSftpLocalDir()));
//
// source.setAutoCreateLocalDirectory(true);
//
// try {
// source.setLocalFilter(
// (FileSystemPersistentAcceptOnceFileListFilter)
// context.getBean("filelistFilter"));
// } catch (Exception e) {
// LOG.error(
// "Exception caught while setting local filter on
// SftpInboundFileSynchronizingMessageSource",
// e);
// }
// source.setMaxFetchSize(sftpEnvironment.getMaxFetchFileSize());
//
// return source;
// }
//new Code
@Bean
public IntegrationFlow sftpInboundFlow() {
CompositeFileListFilter<LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
SftpPersistentAcceptOnceFileListFilter fileListFilter =
new SftpPersistentAcceptOnceFileListFilter(
(JdbcMetadataStore) context.getBean("metadataStore"), "REMOTE");
if (Constants.app2.equals(appName) || Constants.app1.equals(appName)) {
SftpRegexPatternFileListFilter regexPatternFileListFilter =
new SftpRegexPatternFileListFilter(Pattern.compile("(subDir | *.txt)"));
compositeFileListFilter.addFilter(regexPatternFileListFilter);
}
fileListFilter.setForRecursion(true);
FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter = (FileSystemPersistentAcceptOnceFileListFilter) context.getBean(
"filelistFilter");
compositeFileListFilter.addFilter(fileListFilter);
// IntegrationFlow ir =
// IntegrationFlows.from(
// Sftp.inboundAdapter(sftpSessionFactory())
// .preserveTimestamp(true)
// .remoteDirectory(sftpEnvironment.getSftpGLSIncomingDir())
// .deleteRemoteFiles(sftpEnvironment.isDeleteRemoteFiles())
// .filter(compositeFileListFilter)
// .autoCreateLocalDirectory(true)
// .localDirectory(new File(sftpEnvironment.getSftpLocalDir())),
// e -> e.autoStartup(true).poller(pollerMetada()))
// .handle(handler())
// .get();
return IntegrationFlows.fromSupplier(
() -> sftpEnvironment.getSftpGLSIncomingDir(), // remote dir
e -> e.autoStartup(true).poller(pollerMetada()))
.handle(
Sftp.outboundGateway(sftpSessionFactory(), Command.MGET, "payload")
.options(Option.RECURSIVE)
.fileExistsMode(FileExistsMode.IGNORE)
.regexFileNameFilter("(dsv[0-9]|.*.xml)")
// .filter(compositeFileListFilter)
.localDirectoryExpression("'user/localDir/test/'"))
// .handle(handler())
// .patternFileNameFilter(".*\\.xml")) // re-create tree locally
.split()
.channel("sftpChannel")
// .handle(handler())
.log()
.get();
}
@Bean
public PollerMetadata pollerMetada() {
PollerMetadata pm = new PollerMetadata();
ExpressionEvaluatingTransactionSynchronizationProcessor processor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("payload.delete()");
processor.setAfterRollbackExpression(exp);
TransactionSynchronizationFactory tsf = new DefaultTransactionSynchronizationFactory(processor);
pm.setTransactionSynchronizationFactory(tsf);
List<Advice> advices = new ArrayList<>();
advices.add(compoundTriggerAdvice());
pm.setAdviceChain(advices);
pm.setTrigger(compoundTrigger());
pm.setMaxMessagesPerPoll(sftpEnvironment.getMaxMessagesPerPoll());
return pm;
}
@Bean
public CronTrigger cronTrigger() {
if (LOG.isDebugEnabled()) {
return new CronTrigger(sftpEnvironment.getPollerCronExpressionWhenDebugModeIsEnabled());
} else {
return new CronTrigger(sftpEnvironment.getPollerCronExpression());
}
}
@Bean
public PeriodicTrigger periodicTrigger() {
return new PeriodicTrigger(sftpEnvironment.getPeriodicTriggerInMillis());
}
@Bean
public CompoundTrigger compoundTrigger() {
return new CompoundTrigger(cronTrigger());
}
@Bean
public CompoundTriggerAdvice compoundTriggerAdvice() {
return new CompoundTriggerAdvice(compoundTrigger(), periodicTrigger());
}
@Bean
public FileSystemPersistentAcceptOnceFileListFilter filelistFilter(MetadataStore datastore) {
return new FileSystemPersistentAcceptOnceFileListFilter((JdbcMetadataStore) datastore, "INT");
}
@Bean
public PlatformTransactionManager transactionManager() {
return new org.springframework.integration.transaction.PseudoTransactionManager();
}
@Bean
DataSource dataSource() throws SQLException {
OracleDataSource dataSource = new OracleDataSource();
dataSource.setUser(databaseProperties.getOracleUsername());
dataSource.setPassword(databaseProperties.getOraclePassword());
dataSource.setURL(databaseProperties.getOracleUrl());
dataSource.setImplicitCachingEnabled(true);
dataSource.setFastConnectionFailoverEnabled(true);
return dataSource;
}
/**
* Creates a {@link JdbcMetadataStore} for the de-duplication logic.
*
* <p>This method uses the "REGION" column of the metadatastore table to differentiate between
* multiple apps. The value of the "REGION" column is set equal to the app-name.
*
* @return a JDBC metadata store
* @throws SQLException in case an exception occurs during connection to SQL database
*/
@Bean
public MetadataStore metadataStore() throws SQLException {
JdbcMetadataStore jdbcMetadataStore = new JdbcMetadataStore(dataSource());
if (!Constants.app2.equals(appName)) {
jdbcMetadataStore.setRegion(appName);
}
return jdbcMetadataStore;
}
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
return message -> {
File file = (File) message.getPayload();
FileDto fileDto = new FileDto(file);
fileHandler.handle(fileDto);
LOG.info("controller is here ");
try {
if (sftpEnvironment.isDeleteLocalFiles()) {
Files.deleteIfExists(Paths.get(file.toString()));
}
} catch (IOException e) {
// TODO retry/report/handle gracefully
LOG.error(String.format("MessageHandler had error message=%s", message), e);
}
};
}
}
The synchronizer doesn't support recursion. Use the outbound gateway with a recursive mget command instead.
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-outbound-gateway
Using the mget Command
mget retrieves multiple remote files based on a pattern and supports the following options:
-P: Preserve the timestamps of the remote files.
-R: Retrieve the entire directory tree recursively.
-x: Throw an exception if no files match the pattern (otherwise, an empty list is returned).
-D: Delete each remote file after successful transfer. If the transfer is ignored, the remote file is not deleted, because the FileExistsMode is IGNORE and the local file already exists.
The message payload resulting from an mget operation is a List<File> object (that is, a List of File objects, each representing a retrieved file).
EDIT
Here is an example using the Java DSL...
@SpringBootApplication
public class So75180789Application {
public static void main(String[] args) {
SpringApplication.run(So75180789Application.class, args);
}
@Bean
IntegrationFlow flow(DefaultSftpSessionFactory sf) {
return IntegrationFlows.fromSupplier(() -> "foo/*", // remote dir
e -> e.poller(Pollers.fixedDelay(5000)))
.handle(Sftp.outboundGateway(sf, Command.MGET, "payload")
.options(Option.RECURSIVE)
.fileExistsMode(FileExistsMode.IGNORE)
.localDirectoryExpression("'/tmp/' + #remoteDirectory")) // re-create tree locally
.split()
.log()
.get();
}
@Bean
DefaultSftpSessionFactory sf(@Value("${host}") String host,
@Value("${username}") String user, @Value("${pw}") String pw) {
DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
sf.setHost(host);
sf.setUser(user);
sf.setPassword(pw);
sf.setAllowUnknownKeys(true);
return sf;
}
}

Read New File While Doing Processing For A Field In Spring Batch

I have a fixed-length input file that I am reading using Spring Batch.
I have already implemented the Job, Step, Processor, etc.
Here is the sample code:
@Configuration
public class BatchConfig {
private JobBuilderFactory jobBuilderFactory;
private StepBuilderFactory stepBuilderFactory;
@Value("${inputFile}")
private Resource resource;
@Autowired
public BatchConfig(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
this.jobBuilderFactory = jobBuilderFactory;
this.stepBuilderFactory = stepBuilderFactory;
}
@Bean
public Job job() {
return this.jobBuilderFactory.get("JOB-Load")
.start(fileReadingStep())
.build();
}
@Bean
public Step fileReadingStep() {
return stepBuilderFactory.get("File-Read-Step1")
.<Employee,EmpOutput>chunk(1000)
.reader(itemReader())
.processor(new CustomFileProcesser())
.writer(new CustomFileWriter())
.faultTolerant()
.skipPolicy(skipPolicy())
.build();
}
@Bean
public FlatFileItemReader<Employee> itemReader() {
FlatFileItemReader<Employee> flatFileItemReader = new FlatFileItemReader<Employee>();
flatFileItemReader.setResource(resource);
flatFileItemReader.setName("File-Reader");
flatFileItemReader.setLineMapper(LineMapper());
return flatFileItemReader;
}
@Bean
public LineMapper<Employee> LineMapper() {
DefaultLineMapper<Employee> defaultLineMapper = new DefaultLineMapper<Employee>();
FixedLengthTokenizer fixedLengthTokenizer = new FixedLengthTokenizer();
fixedLengthTokenizer.setNames(new String[] { "employeeId", "employeeName", "employeeSalary" });
fixedLengthTokenizer.setColumns(new Range[] { new Range(1, 9), new Range(10, 20), new Range(20, 30)});
fixedLengthTokenizer.setStrict(false);
defaultLineMapper.setLineTokenizer(fixedLengthTokenizer);
defaultLineMapper.setFieldSetMapper(new CustomFieldSetMapper());
return defaultLineMapper;
}
@Bean
public JobSkipPolicy skipPolicy() {
return new JobSkipPolicy();
}
}
For the processing I have added some sample code of what I need, but if I add a BufferedReader here then the job takes much more time.
@Component
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput> {
@Override
public EmpOutput process(Employee item) throws Exception {
EmpOutput emp = new EmpOutput();
emp.setEmployeeSalary(checkSal(item.getEmployeeSalary()));
return emp;
}
public String checkSal(String sal) {
// need to read the another file
// required to do some kind of validation
// after that final result need to return
File f1 = new File("C:\\Users\\John\\New\\salary.txt");
FileReader fr;
try {
fr = new FileReader(f1);
BufferedReader br = new BufferedReader(fr);
String s = br.readLine();
while (s != null) {
String value = s.substring(5, 7);
if(value.equals(sal))
sal = value;
else
sal = "5000";
s = br.readLine();
}
} catch (Exception e) {
e.printStackTrace();
}
return sal;
}
// Other fields need to be checked by reading different files.
// These new files contain more than 30k records.
// All are fixed-length files.
// I need to get the field by giving the index.
}
While doing the processing for one or more fields, I need to check another file by reading it (a file I will read from the file system/cloud).
While processing the data for 5 fields I need to read 5 different files; I will check the field details inside those files and then generate the result, which will be processed further.
You can cache the content of the file in memory and do your check against the cache instead of re-reading the entire file from disk for each item.
You can find an example here: Spring Batch With Annotation and Caching.
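As a rough sketch of that idea, the checkSal logic from the question could be replaced with a one-time load of the lookup file into a Set. The file path and substring positions are taken from the question; the exact matching rule is an assumption based on the original loop:
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput> {
    // Lookup values, loaded once on first use instead of re-reading the file per item.
    private Set<String> salaryCache;

    @Override
    public EmpOutput process(Employee item) throws Exception {
        if (salaryCache == null) {
            salaryCache = loadSalaryCache();
        }
        EmpOutput emp = new EmpOutput();
        String sal = item.getEmployeeSalary();
        // Keep the salary if it appears in the lookup file, otherwise default to 5000
        // (the apparent intent of the original loop).
        emp.setEmployeeSalary(salaryCache.contains(sal) ? sal : "5000");
        return emp;
    }

    private Set<String> loadSalaryCache() throws IOException {
        Set<String> cache = new HashSet<>();
        try (BufferedReader br = new BufferedReader(new FileReader("C:\\Users\\John\\New\\salary.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.length() >= 7) {
                    cache.add(line.substring(5, 7));
                }
            }
        }
        return cache;
    }
}
The same pattern applies to the other lookup files: load each one into a Map or Set keyed by the field you need, and look items up in memory during process().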

Spring batch app does not process all items

I have a Spring Batch app which, strangely, does not process all items, and I don't know why. I am using range partitioning and a CompositeItemProcessor for the data transformation. If my reader reads 5787 records, for example (it can be more), it only processes 5704 records and the rest remain unprocessed. I hope someone can help me; thanks in advance.
My data ItemProcessor:
public class data implements ItemProcessor<beangenerico,ThreadLocal<List<beanAccountCollect>>> {
Logger logger = Logger.getLogger(data.class);
private String SP_SQL = "{call GetDetailAccount(?)}";
private String SELECT = "{call myspbyblocks (?,?)}";
private beanAccountCollect b;
private ThreadLocal<List<beanAccountCollect>> listbeanAccC = new ThreadLocal<List<beanAccountCollect>>();
private ThreadLocal<List<beanCustomer>> listbeanc=new ThreadLocal<List<beanCustomer>>();
@Autowired
private JdbcTemplate jdbcTemplate;
@Override
public ThreadLocal<List<beanAccountCollect>> process(beangenerico rangos) {
// TODO Auto-generated method stub
listbeanAccC.set(new ArrayList<beanAccountCollect>());
try {
listbeanc = this.jdbcTemplate.query(SELECT,new Object [] {rangos.getIni(),rangos.getFin()},new CustomerResultSetExtractor());
for(beanCustomer bc : listbeanc.get()) {
b = new beanAccountCollect();
b.setUsernetwork(bc.getUsernetwork());
b.setTipoagente(bc.getTipoagente());
b.setLbpar(this.jdbcTemplate.query(SP_SQL,new Object [] {bc.getCuenta()},new BeanAccountResulSetExtractor(this.jdbcTemplate)));
listbeanAccC.get().add(b);
}
}catch (Exception e) {
logger.error(e);
}
return listbeanAccC;
}
public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
this.jdbcTemplate = jdbcTemplate;
}
}
This is my transform-data processor:
public class transformDataWS implements ItemProcessor<ThreadLocal<List<beanAccountCollect>>, ThreadLocal<List<beanNewMessageBeanP>>>
{
Logger logger = Logger.getLogger(transformDataWS.class);
private ThreadLocal<List<beanNewMessageBeanP>> lstbnmbp = new ThreadLocal<List<beanNewMessageBeanP>>();
private beanNewMessageBeanP bnmbp;
public ThreadLocal<List<beanNewMessageBeanP>> process(ThreadLocal<List<beanAccountCollect>> list) throws Exception {
// TODO Auto-generated method stub
lstbnmbp.set(new ArrayList<beanNewMessageBeanP>());
List<beanParameter> lbeanPar=null;
List<NMessagePEntryBeanParray> lNMPEBa = null;
for(beanAccountCollect bc:list.get()) {
NewMessageParametrosEntryBeanP nb = new NewMessageParametrosEntryBeanP();
NMessagePEntryBeanParray bar= null;
NewMessageParametrosEntryBeanP [] ba = null;
lNMPEBa = new ArrayList<NMessagePEntryBeanParray>();
lbeanPar = new ArrayList<beanParameter>();
lbeanPar = bc.getLbpar();
bnmbp = new beanNewMessageBeanP();
bnmbp.setTipoagente(bc.getTipoagente());
bnmbp.setUsernetwork(bc.getUsernetwork());
if(lbeanPar!=null) {
for(beanParameter bpar : lbeanPar) {
ba = new NewMessageParametrosEntryBeanP[54];
bar = new NMessagePEntryBeanParray();
ba[0] = new NewMessageParametrosEntryBeanP();
ba[0].setKey("PaymentQ");
ba[0].setValue(bpar.getPQ());
ba[1] = new NewMessageParametrosEntryBeanP();
ba[1].setKey("PaymentReest");
ba[1].setValue(bpar.getPR());
ba[2] = new NewMessageParametrosEntryBeanP();
ba[2].setKey("DelayCte");
ba[2].setValue(bpar.getDelayCte());
ba[3] = new NewMessageParametrosEntryBeanP();
ba[3].setKey("DelayRange");ba[3].setValue(bpar.getDelayR());
ba[4] = new NewMessageParametrosEntryBeanP();
ba[4].setKey("C4");ba[4].setValue(bpar.getC3());
ba[5] = new NewMessageParametrosEntryBeanP();
ba[5].setKey("C6");ba[5].setValue(bpar.getC6());
ba[6] = new NewMessageParametrosEntryBeanP();
ba[6].setKey("Banddict");ba[6].setValue(bpar.getBanddict());
ba[7] = new NewMessageParametrosEntryBeanP();
ba[7].setKey("Street");ba[7].setValue(bpar.getStreet());
ba[8] = new NewMessageParametrosEntryBeanP();
ba[8].setKey("Stree_1");ba[8].setValue(bpar.getStreet1());
//....
ba[53] = new NewMessageParametrosEntryBeanP();
ba[53].setKey("Zone");ba[53].setValue(bpar.getZone());
bar.setArr(ba);
lNMPEBa.add(bar);
}
}
bnmbp.setNmespebarr(lNMPEBa);
lstbnmbp.get().add(bnmbp);
}
return lstbnmbp;
}
}
This is my job configuration:
@EnableBatchProcessing
@Configuration
@Import({DBConfiguration.class})
@ComponentScan({"com.mycompany.batch.config","com.mycompany.batch.mapper","com.mycompany.batch.model","com.mycompany.batch.particion","com.mycompan.batch.procesos","com.mycompany.batch.reader","com.mycompany.batch.writers"})
@PropertySource("file:pruebas.properties")
public class ConfigJobBatch {
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Autowired
@Qualifier("sqlserverDataSource")
private DataSource dataSource;
@Autowired
Environment envws;
@Bean(name = "demoPartitionStep")
public Step step1Manager(Step slaveStep) {
return stepBuilderFactory.get("step1.manager")
.<String, String>partitioner("step1", demoPartitioner())
.step(slaveStep)
.gridSize(numerohilos())
.taskExecutor(taskExecutor())
.build();
}
@Bean(name = "demoPartitioner", destroyMethod = "")
public Partitioner demoPartitioner() {
RangePartitioner partitioner = new RangePartitioner();
return partitioner;
}
// slave step
@Bean
public Step slaveStep(ItemReader<beangenerico> demoReader,ItemWriter BeanAccCollectionWriter)
{
return stepBuilderFactory.get("slaveStep")
.chunk(1)
.reader(demoReader)
.processor(compositeProcessor())
.writer(BeanAccCollectionWriter)
.taskExecutor(taskExecutor())
.build();
}
@Bean
public CompositeItemProcessor compositeProcessor() {
List<ItemProcessor> delegates = new ArrayList<>(2);
delegates.add(CustomerProccesor());
delegates.add(beanDataItemProccesor());
CompositeItemProcessor processor = new CompositeItemProcessor();
processor.setDelegates(delegates);
return processor;
}
/*** FIXME: we must instantiate the processors as Spring beans, otherwise Spring does not pick them up and does not take the DAO or service layer into account ***/
@Bean
public CustomerItemProcessor CustomerProccesor(){
return new CustomerItemProcessor();
}
@Bean
public beanDataItemProccesor beanDataItemProccesor(){
return new beanDataItemProccesor();
}
/*** FIXME: we must instantiate the processors as beans, otherwise the DAO or service layer is not taken into account ***/
@Bean
public CustomItemProcessListener listener() {
return new CustomItemProcessListener();
}
@Bean(name = "demoWriter")
@StepScope
public ItemWriter< beangenerico> CustomItemWriter() {
// TODO Auto-generated method stub
CustomItemWriter wri = new CustomItemWriter();
return wri;
}
@Bean(name = "testWriter")
@StepScope
public ItemWriter<ThreadLocal<CopyOnWriteArrayList<beangen>>> testItemWriter() {
// TODO Auto-generated method stub
TestWriter wri = new TestWriter();
return wri;
}
@Bean(name = "BeanAccCollectionWriter")
@StepScope
public ItemWriter<ThreadLocal<List<NewMessage>>> BeanAccItemWriter() {
// TODO Auto-generated method stub
BeanAccItemWriter wri = new BeanAccItemWriter();
return wri;
}
@Bean(name="flatFileItemWriterPartition")
@StepScope
public FlatFileItemWriter<beangen> slaveWriter(
@Value("#{stepExecutionContext[fromId]}") int fromId,@Value("#{stepExecutionContext[toId]}")int toId ) {
FlatFileItemWriter<beangen> reader = new FlatFileItemWriter<beangen>();
reader.setResource(new FileSystemResource(
"csv/users.processed" + fromId + "-" + toId + ".csv"));
//reader.setAppendAllowed(false);
reader.setLineAggregator(new DelimitedLineAggregator<beangen>() {{
setDelimiter(",");
setFieldExtractor(new BeanWrapperFieldExtractor<beangen>() {{
setNames(new String[]{"usernetwork","cuenta","atributo","atributo2"});
}});
}});
return reader;
}
@Bean(name="tempRecordsWriter")
@StepScope
public ListDelegateWriter ListDelegateWriter(@Qualifier("flatFileItemWriterPartition")FlatFileItemWriter<beangen> writer) {
// TODO Auto-generated method stub
ListDelegateWriter wri = new ListDelegateWriter();
wri.setDelegate(writer);
return wri;
}
@Bean(name = "demoReader")
@StepScope
public ItemReader<beangenerico> myreader(@Value("#{stepExecutionContext['fromId']}") int minValue,@Value("#{stepExecutionContext['toId']}") int maxValue){
Myreader fr = new Myreader(minValue,maxValue);
return fr;
}
@Bean
public TaskExecutor taskExecutor() {
return new SimpleAsyncTaskExecutor("spring_batch");
}
@Bean
public Job job(@Qualifier("demoPartitionStep") Step demoPartitionStep) {
return this.jobBuilderFactory.get("job")
.start(demoPartitionStep)
.build();
}
@Bean
public StepExecuListner steplistener() {
return new StepExecuListner();
}
public static int numerohilos() {
/**** Loop over threads using the range and the number of threads to calculate ***
**************N_threads = N_cpu * U_cpu * (1 + W / C) *************************************
***N_cpu = Runtime.getRuntime().availableProcessors()**
*******/
int numcpu = Runtime.getRuntime().availableProcessors();
int numthread = numcpu*1*(1+10);
int gridSize=numthread;
return gridSize;
}
}
I fixed my code; this helped:
public class data implements ItemProcessor<beangenerico,ThreadLocal<List<beanAccountCollect>>> {
Logger logger = Logger.getLogger(data.class);
private String SP_SQL = "{call GetDetailAccount(?)}";
private String SELECT = "{call myspbyblocks (?,?)}";
private beanAccountCollect b;
private ThreadLocal<List<beanAccountCollect>> listbeanAccC = new ThreadLocal<List<beanAccountCollect>>();
private ThreadLocal<List<beanCustomer>> listbeanc=new ThreadLocal<List<beanCustomer>>();
@Autowired
private JdbcTemplate jdbcTemplate;
@Override
public ThreadLocal<List<beanAccountCollect>> process(beangenerico rangos) {
// TODO Auto-generated method stub
List<beanParameter> lbeanPar=null;
listbeanAccC.set(new ArrayList<beanAccountCollect>());
try {
listbeanc = this.jdbcTemplate.query(SELECT,new Object [] {rangos.getIni(),rangos.getFin()},new CustomerResultSetExtractor());
for(beanCustomer bc : listbeanc.get()) {
b = new beanAccountCollect();
lbeanPar = new ArrayList<beanParameter>();
lbeanPar = this.jdbcTemplate.query(SP_SQL,new Object [] {bc.getCuenta()},new BeanAccountResulSetExtractor(this.jdbcTemplate));
b.setUsernetwork(bc.getUsernetwork());
b.setTipoagente(bc.getTipoagente());
b.setLbpar(lbeanPar);
listbeanAccC.get().add(b);
}
}catch (Exception e) {
logger.error(e);
}
return listbeanAccC;
}
public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
this.jdbcTemplate = jdbcTemplate;
}
}

I want to fetch all the tables in one database and display them in a JSP

Like so:
@SuppressWarnings("unchecked")
public List<Tables> getTableColumns(DataSource dataSource) throws MetaDataAccessException {
List<Tables> tables = (List<Tables>) JdbcUtils.extractDatabaseMetaData(dataSource, new DatabaseMetaDataCallback() {
@Override
public Object processMetaData(java.sql.DatabaseMetaData dbmd)
throws SQLException, MetaDataAccessException {
ResultSet rs = dbmd.getTables(null, null, null, new String[] {"TABLE"});
List<Tables> list = new ArrayList<>();
while (rs.next()) {
String table_name = rs.getString("table_name");
Tables table = new Tables();
table.setTable_name(table_name);
list.add(table); // previously the rows were never added and an empty list was returned
System.out.println("hi user:" + table.getTable_name());
}
return list;
}
});
return tables;
}
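To actually show the result in a JSP, a minimal sketch could look like this. The controller, the service wrapper around getTableColumns(...), the request mapping and the view name are all illustrative, not from the question:
@Controller
public class TablesController {

    @Autowired
    private DataSource dataSource;

    @Autowired
    private TableMetadataService tableMetadataService; // hypothetical service exposing getTableColumns(...)

    @GetMapping("/tables")
    public String showTables(Model model) throws MetaDataAccessException {
        // Expose the table list to the view under the name "tables".
        model.addAttribute("tables", tableMetadataService.getTableColumns(dataSource));
        return "tables"; // resolved by your ViewResolver, e.g. /WEB-INF/views/tables.jsp
    }
}
In tables.jsp you can then iterate over ${tables} with JSTL's c:forEach and print each table_name.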

How to set the override on a compound trigger?

I have a Spring Integration application that normally polls daily for a file via SFTP using a cron trigger. But if it doesn't find the file it expects, it should poll every x minutes via a periodic trigger, for up to y attempts. To do this I use the following component:
@Component
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {
private final static Logger logger = LoggerFactory.getLogger(RetryCompoundTriggerAdvice.class);
private final CompoundTrigger compoundTrigger;
private final Trigger override;
private final ApplicationProperties applicationProperties;
private final Mail mail;
private int attempts = 0;
public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger,
@Qualifier("secondaryTrigger") Trigger override,
ApplicationProperties applicationProperties,
Mail mail) {
this.compoundTrigger = compoundTrigger;
this.override = override;
this.applicationProperties = applicationProperties;
this.mail = mail;
}
@Override
public boolean beforeReceive(MessageSource<?> source) {
return true;
}
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
final int maxOverrideAttempts = applicationProperties.getMaxFileRetry();
attempts++;
if (result == null && attempts < maxOverrideAttempts) {
logger.info("Unable to find load file after " + attempts + " attempt(s). Will reattempt");
this.compoundTrigger.setOverride(this.override);
} else if (result == null && attempts >= maxOverrideAttempts) {
mail.sendAdminsEmail("Missing File");
attempts = 0;
this.compoundTrigger.setOverride(null);
}
else {
attempts = 0;
this.compoundTrigger.setOverride(null);
logger.info("Found load file");
}
return result;
}
public void setOverrideTrigger() {
this.compoundTrigger.setOverride(this.override);
}
public CompoundTrigger getCompoundTrigger() {
return compoundTrigger;
}
}
If a file doesn't exist, this works great. That is, the override (i.e. periodic trigger) takes effect and polls every x minutes until y attempts.
However, if a file does exist but it's not the expected file (e.g. the data is for the wrong date), another class (that reads the file) calls the setOverrideTrigger method of the RetryCompoundTriggerAdvice class. But afterReceive is not subsequently called every x minutes. Why would this be?
Here's more of the application code:
SftpInboundFileSynchronizer:
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory(applicationProperties.getSftpDirectory());
CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<ChannelSftp.LsEntry>();
compositeFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(store, "sftp"));
compositeFileListFilter.addFilter(new SftpSimplePatternFileListFilter(applicationProperties.getLoadFileNamePattern()));
fileSynchronizer.setFilter(compositeFileListFilter);
fileSynchronizer.setPreserveTimestamp(true);
return fileSynchronizer;
}
Session factory is:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory sftpSessionFactory = new DefaultSftpSessionFactory();
sftpSessionFactory.setHost(applicationProperties.getSftpHost());
sftpSessionFactory.setPort(applicationProperties.getSftpPort());
sftpSessionFactory.setUser(applicationProperties.getSftpUser());
sftpSessionFactory.setPassword(applicationProperties.getSftpPassword());
sftpSessionFactory.setAllowUnknownKeys(true);
return new CachingSessionFactory<LsEntry>(sftpSessionFactory);
}
The SftpInboundFileSynchronizingMessageSource is set to poll using the compound trigger.
@Bean
@InboundChannelAdapter(autoStartup="true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
source.setLocalDirectory(applicationProperties.getScheduledLoadDirectory());
source.setAutoCreateLocalDirectory(true);
CompositeFileListFilter<File> compositeFileFilter = new CompositeFileListFilter<File>();
compositeFileFilter.addFilter(new LastModifiedFileListFilter());
compositeFileFilter.addFilter(new FileSystemPersistentAcceptOnceFileListFilter(store, "dailyfilesystem"));
source.setLocalFilter(compositeFileFilter);
source.setCountsEnabled(true);
return source;
}
@Bean
public PollerMetadata pollerMetadata(RetryCompoundTriggerAdvice retryCompoundTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
List<Advice> adviceChain = new ArrayList<Advice>();
adviceChain.add(retryCompoundTriggerAdvice);
pollerMetadata.setAdviceChain(adviceChain);
pollerMetadata.setTrigger(compoundTrigger());
pollerMetadata.setMaxMessagesPerPoll(1);
return pollerMetadata;
}
@Bean
public CompoundTrigger compoundTrigger() {
CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
return compoundTrigger;
}
@Bean
public CronTrigger primaryTrigger() {
return new CronTrigger(applicationProperties.getSchedule());
}
@Bean
public PeriodicTrigger secondaryTrigger() {
return new PeriodicTrigger(applicationProperties.getRetryInterval());
}
Update
Here's the message handler:
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler dailyHandler(SimpleJobLauncher jobLauncher, Job job, Mail mail) {
JobRunner jobRunner = new JobRunner(jobLauncher, job, store, mail);
jobRunner.setDaily("true");
jobRunner.setOverwrite("false");
return jobRunner;
}
JobRunner kicks off a Spring Batch job. After processing the job, my application looks to see if the file had the data it expected for the day. If not, it sets the override trigger.
That's the way triggers work - you only get an opportunity to change the trigger when the trigger fires.
Since you reset to the cron trigger, the next opportunity for change is when that trigger fires (if the poller thread is released by the downstream flow before changing the trigger).
Are you handing off the file to another thread (queue channel or executor)? If not, I would expect any changes to the trigger to be applied, because nextExecutionTime() will not be called until the downstream flow returns.
If there's a thread handoff, you have no opportunity to change the trigger.
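As a rough illustration of that point, assuming the beans from the question: keeping sftpChannel a DirectChannel means the JobRunner, and therefore the call to setOverrideTrigger(), runs on the poller thread before the poller computes its next execution time.
@Bean
public MessageChannel sftpChannel() {
    // A DirectChannel (also the default if the channel bean is not declared) keeps the
    // downstream flow on the poller thread, so an override set while handling the file
    // is visible when nextExecutionTime() is evaluated for the next poll. A QueueChannel
    // or ExecutorChannel would hand the message to another thread, and the override
    // would be set too late to affect the current trigger cycle.
    return new DirectChannel();
}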
