I am using MultiResourceItemReader in order to read and eventually write a list of CSV files to the database.
@StepScope
@Bean
public MultiResourceItemReader<DailyExport> multiResourceItemReader(@Value("#{stepExecutionContext[listNotLoadedFilesPath]}") List<String> notLoadedFilesPath) {
    logger.info("** start multiResourceItemReader **");
    // convert the list of not-yet-loaded file paths to an array of resources
    List<Resource> tmpList = new ArrayList<>();
    notLoadedFilesPath.forEach(fullPath -> tmpList.add(new FileSystemResource(fullPath)));
    Resource[] resourceArr = tmpList.toArray(new Resource[tmpList.size()]);
    MultiResourceItemReader<DailyExport> multiResourceItemReader = new MultiResourceItemReader<>();
    multiResourceItemReader.setName("dailyExportMultiReader");
    multiResourceItemReader.setDelegate(reader(dailyExportMapper()));
    multiResourceItemReader.setResources(resourceArr);
    return multiResourceItemReader;
}
@Bean
public FlatFileItemReader<DailyExport> reader(FieldSetMapper<DailyExport> testClassRowMapper) {
    logger.info("** start reader **");
    // create the delegate reader instance
    FlatFileItemReader<DailyExport> reader = new FlatFileItemReaderBuilder<DailyExport>()
            .name("dailyExportReader")
            .linesToSkip(1)
            .fieldSetMapper(testClassRowMapper)
            .delimited().delimiter("|")
            .names(dailyExportMetadata)
            .build();
    return reader;
}
Everything is working well, but I also need to store the current file/resource name.
I found the getCurrentResource API, but I couldn't figure out how to use it. Is there a way to get the current resource during the processing stage?
public class DailyExportItemProcessor implements ItemProcessor<DailyExport, DailyExport> {
    @Autowired
    public MultiResourceItemReader<DailyExport> multiResourceItemReader;

    @Override
    public DailyExport process(DailyExport item) throws Exception {
        // multiResourceItemReader.getCurrentResource() ??
        return item;
    }
}
Thank you
ResourceAware is what you need: MultiResourceItemReader automatically sets the originating resource on each item that implements it, so you can access the resource in the processor (or anywhere else the item is in scope):
class DailyExport implements ResourceAware {
    private Resource resource;

    @Override
    public void setResource(Resource resource) { this.resource = resource; }

    public Resource getResource() { return resource; }
}
then in the processor:
public class DailyExportItemProcessor implements ItemProcessor<DailyExport, DailyExport> {
    @Override
    public DailyExport process(DailyExport item) throws Exception {
        Resource currentResource = item.getResource();
        // do something with the item/resource
        return item;
    }
}
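If the goal is to persist the current file name alongside the data, the processor can simply copy it onto the item; a minimal sketch (setSourceFile is a hypothetical property on DailyExport, not from the original code):

public DailyExport process(DailyExport item) throws Exception {
    // copy the originating file name onto the item so the writer can store it
    item.setSourceFile(item.getResource().getFilename()); // setSourceFile is hypothetical
    return item;
}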
How can I pass a parameter as a filter condition when getting the file list from the SFTP server via a MessagingGateway?
My SftpMessageGateway code:
@MessagingGateway
public interface SftpMessageGateway {
    @Gateway(requestChannel = "getSftpChannel")
    List<SftpFileInfo> getIconListByProductUiId(@Payloads("productUiId") String productUiId);
}
Integration Config
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(host);
    factory.setPort(port);
    factory.setUser(id);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
@Bean
@ServiceActivator(inputChannel = "getSftpChannel")
public MessageHandler getMessageHandler() {
    SftpOutboundGateway outboundGateway = new SftpOutboundGateway(sftpSessionFactory(), "ls", "'" + uploadPath + "'");
    outboundGateway.setOption(AbstractRemoteFileOutboundGateway.Option.NAME_ONLY);
    outboundGateway.setFilter(new SftpSimplePatternFileListFilter("*alpha*"));
    outboundGateway.setFilter(new SftpSimplePatternFileListFilter("<I want to pass a custom argument here>")); // <----
    return outboundGateway;
}
You can set only one filter on a gateway; however, there is a CompositeFileListFilter that lets you combine a set of filters, including any custom implementation of FileListFilter.
See the docs for more info: https://docs.spring.io/spring-integration/docs/current/reference/html/file.html#remote-persistent-flf
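For example, a minimal sketch of combining the existing pattern filter with a custom one (ProductUiIdFileListFilter and its constructor argument are hypothetical names, not from the original answer):

CompositeFileListFilter<ChannelSftp.LsEntry> composite = new CompositeFileListFilter<>();
composite.addFilter(new SftpSimplePatternFileListFilter("*alpha*"));
// any custom FileListFilter implementation can be chained in as well
composite.addFilter(new ProductUiIdFileListFilter(productUiId));
outboundGateway.setFilter(composite);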
You can refer to the following code snippet for implementing a FileListFilter. My use case was to fetch the most recently modified file uploaded to the SFTP directory.
@Component
public class LastModifiedFileFilter implements FileListFilter<LsEntry> {

    @Override
    public List<LsEntry> filterFiles(LsEntry[] files) {
        List<LsEntry> result = new ArrayList<>();
        List<LsEntry> list = new ArrayList<>();
        Collections.addAll(list, files);
        LsEntry lastModifiedEntry = Collections.max(list,
                Comparator.comparingInt(entry -> entry.getAttrs().getMTime()));
        result.add(lastModifiedEntry);
        return result;
    }
}
Once you have your own custom filter in place, you need to chain it with your other filters on the SftpOutboundGateway. For reference, I did it this way:
ChainFileListFilter<LsEntry> filterList = new ChainFileListFilter<>();
filterList.addFilter(new SftpSimplePatternFileListFilter("*alpha*"));
filterList.addFilter(new LastModifiedFileFilter());
outboundGateway.setFilter(filterList);
With this in place, it now fetches the latest file whose name contains "alpha". Hope this helps.
I am working on a project where I need to validate whether a consumer group has been created on a topic. Is there any way in Spring Kafka to validate this?
Currently, describeConsumerGroups does not appear to be supported by Spring Kafka's KafkaAdmin, so you may need to create a Kafka AdminClient and call the method yourself.
E.g., here I took advantage of the auto-configuration property class KafkaProperties and autowired it into the service:
@Service
public class KafkaBrokerService implements BrokerService {

    private Map<String, Object> configs;

    public KafkaBrokerService(KafkaProperties kafkaProperties) { // autowired via constructor injection
        this.configs = kafkaProperties.buildAdminProperties();
    }

    private AdminClient createAdmin() {
        Map<String, Object> adminConfigs = new HashMap<>(this.configs);
        return AdminClient.create(adminConfigs);
    }

    public SomeDto consumerGroupDescription(String groupId) {
        try (AdminClient adminClient = createAdmin()) {
            // the consumer group's members
            ConsumerGroupDescription consumerGroupDescription = adminClient.describeConsumerGroups(Collections.singletonList(groupId))
                    .describedGroups().get(groupId).get();
            // the consumer group's partitions and the committed offset in each partition
            Map<TopicPartition, OffsetAndMetadata> offsets = adminClient.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
            // once you have the information, you can validate it here
            ...
        } catch (ExecutionException | InterruptedException e) {
            //
        }
    }
}
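Building on that, a minimal sketch of one possible validation (the method name and the "group has committed offsets for the topic" rule are my own illustration, not part of the original answer):

public boolean hasGroupConsumedTopic(String groupId, String topic) {
    try (AdminClient adminClient = createAdmin()) {
        Map<TopicPartition, OffsetAndMetadata> offsets =
                adminClient.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
        // the group is active on the topic if it has committed offsets for at least one partition
        return offsets.keySet().stream().anyMatch(tp -> tp.topic().equals(topic));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
    } catch (ExecutionException e) {
        return false; // e.g. the group does not exist
    }
}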
So I've created a batch job which generates reports (CSV files). I have been able to generate the files seamlessly using FlatFileItemWriter, but my end goal is to create an InputStream, to call a REST service which will store the document, or a byte array, to store it in the database.
public class CustomWriter implements ItemWriter<Report> {

    @Override
    public void write(List<? extends Report> reports) throws Exception {
        for (Report report : reports) {
            writeDataToFile(report);
        }
    }

    private void writeDataToFile(final Report data) throws Exception {
        FlatFileItemWriter<Report> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("C:/reports/test-report.csv"));
        writer.setLineAggregator(getLineAggregator(data));
        writer.afterPropertiesSet();
        writer.open(new ExecutionContext());
        writer.write(Collections.singletonList(data));
        writer.close();
    }
private DelimitedLineAggregator<Report> getLineAggregator(final Report report) {
DelimitedLineAggregator<Report> delimitedLineAgg = new DelimitedLineAggregator<Report>();
delimitedLineAgg.setDelimiter(",");
delimitedLineAgg.setFieldExtractor(getFieldExtractor());
return delimitedLineAgg;
}
private FieldExtractor<Report> getFieldExtractor() {
BeanWrapperFieldExtractor<Report> fieldExtractor = new BeanWrapperFieldExtractor<Report>();
fieldExtractor.setNames(COLUMN_HEADERS.toArray(new String[0]));
return fieldExtractor;
}
}
One way I could do this is to temporarily store the file locally and create a new step to pick the generated files up and do the sending/storing, but I would really like to skip that extra step and send/store the data in the first step.
How do I go about doing this?
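Not from the original thread, but one possible direction, sketched under the assumption that building the whole report in memory is acceptable: since DelimitedLineAggregator just turns each item into a String, you can skip FlatFileItemWriter and collect the lines into a byte array yourself, then hand that to the REST client or repository (both hand-off calls below are hypothetical placeholders):

import java.nio.charset.StandardCharsets;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.transform.DelimitedLineAggregator;

public class InMemoryReportWriter implements ItemWriter<Report> {

    private final DelimitedLineAggregator<Report> lineAggregator; // same aggregator as before

    public InMemoryReportWriter(DelimitedLineAggregator<Report> lineAggregator) {
        this.lineAggregator = lineAggregator;
    }

    @Override
    public void write(List<? extends Report> reports) throws Exception {
        StringBuilder csv = new StringBuilder();
        for (Report report : reports) {
            csv.append(lineAggregator.aggregate(report)).append('\n');
        }
        byte[] bytes = csv.toString().getBytes(StandardCharsets.UTF_8);
        // hand the bytes off here, e.g. to a REST client or a repository;
        // both of these calls are hypothetical:
        // restClient.upload(new ByteArrayInputStream(bytes));
        // reportRepository.save(bytes);
    }
}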
Background
I am working on designing a file reading layer that can read delimited files and load them into a List. I have decided to use Spring Batch because it provides a lot of scalability options that I can leverage for different sets of files depending on their size.
The requirement
I want to design a generic Job API that can be used to read any delimited file.
There should be a single Job structure used for parsing every delimited file. For example, if the system needs to read 5 files, there will be 5 jobs (one for each file). The only way the 5 jobs will differ from each other is that they will use a different FieldSetMapper, column names, directory path, and additional scaling parameters such as commit-interval and throttle-limit.
The user of this API should not need to configure a Spring Batch job, step, chunking, partitioning, etc. on his own when a new file type is introduced into the system.
All the user needs to do is provide the FieldSetMapper to be used by the job, along with the commit-interval, throttle-limit, and the directory where each type of file will be placed.
There will be one predefined directory per file type. Each directory can contain multiple files of the same type and format. A MultiResourcePartitioner will be used to look inside a directory. The number of partitions = the number of files in the directory.
My requirement is to build a Spring Batch infrastructure that gives me a unique job I can launch once I have the bits and pieces that will make up the job.
My solution :
I created an abstract configuration class that will be extended by concrete configuration classes (there will be one concrete class per file type to be read).
@Configuration
@EnableBatchProcessing
public abstract class AbstractFileLoader<T> {
private static final String FILE_PATTERN = "*.dat";
@Autowired
JobBuilderFactory jobs;
@Autowired
ResourcePatternResolver resourcePatternResolver;
public final Job createJob(Step s1, JobExecutionListener listener) {
return jobs.get(this.getClass().getSimpleName())
.incrementer(new RunIdIncrementer()).listener(listener)
.start(s1).build();
}
public abstract Job loaderJob(Step s1, JobExecutionListener listener);
public abstract FieldSetMapper<T> getFieldSetMapper();
public abstract String getFilesPath();
public abstract String[] getColumnNames();
public abstract int getChunkSize();
public abstract int getThrottleLimit();
@Bean
@StepScope
public FlatFileItemReader<T> reader(@Value("#{stepExecutionContext['fileName']}") String file) {
FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
String path = file.substring(file.indexOf(":") + 1, file.length());
FileSystemResource resource = new FileSystemResource(path);
reader.setResource(resource);
DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
lineMapper.setFieldSetMapper(getFieldSetMapper());
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
tokenizer.setNames(getColumnNames());
lineMapper.setLineTokenizer(tokenizer);
reader.setLineMapper(lineMapper);
reader.setLinesToSkip(1);
return reader;
}
@Bean
public ItemProcessor<T, T> processor() {
// TODO add transformations here
return null;
}
@Bean
@JobScope
public ListItemWriter<T> writer() {
ListItemWriter<T> writer = new ListItemWriter<T>();
return writer;
}
@Bean
@JobScope
public Step readStep(StepBuilderFactory stepBuilderFactory,
ItemReader<T> reader, ItemWriter<T> writer,
ItemProcessor<T, T> processor, TaskExecutor taskExecutor) {
final Step readerStep = stepBuilderFactory
.get(this.getClass().getSimpleName() + " ReadStep:slave")
.<T, T> chunk(getChunkSize()).reader(reader)
.processor(processor).writer(writer).taskExecutor(taskExecutor)
.throttleLimit(getThrottleLimit()).build();
final Step partitionedStep = stepBuilderFactory
.get(this.getClass().getSimpleName() + " ReadStep:master")
.partitioner(readerStep)
.partitioner(
this.getClass().getSimpleName() + " ReadStep:slave",
partitioner()).taskExecutor(taskExecutor).build();
return partitionedStep;
}
/*
 * @Bean public TaskExecutor taskExecutor() { return new
 * SimpleAsyncTaskExecutor(); }
 */
@Bean
@JobScope
public Partitioner partitioner() {
MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
Resource[] resources;
try {
resources = resourcePatternResolver.getResources("file:"
+ getFilesPath() + FILE_PATTERN);
} catch (IOException e) {
throw new RuntimeException(
"I/O problems when resolving the input file pattern.", e);
}
partitioner.setResources(resources);
return partitioner;
}
@Bean
@JobScope
public JobExecutionListener listener(ListItemWriter<T> writer) {
return new JobCompletionNotificationListener<T>(writer);
}
/*
* Use this if you want the writer to have job scope (JIRA BATCH-2269). Also
* change the return type of writer to ListItemWriter for this to work.
*/
@Bean
public TaskExecutor taskExecutor() {
return new SimpleAsyncTaskExecutor() {
@Override
protected void doExecute(final Runnable task) {
// gets the jobExecution of the configuration thread
final JobExecution jobExecution = JobSynchronizationManager
.getContext().getJobExecution();
super.doExecute(new Runnable() {
public void run() {
JobSynchronizationManager.register(jobExecution);
try {
task.run();
} finally {
JobSynchronizationManager.close();
}
}
});
}
};
}
}
Let's say I have to read invoice data, for the sake of discussion. I can therefore extend the above class to create an InvoiceLoader:
@Configuration
public class InvoiceLoader extends AbstractFileLoader<Invoice> {

    private class InvoiceFieldSetMapper implements FieldSetMapper<Invoice> {
        public Invoice mapFieldSet(FieldSet f) {
            Invoice invoice = new Invoice();
            invoice.setNo(f.readString("INVOICE_NO"));
            return invoice;
        }
    }
    @Override
    public FieldSetMapper<Invoice> getFieldSetMapper() {
        return new InvoiceFieldSetMapper();
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/invoices/partitions/";
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "INVOICE_NO", "DATE" };
    }

    @Override
    @Bean(name = "invoiceJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 25254;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}
Let's say I have one more class called Inventory that extends AbstractFileLoader.
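For illustration, a minimal sketch of what that InventoryLoader might look like (the Inventory fields, column names, path, and sizes below are hypothetical):

@Configuration
public class InventoryLoader extends AbstractFileLoader<Inventory> {

    @Override
    public FieldSetMapper<Inventory> getFieldSetMapper() {
        return fieldSet -> {
            Inventory inventory = new Inventory();
            inventory.setSku(fieldSet.readString("SKU")); // hypothetical column/property
            return inventory;
        };
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/inventory/partitions/"; // hypothetical directory
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "SKU", "QUANTITY" }; // hypothetical columns
    }

    @Override
    @Bean(name = "inventoryJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 1000;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}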
On application startup, I can load these two annotation configurations as follows :
AbstractApplicationContext context1 = new AnnotationConfigApplicationContext(InvoiceLoader.class, InventoryLoader.class);
Somewhere else in my application two different threads can launch the jobs as follows :
Thread 1:
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("invoiceJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
Thread 2:
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("inventoryJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
The advantage of this approach is that every time there is a new file to be read, all the developer/user has to do is subclass AbstractFileLoader and implement the required abstract methods, without needing to get into the details of how to assemble the job.
The questions:
1. I am new to Spring Batch, so I may have overlooked some of the not-so-obvious issues with this approach, such as shared internal objects in Spring Batch that may cause two jobs running together to fail, or obvious issues such as the scoping of the beans.
2. Is there a better way to achieve my objective?
3. The fileName attribute of @Value("#{stepExecutionContext['fileName']}") is always assigned the value I:/CK/invoices/partitions/, which is the value returned by the getFilesPath method in InvoiceLoader, even though the getFilesPath method in InventoryLoader returns a different value.
One option is passing them as job parameters. For instance:
@Bean
Job job() {
    jobs.get("myJob").start(step1(null)).build()
}

@Bean
@JobScope
Step step1(@Value('#{jobParameters["commitInterval"]}') commitInterval) {
    steps.get('step1')
            .chunk((int) commitInterval)
            .reader(new IterableItemReader(iterable: [1, 2, 3, 4], name: 'foo'))
            .writer(writer(null))
            .build()
}

@Bean
@JobScope
ItemWriter writer(@Value('#{jobParameters["writerClass"]}') writerClass) {
    applicationContext.classLoader.loadClass(writerClass).newInstance()
}
With MyWriter:
class MyWriter implements ItemWriter<Integer> {
    @Override
    void write(List<? extends Integer> items) throws Exception {
        println "Write $items"
    }
}
Then executed with:
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([
commitInterval: new JobParameter(3),
writerClass: new JobParameter('MyWriter'), ]))
Output is:
INFO: Executing step: [step1]
Write [1, 2, 3]
Write [4]
Feb 24, 2016 2:30:22 PM org.springframework.batch.core.launch.support.SimpleJobLauncher$1 run
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{commitInterval=3, writerClass=MyWriter}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 0
#1 step1 COMPLETED
Full example here.
I am working with Spring WebSocket and I have the following problem:
I am trying to put a placeholder inside a @MessageMapping annotation in order to get the URL from properties. It works with @RequestMapping but not with @MessageMapping.
If I use this placeholder, the URL is null. Any idea or suggestion?
Example:
@RequestMapping(value = "${myProperty}")
@MessageMapping("${myProperty}")
Rossen Stoyanchev added placeholder support for @MessageMapping and @SubscribeMapping methods.
See Jira issue: https://jira.spring.io/browse/SPR-13271
Spring allows you to use property placeholders in @RequestMapping, but not in @MessageMapping. This is because of the MessageHandler, so we need to override the default MessageHandler to support it.
WebSocketAnnotationMethodMessageHandler does not support placeholders, and you need to add this support yourself.
For simplicity, I just created another WebSocketAnnotationMethodMessageHandler class in my project, in the same package as the original (org.springframework.web.socket.messaging), and overrode the getMappingForMethod method from SimpAnnotationMethodMessageHandler with the same content, changing only how SimpMessageMappingInfo is constructed, using these methods (private in WebSocketAnnotationMethodMessageHandler):
private SimpMessageMappingInfo createMessageMappingCondition(final MessageMapping annotation) {
return new SimpMessageMappingInfo(SimpMessageTypeMessageCondition.MESSAGE, new DestinationPatternsMessageCondition(
this.resolveAnnotationValues(annotation.value()), this.getPathMatcher()));
}
private SimpMessageMappingInfo createSubscribeCondition(final SubscribeMapping annotation) {
final SimpMessageTypeMessageCondition messageTypeMessageCondition = SimpMessageTypeMessageCondition.SUBSCRIBE;
return new SimpMessageMappingInfo(messageTypeMessageCondition, new DestinationPatternsMessageCondition(
this.resolveAnnotationValues(annotation.value()), this.getPathMatcher()));
}
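For reference, the overridden getMappingForMethod in the custom handler then looks roughly like this (a sketch, not the verbatim Spring source; it mirrors SimpAnnotationMethodMessageHandler but delegates to the methods above):

@Override
protected SimpMessageMappingInfo getMappingForMethod(final Method method, final Class<?> handlerType) {
    // resolve @MessageMapping destinations through the property-aware condition builder
    final MessageMapping messageAnnotation = AnnotationUtils.findAnnotation(method, MessageMapping.class);
    if (messageAnnotation != null) {
        return this.createMessageMappingCondition(messageAnnotation);
    }
    // same for @SubscribeMapping destinations
    final SubscribeMapping subscribeAnnotation = AnnotationUtils.findAnnotation(method, SubscribeMapping.class);
    if (subscribeAnnotation != null) {
        return this.createSubscribeCondition(subscribeAnnotation);
    }
    return null;
}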
These methods now resolve the values taking properties into account (by calling the resolveAnnotationValue method), so we need something like this:
private String[] resolveAnnotationValues(final String[] destinationNames) {
final int length = destinationNames.length;
final String[] result = new String[length];
for (int i = 0; i < length; i++) {
result[i] = this.resolveAnnotationValue(destinationNames[i]);
}
return result;
}
private String resolveAnnotationValue(final String name) {
if (!(this.getApplicationContext() instanceof ConfigurableApplicationContext)) {
return name;
}
final ConfigurableApplicationContext applicationContext = (ConfigurableApplicationContext) this.getApplicationContext();
final ConfigurableBeanFactory configurableBeanFactory = applicationContext.getBeanFactory();
final String placeholdersResolved = configurableBeanFactory.resolveEmbeddedValue(name);
final BeanExpressionResolver exprResolver = configurableBeanFactory.getBeanExpressionResolver();
if (exprResolver == null) {
return name;
}
final Object result = exprResolver.evaluate(placeholdersResolved, new BeanExpressionContext(configurableBeanFactory, null));
return result != null ? result.toString() : name;
}
You still need to define a PropertySourcesPlaceholderConfigurer bean in your configuration.
If you are using XML based configuration, include something like this:
<context:property-placeholder location="classpath:/META-INF/spring/url-mapping-config.properties" />
If you are using Java-based configuration, you can try it this way:
@Configuration
@PropertySources(value = @PropertySource("classpath:/META-INF/spring/url-mapping-config.properties"))
public class URLMappingConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
Note: in this case, the url-mapping-config.properties file is in a Gradle/Maven project in the src/main/resources/META-INF/spring folder, and its content looks like this:
myPropertyWS=urlvaluews
This is my sample controller:
@Controller
public class WebSocketController {

    @SendTo("/topic/test")
    @MessageMapping("${myPropertyWS}")
    public String test() throws Exception {
        Thread.sleep(4000); // simulated delay
        return "OK";
    }
}
With the default MessageHandler, the startup log will print something like this:
INFO: Mapped "{[/${myPropertyWS}],messageType=[MESSAGE]}" onto public java.lang.String com.brunocesar.controller.WebSocketController.test() throws java.lang.Exception
And with our MessageHandler, it now prints this:
INFO: Mapped "{[/urlvaluews],messageType=[MESSAGE]}" onto public java.lang.String com.brunocesar.controller.WebSocketController.test() throws java.lang.Exception
See the full WebSocketAnnotationMethodMessageHandler implementation in this gist.
EDIT: this solution resolves the problem for versions before 4.2 GA. For more information, see this jira.
Update:
Now I understand what you mean, but I think that is not possible (yet). The documentation does not mention anything related to path mapping URIs.
Old answer:
Use
@MessageMapping("/handler/{myProperty}")
instead of
@MessageMapping("/handler/${myProperty}")
And use it like this:
@MessageMapping("/myHandler/{username}")
public void handleTextMessage(@DestinationVariable String username, Message message) {
    // do something
}
@MessageMapping("/chat/{roomId}")
public Message handleMessages(@DestinationVariable("roomId") String roomId, @Payload Message message, Traveler traveler) throws Exception {
    System.out.println("Message received for room: " + roomId);
    System.out.println("User: " + traveler.toString());
    // store message in database
    message.setAuthor(traveler);
    message.setChatRoomId(Integer.parseInt(roomId));
    int id = MessageRepository.getInstance().save(message);
    message.setId(id);
    return message;
}
}