Spring Batch JdbcPagingItemReader endless loop

I am facing problems with JdbcPagingItemReader: I am stuck in an endless loop. I have read that the ItemReader contract requires read() to return null when there is no more data, but I can't manage to implement it correctly. Can somebody show me an example?
public List<TransactionDTO> getTransactions(Integer chunk, LocalDateTime startDate, LocalDateTime endDate)
throws Exception {
final TransactionMapper transactionMapper = new TransactionMapper();
final SqlPagingQueryProviderFactoryBean sqlPagingQueryProviderFactoryBean = new SqlPagingQueryProviderFactoryBean();
sqlPagingQueryProviderFactoryBean.setDataSource(dataSource);
sqlPagingQueryProviderFactoryBean.setSelectClause(env.getProperty("sql.fromdates.select"));
sqlPagingQueryProviderFactoryBean.setFromClause(env.getProperty("sql.fromdates.from"));
sqlPagingQueryProviderFactoryBean.setWhereClause(env.getProperty("sql.fromdates.where"));
sqlPagingQueryProviderFactoryBean.setSortKey(env.getProperty("sql.fromdates.sort"));
final Map<String, Object> parametros = new HashMap<>();
parametros.put("startDate", startDate);
parametros.put("endDate", endDate);
final JdbcPagingItemReader<TransactionDTO> itemReader = new JdbcPagingItemReader<>();
itemReader.setDataSource(dataSource);
itemReader.setQueryProvider(sqlPagingQueryProviderFactoryBean.getObject());
// TODO this should be the chunk size
itemReader.setPageSize(1);
itemReader.setFetchSize(1);
itemReader.setRowMapper(transactionMapper);
itemReader.afterPropertiesSet();
itemReader.setParameterValues(parametros);
ExecutionContext executionContext = new ExecutionContext();
itemReader.open(executionContext);
List<TransactionDTO> list = new ArrayList<>();
TransactionDTO primerDto = itemReader.read();
while (primerDto != null) {
list.add(itemReader.read());
}
itemReader.close();
return list;
}
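For reference, a minimal sketch of a read loop that follows the ItemReader contract (read() returns null once the last page is exhausted), assuming the same itemReader and list variables as above:
TransactionDTO dto = itemReader.read();
while (dto != null) {
    list.add(dto);            // store the item that was just read
    dto = itemReader.read();  // returns null after the last row, which ends the loop
}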

Related

Export entities to database schema through java code

A long time ago, I did that with code like this:
Configuration config = new Configuration();
Properties props = new Properties();
FileInputStream fos = new FileInputStream(file_name);
props.load(fos);
fos.close();
config.setProperties(props);
config.addAnnotatedClass(...);
Connection conn = DriverManager.getConnection(url,usuario,senha);
SchemaExport schema = new SchemaExport();
schema.create(true, true);
But now, if I try to use this code, I get a compilation error. Looking at the javadoc for SchemaExport, I notice a lot of changes in the methods used in this example.
How can I do that now?
Update
Based on the suggested link, I implemented the method this way:
public void criarTabelas(String server, String user, String pass) throws Exception {
StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder()
.applySetting("hibernate.hbm2ddl.auto", "create")
.applySetting("hibernate.dialect", dialect)
.applySetting("hibernate.id.new_generator_mappings", "true")
.build();
MetadataSources sources = new MetadataSources(standardRegistry);
for(Class<?> entity : lista_entidades())
sources.addAnnotatedClass(entity);
MetadataImplementor metadata = (MetadataImplementor) sources.getMetadataBuilder().build();
SchemaExport export = new SchemaExport();
export.create(EnumSet.of(TargetType.DATABASE), metadata);
}
private List<Class<?>> lista_entidades() throws Exception {
List<Class<?>> lista = new ArrayList<Class<?>>();
ClassPathScanningCandidateComponentProvider scanner = new ClassPathScanningCandidateComponentProvider(false);
scanner.addIncludeFilter(new AnnotationTypeFilter(Entity.class));
for (BeanDefinition bd : scanner.findCandidateComponents("org.loja.model"))
lista.add(Class.forName(bd.getBeanClassName()));
return lista;
}
Now I need a way to establish a JDBC connection and associate it with the SchemaExport.
I solved this issue with the following code:
public void criarTabelas(String server, String user, String pass) throws Exception {
Connection conn = DriverManager.getConnection(url_prefix+server+"/"+url_suffix, user, pass);
StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder()
.applySetting("hibernate.hbm2ddl.auto", "create")
.applySetting("hibernate.dialect", dialect)
.applySetting("hibernate.id.new_generator_mappings", "true")
.applySetting("javax.persistence.schema-generation-connection", conn)
.build();
MetadataSources sources = new MetadataSources(standardRegistry);
for(Class<?> entity : lista_entidades())
sources.addAnnotatedClass(entity);
MetadataImplementor metadata = (MetadataImplementor) sources.getMetadataBuilder().build();
SchemaExport export = new SchemaExport();
export.create(EnumSet.of(TargetType.DATABASE), metadata);
conn.close();
}
private List<Class<?>> lista_entidades() throws Exception {
List<Class<?>> lista = new ArrayList<Class<?>>();
ClassPathScanningCandidateComponentProvider scanner = new ClassPathScanningCandidateComponentProvider(false);
scanner.addIncludeFilter(new AnnotationTypeFilter(Entity.class));
for (BeanDefinition bd : scanner.findCandidateComponents("org.loja.model"))
lista.add(Class.forName(bd.getBeanClassName()));
System.out.println("lista: "+lista);
return lista;
}

AggregatingReplyingKafkaTemplate releaseStrategy Question

There seems to be an issue when I use AggregatingReplyingKafkaTemplate with template.setReturnPartialOnTimeout(true): it returns a timeout exception even when partial results are available from the consumers.
In the example below, I have 3 consumers replying on the request topic and I've set the reply timeout to 10 seconds. I've explicitly delayed the response of Consumer 3 to 11 seconds, so I expect responses back from Consumers 1 and 2 and want to return those partial results. However, I am getting KafkaReplyTimeoutException. I appreciate your inputs. Thanks.
I based my code on the unit test below.
[ReplyingKafkaTemplateTests][1]
I've provided the actual code below:
@RestController
public class SumController {
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;
public static final String D_REPLY = "dReply";
public static final String D_REQUEST = "dRequest";
@ResponseBody
@PostMapping(value="/sum")
public String sum(@RequestParam("message") String message) throws InterruptedException, ExecutionException {
AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
new TopicPartitionOffset(D_REPLY, 0), 3, new AtomicInteger());
String resultValue ="";
String currentValue ="";
try {
template.setDefaultReplyTimeout(Duration.ofSeconds(10));
template.setReturnPartialOnTimeout(true);
ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, message);
RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
template.sendAndReceive(record);
future.getSendFuture().get(5, TimeUnit.SECONDS); // send ok
System.out.println("Send Completed Successfully");
ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord = future.get(10, TimeUnit.SECONDS);
System.out.println("Consumer record size "+consumerRecord.value().size());
Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
while (iterator.hasNext()) {
currentValue = iterator.next().value();
System.out.println("response " + currentValue);
System.out.println("Record header " + consumerRecord.headers().toString());
resultValue = resultValue + currentValue + "\r\n";
}
} catch (Exception e) {
System.out.println("Error Message is "+e.getMessage());
}
return resultValue;
}
public AggregatingReplyingKafkaTemplate<Integer, String, String> aggregatingTemplate(
TopicPartitionOffset topic, int releaseSize, AtomicInteger releaseCount) {
//Create Container Properties
ContainerProperties containerProperties = new ContainerProperties(topic);
containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
//Set the consumer Config
//Create Consumer Factory with Consumer Config
DefaultKafkaConsumerFactory<Integer, Collection<ConsumerRecord<Integer, String>>> cf =
new DefaultKafkaConsumerFactory<>(consumerConfigs());
//Create Listener Container with Consumer Factory and Container Property
KafkaMessageListenerContainer<Integer, Collection<ConsumerRecord<Integer, String>>> container =
new KafkaMessageListenerContainer<>(cf, containerProperties);
// container.setBeanName(this.testName);
AggregatingReplyingKafkaTemplate<Integer, String, String> template =
new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
template.setSharedReplyTopic(true);
template.start();
return template;
}
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,bootstrapServers);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_id");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
return props;
}
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
// list of host:port pairs used for establishing the initial connections to the Kafka cluster
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
org.apache.kafka.common.serialization.StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
return props;
}
public ProducerFactory<Integer,String> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
#KafkaListener(id = "def1", topics = { D_REQUEST}, groupId = "D_REQUEST1")
#SendTo // default REPLY_TOPIC header
public String dListener1(String in) throws InterruptedException {
return "First Consumer : "+ in.toUpperCase();
}
#KafkaListener(id = "def2", topics = { D_REQUEST}, groupId = "D_REQUEST2")
#SendTo // default REPLY_TOPIC header
public String dListener2(String in) throws InterruptedException {
return "Second Consumer : "+ in.toLowerCase();
}
#KafkaListener(id = "def3", topics = { D_REQUEST}, groupId = "D_REQUEST3")
#SendTo // default REPLY_TOPIC header
public String dListener3(String in) throws InterruptedException {
Thread.sleep(11000);
return "Third Consumer : "+ in;
}
}
[1]: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/test/java/org/springframework/kafka/requestreply/ReplyingKafkaTemplateTests.java
template.setReturnPartialOnTimeout(true) simply means the template will consult the release strategy on timeout (with the timeout argument set to true, to tell the strategy it's a timeout rather than a delivery call).
It must return true to release the partial result.
This is to allow you to look at (and possibly modify) the list to decide whether you want to release or discard.
Your strategy ignores the timeout parameter:
(list, timeout) -> {
releaseCount.incrementAndGet();
return list.size() == releaseSize;
});
You need something like return timeout ? true : { ... } so the partial result is released when the timeout flag is set.
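A minimal sketch of a release strategy that honors the timeout flag, reusing the releaseSize and releaseCount from the question:
new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
    (list, timeout) -> {
        releaseCount.incrementAndGet();
        if (timeout) {
            // the reply window elapsed: release whatever partial replies have arrived
            return !list.isEmpty();
        }
        // normal delivery call: release only once all expected replies are in
        return list.size() == releaseSize;
    });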

Spring Kafka - how to fetch timestamp (event time) when message was produced

I have a requirement to fetch the timestamp (event time) at which a message was produced, in the Kafka consumer application. I am aware of TimestampExtractor, which can be used with Kafka Streams, but my requirement is different as I am not using Streams to consume the messages.
My Kafka producer is as follows:
@Override
public void run(ApplicationArguments args) throws Exception {
List<String> names = Arrays.asList("priya", "dyser", "Ray", "Mark", "Oman", "Larry");
List<String> pages = Arrays.asList("blog", "facebook", "instagram", "news", "youtube", "about");
Runnable runnable = () -> {
String rPage = pages.get(new Random().nextInt(pages.size()));
String rName = names.get(new Random().nextInt(names.size()));
PageViewEvent pageViewEvent = new PageViewEvent(rName, rPage, Math.random() > .5 ? 10 : 1000);
Message<PageViewEvent> message = MessageBuilder
.withPayload(pageViewEvent)
.setHeader(KafkaHeaders.MESSAGE_KEY, pageViewEvent.getUserId().getBytes())
.build();
try {
this.pageViewsOut.send(message);
log.info("sent " + message);
} catch (Exception e) {
log.error(e);
}
};
The Kafka consumer is implemented using the Spring Kafka @KafkaListener:
@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data, @Headers MessageHeaders headers) {
LOG.info("Message received");
LOG.info("received data='{}'", data);
}
Container factory configuration
@Bean
public ConsumerFactory<String,PageViewEvent > priceEventConsumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(PageViewEvent.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> priceEventsKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(priceEventConsumerFactory());
return factory;
}
When I print the message the producer is sending, it gives me the data below:
[payload=PageViewEvent(userId=blog, page=about, duration=10),
headers={id=8ebdad85-e2f7-958f-500e-4560ac0970e5,
kafka_messageKey=[B#71975e1a, contentType=application/json,
timestamp=1553041963803}]
This does have a produced timestamp. How can I fetch the timestamp at which the message was produced with Spring Kafka?
RECEIVED_TIMESTAMP means it is the timestamp from the record that was received, not the time it was received. We avoid putting it in TIMESTAMP to avoid inadvertent propagation to an outbound message.
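For example, a minimal sketch of reading that timestamp in the existing listener, either via an @Header parameter or from the injected MessageHeaders (both use the KafkaHeaders.RECEIVED_TIMESTAMP header):
@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long timestamp,
        @Headers MessageHeaders headers) {
    LOG.info("received data='{}' produced at {}", data, timestamp);
    // equivalently: headers.get(KafkaHeaders.RECEIVED_TIMESTAMP)
}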
You can use something like below:
final Producer<String, String> producer = new KafkaProducer<String, String>(properties);
long time = System.currentTimeMillis();
final CountDownLatch countDownLatch = new CountDownLatch(5);
int count=0;
try {
for (long index = time; index < time + 10; index++) {
String key = null;
count++;
if(count<=5)
key = "id_"+ Integer.toString(1);
else
key = "id_"+ Integer.toString(2);
final ProducerRecord<String, String> record =
new ProducerRecord<>(TOPIC, key, "B2B Sample Message: " + count);
producer.send(record, (metadata, exception) -> {
long elapsedTime = System.currentTimeMillis() - time;
if (metadata != null) {
System.out.printf("sent record(key=%s value=%s) " +
"meta(partition=%d, offset=%d) time=%d timestamp=%d\n",
record.key(), record.value(), metadata.partition(),
metadata.offset(), elapsedTime, metadata.timestamp());
System.out.println("Timestamp:: "+metadata.timestamp() );
} else {
exception.printStackTrace();
}
countDownLatch.countDown();
});
}
try {
countDownLatch.await(25, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}finally {
producer.flush();
producer.close();
}
}

Client side code for multipart REST operation

Hi, I need to consume a REST operation which accepts an XML payload and a PDF file. Basically, a JAXB object is converted to an XML string and uploaded as an XML file. So in a single multipart request, an XML file and a PDF file are uploaded.
The server-side code for the REST operation is as follows:
public class CompanyType extends MediaType {
public static final String XML_STRING = "application/company+xml";
}
#POST
#Path("/upload")
#Consumes("multipart/mixed")
#Produces(CompanyType.XML_STRING)
public UploadResponseObject upload(MultiPart multiPart){
UploadRequestObject req = multiPart.getBodyParts().get(0).getEntityAs(UploadRequestObject.class);
BodyPartEntity bpe = (BodyPartEntity) multiPart.getBodyParts().get(1).getEntity();
byte[] pdfBytes = IOUtils.toByteArray(bpe.getInputStream());
....
....
}
Client-side code to consume the REST operation:
#Autowired
private RestTemplate rt;
public UploadResponseObject callMultipartUploadOperation(UploadRequestObject req, java.io.File target) throws Exception {
String url = "http://<host-name>:<port>/service-name/upload";
MultiValueMap<String, Object> mv = new LinkedMultiValueMap<String, Object>();
this.rt = new RestTemplate();
this.rt.setMessageConverters(getMessageConverter());
String id = <random number generated from 1 to 50000>;
// Add xml entity
org.springframework.http.HttpHeaders xmlFileHeaders = new org.springframework.http.HttpHeaders();
xmlFileHeaders.add(org.springframework.http.HttpHeaders.CONTENT_TYPE, "application/company+xml");
HttpEntity<String> xmlFile = new HttpEntity<String>(createXMLString(req), xmlFileHeaders);
mv.add(id + ".xml", xmlFile);
// Add pdf file
org.springframework.http.HttpHeaders fileHeaders = new org.springframework.http.HttpHeaders();
fileHeaders.add(org.springframework.http.HttpHeaders.CONTENT_TYPE, "application/pdf");
FileSystemResource fsr = new FileSystemResource(target);
HttpEntity<FileSystemResource> fileEntity = new HttpEntity<FileSystemResource>(
fsr, fileHeaders);
String filename = target.getName();
mv.add(filename, fileEntity);
HttpEntity<UploadRequestObject> ereq = new HttpEntity<UploadRequestObject>(req, getRequestHeaders());
ResponseEntity<UploadResponseObject> res = this.rt.postForEntity(url, ereq, UploadResponseObject.class);
return res.getBody();
}
private List<HttpMessageConverter<?>> getMessageConverter() {
List<HttpMessageConverter<?>> messageConverters = new ArrayList<HttpMessageConverter<?>>();
Jaxb2Marshaller jaxb2Marshaller = new Jaxb2Marshaller();
jaxb2Marshaller.setClassesToBeBound(UploadResponseObject.class);
MarshallingHttpMessageConverter mhmc = new MarshallingHttpMessageConverter(jaxb2Marshaller);
List<org.springframework.http.MediaType> supportedMediaTypes = new ArrayList<org.springframework.http.MediaType>();
supportedMediaTypes.add(new org.springframework.http.MediaType("application", "company+xml"));
mhmc.setSupportedMediaTypes(supportedMediaTypes);
messageConverters.add(mhmc);
// Add Form and Part converters
FormHttpMessageConverter fmc = new FormHttpMessageConverter();
fmc.addPartConverter(new Jaxb2RootElementHttpMessageConverter());
messageConverters.add(fmc);
return messageConverters;
}
When the line below is executed from the client code,
ResponseEntity<UploadResponseObject> res= this.rt.postForEntity(url, ereq, UploadResponseObject.class);
the following exception is thrown
org.springframework.web.client.RestClientException: Could not write request: no suitable HttpMessageConverter found for request type [org..types.UploadRequestObject] and content type [application/company+xml]
Please advise on the changes needed to make the client-side code work.
After much trial and error, I was able to find a solution.
Client side code:
#Autowired
private RestTemplate rt;
public UploadResponseObject callMultipartUploadOperation(UploadRequestObject req, java.io.File target) throws Exception {
String url = "http://<host-name>:<port>/service-name/upload";
MultiValueMap<String, Object> mv = new LinkedMultiValueMap<String, Object>();
this.rt = new RestTemplate();
this.rt.setMessageConverters(getMessageConverter());
String id = <random number generated from 1 to 50000>;
// Add xml entity
org.springframework.http.HttpHeaders xmlFileHeaders = new org.springframework.http.HttpHeaders();
xmlFileHeaders.add(org.springframework.http.HttpHeaders.CONTENT_TYPE, "application/company+xml");
HttpEntity<String> xmlFile = new HttpEntity<String>(createXMLString(req), xmlFileHeaders);
mv.add(id + ".xml", xmlFile);
// Add pdf file
org.springframework.http.HttpHeaders fileHeaders = new org.springframework.http.HttpHeaders();
fileHeaders.add(org.springframework.http.HttpHeaders.CONTENT_TYPE, "application/pdf");
FileSystemResource fsr = new FileSystemResource(target);
HttpEntity<FileSystemResource> fileEntity = new HttpEntity<FileSystemResource>(
fsr, fileHeaders);
String filename = target.getName();
mv.add(filename, fileEntity);
HttpEntity<UploadRequestObject> ereq = new HttpEntity<UploadRequestObject>(req, getRequestHeaders());
ResponseEntity<UploadResponseObject> res = this.rt.postForEntity(url, ereq, UploadResponseObject.class);
return res.getBody();
}
Message converters:
private List<HttpMessageConverter<?>> getMessageConverter() {
List<HttpMessageConverter<?>> messageConverters = new ArrayList<HttpMessageConverter<?>>();
Jaxb2Marshaller jaxb2Marshaller = new Jaxb2Marshaller();
jaxb2Marshaller.setClassesToBeBound(UploadResponseObject.class);
MarshallingHttpMessageConverter mhmc = new MarshallingHttpMessageConverter(jaxb2Marshaller);
List<org.springframework.http.MediaType> supportedMediaTypes = new ArrayList<org.springframework.http.MediaType>();
supportedMediaTypes.add(new org.springframework.http.MediaType("application", "company+xml"));
supportedMediaTypes.add(new org.springframework.http.MediaType("multipart", "form-data"));
mhmc.setSupportedMediaTypes(supportedMediaTypes);
messageConverters.add(mhmc);
// Add Form and Part converters
FormHttpMessageConverter fmc = new FormHttpMessageConverter();
fmc.addPartConverter(new Jaxb2RootElementHttpMessageConverter());
fmc.addPartConverter(new ResourceHttpMessageConverter());
messageConverters.add(fmc);
return messageConverters;
}
Request headers:
private org.springframework.http.HttpHeaders getRequestHeaders(String contentType) throws Exception {
....
.....
org.springframework.http.HttpHeaders httpHeaders = new org.springframework.http.HttpHeaders();
httpHeaders.set("Accept", "applicaiton/company+xml");
httpHeaders.set("Content-Type", "multipart/form-data");
String consumer = "<AppUserId>";
httpHeaders.set("consumer", consumer);
String tmStamp= getCurrentTimeStamp();
httpHeaders.set("timestamp", tmStamp);
...
...
return httpHeaders;
}
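For completeness, the more common way to have RestTemplate write a multipart body is to post the MultiValueMap of parts itself rather than the JAXB object; a minimal sketch (not the solution above), assuming the mv, rt and url variables from the method above:
org.springframework.http.HttpHeaders multipartHeaders = new org.springframework.http.HttpHeaders();
multipartHeaders.setContentType(org.springframework.http.MediaType.MULTIPART_FORM_DATA);
HttpEntity<MultiValueMap<String, Object>> multipartRequest =
        new HttpEntity<MultiValueMap<String, Object>>(mv, multipartHeaders);
// the FormHttpMessageConverter serializes each entry of mv as one part of the multipart body
ResponseEntity<UploadResponseObject> res = rt.postForEntity(url, multipartRequest, UploadResponseObject.class);
return res.getBody();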

How to transfer flowfiles one by one using a custom processor in NiFi

I'm programming a custom processor in NiFi 1.3.
The processor executes an SQL query, reads the result set, transforms every row into a JSON document, and stores it in an ArrayList; finally, it transfers every 1000 documents (the fetchSize parameter) to a flowfile. This works for me, but it sends all flowfiles at once.
What I want is for it to transfer each flowfile independently when I call the transferFlowFile method, without waiting for the end of the onTrigger method to transfer everything at once.
Here is the code:
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
FlowFile fileToProcess = null;
if (context.hasIncomingConnection()) {
fileToProcess = session.get();
if (fileToProcess == null && context.hasNonLoopConnection()) {
return;
}
}
final ResultSet resultSet = st.executeQuery();
final ResultSetMetaData meta = resultSet.getMetaData();
final int nrOfColumns = meta.getColumnCount();
List<Map<String, Object>> documentList = new ArrayList<>();
while (resultSet.next()) {
final AtomicLong nrOfRows = new AtomicLong(0L);
cpt++;
Map<String, Object> item = new HashMap<>();
for (int i = 1; i <= nrOfColumns; i++) {
int javaSqlType = meta.getColumnType(i);
String nameOrLabel = StringUtils.isNotEmpty(meta.getColumnLabel(i)) ? meta.getColumnLabel(i)
: meta.getColumnName(i);
Object value = null;
value = resultSet.getObject(i);
if (value != null) {
item.put(nameOrLabel, value.toString());
}
}
Document document = new Document(item);
documentList.add(document);
if (fetchSize!=0 && cpt % fetchSize == 0) {
FlowFile flowFile = session.create();
transferFlowFile(flowFile, session, documentList, fileToProcess, nrOfRows, stopWatch);
}
}
if (!documentList.isEmpty()) {
final AtomicLong nrOfRows = new AtomicLong(0L);
FlowFile flowFile = session.create();
transferFlowFile(flowFile, session, documentList, fileToProcess, nrOfRows, stopWatch);
}
}
public void transferFlowFile(FlowFile flowFile, ProcessSession session, List<Map<String, Object>> documentList,
FlowFile fileToProcess, AtomicLong nrOfRows, StopWatch stopWatch) {
flowFile = session.write(flowFile, out -> {
ObjectMapper mapper = new ObjectMapper();
IOUtils.write(mapper.writeValueAsBytes(documentList), out);
});
documentList.clear();
flowFile = session.putAttribute(flowFile, CoreAttributes.MIME_TYPE.key(), "application/json");
session.getProvenanceReporter().modifyContent(flowFile, "Retrieved " + nrOfRows.get() + " rows",
stopWatch.getElapsed(TimeUnit.MILLISECONDS));
session.transfer(flowFile, REL_SUCCESS);
}
Call session.commit() after
session.transfer(flowFile, REL_SUCCESS)
Any flowfiles created since the last commit (or since the beginning, if there has never been a commit) will be transferred when the session is committed.
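A minimal sketch of the end of transferFlowFile with the commit added (same variables as in the question; note that any incoming flowfile obtained from session.get() must already have been transferred or removed before committing):
flowFile = session.putAttribute(flowFile, CoreAttributes.MIME_TYPE.key(), "application/json");
session.getProvenanceReporter().modifyContent(flowFile, "Retrieved " + nrOfRows.get() + " rows",
        stopWatch.getElapsed(TimeUnit.MILLISECONDS));
session.transfer(flowFile, REL_SUCCESS);
session.commit(); // pushes this flowfile downstream now instead of at the end of onTrigger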
