Why is an exception in Spring Batch's AsyncItemProcessor caught by the SkipListener's onSkipInWrite method?

I'm writing a Spring Boot application that starts up, gathers and converts millions of database entries into a new streamlined JSON format, and then sends them all to a GCP PubSub topic. I'm attempting to use Spring Batch for this, but I'm running into trouble implementing fault tolerance for my process. The database is rife with data quality issues, and sometimes my conversions to JSON will fail. When failures occur, I don't want the job to immediately quit; I want it to continue processing as many records as it can and, before completion, report exactly which records failed so that I and/or my team can examine these problematic database entries.
To achieve this, I've attempted to use Spring Batch's SkipListener interface. But I'm also using an AsyncItemProcessor and an AsyncItemWriter in my process, and even though the exceptions are occurring during the processing, the SkipListener's onSkipInWrite() method is catching them - rather than the onSkipInProcess() method. And unfortunately, the onSkipInWrite() method doesn't have access to the original database entity, so I can't store its ID in my list of problematic DB entries.
Have I misconfigured something? Is there any other way to gain access to the objects from the reader that failed the processing step of an AsyncItemProcessor?
Here's what I've tried...
I have a singleton Spring Component where I store how many DB entries I've successfully processed along with up to 20 problematic database entries.
@Component
@Getter //lombok
public class ProcessStatus {

    private int processed;
    private int failureCount;
    private final List<UnexpectedFailure> unexpectedFailures = new ArrayList<>();

    public void incrementProgress() { processed++; }

    public void logUnexpectedFailure(UnexpectedFailure failure) {
        failureCount++;
        unexpectedFailures.add(failure);
    }

    @Getter
    @AllArgsConstructor
    public static class UnexpectedFailure {
        private Throwable error;
        private DBProjection dbData;
    }
}
I have a Spring batch Skip Listener that's supposed to catch failures and update my status component accordingly:
@AllArgsConstructor
public class ConversionSkipListener implements SkipListener<DBProjection, Future<JsonMessage>> {

    private ProcessStatus processStatus;

    @Override
    public void onSkipInRead(Throwable error) {}

    @Override
    public void onSkipInProcess(DBProjection dbData, Throwable error) {
        processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(error, dbData));
    }

    @Override
    public void onSkipInWrite(Future<JsonMessage> messageFuture, Throwable error) {
        //This is getting called instead!! Even though the exception happened during processing :(
        //But I have no access to the original DBProjection data here, and messageFuture.get() gives me null.
    }
}
And then I've configured my job like this:
@Configuration
public class ConversionBatchJobConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private TaskExecutor processThreadPool;

    @Bean
    public SimpleCompletionPolicy processChunkSize(@Value("${commit.chunk.size:100}") Integer chunkSize) {
        return new SimpleCompletionPolicy(chunkSize);
    }

    @Bean
    @StepScope
    public ItemStreamReader<DbProjection> dbReader(
            MyDomainRepository myDomainRepository,
            @Value("#{jobParameters[pageSize]}") Integer pageSize,
            @Value("#{jobParameters[limit]}") Integer limit) {
        RepositoryItemReader<DbProjection> myDomainRepositoryReader = new RepositoryItemReader<>();
        myDomainRepositoryReader.setRepository(myDomainRepository);
        myDomainRepositoryReader.setMethodName("findActiveDbDomains"); //A native query
        myDomainRepositoryReader.setArguments(new ArrayList<Object>() {{
            add("ACTIVE");
        }});
        myDomainRepositoryReader.setSort(new HashMap<String, Sort.Direction>() {{
            put("update_date", Sort.Direction.ASC);
        }});
        myDomainRepositoryReader.setPageSize(pageSize);
        myDomainRepositoryReader.setMaxItemCount(limit);
        // myDomainRepositoryReader.setSaveState(false); <== haven't figured out what this does yet
        return myDomainRepositoryReader;
    }

    @Bean
    @StepScope
    public ItemProcessor<DbProjection, JsonMessage> dataConverter(DataRetrievalService dataRetrievalService) {
        //Sometimes throws exceptions when DB data is exceptionally weird, bad, or missing
        return new DbProjectionToJsonMessageConverter(dataRetrievalService);
    }

    @Bean
    @StepScope
    public AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter(
            ItemProcessor<DbProjection, JsonMessage> dataConverter) throws Exception {
        AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter = new AsyncItemProcessor<>();
        asyncDataConverter.setDelegate(dataConverter);
        asyncDataConverter.setTaskExecutor(processThreadPool);
        asyncDataConverter.afterPropertiesSet();
        return asyncDataConverter;
    }

    @Bean
    @StepScope
    public ItemWriter<JsonMessage> jsonPublisher(GcpPubsubPublisherService publisherService) {
        return new JsonMessageWriter(publisherService);
    }

    @Bean
    @StepScope
    public AsyncItemWriter<JsonMessage> asyncJsonPublisher(ItemWriter<JsonMessage> jsonPublisher) throws Exception {
        AsyncItemWriter<JsonMessage> asyncJsonPublisher = new AsyncItemWriter<>();
        asyncJsonPublisher.setDelegate(jsonPublisher);
        asyncJsonPublisher.afterPropertiesSet();
        return asyncJsonPublisher;
    }

    @Bean
    public Step conversionProcess(SimpleCompletionPolicy processChunkSize,
                                  ItemStreamReader<DbProjection> dbReader,
                                  AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter,
                                  AsyncItemWriter<JsonMessage> asyncJsonPublisher,
                                  ProcessStatus processStatus,
                                  @Value("${conversion.failure.limit:20}") int maximumFailures) {
        return stepBuilderFactory.get("conversionProcess")
                .<DbProjection, Future<JsonMessage>>chunk(processChunkSize)
                .reader(dbReader)
                .processor(asyncDataConverter)
                .writer(asyncJsonPublisher)
                .faultTolerant()
                .skipPolicy(new MyCustomConversionSkipPolicy(maximumFailures))
                // ^ for now this returns true for everything until 20 failures
                .listener(new ConversionSkipListener(processStatus))
                .build();
    }

    @Bean
    public Job conversionJob(Step conversionProcess) {
        return jobBuilderFactory.get("conversionJob")
                .start(conversionProcess)
                .build();
    }
}

This is because the future wrapped by the AsyncItemProcessor is only unwrapped in the AsyncItemWriter, so any exception that might occur at that time is seen as a write exception instead of a processing exception. That's why onSkipInWrite is called instead of onSkipInProcess.
This is actually a known limitation of this pattern, which is documented in the Javadoc of the AsyncItemProcessor. Here is an excerpt:
Because the Future is typically unwrapped in the ItemWriter,
there are lifecycle and stats limitations (since the framework doesn't know
what the result of the processor is).
While not an exhaustive list, things like StepExecution.filterCount will not
reflect the number of filtered items and
itemProcessListener.onProcessError(Object, Exception) will not be called.
The Javadoc states that the list is not exhaustive, and the side effect you are experiencing with the SkipListener is one of these limitations.
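If you need the failing DBProjection and its error despite that limitation, one possible workaround is to record the failure inside the delegate processor itself (or a thin decorator around it) before re-throwing, so the skip machinery still works but the listener no longer has to recover the original item. The following is only a minimal sketch of such a decorator, my own addition rather than part of the original configuration; it uses the DbProjection spelling from the job config (the question mixes DbProjection and DBProjection) and assumes ProcessStatus is made thread-safe, since the AsyncItemProcessor runs on the task executor's threads.

import org.springframework.batch.item.ItemProcessor;

public class FailureTrackingProcessor implements ItemProcessor<DbProjection, JsonMessage> {

    private final ItemProcessor<DbProjection, JsonMessage> delegate;
    private final ProcessStatus processStatus;

    public FailureTrackingProcessor(ItemProcessor<DbProjection, JsonMessage> delegate,
                                    ProcessStatus processStatus) {
        this.delegate = delegate;
        this.processStatus = processStatus;
    }

    @Override
    public JsonMessage process(DbProjection item) throws Exception {
        try {
            return delegate.process(item);
        }
        catch (Exception e) {
            // Record the original entity and the error while we still have both.
            // (logUnexpectedFailure should be synchronized: this runs on multiple threads.)
            processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(e, item));
            // Re-throw so the fault-tolerant step still counts and skips the failure.
            throw e;
        }
    }
}

You would set this decorator as the AsyncItemProcessor's delegate instead of dataConverter. The exception still surfaces when the Future is unwrapped in the writer, so the skip policy and limit continue to apply there; the listener just no longer needs access to the original item.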

Related

Using DelegatingSessionFactory with RemoteFileTemplate.execute(SessionCallback)

I'm trying to declare multiple SFTP sessions, wrap them in a DelegatingSessionFactory, then later use SftpRemoteFileTemplate.execute(...) during a cron job.
On the execute part of things, the code is very simple, it is already used for a single session, but I want to expand it to multiple possible sessions.
Below I extended my single session code. I just copied the methods for reference. At the end I'll show how I think the new methods should look.
public class XSession extends SftpSession {

    @Scheduled(cron = "${sftp.scan.x.schedule}")
    void scan() {
        List<FileHistoryEntity> fileList = template.execute(this::processFiles);
        ...
    }

    private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
        List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
        ...
    }
}
But now I have multiple sessions. So I declare the following class:
@Slf4j
@Configuration
@RequiredArgsConstructor
public class DelegateSftpSessionHandler {

    private final SessionFactory<ChannelSftp.LsEntry> session1;
    private final SessionFactory<ChannelSftp.LsEntry> session2;
    private final SessionFactory<ChannelSftp.LsEntry> session3;
    private final SessionFactory<ChannelSftp.LsEntry> session4;
    private final SessionFactory<ChannelSftp.LsEntry> session5;

    @RequiredArgsConstructor
    public enum DelegateSessionConfig {
        SESSION_1("IN_REALITY_A_RELEVANT_NAME_1"),
        SESSION_2("IN_REALITY_A_RELEVANT_NAME_2"),
        SESSION_3("IN_REALITY_A_RELEVANT_NAME_3"),
        SESSION_4("IN_REALITY_A_RELEVANT_NAME_4"),
        SESSION_5("IN_REALITY_A_RELEVANT_NAME_5");

        public final String threadKey;
    }

    @Bean
    @Primary
    public DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory() {
        Map<Object, SessionFactory<ChannelSftp.LsEntry>> sessionMap = new HashMap<>();
        sessionMap.put(DelegateSessionConfig.SESSION_1.threadKey, session1);
        sessionMap.put(DelegateSessionConfig.SESSION_2.threadKey, session2);
        sessionMap.put(DelegateSessionConfig.SESSION_3.threadKey, session3);
        sessionMap.put(DelegateSessionConfig.SESSION_4.threadKey, session4);
        sessionMap.put(DelegateSessionConfig.SESSION_5.threadKey, session5);
        DefaultSessionFactoryLocator<ChannelSftp.LsEntry> sessionLocator = new DefaultSessionFactoryLocator<>(sessionMap);
        return new DelegatingSessionFactory<>(sessionLocator);
    }

    @Bean
    SftpRemoteFileTemplate ftpRemoteFileTemplate(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf) {
        return new SftpRemoteFileTemplate(dsf);
    }
}
Thing is, I have no idea how any of this works, and the Spring SFTP/FTP documentation is by no means clear. The code is virtually undocumented, and I'm just guessing. I think I have to do the following:
public class XSession extends SftpSession {

    @Autowired
    DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory;

    @Autowired
    SftpRemoteFileTemplate template;

    @Scheduled(cron = "${sftp.scan.x.schedule}") // x == SESSION_1
    @Async // for thread key
    void scan() {
        delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
        // because thread key changes the session globally? So I don't need to specify
        // which session this template is working with???
        List<FileHistoryEntity> fileList = template.execute(this::processFiles);
        ...
        delegatingSessionFactory.clearThreadKey();
    }

    private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
        List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
        ...
    }
}
I'm basing what I'm saying on the following link: github spring integration test.
Honestly, I hardly understand what is happening, but it seems like setting the thread key changes the session globally.
My only other idea is to just ... create the RemoteFileTemplate on demand
public static SftpRemoteFileTemplate getTemplateFor(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf,
                                                    DelegateSessionConfig session) {
    return new SftpRemoteFileTemplate(dsf.getFactoryLocator().getSessionFactory(session.threadKey));
}
It does not set it globally. That's how a ThreadLocal variable works: you set a value in some thread and only that thread can see it. If you use the same object concurrently, other threads don't see that value because it does not belong to their thread state.
I'm not sure what your concern is, but the pattern of extending SftpSession for custom logic is not right. You should consider using SftpRemoteFileTemplate.execute(SessionCallback<F, T> callback) instead; the thread key still has to be set on the DelegatingSessionFactory beforehand, in the same thread that is going to call that execute().
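A minimal sketch of that pattern, assuming the delegatingSessionFactory and template beans from the question and the FileHistoryEntity and DelegateSessionConfig types; the class name Session1Scanner is hypothetical. The only real requirement is that setThreadKey, execute, and clearThreadKey all run on the same thread.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.jcraft.jsch.ChannelSftp;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.file.remote.session.DelegatingSessionFactory;
import org.springframework.integration.file.remote.session.Session;
import org.springframework.integration.sftp.session.SftpRemoteFileTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class Session1Scanner {

    @Autowired
    private DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory;

    @Autowired
    private SftpRemoteFileTemplate template;

    @Scheduled(cron = "${sftp.scan.x.schedule}")
    public void scan() {
        // Bind SESSION_1 to the current thread; the template resolves the delegate
        // SessionFactory from this key when execute() runs.
        delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
        try {
            List<FileHistoryEntity> fileList = template.execute(this::processFiles);
            // ... do something with fileList
        }
        finally {
            // Always clear the key so the thread is clean for the next scheduled run.
            delegatingSessionFactory.clearThreadKey();
        }
    }

    private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) throws IOException {
        // ... list the remote directory, filter, download, map to FileHistoryEntity
        return new ArrayList<>();
    }
}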

How to use StepListenerSupport

I am trying to stop a running job based on a timeout value. I am following a post found here, but I am not sure how to add this listener.
Here is the listener implementation
public class StopListener extends StepListenerSupport {

    public static final Logger LOG = LoggerFactory.getLogger(StopListener.class);

    private static final int TIMEOUT = 30;

    private StepExecution stepExecution;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public void afterChunk(ChunkContext context) {
        if (timeout(context)) {
            this.stepExecution.setTerminateOnly();
        }
    }

    private boolean timeout(ChunkContext chunkContext) {
        LOG.info("----- TIMEOUT-----");
        Date startTime = chunkContext.getStepContext().getStepExecution().getJobExecution().getStartTime();
        Date now = new Date();
        return Duration.between(startTime.toInstant(), now.toInstant()).toMinutes() > TIMEOUT;
    }
}
Here is my step
@Bean
public Step dataFilterStep() {
    return stepBuilderFactory.get("dataFilterStep")
            .<UserInfo, UserInfo> chunk(10)
            .reader(dataFilterItemReader())
            .processor(dataFilterItemProcessor())
            .writer(dataFilterWriter())
            .listener(new StopListener())
            .build();
}
But I am getting an error saying "The method listener(Object) is ambiguous for the type SimpleStepBuilder<UserInfo,UserInfo>". Any help would be really appreciated!
On one hand, StepListenerSupport is a polymorphic object: it implements 7 interfaces. On the other hand, the step builder provides several overloaded .listener() methods to accept different types of listeners. That's why, when you pass your StopListener in .listener(new StopListener()), the type of listener is ambiguous.
What you can do is cast the listener to the type you want, something like:
.listener(((ChunkListener) new StopListener()))
However, by following the principle of least power [1][2], I would recommend changing your StopListener to implement only the interface required for the functionality. In your case, you seem to want to stop the job after a given timeout in afterChunk, so you can make your listener implement ChunkListener and not extend StepListenerSupport.
[1]: The Rule of Least Power
[2]: The Principle of Least Power
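For example, here is a minimal sketch of that suggestion, assuming Spring Batch 4.x (where getStartTime() returns a java.util.Date, as in the question's code):

import java.time.Duration;
import java.util.Date;

import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.scope.context.ChunkContext;

public class StopListener implements ChunkListener {

    private static final int TIMEOUT_MINUTES = 30;

    @Override
    public void beforeChunk(ChunkContext context) {
        // nothing to do
    }

    @Override
    public void afterChunk(ChunkContext context) {
        StepExecution stepExecution = context.getStepContext().getStepExecution();
        Date startTime = stepExecution.getJobExecution().getStartTime();
        long elapsedMinutes = Duration.between(startTime.toInstant(), new Date().toInstant()).toMinutes();
        if (elapsedMinutes > TIMEOUT_MINUTES) {
            // Asks the framework to stop the step (and job) gracefully at the next opportunity.
            stepExecution.setTerminateOnly();
        }
    }

    @Override
    public void afterChunkError(ChunkContext context) {
        // nothing to do
    }
}

Because this StopListener implements only ChunkListener, .listener(new StopListener()) resolves to the ChunkListener overload and no cast is needed.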

Cache Kafka Records using Caffeine Cache Springboot

I am trying to cache Kafka records for a 3-minute interval, after which they should expire and be removed from the cache.
Each incoming record, fetched by a Kafka consumer written in Spring Boot, needs to be put into the cache first; then, if a subsequent record matches an entry already in the cache, I need to discard it as a duplicate.
I have tried using Caffeine cache as below,
@EnableCaching
public class AppCacheManagerConfig {

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Collections.singletonList(bookCache));
        return cacheManager;
    }

    private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
        return new CaffeineCache(name, Caffeine.newBuilder()
                .expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
                .maximumSize(100)
                .ticker(ticker)
                .build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
and my Kafka Consumer is as below,
@Autowired
CachingServiceImpl cachingService;

@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
    logger.info("Recieved Message: " + message.getPayload());
    try {
        boolean approveTopic = false;
        boolean duplicateRecord = false;
        if (cachingService.isDuplicateCheck(declineRecord)) {
            //do something with records
        }
        else {
            //do something with records
        }
        cachingService.putInCache(xmlJSONObj, declineRecord, time);
and my caching service is as below,
@Component
public class CachingServiceImpl {

    private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
    public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
        logger.info("Record is Cached for 3 minutes interval check", declineRecord);
        cacheManager.getCache("declineRecords").put(declineRecord, time);
        return declineRecord;
    }

    public boolean isDuplicateCheck(String declineRecord) {
        if (null != cacheManager.getCache("declineRecords").get(declineRecord)) {
            return true;
        }
        return false;
    }
}
But each time a record comes into the consumer, my cache is always empty. It's not holding the records.
Modifications Done:
After going through the suggestions and some more R&D, I added the configuration below and removed some of the earlier logic. Now the caching is working as expected, but the duplicate check fails when all three consumers receive the same records.
@Configuration
public class AppCacheManagerConfig {

    public static Cache<String, Object> jsonCache =
            Caffeine.newBuilder().expireAfterWrite(3, TimeUnit.MINUTES)
                    .maximumSize(10000).recordStats().build();

    @Bean
    public CacheLoader<Object, Object> cacheLoader() {
        CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
            @Override
            public Object load(Object key) throws Exception {
                return null;
            }

            @Override
            public Object reload(Object key, Object oldValue) throws Exception {
                return oldValue;
            }
        };
        return cacheLoader;
    }
}
Now I am using the above cache with manual put and get.
I guess you're trying to implement record deduplication for Kafka.
Here is the similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the @Cacheable annotation marks data getters and setters so that caching is added through AOP, while in your code you clearly implement some low-level cache-updating logic of your own.
At least the following changes may help you:
Remove @Cacheable. You don't need it because you work with the cache manually, so it may be a source of conflicts (especially since you use sync = true). If it helps, remove @EnableCaching as well - it enables support for cache-related Spring annotations which you don't need here. (See the sketch after this list.)
Try removing the Ticker bean and the corresponding parameters from the other beans. It should not be harmful with your configuration, but it is usually only helpful for tests; there is no need to define it otherwise.
Double-check what declineRecord is. If it's a serialized object, ensure that serialization works properly.
Add recordStats() to the cache and output stats() to the log for further analysis.
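For the race between concurrent consumers (several threads seeing the same record before any of them has put it in the cache), one option is to make the check-and-put a single atomic operation on the underlying Caffeine cache via asMap().putIfAbsent(). A minimal sketch, assuming a String key per record as in the question; the class name DeduplicationService is hypothetical:

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class DeduplicationService {

    // Same idea as the jsonCache in the question: entries expire 3 minutes after write.
    private final Cache<String, Long> seenRecords = Caffeine.newBuilder()
            .expireAfterWrite(3, TimeUnit.MINUTES)
            .maximumSize(10_000)
            .build();

    // Returns true if this key was already seen within the last 3 minutes.
    // putIfAbsent is atomic, so even if several consumer threads call this
    // concurrently with the same key, exactly one of them gets "not a duplicate".
    public boolean isDuplicate(String declineRecord) {
        return seenRecords.asMap().putIfAbsent(declineRecord, System.currentTimeMillis()) != null;
    }
}

Note that this only helps if the three consumers are threads in the same JVM (for example, concurrency = 3 on one listener container); separate application instances each have their own in-memory cache, and deduplication across instances needs a shared store instead.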

Spring Batch Runtime Exception in Item processor

I am learning Spring Batch and trying to understand how the item processor behaves when an exception occurs.
I am reading data from a CSV file in chunks of 3 records, processing it, and writing it to a database.
My CSV file:
Jill,Doe
Joe,Doe
Justin,Doe
Jane,Doe
John,Doem
Jill,Doe
Joe,Doe
Justin,Doe
Jane,Doe
Batch configuration, reading items in chunks of 3, with a skip limit of 2:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public FlatFileItemReader<Person> reader() {
        return new FlatFileItemReaderBuilder<Person>().name("personItemReader")
                .resource(new ClassPathResource("sample-data.csv")).delimited()
                .names(new String[] { "firstName", "lastName" })
                .fieldSetMapper(new BeanWrapperFieldSetMapper<Person>() {
                    {
                        setTargetType(Person.class);
                    }
                }).build();
    }

    @Bean
    public PersonItemProcessor processor() {
        return new PersonItemProcessor();
    }

    @Bean
    public JdbcBatchItemWriter<Person> writer(DataSource dataSource) {
        return new JdbcBatchItemWriterBuilder<Person>()
                .itemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>())
                .sql("INSERT INTO person (first_name, last_name) VALUES (:firstName, :lastName)")
                .dataSource(dataSource).build();
    }

    @Bean
    public Job importUserJob(JobCompletionNotificationListener listener, Step step1) {
        return jobBuilderFactory.get("importUserJob").incrementer(new RunIdIncrementer()).listener(listener)
                .flow(step1).end().build();
    }

    @Bean
    public Step step1(JdbcBatchItemWriter<Person> writer) {
        return stepBuilderFactory.get("step1").<Person, Person> chunk(3).reader(reader()).processor(processor())
                .writer(writer).faultTolerant().skipLimit(2).skip(Exception.class).build();
    }
}
I am trying to simulate an exception by throwing one manually for one record in my item processor:
public class PersonItemProcessor implements ItemProcessor<Person, Person> {

    private static final Logger log = LoggerFactory.getLogger(PersonItemProcessor.class);

    @Override
    public Person process(final Person person) throws Exception {
        final String firstName = person.getFirstName().toUpperCase();
        final String lastName = person.getLastName().toUpperCase();
        final Person transformedPerson = new Person(firstName, lastName);
        log.info("Converting (" + person + ") into (" + transformedPerson + ")");
        if (person.getLastName().equals("Doem"))
            throw new Exception("DOOM");
        return transformedPerson;
    }
}
Now, as per the skip limit, when the exception is thrown the chunk is reprocessed, the item that throws the error is skipped, and the item writer inserts all records into the DB except the one record with the exception.
This is all fine, because my processor just converts lowercase names to uppercase, and it can be run many times without impact.
But let's assume my item processor is calling a web service and sending data,
and some exception is thrown after the web service has already been called successfully. Then the remaining data in the chunk will be processed again (and the web service called again).
I don't want to call the web service again, because that amounts to sending duplicate data, and the web service system cannot identify duplicates.
How do I handle such a case? One option is to not skip the exception, but then the one record in the chunk still will not make it to the item writer even though the processor had called the web service, so that is not correct either.
Another option is a chunk size of 1, but that may not be efficient for processing thousands of records.
What are the other options?
According to your description, your item processor is not idempotent. However, the Fault tolerance section of the documentation says that the item processor should be idempotent when using a fault tolerant step. Here is an excerpt:
If a step is configured to be fault tolerant (typically by using skip or retry processing), any ItemProcessor used should be implemented in a way that is idempotent.
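One common way to satisfy that requirement is to have the processor remember which items it has already pushed to the web service, so a re-run during a chunk scan becomes a no-op for those items. The following is only a rough sketch under the assumption that each Person has a stable natural key; the wrapper class and the WebServiceClient type are hypothetical, not from the question.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.batch.item.ItemProcessor;

// Hypothetical guard that makes a non-idempotent processor safe to re-run:
// the expensive side effect (the web service call) happens at most once per key.
public class IdempotentPersonProcessor implements ItemProcessor<Person, Person> {

    private final WebServiceClient webServiceClient; // assumed client for the external call
    private final Set<String> alreadySent = ConcurrentHashMap.newKeySet();

    public IdempotentPersonProcessor(WebServiceClient webServiceClient) {
        this.webServiceClient = webServiceClient;
    }

    @Override
    public Person process(Person person) throws Exception {
        Person transformed = new Person(person.getFirstName().toUpperCase(), person.getLastName().toUpperCase());
        String key = person.getFirstName() + "|" + person.getLastName(); // assumed natural key
        // Only call the web service the first time we see this item; when the chunk
        // is reprocessed after a skip, previously sent items are not sent again.
        if (alreadySent.add(key)) {
            webServiceClient.send(transformed);
        }
        return transformed;
    }
}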

Spring Kafka global transaction ID stays open after program ends

I am creating a Kafka Spring producer under Spring Boot which will send data to Kafka and then write to a database; I want all that work to be in one transaction. I am new to Kafka and no expert on Spring, and am having some difficulty. Any pointers much appreciated.
So far my code writes to Kafka successfully in a loop. I have not yet set up
the DB, but have proceeded to set up global transactions by adding a transactionIdPrefix to the producerFactory in the configuration:
producerFactory.setTransactionIdPrefix("MY_SERVER");
and added @Transactional to the method that does the Kafka send. Eventually I plan to do my DB work in that same method.
Problem: the code runs great the first time. But if I stop the program, even cleanly, I find that the code hangs the second time I run it, as soon as it enters the @Transactional method. If I comment out the @Transactional, it enters the method but hangs on the Kafka template send().
The problem seems to be the transaction ID. If I change the prefix and rerun, the program runs fine again the first time but hangs when I run it again, until a new prefix is chosen. Since after a restart the transaction ID counter starts at zero, if the transaction ID prefix does not change then the same transaction ID will be used upon restart.
It seems to me that the original transaction ID is still open on the server and was never committed. (I can read the data off the topic using the console consumer, but that will read uncommitted.) But if that is the case, how do I get Spring to commit the transaction? I am thinking my configuration must be wrong. Or is the issue possibly that transaction IDs can never be reused? (In which case, how does one solve that?)
Here is my relevant code. Config is:
@SpringBootApplication
public class MYApplication {

    @Autowired
    private static ChangeSweeper changeSweeper;

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configProps);
        producerFactory.setTransactionIdPrefix("MY_SERVER");
        return producerFactory;
    }

    @Bean
    public KafkaTransactionManager<String, String> KafkaTransactionManager() {
        return new KafkaTransactionManager<String, String>(producerFactory());
    }

    @Bean(name = "kafkaProducerTemplate")
    public KafkaTemplate<String, String> kafkaProducerTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
And the method that does the transaction is:
@Transactional
public void send(final List<Record> records) {
    logger.debug("sending {} records; batchSize={}; topic={}", records.size(), batchSize, kafkaTopic);
    // Divide the record set into batches of size batchSize and send each batch with a kafka transaction:
    for (int batchStartIndex = 0; batchStartIndex < records.size(); batchStartIndex += batchSize) {
        int batchEndIndex = Math.min(records.size() - 1, batchStartIndex + batchSize - 1);
        List<Record> nextBatch = records.subList(batchStartIndex, batchEndIndex);
        logger.debug("## batch is from " + batchStartIndex + " to " + batchEndIndex);
        for (Record record : nextBatch) {
            kafkaProducerTemplate.send(kafkaTopic, record.getKey().toString(), record.getData().toString());
            logger.debug("Sending> " + record);
        }
        // I will put the DB writes here
    }
}
This works fine for me no matter how many times I run it (but I have to run 3 broker instances on my local machine because transactions require that by default)...
@SpringBootApplication
@EnableTransactionManagement
public class So47817034Application {

    public static void main(String[] args) {
        SpringApplication.run(So47817034Application.class, args).close();
    }

    private final CountDownLatch latch = new CountDownLatch(2);

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("foo");
            foo.send("bar");
            this.latch.await(10, TimeUnit.SECONDS);
        };
    }

    @Bean
    public KafkaTransactionManager<Object, Object> KafkaTransactionManager(KafkaProperties properties) {
        return new KafkaTransactionManager<Object, Object>(kafkaProducerFactory(properties));
    }

    @Bean
    public ProducerFactory<Object, Object> kafkaProducerFactory(KafkaProperties properties) {
        DefaultKafkaProducerFactory<Object, Object> factory =
                new DefaultKafkaProducerFactory<Object, Object>(properties.buildProducerProperties());
        factory.setTransactionIdPrefix("foo-");
        return factory;
    }

    @KafkaListener(id = "foo", topics = "so47817034")
    public void listen(String in) {
        System.out.println(in);
        this.latch.countDown();
    }

    @Component
    public static class Foo {

        @Autowired
        private KafkaTemplate<Object, Object> template;

        @Transactional
        public void send(String go) {
            this.template.send("so47817034", go);
        }
    }
}
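As a side note (not part of the original answer): if the intent in the question's send() method is one Kafka-local transaction per batch rather than a single Spring-managed transaction around the whole method, KafkaTemplate also provides executeInTransaction(), which begins and commits (or aborts, if the callback throws) a producer transaction around the callback without needing @Transactional. A rough sketch reusing the question's kafkaProducerTemplate, kafkaTopic, and Record type:

public void sendBatch(List<Record> nextBatch) {
    // Each call runs in its own Kafka transaction: it begins before the callback,
    // the sends are performed inside it, and it is committed when the callback returns.
    kafkaProducerTemplate.executeInTransaction(operations -> {
        for (Record record : nextBatch) {
            operations.send(kafkaTopic, record.getKey().toString(), record.getData().toString());
        }
        return null;
    });
}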
