How to roll back a child transaction if there is an exception in the parent transaction? - spring-boot

I have two transaction managers for two databases. I need to persist the same data into both databases. If one transaction fails, the other one needs to roll back. I have done it like below:
public interface DataService {
    void saveData();
}

@Service
public class DataServiceImpl implements DataService {

    @Autowired
    private DataRepository dataRepository;

    @Autowired
    private OrDataRepository orDataRepository;

    @Autowired
    @Qualifier("orService")
    private OrService orDataServiceImpl;

    @Override
    @Transactional(transactionManager = "transactionManager", rollbackFor = {RuntimeException.class})
    public void saveData() {
        Data data = new Data();
        data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
        data.setName("data");
        dataRepository.save(data);
        orDataServiceImpl.save();
        //throw new RuntimeException("");
    }
}
public interface OrService {
    void save();
}

@Service("orService")
public class OrDataServiceImpl implements OrService {

    @Autowired
    private OrDataRepository orDataRepository;

    @Override
    @Transactional(rollbackFor = {RuntimeException.class})
    public void save() {
        OrData data = new OrData();
        data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
        data.setName("ordata");
        orDataRepository.save(data);
    }
}
I have two transaction managers (entityManager & orEntityManager) for two different DBs.
If an exception is thrown in the OrDataServiceImpl save method, the data is not persisted in either DB. But if an exception is thrown in the DataServiceImpl saveData method, the data is still persisted into the OrData table.
I want to roll back the data in both DBs if there is any exception.
ChainedTransactionManager is deprecated, so I can't use it. Atomikos and Bitronix also can't be used due to some restrictions. Kindly suggest a better way to achieve a distributed transaction.

The code needs to be refactored. Edit the DataServiceImpl.saveData() method and comment out the orDataServiceImpl.save() line:
public void saveData() {
    Data data = new Data();
    data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
    data.setName("data");
    dataRepository.save(data);
    //orDataServiceImpl.save();
    //throw new RuntimeException("");
}
Refactor/Edit the OrDataService Interface
public interface OrDataService {
    void save(String uuid);
    void delete(String uuid); // will be used for the compensating transaction
}
Update the OrDataServiceImpl class to implement the above interface.
Write a new orchestration method and use a compensating transaction to roll back (a sketch follows the pseudo code below).
Pseudo code:
1. call OrDataServiceImpl.save()
2. if step 1 was successful
3.     -> call DataServiceImpl.saveData()
4. if exception at step 3
5.     -> call OrDataServiceImpl.delete()  // to roll back step 1
6. else if exception at step 1
7.     // do nothing
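A minimal sketch of such an orchestration method, assuming the refactored interfaces above (the OrchestrationService class name and the injected bean names are my own assumptions, not from the original answer):

@Service
public class OrchestrationService {

    @Autowired
    private DataService dataServiceImpl;

    @Autowired
    @Qualifier("orService")
    private OrDataService orDataServiceImpl;

    public void saveToBothDatabases() {
        // Key identifying the row written to the second DB, so the compensating
        // transaction can delete it again if step 2 fails.
        String uuid = UUID.randomUUID().toString().substring(1, 5);

        // Step 1: save into the second DB in its own transaction.
        orDataServiceImpl.save(uuid);

        try {
            // Step 2: save into the first DB in its own transaction.
            dataServiceImpl.saveData();
        } catch (RuntimeException e) {
            // Compensating transaction: step 2 failed, so undo step 1.
            orDataServiceImpl.delete(uuid);
            throw e;
        }
        // If step 1 itself fails, its own transaction rolls back and there is
        // nothing to compensate.
    }
}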

Related

Why is an exception in Spring Batch AsyncItemProcessor caught by SkipListener's onSkipInWrite method?

I'm writing a Spring Boot application that starts up, gathers and converts millions of database entries into a new streamlined JSON format, and then sends them all to a GCP PubSub topic. I'm attempting to use Spring Batch for this, but I'm running into trouble implementing fault tolerance for my process. The database is rife with data quality issues, and sometimes my conversions to JSON will fail. When failures occur, I don't want the job to immediately quit; I want it to continue processing as many records as it can and, before completion, to report which exact records failed so that I and/or my team can examine these problematic database entries.
To achieve this, I've attempted to use Spring Batch's SkipListener interface. But I'm also using an AsyncItemProcessor and an AsyncItemWriter in my process, and even though the exceptions are occurring during the processing, the SkipListener's onSkipInWrite() method is catching them - rather than the onSkipInProcess() method. And unfortunately, the onSkipInWrite() method doesn't have access to the original database entity, so I can't store its ID in my list of problematic DB entries.
Have I misconfigured something? Is there any other way to gain access to the objects from the reader that failed the processing step of an AsyncItemProcessor?
Here's what I've tried...
I have a singleton Spring Component where I store how many DB entries I've successfully processed along with up to 20 problematic database entries.
@Component
@Getter // lombok
public class ProcessStatus {

    private int processed;
    private int failureCount;
    private final List<UnexpectedFailure> unexpectedFailures = new ArrayList<>();

    public void incrementProgress() {
        processed++;
    }

    public void logUnexpectedFailure(UnexpectedFailure failure) {
        failureCount++;
        unexpectedFailures.add(failure);
    }

    @Getter
    @AllArgsConstructor
    public static class UnexpectedFailure {
        private Throwable error;
        private DBProjection dbData;
    }
}
I have a Spring Batch SkipListener that's supposed to catch failures and update my status component accordingly:
@AllArgsConstructor
public class ConversionSkipListener implements SkipListener<DBProjection, Future<JsonMessage>> {

    private ProcessStatus processStatus;

    @Override
    public void onSkipInRead(Throwable error) {}

    @Override
    public void onSkipInProcess(DBProjection dbData, Throwable error) {
        processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(error, dbData));
    }

    @Override
    public void onSkipInWrite(Future<JsonMessage> messageFuture, Throwable error) {
        // This is getting called instead!! Even though the exception happened during processing :(
        // But I have no access to the original DBProjection data here, and messageFuture.get() gives me null.
    }
}
And then I've configured my job like this:
@Configuration
public class ConversionBatchJobConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private TaskExecutor processThreadPool;

    @Bean
    public SimpleCompletionPolicy processChunkSize(@Value("${commit.chunk.size:100}") Integer chunkSize) {
        return new SimpleCompletionPolicy(chunkSize);
    }

    @Bean
    @StepScope
    public ItemStreamReader<DbProjection> dbReader(
            MyDomainRepository myDomainRepository,
            @Value("#{jobParameters[pageSize]}") Integer pageSize,
            @Value("#{jobParameters[limit]}") Integer limit) {
        RepositoryItemReader<DbProjection> myDomainRepositoryReader = new RepositoryItemReader<>();
        myDomainRepositoryReader.setRepository(myDomainRepository);
        myDomainRepositoryReader.setMethodName("findActiveDbDomains"); // A native query
        myDomainRepositoryReader.setArguments(new ArrayList<Object>() {{
            add("ACTIVE");
        }});
        myDomainRepositoryReader.setSort(new HashMap<String, Sort.Direction>() {{
            put("update_date", Sort.Direction.ASC);
        }});
        myDomainRepositoryReader.setPageSize(pageSize);
        myDomainRepositoryReader.setMaxItemCount(limit);
        // myDomainRepositoryReader.setSaveState(false); <== haven't figured out what this does yet
        return myDomainRepositoryReader;
    }

    @Bean
    @StepScope
    public ItemProcessor<DbProjection, JsonMessage> dataConverter(DataRetrievalSerivice dataRetrievalService) {
        // Sometimes throws exceptions when DB data is exceptionally weird, bad, or missing
        return new DbProjectionToJsonMessageConverter(dataRetrievalService);
    }

    @Bean
    @StepScope
    public AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter(
            ItemProcessor<DbProjection, JsonMessage> dataConverter) throws Exception {
        AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter = new AsyncItemProcessor<>();
        asyncDataConverter.setDelegate(dataConverter);
        asyncDataConverter.setTaskExecutor(processThreadPool);
        asyncDataConverter.afterPropertiesSet();
        return asyncDataConverter;
    }

    @Bean
    @StepScope
    public ItemWriter<JsonMessage> jsonPublisher(GcpPubsubPublisherService publisherService) {
        return new JsonMessageWriter(publisherService);
    }

    @Bean
    @StepScope
    public AsyncItemWriter<JsonMessage> asyncJsonPublisher(ItemWriter<JsonMessage> jsonPublisher) throws Exception {
        AsyncItemWriter<JsonMessage> asyncJsonPublisher = new AsyncItemWriter<>();
        asyncJsonPublisher.setDelegate(jsonPublisher);
        asyncJsonPublisher.afterPropertiesSet();
        return asyncJsonPublisher;
    }

    @Bean
    public Step conversionProcess(SimpleCompletionPolicy processChunkSize,
                                  ItemStreamReader<DbProjection> dbReader,
                                  AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter,
                                  AsyncItemWriter<JsonMessage> asyncJsonPublisher,
                                  ProcessStatus processStatus,
                                  @Value("${conversion.failure.limit:20}") int maximumFailures) {
        return stepBuilderFactory.get("conversionProcess")
                .<DbProjection, Future<JsonMessage>>chunk(processChunkSize)
                .reader(dbReader)
                .processor(asyncDataConverter)
                .writer(asyncJsonPublisher)
                .faultTolerant()
                .skipPolicy(new MyCustomConversionSkipPolicy(maximumFailures))
                // ^ for now this returns true for everything until 20 failures
                .listener(new ConversionSkipListener(processStatus))
                .build();
    }

    @Bean
    public Job conversionJob(Step conversionProcess) {
        return jobBuilderFactory.get("conversionJob")
                .start(conversionProcess)
                .build();
    }
}
This is because the Future wrapped by the AsyncItemProcessor is only unwrapped in the AsyncItemWriter, so an exception thrown during asynchronous processing only surfaces at write time and is seen as a write exception instead of a processing exception. That's why onSkipInWrite is called instead of onSkipInProcess.
This is actually a known limitation of this pattern which is documented in the Javadoc of the AsyncItemProcessor, here is an excerpt:
Because the Future is typically unwrapped in the ItemWriter,
there are lifecycle and stats limitations (since the framework doesn't know
what the result of the processor is).
While not an exhaustive list, things like StepExecution.filterCount will not
reflect the number of filtered items and
itemProcessListener.onProcessError(Object, Exception) will not be called.
The Javadoc states that the list is not exhaustive, and the side effect regarding the SkipListener that you are experiencing is one of these limitations.
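One possible workaround (my own sketch, not part of the original answer) is to decorate the delegate ItemProcessor so the failing DBProjection is recorded while it is still in scope, and then rethrow so the fault-tolerance machinery still counts the failure:

public class FailureRecordingProcessor implements ItemProcessor<DBProjection, JsonMessage> {

    private final ItemProcessor<DBProjection, JsonMessage> delegate;
    private final ProcessStatus processStatus;

    public FailureRecordingProcessor(ItemProcessor<DBProjection, JsonMessage> delegate,
                                     ProcessStatus processStatus) {
        this.delegate = delegate;
        this.processStatus = processStatus;
    }

    @Override
    public JsonMessage process(DBProjection item) throws Exception {
        try {
            return delegate.process(item);
        } catch (Exception e) {
            // Record the original item here, where it is still available,
            // then rethrow so the skip policy/listener still sees the failure.
            processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(e, item));
            throw e;
        }
    }
}

You would then set this decorator as the delegate of the AsyncItemProcessor instead of the raw converter; onSkipInWrite would still be the callback that fires, but the failing record's data would already be captured in ProcessStatus.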

Spring Transaction | Not rolling back some fraction of code

I need to write a method which is called from an open transaction and must never roll back, even if an exception occurs in the system. I used Propagation.NEVER to achieve this. Is that fine, or should I use PROPAGATION_NOT_SUPPORTED?
@Autowired
private PartnerTransactionService partnerTransactionService;

@PostConstruct
@Transactional
private void test(AuditDTO<Object> audit) {
    partnerTransactionService.saveAudits(audit);
    partnerTransactionService.nonTrasactionsalSaveAudits(audit);
    throw new NullPointerException();
}
In some other service class (both are Spring-managed beans, i.e. a Spring proxy is created for both):
public <T> void saveAudits(AuditDTO<T> audit) {
    if (audit != null) {
        PartnerTransaction partnerTransaction = new PartnerTransaction(audit);
        partnerTransactionRepository.save(partnerTransaction);
    }
}

@Transactional(propagation = Propagation.NEVER)
public <T> void nonTrasactionsalSaveAudits(AuditDTO<T> audit) {
    if (audit != null) {
        PartnerTransaction partnerTransaction = new PartnerTransaction(audit);
        partnerTransactionRepository.save(partnerTransaction);
    }
}
I throw an NPE to check my code, and I found both records being saved irrespective of the exception, with neither being rolled back. What am I missing?
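For reference, a minimal sketch of the difference between the two propagation settings being considered (the AuditWriter class and method names below are hypothetical, not from this thread):

@Service
public class AuditWriter {

    @Autowired
    private PartnerTransactionRepository partnerTransactionRepository;

    // NOT_SUPPORTED: any transaction opened by the caller is suspended; the save
    // runs non-transactionally and is not rolled back if the caller later fails.
    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public <T> void saveOutsideCallerTransaction(AuditDTO<T> audit) {
        partnerTransactionRepository.save(new PartnerTransaction(audit));
    }

    // NEVER: Spring throws IllegalTransactionStateException if the caller already
    // has an active transaction, so this method only runs when no transaction exists.
    @Transactional(propagation = Propagation.NEVER)
    public <T> void failIfCallerHasTransaction(AuditDTO<T> audit) {
        partnerTransactionRepository.save(new PartnerTransaction(audit));
    }
}

Either way, the annotation only takes effect when the method is invoked through the Spring proxy from another bean, not via self-invocation.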

How to do manual transaction management with JOOQ and Spring-boot 2.0?

Using Spring Boot 2.0.4 and JOOQ 3.11.3.
I have a server endpoint that needs fine-grained control over transaction management; it needs to issue multiple SQL statements before and after an external call and must not keep the DB transaction open while talking to the external site.
In the below code testTransactionV4 is the attempt I like best.
I've looked in the JOOQ manual but the transaction-management section is pretty light-on and seems to imply this is the way to do it.
It feels like I'm working harder than I should be here, which is usually a sign that I'm doing it wrong. Is there a better, "correct" way to do manual transaction management with Spring/JOOQ?
Also, any improvements to the implementation of the TransactionBean would be greatly appreciated (and upvoted).
But the point of this question is really just: "Is this the right way"?
TestEndpoint:
@Role.SystemApi
@SystemApiEndpoint
public class TestEndpoint {
    private static Log log = to(TestEndpoint.class);

    @Autowired private DSLContext db;
    @Autowired private TransactionBean txBean;
    @Autowired private Tx tx;

    private void doNonTransactionalThing() {
        log.info("long running thing that should not be inside a transaction");
    }

    /** Works; don't like the commitWithResult name but it'll do if there's
        no better way. Implementation is ugly too. */
    @JsonPostMethod("testTransactionV4")
    public void testMultiTransactionWithTxBean() {
        log.info("start testMultiTransactionWithTxBean");
        AccountRecord account = txBean.commitWithResult( db ->
            db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1)) );
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        txBean.commit(db -> account.store());
    }

    /** Works; but it's ugly, especially having to work around lambda final
        requirements on references. */
    @JsonPostMethod("testTransactionV3")
    public void testMultiTransactionWithJooqApi() {
        log.info("start testMultiTransactionWithJooqApi");
        AtomicReference<AccountRecord> account = new AtomicReference<>();
        db.transaction( config ->
            account.set(DSL.using(config).fetchOne(ACCOUNT, ACCOUNT.ID.eq(1))) );
        doNonTransactionalThing();
        account.get().setName("test_tx+" + new Date());
        db.transaction(config -> {
            account.get().store();
        });
    }

    /** Does not work, there's only one commit that spans over the long operation */
    @JsonPostMethod("testTransactionV1")
    @Transactional
    public void testIncorrectSingleTransactionWithMethodAnnotation() {
        log.info("start testIncorrectSingleTransactionWithMethodAnnotation");
        AccountRecord account = db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1));
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        account.store();
    }

    /** Works, but I don't like defining my tx boundaries this way, readability
        is poor (relies on correct bean naming and even then is non-obvious) and is
        fragile in the face of refactoring. When explicit TX boundaries are needed
        I want them getting in my face straight away. */
    @JsonPostMethod("testTransactionV2")
    public void testMultiTransactionWithNestedComponent() {
        log.info("start testTransactionWithComponentDelegation");
        AccountRecord account = tx.readAccount();
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        tx.writeAccount(account);
    }

    @Component
    static class Tx {
        @Autowired private DSLContext db;

        @Transactional
        public AccountRecord readAccount() {
            return db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1));
        }

        @Transactional
        public void writeAccount(AccountRecord account) {
            account.store();
        }
    }
}
TransactionBean:
@Component
public class TransactionBean {
    @Autowired private DSLContext db;

    /** Don't like the name, but can't figure out how to make it be just "commit". */
    public <T> T commitWithResult(Function<DSLContext, T> worker) {
        // Yuck, at the very least need an array or something as the holder.
        AtomicReference<T> result = new AtomicReference<>();
        db.transaction( config -> result.set(
            worker.apply(DSL.using(config))
        ));
        return result.get();
    }

    public void commit(Consumer<DSLContext> worker) {
        db.transaction( config ->
            worker.accept(DSL.using(config))
        );
    }

    public void commit(Runnable worker) {
        db.transaction( config ->
            worker.run()
        );
    }
}
Use the TransactionTemplate to wrap the transactional part. Spring Boot provides one out-of-the-box so it is ready for use. You can use the execute method to wrap a call in a transaction.
@Autowired
private TransactionTemplate transaction;

@JsonPostMethod("testTransactionV1")
public void testSingleTransactionWithTransactionTemplate() {
    log.info("start testSingleTransactionWithTransactionTemplate");
    AccountRecord account = transaction.execute(status -> db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1)));
    doNonTransactionalThing();
    transaction.execute(status -> {
        account.setName("test_tx+" + new Date());
        account.store();
        return null;
    });
}
Something like that should do the trick. I'm not sure if the lambdas would work as written (I keep forgetting the syntax of the TransactionCallback).
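If the lambda syntax is in doubt, the second block can be written in the anonymous-class form using Spring's TransactionCallbackWithoutResult; this is just an equivalent sketch of the snippet above:

transaction.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        account.setName("test_tx+" + new Date());
        account.store();
    }
});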

Testing that delete is correctly rolled back with DataIntegrityViolationException, JUnit, Spring, @Transactional

I have a category -> subCategory -> products hierarchy in my application. If a subcategory has no products, you are allowed to delete it. If a subCategory has products, the DAO throws a DataIntegrityViolationException and the transaction should be rolled back.
In my tests, I have:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestTransactionManagement.class})
public class BusinessSubCategoryCRUDTest {

    private BusinessSubCategoryCRUD crud;

    @Autowired
    public void setCRUD(BusinessSubCategoryCRUD crud) {
        this.crud = crud;
    }

    // @Transactional
    @Test
    public void testDeleteBusinessSubCategoryInUseCanNotBeDeleted() {
        final long id = 1;
        BusinessSubCategory subCategoryBeforeDelete =
            crud.readBusinessSubCategory(id);
        final int numCategoriesBeforeDelete =
            subCategoryBeforeDelete.getBusinessCategories().size();
        try {
            crud.deleteBusinessSubCategory(
                new BusinessSubCategory(id, ""));
        } catch (DataIntegrityViolationException e) {
            System.err.println(e);
        }
        BusinessSubCategory subCategoryAfterDeleteFails =
            crud.readBusinessSubCategory(id);
        // THIS next assertion is the source of my angst.
        // At this point the links to the categories will have been deleted,
        // an exception will have been thrown, but the transaction is not yet
        // rolled back if the test case (or test class) is marked with @Transactional
        assertEquals(
            numCategoriesBeforeDelete,
            subCategoryAfterDeleteFails.getBusinessCategories().size());
    }
}
However, if I uncomment the @Transactional above @Test, it fails. I think the DAO is using the transaction from the @Test and so the transaction doesn't roll back until AFTER I check to be sure the transaction has been rolled back.
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public boolean deleteBusinessSubCategory(
        BusinessSubCategory businessSubCategory) {
    BeanPropertySqlParameterSource paramMap =
        new BeanPropertySqlParameterSource(businessSubCategory);
    namedJdbcTemplate.update(
        DELETE_CATEGORY_SUB_CATEGORY_BY_ID_SQL,
        paramMap);
    return 0 != namedJdbcTemplate.update(
        DELETE_SUB_CATEGORY_BY_ID_SQL,
        paramMap);
}
So, how do I have the DAO code still inherit the transaction from the context it is running in (in production it inherits the transaction from the service it is running in) but still be able to test it? I want to put @Transactional on the entire test class, but that then leaves my test either failing or incomplete.
For completeness, here is my configuration class for the test.
@Configuration
@EnableTransactionManagement
public class TestTransactionManagement {

    @Bean
    public EmbeddedDatabase getDataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        EmbeddedDatabase db = builder
            .setType(EmbeddedDatabaseType.HSQL) // .H2 or .DERBY
            .addScript("sql/create-db.sql")
            .addScript("sql/create-test-data.sql")
            .build();
        return db;
    }

    @Bean
    public DataSourceTransactionManager transactionManager() {
        return new DataSourceTransactionManager(getDataSource());
    }

    @Bean
    public BusinessSubCategoryCRUD getCRUD() {
        return new BusinessSubCategoryCRUD(getDataSource());
    }
}
The "solution" or workaround was to reset the database before each test. Then there was no need for an #Transactional on the test, the rollback could be tested, and the test suite ran slighly slower due to the additional database setup.
@Before
public void setUp() {
    Connection conn = DataSourceUtils.getConnection(dataSource);
    ScriptUtils.executeSqlScript(
        conn, new ClassPathResource("sql/create-test-data.sql"));
    DataSourceUtils.releaseConnection(conn, dataSource);
}
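An alternative to the manual JDBC reset (my own suggestion, not part of the original answer, available since Spring 4.1) is Spring Test's @Sql annotation, which re-runs the data script before every test method:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestTransactionManagement.class})
@Sql(scripts = "classpath:sql/create-test-data.sql",
     executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
public class BusinessSubCategoryCRUDTest {
    // tests unchanged; no @Before JDBC plumbing needed
}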

Spring JUnit test with two Hibernate transactions

I have a Spring JUnit test consisting of two sequential transactions which are propagated as REQUIRES_NEW:
public class ContractServiceTest extends AbstractIntegrationTest {

    @Autowired
    private PersistenceManagerHibernate persistenceManagerHibernate;
    @Autowired
    private ContractService contractService;
    @Autowired
    private EntityChangeService entityChangeService;
    @Resource
    private AddServiceService addService;
    @Autowired
    private ReferenceBookService refService;
    @Autowired
    private PropertyService propertyService;
    @Autowired
    private HibernateTransactionManager transactionManager;

    @Test
    public void testContractDeletes() {
        Long contractId = 1L;
        final Contract contract = createTestDetachedContract(contractId, PropertyServiceTest.createManaged(propertyService, refService), refService);
        ensureContractCreated(contract);
        deleteTransactional(contract);
        Assert.assertEquals(1, entityChangeService.findByPaginationOrderByUpdateDate(Contract.class.getName(), contract.getId().toString(), null, 0, 30).size());
    }

    @Test
    @Ignore
    public void testContractCreates() {
        Long contractId = 1L;
        final Contract contract = createTestDetachedContract(contractId, PropertyServiceTest.createManaged(propertyService, refService), refService);
        ensureContractDeleted(contract);
        createContractTransactional(contract);
        Assert.assertEquals(1, entityChangeService.findByPaginationOrderByUpdateDate(Contract.class.getName(), contract.getId().toString(), null, 0, 30).size());
    }

    private void ensureContractCreated(Contract contract) {
        if (persistenceManagerHibernate.isCreated(Contract.class, contract.getId())) {
            return;
        }
        createContractTransactional(contract);
    }

    private void deleteTransactional(final Contract contract) {
        TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager);
        transactionTemplate.setPropagationBehavior(Propagation.REQUIRES_NEW.value());
        transactionTemplate.execute(new TransactionCallback() {
            public Object doInTransaction(TransactionStatus status) {
                try {
                    contractService.delete(contract);
                } catch (Exception e) {
                    toString();
                }
                return null;
            }
        });
    }

    private void createContractTransactional(final Contract contract) {
        TransactionTemplate transactionTemplate2 = new TransactionTemplate(transactionManager);
        transactionTemplate2.setPropagationBehavior(Propagation.REQUIRES_NEW.value());
        transactionTemplate2.execute(new TransactionCallback() {
            public Object doInTransaction(TransactionStatus status) {
                contractService.create(contract);
                return null;
            }
        });
    }

    private void ensureContractDeleted(final Contract contract) {
        if (!persistenceManagerHibernate.isCreated(Contract.class, contract.getId())) {
            return;
        }
        deleteTransactional(contract);
    }

    public static Contract createTestDetachedContract(Long contractId, Property property, ReferenceBookService refService) {
        Contract contract1 = new Contract();
        contract1.setId(contractId);
        contract1.setName("test name");
        contract1.setProperty(property);
        contract1.setNumber("10");
        contract1.setType(refService.get(ContractType.class, 1L));
        contract1.setStatus(refService.get(ContractStatus.class, 1L));
        contract1.setCreated(new Date());
        contract1.setCurrencyRate(new BigDecimal(10));
        contract1.setInitialSum(new BigDecimal(10));
        contract1.setSum(new BigDecimal(10));
        return contract1;
    }
}
The test freezes while committing the transaction with the insert SQL statement, which is:
private void createContractTransactional(final Contract contract) {
    TransactionTemplate transactionTemplate2 = new TransactionTemplate(transactionManager);
    transactionTemplate2.setPropagationBehavior(Propagation.REQUIRES_NEW.value());
    transactionTemplate2.execute(new TransactionCallback() {
        public Object doInTransaction(TransactionStatus status) {
            contractService.create(contract);
            return null;
        }
    });
}
Why is that happening (the debugger stops in some Oracle code with no source provided), and how do I write a Spring JUnit test with two sequential transactions correctly?
It sounds like the test is creating a deadlock on the Contract table in your database. The root cause of this is most likely the use of the REQUIRES_NEW propagation level, as detailed in this question. The important part is this:
PROPAGATION_REQUIRES_NEW starts a new, independent "inner" transaction for the given scope. This transaction will be committed or rolled back completely independent from the outer transaction, having its own isolation scope, its own set of locks, etc. The outer transaction will get suspended at the beginning of the inner one, and resumed once the inner one has completed
The createContractTransactional method is trying to insert into the Contract table, but something earlier in the test must be holding a lock on it; I'm guessing it's the call to persistenceManagerHibernate.isCreated(Contract.class, contract.getId()). Whatever the cause, you have two independent transactions that are locking on the same table, i.e. a deadlock.
Try setting the propagation level of the transactions in your test to REQUIRED, which is the default setting. This creates a new transaction if there isn't one already; otherwise the current transaction is used. That should make your test execute in a single transaction, so you shouldn't get a deadlock. Once that is working, you may want to read the Spring documentation on its propagation levels to make sure that REQUIRED is the right level for your needs.
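Concretely, that means changing one line in the test's helper methods (a sketch against the code above):

TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager);
// REQUIRED joins the transaction already opened by the surrounding test (or starts
// one if none exists) instead of suspending it and opening an independent one.
transactionTemplate.setPropagationBehavior(Propagation.REQUIRED.value());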
