I have the following code with Panache using Quarkus:
@Path("/hello")
public class GreetingResource {

    @Inject
    ManagedExecutor executor;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        executor.runAsync(new Runnable() {
            public void run() {
                Book book = new Book();
                book.id = java.util.UUID.randomUUID().toString();
                book.title = "aaaaa";
                book.persistAndFlush();
                System.out.println("Persisted data");
            }
        });
        return "Hello from RESTEasy Reactive";
    }
}
The persisting never happens; it just hangs. If I remove the thread, everything works just fine. Why does this happen, and how do I address it?
This is a simplification of a more complex use case where removing the thread is not necessarily welcome.
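For what it's worth, one common way to address this kind of problem, sketched here rather than verified against this exact setup: Panache persistence needs an active transaction, and neither the request context nor a transaction follows the task onto the executor thread, so the persistence can be moved into a @Transactional method on a separate bean that the task invokes. BookWriter below is a hypothetical helper, not part of the original code:

// Hypothetical helper: the @Transactional interceptor begins a transaction on
// whichever thread calls write(), including the ManagedExecutor's worker thread.
@ApplicationScoped
public class BookWriter {

    @Transactional
    public void write() {
        Book book = new Book();
        book.id = java.util.UUID.randomUUID().toString();
        book.title = "aaaaa";
        book.persistAndFlush();
        System.out.println("Persisted data");
    }
}

The resource would then @Inject BookWriter bookWriter; and call executor.runAsync(bookWriter::write);.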
I'm writing a Spring Boot application that starts up, gathers and converts millions of database entries into a new streamlined JSON format, and then sends them all to a GCP PubSub topic. I'm attempting to use Spring Batch for this, but I'm running into trouble implementing fault tolerance for my process. The database is rife with data quality issues, and sometimes my conversions to JSON will fail. When failures occur, I don't want the job to quit immediately; I want it to continue processing as many records as it can and, before completion, to report exactly which records failed so that I and/or my team can examine these problematic database entries.
To achieve this, I've attempted to use Spring Batch's SkipListener interface. But I'm also using an AsyncItemProcessor and an AsyncItemWriter in my process, and even though the exceptions are occurring during processing, the SkipListener's onSkipInWrite() method is catching them rather than the onSkipInProcess() method. And unfortunately, the onSkipInWrite() method doesn't have access to the original database entity, so I can't store its ID in my list of problematic DB entries.
Have I misconfigured something? Is there any other way to gain access to the objects from the reader that failed the processing step of an AsyncItemProcessor?
Here's what I've tried...
I have a singleton Spring Component where I store how many DB entries I've successfully processed along with up to 20 problematic database entries.
@Component
@Getter //lombok
public class ProcessStatus {

    private int processed;
    private int failureCount;
    private final List<UnexpectedFailure> unexpectedFailures = new ArrayList<>();

    public void incrementProgress() { processed++; }

    public void logUnexpectedFailure(UnexpectedFailure failure) {
        failureCount++;
        unexpectedFailures.add(failure);
    }

    @Getter
    @AllArgsConstructor
    public static class UnexpectedFailure {
        private Throwable error;
        private DBProjection dbData;
    }
}
I have a Spring Batch SkipListener that's supposed to catch failures and update my status component accordingly:
@AllArgsConstructor
public class ConversionSkipListener implements SkipListener<DBProjection, Future<JsonMessage>> {

    private ProcessStatus processStatus;

    @Override
    public void onSkipInRead(Throwable error) {}

    @Override
    public void onSkipInProcess(DBProjection dbData, Throwable error) {
        processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(error, dbData));
    }

    @Override
    public void onSkipInWrite(Future<JsonMessage> messageFuture, Throwable error) {
        //This is getting called instead!! Even though the exception happened during processing :(
        //But I have no access to the original DBProjection data here, and messageFuture.get() gives me null.
    }
}
And then I've configured my job like this:
@Configuration
public class ConversionBatchJobConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private TaskExecutor processThreadPool;

    @Bean
    public SimpleCompletionPolicy processChunkSize(@Value("${commit.chunk.size:100}") Integer chunkSize) {
        return new SimpleCompletionPolicy(chunkSize);
    }

    @Bean
    @StepScope
    public ItemStreamReader<DbProjection> dbReader(
            MyDomainRepository myDomainRepository,
            @Value("#{jobParameters[pageSize]}") Integer pageSize,
            @Value("#{jobParameters[limit]}") Integer limit) {
        RepositoryItemReader<DbProjection> myDomainRepositoryReader = new RepositoryItemReader<>();
        myDomainRepositoryReader.setRepository(myDomainRepository);
        myDomainRepositoryReader.setMethodName("findActiveDbDomains"); //A native query
        myDomainRepositoryReader.setArguments(new ArrayList<Object>() {{
            add("ACTIVE");
        }});
        myDomainRepositoryReader.setSort(new HashMap<String, Sort.Direction>() {{
            put("update_date", Sort.Direction.ASC);
        }});
        myDomainRepositoryReader.setPageSize(pageSize);
        myDomainRepositoryReader.setMaxItemCount(limit);
        // myDomainRepositoryReader.setSaveState(false); <== haven't figured out what this does yet
        return myDomainRepositoryReader;
    }

    @Bean
    @StepScope
    public ItemProcessor<DbProjection, JsonMessage> dataConverter(DataRetrievalService dataRetrievalService) {
        //Sometimes throws exceptions when DB data is exceptionally weird, bad, or missing
        return new DbProjectionToJsonMessageConverter(dataRetrievalService);
    }

    @Bean
    @StepScope
    public AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter(
            ItemProcessor<DbProjection, JsonMessage> dataConverter) throws Exception {
        AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter = new AsyncItemProcessor<>();
        asyncDataConverter.setDelegate(dataConverter);
        asyncDataConverter.setTaskExecutor(processThreadPool);
        asyncDataConverter.afterPropertiesSet();
        return asyncDataConverter;
    }

    @Bean
    @StepScope
    public ItemWriter<JsonMessage> jsonPublisher(GcpPubsubPublisherService publisherService) {
        return new JsonMessageWriter(publisherService);
    }

    @Bean
    @StepScope
    public AsyncItemWriter<JsonMessage> asyncJsonPublisher(ItemWriter<JsonMessage> jsonPublisher) throws Exception {
        AsyncItemWriter<JsonMessage> asyncJsonPublisher = new AsyncItemWriter<>();
        asyncJsonPublisher.setDelegate(jsonPublisher);
        asyncJsonPublisher.afterPropertiesSet();
        return asyncJsonPublisher;
    }

    @Bean
    public Step conversionProcess(SimpleCompletionPolicy processChunkSize,
                                  ItemStreamReader<DbProjection> dbReader,
                                  AsyncItemProcessor<DbProjection, JsonMessage> asyncDataConverter,
                                  AsyncItemWriter<JsonMessage> asyncJsonPublisher,
                                  ProcessStatus processStatus,
                                  @Value("${conversion.failure.limit:20}") int maximumFailures) {
        return stepBuilderFactory.get("conversionProcess")
                .<DbProjection, Future<JsonMessage>>chunk(processChunkSize)
                .reader(dbReader)
                .processor(asyncDataConverter)
                .writer(asyncJsonPublisher)
                .faultTolerant()
                .skipPolicy(new MyCustomConversionSkipPolicy(maximumFailures))
                // ^ for now this returns true for everything until 20 failures
                .listener(new ConversionSkipListener(processStatus))
                .build();
    }

    @Bean
    public Job conversionJob(Step conversionProcess) {
        return jobBuilderFactory.get("conversionJob")
                .start(conversionProcess)
                .build();
    }
}
This is because the future wrapped by the AsyncItemProcessor is only unwrapped in the AsyncItemWriter, so any exception that might occur at that time is seen as a write exception instead of a processing exception. That's why onSkipInWrite is called instead of onSkipInProcess.
This is actually a known limitation of this pattern, documented in the Javadoc of AsyncItemProcessor; here is an excerpt:
Because the Future is typically unwrapped in the ItemWriter, there are lifecycle and stats limitations (since the framework doesn't know what the result of the processor is). While not an exhaustive list, things like StepExecution.filterCount will not reflect the number of filtered items and itemProcessListener.onProcessError(Object, Exception) will not be called.
The Javadoc states that the list is not exhaustive, and the side effect regarding the SkipListener that you are experiencing is one of these limitations.
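One way to still capture the failing input item, as a sketch rather than part of the original configuration (type names follow the job config's spelling): decorate the real converter so it records the item in ProcessStatus before rethrowing, leaving the skip machinery untouched:

// Hypothetical decorator around the real converter: records which item failed
// before the exception bubbles up through the AsyncItemProcessor.
public class FailureTrackingProcessor implements ItemProcessor<DbProjection, JsonMessage> {

    private final ItemProcessor<DbProjection, JsonMessage> delegate;
    private final ProcessStatus processStatus;

    public FailureTrackingProcessor(ItemProcessor<DbProjection, JsonMessage> delegate,
                                    ProcessStatus processStatus) {
        this.delegate = delegate;
        this.processStatus = processStatus;
    }

    @Override
    public JsonMessage process(DbProjection item) throws Exception {
        try {
            return delegate.process(item);
        } catch (Exception e) {
            processStatus.logUnexpectedFailure(new ProcessStatus.UnexpectedFailure(e, item));
            throw e; // rethrow so the skip policy still counts the failure
        }
    }
}

It could then be wired in with asyncDataConverter.setDelegate(new FailureTrackingProcessor(dataConverter, processStatus)).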
Using Spring Boot 2.0.4 and JOOQ 3.11.3.
I have a server endpoint that needs fine-grained control over transaction management; it needs to issue multiple SQL statements before and after an external call and must not keep the DB transaction open while talking to the external site.
In the below code testTransactionV4 is the attempt I like best.
I've looked in the JOOQ manual, but the transaction-management section is pretty light on detail and seems to imply this is the way to do it.
It feels like I'm working harder than I should be here, which is usually a sign that I'm doing it wrong. Is there a better, "correct" way to do manual transaction management with Spring/JOOQ?
Also, any improvements to the implementation of the TransactionBean would be greatly appreciated (and upvoted).
But the point of this question is really just: "Is this the right way"?
TestEndpoint:
@Role.SystemApi
@SystemApiEndpoint
public class TestEndpoint {

    private static Log log = to(TestEndpoint.class);

    @Autowired private DSLContext db;
    @Autowired private TransactionBean txBean;
    @Autowired private Tx tx;

    private void doNonTransactionalThing() {
        log.info("long running thing that should not be inside a transaction");
    }

    /** Works; don't like the commitWithResult name but it'll do if there's
     no better way. Implementation is ugly too. */
    @JsonPostMethod("testTransactionV4")
    public void testMultiTransactionWithTxBean() {
        log.info("start testMultiTransactionWithTxBean");
        AccountRecord account = txBean.commitWithResult(db ->
            db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1)));
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        txBean.commit(db -> account.store());
    }

    /** Works; but it's ugly, especially having to work around lambda final
     requirements on references. */
    @JsonPostMethod("testTransactionV3")
    public void testMultiTransactionWithJooqApi() {
        log.info("start testMultiTransactionWithJooqApi");
        AtomicReference<AccountRecord> account = new AtomicReference<>();
        db.transaction(config ->
            account.set(DSL.using(config).fetchOne(ACCOUNT, ACCOUNT.ID.eq(1))));
        doNonTransactionalThing();
        account.get().setName("test_tx+" + new Date());
        db.transaction(config -> {
            account.get().store();
        });
    }

    /** Does not work, there's only one commit that spans over the long operation. */
    @JsonPostMethod("testTransactionV1")
    @Transactional
    public void testIncorrectSingleTransactionWithMethodAnnotation() {
        log.info("start testIncorrectSingleTransactionWithMethodAnnotation");
        AccountRecord account = db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1));
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        account.store();
    }

    /** Works, but I don't like defining my tx boundaries this way, readability
     is poor (relies on correct bean naming and even then is non-obvious) and is
     fragile in the face of refactoring. When explicit TX boundaries are needed
     I want them getting in my face straight away. */
    @JsonPostMethod("testTransactionV2")
    public void testMultiTransactionWithNestedComponent() {
        log.info("start testTransactionWithComponentDelegation");
        AccountRecord account = tx.readAccount();
        doNonTransactionalThing();
        account.setName("test_tx+" + new Date());
        tx.writeAccount(account);
    }

    @Component
    static class Tx {
        @Autowired private DSLContext db;

        @Transactional
        public AccountRecord readAccount() {
            return db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1));
        }

        @Transactional
        public void writeAccount(AccountRecord account) {
            account.store();
        }
    }
}
TransactionBean:
@Component
public class TransactionBean {

    @Autowired private DSLContext db;

    /** Don't like the name, but can't figure out how to make it be just "commit". */
    public <T> T commitWithResult(Function<DSLContext, T> worker) {
        // Yuck, at the very least need an array or something as the holder.
        AtomicReference<T> result = new AtomicReference<>();
        db.transaction(config -> result.set(
            worker.apply(DSL.using(config))
        ));
        return result.get();
    }

    public void commit(Consumer<DSLContext> worker) {
        db.transaction(config ->
            worker.accept(DSL.using(config))
        );
    }

    public void commit(Runnable worker) {
        db.transaction(config ->
            worker.run()
        );
    }
}
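As an aside on the TransactionBean itself: jOOQ also has DSLContext.transactionResult(...) (present in 3.11), which returns the block's value directly, so commitWithResult could drop the AtomicReference holder entirely. A minimal sketch:

public <T> T commitWithResult(Function<DSLContext, T> worker) {
    // transactionResult propagates the callable's return value, so no holder is needed
    return db.transactionResult(config -> worker.apply(DSL.using(config)));
}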
Use the TransactionTemplate to wrap the transactional part. Spring Boot provides one out of the box, so it is ready for use. You can use the execute method to wrap a call in a transaction.
@Autowired
private TransactionTemplate transaction;

@JsonPostMethod("testTransactionV1")
public void testIncorrectSingleTransactionWithTransactionTemplate() {
    log.info("start testIncorrectSingleTransactionWithTransactionTemplate");
    AccountRecord account = transaction.execute(status -> db.fetchOne(ACCOUNT, ACCOUNT.ID.eq(1)));
    doNonTransactionalThing();
    transaction.execute(status -> {
        account.setName("test_tx+" + new Date());
        account.store();
        return null;
    });
}
Something like that should do the trick. I'm not sure if the lambdas would work (I keep forgetting the syntax of the TransactionCallback).
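For the record, the lambda form does compile: TransactionCallback<T> is a single-method interface, T doInTransaction(TransactionStatus status). If the trailing return null feels awkward, TransactionCallbackWithoutResult is the no-result variant; a minimal sketch:

// Sketch: the no-result callback avoids returning null from the lambda.
transaction.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        account.setName("test_tx+" + new Date());
        account.store();
    }
});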
Thanks in advance for reading. In my main configuration class I have a PublishSubscribeChannel:
@Bean(name = "feeSchedule")
public SubscribableChannel getMessageChannel() {
    return new PublishSubscribeChannel();
}
I inject the channel into a service that runs a long process and creates a fee schedule:
@Service
public class FeeScheduleCompareServiceImpl implements FeeScheduleCompareService {

    @Autowired
    MessageChannel outChannel;

    public List<FeeScheduleUpdate> compareFeeSchedules(String oldStudyId) {
        List<FeeScheduleUpdate> sortedResultList = longMethod(oldStudyId);
        outChannel.send(MessageBuilder.withPayload(sortedResultList).build());
        return sortedResultList;
    }
}
Now this is the part I'm struggling with. I want to use a CompletableFuture and get the payload of the event in future A, in another Spring bean; I need future A to return the payload from the message. I think I want to create a ServiceActivator as the message endpoint but, as I said, I need it to return the payload for future A.
@org.springframework.stereotype.Service
public class SFCCCompareServiceImpl implements SFCCCompareService {

    @Autowired
    private SubscribableChannel outChannel;

    @Override
    public List<SFCCCompareDTO> compareSFCC(String state, int service) {
        ArrayList<SFCCCompareDTO> returnList = new ArrayList<SFCCCompareDTO>();

        CompletableFuture<List<FeeScheduleUpdate>> fa = CompletableFuture.supplyAsync(() -> {
            //block A WHAT GOES HERE?!?!
            outChannel.subscribe()
        });

        CompletableFuture<List<StateFeeCodeClassification>> fb = CompletableFuture.supplyAsync(() -> {
            return this.stateFeeCodeClassificationRepository.findAll();
        });

        CompletableFuture<List<SFCCCompareDTO>> fc = fa.thenCombine(fb, (a, b) -> {
            //block C
            //get in this block when both A & B are complete
            b.stream().forEach(new Consumer<StateFeeCodeClassification>() {
                @Override
                public void accept(StateFeeCodeClassification stateFeeCodeClassification) {
                    a.stream().forEach(new Consumer<FeeScheduleUpdate>() {
                        @Override
                        public void accept(FeeScheduleUpdate feeScheduleUpdate) {
                            returnList.add(new SFCCCompareDTO());
                        }
                    });
                }
            });
            return returnList;
        });

        fc.join();
        return returnList;
    }
}
I was thinking there would be a service activator, something like:
@MessageEndpoint
public class UpdatesHandler implements MessageHandler {

    @ServiceActivator(requiresReply = "true")
    public List<FeeScheduleUpdate> getUpdates(Message m) {
        return (List<FeeScheduleUpdate>) m.getPayload();
    }
}
Your question isn't clear, but I'll try to help you with some info.
Spring Integration doesn't provide CompletableFuture support, but it does provide async handling and replies.
See Asynchronous Gateway for more information. And also see Asynchronous Service Activator.
outChannel.subscribe() should come with the MessageHandler callback, by the way.
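To make that concrete, here is a minimal sketch (mine, not the answerer's) of what "block A" could look like: subscribe a MessageHandler that completes a future with the message payload, then drop the subscription once it fires. The unchecked cast assumes the payload type the question already assumes:

// Hypothetical bridge from a one-shot subscription to a CompletableFuture.
CompletableFuture<List<FeeScheduleUpdate>> fa = new CompletableFuture<>();
MessageHandler handler = message -> {
    @SuppressWarnings("unchecked")
    List<FeeScheduleUpdate> payload = (List<FeeScheduleUpdate>) message.getPayload();
    fa.complete(payload);
};
outChannel.subscribe(handler);
// once the future completes, clean up the one-shot subscription
fa.whenComplete((result, error) -> outChannel.unsubscribe(handler));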
I have one class that extends DeferredResult and implements Runnable, as shown below:
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;

    @Override
    public void run() {
        RestTemplate restTemplate = new RestTemplate();
        EmailMessageDTO emailMessageDTO = new EmailMessageDTO("dineshshe@gmail.com", "Hi There");
        Boolean result = restTemplate.postForObject("http://localhost:9080/asycn/sendEmail", emailMessageDTO, Boolean.class);
        this.setResult(result);
    }

    //Constructor and getter and setters
}
Now I have a controller that returns an object of the above class. Whenever a new request comes to the controller, we check whether that request is present in the HashMap (which stores unprocessed requests at that instant). If it is not present, we create an EventDeferredObject, store it in the HashMap, and start a thread for it. If a request of this type is already present, we return it from the HashMap. On completion of a request, we delete it from the HashMap.
@RestController // assumed: needed so the DeferredResult is actually written to the response
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = null;
        System.out.println("Size:" + requestMap.size());
        if (!requestMap.containsKey(emailDTO.getEmail()))
        {
            eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
            requestMap.put(emailDTO.getEmail(), eventDeferredObject);
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
        }
        else
        {
            eventDeferredObject = requestMap.get(emailDTO.getEmail());
        }
        eventDeferredObject.onCompletion(new Runnable() {
            @Override
            public void run() {
                if (requestMap.containsKey(emailDTO.getEmail()))
                {
                    requestMap.remove(emailDTO.getEmail());
                }
            }
        });
        return eventDeferredObject;
    }
}
This code works fine if no request identical to one already stored in the HashMap comes in; if we send a number of different requests at the same time, the code works fine.
Well, I do not know if I understood correctly, but I think you might have race conditions in the code, for example here:
if (!requestMap.containsKey(emailDTO.getEmail()))
{
    eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
    requestMap.put(emailDTO.getEmail(), eventDeferredObject);
    Thread t1 = new Thread(eventDeferredObject);
    t1.start();
}
else
{
    eventDeferredObject = requestMap.get(emailDTO.getEmail());
}
Think of a scenario in which you have two requests with the same key, emailDTO.getEmail():
Request 1 checks whether the key is in the map, does not find it, and puts it in.
Request 2 comes some time later, checks whether the key is in the map, finds it, and
goes to fetch it; however, just before that, the thread started by request 1 finishes, and another thread, started by the onCompletion callback, removes the key from the map. At this point,
requestMap.get(emailDTO.getEmail())
will return null, and as a result you will have a NullPointerException.
Now, this does look like a rare scenario, so I do not know if this is the problem you see.
I would try to modify the code as follows (I did not run it myself, so I might have errors):
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;
    private ConcurrentMap ourConcurrentMap;

    @Override
    public void run() {
        ...
        this.setResult(result);
        ourConcurrentMap.remove(this.email);
    }

    //Constructor and getter and setters
}
so the DeferredResult implementation has the responsibility to remove itself from the concurrent map. Moreover, I do not use onCompletion to set a callback thread, as that seems to me an unnecessary complication. To avoid the race conditions I talked about before, one needs to combine the check for an entry's presence and its retrieval into a single atomic operation; this is done by the putIfAbsent method of ConcurrentMap. Therefore I change the controller into:
@RestController
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail(), requestMap);
        EventDeferredObject<Boolean> oldEventDeferredObject = requestMap.putIfAbsent(emailDTO.getEmail(), eventDeferredObject);
        if (oldEventDeferredObject == null)
        {
            //if no value was present before
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
            return eventDeferredObject;
        }
        else
        {
            return oldEventDeferredObject;
        }
    }
}
If this does not solve the problem you have, I hope it at least gives you some ideas.
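As a small variation on the same idea (a sketch, not part of the original answer): computeIfAbsent constructs the object lazily and atomically, and a flag set inside the mapping function records whether this call created the object and should therefore start its thread:

// Sketch: the mapping function runs at most once per absent key; only the
// request that actually created the object starts its worker thread.
boolean[] created = new boolean[1];
EventDeferredObject<Boolean> eventDeferredObject = requestMap.computeIfAbsent(
        emailDTO.getEmail(),
        key -> {
            created[0] = true;
            return new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), key, requestMap);
        });
if (created[0]) {
    new Thread(eventDeferredObject).start();
}
return eventDeferredObject;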
I just came up with some funny code:
public class FeedService {

    @Inject
    private FriendService friendService;

    @Inject
    private FeedRepository feedRepository;

    private ThreadPoolTaskExecutor taskExecutor;

    public FeedService() {
        prepareExecutor();
    }

    @Async
    public void addToFriendsFeed(final Status status, User user) {
        Collection<String> friends = friendService.getFriendsForUser(user.getLogin());
        for (final String friend : friends) {
            taskExecutor.execute(new Runnable() {
                public void run() {
                    feedRepository.createFeed(friend, Constants.FEED_STATUS, status.getStatusId());
                }
            });
        }
    }

    public Executor prepareExecutor() {
        taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(2);
        taskExecutor.setMaxPoolSize(10);
        taskExecutor.setQueueCapacity(25);
        taskExecutor.setThreadNamePrefix("FeedServiceExecutor-");
        taskExecutor.initialize();
        return taskExecutor;
    }
}
I am not sure if this code is right. Any Spring gurus, could you please let me know?
I am not able to understand the executor part. I am not sure whether I need to create a thread pool when I am creating a new Runnable each time. I am not a threading expert; that's why I posted. If I already knew the answer, I wouldn't have posted.
Make sure taskExecutor is initialized before it is used; otherwise it will be null. As posted, prepareExecutor() both assigns the field and returns the executor, yet the return value is never used anywhere, so the method is doing double duty as initializer and factory.
You would probably want to inject the TaskExecutor instead of creating it within this service.
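A minimal sketch of that last suggestion (the bean name feedTaskExecutor is made up): declare the executor as a Spring bean so the container initializes and shuts it down, and inject it instead of calling prepareExecutor() in the constructor:

@Configuration
public class ExecutorConfig {

    @Bean
    public ThreadPoolTaskExecutor feedTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        executor.setThreadNamePrefix("FeedServiceExecutor-");
        return executor; // the container calls initialize() via afterPropertiesSet()
    }
}

FeedService would then receive the executor via injection (for example @Inject private ThreadPoolTaskExecutor feedTaskExecutor;) and drop both the constructor call and prepareExecutor() entirely.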