While experimenting with asynchronous processing, I discovered an unusual phenomenon.
@Slf4j
@ControllerAdvice
public class ErrorAdvice {

    @ExceptionHandler(Exception.class)
    public void ex(Exception e) {
        log.info("error e = ", e);
    }

    @ExceptionHandler(CompletionException.class)
    public void ex2(CompletionException e) {
        log.info("error e = ", e);
    }
}
@Service
@RequiredArgsConstructor
public class MemberService {

    private final MemberRepository memberRepository;

    @Transactional
    public void save(String name) {
        memberRepository.save(new Member(name));
        throw new IllegalStateException();
    }
}
@RequiredArgsConstructor
public class Client {

    private final MemberService memberService; // dependency implied by the original snippet

    @Transactional
    public void save(String name) {
        CompletableFuture.runAsync(() -> memberService.save(name));
    }
}
Client starts a transaction, and CompletableFuture runs the task on another thread, so a new transaction scope is bound to that thread.
The problem is that I can't catch the error in the ControllerAdvice, which seems very risky for a real application. What is the cause, and how can I fix it?
My understanding is that ControllerAdvice wraps the controller in a proxy, so even if MemberService does asynchronous processing, it doesn't make sense that the exception goes unhandled, since the call is still inside the proxy. It doesn't even leave any logs. Why is the exception being ignored?
Why are you using CompletableFuture.runAsync(() -> memberService.save(name))?
Just try memberService.save(name); instead. The exception will then be thrown on the request thread, where ControllerAdvice can see it.
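If the call genuinely needs to stay asynchronous, a minimal sketch (not from the original answer) is to handle the failure on the future itself, since @ControllerAdvice only sees exceptions thrown on the request-handling thread:

// Sketch: attach an exception handler to the future; exceptions thrown inside
// runAsync never propagate back to the servlet thread that ControllerAdvice watches.
CompletableFuture
        .runAsync(() -> memberService.save(name))
        .exceptionally(ex -> {
            log.error("async save failed", ex); // handle or report the failure here
            return null;
        });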
**Sample Code Snippet and use case**
Here I am extending an abstract class, and in the abstract class I have only a final method marked with @Transactional. Upon any exception in any of the 3 implemented methods, the transaction should roll back, but that is not happening.
public abstract class A<E> {

    public abstract void processData(String str); // process some data

    public abstract E persistData(E e); // save some data based on the E object

    public abstract E sendData(E e);

    @Transactional(readOnly = false, propagation = Propagation.REQUIRED,
            isolation = Isolation.READ_COMMITTED,
            rollbackFor = Exception.class) // originally: all custom exception classes
    public final E processMethod(E e) {
        E e1;
        processData(e.toString()); // placeholder argument; the original snippet passed an undefined String
        try {
            e1 = persistData(e);
        } catch (Exception ex) {
            throw new RuntimeException(ex); // or a custom exception
        }
        try {
            // Some exception happens here, but the data saved by persistData() is still committed.
            e1 = sendData(e1);
        } catch (Exception ex) {
            throw new RuntimeException(ex); // or a custom exception
        }
        return e1;
    }
}
public class C extends A<Entity> {

    @Override
    public void processData(String str) {
        // processing data
    }

    @Override
    public Entity persistData(Entity e) {
        // saving data successfully
        return e;
    }

    @Override
    public Entity sendData(Entity e) {
        // any exception in sendData() after persisting data does not roll back;
        // instead the data stays in the DB even when the exception occurs
        return e;
    }
}
**Sample Code only to follow use case**
@SpringBootApplication
@EnableTransactionManagement
public class Application {

    public static void main(String[] args) {
        // NOTE: instantiating C with `new` bypasses the Spring container, so the
        // @Transactional proxy is never created and no rollback can happen.
        C c = new C();
        Entity e = new Entity();
        e = c.processMethod(e);
    }
}
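For reference, a minimal sketch (assuming C is registered as a Spring bean, e.g. with @Component) of obtaining the proxied instance from the context instead. Note also that CGLIB proxies cannot override final methods, so the final modifier on processMethod would itself have to go for the transaction advice to apply:

@SpringBootApplication
@EnableTransactionManagement
public class Application {

    public static void main(String[] args) {
        ApplicationContext ctx = SpringApplication.run(Application.class, args);
        C c = ctx.getBean(C.class); // container-managed, proxied instance
        Entity e = c.processMethod(new Entity());
    }
}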
I have a simple Spring Cloud Kafka Streams application. The application terminates each time there is an exception, and I'm unable to override this behaviour. The desired outcome is incremental backoff for certain types of exceptions, or continuing on other types of exceptions. I use springCloudVersion Hoxton.SR3 and Spring Boot 2.2.6.RELEASE.
application.yaml
spring:
  cloud:
    stream:
      binders.process-in-0:
        destination: test
      kafka:
        streams:
          binder:
            deserializationExceptionHandler: logAndContinue
            configuration:
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
Beans
@Bean
public java.util.function.Consumer<KStream<String, String>> process() {
    return input -> input.process(() -> new EventProcessor());
}

@Bean
public StreamsBuilderFactoryBeanCustomizer customizer() {
    return fb -> {
        fb.getStreamsConfiguration().put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
                ContinueOnErrorHandler.class);
    };
}
EventProcessor
public class EventProcessor implements Processor<String, String>, ProcessorSupplier<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        throw new RuntimeException("Some exception");
    }

    @Override
    public void close() {
    }

    @Override
    public Processor<String, String> get() {
        return this;
    }
}
ContinueOnErrorHandler
public class ContinueOnErrorHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record, Exception exception) {
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // ignore
    }
}
The custom processor you are wiring in from the consumer throws a RuntimeException in its process method. Nothing catches it, so when that exception is thrown, the application simply exits.
The production exception handler you registered has no effect here, because you are not producing anything: a Consumer does not produce. If you have a use case for producing something, you should switch to java.util.function.Function instead.
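A minimal sketch of that functional style (the topology here is illustrative, not from the original question):

// Sketch: a Function-based processor that produces to the output binding,
// which is where a production exception handler would actually apply.
@Bean
public java.util.function.Function<KStream<String, String>, KStream<String, String>> process() {
    return input -> input.mapValues(value -> value.toUpperCase());
}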
To fix the issue, catch the exception inside the custom processor (EventProcessor) as you process the record, and take the appropriate action. For example, here is a template:
@Override
public void init(ProcessorContext context) {
    this.context = context;
}

@Override
public void process(String key, String value) {
    try {
        // start processing
        // exception thrown
    } catch (Exception e) {
        // take the appropriate action
    }
}
This way, the application won't be terminated when the exception is thrown in the processor.
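Since the original goal was to continue on some exception types and not others, here is a hedged elaboration of the catch block, distinguishing recoverable from fatal exceptions (the exception type and logging are illustrative assumptions, a starting point for the continue-or-fail decision):

@Override
public void process(String key, String value) {
    try {
        // process the record
    } catch (RecoverableException e) { // hypothetical application exception type
        log.warn("skipping record with key {}", key, e); // continue with the next record
    } catch (Exception e) {
        throw new RuntimeException(e); // fatal: let the stream thread fail
    }
}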
With a Spring Cloud Stream Kafka app, how can we ensure that the stream listener waits to process messages until some dependent tasks (reference data population, for example) are done? The app below fails to process messages because they are delivered too early. How can we guarantee this kind of ordering within a Spring Boot app?
@Service
public class ApplicationStartupService implements ApplicationRunner {

    private final FooReferenceDataService fooReferenceDataService;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        fooReferenceDataService.loadData();
    }
}
@EnableBinding(MyBinding.class)
public class MyFooStreamProcessor {

    @Autowired FooService fooService;

    @StreamListener("my-input")
    public void process(KStream<String, Foo> input) {
        input.foreach((k, v) -> {
            // !!! this fails to save: messages are delivered too early,
            // before the foo reference data has been loaded into the database
            fooService.save(v);
        });
    }
}
spring-cloud-stream: 2.1.0.RELEASE
spring-boot: 2.1.2.RELEASE
I found this is not available in Spring Cloud Stream as of May 15, 2018:
Kafka - Delay binding until complex service initialisation has completed
Do we have a plan/timeline for when this will be supported?
In the meantime, I achieved what I wanted by using @Order and ApplicationRunner. It's messy but works: the stream listener waits until the other work is done.
@Service
@Order(1)
public class ApplicationStartupService implements ApplicationRunner {

    private final FooReferenceDataService fooReferenceDataService;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        fooReferenceDataService.loadData();
    }
}

@EnableBinding(MyBinding.class)
@Order(2)
public class MyFooStreamProcessor implements ApplicationRunner {

    @Autowired FooService fooService;

    private final AtomicBoolean ready = new AtomicBoolean(false);

    @StreamListener("my-input")
    public void process(KStream<String, Foo> input) {
        input.foreach((k, v) -> {
            while (ready.get() == false) {
                try {
                    log.info("sleeping for other dependent components to finish initialization");
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    log.info("woke up");
                }
            }
            fooService.save(v);
        });
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        ready.set(true);
    }
}
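An alternative sketch that avoids the polling loop, swapping the AtomicBoolean for a CountDownLatch (not from the original answer):

private final CountDownLatch ready = new CountDownLatch(1);

@StreamListener("my-input")
public void process(KStream<String, Foo> input) {
    input.foreach((k, v) -> {
        try {
            ready.await(); // blocks until run() releases the latch
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        fooService.save(v);
    });
}

@Override
public void run(ApplicationArguments args) {
    ready.countDown(); // reference data is loaded; let the listener proceed
}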
I'm using Spring for the first time and am trying to implement a shared queue, wherein a Kafka listener puts messages on the shared queue and a ThreadManager will eventually do something multithreaded with the items it takes off it. Here is my current implementation:
The Listener:
@Component
public class Listener {

    @Autowired
    private QueueConfig queueConfig;

    private ExecutorService executorService;
    private List<Future> futuresThread1 = new ArrayList<>();

    public Listener() {
        Properties appProps = new AppProperties().get();
        this.executorService = Executors.newFixedThreadPool(Integer.parseInt(appProps.getProperty("listenerThreads")));
    }

    // TODO: how can I pass an app property into this annotation?
    @KafkaListener(id = "id0", topics = "bose.cdp.ingest.marge.boseaccount.normalized")
    public void listener(ConsumerRecord<?, ?> record) throws InterruptedException, ExecutionException {
        futuresThread1.add(executorService.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    queueConfig.blockingQueue().put(record);
                    // System.out.println(queueConfig.blockingQueue().take());
                } catch (Exception e) {
                    System.out.print(e.toString());
                }
            }
        }));
    }
}
The Queue:
@Configuration
public class QueueConfig {

    private Properties appProps = new AppProperties().get();

    @Bean
    public BlockingQueue<ConsumerRecord> blockingQueue() {
        return new ArrayBlockingQueue<>(
                Integer.parseInt(appProps.getProperty("blockingQueueSize"))
        );
    }
}
The ThreadManager:
@Component
public class ThreadManager {

    @Autowired
    private QueueConfig queueConfig;

    private int threads;

    public ThreadManager() {
        Properties appProps = new AppProperties().get();
        this.threads = Integer.parseInt(appProps.getProperty("threadManagerThreads"));
    }

    public void run() throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(threads);
        try {
            while (true) {
                queueConfig.blockingQueue().take();
            }
        } catch (Exception e) {
            System.out.print(e.toString());
            executorService.shutdownNow();
            executorService.awaitTermination(1, TimeUnit.SECONDS);
        }
    }
}
Lastly, the main thread where everything is started from:
@SpringBootApplication
public class SourceAccountListenerApp {

    public static void main(String[] args) {
        SpringApplication.run(SourceAccountListenerApp.class, args);
        ThreadManager threadManager = new ThreadManager();
        try {
            threadManager.run();
        } catch (Exception e) {
            System.out.println(e.toString());
        }
    }
}
The problem
Running this in the debugger, I can tell the Listener is adding items to the queue. But when the ThreadManager takes from the shared queue, it tells me the queue is null and I get an NPE. It seems autowiring isn't connecting the queue the Listener uses to the ThreadManager. Any help appreciated.
This is the problem:
ThreadManager threadManager = new ThreadManager();
Since you are creating the instance manually, you cannot use the DI provided by Spring.
One simple solution is to implement a CommandLineRunner, which will be executed after SourceAccountListenerApp has fully initialized:
@SpringBootApplication
public class SourceAccountListenerApp {

    public static void main(String[] args) {
        SpringApplication.run(SourceAccountListenerApp.class, args);
    }

    // Create the CommandLineRunner bean and inject ThreadManager
    @Bean
    CommandLineRunner runner(ThreadManager manager) {
        return args -> {
            manager.run();
        };
    }
}
You use Spring's programmatic, so-called 'JavaConfig' way of setting up Spring beans (classes annotated with @Configuration containing methods annotated with @Bean). At application startup, Spring calls those @Bean methods under the hood and registers the results in its application context (if the scope is singleton, the default, this happens only once). There is no need to call those @Bean methods anywhere in your code directly; doing so risks giving you a separate, fresh instance that may not be fully configured.
Instead, you need to inject the BlockingQueue<ConsumerRecord> that you 'configured' in your QueueConfig.blockingQueue() method into your ThreadManager. Since the queue seems to be a mandatory dependency for the ThreadManager to work, I'd let Spring inject it via constructor:
@Component
public class ThreadManager {

    private int threads;

    // add an instance variable for the queue...
    private BlockingQueue<ConsumerRecord> blockingQueue;

    // you could add an @Autowired annotation to the BlockingQueue param,
    // but I believe it's not mandatory...
    public ThreadManager(BlockingQueue<ConsumerRecord> blockingQueue) {
        Properties appProps = new AppProperties().get();
        this.threads = Integer.parseInt(appProps.getProperty("threadManagerThreads"));
        this.blockingQueue = blockingQueue;
    }

    public void run() throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(threads);
        try {
            while (true) {
                this.blockingQueue.take();
            }
        } catch (Exception e) {
            System.out.print(e.toString());
            executorService.shutdownNow();
            executorService.awaitTermination(1, TimeUnit.SECONDS);
        }
    }
}
Just to clarify one more thing: by default, the method name of a @Bean method is used by Spring as the bean's unique ID (method name == bean id). So your method is called blockingQueue, which means your BlockingQueue<ConsumerRecord> instance will also be registered under the id blockingQueue in the application context. The new constructor parameter is also named blockingQueue, and its type matches BlockingQueue<ConsumerRecord>. Simplified, that's one way Spring looks up and injects/wires dependencies.
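If the parameter name did not match the bean id, a @Qualifier could make the wiring explicit; a minimal sketch (the parameter name here is illustrative):

public ThreadManager(@Qualifier("blockingQueue") BlockingQueue<ConsumerRecord> queue) {
    this.blockingQueue = queue; // explicitly wired to the bean registered as "blockingQueue"
}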
I am facing a problem in transaction rollback using the #Transactional annotation.
I have the following methods in my backing bean, service, and DAO classes:
public class ItemBackingBean {

    public void saveUpdate() {
        try {
            ItemService.executeTransaction();
        } catch (Exception e) {
        }
    }
}
public class ItemService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void executeTransaction() {
        deleteItem();
        createOrder();
    }

    private void deleteItem() {
        persist();
    }

    private void createOrder() {
        persist();
    }

    private void persist() {
        JpaDaoImpl.persist(object);
        JpaDaoImpl.update(object);
    }
}
public class JpaDaoImpl implements JpaDao {

    @Transactional(readOnly = true)
    public void persist(Object object) {
        getEm().persist(object);
    }

    @Transactional(readOnly = false, propagation = Propagation.REQUIRED)
    public void update(Object object) {
        getEm().merge(object);
    }

    @Transactional(readOnly = true)
    public void remove(Object object) {
        getEm().remove(object);
    }
}
If any exception occurs in createOrder(), all transactions should roll back, but that is not happening. Can anybody tell me the problem?
What is the impact of @Transactional in JpaDaoImpl.java? The persist() and update() methods have different readOnly values. This DAO is existing code in our project and we don't want to change it. Can anybody help?
For those who don't want to throw an exception (the transaction should not be rolled back only when an exception happens), use this: TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
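A minimal sketch of how this could look in the service above (the catch-and-mark pattern is illustrative):

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void executeTransaction() {
    try {
        deleteItem();
        createOrder();
    } catch (Exception e) {
        // mark the current transaction for rollback without rethrowing
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}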
If any exception occurs in createOrder(), all transactions should roll back, but it is not happening. Can anybody tell me the problem?
Rollback occurs only for RuntimeExceptions (see http://docs.spring.io/spring/docs/2.0.8/reference/transaction.html: "please note that the Spring Framework's transaction infrastructure code will, by default, only mark a transaction for rollback in the case of runtime, unchecked exceptions"), but this behaviour is customizable.
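A minimal sketch of that customization (opting in to rollback for checked exceptions as well):

@Transactional(rollbackFor = Exception.class) // also roll back on checked exceptions
public void executeTransaction() throws Exception {
    deleteItem();
    createOrder();
}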
You can keep the default transaction propagation, PROPAGATION_REQUIRED, without affecting the existing code if you want an all-or-nothing behaviour.