@Retryable method not working that also is @Scheduled and @EnableSchedulerLock - spring-boot

I want to create a cron job that is retryable, and only one instance should execute it when we deploy multiple instances of the application.
I have also referred to "@Recover annotated method is not discovered for a @Retryable method that also is @Scheduled", but I am unable to resolve the ArrayIndexOutOfBoundsException.
I am using Spring Boot 2.1.8.RELEASE.
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
@EnableRetry
public class MyScheduler {

    private int retry;

    @Scheduled(cron = "0 16 16 * * *")
    @SchedulerLock(name = "MyScheduler_lock", lockAtLeastForString = "PT5M", lockAtMostForString = "PT14M")
    @Retryable(value = Exception.class, maxAttempts = 2)
    public void retryAndRecover() throws Exception {
        retry++;
        log.info("Scheduling Service Failed " + retry);
        throw new Exception();
    }

    @Recover
    public void recover(Exception e, String str) {
        log.info("Service recovering");
    }
}
Detailed exception:
2019-12-07 19:42:00.109 INFO [my-service,false] 16767 --- [ scheduling-1] r.t.p.scheduler.MyScheduler : Scheduling Service Failed 1
2019-12-07 19:42:01.114 INFO [my-service,false] 16767 --- [ scheduling-1] r.t.p.scheduler.MyScheduler : Scheduling Service Failed 2
2019-12-07 19:42:01.123 ERROR [my-service,,,] 16767 --- [ scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
java.lang.ArrayIndexOutOfBoundsException: arraycopy: last source index 1 out of bounds for object array[0]
at java.base/java.lang.System.arraycopy(Native Method) ~[na:na]
at org.springframework.retry.annotation.RecoverAnnotationRecoveryHandler$SimpleMetadata.getArgs(RecoverAnnotationRecoveryHandler.java:166) ~[spring-retry-1.2.1.RELEASE.jar:na]
at org.springframework.retry.annotation.RecoverAnnotationRecoveryHandler.recover(RecoverAnnotationRecoveryHandler.java:62) ~[spring-retry-1.2.1.RELEASE.jar:na]

Your recover method can't have more parameters than the main method (aside from the exception), so the extra String str parameter must be removed.
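A minimal sketch of the fix, reusing the method bodies from the question:

@Scheduled(cron = "0 16 16 * * *")
@SchedulerLock(name = "MyScheduler_lock", lockAtLeastForString = "PT5M", lockAtMostForString = "PT14M")
@Retryable(value = Exception.class, maxAttempts = 2)
public void retryAndRecover() throws Exception {
    retry++;
    log.info("Scheduling Service Failed " + retry);
    throw new Exception();
}

// Same parameter list as retryAndRecover(), optionally preceded by the exception:
// no extra String, so RecoverAnnotationRecoveryHandler can map the arguments.
@Recover
public void recover(Exception e) {
    log.info("Service recovering");
}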

Related

No subscriptions have been created in Reactor Kafka and Spring Integration

I'm trying to create a simple flow with Spring Integration and Project Reactor, where I consume records with Reactor Kafka, passing them to a channel that from there it will produce messages into another topic with Reactor Kafka.
The consuming flow is:
@Service
public class ReactiveConsumerService {

    public ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveConsumerService(ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate) {
        this.reactiveKafkaConsumerTemplate = reactiveKafkaConsumerTemplate;
    }

    @Bean
    public IntegrationFlow readFromKafka() {
        return IntegrationFlows.from(reactiveKafkaConsumerTemplate.receiveAutoAck()
                        .map(GenericMessage::new))
                .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value)
                .<String, String>transform(String::toUpperCase)
                .channel(directChannel)
                .get();
    }
}
And the producing flow is:
@Service
public class ReactiveProducerService {

    private final ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveProducerService(ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate) {
        this.reactiveKafkaProducerTemplate = reactiveKafkaProducerTemplate;
    }

    @Bean
    public IntegrationFlow kafkaProducerFlow() {
        return IntegrationFlows.from(directChannel)
                .handle(s -> reactiveKafkaProducerTemplate.send("topic2", s.getPayload().toString()))
                .get();
    }
}
I'd like to know how and where exactly should I perform the subscription.
Edit:
I've added a .subscribe() and it still doesn't work:
2022-01-25 20:36:59.570 INFO 1804 --- [ration-sample-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-reactive-kafka-spring-integration-sample-1 unregistered
2022-01-25 20:36:59.573 ERROR 1804 --- [oundedElastic-1] reactor.core.publisher.Operators : Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalStateException: No subscriptions have been created
Caused by: java.lang.IllegalStateException: No subscriptions have been created
at reactor.kafka.receiver.ReceiverOptions.subscriber(ReceiverOptions.java:423) ~[reactor-kafka-1.3.9.jar:1.3.9]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.FluxPeekFuseable] :
reactor.core.publisher.Flux.doOnRequest
Caused by: java.lang.IllegalStateException: No subscriptions have been created
reactor.kafka.receiver.internals.ConsumerHandler.receive(ConsumerHandler.java:110)
Error has been observed at the following site(s):
*________Flux.doOnRequest ⇢ at reactor.kafka.receiver.internals.ConsumerHandler.receive(ConsumerHandler.java:110)
|_ Flux.filter ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:70)
|_ Flux.publishOn ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:71)
|_ Flux.map ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:72)
*______________Flux.using ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$withHandler$19(DefaultKafkaReceiver.java:137)
*__________Flux.usingWhen ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.withHandler(DefaultKafkaReceiver.java:129)
|_ ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.receiveAutoAck(DefaultKafkaReceiver.java:68)
|_ ⇢ at reactor.kafka.receiver.KafkaReceiver.receiveAutoAck(KafkaReceiver.java:124)
|_ Flux.concatMap ⇢ at org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate.receiveAutoAck(ReactiveKafkaConsumerTemplate.java:69)
|_ Flux.map ⇢ at reactor.kafka.spring.integration.samples.service.ReactiveConsumerService.readFromKafka(ReactiveConsumerService.java:38)
|_ Flux.from ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:118)
|_ Flux.delaySubscription ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:119)
|_ Flux.publishOn ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:120)
|_ Flux.doOnNext ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:121)
Original Stack Trace:
at reactor.kafka.receiver.ReceiverOptions.subscriber(ReceiverOptions.java:423) ~[reactor-kafka-1.3.9.jar:1.3.9]
at reactor.kafka.receiver.internals.ConsumerEventLoop$SubscribeEvent.run(ConsumerEventLoop.java:207) ~[reactor-kafka-1.3.9.jar:1.3.9]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) ~[reactor-core-3.4.14.jar:3.4.14]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) ~[reactor-core-3.4.14.jar:3.4.14]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
2022-01-25 20:36:59.772 INFO 1804 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8090
2022-01-25 20:36:59.853 INFO 1804 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Routes startup summary (total:0 started:0)
2022-01-25 20:36:59.853 INFO 1804 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.12.0 (camel-1) started in 149ms (build:84ms init:59ms start:6ms)
2022-01-25 20:36:59.866 INFO 1804 --- [ main] ReactorKafkaSpringIntegrationApplication : Started ReactorKafkaSpringIntegrationApplication in 4.246 seconds (JVM running for 4.616)
The sample code:
@Service
public class ReactiveProducerService {

    private final ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveProducerService(ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate) {
        this.reactiveKafkaProducerTemplate = reactiveKafkaProducerTemplate;
    }

    @Bean
    public IntegrationFlow kafkaProducerFlow() {
        return IntegrationFlows.from(directChannel)
                .handle(s -> reactiveKafkaProducerTemplate.send("topic2", s.getPayload().toString()).subscribe(System.out::println))
                .get();
    }
}
The subscription to the reactiveKafkaConsumerTemplate happens immediately when the endpoint for the .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value) is started automatically by the application context.
See this one as an alternative:
/**
 * Represent an Integration Flow as a Reactive Streams {@link Publisher} bean.
 * @param autoStartOnSubscribe start message production and consumption in the flow,
 * when a subscription to the publisher is initiated.
 * If this set to true, the flow is marked to not start automatically by the application context.
 * @param <T> the expected {@code payload} type
 * @return the Reactive Streams {@link Publisher}
 * @since 5.5.6
 */
@SuppressWarnings(UNCHECKED)
protected <T> Publisher<Message<T>> toReactivePublisher(boolean autoStartOnSubscribe) {
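A sketch of how the consuming flow above could end with that alternative instead of being started by the application context (assuming Spring Integration 5.5.6+; subscribing to the returned Publisher is what starts the flow):

@Bean
public Publisher<Message<String>> reactiveUpperCaseFlow(
        ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate) {
    return IntegrationFlows.from(reactiveKafkaConsumerTemplate.receiveAutoAck()
                    .map(GenericMessage::new))
            .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value)
            .<String, String>transform(String::toUpperCase)
            // the flow starts only when something subscribes to this Publisher
            .toReactivePublisher(true);
}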
Although I think you mean the subscription on the outbound side. It is not clear from your question, but that reactiveKafkaProducerTemplate has a contract like:
public Mono<SenderResult<Void>> send(String topic, V value) {
So, you need to subscribe to that returned Mono to initiate a process.
NOTE: you have the arguments for that send() mixed up as well. Didn't you mean this instead: reactiveKafkaProducerTemplate.send("topic2", "test")?
To make it subscribing to that Mono, you just need to do that yourself in that handle():
.handle(s -> reactiveKafkaProducerTemplate.send("topic2", "test").subscribe())
UPDATE 2
The error java.lang.IllegalStateException: No subscriptions have been created from reactor.kafka.receiver.ReceiverOptions.subscriber() means that you didn't assign any topics, patterns, or partitions to listen to.
See ReceiverOptions.subscription() or ReceiverOptions.assignment().
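For reference, a minimal sketch of a ReceiverOptions that would avoid that error; the bootstrap servers, group id, and topic name are placeholders:

Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "sample-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

ReceiverOptions<String, String> receiverOptions =
        ReceiverOptions.<String, String>create(props)
                // without a subscription (or assignment) the receiver throws
                // "No subscriptions have been created"
                .subscription(Collections.singletonList("topic1"));

ReactiveKafkaConsumerTemplate<String, String> template =
        new ReactiveKafkaConsumerTemplate<>(receiverOptions);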

KafkaProducer InterruptedException during graceful shutdown on spring boot application

For a project we are sending some events to Kafka. We use spring-kafka 2.6.2.
Due to our use of spring-vault, we have to restart/kill the application before the end of the credentials lease (the application is automatically restarted by Kubernetes).
Our problem is that when we use applicationContext.close() for our graceful shutdown, KafkaProducer gets an InterruptedException ("Interrupted while joining ioThread") inside its close() method.
That means some pending events are not sent to Kafka before shutdown, as the producer is forced to close due to an error during destroy.
Below is a stack trace:
2020-12-18 13:57:29.007 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2020-12-18 13:57:29.009 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2020-12-18 13:57:29.013 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Destroying Spring FrameworkServlet 'dispatcherServlet'
2020-12-18 13:57:29.014 INFO [titan-producer,,,] 1 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
2020-12-18 13:57:29.020 WARN [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [kafka-producer-network-thread | titan-producer-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base#11.0.9.1/sun.nio.ch.EPoll.wait(Native Method)
java.base#11.0.9.1/sun.nio.ch.EPollSelectorImpl.doSelect(Unknown Source)
java.base#11.0.9.1/sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
java.base#11.0.9.1/sun.nio.ch.SelectorImpl.select(Unknown Source)
org.apache.kafka.common.network.Selector.select(Selector.java:873)
org.apache.kafka.common.network.Selector.poll(Selector.java:469)
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
java.base#11.0.9.1/java.lang.Thread.run(Unknown Source)
2020-12-18 13:57:29.021 WARN [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [micrometer-kafka-metrics] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base#11.0.9.1/jdk.internal.misc.Unsafe.park(Native Method)
java.base#11.0.9.1/java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base#11.0.9.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.base#11.0.9.1/java.lang.Thread.run(Unknown Source)
2020-12-18 13:57:29.046 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2020-12-18 13:57:29.048 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler'
2020-12-18 13:57:29.051 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=titan-producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
2020-12-18 13:57:29.055 ERROR [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=titan-producer-1] Interrupted while joining ioThread
java.lang.InterruptedException: null
at java.base/java.lang.Object.wait(Native Method)
at java.base/java.lang.Thread.join(Unknown Source)
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1205)
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1182)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.closeDelegate(DefaultKafkaProducerFactory.java:901)
at org.springframework.kafka.core.DefaultKafkaProducerFactory.destroy(DefaultKafkaProducerFactory.java:428)
at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:258)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:587)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:559)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:1092)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:520)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingletons(DefaultListableBeanFactory.java:1085)
at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:1061)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1030)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.doClose(ServletWebServerApplicationContext.java:170)
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:979)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2020-12-18 13:57:29.055 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=titan-producer-1] Proceeding to force close the producer since pending requests could not be completed within timeout 30000 ms.
2020-12-18 13:57:29.056 WARN [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.s.b.f.support.DisposableBeanAdapter : Invocation of destroy method failed on bean with name 'kafkaProducerFactory': org.apache.kafka.common.errors.InterruptException: java.lang.InterruptedException
2020-12-18 13:57:29.064 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService
2020-12-18 13:57:29.065 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] c.l.t.p.zookeeper.ZookeeperManagerImpl : Closing zookeeperConnection
2020-12-18 13:57:29.197 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] org.apache.zookeeper.ZooKeeper : Session: 0x30022348ba6000b closed
2020-12-18 13:57:29.197 INFO [titan-producer,,,] 1 --- [d-1-EventThread] org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x30022348ba6000b
2020-12-18 13:57:29.206 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] com.zaxxer.hikari.HikariDataSource : loadtest_fallback_titan_pendingEvents - Shutdown initiated...
2020-12-18 13:57:29.221 INFO [titan-producer,222efdd2a07966ce,222efdd2a07966ce,true] 1 --- [ scheduling-1] com.zaxxer.hikari.HikariDataSource : loadtest_fallback_titan_pendingEvents - Shutdown completed.
Here is my configuration class
@Flogger
@EnableKafka
@Configuration
@RequiredArgsConstructor
@ConditionalOnProperty(
        name = "titan.producer.kafka.enabled",
        havingValue = "true",
        matchIfMissing = true)
public class KafkaConfiguration {

    @Bean
    DefaultKafkaProducerFactoryCustomizer kafkaProducerFactoryCustomizer(ObjectMapper mapper) {
        return producerFactory -> producerFactory.setValueSerializer(new JsonSerializer<>(mapper));
    }

    @Bean
    public NewTopic createTopic(TitanProperties titanProperties, KafkaProperties kafkaProperties) {
        TitanProperties.Kafka kafka = titanProperties.getKafka();
        String defaultTopic = kafkaProperties.getTemplate().getDefaultTopic();
        int numPartitions = kafka.getNumPartitions();
        short replicationFactor = kafka.getReplicationFactor();
        log.atInfo()
                .log("Creating Kafka Topic %s with %s partitions and %s replicationFactor", defaultTopic, numPartitions, replicationFactor);
        return TopicBuilder.name(defaultTopic)
                .partitions(numPartitions)
                .replicas(replicationFactor)
                .config(MESSAGE_TIMESTAMP_TYPE_CONFIG, LOG_APPEND_TIME.name)
                .build();
    }
}
and my application.yaml
spring:
  application:
    name: titan-producer
  kafka:
    client-id: ${spring.application.name}
    producer:
      key-serializer: org.apache.kafka.common.serialization.UUIDSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      max.block.ms: 2000
      request.timeout.ms: 2000
      delivery.timeout.ms: 2000 # must be greater than or equal to request.timeout.ms + linger.ms
    template:
      default-topic: titan-dev
Our vault configuration executes applicationContext.close() using a scheduled task. We pick the restart time somewhat randomly, because we have multiple replicas of the app running in parallel and want to avoid all of them being killed at the same time.
@Flogger
@Configuration
@ConditionalOnBean(SecretLeaseContainer.class)
@ConditionalOnProperty(
        name = "titan.producer.scheduling.enabled",
        havingValue = "true",
        matchIfMissing = true)
public class VaultConfiguration {

    @Bean
    public Lifecycle scheduledAppRestart(Clock clock, TitanProperties properties, TaskScheduler scheduler, ConfigurableApplicationContext applicationContext) {
        Instant now = clock.instant();
        Duration maxTTL = properties.getVaultConfig().getCredsMaxLease();
        Instant start = now.plusSeconds(maxTTL.dividedBy(2).toSeconds());
        Instant end = now.plusSeconds(maxTTL.minus(properties.getVaultConfig().getCredsMaxLeaseExpirationThreshold()).toSeconds());
        Instant randomInstant = randBetween(start, end);
        return new ScheduledLifecycle(scheduler, applicationContext::close, "application restart before lease expiration", randomInstant);
    }

    private Instant randBetween(Instant startInclusive, Instant endExclusive) {
        long startSeconds = startInclusive.getEpochSecond();
        long endSeconds = endExclusive.getEpochSecond();
        long random = RandomUtils.nextLong(startSeconds, endSeconds);
        return Instant.ofEpochSecond(random);
    }
}
The ScheduledLifecycle class we use to run the scheduled tasks:
import lombok.extern.flogger.Flogger;
import org.springframework.context.SmartLifecycle;
import org.springframework.scheduling.TaskScheduler;

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ScheduledFuture;

@Flogger
public class ScheduledLifecycle implements SmartLifecycle {

    private ScheduledFuture<?> future = null;
    private Duration delay = null;

    private final TaskScheduler scheduler;
    private final Runnable command;
    private final String commandDesc;
    private final Instant startTime;

    public ScheduledLifecycle(TaskScheduler scheduler, Runnable command, String commandDesc, Instant startTime) {
        this.scheduler = scheduler;
        this.command = command;
        this.commandDesc = commandDesc;
        this.startTime = startTime;
    }

    public ScheduledLifecycle(TaskScheduler scheduler, Runnable command, String commandDesc, Instant startTime, Duration delay) {
        this(scheduler, command, commandDesc, startTime);
        this.delay = delay;
    }

    @Override
    public void start() {
        if (delay != null) {
            log.atInfo().log("Scheduling %s: starting at %s, running every %s", commandDesc, startTime, delay);
            future = scheduler.scheduleWithFixedDelay(command, startTime, delay);
        } else {
            log.atInfo().log("Scheduling %s: execution at %s", commandDesc, startTime);
            future = scheduler.schedule(command, startTime);
        }
    }

    @Override
    public void stop() {
        if (future != null) {
            log.atInfo().log("Stop %s", commandDesc);
            future.cancel(true);
        }
    }

    @Override
    public boolean isRunning() {
        boolean running = future != null && (!future.isDone() && !future.isCancelled());
        log.atFine().log("is %s running? %s", commandDesc, running);
        return running;
    }
}
Is there a bug with spring-kafka? Any idea?
Thanks
future.cancel(true);
This is interrupting the producer thread and is likely the root cause of the problem.
You should use future.cancel(false); to allow the task to terminate in an orderly fashion, without interruption.
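Applied to the ScheduledLifecycle above, stop() becomes:

@Override
public void stop() {
    if (future != null) {
        log.atInfo().log("Stop %s", commandDesc);
        // false = do not interrupt a task that is already running, so the
        // in-flight applicationContext.close() can finish cleanly
        future.cancel(false);
    }
}

For reference, the Future.cancel contract: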
/**
 * Attempts to cancel execution of this task. This attempt will
 * fail if the task has already completed, has already been cancelled,
 * or could not be cancelled for some other reason. If successful,
 * and this task has not started when {@code cancel} is called,
 * this task should never run. If the task has already started,
 * then the {@code mayInterruptIfRunning} parameter determines
 * whether the thread executing this task should be interrupted in
 * an attempt to stop the task.
 *
 * <p>After this method returns, subsequent calls to {@link #isDone} will
 * always return {@code true}. Subsequent calls to {@link #isCancelled}
 * will always return {@code true} if this method returned {@code true}.
 *
 * @param mayInterruptIfRunning {@code true} if the thread executing this
 * task should be interrupted; otherwise, in-progress tasks are allowed
 * to complete
 * @return {@code false} if the task could not be cancelled,
 * typically because it has already completed normally;
 * {@code true} otherwise
 */
boolean cancel(boolean mayInterruptIfRunning);
EDIT
In addition, ThreadPoolTaskScheduler.waitForTasksToCompleteOnShutdown is false by default.
/**
 * Set whether to wait for scheduled tasks to complete on shutdown,
 * not interrupting running tasks and executing all tasks in the queue.
 * <p>Default is "false", shutting down immediately through interrupting
 * ongoing tasks and clearing the queue. Switch this flag to "true" if you
 * prefer fully completed tasks at the expense of a longer shutdown phase.
 * <p>Note that Spring's container shutdown continues while ongoing tasks
 * are being completed. If you want this executor to block and wait for the
 * termination of tasks before the rest of the container continues to shut
 * down - e.g. in order to keep up other resources that your tasks may need -,
 * set the {@link #setAwaitTerminationSeconds "awaitTerminationSeconds"}
 * property instead of or in addition to this property.
 * @see java.util.concurrent.ExecutorService#shutdown()
 * @see java.util.concurrent.ExecutorService#shutdownNow()
 */
public void setWaitForTasksToCompleteOnShutdown(boolean waitForJobsToCompleteOnShutdown) {
    this.waitForTasksToCompleteOnShutdown = waitForJobsToCompleteOnShutdown;
}
You might also have to set awaitTerminationSeconds.
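For example, a scheduler bean along these lines (the 30-second timeout is an arbitrary value for illustration):

@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    // do not interrupt running tasks on shutdown; drain the queue instead
    scheduler.setWaitForTasksToCompleteOnShutdown(true);
    // block context shutdown for up to 30 seconds while tasks terminate
    scheduler.setAwaitTerminationSeconds(30);
    return scheduler;
}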

Is it possible to enforce message order on ActiveMQ topics using Spring Boot and JmsTemplate?

In playing around with Spring Boot, ActiveMQ, and JmsTemplate, I noticed that message order is not always preserved. In reading on ActiveMQ, "Message Groups" are offered as a potential solution to preserving message order when sending to a topic. Is there a way to do this with JmsTemplate?
Added note: I'm starting to think that JmsTemplate is nice for "getting launched", but it has too many issues.
Sample code and console output are posted below...
@RestController
public class EmptyControllerSB {

    @Autowired
    MsgSender msgSender;

    @RequestMapping(method = RequestMethod.GET, value = { "/v1/msgqueue" })
    public String getAccount() {
        msgSender.sendJmsMessageA();
        msgSender.sendJmsMessageB();
        return "Do nothing...successfully!";
    }
}

@Component
public class MsgSender {

    @Autowired
    JmsTemplate jmsTemplate;

    void sendJmsMessageA() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message A");
    }

    void sendJmsMessageB() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message B");
    }
}

@Component
public class MsgReceiver {

    private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
    private final String consumerTwo = "Consumer.myConsumer2.VirtualTopic.TEST-TOPIC";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }

    @JmsListener(destination = consumerTwo)
    public void receiveMessage3(String strMessage) {
        System.out.println("Received on #2 -> " + strMessage);
    }
}
Here's the console output (note the order of output in first sequence)...
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 672 ms
2019-04-03 09:23:08.705 INFO 13936 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-04-03 09:23:08.845 INFO 13936 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-04-03 09:23:08.877 INFO 13936 --- [ main] mil.navy.msgqueue.MsgqueueApplication : Started MsgqueueApplication in 1.391 seconds (JVM running for 1.857)
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-04-03 09:23:14.952 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 3 ms
Received on #2 -> message A
Received on #1a -> message B
Received on #1b -> message A
Received on #2 -> message B
<HIT DO-NOTHING ENDPOINT AGAIN>
Received on #1b -> message A
Received on #2 -> message A
Received on #1a -> message B
Received on #2 -> message B
BLUF - Add "?consumer.exclusive=true" to the destination declared in the JmsListener annotation.
It seems the solution is not that complex, especially if one abandons ActiveMQ's "message groups" in favor of "exclusive consumers". The drawback of "message groups" is that the sender has to have prior knowledge of the potential partitioning of message consumers. If the producer has this knowledge, then "message groups" are a nice solution, as it is somewhat independent of the consumer.
But a similar solution can be implemented from the consumer side, by having the consumer declare an "exclusive consumer" on the queue. While I did not see anything in the JmsTemplate implementation that directly supports this, Spring's JmsTemplate implementation passes the queue name to ActiveMQ, and ActiveMQ then "does the right thing" and enforces the exclusive-consumer behavior.
So...
Change the following...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
to...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";
Once I did this, only one of the two declared receive methods was invoked, and message order was maintained in all my test runs.
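For completeness, a minimal sketch of the corrected receiver (only the consumerOne listeners shown; with the exclusive flag, the broker routes everything to one of them):

@Component
public class MsgReceiver {

    // "?consumer.exclusive=true" makes the broker deliver all messages of this
    // queue to a single consumer, which preserves ordering
    private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }
}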

AWS Lambda - Spring boot is not handling the request

I am trying to run a Spring Boot application as serverless in AWS Lambda, and I am getting the below exception when calling the Lambda function. The Spring Boot application starts successfully, but it then fails to map the request:
2018-09-25 06:11:50.717 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-09-25 06:11:50.823 INFO 1 --- [ main] my.service.Application : Started Application in 7.405 seconds (JVM running for 8.939)
START RequestId: decfc13c-c089-11e8-bacd-a37f1ba65629 Version: $LATEST
2018-09-25 06:11:50.994 ERROR 1 --- [ main] c.a.s.p.i.s.AwsProxyHttpServletRequest : Called set character encoding to UTF-8 on a request without a content type. Character encoding will not be set
2018-09-25 06:11:51.175 ERROR 1 --- [ main] o.s.boot.web.support.ErrorPageFilter : Forwarding to error page from request [/] due to exception [null]
java.lang.NullPointerException: null
at com.amazonaws.serverless.proxy.internal.servlet.AwsProxyHttpServletRequest.getRemoteAddr(AwsProxyHttpServletRequest.java:575) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.publishRequestHandledEvent(FrameworkServlet.java:1075) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) ~[task/:na]
.........
2018-09-25 06:11:51.535 ERROR 1 --- [ main] s.p.i.s.AwsLambdaServletContainerHandler : Could not forward request
This is my StreamLambdaHandler java file.
public class StreamLambdaHandler implements RequestStreamHandler {

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        outputStream.close();
    }
}
Looks like you might be hitting https://github.com/awslabs/aws-serverless-java-container/issues/172. According to the ticket, the fix will be available as part of the upcoming 1.2 release.

Resilience4j circuit breaker used with reactive Flux never changes to OPEN on errors

I am evaluating resilience4j to include it in our reactive APIs; so far I am using mock Fluxes.
The service below always fails, as I want to test whether the circuit OPENs after multiple errors:
@Service
class GamesRepositoryImpl : GamesRepository {

    override fun findAll(): Flux<Game> {
        return if (Math.random() <= 1.0) {
            Flux.error(RuntimeException("fail"))
        } else {
            Flux.just(
                    Game("The Secret of Monkey Island"),
                    Game("Loom"),
                    Game("Maniac Mansion"),
                    Game("Day of the Tentacle")).log()
        }
    }
}
This is the handler that uses the repository, printing the state of the circuit:
@Component
class ApiHandlers(private val gamesRepository: GamesRepository) {

    var circuitBreaker: CircuitBreaker = CircuitBreaker.ofDefaults("gamesCircuitBreaker")

    fun getGames(serverRequest: ServerRequest): Mono<ServerResponse> {
        println("*********${circuitBreaker.state}")
        return ok().body(gamesRepository.findAll()
                .transform(CircuitBreakerOperator.of(circuitBreaker)), Game::class.java)
    }
}
I invoke the API endpoint many times, always getting this stacktrace:
*********CLOSED
2018-03-14 12:02:28.153 ERROR 1658 --- [ctor-http-nio-3] .a.w.r.e.DefaultErrorWebExceptionHandler : Failed to handle request [GET http://localhost:8081/api/v1/games]
java.lang.RuntimeException: FAIL
at com.codependent.reactivegames.repository.GamesRepositoryImpl.findAll(GamesRepositoryImpl.kt:12) ~[classes/:na]
at com.codependent.reactivegames.web.handler.ApiHandlers.getGames(ApiHandlers.kt:20) ~[classes/:na]
...
2018-03-14 12:05:48.973 DEBUG 1671 --- [ctor-http-nio-2] i.g.r.c.i.CircuitBreakerStateMachine : No Consumers: Event ERROR not published
2018-03-14 12:05:48.975 ERROR 1671 --- [ctor-http-nio-2] .a.w.r.e.DefaultErrorWebExceptionHandler : Failed to handle request [GET http://localhost:8081/api/v1/games]
java.lang.RuntimeException: fail
at com.codependent.reactivegames.repository.GamesRepositoryImpl.findAll(GamesRepositoryImpl.kt:12) ~[classes/:na]
at com.codependent.reactivegames.web.handler.ApiHandlers.getGames(ApiHandlers.kt:20) ~[classes/:na]
at com.codependent.reactivegames.web.route.ApiRoutes$apiRouter$1$1$1.invoke(ApiRoutes.kt:14) ~[classes/:na]
As you can see, the circuit is always CLOSED. I don't know if it is related, but notice this message: No Consumers: Event ERROR not published.
Why isn't this working?
The problem was the default ringBufferSizeInClosedState, which is 100 requests; I never made that many manual requests.
I set up my own CircuitBreakerConfig for my tests, and now the circuit opens right away:
val circuitBreakerConfig: CircuitBreakerConfig = CircuitBreakerConfig.custom()
        .failureRateThreshold(50f)
        .waitDurationInOpenState(Duration.ofMillis(10000))
        .ringBufferSizeInHalfOpenState(5)
        .ringBufferSizeInClosedState(5)
        .build()

var circuitBreaker: CircuitBreaker = CircuitBreaker.of("gamesCircuitBreaker", circuitBreakerConfig)
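If you want to see the transition explicitly rather than printing the state on each request, you can register an event consumer on the breaker. A sketch in Java, assuming a resilience4j version that provides the EventPublisher API (1.x and later):

CircuitBreaker circuitBreaker = CircuitBreaker.of("gamesCircuitBreaker", circuitBreakerConfig);
// print every state transition, e.g. CLOSED -> OPEN once the 5-slot ring buffer fills with failures
circuitBreaker.getEventPublisher()
        .onStateTransition(event -> System.out.println("Transition: " + event.getStateTransition()));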
