NoSuchJobException when trying to restart a Spring Batch job

I am running a Spring Batch job through a CommandLineRunner. The way I want my job to run: every run creates a new JobInstance, and if the JobExecution fails to complete, I rerun the job with a --restart parameter, which should find the previous execution and resume that JobExecution instead of creating a new JobInstance.
Below is my JobConfig class.
@Slf4j
@Configuration
public class JobConfig {

    @Autowired
    List<Job> jobsToRun; // jobs annotated as @Component

    @Bean
    public CommandLineRunner commandLineRunner(JobLauncher jobLauncher,
            JobRepository jobRepository, JobOperator jobOperator, JobExplorer jobExplorer) {
        return args -> {
            JobParameters parameters = new JobParametersBuilder()
                    .addLong("timestamp", System.currentTimeMillis())
                    .toJobParameters();
            SimpleCommandLinePropertySource cli = new SimpleCommandLinePropertySource(args);
            jobsToRun.forEach(job -> {
                try {
                    if (!cli.containsProperty("restart")) {
                        jobLauncher.run(job, parameters);
                    } else {
                        long jobInstanceId = jobOperator.getJobInstances(job.getName(), 0, 1).get(0);
                        long lastExecutionId = jobOperator.getExecutions(jobInstanceId).get(0);
                        if (jobExplorer.getJobExecution(lastExecutionId).getStatus() == BatchStatus.FAILED) {
                            jobOperator.restart(lastExecutionId);
                            log.info("Restarting the job " + job.getName());
                        } else {
                            log.warn("Cannot restart the job. Job not in FAILED state.");
                        }
                    }
                } catch (JobExecutionAlreadyRunningException | JobRestartException
                        | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
                    log.error("Error occurred while running job " + job.getName() + ". Reason: " + e.getMessage());
                } catch (NoSuchJobException | NoSuchJobInstanceException | NoSuchJobExecutionException e) {
                    e.printStackTrace();
                }
            });
        };
    }
}
This configuration collects all my job beans and runs them through this CommandLineRunner bean.
I run the job without the --restart parameter and everything runs fine.
Next, when I deliberately fail the job and try running it again with the --restart parameter, the app throws org.springframework.batch.core.launch.NoSuchJobException: No job configuration with the name [itemJob] was registered.
When I debug through the program, the jobInstanceId and the jobExecutionId seem to be the right IDs. One important thing: the app does not log anything at ERROR level; I just get this exception at INFO level. Not sure what I am missing here.
Just to be more clear, I am also including a job component bean here in case it helps.
@Component
@Slf4j
@Profile("master")
@ConditionalOnProperty(name = "item", havingValue = "true")
public class ItemImportJob {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private ItemRemotePartition itemRemotePartition;

    @Bean
    @Profile("master")
    public Job itemJob() throws Exception {
        return jobBuilderFactory.get("itemJob").listener(new JobExecutionListener() {
            @Override
            public void beforeJob(JobExecution jobExecution) {
                log.info("Ready to start the job");
            }

            @Override
            public void afterJob(JobExecution jobExecution) {
                log.info("Job successfully executed.");
            }
        }).incrementer(new RunIdIncrementer())
                .start(itemRemotePartition.masterStep())
                .build();
    }
}
Complete log of NoSuchJobException:
2018-02-06 13:29:35.789 INFO 82332 --- [ main] c.a.s.p.b.BulkImportProductApplication : Started BulkImportProductApplication in 19.762 seconds (JVM running for 21.76)
org.springframework.batch.core.launch.NoSuchJobException: No job configuration with the name [itemJob] was registered
at org.springframework.batch.core.configuration.support.MapJobRegistry.getJob(MapJobRegistry.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy117.getJob(Unknown Source)
at org.springframework.batch.core.launch.support.SimpleJobOperator.restart(SimpleJobOperator.java:275)
at org.springframework.batch.core.launch.support.SimpleJobOperator$$FastClassBySpringCGLIB$$44ee6049.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:669)
at org.springframework.batch.core.launch.support.SimpleJobOperator$$EnhancerBySpringCGLIB$$587272bf.restart(<generated>)
at com.art.service.product.bulkimportproduct.config.job.JobConfig.lambda$null$0(JobConfig.java:52)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at com.art.service.product.bulkimportproduct.config.job.JobConfig.lambda$commandLineRunner$1(JobConfig.java:44)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
at com.art.service.product.bulkimportproduct.BulkImportProductApplication.main(BulkImportProductApplication.java:17)
Let me know if I can help you out with any other details that might help figure out what is going wrong. Thanks in advance.
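For reference, the stack trace above shows SimpleJobOperator resolving the job by name through a MapJobRegistry, and JobOperator.restart() only finds jobs that have been registered there, whereas JobLauncher.run(job, parameters) takes the Job object directly and needs no registration - which would explain why only the restart path fails. One common way to register all job beans is Spring Batch's JobRegistryBeanPostProcessor; a minimal sketch (the bean method name is illustrative, not taken from the question):
@Bean
public JobRegistryBeanPostProcessor jobRegistryBeanPostProcessor(JobRegistry jobRegistry) {
    // registers every Job bean in the context with the JobRegistry,
    // so SimpleJobOperator.restart() can look jobs up by name
    JobRegistryBeanPostProcessor postProcessor = new JobRegistryBeanPostProcessor();
    postProcessor.setJobRegistry(jobRegistry);
    return postProcessor;
}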

Related

spring-retry: "Getting Retry exhausted after last attempt with no recovery path" with original exception on non-retryable method

I am trying to implement spring-retry (version 1.3.1) in my Spring Boot application. I have to retry a web service read operation if the record is not found on the first request.
sample code:
sample code:
@Retryable(include = {IllegalArgumentException.class}, backoff = @Backoff(500), maxAttempts = 3, recover = "readFallback")
Object read(String Id);

@Recover
Object readFallback(RuntimeException e, String Id);

void deletePayment(String paymentId);
Problem:
I get the correct response from the read method (annotated with @Retryable) in the exception scenario, but I get a RetryExhaustedException wrapping the original exception when an exception occurs in my delete method. As you can see, the delete method is not annotated with @Retryable, and it is in a different package.
Sample exception response: "Retry exhausted after last attempt with no recovery path; nested exception is exception.NotFoundException: Not found"
Expected: the delete method should not be affected by @Retryable. Can someone help me find what I am missing or doing wrong? I have tried but have been unable to find a solution to this problem on the internet.
Thanks in advance!
Works as expected for me:
@SpringBootApplication
@EnableRetry
public class So71546747Application {

    public static void main(String[] args) {
        SpringApplication.run(So71546747Application.class, args);
    }

    @Bean
    ApplicationRunner runner(SomeRetryables retrier) {
        return args -> {
            retrier.foo("testFoo");
            try {
                Thread.sleep(1000);
                retrier.bar("testBar");
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        };
    }
}

@Component
class SomeRetryables {

    @Retryable
    void foo(String in) {
        System.out.println(in);
        throw new RuntimeException(in);
    }

    @Recover
    void recover(String in, Exception ex) {
        System.out.println("recovered");
    }

    void bar(String in) {
        System.out.println(in);
        throw new RuntimeException(in);
    }
}
testFoo
testFoo
testFoo
recovered
testBar
java.lang.RuntimeException: testBar
at com.example.demo.SomeRetryables.bar(So71546747Application.java:52)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:789)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.retry.annotation.AnnotationAwareRetryOperationsInterceptor.invoke(AnnotationAwareRetryOperationsInterceptor.java:166)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
at com.example.demo.SomeRetryables$$EnhancerBySpringCGLIB$$e61dd199.bar(<generated>)
at com.example.demo.So71546747Application.lambda$0(So71546747Application.java:26)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:768)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:758)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:310)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1301)
at com.example.demo.So71546747Application.main(So71546747Application.java:17)
Please provide an MCRE (minimal, complete, reproducible example) that exhibits the behavior you see so we can see what's wrong.

How can we get the JobId in the RetryContext?

I am just extending my earlier question here: Spring Retry doesn't works when we use RetryTemplate?.
How can we get the JobId in the RetryContext?
I went through Spring Batch how to configure retry period for failed jobs, but it still did not tell me how.
@Component
@Slf4j
public class RecoveryCallback implements org.springframework.retry.RecoveryCallback<String> {

    @Autowired
    private NamedParameterJdbcTemplate namedJdbcTemplate;

    @Autowired
    private AbcService abcService;

    @Value("#{stepExecution.jobExecution.jobId}")
    private Long jobId;

    @Override
    public String recover(RetryContext context) throws Exception {
        log.warn("RecoveryCallback | recover is executed ...");
        ErrorLog errorLog = ErrorLog.builder()
                .jobName("ABC")
                .stepName("RETRY_STEP")
                .stepType("RETRY")
                ....
                ....
                ....
                .jobId(jobId)
                .build();
        abcService.updateErrLog(errorLog);
        return "Batch Job Retried and exhausted with all attempts";
    }
}
Since you are injecting stepExecution.jobExecution.jobId into a field of a Spring bean, you need to make this bean step-scoped. With this approach, the RetryContext is not used.
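For illustration, a minimal sketch of that step-scoped variant (the class name RecoveryCallbackBean is hypothetical; @StepScope enables late binding of the #{stepExecution...} expression, so the bean must only be used while a step is running):
@Component
@StepScope // late binding: #{stepExecution...} is resolved when the step runs
public class RecoveryCallbackBean implements RecoveryCallback<String> {

    @Value("#{stepExecution.jobExecution.jobId}")
    private Long jobId; // resolved at step runtime, not at application startup

    @Override
    public String recover(RetryContext context) {
        return "Recovered for jobId=" + jobId;
    }
}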
If you want to use the retry context, then you need to put the jobId in the context in the retryable method first. From your linked question:
retryTemplate.execute(retryContext -> {
    JobExecution jobExecution = jobLauncher.run(sampleAcctJob, pdfParams);
    if (!jobExecution.getAllFailureExceptions().isEmpty()) {
        log.error("============== sampleAcctJob Job failed, retrying.... ================");
        throw jobExecution.getAllFailureExceptions().iterator().next();
    }
    logDetails(jobExecution);
    // PUT JOB ID in retryContext
    retryContext.setAttribute("jobId", jobExecution.getExecutionId());
    return jobExecution;
});
With that, you can get the jobId from the context in the recover method:
@Override
public String recover(RetryContext context) throws Exception {
    log.warn("RecoveryCallback | recover is executed ...");
    ErrorLog errorLog = ErrorLog.builder()
            .jobName("ABC")
            .stepName("RETRY_STEP")
            .stepType("RETRY")
            ....
            ....
            .jobId((Long) context.getAttribute("jobId"))
            .build();
    abcService.updateErrLog(errorLog);
    return "Batch Job Retried and exhausted with all attempts";
}

Cannot run a few methods sequentially when Spring Boot starts

I have to run a few methods when the application starts, like the following:
@SpringBootApplication
public class Application implements CommandLineRunner {

    private final MonitoringService monitoringService;
    private final QrReaderServer qrReaderServer;

    @Override
    public void run(String... args) {
        monitoringService.launchMonitoring();
        qrReaderServer.launchServer();
    }
}
However, only the first one is executed! And the application is started:
... Started Application in 5.21 seconds (JVM running for 6.336)
... START_MONITORING for folder: D:\results
The second one is always skipped!
If I change the call order, then only whichever method is called first gets executed.
I could not find any solution for launching both at startup - I tried @PostConstruct, ApplicationRunner, @EventListener(ApplicationReadyEvent.class)...
It looks like they are blocking each other somehow, despite the fact that both methods return void.
Monitoring launch implementation:
@Override
public void launchMonitoring() {
    log.info("START_MONITORING for folder: {}", monitoringProperties.getFolder());
    try {
        WatchKey key;
        while ((key = watchService.take()) != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent.Kind<?> kind = event.kind();
                if (kind == ENTRY_CREATE) {
                    log.info("FILE_CREATED: {}", event.context());
                    // some delay so the file is fully uploaded
                    Thread.sleep(monitoringProperties.getFrequency());
                    String fullFileName = getFileName(event);
                    String fileName = FilenameUtils.removeExtension(fullFileName);
                    processResource(fullFileName, fileName);
                }
            }
            key.reset();
        }
    } catch (InterruptedException e) {
        log.error("interrupted exception for monitoring service", e);
    } catch (IOException e) {
        log.error("io exception while processing file", e);
    }
}
QR Reader start (launch TCP server with Netty configuration):
@Override
public void launchServer() {
    try {
        ChannelFuture serverChannelFuture = serverBootstrap.bind(hostAddress).sync();
        log.info("Server is STARTED : port {}", hostAddress.getPort());
        serverChannel = serverChannelFuture.channel().closeFuture().sync().channel();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        shutdownQuietly();
    }
}
How to solve this issue?
Start launchMonitoring() asynchronously: it never returns (the watchService.take() loop blocks forever), so the runner never reaches the second call.
The easiest way to do this is to enable async support by adding @EnableAsync to your application
and then annotating launchMonitoring() with @Async.
Not sure if launchServer() should also be started asynchronously.
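For illustration, a minimal sketch of the suggested change (the service implementation class is assumed from the question):
@SpringBootApplication
@EnableAsync
public class Application implements CommandLineRunner {
    // ... run(...) stays exactly as before
}

// in the MonitoringService implementation:
@Async // runs on a task executor thread, so run(...) proceeds to launchServer()
@Override
public void launchMonitoring() {
    // ... watch loop as before
}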
EDIT: completed Answer
No task executor bean found for async processing: no bean of type TaskExecutor and no bean named 'taskExecutor' either
By default Spring will create a SimpleAsyncTaskExecutor, but you can provide your own task executor.
Example:
@EnableAsync
@Configuration
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.set... // your custom configs
        executor.initialize();
        return executor;
    }
    ...
}

kafka embedded : java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp

I use KafkaEmbedded in an integration test and I get a FileNotFoundException:
java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_141]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:162) ~[na:1.8.0_141]
at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:43) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:58) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1118) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.11.jar:na]
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) [scala-library-2.11.11.jar:na]
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.11.jar:na]
at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$1.apply$mcV$sp(ReplicaManager.scala:211) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57) [kafka_2.11-0.11.0.0.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_141]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
My tests pass successfully, but I get this error at the end of my build.
After many hours of research I found this:
Kafka's TestUtils.tempDirectory method is used to create the temporary directory for the embedded Kafka broker. It also registers a shutdown hook which deletes this directory when the JVM exits.
When the unit test finishes execution, it calls System.exit, which in turn executes all registered shutdown hooks.
If the Kafka broker is still running at the end of the unit test, it will attempt to write/read data in a directory that has been deleted, producing various FileNotFound exceptions.
My config class :
#Configuration
public class KafkaEmbeddedConfiguration {
private final KafkaEmbedded kafkaEmbedded;
public KafkaEmbeddedListenerConfigurationIT() throws Exception {
kafkaEmbedded = new KafkaEmbedded(1, true, "topic1");
kafkaEmbedded.before();
}
#Bean
public KafkaTemplate<String, Message> sender(ProtobufSerializer protobufSerializer,
KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry) throws Exception {
KafkaTemplate<String, Message> sender = KafkaTestUtils.newTemplate(kafkaEmbedded, new StringSerializer(),
protobufSerializer);
for (MessageListenerContainer listenerContainer :
registry.getListenerContainers()) {
ContainerTestUtils.waitForAssignment(listenerContainer,
kafkaEmbedded.getPartitionsPerTopic());
}
return sender;
}
Test class :
@RunWith(SpringRunner.class)
public class DeviceEnergyKafkaListenerIT {
    ...

    @Autowired
    private KafkaTemplate<String, Message> sender;

    @Test
    public void test() {
        ...
        sender.send(topic, msg);
        sender.flush();
    }
}
Any ideas how to resolve this, please?
With a @ClassRule broker, add an @AfterClass method...
@AfterClass
public static void tearDown() {
    embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
    embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
}
For a @Rule or bean, use an @After method.
final KafkaServer server =
        embeddedKafka.getKafkaServers().stream().findFirst().orElse(null);
if (server != null) {
    server.replicaManager().shutdown(false);
    final Field replicaManagerField = server.getClass().getDeclaredField("replicaManager");
    if (replicaManagerField != null) {
        replicaManagerField.setAccessible(true);
        replicaManagerField.set(server, null);
    }
}
embeddedKafka.after();
For a more detailed discussion you can refer to this thread.
Embedded kafka issue with multiple tests using the same context
The following solution provided by mhyeon-lee has worked for me:
import org.apache.kafka.common.utils.Exit;

class SomeTest {

    static {
        Exit.setHaltProcedure((statusCode, message) -> {
            if (statusCode != 1) {
                Runtime.getRuntime().halt(statusCode);
            }
        });
    }

    @Test
    void test1() {
    }

    @Test
    void test2() {
    }
}
When the JVM shutdown hook is running, the Kafka log file gets deleted, and Exit.halt(1) is called when another shutdown hook accesses the Kafka log file at the same time.
Since halt is called here with status 1, I only defend against 1.
https://github.com/a0x8o/kafka/blob/master/core/src/main/scala/kafka/log/LogManager.scala#L193
If you encounter a situation where the test fails with a different status value, you can add defense code.
An error log may still occur, but the test will not fail because the halt call is not propagated to Runtime.halt.
References:
https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-612875646
https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-613548108

How can I gracefully shut down a Spring Boot thread pool project which is running 24x7

I have created a Spring Boot thread pool project with threads that need to run 24x7 once spawned, but when I need to stop the app on the server for maintenance, it should shut down after completing its current task and not take up any new task.
My code for the same is:
Config class
@Configuration
public class ThreadConfig {

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executorPool = new ThreadPoolTaskExecutor();
        executorPool.setCorePoolSize(10);
        executorPool.setMaxPoolSize(20);
        executorPool.setQueueCapacity(10);
        executorPool.setWaitForTasksToCompleteOnShutdown(true);
        executorPool.setAwaitTerminationSeconds(60);
        executorPool.initialize();
        return executorPool;
    }
}
Runnable class
@Component
@Scope("prototype")
public class DataMigration implements Runnable {

    String name;
    private boolean run = true;

    public DataMigration(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        while (run) {
            System.out.println(Thread.currentThread().getName() + " Start Thread = " + name);
            processCommand();
            System.out.println(Thread.currentThread().getName() + " End Thread = " + name);
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Thread Is Interrupted");
                break;
            }
        }
    }

    private void processCommand() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void shutdown() {
        this.run = false;
    }
}
Main class:
@SpringBootApplication
public class DataMigrationPocApplication implements CommandLineRunner {

    @Autowired
    private ThreadPoolTaskExecutor taskExecutor;

    public static void main(String[] args) {
        SpringApplication.run(DataMigrationPocApplication.class, args);
    }

    @Override
    public void run(String... arg0) throws Exception {
        for (int i = 1; i <= 20; i++) {
            taskExecutor.execute(new DataMigration("Task " + i));
        }
        for (;;) {
            int count = taskExecutor.getActiveCount();
            System.out.println("Active Threads : " + count);
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (count == 0) {
                taskExecutor.shutdown();
                break;
            }
        }
        System.out.println("Finished all threads");
    }
}
I need help understanding how to stop my Spring Boot application so that all 20 threads (which run 24x7) finish their current iteration of the while loop and exit, instead of being killed mid-task.
I would propose a couple of changes in this code to resolve the problem.
1) Since in your POC processCommand calls Thread.sleep, when you shut down the executor and it interrupts the workers, the InterruptedException is caught but essentially ignored in your code. Catching InterruptedException also clears the thread's interrupt flag, so the subsequent if (Thread.currentThread().isInterrupted()) check returns false. A similar problem is outlined in the post below:
how does thread.interrupt() sets the flag?
The following code change should fix the problem:
private void processCommand() {
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        e.printStackTrace();
        shutdown();
    }
}
2) Also, because ThreadConfig::taskExecutor sets executorPool.setWaitForTasksToCompleteOnShutdown(true), Spring will call executor.shutdown instead of executor.shutdownNow. According to the javadoc of ExecutorService.shutdown, it
initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted.
So I would recommend setting
executorPool.setWaitForTasksToCompleteOnShutdown(false);
Other things to improve in this code: although DataMigration is annotated as a component, the instances of this class are created with new rather than by Spring. You should use a factory method, similar to ThreadConfig::taskExecutor, so that Spring instantiates DataMigration beans - for example, to let you inject other beans into DataMigration instances (see the sketch below).
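A minimal sketch of that idea, assuming an ObjectProvider is used to obtain the prototype-scoped DataMigration beans (constructor arguments are passed through getObject):
@Autowired
private ObjectProvider<DataMigration> dataMigrationProvider;

// in run(String... arg0): let Spring create each prototype bean so that
// its own dependencies can be injected, instead of calling new directly
for (int i = 1; i <= 20; i++) {
    taskExecutor.execute(dataMigrationProvider.getObject("Task " + i));
}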
In order to shut down the executor when running the jar file in a Linux environment, you can, for example, add the actuator module and enable the shutdown endpoint:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
in application.properties:
endpoints.shutdown.enabled=true
This will enable the JMX shutdown endpoint, and you can call shutdown on it.
If you want the current job cycle of the task to be finished, you should set
executorPool.setWaitForTasksToCompleteOnShutdown(true);
In order to connect to your JVM process remotely in a Linux environment, you have to specify an RMI registry port.
Here is a detailed article:
How to access Spring-boot JMX remotely
If you just need to connect to JMX from the local environment, you can run jconsole or command-line tools: Calling JMX MBean method from a shell script
Here is an example of using one of these tools, jmxterm:
$>run -d org.springframework.boot: -b org.springframework.boot:name=shutdownEndpoint,type=Endpoint shutdown
#calling operation shutdown of mbean org.springframework.boot:name=shutdownEndpoint,type=Endpoint with params []
#operation returns:
{
message = Shutting down, bye...;
}
