Hazelcast: register IExecutorService - Spring Boot

I am reading Hazelcast documentation (http://docs.hazelcast.org/docs/latest-development/manual/html/Distributed_Computing/Executor_Service/Implementing_a_Runnable_Task.html)
To manage an Executor in a cloud environment, Hazelcast retrieves the Executor like this:
public class MasterMember {
    public static void main( String[] args ) throws Exception {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hazelcastInstance.getExecutorService( "exec" );
        for ( int k = 1; k <= 1000; k++ ) {
            Thread.sleep( 1000 );
            System.out.println( "Producing echo task: " + k );
            executor.execute( new EchoTask( String.valueOf( k ) ) );
        }
        System.out.println( "EchoTaskMain finished!" );
    }
}
However, I could not find the part that registers the ExecutorService named exec. I would like to know how to register an Executor with a HazelcastInstance.
FYI, my project is based on Spring Boot, and it obtains the HazelcastInstance like below (in an ApplicationContextAware implementation):
public static HazelcastInstance getHazelcastIntance(){
return context.getBean(HazelcastInstance.class) ;
}
Summary: I would like to know how to register an Executor with a HazelcastInstance.
Thanks.
===============================================
Edit.
I have a TaskExecutor bean in my application context.
But I cannot create an IExecutorService from the Spring TaskExecutor bean.
IExecutorService executor = hazelcastInstance.getExecutorService("taskExecutor");
executor.execute(new ParallelDBSyncExecutor(executeId) {
    @Override
    public void action() throws Exception {
        this.dbSyncService.executeWithIteration(this.executeId, start, total);
    }
});
Can Hazelcast detect a Spring bean that is already registered?
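For context, a minimal sketch (assuming the Hazelcast 3.x API; the executor name "exec" and the pool size are illustrative) of how a distributed executor can be configured up front on the Config that creates the HazelcastInstance. getExecutorService("exec") then uses that configuration; without it, Hazelcast simply creates the executor lazily with default settings:

import com.hazelcast.config.Config;
import com.hazelcast.config.ExecutorConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastConfiguration {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        // Illustrative: pre-configure the distributed executor named "exec".
        // If no ExecutorConfig is registered, Hazelcast falls back to its defaults
        // the first time getExecutorService("exec") is called.
        config.addExecutorConfig(new ExecutorConfig("exec").setPoolSize(8));
        return Hazelcast.newHazelcastInstance(config);
    }
}

Note that IExecutorService is Hazelcast's own distributed executor, created and managed by Hazelcast by name; a Spring TaskExecutor bean is a separate, local construct and is not automatically exposed through getExecutorService(...).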

Related

How to create instance-specific message queues in a Spring Boot REST API

I have a number of microservices, each running in its own container in a load-balanced environment. I need each instance of these microservices to create a RabbitMQ queue when it starts up and delete it when it stops. I have currently defined the following property in my application properties file:
config_queue: config_${PID}
My message queue listener looks like this:
public class ConfigListener {

    Logger logger = LoggerFactory.getLogger(ConfigListener.class);

    // https://www.programcreek.com/java-api-examples/index.php?api=org.springframework.amqp.rabbit.annotation.RabbitListener
    @RabbitListener(bindings = @QueueBinding(
            value = @Queue(value = "${config_queue}",
                    autoDelete = "true"),
            exchange = @Exchange(value = AppConstants.TOPIC_CONFIGURATION,
                    type = ExchangeTypes.FANOUT)
    ))
    public void configChanged(String message){
        // ... application logic
    }
}
All this works great when I run the microservice. A queue with the prefix config and the process id gets created and is auto-deleted when I stop the service.
However, when I run this service and others in their individual Docker containers, all services have the same PID, and that is 1.
Does anybody have any idea how I can specify a queue name that is unique to that instance?
Thanks in advance for your help.
Use an AnonymousQueue instead:
@SpringBootApplication
public class So72030217Application {

    public static void main(String[] args) {
        SpringApplication.run(So72030217Application.class, args);
    }

    @RabbitListener(queues = "#{configQueue.name}")
    public void listen(String in) {
        System.out.println(in);
    }
}

@Configuration
class Config {

    @Bean
    FanoutExchange fanout() {
        return new FanoutExchange("config");
    }

    @Bean
    Queue configQueue() {
        return new AnonymousQueue(new Base64UrlNamingStrategy("config_"));
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(configQueue()).to(fanout());
    }
}
AnonymousQueues are auto-delete and use a Base64 encoded UUID in the name.
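As a quick illustration of that naming (a standalone sketch, not part of the answer above), the generated name is just the prefix followed by a URL-safe Base64-encoded UUID, so every container instance ends up with its own queue name:

import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Base64UrlNamingStrategy;
import org.springframework.amqp.core.Queue;

public class QueueNameDemo {
    public static void main(String[] args) {
        // Base64UrlNamingStrategy prepends the prefix to a Base64-encoded UUID.
        Queue queue = new AnonymousQueue(new Base64UrlNamingStrategy("config_"));
        System.out.println(queue.getName()); // e.g. config_SpEgJiuiQvqpyOM263O61A, unique per instance
    }
}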

Cannot run a few methods sequentially when Spring Boot starts

I have to run a few methods when the application starts, like the following:
@SpringBootApplication
public class Application implements CommandLineRunner {

    private final MonitoringService monitoringService;
    private final QrReaderServer qrReaderServer;

    @Override
    public void run(String... args) {
        monitoringService.launchMonitoring();
        qrReaderServer.launchServer();
    }
}
However, only the first one is executed! And the application is started:
... Started Application in 5.21 seconds (JVM running for 6.336)
... START_MONITORING for folder: D:\results
The second one is always skipped!
If I change the call order, only the second one will be executed.
I could not find any solution for launching both at startup - I tried @PostConstruct, ApplicationRunner, @EventListener(ApplicationReadyEvent.class)...
It looks like they are blocking each other somehow, despite the fact that both methods have a void return type.
Monitoring launch implementation:
@Override
public void launchMonitoring() {
    log.info("START_MONITORING for folder: {}", monitoringProperties.getFolder());
    try {
        WatchKey key;
        while ((key = watchService.take()) != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent.Kind<?> kind = event.kind();
                if (kind == ENTRY_CREATE) {
                    log.info("FILE_CREATED: {}", event.context());
                    // some delay for fully file upload
                    Thread.sleep(monitoringProperties.getFrequency());
                    String fullFileName = getFileName(event);
                    String fileName = FilenameUtils.removeExtension(fullFileName);
                    processResource(fullFileName, fileName);
                }
            }
            key.reset();
        }
    } catch (InterruptedException e) {
        log.error("interrupted exception for monitoring service", e);
    } catch (IOException e) {
        log.error("io exception while processing file", e);
    }
}
QR Reader startup (launches a TCP server with a Netty configuration):
@Override
public void launchServer() {
    try {
        ChannelFuture serverChannelFuture = serverBootstrap.bind(hostAddress).sync();
        log.info("Server is STARTED : port {}", hostAddress.getPort());
        serverChannel = serverChannelFuture.channel().closeFuture().sync().channel();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        shutdownQuietly();
    }
}
How to solve this issue?
Start launchMonitoring() asynchronously.
launchMonitoring() never returns: it blocks inside its watchService.take() loop, so the CommandLineRunner thread never reaches qrReaderServer.launchServer() (and with the calls reversed, launchServer() blocks on closeFuture().sync() instead).
The easiest way to fix this is to enable async support by adding @EnableAsync to your application
and then annotating launchMonitoring() with @Async.
Not sure if launchServer() should also be started asynchronously.
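A minimal sketch of what that could look like, reusing the names from the question (constructor injection is assumed here, since the original snippet does not show how the fields are populated):

@SpringBootApplication
@EnableAsync
public class Application implements CommandLineRunner {

    private final MonitoringService monitoringService;
    private final QrReaderServer qrReaderServer;

    public Application(MonitoringService monitoringService, QrReaderServer qrReaderServer) {
        this.monitoringService = monitoringService;
        this.qrReaderServer = qrReaderServer;
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    public void run(String... args) {
        monitoringService.launchMonitoring(); // proxied @Async call, returns immediately
        qrReaderServer.launchServer();        // now actually reached
    }
}

and on the monitoring implementation:

@Async
@Override
public void launchMonitoring() {
    // body unchanged from the question; it now runs on the async executor's thread
}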
EDIT: completed answer.
If you see the message "No task executor bean found for async processing: no bean of type TaskExecutor and no bean named 'taskExecutor' either",
note that by default Spring will create a SimpleAsyncTaskExecutor, but you can provide your own taskExecutor.
Example:
@EnableAsync
@Configuration
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);             // your custom configs (values here are just examples)
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
    // other AsyncConfigurer methods omitted
}

ThreadPoolTaskExecutor Getting Overwritten in Scheduled Class

I have the following ThreadPoolTaskExecutor that gets created with the expected core/max pool size configuration.
@Slf4j
@Configuration
public class ThreadPoolConfig {

    @Value("${corePoolSize}")
    private Integer corePoolSize;

    @Value("${queueCapacity}")
    private Integer queueCapacity;

    @Value("${maxPoolSize}")
    private Integer maxPoolSize;

    @Bean(name = "myThreadPoolTaskExecutor")
    public ThreadPoolTaskExecutor myThreadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setBeanName("myThreadPoolTaskExecutor");
        executor.setCorePoolSize(corePoolSize);
        executor.setQueueCapacity(queueCapacity);
        executor.setMaxPoolSize(maxPoolSize);
        executor.setThreadNamePrefix("my_thread_");
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.initialize();
        log.debug("threadPoolTaskExecutor CorePoolSize is : " + executor.getCorePoolSize());
        log.debug("threadPoolTaskExecutor MaxPoolSize is : " + executor.getMaxPoolSize());
        return executor;
    }
}
When my @Scheduled method runs, the max pool size is set to the DEFAULT value of 2147483647, and I don't understand why it's not using the ThreadPoolTaskExecutor configured above:
@EnableScheduling
public class SchedulingConfig {
}

@Component
public class Scheduler {

    @Autowired
    @Qualifier("myThreadPoolTaskExecutor")
    private ThreadPoolTaskExecutor threadPoolTaskExecutor;

    @Scheduled(fixedRateString = "${fixedRate}")
    public void invokeScheduledThread() {
        while (threadPoolTaskExecutor.getActiveCount() <= threadPoolTaskExecutor.getMaxPoolSize()) {
            log.debug("Active Thread Pool count is : " + threadPoolTaskExecutor.getActiveCount() + ", Max Thread Pool count is : " + threadPoolTaskExecutor.getMaxPoolSize() + " on the scheduled Thread : " + Thread.currentThread().getName());
            // call a service to retrieve some items to process
            threadPoolTaskExecutor.execute(/* some object that implements Runnable */);
        }
    }
}
Output:
Active Thread Pool count is : 0, Max Thread Pool count is : 2147483647 on the scheduled Thread : task-scheduler-1
I put a breakpoint into the initialize() method of org.springframework.scheduling.concurrent.ExecutorConfigurationSupport,
and it looks like the method is getting invoked 3 times: twice with a thread name prefix of "my_thread_",
which is expected, and finally once for a bean called "taskScheduler" with a thread name prefix of "task-scheduler-".
Does anyone know why I can't use my own ThreadPoolTaskExecutor within the Scheduler class?
I wanted to use the default @Scheduled scheduler to run on a single thread every X seconds and create X number of threads using my own ThreadPoolTaskExecutor.
Use ThreadPoolTaskScheduler instead of ThreadPoolTaskExecutor.
For example:
@Configuration
public class SpringSchedulerConfig {

    private static final int THREAD_POOL_SIZE = 5;

    @Bean
    public ThreadPoolTaskScheduler getScheduler() {
        ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
        // we want every job in a separate Thread
        threadPoolTaskScheduler.setPoolSize(THREAD_POOL_SIZE);
        return threadPoolTaskScheduler;
    }
}
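If you also want to hand this scheduler to @Scheduled processing explicitly, rather than relying on Spring detecting a TaskScheduler bean by type or name, one possible wiring (shown here only as a sketch, not part of the original answer) is to implement SchedulingConfigurer:

@Configuration
@EnableScheduling
public class SpringSchedulerConfig implements SchedulingConfigurer {

    private static final int THREAD_POOL_SIZE = 5;

    @Bean
    public ThreadPoolTaskScheduler getScheduler() {
        ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
        threadPoolTaskScheduler.setPoolSize(THREAD_POOL_SIZE);
        return threadPoolTaskScheduler;
    }

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // hand the scheduler to @Scheduled processing explicitly
        taskRegistrar.setScheduler(getScheduler());
    }
}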

How can I gracefully shut down a Spring Boot thread pool project that runs 24x7

I have created a Spring Boot thread pool project whose threads need to run 24x7 once spawned, but when I need to stop the app on the server for some maintenance, it should shut down after completing its current task and not take up any new tasks.
My code is as follows:
Config class
@Configuration
public class ThreadConfig {

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executorPool = new ThreadPoolTaskExecutor();
        executorPool.setCorePoolSize(10);
        executorPool.setMaxPoolSize(20);
        executorPool.setQueueCapacity(10);
        executorPool.setWaitForTasksToCompleteOnShutdown(true);
        executorPool.setAwaitTerminationSeconds(60);
        executorPool.initialize();
        return executorPool;
    }
}
Runnable class
@Component
@Scope("prototype")
public class DataMigration implements Runnable {

    String name;
    private boolean run = true;

    public DataMigration(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        while (run) {
            System.out.println(Thread.currentThread().getName() + " Start Thread = " + name);
            processCommand();
            System.out.println(Thread.currentThread().getName() + " End Thread = " + name);
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Thread Is Interrupted");
                break;
            }
        }
    }

    private void processCommand() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void shutdown() {
        this.run = false;
    }
}
Main class:
@SpringBootApplication
public class DataMigrationPocApplication implements CommandLineRunner {

    @Autowired
    private ThreadPoolTaskExecutor taskExecutor;

    public static void main(String[] args) {
        SpringApplication.run(DataMigrationPocApplication.class, args);
    }

    @Override
    public void run(String... arg0) throws Exception {
        for (int i = 1; i <= 20; i++) {
            taskExecutor.execute(new DataMigration("Task " + i));
        }
        for (;;) {
            int count = taskExecutor.getActiveCount();
            System.out.println("Active Threads : " + count);
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (count == 0) {
                taskExecutor.shutdown();
                break;
            }
        }
        System.out.println("Finished all threads");
    }
}
I need help understanding how, when I stop my Spring Boot application, it can make all 20 running threads (which otherwise run 24x7) finish their current iteration of the while loop and then exit.
I would propose a couple of changes in this code to resolve the problem.
1) Since in your POC processCommand calls Thread.sleep, when you shut down the executor and it interrupts the workers, an InterruptedException is thrown but is effectively ignored in your code. Catching it also clears the interrupted flag, so the subsequent if (Thread.currentThread().isInterrupted()) check returns false. A similar problem is outlined in the post below:
how does thread.interrupt() sets the flag?
The following code change should fix the problem:
private void processCommand() {
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        e.printStackTrace();
        shutdown();
    }
}
2) Also, because ThreadConfig::taskExecutor sets executorPool.setWaitForTasksToCompleteOnShutdown(true), Spring will call executor.shutdown instead of executor.shutdownNow. According to the javadoc of ExecutorService.shutdown:
Initiates an orderly shutdown in which previously submitted tasks are
executed, but no new tasks will be accepted.
So I would recommend setting
executorPool.setWaitForTasksToCompleteOnShutdown(false);
Other things to improve in this code: although DataMigration is annotated as a component, the instances of this class are created not by Spring but with new. You should try using a factory method, similar to ThreadConfig::taskExecutor, so that Spring creates the DataMigration instances, for example to allow other beans to be injected into them.
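A rough sketch of such a factory method (illustrative only, not part of the original answer; the bean name and the way arguments are passed are assumptions):

@Configuration
public class ThreadConfig {

    // Prototype-scoped factory method: Spring creates each DataMigration,
    // so other beans could be injected into it as well.
    // (If you go this route, drop @Component/@Scope from DataMigration itself.)
    @Bean
    @Scope("prototype")
    public DataMigration dataMigration(String name) {
        return new DataMigration(name);
    }
}

The main class could then obtain instances with something like taskExecutor.execute(context.getBean(DataMigration.class, "Task " + i)); so that Spring, not new, creates them.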
In order to shut down the executor when running the jar file in a Linux environment, you can, for example, add the Actuator module and enable the shutdown endpoint:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
in application.properties:
endpoints.shutdown.enabled=true
This will enable the JMX shutdown endpoint, and you can call shutdown on it.
If you want the current job cycle of the task to be finished, you should set
executorPool.setWaitForTasksToCompleteOnShutdown(true);
In order to connect to your JVM process remotely in a Linux environment, you have to specify an RMI registry port.
Here is a detailed article:
How to access Spring-boot JMX remotely
If you just need to connect to JMX from the local environment, you can run jconsole or command-line tools: Calling JMX MBean method from a shell script
Here is an example of using one of these tools, jmxterm:
$>run -d org.springframework.boot: -b org.springframework.boot:name=shutdownEndpoint,type=Endpoint shutdown
#calling operation shutdown of mbean org.springframework.boot:name=shutdownEndpoint,type=Endpoint with params []
#operation returns:
{
message = Shutting down, bye...;
}

Spring batch : Assemble a job rather than configuring it (Extensible job configuration)

Background
I am working on designing a file reading layer that can read delimited files and load it in a List. I have decided to use Spring Batch because it provides a lot of scalability options which I can leverage for different sets of files depending on their size.
The requirement
I want to design a generic Job API that can be used to read any delimited file.
There should be a single Job structure that should be used for parsing every delimited file. For example, if the system needs to read 5 files, there will be 5 jobs (one for each file). The only way the 5 jobs will be different from each other is that they will use a different FieldSetMapper, column name, directory path and additional scaling parameters such as commit-interval and throttle-limit.
The user of this API should not need to configure a Spring Batch job, step, chunking, partitioning, etc. on his own when a new file type is introduced in the system.
All that the user needs to do is to provide the FieldSetMapper to be used by the job, along with the commit-interval, throttle-limit and the directory where each type of file will be placed.
There will be one predefined directory per file type. Each directory can contain multiple files of the same type and format. A MultiResourcePartitioner will be used to look inside a directory. The number of partitions = the number of files in the directory.
My requirement is to build a Spring Batch infrastructure that gives me a unique job I can launch once I have the bits and pieces that will make up the job.
My solution :
I created an abstract configuration class that will be extended by concrete configuration classes (There will be 1 concrete class per file to be read).
@Configuration
@EnableBatchProcessing
public abstract class AbstractFileLoader<T> {

    private static final String FILE_PATTERN = "*.dat";

    @Autowired
    JobBuilderFactory jobs;

    @Autowired
    ResourcePatternResolver resourcePatternResolver;

    public final Job createJob(Step s1, JobExecutionListener listener) {
        return jobs.get(this.getClass().getSimpleName())
                .incrementer(new RunIdIncrementer()).listener(listener)
                .start(s1).build();
    }

    public abstract Job loaderJob(Step s1, JobExecutionListener listener);

    public abstract FieldSetMapper<T> getFieldSetMapper();

    public abstract String getFilesPath();

    public abstract String[] getColumnNames();

    public abstract int getChunkSize();

    public abstract int getThrottleLimit();

    @Bean
    @StepScope
    @Value("#{stepExecutionContext['fileName']}")
    public FlatFileItemReader<T> reader(String file) {
        FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
        String path = file.substring(file.indexOf(":") + 1, file.length());
        FileSystemResource resource = new FileSystemResource(path);
        reader.setResource(resource);
        DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
        lineMapper.setFieldSetMapper(getFieldSetMapper());
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
        tokenizer.setNames(getColumnNames());
        lineMapper.setLineTokenizer(tokenizer);
        reader.setLineMapper(lineMapper);
        reader.setLinesToSkip(1);
        return reader;
    }

    @Bean
    public ItemProcessor<T, T> processor() {
        // TODO add transformations here
        return null;
    }

    @Bean
    @JobScope
    public ListItemWriter<T> writer() {
        ListItemWriter<T> writer = new ListItemWriter<T>();
        return writer;
    }

    @Bean
    @JobScope
    public Step readStep(StepBuilderFactory stepBuilderFactory,
            ItemReader<T> reader, ItemWriter<T> writer,
            ItemProcessor<T, T> processor, TaskExecutor taskExecutor) {

        final Step readerStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:slave")
                .<T, T> chunk(getChunkSize()).reader(reader)
                .processor(processor).writer(writer).taskExecutor(taskExecutor)
                .throttleLimit(getThrottleLimit()).build();

        final Step partitionedStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:master")
                .partitioner(readerStep)
                .partitioner(
                        this.getClass().getSimpleName() + " ReadStep:slave",
                        partitioner()).taskExecutor(taskExecutor).build();

        return partitionedStep;
    }

    /*
     * @Bean public TaskExecutor taskExecutor() { return new
     * SimpleAsyncTaskExecutor(); }
     */

    @Bean
    @JobScope
    public Partitioner partitioner() {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        Resource[] resources;
        try {
            resources = resourcePatternResolver.getResources("file:"
                    + getFilesPath() + FILE_PATTERN);
        } catch (IOException e) {
            throw new RuntimeException(
                    "I/O problems when resolving the input file pattern.", e);
        }
        partitioner.setResources(resources);
        return partitioner;
    }

    @Bean
    @JobScope
    public JobExecutionListener listener(ListItemWriter<T> writer) {
        return new JobCompletionNotificationListener<T>(writer);
    }

    /*
     * Use this if you want the writer to have job scope (JIRA BATCH-2269). Also
     * change the return type of writer to ListItemWriter for this to work.
     */
    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor() {
            @Override
            protected void doExecute(final Runnable task) {
                // gets the jobExecution of the configuration thread
                final JobExecution jobExecution = JobSynchronizationManager
                        .getContext().getJobExecution();
                super.doExecute(new Runnable() {
                    public void run() {
                        JobSynchronizationManager.register(jobExecution);
                        try {
                            task.run();
                        } finally {
                            JobSynchronizationManager.close();
                        }
                    }
                });
            }
        };
    }
}
Let's say I have to read Invoice data for the sake of discussion. I can therefore extend the above class for creating an InvoiceLoader :
@Configuration
public class InvoiceLoader extends AbstractFileLoader<Invoice> {

    private class InvoiceFieldSetMapper implements FieldSetMapper<Invoice> {
        public Invoice mapFieldSet(FieldSet f) {
            Invoice invoice = new Invoice();
            invoice.setNo(f.readString("INVOICE_NO"));
            return invoice;
        }
    }

    @Override
    public FieldSetMapper<Invoice> getFieldSetMapper() {
        return new InvoiceFieldSetMapper();
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/invoices/partitions/";
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "INVOICE_NO", "DATE" };
    }

    @Override
    @Bean(name = "invoiceJob")
    public Job loaderJob(Step s1,
            JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 25254;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}
Let's say I have one more class called InventoryLoader that extends AbstractFileLoader.
On application startup, I can load these two annotation configurations as follows:
AbstractApplicationContext context1 = new AnnotationConfigApplicationContext(InvoiceLoader.class, InventoryLoader.class);
Somewhere else in my application two different threads can launch the jobs as follows :
Thread 1 :
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("invoiceJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
Thread 2 :
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("inventoryJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
The advantage of this approach is that every time there is a new file to be read, all that the developer/user has to do is subclass AbstractFileLoader and implement the required abstract methods, without the need to get into the details of how to assemble the job.
The questions :
I am new to Spring Batch, so I may have overlooked some of the not-so-obvious issues with this approach, such as shared internal objects in Spring Batch that may cause two jobs running together to fail, or obvious issues such as the scoping of the beans.
Is there a better way to achieve my objective?
The fileName attribute of the @Value("#{stepExecutionContext['fileName']}") is always being assigned the value I:/CK/invoices/partitions/, which is the value returned by the getFilesPath method in InvoiceLoader, even though the getFilesPath method in InventoryLoader returns a different value.
One option is passing them as job parameters. For instance:
@Bean
Job job() {
    jobs.get("myJob").start(step1(null)).build()
}

@Bean
@JobScope
Step step1(@Value('#{jobParameters["commitInterval"]}') commitInterval) {
    steps.get('step1')
            .chunk((int) commitInterval)
            .reader(new IterableItemReader(iterable: [1, 2, 3, 4], name: 'foo'))
            .writer(writer(null))
            .build()
}

@Bean
@JobScope
ItemWriter writer(@Value('#{jobParameters["writerClass"]}') writerClass) {
    applicationContext.classLoader.loadClass(writerClass).newInstance()
}
With MyWriter:
class MyWriter implements ItemWriter<Integer> {
    @Override
    void write(List<? extends Integer> items) throws Exception {
        println "Write $items"
    }
}
Then executed with:
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([
commitInterval: new JobParameter(3),
writerClass: new JobParameter('MyWriter'), ]))
Output is:
INFO: Executing step: [step1]
Write [1, 2, 3]
Write [4]
Feb 24, 2016 2:30:22 PM org.springframework.batch.core.launch.support.SimpleJobLauncher$1 run
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{commitInterval=3, writerClass=MyWriter}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 0
#1 step1 COMPLETED
Full example here.
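For readers who prefer plain Java over the Groovy shown above, roughly the same job-parameters approach might look like the following sketch (ListItemReader stands in for the answer's IterableItemReader, the writer is inlined as a lambda, and the class and bean names are illustrative):

@Configuration
@EnableBatchProcessing
public class JobParamsConfig {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Job job() {
        return jobs.get("myJob").start(step1(null)).build();
    }

    @Bean
    @JobScope
    public Step step1(@Value("#{jobParameters['commitInterval']}") Long commitInterval) {
        // commitInterval arrives as a job parameter, so each launch can use a different chunk size
        return steps.get("step1")
                .<Integer, Integer>chunk(commitInterval.intValue())
                .reader(new ListItemReader<>(Arrays.asList(1, 2, 3, 4)))
                .writer(items -> System.out.println("Write " + items))
                .build();
    }
}

It could then be launched with parameters built via new JobParametersBuilder().addLong("commitInterval", 3L).toJobParameters().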
