Stop the task in Spring Integration framework

I would like to stop a KeepAliveReceiver task after a given event. I tested the following solutions and none of them works: 1) sending keepAliveReceiver.stop() to the control bus, 2) implementing Lifecycle and calling stop(), 3) stopping the scheduler. Any ideas how I can stop the task from within the running task itself?
@MessageEndpoint
public class KeepAliveReceiver implements Runnable, Lifecycle {

    private int limit;

    @Autowired
    private ControlBusGateway controlGateway; // sends messages to the control channel

    @Autowired
    private ThreadPoolTaskScheduler myScheduler;

    @Override
    public void run() {
        ...
        if ( event ) {
            LOGGER.debug( "FAILOVER! Starting messageReceiveRouter. ");
            controlGateway.send( new GenericMessage<String>( "@keepAliveReceiver.stop()" ) );
            // not allowed
            myScheduler.shutdown();
            // not working, the scheduler keeps restarting the keepAliveReceiver
            this.stop();
            // not working
        }
    }

    @Override
    public void stop() {
        LOGGER.debug( "STOPPED!" );
    }
    // start() and isRunning() from Lifecycle omitted here
}
and the XML definition of the scheduler:
<task:scheduler id="myScheduler" pool-size="10" />
<task:scheduled-tasks>
    <task:scheduled ref="keepAliveReceiver" method="run" fixed-rate="500" />
</task:scheduled-tasks>

Send a Message with an empty command to the controlGateway ;-)
'Kill' your <control-bus> and change it to:
<outbound-channel-adapter channel="stopSchedulerChannel" expression="@myScheduler.shutdown()" />
And add:
<channel id="stopSchedulerChannel">
    <dispatcher task-executor="executor"/>
</channel>
And configure an appropriate executor bean.
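For example, a minimal definition would be (a sketch; the single-thread pool size is an assumption, any executor that moves the shutdown off the scheduler's own thread will do):

<task:executor id="executor" pool-size="1" />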
Your problem stems from the wish to stop the task from within itself. On top of that, <control-bus> allows operations only on SmartLifecycle implementors.
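A different way to achieve this (my sketch, not from the answer above; failoverDetected() is a hypothetical stand-in for the event check) is to schedule the task programmatically and keep the resulting ScheduledFuture, so the task can cancel its own schedule without shutting down the whole scheduler:

import java.util.concurrent.ScheduledFuture;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@MessageEndpoint
public class KeepAliveReceiver implements Runnable {

    @Autowired
    private ThreadPoolTaskScheduler myScheduler;

    private volatile ScheduledFuture<?> mySchedule;

    @PostConstruct
    public void start() {
        // replaces the <task:scheduled-tasks> XML entry
        mySchedule = myScheduler.scheduleAtFixedRate(this, 500);
    }

    @Override
    public void run() {
        if (failoverDetected()) {
            mySchedule.cancel(false); // stops future runs; the scheduler itself stays alive
        }
    }

    private boolean failoverDetected() {
        return false; // hypothetical event check; replace with the real condition
    }
}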

Related

How to restrict multiple instances of spring batch job running?

My Spring Batch job is triggered by a REST endpoint. I am looking for a solution to run only one instance of a job at a time.
You can implement a listener that overrides the beforeJob method, which runs before the job starts and checks whether another instance of the same job is running. If another instance is already running, it stops the current run.
@Component
public class MultipleCheckListener implements JobExecutionListener {

    @Autowired
    private JobExplorer explorer;

    @Override
    public void beforeJob(JobExecution jobExecution) {
        String jobName = jobExecution.getJobInstance().getJobName();
        Set<JobExecution> executions = explorer.findRunningJobExecutions(jobName);
        if (executions.size() > 1) {
            jobExecution.stop();
        }
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // no-op; required by the JobExecutionListener interface
    }
}
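For completeness, a sketch of registering the listener on a job via Java config (JobConfig, restrictedJob, and the step parameter are hypothetical names; in XML you would use the job's <batch:listeners> element instead):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JobConfig {

    @Bean
    public Job restrictedJob(JobBuilderFactory jobs, Step step, MultipleCheckListener listener) {
        return jobs.get("restrictedJob")
                .listener(listener) // beforeJob() will stop duplicate instances
                .start(step)
                .build();
    }
}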

Spring-batch exiting with Exit Status : COMPLETED before actual job is finished?

In my Spring Batch application I have written a CustomItemWriter which internally writes items to DynamoDB using DynamoDBAsyncClient; this client returns a Future object. I have an input file with millions of records. Since the CustomItemWriter returns the Future objects immediately, my batch job exits within 5 seconds with status COMPLETED, but it actually takes 3-4 minutes to write all the items to the DB. I want the batch job to finish only after all items have been written to the database. How can I do that?
The job is defined as below:
<bean id="report" class="com.solution.model.Report" scope="prototype" />

<batch:job id="job" restartable="true">
    <batch:step id="step1">
        <batch:tasklet>
            <batch:chunk reader="cvsFileItemReader" processor="filterReportProcessor"
                         writer="customItemWriter" commit-interval="20" />
        </batch:tasklet>
    </batch:step>
</batch:job>

<bean id="customItemWriter" class="com.solution.writer.CustomeWriter" />
The CustomeWriter class is defined as below:
public class CustomeWriter implements ItemWriter<Report> {

    public void write(List<? extends Report> items) throws Exception {
        List<Future<PutItemResult>> list = new LinkedList<>();
        AmazonDynamoDBAsyncClient client = new AmazonDynamoDBAsyncClient();
        for (Report report : items) {
            PutItemRequest req = new PutItemRequest();
            req.setTableName("MyTable");
            req.setReturnValues(ReturnValue.ALL_OLD);
            req.addItemEntry("customerId", new AttributeValue(report.getCustomeId()));
            Future<PutItemResult> res = client.putItemAsync(req);
            list.add(res);
        }
    }
}
The main class contains:
JobExecution execution = jobLauncher.run(job, new JobParameters());
System.out.println("Exit Status : " + execution.getStatus());
Since the ItemWriter returns Future objects, it does not wait for the operation to complete. And since, from the main class's point of view, all items have been submitted for writing, the batch status shows COMPLETED and the job terminates.
I want this job to terminate only after the actual writes to DynamoDB have been performed.
Is there some other step we could use to wait on this, or is some listener available?
Here is one approach. Since ItemWriter::write doesn't return anything, you can make use of the listener feature.
@Component
@JobScope
public class YourWriteListener implements ItemWriteListener<WhatEverYourTypeIs> {

    @Value("#{jobExecution.executionContext}")
    private ExecutionContext executionContext;

    @Override
    public void afterWrite(final List<? extends WhatEverYourTypeIs> paramList) {
        Future<?> future = (Future<?>) this.executionContext.get("FutureKey");
        // wait till the write is done using the future object, e.g. future.get()
    }

    @Override
    public void beforeWrite(final List<? extends WhatEverYourTypeIs> paramList) {
    }

    @Override
    public void onWriteError(final Exception paramException, final List<? extends WhatEverYourTypeIs> paramList) {
    }
}
In your writer class, everything remains the same except adding the future object to the ExecutionContext.
public class YourItemWriter implements ItemWriter<WhatEverYourTypeIs> {

    @Value("#{jobExecution.executionContext}")
    private ExecutionContext executionContext;

    @Override
    public void write(final List<? extends WhatEverYourTypeIs> youritems) throws Exception {
        // write to DynamoDB and get the Future object
        executionContext.put("FutureKey", future);
    }
}
And you can register the listener in your configuration. Here it is in Java config; you would do the same in your XML:
@Bean
public Step initStep() {
    return this.stepBuilders.get("someStepName").<YourTypeX, YourTypeY>chunk(10)
            .reader(yourReader).processor(yourProcessor)
            .writer(yourWriter).listener(yourWriteListener)
            .build();
}
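Alternatively (my sketch, not part of the answer above): block on each Future inside write() itself, so a chunk only completes once DynamoDB has acknowledged its writes and any failure surfaces as a chunk error. BlockingDynamoWriter is a hypothetical name:

import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.Future;

import org.springframework.batch.item.ItemWriter;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsyncClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import com.amazonaws.services.dynamodbv2.model.PutItemResult;

public class BlockingDynamoWriter implements ItemWriter<Report> {

    private final AmazonDynamoDBAsyncClient client = new AmazonDynamoDBAsyncClient();

    @Override
    public void write(List<? extends Report> items) throws Exception {
        List<Future<PutItemResult>> futures = new LinkedList<>();
        for (Report report : items) {
            PutItemRequest req = new PutItemRequest()
                    .withTableName("MyTable")
                    .addItemEntry("customerId", new AttributeValue(report.getCustomeId()));
            futures.add(client.putItemAsync(req));
        }
        for (Future<PutItemResult> f : futures) {
            f.get(); // blocks until the put completes; throws if it failed
        }
    }
}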

What are the other ways to specify code for Insight to analyze?

Spring Insight documentation states:
A trace represents a thread of execution. It is usually started by an HTTP request but can also be started by a background job
My application's architecture is based on queues running in the background, which I'd like to instrument as well. However, I can't figure out how to get Spring Insight to instrument calls initiated by a queued message, i.e. I'd like the trace to start after a message is read off the queue.
How can I ensure Insight instruments these background jobs?
I ended up creating an aspect that targets all of the command handlers. It extends AbstractOperationCollectionAspect, implements the collectionPoint pointcut, and passes the Handler in as an argument for use in its createOperation method. I.e.:
public aspect CommandHandlerOperationCollectionAspect extends AbstractOperationCollectionAspect
{
    public pointcut collectionPoint():
        execution(* com.xtrac.common.core.handler.ThreadedHandler.HandlerRunnable.executeActorHandler(com.xtrac.common.core.handler.Handler,java.lang.Object));

    protected Operation createOperation(JoinPoint jp)
    {
        Object[] args = jp.getArgs();
        com.xtrac.common.core.handler.Handler handler = (Handler) args[0];
        Operation operation = new Operation()
            .type(XTRACOperationType.COMMAND_HANDLER)
            .label(handler.getClass().getSimpleName())
            .sourceCodeLocation(getSourceCodeLocation(jp));
        return operation;
    }

    @Override
    public String getPluginName()
    {
        return HandlerPluginRuntimeDescriptor.PLUGIN_NAME;
    }

    @Override
    public boolean isMetricsGenerator()
    {
        return true;
    }
}
I also implemented an AbstractSingleTypeEndpointAnalyzer to fill out the analyzer:
public class HandlerEndPointAnalyzer extends AbstractSingleTypeEndpointAnalyzer
{
    private static final HandlerEndPointAnalyzer INSTANCE = new HandlerEndPointAnalyzer();

    private HandlerEndPointAnalyzer() {
        super(XTRACOperationType.COMMAND_HANDLER);
    }

    public static final HandlerEndPointAnalyzer getInstance() {
        return INSTANCE;
    }

    @Override
    protected EndPointAnalysis makeEndPoint(Frame handlerFrame, int depth) {
        Operation operation = handlerFrame.getOperation();
        String resourceLabel = operation.getLabel();
        String exampleRequest = EndPointAnalysis.getHttpExampleRequest(handlerFrame);
        return new EndPointAnalysis(EndPointName.valueOf(resourceLabel),
                resourceLabel,
                exampleRequest,
                getOperationScore(operation, depth),
                operation);
    }
}
being sure to add it as a descriptor:
public class HandlerPluginRuntimeDescriptor extends PluginRuntimeDescriptor {
    public static final String PLUGIN_NAME = "handler";
    private static final HandlerPluginRuntimeDescriptor INSTANCE = new HandlerPluginRuntimeDescriptor();
    private static final List<? extends EndPointAnalyzer> epAnalyzers =
        ArrayUtil.asUnmodifiableList(HandlerEndPointAnalyzer.getInstance());

    private HandlerPluginRuntimeDescriptor() {
        super();
    }

    public static final HandlerPluginRuntimeDescriptor getInstance() {
        return INSTANCE;
    }

    @Override
    public Collection<? extends EndPointAnalyzer> getEndPointAnalyzers() {
        return epAnalyzers;
    }

    @Override
    public String getPluginName() {
        return PLUGIN_NAME;
    }
}
All of this is registered in the plugin's Spring XML file:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:insight="http://www.springframework.org/schema/insight-idk"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/insight-idk http://www.springframework.org/schema/insight-idk/insight-idk-1.0.xsd">

    <insight:plugin name="handler" version="${project.version}" publisher="XTRAC Solutions LLC" />

    <insight:operation-group group="XTRAC Handlers" operation="command_handler_operation" />
    <insight:operation-group group="XTRAC Handlers" operation="event_handler_operation" />
    <insight:operation-group group="XTRAC Classic" operation="xtrac_workflow_operation" />

    <insight:operation-view operation="command_handler_operation"
        template="com/xtrac/insight/command_handler_operation.ftl" />
    <insight:operation-view operation="event_handler_operation"
        template="com/xtrac/insight/event_handler_operation.ftl" />
    <insight:operation-view operation="xtrac_workflow_operation"
        template="com/xtrac/insight/xtrac_workflow_operation.ftl" />

    <bean id="handlerPluginEndPointAnalyzer"
        class="com.xtrac.insight.HandlerEndPointAnalyzer"
        factory-method="getInstance"
        lazy-init="true" />

    <bean id="handlerPluginRuntimeDescriptor"
        class="com.xtrac.insight.HandlerPluginRuntimeDescriptor"
        factory-method="getInstance"
        lazy-init="true" />
</beans>
along with some FreeMarker (.ftl) templates.
I also created a MethodOperationCollectionAspect to collect some of the web service calls that occur in these handlers. This gives a nice display that tells me a lot about what is going on during a handler operation and how much time it takes.
This also sets up a framework for monitoring the health of the application, provided I set baseline thresholds for the named handlers. That is very useful because I can then tell whether the application is healthy; otherwise, endpoints simply default to <200 ms as healthy.

Refreshing Spring context when JMS message delivered

I'd like to refresh my application context when the system receives a JMS message. In order to do this, I set up a Spring Integration jms:message-driven-channel-adapter which forwards the message to a service activator implementing ApplicationContextAware. This activator (the ConfigurationReloader class) invokes the ConfigurableApplicationContext#refresh() method.
Below is a sample code snippet:
<jms:message-driven-channel-adapter id="jmsDriverConfigurationAdapter"
destination="configurationApplyQueue" channel="jmsConfigurationInboundChannel" />
<channel id="jmsConfigurationInboundChannel"/>
<service-activator input-channel="jmsConfigurationInboundChannel" ref="configurationReloader" method="refresh"/>
And my activator:
public final class ConfigurationReloader implements ApplicationContextAware {

    private ConfigurableApplicationContext applicationContext;

    public void refresh() {
        this.applicationContext.refresh();
    }

    @Override
    public void setApplicationContext(
            final ApplicationContext applicationContext) throws BeansException {
        if (applicationContext instanceof ConfigurableApplicationContext) {
            this.applicationContext =
                (ConfigurableApplicationContext) applicationContext;
        }
    }
}
When such a message is delivered, the context starts its shutdown but gets stuck on the DefaultMessageListenerContainer bean shutdown:
2011-11-14 15:42:52,980 [org.springframework.jms.listener.DefaultMessageListenerContainer#0-1] DEBUG org.springframework.jms.listener.DefaultMessageListenerContainer - Shutting down JMS listener container
2011-11-14 15:42:52,980 [org.springframework.jms.listener.DefaultMessageListenerContainer#0-1] DEBUG org.springframework.jms.listener.DefaultMessageListenerContainer - Waiting for shutdown of message listener invokers
2011-11-14 15:42:55,104 [org.springframework.jms.listener.DefaultMessageListenerContainer#0-1] DEBUG org.springframework.jms.listener.DefaultMessageListenerContainer - Still waiting for shutdown of 1 message listener invokers
Invoking this operation over JMS is crucial for me since the new configuration parameters are delivered along with the message.
It is a standard Spring MVC application with a DispatcherServlet at the front, based on the latest Spring Core and Spring Integration. I am also sure that it is a JMS-related issue, because invoking the ConfigurationReloader through a controller works fine.
As I've debugged it, it gets stuck after the invocation at DefaultMessageListenerContainer line 538 (the wait() call on lifecycleMonitor):
/**
 * Destroy the registered JMS Sessions and associated MessageConsumers.
 */
protected void doShutdown() throws JMSException {
    logger.debug("Waiting for shutdown of message listener invokers");
    try {
        synchronized (this.lifecycleMonitor) {
            while (this.activeInvokerCount > 0) {
                if (logger.isDebugEnabled()) {
                    logger.debug("Still waiting for shutdown of " + this.activeInvokerCount +
                            " message listener invokers");
                }
                this.lifecycleMonitor.wait(); // <--- line 538
            }
        }
    }
    catch (InterruptedException ex) {
        // Re-interrupt current thread, to allow other threads to react.
        Thread.currentThread().interrupt();
    }
}
...there is nobody to call notify/notifyAll on the monitor, so maybe it's some kind of bug?
Thank you for any hints!
Can you please explain why you need such a sophisticated architecture? Reloading the application context when a JMS message is received? Sounds crazy (or maybe ingenious?).
Nevertheless, I am not 100% sure, but the information you provided is pretty clear: you are trying to shut down the application context while consuming a JMS message. But since the consumer is Spring-managed, the context cannot be destroyed because it waits for all beans to finish - including your ConfigurationReloader, which is required by the Spring Integration message consumer. And the ConfigurationReloader cannot finish because it waits for the context to be destroyed (refresh() is blocking).
Simply put - you have introduced a cyclic dependency and a deadlock.
The solution is simple - delay the context refresh so that it happens after the JMS message consumption. The easiest way would be:
public void refresh() {
    Thread destroyThread = new Thread() {
        @Override
        public void run() {
            // a plain this.applicationContext would resolve against the Thread,
            // so reference the enclosing instance explicitly
            ConfigurationReloader.this.applicationContext.refresh();
        }
    };
    destroyThread.start();
}
Not pretty but I'm almost sure this will work.
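A slightly tidier variant of the same idea (a sketch, not from the answer itself) hands the refresh off to a Spring TaskExecutor inside ConfigurationReloader, so the JMS invoker thread returns immediately and the refresh runs asynchronously:

import org.springframework.core.task.SimpleAsyncTaskExecutor;

public void refresh() {
    // fire-and-forget: the JMS listener thread is released before the refresh starts
    new SimpleAsyncTaskExecutor("ctx-refresh").execute(new Runnable() {
        @Override
        public void run() {
            applicationContext.refresh();
        }
    });
}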

Spring integration: difficulty with transaction between 2 activators

I have this use case.
First chain:
<int:chain input-channel="inserimentoCanaleActivate" output-channel="inserimentoCanalePreRouting">
    <int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
This is the corresponding code:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public EventMessage<ModificaOperativitaRapporto> activate(EventMessage<InserimentoCanale> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
All is working great.
Then I have another chain:
<int:chain id="onlineCensimentoClienteChain" input-channel="ONLINE_CENSIMENTO_CLIENTE" output-channel="inserimentoCanaleActivate">
    <int:service-activator ref="onlineCensimentoClienteActivator" method="activate" />
    <int:splitter expression="payload.getPayload().getCanali()" />
</int:chain>
And the relative activator:
@Override
public EventMessage<CensimentoCliente> activate(EventMessage<CensimentoCliente> eventMessage) {
    ...
    // some database changes
    dao.save(myObject);
}
The CensimentoCliente payload, described below, carries a list of the first chain's payloads, so with a splitter I split on the list and reuse the code of the first chain.
public interface CensimentoCliente extends Serializable {
    Collection<? extends InserimentoCanale> getCanali();
    void setCanali(Collection<? extends InserimentoCanale> canali);
    ...
}
But since every activator gets its own transaction definition (the first one can live without the second one), I have a use case where the transactions are separated.
The goal is to have the database modifications of the two chains be part of the same transaction.
Any help?
Kind regards,
Massimo
You can accomplish this by creating a custom channel (or other custom component, but this is the simplest approach) that wraps the message dispatch in a TransactionTemplate callback execution:
public class TransactionalChannel extends AbstractSubscribableChannel {

    private final MessageDispatcher dispatcher = new UnicastingDispatcher();
    private final TransactionTemplate transactionTemplate;

    TransactionalChannel(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    protected boolean doSend(final Message<?> message, long timeout) {
        return transactionTemplate.execute(new TransactionCallback<Boolean>() {
            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                return getDispatcher().dispatch(message);
            }
        });
    }

    @Override
    protected MessageDispatcher getDispatcher() {
        return dispatcher;
    }
}
In your XML, you can define your channel and transaction template and reference your custom channel just as you would any other channel:
<bean id="transactionalChannel" class="com.stackoverflow.TransactionalChannel">
    <constructor-arg>
        <bean class="org.springframework.transaction.support.TransactionTemplate">
            <property name="transactionManager" ref="transactionManager"/>
            <property name="propagationBehavior" value="#{T(org.springframework.transaction.TransactionDefinition).PROPAGATION_REQUIRES_NEW}"/>
        </bean>
    </constructor-arg>
</bean>
For your example, you could perhaps use a bridge to pass the message through the new channel:
<int:bridge input-channel="inserimentoCanaleActivate" output-channel="transactionalChannel" />

<int:chain input-channel="transactionalChannel" output-channel="inserimentoCanalePreRouting">
    <int:service-activator ref="inserimentoCanaleActivator" method="activate" />
</int:chain>
When you have a <service-activator> and @Transactional on the service method, the transaction is bound only to that method invocation.
If you want a transaction for the entire message flow (or part of it), you should declare transaction advice somewhere earlier in the flow.
If your channels are all direct, all service invocations will be wrapped in the same transaction.
The simplest way to accomplish this is to write a simple @Gateway interface annotated with @Transactional and call it from the start of your message flow, as sketched below.
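A minimal sketch of that suggestion (the interface name and package are hypothetical; EventMessage, CensimentoCliente, and the channel come from the question). The transaction opened by the gateway method spans every downstream direct-channel endpoint:

public interface CensimentoGateway {

    // the transaction opened here wraps the entire direct-channel flow
    @Transactional
    void process(EventMessage<CensimentoCliente> event);
}

wired to the start of the flow with:

<int:gateway id="censimentoGateway"
             service-interface="com.example.CensimentoGateway"
             default-request-channel="ONLINE_CENSIMENTO_CLIENTE" />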
To clarify a bit regarding transactions, see "Understanding Transactions in Message Flows" in the Spring Integration reference.
Are these modifying two separate relational databases? If so, you are looking at an XA transaction. Now, if you are running this on a non-XA container like Tomcat, all of this must be done in a single thread watched by a transaction manager - you will have to piggyback on the component that actually triggers these events, which can be a JMS message or a poller against some data source. The processing must also happen in a single thread so that Spring can run the entire process in one transaction.
As a final note, do not introduce thread pools or queues between service activators; this can cause the activators to run in separate threads.
