How to shut down ThreadPoolTaskExecutor? Good way - Spring

I have a ThreadPoolTaskExecutor. I need to send many emails (different emails). If an error occurs while sending an email, I should record it in the database.
<bean id="taskExecutor"
    class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="waitForTasksToCompleteOnShutdown" value="true" />
</bean>
I'm executing tasks with taskExecutor.execute(Runnable). Everything works well!
@Override
public void run() {
    try {
        mailService.sendMail();
    } catch (Throwable t) {
        daoService.writeFailStatus();
    }
}
Everything seems OK! The async requests are doing well!
I also have
while (true) {
    if (executor.getActiveCount() == 0) {
        executor.shutdown();
        break;
    }
    Thread.sleep(3000);
}
Because of waitForTasksToCompleteOnShutdown=true, the executor never shuts down automatically. In other words, the main thread is never destroyed (the main thread is the one from which I submit the executor tasks; when I run the code in Eclipse, the terminal is always active). Even after the executor threads finish their work, the console stays active.
I think this is because the main thread is waiting for something - for someone to tell it "everything is already done, relax, go ahead and shut down".
So I thought of this while(true) solution. Could you tell me if this is a good idea? Maybe it is not.
I know the executor also has a submit() method; I don't think I need it here. Please correct me if I'm wrong anywhere in this post.

Because you are using Spring's ThreadPoolTaskExecutor, you are in luck.
If you add the following configuration, you can specify a number of seconds to wait before forcing a shutdown.
<bean id="taskExecutor"
    class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="waitForTasksToCompleteOnShutdown" value="true" />
    <property name="awaitTerminationSeconds" value="X" />
</bean>
Set the value of X to be however many seconds you want the process to wait before terminating.
From the documentation for awaitTerminationSeconds:
If the "waitForTasksToCompleteOnShutdown" flag has been set to true, it will continue to fully execute all ongoing tasks as well as all remaining tasks in the queue, in parallel to the rest of the container shutting down. In either case, if you specify an await-termination period using this property, this executor will wait for the given time (max) for the termination of tasks. As a rule of thumb, specify a significantly higher timeout here if you set "waitForTasksToCompleteOnShutdown" to true at the same time, since all remaining tasks in the queue will still get executed - in contrast to the default shutdown behavior where it's just about waiting for currently executing tasks that aren't reacting to thread interruption.
Basically, you are trying to force something that should be handled solely by the framework. It should be the framework that decides when to shut down the task executor. I would remove all your code that tries to shut down the task executor and let Spring handle the shutdown when all your jobs have finished. Spring will then properly shut down the main thread as well.
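For reference, Spring's ThreadPoolTaskExecutor delegates to a plain java.util.concurrent.ThreadPoolExecutor, and the waitForTasksToCompleteOnShutdown / awaitTerminationSeconds pair maps onto shutdown() plus awaitTermination(). A minimal plain-JDK sketch of that shutdown sequence (class name, pool size, and task count are arbitrary placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ShutdownDemo {

    // Runs 'tasks' no-op "emails" on a fixed pool, then performs the
    // orderly-shutdown sequence and returns how many tasks completed.
    static int runAndShutdown(int tasks) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(5);
        AtomicInteger sent = new AtomicInteger();

        for (int i = 0; i < tasks; i++) {
            executor.execute(sent::incrementAndGet); // stands in for sendMail()
        }

        // Equivalent of waitForTasksToCompleteOnShutdown=true plus
        // awaitTerminationSeconds: stop accepting new tasks, let the
        // queued ones finish, wait up to the given time for termination.
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndShutdown(20)); // 20
    }
}
```

Once awaitTermination returns, the pool's threads are gone and the JVM can exit without any busy-wait loop in the main thread.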

If you know beforehand how many "too many emails" is, I would suggest taking a look at CountDownLatch rather than a busy-wait loop that checks the task status.
In the main thread, create the latch:
CountDownLatch latch = new CountDownLatch(TOO_MANY_EMAILS);
Pass this instance to each Runnable, which calls latch.countDown() after sending its mail.
In the main thread, wait for the latch to count down with latch.await(). This blocks the main thread's execution.
After that you can safely shut down the thread pool, knowing all the work has been completed.
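The steps above can be sketched as follows, with a counter standing in for the real mailService (class and method names are placeholders):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {

    // Sends 'totalMails' fake mails on a pool, waits on the latch, then
    // shuts the pool down; returns the number of mails "sent".
    static int sendAll(int totalMails) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(5);
        CountDownLatch latch = new CountDownLatch(totalMails);
        AtomicInteger sent = new AtomicInteger();

        for (int i = 0; i < totalMails; i++) {
            executor.execute(() -> {
                try {
                    sent.incrementAndGet();  // stands in for mailService.sendMail()
                } finally {
                    latch.countDown();       // count down even if sending failed
                }
            });
        }

        latch.await();        // blocks until every task has counted down
        executor.shutdown();  // now safe: all the work is known to be done
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sendAll(15)); // 15
    }
}
```

Putting countDown() in a finally block matters: a failed send must still count down, or await() will block forever.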

Related

Consume from channel only if the number of messages reaches a count or the messages have been in the channel for a while

I have a custom sink module, and I would like to consume messages from the input only if the number of messages reaches a count or if they have been in the channel for some time. In a nutshell, I want to do a bulk push.
I have tried aggregating the messages after consuming them and storing them in an aggregated channel backed by a SimpleMessageStore, with a MessageGroupStoreReaper checking for messages in the channel.
I am not satisfied with this approach because I am consuming the messages and then storing them in an in-memory store. I am aware of the JDBC store as well, but I don't want to follow that approach; since the message channels in Spring XD are backed by Redis/MQ, I would like to consume from the input channel based on my conditions.
My current bean configuration is as shown below:
<int:aggregator id="messageAggregator" ref="messageAggregatorBean"
    method="aggregate" input-channel="input" output-channel="aggregatorOutputChannel"
    release-strategy="messageReleaseStrategyBean" release-strategy-method="canRelease"
    send-partial-result-on-expiry="true" message-store="resultMessageStore">
</int:aggregator>
<int:service-activator id="contributionIndexerService"
    ref="contributionIndexerBean" method="bulkIndex" input-channel="aggregatorOutputChannel" />
<bean id="resultMessageStore"
    class="org.springframework.integration.store.SimpleMessageStore" />
<bean id="resultMessageStoreReaper"
    class="org.springframework.integration.store.MessageGroupStoreReaper">
    <property name="messageGroupStore" ref="resultMessageStore" />
    <property name="timeout" value="60000" />
</bean>
<task:scheduled-tasks>
    <task:scheduled ref="resultMessageStoreReaper" method="run"
        fixed-rate="10000" />
</task:scheduled-tasks>
Any thoughts or comments?
Thanks in advance.
I'm not sure you will be able to determine the count of messages in the broker's queue (Redis/RabbitMQ, or even plain JMS), much less how long they have been there.
You definitely have to consume them to apply such logic.
And yes, I think an Aggregator can help you here. But you're right: it must be a persistent Message Store.
For the case "if they are in the channel since some time", the Aggregator offers the group-timeout option to release groups that haven't met the releaseStrategy condition but that you would like to emit anyway after some time: http://docs.spring.io/spring-integration/reference/html/messaging-routing-chapter.html#agg-and-group-to
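Assuming a Spring Integration version where the group-timeout attribute is available (4.0 or later), the reaper/scheduler pair from the question could be replaced by a timeout directly on the aggregator. A sketch reusing the question's bean and channel names:

```xml
<int:aggregator id="messageAggregator" ref="messageAggregatorBean"
    method="aggregate" input-channel="input" output-channel="aggregatorOutputChannel"
    release-strategy="messageReleaseStrategyBean" release-strategy-method="canRelease"
    group-timeout="60000"
    send-partial-result-on-expiry="true" message-store="resultMessageStore">
</int:aggregator>
```

With group-timeout, a partial group is released after 60 seconds even if canRelease never returns true, which is the "emit them anyway over some time" behavior described above.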

Why is the PollSkipStrategy.skipPoll method called for every message polled from the queue?

I'm using an inbound poller to process failed requests from a backout queue. For scheduling I'm using the cron expression '0 0/2 * * * *', i.e. execute the poller every two minutes. The scheduling works fine per the cron expression, but PollSkipStrategy.skipPoll is called for every message polled. I was under the impression that the poll-skip strategy would be executed once per poll, not once per record polled. My implementation of PollSkipStrategy.skipPoll returns true or false based on a property. Am I missing something here? Here is my configuration:
<bean id="RegistrationEventPoller"
    class="com.poller.RegistrationEventPoller">
    <property name="RegistrationEventRetryCount" value="$env{RegistrationEventRetryCount}"/>
</bean>
<bean id="PollSkipAdvice" class="org.springframework.integration.scheduling.PollSkipAdvice">
    <constructor-arg ref="PollSkipStrategy"/>
</bean>
<bean id="PollSkipStrategy"
    class="com.poller.PollSkipStrategy">
    <property name="RegistrationPollerOnOff" value="$env{RegistrationPollerOnOff}"/>
</bean>
The advice is an around advice on the whole flow (MessageSource.receive() and sending the message). When the poller fires, it calls the flow for up to maxMessagesPerPoll messages, so yes, the advice is actually called for each message found within the poll, not just once per poll. It simply provides a mechanism to stop calling the message source when some condition prevents you from handling messages.
A more sophisticated Smart Polling feature was added in 4.2 which gives you much more flexibility.
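Since skipPoll() can run once per message, the skip decision should be a cheap, thread-safe check. A plain-Java sketch of that on/off predicate logic (this mirrors the question's strategy bean but deliberately does not implement the actual Spring interface, so it stays self-contained):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the decision a PollSkipStrategy would make; Spring's
// interface exposes a single boolean skipPoll() method.
public class PollerToggle {

    // Backed by the RegistrationPollerOnOff-style property in the question.
    private final AtomicBoolean pollerOn = new AtomicBoolean(true);

    public void setPollerOn(boolean on) {
        pollerOn.set(on);
    }

    // Must be cheap and thread-safe: it may run for every message
    // received within a single poll cycle.
    public boolean skipPoll() {
        return !pollerOn.get();
    }

    public static void main(String[] args) {
        PollerToggle toggle = new PollerToggle();
        System.out.println(toggle.skipPoll()); // false: poller enabled
        toggle.setPollerOn(false);
        System.out.println(toggle.skipPoll()); // true: skip further receives
    }
}
```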

Does applying ExpressionEvaluatingRequestHandlerAdvice suppress the error?

I have an outbound channel adapter going out to a JMS queue. I have some logic that needs to trigger both on successful delivery and on failover, so I've hooked the adapter up to the ExpressionEvaluatingRequestHandlerAdvice.
<jms:outbound-channel-adapter id="101Out"
channel="101DestinationChannel"
connection-factory="101Factory"
destination-expression="headers.DESTINATION_NAME"
destination-resolver="namingDestinationResolver"
explicit-qos-enabled="true">
<jms:request-handler-advice-chain>
<beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<beans:property name="onSuccessExpression" ref="success"/>
<beans:property name="successChannel" ref="normalOpsReplicationChannel"/>
<beans:property name="onFailureExpression" ref="failure"/>
<beans:property name="failureChannel" ref="failoverInitiationChannel" />
</beans:bean>
<beans:bean id="retryAdvice" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<beans:property name="retryTemplate" ref="retryTemplate"/>
</beans:bean>
</jms:request-handler-advice-chain>
</jms:outbound-channel-adapter>
Now both of these methods trigger appropriately, and the replication/failover logic executes fine. But on failure (when I stop the queue manager), once the process hooked up to the failureChannel completes, I see the error propagating back to the source of the call (an HTTP endpoint in this case).
The advice IS supposed to stop the error from propagating, right?
<service-activator input-channel="failoverInitiationChannel"
ref="failoverInitiator" />
I have a service activator hooked up to the failureChannel which just mutates a singleton; nothing I do there can have triggered the error. Moreover, the error coming back is definitely about queue access, so it can't be anything I did after the failoverInitiator was activated.
org.springframework.jms.IllegalStateException: JMSWMQ0018: Failed to connect to queue manager 'APFDEV1' with connection mode 'Client' and host name 'localhost(1510)'.
I'm very confused about whether I'm supposed to use the recoveryCallback on the RequestHandlerRetryAdvice or this advice to actually stop the error. But I do need an action taken even on success, so the ExpressionEvaluatingRequestHandlerAdvice is a better fit for my scenario.
Thanks for the help in advance :-)
That is the default behavior. Please see the ExpressionEvaluatingRequestHandlerAdvice javadocs for the trapException property...
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
    this.trapException = trapException;
}
I will add a note to the reference manual.
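So, assuming the default behavior is what you are seeing, adding trapException to the advice from the question's configuration should stop the exception from reaching the HTTP endpoint:

```xml
<beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
    <beans:property name="onSuccessExpression" ref="success"/>
    <beans:property name="successChannel" ref="normalOpsReplicationChannel"/>
    <beans:property name="onFailureExpression" ref="failure"/>
    <beans:property name="failureChannel" ref="failoverInitiationChannel" />
    <!-- catch the exception after the failure flow has run, return null -->
    <beans:property name="trapException" value="true"/>
</beans:bean>
```

The onFailureExpression/failureChannel handling still runs first, so the failover logic is unaffected; only the re-throw to the caller is suppressed.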

Transactional semantics of jdbc poller with task executor

What is the transactional behaviour of the jdbc:inbound-channel-adapter when it uses a task executor, as in the following code:
<task:executor id="pollerPool" pool-size="10" queue-capacity="1000" />
<int-jdbc:inbound-channel-adapter id="pollingAdapter"
channel="..." data-source="..." auto-startup="true" query="..."
row-mapper="..." update="..." max-rows-per-poll="100">
<int:poller fixed-rate="50000" task-executor="pollerPool">
<int:transactional transaction-manager="..."
isolation="DEFAULT" propagation="REQUIRED" read-only="false" timeout="1000"/>
</int:poller>
</int-jdbc:inbound-channel-adapter>
Obviously the use of a task executor will start a new transaction, but that is no issue because the JDBC poller is the beginning of the pipeline. But will components further down the pipeline participate in the same transaction? This is important because, if not, the update statement of the jdbc:inbound-channel-adapter will not be rolled back if there is a failure further down the line.
Correct. The polling task (AbstractPollingEndpoint#createPoller()) is wrapped with a TransactionInterceptor, and that is independent of the provided TaskExecutor.
In other words, the thread which polls messages from JDBC runs within the transaction boundaries.
And that transaction lives until the downstream flow finishes its work or until you shift the message to another thread, e.g. via an Executor Channel.
Don't forget that a transaction is single-threaded anyway.
For more info take a look here please: http://docs.spring.io/spring-integration/docs/3.0.1.RELEASE/reference/html/transactions.html
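To illustrate that boundary, here is a sketch of a downstream hand-off (the channel and bean names are hypothetical): everything from the poll through handlerA runs inside the poller's transaction, while handlerB, on the far side of the executor channel, runs on another thread outside it.

```xml
<!-- Runs on the polling thread, inside the poller's transaction. -->
<int:service-activator input-channel="polledChannel"
    output-channel="handoffChannel" ref="handlerA" />

<!-- An executor channel dispatches to another thread; the transaction
     does not follow the message across this boundary. -->
<int:channel id="handoffChannel">
    <int:dispatcher task-executor="pollerPool" />
</int:channel>

<!-- Runs outside the poller's transaction. -->
<int:service-activator input-channel="handoffChannel" ref="handlerB" />
```

If handlerB fails, the JDBC adapter's update statement is not rolled back; keep the whole flow on the polling thread if you need that rollback.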

Best way to both prevent starvation and ensure minimal throughput within a Spring JMS listener pool

I have a JMS listener pool:
<jms:listener-container connection-factory="jmsConnectionFactory"
    acknowledge="client" concurrency="10">
    <jms:listener destination="foo1" ref="ref1"/>
    <jms:listener destination="foo2" ref="ref2"/>
    <jms:listener destination="foo3" ref="ref3"/>
    <jms:listener destination="foo4" ref="ref4"/> <!-- etc... -->
</jms:listener-container>
I'm looking for a way to ensure that some of my message types don't starve or block out all the others. Example: I would prefer to have at least M but no more than N percent of my pool be dedicated to processing foo1 messages, even though such processing may erratically stall.
Presently, if I let this happen, then all 10 threads in my pool will end up dedicated to foo1 messages. foo{2-4} messages will have to wait. I can prevent such starvation by enforcing timeouts on my foo1 listener, but then I fail my throughput goals.
Is there some easy configuration-based way of achieving this? Can I have two JMS listener pools running at once?
Or is my safest bet just to set up two entirely different server fleets, one dedicated to foo1 messages, the other to foo2-4?
Ideally, I would like to do something like the following. But "concurrency" isn't an attribute of jms:listener, just jms:listener-container:
<jms:listener-container connection-factory="jmsConnectionFactory"
    acknowledge="client" concurrency="25">
    <jms:listener destination="foo1" ref="ref1" concurrency="10" /> <!-- ensure higher throughput -->
    <jms:listener destination="foo2" ref="ref2" concurrency="5" />  <!-- don't let foo1 starve me... -->
    <jms:listener destination="foo3" ref="ref3" concurrency="5" />  <!-- don't let foo1 starve me... -->
    <jms:listener destination="foo4" ref="ref4" concurrency="5" />  <!-- etc... -->
</jms:listener-container>
Thanks.
Understand that the namespace configuration is just a convenience to simplify configuration. Each <listener/> element gets its own listener container and "inherits" the attributes from the <listener-container/> element.
So, in your first example, the concurrency="10" is not shared across all the listeners; each <listener/> gets a concurrency of 10.
You can achieve your second example by declaring two <listener-container/> elements, one with concurrency 10 and one with 5. Then declare each listener in the appropriate container element.
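A sketch of that two-container layout, reusing the question's destination and bean names (remember that each <listener/> in a container gets its own container instance with the inherited concurrency, so foo2-4 each get 5 concurrent consumers here):

```xml
<!-- Dedicated capacity for foo1: higher throughput, can stall freely. -->
<jms:listener-container connection-factory="jmsConnectionFactory"
    acknowledge="client" concurrency="10">
    <jms:listener destination="foo1" ref="ref1"/>
</jms:listener-container>

<!-- Separate pool for the rest, so foo1 can never starve them. -->
<jms:listener-container connection-factory="jmsConnectionFactory"
    acknowledge="client" concurrency="5">
    <jms:listener destination="foo2" ref="ref2"/>
    <jms:listener destination="foo3" ref="ref3"/>
    <jms:listener destination="foo4" ref="ref4"/>
</jms:listener-container>
```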
Hope that helps.
