Pausing an entire container when concurrency is greater than 1 - Spring

When we are using a container with concurrency greater than 1 and wish to pause all the child containers, can we pause directly using:
MessageListenerContainer mlc = registry.getListenerContainer(<id>);
mlc.pause();
Or do we have to cast it to a ConcurrentMessageListenerContainer and pause each child container individually?
Thanks in advance.

If you're using @KafkaListener(id = "your-id", ...) on your method, it will be registered as an instance of ConcurrentMessageListenerContainer. So when you call mlc.pause(), it is actually ConcurrentMessageListenerContainer#pause that runs; you don't have to cast (polymorphism).
ConcurrentMessageListenerContainer#pause delegates to the pause() of every KafkaMessageListenerContainer inside it (for example, with concurrency = 3 you will have 3 instances of KafkaMessageListenerContainer). You don't have to pause each container individually; mlc.pause() is enough.
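For illustration, a minimal sketch of pausing and resuming through the registry; the listener id "your-id" matches the annotation above:

@Autowired
private KafkaListenerEndpointRegistry registry;

public void pauseAll() {
    // returns the ConcurrentMessageListenerContainer created for @KafkaListener(id = "your-id")
    MessageListenerContainer container = registry.getListenerContainer("your-id");
    if (container != null) {
        container.pause();  // delegates to pause() on every child KafkaMessageListenerContainer
    }
}

public void resumeAll() {
    MessageListenerContainer container = registry.getListenerContainer("your-id");
    if (container != null) {
        container.resume();
    }
}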

Related

C# - Worker thread with slots, items being dynamically added

I have a Windows service that polls a web service for new items every 30 seconds. If it finds any new items, it checks whether they need to be "processed" and then puts them in a list to process. I spawn off different threads to process 5 at a time, and when one finishes, another fills the empty slot. Once everything has finished, the program sleeps for 30 seconds and then polls again.
My issue is that while the items are being processed (which could take up to 15 minutes), new items are being created that may also need to be processed. My problem is that the main thread gets held up waiting for every last thread to finish before it sleeps and starts the process all over.
What I'm looking to do is have the main thread continue to poll the web service every 30 seconds; instead of getting held up, it would add any new items it finds to a list, which would be processed by a separate worker thread. That worker thread would still have, say, only 5 slots available, but they would essentially always all be filled, assuming the main thread keeps finding new items to process.
I hope that makes sense. Thanks!
EDIT: updated code sample
I put together this as a worker thread that operates on a ConcurrentQueue. Any way to improve this?
private void ThreadWorker() {
    DateTime dtStart = DateTime.Now;
    int iNumOfConcurrentSlots = 6;
    Thread[] threads = new Thread[iNumOfConcurrentSlots];
    while (true) {
        for (int i = 0; i < iNumOfConcurrentSlots; i++) {
            if (m_tAssetQueue.TryDequeue(out Asset aa)) {
                threads[i] = new Thread(() => ProcessAsset(aa));
                threads[i].Start();
                Thread.Sleep(500);
            }
        }
    }
}
EDIT: Ahh yeah, that won't work above. I need a way to avoid hard-coding the number of concurrent slots, and instead have each thread wait for something in the queue and process it when it finds it. But then I also need a way of signalling that the ProcessAsset() function has completed, to release the thread and allow another one to be created...
One simple way to do it is to have 5 threads reading from a concurrent queue. The main thread queues items and the worker threads do blocking reads from the queue.
Note: the workers are in an infinite loop. They call TryDequeue, process the item if they got one, or sleep for a second if they didn't. They can also check an exit flag.
To have your service properly behaved, you might have an independent polling thread that queues the items, while the main thread is kept free to respond to start, stop, and pause requests.
Pseudo code for worker thread:
    While true
        If TryDequeue then
            process data
        If exit flag is true, break
        While pause flag, sleep
        Sleep
Pseudo code for polling thread:
    While true
        Poll web service
        Queue items in concurrent queue
        If exit flag is true, break
        While pause flag, sleep
        Sleep
Pseudo code for main thread:
    Start polling thread
    Start n worker threads with the above code
    Handle stop:
        set exit flag to true
    Handle pause:
        set pause flag to true
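To make the pseudocode concrete, here is a minimal sketch of the worker side using a blocking queue. It is written in Java since the pattern itself is language-neutral (in C# the natural analogue is BlockingCollection<T>), and all names are illustrative:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class AssetPipeline {
    static class Asset { }  // stand-in for the poster's Asset type

    private final BlockingQueue<Asset> queue = new LinkedBlockingQueue<>();
    private final AtomicBoolean exit = new AtomicBoolean(false);

    // Worker loop: take an item if one arrives within a second,
    // otherwise re-check the exit flag and wait again.
    private void worker() {
        while (!exit.get()) {
            try {
                Asset asset = queue.poll(1, TimeUnit.SECONDS);
                if (asset != null) {
                    processAsset(asset);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void processAsset(Asset asset) {
        // long-running work (up to 15 minutes) goes here
    }

    // The polling thread calls this every 30 seconds with new items;
    // it never blocks waiting for workers to finish.
    public void enqueue(Asset asset) {
        queue.add(asset);
    }

    public void start(int slots) {  // e.g. slots = 5
        for (int i = 0; i < slots; i++) {
            new Thread(this::worker, "asset-worker-" + i).start();
        }
    }

    public void stop() {
        exit.set(true);
    }
}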

Stop a specific process in Python ProcessPoolExecutor, or share state between them

This is my code:
def long_stage_task(node, deployment_folder_name, stage_s3_bucket):
    global workers
    logging.info("starting....")
    work = StageOS(node, deployment_folder_name, stage_s3_bucket)  # class
    work.stagestart()  # class method

executor = ProcessPoolExecutor(5)
executor.submit(long_stage_task, i, deployment_folder_name, stage_s3_bucket)
Now how can I stop a particular process/PID?
Is there any way to pass globals or shared state between them? I don't see anything in the docs:
https://docs.python.org/3/library/concurrent.futures.html
You could pass the workers a list of Events and set one when you want the corresponding worker to stop. This implies your long_stage_task function periodically checks its own Event.
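As an illustration of that pattern, here is a sketch of per-task stop flags, written in Java to keep this digest's examples in one language; with ProcessPoolExecutor the equivalent is passing each worker a multiprocessing.Manager().Event() and having long_stage_task poll it. All names are illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class StoppableStageTasks {
    // Each task polls its own flag; setting the flag asks that task,
    // and only that task, to stop.
    static void longStageTask(int node, AtomicBoolean stopRequested) {
        while (!stopRequested.get()) {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // one unit of staging work stands in here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        System.out.println("task " + node + " stopped");
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(5);
        List<AtomicBoolean> stopFlags = new ArrayList<>();
        for (int node = 0; node < 5; node++) {
            AtomicBoolean flag = new AtomicBoolean(false);
            stopFlags.add(flag);
            int n = node;
            executor.submit(() -> longStageTask(n, flag));
        }
        stopFlags.get(2).set(true);           // stop only task number 2
        TimeUnit.SECONDS.sleep(1);
        stopFlags.forEach(f -> f.set(true));  // stop the rest
        executor.shutdown();
    }
}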
If what you are after is stopping a task that is taking too long, you can take a look at pebble. It allows you to set timeouts on function calls as well as cancel ongoing tasks.

DefaultMessageListenerContainer stops processing messages

I'm hoping this is a simple configuration issue but I can't seem to figure out what it might be.
Set-up
Spring Boot 2.2.2.RELEASE
cloud-starter
cloud-starter-aws
spring-jms
spring-cloud-dependencies Hoxton.SR1
amazon-sqs-java-messaging-lib 1.0.8
Problem
My application starts up fine and begins to process messages from Amazon SQS. After some amount of time I see the following warning
2020-02-01 04:16:21.482 LogLevel=WARN 1 --- [ecutor-thread14] o.s.j.l.DefaultMessageListenerContainer : Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers.
The above warning gets printed multiple times and eventually I see the following two INFO messages
2020-02-01 04:17:51.552 LogLevel=INFO 1 --- [ecutor-thread40] c.a.s.javamessaging.SQSMessageConsumer : Shutting down ConsumerPrefetch executor
2020-02-01 04:18:06.640 LogLevel=INFO 1 --- [ecutor-thread40] com.amazon.sqs.javamessaging.SQSSession : Shutting down SessionCallBackScheduler executor
The above two messages display several times, and at some point no more messages are consumed from SQS. I don't see any other messages in my log to indicate an issue, but my handlers give no indication that they are processing messages (I have 2~), and I can see the AWS SQS queue growing in both the number of messages and their age.
~: This exact code was working fine when I had a single handler; this problem started when I added the second one.
Configuration/Code
The first "WARNing" I realize is caused by the currency of the ThreadPoolTaskExecutor, but I can not get a configuration which works properly. Here is my current configuration for the JMS stuff, I have tried various levels of max pool size with no real affect other than the warings start sooner or later based on the pool size
public ThreadPoolTaskExecutor asyncAppConsumerTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setThreadGroupName("asyncConsumerTaskExecutor");
    taskExecutor.setThreadNamePrefix("asyncConsumerTaskExecutor-thread");
    taskExecutor.setCorePoolSize(10);
    // Allow the thread pool to grow up to 4 times the core size; evidently not
    // having the pool be larger than the max concurrency causes the JMS queue
    // to barf on itself with messages like
    // "Number of scheduled consumers has dropped below concurrentConsumers limit,
    // probably due to tasks having been rejected. Check your thread pool
    // configuration! Automatic recovery to be triggered by remaining consumers"
    taskExecutor.setMaxPoolSize(10 * 4);
    taskExecutor.setQueueCapacity(0); // do not queue up messages
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.setAwaitTerminationSeconds(60);
    return taskExecutor;
}
Here is the JMS Container Factory we create
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    // The JMS processor will start 'concurrency' number of tasks
    // and supposedly will increase this to the max of '10 * 3'
    factory.setConcurrency(10 + "-" + (10 * 3));
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    // Let the task process 100 messages; the default appears to be 10
    factory.setMaxMessagesPerTask(100);
    // Wait up to 5 seconds for a timeout; this keeps the task around a bit longer
    factory.setReceiveTimeout(5000L);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
I added the setMaxMessagesPerTask and setReceiveTimeout calls based on suggestions found online; the problem persists without them and at various settings (50, 2500L, 25, 1000L, etc.).
We create a default SQS connection factory
public SQSConnectionFactory sqsConnectionFactory(AmazonSQS amazonSQS) {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQS);
}
Finally the handlers look like this
@JmsListener(destination = "consumer-event-queue")
public void receiveEvents(String message) throws IOException {
    MyEventDTO myEventDTO = jsonObj.readValue(message, MyEventDTO.class);
    //messageTask.process(myEventDTO);
}

@JmsListener(destination = "myalert-sqs")
public void receiveAlerts(String message) throws IOException, InterruptedException {
    final MyAlertDTO myAlert = jsonObj.readValue(message, MyAlertDTO.class);
    myProcessor.addAlertToQueue(myAlert);
}
You can see that in the first method (receiveEvents) we just take the message from the queue and exit; we have not implemented the processing code for it.
The second method (receiveAlerts) gets the message, and the myProcessor.addAlertToQueue function creates a runnable object and submits it to a thread pool to be processed at some point in the future.
The problem (the warnings, the INFO messages, and the failure to consume messages) only started when we added the receiveAlerts method; previously the other method was the only one present and we did not see this behavior.
More
This is part of a larger project, and I am working on breaking this code out into a smaller test case to see if I can reproduce the issue. I will post a follow-up with the results.
In the Meantime
I'm hoping this is just a configuration issue, and that someone more familiar with this can tell me what I'm doing wrong, or can provide some thoughts on how to make it work properly.
Thank you!
After fighting this one for a bit I think I finally resolved it.
The issue appears to be due to the DefaultJmsListenerContainerFactory: this factory creates a new DefaultMessageListenerContainer for EACH method annotated with @JmsListener. The person who originally wrote the code thought it was only called once for the application and that the created container would be re-used. So the issue was two-fold:
1. The ThreadPoolTaskExecutor attached to the factory had 40 threads. When the application had one @JmsListener method this worked fine, but when we added a second method, each listener got 10 threads (20 total) for listening. That in itself is fine; however, since we stated that each listener could grow up to 30 consumers, we quickly ran out of threads in the pool. This caused the "Number of scheduled consumers has dropped below concurrentConsumers limit" warning.
2. This is probably obvious given point 1, but I wanted to call it out explicitly: in the listener factory we set the concurrency to "10-30", yet all of the listeners have to share that pool. The max concurrency therefore has to be set so that, even if every listener creates its maximum number of consumers, the total does not exceed the number of threads in the pool (e.g. if we have 2 @JmsListener-annotated methods and a pool with 40 threads, then each listener's max can be no more than 20).
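For concreteness, a factory configuration consistent with that rule might look like this (a sketch only, assuming the 40-thread executor from the question and two @JmsListener methods, so each listener is capped at 20 consumers: 2 * 20 = 40):

public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    // Two @JmsListener methods share one 40-thread executor, so cap each
    // listener at 20 consumers: 2 listeners * 20 max consumers = 40 threads.
    factory.setConcurrency("10-20");
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    return factory;
}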
Hopefully this might help someone else with a similar issue in the future....

Async feature in Servlets

I was just going back to the Servlet 3.x features and exploring them. If I am not wrong, before Servlet 3.x it was a thread-per-request model, which would run out of threads in the pool under heavy incoming traffic.
So with Servlet 3.x it is said to be asynchronous: it doesn't keep threads blocked, but releases them immediately while the task is merely delegated.
Here is my interpretation:
Consider there are 2 threads in the server thread pool.
For a new async servlet request R1 there is a thread T1; T1 delegates the task to T2 and responds to the client immediately.
Question: Is T2 created from the server thread pool? If so, I don't get the point:
Case 1: If it were an old synchronous servlet request, T1 would have been busy running the I/O task.
Case 2: With an asynchronous servlet call, T2 is busy running the I/O task.
In both cases, one of them is busy.
I tried to check this with a sample async servlet on the Open Liberty app server; below is a sample log captured from my demo servlet.
Entering doGet() == thread name is = Default Executor-thread-116
Exiting doGet() == thread name is = Default Executor-thread-116
=== Long running task started ===
Thread executing #start of long running task = Default Executor-thread-54
Thread executing #end of long running task = Default Executor-thread-54
=== Long running task ended ===
As shown above, Default Executor-thread-116 is released immediately and the long-running task is delegated to Default Executor-thread-54, but I am not sure whether both come from the app server thread pool. If so, why can't Default Executor-thread-116 just do the task itself instead of delegating?
Can someone throw some light on this async behavior of servlets in Java EE?
In your example, where the work is synchronous and there's no separate executor/thread pool, there is almost no point in using async servlets. Lots of samples/examples out there just block on a second thread because they're trying to illustrate only the syntax.
But there's no reason why you can't spin off a thread to do a little work, add your async context to some list, and then, after some event (inbound JMS, WebSocket, whatever) provides the data needed, complete the async response. For example, a 2-player game server wouldn't wait for player 2 in a second thread; it would just have the first player's async context floating around in memory waiting for a second player to find it.
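A minimal sketch of that idea (all names are illustrative): the request thread parks the AsyncContext in a shared collection and returns to the pool immediately; whatever event later supplies the data completes the response.

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet(urlPatterns = "/wait", asyncSupported = true)
public class WaitingServlet extends HttpServlet {
    // Parked responses waiting for a matching event (e.g. a second player).
    static final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // fail the wait after 30 seconds
        waiting.add(ctx);       // the request thread returns to the pool right here
    }

    // Called from whatever event supplies the data (another request, JMS, a websocket...).
    static void complete(String data) throws IOException {
        AsyncContext ctx = waiting.poll();
        if (ctx != null) {
            ctx.getResponse().getWriter().println(data);
            ctx.complete();
        }
    }
}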

Running Plone subscriber events asynchronously

Using Plone 4, I have successfully created a subscriber event to do extra processing when a custom content type is saved. I accomplished this by using the Products.Archetypes.interfaces.IObjectInitializedEvent interface.
configure.zcml
<subscriber
    for="mycustom.product.interfaces.IRepositoryItem
         Products.Archetypes.interfaces.IObjectInitializedEvent"
    handler=".subscribers.notifyCreatedRepositoryItem"
    />
subscribers.py
def notifyCreatedRepositoryItem(repositoryitem, event):
    """
    Called on IObjectInitializedEvent, which occurs when a new object is created.
    """
    # my custom processing goes here; it should be asynchronous
However, the extra processing can sometimes take too long, and I was wondering if there is a way to run it in the background, i.e. asynchronously.
Is it possible to run subscriber events asynchronously, for example when saving an object?
Not out of the box. You'd need to add async support to your environment.
Take a look at plone.app.async; you'll need a ZEO environment and at least one extra instance. The latter will run async jobs you push into the queue from your site.
You can then define methods to be executed asynchronously and push tasks into the queue to run them.
Example code to push a task into the queue:
from plone.app.async.interfaces import IAsyncService
from zope.component import getUtility

async_service = getUtility(IAsyncService)
async_service.queueJob(an_async_task, someobject, arg1_value, arg2_value)
and the task itself:
def an_async_task(someobject, arg1, arg2):
    # do something with someobject

where someobject is a persistent object in your ZODB. IAsyncService.queueJob takes at least a function and a context object, but you can add as many further arguments as you need to execute your task. The arguments must be picklable.
The task will then be executed by an async worker instance when it can, outside of the context of the current request.
Just to give you more options, you could try collective.taskqueue for that: it is really simple and really powerful (and avoids some of the drawbacks of plone.app.async).
The description on PyPI is already enough to get you up to speed in no time, and you can use Redis for queue management, which is a big plus.
