How to schedule @RabbitListener - spring-boot

As per the requirement, I don't want to consume messages from the queue for a couple of hours each day.

/* Consume from 9AM to 5PM */
@Scheduled(cron = "* * 9-16 * * *")
@RabbitListener(queues = "${QUEUE_NAME}")
public void processMessage(SomeMessage message) {
}

I see a few options.

Keep your application running only when consumption is required
The application could be started by cron (or another scheduler) and, once started, schedule itself to stop after some time.
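A minimal sketch of the self-stopping variant (the bean wiring, 5PM cron time, and exit code are illustrative assumptions, not from the original answer):

    @Component
    public class ScheduledShutdown {

        @Autowired
        private ApplicationContext context;

        // An external cron entry starts the application at 9AM; this stops it at 5PM.
        @Scheduled(cron = "0 0 17 * * *")
        public void shutdown() {
            System.exit(SpringApplication.exit(context, () -> 0));
        }
    }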
Consume messages in an imperative, not declarative, way
Use a polling consumer: https://docs.spring.io/spring-amqp/docs/2.1.4.RELEASE/reference/#polling-consumer
Call the org.springframework.amqp.core.AmqpTemplate#receive method in a loop, and make sure the loop runs only during the scheduled hours.
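A sketch of that loop, assuming a cron trigger keeps it inside the allowed window (amqpTemplate, queueName, and process are placeholders):

    // Runs every minute from 9:00 to 16:59; outside that window nothing polls.
    @Scheduled(cron = "0 * 9-16 * * *")
    public void drainQueue() {
        Message message;
        // receive(queue, timeout) returns null once the queue is empty,
        // so the loop ends when the backlog is drained.
        while ((message = amqpTemplate.receive(queueName, 1000)) != null) {
            process(message);
        }
    }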
Use Delayed Messages
https://www.cloudamqp.com/docs/delayed-messages.html
This requires changes in the producer; the consumer can run all the time. If you delay the message when sending it, it will be delivered according to your schedule.
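For example, with the rabbitmq_delayed_message_exchange plugin and an exchange declared with type x-delayed-message, the producer side could look like this sketch (the exchange and routing key names are assumptions):

    public void sendDelayed(Object payload, int delayMillis) {
        rabbitTemplate.convertAndSend("delayed-exchange", "some-routing-key", payload, msg -> {
            // Spring AMQP turns this into the x-delay header the plugin reads.
            msg.getMessageProperties().setDelay(delayMillis);
            return msg;
        });
    }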

Related

Laravel - Throttling email sends with job middleware

An application that I'm making will allow users to set up automatic email campaigns to email their list of users (up to x per day).
I need a way of making sure that this is throttled so that too many aren't sent within a given window. Right now I'm trying to work within the confines of a free Mailtrap plan, but even on production using Sendgrid, I want a sensible throttle.
So say a user has set their automatic time to 9am and there are 30 users eligible to receive requests at that date and time. Every review_request gets a record in the DB. Upon model creation, an event listener is triggered which then dispatches a job.
This is the handle method of the job that is dispatched:
/**
 * Execute the job.
 *
 * @return void
 */
public function handle()
{
    Redis::throttle('request-' . $this->reviewRequest->id)
        ->block(0)->allow(1)->every(5)
        ->then(function () {
            // Lock obtained...
            $message = new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type);
            Mail::to($this->customer->email)
                ->send(
                    $message
                );
        }, function () {
            // Could not obtain lock...
            return $this->release(5);
        });
}
The above is taken from https://laravel.com/docs/8.x/queues#job-middleware:
"For example, consider the following handle method which leverages Laravel's Redis rate limiting features to allow only one job to process every five seconds:"
I am using Horizon to view the jobs. When I run my command to send emails (about 25 requests to be sent), all jobs seem to process instantly, not one every 5 seconds as I would expect.
The exception for the failed jobs is:
Swift_TransportException: Expected response code 354 but got code "550", with message "550 5.7.0 Requested action not taken: too many emails per second
Why does the above Redis throttle not process a single job every 5 seconds? And how can I achieve this?

DefaultMessageListenerContainer stops processing messages

I'm hoping this is a simple configuration issue but I can't seem to figure out what it might be.
Set-up
Spring-Boot 2.2.2.RELEASE
cloud-starter
cloud-starter-aws
spring-jms
spring-cloud-dependencies Hoxton.SR1
amazon-sqs-java-messaging-lib 1.0.8
Problem
My application starts up fine and begins to process messages from Amazon SQS. After some amount of time I see the following warning
2020-02-01 04:16:21.482 LogLevel=WARN 1 --- [ecutor-thread14] o.s.j.l.DefaultMessageListenerContainer : Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers.
The above warning gets printed multiple times and eventually I see the following two INFO messages
2020-02-01 04:17:51.552 LogLevel=INFO 1 --- [ecutor-thread40] c.a.s.javamessaging.SQSMessageConsumer : Shutting down ConsumerPrefetch executor
2020-02-01 04:18:06.640 LogLevel=INFO 1 --- [ecutor-thread40] com.amazon.sqs.javamessaging.SQSSession : Shutting down SessionCallBackScheduler executor
The above 2 messages display several times, and at some point no more messages are consumed from SQS. I don't see anything else in my log to indicate an issue, but my handlers (I have 2~) give no indication that they are processing messages, and I can see the AWS SQS queue growing in message count and age.
~: This exact code was working fine when I had a single handler; the problem started when I added the second one.
Configuration/Code
The first "WARNing" I realize is caused by the currency of the ThreadPoolTaskExecutor, but I can not get a configuration which works properly. Here is my current configuration for the JMS stuff, I have tried various levels of max pool size with no real affect other than the warings start sooner or later based on the pool size
public ThreadPoolTaskExecutor asyncAppConsumerTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setThreadGroupName("asyncConsumerTaskExecutor");
    taskExecutor.setThreadNamePrefix("asyncConsumerTaskExecutor-thread");
    taskExecutor.setCorePoolSize(10);
    // Allow the thread pool to grow up to 4 times the core size, evidently not
    // having the pool be larger than the max concurrency causes the JMS queue
    // to barf on itself with messages like
    // "Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers"
    taskExecutor.setMaxPoolSize(10 * 4);
    taskExecutor.setQueueCapacity(0); // do not queue up messages
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.setAwaitTerminationSeconds(60);
    return taskExecutor;
}
Here is the JMS container factory we create:
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    // The JMS processor will start 'concurrency' number of tasks
    // and supposedly will increase this to the max of '10 * 3'
    factory.setConcurrency(10 + "-" + (10 * 3));
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    // Let the task process 100 messages, default appears to be 10
    factory.setMaxMessagesPerTask(100);
    // Wait up to 5 seconds for a timeout, this keeps the task around a bit longer
    factory.setReceiveTimeout(5000L);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
I added the setMaxMessagesPerTask and setReceiveTimeout calls based on things found on the internet; the problem persists without them and at various settings (50, 2500L, 25, 1000L, etc.).
We create a default SQS connection factory:
public SQSConnectionFactory sqsConnectionFactory(AmazonSQS amazonSQS) {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQS);
}
Finally the handlers look like this
#JmsListener(destination = "consumer-event-queue")
public void receiveEvents(String message) throws IOException {
MyEventDTO myEventDTO = jsonObj.readValue(message, MyEventDTO.class);
//messageTask.process(myEventDTO);
}
#JmsListener(destination = "myalert-sqs")
public void receiveAlerts(String message) throws IOException, InterruptedException {
final MyAlertDTO myAlert = jsonObj.readValue(message, MyAlertDTO.class);
myProcessor.addAlertToQueue(myAlert);
}
You can see in the first function (receiveEvents) we just take the message from the queue and exit; we have not implemented the processing code for it yet.
The second function (receiveAlerts) gets the message, and the myProcessor.addAlertToQueue function creates a runnable object and submits it to a thread pool to be processed at some point in the future.
The problem (the warnings, INFO messages, and failure to consume) only started when we added the receiveAlerts function; previously the other function was the only one present and we did not see this behavior.
More
This is part of a larger project and I am working on breaking this code out into a smaller test case to see if I can duplicate this issue. I will post a follow-up with the results.
In the Meantime
I'm hoping this is just a config issue and someone more familiar with this can tell me what I'm doing wrong, or that someone can provide some thoughts and comments on how to correct this to work properly.
Thank you!
After fighting this one for a bit, I think I finally resolved it.
The issue appears to be due to the DefaultJmsListenerContainerFactory: this factory creates a new DefaultMessageListenerContainer for EACH method with a @JmsListener annotation. The person who originally wrote the code thought it was only called once for the application and that the created container would be re-used. So the issue was two-fold:
1. The ThreadPoolTaskExecutor attached to the factory had 40 threads.
2. When the application had 1 @JmsListener method this worked fine, but when we added a second method, each method got 10 threads (20 total) for listening. That in itself is fine; however, since we stated that each listener could grow up to 30 consumers, we quickly ran out of threads in the pool mentioned in 1 above. This caused the "Number of scheduled consumers has dropped below concurrentConsumers limit" error.
This is probably obvious given the above, but I wanted to call it out explicitly: in the listener factory we set the concurrency to "10-30", yet all of the listeners have to share that pool. The max concurrency has to be set so that each listener's maximum is small enough that, even if every listener creates its maximum, the total does not exceed the number of threads in the pool (e.g. if we have 2 @JmsListener annotated methods and a pool with 40 threads, then the max value can be no more than 20).
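A sketch of the corrected sizing with the numbers above (two listeners sharing one 40-thread pool):

    // In the executor bean: the pool must cover both listeners at full concurrency.
    taskExecutor.setCorePoolSize(10);
    taskExecutor.setMaxPoolSize(40);

    // In the container factory: cap each listener at 20, so 2 x 20 = 40 <= pool size.
    factory.setConcurrency("10-20");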
Hopefully this might help someone else with a similar issue in the future....

How to balance multiple message queues

I have a task that is potentially long running (hours). The task is performed by multiple workers (AWS ECS instances in my case) that read from a message queue (AWS SQS in my case). I have multiple users adding messages to the queue. The problem is that if Bob adds 5000 messages to the queue, enough to keep the workers busy for 3 days, and Alice then comes along wanting to process 5 tasks, Alice will need to wait 3 days before any of her tasks even start.
I would like to feed messages to the workers from Alice and Bob at an equal rate as soon as Alice submits tasks.
I have solved this problem in another context by creating multiple queues (subqueues) for each user (or even each batch a user submits) and alternating between all subqueues when a consumer asks for the next message.
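A hedged sketch of that sub-queue approach with the AWS SDK v1 (the class name, wiring, and queue URLs are illustrative, not from the original setup):

    import java.util.List;
    import java.util.Optional;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class RoundRobinConsumer {

        private final AmazonSQS sqs;
        private final List<String> queueUrls; // one sub-queue per user (or per batch)
        private int next = 0;

        public RoundRobinConsumer(AmazonSQS sqs, List<String> queueUrls) {
            this.sqs = sqs;
            this.queueUrls = queueUrls;
        }

        // Visit each sub-queue at most once per call, resuming after the last
        // queue served, so one user's backlog cannot starve the others.
        public Optional<Message> nextMessage() {
            for (int i = 0; i < queueUrls.size(); i++) {
                String url = queueUrls.get(next);
                next = (next + 1) % queueUrls.size();
                List<Message> msgs = sqs.receiveMessage(
                        new ReceiveMessageRequest(url).withMaxNumberOfMessages(1)).getMessages();
                if (!msgs.isEmpty()) {
                    return Optional.of(msgs.get(0));
                }
            }
            return Optional.empty(); // every sub-queue is currently empty
        }
    }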
This seems, at least in my world, to be a common problem, and I'm wondering if anyone knows of an established way of solving it.
I don't see any solution with ActiveMQ. I've looked a little at Kafka, with its ability to round-robin partitions in a topic, and that may work. Right now, I'm implementing something using Redis.
I would recommend Cadence Workflow instead of queues, as it supports long-running operations and state management out of the box.
In your case I would create a workflow instance per user. Every new task would be sent to the user's workflow via the signal API. The workflow instance would then queue up the received tasks and execute them one by one.
Here is an outline of the implementation:
public interface SerializedExecutionWorkflow {

    @WorkflowMethod
    void execute();

    @SignalMethod
    void addTask(Task t);
}

public interface TaskProcessorActivity {

    @ActivityMethod
    void process(Task poll);
}

public class SerializedExecutionWorkflowImpl implements SerializedExecutionWorkflow {

    private final Queue<Task> taskQueue = new ArrayDeque<>();
    private final TaskProcessorActivity processor = Workflow.newActivityStub(TaskProcessorActivity.class);

    @Override
    public void execute() {
        while (!taskQueue.isEmpty()) {
            processor.process(taskQueue.poll());
        }
    }

    @Override
    public void addTask(Task t) {
        taskQueue.add(t);
    }
}
And then the code that enqueues a task to the workflow through the signal method:
private void addTask(WorkflowClient cadenceClient, Task task) {
    // Set workflowId to userId
    WorkflowOptions options = new WorkflowOptions.Builder().setWorkflowId(task.getUserId()).build();
    // Use the workflow interface stub to start/signal the workflow instance
    SerializedExecutionWorkflow workflow = cadenceClient.newWorkflowStub(SerializedExecutionWorkflow.class, options);
    BatchRequest request = cadenceClient.newSignalWithStartRequest();
    request.add(workflow::execute);
    request.add(workflow::addTask, task);
    cadenceClient.signalWithStart(request);
}
Cadence offers a lot of other advantages over using queues for task processing:
Built-in exponential retries with an unlimited expiration interval.
Failure handling. For example, it allows executing a task that notifies another service if both updates could not succeed within a configured interval.
Support for long-running, heartbeating operations.
The ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA).
Complete visibility into the current state of the update. For example, when using queues all you know is whether there are some messages in a queue, and you need an additional DB to track overall progress; with Cadence every event is recorded.
The ability to cancel an update in flight.
See the presentation that goes over the Cadence programming model.

Delay on StreamListener or condition

I'm reading the docs, but I'm not sure if this is possible with spring-cloud-stream using the binder for Kinesis.
I want to delay consuming messages from the stream, via some configuration or a positive condition.
For example, I want the consumer to process a message only 30 minutes after it was created.
My first approximation was to use a condition with SpEL based on a message header and the current time, but the condition is created at startup, so the new Date is always the same.
I know that the condition in the code below is invalid.
@StreamListener(StreamProcessor.MY_STREAM, condition = "#{headers['creation-date'] + 30minutes < new java.util.Date().getTime()}")
public void checkOut(Message<String> myMessage) {
    // Do something
}
Do you know if this is possible without sleeping threads?
All you need is to use a polled consumer; that way you have full control over frequency, acks, etc.
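A minimal sketch of that pattern, assuming spring-cloud-stream's PollableMessageSource (the binding name, poll rate, and the 30-minute check are illustrative):

    public interface PolledSink {
        @Input("myStream")
        PollableMessageSource input();
    }

    @EnableBinding(PolledSink.class)
    public class DelayedConsumer {

        @Autowired
        private PolledSink channels;

        // Pull on our own schedule instead of having the binder push messages at us.
        @Scheduled(fixedDelay = 5000)
        public void poll() {
            channels.input().poll(message -> {
                Long created = (Long) message.getHeaders().get("creation-date");
                if (created != null && System.currentTimeMillis() - created >= 30 * 60 * 1000) {
                    // Old enough: process it.
                }
                // A real implementation must decide what to do with too-young
                // messages (e.g. requeue or re-publish), since returning from
                // the handler acknowledges them.
            });
        }
    }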

Spring Scheduler stops working for my cron expression

I have a method scheduled to run periodically with Spring Scheduler. It has been working fine but stopped working today with no error. What could be the potential cause? Is there an alternative way to schedule tasks periodically using Spring Scheduler that ensures the method will be executed no matter what?
#Scheduled(cron="0 0/1 * * * ?")
public void executePollingFlows(){
if(applicationConfig.isScheduleEnabled()) {
for (long flowId : applicationConfig.getPollingFlowIds()) {
flowService.executeFlow(flowId);
}
logger.info("Finished executing all polling flows at {}", new Date());
}
}
You may have hit an out-of-memory condition if the job could not finish its tasks but kept being triggered again and again. If that is the case, you can create a thread pool and check it on every run; if there is not enough space in the pool, skip the task for that turn.
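A sketch of that skip-if-busy idea (the single-thread pool and discard policy are assumptions layered on the suggestion above):

    // One worker, no queue: if a run is still in progress, a new submission
    // is rejected and silently discarded instead of piling up.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<>(),
            new ThreadPoolExecutor.DiscardPolicy());

    @Scheduled(cron = "0 0/1 * * * ?")
    public void executePollingFlowsGuarded() {
        pool.execute(this::executePollingFlows);
    }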
There is an alternative way to use @Scheduled periodically. You may replace your @Scheduled annotation with this:
@Scheduled(fixedRate = 1000)
It will still run every second, and if necessary you can add an initialDelay to it:
@Scheduled(initialDelay = 1000, fixedRate = 1000)
You can find more details about fixedRate, initialDelay, and fixedDelay here:
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html
