I am struggling to test this scenario properly and cannot get the numbers to match. Could you please verify whether this configuration is correct for the scenario below?
When a message reaches the consumer for the first time, I want to retry on these exceptions: WebException, HttpRequestException, RequestTimeoutException, and TimeoutException. After those retries are exhausted, I want to redeliver the message (only for the above exceptions) through a delayed exchange, with a 2 minute delay the first time, then 4 minutes, and finally 6 minutes; after 3 redeliveries it should stop and move the message to the _error queue.
I also want UseMessageRetry() to execute only on the first delivery, not every time the message reaches the consumer again through the delayed exchange.
cfg.UseDelayedExchangeMessageScheduler();
cfg.ReceiveEndpoint(rabbitMqConfig.QueueName, e =>
{
    e.PrefetchCount = 20;
    e.UseRateLimit(100, TimeSpan.FromMinutes(3));

    // redeliver through the delayed exchange: 2, 4, then 6 minutes
    e.UseDelayedRedelivery(p =>
    {
        p.Intervals(TimeSpan.FromMinutes(2), TimeSpan.FromMinutes(4), TimeSpan.FromMinutes(6));
    });

    e.UseCircuitBreaker(cb =>
    {
        cb.TrackingPeriod = TimeSpan.FromMinutes(1);
        cb.TripThreshold = 15;
        cb.ActiveThreshold = 10;
        cb.ResetInterval = TimeSpan.FromMinutes(5);
    });

    // 2 immediate in-memory retries at increasing intervals (3s, then 3s + 6s)
    e.UseMessageRetry(r =>
    {
        r.Incremental(2, TimeSpan.FromSeconds(3), TimeSpan.FromSeconds(6));
        r.Handle<WebException>();
        r.Handle<HttpRequestException>();
        r.Handle<TimeoutException>();
        r.Handle<RequestTimeoutException>();
    });

    e.Consumer<Consumers.ProductConsumer>(provider);
});
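One adjustment I am considering: as written, UseDelayedRedelivery has no exception filters, so any exhausted failure would be redelivered, not just the four exceptions listed. The delayed redelivery configurator accepts the same Handle<T>() filters as the retry configurator, so the redelivery step could be restricted like this (a sketch, untested):

// sketch (untested): restrict redelivery to the same exception list
e.UseDelayedRedelivery(p =>
{
    p.Intervals(TimeSpan.FromMinutes(2), TimeSpan.FromMinutes(4), TimeSpan.FromMinutes(6));
    p.Handle<WebException>();
    p.Handle<HttpRequestException>();
    p.Handle<TimeoutException>();
    p.Handle<RequestTimeoutException>();
});

For detecting redelivered messages, MassTransit's GetRedeliveryCount() extension on ConsumeContext returns 0 on the first delivery and increments on each delayed redelivery, which could be used to branch the consumer's behaviour; I have not found a built-in switch that disables UseMessageRetry only for redeliveries.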
I have configured an AWS Lambda function so that it is triggered when an SQS queue receives a message. If any exception occurs while processing the message, the Lambda will retry once, and if it fails again the message will go to the DLQ.
The configuration is:
on the queue: maxReceiveCount = 2, redrive policy to the DLQ, VisibilityTimeout = 30 seconds
on my Lambda: ReservedConcurrentExecutions = 1, BatchSize = 1, timeout of 10 seconds
link to the SAM template
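For reference (the SAM template itself is linked above, not reproduced), a minimal sketch of the described settings; the resource and queue names are made up:

Resources:
  ReceiveFunction:                      # hypothetical name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.receiveHandler
      Runtime: nodejs12.x
      Timeout: 10
      ReservedConcurrentExecutions: 1
      Events:
        SqsEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MainQueue.Arn
            BatchSize: 1

  MainQueue:                            # hypothetical name
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 30
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
        maxReceiveCount: 2

  DeadLetterQueue:
    Type: AWS::SQS::Queue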
If I make the Lambda throw an error like this:
const receiveHandler = async (event) => {
  throw new Error('Booooooom')
};
exports.receiveHandler = receiveHandler;
Then I send a batch of 3 messages:
I see the failed invocations in CloudWatch,
then I see 1 retry for each message in CloudWatch,
then I receive the 3 messages in the dead-letter queue, each with a ReceiveCount of 3.
Conclusion: everything works as expected.
So I do another test. I change the code of my Lambda so that it will time out:
function delay(milliseconds) {
  return new Promise(resolve => {
    setTimeout(() => { resolve() }, milliseconds);
  })
}

const receiveHandler = async (event) => {
  // wait 20 seconds, longer than the Lambda's 10-second timeout
  await delay(20000);
};
exports.receiveHandler = receiveHandler;
I purge the DLQ, deploy the new stack, and send a new batch of 3 messages.
What happens is:
in CloudWatch I see my message 2 start being processed (1st attempt),
then I see my message 1 start being processed (1st attempt),
then I see my message 1 being processed again (retry),
then I see my message 3 start being processed (1st attempt).
All invocations ended in a timeout, as expected.
So 2 logs are missing:
the retry for message 2,
the retry for message 3.
However, when I poll the DLQ, what I see is:
3 messages in the DLQ,
ReceiveCount = 3 for each message.
So, if I believe the count in the DLQ, all my messages have been retried once. What could be the reason why CloudWatch is missing those 2 retries?
Edit: I did 2 successful tests:
if I triple the VisibilityTimeout and redo the test, I see all my logs;
if, instead, I triple the ReservedConcurrency, I also see all my logs.
So I think that, with the configuration I set, some of my messages have no room to be retried: the poller can still receive a message while the single reserved execution is busy, so its ReceiveCount increments without a logged invocation, and once it exceeds maxReceiveCount the message is moved to the DLQ instead.
Thank you for your help, guys!
I am currently dispatching queued jobs to send API events instantly. In busy times these queued jobs need to be held until overnight, when the API is less busy. How can I hold these queued jobs, or schedule them to only run from 01:00 the following day?
The queued job call currently looks like:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli');
There are other jobs on the same queue, all of which will need to be held in busy times.
Use delay to run the job at a certain time:
EliQueueIdentity::dispatch($EliIdentity->id)
    ->onQueue('eli')
    ->delay($this->scheduleDate());
A helper for calculating the time, handling the edge case between 00:00 and 01:00, where delaying until "tomorrow at 01:00" would hold the job almost a whole day. Since you did not specify how "busy" is determined, busy() is a pseudo method for you to implement.
private function scheduleDate()
{
    $now = Carbon::now();

    if (! $this->busy()) {
        return $now;
    }

    // edge case: between 00:00 and 01:00, delay to 01:00 today, not tomorrow
    if ($now->hour < 1) {
        return $now->setTime(1, 0, 0);
    }

    // otherwise tomorrow at 01:00
    return Carbon::tomorrow()->addHour();
}
You can use delayed dispatching (see https://laravel.com/docs/6.x/queues#delayed-dispatching):
// Run it 10 minutes later:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli')->delay(
    now()->addMinutes(10)
);
Or pass another Carbon instance, like:
// Run it at the end of the current week (I believe this is Sunday 23:59, haven't checked).
->delay(Carbon::now()->endOfWeek());
// Or run it on February 2nd 2020 at 00:00 (the ! in the format resets the unspecified time fields to zero).
->delay(Carbon::createFromFormat('!Y-m-d', '2020-02-02'));
You get the picture.
I have 1000 HTTP API requests to be made. I have kept them all as promises in an array. I want to execute them in "batches" of 100 at a time, not more than that, to avoid hitting any API rate limit or throttling.
While Bluebird provides the .map() function with a concurrency option, what it does is limit the number of calls in flight at a time. That is, it ensures that no more than 100 concurrent requests are being worked on at once: as soon as the 1st request resolves it begins processing the 101st request; it doesn't wait for all 100 to resolve before starting the next 100.
The "batching" behavior I am looking for is to first process the 100 requests, and ONLY AFTER all of the 100 requests have completed, begin the next 100 requests.
Does Bluebird provide any API out of the box to handle batches this way?
You can split the big urls array into an array of batches. For each batch, run Promise.map, which resolves when all async operations in the batch have finished, then run the batches in sequence using Array#reduce:
function readBatch(urls) {
  // resolves when every request in this batch has finished
  return Promise.map(urls, url => request(url));
}

function read(urlBatches) {
  // run the batches in sequence: each batch starts only after the previous one resolves
  return urlBatches.reduce((p, urls) => {
    return p.then(() => readBatch(urls));
  }, Promise.resolve());
}

const BATCH_SIZE = 100;
let urlBatches = [];
for (let i = 0; i < urls.length; i += BATCH_SIZE) {
  let batch = urls.slice(i, i + BATCH_SIZE);
  urlBatches.push(batch);
}

read(urlBatches)
  .then(() => { ... }) // will be called when all 1000 urls are processed
I have created an application using IBM XMS .NET. All is good and I am able to read the messages from the queue.
I want to read only those messages that are older than 2 minutes.
How do I use a selector in this case? Below is the code I have created.
var time = 120000; // 2 minutes in milliseconds
var currentTime = (DateTime.Now - DateTime.MinValue).TotalMilliseconds; // current time in milliseconds
long finaltime = Convert.ToInt64(currentTime - time); // time in milliseconds after subtracting 2 minutes
var dtt = Convert.ToInt64(((new DateTime(1970, 01, 01, 01, 00, 00)) - DateTime.MinValue).TotalMilliseconds); // milliseconds up to 1970
finaltime = finaltime - dtt; // subtract the milliseconds up to 1970, since JMSTimestamp counts from 1970
string selector = "JMSTimestamp <= " + finaltime.ToString();
Here the selector gets set to a fixed value, for example 1454322340382.
How can I make the selector pick up the latest DateTime.Now each time, and look for messages older than DateTime.Now minus 2 minutes?
Selecting those messages that are older than 2 minutes is probably the most inefficient way to look at them. You don't say why you want to do this. If you simply want to discard them, then I suggest you have the producer of the messages add an expiry time to them. If you cannot get the producer of the messages to change their application to do this, then consider using the CAPEXPRY administrative override.
If you want to look at them, then browsing through all the messages and only operating on those of the right age would be more efficient than selecting them, I'm sure.
Because the selector is passed as a parameter at the time the consumer is created, it cannot be changed without closing and recreating the consumer.
MessageConsumer receiver = session.createConsumer(stockQueue, selector);
Update:
Evaluation of the selector expression happens when the consumer is created. The DateTime.Now - 2 minutes expression evaluates to a fixed value that does not change afterwards, for example "JMSTimestamp <= 1454322340382". So a consumer created with that selection string will get only those messages that match that fixed condition.
That much is fine. But while the consumer is getting messages, new messages can arrive on the queue, and those messages may become older than 2 minutes by the time the consumer attempts to get them. The consumer will not get them, even though they are older than two minutes, because their JMSTimestamp is higher, for example 1454666666666. To pick up such messages you will have to close the consumer and create it again with an updated selector condition.
Hope I am clear.
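A minimal sketch of that close-and-recreate loop (assuming an existing XMS .NET session and queue destination; names are illustrative; JMSTimestamp is milliseconds since the 1970 UTC epoch, hence UtcNow):

// recompute the selector, then recreate the consumer with the fresh cutoff
long cutoff = (long)(DateTime.UtcNow.AddMinutes(-2)
    - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalMilliseconds;
string selector = "JMSTimestamp <= " + cutoff;

IMessageConsumer consumer = session.CreateConsumer(queue, selector);
try
{
    IMessage msg;
    // receive with a short wait; null means no matching (old) messages are left
    while ((msg = consumer.Receive(1000)) != null)
    {
        // process or discard the old message
    }
}
finally
{
    // the selector is fixed for the consumer's lifetime,
    // so close it before building a new one with a fresh cutoff
    consumer.Close();
}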
For your use case, I would go for the MQ base .NET API instead of XMS .NET: first browse each message, and if it is older than 2 minutes, remove it.
queueBrowse = queueManager.AccessQueue(strQueueName, MQC.MQOO_BROWSE + MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING);
try
{
    // browse the next message without removing it from the queue
    MQMessage msgBrowse = new MQMessage();
    MQGetMessageOptions mqgmoBrowse = new MQGetMessageOptions();
    mqgmoBrowse.Options = MQC.MQGMO_BROWSE_NEXT;
    queueBrowse.Get(msgBrowse, mqgmoBrowse);

    // PutDateTime is in UTC, so compare against UTC
    TimeSpan ts = DateTime.Now.ToUniversalTime().Subtract(msgBrowse.PutDateTime);
    if (ts.TotalMinutes > 2)
    {
        // a destructive get matched by message id removes it from the queue
        MQMessage msgDelete = new MQMessage();
        msgDelete.MessageId = msgBrowse.MessageId;
        MQGetMessageOptions mqgmo = new MQGetMessageOptions();
        mqgmo.MatchOptions = MQC.MQMO_MATCH_MSG_ID;
        queueBrowse.Get(msgDelete, mqgmo);
        Console.WriteLine("Message older than 2 minutes deleted");
    }
    else
    {
        Console.WriteLine("Message not older than 2 minutes");
    }
}
catch (MQException ex)
{
    // MQRC_NO_MSG_AVAILABLE (2033) here means the browse reached the end of the queue
}
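The snippet above handles a single message; to sweep the whole queue you would typically wrap it in a browse loop, for example (a sketch under the same assumptions as above):

while (true)
{
    MQMessage msgBrowse = new MQMessage();
    MQGetMessageOptions gmo = new MQGetMessageOptions();
    gmo.Options = MQC.MQGMO_BROWSE_NEXT;
    try
    {
        queueBrowse.Get(msgBrowse, gmo);
    }
    catch (MQException ex) when (ex.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)
    {
        break; // browsed past the last message
    }
    // ... same age check and destructive get as above ...
}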
I'm using NetMQ (NuGet 3.3.2.2) on .NET 4.5, and I have a single fast generator process with a PUSH socket and a single slow consumer process using a PULL socket. If I send enough messages to hit the send high-water mark (HWM), the sending process blocks the thread indefinitely.
Some contrived generator code which illustrates the problem:
using (var ctx = NetMQContext.Create())
using (var pushSocket = ctx.CreatePushSocket())
{
    pushSocket.Connect("tcp://127.0.0.1:42404");
    for (int i = 1; i <= 100000; i++)
    {
        var template = GenerateMessageBody(i);
        pushSocket.SendMoreFrame("SampleMessage").SendFrame(Messages.SerializeToByteArray(template));
        if (i % 1000 == 0)
            Console.WriteLine("Sent " + i + " messages");
    }
    Console.WriteLine("All finished");
    Console.ReadKey();
}
On my configuration, this usually reports that it has sent about 5000 or 6000 messages, and then simply blocks. If I set the send HWM to a large value (or 0, i.e. unlimited), it sends all of the messages as expected.
It looks like it's waiting to receive another command before it tries again, here (SocketBase.TrySend):
// Oops, we couldn't send the message. Wait for the next
// command, process it and try to send the message again.
// If timeout is reached in the meantime, return EAGAIN.
while (true)
{
    ProcessCommands(timeoutMillis, false);
From what I've read in the 0MQ guide, blocking on a full PUSH socket is the correct behaviour (and is what I want it to do); however, I would expect it to recover once the consumer has cleared its queue.
Short of using some sort of TrySend pattern and dealing with the block myself, is there some option I can set, or some other facility I can use, to have the PUSH socket attempt to resend blocked messages periodically?
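For reference, a minimal sketch of the TrySend pattern mentioned above, assuming the TrySendMultipartMessage overload from NetMQ's 3.3.x Send/Receive API; the one-second retry interval is arbitrary:

var message = new NetMQMessage();
message.Append("SampleMessage");
message.Append(Messages.SerializeToByteArray(template));

// returns false instead of blocking if the message cannot be queued
// within the timeout (e.g. because the send HWM is still reached)
while (!pushSocket.TrySendMultipartMessage(TimeSpan.FromSeconds(1), message))
{
    Console.WriteLine("Send HWM reached, retrying...");
}

This moves the blocking into application code, where you can log, back off, or drop messages as you see fit, rather than stalling inside the socket.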