I am currently dispatching queued jobs to send API events instantly. In busy times these queued jobs need to be held until overnight, when the API is less busy. How can I hold these queued jobs, or schedule them to run only from 01:00 the following day?
The queued job dispatch currently looks like:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli');
There are other jobs on the same queue, all of which will also need to be held during busy times.
Use delay to run the job at a certain time.
EliQueueIdentity::dispatch($EliIdentity->id)
->onQueue('eli')
->delay($this->scheduleDate());
Here is a helper for calculating the time. It handles the edge case between 00:00 and 01:00, where the job would otherwise be delayed a whole day. Since you didn't specify how to determine "busy", busy() is left as a stub for you to implement.
private function scheduleDate()
{
    $now = Carbon::now();

    // busy() is your own check for whether the API is currently busy.
    if (! $this->busy()) {
        return $now;
    }

    // Edge case: between 00:00 and 01:00, delay until 01:00 today
    // instead of a whole day later.
    if ($now->hour <= 1) {
        $now->setTime(1, 0, 0);
        return $now;
    }

    // Otherwise delay until 01:00 tomorrow.
    return Carbon::tomorrow()->addHour();
}
You can use delayed dispatching (see https://laravel.com/docs/6.x/queues#delayed-dispatching):
// Run it 10 minutes later:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli')->delay(
now()->addMinutes(10)
);
Or pass another Carbon instance, like:
// Run it at the end of the current week (I believe this is Sunday 23:59, haven't checked).
->delay(Carbon::now()->endOfWeek());
// Or run it on February 2nd, 2020 at 00:00.
->delay(Carbon::createFromFormat('Y-m-d', '2020-02-02'));
You get the picture.
I am struggling to properly test this scenario and could not really match the numbers. Could you please verify whether this configuration is correct for the scenario below?
When a message first reaches the consumer, I want to retry on these exceptions: WebException, HttpRequestException, RequestTimeoutException, TimeoutException. After those retries are exhausted, I want to redeliver the messages (only for the above exceptions) using the delayed exchange, with delays of 2 minutes, then 4 minutes, then 6 minutes; after 3 attempts, stop redelivering and push the message to the _error queue.
I want UseMessageRetry() to execute only the first time, and not every time the message reaches the consumer through the delayed exchange.
cfg.UseDelayedExchangeMessageScheduler();
cfg.ReceiveEndpoint(rabbitMqConfig.QueueName, e =>
{
    e.PrefetchCount = 20;
    e.UseRateLimit(100, TimeSpan.FromMinutes(3));
    e.UseDelayedRedelivery(p =>
    {
        p.Intervals(TimeSpan.FromMinutes(2), TimeSpan.FromMinutes(4), TimeSpan.FromMinutes(6));
    });
    e.UseCircuitBreaker(cb =>
    {
        cb.TrackingPeriod = TimeSpan.FromMinutes(1);
        cb.TripThreshold = 15;
        cb.ActiveThreshold = 10;
        cb.ResetInterval = TimeSpan.FromMinutes(5);
    });
    e.UseMessageRetry(r =>
    {
        r.Incremental(2, TimeSpan.FromSeconds(3), TimeSpan.FromSeconds(6));
        r.Handle<WebException>();
        r.Handle<HttpRequestException>();
        r.Handle<TimeoutException>();
        r.Handle<RequestTimeoutException>();
    });
    e.Consumer<Consumers.ProductConsumer>(provider);
});
I have a .NET Core application with a Hangfire implementation.
There is a recurring job that runs every minute, as below:
RecurringJob.AddOrUpdate<IS2SScheduledJobs>(x => x.ProcessInput(), Cron.MinuteInterval(1));
var hangfireOptions = new BackgroundJobServerOptions
{
    WorkerCount = 20,
};
_server = new BackgroundJobServer(hangfireOptions);
ProcessInput() internally checks a BlockingCollection() of IDs to process; it keeps processing continuously.
There are times when the first ten ProcessInput() jobs keep processing on 10 workers, while the other new ProcessInput() jobs just get enqueued.
For this reason, I want to increase the worker count to around 50, so that 50 ProcessInput() jobs would be processed in parallel.
Please suggest.
Thanks.
In the .NET Core version you can set the worker count when adding the Hangfire Server:
services.AddHangfireServer(options => options.WorkerCount = 50);
You can try increasing the worker count using this method. You have to know the number of processors running on your server. For example, to get 100 workers on a 4-processor machine, you can try:
var options = new BackgroundJobServerOptions { WorkerCount = Environment.ProcessorCount * 25 };
app.UseHangfireServer(options);
Between 4 and 5 o'clock a remote system is regularly down.
This means some cron jobs produce exceptions.
Is there a way to ignore these exceptions?
Exceptions before or after that time period are still important.
This is currently not possible with Sentry.
If you want you can watch this GitHub Sentry issue: Mute whole projects in case of maintenance downtime #1517.
Actually, there is a workaround for that:
Sentry.init(options -> {
    options.setBeforeSend((event, hint) -> {
        // Drop events raised between 04:00 and 05:00, when the remote system is down.
        int hour = java.time.LocalTime.now().getHour();
        if (hour == 4) {
            return null;
        }
        return event;
    });
});
I have created an application using IBM XMS .NET. All is good and I am able to read the messages from the queue.
I want to read only those messages which are older than 2 minutes.
How do I use a selector in this case? Below is the code I have created.
var time = 120000; // 2 minutes in milliseconds
var currentTime = (DateTime.Now - DateTime.MinValue).TotalMilliseconds; // current time in milliseconds
long finaltime = Convert.ToInt64(currentTime - time); // time in milliseconds after subtracting 2 minutes
var dtt = Convert.ToInt64(((new DateTime(1970, 01, 01, 01, 00, 00)) - DateTime.MinValue).TotalMilliseconds); // milliseconds up to 1970
finaltime = finaltime - dtt; // subtract the milliseconds up to 1970, as JMSTimestamp counts from 1970
string selector = "JMSTimestamp <= " + finaltime.ToString();
Here the selector gets set to a fixed value, for example 1454322340382.
How can I make the selector use the latest DateTime.Now and then look for messages older than DateTime.Now - 2 minutes?
Selecting those messages that are older than 2 minutes is probably the most inefficient way to look at them. You don't say why you want to do this. If you simply want to discard them, then I suggest you have the producer of the messages add an expiry time to them, as sketched below. If you cannot get the producer of the messages to change their application to do this, then consider using the CAPEXPRY administrative override.
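For example, a minimal producer-side sketch in XMS .NET might look like the following (illustrative only; it assumes an existing ISession session and IDestination destination, and that IMessageProducer.TimeToLive is given in milliseconds):
// Every message sent here expires 2 minutes after it is put,
// so nothing older than 2 minutes is left on the queue.
IMessageProducer producer = session.CreateProducer(destination);
producer.TimeToLive = 2 * 60 * 1000;
ITextMessage message = session.CreateTextMessage("payload");
producer.Send(message);
producer.Close();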
If you want to look at them, then browsing through all the messages and only operating on those which are the right age would be more efficient than selecting them, I'm sure.
Because the selector is passed as a parameter at the time of creating the consumer, it cannot be changed without closing and recreating the consumer.
MessageConsumer receiver;
receiver = session.createConsumer(stockQueue, selector);
Update:
Evaluation of the selector expression happens during creation of the consumer. The DateTime.Now - 2 minutes expression evaluates to a fixed value and does not change, for example "JMSTimestamp <= 1454322340382". So when a consumer is created with that selection string, the consumer will get only those messages that match that condition.
That is fine as far as it goes. But while the consumer is getting messages, new messages can arrive on the queue, and those messages may become older than 2 minutes by the time the consumer attempts to get them. The consumer will not get those messages, even though they are older than two minutes, because their JMSTimestamp is higher, for example 1454666666666. To remove such messages you will have to close the consumer and create it again with an updated selector condition.
Hope I am clear.
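A minimal sketch of that recreate-with-a-fresh-selector approach in XMS .NET could look like the following (illustrative only; session and destination are assumed to already exist):
// Recompute the 2-minute cut-off each time, then build a new consumer with it.
// JMSTimestamp is milliseconds since the 1970 epoch, UTC.
long cutoff = DateTimeOffset.UtcNow.AddMinutes(-2).ToUnixTimeMilliseconds();
string selector = "JMSTimestamp <= " + cutoff;
IMessageConsumer consumer = session.CreateConsumer(destination, selector);
IMessage message = consumer.Receive(5000); // wait up to 5 seconds for a matching message
// ... process the matching messages ...
consumer.Close(); // close before creating a consumer with an updated selector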
For your use case, I would go for the MQ Base .NET API instead of XMS .NET: first browse the messages, and if a message is older than 2 minutes, remove it.
queueBrowse = queueManager.AccessQueue(strQueueName, MQC.MQOO_BROWSE + MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING);
try
{
    // Browse the next message without removing it from the queue.
    MQMessage msgBrowse = new MQMessage();
    MQGetMessageOptions mqgmoBrowse = new MQGetMessageOptions();
    mqgmoBrowse.Options = MQC.MQGMO_BROWSE_NEXT;
    queueBrowse.Get(msgBrowse, mqgmoBrowse);

    TimeSpan ts = DateTime.Now.ToUniversalTime().Subtract(msgBrowse.PutDateTime);
    if (ts.TotalMinutes > 2)
    {
        // Destructively get the browsed message by matching on its message ID.
        MQMessage msgDelete = new MQMessage();
        msgDelete.MessageId = msgBrowse.MessageId;
        MQGetMessageOptions mqgmo = new MQGetMessageOptions();
        mqgmo.MatchOptions = MQC.MQMO_MATCH_MSG_ID;
        queueBrowse.Get(msgDelete, mqgmo);
        Console.WriteLine("Message older than 2 minutes deleted");
    }
    else
    {
        Console.WriteLine("Message not older than 2 minutes");
    }
}
catch (MQException ex)
{
    // Reason code 2033 (MQRC_NO_MSG_AVAILABLE) means there are no more messages to browse.
}
I have a single ActorA that reads from an input stream and sends messages to a group of ActorB's. When ActorA reaches the end of the input stream it cleans up its resources, broadcasts a Done message to the ActorB's, and shuts itself down.
I have approx 12 ActorB's that send messages to a group of ActorC's. When an ActorB receives a Done message from ActorA then it cleans up its resources and shuts itself down, with the exception of the last surviving ActorB which broadcasts a Done message to the ActorC's before it shuts itself down.
I have approx 24 ActorC's that send messages to a single ActorD. Similar to the ActorB's, when each ActorC gets a Done message it cleans up its resources and shuts itself down, with the exception of the last surviving ActorC which sends a Done message to ActorD.
When ActorD gets a Done message it cleans up its resources and shuts itself down.
Initially I had the ActorB's and ActorC's immediately propagate the Done message when they received it, but this might cause the ActorC's to shut down before all of the ActorB's have finished processing their queues; likewise the ActorD might shut down before the ActorC's have finished processing their queues.
My solution is to use an AtomicInteger that is shared among the ActorB's:
class ActorB(private val actorCRouter: ActorRef,
             private val actorCount: AtomicInteger) extends Actor {

  private val init = {
    actorCount.incrementAndGet()
    ()
  }

  def receive = {
    case Done => {
      if (actorCount.decrementAndGet() == 0) {
        actorCRouter ! Broadcast(Done)
      }
      // clean up resources
      context.stop(self)
    }
  }
}
ActorC uses similar code, with each ActorC sharing an AtomicInteger.
At present all actors are initialized in a web service method, with the downstream ActorRef's passed in the upstream actors' constructors.
Is there a preferred way to do this, e.g. using calls to Akka methods instead of an AtomicInteger?
Edit: I'm considering the following as a possible alternative: when an actor receives a Done message it sets the receive timeout to 5 seconds (the program will take over an hour to run, so delaying cleanup/shutdown by a few seconds won't impact the performance); when the actor gets a ReceiveTimeout it broadcasts Done to the downstream actors, cleans up, and shuts down. (The routers for ActorB and ActorC are using a SmallestMailboxRouter)
class ActorB(private val actorCRouter: ActorRef) extends Actor {

  def receive = {
    case Done => {
      context.setReceiveTimeout(Duration.create(5, SECONDS))
    }
    case ReceiveTimeout => {
      actorCRouter ! Broadcast(Done)
      // clean up resources
      context.stop(self)
    }
  }
}
Sharing actorCount among related actors is not a good thing to do. An actor should only use its own state to handle messages.
How about having an ActorBCompletionHandler actor for the actors of type ActorB? Every ActorB will have a reference to the ActorBCompletionHandler actor. Every time an ActorB receives a Done message it can do the necessary cleanup and simply pass the Done message on to the ActorBCompletionHandler. The ActorBCompletionHandler will maintain a state variable for the count, and every time it receives a Done message it simply updates that counter. As this is solely the state of one actor, there is no need for it to be atomic and hence no need for any explicit locking. The ActorBCompletionHandler sends Done to the ActorC's once it receives the last Done message.
This way the count is not shared among actors but managed only by the ActorBCompletionHandler. The same thing can be repeated for the other types.
A -> B's -> BCompletionHandler -> C's -> CCompletionHandler -> D
Another approach could be to have one monitoring actor for every related group of actors. Using the watch API and the child Terminated event on the monitor, you can decide what to do once you receive the last Done message.
// Inside the monitoring actor: create and watch a child.
val child = context.actorOf(Props[ChildActor])
context.watch(child)

// In the monitor's receive block:
case Terminated(child) => {
  log.info(child + " child actor terminated")
}