Spreading/smoothing periodic tasks out over time - algorithm

I have a database table with N records, each of which needs to be refreshed every 4 hours. The "refresh" operation is pretty resource-intensive. I'd like to write a scheduled task that runs occasionally and refreshes them, while smoothing out the spikes of load.
The simplest task I started with is this (pseudocode):
every 10 minutes:
    find all records that haven't been refreshed in 4 hours
    for each record:
        refresh it
        set its last refresh time to now
(Technical detail: "refresh it" above is asynchronous; it just queues a task for a worker thread pool to pick up and execute.)
What this causes is a huge resource (CPU/IO) usage spike every 4 hours, with the machine idling the rest of the time. Since the machine also does other stuff, this is bad.
I'm trying to figure out a way to get these refreshes to be more or less evenly spaced out -- that is, I'd want around N/(4 hours / 10 minutes), that is N/24, of those records to be refreshed on every run. Of course, it doesn't need to be exact.
Notes:
I'm fine with the algorithm taking time to start working (so say, for the first 24 hours there will be spikes but those will smooth out over time), as I only rarely expect to take the scheduler offline.
Records are constantly being added and removed by other threads, so we can't assume anything about the value of N between iterations.
I'm fine with records being refreshed every 4 hours +/- 20 minutes.

Do a full refresh, to get all your timestamps in sync. From that point on, every 10 minutes, refresh the oldest N/24 records.
The load will be steady from the start, and after 24 runs (4 hours), all your records will be updating at 4-hour intervals (if N is fixed). Insertions will decrease refresh intervals; deletions may cause increases or decreases, depending on the deleted record's timestamp. But I suspect you'd need to be deleting quite a lot (like, 10% of your table at a time) before you start pushing anything outside your 40-minute window. To be on the safe side, you could do a few more than N/24 each run.
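A minimal sketch of this "oldest N/24 each run, plus a little slack" approach, using an in-memory list of records as a stand-in for the database table (the record layout and helper names here are hypothetical, not from the original post):

```python
import math
import time

def pick_records_to_refresh(records, runs_per_cycle=24, slack=1.1):
    """Return the oldest ~N/24 records (plus a little slack) for this run."""
    n = len(records)
    if n == 0:
        return []
    batch = math.ceil(n / runs_per_cycle * slack)  # a few extra, to be safe
    oldest_first = sorted(records, key=lambda r: r["last_refresh"])
    return oldest_first[:batch]

def run_once(records, now=None):
    """One scheduled run: refresh the chosen batch and stamp it."""
    now = time.time() if now is None else now
    for record in pick_records_to_refresh(records):
        # In the real system this would queue work for the worker pool.
        record["last_refresh"] = now
```

In a real database you would express `pick_records_to_refresh` as an `ORDER BY last_refresh LIMIT batch` query rather than sorting in memory.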

Each minute:
    take all records older than 4:10 and refresh them
    if the previous step did not find a lot of records:
        take some of the oldest records older than 3:40 and refresh them
This should eventually make the last-update times more evenly spaced out. What "a lot" and "some" mean, you should decide yourself (possibly based on N).
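The two-tier selection above could be sketched like this; `a_lot` and `some` are left as the tunable parameters the answer mentions, and the record layout is a hypothetical stand-in for the table:

```python
# Age thresholds from the answer, in seconds.
HARD_AGE = 4 * 3600 + 10 * 60   # 4:10 -- definitely overdue
SOFT_AGE = 3 * 3600 + 40 * 60   # 3:40 -- eligible for early refresh

def select_for_refresh(records, now, a_lot, some):
    """Pick every record past the hard deadline; if that yields fewer
    than `a_lot`, top up with the oldest records past the soft deadline."""
    overdue = [r for r in records if now - r["last_refresh"] > HARD_AGE]
    if len(overdue) >= a_lot:
        return overdue
    aging = [r for r in records
             if SOFT_AGE < now - r["last_refresh"] <= HARD_AGE]
    aging.sort(key=lambda r: r["last_refresh"])   # oldest first
    return overdue + aging[:some]
```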

Give each record its own refreshing interval time, which is a random number between 3:40 and 4:20.
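A sketch of the per-record jitter idea, using the question's ±20-minute tolerance as the window (the record layout is again a hypothetical stand-in):

```python
import random

# Jitter window: 3:40 to 4:20, in seconds.
MIN_INTERVAL = 3 * 3600 + 40 * 60   # 13200
MAX_INTERVAL = 4 * 3600 + 20 * 60   # 15600

def assign_interval():
    """Give a record its own refresh interval; the randomness gradually
    de-synchronizes records that started out refreshing together."""
    return random.uniform(MIN_INTERVAL, MAX_INTERVAL)

def due_for_refresh(record, now):
    return now - record["last_refresh"] >= record["interval"]
```

A fresh interval can be drawn on every refresh so the jitter keeps re-shuffling records instead of fixing each one to a permanent phase.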

Related

What is the idle ramp up period for the 1000 users in the JMeter, while working on SharePoint online app?

While working on the SharePoint app, I noticed that it takes more than 10 seconds to load the app the first time. So I was wondering: how much ramp-up period would be idle for running 1000 users in JMeter?
Not sure about the "idle", do you mean "ideal" ramp-up?
I believe the slow initial load can be explained by how IIS internally works:
https://social.technet.microsoft.com/Forums/ie/en-US/297fb51b-b7b4-4b7b-a898-f6c91efd994e/sharepoint-2013-first-load-takes-long-time?forum=sharepointadmin
Taking all of that information into account, we can predict that high response times due to the long initial load will occur for at most several minutes at the beginning of your test, and will affect the users who are started during this period.
Finding out the "ideal" ramp-up time can be a little bit tricky.
I would address the problem like that:
Add the Response Times Over Time (https://jmeter-plugins.org/wiki/ResponseTimesOverTime/) graph.
Set the ramp-up time to 10 seconds per user and, using the graph added above, find out when the response times settle down. That will mean the initial load of the application is complete. Don't forget to exclude this period from the report.
Now you should observe that the response times are slowly growing with the addition of new users.
After the addition of all of the users, if everything went well, you can try to lower your ramp-up time to, say, 3600 seconds for all of the users.
If you see that the response times have skyrocketed and/or exceeded the SLA, then either you are adding users too fast, or the application is already saturated.

Will the last entry in Guava cache be removed within a definite time?

We use Guava for caching with an expireAfterAccess (1 minute) strategy. Typically in the morning the cache is accessed quite often, often enough to trigger its 'best-effort' maintenance.
My question is: what will happen to the last entry? After the last access there could be a gap of up to 20 hours. Will the last entry still be removed within 1 minute, or even a couple of minutes?

What is a good way to design and build a task scheduling system with lots of recurring tasks?

Imagine you're building something like a monitoring service, which has thousands of tasks that need to be executed in given time interval, independent of each other. This could be individual servers that need to be checked, or backups that need to be verified, or just anything at all that could be scheduled to run at a given interval.
You can't just schedule the tasks via cron though, because when a task is run it needs to determine when it's supposed to run the next time. For example:
schedule server uptime check every 1 minute
first time it's checked the server is down, schedule next check in 5 seconds
5 seconds later the server is available again, check again in 5 seconds
5 seconds later the server is still available, continue checking at 1 minute interval
A naive solution that came to mind is to simply have a worker that runs every second or so, checks all the pending jobs, and executes the ones that are due. But how would this work if the number of jobs is something like 100,000? It might take longer to check them all than the worker's ticking interval, and the more tasks there are, the longer each poll takes.
Is there a better way to design a system like this? Are there any hidden challenges in implementing this, or any algorithms that deal with this sort of a problem?
Use a priority queue (with the priority based on the next execution time) to hold the tasks to execute. When you're done executing a task, you sleep until the time for the task at the front of the queue. When a task comes due, you remove and execute it, then (if it's recurring) compute the next time it needs to run, and insert it back into the priority queue based on its next run time.
This way you have one sleep active at any given time. Insertions and removals have logarithmic complexity, so it remains efficient even if you have millions of tasks (e.g., inserting into a priority queue that has a million tasks should take about 20 comparisons in the worst case).
There is one point that can be a little tricky: if the execution thread is waiting until a particular time to execute the item at the head of the queue, and you insert a new item that goes at the head of the queue, ahead of the item that was previously there, you need to wake up the thread so it can re-adjust its sleep time for the item that's now at the head of the queue.
We encountered this same issue while designing Revalee, an open source project for scheduling triggered callbacks. In the end, we ended up writing our own priority queue class (we called ours a ScheduledDictionary) to handle the use case you outlined in your question. As a free, open source project, the complete source code (C#, in this case) is available on GitHub. I'd recommend that you check it out.

Spreading out data from bursts

I am trying to spread out data that is received in bursts. This means I have data that is received by some other application in large bursts. For each data entry I need to make some additional requests to a server, where I should limit the traffic. Hence I try to spread out the requests over the time I have until the next data burst arrives.
Currently I am using a token bucket to spread out the data. However, because the data I receive is already badly shaped, I am still either filling up the queue of pending requests, or I get spikes whenever a burst comes in. So this algorithm does not seem to do the kind of shaping I need.
What other algorithms are there available to limit the requests? I know I have times of high load and times of low load, so both should be handled well by the application.
I am not sure if I was really able to explain the problem I am currently having. If you need any clarifications, just let me know.
EDIT:
I'll try to clarify the problem some more and explain, why a simple rate limiter does not work.
The problem lies in the bursty nature of the traffic and the fact that bursts have different sizes at different times. What is mostly constant is the delay between bursts. Thus we get a bunch of data records for processing, and we need to spread them out as evenly as possible before the next bunch comes in. However, we are not 100% sure when the next bunch will come in, just approximately, so simply dividing time by the number of records does not work as it should.
Rate limiting does not work, because the spread of the data is not sufficient this way. If we are close to saturating the rate, everything is fine and we spread out evenly (although this should not happen too frequently). If we are below the threshold, though, the spreading gets much worse.
I'll make an example to make this problem more clear:
Let's say we limit our traffic to 10 requests per seconds and new data comes in about every 10 seconds.
When we get 100 records at the beginning of a time frame, we will query 10 records each second and have a perfectly even spread. However, if we get only 15 records, we'll have one second where we query 10 records, one second where we query 5 records, and 8 seconds where we query 0 records, so we have very unequal levels of traffic over time. Instead, it would be better to query 1.5 records each second. However, setting this rate would also cause problems, since new data might arrive earlier, so we do not have the full 10 seconds, and 1.5 queries per second would not be enough. If we use a token bucket, the problem actually gets even worse, because token buckets let bursts through at the beginning of the time frame.
However this example over simplifies, because actually we cannot fully tell the number of pending requests at any given moment, but just an upper limit. So we would have to throttle each time based on this number.
This sounds like a problem within the domain of control theory. Specifically, I'm thinking a PID controller might work.
A first crack at the problem might be dividing the number of records by the estimated time until next batch. This would be like a P controller - proportional only. But then you run the risk of overestimating the time, and building up some unsent records. So try adding in an I term - integral - to account for built up error.
I'm not sure you even need a derivative term, if the variation in batch size is random. So try using a PI loop - you might build up some backlog between bursts, but it will be handled by the I term.
If it's unacceptable to have a backlog, then the solution might be more complicated...
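A very rough sketch of the P and I terms described above, reduced to a single rate function; the gains and parameter names are illustrative, and a production controller would need tuning and anti-windup:

```python
def pacing_rate(new_records, estimated_gap, backlog, ki=0.2):
    """P term spreads the fresh burst over the expected gap; the I-like
    term additionally drains backlog carried over from earlier bursts.

    new_records:   size of the burst that just arrived
    estimated_gap: estimated seconds until the next burst
    backlog:       records left unsent from previous intervals
    """
    p = new_records / estimated_gap
    i = ki * backlog / estimated_gap
    return p + i
```

With the example from the question (15 records, ~10 seconds to the next burst, no backlog) this yields the desired 1.5 requests per second, and any leftover records push the rate up slightly on the next interval instead of piling up.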
If there are no other constraints, what you should do is figure out the maximum rate at which you are comfortable sending additional requests, and limit your processing speed accordingly. Then monitor what happens. If that gets through all of your requests quickly, then there is no harm. If its sustained level of processing is not fast enough, then you need more capacity.

Reoccurring Events in Ruby

I've written a basic RESTful API in Sinatra/Ruby for handling betting markets. Due to the nature of fixed odds, I need to recalculate the current odds in the market over a specific interval, let's say five minutes. Normally you could put something like this in a cron job, or run clockwork etc. to fire off an event every five minutes; however, my trouble is that I might be running hundreds or thousands of these markets at once, and I don't want them all synced to the same clock if I can avoid it.
My first thought is to put an item into a delayed job queue for regenerating the odds at a specific interval; however, I can't guarantee that the job will run on time (or at all, in the event of the queue server going down).
The most elegant way of handling this would be to just have a time stamp in the database and automatically recalculate odds once a new bet comes in, if the odds are past a certain time. The downside to this is that if the betting is slow, I'm constantly going to be rejecting the newest bet because I'm invalidating the odds before I'm placing it. Not good.
Any thoughts?
Not sure that I understand the background of your problem completely, but maybe this strategy can be useful:
Split all your N markets into m chunks, say by taking their ID modulo m.
If your recalculation period is T, run a recalculation every T/m, recalculating only one chunk at a time.
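The chunking idea could be sketched like this (in Python rather than the question's Ruby, and with an illustrative function name); with T = 300 seconds and m = 10, a run fires every 30 seconds and touches only one tenth of the markets:

```python
def chunk_for_run(market_ids, run_index, m):
    """Recalculate only the markets whose ID falls in this run's slot.

    Over m consecutive runs, every market is visited exactly once,
    so each market is still recalculated once per full period T.
    """
    slot = run_index % m
    return [mid for mid in market_ids if mid % m == slot]
```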
