Given requests at random times, return the requests from the last 1 minute
This was a question asked in a Microsoft technical interview. I could not find any more details about the problem. Can anyone suggest how to approach it?
This is really an interesting question, and it serves as a baseline for many cloud services that operate on the idea of throttling. The idea behind throttling is to limit the number of requests per second from a given client depending on the throughput they are paying for. An example of such a service is DynamoDB from AWS.
Since cloud services usually have a large number of clients and heavy traffic, one must design a solution that works at scale and under high load. A queue would indeed be the data structure of choice for such a scenario. However, would enqueuing and dequeuing millions of transactions per minute be efficient? A common way to avoid a long queue tail is to introduce a precision trade-off through batching.
A blog post that covers this concept in depth: https://medium.com/#saisandeepmopuri/system-design-rate-limiter-and-data-modelling-9304b0d18250
Let me know if you need any more explanation about the same. Cheers!
Make a queue.
Add new requests to the queue tail.
After every addition, and before every check, remove entries that are too old from the queue head.
When a check is needed, return the queue size.
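A minimal Python sketch of this idea (names are my own, not from the question), using a deque of timestamps:

    from collections import deque
    import time

    class RequestLog:
        """Keeps the requests seen in the last 60 seconds (illustrative sketch)."""
        WINDOW = 60.0

        def __init__(self):
            self._queue = deque()              # (timestamp, request) pairs, oldest at the head

        def _evict(self, now):
            # Drop entries older than the window from the head.
            while self._queue and now - self._queue[0][0] > self.WINDOW:
                self._queue.popleft()

        def add(self, request):
            now = time.monotonic()
            self._evict(now)
            self._queue.append((now, request))

        def last_minute(self):
            self._evict(time.monotonic())
            return [req for _, req in self._queue]     # or len(self._queue) for just the count

Whether you return the requests themselves or only the count depends on the exact interview wording, but the eviction-on-touch pattern is the same either way.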
In the context of a highly requested web service written in Go, I am considering caching some computations. For that, I am thinking of using Redis.
My application is susceptible to receiving an avalanche of requests containing the same payload, each of which triggers a costly computation. A cache would therefore pay off by allowing the computation to run only once.
Consider the following figure extracted from here
I use this figure because I think it helps me illustrate the problem. The figure considers the two general cases: the book is in the cache, or it is not. However, it does not consider the transitory case in which a book is being retrieved from the database while other "get-same-book" requests arrive. In this case, I would like to queue the repeated requests temporarily until the book is retrieved. Then, once the book has arrived, the queued requests are answered with the result, which would remain in the cache for fast retrieval of future requests.
So my question asks for approaches to implementing this requirement. I'm considering using a kind of table on the server (repository) that records the status of each database query (computing, ready), but this seems a little complicated because I would need to handle some race conditions.
So I would like to know if anyone knows this pattern, or if Redis itself implements it in some way (I have not found it in my research, but I suspect that using a Redis lock would be possible).
You can design it as you have described, but there are some things that are important.
Use a unique key
Use a unique key for each book, and if the book is ever changed, that key should also change. This design makes your step (6), saving the book in Redis, an idempotent operation (you can do it many times with the same result). That way you avoid any race condition with "get-same-book".
Idempotent requests OR asynchronous messages
I would like to queue the repeated requests temporarily until the book is retrieved. Next, once the book has already arrived, the queued requests are replied with the result
I would not recommend queuing requests as you describe. If the request is a cache miss, let it retrieve the book from the database, but design that retrieval to be idempotent. Alternatively, you could handle all requests asynchronously and use a message queue, e.g. NATS or RabbitMQ, but the complexity grows with that solution.
Serializing requests
My problem is that during that second of computation, before the result is available, many repeated requests can arrive, and because of the cost I need to avoid repeating their computations. I need to find a way to hold them until the result of the first request arrives.
It sounds like you want your computations serialized instead of run concurrently, because you want to avoid doing the same computation twice. To solve this, let the requests initialize the computation, e.g. by putting the input on a queue, perform the computations in serial order per key (still possibly concurrently across different keys), and finally notify the client, or better, let the client subscribe for updates.
Redis does have support for Pub/Sub, but whether it fits depends on the requirements you have on the clients. I would recommend a solution without locks, for scalability.
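To make the "unique key + idempotent save" idea concrete, here is a rough Python sketch with redis-py (the question is in Go, but the pattern is the same; the key scheme and compute_book are made up for illustration):

    import json
    import redis

    r = redis.Redis()

    def compute_book(book_id):
        # Stand-in for the costly computation / database fetch.
        return {"id": book_id, "title": "..."}

    def get_book(book_id, version):
        # Unique key: it changes whenever the book changes, so the SET below
        # is idempotent and concurrent "get-same-book" misses are harmless.
        key = f"book:{book_id}:v{version}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)              # cache hit

        book = compute_book(book_id)               # cache miss: compute (may happen in
                                                   # parallel, but every writer stores
                                                   # the same value under the same key)
        r.set(key, json.dumps(book), ex=3600)      # idempotent save, step (6)
        return book

If the duplicated computation itself is too expensive to tolerate even briefly, a SET with NX on a side key can elect a single "first" computer while the others poll the cache, but as noted above a lock-free design scales better.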
I am using the Birman-Schiper-Stephenson protocol in a distributed system, with the current assumption that the peer set of any node doesn't change. As the protocol dictates, messages that arrive at a node out of causal order have to be put in a 'delay queue'. My problem is with the organisation of the delay queue, where we must impose some kind of order on the messages. After deciding the order, we will have to design a 'wake-up' protocol that efficiently searches the queue whenever the current timestamp is modified, to find out whether one of the delayed messages can be 'woken up' and accepted.
I was thinking of segregating the delayed messages into bins based on the points of difference of their vector-timestamps with the timestamp of this node. But the number of bins can be very large and maintaining them won't be efficient.
Please suggest some designs for such a queue(s).
Sorry about the delay -- didn't see your question until now. Anyhow, if you look at Isis2.codeplex.com you'll see that in Isis2, I have a causalsend implementation that employs the same vector timestamp scheme we described in the BSS paper. What I do is to keep my messages in a partial order, sorted by VT, and then when a delivery occurs I can look at the delayed queue and deliver off the front of the queue until I find something that isn't deliverable. Everything behind it will be undeliverable too.
But in fact there is a deeper insight here: you actually never want to allow the queue of delayed messages to get very long. If the queue gets longer than a few messages (say, 50 or 100) you run into the problem that the guy with the queue could be holding quite a few bytes of data and may start paging or otherwise running slowly. So it becomes a self-perpetuating cycle: because he has a queue, he is very likely to be dropping messages and hence enqueuing more and more. Plus, in any case, from his point of view the urgent thing is to recover the missed message that caused the others to be out of order.
What this adds up to is that you need a flow control scheme in which the amount of pending asynchronous stuff is kept small. But once you know the queue is small, searching every single element won't be very costly! So this deeper perspective says flow control is needed no matter what, and then because of flow control (if you have a flow control scheme that works) the queue is small, and because the queue is small, the search won't be costly!
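A small Python sketch of the "sorted delay queue, scan on delivery" idea (the deliverability test is the standard BSS rule; the class and field names are mine, and sum(vt) is just one possible linear extension of the causal partial order):

    import bisect

    class DelayQueue:
        """Delayed messages kept sorted by a linear extension of the VT order."""

        def __init__(self, n):
            self.local_vt = [0] * n            # this node's vector timestamp
            self.delayed = []                  # sorted list of (key, sender, vt, msg)

        def _deliverable(self, sender, vt):
            # BSS rule: the next message from 'sender', and nothing it causally
            # depends on from the other senders is still missing locally.
            if vt[sender] != self.local_vt[sender] + 1:
                return False
            return all(vt[k] <= self.local_vt[k]
                       for k in range(len(vt)) if k != sender)

        def _deliver(self, sender, vt, msg):
            self.local_vt[sender] = vt[sender]
            print("delivered:", msg)

        def receive(self, sender, vt, msg):
            if self._deliverable(sender, vt):
                self._deliver(sender, vt, msg)
                self._wake_up()                # a delivery may release delayed messages
            else:
                bisect.insort(self.delayed, (sum(vt), sender, vt, msg))

        def _wake_up(self):
            # With flow control keeping the queue short, rescanning it is cheap.
            progress = True
            while progress:
                progress = False
                for i, (_, sender, vt, msg) in enumerate(self.delayed):
                    if self._deliverable(sender, vt):
                        del self.delayed[i]
                        self._deliver(sender, vt, msg)
                        progress = True
                        break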
Background
The Echo Nest has a rate-limited API. A given application (identified in requests using an API key) can make up to 120 REST calls a minute. The service response includes an estimate of the total number of calls made in the last minute; repeated abuse of the API (exceeding the limit) may cause the API key to be revoked.
When used from a single machine (a web server providing a service to clients) it is easy to control access - the server has full knowledge of the history of requests and can regulate itself correctly.
But I am working on a program where distributed, independent clients make requests in parallel.
In such a case it is much less clear what an optimal solution would be. And in general the problem appears to be undecidable - if over 120 clients, all with no previous history, make an initial request at the same time, then the rate will be exceeded.
But since this is a personal project, and client use is expected to be sporadic (bursty), and my projects have never been hugely successful, that is not expected to be a huge problem. A more likely problem is that there are times when a smaller number of clients want to make many requests as quickly as possible (for example, a client may need, exceptionally, to make several thousand requests when starting for the first time - it is possible two clients would start at around the same time, so they must cooperate to share the available bandwidth).
Given all the above, what are suitable algorithms for the clients so that they rate-limit appropriately? Note that limited cooperation is possible because the API returns the total number of requests in the last minute for all clients.
Current Solution
My current solution (when the question was written - a better approach is given as an answer) is quite simple. Each client has a record of the time the last call was made and the number of calls made in the last minute, as reported by the API, on that call.
If the number of calls is less than 60 (half the limit) the client does not throttle. This allows for fast bursts of small numbers of requests.
Otherwise (ie when there are more previous requests) the client calculates the limiting rate it would need to work at (ie period = 60 / (120 - number of previous requests)) and then waits until the gap between the previous call and the current time exceeds that period (in seconds; 60 seconds in a minute; 120 max requests per minute). This effectively throttles the rate so that, if it were acting alone, it would not exceed the limit.
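As a rough Python sketch of that rule (names are mine; last_reported_count stands for the per-minute count returned by the API on the previous call, and the max(1, ...) guard is my addition to avoid dividing by zero at the limit):

    import time

    LIMIT = 120            # max requests per minute

    def wait_before_call(last_call_time, last_reported_count):
        """Sleep long enough that, acting alone, this client stays under the limit."""
        if last_reported_count < LIMIT / 2:
            return                                           # below half the limit: no throttling
        period = 60.0 / max(1, LIMIT - last_reported_count)  # required gap between calls, seconds
        elapsed = time.time() - last_call_time
        if elapsed < period:
            time.sleep(period - elapsed)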
But the above has problems. If you think it through carefully you'll see that for large numbers of requests a single client oscillates and does not reach maximum throughput (this is partly because of the "initial burst" which will suddenly "fall outside the window" and partly because the algorithm does not make full use of its history). And multiple clients will cooperate to an extent, but I doubt that it is optimal.
Better Solutions
I can imagine a better solution that uses the full local history of the client and models other clients with, say, a Hidden Markov Model. So each client would use the API report to model the other (unknown) clients and adjust its rate accordingly.
I can also imagine an algorithm for a single client that progressively transitions from unlimited behaviour for small bursts to optimal, limited behaviour for many requests without introducing oscillations.
Do such approaches exist? Can anyone provide an implementation or reference? Can anyone think of better heuristics?
I imagine this is a known problem somewhere. In what field? Queuing theory?
I also guess (see comments earlier) that there is no optimal solution and that there may be some lore / tradition / accepted heuristic that works well in practice. I would love to know what... At the moment I am struggling to identify a similar problem in known network protocols (I imagine Perlman would have some beautiful solution if so).
I am also interested (to a lesser degree, for future reference if the program becomes popular) in a solution that requires a central server to aid collaboration.
Disclaimer
This question is not intended to be criticism of Echo Nest at all; their service and conditions of use are great. But the more I think about how best to use this, the more complex/interesting it becomes...
Also, each client has a local cache used to avoid repeating calls.
Updates
Possibly relevant paper.
The above worked, but was very noisy, and the code was a mess. I am now using a simpler approach:
Make a call
From the response, note the limit and count
Calculate
barrier = now() + 60 / max(1, (limit - count))**greedy
On the next call, wait until barrier
The idea is quite simple: you should wait some length of time proportional to how few requests are left in that minute. For example, if count is 39 and limit is 40 then you wait an entire minute. But if count is zero then you can make a request soon. The greedy parameter is a trade-off - when greater than 1 the "first" calls are made more quickly, but you are more likely to hit the limit and end up waiting for 60s.
The performance of this is similar to the approach above, and it's much more robust. It is particularly good when clients are "bursty" as the approach above gets confused trying to estimate linear rates, while this will happily let a client "steal" a few rapid requests when demand is low.
Code here.
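In case that link goes away, a minimal Python sketch of the barrier rule above (GREEDY and the shape of make_call are placeholders; make_call is assumed to return the per-minute limit and the count reported by the API):

    import time

    GREEDY = 1.0           # > 1 makes the first calls faster, at the risk of a 60s wait

    barrier = 0.0          # earliest time the next call is allowed

    def throttled_call(make_call):
        global barrier
        now = time.time()
        if now < barrier:
            time.sleep(barrier - now)                            # wait until the barrier
        limit, count = make_call()                               # returns (limit, count)
        barrier = time.time() + 60.0 / max(1, limit - count) ** GREEDY
        return limit, count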
After some experimenting, it seems that the most important thing is getting as good an estimate as possible for the upper limit of the current connection rates.
Each client can track their own (local) connection rate using a queue of timestamps. A timestamp is added to the queue on each connection and timestamps older than a minute are discarded. The "long term" (over a minute) average rate is then found from the first and last timestamps and the number of entries (minus one). The "short term" (instantaneous) rate can be found from the times of the last two requests. The upper limit is the maximum of these two values.
Each client can also estimate the external connection rate (from the other clients). The "long term" rate can be found from the number of "used" connections in the last minute, as reported by the server, corrected by the number of local connections (from the queue mentioned above). The "short term" rate can be estimated from the "used" number since the previous request (minus one, for the local connection), scaled by the time difference. Again, the upper limit (maximum of these two values) is used.
Each client computes these two rates (local and external) and then adds them to estimate the upper limit to the total rate of connections to the server. This value is compared with the target rate band, which is currently set to between 80% and 90% of the maximum (0.8 to 0.9 * 120 per minute).
From the difference between the estimated and target rates, each client modifies their own connection rate. This is done by taking the previous delta (time between the last connection and the one before) and scaling it by 1.1 (if the rate exceeds the target) or 0.9 (if the rate is lower than the target). The client then refuses to make a new connection until that scaled delta has passed (by sleeping if a new connection is requested).
Finally, nothing above forces all clients to equally share the bandwidth. So I add an additional 10% to the local rate estimate. This has the effect of preferentially over-estimating the rate for clients that have high rates, which makes them more likely to reduce their rate. In this way the "greedy" clients have a slightly stronger pressure to reduce consumption which, over the long term, appears to be sufficient to keep the distribution of resources balanced.
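A condensed Python sketch of the estimation and adjustment loop described above (the structure and names are mine; error handling, persistence and the actual API call are omitted):

    import time
    from collections import deque

    LIMIT = 120.0                              # requests per minute
    TARGET_LO, TARGET_HI = 0.8 * LIMIT, 0.9 * LIMIT

    class Client:
        def __init__(self):
            self.calls = deque()               # local call timestamps from the last minute
            self.delta = 1.0                   # current gap (seconds) between connections
            self.prev_used = 0                 # "used" reported by the server last time

        def _local_rate(self, now):
            # Max of the long-term (whole window) and short-term (last two calls) rates.
            while self.calls and now - self.calls[0] > 60:
                self.calls.popleft()
            if len(self.calls) < 2:
                return 0.0
            long_term = 60.0 * (len(self.calls) - 1) / max(self.calls[-1] - self.calls[0], 1e-6)
            short_term = 60.0 / max(self.calls[-1] - self.calls[-2], 1e-6)
            return 1.1 * max(long_term, short_term)    # +10% penalises greedy clients

        def wait(self):
            # Call before each request: enforce the current inter-call gap.
            if self.calls:
                gap = time.time() - self.calls[-1]
                if gap < self.delta:
                    time.sleep(self.delta - gap)

        def record(self, used):
            # Call after each request, with the server-reported per-minute "used" count.
            now = time.time()
            prev_call = self.calls[-1] if self.calls else None
            self.calls.append(now)
            local = self._local_rate(now)

            # External rate: long-term corrects "used" by our own calls; short-term
            # scales the growth in "used" since our previous call (minus our own).
            external_long = max(0.0, used - len(self.calls))
            if prev_call is not None:
                external_short = max(0.0, used - self.prev_used - 1) * 60.0 / max(now - prev_call, 1e-6)
            else:
                external_short = 0.0
            external = max(external_long, external_short)
            self.prev_used = used

            total = local + external               # estimated upper bound on the total rate
            if total > TARGET_HI:
                self.delta *= 1.1                  # over the target band: back off
            elif total < TARGET_LO:
                self.delta *= 0.9                  # under the band: speed up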
The important insights are:
By taking the maximum of "long term" and "short term" estimates the system is conservative (and more stable) when additional clients start up.
No client knows the total number of clients (unless it is zero or one), but all clients run the same code so can "trust" each other.
Given the above, you can't make "exact" calculations about what rate to use, but you can make a "constant" correction (in this case, +/- 10% factor) depending on the global rate.
The adjustment to the client connection frequency is made to the delta between the last two connections (adjusting based on the average over the whole minute is too slow and leads to oscillations).
Balanced consumption can be achieved by penalising the greedy clients slightly.
In (limited) experiments this works fairly well (even in the worst case of multiple clients starting at once). The main drawbacks are: (1) it doesn't allow for an initial "burst" (which would improve throughput if the server has few clients and a client has only a few requests); (2) the system does still oscillate over ~ a minute (see below); (3) handling a larger number of clients (in the worst case, eg if they all start at once) requires a larger gain (eg 20% correction instead of 10%) which tends to make the system less stable.
The "used" amount reported by the (test) server, plotted against time (Unix epoch). This is for four clients (coloured), all trying to consume as much data as possible.
The oscillations come from the usual source - corrections lag the signal. They are damped by (1) using the upper limit of the rates (predicting the long-term rate from the instantaneous value) and (2) using a target band. This is why an answer informed by someone who understands control theory would be appreciated...
It's not clear to me that estimating local and external rates separately is important (they may help if the short term rate for one is high while the long-term rate for the other is high), but I doubt removing it will improve things.
In conclusion: this is all pretty much as I expected, for this kind of approach. It kind-of works, but because it's a simple feedback-based approach it's only stable within a limited range of parameters. I don't know what alternatives might be possible.
Since you're using the Echonest API, why don't you take advantage of the rate limit headers that are returned with every API call?
In general you get 120 requests per minute. There are three headers that can help you self-regulate your API consumption:
X-Ratelimit-Used
X-Ratelimit-Remaining
X-Ratelimit-Limit
(Notice the lower-case 'ell' in 'Ratelimit' -- the documentation makes you think it should be capitalized, but in practice it is lower case.)
These counts account for calls made by other processes using your API key.
Pretty neat, huh? Well, I'm afraid there is a rub...
That 120-request-per-minute is really an upper bound. You can't count on it. The documentation states that value can fluctuate according to system load. I've seen it as low as 40ish in some calls I've made, and have in some cases seen it go below zero (I really hope that was a bug in the echonest API!)
One approach you can take is to slow things down once utilization (used divided by limit) reaches a certain threshold. Keep in mind, though, that on the next call your limit may have been adjusted downward significantly enough that 'used' is greater than 'limit'.
This works well up to a point. Since the Echo Nest doesn't adjust the limit in a predictable manner, it is hard to avoid 400s in practice.
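A small sketch of that idea in Python with the requests library (the threshold and the back-off formula are arbitrary placeholders; the header names are as listed above):

    import time
    import requests

    def call_api(url, threshold=0.8):
        resp = requests.get(url)
        used = int(resp.headers.get("X-Ratelimit-Used", 0))
        limit = int(resp.headers.get("X-Ratelimit-Limit", 1))
        utilization = used / max(limit, 1)     # note: 'used' can exceed 'limit'
        if utilization >= threshold:
            time.sleep(5.0 * utilization)      # crude back-off; tune for your workload
        return resp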
Here are some links that I've found helpful:
http://blog.echonest.com/post/15242456852/managing-your-api-rate-limit
http://developer.echonest.com/docs/v4/#rate-limits
I have a site running on amazon elastic beanstalk with the following traffic pattern:
~50 concurrent users normally.
~2000 concurrent users for 1-2 minutes when a post is made to the Facebook page.
Amazon Web Services claims to be able to scale rapidly to meet challenges like this, but the "Greater than x for more than 1 minute" setup of CloudWatch doesn't appear to be fast enough for this traffic pattern.
Usually within seconds all the EC2 instances crash, killing all CloudWatch metrics, and the whole site is down for 4-6 minutes. So far I've yet to find a configuration that works for this scenario.
Here is the graph of a smaller event that also killed the site:
Are these links posted predictably? If so, you can use Scaling by Schedule, or as an alternative you might change the DESIRED-CAPACITY value of the Auto Scaling group, or even trigger as-execute-policy to scale out right before your link is posted.
Did you know you can have multiple scaling policies in one group? So you might have a special Auto Scaling policy for your case, something like SCALE_OUT_HIGH, which adds, say, 10 more instances at once. Take a look at the as-put-scaling-policy command.
Also, you need to check your code and find bottlenecks.
Which HTTP server do you use? Consider switching to Nginx, as it is much faster and consumes fewer resources than Apache. Try using Memcache; a NoSQL store like Redis is a fine option for high read and write loads as well.
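For example, a policy like that could be created with boto3 instead of the old as-* command line tools (the group and policy names here are made up; wire the policy to an alarm, a schedule, or trigger it manually right before a post goes out):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # A "scale out hard" simple-scaling policy that adds 10 instances at once.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-beanstalk-asg",   # hypothetical group name
        PolicyName="SCALE_OUT_HIGH",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=10,
        Cooldown=300,
    )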
The suggestion from AWS was as follows:
We are always working to make our systems more responsive, but it is challenging to provision virtual servers automatically with a response time of a few seconds as your use case appears to require. Perhaps there is a workaround that responds more quickly or that is more resilient when requests begin to increase.

Have you observed whether the site performs better if you use a larger instance type or a larger number of instances in the steady state? That may be one method to be resilient to rapid increases in inbound requests. Although I recognize it may not be the most cost-effective, you may find this to be a quick fix.

Another approach may be to adjust your alarm to use a threshold or a metric that would reflect (or predict) your demand increase sooner. For example, you might see better performance if you set your alarm to add instances after you exceed 75 or 100 users. You may already be doing this. Aside from that, your use case may have another indicator that predicts a demand increase, for example a posting on your Facebook page may precede a significant request increase by several seconds or even a minute. Using CloudWatch custom metrics to monitor that value and then setting an alarm to Auto Scale on it may also be a potential solution.
So I think the best answer is to run more instances at lower traffic and use custom metrics to predict traffic from an external source. I am going to try, for example, monitoring Facebook and Twitter for posts with links to the site and scaling up straight away.
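A sketch of publishing such a custom metric with boto3 (the namespace and metric name are made up; a CloudWatch alarm on this metric would then drive the scale-up):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def report_post_detected():
        # Publish a data point whenever a Facebook/Twitter post linking to the site is seen.
        cloudwatch.put_metric_data(
            Namespace="Custom/Traffic",                 # hypothetical namespace
            MetricData=[{
                "MetricName": "PostLinkDetected",       # hypothetical metric name
                "Value": 1.0,
                "Unit": "Count",
            }],
        )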
Can someone please explain the correlation between requests per second and response time? Which are you trying to improve first? If your competitor offers fewer 'requests per second' on their most used functionality than you, is your application performing better in terms of end-user performance?
Can someone please explain the correlation between requests per second and response time?
Think of this situation as if it were a gas station. Cars arrive at various intervals and occupy a pump; they spend some time filling up, and then they leave.
Each car that arrives and occupies a pump is a request.
The time it takes to fill up is your response time.
You can improve things in two ways:
If you add more pumps, you can service additional cars at once because there will be more capacity.
If you make all your pumps faster, you can service more cars over time with the same number of pumps, because each car will finish sooner.
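The analogy can be made quantitative with Little's Law (requests in flight ≈ requests per second × response time). A toy calculation with made-up numbers:

    # Little's Law: requests in flight = requests/second * response time (seconds).
    pumps = 4               # how many cars can be served at once
    fill_time = 120.0       # seconds per car (the "response time")

    # With every pump busy, the best sustainable arrival rate is:
    print(pumps / fill_time)            # 0.033 cars/s, i.e. ~2 cars per minute

    # Doubling the pumps, or halving the fill time, doubles that rate:
    print(2 * pumps / fill_time)        # ~4 cars per minute
    print(pumps / (fill_time / 2))      # ~4 cars per minute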
Which are you trying to improve first?
That depends. Do you want to serve people faster (improving their experience while making some others wait) and thus more people overall, or do you want to serve more people at once (at the possible expense of request time)? Ideally, get both metrics as good as possible.
It all depends on what sort of load your system will be under.
If you have millions of users, then you need to handle more requests per second, possibly at the expense of response time; otherwise users may not be able to connect when they want to.
However, if you are only going to have 30 users then it's more important to them that your system responds quickly than it being able to handle a thousand requests a second.
Requests per second may be high while offering an awful user experience. You might have a lot of users buying thousands of concert tickets per second but the response time for each user is over 30 seconds.
For a high-performing, enjoyable web site, you need a high number of requests per second and a low maximum response time. As a user, I like 5 seconds or less.
If your competitor offers fewer 'requests per second' on their most used functionality than you, is your application performing better in terms of end-user performance?
I wouldn't agree with that. Look at Google. They handle thousands of requests a second - hell, I think it's something like 100 million per day and 3 billion per month.
To answer your question, I think response time is more important than requests per second. Sure you can optimize/minimize the number of requests made, but if your product scales to handle unlimited requests (just by throwing more hardware at the problem) then I think that is more valuable.