There is a specific API endpoint. To reduce the load on the server, I want clients to send POST requests to it no more than, say, once every 15 minutes, while the rest of the endpoints keep working as usual.
I thought I needed to implement some kind of timeout, so that the client waits and does not receive a response until 15 minutes have passed; that is, just exit the post function without responding. But that is impossible: it says you have to return a response. And if the client receives a response, it can immediately send the next request. I need to reduce the number of requests as much as possible, so that this timeout on the client side stops it from bombarding the server with requests.
I would also like to put the logic that enables this behavior inside the post function; it is a little more complicated than described in the question.
I'm a complete noob in Python and Django. Perhaps this can be implemented in some other way? Point me in the right direction to dig.
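As a framework-agnostic sketch of the behavior being asked for (every name here is illustrative, not Django API): keep the timestamp of each client's last accepted request and reject anything that arrives inside the 15-minute window.

```python
import time

WINDOW_SECONDS = 15 * 60      # one accepted request per 15 minutes
_last_accepted = {}           # client id -> timestamp of last accepted request

def allow_request(client_id, now=None):
    """Return True if this client's request may proceed, False if it is
    still inside the 15-minute window since its last accepted request."""
    now = time.time() if now is None else now
    last = _last_accepted.get(client_id)
    if last is not None and now - last < WINDOW_SECONDS:
        return False
    _last_accepted[client_id] = now
    return True
```

In Django REST Framework this same bookkeeping is exactly what a throttle class does for you; rejected requests get a 429 response instead of being held open.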
You can use throttling: add the following lines to the REST_FRAMEWORK block of your settings.py:
REST_FRAMEWORK = {
...
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.AnonRateThrottle',
'rest_framework.throttling.UserRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'anon': '20/min',
'user': '60/min'
},
...
}
This will return a 429 status code if an authenticated user sends more than 60 requests (or an anonymous client more than 20) in one minute. If you only want to throttle a single endpoint, look at ScopedRateThrottle and set throttle_scope on that one view.
Could throttling be a solution? It lets you set the request rate: https://www.django-rest-framework.org/api-guide/throttling/
I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done so I started looking into more advanced stuff such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically makes different SQL queries to the database, hence the amount of calls). The data in the DB is not updated too often so we have a local cache for the responses, but it's obviously getting lost when a user refreshes a page. This is where I wanted to use the service worker - to keep the responses either in cache store or in IndexedDB (I went with the second option). And, of course, the cache-first approach does not fit here too well as there is still a chance that the data may become stale. So I tried to implement the stale-while-revalidate strategy: fetch the data once, then if the response for a given request is already in cache, return it, but make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook but it seems like the app is still waiting for real requests to resolve even when there is a cache entry to return from (I see those responses in Network tab).
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Doing the update immediately is unnecessary in my case so I was wondering if there is any way to delay that? Or, alternatively, not to wait for the "real" response to be resolved?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I have to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  const request = event.request;
  event.respondWith(
    serializeRequest(request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(request).then((networkResponse) => {
          // cache a clone of the fresh network response, not the (possibly undefined) cached one
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of the cache? I am sort of forced to use IndexedDB because those "magic endpoints" are POST instead of GET (they require a request body), and POST requests cannot be inserted into the cache...
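The stale-while-revalidate sequence described above, with the revalidation step deferred to the caller, can be sketched language-agnostically. This is a Python illustration of the pattern, not service-worker code, and all names are made up:

```python
def swr_get(key, fetch, cache):
    """Stale-while-revalidate with caller-controlled revalidation.
    Returns (value, revalidate): if the key is cached, value is the stale
    entry and the caller decides when (or whether) to call revalidate(),
    e.g. on idle; on a cache miss the fetch happens synchronously."""
    def revalidate():
        cache[key] = fetch(key)   # the "real" request; refreshes the cache
        return cache[key]
    if key in cache:
        return cache[key], revalidate
    return revalidate(), revalidate
```

In a service worker the equivalent is to pass the cached response to event.respondWith() and run the network fetch inside event.waitUntil(), so returning the stale data is not blocked on the revalidation.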
In Laravel we have an api endpoint that may take a few minutes. It's processing an input in batches and giving a response when all batches are processed. Pseudo-code below.
Sometimes it takes too long for the user, so the user navigates away and the connection is killed client-side. However, the backend keeps processing until it tries to return the response and fails with a broken pipe error.
To save resources, we're looking for a way to check after each batch whether the client is still connected, with a function like check_if_client_is_still_connected() below. If not, an error is raised and processing stops. Is there a way to achieve this?
function myAPIEndpoint($all_batches) {
    $result = [];
    foreach ($all_batches as $batch) {
        $batch_result = do_something_long($batch);
        $result = array_merge($result, $batch_result);
        check_if_client_is_still_connected();
    }
    return $result;
}
PS: I know async tasks or web sockets could be more appropriate for long requests, but we have good reasons to use a standard http endpoint for this.
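In PHP the usual ingredient for such a check is connection_aborted() after flushing some output. As a language-neutral sketch of the loop shape, with every name hypothetical:

```python
class ClientDisconnected(Exception):
    """Raised to abort batch processing once the client has gone away."""

def process_batches(batches, do_batch, client_connected):
    """Run do_batch over each batch, checking after every batch whether
    the client is still connected; stop early if it is not."""
    result = []
    for batch in batches:
        result.extend(do_batch(batch))
        if not client_connected():
            raise ClientDisconnected("client hung up; stopping early")
    return result
```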
I am using cy.intercept to verify requests made to an API.
While the asserts work great, the issue is that my requests are actually sent to the API.
So what do I do to stop the actual requests from being sent to the API?
This is what I am doing right now:
cy.intercept('api/login', []).as('login')
cy.get('[data-cy=login]')
.click()
.wait('@login')
.its('request')
.then(({ headers, body }) => {
// perform asserts...
})
The way to reply from the intercept instead of the API is to use cy.intercept(url, staticResponse) where the simplest staticResponse is an empty object {},
cy.intercept('api/login', {}).as('login')
Obviously, if your app needs some properties in the response, you should add those, or use a fixture, etc.
Ref StaticResponse objects.
I'm currently making an application with these specifications:
Back-end: Laravel 7+
Front-end: Vue.js
Local Development Pack: Laragon, with local SSL enabled
And I've been using some free template based on bootstrap as well.
So here's the deal: each time I make a fetch() API request from the Vue.js (front-end) side, I notice that out of 100 attempts, roughly 6 of them return 404.
My API is on `Route::prefix('api')` and I guess there is nothing wrong with CORS, etc.
I don't know if this is a big deal, but I hesitate to continue on with this method of getting data until I figure out what is actually going on with the request and fetch, since 6 / 100 chance is actually quite scary for me, considering the functionality of the App that I'm currently building.
I don't know if my description is enough, but if it isn't, feel free to yell at me and I'll go into more detail.
And if any of you ask what I've tried to fix this, truth be told I don't really know what I should do. The only thing I tried was to look for an answer here, and I learned that no one has ever faced a similar issue.
Cheers.
Edit:
Case closed. It was definitely because of the throttle limit. I counted it myself: as I got to the 61st request, the 404 was thrown. Big thanks.
Edit:
Shoot! It is still a problem. And I've noticed that a throttle problem should've returned a 429, but my API returns 404. Any idea why, guys? I've disabled the throttle in Kernel. The funny thing is, I've tried deploying it on my VPS and it works perfectly. Is the problem my local server, or what?
You are reaching the maximum throttle rate. The default throttle limits it to 60 attempts per minute. To track the throttle, look at the response headers. For example, if you have a route like the following:
Route::group(['prefix' => 'api', 'middleware' => 'throttle'], function () {
Route::get('people', function () {
return Person::all();
});
});
viewing the response headers will reveal the throttle info:
HTTP/1.1 200 OK
... other headers here ...
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
Remember, this response means:
A) This request succeeded (the status is 200)
B) You can try this route 60 times per minute
C) You have 59 requests left for this minute
What response would we get if we went over the rate limit?
HTTP/1.1 429 Too Many Requests
... other headers here ...
Retry-After: 60
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
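As a hypothetical client-side illustration (not part of Laravel), a helper could read these headers and decide how long to back off before retrying:

```python
def seconds_to_wait(headers):
    """Given rate-limit response headers as a dict, return how many seconds
    the client should pause before retrying (0 if requests remain)."""
    remaining = int(headers.get('X-RateLimit-Remaining', 1))
    if remaining > 0:
        return 0
    # Out of requests: honor Retry-After, defaulting to a one-minute window.
    return int(headers.get('Retry-After', 60))
```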
Customizing the throttle middleware
The throttle middleware accepts two parameters that determine the maximum number of requests that can be made in a given number of minutes. For example, let's specify that an authenticated user may access the following group of routes 60 times per minute:
Route::middleware('auth:api', 'throttle:60,1')->group(function () {
Route::get('/user', function () {
//
});
});
If you want to allow more than 60 requests per minute to the route, change the throttle's first parameter:
Route::middleware('auth:api', 'throttle:100,1')->group(function () {
Route::get('/user', function () {
//
});
});
So I am building this Spring Boot REST consumer within an API. The API request depends on a different API.
The user makes a request to my API, and my API makes a request to another service to log the user in.
While building this I came to the conclusion that returning a ResponseEntity is much slower than just returning the result in the response body.
This is my fast code; the response time is less than a second:
@PostMapping("/adminLogin")
fun adminLogin(@RequestBody credentials: Credentials): AuthResponse {
    return RestTemplate().getForEntity(
        "$authenticatorURL/adminLogin?userName=${credentials.username}&passWord=${credentials.password}",
        AuthResponse::class.java).body
}
When doing this, it takes many seconds to respond:
@PostMapping("/adminLogin")
fun adminLogin(@RequestBody credentials: Credentials): ResponseEntity<AuthResponse> {
    return RestTemplate().getForEntity(
        "$authenticatorURL/adminLogin?userName=${credentials.username}&passWord=${credentials.password}",
        AuthResponse::class.java)
}
Can someone explain to me what the difference is and why one approach is faster than the other?
I had the same issue yesterday. The problem was as follows: imagine the API I use is sending JSON like this:
{"id": "12"}
What I do is take that into a ResponseEntity, where my IdDTO stores the id field as an integer. When I returned this ResponseEntity as a response to my request, it returned this:
{"id": 12}// notice the absence of string quotes around 12
The problem is as follows: the API that I used sent a Content-Length header equal to 12, but after my DTO conversion the body becomes 10 characters.
Spring does not recalculate the content length, so the client reads the 10 characters you sent and then waits for the other 2. It never receives anything, and Spring closes the connection after 1 minute (the default timeout for a connection).
If you create a new response entity and put your data into it, Spring will calculate the new content length and it will be as fast as the first case you mentioned.
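The length mismatch described above is easy to reproduce outside Spring; here is a small Python illustration of the same arithmetic (illustrative only, not the Spring code path):

```python
import json

upstream_body = '{"id": "12"}'      # what the upstream API sent: 12 bytes
data = json.loads(upstream_body)
data['id'] = int(data['id'])        # the DTO stores id as an integer
new_body = json.dumps(data)         # re-serialized body: '{"id": 12}', 10 bytes

# If the original Content-Length: 12 header is forwarded unchanged,
# the client waits for 2 more bytes that will never arrive.
mismatch = len(upstream_body) - len(new_body)
```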