I am using cy.intercept to verify requests made to an API.
While the assertions work great, the problem is that the requests are still actually sent to the API.
So, how do I stop the real requests from being sent to the API?
This is what I am doing right now:
cy.intercept('api/login').as('login')
cy.get('[data-cy=login]')
  .click()
  .wait('@login')
  .its('request')
  .then(({ headers, body }) => {
    // perform asserts...
  })
The way to reply from the intercept instead of hitting the API is to use cy.intercept(url, staticResponse), where the simplest staticResponse is an empty object {}:
cy.intercept('api/login', {}).as('login')
Obviously, if your app needs some properties in the response, you should add them, or use a fixture, etc.
Ref: StaticResponse objects.
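For instance (the token field and the fixture file name below are made up, just for illustration):

// Stub with an inline body so the app gets the fields it expects
// (the `token` property is a hypothetical example)
cy.intercept('api/login', { body: { token: 'fake-jwt' } }).as('login')

// Or serve the response from a fixture file
// (assumes a hypothetical cypress/fixtures/login.json)
cy.intercept('api/login', { fixture: 'login.json' }).as('login')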
I am currently working on performance improvements for a React-based SPA. Most of the basic optimizations are already done, so I started looking into more advanced techniques such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically runs different SQL queries against the database, hence the number of calls). The data in the DB is not updated too often, so we keep a local cache of the responses, but it is obviously lost when the user refreshes the page. This is where I wanted to use a service worker: to keep the responses either in the cache store or in IndexedDB (I went with the second option). Of course, a cache-first approach does not fit too well here, as there is still a chance that the data becomes stale. So I tried to implement the stale-while-revalidate strategy: if the response for a given request is already in the cache, return it, but still make the real request in the background and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app still waits for the real requests to resolve even when there is a cache entry to return from (I can see those responses in the Network tab).
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Updating the cache immediately is unnecessary in my case, so I was wondering if there is any way to delay it? Or, alternatively, not to wait for the "real" response to resolve?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I use to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(event.request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        // Always kick off the network request and refresh the cache with its result
        const fetchPromise = fetch(event.request).then((networkResponse) => {
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // Serve the cached response immediately if there is one; otherwise wait for the network
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Could this be due to the fact that I put the responses into IndexedDB instead of the cache? I am more or less forced to use IndexedDB because those "magic endpoints" use POST instead of GET (they require a request body), and POST requests cannot be put into the Cache Storage...
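For illustration, a minimal sketch of how such a POST request could be serialized into a plain object for IndexedDB; this is an assumption about the helper, not the actual code:

// Hypothetical sketch: turn a POST request into a plain object that can
// be stored (and matched) in IndexedDB; the real helpers may differ.
function serializeRequest(request) {
  // Clone first, so the original body can still be sent to the network
  return request.clone().text().then(function (body) {
    return {
      url: request.url,
      method: request.method,
      body: body // the POST body is what distinguishes the queries
    };
  });
}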
There is a specific API endpoint. To reduce the load on the server, I want clients to send POST requests to it no more than, say, once every 15 minutes, while the rest of the endpoints keep working as usual.
I thought that I needed to implement some kind of timeout, so that the client waits and does not receive a response to any further requests until 15 minutes have passed, i.e. just exit the post function early. But that is impossible: Django says you need to return a response. And if the client receives a response, it can immediately send the next request. I need to reduce the number of requests as much as possible, so that this timeout stops the client from bombarding the server with requests.
I would also like to put the logic that enables this behavior inside the post function itself; it is a little more complicated than described in the question.
I am a complete noob in Python and Django. Perhaps this can be implemented in some other way? Please point me in the right direction.
You can use throttling: add the following lines to the REST_FRAMEWORK block of your settings.py, like below:
REST_FRAMEWORK = {
    ...
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle'
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '20/min',
        'user': '60/min'
    },
    ...
}
This will return a 429 status code if an authenticated user sends more than 60 requests in one minute (or an anonymous user more than 20).
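If you want to limit only that single endpoint, note that DRF's built-in rate strings only support second/minute/hour/day periods, so "once per 15 minutes" needs a small custom throttle applied to just that view. A hedged sketch (the class, scope, and view names below are made up):

from rest_framework.throttling import SimpleRateThrottle

class OncePer15MinutesThrottle(SimpleRateThrottle):
    scope = 'special_endpoint'  # hypothetical scope name
    rate = '1/h'                # placeholder; parse_rate below overrides it

    def parse_rate(self, rate):
        # 1 request per 900 seconds (15 minutes)
        return (1, 900)

    def get_cache_key(self, request, view):
        # Throttle per authenticated user, or per IP for anonymous clients
        if request.user and request.user.is_authenticated:
            ident = request.user.pk
        else:
            ident = self.get_ident(request)
        return self.cache_format % {'scope': self.scope, 'ident': ident}

Then set it on the one view only, leaving all other endpoints untouched:

from rest_framework.views import APIView

class SpecialView(APIView):  # hypothetical view
    throttle_classes = [OncePer15MinutesThrottle]

    def post(self, request):
        ...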
Throttling could be a solution. It lets you set the allowed rate of requests: https://www.django-rest-framework.org/api-guide/throttling/
I'm trying to synchronize my POSTs to an endpoint in Angular. I did see some examples of doing a synchronized GET but had trouble understanding the examples well enough to apply them to POSTs.
The POSTs are pretty simple, at least from my perspective as the front-end developer. I send an object with a parent group ID and a sub group ID to a /parentgroups endpoint. On the backend, however, the concurrent calls cause the data to get overwritten.
Apologies for the lack of an example, but I am pretty far from having one that is close to working the way I need. My code is still async, with a loop over calls to $http.post().
You actually cannot do real synchronous (as in blocking) HTTP calls in Angular; it forces you to use async. If you can't do it with callbacks, then you have a problem with your architecture that the entire team should focus on solving ASAP. If your current architecture requires the frontend to do blocking calls, then your architecture is quite simply broken and needs to be fixed.
Anyway, while I recommend against it, you could always register your requests in a list, and then in each callback pop the next request from the list and run it. That way you can keep pushing requests into the list without knowing how many there will be. Something like this (untested, but the general principle should work):
var requestList = [];

requestList.push(function() {
  $http.post('/someUrl', {})
    .success(function(data, status, headers, config) {
      // Remove the next request from the list and call it (if any are left)
      if (requestList.length) requestList.shift()();
    });
});

requestList.push(function() {
  $http.post('/someOtherUrl', {})
    .success(function(data, status, headers, config) {
      // Remove the next request from the list and call it (if any are left)
      if (requestList.length) requestList.shift()();
    });
});

// Start the first request
requestList.shift()();
This is fairly clean, but still a bit of a hack. It would probably work fine but I would be taking a good long look at why the API forces you to do something like this.
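If your version of AngularJS exposes standard promises on $http, the same sequential idea can be written more compactly by folding the requests into a promise chain with reduce. A sketch (untested; assumes $q is injected, and the URLs and payloads are placeholders):

// Run an array of POST payloads strictly one after another
var payloads = [
  { url: '/parentgroups', data: { parentId: 1, subId: 10 } },
  { url: '/parentgroups', data: { parentId: 1, subId: 11 } }
];

payloads.reduce(function (chain, item) {
  // each POST starts only after the previous one has resolved
  return chain.then(function () {
    return $http.post(item.url, item.data);
  });
}, $q.when()); // start the chain with an already-resolved promise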
I have the following situation, where the "headers already sent" problem happens when sending multiple responses from the server to the client via AJAX:
It is something I expected, since I opted to go with AJAX instead of sockets. Is there another way to exchange the data between the server and the client, like using browserify to translate an emitter script for the client? I suppose that I can't escape sockets, so I will take advice on a simpler library, as socket.io seems too complex for such a small operation.
Update:
Here is the Node.js code, as requested.
var maxRunning = 1;
var test_de_rf = ['rennen', 'ausgehen'];

function callHandler(word, cb) {
  console.log("word is - " + word);
  gender.gender_function_rf(word, function (result_rf) {
    console.log(result_rf);
    res.send(result_rf); // Here I send data back to the ajax call
    setTimeout(function() {
      cb(null);
    }, 3000);
  });
}

async.eachLimit(test_de_rf, maxRunning, function(item, done) {
  callHandler(item, function(err) {
    if (err) throw new Error(err);
    done();
  });
}, function(err) {
  if (err) throw new Error(err);
  console.log('done');
});
res.send() sends and finishes an HTTP response. You can only call it once per request, because the request is finished and done after calling it. It is a fairly high-level way of sending a response (it does everything in one call).
If you wanted to have several different functions contributing to a response, you could use the lower-level functions on the http object, such as res.setHeader(), res.writeHead(), res.write() (which you can call multiple times) and res.end() (which indicates the end of the response).
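For illustration, a minimal sketch with a bare Node http server (the route and chunk contents are made up):

// Streaming a response in several chunks with the lower-level API
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('first chunk\n');  // res.write() may be called multiple times
  res.write('second chunk\n');
  res.end('done\n');           // ends the response
}).listen(3000);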
You can use the standard webSocket API in the browser and a webSocket module for server-side support, or you can use socket.io, which offers both client and server support plus a number of higher-level features (such as automatic reconnect, automatic failover to HTTP polling if webSockets are not supported, etc.).
All that said, if what you really want is the ability to just send some data from server to client whenever you want, then a webSocket is really the better way to go. It is a persistent connection, is supported by all modern browsers, and allows the server to send data unsolicited to the client at any time. I'd hardly say socket.io is complex. The doc isn't particularly great at explaining things (not uncommon in the open source world, as the node.js doc isn't particularly great either), but I've always been able to figure advanced things out by looking at a few runtime data structures in the debugger and/or looking at the source code.
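For a sense of scale, a minimal sketch using the ws package on the server plus the standard browser API (the port and message shape are made up):

// Server: push data to the client whenever it becomes available
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // send as many messages as you like over the open connection
  socket.send(JSON.stringify({ word: 'rennen', result: '...' }));
  socket.send(JSON.stringify({ word: 'ausgehen', result: '...' }));
});

// Browser side, using the standard WebSocket API:
// const ws = new WebSocket('ws://localhost:8080');
// ws.onmessage = (event) => console.log(JSON.parse(event.data));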
I am new to Angular and want to use it to send data to my app's backend. On several occasions, I have to make several HTTP POST calls that should either all succeed or all fail. This is the scenario that's causing me a headache: given two HTTP POST calls, what if one call succeeds but the other fails? This will lead to inconsistencies in the database. I want to know if there's a way to cancel the succeeding calls if at least one call has failed. Thanks!
Without knowing more about your specific situation, I would urge you to use promise error handling if you are not already doing so. The only situation I know of where you can cancel a request that has already been sent is by using the timeout option of $http (look at this SO post), but you can definitely prevent future requests. When you make an $http call, it returns a promise object (look at $q here) with two methods you can chain onto the request, success and error, so it looks like $http(...).success({...stuff...}).error({...more stuff...}). So if you have error handling in each of these scenarios and you get an .error, don't make the next call.
You can cancel the next requests in the chain, but the previous ones have already been sent. You need to provide the necessary backend functionality to reverse them.
If every step is dependent on the other and causes changes in your database, it might be better to do the whole process in the backend, triggered by a single "POST" request. I think it is easier to model this process synchronously, and that is easier to do in the server than in the client.
However, if you must do the post requests in the client side, you could define each request step as a separate function, and chain them via then(successCallback, errorCallback) (Nice video example here: https://egghead.io/lessons/angularjs-chained-promises).
In your case, at each step you can check whether the previous one failed and take action to reverse it by using the error callback of then:
var firstStep = function(initialData){
  return $http.post('/some/url', initialData).then(function(dataFromServer){
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  });
};

var secondStep = function(dataFromPreviousStep){
  return $http.post('/some/other/url', dataFromPreviousStep.dataNeededByNextStep).then(function(dataFromServer){
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  }, function(){
    // On error
    reversePreviousStep(dataFromPreviousStep.dataNeededToReverseThisStep);
  });
};

var thirdStep = function(){ ... };
...
firstStep(initialData).then(secondStep)
  .then(thirdStep)
...
If any of the steps in the chain fails, its promise will fail, and the next steps will not be executed.