Cypress: how to wait for all requests to finish

I am using cypress to test our web application.
On certain pages there are different endpoint requests that are executed multiple times (e.g. GET /A, GET /B, GET /A).
What would be the best practice in Cypress to wait for all requests to finish and guarantee that the page has fully loaded?
I don't want to use a ton of cy.wait() commands to wait for all requests to be processed (there are a lot of different sets of requests on each page).

You can use the cy.route() feature from Cypress. Using this you can intercept all your GET requests and wait until all of them have executed:
cy.server()
cy.route('GET', '**/users').as('getusers')
cy.visit('/')
cy.wait('@getusers')
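Note that cy.server() and cy.route() were deprecated in Cypress 6 in favor of cy.intercept(); a minimal equivalent sketch:
cy.intercept('GET', '**/users').as('getusers')
cy.visit('/')
cy.wait('@getusers')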

I'm sure this is not recommended practice, but here's what I came up with. It effectively waits until there has been no response for a certain amount of time:
function debouncedWait({ debounceTimeout = 3000, waitTimeout = 4000 } = {}) {
  cy.intercept('/api/*').as('ignoreMe');
  let done = false;
  const recursiveWait = () => {
    if (!done) {
      // set a timeout so if no response arrives within debounceTimeout,
      // send a dummy request to satisfy the current wait
      const x = setTimeout(() => {
        done = true; // end recursion
        fetch('/api/blah');
      }, debounceTimeout);
      // wait for a response
      cy.wait('@ignoreMe', { timeout: waitTimeout }).then(() => {
        clearTimeout(x); // cancel this wait's timeout
        recursiveWait(); // wait for the next response
      });
    }
  };
  recursiveWait();
}

According to the Cypress FAQ there is no definite way. But I will share some solutions I use:
Use the jQuery syntax supported by Cypress:
$(document).ready(function() {
  // Code to run after the document is ready
});
The problem is that after the initial load, some action on the page can initiate a second load.
Select an element, such as an image, and wait for it to load. The problem with this method is that some other element might need more time.
Decide on a mandatory time you will wait for the API requests (I personally use 4000 ms for my app) and place a cy.wait(mandatoryWaitTime) where you need your page to be loaded.
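A minimal sketch of the last two options (the selector name and timings here are hypothetical):
// wait for a specific element, e.g. an image, to be visible
cy.get('img.main-banner', { timeout: 10000 }).should('be.visible')

// or a fixed mandatory wait before interacting with the page
const mandatoryWaitTime = 4000
cy.wait(mandatoryWaitTime)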

I faced the same issue with our large Angular application, which makes tens of requests as you navigate through it.
At first I tried what you are asking: to automatically wait for all requests to complete. I used https://github.com/bahmutov/cypress-network-idle as suggested by @Xiao Wang in this post. This worked and did the job, but I eventually realized I was over-optimizing my tests. Tests became slow, waiting for all kinds of calls to finish, even those that weren't needed at that point in time (like 3rd-party analytics).
So I'd suggest not trying to wait for everything at a step, but instead finding the key API calls (you don't need to know the full path; even api/customers is enough) in your test step, using cy.intercept() to create an alias for them, and then using cy.wait() with your alias. The result is that you wait only when needed and only for the calls that really matter.
// At this point, there are lots of GET requests that need to finish in order to continue the test
// Intercept calls that contain a GET request with a request path containing /api/customer/
cy.intercept({ method: 'GET', url: '**/api/customer/**' }).as("customerData");
// Wait for all the GET requests with path containing /api/customer/ to complete
cy.wait("#customerData");
// Continue my test knowing all requested data is available..
cy.get(".continueMyTest").click()

Related

Cypress cy.wait() only waits for the first network call, need to wait for all calls

I would like to wait until the webpage is loaded with items, each of which is retrieved with a GET.
And I would like to wait on all these items until the page is fully loaded. I already made an intercept for these, named 4ItemsInEditorStub.
I have tried cy.wait('@4ItemsInEditorStub.all')
But this gives a timeout error at the end.
How can I make Cypress wait until all "4ItemsInEditorStub" interceptions are completed?
Trying to wait on alias.all won't work -- Cypress has no idea what .all means in this context, or what value it should have. Even after your 4 expected calls have completed, there could be a fifth call after that (Cypress doesn't know). alias.all should only be used with cy.get(), to retrieve all the calls yielded by that alias.
Instead, if you know that it will always be four calls, you can just wait four times.
cy.wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub');
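If you want to avoid the repetition, the Lodash bundled with Cypress can express the same four waits more compactly (a sketch):
Cypress._.times(4, () => cy.wait('@4ItemsInEditorStub'))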
You can either hard-code a long enough wait (e.g. cy.wait(3_000)) to cover the triggered request time and then use cy.get('@4ItemsInEditorStub.all')
cy.wait(10_000)
cy.get('@4ItemsInEditorStub.all')
// do some checks with the calls
or you can use unique intercepts and aliases to wait on all 4:
cy.intercept('/your-call').as('4ItemsInEditorStub1')
cy.intercept('/your-call').as('4ItemsInEditorStub2')
cy.intercept('/your-call').as('4ItemsInEditorStub3')
cy.intercept('/your-call').as('4ItemsInEditorStub4')
cy.visit('')
cy.wait([
  '@4ItemsInEditorStub1',
  '@4ItemsInEditorStub2',
  '@4ItemsInEditorStub3',
  '@4ItemsInEditorStub4',
])
There is a package, cypress-network-idle, that makes the job simple:
cy.waitForNetworkIdlePrepare({
  method: 'GET',
  pattern: '**/api/item/*',
  alias: 'calls',
})
cy.visit('/')
// now wait for the "@calls" to finish
cy.waitForNetworkIdle('@calls', 2000) // no further requests after 2 seconds
Installation
# install using NPM
npm i -D cypress-network-idle
# install using Yarn
yarn add -D cypress-network-idle
In cypress/support/e2e.js
import 'cypress-network-idle'
Network idle testing looks good, but you might find it difficult to set the right time period, which may change from run to run (depending on network speed).
Take a look at my answer here Test that an API call does NOT happen in Cypress.
Using a custom command, you can wait for a maximum number of calls without failing if there are actually fewer calls.
For example, if you have 7 or 8 calls, setting the maximum to 10 ensures you wait for all of them:
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  // resolve the wait either way, so a missing call does not fail the test
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
})

cy.intercept(...).as('allNetworkCalls')
cy.visit('/');

// up to 10 calls
Cypress._.times(10, () => {
  cy.maybeWaitAlias('@allNetworkCalls', { timeout: 1000 }) // only need a short timeout
})

// get an array of all the calls
cy.get('@allNetworkCalls.all')
  .then(calls => {
    console.log(calls)
  })

Angular.JS multiple $http post: canceling if one fails

I am new to Angular and want to use it to send data to my app's backend. On several occasions, I have to make several HTTP POST calls that should either all succeed or all fail. This is the scenario that's causing me a headache: given two HTTP POST calls, what if one call succeeds but the other fails? This will lead to inconsistencies in the database. I want to know if there's a way to cancel the succeeding calls if at least one call has failed. Thanks!
Without knowing more about your specific situation, I would urge you to use promise error handling if you are not already doing so. The only situation I know of where you can cancel a request that has already been sent is by using the timeout option of $http (look at this SO post), but you can definitely prevent future requests. When you make an $http call, it returns a promise object (look at $q here) with two methods you can chain on the request, success and error, so it looks like $http(...).success({...stuff...}).error({...more stuff...}). So if you do have error handling in each of these scenarios and you get a .error, don't make the next call.
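For reference, a minimal sketch of that cancellation mechanism: the $http config accepts a timeout property that may be a promise, and resolving that promise aborts the in-flight request (the URL and variable names here are my own):
var canceler = $q.defer();
$http.post('/some/url', data, { timeout: canceler.promise })
  .error(function () {
    // handle the failure and skip the next call
  });

// elsewhere, while the request is still pending:
canceler.resolve(); // aborts the request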
You can cancel the next requests in the chain, but the previous ones have already been sent. You need to provide the necessary backend functionality to reverse them.
If every step is dependent on the others and causes changes in your database, it might be better to do the whole process in the backend, triggered by a single POST request. I think it is easier to model this process synchronously, and that is easier to do on the server than on the client.
However, if you must do the post requests in the client side, you could define each request step as a separate function, and chain them via then(successCallback, errorCallback) (Nice video example here: https://egghead.io/lessons/angularjs-chained-promises).
In your case, at each step you can check if the previous one failed and take action to reverse it by using the error callback of then:
var firstStep = function(initialData){
  return $http.post('/some/url', initialData).then(function(dataFromServer){
    // Do something with the data
    // (processedData and moreData stand for values you derive here)
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  });
};

var secondStep = function(dataFromPreviousStep){
  return $http.post('/some/other/url', dataFromPreviousStep.dataNeededByNextStep).then(function(dataFromServer){
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  }, function(){
    // On error, undo the previous step
    reversePreviousStep(dataFromPreviousStep.dataNeededToReverseThisStep);
  });
};
var thirdStep = function(){ ... };
...
firstStep(initialData).then(secondStep)
  .then(thirdStep)
...
If any of the steps in the chain fails, its promise will fail, and the next steps will not be executed.
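If the requests don't depend on each other's results, $q.all is an alternative way to detect that at least one call failed (a sketch; you still have to issue the compensating requests yourself):
$q.all([
  $http.post('/some/url', dataA),
  $http.post('/some/other/url', dataB)
]).then(function (responses) {
  // both calls succeeded
}, function (rejection) {
  // at least one call failed; reverse the ones that succeeded here
});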

Jquery async calls block page reload

I have a function that performs a backup every 5 seconds. From time to time the target server of the backup is not reachable, and the request hangs until the timeout is reached.
Since this affects the user interface, I execute this 'backup function' as an async AJAX request.
setInterval("doSync()", 5000 );
function doSync() {
$.ajax({
url: "backup.php",
async : true
});
};
This runs pretty well in the background.
But as soon as a reload of the page is executed, already-pending backup calls will be completed first. So in the worst case, if I have a backup request with a 30-second timeout, the user has to wait those 30 seconds before the new page loads.
That is not acceptable for the user.
Which strategy can I implement to avoid this?
It would be ok to terminate the backup request...
I think that issue is rather specific to the browser.
Indeed, most of them limit the number of parallel requests to the same host, and that's why the browser "waits" before reloading the page.
If you're calling the exact same URL via your AJAX request as the one you're trying to reload, Firefox will not run more than ONE request at the same time. A simple workaround is to append a random query string to the URL.
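For example, a sketch of that workaround:
// appending a timestamp makes the AJAX URL differ from the page URL
$.ajax({ url: "backup.php?_=" + Date.now(), async: true });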
Another option is to use the JavaScript beforeunload event to cancel your AJAX request: Abort Ajax requests using jQuery
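A minimal sketch of that approach, keeping a handle to the pending request and aborting it before unload (the variable names are my own):
var currentBackup = null;

function doSync() {
  currentBackup = $.ajax({ url: "backup.php", async: true });
}

$(window).on("beforeunload", function () {
  if (currentBackup) {
    currentBackup.abort(); // cancel the in-flight backup request
  }
});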
I would also think about setting a timeout on the AJAX request in your case.
I also found a similar problem already solved: click

Building a high-performance node.js application with cluster and node-webworker

I'm not a node.js master, so I'd like to have more points of view about this.
I'm creating an HTTP node.js web server that must handle not only lots of concurrent connections but also long-running jobs. By default node.js runs in one process, and if there's a piece of code that takes a long time to execute, any subsequent connection must wait until the code finishes what it's doing for the previous connection.
For example:
var http = require('http');
http.createServer(function (req, res) {
  doSomething(); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
So I was thinking to run all the long running jobs in separate threads using the node-webworker library:
var http = require('http');
var sys = require('sys');
var Worker = require('webworker');
http.createServer(function (req, res) {
  var w = new Worker('doSomething.js'); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
And to make the whole thing more performant, I thought to also use cluster to create a new node process for each CPU core.
In this way I expect to balance the client connections across different processes with cluster (say, 4 node processes if I run it on a quad-core), and then execute the long-running jobs on separate threads with node-webworker.
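Roughly, the cluster part would look like this (a minimal sketch using Node's built-in cluster module; the handler body is a placeholder):
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // fork one worker process per CPU core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // each worker runs its own server; connections are balanced across them
  http.createServer(function (req, res) {
    // hand long running jobs off to a webworker here
    res.end('ok');
  }).listen(1337, "127.0.0.1");
}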
Is there something wrong with this configuration?
I see that this post is a few months old, but I wanted to provide a comment to this in the event that someone comes along.
"By default node.js runs on one process, and if there's a piece of code that takes a long time to execute any subsequent connection must wait until the code ends what it's doing on the previous connection."
^-- This is not entirely true. If doSomething() is required to complete before you send back the response, then yes; but if it isn't, you can make use of the asynchronous functionality available in the core of Node.js and return immediately, while this item processes in the background.
A quick example of what I'm explaining can be seen by adding the following code in your server:
setTimeout(function(){
  console.log("Done with 5 second item");
}, 5000);
If you hit the server a few times, you will get an immediate response on the client side, and eventually see the console fill with the messages seconds after the response was sent.
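For illustration, a minimal sketch of a handler that sends the response immediately while the slow item keeps running in the background:
var http = require('http');

http.createServer(function (req, res) {
  // respond right away...
  res.writeHead(202);
  res.end('Accepted');

  // ...while the slow item finishes after the response has been sent
  setTimeout(function () {
    console.log("Done with 5 second item");
  }, 5000);
}).listen(1337, "127.0.0.1");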
Why don't you just copy and paste your code into a file and run it over JXcore like
$ jx mt-keep:4 mysourcefile.js
and see how it performs. If you need real multithreading without leaving the safety of single threading, try JX. It's 100% node.JS 0.12+ compatible. You can spawn the threads and run a whole node.js app inside each of them separately.
You might want to check out Q-Oper8 instead as it should provide a more flexible architecture for this kind of thing. Full info at:
https://github.com/robtweed/Q-Oper8

Long Running Wicket Ajax Request

I occasionally have some long running AJAX requests in my Wicket application. When this occurs the application is largely unusable as subsequent AJAX requests are queued up to process synchronously after the current request. I would like the request to terminate after a period of time regardless of whether or not a response has been returned (I have a user requirement that if this occurs we should present the user an error message and continue). This presents two questions:
Is there any way to specify a timeout that's specific to one AJAX request, or to all AJAX requests?
If not, is there any way to kill the current request?
I've looked through the wicket-ajax.js file and I don't see any mention of a request timeout whatsoever.
I've even gone so far as to try reloading the page after some timeout on the client side, but unfortunately the server is still busy processing the original AJAX request and does not return until that request has finished processing.
Thanks!
I think it won't help you to let the client 'cancel' the request. (However this could work.)
The point is that the server is busy processing a request that is no longer required. If you want to time out such operations, you have to implement the timeout on the server side. If the operation takes too long, the server aborts it and returns some error value as the result of the Ajax request.
Regarding your queuing problem: you may consider using asynchronous requests instead of synchronous ones. This means that the client first sends a request to start the long-running process, and this request returns immediately. Then the client periodically polls the server and asks if the process has finished. Those poll requests also return immediately, saying either that the process is still running or that it has finished with a certain result.
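A minimal client-side sketch of that polling pattern, in jQuery (the URLs and response shape are hypothetical):
// kick off the long-running process; the server returns immediately
$.post('/startLongProcess', function (job) {
  var poll = setInterval(function () {
    // ask the server whether the process has finished
    $.get('/pollStatus?id=' + job.id, function (status) {
      if (status.finished) {
        clearInterval(poll);
        // use status.result here
      }
    });
  }, 2000);
});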
Failed solution: After a given setTimeout I kill the active transports and restart the channel, which handles everything on the client side. I avoided request conflicts by tying each to an ID and checking that against a global reference that increments each time a request is made and each time a request completes.
function longRunningCallCheck(refId) {
  // make sure the reference id matches the global id.
  // this indicates that we are still processing the
  // long running ajax call.
  if (refId == id) {
    // perform client processing here

    // kill all active transport layers
    var t = Wicket.Ajax.transports;
    for (var i = 0; i < t.length; ++i) {
      if (t[i].readyState != 0) {
        t[i].onreadystatechange = Wicket.emptyFunction;
        t[i].abort();
      }
    }

    // process the default channel
    Wicket.channelManager.done('0|s');
  }
}
Unfortunately, this still left the PageMap blocked and any subsequent calls wait for the request to complete on the server side.
My solution at this point is instead to provide the user an option to log out using a BookmarkablePageLink (which instantiates a new page, thus avoiding contention on the PageMap). Definitely not optimal.
Any better solutions are more than welcome, but this is the best one I could come up with.
