In Cypress, when should you use a Custom Command vs a Task?

What is the difference between a Custom Command and a Task? I am trying to understand how they should each be used.
Custom Command documentation: https://docs.cypress.io/api/cypress-api/custom-commands.html
Task documentation: https://docs.cypress.io/api/commands/task.html

A command (most methods on the global cy object) is a function that enqueues (pushes) an action onto a queue of currently-executing commands. The queue executes serially and asynchronously (that's why the return value of a command is an object with a .then method --- but despite that, and despite behaving like a promise, it's not a promise). Until the previous command has finished, the next command doesn't execute.
Commands are defined and executed directly in the browser.
A custom command is a regular command, but defined by you as opposed to the default commands that Cypress supplies out of the box. Custom commands are useful for automating a workflow you repeat in your tests over and over (e.g. by grouping several default cy commands together).
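For instance, here is a minimal sketch of a custom login command (the route, selectors, and credentials are hypothetical):
// cypress/support/commands.js
// A hypothetical custom command grouping several default cy commands
// into one reusable login workflow.
Cypress.Commands.add('login', (username, password) => {
  cy.visit('/login')                          // assumed login route
  cy.get('[data-cy=username]').type(username)
  cy.get('[data-cy=password]').type(password)
  cy.get('[data-cy=submit]').click()
})
// In a test: cy.login('jane', 's3cret')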
Commands are used to interact with your web app under test (AUT) --- most notably with the DOM (e.g. via cy.get(selector) to query the DOM), and to make assertions.
It's also important to realize that while commands are executed serially, they are enqueued immediately (in the same event loop tick), and any expressions you pass to them are evaluated then and there. This isn't Cypress-specific behavior, just plain JavaScript. That's why you can't do things like this:
// INCORRECT USAGE
let value;
cy.get('.myInput').invoke('val').then(val => value = val);
cy.get('.mySecondInput').type(value); // ✗ value is undefined here
Nor can you use async/await:
// INCORRECT USAGE
// ✗ doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
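The idiomatic alternatives are to keep the value inside the command chain, or to store it under an alias --- a minimal sketch reusing the same selectors:
// CORRECT USAGE --- stay inside the chain
cy.get('.myInput').invoke('val').then(value => {
  cy.get('.mySecondInput').type(value);
});

// or store the value under an alias and retrieve it later
cy.get('.myInput').invoke('val').as('firstValue');
cy.get('@firstValue').then(value => {
  cy.get('.mySecondInput').type(value);
});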
A task is a function defined and executed on the Cypress backend process (Node.js), not in the browser.
To execute a task (which you previously defined in your cypress/plugins/index.js file), you need to first enqueue it as a regular command in your test via cy.task(taskName, data). Cypress then (when the command takes its turn to execute) sends a message to the backend process where the task is executed.
Data your task returns is serialized (via JSON.stringify or something similar) and sent back to the browser where it's passed to a callback you potentially chained to your cy.task() command using .then(callback).
Tasks are mainly used to communicate with your own server backend, e.g. to seed the database, or for I/O such as reading/writing to a file (although Cypress supplies commands for some of these, such as cy.exec() and cy.writeFile()).
There are no default tasks --- every task you execute you first need to define yourself.
Another important point is that the messages sent between the processes (the Cypress browser process and the Cypress Node process) go over an IPC channel and must be serializable. That means the data you pass to cy.task(taskName, data) is stringified, as is the response returned from the task itself. Thus, sending e.g. an object containing a method will not work (the method won't be transferred at all).
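As a minimal sketch (the task name and its body are hypothetical), a task is defined in cypress/plugins/index.js and invoked from a test:
// cypress/plugins/index.js --- runs in the Node.js backend process
module.exports = (on, config) => {
  on('task', {
    // hypothetical task that seeds a test database
    seedDatabase(data) {
      // ...insert `data` into the database here...
      return { seeded: true } // must be JSON-serializable
    },
  })
}

// in a test (browser process):
// cy.task('seedDatabase', { users: 3 }).then(result => { /* ... */ })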

Great answers, but to sum up, here are the two main differences that help you choose whether you need a Cypress command or a task:
If you need to run a promise or interact with your backend, go with a task.
If you are interacting with the DOM and making assertions, go with a command.
Taken from this article

Cypress commands, in general, do not return promises. The documentation refers to them as 'thenable': it simply means the results are only obtained via the .then(result => {}) construct.
This is why the comment above is true. (Shown here for reference)
// INCORRECT USAGE
// ✗ doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
However, there is a native way to wrap a Cypress command to get true async/await behavior, as shown here:
function getAsync(query, cb) {
  // Wrap the Cypress command in a real Promise so it can be awaited.
  let prom = new Promise<any[]>((resolve, reject) => {
    cy.get(query).then((elements) => {
      let objArray = [];
      if (elements === undefined) {
        reject();
        return;
      }
      elements.each((index) => {
        let element = elements[index];
        let obj = cb(element);
        objArray.push(obj);
      });
      resolve(objArray);
    });
  });
  return prom;
}
To call this function above:
it('Gets specific DOM Elements', async () => {
  let data = await getAsync('form', getForm);
  ...

Related

How do I use Heartbeat with a Callback Return Step Function in my Lambda Function?

My Lambda function is required to send a token back to the step function for it to continue, as it is a task within the state machine.
Looking at my try/catch block of the lambda function, I am contemplating:
The order of SendTaskHeartbeatCommand and SendTaskSuccessCommand
The required parameters of SendTaskHeartbeatCommand
Whether I should add the SendTaskHeartbeatCommand to the catch block, and if so, in which order the commands should go.
Current code:
try {
  const magentoCallResponse = await axios(requestObject);
  await stepFunctionClient.send(new SendTaskHeartbeatCommand(taskToken));
  await stepFunctionClient.send(new SendTaskSuccessCommand({ output: JSON.stringify(magentoCallResponse.data), taskToken }));
  return magentoCallResponse.data;
} catch (err: any) {
  console.log("ERROR", err);
  await stepFunctionClient.send(new SendTaskFailureCommand({ error: JSON.stringify("Error Sending Data into Magento"), taskToken }));
  return false;
}
I have read the documentation for AWS SDK V3 for SendTaskHeartbeatCommand and am confused with the required input.
The SendTaskHeartbeat and SendTaskSuccess API actions serve different purposes.
When your task completes, you call SendTaskSuccess to report this back to Step Functions and to provide the results from the Task that your workflow can then process. You do not need to call SendTaskHeartbeat before SendTaskSuccess, and the usage you have in the code above seems unnecessary.
SendTaskHeartbeat is optional and you use it when you've set "HeartbeatSeconds" on your Task. When you do this, you then need your worker (i.e. the Lambda function in this case) to send back regular heartbeats while it is processing work. I'd expect that to be running asynchronously while your code above was running the first line in the try block. The reason for having heartbeats is that you can set a longer TimeoutSeconds (or dynamically using TimeoutSecondsPath) than HeartbeatSeconds, therefore failing / retrying fast when the worker dies (Heartbeat timeout) while you still allow your tasks to take longer to complete.
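A minimal sketch of that heartbeat pattern, assuming the AWS SDK v3 @aws-sdk/client-sfn client (note that SendTaskHeartbeatCommand takes an object with a taskToken property):
const { SFNClient, SendTaskHeartbeatCommand } = require("@aws-sdk/client-sfn");

const stepFunctionClient = new SFNClient({});

// send a heartbeat every 30 seconds while the real work is in flight
const heartbeat = setInterval(() => {
  stepFunctionClient
    .send(new SendTaskHeartbeatCommand({ taskToken }))
    .catch((err) => console.log("heartbeat failed", err));
}, 30_000);

try {
  const magentoCallResponse = await axios(requestObject);
  // ...SendTaskSuccessCommand / SendTaskFailureCommand as in the question...
} finally {
  clearInterval(heartbeat); // stop heartbeats once the work settles
}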
That said, it's not clear why you are using .waitForTaskToken with Lambda. Usually, you can just use the default Request Response integration pattern with Lambda. This uses the synchronous invoke mode for Lambda and will return the response back to you without you needing to integrate back with Step Functions in your Lambda code. Possibly you are reading these off of an SQS queue for concurrency control or something. But if not, just use Request Response.

Cypress cy.wait() only waits for the first network call, need to wait for all calls

I would like to wait until the webpage is fully loaded with items. Each item is retrieved with a GET request.
I would like to wait on all of these items until the page has fully loaded. I have already created an intercept for them, named 4ItemsInEditorStub.
I have tried cy.wait('@4ItemsInEditorStub.all')
But this gives a timeout error at the end.
How can I let Cypress wait until all "4ItemsInEditorStub" intercepts have completed?
Trying to wait on @alias.all won't work -- Cypress has no idea what .all means in this context, or what value it should have. Even after your 4 expected calls have completed, there could be a fifth call after that (Cypress can't know). @alias.all should only be used with cy.get(), to retrieve all the calls yielded by that alias.
Instead, if you know that it will always be four calls, you can just wait four times.
cy.wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub');
You can either hard-code a long enough wait (i.e. cy.wait(3_000)) to cover the triggered request time and then use cy.get('@4ItemsInEditorStub.all')
cy.wait(10_000)
cy.get('@4ItemsInEditorStub.all')
// do some checks with the calls
or you can use unique intercepts and aliases to wait on all 4
cy.intercept('/your-call').as('4ItemsInEditorStub1')
cy.intercept('/your-call').as('4ItemsInEditorStub2')
cy.intercept('/your-call').as('4ItemsInEditorStub3')
cy.intercept('/your-call').as('4ItemsInEditorStub4')
cy.visit('')
cy.wait([
  '@4ItemsInEditorStub1',
  '@4ItemsInEditorStub2',
  '@4ItemsInEditorStub3',
  '@4ItemsInEditorStub4',
])
There is a package cypress-network-idle that makes the job simple
cy.waitForNetworkIdlePrepare({
  method: 'GET',
  pattern: '**/api/item/*',
  alias: 'calls',
})

cy.visit('/')

// now wait for the "@calls" to finish
cy.waitForNetworkIdle('@calls', 2000) // no further requests after 2 seconds
Installation
# install using NPM
npm i -D cypress-network-idle
# install using Yarn
yarn add -D cypress-network-idle
In cypress/support/e2e.js
import 'cypress-network-idle'
Network idle testing looks good, but you might find it difficult to set the right time period, which may change from run to run (depending on network speed).
Take a look at my answer here Test that an API call does NOT happen in Cypress.
Using a custom command, you can wait for a maximum number of calls without failing if there are actually fewer.
For example, if you have 7 or 8 calls, setting the maximum to 10 ensures you wait for all of them.
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
})
cy.intercept(...).as('allNetworkCalls')
cy.visit('/');

// up to 10 calls
Cypress._.times(10, () => {
  cy.maybeWaitAlias('@allNetworkCalls', { timeout: 1000 }) // only need a short timeout
})

// get an array of all the calls
cy.get('@allNetworkCalls.all')
  .then(calls => {
    console.log(calls)
  })

Is writing customised logs to a file with writeFile a good idea in Cypress?

I would like to write some customised logs to files with Cypress, such as 'current test case name', 'user id used for this test', 'fail or pass', etc.
I googled a while and found cy.writeFile meets my needs. But it seems most people recommend cy.task for logging.
So is cy.writeFile a good idea for logging? If not, why not?
Thanks!
BTW, here's the code, very simple:
function logger (log) {
  cy.writeFile('logs/combined.log', log + '\n', { flag: 'a+' })
}

module.exports = {
  logger
}
The cy.task() command is generally used when you do not want the tests themselves to interrupt the logging, or when you want to interact with the Node process itself, whereas cy.writeFile() has to be called within each test and cannot interact with the Node process. You can add things to your plugins file so that continuous logs are produced regardless of which test is being run, and concatenate them into the same file.
// in plugins/index.js file
const path = require('path')
const fsExtra = require('fs-extra')

on('task', {
  readJson: (filename) => {
    // reads the file relative to the current working directory
    return fsExtra.readJson(path.join(process.cwd(), filename))
  }
})
https://docs.cypress.io/api/commands/task.html#Command
cy.task() provides an escape hatch for running arbitrary Node code, so you can take actions necessary for your tests outside of the scope of Cypress. This is great for:
- Seeding your test database.
- Storing state in Node that you want persisted between spec files.
- Performing parallel tasks, like making multiple http requests outside of Cypress.
- Running an external process.
In the task plugin event, the command will fail if undefined is returned. This helps catch typos or cases where the task event is not handled.
If you do not need to return a value, explicitly return null to signal that the given event has been handled.
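To make that concrete for the logging use case in the question, here is a minimal sketch of a logging task (the task name 'log' and the file path are hypothetical):
// in cypress/plugins/index.js
const fs = require('fs')

on('task', {
  // hypothetical task: appends one line to a combined log file;
  // runs in the Node process, so it works across tests and spec files
  log(message) {
    fs.appendFileSync('logs/combined.log', message + '\n')
    return null // explicitly signal that the event was handled
  }
})

// in a test:
// cy.task('log', 'current test: my test name, user: 123, pass')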

Asynchronous node.js calls fail when run in a VM script

I've managed to write an extension to my custom node.js server that allows an AJAX script to be run using the VM module. I've also succeeded in persuading VM to acknowledge the existence of require, which is necessary for my scripts to be of any use.
But I've hit a truly weird and intractable problem.
When I run the AJAX script "normally", by handing it off to a child process using child_process.spawn(), all my asynchronous API calls work as expected, and my callback chain runs to completion. In the case of the script I'm currently testing, this uses fs.readdir() to launch the callback chain, and it works in child process mode, returning directory listings to the HTML client that makes the initial AJAX request.
But when I run the AJAX script inside VM using new vm.Script(), followed by script.runInContext(), something weird happens. The script reaches my fs.readdir() call, but the callback is never called! The entire script stalls at this point!
Since the same V8 engine is running the code in both child process and VM mode, I'm more than a little puzzled, as to why the AJAX script works perfectly if launched in child process mode, but fails to launch the callback chain at all when launched in VM mode.
The only reason I can think of, why VM doesn't like my AJAX script, is because the callbacks are defined as inner functions of a closure, and the outer function comes to an end after the fs.readdir() call. But if that's the case, why does the script work perfectly, both when run in the node.js debugger, and passed to a child process by my server after an AJAX request?
The documentation for VM makes no mention of fs calls being blocked if called within code running under VM, and I wouldn't have embarked upon this development path, if I'd been informed beforehand that such blocking had been implemented.
So what is going on?
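For reference, here is a minimal sketch of the two execution modes described above (the script path and the context globals are assumptions, not the actual server code):
// Mode 1: child process --- callbacks fire as expected
const cp = require('child_process');
cp.spawn('node', ['ajaxScript.js', /* script args */]);

// Mode 2: VM --- the script runs, but the fs.readdir() callback never fires
const vm = require('vm');
const fs = require('fs');

const source = fs.readFileSync('ajaxScript.js', 'utf8');
const script = new vm.Script(source);
const context = vm.createContext({ require, console, process });
script.runInContext(context);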
EDIT: Here's the code that processes the arguments passed to the script. It calls a function getCmdArgs(), defined earlier, which I don't need to list here because it's been tested and works. It works in both child process mode and VM mode, and conducts a stringent test to see if the argument-passing variables needed to supply the script with its data actually exist, before returning a suitable return argument (the args in question, if they exist). Once again, this part works.
//Get the arguments passed to the script - new dual mode version! See function notes above.
var args = getCmdArgs();

//Check to see if the script is being run in a VM sandbox by the server ...
if (args[0] == "RunInVM")
    runMode = "V";
else
    runMode = "P";
//End if/else
Next, once the script arguments have been passed to the script, it's time to process them, and find out what directory is being referenced. I use an anonymised path reference, which begins with the word "root", the idea being that the client does not need to know where on the server it has had its permitted storage allocated, only that it has storage on disc that it's permitted to access. The client, once given this, can manipulate that directory tree at will, and the child process versions of the scripts that permit this all work. So, this is how the arguments are processed:
//Set our root directory, to which all other directory references will be attached ...
//ROOTPATH is set at the beginning of the script, and is known only to the script, not the client
var dirRoot = ROOTPATH;

//Now get the arguments that were passed to the script ...
var tmpChk = args[2];
var queryMarker = tmpChk.indexOf("?");

if (queryMarker != -1)
{
    //Strip off the initial "?"
    var actualQuery = tmpChk.substring(queryMarker + 1);

    //Separate the x=y parts from each other
    var queryParts = actualQuery.split("=");
    var argType = queryParts[0];
    var argData = queryParts[1];

    if (argType === "dir")
    {
        requestedDir = argData;

        //Here, we have an argument of the form "dir=xxx". Find out what "xxx" is.
        if (argData === "root")
            dirName = dirRoot;
        else
        {
            //Here, we have an argument of the form "dir=root/xxx". Find out what "xxx" is,
            //and make it a subdirectory of our chosen root directory.
            var subIndex = argData.indexOf("root/");
            var subDir = argData.substring(subIndex + 5);

            subDir = subDir.replace(/\x2F/g, path.sep);
            subDir = qs.unescape(subDir);
            dirName = dirRoot + path.sep + subDir;

            //Also, insert an entry for the parent directory into our custom DirEntry array ...
            var newEntry = new DirEntry();
            newEntry.fileName = "(Parent Directory)";
            newEntry.fileType = "DIR";

            //Remember we're using the UNIX separator for our GET call!
            var pdIdx = requestedDir.lastIndexOf("/");
            var pd = requestedDir.substring(0, pdIdx);

            //This is the secure path to the parent directory
            newEntry.filePath = pd;
            myDirEntries[idx2++] = newEntry;
        } //End if/else
    } //End if
}
The above generates the initial directory entry to be returned (via a custom DirEntry() object), and again, this all works. It also determines what path is to be referenced when searching the server's storage.
At this point, we're ready to roll, and we call fs.readdir() accordingly:
//Get entire contents of dir in one go!
fs.readdir(dirName, DirContentsCallback);
Now, when running in child process mode, the above function executes, and the callback function DirContentsCallback(), which is the first part of the callback chain, is duly launched. But when I try to run it in VM mode, this doesn't happen. The callback is never called. If fs.readdir() had actually executed and an error had been generated, surely my callback would have been called with a non-null error argument? Instead, the whole script just grinds to a halt and hangs.
I inserted a series of console.log() calls to check the progress of the script, to see if any parts of the callback chain were ever executed, and NONE of the console.log() calls fired. My entire callback chain stalled. This does not happen if I run the script in child process mode, by passing it to cp.spawn() - instead, I end up with a completed AJAX transaction, and my directory contents are nicely displayed in the client, once the client receives the JSON data generated by the script. What's even more bizarre is that the SAME DATA is being passed to the script in both modes - I've checked this repeatedly. Yet the data that worked perfectly in child process mode, and generated a target directory path that was duly examined for entries, comes crashing to a silent halt the moment I try executing fs.readdir() in VM mode.
Now, to be perfectly honest, I don't see why I needed to bother posting a host of prior code that all works, when the part that DOESN'T work in VM mode is the last line of the lot. What I opened this question with seemed to me perfectly sufficient to alert people to the problem. But if it means I receive meaningful answers, then so be it - waste time posting code that works I shall.

Angular.JS multiple $http post: canceling if one fails

I am new to Angular and want to use it to send data to my app's backend. On several occasions, I have to make several HTTP POST calls that should either all succeed or all fail. This is the scenario that's causing me a headache: given two HTTP POST calls, what if one call succeeds but the other fails? This will lead to inconsistencies in the database. I want to know if there's a way to cancel the succeeding calls if at least one call has failed. Thanks!
Without knowing more about your specific situation, I would urge you to use promise error handling if you are not already doing so. The only situation I know of where you can cancel a request that has already been sent is by using the timeout option of $http (look at this SO post), but you can definitely prevent future requests. When you make an $http call, it returns a promise object (look at $q here). This gives you two methods you can chain on your $http request, called success and error, so it looks like $http(...).success({...stuff...}).error({...more stuff...}). So if you do have error handling in each of these scenarios and you get an .error, don't make the next call.
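A minimal sketch of the timeout-based cancellation mentioned above (the URL and data are hypothetical; passing a promise as the timeout config is standard $http behavior):
// a deferred whose promise, once resolved, aborts the in-flight request
var canceler = $q.defer();

$http.post('/api/items', data, { timeout: canceler.promise })
  .success(function (result) { /* continue with the next call */ })
  .error(function (err) { /* failed or was cancelled */ });

// later, e.g. when another call in the batch fails:
canceler.resolve();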
You can cancel the next requests in the chain, but the previous ones have already been sent. You need to provide the necessary backend functionality to reverse them.
If every step is dependent on the other and causes changes in your database, it might be better to do the whole process in the backend, triggered by a single "POST" request. I think it is easier to model this process synchronously, and that is easier to do in the server than in the client.
However, if you must do the post requests in the client side, you could define each request step as a separate function, and chain them via then(successCallback, errorCallback) (Nice video example here: https://egghead.io/lessons/angularjs-chained-promises).
In your case, at each step you can check if the previous one failed and take action to reverse it by using the error callback of then:
var firstStep = function (initialData) {
  return $http.post('/some/url', data).then(function (dataFromServer) {
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  });
};

var secondStep = function (dataFromPreviousStep) {
  return $http.post('/some/other/url', data).then(function (dataFromServer) {
    // Do something with the data
    return {
      dataNeededByNextStep: processedData,
      dataNeededToReverseThisStep: moreData
    };
  }, function () {
    // On error, reverse the previous step
    reversePreviousStep(dataFromPreviousStep.dataNeededToReverseThisStep);
  });
};

var thirdStep = function () { ... };
...

firstStep(initialData).then(secondStep)
  .then(thirdStep)
...
If any of the steps in the chain fails, its promise will fail, and the next steps will not be executed.
