Is writing customised logs to a file with cy.writeFile a good idea?

I would like to write some customised logs to files with Cypress, such as 'current test case name', 'user's id used for this test', 'fail or pass', etc.
I googled for a while and found that cy.writeFile meets my needs. But it seems most people recommend cy.task for logging.
So is cy.writeFile a good idea for logging? If not, what's the reason?
Thanks!
BTW, here's the code, very simple:
function logger (log) {
  cy.writeFile('logs/combined.log', log + '\n', { flag: 'a+' })
}

module.exports = {
  logger
}

The cy.task() command is generally used when you do not want the tests themselves to interrupt the logging, or when you want to interact with the Node process itself, whereas cy.writeFile() has to be called from within each test and cannot interact with the Node process. You can register tasks in your plugins file so that continuous logs are produced regardless of which test is being run, and concatenate them all into the same file.
// in plugins/index.js file
const fsExtra = require('fs-extra')
const path = require('path')

on('task', {
  readJson: (filename) => {
    // reads the file relative to current working directory
    return fsExtra.readJson(path.join(process.cwd(), filename))
  }
})
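A spec can then invoke the task by name; a minimal usage sketch (the fixture path here is just an example):
cy.task('readJson', 'cypress/fixtures/data.json').then((data) => {
  // data is the parsed JSON returned from the Node process
  expect(data).to.be.an('object')
})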
https://docs.cypress.io/api/commands/task.html#Command
cy.task() provides an escape hatch for running arbitrary Node code, so you can take actions necessary for your tests outside of the scope of Cypress. This is great for:
- Seeding your test database.
- Storing state in Node that you want persisted between spec files.
- Performing parallel tasks, like making multiple http requests outside of Cypress.
- Running an external process.
In the task plugin event, the command will fail if undefined is returned. This helps catch typos or cases where the task event is not handled.
If you do not need to return a value, explicitly return null to signal that the given event has been handled.
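To apply that to the original logging question, here is a minimal task-based sketch; the 'log' task name, the append-to-file behaviour, and the file path are assumptions for illustration, not Cypress built-ins:
// in plugins/index.js
const fs = require('fs')

module.exports = (on, config) => {
  on('task', {
    log (message) {
      // runs in the Node process, so it works regardless of which test is running
      fs.appendFileSync('logs/combined.log', message + '\n')
      // explicitly return null to signal the event was handled
      return null
    }
  })
}

// in a support file: same call-site shape as the original logger
function logger (log) {
  cy.task('log', log)
}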

Related

How do I save Cypress's test runner's "console log" (left-hand side) to a file

I would like to save the data from the left-hand side of the TestRunner to a text file (json, plain text, or any kind of text).
I feel like this should be very easy and that I'm simply missing something. However, I cannot find anything to explain this. I have checked this other S.O. question: Cypress pipe console.log and command log to output, which references this currently open issue, but that appears to be focused on collecting the browser's console log.
I even tried one of the workarounds suggested in the discussion of that open issue, cypress-log-to-output, but that put a ton of output in the terminal from which I launched the test. I did try to correlate the extra output with the relatively few entries from the TestRunner's left-hand side, but did not see anything to match them up.
I'm just hoping to get a text file that looks like this (with perhaps a bit of detail for each entry):
1 visit /
(xhr) GET 200 /todos
2 wait #todos
(req) GET /todos Received todos
...
Or perhaps JSON.
My motivation comes from having to write Cypress tests for our CI that will be testing a very old AjaxSwing based application that makes heavy use of XHR requests, and it can be a different number of XHR requests for each test run (sometimes 8, sometimes 12 just to load the first page).
The AjaxSwing app is not changing, so I have to figure this out as best as possible. So I wanted to see a whole text file with all the information from the TestRunner's left-hand side. Perhaps I could even compare separate runs to see if I can spot some "header" or "body" value I could use to distinguish the right XHR request to wait for.
Any help would be appreciated.
One approach, using the log:added event:
// top of spec
const logs = []

Cypress.on('log:added', (log) => {
  const message = `${log.consoleProps.Command}: ${log.message}`
  logs.push(message)
})

it('writes to logs', () => {
  // ... some commands that log
  cy.writeFile('logs.txt', logs)
})
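If you also want the file written when a test fails partway through the spec, one variation (still assuming the same logs array) is to write it once in an after hook:
// at the bottom of the same spec
after(() => {
  cy.writeFile('logs.txt', logs.join('\n'))
})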

In Cypress when to use Custom Command vs Task?

What is the difference between a Custom Command and a Task? I am trying to understand how they should each be used.
Custom Command documentation: https://docs.cypress.io/api/cypress-api/custom-commands.html
Task documentation: https://docs.cypress.io/api/commands/task.html
A command (most methods on the global cy object) is a function that enqueues (pushes) an action to a queue of currently-executing commands. The queue executes serially and asynchronously (that's why the return value of a command is an object having a .then method --- but despite that, and despite the fact that it behaves like a promise, it's not a promise). Until a previous command is finished, the next command doesn't execute.
Commands are defined and executed directly in the browser.
A custom command is a regular command, but defined by you as opposed to the default commands that Cypress supplies out of the box. Custom commands are useful for automating a workflow you repeat in your tests over and over (e.g. by grouping several default cy commands together).
Commands are used to interact with your web app under test (AUT) --- most notably with the DOM (e.g. via cy.get(selector) to query the DOM), and to make assertions.
It's also important to realize that while commands are being executed serially, they are enqueued immediately (in the same event loop tick), and any expressions you pass to them are evaluated then and there. This isn't a Cypress-specific behavior, just plain JavaScript. That's why you can't do things like these:
// INCORRECT USAGE
let value;
cy.get('.myInput').invoke('val').then(val => value = val);
cy.get('.mySecondInput').type(value); // ✗ value is undefined here
Nor can you use async/await:
// INCORRECT USAGE
// ✗ doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
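The working pattern keeps the dependent command inside the .then() callback, so it only runs after the value has actually been resolved:
// CORRECT USAGE
cy.get('.myInput').invoke('val').then((value) => {
  cy.get('.mySecondInput').type(value)
})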
A task is a function defined and executed on the Cypress backend process (Node.js), not in the browser.
To execute a task (which you previously defined in your cypress/plugins/index.js file), you need to first enqueue it as a regular command in your test via cy.task(taskName, data). Cypress then (when the command takes its turn to execute) sends a message to the backend process where the task is executed.
Data your task returns is serialized (via JSON.stringify or something similar) and sent back to the browser where it's passed to a callback you potentially chained to your cy.task() command using .then(callback).
Tasks are mainly used to communicate with your own server backend, e.g. to seed the database, or for I/O such as reading/writing to a file (although Cypress supplies commands for these, such as cy.exec() or cy.writeFile()).
There are no default tasks --- every task you execute you first need to define yourself.
Another important point is that the messages sent between processes (the Cypress browser process and the Cypress node process) go over an IPC channel and must be serializable. That means the data you pass to cy.task(taskName, data) is stringified, as is the response returned from the task itself. Thus, sending e.g. an object containing a method will not work (that is, the method won't be transferred at all).
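A hypothetical round trip illustrating that constraint (the task name and payload are made up for illustration):
// in plugins/index.js (inside the exported function, where `on` is available)
on('task', {
  seedUser (user) {
    // `user` arrived over IPC as plain JSON; any methods were stripped
    return Object.assign({ id: 1 }, user)
  }
})

// in a spec
cy.task('seedUser', { name: 'Alice' }).then((created) => {
  expect(created.id).to.equal(1)
})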
Great answers, but to sum up, here are the two main differences that help you choose whether you need a Cypress command or a task:
If you need to run a promise or interact with your backend, go with a task.
If you are interacting with the DOM and making assertions, go with a command.
Taken from this article
Cypress commands, in general, do not return promises. The documentation refers to them as 'thenable'; it simply means the results are only obtained via the .then(result => {}) construct.
This is why the comment above is true. (Shown here for reference)
// INCORRECT USAGE
// ✗ doesn't work on Cypress commands
const value = await cy.get('.myInput').invoke('val');
cy.get('.mySecondInput').type(value);
However, there is a native way to wrap a Cypress command to get true async/await behavior, as shown here:
function getAsync(query, cb) {
  return new Promise((resolve, reject) => {
    cy.get(query).then((elements) => {
      if (elements === undefined) {
        reject()
        return
      }
      const objArray = []
      elements.each((index) => {
        const element = elements[index]
        objArray.push(cb(element))
      })
      resolve(objArray)
    })
  })
}
To call the function above:
it('Gets specific DOM Elements', async () => {
  let data = await getAsync('form', getForm)
  // ...
})
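getForm isn't shown in the answer; it would be the per-element callback passed as cb. A hypothetical version:
// hypothetical callback: map each matched <form> element to a plain object
function getForm (element) {
  return { id: element.id, action: element.action }
}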

Dropzone.js - Multiple file upload without duplicated response

TLDR;
I managed to simplify my question after a good night's sleep. Here's the simpler question.
I want to upload N files to a server, which would process them together and return a single response (e.g. Total foobars in all files combined = XYZ).
What's the best way to send this single response back to the client?
Thanks.
Below is the old question, left behind as a lesson for me.
I'm using Dropzone.js to build D&D functionality into my app.
Please note: I know there are a couple of questions already that discuss multifile uploads. But they are different from my question. They talk about how to get a single callback call instead of multiple ones.
My issue is related to the situation where I drag and drop multiple files into the dropzone, but am seeing the single server response being duplicated multiple times. Here is my config:
Dropzone.options.inner = {
  init: function() {
    this.on("dragenter", function(e) {
      $('#inner').addClass('drag-over');
      //// TODO - find out WTF this isn't working (low priority)
    });
    this.on("completemultiple", function(file, resp) {
      //// TODO
    });
  },
  url: "php/...upload...php",
  timeout: 120000, // 2m
  uploadMultiple: true,
  autoProcessQueue: false,
  clickable: false,
};
//// ... Some other stuff
//// ...
$(document).ready(function() {
  $('#inner').click(function() {
    Dropzone.forElement('.dropzone').processQueue();
  });
});
In the beginning I intercepted the "complete" event, rather than "completemultiple". That resulted in its handler being invoked multiple separate times (once for each file), even though the server-side php was only being invoked once. Each invocation returned a duplicate copy of the same server-side message.
I didn't want that, so I changed it to "completemultiple", and now I can confirm that the handler only gets called once with an array of files, but the single server response is now buried within each file object returned - each has a duplicate copy of the exact same response.
It doesn't matter ultimately, because it is the same message after all. But the whole aesthetics of the thing now seems off, which indicates to me that I'm doing something wrong: the response seems to indicate two independent uploads, yet they were part of a single invocation of the server-side php. Why make the client "believe" there were two separate upload requests when the server-side script only has one opportunity to respond? (I.e. the php is not sending back different messages for each file. Should it? And if so, what's the best way to do it?)
How can I make it so that if I have a scenario in which it's all-or-none, I get a single response back from the php script?
This is especially important to me because my server response will contain the status and some other data. The script does more than simply receiving the uploaded files (hence the longer timeout).
I thought maybe that's a sign that I should separate the uploading part from the processing part and trigger the processing once the upload is complete.
But that means that the server side upload script can't clean up after itself. It needs to persist data beyond its own life. Also it now needs to return a handle to this data back to the client, which would dispatch the server-side processor in a different ajax call passing it this handle - and the subsequent call needs to clean up the files left by the uploader after it is done processing them.
This seems the less elegant solution. Is this something I just need to get used to? Or is there a better way of accomplishing what I want?
Also, any other free tips and hints from the front-end gurus in my network will be gratefully accepted.
Thanks.
The following approach works, until something better can be found.
Dropzone.options.inner = {
  // . . .
  init: function() {
    this.on("completemultiple", function(file) {
      var code = JSON.parse(file[0].xhr.response).code;
      var data = { "code": code };
      $.post('php/......php', data, function(res) {
        // TODO - surface the res back to the user
      });
    });
  },
};
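Worth noting: with uploadMultiple, all the files go up in a single request, so every entry in the completemultiple array carries the same xhr, which is why reading file[0].xhr.response is safe. Dropzone's successmultiple event also hands you that single response directly as its second argument, avoiding the xhr digging entirely (a sketch, assuming the same config):
this.on("successmultiple", function(files, response) {
  // `response` is the one reply to the one combined request
  console.log('server said:', response);
});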

Asynchronous node.js calls fail when run in a VM script

I've managed to write an extension to my custom node.js server, that allows an AJAX script to be run using the VM module. I've also succeeded in persuading VM to acknowledge the existence of require, which is necessary for my scripts to be of any use.
But I've hit a truly weird and intractable problem.
When I run the AJAX script "normally", by handing it off to a child process using child_process.spawn(), all my asynchronous API calls work as expected, and my callback chain runs to completion. In the case of the script I'm currently testing, this uses fs.readdir() to launch the callback chain, and it works in child process mode, returning directory listings to the HTML client that makes the initial AJAX request.
But when I run the AJAX script inside VM using vm.Script(), followed by script.runInContext(), something weird happens. The script reaches my fs.readdir() call, but the callback is never called! The entire script stalls at this point!
Since the same V8 engine is running the code in both child process and VM mode, I'm more than a little puzzled, as to why the AJAX script works perfectly if launched in child process mode, but fails to launch the callback chain at all when launched in VM mode.
The only reason I can think of, why VM doesn't like my AJAX script, is because the callbacks are defined as inner functions of a closure, and the outer function comes to an end after the fs.readdir() call. But if that's the case, why does the script work perfectly, both when run in the node.js debugger, and passed to a child process by my server after an AJAX request?
The documentation for VM makes no mention of fs calls being blocked if called within code running under VM, and I wouldn't have embarked upon this development path, if I'd been informed beforehand that such blocking had been implemented.
So what is going on?
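For reference, a minimal repro of the pattern being described, assuming a context seeded with the host's require and console as the question says (run standalone, this sketch does fire the callback, which is what makes the reported behaviour so puzzling):
const vm = require('vm');

// context seeded with the host's require and console
const context = vm.createContext({ require, console });

const script = new vm.Script(`
  const fs = require('fs');
  fs.readdir('.', (err, entries) => {
    // the question reports this callback never firing in VM mode
    console.log('callback fired:', err || entries.length);
  });
`);

script.runInContext(context);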
EDIT: Here's the code that processes the arguments passed to the script. It calls a function getCmdArgs(), defined earlier, which I don't need to list here because it's been tested and works. It works in both child process mode and VM mode, and conducts a stringent test to see whether the argument-passing variables needed to supply the script with its data actually exist, before returning a suitable value (the args in question, if they exist). Once again, this part works.
// Get the arguments passed to the script - new dual mode version! See function notes above.
var args = getCmdArgs();

// Check to see if the script is being run in a VM sandbox by the server ...
if (args[0] == "RunInVM")
  runMode = "V";
else
  runMode = "P";
// End if/else
Next, once the script arguments have been passed to the script, it's time to process them, and find out what directory is being referenced. I use an anonymised path reference, which begins with the word "root", the idea being that the client does not need to know where on the server it has had its permitted storage allocated, only that it has storage on disc that it's permitted to access. The client, once given this, can manipulate that directory tree at will, and the child process versions of the scripts that permit this all work. So, this is how the arguments are processed:
// Set our root directory, to which all other directory references will be attached ...
// ROOTPATH is set at the beginning of the script, and is known only to the script, not the client
var dirRoot = ROOTPATH;

// Now get the arguments that were passed to the script ...
var tmpChk = args[2];
var queryMarker = tmpChk.indexOf("?");
if (queryMarker != -1)
{
    // Strip off the initial "?"
    var actualQuery = tmpChk.substring(queryMarker + 1);
    // Separate the x=y parts from each other
    var queryParts = actualQuery.split("=");
    var argType = queryParts[0];
    var argData = queryParts[1];
    if (argType === "dir")
    {
        requestedDir = argData;
        // Here, we have an argument of the form "dir=xxx". Find out what "xxx" is.
        if (argData === "root")
            dirName = dirRoot;
        else
        {
            // Here, we have an argument of the form "dir=root/xxx". Find out what "xxx" is,
            // and make it a subdirectory of our chosen root directory.
            var subIndex = argData.indexOf("root/");
            var subDir = argData.substring(subIndex + 5);
            subDir = subDir.replace(/\x2F/g, path.sep);
            subDir = qs.unescape(subDir);
            dirName = dirRoot + path.sep + subDir;
            // Also, insert an entry for the parent directory into our custom DirEntry array ...
            var newEntry = new DirEntry();
            newEntry.fileName = "(Parent Directory)";
            newEntry.fileType = "DIR";
            // Remember we're using the UNIX separator for our GET call!
            var pdIdx = requestedDir.lastIndexOf("/");
            var pd = requestedDir.substring(0, pdIdx);
            // This is the secure path to the parent directory
            newEntry.filePath = pd;
            myDirEntries[idx2++] = newEntry;
        } // End if/else
    } // End if
}
The above generates the initial directory entry to be returned (via a custom DirEntry() object), and again, this all works. It also determines what path is to be referenced when searching the server's storage.
At this point, we're ready to roll, and we call fs.readdir() accordingly:
//Get entire contents of dir in one go!
fs.readdir(dirName, DirContentsCallback);
Now when running in child process mode, the above function executes, and the callback function DirContentsCallback(), which is the first part of the callback chain, is duly launched. But when I try to run it in VM mode, this doesn't happen. The callback is never called. Now, if fs.readdir() had actually executed and an error had been generated, then surely my callback would have been called with a non-null error argument? Instead, the whole script just grinds to a halt and hangs.
I inserted a series of console.log() calls to check the progress of the script, to see if any parts of the callback chain were ever executed, and NONE of the console.log() calls fired. My entire callback chain stalled. This does not happen if I run the script in child process mode by passing it to cp.spawn() - instead, I end up with a completed AJAX transaction, and my directory contents are nicely displayed in the client once it receives the JSON data generated by the script. What's even more bizarre is that the SAME DATA is being passed to the script in both modes - I've checked this repeatedly. Yet the data that worked perfectly in child process mode, and generates a target directory path that is duly examined for entries, comes crashing to a silent halt the moment I try executing fs.readdir() in VM mode.
Now, to be perfectly honest, I don't see why I needed to bother posting a host of prior code that all works, when the part that DOESN'T work in VM mode is the last line of the lot. What I opened this question with seemed to me perfectly sufficient to alert people to the problem. But if it means I receive meaningful answers, then so be it: waste time posting code that works I shall.

Gradle - Capturing output written to out / err on a per task basis

I'm trying to capture output written from each task as it is executed. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks are running in parallel this code below picks up output written from other tasks running simultaneously.
The API documentation states the following about the getLogging() method on Task. From what it says, I judge that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task.
https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
    task.ext.capturedOutput = []
    def listener = { task.capturedOutput << it } as StandardOutputListener

    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)

    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages, since you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map, keyed by the listener object itself. So you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth filing an issue to get some clarity on whether Gradle should support per-task listeners. Alternatively, raise it on the dev mailing list.
