Read Mocha json report and publish the result to a Test Management tool - mocha.js

I am running my Mocha tests with the JSON reporter enabled.
Once my suite is completed, I am required to publish my results to a Test Management tool using an API.
For that, I am trying to read my output JSON file to find out which tests passed or failed. Based on that, I create an API payload to publish the results to the Test Management tool.
I have placed my code for reading the output JSON in the 'after' hook of the test file, but the problem is that the report has not been generated yet when the 'after' hook runs.
Below is the sample code
Test.js:
require('dotenv').config();
const commandLineArgs = require('command-line-args');
const request = require('request').defaults({ rejectUnauthorized: false });
const fs = require('fs');

const optionDefinitions = [
  { name: 'file', alias: 'f', type: String },
  { name: 'format', alias: 'm', type: String, defaultValue: 'newman-json' },
  { name: 'usetestcaseid', alias: 'i', type: String },
  { name: 'regex', alias: 'r', type: String, defaultValue: /^[^\d]*(\d+)/ },
  { name: 'parentid', alias: 'p', type: String },
  { name: 'parenttype', alias: 't', type: String, defaultValue: 'root' },
  { name: 'help', alias: 'h', type: Boolean },
];
const options = commandLineArgs(optionDefinitions);
const assert = require('assert');

it('12345_should return -1 when the value is not present', function() {
  assert.equal([1, 2, 3].indexOf(4), -1);
});

after(function publishReport() {
  // runs once after the last test in this block
  var outputReport = fs.readFileSync('filename.json', 'utf8');
  var rpt = JSON.parse(outputReport);
  console.log(rpt);
  var passedCases = rpt.passes;
  console.log(passedCases);
});
Config File:
{
  "spec": "test/**/test1.js",
  "reporter": "mochajs/json-file-reporter",
  "reporter-option": [
    "output=filename.json"
  ]
}
Could anyone please suggest where exactly I need to put my code in Mocha so that I can read the output report file once it is generated and publish the results?

The only guarantee Mocha (and its reporters) gives is that output files exist after the Mocha process exits.
So you have to put your code into a separate script (upload.js, let's say) and run the test suite like this (entry in package.json):
"tests-and-upload-results": "mocha ; node ./upload.js"
All hooks are executed within the scope of the test run, and reporters in general do not generate reports until the whole test run is completed.
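For illustration, a minimal upload.js might look like the sketch below. It reads the report produced by the JSON file reporter and posts a result payload with the request library already used in the question; the Test Management endpoint, the payload shape, and the TM_API_URL / TM_API_TOKEN environment variables are assumptions, not a real API.

// upload.js -- a minimal sketch; endpoint and payload shape are hypothetical
const fs = require('fs');
const request = require('request').defaults({ rejectUnauthorized: false });

const rpt = JSON.parse(fs.readFileSync('filename.json', 'utf8'));

// Map each passed/failed test to a result entry keyed by the case id
// embedded in the test title (e.g. "12345_should return -1 ...").
const toResult = (test, status) => ({
  caseId: (test.title.match(/^[^\d]*(\d+)/) || [])[1],
  status,
});
const results = [
  ...rpt.passes.map(t => toResult(t, 'passed')),
  ...rpt.failures.map(t => toResult(t, 'failed')),
];

request.post({
  url: process.env.TM_API_URL, // hypothetical Test Management endpoint
  headers: { Authorization: 'Bearer ' + process.env.TM_API_TOKEN },
  json: { results },
}, (err, res) => {
  if (err) throw err;
  console.log('Published results, status:', res.statusCode);
});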

Related

How to run the single test with different data set in parallel by using cypress on single machine

I have the below Test.json file in the fixtures folder:
[
  { "searchKeyword": "cypress" },
  { "searchKeyword": "QA automation" },
  { "searchKeyword": "stackoverflow" }
]
The above file contains three different datasets.
I have the below spec file. It contains one it() block (test case) that runs multiple times based on the above input.
Test.spec.js file:
describe("Run the test parallel based on the input data",() =>{
const baseUrl = "https://www.google.com/";
before("Login to consumer account", () => {
cy.fixture('Test').then(function (data) {
this.data = data;
})
});
it("Search the keyword", function () {
this.data.forEach((testData) =>{
cy.visit(baseUrl);
cy.xpath("//input[#name='q']").type(testData.searchKeyword);
cy.xpath("//input[#value='Google Search']").click();
cy.get("//ul/li[2]").should("be.visible");
});
});
});
The above code is working as expected, but I want to run this single test in parallel using the different datasets.
Example: three browser instances open, each picks a different entry from the Test.json file, and each runs the single test in the Test.spec.js file.
I need this logic for one of my projects, but I am not able to share that code as it is more complex; that is why I created some dummy test data and a test script to demonstrate what I want to achieve.
Can someone please share your thoughts on how to achieve this?
One way to run multiple instances of Cypress in parallel is via the Module API, which is basically using a Node script to start up the multiple instances.
Node script
// run-parallel.js
const cypress = require('cypress')
const fixtures = require('./cypress/fixtures/Test.json')

fixtures.forEach(fixture => {
  cypress.run({
    env: {
      fixture
    },
  })
})
Test
describe("Run the test for given env data",() =>{
const testData = Cypress.env('fixture')
...
it("Search the keyword", function () {
cy.visit(baseUrl);
cy.xpath("//input[#name='q']").type(testData.searchKeyword);
...
});
});
Awaiting results
cypress.run() returns a promise, so you can collate the results as follows.
Videos and screenshots are troublesome, since Cypress tries to save them all under the same name, but you can specify a separate folder for each fixture:
const promises = fixtures.map(fixture => {
  return cypress.run({
    config: {
      video: true,
      videosFolder: `cypress/videos/${fixture.searchKeyword}`,
      screenshotsFolder: `cypress/screenshots/${fixture.searchKeyword}`,
    },
    env: {
      fixture
    },
    spec: './cypress/integration/dummy.spec.js',
  })
})

Promise.all(promises).then((results) => {
  console.log(results.map(result => `${result.config.env.fixture.searchKeyword}: ${result.status}`));
});
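To kick the whole thing off, the runner script is simply executed with Node; a package.json entry like the one below (the script name is an illustrative assumption) keeps it alongside the normal test commands.

"scripts": {
  "test:parallel": "node run-parallel.js"
}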

How do I include run time arguments while executing a google cloud workflow in Nodejs?

I'm trying to include runtime variables when executing a Google Cloud workflow. I can't find documentation for doing so unless you're using the REST API.
Here's my code, which is mostly from their documentation; I just get null for the arguments. I think it could be something with the second parameter createExecution expects, named execution, but I can't figure it out.
const { ExecutionsClient } = require('@google-cloud/workflows');
const client = new ExecutionsClient();

const execute = () => {
  return client.createExecution(
    {
      parent: client.workflowPath('project_id', 'location', 'name'),
    },
    {
      argument: {
        users: ['info here'],
      },
    },
  );
};

module.exports = execute;
Thanks for the help!
In case anyone else has this problem: you pass the parameter execution to createExecution() along with parent. It's just an object, and you can specify argument there, which takes a string. Stringify your object and you're good to go!
const { ExecutionsClient } = require('@google-cloud/workflows');
const client = new ExecutionsClient();

const execute = () => {
  return client.createExecution({
    parent: client.workflowPath('', '', ''),
    execution: {
      argument: JSON.stringify({
        users: [],
      }),
    },
  });
};

module.exports = execute;
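As a quick illustration of consuming this module, createExecution resolves with an array whose first element is the created execution, so a caller can log its name; the './execute' path below is a hypothetical reference to the file above.

// a minimal caller sketch; './execute' refers to the module above
const execute = require('./execute');

execute()
  .then(([execution]) => {
    console.log('Started execution:', execution.name);
  })
  .catch(console.error);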

JSON schema validation with perfect messages

I have divided the data entry in a REST call into 4 parts. Data can be sent to a REST call via:
headers
query params
path params
request body
So in order to validate the presence of any key in any of the above 4 parts, I have created a schema in this format. If I have to validate anything in the query params, I add the key 'query' and then add the fields inside it that need to be validated.
const schema = {
  id: 'Users_login_post',
  type: 'object',
  additionalProperties: false,
  properties: {
    headers: {
      type: 'object',
      additionalProperties: false,
      properties: {
        Authorization: {
          type: 'string',
          minLength: 10,
          description: 'Bearer token of the user.',
          errorMessages: {
            type: 'should be a string',
            minLength: 'should be atleast of 23 length',
            required: 'should have Authorization'
          }
        }
      },
      required: ['Authorization']
    },
    path: {
      type: 'object',
      additionalProperties: false,
      properties: {
        orgId: {
          type: 'string',
          minLength: 23,
          maxLength: 36,
          description: 'OrgId Id of the Organization.',
          errorMessages: {
            type: 'should be a string',
            minLength: 'should be atleast of 23 length', // ---> B
            maxLength: 'should not be more than 36 length',
            required: 'should have OrgId'
          }
        }
      },
      required: ['orgId']
    }
  }
};
Now, in my express code, I created a request object so that I can test the validity of the JSON in this format.
router.get("/org/:orgId/abc", function(req, res){
var request = { //---> A
path: {
orgId : req.params.orgId
},
headers: {
Authorization : req.headers.Authorization
}
}
const Ajv = require('ajv');
const ajv = new Ajv({
allErrors: true,
});
let result = ajv.validate(schema, request);
console.log(ajv.errorsText());
});
And I validate the above request object (at A) against my schema using Ajv.
The output I get looks something like this:
data/headers should have required property 'Authorization', data/params/orgId should NOT be shorter than 23 characters
Now I have a list of concerns:
Why does the message show the word data in data/headers and data/params/orgId, even though my variable is named request (at A)?
Also, why are my errorMessages not used? In the case of orgId I specified 'should be atleast of 23 length' (at B) as the message, yet the message produced was 'should NOT be shorter than 23 characters'.
How can I show request/headers instead of data/headers?
Also, is the way I validate my path params, query params, header params and body params the correct way? If not, what would be a better way of doing the same?
Please shed some light.
Thanks in advance.
Use ajv-keywords
import Ajv from 'ajv';
import AjvKeywords from 'ajv-keywords';
// ajv-errors needed for errorMessage
import AjvErrors from 'ajv-errors';

const ajv = new Ajv.default({ allErrors: true });
AjvKeywords(ajv, "regexp");
AjvErrors(ajv);

// modification of regex by requiring Z https://www.regextester.com/97766
const ISO8601UTCRegex = /^(-?(?:[1-9][0-9]*)?[0-9]{4})-(1[0-2]|0[1-9])-(3[01]|0[1-9]|[12][0-9])T(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9])(\.[0-9]+)?Z$/;

const typeISO8601UTC = {
  "type": "string",
  "regexp": ISO8601UTCRegex.toString(),
  "errorMessage": "must be string of format 1970-01-01T00:00:00Z. Got ${0}",
};
const schema = {
  type: "object",
  properties: {
    foo: { type: "number", minimum: 0 },
    timestamp: typeISO8601UTC,
  },
  required: ["foo", "timestamp"],
  additionalProperties: false,
};

const validate = ajv.compile(schema);

const data = { foo: 1, timestamp: "2020-01-11T20:28:00" }

if (validate(data)) {
  console.log(JSON.stringify(data, null, 2));
} else {
  console.log(JSON.stringify(validate.errors, null, 2));
}
https://github.com/rofrol/ajv-regexp-errormessage-example
AJV cannot know the name of the variable you passed to the validate function.
However, you should be able to work out from the errors array which paths failed (and why) and construct your messages from there.
See https://ajv.js.org/#validation-errors
To use custom error messages in your schema, you need an AJV plugin: ajv-errors.
See https://github.com/epoberezkin/ajv-errors
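For illustration, here is a minimal sketch of the question's orgId rule rewritten for ajv-errors. Note the keyword is errorMessage (singular), not the errorMessages used in the question; an ajv@8-style API is assumed, and the prefix replacement at the end is just one way to display request instead of data.

// a minimal sketch, assuming ajv@8 and ajv-errors
const Ajv = require('ajv');
const ajvErrors = require('ajv-errors');

const ajv = new Ajv({ allErrors: true }); // allErrors is required by ajv-errors
ajvErrors(ajv);

const schema = {
  type: 'object',
  properties: {
    orgId: {
      type: 'string',
      minLength: 23,
      maxLength: 36,
      // ajv-errors reads "errorMessage", not "errorMessages"
      errorMessage: {
        type: 'should be a string',
        minLength: 'should be atleast of 23 length',
        maxLength: 'should not be more than 36 length',
      },
    },
  },
  required: ['orgId'],
  errorMessage: {
    required: { orgId: 'should have OrgId' },
  },
};

const validate = ajv.compile(schema);
if (!validate({ orgId: 'too-short' })) {
  // replace the default "data" prefix with "request" for display
  console.log(ajv.errorsText(validate.errors).replace(/\bdata\b/g, 'request'));
}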

Can we create lambda function using grunt instead of amazon console?

I am creating a zip file for deployment to Lambda using https://github.com/Tim-B/grunt-aws-lambda, but when deploying to AWS Lambda I need to create the function first in the Amazon console. Can we create the function using grunt instead of the Amazon console? Thank you.
You can create the function from grunt using the AWS JavaScript SDK for Lambda.
Use the createFunction method as shown below.
/* This example creates a Lambda function. */
var params = {
  Code: {
  },
  Description: "",
  FunctionName: "MyFunction",
  Handler: "source_file.handler_name", // the name of your source file followed by the name of your handler function
  MemorySize: 128,
  Publish: true,
  Role: "arn:aws:iam::123456789012:role/service-role/role-name", // replace with the actual ARN of the execution role you created
  Runtime: "nodejs4.3",
  Timeout: 15,
  VpcConfig: {
  }
};
lambda.createFunction(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
  data = {
    CodeSha256: "",
    CodeSize: 123,
    Description: "",
    FunctionArn: "arn:aws:lambda:us-west-2:123456789012:function:MyFunction",
    FunctionName: "MyFunction",
    Handler: "source_file.handler_name",
    LastModified: "2016-11-21T19:49:20.006+0000",
    MemorySize: 128,
    Role: "arn:aws:iam::123456789012:role/service-role/role-name",
    Runtime: "nodejs4.3",
    Timeout: 123,
    Version: "1",
    VpcConfig: {
    }
  }
  */
});
Note: You can fill the Code parameter with inline code, or use additional attributes to refer to a zipped code bundle uploaded to S3.
E.g.
Code: { /* required */
  S3Bucket: 'STRING_VALUE',
  S3Key: 'STRING_VALUE',
  S3ObjectVersion: 'STRING_VALUE',
  ZipFile: new Buffer('...') || 'STRING_VALUE'
},
Also, make sure to grant the required permissions to the IAM user and set up the JavaScript SDK credentials to run the code.
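To tie this back to grunt, a sketch of wrapping the SDK call in a custom grunt task might look like the following; the task name, zip path, and region are illustrative assumptions, and the params mirror the example above.

// Gruntfile.js -- a minimal sketch; 'create-lambda', the zip path and the region are hypothetical
var AWS = require('aws-sdk');
var fs = require('fs');

module.exports = function(grunt) {
  grunt.registerTask('create-lambda', 'Create the Lambda function via the AWS SDK', function() {
    var done = this.async(); // tell grunt to wait for the async SDK call
    var lambda = new AWS.Lambda({ region: 'us-west-2' });

    lambda.createFunction({
      FunctionName: 'MyFunction',
      Handler: 'source_file.handler_name',
      Role: 'arn:aws:iam::123456789012:role/service-role/role-name',
      Runtime: 'nodejs4.3',
      Code: { ZipFile: fs.readFileSync('dist/lambda.zip') }, // zip produced by grunt-aws-lambda
    }, function(err, data) {
      if (err) {
        grunt.log.error(err);
        done(false);
      } else {
        grunt.log.ok('Created ' + data.FunctionArn);
        done();
      }
    });
  });
};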

Looking for an example on how to use the Initial File List function

I have looked over the docs and searched the forums, but I cannot seem to find any examples of how to implement the Initial File List functionality for fine-uploader.
Below is the script that I am using. It works great, but what I would like to do is use the Initial File List function to populate fine-uploader with the existing files that have been uploaded during this session.
I have code that will return a JSON feed with the required files in an array format.
I just cannot figure out where or how to call the function to initialize it.
Thanks in advance.
<script>
// Wait until the DOM is 'ready'
$(document).ready(function () {
  $("#fine-uploader").fineUploader({
    debug: true,
    request: {
      endpoint: 'upload.cfm'
    },
    session: {
      endpoint: 'imageStatus.cfm',
      refreshOnRequest: true
    },
    validation: {
      itemLimit: 2,
      allowedExtensions: ["jpeg", "jpg", "gif", "png"],
      sizeLimit: 5000000 // 5 MiB
    },
    messages: {
      tooManyItemsError: 'You can only add 2 images'
    },
    deleteFile: {
      enabled: true, // defaults to false
      endpoint: 'upload_delete.cfm?uuid=',
      method: 'post'
    },
    retry: {
      enableAuto: false
    },
    scaling: {
      sendOriginal: true,
      hideScaled: true,
      sizes: [
        { name: "THUMB_XX", maxSize: 113 },
        { name: "FULLIMAGE", maxSize: 450 }
      ]
    }
  });
});
</script>
I solved the issue.
It turns out that I did a custom build of the JS files and did not include the status function.
I rebuilt the download and it works like a charm.
Thanks everyone for the help.
The initial file list feature is not a function that you call, per se; it is an option that you set in the client. More or less, all you need to set is the endpoint where the uploader can retrieve this list of files, and then have your server correctly process the request.
The server response should be a JSON array of objects:
[{ "name": "foo.jpg", "uuid": "7afs-sdf8-sdaf-7asdf" }, ... ]
The trickiest part is getting that list of files server-side, and you may want to ask some Coldfusion folks about how to do that.
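For a concrete picture, a response from the imageStatus.cfm session endpoint in the configuration above might look like the sketch below; the file names, uuids and paths are made up, and size and thumbnailUrl are optional fields the uploader understands alongside the required name and uuid.

// sketch of a session endpoint response for the two-image setup above (values are hypothetical)
[
  { "name": "photo1.jpg", "uuid": "a1b2c3d4-0001", "size": 12345, "thumbnailUrl": "/thumbs/photo1.jpg" },
  { "name": "photo2.png", "uuid": "a1b2c3d4-0002", "size": 67890, "thumbnailUrl": "/thumbs/photo2.png" }
]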
