Here is my custom command:
exports.command = function (element, time, debug) {
  let waitTime = time || 10000
  if (debug) {
    return this
      .log('waiting ' + waitTime + 'ms for: ' + element)
      .waitForElementVisible(element, waitTime)
  }
  return this
    .waitForElementVisible(element, waitTime)
}
I have also set abortOnFailure: true in my globals module (globalModules).
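For context, the globals module is set up roughly like this (a sketch of my setup; the other entries are omitted):

module.exports = {
  abortOnFailure: true
  // ...other globals...
};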
When I call this in a pageObject though like this:
findElement() {
  this.waitFor('#driversLicenseNumbers');
  return this
}
The element isn't found (which is expected and intended, since I'm upgrading to Nightwatch v1.0.14) and the error message is logged to the console, but the test doesn't fail.
× Timed out while waiting for element <#driversLicenseNumbers> to be present for 10000 milliseconds. - expected "visible" but got: "not found"
Does anyone know what I'm doing wrong here?
There is already an open issue on the Nightwatch issues board regarding this specific problem. Here it is!
According to the bug report, this behavior affects custom_commands in nightwatch@1.0.15 and nightwatch@0.9.21, yet I am running nightwatch@0.9.21 and this behavior is not reproducible for me.
Basically your test fails, but it does so silently, at the end of the test, where you get the timeout error.
Proposed fix: install a different version (npm install --save-dev nightwatch@0.9.x), or another suitable version that doesn't yet contain the defect.
Cheers!
Every time a Protractor element locator fails, it prints an error and continues down a horrible path of endless cascading failures in my spec and suite. Every test that follows depends on the element locator finding its element, and depends on the current spec passing.
I would like to keep the web page under test open while I use the console. The goal is to debug the current state of the page and investigate why the element locator may have failed to find its target.
I'm not too concerned about failing the entire suite and exiting on the first spec failure (I've seen other answers on --fail-fast and stopping on first spec failure.) This is not the approach I would like to take. I want to set a breakpoint, and inspect the environment while the page is running.
Maybe there's something like a Jasmine option for doThisOnFailure: () => { debugger }, which I think would work for me.
I really do not like the solution of using a spec reporter that executes during afterEach and checks the failed spec count on the Jasmine environment for the entire spec function. I want to know immediately when an element locator has failed and break as soon as it happens.
Maybe something really gross would work, like $('element').click().catch(() => { debugger }).
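Something along these lines is roughly what I have in mind (just a sketch; breakOnFailure is a name I'm making up, and it assumes the test process is started with a Node debugger/inspector attached so the debugger statement actually pauses):

// Hypothetical helper, not a Protractor API: run a locator action and
// drop into the debugger the moment it rejects, while the page is still open.
async function breakOnFailure(action) {
  try {
    return await action();
  } catch (err) {
    console.error('Locator action failed:', err.message);
    debugger; // paused here so the browser state can be inspected
    throw err; // still fail the spec afterwards
  }
}

// usage inside a spec:
// await breakOnFailure(() => $$('.bad-selector').get(0).click());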
EDIT: Please note that I am asking about breaking in a spec, not breaking at the end of the spec.
it('should execute deadly code', function () {
  p.navigation.openStorageConfigTab()
  $$('.bad-selector').get(0).click() /* IMPORTANT: I want to break here */
  p.volume.navigateTo()
})

it('should not execute this spec', function () {
  $$('.bad-selector').get(0).click()
})
And the output
✗ should execute deadly code
- Failed: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator By(css selector, .bad-selector)
✗ should not execute this spec
- Failed: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator By(css selector, .bad-selector)
I can recommend the approach I use, and I hope you can take it from here.
The overall approach is to make the browser wait until you type close/ into the browser's URL field:
await browser.waitForAngularEnabled(false);
await browser.wait(
  async () => {
    let url = await browser.getCurrentUrl();
    return url.includes('close/');
  },
  5 * 60 * 1000,
  'Keep-alive timeout reached, closing the session...'
);
The question is when to call it. I take advantage of the onComplete callback in the config file. When it's called, the browser is still available. So once all tests are completed, it doesn't exit for 5 minutes unless I submit close/ in the URL field. Obviously that can be made conditional, by adding something like if (DEBUG === true).
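A rough sketch of how that can look in the config file (the DEBUG_KEEP_ALIVE flag is just an illustrative name, not a Protractor setting; the wait loop is the same one shown above):

// protractor.conf.js (sketch) -- keep the browser open after the run in debug mode
exports.config = {
  // ...the rest of your existing config...
  onComplete: async () => {
    if (process.env.DEBUG_KEEP_ALIVE !== 'true') return; // only keep alive in debug runs
    await browser.waitForAngularEnabled(false);
    await browser.wait(
      async () => (await browser.getCurrentUrl()).includes('close/'),
      5 * 60 * 1000,
      'Keep-alive timeout reached, closing the session...'
    );
  },
};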
A downside of this setup is that it's only called once all tests have completed, and by then your spec may have navigated away from the page where the error occurred. So what you can also do is take advantage of a Jasmine reporter (if you use Jasmine). Roughly, you just need to add this to your onPrepare function:
jasmine.getEnv().addReporter({
  jasmineStarted: function(suiteInfo) {},
  suiteStarted: function(result) {},
  specStarted: function(result) {},
  specDone: async function(spec) {
    if (spec.status === 'failed') {
      await browser.waitForAngularEnabled(false);
      await browser.wait(
        async () => {
          let url = await browser.getCurrentUrl();
          return url.includes('close/');
        },
        5 * 60 * 1000,
        'Keep-alive timeout reached, closing the session...'
      );
      await browser.close();
      process.exit(35);
    }
  },
  suiteDone: function(result) {},
  jasmineDone: function(result) {},
});
So if any it block ends with a failed status, the run will stop there. But I have not tested that part, so I'll leave it up to you. Second, I didn't think through what happens to the rest of the queued specs once you're redirected to the non-existent close/ URL, but I believe it will still work for you. Worst case, you can play around and make it continue or close the browser instance, as long as you understand the concept.
P.S.
I modified the code to close the browser when you type close/, by adding
await browser.close();
process.exit(35);
I tested this code with the following scenarios:
happy path: all 5 it blocks are successful
first element finder of second it block fails
second element finder of second it block fails
All passed. The code works as expected
I have established a Quality Gate for my Jenkins project via SonarQube. One of my projects has no tests at all, so the analysis shows 0% code coverage. By the quality gate rules (<60% coverage = fail) my pipeline should return an error. However, this does not happen: the analysis is reported as a success and the quality gate status is 'OK'. In another project, I removed some tests to bring coverage below 60%, and the quality gate passed once again, even though it was meant to fail.
I previously had an error where the analysis always returned 0% coverage, but managed to fix it (with help from this link). I found a lot of articles with similar questions but no answers on any of them. This post looks promising, but I cannot find a suitable alternative to its suggestion.
It is worth mentioning that the analysis stage is done in parallel with another stage (to save some time). The Quality Gate stage comes shortly afterwards.
The relevant code I use to initialise the analysis for my project is below (the org.jacoco... part is the fix for the 0% coverage error I mentioned above):
sh "mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify sonar:sonar -Dsonar.host.url=${env.SONAR_HOST_URL} -Dsonar.login=${env.SONAR_AUTH_TOKEN} -Dsonar.projectKey=${projectName} -Dsonar.projectName=${projectName} -Dsonar.sources=. -Dsonar.java.binaries=**/* -Dsonar.language=java -Dsonar.exclusions=$PROJECT_DIR/src/test/java/** -f ./$PROJECT_DIR/pom.xml"
The full quality gate code (to clarify how my quality gate starts and finishes):
stage("Quality Gate") {
    steps {
        timeout(time: 15, unit: 'MINUTES') { // If analysis takes longer than indicated time, then build will be aborted
            withSonarQubeEnv('ResearchTech SonarQube') {
                script {
                    // Workaround code, since we cannot have a global webhook
                    def reportFilePath = "target/sonar/report-task.txt"
                    def reportTaskFileExists = fileExists "${reportFilePath}"
                    if (reportTaskFileExists) {
                        def taskProps = readProperties file: "${reportFilePath}"
                        def authString = "${env.SONAR_AUTH_TOKEN}"
                        def taskStatusResult =
                            sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                        //echo "taskStatusResult[${taskStatusResult}]"
                        def taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                        echo "taskStatus[${taskStatus}]"
                        if (taskStatus == "SUCCESS") {
                            echo "Background tasks are completed"
                        } else {
                            while (true) {
                                sleep 10
                                taskStatusResult =
                                    sh(script: "curl -s -X GET -u ${authString} '${taskProps['ceTaskUrl']}'", returnStdout: true)
                                //echo "taskStatusResult[${taskStatusResult}]"
                                taskStatus = new groovy.json.JsonSlurper().parseText(taskStatusResult).task.status
                                echo "taskStatus[${taskStatus}]"
                                if (taskStatus != "IN_PROGRESS" && taskStatus != "PENDING") {
                                    break;
                                }
                            }
                        }
                    } else {
                        error "Haven't found report-task.txt."
                    }
                    def qg = waitForQualityGate() // Waiting for analysis to be completed
                    if (qg.status != 'OK') { // If quality gate was not met, then present error
                        error "Pipeline aborted due to quality gate failure: ${qg.status}"
                    }
                }
            }
        }
    }
}
What is shown in the SonarQube UI for the project? Does it show that the quality gate failed, or not?
I don't quite understand what you're doing in that pipeline script. It looks like you're effectively checking the analysis result twice, once by polling the background task yourself with curl and once via waitForQualityGate(), but you only check for an error on the second one. I use a scripted pipeline, so I know mine would look slightly different.
Update:
Based on your additional comment, if the SonarQube UI says that it passed the quality gate, then that means there's nothing wrong with your pipeline code (at least with respect to the quality gate). The problem will be in the definition of your quality gate.
However, I would also point out one other error in how you're checking for the background task results.
The possible values of "taskStatus" are "SUCCESS", "ERROR", "PENDING", and "IN_PROGRESS". If you need to determine whether the task is still running, you have to check for either of the last two values. If you need to determine whether the task is complete, you need to check for either of the first two values. You're checking for completion, but you're only checking for "SUCCESS". That means if the task failed, which it would if the quality gate failed (which isn't happening here), you would continue to wait for it until you timed out.
I am using Wow64GetThreadContext, calling it from a 64-bit process on a 32-bit process, to retrieve the WOW64 CONTEXT structure.
MSDN no longer seems to have the documentation for this function available, although it is still referenced on the GetThreadContext documentation page; I am not sure why. Since the documentation is not available, I am having a hard time figuring out why I am getting the error below.
The code where the error occurs is below. The error reported when I check GetLastWin32Error is: When the file already exists, the file cannot be created.
Does anyone have any ideas why it would throw this error? I am not creating a file at all which is confusing me.
ContextWow = new WOW_CONTEXT();
ContextWow.ContextFlags = CONTEXT_FLAGS.CONTEXT_ALL;
try
{
    Wow64GetThreadContext(ThreadHandle, ref ContextWow);
    if (new Win32Exception(Marshal.GetLastWin32Error()).Message != "The operation completed successfully")
    {
        throw new Exception("Win32 Exception encountered when attempting to get thread context" + new Win32Exception(Marshal.GetLastWin32Error()).Message);
    }
}
Here is a link to the documentation you want, captured by the Internet Archive on July 10 2019:
Wow64GetThreadContext() function
Per the documentation:
Return Value
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To get extended error information, call GetLastError.
Your error handling is wrong. It is the equivalent of doing the following:
ContextWow = new WOW_CONTEXT();
ContextWow.ContextFlags = CONTEXT_FLAGS.CONTEXT_ALL;
try
{
    Wow64GetThreadContext(ThreadHandle, ref ContextWow);
    if (Marshal.GetLastWin32Error() != 0)
    {
        throw new Exception("Win32 Exception encountered when attempting to get thread context" + new Win32Exception().Message);
    }
}
You are making a very common mistake of calling GetLastError() at the wrong time. As the documentation says, the Win32 error code is valid to use only if Wow64GetThreadContext() returns false, which you are not checking for.
What you are doing is not the correct way to check for an error (either to get the error code, or to perform comparisons on it). The correct code should look more like the following instead:
ContextWow = new WOW_CONTEXT();
ContextWow.ContextFlags = CONTEXT_FLAGS.CONTEXT_ALL;
if (!Wow64GetThreadContext(ThreadHandle, ref ContextWow))
{
    throw new Exception("Error encountered when attempting to get thread context", new Win32Exception());
}
That being said, the error message you are seeing, "When the file already exists, the file cannot be created", is your system's text for the ERROR_ALREADY_EXISTS (183) error code, which is not an error code that Wow64GetThreadContext() is documented as reporting on failure, and really just doesn't make much sense for this kind of function to report on failure. So, what is most likely happening is that Wow64GetThreadContext() is actually returning true, but because you are not checking for failure correctly, you are actually seeing an error code from an earlier/internal API call that has not been overwritten when Wow64GetThreadContext() returns true, and so it should be ignored in this situation, not acted on.
I have a .m script that I've been running using Windows Task Scheduler, generally successfully, every 15 minutes for about a year (options: -automation -minimize -r remotedata -logfile logfile.txt;quit).
When I run the code manually in Matlab, everything behaves as expected.
However, when it is run as an automated script, it has two issues I can't resolve that seem to indicate the code is not being executed the same way.
First, I have the following conditional:
~isempty(remoteData.Time(setdiff(1:end,ni))), which is terrible syntax, I know, but it works just fine when I run the script manually. However, when it runs automated, it gives the error:
Error using setdiff (line 80) Not enough input arguments.
I corrected it to ~isempty(remoteData.Time(setdiff(1:height(remoteData),ni)))
but it made me curious.
Second, I have a webread call with a number of query parameters (see below) that executes normally when I have the script open and hit Run; however, when it runs as an automation, the dateutc query is ignored. This one is a bit more puzzling. Can anyone suggest a reason it might fail to register, or how I might fix it? Debugging is difficult since it works as expected when I run it manually.
WUurl = 'http://weatherstation.wunderground.com/weatherstation/updateweatherstation.php';
WUID = '***';
WUpwd = '***';
WUdateutc = datestr(datenum(webData.Time(WDNewest-newTimes+i))+7/24,'yyyy-mm-dd HH:MM:SS');
WUwindspeedmph = num2str(webData.WndSpd(WDNewest-newTimes+i)*0.62);
WUwinddir = num2str(webData.WndDir(WDNewest-newTimes+i));
WUtempf = num2str(webData.AirTmp(WDNewest-newTimes+i)*1.8+32);
WUrainin = num2str(webData.Rain(WDNewest-newTimes+i)/25.4*4);
WUdailyrainin = num2str(sum(webData.Rain(WDMidnight:WDNewest-newTimes+i))/25.4);
WUbaromin = num2str(webData.BarPress(WDNewest-newTimes+i)*.0295);
WUhumidity = num2str(webData.RelHum(WDNewest-newTimes+i));
gamma = log(webData.RelHum(WDNewest-newTimes+i)/100)+ ...
    (17.67*webData.AirTmp(WDNewest-newTimes+i))/ ...
    (243.5+webData.AirTmp(WDNewest-newTimes+i));
WUdewptf = num2str((243.5*gamma)/(17.67-gamma)*1.8+32); % Magnus formula estimation
WUsolarradiation = num2str(webData.NetRad_Wm2(WDNewest-newTimes+i));
WUsoiltempf = num2str(nanmean(webData{WDNewest,20:3:77})*1.8+32);
WUsoilmoisture = num2str(nanmean(webData{WDNewest,18:3:75}));
options = weboptions('Timeout',newTimes);
WU_debugging = webread(WUurl,...
    'ID',WUID,...
    'PASSWORD',WUpwd,...
    'dateutc',WUdateutc,...
    'windspeedmph',WUwindspeedmph,...
    'winddir',WUwinddir,...
    'tempf',WUtempf,...
    'rainin',WUrainin,...
    'dailyrainin',WUdailyrainin,...
    'baromin',WUbaromin,...
    'humidity',WUhumidity,...
    'dewptf',WUdewptf,...
    'solarradiation',WUsolarradiation,...
    'soiltempf',WUsoiltempf,...
    'soilmoisture',WUsoilmoisture,...
    'action','updateraw',...
    options);
Does anyone have any idea if this is possible? Most of the samples for node-inspector seem geared toward debugging an invoked web page, but I'd like to be able to debug jasmine-node tests.
In short, just debug jasmine-node:
node --debug-brk node_modules/jasmine-node/lib/jasmine-node/cli.js spec/my_spec.js
If you look at the source of the jasmine-node script, it just invokes cli.js, and I found I could debug that script just fine.
I wanted to use node-inspector to debug a CoffeeScript test. Just adding the --coffee switch worked nicely, e.g.
node --debug-brk node_modules/jasmine-node/lib/jasmine-node/cli.js --coffee spec/my_spec.coffee
I ended up writing a little util called toggle:
require('tty').setRawMode(true);
var stdin = process.openStdin();

exports.toggle = function (fireThis)
{
    if (process.argv.indexOf("debug") != -1)
    {
        // Debug mode: wait for a keypress before each run; ctrl-c exits.
        console.log("debug flag found, press any key to start or rerun. Press 'ctrl-c' to cancel out!");
        stdin.on('keypress', function (chunk, key) {
            if (key.name == 'c' && key.ctrl == true)
            {
                process.exit();
            }
            fireThis();
        });
    }
    else
    {
        // Normal mode: run immediately, then rerun on any keypress; ctrl-c exits.
        console.log("Running, press any key to rerun or ctrl-c to exit.");
        fireThis();
        stdin.on('keypress', function (chunk, key) {
            if (key.name == 'c' && key.ctrl == true)
            {
                process.exit();
            }
            fireThis();
        });
    }
}
You can drop it into your unit tests like:
var toggle = require('./toggle');

toggle.toggle(function () {
    var vows = require('vows'),
        assert = require('assert');

    vows.describe('Redis Mass Data Storage').addBatch({
    ....
And then run your tests like: node --debug myfile.js debug. If you pass debug, toggle will wait until you press anything but ctrl-c. Ctrl-c exits. You can also rerun, which is nice.
w0000t.
My uneducated guess is that you'd need to patch jasmine; I believe it spawns a new node process (or something similar) when running tests, and those new processes would need to be debug-enabled.
I had a similar desire and managed to get expresso working using Eclipse as a debugger:
http://groups.google.com/group/nodejs/browse_thread/thread/af35b025eb801f43
…but I realised that if I need to step through my code to understand it, I probably need to refactor the code (probably to make it more testable), or break my tests up into smaller units.
Your tests are your debugger.