Karma-Jasmine handleGlobalErrors

I'm using the Karma-Jasmine framework plugin and have some flaky tests that sometimes throw an exception in a beforeAll method within a suite. Unfortunately, this appears to prevent all subsequent test suites from running; in particular, the reporter plugins don't report that any tests were skipped, the tests simply disappear from the reporter output.
The Karma-Jasmine runner has a function called handleGlobalErrors that reports any failed expectations back to Karma, and this error appears to cause Karma to stop running tests.
Does anyone have ideas or thoughts on logging the error as an 'info' message and continuing to run the remaining tests? Or am I missing some kind of Karma configuration option?

Related

JMeter exception after test completes

I am observing the following issue while running a JMeter script from the non-GUI command line through a Jenkins pipeline.
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#park at line:175
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await at line:2039
java.awt.EventQueue#getNextEvent at line:554
java.awt.EventDispatchThread#pumpOneEventForFilters at line:187
java.awt.EventDispatchThread#pumpEventsForFilter at line:116
java.awt.EventDispatchThread#pumpEventsForHierarchy at line:105
java.awt.EventDispatchThread#pumpEvents at line:101
java.awt.EventDispatchThread#pumpEvents at line:93
java.awt.EventDispatchThread#run at line:82
Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait
sun.awt.AWTAutoShutdown#run at line:314
java.lang.Thread#run at line:748
Thread[DestroyJavaVM,5,main], stackTrace:
java.awt stands for Abstract Window Toolkit; you should not be seeing this kind of message given you run JMeter in command-line non-GUI mode.
I can only think of a bug in JMeter like Bug 64479, so if you have an HTTP(S) Test Script Recorder in your test plan, try removing it completely.
Other things to try:
set the jmeterengine.force.system.exit=true property in the user.properties file
make sure to follow the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article so your JMeter instance is properly configured for high loads; JMeter's default settings are suitable for test development and debugging, but not for heavier loads

Gradle: Clean up resources after build failure

I execute a test suite through Gradle for the build, and it spins up a lot of processes on different ports. Also, failFast is set to true for my test task. So the following happens when I execute my suite:
Suite starts up and spins up processes/servers listening to different ports
Tests in the suite are executed
When one or more tests fail, the suite execution is halted and the build is marked as failed
Now, when the failing tests are fixed and the build is run again, step 1 (described above) fails with a message that the port is already in use. Also, I am using the forkEvery parameter, meaning the previous tests might have left more than one JVM running.
Is there any way to clean everything up (in terms of processes, not physical files) when a build fails through Gradle?
You can add a custom TestListener that stops the processes/servers from (1).
You can reference Spring Boot's FailureRecordingTestListener: https://github.com/spring-projects/spring-boot/blob/master/buildSrc/src/main/java/org/springframework/boot/build/testing/TestFailuresPlugin.java#L57..L95
The basic idea here is that in the afterSuite method, you would stop whatever processes were started/created in (1). Note that within the TestListener you don't have access to the test instances where those processes were started, so you'll need to figure out how to stop them without a reference to the original class that may have defined them.

How to re-run only failed tests (failed data rows) in the Visual Studio Test task?

We have a build pipeline for our automated scripts (Selenium) in Azure DevOps, and we use BrowserStack to run all our scripts. Sometimes we get timeouts, and no matter what additional options we add to the browser settings, we still get them. So we decided to re-run the tests up to a certain limit; the pass percentage went up and the scripts passed without any issues.
We have different data rows for each test method. Now, when a test method fails for a particular data row, the entire test (including all the passed data rows) is executed again in the re-run, which is unnecessary since some rows already passed.
Is there any way to re-run only the failed data row?
As seen in the screenshot below, on the regular attempt data row 0 failed and the others passed. The re-run then executes all the passed tests again.
Test result screenshot
Note: We are using the Batch option to re-run the failed tests. We also tried the Automatic batch option, where the re-run failed because of a test name issue in vstest.console.exe (there is a formatting issue if the test name contains spaces or round braces).

MTM missing Error message for data driven tests

I have a suite of integration tests that I run nightly through TFS's build/test agent framework. When tests that are not data-driven fail, I can examine their Error message in MTM via Test | Analyze Test Runs. However, if the test is a data-driven test
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", @"|DataDirectory|\DataFiles\Providers.csv", "Providers#csv", DataAccessMethod.Sequential)]
and the test fails, the Error message field is not even present in the test results, in neither the summary nor the detail for the individual test that failed.
As shown in ID 120574 below:
Running the test locally does provide an error message in the Test Explorer of Visual Studio, and in the cases I've encountered there is a mixture of pass & fail (i.e. one of the data-driven cases failed but not all).
I'm assuming that MTM is not showing the message because there is an aggregate of results.
Is there a way to configure my test, MTM, or the build to show these error messages for data driven tests?
Adding my comment as an answer for whoever is looking for a solution.
The .trx file should have most (almost all) of the details about the test failure. It will have the Error Message, Exception, and Stacktrace, wherever available, with information about why a test failed, aborted, or timed out.
In case nothing shows up in the .trx file, check the Test Log, as it may have information about agent-controller connection issues or other general network issues that could lead to test failures or aborts.

Detect ELMAH exceptions during integration (WatiN + NUnit) tests

Here is my scenario:
I have a suite of WatiN GUI tests that run against an ASP.NET MVC3 web app per build in CI. The tests verify that the GUI behaves as expected when given certain inputs.
The web app uses ELMAH XML files for logging exceptions.
Question: how can I verify that no unexpected exceptions occurred during each test run? It would also be nice to provide a link to the ELMAH detail of the exception in the NUnit output if an exception did occur.
Elmah runs in the web application, which is a separate process from the test runner, so there is no easy way to directly intercept unhandled errors.
The first thing that comes to mind is to watch the folder (App_Data?) that holds the Elmah XML error reports.
You could clear error reports from the folder when each test starts and check whether it is still empty at the end of the test. If the folder is not empty, you can copy the error report(s) into the test output.
This approach is not bulletproof. It could happen that an error has occurred but the XML file was not yet (completely) written when you check the folder, for example when your web application is timing out. You could also run into file locking issues if you try to read an XML file that is still being written.
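As a rough illustration only, a minimal NUnit setup/teardown pair for the folder check might look like this; the ErrorsFolder path and the fixture name are assumptions, so point them at wherever your Elmah XML error log actually writes.

using System;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ElmahFolderCheckExample
{
    // Hypothetical path: point this at the folder Elmah writes its XML error files to.
    private const string ErrorsFolder = @"C:\inetpub\MyApp\App_Data\Errors";

    [SetUp]
    public void ClearElmahErrorReports()
    {
        // Start every test with an empty error folder.
        foreach (var file in Directory.GetFiles(ErrorsFolder, "*.xml"))
            File.Delete(file);
    }

    [TearDown]
    public void AssertNoElmahErrorReports()
    {
        var errorFiles = Directory.GetFiles(ErrorsFolder, "*.xml");
        if (errorFiles.Length == 0)
            return;

        // Copy each report into the test output so it shows up alongside the failure.
        foreach (var file in errorFiles)
            Console.WriteLine(File.ReadAllText(file));

        Assert.Fail("ELMAH wrote " + errorFiles.Length + " error report(s) during this test.");
    }
}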
Instead of reading XML files from disk you could configure Elmah to log to a database and read the error reports from there. That would help you get around file locking issues if they occur.
Why not go to the Elmah reporting view (probably elmah.axd) in the setup of your WatiN test and read the timestamp of the most recent error logged there? Then do the same after your test and assert that the timestamps are the same.
It would be easy to read the URL of this most recent error from the same row and record it in the message of your failing assert, which would then appear in your NUnit output.
This is what my tests do:
During test setup record the time
Run test
During test teardown browse to (elmah.axd/download) and parse the ELMAH CSV
Filter out rows that don't match the user that is running the WatiN tests and rows where the UNIX time is before the time recorded in (1)
If any rows remain, report the exception message and the error URL to NUnit.
This seems to work well; we have already caught a couple of bugs that wouldn't have surfaced until a human tested the app. A rough sketch of the setup/teardown pair follows.
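The sketch below assumes NUnit and downloads the CSV directly with WebClient rather than browsing to it through WatiN; the elmah.axd URL, the test user name, and the CSV column positions are all assumptions you would adjust to your application.

using System;
using System.Linq;
using System.Net;
using NUnit.Framework;

[TestFixture]
public class ElmahCsvCheckExample
{
    // All of these values are assumptions for the sketch; adjust them to your app.
    private const string ElmahCsvUrl = "http://localhost/MyApp/elmah.axd/download";
    private const string TestUser = "watin-test-user";
    private const int MessageColumn = 4;
    private const int UserColumn = 5;
    private const int TimeColumn = 6;
    private const int UrlColumn = 7;

    private DateTime testStartedAtUtc;

    [SetUp]
    public void RecordTestStartTime()
    {
        testStartedAtUtc = DateTime.UtcNow;
    }

    [TearDown]
    public void AssertNoNewElmahErrors()
    {
        string csv;
        using (var client = new WebClient())
        {
            csv = client.DownloadString(ElmahCsvUrl);
        }

        // Naive comma split for brevity; a real implementation should handle quoted
        // fields containing commas. Skip the header row, then keep only errors raised
        // by the test user after this test started.
        var newErrors = csv
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Skip(1)
            .Select(line => line.Split(','))
            .Where(cols => cols[UserColumn] == TestUser)
            .Where(cols => DateTime.Parse(cols[TimeColumn]).ToUniversalTime() >= testStartedAtUtc)
            .ToList();

        if (newErrors.Count > 0)
        {
            // Report each exception message and its error URL in the failure output.
            var details = string.Join(Environment.NewLine,
                newErrors.Select(cols => cols[MessageColumn] + " -> " + cols[UrlColumn]).ToArray());
            Assert.Fail("ELMAH logged new error(s) during this test:" + Environment.NewLine + details);
        }
    }
}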
