MTM missing Error message for data-driven tests - visual-studio-2013

I have a suite of integration tests that I run nightly through TFS's build/test agent framework. When tests that are not data-driven fail, I can examine their Error message in MTM via Test | Analyze Test Runs. However, if the test is a data-driven test
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", @"|DataDirectory|\DataFiles\Providers.csv", "Providers#csv", DataAccessMethod.Sequential)]
and the test fails, the Error message field is not present in the test results at all, neither in the summary nor in the detail view for the individual test that failed.
(Screenshot of the results for test ID 120574, which lack the Error message field, omitted.)
Running the test locally does provide an error message in Visual Studio's Test Explorer, and in the cases I've encountered there is a mixture of pass and fail (i.e. one of the data-driven cases failed, but not all of them).
I'm assuming that MTM is not showing the message because it aggregates the results.
Is there a way to configure my test, MTM, or the build to show these error messages for data-driven tests?
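For reference, a minimal sketch of the shape of test in question; the class name, method body, and CSV column name (ProviderName) are illustrative, not from the original. Writing the current row into the test output via TestContext is one possible way to keep per-row context visible even when results are aggregated:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ProviderTests
    {
        // MSTest injects the context, including the current data row.
        public TestContext TestContext { get; set; }

        [TestMethod]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
            @"|DataDirectory|\DataFiles\Providers.csv", "Providers#csv",
            DataAccessMethod.Sequential)]
        public void ValidateProvider()
        {
            // Log which row is being exercised so a failure can be traced
            // back to a specific CSV line in the aggregated output.
            TestContext.WriteLine("Row: {0}", TestContext.DataRow["ProviderName"]);
            // ... actual test body ...
        }
    }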

Adding my comment as an answer for whoever is looking for a solution.
The .trx file should have most (almost all) details about the test failure. It will have the Error Message, Exception, and Stacktrace, wherever available, with information about why a test failed, aborted, or timed out.
In case nothing shows up in the .trx file, check the Test Log as well; it may have information about Agent-Controller connection issues or other general network issues that could lead to test failures or aborts.
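For orientation, a failed test inside a .trx file looks roughly like the excerpt below (element names as I recall them; exact attributes vary by Visual Studio version):

    <UnitTestResult testName="ValidateProvider" outcome="Failed" ...>
      <Output>
        <ErrorInfo>
          <Message>Assert.AreEqual failed. Expected:&lt;...&gt;. Actual:&lt;...&gt;.</Message>
          <StackTrace>   at ProviderTests.ValidateProvider() ...</StackTrace>
        </ErrorInfo>
      </Output>
    </UnitTestResult>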

Related

How to retry only failed tests in the CI job run on Gitlab?

Our automation tests run in a GitLab CI environment. We have a regression suite of around 80 tests.
If a test fails due to some intermittent issue, the CI job fails, and since the next stage depends on the Regression one, the pipeline gets blocked.
We retry the job to rerun the regression suite, expecting it to pass this time, but then some other test fails instead.
So, my question is:
Is there a way to make a retried CI job run only the previously failed tests, not the whole suite?
You can use the retry keyword when you specify the parameters for a job, to define how many times the job can be automatically retried: https://docs.gitlab.com/ee/ci/yaml/#configuration-parameters
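In .gitlab-ci.yml that looks like the sketch below (job name and script are placeholders). Note that this retries the whole job, not just the failed tests:

    regression:
      stage: test
      script:
        - ./run_regression_suite.sh   # placeholder for your test command
      retry: 2                        # automatically retry the job up to 2 more times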
[Retry Only Failed Scenarios]
Yes, but it depends; let me explain. I'll outline the pseudo-steps to retry only failed scenarios (see the sketch after the POC note below). The steps are specific to pytest but can be adapted to other test runners.
1. Execute the test scenarios with --last-failed. On the first run, all 80 scenarios will be executed.
2. The test runner creates a metadata file containing the list of failed tests. For example, pytest creates a .pytest_cache folder containing a lastfailed file with the list of failed scenarios.
3. Add the .pytest_cache folder to the GitLab cache with key=<gitlab-pipeline-id>.
4. The user sees that there are 5 failures and retries the failed job.
5. The retried job finds the .pytest_cache folder in the GitLab cache and copies it into the test-running directory. (This step shouldn't fail if the cache doesn't exist, so that the first execution is handled.)
6. Execute the same test command with the same --last-failed flag to run only the tests that failed earlier.
7. In the rerun, only the 5 failed test cases will be executed.
Assumptions:
The test runner you are using creates a metadata file of failed tests, as pytest does.
POC Required:
I have not done a POC of this, but in theory it looks possible. My only doubt is how GitLab parses the results: ideally the final result should show all 80 scenarios passing. If it doesn't work out that way, we would need two jobs, execute tests -> [manual] execute failed tests, to get two parsed results. I am sure it will definitely work with two stages.
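A rough .gitlab-ci.yml sketch of the idea, assuming pytest; the job name, stage, and test path are illustrative:

    regression:
      stage: test
      cache:
        key: "$CI_PIPELINE_ID"     # one cache per pipeline, as described above
        paths:
          - .pytest_cache/
      script:
        # --last-failed reruns only the tests recorded as failed; on the
        # first run (no cache yet) pytest falls back to running everything.
        - pytest --last-failed tests/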
You can use a Retry Analyser. This will definitely help you.

How to re-run only failed tests (failed data row) in visual studio test task?

We have a build pipeline for our automated (Selenium) scripts in Azure DevOps, and we use BrowserStack to run all of them. Sometimes we get timeout issues; no matter what additional options we add to the browser settings, we still get timeouts. So we decided to re-run the tests up to a certain limit, after which the pass percentage went up and the scripts passed without any issues.
Each test method has multiple data rows. Now, when a test method fails for a particular data row, the entire test (including all the data rows that passed) is executed again in the re-run, which is unnecessary since some rows already passed.
Is there any way to re-run only the failed data row of a test?
As seen in the screenshot below, data row 0 failed on the regular attempt and the others passed, yet the re-run executes all the passed rows again.
Test result screenshot
Note: we are using the Batch option to re-run the failed tests. We also tried the automatic batching option, where the re-run failed because of a test name issue in vstest.console.exe (there is a formatting issue when the test name contains spaces or round braces).
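For context, the re-run behaviour being described corresponds to the VSTest task's rerun settings, roughly as below in YAML form (values illustrative). As far as I know it re-runs at test-method granularity, not per data row:

    - task: VSTest@2
      inputs:
        testSelector: 'testAssemblies'
        testAssemblyVer2: '**\*Tests.dll'   # placeholder assembly pattern
        rerunFailedTests: true              # re-run failed tests...
        rerunMaxAttempts: 3                 # ...up to this many attempts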

jasmine-karma handleGlobalErrors

I'm using the Karma-Jasmine framework plugin and have some flaky tests that sometimes throw an exception in a beforeAll within a suite. Unfortunately, this appears to prevent all subsequent test suites from running; in particular, the reporter plugins don't report that any tests were skipped, the tests just disappear from the reporter output.
The Karma-Jasmine runner has a function called handleGlobalErrors that reports any failed expectations back to Karma, and this error appears to cause Karma to stop running tests.
Does anyone have ideas on whether the error could be logged as an 'info' message so that the remaining tests still run? Or am I missing some kind of Karma configuration option?

Is it possible to display JMeter 'View Results in Table' listener data

I have one JMeter test plan with several test cases, and I use the jmeter-maven-plugin.
If one of the test cases fails (for 350 threads), the output looks like
Tests Run: 1, Failures: 350, Errors: 0
so it is not clear which test case failed.
Is it possible to show more detailed information about the failed test case in the Jenkins UI or in the console, exactly like the 'View Results in Table' listener shows it in the JMeter GUI?
Is there a plugin to show formatted output for the resulting JTL file (only test case status and failure details) in the console or in the Jenkins UI?
Give this Jenkins plugin a try: Performance.
The code that checks for failures just searches through your .jtl file looking for instances of failure, nothing more. It's really just there so that you can trigger a failure that Maven can detect; the plugin does no in-depth analysis of the .jtl file.
There is also a jmeter-analysis-maven-plugin that will give you more detailed information about the test results; if it doesn't meet your needs, feel free to add feature requests.
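If it helps, wiring that analysis plugin into a build looks roughly like this (coordinates and goal name as I recall them; verify against the plugin's documentation):

    <plugin>
      <groupId>com.lazerycode.jmeter</groupId>
      <artifactId>jmeter-analysis-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>verify</phase>   <!-- run after the JMeter tests produce the .jtl -->
          <goals>
            <goal>analyze</goal>
          </goals>
        </execution>
      </executions>
    </plugin>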

Detect ELMAH exceptions during integration (WATIN + NUNIT) tests

Here is my scenario:
I have a suite of WatiN GUI tests that run against an ASP.NET MVC3 web app on each CI build. The tests verify that the GUI behaves as expected when given certain inputs.
The web app uses ELMAH XML files for logging exceptions.
Question: how can I verify that no unexpected exceptions occurred during each test run? It would also be nice to provide a link to the ELMAH detail of the exception in the NUnit output if an exception did occur.
ELMAH runs in the web application, which is a separate process from the test runner, so there is no easy way to directly intercept unhandled errors.
The first thing that comes to mind is to watch the folder (App_Data?) that holds the ELMAH XML error reports.
You could clear the error reports from the folder when each test starts and check whether it is still empty at the end of the test. If the folder is not empty, you can copy the error report(s) into the test output. (A minimal sketch follows below.)
This approach is not bulletproof. It could happen that an error has occurred but the XML file has not yet been (completely) written when you check the folder, for example when your web application is timing out. You could also run into file-locking issues if you try to read an XML file that is still being written.
Instead of reading XML files from disk, you could configure ELMAH to log to a database and read the error reports from there. That would help you get around file-locking issues if they occur.
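A minimal NUnit sketch of that folder-watch idea; the log path and class name are assumptions to adapt to your app:

    using System.IO;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class GuiTests
    {
        // Assumed location of ELMAH's XML error log; adjust to your deployment.
        private const string ElmahLogDir = @"C:\Sites\MyApp\App_Data\Elmah";

        [SetUp]
        public void ClearElmahErrors()
        {
            // Start each test with an empty error folder.
            foreach (var file in Directory.GetFiles(ElmahLogDir, "*.xml"))
                File.Delete(file);
        }

        [TearDown]
        public void AssertNoElmahErrors()
        {
            var errors = Directory.GetFiles(ElmahLogDir, "*.xml");
            if (errors.Any())
                // Copy the raw reports into the NUnit output so the failure
                // carries the exception details.
                Assert.Fail("ELMAH logged " + errors.Length + " error(s):\n"
                    + string.Join("\n", errors.Select(File.ReadAllText).ToArray()));
        }
    }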
Why not go to the ELMAH reporting view (probably elmah.axd) in the setup of your WatiN test and read the timestamp of the most recent error logged there? Then do the same after your test and assert that the timestamps are the same.
It would be easy to read the URL of this most recent error from the same row and record it in the message of your failing assert, which would then appear in your NUnit output.
This is what my tests do:
1. During test setup, record the time.
2. Run the test.
3. During test teardown, browse to elmah.axd/download and parse the ELMAH CSV.
4. Filter out rows that don't match the user running the WatiN tests, and rows whose UNIX time is before the time recorded in (1).
5. If any rows remain, report the exception message and the error URL to NUnit.
This seems to work well; we have already caught a couple of bugs that wouldn't have surfaced until a human tested the app. (A rough sketch of the teardown follows.)
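A rough sketch of that teardown, assuming WebClient can reach the handler and that the CSV has User and UnixTime columns (both header names, the URL, and the test account are assumptions; check the header row of your actual elmah.axd/download output, and use a real CSV parser if messages can contain commas):

    using System;
    using System.Linq;
    using System.Net;
    using NUnit.Framework;

    [TestFixture]
    public class GuiTestsWithElmahCheck
    {
        private DateTime testStart;

        [SetUp]
        public void RecordStartTime()
        {
            testStart = DateTime.UtcNow;   // step (1): record the time
        }

        [TearDown]
        public void ReportNewElmahErrors()
        {
            string csv;
            using (var client = new WebClient())
            {
                // URL is illustrative; point it at your app's ELMAH handler.
                csv = client.DownloadString("http://localhost/elmah.axd/download");
            }

            var lines = csv.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries);
            var headers = lines[0].Split(',');
            int userCol = Array.IndexOf(headers, "User");      // assumed header name
            int timeCol = Array.IndexOf(headers, "UnixTime");  // assumed header name

            long startUnix = (long)(testStart - new DateTime(1970, 1, 1)).TotalSeconds;
            var newErrors = lines.Skip(1)
                .Select(l => l.Split(','))                     // naive split; see note above
                .Where(f => f[userCol] == "watin-test-user"    // assumed test account
                         && long.Parse(f[timeCol]) >= startUnix)
                .ToList();

            if (newErrors.Count > 0)
                Assert.Fail("ELMAH logged " + newErrors.Count + " new error(s) during this test.");
        }
    }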
