Detect ELMAH exceptions during integration (WatiN + NUnit) tests - asp.net-mvc-3

Here is my scenario:
I have a suite of WatiN GUI tests that run against an ASP.NET MVC 3 web app on every build in CI. The tests verify that the GUI behaves as expected when given certain inputs.
The web app uses ELMAH XML files for logging exceptions.
Question: how can I verify that no unexpected exceptions occurred during each test run? It would also be nice to provide a link to the ELMAH detail of the exception in the NUnit output if an exception did occur.

ELMAH runs in the web application, which is a separate process from the test runner, so there is no easy way to intercept unhandled errors directly.
The first thing that comes to mind is to watch the folder (App_Data?) that holds the Elmah XML error reports.
You could clear error reports from the folder when each test starts and check whether it is still empty at the end of the test. If the folder is not empty you can copy the error report(s) into the test output.
This approach is not bulletproof. It could happen that an error has occurred but the XML file was not yet (completely) written when you check the folder, for example when your web application is timing out. You could also run into file-locking issues if you try to read an XML file that is still being written.
Instead of reading XML files from disk you could configure Elmah to log to a database and read the error reports from there. That would help you get around file locking issues if they occur.
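A minimal sketch of the watch-the-folder approach, in Python for brevity (the same logic ports directly to an NUnit [SetUp]/[TearDown] pair). The error directory path is an assumption; point it at whatever folder your ELMAH error log is configured to write to.

```python
import os
import shutil

# Hypothetical path -- use the logPath from your ELMAH error log config.
ERROR_DIR = r"C:\inetpub\myapp\App_Data\errors"

def clear_error_reports(error_dir=ERROR_DIR):
    """Test setup: delete any leftover ELMAH XML error reports."""
    for name in os.listdir(error_dir):
        if name.endswith(".xml"):
            os.remove(os.path.join(error_dir, name))

def assert_no_errors(error_dir=ERROR_DIR, output_dir=None):
    """Test teardown: fail if ELMAH wrote any report during the test."""
    reports = [n for n in os.listdir(error_dir) if n.endswith(".xml")]
    if reports and output_dir:
        # Copy the reports into the test output folder for later triage.
        for name in reports:
            shutil.copy(os.path.join(error_dir, name), output_dir)
    assert not reports, f"ELMAH logged {len(reports)} error report(s): {reports}"
```

Calling clear_error_reports in setup and assert_no_errors in teardown gives each test a clean baseline, with the caveats about partially written files noted above.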

Why not go to the ELMAH reporting view (probably elmah.axd) in the setup of your WatiN test and read the timestamp of the most recent error logged there? Then do the same after your test and assert that the timestamps are the same.
It would be easy to read the URL of this most recent error from the same row and record it in the message of your failing assert, which would then appear in your NUnit output.
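The before/after comparison boils down to a few lines; this Python sketch assumes a hypothetical latest_error() helper that scrapes the timestamp and detail URL of the most recent entry from elmah.axd.

```python
def assert_no_new_errors(latest_error, run_test):
    """latest_error is a hypothetical callable returning a
    (timestamp, detail_url) pair scraped from elmah.axd;
    run_test executes the actual WatiN test body."""
    before = latest_error()
    run_test()
    after = latest_error()
    # If a new error appeared, surface its elmah.axd detail link in the
    # assertion message so it shows up in the NUnit output.
    assert after == before, (
        f"Unexpected ELMAH error at {after[0]}, details: {after[1]}"
    )
```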

This is what my tests do:
1. During test setup, record the time.
2. Run the test.
3. During test teardown, browse to elmah.axd/download and parse the ELMAH CSV.
4. Filter out rows that don't match the user running the WatiN tests, and rows whose UNIX time is before the time recorded in step 1.
5. If any rows remain, report the exception message and the error URL to NUnit.
This seems to work well; we have already caught a couple of bugs that wouldn't have surfaced until a human tested the app.
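The teardown filtering step can be sketched like this (Python for brevity). The column names User and Time are assumptions; check them against the header row of your ELMAH version's CSV download, and note the sketch compares timestamps as ISO-8601 strings, so string ordering matches time ordering.

```python
import csv
from io import StringIO

def new_errors_for_user(csv_text, user, start_time):
    """Parse the elmah.axd/download CSV and keep only rows logged for
    `user` at or after `start_time`, i.e. errors raised during this test.
    Column names User/Time are assumed; verify against your CSV header."""
    rows = csv.DictReader(StringIO(csv_text))
    return [
        row for row in rows
        # ISO-8601 timestamps compare correctly as strings.
        if row.get("User") == user and row.get("Time", "") >= start_time
    ]
```

Each surviving row's message and URL can then be fed into the failing assert's message so they land in the NUnit output.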

Related

JMeter - JSR223 PreProcessor failing in GitHub for OTP creation

I am trying to execute a .jmx script in GitHub. The login page requires an OTP, which I am generating using org.jboss.aerogear. However, when I execute the script in GitHub, it does not generate the OTP and throws a 406 (Not Acceptable). Can anyone please guide me on this issue?
The script runs perfectly in local JMeter but fails in GitHub.
Do I need to add a specific driver, and how?
Whenever you face any "error", first of all take a look at the jmeter.log file; normally it contains the reason, or at least a clue so you can figure out or guess the error's cause. If it doesn't, increase JMeter's logging verbosity for the test elements you're using.
Most probably you need to add the .jar file which provides this Totp class (along with its dependencies, if any) to the JMeter Classpath of the JMeter installation in "GitHub" (whatever that means), and the error will go away.

Distributed JMeter test fails with java error but test will run from JMeter UI (non-distributed)

My goal is to run a load test using 4 Azure servers as load generators and 1 Azure server to initiate the test and gather results. I had the distributed test running and was getting good data, but today when I remote-start the test, 3 of the 4 load generators fail, with all the HTTP transactions erroring. The failed transactions log the following error:
Non HTTP response message: java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory (Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory)
I confirmed the presence of commons-logging-1.2.jar in the jmeter\lib folder on each machine.
To try to narrow down the issue I set up one Azure server to both initiate the load and run JMeter-server but this fails too. However, if I start the test from the JMeter UI on that same server the test runs OK. I think this rules out a problem in the script or a problem with the Azure machines talking to each other.
I also simplified my test plan down to where it only runs one simple http transaction and this still fails.
I've gone through all the basics: reinstalled jmeter, updated java to the latest version (1.8.0_111), updated the JAVA_HOME environment variable and backed out the most recent Microsoft Security update on the server. Any advice on how to pick this problem apart would be greatly appreciated.
I'm using JMeter 3.0r1743807 and Java 1.8
The Azure servers are running Windows Server 2008 R2
I did get a resolution to this problem. It turned out to be a conflict between some extraneous code in a jar file and a component of JMeter. It was “spooky” because something influenced the load order of referenced jar files and JMeter components.
I had included a jar file in my JMeter script using the "Add directory or jar to classpath" function in the Test Plan. This jar file has a piece of code I needed for my test along with many other components, and one of those components, probably a similar logging function, conflicted with a logging function in JMeter.
The problem was spooky; the test ran fine for months but started failing at the maximally inconvenient time. It was revealed by creating a very simple JMeter test that would load and run just fine. If I opened the simple test in JMeter and then, without closing JMeter, opened my problem test, my problem test would not fail. If I reversed the order, opening the problem test followed by the simple test, then the simple test would fail too. Given that the problem followed the order in which things loaded, I started looking at the jar files and found my suspect.
When I built the script I left the jar file alone, thinking that the functions I needed might have dependencies on other pieces within the jar. Now that things were broken I needed to find out whether that was true, and happily it was not. So, to fix the problem, I changed the extension on my jar file to .zip and edited it in 7-Zip. I removed all the code except what I needed, but kept all the folders in the path to my needed code. I did this for two reasons: I did not have to update the code that called the functions, and when I tried changing the path the functions did not work.
Next I changed the extension on the file back to .jar and changed the reference in JMeter's "Add directory or jar to classpath" function to point to the revised jar. I haven't seen the failure since.
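The manual 7-Zip surgery can also be scripted, since a jar is just a zip. A hedged sketch using Python's stdlib zipfile module; the package prefixes to keep are of course specific to your jar, and the example prefix below is made up.

```python
import zipfile

def trim_jar(src_jar, dst_jar, keep_prefixes):
    """Copy src_jar to dst_jar, keeping only entries whose path starts
    with one of keep_prefixes (plus META-INF/). Keeping the original
    package paths means callers of the surviving classes need no changes."""
    with zipfile.ZipFile(src_jar) as src, \
         zipfile.ZipFile(dst_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.namelist():
            if entry.startswith("META-INF/") or any(
                entry.startswith(p) for p in keep_prefixes
            ):
                dst.writestr(entry, src.read(entry))

# Hypothetical usage: keep only the package your test actually calls.
# trim_jar("big-library.jar", "trimmed.jar", ["com/example/needed/"])
```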
Many thanks to the folks who looked at this. I hope the resolution will help someone out.

MTM missing Error message for data driven tests

I have a suite of integration tests that I run nightly through TFS's build/test agent framework. When tests that are not data driven fail, I can examine their Error message in MTM via Test | Analyze Test Runs. However, if the test is a data-driven test
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", @"|DataDirectory|\DataFiles\Providers.csv", "Providers#csv", DataAccessMethod.Sequential)]
and the test fails, the Error message field is not even present in the test results, neither in the summary nor in the detail for the individual test that failed.
As shown for test ID 120574 below (screenshot not included):
Running the test locally does provide an error message in the test explorer of Visual Studio, and in the cases I've encountered there is a mixture of pass & fail (i.e. one of the data driven cases failed but not all).
I'm assuming that MTM is not showing the message because there is an aggregate of results.
Is there a way to configure my test, MTM, or the build to show these error messages for data driven tests?
Adding my comment as an answer for whoever is looking for a solution.
The .trx file should have most (almost all) details about the test failure. It will have the Error Message, Exception, and Stacktrace, wherever available, containing information about why a test failed, aborted, or timed out.
Just in case nothing shows up in the .trx file, do check the Test Log, as it may have information about agent-controller connection issues or other general network issues which could lead to test failures or aborts.
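If you want to pull those fields out of the .trx yourself, it is plain XML. A sketch of extracting the error messages of non-passing results; the 2010 TeamTest namespace below is the usual one for VS 2010-era files, but verify it against the root element of your own .trx.

```python
import xml.etree.ElementTree as ET

# Namespace commonly seen in VS 2010-era .trx files -- check your own file.
NS = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}

def failed_results(trx_text):
    """Return (test name, outcome, error message) for each result
    whose outcome is not Passed."""
    root = ET.fromstring(trx_text)
    out = []
    for result in root.iter("{%s}UnitTestResult" % NS["t"]):
        if result.get("outcome") != "Passed":
            msg = result.find(".//t:ErrorInfo/t:Message", NS)
            out.append((result.get("testName"), result.get("outcome"),
                        msg.text if msg is not None else ""))
    return out
```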

How to get the test outcome programmatically?

I am using the Visual Studio agents to run VS coded UI tests on a remote test server (the test agent) from my own developer machine (the test controller).
When running tests locally, I was able to access and read the TRX results file that is created once the test had completed, but I am unable to access this file on the remote test server since the TRX results file remains within the Visual Studio folder on the test controller.
The reason I want to access the results file programmatically is that I have code to read the file and then send out the result as an automated email.
So, is there any way to get the outcome of the test programmatically so that I can send out the results email automatically?
Ideally, I would be able to get access to the TRX results file from the test server but I'm not sure if this is feasible or possible.
If you run the coded UI test inside a load test, you can set it to write to the LoadTest2010 database. From there you can set up ways to get the results and trigger events, as with any database.

VS2010 Load Test Failing - Cannot Open Database - NOT The Load Test Results database

Hi, I have been battling with this issue all day. I have a VS2010 load test which consists of three scenarios, composed of three different web performance tests.
Each of the web performance tests selects URLs from a database which is configured correctly and runs locally. However, when the load test is run remotely it fails with the error:
Could not run load test 'Load Test' on agent 'AGENTSERVER'. Could not open the database 'URLSDB' requested by the login. login failed for useraccount
In an attempt to get this working, the agents and controller are set to run under a domain admin account, and I can log in to the database through Management Studio. I've checked the connection string and can run the test locally but not remotely. Does anyone have any ideas? My next step is to set the connection string for the URLSDB to use SQL authentication.
Finally managed to resolve it at 01:20 AM. When checking the data sources of the three individual tests which made up the mixes in the scenario, I found that the UI was showing that once one had been updated, all three had updated connection strings; that is why I was baffled as to why I was getting these errors. Plus, the error doesn't indicate which connection was having the issue.
So, to eliminate the tests as the issue, I removed the data source from each test and created individually named, brand-new data sources, all effectively pointing to the same SQL Server and the same database. Then I ran the tests and all performed correctly, finally!
So the core issue was that the connection strings in the underlying tests were incorrect. I will be testing the UI further to check whether it was just my own error or there is actually a bug in the UI; if I find a bug I'll report it.
Thanks to those who took the time to try to help me solve it; gutted that the issue was so minor when it had me baffled for nearly 20 hours :/
The domain admin account you are running the test from cannot connect to the database server from the agent machine.
Log into the agent and debug the database connection from there.
Please be aware that a thread-blocking call inside a web test, such as this, may cause issues with your load test. I recommend that you load all test URLs during the test's instantiation if at all possible.
Essentially, minimise the database calls to as few as possible.
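The load-once pattern this answer recommends looks roughly like this (a Python sketch of the idea; in a VS web performance test you would do the equivalent in the test's constructor or an initialization method). The fetch_urls callable stands in for whatever database read your test performs.

```python
class UrlSource:
    """Load all test URLs once, up front, instead of making a blocking
    database call inside every web test iteration."""

    def __init__(self, fetch_urls):
        # fetch_urls is a hypothetical callable that performs the one
        # (potentially blocking) database read at instantiation time.
        self._urls = list(fetch_urls())
        self._i = 0

    def next_url(self):
        """Hand out URLs round-robin without touching the database."""
        url = self._urls[self._i % len(self._urls)]
        self._i += 1
        return url
```

With this shape, the database is hit exactly once per test instance rather than once per request, which keeps blocking calls out of the load path.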
