How to display/order unit tests by date created in Visual Studio 2013? - visual-studio

I am new to a fairly large code base (in my case, the Project Katana source code).
I am studying the unit tests in the project in order to get acquainted with the code base (there are around 554 tests in the solution).
Since there are a considerable number of unit tests, I would like to study/review them in the order they were created.
I cannot seem to find a way in Test Explorer to arrange the unit tests in chronological order. A quick internet search found nothing.
Any suggestions?
EDIT: In the meantime, I will review the unit tests in an alternative order: by using the library and then reading through the test methods that cover whichever method I want to use. From the perspective of a consumer of the project, I believe this may be a more efficient approach.

An approach that just occurred to me (although not ideal) would be to open the first commit of the source code and go from there.
Although this does not answer the question of how to arrange them chronologically in Test Explorer, it does give us the opportunity to look at the tests in chronological order. A brute force approach.

Related

How to log execution order of unit tests in Visual Studio 2019 Test Explorer

We have a legacy code base with some unit tests still coupled by global state (static variables etc.). To find them, I need to know the exact execution order the tests ran when I ran them via VS test explorer.
Is there a way to log execution order in VS test explorer?
I know that vstest.console does output/log the execution order, but then I need to narrow down the subset of tests, which is very inconvenient with vstest. In the VS test runner, I can just run subsets.
I also know that there are other tools (like resharper test runner) but this is also not an option.
Not out of the box that I am aware of, but I guess you can find ways to log it, like having a global counter that you increment in a "before" method (see the sketch below).
But since the order of the tests is non-deterministic by design, I am not sure how valuable that information is.
If you know which tests are coupled, run those as Ordered Tests, or use test traits to control how you execute them. The best option is, of course, to break their external dependencies.
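A minimal sketch of the counter idea mentioned above, assuming xUnit (the framework used elsewhere on this page); the attribute name and log path are invented, and with MSTest the same counter could instead live in a [TestInitialize] method:

using System;
using System.IO;
using System.Reflection;
using System.Threading;
using Xunit.Sdk;

// Hypothetical attribute: logs the order in which each test method starts.
public class LogExecutionOrderAttribute : BeforeAfterTestAttribute
{
    private static int _counter;
    private static readonly object _gate = new object();

    public override void Before(MethodInfo methodUnderTest)
    {
        int order = Interlocked.Increment(ref _counter);
        lock (_gate) // test classes may run in parallel, so serialize the file writes
        {
            File.AppendAllText(@"C:\temp\test-order.log",
                $"{order};{methodUnderTest.DeclaringType?.FullName}.{methodUnderTest.Name}{Environment.NewLine}");
        }
    }
}

Decorate the legacy test classes (or individual test methods) with [LogExecutionOrder], run the subset from Test Explorer, and read the log afterwards.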

In VS2017, is there a way to copy the names of all the failed unit tests to the clipboard at one time?

It is not my decision personally to have lingering failing unit tests.
In the solution I work with at my company, there are a few failing unit tests that have lingered over time. Occasionally, while making changes, another unit test will start to fail, but it won't be clear which one it is. To figure it out, it's sometimes necessary to copy at least the names of the failing unit tests out into a text file and do a comparison.
In VS2017, in the Test Explorer, you can right-click a unit test and select Copy, and it will copy the name and some other meta-information to your clipboard. However, if you select multiple unit tests, that option disappears. Additionally, there doesn't appear to be any "Copy All" option available.
So if you need to copy the names of all of the unit tests that are failing (other related meta-information is okay), is there a way to do this in Visual Studio 2017 other than copying the tests manually one at a time?
This is not a direct answer to your question, but rather a workaround (though I'd call it a best practice compared to what you seem to be doing):
You seem to have a number of unit tests that produce errors, and for whatever reason you decided not to fix them in time. Fixing them would be the obvious solution, but let's assume there were reasons not to do so.
Now everybody who develops a feature after that decision is in trouble, because the unit test results just became unreliable. Tests might fail, and you will never know whether it's your error or whether that test was supposed to fail because of the former (bad) decision. Failed tests have been transformed from a red/green quality signal into a broken traffic light signaling nothing.
You should mark those tests that fail on purpose so that you know which they are. If you are using MSTest (the Visual Studio default) you can do so by annotating them with the [Ignore] attribute. That way, they will not be run, will not count as failed, but will still appear in the list and remind you of the fact that they still need to be fixed.
That way, your tests are reliable again. Anything red is something you broke. Anything red is something you need to fix. Yellow marks tests that were broken anyway, and green... well, green is green.
No need to compare lists of test names against each other. Use the means available.
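For reference, a minimal sketch of the [Ignore] approach described above, assuming MSTest; the class name, method names, and TODO comment are invented:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculationTests
{
    // Known-broken test: parked with a note instead of being left permanently red.
    [Ignore] // TODO: has been failing since the pricing rework; fix and re-enable
    [TestMethod]
    public void Discount_is_applied_to_legacy_orders()
    {
        // ...
    }

    // A test that is expected to stay green.
    [TestMethod]
    public void Total_includes_shipping()
    {
        Assert.IsTrue(true); // placeholder
    }
}

Ignored tests then show up as skipped in Test Explorer rather than as failures, so anything newly red really is new.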

Visual Studio and Individual Unit Tests as Actual Admin Tools

This question may be a bit nebulous, so please bear with me.
I am using Visual Studio, and I am new to the entire realm of unit testing. One thing I do a lot, though, is use unit testing as a quick and dirty ad-hoc "administration UI" at times when I need to just TRY things but don't have time to make an actual admin system.
What I mean is ... sometimes I just want to get some data thrown into my database to see how it looks on a page. So I'll make a dirty unit test...
[Fact]
public void install_some_test_data()
{
    // RavenDocumentStore() stands in for whatever builds the configured IDocumentStore.
    using (var database = RavenDocumentStore())
    {
        using (var session = database.OpenSession())
        {
            // create some objects
            // add them to the session, e.g. session.Store(someObject);
            session.SaveChanges(); // save some objects
        }
    }
}
Nowhere in here have I really cared about "testing"; I just like the fact that I can right-click, say "Run it", and it'll go, without launching the program, without having to have a lot of interaction, etc. etc.
So my question is this:
Is this okay to do? I realize this isn't how a program should be long term managed, but I was scolded for doing this by another developer simply because I wanted to quickly show them how something they wanted to see worked. Is there really a problem with using these convenient tools as simpler ways of running commands against my database that can give me immediate feedback on whether it passed or failed?
My follow-up question is this: I have had to search pretty hard to find a unit test runner that lets me run individual methods instead of running every single unit test in a class. Is this normal? This whole thing kind of confuses me in general. Even when I do use tests for actual testing, why would I always want to run EVERY test every time? Isn't that extremely wasteful of time and resources? How do you control the order they run in? This seems even more imperative when tests depend on some data existing before they can run appropriately.
At the moment, I am using xunit and then the unit test runner in ReSharper, but out of the box, Visual Studio doesn't seem to let me run unit tests individually, just as huge batches in a single class.
Am I the only person confused by this?
Sure, it's actually very easy to execute a single unit test (no external test runner required).
Go to TEST -> Windows -> Test Explorer
The Test Explorer window will open, listing all the discovered tests.
Now you can right-click the test you want to execute and select "Run Selected Tests".
As to your other question: it's hard to distinguish what you are asking. Are you demonstrating a proof of concept with a unit test? How does this test replace your admin panel?
Just put your cursor on a Test function name. Then go to Test -> Run Selected Scope (or similar - the name changes by context). That test should execute and also create a test list for you automatically.

Web performance testing with datasource in vs 2012

I'm having some issues with my web performance test that I created in Visual Studio 2012. I've created a test to go through our order system, but on the first run of the test it has errors on the page where you select orders. If I run that same test again it seems to work.
Since I am using a data source containing usernames and passwords, I only have one performance test and it runs once for each user in the data source. When it runs it passes the first test, but each additional user causes errors on that page which results in an empty shopping cart. It seems like an issue with POST variables not being generated or passed for each user after the first in the test.
Does anyone know how to fix this without having to create a web performance test specifically for each user? Using one performance test with a data source is so much nicer.
Thanks!
The web performance test system is intended to allow data-driven tests in the style you want. Your web site probably has some parameters that Visual Studio has not detected. The mechanisms built into Visual Studio for detecting dynamic parameters are good, but not infallible.
First step: just read through the recorded test, including the form post parameters, looking for things that may have been missed. You learn what they are through experience.
Another step: record two versions of the same test, performing steps that are as close to identical as possible (but do not worry about think times). Then compare the two recorded tests. Look for form post parameters and other values that differ, and consider whether they should be taken from earlier responses. Find which responses the values come from and write the appropriate extraction rules to create the context parameters.
It can also be worth recording and comparing two tests that are identical except for the user name and password used.
As well as recording tests with Visual Studio and comparing the files, it can be worth recording with a program such as Fiddler.
I have found that comparing the ".webtest" files with a good text comparison program helps find the differences, then make the edits within Visual Studio. If you are confident and keep backups you might edit the XML in the ".webtest" files.
Update: Note on comparing the .webtest files. Look at where the RecordedValue="..." fields differ but the associated parameter fields are not replaced by context variables.
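If a value really is missed by the built-in correlation, one option is a custom extraction rule. A rough sketch, assuming the Microsoft.VisualStudio.TestTools.WebTesting API; the rule name, regular expression, and "orderToken" field are made up for illustration:

using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Hypothetical rule: pulls a hidden "orderToken" value out of the response body
// and stores it in a context parameter so later requests can post it back.
public class ExtractOrderToken : ExtractionRule
{
    public override void Extract(object sender, ExtractionEventArgs e)
    {
        var match = Regex.Match(e.Response.BodyString,
            "name=\"orderToken\" value=\"([^\"]+)\"");
        if (match.Success)
        {
            e.WebTest.Context[ContextParameterName] = match.Groups[1].Value;
            e.Success = true;
        }
        else
        {
            e.Success = false;
            e.Message = "orderToken not found in the response body";
        }
    }
}

Build the rule into a class library referenced by the test project; it should then be offered alongside the built-in rules when you add an extraction rule to a response.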

Best way to test a Delphi application

I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have set up the application to use command-line switches to open a specific form that can be tested, and I can create the set of values and clicks that need to be performed.
But I have a few questions before I do anything drastic... (and before purchasing anything)
1. Is it worth it?
2. Would this be a good way to test?
3. The results of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
4. I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db? Other than drop user cascade, create user, ..., impdp.
5. Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware though that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported, you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product. And the first signs are good. But it was a lot of work. So for the next release we definitely need some way to automate the test process. That's why I'm already introducing unit tests. Due to the dependencies these are not real unit tests, but you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure that you know this is an expensive process. It takes lots of time. And you have to fight the tendency to tackle more than one problem in a row.
I can't answer everything, as I've never used TestComplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing to you as a developer when the client comes back to you when you've broken something that used to work. Always a good idea to make sure everything that used to work, still does.
4 - Oracle has something called Flashback, which lets you create a restore point in the database. After you've done your testing you can just jump back to that restore point. You can write scripts to use it too, e.g. FLASHBACK DATABASE TO TIMESTAMP (FEB-12-2009, 00:00:00); etc.
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is a better first step. However, if you have a large code base that is not engineered for testing, setting up functional tests is an even bigger pain than setting up GUI tests.
The results of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces. Or you can use the OLE interface in VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "Start database" (like the Flashback feature described by Rich Adams).
Then program some unit tests using DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Start every test run by restoring the "Start database", so that a known set of data is always used.
I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db?
Use transactions: perform a rollback when the test completes. This should revert everything to the initial state.
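This thread is about Delphi, but the transaction pattern itself is framework-neutral: open a transaction before the test body and roll it back afterwards. A rough sketch in C# (the language used elsewhere on this page), where TestDatabase.OpenConnection is an invented helper; in DUnit the same logic would go into SetUp and TearDown:

using System;
using System.Data.Common;
using Xunit;

public class OrderPersistenceTests : IDisposable
{
    private readonly DbConnection _connection;
    private readonly DbTransaction _transaction;

    public OrderPersistenceTests()
    {
        // Hypothetical helper that returns an open connection to the test database.
        _connection = TestDatabase.OpenConnection();
        // Everything the test writes stays inside this transaction.
        _transaction = _connection.BeginTransaction();
    }

    [Fact]
    public void Saving_an_order_inserts_one_row()
    {
        // ... exercise the code under test using _connection and _transaction ...
    }

    public void Dispose()
    {
        _transaction.Rollback();  // undo all changes so the next test starts from a clean state
        _connection.Dispose();
    }
}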
Recommended reading:
http://xunitpatterns.com/
