I have different tests with different deployment items, like this:
[TestMethod]
[DeploymentItem("item1.xml")]
public void Test1(){...}
[TestMethod]
[DeploymentItem("item2.xml")]
public void Test2(){...}
If I run the tests one by one it works; however, if I run them all together, all the deployment items are copied, so Test2 will run with both item1.xml and item2.xml in the output folder.
What I am trying to do is run each test only with the specified deployment items in the output folder.
Is there any way to clean deployment items after each test?
My workaround is deploying the items to an output subfolder for each test, but I don't like it.
If you define the following in your test class, you may be able to accomplish what you're looking for:
[TestInitialize()]
public void Setup()
{
    // Delete all files from the deployment directory
}
I tend not to use files as part of 'unit' tests, so I'm not positive about this, but it's probably worth looking into if you'd really prefer the files to be isolated in that directory rather than in sub-directories.
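A minimal sketch of what that Setup method might look like, assuming MSTest. One caveat: MSTest deploys all items once at the start of the run, so deleting indiscriminately would also remove the files the current test needs; the `*.xml` filter here is a made-up assumption matching the items in the question.

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DeploymentCleanupTests
{
    // Populated by the MSTest runner before each test.
    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void Setup()
    {
        // Remove previously deployed .xml items so each test starts clean.
        // The pattern is deliberately narrow so we don't delete the test
        // assemblies living in the same deployment directory.
        foreach (var file in Directory.GetFiles(TestContext.DeploymentDirectory, "*.xml"))
        {
            File.Delete(file);
        }
    }
}
```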
So, I have a set of tests that test something. They are sensitive to the current environment: testing locally tests locally; testing on the build machine tests the deployed server.
However, I want to have one method locally that allows me to call the deployed server. Of course I don't want it to run if I run all the local tests, and I don't want it to run on the build machine; I only want it to run when it is the only test being run.
I.e. I want to have something like the following code:
[Fact]
public void TestA()
{
}
[FactAlone]
public void TestB()
{
}
[Fact]
public void TestC()
{
}
And if I go into ReSharper or the VS test runner, I see that they are all tests. If I right-click and say "run all", it runs A and C, but if I right-click on B and say "run test", I want B to run.
I tried to make a custom fact attribute and a custom IXunitTestCaseDiscoverer, but to make it work I need to know the complete set of tests that will be run, and that wasn't available in the discoverer. I know I could probably do it if every attribute was mine, but this is a big codebase, and I can't stop people using "Fact", so that's not an option.
At the moment I just comment out the Fact attribute and then uncomment it when I want to run the test, which is of course a terrible solution.
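For context, a commonly suggested partial workaround (it doesn't detect "only test being run", but it avoids the comment/uncomment cycle) is a FactAttribute subclass that skips itself unless an opt-in environment variable is set. The variable name below is made up:

```csharp
using System;
using Xunit;

// A fact that is skipped unless the (hypothetical) RUN_ALONE_TESTS
// environment variable is set, so "run all" never executes it.
public sealed class FactAloneAttribute : FactAttribute
{
    public FactAloneAttribute()
    {
        if (Environment.GetEnvironmentVariable("RUN_ALONE_TESTS") == null)
        {
            // Setting Skip makes xUnit report the test as skipped
            // instead of running it.
            Skip = "Set RUN_ALONE_TESTS to run this test.";
        }
    }
}
```

To run TestB, you would set the variable in the environment of the test runner and run it individually; the build machine simply never sets it.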
I am building a whole suite of plugins for a new build pipeline. I have certain metadata that, if it is not present in the build file, should cause the build to fail. For example:
inventory_link: '124djsj39r'
This links the build back to our inventory system. If that tag is not present in the build, I don't want the developer to be able to do squat: no tests, no compiles, no builds, no nothing. I want the project to be a worthless group of files.
Now, reading the docs, I sorta understand the build lifecycle. There's an init, config, execute, clean. Basically: how do I configure a custom plugin (written as an independent jar) so that a validation task executes automatically, checking that this tag is present, at the end of the configuration phase of the build lifecycle?
How do multi-project builds affect this? Each individual project of a multi-project build must have this tag as well.
EDIT
I was able to get back to this. I got this to run and it executes the closure; the problem is that project_hash is always null. Now granted, I'm using TestKit, so I may be dealing with something weird there. See anything wrong?
@Override
void apply(Project project) {
    this.project = project
    def metadata = new Metadata()

    // Create and install the extension object
    project.extensions.create('metadata', MetadataExtension, metadata)

    def ignore = project.tasks.create(METADATA_REPORT, MetadataReportTask)
    ignore.group = PLUGIN_GROUP
    ignore.description = 'Gets the detailed information for this project and formats it into a user readable report'

    project.afterEvaluate {
        throw new InvalidUserDataException(project.metadata.metadata.project_hash)
    }
}
Any code you put in the main body of your build.gradle script will run during the configuration phase. If you want it to run last, just put it at the bottom:
if (inventory_link == null) // or whatever check makes sense
    throw new GradleException('Hey, you need to set the inventory_link')
Note that clean is not a "phase" in the Gradle build lifecycle like init, config, and execute; it is achieved via a task.
For example, I might write lots of Selenium tests for my staging website, now instead of writing the exact same code again but with the URL on my live website I'd like to reuse the code.
(I might need more than just the URL, I might need a different login/password etc.)
I thought it might be possible by using a .testsettings file and in my tests I could read what the current URL is that I should test against etc.
There has to be a way, how do you do it?
I would suggest storing your test configuration in a simple text-based file. It could be as simple as a URI on the first line, a username on the second, and a password on the third. Or, if you already have Newtonsoft.Json in your project, create a simple JSON config file.
Then in your Test Assembly Setup, you could read in that file and parse it into a global static test settings object that you can access from all of your tests.
I would check the default TestSettings.json into source control so it is always available, then you can check if a TestSettings.local.json is present. If so, load it, otherwise load the default.
You could also set the defaults in code and override them if TestSettings.json is present.
To load the file before all of your tests run, use the SetUpFixture attribute, which allows you to run code when your test assembly loads, before any of your tests run.
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [SetUpFixture]
    public class MySetUpClass
    {
        public static TestSettings Settings { get; set; }

        [OneTimeSetUp]
        public void RunBeforeAnyTests()
        {
            // Load your settings file here into the static
            // Settings property
        }
    }
}
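A sketch of the load-with-local-override idea described above, assuming Newtonsoft.Json and a TestSettings class with whatever properties you need (the property names here are placeholders):

```csharp
using System.IO;
using Newtonsoft.Json;

public class TestSettings
{
    public string Url { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

public static class TestSettingsLoader
{
    public static TestSettings Load()
    {
        // Prefer a developer-specific override file if present;
        // otherwise fall back to the checked-in default.
        var path = File.Exists("TestSettings.local.json")
            ? "TestSettings.local.json"
            : "TestSettings.json";
        return JsonConvert.DeserializeObject<TestSettings>(File.ReadAllText(path));
    }
}
```

You would call TestSettingsLoader.Load() from the OneTimeSetUp method and store the result in the static Settings property.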
I make heavy use of unit tests for my developer needs (POCs, unit tests, etc.). For one particular test method there was a line that went...
var file = @"D:\data\file.eml";
So I am referencing some file on my file system.
Now, when other people on the team try to run my "personal" tests (POCs or whatever), they don't have that file at that path, hence the tests fail. The way we'd normally make this work is to provide the test data and let each user modify the test code so that it runs on their computer.
Is there any Visual Studio way to manage this particular problem?
What's the benefit in this? Well, people can review the test data (an email in my case) as well as the method I wrote for testing, and can raise defects in TFS (the source control system) relating to it if need be.
One way I often handle data files for unit test projects is to set the data files as Resources. (Note that this link is for VS2010, but I have used this approach through VS2015 RC.)
In the project with the data file: Project -> Properties -> Resources, and choose to add a resource file if the project doesn't already have one. Select Files in the resource pane and click Add Resource, or just drag and drop your data files onto the resource manager. By default resources are marked internal, so to access the resources from another project you have several options:
In the assembly with the data files, add the following to your AssemblyInfo.cs file; this will allow only the specified assemblies to access the internal resources:
[assembly: InternalsVisibleTo("NameSpace.Of.Other.Assembly.To.Access.Resources")]
Create a simple provider class to abstract away the entire Resource mechanism, such as:
public static class DataProvider
{
    public static string GetDataFile(int dataScenarioId)
    {
        return Properties.Resources.ResourceManager.GetString(
            string.Format("resource_file_name_{0}", dataScenarioId));
    }
}
Change the resource access modifier to public (not an approach I have used).
You can then access the data file (now a resource) from a unit test such as:
[TestCase(1)]
public void X_Does_Y(int id)
{
    // Arrange
    var dataAsAString = Assembly_With_DataFile.DataProvider.GetDataFile(id);

    // Act
    var result = classUnderTest.X(dataAsAString);

    // Assert
    Assert.NotNull(result);
}
Note that when using data files as resources, the ResourceManager handles the file I/O and returns the file contents as strings.
Update: The test method in the example above is from an NUnit project and is not meant to imply process, but a mechanism by which a data file can be accessed from another project.
What you'd normally do is add the file to your project and check it into TFS. Then make sure the item's settings are:
Build action: Content
Copy to output: If newer
Then put an attribute on your Test method or Test class:
[DeploymentItem("file.eml")]
You can optionally specify an output directory:
[DeploymentItem("file.eml", "Directory to place the item")]
If you put the files in subdirectories of your test project, then adjust the attribute accordingly:
[DeploymentItem(@"testdata\file.eml")]
The file will be copied to the working directory of your test project and that makes it easy to access from your test code. Either load the file directly, or pass the path to any method that needs it.
If your tests expect the files in a specific location, you can use a simple System.IO.File.Copy() or System.IO.File.Move() to put the item in the place you need it to be.
The process is explained here on MSDN.
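That relocation step might look like the sketch below, run from a test initializer. The "expected" folder name is a made-up example of wherever your code under test looks for the file:

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmailParsingTests
{
    // Populated by the MSTest runner.
    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void MoveDeployedFile()
    {
        // Copy the deployed item into the (hypothetical) folder the
        // code under test reads from.
        var target = Path.Combine(TestContext.DeploymentDirectory, "expected");
        Directory.CreateDirectory(target);
        File.Copy(
            Path.Combine(TestContext.DeploymentDirectory, "file.eml"),
            Path.Combine(target, "file.eml"),
            overwrite: true);
    }
}
```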
I suppose the most straightforward way is to simply add whatever you need to the project and set the correct value for Copy to Output Directory. In other words, say your data is in a text file:
Add text file to your test project
Right-click it to access the Properties window
Set the Copy to Output Directory field to Copy always or Copy if newer
Now if you build the test project, the file gets copied to your output directory. This enables you to write unit test code of this fashion:
var dataFile = File.OpenRead("data.txt");
I need some help with a coded web test.
I created a coded web test to see how many accounts are valid to log into my application. I have a lot of accounts (2000+) and I need to know which of them are valid. Basically, I recorded a web test that hits a URL and searches for certain text that appears in the page after the login. Then I created an XML file containing all account names and passwords and set it as the credentials data source. Then I modified testrun.testrunconfig to specify "one test per data source row" to have the test run for every row in the XML file.
After this, I converted the test to a "coded" web test. So far so good.
The problem arises when I try to create a file (to programmatically record the successful logins). I have a StreamWriter declared as a private field and try to initialize it in the test constructor, but this throws an error: "could not run webtest xxx on agent yyy: exception has been thrown by the target of an invocation".
I tried to initialize the stream in the same line where it's declared, but I get the same results.
Does anyone have any idea on how can I accomplish the desired test?
I know that I can accomplish this without a coded web test, but then to collect the successful login information I have to go line by line through the test results and see which ones passed.
If anyone has a better idea, it's very welcomed!
Best regards
Beto
You can certainly achieve what you are asking, since I have also implemented a similar test.
There must be an error in your code that is causing that exception at runtime.
Instead of using a controller/agent rig, try running the test locally first, so that you might get a better error message than the generic "could not run webtest".
Alternatively, if you posted the code perhaps someone could spot the error.
I would follow agentnega's suggestion to run the test locally in order to get a clearer error message. Maybe there is something wrong with the file path.
Besides this, I would keep the test as recorded, instead of converting it to a coded one.
I would set a context variable to the path of the file that will have the successful logins at the end, preferably relative to the test deployment directory.
Then write a request plugin class derived from WebTestRequestPlugin and override the PostRequest() method in a way similar to this one:
public override void PostRequest(object sender, PostRequestEventArgs e)
{
    if (Outcome.Pass == e.Request.Outcome)
    {
        string path = Path.Combine(
            e.WebTest.Context["$TestDeploymentDir"].ToString(),
            e.WebTest.Context["logins.txt"].ToString());

        // Create the file on first use, append afterwards; the using
        // block ensures the writer is flushed and closed after each request.
        using (StreamWriter sw = File.Exists(path)
            ? File.AppendText(path)
            : File.CreateText(path))
        {
            sw.WriteLine(e.WebTest.Context["Username"]);
        }
    }
}