Unit testing: how to access a text file? - visual-studio

I'm using Visual Studio 2008 with Microsoft test tools. I need to access a text file from within the unit test.
I've already configured the file with build action set to 'Content' and copy to output directory to 'Copy Always', but the file is not being copied to the output directory, which according to System.Environment.CurrentDirectory is
'{project_path}\TestResults\Pablo_COMPU 2009-11-26 15_01_23\Out'
This folder contains all the DLL dependencies of the project, but my text file is not there.
Which is the correct way to access a text file from a unit test?

You have to add the DeploymentItem attribute to your test method (it can also be applied to the test class). With this attribute you specify the files that are copied into the Out directory for the test run.
For example:
[TestMethod]
[DeploymentItem(@"myfile.txt", "optionalOutFolder")]
public void MyTest()
{
    ...
}
See also: http://msdn.microsoft.com/en-us/library/ms182475.aspx.

Alternatively, if you set your text files to "Copy to Output Directory", you can build their path in your tests like this:
var directory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
var path = System.IO.Path.Combine(directory, "myFile.txt");

When I need a chunk of text as part of a unit test and it's more than a line or two, I use an embedded resource. It doesn't clutter your test code, because it's a separate text file in the source code. It gets compiled right into the assembly, so you don't have to worry about copying around a separate file after compilation. Your object under test can accept a TextReader, and you pass in the StreamReader that you get from loading the embedded resource.
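A minimal sketch of that embedded-resource pattern (the resource name and the helper class are illustrative, not from the answer):

```csharp
using System.IO;
using System.Reflection;

public static class EmbeddedText
{
    // Opens an embedded resource from the test assembly as a TextReader.
    // The manifest name is "<default namespace>.<filename>"; adjust it
    // to match your project's namespace.
    public static TextReader Open(string resourceName)
    {
        Stream stream = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream(resourceName)
            ?? throw new FileNotFoundException(resourceName);
        return new StreamReader(stream);
    }
}
```

The object under test then accepts the `TextReader` directly, so it never knows (or cares) that the text came from an assembly resource.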

I can't answer your question as I don't use MSTest. However, I'd consider whether accessing the file system in a unit test is the right thing to do. If you introduce a dependency on the file system, the test will become slower and less trustworthy (you now depend on something that may not be there/accessible/etc). It is for these reasons that many folk will say "it's not a unit test if it hits the file system".
Although this is not a hard rule, it's always worth considering. Wherever possible I avoid touching the file system in tests, because I find tests that rely on files harder to maintain and generally less consistent.
I'd consider abstracting the file operations to some degree. You can do numerous things here, from changing the internal loading strategy (via Dependency Injection) to -- even better -- separating the loading/use of the file so that the consumer of the file's contents doesn't even have to care about the loading strategy.
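One way to sketch that separation (the type and method names here are invented for illustration): the consumer receives the file's contents, not a path, so tests never hit the disk at all.

```csharp
using System.IO;

// The parser depends on a TextReader, not on a file path, so the
// loading strategy is the caller's concern, not the parser's.
public class ReportParser
{
    public int CountLines(TextReader source)
    {
        int count = 0;
        while (source.ReadLine() != null)
            count++;
        return count;
    }
}

// Production code: new ReportParser().CountLines(File.OpenText("report.txt"));
// Test code:       new ReportParser().CountLines(new StringReader("a\nb\nc"));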

How are you running your tests?
We use (TestDriven.net -> Run Tests).
From my experience, some test runners (like JUnit in NetBeans) won't automatically copy any additional text files you might need for testing. So in your case you might have to do a full build, then try running the tests again.
And the correct way for accessing text files from tests is the way you're trying to do it. (Setting the files to "copy always" and "content", and accessing them from the compiled directory).
Also, not sure where people are getting the idea that having tests rely on files is a bad thing. It's not.
If anything, having separate test files will only clean up your tests and make them more readable. Consider some XML parsing method that returns a large string:
String expectedOutput = fileOperator.ReadStringFromFile("expectedFileContents.xml");
String result = systemUnderTest.Parse(somethingToParse);
Assert.AreEqual(expectedOutput, result);
Imagine if the output was 30 lines long: would you clutter your test code with one giant string, or just read it from a file?

Related

MSTest: .testsettings is not always deploying files

We have a solution that contains a series of projects used for our tests. One of those projects contains a set of files that are needed for the tests. Those files are schemas that will get validated against every time an API route is called. Tests, naturally, call one or more API routes.
The solution has a .testsettings file. This file has deployment enabled, and it specifies that these schemas need to be deployed. In addition, every schema file is set to Copy Always. Also, the .testsettings file is in the solution, under Solution Items.
The problem is that the .testsettings file is only occasionally respected. Sometimes the files are copied; sometimes they are not. When they don't copy, we can do the following to fix it:
Go to the Test -> Test Settings menu and Choose Select Test Settings
Select the .testsettings file in the solution
Rebuild the solution
Rerun the tests
This usually works at least once. But inevitably, it stops working and the files aren't deployed again.
Note that when you go to the Test -> Test Settings menu, our current .testsettings file is always already checked. So choosing a new .testsettings file just means choosing the one that the UI already says is chosen.
We thought of going the DeploymentItem route, but this is impractical for two reasons, both surrounding code maintenance.
From what I can tell, DeploymentItem can only be placed on individual tests. With literally hundreds of tests, we'd be sprinkling it everywhere. It'd become a code maintenance nightmare. I thought of placing it on the global TestInitialize method, but that would just re-copy the files every time a test is run, which seems unnecessary. Not to mention that I'd have to put literally dozens of DeploymentItem attributes on the method and we'd need to keep that up-to-date every time a new schema is added.
Related to that, adding new schemas means altering existing tests where necessary. Again, we have hundreds of tests.
A far better solution would be to have the files copied over once, and then have the code look in the communal pool of schemas whenever it needs one.
I also looked at replacing .testsettings with .runsettings, but it doesn't seem to support a DeploymentEnabled node in its XML.
Does anyone have a solution to this, or does anyone know if it's a known bug? Schema validation happens behind the scenes -- individual test authors don't have to explicitly call it -- and it doesn't fail the test if it doesn't occur (we don't always have schemas available yet for every API call, so we don't want to fail the test if that's the case). As a result, it's often difficult to immediately ascertain whether or not validation happened. Because of that, we sometimes get false passes on tests where the schema actually broke, all because the .testsettings file didn't actually deploy our files like it's set to.
So I found out the problem: apparently this issue was fixed in Visual Studio 2015 Update 3. We were using Update 2. Once we got the new update, this problem went away.
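Before the fix was identified, the "copy once into a communal pool" idea from the question could have been sketched with an AssemblyInitialize method that runs once per test run instead of per-test DeploymentItem attributes. The folder names Schemas and SchemaPool below are invented for illustration:

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SchemaDeployment
{
    // Shared location the validation code reads schemas from.
    public static string SchemaDir { get; private set; }

    // Runs exactly once per test run, before any test executes.
    [AssemblyInitialize]
    public static void CopySchemas(TestContext context)
    {
        string assemblyDir = Path.GetDirectoryName(
            typeof(SchemaDeployment).Assembly.Location);
        SchemaDir = Path.Combine(Path.GetTempPath(), "SchemaPool");
        CopyDirectory(Path.Combine(assemblyDir, "Schemas"), SchemaDir);
    }

    // Plain helper: copies every file in a folder, overwriting stale copies.
    public static void CopyDirectory(string source, string dest)
    {
        Directory.CreateDirectory(dest);
        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(dest, Path.GetFileName(file)), true);
    }
}
```

The schema files only need Copy Always set so MSBuild places them next to the test assembly; nothing else has to change as new schemas are added.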

Is it possible to move a unit test and maintain the test lists?

I'm having trouble organising unit tests in Visual Studio. Once the tests are in test lists, changing namespaces, changing TestClass names, etc. blows away all of the test list organisation structure. I can see in the solution's .vsmdi file that each test and test list gets a unique GUID, which changes if the path changes, but I can't see any way of updating the test lists to use the new GUID of the new location.
Perhaps there is a tool I am missing to relocate tests without breaking the structure?
Unfortunately the test list is not clever enough to keep track of changes like renaming classes or methods - if you change something, the test ends up in 'tests not in a list', and the .vsmdi file is regenerated as you have discovered. For this reason, I tend not to use test lists or check .vsmdi files into source control.
Instead, I use the 'group by' pull down in the test list editor window, and group by project or full class name (which makes it important to use namespaces consistently).
In VS 2011 test lists are deprecated, so I don't expect this to be fixed.

How to test file manipulation

I hear that accessing a database is wrong in testing.
But what about file manipulation? Things like the cp, mv, rm and touch methods in FileUtils.
If I write a test and actually run the commands (moving files, renaming files, making directories and so on), I can test them. But I need to "undo" every command I ran before running the test again. I started writing all the "undo" code, but it seems a waste of time because I don't really need "undo" anywhere else.
I really want to see how others do. How would you test, for example, when you generate a lot of static files?
In your case accessing the files is totally legitimate: if you are writing file manipulation code, it should be tested on files. The one thing you have to be careful about is that a failed test means your code is wrong, not that somebody deleted a file the test needs. I would put the directories and files you need in a separate folder that is only used for the tests. Then, in the setup of the test, copy the whole folder to a temporary place, do all the testing there, and delete the temporary files afterwards. That way each test gets a clean copy of the files saved for the test.
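That setup/teardown pattern might look like this in MSTest (the fixture folder name TestData is a placeholder):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FileManipulationTests
{
    private string workDir;

    // Plain helper so the copy logic can be reused (and tested) on its own.
    public static void CopyDirectory(string source, string dest)
    {
        Directory.CreateDirectory(dest);
        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(dest, Path.GetFileName(file)));
    }

    [TestInitialize]
    public void CopyFixtureFiles()
    {
        // Each test starts from a pristine copy of the fixture folder
        // in a unique temp location, so tests never interfere.
        workDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        CopyDirectory("TestData", workDir);
    }

    [TestCleanup]
    public void DeleteTempFiles()
    {
        // The single "undo": throw away the whole temp copy.
        Directory.Delete(workDir, true);
    }
}
```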
"Pure" unit testing shall not access "expensive" resources such as filesystem, DB ...
Now you may want to run those "integration" tests (or whatever you call them) at the same time as your unit tests, and using the same framework is convenient.
You can have a set of files for unit testing that you copy into a temporary location as suggested in Janusz' answer, or generate them in your unit tests, or you can use a mock of the FileUtils instead of the real FileUtils when unit testing.
Accessing a database is not "wrong in testing". How else will you test the integration of your code with the database?
The key to repeatable testing is a consistent environment. As long as you start from the same file system or database contents for your tests, you should be fine. This is usually handled via a cleanup process at the start of the test suite.
Accessing resources like the database, file system, SMTP server, etc. is a bad idea in unit testing. At some point you obviously do have to try it out with real files; that's a different kind of test, an integration test. Integration tests are more painful: you have to take care that your test starts from a well-defined state, and they run slower since you're accessing the real file system. On the other hand, you shouldn't have to run them as frequently as unit tests.
For unit tests you should be able to take advantage of duck typing to create objects that react to the same methods that the file objects you're working with have. Plus there's nothing to undo with this approach, and the tests will run a lot faster.
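In a statically typed language the duck-typing idea becomes a small interface plus an in-memory stand-in; the names below are invented for illustration:

```csharp
using System.Collections.Generic;

// Abstraction over the handful of file operations the code under test needs.
public interface IFileOps
{
    void Move(string from, string to);
    bool Exists(string path);
}

// In-memory stand-in for unit tests: nothing touches the disk,
// so there is nothing to undo and tests run fast.
public class FakeFileOps : IFileOps
{
    private readonly HashSet<string> files = new HashSet<string>();

    public FakeFileOps(params string[] existing)
    {
        foreach (string f in existing)
            files.Add(f);
    }

    public void Move(string from, string to)
    {
        files.Remove(from);
        files.Add(to);
    }

    public bool Exists(string path)
    {
        return files.Contains(path);
    }
}
```

The real implementation would delegate to File.Move and File.Exists; only the integration tests need it.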
If your operating system supports RAM-based filesystems, you could go with one of these. This has even the advantage that an occasional `unix command` in your code keeps working.
Maybe you could create a directory inside your test package named "test_*". The files your tests change would all live in this directory (for example, if a test creates a directory, it creates it inside the test directory). At the end of the test you can delete this directory with a single command; that is the only undo operation you need to execute.
Put all the files that the tests need in that test directory inside the test package.

How to get MSTest to find my test data files?

I have a few tests that need to be fed with external data from Excel files. The files are included in the test project, and in Visual Studio I have edited the test settings file (Local.testsettings) to deploy the data files. This makes it work fine in VS.
We are, however, also running continuous integration with TeamCity, and in TeamCity this doesn't work. My data files are unavailable to the tests. It seems the tests are run from a temporary folder named "C:\TeamCity\buildAgent\temp\buildTmp\ciuser_AS40VS6 2009-12-11 09_40_17\Out", and the data files are not copied there.
I have tried changing the build action for the data files to "Resource" and setting copy to output dir to "Always", but that didn't help.
Does anyone know how to make this work?
I am running Visual Studio 2010 beta 2 and TeamCity 4.5.5, which is why I'm running MSTest in the first place, and not NUnit...
I get round this by adding my data files (in my case usually XML) as embedded resources and I extract them from the test assembly.
[TestInitialize]
public void InitializeTests()
{
    var asm = Assembly.GetExecutingAssembly();
    this.doc = new XmlDocument();
    this.doc.Load(asm.GetManifestResourceStream("TestAssembly.File.xml"));
}
This post answers this question: MSTest copy file to test run folder
The accepted answer is technically correct. However, from my experience, embedding files as resources requires the additional step of remembering to set the "Embedded Resource" property on each file. This becomes a challenge when you have a large number of data files. Also, as the number of data files grows, the size of the unit test assembly keeps growing. In my case, I had over 500 MB of test data files, and packing all of them into the assembly was not a good idea.
What is the alternative?
Let the data files remain as they are. Do not use DeploymentItemAttribute, and do not use embedded resources. Please refer to my proposed solution: How do I make a data file available to unit tests?

Visual Studio unit testing - how to access external files?

I have data files used as input to my unit tests. These files are quite big, and I don't want to copy them each time unit tests are executed. Tests are executed without deployment. So I can just put them into a folder under my solution, and... how do I obtain the path to my solution (or test project source code) while a unit test is executing?
Because you can run a test project in different ways (TD.NET, Visual Studio, R# etc.), the path used to reference the tests can change.
For this reason, I embed test needed files in my test assembly and draw them out from there.
You can use:
Assembly.GetExecutingAssembly().Location
in your tests to get the path of the assembly containing the unit tests.
Simple: make the location of the files configurable (and testable).
Then either set it in the unit testing code or set it through a config file.
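A minimal sketch of that configurable-location idea (the environment variable name TestDataDir and the fallback folder are assumptions, not from the answer):

```csharp
using System;
using System.IO;

public static class TestData
{
    // An environment variable wins (easy to set on a CI agent);
    // otherwise fall back to a folder next to the test assembly.
    public static string Root
    {
        get
        {
            return Environment.GetEnvironmentVariable("TestDataDir")
                ?? Path.Combine(
                    Path.GetDirectoryName(typeof(TestData).Assembly.Location),
                    "TestData");
        }
    }

    public static string PathOf(string fileName)
    {
        return Path.Combine(Root, fileName);
    }
}
```

Tests then call `TestData.PathOf("big-input.xml")` and never hard-code a machine-specific path.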
