How to test file manipulation - ruby

I hear that accessing a database in tests is considered wrong.
But what about file manipulation? Things like the cp, mv, rm, and touch methods in FileUtils.
If I write a test and actually run the commands (moving files, renaming files, making directories, and so on), I can test them. But I need to "undo" every command I ran before running the test again. So I started writing all the code to "undo", but it seems like a waste of time because I don't really need the "undo" itself.
I really want to see how others do this. How would you test, for example, code that generates a lot of static files?

In your case accessing the files is totally legitimate; if you are writing file manipulation code, it should be tested on files. The one thing you have to be careful about is that a failed test means your code is wrong, not that somebody deleted a file the test needs or something like that. I would put the directory and files you need for the tests in a separate folder that is used only for the tests. Then, in the setup of each test, copy the whole folder to a temporary place, do all the testing there, and delete the temporary files after the test. That way each test gets a clean copy of the files saved for it.
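A minimal sketch of that pattern with Minitest, assuming a fixtures/ directory next to the test file that contains report.txt and an empty archive/ subdirectory (both names are made up for illustration):

require "minitest/autorun"
require "fileutils"
require "tmpdir"

class FileManipulationTest < Minitest::Test
  FIXTURES = File.expand_path("fixtures", __dir__)  # pristine copies, kept under version control

  def setup
    # Work on a throwaway copy so the fixtures themselves are never modified.
    @work_dir = Dir.mktmpdir("file-manip-test")
    FileUtils.cp_r(File.join(FIXTURES, "."), @work_dir)
  end

  def teardown
    FileUtils.remove_entry(@work_dir)  # the only "undo" you ever need
  end

  def test_moves_report_into_archive
    FileUtils.mv(File.join(@work_dir, "report.txt"),
                 File.join(@work_dir, "archive", "report.txt"))
    assert File.exist?(File.join(@work_dir, "archive", "report.txt"))
  end
end

Because every test gets a fresh copy in setup and the whole tree is removed in teardown, the tests stay repeatable without any hand-written undo code.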

"Pure" unit testing shall not access "expensive" resources such as filesystem, DB ...
Now you may want to run those "integration" tests (or whatever you call them) at the same time as your unit-tests, and use the same framework it's convenient.
You can have a set of files for unit testing that you copy into temporary location as suggested in Janusz' answer, or generate them in your unit tests, or you can use a mock of the FileUtils instead of the real FileUtils when unit testing.
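For the mocking route, here is a sketch with RSpec; the Backup class is hypothetical, and the point is that FileUtils itself is stubbed so no disk I/O happens:

require "rspec/autorun"
require "fileutils"

# Hypothetical class under test: copies a source file to a backup location.
class Backup
  def initialize(src, dest)
    @src = src
    @dest = dest
  end

  def run
    FileUtils.cp(@src, @dest)
  end
end

RSpec.describe Backup do
  it "copies the source file to the destination" do
    # Intercepts the call; the test fails if cp is not called with these arguments.
    expect(FileUtils).to receive(:cp).with("a.txt", "b.txt")
    Backup.new("a.txt", "b.txt").run
  end
end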

Accessing a database is not "wrong in testing". How else will you test the integration of your code with the database?
The key to repeatable testing is a consistent environment. As long as you start from the same filesystem or database contents for each test run, your tests will be reliable. This is usually handled via a cleanup process at the start of the test suite.

Accessing resources like the database, filesystem, SMTP server, etc. is a bad idea in unit testing. At some point, obviously, you do have to try it out with real files; that's a different kind of test, an integration test. Integration tests are more painful: you have to take care that your test starts from a well-defined state, and they run slower since you're accessing the real filesystem. On the other hand, you shouldn't have to run them as frequently as your unit tests.
For unit tests, you should be able to take advantage of duck typing to create objects that respond to the same methods as the file objects you're working with. Plus, there's nothing to undo with this approach, and the tests will run a lot faster.
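For example, StringIO quacks like a File, so code written against the IO interface can be unit tested entirely in memory (the LineCounter class is made up for illustration):

require "stringio"

# Hypothetical code under test: works on anything that responds to each_line.
class LineCounter
  def count(io)
    io.each_line.count
  end
end

fake_file = StringIO.new("one\ntwo\nthree\n")
puts LineCounter.new.count(fake_file)  # => 3, no filesystem touched, nothing to undo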

If your operating system supports RAM-based filesystems (such as tmpfs on Linux), you could go with one of those. This even has the advantage that an occasional `unix command` in your code keeps working.
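A minimal sketch, assuming a tmpfs mount such as /dev/shm is available (falling back to the ordinary temp directory otherwise):

require "tmpdir"

# Prefer a RAM-backed location when one exists; tests touching it run at memory speed.
ram_root = File.directory?("/dev/shm") ? "/dev/shm" : Dir.tmpdir

Dir.mktmpdir("fs-tests", ram_root) do |dir|
  File.write(File.join(dir, "sample.txt"), "hello")
  puts `ls #{dir}`  # shelled-out commands keep working, since it is a real path
end                 # the directory is removed automatically when the block exits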

Maybe you could create a directory inside your test package named "test_*". Every file you change is then placed in this directory (for example, if you create a directory, you create it inside the test directory). At the end of the test you delete this directory with a single command; that is the only "undo" operation you will ever execute.
Put all the files you need for the test into this test directory inside the test package.
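A sketch of that single-undo idea in Ruby (the test_output name is arbitrary):

require "fileutils"

TEST_DIR = File.expand_path("test_output", __dir__)

FileUtils.mkdir_p(TEST_DIR)  # everything the test creates goes under here
File.write(File.join(TEST_DIR, "generated.txt"), "data")
FileUtils.mkdir_p(File.join(TEST_DIR, "nested", "dirs"))

FileUtils.rm_rf(TEST_DIR)    # the one and only "undo" command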

Related

AppDomain Usage Issue in Xunit test cases - Cake Script

I am running the Xunit test cases as mentioned in my previous question:
How to get passed and fail test case count in xunit using cake(c# make) script
While running the test cases, most of them fail while trying to access a file from my AppDomain.
The test cases succeed in Visual Studio.
From the error log, I can see that the runner tries to read the file from the Xunit console runner's location instead of the application's location.
Note:
I am setting NoAppDomain of Xunit2Settings to false.
When working with unit tests and files, I would recommend first checking whether you can avoid the actual filesystem by using data in memory or assembly resources. That way, parallel execution, filesystem locks, and similar environment-related issues won't be a problem.
So if you have the opportunity to refactor the filesystem out of the tests (unless that is what you're testing), I would go that route first.
One way to work around the issue you're having is to use absolute paths. You can achieve this in your tests by taking the test assembly's location and joining the relative path to it with Path.Combine. There's a good Stack Overflow answer explaining how to get the path of your assembly:
https://stackoverflow.com/a/52956/5883153
A quick fix you could try is using the Xunit2Settings WorkingDirectory to set the same current directory as Visual Studio, but that isn't something I've tested or recommend.

Testing file copy, move, delete operations in Ruby

I am developing a backup library in Ruby, and, as you may expect, many files get copied, moved, and deleted during a backup. In my tests I want to make sure that the proper files and folders are copied from source to destination. What are the best practices for testing this? Should I deal with physical files during the tests, or is it better to mock it all?
It is better to avoid using the real filesystem for testing (it results in slow, brittle tests with messy setup/cleanup). Better to stub it out, with the fakefs gem, for example.
Unit tests need to run fast, so that they can be run very often, after each change; touching the filesystem is not an option there.
Then integration tests (or whatever you want to call them) will ensure the physical files are actually copied. These tests can be slower, as they are run less often.
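A minimal sketch with the fakefs gem (gem install fakefs); everything inside the block runs against an in-memory filesystem, so there is nothing to clean up:

require "fakefs/safe"
require "fileutils"

FakeFS do
  FileUtils.mkdir_p("/source")
  File.open("/source/data.txt", "w") { |f| f.write("important") }

  FileUtils.mkdir_p("/backup")
  FileUtils.cp("/source/data.txt", "/backup/data.txt")

  puts File.read("/backup/data.txt")  # => "important"
end
# Outside the block, the real disk is untouched.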

Unit testing: how to access a text file?

I'm using Visual Studio 2008 with Microsoft test tools. I need to access a text file from within the unit test.
I've already configured the file with Build Action set to 'Content' and Copy to Output Directory set to 'Copy Always', but the file is not being copied to the output directory, which according to System.Environment.CurrentDirectory is
'{project_path}\TestResults\Pablo_COMPU 2009-11-26 15_01_23\Out'
This folder contains all the DLL dependencies of the project, but my text file is not there.
Which is the correct way to access a text file from a unit test?
You have to add the DeploymentItem attribute to your test method or test class. With this attribute you specify the files that are copied into the output directory for the test run.
For example:
[TestMethod]
[DeploymentItem(@"myfile.txt", "optionalOutFolder")]
public void MyTest()
{
    ...
}
See also: http://msdn.microsoft.com/en-us/library/ms182475.aspx.
Alternatively, if you set all your text files to "Copy to Output Directory", you can reference their paths in your tests like this:
// Folder the test assembly was loaded from, regardless of the runner's current directory.
var directory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
var path = System.IO.Path.Combine(directory, "myFile.txt");
When I need a chunk of text as part of a unit test and it's more than a line or two, I use an embedded resource. It doesn't clutter your test code, because it's a separate text file in the source code. It gets compiled right into the assembly, so you don't have to worry about copying around a separate file after compilation. Your object under test can accept a TextReader, and you pass in the StreamReader that you get from loading the embedded resource.
I can't answer your question as I don't use MSTest. However, I'd consider whether accessing the file system in a unit test is the right thing to do. If you introduce a dependency on the file system, the test will become slower and less trustworthy (you now depend on something that may not be there/accessible/etc). It is for these reasons that many folk will say "it's not a unit test if it hits the file system".
Although this is not a hard rule, it's always worth considering. Any time I have to touch the file system in tests, I try to avoid it because I find tests that rely on files are harder to maintain and are generally less consistent.
I'd consider abstracting the file operations to some degree. You can do numerous things here, from changing the internal loading strategy (via Dependency Injection) to -- even better -- separating the loading/use of the file so that the consumer of the file's contents doesn't even have to care about the loading strategy.
How are you running your tests?
We use (TestDriven.net -> Run Tests).
From my experience, some test runners (like JUnit in NetBeans) won't automatically copy any additional text files that you might need for testing. So in your case you might have to do a full build and then try running the tests again.
And the correct way to access text files from tests is the way you're already trying: setting the files to "Copy Always" and "Content", and accessing them from the compiled output directory.
Also, I'm not sure where people are getting the idea that having tests rely on files is a bad thing. It's not.
If anything, having separate test files will only clean up your tests and make them more readable. Consider some XML parsing method that returns a large string:
String expectedOutput = fileOperator.ReadStringFromFile("expectedFileContents.xml");
String result = systemUnderTest.Parse(somethingtoparse);
Assert.AreEqual(expectedOutput, result);
Imagine if the output were 30 lines long: would you clutter your test code with one giant string, or just read it from a file?

Organization of Unit Tests in Visual Studio

I'm currently creating a paired unit test assembly for every assembly in my project, both are in the same folder.
MyProject/MyProject.csproj
MyProject.Test/MyProject.Test.csproj
Looking at open source projects, I've seen some smaller projects put all tests in one assembly, while others split them out like mine. I'm dealing with a large solution, so it would be pretty crazy to put all tests in one project.
I currently have msbuild logic to run tests on all *.Test.csproj files. If I had all my tests in a different folder I wouldn't need to do this.
Just wondering if there are any good arguments to do things a certain way.
Thanks
I do it the same way but I change the default namespace for each test project to match the namespace of the production project. So the tests for class X.Y.Foo are in X.Y.FooTest rather than X.Y.Test.FooTest - it means you need fewer using directives, and generally makes things simpler.
My main reason for wanting to keep the two in separate projects is to avoid either including the tests in the production library or having to ship an untested library. With the separate project structure, you can run unit tests against anything you build. It also makes it easier to look through just the production classes without having twice as many files to look at (when getting the "feel" of a library).
Finally, don't forget that if you need to access internal members when testing, there's always [InternalsVisibleTo].
I suggest making as few unit test projects as possible. The reason is that each one you create adds at least ten seconds of compile time. In a big project, it starts adding up.
Here's the directory structure I use:
projectName/branches/trunk/projects/code/codeproject1
projectName/branches/trunk/projects/code/codeproject2
projectName/branches/trunk/projects/code/codeproject3
projectName/branches/trunk/projects/Tests/testproject1
projectName/branches/trunk/Dependencies
projectName/prototypes
projectName/...
and within testproject1, the following directory structure:
codeproject1/
codeproject2/
codeproject2/web
codeproject2/web/mvc
codeproject3/
codeproject3/support
I do the same thing, except each project is in its own folder under the same root folder.
Something along the following:
Solution Folder
ProjectA folder
ProjectA.Test folder
ProjectB folder
ProjectB.Test folder
I always have a separate test project for each project. Part of it is simply that I like the organization of it, but I've also often run into situations where I've decided to break a library out into its own solution so that it can be reused by other solutions. In those cases, having the library project paired with its own test project (rather than keeping all the tests in a single project) makes it much easier to break that library out.

Mock filesystem in integration testing

I'm writing a Ruby program that executes some external command-line utilities. How could I mock the filesystem from my RSpec tests so that I can easily set up a file hierarchy and verify it after testing? It would also be best if it were implemented in RAM so that the tests run quickly.
I realize that I may not find a portable solution, as my external utilities are native programs interacting directly with operating system file services. Linux is my primary platform, and a solution for it would suffice.
Have you checked out FakeFS or MockFS?
Note: The original link to MockFS doesn't work. It looks like it's no longer being maintained.
Maybe this won't answer your question directly, but in such cases I tend to create a temporary directory during test setup and remove it on teardown. Of course, you also have to ensure the application writes to this temporary directory; I always have a configuration option defining the destination directory that I can override during testing.
When it comes to assertions, I use plain File.exist? or File.directory?, but of course you can create your own wrappers around them. If you need some initial state, you can build a directory that serves as a fixture and is copied to the temporary directory during test setup.
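A sketch of that pattern in RSpec; the Exporter class and its destination option are hypothetical stand-ins for your configurable output directory:

require "rspec/autorun"
require "tmpdir"
require "fileutils"

# Hypothetical code under test: writes its output under a configurable destination.
class Exporter
  def initialize(destination:)
    @destination = destination
  end

  def export(name, contents)
    FileUtils.mkdir_p(@destination)
    File.write(File.join(@destination, name), contents)
  end
end

RSpec.describe Exporter do
  before { @dest = Dir.mktmpdir("exporter-spec") }   # overridden destination directory
  after  { FileUtils.remove_entry(@dest) }           # teardown removes everything

  it "writes the exported file into the configured destination" do
    Exporter.new(destination: @dest).export("out.txt", "hello")
    expect(File.exist?(File.join(@dest, "out.txt"))).to be true
  end
end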
You can create a big file (the size of your dummy disk) and mount it as a loop-back device. You can create any filesystem and directory structure on this device.
You could even create two of them and run a simple diff to ensure data integrity after the tests.
I hope I understood your requirements correctly, since I'm not sure why a simple ramdisk solution isn't good enough.
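A rough sketch of the loop-back approach, shelling out from Ruby; it assumes Linux, mkfs.ext4 on the PATH, and sudo rights for the mount:

require "fileutils"

image = "/tmp/test-disk.img"
mount_point = "/tmp/test-disk-mnt"

system("dd if=/dev/zero of=#{image} bs=1M count=64") or abort "dd failed"
system("mkfs.ext4 -q -F #{image}") or abort "mkfs failed"  # -F: allow a plain file, not a block device
FileUtils.mkdir_p(mount_point)
system("sudo mount -o loop #{image} #{mount_point}") or abort "mount failed"

# ... exercise the code under test against mount_point ...

system("sudo umount #{mount_point}")
FileUtils.rm_f(image)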
This might be relevant as well.
