Mock filesystem in integration testing - Ruby

I'm writing a Ruby program that executes some external command-line utilities. How could I mock the filesystem from my RSpec tests so that I could easily set up a file hierarchy and verify it after testing? It would also be best if it were implemented in RAM so that the tests run quickly.
I realize that I may not find a portable solution, as my external utilities are native programs interacting directly with operating system file services. Linux is my primary platform and a solution for that would suffice.

Have you checked out FakeFS or MockFS?
Note: The original link to MockFS doesn't work. It looks like it's no longer being maintained.
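For reference, a minimal sketch of wiring FakeFS into RSpec (the paths are illustrative). One caveat for your case: FakeFS fakes the filesystem in-process by replacing Ruby's File/Dir/FileUtils, so external command-line utilities spawned by your program will not see it.
```ruby
# Minimal FakeFS + RSpec sketch. FakeFS replaces File/Dir/FileUtils
# inside the Ruby process only, so spawned native utilities bypass it.
require 'fakefs/spec_helpers'

RSpec.describe 'file hierarchy handling' do
  include FakeFS::SpecHelpers # the fake filesystem is active per example

  it 'sets up and verifies a hierarchy entirely in memory' do
    FileUtils.mkdir_p('/data/in')
    File.write('/data/in/a.txt', 'hello')

    # ... exercise the (pure-Ruby) code under test here ...

    expect(File.exist?('/data/in/a.txt')).to be true
    expect(File.directory?('/data/in')).to be true
  end
end
```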

Maybe this won't answer your question directly, but in such cases I tend to create a temporary directory during test setup and remove it on teardown. Of course, you also have to ensure the application writes to this temporary directory. I always have a configuration option defining the destination directory that I can override during testing.
When it comes to assertions, I use plain File.exist? or File.directory?, but of course you can create your own wrappers around them. If you need some initial state, you can build a directory that serves as a fixture and is copied into the temporary directory during test setup.
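Here's a sketch of that pattern using RSpec's around hook and the standard library's Dir.mktmpdir; the fixture path and the run_backup entry point are hypothetical stand-ins for your own configurable code:
```ruby
require 'tmpdir'
require 'fileutils'

RSpec.describe 'backup output' do
  around do |example|
    Dir.mktmpdir do |dir|   # created fresh, removed automatically on teardown
      @dest = dir
      example.run
    end
  end

  it 'copies the fixture tree to the destination' do
    FileUtils.cp_r('spec/fixtures/source/.', @dest) # seed the initial state
    # run_backup(destination: @dest)                # hypothetical entry point
    expect(File.directory?(File.join(@dest, 'subdir'))).to be true
  end
end
```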

You can create a big file (the size of your dummy disk) and mount it as a loop-back device. You can then create any filesystem and directory structure on this device.
You can create two of them and even use a simple diff to compare them, to ensure data integrity after the tests.
I hope I understand your requirements correctly, since I'm not sure why a simple ramdisk solution wouldn't be good enough.
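For the record, the loop-back setup can be scripted from Ruby with the standard Linux tools (a rough sketch; requires root, and the image size, filesystem type and paths are illustrative):
```ruby
require 'fileutils'

image = '/tmp/testdisk.img'
mnt   = '/tmp/testdisk'

# Allocate a 64 MB image, put a filesystem on it, mount it via loopback.
system('dd', 'if=/dev/zero', "of=#{image}", 'bs=1M', 'count=64') or abort 'dd failed'
system('mkfs.ext4', '-q', '-F', image)                           or abort 'mkfs failed'
FileUtils.mkdir_p(mnt)
system('sudo', 'mount', '-o', 'loop', image, mnt)                or abort 'mount failed'

# ... create the directory structure under mnt and run the tests ...

system('sudo', 'umount', mnt)
File.delete(image)
```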

Related

Is there a way to append/remove a resource to a binary at execution time?

Is it possible to append/remove a resource file to a binary at execution time?
I have an application written in Go which saves/searches data from a database file, and I would like this database file to be embedded in the binary and updated by the application itself.
This way the application would be self-contained with its database.
Modifying the executable is generally a very bad idea.
Several issues pop right into my head, such as:
Does the current user have sufficient permissions?
Is the file locked during execution?
What about multiple running instances of the application?
Even if you manage to do just that, think of what anti-virus and firewall applications will make of it: when they detect the change, most will flag the executable and/or quarantine it, deny running it, or even delete it. Rightfully so, as this is what many viruses do: modify existing executables.
Also, virus scanner databases identify files (their contents) based on the hash of their content. Modifying the executable will naturally change the file's content hash, thus rendering the file unknown/suspicious to these databases.
As mentioned, just write/cache data in separate file(s), preferably in the user's home folder or in the application folder (next to the executable, optionally in sub-folders). Or make the cache file/folder a configurable option (command-line flags).
Technically, this is possible, but this is a bad idea. Your application could be run by users not having write permissions to your binary.
If you're talking about a portable app, your best option might be to use a file in the same directory the binary is located in; otherwise, use the user's home directory according to the conventions of the OS you're running on. You can use the os/user package to find the home directory.

Neo4j-wrapper.conf: Can I put more information into the wrapper configuration files?

I'm working on porting the Neo4j Windows installation process to PowerShell, and I was thinking that it could read/write the Neo4j Windows service information from the neo4j-wrapper.conf configuration file.
The Windows wrapper conf has very little information related to the Windows service itself (in fact, I think it has no information that is used in the creation, management and removal process!).
My intention is to have the relevant Windows service information in the configuration file, so that when calls such as Install or Stop are made, the service name can be retrieved from there instead of via command-line arguments.
My questions are:
If I put more information into that configuration file, will it affect the linux wrapper?
Is there any reason why I shouldn't put more settings into the configuration file (but only related to a Windows Service)?
Note - my changes would also support this PR:
https://github.com/neo4j/neo4j/pull/4433
Thanks,
Glenn.
I think the answer is, in principle, yes. Putting extra stuff in that file wouldn't hurt anything.
But it's not ideal to have a single file that's used for different purposes on different platforms (I see the presence there of Linux-specific service stuff as a problem rather than something to be copied).
The real solution, I think, is for each package build to provide its own copy of that file (or one derived from a common starting point).

Testing file copy, move, delete operations in Ruby

I am developing a backup library in Ruby. And, as you may expect, there are many files copied, moved and deleted during the backup. In my test I want to make sure that the proper files and folders are copied from source to destination. What are the best practices of testing it? Should I deal with physical files during the tests? Or is it better to mock it?
It is better to avoid using the real filesystem for testing (it results in slow, brittle tests with messy setup/cleanup). Better to stub it out, with the fakefs gem, for example.
Unit tests need to run fast, so that they can be run very often, after each change. So touching the file system is not an option here.
Then integration tests (or whatever they can be called) will ensure the physical files are actually copied. These tests can be slower, as they are run less often.
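For the fast, mocked variant, RSpec message expectations on FileUtils are one option: the copy logic is verified without touching the disk. A sketch, where the Backup class and its interface are hypothetical:
```ruby
require 'fileutils'

RSpec.describe 'Backup' do
  it 'copies each source file into the destination' do
    expect(FileUtils).to receive(:cp).with('/src/a.txt', '/dst/a.txt')
    expect(FileUtils).to receive(:cp).with('/src/b.txt', '/dst/b.txt')

    # Hypothetical class under test; no real files are touched.
    Backup.new(files: ['/src/a.txt', '/src/b.txt'], destination: '/dst').run
  end
end
```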

Storing temporary files

I would like to generate some temporary files in the course of my application. Specifically, I'm using AVAudioRecorder to record a file that I, upon stopping the recording, would like to load and edit/process. My question is:
What is the appropriate, standard place to create temporary files? Is there some generally accepted approach to this for Mac or iPad programming in general? I don't want to simply create a directory and write files into it if there is a proper protocol for this.
The answer to this question is actually a lot more complicated than one might assume. One cannot necessarily just use NSTemporaryDirectory and be done. cocoadev.com has some good pages on this topic, and I would suggest that you study them yourself and determine what will work best for your circumstances.
http://www.cocoadev.com/index.pl?NSTemporaryDirectory
http://www.cocoadev.com/index.pl?GettingTemporaryFolderOnSpecificVolume
The usual place for applications to store temporary data is /var/tmp. You could also use /tmp but this directory is for system-generated temporary files and anything in /tmp is deleted when the machine reboots.
What I found was that according to the iOS Application Programming Guide, I am supposed to query for the appropriate temporary folder for my application via NSTemporaryDirectory(). I tried this and it returned a folder within the /var directory, in my case '/var/folders/pQ/pQ+ZqZCSHWSIHftcbIo57U+++TI/-Tmp-/'.
/tmp or /usr/tmp are the usual places to store temporary files in Unix (which Mac OS X and iOS are).

How to test file manipulation

I hear that accessing the database is wrong in testing.
But what about file manipulation? Things like the cp, mv, rm and touch methods in FileUtils.
If I write a test and actually run the commands (moving files, renaming files, making directories and so on), I can test them. But I need to "undo" every command I ran before running the test again. So I decided to write all the code to "undo", but it seems like a waste of time, because I don't really need to "undo".
I really want to see how others do it. How would you test, for example, code that generates a lot of static files?
In your case accessing the files is totally legit; if you are writing file manipulation code, it should be tested on files. The one thing you have to be careful about is that a failed test means that your code is wrong, and not that somebody deleted a file that is needed for the test, or something like that. I would put the directory and the files you need for the tests in a separate folder that is only used for the tests. Then in the setup of the test, copy the whole folder to a temporary place, do all the testing, and after the test delete the temporary files. That way each test gets a clean copy of the files that are saved for the test.
"Pure" unit testing shall not access "expensive" resources such as filesystem, DB ...
Now you may want to run those "integration" tests (or whatever you call them) at the same time as your unit-tests, and use the same framework it's convenient.
You can have a set of files for unit testing that you copy into temporary location as suggested in Janusz' answer, or generate them in your unit tests, or you can use a mock of the FileUtils instead of the real FileUtils when unit testing.
Accessing a database is not "wrong in testing". How else will you test the integration of your code with the database?
The key to repeatable testing is a consistent environment. So long as you start from the same filesystem or database contents for your tests, you are not doing anything wrong. This is usually handled via a cleanup process at the start of the test suite.
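With RSpec, for instance, that cleanup can run once before the whole suite (a sketch; the scratch path is illustrative):
```ruby
require 'fileutils'

RSpec.configure do |config|
  config.before(:suite) do
    # Start every run from the same known-empty state.
    FileUtils.rm_rf('tmp/test-scratch')
    FileUtils.mkdir_p('tmp/test-scratch')
  end
end
```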
Accessing resources like the database, file system, SMTP server, etc. is a bad idea for unit testing. At some point, obviously, you do have to try it out with real files; that's a different kind of test, an integration test. Integration tests are more painful: you have to take care to make sure your test starts from a well-defined state, and they will run more slowly since you're accessing the real file system. On the other hand, you shouldn't have to run them as frequently as you would unit tests.
For unit tests you should be able to take advantage of duck typing to create objects that respond to the same methods as the file objects you're working with. Plus there's nothing to undo with this approach, and the tests will run a lot faster.
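A sketch of that duck-typing approach (all names here are illustrative): the code under test accepts anything that responds to the same methods as FileUtils, and the test injects a simple recorder and asserts on the recorded calls:
```ruby
require 'fileutils'

class SiteGenerator
  def initialize(fs: FileUtils)   # any duck that quacks like FileUtils
    @fs = fs
  end

  def publish(page)
    @fs.mkdir_p('site')
    @fs.cp(page, 'site/')
  end
end

# Test-side stand-in that just records what would have happened.
class RecordingFS
  attr_reader :calls
  def initialize; @calls = []; end
  def mkdir_p(dir); @calls << [:mkdir_p, dir]; end
  def cp(src, dst); @calls << [:cp, src, dst]; end
end

fs = RecordingFS.new
SiteGenerator.new(fs: fs).publish('index.html')
p fs.calls # => [[:mkdir_p, "site"], [:cp, "index.html", "site/"]]
```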
If your operating system supports RAM-based filesystems, you could go with one of those. This even has the advantage that the occasional `unix command` in your code keeps working.
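On Linux that can be as simple as mounting a tmpfs and pointing the tests at it (a sketch; requires root, and the mountpoint and size are illustrative):
```ruby
mnt = '/mnt/ram-test'
Dir.mkdir(mnt) unless Dir.exist?(mnt)
system('sudo', 'mount', '-t', 'tmpfs', '-o', 'size=64m', 'tmpfs', mnt) or abort 'mount failed'

# ... run the file-manipulation tests against mnt; shelled-out
#     cp/mv/rm keep working because it's a real (RAM-backed) mount ...

system('sudo', 'umount', mnt)
```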
Maybe you could create a directory inside your test package named "test_*". Then, the files that you change would be put in this directory (for example, if you create a directory, you would create it inside the test directory). At the end of the test you could delete this directory (with only one command). That is the only undo operation you would need.
Put all the files you need for the test in this test directory inside the test package.
