What's the purpose of Factory Girl in rspec tests when I could use before(:each) blocks? It feels like the only difference between Factory Girl and a before(:each) is that the factory prepares object creation outside of the test. Is this right?
Gems like Factory Girl and Sham allow you to create templates for valid, reusable objects. They were created in response to fixtures, which were fixed records that had to be loaded into the database. They allow more customization when you instantiate the objects, and they aim to ensure that you have a valid object to work with. They can be used anywhere in your tests and in your before and after test hooks.
before(:each), before(:all), after(:each) and after(:all) all aim to give you a place to do setup and teardown that will be shared amongst a test group. For example, if you are going to be creating a new valid user for every single test, then you'll want to do that in your before(:each) hook. If you are going to be clearing some files out of the filesystem, you want to do that in a before hook. If your tests all create a tmp file and you want to remove it after your test, you'll do that in your after(:each) or after(:all) hook.
The way these two concepts differ is that factories are not aimed at creating hooks around your tests; they are aimed at creating valid Ruby objects and records, so that you can keep your object creation flexible and DRY. Before and after hooks are aimed at setup and teardown tasks that are shared in an example group, so that you can keep your setup and teardown code DRY.
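In fact, the two are often combined: the factory defines what a valid record looks like, and the before hook decides when to create one. A minimal sketch, assuming a hypothetical User model and :user factory:

# Factory: a reusable template for a valid object
FactoryGirl.define do
  factory :user do
    name 'Alice'
    email 'alice@example.com'
  end
end

# Hook: shared setup for every example in the group
describe 'user profile' do
  before(:each) do
    @user = FactoryGirl.create(:user)
  end
end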
FactoryGirl replaces fixtures in tests. This way, you won't have to keep the fixtures up-to-date as you change the data model. Fixtures can also tend to get unwieldy as you add more edge cases.
FactoryGirl generates data on the fly and adding and removing fields is much easier. Also, you can use it in setup just how you'd use fixtures.
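For instance, covering an edge case is a one-line override of the factory's defaults rather than a change to a shared fixtures file (again assuming a hypothetical :user factory):

FactoryGirl.create(:user)                        # valid defaults
FactoryGirl.create(:user, email: 'not-an-email') # override one field for an edge case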
Does that make it clearer?
Also, fixture definitions are global across your entire application. Factories can be local, so data specific to an isolated test case lives in the setup for that particular context and not in a single global fixtures file.
This book covers the subject very well if you want some more reading matter.
At work, we're using Ruby 2.3.4, Rails 4.2.7, and FactoryGirl 4.0.0. We've run into the problem of having way too many traits in our base factories. We've started a project for replacing them and have a pattern in mind, but we need to be able to incrementally add our new factories to our production code while the original factories remain active.
Are there any reasonable ways to get this done?
This question, Running two spec folders in Rails with RSpec, seems to have the same problem, but separate branches is not a feasible solution for us.
The first example here might be what you're looking for.
On one of the latest versions of factory_bot, you can set up two concurrent factories like this:
factory :original_class do
  text { 'original text' }
end

factory :new_class_factory, class: OriginalClass do
  text { 'this will be used instead when creating new_class_factory' }
end
When you invoke the second factory, the object it builds will still be an instance of the same class as the first factory's (OriginalClass), but it will use the attribute values defined in its own factory rather than the original's.
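A quick usage sketch of the factories above (create is the standard FactoryBot call):

record = FactoryBot.create(:new_class_factory)
record.class # => OriginalClass
record.text  # => 'this will be used instead when creating new_class_factory'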
I want to use FactoryGirl to build in-memory stubs of models, then have all ActiveRecord queries run against only those. For example:
# Assume we start with an empty database, a Foo model,
# and a Foo factory definition.
# foo_spec.rb
stubbed_foo = FactoryGirl.build_stubbed(:foo)

# Elsewhere, deep in the guts of the application
Foo.first # Ideally this would return the stubbed_foo we created
          # in the test. Currently it returns nil.
The solution might be to use an in-memory database. But is the above scenario possible?
If your reason for avoiding the database is to speed up your tests, then there are better ways.
Use FactoryGirl.build as much as possible instead of create. This works as long as the record won't be fetched from the database by your code, and it works well for unit tests of well-structured code. (For example, it helps to use Service Objects and unit test them independently.)
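A minimal sketch of the difference, assuming a hypothetical :foo factory:

foo = FactoryGirl.build(:foo)  # instantiated in memory only; nothing is saved
foo.persisted?                 # => false
foo = FactoryGirl.create(:foo) # saved; costs a database round trip
foo.persisted?                 # => true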
For tests that actually need to read from the database (as in your Foo.first example call), you can use FactoryGirl.create and use transactional fixtures. This creates a database transaction at the beginning of each test example, and then rolls back the transaction at the end of the example. This can cause problems when you use callbacks in your ActiveRecord models such as after_commit.
If you use after_commit or other callbacks in your models that require the database transaction to close (or you use explicit transactions in your code), I recommend setting up DatabaseCleaner. Here's an example of how to configure and use it: https://gist.github.com/RobinDaugherty/9f4e5f782d9fdbe191a23de30ad8b539
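A typical RSpec setup for DatabaseCleaner looks roughly like this (a sketch; the linked gist shows a fuller configuration):

# spec/rails_helper.rb
RSpec.configure do |config|
  # Let DatabaseCleaner manage transactions instead of RSpec
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end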
I have a requirement where we need to create factories for servers, which we call Vservers. A Vserver basically has attributes like a fully qualified domain name (FQDN), e.g. dev-vserver.storage.com, an ID, and so on.
We make a connection to the Vserver and retrieve information, in JSON format, about the different operations we perform, such as provisioning storage, resizing volumes, and deleting volumes.
We would like to come up with a better unit testing environment with RSpec and FactoryGirl, where we can mock the Vserver functionality by creating doubles or mocks instead of interacting directly with an actual Vserver during testing.
I would like to hear suggestions and opinions from you all.
Help is much appreciated.
Thanks,
Mahesh
So from what I gather, you want to use mocks or test doubles to mimic the functionality of your Vservers. This is indeed possible; the question is whether it should be done.
You can use Factory Girl to create factories for each of your Vserver classes, but those factories will only return whatever data you give them. They won't have any real data, since they only contain whatever you assigned in your tests or factory definitions.
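If you do go the doubles route, RSpec's verifying doubles are the usual tool. A sketch, where the Vserver class and its methods are hypothetical stand-ins for your actual client code:

# Vserver, .connect, #fqdn and #volumes are all hypothetical names here
vserver = instance_double('Vserver',
                          id: 42,
                          fqdn: 'dev-vserver.storage.com',
                          volumes: [{ 'name' => 'vol1', 'size' => '10g' }])
allow(Vserver).to receive(:connect).and_return(vserver)

expect(vserver.volumes.first['name']).to eq('vol1')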
What I would recommend instead is that, within your factories, you actually establish a connection to each of your Vservers and pull back the attributes you need. For example, you could create a factory like so:
factory :vserver_1 do
  id { connect_to_vserver_1_and_get_id }
  domain_name { connect_to_vserver_1_get_domain_name }
end
This would instantiate a new instance of vserver_1 with all the attributes of your current Vserver. If you were to change the domain name of Vserver 1 in the future, your vserver_1 factory would reflect that change.
Also, the factory girl gem has great docs, you should definitely read them in their entirety before building a new test suite. Good luck!
I'm implementing several classes which do not hold data themselves, just logic. These classes implement an access control policy for data, and the policy depends on several parameters taken from the data of other models.
I initially tried to find an answer to "Where should I store such classes?" here, and the answer was the app/models directory. That's OK, but I'd like to clearly separate these classes from the ActiveRecord-descended classes in the hierarchy, both as files and as classes.
So, I created classes inside a Logic module, like Logic::EvaluationLogic or Logic::PhaseLogic. I also wanted constants that are passed between these logics, and I prefer to place those constants in the Logic module too. Thus, I implemented it like this:
# in logic/phase_logic.rb
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE = 1000

  class PhaseLogic
    def self.some_phase_control_code(phase)
    end
  end
end

# in logic/evaluation_logic.rb
module Logic
  class EvaluationLogic
    def self.some_other_code
      Logic::PhaseLogic.some_phase_control_code(Logic::PHASE_INITIAL)
    end
  end
end
Now, it works just fine with RSpec (it passes the tests I wrote without issues), but not with the development server, since the server can't find the Logic::PHASE_INITIAL constant.
I suspect it's related to a mismatch between Rails' autoloading scheme and what I wanted to do. I tried to tweak Rails, but had no luck, and ended up eliminating the module Logic wrapper.
Now the question I want to ask: how can I organize these classes with Rails?
I'm using Rails 3.2.1 at the moment.
Posted a follow-up question "How I can organize namespace of classes in app/modules with rails?"
I am not sure whether I really understand your classes, but couldn't you create a Logic module, or (I would rather do this) PhaseLogic and EvaluationLogic objects, in the /lib directory?
It is not a given that a "model" is always a descendant of ActiveRecord. If an object belongs to the business logic, then it is a model. You can have models which do not touch the database in any way. So, if your classes are "business objects", place them in app/models and use them like any other model.
Another question is whether you should use inheritance or modules - but I would rather think about including a module in PhaseLogic, and not about defining PhaseLogic in a module. Of course, all this depends heavily on the intended role of your objects.
Because in Ruby the class of an object is not important, you do not need to use inheritance. If you want to 'plug' the logic objects into other objects, just make sure that all *Logic classes have the required methods. I know all of this is very vague, but I cannot give you more concrete suggestions without knowing more about the role of these objects.
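To illustrate the duck-typing point, a sketch in which the shared method name (permitted?) is made up:

# No shared superclass: any object that responds to .permitted? plugs in.
class PhaseLogic
  def self.permitted?(user)
    true
  end
end

class EvaluationLogic
  def self.permitted?(user)
    user.respond_to?(:evaluations)
  end
end

user = Struct.new(:evaluations).new([])
[PhaseLogic, EvaluationLogic].all? { |logic| logic.permitted?(user) } # => true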
Ah, and one more thing!
If you find yourself fighting with Rails class autoloading, just use the old require "lib/logic.rb" in all the classes where you are using Logic::PHASE_INITIAL constants.
In this case I suppose your problem was caused by the load order: logic/evaluation_logic.rb was loaded before logic/phase_logic.rb. The problem may disappear if you create a logic.rb somewhere class autoloading can find it, and define these constants in that file.
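Something like this (a sketch; it assumes app/models is on the autoload path, as it is by default):

# in app/models/logic.rb
# Autoloading maps the Logic constant to this file, so the constants
# are defined no matter which logic file gets loaded first.
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE = 1000
end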
Don't name your classes or modules Logic; use specific names. Start by extracting logic into separate classes, and then try to break them into smaller ones. Use namespaces to distinguish them from each other in the lib folder; after these steps you will be able to extract some parts of the logic into separate gems and reduce the codebase and complexity of the application. Also take a look at the presenter pattern.
I consider myself still pretty new to the TDD scene, but I find that no matter which method I use (a mock framework or stubbing my own objects), I have to write a lot of code to create mock data. I like the idea of loading up objects to create an in-memory database, but what I don't like is cluttering up my tests with a ton of code whose sole purpose is creating mock data. This is especially the case when the data needs to account for all the different cases.
I'd love some suggestions for a better way of doing this.
It would seem to me that I should be able to load the data once into a known state from some data store, and then use a snapshot of that state, loaded in the test setup/initialize before each test method is executed. This would satisfy proper testing practices while providing convenience, letting me focus on writing tests instead of writing code to create test data "by hand".
Maybe you could try the NBuilder library. It provides a very fluent interface and is easy to use. You can use it to generate single instances of a class with default values, or to generate lists with default or overridden values. You can have a look at this one.
If you are using .NET, try NDbUnit.
You populate your store once, and then it reverts your DB to a known state at test time, before each test. The Autumn of Agile screencast series shows this in pretty good detail.
Or you can do this manually: build a stored procedure or whatever to truncate your tables and copy the data back in, and call it from your teardown method.
You can have Builder class(es) that help you build the instances you need - in this case, ones you would use in relation to the repository.
Have the builder use appropriate defaults, and in your tests override only what you need. This helps you avoid mixing every single "data" case into one shared setup for all the different tests (which introduces problems, because there are usually cases that aren't compatible across different tests).
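A minimal sketch of the pattern (all names here are made up; the shape is the same in any language):

# A builder with sensible defaults; each test overrides only what it cares about.
class CustomerBuilder
  DEFAULTS = { name: 'Default Name', age: 30, active: true }.freeze

  def initialize
    @attrs = DEFAULTS.dup
  end

  def with(overrides)
    @attrs.merge!(overrides)
    self
  end

  def build
    Struct.new(*@attrs.keys).new(*@attrs.values)
  end
end

# Only the field under test is spelled out; everything else stays valid.
inactive_customer = CustomerBuilder.new.with(active: false).build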
**Update 1:** Take a look at www.markhneedham.com/blog/2009/01/21/c-builder-pattern-still-useful-for-test-data
I know exactly what you mean. I think a good approach to this problem is to have a separate MockFramework project that houses all your mock data, outside the test project. This way you can generate mock data separately, store it in memory if you want to (or not), and then reference the mock framework from the test project. If you use a third-party framework to do this, all the better, but you can still wrap that third-party framework in your own mock framework. That keeps all the "glue" that creates the mock data the way you need it out of your tests, so the tests can be only what they need to be.
Thanks for all the suggestions. I think the solution requires a little bit of everything. I don't want these tests to end up being regression tests, but without some kind of existing data store, everything still boils down to creating the data by "manually" building the objects.
What would really be nice is a framework that let me use my existing DAL to either script the data to code for me, or get the data into memory and access it like an in-memory database.
Untils.org covers this way better than I ever could.
Their whole guide is actually very good.
But basically, if your units require "a lot of data", they may not be unit tests anymore. I'd recommend trying to test the smaller pieces individually.