In Ruby, using Cucumber, should I mock out calls to a webservice?

All,
I'm using Cucumber for acceptance testing a Ruby command line utility. This utility pulls data in from a webservice.
I understand Cucumber is for acceptance testing and tests the whole stack, but I still need consistent replies from the webservice.
Should I mock the webservice? If yes, how? What's the best approach here?
Cheers,
Gordon

So after a bit of thinking, then a bit of googling, I found FakeWeb. It does exactly what I needed!
Check out Dr Nic's slides - especially slide 17.
And it was easy - in under 2 hours I've managed to set it up, rewrite my tests, get everything passing and check it all back in to GitHub!
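For anyone else trying this, the basic FakeWeb setup looks roughly like the sketch below; the URL and response body are made-up placeholders, not taken from my actual tests:
require 'fakeweb'

# Fail fast if anything tries to hit the real network
FakeWeb.allow_net_connect = false

# Register a canned response for the URI the utility would call
FakeWeb.register_uri(:get, "http://example.com/api/search?q=demo",
                     :body => "<links><link>http://example.com/demo.html</link></links>",
                     :content_type => "application/xml")
Any Net::HTTP request to that URI now gets the canned body, so the scenarios always see the same reply.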
HTH others!

I am not very familiar with Ruby or Cucumber, so I can only give you a very general answer related to your problem, and it actually has more questions than answers.
How reliable are the web services? If they are down a lot it will make your tests fail from time to time and there is nothing more annoying than chasing down the reason for a failing test only to realize it was that time of the month again.
Can your web services take the pounding from tests? If you have several developers running these tests very often and your web services are on a partner company's server, they might not like the fact that you are testing against them.
Do you trust their output? For me the biggest reason not to mock a dependency is if I don't know exactly what sort of data I am going to get from the service. If I am using services that are well documented and easily understandable I usually don't mock them, but if they are not entirely clear or they change often I do recommend testing against them.
How hard is it to mock the dependency? Replacing dependencies is not always easy, especially if you are adding test code afterwards. Luckily, in dynamic languages it is usually a lot easier than in, let's say, Java. I would still consider how much work it takes to build a mock service that responds with the answers you really want.
How much of a speed benefit do I gain from mocking? Integration tests are slow; mocking a web service dependency is going to make your tests run faster. How much faster? I don't know, but it probably does matter.
Those are just a few points, but at least the last three I always consider before choosing to mock or not to mock.

Mocking out the Webservice
I would write a wrapper around the calls to the webservice in the application.
Example in Pseudo Code
CallWebService (action, options,...) {
// Code for connecting to the webservice
}
Then you just mock out that function as you would any other function:
CallWebService (action, options,...) {
return true;
}
This way you can mock out the webservice without caring whether it is a webservice or a database connection or whatever. And you can have it return true or whatever you need.
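In Ruby the wrapper idea might look like this rough sketch (the class name, method name and the RSpec-style stubbing are illustrative assumptions, not part of the original answer):
class WebserviceClient
  # Thin wrapper: the only place that knows it is talking to a webservice
  def call_webservice(action, options = {})
    # the real HTTP call lives here
  end
end

# In a test, stub the wrapper instead of the network:
client = WebserviceClient.new
allow(client).to receive(:call_webservice).and_return(true)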
Test how your code handles responses from the Webservice
To take this idea one step further and make your tests even more powerful, you could use some kind of test or environment parameters to control what happens in the mocked-out webservice method. Then you can test how your code handles different responses from the web service.
Again in pseudo-code:
CallWebService (action, options,...) {
if TEST_WEBSERVICE_PARAMETER == CORRUPT_XML
return "<xml><</xmy>";
else if TEST_WEBSERVICE_PARAMETER == TIME_OUT
return wait(5000);
else if TEST_WEBSERVICE_PARAMETER == EMPTY_XML
return "";
else if TEST_WEBSERVICE_PARAMETER == REALLY_LONG_XML_RESPONSE
return generate_xml_response(1000000);
}
And tests to match:
should_raise_error_on_empty_xml_response_from_webservice() {
TEST_WEBSERVICE_PARAMETER = EMPTY_XML;
CallWebService(action, option, ...);
assert_error_was_raised(EMPTY_RESPONSE_FROM_WEBSERVICE);
assert_written_in_log(EMPTY_RESPONSE_LOG_MESSAGE);
}
...
And so on, you get the point.
Please note that even though all my examples are negative test cases, this can of course be used for positive test cases as well.
Do note that this is a copy of an answer I made to a similar question:
Mockup webservice for iPhone
Good luck

Related

How to test @Transactional methods

I have a Spring application and it needs transactions.
How would I write a test to see if the @Transactional annotation is working correctly?
I have a service class with an autowired repo, and methods like
@Transactional(propagation = Propagation.REQUIRED)
public boolean saveObjectToDb(Object a) {
    repo.save(a);
    return true;
}
Here's what you can do. It isn't necessarily the best idea but here goes:
@Test
public void testTransactionality() {
    Method[] methods = YourClassWhateverItsCalled.class.getMethods();
    Optional<Method> findFirst = Arrays.stream(methods)
            .filter(method -> method.getName().equals("saveObjectToDb"))
            .findFirst();
    Transactional[] annotationsByType =
            findFirst.get().getAnnotationsByType(Transactional.class);
    assertThat(annotationsByType).isNotEmpty();
}
So the specific assertion I'm using here could certainly be more on-point. However...
The reason that this isn't the best idea is because you're just testing that the annotation is there. You don't test that things are actually saved to the database. Testing things manually in a development environment has its merits and you should definitely keep in mind that you don't need to test 100% of your code. 80% is a good rule of thumb.
If you wanted to test that data is actually saved to the database that's harder. Approaches include:
Actually having a test database that you run these kinds of tests against. Debugging these tests can be awful but they are at least fast sometimes because the database is already up and running before the test comes along.
Spinning up an in-memory database for each test of this kind and performing transactional inserts, updates and deletes against that. This can also be a pain because it requires a lot in terms of the test harness, and the tests are slow. However, if they're written well (you don't want your data from one test in a JUnit test class to end up in another without developers realising) then it can be super maintainable. Relatively speaking.
I would love to learn of more approaches here but generally speaking this is a difficult area to get right.
Update: I should note that if you test database access be sure to keep your tests unit tests. Nothing worse than trying to figure out what's happening with a database access test which spans 20 different classes. PURE. HELL.

Rspec: Avoiding out of sync issues with message expectations

Message expectations allow you to check that the object under test is sending the right message, but not that the target object can actually respond to that call. At the other end of the spectrum, integration testing checks that everything actually works, that is, that the calls are made, understood and executed properly.
Is there a middle ground, like checking that the object under test sends the right messages and that the receiving object can respond to these messages? This would ensure that the tests break when the receiving object changes, without running a full integration test. In essence:
target.should_receive(:my_method) && target.should respond_to(:my_method)
using a custom matcher like
target.should_get_message(:my_method)
This could be useful for glue classes that just coordinate different actions, I think.
What are your opinions on that ? Is it a viable approach ?
Check out rspec-fire; it solves this exact problem. I've been using it in lots of projects.
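The idea behind rspec-fire, verifying that the stubbed method really exists on the real class, later became RSpec 3's built-in verifying doubles. A minimal sketch, assuming a Mailer class with a deliver method and a Notifier collaborator (both made up):
RSpec.describe "notifying a user" do
  it "sends the message through the mailer" do
    mailer = instance_double(Mailer)            # verified against the real Mailer class
    expect(mailer).to receive(:deliver).with("hello")
    Notifier.new(mailer).notify("hello")
  end
end
If Mailer stops defining deliver, the example fails instead of silently passing, which is exactly the out-of-sync problem described above.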

How to share state between scenarios using cucumber

I have a feature "Importing articles from external website".
In my first scenario I test importing a list of links from the external website.
Feature: Importing articles from external website
Scenario: Searching articles on example.com and return the links
Given there is an Importer
And its URL is "http://example.com"
When we search for "demo"
Then the Importer should return 25 links
And one of the links should be "http://example.com/demo.html"
In my steps I have the 25 links in a @result array.
In my second scenario I want to take one of the links and test the fact that I parse the article correctly.
Now obviously I do not want to go to the external website every time, especially now that the first scenario passes.
How do I proceed here so I can keep testing without making the HTTP requests for the first scenario? Or should I run it once and persist the #result array across the rest of the scenarios so I can keep working with the actual result set?
This is intentionally very difficult to do! Sharing state between tests is generally a Very Bad Thing, not least because it forces your tests to run in sequence (your first scenario MUST run before the subsequent ones, not something Cucumber supports explicitly).
My suggestion would be to re-think your testing strategy. Hitting external services in tests is a great way to make them run slowly and be unreliable (what happens when the external service goes down?). In this case I'd suggest using something like webmock or vcr to create a fake version of the external site, that returns the same response as you'd expect from the real site, but you can hit as many times as you like without the worry of performance or unavailability.
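With WebMock, for example, the fake response can be registered in a step definition or hook, roughly like this (the URL and fixture file are illustrative, matching the example.com feature above):
require 'webmock/cucumber'

stub_request(:get, "http://example.com/search?q=demo")
  .to_return(status: 200,
             body: File.read("features/fixtures/search_demo.html"))
Every scenario now sees the same 25 links, no matter what the real site is doing.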
I found that it is technically possible to use a @@global_variable in a step definition to share global state. However, as others point out, it may not be a good idea. I used this to avoid repeating login steps in similar scenarios. Again, it may not be good practice; use the trick only when really necessary.
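For completeness, here is what that trick can look like; note that newer Ruby versions disallow class variables at the top level, so a plain global variable does the same job (the step text and login_as helper are made up):
$shared_user = nil

Given(/^I am logged in as the shared test user$/) do
  $shared_user ||= "user_#{rand(10_000)}"   # created once, reused by later scenarios
  login_as($shared_user)
end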
You shouldn't share state between scenarios. A scenario describes a piece of the intended behavior for the whole system, and it should be possible to run just one scenario. E.g. if you have run the entire test suite, and you find that a single scenario fails, you should be able to run just that one scenario in order to investigate what went wrong.
Your problem arises because you try to contact external systems. That is not advisable. Not only does it make your test suite run more slowly, but it also makes the tests dependent on the external system, over which you have no control. If the external system is not running, your tests are not running. If the external system does not contain the data you expect, your tests will fail, even though there are no bugs in your own system. And you end up letting your tests be controlled by what you expect to be in the external systems, instead of controlling what is in the external system based on what you need to test.
Instead you should mock out the external system, and let your scenarios control what the mocked system will deliver:
Scenario: Query external system
# These two lines setup expected data in a mocked version of the external system
Given the system x contains an article named "y"
And the article contains the text "Lorem ipsum"
When I query for article "y"
Then I should see the text "Lorem ipsum"
This scenario is independent of any actual data in external systems, as it explicitly specifies what needs to be there. And more importantly, it clearly describes how your own system should behave.
The scenario in that form can also be communicated to stakeholders, and they can validate the scenarios without any prior knowledge of the test data present in those external systems.
It may take some time getting a proper framework running, but in the end, it will be worth it.
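Behind Given steps like the ones above, the "mocked system" can be as simple as registering a canned response, for example with WebMock (the step texts match the scenario; the URL and XML are made up):
# Assumes require 'webmock/cucumber' in features/support/env.rb
Given(/^the system x contains an article named "([^"]*)"$/) do |title|
  @article_title = title
end

Given(/^the article contains the text "([^"]*)"$/) do |text|
  stub_request(:get, "http://external.example.com/articles/#{@article_title}")
    .to_return(body: "<article><title>#{@article_title}</title><body>#{text}</body></article>")
end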
Making scenarios dependent on each other or sharing data between scenarios is not good practice.
Some solutions:
1) Cucumber provides the Background keyword to run preconditions before each scenario.
2) Cucumber provides @Before and @After hooks, which can be customized for each scenario (see the sketch below).
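In Cucumber-Ruby the hooks are plain blocks in features/support; a minimal sketch, reusing the Importer class assumed by the feature above:
# features/support/hooks.rb
Before do
  @importer = Importer.new("http://example.com")   # fresh importer for every scenario
end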
I use a file. I have a case to create a new user, then I want to logout and log back in with that same user in other features.
I generate the user with:
@randomName = [*('a'..'z')].sample(8).join
Then I save the user to a file:
File.open("randomName.txt", 'w') { |f| f.write("#{@randomName}") }
Later, when I need that data in other feature, I use:
@randomName = File.read("randomName.txt")
I haven't seen anything that makes me want to use any of these little DI frameworks. I simply created a static initializer in a class where I store my session data, and had all step definition classes extend that class. It works and I don't have to add any more libraries to my project.
public class MyAbstractClass {
    public static final Object param1;
    public static final Object param2;
    public static final Object param3;

    static {
        // initialize params here
    }
}

public class MyStepDefinition extends MyAbstractClass {}
If you require data that can change over time, simply declare them non-static.

How do I do TDD correctly? When do I write tests for layers deeper than my business logic layer (i.e. DAL)?

I'm still uncertain about how best to use mocks when doing development from the outside in (i.e. writing a test first that mimics the customer's requirement).
Assume my client's requirement is "Customer can cancel her order".
I can write a test for this - Test_Customer_Can_Cancel_Her_Order.
Using TDD I then write the business class and mock the data access layer (or any layer that is 'deeper' than my business logic layer). Fantastic, my test passes and I am only testing the business layer and not any layers underneath.
Now what I am wondering is... when do I delve deeper and start writing and testing the data access layer? Directly after the top-level test? Only when I write the integration test? Maybe it doesn't matter? As long as it gets done at some point?
Just wondering...
This is a tricky question. Some people think that it's ok to test "private" code, some people think it's not. It is certainly easier, as you don't have to do complex setups to reproduce an application state that lets you test a small function that you want to add (you can test that function directly). I used to be among those people, but now I'd say that it's better to just test public APIs (create tests and setups that do and see only what the user can do and see). The reason for this is that if you only test public API, you empower yourself with unlimited re-factoring potential. If you test private code, you will have to change your tests if you choose to re-factor (making you less prone to "refactor mercilessly", thus leading to bad code).
However, I have to say, testing only public APIs does lead to more complex test frameworks.
So, to answer to your question more specifically, your next step is to try to think of a situation where there would be a problem with your current order cancellation code and write a new high-level setup to reproduce that problem. You mentioned data access tests. Create a high level setup where the database is corrupt, and test that your application behaves gracefully.
Oh, and for GUI applications it's a little trickier, as you don't really want to get to the point where you test mouse clicks and read pixels. That's just silly. If you use (and you should) the MVC pattern, stop at the controller level.
Use integration tests for query and other data access testing.
I always write tests to test the DAO layer, which isn't testing business logic, but I feel it is important to test the CRUD features. This has gotten me into a lot of trouble, because if my database is corrupt for some reason my tests have a high chance of failing. To keep these DAO-type tests from failing, I first do the testing in a non-production database. Then for each CRUD/DAO test I:
1) Find objects that may have been left around from a previous test and, if they exist, delete them.
2) Create the objects I want to test.
3) Update the objects I want to test.
4) Clean up or delete the objects I created.
This sequence helps me make sure my database is in a state where my tests will not fail if run twice, or if a previous run stopped halfway through.
Another way is to wrap your CRUD tests in a transaction and, at the end of the test, roll back the transaction so the database is back in the state it started in.
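As a sketch of that transaction-and-rollback approach, using ActiveRecord purely as an illustration (the User model is made up):
require "active_record"

# Run the block inside a transaction and undo everything afterwards
def with_rollback
  ActiveRecord::Base.transaction do
    yield
    raise ActiveRecord::Rollback   # swallowed by the transaction block, triggers a rollback
  end
end

# Inside a test:
with_rollback do
  user = User.create!(name: "crud test")
  raise "not saved" unless User.exists?(user.id)
end
# After the block, the row is gone and the database is back where it started.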

TDD and Service (class what do something but nothing returns back)

I'm trying to follow TDD (I'm a newbie) while developing a Service class that builds tasks passed in by service clients. The built objects are then passed to other systems. In other words, this service takes tasks but returns nothing as a result - it passes the built tasks on to other services.
So I'm wondering how I can write a test for it, because there is nothing to assert.
I'm thinking about using mocks to track interactions inside the service, but I'm a little bit afraid of using mocks because I will be tied to the internal implementation of the service.
Thanks all of you in advance!
There's no problem using mocks for this, since you are effectively going to be mocking the external interface of the components that are used internally by your component. This is really what mocking is intended for, and it sounds like a perfect match for your use case.
When doing TDD it should also allow you to get those quick turnaround cycles that are considered good practice, since you can just create mocks of those external services. These mocks will easily allow you to write another failing test.
You can consider breaking it up into a couple of classes: one responsible for building the list of tasks that will be executed, and the other responsible for executing the list of tasks it is handed. This way you can directly test the code that builds the lists of tasks.
That said, I want to add a sample I posted on another question, regarding how I view the TDD process when external systems are involved.
Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data to the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic will make the calls to those systems that you are expecting it to. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
Not sure what language you're using, so in pseudo-code it could be something like this:
when_service_is_passed_tasks
  before_each_test
    mockClients = CreateMocks of 3 TaskClients
    tasks = GetThreeTasks()
    myService = new TaskRouter(mockClients)

  sends_all_to_the_appropriate_clients
    tasks = GetThreeTasks()
    myService.RouteTasks(tasks)
    Assert mockClients[0].AcceptTask(tasks[0]) was called
    Assert mockClients[1].AcceptTask(tasks[1]) was called
    Assert mockClients[2].AcceptTask(tasks[2]) was called

  if_one_routing_fails_all_fail
    tasks = GetTasksWhereOneIsFailing()
    myService.RouteTasks(tasks)
    Assert mockClients[0].AcceptTask(*) was not called
    Assert mockClients[1].AcceptTask(*) was not called
    Assert mockClients[2].AcceptTask(*) was not called
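In Ruby/RSpec the first of those pseudo-tests might look roughly like this; TaskRouter, route_tasks and accept_task are just the names assumed by the pseudo-code, not a real API:
RSpec.describe TaskRouter do
  it "sends each task to the matching client" do
    tasks   = [double("task1"), double("task2"), double("task3")]
    clients = Array.new(3) { double("TaskClient") }

    clients.zip(tasks).each do |client, task|
      expect(client).to receive(:accept_task).with(task)
    end

    TaskRouter.new(clients).route_tasks(tasks)
  end
end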
