RSpec: Avoiding out-of-sync issues with message expectations - ruby

Message expectations allow you to check that the object under test sends the right message, but not that the target object can actually respond to that call. At the other end of the spectrum, integration testing checks that everything actually works, that is, that the calls are made, understood, and executed properly.
Is there a middle ground, like checking that the object under test sends the right messages and that the receiving object can respond to those messages? This would ensure that the test breaks when the receiving object changes, without running a full integration test. In essence:
target.should_receive(:my_method) && target.should respond_to(:my_method)
using a custom matcher like
target.should_get_message(:my_method)
I think this could be useful for glue classes that just coordinate different actions.
What are your opinions on that? Is it a viable approach?
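For illustration, the combined check could live in a small spec helper; this is only a sketch using the RSpec 2-era syntax above, and the helper name and order_mailer are hypothetical:
def should_get_message(target, message)
  # Sketch only: breaks the spec early if the collaborator's API drifts,
  # then sets the usual message expectation.
  target.should respond_to(message)
  target.should_receive(message)
end

# usage inside an example:
# should_get_message(order_mailer, :deliver_cancellation)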

Check out rspec-fire; it solves this exact problem. I've been using it in lots of projects.
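For context, rspec-fire wraps a test double in a check that the doubled class really responds to the methods you stub, which is exactly the middle ground asked about. A minimal sketch, assuming a hypothetical OrderMailer collaborator and RSpec 2-era syntax:
require 'rspec/fire'

RSpec.configure do |config|
  config.include RSpec::Fire
end

describe 'cancelling an order' do
  it 'asks the mailer to send a confirmation' do
    # fire_double verifies that OrderMailer (when loaded) actually defines
    # #deliver_cancellation, so this spec breaks if the collaborator changes.
    mailer = fire_double('OrderMailer')
    mailer.should_receive(:deliver_cancellation)

    Order.new(mailer: mailer).cancel   # Order is hypothetical glue code
  end
end

In RSpec 3 this behaviour was folded into core as verifying doubles (instance_double and friends), so the gem is mainly useful on older RSpec versions.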


Cypress test passed due to briefly picking up element in the DOM

I have some Cypress tests that are passing because the cy.get elements briefly appear in the DOM before the page is redirected to an error page. I can only get the test to fail if a cy.wait is added before the assertion, but I do not want to use those. Has anyone come across this before and can you offer some best practices on what to assert instead?
I am thinking of asserting via cy.url that the error page was never reached, but I would have to put this at the end of absolutely every test, so I am hoping for a better option. Thanks

Use Cases for LRA

I am attempting to accomplish something along these lines with Quarkus and Narayana (a rough client-side sketch follows the list):
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
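As a rough illustration of that client-side flow, here is a sketch in Ruby; the host, endpoint paths, JSON fields, and polling interval are hypothetical, not taken from the LRA specification or the Quarkus guide:
require 'net/http'
require 'json'

base = 'http://localhost:8080'

# 1. Kick off the long-running action and capture its id
start_response = Net::HTTP.post(URI("#{base}/lra/start"), '')
lra_id = JSON.parse(start_response.body)['lraId']

# 2. Poll a status endpoint until the coordinator reports a terminal state
status = 'Active'
while status == 'Active'
  sleep 1
  status = JSON.parse(Net::HTTP.get(URI("#{base}/lra/status/#{lra_id}")))['status']
end

# 3. Fetch the result once the action has completed ('Closed' is the
#    LRA state for successful completion)
result = Net::HTTP.get(URI("#{base}/lra/result/#{lra_id}")) if status == 'Closed'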
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
Yes, it might be a valid use case, but in any case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
The idea you describe is more or less one LRA participant executing in a new LRA while the client polls the status of this execution. This is not quite what LRA is intended for, but it can surely be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So the main benefit arises when you can propagate the LRA through different services that either all complete their actions or all have their compensation callbacks invoked in case of failure (and, of course, only for the services that executed their actions in the first place). Here is also an example with LRA propagation: https://github.com/xstefank/quarkus-lra-trip-example.
EDIT: Sorry, I forgot to add that there is a programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. However, note that it is not part of the specification and is specific to Narayana.

In Ruby, using Cucumber, should I mock out calls to a webservice?

All,
I'm using Cucumber for acceptance testing a Ruby command line utility. This utility pulls data in from a webservice.
I understand Cucumber is for acceptance testing and tests the whole stack, but I still need to provide consistent replies from the webservice.
Should I mock the webservice? If yes, how? What's the best approach here?
Cheers,
Gordon
So after a bit of thinking, then a bit of googling, I found FakeWeb. It does exactly what I needed!
Check out Dr Nic's slides - especially slide 17.
And it was easy - in under 2 hours I managed to set it up, rewrite my tests, get everything passing, and check it all back into GitHub!
HTH others!
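For anyone landing here, this is roughly what a FakeWeb setup looks like; the URL and response body below are placeholders, not from the original post:
require 'fakeweb'

# Fail loudly if a test tries to hit the real network
FakeWeb.allow_net_connect = false

# Register a canned response for the URI the utility calls
FakeWeb.register_uri(:get, 'http://api.example.com/orders/1',
                     body: '{"id": 1, "status": "shipped"}',
                     content_type: 'application/json')

# Any Net::HTTP-based client now receives the canned body
require 'net/http'
puts Net::HTTP.get(URI('http://api.example.com/orders/1'))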
I am not very familiar with Ruby or Cucumber, so I can only give you a very general answer related to your problem, and it actually has more questions than answers.
How reliable are the web services? If they are down a lot it will make your tests fail from time to time and there is nothing more annoying than chasing down the reason for a failing test only to realize it was that time of the month again.
Can your web services take the pounding from tests? If you have several developers running these tests very often and your web services are on a partner company's server, they might not like the fact that you are testing against them.
Do you trust their output? For me, the biggest reason not to mock a dependency is if I don't know exactly what sort of data I am going to get from the service. If I am using services that are well documented and easy to understand, I usually don't mock them, but if they are not entirely clear or they change often, I do recommend testing against them.
How hard is it to mock the dependency? Replacing dependencies is not always easy, especially when adding test code after the fact. Luckily, in dynamic languages it is usually a lot easier than in, say, Java. I would still consider how much work it takes to build a mock service that responds with the answers you really want.
How much of a speed benefit do I gain from mocking? Integration tests are slow; mocking a web service dependency will make your tests run faster. How much faster? I don't know, but it probably matters.
Those are just a few points, but at least the last three I always consider before choosing to mock or not to mock.
Mock out the Webservice
I would write a wrapper around the calls to the webservice in the application.
Example in pseudo-code, sketched here as Ruby:
def call_web_service(action, options = {})
  # code for connecting to the webservice goes here
end
Then you mock out that function just like you would any other function:
def call_web_service(action, options = {})
  true
end
This way you can mock out the webservice without caring whether it is a webservice, a database connection, or whatever. And you can have it return true or whatever you need.
Test how your code handles responses from the Webservice
To take this idea one step further and make your tests even more powerful, you could use some kind of test or environment parameter to control what happens in the mocked-out webservice method. Then you can test how your code handles different responses from the webservice.
Again in pseudo-code, sketched as Ruby:
def call_web_service(action, options = {})
  case ENV['TEST_WEBSERVICE_PARAMETER']
  when 'CORRUPT_XML'
    '<xml><</xmy>'                     # deliberately malformed XML
  when 'TIME_OUT'
    sleep 5                            # simulate a slow/unresponsive service
  when 'EMPTY_XML'
    ''
  when 'REALLY_LONG_XML_RESPONSE'
    generate_xml_response(1_000_000)   # a helper that builds a huge payload
  end
end
And tests to match:
def test_raises_error_on_empty_xml_response_from_webservice
  ENV['TEST_WEBSERVICE_PARAMETER'] = 'EMPTY_XML'
  call_web_service(action, options)
  assert_error_was_raised(EMPTY_RESPONSE_FROM_WEBSERVICE)
  assert_written_in_log(EMPTY_RESPONSE_LOG_MESSAGE)
end
...
And so on, you get the point.
Please note that even though all my examples are negative test cases, this can of course be used for positive test cases as well.
Do note that this is a copy of an answer I made to a similar question:
Mockup webservice for iPhone
Good luck

How do I do TDD correctly? When do I write tests for layers deeper than my business logic layer (i.e. DAL)?

I'm still uncertain about how best to use mocks when doing development from the outside in (i.e. writing a test first that mimics the customer's requirement).
Assume my client's requirement is "Customer can cancel her order".
I can write a test for this - Test_Customer_Can_Cancel_Her_Order.
Using TDD I then write the business class and mock the data access layer (or any layer that is 'deeper' than my business logic layer). Fantastic, my test passes and I am only testing the business layer and not any layers underneath.
Now what I am wondering is... when do I delve deeper and start writing and testing the data access layer? Directly after the top-level test? Only when I write the integration test? Maybe it doesn't matter? As long as it gets done at some point?
Just wondering...
This is a tricky question. Some people think it's OK to test "private" code; some people think it's not. It is certainly easier, as you don't have to do complex setups to reproduce an application state that lets you test the small function you want to add (you can test that function directly). I used to be among those people, but now I'd say it's better to test only public APIs (create tests and setups that do and see only what the user can do and see). The reason is that if you only test the public API, you give yourself unlimited refactoring potential. If you test private code, you will have to change your tests whenever you refactor, making you less prone to "refactor mercilessly" and thus leading to bad code.
However, I have to say, testing only public APIs does lead to more complex test frameworks.
So, to answer your question more specifically: your next step is to think of a situation where there would be a problem with your current order-cancellation code and write a new high-level setup to reproduce that problem. You mentioned data access tests. Create a high-level setup where the database is corrupt, and test that your application behaves gracefully.
Oh, and for GUI applications it's a little trickier, as you don't really want to get to the point where you test mouse clicks and read pixels. That's just silly. If you use the MVC pattern (and you should), stop at the controller level.
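As a hypothetical sketch of that kind of high-level setup (OrderService and FailingOrderStore are made-up names), you drive the public API while the data layer fails and assert only what the user would see:
# A stand-in data layer that simulates a corrupt/unavailable database
class FailingOrderStore
  def cancel(_order_id)
    raise IOError, 'database unavailable'
  end
end

describe 'order cancellation with a broken data store' do
  it 'fails gracefully instead of crashing' do
    service = OrderService.new(store: FailingOrderStore.new)
    result = service.cancel_order(42)
    result.error.should == 'We could not cancel your order. Please try again.'
  end
end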
Use integration tests for query and other data access testing.
I always write tests for the DAO layer. That isn't testing business logic, but I feel it is important to test the CRUD features. This has gotten me into a lot of trouble, because if my database is corrupt for some reason, my tests have a high possibility of failing. What I do to prevent these DAO-type tests from failing is, first, run the tests against a non-production database. Then for each CRUD/DAO test I:
1. find objects that may have been left around from a previous test and, if they exist, delete them;
2. create the objects I want to test;
3. update the objects I want to test;
4. clean up by deleting the objects I created.
This sequence helps me make sure my database is in a condition where my tests will not fail if run twice, even if the first run stopped halfway through.
Another way is to wrap your CRUD tests in a transaction and roll it back at the end of the test, so the database is left in the state it began in.
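A minimal sketch of the rollback approach with ActiveRecord and Minitest; it assumes an already-configured test database, and the Order model and its columns are hypothetical:
require 'active_record'
require 'minitest/autorun'

class OrderDaoTest < Minitest::Test
  def test_create_update_delete_order
    ActiveRecord::Base.transaction do
      order = Order.create!(status: 'new')            # Create
      assert_equal 'new', order.reload.status         # Read
      order.update!(status: 'cancelled')              # Update
      assert_equal 'cancelled', order.reload.status
      order.destroy!                                  # Delete
      raise ActiveRecord::Rollback                    # silently undoes everything
    end
  end
end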

TDD and Service (a class that does something but returns nothing)

I'm trying to follow TDD (I'm a newbie) while developing a Service class that builds tasks passed in by Service clients. The built objects are then passed on to other systems. In other words, this service takes tasks but returns nothing as a result - it passes the built tasks to other Services.
So I'm wondering how I can write a test for it, because there is nothing to assert.
I'm thinking about using mocks to track interactions inside the service, but I'm a little bit afraid of using mocks because I will be tied to the internal implementation of the service.
Thanks all of you in advance!
There's no problem using mocks for this, since you are effectively mocking the external interface of the components that are used internally by the class under test. This is really what mocking is intended for, and it sounds like a perfect match for your use case.
When doing TDD, this should also give you the quick turnaround cycles that are considered good practice, since you can just create mocks of those external services. These mocks will easily allow you to write another failing test.
You can also consider breaking it up into a couple of classes: one responsible for building the list of tasks to be executed, and another responsible for executing the list of tasks it is handed. This way you can directly test the code that builds the lists of tasks, as in the sketch below.
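For instance, the split could look like this (all names hypothetical): the builder is a pure function you can assert on directly, and only the dispatcher needs mocks:
# Builds tasks from incoming requests; trivially testable without mocks
class TaskBuilder
  def build(requests)
    requests.map { |request| Task.new(request) }
  end
end

# Hands each task to a client; tested with mock clients
class TaskDispatcher
  def initialize(clients)
    @clients = clients
  end

  def dispatch(tasks)
    tasks.zip(@clients).each { |task, client| client.accept_task(task) }
  end
end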
That said, I want to add a sample I posted on another question, regarding how I view the TDD process when external systems are involved.
Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data in the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you are expecting it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
Not sure what language you're using, so here is a rough sketch in RSpec-style Ruby (TaskRouter, accept_task, and the get_* helpers are placeholder names):
describe TaskRouter do
  let(:mock_clients) { Array.new(3) { double('TaskClient') } }
  let(:service)      { TaskRouter.new(mock_clients) }

  it 'sends all tasks to the appropriate clients' do
    tasks = get_three_tasks
    tasks.each_with_index do |task, i|
      mock_clients[i].should_receive(:accept_task).with(task)
    end
    service.route_tasks(tasks)
  end

  it 'routes nothing if one routing fails' do
    tasks = get_tasks_where_one_is_failing
    mock_clients.each { |client| client.should_not_receive(:accept_task) }
    service.route_tasks(tasks)
  end
end
