When testing an API, should I test API method validation?

I've inherited a project which will be connecting to a CRM system via a SOAP service written by another dev. My question is: to what level should I be testing the interface with the SOAP services?
I set up a test case and wrote some methods to test a SOAP update method, and confirmed that it failed with a suitable error code for invalid customers or order numbers.
I next tested an invalid order status value (not within the set of expected values), and the service returned a success code, which wasn't expected.
I believe I should report this to the developer, but should I now remove this test from my test suite? Or leave it showing as a fail?
If the SOAP service chooses not to validate its input parameters, I think that's poor design, but it's not a fault in MY code. I just need to ensure I clean the input before passing values to the other system, and that validation should be covered by another set of tests anyway.
Should I even be talking to the SOAP service through the Unit Tests in the first place?

You should write at least one test per atomic functional requirement. If you are following minimal-interface guidelines, you should have at most one interface method per atomic functional requirement. But you can write more than one test for each requirement, since there may be a variety of invariants that can be tested.
In general, you should think of invariants and functional requirements when writing tests, not interface methods. Tests >= atomic functional requirements.

Think of the service "contract": what are its pre-conditions (i.e. legal inputs), its post-conditions (i.e. legal outputs) and its invariants (legal service state)? If these are not made clear by the developer, or there's a chance another developer is misusing the service, this should be reported and handled.
One exception, though: this is all well and good in theory, but if there are no other consumers of the service (besides maybe the original developer), sometimes extra checking is unnecessary. In such cases it is quite reasonable to assume that invalid inputs are checked and eliminated by the calling code.
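To make the "contract" idea concrete for the SOAP question above, here is a minimal JUnit sketch. The OrderService interface, method name, status set and error codes are all hypothetical stand-ins, and the in-memory fake exists only so the sketch runs on its own; in a real suite you would wire in the generated SOAP client pointed at a test endpoint.

    import static org.junit.Assert.assertEquals;

    import java.util.Set;

    import org.junit.Test;

    public class OrderServiceContractTest {

        /** Hypothetical stand-in for the generated SOAP client interface. */
        interface OrderService {
            /** Returns a result code; 0 = success, non-zero = an error code. */
            int updateOrderStatus(String customerId, String orderNumber, String status);
        }

        // In the real suite this would be the SOAP client against a test endpoint.
        // Here a tiny in-memory fake keeps the sketch self-contained.
        private final OrderService service = (customerId, orderNumber, status) -> {
            if (!"CUST-1".equals(customerId)) {
                return 101; // invalid customer
            }
            if (!Set.of("NEW", "SHIPPED", "CANCELLED").contains(status)) {
                return 102; // status outside the documented set (the pre-condition)
            }
            return 0;
        };

        @Test
        public void rejectsAnUnknownCustomer() {
            assertEquals(101, service.updateOrderStatus("NO_SUCH_CUSTOMER", "ORD-1", "SHIPPED"));
        }

        @Test
        public void rejectsAnOutOfRangeOrderStatus() {
            // Against the real service described in the question this currently
            // fails (it returns success), which is exactly the defect to report.
            assertEquals(102, service.updateOrderStatus("CUST-1", "ORD-1", "NOT_A_REAL_STATUS"));
        }
    }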

If the library (or 3rd party code) has a good reputation (for example, Apache Commons, Guava...), I don't retest the API.
However, when I am not sure of the quality of the code, I tend to write a couple of tests verifying my assumptions about the library/API (but I don't retest the whole library).
If those simple assumptions fail, it is a very bad sign for me. In that case, I tend to write more tests to check more aspects of the library. In your case, I would write more tests and report the errors to the developer.
These tests are not lost, because if a new version of the library is delivered, you can still use them to check for regressions.
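As an illustration of such assumption checks (a sketch, not the answerer's actual tests), here are a couple of JUnit "learning tests" against Apache Commons Lang, pinning down only the behaviour my own code relies on rather than retesting the library:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.apache.commons.lang3.StringUtils;
    import org.junit.Test;

    public class StringUtilsAssumptionsTest {

        @Test
        public void isBlankTreatsNullAndWhitespaceAsBlank() {
            // My calling code relies on both of these being considered blank.
            assertTrue(StringUtils.isBlank(null));
            assertTrue(StringUtils.isBlank("   "));
        }

        @Test
        public void defaultStringTurnsNullIntoEmptyString() {
            assertEquals("", StringUtils.defaultString(null));
        }
    }

If a new library version breaks one of these, the failure points straight at the assumption that no longer holds.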

Related

Order of Operations for System Testing?

I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load testing or database tuning ought to be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of "validation testing", "system testing" is the correct term. Database testing is part of integration and system testing, and load testing is performed during the system and user acceptance testing phases.

How does one unit test network-dependent operations?

Some of my apps grab data from the internet. I'm getting into the habit of writing unit tests, and I suspect that if I write a test for values returned from the internet, I will run into latency and/or reliability problems.
What's a valid way to test data that "lies on the other end of a web request"?
I'm using the stock unit testing toolkit that comes with Xcode, but the question theoretically applies to any testing framework.
A unit test is focused specifically on the business logic of your class. There would be no latency or reliability issues, because you would use mock objects to simulate whatever you actually interact with.
What you are describing is some form of integration testing, which seems to be not what the OP intends.
You should "obscure" the other end by mocking it rather than really accessing the network, a remote database, etc.
Among others:
introduce artificial latency in requests
use another machine on the same network or at least another VM
test connection failures, either by connecting to a non-existent server or by physically cutting the connection (one way to drive this in a test is sketched after this list)
test for incomplete data (connection could be cut half way)
test for duplicate data (app could try to submit the request more than once if it thinks it was not successful - and in some scenarios may lead to lost data)
All of these should fail gracefully (either on the server side or on the client side)
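For the connection-failure case, one approach (a sketch assuming Java 11's built-in HTTP client, an arbitrary local port with nothing listening on it, and a hypothetical /orders path) is to point the client at a dead endpoint and assert that the failure surfaces as a clear error instead of a hang:

    import static org.junit.Assert.fail;

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    import org.junit.Test;

    public class ConnectionFailureTest {

        @Test
        public void reportsAClearErrorWhenNothingIsListening() throws Exception {
            // Port 9 ("discard") is assumed to have no listener on the test machine.
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofMillis(500))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://127.0.0.1:9/orders"))
                    .build();
            try {
                client.send(request, HttpResponse.BodyHandlers.ofString());
                fail("expected the connection attempt to fail");
            } catch (IOException expected) {
                // Graceful failure: the caller gets an exception it can translate
                // into a user-facing error instead of hanging or losing data.
            }
        }
    }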
I posed this question to the venerable folks on #macdev IRC channel on freenode.net, and I got a few really good answers.
Mike Ash (of mikeash.com) suggests implementing a local web server inside my app. For complex cases, I'd probably do this. However, I'm just using the built-in initWithContentsOfURL:(NSURL *)url method of NSData.
For simpler cases, Mike says an alternate method is to pass base64-encoded dummy data directly to the NSData initializer. (Use data:dummyDataEncodedAsBase64GoesAfterTheDataProtocolThingy.)
Similarly, "alistra" suggests using local file URLs that point to files containing mock data.

Best practice for end to end testing whole systems

End-to-end testing means exercising an application from its outer boundaries to verify its behavior. So far I've only written tests for a single executable artifact. How should I test systems made up of multiple artifacts that are deployed on different hosts?
I see two alternatives.
The tests set up the whole system and exercise it from the very outer edges.
Each artifact is end to end tested in isolation, relying on the test content to enforce the protocol between them.
Is there a clear case for adhering to only one of these, is one of them preferred, or are they interchangeable? If interchangeable, what are some advantages and disadvantages of each?
Even though I think it depends on the context, I prefer the first alternative. Here are my random thoughts:
I like my tests to be as closely mapped to use cases as possible (BDD style) (with the disclaimer that I misuse the term use case). These use cases may span several applications and sub-systems.
Example: A back office administrator can view a transaction made by a user from the public interface.
Here, the back office admin interface and the public interface are different applications, but they are included in the same use case.
Mapping these thoughts to your problem where you have sub-systems deployed on different hosts, I would say it depends on how it is used, from the user/actor perspective. Do the use cases span several sub-systems?
Also, perhaps the fact that the system is deployed on several hosts isn't important to the tests. You could replace the inter-process communication with method calls in your tests and have the whole system within the same process during tests, reducing the complexity. Supplement this with tests that only verify the inter-process communication.
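A sketch of what "replace the inter-process communication with method calls" can look like, using the back-office/public-interface example above; all the names (TransactionStore, PublicShop, BackOffice) are hypothetical, and in production the store would sit behind a remote call:

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.junit.Test;

    public class WholeSystemInProcessTest {

        /** Hypothetical boundary that is a remote call in production. */
        interface TransactionStore {
            void record(String userId, int amountCents);
            List<String> transactionsFor(String userId);
        }

        /** Public-interface side: a user makes a purchase. */
        static class PublicShop {
            private final TransactionStore store;
            PublicShop(TransactionStore store) { this.store = store; }
            void buy(String userId, int amountCents) { store.record(userId, amountCents); }
        }

        /** Back-office side: an administrator views a user's transactions. */
        static class BackOffice {
            private final TransactionStore store;
            BackOffice(TransactionStore store) { this.store = store; }
            List<String> viewTransactions(String userId) { return store.transactionsFor(userId); }
        }

        /** In-process implementation used only by the tests. */
        static class InMemoryStore implements TransactionStore {
            private final Map<String, List<String>> data = new HashMap<>();
            public void record(String userId, int amountCents) {
                data.computeIfAbsent(userId, k -> new ArrayList<>()).add(amountCents + " cents");
            }
            public List<String> transactionsFor(String userId) {
                return data.getOrDefault(userId, List.of());
            }
        }

        @Test
        public void adminCanSeeATransactionMadeThroughThePublicInterface() {
            TransactionStore store = new InMemoryStore();
            new PublicShop(store).buy("user-42", 1999);
            assertTrue(new BackOffice(store).viewTransactions("user-42").contains("1999 cents"));
        }
    }

Separate, narrower tests would then verify that the real inter-process transport honours the same TransactionStore contract.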
Edit:
I realise that I forgot to include why I prefer to test the whole system.
Your asset is features, that is, behaviour, while the code is a liability. Therefore you'd like to test the behaviour, not the code (BDD style).
If you are testing each sub-system separately you are testing the code, not the features. Why? When you divided your system into sub-systems you did so based on some technical reasons. When you learn more you might discover that the chosen seam is sub-optimal and would like to move some responsibility from one sub-system to another. And you would have to modify test and production code at the same time, leaving you without a safety net. That's a typical symptom of testing implementation details.
That said, these kinds of tests are too blunt to test everything. So you need to have complementary tests for details as well, where necessary.
Testing each artifact end-to-end separately would be highly desirable in any case. This will ensure that every artifact is sound.
In addition, you might want to test a composition of artifacts. That would catch problems in the interactions between artifacts. I don't know about your situation, but one thing that is important to have is a test environment that is a copy of production. Testing the system in the test environment is a very good idea. You might also want to test the system in the production environment; this might be feasible or not. For instance, if your system processes credit card payments, you may want to avoid test payments on the production system.
In any case, testing each system separately is IMHO more important than testing the composition. Once you know that your artifacts are sound in isolation, catching interaction problems will be much easier. If you only have the end-to-end test of the whole system, it's much more difficult to understand where the error is when the tests fail.

BDD at both the business and application level

In my current project I want to use Behavior Driven Development (BDD) on both levels: business requirements and application-level tasks.
Is it all right to wrap (group) my internal BDD specs into my high-level specs, so clients would see that a business requirement is done (all internal specs in that requirement passed) but wouldn't actually see my internal specs?
Do you mean "should I put a bunch of test case source code in my specification?" (BDD is essentially a reframing of TDD)
Then the answer is almost certainly NO. Your client probably cares about getting a system that does what she wants, and what she wants is almost certainly not what she asked for in the first place.
Just put the software in the hands of your client as soon as possible to get feedback. Agile software development practices are all about clients giving feedback early and iterating the requirements quickly.
A spec is only useful for two things: a support for discussing requirements (before the work is done), and a tool for finger-pointing (when the client says the software does not do what she needs). The former is constructive, the latter is not.

Applying TDD when the application is 100% CRUD

I routinely run into this problem, and I'm not sure how to get past this hurdle. I really want to start learning and applying Test-Driven Development (or BDD, or whatever), but it seems like every application I build where I want to apply it is pretty much only standard database CRUD stuff, and I'm not sure how to go about it. The objects pretty much don't do anything apart from being persisted to a database; there is no complex logic that needs to be tested. There is a gateway that I'll eventually need to test for a 3rd-party service, but I want to get the core of the app done first.
Whenever I try to write tests, I only end up testing basic stuff that I probably shouldn't be testing in the first place (e.g. getters/setters), but it doesn't look like the objects have anything else. I guess I could test persistence, but this never seems right to me because you aren't supposed to actually hit a database; yet if you mock it out, then you really aren't testing anything, because you control the data that's spit back. I've seen a lot of examples where there is a mock repository that simulates a database by looping and creating a list of known values, and the test verifies that the "repository" can pull back a certain value. I'm not seeing the point of a test like this, because of course the "repository" is going to return that value; it's hard-coded in the class! Well, I see it from a pure TDD standpoint (i.e. you need to have a test saying that your repository needs a GetCustomerByName method or whatever before you can write the method itself), but that seems like following dogma for no reason other than it's "the way"; the test doesn't seem to be doing anything useful apart from justifying a method.
Am I thinking of this the wrong way?
For example, take a run-of-the-mill contact management application. We have contacts, and let's say that we can send messages to contacts. We therefore have two entities, Contact and Message, each with common properties (e.g. First Name, Last Name and Email for Contact, and Subject, Body and Date for Message). If neither of these objects has any real behavior or needs to perform any logic, then how do you apply TDD when designing an app like this? The only purpose of the app is basically to pull a list of contacts and display them on a page, display a form to send a message, and the like. I'm not seeing any sort of useful tests here. I could think of some tests, but they would pretty much be tests for the sake of saying "See, I have tests!" instead of actually testing some kind of logic. (While Ruby on Rails makes good use of it, I don't really consider testing validation to be a "useful" test, because it should be something the framework takes care of for you.)
"The only purpose of the app is basically to pull a list of contacts"
Okay. Test that. What does "pull" mean? That sounds like "logic".
" display them on a page"
Okay. Test that. Right ones displayed? Everything there?
" display a form to send a message,"
Okay. Test that. Right fields? Validations of inputs all work?
" and the like."
Okay. Test that. Do the queries work? Find the right data? Display the right data? Validate the inputs? Produce the right error messages for the invalid inputs?
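To make that concrete, a small JUnit sketch (the ContactBook class and its rules are invented for illustration) showing that even "pull a list and display it" hides testable behaviour such as ordering, trimming, and input validation:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    import org.junit.Test;

    public class ContactBookTest {

        /** Hypothetical domain class; only the behaviour under test is sketched. */
        static class ContactBook {
            private final List<String> names = new ArrayList<>();

            void add(String name) {
                if (name == null || name.isBlank()) {
                    throw new IllegalArgumentException("contact name is required");
                }
                names.add(name.trim());
            }

            /** "Pull" = return contacts sorted for display, not in insertion order. */
            List<String> listForDisplay() {
                List<String> sorted = new ArrayList<>(names);
                sorted.sort(Comparator.naturalOrder());
                return sorted;
            }
        }

        @Test
        public void contactsAreListedAlphabeticallyAndTrimmed() {
            ContactBook book = new ContactBook();
            book.add("Zoe");
            book.add("  Adam ");
            assertEquals(List.of("Adam", "Zoe"), book.listForDisplay());
        }

        @Test(expected = IllegalArgumentException.class)
        public void blankNamesAreRejectedWithAClearError() {
            new ContactBook().add("   ");
        }
    }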
I am working on a pure CRUD application right now.
But I see lots of benefits in unit test cases (note: I didn't say TDD).
I write the code first and then the test cases, but never too far apart.
And I test the CRUD operations, persistence to the database as well (a round-trip sketch of such a test is below).
When I am done with the persistence and move on to the UI layer, I will have a fair amount of confidence that my service/persistence layer is good, and I can then concentrate on the UI alone at that moment.
So IMHO there is always a benefit to TDD/unit testing (whatever you call it, depending on how extreme you feel about it), even for a CRUD application.
You just need to find the right strategy for your application.
Just use common sense, and you will be fine.
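A sketch of what such a persistence round-trip test can look like, using plain JDBC against an in-memory H2 database; the database choice and the contact table are assumptions for the sketch, and a real repository class would slot in the same way:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNull;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import org.junit.Test;

    public class ContactPersistenceTest {

        @Test
        public void createReadUpdateDeleteRoundTrip() throws SQLException {
            // In-memory H2 database (an assumption; swap in whatever test DB you use).
            try (Connection db = DriverManager.getConnection("jdbc:h2:mem:crudtest")) {
                try (Statement ddl = db.createStatement()) {
                    ddl.execute("CREATE TABLE contact (id INT PRIMARY KEY, email VARCHAR(255))");
                }

                // Create
                try (PreparedStatement insert =
                         db.prepareStatement("INSERT INTO contact (id, email) VALUES (?, ?)")) {
                    insert.setInt(1, 1);
                    insert.setString(2, "old@example.com");
                    insert.executeUpdate();
                }

                // Update
                try (PreparedStatement update =
                         db.prepareStatement("UPDATE contact SET email = ? WHERE id = ?")) {
                    update.setString(1, "new@example.com");
                    update.setInt(2, 1);
                    assertEquals(1, update.executeUpdate());
                }

                // Read
                assertEquals("new@example.com", emailOf(db, 1));

                // Delete
                try (PreparedStatement delete =
                         db.prepareStatement("DELETE FROM contact WHERE id = ?")) {
                    delete.setInt(1, 1);
                    assertEquals(1, delete.executeUpdate());
                }
                assertNull(emailOf(db, 1));
            }
        }

        private static String emailOf(Connection db, int id) throws SQLException {
            try (PreparedStatement query =
                     db.prepareStatement("SELECT email FROM contact WHERE id = ?")) {
                query.setInt(1, id);
                try (ResultSet rs = query.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }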
I feel like we are confusing TDD with unit testing.
Unit tests are specific tests which test units of behavior. These tests are often included in the integration build. S.Lott described some excellent candidates for just those types of tests.
TDD is for design. I find, more often than not, that the tests I write when using TDD will either be discarded or evolve into unit tests. The reason behind this is that when I'm doing TDD I'm testing my design while I'm designing my application, class, method, domain, etc.
In response to your scenario, I agree with what S.Lott implied: what you need is a suite of unit tests to test specific behaviors in your application.
TDDing a simple CRUD application is, in my opinion, kind of like practicing scales on a guitar: you may think it's boring and tedious, only to discover how much your playing improves. In development terms, you would be likely to write code that's less coupled and more testable. Additionally, you're more likely to see things from the code consumer's perspective, because you'll actually be using it. This can have a lot of interesting side effects, like more intuitive APIs, better segregation of concerns, etc. Granted, there are scaffold generators that can do basic CRUD for you, and they do have a place, especially for prototyping, but they are usually tied to a framework of sorts. Why not focus on the core domain first, deferring the framework/UI/database decisions until you have a better idea of the core functionality needed? TDD can help you do that as well.
In your example: do you want messages to be a queue, a hierarchical tree, etc.?
Do you want them to be loaded in real time? What about sorting and searching? Do you need to support JSON or just HTML? It's much easier to see these kinds of questions with BDD/TDD. If you're doing TDD, you may be able to test your core logic without even using a framework (and waiting a minute for it to load and run).
Skip it. All will be just fine. I'm sure you have a deadline to meet. (/sarcasm)
Next month, we can go back and optimize the queries based on user feedback. And break things that we didn't know we weren't supposed to break.
If you think the project will last 2 weeks and then never be reopened, automated testing probably is a waste of time. Otherwise, if you have a vested interest in "owning" this code for a few months and it's active, build some tests. Use your judgement as to where the most risk is. Better yet, if you plan on being with the company for a few years, and have other teammates who take turns whacking on various pieces of a system, and it may be your turn again a year from now, build some tests.
Don't overdo it, but do "stick a few pins in it", so that if things start to "move around", you have some alarms to call attention to things.
Most of my testing has been JUnit or batch "diff" type tests, plus a rudimentary screen-scraper-type tool I wrote a few years ago (scripting some regex plus wget/curl type stuff). I hear Selenium is supposed to be a good tool for web app UI testing, but I have not tried it. Does anybody have recommended tools for local GUI apps?
Just an idea...
Take the requirements for the CRUD application and use tools like Watij, Watir, or AutoIt to create test cases. Start creating the UI to pass the test cases. Once you have the UI up and passing maybe just one test, start writing the logic layer for that test, and then the db layer.
For most users, the UI is the system. Remember to write test cases for each new layer that you are building. So instead of starting from the db to the app to the UI layer, start in the reverse direction.
At the end of the day, you would probably have accumulated a powerful regression test suite, giving you some confidence to refactor safely (one possible shape for such a UI test is sketched below).
this is just an idea...
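In that spirit, here is a minimal UI-level sketch using Selenium WebDriver (mentioned a few answers up) rather than Watij/Watir/AutoIt; the URL, field names and element IDs are hypothetical placeholders for whatever the form actually exposes:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class NewContactUiTest {

        @Test
        public void savingAContactShowsItInTheList() {
            // All URLs and element names below are illustrative placeholders.
            WebDriver browser = new ChromeDriver();
            try {
                browser.get("http://localhost:8080/contacts/new");
                browser.findElement(By.name("firstName")).sendKeys("Ada");
                browser.findElement(By.name("email")).sendKeys("ada@example.com");
                browser.findElement(By.id("save")).click();

                assertTrue("new contact should appear in the list",
                           browser.getPageSource().contains("ada@example.com"));
            } finally {
                browser.quit();
            }
        }
    }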
I see what you are saying, but eventually your models will become sufficiently advanced that they will require (or be greatly augmented by) automated testing. If not, what you are essentially developing is a spreadsheet which somebody has already developed for you.
Since you mentioned Rails, I would say doing a standard create/read/update/delete test is a good idea for each property, especially because your test should note permissions (this is huge I think). This also ensures that your migrations work as you expected them to.
I am working on a CRUD application now. What I am doing at this point is writing unit tests on my repository objects and testing that the CRUD features work as they should. I have found that this has inherently unit tested the actual database code as well. We have found quite a few bugs in the database code this way. So I would suggest you push ahead and keep going with unit tests. I know applying TDD to CRUD apps is not as glamorous as the things you might read about in blogs or magazines, but it serves its purpose, and you will be that much better when you work on a more complex application.
These days you should not need much hand-written code for a CRUD app apart from the UI, as there are 101 frameworks that will generate the database and data-access code.
So I would look at reducing the amount of hand-written code and automating the testing of the UI. Then I would use TDD for the odd bits of logic that need to be written by hand.
