I was taking an exam yesterday, and one of the questions asked in which order the following occur (I'll put them in the order I decided on):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load testing or database tuning ought to be in there at the end, but neither was an option on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration Testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of Validation Testing, System Testing is the correct term. Database testing is part of integration and system testing, and load testing is typically performed during the system and user acceptance testing phases.
I'm developing a Rails app with a lot of dependencies on external APIs, for example Delicious.
All APIs share two workflows:
On the first call they are going to load all data since the beginning of time.
All following calls will load data filtered by the last execution time (if supported).
Testing them for real means I'd have to create a test account for each API, or at least use my private one (even with VCR, because each API would still have to be called once to record it). And my biggest problem: I would have to mess around a lot with Dates and Times to emulate the two different workflows mentioned above. Timecop makes that fairly easy, but it still feels like a pain in the ass.
Another approach is to fake the API calls and their corresponding responses completely, but that means no real tests, and furthermore I would never notice changes to, or problems with, the APIs.
Any suggestions? Maybe a good combination of both ways?
Do both.
Start by mocking/stubbing everything in your regular run-frequently test suite. Do this for all the fine-grained model/controller testing.
Then add end-to-end testing (e.g. in integration tests) that covers the usual workflow scenarios and hits the real (test) servers.
Alternatively, use a different test suite for the end-to-end testing, e.g. Cucumber instead of Test::Unit, or Selenium/Watir, whatever, as long as it's separate from your usual test suite.
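For illustration, here is a minimal sketch of the stubbed "regular suite" half, using Minitest and WebMock; the DeliciousClient class, the api.example.com endpoint, and the JSON shape are all assumptions, so substitute your own wrapper and URLs:

```ruby
require "minitest/autorun"
require "webmock/minitest"
require "net/http"
require "json"

# Hypothetical minimal client, only here so the sketch is self-contained;
# your real wrapper around the external API will look different.
class DeliciousClient
  def fetch(since: nil)
    uri = URI("https://api.example.com/posts")
    uri.query = "since=#{since.to_i}" if since
    JSON.parse(Net::HTTP.get(uri))
  end
end

class DeliciousClientUnitTest < Minitest::Test
  def test_first_call_loads_all_data
    # WebMock intercepts the request, so the fast suite never leaves the machine.
    stub_request(:get, %r{api\.example\.com/posts})
      .to_return(body: [{ "id" => 1 }, { "id" => 2 }].to_json)

    assert_equal 2, DeliciousClient.new.fetch.size
  end

  def test_follow_up_call_filters_by_last_run
    stub_request(:get, %r{api\.example\.com/posts\?since=\d+})
      .to_return(body: [{ "id" => 2 }].to_json)

    assert_equal 1, DeliciousClient.new.fetch(since: Time.now - 3600).size
  end
end

# The end-to-end checks live in a separate suite that talks to the real test
# account, e.g. run only when an environment flag such as EXTERNAL=1 is set.
```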
This isn't a question of what stress testing tools are out there. I'm afraid it's a lot harder than that. (At least for me)
Consider a restful architecture for a forum or blog that generates random IDs for each post.
Simulating creating those topics/articles would be simple, because you'd just be posting form data to an endpoint like: /article, or /topic
But how do you then stress test commenting on those articles/topics? This is different, because the comments need to belong to an article/topic, which means that you need the ids of those items. However, if all you can do is issue posts, and you have no way of pulling those ids, you'd be unable to create them.
I'm creating a site that is similar in this regard, and I have no idea how to stress test the creation of the comments.
I have two ideas, and they're both pretty awful:
Generate a massive system ahead of time with some kind of factory, and then freeze it. From there, I figure I'd have to use some kind of browser automation to create my 'comments' on all of this. The automation would I suppose go through a recording proxy, like what JMeter offers. Then, to run the test, I reload the database, and replay the massive log file.
Use browser automation for the whole thing, taking advantage of the dynamic links delivered in the HTML page. The only option here would be Selenium, and really, we're talking a massive selenium grid that would be extremely expensive. Probably very difficult to maintain also.
Option 2 is completely infeasible as near as I can tell, but option 1 sounds excruciating. I'm really hoping someone can suggest something more clever.
Option 1.
I mean, implementation notes aside, you're basically just asking for a testing environment. So, the answer is to make one. In whatever fashion:
Generate it
Make it once and reload it
Randomise it
Whatever. It's the approach to go with.
How you do your testing is kind of a side issue (unit testing/browser/whatever, up to you).
But you've reached a point where you need to test with real data. So make it happen.
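As a rough sketch of the "generate it" route, assuming a hypothetical /article endpoint that accepts JSON and ignoring authentication (a real Rails app would also need a CSRF or API token), a plain Ruby script can seed the environment through the app's own API before you freeze or back up the database:

```ruby
require "net/http"
require "uri"
require "json"

base = URI("http://localhost:3000")   # hypothetical app under test

Net::HTTP.start(base.host, base.port) do |http|
  500.times do |i|
    # POST one generated article per iteration to the (assumed) /article endpoint.
    req = Net::HTTP::Post.new("/article", "Content-Type" => "application/json")
    req.body = { title: "Generated article #{i}", body: "Lorem ipsum" }.to_json
    res = http.request(req)
    warn "article #{i} failed: #{res.code}" unless res.is_a?(Net::HTTPSuccess)
  end
end
```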
This is a common problem. We handle it by extracting the dynamic parts of the URLs from the server responses. I presume this system uses a web browser client, which implies that those URLs are being sent in the server responses. If they are in the responses, then you CAN get them. However, since you said "if all you can do is issue posts, and you have no way of pulling those ids", perhaps that is not the case? If so, can you clarify?
We've recently been doing a lot of testing of Drupal systems for our customers, which have exactly the problem you've described. We either solve it by extracting the IDs dynamically from the page as the user browses to the page they want to comment on, or we use option 1, or a combination of both. Note that if you have a load testing tool handy, generating content is not too difficult: use the tool to do it, i.e. run a "content generation" load test. Besides yielding useful data in its own right, that will give you a test database that you can then back up/restore as needed to maintain your test infrastructure. Now you can run the test on a more realistic environment, one that has lots of content already in it (assuming, of course, that this is realistic for your purposes).
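For a concrete (if simplified) picture of that extraction approach in plain Ruby, here the /articles listing page, the a.article-link selector, and the comment[body] parameter are all assumptions, and authentication/CSRF are ignored:

```ruby
require "net/http"
require "uri"
require "nokogiri"   # gem install nokogiri

base = URI("http://localhost:3000")   # hypothetical app under test

# Pull the article links out of the server's own response...
index_html    = Net::HTTP.get(URI("#{base}/articles"))
article_paths = Nokogiri::HTML(index_html).css("a.article-link").map { |a| a["href"] }

# ...then post comments against the IDs the server actually generated.
Net::HTTP.start(base.host, base.port) do |http|
  article_paths.each do |path|
    req = Net::HTTP::Post.new("#{path}/comments")
    req.set_form_data("comment[body]" => "load-test comment")
    http.request(req)
  end
end
```

Load testing tools typically do the same job with built-in extraction rules (for example, JMeter's post-processors) rather than hand-written parsing.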
If you are interested, I'd be happy to demo how we solve the problem using our software (Web Performance Load Tester).
I have used Visual Studio to solve this kind of problem. Visual Studio allows C# coded web tests that can programmatically parse the HTML returned and take action based on that.
I was load testing a SharePoint website and needed information to be populated ahead of time. I created a load test specifically for creating "random" pages of content ahead of time, and populated a test harness database with the URLs, allowing some control over the pages that were loaded.
With a list of "articles" available and a list of potential comments, it is possible to code a pseudo-random number generator (inside a stored procedure because of the asynchronous nature of the test harness) to get a repeatable load test. That meant that the site would be populated in the same way each time the load test was run.
It does take some effort to create a decent way of populating the site off the bat, but the return in the relevance of the load test is quite good.
I have a good grasp of unit testing, DI, mocks, and all the design-principle goodness required to get as close to full code coverage as humanly possible (single responsibility principle, thinking "how will I test this?" as I code, etc.).
I did not code my most recent app doing true TDD. I kept unit testing in mind as I coded, and wrote my tests after writing the code, refactoring, etc. I did TDD when it was "easy" to do, although I did not have as good a grasp then as I do now. It was the first project where I made full use of DI, mocking frameworks, etc., and the first with full code coverage, and I learned a lot from it as I went along. I'm itching to get assigned to my next project so I can code it completely doing TDD from scratch.
I know this is a broad question, and I've already ordered TDD by example and XP Unleashed, but I'm hoping for a brief overview of how you all design / write a large application doing TDD.
Do you write the entire application using nothing but stubbed-out code (e.g., write all the function signatures, interfaces, and structures, and build the entire application without writing any actual implementation)? I could picture this working on a small- to mid-sized project, but is it even possible on large applications?
If not, how the heck would you write your first unit test for the highest-level function in your system? Let's say, for example, a web service with a function called DoSomethingComplicated(param1,...,param6) exposed to the world. Obviously, writing the test first for a simple function like AddNumbers() is trivial, but what about when the function is at the top of the call stack like this?
Do you still do design up front? Obviously you still want to do "architecture" design, e.g., a flow chart showing IE talking to IIS, which talks to a Windows service via WCF, which talks to the SQL database; an ERD showing all your SQL tables and their fields; etc. But what about class design? Interactions between the classes, etc.? Do you design this up front, or just keep writing stub code, refactoring the interactions as you go along, until the whole thing connects and looks like it will work?
Any advice is much appreciated
Do you do design up front?
Of course you do. You've got a big application in front of you. You've got to have some idea of the structure it will have before you start writing tests and code. You don't have to have it all worked out in detail, but you should have some basic idea of the layers, components, and interfaces. For example, if you are working on a web services system, you ought to know what the top level services are, and have a good first approximation of their signatures.
Do you write the entire application using nothing but stubbed out code?
No. You stub things out only if they are really difficult to control in a test. For example, I like to stub out the database, and the UI. I will also stub out third party interfaces. Sometimes I will stub out one of my own components if it vastly increases the test time, or it forces me to create test data that is too complicated. But most of the time I let my tests work on a pretty well integrated system.
I have to say I really dislike the style of testing that relies heavily on mocks and stubs. Don't get me wrong, I think mocks and stubs are very useful for decoupling from things that are hard to test. But I don't like writing things that are hard to test, and so I don't use a lot of mocks and stubs.
How do you write your first unit test for a high level function?
Most high level functions have degenerate behavior. For example, login is a pretty high level function and can be very complicated. But if you try to log in with no user name and no password, the response from the system is going to be pretty simple. Writing that test will also be very simple. So you start with the degenerate cases. Once you have exhausted them, you move on to the next level of complexity. For example, what if a user tries to log in with a username but no password? Bit by bit you climb the ladder of complexity, never tackling the more complex aspects until the less complex aspects are all passing.
It is remarkable how well this strategy works. You might think that you'd just be climbing around the edges all the time and never getting to the meat; but that's not what happens. Instead you find yourself designing the internal structure of the code based on all the degenerate and exceptional cases. When you finally get around to the primary flow, you find that the structure of the code you are working on has a nice hole of just the right shape to plug the main flow in.
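As a hedged illustration of climbing that ladder from the degenerate cases, here is a minimal Minitest sketch; the Login class and its return values are invented for the example and are not meant as a real authentication design:

```ruby
require "minitest/autorun"

# Invented subject under test, kept deliberately tiny for the sketch.
class Login
  def self.attempt(username, password)
    return :missing_credentials if username.to_s.empty? && password.to_s.empty?
    return :missing_password    if password.to_s.empty?
    :not_implemented_yet   # the main flow comes last, once the edges pass
  end
end

class LoginDegenerateCasesTest < Minitest::Test
  # Start with the degenerate cases: they are trivial to specify, and they
  # drive out the structure the main flow will later plug into.
  def test_no_username_and_no_password_is_rejected
    assert_equal :missing_credentials, Login.attempt("", "")
  end

  def test_username_without_password_is_rejected
    assert_equal :missing_password, Login.attempt("alice", "")
  end
end
```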
Please don't create your UI first.
UIs are misleading things. They make you focus on the wrong aspects of the system. Instead, imagine that your system must have many different UIs. Some will be web, some will be thick client, some will be pure text. Design your system to work properly irrespective of the UI. Get all the business rules working first, with all tests passing. Then plug the UI in later. I know this flies in the face of a lot of conventional wisdom, but I wouldn't do it any other way.
Please don't design the database first.
Databases are details. Save the details for later. Rather, design your system as though you had no idea what kind of database you were using. Keep any notion of schema, tables, rows, and columns out of the core of the system. Implement your business rules as though all the data were kept in memory all the time. Then add the database later, once you've gotten all the business rules working. Again, I know this flies in the face of some conventional wisdom, but coupling systems to databases too early is a source of a lot of badly warped designs.
Do I write the entire application, using nothing but stubbed out code?
No, not in the slightest sense - that sounds like a very wasteful approach. We must always keep in mind that the underlying reason for doing TDD is rapid feedback. An automated test suite can tell us if we broke anything much faster than a manual test can. If we wait until the last moment to wire things together, we don't get rapid feedback: while we may get rapid feedback from our unit tests, we wouldn't know if the application works as a whole. Unit tests are only one form of test we need to perform to verify the application.
A better approach is to start with the most important feature and work your way in from there, using an outside-in approach. This often means starting with some UI.
The way I do it is by creating the desired UI. Since we normally can't develop UI with TDD, I simply create the View with the technology of choice. No tests there, but I wire up the UI to some API (preferably using declarative databinding), and that's when the testing begins.
In the beginning, I would then TDD my ViewModels/Presentation Models and corresponding Controllers, possibly hard-coding some responses to see that the UI works. As soon as I have something that doesn't explode when you run it, I check in the code (remember, many small incremental check-ins).
I subsequently work my way vertically down that feature and ensure that this particular piece of UI can go all the way to the data source (or whatever), ignoring all other features.
When the feature is done, I can start on the next feature. The way I picture this process is that I fill out the application by doing one vertical slice at a time until all features are done.
Kick-starting a greenfield app this way always takes extra time for the first feature, since this is where you have to wire up everything, so pick something simple (like the initial View of the app) to keep things as simple as possible. Once the first feature is done, the next ones become much easier because the foundations are now in place.
Do I still design up-front?
Not much, no. I normally have an overall design in mind before I start, and when I work in a team, we sketch this overall architecture on a whiteboard or a slide deck before we start.
This is more or less limited to
The number and names of layers (UI, Presentation Logic, Domain Model, Data Access, etc).
The technologies used (WPF, ASP.NET MVC, SQL Server, .NET 3.5 or whatnot)
How we structure production code and test code, and which test technologies we use
Quality requirements for the code (pair programming, static code analysis, coding standards, etc.)
The rest we figure out as we go, but we use many ad-hoc design sessions at the whiteboard as we go along.
+1 Good question
I truly don't know the answer, but I would start with building blocks of classes that I could test and then build into the application, not with the top-level stuff. And yes, I would have a rough up-front design of the interfaces; otherwise I think you would find those interfaces changing so often as you refactor that it would be a real hindrance.
TDD By Example won't help, I don't think. IIRC it goes through a simple example. I am reading Roy Osherove's The Art of Unit Testing, and while it seems to cover tools and techniques like mocks and stubs comprehensively, the examples so far also seem pretty simple, and I don't see that it tells you how to approach a large project.
Do you write the entire application, using nothing but stubbed out code?
To test our systems we mainly do unit, integration, and remote-services testing. In unit tests we stub out everything long-running, time-consuming, or external, i.e. database operations, web service connections, or any connection to external services. This keeps our tests fast and independent, not reliant on the response of any external service, so they give us quick feedback. We learnt this the hard way: we have some tests that perform database operations, which makes them really slow and goes against the principle that unit tests must be fast to run.
In integration tests, we test the database operations, but still not the web services and external services, because those can make the tests brittle depending on their availability; we use autotest to run the tests in the background while we are coding.
However, to test any kind of remote service, we have tests that connect to the external services, perform the operation on them, and get the response. What matters to the test is the response and, where it matters, the end state. The important thing here is that we keep these tests in a separate directory called remote (a convention we created and follow), and these remote tests are only run by our CI (continuous integration) server when we merge code to the master/trunk branch and push it to the repo, so that we know quickly if there have been any changes in those external services that could affect our application.
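As a sketch of how that kind of split can be wired up in a Ruby project (the directory names here are illustrative, not necessarily the exact convention described above), a Rakefile can expose the fast suite and the remote suite as separate tasks so that only the CI server runs the latter:

```ruby
# Rakefile
require "rake/testtask"

# Fast suite: stubbed externals, run constantly (e.g. by autotest) while coding.
Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.pattern = "test/{unit,integration}/**/*_test.rb"
end

# Remote suite: talks to real external services; run by the CI server on merges.
Rake::TestTask.new("test:remote") do |t|
  t.libs << "test"
  t.pattern = "test/remote/**/*_test.rb"
end
```

Then `rake test` stays fast locally, and `rake test:remote` only runs where a failure actually tells you something about the external services.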
Do I still design up-front?
Yes, but we don't do big design up front; basically what Uncle Bob (Robert C. Martin) said.
In addition, we get to the whiteboard before immersing ourselves in coding and create some class collaboration diagrams, just to make sure everyone in the team is on the same page; this also helps us divide the work amongst the team members.
I've been trying to push my mentality when developing at home to be geared more towards TDD and a bit of DDD.
One thing I don't understand, though, is why you would create a fake repository to test against. I haven't really looked into it much, but surely the idea of testing is to help decouple your code (giving you more flexibility), trim down the code needed, and bring down the number of bugs.
So can someone fill in my foolish brain as to why some like to test against fake repositories? I would have thought testing against a real database is a much better alternative to creating a fake one, because then you KNOW that it works against your real-world data store.
The fake repository allows you to test just your application code.
The fake repository means an automated test can easily set up a known state in the repository.
The fake repository will be several orders of magnitude faster than a real database.
The fake repository is NOT a substitute for system testing that will include your database.
As I see it there are two really big reasons why you test against faked resources:
It makes unit testing faster when you have mocked out slow I/O or database access. This may not seem like much if you have a small test suite, but when you're up to 500+ unit tests it starts to make a difference. At that volume, the tests that hit the database start to take several seconds to run. Programmers are lazy and want things to go fast, so if running the test suite takes more than 10 seconds you won't be happy to do TDD anymore.
It forces you to think about your code design to make changes easier. Design by contract and dependency injection also become much easier if you've coded against interfaces or abstract classes. Done right, such a design makes it easier to accommodate changes in your code.
The only drawback is the obvious one:
How can you be sure it really works?
...and that is what integration tests are for.
I upvoted Giraffe's answer, but want to add just a couple of points:
Each developer can use a mock/fake repository for her/his own unit testing without interfering with the tests being done by other developers on the same project.
Using a local mock/fake repository reinforces the use of a data abstraction layer, which is good design practice.
As an example, I've used something as simple as a HashMap to implement a mock of the data access layer. This makes it extremely easy for each unit test to ensure that exactly the necessary conditions exist for its purpose, and to verify that the right calls were made on the data access layer.
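For what it's worth, here is a hedged Ruby sketch of the same idea (a Hash standing in for the HashMap; every class and method name is invented for illustration), showing both the known-state setup and the call verification:

```ruby
require "minitest/autorun"

# Hash-backed fake of the data access layer; not a real persistence API.
class FakeUserRepository
  attr_reader :saved

  def initialize(seed = {})
    @users = seed.dup   # exact starting state, chosen per test
    @saved = []         # records calls so the test can verify them
  end

  def find(id)
    @users[id]
  end

  def save(user)
    @saved << user
    @users[user[:id]] = user
  end
end

# Hypothetical object under test: grants a discount and persists the result.
class DiscountService
  def initialize(repo)
    @repo = repo
  end

  def grant(id)
    user = @repo.find(id) || raise(ArgumentError, "no such user")
    @repo.save(user.merge(discount: 0.1))
  end
end

class DiscountServiceTest < Minitest::Test
  def test_grant_persists_the_discount
    repo = FakeUserRepository.new({ 7 => { id: 7, name: "Ada" } })

    DiscountService.new(repo).grant(7)

    assert_equal 1, repo.saved.size             # the right call was made...
    assert_equal 0.1, repo.find(7)[:discount]   # ...and the state is what we expect
  end
end
```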