Performance Testing Versus Unit Testing

I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:
Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test).
Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit).
My question is: do my findings / leanings correspond with the thoughts of the community?

I agree with your findings/leanings. True unit tests only test a portion of the system, ignoring, mocking, or faking the rest as necessary. Integration tests (or regression tests) test most or all of the units working together, and that is the true measure of performance.

In some situations you can use unit tests to make sure an operation finishes within a certain time period. If you want to add more features to your operation without sacrificing its performance, you can use unit tests to assert that. Of course, these kinds of unit tests are machine-dependent, but you can add some extra variables or configuration to the equation.
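For instance, a minimal sketch of such a timing assertion in JUnit 5 terms (the thread talks about NUnit, which has comparable facilities); the perf.slack system property and the sorted-list operation are stand-ins I made up:

import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import java.util.Collections;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import org.junit.jupiter.api.Test;

class OperationTimeBudgetTest {

    // Hypothetical knob so slower machines can relax the budget, e.g. -Dperf.slack=2.0
    private static final double SLACK =
            Double.parseDouble(System.getProperty("perf.slack", "1.0"));

    @Test
    void operationStaysWithinItsTimeBudget() {
        Duration budget = Duration.ofMillis((long) (200 * SLACK));

        // Stand-in for the operation under test: sort a largish list.
        assertTimeout(budget, () ->
                Collections.sort(IntStream.range(0, 100_000)
                        .boxed()
                        .collect(Collectors.toList())));
    }
}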

Performance tests might very well be made up of unit tests.
For example, a unit test might pass several different parameters into a method and verify that the method returns an expected output. A performance test might execute that unit test 1000 times (or whatever value makes sense for you) while recording everything from CPU and memory counters right down to how long each test took.
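A hand-rolled sketch of that idea, not tied to any particular framework; the shuffled-list operation stands in for the unit-tested method you would really call:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Re-run an already-verified operation many times and record how long each run took. */
public class RepeatedTimingHarness {

    public static List<Long> measure(Runnable testBody, int iterations) {
        List<Long> elapsedNanos = new ArrayList<>(iterations);
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            testBody.run();                      // the functional check you already trust
            elapsedNanos.add(System.nanoTime() - start);
        }
        return elapsedNanos;                     // feed into percentiles, CPU/memory counters, etc.
    }

    public static void main(String[] args) {
        // Stand-in operation; in practice this would invoke the unit-tested method.
        List<Long> timings = measure(
                () -> Collections.shuffle(new ArrayList<>(List.of(1, 2, 3, 4, 5))), 1000);
        long worstMs = timings.stream().mapToLong(Long::longValue).max().orElse(0) / 1_000_000;
        System.out.println("worst run: " + worstMs + " ms");
    }
}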

I agree that performance tests cannot be unit tests but there is no reason we cannot have another set of tests called performance tests.
Broadly, the tests fall into two categories:
a) Unit tests
b) Integration tests
We run integration tests against the real database (instead of in-memory) to ensure the SQL scripts and the Hibernate repositories work as expected.
My idea is that we can add another set of tests, called performance tests, as part of the nightly build, which test the performance of certain functions. This is important for tracking statistics after a code refactoring, or for evaluating whether changes to one part of the application have unintended consequences for another.
I have come across JUnitPerf, which might help me achieve this objective.
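A hedged sketch of such a nightly performance test. This is not the JUnitPerf API, just a plain JUnit 5 test tagged so the nightly build can pick it up separately; the measured loop and threshold are made up:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagged so the nightly build can run it separately from the fast unit-test suite
// (e.g. via a JUnit tag filter or a separate source set in the build script).
@Tag("performance")
class NightlyPerformanceTest {

    @Test
    void averageRunTimeStaysUnderThreshold() {
        int runs = 500;
        double sink = 0;                                  // keep the work from being optimized away
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            // Stand-in for the function whose performance you are tracking.
            sink += Math.sqrt(i * 31.0);
        }
        long avgMicros = (System.nanoTime() - start) / runs / 1_000;

        // Threshold picked from earlier nightly statistics; tune per build machine.
        assertTrue(avgMicros < 500, "average run took " + avgMicros + " µs (sink=" + sink + ")");
    }
}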

Unit tests should take almost no time to execute because you are only testing a very specific unit / system. For example, if your system under test is ClassA : IClassA, you do your mocking / stubbing and test only the behaviour of ClassA; you should not be testing behaviour other than ClassA's, such as what happens when ClassA uses ClassB. You should inject a mock of ClassB instead of the concrete class to achieve this.
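As a sketch of that isolation, here in Java/Mockito terms since the thread mixes stacks; ClassA and ClassB are the hypothetical classes from the paragraph above, and their methods here are invented for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class ClassATest {

    @Test
    void classAIsTestedWithoutExercisingClassB() {
        // ClassB is replaced by a mock, so this test covers ClassA's behaviour only.
        ClassB classB = mock(ClassB.class);
        when(classB.lookupRate("EUR")).thenReturn(1.1);   // invented method for illustration

        ClassA classA = new ClassA(classB);               // constructor injection of the dependency

        assertEquals(110.0, classA.convert(100.0, "EUR"), 0.001);
        verify(classB).lookupRate("EUR");                 // verify the interaction, not ClassB itself
    }
}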
In terms of performance tests, it still makes sense to use a testing framework like NUnit / MBUnit / MavenThought; just keep these tests in a separate assembly and don't invoke them as part of your unit test run.
So if you use Rake to invoke your tests, some of your tasks might look like:
Rake Test:All #Run all unit tests
Rake Test:Acceptance #Run all acceptance tests
Rake Test:Performance #Run all performance tests
Rake Test:Integration #Run all integration tests
Then with your continuous integration, Test:All is always invoked after a successful build, whereas Test:Performance is invoked at 12am once a day.

It all depends on what you call performance testing. When micro-optimizing specific code I usually use something very similar to unit testing (should I call it unit performance testing?). That's basically what I do in this question (though without bothering to use a unit test framework there). But I also do this kind of thing to optimize my C++ production code within the Boost unit testing framework.
Really, there are many kinds of performance testing at different levels and with different purposes (heavy-load stress tests, profiling, micro-optimization). The performance testing you are speaking of in your question seems to be at the functional testing level, a level for which you probably won't use a unit testing framework anyway.

I remember years ago Microsoft advocated that programmers performance-test their individual ASPs using Visual Studio .NET Application Center Test (ACT). There was (and may still be out there) a whole methodology for performing Transaction Cost Analysis (TCA) on individual ASPs. That said, these ASPs could be tested using a web driver and possibly mock objects to isolate the code under test (that is, mimicking DB access if it wasn't developed yet).
This approach can be followed with any unit testing, provided you have a driver and, optionally, a mock object framework to take care of any dependencies that are not yet written. This approach has also become popular with SoapUI/LoadUI. In addition, I would recommend isolating individual SQL statements that can be tested (and optimized) against a given database design. This (DB) SQL unit performance testing can be done early in the SDLC and it will uncover query optimization opportunities.
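A hedged sketch of that kind of early SQL timing check, using plain JDBC and JUnit against an in-memory H2 schema; the table, query, and budget are placeholders for your own DDL and expectations:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;

class SqlStatementTimingTest {

    @Test
    void customerOrderLookupStaysWithinBudget() throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:orders")) {
            try (Statement st = con.createStatement()) {
                // Tiny stand-in schema; in practice this is your project's DDL and seed data.
                st.execute("CREATE TABLE orders(id BIGINT, customer_id BIGINT, total DECIMAL(10,2))");
                st.execute("INSERT INTO orders VALUES (1, 42, 99.50)");
            }

            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id, total FROM orders WHERE customer_id = ?")) {
                ps.setLong(1, 42L);

                long start = System.nanoTime();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { /* drain the result set */ }
                }
                long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

                // Loose early-SDLC budget; blowing it is a prompt to inspect the plan and indexes.
                assertTrue(elapsedMillis < 100, "query took " + elapsedMillis + " ms");
            }
        }
    }
}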
In terms of cost and value: I have found that early unit performance testing, using mock objects as appropriate, will identify memory leaks, excessive CPU usage, and disk I/O early in the SDLC, but I would 'cherry-pick' the code under test for higher-risk items.

There are clear differences between a unit test and a performance test. First and foremost, a unit test checks the application against its functional requirements; for example, you want to ensure that clicking the Home tab navigates the page to the home screen. A performance test, by contrast, is a type of non-functional test: here you are concerned with the stability and responsiveness of the application under a particular user load for a certain amount of time.

Related

Laravel API testing unit vs feature

I am using Laravel 5.6 to write an API with no views, just endpoints.
I am very new to testing, and my understanding is that unit tests are from a programmer's perspective and feature tests are from a user's perspective.
As I am only creating an API, would I be right to assume that I will only be writing unit tests, and that I am safe to remove the tests/Feature directory altogether?
My tests will consist of things like
public function it_authenticate_a_user()
Sorry if it's a dull question, I am only trying to learn.
Thanks
No, it's not a good idea to write only unit tests.
A true unit test verifies that a single class or function works as expected. However, it does NOT verify that the whole application works as expected - it's perfectly possible for the application to have 100% coverage from unit tests, but not actually work because the components don't quite fit together as expected. You should also write functional tests for your endpoints. The majority of your tests should be unit tests, but it's a good idea to make sure every API method is covered by functional tests too, just to make sure the pieces fit together.
Put it this way: at Google they advocate a model called the testing pyramid, which gives a typical ratio of 70% unit tests, 20% functional tests, and 10% high-level acceptance tests. It should not be seen as rigid, and for an API I see little need for acceptance tests, but it gives some idea as to a healthy mix of tests.
An API is in some ways easier to test than a conventional web app since it's stateless and each method is relatively simple, but it's just as important it has functional test coverage. It's straightforward to test API routes in Laravel - just do the setup, make the request and check the response is correct and any changes have been made.

Order of Operations for System Testing?

I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load-testing or database tuning oughta be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of validation testing, system testing is the correct term. Database testing is a part of integration and system testing. Also, load testing is performed during the system and user acceptance test phases.

Tagging unit-tests

I'm working on a PHP project with solid unit-tests coverage.
I've noticed that lately I've been doing some very tricky manipulations with the unit test Command-Line Test Runner's --filter command.
Here is this command's explanation from official documentation:
--filter
Only runs tests whose name matches the given pattern. The pattern can be either the name of a single test or a regular expression that matches multiple test names.
I often use it because sometimes it becomes very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
I have heard that sometimes it is good practice to only run the whole test suite on your Continuous Integration machine, if you know for sure that you have modified only one component and are 100% confident that it won't break any other component's unit tests.
What do you think about it?
Some time ago I thought we shouldn't care so much about the time required to run the whole suite of unit tests, but when you have very complicated business logic and unit tests, this can take significant time.
I understand that "real" unit tests shouldn't interact with the DB and should use mock/stub objects instead; I agree with that. But sometimes it is much easier (cheaper) to use DB fixtures for the tests.
Please give me some advice on how this problem can be solved.
Good unit tests should:
Have clear method names and variable names to act as documentation
Run fast. This is also possible for tests with complicated business logic; a test should run in an average time of around 0.1 second.
Test exactly one thing in one test method
Not integrate with external resources like the filesystem, email, databases, web services, and everything else. You can create separate database integration tests to test your database interaction. These tests will be slower than your unit tests most of the time. I put my integration tests in a separate project and I run them only when I am working on the integration code. I also run them on all builds on the CI server.
Be completely isolated from each other. When you have tests depending on each other, you cannot see what your problem is just from reading which tests failed. You might have to debug to find the problem. Isolated tests will save you a lot of time.
Personally, I don't use category names in my tests. I use 2 test projects per application: one for the unit tests and one for the integration tests and slower tests.
In reaction to:
"But sometimes, it is much easier (cheaper) to use DB fixtures for the tests."
When your code is written well, it will be easier to mock. I don't know about mocking frameworks in PHP, but I use them in other languages to save me a lot of time. Writing the test first and the code later might help you design your code to be easier to test.
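For instance, a minimal Java/Mockito sketch of trading a DB fixture for a stubbed repository (the asker works in PHP, where PHPUnit's test doubles play the same role); CustomerRepository and DiscountService are invented names:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    @Test
    void goldCustomersGetTenPercentOff() {
        // No DB fixture: the repository the service depends on is stubbed in memory.
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findLevel("alice")).thenReturn("GOLD");

        DiscountService service = new DiscountService(repository);

        assertEquals(90.0, service.priceFor("alice", 100.0), 0.001);
    }
}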
Personally I learned to test better by
reading blogs about it
reading books about it
reading tested code written by others
writing a lot of tests, of course. It took me a few thousand tests to become good at it.
I often use it because sometimes it becomes very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
Sure, as long as you run the full set of unit-tests occasionally (via a CI server sounds perfect)
Running the "interesting" tests regularly is better than running all the tests rarely..
I'd address the issue by having a subset of tests ("smoke tests") that take 1 minute or less that must be run before committing, then run the full set of tests from your CI server.
If your full set of tests takes > 15 minutes then I'd look to divide them and run them in parallel.
Then you can use the --filter to run the tests you're most interested in first, then the smoke tests prior to commit, and have the rest run from the CI server.

How to automate integration testing?

I'd like to know something. I know that to make your tests easier you should use mocks during unit testing so that you only test the component you want, without external dependencies. But at some point, you have to bite the bullet and test classes which interact with your database, files, network, etc.
My main question is: what do you do to test these classes?
I don't feel that installing a database on my CI server is a good practice, but do you have other options?
Should I create another server with other CI tools, with all the external dependencies?
Should I run integration test on my CI as often as my unit tests?
Maybe a full-time person should be in charge of testing these components manually? (Or in charge of creating the test environment and configuring the interaction between your classes and your external dependencies, like editing your application's config files.)
I'd like to know how you do it in the real world.
I'd like to know how you do it in the real world.
In the real world there isn't a simple prescription about what to do, but there is one guiding truth: you want to catch mistakes/bugs/test failures as soon as possible after they are introduced. Let that be your guide; everything else is technique.
A couple common techniques:
Tests running in parallel. This is my preference; I like to have two systems, each running their own instance of CruiseControl* (which I'm a committer for), one running the unit tests with fast feedback (< 5 minutes) while another system runs the integration tests constantly. I like this because it minimizes the delay between when a checkin happens and a system test might catch it. The downside that some people don't like is that you can end up with multiple test failures for the same checkin, both a unit test failure and an integration test failure. I don't find this a major downside in practice.
A life-cycle model where system/integration tests run only after unit tests have passed. There are tools like AnthillPro* that are built around this kind of model and the approach is very popular. In their model they take the artifacts that have passed the unit tests, deploy them to a separate staging server, and then run the system/integration tests there.
If you've more questions about this topic I'd recommend the Continuous Integration and Testing Conference (CITCON) and/or the CITCON mailing list.
There are lots of CI and build|process automation tools out there. These are just representatives of their class of tools.
The approach I've seen taken most often is to run unit tests immediately on checkin, and to run more lengthy integration tests at fixed intervals (possibly on a different server; that's really up to your preference). I've also seen integration tests split into "short-running" integration tests and "long-running" integration tests, which are run at different intervals (the "short-running" tests run every hour, for example, and the "long-running" tests run overnight).
The real goal of any automated testing is to get feedback to developers as quickly as is feasible. With that in mind, you should run integration tests as often as you possibly can. If there's a wide variance in the run length of your integration tests, you should run the quicker integration tests more often, and the slower integration tests less often. How often you run any set of tests in going to depend on how long it takes all the tests to run, and how disruptive the test runs will be to shorter-running tests (including unit tests).
I realize this doesn't answer your entire question, but I hope it gives you some ideas about the scheduling part.
Depending on the actual nature of the integration tests I'd recommend using an embedded database engine which is recreated at least once before any run. This enables tests of different commits to work in parallel and provides a well defined starting point for the tests.
Network services - by definition - can also be installed somewhere else.
Always be very careful, though, to keep your CI machine separated from any dev or prod environments.
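A minimal sketch of that approach with an in-memory H2 database (HSQLDB, mentioned further down, works much the same way): the schema and seed data are recreated before each test, so runs start from a known state and don't clash. The table and data are made up:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CustomerDaoIntegrationTest {

    private Connection connection;

    @BeforeEach
    void recreateSchema() throws Exception {
        // A private in-memory database, rebuilt before every test: nothing leaks between runs.
        connection = DriverManager.getConnection("jdbc:h2:mem:itest;DB_CLOSE_DELAY=-1");
        try (Statement st = connection.createStatement()) {
            st.execute("DROP TABLE IF EXISTS customers");
            st.execute("CREATE TABLE customers(id BIGINT PRIMARY KEY, name VARCHAR(100))");
            st.execute("INSERT INTO customers VALUES (1, 'Alice')");
        }
    }

    @Test
    void findsSeededCustomer() throws Exception {
        try (Statement st = connection.createStatement()) {
            assertTrue(st.executeQuery("SELECT name FROM customers WHERE id = 1").next());
        }
    }
}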
I do not know what kind of platform you're on, but I use Java. Where I work, we create integration tests in JUnit and inject the proper dependencies using a DI container like Spring. They are run against a real data source, both by the developers themselves (normally a small subset) and the CI server.
How often you run the integration tests depends on how long they take to run, in my opinion. Run them as often as you can. Leave the real person out of this, and let him or her run manual system tests in areas that are difficult or too expensive to automate testing for (for instance: spelling, or the position of different GUI components). Leave the editing of config files to a machine. Where I work, we have system variables (DEV, TEST, and so on) set on the computers, and let the app choose a config file based on that.
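A hedged sketch of that Spring-wired style of integration test, using the classic spring-test JUnit 4 runner; the context file and OrderDao bean are hypothetical:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// The Spring TestContext framework builds the application context from the
// (hypothetical) integration-test-context.xml and injects real beans into the test.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:integration-test-context.xml")
public class OrderDaoIntegrationTest {

    @Autowired
    private OrderDao orderDao;   // hypothetical DAO wired against a real data source

    @Test
    public void loadsAnOrderFromTheRealDatabase() {
        assertNotNull(orderDao.findById(1L));
    }
}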

Database integration tests

When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run on the same database?
Transactions.
What the ruby on rails unit test framework does is this:
Load all fixture data.
For each test:
BEGIN TRANSACTION
# Yield control to user code
ROLLBACK TRANSACTION
End for each
This means that
Any changes your test makes to the database won't affect other threads while it's in-progress
The next test's data isn't polluted by prior tests
This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
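The same rollback trick outside Rails, as a minimal JUnit/JDBC sketch against an in-memory H2 database (the URL, table, and data are placeholders): fixtures are loaded once, then each test runs inside a transaction that is rolled back afterwards.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class TransactionalFixtureTest {

    private static final String URL = "jdbc:h2:mem:fixtures;DB_CLOSE_DELAY=-1";  // placeholder URL

    private Connection connection;

    @BeforeAll
    static void loadFixtures() throws Exception {
        // "Load all fixture data" once, committed, before any test runs.
        try (Connection setup = DriverManager.getConnection(URL)) {
            setup.createStatement().execute("CREATE TABLE accounts(id INT PRIMARY KEY)");
            setup.createStatement().execute("INSERT INTO accounts VALUES (1)");
        }
    }

    @BeforeEach
    void beginTransaction() throws Exception {
        connection = DriverManager.getConnection(URL);
        connection.setAutoCommit(false);                       // BEGIN TRANSACTION
    }

    @Test
    void testCanMutateDataFreely() throws Exception {
        connection.createStatement().execute("DELETE FROM accounts");  // stand-in for real test work
    }

    @AfterEach
    void rollBack() throws Exception {
        connection.rollback();                                 // ROLLBACK TRANSACTION
        connection.close();
        try (Connection check = DriverManager.getConnection(URL);
             ResultSet rs = check.createStatement().executeQuery("SELECT COUNT(*) FROM accounts")) {
            rs.next();
            assertEquals(1, rs.getInt(1));                     // the fixture row survived untouched
        }
    }
}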
For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However, it only works if you're using simple, generic SQL functionality or can easily hide the slight differences between SQLite and your production database system behind a class, but I've always found that to be fairly easy in the SQL applications I've developed.
Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing where each test gets a clean instance of the DB.
I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to chose depends on the individual case (mostly the size of the database).
Also run the tests at different times, so that they do not impact the performance or validity of each other.
While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependent they are on one another. The tediousness will hold true if you have one database per test or group of dependent tests.
When running the test suite, you load the data at the start, run the test suite, unload/compare results making sure the actual result meets the expected result. If not, do the cycle again. Load, run suite, unload/compare.
