I have been trying to figure out the difference between SetupSuite and SetupTest for quite some time now. Based on information from blogs, I understand that SetupSuite runs once before the entire suite and SetupTest runs before each test case. But what would be a practical example of each? And how does dependency injection differ between the two cases?
Generally you want to use SetupTest so that each individual test function runs with a clean environment. SetupSuite is useful in cases where the setup code is time consuming and isn't modified in any of the tests. An example of when this could be useful is if you were testing code that reads from a database, and all the tests used the same data and only ran SELECT statements. In this scenario, SetupSuite could be used once to load the database with data.
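Here is a minimal sketch using testify's suite package; the fakeDB type and openAndSeedFakeDB helper are hypothetical stand-ins for a real seeded database:

package store_test

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

// RepoSuite groups tests that share one expensive resource (e.g. a seeded DB).
type RepoSuite struct {
	suite.Suite
	db    *fakeDB  // shared resource, created once for the whole suite
	items []string // per-test state
}

// SetupSuite runs once, before the first test in the suite.
func (s *RepoSuite) SetupSuite() {
	s.db = openAndSeedFakeDB() // expensive: connect and load fixtures once
}

// SetupTest runs before every test, giving each test a clean slate.
func (s *RepoSuite) SetupTest() {
	s.items = nil // reset per-test state so tests don't leak into each other
}

func (s *RepoSuite) TestSelectAll() {
	s.Equal(3, s.db.Count()) // read-only, so sharing the seeded DB is safe
}

func (s *RepoSuite) TestAppend() {
	s.items = append(s.items, "a")
	s.Len(s.items, 1) // per-test state was just reset by SetupTest
}

func TestRepoSuite(t *testing.T) {
	suite.Run(t, new(RepoSuite))
}

// --- hypothetical helpers standing in for a real database ---

type fakeDB struct{ rows []string }

func (f *fakeDB) Count() int { return len(f.rows) }

func openAndSeedFakeDB() *fakeDB {
	return &fakeDB{rows: []string{"x", "y", "z"}}
}

Dependency injection works the same way in both cases: you attach the dependency to a suite field, and the only difference is whether you assign it once for the whole suite (SetupSuite) or fresh for every test (SetupTest).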
I'm using PHP with the Laravel 5.5 framework.
I recently started writing unit tests for my code and I have a few questions:
What is the best way to interact with my database?
Should I use an in-memory DB like SQLite, or mock everything with Mockery?
If I have an interaction with the DB, is that still unit testing or integration testing?
Thank you in advance for the answers
I work in a company where we strive for 80% code coverage. In general we test mostly end-to-end, with a database and with external calls mocked; we use SQLite so our test suite can run quickly in a local environment. When it makes sense, we unit test: for example, I unit tested a tax service I wrote for different countries, because it was very input/output based.
Why we prefer end-to-end:
It's quicker if you don't have to write unit, integration, and end-to-end tests separately
You test the endpoint that will actually be used
I prefer to run against a real database if you are running with Continuous Integration
There are drawbacks with SQLite: it does not behave like other relational databases, and it has its own settings and limitations. Off the top of my head, I had problems with foreign key enforcement, etc.
So to answer your question:
It's smart to use SQLite at least locally
In unit testing you are only testing one class and mocking everything else; you are basically testing that the class behaves as expected in isolation. Note this is an oversimplified view of a very complex subject.
I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load-testing or database tuning oughta be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of validation testing, system testing is the correct term. Database testing is a part of integration and system testing. Also, load testing is performed during the system and user acceptance testing phases.
I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:
Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test).
Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit).
My question is: do my findings / leanings correspond with the thoughts of the community?
I agree with your findings/learnings. True unit tests only test a portion of the system, ignoring, mocking or faking the rest as necessary. Integration tests (or regression tests) test most or all of the units working together, and that is the true measure of performance.
In some situations you can use unit tests to make sure that an operation finishes within a certain time period. If you want to add more features to your operation but don't want to sacrifice performance, you can use unit tests to assert that. Of course, these kinds of unit tests are machine dependent, but you can throw some additional variables or configuration into the equation.
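Here is a minimal sketch of such a time-bounded test using Go's testing package; the sortLargeSlice operation and the time budget are hypothetical and machine dependent:

package perf_test

import (
	"sort"
	"testing"
	"time"
)

// sortLargeSlice is a hypothetical stand-in for the operation under test.
func sortLargeSlice() {
	data := make([]int, 100000)
	for i := range data {
		data[i] = len(data) - i
	}
	sort.Ints(data)
}

// TestSortFinishesQuickly asserts the operation stays under a (machine
// dependent) time budget, so a later change that slows it down fails the test.
func TestSortFinishesQuickly(t *testing.T) {
	const budget = 200 * time.Millisecond // hypothetical threshold

	start := time.Now()
	sortLargeSlice()
	elapsed := time.Since(start)

	if elapsed > budget {
		t.Fatalf("operation took %v, want <= %v", elapsed, budget)
	}
}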
Performance tests might very well be made up of unit tests.
For example, a unit test might throw several different parameters into a method and verify the method returns an expected output. A performance test might execute that unit test 1000 times (or whatever value makes sense for you) while recording everything from CPU and memory counters right down to how long each test took.
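As a sketch of that idea in Go, for example, a benchmark can repeatedly drive the same code a unit test exercises while the framework records how long each iteration takes (processSample is a hypothetical stand-in for the method under test):

package perf_test

import "testing"

// processSample is a hypothetical stand-in for the method that the unit test
// feeds with different parameters and checks for an expected output.
func processSample(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

// BenchmarkProcessSample calls the method b.N times; running
// "go test -bench=. -benchmem" reports time per call and allocations,
// which can be tracked from build to build.
func BenchmarkProcessSample(b *testing.B) {
	for i := 0; i < b.N; i++ {
		processSample(1000)
	}
}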
I agree that performance tests cannot be unit tests but there is no reason we cannot have another set of tests called performance tests.
Broadly the tests fall under two categories
a) Unit tests
b) Integration tests
We run integration tests against the real database (instead of in-memory) to ensure the SQL scripts and the Hibernate repositories work as expected
My idea is that we can add another set of tests, called performance tests, as part of the nightly build, which test the performance of certain functions. This is important for tracking statistics after a code refactoring, or for evaluating whether changes to one part of the application have unintended consequences on another.
I have come across JunitPerf which might help me to achieve this objective.
Unit tests should take almost no time to execute because you are only testing a very specific unit / system. For example, if your system under test is ClassA : IClassA, you do your mocking / stubbing and only test the behaviour of ClassA; you should not be testing behaviour outside ClassA, such as the case where ClassA uses ClassB. You should inject a mock of ClassB instead of the concrete implementation to achieve this.
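Here is a minimal sketch of that injection pattern in Go, where an interface plays the role of ClassB's contract; all of the names are hypothetical:

package service_test

import "testing"

// Mailer is the dependency's contract (plays the role of ClassB's interface).
type Mailer interface {
	Send(to, body string) error
}

// Notifier is the class under test (ClassA); it depends only on the interface.
type Notifier struct {
	mailer Mailer
}

func (n *Notifier) Welcome(user string) error {
	return n.mailer.Send(user, "welcome")
}

// mailerMock records calls instead of sending real mail.
type mailerMock struct {
	sent []string
}

func (m *mailerMock) Send(to, body string) error {
	m.sent = append(m.sent, to)
	return nil
}

// The unit test exercises only Notifier's behaviour; the concrete mailer is
// never involved, so the test runs instantly and in isolation.
func TestWelcomeSendsOneMail(t *testing.T) {
	mock := &mailerMock{}
	n := &Notifier{mailer: mock}

	if err := n.Welcome("alice"); err != nil {
		t.Fatal(err)
	}
	if len(mock.sent) != 1 || mock.sent[0] != "alice" {
		t.Fatalf("unexpected sends: %v", mock.sent)
	}
}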
In terms of performance tests, it makes sense to still use a testing framework like NUnit / MBUnit / MavenThought, just keep these tests in a separate assembly and don't invoke them as part of your unit tests.
So if you use Rake to invoke your tests, some of your tasks might look like:
Rake Test:All #Run all unit tests
Rake Test:Acceptance #Run all acceptance tests
Rake Test:Performance #Run all performance tests
Rake Test:Integration #Run all integration tests
Then with your continuous integration, Test:All is always invoked after a successful build, whereas Test:Performance is invoked once a day at 12am.
It all depends on what you call performance testing. When micro-optimizing specific code I usually use something very similar to unit testing (should I call it unit performance testing?). That's basically what I do in this question (though without bothering to use a unit test framework there). But I also do this kind of thing to optimize my C++ production code within the Boost unit testing framework.
Really, there are many kinds of performance testing, at different levels and with different purposes (heavy-load stress tests, profiling, micro-optimization). The performance testing you are speaking of in your question seems to be at the functional testing level, a level at which you probably won't use a unit testing framework anyway.
I remember years ago Microsoft advocated that programmers performance test their individual ASPs using Visual Studio .NET Application Center Test (ACT). There was (and may still be out there) a whole methodology for performing Transaction Cost Analysis (TCA) on individual ASPs. That said, these ASPs could be tested using a web driver and possibly mock objects to isolate the code under test (that is, mimic DB access if it wasn't developed yet).
This approach can be followed with any unit testing, provided you have a driver and, optionally, a mock object framework to take care of any dependencies that are not yet written. This approach has also become popular with SoapUI/LoadUI. In addition, I would recommend isolating individual SQL statements that can be tested (optimized) against a given database design. This (DB) SQL unit performance testing can be done early in the SDLC and it will uncover query optimization opportunities.
In terms of cost and value: I have found that early unit performance testing, using mock objects as appropriate, will identify memory leaks, excessive CPU usage, and disk IO early in the SDLC, but I would 'cherry pick' the code under test, focusing on higher-risk items.
There are clear differences between unit tests and performance tests. First and foremost, a unit test checks the application against its functional requirements; for example, you want to ensure that clicking the Home tab navigates the webpage to the home page. A performance test, on the other hand, is a type of non-functional test: here you are concerned with the stability and responsiveness of the application under a particular user load over a certain amount of time.
I'm working on a PHP project with solid unit-tests coverage.
I've noticed that lately I've been doing very tricky manipulations with the unit-test Command-Line Test Runner's --filter option.
Here is this option's explanation from the official documentation:
--filter
Only runs tests whose name matches the given pattern. The pattern can be either the name of a single test or a regular expression that matches multiple test names.
I often use it because sometimes it becomes very useful to run just a single test suite or test case from the whole test base.
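For example (the test and class names here are hypothetical):
phpunit --filter testUserCanLogin    #Run a single test method
phpunit --filter UserTest    #Run every test whose name matches UserTest
phpunit --filter 'UserTest::testUserCanLogin'    #Run one test in one class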
I'm wondering if this is good practice or not?
I have heard that sometimes it is good practice to run the whole test suite only on your Continuous Integration machine, if you know for sure that you have modified only one component and are 100% confident that it won't fail other components' unit tests.
What do you think about it?
Some time ago I thought that we shouldn't care so much about the time required to run the whole suite of unit tests, but when you have very complicated business logic and unit tests, this can take significant time.
I understand that "real" unit tests shouldn't interact with the DB and should use mock/stub objects instead; I agree with that. But sometimes it is much easier (cheaper) to use DB fixtures for the tests.
Please give me some advice, how this problem can be solved?
Good unit tests should:
Have clear method names and variable names to act as documentation
Run fast. This is also possible for tests with complicated business logic. A test should run in an average time of around 0.1 seconds.
Test exactly one thing in one test method
Not integrate with external resources like the filesystem, email, databases, webservices, and everything else. You can create separate database integration tests to test your database interaction. These tests will be slower than your unit tests most of the time. I put my integration tests in a separate project and I run them only when I am working on the integration code. I also run them on all builds on the CI server.
Be completely isolated from each other. When you have tests depending on each other, you cannot see what your problem is just from reading which tests failed; you might have to debug to find it. Isolated tests will save you a lot of time.
Personally, I don't use category names in my tests. I use 2 test projects per application: one for the unit tests and one for the integration tests and slower tests.
In reaction to:
"But sometimes, it is much easier (cheaper) to use DB fixtures for the tests."
When your code is written well, it will be easier to mock. I don't know about mocking frameworks in PHP, but I use them in other languages and they save me a lot of time. Writing the test first and the code later might help you design your code to be more easily testable.
Personally I learned to test better by
reading blogs about it
reading books about it
reading tested code written by others
writing a lot of tests of course. It took me a few thousand tests to become good at it.
I often use it because sometimes it becomes very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
Sure, as long as you run the full set of unit-tests occasionally (via a CI server sounds perfect)
Running the "interesting" tests regularly is better than running all the tests rarely.
I'd address the issue by having a subset of tests ("smoke tests") that takes 1 minute or less and must be run before committing, then run the full set of tests from your CI server.
If your full set of tests takes > 15 minutes then I'd look to divide them and run them in parallel.
Then you can use the --filter to run the tests you're most interested in first, then the smoke tests prior to commit, and have the rest run from the CI server.
When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run against the same database?
Transactions.
What the Ruby on Rails unit test framework does is this:
Load all fixture data.
For each test:
BEGIN TRANSACTION
# Yield control to user code
ROLLBACK TRANSACTION
End for each
This means that
Any changes your test makes to the database won't affect other threads while it's in-progress
The next test's data isn't polluted by prior tests
This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
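Here is a minimal sketch of the same rollback-per-test pattern in Go, assuming database/sql with a Postgres driver; the connection string, schema, and queries are hypothetical:

package repo_test

import (
	"database/sql"
	"testing"

	_ "github.com/lib/pq" // hypothetical choice of Postgres driver
)

// withRollback runs fn inside a transaction that is always rolled back,
// so whatever fn writes never leaks into other tests.
func withRollback(t *testing.T, db *sql.DB, fn func(tx *sql.Tx)) {
	t.Helper()

	tx, err := db.Begin() // BEGIN TRANSACTION
	if err != nil {
		t.Fatal(err)
	}
	defer tx.Rollback() // ROLLBACK TRANSACTION, even if the test fails

	fn(tx) // yield control to the test body
}

func TestCreateUserIsIsolated(t *testing.T) {
	// Hypothetical connection string pointing at a fixture-loaded test database.
	db, err := sql.Open("postgres", "postgres://localhost/testdb?sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	withRollback(t, db, func(tx *sql.Tx) {
		if _, err := tx.Exec(`INSERT INTO users(name) VALUES ($1)`, "alice"); err != nil {
			t.Fatal(err)
		}

		var n int
		if err := tx.QueryRow(`SELECT COUNT(*) FROM users WHERE name = $1`, "alice").Scan(&n); err != nil {
			t.Fatal(err)
		}
		if n != 1 {
			t.Fatalf("got %d rows, want 1", n)
		}
	}) // the insert is rolled back here; the shared fixture data is untouched
}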
For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However, it only works if you're using simple, generic SQL functionality, or if you can easily hide the slight differences between SQLite and your production database system behind a class; I've always found that to be fairly easy in the SQL applications I've developed.
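Here is a minimal sketch of a per-test in-memory SQLite database in Go, assuming the mattn/go-sqlite3 driver; the schema is hypothetical:

package sqlitetest

import (
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3" // hypothetical choice of SQLite driver
)

// newTestDB gives every test its own throwaway in-memory database.
func newTestDB(t *testing.T) *sql.DB {
	t.Helper()

	db, err := sql.Open("sqlite3", ":memory:") // unique, standalone per test
	if err != nil {
		t.Fatal(err)
	}
	db.SetMaxOpenConns(1) // each ":memory:" connection is its own DB, so keep a single connection
	t.Cleanup(func() { db.Close() })

	// Hypothetical schema shared by the tests.
	if _, err := db.Exec(`CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		t.Fatal(err)
	}
	return db
}

func TestInsertAndCount(t *testing.T) {
	db := newTestDB(t)

	if _, err := db.Exec(`INSERT INTO users(name) VALUES (?)`, "alice"); err != nil {
		t.Fatal(err)
	}

	var n int
	if err := db.QueryRow(`SELECT COUNT(*) FROM users`).Scan(&n); err != nil {
		t.Fatal(err)
	}
	if n != 1 {
		t.Fatalf("got %d users, want 1", n)
	}
}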
Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing, where each test gets a clean instance of the DB.
I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to choose depends on the individual case (mostly the size of the database).
Also run the tests at different times, so that they do not impact the performance or validity of each other.
While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependent they are on one another. The tediousness holds whether you have one database per test or per group of dependent tests.
When running the test suite, you load the data at the start, run the test suite, then unload/compare results, making sure the actual result matches the expected result. If not, do the cycle again: load, run suite, unload/compare.