I'm using PHP with the Laravel 5.5 framework.
I recently started writing unit tests for my code and I have a few questions:
What is the best way to interact with my database?
Should I use an in-memory database like SQLite, or should I mock everything with Mockery?
If a test interacts with the database, is that still unit testing, or is it integration testing?
Thank you in advance for the answers.
I work at a company where we strive for 80% code coverage. In general we test mostly end-to-end, with a real database and mocked external calls, and we use SQLite so our test suite can run quickly in a local environment. When it makes sense, we unit test; for example, I unit tested a tax service I wrote for different countries, because it was very input/output based.
Why we prefer end-to-end tests:
It's quicker if you don't have to write unit, integration, and end-to-end tests separately.
You test the endpoint that will actually be used.
I prefer to run against a real database if you are using Continuous Integration.
There are drawbacks with SQLite: it does not behave quite like other relational databases, which have many settings it lacks, and it has its own limitations. Off the top of my head, I had problems with foreign key enforcement, among other things.
So to answer your question:
It's smart to use SQLite, at least locally.
In unit testing you are testing only one class and mocking everything else; you are basically testing that the code can execute. Note that this is an oversimplified description of a very complex subject.
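To make that last point concrete, here is a minimal sketch of such a test using PHPUnit and Mockery. OrderService, OrderRepository, and their methods are hypothetical names invented for this example, not part of Laravel:

    use Mockery;
    use PHPUnit\Framework\TestCase;

    // Hypothetical collaborator that would normally hit the database.
    interface OrderRepository
    {
        public function findByUser($userId);
    }

    // Hypothetical class under test.
    class OrderService
    {
        private $orders;

        public function __construct(OrderRepository $orders)
        {
            $this->orders = $orders;
        }

        public function totalForUser($userId)
        {
            $rows = $this->orders->findByUser($userId);
            return array_sum(array_column($rows, 'amount'));
        }
    }

    class OrderServiceTest extends TestCase
    {
        public function test_total_is_calculated_from_repository_rows()
        {
            // Mock the repository so no database is touched.
            $repository = Mockery::mock(OrderRepository::class);
            $repository->shouldReceive('findByUser')
                       ->with(42)
                       ->andReturn([['amount' => 10], ['amount' => 15]]);

            // Only OrderService itself is exercised here.
            $service = new OrderService($repository);

            $this->assertSame(25, $service->totalForUser(42));
        }

        protected function tearDown()
        {
            Mockery::close();
            parent::tearDown();
        }
    }

If this test fails, the problem is in OrderService and nowhere else; that isolation is the whole point.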
Related
I am using Laravel 5.6 to write an API with no views, just endpoints.
I am very new to testing, and my understanding is that unit tests are from a programmer's perspective and feature tests are from a user's perspective.
As I am only creating an API, would I be right to assume that I will only be writing unit tests, and that I am safe to remove the tests/Feature directory altogether?
My tests will consist of things like
public function it_authenticate_a_user()
Sorry if it's a dull question; I am only trying to learn.
Thanks
No, it's not a good idea to write only unit tests.
A true unit test verifies that a single class or function works as expected. However, it does NOT verify that the whole application works as expected: it's perfectly possible for an application to have 100% coverage from unit tests and still not work, because the components don't quite fit together as expected. You should also write functional tests for your endpoints. The majority of your tests should be unit tests, but it's a good idea to make sure every API method is covered by functional tests too, just to make sure the pieces fit together.
Put it this way: at Google they advocate a model called the testing pyramid, which gives a typical ratio of 70% unit tests, 20% functional tests, and 10% high-level acceptance tests. It should not be seen as rigid, and for an API I see little need for acceptance tests, but it gives some idea of a healthy mix of tests.
An API is in some ways easier to test than a conventional web app, since it's stateless and each method is relatively simple, but it's just as important that it has functional test coverage. It's straightforward to test API routes in Laravel: just do the setup, make the request, and check that the response is correct and that any expected changes have been made.
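As a rough sketch of what such a functional test can look like in Laravel, assuming a hypothetical /api/login route that returns a token (the route, the token field, and the factory setup are assumptions for the example):

    use Illuminate\Foundation\Testing\RefreshDatabase;
    use Tests\TestCase;

    class LoginEndpointTest extends TestCase
    {
        use RefreshDatabase;

        public function test_login_returns_a_token_for_valid_credentials()
        {
            // Setup: create a known user with Laravel's model factory helper.
            $user = factory(\App\User::class)->create(['password' => bcrypt('secret')]);

            // Make the request against the real route.
            $response = $this->postJson('/api/login', [
                'email'    => $user->email,
                'password' => 'secret',
            ]);

            // Check the response shape; nothing inside the app is mocked.
            $response->assertStatus(200)
                     ->assertJsonStructure(['token']);
        }
    }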
I'm starting to write unit tests along the lines of https://github.com/neo4jrb/neo4j/wiki/How-To-Test
One of the approaches there is really slow (10 seconds per test), and the other doesn't delete labels (and probably other things).
Can anyone suggest a more elaborate approach? I noticed that in the core Neo4j material, the Java documentation describes methods that create and tear down temporary databases, but I don't see a way to access those from the (very nice) Ruby and Rails Neo4j gems. Perhaps via the low-level REST API? It's hard to figure out which API calls are available.
You could probably surround your tests in transactions, which is a typical approach for testing with ActiveRecord in Ruby. That might be more performant, and it should also help keep the database clean.
But you're right, the impermanent database is the tool provided in Neo4j for temporary testing databases. I think that's only available if you're running JRuby, though. I did, however, run across this:
https://groups.google.com/forum/#!topic/neo4j/7xeEPWEiqD0
Which links to a project which lets you start up a Neo4j server in "in memory" mode (using the impermanent database):
https://github.com/jexp/neo4j-in-memory-server
That's showing examples for Neo4j 2.0.0, so I don't know if it would work for later versions, but it might be worth a shot for your testing database.
EDIT: Another thing that I just thought of is to use the vcr gem:
https://github.com/vcr/vcr
It basically records all of the requests made to your server and then plays them back. This works great for API endpoints where results are idempotent, but if you use it for a database like Neo4j, you should make sure your tests clear the database before every run so that it always starts fresh.
I'm working on a PHP project with solid unit-tests coverage.
I've noticed that lately I've been doing some very tricky manipulations with the Command-Line Test Runner's --filter option.
Here is this command's explanation from official documentation:
--filter
Only runs tests whose name matches the given pattern. The pattern can be either the name of a single test or a regular expression that matches multiple test names.
I often use it, because sometimes it is very useful to run just a single test suite or test case from the whole test base.
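For reference, typical invocations look something like this (the test and class names are made up):

    # Run a single test method by name
    phpunit --filter test_it_authenticates_a_user

    # Run everything in one test case (class)
    phpunit --filter UserTest

    # Use a regular expression to match several tests at once
    phpunit --filter '/test_(login|logout)/' tests/UserTest.php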
I'm wondering if this is good practice or not?
I have heard that it is good practice to run the whole test suite on your Continuous Integration machine even if you know for sure that you have modified only one component and are 100% confident that it won't break the other components' unit tests.
What do you think about it?
Some time ago I thought that we shouldn't care so much about the time required to run the whole suite of unit tests, but when you have very complicated business logic and unit tests, this can take significant time.
I understand that "real" unit tests shouldn't interact with the DB and should use mock/stub objects instead, and I agree with that. But sometimes it is much easier (cheaper) to use DB fixtures for the tests.
Please give me some advice on how this problem can be solved.
Good unit tests should:
Have clear method names and variable names, so that they act as documentation.
Run fast. This is possible even for tests with complicated business logic. A test should run in an average time of around 0.1 seconds.
Test exactly one thing in one test method.
Not integrate with external resources like the filesystem, email, databases, web services, and everything else. You can create separate database integration tests to test your database interaction. These tests will be slower than your unit tests most of the time. I put my integration tests in a separate project and run them only when I am working on the integration code. I also run them on all builds on the CI server.
Be completely isolated from each other. When you have tests depending on each other, you cannot tell what the problem is just from reading which tests failed; you might have to debug to find it. Isolated tests will save you a lot of time.
Personally, I don't use category names in my tests. I use two test projects per application: one for the unit tests and one for the integration tests and slower tests.
In reaction to:
"But sometimes, it is much easier (cheaper) to use DB fixtures for the tests."
When your code is written well, it will be easier to mock. I don't know about mocking frameworks in PHP, but I use them in other languages to save me a lot of time. Writing the test first and the code later might help you design your code to be more easily testable.
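To illustrate what "written well" means here, a small sketch of dependency injection in PHP; all names are invented for the example:

    interface UserRepository
    {
        public function find($id);
    }

    class UserGreeter
    {
        private $users;

        // The dependency is injected rather than created internally,
        // so a test can pass in a Mockery mock or a hand-written fake.
        public function __construct(UserRepository $users)
        {
            $this->users = $users;
        }

        public function greet($id)
        {
            $user = $this->users->find($id);
            return 'Hello, ' . $user['name'];
        }
    }

A class that instead called new PDO(...) inside its own methods would force every test to touch a real database.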
Personally, I learned to test better by:
reading blogs about it
reading books about it
reading tested code written by others
writing a lot of tests, of course. It took me a few thousand tests to become good at it.
I often use it because sometimes it becomes very useful to run just a single test suite or test case from the whole test base.
I'm wondering if this is good practice or not?
Sure, as long as you run the full set of unit tests occasionally (via a CI server sounds perfect).
Running the "interesting" tests regularly is better than running all the tests rarely.
I'd address the issue by having a subset of tests ("smoke tests") that take a minute or less and must be run before committing, then running the full set of tests from your CI server.
If your full set of tests takes more than 15 minutes, I'd look to divide them up and run them in parallel.
Then you can use --filter to run the tests you're most interested in first, run the smoke tests prior to commit, and have the rest run from the CI server.
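One way to mark such a subset in PHPUnit is the @group annotation; the contents of the test here are purely illustrative:

    use PHPUnit\Framework\TestCase;

    class CheckoutSmokeTest extends TestCase
    {
        /**
         * @group smoke
         */
        public function testCheckoutHappyPath()
        {
            // Keep smoke tests small and fast; cover only the critical path.
            $this->assertTrue(true); // placeholder assertion
        }
    }

Then phpunit --group smoke gives you the quick pre-commit run, and a plain phpunit run on the CI server still executes everything.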
I've been trying to push my mentality when developing at home to be geared more towards TDD and a bit of DDD.
One thing I don't understand, though, is why you would create a fake repository to test against. I haven't really looked into it much, but surely the idea of testing is to help decouple your code (giving you more flexibility), trim down the code needed, and bring down the number of bugs.
So can someone fill in my foolish brain as to why some like to test against fake repositories? I would have thought testing against a real database is a much better alternative to creating a fake one, because then you KNOW that it works against your real-world data store.
The fake repository allows you to test just your application code.
The fake repository means an automated test can easily set up a known state in the repository.
The fake repository will be several orders of magnitude faster than a real database.
The fake repository is NOT a substitute for system testing that will include your database.
As I see it, there are two really big reasons why you test against faked resources:
It makes unit testing faster when you have mocked out slow I/O or database access. This may not seem like much if you have a small test suite, but when you're up to 500+ unit tests it starts to make a difference. At that size, tests that run against the database take several seconds each. Programmers are lazy and want things to go fast, so if running the test suite takes more than 10 seconds, you won't be happy doing TDD anymore.
It forces you to think about your code design to make changes easier. Design by contract and dependency injection also become much easier when you've implemented against interfaces or abstract classes. Done right, such a design makes it easier to accommodate changes in your code.
The only drawback is the obvious one:
How can you be sure it really works?
...and that is what integration tests are for.
I upvoted Giraffe's answer, but want to add just a couple of points:
Each developer can use a mock/fake repository for her/his own unit testing without interfering with the tests being done by other developers on the same project.
Using a local mock/fake repository reinforces the use of a data abstraction layer, which is good design practice.
As an example, I've used something as simple as a HashMap to implement a mock of the data access layer. This makes it extremely easy for each unit test to ensure that exactly the necessary conditions exist for its purpose, and to verify that the right calls were made on the data access layer.
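In PHP, the equivalent of that HashMap is an associative array. A hand-written fake along those lines might look like this (the interface and names are invented for the example):

    interface UserRepository
    {
        public function save($id, array $user);
        public function find($id);
    }

    class InMemoryUserRepository implements UserRepository
    {
        private $users = [];

        public function save($id, array $user)
        {
            $this->users[$id] = $user;
        }

        public function find($id)
        {
            // Each test seeds exactly the state it needs via save(),
            // and can inspect the array afterwards to verify the calls made.
            return isset($this->users[$id]) ? $this->users[$id] : null;
        }
    }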
When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run against the same database?
Transactions.
What the Ruby on Rails unit test framework does is this:
Load all fixture data.
For each test:
    BEGIN TRANSACTION
    # Yield control to user code
    ROLLBACK TRANSACTION
End for each
This means that:
Any changes your test makes to the database won't affect other threads while the test is in progress.
The next test's data isn't polluted by prior tests.
This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
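The same trick is easy to reproduce in PHPUnit with plain PDO; the connection details below are placeholders:

    use PHPUnit\Framework\TestCase;

    abstract class TransactionalTestCase extends TestCase
    {
        protected $pdo;

        protected function setUp()
        {
            parent::setUp();
            // Fixtures are loaded once elsewhere; each test only wraps
            // its own changes in a transaction.
            $this->pdo = new PDO('mysql:host=localhost;dbname=app_test', 'test', 'test');
            $this->pdo->beginTransaction();
        }

        protected function tearDown()
        {
            // Whatever the test wrote is discarded here.
            $this->pdo->rollBack();
            parent::tearDown();
        }
    }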
For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However, it only works if you're using simple, generic SQL functionality or can easily hide the slight differences between SQLite and your production database system behind a class, but I've always found that to be fairly easy in the SQL applications I've developed.
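A sketch of the one-database-per-test idea using PHP's built-in SQLite driver; the schema is invented for the example:

    use PHPUnit\Framework\TestCase;

    class SqliteBackedTest extends TestCase
    {
        private $db;

        protected function setUp()
        {
            parent::setUp();
            // ':memory:' gives every test its own throwaway database.
            $this->db = new PDO('sqlite::memory:');
            $this->db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
        }

        public function testInsertAndReadBack()
        {
            $this->db->exec("INSERT INTO users (name) VALUES ('alice')");
            $name = $this->db->query('SELECT name FROM users')->fetchColumn();
            $this->assertSame('alice', $name);
        }
    }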
Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing, where each test gets a clean instance of the DB.
I wanted to accept both Free Wildebeest's and Orion Edwards' answers, but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these are the two main ways to do it, and which one to choose depends on the individual case (mostly the size of the database).
Also, run the tests at different times, so that they do not impact each other's performance or validity.
While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. How tedious this is depends on the number of test cases you have and how dependent they are on one another, and it holds whether you have one database per test or per group of dependent tests.
When running the test suite, you load the data at the start, run the suite, and unload/compare the results, making sure the actual results meet the expected ones. If not, you do the cycle again: load, run the suite, unload/compare.