I am using Laravel 5.6 to write an API with no views, just endpoints.
I am very new to testing, and my understanding is that Unit tests are from a programmer's perspective and Feature tests are from a user's perspective.
As I am only creating an API, would I be right to assume that I will only be writing Unit tests, and that I am safe to remove the tests/Feature directory altogether?
My tests will consist of things like
public function it_authenticate_a_user()
Sorry if it's a dull question, I am only trying to learn.
Thanks
No, it's not a good idea to write only unit tests.
A true unit test verifies that a single class or function works as expected. However, it does NOT verify that the whole application works as expected - it's perfectly possible for the application to have 100% coverage from unit tests but still not work, because the components don't quite fit together as expected. You should also write functional tests for your endpoints. The majority of your tests should be unit tests, but it's a good idea to have every API method covered by functional tests too, just to make sure the pieces fit together.
Put it this way: at Google they advocate a model called the testing pyramid, which gives a typical ratio of 70% unit tests, 20% functional tests, and 10% high-level acceptance tests. It should not be seen as rigid, and for an API I see little need for acceptance tests, but it gives some idea of a healthy mix of tests.
An API is in some ways easier to test than a conventional web app, since it's stateless and each method is relatively simple, but it's just as important that it has functional test coverage. It's straightforward to test API routes in Laravel - just do the setup, make the request, then check that the response is correct and any expected changes have been made.
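For example, a minimal Laravel feature test for a hypothetical login endpoint might look like the sketch below (the /api/login route, the token field, and the App\User factory defaults are assumptions; json(), assertStatus() and assertJsonStructure() are standard Laravel test helpers):

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class AuthenticationTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function it_authenticates_a_user()
    {
        $user = factory(\App\User::class)->create(['password' => bcrypt('secret')]);

        // Exercise the real route stack, exactly as an API client would
        $response = $this->json('POST', '/api/login', [
            'email'    => $user->email,
            'password' => 'secret',
        ]);

        $response->assertStatus(200)
                 ->assertJsonStructure(['token']);
    }
}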
I'm using PHP with Laravel 5.5 framework.
I recently started writing unit tests for my code, and I have a few questions:
What is the best way to interact with my database?
Should I use an in-memory DB like SQLite, or mock everything with Mockery?
If I have an interaction with the DB, is that still unit testing, or is it integration testing?
Thank you for the answers in advance.
I work in a company where we strive for 80% code coverage. In general we test mostly end-to-end, with a database and with external calls mocked, and we use SQLite so our test suite can run quickly in a local environment. When it makes sense, we unit test; for example, I unit tested a tax service I built for different countries, because it was very input/output based.
Why we prefer end-to-end:
It's quicker if you don't have to write unit, integration, and end-to-end tests separately
You test the endpoint that will actually be used
I prefer to run against a real database if you are running Continuous Integration
There are drawbacks with SQLite, mainly that it does not behave like other relational databases: there are a lot of settings and limitations, and off the top of my head I had problems with foreign key enforcement, etc.
So to answer your question:
It's smart to use SQLite at least locally
In unit testing you are only testing one class and mocking everything else; you are basically testing that the code can execute. Note this is an oversimplified view of a very complex subject.
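As a rough sketch of the SQLite-in-memory setup in a Laravel project (assuming PHPUnit and Laravel's RefreshDatabase trait; the UserRepositoryTest class and App\User model are just placeholders):

<!-- phpunit.xml: point the test run at an in-memory SQLite database -->
<php>
    <env name="DB_CONNECTION" value="sqlite"/>
    <env name="DB_DATABASE" value=":memory:"/>
</php>

// A test that exercises real persistence against the in-memory database
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class UserRepositoryTest extends TestCase
{
    use RefreshDatabase; // migrates a fresh schema for every test

    /** @test */
    public function it_persists_a_user()
    {
        factory(\App\User::class)->create(['email' => 'jane@example.com']);

        $this->assertDatabaseHas('users', ['email' => 'jane@example.com']);
    }
}

Because everything lives in memory, the suite stays fast while still hitting a real (if slightly non-standard) SQL engine.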
I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load testing or database tuning ought to be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test-first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of validation testing, "system testing" is the correct term. Database testing is part of integration and system testing. Also, load testing is performed during the system and user acceptance testing phases.
I'm developing a Rails app with a lot of dependencies on external APIs, for example Delicious.
All APIs share two workflows:
On the first call they are going to load all data since the beginning of time.
All following calls will load data filtered by the last execution time (if supported).
Testing them for real means I must create a test account for each API, or at least use my private one, even with VCR, because each API would still need to be called once to record the cassettes. And my biggest problem: I would have to mess around a lot with Dates and Times to emulate the two different workflows mentioned above. Though Timecop makes it really easy, it feels like a pain in the ass.
Another approach is to fake the API calls and their corresponding responses completely, but this means no real tests, and furthermore I would never notice changes or problems with the APIs.
Any suggestions? Maybe a good combination of both ways?
Do both.
Start by mocking/stubbing everything in your regular, frequently run test suite. Do this for all the fine-grained model/controller testing.
Then add end-to-end testing (e.g. in integration tests) that covers the usual workflow scenarios and hits the real (test) servers.
Alternatively, use a different test framework for the end-to-end testing, e.g. Cucumber instead of Test::Unit, or Selenium/Watir, whatever you like, as long as it's separate from your usual test suite.
I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:
Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test).
Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit).
My question is: do my findings / leanings correspond with the thoughts of the community?
I agree with your findings/learnings. True unit tests only test a portion of the system, ignoring, mocking or faking the rest as necessary. Integration tests (or regression tests) test most or all of the units working together, and that is the true measure of performance.
In some situations you can use unit tests to make sure that an operation finishes within a certain time period. If you want to add more features to your operation but you don't want to sacrifice performance, you can use unit tests to assert that. Of course, these kinds of unit tests are machine dependent, but you can throw some additional variables or configuration into the equation.
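For instance, a minimal sketch of that idea (shown in PHPUnit for illustration; the same pattern works in NUnit, and ReportGenerator plus the 200 ms budget are made-up placeholders):

class ReportGeneratorPerformanceTest extends \PHPUnit\Framework\TestCase
{
    /** @test */
    public function it_generates_a_report_within_the_time_budget()
    {
        $generator = new ReportGenerator();          // hypothetical class under test

        $start = microtime(true);
        $generator->generate(range(1, 10000));       // representative workload
        $elapsedMs = (microtime(true) - $start) * 1000;

        // The budget is machine dependent; in real use read it from config/env
        $this->assertLessThan(200, $elapsedMs, "generate() took {$elapsedMs} ms");
    }
}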
Performance tests might very well be made up of unit tests.
For example, a unit test might throw several different parameters into a method and verify the method returns an expected output. A performance test might execute that unit test 1000 times (or whatever value makes sense for you) while recording everything from CPU and memory counters right down to how long each test took.
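A rough sketch of that kind of harness (again in PHP for illustration; PriceCalculator and the 1000-iteration count are placeholders, and a real harness would also record CPU and memory counters):

// Run the same call repeatedly and collect timings
$calculator = new PriceCalculator();                 // hypothetical unit under test
$timings = [];

for ($i = 0; $i < 1000; $i++) {
    $start = microtime(true);
    $calculator->calculate(['qty' => 3, 'unit' => 9.99]);
    $timings[] = (microtime(true) - $start) * 1000;  // milliseconds
}

printf(
    "runs: %d  avg: %.3f ms  max: %.3f ms\n",
    count($timings),
    array_sum($timings) / count($timings),
    max($timings)
);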
I agree that performance tests cannot be unit tests, but there is no reason we cannot have another set of tests called performance tests.
Broadly, the tests fall into two categories:
a) Unit tests
b) Integration tests
We run integration tests against the real database (instead of in-memory) to ensure the SQL scripts and the Hibernate repositories work as expected.
My idea is that we can add another set of tests, called performance tests, as part of the nightly build, which test the performance of certain functions. This is important for tracking the statistics after a code refactoring, or for evaluating whether changes to one part of the application have unintended consequences on another.
I have come across JunitPerf, which might help me achieve this objective.
Unit tests should take almost no time to execute because you are only testing a very specific unit/system. For example, if your system under test is ClassA : IClassA, you do your mocking/stubbing and only test the behaviour of ClassA; you should not be testing behaviour other than ClassA's, such as what happens when ClassA uses ClassB. To achieve this, you should inject a mock of ClassB instead of the concrete class.
In terms of performance tests, it makes sense to still use a testing framework like NUnit / MBUnit / MavenThought, just keep these tests in a separate assembly and don't invoke them as part of your unit tests.
So if you use Rake to invoke your tests, some of your tasks might look like:
Rake Test:All #Run all unit tests
Rake Test:Acceptance #Run all acceptance tests
Rake Test:Performance #Run all performance tests
Rake Test:Integration #Run all integration tests
Then with your continuous integration, Test:All is always invoked after a successful build, whereas Test:Performance is invoked at 12am once a day.
It all depends on what you call performance testing. When micro-optimizing specific code I usually use something very similar to unit testing (should I call it unit performance testing?). That's basically what I do in this question (though without bothering to use a unit test framework there). But I also do this kind of thing to optimize my C++ production code within the Boost unit testing framework.
Really, there are many kinds of performance testing, at different levels and with different purposes (heavy-load stress testing, profiling, micro-optimization). The performance testing you are speaking of in your question seems to be at the functional testing level, a level at which you probably won't use a unit testing framework anyway.
I remember years ago Microsoft advocated programmers performance-testing their individual ASPs using Visual Studio .NET Application Center Test (ACT). There was (and still may be) a whole methodology for performing Transaction Cost Analysis (TCA) on individual ASPs. That said, these ASPs could be tested using a web driver and possibly mock objects to isolate the code under test (that is, mimicking DB access if it wasn't developed yet).
This approach can be followed with any unit testing, provided you have a driver and, optionally, a mock object framework to take care of any dependencies that are not yet written. This approach has also become popular with SoapUI/LoadUI. In addition, I would recommend isolating individual SQL statements that can be tested (optimized) against a given database design. This (DB) SQL unit performance testing can be done early in the SDLC, and it will discover query optimization opportunities.
In terms of cost and value: I have found that early unit performance testing, using mock objects as appropriate, will identify memory leaks, excessive CPU usage, and disk IO early in the SDLC, but I would 'cherry-pick' the code under test, focusing on higher-risk items.
There are clear differences between unit tests and performance tests. First and foremost, a unit test checks the application against its functional requirements; for example, you want to ensure that clicking the Home tab navigates the page to home. A performance test, on the other hand, is a type of non-functional test: here you are concerned with the stability and responsiveness of the application under a particular user load for a certain amount of time.
I routinely run into this problem, and I'm not sure how to get past this hurdle. I really want to start learning and applying Test-Driven Development (or BDD, or whatever), but it seems like every application I build where I want to apply it is pretty much only standard database CRUD stuff, and I'm not sure how to go about applying it. The objects pretty much don't do anything apart from being persisted to a database; there is no complex logic that needs to be tested. There is a gateway that I'll eventually need to test for a 3rd-party service, but I want to get the core of the app done first.
Whenever I try to write tests, I only end up testing basic stuff that I probably shouldn't be testing in the first place (e.g. getters/setters), but it doesn't look like the objects have anything else. I guess I could test persistence, but this never seems right to me because you aren't supposed to actually hit a database, and if you mock it out then you really aren't testing anything, because you control the data that's spit back. I've seen a lot of examples where a mock repository simulates a database by looping and creating a list of known values, and the test verifies that the "repository" can pull back a certain value. I'm not seeing the point of a test like this, because of course the "repository" is going to return that value; it's hard-coded in the class! I see it from a pure TDD standpoint (i.e. you need to have a test saying that your repository needs a GetCustomerByName method or whatever before you can write the method itself), but that seems like following dogma for no reason other than it's "the way" - the test doesn't seem to be doing anything useful apart from justifying a method.
Am I thinking of this the wrong way?
For example, take a run-of-the-mill contact management application. We have contacts, and let's say that we can send messages to contacts. We therefore have two entities: Contact and Message, each with common properties (e.g. First Name, Last Name, and Email for Contact; Subject, Body, and Date for Message). If neither of these objects has any real behavior or needs to perform any logic, then how do you apply TDD when designing an app like this? The only purpose of the app is basically to pull a list of contacts and display them on a page, display a form to send a message, and the like. I'm not seeing any sort of useful tests here - I could think of some tests, but they would pretty much be tests for the sake of saying "See, I have tests!" instead of actually testing some kind of logic. (While Ruby on Rails makes good use of it, I don't really consider testing validation to be a "useful" test, because it should be something the framework takes care of for you.)
"The only purpose of the app is basically to pull a list of contacts"
Okay. Test that. What does "pull" mean? That sounds like "logic".
" display them on a page"
Okay. Test that. Right ones displayed? Everything there?
" display a form to send a message,"
Okay. Test that. Right fields? Validations of inputs all work?
" and the like."
Okay. Test that. Do the queries work? Find the right data? Display the right data? Validate the inputs? Produce the right error messages for the invalid inputs?
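Even "pull a list of contacts" hides testable logic. A hypothetical PHPUnit sketch, where the "pull" rule (active contacts only, sorted by last name) is a stand-in for whatever your real rule turns out to be:

class ContactListTest extends \PHPUnit\Framework\TestCase
{
    /** Hypothetical "pull" logic: active contacts only, sorted by last name */
    private function pullContacts(array $all)
    {
        $active = array_values(array_filter($all, function ($c) { return $c['active']; }));
        usort($active, function ($a, $b) { return strcmp($a['last'], $b['last']); });
        return $active;
    }

    /** @test */
    public function it_excludes_inactive_contacts_and_sorts_by_last_name()
    {
        $result = $this->pullContacts([
            ['last' => 'Zimmer', 'active' => false],
            ['last' => 'Baker',  'active' => true],
            ['last' => 'Adams',  'active' => true],
        ]);

        $this->assertSame(['Adams', 'Baker'], array_column($result, 'last'));
    }
}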
I am working on a pure CRUD application right now
But I see lots of benefits in unit test cases (note: I didn't say TDD).
I write the code first and then the test cases - but never too far apart; soon enough, though.
And I test the CRUD operations - persistence to the database as well.
When I am done with the persistence and move on to the UI layer, I will have a fair amount of confidence that my service/persistence layer is good, and I can then concentrate on the UI alone at that moment.
So IMHO there is always a benefit to TDD/unit testing (whatever you call it, depending on how extreme you feel about it), even for a CRUD application.
You just need to find the right strategy for your application.
Just use common sense....and you will be fine.
I feel like we are confusing TDD with Unit Testing.
Unit tests are specific tests which test units of behavior. These tests are often included in the integration build. S.Lott described some excellent candidates for just those types of tests.
TDD is for design. I find, more often than not, that the tests I write when using TDD will either be discarded or evolve into unit tests. The reason behind this is that when I'm doing TDD I'm testing my design while I'm designing my application, class, method, domain, etc.
In response to your scenario, I agree with what S.Lott implied: what you need is a suite of unit tests to test specific behaviors in your application.
TDDing a simple CRUD application is, in my opinion, kind of like practicing scales on a guitar: you may think it's boring and tedious, only to discover how much your playing improves. In development terms, you are likely to write code that's less coupled and more testable. Additionally, you're more likely to see things from the code consumer's perspective, since you'll actually be using it. This can have a lot of interesting side effects, like more intuitive APIs, better separation of concerns, etc. Granted, there are scaffold generators that can do basic CRUD for you, and they do have a place, especially for prototyping; however, they are usually tied to a framework of sorts. Why not focus on the core domain first, deferring the framework/UI/database decisions until you have a better idea of the core functionality needed? TDD can help you do that as well.
In your example: Do you want messages to be a queue or a hierarchical tree etc?
Do you want them to be loaded in real time? What about sorting/searching? Do you need to support JSON or just HTML? It's much easier to see these kinds of questions with BDD/TDD. If you're doing TDD, you may be able to test your core logic without even using a framework (and without waiting a minute for it to load/run).
Skip it. All will be just fine. I'm sure you have a deadline to meet. (/sarcasm)
Next month, we can go back and optimize the queries based on user feedback. And break things that we didn't know we weren't supposed to break.
If you think the project will last 2 weeks and then never be reopened, automated testing probably is a waste of time. Otherwise, if you have a vested interest in "owning" this code for a few months, and it's active, build some tests. Even more so, if you plan on being with the company for a few years, and have other teammates who take turns whacking on various pieces of a system, and it may be your turn again a year from now, build some tests.
Don't overdo it, but do "stick a few pins in it", so that if things start to "move around", you have some alarms to call attention to things.
Most of my testing has been JUnit or batch "diff"-type tests, plus a rudimentary screen-scraper-type tool I wrote a few years ago (scripting some regex + wget/curl type stuff). I hear Selenium is supposed to be a good tool for web app UI testing, but I have not tried it. Does anybody have suggestions for tools for local GUI apps?
Just an idea...
Take the requirements for the CRUD app and use tools like Watij, Watir, or AutoIt to create test cases. Start creating the UI to pass the test cases. Once you have the UI up and passing maybe just one test, start writing the logic layer for that test, and then the DB layer.
For most users, the UI is the system. Remember to write test cases for each new layer that you are building. So instead of working from the DB layer to the app layer to the UI layer, work in the reverse direction.
At the end of the day, you would probably have accumulated a powerful regression test suite, giving you some confidence to refactor safely.
this is just an idea...
I see what you are saying, but eventually your models will become sufficiently advanced that they will require (or be greatly augmented by) automated testing. If not, what you are essentially developing is a spreadsheet which somebody has already developed for you.
Since you mentioned Rails, I would say doing a standard create/read/update/delete test is a good idea for each property, especially because your test should note permissions (this is huge I think). This also ensures that your migrations work as you expected them to.
I am working on a CRUD application now. What I am doing at this point is writing unit tests on my Repository objects and testing that the CRUD features work as they should. I have found that this inherently tests the actual database code as well; we have found quite a few bugs in the database code this way. So I would suggest you push ahead and keep going with unit tests. I know applying TDD to CRUD apps is not as glamorous as the things you might read about in blogs or magazines, but it serves its purpose, and you will be that much better off when you work on a more complex application.
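A minimal sketch of that kind of repository-level CRUD test (here the repository is reduced to raw PDO against an in-memory SQLite database just to keep the example self-contained; the contacts table and ContactRepositoryTest name are made up):

class ContactRepositoryTest extends \PHPUnit\Framework\TestCase
{
    private $pdo;

    protected function setUp(): void
    {
        // Fresh in-memory database for every test
        $this->pdo = new \PDO('sqlite::memory:');
        $this->pdo->exec('CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT)');
    }

    /** @test */
    public function it_creates_and_reads_back_a_contact()
    {
        $insert = $this->pdo->prepare('INSERT INTO contacts (email) VALUES (?)');
        $insert->execute(['jane@example.com']);
        $id = (int) $this->pdo->lastInsertId();

        $select = $this->pdo->prepare('SELECT email FROM contacts WHERE id = ?');
        $select->execute([$id]);
        $row = $select->fetch(\PDO::FETCH_ASSOC);

        $this->assertSame('jane@example.com', $row['email']);
    }
}

Exactly this kind of round trip is where schema and query bugs tend to show up.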
These days you should not need much hand-written code for a CRUD app apart from the UI, as there are 101 frameworks that will generate the database and data access code.
So I would look at reducing the amount of hand-written code and automating the testing of the UI. Then I would use TDD for the odd bits of logic that need to be written by hand.