I am currently writing unit tests for some of my methods and I still have a lot to write.
Is there a way to mark methods in my primary project as covered or handled by unit tests?
I would like to have an overview.
I have come across multiple articles on integration testing of Spring Boot applications. Given that the application follows the three-layer pattern (Web Layer - Service Layer - Repository Layer), I have not seen a single article on integration testing the application up to just the service layer (omitting the web layer), where all the business logic is contained. All of the integration tests seem like controller unit tests - mostly verifying only request and response payloads, parameters, etc.
What I would like, however, is to verify the business logic using service integration tests. Since the web layer is only responsible for taking the results from the services and exchanging them with the client, I think this makes much more sense. Such tests could also include some database state verification after running the services, e.g. to ensure that there are no detached leftovers.
Since I have never seen such a test, is it good practice to implement one? If not, why not?
There is no one true proper way to test Spring applications. A general approach is as you described:
slice tests (@DataJpaTest, @WebMvcTest, etc.) for components that rely heavily on Spring (see the sketch after this list)
unit tests for domain classes and the service layer
a small number of end-to-end tests (@SpringBootTest) to check that everything works together properly
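For illustration, a web-layer slice test might look roughly like the following sketch; UserController, UserService, UserDto and the /users/{id} endpoint are hypothetical names, not part of any real project:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Hypothetical web-layer slice test: only the MVC infrastructure and the
// controller under test are loaded; the service is replaced with a mock.
@WebMvcTest(UserController.class)
class UserControllerSliceTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private UserService userService; // hypothetical service the controller depends on

    @Test
    void returnsUserAsJson() throws Exception {
        when(userService.findById(1L)).thenReturn(new UserDto(1L, "Alice"));

        mockMvc.perform(get("/users/1"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.name").value("Alice"));
    }
}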
Spotify engineers, on the other hand, have written about how they do almost no unit testing and cover everything with integration tests instead.
There is nothing stopping you from using @SpringBootTest to test your service layer together with all underlying components. There are a few things you need to consider (a sketch follows the list below):
it is harder to prepare test data (or put the system into a certain state), as you need to insert it into the database
you need to clean the database yourself, as @SpringBootTest does not roll back transactions by default
it is harder to test edge cases
you need to mock external HTTP services with tools like WireMock, which is also harder than using plain Mockito
you need to watch how many application contexts you create during tests: first, creating them is slow; second, each application context opens its own database connections, so you create several connections per context and can eventually hit the limits of your database server.
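To make that concrete, a service-level integration test under those constraints might look roughly like this sketch; OrderService, OrderRepository and their methods are made-up names used only for illustration:

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical service-layer integration test: the full context is started,
// real repositories are used, and the test cleans the database up afterwards.
@SpringBootTest
class OrderServiceIntegrationTest {

    @Autowired
    private OrderService orderService;       // hypothetical service under test

    @Autowired
    private OrderRepository orderRepository; // hypothetical repository, used for setup and verification

    @AfterEach
    void cleanUp() {
        // @SpringBootTest does not roll back for us, so clean up explicitly
        orderRepository.deleteAll();
    }

    @Test
    void placingAnOrderPersistsIt() {
        Long id = orderService.placeOrder("book-123", 2);

        // verify the business logic through the database state, not the web layer
        assertThat(orderRepository.findById(id)).isPresent();
    }
}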
This is borderline opinion-based, but still, I will share my take on this.
I usually follow Mike Cohn's original test pyramid: many unit tests at the base, fewer service tests in the middle, and a small number of end-to-end/UI tests at the top.
The reason is that unit tests are not only easier to write but also faster to run, and they most likely cover much more of the code than the coarser-grained tests.
Then we come to the service or integration tests, the ones you mention in your question. They are usually harder to write, simply because you are now testing the whole application rather than a single class, and they take longer to run. The benefit is that you can exercise a complete scenario, and they usually do not require as much maintenance as unit tests when you change something in your code.
However, and here comes the opinion part, I usually prefer to focus much more on writing good and extensive unit tests (focusing less on test coverage and more on what I expect from each class) than on fully-fledged integration tests. What I do like to do is take advantage of Spring slice tests, which in the pyramid would sit between the unit tests and the service tests. They allow you to focus on a specific class (a controller, for example) while still testing its integration with the underlying Spring Framework or infrastructure. This is, for me, the best of both worlds: you can still focus on a single class but also test the relevant components around it. You can test your web layer with @WebMvcTest or @WebFluxTest (so that you can test JSON serialization and deserialization, bean validation, etc.), or you can focus on your persistence layer with @DataJpaTest, @JdbcTest or @DataMongoTest (so that you can test the actual persistence and retrieval of data).
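As an illustration, a persistence-layer slice test could look something like this minimal sketch; the Book entity, its constructor and BookRepository are made-up names:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;

import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical persistence slice test: only JPA components are loaded and
// each test runs in a transaction that is rolled back afterwards.
@DataJpaTest
class BookRepositorySliceTest {

    @Autowired
    private TestEntityManager entityManager; // helper provided by @DataJpaTest

    @Autowired
    private BookRepository bookRepository;   // hypothetical Spring Data repository

    @Test
    void findsBooksByAuthor() {
        entityManager.persist(new Book("Refactoring", "Martin Fowler"));

        assertThat(bookRepository.findByAuthor("Martin Fowler")).hasSize(1);
    }
}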
Wrapping up, I usually write a bunch of unit tests, then web-layer tests to check my controllers, and also some persistence-layer tests against a real database.
You can read more in the following interesting online resources:
https://martinfowler.com/articles/practical-test-pyramid.html
https://www.baeldung.com/spring-tests
I need to know whether unit testing only the REST controllers in a Spring Boot application is enough, or whether it is better to test all the classes independently.
There is no simple, 'right' answer. You should decide for yourself.
First of all, it is worth mentioning two things:
A framework for testing controllers: Spring's MockMvc, which is meant for integration testing of Spring web applications. It can test a single controller (with a mocked service layer) or the entire application (including the service and database layers, with embedded databases); see the sketch after this list.
An article answering which tests you should have: the test pyramid concept (in short, you should have many unit tests and a few integration tests).
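To illustrate the first point, MockMvc can exercise a single controller without starting the whole application; in this sketch GreetingController and GreetingService are hypothetical names:

import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Hypothetical standalone MockMvc test: no Spring context is started,
// only the controller under test is registered, with its service mocked.
class GreetingControllerTest {

    private final GreetingService greetingService = mock(GreetingService.class);

    private final MockMvc mockMvc =
            MockMvcBuilders.standaloneSetup(new GreetingController(greetingService)).build();

    @Test
    void greetsByName() throws Exception {
        when(greetingService.greet("Bob")).thenReturn("Hello, Bob!");

        mockMvc.perform(get("/greetings/Bob"))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello, Bob!"));
    }
}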
My personal preference is to cover the 'happy path' with integration tests (covering the service and DB layers).
Then I cover classes with complex logic (not all classes) with unit tests.
In my experience most applications are quite simple, and I find that I end up with a lot of integration tests and not so many unit tests.
Max Fariskov is right, and I can add two remarks.
Unit tests work properly when you run them really often; after every compile is often enough :). That's why every unit test must check one little part of the functionality. Testing the whole controller is too complex a task.
Decouple the business logic from the controller,
put it into a service,
test the service with its own unit tests (as sketched below),
mock the service's functionality in your controller test and check only the controller's behavior.
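For example, such a service unit test with the repository mocked out might look like this sketch; PriceService, PriceRepository and their methods are hypothetical names:

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical service unit test: no Spring, no database, just the class
// under test with its repository dependency mocked.
class PriceServiceTest {

    private final PriceRepository priceRepository = mock(PriceRepository.class);
    private final PriceService priceService = new PriceService(priceRepository);

    @Test
    void appliesTenPercentDiscount() {
        when(priceRepository.basePriceOf("book-123")).thenReturn(100.0);

        // assumes the hypothetical service applies a 10% discount to the base price
        assertEquals(90.0, priceService.discountedPriceOf("book-123"), 0.001);
    }
}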
Unit tests are fast but integration tests are slooow and complex, so it is a good idea to split them into different groups. Execute your unit tests often and your integration tests only when you have to.
I'm doing it with "profiles":
Create two empty interfaces. For example:
public interface UnitTest {}
public interface IntegrationTest {}
Mark your test classes with the proper annotation:
@Category({IntegrationTest.class})
public class MySlowAndComplexServiceTest {}

@Category({UnitTest.class})
public class MyFastServiceTest {}
Create two profiles in your pom.xml (the <groups> values must be the fully qualified names of your category interfaces; com.example is a placeholder):

<profiles>
    <profile>
        <id>unitTests</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>2.19.1</version>
                    <configuration>
                        <groups>com.example.UnitTest</groups>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
    <profile>
        <id>integrationTests</id>
        <activation>
            <activeByDefault>false</activeByDefault>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>2.19.1</version>
                    <configuration>
                        <groups>com.example.IntegrationTest,com.example.UnitTest</groups>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
It's done.
Your unit tests will be executed by default, and you can switch on the "integrationTests" profile to activate your integration tests at any moment.
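For example, running "mvn test" executes only the unit tests, while "mvn test -P integrationTests" runs the integration tests as well.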
I'm working on writing a test that combines two other tests. Instead of repeating code, is there a way to call an RSpec test from inside another RSpec test, or do I have to write the new test out longhand?
RSpec is a unit testing framework, and unit tests are intended to test only the internal functionality of a single component. For tests that touch external components, dummy values and mocks should be injected through some form of test double.
If you have to resort to this kind of workaround to run your RSpec tests, your code and tests probably aren't following the Single Responsibility Principle or the standards of test-driven development. I'd suggest reading Clean Code by Robert Martin or another book on programming practice, or just looking up test-driven development and the RSpec guidelines.
If you are doing integration testing, which is certainly necessary, then you should try some other testing strategies too.
Do you think it is advisable to use Code Contracts inside your unit tests?
I don't think so, since it would be like creating unit tests to test your unit tests.
Therefore I would not activate Code Contracts static checking or runtime checking for a unit test project.
To clarify, I would like to know, in particular, whether you would write Code Contracts code inside your unit tests.
I think it's a good idea to enable Code Contracts for unit tests, so that any contract failures are caught when the tests run.
However, it's usually not useful to do any contract checking in the test methods themselves, except in helper methods used by the test methods.
Improved testability:
• each contract acts as an oracle, giving a test run a pass/fail indication.
• automatic testing tools, such as Pex, can take advantage of contracts to generate more meaningful unit tests by filtering out meaningless test arguments that don't satisfy the preconditions.
I have some tests that depend on the success or failure of other tests. How can I define this dependency, given that I am using VS2010 MSTest and Selenium?
E.g. if test1 fails, then don't run test5 and test6. Is this possible?
Unit tests should always be isolated and completely independent of anything else in order to run; otherwise they become fragile.
You could set up categories with MSTest to separate them into different logical groups.
A great book with more details is The Art of Unit Testing: http://artofunittesting.com
Roy also does a lot of public speaking, which is recorded online.
Cheers
Tests shouldn't have dependencies between them.
If you have dependencies, then running the tests in a different order, or in isolation, will cause them to fail sporadically - this can be very confusing for anyone else running them.
It's much better to define tests that set up their own data and assert something specific. You can use a mocking framework like Rhino Mocks to reduce the dependencies between modules of code by faking (mocking) the areas that aren't relevant to your test. This is made much easier if you also use a dependency injection framework like Microsoft Unity, as your code will have many more seams where mocking can be applied.