Testing multiple classes with the same RSpec tests - Ruby

I'm building a web app in Ruby, and it does all of its data access through repositories. To make my unit tests fast, I also have in-memory versions of these repositories.
I would like the behaviour of these repositories to be identical, so it makes sense to use the same tests for both the real and in-memory repositories.
How would I go about doing this?

Related

Is it worth implementing service integration tests in Spring Boot application?

I have come across multiple articles on integration testing of Spring Boot applications. Given that the application follows a three-layer pattern (Web Layer - Service Layer - Repository Layer), I have not seen a single article about integration testing the application up to just the service layer (omitting the web layer), where all the business logic is contained. All of the integration tests seem like controller unit tests - mostly verifying only request and response payloads, parameters, etc.
What I would like however is to verify the business logic using service integration tests. Since the web layer is responsible only for taking the results from services and exchanging them with the client I think this makes much more sense. Such tests could also contain some database state verifications after running services to e.g. ensure that there are no detached leftovers.
Since I have never seen such a test, is it good practice to implement one? If not, why not?
There is no one true proper way to test Spring applications. A general approach is as you described:
slice tests (@DataJpaTest, @WebMvcTest, etc.) for components that rely heavily on Spring
unit tests for domain classes and the service layer
a small number of e2e tests (@SpringBootTest) to see if everything is working together properly
Spotify engineers, on the other hand, have written about how they do almost no unit testing and cover everything with integration tests.
There is nothing stopping you from using @SpringBootTest to test your service layer together with all of its underlying components, as sketched below. There are a few things you need to consider:
it is harder to prepare test data (or put the system into a certain state), as you need to insert it into the database
you need to clean the database yourself, as @SpringBootTest does not roll back transactions
it is harder to test edge cases
you need to mock external HTTP services with tools like WireMock, which is also harder than using plain Mockito
you need to take care of how many application contexts you create during the test run: creating them is slow, and each application context opens its own database connections, so with many contexts you can eventually hit your database server's connection limit.
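To make this concrete, here is a minimal sketch of such a service-layer test, assuming a Spring Boot project with JUnit 5 and AssertJ; OrderService, OrderRepository and placeOrder are hypothetical names used only for illustration:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

// Starts the full application context and exercises the real service
// and repository beans against the test database.
@SpringBootTest
class OrderServiceIntegrationTest {

    @Autowired
    private OrderService orderService;       // hypothetical service under test

    @Autowired
    private OrderRepository orderRepository; // hypothetical Spring Data repository

    @AfterEach
    void cleanUp() {
        // @SpringBootTest does not roll back transactions for us,
        // so each test cleans up the data it created.
        orderRepository.deleteAll();
    }

    @Test
    void placingAnOrderPersistsIt() {
        // exercise the business logic through the service layer
        Long id = orderService.placeOrder("book-123", 2);

        // verify the resulting database state
        assertThat(orderRepository.findById(id)).isPresent();
    }
}
```

The manual cleanup in @AfterEach is exactly the second point from the list above.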
This is borderline opinion-based, but still, I will share my take on this.
I usually follow Mike Cohn's original test pyramid: a broad base of unit tests, a narrower layer of service tests, and a small number of UI tests at the top.
The reason is that unit tests are not only easier to write but also faster to run, and they most likely cover much more than other, coarser-grained tests.
Then come the service or integration tests, the ones you mention in your question. They are usually harder to write, simply because you are now testing the whole application rather than a single class, and they take longer to run. The benefit is that you can test a complete scenario, and they most probably do not require as much maintenance as unit tests when you need to change something in your code.
However, and here comes the opinion part, I usually prefer to focus much more on writing good and extensive unit tests (not obsessing over test coverage, but on what I expect from that class) than on fully-fledged integration tests. What I do like to do is take advantage of Spring slice tests, which in the pyramid would sit between the unit tests and the service tests. They allow you to focus on a specific class (a Controller, for example) while also testing some integration with the underlying Spring Framework or infrastructure. This is, for me, the best of both worlds: you can still focus on a single class but also test some relevant components of your application. You can test your web layer with @WebMvcTest or @WebFluxTest (so that you can test JSON deserialization and serialization, bean validation, etc.), or you can focus on your persistence layer with @DataJpaTest, @JdbcTest or @DataMongoTest (so that you can test the actual persistence and retrieval of data).
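As a minimal sketch of such a slice test (assuming JUnit 5; CustomerController and CustomerService are hypothetical names, not from the question), a @WebMvcTest exercises routing, serialization and validation while the service behind the controller is mocked:

```java
import static org.mockito.BDDMockito.given;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

// Loads only the web layer for the given controller; no repositories,
// no database, and the service is replaced by a Mockito mock.
@WebMvcTest(CustomerController.class)
class CustomerControllerSliceTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private CustomerService customerService; // hypothetical dependency of the controller

    @Test
    void returnsCustomerAsJson() throws Exception {
        given(customerService.findName(1L)).willReturn("Alice");

        // verifies routing, status code and JSON serialization only
        mockMvc.perform(get("/customers/1"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.name").value("Alice"));
    }
}
```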
Wrapping up, I usually write a bunch of Unit Tests and then web layer tests to check my Controllers and also some persistence layer tests against a real database.
You can read more in the following interesting online resources:
https://martinfowler.com/articles/practical-test-pyramid.html
https://www.baeldung.com/spring-tests

Spring Data JPA: how to clean data from repositories before running unit tests from particular test classes?

I have a problem with unit tests for persistence code written with Spring Data JPA.
For particular repositories I have unit tests to make sure that everything works correctly. I also have integration tests. Each test passes when I run its test class on its own, but when I run the whole package of tests I get a lot of failures, because records inserted into the DB by previous tests are still there.
Of course I can add an @After method to each test class to clean up the data, but is it possible to clean all data from the DB before running the tests of particular test classes without adding an @After method?
Best Regards.
Use Spring's transactional test support to ensure that database changes are rolled back after each test:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/testing.html#testcontext-tx
One common issue in tests that access a real database is their effect on the state of the persistence store. Even when you’re using a development database, changes to the state may affect future tests. Also, many operations — such as inserting or modifying persistent data — cannot be performed (or verified) outside a transaction.
The TestContext framework addresses this issue. By default, the framework will create and roll back a transaction for each test.
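As a minimal sketch of what this looks like in practice (assuming Spring Boot; BookRepository and Book are placeholder names, not from the question), a @DataJpaTest is transactional by default, so every change is rolled back when each test method finishes and no @After cleanup is needed:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

// Each test method runs inside a transaction that the TestContext
// framework rolls back automatically afterwards.
@DataJpaTest
class BookRepositoryTest {

    @Autowired
    private BookRepository bookRepository; // hypothetical Spring Data repository

    @Test
    void savesAndFindsBooks() {
        bookRepository.save(new Book("Domain-Driven Design")); // hypothetical entity

        assertThat(bookRepository.count()).isEqualTo(1);
        // The insert above is rolled back after this method, so other
        // test classes in the same run start from a clean database.
    }
}
```

In plain (non-Boot) Spring tests the same effect comes from annotating the test class with @Transactional together with the Spring test runner or extension.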

Best way to do TDD and CSLA

I would like to know what tools, patterns, etc people have used to be able to do TDD with CSLA .NET 3.8 and higher.
Which parts require the most effort? Are there parts that are completely untested, etc.?
Any and all information is most welcome.
Thanks
I use a combination of SpecFlow with xUnit to test my CSLA business objects. SpecFlow with xUnit are used to define and test the user scenarios (think: functional/acceptance testing), and xUnit alone is used to test individual classes and combinations of such.
Dependencies within the CSLA classes, such as data-access, are injected via a container. Such dependencies can and often are mocked for unit testing.
The test client and our remote Data Portal have separate containers loaded with the correct dependencies. If a test needs to mock any of the Data Portal dependencies we have a special CSLA Command that is executed (via xUnit BeforeAfterTestAttribute) on the Data Portal and replaces standard dependencies with our mocked dependencies. When the tests complete the Command is executed again to put the standard dependencies back into the container.
I hope some of this helps.

How to define dependency between tests in MStest

I have some tests which depend on the success or failure of other tests. How can I define such a dependency? I am using VS2010 MSTest and Selenium.
E.g. if test1 fails, then don't run test5 and test6. Is this possible?
Unit tests should always be isolated and completely independent of anything else in order to run; depending on other tests makes them fragile.
You could set up categories with MSTest to separate them into different logical structures.
A great book with more details is The Art of Unit Testing: http://artofunittesting.com
Roy also does a lot of public speaking, much of which is recorded online.
Cheers
Tests shouldn't have dependencies between them.
If you have dependencies, then running them in a different order, or in isolation will cause them to fail sporadically - this can be very confusing for anyone else that is running the tests.
It's much better to define tests that set up their own data and assert something specific. You can use a mocking framework like Rhino Mocks to reduce the dependencies between modules of code by faking (mocking) areas that aren't relevant to your test. This is made much easier if you also use a dependency injection framework like Microsoft Unity, as your code will have many more seams where mocking can be applied.

Should I include system tests in a Spring project?

My Spring web project consists of:
util classes;
repositories;
services;
controllers.
The tests are as follows:
unit tests for util classes;
Spring integration tests for repositories with HSQLDB;
unit tests for services with mock repositories;
unit tests for controllers with mock services.
There may also be system tests which test the overall project functionality. They can be performed with an external tool like Selenium, or using Spring integration testing.
The question is, should I include such Spring integration system tests in the project, or should they be separated somehow?
I see two problems with including system tests in the project:
1. they need configuration tuning, because such tests will not run with the production config (e.g. tests need a local datasource, not the one from JNDI);
2. they aren't autonomous; they need some external resources and so on, so I cannot just run them like ordinary unit tests.
How do you organize your system testing?
On small projects I've kept them in the same place. On large enterprise projects (the kind for which you might usefully leverage Spring, for instance) we've usually organised system tests in a separate package / project. This helps keep them separate from the main codebase.
If you don't do this, there are all kinds of temptations to reuse classes from the main code to "help out" in something which should be focused on the experience of users of the system (a user may be another system). If this happens, you end up with coupling between the project's domain classes and the UI, which inevitably means duplicating much of the logic that keeps them decoupled in the real codebase.
Most of the time the logic in system scenarios will actually be focused on pages, screens, web-calls, etc. so reusing code from the main project is a red herring. Keep the packages separate to avoid this happening, and because once you avoid it happening there's no need to have them in the same place.
Do, however, make sure that the system tests are checked in to the same version control as the code.
If you're not doing continuous integration and testing / deployment yet, that might be another area for which some learning will help you with the config files. That problem doesn't go away just because you have tests in a separate project, unfortunately.
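As one possible way to deal with the configuration problem from the question (a local datasource instead of the JNDI one), the separate system-test project can carry its own Spring test configuration. The class names below are hypothetical and this is only a sketch, assuming the HSQLDB driver already used for the repository tests is on the test classpath:

```java
import javax.sql.DataSource;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = CheckoutSystemTest.SystemTestConfig.class)
class CheckoutSystemTest {

    // Test-only configuration: a local in-memory DataSource replaces the
    // JNDI datasource that the production configuration would look up.
    @Configuration
    static class SystemTestConfig {
        @Bean
        DataSource dataSource() {
            return new DriverManagerDataSource("jdbc:hsqldb:mem:systemtests", "sa", "");
        }
    }

    @Test
    void endToEndScenarioRunsAgainstLocalDatabase() {
        // system-level scenario assertions would go here
    }
}
```

Keeping this configuration class inside the separate system-test project means the production configuration stays untouched.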
