Mockito and JDBC procedure call

I am trying to write a unit test for some JDBC procedure calls with Mockito.
This is my first time writing tests with mock objects (Mockito).
The method I am trying to test looks something like this:
public void deleteData(final Connection connection, final AnObject object) throws SQLException {
    CallableStatement statement = null;
    try {
        statement = connection.prepareCall("{call DEL_DATA(?)}");
        statement.setInt(1, object.getId());
        statement.executeUpdate();
        connection.commit();
    } finally {
        DatabaseSql.close(statement);
    }
}
How can I test methods like this with mockito and junit?
Thanks in advance.

A method like this isn't really a candidate for unit testing, because its whole purpose is to interact with the database. Maybe you want to test that you're interacting with the database correctly. This would be a valid test, but to do that, there would need to be a database involved.
Basically, we're talking about an integration test now, not a unit test. And I can't see that Mockito would be very much help to you, although JUnit certainly would.
In the past, the way I've tested code like this is with a lightweight in-memory database. There are a few of these, but the one that I would recommend is H2 (h2database.com). This is fairly fast and easy to use, once you've got the H2 jar in your path.
You probably want your integration test to do the following.
Create a dummy table to record procedure calls.
Create a dummy DEL_DATA procedure, which does nothing but record in the dummy table the parameters it was called with.
Run the method.
Select from the dummy table, to verify that the procedure was called correctly.
With H2, you can run such tests in "in memory" mode, which means there is no need for any clean-up step at the end of each test.
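For illustration, here is a rough sketch of what such a test could look like with JUnit 4 and the H2 jar on the classpath. DataDao and the AnObject constructor are hypothetical stand-ins for the real class under test and its argument, and the dummy DEL_DATA procedure is faked with an H2 Java alias (H2 has no PL/SQL), so check the CREATE ALIAS syntax and CallableStatement behaviour against your H2 version before relying on it.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class DeleteDataIntegrationTest {

    private Connection connection;

    // Stand-in for the Oracle DEL_DATA procedure: H2 can expose a static Java
    // method as a callable alias, and it passes the current connection when the
    // first parameter is java.sql.Connection.
    public static void delData(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("insert into PROC_CALLS(ID) values (?)");
        try {
            ps.setInt(1, id);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }

    @Before
    public void setUp() throws Exception {
        // Purely in-memory database; it disappears when the last connection closes.
        connection = DriverManager.getConnection("jdbc:h2:mem:test");
        connection.setAutoCommit(false); // deleteData() calls commit() itself
        Statement s = connection.createStatement();
        try {
            s.execute("create table PROC_CALLS(ID int)");
            s.execute("create alias DEL_DATA for \""
                    + DeleteDataIntegrationTest.class.getName() + ".delData\"");
        } finally {
            s.close();
        }
    }

    @After
    public void tearDown() throws Exception {
        connection.close();
    }

    @Test
    public void recordsTheIdItWasCalledWith() throws Exception {
        // DataDao and AnObject(42) are hypothetical names for the code under test.
        new DataDao().deleteData(connection, new AnObject(42));

        Statement s = connection.createStatement();
        try {
            ResultSet rs = s.executeQuery("select ID from PROC_CALLS");
            assertTrue("DEL_DATA was never called", rs.next());
            assertEquals(42, rs.getInt(1));
        } finally {
            s.close();
        }
    }
}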

There is no point writing a unit test for this code. Once you mock out the DB access parts, there is no logic left for you to unit test.
You need to unit test your business logic, not your persistence code.

Well the short answer is "you can't, that's not what it is designed for".
Besides which, your deleteData method is not directly testable (as originally posted, its signature was even missing a parameter name).
In order to test whether your functionality works, you would have to first invoke your deleteData method, then attempt to load the deleted data (assuming your data store is ACID), and assert that the loaded data does not exist. That is not a unit test (because it is not isolated).
Either rewrite your persistence in such a way as to be testable (as a unit), or alternatively, test this in an integration test instead of a unit test.

You should not mock the JDBC calls - it can be done, but it is too complex and there is not much value in doing it. Instead, you would mock the deleteData method to test other methods that call it.
To test the deleteData method itself, you will need to write an integration test that connects to a real database or an embedded database.
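For the first part - mocking out the persistence call when testing the code that uses it - a minimal Mockito sketch could look like the following. DataService, its remove method and the AnObject constructor are hypothetical names used only for the example.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.sql.Connection;

import org.junit.Test;

public class DataServiceTest {

    @Test
    public void removesTheRecordWhenAsked() throws Exception {
        DataDao dao = mock(DataDao.class);
        Connection connection = mock(Connection.class);
        AnObject expired = new AnObject(42); // hypothetical constructor

        // DataService is a hypothetical business-layer class that delegates to the DAO.
        new DataService(dao).remove(connection, expired);

        // The unit test only checks that the service delegates to the DAO;
        // whether the SQL actually works is left to the integration test.
        verify(dao).deleteData(connection, expired);
    }
}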

Related

Unit, Integration or Feature Test?

A simple question: How do you differentiate between a feature, unit and integration test?
There are a lot of differing opinions, but I'm specifically trying to determine how to organise a Laravel test which touches a model's relationship. Here is an example of some PHP code which would require testing:
public function prices()
{
    return $this->hasMany(Prices::class);
}

public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
The test descriptions as I understand them (feel free to correct me):
Unit test
Tests the smallest part of your code
Does not touch the database
Does not interact with any other part of the system
Integration test
Tests part of the system working together
e.g. controllers which call helper functions which need to be tested together
Feature test
Blackbox test
e.g. Call an api end point, see that it has returned the correct JSON response
Here is my issue given those descriptions:
My Laravel model test needs to test the smallest unit of code - the calculated accessor of a model, which makes it feel like a Unit test
But, it touches the database when it loads the model's relationship
It doesn't feel like an Integration test, because it is only touching other related models, not internal or external services
Other property accessor tests in Laravel would fall under Unit tests when they do not touch the database or the model's relationships
Separating these types of tests into integration tests would mean that a single model's tests against its properties are fragmented between integration and unit tests
So, without mocking relationships between models, where would my test belong?
If I’m interpreting your original question correctly, I think the killer constraint here is:
So, without mocking relationships between models, where would my test belong?
If mocking isn't allowed and you're required to touch a DB then, by your (and Google's) definition, it has to belong as an integration/medium-size test :)
The way I think of this is that the getPriceAttribute functionality is separate from the DB. Even though it's in the model, the prices could come from anywhere. Right now it's an RDBMS, but what if your org gets really big and that data is split out into another service? Basically, I believe the capability of getPriceAttribute is distinct from the storage of the attributes:
public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
If you buy into this reasoning, it creates a logical separation that supports unit tests. prices() can be mocked to return a collection of 0, 1 and many (2) results. This test can be executed as a unit test, with orders-of-magnitude faster execution (i.e. on the order of 1 ms vs potentially 10s or 100s of ms talking to a local DB).
I am not familiar with the PHP test ecosystem, but one way to do this could be with a test-specific subclass (not sure if the following is valid PHP :p ):
class PricedModel extends YourModel {
    private $stub_prices;

    function __construct($stub_prices_supporting_first) {
        parent::__construct();
        $this->stub_prices = $stub_prices_supporting_first;
    }

    public function prices() {
        return $this->stub_prices;
    }
}
tests
function test_priced_model_0_prices() {
    $p = new PricedModel(new Prices(array()));
    assert($p->getPriceAttribute() === null);
}

function test_priced_model_1_price() {
    $p = new PricedModel(new Prices(array(1)));
    assert($p->getPriceAttribute() === 2);
}

function test_priced_model_2_prices() {
    $p = new PricedModel(new Prices(array(5, 1)));
    assert($p->getPriceAttribute() === 10);
}
The above should hopefully allow you to fully control the input to the getPriceAttribute method, to support direct IO-free unit testing.
——
Also, all the unit tests above can tell you is that you're able to process prices correctly; they don't give any feedback on whether you're able to query prices!
What distinguishes the tests is their respective goal:
Unit testing aims at finding those bugs that can be found in isolated small parts of the software. (Note that this does not say you must isolate - it only means your focus is on the isolated code. Isolation and mocking are often not needed to reach this goal: think of a call to a sin function - you almost never need to mock it, you just let your system under test call the original one.)
Integration testing aims at finding bugs in the interaction of two or more components, for example mutual misconceptions about an interface. These bugs cannot be found in the isolated software: if you test code in isolation, you also write your tests based on your (possibly wrong) understanding of the other components.
Feature tests as you describe them will then have the goal of finding further bugs, which the other tests so far could not detect. One example of such a bug could be that an old version of the feature was integrated (which was correct at that time, but lacked some functionality).
The conclusion, although it may be surprising, is that it is not strictly forbidden to access the database in unit tests. Consider the following scenario: you start writing unit tests and mock the database accesses. Later, you realize you can be lazier and just use the database without mocking - but otherwise you leave all the tests as they are. Your tests have not changed, and they will continue finding the bugs in the isolated code as before. They may run a bit slower now, and the setup may be more complex than with the mocked database, but the goal of the test suite is the same - with or without mocking the database.
This scenario simplifies things a bit, because there may be test cases that can only be done with a mock: for example, testing the case that the database gets corrupted in a specific way and your code handles this properly. With the real database such test cases may be practically impossible to set up.

How do I completely avoid using a database in RSpec tests?

I want to use FactoryGirl to build in-memory stubs of models, then have all ActiveRecord queries run against only those. For example:
# Assume we start with an empty database, a Foo model,
# and a Foo factory definition.
#foo_spec.rb
stubbed_foo = FactoryGirl.build_stubbed(:foo)
# Elsewhere, deep in the guts of application
Foo.first() # Ideally would return the stubbed_foo we created
# in the test. Currently this returns nil.
The solution might be to use an in-memory database. But is the above scenario possible?
If your reason for avoiding the database, is to speed up your tests, then there are better ways.
Use FactoryGirl.build as much as possible instead of create. This works as long as the record won't be fetched from the database by your code. This works well for unit tests with well-structured code. (For example, it helps to use Service Objects and unit test them independently.)
For tests that actually need to read from the database (as in your Foo.first example call), you can use FactoryGirl.create and use transactional fixtures. This creates a database transaction at the beginning of each test example, and then rolls back the transaction at the end of the example. This can cause problems when you use callbacks in your ActiveRecord models such as after_commit.
If you use after_commit or other callbacks in your models that require the database transaction to close (or you use explicit transactions in your code), I recommend setting up DatabaseCleaner. Here's an example of how to configure and use it: https://gist.github.com/RobinDaugherty/9f4e5f782d9fdbe191a23de30ad8b539

TDD Data Access layer

In TDD, I've been testing business logic by mocking data access functionalities.
But in reality I need the layers below the business layer to be implemented as well for the application to work.
Should I be implementing the data access layer using TDD?
Based on the discussions I've seen on web, unit tests should not be connecting to any external resources such as databases, web services etc.. If they connect, then they become integration tests.
Could someone shed some light on this please.
Thank you very much.
You are right: contact with the outside world makes it an integration test, but that contact is important to test as well. Using TDD should make it apparent to you that the contact surface needs to be as small as possible. That can be achieved by using wrappers for each record, or similar techniques.
If you're using something like Hibernate, for example, and you have any kind of logic in your DAO, you can mock out the calls to e.g. Session and Query and unit test without hitting the database.
If you want to test the queries themselves you could use an in-memory database and something like DbUnit. I would count these as integration tests and run them separately as they tend to take longer.
Here's an example of a typical Java Spring/Hibernate DAO method with some logic you might want to test:
public List<Entity> searchEntities(final String searchText, final int maxResults) {
    String sql = "select * from entity where field like :searchText";
    SQLQuery query = sessionFactory.getCurrentSession().createSQLQuery(sql);
    query.addEntity(Entity.class); // map the raw rows back to Entity instances
    if (maxResults != 0) {
        query.setMaxResults(maxResults);
    }
    query.setString("searchText", "%" + searchText + "%");
    return query.list();
}
Using a mocking framework you could mock sessionFactory, session and query and create a unit test that has expectations that query.setMaxResults is only called if it doesn't equal 0 and that query.setString is called with the correct string value. You could also assert that whatever is returned from query.list() is the thing returned by the method.
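To make that concrete, here is a minimal Mockito sketch of the kind of unit test described above. The EntityDao class name and its constructor injection of the SessionFactory are assumptions made for the example; adapt them to however your DAO actually receives its dependencies.

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.anyInt;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.ArrayList;
import java.util.List;

import org.hibernate.SQLQuery;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.junit.Test;

public class EntityDaoTest {

    @Test
    public void doesNotLimitResultsWhenMaxResultsIsZero() {
        SessionFactory sessionFactory = mock(SessionFactory.class);
        Session session = mock(Session.class);
        SQLQuery query = mock(SQLQuery.class);
        when(sessionFactory.getCurrentSession()).thenReturn(session);
        when(session.createSQLQuery(anyString())).thenReturn(query);

        List<Entity> rows = new ArrayList<Entity>();
        when(query.list()).thenReturn(rows);

        // EntityDao and its constructor are hypothetical; inject the mocked
        // SessionFactory however your real DAO is wired up.
        EntityDao dao = new EntityDao(sessionFactory);
        List<Entity> result = dao.searchEntities("foo", 0);

        // maxResults == 0 means no limit should be applied...
        verify(query, never()).setMaxResults(anyInt());
        // ...but the search text must still be wrapped in wildcards.
        verify(query).setString("searchText", "%foo%");
        // Whatever the query returns is what the method returns.
        assertSame(rows, result);
    }
}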
However, this couples your test to the implementation of this method. Also, if your DAO method has a lot of logic in it, you should consider refactoring and maybe moving that logic to a service layer, where you could unit test it separately from any database interaction.
You can use Dev Magic Fake to fake the DAL, so you can work with TDD as you need without writing any code for the fake DAL. For example:
Just add a reference to DevMagicFake.dll and you can code the following:
[HttpPost]
public ActionResult Create(VendorForm vendorForm)
{
var repository = new FakeRepository<VendorForm>();
repository.Save(vendorForm);
return View("Page", repository.GetAll());
}
This will save the VendorForm permanently in memory, and you can retrieve it any time you need. You can also generate data for this object, or any other object in your model, without writing any code for that operation, so you can do TDD as if you had already finished your DAL. For more information about Dev Magic Fake, see the following link on CodePlex:
http://devmagicfake.codeplex.com
Thanks
M.Radwan

How can I integrate Oracle's row level security with MyBatis?

A project I am working on uses an Oracle database with row level security. I need to be able to invoke DBMS_APPLICATION_INFO.SET_CLIENT_INFO('userId'); before I can execute any other SQL statements. I am trying to figure out a way to implement this within MyBatis. Several ideas that I had, but was unable to make work, include the following:
Attempt 1
<select id="selectIds" parameterType="string" resultType="Integer">
call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select id from FOO
</select>
However, you can't execute two statements within a single JDBC call, and MyBatis doesn't have support for JDBC batch statements, or at least not that I could find.
Attempt 2
<select id="selectMessageIds" parameterType="string" resultType="Integer">
<![CDATA[
declare
type ID_TYP is table of AGL_ID.ID_ID%type;
ALL_IDS ID_TYP;
begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select ID bulk collect
into ALL_IDS
from FOO;
end;
]]>
</select>
However, that is as far as I got, because I learned that you can't return data from a procedure, only from a function, so there was no way to return the data.
Attempt 3
I've considered just creating a simple MyBatis statement that sets the client information, which would need to be called before executing other statements. This seems the most promising; however, we are using Spring and database connection pooling, and I am concerned about race conditions. I want to ensure that the client information won't bleed over and affect other statements, because the connections will not get closed - they will get reused.
Software/Framework Version Information
Oracle 10g
MyBatis 3.0.5
Spring 3.0.5
Update
Forgot to mention that I am also using MyBatis Spring 1.0.1
This sounds like a perfect candidate for transactions. You can create a @Transactional service (or DAO) base class that makes the DBMS_APPLICATION_INFO call. All your other service classes could extend the base and call the necessary SQL.
In the base class, you want to make sure that you only call the DBMS_APPLICATION function once. To do this, use the TransactionSynchronizationManager.hasResource() and bindResource() methods to bind a boolean or similar marker value to the current TX. Check this value to determine if you need to make the function call or not.
If the function call exists only for a 'unit of work' in the DB, this should be all you need. If the call exists for the duration of the connection, the base class will need to clean up in a finally block somehow.
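Here is a rough sketch of that base-class idea, assuming the methods run inside Spring-managed transactions. The class name, the ensureClientInfo method, and the use of a plain JdbcTemplate to issue the call are all illustrative choices, not anything prescribed by MyBatis or by the suggestion above; with MyBatis-Spring the same pattern works with a small mapper statement instead.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

import org.springframework.jdbc.core.ConnectionCallback;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public abstract class ClientInfoAwareDao {

    private static final String CLIENT_INFO_KEY = "DBMS_APPLICATION_INFO.CLIENT_INFO";

    private final JdbcTemplate jdbcTemplate;

    protected ClientInfoAwareDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Call at the start of each transactional method, before any secured SQL runs. */
    protected void ensureClientInfo(final String userId) {
        if (TransactionSynchronizationManager.hasResource(CLIENT_INFO_KEY)) {
            return; // already set for the current transaction
        }
        // JdbcTemplate participates in Spring's transaction synchronization, so this
        // runs on the same pooled connection as the rest of the transaction.
        jdbcTemplate.execute(new ConnectionCallback<Void>() {
            @Override
            public Void doInConnection(Connection con) throws SQLException {
                CallableStatement cs =
                        con.prepareCall("{call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(?)}");
                try {
                    cs.setString(1, userId);
                    cs.execute();
                } finally {
                    cs.close();
                }
                return null;
            }
        });
        TransactionSynchronizationManager.bindResource(CLIENT_INFO_KEY, Boolean.TRUE);
        // Unbind the marker when the transaction completes, so the next transaction on
        // this thread sets the client info again (and this is also where you could clear
        // the info on the connection before it goes back to the pool, if needed).
        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCompletion(int status) {
                        TransactionSynchronizationManager.unbindResource(CLIENT_INFO_KEY);
                    }
                });
    }
}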
Rather than a base class, another possibility would be to use AOP and do the function call before method invocation and the clean up as finally advice. The key here would be to make sure that your interceptor is called after Spring's TransactionInterceptor (i.e. after the tx has started).
One of the safest solutions would be to have a specialized DataSourceUtils (http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/jdbc/datasource/DataSourceUtils.html), override doGetConnection(DataSource dataSource), and set the client info on the connection.
Write your own abstraction over SqlMapClientDaoSupport to pass client information.
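The DataSourceUtils methods are static, so they cannot literally be overridden; a close variation on the same idea (my adaptation, not the answerer's exact suggestion) is to wrap the pooled DataSource in Spring's DelegatingDataSource and set the client info whenever a connection is handed out. ClientInfoHolder is a hypothetical placeholder for however your application tracks the current user.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.DelegatingDataSource;

public class ClientInfoDataSource extends DelegatingDataSource {

    public ClientInfoDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        CallableStatement cs = connection.prepareCall(
                "{call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(?)}");
        try {
            // Hypothetical thread-local lookup of the current user id.
            cs.setString(1, ClientInfoHolder.currentUserId());
            cs.execute();
        } finally {
            cs.close();
        }
        return connection;
    }
}

Because pooled connections are reused, setting the client info on every check-out (rather than once per physical connection) is what keeps one user's id from bleeding into another user's statements, which is the race-condition concern raised in the question.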

TDD and mocking

First of all, I have to say, I'm new to mocking. So maybe I'm missing a point.
I'm also just starting to get used to the TDD approach.
So, in my actual project I'm working on a class in the business layer, while the data layer has yet to be deployed. I thought, this would be a good time to get started with mocking. I'm using Rhino Mocks, but I've come to the problem of needing to know the implementation details of a class before writing the class itself.
Rhino Mocks checks whether all the methods expected to be called are actually called. So I often need to know which mocked method is called first by the tested method, even though they could be called in any order. Because of that, I often end up writing complicated methods before I test them, because only then do I already know in which order the methods are being called.
A simple example:
public void CreateAandB(bool arg1, bool arg2) {
    if (arg1)
        daoA.Create();
    else throw new Exception();
    if (arg2)
        daoB.Create();
    else throw new Exception();
}
If I want to test the error handling of this method, I'd have to know which method is being called first. But I don't want to be bothered with implementation details when writing the test first.
Am I missing something?
You have 2 choices. If the method should result in some change in your class, then you can test the results of your method instead. So you can call CreateAandB(true, false) and then call some other method to see if the correct thing was created. In this situation your mock objects will probably be stubs which just provide some data.
If daoA and daoB are objects which are injected into your class and actually create data in the DB or similar, which you can't verify the results of in the test, then you want to test the interaction with them. In that case you create the mocks and set the expectations, then call the method and verify that the expectations are met. In this situation your mock objects will be mocks and will verify the expected behaviour.
Yes, you are testing implementation details, but you are testing whether your method uses its dependencies correctly, which is what you want to test, not how it uses them, which is the detail you are not really interested in.
EDIT
IDao daoA = MockRepository.GenerateMock<IDao>(); // create mock
daoA.Expect(dao => dao.Create()); // set expectation
...
daoA.VerifyAllExpectations(); // check that the Create method was called
You can ensure that the expectations happen in a certain order, but not using the AAA syntax, I believe (source from 2009, might have changed since; EDIT: see here for an option which might work), but it seems someone has developed an approach which might allow this here. I've never used that and can't verify it.
As for needing to know which method was called first so you can verify the exception you have a couple of choices:
Have a different message in your exception and check that to determine which exception was raised.
Expect a call to daoA in addition to expecting the exception. If you don't get the call to daoA then the test fails as the exception must have been the first one.
Oftentimes you just need fake objects, not mocks. Mock objects are meant to test component interaction, and often you can avoid this by querying the state of the SUT directly. Most practical uses of mocks are to test interaction with some external system (DB, file system, web service, etc.); for other things you should be able to query the system state directly.
