A project I am working on uses an Oracle database with row level security. I need to call DBMS_APPLICATION_INFO.SET_CLIENT_INFO('userId') before I can execute any other SQL statements. I am trying to figure out a way to implement this within MyBatis. Several ideas that I had, but was unable to make work, include the following:
Attempt 1
<select id="selectIds" parameterType="string" resultType="Integer">
call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select id from FOO
</select>
However, you can't execute two statements within a single JDBC call, and MyBatis doesn't have support for JDBC batch statements, or at least none that I could find.
Attempt 2
<select id="selectMessageIds" parameterType="string" resultType="Integer">
<![CDATA[
declare
type ID_TYP is table of AGL_ID.ID_ID%type;
ALL_IDS ID_TYP;
begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId});
select ID bulk collect
into ALL_IDS
from FOO;
end;
]]>
</select>
However, that is as far as I got, because I learned that you can't return data from a procedure, only from a function, so there was no way to return the data.
Attempt 3
I've considered just creating a simple MyBatis statement that sets the client information, which would need to be called before executing other statements. This seems the most promising; however, we are using Spring and database connection pooling, and I am concerned about race conditions. I want to ensure that the client information won't bleed over and affect other statements, because the connections will not get closed, they will get reused.
Software/Framework Version Information
Oracle 10g
MyBatis 3.0.5
Spring 3.0.5
Update
Forgot to mention that I am also using MyBatis Spring 1.0.1
This sounds like a perfect candidate for transactions. You can create a @Transactional service (or DAO) base class that makes the DBMS_APPLICATION_INFO call. All your other service classes could extend the base and call the necessary SQL.
In the base class, you want to make sure that you only call the DBMS_APPLICATION function once. To do this, use the TransactionSynchronizationManager.hasResource() and bindResource() methods to bind a boolean or similar marker value to the current TX. Check this value to determine if you need to make the function call or not.
If the function call exists only for a 'unit of work' in the DB, this should be all you need. If the call exists for the duration of the connection, the base class will need to clean up in a finally block somehow.
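To make the once-per-transaction check concrete, here is a minimal sketch of such a base class. It assumes a hypothetical ClientInfoMapper whose single MyBatis statement issues the SET_CLIENT_INFO call; the names and wiring are illustrative, not taken from the question.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public abstract class ClientInfoAwareService {

    private static final String CLIENT_INFO_KEY = "CLIENT_INFO_SET";

    // Hypothetical MyBatis mapper whose only statement is
    // "call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(#{userId})".
    @Autowired
    private ClientInfoMapper clientInfoMapper;

    /** Call this at the start of every @Transactional service method. */
    protected void ensureClientInfo(String userId) {
        // Only issue the DBMS_APPLICATION_INFO call once per transaction.
        if (!TransactionSynchronizationManager.hasResource(CLIENT_INFO_KEY)) {
            clientInfoMapper.setClientInfo(userId);
            TransactionSynchronizationManager.bindResource(CLIENT_INFO_KEY, Boolean.TRUE);
            // Unbind the marker when the transaction completes so the pooled
            // connection is not assumed to be "tagged" forever.
            TransactionSynchronizationManager.registerSynchronization(
                    new TransactionSynchronizationAdapter() {
                        @Override
                        public void afterCompletion(int status) {
                            TransactionSynchronizationManager.unbindResource(CLIENT_INFO_KEY);
                        }
                    });
        }
    }

    // Hypothetical mapper interface, declared here only to keep the sketch self-contained.
    public interface ClientInfoMapper {
        void setClientInfo(String userId);
    }
}

If the client info must not leak to the next user of the pooled connection, afterCompletion would also be a natural place to issue SET_CLIENT_INFO(null).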
Rather than a base class, another possibility would be to use AOP and do the function call before method invocation and the clean up as finally advice. The key here would be to make sure that your interceptor is called after Spring's TransactionInterceptor (i.e. after the tx has started).
One of the safest solutions would be to have a specialized DataSourceUtils (http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/jdbc/datasource/DataSourceUtils.html), override doGetConnection(DataSource dataSource), and set the client info on the connection there.
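Note that DataSourceUtils.doGetConnection is a static method, so it cannot literally be overridden; a closely related variant of the same idea is to wrap the DataSource itself and tag every connection it hands out. A rough sketch using Spring's DelegatingDataSource follows; how the current user id is resolved is an assumption.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class ClientInfoDataSource extends DelegatingDataSource {

    public ClientInfoDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        // Tag the connection with the current user before anyone uses it.
        CallableStatement cs =
                connection.prepareCall("{call DBMS_APPLICATION_INFO.SET_CLIENT_INFO(?)}");
        try {
            cs.setString(1, currentUserId());
            cs.execute();
        } finally {
            cs.close();
        }
        return connection;
    }

    private String currentUserId() {
        // Placeholder: resolve the user id from your security context or session.
        return "userId";
    }
}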
Write your own abstraction over SqlMapClientDaoSupport to pass client information.
Related
I'm calling a procedure that returns different data in its result set based on a request type.
For this purpose I use stored-proc-outbound-gateway.
The request type is passed to the procedure, but it isn't available inside the mapper.
I could use ColumnMetaData to process the ResultSet, but I would prefer to have request-type-specific mappers.
Another solution is to have as many gateways as request types, but maybe there is something better.
Could I specify which mapper to use, based on payload, in stored-proc-outbound-gateway?
Well, to be honest, if I were you I'd really make separate components for the particular types. In the future the logic might become more complex, and it will be easier to modify one specific function than to untangle all those if..else branches.
Nevertheless your request is different...
As you see, there is only one possible hook for you there: RowMapper injection for a particular procedure parameter.
I can suggest a solution like a RoutingRowMapper, which consults some ThreadLocal variable to select the proper RowMapper to delegate to (sketched at the end of this answer).
The idea is picked up from the AbstractRoutingDataSource. Also there is something like SimpleRoutingConnectionFactory in the Spring AMQP.
You can populate the ThreadLocal before the stored-proc-outbound-gateway is invoked, and its value really can be your desired request type.
Another trick might be based on the result from the procedure, where the ResultSet contains a column hinting which target RowMapper to choose.
Either way, your task can be achieved only via a composite RowMapper. The stored-proc-outbound-gateway doesn't have any logic to tackle this, and it won't: it's just not its responsibility.
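A rough sketch of that RoutingRowMapper idea follows. The request-type keys and the concrete delegate mappers are assumptions, and the ThreadLocal must be populated (and cleared) around the gateway call.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import org.springframework.jdbc.core.RowMapper;

public class RoutingRowMapper<T> implements RowMapper<T> {

    // Populated before the stored-proc-outbound-gateway is invoked.
    private static final ThreadLocal<String> CURRENT_TYPE = new ThreadLocal<String>();

    private final Map<String, RowMapper<? extends T>> delegates;

    public RoutingRowMapper(Map<String, RowMapper<? extends T>> delegates) {
        this.delegates = delegates;
    }

    public static void setCurrentType(String requestType) {
        CURRENT_TYPE.set(requestType);
    }

    public static void clear() {
        CURRENT_TYPE.remove();
    }

    @Override
    @SuppressWarnings("unchecked")
    public T mapRow(ResultSet rs, int rowNum) throws SQLException {
        RowMapper<? extends T> delegate = delegates.get(CURRENT_TYPE.get());
        if (delegate == null) {
            throw new IllegalStateException(
                    "No RowMapper registered for request type " + CURRENT_TYPE.get());
        }
        return (T) delegate.mapRow(rs, rowNum);
    }
}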
I read that getOne() is lazily loaded and findOne() fetches the whole entity right away. I've checked the debugging log and I even enabled monitoring on my SQL server to see which statements get executed, and I found that both getOne() and findOne() generate and execute the same query. However, when I use getOne() the values are initially null (except for the id, of course).
So could anyone please tell me, if both methods executes the same query on the database, why should I use one over the other? I'm basically looking for a way to fetch an entity without getting all of its children/attributes.
EDIT1:
Entity code
Dao code:
@Repository
public interface FlightDao extends JpaRepository<Flight, Long> {
}
Debugging log findOne() vs getOne()
EDIT2:
Thanks to Chlebik I was able to identify the problem. Like Chlebik stated, if you try to access any property of the entity fetched by getOne(), the full query will be executed. In my case, I was checking the behavior while debugging, moving one line at a time, and I totally forgot that while debugging the IDE tries to access object properties for debugging purposes (or at least that's what I think is happening), so debugging triggers the full query execution. I stopped debugging and then checked the logs, and everything appears to be normal.
getOne() vs findOne() (this log is taken from the MySQL general_log and not from Hibernate).
Debugging log
No debugging log
It is just a guess, but in 'pure JPA' there is an EntityManager method called getReference. It is designed to retrieve an entity with only its ID in it. Its use was mostly to indicate that a reference exists without the need to retrieve the whole entity. Maybe the code will tell more:
// em is EntityManager
Department dept = em.getReference(Department.class, 30); // Gets only entity with ID property, rest is null
Employee emp = new Employee();
emp.setId(53);
emp.setName("Peter");
emp.setDepartment(dept);
dept.getEmployees().add(emp);
em.persist(emp);
I assume then that getOne serves the same purpose. Why are the generated queries the same, you ask? Well, AFAIR, in the JPA bible (Pro JPA 2 by Mike Keith and Merrick Schincariol) almost every paragraph contains something like 'the behaviour depends on the vendor'.
EDIT:
I've created my own setup. Finally I came to the conclusion that if you interfere in any way with an entity fetched with getOne (even just calling entity.getId()), it causes SQL to be executed. However, if you are using it only to create a proxy (e.g. as a relationship indicator, as shown in the code above), nothing happens and no additional SQL is executed. So I assume that in your service class you do something with this entity (call a getter, log something) and that is why the output of these two methods looks the same.
ChlebikGitHub with example code
SO helpful question #1
SO helpful question #2
Suppose you want to remove an Entity by id. In SQL you can execute a query like this:
"delete from TABLE_NAME where id = ?".
And in Hibernate, you first have to get a managed instance of your Entity and then pass it to the EntityManager.remove method.
Entity a = em.find(Entity.class, id);
em.remove(a);
But this way, you have to fetch the entity you want to delete from the database before deleting it. Is that really necessary?
The EntityManager.getReference method returns a Hibernate proxy without querying the database or setting the properties of your entity, unless you try to access the properties of the returned proxy yourself.
The JpaRepository.getOne method uses EntityManager.getReference instead of EntityManager.find. So whenever you need a managed object but don't really need to query the database for it, it's better to use JpaRepository.getOne to eliminate the unnecessary query.
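For example, with the FlightDao from the question, a delete-by-id could then look roughly like this (flightDao and flightId are placeholder names, and whether extra SQL is issued still depends on the provider and on cascade settings):

// Sketch: delete without a prior SELECT, using the reference proxy from getOne().
Flight reference = flightDao.getOne(flightId); // returns a proxy, no query issued yet
flightDao.delete(reference);                   // only the DELETE statement is executed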
If no data is found in the table for a particular ID, findOne will return null, whereas getOne will throw javax.persistence.EntityNotFoundException.
Both have their own pros and cons. Please see example below:
If "data not found" is not a failure case for you (e.g. you are just verifying that the data has been deleted, and a null result means success), you can use findOne.
In the other case, you can use getOne.
This can be adapted to your requirements, if you know the expected outcomes.
I am trying to write a unit test for some jdbc procedure calls with mockito.
It is my first time to write tests with mock objects (mockito).
The method I am trying to test looks something like this...
public void deleteData(final Connection connection, final AnObject object) throws SQLException {
CallableStatement statement = null;
statement = connection.prepareCall("{call DEL_DATA(?)}");
statement.setInt(1, object.getId());
statement.executeUpdate();
connection.commit();
DatabaseSql.close(statement);
}
How can I test methods like this with mockito and junit?
Thanks in advance.
A method like this isn't really a candidate for unit testing, because its whole purpose is to interact with the database. Maybe you want to test that you're interacting with the database correctly. This would be a valid test, but to do that, there would need to be a database involved.
Basically, we're talking about an integration test now, not a unit test. And I can't see that Mockito would be very much help to you, although JUnit certainly would.
In the past, the way I've tested code like this is with a lightweight in-memory database. There are a few of these, but the one that I would recommend is H2 (h2database.com). This is fairly fast and easy to use, once you've got the H2 jar in your path.
You probably want your integration test to do the following.
Create a dummy table to record procedure calls,
Create a dummy DEL_DATA procedure, which does nothing but record what parameters it was called with in the dummy table
Run the method
Select from the dummy table, to verify that the procedure was called correctly.
With H2, you can run such tests in "in memory" mode, which means there is no need for any clean-up step at the end of each test.
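A rough sketch of such a test follows, assuming the deleteData method from the question lives on a class called MyDao; the table, alias and entity names are made up.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.Before;
import org.junit.Test;

public class DeleteDataIntegrationTest {

    private Connection connection;

    // Static method backing the dummy DEL_DATA procedure; H2 passes the
    // connection automatically because it is the first parameter.
    public static void delData(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("insert into PROC_CALLS(ID) values (?)");
        ps.setInt(1, id);
        ps.executeUpdate();
        ps.close();
    }

    @Before
    public void setUp() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:test");
        connection.setAutoCommit(false); // deleteData() calls commit() itself
        Statement st = connection.createStatement();
        st.execute("create table PROC_CALLS(ID int)");
        st.execute("create alias DEL_DATA for \""
                + DeleteDataIntegrationTest.class.getName() + ".delData\"");
        st.close();
    }

    @Test
    public void deleteDataCallsProcedureWithObjectId() throws Exception {
        AnObject object = new AnObject(); // AnObject and MyDao are the question's (assumed) classes
        object.setId(42);

        new MyDao().deleteData(connection, object);

        Statement st = connection.createStatement();
        ResultSet rs = st.executeQuery("select ID from PROC_CALLS");
        assertTrue("procedure should have been called", rs.next());
        assertEquals(42, rs.getInt(1));
        st.close();
    }
}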
There is no point writing a unit test for this code. Once you mock the DB access parts, there is no logic left for you to unit test.
You need to unit test your business logic, not your persistence code.
Well the short answer is "you can't, that's not what it is designed for".
Besides which, your "deleteData" method is not directly testable and has an invalid signature.
In order to test whether your functionality works, you would have to first invoke your deleteData method, then attempt to load the deleted data (assuming your data store is ACID), and assert that the loaded data does not exist. That is not a unit test (because it is not isolated).
Either rewrite your persistence in such a way as to be testable (as a unit), or alternatively, test this in an integration test instead of a unit test.
You should not mock the JDBC calls: it can be done, but it is too complex and there is not much value in doing it. Instead, you would mock the deleteData method to test other methods that call it.
To test the deleteData method itself you will need to write an integration test that connects to a real database or an embedded database.
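To illustrate the first point, here is a small sketch; MyDao, MyService and removeObject are hypothetical names, since the class that owns deleteData and its callers aren't shown in the question.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.sql.Connection;
import org.junit.Test;

public class MyServiceTest {

    @Test
    public void removeObjectDelegatesToDeleteData() {
        Connection connection = mock(Connection.class);
        MyDao dao = mock(MyDao.class);          // the class owning deleteData (assumed name)
        AnObject anObject = new AnObject();
        MyService service = new MyService(dao); // a hypothetical caller of deleteData

        service.removeObject(connection, anObject);

        // The caller's job is just to delegate; that is all we verify here.
        verify(dao).deleteData(connection, anObject);
    }
}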
I am using the Derby embedded database for my Maven test cases, and I am not able to use SUBSTR inside TO_DATE; it's giving an error.
Actually this query is used in the original application, which is connected to an Oracle DB. Now I am writing Maven test cases with the Derby embedded DB and am unable to execute it. The thing is, I should not modify the original query, so I need some workaround to rectify this issue.
My query will be like this.
SELECT TO_DATE (SUBSTR (testdate, 1, 9), 'DD-MM-RR') FROM testtable
Please help me on this issue. Thanks.
SUBSTR cannot be used with DATE, and you cannot override it. SQL is not easily re-used between databases. The easiest option is to change the SQL.
The harder option is to step deeper into Derby:
If you want to make this work without changing the query, you could wrap the Connection or the DataSource and change the SQL at a lower level.
For this to work you need access to the Connection object in your test:
Connection wrapped = new WrappedConnection(originalConnection);
This is a short example of a wrapped Connection with a migrate function (this is basically the Adapter pattern):
public class WrappedConnection implements Connection
{
    private final Connection origConnection;

    public WrappedConnection(Connection rv)
    {
        origConnection = rv;
    }

    //I left out the other methods, which you have to implement accordingly
    //(most of them simply delegate to origConnection)
    public PreparedStatement prepareStatement(String pSql) throws SQLException
    {
        //this serves as a bridge between Oracle and Derby:
        //rewrite the Oracle SQL before handing it to the real connection
        String sql = migrate(pSql);
        return origConnection.prepareStatement(sql);
    }
}
The migrate function could do something like this:
public String migrate(String sql)
{
return sql.replace("SUBSTR", "SUBSTR_DATE");
}
But you would have to create your own Derby Function SUBSTR_DATE.
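The answer stops short of showing what SUBSTR_DATE might look like. Below is a rough sketch of registering such a function in embedded Derby; note that the migrate step would then also have to deal with the surrounding TO_DATE(..., 'DD-MM-RR') call (Derby has no TO_DATE either), so the exact rewrite rule, the date format handling and the names are all assumptions.

import java.sql.Connection;
import java.sql.Date;
import java.sql.SQLException;
import java.sql.Statement;
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DerbyFunctions {

    // Static method backing the Derby function: substring the value, then
    // parse it with a two-digit year (a rough stand-in for Oracle's DD-MM-RR).
    public static Date substrDate(String value, int start, int length) throws ParseException {
        if (value == null) {
            return null;
        }
        String part = value.substring(start - 1, start - 1 + length).trim();
        java.util.Date parsed = new SimpleDateFormat("dd-MM-yy").parse(part);
        return new Date(parsed.getTime());
    }

    // Register the function once against the embedded Derby test database.
    public static void register(Connection connection) throws SQLException {
        Statement st = connection.createStatement();
        try {
            st.execute("CREATE FUNCTION SUBSTR_DATE(VAL VARCHAR(30), START_POS INTEGER, LEN INTEGER) "
                    + "RETURNS DATE LANGUAGE JAVA PARAMETER STYLE JAVA NO SQL "
                    + "EXTERNAL NAME 'DerbyFunctions.substrDate'");
        } finally {
            st.close();
        }
    }
}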
I can think of 2 options... I don't know how much sense either makes but...
Create a subclass of the original and (assuming that the line is only used in one method of that class) simply override that single method, leaving the rest of the code the same.
If the class that calls this SQL sends the statement through a customised sendSQLStatement(String sql) type method, which handles the creation of the statement object, the try/catch error handling etc. and returns the result set, you could add a check in that method for the DB engine being used.
This info is obtainable from databaseMetaData.getDatabaseProductName(), or alternatively from the getDriverName() method. You then test this string to see if it contains the word 'derby' and, if so, send a different type of SQL.
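As a rough sketch of that check (the helper name, the idea of passing in both query variants, and the 'derby' string test are assumptions):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SqlDispatcher {

    // Picks the SQL variant based on the database engine behind the connection.
    public ResultSet sendSQLStatement(Connection connection, String oracleSql, String derbySql)
            throws SQLException {
        String product = connection.getMetaData().getDatabaseProductName();
        boolean isDerby = product != null && product.toLowerCase().contains("derby");
        return connection.createStatement().executeQuery(isDerby ? derbySql : oracleSql);
    }
}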
Of course, later down the road you will need to perform a final test to ensure that the original Oracle code still works.
You may even take the opportunity to modify the whole code snippet to make it more DB-agnostic (i.e. cast the value to a string type, or to a date from a long, and then do the substring).
I'm new to MVC, EF and the like so I followed the MVC3 tutorial at http://www.asp.net/mvc and set up an application (not yet finished with everything though).
Here's the "architecture" of my application so far
GenericRepository
PropertyRepository inherits GenericRepository for "Property" Entity
HomeController, which has the PropertyRepository as a member.
Example:
public class HomeController
{
private readonly PropertyRepository _propertyRepository
= new PropertyRepository(new ConfigurationDbContext());
}
Now let's consider the following:
I have a method in my GenericRepository that takes quite some time, invoking 6 queries which need to be in one transaction in order to maintain integrity. My Google results yielded that SaveChanges() is considered one transaction, so if I make multiple changes to my context and then call SaveChanges() I can be "sure" that these changes are "atomic" on the SQL Server. Right? Wrong?
Furthermore, there is an action method that calls the _propertyRepository.InvokeLongAndComplex() method.
I just found out that MVC creates a new controller for each request, so I end up with multiple PropertyRepositories which mess up my database integrity. (I have to maintain a linked list of my properties in the database, and if a user moves a property it takes 6 steps to change the list accordingly, but that way I avoid looping through all entities when there are thousands...)
I thought about making my GenericRepository and my PropertyRepository static, so every HomeController uses the same repository, and synchronizing the InvokeLongAndComplex method to make sure there is only one thread making changes to the DB at a time.
I have the suspicion that this idea is not a good solution, but I fail to find a suitable one for this problem. Some people say it's okay to have static repositories (what happens with the context, though?). Others say to use IoC/DI (?), which sounds like a lot of work to set up (and I'm not even sure it solves my problem), but it seems that I could "tell" the container to always "inject" the same context object and the same repository, and then it would be enough to synchronize the InvokeLongAndComplex() method so that multiple threads can't mess up the integrity.
Why aren't data repositories static?
Answer 2:
2) You often want to have 1 repository instance per-request to make it easier to ensure that uncommited changes from one user don't mess things up for another user.
Why have a repository instance per request? Doesn't that mess up my linked list again?
Can anyone give me an advice or share a best practice which I can follow?
No! You must have a new context for each request, so even if you make your repositories static you will have to pass the current context instance to each of their methods instead of maintaining a single context inside the repository.
What do you mean by integrity in the first place? Are you dealing with transactions, concurrency issues or referential constraints? Handling all of these issues is your responsibility. EF provides some basic infrastructure for that, but the final solution is still up to your implementation.