I'm building an app using a proprietary API. To connect to the database I use a method that returns a Connection object, and then on that connection I call the appropriate methods to run queries on the database, for example:
Connection conn = JdbcServiceFactory.getInstance().getDefaultDatabase().getConnectionManager().getConnection();
PreparedStatement ps = conn.prepareStatement("select * from test");
If I choose to use Spring MVC 3 for my next project, what must I do to get the database connection set up? From what I've seen in the documentation, I have to use the datasource tag in the container and pass a URL, username, and password. As shown, I currently don't have to do that to get the connection.
At the end of the day your proprietary API must access some database (available on some server), using some credentials. You just don't see this. In Spring you must first define a DataSource. Either use an existing library like dbcp, bonecp or c3p0, or take one provided by your application server via JNDI. As long as it implements the DataSource interface, it doesn't matter which approach you choose; there is too much to explain each one in detail here.
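For example, a minimal sketch of a dbcp-backed DataSource bean using Java config (the driver class, URL and credentials below are placeholders, not values from your setup):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // commons-dbcp pooled DataSource; all values are placeholders
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.example.Driver");
        ds.setUrl("jdbc:example://host:1234/db");
        ds.setUsername("user");
        ds.setPassword("secret");
        return ds;
    }
}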
Once you have a DataSource bean set up, I strongly recommend using JdbcTemplate, which simplifies your JDBC code a lot, e.g.:
List<Map<String,Object>> res = jdbcTemplate.queryForList("select * from test");
...and much more.
UPDATE: If you want to use your existing legacy API with modern frameworks expecting a DataSource (pretty much all of them), implementing a DataSource adapter is trivial (the remaining methods can stay unimplemented, throwing UnsupportedOperationException):
public class LegacyDataSourceAdapter implements DataSource {

    @Override
    public Connection getConnection() throws SQLException {
        return JdbcServiceFactory.getInstance().getDefaultDatabase().getConnectionManager().getConnection();
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }

    //other methods are irrelevant
}
Now just create an instance of LegacyDataSourceAdapter (maybe as a Spring bean) and pass it to JdbcTemplate, Hibernate, myBatis...
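For example, a minimal sketch of the wiring (plain construction shown here; in Spring the same two objects would simply be declared as beans):

// Wrap the legacy API once, then hand the adapter to JdbcTemplate
DataSource legacyDataSource = new LegacyDataSourceAdapter();
JdbcTemplate jdbcTemplate = new JdbcTemplate(legacyDataSource);
List<Map<String, Object>> res = jdbcTemplate.queryForList("select * from test");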
BTW, what you have here is a first-class example of bad API design:
Connection conn = JdbcServiceFactory.
getInstance().
getDefaultDatabase().
getConnectionManager().
getConnection();
Related
I have a Spring Boot application that runs several services and uses an Oracle database. The database is maintained properly, indexes are also in place, and when executing SQL statements directly in SQL Developer, they execute in milliseconds.
In Spring Boot, I use this to execute the statement:
Session session = sessionFactory.getCurrentSession();
session.createQuery("from Table where id = :id and status = 0").setParameter("id", id);
Here are the config properties for the database:
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
hibernate.hbm2ddl.auto=none
Here is how the datasource is initialized in the datasource config:
@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource(url, name, pw);
    dataSource.setDriverClassName(...);
    return dataSource;
}
Recently, it takes so much time to acquire the database connection; it can go up to 10 seconds just for acquiring a connection. I don't think there is any problem in the query, and the database side is also fine. The resources of the server running this service and of the database server are fine, as is the network. The servers also have an auto-scale feature to create a new instance when memory runs low. I just can't figure out what I should do to improve the acquisition time. Could you please help?
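For what it's worth, DriverManagerDataSource is documented by Spring as a simple, non-pooling implementation that opens a new physical connection on every getConnection() call. A rough sketch of swapping in a pooled DataSource (HikariCP here, purely as an example; the driver class and pool size are placeholder assumptions):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;

@Bean
public DataSource dataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(url);            // same url/name/pw as before
    config.setUsername(name);
    config.setPassword(pw);
    config.setDriverClassName("oracle.jdbc.OracleDriver"); // placeholder driver class
    config.setMaximumPoolSize(10);     // tune for your workload
    return new HikariDataSource(config);
}

Whether this explains the 10-second delays depends on where the time is actually spent, but a pool at least removes the per-request physical connection setup.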
I am fetching the connection from the JdbcTemplate in the fashion below:
getJdbcTemplate().getDataSource().getConnection()
Is it necessary to close the connection fetched in the above manner? The Spring JdbcTemplate API states that connection closures will be handled automatically, so I am not sure if this is happening correctly.
http://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/jdbc.html
When you obtain the DataSource from the JdbcTemplate and use that to obtain a Connection, you are basically bypassing the JdbcTemplate completely. You now have a very complex way of obtaining a new Connection. Because this connection isn't managed by Spring but by you, you also need to close it and apply exception handling yourself.
It is better to use a ConnectionCallback instead to get a Connection. The JdbcTemplate will then manage the Connection and do all resource handling.
getJdbcTemplate().execute(new ConnectionCallback<Void>() {
    public Void doInConnection(Connection conn) throws SQLException, DataAccessException {
        // Your JDBC code here.
        return null;
    }
});
It would be even better to use one of the other JdbcTemplate methods and write proper code, which would save you from messing with plain JDBC code at all.
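For example, a minimal sketch (the test table and name column are made up for illustration):

// The template opens and closes the connection around the query,
// so there is nothing for the caller to release.
List<String> names = getJdbcTemplate().query("select name from test",
        new RowMapper<String>() {
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                return rs.getString("name");
            }
        });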
In a big company, our team provides APIs for accessing data in an Oracle DB. Until now, we used plain SQL (JDBC) to get/write the data in the database.
So most of the existing APIs looked like this (ok, not always that stupid :-)
public class DummyApi {

    private final DataSource datasource;

    public DummyApi(javax.sql.DataSource datasource) {
        this.datasource = datasource;
    }

    public void doSomething() throws SQLException {
        Connection connection = datasource.getConnection();
        PreparedStatement statement = connection.prepareStatement("plain sql query");
        statement.execute();
    }
}
Using such APIs is simple; it doesn't matter whether your end application is plain Java SE, Spring or Java EE. Furthermore, transaction APIs work properly with these APIs. We use them with the Spring TransactionManager (together with the TransactionAwareDataSourceProxy) and with JTA in CMA Java EE applications.
Now we are evaluating the use of JPA in new APIs. And the big question we currently struggle with is the following: how can we provide a simple interface so that the end application doesn't need to know about JPA? How can we initialize the EntityManager with a DataSource (for example provided as a constructor parameter)? And how can we roll back if there are old, plain JDBC APIs AND new JPA APIs in the same transaction (begin/rollback in the end application)?
Thanks for bringing a little light on the matter!
With JPA the datasource will normally be set in the persistence.xml. If you need some sort of dynamic datasources, then you can pass the DataSource as a property to Persistence.createEntityManagerFactory().
Most JPA providers provide a way to get the JDBC Connection if you want to mix JDBC. Normally this is accessed using em.unwrap(Connection.class). You could also use JTA or Spring to have the transaction share the same connection.
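A rough sketch of both points, with a made-up persistence-unit name; the javax.persistence.nonJtaDataSource property key is standard JPA, but whether a DataSource instance is accepted there, and whether unwrap(Connection.class) works, is provider-dependent:

public EntityManagerFactory createEntityManagerFactory(DataSource datasource) {
    Map<String, Object> props = new HashMap<String, Object>();
    // Standard JPA property; most providers accept a DataSource instance here
    props.put("javax.persistence.nonJtaDataSource", datasource);
    return Persistence.createEntityManagerFactory("my-unit", props);
}

// Later, inside a transaction, some providers expose the underlying JDBC connection:
Connection connection = em.unwrap(Connection.class);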
We have an app that is using Hibernate, Spring, and DB2 in WebSphere 7. We have audit triggers and we need to set things up so the triggers can know the logged-in user (we use a generic logon to the database). We came up with a new scheme for setting this in a new app so that it can automatically join in new transactions. We overrode the transaction manager and did the work in the doBegin.
This scheme worked great in one app, and seemed to work great in a second app, but now, weeks later, and not consistently (the behavior is intermittent and does not happen in local development), we are getting this Pre-bound JDBC Connection found error. Looking online, most posts say this happens when you use two transaction managers against one data source. That is not what we are doing.
I also read one post wondering if it was because the author mixed annotation-based and AOP-based transactions. This app does some of that. I don't really buy that theory, but thought I'd mention it.
Exception:
Caused by:
org.springframework.transaction.IllegalTransactionStateException: Pre-bound JDBC Connection found! HibernateTransactionManager does not support running within DataSourceTransactionManager if told to manage the DataSource itself. It is recommended to use a single HibernateTransactionManager for all transactions on a single DataSource, no matter whether Hibernate or JDBC access.
at java.lang.Throwable.<init>(Throwable.java:67)
at org.springframework.core.NestedRuntimeException.<init>(NestedRuntimeException.java:54)
at org.springframework.transaction.TransactionException.<init>(TransactionException.java:34)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:475)
at gov.usdoj.afms.umc.utils.hibernate.AfmsHibernateTransactionManager.doBegin(AfmsHibernateTransactionManager.java:28)
Code (note that the exception comes from the super.doBegin()):
protected void doBegin(Object arg0, TransactionDefinition arg1) {
    super.doBegin(arg0, arg1);
    if (!Db2ClientInfo.exists()) {
        clearDBProperty();
    } else {
        setDBProperty(Db2ClientInfo.getClientUserId(), Db2ClientInfo.getClientApplicationId());
    }
}

private void setDBProperty(String uId, String appName) {
    Session session = getSessionFactory().getCurrentSession();
    Properties props = new Properties();
    props.setProperty(WSConnection.CLIENT_ID, uId);
    props.setProperty(WSConnection.CLIENT_APPLICATION_NAME, appName);
    try {
        Connection nativeConn = new SimpleNativeJdbcExtractor().getNativeConnection(session.connection());
        if (nativeConn instanceof WSConnection) {
            WSConnection wconn = (WSConnection) nativeConn;
            wconn.setClientInformation(props);
        } else {
            logger.error("Connection was NOT an instance of WSConnection so client ID and app could not be set");
        }
    } catch (Exception e) {
        throw new RuntimeException("Cannot set DB parameters!", e);
    }
}
I just realized I never answered this. It turns out that the exception had nothing whatsoever to do with our Tx manager. It was the fact that this particular EAR has two apps in it, each pointing to the same data source. Evidently this confuses Hibernate. We have plans to separate the apps some day, but creating an identical (except in name) data source and pointing the apps at them separately fixes the issue for now.
Instead of modifying the transaction manager, it might be easier (better?) to create a wrapper around your datasource (extending DelegatingDataSource from Spring) and override the two getConnection methods. For the cleanup you could wrap the connection in a proxy and intercept the close method.
That should be a safer (and easier, I guess) way than trying to fiddle with the transaction manager, and it works for every technology (JDBC, Hibernate, JPA etc.) as long as you use the wrapped datasource. (The registration could be done with a BeanPostProcessor which detects DataSource instances and simply wraps them in the delegate.) A sketch of the wrapper follows below.
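A rough sketch of that wrapper (the class name is made up; the WSConnection/Db2ClientInfo calls are lifted from the question, and in WebSphere you may still need to unwrap to the native connection first, as the original code does):

public class ClientInfoDataSource extends DelegatingDataSource {

    public ClientInfoDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        return decorate(super.getConnection());
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return decorate(super.getConnection(username, password));
    }

    // Set the client info right after the connection is obtained;
    // a proxy could additionally intercept close() for cleanup.
    private Connection decorate(Connection connection) throws SQLException {
        if (Db2ClientInfo.exists() && connection instanceof WSConnection) {
            Properties props = new Properties();
            props.setProperty(WSConnection.CLIENT_ID, Db2ClientInfo.getClientUserId());
            props.setProperty(WSConnection.CLIENT_APPLICATION_NAME, Db2ClientInfo.getClientApplicationId());
            ((WSConnection) connection).setClientInformation(props);
        }
        return connection;
    }
}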
If that is too radical (as it means changing your current applications instead of updating a library), it could be a configuration problem: make sure that you are loading your configuration (and thus the DataSource and TransactionManager) only once; duplicating bean instances might lead to similar behavior.
For posterity, I just got this problem and the answers here weren't very helpful. We resolved the problem by removing a double import of a core XML file which had the AOP transaction manager definition in it:
<tx:annotation-driven transaction-manager="..."
proxy-target-class="true" />
I think it caused there to be two transaction managers overlapping the same namespace. To fix it, we moved the imports around so they were only done once.
Hope this helps someone else.
I have created a GlassFish connection pool with ResourceType set to ConnectionPoolDataSource, so GlassFish will use the native connection pool implementation for connection pooling. I am not using the XADataSource ResourceType as I don't want to perform any distributed transactions.
My application requires creating TEMPORARY MySQL tables at run time, so I am using the code below to get the connection from the JNDI DataSource of GlassFish.
@Resource(mappedName = "jdbc/xxxxx")
private DataSource dataSource;

public Connection getConnection() throws SQLException {
    Connection con = dataSource.getConnection();
    return con;
}
Now, my question is: can I call setAutoCommit(false), commit(), rollback() and close() on this Connection object?
In a forum, I read that we should not call these methods on the Connection object if we get the Connection from a container-managed distributed transaction (XADataSource), as it is involved in distributed transactions.
But I am getting this connection from non-distributed transactions, so I can call those methods, right?
The other question is: after performing DB operations, if I call con.close(), will this connection go back to the connection pool again?