Hibernate with Oracle 11g not working with "select" generator - oracle

I am using Hibernate 3.2.5 and Hibernate Annotations 3.3.1.GA as the JPA provider in a data loading application. I have configured Hibernate to use C3P0 for connection pooling.
My database is: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
As there is no built-in Hibernate dialect for 11g, I have configured it to use
org.hibernate.dialect.Oracle10gDialect
JDBC Driver: Oracle JDBC driver, version: 11.2.0.1.0
The application loads transaction performance logs from a mainframe system into an Oracle DB for later analysis and reporting. It is essentially a batch job that monitors a folder, waits for a new file, reads it, and inserts it into the database (around 4.5 million rows per day on average). I chose Hibernate for its JDBC batch inserts, which in my comparison testing did not work as well in EclipseLink. The files are in a proprietary binary format, so I cannot use simpler tools such as CSV imports.
Originally I developed the application for use with MySQL on my workstation, as it was for a one-off analysis task, but now I wish to move it to an enterprise Oracle RAC platform since it has proved useful to continue importing data and retaining it for a couple of months for use by myself and a few other analysts. I have had a DBA configure the tables, adjusted my entity classes to reflect some minor changes in field names and data types, and changed the driver and connection details, but I have run into some issues with primary key generation.
There are a few tables (a main data table plus some tables storing various supporting types, e.g. transaction type, usercodes, etc.). Each has a unique (primary) ID column which is auto-generated using a sequence and a before-update trigger.
The DBA has configured the sequences to not be viewable by the users they have created.
Using the standard JPA (javax.persistence) @GeneratedValue strategies does not work in any case.
e.g.:
@GeneratedValue(strategy = GenerationType.AUTO)
This gives the SQL:
select hibernate_sequence.nextval from dual
Which the Oracle driver throws an exception for, with the error:
25/11/2009 11:57:23 AM org.hibernate.util.JDBCExceptionReporter logExceptions
WARNING: SQL Error: 2289, SQLState: 42000
25/11/2009 11:57:23 AM org.hibernate.util.JDBCExceptionReporter logExceptions
SEVERE: ORA-02289: sequence does not exist
After finding that, I did some research and found the option of using the Hibernate annotation extension "GenericGenerator" with a "select" strategy (http://docs.jboss.org/hibernate/stable/core/reference/en/html/mapping.html#mapping-declaration-id-generator)
e.g.
@GeneratedValue(generator="id_anEntity")
@GenericGenerator(name = "id_anEntity",
strategy = "select")
However, when I use this I find that Hibernate hangs during EntityManagerFactory creation. It appears to get past building the properties, building the named queries, and connecting to the server, then hangs at:
25/11/2009 1:40:50 PM org.hibernate.impl.SessionFactoryImpl <init>
INFO: building session factory
and doesn't return.
I found the same thing happened when I didn't specify the dialect in the persistence.xml file.
It works fine if I use the "increment" strategy, although this means the sequences end up out of sync, since the ID values are incremented without the sequences themselves being advanced, which is less than ideal.
The "native" strategy gives the same output as using GenerationType.AUTO (ORA-02289: sequence does not exist).
I am not sure if this is due to me using the wrong key generation strategy, or an error in my configuration, or a bug.
Any help in either making the "select" strategy work, or a better alternative is much appreciated. I could potentially go back to using pure JDBC with prepared statements and such but this tends to get a little messy and I prefer the JPA approach.
Some more info:
Persistence.xml properties:
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.c3p0.min_size" value="5"/>
<property name="hibernate.c3p0.max_size" value="20"/>
<property name="hibernate.c3p0.timeout" value="1800"/>
<property name="hibernate.c3p0.max_statements" value="100000"/>
<property name="hibernate.jdbc.use_get_generated_keys" value="true"/>
<property name="hibernate.cache.use_query_cache" value="false"/>
<property name="hibernate.cache.use_second_level_cache" value="false"/>
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.order_updates" value="true"/>
<property name="hibernate.connection.username" value="myusername"/>
<property name="hibernate.connection.driver_class" value="oracle.jdbc.OracleDriver"/>
<property name="hibernate.connection.password" value="mypassword"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="hibernate.connection.url" value="jdbc:oracle:thin:#(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = myoracleserver) (PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = myservicename))
)"/>
<property name="hibernate.jdbc.batch_size" value = "100000" />
A sample of the declaration of the ID field in one of the entity classes using annotations:
@Entity
@Table(name = "myentity",
catalog = "",
schema = "mydb")
public class myEntity implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@Basic(optional = false)
@GeneratedValue(generator="id_anEntity")
@GenericGenerator(name = "id_anEntity",
strategy = "select")
@Column(name = "MYENTITYID",
nullable = false)
private Integer myEntityID;
//... other column mappings
public Integer getMyEntityID() {
return myEntityID;
}
public void setMyEntityID(Integer myEntityID) {
this.myEntityID = myEntityID;
}
//... other getters & setters
}

I'm a bit unclear on what you mean by "The DBA has configured the sequences to not be viewable by the users they have created" - does that mean that the sequence is not visible to you? Why not?
In order to use a sequence-based generator where the sequence name is not "hibernate_sequence" (which it never is in real life; that's just the default) you need to specify the appropriate generator:
@SequenceGenerator(name="myentity_seq", sequenceName="my_sequence")
public class MyEntity {
...
@Id
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="myentity_seq")
private Integer myEntityID;
...
}
"select" generator strategy means Hibernate will try to select the row you've just inserted using a unique key (other than PK, obviously). Do you have that defined? I would strongly suggest you go with sequence instead.

Related

Can I use Hibernate Criteria without mapping with Hibernate?

I am using JPA annotations to map entities in my model. However, I find Hibernate Criteria easy to use and it requires less code to write, so is there some way to use Criteria without mapping via Hibernate XML? I tried this in my DAO implementation class:
private SessionFactory sFactory; // of type org.hibernate.SessionFactory
....
Session session = sFactory.getCurrentSession();
Criteria criteria = session.createCriteria(BTerminal.class);
But without hibernate.cfg.xml it throws a NullPointerException - of course, because it is not injected. To fill in this cfg.xml, though, I would have to add mapping XML files, which is not what I want. So, can I use JPA mappings while using Hibernate Criteria?
I am not using Spring. I am still unsure which is easier: writing 10+ mapping XMLs with all the attributes, learning more about Spring DaoSupport, or some other way.
Thanks in advance.
Yes, it will work. You can have JPA-annotated entities and use Hibernate Criteria to query them instead of the JPA Criteria API.
I have actually tested it.
My entity class looks like this:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Version;
@Entity
public class TestEntity {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private Integer id;
@Version
private Long version;
...
}
Then I have the Hibernate config file, hibernate.cfg.xml:
<!DOCTYPE hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.url">jdbc:mysql://localhost/test</property>
<property name="connection.username">root</property>
<property name="connection.password">root</property>
<property name="transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hbm2ddl.auto">create</property>
<property name="show_sql">true</property>
<mapping class="com.test.model.TestEntity" />
</session-factory>
</hibernate-configuration>
Notice that I still have to list the entity classes, but I'm not using Hibernate mapping files (hbm.xml). I don't think Hibernate has support for auto-detection of entity classes like JPA does, so you still have to list them even if they are annotated.
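As a side note, if listing the classes in hibernate.cfg.xml bothers you, Hibernate 3 also lets you register annotated classes programmatically; a minimal sketch, reusing TestEntity from above (untested, but this is the standard AnnotationConfiguration API):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.AnnotationConfiguration;

// builds the SessionFactory from the hibernate.cfg.xml properties,
// with the entity registered in code instead of a <mapping class="..."/> entry
SessionFactory sessionFactory = new AnnotationConfiguration()
    .configure("hibernate.cfg.xml")
    .addAnnotatedClass(TestEntity.class)
    .buildSessionFactory();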
Then I have this code as a test: persist an entity, then retrieve it using Hibernate Criteria:
Session session = sessionFactory.getCurrentSession();
session.beginTransaction();
TestEntity testEntity = new TestEntity();
testEntity.setName("test");
session.save(testEntity);
List<TestEntity> tests = (List<TestEntity>) session.createCriteria(TestEntity.class).list();
for (TestEntity test : tests) {
System.out.println(test.getName());
}
session.getTransaction().commit();
I get the following output in my console:
Hibernate: insert into TestEntity (name, version) values (?, ?)
Hibernate: select this_.id as id1_0_0_, this_.name as name2_0_0_, this_.version as version3_0_0_ from TestEntity this_
test

How to use Spring AbstractRoutingDataSource with dynamic datasources?

I am working on a project using Spring, Spring Data JPA, Spring Security, Primefaces...
I was following this tutorial about dynamic datasource routing with Spring.
In this tutorial, you can only achieve dynamic datasource switching between pre-defined datasources.
Here is a snippet of my code:
springContext-jpa.xml
<bean id="dsCgWeb1" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${jdbc.driverClassName.Cargest_web}"></property>
<property name="url" value="${jdbc.url.Cargest_web}"></property>
<property name="username" value="${jdbc.username.Cargest_web}"></property>
<property name="password" value="${jdbc.password.Cargest_web}"></property>
</bean>
<bean id="dsCgWeb2" class="org.apache.commons.dbcp.BasicDataSource">
<!-- same properties, different values -->
</bean>
<!-- Generic Datasource [Default : dsCargestWeb1] -->
<bean id="dsCgWeb" class="com.cargest.custom.CargestRoutingDataSource">
<property name="targetDataSources">
<map>
<entry key="1" value-ref="dsCgWeb1" />
<entry key="2" value-ref="dsCgWeb2" />
</map>
</property>
<property name="defaultTargetDataSource" ref="dsCgWeb1" />
</bean>
What I want to do is make the targetDataSources map, and the datasources in it, dynamic.
In other words, I want to fetch a certain database table, use the properties stored in that table to create my datasources, and then put them in a map like targetDataSources.
Is there a way to do this?
Nothing in AbstractRoutingDataSource forces you to use a static map of DataSources. It is up to you to construct a bean implementing Map<Object, Object>, where the key is what you use to select the DataSource and the value is a DataSource or (by default) a String referencing a JNDI-defined data source. You can even modify it dynamically since, as the map is stored in memory, AbstractRoutingDataSource does no caching.
I have no full example code, but here is what I can imagine. In a web application, you have one database per client, all with the same structure (OK, it would be a strange design; say it is just for the example). At login time, the application creates the datasource for the client and stores it in a map indexed by session ID. The map is a bean in the root context named dataSources:
@Autowired
@Qualifier("dataSources")
Map<String, DataSource> sources;
// I assume url, user and password have been found from the connected user
// I use DriverManagerDataSource for the example because it is simple to set up
DataSource dataSource = new DriverManagerDataSource(url, user, password);
sources.put(request.getSession().getId(), dataSource);
You also need a session listener to clean up dataSources in its destroy method:
@Autowired
@Qualifier("dataSources")
Map<String, DataSource> sources;
public void sessionDestroyed(HttpSessionEvent se) {
// eventually clean up the DataSource if appropriate (nothing to do for DriverManagerDataSource ...)
sources.remove(se.getSession().getId());
}
The routing datasource could look like this:
public class SessionRoutingDataSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
HttpServletRequest request = ((ServletRequestAttributes)
RequestContextHolder.getRequestAttributes()).getRequest();
return request.getSession().getId();
}
@Autowired
@Qualifier("dataSources")
public void setDataSources(Map<String, DataSource> dataSources) {
// setTargetDataSources expects a Map<Object, Object>, so copy the entries into one
setTargetDataSources(new HashMap<Object, Object>(dataSources));
}
}
I have not tested anything because it would be a lot of work to set up the different databases, but I think it should be OK. In the real world there would not be a different data source per session, but rather one per user with a count of sessions per user; as I said, this is an over-simplified example.
The datasource used by a thread might change from time to time.
Pay attention to concurrency: applications might run into concurrency issues in a concurrent environment.
thread-bound AbstractRoutingDataSource sample
It can be achieved with AbstractRoutingDataSource by keeping the information in a thread-local variable. Here is a nice working example you can refer to:
Multi-tenancy: Managing multiple datasources with Spring Data JPA
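For reference, a minimal, untested sketch of the thread-bound approach (all class and key names below are made up for illustration):
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<String>();
    static void set(String tenantKey) { CURRENT.set(tenantKey); } // e.g. called from a servlet filter
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }                     // call when the request ends
}

class ThreadBoundRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // whatever key was bound to the current thread selects the target DataSource
        return TenantContext.get();
    }
}
The routing bean would still be declared like dsCgWeb above, with the thread-local key deciding which entry of targetDataSources is used for each call.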

Issue with #Transactional annotations in Spring JPA

I have a doubt related to transactions within transactions. For background, I have a School entity which has a Set of Student entities mapped to it. I am using Spring Data JPA, which takes care of all the CRUD operations. I have a SchoolManagementService class which has @Transactional(readOnly=true) set at the class level, and all updating methods are annotated with @Transactional.
In my SchoolManagementService class I have a method deleteStudents(List) which I have marked as @Transactional. In this method I am calling StudentsRepository.delete(studentId) repeatedly. I want to make sure that if any delete fails then the transaction rolls back. I am trying to test this with my Spring JUnit test case (I am not using defaultRollback=true or @Rollback(true) because I want the rollback to be caused by a runtime exception I encounter at the repository delete method).
@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class, TransactionalTestExecutionListener.class})
@ContextConfiguration(locations = {"classpath:PPLRepository-context.xml"})
public class TestClass {
@Test
@Transactional
public void testDeleteStudents(){
StudentManagementService.delete(randomList);
}
}
With this test case it deletes all the records except the last one. Ideally it should roll back and none of the entries should be deleted.
Here is my Spring settings file with the TransactionManager configuration:
<bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init"
destroy-method="close">
<property name="forceShutdown" value="true" />
<property name="startupTransactionService" value="true" />
<property name="transactionTimeout" value="1000" />
</bean>
<bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" />
<!-- Configure the Spring framework to use JTA transactions from Atomikos -->
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="atomikosTransactionManager" />
<property name="userTransaction" ref="atomikosUserTransaction" />
<property name="transactionSynchronizationName" value="SYNCHRONIZATION_ON_ACTUAL_TRANSACTION" />
</bean>
<!-- EntityManager Factory that brings together the persistence unit, datasource, and JPA Vendor -->
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="PPL_GMR">
<property name="dataSource" ref="PPL_GMRDS"></property>
<property name="persistenceUnitName" value="PPL_GMR"/>
<property name="persistenceXmlLocation" value="classpath:META-INF/PPL-persistence.xml"/>
<property name="jpaVendorAdapter" ref="PPL_GMRJPAVendorAdapter"/>
<property name="jpaPropertyMap">
<map>
<entry key="hibernate.transaction.manager_lookup_class" value="com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup"/>
<entry key="hibernate.connection.release_mode" value="on_close"/>
<entry key="hibernate.default_schema" value="${PPL.schema}"/>
</map>
</property>
</bean>
Can someone suggest where my understanding of transactions is wrong? From what I have read in the APIs, I got the impression that if a method is @Transactional at the service layer and it calls several @Transactional methods of Spring Data JPA repositories, then any runtime exception should cause everything to be rolled back.
I even tried to simply create a test case method as below:
@Test
@Transactional
public void testDeleteStudents(){
StudentRepository.delete(1);
StudentRepository.delete(2); // id 2 is not present so I will get a runtime exception
}
In spite of keeping @Rollback(true/false) on this method, it deletes the Student with id 1 from the database. I thought that @Transactional on this test case method would create a new transaction here, that all the transactional delete methods from the StudentRepository would run in the same transaction, and that no student data would be committed unless no runtime exception was thrown.
Please help me understand transactions better, as I am new to this. I am using Spring Data JPA with an Oracle database.
Thanks in advance.
I think that the default behaviour is (even though you don't have it on the test class)
@TransactionConfiguration(defaultRollback = true)
so it will perform a rollback when your test ends. Therefore there is no synchronization of the Hibernate session with the database and no SQL queries are issued to the database.
You have two possibilities. Either specify
@TransactionConfiguration(defaultRollback = false)
or inject the entity manager into your test and call
@PersistenceContext
protected EntityManager em;
/**
* Simulates new transaction (empties Entity Manager cache).
*/
public void simulateNewTransaction() {
em.flush();
em.clear();
}
This will force Hibernate to send all queries to the database. Please note that this will solve your problem with deleting a non-existing entity, but it doesn't behave exactly like a new transaction; e.g. when you have a missing foreign key it doesn't throw anything (this is predictable).
You can use this to check the contents of an entity returned by em.find(class, id) and verify your relational mapping without the need to commit the transaction.
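For example, in the test from the question you might call it right after the service method so the DELETE statements are actually sent to the database (this is only a sketch reusing names from the question):
@Test
@Transactional
public void testDeleteStudents() {
    studentManagementService.deleteStudents(randomList);
    simulateNewTransaction(); // flush + clear: forces the DELETEs to hit the database now
    // assert with em.find(...) that the students are really gone
}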

What should I do to ensure Spring is thread safe?

I have configured the data source in Spring like this:
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/dev"/>
<property name="lookupOnStartup" value="false"/>
<property name="cache" value="true"/>
<property name="proxyInterface" value="javax.sql.DataSource"/>
</bean>
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource">
<ref bean="dataSource"/>
</property>
...
</bean>
And I have configured my BOC and DAO objects in Spring like this:
<bean id="Dao" class="com.dao.impl.DaoImpl">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
<bean id="Bo" class="com.bo.impl.BoImpl">
<property name="theDao">
<ref local="Dao"/>
</property>
</bean>
Currently I am testing it with 3 users: 1 successfully inserts data into the DB, 1 hangs, and 1 is missing in action, meaning there is no response and no log was captured in WebSphere Application Server. With 3 users concurrently using the app, the test case has failed; may I know how I could ensure all of this is thread safe in a situation where 1000 users are using the app concurrently?
UPDATE
In response to @Adrian Shum's query:
Regarding the BO thing, I'm not sure what pattern this is, but BOC stands for Business Object Controller; the purpose of this class is to separate the business logic from the DAO object. The XHTML/JSP is the front-end, the BO is the business controller, and the DAO deals with Hibernate and query construction.
In order to retrieve the session factory, every DAO object must extend HibernateDaoSupport; this is how Spring-Hibernate integration works according to this tutorial. Here is a code snippet:
class DAO extends HibernateDaoSupport implements IDao {
public void save( Pojo pojo ) {
getHibernateTemplate().save(pojo);
}
public void update( Pojo pojo ) {
getHibernateTemplate().update(pojo);
}
public void delete( Pojo pojo ) {
getHibernateTemplate().delete(pojo);
}
}
I know that Spring objects are singletons by default. Does this mean each thread will have only ONE object, or that the whole JVM instance will have only ONE object? What if I declare those BO and DAO objects with session scope like this:
<bean id="Dao" class="com.dao.impl.DaoImpl" scope="session">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
<bean id="Bo" class="com.bo.impl.BoImpl" scope="session">
<property name="theDao">
<ref local="Dao"/>
</property>
</bean>
Regarding the data update or retrieval, this could happen because the 3 users we are testing with are actually targeting the same record. There might be a lock, as I notice there is a function doing this:
Query queryA = session.createQuery("Delete From TableA where fieldA = :theID");
queryA.setParameter("theID", "XX");
queryA.executeUpdate();
Query queryB = session.createQuery("Delete From TableB where fieldB = :theID");
queryB.setParameter("theID", "YY");
queryB.executeUpdate();
// update tableB object
session.save(tableBObj);
// update each tableA object
for(TableAObj obj : TableAObjList) {
session.save(obj);
session.flush();
session.evict(obj);
}
TableA (slave) and TableB (master) have a relationship with each other. I know there is a database design concern between TableA and TableB, but that is beyond the scope of this question. I'm just curious whether this function could cause the concurrency issue even though I made this class a singleton.
From your problem, it is obvious that the thread-safety issue has nothing to do with Spring.
There are a lot of places where things can go wrong, for example (I don't really know what your BO means, as it does not seem to be a well-known pattern; I assume your "user" will invoke a method in the BO and the BO will invoke the DAO to do the data retrieval job):
How are you using the session factory? I hope you are not getting one session and keeping on using it. It would be great to see a code snippet of how you use it.
If your BO is a singleton, does it keep any state for an individual "user session"? Is any shared object used in the processing not thread-safe?
For issues related to the DAO, which does the data retrieval and updates, have you done the work to avoid deadlock? For example, function A updates table X and then table Y, while function B updates Y then X. Have you made sure that, for 2 users updating the same record, the later update won't silently overwrite the earlier one (in case the update is not idempotent)?
There can be tons of reasons causing your problem, but I believe 99.999% of them have nothing to do with Spring (or Hibernate).
I have resolved the problem. It was due to DB2 failing to handle concurrency; it was fixed by adding a new column to the table and making it a primary key.

Disable eclipselink caching and query caching - not working?

I am using EclipseLink JPA with a database which is also being updated externally to my application. For that reason there are tables I want to query every few seconds. I can't get this to work, even when I try to disable the cache and query cache. For example:
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
EntityManager em = entityManagerFactory.createEntityManager();
MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
Thread.sleep(10000);
MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
System.out.println(one.getCapacity() + " - " + two.getCapacity());
Even though the capacity changes while my application is sleeping the println always prints the same value for one and two.
I have added the following to the persistence.xml
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.query-results-cache" value="false"/>
I must be missing something but am running out of ideas.
James
The issue is that you are reading through the PersistenceContext/EntityManager, which maintains an object transactional view of the data and will never update unless refreshed.
Add the query refresh hint "eclipselink.refresh" to the find call (JPA 2.0), or simply call em.refresh after the initial find.
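If you go with the hint, a minimal sketch reusing the named query from the question (setting the hint on the query rather than on find) might look like this:
// ask EclipseLink to bypass the cached object state for this query
Query query = em.createNamedQuery("MyLocation.findMyLoc");
query.setHint("eclipselink.refresh", "true");
MyLocation two = (MyLocation) query.getResultList().get(0);
Alternatively, you can disable shared caching for the entity itself: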
@Entity
@Cache(shared=false)
public class Employee {
...
}
I hope this will solve your cache problem.
