I am using JPA annotations to map the entities in my model. However, I found that Hibernate Criteria is easy to use and requires less code to write, so is there some way to use Criteria without mapping things the Hibernate XML way? I tried this in my DAO implementation class:
private SessionFactory sFactory; // of type org.hibernate.SessionFactory
....
Session session = sFactory.getCurrentSession();
Criteria criteria = session.createCriteria(BTerminal.class);
But without hibernate.cfg.xml it throws a NullPointerException - of course, because nothing is injected. To fill in this cfg.xml I would have to add mapping XML files, which is not what I want. So, can I use JPA mappings while using Hibernate Criteria?
I am not using Spring. I'm still in doubt which is easier: writing 10+ mapping XMLs with all attributes, learning more about Spring's DaoSupport, or some other way.
Thanks in advance.
Yes, it will work. You can have JPA-annotated entities while you use Hibernate Criteria (instead of JPA Criteria) to query them.
I have actually tested it.
My entity class looks like this:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class TestEntity {

    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private Integer id;

    @Version
    private Long version;

    ...
}
Then, I have Hibernate config file: hibernate.cfg.xml
<!DOCTYPE hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.url">jdbc:mysql://localhost/test</property>
<property name="connection.username">root</property>
<property name="connection.password">root</property>
<property name="transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hbm2ddl.auto">create</property>
<property name="show_sql">true</property>
<mapping class="com.test.model.TestEntity" />
</session-factory>
</hibernate-configuration>
Notice that I still have to list the entity classes, but I'm not using Hibernate mapping files (hbm.xml). I don't think Hibernate supports auto-detection of entity classes the way JPA does, so you still have to list them even when they are annotated.
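For reference, if nothing injects the SessionFactory for you (the NullPointerException in the question), it can be built directly from that hibernate.cfg.xml. This is only a sketch, assuming Hibernate 3.6 or later, where Configuration understands annotated classes; addAnnotatedClass is only needed if you'd rather not list the class in the XML:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Reads hibernate.cfg.xml from the classpath root
Configuration cfg = new Configuration().configure();
// Optional: register entities in code instead of <mapping class="..."/>
cfg.addAnnotatedClass(com.test.model.TestEntity.class);
SessionFactory sessionFactory = cfg.buildSessionFactory();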
Then I have this code as a test, persist entity then retrieve using Hibernate Criteria:
Session session = sessionFactory.getCurrentSession();
session.beginTransaction();
TestEntity testEntity = new TestEntity();
testEntity.setName("test");
session.save(testEntity);
List<TestEntity> tests = (List<TestEntity>) session.createCriteria(TestEntity.class).list();
for (TestEntity test : tests) {
System.out.println(test.getName());
}
session.getTransaction().commit();
I get the following output in my console:
Hibernate: insert into TestEntity (name, version) values (?, ?)
Hibernate: select this_.id as id1_0_0_, this_.name as name2_0_0_, this_.version as version3_0_0_ from TestEntity this_
test
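If you want a slightly richer example, restricting and ordering the same Criteria query is just as short. This is only a sketch using the standard Hibernate Criteria API (Restrictions and Order come from org.hibernate.criterion); the "name" property is the one set in the test above:
// Same session/transaction style as the test above
List<TestEntity> named = (List<TestEntity>) session.createCriteria(TestEntity.class)
        .add(Restrictions.eq("name", "test"))
        .addOrder(Order.asc("id"))
        .list();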
In a Spring application (not Spring Boot), I'm using the javanica @HystrixCommand annotation and the Spring cache @Cacheable annotation on the same bean method, and Spring executes the cache advice before the Hystrix advice.
That is what I want, but as far as I can tell the cache advice and the Hystrix advice, without any configuration, have the same order in Spring: LOWEST_PRECEDENCE.
I want to know what determines this order: is it defined somewhere, or is the order undefined and I'm just lucky to get the one I want?
What must be done to ensure that the cache advice is executed before the Hystrix advice (set an order on cache:annotation-driven lower than LOWEST_PRECEDENCE?)
This is an example of my code:
...
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.Cacheable;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
...
@Service
@Slf4j
@CacheConfig(cacheManager="myCacheManager")
public class MyRessourcesImpl implements MyRessources {
...
    @Override
    @HystrixCommand(commandProperties = {
            @HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE"),
            @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "10000") })
    @Cacheable("cacheName") //from spring cache
    public Map<String, String> getTheMap() {
    ...
    }
...
}
With this Spring config:
<bean id="myCache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" >
<property name="configLocation" value="classpath:/META-INF/...." />
</bean>
<bean id="myCacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
<property name="cacheManager" ref="myCache" />
</bean>
<cache:annotation-driven cache-manager="myCacheManager" />
<aop:aspectj-autoproxy/>
<bean id="hystrixAspect" class="com.netflix.hystrix.contrib.javanica.aop.aspectj.HystrixCommandAspect" />
Thanks for help
According to the docs, if you don't specify the order, it is undefined:
When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the org.springframework.core.Ordered interface in the aspect class or annotating it with the Order annotation. Given two aspects, the aspect returning the lower value from Ordered.getValue() (or the annotation value) has the higher precedence.
Credit: spring annotation advice order
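For what it's worth, the precedence mechanism the quoted paragraph describes looks roughly like this on one of your own @AspectJ aspects (a sketch only; the class name is made up, and this does not by itself reorder the javanica HystrixCommandAspect or the cache interceptor):
import org.aspectj.lang.annotation.Aspect;
import org.springframework.core.annotation.Order;

// Lower order values mean higher precedence, so this aspect's advice wraps
// advice from aspects left at the default Ordered.LOWEST_PRECEDENCE.
@Aspect
@Order(1)
public class ExplicitlyOrderedAspect {
    // advice methods omitted
}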
I'm using the latest Spring Framework distribution, v4.2.5.RELEASE, and Hibernate v5.0.7.Final. When Spring loads the EntityManagerFactory I'm getting the exception below:
Caused by: org.hibernate.HibernateException: Not all named super-types (extends) were found : [com.sample.model.Sample]
at org.hibernate.boot.model.source.internal.hbm.EntityHierarchyBuilder.buildHierarchies(EntityHierarchyBuilder.java:76)
at org.hibernate.boot.model.source.internal.hbm.HbmMetadataSourceProcessorImpl.<init>(HbmMetadataSourceProcessorImpl.java:66)
at org.hibernate.boot.model.source.internal.hbm.HbmMetadataSourceProcessorImpl.<init>(HbmMetadataSourceProcessorImpl.java:40)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess$1.<init>(MetadataBuildingProcess.java:142)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:141)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:847)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:874)
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:60)
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:343)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:319)
The same code was working with Spring v4.2.5.RELEASE and Hibernate v4.3.10.
I know Hibernate changed its core metadata building in 5.x; is there anything that needs to be specified in the Spring config of the JPA/EntityManagerFactory/Hibernate properties to make it work with Hibernate 5.x?
@Entity
@Table(name = "tbl_sample")
public class Sample extends Auditable {

    private Long id;

    @ManyToOne
    @JoinColumn(name = "relationA", nullable = true)
    private RelationA relationA;

    ... etc
}

@MappedSuperclass
public abstract class Auditable extends Persistable {
    //audit props
}

@MappedSuperclass
public abstract class Persistable {
    //common props
}
I was able to narrow down the issue after enabling trace logging: there was one more class extending the Sample class, and it's mapped using an hbm.xml file like the one below.
<hibernate-mapping package="com.sample.model">
<joined-subclass name="BloodSample" table="tbl_blood_sample"
extends="com.sample.model.Sample">
<key column="ID" />
<property name="sampleNo" column="sampleNo"/>
etc....
</joined-subclass>
</hibernate-mapping>
The moment I removed this hbm mapping it started working. I'm still wondering why this happens now when it didn't in older versions of Hibernate.
So I guess this issue has nothing to do with Spring but is something related to Hibernate. Any insight?
I had a similar issue.
Try using the class tag instead of the joined-subclass tag in your hbm.xml file.
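Another option worth trying (my own suggestion, not part of the answer above) is to drop the hbm.xml joined-subclass mapping and describe the subclass with annotations as well, so the whole hierarchy comes from a single metadata source. A rough sketch, assuming Sample is given @Inheritance(strategy = InheritanceType.JOINED):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.Table;

@Entity
@Table(name = "tbl_blood_sample")
@PrimaryKeyJoinColumn(name = "ID")   // joins back to tbl_sample's key, as the hbm mapping did
public class BloodSample extends Sample {

    @Column(name = "sampleNo")
    private String sampleNo;

    // getters/setters omitted
}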
I have created an example - Spring, JPA (EclipseLink persistence provider) with the JTA transaction manager (JBoss 7). I have observed that all the data in the database is shown properly in the UI for read operations. But when it comes to a save/update or delete operation, the service layer is not committing the work to the database. No exception is caught (I have checked the console/log too, and also debugged the code, where I can see entityManager.persist/remove being invoked without any exception).
--Code Listing--
1. Datasource configuration in standalone.xml
<datasource jta="true" jndi-name="java:/mysql_customerdb3" pool-name="mysql_customerdb3_pool" enabled="true" use-java-context="true" use-ccm="true">
<connection-url>jdbc:mysql://localhost:3306/customerdb</connection-url>
<driver>mysql</driver>
<security>
<user-name>root</user-name>
<password>root</password>
</security>
<statement>
<prepared-statement-cache-size>10</prepared-statement-cache-size>
<share-prepared-statements>true</share-prepared-statements>
</statement>
</datasource>
<drivers>
<driver name="mysql" module="com.mysql">
<driver-class>com.mysql.jdbc.Driver</driver-class>
<xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
</driver>
<driver name="h2" module="com.h2database.h2">
<xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
</driver>
</drivers>
Database driver configuration in module.xml
persistence.xml
<persistence-unit name="CustomerDetailsPU3" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>java:/mysql_customerdb3</jta-data-source>
    <class>com.springforbeginners.model.Customer</class>
</persistence-unit>
customerdispatcher-servlet.xml
<context:annotation-config />
<context:component-scan base-package="com.springforbeginners" />
<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
<property name="prefix" value="/WEB-INF/jsp/" />
<property name="suffix" value=".jsp" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" >
<property name="loadTimeWeaver" ref="loadTimeWeaver" />
<property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml" />
</bean>
<bean id="loadTimeWeaver" class="org.springframework.instrument.classloading.SimpleLoadTimeWeaver" >
</bean>
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManagerName" value="java:jboss/TransactionManager"/>
<property name="userTransactionName" value="java:jboss/UserTransaction"/>
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
CustomerServiceImpl.java
package com.springforbeginners.service;
import com.springforbeginners.dao.CustomerDAO;
import com.springforbeginners.model.Customer;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
@Service
public class CustomerServiceImpl implements CustomerService {

    @Autowired
    private CustomerDAO customerDAO;

    @Transactional
    @Override
    public void addCustomer(Customer customer) {
        customerDAO.addCustomer(customer);
    }

    @Transactional
    @Override
    public List<Customer> listCustomer() {
        return customerDAO.listCustomer();
    }

    @Transactional
    @Override
    public void removeCustomer(Integer customerId) {
        customerDAO.removeCustomer(customerId);
    }
}
CustomerDAOImpl.java
package com.springforbeginners.dao;
import com.springforbeginners.model.Customer;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
@Repository
public class CustomerDAOImpl implements CustomerDAO {

    @PersistenceContext(unitName="CustomerDetailsPU3")
    private EntityManager entityManager;

    @Override
    public void addCustomer(Customer customer) {
        entityManager.persist(customer);
    }

    @Override
    public List<Customer> listCustomer() {
        return entityManager.createQuery("select c from Customer c", Customer.class).getResultList();
    }

    @Override
    public void removeCustomer(Integer customerId) {
        Customer customer = (Customer) entityManager.getReference(Customer.class, customerId);
        if (null != customer) {
            entityManager.remove(customer);
        }
    }
}
I do not know what exactly is missing, or where. But with the above code the read operations are working as expected; the problem is with the save operations. I have converted the above example to use a non-JTA datasource (also modifying standalone.xml for jta=false) and to use JpaTransactionManager as below.
With the non-JTA datasource and 'org.springframework.orm.jpa.JpaTransactionManager', all operations (read as well as save/update/delete) are working fine.
But the JTA version of my example is not working as expected (save operations are not committing work to the database). Any help/pointers appreciated.
Thanks
Prakash
James,
I will be running this application on JBoss, but with one datasource on JBoss and the other on Glassfish, and a transaction should span save operations on both datasources simultaneously. That is what I am trying to achieve. I have a web application, including Spring for the service (data) layer, currently running on JBoss.
As you said earlier, I will have two persistence.xmls, one for JBoss and one for Glassfish. As I am doing this for the first time, I was/am in doubt whether the transaction (which spans two datasources on different servers, in this case JBoss and Glassfish) can be executed entirely by JBoss (in case the entire business logic resides in the serviceImpl class deployed on JBoss). In this case I will be using the JBoss transaction manager ( property name="transactionManagerName" value="java:jboss/TransactionManager" ). Is this sufficient, or do I need to similarly have the Glassfish transaction manager too? Sorry if this has created confusion.
Another question from me: is there a provision for specifying JNDI ports in persistence.xml or anywhere else? (Definitely I will have two different persistence.xmls, and I will mention the target server as JBoss in one and as Glassfish in the other.)
Do we have a technique in Spring by which business logic can be distributed across different servers like JBoss/Glassfish and still run under one single transaction? I did not know if this could be an option. Were you talking about this scenario, in which it will require two different deployment scripts, one for each server?
Thanks
Prakash
What is your persistence.xml?
Since you are using JTA, you must define the "eclipselink.target-server"="JBoss"
My persistence.xml (modified) now looks like the one below. Adding the target-server property in persistence.xml solved the problem.
<persistence-unit name="CustomerDetailsPU3" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>java:/mysql_customerdb3</jta-data-source>
<class>com.springforbeginners.model.Customer</class>
<properties>
<property name="eclipselink.target-server" value="JBoss" />
</properties>
</persistence-unit>
Thanks
Prakash
I would like to be able to support the following Sybase 15 ASE syntax in my unit/integration tests that use HSQL...
create table #myTable (value varchar(12) NULL)
HSQL won't recognise how the temp table is named, and baulks at the # character. Instead HSQL would like to use something like this...
create temporary table myTable (value varchar(12) NULL)
Alternatively, HSQL also supports most of ANSI-92 SQL according to its docs. However, Sybase ASE 15 doesn't have great support for ANSI-92 SQL, including how temporary tables are created, so the following won't work in Sybase but does in HSQL...
DECLARE LOCAL TEMPORARY TABLE mytable (value varchar(12) NULL)
From everything I have tried I cannot come up with a common syntax that will work with both Sybase and HSQL. Does anyone know of a clean way around this?
The only option I think I have is to create separate DAO's for each database dialect, and control which one is used in the Spring Application Context XML files.
I don't use Hibernate for my datasource, only Spring's JdbcTemplate.
I chose to resolve this issue by implementing a couple of dialect helper classes for my DAO. My goals were to:
1. Execute tests against an HSQL database instead of Sybase
2. Test as much of my production DAO as possible, including the RowMapper and the various SELECT/INSERT statements, against the same database schema as used in production (but implemented in HSQL)
My DAO ended up looking like this (note the DialectHelper being injected) ...
@Repository
public class MyDaoJdbc implements MyDao {

    private DialectHelper dialectHelper;

    /* the meat of the DAO removed for clarity */

    @Override
    public void createTemporaryTable() {
        getSimpleJdbcTemplate().update(dialectHelper.getTempTableCreateSql());
    }

    @Autowired
    public final void setDialectHelper(DialectHelper dialectHelper) {
        this.dialectHelper = dialectHelper;
    }
}
... my production Spring configuration (spring-db.xml) looks like this and injects the Sybase dialect
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.sybase.jdbc2.jdbc.SybDriver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
</bean>
<bean id="dialectHelper" class="com.acme.myapp.jdbc.DialectHelperSybase" />
... and my Test Spring configuration (spring-db-test.xml) looks like this and injects the HSQL dialect
<jdbc:embedded-database id="dataSource" type="HSQL">
<jdbc:script location="classpath:/resources/schema.sql"/>
<jdbc:script location="classpath:/resources/test-data.sql"/>
</jdbc:embedded-database>
<bean id="dialectHelper" class="com.acme.myapp.dao.jdbc.DialectHelperHsql" />
The DialectHelper classes provide a way of separating out the incompatible database syntax from the DAO ...
public class DialectHelperHsql implements DialectHelper {
@Override
public String getTempTableCreateSql() {
return "create temporary table myTable (value varchar(12) NULL)";
}
}
public class DialectHelperSybase implements DialectHelper {
@Override
public String getTempTableCreateSql() {
return "create table #myTable (value varchar(12) NULL)";
}
}
The Test class itself initialises Spring with the HSQL dialectHelper by loading the file spring-db-test.xml
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations={
"classpath:resources/spring-context.xml",
"classpath:resources/spring-db-test.xml"})
@Transactional
@TransactionConfiguration(defaultRollback = true)
public class MyDaoIntegrationHsqlTest {
...
}
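A test method inside that class might look like the following (hypothetical, not from the original post; it assumes MyDao is exposed as a bean in spring-context.xml and uses the usual org.junit.Test and @Autowired imports):
@Autowired
private MyDao myDao;

@Test
public void createsTempTableWithoutError() {
    // With DialectHelperHsql injected, this issues the HSQL-flavoured DDL.
    myDao.createTemporaryTable();
}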
I am using Hibernate 3.2.5 and Hibernate Annotations 3.3.1.GA as the JPA provider in a data loading application. I have configured Hibernate to use C3P0 for connection pooling.
My database is: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
As there is no built-in Hibernate dialect for 11g, I have configured it to use
org.hibernate.dialect.Oracle10gDialect
JDBC Driver: Oracle JDBC driver, version: 11.2.0.1.0
The application loads some transaction performance logs from a mainframe system into an Oracle DB for later analysis and reporting. It is essentially a batch job that monitors a folder, waits for a new file, then reads it and inserts it into the database (averaging around 4.5 million rows inserted per day). I chose Hibernate because of its ability to use JDBC batch inserts, which appeared not to work so well in EclipseLink after some comparison testing. The files are in a proprietary binary format, so I cannot use simpler tools such as CSV imports.
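For reference, the flush/clear batching loop from the Hibernate documentation, which is the kind of JDBC batch insert referred to above, looks roughly like this in JPA terms (a sketch only, using javax.persistence.EntityManager; RecordRow and records are placeholder names, and it assumes hibernate.jdbc.batch_size is set as in the persistence.xml shown further down):
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
int count = 0;
for (RecordRow row : records) {
    em.persist(row);
    if (++count % 1000 == 0) {
        em.flush();  // push the batched INSERTs to the database
        em.clear();  // detach persisted rows so the persistence context stays small
    }
}
em.getTransaction().commit();
em.close();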
Originally I developed the application for use with MySQL on my workstation, as it was meant for a one-off analysis task, but now I wish to move it to an enterprise Oracle RAC platform, since it has proved useful to keep importing data and retaining it for a couple of months for use by myself and a few other analysts. I have had a DBA configure the tables and have adjusted my entity classes to reflect some minor changes in field names and data types, and changed the driver and connection details etc., but I have run into some issues with primary key generation.
There are a few tables (a main data table plus some tables storing various supporting types, e.g. transaction type, usercodes etc.). Each has a unique (primary) id column which is auto-generated using a sequence and a before-update trigger.
The DBA has configured the sequences to not be viewable by the users they have created.
Using the standard JPA (javax.persistence) @GeneratedValue types would not work in any case.
eg:
@GeneratedValue(strategy = GenerationType.AUTO)
This gives the SQL:
select hibernate_sequence.nextval from dual
Which the Oracle driver throws an exception for, with the error:
25/11/2009 11:57:23 AM org.hibernate.util.JDBCExceptionReporter logExceptions
WARNING: SQL Error: 2289, SQLState: 42000
25/11/2009 11:57:23 AM org.hibernate.util.JDBCExceptionReporter logExceptions
SEVERE: ORA-02289: sequence does not exist
After finding that, I did some research and found the option of using the Hibernate JPA annotation extension "GenericGenerator" with a "select" strategy (http://docs.jboss.org/hibernate/stable/core/reference/en/html/mapping.html#mapping-declaration-id-generator)
eg
@GeneratedValue(generator="id_anEntity")
@GenericGenerator(name = "id_anEntity",
strategy = "select")
However when I use this I find that Hibernate hangs during EntityManagerFactory creation. It appears to get past building the properties, building the named queries, connecting to the server, then hangs at:
25/11/2009 1:40:50 PM org.hibernate.impl.SessionFactoryImpl <init>
INFO: building session factory
and doesn't return.
I found the same thing happened when I didn't specify the dialect in the persistence.xml file.
It works fine if I use the "increment" strategy, although this means the sequences then fall out of sync, because the id value is incremented in memory without the database sequence being advanced, which is less than ideal.
The "native" strategy gives the same output as using GenerationType.AUTO (ORA-02289: sequence does not exist).
I am not sure if this is due to me using the wrong key generation strategy, or an error in my configuration, or a bug.
Any help in either making the "select" strategy work, or a better alternative is much appreciated. I could potentially go back to using pure JDBC with prepared statements and such but this tends to get a little messy and I prefer the JPA approach.
Some more info:
Persistence.xml properties:
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.c3p0.min_size" value="5"/>
<property name="hibernate.c3p0.max_size" value="20"/>
<property name="hibernate.c3p0.timeout" value="1800"/>
<property name="hibernate.c3p0.max_statements" value="100000"/>
<property name="hibernate.jdbc.use_get_generated_keys" value="true"/>
<property name="hibernate.cache.use_query_cache" value="false"/>
<property name="hibernate.cache.use_second_level_cache" value="false"/>
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.order_updates" value="true"/>
<property name="hibernate.connection.username" value="myusername"/>
<property name="hibernate.connection.driver_class" value="oracle.jdbc.OracleDriver"/>
<property name="hibernate.connection.password" value="mypassword"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="hibernate.connection.url" value="jdbc:oracle:thin:#(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = myoracleserver) (PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = myservicename))
)"/>
<property name="hibernate.jdbc.batch_size" value = "100000" />
A sample of the declaration of the ID field in one of the entity classes using annotations:
@Entity
@Table(name = "myentity",
       catalog = "",
       schema = "mydb")
public class myEntity implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @Basic(optional = false)
    @GeneratedValue(generator="id_anEntity")
    @GenericGenerator(name = "id_anEntity",
                      strategy = "select")
    @Column(name = "MYENTITYID",
            nullable = false)
    private Integer myEntityID;

    //... other column mappings

    public Integer getMyEntityID() {
        return myEntityID;
    }

    public void setMyEntityID(Integer myEntityID) {
        this.myEntityID = myEntityID;
    }

    //... other getters & setters
}
I'm a bit unclear on what you mean by "The DBA has configured the sequences to not be viewable by the users they have created." - does that mean the sequence is not visible to you? Why not?
In order to use sequence-based generator where sequence name is not "hibernate_sequence" (which it never is in real life; that's just the default) you need to specify the appropriate generator:
@SequenceGenerator(name="myentity_seq", sequenceName="my_sequence")
public class MyEntity {
    ...
    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="myentity_seq")
    private Integer myEntityID;
    ...
}
"select" generator strategy means Hibernate will try to select the row you've just inserted using a unique key (other than PK, obviously). Do you have that defined? I would strongly suggest you go with sequence instead.