Java, Hibernate: Can't retrieve id after persisting entity with separate embedded key class (Spring)

I am working on a sample Spring Boot server application and using Hibernate as the JPA provider. I use a generic repository pattern that performs all the CRUD operations on my entities, following this example:
http://www.concretepage.com/spring-boot/spring-boot-rest-jpa-hibernate-mysql-example. (My idea behind a generic repository was to share one implementation of the CRUD operations rather than writing one explicitly in each Service/DAO or repository implementation per entity.) In that example the @Id attribute is in the same class as the entity, and as a result I was able to persist an entity and see the id reflected in the object after entityManager.persist(object).
In my code the key class is separate and is referenced from the entity class. On calling persist on the EntityManager, a row is created in the database (the primary key column is set to auto-increment), but the generated ID is not reflected in the object after persist() returns. The ID attribute inside the key class always stays 0, the default int value. I would like to know whether there is a way to fetch the ID of the inserted object through either Session or EntityManager. Also, is there an alternative strategy for this problem that does not require putting the primary key directly in the entity class itself? (I have looked at multiple posts on SO but have not been able to find a solution to my problem.)
Entity class
@Entity
@Table(name = "articles")
public class SampleArticle extends AbstractDomainObject {

    /** The serialVersionUID. */
    private static final long serialVersionUID = 7072648542528280535L;

    /** Uniquely identifies the article. */
    @EmbeddedId
    @AttributeOverride(name = "articleId", column = @Column(name = "article_id"))
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    //@GeneratedValue(strategy = GenerationType.AUTO)
    private SampleArticleKey key;

    /** Indicates the title. */
    @Column(name = "title")
    private String title;
Key class
@Embeddable
public class SampleArticleKey extends AbstractDomainKey {

    /** Serial version id. */
    private static final long serialVersionUID = 1325990987094850016L;

    /** The article id. */
    private int articleId;
Repository class
@Repository
@Transactional
public class SampleArticleRepository extends
        AbstractRepository<SampleArticle, SampleArticleKey> implements
        ISampleArticleRepository<SampleArticle, SampleArticleKey> {

    /*
     * (non-Javadoc)
     * @see com.wpi.server.entity.repository.ISampleArticleRepository#addArticle(java.lang.Object)
     */
    @Override
    public SampleArticle create(SampleArticle article) throws Exception {
        return super.create(article);
    }
Abstract Repository
@Transactional
public abstract class AbstractRepository<T extends AbstractDomainObject, K extends AbstractDomainKey> {

    /** The entity manager. */
    @PersistenceContext
    private EntityManager entityManager;

    /** The Constant LOGGER. */
    private static final Logger LOGGER = Logger.getLogger(AbstractRepository.class.getName());

    /**
     * Persist the given object at persistence storage.
     *
     * @param object
     *            The object extending {@link AbstractDomainObject} which needs
     *            to be inserted.
     * @return object of type {@link AbstractDomainObject} with the newly
     *         generated id.
     * @throws Exception
     *             If unable to insert data.
     */
    public T create(T object) throws Exception {
        final Session session = entityManager.unwrap(Session.class);
        session.getTransaction().begin();
        session.save(object);
        session.flush();
        session.getTransaction().commit();
        session.close();
        LOGGER.fine("Entity CREATED successfully.");
        return object;
    }

Let me give you a working embeddable key example. It might help.
First, override equals() and hashCode() so that Hibernate properly identifies objects in the cache.
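A minimal sketch of such a key class (the class name and getter here are illustrative, not the original poster's exact code):

@Embeddable
public class ArticleKey implements java.io.Serializable {

    private static final long serialVersionUID = 1L;

    @Column(name = "article_id")
    private int articleId;

    public ArticleKey() {
    }

    public ArticleKey(int articleId) {
        this.articleId = articleId;
    }

    public int getArticleId() {
        return articleId;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (!(other instanceof ArticleKey)) {
            return false;
        }
        return articleId == ((ArticleKey) other).articleId;
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(articleId);
    }
}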
Now you can persist objects:
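For example (a sketch; the setters are hypothetical, and note that JPA does not define @GeneratedValue for fields of an @EmbeddedId, so the key value here is assigned by the application rather than generated by the database):

SampleArticleKey key = new SampleArticleKey();
key.setArticleId(42);                  // hypothetical setter; value chosen by the application
SampleArticle article = new SampleArticle();
article.setKey(key);                   // hypothetical setter
article.setTitle("First article");
entityManager.persist(article);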
Let me know if this helps or if you have other issues with this.

Related

How to set a TTL in Cassandra when inserting data in batches?

Hello, I am using Cassandra to save user data. I want to store a user's data for only 24 hours, so I am giving it a TTL of 24 hours. There are multiple entries per user, so I want to batch-insert the data for each user instead of making multiple calls to the database. I am using CassandraOperations to set the TTL and am able to set it for a single record. How can I provide the TTL when inserting data in batches?
public class CustomizedUserFeedRepositoryImpl<T> implements CustomizedUserFeedRepository<T> {

    private CassandraOperations cassandraOperations;

    @Autowired
    CustomizedUserFeedRepositoryImpl(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Override
    public <S extends T> S save(S entity, int ttl) {
        InsertOptions insertOptions;
        if (ttl == 0) {
            insertOptions = InsertOptions.builder().ttl(Duration.ofHours(24)).build();
        } else {
            insertOptions = InsertOptions.builder().ttl(ttl).build();
        }
        cassandraOperations.insert(entity, insertOptions);
        return entity;
    }

    @Override
    public void saveAllWithTtl(java.lang.Iterable<T> entities, int ttl) {
        entities.forEach(entity -> save(entity, ttl));
    }
}
As you can see, I have to iterate over the list and make a database call for each record. The batch operation cassandraOperations.batchOps().insert() only takes a list of objects. How do I set the TTL for each record when using the batchOps() function?
/**
 * Add a collection of inserts with given {@link WriteOptions} to the batch.
 *
 * @param entities the entities to insert; must not be {@literal null}.
 * @param options the WriteOptions to apply; must not be {@literal null}.
 * @return {@code this} {@link CassandraBatchOperations}.
 * @throws IllegalStateException if the batch was already executed.
 * @since 2.0
 */
CassandraBatchOperations insert(Iterable<?> entities, WriteOptions options);
You can use the insert(Iterable<?> entities, WriteOptions options) method:
@EqualsAndHashCode(callSuper = true)
public class WriteOptions extends QueryOptions {

    private static final WriteOptions EMPTY = new WriteOptionsBuilder().build();

    private final Duration ttl;
    private final @Nullable Long timestamp;

batchOperations.insert(entity, WriteOptions.builder().ttl(20).build());
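Applied to the repository above, saveAllWithTtl() could then be rewritten roughly like this (a sketch assuming Spring Data Cassandra 2.x, where batchOps() returns CassandraBatchOperations):

@Override
public void saveAllWithTtl(Iterable<T> entities, int ttl) {
    WriteOptions options = (ttl == 0)
            ? WriteOptions.builder().ttl(Duration.ofHours(24)).build()
            : WriteOptions.builder().ttl(ttl).build();

    // a single batch statement; every insert in it carries the same TTL
    cassandraOperations.batchOps()
            .insert(entities, options)
            .execute();
}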

Repository save() is not committing or persisting data into the table and returns an object with a primary key that is already in the table

I am writing a JPA implementation to replace a JDBC implementation that used plain queries. I have used the Oracle database sequence object's name in @SequenceGenerator as shown in the code. As a result, save() is returning data that already exists in the table instead of generating a new primary key and inserting a new row.
I think the sequence is generating an existing primary key instead of a new one.
@Entity
@Table(name = "table")
public class TableDetail implements java.io.Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "table_seq")
    @SequenceGenerator(sequenceName = "SEQ_TABLE", allocationSize = 1, name = "table_seq")
    private Long AUDT_ID;
    ....
}

@Repository
public interface TableDetailDAO extends CrudRepository<TableDetail, Long> {
    TableDetail save(TableDetail tableDetail);
}
@Service
@Transactional
public class TableDetailServiceImpl implements TableDetailService {

    @Autowired
    TableDetailDAO tableDetailDAO;

    public void createAuditEvent(TableDetail tableDetail) {
        tableDetail = tableDetailDAO.save(tableDetail);
    }
}
So I figured out that it was due to my JDBC database config file, where I had defined the TransactionManager shown below. I had to remove it, or define a JpaTransactionManager in this method instead. As it stood, this definition was not letting JPA handle the commit, which is why the data was not being persisted into the database.
This definition was present because it was actually a JDBC implementation before.
Also, I had to use GenerationType.AUTO because, as far as I understand, GenerationType.SEQUENCE is used only when there is no sequence object defined in the database.
/**
 * Creates a handle to the TransactionManager.
 *
 * @return PlatformTransactionManager
 */
@Bean(name = "transactionManager")
public PlatformTransactionManager txManager() {
    return new DataSourceTransactionManager(initializeDataSource());
}
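For comparison, the JpaTransactionManager variant mentioned above could look roughly like this (a sketch; it assumes an EntityManagerFactory bean is available in the same configuration):

@Bean(name = "transactionManager")
public PlatformTransactionManager txManager(EntityManagerFactory entityManagerFactory) {
    // lets JPA drive the commit instead of the plain DataSourceTransactionManager
    return new JpaTransactionManager(entityManagerFactory);
}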

I need help persisting into an Oracle database

There is a problem with generating the id while persisting to the database.
I added the following code to my JPA entity, but I'm getting 0 for personid.
@Id
@Column(unique = true, nullable = false, precision = 10, name = "PERSONID")
@SequenceGenerator(name = "appUsersSeq", sequenceName = "SEQ_PERSON", allocationSize = 1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "appUsersSeq")
private long personid;
EjbService:
@Stateless
public class EjbService implements EjbServiceRemote {

    @PersistenceContext(name = "Project1245")
    private EntityManager em;

    @Override
    public void addTperson(Tperson tp) {
        em.persist(tp);
    }
}
0 is the default value for the long type. The id will be set after the select against the related sequence runs, which commonly happens when you persist the entity. Are you persisting the entity? If so, post the database sequence definition so it can be checked.
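As an illustration of what is expected (a sketch; the getter name is hypothetical and an active transaction is assumed):

Tperson tp = new Tperson();
// ... populate fields ...
em.persist(tp);
em.flush();                        // forces the INSERT and the SEQ_PERSON select if they have not run yet
long id = tp.getPersonid();        // should now hold the sequence value rather than 0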

Lazy exception: size() vs EAGER?

I was faced with:
failed to lazily initialize a collection of role: , no session or session was closed
when trying to access (from the controller, or a JUnit test) the collection of DataDictionaryEntry objects that are in a DataDictionary.
DataDictionary
@Entity
@Table(name = "IDS_RAVE_DATA_DICTIONARY",
       uniqueConstraints = {@UniqueConstraint(columnNames = {"name"})})
public class DataDictionary extends UnversionedObject {

    @Column
    private String name;

    @OneToMany(mappedBy = "dataDictionary", fetch = FetchType.EAGER)
    private Collection<DataDictionaryEntry> dataDictionaryNames;

    /* constructor */
    public DataDictionary() {
        super();
    }

    /* getters & setters */
}
DataDictionaryEntry
@Entity
@Table(name = "IDS_RAVE_DATA_DICTIONARY_ENTRY",
       uniqueConstraints = {@UniqueConstraint(columnNames = {"dataDictionary", "codedData"})})
public class DataDictionaryEntry extends UnversionedObject {

    @ManyToOne
    @JoinColumn(name = "dataDictionary")
    private DataDictionary dataDictionary;

    @Column
    private String codedData;

    @Column
    private Integer ordinal;

    @Column
    private String userDataString;

    @Column
    private Boolean specify;

    /* constructor */
    public DataDictionaryEntry() {
        super();
    }

    /* getters & setters */
}
I have an abstract service class and another service extending it:
Generic service
@Transactional
public abstract class RaveGeneralServiceImpl<T> implements RaveGeneralService<T> {

    private JpaRepository<T, Long> repo;

    /**
     * Init the general rave services with your specific repo
     * @param repo
     */
    protected void init(JpaRepository<T, Long> repo) {
        this.repo = repo;
    }

    @Override
    public List<T> findAll() {
        return repo.findAll();
    }

    @Override
    public T save(T obj) {
        return repo.save(obj);
    }

    @Override
    public void flush() {
        repo.flush();
    }
}
DataDictionaryServiceImpl
@Service
public class DataDictionaryServiceImpl extends RaveGeneralServiceImpl<DataDictionary> implements DataDictionaryService {

    @Resource
    private DataDictionaryRepository dataDictionaryRepository;

    @PostConstruct
    public void init() {
        super.init(dataDictionaryRepository);
    }
}
I found several replies on how to solve it. The first solution often suggested is to change LAZY to EAGER. When I print the query generated by a findAll() call, it shows the following:
Hibernate:
/* select
generatedAlias0
from
DataDictionary as generatedAlias0 */ select
datadictio0_.ID as ID81_,
datadictio0_.createdByUser as createdB2_81_,
datadictio0_.createdTime as createdT3_81_,
datadictio0_.lastUpdateTime as lastUpda4_81_,
datadictio0_.lastUpdateUser as lastUpda5_81_,
datadictio0_.VERSION as VERSION81_,
datadictio0_.name as name81_
from
IDS_RAVE_DATA_DICTIONARY datadictio0_
Hibernate:
/* load one-to-many com.bdls.ids.model.rave.DataDictionary.dataDictionaryNames */ select
datadictio0_.dataDictionary as dataDic11_81_1_,
datadictio0_.ID as ID1_,
datadictio0_.ID as ID82_0_,
datadictio0_.createdByUser as createdB2_82_0_,
datadictio0_.createdTime as createdT3_82_0_,
datadictio0_.lastUpdateTime as lastUpda4_82_0_,
datadictio0_.lastUpdateUser as lastUpda5_82_0_,
datadictio0_.VERSION as VERSION82_0_,
datadictio0_.codedData as codedData82_0_,
datadictio0_.dataDictionary as dataDic11_82_0_,
datadictio0_.ordinal as ordinal82_0_,
datadictio0_.specify as specify82_0_,
datadictio0_.userDataString as userDat10_82_0_
from
IDS_RAVE_DATA_DICTIONARY_ENTRY datadictio0_
where
datadictio0_.dataDictionary=?
The second solution often suggested is to call .size() on the collection that is being lazily initialized. Indeed, by changing my service to this:
@Service
public class DataDictionaryServiceImpl extends RaveGeneralServiceImpl<DataDictionary> implements DataDictionaryService {

    @Resource
    private DataDictionaryRepository dataDictionaryRepository;

    @PostConstruct
    public void init() {
        super.init(dataDictionaryRepository);
    }

    @Override
    public List<DataDictionary> findAll() {
        List<DataDictionary> results = super.findAll();
        for (DataDictionary dd : results) {
            dd.getDataDictionaryNames().size(); // init lazy
        }
        return results;
    }
}
the lazy exception is also gone! But the end result is the same query... So what is the added value of keeping it LAZY if the end query is the same? Or did I do it wrong?
Suppose that for the front end you had a data table that displays only basic information (the name, for example): it would call findAll() but still query the complete dependencies of that object?
While the results with this method are pretty much exactly the same, the value of keeping it lazy is that the collection is not automatically fetched in queries that don't need it. Making the relationship eager applies to every way of accessing that entity, while calling size() on the collection forces it to be fetched only for that one occurrence.
There are other ways that might be more efficient, such as using a join fetch in the JPA query itself, which lets the provider fetch the relationship with a single select.
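A sketch of that approach, assuming a Spring Data JPA repository for DataDictionary (the method name and query shown here are illustrative):

public interface DataDictionaryRepository extends JpaRepository<DataDictionary, Long> {

    // one select: parents and their entries are fetched together
    @Query("select distinct dd from DataDictionary dd left join fetch dd.dataDictionaryNames")
    List<DataDictionary> findAllWithEntries();
}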
You can either use Hibernate.initialize() to initialize lazy collections.
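For example (a sketch inside a transactional service method, using the entities above):

List<DataDictionary> results = dataDictionaryRepository.findAll();
for (DataDictionary dd : results) {
    // initializes the lazy collection while the persistence context is still open
    Hibernate.initialize(dd.getDataDictionaryNames());
}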
Or, to avoid the LazyInitializationException with Spring, use a filter in your web.xml:
<filter>
    <filter-name>hibernateFilterChain</filter-name>
    <filter-class>org.springframework.orm.hibernate4.support.OpenSessionInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>hibernateFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
But remember, using lazy fetching is a bad idea if you are thinking about good application design and performance.

OptimisticLockException not thrown when version has changed

I've created a simple EJB application that uses JPA for persistence, and I have a problem whereby optimistic locking is not functioning as I would have expected.
The application contains a class named Site which defines the model for a table named SITE in the database. The SITE table contains a column named ROW_VERSION which is referenced in the Site class using the @Version annotation.
Whenever the record is updated, ROW_VERSION is incremented by 1. So far, so good.
The problem arises when the row has changed in the time between the application reading the row using the EntityManager find method and the row being updated by the EntityManager merge method. As the ROW_VERSION for the row has been incremented by 1 and is therefore no longer the same as when find was called, I would expect an OptimisticLockException to be thrown, but instead the changes are written to the table, overwriting the changes made by the other process.
The application is running on WebSphere 8.5 and is using the OpenJPA implementation provided by the container.
Have I misunderstood how optimistic locking is supposed to work, or is there something else that I need to do to make the OptimisticLockException occur?
The Site class is as follows:
@Entity
@Table(name = "SITE")
public class Site {

    @Id
    @Column(name = "SITE_ID")
    private int id;

    @Column(name = "SITE_NAME")
    private String siteName;

    @Column(name = "SITE_ADDRESS")
    private String address;

    @Column(name = "ROW_VERSION")
    @Version
    private long rowVersion;

    //getters and setters
}
The application makes use of a generic DAO wrapper class to invoke the EntityManager methods. The contents of the class are as follows:
public abstract class GenericDAO<T> {

    private final static String UNIT_NAME = "Test4EJB";

    @PersistenceContext(unitName = UNIT_NAME)
    private EntityManager em;

    private Class<T> entityClass;

    public GenericDAO(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T update(T entity) {
        return em.merge(entity);
    }

    public T find(int entityID) {
        return em.find(entityClass, entityID);
    }

    //other methods
}
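To illustrate the expectation described above (a sketch; siteDao and the setter are hypothetical, and the two copies are assumed to come from separate persistence contexts so the second one is stale):

Site first = siteDao.find(1);
Site second = siteDao.find(1);            // stale copy, still carrying the old ROW_VERSION

first.setSiteName("updated by process A");
siteDao.update(first);                    // ROW_VERSION incremented in the database

second.setSiteName("updated by process B");
siteDao.update(second);                   // expected: OptimisticLockException, because the versions no longer match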
Update - I've done some more investigation and have found this http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Finfo%2Fae%2Fae%2Fcejb_genversionID.html but even when I've added the @VersionColumn and @VersionStrategy annotations I still cannot get the OptimisticLockException to be thrown.
