I've created a simple EJB application that uses JPA for persistence and have a problem whereby optimistic locking is not functioning as I would have expected.
The application contains a class named Site which defines the model for a table named SITE in the database. The SITE table contains a column named ROW_VERSION which is referenced in the Site class using the @Version annotation.
Whenever the record is updated, the ROW_VERSION is incremented by 1. So far, so good.
The problem arises when the row has changed in the time between the application reading the row using the EntityManager find method and the row being updated by the EntityManager merge method. As the ROW_VERSION for the row has been incremented by 1, and is therefore no longer the same as it was when the EntityManager find method was called, I would expect an OptimisticLockException to be thrown; instead, the changes are written to the table, overwriting the changes made by the other process.
The application is running on WebSphere 8.5 and is using OpenJPA provided by the container.
Have I misunderstood how optimistic locking is supposed to work, or is there something else I need to do to make the OptimisticLockException occur?
The Site class is as follows:
@Entity
@Table(name="SITE")
public class Site {

    @Id
    @Column(name="SITE_ID")
    private int id;

    @Column(name="SITE_NAME")
    private String siteName;

    @Column(name="SITE_ADDRESS")
    private String address;

    @Column(name="ROW_VERSION")
    @Version
    private long rowVersion;

    // getters and setters
}
The application makes use of a generic DAO wrapper class to invoke the EntityManager methods. The contents of the class are as follows:
public abstract class GenericDAO<T> {

    private static final String UNIT_NAME = "Test4EJB";

    @PersistenceContext(unitName = UNIT_NAME)
    private EntityManager em;

    private Class<T> entityClass;

    public GenericDAO(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T update(T entity) {
        return em.merge(entity);
    }

    public T find(int entityID) {
        return em.find(entityClass, entityID);
    }

    // other methods
}
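To make the scenario concrete, here is a rough sketch of what I expect to happen; SiteDAO and the two separate persistence contexts are hypothetical and only illustrate the sequence of events, they are not part of the real application.

// Hypothetical sketch: SiteDAO is assumed to extend GenericDAO<Site>, and the two
// finds are assumed to run in separate persistence contexts/transactions.
public class OptimisticLockScenario {

    private SiteDAO firstCallerDao;   // injected elsewhere (assumption)
    private SiteDAO secondCallerDao;  // injected elsewhere (assumption)

    public void demonstrateExpectedConflict() {
        Site first = firstCallerDao.find(42);   // reads ROW_VERSION = N
        Site second = secondCallerDao.find(42); // also reads ROW_VERSION = N

        first.setSiteName("Updated by first caller");
        firstCallerDao.update(first);           // once committed, ROW_VERSION = N + 1

        second.setSiteName("Updated by second caller");
        try {
            secondCallerDao.update(second);     // merges a stale entity (still version N)
        } catch (javax.persistence.OptimisticLockException e) {
            // This is what I expect; instead, the second update silently wins.
        }
    }
}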
Update - I've done some more investigation and have found this http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Finfo%2Fae%2Fae%2Fcejb_genversionID.html but even when I've added the @VersionColumn and @VersionStrategy annotations I still cannot get the OptimisticLockException to be thrown.
Related
I'm seeing a strange case: sometimes my findBy...() method returns null instead of an object that was inserted and fetched successfully before. After that, the needed object is fetched fine. In other words, the search sometimes does not work.
Spring Boot version: 1.5.2.RELEASE
spring-boot-starter-data-redis: 1.5.22.RELEASE
"maxmemory-policy" setting is set to "noeviction"
My obj declaration:
@RedisHash("session")
public class Session implements Serializable {

    @Id
    private String id;

    @Indexed
    private Long internalChatId;

    @Indexed
    private boolean active;

    @Indexed
    private String chatId;
}
JPA Repository:
@Repository
public interface SessionRepository extends CrudRepository<Session, String> {

    Session findByInternalChatIdAndActive(Long internalChatId, Boolean isActive);
}
Redis config:
@Bean
public LettuceConnectionFactory redisConnectionFactory(RedisProperties redisProperties) {
    return new LettuceConnectionFactory(
            redisProperties.getRedisHost(),
            redisProperties.getRedisPort());
}

@Bean
public RedisTemplate<?, ?> redisTemplate(LettuceConnectionFactory connectionFactory) {
    RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    return template;
}
Thanks in advance for any assistance.
We have recently seen similar behavior. In our scenario, we can have multiple threads that read and write to the same repository. Our null return occurs when one thread is doing a save to an object while another is doing a findById for that same object. The findById will occasionally fail. It appears that the save implementation does a delete followed by an add; if the findById gets in during the delete, the null result is returned.
We've had good luck so far in our test programs that can reproduce the null return using a Java Semaphore to gate all access (read, write, delete) to the repository. When the repository access methods are all gated by the same semaphore, we have not seen a null return. Our next step is to try adding the synchronized keyword to the methods in the class that access the repository (as an alternative to using the Semaphore).
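As an illustration of that gating, here is a minimal sketch; the wrapper class and its method names are made up, and it simply funnels the repository calls from the question through a single-permit Semaphore.

import java.util.concurrent.Semaphore;

// Illustrative wrapper only: SessionRepository is the repository from the question,
// everything else here is a hypothetical name.
public class GatedSessionStore {

    private final Semaphore gate = new Semaphore(1); // one permit serialises all access
    private final SessionRepository repository;

    public GatedSessionStore(SessionRepository repository) {
        this.repository = repository;
    }

    public Session save(Session session) throws InterruptedException {
        gate.acquire();
        try {
            return repository.save(session);
        } finally {
            gate.release();
        }
    }

    public Session findByInternalChatIdAndActive(Long internalChatId, boolean active) throws InterruptedException {
        gate.acquire();
        try {
            return repository.findByInternalChatIdAndActive(internalChatId, active);
        } finally {
            gate.release();
        }
    }
}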
This should not happen, and I don't know what the reason is. But you can use Optional, and if the lookup finds nothing you can at least avoid an exception.
Something like:
Optional<Session> findByInternalChatIdAndActive(Long internalChatId, Boolean isActive);
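Hypothetical calling code, assuming the Optional-returning signature above, would then handle the empty case explicitly:

// Illustrative usage only; the exception and message are made up.
Session session = sessionRepository
        .findByInternalChatIdAndActive(internalChatId, true)
        .orElseThrow(() -> new IllegalStateException("No active session for chat " + internalChatId));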
I'm a beginner in Java programming, and I'm trying to write a simple standalone application with Spring Data. To the basic example here http://spring.io/guides/gs/accessing-data-jpa/ I want to add an auditing mechanism that will store the previous values of objects. In the Customer entity, on @PreUpdate, I want to store the old values in another table, but I do not know how.
@Entity
@EntityListeners(AuditingEntityListener.class)
public class Customer implements Serializable {
    ...

    @Transient
    private transient Customer savedState;

    @PreUpdate
    public void onPreUpdate() {
        if (!savedState.firstName.equals(this.firstName)) {
            log.info(String.format("first name was modified, new value=%s, old value=%s",
                    this.firstName, savedState.firstName));
        }
    }

    @PostLoad
    private void saveState() {
        this.savedState = (Customer) SerializationUtils.clone(this); // from apache commons-lang
    }
}
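The separate table I have in mind could, I assume, be modelled as its own entity along these lines; the class and column names are entirely made up, and this only shows the shape of the data, not the persistence logic I am missing.

// Hypothetical audit entity: one row per captured previous state of a Customer.
@Entity
@Table(name = "customer_audit")
public class CustomerAudit implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private Long customerId;   // id of the Customer the snapshot belongs to
    private String firstName;  // previous value captured on @PreUpdate

    // getters and setters omitted
}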
I am creating a simple Spring application which is supposed to book seats in a seminar. Let's say the Booking class looks like this:
@Entity
@Table(name = "bookings")
@IdClass(BookingId.class)
public class Booking {

    @Id
    private Long seminarId;

    @Id
    private String seatNo;

    // .. other fields like participant info
    // .. getters and setters
}
of course the BookingId class:
public class BookingId implements Serializable {

    private static final long serialVersionUID = 1L;

    private Long seminarId;
    private String seatNo;

    // .. constructors, getters, setters
}
And I have a repository
@Repository
public interface BookingsRepository extends JpaRepository<Booking, BookingId> {
}
In the controller, when a booking request arrives, I first check whether a booking with the same seminar id and seat number already exists; if it doesn't exist, I create one:
@RequestMapping(method = RequestMethod.POST)
public ResponseEntity<BaseCrudResponse> createNewBooking(@Valid @RequestBody NewBookingDao newBookingDao,
        BindingResult bindingResult) {
    logger.debug("Request for a new booking");
    // .. some other stuff
    Booking newBooking = new Booking();
    newBooking.setSeminarId(newBookingDao.getSeminarId());
    newBooking.setSeatNumber(newBookingDao.getSeatNumber());
    // .. set other fields
    Booking existing = bookingsRepository.findOne(
            new BookingId(newBooking.getSeminarId(), newBooking.getSeatNumber()));
    if (existing == null) {
        bookingsRepository.save(newBooking);
        return new ResponseEntity<>(new BaseCrudResponse(0), HttpStatus.CREATED);
    }
    return new ResponseEntity<>(response, HttpStatus.BAD_REQUEST);
}
Now, what will happen if the save method of the repository hasn't finished committing its transaction and another request already gets past the existence check? There might be an incorrect booking (the last commit will overwrite the previous one). Is this scenario likely to happen? Will the repository ensure that it completes the transaction before another save call?
Also, is there any way to tell JPA to throw some exception (an IntegrityConstraintException, for example) if the composite key (in this case seminarId and seatNumber) already exists? With the present setup it just updates the row.
You can use javax.persistence.LockModeType.PESSIMISTIC_WRITE so that transactions other than the one holding the lock cannot update the entity.
If you use spring-data > 1.6 you can annotate the repository method with @Lock:
interface BookingsRepository extends Repository<Booking, BookingId> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Booking findOne(BookingId id);
}
Of course you will need to handle the locking exception that may be thrown in the controller.
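A sketch of how the locked finder might be driven from a transactional service follows; the service class, its method and the transaction boundaries are assumptions, and it presumes the @Lock override is declared on the JpaRepository from the question so that save() is still available.

// Sketch only: names and structure here are assumptions, not code from the question.
@Service
public class BookingService {

    @Autowired
    private BookingsRepository bookingsRepository;

    @Transactional
    public boolean bookIfFree(BookingId id, Booking newBooking) {
        // The PESSIMISTIC_WRITE lock acquired by the finder is held until this
        // transaction commits or rolls back.
        Booking existing = bookingsRepository.findOne(id);
        if (existing != null) {
            return false; // seat already taken
        }
        bookingsRepository.save(newBooking);
        return true;
    }
}

The controller can then catch Spring's PessimisticLockingFailureException (or javax.persistence.PessimisticLockException, depending on where it surfaces) around the call and map it to an error response.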
There is a problem with generating the id while persisting into the database.
I added the following code to my JPA entity, however I'm getting 0 for personid.
@Id
@Column(unique=true, nullable=false, precision=10, name="PERSONID")
@SequenceGenerator(name="appUsersSeq", sequenceName="SEQ_PERSON", allocationSize=1)
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator = "appUsersSeq")
private long personid;
EjbService:
@Stateless
public class EjbService implements EjbServiceRemote {

    @PersistenceContext(name = "Project1245")
    private EntityManager em;

    @Override
    public void addTperson(Tperson tp) {
        em.persist(tp);
    }
}
0 is the default value for the long type. The id will be set once the select query for the related sequence has been invoked, which commonly happens when you persist the entity. Are you persisting the entity? If so, post the database sequence definition so it can be checked.
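As a quick check, something along these lines inside the existing bean should show a non-zero id as soon as persist() has run; the logging line and the getPersonid() getter are assumptions for illustration.

// Sketch only: the addTperson method from the question with an illustrative check added.
@Override
public void addTperson(Tperson tp) {
    em.persist(tp);
    // With GenerationType.SEQUENCE the provider typically selects the next value from
    // SEQ_PERSON during persist(), so the id should already be assigned here.
    System.out.println("Generated personid: " + tp.getPersonid());
}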
The test below fails if I remove the first persist(). Why do I need to persist the NodeEntity in order for the Set to be instantiated? Is there some better way to do this? I don't want to write to the database more often than necessary.
@Test
public void testCompetenceCreation() {
    Competence competence = new Competence();
    competence.setName("Testcompetence");
    competence.persist(); // test fails if this line is removed

    Competence competenceFromDb = competenceRepository.findOne(competence.getId());
    assertEquals(competence.getName(), competenceFromDb.getName());

    Education education = new Education();
    education.setName("Bachelors Degree");
    competence.addEducation(education);
    competence.persist();

    assertEquals(competence.getEducations(), competenceFromDb.getEducations());
}
If I remove the mentioned line, the exception below occurs:
Throws
java.lang.NullPointerException
at com.x.entity.Competence.addEducation(Competence.java:54)
Competence.class:
@JsonIgnoreProperties({"nodeId", "persistentState", "entityState"})
@NodeEntity
public class Competence {

    @RelatedTo(type = "EDUCATION", elementClass = Education.class)
    private Set<Education> educations;

    public Set<Education> getEducations() {
        return educations;
    }

    public void addEducation(Education education) {
        this.educations.add(education);
    }
}
Education.class
@JsonIgnoreProperties({"nodeId", "persistentState", "entityState"})
@NodeEntity
public class Education {

    @GraphId
    private Long id;

    @JsonBackReference
    @RelatedTo(type = "COMPETENCE", elementClass = Competence.class, direction = Direction.INCOMING)
    private Competence competence;

    @Indexed
    private String name;

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
What version of SDN are you running?
Because up until the first persist the entity is detached and AJ doesn't take care of the fields (like creating the managed set). Persist creates the node and connects it to the entity; from then on, until the transaction commits, your entity is attached and all the changes will be written through.
It only writes to the db at commit, so no worries about too many writes. All the other changes will just be held in memory for your transaction. Probably you should also annotate the test method with @Transactional.
Can you create a JIRA issue for this, so that consistent handling is provided? (The problem being that it probably also complains when you initialize the set yourself.)
Two other things:
As your relationship between Education<--Competence is probably the same one, just navigated in the other direction, you must provide the same type name in both annotations, e.g. Education<-[:PROVIDES]-Competence (see the sketch below).
Also, if you don't call persist, your entity will not be created, and findOne will then return null.
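For the relationship naming point, a minimal sketch of what I mean, assuming the type is renamed to PROVIDES on both sides (other fields and methods omitted):

// In Competence.java (sketch)
@NodeEntity
public class Competence {

    @RelatedTo(type = "PROVIDES", elementClass = Education.class)
    private Set<Education> educations;
}

// In Education.java (sketch)
@NodeEntity
public class Education {

    @RelatedTo(type = "PROVIDES", elementClass = Competence.class, direction = Direction.INCOMING)
    private Competence competence;
}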