I'm trying to write a Groovy/Grails 3 function that looks up a database object, locks it, and then saves it (releasing the lock automatically).
If the function is called multiple times, it should wait until the lock is released, and then run the update. How can I accomplish this?
def updateUser(String name) {
    User u = User.get(1)
    // if locked, wait until released somehow?
    u.lock()
    u.name = name
    u.save()
}
updateUser('bob')
updateUser('fred') // sees lock from previous call, waits until released, then updates
u.save(flush:true)
Flushing the Hibernate session pushes the pending update to the database; the lock itself is released when the surrounding transaction completes.
Generally speaking, pessimistic locking only works in a transactional context, so make sure to put the updateUser method in a service that is annotated with @Transactional.
Calling get() and then lock() results in two SQL statements being executed (one to fetch the object, another to lock it). Using User.lock(), a single select ... for update query is issued instead:
@Transactional
class UserService {

    def updateUser(String name) {
        User u = User.lock(1) // blocks until the lock is free
        u.name = name
        u.save()
    }
}
I have a transactional method where objects are inserted. The debugger shows that upon eventsDAO.save(..) the actual insert doesn't take place, but there is only a sequence fetch. The first time I see insert into events_t .. in the debugger is when there's a reference to the just-inserted Event.
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class, readOnly = false)
public void insertEvent(..) {
    EventsT eventsT = new EventsT();
    // Fill it out...
    EventsT savedEventsT = eventsDAO.save(eventsT); // No actual save happens here

    // .. Some other HQL fetches or statements ...

    // Actual save (insert) only happens after some actual reference to this EventsT (below).
    // This is also HQL.
    SomeField someField = eventsDAO.findSomeAttrForEventId(savedEventsT.getId());
}
But I also see that this only holds true if all the statements are HQL (non-native).
As soon as I put a Native-SQL Select somewhere before any actual reference to this table, even if it does not touch the table in any way, it forces an immediate flush and I see the statement insert into events_t ... on the console at that exact point.
If I don't touch the table EventsT with my Native SQL Select in any way, why does the flushing happen at that point?
According to the Hibernate documentation:
6.1. AUTO flush
By default, Hibernate uses the AUTO flush mode which triggers a flush in the following circumstances:
- prior to committing a Transaction
- prior to executing a JPQL/HQL query that overlaps with the queued entity actions
- before executing any native SQL query that has no registered synchronization
So, this is expected behaviour. See also this section of the documentation, which shows how you can use a synchronization.
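For illustration, a minimal Groovy sketch of that idea (the table name is made up, and sessionFactory is assumed to be injected): by registering the query space a native query actually touches, Hibernate only auto-flushes when the queued entity actions overlap that space, so an unrelated native select no longer forces the events_t insert out early.

// Hedged sketch: declare which table the native query touches, so Hibernate
// does not flush the pending EventsT insert just because a native query runs.
def session = sessionFactory.currentSession
def rows = session.createSQLQuery("select id from some_unrelated_table")
        .addSynchronizedQuerySpace("some_unrelated_table") // auto-flush only if queued actions touch this space
        .list()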
I am using the following code for Redis lock and release:
var key = "test-x";
RedisValue token = (RedisValue)Guid.NewGuid().ToString();
if (db.LockTake(key, token, duration))
{
    try
    {
        // you have the lock, do work
    }
    finally
    {
        db.LockRelease(key, token);
    }
}
My problem:
In a unit test I am calling this method twice. The first time it always works, but the second time I want to obtain the lock on this specific key, it does not work. From my understanding, db.LockRelease should release the lock, making it available for the second request. I did notice that db.LockRelease returns false.
Any idea what might be happening?
The lock key needs to be unique. You are probably using the same lock key as the cache key in your code; take the lock under a dedicated key instead (e.g. "test-x:lock" rather than "test-x"). From https://stackoverflow.com/a/25138164:
the key (the unique name of the lock in the database)
I do the following:
def currentUser = springSecurityService.currentUser
currentUser.name = "test"
currentUser.save(flush: true)
// some other code
currentUser.gender = "male"
currentUser.save(flush: true) // Exception occurs
This is the exception I get:
ERROR events.PatchedDefaultFlushEventListener - Could not synchronize database state with session
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
How can I prevent this error? What is the best solution for that?
I found different approaches:
here that you can use discard()
here that you can use merge()
Which one should I use?
You should use merge() - it merges your changes into the current persistent state from the database. If you use discard(), the object is simply reset back to what the database has and your changes are thrown away. Everything else in the Hibernate session you need to manage yourself.
More importantly, the code should be written in a service so that there is a database transaction, and you should call
save(flush: true)
only once, at the end.
def currentUser = springSecurityService.currentUser
currentUser.name = "test"
// currentUser.save(flush: true) // removing this line because if a rollback occurs, then changes before this would be persisted.
// some other code
currentUser.gender = "male"
currentUser.merge() // This will merge persistent object with current state
currentUser.save(flush: true)
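For illustration, a rough sketch of what that might look like wrapped in a service (the class and method names are made up; on newer Grails use grails.gorm.transactions.Transactional instead):

import grails.transaction.Transactional

@Transactional
class UserProfileService {

    def springSecurityService

    def updateCurrentUser() {
        def currentUser = springSecurityService.currentUser
        currentUser.name = "test"
        // some other code
        currentUser.gender = "male"
        currentUser.save(flush: true) // single flush, at the end of the transaction
    }
}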
GORM works fine out of the box as long as there is no batch with more than 10,000 objects. Without optimisation you will run into OutOfMemoryError problems.
The common solution is to flush() and clear() the session every n objects (e.g. n = 500):
Session session = sessionFactory.currentSession
Transaction tx = session.beginTransaction()
def propertyInstanceMap = org.codehaus.groovy.grails.plugins.DomainClassGrailsPlugin.PROPERTY_INSTANCE_MAP
Date yesterday = new Date() - 1

Criteria c = session.createCriteria(Foo.class)
c.add(Restrictions.lt('lastUpdated', yesterday))
ScrollableResults rawObjects = c.scroll(ScrollMode.FORWARD_ONLY)

int count = 0
int batchSize = 500
while (rawObjects.next()) {
    def rawObject = rawObjects.get(0)
    fooService.doSomething()
    if (++count % batchSize == 0) {
        // flush a batch of updates and release memory:
        try {
            session.flush()
        } catch (Exception e) {
            log.error(session)
            log.error(" error: " + e.message)
            throw e
        }
        session.clear()
        propertyInstanceMap.get().clear()
    }
}
session.flush()
session.clear()
tx.commit()
But there are some problems I can't solve:
If I use currentSession, then the controller fails because the session is empty.
If I use sessionFactory.openSession(), then the currentSession is still used inside FooService. Of course I can use the session.save(object) notation, but this means I have to modify fooService.doSomething() and duplicate code for the single operation (the common Grails notation, fooObject.save()) and the batch operation (the session.save(fooObject) notation).
If I use Foo.withSession { session -> } or Foo.withNewSession { session -> }, then the objects of the Foo class are cleared by session.clear() as expected, but all the other objects are not cleared, which leads to a memory leak.
Of course I can use evict(object) to manually clear the session, but it is nearly impossible to get hold of all relevant objects, due to auto-fetching of associations.
So I have no idea how to solve my problems without making FooService.doSomething() more complex. I'm looking for something like withSession {} for all domains, or a way to save the session at the beginning (Session tmp = currentSession) and do something like sessionFactory.setCurrentSession(tmp). Neither exists!
Any idea is welcome!
I would recommend using a StatelessSession for this kind of batch processing. See this post: Using StatelessSession for Batch processing
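A minimal sketch of that idea, assuming sessionFactory is injected and reusing the criteria from above (note that a stateless session has no first-level cache, so nothing accumulates and no periodic clear() is needed, but it also bypasses cascades, interceptors and GORM events):

import org.hibernate.ScrollMode
import org.hibernate.ScrollableResults
import org.hibernate.StatelessSession
import org.hibernate.Transaction
import org.hibernate.criterion.Restrictions

StatelessSession statelessSession = sessionFactory.openStatelessSession()
Transaction tx = statelessSession.beginTransaction()
try {
    ScrollableResults rows = statelessSession.createCriteria(Foo.class)
            .add(Restrictions.lt('lastUpdated', new Date() - 1))
            .scroll(ScrollMode.FORWARD_ONLY)
    while (rows.next()) {
        Foo foo = (Foo) rows.get(0)
        // apply your changes to foo here (the work fooService.doSomething() used to do),
        // then write them out explicitly - there is no dirty checking on a stateless session
        statelessSession.update(foo)
    }
    tx.commit()
} finally {
    statelessSession.close()
}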
A modified approach to what you are doing would be (see the sketch below):
1. Loop over your entire collection (rawObjects) and save a list of all the ids of those objects.
2. Loop over the list of ids. At each iteration, look up just that single object, by its id.
3. Then use the same periodic clearing of the session cache that you are doing now.
By the way, someone else has suggested an approach similar to yours. But note that the code in this link is incorrect; the lines that clear the session should be inside the if statement, just like you have in your solution.
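A rough Groovy/GORM sketch of that id-based variant (assuming the same Foo domain class with its lastUpdated property; fooService.doSomething(foo) is a hypothetical signature, adapt it to your service):

// step 1: collect only the ids, so the session holds almost nothing
def ids = Foo.withCriteria {
    lt('lastUpdated', new Date() - 1)
    projections { property('id') }
}

// step 2: load and process one object at a time, clearing the session periodically
int batchSize = 500
ids.eachWithIndex { id, i ->
    Foo foo = Foo.get(id)
    fooService.doSomething(foo)
    if ((i + 1) % batchSize == 0) {
        Foo.withSession { session ->
            session.flush()
            session.clear()
        }
    }
}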
I have a bit of linq to entities code in a web app. It basically keeps a count of how many times an app was downloaded. I'm worried that this might happen:
Session 1 reads the download count (eg. 50)
Session 2 reads the download count (again, 50)
Session 1 increments it and writes it to the db (database stores 51)
Session 2 increments it and writes it to the db (database stores 51)
This is my code:
private void IncreaseHitCountDB()
{
    JTF.JTFContainer jtfdb = new JTF.JTFContainer();

    var app = (from a in jtfdb.Apps
               where a.Name.Equals(this.Title)
               select a).FirstOrDefault();

    if (app == null)
    {
        app = new JTF.App();
        app.Name = this.Title;
        app.DownloadCount = 1;
        jtfdb.AddToApps(app);
    }
    else
    {
        app.DownloadCount = app.DownloadCount + 1;
    }

    jtfdb.SaveChanges();
}
Is it possible that this could happen? How could I prevent it?
Thank you,
Fidel
Entity Framework, by default, uses an optimistic concurrency model. Google says optimistic means "Hopeful and confident about the future", and that's exactly how Entity Framework acts. That is, when you call SaveChanges() it is "hopeful and confident" that no concurrency issue will occur, so it just tries to save your changes.
The other model Entity Framework can use should be called a pessimistic concurrency model ("expecting the worst possible outcome"). You can enable this mode on an entity-by-entity basis. In your case, you would enable it on the App entity. This is what I do:
Step 1. Enabling concurrency checking on an Entity
Right-click the .edmx file and choose Open With...
Choose XML (Text) Editor in the popup dialog, and click OK.
Locate the App entity in the ConceptualModels. I suggest toggling outlining and just expanding tags as necessary. You're looking for something like this:
<edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx">
  <!-- EF Runtime content -->
  <edmx:Runtime>
    <!-- SSDL content -->
    ...
    <!-- CSDL content -->
    <edmx:ConceptualModels>
      <Schema Namespace="YourModel" Alias="Self" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2008/09/edm">
        <EntityType Name="App">
Under the EntityType you should see a bunch of <Property> tags. If one exists with Name="Status" modify it by adding ConcurrencyMode="Fixed". If the property doesn't exist, copy this one in:
<Property Name="Status" Type="Byte" Nullable="false" ConcurrencyMode="Fixed" />
Save the file and double click the .edmx file to go back to the designer view.
Step 2. Handling concurrency when calling SaveChanges()
SaveChanges() will throw one of two exceptions: the familiar UpdateException or an OptimisticConcurrencyException.
If you have made changes to an Entity which has ConcurrencyMode="Fixed" set, Entity Framework will first check the data store for any changes made to it. If there are changes, an OptimisticConcurrencyException will be thrown. If no changes have been made, it will continue normally.
When you catch the OptimisticConcurrencyException you need to call the Refresh() method of your ObjectContext and redo your calculation before trying again. The call to Refresh() updates the Entity(s) and RefreshMode.StoreWins means conflicts will be resolved using the data in the data store. The DownloadCount being changed concurrently is a conflict.
Here's what I'd make your code look like. Note that this is more useful when you have a lot of operations between getting your Entity and calling SaveChanges().
private void IncreaseHitCountDB()
{
    JTF.JTFContainer jtfdb = new JTF.JTFContainer();

    var app = (from a in jtfdb.Apps
               where a.Name.Equals(this.Title)
               select a).FirstOrDefault();

    if (app == null)
    {
        app = new JTF.App();
        app.Name = this.Title;
        app.DownloadCount = 1;
        jtfdb.AddToApps(app);
    }
    else
    {
        app.DownloadCount = app.DownloadCount + 1;
    }

    try
    {
        try
        {
            jtfdb.SaveChanges();
        }
        catch (OptimisticConcurrencyException)
        {
            jtfdb.Refresh(RefreshMode.StoreWins, app);
            app.DownloadCount = app.DownloadCount + 1;
            jtfdb.SaveChanges();
        }
    }
    catch (UpdateException uex)
    {
        // Something else went wrong...
    }
}
You can reduce the chance of this happening by querying the download count column only right before you are about to increment it: the longer the time between reading and incrementing, the longer another session has to read it (and later write back a wrongly incremented number), messing up the count.
Or do it atomically with a single SQL query:
UPDATE Data SET Counter = (Counter+1)
Since it's LINQ to Entities, execution is deferred; for another session to screw up the count (incrementing from the same base and losing one count there), it would, I believe, have to increment app.DownloadCount between these two lines:
else
{
    app.DownloadCount += 1; // First line
}
jtfdb.SaveChanges(); // Second line
}
That means the window for such a change to occur (making the previously read count stale) is so small that for an application like this it is virtually impossible.
Since I'm no LINQ pro, I don't know whether LINQ actually reads app.DownloadCount before adding one or just adds one through some SQL command, but in either case you shouldn't have to worry about that, IMHO.
You could easily test what would happen in this scenario - start a thread, sleep it, and then start another.
else
{
    app.DownloadCount = app.DownloadCount + 1;
}
System.Threading.Thread.Sleep(10000);
jtfdb.SaveChanges();
But the simple answer is that no, Entity Framework does not perform any concurrency checking by default (MSDN - Saving Changes and Managing Concurrency).
That site will provide some background for you.
Your options are:
- enable concurrency checking, which means that if two users download at the same time and the first updates after the second has read but before the second has updated, you'll get an exception;
- create a stored procedure that increments the value in the table directly, and call it from code in a single operation, e.g. IncrementDownloadCounter. This ensures there is no separate 'read' and therefore no possibility of a lost update.