Spring JPA transactional to avoid concurrency read / update

Using Spring Boot and JPA/Hibernate, I'm looking for a solution to prevent a table record from being read by another process while I'm reading and then updating an entity. The isolation-level concepts of dirty read, nonrepeatable read, and phantom read are not so clear to me. I mean: if process #1 starts a read/update, I don't want process #2 to be able to read the old value (before it is updated by #1) and then update the structure with wrong values.

Isolation levels all prevent reading changes, at different levels of strictness:
Dirty read -> reading not-yet-committed changes
Nonrepeatable read -> querying the same row a second time finds changed data
Phantom read -> like the previous, but instead of changed data, it finds more rows added
The Serializable level, being the strictest, would prevent reading any concurrent changes, essentially resulting in sequential processing in the DB (no concurrency), and would probably solve your problem.
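In Spring, that level can be requested declaratively; a minimal illustration (the method name is made up):
@Transactional(isolation = Isolation.SERIALIZABLE)
public void readThenUpdate(long id) {
    // the DB makes this transaction behave as if it ran alone; conflicting
    // concurrent transactions block or fail (and can then be retried)
}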
What you are looking for, if I understood correctly, is to block the second process from doing any work until the row update is complete - that is called row locking, and it can be controlled directly as well (without setting serializable isolation).
See more about row locking with Spring JPA here: https://www.baeldung.com/java-jpa-transaction-locks
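For illustration, a minimal sketch of pessimistic row locking with Spring Data JPA (entity, repository, and method names are all hypothetical; javax.persistence applies to the Spring Boot 2.x generation):
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface AccountRepository extends JpaRepository<Account, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE) // typically translates to SELECT ... FOR UPDATE
    @Query("select a from Account a where a.id = :id")
    Account findForUpdate(@Param("id") Long id);
}

@Service
public class AccountService {

    private final AccountRepository repository;

    public AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    @Transactional // the row lock is held until this transaction commits or rolls back
    public void readThenUpdate(Long id) {
        Account account = repository.findForUpdate(id); // process #2 blocks here
        // ... read and modify the entity; changes are flushed and the lock released on commit
    }
}
With this in place, a second transaction calling findForUpdate(id) for the same row waits at the database level until the first one commits, so it can never read the old value and overwrite the new one.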
If it weren't a different process (a different program) but just a different thread within the same Java program, a simple synchronized block would do the trick as well.

Batching stores transparently

We are using the following frameworks and versions:
jOOQ 3.11.1
Spring Boot 2.3.1.RELEASE
Spring 5.2.7.RELEASE
I have an issue where some of our business logic is divided into logical units that look as follows:
Request containing a user transaction is received
This request contains various information, such as the type of transaction, which products are part of this transaction, what kind of payments were done, etc.
These attributes are then stored individually in the database.
In code, this looks approximately as follows:
TransactionRecord transaction = transactionRepository.create();
transaction.create(creationCommand);
In Transaction#create (which runs transactionally), something like the following occurs:
storeTransaction();
storePayments();
storeProducts();
// ... other relevant information
A given transaction can have many different types of products and attributes, all of which are stored. Many of these attributes result in UPDATE statements, while some may result in INSERT statements - it is difficult to fully know in advance.
For example, the storeProducts method looks approximately as follows:
products.forEach(product -> {
    ProductRecord record = productRepository.findProductByX(...);
    if (record == null) {
        record = productRepository.create();
        record.setX(...);
        record.store();
    } else {
        // do something else
    }
});
If the products are new, they are INSERTed. Otherwise, other calculations may take place. Depending on the size of the transaction, this single user transaction could obviously result in up to O(n) database calls/roundtrips, and even more depending on what other attributes are present. In transactions where a large number of attributes are present, this may result in upwards of hundreds of database calls for a single request (!). I would like to bring this down as close as possible to O(1) so as to have more predictable load on our database.
Naturally, batch and bulk inserts/updates come to mind here. What I would like to do is to batch all of these statements into a single batch using jOOQ, and execute after successful method invocation prior to commit. I have found several (SO Post, jOOQ API, jOOQ GitHub Feature Request) posts where this topic is implicitly mentioned, and one user groups post that seemed explicitly related to my issue.
Since I am using Spring together with jOOQ, I believe my ideal solution (preferably declarative) would look something like the following:
@Batched(100) // batch size as parameter, potentially
@Transactional
public void createTransaction(CreationCommand creationCommand) {
    // all inserts/updates above are added to a batch and executed on successful invocation
}
For this to work, I imagine I'd need to manage a scoped (ThreadLocal/transactional/session-scoped) resource which can keep track of the current batch such that:
1. Prior to entering the method, an empty batch is created if the method is @Batched,
2. A custom DSLContext (perhaps extending DefaultDSLContext) that is made available via DI has a ThreadLocal flag which keeps track of whether any current statements should be batched or not, and if so,
3. Intercepts the calls and adds them to the current batch instead of executing them immediately.
However, step 3 would necessitate rewriting a large portion of our code from the (IMO) relatively readable:
records.forEach(record -> {
    record.setX(...);
    // ...
    record.store();
});
to:
userObjects.forEach(userObject -> {
    dslContext.insertInto(...).values(userObject.getX(), ...).execute();
});
which would defeat the purpose of having this abstraction in the first place, since the second form can also be rewritten using DSLContext#batchStore or DSLContext#batchInsert. IMO however, batching and bulk insertion should not be up to the individual developer and should be able to be handled transparently at a higher level (e.g. by the framework).
I find the readability of the jOOQ API to be an amazing benefit of using it; however, it seems (as far as I can tell) that it does not lend itself very well to interception/extension for cases such as these. Is it possible, with the jOOQ 3.11.1 (or even current) API, to get behaviour similar to the former with transparent batch/bulk handling? What would this entail?
EDIT:
One possible but extremely hacky solution that comes to mind for enabling transparent batching of stores would be something like the following:
Create a RecordListener and add it as a default to the Configuration whenever batching is enabled.
In RecordListener#storeStart, add the query to the current Transaction's batch (e.g. in a ThreadLocal<List>)
The AbstractRecord has a changed flag which is checked (org.jooq.impl.UpdatableRecordImpl#store0, org.jooq.impl.TableRecordImpl#addChangedValues) prior to storing. Resetting this (and saving it for later use) makes the store operation a no-op.
Lastly, upon successful method invocation but prior to commit:
Reset the changes flags of the respective records to the correct values
Invoke org.jooq.UpdatableRecord#store, this time without the RecordListener or while skipping the storeStart method (perhaps using another ThreadLocal flag to check whether batching has already been performed).
As far as I can tell, this approach should work in theory. Obviously, it's extremely hacky and prone to breaking, as the library internals may change at any time and the code depends on reflection to work.
Does anyone know of a better way, using only the public jOOQ API?
jOOQ 3.14 solution
You've already discovered the relevant feature request #3419, which will solve this on the JDBC level starting from jOOQ 3.14. You can either use the BatchedConnection directly, wrapping your own connection to implement the below, or use this API:
ctx.batched(c -> {
    // Make sure all records are attached to c, not ctx, e.g. by fetching from c.dsl()
    records.forEach(record -> {
        record.setX(...);
        // ...
        record.store();
    });
});
jOOQ 3.13 and before solution
For the time being, until #3419 is implemented (it will be, in jOOQ 3.14), you can implement this yourself as a workaround. You'd have to proxy a JDBC Connection and PreparedStatement and ...
... intercept all:
Calls to Connection.prepareStatement(String), returning a cached proxy statement if the SQL string is the same as for the last prepared statement, or batch execute the last prepared statement and create a new one.
Calls to PreparedStatement.executeUpdate() and execute(), and replace those by calls to PreparedStatement.addBatch()
... delegate all:
Calls to other API, such as e.g. Connection.createStatement(), which should flush the above buffered batches and then call the delegate API instead (a rough sketch follows).
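A rough sketch of that interception using a JDK dynamic proxy (heavily simplified: the returned PreparedStatement would need a similar wrapper that turns executeUpdate()/execute() into addBatch(), and error handling is omitted; all names are made up):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class BatchingConnectionHandler implements InvocationHandler {
    private final Connection delegate;
    private String lastSql;
    private PreparedStatement lastStatement;

    BatchingConnectionHandler(Connection delegate) {
        this.delegate = delegate;
    }

    static Connection wrap(Connection delegate) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                new BatchingConnectionHandler(delegate));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        boolean isPrepare = "prepareStatement".equals(method.getName())
                && args != null && args.length == 1;
        if (isPrepare && args[0].equals(lastSql)) {
            return lastStatement; // same SQL as last time: reuse it so executes can be batched
        }
        flush(); // different SQL or any other call: run the buffered batch first
        Object result = method.invoke(delegate, args);
        if (isPrepare) {
            lastSql = (String) args[0];
            lastStatement = (PreparedStatement) result;
        }
        return result;
    }

    private void flush() throws SQLException {
        if (lastStatement != null) {
            lastStatement.executeBatch(); // execute whatever was buffered via addBatch()
            lastStatement.close();
            lastStatement = null;
            lastSql = null;
        }
    }
}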
I wouldn't recommend hacking your way around jOOQ's RecordListener and other SPIs, I think that's the wrong abstraction level to buffer database interactions. Also, you will want to batch other statement types as well.
Do note that by default, jOOQ's UpdatableRecord tries to fetch generated identity values (see Settings.returnIdentityOnUpdatableRecord), which is something that prevents batching. Such store() calls must be executed immediately, because you might expect the identity value to be available.
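For example, a small sketch combining both points (assuming jOOQ 3.14+, a generated BOOK table and its BookRecord; the modification shown is illustrative):
import static com.example.generated.Tables.BOOK; // hypothetical generated code

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

Settings settings = new Settings()
    .withReturnIdentityOnUpdatableRecord(false); // don't fetch generated IDs, so store() can batch
DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES, settings);

ctx.batched(c -> {
    // fetch from c.dsl() so the records are attached to the batching configuration
    for (BookRecord record : c.dsl().selectFrom(BOOK).fetch()) {
        record.setTitle(record.getTitle().trim()); // illustrative modification
        record.store(); // buffered on the JDBC level, flushed in batches
    }
});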

Ensure consistency when caching data in after_commit hooks

For a specific database table, we need an in-memory cache of this data that is always in-sync with the database. My current attempt is to write the changes to the cache in an after_commit hook - this way we make sure not to write any changes to the cache that could get reverted later.
However, this strategy is vulnerable to the following scenario:
Thread A locks and updates record, stores value 1
Thread A commits the change
Thread B locks and updates record, stores value 2
Thread B commits the change
Thread B runs the after_commit hook, so the cache now has value 2
Thread A runs the after_commit hook, so the cache now has value 1 but should have value 2
Am I right about this problem and how would one solve this?
You are right about this problem.
There is an after_save callback that runs within the same transaction. You might want to use that one instead of the after_commit hook, which runs after the transaction.
But then you will need to deal with rolled-back transactions yourself.
Or you might want to write your caching method in a way that does not depend on a specific instance, but instead caches the latest version found in the database, by reloading the record from the database first.
But even then: multithreaded systems are hard to keep in sync, and you cannot even be sure whether the first or the second update sent to your cache will be stored, because the caching system might be multi-threaded too.
You might want to read about different consistency models.
The solution we came up with is to lock the cache for read / write before_commit and unlock it in the after_commit. This seems to do the trick.
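A minimal sketch of that idea (in Java here, although the question is about ActiveRecord callbacks; cacheLock, cache, key, and commitTransaction() are hypothetical):
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

ReadWriteLock cacheLock = new ReentrantReadWriteLock(); // shared by all cache readers and writers

// before_commit: take the write lock so no reader can consult the cache mid-update
cacheLock.writeLock().lock();
try {
    commitTransaction();      // the DB commit happens while the cache is locked
    cache.put(key, newValue); // after_commit: the cache now matches the committed state
} finally {
    cacheLock.writeLock().unlock(); // readers may use the cache again
}
Because the commit and the cache write happen under the same lock, the interleaving from the question (B's commit and cache write slipping in between A's commit and A's cache write) can no longer occur.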

Chunk processing in Spring Batch effectively limits you to one step?

I am writing a Spring Batch application to do the following: There is an input table (PostgreSQL DB) to which someone continually adds rows - that is basically work items being added. For each of these rows, I need to fetch more data from another DB, do some processing, and then do an output transaction which can be multiple SQL queries touching multiple tables (this needs to be one transaction for consistency reasons).
Now, the part between the input and output should be modular - it already has 3-4 logically separated things, and in the future there will be more. This flow need not be linear - what processing is done next can depend on the result of the previous step. In short, this is basically like the flow you can set up using steps inside a job.
My main problem is this: Normally a single chunk processing step has both ItemReader and ItemWriter, i.e., input to output in a single step. So, should I include all the processing steps as part of a single ItemProcessor? How would I make a single ItemProcessor a stateful workflow in itself?
The other option is to make each step a Tasklet implementation, and write two tasklets myself to behave as ItemReader and ItemWriter.
Any suggestions?
Found an answer - yes, you are effectively limited to a single step. But:
1) For linear workflows, you can "chain" ItemProcessors - that is, create a composite ItemProcessor to which you provide all the ItemProcessors that do the actual work, through applicationContext.xml. The composite ItemProcessor just runs them one by one (see the sketch after this list). This is what I'm doing right now.
2) You can always create the internal subflow as a separate Spring Batch workflow and call it through code in an ItemProcessor, similar to the composite ItemProcessor above. I might move to this in the future.
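For reference, a minimal sketch of option 1) using Java config instead of applicationContext.xml (WorkItem and the delegate processors are hypothetical):
import java.util.Arrays;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.support.CompositeItemProcessor;
import org.springframework.context.annotation.Bean;

@Bean
public CompositeItemProcessor<WorkItem, WorkItem> compositeProcessor(
        ItemProcessor<WorkItem, WorkItem> fetchDetails,
        ItemProcessor<WorkItem, WorkItem> applyBusinessRules) throws Exception {
    CompositeItemProcessor<WorkItem, WorkItem> composite = new CompositeItemProcessor<>();
    composite.setDelegates(Arrays.asList(fetchDetails, applyBusinessRules)); // run in order
    composite.afterPropertiesSet(); // fails fast if no delegates were configured
    return composite;
}
Note that if any delegate returns null, the item is filtered out, which also gives you a crude way to short-circuit the chain.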

Spring, Hibernate - Batch processing of large amounts of data with good performance

Imagine you have a large amount of data in a database, approx. ~100 MB. We need to process all the data somehow (update it or export it somewhere else). How do you implement this task with good performance? How should transaction propagation be set up?
Example #1 (with bad performance):
@Singleton
public class ServiceBean {

    public void processAllData() {
        List<Entity> entityList = dao.findAll();
        for (Entity entity : entityList) {
            process(entity);
        }
    }

    private void process(Entity entity) {
        // data processing
        // saves data back (UPDATE operation) or exports to somewhere else (just READs from DB)
    }
}
What could be improved here?
In my opinion:
I would set the Hibernate batch size (see the Hibernate documentation on batch processing).
I would separate ServiceBean into two Spring beans with different transaction settings. The method processAllData() should run outside a transaction, because it operates on large amounts of data and a potential rollback wouldn't be 'quick' (I guess). The method process(Entity entity) would run in a transaction - rolling back in the case of a single entity is no big deal.
Do you agree? Any tips?
Here are 2 basic strategies:
JDBC batching: set the JDBC batch size, usually somewhere between 20 and 50 (hibernate.jdbc.batch_size). If you are mixing and matching object C/U/D operations, make sure you have Hibernate configured to order inserts and updates, otherwise it won't batch (hibernate.order_inserts and hibernate.order_updates). And when doing batching, it is imperative to make sure you clear() your Session so that you don't run into memory issues during a large transaction (a sketch follows below).
Concatenated SQL statements: implement the Hibernate Work interface and use your implementation class (or anonymous inner class) to run native SQL against the JDBC connection. Concatenate hand-coded SQL via semicolons (works in most DBs) and then process that SQL via doWork. This strategy allows you to use the Hibernate transaction coordinator while being able to harness the full power of native SQL.
You will generally find that no matter how fast you can get your OO code, using DB tricks like concatenating SQL statements will be faster.
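To make strategy 1 concrete, here is a minimal sketch (plain Hibernate API; the 50-row window and entity names are illustrative, and the properties would live in your Hibernate configuration):
// hibernate.jdbc.batch_size=50, hibernate.order_inserts=true, hibernate.order_updates=true
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < entities.size(); i++) {
    session.saveOrUpdate(entities.get(i));
    if (i % 50 == 0) { // flush/clear in windows matching the JDBC batch size
        session.flush(); // push the buffered statements to the DB as a batch
        session.clear(); // detach processed entities so they are no longer dirty-checked
    }
}
tx.commit();
session.close();
And for strategy 2, the Work interface boils down to something like this (the statements are placeholders):
session.doWork(connection -> {
    try (java.sql.Statement st = connection.createStatement()) {
        st.execute("UPDATE t1 SET ...; UPDATE t2 SET ..."); // semicolon-concatenated native SQL
    }
});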
There are a few things to keep in mind here:
Loading all entities into memory with a findAll method can lead to OOM exceptions.
You need to avoid attaching all of the entities to a session - since every time Hibernate executes a flush it will need to dirty-check every attached entity. This will quickly grind your processing to a halt.
Hibernate provides a stateless session, which you can use with a scrollable result set to scroll through entities one by one - docs here. You can then use this session to update the entity without ever attaching it to a session (a minimal sketch follows this list).
The other alternative is to use a stateful session but clear the session at regular intervals as shown here.
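The stateless-session approach sketched (Hibernate 5-style API; Entity and process() are placeholders):
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
ScrollableResults results = session.createQuery("from Entity")
        .scroll(ScrollMode.FORWARD_ONLY); // entities are streamed, not accumulated in memory
while (results.next()) {
    Entity entity = (Entity) results.get(0);
    process(entity);        // your processing logic
    session.update(entity); // no first-level cache, no dirty checking
}
results.close();
tx.commit();
session.close();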
I hope this is useful advice.

How to use a memory cache in a concurrency critical context

Consider the following two methods, written in pseudocode, that fetch a complex data structure and update it, respectively:
getData(id) {
    if (isInCache(id)) return getFromCache(id) // already in cache?
    data = fetchComplexDataStructureFromDatabase(id) // time consuming!
    setCache(id, data) // update cache
    return data
}

updateData(id, data) {
    storeDataStructureInDatabase(id, data)
    clearCache(id)
}
In the above implementation, there is a problem with concurrency, and we might end up with outdated data in the cache: consider two parallel executions running getData() and updateData(), respectively. If the first execution fetches data from the cache exactly in between the other execution's call to storeDataStructureInDatabase() and clearCache(), then we will get an outdated version of the data. How would you get around this concurrency problem?
I considered the following solution, where the cache is invalidated just before data is committed:
storeDataStructureInDatabase(id, data) {
    executeSql("UPDATE table1 SET...")
    executeSql("UPDATE table2 SET...")
    executeSql("UPDATE table3 SET...")
    clearCache(id)
    executeSql("COMMIT")
}
But then again: if one execution reads the cache in between the other execution's call to clearCache() and COMMIT, then outdated data will be fetched into the cache. Problem not solved.
With a cache, you cannot entirely prevent retrieving outdated data.
For example, when someone starts sending an HTTP request (if your application is a web application) that will later render the cache invalid, should we consider the cache invalid when the POST request starts? When the request is handled by your server? When you start the controller code? Well, no. In fact the cache is invalid only when the database transaction ends - not even when the transaction starts, only at the end, in the COMMIT phase of the transaction. And any working process using the previous data has very little chance of being aware that the data has changed; in a web application, what about HTML pages showing outdated data in a browser - do you want to flush those pages?
But let's assume your parallel processes are not just there for the web, but are real concurrency-critical parallel jobs.
One problem is that your cache is not handled by the database server, so it's not part of the transaction's COMMIT/ROLLBACK. You cannot decide to clear the cache first and rebuild it if you roll back; you can only clear and rebuild the cache after the transaction is committed.
And that leaves the possibility of getting an outdated version of the cache if your get comes between the database commit and the cache clear instruction. So:
Is it really important that you have an outdated version of the cache? Let's say your parallel process changed something just a few milliseconds before you retrieved this version (so you got the old one), worked with it for maybe 40 ms, and then built a final report on that without noticing the cache was flushed 15 ms before the end of the work. If your process's response cannot contain any outdated data, then you'll have to check data validity before outputting it (so you should recheck, at the end, that all data used in the work process is still valid).
So if you don't want to recheck data validity, that means your process should take some lock (a semaphore?) when starting and release it only at the end of the work - you are serializing your work. Databases can speed up serialization by using pseudo-serializable isolation levels for transactions and aborting your transaction if any concurrent change makes this pseudo-serialization hazardous. But here you're not only working with a database, so you would have to do the serialization on your own side.
Process serialization is slow, but you may try to do the same as the database, that is, run jobs in parallel and invalidate any running job when data is altered (so having something that detects your cache clear and kills and reruns all existing parallel jobs, implying you have something mastering all the parallel jobs),
or simply accept that you can have slightly outdated data. If we are talking about a web application, by the time your response travels over TCP/IP to the client browser, it may already be invalid.
Chances are that you will accept working with outdated cache data. The only really important point is that if you cannot trust your cache data for a really critical thing, then you shouldn't use a cache for that - if you are manipulating accounting data, for example. The only way to get a serialization of parallel tasks is to do:
in the writing process: all the important reads (the ones that will lead to writes) and all the writes in one transaction with a high isolation level (the strictest, SERIALIZABLE) and with all necessary row locks. That's hard to do working only with a database, and it's quite impossible if you add an external cache for read operations.
in the parallel read processes: do what you want (read from the external cache) if the read data won't be used for write operations. If some of the read data will later be used for a write operation, that data's validity will have to be checked in the write transaction (so in the writing process). Why not add a timestamp watermark to the data, so that when it comes back for a write operation you can tell whether it is still valid? A sketch of that idea follows.
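A minimal sketch of that watermark idea (with a hypothetical table and Spring's JdbcTemplate; a numeric version column plays the role of the timestamp):
int updated = jdbcTemplate.update(
    "UPDATE account SET balance = ?, version = version + 1 WHERE id = ? AND version = ?",
    newBalance, id, versionSeenAtRead);
if (updated == 0) {
    // the row changed since the (possibly cached) read: the data was stale, redo the work
}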
