I have a requirement to read the first enabled account from a DB2 database table and immediately update the column to disable it. While server 1 is reading and updating the column, no other server should be able to read the same row, since I want each account to be used by only one server at a time.
This is what I have so far:
Account.java
public class Account {
    private Long id;
    private Character enabled;
    // ... other fields, getters and setters
}
AccountRepository.java
public interface AccountRepository extends JpaRepository<Account, Long> {
    Account findFirstByEnabled(Character enabled);
}
AccountServiceImpl.java
@Service
public class AccountServiceImpl {

    @Autowired
    private AccountRepository accntRepository;

    @Transactional
    public Account findFirstAvailableAccount() {
        Account account = accntRepository.findFirstByEnabled('Y');
        if (account != null) {
            account.setEnabled('N'); // put debug point here
            accntRepository.save(account);
        }
        return account;
    }
}
But this isn't working. I've put a breakpoint on the marked line in the findFirstAvailableAccount() method. What I was expecting: while execution is paused at the breakpoint, a SELECT run directly against the database should block, and only return once I resume execution on the server and the transaction completes. Instead, the SELECT run directly against the database returned the complete result set immediately. What am I missing here? I'm using DB2, if it matters.
Answering my own question: I was running the wrong SELECT statement against the database. If I run the SELECT with "select ... for update" semantics, the execution waits until I hit resume on the server and the transaction completes.
SQL 1 - this executes immediately, even though the transaction from the server isn't complete:
select * from MYTABLE where ENABLED = 'Y';
SQL 2 - this waits until the transaction from the server is complete (it will probably time out if I don't hit resume quickly enough):
select * from MYTABLE where ENABLED = 'Y'
fetch first 1 rows only with rs use and keep update locks;
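As a side note, the same blocking behaviour can also be requested from the Spring Data JPA side instead of relying on the raw SQL clause. This is a minimal sketch, assuming the Account entity and repository from the question; @Lock(LockModeType.PESSIMISTIC_WRITE) makes the generated SELECT take update locks (on DB2, Hibernate appends its locking clause, e.g. with rs use and keep update locks), so a second transaction selecting the same row blocks until the first commits:
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Derived query executed with a pessimistic write lock, so concurrent
    // callers block on the same row until the owning transaction commits.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Account findFirstByEnabled(Character enabled);
}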
Related
I am creating a Spring Batch process (Spring Boot 2) that reads a file and writes it to a database, one record at a time: read from the file, process the record, and write (or update) it to the database.
If a record with the same ID already exists in the DB, the process has to update the end date of the existing record and create a new record with the new start date. Below is the code:
public class Processor implements ItemProcessor<CelebVO, CelebVO> {

    @Autowired
    EndorseTableRepository endorseTableRepository;

    @Override
    @Transactional
    public CelebVO process(CelebVO celebVO) {
        CelebEndorsement celebEndorsement =
                endorseTableRepository.findAllByCelebIDAndBrandID(celebVO.getCelebID(), celebVO.getBrandID());
        if (celebEndorsement == null) {
            CelebEndorsement newEndorsement = new CelebEndorsement(celebVO);
            endorseTableRepository.save(newEndorsement);
        } else {
            // close out the existing record
            celebEndorsement.setEndDate(celebVO.getEffDt().minusDays(1));
            endorseTableRepository.save(celebEndorsement);
            // create a new row with the new start date
            CelebEndorsement newEndorsement = new CelebEndorsement(celebVO);
            newEndorsement.setStartDate(celebVO.getEffDt());
            endorseTableRepository.save(newEndorsement);
        }
        return celebVO;
    }
}
Below is the input txt file (CelebVO):
CelebID BrandID EffDt
J Lo Pepsi 2021-01-05
J Lo Pepsi 2021-05-30
Now, let's suppose we start with an empty EndorseTable. When the process picks up the file and reads the first record, it will see that there are no records for CelebID 'J Lo', so it will insert a row into the DB.
Next, the process reads and processes the second row. It should see that there is already a record in the table for J Lo, so it should set an end date on that record and then create a new record.
After the file is processed we should see two records in the table.
But that is not what happens. Although I do a repository.save() for the first record, it is still not committed to the table, so when the process reads the second row, it doesn't find any rows in the table. It ends up writing only one record to the table.
I tried repository.saveAndFlush(), but that doesn't help.
My chunk size is 1.
I tried removing @Transactional, but that breaks the code, so I kept it there.
The chunk-oriented processing model of Spring Batch commits a transaction per chunk, not per record. So in your case, if the insert and the update happen to fall within the same chunk, the processor won't see the change from the previous record, as the transaction is not committed yet at that point.
Adding @Transactional to your processor's method is incorrect, because the processor is already executed within the scope of a transaction driven by Spring Batch. What you are trying to do would work if you set the commit interval to 1, but this would impact the performance of your step.
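For illustration, the commit interval is the chunk size passed to the step builder. This is a minimal sketch assuming a Spring Boot 2 / Spring Batch 4 setup; CelebVO and the processor come from the question, while BatchConfig, the step name, and the reader/writer beans are placeholders:
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchConfig {

    @Bean
    public Step endorseStep(StepBuilderFactory stepBuilderFactory,
                            ItemReader<CelebVO> reader,
                            ItemProcessor<CelebVO, CelebVO> processor,
                            ItemWriter<CelebVO> writer) {
        return stepBuilderFactory.get("endorseStep")
                // commit interval of 1: one transaction per record, so the
                // second record sees the first one committed (at a throughput cost)
                .<CelebVO, CelebVO>chunk(1)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }
}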
I had to modify the Entity class. I replaced
@ManyToOne(cascade = CascadeType.ALL)
with
@ManyToOne(cascade = {CascadeType.MERGE, CascadeType.DETACH})
and it worked.
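For context, here is roughly where that annotation sits in the entity. This is only a sketch; the Celeb association, join column, and date fields are assumptions, and only the cascade setting comes from the fix above:
import java.time.LocalDate;
import javax.persistence.*;

@Entity
public class CelebEndorsement {

    @Id
    @GeneratedValue
    private Long id;

    // hypothetical association; MERGE + DETACH limits which operations
    // cascade to the associated entity, unlike the original CascadeType.ALL
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.DETACH})
    @JoinColumn(name = "CELEB_ID")
    private Celeb celeb;

    private LocalDate startDate;
    private LocalDate endDate;
}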
I have a transactional method where objects are inserted. The debugger shows that upon eventsDAO.save(..) the actual insert doesn't take place; there is only a sequence fetch. The first time I see insert into events_t .. in the debugger is when there's a reference to the just-inserted Event.
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class, readOnly = false)
public void insertEvent(..) {
    EventsT eventsT = new EventsT();
    // Fill it out...

    EventsT savedEventsT = eventsDAO.save(eventsT); // No actual insert happens here

    // .. Some other HQL fetches or statements ...

    // The actual save (insert) only happens after some actual reference to this EventsT (below).
    // This is also HQL.
    SomeField someField = eventsDAO.findSomeAttrForEventId(savedEventsT.getId());
}
But I also see that this only holds true if all the statements are HQL (non-native).
As soon as I put a native SQL SELECT somewhere before any actual reference to this table, even if it does not touch the table in any way, it forces an immediate flush, and I see the statement insert into events_t ... on the console at that exact point.
If my native SQL SELECT does not touch the EventsT table in any way, why does the flush happen at that point?
According to the Hibernate documentation:
6.1. AUTO flush
By default, Hibernate uses the AUTO flush mode which triggers a flush in the following circumstances:
prior to committing a Transaction
prior to executing a JPQL/HQL query that overlaps with the queued entity actions
before executing any native SQL query that has no registered synchronization
So this is expected behaviour. See also this section of the documentation, which shows how you can register a synchronization.
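Here is a minimal sketch of registering such a synchronization, assuming a Hibernate-backed EntityManager; SomeOtherEntity and the table name are placeholders for whatever the native query actually reads. Once the query declares which entities it is synchronized with, Hibernate only flushes when the queued actions overlap those tables:
import org.hibernate.query.NativeQuery;

List<?> rows = entityManager
        .createNativeQuery("select * from some_other_table")
        .unwrap(NativeQuery.class)
        // declare what this query touches, so an unrelated pending insert
        // (e.g. the queued events_t insert) does not force a flush
        .addSynchronizedEntityClass(SomeOtherEntity.class)
        .getResultList();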
I want to execute a SKIP LOCKED query on Oracle using Spring Data JPA, so I tried the following:
@Lock(LockModeType.PESSIMISTIC_WRITE)
@Query(value = "SELECT * FROM User WHERE ID=?1 FOR UPDATE SKIP LOCKED", nativeQuery = true)
User findOne(UUID id);
I tried the above and found that the generated query contains FOR UPDATE but not SKIP LOCKED (below is the generated query from the logs):
select ent0_.column1 as name, ent0_.CREATED_DATE as CREATED_2_33_0_ from TABLE_NAME alias_name where ent0_.column1=? for update
If I remove @Lock from the query method, the generated query does not even have FOR UPDATE.
Please suggest how I can generate a query with FOR UPDATE SKIP LOCKED, as required.
You can assign -2 to the lock timeout value so that 'skip locked' will be used whenever possible. From the Hibernate documentation:
PESSIMISTIC_WRITE with a javax.persistence.lock.timeout setting of -2
UPGRADE_SKIPLOCKED
The lock acquisition request skips the already locked rows. It uses a
SELECT … FOR UPDATE SKIP LOCKED in Oracle and PostgreSQL 9.5, or
SELECT … with (rowlock, updlock, readpast) in SQL Server.
public interface MyRepository extends CrudRepository<MyEntity, Long> {

    /**
     * The lock acquisition request skips the already locked rows.
     * It uses a SELECT … FOR UPDATE SKIP LOCKED in Oracle and PostgreSQL 9.5,
     * or SELECT … with (rowlock, updlock, readpast) in SQL Server.
     */
    String UPGRADE_SKIPLOCKED = "-2";

    @Lock(value = LockModeType.PESSIMISTIC_WRITE) // adds 'FOR UPDATE' to the query
    @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = UPGRADE_SKIPLOCKED)})
    MyEntity findFirstByStatus(String status);
}
With this in place, the generated query will be select ... for update skip locked.
https://docs.jboss.org/hibernate/orm/5.0/userguide/html_single/chapters/locking/Locking.html
Add:
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "-2")})
I'm trying to write a Groovy/Grails 3 function that looks up a database object, locks it, and then saves it (releasing the lock automatically).
If the function is called multiple times, it should wait until the lock is released, and then run the update. How can I accomplish this?
def updateUser(String name) {
    User u = User.get(1)
    // if locked, wait until released somehow?
    u.lock()
    u.name = name
    u.save()
}
updateUser('bob')
updateUser('fred') // sees lock from previous call, waits until released, then updates
u.save(flush:true)
Flushing the Hibernate session should complete the transaction and release the lock on the database level.
Generally speaking, pessimistic locking only works in a transactional context, so make sure to put the updateUser method in a service annotated with @Transactional.
Calling get() and then lock() results in two SQL statements being executed (one to fetch the object, another to lock it). Using User.lock(1), a single select ... for update query is issued instead:
@Transactional
class UserService {

    def updateUser(String name) {
        User u = User.lock(1) // blocks until the lock is free
        u.name = name
        u.save()
    }
}
Considering a Spring Boot / Neo4j environment with Spring Data Neo4j 4, I want to perform a delete and get an error message when it fails.
My problem is that since Repository.delete() returns void, I have no idea whether the delete modified anything or not.
First question: is there any way to get the number of rows affected by the last query? For example, in PL/SQL I could use SQL%ROWCOUNT.
So anyway, I tried the following code:
public void deleteSomething(Long somethingId) {
    somethingRepository.delete(getExistingSomething(somethingId, 1).getId());
}

private Something getExistingSomething(Long somethingId, int depth) {
    return Optional.ofNullable(somethingRepository.findOne(somethingId, depth))
            .orElseThrow(() -> new SomethingNotFoundException(somethingId));
}
In the code above I query the database to check that the value exists before I delete it.
Second question: do you recommend a different approach?
So now, just to add some complexity, I have a clustered database where db1 can only create, update, and delete, while db2 and db3 can only read (this is ensured by the cluster sockets). db2 and db3 receive the data from db1 through the replication process.
From what I've seen so far, replication can take up to 90s, which means that for up to 90s the databases can be in different states.
Looking again at the code above:
public void deleteSomething(Long somethingId) {
    somethingRepository.delete(getExistingSomething(somethingId, 1).getId());
}
in debug terms that means:
getExistingSomething(somethingId, 1).getId() // will hit db2
somethingRepository.delete(...) // will hit db1
and so, if replication has not yet inserted the value into db2, this code will throw the exception.
So the third question is: without changing those sockets, is there any way for me to delete and return the correct response?
This is not currently supported in Spring Data Neo4j; if you wish, please open a feature request.
In the meantime, perhaps the easiest workaround is to drop down to the OGM level of abstraction:
Create a class that is injected with org.neo4j.ogm.session.Session
Use the query method on Session and inspect the statistics of the returned Result
Example (in Kotlin, which is what I had on hand):
fun deleteProfilesByColor(color: String) {
    val query = """
        MATCH (n:Profile {color: {color}})
        DETACH DELETE n;
        """
    val params = mutableMapOf(
        "color" to color
    )
    val result = session.query(query, params)
    val statistics = result.queryStatistics() // use these!
}
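Since the rest of this thread is Java, here is a rough Java equivalent of the Kotlin example above. It is a sketch assuming an injected org.neo4j.ogm.session.Session; ProfileRepository is a made-up class name, and the queryStatistics() counters answer the original question about affected rows:
import java.util.Collections;
import org.neo4j.ogm.model.Result;
import org.neo4j.ogm.session.Session;

public class ProfileRepository {

    private final Session session;

    public ProfileRepository(Session session) {
        this.session = session;
    }

    public int deleteProfilesByColor(String color) {
        Result result = session.query(
                "MATCH (n:Profile {color: {color}}) DETACH DELETE n",
                Collections.singletonMap("color", color));
        // queryStatistics() reports what the query actually changed,
        // e.g. how many nodes were deleted
        return result.queryStatistics().getNodesDeleted();
    }
}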