Spring Integration JDBC lock failure

I don't understand the behavior of distributed locks obtained from a JdbcLockRegistry.
@Bean
public LockRepository lockRepository(DataSource datasource) {
    return new DefaultLockRepository(datasource);
}

@Bean
public LockRegistry lockRegistry(LockRepository repository) {
    return new JdbcLockRegistry(repository);
}
My project runs on PostgreSQL and the Spring Boot version is 2.2.2.
And this is the demonstration use case:
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close
        try {
            Thread.sleep(30 * 1000L);
        } finally {
            lock.unlock(); // open
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
NB: that use case works when playing with Hazelcast distributed locks.
The observed behavior is that a first lock is duly registered in the database through a call to the API on a first instance.
Then, within the 30 seconds, a second one is requested on a different instance (other port), and instead of failing it updates the existing INT_LOCK table row (the CLIENT_ID changes). So the first endpoint returns after 30 seconds (no unlock failure), and the second endpoint returns after its own 30-second period. There is no mutual exclusion.
These are the logs for a single acquisition:
Trying to acquire lock...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CREATED_DATE<?]
Executing prepared SQL update
Executing prepared SQL statement [UPDATE INT_LOCK SET CREATED_DATE=? WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
Executing prepared SQL update
Executing prepared SQL statement [INSERT INTO INT_LOCK (REGION, LOCK_KEY, CLIENT_ID, CREATED_DATE) VALUES (?, ?, ?, ?)]
Processing...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
It seems strange that the acquisition process begins with a DELETE, though...
I've tried setting a constant client id on the DefaultLockRepository, without improvement.
Does anyone have a clue about how to fix this? Thanks for any help.

All right. It turns out that the repository's TTL is 10 s by default, exactly the same as my tryLock timeout in this specific use case. So the lock obviously dies (the DELETE removes it as expired) before the timeout period ends.
Here is a fix then:
@Bean
public LockRepository lockRepository(DataSource datasource) {
    DefaultLockRepository repository = new DefaultLockRepository(datasource);
    repository.setTimeToLive(60 * 1000);
    return repository;
}

Try lock.renew to extend the lock period; lock.lock() doesn't update the lock until it expires.
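A minimal sketch of what that renewal could look like, assuming Spring Integration 5.4+, where JdbcLockRegistry implements RenewableLockRegistry and exposes renewLock(Object); earlier versions (such as the one shipped with Boot 2.2.2) don't have it, and the registry variable must be typed as JdbcLockRegistry. The two helper methods are hypothetical placeholders for the long task:

// Sketch under the assumption of Spring Integration 5.4+.
// renewLock resets the lock's CREATED_DATE so the TTL starts over,
// and is meant to be called by the thread currently holding the lock.
Lock lock = registry.obtain("the-lock");
if (lock.tryLock(10, TimeUnit.SECONDS)) {
    try {
        doFirstHalfOfWork();            // hypothetical long task, part 1
        registry.renewLock("the-lock"); // refresh the TTL mid-task
        doSecondHalfOfWork();           // hypothetical long task, part 2
    } finally {
        lock.unlock();
    }
}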

Trying to keep the lock alive, I tried to take advantage of DefaultLockRepository#acquire, which is called by Lock#lock and attempts an UPDATE before inserting a new lock (and after cleaning up expired locks, as noted before):
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    log.warn("Trying to acquire lock...");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close lock
        try {
            for (int i = 0; i < 6; i++) { // very...
                log.warn("Processing...");
                Thread.sleep(5 * 1000L); // ... long task
                lock.lock(); // DEBUG holding (lock update)
            }
        } finally {
            if (!repository.isAcquired("the-lock")) {
                throw new IllegalStateException("lock lost");
            } else {
                lock.unlock(); // open lock
            }
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
But this didn't work as expected (NB: the TTL is at its default of 10 s in this test);
I always get the "lock lost" IllegalStateException at the end, despite the fact that I can see the lock date changing in PostgreSQL's console.

Related

spring mongodb mysql same transaction rollback

I want to save one document and one row in a single transaction, but I still want to be able to roll back the transaction.
@Transactional(transactionManager = "chainedTransactionManager")
public void createAlienAndSpaceShip(String alienname, String spaceshipname) {
    spaceShipRepository.save(new SpaceShip(null, spaceshipname, 100.0d));
    if (true) {
        throw new RuntimeException("Something happened");
    }
    alienRepository.save(new Alien(null, alienname, 1.0d, 100.0d));
}
I tried to do this using ChainedTransactionManager, but it is deprecated.
I followed this tutorial: https://www.youtube.com/watch?v=qOfdE-cFzto
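For reference, the chainedTransactionManager qualifier in that annotation presumably points at a bean like the following sketch; the two transaction manager parameter types are assumptions. ChainedTransactionManager lives in spring-data-commons and is deprecated because it is only best-effort, not true two-phase commit:

// Sketch only: transactions are started in the given order and
// committed in reverse order; a crash between the two commits can
// still leave the stores inconsistent.
@Bean
public PlatformTransactionManager chainedTransactionManager(
        MongoTransactionManager mongoTransactionManager,
        JpaTransactionManager mysqlTransactionManager) {
    return new ChainedTransactionManager(mysqlTransactionManager, mongoTransactionManager);
}

Since MongoDB cannot participate in XA transactions, a full JTA setup is not a drop-in replacement here; accepting best-effort ordering or moving to an outbox-style design are the common alternatives.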

How to prevent data loss from Redis when the server is stopped forcefully, which results in RedisCommandInterruptedException

@Autowired
private StringRedisTemplate stringRedisTemplate;

public List<Object> getDataFromRedis(String redisKey) {
    try {
        long numberOfEntriesToRead = 60000;
        return stringRedisTemplate.executePipelined(
            (RedisConnection connection) -> {
                StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
                for (int index = 0; index < numberOfEntriesToRead; index++) {
                    stringRedisConn.lPop(redisKey);
                }
                return null;
            });
    } catch (RedisCommandInterruptedException e) {
        LOGGER.error("Interrupted EXCEPTION :::", e);
        return Collections.emptyList(); // the method must return a value on this path
    }
}
I have a method which reads Redis content for a given key. The problem is that when my application server is stopped while this method is fetching data from Redis, I get a RedisCommandInterruptedException, which results in the loss of some data from Redis. How can I overcome this problem? Any suggestions are appreciated.
Pipelines are not atomic operations, therefore there is no guarantee that all or none of the commands are executed when an exception happens.
You can use Lua scripts or the MULTI command to run the operations in a single transaction.
You can read more about using MULTI with Spring Data Redis in this SO thread and this site.
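A minimal sketch of the MULTI/EXEC route with the same StringRedisTemplate; commands queued between multi() and exec() are applied atomically, so an interruption before exec() discards all of them:

List<Object> results = stringRedisTemplate.execute(new SessionCallback<List<Object>>() {
    @Override
    public <K, V> List<Object> execute(RedisOperations<K, V> operations) {
        // the cast is the usual pattern for using a typed template inside a SessionCallback
        @SuppressWarnings("unchecked")
        RedisOperations<String, String> ops = (RedisOperations<String, String>) operations;
        ops.multi();
        for (int i = 0; i < 60000; i++) {
            ops.opsForList().leftPop(redisKey); // queued, not executed yet
        }
        return ops.exec(); // one entry per queued command
    }
});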

Alter session hangs or causes ORA-01013

I have the following code block and when the ALTER SESSION statement is executed, it either hangs or it throws an ORA-01013 depending on whether we're connecting to Oracle 12r2 or 19.3 and what version of the OJDBC8 driver is being used:
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query is irrelevant
        try (Statement s = connection.createStatement()) {
            // The following statement fails with the ORA-01013 error
            s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
        }
    }
}
If I rework this code block to the following, the problem disappears.
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (Statement s = connection.createStatement()) {
        s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
    }
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query is irrelevant
    }
    // or put the alter session here
}
From what I can determine, with Oracle OJDBC8 12.2.0.1 neither the hang nor the ORA-01013 exception occurs; however, when I migrate to 19.x.0.0, the problem appears.
Is this a bug in the JDBC driver or is there actually a problem with how the code is written that the 12.2.0.1 driver is more lenient with than the later versions?

Benchmarking spring data vs JDBI in select from postgres Database

I wanted to compare the performance of Spring Data vs JDBI.
I used the following versions:
Spring Boot 2.2.4.RELEASE
vs
JDBI 3.13.0
The test is fairly simple: select * from the admin table and convert the rows to a list of Admin objects.
Here are the relevant details.
With Spring Boot:
public interface AdminService extends JpaRepository<Admin, Integer> {
}
And for JDBI:
public List<Admin> getAdmins() {
    String sql = "Select admin_id as adminId, username from admins";
    Handle handle = null;
    try {
        handle = Sql2oConnection.getInstance().getJdbi().open();
        return handle.createQuery(sql).mapToBean(Admin.class).list();
    } catch (Exception ex) {
        log.error("Could not select admins from admins: {}", ex.getMessage(), ex);
        return null;
    } finally {
        if (handle != null) { // guard against an NPE if open() itself failed
            handle.close();
        }
    }
}
The test class is executed using JUnit 5:
@Test
@DisplayName("How long does it take to run 1000 queries")
public void loadAdminTable() {
    System.out.println("Running load test");
    Instant start = Instant.now();
    for (int i = 0; i < 1000; i++) {
        List<Admin> admins = adminService.getAdmins(); // for Spring it's findAll()
        for (Admin admin : admins) {
            if (admin.getAdminId() == 654) {
                System.out.println("just to simulate work with the data");
            }
        }
    }
    Instant end = Instant.now();
    Duration duration = Duration.between(start, end);
    System.out.println("Total duration: " + duration.getSeconds());
}
I was quite shocked to get the following results:
Spring Data: 2 seconds
JDBI: 59 seconds
Any idea why I got these results? I was expecting JDBI to be faster.
The issue was that Spring manages the connection life cycle for us, and for a good reason. From the JDBI docs:
There is a performance penalty every time a connection is allocated
and released. In the example above, the two insertFullContact
operations take separate Connection objects from your database
connection pool.
I changed the JDBI test code to the following:
@Test
@DisplayName("How long does it take to run 1000 queries")
public void loadAdminTable() {
    System.out.println("Running load test");
    String sql = "Select admin_id as adminId, username from admins";
    Handle handle = Sql2oConnection.getInstance().getJdbi().open();
    Instant start = Instant.now();
    for (int i = 0; i < 1000; i++) {
        List<Admin> admins = handle.createQuery(sql).mapToBean(Admin.class).list();
        if (!admins.isEmpty()) {
            for (Admin admin : admins) {
                System.out.println(admin.getUsername());
            }
        }
    }
    handle.close();
    Instant end = Instant.now();
    Duration duration = Duration.between(start, end);
    System.out.println("Total duration: " + duration.getSeconds());
}
This way the connection is opened once and the query runs 1000 times.
The final result was 1 second: twice as fast as Spring Data.
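As an aside, JDBI's withHandle is the idiomatic way to scope a handle so you don't manage open/close yourself; a sketch with the same query:

// withHandle opens the handle, runs the callback, and closes the
// handle (returning the connection to the pool) when the callback ends.
List<Admin> admins = Sql2oConnection.getInstance().getJdbi().withHandle(handle ->
        handle.createQuery("Select admin_id as adminId, username from admins")
              .mapToBean(Admin.class)
              .list());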
On the one hand, you seem to be making some basic benchmarking mistakes:
You are not warming up the JVM.
You are not using the results in any way.
Therefore what you are seeing might just be an effect of different VM optimisations.
Look into JMH in order to improve your benchmarks.
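A minimal JMH skeleton for this comparison might look like the following sketch (annotations from org.openjdk.jmh.annotations; the adminService field and its initialisation in a @Setup method are assumptions):

// JMH drives the warmup and measurement iterations, and the Blackhole
// keeps the JIT from optimising the unused result away.
@State(Scope.Benchmark)
public class AdminQueryBenchmark {

    private AdminService adminService; // assumption: wired up in a @Setup method

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 3)
    @Measurement(iterations = 5)
    public void selectAdmins(Blackhole blackhole) {
        blackhole.consume(adminService.findAll());
    }
}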
Benchmarks with an external resource are extra hard, because you have so many more parameters to control.
One big question, for example, is whether the connection to the database is realistically slow: in most production systems the database will be on a different machine, at least virtually, and quite possibly on different hardware.
Is that true in your test as well?
Assuming your results are real, the next step is to investigate where the extra time gets spent.
I would expect most of the time to be spent executing the SQL statements and obtaining the results over the network.
Therefore you should inspect what SQL statements actually get executed (in Spring Boot, spring.jpa.show-sql=true is one quick way to see them).
This might point you to one possible answer: that JPA is doing lots of lazy loading and hasn't even loaded most of what you really need.

Long running Spring Service is locking DB table

I have a Spring service that goes through multiple items in a list and, for each one, makes an extra WS call to external services. The service is called by a job on a fixed time interval.
As a first step, the service saves the job's status (STARTED) in a JOB_CONTROL table; then it iterates through the list, and at the end it saves the status FINISHED.
There are 2 issues:
1. The JOB_CONTROL table doesn't get updated gradually: only the "FINISHED" value is saved, never "STARTED".
2. If the flush method is used to force the commit, the table gets locked, e.g. no other SELECT can be made on it until the service finishes.
@Service
public class PromotionSchedulerService implements Runnable {

    @Autowired
    GeofencingAreaDAO storeDao;

    @Autowired
    promotionsWSClient promotionsWSClient;

    @Autowired
    private JobControlDAO jobControlDAO;

    public void run() {
        JobControl job = jobControlDAO.findByClassName(this.getClass().getSimpleName());
        job.setState(JobControlStateTypes.RUNNING.getStateType());
        job.setLastRunDate(new Date());
        // THE LINE BELOW DOES NOT GET COMMITTED TO THE DB
        jobControlDAO.save(job);
        List<GeofencingArea> stores = storeDao.findAllStores();
        for (GeofencingArea store : stores) {
            /** Call WS **/
            GetActivePromotionsResponse rsp = null;
            try {
                rsp = promotionsWSClient.getpromotions();
            } catch (Exception e) {
                e.printStackTrace();
                job.setState(JobControlStateTypes.FAILED.getStateType());
                job.setLastRunStatus("There was an error calling promagic promotions");
                jobControlDAO.save(job);
                return;
            }
            List<PromotionBean> promos = rsp.getReturn();
            for (PromotionBean promo : promos) {
                BackendPromotionPOJO backendPromotionsPOJO = new BackendPromotionPOJO();
                backendPromotionsPOJO.setDescription(promo.getDescription());
            }
        }
        // ONLY THIS JOB STATE GOES TO THE DB. IT ACTUALLY SEEMS TO OVERWRITE THE PREVIOUSLY SET VALUE ("RUNNING")
        job.setLastRunStatus("COMPLETED");
        job.setState(JobControlStateTypes.SUCCESS.getStateType());
        jobControlDAO.save(job);
    }
}
I would like to force the commit after changing the job state, without locking the table when doing so.
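One common way to get that behaviour is to move the status update into its own short transaction with REQUIRES_NEW, in a separate bean so the call goes through the Spring proxy (self-invocation would bypass it). A sketch reusing the types from the question; the JobControlService bean and saveState method are hypothetical:

// REQUIRES_NEW suspends the caller's transaction and commits this update
// immediately, so the RUNNING state becomes visible to other sessions
// while the long-running work continues.
@Service
public class JobControlService {

    @Autowired
    private JobControlDAO jobControlDAO;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveState(JobControl job, JobControlStateTypes state) {
        job.setState(state.getStateType());
        jobControlDAO.save(job);
    }
}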
