DB2 SQL Error: SQLCODE=-270 exception thrown by jpa paging item reader - spring

I have created a Spring Batch service with an item reader, item processor, and item writer. I have extended AbstractPagingItemReader and created my own implementation by the name of JpaPagingItemReader. Now when I run the batch service, the reader reads a fixed set of records from the DB (default page size: 10), processes them, and writes them. However, on the second read it throws the exception below:
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63, DRIVER=3.61.65
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-270;42997;63, DRIVER=3.61.65
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-270;42997;63, DRIVER=3.61.65
javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63, DRIVER=3.61.65
at com.ibm.db2.jcc.am.ed.a(ed.java:676)
at com.ibm.db2.jcc.am.ed.a(ed.java:60)
at com.ibm.db2.jcc.am.ed.a(ed.java:127)
at com.ibm.db2.jcc.am.gn.c(gn.java:2554)
at com.ibm.db2.jcc.am.gn.d(gn.java:2542)
at com.ibm.db2.jcc.am.gn.a(gn.java:2034)
at com.ibm.db2.jcc.am.hn.a(hn.java:6500)
at com.ibm.db2.jcc.t4.cb.g(cb.java:140)
at com.ibm.db2.jcc.t4.cb.a(cb.java:40)
at com.ibm.db2.jcc.t4.q.a(q.java:32)
at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
I get that this error is probably because there is a CLOB column in the table I am reading records from, but the weird thing is that it reads the first batch of 10 records fine, processes them, and writes them; it is only on the second read that it throws the above exception. Any suggestions? Below is a snippet from the JpaPagingItemReader that I wrote: the overridden doReadPage method from AbstractPagingItemReader.java.
protected void doReadPage()
{
    setPageSize(10);
    // Flush whatever we already have in the entity manager
    getEntityManager().flush();
    // Clear the entity manager: to read and detach
    getEntityManager().clear();
    Query query = createQuery().setFirstResult(getPage() * getPageSize()).setMaxResults(getPageSize());
    if (parameterValues != null)
    {
        for (Map.Entry<String, Object> me : parameterValues.entrySet())
        {
            query.setParameter(me.getKey(), me.getValue());
        }
    }
    if (results == null)
    {
        results = new CopyOnWriteArrayList<T>();
    }
    else
    {
        results.clear();
    }
    results.addAll(query.getResultList());
    // Detach all objects that became part of the persistence context
    getEntityManager().clear();
}
Any help is highly appreciated, as I am already behind deadline due to this issue. Please, if you think anything is missing, do let me know and I will update the question. Thanks.

I figured it out. The issue was indeed that we are not allowed to have CLOB data as a projection in a scrollable JPA cursor. The first 10 records are read just fine, but as soon as it starts to read the second batch it has to move the cursor from row 0 to row 11, and that is when I was getting the SQL exception.
The simple fix was to remove the CLOB column from the select statement and fetch it in a separate query wherever its value is required. That fixed my problem. Thanks.
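For reference, the window the reader asks for is pure arithmetic: setFirstResult(getPage() * getPageSize()) plus setMaxResults(getPageSize()). A tiny standalone sketch (class and method names are made up for illustration) shows why only the second page forces the driver to scroll the cursor past row 10, which is where the CLOB projection breaks:

```java
// Illustrative only: the offset arithmetic behind the paging reader.
// Page 0 starts at row 0, page 1 at row 10 -- scrolling to row 10 is
// what triggered SQLCODE=-270 with a CLOB in the projection.
public class PagingOffsets {
    public static int firstResult(int page, int pageSize) {
        return page * pageSize; // same formula as getPage() * getPageSize()
    }

    public static void main(String[] args) {
        int pageSize = 10;
        for (int page = 0; page < 3; page++) {
            System.out.println("page " + page
                + ": firstResult=" + firstResult(page, pageSize)
                + ", maxResults=" + pageSize);
        }
    }
}
```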

Related

Alter session hangs or causes ORA-01013

I have the following code block and when the ALTER SESSION statement is executed, it either hangs or it throws an ORA-01013 depending on whether we're connecting to Oracle 12r2 or 19.3 and what version of the OJDBC8 driver is being used:
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query itself is irrelevant
        try (Statement s = connection.createStatement()) {
            // The following statement fails with the ORA-01013 error
            s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
        }
    }
}
If I rework this code block to the following, the problem disappears.
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (Statement s = connection.createStatement()) {
        s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
    }
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query itself is irrelevant
    }
    // or put the ALTER SESSION here
}
From what I can determine, with Oracle OJDBC8 12.2.0.1 neither the hang nor the ORA-01013 exception occurs; however, when I migrate to 19.x.0.0, the problem appears.
Is this a bug in the JDBC driver, or is there actually a problem with how the code is written that the 12.2.0.1 driver is more lenient about than the later versions?

How to get information about error from SqlExceptionHelper for REST spring application

I have some tables in my database and I want to get information about bad requests to the database. If I try to save an entity with wrong foreign keys, I want to get detailed information about those keys.
For example:
2020-03-25 18:37:37.595 ERROR 9788 --- [nio-8090-exec-3] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: insert or update on table "student" violates foreign key constraint "student_fkey_to_specialty"
Detail: Key (specialtykey)=(2) is not present in table "specialty".
I tried to solve it with this code, but I get different information:
could not execute statement; SQL [n/a]; constraint [student_fkey_to_specialty]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement
my code:
@PostMapping
public void saveStudent(@RequestBody StudentDTO studentDTO) {
    if (studentDTO != null) {
        try {
            studentService.save(studentDTO);
        } catch (Exception | Error e) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST, e.getLocalizedMessage(), e);
        }
    }
}
Use a loop to iterate to the original exception; here is an example function that does that:
private String getCauseMessage(Throwable t) {
    Throwable cause = t;
    while (cause.getCause() != null) {
        cause = cause.getCause();
    }
    return cause.getLocalizedMessage();
}
As you never know how many exceptions might be chained together, using a loop is the safest way. If you just call getCause() directly you risk a NullPointerException, or you risk not getting the message of the original exception.
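To see the loop in action, here is a self-contained sketch (the class name CauseDemo and the sample messages are invented for illustration):

```java
// Demo: walk a two-level exception chain down to the root cause.
public class CauseDemo {
    public static String getCauseMessage(Throwable t) {
        Throwable cause = t;
        while (cause.getCause() != null) {
            cause = cause.getCause(); // step down the chain
        }
        return cause.getLocalizedMessage();
    }

    public static void main(String[] args) {
        Exception root = new IllegalStateException(
            "Key (specialtykey)=(2) is not present in table \"specialty\".");
        Exception wrapper = new RuntimeException("could not execute statement", root);
        // Prints the root cause's message, not the wrapper's
        System.out.println(getCauseMessage(wrapper));
    }
}
```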
The student table has the constraint "student_fkey_to_specialty", which is being violated. Check which field this constraint is on and provide the correct value for that field.
I solved this problem with the code below, which gets the information about the exception coming from the database:
e.getCause().getCause().getLocalizedMessage()

Spring Integration JDBC lock failure

I don't understand the behavior for distributed locks obtained from a JdbcLockRegistry.
@Bean
public LockRepository lockRepository(DataSource datasource) {
    return new DefaultLockRepository(datasource);
}

@Bean
public LockRegistry lockRegistry(LockRepository repository) {
    return new JdbcLockRegistry(repository);
}
My project is running on PostgreSQL and the Spring Boot version is 2.2.2.
And this is the demonstration use case:
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close
        try {
            Thread.sleep(30 * 1000L);
        } finally {
            lock.unlock(); // open
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
NB: that use case works when playing with Hazelcast distributed locks.
The observed behavior is that a first lock is duly registered in the database through a call to the API on a first instance.
Then, within 30 seconds, a second one is requested on a different instance (other port), and it updates the existing int_lock table row (client_id changes) instead of failing. So the first endpoint returns after 30 seconds (no unlock failure), and the second endpoint returns after its own period of 30 seconds. There is no mutual exclusion.
These are the logs for a single acquisition:
Trying to acquire lock...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CREATED_DATE<?]
Executing prepared SQL update
Executing prepared SQL statement [UPDATE INT_LOCK SET CREATED_DATE=? WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
Executing prepared SQL update
Executing prepared SQL statement [INSERT INTO INT_LOCK (REGION, LOCK_KEY, CLIENT_ID, CREATED_DATE) VALUES (?, ?, ?, ?)]
Processing...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
It seems strange that the acquisition process begins with a DELETE, though...
I've tried to set a constant client id for the DefaultLockRepository, without improvement.
Does anyone have a clue how to fix this? Thanks for any help.
All right. It turns out that the repository's TTL is 10 s by default, just like my timeout in this specific use case. So the lock obviously dies (DELETE) before the timeout period.
Here is a fix then:
@Bean
public LockRepository lockRepository(DataSource datasource) {
    DefaultLockRepository repository = new DefaultLockRepository(datasource);
    repository.setTimeToLive(60 * 1000);
    return repository;
}
Try lock.renew to extend the lock period; lock.lock() doesn't update the lock until it expires.
Trying to maintain a lock, I tried to take advantage of DefaultLockRepository#acquire, called by Lock#lock, which attempts an update before inserting a new lock (and after cleaning up expired locks, as noted before):
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    log.warn("Trying to acquire lock...");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close lock
        try {
            for (int i = 0; i < 6; i++) { // very...
                log.warn("Processing...");
                Thread.sleep(5 * 1000L); // ... long task
                lock.lock(); // DEBUG holding (lock update)
            }
        } finally {
            if (!repository.isAcquired("the-lock")) {
                throw new IllegalStateException("lock lost");
            } else {
                lock.unlock(); // open lock
            }
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
But this didn't work as expected (NB: the TTL is at its default of 10 s in this test): I always get a "lock lost" IllegalStateException in the end, despite the fact that I can see the lock date changing in PostgreSQL's console.

Native Query With Spring's JdbcTemplate

I'm using Spring's JdbcDaoSupport for making database calls. I want to execute a native (SQL) query to retrieve data. Is there any API available in JdbcTemplate for native queries? I used queryForObject, but it throws an exception if there is no data, whereas I was expecting it to return null if it couldn't find the data.
There are many options available for executing native SQL with JdbcTemplate. The linked documentation contains plenty of methods that take native SQL, usually along with some sort of callback handler, which will accomplish exactly what you are looking for. A simple one that comes to mind is query(String sql, RowCallbackHandler callback).
jdbcTemplate.query("select * from mytable where something > 3", new RowCallbackHandler() {
    public void processRow(ResultSet rs) throws SQLException {
        // this will be called for each row. DO NOT call next() on the ResultSet from in here...
    }
});
Spring JdbcTemplate's queryForObject method expects your SQL to return exactly one row. If no rows are returned, or if more than one row is returned, it throws an org.springframework.dao.IncorrectResultSizeDataAccessException. You will have to wrap the call to queryForObject in a try/catch block that handles IncorrectResultSizeDataAccessException and returns null if the exception is thrown,
e.g.
try {
    return jdbcTemplate.queryForObject(...);
} catch (IncorrectResultSizeDataAccessException e) {
    return null;
}

How to catch PSQLException value too long for type character varying

I'm currently testing the data access layer that I've created in Spring (the PersistenceContext is injected). I have a stateless EJB that calls a service, for example UserService, that inserts/deletes/updates data in the database.
The service works fine; I was able to insert into the database. But when I was testing and input a string value longer than the defined column length, I got:
javax.transaction.RollbackException: Transaction marked for rollback.
WARNING: DTX5014: Caught exception in beforeCompletion() callback:
javax.persistence.PersistenceException: org.hibernate.exception.DataException: ERROR: value too long for type character varying(20)
Caused by: org.hibernate.exception.DataException: ERROR: value too long for type character varying(20)
Caused by: org.hibernate.exception.DataException: ERROR: value too long for type character varying(20)
My partial code:
@PersistenceContext
protected EntityManager entityManager;

try {
    entityManager.persist(e);
} catch (Exception ex) {
    // log message here
}
I've tried everything to catch these errors, but I was not able to. Any suggestions on how to resolve the issue?
Thanks,
czetsuya
I've used the following code to find out which error is thrown under your circumstances:
BEGIN;
CREATE TABLE t(v varchar(5));
DO $body$
BEGIN
INSERT INTO t VALUES ('1234567');
EXCEPTION WHEN OTHERS THEN
RAISE NOTICE '!!! %, %', SQLSTATE, SQLERRM;
END;$body$;
ROLLBACK;
You'll see that the error code is 22001 and the error is named string_data_right_truncation, per PostgreSQL's list of error codes.
I don't know how to catch this error in Hibernate, but at the PL/pgSQL level you can do it using:
EXCEPTION WHEN SQLSTATE '22001' THEN
-- your code follows
END;
I hope this will help you.
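If you would rather detect this condition on the Java side, one option (a sketch, not taken from the question) is to walk the cause chain of the caught exception looking for a java.sql.SQLException whose SQLSTATE is 22001. The class name TruncationCheck is made up, and the SQLException is constructed by hand here to stand in for the real driver exception:

```java
import java.sql.SQLException;

// Sketch: find a "string data right truncation" (SQLSTATE 22001)
// anywhere in an exception's cause chain.
public class TruncationCheck {
    public static boolean isTruncation(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof SQLException
                    && "22001".equals(((SQLException) c).getSQLState())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        SQLException sql = new SQLException(
            "value too long for type character varying(20)", "22001");
        RuntimeException wrapped = new RuntimeException("could not execute statement", sql);
        System.out.println(isTruncation(wrapped)); // prints "true"
    }
}
```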
