Exception Handling in Spring Boot + Schedulers + Spring Data JPA + Oracle

I'm trying to figure out which exceptions can be caught and handled for some other purpose, such as inserting a record into a DB table.
Notably, there is no controller or API being exposed. Instead, schedulers read data from an Oracle DB table via JPA queries (i.e., not native queries), using pessimistic write locking with the lock timeout specified as a query hint. The processed data is then sent to an external system as a byte array.
I encountered a few exceptions such as JpaSystemException and LockTimeoutException, but somehow I couldn't catch them.
Any ideas?

You could use AOP and define pointcuts and advice for all methods that access the DB and might throw an exception. Spring offers advice types such as @AfterThrowing and @Around, which can hold your error-handling logic.
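For example, a minimal sketch of an @AfterThrowing advice, assuming a hypothetical com.example.batch.repository package for the scheduler's data-access beans (all names here are illustrative, not from the question):

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.dao.DataAccessException;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class RepositoryErrorLoggingAspect {

    // Fires after any method in the (hypothetical) repository package throws
    // a DataAccessException; the exception still propagates afterwards.
    @AfterThrowing(
        pointcut = "execution(* com.example.batch.repository..*(..))",
        throwing = "ex")
    public void recordDbFailure(JoinPoint jp, DataAccessException ex) {
        // e.g. insert a row into an error table here (ideally via a separate
        // transaction), then let the exception propagate to the scheduler.
        System.err.printf("DB call %s failed: %s%n",
                jp.getSignature().toShortString(), ex.getMessage());
    }
}
```

With spring-boot-starter-aop on the classpath, the advice runs after the exception is thrown and before it propagates further, so the failing call still aborts while the failure gets recorded elsewhere.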

Related

Nested transactions in Spring and Hibernate

I have a Spring Boot application with persistence using Hibernate/JPA.
I am using transactions to manage my database persistence, and I am using the @Transactional annotation to define the methods that should execute transactionally.
I have three main levels of transaction granularity when persisting:
1. Batches of entities to be persisted
2. Single entities to be persisted
3. Single database operations that persist an entity
Therefore, you can imagine that I have three levels of nested transactions when thinking about the whole persistence flow.
The interaction between levels 2 and 3 works transparently, as I intend: without specifying any propagation behaviour for the transaction, the default is REQUIRED, so the entire entity (level 2) is rolled back because level 3 supports the transaction defined in level 2.
However, the problem is that I need an interaction between levels 1 and 2 that is slightly different. I need an entity to be rolled back individually if an error occurs, but I wouldn't like the entire batch to be rolled back. That being said, I need to specify a propagation behaviour in the level 2 annotation, @Transactional(propagation = X), that meets these requirements.
I've tried REQUIRES_NEW but that doesn't work because it commits some of the entities from level 2 even if the whole batch had to be rolled back, which can also happen.
The behaviour that seems to fit the description better is NESTED, but that is not accepted when using Spring and Hibernate JPA, see here for more information.
This last link offers alternatives for the NESTED type, but I would like to know if NESTED would've really solved my problem, or if there was another behaviour that suited the job better.
I guess NESTED would roughly do what you want, but I would question whether this is really necessary. I don't know what you are trying to do or what the error condition is, but maybe you can get rid of the error condition by using some kind of WHERE clause or an UPSERT statement: Hibernate Transactions and Concurrency Using attachDirty (saveOrUpdate)
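For illustration only, a sketch of what the level-1/level-2 split with NESTED propagation could look like. The service classes are hypothetical, and, as noted above, JpaTransactionManager rejects NESTED, so this shape only works with a savepoint-capable (plain JDBC) transaction manager:

```java
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BatchPersistenceService {

    private final EntityPersistenceService entityService;

    public BatchPersistenceService(EntityPersistenceService entityService) {
        this.entityService = entityService;
    }

    // Level 1: one outer transaction for the whole batch.
    @Transactional
    public void persistBatch(List<Long> entityIds) {
        for (Long id : entityIds) {
            try {
                entityService.persistEntity(id); // level 2, separate bean so the proxy applies
            } catch (RuntimeException ex) {
                // With NESTED, only this entity's savepoint is rolled back and
                // the rest of the batch can still commit; if the outer
                // transaction rolls back, everything is rolled back.
            }
        }
    }
}

@Service
class EntityPersistenceService {

    // Level 2: a nested transaction backed by a JDBC savepoint.
    @Transactional(propagation = Propagation.NESTED)
    public void persistEntity(Long entityId) {
        // level-3 operations (default REQUIRED) join this transaction
    }
}
```

REQUIRES_NEW at level 2 would look the same, but, as the question observes, entities already committed in their own transactions would survive a later rollback of the whole batch.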

Is a Spring Data JPA @Query dynamic or named?

A JPA @NamedQuery is translated to SQL only once, when the application is deployed, and the generated SQL is cached.
EntityManager.createQuery translates the query every time our method is called.
What does Spring Data JPA do with a query defined in the @Query annotation? Is it translated to SQL once during deployment (like a named query) or translated every time (like a dynamic query)?
Spring Data JPA calls EntityManager.createQuery(…) for every invocation of a query method annotated with @Query. The reason for that is quite simple: the Query instances returned by the EntityManager are not thread-safe and actually stateful, as they contain bound parameter information.
That said, most JPA persistence providers perform some kind of text-based query caching anyway, so they basically build the actual SQL query once for a given JPQL query and reuse it on subsequent invocations.
As an interesting side note, when we started building the support for @Query in 2008 we looked into the possibility of registering the manually declared queries as named queries with JPA. Unfortunately there was no way to do that, and up until today there is still no way to manually register a named query programmatically via JPA.
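For reference, a minimal sketch of such a @Query-declared method; the Account entity below is hypothetical and only included to keep the example self-contained:

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Hypothetical mapped entity, just so the repository compiles.
@Entity
class Account {
    @Id
    Long id;
    String status;
}

interface AccountRepository extends JpaRepository<Account, Long> {

    // Spring Data passes this JPQL to EntityManager.createQuery(...) on every
    // call; the persistence provider may still cache the JPQL-to-SQL plan.
    @Query("select a from Account a where a.status = :status")
    List<Account> findByStatus(@Param("status") String status);
}
```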

Can JPA & JDBC coexist in the DAO layer?

Is there any problem with using both JDBC (JdbcTemplate) and JPA (EntityManager) in the data access layer?
I am planning to use JDBC to access stored procedures/routines.
These stored procedures will return multiple cursors by joining multiple tables (which are not registered as JPA entities).
These JDBC actions are pure read-only.
I am not combining JPA & JDBC actions in the same transaction, as described here.
It is okay. Use the right tool for the job. For example, if I want to run report queries whose data span many different entities, or want to use powerful database features (e.g. window functions, common table expressions) that are not supported by JPA or are difficult to achieve with it, I would prefer to use JDBC to issue native SQL directly and get the job done.
The CQRS architecture also uses this idea: it has two separate models, one for updating information (the command side) and one for reading information (the query side). For example, JPA can be used for the command side while native JDBC is used for the query side.
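A rough sketch of such a mixed DAO, assuming a hypothetical monthly_report stored procedure and parameter name (everything here is made up for illustration):

```java
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Repository;

@Repository
public class ReportingDao {

    @PersistenceContext
    private EntityManager em;                // JPA side

    private final SimpleJdbcCall reportCall; // JDBC side

    public ReportingDao(JdbcTemplate jdbcTemplate) {
        // Hypothetical stored procedure returning cursors over unmapped tables.
        this.reportCall = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("monthly_report");
    }

    // JPA: ordinary lookup for mapped entities.
    public <T> T find(Class<T> entityType, Object id) {
        return em.find(entityType, id);
    }

    // JDBC: read-only procedure call; result sets come back in the map.
    public Map<String, Object> monthlyReport(int month) {
        return reportCall.execute(Map.of("p_month", month));
    }
}
```

Both paths run against the same DataSource; keeping the JDBC calls read-only, as the question does, avoids mixing the two styles inside one write transaction.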

Dealing with SQLException with Spring, Hibernate & Postgres

Hi, I'm working on a project that uses HibernateDaoSupport for my DAOs, built with Spring, Spring-WS,
Hibernate and Postgres, which will be used in a national application (meaning a lot of users).
Currently, every exception from Hibernate is automatically transformed into a specific Spring DataAccessException.
I have a table with a keyword column in the database and a unique constraint on the keywords:
no duplicate keywords are allowed.
I have found two ways to deal with that in the insert DAO:
1. Check for the duplicate manually (with a select) prior to doing the insert. This means that the Spring transaction will have a SERIALIZABLE isolation level. The obvious drawback is that we now have two queries for a simple insert. Advantage: independent of the database.
2. Let the insert go through, catch the SQLException and convert it to a user-friendly message and error code for the final consumer of our web services.
Solution 2:
Spring provides a way to translate specific exceptions into customized exceptions;
see http://www.oracle.com/technology/pub/articles/marx_spring.html
In my case I would get a ConstraintViolationException.
Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate-word constraint in the database to a DuplicateWordException.
But I can have many unique constraints on the same table, so I have to inspect the message of the SQLException to find the name of the constraint declared in the
create table, "uq_duplicate-constraint" for example. Now I have a strong dependency on the database.
Thanks in advance for your answers, and excuse me for my poor English (it is not my mother tongue).
In my experience it is always better to have your data validation in your application rather than depend on the database. This keeps your database's role limited to that of a data store, meaning you won't have business logic spread across two layers.
What happens when you have a database transaction that would break two constraints at the same time? In that case your exception-mapping approach will only catch the first failure, whereas validation code can show all data validation issues on an attempt to save.
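For completeness, here is a rough sketch of the custom translator idea described in the question; the constraint name "uq_duplicate_keyword" and the DuplicateWordException are hypothetical, and matching on the vendor message is exactly the database dependency discussed above:

```java
import java.sql.SQLException;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator;

public class KeywordConstraintTranslator extends SQLErrorCodeSQLExceptionTranslator {

    @Override
    protected DataAccessException customTranslate(String task, String sql, SQLException sqlEx) {
        String msg = sqlEx.getMessage();
        // Hypothetical constraint name from the create table statement.
        if (msg != null && msg.contains("uq_duplicate_keyword")) {
            return new DuplicateWordException("Keyword already exists", sqlEx);
        }
        return null; // fall back to Spring's default translation
    }
}

// Hypothetical exception surfaced to the web service layer.
class DuplicateWordException extends DataAccessException {
    DuplicateWordException(String msg, Throwable cause) {
        super(msg, cause);
    }
}
```

Such a translator could be plugged into a JdbcTemplate via setExceptionTranslator, or the same message check could be applied wherever the ConstraintViolationException surfaces; either way the constraint name ties the code to the schema, which is the trade-off weighed in this answer.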

Spring Transaction with Sybase

I am using Spring in my web application, with Sybase as the underlying database.
I have 3 complex stored procedures to be executed.
The procs have create table and drop table commands to hold temporary result sets.
The tables are created in the user DB space rather than in the tempdb space. Hence, I need to ensure that the entire service operation from the service bean, whose DAO objects call the stored procs, is serialized. Does simply making the service bean method a Spring transaction ensure a solution to potential concurrency-related problems in my case?
Another thing that I noticed is that annotating my service method as @Transactional made the Sybase database throw an error: "Create table command cannot be executed within a transaction". Does this mean that Spring makes the entire database operation a transaction?
I am really not clear about this, and any explanation would be welcome.
Say I have a stored proc named myproc; the Sybase statement would be "exec myproc". This is executed by the DAO object from the service method, which is annotated as @Transactional. Does Spring then turn the database operation into "begin tran / exec myproc / end tran"? My observation seems to suggest that. Please explain.
And also explain whether just annotating with @Transactional will solve my concurrency issues. I actually don't want 2 instances of my stored proc running on the database at the same time.
You've asked a number of questions at once, but I'll do the best I can to answer them.
Marking a service as @Transactional associates it with the current JTA (Java Transaction API) transaction (or creates one if required)
Depending on how your datasources are configured, JDBC connections will typically also be associated (enlisted) into the transaction
If a connection is associated with a JTA transaction then anything that is executed on it will take place within a database transaction.
In Sybase ASE, you cannot create (or drop) a table inside a transaction.
So, marking your service as @Transactional will prevent you from executing a proc that contains create table statements.
However, that won't solve the problem you're facing anyway.
Marking something @Transactional simply means that it executes inside a JTA transaction, which means that it either commits or rolls back; it doesn't guard against concurrent access.
You have a few options
If you know that your application will only ever run on a single VM, then you can mark the code as synchronized. This will make sure the VM only ever has one thread inside that code at a time.
You can implement concurrency controls inside the proc, (e.g. use lock table), but that will require a transaction, which will prevent you from creating a table inside the procedure.
Or you can redesign your application to not have to jump through all these hoops.
There are probably easier ways of achieving the outcome you're looking for - creating and dropping tables inside a proc, and then trying to prevent concurrent access to that proc is not a typical way of solving a problem.
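A minimal sketch of that first option (single-VM serialization); the service and procedure names are hypothetical, and note the deliberate absence of @Transactional so that the proc's create/drop table statements are allowed:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    private final JdbcTemplate jdbcTemplate;

    public ReportService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // synchronized means only one thread in this VM can run the proc at a
    // time; no Spring transaction is started, so the create table statements
    // inside the proc do not trigger the Sybase error quoted above.
    public synchronized void runMyProc() {
        jdbcTemplate.execute("exec myproc");
    }
}
```

Keep in mind that synchronized only serializes callers inside this one JVM; a second VM or any other database client could still run the proc concurrently, which is why a database-side lock or a redesign may still be needed.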
