Spring Transaction with Sybase

I am using Spring in my web application, with Sybase as the underlying database.
I have 3 complex stored procedures to execute.
The procs have create table and drop table commands to hold temporary result sets.
The tables are created in the user db space rather than in the tempdb space. Hence I need to ensure that the entire service operation, from the service bean method through the DAO objects calling the stored procs, is serialized. Does simply making the service bean method a Spring transaction ensure a solution to potential concurrency-related problems in my case?
Another thing I noticed is that annotating my service method as @Transactional made Sybase throw an error: "Create table command cannot be executed within a transaction". Does this mean that Spring makes the entire database operation a transaction?
I am really not clear about this, and any explanation would be welcome.
Say I have a stored proc named myproc; the Sybase statement would be exec myproc. This is executed by the DAO object from the service method annotated as @Transactional. Does Spring then turn the database operation into:
begin tran
exec myproc
end tran
My observation seems to suggest so. Please explain.
Also, please explain whether the @Transactional annotation alone will solve my concurrency issues. I actually don't want 2 instances of my stored proc running on the database at a time.

You've asked a number of questions at once, but I'll do the best I can to answer them.
Marking a service as @Transactional associates it with the current JTA (Java Transaction API) transaction (or creates one if required).
Depending on how your datasources are configured, JDBC connections will typically also be associated (enlisted) into the transaction.
If a connection is associated with a JTA transaction, then anything that is executed on it will take place within a database transaction.
In Sybase ASE, you cannot create (or drop) a table inside a transaction.
So, marking your service as @Transactional will prevent you from executing a proc that contains create table statements.
However, that won't solve the problem you're facing anyway.
Marking something @Transactional simply means that it executes inside a JTA transaction, and that means that it either commits or rolls back, but it doesn't guard against concurrent access.
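Conceptually, what the transaction proxy does around your DAO call is roughly the following hand-written JDBC equivalent (a simplified sketch, not Spring's actual code):

import java.sql.CallableStatement;
import java.sql.Connection;

public class TransactionalEquivalent {
    // Roughly what a @Transactional proxy amounts to at the JDBC level:
    // begin the transaction, run your code, then commit or roll back.
    public void execMyProcInTransaction(Connection conn) throws Exception {
        conn.setAutoCommit(false);                 // effectively "begin tran"
        try (CallableStatement cs = conn.prepareCall("{call myproc}")) {
            cs.execute();                          // your DAO work
            conn.commit();                         // "end tran" on success
        } catch (Exception e) {
            conn.rollback();                       // undo everything on failure
            throw e;
        }
    }
}

So your observation is essentially correct: the proc runs between a begin and a commit, which is why Sybase rejects the create table statements.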
You have a few options:
If you know that your application will only ever run on a single VM, then you can mark the code as synchronized (see the sketch after these options). This will make sure the VM only ever has 1 thread inside that code at a time.
You can implement concurrency controls inside the proc (e.g. using lock table), but that will require a transaction, which will prevent you from creating a table inside the procedure.
Or you can redesign your application to not have to jump through all these hoops.
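For the first option, a minimal sketch (the service and DAO names are hypothetical); note that the method is deliberately not @Transactional, so the proc's create table statements are allowed:

import org.springframework.stereotype.Service;

interface ReportDao {
    void execMyProc(); // hypothetical DAO method that runs "exec myproc"
}

@Service
public class ReportService {

    private final ReportDao reportDao;

    public ReportService(ReportDao reportDao) {
        this.reportDao = reportDao;
    }

    // synchronized serializes callers within this one JVM only; it offers
    // no protection across multiple VMs or other database clients.
    public synchronized void runComplexProc() {
        reportDao.execMyProc();
    }
}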
There are probably easier ways of achieving the outcome you're looking for; creating and dropping tables inside a proc, and then trying to prevent concurrent access to that proc, is not a typical way of solving a problem.

Related

How to create a thread-safe insert or update with Hibernate (dealing with optimistic locking)

My problem.
I have a simple table, token. It has only a few attributes: id, token, username, version and an expire_date.
I have a REST service that will create or update a token. When a user requests a token, I check whether the user (by username) already has an entry; if yes, I simply update the expire_date and return, and if there is no entry I create a new one. The problem is that if I run a test with a few concurrent users (using a JMeter script) that call the REST service, Hibernate very quickly throws a StaleObjectStateException, because what happens is: thread A selects the row for the user, changes the expire_date and bumps the version; meanwhile thread B does the same but actually manages to commit before thread A does. Now when thread A commits, Hibernate detects the version change, throws the exception and rolls back. All works as documented.
But what I would like to happen is that thread B waits for thread A to finish before doing its thing.
What is the best way to solve this? Should I use the Java concurrency package and implement locks? Or is it a better option to implement a custom JPA isolation level?
Thanks
If you are using a JEE server, the EJB container will do it for you using @Singleton.
I think the best way is to use a JPA lock to acquire a lock on the resources you are currently updating (a row lock). Don't push yourself to implement row locking with Java concurrency on your own. For example, it is much easier to lock the row containing user "john.doe" at the DBMS level than to find a way to lock that specific row using concurrency utilities in your code.
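In Spring Data JPA terms, that could look like the following sketch (the Token entity and repository are assumed to match the table described above):

import java.util.Optional;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface TokenRepository extends JpaRepository<Token, Long> {

    // PESSIMISTIC_WRITE translates to SELECT ... FOR UPDATE, so a second
    // transaction asking for the same user's row blocks until the first
    // one commits, instead of failing later with a version conflict.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<Token> findByUsername(String username);
}

The repository method must be called from within a transaction (e.g. an @Transactional service method) so the lock is held until commit; thread B then waits on the database lock exactly as you described.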

Spring Transaction propagation: can't get the @OneToMany related entities when using the same transaction for creation and consultation operations

I have the following problem: I am working on a Spring Boot application which offers REST services and uses a relational (SQL) database through spring-data-jpa.
I have two REST services:
- an entity-creation service, which creates the child entity, creates the parent entity and associates them in the same transaction. When this service ends, the data is committed into the database.
- an entity-consultation service, which gets back the parent entity with its children
These two services are annotated with the @Transactional annotation. In the production case it works well: I can create a parent entity with its children in one transaction (which is committed/ended), and get it in another transaction later.
The problem is when I want to create integration tests. My idea was to annotate each test with the @Transactional annotation and do a rollback after each test. This way I keep my database clean between tests, and I don't have to generate the schema again or clean all the records in the database.
The integration test consists of creating a parent and its children and then reading them, everything in one transaction (as the test is annotated with @Transactional). When reading the entity previously created in the same transaction, I can get the parent entity, but the children are not fetched (null value). I am not sure I understand the transaction mechanism very well: I was thinking that, with @Transactional on the test method, the services (annotated with @Transactional) invoked by the test would detect and use the same transaction opened by the test method (the propagation is configured to "REQUIRED"). Hence, as the transaction uses the same EntityManager, it should be able to return the relation between the parent entity and its children created previously in the same transaction, even though the data has not been committed to the database. The strange thing is that it retrieves the parent entity (which has not yet been committed to the database), but not its children. Is my understanding of the transaction concept correct? If not, could someone explain what I am missing?
Also, if someone has done something similar, could they please explain how they did it?
My code is quite complex. I first want to know whether I understand correctly how transactions are managed and whether someone has already done something similar. If it is really required, I can send more information about my implementation (how the transaction manager and the entity manager are initialized, the JPA entities, the services, etc.).
Binding the EntityManager in my test and calling its flush method between the creation and the reading makes the reading operation work: I get the parent entity with its children. The data is written into the database during the creation in order to be read later during the read operation, but the transaction is not committed, which is what I need, as my test must work on an empty database. My misunderstanding was not so much about the transaction mechanism, but more about the entity manager: it does not keep the created entities and their relations as a cache...
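For reference, a sketch of such a test (the service and entity names are illustrative): flush() pushes the pending SQL to the database inside the still-open transaction, and clear() detaches the cached entities so the read re-queries instead of returning the half-populated parent.

import static org.junit.Assert.assertFalse;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringRunner.class)
@SpringBootTest
@Transactional // the test transaction is rolled back by default, keeping the database clean
public class ParentChildIT {

    @PersistenceContext
    private EntityManager entityManager;

    @Autowired
    private CreationService creationService;         // hypothetical creation service
    @Autowired
    private ConsultationService consultationService; // hypothetical consultation service

    @Test
    public void createThenReadInSameTransaction() {
        Long parentId = creationService.createParentWithChildren();

        entityManager.flush(); // send the pending INSERTs to the DB, still uncommitted
        entityManager.clear(); // detach cached entities so the read re-fetches them

        Parent parent = consultationService.getParent(parentId);
        assertFalse(parent.getChildren().isEmpty());
    }
}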
This post helped me:
Issue with @Transactional annotations in Spring JPA
As a final word, I am thinking about calling an SQL script before each test to empty my database.

Parameterized trigger - concurrency concerns

My question is quite similar to this one, but I need more guidance. I have also read the Oracle context doc.
The current (test) trigger is:
CREATE OR REPLACE TRIGGER CHASSIS_DT_EVNT_AIUR_TRG_OLD
AFTER DELETE OR INSERT OR UPDATE OF ETA ON CHASSITRANSPORTS
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
BEGIN
  INSERT INTO TS_CHASSIS_DATE_EVENTS
    (CHASSISNUMBER, DATETYPE, TRANSPORTLEGSORTORDER, OLDDATE,
     CREATEDBY, CREATEDDATE, UPDATEDBY, UPDATEDDATE)
  VALUES
    (:old.chassino, 'ETA', :old.sortorder, :old.eta,
     'xyz', sysdate, 'xyz', sysdate);
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END CHASSIS_DT_EVNT_AIUR_TRG_OLD;  -- the END label must match the trigger name
Now the 'CREATEDBY' and 'UPDATEDBY' values will be the web application users who logged in and made the changes that caused the trigger to fire; hence these values need to be passed from the application.
The web application:
- is deployed in WebSphere Application Server, where the datasources are configured
- as expected, uses db connection pooling
My question is: which approach mentioned in the thread and the doc should I take to avoid 'concurrency' issues, i.e. updates made by app users in multiple sessions should not interfere with each other, at the application level or at the db level?
I don't think any of the approaches in that link would apply to you, primarily due to the multi-user environment and connection pooling.
Connection pooling by nature allows different logical connections to share the same database session. Setting a context (either sys_context or any other application context) is valid throughout the lifetime of the session, so two different connections can overwrite and read each other's values (concurrency issues).
I'd actually argue against doing an insert like this inside a trigger at all. It seems to me the insert you are doing writes all updates that happened on the main table to a log table. If that is the case, why not insert into the log table at the time of making any updates to the main table?
So the procedure that does UPDATE CHASSITRANSPORTS ... would also have another INSERT statement inside it that writes to the other table. If there is no procedure and it is a direct update statement from the application, then write a procedure for this.
You could say that the same update happens in multiple places, and I'd suggest that in that scenario you create an API for the base table CHASSITRANSPORTS that handles updates and, as a black box, also writes to the log table. Any place where you need to update that table column would use that API (see the sketch below).
(I'm ignoring the fact that you are suppressing all errors in the trigger with WHEN OTHERS THEN NULL, in the hope that this is just a small example.)
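From the application side, such an API boils down to one JDBC call that carries the web user along; a minimal sketch, assuming a hypothetical UPDATE_CHASSIS_ETA procedure that performs both the UPDATE and the log INSERT:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Date;
import java.sql.SQLException;

public class ChassisEtaApi {
    // Single entry point for ETA updates: the application passes the web user
    // explicitly, so no trigger and no session context are needed. The
    // procedure name and its parameter list are assumptions.
    public static void updateEta(Connection conn, String chassisNo, Date newEta, String webUser)
            throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call UPDATE_CHASSIS_ETA(?, ?, ?)}")) {
            cs.setString(1, chassisNo);
            cs.setDate(2, newEta);
            cs.setString(3, webUser); // ends up in CREATEDBY/UPDATEDBY of the log row
            cs.execute();
        }
    }
}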

Achieving ACID properties using JDBC?

First of all, I would like to confirm: is it the responsibility of the developer to uphold these properties, or the responsibility of transaction APIs like JDBC?
Below is my understanding of how we achieve the ACID properties in JDBC:
Atomicity: as there is one transaction associated with a connection, we either commit or roll back, so there are no partial updates. Hence achieved.
Consistency: when some data integrity constraint is violated (say, some check constraint), an SQLException will be thrown. The programmer then achieves a consistent database by rolling back the transaction?
One question on the above: say we perform transaction 1, and an SQLException is thrown during transaction 2 as explained above. Now we catch the exception and commit; will the first transaction be committed?
Isolation: provided by the JDBC API. But this leads to the problem of concurrent updates, so it has to be dealt with manually, right?
Durability: provided by the JDBC API.
Please let me know if the above understanding is right.
ACID principles of transactional integrity are implemented by the database not by the API (like JDBC) or by the application. Your application's responsibility is to choose a database and a database configuration that supports whatever transactional integrity you need and to correctly identify the transactional boundaries in your application.
When an exception is thrown, your application has to determine whether it is appropriate to rollback the entire transaction or to proceed with additional processing. It may be appropriate if your application is processing orders from a vendor, for example, to process the 99 orders that succeed and log the 1 order that failed somewhere for users to investigate. On the other hand, you may reject all 100 orders because 1 failed. It depends what your application is doing.
In general, you only have one transaction open at a time (or, more accurately, one transaction per connection). So if you are working in transaction 2, transaction 1 by definition has already completed: it was either committed or rolled back previously. Exceptions thrown in transaction 2 have no impact on transaction 1.
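To make those boundaries concrete, here is a bare-bones JDBC sketch (the accounts table and its columns are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    // Atomicity at the JDBC level: disable auto-commit, then either commit
    // both updates together or roll both back. The guarantee itself is
    // enforced by the database; JDBC only exposes the boundary calls.
    public static void transfer(Connection conn, int fromId, int toId, long cents)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setLong(1, cents);
            debit.setInt(2, fromId);
            debit.executeUpdate();
            credit.setLong(1, cents);
            credit.setInt(2, toId);
            credit.executeUpdate();
            conn.commit();   // both rows change together
        } catch (SQLException e) {
            conn.rollback(); // neither change survives
            throw e;
        }
    }
}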
Depending on the transaction isolation level your application requests (and the transaction isolation levels your database supports) as well as the mechanics of your application, lost updates are something that you may need to be concerned about. If you set your transaction isolation level to read committed, it is possible that you would read a value as 'A' in transaction 1, wait for a user to do something, update the value to 'B', and commit without realizing that transaction 2 updated the value to 'C' between the time you read the data and the time you wrote the data. This may be a problem that you need to deal with or it may be something where it is fine for the last person to update a row to "win".
Your database, on the other hand, should take care of the automatic locking that prevents two transactions from simultaneously updating the same row of the same table. It may do this by locking more than is strictly necessary but it will serialize the updates somehow.
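If lost updates are a concern and your database supports it, one common remedy is to take the row lock at read time with SELECT ... FOR UPDATE; a hypothetical sketch (table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LostUpdateSketch {
    // FOR UPDATE holds the row lock until commit, so a concurrent
    // transaction doing the same read blocks instead of silently
    // overwriting our change later.
    public static void renameItem(Connection conn, int id, String newName)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement select = conn.prepareStatement(
                 "SELECT name FROM items WHERE id = ? FOR UPDATE");
             PreparedStatement update = conn.prepareStatement(
                 "UPDATE items SET name = ? WHERE id = ?")) {
            select.setInt(1, id);
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    conn.rollback(); // no such row; release the transaction
                    return;
                }
            }
            update.setString(1, newName);
            update.setInt(2, id);
            update.executeUpdate();
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}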

How can we implement a nested transaction in JDBC?

First let me explain what I mean by a nested transaction.
Example: say in the main class we call method1 and create the customer using JDBC [transaction 1]. It is not committed yet. Now we call method2 in the main class and create the account for the just-created customer [transaction 2]. Now commit it. As per your explanation, both of these will be treated as part of one transaction (as there can be at most one transaction per connection). Up to here, the above scenario is like PROPAGATION_REQUIRED in Spring. Is that correct?
Now suppose we want to commit transaction 2 only, not transaction 1. That scenario would be like PROPAGATION_NESTED in Spring. Is that correct?
How can we implement a nested transaction in JDBC, if both of my assumptions stated above are correct?
That is not exactly how nested transactions work. If you roll back transaction 1, transaction 2 also rolls back. With nested transactions you can roll back transaction 2 and still commit transaction 1.
In JDBC you can achieve this effect using savepoints. You can call Connection.setSavepoint() before creating the account, and if you want to roll back that action but still commit the creation of the customer, you can roll back to that savepoint.
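A minimal sketch of that pattern (table and column names are made up):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public class SavepointSketch {
    // "Nested" transaction via a savepoint: the account insert can be undone
    // on its own while the customer insert still commits.
    public static void createCustomerAndAccount(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO customer (name) VALUES ('Alice')");
            Savepoint beforeAccount = conn.setSavepoint("beforeAccount");
            try {
                st.executeUpdate("INSERT INTO account (customer_name) VALUES ('Alice')");
            } catch (SQLException e) {
                conn.rollback(beforeAccount); // undo only the account insert
            }
            conn.commit(); // commits the customer (and the account, if it succeeded)
        }
    }
}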
If you want to be able to commit/roll back two transactions completely independently, like Spring's REQUIRES_NEW, in JDBC you should use two connections and manage their transactions independently.
